Well, at the risk of being repetitive too:
"or another box".
So yes, I am considering it, but that is probably the option that requires the
least guidance in this thread.
OK, replying to myself after having played around a bit with both Unix under
Billware and vice versa:
1)
- Somehow the HDs turned into a "GPT Protective Partition", which XP cannot
read. Googling a bit confirms this (although a utility for destroying and
reformatting them is available).
@ kebabber:
> There was a guy doing that: Windows as host and
> OpenSolaris as guest with raw access to his disks. He
> lost his 12 TB of data. It turned out that VirtualBox
> doesn't honor the write flush flag (or something
> similar).
That story is in the link I provided, and as has been pointed out
Pardon in advance my n00b ignorance. (Yes, I have googled a *lot* before
asking.)
I am considering VirtualBoxing away one physical machine at home, running
WinXP as host (yes, as atrocious as it may seem; explanation below [1]) and an
OpenSolaris guest as file server, with OpenSolaris (why?[2
n 1km)?
Thanks, Nils
BTW, this was on snv_111b - sorry I forgot to mention.
When trying to execute processes in them, I got exec failures like this
one:
# zlogin ZONE
[Connected to zone 'ZONE' pts/2]
zlogin: exec failure: I/O error
Is this issue already known to anyone?
Thank you, Nils
A ZFS file system reports 1007GB being used (df -h / zfs list). When doing a
'du -sh' on the filesystem root, I only get approx. 300GB, which is the correct
size.
The file system became full during Christmas and I increased the quota from 1
to 1.5 to 2TB and then decreased to 1.5TB. No reservatio
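(A quick check that may help narrow this down, assuming the zfs version is
recent enough to have the space-breakdown properties; the dataset name below
is only a placeholder:)
zfs list -o space -r tank/export
zfs get usedbydataset,usedbysnapshots,usedbychildren,usedbyrefreservation tank/export
This shows whether the space df sees is held by snapshots, child datasets or a
refreservation rather than by the files du can count.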
analogy in memory
management (proper swap space reservation vs. the oom-killer).
But I realize that talking about an "implicit expectation" to give some
motivation for reservations probably led to some misunderstanding.
Sorry, Nils
but I don't see how those
should work with the currently implemented concept.
Do we need something like a de-dup reservation, which is subtracted from the
pool free space?
Thank you for reading,
Nils
..@zfs-auto-snap:frequent-2009-11-03-22:04:46 rpool/test
cannot create 'rpool/test': out of space
I don't see how a similar guarantee could be given with de-dup.
Nils
a belittling term to express disagreement. I hope
it doesn't derail the discussion.
It certainly won't on my side. Thank you for the clarification.
Thanks, Nils
obviously something that should
be addressed.
Wouldn't the idea I mentioned address this issue as well?
Thanks, Nils
e that it's been put back into snv_125.
At any rate, I think that the main selling point for the 7xxx series is really
the add-on software, and I believe that making the core technology openly
available will strengthen the product rather than weaken it.
Thank
nfig, or are the two always
the same".
Anyway, this subtle detail might not make a difference in most scenarios.
Nils
t the
nvsram settings which can be read with
service -d -c read -q nvsram region=0xf2 host=0x00
do not necessarily reflect the current configuration and that the only way to
make sure the controller is running with that configuration is to reset it.
Nils
Hi Bob and all,
> I should update this paper since the performance is now radically
> different and the StorageTek 2540 CAM configurables have changed.
That would be great; I think you'd do the community (and Sun, probably) a big
favor.
Is this information still current for F/W 07.35.44.10?
ow if the controller has been booted since the nvsram
potentially got modified?
Thank you, Nils
I should add that I have quite a lot of datasets:
and maybe I should also add that I'm still running an old zpool version in order
to keep the ability to boot snv_98:
haggis:~$ zpool upgrade
This system is currently running ZFS pool version 14.
The following pools are out of date, and can be upgraded.
s:
root@haggis:~# zfs list -r -t filesystem | wc -l
49
root@haggis:~# zfs list -r -t volume | wc -l
14
root@haggis:~# zfs list -r -t snapshot | wc -l
6018
Nils
> ::threadlist!grep zil_clean| wc -l
1037
Thanks, Nils
P.S.: Please don't spend too much time on this; for me, this question is really
academic, but I'd be grateful for any good answers.
f a better way than to hard reboot the machine.
This happened on snv_111 running as an xvm dom0.
My question is whether anyone is interested in analyzing this.
I'll provide some detail here, and in case anyone is interested, I could
provide crash and core dumps.
Thank you, Nils
--
Here
rpool ONLINE 0 0 0
c1t0d0s0 ONLINE 0 0 0
errors: No known data errors
Nils
ead documenting a similar issue, but it did not contain a real
solution:
http://www.opensolaris.org/jive/thread.jspa?threadID=77876
Does anyone have a clue how I can correct ZFS's idea of my disk's name?
Thank you,
Nils
be successive in order to get optimal
load distribution with the hashes I've seen in the field.
That's a topic I'll probably revisit..
Nils
s always a good argument.
Nils
s make any sense?
Thank you,
Nils
interpretation match the original intention, or are there any other or
better reasons? Is there a reason why inheritable ACEs are always split, even
if the particular chmod call would not require splitting them?
Thank you, Nils
Well done, Nathan; thank you for taking on the additional effort to write it
all up.
> If you run the id on the box, does it show the users
> secondary groups?
id never shows secondary groups. Use 'id -a'.
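For example (output illustrative only):
$ id
uid=100(nils) gid=10(staff)
$ id -a
uid=100(nils) gid=10(staff) groups=10(staff),14(sysadmin)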
Nils
ail-archive.com/[EMAIL PROTECTED]/msg97466.html
Cheers, Nils
entially rather than ZFS on disk
structures), so I doubt that an improvement can be expected anytime soon.
That said, the core developers on zfs-discuss will know more than I do and might
be willing to give more background on this.
Nils
nments unless you really need
to, but this looks like an NFS client caching issue.
Is this an NFSv3 or NFSv4 mount? What happens if you use one or the other?
Please provide nfsstat -m output.
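(If it helps, forcing one version or the other for a quick comparison could
look like this; server name and paths are placeholders:)
mount -F nfs -o vers=3 server:/export/home /mnt
mount -F nfs -o vers=4 server:/export/home /mnt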
Nils
that he would not want to hardcode any
assumptions as to which snapshots one would want to delete first, and I think
he is quite right in doing so.
Nils
many snapshots as necessary to give the user a
chance to move a snapshot from a "finer class" to a "coarser class", to prevent
snapshotted data from the time frame in question from expiring prematurely in
case a "coarser" snapshot was not taken.
Nils
> Before re-inventing the wheel, does anyone have any nice shell script to do
> this kind of thing (to be executed from cron)?
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_11
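If you'd rather roll your own anyway, a minimal sketch of the idea (untested;
filesystem name, snapshot prefix and retention count are placeholders):

#!/bin/sh
# Take a snapshot and keep only the newest $KEEP "cron-" snapshots.
FS=tank/home
KEEP=24
STAMP=`date '+%Y-%m-%d-%H:%M'`

zfs snapshot "$FS@cron-$STAMP" || exit 1

# List this filesystem's cron snapshots oldest-first, skip the newest $KEEP,
# and destroy the rest.
zfs list -H -t snapshot -o name -s creation -r "$FS" |
  grep "^$FS@cron-" |
  nawk -v keep="$KEEP" '{ l[NR] = $0 }
    END { for (i = 1; i <= NR - keep; i++) print l[i] }' |
  while read snap; do
    zfs destroy "$snap"
  done

Tim's SMF-integrated service above is far more complete, though.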
/O on the same disks or I/Os being split up
unnecessarily. All of this depends heavily on the configuration.
Nils
See
http://www.opensolaris.org/jive/thread.jspa?messageID=271983
The case mentioned there is one where concatenation in vdevs would be useful.
ance for cache warmup times etc).
In short, I consider this optimization approach worth exploring, but I
don't think I'll be able to do this myself.
I would appreciate any pointers to background information regarding this
question.
Thank you,
Nils
ritten in that format. There are other issues related
to the fact that the GRUB ZFS implementation is lightweight; for instance, it
cannot read a boot archive which is created with compression=gzip enabled on
the
filesystem (or at least it could not a couple of months ago, have not
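(In case someone else hits this, a quick check/workaround sketch; the boot
environment dataset name is just an example, and IIRC GRUB could read lzjb-
but not gzip-compressed blocks at the time:)
zfs get compression rpool/ROOT/opensolaris
zfs set compression=lzjb rpool/ROOT/opensolaris
bootadm update-archive     # rewrite the boot archive with the new setting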
Hi,
> It is important to remember that ZFS is ideal for writing new files from
> scratch.
IIRC, maildir MTAs never overwrite mail files. But courier-imap does maintain
some additional index files which will be overwritten, and I guess other IMAP
servers will probably do the same.
Not knowing of a better place to put this, I have created
http://www.genunix.org/wiki/index.php/ZFS_rpool_Upgrade_and_GRUB
Please make any corrections there.
Thanks, Nils
_grub
* umount, export
umount /mnt
zpool export rpool
At least this has worked for me.
Would it be a good idea to put this into the Indiana release notes?
Nils
e full RAIDZ line only for the
degraded RAID case.
I think that this could make a big difference for write-once, read-many,
random-access applications like DSS systems, etc.
Is this feasible at all?
Nils
> I Ben's argument, and the main point IMHO is how the RAID behaves in the
^
second
(i.e., that should have read "I second Ben's argument, ...".)
th a
degraded RAID.
What about, for instance, writing 16MB chunks and reading 8K randomly? Wouldn't
RAIDZ access only the disks containing those 8K?
Nils
placing
the failed disk, which is an argument for not using disks that are too large
(see another thread on this list).
Nils
glitch:
> have you tried mounting and re-mounting all filesystems which are not
^^^
unmounting
(i.e., that should have read "unmounting and re-mounting".)
Hi David,
have you tried mounting and re-mounting all filesystems which are not
being mounted automatically? See other posts to zfs-discuss.
Nils
ZFS itself can't, but Tim Foster has written a nice script, integrated into
SMF, which can be used to automatically create and delete snapshots at various
intervals.
See http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10 for the latest
release and http://opensolaris.org/jive/thread.js
pool on your metadevices
zpool create <poolname> raidz /dev/md/dsk/d11 /dev/md/dsk/d12 /dev/md/dsk/d13
/dev/md/dsk/d14 /dev/md/dsk/d15
Again: I have never tried this, so please don't blame me if this doesn't work.
Nils
g taken Mathias' initial hint for
what it was.
By the way, I could finally scrub the pool, no problems remaining
(except for the task of making it a real mirror).
Thanks again,
Nils
Matthias,
that does not answer my question.
The question is: why can't I decide that I consciously want to destroy the
(two-way) mirror (and, yes, do away with any redundancy)?
Nils
Hi,
I thought that this question must have been answered already, but I have
not found any explanations. I'm sorry in advance if this is redundant, but:
Why exactly doesn't ZFS let me detach a device from a degraded mirror?
haggis:~# zpool status
pool: rmirror
state: DEGRADED
status: One or
solution, how do you invalidate the cache if a property is being
changed or deleted (this is trivial, but not yet implemented)?
- Does your solution handle whitespace, quotes, etc. in svcprop values properly
(I think there is an issue regarding whitespace, but I have not tested it)?
My previous reply via email did not get linked to this post, so let me resend
it:
can roles run cron jobs?),
>>> No. You need a user who can take on the role.
>> Darn, back to the drawing board.
> I don't have all the context on this but Solaris RBAC roles *can* run cron
> jobs. Roles don
You need a user who can take on the role.
Thanks again, and keep up the good work (and please think again about
the at-vantages ;-))
Nils
An example from the README does not work and fails with:
Error: Cant schedule at job: at midnight sun
Change:
--- README.zfs-auto-snapshot.txt.o Sun Jun 29 11:23:35 2008
+++ README.zfs-auto-snapshot.txt    Sun Jun 29 11:24:31 2008
@@ -171,7 +171,7 @@
'setprop zfs/at_timespec = as
And how about making this an official project?
due when)
- added validation functions for various SMF properties
Cheers,
Nils
zfs-auto-snapshot-0.10_atjobs2.tar.bz2
Description: BZip2 compressed data
and the tar file ...
zfs-auto-snapshot-0.10_atjobs.tar.bz2
Description: BZip2 compressed data
t post
files to Tim's blog (or can I?).
Tim, feel free to integrate my suggestions or not; I won't feel offended if you
don't, but at any rate I am very happy that you maintain this tool.
Thanks again, Nils
See: http://bugs.opensolaris.org/view_bug.do?bug_id=6700597
On Fri 05/12/06 at 13:46 PM, [EMAIL PROTECTED] wrote:
> On Fri, May 12, 2006 at 03:32:43PM -0500, Nicolas Williams wrote:
> > Also, I'm getting tired of replying to some e-mail only to get a "post
> > awaits moderator approval" reply.
> >
> > I understand why we do that for non-subscribers.
> >
>