Re: [zfs-discuss] Re: Re: RAIDZ2 vs. ZFS RAID-10

2007-01-04 Thread Erik Trimble
Darren Dunham wrote: Also, even if it could read the data from a subset of the disks, isn't it a feature that every read is also verifying the parity for correctness/silent corruption? It doesn't -- we only read the data, not the parity. (See line 708 of vdev_raidz.c.) The parity is checked

[zfs-discuss] ZFS direct IO

2007-01-04 Thread dudekula mastan
Hi All, As you all know, direct I/O is not supported by the ZFS file system. When will the ZFS team add direct I/O support to ZFS? What is the roadmap for ZFS direct I/O? If you have any idea on this, please let me know. Thanks & Regards Masthan

Re: [zfs-discuss] Re: zfs list and snapshots..

2007-01-04 Thread Matthew Ahrens
[EMAIL PROTECTED] wrote: The common case to me is: how much would be freed by deleting the snapshots in order of age, from oldest to newest, always starting with the oldest. That would be possible. A given snapshot's "space used by this and all prior snapshots" would be the previous snap's "used+prior"
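
To make that concrete with invented numbers (roughly in the spirit of the truncated formula above): if snap1's "used+prior" is 1G, and snap2 uniquely references another 0.5G plus 2G shared only with snap1, then snap2's "used+prior" = 1G + 0.5G + 2G = 3.5G -- the amount freed by destroying snap1 and snap2 together, even though each snapshot's individual "used" column would show far less.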

Re: [zfs-discuss] ZFS related (probably) hangs due to memory exhaustion(?) with snv53

2007-01-04 Thread Tomas Ögren
On 04 January, 2007 - Tomas Ögren sent me these 1,0K bytes: > On 03 January, 2007 - [EMAIL PROTECTED] sent me these 0,5K bytes: > > > > > >Hmmm, so there is lots of evictable cache here (mostly in the MFU > > >part of the cache)... could you make your core file available? > > >I would like to take a look at it.

[zfs-discuss] odd versus even

2007-01-04 Thread Peter Tribble
I'm being a bit of a dunderhead at the moment and neither the site search nor Google is picking up the information I seek... I'm setting up a thumper and I'm sure I recall some discussion of the optimal number of drives in raidz1 and raidz2 vdevs. I also recall that it was something like you wou
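
The rule of thumb usually cited on this list (recalled here, so verify against the original thread) is a power-of-two number of data disks plus parity: e.g. raidz1 groups of 3, 5, or 9 disks and raidz2 groups of 4, 6, or 10, so that a 128K record splits into power-of-two column sizes -- 128K across 4 data disks is an even 32K per disk.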

Re: [zfs-discuss] ZFS related (probably) hangs due to memory exhaustion(?) with snv53

2007-01-04 Thread Tomas Ögren
On 03 January, 2007 - [EMAIL PROTECTED] sent me these 0,5K bytes: > > >Hmmm, so there is lots of evictable cache here (mostly in the MFU > >part of the cache)... could you make your core file available? > >I would like to take a look at it. > > Isn't this just like: > 6493923 nfsfind on ZFS file

Re: [zfs-discuss] Re: zfs list and snapshots..

2007-01-04 Thread Wade . Stuart
Matthew, I really do appreciate this discussion, thank you for taking the time to go over this with me. Matthew Ahrens <[EMAIL PROTECTED]> wrote on 01/04/2007 01:49:00 PM: > [EMAIL PROTECTED] wrote: > > [9:40am] [/data/test]:test% zfs snapshot data/[EMAIL PROTECTED] > > [9:41am] [/data/

[zfs-discuss] Re: Re: Re: RAIDZ2 vs. ZFS RAID-10

2007-01-04 Thread Anton B. Rang
> I thought I'd read that unlike any other RAID implementation, ZFS checked > and verified parity on normal data access. Not yet, it appears. :-) (Incidentally, some hardware RAID controllers do verify parity, but generally only for RAID-3, where the extra reads are free as long as you have

Re: [zfs-discuss] Re: Re: RAIDZ2 vs. ZFS RAID-10

2007-01-04 Thread Bart Smaalders
Darren Dunham wrote: Also, even if it could read the data from a subset of the disks, isn't it a feature that every read is also verifying the parity for correctness/silent corruption? It doesn't -- we only read the data, not the parity. (See line 708 of vdev_raidz.c.) The parity is checked on

Re: [zfs-discuss] zfs recv

2007-01-04 Thread Matthew Ahrens
Robert Milkowski wrote: Hello zfs-discuss, zfs recv -v at the end reported: received 928Mb stream in 6346 seconds (150Kb/sec) I'm not sure but shouldn't it be 928MB and 150KB? Or perhaps we're counting bits? That's correct, it is in bytes and should use a capital B. Please file a bug. --
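
Quick arithmetic check that the figures really are bytes: 928 × 1024 KB = 950,272 KB, and 950,272 KB / 6346 s ≈ 149.7 KB/s, which matches the reported "150Kb/sec" only if both numbers are byte quantities, not bits.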

Re: [zfs-discuss] Re: zfs list and snapshots..

2007-01-04 Thread Matthew Ahrens
[EMAIL PROTECTED] wrote:
[9:40am] [/data/test]:test% zfs snapshot data/[EMAIL PROTECTED]
[9:41am] [/data/test]:test% zfs snapshot data/[EMAIL PROTECTED]
...
[9:42am] [/data/test/images/fullres]:test% zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
data/test  13.4G

Re: [zfs-discuss] Checksum errors...

2007-01-04 Thread eric kustarz
errors: The following persistent errors have been detected:
DATASET                      OBJECT  RANGE
z_tsmsun1_pool/tsmsrv1_pool  2620    8464760832-8464891904
Looks like I have possibly a single file that is corrupted. My question is how do I find the file. Is it as si
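
(A possible next step, assuming the object above is a plain file: on ZFS a file's object number is also its inode number, so "find <mountpoint> -inum 2620" should locate it once the dataset is mounted, and "zdb -ddddd z_tsmsun1_pool/tsmsrv1_pool 2620" will dump the object, including its path. The <mountpoint> is a placeholder for wherever the dataset is mounted.)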

Re: [zfs-discuss] Re: zfs list and snapshots..

2007-01-04 Thread Matthew Ahrens
Darren Dunham wrote: Is the problem of displaying the potential space freed by multiple destructions one of calculation (do you have to walk snapshot trees?) or one of formatting and display? Both, because you need to know, for each snapshot, how much of the data it references was first referen
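
A minimal C sketch of the oldest-to-newest accumulation being discussed, with invented per-snapshot figures (unique[] and shared_prior[] are hypothetical names, not actual ZFS fields):

    /*
     * Sketch (not ZFS code): cumulative space freed by destroying
     * snapshots oldest-to-newest.  unique[i] is space referenced only
     * by snapshot i; shared_prior[i] is space shared exclusively by
     * snapshot i and earlier snapshots (not by anything later).
     */
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint64_t unique[]       = { 100,  50, 200 };  /* made-up MB */
        uint64_t shared_prior[] = {   0, 300, 120 };
        uint64_t cumulative = 0;
        for (int i = 0; i < 3; i++) {
            cumulative += unique[i] + shared_prior[i];
            printf("destroying snaps 0..%d frees %llu MB\n",
                i, (unsigned long long)cumulative);
        }
        return 0;
    }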

Re: [zfs-discuss] Re: Re: RAIDZ2 vs. ZFS RAID-10

2007-01-04 Thread Darren Dunham
> > Also, even if it could read the data from a subset of the disks, isn't > > it a feature that every read is also verifying the parity for > > correctness/silent corruption? > > It doesn't -- we only read the data, not the parity. (See line 708 of > vdev_raidz.c.) The parity is checked only wh
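
To illustrate the read path being described, a minimal C sketch -- not the actual vdev_raidz.c logic, and issue_io(), checksum_ok(), and reconstruct() are invented stubs:

    /*
     * On a healthy read only the data columns are issued; parity is
     * read (and used for reconstruction) only if the block checksum
     * fails.
     */
    typedef struct raidz_col {
        int   rc_is_parity;   /* parity vs. data column */
        void *rc_buf;         /* column buffer          */
    } raidz_col_t;

    static void issue_io(raidz_col_t *rc) { (void)rc; }  /* stub */
    static int  checksum_ok(void)         { return 1; }  /* stub */
    static void reconstruct(void)         { }            /* stub */

    static void
    raidz_read(raidz_col_t *cols, int ncols)
    {
        int i;
        for (i = 0; i < ncols; i++)
            if (!cols[i].rc_is_parity)
                issue_io(&cols[i]);       /* data columns only */

        if (!checksum_ok()) {             /* block checksum mismatch */
            for (i = 0; i < ncols; i++)
                if (cols[i].rc_is_parity)
                    issue_io(&cols[i]);   /* now read parity too */
            reconstruct();                /* rebuild and re-verify */
        }
    }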

[zfs-discuss] Scrubbing on active zfs systems (many snaps per day)

2007-01-04 Thread Wade . Stuart
From what I have read, it looks like there is a known issue with scrubbing restarting when any of the other usages of the same code path run (resilver, snap ...). It looks like there is a plan to put in a marker so that scrubbing knows where to start again after being preempted. This is good
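
One plausible shape for such a marker, sketched in C (field names invented, not the eventual fix): remember the last block visited so a preempted scrub can resume there instead of starting over.

    #include <stdint.h>

    typedef struct scrub_bookmark {
        uint64_t sb_objset;   /* dataset being traversed    */
        uint64_t sb_object;   /* object within the dataset  */
        uint64_t sb_level;    /* indirect-block level       */
        uint64_t sb_blkid;    /* block id within the object */
    } scrub_bookmark_t;

On restart, the traversal would skip any block that sorts before the saved bookmark.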

[zfs-discuss] Re: Re: RAIDZ2 vs. ZFS RAID-10

2007-01-04 Thread Anton B. Rang
> What happens when a sub-block is missing (single disk failure)? Surely > it doesn't have to discard the entire checksum and simply trust the > remaining blocks? The checksum is over the data, not the data+parity. So when a disk fails, the data is first reconstructed, and then the block checksum is verified.
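
The reconstruction step is plain XOR for the single-parity case; a simplified C sketch (one stripe row, not the actual ZFS code):

    #include <stddef.h>

    /*
     * The lost column is the XOR of the parity column and all
     * surviving data columns; the block checksum is then verified
     * over the reassembled data.
     */
    static void
    reconstruct_col(unsigned char *missing, const unsigned char *parity,
        const unsigned char **survivors, int nsurvivors, size_t len)
    {
        for (size_t b = 0; b < len; b++) {
            unsigned char x = parity[b];
            for (int d = 0; d < nsurvivors; d++)
                x ^= survivors[d][b];   /* P ^ D1 ^ ... recovers Dk */
            missing[b] = x;
        }
    }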

Re: [zfs-discuss] Re: zfs list and snapshots..

2007-01-04 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 01/03/2007 04:21:00 PM:
> [EMAIL PROTECTED] wrote:
> > which is not the behavior I am seeing..
> Show me the output, and I can try to explain what you are seeing.
[9:36am] [~]:test% zfs create data/test
[9:36am] [~]:test% zfs set compression=on data/test
[9:37am]

Re: [zfs-discuss] Re: Re[2]: RAIDZ2 vs. ZFS RAID-10

2007-01-04 Thread Anton Rang
On Jan 4, 2007, at 10:26 AM, Roch - PAE wrote: All filesystems will incur a read-modify-write when an application is updating a portion of a block. For most Solaris file systems it is the page size, rather than the block size, that affects read-modify-write; hence 8K (SPARC) or 4K (x86).
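
Concretely (a hypothetical example, using ZFS's default 128K recordsize): rewriting 1K in the middle of an existing 128K record means reading the whole 128K record, patching the 1K, recomputing the block checksum, and writing the full record out again; a page-based file system would instead cycle only the containing 8K (SPARC) or 4K (x86) page through memory.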

Re: [zfs-discuss] Re: Re[2]: RAIDZ2 vs. ZFS RAID-10

2007-01-04 Thread Roch - PAE
Anton B. Rang writes: > >> In our recent experience RAID-5, due to the 2 reads, an XOR calc and a > >> write op per write instruction, is usually much slower than RAID-10 > >> (two write ops). Any advice is greatly appreciated. > > > > RAIDZ and RAIDZ2 do not suffer from this malady (the RAID
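
For reference, the RAID-5 small-write arithmetic being described is: new_parity = old_parity XOR old_data XOR new_data, i.e. two reads (old data, old parity) plus two writes (new data, new parity) -- four I/Os per small write. RAID-Z sidesteps this because every block is written as a full stripe, so parity is computed from data already in memory and no pre-reads are needed.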

Re: [zfs-discuss] Re: RAIDZ2 vs. ZFS RAID-10

2007-01-04 Thread Darren Dunham
> It's the block checksum that requires reading all of the disks. If > ZFS stored sub-block checksums for the RAID-Z case then short reads > could often be satisfied without reading the whole block (and all > disks). What happens when a sub-block is missing (single disk failure)? Surely it doesn't have to discard the entire checksum and simply trust the remaining blocks?

Re: [zfs-discuss] Re: RAIDZ2 vs. ZFS RAID-10

2007-01-04 Thread Casper . Dik
>So actually I mis-spoke slightly; rather than "all disks", I should >have said "all data disks." >In practice this has the same effect: No more than one read may be >processed at a time. But aren't short blocks sometimes stored on only a subset of disks? Casper

Re: [zfs-discuss] Re: RAIDZ2 vs. ZFS RAID-10

2007-01-04 Thread Anton Rang
On Jan 4, 2007, at 3:25 AM, [EMAIL PROTECTED] wrote: Is there some reason why a small read on a raidz2 is not statistically very likely to require I/O on only one device? Assuming a non-degraded pool of course. ZFS stores its checksums for RAIDZ/RAIDZ2 in such a way that all disks must be read to compute and verify the checksum.
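
A worked example of why (geometry hypothetical): in a 4+2 raidz2 layout, a 128K block is spread as four 32K data columns plus two parity columns. The block checksum covers the whole 128K of data, so even a 4K application read must fetch all four 32K data columns to verify it -- though, as corrected elsewhere in this thread, only the data columns, not the parity.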

Re: [zfs-discuss] ZFS --> Grub shell

2007-01-04 Thread Detlef Drewanz
I am not sure what your problem is. Does it not start dtlogin/gdm? Or is the zpool not mounted? Just log in on the CLI and run "svcs -x" and "df -k" to see whether all services are running and the zpool has been mounted. It may be that your zpool was not mounted on /export (depending on your
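
(Generic suggestions rather than a diagnosis of this particular report: if the pool imported fine but the dataset simply isn't mounted, "zfs mount -a" is worth trying, and "zfs get mountpoint" on the dataset -- whatever name was used for /export/home -- will show where ZFS thinks it should go.)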

[zfs-discuss] ZFS --> Grub shell

2007-01-04 Thread Lubos Kocman
Hi, I've tried to create a zfs pool on my c0d0s7 and use it as a tank for my /export/home. Everything was perfect; I moved my files there from a backup and all was still OK. I also deleted the old line with the ufs mountpoint /export/home in the vfstab file. But after reboot, there was just a bash shell. I've tried to

Re: [zfs-discuss] zfs clones

2007-01-04 Thread Darren J Moffat
Matthew Ahrens wrote: now wouldn't it be a more natural way of usage when I intend to create a clone, that by default the zfs clone command will create the needed snapshot from the current image internally as part of taking the clone, unless I explicitly specify that I do want to take a clone of a

Re: [zfs-discuss] Re: RAIDZ2 vs. ZFS RAID-10

2007-01-04 Thread Robert Milkowski
Hello Anton, Thursday, January 4, 2007, 3:46:48 AM, you wrote: >> Is there some reason why a small read on a raidz2 is not statistically very >> likely to require I/O on only one device? Assuming a non-degraded pool of >> course. ABR> ZFS stores its checksums for RAIDZ/RAIDZ2 in such a way tha

Re: [zfs-discuss] Re: RAIDZ2 vs. ZFS RAID-10

2007-01-04 Thread Casper . Dik
>> Is there some reason why a small read on a raidz2 is not statistically very >> likely to require I/O on only one device? Assuming a non-degraded pool of >> course. > >ZFS stores its checksums for RAIDZ/RAIDZ2 in such a way that all disks must be >read to compute and verify the checksum. But aren't short blocks sometimes stored on only a subset of disks?