[zfs-discuss] Re: Re: Solid State Drives?

2007-01-05 Thread Anton B. Rang
> > Summary (1.8" form factor): write: 35MB/Sec, Read: 62MB/Sec, IOPS: 7,000 > > > That is on par with a 5400 rpm disk, except for the 100x more small, random read iops. The biggest issue is the pricing, which will become interestingly competitive for mortals this year. $600+ for a 32 GB de
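(A rough sanity check on that comparison, assuming a 5400 rpm drive delivers on the order of 70 small random IOPS: 7,000 / 70 = 100, which is where the "100x" figure comes from; sequential throughput of 35-62 MB/sec is indeed comparable to such a disk.)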

Re: [zfs-discuss] RAIDZ2 vs. ZFS RAID-10

2007-01-05 Thread Richard Elling
Peter Schuller wrote: I've been using a simple model for small, random reads. In that model, the performance of a raidz[12] set will be approximately equal to a single disk. For example, if you have 6 disks, then the performance for the 6-disk raidz2 set will be normalized to 1, and the perform
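(A worked example of that model, assuming ~100 small random read IOPS per spindle: a 6-disk raidz2 set delivers roughly one disk's worth, ~100 IOPS, because each filesystem block is spread across the data disks and every read touches all of them; the same six disks arranged as three 2-way mirrors can serve independent reads from all six spindles, roughly 600 IOPS.)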

Re: [zfs-discuss] Re: Re: RAIDZ2 vs. ZFS RAID-10

2007-01-05 Thread Richard Elling
Darren Dunham wrote: That would be useless, and not provide anything extra. I think it's useless only if a (disk) block of data holding RAIDZ parity never has silent corruption, or if scrubbing were a lightweight operation that could be run often. The problem is that you will still need to

Re: [zfs-discuss] zfs recv

2007-01-05 Thread Richard Elling
Matthew Ahrens wrote: Robert Milkowski wrote: Hello zfs-discuss, zfs recv -v at the end reported: received 928Mb stream in 6346 seconds (150Kb/sec) I'm not sure but shouldn't it be 928MB and 150KB? Or perhaps we're counting bits? That's correct, it is in bytes and should use capital B.
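(Checking the arithmetic under the bytes interpretation: 928 MB = 928 x 1024 = 950,272 KB, and 950,272 KB / 6346 s ≈ 150 KB/s, so the reported numbers are self-consistent as bytes and the capital B is the right unit.)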

[zfs-discuss] Re: Re: RAIDZ2 vs. ZFS RAID-10

2007-01-05 Thread Anton B. Rang
> It's not about the checksum but about how a fs block is stored in > raid-z[12] case - it's spread out to all non-parity disks so in order > to read one fs block you have to read from all disks except parity > disks. However, if we didn't need to verify the checksum, we wouldn't have to read the

Re: [zfs-discuss] Re: Solid State Drives?

2007-01-05 Thread Richard Elling
Al Hopper wrote: On Fri, 5 Jan 2007, Anton B. Rang wrote: If [SSD or Flash] devices become more prevalent, and/or cheaper I'm curious what ways ZFS could be made to best take advantage of them? The intent log is a possibility, but this would work better with SSD than Flash; Flash wr

Re: [zfs-discuss] Re: Re: RAIDZ2 vs. ZFS RAID-10

2007-01-05 Thread Darren Dunham
> >> ... If the block checksums > >> show OK, then reading the parity for the corresponding data yields no > >> additional useful information. > > > > It would yield useful information about the status of the parity > > information on disk. > > > > The read would be done because you're already payi

Re: [zfs-discuss] Re: Re: RAIDZ2 vs. ZFS RAID-10

2007-01-05 Thread Toby Thain
... If the block checksums show OK, then reading the parity for the corresponding data yields no additional useful information. It would yield useful information about the status of the parity information on disk. The read would be done because you're already paying the penalty for reading all

Re[2]: [zfs-discuss] Re: Re: Re: Snapshots impact on performance

2007-01-05 Thread Robert Milkowski
Hello Chris, Wednesday, December 13, 2006, 12:25:40 PM, you wrote: CG> Robert Milkowski wrote: >> Hello Chris, >> >> Wednesday, December 6, 2006, 6:23:48 PM, you wrote: >> >> CG> One of our file servers internally to Sun that reproduces this >> CG> running nv53 here is the dtrace output: >> >>

[zfs-discuss] Re: zfs pool in degraded state,

2007-01-05 Thread Eric Hill
Ok, now I'm getting somewhere.
vault:/# dd if=/dev/zero of=/dev/dsk/c5t6d0 bs=512 count=64000
64000+0 records in
64000+0 records out
vault:/# dd if=/dev/zero of=/dev/dsk/c5t6d0 bs=512 count=64000 oseek=976174591
64000+0 records in
64000+0 records out
vault:/# zpool replace pool c5t6d0
vault:/#
Looks
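(A sketch of where an oseek value like that comes from, assuming an SMI-labeled disk whose whole-disk slice is s2 and using prtvtoc to get the sector count; ZFS keeps two vdev labels at the front of the device and two at the back, which is why both ends are overwritten:

    prtvtoc /dev/rdsk/c5t6d0s2    # reports the geometry, including total 512-byte sectors
    # oseek = <total sectors> - 64000, so the last 64000 sectors get zeroed:
    dd if=/dev/zero of=/dev/dsk/c5t6d0 bs=512 count=64000 oseek=<total sectors - 64000>
)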

Re: [zfs-discuss] ZFS related (probably) hangs due to memory exhaustion(?) with snv53

2007-01-05 Thread Tomas Ögren
On 05 January, 2007 - Mark Maybee sent me these 2,9K bytes: > Tomas Ögren wrote: > >On 05 January, 2007 - Mark Maybee sent me these 1,5K bytes: > > > >>So it looks like this data does not include ::kmastat info from *after* > >>you reset arc_reduce_dnlc_percent. Can I get that? > > > >Yeah, attac

Re: [zfs-discuss] ZFS related (probably) hangs due to memory exhaustion(?) with snv53

2007-01-05 Thread Tomas Ögren
On 05 January, 2007 - Tomas Ögren sent me these 33K bytes: > On 05 January, 2007 - Mark Maybee sent me these 1,5K bytes: > > > So it looks like this data does not include ::kmastat info from *after* > > you reset arc_reduce_dnlc_percent. Can I get that? > > Yeah, attached. (although about 18 ho

Re: [zfs-discuss] ZFS related (probably) hangs due to memory exhaustion(?) with snv53

2007-01-05 Thread Mark Maybee
Tomas Ögren wrote: On 05 January, 2007 - Mark Maybee sent me these 1,5K bytes: So it looks like this data does not include ::kmastat info from *after* you reset arc_reduce_dnlc_percent. Can I get that? Yeah, attached. (although about 18 hours after the others) Excellent, this confirms #3 b

[zfs-discuss] Re: zfs pool in degraded state,

2007-01-05 Thread Eric Hill
And to add more fuel to the fire, an fmdump -eV shows the following:
Jan 05 2007 11:30:38.030057310 ereport.fs.zfs.vdev.open_failed
nvlist version: 0
        class = ereport.fs.zfs.vdev.open_failed
        ena = 0x88c01b571200801
        detector = (embedded nvlist)
        nvlist version: 0

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-05 Thread Jonathan Edwards
On Jan 5, 2007, at 11:10, Anton B. Rang wrote: DIRECT IO is a set of performance optimisations to circumvent shortcomings of a given filesystem. Direct I/O as generally understood (i.e. not UFS-specific) is an optimization which allows data to be transferred directly between user data bu

[zfs-discuss] Re: zfs pool in degraded state,

2007-01-05 Thread Eric Hill
Hi Bill,
vault:/# zpool replace pool c5t6d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c5t6d0s0 is part of active ZFS pool pool. Please see zpool(1M).
vault:/# zpool replace -f pool c5t6d0
invalid vdev specification
the following errors must be manually repaired:

Re: [zfs-discuss] Solid State Drives?

2007-01-05 Thread Jason J. W. Williams
Could this ability (separate ZIL device) coupled with an SSD give something like a Thumper the write latency benefit of battery-backed write cache? Best Regards, Jason On 1/5/07, Neil Perrin <[EMAIL PROTECTED]> wrote: Robert Milkowski wrote On 01/05/07 11:45,: > Hello Neil, > > Friday, Januar

Re: [zfs-discuss] ZFS related (probably) hangs due to memory exhaustion(?) with snv53

2007-01-05 Thread Tomas Ögren
On 05 January, 2007 - Mark Maybee sent me these 1,5K bytes: > So it looks like this data does not include ::kmastat info from *after* > you reset arc_reduce_dnlc_percent. Can I get that? Yeah, attached. (although about 18 hours after the others) > What I suspect is happening: > 1 with you

Re: [zfs-discuss] ZFS related (probably) hangs due to memory exhaustion(?) with snv53

2007-01-05 Thread Mark Maybee
So it looks like this data does not include ::kmastat info from *after* you reset arc_reduce_dnlc_percent. Can I get that? What I suspect is happening: 1. with your large ncsize, you eventually ran the machine out of memory because (currently) the arc is not accounting for

Re: [zfs-discuss] Solid State Drives?

2007-01-05 Thread Neil Perrin
Robert Milkowski wrote On 01/05/07 11:45,: Hello Neil, Friday, January 5, 2007, 4:36:05 PM, you wrote: NP> I'm currently working on putting the ZFS intent log on separate devices NP> which could include separate disks and nvram/solid state devices. NP> This would help any application using fs

Re: [zfs-discuss] Re: Solid State Drives?

2007-01-05 Thread Al Hopper
On Fri, 5 Jan 2007, Anton B. Rang wrote: > > If [SSD or Flash] devices become more prevalent, and/or cheaper I'm curious > > what > > ways ZFS could be made to best take advantage of them? > > The intent log is a possibility, but this would work better with SSD than > Flash; Flash writes can act

Re[2]: [zfs-discuss] Solid State Drives?

2007-01-05 Thread Robert Milkowski
Hello Neil, Friday, January 5, 2007, 4:36:05 PM, you wrote: NP> I'm currently working on putting the ZFS intent log on separate devices NP> which could include separate disks and nvram/solid state devices. NP> This would help any application using fsync/O_DSYNC - in particular NP> DB and NFS. Fro

Re: [zfs-discuss] zfs pool in degraded state, zpool offline fails with no valid replicas

2007-01-05 Thread Bill Moore
On Fri, Jan 05, 2007 at 10:14:21AM -0800, Eric Hill wrote: > I have a pool of 48 500GB disks across four SCSI channels (12 per > channel). One of the disks failed, and was replaced. The pool is now > in a degraded state, but I can't seem to get the pool to be happy with > the replacement. I did

[zfs-discuss] zfs pool in degraded state, zpool offline fails with no valid replicas

2007-01-05 Thread Eric Hill
I have a pool of 48 500GB disks across four SCSI channels (12 per channel). One of the disks failed, and was replaced. The pool is now in a degraded state, but I can't seem to get the pool to be happy with the replacement. I did a resilver and the pool is error free with the exception of this
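(For reference, a minimal sketch of the replacement workflow being attempted in this thread, using only commands from zpool(1M); the device name is the one reported above:

    zpool status -v pool         # identify the faulted disk and overall pool state
    zpool offline pool c5t6d0    # fails with "no valid replicas" if redundancy is exhausted
    zpool replace pool c5t6d0    # swap in the new disk at the same path
    zpool status pool            # watch the resilver progress
)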

Re: [zfs-discuss] ZFS related (probably) hangs due to memory exhaustion(?) with snv53

2007-01-05 Thread Tomas Ögren
On 05 January, 2007 - Mark Maybee sent me these 0,8K bytes: > Thomas, > > This could be fragmentation in the meta-data caches. Could you > print out the results of ::kmastat? http://www.acc.umu.se/~stric/tmp/zfs-dumps.tar.bz2 memstat, kmastat and dnlc_nentries from 10 minutes after boot up unt
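(A sketch of how data like this is typically collected on a live system, assuming mdb -k against the running kernel; dnlc_nentries is the kernel variable holding the current DNLC entry count:

    echo '::memstat' | mdb -k          # system-wide memory usage summary
    echo '::kmastat' | mdb -k          # per-kmem-cache allocation statistics
    echo 'dnlc_nentries/D' | mdb -k    # current number of DNLC entries
)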

Re: [zfs-discuss] ZFS related (probably) hangs due to memory exhaustion(?) with snv53

2007-01-05 Thread Mark Maybee
Thomas, This could be fragmentation in the meta-data caches. Could you print out the results of ::kmastat? -Mark Tomas Ögren wrote: On 05 January, 2007 - Robert Milkowski sent me these 3,8K bytes: Hello Tomas, I saw the same behavior here when ncsize was increased from default. Try with de
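(For context, ncsize is a boot-time tunable set in /etc/system; a sketch of the kind of change under discussion, with a purely illustrative value:

    * /etc/system -- takes effect at next reboot
    set ncsize = 0x100000
)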

[zfs-discuss] Re: Solid State Drives?

2007-01-05 Thread Anton B. Rang
> If [SSD or Flash] devices become more prevalent, and/or cheaper I'm curious > what > ways ZFS could be made to best take advantage of them? The intent log is a possibility, but this would work better with SSD than Flash; Flash writes can actually be slower than sequential writes to a real dis

[zfs-discuss] Re: ZFS direct IO

2007-01-05 Thread Anton B. Rang
> DIRECT IO is a set of performance optimisations to circumvent shortcomings of > a given filesystem. Direct I/O as generally understood (i.e. not UFS-specific) is an optimization which allows data to be transferred directly between user data buffers and disk, without a memory-to-memory copy.
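(On Solaris, direct I/O in this UFS sense can be requested per mount with the forcedirectio option, or per file via directio(3C); neither interface applies to ZFS. A sketch, with an illustrative device and mount point:

    mount -F ufs -o forcedirectio /dev/dsk/c0t0d0s6 /export/data
)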

Re: [zfs-discuss] Re: Re: RAIDZ2 vs. ZFS RAID-10

2007-01-05 Thread Darren Dunham
> > Ah, that's a major misconception on my part then. I'd thought I'd read > > that unlike any other RAID implementation, ZFS checked and verified > > parity on normal data access. > That would be useless, and not provide anything extra. I think it's useless if a (disk) block of data holding R

Re: [zfs-discuss] Solid State Drives?

2007-01-05 Thread Neil Perrin
I'm currently working on putting the ZFS intent log on separate devices which could include separate disks and nvram/solid state devices. This would help any application using fsync/O_DSYNC - in particular DB and NFS. From prototyping, considerable performance improvements have been seen. Neil. Ky
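(A sketch of how a separate intent-log device could be attached under the design Neil describes, using zpool add with a log vdev; the device name is illustrative:

    zpool add pool log c4t0d0
)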

[zfs-discuss] Solid State Drives?

2007-01-05 Thread Kyle McDonald
I know there's been much discussion on the list lately about getting HW arrays to use (or not use) their caches in a way that helps ZFS the most. Just yesterday I started seeing articles on NAND Flash Drives, and I know other Solid State Drive technologies have been around for a while and many

Re: Re[2]: [zfs-discuss] ZFS related (probably) hangs due to memory exhaustion(?) with snv53

2007-01-05 Thread Tomas Ögren
On 05 January, 2007 - Robert Milkowski sent me these 3,8K bytes: > Hello Tomas, > > I saw the same behavior here when ncsize was increased from default. > Try with default and let's see what will happen - if it works then it's > better than hanging every hour or so. That's still not the point.. I

Re[2]: [zfs-discuss] ZFS related (probably) hangs due to memory exhaustion(?) with snv53

2007-01-05 Thread Robert Milkowski
Hello Tomas, Friday, January 5, 2007, 4:00:53 AM, you wrote: TÖ> On 04 January, 2007 - Tomas Ögren sent me these 1,0K bytes: >> On 03 January, 2007 - [EMAIL PROTECTED] sent me these 0,5K bytes: >> >> > >> > >Hmmm, so there is lots of evictable cache here (mostly in the MFU >> > >part of the ca

[zfs-discuss] Re: ZFS related (probably) hangs due to memory exhaustion(?) with snv53

2007-01-05 Thread Jürgen Keil
> >Hmmm, so there is lots of evictable cache here (mostly in the MFU > >part of the cache)... could you make your core file available? > >I would like to take a look at it. > > Isn't this just like: > 6493923 nfsfind on ZFS filesystem quickly depletes memory in a 1GB system > > Which was introduc

Re: [zfs-discuss] ZFS direct IO

2007-01-05 Thread Roch Bourbonnais
DIRECT IO is a set of performance optimisations to circumvent shortcomings of a given filesystem. Check out http://blogs.sun.com/roch/entry/zfs_and_directio Then I would be interested to know what the expectation is for ZFS/DIO. On 5 Jan 07, at 06:39, dudekula mastan wrote: Hi