[zfs-discuss] Zpool with data errors

2011-06-20 Thread Todd Urie
I have a zpool that shows the following from a zpool status -v:

brsnnfs0104 [/var/spool/cron/scripts]# zpool status -v ABC0101
  pool: ABC0101
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the
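The usual sequence after a pool reports data corruption is: list the affected files, scrub, restore the damaged files from backup, then clear the error counters. A minimal sketch, reusing the pool name ABC0101 from the post:

```shell
# List the files affected by the corruption
zpool status -v ABC0101

# Scrub to walk all data and confirm the full extent of the damage
zpool scrub ABC0101
zpool status ABC0101        # check scrub progress and results

# After restoring the damaged files from backup, reset the error counters
zpool clear ABC0101
```

Note that `zpool clear` only resets the counters; it does not repair anything, so run it only after the listed files have actually been restored.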

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-20 Thread Daniel Carosone
On Sun, Jun 19, 2011 at 08:03:25AM -0700, Richard Elling wrote:
> Yes. I've been looking at what the value of zfs_vdev_max_pending should be.
> The old value was 35 (a guess, but a really bad guess) and the new value is
> 10 (another guess, but a better guess). I observe that data from a fast,
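For anyone wanting to experiment with this tunable themselves, a sketch of how it is typically inspected and changed on illumos/Solaris (the value 10 below is just the new default mentioned above, not a recommendation):

```shell
# Read the current value from the live kernel
echo "zfs_vdev_max_pending/D" | mdb -k

# Change it on the fly for testing (0t10 = decimal 10)
echo "zfs_vdev_max_pending/W 0t10" | mdb -kw

# Make it persistent across reboots
echo "set zfs:zfs_vdev_max_pending = 10" >> /etc/system
```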

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-20 Thread Garrett D'Amore
For SSD we have code in illumos that disables disksort. Ultimately, we believe that the cost of disksort is in the noise for performance.

-- Garrett D'Amore

On Jun 20, 2011, at 8:38 AM, "Andrew Gabriel" wrote:
> Richard Elling wrote:
>> On Jun 19, 2011, at 6:04 AM, Andrew Gabriel wrote:
>>
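Independently of the automatic SSD detection mentioned above, disksort can also be switched off per device through the sd driver's configuration. A sketch of an sd.conf fragment, assuming the name:value tuning syntax of the illumos sd driver; the vendor/product inquiry string is a placeholder you would match to your own SSD:

```
# /kernel/drv/sd.conf -- per-device tuning (illumos sd driver)
# "ATA     INTEL SSD" is a placeholder inquiry string
sd-config-list = "ATA     INTEL SSD", "disksort:false";
```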

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-20 Thread Andrew Gabriel
Richard Elling wrote:
> On Jun 19, 2011, at 6:04 AM, Andrew Gabriel wrote:
>> Richard Elling wrote:
>>> Actually, all of the data I've gathered recently shows that the number of
>>> IOPS does not significantly increase for HDDs running random workloads.
>>> However the response time does :-(

My data i

Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-20 Thread Richard Elling
On Jun 15, 2011, at 1:33 PM, Nomen Nescio wrote:
> Has there been any change to the server hardware with respect to number of
> drives since ZFS has come out? Many of the servers around still have an even
> number of drives (2, 4) etc. and it seems far from optimal from a ZFS
> standpoint.

All you
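For reference, an even drive count is not actually awkward for ZFS: four disks map naturally onto either striped mirrors or raidz. A sketch of the two common layouts, with placeholder device names:

```shell
# Two 2-way mirrors: half the raw capacity, best random IOPS,
# and the pool can later be grown two disks at a time
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

# Single raidz vdev: three disks' worth of capacity,
# but the IOPS of roughly one disk
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0
```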

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-20 Thread Richard Elling
On Jun 20, 2011, at 6:31 AM, Gary Mills wrote:
> On Sun, Jun 19, 2011 at 08:03:25AM -0700, Richard Elling wrote:
>> On Jun 19, 2011, at 6:28 AM, Edward Ned Harvey wrote:
>>>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>>>> Sent: Saturday, June 18, 2011 7:47 PM
>>>>
>>>> Actually, all

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-20 Thread Gary Mills
On Sun, Jun 19, 2011 at 08:03:25AM -0700, Richard Elling wrote:
> On Jun 19, 2011, at 6:28 AM, Edward Ned Harvey wrote:
>>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>>> Sent: Saturday, June 18, 2011 7:47 PM
>>>
>>> Actually, all of the data I've gathered recently shows that the n

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-20 Thread Edward Ned Harvey
> From: Richard Elling [mailto:richard.ell...@gmail.com]
> Sent: Sunday, June 19, 2011 11:03 AM
>
>> I was planning, in the near future, to go run iozone on some system with,
>> and without the disk cache enabled according to format -e. If my hypothesis
>> is right, it shouldn't significan
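A sketch of the experiment being proposed above; the iozone flags and target path are illustrative choices, not taken from the thread:

```shell
# Toggle the on-disk write cache interactively (Solaris/illumos):
# select the disk, then: cache -> write_cache -> disable (or enable)
format -e

# Benchmark with iozone in both states and compare:
# -a auto mode over record/file sizes, -g caps the max file size
iozone -a -g 4G -f /tank/iozone.tmp
```

The file should be large enough (and the run long enough) that the disk's cache cannot absorb the whole workload, otherwise the two runs will look identical.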

Re: [zfs-discuss] zpool import crashs SX11 trying to recovering a corrupted zpool

2011-06-20 Thread Stefano Lassi
Thank you very much, Jim, for your suggestions. Trying any kind of import (including a read-only import) on SX11 leads, every time, to a system panic with the following error:

panic[cpu12]/thread=ff02de5e20c0: assertion failed: zap_count(os, object, &count) == 0, file: ../../common/fs/zfs/ddt_z
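For readers hitting a similar import-time panic, the recovery options usually tried in this order are a read-only import, then a transaction rewind; the pool name below is a placeholder, and the /etc/system knobs are a last-resort rescue measure sometimes suggested for assertion panics, after which the data should be migrated off the pool:

```shell
# 1. Read-only import: no writes (including DDT updates) are attempted
zpool import -o readonly=on tank

# 2. Rewind import: -n shows what a rewind would do without doing it;
#    -F then discards the most recent transactions to reach a good txg
zpool import -nF tank
zpool import -F tank

# 3. Last resort, in /etc/system, to turn assertion panics into warnings:
#      set aok = 1
#      set zfs:zfs_recover = 1
```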

[zfs-discuss] ZFS raid1 crash kernel panic

2011-06-20 Thread Aleksey
Hello, I have a ZFS raid1 made of two 1TB drives. Recently my system (OS: FreeBSD 8.2-RELEASE) crashed with a kernel panic:

panic: solaris assert: ss->ss_end >= end (0x6a80753600 >= 0x6a80753800), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/
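Since this assertion fires in the space-map code, one way to investigate without risking another panic is zdb, which reads the pool from userland without importing it. A sketch, with the pool name as a placeholder:

```shell
# Dump space maps for every metaslab (-mm adds detail); errors here point
# at on-disk metadata damage rather than a driver bug
zdb -e -mm tank

# Traverse all blocks and verify checksums (slow on a large pool)
zdb -e -bcc tank
```

If zdb can walk the pool cleanly, a rewind import (`zpool import -F`) is worth trying; if zdb itself trips over the same space map, the damage is on disk.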