Re: [zfs-discuss] resilver = defrag?

2010-09-10 Thread Darren J Moffat
On 10/09/2010 04:24, Bill Sommerfeld wrote: C) Does zfs send | zfs receive mean it will defrag? Scores so far: 1 No, 2 Yes. "Maybe". If there is sufficient contiguous free space in the destination pool, files may be less fragmented. But if you do incremental sends of multiple snapshots, you may w
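
For reference, a minimal sketch of the send/receive rewrite being discussed; the pool and snapshot names (tank, backup, data@snap1) are placeholders, not taken from the thread:

  zfs snapshot tank/data@snap1
  # A full send rewrites every block on the receiving pool, so the copy is only
  # as contiguous as the destination pool's free space allows.
  zfs send tank/data@snap1 | zfs receive backup/data
  # An incremental send ships only the blocks changed between the two snapshots,
  # so those blocks land wherever the destination allocator places them.
  zfs snapshot tank/data@snap2
  zfs send -i tank/data@snap1 tank/data@snap2 | zfs receive backup/data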

Re: [zfs-discuss] [mdb-discuss] mdb -k - I/O usage

2010-09-10 Thread Piotr Jasiukajtis
Ok, now I know it's not related to the I/O performance, but to ZFS itself. At some point all 3 pools were locked up in this way: extended device statistics errors --- r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b s/w h/w trn tot device 0.0
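
The column headers quoted above match Solaris extended iostat output with error counters; a hedged example of how such a capture is typically produced (interval and count are arbitrary):

  # -x extended per-device statistics, -n descriptive device names,
  # -e error counters (s/w h/w trn tot)
  iostat -xne 5 3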

Re: [zfs-discuss] [mdb-discuss] mdb -k - I/O usage

2010-09-10 Thread Carson Gaspar
On 9/10/10 4:16 PM, Piotr Jasiukajtis wrote: Ok, now I know it's not related to the I/O performance, but to ZFS itself. At some point all 3 pools were locked up in this way: extended device statistics errors --- r/s w/s kr/s kw/s wait actv wsv

Re: [zfs-discuss] [mdb-discuss] mdb -k - I/O usage

2010-09-10 Thread Piotr Jasiukajtis
I don't have any errors from fmdump or syslog. The machine is a Sun Fire X4275; I don't use mpt or lsi drivers. It could be a bug in a driver, since I see this on 2 identical machines. On Fri, Sep 10, 2010 at 9:51 PM, Carson Gaspar wrote: > On 9/10/10 4:16 PM, Piotr Jasiukajtis wrote: >> >> Ok, now I
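
For readers following along, the usual places to look for the telemetry mentioned here (a sketch only; no particular output is implied for these machines):

  fmdump -eV | more                # raw FMA error reports, logged even when no fault is diagnosed
  fmadm faulty                     # faults the diagnosis engine has actually called
  grep -i scsi /var/adm/messages   # driver and syslog messages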

Re: [zfs-discuss] [mdb-discuss] mdb -k - I/O usage

2010-09-10 Thread Richard Elling
You are both right. More below... On Sep 10, 2010, at 2:06 PM, Piotr Jasiukajtis wrote: > I don't have any errors from fmdump or syslog. > The machine is a Sun Fire X4275; I don't use mpt or lsi drivers. > It could be a bug in a driver, since I see this on 2 identical machines. > > On Fri, Sep 10, 2

Re: [zfs-discuss] performance leakage when copy huge data

2010-09-10 Thread Richard Elling
On Sep 9, 2010, at 5:55 PM, Fei Xu wrote: > Just to update the status and findings. Thanks for the update. > I've checked TLER settings and they are off by default. > > I moved the source pool to another chassis and did the 3.8TB send again. This time, no problems at all! The difference is >

Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-10 Thread Richard Elling
On Sep 9, 2010, at 6:39 AM, Marty Scholes wrote: > Erik wrote: >> Actually, your biggest bottleneck will be the IOPS limits of the drives. A 7200RPM SATA drive tops out at 100 IOPS. >> Yup. That's it. >> So, if you need to do 62.5e6 IOPS, and the rebuild drive can do just 100 IOPS,
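
Spelling out the arithmetic behind the quoted figures (worst case, one random I/O per block, no aggregation):

  62.5e6 I/Os ÷ 100 IOPS per drive = 625,000 s ≈ 173.6 hours ≈ 7.2 days per rebuilding drive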

Re: [zfs-discuss] performance leakage when copy huge data

2010-09-10 Thread Fei Xu
> by the way, in HDtune, I saw C7: Ultra DMA CRC error count is a little high, which indicates a potential connection issue. Maybe all are caused by the enclosure? Bingo! You are right. I've done a lot of tests and the defect is narrowed down to the "problem hardware". The two pool wo
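
A hedged way to watch the same counter outside HDtune, using smartmontools (the device path is a placeholder):

  smartctl -A /dev/rdsk/c0t1d0
  # SMART attribute 199 (0xC7), UDMA_CRC_Error_Count: a rising raw value
  # normally implicates the cable, backplane, or enclosure rather than the platters.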

[zfs-discuss] Solaris 10u9 with zpool version 22, but no DEDUP (version 21 reserved)

2010-09-10 Thread Hans Foertsch
bash-3.00# uname -a SunOS testxx10 5.10 Generic_142910-17 i86pc i386 i86pc bash-3.00# zpool upgrade -v This system is currently running ZFS pool version 22. The following versions are supported: VER DESCRIPTION --- 1 Initial ZFS versi
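
For anyone checking their own systems, two related commands (the pool name is a placeholder; note that zpool upgrade -v only lists supported versions and changes nothing):

  zpool get version tank    # version of one specific pool
  zpool upgrade tank        # upgrade that pool to the newest version this OS supports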