Re: [zfs-discuss] Extremely Slow ZFS Performance

2011-05-06 Thread Garrett D'Amore
> > in RAM? That would create a huge performance
> > cliff.
> >
> > -----Original Message-----
> > From: zfs-discuss-boun...@opensolaris.org on behalf of Eric D.
> > Mudama
> > Sent: Wed 5/4/2011 12:55 PM
> > To: Adam Serediuk
> > Cc: zfs-discuss@opensolaris.org
> > Su

Re: [zfs-discuss] Extremely Slow ZFS Performance

2011-05-04 Thread Adam Serediuk
>>> My first thought is dedup... perhaps you've got dedup enabled and
>>> the DDT no longer fits in RAM? That would create a huge performance
>>> cliff.
>>>
>>> -----Original Message-----
>>> From: zfs-discuss-boun...@opensolaris.org on behalf
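The "DDT no longer fits in RAM" cliff can be sanity-checked with a back-of-envelope calculation. The sketch below is hypothetical: the entry count is a placeholder (on a real system it would come from the dedup statistics that `zdb -DD <pool>` reports), and the ~320 bytes per in-core DDT entry is a commonly cited rule of thumb, not an exact figure.

```shell
#!/bin/sh
# Rough estimate of DDT memory footprint.
# ENTRIES is a placeholder; substitute the entry count reported for
# the affected pool. BYTES_PER_ENTRY is a rule-of-thumb in-core size.
ENTRIES=50000000
BYTES_PER_ENTRY=320
TOTAL=$((ENTRIES * BYTES_PER_ENTRY))
echo "Approx in-core DDT size: $((TOTAL / 1024 / 1024)) MiB"
```

If the estimate lands near or above installed RAM (minus what the ARC otherwise needs), every dedup'd write can miss the cached table and go to disk, which matches the "huge performance cliff" described above.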

Re: [zfs-discuss] Extremely Slow ZFS Performance

2011-05-04 Thread Adam Serediuk
olaris.org
> Subject: Re: [zfs-discuss] Extremely Slow ZFS Performance
>
> On Wed, May 4 at 12:21, Adam Serediuk wrote:
> >Both iostat and zpool iostat show very little to zero load on the devices
> >even while blocking.
> >
> >Any suggestions on avenues of approach fo

Re: [zfs-discuss] Extremely Slow ZFS Performance

2011-05-04 Thread Adam Serediuk
iostat doesn't show any high service times, and fsstat also shows low throughput. Occasionally I can generate enough load that you do see some very high asvc_t, but when that occurs the pool is performing as expected. As a precaution I just added two extra drives to the zpool in case zfs was having

Re: [zfs-discuss] Extremely Slow ZFS Performance

2011-05-04 Thread Eric D. Mudama
On Wed, May 4 at 12:21, Adam Serediuk wrote:

>Both iostat and zpool iostat show very little to zero load on the devices
>even while blocking.
>
>Any suggestions on avenues of approach for troubleshooting?

is 'iostat -en' error free?

--
Eric D. Mudama
edmud...@bounceswoosh.org
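The error check Eric suggests can be automated: `iostat -en` prints per-device error counters (s/w, h/w, trn, tot), so a small awk filter can flag any device with a nonzero total. The sample output embedded below is hypothetical; on the real host you would pipe `iostat -en` straight into the filter.

```shell
#!/bin/sh
# Flag devices with nonzero error totals in 'iostat -en'-style output.
# The sample text is made up for illustration; replace the echo with
# a live 'iostat -en' pipeline on the affected system.
iostat_sample='  ---- errors ---
  s/w h/w trn tot device
    0   0   0   0 c0t0d0
    0   2   0   2 c0t1d0
    0   0   0   0 c0t2d0'

echo "$iostat_sample" | awk '
    # Data rows have 5 fields and a numeric "tot" column ($4).
    NF == 5 && $4 ~ /^[0-9]+$/ && $4 > 0 {
        print $5, "has", $4, "total errors"
    }'
```

A clean run prints nothing; any output names a device whose errors are worth chasing before blaming the filesystem layer.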

[zfs-discuss] Extremely Slow ZFS Performance

2011-05-04 Thread Adam Serediuk
We have an X4540 running Solaris 11 Express snv_151a that has developed an issue where its write performance is absolutely abysmal. Even touching a file takes over five seconds, both locally and remotely.

/pool1/data# time touch foo

real    0m5.305s
user    0m0.001s
sys     0m0.004s
/pool1/data#
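A single timed `touch` can be misleading (the first call may pay a one-off cost), so repeating the measurement helps show whether the multi-second latency is sustained. This is a minimal sketch using second-granularity timestamps, which is coarse but sufficient when the symptom is a 5-second `touch`; the path is a placeholder for a file under the affected pool (e.g. /pool1/data).

```shell
#!/bin/sh
# Repeatedly time 'touch' on a scratch file to see whether the
# latency is sustained. Path is a placeholder; point it at the
# affected pool on the real system.
f=/tmp/zfs_touch_test.$$
for i in 1 2 3; do
    start=$(date +%s)
    touch "$f"
    end=$(date +%s)
    echo "touch $i: $((end - start))s"
done
rm -f "$f"
```

On a healthy pool every line should report 0s; consistent multi-second values across iterations would point at a systemic stall (e.g. transaction-group sync or DDT misses) rather than a one-time cache warm-up.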