Re: [zfs-discuss] strange performance drop of solaris 10/zfs

2009-01-30 Thread Jim Mauro
So granted, tank is about 77% full (not to split hairs ;^), but in this case 23% is 640GB of free space. It's not like 15 years ago, when a file system was 2GB total and 23% free meant a measly 460MB to allocate from. 640GB is a lot of space, and our largest writes are less than 5MB.
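For the archive, a quick way to confirm these numbers on the server itself (the pool name "tank" comes from the thread; exact column headings vary by Solaris release, and zpool reports raw space while zfs reports space after redundancy overhead):

    # Pool-wide size, used, available, and percent-of-capacity (CAP)
    zpool list tank

    # Per-dataset used/available breakdown under the pool
    zfs list -r tank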

Re: [zfs-discuss] strange performance drop of solaris 10/zfs

2009-01-29 Thread Sanjeev
Kevin, looking at the stats I think the tank pool is about 80% full. At this point you are possibly hitting bug 6596237, "Stop looking and start ganging". Also, there is another ZIL-related bug which worsens the case by fragmenting the free space: 6683293, "concurrent O_DSYNC writes to a file ..."
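A hedged way to check whether gang-block allocation (the symptom behind 6596237) is actually happening is to count entries into the zio gang write path with DTrace. The function name below comes from the OpenSolaris ZFS source of this era and is an assumption; it may differ or not exist on a given Solaris 10 build, so adjust to match your kernel:

    # Count calls into the gang-block write path in the zfs module for 10s.
    # zio_write_gang_block is release-dependent; verify with
    # "dtrace -l -m zfs | grep gang" first.
    dtrace -n 'fbt:zfs:zio_write_gang_block:entry { @ = count(); } tick-10s { exit(0); }'

A steadily climbing count while the slow writes are happening would support the fragmentation theory.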

Re: [zfs-discuss] strange performance drop of solaris 10/zfs

2009-01-29 Thread Mike Gerdts
On Thu, Jan 29, 2009 at 6:13 AM, Kevin Maguire wrote:
> I have tried to establish if some client or clients are thrashing the
> server via nfslogd, but without seeing anything obvious. Is there
> some kind of per-zfs-filesystem iostat?
The following should work in bash or ksh, so long as the lis...
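The rest of the message is truncated in the archive; a minimal sketch of the per-filesystem approach, assuming fsstat(1M) is available (it ships with later Solaris 10 updates) and that the pool's datasets are mounted:

    #!/bin/ksh
    # Collect the mountpoints of every mounted dataset in the pool "tank"
    # (pool name taken from the thread); skip legacy/none mountpoints.
    mounts=$(zfs list -H -o mountpoint -t filesystem -r tank | grep '^/')

    # Report per-mountpoint operation and throughput counts every 5 seconds.
    fsstat $mounts 5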

[zfs-discuss] strange performance drop of solaris 10/zfs

2009-01-29 Thread Kevin Maguire
Hi

We have been using a Solaris 10 system (Sun-Fire-V245) for a while as our primary file server. It is based on Solaris 10 06/06, plus patches up to approximately May 2007. It is a production machine, and until about a week ago it had few problems. Attached to the V245 is a SCSI RAID array, which pr...