I'm experiencing occasional slow responsiveness on an OpenSolaris b118 system, typically noticed when running an 'ls' (no extra flags, so no directory service lookups). The delay is anywhere from 2 to 30 seconds, and no correlation with load on the server has been observed. Via NFS the problem has only been seen on v3 (we are migrating to NFSv4 once the O_EXCL/mtime bug fix has been integrated, anticipated for snv_124). The problem has been observed locally on the primary filesystem, locally in an automounted reference (/home/foo), and remotely via NFS.
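
For the local case, a minimal sketch along these lines (untested, and the 1 second threshold is arbitrary) ought to at least confirm the delay at the getdents level:

#!/usr/sbin/dtrace -s
#pragma D option quiet
/* report any getdents/getdents64 call from ls that takes over 1 second */

syscall::getdents*:entry
/execname == "ls"/
{
        self->ts = timestamp;
}

syscall::getdents*:return
/self->ts && timestamp - self->ts > 1000000000/
{
        printf("%Y %s[%d]: %s took %d ms\n", walltimestamp, execname, pid,
            probefunc, (timestamp - self->ts) / 1000000);
}

syscall::getdents*:return
/self->ts/
{
        self->ts = 0;
}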

The zpool is a RAIDZ2 of 10 x 15k RPM SAS drives in a Dell MD1000, exposed as individual RAID0 LUNs by an LSI 1078 with 512MB BBWC (PERC 6/E). Two SSDs sit behind another LSI 1078 with 256MB BBWC (the PERC 6/i in the Dell R710 server itself); each SSD is partitioned into a 10GB slog, with the remaining 36GB used as L2ARC.
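
For clarity, the layout is roughly equivalent to the following; the pool and device names here are invented rather than the real ones:

  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
             c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
      log c2t0d0s0 c2t1d0s0 \
      cache c2t0d0s1 c2t1d0s1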

The system is configured as an NFS server (currently serving NFSv3), an iSCSI target (COMSTAR) and a CIFS server (Samba 3.0.34 from the Sun SFW package), with authentication against a remote OpenLDAP server.

Automount is in use both locally and remotely (Linux clients). Locally, /home/* is remounted from the zpool; remotely, /home and another filesystem (and its children) are mounted using autofs. There has been some suspicion that automount is the problem, but no definitive evidence as yet.
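
One way to test the automount theory would be to leave something like this running and see whether the slow 'ls' occurrences line up with mount/unmount activity (untested sketch):

#!/usr/sbin/dtrace -s
#pragma D option quiet
/* timestamp every mount(2)/umount so it can be lined up with slow ls events */

syscall::mount:entry
{
        printf("%Y %s[%d] mount %s\n", walltimestamp, execname, pid,
            copyinstr(arg1));
}

syscall::umount*:entry
{
        printf("%Y %s[%d] %s %s\n", walltimestamp, execname, pid,
            probefunc, copyinstr(arg0));
}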

The problem has definitely been observed with stats (of some form, typically '/usr/bin/ls' output) remotely, locally in /home/*, and locally in /zpool/home/* (the true source location). There is a clear correlation between how recently the directories in question were read and recurrence of the fault: one user has scripted a regular 'ls' (at 15-minute, 30-minute and hourly intervals so far) of the filesystems of interest, and since starting down this path the fault has had minimal noted impact, at least for that user.
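
The workaround is nothing fancier than a scheduled 'ls', i.e. a crontab entry along the lines of (path and interval invented):

  0,15,30,45 * * * * /usr/bin/ls /zpool/home/* > /dev/null 2>&1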

I have removed the L2ARC (cache) devices from the pool, and the slowness has still been reported since removing them. My suspicion was that occasional high synchronous load was causing heavy writes to the slog devices, and that a stat request might then be faulting from ARC to L2ARC (on the same SSDs) before going to the primary data store.
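
One way to check whether the slow stats coincide with reads actually hitting the spinning disks, and how long those reads sit behind write bursts, would be the iotime-style approach from the DTrace guide, aggregated per device (rough sketch):

#!/usr/sbin/dtrace -s
/* distribution of physical I/O service times (in microseconds)
   per device and direction */

io:::start
{
        start[args[0]->b_edev, args[0]->b_blkno] = timestamp;
}

io:::done
/start[args[0]->b_edev, args[0]->b_blkno]/
{
        @svc[args[1]->dev_statname,
            args[0]->b_flags & B_READ ? "read" : "write"] =
            quantize((timestamp -
            start[args[0]->b_edev, args[0]->b_blkno]) / 1000);
        start[args[0]->b_edev, args[0]->b_blkno] = 0;
}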

Another thought I had was along the lines of filesystem caching, with heavy writes causing read blocking. I have no evidence that this is the case, but there have been some suggestions on the list recently about limiting the amount of memory ZFS uses for write caching. Can anybody comment on the effectiveness of this? (I have 256MB of controller write cache in front of the slog SSDs and 512MB in front of the primary storage devices.)
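
If the knobs being suggested are the ARC cap and the write throttle override, I assume that amounts to something like the /etc/system entries below (values are placeholders, not recommendations; please correct me if these are the wrong tunables):

* cap the ARC so bulk writes cannot push cached metadata out
set zfs:zfs_arc_max = 0x200000000
* cap the dirty data accepted per txg (write throttle override)
set zfs:zfs_write_limit_override = 0x20000000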

My DTrace is very poor, but I suspect it is the best way to root-cause this problem. If somebody has any code that might assist in debugging this and is able to share it, that would be much appreciated.
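
The rough direction I had in mind, assuming the nfsv3 provider is available on b118 (I have not verified that), is per-operation server-side latency for the ops an 'ls' generates; corrections very welcome:

#!/usr/sbin/dtrace -s
/* latency distribution (ms) per NFSv3 operation, keyed by RPC xid */

nfsv3:::op-access-start,
nfsv3:::op-getattr-start,
nfsv3:::op-lookup-start,
nfsv3:::op-readdir-start,
nfsv3:::op-readdirplus-start
{
        start[args[1]->noi_xid] = timestamp;
}

nfsv3:::op-access-done,
nfsv3:::op-getattr-done,
nfsv3:::op-lookup-done,
nfsv3:::op-readdir-done,
nfsv3:::op-readdirplus-done
/start[args[1]->noi_xid]/
{
        @lat[probename] = quantize((timestamp - start[args[1]->noi_xid]) / 1000000);
        start[args[1]->noi_xid] = 0;
}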

Any other suggestions for how to identify this fault and work around it would be greatly appreciated.

cheers,
James
