Let me elaborate slightly on the reasons I am asking these questions.

I am performing some simple benchmarking in which a file is built by
sequentially writing 64KB blocks until it reaches 100GB. I am seeing large
pauses while the system reclaims the memory it has consumed - exactly the same
behaviour I saw with VxFS.
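For reference, the write loop is essentially the following (a minimal sketch;
the target path is hypothetical, and the 64KB block size and 100GB target are
just the values described above):

/*
 * Minimal sketch of the benchmark write loop described above.
 * Sequentially appends 64KB blocks until the file reaches 100GB.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE      (64 * 1024)
#define TARGET_SIZE     (100ULL * 1024 * 1024 * 1024)

int
main(void)
{
        char *buf = malloc(BLOCK_SIZE);
        unsigned long long written = 0;
        int fd;

        if (buf == NULL) {
                perror("malloc");
                return (1);
        }
        memset(buf, 0xA5, BLOCK_SIZE);

        /* Hypothetical path on the pool under test. */
        fd = open("/testpool/bench/bigfile",
            O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
                perror("open");
                return (1);
        }

        /* Append 64KB blocks until the target size is reached. */
        while (written < TARGET_SIZE) {
                ssize_t n = write(fd, buf, BLOCK_SIZE);
                if (n < 0) {
                        perror("write");
                        return (1);
                }
                written += n;
        }

        close(fd);
        free(buf);
        return (0);
}
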

I assume that since ZFS (back to the write cache question) is copy-on-write and
is not write caching anything (correct me if I am wrong), it is instead using
memory for my read cache. Also, since I have 32GB of memory, the reclaim
periods are quite long while it frees that memory - basically rendering my
volume unusable until the reclaim completes.
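For what it's worth, the ARC's growth against its limit can be watched during
the run by polling the arcstats kstat, roughly like this (a sketch using
libkstat; "size" and "c_max" are the standard arcstats statistics, compile
with -lkstat):

/* Rough sketch: poll the ZFS ARC size against its c_max limit via libkstat. */
#include <kstat.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
        kstat_ctl_t *kc = kstat_open();
        kstat_t *ksp;
        kstat_named_t *size, *c_max;

        if (kc == NULL) {
                perror("kstat_open");
                return (1);
        }

        ksp = kstat_lookup(kc, "zfs", 0, "arcstats");
        if (ksp == NULL) {
                fprintf(stderr, "arcstats kstat not found\n");
                return (1);
        }

        /* Print the current ARC size and its c_max limit once a second. */
        for (;;) {
                if (kstat_read(kc, ksp, NULL) == -1) {
                        perror("kstat_read");
                        return (1);
                }
                size = kstat_data_lookup(ksp, "size");
                c_max = kstat_data_lookup(ksp, "c_max");
                if (size != NULL && c_max != NULL) {
                        printf("arc size %llu MB  c_max %llu MB\n",
                            (unsigned long long)size->value.ui64 /
                            (1024 * 1024),
                            (unsigned long long)c_max->value.ui64 /
                            (1024 * 1024));
                }
                sleep(1);
        }
        /* NOTREACHED */
}
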

With VxFS I was able to tune the file system with write_throttle, and this
allowed me to find a balance whereby the system writes crazy fast, then
reclaims memory, and repeats that cycle.
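For illustration, a paced variant of the write loop above would flush every so
many blocks so the dirty data is pushed out in small doses rather than in one
large reclaim - the flush interval below is an arbitrary, made-up value:

/*
 * Sketch only: application-level pacing, in the spirit of VxFS write_throttle.
 * This is the inner loop of the benchmark above, with a periodic fsync().
 */
#define FLUSH_INTERVAL  4096    /* 4096 * 64KB = 256MB between flushes */

        unsigned long long blocks = 0;

        while (written < TARGET_SIZE) {
                ssize_t n = write(fd, buf, BLOCK_SIZE);
                if (n < 0) {
                        perror("write");
                        return (1);
                }
                written += n;
                if (++blocks % FLUSH_INTERVAL == 0)
                        (void) fsync(fd);   /* pay the flush cost in small doses */
        }
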

I guess I could modify c_max (the ARC's maximum size) in the kernel to get the
same kind of result, but that is not a supported tuning practice, so I do not
want to do it.

I am simply trying to determine where ZFS is different, where it is the same,
and how I can modify its default behaviours (or whether I ever will be able to).

Also, FYI, I am testing on Solaris 10 11/06 (all testing must be performed on
production versions of Solaris), but if there are changes in Nevada that would
show me different results, I would be interested in those as an aside.
 
 