John,
Where is the file you are writing to located? A single stream of
512KB writes will saturate most disk drives. Even if this is just
tmpfs on the server (as the name you picked implies), a single
thread might still be enough to saturate your Ethernet link.
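To put rough numbers on that (a back-of-envelope sketch, assuming a 1 GbE link; your actual interconnect may differ):

```python
# Rough check: how many 512 KB writes/sec would fill gigabit Ethernet?
link_bps = 1_000_000_000            # 1 GbE, ignoring protocol overhead
link_MBps = link_bps / 8 / 1e6      # 125 MB/s wire ceiling
io_size_KB = 512

ops_to_saturate = link_MBps * 1000 / io_size_KB
print(round(ops_to_saturate))       # ~244 writes/s fills the link
```

A single thread issuing a few hundred 512KB writes per second is well within reach, so adding threads past that point buys nothing.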
Reads, on the other hand, may be served from the local cache on
your client. A 160MB file fits in memory on almost any client, so
the multiple read threads are being served out of the local client
cache, while the writes must wait for the server to catch up. You
might try increasing the file size a great deal so it no longer
caches, or turning off read caching on the NFS client.
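For example, a stanza along these lines (a sketch; 16g is an arbitrary size chosen to exceed typical client RAM, so adjust to your machine):

```
CONFIG randomwrite512k_big {
        function = generic;
        personality = randomwrite;
        filesize = 16g;        # larger than client RAM, defeats caching
        iosize = 512k;
        nthreads = 8;
        directio = 1;
}
```

Alternatively, if I remember right, mounting the client with -o forcedirectio should bypass the client cache for both reads and writes.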
Drew
On Jun 9, 2009, at 2:19 PM, John Fitzgerald wrote:
I'm using filebench to measure some NFS/RDMA performance, but I
don't see randomwrite scaling at all; I get the same throughput
numbers whether I use 1, 8, or 16 streams. I'm using the same
profiles for randomread, and those seem to scale as expected.
Sample .prof stanzas:
DEFAULTS {
        runtime = 120;
        dir = /mnt_tmpfs_rdma;
        stats = /tmp;
        filesystem = nfs;
        description = "randomwrite tmpfs";
}

CONFIG randomwrite512k {
        function = generic;
        personality = randomwrite;
        filesize = 160m;
        iosize = 512k;
        nthreads = 8;
        directio = 1;
}
I vary the iosize between 4K and 1024K, so this is just a sample.
As I said, I get the same numbers no matter what nthreads is set
to. Weird. What am I missing?
Client and server are both running snv_115.
Thanks,
John.
--
This message posted from opensolaris.org
_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org