John,
Thanks for the further details about what you are doing. Yes, setting
directio is supposed to turn off read caching. What speeds are you
getting with reads?
What is the CPU utilization on your client machine with FileBench? Also,
just out of curiosity, which version of FileBench are you using?
Drew
On 06/09/09 15:39, John Fitzgerald wrote:
On 06/09/09 14:55, Andrew Wilson wrote:
John,
Where is the file you are writing to located? A single stream of
512KB writes will saturate most disk drives. Even if this is just tmpfs
on the server (as the name you picked implies), you might still be
saturating your Ethernet with just a single thread.
Hi Drew, thanks for the info.
The file is under /tmp on my server, and I'm using InfiniBand as the
underlying interconnect. With IB and NFS/RDMA we've seen throughput
scale up to 1000 MB/s for writes with another tool (vdbench) on tmpfs,
whereas I'm limited to 470 MB/s with filebench (with any number of threads).
Reads may be getting cached locally on your client. A 160MB file
will fit in almost any client's cache quite easily. Thus the multiple
threads are being served out of the local client cache, while the writes
must wait for the server to catch up. You might try increasing the
size of the file a great deal so it doesn't cache, or turning off read
caching on the NFS client.
The goal is not to use any client-side caching at all, hence the
"directio = 1" keyword; the mount is also using forcedirectio.
Output from the 'mount' command:
/mnt_tmpfs_rdma on sanib1-ibd0:/tmp/rdma
remote/read/write/setuid/devices/proto=rdma/forcedirectio/xattr/dev=5680003
on Mon Jun 8 15:08:47 2009
Doesn't turning on directio in the CONFIG statement disable the cache?
I will do some tinkering with larger files just to get a better handle
on what is occurring.
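For instance, something along these lines, with the filesize bumped well
past client memory (just a sketch I haven't run yet, and the 16g figure
is a guess; the right size depends on how much RAM the client has):

CONFIG randomwrite512k_big {
    function = generic;
    personality = randomwrite;
    filesize = 16g;
    iosize = 512k;
    nthreads = 8;
    directio = 1;
}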
Thanks,
John.
Drew
On Jun 9, 2009, at 2:19 PM, John Fitzgerald wrote:
I'm using filebench to measure some NFS/RDMA performance, but I
don't see randomwrite scaling at all; I get the same throughput
numbers whether I use 1, 8, or 16 streams. I'm using the same
profiles for randomreads, and they seem to scale as expected.
Sample .prof stanzas:
DEFAULTS {
    runtime = 120;
    dir = /mnt_tmpfs_rdma;
    stats = /tmp;
    filesystem = nfs;
    description = "randomwrite tmpfs";
}

CONFIG randomwrite512k {
    function = generic;
    personality = randomwrite;
    filesize = 160m;
    iosize = 512k;
    nthreads = 8;
    directio = 1;
}
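And the corresponding randomread stanza, which does scale (reproduced
from memory, so treat the details as approximate):

CONFIG randomread512k {
    function = generic;
    personality = randomread;
    filesize = 160m;
    iosize = 512k;
    nthreads = 8;
    directio = 1;
}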
I vary the iosize between 4K and 1024K, so these are just samples.
As I said, I get the same numbers no matter what nthreads is set
to. Weird. What am I missing?
Client and server are both running snv_115.
Thanks,
John.
--
This message posted from opensolaris.org
_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org