Hello Jeffrey,

Thursday, August 30, 2007, 9:53:53 PM, you wrote:
JWB> On Thu, 2007-08-30 at 13:07 -0700, eric kustarz wrote:
JWB> >> On Aug 30, 2007, at 12:33 PM, Jeffrey W. Baker wrote:
JWB> >> >
JWB> >> > Uh, whoops. As I freely admit this is my first encounter with
JWB> >> > opensolaris, I just built the software on the assumption that it would
JWB> >> > be 64-bit by default. But it looks like all my benchmarks were built
JWB> >> > 32-bit. Yow. I'd better redo them with -m64, eh?
JWB> >> >
JWB> >> > [time passes]
JWB> >> >
JWB> >> > Well, results are _substantially_ worse with bonnie++ recompiled at
JWB> >> > 64-bit. Way, way worse. 54MB/s linear reads, 23MB/s linear writes,
JWB> >> > 33MB/s mixed.
JWB> >>
JWB> >> Hmm, what are your parameters?

JWB> bonnie++ -g daemon -d /tank/bench/ -f

JWB> This becomes more interesting. The very slow numbers above were on an
JWB> aged (post-benchmark) filesystem. After destroying and recreating the
JWB> zpool, the numbers are similar to the originals (55/87/37). Does ZFS
JWB> really age that quickly? I think I need to do more investigating here.

JWB> >> >> For the randomio test, it looks like you used an io_size of 4KB. Are
JWB> >> >> those aligned? random? How big is the '/dev/sdb' file?
JWB> >> >
JWB> >> > Randomio does aligned reads and writes. I'm not sure what you mean
JWB> >> > by /dev/sdb? The file upon which randomio operates is 4GiB.
JWB> >>
JWB> >> Another thing to know about ZFS is that it has a variable block size
JWB> >> (that maxes out at 128KB). And since ZFS is COW, we can grow the
JWB> >> block size on demand. For instance, if you just create a small file,
JWB> >> say 1B, your block size is 512B. If you go over to 513B, we double
JWB> >> you to 1KB, etc.

JWB> # zfs set recordsize=2K tank/bench
JWB> # randomio bigfile 10 .25 .01 2048 60 1
JWB>   total |  read:         latency (ms)       | write:        latency (ms)
JWB>    iops |   iops   min    avg    max   sdev |  iops   min    avg    max   sdev
JWB> --------+-----------------------------------+----------------------------------
JWB>   463.9 |  346.8   0.0   21.6  761.9   33.7 | 117.1   0.0   21.3  883.9   33.5

JWB> Roughly the same as when the RS was 128K.
JWB> But, if I set the RS to 2K before creating bigfile:

You have to. If a large file was created with recordsize 128K, then all of its blocks will be 128K. Only newly written blocks will be 2K; however, I'm not sure whether a modified block in the same file will start to be 2K...

-- 
Best regards,
Robert Milkowski
mailto:[EMAIL PROTECTED]
http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
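A rough way to see why the recordsize has to be in effect when the file is created: a random 2 KB write to a file whose blocks are already 128 KB forces a copy-on-write of the whole 128 KB block. The toy calculation below (my own illustration, not from the benchmark or ZFS internals) shows the resulting write amplification:

```python
def write_amplification(io_size, block_size):
    """Ratio of bytes ZFS must rewrite (COW of a whole block) to bytes
    the application actually wrote, when the write is no larger than
    the file's existing block size. Toy model for illustration only."""
    return max(block_size, io_size) / io_size

# bigfile created under recordsize=128K: each 2 KB randomio write
# rewrites a full 128 KB block.
print(write_amplification(2048, 128 * 1024))  # 64.0

# bigfile created after `zfs set recordsize=2K`: blocks are 2 KB,
# so a 2 KB write rewrites exactly one block.
print(write_amplification(2048, 2048))        # 1.0
```

This is consistent with the observation above that setting recordsize=2K after bigfile already existed made roughly no difference.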