I've got a mail machine here that I built using ZFS boot/root. It's been having some major I/O performance problems, which I posted about once before... but that post seems to have disappeared.
Now I've managed to obtain another identical machine, and I've built it the same way as the original: Solaris 10 U6, fully patched as of 2009/10/06, using a mirrored disk via the PERC (LSI MegaRAID) controller.

The main problem seems to be ZFS. If I do the following on a UFS filesystem:

  # /usr/bin/time dd if=/dev/zero of=whee.bin bs=1024000 count=<x>

... then I get "real" times of the following:

  x      time
  128      35.4
  256    1:01.8
  512    2:19.8

It's all very linear and fairly decent. However, if I then destroy that filesystem and recreate it as ZFS (no special options or kernel variables set), performance degrades substantially. With the same dd, I get:

  x      time
  128    3:45.3
  256    6:52.7
  512   15:40.4

So basically a 6.5x loss across the board. I realize that a simple 'dd' is an extremely weak test, but real-world use on these machines shows similar problems: long delays logging in, and running a command that isn't cached can take 20-30 seconds (even something as simple as 'psrinfo -vp'). Ironically, the machine works just fine for simple email, because the files are small and very transient, so they can live entirely in memory. But more complex things, like keeping a local copy of our mailmaps, cripple the machine.

I'm about to rebuild the machine with the RAID controller in passthrough mode to see what that accomplishes. Most of the machines here are Linux and use the hardware RAID1, so I was/am hesitant to "break standard" that way.

Does anyone have any experience or suggestions for making ZFS boot+root work well on this machine?

-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
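P.S. In case anyone wants to reproduce the comparison, here's a rough sketch of the write-timing loop as a shell function. The helper name, target directories, and count list are all placeholders, and it times with date(1) instead of /usr/bin/time so it's easy to run anywhere (note: `date +%s` needs GNU date on older Solaris):

```shell
#!/bin/sh
# time_writes DIR "128 256 512": for each count, write that many ~1 MB
# blocks into DIR with dd and report wall-clock seconds per run.
# DIR is a placeholder -- point it at the UFS or ZFS filesystem under test.
time_writes() {
    dir=$1
    for x in $2; do
        start=$(date +%s)
        dd if=/dev/zero of="$dir/whee.bin" bs=1024000 count="$x" 2>/dev/null
        end=$(date +%s)
        echo "count=$x: $((end - start))s"
        rm -f "$dir/whee.bin"
    done
}

# Example (hypothetical mount points):
#   time_writes /ufs-mount "128 256 512"
#   time_writes /zfs-mount "128 256 512"
```

Run it once per filesystem and compare the per-count times; on the boxes above, the ZFS numbers came out roughly 6.5x the UFS ones.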