[zfs-discuss] Re: Testing of UFS, VxFS and ZFS

2007-04-19 Thread Tony Galway
To give you fine people an update: the skewed results shown earlier are due to Veritas's ability to take advantage of all the free memory available on my server. My test system has 32 GB of RAM and my test data file is 10 GB, so VxFS was able to cache the entire file in memory.
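One hedged way to take the page cache out of the comparison is to mount VxFS with its direct I/O options, so reads actually hit the disks; the volume and mount-point names below are hypothetical examples, not taken from the thread:

```shell
# Sketch: mount VxFS with direct I/O so the 10 GB test file is not
# served from the 32 GB of RAM (device and mount paths are examples)
mount -F vxfs -o mincache=direct,convosync=direct \
    /dev/vx/dsk/testdg/testvol /mnt/vxtest
```

An alternative, filesystem-neutral approach is simply to use a working set several times larger than physical memory so no filesystem can cache it all.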

[zfs-discuss] Re: Testing of UFS, VxFS and ZFS

2007-04-17 Thread Richard L. Hamilton
> # zfs create pool raidz d1 … d8

Surely you didn't create the ZFS pool on top of SVM metadevices? If so, that's not useful; the ZFS pool should be on top of raw devices. Also, because VxFS is extent-based (if I understand correctly), not unlike how MVS manages disk space I might add, _it ought_ …
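Richard's point can be sketched as follows. A raidz pool is normally built directly on whole raw disks, not on SVM metadevices; note also that pools are created with `zpool create`, while `zfs create` makes datasets inside an existing pool. The pool and device names here are illustrative, not the original poster's:

```shell
# Build the pool directly on raw disks (device names are examples)
zpool create testpool raidz \
    c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0

# Then create a filesystem (dataset) inside the pool
zfs create testpool/data
```

Layering the pool on SVM metadevices adds a second volume-management layer and hides the physical disks from ZFS, which defeats part of the point of raidz.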

Re: [zfs-discuss] Re: Testing of UFS, VxFS and ZFS

2007-04-17 Thread Torrey McMahon
Anton B. Rang wrote:
> Second, VDBench is great for testing raw block I/O devices. I think a tool that does file system testing will get you better data.
OTOH, shouldn't a tool that measures raw device performance be a reasonable way to reflect Oracle performance when Oracle is configured for raw devices?

[zfs-discuss] Re: Testing of UFS, VxFS and ZFS

2007-04-17 Thread Anton B. Rang
> Second, VDBench is great for testing raw block I/O devices.
> I think a tool that does file system testing will get you better data.
OTOH, shouldn't a tool that measures raw device performance be a reasonable way to reflect Oracle performance when Oracle is configured for raw devices? I don't know the curre…
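The raw-device versus filesystem distinction maps directly onto vdbench's two workload styles: `sd`/`wd` definitions drive raw block devices, while `fsd`/`fwd` definitions exercise the filesystem code path. A minimal parameter-file sketch, with hypothetical device and mount paths:

```
# Raw block device workload (sd/wd/rd) -- bypasses the filesystem
sd=sd1,lun=/dev/rdsk/c2t0d0s0,threads=8
wd=wd1,sd=sd1,xfersize=8k,rdpct=70,seekpct=100
rd=raw_run,wd=wd1,iorate=max,elapsed=60,interval=5

# Filesystem workload (fsd/fwd/rd) -- goes through the filesystem
fsd=fsd1,anchor=/mnt/test,depth=2,width=4,files=100,size=100m
fwd=fwd1,fsd=fsd1,operation=read,xfersize=8k,threads=8
rd=fs_run,fwd=fwd1,fwdrate=max,elapsed=60,interval=5,format=yes
```

Running both styles against the same storage would show how much the filesystem layer (caching, allocation, metadata) contributes to the numbers, which is exactly the gap the thread is debating.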

[zfs-discuss] Re: Testing of UFS, VxFS and ZFS

2007-04-16 Thread William D. Hathaway
Why are you using software-based RAID 5/RAID-Z for the tests? I didn't think this was a common setup in cases where file system performance was the primary consideration.