To give you fine people an update: it seems the skewed results shown earlier
are due to Veritas' ability to take advantage of all the free memory available
on my server. My test system has 32G of RAM, and my test data file is 10G.
Basically, Veritas was able to cache the entire 10G file in memory.
> # zpool create pool raidz d1 … d8
Surely you didn't create the zfs pool on top of SVM metadevices? If so,
that's not useful; the zfs pool should be on top of raw devices.
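For reference, a raidz pool would normally be built directly on whole disks.
A sketch, where the c-t-d names are placeholders for whatever the controller
actually enumerates:

  # zpool create testpool raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
      c1t4d0 c1t5d0 c1t6d0 c1t7d0

That way ZFS schedules I/O and handles redundancy itself, without a second
volume manager layered underneath.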
Also, because VxFS is extent based (if I understand correctly), not unlike how
MVS manages disk space, I might add, _it ought_ to do well on large,
contiguous allocations.
Anton B. Rang wrote:
> Second, VDBench is great for testing raw block i/o devices.
> I think a tool that does file system testing will get you
> better data.

OTOH, shouldn't a tool that measures raw device performance be a reasonable
predictor of Oracle performance when Oracle is configured for raw devices?
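For what it's worth, that kind of raw-device run is easy to describe in a
vdbench parameter file. A minimal sketch; the device path, 8k transfer size,
and 70/30 read/write mix are illustrative assumptions meant to roughly mimic
an OLTP block workload, not anyone's actual configuration:

  sd=sd1,lun=/dev/rdsk/c1t0d0s0
  wd=wd1,sd=sd1,xfersize=8k,rdpct=70,seekpct=100
  rd=run1,wd=wd1,iorate=max,elapsed=300,interval=5

Run it with ./vdbench -f parmfile; iorate=max pushes as hard as the device
allows, while elapsed and interval control run length and reporting.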
Why are you using software-based RAID 5/RAIDZ for the tests? I didn't think
this was a common setup in cases where file system performance was the primary
consideration.
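When performance rather than capacity is the priority, the layout usually
suggested is striped mirrors. A sketch with placeholder device names:

  # zpool create fastpool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
      mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t7d0

Mirrored vdevs generally give better small random I/O than raidz, since reads
can be served from either side and writes avoid parity work.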