On 8/8/06, eric kustarz <[EMAIL PROTECTED]> wrote:
> Leon Koll wrote:
> > I performed a SPEC SFS97 benchmark on Solaris 10u2/Sparc with 4 64GB
> > LUNs, connected via FC SAN.
> > The filesystems that were created on the LUNs: UFS, VxFS, ZFS.
> > Unfortunately the ZFS test couldn't complete because the box hung
> > under a very moderate load (3000 IOPS).
> > Additional tests were done using UFS and VxFS that were built on ZFS
> > raw devices (zvols).
> > Results can be seen here:
> > http://napobo3.blogspot.com/2006/08/spec-sfs-bencmark-of-zfsufsvxfs.html
>
> hiya leon,
> Out of curiosity, how was the setup for each filesystem type done?
> I wasn't sure what "4 ZFS'es" in "The bad news that the test on 4 ZFS'es
> couldn't run at all" meant... so something like 'zpool status' would be
> great.
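Hi Eric,
first, on "UFS and VxFS built on ZFS raw devices": that means a zvol
with the filesystem created on top of its raw device. A minimal sketch
of that kind of setup, with illustrative names and sizes rather than
the exact benchmark config:

    zfs create -V 60g pool1/vol1              # create a 60GB ZFS volume (zvol)
    newfs /dev/zvol/rdsk/pool1/vol1           # build UFS on the raw zvol device
    mount /dev/zvol/dsk/pool1/vol1 /mnt/ufs1  # mount via the block device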
And here is the zpool status you asked for:
[EMAIL PROTECTED] ~ # zpool status
  pool: pool1
 state: ONLINE
 scrub: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        pool1                      ONLINE       0     0     0
          c4t001738010140000Bd0    ONLINE       0     0     0

errors: No known data errors

  pool: pool2
 state: ONLINE
 scrub: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        pool2                      ONLINE       0     0     0
          c4t001738010140000Cd0    ONLINE       0     0     0

errors: No known data errors

  pool: pool3
 state: ONLINE
 scrub: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        pool3                      ONLINE       0     0     0
          c4t001738010140001Cd0    ONLINE       0     0     0

errors: No known data errors

  pool: pool4
 state: ONLINE
 scrub: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        pool4                      ONLINE       0     0     0
          c4t0017380101400012d0    ONLINE       0     0     0

errors: No known data errors
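Each pool sits on a single LUN, so the setup amounts to four
single-device pools, created with something like:

    zpool create pool1 c4t001738010140000Bd0
    zpool create pool2 c4t001738010140000Cd0
    zpool create pool3 c4t001738010140001Cd0
    zpool create pool4 c4t0017380101400012d0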
> Do you know what your limiting factor was for ZFS (CPU, memory, I/O...)?
Thanks to George Wilson, who pointed me to the fact that memory was
fully consumed.
I removed the line "set ncsize = 0x100000" from /etc/system, and now
the host no longer hangs during the test.
But performance is still an issue.
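In case it helps anyone hitting the same thing: that /etc/system line
sizes the DNLC (directory name lookup cache) at 0x100000 = 1,048,576
entries, which evidently left ZFS competing for kernel memory. The
value actually in effect and the cache behavior can be checked with
standard Solaris tools:

    # print the ncsize value the running kernel is using
    echo "ncsize/D" | mdb -k

    # DNLC hit/miss statistics, sampled every 5 seconds
    kstat -n dnlcstats 5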
-- Leon