I can achieve 140MBps to individual disks until I hit a 1GBps system ceiling,
which I suspect may be all that the 4x SAS HBA connection on a 3Gbps SAS
expander can handle (just a guess).
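(The guess is easy to sanity-check with back-of-the-envelope numbers. A rough sketch, assuming the HBA-to-expander link is a 4-lane wide port at 3Gbps per lane with 8b/10b encoding; assumptions, not measurements:)

# Rough sanity check of the suspected ceiling: a 4-lane (x4) SAS wide port
# at 3Gbit/s per lane with 8b/10b encoding (10 bits on the wire per data
# byte). These are assumed figures, not measured values.

lanes = 4
line_rate_gbps = 3.0          # SAS-1 line rate per lane
encoding_efficiency = 8 / 10  # 8b/10b line coding

per_lane_mb_s = line_rate_gbps * 1000 * encoding_efficiency / 8
wide_port_mb_s = per_lane_mb_s * lanes

print(f"per lane : ~{per_lane_mb_s:.0f} MB/s")    # ~300 MB/s
print(f"x4 port  : ~{wide_port_mb_s:.0f} MB/s")   # ~1200 MB/s of payload
# Protocol and framing overhead eat into that, so a plateau around 1GBps
# is at least consistent with the x4-link guess.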
Anyway, with ZFS or SVM I can't get much beyond single-disk performance in
total (if that), I am thinking.
I wonder if this has anything to do with it:
http://opensolaris.org/jive/thread.jspa?messageID=33739
Anyway, I've already blown away my OSOL install to test Linux performance - so
I can't test ZFS at the moment.
--
This message posted from opensolaris.org
Horace -
I've run more tests and come up with basically the exact same numbers as you.
On OpenSolaris I get about the same from my drives (140MBps) and hit a 1GBps
(almost exactly) top-end system bottleneck when pushing data to all drives.
However, if I give ZFS more than one drive (mirror, s...
I'm about to do some testing with that dtrace script..
However, in the meantime - I've disabled primarycache (set primarycache=none)
since I noticed that it was easily caching /dev/zero and I wanted to do some
tests within the OS rather than over FC.
I am getting the same results through dd.
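(Not from the original poster; just a minimal sketch of a local throughput test that sidesteps the two things skewing the dd-from-/dev/zero numbers, namely trivially compressible data and reads served from the ARC. The path is hypothetical; point it at a dataset on the pool under test.)

import os, time

TEST_FILE = "/tank/test/throughput.bin"   # hypothetical path on the pool
SIZE_MB = 4096                            # well above what the ARC would hold
BLOCK = 1024 * 1024

buf = os.urandom(BLOCK)                   # incompressible test data

# Timed write, flushed to stable storage at the end.
t0 = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
print(f"write: {SIZE_MB / (time.time() - t0):.0f} MB/s")

# Timed read. With primarycache=none (or a file much larger than RAM)
# this should hit the disks rather than the ARC.
t0 = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(BLOCK):
        pass
print(f"read : {SIZE_MB / (time.time() - t0):.0f} MB/s")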
Vi
I had the same problem after disabling multipath, which changed some of my
device names. I performed a replace -f, then noticed that the pool was
resilvering. Once it finished, it displayed the new device name, if I recall
correctly.
I could be wrong, but that's how I remember it.
> You should look at your disk IO patterns which will
> likely lead you to find unset IO queues in sd.conf.
> Look at this
> http://blogs.sun.com/chrisg/entry/latency_bubble_in_your_io
> as a place to start.
Any idea why I would get this message from the dtrace script?
(I'm new to dtrace / open...
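(Not the dtrace script from Chris Gerhard's post, and not an answer to the error above; just a cruder sketch that watches iostat -xn for devices whose wait queue is backing up, which is the symptom the sd.conf queue-depth tuning is aimed at. It assumes the Solaris iostat -xn column layout.)

import subprocess

# Expected columns from `iostat -xn <interval>` on Solaris:
#   r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
proc = subprocess.Popen(["iostat", "-xn", "5"],
                        stdout=subprocess.PIPE, text=True)

for line in proc.stdout:
    fields = line.split()
    if len(fields) != 11 or fields[0] == "r/s":
        continue                       # skip banner and header lines
    try:
        wait = float(fields[4])        # I/Os queued in the driver
        actv = float(fields[5])        # I/Os active on the device
        pct_w = float(fields[8])       # % of time the wait queue was non-empty
    except ValueError:
        continue
    device = fields[10]
    if wait > 1.0 or pct_w > 5:
        print(f"{device}: wait={wait} actv={actv} %w={pct_w} "
              f"<- I/O backing up behind this device")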
Good idea.
I will keep this test in mind. I'd do it immediately, except that it would be
somewhat difficult to connect power to the drives given the design of my
chassis; I'm sure I can figure something out if it comes to it...
I believe I'm in a very similar situation to yours.
Have you figured something out?
Hi Robert -
I tried all of your suggestions but unfortunately my performance did not
improve.
I tested single disk performance and I get 120-140MBps read/write to a single
disk. As soon as I add an additional disk (mirror, stripe, raidz), my
performance drops significantly.
I'm using 8Gbit FC...
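(One way to take ZFS out of the picture, sketched below with placeholder device names: read from several raw disks at once and see whether the aggregate still scales. If concurrent raw reads also collapse, the bottleneck is below ZFS, in the HBA, expander or FC path; if they scale, it points back at the pool.)

import threading, time

DEVICES = ["/dev/rdsk/c0t0d0s0", "/dev/rdsk/c0t1d0s0"]  # placeholders
BLOCK = 1024 * 1024
SECONDS = 30

totals = {}

def reader(dev):
    nbytes = 0
    deadline = time.time() + SECONDS
    with open(dev, "rb", buffering=0) as f:   # unbuffered raw-device reads
        while time.time() < deadline:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            nbytes += len(chunk)
    totals[dev] = nbytes

threads = [threading.Thread(target=reader, args=(d,)) for d in DEVICES]
for t in threads:
    t.start()
for t in threads:
    t.join()

for dev, nbytes in totals.items():
    print(f"{dev}: {nbytes / SECONDS / 1e6:.0f} MB/s")
print(f"aggregate: {sum(totals.values()) / SECONDS / 1e6:.0f} MB/s")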
Why not?
> This sounds very similar to another post last month.
> http://opensolaris.org/jive/thread.jspa?messageID=487453
>
> The trouble appears to be below ZFS, so you might try
> asking on the
> storage-discuss forum.
> -- richard
> On Jul 28, 2010, at 5:23 PM, Ka
> Update to my own post. Further tests more
> consistently resulted in closer to 150MB/s.
>
> When I took one disk offline, it was just shy of
> 100MB/s on the single disk. There is both an obvious
> improvement with the mirror, and a trade-off (perhaps
> the latter is controller related?).
>
>
Sorry - I said the two iostats were run at the same time; actually the second
was run after the first, during the same file copy operation.
Hi Eric - thanks for your reply.
Yes, zpool iostat -v
I've re-configured the setup into two pools for a test:
1st pool: 8 disk stripe vdev
2nd pool: 8 disk stripe vdev
The SSDs are currently not in the pool since I am not even reaching what the
spinning rust is capable of - I believe I have a de...
Hi r2ch
The operations column shows about 370 read operations per spindle (between 400
and 900 for writes).
How should I be measuring iops?
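(Some rough arithmetic, not output from the system above: combining the ~370 read ops per spindle here with the 2-9MB/s per-disk rates reported elsewhere in the thread shows why the ops count alone doesn't say much without the average transfer size.)

ops_per_sec = 370                     # read operations per spindle (above)

for mb_per_sec in (2, 9):             # per-disk read bandwidth seen in iostat
    avg_io_kb = mb_per_sec * 1024 / ops_per_sec
    print(f"{mb_per_sec} MB/s at {ops_per_sec} ops/s -> ~{avg_io_kb:.0f} KB per op")

# Roughly 6-25 KB per operation: nowhere near 128KB sequential records,
# so the disks are seeking for small I/Os rather than streaming.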
I appear to be getting between 2 and 9MB/s reads from individual disks in my
zpool, as shown in iostat -v.
I expect upwards of 100MBps per disk, or at least aggregate performance on par
with the number of disks that I have.
My configuration is as follows:
Two Quad-core 5520 processors
48GB ECC/REG RAM...