eric kustarz writes:
>
> >ES> Second, you may be able to get more performance from the ZFS filesystem
> >ES> on the HW lun by tweaking the max pending # of requests. One thing
> >ES> we've found is that ZFS currently has a hardcoded limit of how many
> >ES> outstanding requests to send to the underlying vdev (35).
Hello Torrey,
Wednesday, August 9, 2006, 5:39:54 AM, you wrote:
TM> I read through the entire thread, I think, and have some comments.
TM> * There are still some "granny smith" to "Macintosh" comparisons
TM> going on. Different OS revs, it looks like different server types,
TM> a
Hello Torrey,
Wednesday, August 9, 2006, 4:59:08 AM, you wrote:
TM> Robert Milkowski wrote:
>> Hello Richard,
>>
>> Monday, August 7, 2006, 6:54:37 PM, you wrote:
>>
>> RE> Hi Robert, thanks for the data.
>> RE> Please clarify one thing for me.
>> RE> In the case of the HW raid, was there just one LUN? Or was it 12 LUNs?
I read through the entire thread, I think, and have some comments.
* There are still some "granny smith" to "Macintosh" comparisons
going on. Different OS revs, it looks like different server types,
and I can't tell about the HBAs, links or the LUNs being tested.
* Before you test
Robert Milkowski wrote:
Hello Richard,
Monday, August 7, 2006, 6:54:37 PM, you wrote:
RE> Hi Robert, thanks for the data.
RE> Please clarify one thing for me.
RE> In the case of the HW raid, was there just one LUN? Or was it 12 LUNs?
Just one LUN, which was built on the 3510 from 12 disks in RAID-1(0).
Hello Richard,
Monday, August 7, 2006, 6:54:37 PM, you wrote:
RE> Hi Robert, thanks for the data.
RE> Please clarify one thing for me.
RE> In the case of the HW raid, was there just one LUN? Or was it 12 LUNs?
Just one LUN, which was built on the 3510 from 12 disks in RAID-1(0).
--
Best regards,
ES> Second, you may be able to get more performance from the ZFS filesystem
ES> on the HW lun by tweaking the max pending # of requests. One thing
ES> we've found is that ZFS currently has a hardcoded limit of how many
ES> outstanding requests to send to the underlying vdev (35). This works
ES
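On bits where that per-vdev ceiling is exposed as the zfs_vdev_max_pending kernel tunable (an assumption; on some builds it really is hardcoded), raising it for a quick experiment looks roughly like this, with 64 as an arbitrary example value:

    # one-off change on the live kernel (0t prefix = decimal), for testing only
    echo "zfs_vdev_max_pending/W0t64" | mdb -kw

    # or make it persistent across reboots via /etc/system
    set zfs:zfs_vdev_max_pending = 64

A single HW-RAID LUN hiding 12 spindles is exactly the case where a deeper queue can matter, since those 35 outstanding requests end up spread across every disk behind the LUN.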
Hi Robert, thanks for the data.
Please clarify one thing for me.
In the case of the HW raid, was there just one LUN? Or was it 12 LUNs?
-- richard
Robert Milkowski wrote:
Hi.
3510 with two HW controllers, configured as one LUN in RAID-10 using 12 disks in
the head unit (FC-AL 73GB 15K disks). Optimization set to random, stripe size 32KB.
On Mon, Aug 07, 2006 at 06:16:12PM +0200, Robert Milkowski wrote:
>
> ES> Second, you may be able to get more performance from the ZFS filesystem
> ES> on the HW lun by tweaking the max pending # of requests. One thing
> ES> we've found is that ZFS currently has a hardcoded limit of how many
> ES
[zfs-discuss] 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID
Hi.
3510 with two HW controllers, configured as one LUN in RAID-10 using 12 disks in
the head unit (FC-AL 73GB 15K disks). Optimization set to random, stripe size 32KB.
Connected to a v440 using two links; however, in the tests only one link was used (no
MPxIO).
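For the JBOD/software-RAID side of the comparison, the ZFS equivalent of that RAID-10 layout is a pool striped across six two-way mirrors, built roughly like this (pool name and device names are placeholders, not the ones from the actual test):

    zpool create tank \
        mirror c2t0d0 c2t1d0 \
        mirror c2t2d0 c2t3d0 \
        mirror c2t4d0 c2t5d0 \
        mirror c2t6d0 c2t7d0 \
        mirror c2t8d0 c2t9d0 \
        mirror c2t10d0 c2t11d0

ZFS stripes writes across the six mirror vdevs on its own, so no separate stripe layer is needed on top.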
Hello Eric,
Monday, August 7, 2006, 5:53:38 PM, you wrote:
ES> Cool stuff, Robert. It'd be interesting to see some RAID-Z (single- and
ES> double-parity) benchmarks as well, but understandably this takes time
ES> ;-)
I intend to test raid-z. Not sure there'll be enough time for raidz2.
ES> Th
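For the runs Robert plans, the single- and double-parity pools over the same 12 JBOD disks would be created along these lines (again with placeholder pool and device names, on bits recent enough to support raidz2):

    # single parity
    zpool create tankz raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
        c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0

    # double parity
    zpool create tankz2 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
        c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0

A single 12-wide group is the simplest apples-to-apples layout; splitting into two 6-disk groups would trade some capacity for more small-read IOPS.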
Cool stuff, Robert. It'd be interesting to see some RAID-Z (single- and
double-parity) benchmarks as well, but understandably this takes time
;-)
The first thing to note is that the current Nevada bits have a number of
performance fixes not in S10u2, so there's going to be a natural bias
when comparing the two.
Hi.
3510 with two HW controllers, configured as one LUN in RAID-10 using 12 disks in
the head unit (FC-AL 73GB 15K disks). Optimization set to random, stripe size 32KB.
Connected to a v440 using two links; however, in the tests only one link was used (no
MPxIO).
I used filebench and its varmail test with default settings.
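Launched from the interactive filebench shell, that kind of varmail run looks roughly like this (the mountpoint is a placeholder; everything else is left at the workload's defaults):

    filebench> load varmail
    filebench> set $dir=/tank/testfs
    filebench> run 60

The varmail personality simulates a /var/mail style load: many small files with synchronous creates, appends, reads, and deletes, so the results are sensitive to how the underlying storage handles small synchronous I/O.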