I have some old dd numbers from when I was experimenting to find a
UFS/gstripe combination that wasn't horrifyingly slow to read. I was
not adjusting the filesystem block size at that point, and not until I
moved the UFS2 block size to the maximum did the initial results seem
worth resuming the iozone tests. Raid
HW stripe-wi
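(To make the iozone runs concrete, a minimal sketch: the record size,
mount point, and file name are assumptions, and the 32g file size is
simply twice the 16GB of RAM in these boxes so caching can't flatter
the numbers:

# iozone -i 0 -i 1 -r 128k -s 32g -f /mnt/test/iozone.tmp

-i 0 selects the write/rewrite test and -i 1 the read/reread test.)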
Hi,
I posted earlier about some results with this same system using UFS2.
Now I'm trying to test ZFS. This is a Dell PE2950 with two Perc6
controllers and 4 MD1000 disk shelves with 750GB drives, 16GB RAM, and
dual quad-core Xeons. I recompiled our kernel to use the ULE scheduler
instead of the default.
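(In case it helps anyone reproduce this, the simplest pool over the
Perc6 volumes is something like

# zpool create tank mfid0 mfid2

where the pool name and the mfid device numbers are assumptions taken
from the gstripe test below; ZFS stripes dynamically across whatever
top-level vdevs you hand it.)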
Hi Aristedes,
We are/were testing FreeBSD on a Dell PE2950 with a Myricom 10Gb
PCI-Express copper CX card. The driver seems mature. In out-of-the-box
tests, I only saw about 3Gbps from iperf (testing against a linux
system... and maybe there are other issues with our environment/that
system
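(For anyone wanting to compare numbers, the iperf runs were of the
plain sort; the 30-second duration and 4 parallel streams here are
assumptions, since a single TCP stream often won't fill a 10Gb link:

# iperf -s                          (on the linux receiver)
# iperf -c <receiver> -t 30 -P 4    (on the FreeBSD sender)

with <receiver> standing in for the peer's address.)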
thanks,
Ben
Ivan Voras wrote:
Benjeman J. Meekhof wrote:
My baseline was this: on linux 2.6.20 we're doing 800MB/s write and
even more on read with this configuration: two raid6 volumes striped
into a raid0 volume using linux software raid, XFS filesystem. Each
raid6 is
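(For concreteness, that linux-side setup amounts to roughly the
following, where the md device names are assumptions and /dev/md0 and
/dev/md1 stand for the two existing raid6 arrays:

# mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
# mkfs.xfs /dev/md2

The raid0 chunk size and XFS options are left at their defaults here.)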
I should clarify that the first test mentioned below used the same
gstripe setup as the latter one but did not specify any newfs block
size:

# gstripe label -v -s 128k test /dev/mfid0 /dev/mfid2
# newfs -U /dev/stripe/test
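(The runs that did set a block size would have used something like

# newfs -U -b 65536 -f 8192 /dev/stripe/test

65536 is the maximum block size newfs accepts, versus the 16k default
the plain newfs -U above gives you; the 8192 fragment size is an
assumption following the usual 8:1 block-to-fragment ratio.)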
sorry,
Ben
Hello,
I think this might be useful information, and am also hoping for a
little input.
We've been doing some FreeBSD benchmarking on Dell PE2950 systems with
Perc6 controllers (dual quad-core Xeons, 16GB RAM, Perc6 = LSI card
with the mfi driver, 7.0-RELEASE). There are two controllers in each
system, and eac