- Original Message -
From: "Petr Janda"
Hi guys,
Just want to share these pgbench results done by the DragonFlyBSD folks, and would
like some input on why these numbers look so bad and what can be done to
improve the performance (i.e. kernel tunables etc.).
http://lists.dragonflybsd.org/piperma
First off, avoid using mfi; it's a RAID card which adds an extra layer
you don't want, even in JBOD mode.
What's the mfi max_cmds set to? If you haven't already, set it to -1 in
/boot/loader.conf (hw.mfi.max_cmds=-1), which means controller max.
Next, 4GB on a machine: avoid it if possible, that's what's disablin
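As an illustration (not from the original thread), a minimal /boot/loader.conf sketch of the tunable mentioned above, assuming hw.mfi.max_cmds is present in your mfi(4) driver version:
# /boot/loader.conf
# -1 = let the controller use its own maximum command count
hw.mfi.max_cmds="-1"
Loader tunables take effect on the next reboot; the value can usually be checked afterwards with sysctl hw.mfi.max_cmds.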
- Original Message -
From: "Tom Evans"
I think that a good SA will at least consider how drives are arranged.
We don't just slap ZFS on a single disk and expect magic to happen, we
consider how write-heavy a system will be and consider a dedicated
ZIL, we consider what proportion of fi
- Original Message -
From: "Michael Larabel"
I was the one that carried out the testing and know that it was on the
same system.
All of the testing, including the system tables, is fully automated.
Under FreeBSD sometimes the parsing of some component strings isn't as
nice as Linux
- Original Message -
From: "Urmas Lett"
On 10/18/2011 3:36 PM, Steven Hartland wrote:
What happens if you either:
1. disable HT in the bios
Intel says i5-2400 has no HT:
Processor Number i5-2400
# of Cores 4
# of Threads 4
and BIOS has no HT disable knob
Ahh yes
What happens if you either:
1. disable HT in the bios
2. limit the threads to 4?
Regards
Steve
- Original Message -
From: "Urmas Lett"
To: "Ivan Klymenko"
Cc:
Sent: Tuesday, October 18, 2011 1:12 PM
Subject: Re: ffmpeg & ULE
On 10/18/2011 2:30 PM, Ivan Klymenko wrote:
prob
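Not part of the original thread, but a minimal sketch of the second suggestion (limiting the encode to the four cores), with hypothetical input/output file names:
# cap ffmpeg's worker threads at the core count
ffmpeg -threads 4 -i input.avi output.mp4
# alternatively, pin the whole process to four cores with cpuset(1)
cpuset -l 0-3 ffmpeg -i input.avi output.mp4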
As noted in the FreeBSD TCP tuning and performance thread here:
http://lists.freebsd.org/pipermail/freebsd-performance/2009-December/003909.html
There seems to be a significant performance drop when using 8.0 vs 7.0;
after digging around, this seems to be caused by the use of increased
kern.ipc.ma
From: owner-freebsd-performa...@freebsd.org
[mailto:owner-freebsd-performa...@freebsd.org] On Behalf Of Steven Hartland
Sent: Thursday, 10 December 2009 15:20
To: Noisex; freebsd-performance@freebsd.org
Subject: Re: FreeBSD TCP tuning and performance
What app are you using there and is it setti
What app are you using there and is it setting the send / receive buffers
correctly?
- Original Message -
From: "Noisex"
To:
Sent: Monday, December 07, 2009 12:41 PM
Subject: FreeBSD TCP tuning and performance
Hi! I have a problem with TCP performance on FBSD boxes with 1Gbps net i
2009 12:44 PM
Subject: Re: Comparison of FreeBSD/Linux TCP Throughput performance
Steven Hartland wrote:
Try with something like this, which is the standard set we use on our
file serving machines.
net.inet.tcp.inflight.enable=0
net.inet.tcp.sendspace=65536
kern.ipc.maxsockbu
Try with something like this, which is the standard set we use on our
file serving machines.
net.inet.tcp.inflight.enable=0
net.inet.tcp.sendspace=65536
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
Regards
Steve
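For reference (not in the original message), the usual way to make that set persistent is /etc/sysctl.conf, using the same names and values as above:
# /etc/sysctl.conf
net.inet.tcp.inflight.enable=0
net.inet.tcp.sendspace=65536
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
They can also be applied immediately without a reboot, e.g. sysctl kern.ipc.maxsockbuf=16777216.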
- Original Message -
Just noticed the following posted on phoronix:
http://www.phoronix.com/scan.php?page=article&item=freebsd8_ubuntu910&num=1
Comments?
Regards
Steve
http://www.phoronix.com/scan.php?page=article&item=os_threeway_2008&num=1
Was interesting until I saw this:-
"However, it's important to reiterate that all three operating systems were left in their stock configurations and that no
additional tweaking had occurred."
I kernel debugging stuff s
- Original Message -
From: "Steven Hartland"
- Original Message -
From: "Robert Watson" <[EMAIL PROTECTED]>
It looks like the attachment got lost on the way through the mailing list.
I think the first starting point is: what sort of stall is this? Is it, for
- Original Message -
From: "Robert Watson" <[EMAIL PROTECTED]>
It looks like the attachment got lost on the way through the mailing list.
I think the first starting point is: what sort of stall is this? Is it, for
example, all network communication stalling, all disk I/O stalling, or
We've been suffering on our stats box for some time now,
whereby the machine will just stall for several seconds,
preventing everything from tab completion to vi newfile.txt.
I was hoping an upgrade to 7.0 and ULE may help the situation
but unfortunately it hasn't.
I've attached both dmesg and ou
- Original Message -
From: "Brett Bump" <[EMAIL PROTECTED]>
I would call 120 processes with a load average of 0.03 and 99.9 idle
with 10-20 sendmail processes and 30 apache jobs nothing to write home
about. But when that jumps to 250 processes, a load average of 30 with
50% idle (5-10 se
- Original Message -
From: "Steven Hartland" <[EMAIL PROTECTED]>
- Original Message -
From: "Eric Anderson" <[EMAIL PROTECTED]>
Wait - if it returns EAGAIN for a while, then look at that code above.
It will hold the sysctl lock for some indef
- Original Message -
From: "Eric Anderson" <[EMAIL PROTECTED]>
Wait - if it returns EAGAIN for a while, then look at that code above.
It will hold the sysctl lock for some indefinite amount of time. Maybe
it should look like this instead:
do {
SYSCTL_LOCK();
req.oldi
- Original Message -
From: "Ivan Voras" <[EMAIL PROTECTED]>
...
geom debugging I get:-
Feb 1 06:04:45 geomtest kernel: g_post_event_x(0x802394c0,
0xff00010e6100, 2, 0)
Feb 1 06:04:45 geomtest kernel: ref 0xff00010e6100
Feb 1 06:04:45 geomtest kernel: g_post_event_x(0xf
- Original Message -
From: "Eric Anderson" <[EMAIL PROTECTED]>
I saw this once before, a long time back, and every time I went through a debugging session, it came to some kind of lock on the
sysctl tree with regards to the geom info (maybe the XML kind of tree dump or something). I
The plot thickens. This stall is not just related to newfs; you have to
have gstat running as well. If I do the newfs without gstat running then
no stall occurs. As soon as I'm running gstat while doing the newfs,
everything locks as described.
Running truss on gstat shows the issue / cause
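A minimal sketch of that truss step (output path hypothetical):
# capture gstat's system calls while reproducing the stall, then Ctrl-C
truss -o /tmp/gstat.truss gstat
# look for calls that block or keep returning EAGAIN, as discussed elsewhere in the thread
grep -c EAGAIN /tmp/gstat.truss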
- Original Message -
From: "Dieter" <[EMAIL PROTECTED]>
What *exactly* do you mean by
machine still locks up with no activity for anywhere from 20 to 30 seconds.
Is there disk activity? (e.g. activity light(s) flashing if you have them)
Can't tell if there is disk activity; it's in a
- Original Message -
From: "Ivan Voras" <[EMAIL PROTECTED]>
The machine is running with ULE on 7.0 as mentioned, using an Areca 1220
controller over 8 disks in RAID 6 + Hotspare.
I'd suggest you first try to reproduce the stall without ULE, while
keeping all other parameters exactly the
- Original Message -
From: "Ivan Voras" <[EMAIL PROTECTED]>
Steven Hartland wrote:
The machine is running with ULE on 7.0 as mentioned, using an Areca 1220
controller over 8 disks in RAID 6 + Hotspare.
I'd suggest you first try to reproduce the stall without U
I'm just in the midst of setting up a new machine using 7.0-PRERELEASE,
and while running newfs to init the data partitions the entire machine
stalled for a good 20 seconds when processing a 500GB partition.
I had a number of windows open at the time including:-
1. gstat
2. top showing IO inc syste
Some interesting comments here:
http://www.anandtech.com/IT/showdoc.aspx?i=3162&p=10
Regards
Steve
Wasn't Jack Vogel (Intel?) only talking the other day about
committing a new 10Gb Intel driver?
The "New driver coming soon" thread on current / net.
Steve
You might want to try setting:
net.inet.tcp.inflight.enable=0
Just changing this on our FreeBSD 6.2 boxes enabled them to achieve
full line rate with ftp / proftpd transfers.
Steve
- Original Message -
From: "security" <[EMAIL PROTECTED]>
Switch is the Netgear GS105 (5 port, supp
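For completeness (not part of the original message), the setting can be checked and flipped at runtime before re-testing the transfer:
# show the current value
sysctl net.inet.tcp.inflight.enable
# disable TCP inflight (bandwidth-delay product) limiting
sysctl net.inet.tcp.inflight.enable=0
Add the same line to /etc/sysctl.conf to keep it across reboots.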
- Original Message -
From: "Randy Schultz" <[EMAIL PROTECTED]>
On Tue, 15 May 2007, Kevin Kobb spaketh thusly:
-}These reports on poor performance using mpt seem to be on SATA
-}drives. Has anybody been seeing this using SAS drives?
-}
-}We are testing Dell PE840s with hot swap SAS dri
- Original Message -
From: "Volodymyr Kostyrko" <[EMAIL PROTECTED]>
I'll be very thankful if someone can give me any ideas :)
Try looking at hard drive usage.
Totally idle during the test (all in cache)
- Original Message -
From: "Cheffo" <[EMAIL PROTECTED]>
CPU: Intel(R) Xeon(R) CPU 5130 @ 2.00GHz (2002.99-MHz K8-class CPU)
avail memory = 8265285632 (7882 MB)
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
da0 at arcmsr0 bus 0 target 0 lun 0
da0: Fixed Direct Access SCS
Just a quick note to let everyone know the outstanding performance
we achieved using 6.2-RELEASE on bge fibre gig via an Extreme Black
Diamond and base ftp + proftpd.
When transferring a single ISO image via the above setup we see
92MB/s.
Both machines were AMD Opteron based with HighPoint 1820
Andrew Hammond wrote:
Performance is a pretty weak reason to upgrade, unless of course you
have a performance problem. The one thing that will really push me to
upgrade is bug fixes to stuff that I use where the risk of exposure to
the bug outweighs the risk and cost of upgrade.
This may be the
Francisco Reyes wrote:
What combination of FreeBSD+Mysql will have multiple threads run by
different CPUs?
In the few SMP FreeBSD + Mysql setups (mysql 4.X) that I have at work
I only see mysql in one cpu as reported by top.
We tested FreeBSD 6.2 and MySQL 5.0. I'd say the main requirement will
Francisco Reyes wrote:
> What combination of FreeBSD+Mysql will have multiple threads run by
> different CPUs?
>
> In the few SMP FreeBSD + Mysql setups (mysql 4.X) that I have at work
> I only see mysql in one cpu as reported by top.
I just did the tests highlighted on the thread:
Progress on sc
- Original Message -
From: "Robert Watson" <[EMAIL PROTECTED]>
I understand that Kris is preparing a summary to post to the lists in the next
couple of days. The thrust of the work has been an investigation of MySQL on
an 8-core system, and in particular, how to improve FreeBSD scalab
- Original Message -
From: "Robert Watson" <[EMAIL PROTECTED]>
As part of Kris and Jeff's recent work on improving MySQL scalability on
FreeBSD
Are there any results / info on what's been done that we can look at?
Steve
I'm looking at new machines for high-access forums / DB
and wonder if anyone has any experience with how well
FreeBSD, specifically 6.2, scales on dual quad-core Intels.
We have some dual dual-cores here, but I'm considering
the quad-core upgrade and am a little concerned that
this may start to be
Mike Tancsa wrote:
I cvsup'd to todays kernel and re-ran some of the tests, controlling
for CPU defs in the kernel. Posted at
http://www.tancsa.com/blast.html
Statistically, I think the results are too close to say they are
different.
What's wrong with that web page? The display is totally b
Steve Peterson wrote:
I guess the fundamental question is this -- if I have a 4 disk
subsystem that supports an aggregate ~100MB/sec transfer raw to the
underlying disks, is it reasonable to expect a ~5MB/sec transfer rate
for a RAID5 hosted on that subsystem -- a 95% overhead.
Absolutely not,
Julian Elischer wrote:
are there any patches that take the gettimeofday() calls and replace
them with something that is cheap,
such as only doing every 10th one and just returning the last value
+ 1 uSec for the other ones...
a ktrace of Mysql shows a LOT of gettimeofday() calls.
Yes there ar
- Original Message -
From: "David O'Brien"
On Thu, Apr 27, 2006 at 12:11:05PM +0100, Steven Hartland wrote:
Getting off topic now, but I'd submit to you that 1207-pin vs 940-pin
is setting up for the access requirements of quad core, something that
AM2 is not go
- Original Message -
From: "Mike Jakubik" <[EMAIL PROTECTED]>
Just look around the list at the continuous problems people have with
that and the nve card. I would never feel safe putting these in production.
I would agree with nve, but I've not had any problems with bge
here, and we put them
- Original Message -
From: "Mike Jakubik" <[EMAIL PROTECTED]>
Martin Nilsson wrote:
Mike Jakubik wrote:
As much as I love AMD's CPUs, the availability of good server
motherboards and chipsets stinks; hopefully that will change when
socket AM2 comes out.
That is an old myth: http://
- Original Message -
From: "Mike Jakubik" <[EMAIL PROTECTED]>
No! Socket AM2 is the DDR2 replacement for the 939-pin Athlon64 desktop socket.
Socket F (1207) is the DDR2 replacement for the 940-pin Opteron server socket.
Same crap, different pins. The change simply allows AMD CPUs to use DDR2
memory, nothing mo
- Original Message -
From: "Mike Jakubik" <[EMAIL PROTECTED]>
Steven Hartland wrote:
IIRC AM2 is not a server solution, just a client one; the new server
socket is significantly different.
It's not a server/desktop thing, it's a new socket that will allow AMD to
use
- Original Message -
From: "Mike Jakubik" <[EMAIL PROTECTED]>
David Gilbert wrote:
This isn't random. As I understand the issue, the Opteron HT bus
handles synchronization much faster. So for a game --- this doesn't
matter ... games don't (usually) need sync. Databases, however, liv
Forget Intel and go for AMD who beat them hands down for DB work:
http://www.anandtech.com/IT/showdoc.aspx?i=2745
- Original Message -
From: "Bill Moran" <[EMAIL PROTECTED]>
Our current Dells have 2M cache, and I'm trying to determine whether
the 8M cache will make a significant diffe
Just retested on a dual dual-core, so 2x as quick as before:
Dual 265 (4 x 1.8 GHz cores)
== 4BSD + libthr + ACPI-Fast ==
super-smack -d mysql select-key.smack 100 1
Query Barrel Report for client smacker1
connect: max=36ms min=0ms avg= 18ms from 100 clients
Query_type num_queries
Looking at this on a dual box here (waiting for the new MB for dual dual-core).
All the time is spent processing super-smack and only 25% on mysqld.
Even dropping to 10 clients, a large portion is taken by the clients.
That said, there is a lot that can be gained by using the tweaks out there
i.e. U
Has anyone had any dealings with the HP Smart Array 6i?
Specifically looking for info on:
* Performance
* Disk failure recovery
* Available tools for monitoring etc.
Regards
Steve
- Original Message -
From: "Sean Chittenden" <[EMAIL PROTECTED]>
You can *never* rely on, or use, auto-negotiation. It's very common to
have the switch be set to auto, the PC to be set to 100 FDX, and have
the switch settle on 100 half-duplex (Cisco switches in particular).
netstat -i wi
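Not from the original message, but a quick way to spot the mismatch described above on the FreeBSD side (interface name hypothetical):
# input/output errors and collisions here usually indicate a duplex mismatch
netstat -i
# confirm what the NIC actually negotiated
ifconfig em0 | grep media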
Just did a few quick tests on 5.4 here (not upgraded to 6.0 yet)
and on gig I get a max of 20Mb/s using samba with the following
options:
socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072
max xmit = 131072
With ftp I can get 45Mb/s
Steve
==
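As an aside (not in the original message), a raw TCP test helps separate network capacity from protocol overhead when samba and ftp disagree like this; a sketch assuming benchmarks/iperf is installed on both ends and a hypothetical server address:
# on the receiving box
iperf -s
# on the sending box, run a 30 second test
iperf -c 192.0.2.10 -t 30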
- Original Message -
From: <[EMAIL PROTECTED]>
I have been thinking about getting one. The guy I bought my 3ware 9500S-12
card from has one he's getting ready to benchmark. I asked him to send me the
details and I will pass them along here if I get them. He did mention it
supports SATA2 -
- Original Message -
From: "Mike Tancsa" <[EMAIL PROTECTED]>
# dd if=/dev/zero of=/spool/test bs=32k count=20000
20000+0 records in
20000+0 records out
655360000 bytes transferred in 7.587819 secs (86370011 bytes/sec)
Interesting results there, that's very similar to what I get here (
No problem. I was initially very impressed with this card: great throughput
(after tweaking), easy install and cheap; but then this problem hit.
It gets a DMA timeout on one of the disks, which it then drops from the
RAID5; unfortunately it then gets the same error on another disk and does
the sam
I2C features. Sounds good to me. ;)
Good luck. :)
Túlio G. da Silva
Peter Losher wrote:
On Tuesday 18 October 2005 03:36 pm, Steven Hartland wrote:
Anyone got any SATA RAID 5 controllers they can recommend
64Bit PCIX.
I personally would recommend the HighPoint RocketRAID series (we use t
Anyone got any SATA RAID 5 controllers they can recommend?
64-bit PCI-X.
Steve
- Original Message -
From: "Arne Wörner" <[EMAIL PROTECTED]>
That seems to be 2 or about 2 times faster than disc->disc
transfer... But still slower than I would have expected...
SATA150 sounds like the drive can do 150MB/sec...
LOL, you might want to read up on what SATA150 means.
In
That's actually pretty good for a sustained read / write on a single disk.
Steve
- Original Message -
From: "Patrick Proniewski" <[EMAIL PROTECTED]>
To:
Sent: Sunday, October 02, 2005 3:57 PM
Subject: dd(1) performance when copiing a disk to another
Hi,
(Supermicro motherboard
chi
From what I've seen, with such a slow machine and only
3 disks, I doubt you would get good performance.
Steve
- Original Message -
From: "Jeff Tchang" <[EMAIL PROTECTED]>
I have a 3Ware 7500-4 card. I am experiencing some sluggishness with the
RAID5 implementation. It has been running
Might be silly but do you get similar results if you:
1. expand to a memory-backed disk
2. expand to /dev/null
Steve
- Original Message -
From: "JG" <[EMAIL PROTECTED]>
I had to unpack a lot of tar archives and I occasionally noticed terribly
bad performance on FreeBSD 5.
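A minimal sketch of the two suggestions above, with hypothetical sizes and archive name (the memory disk must be big enough to hold the extracted tree):
# 1. extract onto a swap-backed memory disk
mdconfig -a -t swap -s 1g -u 0
newfs /dev/md0
mount /dev/md0 /mnt
tar xf archive.tar -C /mnt
# 2. extract to /dev/null (reads and unpacks, but never writes to the filesystem)
tar xOf archive.tar > /dev/null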
Thanks for that.
Steve / K
- Original Message -
From: "Eric Anderson" <[EMAIL PROTECTED]>
I use the em cards (Intel Pro 1000/MT's and the like) in many machines
here, and they are rock solid. You'll pay a little more for them, but
there is a reason for it.
=
Yeah, I know about the list but was looking for real usage experiences,
as I've tried supported cards before, e.g. Netgear, and it just panics the
machine with just ping :(
Steve / K
- Original Message -
From: "Sten Spans" <[EMAIL PROTECTED]>
On Thu, 23 Jun 2005,
Can anyone give me any fibre gig card recommendations, PCI-X
based, that they have used in FreeBSD 5.4 machines preferably?
Steve
Have you tried the vfs patch posted a few weeks ago?
Steve / K
- Original Message -
From: <[EMAIL PROTECTED]>
I was doing a dd of /dev/zero into a file on a UFS2 filesystem
(softupdates disabled) on a clean 5.4-R system.
an exec of top took approximately 30 seconds to complete
and
Also check out the recent thread on RAID performance on this list for
additional info on speeding up RAID performance. I was able to increase
from:
Read: ~50MB/s
Write: ~150MB/s
to:
Read: ~200MB/s
Write: ~150MB/s
Steve / K
- Original Message -
From: "Martin Nilsson" <[EMAIL PROTECTED]>
Did you also try the sys/param.h change that helped here?
Also, when testing on the FS I found bs=1024k to degrade performance;
try with 64k.
Is this a raid volume? If so, on my setup anything other than a 16k stripe
and performance went out the window.
For the 'time' it's easier to understand if you use:
/usr/bin/time -h
Summary of results:
RAID0:
Changing vfs.read_max 8 -> 16 and MAXPHYS 128k -> 1M
increased read performance significantly, from 129MB/s to 199MB/s.
Max raw device speed here was 234MB/s.
FS vs raw device: 35MB/s lower, a 14.9% performance loss.
RAID5:
Changing vfs.read_max 8 -> 16 produced a small increase
129M
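For reference (not spelled out in the summary), the two changes look like this; the MAXPHYS part assumes a custom kernel build, since on releases of that era it was a compile-time constant as the earlier sys/param.h hint suggests:
# /etc/sysctl.conf (or at runtime: sysctl vfs.read_max=16)
vfs.read_max=16
and in sys/sys/param.h, rebuilding the kernel afterwards:
#define MAXPHYS (1024 * 1024)    /* was (128 * 1024) */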
On 5/2/2005 4:56 PM, Jonathan Noack
Look at the difference in sys times for raw vs. filesystem reads. With
raw we're at 2.73s while reading from the filesystem requires 12.33s!
From my position of complete ignorance that seems like a lot...
Indeed, that's why I hit on using time as well as just
- Original Message -
Raw read:
/usr/bin/time -h dd of=/dev/null if=/dev/da0 bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 32.028544 secs (204617482 bytes/sec)
32.02s real 0.02s user 2.73s sys
Out of
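For comparison (not shown in the truncated message), the filesystem-side read that those sys times are being compared against would look something like this, with a hypothetical mount point; make the test file larger than RAM (or remount) so the read isn't served from cache:
# create a test file on the filesystem
dd if=/dev/zero of=/data/testfile bs=64k count=100000
# time reading it back and compare the sys time with the raw device read above
/usr/bin/time -h dd if=/data/testfile of=/dev/null bs=64k count=100000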
- Original Message -
From: "Poul-Henning Kamp" <[EMAIL PROTECTED]>
On -current and 5.4 you don't have to make partitions if you
intend to use the entire disk (and provided you don't want
to boot from it). You can simply:
newfs /dev/da0
mount /dev/da0 /where_ever
/dev/da0: 1526216.3MB (31
On 5/2/2005 3:43 PM, Steven Hartland wrote:
Nope, that's 5.4-STABLE; this should be at the very least
260MB/s, as that's what the controller has been measured at on
Linux, even through the FS.
Um... not quite. That was the number you listed for S/W RAID5. In that
case you're not benchmarkin
- Original Message -
From: "Robert Watson" <[EMAIL PROTECTED]>
I'm not sure if we've seen Linux and FreeBSD dmesg output yet, but
if nothing else it would be good to confirm if the drivers on both systems
negotiate the same level of throughput to each drive.
Both drivers ( FreeBSD and Li
- Original Message -
From: "Poul-Henning Kamp" <[EMAIL PROTECTED]>
0. Does the user know enough about what he is doing.
I'm no expert, but then again I'm not a beginner either :)
1. Write performance being nearly 3x that of read performance
2. Read performance only equalling that of a single disk
- Original Message -
From: "Poul-Henning Kamp" <[EMAIL PROTECTED]>
Ok, from what you're saying it sounds like RAID on FreeBSD is useless
apart from creating large disks. Now to the damaging facts: the results
from my two days' worth of testing:
Now, cool down a moment and let's talk about what you
- Original Message -
From: "Poul-Henning Kamp" <[EMAIL PROTECTED]>
Interesting stuff, so:
1. How do we test if this is happening?
Calculate by hand what the offset of the striped/raid part of the disk
is (ie: take slice+partition stats into account).
How's that done? An explained example w
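A worked example of the arithmetic being asked about, assuming the common 63-sector MBR slice offset and a 64 KB stripe (both figures illustrative, not taken from this setup):
# byte offset of the partition within the array
echo $((63 * 512))        # 32256
# remainder against the 64 KB stripe size
echo $((32256 % 65536))   # 32256 -> not stripe-aligned
A non-zero remainder means filesystem blocks regularly straddle two stripes, so one logical I/O becomes two physical ones.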
- Original Message -
From: "Poul-Henning Kamp" <[EMAIL PROTECTED]>
Wouldn't this be a problem for writes then too?
I presume you would only compare read to write performance on a RAID5
device which has battery backed cache.
Without a battery backed cache (or pretending to have one) RAID5
w
- Original Message -
From: "Poul-Henning Kamp" <[EMAIL PROTECTED]>
Don't mean to be terse here, but I'm talking about the same test done on
two different RAID5 configurations, with different disks, and not just
me - other users in this very thread see the same issue..
Uhm, if you are usi
It's highly unlikely that the 4 people on different hardware that have tested
this all have disks with bad sectors.
I've just finished doing a full battery of tests across:
FreeBSD 4.11-RELEASE, 5.4-STABLE, 6.0-CURRENT, and SuSE 9.1
I'll post the results soon but suffice to say the results for FreeBSD
Ok, thanks for that kama, good to have some comparison with 4.x.
I've changed the subject as this definitely seems like a more generic
issue, something that needs to be fixed before the 5.4 release?
- Original Message -
From: "kama" <[EMAIL PROTECTED]>
I have just tested on my system between 4.11
- Original Message -
From: "Scott Long" <[EMAIL PROTECTED]>
Ok, something real strange going on: write performance is ~140MB/s:
gstat:
dT: 0.505 flag_I 50us sizeof 240 i -1
L(q)  ops/s  r/s   kBps  ms/r  w/s   kBps  ms/w  %busy Name
0 1100 4 63 13.2 1096 140313
There is no precompiled version for 5.3 but, looking at the openbuild version,
it's the same driver as the built-in one.
80MB/s is still terrible; it should be looking closer to 200MB/s.
Steven Hartland wrote:
5.4-STABLE Highpoint 1820a RAID 5 ( 5 disk )
dd if=/dev/da0 of=/dev/null bs=64k count=1
1+0
Scott, I've sent this to you as, from reading around, you did the
original driver conversion and as such may have an idea
on the areas I could look at; hope you don't mind.
Ok, something real strange going on: write performance is ~140MB/s:
gstat:
dT: 0.505 flag_I 50us sizeof 240 i -1
L(q) ops/sr/
- Original Message -
From: "Arne Wörner" <[EMAIL PROTECTED]>
Did you try RedHat Linux or FreeBSD R4?
Haven't tried R4 or Linux yet. Just finished restoring
700GB onto the machine and would rather not have
to do that again :)
Steve
- Original Message -
From: "Eric Anderson" <[EMAIL PROTECTED]>
Correct - I misread the dd line. When you are doing the dd, what is
your system busy doing? (top/ps info)
The machine is idle, only me doing the test via an ssh session.
What do you suspect?
I really don't know what it could
Only on write; this is a read test.
- Original Message -
From: "Arne Wörner" <[EMAIL PROTECTED]>
Furthermore, RAID-5 needs to read the parity block before it can
update it, so there are 2 more disc transactions,
which could explain the better performance of a single disk, too?
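To put numbers on that (a general RAID-5 property, not specific to this controller): a small write must read the old data block and the old parity, compute new parity = old parity XOR old data XOR new data, then write both back, so roughly four disk operations where a plain single disk needs one, unless the controller can cache and coalesce full stripes.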
- Original Message -
From: "Eric Anderson" <[EMAIL PROTECTED]>
Where do I start looking?
First, understand that RAID 5 is dependent on fast hardware to perform
the XOR operations. A single disk without any RAID can easily
outperform a RAID array if the RAID array is on a 'slow' contr
Sorry wanted to send to performance not current :)
Steve
- Original Message -
I've just finished putting together a new server box spec:
Dual AMD 244, 2GB RAM, 5 x Seagate SATA 400GB on a
Highpoint 1820a RAID 5 array.
The machine is currently running 5.4-STABLE (from the
weekend). Afte
cat current.iso > /dev/null
0.054u 1.585s 0:01.64 99.3% 10+180k 0+0io 0pf+0w
159MB/s (memory speed)
Comments: dd speed seems very low, netspeed seems good
using nttcp, considering it's using the cheapo onboard BCM5705
Steve
- Original Message -
From: "Steven Hartland" [EMA
I will be putting together a dual Opteron this weekend with the hope
of testing network throughput.
Spec will be:
Dual 244, 2GB RAM, 5x400GB SATA RAID 5 on a Highpoint 1820a,
Broadcom 5705, Intel GigE and an Intel dual port (PCI 32) for comparison.
Will let you know the results.
Steve
- Origin