There has been an error in the tests: the dataset size was not 2*MEM, it was 0.5*MEM.
I shall redo the tests and post the results.
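For reference, a minimal bonnie++ invocation along those lines, sizing the dataset at
twice the installed RAM so the OS page cache cannot absorb the reads (the mount point,
user and the 16 GB RAM figure are assumptions, not taken from the thread):

    # 16 GB RAM assumed, so -s 32g gives a 2*MEM dataset;
    # -n 0 skips the small-file tests, -u sets the user to run as
    bonnie++ -d /mnt/sda6/bonnie -s 32g -n 0 -u postgres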
On 2/17/09 11:52 PM, "Rajesh Kumar Mallah" wrote:
the raid10 volume was benchmarked again,
taking into consideration the above points.
Effect of ReadAhead Settings
disabled, 256 (default), 512, 1024
xfs_ra0     414741,  66144
xfs_ra256   403647, 545026    all tests on sda6
One thing to note is that Linux's md sets the readahead to 8192 by default
instead of 128. I've noticed that in many situations a large chunk of the
performance boost reported is due to this alone.
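For anyone reproducing this, the readahead value in question can be inspected and
changed with blockdev; the device names below are placeholders:

    # report the current readahead, in 512-byte sectors
    blockdev --getra /dev/md0
    blockdev --getra /dev/sda

    # set the md device's readahead to 8192 sectors (4 MB)
    blockdev --setra 8192 /dev/md0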
On 2/18/09 12:57 AM, "Grzegorz Jaśkiewicz" wrote:
have you tried hanging a bunch of raid1 volumes off Linux's md, and letting it do raid0 for you?
On 2/18/09 12:31 AM, "Scott Marlowe" wrote:
> Effect of ReadAhead Settings
> disabled, 256 (default), 512, 1024
>
> xfs_ra0     414741,  66144
> xfs_ra256   403647, 545026    all tests on sda6
> xfs_ra512   411357, 564769
> xfs_ra1024  404392, 431168
2009/2/18 Rajesh Kumar Mallah:
> On Wed, Feb 18, 2009 at 2:27 PM, Grzegorz Jaśkiewicz wrote:
>> have you tried hanging a bunch of raid1 volumes off Linux's md, and letting
>> it do raid0 for you?
>
> Hmmm, I will have only 3 bunches in that case, as the system has to boot
> from the first bunch, since the system has only 8 drives.
On Wed, Feb 18, 2009 at 2:27 PM, Grzegorz Jaśkiewicz wrote:
> have you tried hanging a bunch of raid1 volumes off Linux's md, and letting
> it do raid0 for you?
Hmmm, I will have only 3 bunches in that case, as the system has to boot
from the first bunch, since the system has only 8 drives. I think reducing
spindles will reduce
Have you tried hanging a bunch of raid1 volumes off Linux's md, and letting it
do raid0 for you?
I have heard plenty of stories where this actually sped up performance. One
noticeable case is YouTube's servers.
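A rough mdadm sketch of that layout, assuming the controller exports each raid1 pair
as its own disk (the /dev/sd* names and the count of four pairs are placeholders):

    # stripe the four exported raid1 pairs together with md RAID0
    mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # then put the filesystem on the striped device
    mkfs.xfs /dev/md0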
On Wed, Feb 18, 2009 at 1:44 AM, Rajesh Kumar Mallah wrote:
>>> Effect of ReadAhead Settings
>>> disabled, 256 (default), 512, 1024
>>>
> SEQUENTIAL
>>> xfs_ra0     414741,  66144
>>> xfs_ra256   403647, 545026    all tests on sda6
>>> xfs_ra512   411357, 564769
>> Effect of ReadAhead Settings
>> disabled, 256 (default), 512, 1024
>>
SEQUENTIAL
>> xfs_ra0     414741,  66144
>> xfs_ra256   403647, 545026    all tests on sda6
>> xfs_ra512   411357, 564769
>> xfs_ra1024  404392, 431168
>>
>> looks like 512
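If 512 does turn out to be the sweet spot, note that blockdev settings are lost on
reboot; one simple way to persist the value (an assumption, not something mentioned
in the thread) is to reapply it from rc.local:

    # /etc/rc.local -- reapply the readahead (in 512-byte sectors) at boot
    blockdev --setra 512 /dev/sda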
On Wed, Feb 18, 2009 at 12:52 AM, Rajesh Kumar Mallah wrote:
> the raid10 volume was benchmarked again,
> taking into consideration the above points.
> Effect of ReadAhead Settings
> disabled, 256 (default), 512, 1024
>
> xfs_ra0     414741,  66144
> xfs_ra256   403647, 545026
>> documentation on the blockdev command, and here is a little write-up I found
>> with a couple of web searches:
>> http://portal.itauth.com/2007/11/20/howto-linux-double-your-disk-read-performance-single-command
> From: pgsql-performance-ow...@postgresql.org
> [pgsql-performance-ow...@postgresql.org] On Behalf Of Rajesh Kumar Mallah
> [mallah.raj...@gmail.com]
> Sent: Tuesday, February 17, 2009 5:25 AM
> To:
Of Rajesh Kumar Mallah
[mallah.raj...@gmail.com]
Sent: Tuesday, February 17, 2009 5:25 AM
To: Matthew Wakeling
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i
controller
On Tue, Feb 17, 2009 at 5:15 PM, Matthew Wakeling wrote:
On Tue, Feb 17, 2009 at 5:15 PM, Matthew Wakeling wrote:
> On Tue, 17 Feb 2009, Rajesh Kumar Mallah wrote:
>>
>> sda6 --> xfs with default formatting options.
>> sda7 --> mkfs.xfs -f -d sunit=128,swidth=512 /dev/sda7
>> sda8 --> ext3 (default)
>>
>> it looks like the mkfs.xfs options sunit=128 and swidth=512 did not improve
>> I/O throughput as such in the bonnie++ tests.
On Tue, 17 Feb 2009, Rajesh Kumar Mallah wrote:
sda6 --> xfs with default formatting options.
sda7 --> mkfs.xfs -f -d sunit=128,swidth=512 /dev/sda7
sda8 --> ext3 (default)
it looks like the mkfs.xfs options sunit=128 and swidth=512 did not improve
I/O throughput as such in the bonnie++ tests.
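For what it's worth, those numbers decode as follows: sunit and swidth are given in
512-byte sectors, so sunit=128 is a 64 KB stripe unit and swidth=512 is four stripe
units, i.e. four data spindles. That only matches an 8-disk RAID10 if the controller
volume was built with a 64 KB stripe size, which the thread does not state. The same
geometry can be written more readably with the byte-based options:

    # equivalent to sunit=128,swidth=512, assuming a 64 KB controller stripe
    # and 4 data spindles (8 disks in RAID10)
    mkfs.xfs -f -d su=64k,sw=4 /dev/sda7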
The URL of the result is
http://98.129.214.99/bonnie/report.html
(sorry if this was a repost)
On Tue, Feb 17, 2009 at 2:04 AM, Rajesh Kumar Mallah wrote:
> BTW
>
> our machine got built with 8 15k drives in raid10;
> from the bonnie++ results it looks like the machine is
> able to do 400 Mbytes/s seq write and 550 Mbytes/s read.
BTW
our machine got built with 8 15k drives in raid10;
from the bonnie++ results it looks like the machine is
able to do 400 Mbytes/s seq write and 550 Mbytes/s
read. The BB cache is enabled with 256 MB.
sda6 --> xfs with default formatting options.
sda7 --> mkfs.xfs -f -d sunit=128,swidth=512 /dev/sda7
Arjen van der Meijden writes:
> When we purchased our Perc 5/e with an MD1000 filled with 15 15k rpm SAS disks,
> my colleague actually spent some time benchmarking the PERC and an ICP Vortex
> (basically an overclocked Adaptec) on those drives. Unfortunately he doesn't
> have too many comparable r
On 2/6/09 9:53 AM, "Arjen van der Meijden" wrote:
When we purchased our Perc 5/e with an MD1000 filled with 15 15k rpm SAS
disks, my colleague actually spent some time benchmarking the PERC and an
ICP Vortex (basically an overclocked Adaptec) on those drives.
Unfortunately he doesn't have too many com
On Fri, 6 Feb 2009, Bruce Momjian wrote:
Stupid question, but why do people bother with the Perc line of cards if
the LSI brand is better?
Because when you're ordering a Dell server, all you do is click a little
box and you get a PERC card with it. There aren't that many places that
carry t
On 6-2-2009 16:27 Bruce Momjian wrote:
The experience I have heard is that Dell looks at server hardware in
the same way they look at their consumer gear, "If I put in a cheaper
part, how much will it cost Dell to warranty replace it". Sorry, but I
don't look at my performance or downtime in th
> 3. Pure s/w RAID10 if I can convince the PERC to let the OS see the disks
Look for JBOD mode.
PERC 6 does not have JBOD mode exposed. Dell disables the feature from the LSI
firmware in their customization.
However, I have been told that you can convince them to tell you the 'secret
hands
On 4-2-2009 22:36 Scott Marlowe wrote:
We purchased the Perc 5E, which Dell wanted $728 for last fall, with 8
SATA disks in an MD-1000, and the performance is just terrible. No
matter what we do, the best throughput on any RAID setup was about 30
megs/second write and 60 megs/second read. I can ge
On Fri, Feb 6, 2009 at 8:19 AM, Matt Burke wrote:
> Glyn Astill wrote:
>>> Stupid question, but why do people bother with the Perc line of
>>> cards if the LSI brand is better? It seems the headache of trying
>>> to get the Perc cards to perform is not worth any money saved.
>>
>> I think in most
Bruce Momjian wrote:
Matt Burke wrote:
we'd have no choice other than replacing the server+shelf+disks.
I want to see just how much better a high-end Areca/Adaptec controller
is, but I just don't think I can get approval for a £1000 card "because
some guy on the internet said the
Matt Burke wrote:
> Glyn Astill wrote:
> >> Stupid question, but why do people bother with the Perc line of
> >> cards if the LSI brand is better? It seems the headache of trying
> >> to get the Perc cards to perform is not worth any money saved.
> >
> > I think in most cases the dell cards actu
Glyn Astill wrote:
>> Stupid question, but why do people bother with the Perc line of
>> cards if the LSI brand is better? It seems the headache of trying
>> to get the Perc cards to perform is not worth any money saved.
>
> I think in most cases the dell cards actually cost more, people end
> u
--- On Fri, 6/2/09, Bruce Momjian wrote:
> Stupid question, but why do people bother with the Perc
> line of cards if
> the LSI brand is better? It seems the headache of trying
> to get the
> Perc cards to perform is not worth any money saved.
I think in most cases the dell cards actually cost
Matt Burke wrote:
> Scott Carey wrote:
> > You probably don't want a single array with more than 32 drives anyway,
> > it's almost always better to start carving out chunks and using software
> > raid 0 or 1 on top of that for various reasons. I wouldn't put more than
> > 16 drives in one array on a
On Fri, Feb 6, 2009 at 2:04 AM, Matt Burke wrote:
> Scott Carey wrote:
>> You probably don't want a single array with more than 32 drives anyway,
>> its almost always better to start carving out chunks and using software
>> raid 0 or 1 on top of that for various reasons. I wouldn't put more than
>
Scott Carey wrote:
> You probably don’t want a single array with more than 32 drives anyway,
> its almost always better to start carving out chunks and using software
> raid 0 or 1 on top of that for various reasons. I wouldn’t put more than
> 16 drives in one array on any of these RAID cards, they
Rajesh Kumar Mallah wrote:
>> I've checked out the latest Areca controllers, but the manual
>> available on their website states there's a limitation of 32 disks
>> in an array...
>
> Where exactly is the limitation of 32 drives? The datasheet of the
> 1680 states support for up to 128 drives using en
On 2/5/09 4:40 AM, "Matt Burke" wrote:
Are there any reasonable choices for bigger (3+ shelf) direct-connected
RAID10 arrays, or are hideously expensive SANs the only option? I've
checked out the latest Areca controllers, but the manual available on
their website states there's a limitation of 32
On Thu, Feb 5, 2009 at 6:10 PM, Matt Burke wrote:
> Arjen van der Meijden wrote:
>
>> Afaik the Perc 5/i and /e are more or less rebranded LSI-cards (they're
>> not identical in layout etc), so it would be a bit weird if they
>> performed much less than the similar LSI's wouldn't you think?
>
> I'
On Thu, 2009-02-05 at 12:40 +, Matt Burke wrote:
> Arjen van der Meijden wrote:
>
> Are there any reasonable choices for bigger (3+ shelf) direct-connected
> RAID10 arrays, or are hideously expensive SANs the only option? I've
> checked out the latest Areca controllers, but the manual availab
Scott Marlowe writes:
> We purchased the Perc 5E, which Dell wanted $728 for last fall, with 8
> SATA disks in an MD-1000, and the performance is just terrible. No
> matter what we do, the best throughput on any RAID setup was about 30
> megs/second write and 60 megs/second read.
Is that sequent
Glyn Astill wrote:
> Did you try flashing the PERC with the LSI firmware?
>
> I tried flashing a PERC3/dc with LSI firmware, it worked fine but I
> saw no difference in performance so I assumed it must be something
> else on the board that cripples it.
No, for a few reasons:
1. I read somewhere
Matt Burke wrote:
> Arjen van der Meijden wrote:
>
>> Afaik the Perc 5/i and /e are more or less rebranded LSI-cards (they're
>> not identical in layout etc), so it would be a bit weird if they
>> performed much less than the similar LSI's wouldn't you think?
>
> I've recently had to replace a PE
--- On Thu, 5/2/09, Matt Burke wrote:
> From: Matt Burke
> Subject: Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i
> controller
> To: pgsql-performance@postgresql.org
> Date: Thursday, 5 February, 2009, 12:40 PM
> Arjen van der Meijden wrote:
>
Arjen van der Meijden wrote:
> Afaik the Perc 5/i and /e are more or less rebranded LSI-cards (they're
> not identical in layout etc), so it would be a bit weird if they
> performed much less than the similar LSI's wouldn't you think?
I've recently had to replace a PERC4/DC with the exact same ca
Scott Carey wrote:
Sorry for the top post --
Assuming Linux --
1: PERC 6 is still a bit inferior to other options, but not that bad.
Its random IOPS is fine; sequential speeds are noticeably less than,
say, the latest from Adaptec or Areca.
In the archives there was a big thread about this ve
Sorry for the top post --
Assuming Linux --
1: PERC 6 is still a bit inferior to other options, but not that bad. Its
random IOPS is fine; sequential speeds are noticeably less than, say, the latest
from Adaptec or Areca.
2: Random IOPS will probably scale OK from 6 to 8 drives, but depending o
Sorry for the top posts; I don't have a client that is inline-post friendly.
Most PERCs are rebranded LSIs lately. The difference between the 5 and 6 is
PCI-X versus PCIe LSI series, relatively recent ones. Just look at the
OpenSolaris drivers for the PERC cards for a clue to what is what.
On Wed, Feb 4, 2009 at 2:11 PM, Arjen van der Meijden
wrote:
> On 4-2-2009 21:09 Scott Marlowe wrote:
>>
>> I have little experience with the 6i. I do have experience with all
>> the Percs from the 3i/3c series to the 5e series. My experience has
>> taught me that a brand new, latest model $700
On 4-2-2009 21:09 Scott Marlowe wrote:
I have little experience with the 6i. I do have experience with all
the Percs from the 3i/3c series to the 5e series. My experience has
taught me that a brand new, latest model $700 Dell RAID controller is
about as good as a $150 LSI, Areca, or Escalade/3W
Rajesh Kumar Mallah wrote:
Hi,
I am going to get a Dell 2950 with PERC6i with
8 * 73 15K SAS drives +
300 GB EMC SATA SAN STORAGE,
I seek suggestions from users sharing their experience with
similar hardware, if any. I have the following specific concerns.
1. On the list I read that the RAID10 function in
On Wed, Feb 4, 2009 at 11:45 AM, Rajesh Kumar Mallah
wrote:
> Hi,
>
> I am going to get a Dell 2950 with PERC6i with
> 8 * 73 15K SAS drives +
> 300 GB EMC SATA SAN STORAGE,
>
> I seek suggestions from users sharing their experience with
> similar hardware if any. I have following specific concern
Hi,
I am going to get a Dell 2950 with PERC6i with
8 * 73 15K SAS drives +
300 GB EMC SATA SAN STORAGE,
I seek suggestions from users sharing their experience with
similar hardware, if any. I have the following specific concerns.
1. On the list I read that the RAID10 function in PERC5 is not really
strip