On Thu, 15 Jan 2009, Jean-David Beyer wrote:
M. Edward (Ed) Borasky wrote:
| Luke Lonergan wrote:
|> Not to mention the #1 cause of server faults in my experience: OS
|> kernel bug causes a crash. Battery backup doesn't help you much there.
|>
|
|
Jean-David Beyer wrote:
> M. Edward (Ed) Borasky wrote:
> | Luke Lonergan wrote:
> |> Not to mention the #1 cause of server faults in my experience: OS
> |> kernel bug causes a crash. Battery backup doesn't help you much there.
> |>
> |
> | Well now ... that very much depends on where you *got* th
M. Edward (Ed) Borasky wrote:
| Luke Lonergan wrote:
|> Not to mention the #1 cause of server faults in my experience: OS
|> kernel bug causes a crash. Battery backup doesn't help you much there.
|>
|
| Well now ... that very much depends on where yo
On Jan 11, 2009, at 9:43 PM, M. Edward (Ed) Borasky wrote:
Luke Lonergan wrote:
Not to mention the #1 cause of server faults in my experience: OS
kernel bug causes a crash. Battery backup doesn't help you much
there.
Not that long ago (a month or so) we ran into a problem where hpacucl
On Sun, 11 Jan 2009, M. Edward (Ed) Borasky wrote:
Where you *will* have some major OS risk is with testing-level software
or "bleeding edge" Linux distros like Fedora.
I just ran "uptime" on my home machine, and it said 144 days. Debian
unstable, on no-name hardware. I guess the last time I r
On Sun, Jan 11, 2009 at 8:08 PM, Robert Haas wrote:
>> Where you *will* have some major OS risk is with testing-level software
>> or "bleeding edge" Linux distros like Fedora. Quite frankly, I don't
>> know why people run Fedora servers -- if it's Red Hat compatibility you
>> want, there's CentOS.
M. Edward (Ed) Borasky wrote:
Greg Smith wrote:
Right, this is why I only rely on Linux deployments using a name I
trust: Dell.
Returning to reality, the idea that there are brands you can buy that
make all your problems go away is rather optimistic. The number of
"branded" servers I've see
Greg Smith wrote:
> Right, this is why I only rely on Linux deployments using a name I
> trust: Dell.
>
> Returning to reality, the idea that there are brands you can buy that
> make all your problems go away is rather optimistic. The number of
> "branded" servers I've seen that are just nearly o
On Sun, 11 Jan 2009, M. Edward (Ed) Borasky wrote:
And you're probably in pretty good shape with Debian stable and the RHEL
respins like CentOS.
No one is in good shape until they've done production-level load testing
on the system and have run the sort of "unplug it under load" tests that
S
Robert Haas wrote:
>> Where you *will* have some major OS risk is with testing-level software
>> or "bleeding edge" Linux distros like Fedora. Quite frankly, I don't
>> know why people run Fedora servers -- if it's Red Hat compatibility you
>> want, there's CentOS.
>
> I've had no stability proble
> Where you *will* have some major OS risk is with testing-level software
> or "bleeding edge" Linux distros like Fedora. Quite frankly, I don't
> know why people run Fedora servers -- if it's Red Hat compatibility you
> want, there's CentOS.
I've had no stability problems with Fedora. The worst
Luke Lonergan wrote:
> Not to mention the #1 cause of server faults in my experience: OS kernel bug
> causes a crash. Battery backup doesn't help you much there.
Well now ... that very much depends on where you *got* the server OS and
how you administer it. If you're talking a correctly-maintain
On Sun, Jan 11, 2009 at 4:16 PM, Luke Lonergan wrote:
> Not to mention the #1 cause of server faults in my experience: OS kernel bug
> causes a crash. Battery backup doesn't help you much there.
I've been using pgsql since way back, in a lot of projects, and almost
almost of them on some flavor
Subject: Re: [PERFORM] understanding postgres issues/bottlenecks
On Sun, 11 Jan 2009, Glyn Astill wrote:
> --- On Sun, 11/1/09, Scott Marlowe wrote:
>
>> They also told me we could never lose power in
On Sun, 11 Jan 2009, Glyn Astill wrote:
--- On Sun, 11/1/09, Scott Marlowe wrote:
They also told me we could never lose power in the hosting
center because it was so wonderful and redundant and that I was
wasting my time.
Well, that's just plain silly; at the very least there's always going to
--- On Sun, 11/1/09, Scott Marlowe wrote:
> They also told me we could never lose power in the hosting
> center because it was so wonderful and redundant and that I was
> wasting my time.
Well, that's just plain silly; at the very least there's always going to be
some breakers / fuses in between
On Sun, Jan 11, 2009 at 11:07 AM, Scott Marlowe wrote:
> running pgsql. The others, running Oracle, db2, Ingres and a few
> other databases all came back up with corrupted data on their drives
> and forced nearly day long restores.
Before anyone thinks I'm slagging all other databases here, the
On Sat, Jan 10, 2009 at 2:56 PM, Ron wrote:
> At 10:36 AM 1/10/2009, Gregory Stark wrote:
>>
>> "Scott Marlowe" writes:
>>
>> > On Sat, Jan 10, 2009 at 5:40 AM, Ron wrote:
>> >> At 03:28 PM 1/8/2009, Merlin Moncure wrote:
>> >>> just be aware of the danger . hard reset (power off) class of fail
> Here are some numbers from an old pgfouine report of mine:
> - query peak: 378 queries/s
> - select: 53.1%, insert 3.8%, update 2.2%, delete 2.8%
>
>
Actually the percentages are wrong (I think pgfouine also counts other types
of query, like SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE;):
These a
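The correction above (non-DML statements inflating the denominator) is easy to see with a quick sketch. The counts below are invented purely to illustrate the arithmetic, not taken from the actual pgfouine report:

```python
# Illustrate how non-DML statements (e.g. SET SESSION CHARACTERISTICS ...)
# shrink the reported select/insert/update/delete percentages.
# All counts below are made up for illustration.

counts = {"select": 531, "insert": 38, "update": 22, "delete": 28}
other = 381  # SET/SHOW/BEGIN/COMMIT and friends also counted by pgfouine

total_with_other = sum(counts.values()) + other  # 1000
total_dml_only = sum(counts.values())            # 619

for kind, n in counts.items():
    print(f"{kind}: {100 * n / total_with_other:.1f}% of all statements, "
          f"{100 * n / total_dml_only:.1f}% of DML")
```

With these made-up counts, select is 53.1% of all statements but 85.8% of the DML alone, which is the kind of skew being described.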
Hi All,
I ran pgbench. Here are some results:
-bash-3.1$ pgbench -c 50 -t 1000
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
number of clients: 50
number of transactions per client: 1000
number of transactions actually processed: 50000/50000
tps = 377.351354 (including co
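For reference, the scaling factor and client/transaction counts above determine the data set and workload size. A small sketch, assuming the standard pgbench TPC-B-like schema as described in the pgbench documentation (table names shown are the modern ones; pre-8.4 pgbench called them accounts/branches/tellers):

```python
# Sketch of what "pgbench -i -s 100" and "pgbench -c 50 -t 1000" imply,
# based on the documented pgbench TPC-B-like schema (an assumption here,
# not output from the poster's system).

def pgbench_rows(scale: int) -> dict:
    """Rows created by pgbench -i at a given scaling factor."""
    return {
        "pgbench_branches": scale,            # 1 branch per scale unit
        "pgbench_tellers": 10 * scale,        # 10 tellers per branch
        "pgbench_accounts": 100_000 * scale,  # 100k accounts per branch
    }

def total_transactions(clients: int, tx_per_client: int) -> int:
    """Total transactions a pgbench run processes."""
    return clients * tx_per_client

print(pgbench_rows(100)["pgbench_accounts"])  # 10 million accounts at scale 100
print(total_transactions(50, 1000))           # matches the 50000/50000 above
```

At scale 100 the accounts table alone is roughly 1.3 GB, so a run like this exercises the disks rather than just RAM, which is presumably the point of the test.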
Ron wrote:
I think the idea is that with SSDs or a RAID with a battery backed
cache you
can leave fsync on and not have any significant performance hit since
the seek
times are very fast for SSD. They have limited bandwidth but
bandwidth to the
WAL is rarely an issue -- just latency.
Yes, Greg
On Sun, 11 Jan 2009, Mark Kirkwood wrote:
da...@lang.hm wrote:
On Sat, 10 Jan 2009, Luke Lonergan wrote:
The new MLC based SSDs have better wear leveling tech and don't suffer the
pauses. Intel X25-M 80 and 160 GB SSDs are both pause-free. See
Anandtech's test results for details.
they d
da...@lang.hm wrote:
On Sat, 10 Jan 2009, Luke Lonergan wrote:
The new MLC based SSDs have better wear leveling tech and don't
suffer the pauses. Intel X25-M 80 and 160 GB SSDs are both
pause-free. See Anandtech's test results for details.
they don't suffer the pauses, but they still don't
On Sat, 10 Jan 2009, Luke Lonergan wrote:
> The new MLC based SSDs h
On Sat, 10 Jan 2009, Gregory Stark wrote:
da...@lang.hm writes:
On Sat, 10 Jan 2009, Markus Wanner wrote:
My understanding of SSDs so far is, that they are not that bad at
writing *on average*
Ron writes:
> At 10:36 AM 1/10/2009, Gregory Stark wrote:
>>
>> Or a system crash. If the kernel panics for any reason when it has dirty
>> buffers in memory the database will need to be restored.
>
> A power conditioning UPS should prevent a building wide or circuit level bad
> power event
Exce
On Sat, 10 Jan 2009, Ron wrote:
At 10:36 AM 1/10/2009, Gregory Stark wrote:
"Scott Marlowe" writes:
> On Sat, Jan 10, 2009 at 5:40 AM, Ron wrote:
>> At 03:28 PM 1/8/2009, Merlin Moncure wrote:
>>> just be aware of the danger . hard reset (power off) class of failure
>>> when fsync = off mea
At 10:36 AM 1/10/2009, Gregory Stark wrote:
"Scott Marlowe" writes:
> On Sat, Jan 10, 2009 at 5:40 AM, Ron wrote:
>> At 03:28 PM 1/8/2009, Merlin Moncure wrote:
>>> just be aware of the danger . hard reset (power off) class of failure
>>> when fsync = off means you are loading from backups.
>
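For context, the fsync trade-off being debated maps to a couple of postgresql.conf settings. A minimal sketch, with setting names per the PostgreSQL documentation of that era; the values are illustrative, not a tuning recommendation:

```
# postgresql.conf fragment (illustrative)
fsync = on                    # flush WAL to disk at commit; 'off' trades
                              # crash safety for speed -- a power-off class
                              # failure can mean restoring from backups
#wal_sync_method = fdatasync  # platform-dependent default
```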
On Sat, 10 Jan 2009, Gregory Stark wrote:
da...@lang.hm writes:
On Sat, 10 Jan 2009, Markus Wanner wrote:
My understanding of SSDs so far is, that they are not that bad at
writing *on average*, but to perform wear-leveling, they sometimes have
to shuffle around multiple blocks at once. So th
da...@lang.hm writes:
> On Sat, 10 Jan 2009, Markus Wanner wrote:
>
>> My understanding of SSDs so far is, that they are not that bad at
>> writing *on average*, but to perform wear-leveling, they sometimes have
>> to shuffle around multiple blocks at once. So there are pretty awful
>> spikes for
On Sat, 10 Jan 2009, Markus Wanner wrote:
da...@lang.hm wrote:
On Sat, 10 Jan 2009, Gregory Stark wrote:
I think the idea is that with SSDs or a RAID with a battery backed
cache you
can leave fsync on and not have any significant performance hit since
the seek
times are very fast for SSD. They
On Sat, Jan 10, 2009 at 12:00 PM, Markus Wanner wrote:
> Hi,
>
> da...@lang.hm wrote:
>> On Sat, 10 Jan 2009, Gregory Stark wrote:
>>> I think the idea is that with SSDs or a RAID with a battery backed
>>> cache you
>>> can leave fsync on and not have any significant performance hit since
>>> the
Hi,
da...@lang.hm wrote:
> On Sat, 10 Jan 2009, Gregory Stark wrote:
>> I think the idea is that with SSDs or a RAID with a battery backed
>> cache you
>> can leave fsync on and not have any significant performance hit since
>> the seek
>> times are very fast for SSD. They have limited bandwidth b
On Sat, 10 Jan 2009, Gregory Stark wrote:
...and of course, those lucky few with bigger budgets can use SSD's and not
care what fsync is set to.
Would that prevent any corruption if the writes got out of order
because of lack of fsync? Or partial writes? Or wouldn't fsync still
need to be tu
"Scott Marlowe" writes:
> On Sat, Jan 10, 2009 at 5:40 AM, Ron wrote:
>> At 03:28 PM 1/8/2009, Merlin Moncure wrote:
>>> just be aware of the danger . hard reset (power off) class of failure
>>> when fsync = off means you are loading from backups.
>>
>> That's what redundant power conditioning
On Sat, Jan 10, 2009 at 5:40 AM, Ron wrote:
> At 03:28 PM 1/8/2009, Merlin Moncure wrote:
>> just be aware of the danger . hard reset (power off) class of failure
>> when fsync = off means you are loading from backups.
>
> That's what redundant power conditioning UPS's are supposed to help preven
At 03:28 PM 1/8/2009, Merlin Moncure wrote:
On Thu, Jan 8, 2009 at 9:42 AM, Stefano Nichele
wrote:
> Merlin Moncure wrote:
>> IIRC that's the 'perc 6ir' card...no write caching. You are getting
>> killed with syncs. If you can restart the database, you can test with
>> fsync=off comparing load
On Thu, Jan 8, 2009 at 9:42 AM, Stefano Nichele
wrote:
> Merlin Moncure wrote:
>> IIRC that's the 'perc 6ir' card...no write caching. You are getting
>> killed with syncs. If you can restart the database, you can test with
>> fsync=off comparing load to confirm this. (another way is to compare
>
>> From: Stefano Nichele
>> Subject: Re: [PERFORM] understanding postgres issues/bottlenecks
>> To: "Scott Marlowe"
>> Cc: pgsql-performance@postgresql.org
>> Date: Thursday, 8 January, 2009, 8:36 AM
>> Find !
>>
>> Dell CERC SATA RAID 2 PCI SATA 6ch
>>
Merlin Moncure wrote:
On Thu, Jan 8, 2009 at 3:36 AM, Stefano Nichele
wrote:
Find !
Dell CERC SATA RAID 2 PCI SATA 6ch
Running lspci -v:
03:09.0 RAID bus controller: Adaptec AAC-RAID (rev 01)
Subsystem: Dell CERC SATA RAID 2 PCI SATA 6ch (DellCorsair)
IIRC that's the 'perc
Glyn Astill wrote:
--- On Thu, 8/1/09, Stefano Nichele wrote:
Find !
Dell CERC SATA RAID
--- On Thu, 8/1/09, Stefano Nichele wrote:
> Find !
>
>
On Thu, Jan 8, 2009 at 3:36 AM, Stefano Nichele
wrote:
> Find !
>
> Dell CERC SATA RAID 2 PCI SATA 6ch
>
> Running lspci -v:
>
> 03:09.0 RAID bus controller: Adaptec AAC-RAID (rev 01)
> Subsystem: Dell CERC SATA RAID 2 PCI SATA 6ch (DellCorsair)
IIRC that's the 'perc 6ir' card...no write
Find !
Dell CERC SATA RAID 2 PCI SATA 6ch
Running lspci -v:
03:09.0 RAID bus controller: Adaptec AAC-RAID (rev 01)
Subsystem: Dell CERC SATA RAID 2 PCI SATA 6ch (DellCorsair)
Flags: bus master, 66MHz, slow devsel, latency 32, IRQ 209
Memory at f800 (32-bit, prefetchab
Scott Marlowe wrote:
> On Wed, Jan 7, 2009 at 3:34 PM, Greg Smith wrote:
>> On Wed, 7 Jan 2009, Scott Marlowe wrote:
>>
>>> I cannot understand how Dell stays in business.
>> There's a continuous stream of people who expect RAID5 to perform well, too,
>> yet this surprises you?
>
> I guess I've u
> Sequential read performance means precisely squat for most database
> loads.
Depends on the database workload. Many queries for me may scan 50GB of data
for aggregation.
Besides, it is a good test for making sure your RAID card doesn't suck.
Especially running tests with sequential access C
On Wed, Jan 7, 2009 at 6:19 PM, Merlin Moncure wrote:
RE: Perc raid controllers
> Unfortunately switching the disks to jbod and going software
> raid doesn't seem to help much. The biggest problem with dell
Yeah, I noticed that too when I was trying to get a good config from
the perc 5e. Also
On Wed, Jan 7, 2009 at 7:11 PM, Scott Carey wrote:
> If you're stuck with a Dell, the Adaptec 5 series works, I'm using 5085's in
> a pair and get 1200 MB/sec streaming reads best case with 20 SATA drives in
> RAID 10 (2 sets of 10, software raid 0 on top). Of course, Dell doesn't
> like you put
If you're stuck with a Dell, the Adaptec 5 series works, I'm using 5085's in a
pair and get 1200 MB/sec streaming reads best case with 20 SATA drives in RAID
10 (2 sets of 10, software raid 0 on top). Of course, Dell doesn't like you
putting in somebody else's RAID card, but they support the r
Since the discussion involves Dell PERC controllers, does anyone know if
the performance of LSI cards (those with the same chipsets as Dell) also
have similarly poor performance?
I have a LSI ELP card, so would like to know what other people's
experiences are...
-bborie
Scott Marlowe wr
On Wed, Jan 7, 2009 at 4:36 PM, Glyn Astill wrote:
> --- On Wed, 7/1/09, Scott Marlowe wrote:
>
>> Just to elaborate on the horror that is a Dell perc5e. We
>> have one in
>> a 1950 with battery backed cache (256 Meg I think). It has
>> an 8 disk
>> 500Gig SATA drive RAID-10 array and 4 1.6GHz
--- On Wed, 7/1/09, Scott Marlowe wrote:
> Just to elaborate on the horror that is a Dell perc5e. We
> have one in
> a 1950 with battery backed cache (256 Meg I think). It has
> an 8 disk
> 500Gig SATA drive RAID-10 array and 4 1.6GHz cpus and 10
> Gigs ram.
Our perc5i controllers performed be
--- On Wed, 7/1/09, Scott Marlowe wrote:
> The really bad news is that
> you can't
> generally plug in a real RAID controller on a Dell. We put
> an Areca
> 168-LP PCI-x8 in one of our 1950s and it wouldn't even
> turn on, got a
> CPU Error.
>
Hmm, I had to pull the perc5i's out of our dell s
On Wed, Jan 7, 2009 at 3:34 PM, Greg Smith wrote:
> On Wed, 7 Jan 2009, Scott Marlowe wrote:
>
>> I cannot understand how Dell stays in business.
>
> There's a continuous stream of people who expect RAID5 to perform well, too,
> yet this surprises you?
I guess I've underestimated the human capaci
On Wed, 7 Jan 2009, Scott Marlowe wrote:
I cannot understand how Dell stays in business.
There's a continuous stream of people who expect RAID5 to perform well,
too, yet this surprises you?
--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD
Just to elaborate on the horror that is a Dell perc5e. We have one in
a 1950 with battery backed cache (256 Meg I think). It has an 8 disk
500Gig SATA drive RAID-10 array and 4 1.6GHz cpus and 10 Gigs ram.
This server currently serves as a mnogo search server. Here's what
vmstat 1 looks like dur
On Wed, Jan 7, 2009 at 2:02 PM, Stefano Nichele
wrote:
>
> On Tue, Jan 6, 2009 at 7:45 PM, Scott Marlowe
> wrote:
>>
>> I concur with Merlin you're I/O bound.
>>
>> Adding to his post, what RAID controller are you running, does it have
>> cache, does the cache have battery backup, is the cache se
On Tue, Jan 6, 2009 at 7:45 PM, Scott Marlowe wrote:
> I concur with Merlin you're I/O bound.
>
> Adding to his post, what RAID controller are you running, does it have
> cache, does the cache have battery backup, is the cache set to write
> back or write through?
At the moment I don't have such
Ok, here is some information:
OS: Centos 5.x (Linux 2.6.18-53.1.21.el5 #1 SMP Tue May 20 09:34:18 EDT
2008 i686 i686 i386 GNU/Linux)
RAID: it's a hardware RAID controller
The disks are 9600rpm SATA drives
(6 disk 1+0 RAID array and 2 separate disks for the OS).
About iostat (on sdb I have pg_xl
Simon Waters wrote:
> On Wednesday 07 January 2009 04:17:10 M. Edward (Ed) Borasky wrote:
>> 1. The package it lives in is called "sysstat". Most Linux distros do
>> *not* install "sysstat" by default. Somebody should beat up on them
>> about that. :)
>
> Hehe, although sysstat and friends did hav
On Wednesday 07 January 2009 04:17:10 M. Edward (Ed) Borasky wrote:
>
> 1. The package it lives in is called "sysstat". Most Linux distros do
> *not* install "sysstat" by default. Somebody should beat up on them
> about that. :)
Hehe, although sysstat and friends did have issues on Linux for a lo
David Rees wrote:
> On Tue, Jan 6, 2009 at 11:02 AM, Stefano Nichele
> wrote:
>> BTW, why did you say I/O bound? Which are the parameters that highlight
>> that? Sorry for my ignorance
>
> In addition to the percentage of time spent in wait as Scott said, you
> can also see the number of p
On Tue, Jan 6, 2009 at 11:02 AM, Stefano Nichele
wrote:
> BTW, why did you say I/O bound? Which are the parameters that highlight
> that? Sorry for my ignorance
In addition to the percentage of time spent in wait as Scott said, you
can also see the number of processes which are blocked (b
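To make the "blocked processes" signal concrete, here is a small sketch that pulls the b (blocked) and wa (I/O wait) columns out of vmstat output. The column layout assumes the standard Linux procps vmstat format, and the sample line is fabricated for illustration:

```python
# Parse the "b" (processes in uninterruptible sleep, usually waiting on I/O)
# and "wa" (% CPU time waiting on I/O) columns from Linux vmstat output.
# Column positions assume procps vmstat; the sample below is made up.

SAMPLE = """procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  7      0 123456  78901 234567    0    0  5120   980  400  800  5  3 12 80  0"""

def iostat_signals(vmstat_text: str) -> tuple[int, int]:
    """Return (blocked_processes, iowait_percent) from one vmstat sample."""
    lines = vmstat_text.splitlines()
    cols = lines[1].split()   # the per-column header row
    vals = lines[2].split()   # the data row
    blocked = int(vals[cols.index("b")])
    iowait = int(vals[cols.index("wa")])
    return blocked, iowait

blocked, iowait = iostat_signals(SAMPLE)
# Rule of thumb from this thread: several blocked processes together with
# a high wa% is the signature of an I/O-bound box.
print(blocked, iowait)
```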
On Tue, Jan 6, 2009 at 12:02 PM, Stefano Nichele
wrote:
> Thanks for your help. I'll give you the info you asked for as soon as I
> have it (I also have to install iostat but I don't have enough privileges
> to do that).
>
> BTW, why did you say I/O bound? Which are the parameters that highlight
>
Thanks for your help. I'll give you the info you asked for as soon as I
have it (I also have to install iostat but I don't have enough privileges
to do that).
BTW, why did you say I/O bound? Which are the parameters that
highlight that? Sorry for my ignorance
ste
Merlin Moncure wrote:
I concur with Merlin you're I/O bound.
Adding to his post, what RAID controller are you running, does it have
cache, does the cache have battery backup, is the cache set to write
back or write through?
Also, what do you get for this (need contrib module pgbench installed)
pgbench -i -s 100
pgben
On Tue, Jan 6, 2009 at 11:50 AM, Stefano Nichele
wrote:
> Hi list,
> I would like to ask for your help in understanding whether my postgresql
> server (ver. 8.2.9) is well configured.
> It's a quad-proc system (32 bit) with a 6 disk 1+0 RAID array and 2 separate
> disks for the OS and write-ahead log
Hi list,
I would like to ask for your help in understanding whether my postgresql
server (ver. 8.2.9) is well configured.
It's a quad-proc system (32 bit) with a 6 disk 1+0 RAID array and 2
separate disks for the OS and write-ahead logs with 4GB of RAM.
I don't know what is the best info to help