On Wednesday 31 October 2007 12:45, Ketema wrote:
> I am trying to build a very robust DB server that will support 1000+
> concurrent users (already have seen a max of 237, no pooling being
> used). I have read so many articles now that I am just saturated. I
> have a general idea but would like feedback from others.
On Fri, 2 Nov 2007 15:03:12 -0500, [EMAIL PROTECTED] wrote to pgsql-performance@postgresql.org:
In an 8-disk configuration where 2 are used for the OS, 2 for xlog, and 4 for the
database... is this possible given that Dell's possible configurations only allow 2
different RAID ...
Ketema wrote:
> RAM? The more the merrier, right? I understand shmmax and the pg
> config file parameters for shared mem have to be adjusted to use it.
> Disks? Standard RAID rules, right? 1 for safety, 5 for best mix of
> performance and safety?
> Any preference of SCSI over SATA? What about us ...
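As a rough illustration of the shmmax/shared-memory point in that question, here is a minimal sketch for a Linux box with around 4GB of RAM; the sizes and data directory below are assumptions for the example, not recommendations from the thread:

# Raise the kernel shared memory ceiling so the postmaster can allocate
# its buffer pool (here ~1GB), and make the setting survive a reboot.
sysctl -w kernel.shmmax=1073741824
sysctl -w kernel.shmall=262144              # counted in 4kB pages
echo "kernel.shmmax = 1073741824" >> /etc/sysctl.conf

# Then point PostgreSQL at the memory in postgresql.conf and restart:
#   shared_buffers = 800MB            # a common starting point is 20-25% of RAM
#   effective_cache_size = 2GB        # what the OS page cache is expected to hold
pg_ctl -D /var/lib/pgsql/data restart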
On 11/1/07, Mark Floyd <[EMAIL PROTECTED]> wrote:
> Hello,
> Dell PowerEdge Energy 2950
> (2) Quad Core Intel Xeon L5320, 2x4MB Cache, 1.86GHz, 1066MHz FSB
> 4GB 667MHz Dual Ranked DIMMs, Energy Smart
>
> PERC 5/i, x8 Backplane, Integrated Controller Card
>
> Hard Drive Configuration: Integrated S ...
Hello,
I am new to setting up PostgreSQL machines for our operational
environments and would appreciate it if someone could take a look at this
setup; throw tomatoes if it looks too bad. We're expecting an
initial load of about 5 million text meta-data records to our
database, and are expecting ...
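Not part of the original message, but for an initial load of a few million text records the usual route is COPY; the database name, table definition and file path below are made up for the sketch:

# Bulk-load a CSV file through psql's \copy (it runs COPY under the hood),
# then build indexes and refresh the planner statistics afterwards.
psql -d metadata -c "CREATE TABLE records (id bigint, title text, body text);"
psql -d metadata -c "\copy records FROM '/tmp/records.csv' WITH CSV"
psql -d metadata -c "CREATE INDEX records_title_idx ON records (title);"
psql -d metadata -c "ANALYZE records;"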
On Thu, 2007-11-01 at 11:16 -0700, Steve Crawford wrote:
> Magnus Hagander wrote:
> > Ow Mun Heng wrote:
> >>> You're likely better off (performance-wise) putting it on the same disk
> >>> as the database itself if that one has better RAID, for example.
> >> I'm thinking along the lines of since nothing much writes to the OS
> >> Disk, I should (keyword) be safe.
Magnus Hagander wrote:
> Ow Mun Heng wrote:
>>> You're likely better off (performance-wise) putting it on the same disk
>>> as the database itself if that one has better RAID, for example.
>> I'm thinking along the lines of since nothing much writes to the OS
>> Disk, I should (keyword) be safe.
>
Ketema wrote:
> I am trying to build a very robust DB server that will support 1000+
> concurrent users (already have seen a max of 237, no pooling being
> used). I have read so many articles now that I am just saturated. I
> have a general idea but would like feedback from others.
Describe a bit ...
> > You're likely better off (performance-wise) putting it on the same disk
> > as the database itself if that one has better RAID, for example.
> I'm thinking along the lines of since nothing much writes to the OS
> Disk, I should (keyword) be safe.
You are almost certainly wrong about this; thin ...
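One way to test the "nothing much writes to the OS disk" assumption before relying on it; these are generic Linux tools, watched while the OS device and the data device are under real load:

# Per-device write rates and utilisation, sampled every 5 seconds;
# syslog, /tmp and swap all live on the OS disk and share its spindles.
iostat -x 5
# Swap-in/swap-out (si/so) and I/O wait (wa) from the same vantage point.
vmstat 5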
Ow Mun Heng wrote:
>> You're likely better off (performance-wise) putting it on the same disk
>> as the database itself if that one has better RAID, for example.
>
> I'm thinking along the lines of since nothing much writes to the OS
>> Disk, I should (keyword) be safe.
Unless it's *always* in the ...
On Thu, 2007-11-01 at 07:54 +0100, Magnus Hagander wrote:
> Ow Mun Heng wrote:
> > On Wed, 2007-10-31 at 22:58 +0100, Tomas Vondra wrote:
> >
> >> 2) separate the transaction log from the database
> >>
> >> It's mostly written, and it's the most valuable data you have. And in
> >> case you use PITR, this is the only thing that really needs to be
> >> backed up.
Ow Mun Heng wrote:
> On Wed, 2007-10-31 at 22:58 +0100, Tomas Vondra wrote:
>
>> 2) separate the transaction log from the database
>>
>> It's mostly written, and it's the most valuable data you have. And in
>> case you use PITR, this is the only thing that really needs to be
>> backed up.
Tomas Vondra wrote:
>> How does pg utilize multiple processors? The more the better?
>
> Linux version uses processes, so it's able to use multiple processors.
> (Not sure about Windows version, but I guess it uses threads.)
No, the Windows version also uses processes.
//Magnus
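To make the process-per-connection point concrete, a quick look at a running server; the pg_stat_activity column names below are the pre-9.2 ones (procpid/current_query), which match the 8.x releases current at the time:

# One backend process per client connection, visible from the shell...
ps aux | grep 'postgres:'
# ...and one row per backend from SQL:
psql -d postgres -c "SELECT procpid, usename, current_query FROM pg_stat_activity;"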
On Wed, 2007-10-31 at 22:58 +0100, Tomas Vondra wrote:
> 2) separate the transaction log from the database
>
> It's mostly written, and it's the most valuable data you have. And in
> case you use PITR, this is the only thing that really needs to be
> backed up.
My main DB datastore ...
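A sketch of the usual way to give the transaction log its own spindles; the paths are examples, the server must be stopped first, and PITR additionally needs WAL archiving switched on:

# Move pg_xlog to a dedicated disk and leave a symlink behind.
pg_ctl -D /var/lib/pgsql/data stop
mv /var/lib/pgsql/data/pg_xlog /wal_disk/pg_xlog
ln -s /wal_disk/pg_xlog /var/lib/pgsql/data/pg_xlog
pg_ctl -D /var/lib/pgsql/data start

# For PITR, also archive every completed WAL segment (postgresql.conf):
#   archive_command = 'cp %p /backup/wal/%f'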
> > Who has built the biggest baddest Pg server out there and what do you
> > use?
In my last job we had a 360GB database running on an 8-way Opteron with
32GB of RAM. Two of those beasts were connected to a SAN for hot
failover purposes.
We did not have much web traffic, but tons of update/insert t ...
> I understand query tuning and table design play a large role in
> performance, but taking that factor away
> and focusing on just hardware, what is the best hardware to get for Pg
> to work at the highest level
> (meaning speed at returning results)?
Depends heavily on the particular application, but mo ...
On Wed, Oct 31, 2007 at 11:45 AM, Ketema <[EMAIL PROTECTED]> wrote:
> Who has built the biggest baddest Pg server out there and what do you
> use?
I don't think that would be us, but I can give you an example of
what can work. We have a 220 GB database which ...
On 31-10-2007 17:45 Ketema wrote:
> I understand query tuning and table design play a large role in
> performance, but taking that factor away
> and focusing on just hardware, what is the best hardware to get for Pg
> to work at the highest level
> (meaning speed at returning results)?
It really depends
Ketema wrote:
> I am trying to build a very robust DB server that will support 1000+
> concurrent users (already have seen a max of 237, no pooling being
> used). I have read so many articles now that I am just saturated. I
> have a general idea but would like feedback from others.
>
> I understand ...
I am trying to build a very robust DB server that will support 1000+
concurrent users (already have seen a max of 237, no pooling being
used). I have read so many articles now that I am just saturated. I
have a general idea but would like feedback from others.
I understand query tuning and table design play a large role in
performance, but taking that factor away and focusing on just hardware,
what is the best hardware to get for Pg to work at the highest level
(meaning speed at returning results)?
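Since the post notes that no pooling is in use yet, one widely used way to front 1000+ client connections with a few dozen real backends is a pooler such as pgbouncer; everything below (database name, ports, pool sizes) is an illustrative assumption, not a tested configuration:

# Write an example pgbouncer.ini and start the pooler; the application then
# connects to port 6432 instead of 5432.
cat > /etc/pgbouncer/pgbouncer.ini <<'EOF'
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = trust
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1500
default_pool_size = 40
EOF
pgbouncer -d /etc/pgbouncer/pgbouncer.ini

Transaction pooling keeps the backend count small but breaks session-level features (for example, prepared statements held across transactions), and userlist.txt still has to list the connecting users even with auth_type = trust.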