Jim C. Nasby wrote:
> On Thu, Jan 20, 2005 at 10:08:47AM -0500, Stephen Frost wrote:
>
>>* Christopher Kings-Lynne ([EMAIL PROTECTED]) wrote:
>>
>>>PostgreSQL has replication, but not partitioning (which is what you want).
>>
>>It doesn't have multi-server partitioning.. It's got partitioning
>>w
Josh Berkus wrote:
> Tatsuo,
>
>
>>Yes. However it would be pretty easy to modify pgpool so that it could
>>cope with Slony-I. I.e.
>>
>>1) pgpool does the load balance and sends query to Slony-I's slave and
>> master if the query is SELECT.
>>
>>2) pgpool sends query only to the master if the
On 1/28/2005 2:49 PM, Christopher Browne wrote:
But there's nothing wrong with the idea of using "pg_dump --data-only"
against a subscriber node to get you the data without putting a load
on the origin. And then pulling the schema from the origin, which
oughtn't be terribly expensive there.
And th
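To make that division of labour concrete, here is a minimal Python sketch of driving the two dumps; the host names, database name and output files are placeholders, and only standard pg_dump switches (--schema-only, --data-only, -h, -f) are relied on:

import subprocess

ORIGIN = "origin.example.com"          # Slony-I origin (assumed host name)
SUBSCRIBER = "subscriber.example.com"  # any subscriber node (assumed host name)
DB = "mydb"

# Schema from the origin: cheap, it only reads the catalogs.
subprocess.check_call(
    ["pg_dump", "--schema-only", "-h", ORIGIN, "-f", "schema.sql", DB])

# Bulk data from a subscriber, keeping the long-running read off the origin.
subprocess.check_call(
    ["pg_dump", "--data-only", "-h", SUBSCRIBER, "-f", "data.sql", DB])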
On 1/20/2005 9:23 AM, Jean-Max Reymond wrote:
On Thu, 20 Jan 2005 15:03:31 +0100, Hervé Piedvache <[EMAIL PROTECTED]> wrote:
We were at this moment thinking about a cluster solution ... We saw on the
Internet many solutions describing clustering with MySQL ... but
nothing about PostgreSQL
> Tell me if I am wrong but it sounds to me like
> an endless problem
Agreed. Such it is with caching. After doing some informal
benchmarking with 8.0 under Solaris, I am convinced that our major choke
point is WAL synchronization, at least for applications with a high
commit rate.
W
On Friday, January 21, 2005 at 19:18, Marty Scholes wrote:
> The indexes can be put on a RAM disk tablespace and that's the end of
> index problems -- just make sure you have enough memory available. Also
> make sure that the machine can restart correctly after a crash: the
> tablespace is dropped
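A rough Python/psycopg2 sketch of that setup (the tmpfs path and the table and index names are made up; the point is that only rebuildable objects such as indexes go into the volatile tablespace):

import psycopg2

conn = psycopg2.connect("dbname=mydb")
conn.autocommit = True   # CREATE TABLESPACE cannot run inside a transaction block
cur = conn.cursor()

# One-time setup: a tablespace on a RAM disk (e.g. a tmpfs mount).
cur.execute("CREATE TABLESPACE ram_ix LOCATION '/mnt/ramdisk'")

# Put indexes there, never base tables -- indexes can always be rebuilt.
cur.execute("CREATE INDEX orders_created_idx ON orders (created_at) TABLESPACE ram_ix")

# After a crash or reboot the RAM disk comes back empty, so the restart
# procedure is simply to re-run the CREATE INDEX statements from a saved script.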
fsync on.
Alex Turner
NetEconomist
On Fri, 28 Jan 2005 11:19:44 -0500, Merlin Moncure
<[EMAIL PROTECTED]> wrote:
> > With the right configuration you can get very serious throughput. The
> > new system is processing over 2500 insert transactions per second. We
> > don't need more RAM with this
On 01/28/2005-05:57PM, Alex Turner wrote:
> >
> > Your system A has the absolute worst case Raid 5, 3 drives. The more
> > drives you add to Raid 5 the better it gets but it will never beat Raid
> > 10. On top of it being the worst case, pg_xlog is not on a separate
> > spindle.
> >
>
> True for
On 01/28/2005-10:59AM, Alex Turner wrote:
> At this point I will interject a couple of benchmark numbers based on
> a new system we just configured as food for thought.
>
> System A (old system):
> Compaq Proliant Dual Pentium III 933 with Smart Array 5300, one RAID
> 1, one 3 Disk RAID 5 on 10k R
I know what I would choose. I'd get the mega server w/ a ton of RAM and skip
all the trickiness of partitioning a DB over multiple servers. Yes your data
will grow to a point where even the XXGB can't cache everything. On the
other hand, memory prices drop just as fast. By that time, you can ebay yo
William Yu <[EMAIL PROTECTED]> writes:
> 1 beefy server w/ 32GB RAM = $16K
>
> I know what I would choose. I'd get the mega server w/ a ton of RAM and skip
> all the trickiness of partitioning a DB over multiple servers. Yes your data
> will grow to a point where even the XXGB can't cache everyt
On Fri, 28 Jan 2005 11:54:57 -0500, Christopher Weimann
<[EMAIL PROTECTED]> wrote:
> On 01/28/2005-10:59AM, Alex Turner wrote:
> > At this point I will interject a couple of benchmark numbers based on
> > a new system we just configured as food for thought.
> >
> > System A (old system):
> > Compaq
Hervé Piedvache wrote:
My point is that there is no free solution. There simply isn't.
I don't know why you insist on keeping all your data in RAM, but the
mysql cluster requires that ALL data MUST fit in RAM all the time.
I don't insist on having data in RAM but when you use PostgreS
PFC wrote:
> So, here is something annoying with the current approach : Updating rows
> in a table bloats ALL indices, not just those whose indexed values have
> been actually updated. So if you have a table with many indexed fields and
> you often update some obscure timestamp field, all the
[EMAIL PROTECTED] (Andrew Sullivan) writes:
> On Mon, Jan 24, 2005 at 01:28:29AM +0200, Hannu Krosing wrote:
>>
>> IIRC it hates pg_dump mainly on master. If you are able to run pg_dump
>> from slave, it should be ok.
>
> For the sake of the archives, that's not really a good idea. There
> is som
On Fri, 28 Jan 2005 10:59:58 -0500
Alex Turner <[EMAIL PROTECTED]> wrote:
> At this point I will interject a couple of benchmark numbers based on
> a new system we just configured as food for thought.
>
> System A (old system):
> Compaq Proliant Dual Pentium III 933 with Smart Array 5300, one RAI
> With the right configuration you can get very serious throughput. The
> new system is processing over 2500 insert transactions per second. We
> don't need more RAM with this config. The disks are fast enough.
> 2500 transaction/second is pretty damn fast.
fsync on/off?
Merlin
--
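Whether fsync is on matters because every commit then waits for a WAL flush. A crude Python/psycopg2 sketch for checking the setting and measuring single-row commit throughput on a given box (the table and connection string are placeholders):

import time
import psycopg2

conn = psycopg2.connect("dbname=mydb")
cur = conn.cursor()

cur.execute("SHOW fsync")                 # know what you are actually measuring
print("fsync =", cur.fetchone()[0])

cur.execute("CREATE TEMP TABLE ins_test (id serial, payload text)")
conn.commit()

n = 2000
start = time.time()
for i in range(n):
    cur.execute("INSERT INTO ins_test (payload) VALUES (%s)", ("row %d" % i,))
    conn.commit()                         # one transaction per insert, as in the numbers above
elapsed = time.time() - start
print("%d single-row commits: %.0f tx/sec" % (n, n / elapsed))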
At this point I will interject a couple of benchmark numbers based on
a new system we just configured as food for thought.
System A (old system):
Compaq Proliant Dual Pentium III 933 with Smart Array 5300, one RAID
1, one 3 Disk RAID 5 on 10k RPM drives, 2GB PC133 RAM. Original
Price: $6500
Syst
On Mon, Jan 24, 2005 at 01:28:29AM +0200, Hannu Krosing wrote:
>
> IIRC it hates pg_dump mainly on master. If you are able to run pg_dump
> from slave, it should be ok.
For the sake of the archives, that's not really a good idea. There
is some work afoot to solve it, but at the moment dumping fr
On Thu, Jan 20, 2005 at 04:07:51PM +0100, Hervé Piedvache wrote:
> Yes it seems to be the only solution ... but I'm a little disappointed about
> this ... could you explain to me why there is not this kind of
> functionality ... it seems to be a real need for big applications, no?
I hate to be snarky,
On Thu, Jan 20, 2005 at 03:54:23PM +0100, Hervé Piedvache wrote:
> Slony does not use RAM ... but PostgreSQL will need RAM for accessing a
> database of 50 Gb ... so having two servers with the same configuration
> replicated by Slony does not solve the problem of the scalability of the database.
On Thu, Jan 20, 2005 at 10:40:02PM -0200, Bruno Almeida do Lago wrote:
>
> I was thinking the same! I'd like to know how other databases such as Oracle
> do it.
You mean "how Oracle does it". They're the only ones in the market
that really have this technology.
A
--
Andrew Sullivan | [EMAIL
On Thu, Jan 20, 2005 at 04:02:39PM +0100, Hervé Piedvache wrote:
>
> I don't insist on having data in RAM but when you use PostgreSQL with a
> big database you know that for quick access, just for reading the index file
> for example, it's better to have as much RAM as possible ... I just want to
http://borg.postgresql.org/docs/8.0/interactive/storage-page-layout.html
If you vacuum as part of the transaction it's going to make more efficient
use of resources, because you have more of what you need right there (ie:
odds are that you're on the same page as the old tuple). In cases like
that it ver
Hannu Krosing <[EMAIL PROTECTED]> writes:
> But can't clearing up the index be left for "later" ?
Based on what? Are you going to store the information about what has to
be cleaned up somewhere else, and if so where?
> Indexscan has to check the data tuple anyway, at least for visibility.
> wou
On a fine day (Tuesday, January 25, 2005, 10:41 -0500), Tom Lane wrote:
> Hannu Krosing <[EMAIL PROTECTED]> writes:
> > Why is removing index entries essential ?
>
> Because once you re-use the tuple slot, any leftover index entries would
> be pointing to the wrong rows.
That much I under
Hannu Krosing <[EMAIL PROTECTED]> writes:
> Why is removing index entries essential ?
Because once you re-use the tuple slot, any leftover index entries would
be pointing to the wrong rows.
regards, tom lane
> > > Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> > > > Probably VACUUM works well for small to medium size tables, but not
> > > > for huge ones. I'm considering implementing "on the spot
> > > > salvaging dead tuples".
> > >
> > > That's impossible on its face, except for the special case w
On a fine day (Thursday, January 20, 2005, 11:02 -0500), Rod Taylor wrote:
> Slony has some other issues with databases > 200GB in size as well
> (well, it hates long running transactions -- and pg_dump is a regular
> long running transaction)
IIRC it hates pg_dump mainly on master. If yo
On a fine day (Thursday, January 20, 2005, 16:00 +0100), Hervé Piedvache wrote:
> > Will both do what you want. Replicator is easier to setup but
> > Slony is free.
>
> No ... as I have said ... how will I manage a database with a table of
> maybe 250 000 000 records? I'll need incr
On a fine day (Sunday, January 23, 2005, 15:40 -0500), Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > Changing the idea slightly might be better: if a row update would cause
> > a block split, then if there is more than one row version then we vacuum
> > the whole block first
On a fine day (Monday, January 24, 2005, 11:52 +0900), Tatsuo Ishii wrote:
> > Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> > > Probably VACUUM works well for small to medium size tables, but not
> > > for huge ones. I'm considering implementing "on the spot
> > > salvaging dead tuples
On Sun, Jan 23, 2005 at 03:40:03PM -0500, Tom Lane wrote:
> The real issue with any such scheme is that you are putting maintenance
> costs into the critical paths of foreground processes that are executing
> user queries. I think that one of the primary advantages of the
> Postgres storage design
Tatsuo,
I agree completely that vacuum falls apart on huge tables. We could
probably do the math and figure out what the ratio of updated rows per
total rows is each day, but on a constantly growing table, that ratio
gets smaller and smaller, making the impact of dead tuples in the table
propo
> Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> > Probably VACUUM works well for small to medium size tables, but not
> > for huge ones. I'm considering implementing "on the spot
> > salvaging dead tuples".
>
> That's impossible on its face, except for the special case where the
> same transact
> Tatsuo,
>
> > I'm not clear what "pgPool only needs to monitor "update switching" by
> >
> > *connection* not by *table*" means. In your example:
> > > (1) 00:00 User A updates "My Profile"
> > > (2) 00:01 "My Profile" UPDATE finishes executing.
> > > (3) 00:02 User A sees "My Profile" re-displ
> The real issue with any such scheme is that you are putting maintenance
> costs into the critical paths of foreground processes that are executing
> user queries. I think that one of the primary advantages of the
> Postgres storage design is that we keep that work outside the critical
> path and
Tatsuo,
> I'm not clear what "pgPool only needs to monitor "update switching" by
>
> *connection* not by *table*" means. In your example:
> > (1) 00:00 User A updates "My Profile"
> > (2) 00:01 "My Profile" UPDATE finishes executing.
> > (3) 00:02 User A sees "My Profile" re-displayed
> > (6) 00:
On Sun, Jan 23, 2005 at 03:40:03PM -0500, Tom Lane wrote:
> There was some discussion in Toronto this week about storing bitmaps
> that would tell VACUUM whether or not there was any need to visit
> individual pages of each table. Getting rid of useless scans through
> not-recently-changed areas o
Simon Riggs <[EMAIL PROTECTED]> writes:
> Changing the idea slightly might be better: if a row update would cause
> a block split, then if there is more than one row version then we vacuum
> the whole block first, then re-attempt the update.
"Block split"? I think you are confusing tables with in
On Sat, 2005-01-22 at 16:10 -0500, Tom Lane wrote:
> Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> > Probably VACUUM works well for small to medium size tables, but not
> > for huge ones. I'm considering implementing "on the spot
> > salvaging dead tuples".
>
> That's impossible on its face, ex
After a long battle with technology, [EMAIL PROTECTED] (Hervé Piedvache), an
earthling, wrote:
> Joshua,
>
> On Thursday, January 20, 2005 at 15:44, Joshua D. Drake wrote:
>> Hervé Piedvache wrote:
>> >
>> >My company, which I actually represent, is a fervent user of PostgreSQL.
>> >We used to make all
Quoth Ron Mayer <[EMAIL PROTECTED]>:
> Merlin Moncure wrote:
>> ...You need to build a bigger, faster box with lots of storage...
>> Clustering ... B: will cost you more, not less
>
>
> Is this still true when you get to 5-way or 17-way systems?
>
> My (somewhat outdated) impression is that up to a
In the last exciting episode, [EMAIL PROTECTED] (Hervé Piedvache) wrote:
> On Thursday, January 20, 2005 at 16:05, Joshua D. Drake wrote:
>> Christopher Kings-Lynne wrote:
>> >>> Or you could fork over hundreds of thousands of dollars for Oracle's
>> >>> RAC.
>> >>
>> >> No please do not talk about thi
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] (Hervé
Piedvache) transmitted:
> On Thursday, January 20, 2005 at 15:24, Christopher Kings-Lynne wrote:
>> > Is there any solution with PostgreSQL matching these needs ... ?
>>
>> You want: http://www.slony.info/
>>
>> > Do we have
Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> Probably VACUUM works well for small to medium size tables, but not
> for huge ones. I'm considering implementing "on the spot
> salvaging dead tuples".
That's impossible on its face, except for the special case where the
same transaction inserts an
From http://developer.postgresql.org/todo.php:
Maintain a map of recently-expired rows
This allows vacuum to reclaim free space without requiring a sequential
scan
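The TODO item is about bookkeeping rather than new vacuum mechanics. A toy Python illustration of the idea (all names invented; this is not the server's actual implementation): writers record which heap pages just gained a dead tuple, and a later vacuum pass visits only those pages instead of scanning the whole relation.

recently_expired = {}   # relation name -> set of heap page numbers holding dead tuples

def note_expired(rel, page_no):
    """Called whenever an UPDATE/DELETE leaves a dead tuple version behind."""
    recently_expired.setdefault(rel, set()).add(page_no)

def targeted_vacuum(rel, vacuum_page):
    """Reclaim space only on pages known to contain something dead."""
    for page_no in sorted(recently_expired.pop(rel, set())):
        vacuum_page(rel, page_no)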
On Sat, Jan 22, 2005 at 12:20:53PM -0500, Greg Stark wrote:
> Dawid Kuroczko <[EMAIL PROTECTED]> writes:
>
> > Quick thought -- w
On Sat, 2005-01-22 at 12:41 -0600, Bruno Wolff III wrote:
> On Sat, Jan 22, 2005 at 12:13:00 +0900,
> Tatsuo Ishii <[EMAIL PROTECTED]> wrote:
> >
> > Probably VACUUM works well for small to medium size tables, but not
> > for huge ones. I'm considering implementing "on the spot
> > salvagi
On Sat, Jan 22, 2005 at 12:13:00 +0900,
Tatsuo Ishii <[EMAIL PROTECTED]> wrote:
>
> Probably VACUUM works well for small to medium size tables, but not
> for huge ones. I'm considering implementing "on the spot
> salvaging dead tuples".
You are probably vacuuming too often. You want to wa
Dawid Kuroczko <[EMAIL PROTECTED]> writes:
> Quick thought -- would it be possible to implement a 'partial VACUUM'
> by analogy to partial indexes?
No.
But it gave me another idea. Perhaps equally infeasible, but I don't see why.
What if there were a map of modified pages. So every time a
On Sat, 22 Jan 2005 12:13:00 +0900 (JST), Tatsuo Ishii
<[EMAIL PROTECTED]> wrote:
> IMO the bottleneck is not WAL but table/index bloat. Lots of updates
> on large tables will produce lots of dead tuples. Problem is, there
> is no effective way to reuse these dead tuples since VACUUM on huge
> ta
> Peter, Tatsuo:
>
> What would happen with SELECT queries that, through a function or some
> > other mechanism, updates data in the database? Would those need to be
> > passed to pgpool in some special way?
>
> Oh, yes, that reminds me. It would be helpful if pgPool accepted a control
> string ...
IMO the bottleneck is not WAL but table/index bloat. Lots of updates
on large tables will produce lots of dead tuples. Problem is, there
is no effective way to reuse these dead tuples since VACUUM on huge
tables takes a long time. 8.0 adds new vacuum delay
parameters. Unfortunately this does not h
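For reference, the 8.0 vacuum delay settings mentioned above can be set per session before a manual VACUUM; a small Python/psycopg2 sketch (the numbers are only examples, not recommendations):

import psycopg2

conn = psycopg2.connect("dbname=mydb")
conn.autocommit = True   # VACUUM cannot run inside a transaction block
cur = conn.cursor()

cur.execute("SET vacuum_cost_delay = 20")   # milliseconds to sleep when the cost limit is reached
cur.execute("SET vacuum_cost_limit = 200")  # accumulated I/O cost that triggers the sleep
cur.execute("VACUUM ANALYZE big_table")     # now trickles along instead of saturating the disks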
> Tatsuo,
>
> > Suppose table A gets updated on the master at time 00:00. Until 00:03
> > pgpool needs to send all queries regarding A to the master only. My
> > question is, how can pgpool know a query is related to A?
>
> Well, I'm a little late to head off tangential discussion about this, but
Peter, Tatsuo:
What would happen with SELECT queries that, through a function or some
> other mechanism, updates data in the database? Would those need to be
> passed to pgpool in some special way?
Oh, yes, that reminds me. It would be helpful if pgPool accepted a control
string ... perhaps one in
] Behalf Of Tatsuo Ishii
Sent: Thursday, January 20, 2005 5:40 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED];
pgsql-performance@postgresql.org
Subject: Re: [PERFORM] PostgreSQL clustering VS MySQL clustering
> On January 20, 2005 06:49 am, Joshua D. Drake wr
> >Technically, you can also set up a rule to do things on a select with DO
> >ALSO. However putting update statements in there would be considered (at
> >least by me) very bad form. Note that this is not a trigger because it
> >does not operate at the row level [I know you knew that already :-)].
This is probably a lot easier than you would think. You say that your
DB will have lots of data, lots of updates and lots of reads.
Very likely the disk bottleneck is mostly index reads and writes, with
some critical WAL fsync() calls. In the grand scheme of things, the
actual data is likely
Tatsuo,
> Suppose table A gets updated on the master at time 00:00. Until 00:03
> pgpool needs to send all queries regarding A to the master only. My
> question is, how can pgpool know a query is related to A?
Well, I'm a little late to head off tangential discussion about this, but
The syst
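One way to read Tatsuo's example is as a per-connection rule rather than a per-table one: after a session writes anything, pin it to the master until replication has had time to catch up. A Python sketch of that policy (the 3-second window mirrors the 00:00 to 00:03 example above; none of this describes an existing pgpool feature):

import time

LAG_ALLOWANCE = 3.0        # seconds we assume Slony-I needs to propagate a change
last_write_at = {}         # connection id -> time of that connection's last write

def route(conn_id, query):
    first_word = query.lstrip().split(None, 1)[0].upper()
    if first_word != "SELECT":
        last_write_at[conn_id] = time.time()
        return "master"
    if time.time() - last_write_at.get(conn_id, 0.0) < LAG_ALLOWANCE:
        return "master"    # this session wrote recently; let it read its own writes
    return "replica"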
Yes, I wasn't really choosing my examples particularly carefully, but I
think the conclusion stands: pgpool (or anyone/thing except for the
server) cannot in general tell from the SQL it is handed by the client
whether an update will occur, nor which tables might be affected.
That's not to say
> Uhmmm no :) There is no such thing as a select trigger. The closest you
> would get is a function that is called via select which could be detected by
> making sure you are prepending with a BEGIN or START Transaction. Thus yes
> pgPool can be made to do this.
Technically, you can also set
Joshua D. Drake wrote:
Matt Clark wrote:
Presumably it can't _ever_ know without being explicitly told, because
even for a plain SELECT there might be triggers involved that update
tables, or it might be a select of a stored proc, etc. So in the
general case, you can't assume that a select does
Matt Clark wrote:
Presumably it can't _ever_ know without being explicitly told, because
even for a plain SELECT there might be triggers involved that update
tables, or it might be a select of a stored proc, etc. So in the
general case, you can't assume that a select doesn't cause an update,
a
Presumably it can't _ever_ know without being explicitly told, because
even for a plain SELECT there might be triggers involved that update
tables, or it might be a select of a stored proc, etc. So in the
general case, you can't assume that a select doesn't cause an update,
and you can't be su
> Tatsuo,
>
> > Yes. However it would be pretty easy to modify pgpool so that it could
> > cope with Slony-I. I.e.
> >
> > 1) pgpool does the load balance and sends query to Slony-I's slave and
> >master if the query is SELECT.
> >
> > 2) pgpool sends query only to the master if the query is o
PROTECTED]] On Behalf Of Mitch Pirtle
Sent: Thursday, January 20, 2005 4:42 PM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] PostgreSQL clustering VS MySQL clustering
On Thu, 20 Jan 2005 09:33:42 -0800, Darcy Buskermolen
<[EMAIL PROTECTED]> wrote:
Another Option to c
-
From: "Christopher Kings-Lynne" <[EMAIL PROTECTED]>
To: "Hervé Piedvache" <[EMAIL PROTECTED]>
Cc: "Jeff" <[EMAIL PROTECTED]>;
Sent: Thursday, January 20, 2005 4:58 PM
Subject: [spam] Re: [PERFORM] PostgreSQL clustering VS MySQL clustering
Tatsuo,
> Yes. However it would be pretty easy to modify pgpool so that it could
> cope with Slony-I. I.e.
>
> 1) pgpool does the load balance and sends query to Slony-I's slave and
>master if the query is SELECT.
>
> 2) pgpool sends query only to the master if the query is other than
>SEL
1) pgpool does the load balance and sends query to Slony-I's slave and
master if the query is SELECT.
2) pgpool sends query only to the master if the query is other than
SELECT.
Remaining problem is that Slony-I is not a sync replication
solution. Thus you need to prepare that the load balance
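As a sketch, the two rules above amount to a one-line dispatch on the statement type (Python, names invented); the remaining problem noted just above is that with Slony-I being asynchronous, a SELECT routed to a slave may still see pre-update data:

def choose_backend(query):
    # Rule 1: SELECTs may be load-balanced across the Slony-I master and slaves.
    # Rule 2: everything else must go to the master only.
    first_word = query.lstrip().split(None, 1)[0].upper()
    return "master_or_slave" if first_word == "SELECT" else "master"

# Because Slony-I replication is asynchronous, "master_or_slave" reads can lag
# behind the most recent commits on the master.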
Sent: Friday, January 21, 2005 10:30 AM
Subject: Re: [PERFORM] PostgreSQL clustering VS MySQL clustering
On Thu, Jan 20, 2005 at 10:40:02PM -0200, Bruno Almeida do Lago wrote:
I was thinking the same! I'd like to know how other databases such as
Oracle
d
> On January 20, 2005 06:49 am, Joshua D. Drake wrote:
> > Stephen Frost wrote:
> > >* Herv? Piedvache ([EMAIL PROTECTED]) wrote:
> > >>On Thursday, January 20, 2005 at 15:30, Stephen Frost wrote:
> > >>>* Herv? Piedvache ([EMAIL PROTECTED]) wrote:
> > Is there any solution with PostgreSQL matching
On Thu, Jan 20, 2005 at 07:12:42AM -0800, Joshua D. Drake wrote:
>
> >>then I was thinking. Couldn't he use
> >>multiple databases
> >>over multiple servers with dblink?
> >>
> >>It is not exactly how I would want to do it, but it would provide what
> >>he needs I think???
> >>
> >>
> >
> >Yes
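For what the dblink suggestion looks like in practice, here is a small Python/psycopg2 sketch; the connection strings, tables and columns are invented, and dblink (from contrib) has to be installed in the calling database first:

import psycopg2

conn = psycopg2.connect("dbname=frontdb")
cur = conn.cursor()

# Pull rows from a second server inside an ordinary query. The AS clause is
# required because dblink() returns SETOF record.
cur.execute("""
    SELECT remote.id, remote.total
      FROM dblink('host=server2 dbname=archive',
                  'SELECT id, total FROM orders_2004') AS remote(id integer, total numeric)
     WHERE remote.total > 1000
""")
print(cur.fetchall())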
On Thu, Jan 20, 2005 at 10:40:02PM -0200, Bruno Almeida do Lago wrote:
>
> I was thinking the same! I'd like to know how other databases such as Oracle
> do it.
>
In a nutshell, in a clustered environment (which iirc in oracle means
shared disks), they use a set of files for locking and consiste
On Thu, Jan 20, 2005 at 10:08:47AM -0500, Stephen Frost wrote:
> * Christopher Kings-Lynne ([EMAIL PROTECTED]) wrote:
> > PostgreSQL has replication, but not partitioning (which is what you want).
>
> It doesn't have multi-server partitioning.. It's got partitioning
> within a single server (does
Bruno,
> Which brings up another question: why not just cluster at the hardware
> layer? Get an external fiberchannel array, and cluster a bunch of dual
> Opterons, all sharing that storage. In that sense you would be getting
> one big PostgreSQL 'image' running across all of the servers.
>
> Or i
RFORM] PostgreSQL clustering VS MySQL clustering
On Thu, 20 Jan 2005 09:33:42 -0800, Darcy Buskermolen
<[EMAIL PROTECTED]> wrote:
>
> Another option to consider would be pgmemcache. That way you just build the
> farm out of lots of large memory, diskless boxes for keeping the whole
> da
Merlin Moncure wrote:
...You need to build a bigger, faster box with lots of storage...
Clustering ...
B: will cost you more, not less
Is this still true when you get to 5-way or 17-way systems?
My (somewhat outdated) impression is that up to about 4-way systems
they're price competitive; but bey
Ron Mayer wrote:
http://research.microsoft.com/research/pubs/view.aspx?msr_tr_id=MSR-TR-2002-53
Wrong link...
http://research.microsoft.com/research/pubs/view.aspx?type=Technical%20Report&id=812
This is the one that discusses scalability, price, performance,
failover, power consumption, hardware
Hervé Piedvache wrote:
Regarding the hardware, for the moment we have only a dual Pentium Xeon
2.8GHz with 4 GB of RAM ... and we saw we had bad performance results ... so
we are thinking about a new solution with maybe several servers (server
design may vary from one to the other) ... to get a k
> Regarding the hardware, for the moment we have only a dual Pentium Xeon
> 2.8GHz with 4 GB of RAM ... and we saw we had bad performance results ... so
> we are thinking about a new solution with maybe several servers (server
> design may vary from one to the other) ... to get a kind of cluster to
Two-way Xeons are as fast as a single Opteron; 150M rows isn't a big
deal.
Clustering isn't really the solution; I fail to see how clustering
actually helps since it has to slow down file access.
Dave
Hervé Piedvache wrote:
On Thursday, January 20, 2005 at 19:09, Bruno Almeida do Lago wrote:
Hervé Piedvache <[EMAIL PROTECTED]> writes:
> On Thursday, January 20, 2005 at 19:09, Bruno Almeida do Lago wrote:
> > Could you explain to us what you have in mind for that solution? I mean,
> > forget the PostgreSQL (or any other database) restrictions and explain to us
> > how this hardware would look. W
On Thu, 20 Jan 2005 12:13:17 -0700, Steve Wampler <[EMAIL PROTECTED]> wrote:
> Mitch Pirtle wrote:
> But that's not enough, because you're going to be running separate
> postgresql backends on the different hosts, and there are
> definitely consistency issues with trying to do that. So far as
> I
On Thursday, January 20, 2005 at 19:09, Bruno Almeida do Lago wrote:
> Could you explain to us what you have in mind for that solution? I mean,
> forget the PostgreSQL (or any other database) restrictions and explain to us
> how this hardware would look. Where would the data be stored?
>
> I've something in
Mitch Pirtle wrote:
Which brings up another question: why not just cluster at the hardware
layer? Get an external fiberchannel array, and cluster a bunch of dual
Opterons, all sharing that storage. In that sense you would be getting
one big PostgreSQL 'image' running across all of the servers.
This
On January 20, 2005 10:42 am, Mitch Pirtle wrote:
> On Thu, 20 Jan 2005 09:33:42 -0800, Darcy Buskermolen
>
> <[EMAIL PROTECTED]> wrote:
> > Another option to consider would be pgmemcache. That way you just build
> > the farm out of lots of large memory, diskless boxes for keeping the
> > whole da
On Thu, 20 Jan 2005 09:33:42 -0800, Darcy Buskermolen
<[EMAIL PROTECTED]> wrote:
>
> Another option to consider would be pgmemcache. That way you just build the
> farm out of lots of large memory, diskless boxes for keeping the whole
> database in memory in the whole cluster. More information on
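pgmemcache itself exposes memcached from inside the server; the sketch below shows the same idea driven from the application side with the python-memcached client, which is not the same thing but illustrates how a farm of large-memory, diskless cache boxes absorbs the reads while PostgreSQL stays the system of record (hosts, keys and the query are assumptions):

import memcache   # python-memcached client
import psycopg2

mc = memcache.Client(["cache1:11211", "cache2:11211"])
pg = psycopg2.connect("dbname=mydb")

def get_profile(user_id):
    key = "profile:%d" % user_id
    row = mc.get(key)
    if row is None:                       # cache miss: hit the database once
        cur = pg.cursor()
        cur.execute("SELECT name, email FROM users WHERE id = %s", (user_id,))
        row = cur.fetchone()
        mc.set(key, row, time=300)        # serve the next five minutes from RAM
    return row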
Bruno Almeida do Lago
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Hervé Piedvache
Sent: Thursday, January 20, 2005 1:31 PM
To: Merlin Moncure
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] PostgreSQL clustering VS MySQL clustering
Le Jeu
On January 20, 2005 06:51 am, Christopher Kings-Lynne wrote:
> >>>Sorry but I don't agree with this ... Slony is a replication solution
> >>> ... I don't need replication ... what will I do when my database
> >>> grows to 50 Gb ... I'll need more than 50 Gb of RAM on each server
> >>> ??? Th
On January 20, 2005 06:49 am, Joshua D. Drake wrote:
> Stephen Frost wrote:
> >* Herv? Piedvache ([EMAIL PROTECTED]) wrote:
> >>On Thursday, January 20, 2005 at 15:30, Stephen Frost wrote:
> >>>* Herv? Piedvache ([EMAIL PROTECTED]) wrote:
> Is there any solution with PostgreSQL matching these needs
The problem is very large amounts of data that need to be both read
and updated. If you replicate a system, you will need to
intelligently route the reads to the server that has the data in RAM
or you will always be hitting disk, which is slow. This kind of routing
AFAIK is not possible with curr
On Thu, 20 Jan 2005 16:32:27 +0100, Hervé Piedvache wrote:
> On Thursday, January 20, 2005 at 16:23, Dave Cramer wrote:
>> Google uses something called the google filesystem, look it up in
>> google. It is a distributed file system.
>
> Yes that's another point I'm working on ... make a cluster of ser
Hervé Piedvache wrote:
Sorry but I don't agree with this ... Slony is a replication solution ... I
don't need replication ... what will I do when my database grows to 50
Gb ... I'll need more than 50 Gb of RAM on each server ???
This solution is not very realistic for me ...
Have you confi
Steve Wampler <[EMAIL PROTECTED]> writes:
> Hervé Piedvache wrote:
>
> > No ... as I have said ... how will I manage a database with a table of
> > maybe 250 000 000 records? I'll need incredible servers ... to get quick
> > access or index reading ... no?
>
> Probably by carefully part
* [EMAIL PROTECTED] ([EMAIL PROTECTED]) wrote:
> I think maybe a SAN in conjunction with tablespaces might be the answer.
> Still need one honking server.
That's interesting - can a PostgreSQL partition be across multiple
tablespaces?
Stephen
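In 8.0 terms the closest answer is table inheritance: the parent table is one logical object, but each child "partition" can be placed in its own tablespace, so the set as a whole does span several of them. A Python/psycopg2 sketch (the tablespace and table names are assumptions, and the tablespaces must already exist):

import psycopg2

conn = psycopg2.connect("dbname=mydb")
cur = conn.cursor()

cur.execute("CREATE TABLE measurements (id bigint, taken date, value numeric)")
cur.execute("""
    CREATE TABLE measurements_2005_01
        (CHECK (taken >= '2005-01-01' AND taken < '2005-02-01'))
        INHERITS (measurements) TABLESPACE ts_array1
""")
cur.execute("""
    CREATE TABLE measurements_2005_02
        (CHECK (taken >= '2005-02-01' AND taken < '2005-03-01'))
        INHERITS (measurements) TABLESPACE ts_array2
""")
conn.commit()

# Queries against 'measurements' see all children; the CHECK constraints let a
# planner that understands them skip partitions irrelevant to the WHERE clause.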
What you want is some kind of huge parallel computing setup, isn't it? I have heard
that many groups of Japanese PostgreSQL developers have done this, but they discuss it
on Japanese websites and of course in Japanese.
I can name one of them, "Asushi Mitani", and his website
http://www.csra.co.jp/~mitani/jpug/pgclust
On Thu, 2005-01-20 at 15:36 +0100, Hervé Piedvache wrote:
> On Thursday, January 20, 2005 at 15:24, Christopher Kings-Lynne wrote:
> > > Is there any solution with PostgreSQL matching these needs ... ?
> >
> > You want: http://www.slony.info/
> >
> > > Do we have to backport our development to MySQL for
Cc: Hervé Piedvache <[EMAIL PROTECTED]>, pgsql-performance@postgresql.org, [EMAIL PROTECTED]
Subject: Re: [PERFORM] PostgreSQL clustering VS MySQL clustering
Christopher Kings-Lynne wrote:
Probably by carefully partitioning their data. I can't imagine anything
being fast on a single table in the 250,000,000 tuple range. Nor can I
really imagine any database that efficiently splits a single table
across multiple machines (or even inefficiently unless some
Probably by carefully partitioning their data. I can't imagine anything
being fast on a single table in the 250,000,000 tuple range. Nor can I
really imagine any database that efficiently splits a single table
across multiple machines (or even inefficiently unless some internal
partitioning is being