On Wed, 2005-01-19 at 21:00 -0800, [EMAIL PROTECTED] wrote:
> Let's see if I have been paying enough attention to the SQL gurus.
> The planner is making a different estimate of how many deprecated <> '' versus
> how many broken <> ''.
> I would try SET STATISTICS to a larger number on the ports ta
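For reference, a minimal sketch of what that could look like (the column names
follow the thread; the statistics target value is purely illustrative):

  ALTER TABLE ports ALTER COLUMN broken SET STATISTICS 100;
  ALTER TABLE ports ALTER COLUMN deprecated SET STATISTICS 100;
  ANALYZE ports;

ANALYZE has to be re-run after changing the target for the new statistics to be
collected.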
Well, you probably will need to run your own tests to get a conclusive
answer. It shouldn't be that hard -- compile once with gcc, make a copy of
the installed binaries to pgsql.gcc -- then repeat with the HP compiler.
In general, though, gcc works best on x86 hardware. Comparisons of
gcc on x86
On Wed, 2005-01-19 at 20:37 -0500, Dan Langille wrote:
> Hi folks,
>
> Running on 7.4.2, recently vacuum analysed the three tables in
> question.
>
> The query plan in question changes dramatically when a WHERE clause
> changes from ports.broken to ports.deprecated. I don't see why.
> Well,
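As a hedged illustration (the real query in the thread joins three tables;
count(*) here is just a stand-in), the two plans would typically be compared
with something like:

  EXPLAIN ANALYZE SELECT count(*) FROM ports WHERE broken <> '';
  EXPLAIN ANALYZE SELECT count(*) FROM ports WHERE deprecated <> '';

The estimated row counts in the two outputs are what the planner bases its
choice of scan and join strategy on.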
Hi,
I have the go ahead of a customer to do some testing on Postgresql in a couple
of weeks as a
replacement for Oracle.
The reason for the test is that the number of users of the warehouse is going
to increase and this
will have a serious impact on licencing costs. (I bet that sounds familiar)
Hello everyone,
I'm having a problem with some of my tables, and I'm not sure whether
postgres' behaviour might even be a bug. I'm (still) using 8.0rc5 at
present.
I have a table that contains, among other columns, one of this sort:
purge_date timestamp
most records will have this field set to NU
Hi to all, I have the following 2 examples. Now,
regarding the offset: if it is small (10) or big (>5), what is the impact
on the performance of the query? I noticed that if I return more
data (columns) or if I make more joins, then the query runs even
slower if the OFFSET is bigger. Ho
On 20 Jan 2005 at 9:34, Ragnar Hafstað wrote:
> On Wed, 2005-01-19 at 20:37 -0500, Dan Langille wrote:
> > Hi folks,
> >
> > Running on 7.4.2, recently vacuum analysed the three tables in
> > question.
> >
> > The query plan in question changes dramatically when a WHERE clause
> > changes from por
Andrei Bintintan wrote:
Hi to all,
I have the following 2 examples. Now, regarding the offset: if it
is small (10) or big (>5), what is the impact on the performance of
the query? I noticed that if I return more data (columns) or if I
make more joins, then the query runs even slower if the OFFS
Andrei:
Hi to all,
I have the following 2 examples. Now, regarding the offset: if it is
small (10) or big (>5), what is the impact on the performance of the query? I
noticed that if I return more data (columns) or if I make more joins, then the
query runs even slower if the OFFSET is bigge
If you're using this to provide "pages" of results, could you use a
cursor?
What do you mean by that? Cursor?
Yes, I'm using this to provide "pages", but if I jump to the last pages it
goes very slow.
Andy.
- Original Message -
From: "Richard Huxton"
To: "Andrei Bintintan" <[EMAIL PROTE
Dear community,
My company, which I represent here, is a fervent user of PostgreSQL.
We have been building all our applications on PostgreSQL for more than 5 years.
We usually do classical client/server applications under Linux, and web
interfaces (PHP, Perl, C/C++). We also manage public
On Wed, 19 Jan 2005, Dan Langille wrote:
> Hi folks,
>
> Running on 7.4.2, recently vacuum analysed the three tables in
> question.
>
> The query plan in question changes dramatically when a WHERE clause
> changes from ports.broken to ports.deprecated. I don't see why.
> Well, I do see why: a seq
On Thu, 20 Jan 2005 15:03:31 +0100, Hervé Piedvache <[EMAIL PROTECTED]> wrote:
> We were at this moment thinking about a Cluster solution ... We saw on the
> Internet many solutions talking about Cluster solutions using MySQL ... but
> nothing about PostgreSQL ... the idea is to use several servers
Is there any solution with PostgreSQL matching these needs ... ?
You want: http://www.slony.info/
Do we have to backport our development to MySQL for this kind of problem ?
Is there any other solution than a Cluster for our problem ?
Well, Slony does replication which is basically what you want :)
* Matt Casters ([EMAIL PROTECTED]) wrote:
> I have the go ahead of a customer to do some testing on Postgresql in a
> couple of weeks as a
> replacement for Oracle.
> The reason for the test is that the number of users of the warehouse is going
> to increase and this
> will have a serious impact
* Hervé Piedvache ([EMAIL PROTECTED]) wrote:
> Is there any solution with PostgreSQL matching these needs ... ?
You might look into pg_pool. Another possibility would be slony, though
I'm not sure it's to the point you need it at yet, depends on if you can
handle some delay before an insert makes
On 20 Jan 2005 at 6:14, Stephan Szabo wrote:
> On Wed, 19 Jan 2005, Dan Langille wrote:
>
> > Hi folks,
> >
> > Running on 7.4.2, recently vacuum analysed the three tables in
> > question.
> >
> > The query plan in question changes dramatically when a WHERE clause
> > changes from ports.broken to
On Thursday 20 January 2005 15:24, Christopher Kings-Lynne wrote:
> > Is there any solution with PostgreSQL matching these needs ... ?
>
> You want: http://www.slony.info/
>
> > Do we have to backport our development to MySQL for this kind of problem
> > ? Is there any other solution than a Cluster
On Thursday 20 January 2005 15:38, Christopher Kings-Lynne wrote:
> > Sorry but I don't agree with this ... Slony is a replication solution ...
> > I don't need replication ... what will I do when my database will grow up
> > to 50 Gb ... I'll need more than 50 Gb of RAM on each server ???
> > This
On Thursday 20 January 2005 15:30, Stephen Frost wrote:
> * Hervé Piedvache ([EMAIL PROTECTED]) wrote:
> > Is there any solution with PostgreSQL matching these needs ... ?
>
> You might look into pg_pool. Another possibility would be slony, though
> I'm not sure it's to the point you need it at ye
Sorry, but I don't agree with this ... Slony is a replication solution ... I
don't need replication ... what will I do when my database grows to 50
GB ... I'll need more than 50 GB of RAM on each server ???
This solution is not very realistic for me ...
I need a Cluster solution, not a repl
* Hervé Piedvache ([EMAIL PROTECTED]) wrote:
> On Thursday 20 January 2005 15:30, Stephen Frost wrote:
> > * Hervé Piedvache ([EMAIL PROTECTED]) wrote:
> > > Is there any solution with PostgreSQL matching these needs ... ?
> >
> > You might look into pg_pool. Another possibility would be slony, th
Hervé Piedvache wrote:
Dear community,
My company, which I actually represent, is a fervent user of PostgreSQL.
We used to make all our applications using PostgreSQL for more than 5 years.
We usually do classical client/server applications under Linux, and Web
interface (php, perl, C/C++). We used
On Jan 20, 2005, at 9:36 AM, Hervé Piedvache wrote:
Sorry but I don't agree with this ... Slony is a replication solution
... I
don't need replication ... what will I do when my database will grow
up to 50
Gb ... I'll need more than 50 Gb of RAM on each server ???
Slony doesn't use much ram. The
Stephen Frost wrote:
* Hervé Piedvache ([EMAIL PROTECTED]) wrote:
On Thursday 20 January 2005 15:30, Stephen Frost wrote:
* Hervé Piedvache ([EMAIL PROTECTED]) wrote:
Is there any solution with PostgreSQL matching these needs ... ?
You might look into pg_pool. Another possi
Sorry but I don't agree with this ... Slony is a replication solution ...
I don't need replication ... what will I do when my database will grow up
to 50 Gb ... I'll need more than 50 Gb of RAM on each server ???
This solution is not very realistic for me ...
I need a Cluster solution not a replica
On Thursday 20 January 2005 15:48, Jeff wrote:
> On Jan 20, 2005, at 9:36 AM, Hervé Piedvache wrote:
> > Sorry but I don't agree with this ... Slony is a replication solution
> > ... I
> > don't need replication ... what will I do when my database will grow
> > up to 50
> > Gb ... I'll need more th
Or you could fork over hundreds of thousands of dollars for Oracle's
RAC.
No, please do not talk about this again ... I'm looking for a PostgreSQL
solution ... I know RAC ... and I'm not able to pay for a RAC-certified
hardware configuration plus a RAC licence.
There is absolutely zero PostgreSQ
Joshua,
On Thursday 20 January 2005 15:44, Joshua D. Drake wrote:
> Hervé Piedvache wrote:
> >
> >My company, which I actually represent, is a fervent user of PostgreSQL.
> >We used to make all our applications using PostgreSQL for more than 5
> > years. We usually do classical client/server appli
On Thursday 20 January 2005 15:51, Christopher Kings-Lynne wrote:
> >>>Sorry but I don't agree with this ... Slony is a replication solution
> >>> ... I don't need replication ... what will I do when my database will
> >>> grow up to 50 Gb ... I'll need more than 50 Gb of RAM on each server
> >>> ?
No please do not talk about this again ... I'm looking about a PostgreSQL
solution ... I know RAC ... and I'm not able to pay for a RAC certify
hardware configuration plus a RAC Licence.
What you want does not exist for PostgreSQL. You will either
have to build it yourself or pay somebody to
* Christopher Kings-Lynne ([EMAIL PROTECTED]) wrote:
> PostgreSQL has replication, but not partitioning (which is what you want).
It doesn't have multi-server partitioning.. It's got partitioning
within a single server (doesn't it? I thought it did, I know it was
discussed w/ the guy from Cox Co
So what we would like to get is a pool of small servers able to make one
virtual server ... that is what is called a Cluster ... no ?
I know they are not using PostgreSQL ... but how does a company like Google
get such an incredibly large database with such quick access ?
You could use dblink with mu
Christopher Kings-Lynne wrote:
Or you could fork over hundreds of thousands of dollars for Oracle's
RAC.
No please do not talk about this again ... I'm looking about a
PostgreSQL solution ... I know RAC ... and I'm not able to pay for a
RAC certify hardware configuration plus a RAC Licence.
Th
On Thursday 20 January 2005 16:05, Joshua D. Drake wrote:
> Christopher Kings-Lynne wrote:
> >>> Or you could fork over hundreds of thousands of dollars for Oracle's
> >>> RAC.
> >>
> >> No please do not talk about this again ... I'm looking about a
> >> PostgreSQL solution ... I know RAC ... and
* Hervé Piedvache ([EMAIL PROTECTED]) wrote:
> I know they are not using PostgreSQL ... but how a company like Google do to
> get an incredible database in size and so quick access ?
They segment their data across multiple machines and have an algorithm
which tells the application layer which mac
Hervé Piedvache wrote:
No ... as I have said ... how will I manage a database with a table of maybe
250,000,000 records ? I'll need incredible servers ... to get quick access or
index reads ... no ?
So what we would like to get is a pool of small servers able to make one
virtual server ...
then I was thinking. Couldn't he use
multiple databases
over multiple servers with dblink?
It is not exactly how I would want to do it, but it would provide what
he needs I think???
Yes, it seems to be the only solution ... but I'm a little disappointed about
this ... could you explain to me why the
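For what it's worth, a minimal sketch of the dblink idea mentioned above
(contrib/dblink must be installed; the host names, table, and columns here are
illustrative, not from the thread):

  SELECT * FROM dblink('host=node1 dbname=warehouse',
                       'SELECT id, amount FROM orders')
           AS t1(id integer, amount numeric)
  UNION ALL
  SELECT * FROM dblink('host=node2 dbname=warehouse',
                       'SELECT id, amount FROM orders')
           AS t2(id integer, amount numeric);

The application (or a view like the one above) has to know which server holds
which rows; dblink does not do that routing for you.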
> No please do not talk about this again ... I'm looking about a PostgreSQL
> solution ... I know RAC ... and I'm not able to pay for a RAC certify
> hardware configuration plus a RAC Licence.
Are you totally certain you can't solve your problem with a single server
solution?
How about:
Price ou
On Thu, 20 Jan 2005, Dan Langille wrote:
> On 20 Jan 2005 at 6:14, Stephan Szabo wrote:
>
> > On Wed, 19 Jan 2005, Dan Langille wrote:
> >
> > > Hi folks,
> > >
> > > Running on 7.4.2, recently vacuum analysed the three tables in
> > > question.
> > >
> > > The query plan in question changes drama
Andrei Bintintan wrote:
If you're using this to provide "pages" of results, could you use a
cursor?
What do you mean by that? Cursor?
Yes, I'm using this to provide "pages", but if I jump to the last pages
it goes very slow.
DECLARE mycursor CURSOR FOR SELECT * FROM ...
FETCH FORWARD 10 IN mycursor
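Spelled out a little more (table and column names are illustrative; a cursor
normally lives only inside its transaction unless declared WITH HOLD):

  BEGIN;
  DECLARE mycursor CURSOR FOR SELECT * FROM mytable ORDER BY id;
  FETCH FORWARD 10 FROM mycursor;   -- first page
  FETCH FORWARD 10 FROM mycursor;   -- next page
  MOVE FORWARD 100 IN mycursor;     -- skip ahead without returning rows
  CLOSE mycursor;
  COMMIT;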
Google uses something called the Google File System; look it up on
Google. It is a distributed file system.
Dave
Hervé Piedvache wrote:
Joshua,
On Thursday 20 January 2005 15:44, Joshua D. Drake wrote:
Hervé Piedvache wrote:
My company, which I actually represent,
On Thursday 20 January 2005 16:14, Steve Wampler wrote:
> Once you've got the data partitioned, the question becomes one of
> how to enhance performance/scalability. Have you considered RAIDb?
No, but it seems to be very interesting ... close to Joshua's explanation ...
but automatically don
On Thursday 20 January 2005 16:23, Dave Cramer wrote:
> Google uses something called the Google File System; look it up on
> Google. It is a distributed file system.
Yes, that's another point I'm working on ... make a cluster of servers using
GFS ... and make PostgreSQL run on it ...
But I
On Thursday 20 January 2005 16:16, Merlin Moncure wrote:
> > No please do not talk about this again ... I'm looking about a PostgreSQL
> > solution ... I know RAC ... and I'm not able to pay for a RAC certify
> > hardware configuration plus a RAC Licence.
>
> Are you totally certain you can't solve
On 20 Jan 2005 at 7:26, Stephan Szabo wrote:
> On Thu, 20 Jan 2005, Dan Langille wrote:
>
> > On 20 Jan 2005 at 6:14, Stephan Szabo wrote:
> >
> > > On Wed, 19 Jan 2005, Dan Langille wrote:
> > >
> > > > Hi folks,
> > > >
> > > > Running on 7.4.2, recently vacuum analysed the three tables in
> >
I'm dealing with a big database [3.8 GB] with about 3 million records. Some of the
queries seem to be slow even though there are just a few users at night. I would like
to know which parameter in the list below is most effective in raising the speed of
these queries?
Shmmax = 32384*8192 = 265289728
Share buffer = 3238
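For reference, a hedged sketch of the postgresql.conf settings usually looked
at first for this kind of tuning (the values below are purely illustrative, not
recommendations; in 7.4, shared_buffers and effective_cache_size are counted in
8 kB pages, and sort_mem is the 7.4 name for what 8.0 calls work_mem):

  shared_buffers = 16384          # ~128 MB of shared buffer cache
  effective_cache_size = 131072   # ~1 GB; how much the OS is expected to cache
  sort_mem = 8192                 # 8 MB per sort, per backend
  random_page_cost = 3            # lower if the working set mostly fits in RAM

The kernel's SHMMAX only has to be large enough for the shared memory segment;
raising it does not by itself make anything faster.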
Hervé Piedvache wrote:
On Thursday 20 January 2005 16:23, Dave Cramer wrote:
Google uses something called the google filesystem, look it up in
google. It is a distributed file system.
Yes that's another point I'm working on ... make a cluster of server using
GFS ... and making PostgreSQL running
Probably by carefully partitioning their data. I can't imagine anything
being fast on a single table in 250,000,000 tuple range. Nor can I
really imagine any database that efficiently splits a single table
across multiple machines (or even inefficiently unless some internal
partitioning is being
Christopher Kings-Lynne wrote:
Probably by carefully partitioning their data. I can't imagine anything
being fast on a single table in 250,000,000 tuple range. Nor can I
really imagine any database that efficiently splits a single table
across multiple machines (or even inefficiently unless some
I think maybe a SAN in conjunction with tablespaces might be the answer.
Still need one honking server.
Rick
Stephen Frost
[EMAIL PROTECTED] wrote:
I'm dealing with big database [3.8 Gb] and records of 3 millions . Some of the
query seems to be slow eventhough just a few users in the night. I would like
to know which parameter list below is most effective in rising the speed of
these queries?
Shmmax = 32384*8192 =2652
On Thu, 2005-01-20 at 15:36 +0100, Hervé Piedvache wrote:
> On Thursday 20 January 2005 15:24, Christopher Kings-Lynne wrote:
> > > Is there any solution with PostgreSQL matching these needs ... ?
> >
> > You want: http://www.slony.info/
> >
> > > Do we have to backport our development to MySQL for
I have never seen benchmarks for RAID 0+1. Very few people use it
because it's not very fault tolerant, so I couldn't answer for sure.
I would imagine that RAID 0+1 could achieve better read throughput
because you could, in theory, read from each half of the mirror
independently. Write would be
What you want is some kind of huge parallel computing, isn't it? I have heard
that many groups of Japanese Pgsql developers have done it, but they are talking
on Japanese websites and of course in Japanese.
I can name one of them, "Asushi Mitani", and his website
http://www.csra.co.jp/~mitani/jpug/pgclust
* [EMAIL PROTECTED] ([EMAIL PROTECTED]) wrote:
> I think maybe a SAN in conjunction with tablespaces might be the answer.
> Still need one honking server.
That's interesting - can a PostgreSQL partition be across multiple
tablespaces?
Stephen
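For context, tablespaces are new in 8.0; a single table lives in exactly one
tablespace, although its indexes can be placed in a different one. A minimal
sketch (paths and names are illustrative):

  CREATE TABLESPACE fastdisk LOCATION '/mnt/san1/pgdata';
  CREATE TABLE orders (id serial PRIMARY KEY, amount numeric) TABLESPACE fastdisk;
  CREATE INDEX orders_amount_idx ON orders (amount) TABLESPACE pg_default;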
I am curious - I wasn't aware that postgresql supported partitioned tables.
Could someone point me to the docs on this?
Thanks,
Alex Turner
NetEconomist
On Thu, 20 Jan 2005 09:26:03 -0500, Stephen Frost <[EMAIL PROTECTED]> wrote:
> * Matt Casters ([EMAIL PROTECTED]) wrote:
> > I have the go ahe
"Matt Casters" <[EMAIL PROTECTED]> writes:
> I've been reading up on partitioned tables on pgsql; will the performance
> benefit be comparable to Oracle partitioned tables?
Postgres doesn't have any built-in support for partitioned tables. You can do
it the same way people did it on Oracle u
I am also very interested in this very question. Is there any way to
declare a persistent cursor that remains open between pg sessions?
This would be better than a temp table because you would not have to
do the initial select and insert into a fresh table and incur those IO
costs, which are oft
Steve Wampler <[EMAIL PROTECTED]> writes:
> Hervé Piedvache wrote:
>
> > No ... as I have said ... how I'll manage a database getting a table of may
> > be 250 000 000 records ? I'll need incredible servers ... to get quick
> > access
> > or index reading ... no ?
>
> Probably by carefully part
Richard Huxton wrote:
If you've got a web-application then you'll probably want to insert the
results into a cache table for later use.
If I have quite a bit of activity like this (people selecting 1 out
of a few million rows and paging through them in a web browser), would
it be good to have
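A minimal sketch of the cache-table idea, assuming a hypothetical items table
and an application-supplied session key:

  CREATE TABLE result_cache (
      session_id text,
      sort_key   text,
      item_id    integer
  );
  CREATE INDEX result_cache_idx ON result_cache (session_id, sort_key);

  -- run the expensive query once and keep only the ids
  INSERT INTO result_cache
  SELECT 'sess-abc123', title, id FROM items WHERE category = 42;

  -- each page is then a cheap scan over the much smaller cached set
  SELECT i.*
  FROM result_cache c JOIN items i ON i.id = c.item_id
  WHERE c.session_id = 'sess-abc123'
  ORDER BY c.sort_key
  LIMIT 20 OFFSET 200;

Old sessions have to be purged periodically, and the cached ids can go stale
if the underlying rows change.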
Alex Turner wrote:
I am also very interesting in this very question.. Is there any way
to declare a persistant cursor that remains open between pg sessions?
Not sure how this would work. What do you do with multiple connections?
Only one can access the cursor, so which should it be?
This would b
"Andrei Bintintan" <[EMAIL PROTECTED]> writes:
> > If you're using this to provide "pages" of results, could you use a cursor?
> What do you mean by that? Cursor?
>
> Yes I'm using this to provide "pages", but If I jump to the last pages it goes
> very slow.
The best way to do pages for is not t
> I am also very interesting in this very question.. Is there any way to
> declare a persistant cursor that remains open between pg sessions?
> This would be better than a temp table because you would not have to
> do the initial select and insert into a fresh table and incur those IO
> costs, whic
Ron Mayer wrote:
Richard Huxton wrote:
If you've got a web-application then you'll probably want to insert
the results into a cache table for later use.
If I have quite a bit of activity like this (people selecting 1 out
of a few million rows and paging through them in a web browser), would
i
Hervé Piedvache wrote:
Sorry but I don't agree with this ... Slony is a replication solution ... I
don't need replication ... what will I do when my database will grow up to 50
Gb ... I'll need more than 50 Gb of RAM on each server ???
This solution is not very realistic for me ...
Have you confi
On Thu, 20 Jan 2005 16:32:27 +0100, Hervé Piedvache wrote:
> On Thursday 20 January 2005 16:23, Dave Cramer wrote:
>> Google uses something called the google filesystem, look it up in
>> google. It is a distributed file system.
>
> Yes that's another point I'm working on ... make a cluster of ser
The problem is very large amounts of data that need to be both read
and updated. If you replicate a system, you will need to
intelligently route the reads to the server that has the data in RAM
or you will always be hitting disk, which is slow. This kind of routing
AFAIK is not possible with curr
Greg Stark wrote:
"Andrei Bintintan" <[EMAIL PROTECTED]> writes:
If you're using this to provide "pages" of results, could you use a cursor?
What do you mean by that? Cursor?
Yes I'm using this to provide "pages", but If I jump to the last pages it goes
very slow.
The best way to do pages for is
On January 20, 2005 06:49 am, Joshua D. Drake wrote:
> Stephen Frost wrote:
> >* Hervé Piedvache ([EMAIL PROTECTED]) wrote:
> >>On Thursday 20 January 2005 15:30, Stephen Frost wrote:
> >>>* Hervé Piedvache ([EMAIL PROTECTED]) wrote:
> Is there any solution with PostgreSQL matching these needs
On January 20, 2005 06:51 am, Christopher Kings-Lynne wrote:
> >>>Sorry but I don't agree with this ... Slony is a replication solution
> >>> ... I don't need replication ... what will I do when my database will
> >>> grow up to 50 Gb ... I'll need more than 50 Gb of RAM on each server
> >>> ??? Th
Isn't this a prime example of when to use a servlet or something similar
in function? It will create the cursor, maintain it, and fetch against
it for a particular page.
Greg
-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 20, 2005 10:21 AM
To:
Randolf Richardson wrote:
While this doesn't exactly answer your question, I use this little
tidbit of information when "selling" people on PostgreSQL. PostgreSQL
was chosen over Oracle as the database to handle all of the .org TLDs
information. ...
Do you have a link for that informatio
Could you explain to us what you have in mind for that solution? I mean,
forget the PostgreSQL (or any other database) restrictions and explain to us
what this hardware would look like. Where would the data be stored?
I've something in mind for you, but first I need to understand your needs!
C ya.
Bruno Al
On Thu, 20 Jan 2005 09:33:42 -0800, Darcy Buskermolen
<[EMAIL PROTECTED]> wrote:
>
> Another option to consider would be pgmemcache. That way you just build the
> farm out of lots of large-memory, diskless boxes for keeping the whole
> database in memory across the whole cluster. More information on
On Thu, 2005-01-20 at 11:59 -0500, Greg Stark wrote:
> The best way to do pages is not to use offset or cursors but to use an
> index. This only works if you can enumerate all the sort orders the
> application might be using and can have an index on each of them.
>
> To do this the query woul
On Thu, 2005-01-20 at 19:12 +, Ragnar Hafstað wrote:
> On Thu, 2005-01-20 at 11:59 -0500, Greg Stark wrote:
>
> > The best way to do pages for is not to use offset or cursors but to use an
> > index. This only works if you can enumerate all the sort orders the
> > application might be using an
On January 20, 2005 10:42 am, Mitch Pirtle wrote:
> On Thu, 20 Jan 2005 09:33:42 -0800, Darcy Buskermolen
>
> <[EMAIL PROTECTED]> wrote:
> > Another Option to consider would be pgmemcache. that way you just build
> > the farm out of lots of large memory, diskless boxes for keeping the
> > whole da
> this will only work unchanged if the index is unique. imagine, for
> example, if you have more than 50 rows with the same value of col.
>
> one way to fix this is to use ORDER BY col,oid
nope! oid is
1. deprecated
2. not guaranteed to be unique even inside a (large) table.
Use a sequence inst
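To make the index approach concrete, a sketch with a unique id column as the
tiebreaker (the table, columns, and literal values are illustrative):

  CREATE INDEX items_title_id_idx ON items (title, id);

  -- first page
  SELECT * FROM items ORDER BY title, id LIMIT 50;

  -- next page: remember the (title, id) of the last row shown and continue after it
  SELECT * FROM items
  WHERE title > 'last title shown'
     OR (title = 'last title shown' AND id > 1234)
  ORDER BY title, id
  LIMIT 50;

Unlike OFFSET, the cost of fetching a page this way does not grow with how far
into the result set you are.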
Mitch Pirtle wrote:
Which brings up another question: why not just cluster at the hardware
layer? Get an external fiberchannel array, and cluster a bunch of dual
Opterons, all sharing that storage. In that sense you would be getting
one big PostgreSQL 'image' running across all of the servers.
This
On Thursday 20 January 2005 19:09, Bruno Almeida do Lago wrote:
> Could you explain us what do you have in mind for that solution? I mean,
> forget the PostgreSQL (or any other database) restrictions and explain us
> how this hardware would be. Where the data would be stored?
>
> I've something in
On Thu, 20 Jan 2005 12:13:17 -0700, Steve Wampler <[EMAIL PROTECTED]> wrote:
> Mitch Pirtle wrote:
> But that's not enough, because you're going to be running separate
> postgresql backends on the different hosts, and there are
> definitely consistency issues with trying to do that. So far as
> I
Hervé Piedvache <[EMAIL PROTECTED]> writes:
> On Thursday 20 January 2005 19:09, Bruno Almeida do Lago wrote:
> > Could you explain us what do you have in mind for that solution? I mean,
> > forget the PostgreSQL (or any other database) restrictions and explain us
> > how this hardware would be. W
Thanks Stephen,
My main concern is to get as much read performance on the disks as possible
on this given system. CPU is rarely a problem on a typical data warehouse
system, this one's not any different.
We basically have 2 RAID5 disk sets (300 GB and 150 GB) with a third one
coming along. (arou
Two-way Xeons are as fast as a single Opteron, and 150M rows isn't a big
deal.
Clustering isn't really the solution; I fail to see how clustering
actually helps, since it has to slow down file access.
Dave
Hervé Piedvache wrote:
On Thursday 20 January 2005 19:09, Bruno Almeida do Lago wrote:
> Regarding the hardware, for the moment we have only a dual Pentium Xeon
> 2.8GHz with 4 GB of RAM ... and we saw we had bad performance results ... so
> we are thinking about a new solution with maybe several servers (server
> design may vary from one to the other) ... to get a kind of cluster to
Matt Casters wrote:
Thanks Stephen,
My main concern is to get as much read performance on the disks as possible
on this given system. CPU is rarely a problem on a typical data warehouse
system, this one's not any different.
We basically have 2 RAID5 disk sets (300Gb) and 150Gb) with a third one
Hervé Piedvache wrote:
Regarding the hardware, for the moment we have only a dual Pentium Xeon
2.8GHz with 4 GB of RAM ... and we saw we had bad performance results ... so
we are thinking about a new solution with maybe several servers (server
design may vary from one to the other) ... to get a k
Joshua,
Actually that's a great idea!
I'll have to check if Solaris wants to play ball though.
We'll have to see as we don't have the new disks yet, ETA is next week.
Cheers,
Matt
-Original Message-
From: Joshua D. Drake [mailto:[EMAIL PROTECTED]
Sent: Thursday 20 January 2
How do you create a temporary view that has only a small subset of the
data from the DB in it? (Links to docs are fine - I can read ;). My
query isn't all that complex, and my number of records might be from
10 to 2k depending on how I implement it.
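If it helps, a minimal sketch (names are illustrative). Note that a view is
just a stored query, so it does not by itself materialise the 10-2k rows; a
temporary table does:

  CREATE TEMPORARY VIEW my_subset AS
      SELECT id, title FROM items WHERE owner_id = 42;

  -- or, to actually materialise the subset once per session:
  CREATE TEMPORARY TABLE my_subset_tab AS
      SELECT id, title FROM items WHERE owner_id = 42;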
Alex Turner
NetEconomist
On Thu, 20 Jan 2005
On Fri, 21 Jan 2005 02:36 am, Dan Langille wrote:
> On 20 Jan 2005 at 7:26, Stephan Szabo wrote:
[snip]
> > Honestly I expected it to be slower (which it was), but I figured it's
> > worth seeing what alternate plans it'll generate (specifically to see how
> > it cost a nested loop on that join to
Matt Casters wrote:
Hi,
My questions to the list are: has this sort of thing been attempted before? If
so, what were the performance results compared to Oracle?
I've been reading up on partitioned tables on pgsql; will the performance
benefit be comparable to Oracle partitioned tables?
What
Ron Mayer wrote:
http://research.microsoft.com/research/pubs/view.aspx?msr_tr_id=MSR-TR-2002-53
Wrong link...
http://research.microsoft.com/research/pubs/view.aspx?type=Technical%20Report&id=812
This is the one that discusses scalability, price, performance,
failover, power consumption, hardware
Merlin Moncure wrote:
...You need to build a bigger, faster box with lots of storage...
Clustering ...
B: will cost you more, not less
Is this still true when you get to 5-way or 17-way systems?
My (somewhat outdated) impression is that up to about 4-way systems
they're price competitive; but bey
I sometimes also think it's fun to point out that PostgreSQL has
bigger companies supporting its software - like this one:
http://www.fastware.com.au/docs/FujitsuSupportedPostgreSQLWhitePaper.pdf
with $43 billion revenue -- instead of those little companies
like MySQL AB or Oracle.
:)
:)
---
On Thu, Jan 20, 2005 at 11:31:29 -0500,
Alex Turner <[EMAIL PROTECTED]> wrote:
> I am curious - I wasn't aware that postgresql supported partitioned tables,
> Could someone point me to the docs on this.
Some people have been doing it using a union view. There isn't actually
a partition feature.
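A hedged sketch of that union-view approach (table names and the partitioning
scheme are illustrative); the application, or a rule on the view, has to route
inserts to the right child table:

  CREATE TABLE sales_2004 (sale_date date, amount numeric);
  CREATE TABLE sales_2005 (sale_date date, amount numeric);

  CREATE VIEW sales AS
      SELECT * FROM sales_2004
      UNION ALL
      SELECT * FROM sales_2005;

  SELECT sum(amount) FROM sales WHERE sale_date >= '2005-01-01';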
On Thu, Jan 20, 2005 at 11:14:28 +0100,
Bernd Heller <[EMAIL PROTECTED]> wrote:
>
> I wondered why the planner was making such bad assumptions about the
> number of rows to find and had a look at pg_stats, and there was the
> surprise:
> there is no entry in pg_stats for that column at all!! I
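For anyone following along, the check being described looks roughly like this
(the table name is illustrative; the column name follows the thread):

  SELECT null_frac, n_distinct, most_common_vals
  FROM pg_stats
  WHERE tablename = 'mytable' AND attname = 'purge_date';

  -- re-collect statistics for the table
  ANALYZE mytable;

If no row comes back even after ANALYZE, the planner is falling back on its
default guesses for that column.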
I was thinking the same! I'd like to know how other databases such as Oracle
do it.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Mitch Pirtle
Sent: Thursday, January 20, 2005 4:42 PM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] PostgreS
On 21 Jan 2005 at 8:38, Russell Smith wrote:
> On Fri, 21 Jan 2005 02:36 am, Dan Langille wrote:
> > On 20 Jan 2005 at 7:26, Stephan Szabo wrote:
>
> [snip]
> > > Honestly I expected it to be slower (which it was), but I figured
> > > it's worth seeing what alternate plans it'll generate
> > > (s
Bruno,
> Which brings up another question: why not just cluster at the hardware
> layer? Get an external fiberchannel array, and cluster a bunch of dual
> Opterons, all sharing that storage. In that sense you would be getting
> one big PostgreSQL 'image' running across all of the servers.
>
> Or i