On Wed, 2010-07-14 at 08:58 -0500, Kevin Grittner wrote:
> Scott Marlowe wrote:
> > Hannu Krosing wrote:
> >> One example where you need a separate connection pool is pooling a
> >> really large number of connections, which you may want to do on a
> >> host other than the one the database itself is running on.
Scott Marlowe wrote:
> Hannu Krosing wrote:
>> One example where you need a separate connection pool is pooling a
>> really large number of connections, which you may want to do on a
>> host other than the one the database itself is running on.
>
> Definitely. Often it's best placed on the individual webserver.
On Thu, Jul 8, 2010 at 11:48 PM, Hannu Krosing wrote:
> One example where you need a separate connection pool is pooling a really
> large number of connections, which you may want to do on a host other
> than the one the database itself is running on.
Definitely. Often it's best placed on the individual webserver.
On Fri, 2010-07-09 at 00:42 -0400, Tom Lane wrote:
> Samuel Gendler writes:
> > On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer
> > wrote:
> >> If you're not using a connection pool, start using one.
>
> > I see this issue and subsequent advice cross this list awfully
> > frequently. Is there an architectural reason why postgres itself
> > cannot pool incoming connections?
Craig Ringer wrote:
> It'll need to separate "running queries" from "running processes", or
> start threading backends, so that one way or the other a single query
> can benefit from the capabilities of multiple CPUs. The same separation,
> or a move to async I/O, might be needed to get one query t
Two problems to recognize. The first is that building something in has the
potential to significantly limit use, and therefore advancement of work
on external pools, because of the "let's use the built-in one instead of
installing something extra" mentality. I'd rather have a great external
Jesper Krogh wrote:
I don't think a built-in connection pooler (or similar) would in any
way limit the actions and abilities of an external one?
Two problems to recognize. The first is that building something in has the
potential to significantly limit use, and therefore advancement of work
on external pools
On 2010-07-10 00:59, Greg Smith wrote:
Matthew Wakeling wrote:
> If you have an external pool solution, you can put it somewhere
> else - maybe on multiple somewhere elses.
This is the key point to observe: if you're at the point where you
have so many connections that you need a pool, the last place you want
to put that is on the overloaded database server itself.
On 10/07/10 00:56, Brad Nicholson wrote:
On Fri, 2010-07-09 at 00:42 -0400, Tom Lane wrote:
Perhaps not, but there's no obvious benefit either. Since there's
More Than One Way To Do It, it seems more practical to keep that as a
separate problem that can be solved by a choice of add-on packages.
Greg Smith wrote:
> if you're at the point where you have so many connections that you
> need a pool, the last place you want to put that is on the
> overloaded database server itself. Therefore, it must be an
> external piece of software to be effective, rather than being part
> of the server
Matthew Wakeling wrote:
If you have an external pool solution, you can put it somewhere else -
maybe on multiple somewhere elses.
This is the key point to observe: if you're at the point where you have
so many connections that you need a pool, the last place you want to put
that is on the overloaded database server itself.
On Fri, Jul 9, 2010 at 12:42 AM, Tom Lane wrote:
> Samuel Gendler writes:
>> On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer
>> wrote:
>>> If you're not using a connection pool, start using one.
>
>> I see this issue and subsequent advice cross this list awfully
>> frequently. Is there an architectural reason why postgres itself
>> cannot pool incoming connections?
"Jorge Montero" wrote:
> If anything was built into the database to handle such connections,
> I'd recommend a big, bold warning, recommending the use of client-
> side pooling if available.
+1
-Kevin
If anything was built into the database to handle such connections, I'd recommend
a big, bold warning, recommending the use of client-side pooling if available.
For something like, say, a web server, pooling connections to the database
provides a massive performance advantage regardless of how good
Matthew Wakeling wrote:
> On Fri, 9 Jul 2010, Kevin Grittner wrote:
>>> Interesting idea. As far as I can see, you are suggesting
>>> solving the too many connections problem by allowing lots of
>>> connections, but only allowing a certain number to do anything
>>> at a time?
>>
>> Right.
>
> I think in some situations, this arrangement would
On Fri, 9 Jul 2010, Kevin Grittner wrote:
Interesting idea. As far as I can see, you are suggesting solving
the too many connections problem by allowing lots of connections,
but only allowing a certain number to do anything at a time?
Right.
I think in some situations, this arrangement would
If your app is running under Tomcat, connection pooling is extremely easy to
set up from there: it has connection pooling mechanisms built in. Request your
db connections using those mechanisms instead of opening them manually, make a
couple of changes to server.xml, and the problem goes away. Hundr
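For readers following along, a minimal sketch of the lookup side of that
setup, assuming a pooled JNDI resource named jdbc/mydb has already been
declared in Tomcat's server.xml/context.xml (the resource name and query
are illustrative, not from this thread):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PooledLookup {
    // Borrow a pooled connection from the container instead of opening
    // one manually with DriverManager; close() returns it to the pool
    // rather than tearing down the PostgreSQL backend.
    static int ping() throws Exception {
        DataSource ds = (DataSource) new InitialContext()
                .lookup("java:comp/env/jdbc/mydb");  // hypothetical name
        try (Connection conn = ds.getConnection();
             PreparedStatement st = conn.prepareStatement("SELECT 1");
             ResultSet rs = st.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}

The point of the design is that the pool's size, not the number of web
threads, bounds how many PostgreSQL backends exist.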
In case there's any doubt, the questions below aren't rhetorical.
Matthew Wakeling wrote:
> Interesting idea. As far as I can see, you are suggesting solving
> the too many connections problem by allowing lots of connections,
> but only allowing a certain number to do anything at a time?
Right.
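To make the arrangement concrete, here is a toy client-side sketch of the
idea in Java (the thread is about building this into the server itself, so
this is only an approximation of the concept): any number of connections
may be open, but a semaphore admits only a fixed number of queries at a
time.

import java.util.concurrent.Callable;
import java.util.concurrent.Semaphore;

public class AdmissionGate {
    // Many sessions may exist; only maxActive may do work at once.
    private final Semaphore active;

    public AdmissionGate(int maxActive) {
        this.active = new Semaphore(maxActive, true); // fair FIFO queueing
    }

    public <T> T run(Callable<T> query) throws Exception {
        active.acquire();            // wait for a free execution slot
        try {
            return query.call();     // run the actual query
        } finally {
            active.release();        // wake the next waiter
        }
    }
}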
On Fri, 9 Jul 2010, Kevin Grittner wrote:
Any thoughts on the "minimalist" solution I suggested a couple weeks
ago?:
http://archives.postgresql.org/pgsql-hackers/2010-06/msg01385.php
http://archives.postgresql.org/pgsql-hackers/2010-06/msg01387.php
So far, there has been no comment by anyone...
Brad Nicholson wrote:
> Just like replication, pooling has different approaches. I do
> think that in both cases, having a solution that works, easily,
> out of the "box" will meet the needs of most users.
Any thoughts on the "minimalist" solution I suggested a couple weeks
ago?:
http://arc
Thank you all for the replies.
I got a gist of where I should head:
I should rely partly on Postgres for performance and the rest on my
Tomcat application.
I will try connection pooling on the Postgres side.
And if I come back with any query (related to this topic), then this
time it will b
Otherwise I'm wondering if PostgreSQL will begin really suffering in
performance on workloads where queries are big and expensive but there
are relatively few of them running at a time.
Oh, I should note at this point that I'm *not* whining that "someone"
should volunteer to do this, or that "t
On 09/07/10 12:42, Tom Lane wrote:
> Samuel Gendler writes:
>> On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer
>> wrote:
>>> If you're not using a connection pool, start using one.
>
>> I see this issue and subsequent advice cross this list awfully
>> frequently. Is there an architectural reason why postgres itself
>> cannot pool incoming connections?
Samuel Gendler writes:
> On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer
> wrote:
>> If you're not using a connection pool, start using one.
> I see this issue and subsequent advice cross this list awfully
> frequently. Is there an architectural reason why postgres itself
> cannot pool incoming connections?
On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer
wrote:
> If you're not using a connection pool, start using one.
>
> Do you really need 100 *active* working query threads at one time? Because
> if you do, you're going to need a scary-big disk subsystem and a lot of
> processors.
I see this issue and
On 9/07/2010 3:20 AM, Harpreet singh Wadhwa wrote:
Hi,
I want to fine-tune my PostgreSQL to increase the number of connections it
can handle in a minute's time,
decrease the response time per request, etc.
The exact case will be to handle around 100 concurrent requests.
If you're not using a connection pool, start using one.
Harpreet singh Wadhwa wrote:
> I want to fine-tune my PostgreSQL to increase the number of connections
> it can handle in a minute's time,
> decrease the response time per request, etc.
> The exact case will be to handle around 100 concurrent requests.
I have found that connection pooling is crucial.
Hi,
I want to fine-tune my PostgreSQL to increase the number of connections it
can handle in a minute's time,
decrease the response time per request, etc.
The exact case will be to handle around 100 concurrent requests.
Can anyone please help me with this?
Any hardware suggestions are also welcome.
Regards
Robert Haas wrote:
On Mon, Mar 23, 2009 at 1:08 PM, Anne Rosset wrote:
enable_nestloop = off
That may be the source of your problem. Generally setting enable_* to
off is a debugging tool, not something you ever want to do in
production.
...Robert
Thanks Robert. It seems to have
On Mon, Mar 23, 2009 at 1:08 PM, Anne Rosset wrote:
> enable_nestloop = off
That may be the source of your problem. Generally setting enable_* to
off is a debugging tool, not something you ever want to do in
production.
...Robert
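As an aside for anyone acting on that advice: flip the setting only in the
session you are debugging, never in postgresql.conf. A sketch through JDBC
(the query string is illustrative):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class PlannerDebug {
    // Disable nested loops for this session only, inspect the plan,
    // then reset back to the default.
    static void explainWithoutNestloop(Connection conn, String query)
            throws Exception {
        try (Statement st = conn.createStatement()) {
            st.execute("SET enable_nestloop = off");  // session-local only
            // Note: EXPLAIN ANALYZE actually executes the query.
            try (ResultSet rs = st.executeQuery("EXPLAIN ANALYZE " + query)) {
                while (rs.next()) {
                    System.out.println(rs.getString(1)); // one plan line per row
                }
            }
            st.execute("RESET enable_nestloop");      // back to the default
        }
    }
}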
Tom Lane wrote:
Robert Haas writes:
On Fri, Mar 20, 2009 at 1:16 PM, Anne Rosset wrote:
The db version is 8.2.4
Something is wrong here. How can setting enable_seqscan to off result
in a plan with a far lower estimated cost than the original plan?
Planner bug no doubt ... given how old the PG release
On Fri, Mar 20, 2009 at 4:29 PM, Anne Rosset wrote:
> Alvaro Herrera wrote:
>> Robert Haas wrote:
>>> Something is wrong here. How can setting enable_seqscan to off result
>>> in a plan with a far lower estimated cost than the original plan? If
>>> the planner thought the non-seq-scan plan is cheaper, it would have
>>> picked that one to begin with.
Robert Haas writes:
> On Fri, Mar 20, 2009 at 1:16 PM, Anne Rosset wrote:
>> The db version is 8.2.4
> Something is wrong here. How can setting enable_seqscan to off result
> in a plan with a far lower estimated cost than the original plan?
Planner bug no doubt ... given how old the PG release
Alvaro Herrera wrote:
Robert Haas wrote:
Something is wrong here. How can setting enable_seqscan to off result
in a plan with a far lower estimated cost than the original plan? If
the planner thought the non-seq-scan plan is cheaper, it would have
picked that one to begin with.
Robert Haas wrote:
> Something is wrong here. How can setting enable_seqscan to off result
> in a plan with a far lower estimated cost than the original plan? If
> the planner thought the non-seq-scan plan is cheaper, it would have
> picked that one to begin with.
GEQO? Anne, what's geqo_threshold
On Fri, Mar 20, 2009 at 1:16 PM, Anne Rosset wrote:
> Richard Huxton wrote:
>> Anne Rosset wrote:
>>> EXPLAIN ANALYZE
>>> SELECT
>>> audit_change.id AS id,
>>> audit_change.audit_entry_id AS auditEntryId,
>>> audit_entry.object_id AS objectId,
>>> audit_change.property_name
Richard Huxton wrote:
Anne Rosset wrote:
EXPLAIN ANALYZE
SELECT
audit_change.id AS id,
audit_change.audit_entry_id AS auditEntryId,
audit_entry.object_id AS objectId,
audit_change.property_name AS propertyName,
audit_change.property_type AS propertyType,
audit_chang
Richard Huxton writes:
>> Hash Join  (cost=8.79..253664.55 rows=4 width=136) (actual
>> time=4612.674..6683.158 rows=4 loops=1)
>>   Hash Cond: ((audit_change.audit_entry_id)::text = (audit_entry.id)::text)
>>   ->  Seq Scan on audit_change  (cost=0.00..225212.52 rows=7584852
>> width=123) (actual tim
Anne Rosset wrote:
> EXPLAIN ANALYZE
> SELECT
> audit_change.id AS id,
> audit_change.audit_entry_id AS auditEntryId,
> audit_entry.object_id AS objectId,
> audit_change.property_name AS propertyName,
> audit_change.property_type AS propertyType,
> audit_change.old_v
Hi,
We have the following 2 tables:
\d audit_change
Table "public.audit_change"
Column | Type | Modifiers
++---
id | character varying(32) | not null
audit_entry_id | character varying(32) |
...
In
Hi Mark,
I, Rohan Pethkar, want to run some of the DBT2 tests on PostgreSQL. I have
downloaded the latest DBT2 tarball from http://git.postgresql.org .
I tried the steps that are in the INSTALL file. The cmake CMakeLists.txt
command runs fine. I tried to install DBT2 but am getting the following
exception whil
"Scott Marlowe" writes:
> Isn't it amazing how many small businesses won't buy from other small
> businesses?
It's entertaining that Dell is considered one of the "safe choices"
in this thread. They were a pretty small business not so long ago
(and remain a lot smaller than IBM or HP) ...
On Sat, 2008-12-13 at 19:16 -0700, Scott Marlowe wrote:
> Isn't it amazing how many small businesses won't buy from other small
> businesses? They'd much rather give their money to a company they
> don't like because "they'll be around a while" (the big company).
>
True enough!
Joshua D. Drake
On Sat, Dec 13, 2008 at 1:03 PM, Joshua D. Drake wrote:
> On Sat, 2008-12-13 at 12:57 -0700, Scott Marlowe wrote:
>> On Sat, Dec 13, 2008 at 12:47 PM, Joshua D. Drake
>> wrote:
>> > On Sat, 2008-12-13 at 12:45 -0700, Scott Marlowe wrote:
>> >> On Sat, Dec 13, 2008 at 11:37 AM, Joshua D. Drake
>
On Saturday, 13 December 2008, Scott Marlowe wrote:
> On Sat, Dec 13, 2008 at 11:37 AM, Joshua D. Drake
> wrote:
> > On Sat, 2008-12-13 at 07:44 -0800, da...@lang.hm wrote:
> >> On Sat, 13 Dec 2008, Robert Haas wrote:
> >
> >> > This may be a little off-topic, but I'd be interested in hearing more
On Sat, 2008-12-13 at 12:57 -0700, Scott Marlowe wrote:
> On Sat, Dec 13, 2008 at 12:47 PM, Joshua D. Drake
> wrote:
> > On Sat, 2008-12-13 at 12:45 -0700, Scott Marlowe wrote:
> >> On Sat, Dec 13, 2008 at 11:37 AM, Joshua D. Drake
> >> wrote:
> >> > On Sat, 2008-12-13 at 07:44 -0800, da...@lan
On Sat, Dec 13, 2008 at 12:47 PM, Joshua D. Drake
wrote:
> On Sat, 2008-12-13 at 12:45 -0700, Scott Marlowe wrote:
>> On Sat, Dec 13, 2008 at 11:37 AM, Joshua D. Drake
>> wrote:
>> > On Sat, 2008-12-13 at 07:44 -0800, da...@lang.hm wrote:
>> >> On Sat, 13 Dec 2008, Robert Haas wrote:
>
>> > htt
On Sat, 2008-12-13 at 12:45 -0700, Scott Marlowe wrote:
> On Sat, Dec 13, 2008 at 11:37 AM, Joshua D. Drake
> wrote:
> > On Sat, 2008-12-13 at 07:44 -0800, da...@lang.hm wrote:
> >> On Sat, 13 Dec 2008, Robert Haas wrote:
> > http://h71016.www7.hp.com/ctoBases.asp?oi=E9CED&BEID=19701&SBLID=&Prod
On Sat, Dec 13, 2008 at 11:37 AM, Joshua D. Drake
wrote:
> On Sat, 2008-12-13 at 07:44 -0800, da...@lang.hm wrote:
>> On Sat, 13 Dec 2008, Robert Haas wrote:
>
>> > This may be a little off-topic, but I'd be interested in hearing more
>> > details about how you (or others) would do this... manufacturer,
On Sat, 2008-12-13 at 07:44 -0800, da...@lang.hm wrote:
> On Sat, 13 Dec 2008, Robert Haas wrote:
> > This may be a little off-topic, but I'd be interested in hearing more
> > details about how you (or others) would do this... manufacturer,
> > model, configuration? How many hard drives do you need
On Sat, Dec 13, 2008 at 6:22 AM, Robert Haas wrote:
> On Tue, Dec 9, 2008 at 1:11 PM, Joshua D. Drake
> wrote:
>>> Those intel SSDs sound compelling. I've been waiting for SSDs to get
>>> competitive price and performance wise for a while, and when the
>>> intels came out and I read the first benchmarks I immediately began
>>> scheming.
On Sat, 13 Dec 2008, Robert Haas wrote:
On Tue, Dec 9, 2008 at 1:11 PM, Joshua D. Drake wrote:
Those intel SSDs sound compelling. I've been waiting for SSDs to get
competitive price and performance wise for a while, and when the
intels came out and I read the first benchmarks I immediately began
scheming.
On Tue, Dec 9, 2008 at 1:11 PM, Joshua D. Drake wrote:
>> Those intel SSDs sound compelling. I've been waiting for SSDs to get
>> competitive price and performance wise for a while, and when the
>> intels came out and I read the first benchmarks I immediately began
>> scheming. Sadly, that was r
On Tue, 9 Dec 2008, Scott Carey wrote:
My system is now CPU bound, the I/O can do sequential reads of more than
1.2GB/sec but Postgres can't do a seqscan 30% as fast because it eats up
CPU like crazy just reading and identifying tuples... In addition to the
fadvise patch, postgres needs to merge adjacent I/O's into larger ones to
reduce the overhead.
Scott Marlowe wrote:
involves tiny bits of data scattered throughout the database. Our
current database is about 20-25 Gig, which means it's quickly reaching
the point where it will not fit in our 32G of ram, and it's likely to
grow too big for 64Gig before a year or two is out.
...
I wonder how many hard drives it would take to be CPU bound on random
access patterns?
Tom,
Hmm ... I wonder whether this means that the current work on
parallelizing I/O (the posix_fadvise patch in particular) is a dead
end. Because what that is basically going to do is expend more CPU
to improve I/O efficiency. If you believe this thesis then that's
not the road we want to go
Greg Stark wrote:
On Sun, Dec 7, 2008 at 7:38 PM, Josh Berkus wrote:
Also, the following patches currently still have bugs, but when the bugs are
fixed I'll be looking for performance testers, so please either watch the
wiki or watch this space:
...
-- posix_fadvise (Gregory Stark)
Eh? Quite
I would expect higher shared_buffers to raise the curve before the first
breakpoint but after the first breakpoint make the drop steeper and deeper.
The equilibrium where the curve becomes flatter should be lower.
On SpecJAppserver specifically, I remember seeing a drop when the
database size
On Tue, 9 Dec 2008, Scott Carey wrote:
For what it is worth, you can roughly double to triple the iops of an Intel
X-25M on pure random reads if you queue up
multiple concurrent reads rather than serialize them. But it is not due to
spindles, it is due to the latency of the
SATA interface and
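A toy Java sketch of that queue-depth effect, with an illustrative file
path (use a file much larger than RAM, and compare inFlight = 1 against 8):
several positional reads in flight keep the device queue non-empty, which
is how the drive approaches its rated IOPS.

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

public class QueueDepth {
    public static void main(String[] args) throws Exception {
        final int inFlight = 8;          // concurrent readers; try 1 vs 8
        final int blockSize = 8192;
        try (FileChannel ch = FileChannel.open(Paths.get("/tmp/bigfile"),
                StandardOpenOption.READ)) {
            final long blocks = ch.size() / blockSize; // assumes a big file
            ExecutorService pool = Executors.newFixedThreadPool(inFlight);
            long t0 = System.nanoTime();
            for (int i = 0; i < inFlight; i++) {
                pool.submit(() -> {
                    ByteBuffer buf = ByteBuffer.allocate(blockSize);
                    for (int j = 0; j < 10_000; j++) {
                        long off = ThreadLocalRandom.current()
                                .nextLong(blocks) * blockSize;
                        buf.clear();
                        try {
                            ch.read(buf, off);  // positional read, i.e. pread
                        } catch (Exception e) {
                            return;
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);
            long ms = (System.nanoTime() - t0) / 1_000_000;
            System.out.println(inFlight * 10_000 + " reads in " + ms + " ms");
        }
    }
}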
, "Scott Carey" <[EMAIL PROTECTED]> wrote:
>
> From: [EMAIL PROTECTED] [EMAIL PROTECTED] On Behalf Of > Jean-David Beyer
> [EMAIL PROTECTED]
> Sent: Tuesday, December 09, 2008 5:08 AM
> To: pgsql-performance@postgresql.org
&
Just to clarify, I'm not talking about random I/O bound loads today, on hard
drives, targeted by the fadvise stuff - these aren't CPU bound, and they will
be helped by it.
For sequential scans, this situation is different, since the OS has sufficient
read-ahead prefetching algorithms of its own
> Well, when select count(1) reads pages slower than my disk, it's 16x+ slower
> than my RAM. Until one can demonstrate that the system can even read pages
> in RAM faster than what disks will do next year, it doesn't matter much that
> RAM is faster. It does matter that RAM is faster for sorts,
justin wrote:
Tom Lane wrote:
Hmm ... I wonder whether this means that the current work on
parallelizing I/O (the posix_fadvise patch in particular) is a dead
end. Because what that is basically going to do is expend more CPU
to improve I/O efficiency. If you believe this thesis then that's
not the road we want to go
Tom Lane wrote:
Scott Carey <[EMAIL PROTECTED]> writes:
Which brings this back around to the point I care the most about:
I/O per second will diminish as the most common database performance limiting
factor in Postgres 8.4's lifetime, and become almost irrelevant in 8.5's.
Becoming more CPU efficient will become very important, and for some, already
Matthew Wakeling <[EMAIL PROTECTED]> writes:
> On Tue, 9 Dec 2008, Scott Marlowe wrote:
>> I wonder how many hard drives it would take to be CPU bound on random
>> access patterns? About 40 to 60? And probably 15k / SAS drives to
>> boot. Cause that's what we're looking at in the next few years where I work.
Scott Carey <[EMAIL PROTECTED]> writes:
> And as far as I can tell, even after the 8.4 fadvise patch, all I/O is in
> block_size chunks. (hopefully I am wrong)
>...
> In addition to the fadvise patch, postgres needs to merge adjacent I/O's
> into larger ones to reduce the overhead. It only really
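The merging idea itself is easy to sketch in the abstract (an illustration
of the concept, not PostgreSQL code): sort the block numbers a scan will
need, then collapse consecutive runs into single larger reads.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class IoCoalesce {
    // Collapse sorted block numbers into [start, count] runs so that
    // adjacent blocks are fetched with one larger read each.
    // Assumes blocks is non-empty.
    static List<long[]> coalesce(long[] blocks) {
        Arrays.sort(blocks);
        List<long[]> runs = new ArrayList<>();
        long start = blocks[0];
        long count = 1;
        for (int i = 1; i < blocks.length; i++) {
            if (blocks[i] == start + count) {
                count++;                          // extends the current run
            } else {
                runs.add(new long[]{start, count});
                start = blocks[i];
                count = 1;
            }
        }
        runs.add(new long[]{start, count});
        return runs;
    }

    public static void main(String[] args) {
        // {7,3,4,5,9,10} becomes reads at 3 (len 3), 7 (len 1), 9 (len 2).
        for (long[] r : coalesce(new long[]{7, 3, 4, 5, 9, 10})) {
            System.out.println("read at block " + r[0] + ", length " + r[1]);
        }
    }
}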
Prefetch CPU cost should be rather low in the grand scheme of things, and does
help performance even for very fast I/O. I would not expect a very large CPU
use increase from that sort of patch - there is a
lot that is more expensive to do on a per-block basis.
The
Tom Lane wrote:
Scott Carey <[EMAIL PROTECTED]> writes:
Which brings this back around to the point I care the most about:
I/O per second will diminish as the most common database performance limiting
factor in Postgres 8.4's lifetime, and become almost irrelevant in 8.5's.
Becoming more CPU efficient will become very important, and for some, already
On Tue, 9 Dec 2008, Robert Haas wrote:
I don't believe the thesis. The gap between disk speeds and memory
speeds may narrow over time, but I doubt it's likely to disappear
altogether any time soon, and certainly not for all users.
I think the "not for all users" is the critical part. In 2 years
On Tue, 2008-12-09 at 17:38 -0500, Tom Lane wrote:
> Scott Carey <[EMAIL PROTECTED]> writes:
> > Which brings this back around to the point I care the most about:
> > I/O per second will diminish as the most common database performance
> > limiting factor in Postgres 8.4's lifetime, and become almost irrelevant
> > in 8.5's.
> Hmm ... I wonder whether this means that the current work on
> parallelizing I/O (the posix_fadvise patch in particular) is a dead
> end. Because what that is basically going to do is expend more CPU
> to improve I/O efficiency. If you believe this thesis then that's
> not the road we want to go
Scott Carey <[EMAIL PROTECTED]> writes:
> Which brings this back around to the point I care the most about:
> I/O per second will diminish as the most common database performance limiting
> factor in Postgres 8.4's lifetime, and become almost irrelevant in 8.5's.
> Becoming more CPU efficient will become very important, and for some, already
Which brings this back around to the point I care the most about:
I/O per second will diminish as the most common database performance limiting
factor in Postgres 8.4's lifetime, and become almost irrelevant in 8.5's.
Becoming more CPU efficient will become very important, and for some, already
On Tue, 2008-12-09 at 15:07 -0500, Merlin Moncure wrote:
> On Tue, Dec 9, 2008 at 1:11 PM, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> > Hard drives work, they're cheap and fast. I can get 25 spindles, 15k in a
> > 3U with controller and battery backed cache for <$10k.
>
> While I agree with your general sentiments about early adoption, etc
On Tue, Dec 9, 2008 at 1:11 PM, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> Hard drives work, they're cheap and fast. I can get 25 spindles, 15k in a
> 3U with controller and battery backed cache for <$10k.
While I agree with your general sentiments about early adoption, etc
(the intel ssd products
On Tue, 2008-12-09 at 11:08 -0700, Scott Marlowe wrote:
> On Tue, Dec 9, 2008 at 11:01 AM, Scott Carey <[EMAIL PROTECTED]> wrote:
> > Let me re-phrase this.
> >
> > For today, at 200GB or less of required space, and 500GB or less next year.
> >
> > "Where we're going, we don't NEED spindles."
>
>
On Tue, Dec 9, 2008 at 11:01 AM, Scott Carey <[EMAIL PROTECTED]> wrote:
> Let me re-phrase this.
>
> For today, at 200GB or less of required space, and 500GB or less next year.
>
> "Where we're going, we don't NEED spindles."
Those intel SSDs sound compelling. I've been waiting for SSDs to get
competitive price and performance wise for a while, and when the
intels came out and I read the first benchmarks I immediately began
scheming.
Let me re-phrase this.
For today, at 200GB or less of required space, and 500GB or less next year.
"Where we're going, we don't NEED spindles."
Seriously, go down to the store and get 6 X25-M's, they're as cheap as $550
each and will be sub $500 soon. These are more than sufficient for all but
On Tue, Dec 9, 2008 at 10:35 AM, Matthew Wakeling <[EMAIL PROTECTED]> wrote:
> On Tue, 9 Dec 2008, Scott Marlowe wrote:
>>
>> I wonder how many hard drives it would take to be CPU bound on random
>> access patterns? About 40 to 60? And probably 15k / SAS drives to
>> boot. Cause that's what we're looking at in the next few years where I work.
On Tue, 9 Dec 2008, Scott Marlowe wrote:
I wonder how many hard drives it would take to be CPU bound on random
access patterns? About 40 to 60? And probably 15k / SAS drives to
boot. Cause that's what we're looking at in the next few years where
I work.
There's a problem with that thinking.
> Lucky you, having needs that are fulfilled by sequential reads. :)
> I wonder how many hard drives it would take to be CPU bound on random
> access patterns? About 40 to 60? And probably 15k / SAS drives to
> boot. Cause that's what we're looking at in the next few years where
> I work.
Abo
On Tue, 2008-12-09 at 10:21 -0700, Scott Marlowe wrote:
> On Tue, Dec 9, 2008 at 9:37 AM, Scott Carey <[EMAIL PROTECTED]> wrote:
> Lucky you, having needs that are fulfilled by sequential reads. :)
>
> I wonder how many hard drives it would take to be CPU bound on random
> access patterns? About 40 to 60? And probably 15k / SAS drives to boot.
On Tue, Dec 9, 2008 at 9:37 AM, Scott Carey <[EMAIL PROTECTED]> wrote:
> As for tipping points and pg_bench -- It doesn't seem to reflect the kind of
> workload we use postgres for at all, though my workload does a lot of big
> hashes and seqscans, and I'm curious how much improved those may be
>
On Sun, Dec 7, 2008 at 7:38 PM, Josh Berkus <[EMAIL PROTECTED]> wrote:
>
> Also, the following patches currently still have bugs, but when the bugs are
> fixed I'll be looking for performance testers, so please either watch the
> wiki or watch this space:
>...
> -- posix_fadvise (Gregory Stark)
Eh
On Tuesday 09 December 2008 13:08:14 Jean-David Beyer wrote:
>
> and even if they can, I do not know if postgres uses that ability. I doubt
> it, since I believe (at least in Linux) a process can do that only if run
> as root, which I imagine few (if any) users do.
Disclaimer: I'm not a system programmer
Greg Smith wrote:
| On Mon, 8 Dec 2008, Merlin Moncure wrote:
|
|> I wonder if shared_buffers has any effect on how far you can go before
|> you hit the 'tipping point'.
|
| If your operating system has any reasonable caching itself, not so much at
|
Greg Smith <[EMAIL PROTECTED]> writes:
> On Mon, 8 Dec 2008, Merlin Moncure wrote:
>
>> I wonder if shared_buffers has any effect on how far you can go before
>> you hit the 'tipping point'.
>
> If your operating system has any reasonable caching itself, not so much at
> first. As long as the index on the account table fits in shared_buffers,
On Mon, 8 Dec 2008, Merlin Moncure wrote:
I wonder if shared_buffers has any effect on how far you can go before
you hit the 'tipping point'.
If your operating system has any reasonable caching itself, not so much at
first. As long as the index on the account table fits in shared_buffers,
e
On Mon, Dec 8, 2008 at 5:52 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Mon, 8 Dec 2008, Scott Marlowe wrote:
>
>> Well, I have 32 Gig of ram and wanted to test it against a database
>> that was at least twice as big as memory. I'm not sure why you'd
>> consider the results uninteresting though
On Mon, 8 Dec 2008, Scott Marlowe wrote:
Well, I have 32 Gig of ram and wanted to test it against a database
that was at least twice as big as memory. I'm not sure why you'd
consider the results uninteresting though, I'd think knowing how the
db will perform with a very large transactional store
On Mon, Dec 8, 2008 at 1:15 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Sun, 7 Dec 2008, Scott Marlowe wrote:
>
>> When I last used pgbench I wanted to test it with an extremely large
>> dataset, but it maxes out at -s 4xxx or so, and that's only in the
>> 40Gigabyte range. Is the limit raised
On Sun, 7 Dec 2008, Scott Marlowe wrote:
When I last used pgbench I wanted to test it with an extremely large
dataset, but it maxes out at -s 4xxx or so, and that's only in the
40Gigabyte range. Is the limit raised for the pgbench included in
contrib in 8.4? I'm guessing it's an arbitrary limit
I'll be glad to test the patches using pgbench on my POWER4 box
running AIX 5.3 and an IA64 that runs HP-UX 11.31.
Derek
On Dec 7, 2008, at 2:38 PM, Josh Berkus <[EMAIL PROTECTED]> wrote:
Database performance geeks,
We have a number of patches pending for 8.4 designed to improve
database performance in a variety of circumstances.
On Sun, Dec 7, 2008 at 12:38 PM, Josh Berkus <[EMAIL PROTECTED]> wrote:
I've got a pair of 8 core opteron 16 drive machines I would like to
test it on. If nothing else I'll just take queries from the log to
run against an 8.4 install. It'll have to be late at night though...
> If you are going
Josh,
Since a number of these performance patches use our hash function, would
it make sense to apply the last patch to upgrade the hash function mix()
to the two-function mix()/final()? Since the additional changes increase
the performance of the hash function by another 50% or so. My two cents.
Database performance geeks,
We have a number of patches pending for 8.4 designed to improve database
performance in a variety of circumstances. We need as many users as possible
to build test versions of PostgreSQL with these patches, and test how well
they perform, and report back in some detail.
[EMAIL PROTECTED] wrote:
The report contains more than 70,000 records, but it takes more
than half an hour and sometimes returns no result.
Please help me generate the report faster.
You'll need to provide some more information before anyone can help.
Something along the lines of: