On 28/07/10 04:40, Tom Lane wrote:
> Robert Haas writes:
>> On Thu, Jul 22, 2010 at 5:29 PM, Andres Freund wrote:
The problem is harder for us because a backend can't switch identities
once it's been assigned to a database. I haven't heard an adequate
explanation of why that could
Robert Haas writes:
> On Tue, Jul 27, 2010 at 9:56 PM, Tom Lane wrote:
>> Other than the fork() cost itself and whatever authentication activity
>> there might be, practically all the startup cost of a new backend can be
>> seen as cache-populating activities. You'd have to redo all of that,
>>
Alvaro Herrera writes:
> Excerpts from Tom Lane's message of Tue Jul 27 20:05:02 -0400 2010:
>> Well, the issue you're hitting is that the executor is dividing the
>> query into batches to keep the size of the in-memory hash table below
>> work_mem. The planner should expect that and estimate the
Hello,
> the join column, lsid, is a poor choice for a join column as it is a
> long varchar value (avg length 101 characters) that only gets
> unique way out on the right hand side.
Would a join on a substring of the 'way out on the right hand side' (did you
mean 'rightmost characters' or 'only
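A minimal sketch of what such a substring-based join could look like, assuming the lsid values only become distinct in their trailing characters (everything except the column name lsid is invented, and the 20-character cutoff is arbitrary):

-- Hypothetical tables; only the column name lsid comes from the thread.
CREATE TABLE exp_object      (lsid varchar NOT NULL);
CREATE TABLE object_property (lsid varchar NOT NULL);

-- Optional expression indexes on the trailing 20 characters, usable by
-- nestloop/merge plans on the shortened key.
CREATE INDEX exp_object_lsid_tail
    ON exp_object ((substr(lsid, greatest(length(lsid) - 19, 1))));
CREATE INDEX object_property_lsid_tail
    ON object_property ((substr(lsid, greatest(length(lsid) - 19, 1))));

-- Join on the short tail, keeping the full comparison for correctness in
-- case two different lsids happen to share the same last 20 characters.
SELECT *
  FROM exp_object e
  JOIN object_property p
    ON substr(e.lsid, greatest(length(e.lsid) - 19, 1))
     = substr(p.lsid, greatest(length(p.lsid) - 19, 1))
   AND e.lsid = p.lsid;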
Excerpts from Tom Lane's message of Tue Jul 27 20:05:02 -0400 2010:
> Peter Hussey writes:
> > 2) How is work_mem used by a query execution?
>
> Well, the issue you're hitting is that the executor is dividing the
> query into batches to keep the size of the in-memory hash table below
> work_mem.
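One way to watch this happen (a sketch; the table and column names are placeholders for the real query) is to compare the Hash node's "Batches" figure in EXPLAIN ANALYZE before and after raising work_mem for the session:

SHOW work_mem;

EXPLAIN ANALYZE
SELECT *
  FROM big_table a
  JOIN other_table b ON a.key = b.key;
-- In the Hash node, "Batches: 1" means the whole hash table fit in work_mem;
-- "Batches: N" with N > 1 means the executor split the inner relation and is
-- spilling batch files to disk.

SET work_mem = '512MB';   -- affects only this session
EXPLAIN ANALYZE
SELECT *
  FROM big_table a
  JOIN other_table b ON a.key = b.key;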
On Tue, Jul 27, 2010 at 9:56 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Tue, Jul 27, 2010 at 4:40 PM, Tom Lane wrote:
>>> "Flushing them all" is not zero-cost; it's not too hard to believe that
>>> it could actually be slower than forking a clean new backend.
>
>> I'm not so sure I believe
Robert Haas writes:
> On Tue, Jul 27, 2010 at 4:40 PM, Tom Lane wrote:
>> "Flushing them all" is not zero-cost; it's not too hard to believe that
>> it could actually be slower than forking a clean new backend.
> I'm not so sure I believe that.
I'm not asserting it's true, just suggesting it's
On Tue, Jul 27, 2010 at 4:40 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Thu, Jul 22, 2010 at 5:29 PM, Andres Freund wrote:
The problem is harder for us because a backend can't switch identities
once it's been assigned to a database. I haven't heard an adequate
explanation of
Peter Hussey writes:
> I have spent the last couple of weeks digging into a Postgres performance
> problem that ultimately boiled down to this: the planner was choosing to
> use hash joins on a set of join keys that were much larger than the
> configured work_mem.
What Postgres version is this,
Hi,
On Tue, Jul 27, 2010 at 04:08:16PM -0700, Peter Hussey wrote:
> Now for the questions:
> 1) If we tell the customer to set his work_mem value to 500MB or 1GB in
> postgresql.conf, what problems might they see? The documentation and the
> guidelines we received from Rupinder Singh in support
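If the large setting is only needed for a handful of big report queries, a common alternative to raising it globally (a sketch; the role name is made up) is to scope it, since every sort and hash in every backend can separately claim up to work_mem:

-- For one session, around the problem query only:
SET work_mem = '1GB';
-- ... run the large hash join ...
RESET work_mem;

-- Or persistently for just the role that runs the reports (hypothetical role):
ALTER ROLE report_user SET work_mem = '1GB';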
I have spent the last couple of weeks digging into a Postgres performance
problem that ultimately boiled down to this: the planner was choosing to
use hash joins on a set of join keys that were much larger than the
configured work_mem. We found we could make the performance much better by
either
Yeb Havinga writes:
> Liviu Mirea-Ghiban wrote:
>> My question is: Why is it doing a Bitmap Heap Scan / Recheck Cond?
>> I've executed dozens of such queries and not once did the rechecking
>> remove any rows. Is there any way to disable it, or do you have any
>> other suggestions for optimizat
On 7/20/10 8:51 PM, kuopo wrote:
> Let me make my problem clearer. Here is a requirement to log data from a
> set of objects consistently. For example, the object may be a mobile
> phone and it will report its location every 30s. To record its
> historical trace, I create a table like
> CREATE TABL
Robert Haas writes:
> On Thu, Jul 22, 2010 at 5:29 PM, Andres Freund wrote:
>>> The problem is harder for us because a backend can't switch identities
>>> once it's been assigned to a database. I haven't heard an adequate
>>> explanation of why that couldn't be changed, though.
>> Possibly it m
Liviu Mirea-Ghiban wrote:
My question is: Why is it doing a Bitmap Heap Scan / Recheck Cond?
I've executed dozens of such queries and not once did the rechecking
remove any rows. Is there any way to disable it, or do you have any
other suggestions for optimizations (because I'm all out of ide
On Tue, Jul 20, 2010 at 9:51 PM, kuopo wrote:
> Let me make my problem clearer. Here is a requirement to log data from a
> set of objects consistently. For example, the object may be a mobile phone
> and it will report its location every 30s. To record its historical trace, I
> create a table like
Piotr Gasidło wrote:
EXPLAIN ANALYZE SELECT ...
Total runtime: 4.782 ms
Time: 25,970 ms
Vitalii Tymchyshyn wrote:
Actually it's 20ms, so I suspect your point about planning time is correct.
Craig Ringer wrote:
Oh, a comma-as-fraction-separator locale.
That makes sense. Thanks for the catc
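A sketch of how the two numbers line up: "Total runtime" is the executor's own measurement, while the "Time:" line is psql's \timing, measured at the client and printed here in a locale that uses the comma as the decimal separator.

\timing
EXPLAIN ANALYZE SELECT ...;      -- the original query (not shown in full here)
-- "Total runtime: 4.782 ms"     -- execution time reported by the server
-- "Time: 25,970 ms"             -- client-side elapsed time, i.e. about 26 ms;
--                               -- the ~21 ms gap is mostly parse/plan work
--                               -- plus the network round trip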
On Mon, 2010-07-26 at 14:34 -0400, Greg Smith wrote:
> Matthew Wakeling wrote:
> > Yeb also made the point - there are far too many points on that graph
> > to really tell what the average latency is. It'd be instructive to
> > have a few figures, like "only x% of requests took longer than y".
>
On Sat, Jul 24, 2010 at 2:23 AM, Craig Ringer wrote:
> On 24/07/10 01:28, Robert Haas wrote:
>
>> Well, if we could change the backends so that they could fully
>> reinitialize themselves (disconnect from a database to which they are
>> bound, etc.), I don't see why we couldn't use the Apache appr
Hello,
I have a simple table which has a cube column and a cube GiST index. The
column contains 3-dimensional points (not cubes or intervals). The
problem is that I'm getting very slow queries when I'm using the index.
The table has about 130,000 rows and is full-vacuumed after any
updates/i
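A minimal sketch of the kind of setup being described (the actual table, operators and query are cut off above, so all names here are assumptions; the cube type comes from the contrib module):

CREATE TABLE points3d (
    id   integer PRIMARY KEY,
    pos  cube NOT NULL               -- a 3-dimensional point stored as a cube
);

CREATE INDEX points3d_pos_gist ON points3d USING gist (pos);

-- A typical bounding-box search; the GiST index supports <@ and &&.
EXPLAIN ANALYZE
SELECT id
  FROM points3d
 WHERE pos <@ '(0,0,0),(10,10,10)'::cube;
-- A Bitmap Index Scan + Bitmap Heap Scan showing a "Recheck Cond" is normal:
-- the condition is re-applied at the heap, but it only actually discards rows
-- when the bitmap has gone lossy (for example because work_mem was too small
-- to keep per-tuple bitmaps).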
On Sat, 2010-07-24 at 14:36 +0800, Craig Ringer wrote:
> On 24/07/10 13:23, Greg Smith wrote:
> > Joshua Tolley wrote:
> >> Relatively minor, but it would be convenient to avoid having to query
> >> $external_pooler to determine the client_addr of an incoming connection.
> >>
> >
> > You sugges
On Fri, 2010-07-23 at 09:52 -0700, Joshua D. Drake wrote:
> On Thu, 2010-07-22 at 20:56 +0100, Hannu Krosing wrote:
> >
> > > Let's extend this shall we:
> > >
> > > Avoid adding yet another network hop
> >
> > PostgreSQL is multi-process, so you either have a separate "pooler
> > process" or n
Let me make my problem clearer. Here is a requirement to log data from a set
of objects consistently. For example, the object may be a mobile phone and it
will report its location every 30s. To record its historical trace, I create
a table like
CREATE TABLE log_table
(
id integer NOT NULL,
data_
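The CREATE TABLE is cut off above; purely as an illustration of the kind of 30-second location log being described (only the id column appears in the fragment, everything else here is an assumption):

CREATE TABLE log_table
(
  id         integer     NOT NULL,        -- object (e.g. phone) identifier
  logged_at  timestamptz NOT NULL,        -- one sample roughly every 30 s
  latitude   double precision,
  longitude  double precision,
  PRIMARY KEY (id, logged_at)             -- also serves "trace of one object"
);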
Whit Armstrong wrote:
> But there is no such risk to turning off write barriers?
Supposedly not:
http://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F
> Did you get a substantial performance boost from disabling write
> barriers?
"Kneringer, Armin" writes:
> I think I found a potential performance gain if the query planner were
> optimized. All tests have been performed with 8.4.1 (and earlier versions) on
> CentOS 5.3 (x64)
> The following query will run on my database (~250 GB) for ca. 1600 seconds
> and the sort
Thanks.
But there is no such risk to turning off write barriers?
I'm only specifying noatime for xfs at the moment.
Did you get a substantial performance boost from disabling write
barriers? like 10x or more like 2x?
Thanks,
Whit
On Tue, Jul 27, 2010 at 1:19 PM, Kevin Grittner wrote:
> "Kev
On Sat, Jul 17, 2010 at 4:50 AM, Srikanth wrote:
> I am sending you the query along with the execution plan. Please help
Looks to me like your biggest problem is right here:
"-> Seq Scan
on subs s (cost=0.00..5599.42 rows=58313 width=69) (actua
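The suggestion itself is cut off, so the sketch below may not be what was advised; it just shows one common response to a large sequential scan like this, with an invented column name (whether an index helps depends on how selective the condition on subs really is):

-- Hypothetical: assumes the plan filters or joins subs on some column such
-- as region_id, which does not appear in the truncated plan above.
CREATE INDEX subs_region_id_idx ON subs (region_id);
ANALYZE subs;   -- refresh statistics so the planner can consider the new index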
"Kevin Grittner" wrote:
> Basically, you should never use fsync unless you are OK with
> losing everything in the database server if you have an OS or
> hardware failure.
s/use/disable/
-Kevin
Whit Armstrong wrote:
> While we're on the topic, do you also disable fsync?
We only disable fsync during bulk loads, where we would be starting
over anyway if there was a failure. Basically, you should never use
fsync unless you are OK with losing everything in the database
server if you have
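For context (a sketch, not from the thread): fsync is a server-wide setting, so turning it off just for a bulk load means editing postgresql.conf and reloading, not issuing SET in the loading session:

SHOW fsync;          -- 'on' is the safe default
-- SET fsync = off;  -- rejected by the server: fsync can only be changed in
--                   -- postgresql.conf (followed by a reload); it is not a
--                   -- per-session parameter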
Kevin,
While we're on the topic, do you also disable fsync?
We use xfs with battery-backed raid as well. We have had no issues with xfs.
I'm curious whether anyone can comment on his experience (good or bad)
using xfs/battery-backed-cache/fsync=off.
Thanks,
Whit
On Tue, Jul 27, 2010 at 9:48 A
Mark Kirkwood wrote:
> Also xfs has seen quite a bit of development in these later
> kernels, any thoughts on that?
We've been using xfs for a few years now with good performance and
no problems other than needing to disable write barriers to get good
performance out of our battery-backed RAID