On Tue, Sep 24, 2013 at 3:35 AM, Tom Lane wrote:
> Kevin Grittner writes:
> > Are we talking about the probe for the end (or beginning) of an
> > index? If so, should we even care about visibility of the row
> > related to the most extreme index entry? Should we even go to the
> > heap during
On Wed, Sep 11, 2013 at 2:10 PM, Andres Freund wrote:
> On 2013-09-11 15:06:23 -0400, Andrew Dunstan wrote:
> >
> > One thing that this made me wonder is why we don't have
> transaction_timeout,
> > or maybe transaction_idle_timeout.
>
> Because it's harder than it sounds, at least if you want to
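For context, the closest existing knob is `statement_timeout`, which bounds a single statement rather than the whole transaction, which is part of why a separate `transaction_timeout` keeps coming up. A minimal sketch of the existing setting:

```sql
-- statement_timeout limits one statement, not the transaction as a whole;
-- an idle transaction holding locks is untouched by it.
SET statement_timeout = '30s';  -- abort any single statement running longer than 30s
```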
Hi
On Wed, Sep 25, 2013 at 1:30 AM, Jeff Janes wrote:
> On Tue, Sep 24, 2013 at 11:03 AM, didier wrote:
>
>>
>> As a matter of fact you get the same slow down after a rollback until
>> autovacuum, and if autovacuum can't keep up...
>>
>
> Have you experimentally verified the last part? btree
> As a matter of fact you get the same slow down after a rollback until
> autovacuum, and if autovacuum can't keep up...
Actually, this is not what we observe. The performance goes back to the
normal level immediately after committing or aborting the transaction.
On Wed, Sep 25, 2013 at 1:30 AM,
On Tue, Sep 24, 2013 at 11:03 AM, didier wrote:
> Hi
>
>
> On Tue, Sep 24, 2013 at 5:01 PM, wrote:
>
>>
>> Apparently it is waiting for locks; can't the check be made in a
>> "non-blocking" way, so that if it ends up waiting for a lock it just
>> assumes non-visible and moves on to the next non-bl
On Tue, Sep 24, 2013 at 6:24 AM, Sam Wong wrote:
> This event_log table has 4 million rows.
>
> “log_id” is the primary key (bigint),
>
> there is a composite index “event_data_search” over (event::text,
> insert_time::datetime).
I think you need to add log_id to that composite index to get pg t
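The suggestion above can be sketched in SQL. Table, column, and index names follow the post; the index name and exact column order here are assumptions:

```sql
-- Hypothetical replacement index: appending log_id as a trailing key column
-- lets a query ordered by log_id within an (event, insert_time) range be
-- served from the index without a separate sort.
CREATE INDEX event_data_search_with_id
    ON event_log (event, insert_time, log_id);
```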
On Tue, Sep 24, 2013 at 4:24 AM, Sam Wong wrote:
> Hi There,
>
>
>
> I have hit a query plan issue that I believe is a bug or an under-estimation,
> and would like to know whether it is known or whether there is any workaround…
>
>
>
> This event_log table has 4 million rows.
>
> “log_id” is the primary
Hi all,
Bringing up new slaves is a task we perform very frequently. Our
current process is:
1. launch a new instance, perform a pg_basebackup
2. start postgresql on the slave and let the slave recover from the archive
all the xlog files it missed and that got created during step 1.
3. let
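A minimal sketch of steps 1 and 2; the host name, user, and data directory are placeholder assumptions:

```shell
# Step 1: take a base backup of the master onto the new slave
# (placeholder host, replication role, and data directory).
pg_basebackup -h master.example.com -U replication \
    -D /var/lib/pgsql/data -P

# Step 2: start PostgreSQL on the slave; the restore_command configured in
# recovery.conf replays the archived WAL that accumulated during the backup.
pg_ctl -D /var/lib/pgsql/data start
```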
Hi
On Tue, Sep 24, 2013 at 5:01 PM, wrote:
>
> Apparently it is waiting for locks; can't the check be made in a
> "non-blocking" way, so that if it ends up waiting for a lock it just
> assumes non-visible and moves on to the next non-blocking one?
>
It's not the only reason, but you can get the same s
On 09/24/2013 08:01 AM, jes...@krogh.cc wrote:
> This stuff is a 9.2 feature right? What was the original problem to be
> adressed?
Earlier, actually. 9.1? 9.0?
The problem addressed was that, for tables with a "progressive" value
like a sequence or a timestamp, the planner tended to estimate 1
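The situation described above can be illustrated with a hypothetical query against such a table (table and column names are assumptions): statistics gathered before the newest rows arrived cover none of the recent value range, so the estimate collapses.

```sql
-- With stale statistics, insert_time values beyond the last-analyzed
-- maximum look nonexistent, so the planner may estimate ~1 row even
-- when the last few minutes contain thousands of rows.
SELECT *
FROM event_log
WHERE insert_time > now() - interval '10 minutes';
```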
> Kevin Grittner writes:
>> Are we talking about the probe for the end (or beginning) of an
>> index? If so, should we even care about visibility of the row
>> related to the most extreme index entry? Should we even go to the
>> heap during the plan phase?
>
> Consider the case where some tran
Kevin Grittner writes:
> Are we talking about the probe for the end (or beginning) of an
> index? If so, should we even care about visibility of the row
> related to the most extreme index entry? Should we even go to the
> heap during the plan phase?
Consider the case where some transaction i
Hi There,
I have hit a query plan issue that I believe is a bug or an under-estimation,
and would like to know whether it is known or whether there is any workaround.
This event_log table has 4 million rows.
"log_id" is the primary key (bigint),
there is a composite index "event_data_search" o