On 2013-09-25 11:17:51 -0700, Jeff Janes wrote:
> On Wed, Sep 25, 2013 at 10:53 AM, Andres Freund wrote:
>
> > On 2013-09-25 00:06:06 -0700, Jeff Janes wrote:
> > > > On 09/20/2013 03:01 PM, Jeff Janes wrote:
> > > > > 3) Even worse, asking if a given transaction has finished yet can be a
On Wed, Sep 25, 2013 at 10:53 AM, Andres Freund wrote:
> On 2013-09-25 00:06:06 -0700, Jeff Janes wrote:
> > > On 09/20/2013 03:01 PM, Jeff Janes wrote:
> > > > 3) Even worse, asking if a given transaction has finished yet can be a
> > > > serious point of system-wide contention, because it takes the
> > > > ProcArrayLock, once per row which needs to be checked.
On Thu, Sep 19, 2013 at 1:47 PM, Josh Berkus wrote:
> 4. n-distinct will still be listed as -1 (unique) for the column.
>
> Why?
Because of this:
https://github.com/postgres/postgres/blob/master/src/backend/utils/adt/rangetypes_typanalyze.c#L205
We only collect and use histograms of the lower and upper bounds.
On 2013-09-25 00:06:06 -0700, Jeff Janes wrote:
> > On 09/20/2013 03:01 PM, Jeff Janes wrote:
> > > 3) Even worse, asking if a given transaction has finished yet can be a
> > > serious point of system-wide contention, because it takes the
> > > ProcArrayLock, once per row which needs to be checked.
On 09/25/2013 12:06 AM, Jeff Janes wrote:
>> Why do we need a procarraylock for this? Seems like the solution would
>> be not to take a lock at all; the information on transaction commit is
>> in the clog, after all.
>>
>
> My understanding is that you are not allowed to check the clog until afte
On 09/19/2013 01:47 PM, Josh Berkus wrote:
> Test:
>
> 1. create a table with a range type column.
> 2. insert 1000 identical values into that column.
> 3. analyze
> 4. n-distinct will still be listed as -1 (unique) for the column.
>
> Why?
>
Anyone?
--
Josh Berkus
PostgreSQL Experts Inc.
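Josh's four reproduction steps can be sketched in SQL. The table and column names below are placeholders, not from the thread, and the final query simply surfaces the statistic Josh is describing:

```sql
-- Hypothetical reproduction of the four steps above (names are placeholders).
CREATE TABLE range_test (r int4range);

-- Step 2: insert 1000 identical range values.
INSERT INTO range_test
SELECT int4range(1, 10) FROM generate_series(1, 1000);

-- Step 3: analyze.
ANALYZE range_test;

-- Step 4: per the thread, n_distinct is reported as -1 (unique)
-- despite every value being identical.
SELECT n_distinct FROM pg_stats
WHERE tablename = 'range_test' AND attname = 'r';
```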
On Wed, Sep 25, 2013 at 12:54 PM, Merlin Moncure wrote:
>> I was thinking an index over:
>>
>> (event, date_trunc('day', insert_time), log_id)
>>
>> And the query like
>>
>> SELECT min(log_id) FROM event_log
>> WHERE event='S-Create' AND
>> date_trunc('day',insert_time) = '2013-09-15'
>>
>>
>> Tha
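The quoted suggestion depends on an expression index whose expression matches the query text exactly. A minimal sketch, assuming the event_log definition described elsewhere in the thread (event text, insert_time timestamp, log_id bigint primary key); the index name is made up:

```sql
-- Sketch of the suggestion quoted above; event_log's definition is assumed.
CREATE INDEX event_log_event_day_idx
    ON event_log (event, date_trunc('day', insert_time), log_id);

-- The WHERE clause must repeat the indexed expression verbatim
-- for the planner to use the index:
SELECT min(log_id)
FROM event_log
WHERE event = 'S-Create'
  AND date_trunc('day', insert_time) = '2013-09-15';
```

Note this only works if insert_time is timestamp without time zone; date_trunc over timestamptz is not immutable and cannot appear in an index expression directly.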
> -Original Message-
> From: pgsql-performance-ow...@postgresql.org
> [mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Sam Wong
> Sent: Thursday, September 26, 2013 0:34
> To: 'postgres performance list'
> Subject: Re: [PERFORM] Slow plan for MAX/MIN or LIMIT 1?
>
> > -Orig
> -Original Message-
> From: pgsql-performance-ow...@postgresql.org
> [mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Merlin
> Moncure
> Sent: Wednesday, September 25, 2013 23:55
> To: Claudio Freire
> Cc: Sam Wong; postgres performance list
> Subject: Re: [PERFORM] Slow plan f
On Wed, Sep 25, 2013 at 8:58 AM, Jim Garrison wrote:
> I spent about a week optimizing a query in our performance-testing
> environment, which has hardware similar to production.
>
> I was able to refactor the query and reduce the runtime from hours to
> about 40 seconds, through the use of CTEs
On Wed, Sep 25, 2013 at 10:05 AM, AI Rumman wrote:
> Hi,
>
> I have a table with zip_code and latitude and longitude.
>
> \d zip_code_based_lng_lat
>         Table "public.zip_code_based_lng_lat"
>  Column |          Type          | Modifiers
> --------+------------------------+-----------
>  zip    | character varying(100) |
I spent about a week optimizing a query in our performance-testing environment,
which has hardware similar to production.
I was able to refactor the query and reduce the runtime from hours to about 40
seconds, through the use of CTEs and a couple of new indexes.
The database was rebuilt and ref
On Wed, Sep 25, 2013 at 10:20 AM, Claudio Freire wrote:
> On Wed, Sep 25, 2013 at 10:29 AM, Merlin Moncure wrote:
>> On Tue, Sep 24, 2013 at 4:56 PM, Claudio Freire
>> wrote:
>>> On Tue, Sep 24, 2013 at 6:24 AM, Sam Wong wrote:
>>>> This event_log table has 4 million rows.
>>>> “log_id” i
On Wed, Sep 25, 2013 at 10:29 AM, Merlin Moncure wrote:
> On Tue, Sep 24, 2013 at 4:56 PM, Claudio Freire
> wrote:
>> On Tue, Sep 24, 2013 at 6:24 AM, Sam Wong wrote:
>>> This event_log table has 4 million rows.
>>>
>>> “log_id” is the primary key (bigint),
>>>
>>> there is a composite index “e
Hi,
I have a table with zip_code and latitude and longitude.
\d zip_code_based_lng_lat
        Table "public.zip_code_based_lng_lat"
 Column |          Type          | Modifiers
--------+------------------------+-----------
 zip    | character varying(100) |
 state  | character varying(100) |
 city
On Tue, Sep 24, 2013 at 4:56 PM, Claudio Freire wrote:
> On Tue, Sep 24, 2013 at 6:24 AM, Sam Wong wrote:
>> This event_log table has 4 million rows.
>>
>> “log_id” is the primary key (bigint),
>>
>> there is a composite index “event_data_search” over (event::text,
>> insert_time::datetime).
>
>
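From the details quoted above, the table under discussion can be reconstructed roughly as follows. Column types are inferred from the thread ("log_id" bigint primary key; a composite index over event and insert_time), so treat this as an assumption rather than the actual DDL:

```sql
-- Hypothetical reconstruction of the event_log schema described above.
CREATE TABLE event_log (
    log_id      bigserial PRIMARY KEY,  -- bigint primary key, per the thread
    event       text,
    insert_time timestamp
);

-- The composite index named in the thread, over (event, insert_time).
CREATE INDEX event_data_search ON event_log (event, insert_time);
```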
Hi All and Merlin,
So here is the explain analyze output.
--
Query A -- single row output, but very slow query
--
SELECT min(log_id) FROM event_log
WHERE event='S-Create' AND
insert_time>'2013-09-15' and insert_time<'2013-09-16'
http://explain.depesz.com/s/3H5
Result (cost=134.48..134.4
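For context on why Query A can be slow: PostgreSQL generally plans min() over an indexed column as the equivalent of an ORDER BY ... LIMIT 1 scan of that index, filtering rows as it goes. A hedged illustration of that equivalent form:

```sql
-- Roughly how the planner treats min(log_id): scan the log_id index in
-- ascending order and return the first row that passes the filter. When few
-- early rows match event/insert_time, the scan reads and discards a large
-- part of the table before finding one.
SELECT log_id
FROM event_log
WHERE event = 'S-Create'
  AND insert_time > '2013-09-15'
  AND insert_time < '2013-09-16'
ORDER BY log_id
LIMIT 1;
```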
On Tue, Sep 24, 2013 at 10:43 AM, Josh Berkus wrote:
> On 09/24/2013 08:01 AM, jes...@krogh.cc wrote:
> > This stuff is a 9.2 feature, right? What was the original problem to be
> > addressed?
>
> Earlier, actually. 9.1? 9.0?
>
> The problem addressed was that, for tables with a "progressive" val