[GENERAL] duplicate key value violates unique constraint "pg_class_relname_nsp_index"

2015-05-27 Thread Mitu Verma
HI,

The following error is continuously seen with the PostgreSQL database which we are
using at a customer site.

Current Errors observed: 
./fm_db_VoiceReprocessing1/data/pg_log/postgresql-04.log:2015-04-04 01:00:16 
CESTERROR: duplicate key value violates unique constraint 
"pg_class_relname_nsp_index"


Any pointers on why these errors are coming? What is the meaning of duplicate
key value violates unique constraint "pg_class_relname_nsp_index"?
Is it due to some index corruption or a duplicate index? Please help.


The following tables are used frequently in our case:

CREATE TABLE AuditTrailLogEntry
(
event                    int2,
inNodeID                 VARCHAR(80),
inNodeName               VARCHAR(80),
sourceID                 VARCHAR(300),
inTime                   TIMESTAMP,   -- /MM/DD HH:MM:SS.mmm
outNodeID                VARCHAR(80),
outNodeName              VARCHAR(80),
destinationID            VARCHAR(300),
outTime                  TIMESTAMP,   -- /MM/DD HH:MM:SS.mmm
bytes                    bigint,
cdrs                     bigint,
tableIndex               bigint,      -- Unique key
noOfSubfilesInFile       bigint,
recordSequenceNumberList VARCHAR(1000),
primary key (tableindex)
) TABLESPACE MMDATA;


CREATE TABLE EventLogEntry
(
tableIndex    int4,        -- Unique key
object        VARCHAR(80),
method        VARCHAR(80),
bgwUser       VARCHAR(80),
time          CHAR(23),    -- /MM/DD HH:MM:SS.mmm
realUser      VARCHAR(80),
host          VARCHAR(80),
application   VARCHAR(80)
) TABLESPACE MMDATA;


Regards
Mitu



Re: [GENERAL] Can we simulate Oracle Flashback with pg_export_snapshot()?

2015-05-27 Thread Albe Laurenz
William Dunn wrote:
> In terms of benefit over a lagging replica Flashback has the benefit of being 
> transparent to the user
> (the user can query over the same database connection, etc), it does not 
> incur the full cost of having
> a replica...

Yes, Flashback (in all its forms) is something that PostgreSQL doesn't offer.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: duplicate key value violates unique constraint "pg_class_relname_nsp_index"

2015-05-27 Thread Albe Laurenz
Mitu Verma wrote:
> Following error is continuously seen with the postgreSQL database which we 
> are using at customer site.
> 
> Current Errors observed: 
> ./fm_db_VoiceReprocessing1/data/pg_log/postgresql-04.log:2015-04-04 01:00:16
> CESTERROR: duplicate key value violates unique constraint 
> "pg_class_relname_nsp_index"
> 
> Any pointers on why these errors are coming? What is the meaning of duplicate 
> key value violates
> unique constraint "pg_class_relname_nsp_index"
> 
> If it is due to some index corruption or duplicate index? Please help.

This is perfectly harmless from a data integrity point of view, it is just an 
error
message that the user gets when he or she tries to insert a row whose primary 
key
already exists in the table:

test=> CREATE TABLE test (id integer PRIMARY KEY, val text);
CREATE TABLE
test=> INSERT INTO test VALUES (1, 'one');
INSERT 0 1
test=> INSERT INTO test VALUES (2, 'one');
INSERT 0 1
test=> INSERT INTO test VALUES (1, 'two');
ERROR:  duplicate key value violates unique constraint "test_pkey"
DETAIL:  Key (id)=(1) already exists.

If you don't want user errors to be logged in the server log, you can
increase "log_min_messages" to "log".

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: duplicate key value violates unique constraint "pg_class_relname_nsp_index"

2015-05-27 Thread Pete Hollobon
On 27 May 2015 at 09:57, Albe Laurenz  wrote:
>
> Mitu Verma wrote:
> > Following error is continuously seen with the postgreSQL database which
> > we are using at customer site.
> >
> > Current Errors observed:
> > ./fm_db_VoiceReprocessing1/data/pg_log/postgresql-04.log:2015-04-04 01:00:16
> > CESTERROR: duplicate key value violates unique constraint
> > "pg_class_relname_nsp_index"
> >
> > Any pointers on why these errors are coming? What is the meaning of
> > duplicate key value violates unique constraint "pg_class_relname_nsp_index"
> >
> > If it is due to some index corruption or duplicate index? Please help.
>
> This is perfectly harmless from a data integrity point of view, it is
> just an error message that the user gets when he or she tries to insert
> a row whose primary key already exists in the table:

But the constraint here is pg_class_relname_nsp_index - the unique index on
pg_class (relname, relnamespace). You don't get that error if you try to
create a table that already exists. How could you end up with that error
(short of attempting to directly insert rows in pg_class)?


Re: [GENERAL] Re: duplicate key value violates unique constraint "pg_class_relname_nsp_index"

2015-05-27 Thread Mitu Verma
Hi,

I have tried to look at the postgres code (although I am new to this!) and found this:
declare unique index pg_class_relname_nsp_index 2663 on pg_class using 
btree(relname name_ops, relnamespace oid_ops)

Not sure why the ‘duplicate key value violates unique constraint 
"pg_class_relname_nsp_index"’ message is coming in the logs?
Any leads?

Regards
Mitu
From: Pete Hollobon [mailto:postg...@hollobon.com]
Sent: May 27, 2015 3:04 PM
To: Albe Laurenz
Cc: Mitu Verma; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Re: duplicate key value violates unique constraint 
"pg_class_relname_nsp_index"

On 27 May 2015 at 09:57, Albe Laurenz <laurenz.a...@wien.gv.at> wrote:
>
> Mitu Verma wrote:
> > Following error is continuously seen with the postgreSQL database which we 
> > are using at customer site.
> >
> > Current Errors observed: 
> > ./fm_db_VoiceReprocessing1/data/pg_log/postgresql-04.log:2015-04-04 01:00:16
> > CESTERROR: duplicate key value violates unique constraint 
> > "pg_class_relname_nsp_index"
> >
> > Any pointers on why these errors are coming? What is the meaning of 
> > duplicate key value violates
> > unique constraint "pg_class_relname_nsp_index"
> >
> > If it is due to some index corruption or duplicate index? Please help.
>
> This is perfectly harmless from a data integrity point of view, it is just an 
> error
> message that the user gets when he or she tries to insert a row whose primary 
> key
> already exists in the table:

But the constraint here is pg_class_relname_nsp_index - the unique index on 
pg_class (relname, relnamespace). You don't get that error if you try to create 
a table that already exists. How could you end up with that error (short of 
attempting to directly insert rows in pg_class)?


Re: [GENERAL] Re: duplicate key value violates unique constraint "pg_class_relname_nsp_index"

2015-05-27 Thread Albe Laurenz
> From: Pete Hollobon [mailto:postg...@hollobon.com]
> On 27 May 2015 at 09:57, Albe Laurenz  wrote:
>> Mitu Verma wrote:
>>> Following error is continuously seen with the postgreSQL database which we 
>>> are using at customer site.
>>>
>>> Current Errors observed: 
>>> ./fm_db_VoiceReprocessing1/data/pg_log/postgresql-04.log:2015-04-04 01:00:16
>>> CESTERROR: duplicate key value violates unique constraint 
>>> "pg_class_relname_nsp_index"
>>>
>>> Any pointers on why these errors are coming? What is the meaning of 
>>> duplicate key value violates
>>> unique constraint "pg_class_relname_nsp_index"
>>>
>>> If it is due to some index corruption or duplicate index? Please help.

>> This is perfectly harmless from a data integrity point of view, it is just 
>> an error
>> message that the user gets when he or she tries to insert a row whose 
>> primary key
>> already exists in the table:

> But the constraint here is pg_class_relname_nsp_index - the unique index on 
> pg_class (relname,
> relnamespace). You don't get that error if you try to create a table that 
> already exists. How could
> you end up with that error (short of attempting to directly insert rows in 
> pg_class)?

Oops, I didn't see that it was a system index.  My mistake.

Mitu Verma wrote:
> I have tried to see the postgres code(although new to this !) and found this.
> 
> declare unique index pg_class_relname_nsp_index 2663 on pg_class using 
> btree(relname name_ops,
> relnamespace oid_ops)
> 
> Not sure why ‘duplicate key value violates unique constraint 
> "pg_class_relname_nsp_index"’ message is
> coming in the logs ?
> 
> Any leads?

Yes, that could indeed be data corruption, unless somebody is messing around with
the system catalogs (can you exclude that?).

I would shut down the cluster right away and take a physical backup of the files
before doing more.

Have there been any crashes lately?

Do you have any idea what action triggers the error message?

If you try "REINDEX TABLE pg_class" as superuser, does the problem go away?

Yours,
Laurenz Albe


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] [plainto_tsquery] using OR

2015-05-27 Thread Sven R. Kunze

Hi everybody,

I struggle to find a clean solution in an application to safely parse 
the query input string of a user.


Currently, I am inclined to split the string with some magic regex and 
build it safely together using ' | '. That solution, however, is 
error-prone and hides the powerful features of tsquery that PostgreSQL 
could provide from the user.
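
Roughly, the kind of thing I am doing now can be expressed like this (just a
sketch with a simple whitespace split; the words are made up):

SELECT to_tsquery('english',
    array_to_string(
        regexp_split_to_array(trim('postgres full text'), '\s+'),
        ' | '));

This works for plain words, but breaks as soon as the input contains quotes or
tsquery operators, which is exactly what I would like to avoid.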


Is there a way to safely create a tsquery out of a user's input? 
Optionally, when the user types '&', '|', '!', '(' and ')' it could be 
interpreted like typical tsquery syntax BUT still create a valid tsquery.


Regards,

--
Sven R. Kunze
TBZ-PARIV GmbH, Bernsdorfer Str. 210-212, 09126 Chemnitz
Tel: +49 (0)371 33714721, Fax: +49 (0)371 5347920
e-mail: srku...@tbz-pariv.de
web: www.tbz-pariv.de

Geschäftsführer: Dr. Reiner Wohlgemuth
Sitz der Gesellschaft: Chemnitz
Registergericht: Chemnitz HRB 8543



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] duplicate key value violates unique constraint "pg_class_relname_nsp_index"

2015-05-27 Thread Melvin Davidson
What this indicates is that someone, or some thing, is trying to create a
table in a schema that already exists.
Here is the structure of pg_class_relname_nsp_index:
CREATE UNIQUE INDEX pg_class_relname_nsp_index ON pg_class USING btree
(relname, relnamespace)

What you should also see in the error log is a line immediately following
that error which shows you exactly what the conflict is, but you have not
provided that to us.
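
For example, the DETAIL line normally looks something like this (the key values
here are made up for illustration):

ERROR:  duplicate key value violates unique constraint "pg_class_relname_nsp_index"
DETAIL:  Key (relname, relnamespace)=(some_table, 2200) already exists.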

On Wed, May 27, 2015 at 4:19 AM, Mitu Verma  wrote:

>  HI,
>
>
>
> Following error is continuously seen with the postgreSQL database which we
> are using at customer site.
>
>
>
> Current Errors observed:
> ./fm_db_VoiceReprocessing1/data/pg_log/postgresql-04.log:2015-04-04
> 01:00:16 CESTERROR: duplicate key value violates unique constraint
> "pg_class_relname_nsp_index"
>
>
>
>
>
> Any pointers on why these errors are coming? What is the meaning of
> duplicate key value violates unique constraint "pg_class_relname_nsp_index"
>
> If it is due to some index corruption or duplicate index? Please help.
>
>
>
>
>
> Following tables are used frequently in our case
>
>
>
> CREATE TABLE AuditTrailLogEntry
>
> (
>
> event                    int2,
>
> inNodeID VARCHAR(80),
>
> inNodeName   VARCHAR(80),
>
> sourceID VARCHAR(300),
>
> inTime   TIMESTAMP,  -- /MM/DD HH:MM:SS.mmm
>
> outNodeID                VARCHAR(80),
>
> outNodeName  VARCHAR(80),
>
> destinationID            VARCHAR(300),
>
> outTime  TIMESTAMP,  -- /MM/DD HH:MM:SS.mmm
>
> bytesbigint,
>
> cdrs bigint,
>
> tableIndex   bigint,-- Unique key
>
> noOfSubfilesInFile   bigint,
>
> recordSequenceNumberList VARCHAR(1000),
>
> primary key (tableindex)
>
> ) TABLESPACE MMDATA;
>
>
>
>
>
> CREATE TABLE EventLogEntry
>
> (
>
> tableIndex    int4,   -- Unique key
>
> object        VARCHAR(80),
>
> method        VARCHAR(80),
>
> bgwUser   VARCHAR(80),
>
> time  CHAR(23), -- /MM/DD HH:MM:SS.mmm
>
> realUser  VARCHAR(80),
>
> host  VARCHAR(80),
>
> application   VARCHAR(80)
>
> ) TABLESPACE MMDATA;
>
>
>
>
>
> Regards
>
> Mitu
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] duplicate key value violates unique constraint "pg_class_relname_nsp_index"

2015-05-27 Thread Adrian Klaver

On 05/27/2015 06:05 AM, Melvin Davidson wrote:

What this indicates is that someone, or some thing, is trying to create
a table in a schema that already exists.


The error you see in that situation is:

postgres-2015-05-27 06:25:10.173 PDT-0ERROR:  relation "table1" already exists
postgres-2015-05-27 06:25:10.173 PDT-0STATEMENT:  create table table1 (i int);



Best guess is as Pete and Albe said, some user code is directly 
accessing pg_class or the index has been corrupted.



Here is the structure of pg_class_relname_nsp_index:
CREATE UNIQUE INDEX pg_class_relname_nsp_index ON pg_class USING btree
(relname, relnamespace)

What you should also see in the error log is a line immediately
following that error which shows you exactly what the conflict is, but
you have not provided that to us.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reg: BULK COLLECT

2015-05-27 Thread Andy Colson





On 05/25/2015 07:24 AM, Medhavi Mahansaria wrote:
 > Hello,
 >
 > I am porting my application from Oracle to PostgreSQL.
 >
 > We are using BULK COLLECT functionality of oracle.
 > How can i change the 'BULK COLLECT' fetch of the data from the cursor
 > to make it compatible for pg/plsql?
 >
 > A small example is as below (This is just an example and the query is
 > much more complex which returns a huge amount of data)
 >
 >
 > CREATE OR REPLACE FUNCTION abc() RETURNS VOID AS $body$
 >
 > DECLARE
 > l_data b%ROWTYPE;
 >
 > POPULATE_STATS CURSOR IS
 > (
 > SELECT * from a
 > )
 > ;  -- query returning a huge amount of data
 >
 > BEGIN
 >   OPEN POPULATE_STATS;
 >   LOOP
 >   FETCH POPULATE_STATS BULK COLLECT INTO l_data LIMIT 1000;
 > IF POPULATE_STATS%ROWCOUNT > 0
 > THEN
 > FORALL i IN 1..l_data.COUNT
 >   INSERT INTO b VALUES l_data(i);
 > END IF;
 >   IF NOT FOUND THEN EXIT; END IF;
 >   END LOOP;
 >   CLOSE POPULATE_STATS;
 > EXCEPTION
 >   WHEN OTHERS THEN
 >   CODE := SQLSTATE;
 >   MSG := SQLERRM;
 >   INSERT INTO tracker VALUES (CODE,MSG,LOCALTIMESTAMP);
 >   RAISE NOTICE 'SQLERRM';
 >   RAISE NOTICE '%', SQLSTATE;
 >   RAISE NOTICE '%', MSG;
 > END;
 > $body$
 > LANGUAGE PLPGSQL;
 >
 > How can i change the 'BULK COLLECT' fetch of the data from the cursor
 > to make it compatible for pg/plsql?
 >
 >
 > Thanks & Regards
 > Medhavi Mahansaria
 > Mailto: medhavi.mahansa...@tcs.com
 >

That seems like a pretty over-complicated version of:

insert into b select * from a;

Which is all you'll need in PG.  If it does something else, then I
failed to understand the stored proc.

-Andy




On 5/27/2015 12:52 AM, Medhavi Mahansaria wrote:
> Dear Andy,
>
> We are using bulk collect to enhance the performance as the data is huge.
>
> But as you said it is ideally insert into b select * from a;
>
> So now I am using the looping through query result option as Adrian
> suggested.
>
> http://www.postgresql.org/docs/9.4/interactive/plpgsql-control-structures.html#PLPGSQL-RECORDS-ITERATING
>
>
> Thank You Adrian.
>
>
> Thanks & Regards
> Medhavi Mahansaria




Did you time it?  I'll bet "insert into b select * from a" is the 
fastest method.
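
For comparison, the row-by-row loop from that control-structures page would look
roughly like this (only a sketch; the column names "id" and "payload" are made up):

CREATE OR REPLACE FUNCTION copy_a_to_b() RETURNS void AS $$
DECLARE
    rec RECORD;
BEGIN
    -- walk the source table and insert one row at a time
    FOR rec IN SELECT * FROM a LOOP
        INSERT INTO b (id, payload) VALUES (rec.id, rec.payload);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

It does the same work as the single INSERT ... SELECT, just one row at a time,
which is why I'd expect it to be slower.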


-Andy



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] duplicate key value violates unique constraint "pg_class_relname_nsp_index"

2015-05-27 Thread Tom Lane
Adrian Klaver  writes:
> On 05/27/2015 06:05 AM, Melvin Davidson wrote:
>> What this indicates is that someone, or some thing, is trying to create
>> a table in a schema that already exists.

> The error you see in that situation is:

> postgres-2015-05-27 06:25:10.173 PDT-0ERROR:  relation "table1" already 
> exists
> postgres-2015-05-27 06:25:10.173 PDT-0STATEMENT:  create table table1 (i 
> int);

> Best guess is as Pete and Albe said, some user code is directly 
> accessing pg_class or the index has been corrupted.

I don't think it's necessary to make such assumptions to explain the
errors.  What is more likely is that two sessions are trying to create
identically named tables at about the same time.  You do get the nice
user-friendly "already exists" error if the conflicting table was
committed before CREATE TABLE looks --- but in a concurrent-transactions
situation, neither CREATE TABLE will see the other's table as already
existing.  In race conditions like this, it's the unique index on the
catalog that is the duplicate-preventer of last resort, and it's going
to throw this error.
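
For illustration, a sequence like this (table name made up) reproduces it:

-- session 1:
BEGIN;
CREATE TABLE queue_tmp (id int);   -- succeeds, not yet committed

-- session 2, concurrently:
BEGIN;
CREATE TABLE queue_tmp (id int);   -- does not see session 1's uncommitted
                                   -- table, so no "already exists" error;
                                   -- it blocks on the pg_class index entry

-- session 1:
COMMIT;

-- session 2 then fails with:
-- ERROR:  duplicate key value violates unique constraint "pg_class_relname_nsp_index"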

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reg: BULK COLLECT

2015-05-27 Thread Simon Riggs
On 27 May 2015 at 06:52, Medhavi Mahansaria 
wrote:

>
> We are using bulk collect to enhance the performance as the data is huge.
>
>

Using BULK COLLECT speeds up retrieving rows into memory, but if the data
is huge that wasn't a great plan anyway, so doing it faster doesn't matter
much.

-- 
Simon Riggs    http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [GENERAL] delaying autovacuum freeze via storage params?

2015-05-27 Thread Steve Kehlet
On Tue, May 26, 2015 at 7:27 PM Alvaro Herrera 
wrote:

> See the docs about the freeze max age storage parameter -- the per-table
> setting can decrease the global setting but not increase it.


Thanks Alvaro, that explains it. I found it in the docs: "Note that
autovacuum will ignore attempts to set a per-table autovacuum_freeze_max_age
larger than the system-wide setting (it can only be set smaller)."
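
So for my case something like this (a sketch, table name made up) is the only
direction available, since only smaller per-table values are honored:

ALTER TABLE some_big_table SET (autovacuum_freeze_max_age = 100000000);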


[GENERAL] Python 3.2 XP64 and Numpy...

2015-05-27 Thread Rémi Cura
Hey dear List,

On a windows XP 64.

I installed python (64b),
it works.
Creating plpython3u works, and Python works within the database.

I installed numpy (manually compiled,64b),
it works outside of Postgres,
but inside a plpython3u function, simply doing
'import numpy' raises an error saying that python 32 is not a valid win32
application.

I'm really stuck and have checked everything I could (path, rights,
depends...)

Does somebody have an idea of how to make it work?

Thanks
Cheers,
Rémi-C


Re: [GENERAL] Allowing postgresql to accept 0xff syntax for data types that it makes sense for?

2015-05-27 Thread PT

Well, whether good or bad, my employer has nixed the idea of paying me
to work on this, and I don't have personal time right now to do it,
so it's not going to be addressed by me at this time.

-- 
Bill 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fwd: Raster performance

2015-05-27 Thread PT
On Tue, 26 May 2015 12:52:24 -0500
David Haynes II  wrote:

> Hello,
> 
> I have a question about the query optimizer and its performance on spatial
> datasets, specifically rasters. My use case is rather unique, the
> application that I am developing allows users to request summarizations of
> various geographic boundaries around the world. Therefore our raster
> datasets are global. We are in the process of conducting some benchmarks
> for our system and we noticed something unexpected.
> 
> The query is the same except the first is run on a raster (46gigs) in out
> of database (outdb) and the second is the same raster (46gigs) stored in
> database (indb). The raster is multibanded (13), with each band
> representing one entire MODIS global scene. A single year of MODIS is
> approximately 3.6 gigs.
> 
> The outdb is being out performed by indb, because the query optimizer gets
> smarter. But what is also interesting is all the extra pieces that are
> brought in with outdb.
> 
> with poly as
> ( SELECT gid, label as name, ST_Transform(geom, 6842) as geom  FROM
> us_counties )
> , rast_select as
> ( SELECT r.rid as id, s.name , ST_CLIP(r.rast, s.geom) as rast from
> rasters.glc2000 as r inner join poly as s on ST_Intersects(r.rast, s.geom) )
> select r.id, r.name, ST_Count(r.rast, 1, True)
> 
> 
>QUERY PLAN With Outdb
> --
> Sort   (cost=93911.29..93926.80 rows=6204 width=254)
>   Sort Key: r.id, r.name
>   CTE poly
> ->  Seq Scan on us_counties  (cost=0.00..112.86 rows=3109 width=62247)
>   CTE rast_select
> ->  Nested Loop  (cost=0.28..76131.41 rows=62033 width=1086)
>   ->  CTE Scan on poly s  (cost=0.00..62.18 rows=3109 width=250)
>   ->  Index Scan using modis_rast_gist on modis r_1
>  (cost=0.28..24.40 rows=2 width=836)
> Index Cond: ((rast)::geometry && s.geom)
> Filter: _st_intersects(s.geom, rast, NULL::integer)
>   ->  HashAggregate  (cost=17214.16..17276.20 rows=6204 width=254)
> ->  CTE Scan on rast_select r  (cost=0.00..1240.66 rows=62033
> width=254)
> 
> QUERY PLAN With Indb
> 
> -
> Sort   (cost=69547.29..69562.80 rows=6204 width=254)
>   Sort Key: r.id, r.name
>   CTE poly
> ->  Seq Scan on us_counties  (cost=0.00..112.86 rows=3109 width=62247)
>   CTE rast_select
> ->  Nested Loop  (cost=0.28..51767.41 rows=62033 width=272)
>   ->  CTE Scan on poly s  (cost=0.00..62.18 rows=3109 width=250)
>   ->  Index Scan using modis_noout_rast_gist on modis_noout r_1
>  (cost=0.28..16.56 rows=2 width=22)
> Index Cond: ((rast)::geometry && s.geom)
> Filter: _st_intersects(s.geom, rast, NULL::integer)
>   ->  HashAggregate  (cost=17214.16..17276.20 rows=6204 width=254)
> ->  CTE Scan on rast_select r  (cost=0.00..1240.66 rows=62033
> width=254)

I could be missing something here, but I don't see how long the queries
actually take to run. Have you actually run the queries and timed them?
Keep in mind that a plain EXPLAIN does not actually run the query, it only
plans it, so the actual run time is unknown if all you do is EXPLAIN.
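
For actual timings, something like this (a sketch against one of your tables)
both runs the query and reports real execution time and buffer usage:

EXPLAIN (ANALYZE, BUFFERS)
SELECT gid, label FROM us_counties LIMIT 10;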

The query plans appear to be equal, assuming there are slight variances
in the names of tables from one DB to another (and I assume that your
description of indb and outdb reflects the fact that there are (for
reasons unknown) two copies of the data).

The only purpose to those estimates is to choose a good plan. If the
plan is bad for one database and both databases have the same data, then
the plan will be bad for both.

Since there have been no other responses, I'm guessing that others are
confused by your question as well. Can you describe the actual problem
that you're seeing?

-- 
Bill Moran 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Constraint exclusion not working - need different constraints?

2015-05-27 Thread Doug Gorley
Good day,

I'm attempting to implement table partitioning with constraint exclusions,
but I'm not seeing the planner limit the plan to an appropriate set of
child tables.  I'm wondering if the functions in my constraints are making
the exclusion impossible.

My constraints look like this:

    ALTER TABLE response_data.queries_response_2015w23
  ADD CONSTRAINT queries_response_2015w23_timestamp_check
  CHECK (
 date_part('year'::text, "timestamp"::timestamp without time zone) 
= 2015::double precision AND
 date_part('week'::text, "timestamp"::timestamp without time zone) 
= 23::double precision
  );

And my query looks like this:

    explain select * from public.queries_response where 
age("timestamp"::timestamp) < '24 hours';
   
http://www.postgresql.org/docs/9.4/static/ddl-partitioning.html gives this 
caveat:

    "Constraint exclusion only works when the query's WHERE clause contains
    constants (or externally supplied parameters). For example, a comparison
    against a non-immutable function such as CURRENT_TIMESTAMP cannot be
    optimized, since the planner cannot know which partition the function
    value might fall into at run time."
   
Will I likely need to replace the date_part functions with actual dates to
make this work?

Thanks,
--
Doug Gorley | d...@gorley.ca


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-27 Thread Steve Kehlet
I have a database that was upgraded from 9.4.1 to 9.4.2 (no pg_upgrade, we
just dropped new binaries in place) but it wouldn't start up. I found this
in the logs:

waiting for server to start2015-05-27 13:13:00 PDT [27341]: [1-1] LOG:
 database system was shut down at 2015-05-27 13:12:55 PDT
2015-05-27 13:13:00 PDT [27342]: [1-1] FATAL:  the database system is
starting up
.2015-05-27 13:13:00 PDT [27341]: [2-1] FATAL:  could not access status of
transaction 1
2015-05-27 13:13:00 PDT [27341]: [3-1] DETAIL:  Could not open file
"pg_multixact/offsets/": No such file or directory.
2015-05-27 13:13:00 PDT [27340]: [1-1] LOG:  startup process (PID 27341)
exited with exit code 1
2015-05-27 13:13:00 PDT [27340]: [2-1] LOG:  aborting startup due to
startup process failure
 stopped waiting

I poked around in pg_multixact/offsets and there are lots of other files in
there, just not .

I tried launching postgres in standalone mode to try some debugging but it
yields the same error.

AFAICT, there's nothing wrong with hardware, it's a VM, and it's been
working okay. About a week ago we upgraded it from 9.3.5 to 9.4.1 via
pg_upgrade --link, which worked amazingly well.

I found [this report from a couple days ago](
https://bugs.archlinux.org/task/45071) from someone else that looks like
the same problem.

I put this box's [postgresql.conf up on a gist](
https://gist.github.com/skehlet/3589b0d83f2cafe19624).

What can I try next? Thanks!


Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-27 Thread Alvaro Herrera
Steve Kehlet wrote:
> I have a database that was upgraded from 9.4.1 to 9.4.2 (no pg_upgrade, we
> just dropped new binaries in place) but it wouldn't start up. I found this
> in the logs:
> 
> waiting for server to start2015-05-27 13:13:00 PDT [27341]: [1-1] LOG:
>  database system was shut down at 2015-05-27 13:12:55 PDT
> 2015-05-27 13:13:00 PDT [27342]: [1-1] FATAL:  the database system is
> starting up
> .2015-05-27 13:13:00 PDT [27341]: [2-1] FATAL:  could not access status of
> transaction 1

I am debugging today a problem currently that looks very similar to
this.  AFAICT the problem is that WAL replay of an online checkpoint in
which multixact files are removed fails because replay tries to read a
file that has already been removed.

(I was nervous about removing the check to omit reading pg_multixact
files while on recovery.  Looks like my hunch was right, though the
actual problem is not what I was fearing.)

I think the fix to this is to verify whether the file exists on disk
before reading it; if it doesn't, assume the truncation has already
happened and that it's not necessary to remove it.

> I found [this report from a couple days ago](
> https://bugs.archlinux.org/task/45071) from someone else that looks like
> the same problem.

Right :-(

I think a patch like this should be able to fix it ... not tested yet.

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
diff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c
index 9568ff1..bb8cbd7 100644
--- a/src/backend/access/transam/multixact.c
+++ b/src/backend/access/transam/multixact.c
@@ -2208,6 +2208,12 @@ SetMultiXactIdLimit(MultiXactId oldest_datminmxid, Oid oldest_datoid)
 	 * to one.  It will instead point to the multixact ID that will be
 	 * assigned the next time one is needed.
 	 *
+	 * Note that when this is called during xlog replay, the required files
+	 * might have already been removed, and it would be an error to try to read
+	 * them.  To work around this, we test the file for existance before trying
+	 * to read it; if the file doesn't exist, we just don't read it.  We trust
+	 * that a further call to this routine later will set things straight.
+	 *
 	 * NB: oldest_dataminmxid is the oldest multixact that might still be
 	 * referenced from a table, unlike in DetermineSafeOldestOffset, where we
 	 * do this same computation based on the oldest value that might still
@@ -2217,16 +2223,24 @@ SetMultiXactIdLimit(MultiXactId oldest_datminmxid, Oid oldest_datoid)
 	 * new multixacts, which requires the old ones to have first been
 	 * truncated away by a checkpoint.
 	 */
-	LWLockAcquire(MultiXactGenLock, LW_SHARED);
-	if (MultiXactState->nextMXact == oldest_datminmxid)
-	{
-		oldestOffset = MultiXactState->nextOffset;
-		LWLockRelease(MultiXactGenLock);
-	}
-	else
 	{
+		MultiXactId	nextMulti;
+		MultiXactOffset nextOffset;
+		int			pageno;
+
+		/* grab data that requires lock first */
+		LWLockAcquire(MultiXactGenLock, LW_SHARED);
+		nextMulti = MultiXactState->nextMXact;
+		nextOffset = MultiXactState->nextOffset;
 		LWLockRelease(MultiXactGenLock);
-		oldestOffset = find_multixact_start(oldest_datminmxid);
+
+		pageno = MultiXactIdToOffsetPage(oldest_datminmxid);
+
+		if ((nextMulti != oldest_datminmxid) &&
+			(!InRecovery || SimpleLruDoesPhysicalPageExist(pageno)))
+			oldestOffset = find_multixact_start(oldest_datminmxid);
+		else
+			oldestOffset = nextOffset;
 	}
 
 	/* Grab lock for just long enough to set the new limit values */

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Constraint exclusion not working - need different constraints?

2015-05-27 Thread Tom Lane
"=?utf-8?b?RG91ZyBHb3JsZXk=?="  writes:
> I'm attempting to implement table partitioning with constraint exclusions,
> but I'm not seeing the planner limit the plan to an appropriate set of
> child tables.  I'm wondering if the functions in my constraints are making
> the exclusion impossible.

> My constraints look like this:

>     ALTER TABLE response_data.queries_response_2015w23
>       ADD CONSTRAINT queries_response_2015w23_timestamp_check
>           CHECK (
>              date_part('year'::text, "timestamp"::timestamp 
> without time zone) = 2015::double precision AND
>              date_part('week'::text, "timestamp"::timestamp 
> without time zone) = 23::double precision
>           );

> And my query looks like this:

>     explain select * from public.queries_response where 
> age("timestamp"::timestamp) < '24 hours';
>    
> http://www.postgresql.org/docs/9.4/static/ddl-partitioning.html gives this 
> caveat:

>     "Constraint exclusion only works when the query's WHERE clause contains
>     constants (or externally supplied parameters). For example, a 
> comparison
>     against a non-immutable function such as CURRENT_TIMESTAMP cannot be
>     optimized, since the planner cannot know which partition the function
>     value might fall into at run time."
>    
> Will I likely need to replace the date_part functions with actual dates to
> make this work?

Well, you can't make it work like that, for sure.  The planner has no clue
that there's any connection between age() and date_part().  And if it did
fully understand that relationship, it still would refuse to do constraint
exclusion in this example, because the age() constraint is
current-timestamp-dependent.  It couldn't assume that now() when executing
the query is the same as it was when planning the query, so it wouldn't
know which partition to select.

Worse still, if I'm right in guessing that the timestamp column is
timestamp WITH time zone (if it isn't, why are you bothering with the
casts?) then the check constraints themselves aren't immutable, because
their effective behavior depends on the current setting of TimeZone.
So the planner will refuse to make any deductions at all with them.

You'd be much better off using child-table constraints like

  "timestamp" >= '2015-01-01' AND "timestamp" < '2015-01-08'

because the planner can reason about them.  But I'm afraid the age()
technique still loses.
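
For the partition mentioned above, that might look roughly like this (the
exact week boundaries are only illustrative):

ALTER TABLE response_data.queries_response_2015w23
  DROP CONSTRAINT queries_response_2015w23_timestamp_check,
  ADD CONSTRAINT queries_response_2015w23_timestamp_check
      CHECK ("timestamp" >= '2015-06-01' AND "timestamp" < '2015-06-08');

-- a query with a constant bound can then exclude non-matching children:
SELECT * FROM public.queries_response
WHERE "timestamp" >= '2015-05-27 00:00:00';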

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-27 Thread Steve Kehlet
On Wed, May 27, 2015 at 3:21 PM Alvaro Herrera 
wrote:

> I think a patch like this should be able to fix it ... not tested yet.
>

Thanks Alvaro. I got a compile error, so looked for other uses of
SimpleLruDoesPhysicalPageExist and added MultiXactOffsetCtl, does this look
right?

+  (!InRecovery || SimpleLruDoesPhysicalPageExist(MultiXactOffsetCtl, pageno)))

It compiled then, but it didn't seem to change anything: postgres gave the
same errors and still would not start up.

I'm recompiling it now just to be sure I didn't make any mistakes. I have
to leave for the day in a few minutes so I thought I'd check on the above
argument with you. Thanks again!


Re: [GENERAL] Python 3.2 XP64 and Numpy...

2015-05-27 Thread Adrian Klaver

On 05/27/2015 09:49 AM, Rémi Cura wrote:

Hey dear List,

On a windows XP 64.

I installed python (64b),
it works.


What version of Python 2 or 3 or both?

What does python -V show at the command line?


CReating plpython3u works, and python works within database.

I installed numpy (manually compiled,64b),
it works outside of Postgres,
but inside a plpython3u function, simply doing
'import numpy' raises an error saying that python 32 is not a valid
win32 application.


Is there a 32 bit version of numpy on your machine?



I'm really stuck and have checked everything I could (path, rights,
depends...)


Does that include PYTHONPATH?
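
One quick sanity check (just a sketch) is to report the interpreter details
from inside a plpython3u function and compare them with what you get outside
of Postgres:

CREATE OR REPLACE FUNCTION py_env() RETURNS text AS $$
import sys, platform
# report interpreter version, pointer size and module search path
return "%s | %s | %s" % (sys.version, platform.architecture()[0], sys.path)
$$ LANGUAGE plpython3u;

SELECT py_env();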



Somebody has an idea of how to make it works?

Thanks
Cheers,
Rémi-C



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-27 Thread Alvaro Herrera
Steve Kehlet wrote:
> On Wed, May 27, 2015 at 3:21 PM Alvaro Herrera 
> wrote:
> 
> > I think a patch like this should be able to fix it ... not tested yet.
> >
> 
> Thanks Alvaro. I got a compile error, so looked for other uses of
> SimpleLruDoesPhysicalPageExist and added MultiXactOffsetCtl, does this look
> right?
> 
> +  (!InRecovery || SimpleLruDoesPhysicalPageExist(MultiXactOffsetCtl,
> pageno)))

Meh, I sent you the wrong version of the patch.  Yeah, that's obviously
better.

> It compiled then, but it didn't seem to change anything: postgres gave the
> same errors and still would not start up.

I'm setting up a reproducer for the problem to verify that the patch
fixes it (evidently not).  Maybe I've messed up the conditional or
something else ...

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] ERROR: missing chunk number 0 for toast value 1821556134 in pg_toast_17881

2015-05-27 Thread Tory M Blue
Sorry I think I posted this to the wrong group, sent to performance vs
General. My apologies to everyone in perf!

Greetings and salutations.
>
> I've got some weirdness.
>
> Current:
> Postgres 9.3.4
> Slony 2.2.3
> CentOS 6.5
>
> Prior running Postgres 9.1.2 w/slony 2.1.3 CentOS 6.2
>
> I found that if I tried to run a vacuum full on 1 table that I recently
> reindexed (out of possibly 8 tables) that I get this error:
>
> # vacuum full table.ads;
>
> ERROR:  missing chunk number 0 for toast value 1821556134 in pg_toast_17881
>
> If I run a vacuum analyze it completes fine, but I can't run a vacuum full
> without it throwing an error. I seem to be able to query the table and I
> seem to be able to add data to the table and slony seems fine as does
> postgres.
>
> I'm unclear why the vacuum full is failing with this error. I've done some
> searching and there are hints to prior bugs, but I didn't catch anything in
> 9.3.3 to 9.3.7 that talks about this.
>
> My next steps without your fine assistance, will be to drop the table from
> slon and re-add it (meaning it will drop the table completely from this db
> and recreate it from the master (there we can do a vacuum full without
> failure)..
>
> I have already tried to remove the indexes and just create those, but no
> luck.
>
> Ideas?
>
> Thanks
>
> Tory
>
>
>
>


Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-27 Thread Robert Haas
On Wed, May 27, 2015 at 6:21 PM, Alvaro Herrera
 wrote:
> Steve Kehlet wrote:
>> I have a database that was upgraded from 9.4.1 to 9.4.2 (no pg_upgrade, we
>> just dropped new binaries in place) but it wouldn't start up. I found this
>> in the logs:
>>
>> waiting for server to start2015-05-27 13:13:00 PDT [27341]: [1-1] LOG:
>>  database system was shut down at 2015-05-27 13:12:55 PDT
>> 2015-05-27 13:13:00 PDT [27342]: [1-1] FATAL:  the database system is
>> starting up
>> .2015-05-27 13:13:00 PDT [27341]: [2-1] FATAL:  could not access status of
>> transaction 1
>
> I am debugging today a problem currently that looks very similar to
> this.  AFAICT the problem is that WAL replay of an online checkpoint in
> which multixact files are removed fails because replay tries to read a
> file that has already been removed.

Hmm, so what exactly is the sequence of events here?  It's possible
that I'm not thinking clearly just now, but it seems to me that if
we're replaying the same checkpoint we replayed previously, the offset
of the oldest multixact will be the first file that we didn't remove.
However, I can see that there could be a problem if we try to replay
an older checkpoint after having already replayed a new one - for
example, if a standby replays checkpoint A truncating the members
multixact and performs a restart point, and then replays checkpoint B
truncating the members multixact again but without performing a
restartpoint, and then is shut down, it will resume replay from
checkpoint A, and trouble will ensue.  Is that the scenario, or is
there something else?

> I think the fix to this is to verify whether the file exists on disk
> before reading it; if it doesn't, assume the truncation has already
> happened and that it's not necessary to remove it.

That might be an OK fix, but this implementation doesn't seem very
clean.  If we're going to remove the invariant that
MultiXactState->oldestOffset will always be valid after replaying a
checkpoint, then we should be explicit about that and add a flag
indicating whether or not it's currently valid.  Shoving nextOffset in
there and hoping that's good enough seems like a bad idea to me.

I think we should modify the API for find_multixact_start.  Let's have
it return a Boolean and return oldestOffset via an out parameter.  If
!InRecovery, it will always return true and set the out parameter; but
if in recovery, it is allowed to return false without setting the out
parameter.  Both values can get stored in MultiXactState, and we can
adjust the logic elsewhere to disregard oldestOffset when the
accompanying flag is false.

This still leaves open an ugly possibility: can we reach normal
running without a valid oldestOffset?  If so, until the next
checkpoint happens, autovacuum has no clue whether it needs to worry.
There's got to be a fix for that, but it escapes me at the moment.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-27 Thread Alvaro Herrera
Robert Haas wrote:
> On Wed, May 27, 2015 at 6:21 PM, Alvaro Herrera
>  wrote:
> > Steve Kehlet wrote:
> >> I have a database that was upgraded from 9.4.1 to 9.4.2 (no pg_upgrade, we
> >> just dropped new binaries in place) but it wouldn't start up. I found this
> >> in the logs:
> >>
> >> waiting for server to start2015-05-27 13:13:00 PDT [27341]: [1-1] LOG:
> >>  database system was shut down at 2015-05-27 13:12:55 PDT
> >> 2015-05-27 13:13:00 PDT [27342]: [1-1] FATAL:  the database system is
> >> starting up
> >> .2015-05-27 13:13:00 PDT [27341]: [2-1] FATAL:  could not access status of
> >> transaction 1
> >
> > I am debugging today a problem currently that looks very similar to
> > this.  AFAICT the problem is that WAL replay of an online checkpoint in
> > which multixact files are removed fails because replay tries to read a
> > file that has already been removed.
> 
> Hmm, so what exactly is the sequence of events here?  It's possible
> that I'm not thinking clearly just now, but it seems to me that if
> we're replaying the same checkpoint we replayed previously, the offset
> of the oldest multixact will be the first file that we didn't remove.

Well I'm not very clear on what's the problematic case.  The scenario I
actually saw this first reported was a pg_basebackup taken on a very
large database, so the master could have truncated multixact and the
standby receives a truncated directory but actually tries to apply a
checkpoint that is much older than what the master currently has
transmitted as pg_multixact contents.

> > I think the fix to this is to verify whether the file exists on disk
> > before reading it; if it doesn't, assume the truncation has already
> > happened and that it's not necessary to remove it.
> 
> That might be an OK fix, but this implementation doesn't seem very
> clean.  If we're going to remove the invariant that
> MultiXactState->oldestOffset will always be valid after replaying a
> checkpoint, then we should be explicit about that and add a flag
> indicating whether or not it's currently valid.  Shoving nextOffset in
> there and hoping that's good enough seems like a bad idea to me.
> 
> I think we should modify the API for find_multixact_start.  Let's have
> it return a Boolean and return oldestOffset via an out parameter.  If
> !InRecovery, it will always return true and set the out parameter; but
> if in recovery, it is allowed to return false without setting the out
> parameter.  Both values can get stored in MultiXactState, and we can
> adjust the logic elsewhere to disregard oldestOffset when the
> accompanying flag is false.

Sounds good.  I think I prefer that multixact creation is rejected
altogether if the new flag is false.  Is that what you mean when you say
"adjust the logic"?

> This still leaves open an ugly possibility: can we reach normal
> running without a valid oldestOffset?  If so, until the next
> checkpoint happens, autovacuum has no clue whether it needs to worry.
> There's got to be a fix for that, but it escapes me at the moment.

I think the fix to that issue is to set the oldest offset on
TrimMultiXact.  That way, once WAL replay finished we're certain that we
have a valid oldest offset to create new multixacts with.

I'm also wondering whether the call to DetermineSafeOldestOffset on
StartupMultiXact is good.  At that point, we haven't replayed any WAL
yet, so the oldest multi might be pointing at a file that has already
been removed -- again considering the pg_basebackup scenario where the
multixact files are copied much later than pg_control, so the checkpoint
to replay is old but the pg_multixact contents have already been
truncated in the master and are copied truncated.

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-27 Thread Robert Haas
On Wed, May 27, 2015 at 10:14 PM, Alvaro Herrera
 wrote:
> Well I'm not very clear on what's the problematic case.  The scenario I
> actually saw this first reported was a pg_basebackup taken on a very
> large database, so the master could have truncated multixact and the
> standby receives a truncated directory but actually tries to apply a
> checkpoint that is much older than what the master currently has
> transmitted as pg_multixact contents.

OK, that makes sense.

>> That might be an OK fix, but this implementation doesn't seem very
>> clean.  If we're going to remove the invariant that
>> MultiXactState->oldestOffset will always be valid after replaying a
>> checkpoint, then we should be explicit about that and add a flag
>> indicating whether or not it's currently valid.  Shoving nextOffset in
>> there and hoping that's good enough seems like a bad idea to me.
>>
>> I think we should modify the API for find_multixact_start.  Let's have
>> it return a Boolean and return oldestOffset via an out parameter.  If
>> !InRecovery, it will always return true and set the out parameter; but
>> if in recovery, it is allowed to return false without setting the out
>> parameter.  Both values can get stored in MultiXactState, and we can
>> adjust the logic elsewhere to disregard oldestOffset when the
>> accompanying flag is false.
>
> Sounds good.  I think I prefer that multixact creation is rejected
> altogether if the new flag is false.  Is that what you mean when you say
> "adjust the logic"?

No.  I'm not sure quite what you mean here.  We can't reject multixact
creation during normal running, and during recovery, we won't create
any really new mulitxacts, but we must replay the creation of
multixacts.  What I meant was stuff like this:

if (!MultiXactIdPrecedes(result, MultiXactState->multiVacLimit) ||
(MultiXactState->nextOffset - MultiXactState->oldestOffset
> MULTIXACT_MEMBER_SAFE_THRESHOLD))

I meant that we'd change the second prong of the test to check
multiXactState->nextOffsetValid && MultiXactState->nextOffset -
MultiXactState->oldestOffset > MULTIXACT_MEMBER_SAFE_THRESHOLD.  And
likewise change anything else that relies on oldestOffset.  Or else we
guarantee that we can't reach those points until the oldestOffset is
valid, and then check that it is with an Assert() or elog().

>> This still leaves open an ugly possibility: can we reach normal
>> running without a valid oldestOffset?  If so, until the next
>> checkpoint happens, autovacuum has no clue whether it needs to worry.
>> There's got to be a fix for that, but it escapes me at the moment.
>
> I think the fix to that issue is to set the oldest offset on
> TrimMultiXact.  That way, once WAL replay finished we're certain that we
> have a valid oldest offset to create new multixacts with.
>
> I'm also wondering whether the call to DetermineSafeOldestOffset on
> StartupMultiXact is good.  At that point, we haven't replayed any WAL
> yet, so the oldest multi might be pointing at a file that has already
> been removed -- again considering the pg_basebackup scenario where the
> multixact files are copied much later than pg_control, so the checkpoint
> to replay is old but the pg_multixact contents have already been
> truncated in the master and are copied truncated.

Moving the call from StartupMultiXact() to TrimMultiXact() seems like
a good idea.  I'm not sure why we didn't do that before.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Constraint exclusion not working - need different constraints?

2015-05-27 Thread Doug Gorley

On Wed, 27 May 2015 18:21:58 -0400, Tom Lane  wrote:

  "=?utf-8?b?RG91ZyBHb3JsZXk=?="  writes:

I'm attempting to implement table partitioning with constraint exclusions,
but I'm not seeing the planner limit the plan to an appropriate set of
child tables.  I'm wondering if the functions in my constraints are making
the exclusion impossible.



My constraints look like this:



    ALTER TABLE response_data.queries_response_2015w23
  ADD CONSTRAINT queries_response_2015w23_timestamp_check
  CHECK (
 date_part('year'::text,
"timestamp"::timestamp without time zone) = 2015::double precision AND
 date_part('week'::text,
"timestamp"::timestamp without time zone) = 23::double precision
  );



And my query looks like this:



    explain select * from public.queries_response where
age("timestamp"::timestamp) < '24 hours';
   
http://www.postgresql.org/docs/9.4/static/ddl-partitioning.html gives
this caveat:



    "Constraint exclusion only works when the query's WHERE clause contains
    constants (or externally supplied parameters). For example, a comparison
    against a non-immutable function such as CURRENT_TIMESTAMP cannot be
    optimized, since the planner cannot know which partition the function
    value might fall into at run time."
   
Will I likely need to replace the date_part functions with actual dates to
make this work?


Well, you can't make it work like that, for sure. The planner has no clue
that there's any connection between age() and date_part(). And if it did
fully understand that relationship, it still would refuse to do constraint
exclusion in this example, because the age() constraint is
current-timestamp-dependent. It couldn't assume that now() when executing
the query is the same as it was when planning the query, so it wouldn't
know which partition to select.

Worse still, if I'm right in guessing that the timestamp column is
timestamp WITH time zone (if it isn't, why are you bothering with the
casts?) then the check constraints themselves aren't immutable, because
their effective behavior depends on the current setting of TimeZone.
So the planner will refuse to make any deductions at all with them.

You'd be much better off using child-table constraints like

"timestamp" >= '2015-01-01' AND "timestamp" < '2015-01-08'

because the planner can reason about them. But I'm afraid the age()
technique still loses.

regards, tom lane

Thanks Tom, I will re-work the constraints to use static dates.
 
If I do that, will I be able to use age() (or some other technique) to
apply constraint exclusion when running a query asking, "show me
all records where the timestamp is within the last 24 hours"?
 
Thanks,
Doug



Re: [GENERAL] Constraint exclusion not working - need different constraints?

2015-05-27 Thread Tom Lane
"=?utf-8?b?RG91ZyBHb3JsZXk=?="  writes:
> On Wed, 27 May 2015 18:21:58 -0400, Tom Lane  wrote:
> You'd be much better off using child-table constraints like
> "timestamp" >= '2015-01-01' AND "timestamp" < '2015-01-08'
> because the planner can reason about them. But I'm afraid the age()
> technique still loses.

> Thanks Tom, I will re-work the constraints to use static dates.
>  
> If I do that, will I be able to use age() (or some other technique) to
> apply constraint exclusion when running a query asking, "show me
> all records where the timestamp is within the last 24 hours"?

Basically the query will need to look like

WHERE "timestamp" >= 'timestamp-constant'

or the planner won't be able to eliminate any partitions.

People have occasionally resorted to lying to the planner in order to get
this result without doing timestamp arithmetic on the client side.  That
is, something like

create function ago(interval) returns timestamp as
'select localtimestamp - $1'
language sql immutable;

select ... where "timestamp" >= ago('24 hours');

Labeling this function immutable is a plain lie, and if you use it in any
context other than an interactive query submitted for immediate execution,
you'll deserve the pain you'll get ;-).  But within that context, it's just
what you want that the function gets folded to a constant immediately;
that happens soon enough that the WHERE clause looks like "timestamp" >=
'timestamp-constant' for the purposes of constraint exclusion.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [SQL] extracting PII data and transforming it across table.

2015-05-27 Thread John R Pierce

On 5/21/2015 9:51 AM, Suresh Raja wrote:


I'm looking at directions or help in extracting data from
production and alter employee id information while
extracting.  But at the same time maintain referential
integrity across tables. Is it possible to dump data to flat
file and then run some script to change emp id data on all
files.  I'm looking for a easy solution.

Thanks,
-Suresh Raja


Steve:
I too would like to update the IDs before dumping. Can I write a SQL query
to union all tables and at the same time create a unique key that is valid
across tables?


It sounds like you have a weak grasp of relational database design.

I would have a single Employee table, with employee ID as the primary 
key, and any other attributes which are common to all employees, then I 
would have other tables which contain information about specific 
employee types or groups, these other tables could have their own 
primary key, but would reference the Employee table EmployeeID field for 
the common employee attributes.
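
Something along these lines (names are only illustrative):

CREATE TABLE employee (
    employee_id integer PRIMARY KEY,
    full_name   text NOT NULL
    -- ... other attributes common to all employees
);

CREATE TABLE contractor_detail (
    contractor_id integer PRIMARY KEY,
    employee_id   integer NOT NULL REFERENCES employee (employee_id),
    agency        text
    -- ... attributes specific to this employee type
);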




--
john r pierce, recycling bits in santa cruz