Re: [GENERAL] Compiler versions on different platforms

2009-05-12 Thread Peter Eisentraut
On Tuesday 12 May 2009 02:30:56 Vikram Patil wrote:
> I was able to compile on all of the platforms, but I want to know the
> minimum and recommended compiler version requirements for these systems. I am
> concerned about the performance of these binaries and am looking to get
> performance as close as possible to the binaries provided on the
> PostgreSQL website. Or if somebody can point out which compilers
> EnterpriseDB uses for their binaries, that would be great.

The answers to these questions are not well-defined.  Use a compiler that you 
trust or that your trusted operating system vendor supports.  PostgreSQL is 
not mplayer or the linux kernel; which exact compiler you use doesn't matter 
much for performance or stability, as long as the compiler is not outright 
broken.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Unable to access table named "user"

2009-05-12 Thread Joe Kramer
Hi,

I have a table named user (lower case) in the public schema.
I can access it using pgAdmin as
SELECT * from "user"; SELECT * from "public.user"; SELECT * FROM public.user;

I can't find any way to access this table from JDBC.
I tried to access it as public.user and as user in single and double quotes; nothing helps.
I get exceptions like
ERROR: relation "public.user" does not exist
ERROR: relation "user" does not exist

I also tried executing "SET search_path TO public" beforehand on the same
JDBC connection; it makes no difference.

Is there any way?

Thanks.



Re: [GENERAL] Unable to access table named "user"

2009-05-12 Thread Scott Marlowe
On Tue, May 12, 2009 at 1:26 AM, Joe Kramer  wrote:
> Hi,
>
> I have table named user (lower case) in public schema.
> I can access it using Pgadmin as
> SELECT * from "user"; SELECT * from "public.user"; SELECT * FROM public.user;

Try "public"."user"
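The reason quoting matters here: user is a reserved word in SQL, so an unquoted reference is parsed as the built-in current_user function rather than the table, and quoting the whole dotted name creates one single identifier. A minimal sketch, assuming a table public.user exists:

```sql
SELECT user;                    -- reserved word: returns the current role name
SELECT * FROM "user";           -- the table, found via the search_path
SELECT * FROM "public"."user";  -- schema and table quoted separately
-- SELECT * FROM "public.user"; -- one identifier named "public.user"; normally fails
```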



[GENERAL] Selecting data from bytea field in 8.3

2009-05-12 Thread Tomasz Rejman

Hi there.
A few days ago I upgraded my PostgreSQL to version 8.3 and now I have a problem 
with bytea fields. Let me show you an example:


CREATE TABLE testtable
(
test bytea
);
insert into testTable (test) VALUES 
(E'\304\205\304\207\305\274\303\263\305\202wjfdsafdsa');

select encode(test, 'escape') from testTable;

In the previous version the last SELECT would display the exact characters in 
UTF8 encoding (some Polish letters with dots and commas on them), but in 8.3 
I'm getting an escaped string with byte codes (like the one in the INSERT 
query).


(correct data should be like that:
select E'\304\205\304\207\305\274\303\263\305\202wjfdsafdsa';
)

What should I do to tell PostgreSQL to unescape that data?
I can't add an E prefix to a column name, i.e. something like:

Select E test;


By the way, I know that a bytea field should be used for binary data, but this is 
only an example. I'm using this type of field in my functions to encrypt/decrypt 
some data and I need to use the set_byte()/get_byte() functions.


--
Tomasz Rejman 







Re: [GENERAL] Difference between "in (...)" and "= any(...)" queries when using arrays

2009-05-12 Thread Richard Huxton

Francisco Figueiredo Jr. wrote:
> Hi all!
>
> I was playing with "in (...)" and "= any (...)" queries and found a
> difference between them, and I wonder:
>
> why this works:
>
> select * from table_of_integers where integer_column = any (array[5,6]);

This checks if integer_column matches any value in the specified array.

> select * from table_of_integers where integer_column in (array[5,6]);

This checks if integer_column IS an array[5,6] (which it isn't).

You probably want ... IN (5,6)

> SOME is a synonym for ANY. IN is equivalent to = ANY.

I think this is probably talking with respect to sub-queries.
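The contrast Richard describes can be sketched as follows (illustrative only):

```sql
CREATE TEMP TABLE table_of_integers (integer_column int);
INSERT INTO table_of_integers VALUES (5), (7);

-- Membership test against the elements of an array:
SELECT * FROM table_of_integers WHERE integer_column = ANY (ARRAY[5,6]);

-- Here IN compares integer_column against the single value ARRAY[5,6],
-- so it fails with a type error instead of acting as a membership test:
-- SELECT * FROM table_of_integers WHERE integer_column IN (ARRAY[5,6]);

-- The usual spelling of the membership test with IN:
SELECT * FROM table_of_integers WHERE integer_column IN (5, 6);
```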

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Unable to access table named "user"

2009-05-12 Thread Joe Kramer
Doesn't help either.

Jdbc statement:
SELECT * from "public"."user" ;

Exception:
ERROR: relation "public.user" does not exist


On Tue, May 12, 2009 at 5:55 PM, Scott Marlowe  wrote:
> On Tue, May 12, 2009 at 1:26 AM, Joe Kramer  wrote:
>> Hi,
>>
>> I have table named user (lower case) in public schema.
>> I can access it using Pgadmin as
>> SELECT * from "user"; SELECT * from "public.user"; SELECT * FROM public.user;
>
> Try "public"."user"
>



[GENERAL] Could not open file "pg_clog/...."

2009-05-12 Thread Markus Wollny
Hello!

Recently one of my PostgreSQL servers has started throwing error
messages like these:

ERROR:  could not access status of transaction 3489956864
DETAIL:  Could not open file "pg_clog/0D00": Datei oder Verzeichnis
nicht gefunden. (file not found)

The machine in question doesn't show any signs of a hardware defect,
we're running a RAID-10 over 10 disks for this partition on a 3Ware
hardware RAID controller with battery backup unit, the controller
doesn't show any defects at all. We're running PostgreSQL 8.3.5 on that
box, kernel is 2.6.18-6-amd64 of Debian Etch, the PostgreSQL binaries
were compiled from source on that machine.

I searched the lists and though I couldn't find an exact hint as to
what's causing this, I found a suggestion for a more or less hotfix
solution:
Create a file of the required size filled with zeroes and then put that
into the clog-directory, i.e.
dd bs=262144 count=1 if=/dev/zero of=/tmp/pg_clog_replacements/0002
chown postgres.daemon /tmp/pg_clog_replacements/0002
chmod 600 /tmp/pg_clog_replacements/0002
mv /tmp/pg_clog_replacements/0002 /var/lib/pgsql/data/pg_clog

I know that I'd be losing some transactions, but in our use case this
is not critical. Anyway, this made the problem go away for a while but
now I'm getting those messages again - and indeed the clog-files in
question appear to be missing altogether. And what's worse, the
workaround no longer works properly but makes PostgreSQL crash:

magazine=# vacuum analyze pcaction.article;
PANIC:  corrupted item pointer: 5
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

And from the logfile:

<2009-05-12 11:38:09 CEST - 6606: [loc...@magazine>PANIC:  corrupted
item pointer: 5
<2009-05-12 11:38:09 CEST - 6606: [loc...@magazine>STATEMENT:  vacuum
analyze pcaction.article;
<2009-05-12 11:38:09 CEST - 29178: @>LOG:  server process (PID 6606) was
terminated by signal 6: Aborted
<2009-05-12 11:38:09 CEST - 29178: @>LOG:  terminating any other active
server processes
<2009-05-12 11:38:09 CEST - 6607:
192.168.222.134(57292)@magazine>WARNING:  terminating connection because
of crash of another server process
<2009-05-12 11:38:09 CEST - 6607:
192.168.222.134(57292)@magazine>DETAIL:  The postmaster has commanded
this server process to roll back the current transaction and exit,
because another server process exited abnormally and possibly corrupted
shared memory.
<2009-05-12 11:38:09 CEST - 6569: 192.168.222.134(57214)@bluebox>HINT:
In a moment you should be able to reconnect to the database and repeat
your command.
[...]
<2009-05-12 11:38:09 CEST - 29178: @>LOG:  all server processes
terminated; reinitializing
<2009-05-12 11:38:09 CEST - 6619: @>LOG:  database system was
interrupted; last known up at 2009-05-12 11:37:51 CEST
<2009-05-12 11:38:09 CEST - 6619: @>LOG:  database system was not
properly shut down; automatic recovery in progress
<2009-05-12 11:38:09 CEST - 6619: @>LOG:  redo starts at 172/8B4EE118
<2009-05-12 11:38:09 CEST - 6619: @>LOG:  record with zero length at
172/8B6AD510
<2009-05-12 11:38:09 CEST - 6619: @>LOG:  redo done at 172/8B6AD4E0
<2009-05-12 11:38:09 CEST - 6619: @>LOG:  last completed transaction was
at log time 2009-05-12 11:38:09.550175+02
<2009-05-12 11:38:09 CEST - 6619: @>LOG:  checkpoint starting: shutdown
immediate
<2009-05-12 11:38:09 CEST - 6619: @>LOG:  checkpoint complete: wrote 351
buffers (1.1%); 0 transaction log file(s) added, 0 removed, 2 recycled;
write=0.008s, sync=0.000 s, total=0.009 s
<2009-05-12 11:38:09 CEST - 6622: @>LOG:  autovacuum launcher started
<2009-05-12 11:38:09 CEST - 29178: @>LOG:  database system is ready to
accept connections

Now what exactly is causing those missing clog files, what can I do to
prevent this and what can I do to recover my database cluster, as this
issue seems to prevent proper dumps at the moment?

Kind regards

   Markus

Every vote counts - vote now for the best games: www.bamaward.de

Computec Media AG
Registered office and register court: Fürth (HRB 8818)
Members of the management board: Albrecht Hengstenberg (chairman) and Rainer Rosenbusch
Chairman of the supervisory board: Jürg Marquard
VAT identification number: DE 812 575 276





Re: [GENERAL] Putting many related fields as an array

2009-05-12 Thread Sam Mason
On Tue, May 12, 2009 at 01:23:14PM +0800, Ow Mun Heng wrote:
> | sum of count | sum_of_count_squared | qty | qty < 100 | qty < 500 |
> 
> 
> I'm thinking of lumping them into 1 column via an array instead of into
> 5 different columns. Not sure how to go about this, hence the email to
> the list.

The normal array constructor should work:

  SELECT ARRAY[MIN(v),MAX(v),AVG(v),STDDEV(v)]
  FROM (VALUES (1),(3),(4)) x(v);

Not sure why this is better than using separate columns though.  Maybe a
new datatype and a custom aggregate would be easier to work with?
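Sam's closing suggestion might look something like this sketch; the type name summary_stats is invented for illustration:

```sql
-- A named composite type keeps the fields labelled, unlike a bare array:
CREATE TYPE summary_stats AS (
    min_v    numeric,
    max_v    numeric,
    avg_v    numeric,
    stddev_v numeric
);

SELECT ROW(MIN(v), MAX(v), AVG(v), STDDEV(v))::summary_stats
FROM (VALUES (1),(3),(4)) x(v);
```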

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Unable to access table named "user"

2009-05-12 Thread Sam Mason
On Tue, May 12, 2009 at 06:57:07PM +1000, Joe Kramer wrote:
> Doesn't help either.
> 
> Jdbc statement:
> SELECT * from "public"."user" ;
> 
> Exception:
> ERROR: relation "public.user" does not exist

Are you sure the table exists or you're connecting to the correct
database then?  If you connect with psql and do "\dt" does it show up?
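The same check can also be run as plain SQL from the JDBC connection itself, for example:

```sql
-- List any table named "user" visible in the current database:
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_name = 'user';

-- Confirm which database and effective search_path the connection uses:
SELECT current_database(), current_schemas(true);
```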

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Could not open file "pg_clog/...."

2009-05-12 Thread Glyn Astill

--- On Tue, 12/5/09, Markus Wollny  wrote:

> From: Markus Wollny 
> Subject: [GENERAL] Could not open file "pg_clog/"
> To: pgsql-general@postgresql.org
> Date: Tuesday, 12 May, 2009, 11:04 AM
> Hello!
> 
> Recently one of my PostgreSQL servers has started throwing error
> messages like these:
> 
> ERROR:  could not access status of transaction 3489956864
> DETAIL:  Could not open file "pg_clog/0D00": Datei oder Verzeichnis
> nicht gefunden. (file not found)
> 
> The machine in question doesn't show any signs of a hardware defect,
> we're running a RAID-10 over 10 disks for this partition on a 3Ware
> hardware RAID controller with battery backup unit, the controller
> doesn't show any defects at all. We're running PostgreSQL 8.3.5 on that
> box, kernel is 2.6.18-6-amd64 of Debian Etch, the PostgreSQL binaries
> were compiled from source on that machine.

Apart from not being on the latest release, you have a very similar setup to mine.

> 
> I know that I'd be losing some transactions, but in our use case this
> is not critical. Anyway, this made the problem go away for a while but
> now I'm getting those messages again - and indeed the clog-files in
> question appear to be missing altogether. And what's worse, the
> workaround no longer works properly but makes PostgreSQL crash:
> 

The first thing I would have done if I'd been forced to do that (if there was 
no other option?) would be a dump / restore directly afterwards, then pick 
through for any inconsistencies.

> Now what exactly is causing those missing clog files, what can I do to
> prevent this and what can I do to recover my database cluster, as this
> issue seems to prevent proper dumps at the moment?

Probably wait for the big-wigs to reply, but perhaps a reindex may get you going.

I'd definitely be starting with a fresh database once I got out of the hole 
though...






Re: [GENERAL] Could not open file "pg_clog/...."

2009-05-12 Thread Markus Wollny
Hi! 

> -----Original Message-----
> From: Glyn Astill [mailto:glynast...@yahoo.co.uk] 
> Sent: Tuesday, 12 May 2009 12:33
> To: pgsql-general@postgresql.org; Markus Wollny

> The first thing I would have done if I'd been forced to do 
> that (if there was no other option?) would be a dump / 
> restore directly afterwards, then pick through for any 
> inconsistencies.

That's a lot of data - somewhere around 43GB at the moment. And pg_dump seems 
to fail altogether on the affected databases, so the pg_clog issue actually 
means that I cannot make any current backups.
 
> Probably wait for the big-wigs to reply but perhaps a reindex 
> may get you going.

Tried that, but it also makes PostgreSQL crash, so no luck there either. I also 
dropped template0, recreated it from template1, did a VACUUM FREEZE on it, 
marked it as template again and disallowed connections.
 
> I'd definitely be starting with a fresh database once I got 
> out of the hole though...

Yes, but that'll be a nightshift and I need some way to actually get at a 
working dump now...

Kind regards

   Markus






Re: [GENERAL] Could not open file "pg_clog/...."

2009-05-12 Thread Glyn Astill

--- On Tue, 12/5/09, Markus Wollny  wrote:

> From: Markus Wollny 
> Subject: AW: [GENERAL] Could not open file "pg_clog/"
> To: glynast...@yahoo.co.uk, pgsql-general@postgresql.org
> Date: Tuesday, 12 May, 2009, 11:52 AM
> Hi! 
> 
> > -----Original Message-----
> > From: Glyn Astill [mailto:glynast...@yahoo.co.uk] 
> > Sent: Tuesday, 12 May 2009 12:33
> > To: pgsql-general@postgresql.org; Markus Wollny
> 
> > The first thing I would have done if I'd been forced to do
> > that (if there was no other option?) would be a dump /
> > restore directly afterwards, then pick through for any
> > inconsistencies.
> 
> That's a lot of data - somewhere around 43GB at the moment. And pg_dump
> seems to fail altogether on the affected databases, so the pg_clog issue
> actually means that I cannot make any current backups.
>  
> > Probably wait for the big-wigs to reply but perhaps a reindex
> > may get you going.
> 
> Tried that, but it also makes PostgreSQL crash, so no luck there either.
> I also dropped template0, recreated it from template1, did a VACUUM
> FREEZE on it, marked it as template again and disallowed connections.
>  
> > I'd definitely be starting with a fresh database once I got
> > out of the hole though...
> 
> Yes, but that'll be a nightshift and I need some way to actually get at
> a working dump now...
> 

It appears to be failing on the pcaction.article table. Could you get away 
without that? Perhaps, and it'd be a long shot, you'd be able to dump the rest 
of the data with it gone?

I'm going to duck out of this now though, and I think you should probably wait 
until someone a little more knowledgeable replies.






Re: [GENERAL] Could not open file "pg_clog/...."

2009-05-12 Thread Markus Wollny
Hi! 

> It appears to be failing on the pcaction.article table. Could 
> you get away without that? Perhaps, and it'd be a long shot, 
> you'd be able to dump the rest of the data with it gone?
> 
> I'm going to duck out of this now though, and I think you 
> should probably wait until someone a little more knowledgeable replies.

Yes, I could get away with it without any problems, as this table isn't
really needed any more (just some remains of a previous site version,
which I didn't yet drop so we could still take a look at it if the need
should arise). It is somewhat odd that this should happen to a table
that is hardly accessed at all, whereas the more popular database
objects seem to be in order. I am not yet sure that this is the only
affected object, but I'll try to dump that database without the
pcaction schema and see what happens.

Kind regards

   Markus






Re: [GENERAL] Could not open file "pg_clog/...."

2009-05-12 Thread Glyn Astill




--- On Tue, 12/5/09, Glyn Astill  wrote:

> I'm going to duck out of this now though, and I think
> you should probably wait until someone a little more
> knowledgeable replies.
> 

Also see here:

http://archives.postgresql.org/pgsql-general/2006-07/msg01147.php


 



Re: [GENERAL] Selecting data from bytea field in 8.3

2009-05-12 Thread Albe Laurenz
Tomasz Rejman wrote:
> A few days ago I upgraded my PostgreSQL to version 8.3 and now I have a problem 
> with bytea fields. Let me show you an example:
> 
> CREATE TABLE testtable
> (
> test bytea
> );
> insert into testTable (test) VALUES 
> (E'\304\205\304\207\305\274\303\263\305\202wjfdsafdsa');
> select encode(test, 'escape') from testTable;
> 
> In the previous version the last SELECT would display the exact characters in 
> UTF8 encoding (some Polish letters with dots and commas on them), but in 8.3 
> I'm getting an escaped string with byte codes (like the one in the INSERT 
> query).
> 
> (correct data should be like that:
> select E'\304\205\304\207\305\274\303\263\305\202wjfdsafdsa';
> )
> 
> What should I do to tell the postgreSQL to unescape that data?

select convert_from(test, 'UTF8') from testTable;
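Putting it together with the original example, the round trip looks roughly like this:

```sql
CREATE TEMP TABLE testtable (test bytea);
INSERT INTO testtable (test)
VALUES (E'\304\205\304\207\305\274\303\263\305\202wjfdsafdsa');

-- encode(..., 'escape') now renders non-ASCII bytes as octal escapes;
-- convert_from interprets the bytea contents as UTF8 text instead:
SELECT convert_from(test, 'UTF8') FROM testtable;
```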

Yours,
Laurenz Albe



[GENERAL] regexp_matches problem

2009-05-12 Thread WP Perquin
(postgresql 8.3.7, linux centos)

I wrote the following regexp_matches query:

SELECT regexp_matches(
    (SELECT content
     FROM page
     WHERE idpage = 2)
    ,',']))*>'
    , 'ig'
) AS result

 

The result looks like: {"\"",NULL,NULL,"\""} in phpPgAdmin. 

Which is not the right result. I want to see all the images and their alt,
title and src attributes.

 

Can anyone help by telling me what I am doing wrong?

 

Willem
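The pattern in the archived query was mangled, so as a hypothetical illustration only (not Willem's original regex), extracting img tags and their src attribute might look like:

```sql
SELECT regexp_matches(
    content,
    '<img[^>]*src="([^"]*)"[^>]*>',  -- capture each img tag's src attribute
    'ig'                             -- case-insensitive, return all matches
) AS result
FROM page
WHERE idpage = 2;
```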



[GENERAL] Postgres BackUp and Restore: ERROR: duplicate key violates unique constraint "pg_largeobject_loid_pn_index"

2009-05-12 Thread CM J
Hi,

  I backup my database using the following command:

pg_dump.exe -f  Backup.sql  -U  -p  MYDB

   Now, before restoring, I drop all my tables, indexes, and foreign key
constraints. Then I restore using the command below:


psql.exe -U  -p -d MYDB -f Backup.sql

 The above results in the following exception and the data is not getting
restored:

ERROR:  duplicate key violates unique constraint
"pg_largeobject_loid_pn_index"

  What else do I need to drop/delete to completely reinitialize my
database? Also, if instead of dropping the tables, indexes, and foreign key
constraints I drop the database and recreate it, the above error is not
thrown. However, this approach is not an option for me due to some limitations
in my application.

   Kindly help.

Thanks


Re: [GENERAL] Putting many related fields as an array

2009-05-12 Thread Ow Mun Heng
-Original Message-
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
On Tue, May 12, 2009 at 01:23:14PM +0800, Ow Mun Heng wrote:
>> | sum of count | sum_of_count_squared | qty | qty < 100 | qty < 500 |
>> 
>> 
>> I'm thinking of lumping them into 1 column via an array instead of into
>> 5 different columns. Not sure how to go about this, hence the email to
>> the list.

>The normal array constructor should work:
>
>  SELECT ARRAY[MIN(v),MAX(v),AVG(v),STDDEV(v)]
>  FROM (VALUES (1),(3),(4)) x(v);
>
>Not sure why this is better than using separate columns though.  Maybe a
>new datatype and a custom aggregate would be easier to work with?

The issue here is the # of columns needed to populate the table.

The table I'm summarizing has between 50 and 100+ columns; if the 1:5x
ratio is used as a yardstick, then the table will get awfully wide quickly.

I need to know how to do it first, then test accordingly for performance and
corner cases.





[GENERAL] Cannot login for short period of time

2009-05-12 Thread Henry


Greets,

Pg:  8.3.7

I'm trying to diagnose why I cannot login to Pg on occasion.  The psql  
command will just hang (so I cannot get in to see what it's doing) and  
a telnet into 5432 will give the usual:


Connected to localhost (127.0.0.1).
Escape character is '^]'.

indicating the backend accepts the connection but does not proceed.

The number of connections at the time is small, file descriptor usage
is small, etc., and server load is low.


It will then spontaneously recover, as if it has just finished doing
something IO-intensive.


My config:

listen_addresses = '1.2.1.1,127.0.0.1'
port = 5432
max_connections = 2048
shared_buffers = 520MB
temp_buffers = 128MB
work_mem = 1512MB
max_fsm_pages = 512000
fsync = off
checkpoint_segments = 32
effective_cache_size = 512MB
constraint_exclusion = on
logging_collector = on
track_counts = on
autovacuum = on
autovacuum_freeze_max_age = 20

I'm rerunning things with statement logging turned on to get a clue.
Incidentally, I did have to reindex the system tables in single-user  
mode (got a "Cannot find namespace 0" error at some point).


Anyone have an idea what this strange connectivity-delay could be about?

Cheers
Henry




Re: [GENERAL] Luhn algorithm (credit card verify / check) implementation - FIX

2009-05-12 Thread David Fetter
On Tue, May 12, 2009 at 02:54:29PM +0800, Craig Ringer wrote:
> The Luhn algorithm implementation I posted earlier (upthread) is  
> internally consistent and will verify checksums it created, but it is  
> actually not a correct implementation of the Luhn algorithm.

This looks like a great candidate for inclusion on the Snippets page, or
possibly even the docs for SQL functions :)

Cheers,
David.
>
> The earlier code added the doubled digits directly to the checksum,  
> rather than adding each digit of the doubled digits.
>
> Here's a corrected version that passes tests against other  
> implementations in other languages.
>
> --
> -- Luhn algorithm implementation by Craig Ringer
> -- in pure SQL (PostgreSQL function dialect, but
> -- should be easily adapted to other DBMSs).
> -- Note that this implementation is purely
> -- arithmetic; it avoids string manipulation entirely.
> --
> -- See: http://en.wikipedia.org/wiki/Luhn_algorithm
> -- 
>
> CREATE OR REPLACE FUNCTION luhn_verify(int8) RETURNS boolean AS $$
> -- Take the sum of the
> -- doubled digits and the even-numbered undoubled digits, and see if
> -- the sum is evenly divisible by ten.
> SELECT
> -- Doubled digits might in turn be two digits. In that case,
> -- we must add each digit individually rather than adding the
> -- doubled digit value to the sum. Ie if the original digit was
> -- `6' the doubled result was `12' and we must add `1+2' to the
> -- sum rather than `12'.
> MOD(SUM(doubled_digit / INT8 '10' + doubled_digit % INT8 '10'),  
> 10) = 0
> FROM
> -- Double odd-numbered digits (counting left with
> -- least significant as zero). If the doubled digits end up
> -- having values
> -- > 10 (ie they're two digits), add their digits together.
> (SELECT
> -- Extract digit `n' counting left from least significant
> --as zero
> MOD( ( $1::int8 / (10^n)::int8 ), 10::int8)
> -- Double odd-numbered digits
> * (MOD(n,2) + 1)
> AS doubled_digit
> FROM generate_series(0, ceil(log( $1 ))::integer - 1) AS n
> ) AS doubled_digits;
>
> $$ LANGUAGE 'SQL'
> IMMUTABLE
> STRICT;
>
> COMMENT ON FUNCTION luhn_verify(int8) IS 'Return true iff the last digit
> of the input is a correct check digit for the rest of the input
> according to Luhn''s algorithm.';
>
> CREATE OR REPLACE FUNCTION luhn_generate_checkdigit(int8) RETURNS int8 AS $$
> SELECT
> -- Add the digits, doubling even-numbered digits (counting left
> -- with least-significant as zero). Subtract the remainder of
> -- dividing the sum by 10 from 10, and take the remainder
> -- of dividing that by 10 in turn.
> ((INT8 '10' - SUM(doubled_digit / INT8 '10' + doubled_digit % INT8  
> '10') % INT8 '10') % INT8 '10')::INT8
> FROM (SELECT
> -- Extract digit `n' counting left from least significant\
> -- as zero
> MOD( ($1::int8 / (10^n)::int8), 10::int8 )
> -- double even-numbered digits
> * (2 - MOD(n,2))
> AS doubled_digit
> FROM generate_series(0, ceil(log($1))::integer - 1) AS n
> ) AS doubled_digits;
>
> $$ LANGUAGE 'SQL'
> IMMUTABLE
> STRICT;
>
> COMMENT ON FUNCTION luhn_generate_checkdigit(int8) IS 'For the input
> value, generate a check digit according to Luhn''s algorithm';
>
> CREATE OR REPLACE FUNCTION luhn_generate(int8) RETURNS int8 AS $$
> SELECT 10 * $1 + luhn_generate_checkdigit($1);
> $$ LANGUAGE 'SQL'
> IMMUTABLE
> STRICT;
>
> COMMENT ON FUNCTION luhn_generate(int8) IS 'Append a check digit
> generated according to Luhn''s algorithm to the input value. The
> input value must be no greater than (maxbigint/10).';
>
> CREATE OR REPLACE FUNCTION luhn_strip(int8) RETURNS int8 AS $$
> SELECT $1 / 10;
> $$ LANGUAGE 'SQL'
> IMMUTABLE
> STRICT;
>
> COMMENT ON FUNCTION luhn_strip(int8) IS 'Strip the least significant
> digit from the input value. Intended for use when stripping the check
> digit from a number including a Luhn''s algorithm check digit.';
>
>
>

-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
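As a usage sketch of the quoted functions, using the classic Luhn test number 7992739871 (whose correct check digit is 3):

```sql
SELECT luhn_generate_checkdigit(7992739871);  -- check digit; should be 3
SELECT luhn_generate(7992739871);             -- appends it: 79927398713
SELECT luhn_verify(79927398713);              -- should be true
SELECT luhn_strip(79927398713);               -- back to 7992739871
```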



Re: [GENERAL] Putting many related fields as an array

2009-05-12 Thread Sam Mason
On Tue, May 12, 2009 at 08:06:25PM +0800, Ow Mun Heng wrote:
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> On Tue, May 12, 2009 at 01:23:14PM +0800, Ow Mun Heng wrote:
> >Not sure why this is better than using separate columns though.  Maybe a
> >new datatype and a custom aggregate would be easier to work with?
> 
> The issue here is the # of columns needed to populate the table.
> 
> The table I'm summarizing has close to between 50 to 100+ columns, if the
> 1:5x is used as a yardstick, then the table will get awfully wide quickly.
> 
> I need to know how to do it first, then test accordingly for performance and
> corner cases.

Yes, those are going to be pretty wide tables!  Maybe if you can make
the source tables a bit "narrower" it will help things; PG has to read
entire rows from the table, so if your queries are only touching a few
columns then it's going to need a lot more disk bandwidth to get a
specific number of rows back from the table.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Cannot login for short period of time

2009-05-12 Thread Thomas Markus
Hi,

check your hardware (especially harddrive) for errors.

regards
Thomas

Henry schrieb:
>
> Greets,
>
> Pg:  8.3.7
>
> I'm trying to diagnose why I cannot login to Pg on occasion.  The psql
> command will just hang (so I cannot get in to see what it's doing) and
> a telnet into 5432 will give the usual:
>
> Connected to localhost (127.0.0.1).
> Escape character is '^]'.
>
> indicating the backend accepting the connection, but not proceeding.
>
> The number of connections at the time are small, file descriptor usage
> will be small, etc, server load low.
>
> It will then spontaneously recover as if it's just completed doing
> something IO intensive.
>
> My config:
>
> listen_addresses = '1.2.1.1,127.0.0.1'
> port = 5432
> max_connections = 2048
> shared_buffers = 520MB
> temp_buffers = 128MB
> work_mem = 1512MB
> max_fsm_pages = 512000
> fsync = off
> checkpoint_segments = 32
> effective_cache_size = 512MB
> constraint_exclusion = on
> logging_collector = on
> track_counts = on
> autovacuum = on
> autovacuum_freeze_max_age = 20
>
> I'm rerunning things with statement logging turned on to get a clue.
> Incidentally, I did have to reindex the system tables in single-user
> mode (got a "Cannot find namespace 0" error at some point).
>
> Anyone have an idea what this strange connectivity-delay could be about?
>
> Cheers
> Henry




Re: [GENERAL] Postgres BackUp and Restore: ERROR: duplicate key violates unique constraint "pg_largeobject_loid_pn_index"

2009-05-12 Thread Sam Mason
On Tue, May 12, 2009 at 05:25:26PM +0530, CM J wrote:
>  The above results in the following exception and the data is not getting
> restored:
> 
> ERROR:  duplicate key violates unique constraint 
> "pg_largeobject_loid_pn_index"
> 
>   What else do i need to drop/delete to completely reinitialize my
> database ?

There are "large objects"[1] remaining in the system.  The way I've
removed them in the past was by doing a boring:

  DELETE FROM pg_largeobject;

A TRUNCATE could probably be used here as well.  I wouldn't go as far as
dropping and re-creating the table though.
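An alternative sketch that goes through the large-object API instead of modifying the catalog directly (assuming the goal is simply to remove every remaining large object):

```sql
-- lo_unlink removes the large object and its pg_largeobject pages:
SELECT lo_unlink(loid)
FROM (SELECT DISTINCT loid FROM pg_largeobject) AS lo;
```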

-- 
  Sam  http://samason.me.uk/
 
 [1] http://www.postgresql.org/docs/current/static/catalog-pg-largeobject.html



Re: [GENERAL] Putting many related fields as an array

2009-05-12 Thread Andrew Gould
On Tue, May 12, 2009 at 7:06 AM, Ow Mun Heng  wrote:

> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> On Tue, May 12, 2009 at 01:23:14PM +0800, Ow Mun Heng wrote:
> >> | sum of count | sum_of_count_squared | qty | qty < 100 | qty < 500 |
> >>
> >>
> >> I'm thinking of lumping them into 1 column via an array instead of into
> >> 5 different columns. Not sure how to go about this, hence the email to
> >> the list.
>
> >The normal array constructor should work:
> >
> >  SELECT ARRAY[MIN(v),MAX(v),AVG(v),STDEV(v)]
> >  FROM (VALUES (1),(3),(4)) x(v);
> >
> >Not sure why this is better than using separate columns though.  Maybe a
> >new datatype and a custom aggregate would be easier to work with?
>
> The issue here is the # of columns needed to populate the table.
>
> The table I'm summarizing has close to between 50 to 100+ columns, if the
> 1:5x is used as a yardstick, then the table will get awfully wide quickly.
>
> I need to know how to do it first, then test accordingly for performance
> and
> corner cases.
>
>
I apologize for coming into this conversation late.   I used to do analysis
of a public use data flat file that had one row per patient and up to 24
diagnosis codes, each in a different column.  Is this analogous to your
situation?  I found it was worth the effort to convert the flat file into a
relational data model where the patients' diagnosis codes were in one column
in a separate table.  This model also makes more complex analysis easier.

Since there were several types of fields that needed to be combined into
their own tables, I found it took less time to convert the flat file to the
relational model using a script prior to importing the data into the
database server.  A Python script would read the original file and create 5
clean, tab-delimited files that were ready to be imported.
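A minimal Python sketch of that kind of conversion script (the column layout -- a patient id followed by a fixed number of tab-separated diagnosis columns -- is hypothetical, not the actual public-use file format):

```python
import csv
import io

def split_flat_file(flat_file, num_dx_cols=3):
    """Split a flat file with one row per patient and repeated diagnosis
    columns into two normalized row sets, ready to COPY into a 'patient'
    table and a 'patient_diagnosis' table.  Layout is illustrative only.
    """
    patients = []       # rows for the patient table
    diagnoses = []      # rows for the patient_diagnosis table
    reader = csv.reader(flat_file, delimiter="\t")
    for row in reader:
        patient_id, dx_codes = row[0], row[1:1 + num_dx_cols]
        patients.append([patient_id])
        for code in dx_codes:
            if code:    # skip empty diagnosis slots
                diagnoses.append([patient_id, code])
    return patients, diagnoses

# Example: two patients, up to three diagnosis columns each
flat = io.StringIO("p1\t250.0\t401.9\t\np2\t428.0\t\t\n")
patients, diagnoses = split_flat_file(flat)
print(patients)   # [['p1'], ['p2']]
print(diagnoses)  # [['p1', '250.0'], ['p1', '401.9'], ['p2', '428.0']]
```

Each output list can then be written to its own tab-delimited file and loaded with COPY.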

I hope this helps.

Andrew


Re: [GENERAL] Unable to access table named "user"

2009-05-12 Thread Adrian Klaver
On Tuesday 12 May 2009 3:16:34 am Sam Mason wrote:
> On Tue, May 12, 2009 at 06:57:07PM +1000, Joe Kramer wrote:
> > Don't help either.
> >
> > Jdbc statement:
> > SELECT * from "public"."user" ;
> >
> > Exception:
> > ERROR: relation "public.user" does not exist
>
> Are you sure the table exists or you're connecting to the correct
> database then?  If you connect with psql and do "\dt" does it show up?
>
> --
>   Sam  http://samason.me.uk/

I would have to second Sam on this. If you find you are connected to the 
correct 
database, does the JDBC driver allow you to select other tables or is the 
problem restricted to this table?

-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] Difference between "in (...)" and "= any(...)" queries when using arrays

2009-05-12 Thread Francisco Figueiredo Jr.
On Tue, May 12, 2009 at 05:02, Richard Huxton  wrote:
> Francisco Figueiredo Jr. wrote:
>>
>> Hi all!
>>
>> I was playing with "in (...)"  and "= any (...)" queries and found a
>> difference between them and I wonder:
>>
>> why this works:
>>
>> select * from table_of_integers where integer_column = any (array[5,6]);
>
> This checks if integer_column matches any value in the specified array.
>
>> select * from table_of_integers where integer_column in (array[5,6]);
>
> This checks if integer_column IS an array[5,6] (which it isn't).
>
> You probably want ... IN (5,6)
>
>> SOME is a synonym for ANY. IN is equivalent to = ANY.
>
> I think this is probably talking with respect to sub-queries.
>


Ah, thank you very much for your feedback and for explaining it, Richard.

Now I understand it better.

-- 
Regards,

Francisco Figueiredo Jr.
Npgsql Lead Developer
http://www.npgsql.org
http://fxjr.blogspot.com
http://twitter.com/franciscojunior



Re: [GENERAL] Could not open file "pg_clog/...."

2009-05-12 Thread Alvaro Herrera
Markus Wollny wrote:

> magazine=# vacuum analyze pcaction.article;
> PANIC:  corrupted item pointer: 5
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.

This error comes from this piece of PageRepairFragmentation:

    /* Need to compact the page the hard way */
    itemidbase = (itemIdSort) palloc(sizeof(itemIdSortData) * nstorage);
    itemidptr = itemidbase;
    totallen = 0;
    for (i = 0; i < nline; i++)
    {
        lp = PageGetItemId(page, i + 1);
        if (ItemIdHasStorage(lp))
        {
            itemidptr->offsetindex = i;
            itemidptr->itemoff = ItemIdGetOffset(lp);
            if (itemidptr->itemoff < (int) pd_upper ||
                itemidptr->itemoff >= (int) pd_special)
                ereport(ERROR,
                        (errcode(ERRCODE_DATA_CORRUPTED),
                         errmsg("corrupted item pointer: %u",
                                itemidptr->itemoff)));
            itemidptr->alignedlen = MAXALIGN(ItemIdGetLength(lp));
            totallen += itemidptr->alignedlen;
            itemidptr++;
        }
    }

What it means, AFAICT, is that the page is corrupted beyond recovery.  If you
want to salvage the rest of the data on the table, you could zero out this
page, for which you'll obviously need to know what page it was.  It's not a
trivial patch to add that info to the error message, because
PageRepairFragmentation does not receive it.

I think it is worth our trouble to report block numbers on this kind of error;
and in fact I had started on a patch to add errcontext() to vacuum and analyze
calls, but never finished it.

-- 
Alvaro Herrera                       http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] regexp_matches problem

2009-05-12 Thread WP Perquin
When I make the following simplified example:

SELECT regexp_matches(''

,'((title\s*=\s*\"([^"]*)")+)|((src\s*=\s*\"([^"]*)")+)','ig')

 

My result are 2 rows:

"{NULL,NULL,NULL,"src=\"\"","src=\"\"",}"

"{"title=\"dit is een title tekst\"","title=\"dit is een title tekst\"","dit
is een title tekst",NULL,NULL,NULL}"

 

I would like to have one row that contains both records. Does anyone know
how I can solve this?

 

 

Willem 

  _  

Van: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] Namens WP Perquin
Verzonden: dinsdag 12 mei 2009 13:44
Aan: pgsql-general@postgresql.org
Onderwerp: [GENERAL] regexp_matches problem

 

(postgresql 8.3.7, linux centos)

I made the following regexp_matches

SELECT regexp_matches( 
( 
SELECT content 
FROM page 
WHERE idpage = 2 
) 
,',']))*>' 
, 'ig' 
) AS result

 

The result looks like: {"\"",NULL,NULL,"\""} in phpPgAdmin. 

This is not the right result. I want to see all the images and their alt,
title and src attributes.

 

Can anyone help with telling what I am doing wrong?

 

Willem
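For what it's worth, the two-rows behavior above can be reproduced outside PostgreSQL. A hedged Python sketch (the img tag content is made up) shows why an alternation with the global flag yields one match per attribute, and the usual workaround: extract each attribute separately and combine the results. In SQL this would correspond to calling regexp_matches (or substring) once per attribute within the same SELECT, so both values land in one row.

```python
import re

img = '<img src="/pics/x.png" title="dit is een title tekst">'

# One alternation pattern, as in the regexp_matches call: each branch
# that fires produces its own match, hence one "row" per attribute.
alt = re.findall(r'(title\s*=\s*"([^"]*)")|(src\s*=\s*"([^"]*)")', img)
print(len(alt))   # 2 -- one match for src, one for title

# To get a single record per tag, extract each attribute on its own
# and combine the results into one tuple.
title = re.search(r'title\s*=\s*"([^"]*)"', img)
src = re.search(r'src\s*=\s*"([^"]*)"', img)
combined = (title.group(1) if title else None,
            src.group(1) if src else None)
print(combined)   # ('dit is een title tekst', '/pics/x.png')
```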



Re: [GENERAL] Could not open file "pg_clog/...."

2009-05-12 Thread Alvaro Herrera
Alvaro Herrera wrote:
> Markus Wollny wrote:

> > magazine=# vacuum analyze pcaction.article;
> > PANIC:  corrupted item pointer: 5
> > server closed the connection unexpectedly
> > This probably means the server terminated abnormally
> > before or while processing the request.
> > The connection to the server was lost. Attempting reset: Failed.
> 
> This error comes from this piece of PageRepairFragmentation:

>     if (itemidptr->itemoff < (int) pd_upper ||
>         itemidptr->itemoff >= (int) pd_special)
>         ereport(ERROR,
>                 (errcode(ERRCODE_DATA_CORRUPTED),
>                  errmsg("corrupted item pointer: %u",
>                         itemidptr->itemoff)));

I forgot to mention that this ERROR is promoted to PANIC by the
critical section defined in lazy_vacuum_page().

-- 
Alvaro Herrera                       http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



[GENERAL] pg_dump/pg_restore schema and data separately and foreign key constraints

2009-05-12 Thread Vasiliy Vasin
I have a database on a production server that is backed up every day. The 
database is not big, ~10 MB.
But I added several tables that take up a lot of space, and I don't want to 
back up the data in them.

So, I backup my database in two files: schema and data:
pg_dump -s -E utf-8 -f ${filename.schema} -F custom -n public -O -Z 9 -h 
${connection.hostname} -U ${connection.username} ${connection.database}
pg_dump -a -T table1 -T table2 -E utf-8 -f ${filename.data} -F custom -n public 
-O -Z 9 -h ${connection.hostname} -U ${connection.username} 
${connection.database}

Then I tried to restore these backups:
pg_restore -s -d ${connection.database} -h ${connection.hostname} -U 
${connection.username} -O -F custom ${schemaFileName}
pg_restore -a -d ${connection.database} -h ${connection.hostname} -U 
${connection.username} -O -F custom ${dataFileName}

Schema backup restored successfully.

But when restoring the data backup I receive errors like:
COPY failed: ERROR:  insert or update on table "sometable" violates foreign key 
constraint "bla-blah"

I tried the -1 option for pg_restore; it doesn't help.

I think this is a common problem, but I couldn't find an answer on Google, only 
questions... :(



Re: [GENERAL] pg_dump/pg_restore schema and data separately and foreign key constraints

2009-05-12 Thread Adrian Klaver


- "Vasiliy Vasin"  wrote:

> I have database on production server that backups every day. Database
> is not big ~ 10mb.
> But I added several tables that takes big capacity and I don't want to
> backup data from them.
> 
> So, I backup my database in two files: schema and data:
> pg_dump -s -E utf-8 -f ${filename.schema} -F custom -n public -O -Z 9
> -h ${connection.hostname} -U ${connection.username}
> ${connection.database}
> pg_dump -a -T table1 -T table2-E utf-8 -f ${filename.data} -F custom
> -n public -O -Z 9 -h ${connection.hostname} -U ${connection.username}
> ${connection.database}
> 
> Then I tried to restore this backups:
> pg_restore -s -d ${connection.database} -h ${connection.hostname} -U
> ${connection.username} -O -F custom ${schemaFileName}
> pg_restore -a -d ${connection.database} -h ${connection.hostname} -U
> ${connection.username} -O -F custom ${dataFileName}
> 
> Schema backup restored successfully.
> 
> But then I restoring data backup I receiving errors like:
> COPY failed: ERROR:  insert or update on table "sometable" violates
> foreign key constraint "bla-blah"
> 
> I tried -1 option for pg_restore, it not helps.
> 
> I think this is common problem, but I don't found answer in google,
> only questions... :(
> 


Sounds like an ordering problem: you are restoring the child table before the 
parent. Best bet is to dump the table data to individual files and restore them 
in the proper order.

Adrian Klaver
akla...@comcast.net



Re: [GENERAL] pg_dump/pg_restore schema and data separately and foreign key constraints

2009-05-12 Thread Alan Hodgson
On Tuesday 12 May 2009, Vasiliy Vasin  wrote:
> I have database on production server that backups every day. Database is
> not big ~ 10mb. But I added several tables that takes big capacity and I
> don't want to backup data from them.
>
> So, I backup my database in two files: schema and data:
> pg_dump -s -E utf-8 -f ${filename.schema} -F custom -n public -O -Z 9 -h
> ${connection.hostname} -U ${connection.username} ${connection.database}
> pg_dump -a -T table1 -T table2-E utf-8 -f ${filename.data} -F custom -n
> public -O -Z 9 -h ${connection.hostname} -U ${connection.username}
> ${connection.database}
>
> Then I tried to restore this backups:
> pg_restore -s -d ${connection.database} -h ${connection.hostname} -U
> ${connection.username} -O -F custom ${schemaFileName} pg_restore -a -d
> ${connection.database} -h ${connection.hostname} -U
> ${connection.username} -O -F custom ${dataFileName}
>
> Schema backup restored successfully.
>
> But then I restoring data backup I receiving errors like:
> COPY failed: ERROR:  insert or update on table "sometable" violates
> foreign key constraint "bla-blah"
>
> I tried -1 option for pg_restore, it not helps.
>
> I think this is common problem, but I don't found answer in google, only
> questions... :(

If you restore the whole schema first, it creates foreign key constraints. 
Loading data after that would have to be done in full order of 
dependencies.

A normal full schema + data restore creates the constraints only after the 
data is already loaded. So the restore command doesn't worry about ordering 
the data for foreign key dependencies.

In short, if you are constructing a partial backup + restore plan, you'll 
have to order the data yourself, or create the schema in such a way that 
you can load the data before creating the foreign key constraints. 
pg_restore isn't going to cut it.
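The "full order of dependencies" step can be sketched as a topological sort. The table names and FK graph below are hypothetical; in practice the graph could be built by querying pg_catalog.pg_constraint for foreign keys:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical FK graph: each table maps to the set of tables it
# references.  A parent must be loaded before any child referencing it.
fk_deps = {
    "order_item": {"order", "product"},
    "order": {"customer"},
    "product": set(),
    "customer": set(),
}

# static_order() yields referenced tables before referencing ones,
# i.e. a safe order in which to load the data files.
load_order = list(TopologicalSorter(fk_deps).static_order())
print(load_order)
```

Restoring the per-table data files in `load_order` avoids the foreign-key violations; a circular FK dependency would raise graphlib.CycleError, in which case the constraints really do have to be created (or enabled) after the load.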


-- 
Even a sixth-grader can figure out that you can’t borrow money to pay off 
your debt



Re: [GENERAL] pg_dump/pg_restore schema and data separately and foreign key constraints

2009-05-12 Thread Vasiliy Vasin
In the end, I decided to make it easier: make a complete dump of everything 
except the tables I don't want to back up, then dump just the schema of those 
tables. So I have 2 files: schema+data for everything except table1 and table2, 
and schema only for table1 and table2.
It's strange that pg_restore still has no option to check the constraints only 
after all the data has been inserted.
Something like 
http://www.postgresql.org/docs/8.3/static/sql-set-constraints.html

13.05.09, 01:17, "Adrian Klaver" :

> Sounds like an ordering problem, you are restoring the child table before the 
> parent. Best bet is to dump the table data to individual files and restore 
> them in the proper order.



Re: [GENERAL] pg_dump/pg_restore schema and data separately and foreign key constraints

2009-05-12 Thread Dragan Zubac

Vasiliy Vasin wrote:

I have database on production server that backups every day. Database is not 
big ~ 10mb.
But I added several tables that takes big capacity and I don't want to backup 
data from them.

So, I backup my database in two files: schema and data:
pg_dump -s -E utf-8 -f ${filename.schema} -F custom -n public -O -Z 9 -h 
${connection.hostname} -U ${connection.username} ${connection.database}
pg_dump -a -T table1 -T table2-E utf-8 -f ${filename.data} -F custom -n public 
-O -Z 9 -h ${connection.hostname} -U ${connection.username} 
${connection.database}

Then I tried to restore this backups:
pg_restore -s -d ${connection.database} -h ${connection.hostname} -U 
${connection.username} -O -F custom ${schemaFileName}
pg_restore -a -d ${connection.database} -h ${connection.hostname} -U 
${connection.username} -O -F custom ${dataFileName}

Schema backup restored successfully.

But then I restoring data backup I receiving errors like:
COPY failed: ERROR:  insert or update on table "sometable" violates foreign key 
constraint "bla-blah"

I tried -1 option for pg_restore, it not helps.

I think this is common problem, but I don't found answer in google, only 
questions... :(

  

What I did is the following:

1. Create a table 'copy_tables' that contains all tables that are 
supposed to be in the backup procedure, stored in an order that prevents 
the errors which occur when you restore tables against their foreign-key 
dependencies. If table A has a foreign key to table B, you should 
restore table B first and then table A, so the foreign keys being 
created have an object to refer to.


2. Make a perl/php/bash script that reads 'copy_tables' and backs the 
tables up or restores them. Backup runs from the lowest ID in that table 
to the highest, while restore runs from the highest to the lowest.


pg_dump/pg_restore has no intelligence about foreign-key dependencies 
between tables.
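That scripting step can be sketched as pure command generation (no database access here; the table names and the exact pg_dump/pg_restore flag set are illustrative, and connection options are omitted):

```python
def backup_commands(tables, db):
    """One pg_dump command per table, in copy_tables order (lowest ID
    first).  `tables` stands in for the rows of the hypothetical
    copy_tables table."""
    return [f"pg_dump -F custom -t {t} -f {t}.dump {db}" for t in tables]

def restore_commands(tables, db):
    """pg_restore commands in reverse ID order, so referenced (parent)
    tables are loaded before the tables that point at them."""
    return [f"pg_restore -d {db} {t}.dump" for t in reversed(tables)]

tables = ["child", "parent"]   # ordered as stored in copy_tables
print(backup_commands(tables, "mydb"))
print(restore_commands(tables, "mydb"))
```

The generated strings would then be run by the wrapper script, one table at a time.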


Sincerely

Dragan Zubac



Re: [GENERAL] pg_dump/pg_restore schema and data separately and foreign key constraints

2009-05-12 Thread Adrian Klaver
On Tuesday 12 May 2009 2:35:10 pm Vasiliy Vasin wrote:
> In the end, decided to make easier: make a complete dump except tables that
> I don't want to backup, then just dump schema of these tables. So I have 2
> files: schema+data for all except table1 and table2, schema only for table1
> and table2. It's a strangely that pg_restore is still no option to check
> the constraints after you insert all data. Something like
> http://www.postgresql.org/docs/8.3/static/sql-set-constraints.html


It is more of a conceptual problem. Unless you really do a complete dump (which 
you actually did not do above), the schema and data become decoupled. A 
less-than-complete restore could be any combination of data and schema from 
various points in time. A partial pg_restore has no way of "knowing" what is 
correct, so it is left to the user to decide correctness.

>
> 13.05.09, 01:17, "Adrian Klaver" :
> > Sounds like an ordering problem, you are restoring the child table before
> > the parent. Best bet is to dump the table data to individual files and
> > restore them in the proper order.



-- 
Adrian Klaver
akla...@comcast.net



[GENERAL] Best way to monitor, control, or rewrite data definition commands?

2009-05-12 Thread Turner, Ian
Hello list,

I am trying to implement automatic audit log tracking for newly created
tables. When a user creates a table, I would like to create a second
table for the audit log, along with the relevant rules. So for example,
if a user does

CREATE TABLE foo (fooid integer);

Then I would also like to do

CREATE TABLE foo_audit (fooid integer, );

along with the creation of some other triggers, rules, etc.

Is there any way to be notified when a user executes data definition
commands such as CREATE TABLE? It doesn't appear possible to apply
triggers or rules to the system tables, and the query rewrite engine
only seems to apply to SELECT, INSERT, and UPDATE. Thoughts?

Cheers,

--Ian Turner



Re: [GENERAL] Best way to monitor, control, or rewrite data definition commands?

2009-05-12 Thread Greg Smith

On Tue, 12 May 2009, Turner, Ian wrote:


Is there any way to be notified when a user executes data definition
commands such as CREATE TABLE? It doesn't appear possible to apply
triggers or rules to the system tables, and the query rewrite engine
only seems to apply to SELECT, INSERT, and UPDATE. Thoughts?


This topic really deserves a FAQ entry.

You can set "log_statement=ddl" and see a log of them that way: 
http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html#GUC-LOG-STATEMENT


Set up CSV-format logs and you can even import them into a database table; 
appropriate log rotation is needed to give you breakpoints to import at, 
though.


The other common idiom here to detect changes is to save the output from 
"pg_dump -s" regularly and look for changes via diff.


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD



Re: [GENERAL] Best way to monitor, control, or rewrite data definition commands?

2009-05-12 Thread John DeSoi


On May 12, 2009, at 7:59 PM, Turner, Ian wrote:


CREATE TABLE foo (fooid integer);

Then I would also like to do

CREATE TABLE foo_audit (fooid integer, );

along with the creation of some other triggers, rules, etc.

Is there any way to be notified when a user executes data definition
commands such as CREATE TABLE? It doesn't appear possible to apply
triggers or rules to the system tables, and the query rewrite engine
only seems to apply to SELECT, INSERT, and UPDATE. Thoughts?



Correct, there are no triggers on the system tables.

Maybe some type of cron process that ensures there is foo_audit for  
table foo?



John DeSoi, Ph.D.




