[BUGS] BUG #6130: TOAST tables bug restoring to PostgreSQL 9.0.4

2011-07-26 Thread Arne Scheffer

The following bug has been logged online:

Bug reference:  6130
Logged by:  Arne Scheffer
Email address:  bteam...@uni-muenster.de
PostgreSQL version: 9.0.4
Operating system:   RHEL 5.6
Description:        TOAST tables bug restoring to PostgreSQL 9.0.4
Details: 

Sorry, I missed filling out this form; please see

http://archives.postgresql.org/pgsql-bugs/2011-07/msg00172.php

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] BUG #6130: TOAST tables bug restoring to PostgreSQL 9.0.4

2011-07-26 Thread Kevin Grittner
"Arne Scheffer"  wrote:
 
> Sorry, missed to fill out this form, please see
> 
> http://archives.postgresql.org/pgsql-bugs/2011-07/msg00172.php
 
As mentioned in the reply to that post, this is not a bug.
 
You can wait for autovacuum to fix things up, or run VACUUM ANALYZE
against the database.  If you have a lot of rows in the restore
which you don't expect to be deleted or updated for a long time, you
might want to make that a VACUUM FREEZE ANALYZE.
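 
For example, either of the following would do it (database name is
illustrative; adjust to your installation):
 
$ vacuumdb --freeze --analyze mydb
 
or, from psql:
 
VACUUM FREEZE ANALYZE;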
 
In the future, please post questions like this to the pgsql-general
list.
 
-Kevin



Re: [BUGS] BUG #6086: Segmentation fault

2011-07-26 Thread noordsij

After a few hours of watching strange things happening I finally stumbled
on the cause.

Very short summary: the postgres binary needs to be linked to libpthread,
as this ensures that a special fork() inside libthr (the FreeBSD libpthread
implementation/wrapper) is used, which correctly deals with any pthread
usage (I am guessing especially statically initialized pthread_X variables
in, for example, libxml2).

This can be accomplished by adding "-lpthread" to the LDFLAGS when building
either libxml2 or postgresql itself, or by preloading libpthread.so when
starting postgresql.
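
For reference, the two workarounds would look something like this (paths
and the data directory are illustrative and vary by system):

# rebuild with pthread linked in
$ LDFLAGS="-lpthread" ./configure ...
$ gmake && gmake install

# or preload libpthread when starting the server
$ LD_PRELOAD=/usr/lib/libpthread.so postgres -D /usr/local/pgsql/data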

Before advocating any particular solution, I'd like to throw this out first
and see how strongly people feel about this.


Cheers, and thanks to everyone who replied so far
Dennis




Re: [BUGS] BUG #6086: Segmentation fault

2011-07-26 Thread Tom Lane
noordsij  writes:
> After a few hours of watching strange things happening I finally stumbled
> on the cause.

> Very short summary: the postgres binary needs to be linked to libpthread,
> as this will ensure a special fork() inside libthr (the FreeBSD libpthread
> implementation/wrapper) is used which correctly deals with any pthread
> usage (I am guessing especially statically initialized pthread_X variables
> in for example libxml2).

> This can be accomplished by adding "-lpthread" to the LDFLAGS when building
> either libxml2 or postgresql itself, or by preloading libpthread.so when
> starting postgresql.

Linking pthreads into the backend is likely to cause more problems than
it solves, especially if you're proposing that we do that everywhere.

regards, tom lane



Re: [BUGS] BUG #6127: pg_restore failing: unexpected message type 0x58 during COPY from stdin

2011-07-26 Thread Greg Johnson
Just a follow-up on this bug. After further checking, some of the data did
not load, but pg_restore was able to continue restoring (it looks like it
skipped the bad records?).

The interesting thing: after I discovered the missing data, I dumped just
those three tables (form_config, form_field_config, form_big) using the -t
flag and they loaded in just fine...
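
For anyone trying the same workaround, the per-table dump and reload looks
roughly like this (database names are illustrative):

$ pg_dump -t form_config -t form_field_config -t form_big sourcedb > tables.sql
$ psql targetdb -f tables.sql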

Here is what I found in the log file:
LOG:  statement: BEGIN
LOG:  statement:
TRUNCATE TABLE ONLY form_config;
LOG:  statement: COPY form_field_config (form_field_config_id,
form_config_id, identifier, label, field_type, field_group, required,
display_on_summary, new_line, col_span, field_order, searchable,
fk_form_config_id, fk_label, fk_on_delete_action, primary_key,
form_big_column) FROM stdin;
LOG:  statement: COPY form_field_config_select (form_field_config_select_id,
form_field_config_id, label, value, "position") FROM stdin;
LOG:  statement: COPY form_config (form_config_id, customer_id, identifier,
form_config_type, label, description, icon, location, allow_multiple,
created_by, create_date, last_updated_by, last_updated, view_on_update,
view_on_insert, "position", record_label, converted, text_count,
bigint_count, timestamp_count, real_count, boolean_count) FROM stdin;
ERROR:  unexpected message type 0x58 during COPY from stdin
CONTEXT:  COPY form_field_config, line 1
STATEMENT:  COPY form_field_config (form_field_config_id, form_config_id,
identifier, label, field_type, field_group, required, display_on_summary,
new_line, col_span, field_order, searchable, fk_form_config_id, fk_label,
fk_on_delete_action, primary_key, form_big_column) FROM stdin;
ERROR:  unexpected message type 0x58 during COPY from stdin
CONTEXT:  COPY form_field_config_select, line 1
STATEMENT:  COPY form_field_config_select (form_field_config_select_id,
form_field_config_id, label, value, "position") FROM stdin;
ERROR:  unexpected message type 0x58 during COPY from stdin
CONTEXT:  COPY form_config, line 1
STATEMENT:  COPY form_config (form_config_id, customer_id, identifier,
form_config_type, label, description, icon, location, allow_multiple,
created_by, create_date, last_updated_by, last_updated, view_on_update,
view_on_insert, "position", record_label, converted, text_count,
bigint_count, timestamp_count, real_count, boolean_count) FROM stdin;
LOG:  could not send data to client: Broken pipe
STATEMENT:  COPY form_field_config_select (form_field_config_select_id,
form_field_config_id, label, value, "position") FROM stdin;
LOG:  could not send data to client: Broken pipe
STATEMENT:  COPY form_field_config (form_field_config_id, form_config_id,
identifier, label, field_type, field_group, required, display_on_summary,
new_line, col_span, field_order, searchable, fk_form_config_id, fk_label,
fk_on_delete_action, primary_key, form_big_column) FROM stdin;
LOG:  could not send data to client: Broken pipe
STATEMENT:  COPY form_config (form_config_id, customer_id, identifier,
form_config_type, label, description, icon, location, allow_multiple,
created_by, create_date, last_updated_by, last_updated, view_on_update,
view_on_insert, "position", record_label, converted, text_count,
bigint_count, timestamp_count, real_count, boolean_count) FROM stdin;
LOG:  unexpected EOF on client connection
LOG:  unexpected EOF on client connection
LOG:  unexpected EOF on client connection
LOG:  checkpoint starting: time
LOG:  checkpoint complete: wrote 208 buffers (0.1%); 0 transaction log
file(s) added, 0 removed, 0 recycled; write=41.602 s, sync=0.542 s,
total=42.273 s
ERROR:  unexpected message type 0x58 during COPY from stdin
CONTEXT:  COPY form_big, line 11102095
STATEMENT:  COPY form_big (form_big_id, customer_id, client_id, debt_id,
debt_trans_id, created_by, create_date, last_update, last_updated_by,
text, text0001, text0002, text0003, text0004, text0005, text0006,
text0007, text0008, text0009, text0010, text0011, text0012, text0013,
text0014, text0015, text0016, text0017, text0018, text0019, text0020,
text0021, text0022, text0023, text0024, text0025, text0026, text0027,
text0028, text0029, text0030, text0031, text0032, text0033, text0034,
text0035, text0036, text0037, text0038, text0039, text0040, text0041,
text0042, text0043, text0044, text0045, text0046, text0047, text0048,
text0049, text0050, text0051, text0052, text0053, text0054, text0055,
text0056, text0057, text0058, text0059, text0060, text0061, text0062,
text0063, text0064, text0065, text0066, text0067, text0068, text0069,
text0070, text0071, text0072, text0073, text0074, text0075, text0076,
text0077, text0078, text0079, text0080, text0081, text0082, text0083,
text0084, text0085, text0086, text0087, text0088, text0089, text0090,
text0091, text0092, text0093, text0094, text0095, text0096, text0097,
text0098, text0099, text0100, text0101, text0102, text0103, text0104,
text0105, text0106, text0107, text0108, text0109, text0110, text0111,
text0112, text0113, text0114, text0115, text0116, text0117, text0118,
text0119, text0120, text012

[BUGS] BUG #6131: Query Returning Incorrect Results

2011-07-26 Thread David Johnston

The following bug has been logged online:

Bug reference:  6131
Logged by:  David Johnston
Email address:  pol...@yahoo.com
PostgreSQL version: 9.0.4
Operating system:   Windows 7 64-bit
Description:        Query Returning Incorrect Results
Details: 

The embedded script exhibits some strange behavior.  When the query is run,
the [adjustment_paid] column for [technician] "6338B" should show +/- 25.00.
 Instead, if I run the last query immediately after creating the schema and
inserting the data, the results I get for that technician are 0.00 for both
records.  However, if I continue to run the query, or explicitly run ANALYZE,
I then get the correct results.

BTW, the other two records (techs 500A and 600A) show the expected values
each time the query is run.  Also, only those four records are expected.  

While I understand that ANALYZE should be run after loading in data, I should
not get blatantly incorrect results - at worst I should suffer a performance
hit; though with only 40 or so records in the table, performance is not
really a concern.

I can get this behavior to manifest consistently on my machine: 

PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 64-bit

FWIW, I execute all the following from a commercial GUI called PostgreSQL
Maestro.

Thanks,

David J.

--Begin Script

DROP VIEW IF EXISTS laborwip_payroll_entry_adjusted_ticket;
DROP VIEW IF EXISTS laborwip_payroll_technician_effectiverate;
DROP TABLE IF EXISTS laborwip_payroll_ticket;

CREATE TABLE laborwip_payroll_ticket (

s_id varchar NOT NULL,

accountnumber varchar NOT NULL,
date_reference date NOT NULL,
ticketnumber varchar NOT NULL,
technician varchar NOT NULL,
date_paid date NOT NULL,
detail_category varchar NOT NULL CHECK (detail_category IN
('++','--','+0','0+','-0','0-')),

hours_paid numeric(7,2) NOT NULL,
amount_paid numeric(7,2) NOT NULL,
rate_paid numeric(7,2) NULL,

CONSTRAINT laborwip_payroll_ticket_houramountconsistency
CHECK ((amount_paid >= 0 AND hours_paid >= 0) OR 
   (amount_paid <= 0 AND hours_paid <= 0)),

PRIMARY KEY (s_id, accountnumber, date_reference, ticketnumber, 
technician,
date_paid, detail_category)

);



INSERT INTO laborwip_payroll_ticket (s_id, accountnumber, date_reference,
ticketnumber, technician, date_paid, detail_category, amount_paid,
hours_paid, rate_paid) VALUES
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318421', '7001',
'1976-06-04', '++', 25, 2.5, 10.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318426', '6120',
'1976-06-01', '++', 10, 1, 10.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318427', '6120',
'1976-06-02', '++', 15, 1.5, 10.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318210', '6124',
'1976-06-02', '++', 30, 2, 15.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318322', '6126',
'1976-06-01', '++', 10, 0.5, 20.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318361', '6126',
'1976-06-01', '++', 30, 1.5, 20.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318285', '6126',
'1976-06-01', '++', 10, 0.5, 20.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318227', '6124',
'1976-06-01', '++', 45, 3, 15.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318396', '6126',
'1976-06-02', '++', 80, 4, 20.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318431', '6126',
'1976-06-02', '++', 50, 2.5, 20.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '317873', '6124',
'1976-06-02', '++', 15, 1, 15.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318421', '6126',
'1976-06-02', '++', 50, 2.5, 20.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318262', '6287',
'1976-06-01', '++', 2, 0.2, 10.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318352', '6126',
'1976-06-01', '++', 25, 1, 25.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318364', '6126',
'1976-06-02', '++', 30, 1.5, 20.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318210', '6124',
'1976-06-01', '++', 45, 3, 15.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318267', '7003',
'1976-06-04', '--', -10, -1, 10.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318274', '6287',
'1976-06-01', '++', 2, 0.2, 10.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318412', '6120',
'1976-06-02', '++', 5, 0.5, 10.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318182', '6338',
'1976-06-02', '++', 25, 1, 25.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318279', '6338B',
'1976-06-01', '++', 25, 1, 25.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318274', '7002',
'1976-06-04', '--', -30, -3, 10.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318379', '6126',
'1976-06-02', '++', 30, 1.5, 20.00),
  ('[TESTSG]STORE01', 'LABORWIP', '1976-06-15', '318349', '6126',
'1976-06-01', '++', 20, 1, 20.00),
  ('[TESTSG]STORE01', 'LA

Re: [BUGS] BUG #6086: [SOLVED] Segmentation fault

2011-07-26 Thread noordsij
> Linking pthreads into the backend is likely to cause more problems than
> it solves, especially if you're proposing that we do that everywhere.
> 
>   regards, tom lane


No, I am not proposing that. Not linking to pthread when it is not needed
should actually be an optimization, since the libc no-op stubs get used
instead.

I've found the issue now (yes, again :)

Short version:
FreeBSD compiles the main server and contrib modules without and with
pthreads, respectively. No bug in postgresql. If you're in a hurry, no need
to read any further.


Longer version:
With FreeBSD ports, the contrib part is compiled separately. For some
reason, the contrib units are being made to link to pthread, but the server
itself isn't.

So the backend forks to handle a request; some stored procedure touches
some XML; libxml2 thinks it is initializing some mutex by calling some
pthread_ stubs in libc; the stored procedure updates a column which has a
trigram index; postgresql dynamically loads pg_trgm.so, which links to
libpthread; and now the weak libc symbols are overridden, and a little bit
later we end up calling a real pthread_ method, passing an uninitialized
mutex, and get a bus error.
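
One quick way to see this kind of mismatch is to compare what the server
binary and a contrib module are linked against (paths are illustrative):

$ ldd /usr/local/bin/postgres | grep pthread
$ ldd /usr/local/lib/postgresql/pg_trgm.so | grep pthread

If only the second command reports libthr/libpthread, loading the module
will drag the real pthread symbols into an otherwise unthreaded backend.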

Lots of fun to debug, seeing pthread_self() suddenly changing return value
within a single thread/backend in the middle of processing a single
request..


With a custom (correct) compilation everything functions correctly with an
unmodified libxml2 and postgresql, and without any linking to or preloading
libpthread.

The question is, why are postgresql90-server and postgresql90-contrib
compiled differently?

It seems it may have been intended as a quick fix for pgbench: the
configure script for the -contrib port does not end up with "-pthread" in
the PTHREAD_CFLAGS in Makefile.global, which breaks compilation of pgbench,
while the server build does have this; I haven't found the reason yet. The
side effect is that all contrib modules link to pthread while the server
doesn't ..


I will continue this discussion within FreeBSD, as this is not a PostgreSQL
bug (though if there is ever a loadable contrib module which does need
pthreads the above situation will occur again).

Thanks again to all who helped!

Cheers
Dennis







Re: [BUGS] BUG #6131: Query Returning Incorrect Results

2011-07-26 Thread Tom Lane
"David Johnston"  writes:
> The embedded script exhibits some strange behavior.  When the query is run
> the [adjustment_paid] column for [technician] "6338B" should show +/- 25.00.
>  Instead, if I run the last query immediately after creating the schema and
> inserting the data the results I get for that technician is 0.00 for both
> records.  However, if I continue to run the query, or explicitly run ANALYZE
> , I then get the correct results.

Interesting example.  I don't believe it's a Postgres bug though, but
rather indeterminism in your query.  If you pull out the subquery that
is being fed to the window aggregate (row_number()):

SELECT s_id, date_reference, accountnumber, technician, rate_paid,
       COUNT(*) AS rate_count
FROM laborwip_payroll_ticket
GROUP BY s_id, date_reference, accountnumber, technician, rate_paid
ORDER BY s_id, date_reference, accountnumber, technician, rate_count DESC;

you will find that it produces slightly different output row ordering
before and after the ANALYZE.  I get

      s_id       | date_reference | accountnumber | technician | rate_paid | rate_count
-----------------+----------------+---------------+------------+-----------+------------
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 500A       |     15.00 |          2
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 500B       |     15.00 |          1
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6016       |     10.00 |          1
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6120       |     10.00 |          3
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6124       |     15.00 |          5
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6126       |     20.00 |          9
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6126       |     25.00 |          1
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6287       |     10.00 |          4
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6338       |     25.00 |          2
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6338B      |      0.00 |          2
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6338B      |     25.00 |          2
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6338B      |     10.00 |          1
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 7001       |     10.00 |          1
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 7002       |     10.00 |          2
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 7003       |     10.00 |          2
 [TESTSG]STORE02 | 1976-06-15     | LABORWIP      | 600A       |     10.00 |          4
 [TESTSG]STORE02 | 1976-06-15     | LABORWIP      | 600A       |      0.00 |          2
 [TESTSG]STORE02 | 1976-06-15     | LABORWIP      | 600B       |      0.00 |          1
(18 rows)

versus

      s_id       | date_reference | accountnumber | technician | rate_paid | rate_count
-----------------+----------------+---------------+------------+-----------+------------
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 500A       |     15.00 |          2
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 500B       |     15.00 |          1
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6016       |     10.00 |          1
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6120       |     10.00 |          3
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6124       |     15.00 |          5
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6126       |     20.00 |          9
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6126       |     25.00 |          1
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6287       |     10.00 |          4
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6338       |     25.00 |          2
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6338B      |     25.00 |          2
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6338B      |      0.00 |          2
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 6338B      |     10.00 |          1
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 7001       |     10.00 |          1
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 7002       |     10.00 |          2
 [TESTSG]STORE01 | 1976-06-15     | LABORWIP      | 7003       |     10.00 |          2
 [TESTSG]STORE02 | 1976-06-15     | LABORWIP      | 600A       |     10.00 |          4
 [TESTSG]STORE02 | 1976-06-15     | LABORWIP      | 600A       |      0.00 |          2
 [TESTSG]STORE02 | 1976-06-15     | LABORWIP      | 600B       |      0.00 |          1
(18 rows)

Note in particular that the first two rows for technician 6338B are
presented in different orders.  This is not a bug because they are not
distinct so far as the ORDER BY clause is concerned.  However, when
you feed them to the window aggregate stuff:

SELECT ...,
(row_number() OVER (PARTITION BY s_id, date_reference, accountnumber,
 technician ORDER BY rate_count DESC)

Re: [BUGS] BUG #6131: Query Returning Incorrect Results

2011-07-26 Thread David Johnston


-----Original Message-----
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Tuesday, July 26, 2011 7:42 PM
To: David Johnston
Cc: pgsql-bugs@postgresql.org
Subject: Re: [BUGS] BUG #6131: Query Returning Incorrect Results 

"David Johnston"  writes:
> The embedded script exhibits some strange behavior.  When the query is 
> run the [adjustment_paid] column for [technician] "6338B" should show +/-
25.00.
>  Instead, if I run the last query immediately after creating the 
> schema and inserting the data the results I get for that technician is 
> 0.00 for both records.  However, if I continue to run the query, or 
> explicitly run ANALYZE , I then get the correct results.

Interesting example.  I don't believe it's a Postgres bug though, but rather
indeterminism in your query.  If you pull out the subquery that is being fed
to the window aggregate (row_number()):

SELECT s_id, date_reference, accountnumber, technician, rate_paid,
       COUNT(*) AS rate_count
FROM laborwip_payroll_ticket
GROUP BY s_id, date_reference, accountnumber, technician, rate_paid
ORDER BY s_id, date_reference, accountnumber, technician, rate_count DESC;

you will find that it produces slightly different output row ordering before
and after the ANALYZE.  I get

... you get only the first one of those two rows.  And then your upper
query's results vary depending on which rate_paid you got.  So basically you
need to add more columns to the window aggregate PARTITION/ORDER BY clauses
to make the result more deterministic.
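
A minimal sketch of that kind of fix (the extra ORDER BY column, rate_paid
here, is just an illustrative tie-breaker):

row_number() OVER (PARTITION BY s_id, date_reference, accountnumber, technician
                   ORDER BY rate_count DESC, rate_paid DESC)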

-

Now I feel like a schmuck...sorry for the noise.  I should/do know better
but my mind is fried.  Thank you so much for the quick response.

David J.







[BUGS] pg_restore silently chokes on object comments/descriptions ending in a backslash

2011-07-26 Thread Julian Mehnle
I observe the following issue on PostgreSQL 9.0.4 on at least the
following platforms:

  * FreeBSD 6.3 (amd64)
`uname -a`:
FreeBSD  6.3-STABLE FreeBSD 6.3-STABLE #1: Fri May 30 18:11:47 
PDT 2008
  
root@:/data/obj/data/home//symbols/builddir_amd64/usr/src/sys/MESSAGING_GATEWAY.amd64_INSTALL
  amd64

  * Mac OS X 10.6.8 (i386)
`uname -a`:
Darwin joule 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun  7 16:33:36 PDT 
2011; root:xnu-1504.15.3~1/RELEASE_I386 i386 i386

  * semi-current Debian testing (amd64)
`uname -a`:
Linux gray 2.6.30-2-amd64 #1 SMP Mon Dec 7 05:21:45 UTC 2009 x86_64 
GNU/Linux

If the comment/description of a database object (table, function, etc.)
ends in a backslash (which generally works fine otherwise), then
pg_restore is unable to completely restore a custom-format dump of the
schema.  pg_restore does not complain, but silently(!) stops issuing DDL
statements to the server starting with the first "COMMENT ON …" statement
that would have set an object comment/description ending in a backslash.

Reproduce as follows:

$ createdb test0
$ createdb test1
$ psql -c "CREATE TABLE bar (); COMMENT ON TABLE bar IS 'bar\\';" test0
COMMENT
$ psql -c "CREATE TABLE foo (); COMMENT ON TABLE foo IS 'foo';" test0
COMMENT
$ pg_dump --format custom --file test0.pg_dump --schema-only test0
$ pg_restore -d test1 test0.pg_dump 
$ psql -c '\dt+' test0
   List of relations
 Schema | Name | Type  | Owner  |  Size   | Description 
--------+------+-------+--------+---------+-------------
 public | bar  | table | julian | 0 bytes | bar\
 public | foo  | table | julian | 0 bytes | foo
(2 rows)

$ psql -c '\dt+' test1
   List of relations
 Schema | Name | Type  | Owner  |  Size   | Description 
--------+------+-------+--------+---------+-------------
 public | bar  | table | julian | 0 bytes | 
(1 row)

This also happens with PostgreSQL 8.4.

To demonstrate that this is not an academic issue, these are a few
functions I have defined, and their comments:

List of functions
-[ RECORD 1 ]---------------------------------------------------------------
Schema      | public
Name        | escape_are
...         : ...
Description | escape advanced regexp (ARE) special characters: .*+?|[](){}^$\
-[ RECORD 2 ]---------------------------------------------------------------
Schema      | public
Name        | escape_control
...         : ...
Description | escape control characters: \a\b\t\n\v\f\r\e\\
-[ RECORD 3 ]---------------------------------------------------------------
Schema      | public
Name        | escape_like
...         : ...
Description | escape LIKE pattern special characters: %_\

I have worked around the issue by appending a space character to each of
those function descriptions.  What makes the problem really bad is that it
silently renders your custom-format database dumps (which pg_dump creates
just fine) useless, which you notice only after you do a restore (without
any error being thrown) and find that the restored database is incomplete.
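
One way to check a dump for this kind of silent truncation before relying
on it is to render the archive back to a script and compare it with a plain
schema dump of the source (file names are illustrative); this may show
whether the script output also stops at the offending COMMENT:

$ pg_dump --schema-only test0 > original.sql
$ pg_restore -f restored.sql test0.pg_dump
$ diff original.sql restored.sql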

-Julian

