Re: [GENERAL] PG 924, Windows 2012, error code 487

2014-01-13 Thread Abraham, Danny
Hi,


>Are the servers on the Windows versions you mention or the clients or both?
Both clients as well as server are located on the same Windows Server machine.

>Where is this error showing up, the Windows  or Postgres logs?
Both. It appears in pg_log, and the client connection fails as well.

>What does the other log show?
Nothing special. Once logged in PG is fine.

>Is there anything preceding the above that might shed light?
Unfortunately not. Memory problems within Windows (Error 487) used to be common 
in 8.3.7 ... but were solved in 8.3.15.
We are now having them again with 9.2.4.

>Are the connections across a network and if so is there a firewall between the 
>clients and server(s)?
No FW involved. All the connections are local. This is purely a server-side 
problem, and an intermittent one.
See BUG #5578: postrgesql database crash 


Thanks

Danny



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] execution time for plpgsql function and subfunction

2014-01-13 Thread Guillaume Lelarge
On Fri, 2014-01-10 at 10:10 +0100, Rémi Cura wrote:
> Hey List,
> kind of a simple question :
> 
> I'm using the postgis_topology extension,
> and I'm trying to figure where the slowness comes from when importing data.
> 
> It involves plpgsql function calling other plpgsql functions, insert,
> update, etc etc.
> 
> I know I can use explain analyze for one query, but I don't know how to
> get details of how much time takes each sub-function called by a main
> function.
> 
> Thus it is very difficult to guess where is the bottleneck.
> 

Shameless plug, but here is how I do it:
http://blog.guillaume.lelarge.info/index.php/post/2012/03/31/Profiling-PL/pgsql-functions
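
As a sketch of one commonly used alternative (not taken from the blog post
above): the auto_explain contrib module can log the timing of every statement
executed inside nested PL/pgSQL functions, which helps attribute time to
sub-functions. Settings below are session-level examples and generally require
superuser:

```sql
-- Log plans and run times for every statement, including statements
-- executed inside nested PL/pgSQL function calls.
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log everything, no threshold
SET auto_explain.log_nested_statements = on;  -- include statements inside functions
SET auto_explain.log_analyze = on;            -- include actual (not estimated) timings
-- Now run the top-level import function and read the server log.
```

With log_analyze on every statement is instrumented, so expect noticeable
overhead; use it on a test run, not in production.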


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com





Re: [GENERAL] execution time for plpgsql function and subfunction

2014-01-13 Thread Rémi Cura
Thanks all =)

Cheers,

Rémi-C


2014/1/13 Guillaume Lelarge 

> On Fri, 2014-01-10 at 10:10 +0100, Rémi Cura wrote:
> > Hey List,
> > kind of a simple question :
> >
> > I'm using the postgis_topology extension,
> > and I'm trying to figure where the slowness comes from when importing
> data.
> >
> > It involves plpgsql function calling other plpgsql functions, insert,
> > update, etc etc.
> >
> > I know I can use explain analyze for one query, but I don't know how to
> > get details of how much time takes each sub-function called by a main
> > function.
> >
> > Thus it is very difficult to guess where is the bottleneck.
> >
>
> Shameless plug, but here is how I do it:
>
> http://blog.guillaume.lelarge.info/index.php/post/2012/03/31/Profiling-PL/pgsql-functions
>
>
> --
> Guillaume
> http://blog.guillaume.lelarge.info
> http://www.dalibo.com
>
>


Re: [GENERAL] DB Authentication Design

2014-01-13 Thread François Beausoleil
Hello Chris,

Le 2014-01-12 à 23:24, Chris Travers a écrit :
> 
> On Sun, Jan 12, 2014 at 6:30 AM, François Beausoleil  
> wrote:
> Hi all,
> 
> I'm thinking that all apps that connect to the database should have their own 
> user. For example, the web application process is one user, then a report 
> builder process should have another user, and a different process that 
> imports data should have his own too, and so on. Would you generally agree 
> with that?
> 
> I'm thinking that by having different users, PGbouncer can create different 
> pools, and better allow me to control concurrency.
> 
> 
> It really depends on what you are doing, what your security model is, what 
> your concurrency constraints are, etc.  What you are describing is a fairly 
> typical approach and it sacrifices some real security possibilities for 
> connection pooling possibilities.  The fundamental question is whether the 
> security of your application's user should be tied to the database 
> connection. 

This database cluster is not exposed to the outside world. What I really need 
is a way to control the number of queries executing simultaneously. Your "per 
application" approach is a better name for what I described.

I also have web-facing applications, in which case the per-user approach sounds 
good.

Thanks!
François





Re: [GENERAL] pg_upgrade & tablespaces

2014-01-13 Thread Joseph Kregloh
On Fri, Jan 10, 2014 at 11:53 AM, Adrian Klaver wrote:

> On 01/10/2014 08:40 AM, Joseph Kregloh wrote:
>
>> Just as a followup to this. The process that I am using to do the
>> upgrade is as follows:
>>
>> 1. Install Postgres 9.3 in /opt dir.
>> 2. In 9.0 instance update spclocation in pg_tablespace.
>> 3. Update the symlinks in the pg_tblspace folder.
>> 4. Move the tablespace folders to new location.
>> 5. Run pg_upgrade.
>> 6. Test upgrade results on 9.3 cluster.
>> 7. Run the sh files created by pg_upgrade.
>> 8. Uninstall Postgres 9.3 in /opt dir.
>> 9. Install Postgres 9.3 in default location.
>> 10. Enjoy Postgres 9.3.
>>
>
> For completeness, the new location you are moving the tablespaces to, is
> that in or out of $PGDATA?
>
>
The new location is /usr/local/pgsql/tablespaces/


> FYI, from comments over on --hackers, I believe Bruce Momjian may offer
> some insight on what is going on.
>
>
>> I could actually move the 9.0 cluster after moving the table spaces and
>> install 9.3 in the default location as the documentation shows. But I
>> haven't experimented with that scenario yet.
>>
>> -Joseph
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


Re: [GENERAL] pg_upgrade & tablespaces

2014-01-13 Thread Joseph Kregloh
Right.  I know there were multiple issues with this upgrade, jails
> probably being the biggest, but a new one I had never heard is that _if_
> you are placing your tablespaces in the PGDATA directory, and you are
> upgrading from pre-9.2, if you rename the old data directory, you also
> need to start the old server and update pg_tablespace.spclocation.
>
>
Just to have it on the record: I did the upgrade outside of the jail to
make sure. I also tested it within jails and it worked there as well.
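
The pg_tablespace.spclocation update described above can be sketched as follows
(the tablespace name and path are examples; run as superuser on the old,
pre-9.2 cluster, where the spclocation column still exists):

```sql
-- Point an existing tablespace row at the directory it was moved to.
-- Remember to also fix the matching symlink in $PGDATA/pg_tblspc.
UPDATE pg_tablespace
   SET spclocation = '/usr/local/pgsql/tablespaces/my_ts'
 WHERE spcname = 'my_ts';
```

In 9.2 and later the column is gone and the symlinks in pg_tblspc are the only
record of tablespace locations.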


Re: [GENERAL] pg_largeobject related issue with 9.2

2014-01-13 Thread sramay
Hi Kevin,

Thanks for the prompt answer. 
Page 0 of the record does exist. But even if I take a dump of
pg_largeobject_metadata from the source database, I still get the same
message.

The database runs under the 'postgres' superuser only. 

Which command can I use to recreate pg_largeobject_metadata?

Thanks in advance .

Regards

Rama 



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/pg-largeobject-related-issue-with-9-2-tp5784969p5786648.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] pg_largeobject related issue with 9.2

2014-01-13 Thread sramay
Hi Kevin,


Following Mr. Momjian's suggestion, I was able to recreate the
pg_largeobject_metadata entries by running a query instead of copying the
table. I assumed that postgres is the user at the source and hence it should
work for the destination also. The command:

select pg_catalog.lo_create(t.loid)
from (select distinct loid from pg_catalog.pg_largeobject) as t;


My issues are solved.  Thanks for the hint to you and to Mr. Momjian.

Regards

Rama



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/pg-largeobject-related-issue-with-9-2-tp5784969p5786658.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] How to specify/mock the statistic data of tables in PostgreSQL

2014-01-13 Thread Tom Lane
Felix.徐  writes:
> //line 194 : In a "most common values" slot, staop is the OID of the "="
> operator used to decide whether values are the same or not.
> //line 206 : A "histogram" slot describes the distribution of scalar data.
>  staop is the OID of the "<" operator that describes the sort ordering.

> I don't understand the function of staop here, how is it used in optimizer,

In principle a data type could have more than one sort ordering, and if
we were to collect stats according to multiple orderings, staop would be
needed to identify which ordering a particular set of statistics was
created with.  That flexibility isn't being used right now, at least not
by any built-in code.  There are types with more than one ordering (more
than one btree opclass), but ANALYZE only collects stats for the default
btree opclass.
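
For reference, the staop columns hold pg_operator OIDs. One way to inspect
which operator a statistics slot was built with, sketched here (pg_statistic is
only readable by superusers; the regoperator cast renders the OID as an
operator name):

```sql
-- Show, for a few columns, the operator OID behind statistics slot 1.
SELECT starelid::regclass    AS table_name,
       staattnum             AS column_number,
       stakind1              AS slot_kind,      -- e.g. 1 = MCV, 2 = histogram
       staop1::regoperator   AS slot_operator   -- "=" for MCV, "<" for histogram
  FROM pg_statistic
 LIMIT 5;
```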

regards, tom lane




Re: [GENERAL] How to specify/mock the statistic data of tables in PostgreSQL

2014-01-13 Thread ygnhzeus
I see, thanks. So the staop* columns are not currently used by the planner by 
default, right?
The staop columns are of type oid; which catalog table do they reference?

2014-01-13



ygnhzeus



From: Tom Lane 
Sent: 2014-01-13 23:21
Subject: Re: [GENERAL] How to specify/mock the statistic data of tables in PostgreSQL
To: "Felix.徐"
Cc: "Amit Langote","Atri Sharma","pgsql-general"

Felix.徐  writes: 
> //line 194 : In a "most common values" slot, staop is the OID of the "=" 
> operator used to decide whether values are the same or not. 
> //line 206 : A "histogram" slot describes the distribution of scalar data. 
>  staop is the OID of the "<" operator that describes the sort ordering. 

> I don't understand the function of staop here, how is it used in optimizer, 

In principle a data type could have more than one sort ordering, and if 
we were to collect stats according to multiple orderings, staop would be 
needed to identify which ordering a particular set of statistics was 
created with.  That flexibility isn't being used right now, at least not 
by any built-in code.  There are types with more than one ordering (more 
than one btree opclass), but ANALYZE only collects stats for the default 
btree opclass. 

regards, tom lane 

Re: [GENERAL] PG 924, Windows 2012, error code 487

2014-01-13 Thread Adrian Klaver

On 01/13/2014 01:44 AM, Abraham, Danny wrote:

Hi,



Are the servers on the Windows versions you mention or the clients or both?

Both clients as well as server are located on the same Windows Server machine.


Where is this error showing up, the Windows  or Postgres logs?

Both. It appears in the pg_log as well as failing the client


What does the other log show?

Nothing special. Once logged in PG is fine.


Is there anything preceding the above that might shed light?

Unfortunately not. Memory problems within Windows (Error 487) used to be common 
in 8.3.7 ... but got solved in 8.3.15.
We are now having them again with 9.2.4.


From what I can find, WSACancelBlockingCall is a Winsock function that has to 
do with networking/messaging and not memory issues per se. So what makes 
you think it is a memory issue, or are the memory problems a separate issue?


See below for more detail on WSACancelBlockingCall, it might help you 
determine how to debug:


http://www.sockets.com/winsock.htm#CancelBlockingCall




Are the connections across a network and if so is there a firewall between the 
clients and server(s)?

No FW involved. All the connections are local. This is a pure , inconsistent 
Server problem.
See BUG #5578: postrgesql database crash 


Looked at the bug report and the last comment was from Robert Haas who 
indicated there was not enough information to work with. I would say we 
are at that same point now.


In your first post you mention that this is related to connection attempts.

So:

What clients are connecting and is it restricted to particular clients?

How many connections are you attempting at a time?

You said many connection refusals are happening, how many as a percentage of 
the total?


You also mentioned 'Sometimes it even fails the AUTOVACUM  process'.

What does that mean, the AUTOVACUM process cannot connect, it connects 
but then fails, other?


Also what is the log entry when AUTOVACUM fails?





Thanks

Danny




--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] PG 924, Windows 2012, error code 487

2014-01-13 Thread Abraham, Danny
Hi,

Thanks a lot for your reply ...

>From what I can find, WSACancelBlockingCall is a Winsock function that has to do 
>with networking/messaging and not memory issues per se. So what makes you 
>think it is a memory issue, or are the memory problems a separate issue?


See below for more detail on WSACancelBlockingCall, it might help you determine 
how to debug:

http://www.sockets.com/winsock.htm#CancelBlockingCall

>Looked at the bug report and the last comment was from Robert Haas who 
>indicated there was not enough information to work with. I would say we are at 
>that same point now.
>In your first post you mention that this is related to connection attempts.
>So:
>What clients are connecting and is it restricted to particular clients?

This is in fact a long installation thread that is using  JDBC code, some 
libpq/C, some psql scripts. 
Not many connections in parallel. Just one thread that is opening and closing a 
few connections to the PG server.

>How many connections are you attempting at a time?
Very few of them

>You said many connection refusals are happening, how many as percent of total?
This is a brand new server, automatically installed. 
Once initdb is over ... PG will not allow connections.


>You also mentioned 'Sometimes it even fails the AUTOVACUM  process'.

2014-01-07 15:56:11.232 GMTLOG:  autovacuum launcher started
2014-01-07 15:56:23.687 GMTLOG:  could not reserve shared memory region 
(addr=0108) for child 1354: error code 487
2014-01-07 15:56:23.687 GMTLOG:  could not fork new process for connection: A 
blocking operation was interrupted by a call to WSACancelBlockingCall.


>What does that mean, the AUTOVACUM process cannot connect, it connects but 
>then fails, other?

As you see above, it does not connect...

>Also what is the log entry when AUTOVACUM fails?





[GENERAL] question about checksum in 9.3

2014-01-13 Thread Mike Broers
Hello, I am in the process of planning a 9.3 migration of postgres and I am
curious about the checksum features available.  In my test 9.3 instance it
seemed like this feature provides a log entry of the exact database/oid of
the corrupt object when it is accessed, but not much else.  I can't find
much documentation on anything else this feature provides.

Is there a built-in method of scanning the server to check for corruption
or will I have to wait for a corrupt object to be accessed to see the log
entry?  Is there a relation that stores last checksum status or anyway of
reporting on what objects are identified by postgres as corrupt or not
corrupt?

Are there any other features of the checksum I am missing besides the log
entry?

Thanks
Mike


Re: [GENERAL] PG 924, Windows 2012, error code 487

2014-01-13 Thread Adrian Klaver

On 01/13/2014 08:43 AM, Abraham, Danny wrote:

Hi,

Thanks a lot for your reply ...


From what I can find, WSACancelBlockingCall is a Winsock function that has to do with 
networking/messaging and not memory issues per se. So what makes you think it is a 
memory issue, or are the memory problems a separate issue?



See below for more detail on WSACancelBlockingCall, it might help you determine 
how to debug:

http://www.sockets.com/winsock.htm#CancelBlockingCall


Looked at the bug report and the last comment was from Robert Haas who 
indicated there was not enough information to work with. I would say we are at 
that same point now.
In your first post you mention that this is related to connection attempts.
So:
What clients are connecting and is it restricted to particular clients?


This is in fact a long installation thread that is using  JDBC code, some 
libpq/C, some psql scripts.
Not many connections in parallel. Just one thread that is opening and closing a 
few connections to the PG server.


I know I am asking a lot of questions, just trying to narrow down the 
parameters. In all honesty my knowledge of Windows internals is limited, 
but the hope is that sufficient information will trigger an aha moment 
in others on the list with more experience in this area.



So this is just happening when you do the install of the server?

So you are using an install of your own making rather than the graphical 
installer from the download site, or are you using them in combination?


Of course this leads to: what exactly is the install procedure trying to do?

By any chance are you trying to load plugin_debugger.dll as in the #5578 bug 
report?





How many connections are you attempting at a time?

Very few of them


You said many connection refusals are happening, how many as percent of total?

This is a brand new server, automatically installed.
Once initdb is over ... PG will not allow connections.


So how does that reconcile with this from your previous post?

"Nothing special. Once logged in PG is fine."





You also mentioned 'Sometimes it even fails the AUTOVACUM  process'.


2014-01-07 15:56:11.232 GMTLOG:  autovacuum launcher started
2014-01-07 15:56:23.687 GMTLOG:  could not reserve shared memory region 
(addr=0108) for child 1354: error code 487
2014-01-07 15:56:23.687 GMTLOG:  could not fork new process for connection: A 
blocking operation was interrupted by a call to WSACancelBlockingCall.


So is this during the same install process as above or after the server 
is 'up' and running?




--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Index space growing even after cleanup via autovacuum in Postgres 9.2

2014-01-13 Thread Francisco Olarte
Hi:

On Mon, Jan 13, 2014 at 5:26 PM, Tirthankar Barari  wrote:
> On 01/10/2014 07:06 AM, Francisco Olarte wrote:
>> Not related to your vacuum problem, but if your pattern is something
>> like deleting everything inserted 15 days ago you may want to think of
>> using partitioning or simple inheritance. Make the scheduled task
>> create  a new partition/child table, redirect insertions to it, drop
>> the oldest partition.
...
> Thanks for your input. The retention window was supposed to be variable and
> dynamically changeable. So, partitioning is our last resort. Will try the
> vacuum after delete instead of autovacuum.

The ability to use partitioning/inheritance does not depend on a static
window, but on whether your deletion pattern is as described. Suppose
you do it daily. You can name your partitions / child tables
child_YYYYMMDD. Then, to delete data that is more than N days old, you
build the cutoff date, select from the system catalog the relations
whose name matches child_\d\d\d\d\d\d\d\d, whose name is less than
child_12345678 ( substitute the cutoff date ), and whose parent is the
appropriate table, and drop all of them. If the retention window just
grew ( say from 15 to 20 ), for the first 5 days you'll find no child
table to drop ( same as when starting, which looks like the window
growing from 0 to N ). If it shrank from 15 to 10, on the first day
you'll drop 10 tables. Depending on how you change the retention window
you can also delete just the appropriate partition, ignoring the error
in case it does not exist ( to accommodate window-growing cases you can
even use DROP TABLE IF EXISTS ), and when the window shrinks you can
zap the extra tables manually or in the procedure which shrinks the
window. The advantage of this second approach is avoiding the system
catalog query, but I personally would use the first alternative. The
logic is much the same as with deletes, just using partition drops.
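
A minimal sketch of the catalog-driven drop described above. The
child_YYYYMMDD naming, the 15-day window, and the parent table name
"measurements" are all assumptions for illustration:

```sql
-- Drop every child table of "measurements" whose date suffix is older
-- than the cutoff. Run daily (e.g. from cron via psql).
DO $$
DECLARE
    cutoff text := 'child_' || to_char(now() - interval '15 days', 'YYYYMMDD');
    child  record;
BEGIN
    FOR child IN
        SELECT c.relname
          FROM pg_class c
          JOIN pg_inherits i ON i.inhrelid = c.oid
          JOIN pg_class p    ON p.oid = i.inhparent
         WHERE p.relname = 'measurements'        -- assumed parent table
           AND c.relname ~ '^child_\d{8}$'       -- only date-named children
           AND c.relname < cutoff                -- older than the window
    LOOP
        EXECUTE format('DROP TABLE %I', child.relname);
    END LOOP;
END $$;
```

Because the names embed the date in YYYYMMDD order, plain text comparison
against the cutoff name is enough to find the old partitions.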

Regards.

Francisco Olarte.




Re: [GENERAL] Index space growing even after cleanup via autovacuum in Postgres 9.2

2014-01-13 Thread Tirthankar Barari

On 01/10/2014 07:06 AM, Francisco Olarte wrote:

Hi:

On Thu, Jan 9, 2014 at 7:50 PM, Tirthankar Barari  wrote:

We have a table where we insert about 10 million rows everyday. We keep 14
day's worth of entries (i.e. 140 mil). A scheduled task wakes up every day
and deletes all entries past the 14 day window (i.e. deletes entries from
the 15th day in the past).

Not related to your vacuum problem, but if your pattern is something
like deleting everything inserted 15 days ago you may want to think of
using partitioning or simple inheritance. Make the scheduled task
create  a new partition/child table, redirect insertions to it, drop
the oldest partition.

Also, if the pattern is insert-only all the time, delete whole day, it
may be easier to exclude the table from autovacuum and make the
scheduled task vacuum the table after deleting.

Francisco Olarte.


Thanks for your input. The retention window was supposed to be variable and 
dynamically changeable, so partitioning is our last resort. We will try 
vacuuming after the delete instead of relying on autovacuum.





[GENERAL] pg_restore - table restoration

2014-01-13 Thread Day, David
Hi,



This is sort of a continuation of problems I was working on last week
with selective restoration of an archive file at the schema or table level.  
(v9.3)
Given that I dumped the entire database ( pg_dump -Fc  my_db -f archive_file )

When I pg_restore an entire schema ( -n ) everything is wonderful.


If I try to restore two tables in one of the schemas, I encounter problems.

I get success of a sort with these option variations:

pg_restore -c  -t tbl1 -t tbl2 -U  -d my_db  archive_file

In this case the tables are recreated with data, but all the original 
constraints for these tables are missing, as are the triggers associated 
with the tables.   I guess I can understand this.

This variation seems encouraging but ultimately fails:

pg_restore -a -v  -c  -t  translator_sys -t translator_sys_mbr -U  -d 
my_db  archive_file

pg_restore: connecting to database for restore
pg_restore: dropping TABLE DATA translator_sys
pg_restore: dropping TABLE DATA translator_sys_mbr
pg_restore: dropping TABLE DATA translator_sys
pg_restore: processing data for table "translator_sys"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 4247; 0 332255 TABLE DATA 
translator_sys redcom
pg_restore: [archiver (db)] COPY failed for table "translator_sys": ERROR:  
duplicate key value violates unique constraint "translator_sys_pkey"
DETAIL:  Key (translator_id)=(1) already exists.
CONTEXT:  COPY translator_sys, line 1
pg_restore: processing data for table "translator_sys_mbr"
pg_restore: [archiver (db)] Error from TOC entry 4248; 0 332262 TABLE DATA 
translator_sys_mbr redcom
pg_restore: [archiver (db)] COPY failed for table "translator_sys_mbr": ERROR:  
duplicate key value violates unique constraint "translator_sys_mbr_pkey"
DETAIL:  Key (translator_id, tid_seq)=(1, 1) already exists.
CONTEXT:  COPY translator_sys_mbr, line 1
pg_restore: processing data for table "translator_sys"

Based on the verbose output, it seems geared to TRUNCATE or DELETE the 
specified table data in this case.
However, I see no SQL commands on stdout to support this verbose message, so it 
ultimately fails because of the duplicate primary keys in the existing tables.
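
One possible workaround, sketched here as an assumption rather than something
established in the thread: empty the target tables first, then do a data-only
restore of just those tables, so the COPY no longer collides with existing
primary keys:

```shell
# Clear existing rows (CASCADE only if other tables reference them),
# then load only the data for the two tables from the archive.
psql -d my_db -c 'TRUNCATE translator_sys, translator_sys_mbr CASCADE;'
pg_restore -a -t translator_sys -t translator_sys_mbr -d my_db archive_file
```

Note that TRUNCATE ... CASCADE also empties referencing tables, so check
foreign keys before running this against real data.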


Is this a bug or a misunderstanding on my part?



Regards


Dave Day




Re: [GENERAL] pg_restore - table restoration

2014-01-13 Thread Adrian Klaver

On 01/13/2014 01:43 PM, Day, David wrote:

Hi,

This is sort of a continuation of problems I was working on last week

with selective restorations of an archive file at the schema or table
level.  ( V9.3)

Given that I dumped the entire database ( pg_dump -Fc  my_db -f
archive_file )

When I pg_restore an entire schema ( -n ) everything is wonderful.

If I try to attempt two tables in one of the schemas I encounter problems.

I get a success of sort with these option  variations:

pg_restore -c  -t tbl1 -t tbl2 -U  -d my_db  archive_file

In this case the tables are recreated with data but all the original
  constraints for these tables are missing

As are triggers that are associated with the tables.   I guess I can
understand this.


I just tried something similar here and I am not seeing that. I see 
constraints and triggers.


Is there anything in the log that might help?




Is this a bug or a mis-understanding on my part?


One thing you can do to help see what is going on is restore to a file 
instead of the database. This creates a plain text file with the SQL 
statements. So:


 pg_restore -c  -t tbl1 -t tbl2 -U  -f plain_text.sql 
archive_file




Regards

Dave Day




--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] pg_restore - table restoration

2014-01-13 Thread Adrian Klaver

On 01/13/2014 01:43 PM, Day, David wrote:

Hi,

This is sort of a continuation of problems I was working on last week

with selective restorations of an archive file at the schema or table
level.  ( V9.3)

Given that I dumped the entire database ( pg_dump -Fc  my_db -f
archive_file )

When I pg_restore an entire schema ( -n ) everything is wonderful.

If I try to attempt two tables in one of the schemas I encounter problems.

I get a success of sort with these option  variations:

pg_restore -c  -t tbl1 -t tbl2 -U  -d my_db  archive_file

In this case the tables are recreated with data but all the original
  constraints for these tables are missing

As are triggers that are associated with the tables.   I guess I can
understand this.




Is this a bug or a mis-understanding on my part?



Oops, turns out I did not exactly replicate what you were doing and my 
previous answer is wrong.


What I found.

When I do this:

/usr/local/pgsql93/bin/pg_dump -Fc  -U hplc_admin -p 5452 -f hplc.out hplc

and then this:

/usr/local/pgsql93/bin/pg_restore -c -t student_sessions -t 
student_attendance -f hplc_table.sql hplc.out


I see what you see, no constraints or triggers in the SQL.

When I do what I originally posted about:

/usr/local/pgsql93/bin/pg_dump -Fc -c -t student_sessions -t 
student_attendance -U hplc_admin -p 5452 -f hplc_tables_2.out hplc


and then this:

/usr/local/pgsql93/bin/pg_restore -c -f hplc_table_2.sql hplc_tables_2.out

I do see the constraints and triggers in the output.

I was under the impression that the result should be the same for both, so I 
would consider this puzzling at the least and a bug at the most. Someone else 
will have to chime in on what is really happening, because I do not know, and 
I see nothing in the docs to suggest the behavior should be different.





Regards

Dave Day




--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] question about checksum in 9.3

2014-01-13 Thread Adrian Klaver

On 01/13/2014 08:50 AM, Mike Broers wrote:

Hello, I am in the process of planning a 9.3 migration of postgres and I
am curious about the checksum features available.  In my test 9.3
instance it seemed like this feature provides a log entry of the exact
database/oid of the corrupt object when it is accessed, but not much
else.  I can't find much documentation on anything else this feature
provides.

Is there a built-in method of scanning the server to check for
corruption or will I have to wait for a corrupt object to be accessed to
see the log entry?  Is there a relation that stores last checksum status
or anyway of reporting on what objects are identified by postgres as
corrupt or not corrupt?

Are there any other features of the checksum I am missing besides the
log entry?


I could not find anything, and the reason for that would seem to be a result 
of this sentence:


"The checksum is not valid at all times on a data page!!"

from the below where the gory details are laid out:

src/backend/storage/page/README




Thanks
Mike



--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] question about checksum in 9.3

2014-01-13 Thread Michael Paquier
On Tue, Jan 14, 2014 at 1:50 AM, Mike Broers  wrote:
> Hello, I am in the process of planning a 9.3 migration of postgres and I am
> curious about the checksum features available.  In my test 9.3 instance it
> seemed like this feature provides a log entry of the exact database/oid of
> the corrupt object when it is accessed, but not much else.  I can't find
> much documentation on anything else this feature provides.
A few things:
- The only way to know if a server is using data checksums is to use
pg_controldata.
- Be aware as well of the potential performance impact on your CPU,
checksums are checked each time a page is read, and recalculated each
time a page is updated.
- ignore_checksum_failure can be used to ignore failures. Don't use
that on a production system.

> Is there a built-in method of scanning the server to check for corruption or
> will I have to wait for a corrupt object to be accessed to see the log
> entry?
You can also manually access tables, for example with a sequential scan, to 
check whether blocks are broken or not.

> Is there a relation that stores last checksum status or anyway of
> reporting on what objects are identified by postgres as corrupt or not
> corrupt?
No, though you could build one with a background worker that scans relation 
pages and registers the failing blocks.

> Are there any other features of the checksum I am missing besides the log
> entry?
9.4 has a new GUC parameter called data_checksums that allows you to check 
from a psql client whether checksums are used on a server.
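
The checks described above can be sketched like this (the data directory path,
database, and table names are examples):

```shell
# 1. Is the cluster built with checksums? pg_controldata reports it
#    ("Data page checksum version" is non-zero when enabled).
pg_controldata "$PGDATA" | grep -i checksum

# 2. Force a full read of a table so any checksum failure surfaces in the
#    server log -- in 9.3 there is no built-in whole-cluster scanner.
psql -d mydb -c 'SELECT count(*) FROM mytable;'
```

A sequential scan only verifies pages it actually reads, so scanning every
table (and index, e.g. by rebuilding or scanning it) is needed for full coverage.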

Regards,
-- 
Michael

