[GENERAL] pg_dump (PostgreSQL) 9.4.1 - delay in checking if file exists

2015-07-07 Thread pinker
I made a mistake in the filename in a pg_dump command, i.e. I used a path from
another server which does not exist on this one. Instead of first checking
permissions / existence of the output file, pg_dump dumped the whole database
and only at the end (after some time ...) threw an error:

(...)
pg_dump: saving database definition
pg_dump: [archiver] could not open output file
"/home/.../dum-_20150707_1059.sql": No such file or directory

Is this correct behavior? Why waste so much time and resources by leaving
the output file check to the last moment?





Re: [GENERAL] database-level lockdown

2015-07-07 Thread Filipe Pina
Exactly, that's the twist I've decided after some tests yesterday (the 
"lock all tables on last try" degraded performance in an obscene way): 
giving up on serializable by default.


I wanted to use serializable so developers wouldn't have to worry about 
properly using locks, but implementing this "serialization failure" 
handler is becoming a major headache...


Thanks guys!

On Ter, Jul 7, 2015 at 1:41 , Adrian Klaver  
wrote:

On 07/06/2015 07:15 AM, Filipe Pina wrote:

Yes, I've tried to come up with a guideline to enumerate the tables used in
each process, but it's not simple because it's a Python application
calling pgsql functions that use other functions, so it's tricky for a
developer re-using existing functions to enumerate the tables used for
those. Even if everything is well documented and can be re-used, it seems
like a nasty task...


Still not sure what it is you are trying to accomplish.

Is it really necessary that every transaction be serialized?

Or to put it another way, why are you running in serializable by 
default?


Or yet another way, what is the problem you are trying to solve with 
serialized transactions?






For now, I'm locking all tables to be able to close the gap, but I'm also
wondering if I could do it in a pgsql function as I mentioned in the
question:

FUNCTION A
-> FUNCTION B
> lock TABLE
-> FUNCTION C
> TABLE is not locked anymore because function B frees it as soon as
it returns

Is there some way to have a function that locks some tables on the
"outer" transaction instead of its own subtransaction?





--
Adrian Klaver
adrian.kla...@aklaver.com


Re: [GENERAL] [pg_hba.conf] publish own Python application using PostgreSQL

2015-07-07 Thread John McKown
On Tue, Jul 7, 2015 at 12:10 AM, John R Pierce  wrote:

> On 7/6/2015 9:55 PM, c.bu...@posteo.jp wrote:
>
>> On 2015-07-05 22:16 John R Pierce  wrote:
>>
>>> >at a bare minimum, a database administrator needs to create database
>>> >roles (users) and databases for an app like yours.
>>>
>> The admin doesn't need to create the db. It is done by the application
>> (sqlalchemy-utils on Python3) itself.
>>
>
> an application should not have the privileges to do that.   you don't run
> your apps as 'root', do you?   why would you run them as a database
> administrator ?


​Trigger Warning (Thanks, Mallard Fillmore)

I agree with you on this. If I were a customer and some vendor said: "Oh
yes, to run our product, you must configure your multi-user database to
disable passwords and run it as a DBA so that it can make schema changes on
the fly", then I'd simply say "no sale". Of course, in regard to the
schema, it would be proper to document what the DBA needs to do to set up
the database with the proper tables and other items. WRT the database
userid and password, that, IMO, should be some sort of installation
parameter, not "hard coded" into the code itself.
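
A minimal sketch of that idea, with invented names (the MYAPP_DATABASE_URL
variable, the myapp database and role are assumptions for the example): the
application picks up its connection settings at installation/deployment time
instead of hard coding them:

import os

from sqlalchemy import create_engine, text

# Connection settings supplied at installation/deployment time
# (an environment variable here; a config file would work just as well).
# The variable name and the fallback URL are invented for the example.
db_url = os.environ.get(
    "MYAPP_DATABASE_URL",
    "postgresql://myapp_user:change-me@localhost:5432/myapp",
)

engine = create_engine(db_url)

with engine.connect() as conn:
    # The application only needs ordinary privileges on its own objects;
    # creating the database and the role stays with the DBA.
    row = conn.execute(text("SELECT current_user, current_database()")).fetchone()
    print(row)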

SQLite, which I guess the OP has decided to use, is a much better choice
for _this_ application. IMO, it does not seem to "play well with others".
​


> --
> john r pierce, recycling bits in santa cruz
>
>
-- 

Schrodinger's backup: The condition of any backup is unknown until a
restore is attempted.

Yoda of Borg, we are. Futile, resistance is, yes. Assimilated, you will be.

He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! <><
John McKown


Re: [GENERAL] pg_dump (PostgreSQL) 9.4.1 - delay in checking if file exists

2015-07-07 Thread John McKown
On Tue, Jul 7, 2015 at 4:26 AM, pinker  wrote:

> I made a mistake in the filename in a pg_dump command, i.e. I used a path from
> another server which does not exist on this one. Instead of first checking
> permissions / existence of the output file, pg_dump dumped the whole database
> and only at the end (after some time ...) threw an error:
>
> (...)
> pg_dump: saving database definition
> pg_dump: [archiver] could not open output file
> "/home/.../dum-_20150707_1059.sql": No such file or directory
>
> Is this correct behavior? Why waste so much time and resources by leaving
> the output file check to the last moment?
>
>
​What version of PostgreSQL? What OS? What was the command line? On Linux
x86_64, Fedora 22​, PostgreSQL version 9.4.4, I did:

pg_dump -f /junk/x tsh009

and, almost immediately, got back:

pg_dump: [archiver] could not open output file "/junk/x": No such file or
directory


I even looked at the source to pg_dump (not that I'm a good C developer!)
and it appears to me that it basically parses the options, opens the output
file, then connects to the database server.

-- 

Schrodinger's backup: The condition of any backup is unknown until a
restore is attempted.

Yoda of Borg, we are. Futile, resistance is, yes. Assimilated, you will be.

He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! <><
John McKown


Re: [GENERAL] pg_dump (PostgreSQL) 9.4.1 - delay in checking if file exists

2015-07-07 Thread pinker
John McKown wrote
> ​What version of PostgreSQL? What OS? What was the command line?

*OS*: Linux OptiPlex-760 3.8.0-19-generic #29-Ubuntu SMP Wed Apr 17 18:16:28
UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
Linux Mint 15 Olivia

*pg_dump*: pg_dump (PostgreSQL) 9.4.1

*command line:*
opt/PostgreSQL/9.4/bin/pg_dump --host localhost --port 000 --username "000"
--no-password  --format plain --no-owner --create --clean --encoding UTF8
--inserts --column-inserts --no-privileges --no-tablespaces --ignore-version
--verbose --no-unlogged-table-data --file "000-$(date +%Y%m%d_%H%M).sql"
--schema "000" "db-000"






Re: [GENERAL] pg_dump (PostgreSQL) 9.4.1 - delay in checking if file exists

2015-07-07 Thread Adrian Klaver

On 07/07/2015 05:23 AM, pinker wrote:

John McKown wrote

​What version of PostgreSQL? What OS? What was the command line?


*OS*: Linux OptiPlex-760 3.8.0-19-generic #29-Ubuntu SMP Wed Apr 17 18:16:28
UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
Linux Mint 15 Olivia

*pg_dump*: pg_dump (PostgreSQL) 9.4.1

*command line:*
opt/PostgreSQL/9.4/bin/pg_dump --host localhost --port 000 --username "000"
--no-password  --format plain --no-owner --create --clean --encoding UTF8
--inserts --column-inserts --no-privileges --no-tablespaces --ignore-version
--verbose --no-unlogged-table-data --file "000-$(date +%Y%m%d_%H%M).sql"
--schema "000" "db-000"


So this was not the actual command you ran as I see no path specification.

Some notes:

1) plain format is the default so it does not need to be specified.

2) Use either --inserts or --column-inserts, not both. FYI, these really slow
down the restore process.


3) --ignore-version is deprecated in 9.4 and is itself ignored.











--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_dump (PostgreSQL) 9.4.1 - delay in checking if file exists

2015-07-07 Thread pinker
Adrian Klaver-4 wrote
> So this was not the actual command you ran as I see no path specification.
> 
> Some notes:
> 
> 1) plain format is the default so it does not need to be specified.
> 
> 2) Use either --inserts or --column-inserts, not both. FYI, these really slow
> down the restore process.
> 
> 3) --ignore-version is deprecated in 9.4 and is itself ignored.


The command was copied from pgAdmin; I've changed only the parameters.
I'm doing the dump from my computer, where I have pg_dump 9.4.1 installed,
from a db which is 8.4.





Re: [GENERAL] database-level lockdown

2015-07-07 Thread Alban Hertroys
On 7 July 2015 at 12:55, Filipe Pina  wrote:
> On Ter, Jul 7, 2015 at 1:41 , Adrian Klaver 
> wrote:
>> Still not sure what it is you are trying to accomplish. Is it really necessary
>> that every transaction be serialized? Or to put it another way, why are you
>> running in serializable by default? Or yet another way, what is the problem
>> you are trying to solve with serialized transactions?
>
> Exactly, that's the twist I've decided after some tests yesterday (the "lock
> all tables on last try" degraded performance in an obscene way): giving up
> on serializable by default.
>
> I wanted to use serializable so developers wouldn't have to worry about
> properly using locks, but implementing this "serialization failure" handler
> is becoming a major headache...

What Adrian was trying to get out of you is why you think you need those locks.

You're working with an RDBMS, it does the job of keeping data
integrity in a multi-user environment already. You can trust it to do
that well.
So unless you're doing something really special, you shouldn't need to
lock anything.
Among the special cases where you do need locks are things like
needing a gapless sequence (invoice numbers come to mind); things like
that.
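
For that gapless-counter case, a minimal sketch (the invoice_counter and
invoice tables and the DSN are invented for the example) of taking the row
lock explicitly:

import psycopg2

# Sketch of a gapless invoice-number allocator; the single-row
# "invoice_counter" table and the DSN are assumptions for the example.
conn = psycopg2.connect("dbname=mydb")
try:
    with conn:                       # commit on success, roll back on error
        with conn.cursor() as cur:
            # FOR UPDATE blocks concurrent allocators until this
            # transaction commits, so numbers are neither skipped
            # nor handed out twice.
            cur.execute("SELECT last_value FROM invoice_counter FOR UPDATE")
            (last_value,) = cur.fetchone()
            cur.execute("UPDATE invoice_counter SET last_value = %s",
                        (last_value + 1,))
            cur.execute("INSERT INTO invoice (number) VALUES (%s)",
                        (last_value + 1,))
finally:
    conn.close()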

So once again, what do you need those locks for?

P.S. Please don't top-post on this list. I had to edit your message to
fix the flow of the conversation.

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.




Re: [GENERAL] database-level lockdown

2015-07-07 Thread Filipe Pina

On Ter, Jul 7, 2015 at 2:31 , Alban Hertroys  wrote:
On 7 July 2015 at 12:55, Filipe Pina  wrote:

 On Ter, Jul 7, 2015 at 1:41 , Adrian Klaver  wrote:

 Still not sure what it is you are trying to accomplish. Is it really
 necessary that every transaction be serialized? Or to put it another
 way, why are you running in serializable by default? Or yet another
 way, what is the problem you are trying to solve with serialized
 transactions?

 Exactly, that's the twist I've decided after some tests yesterday (the
 "lock all tables on last try" degraded performance in an obscene way):
 giving up on serializable by default.

 I wanted to use serializable so developers wouldn't have to worry about
 properly using locks, but implementing this "serialization failure"
 handler is becoming a major headache...

What Adrian was trying to get out of you is why you think you need
those locks.

You're working with an RDBMS, it does the job of keeping data
integrity in a multi-user environment already. You can trust it to do
that well.
So unless you're doing something really special, you shouldn't need to
lock anything.
Among the special cases where you do need locks are things like
needing a gapless sequence (invoice numbers come to mind); things like
that.

So once again, what do you need those locks for?

P.S. Please don't top-post on this list. I had to edit your message to
fix the flow of the conversation.

--
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.


It probably wasn't clear but the conversation did start with exposing 
my issue, I'll try to rephrase.


Serializable keeps data integrity of the database, that's true. But it 
does that by "refusing" transactions that would compromise it (with 
error 40001).


I need to make sure no transactions fail due to such errors, so I made
a handler to retry transactions if they fail due to 40001.
But I can't leave it retrying forever, so I have this (configurable)
limit of 5 retries.
5 retries might be enough in some cases, but it might not be enough in
others.
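
For reference, a minimal sketch of that kind of retry handler (psycopg2
assumed; the function and the hard-coded limit are illustrative, not the
actual application code):

import psycopg2
import psycopg2.errorcodes

MAX_RETRIES = 5  # configurable limit, as described above

def run_serializable(dsn, work):
    """Run work(cursor) in a SERIALIZABLE transaction, retrying on 40001."""
    conn = psycopg2.connect(dsn)
    conn.set_session(isolation_level="SERIALIZABLE")
    try:
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                with conn:                      # commit, or roll back on error
                    with conn.cursor() as cur:
                        return work(cur)
            except psycopg2.Error as exc:
                if (exc.pgcode == psycopg2.errorcodes.SERIALIZATION_FAILURE
                        and attempt < MAX_RETRIES):
                    continue                    # retry the whole transaction
                raise                           # out of retries, or another error
    finally:
        conn.close()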


So how do I make sure the last try would not fail with a serialization
error?
I could only come up with the idea of using LOCK database (on that last
try) but as it is not available in PG, I went for "lock all tables".


Does it make sense now?
I was probably having the wrong approach to the problem anyway, I've 
gone with "read committed" mode now and normal locking (SELECT .. FOR 
UPDATE on the required spots) and it works better..
I just come from a GTM background (and all the other developers in the 
project) and we're used to the TP RESTART concept where, in the final 
retry, GTM locks the database region(s) that were used by that 
transaction (not the whole DB, just the regions).


Thanks and sorry for the top-posting, not really used to mailing lists 
and proper posting styles..


Re: [GENERAL] pg_dump (PostgreSQL) 9.4.1 - delay in checking if file exists

2015-07-07 Thread Adrian Klaver

On 07/07/2015 06:00 AM, pinker wrote:

Adrian Klaver-4 wrote

So this was not the actual command you ran as I see no path specification.

Some notes:

1) plain format is the default so it does not need to be specified.

2) Use either --inserts or --column-inserts, not both. FYI, these really slow
down the restore process.

3) --ignore-version is deprecated in 9.4 and is itself ignored.



The command was copied from pgAdmin; I've changed only the parameters.
I'm doing the dump from my computer, where I have pg_dump 9.4.1 installed,
from a db which is 8.4.


So how did you get the wrong file name then, the pgAdmin backup dialog 
has a file selector?


Or did you cut and paste into the file field?










--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] database-level lockdown

2015-07-07 Thread Adrian Klaver

On 07/07/2015 06:44 AM, Filipe Pina wrote:

On Ter, Jul 7, 2015 at 2:31 , Alban Hertroys  wrote:

On 7 July 2015 at 12:55, Filipe Pina  wrote:

On Ter, Jul 7, 2015 at 1:41 , Adrian Klaver
 wrote:

Still not sure what it is you are trying to accomplish. Is it
really necessary that every transaction be serialized? Or to
put it another way, why are you running in serializable by
default? Or yet another way, what is the problem you are
trying to solve with serialized transactions?

Exactly, that's the twist I've decided after some tests yesterday
(the "lock all tables on last try" degraded performance in an
obscene way): giving up on serializable by default. I wanted to
use serializable so developers wouldn't have to worry about
properly using locks, but implementing this "serialization
failure" handler is becoming a major headache...

What Adrian was trying to get out of you is why you think you need
those locks. You're working with an RDBMS, it does the job of keeping
data integrity in a multi-user environment already. You can trust it
to do that well. So unless you're doing something really special, you
shouldn't need to lock anything. Among the special cases where you do
need locks are things like needing a gapless sequence (invoice numbers
come to mind); things like that. So once again, what do you need those
locks for? P.S. Please don't top-post on this list. I had to edit your
message to fix the flow of the conversation.
--
If you can't see the forest for the trees, Cut the trees and you'll
see there is no forest.


It probably wasn't clear but the conversation did start with exposing my
issue, I'll try to rephrase.

Serializable keeps data integrity of the database, that's true. But it
does that by "refusing" transactions that would compromise it (with
error 40001).

I need to make sure no transactions fail due to such errors, so I made
a handler to retry transactions if they fail due to 40001.
But I can't leave it retrying forever, so I have this (configurable)
limit of 5 retries.
5 retries might be enough in some cases, but it might not be enough in
others.

So how do I make sure the last try would not fail with a serialization error?
I could only come up with the idea of using LOCK database (on that last
try) but as it is not available in PG, I went for "lock all tables".

Does it make sense now?
I was probably having the wrong approach to the problem anyway, I've
gone with "read committed" mode now and normal locking (SELECT .. FOR
UPDATE on the required spots) and it works better..


You also mentioned that this is for a Django application; have you looked 
at its new (since 1.6) transaction management?:

https://docs.djangoproject.com/en/1.8/topics/db/transactions/

It allows you fairly fine-grained control over transactions, including 
access to savepoints.
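
A generic sketch of what that gives you (the model instances and field
names are invented for the example): nesting atomic turns the inner block
into a savepoint that can be rolled back without losing the outer
transaction.

from django.db import transaction, DatabaseError

def transfer(source_account, target_account, amount):
    # Sketch only: the two accounts are assumed Django model instances
    # with a "balance" field; names are invented for the example.
    with transaction.atomic():            # outer block: one transaction
        source_account.balance -= amount
        source_account.save()

        try:
            with transaction.atomic():    # nested block: a savepoint
                target_account.balance += amount
                target_account.save()
        except DatabaseError:
            # Only the inner savepoint is rolled back; the outer
            # transaction is still open, so we can compensate here
            # and commit the rest.
            source_account.balance += amount
            source_account.save()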




I just come from a GTM background (and all the other developers in the
project) and we're used to the TP RESTART concept where, in the final
retry, GTM locks the database region(s) that were used by that
transaction (not the whole DB, just the regions).

Thanks and sorry for the top-posting, not really used to mailing lists
and proper posting styles..



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_dump (PostgreSQL) 9.4.1 - delay in checking if file exists

2015-07-07 Thread pinker
Adrian Klaver-4 wrote
> So how did you get the wrong file name then, the pgAdmin backup dialog 
> has a file selector?
> 
> Or did you cut and paste into the file field?

I've cut, pasted and changed parameters.







Re: [GENERAL] pg_dump (PostgreSQL) 9.4.1 - delay in checking if file exists

2015-07-07 Thread pinker
Adrian Klaver-4 wrote
> So how did you get the wrong file name then, the pgAdmin backup dialog 
> has a file selector?
> 
> Or did you cut and paste into the file field?

Anyway, shouldn't postgresql first check if the dump can be saved, and then
start doing it?





Re: [GENERAL] pg_dump (PostgreSQL) 9.4.1 - delay in checking if file exists

2015-07-07 Thread Tom Lane
John McKown  writes:
> On Tue, Jul 7, 2015 at 4:26 AM, pinker  wrote:
>> I made a mistake in the filename in a pg_dump command, i.e. I used a path from
>> another server which does not exist on this one. Instead of first checking
>> permissions / existence of the output file, pg_dump dumped the whole database
>> and only at the end (after some time ...) threw an error:
>> 
>> (...)
>> pg_dump: saving database definition
>> pg_dump: [archiver] could not open output file
>> "/home/.../dum-_20150707_1059.sql": No such file or directory
>> 
>> Is this correct behavior? Why waste so much time and resources by leaving
>> the output file check to the last moment?

> ​What version of PostgreSQL? What OS? What was the command line? On Linux
> x86_64, Fedora 22​, PostgreSQL version 9.4.4, I did:
> pg_dump -f /junk/x tsh009
> and, almost immediately, got back:
> pg_dump: [archiver] could not open output file "/junk/x": No such file or
> directory

There is a noticeable delay if you dump a database with a significant
number of objects in it, eg with the regression test database I get:

$ time pg_dump -f /z/q regression
pg_dump: [archiver] could not open output file "/z/q": No such file or directory

real    0m1.164s
user    0m0.062s
sys     0m0.034s

However, I don't see that with any of the non-plain-text output formats:

$ time pg_dump -f /z/q -Fc regression
pg_dump: [custom archiver] could not open output file "/z/q": No such file or 
directory

real    0m0.005s
user    0m0.004s
sys     0m0.001s

The reason the non-plain-text output formats fail promptly is that they
try to open the output file in InitArchiveFmt_Custom and siblings.
But plain-text format does not, leaving it until RestoreArchive; which is
code that is shared with pg_restore.  I'm not sure how painful that would
be to fix without breaking pg_restore, but almost certainly it would
involve duplicating the file open/close logic into two different places.
Dunno that this problem is significant enough to justify such effort.
You would need an awful lot of objects (not data, but DDL objects) for
the delay to be major.

regards, tom lane
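
A minimal workaround sketch for the plain-format case: pre-create the output
file so a bad path fails immediately, before the dump starts. The path and
database name below are illustrative only.

import subprocess
import sys

# Pre-flight check before a long plain-format dump: try to create the
# output file first so a bad path or missing directory fails immediately.
# The path and database name are invented for the example.
outfile = "/home/backups/dump_20150707.sql"
dbname = "mydb"

try:
    open(outfile, "a").close()   # fails right away on a bad path
except (IOError, OSError) as e:
    sys.exit("cannot write dump file: %s" % e)

subprocess.check_call(["pg_dump", "-f", outfile, dbname])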




Re: [GENERAL] database-level lockdown

2015-07-07 Thread Filipe Pina


On Ter, Jul 7, 2015 at 2:51 , Adrian Klaver  wrote:

On 07/07/2015 06:44 AM, Filipe Pina wrote:
 On Ter, Jul 7, 2015 at 2:31 , Alban Hertroys  wrote:
 On 7 July 2015 at 12:55, Filipe Pina  wrote:

 On Ter, Jul 7, 2015 at 1:41 , Adrian Klaver  wrote:

 Still not sure what it is you are trying to accomplish. Is it
 really necessary that every transaction be serialized? Or to
 put it another way, why are you running in serializable by
 default? Or yet another way, what is the problem you are
 trying to solve with serialized transactions?

 Exactly, that's the twist I've decided after some tests yesterday
 (the "lock all tables on last try" degraded performance in an
 obscene way): giving up on serializable by default. I wanted to
 use serializable so developers wouldn't have to worry about
 properly using locks, but implementing this "serialization
 failure" handler is becoming a major headache...

 What Adrian was trying to get out of you is why you think you need
 those locks. You're working with an RDBMS, it does the job of keeping
 data integrity in a multi-user environment already. You can trust it
 to do that well. So unless you're doing something really special, you
 shouldn't need to lock anything. Among the special cases where you do
 need locks are things like needing a gapless sequence (invoice numbers
 come to mind); things like that. So once again, what do you need those
 locks for? P.S. Please don't top-post on this list. I had to edit your
 message to fix the flow of the conversation.
 --
 If you can't see the forest for the trees, Cut the trees and you'll
 see there is no forest.

 It probably wasn't clear but the conversation did start with exposing
 my issue, I'll try to rephrase.

 Serializable keeps data integrity of the database, that's true. But it
 does that by "refusing" transactions that would compromise it (with
 error 40001).

 I need to make sure no transactions fail due to such errors, so I made
 a handler to retry transactions if they fail due to 40001.
 But I can't leave it retrying forever, so I have this (configurable)
 limit of 5 retries.
 5 retries might be enough in some cases, but it might not be enough in
 others.

 So how do I make sure the last try would not fail with a serialization
 error?
 I could only come up with the idea of using LOCK database (on that
 last try) but as it is not available in PG, I went for "lock all
 tables".

 Does it make sense now?
 I was probably having the wrong approach to the problem anyway, I've
 gone with "read committed" mode now and normal locking (SELECT .. FOR
 UPDATE on the required spots) and it works better..


You also mentioned that this is for a Django application; have you
looked at its new (since 1.6) transaction management?:

https://docs.djangoproject.com/en/1.8/topics/db/transactions/

It allows you fairly fine-grained control over transactions,
including access to savepoints.


 I just come from a GTM background (and all the other developers in
 the project) and we're used to the TP RESTART concept where, in the
 final retry, GTM locks the database region(s) that were used by that
 transaction (not the whole DB, just the regions).

 Thanks and sorry for the top-posting, not really used to mailing
 lists and proper posting styles..



--
Adrian Klaver
adrian.kla...@aklaver.com


Yes, I was using transaction.atomic in some django-only operations, but 
the problem is that some django operations call PG functions that call 
other PG functions, and the purpose was to have the django code 
"agnostic" of what the functions do..


But we've switched to "read committed" plus proper "FOR UPDATE" 
statements in the few functions we have at the moment, and the 
performance is actually quite a bit better (at the small extra cost of 
people being careful with locks).


Re: [GENERAL] pg_dump (PostgreSQL) 9.4.1 - delay in checking if file exists

2015-07-07 Thread Adrian Klaver

On 07/07/2015 06:55 AM, pinker wrote:

Adrian Klaver-4 wrote

So how did you get the wrong file name then, the pgAdmin backup dialog
has a file selector?

Or did you cut and paste into the file field?


I've cut, pasted and changed parameters.


I was talking about when you ran the backup from pgAdmin. The backup 
dialog has a file selector button to select the path/file you want to 
back up to. So I was wondering how you got an incorrect path in the 
first place? Then it occurred to me you might not have used the 
selector, but directly pasted the path into the file field; is that the 
case?


Another thought just occurred to me: the delay is pgAdmin trying to 
verify the path/file you entered. I say this because of this line:


pg_dump: saving database definition

from your original post. I do not remember ever seeing that when using 
pg_dump directly.


Have you tried doing the backup directly from the command line?












--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_dump (PostgreSQL) 9.4.1 - delay in checking if file exists

2015-07-07 Thread Adrian Klaver

On 07/07/2015 07:12 AM, pinker wrote:

Adrian Klaver-4 wrote

So how did you get the wrong file name then, the pgAdmin backup dialog
has a file selector?

Or did you cut and paste into the file field?


Anyway, shouldn't postgresql first check if the dump can be saved, and then
start doing it?


pgAdmin != Postgres. You are accessing pg_dump via another program, 
and so I am trying to help you figure out which one is causing the issue.


So:
1) How did you provide the wrong path to pgAdmin?

2) What happens if you run the command from the command line, in other 
words directly and not through pgAdmin?











--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_dump (PostgreSQL) 9.4.1 - delay in checking if file exists

2015-07-07 Thread Adrian Klaver

On 07/07/2015 07:12 AM, pinker wrote:

Adrian Klaver-4 wrote

So how did you get the wrong file name then, the pgAdmin backup dialog
has a file selector?

Or did you cut and paste into the file field?


Anyway, shouldn't postgresql first check if the dump can be saved, and then
start doing it?


Ignore my comment about

'saving database definition'

Looking at the source, that is from pg_dump and appears when you run with 
--verbose, which I do not do. That explains why I am not seeing it.












--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_dump (PostgreSQL) 9.4.1 - delay in checking if file exists

2015-07-07 Thread pinker
Adrian Klaver-4 wrote
> I was talking about when you ran the backup from pgAdmin. The backup 
> dialog has a file selector button to select the path/file you want to 
> back up to. So I was wondering how you got an incorrect path in the 
> first place? Then it occurred to me you might not have used the 
> selector, but directly pasted the path into the file field, is that the 
> case?
> 
> Have you tried doing the backup directly from the command line?
> 

I'm doing it directly from the command line; I just took the command with
options from the pgAdmin window (because I don't like to read the whole
documentation page with pg_dump options), but I always run it from bash,
changing only the db name, file path, port etc. (in the bash console)






Re: [GENERAL] pg_dump (PostgreSQL) 9.4.1 - delay in checking if file exists

2015-07-07 Thread Adrian Klaver

On 07/07/2015 08:15 AM, pinker wrote:

Adrian Klaver-4 wrote

I was talking about when you ran the backup from pgAdmin. The backup
dialog has a file selector button to select the path/file you want to
back up to. So I was wondering how you got an incorrect path in the
first place? Then it occurred to me you might not have used the
selector, but directly pasted the path into the file field, is that the
case?

Have you tried doing the backup directly from the command line?



I'm doing it directly from the command line; I just took the command with
options from the pgAdmin window (because I don't like to read the whole
documentation page with pg_dump options),


That is sort of dangerous :) As an example, do you really want --inserts 
or --column-inserts? They really slow down a restore.



but always running it from bash, changing only
db-name, filepath, port etc. (in bash console)


Then see Tom Lane's explanation.











--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] 9.4 on Ubuntu 15.04: ENETUNREACH error?

2015-07-07 Thread Chas. Munat
I recently upgraded to Ubuntu 15.04 with PostgreSQL 9.4.4 with uuids and 
plv8 coffeescript extensions. I can access the database via psql. My web 
application uses node.js via pg-bricks. It works perfectly on my Mac dev 
machine.


On the server, I get this error:

preload failed { [Error: connect ENETUNREACH 127.0.0.0:5432 - Local 
(0.0.0.0:0)]

  code: 'ENETUNREACH',
  errno: 'ENETUNREACH',
  syscall: 'connect',
  address: '127.0.0.0',
  port: 5432 }

(Preload is a script in my app that does multiple queries via pg-bricks.)

I have turned off ufw, set pg_hba.conf to TRUST, done everything I can 
think of -- no joy.


Any ideas why this might be happening?

--
νz


