Re: [GENERAL] Why doesn't `RAISE EXCEPTION` provide error context?

2015-04-02 Thread David G. Johnston
Adding "raw" content present on Nabble that gets filtered by the mailing
list.

On Wednesday, April 1, 2015, Taytay  wrote:

> We make heavy use of `GET STACKED DIAGNOSTICS` to determine where errors
> happened.
> However, I am trying to use RAISE EXCEPTION to report errors, and have
> discovered that RAISE is specifically prevented from adding to the error
> context:



static void
plpgsql_exec_error_callback(void *arg)
{
PLpgSQL_execstate *estate = (PLpgSQL_execstate *) arg;

/* if we are doing RAISE, don't report its location */
if (estate->err_text == raise_skip_msg)
return;



>
> So that means that this doesn't work:
>
>
>
 RAISE EXCEPTION 'This exception will not get a stack trace';

EXCEPTION WHEN others THEN

-- If the exception we're catching is one that Postgres threw,
-- like a divide by zero error, then this will get the full
-- stack trace of the place where the exception was thrown.
-- However, since we are catching an exception we raised manually
-- using RAISE EXCEPTION, there is no context/stack trace!
GET STACKED DIAGNOSTICS v_error_stack = PG_EXCEPTION_CONTEXT;


> I've posted more details here:
>
> http://dba.stackexchange.com/questions/96743/postgres-how-to-get-stack-trace-for-a-manually-raised-exception
>
> That context would be awfully useful for us, even for manually generated
> exceptions.
> Can anyone shed some light on A) why this is, and B) if it's still desired,
> and C) if so, are there any workarounds? Is there an exception I can trick
> Postgres into throwing that will include my user-generated string?
>
> Many thanks for any help you can offer.
>
>
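
For readers following along, a self-contained sketch of the behavior being
described (PostgreSQL 9.2+ for GET STACKED DIAGNOSTICS; that the context
comes back empty for RAISE is the thread's claim, not something verified
here):

DO $$
DECLARE
    v_error_stack text;
BEGIN
    -- An engine-thrown error: the handler sees a full context.
    PERFORM 1 / 0;
EXCEPTION WHEN others THEN
    GET STACKED DIAGNOSTICS v_error_stack = PG_EXCEPTION_CONTEXT;
    RAISE NOTICE 'division error context: %', v_error_stack;
END $$;

DO $$
DECLARE
    v_error_stack text;
BEGIN
    RAISE EXCEPTION 'This exception will not get a stack trace';
EXCEPTION WHEN others THEN
    -- Per the thread, v_error_stack comes back empty here.
    GET STACKED DIAGNOSTICS v_error_stack = PG_EXCEPTION_CONTEXT;
    RAISE NOTICE 'raise error context: %', v_error_stack;
END $$;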


Re: [GENERAL] Why doesn't `RAISE EXCEPTION` provide error context?

2015-04-02 Thread Pavel Stehule
2015-04-02 9:13 GMT+02:00 David G. Johnston :

> Adding "raw" content present on Nabble that gets filtered by the mailing
> list.
>
> On Wednesday, April 1, 2015, Taytay  wrote:
>
>> We make heavy use of `GET STACKED DIAGNOSTICS` to determine where errors
>> happened.
>> However, I am trying to use RAISE EXCEPTION to report errors, and have
>> discovered that RAISE is specifically prevented from adding to the error
>> context:
>
>
> 
> static void
> plpgsql_exec_error_callback(void *arg)
> {
> PLpgSQL_execstate *estate = (PLpgSQL_execstate *) arg;
>
> /* if we are doing RAISE, don't report its location */
> if (estate->err_text == raise_skip_msg)
> return;
> 
>

We have talked several times about changing this behavior - the discussion
ended without a conclusion

http://www.postgresql.org/message-id/cafj8prdfhqv7ckxp-k+ubzw6627hytt7bct4zxdbselxc1r...@mail.gmail.com

I think the current design is too simple and should be changed, and if
there is agreement I can add a patch to the next commitfest.

Regards

Pavel


>
>
>>
>> So that means that this doesn't work:
>>
>>
>>
>  RAISE EXCEPTION 'This exception will not get a stack trace';
>
> EXCEPTION WHEN others THEN
>
> -- If the exception we're catching is one that Postgres threw,
> -- like a divide by zero error, then this will get the full
> -- stack trace of the place where the exception was thrown.
> -- However, since we are catching an exception we raised manually
> -- using RAISE EXCEPTION, there is no context/stack trace!
> GET STACKED DIAGNOSTICS v_error_stack = PG_EXCEPTION_CONTEXT;
>
>
>> I've posted more details here:
>>
>> http://dba.stackexchange.com/questions/96743/postgres-how-to-get-stack-trace-for-a-manually-raised-exception
>>
>> That context would be awfully useful for us, even for manually generated
>> exceptions.
>> Can anyone shed some light on A) why this is, and B) if it's still
>> desired,
>> and C) if so, are there any workarounds? Is there an exception I can trick
>> Postgres into throwing that will include my user-generated string?
>>
>> Many thanks for any help you can offer.
>>
>>
>
>


Re: [GENERAL] Why doesn't `RAISE EXCEPTION` provide error context?

2015-04-02 Thread David G. Johnston
On Thursday, April 2, 2015, Pavel Stehule  wrote:

>
>
> 2015-04-02 9:13 GMT+02:00 David G. Johnston :
>
>> Adding "raw" content present on Nabble that gets filtered by the mailing
>> list.
>>
>> On Wednesday, April 1, 2015, Taytay  wrote:
>>> We make heavy use of `GET STACKED DIAGNOSTICS` to determine where errors
>>> happened.
>>> However, I am trying to use RAISE EXCEPTION to report errors, and have
>>> discovered that RAISE is specifically prevented from adding to the error
>>> context:
>>
>>
>> 
>> static void
>> plpgsql_exec_error_callback(void *arg)
>> {
>> PLpgSQL_execstate *estate = (PLpgSQL_execstate *) arg;
>>
>> /* if we are doing RAISE, don't report its location */
>> if (estate->err_text == raise_skip_msg)
>> return;
>> 
>>
>
> We have talked several times about changing this behavior - the discussion
> ended without a conclusion
>
>
> http://www.postgresql.org/message-id/cafj8prdfhqv7ckxp-k+ubzw6627hytt7bct4zxdbselxc1r...@mail.gmail.com
>
> I think the current design is too simple and should be changed, and if
> there is agreement I can add a patch to the next commitfest.
>
> Regards
>
> Pavel
>
>
>>
>>
>>>
>>> So that means that this doesn't work:
>>>
>>>
>>>
>>  RAISE EXCEPTION 'This exception will not get a stack trace';
>>
>> EXCEPTION WHEN others THEN
>>
>> -- If the exception we're catching is one that Postgres threw,
>> -- like a divide by zero error, then this will get the full
>> -- stack trace of the place where the exception was thrown.
>> -- However, since we are catching an exception we raised manually
>> -- using RAISE EXCEPTION, there is no context/stack trace!
>> GET STACKED DIAGNOSTICS v_error_stack = PG_EXCEPTION_CONTEXT;
>>
>>
>>> I've posted more details here:
>>>
>>> http://dba.stackexchange.com/questions/96743/postgres-how-to-get-stack-trace-for-a-manually-raised-exception
>>>
>>> That context would be awfully useful for us, even for manually generated
>>> exceptions.
>>> Can anyone shed some light on A) why this is, and B) if it's still
>>> desired,
>>> and C) if so, are there any workarounds? Is there an exception I can
>>> trick
>>> Postgres into throwing that will include my user-generated string?
>>>
>>> Many thanks for any help you can offer.
>>>
>>>
>>
>>
>
>
The OP on this thread has introduced a potential compromise.  Keep the
current printing behavior for RAISE, but construct the error itself to
contain all of the relevant detail so that the caller can get to the
suppressed information via, in this instance, GET STACKED DIAGNOSTICS
inside an exception handler - a situation where the error is in context
but has not yet been printed.

Giving the function author the ability, via a new USING clause, to bypass
the printing short-circuit is another feature to consider.

I haven't fully comprehended the design decisions and trade-offs, but
omitting data to facilitate printing doesn't sound like an appropriate
solution - not that I have any clue how hard it would be to separate the
two aspects.  Likewise with adding in a short-circuit that is judgmental
without providing some means to bypass it.  We are not talking about data
integrity or performance here, and while I'll admit reducing verbosity is
a valid goal, misuse of a provided work-around mechanism is not that
serious a transgression, and one readily noticed and somewhat readily
corrected.

David J.


Re: [GENERAL] Why doesn't `RAISE EXCEPTION` provide error context?

2015-04-02 Thread Pavel Stehule
The OP on this thread has introduced a potential compromise.  Keep the
> current printing behavior for RAISE, but construct the error itself to
> contain all of the relevant detail so that the caller can get to the
> suppressed information via, in this instance, GET STACKED DIAGNOSTICS
> inside an exception handler - a situation where the error is in context
> but has not yet been printed.
>
> Giving the function author the ability, via a new USING clause, to bypass
> the printing short-circuit is another feature to consider.
>
> I haven't fully comprehended the design decisions and trade-offs, but
> omitting data to facilitate printing doesn't sound like an appropriate
> solution - not that I have any clue how hard it would be to separate the
> two aspects.  Likewise with adding in a short-circuit that is judgmental
> without providing some means to bypass it.  We are not talking about data
> integrity or performance here, and while I'll admit reducing verbosity is
> a valid goal, misuse of a provided work-around mechanism is not that
> serious a transgression, and one readily noticed and somewhat readily
> corrected.
>

There are more aspects - the current behavior is a too-simple fix, but it
works most of the time. We can enhance the RAISE statement, but the default
behavior should be practical. Usually we don't need a stack for the NOTICE
level (and maybe not for the WARNING level), and usually we do want a stack
for the EXCEPTION level. Now there is a workaround via GET STACKED
DIAGNOSTICS, but it is a workaround - the enhancement of GET DIAGNOSTICS
was not designed for this purpose. Sure, there are more variants of a fix -
and we can implement more than one.

Regards

Pavel


>
> David J.
>


[GENERAL] implicit CAST on CSV COPY FROM

2015-04-02 Thread Geoff Winkless
Hi

I have a set of CSV data that I'm importing containing dates stored as INT
values (eg 20150402). The value 0 represents a null date in this format.

I've created a function and cast that (ab)uses the system text::date cast:

CREATE FUNCTION to_date(integer) RETURNS date AS $$SELECT CASE WHEN $1<=0
THEN NULL ELSE CONCAT(($1/10000)::text, '-', (($1/100)%100)::text, '-',
($1%100)::text)::date END$$ LANGUAGE SQL;

CREATE CAST (integer AS date) WITH FUNCTION to_date(integer) AS IMPLICIT;
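
For illustration, a quick sanity check of the cast (this just exercises the
function and cast defined above):

SELECT 20150402::date;  -- 2015-04-02, via the custom integer -> date cast
SELECT 0::date;         -- NULL, because the function maps values <= 0 to NULL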

This works fine if I

CREATE TABLE mytable (dt date NULL);

INSERT INTO mytable (dt) VALUES (0);
mydb=# INSERT INTO mytable (dt) VALUES (0);
INSERT 0 1
Time: 1.562 ms

but if I use

COPY mytable (dt) FROM STDIN WITH (FORMAT csv, HEADER false);
0
\.

it fails.

ERROR:  date/time field value out of range: "0"
HINT:  Perhaps you need a different "datestyle" setting.
CONTEXT:  COPY mytable, line 1, column dt: "0"

Is there a trick I can use to get COPY FROM to use my cast? Is it somehow
treating all the CSV values as strings and then trying to cast to the
target type? I tried creating similar CASTs for text, char and varchar to
call my to_date function but none of them made a difference.

Thanks!

Geoff


[GENERAL] Relation name stored in Postgres

2015-04-02 Thread Ravi Kiran
Hi,

I want to know how the relation name is stored in Postgres. In which part
of the Postgres source code could I find the relation name being stored?

Thank you

-- 
Regards,

K.Ravikiran



Re: [GENERAL] implicit CAST on CSV COPY FROM

2015-04-02 Thread Adrian Klaver

On 04/02/2015 04:07 AM, Geoff Winkless wrote:

Hi

I have a set of CSV data that I'm importing containing dates stored as
INT values (eg 20150402). The value 0 represents a null date in this format.

I've created a function and cast that (ab)uses the system text::date cast:

CREATE FUNCTION to_date(integer) RETURNS date AS $$SELECT CASE WHEN
$1<=0 THEN NULL ELSE CONCAT(($1/10000)::text, '-', (($1/100)%100)::text,
'-', ($1%100)::text)::date END$$ LANGUAGE SQL;

CREATE CAST (integer AS date) WITH FUNCTION to_date(integer) AS IMPLICIT;

This works fine if I

CREATE TABLE mytable (dt date NULL);

INSERT INTO mytable (dt) VALUES (0);
mydb=# INSERT INTO mytable (dt) VALUES (0);
INSERT 0 1
Time: 1.562 ms

but if I use

COPY mytable (dt) FROM STDIN WITH (FORMAT csv, HEADER false);
0
\.

it fails.

ERROR:  date/time field value out of range: "0"
HINT:  Perhaps you need a different "datestyle" setting.
CONTEXT:  COPY mytable, line 1, column dt: "0"

Is there a trick I can use to get COPY FROM to use my cast? Is it
somehow treating all the CSV values as strings and then trying to cast
to the target type? I tried creating similar CASTs for text, char and
varchar to call my to_date function but none of them made a difference.


FYI Postgres will handle 20150402 without any changes:

test=> select '20150402'::date;
    date
------------
 2015-04-02


Postgres is applying the valid date test before doing any casting and 0 
is not a valid date:


test=> select '0'::date;
ERROR:  date/time field value out of range: "0"
LINE 1: select '0'::date;
               ^
HINT:  Perhaps you need a different "datestyle" setting.

Your options:

1) Use a combination of NULL, FORCE_NOT_NULL, FORCE_NULL to get the 0 to 
be null, though I could see that getting messy.


2) Pre-process the data to turn 0 into NULL.

3) With Postgres 9.3+ run the COPY FROM through a PROGRAM to do 1
http://www.postgresql.org/docs/9.4/interactive/sql-copy.html

4) COPY FROM to a holding table that has a varchar field for the date 
data and then do a conversion and transfer to the final table.
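
A minimal sketch of option 4, using an integer holding column so that the
to_date(integer) function above can do the conversion (the staging table
name is made up for the example):

-- Hypothetical holding table; a plain integer column accepts the 0 values.
CREATE TABLE mytable_staging (dt_raw integer);

-- CSV rows follow on stdin, terminated by \.
COPY mytable_staging (dt_raw) FROM STDIN WITH (FORMAT csv, HEADER false);

-- Transfer, converting 0 to NULL via the to_date(integer) function.
INSERT INTO mytable (dt)
SELECT to_date(dt_raw) FROM mytable_staging;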





Thanks!

Geoff



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Relation name stored in Postgres

2015-04-02 Thread Pavel Stehule
Hi

it is in the system catalog - table pg_class, column relname
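
For example, a quick look at a catalog entry (the table name here is
hypothetical):

SELECT oid, relname, relkind
FROM pg_class
WHERE relname = 'mytable';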

Regards

Pavel Stehule

2015-04-02 15:52 GMT+02:00 Ravi Kiran :

> Hi,
>
> I want to know how the relation name is stored in Postgres. In which part
> of the Postgres source code could I find the relation name being stored?
>
> Thank you
>
> --
> Regards,
>
> K.Ravikiran
>
>


Re: [GENERAL] Relation name stored in Postgres

2015-04-02 Thread Ravi Kiran
Hi,
Thank you Sir.
Also, could you tell me during which stage (parser, optimizer, or
executor) the table name gets stored, and if possible could you tell me
which program specifically does that?

Thank you.

On Thu, Apr 2, 2015 at 7:32 PM, Pavel Stehule 
wrote:

> Hi
>
> it is in the system catalog - table pg_class, column relname
>
> Regards
>
> Pavel Stehule
>
>
> 2015-04-02 15:52 GMT+02:00 Ravi Kiran :
>
>> Hi,
>>
>> I want to know how the relation name is stored in Postgres. In which part
>> of the Postgres source code could I find the relation name being stored?
>>
>> Thank you
>>
>> --
>> Regards,
>>
>> K.Ravikiran
>>
>>
>


-- 
Regards,

K.Ravikiran


Re: [GENERAL] Relation name stored in Postgres

2015-04-02 Thread Ravi Kiran
Hi,

Also, could you tell me during which stage (parser, optimizer, or
executor) the table name gets stored, and if possible could you tell me
which program specifically does that?



On Thu, Apr 2, 2015 at 7:56 PM, Ravi Kiran  wrote:

> Hi,
> Thank you Sir.
> Also, could you tell me during which stage (parser, optimizer, or
> executor) the table name gets stored, and if possible could you tell me
> which program specifically does that?
>
> Thank you.
>
> On Thu, Apr 2, 2015 at 7:32 PM, Pavel Stehule 
> wrote:
>
>> Hi
>>
>> it is in the system catalog - table pg_class, column relname
>>
>> Regards
>>
>> Pavel Stehule
>>
>>
>> 2015-04-02 15:52 GMT+02:00 Ravi Kiran :
>>
>>> Hi,
>>>
>>> I want to know how the relation name is stored in Postgres. In which
>>> part of the Postgres source code could I find the relation name being
>>> stored?
>>>
>>> Thank you
>>>
>>> --
>>> Regards,
>>>
>>> K.Ravikiran
>>>
>>>
>>
>
>
> --
> Regards,
>
> K.Ravikiran
>
>


-- 
Regards,

K.Ravikiran


Re: [GENERAL] Relation name stored in Postgres

2015-04-02 Thread Pavel Stehule
2015-04-02 16:26 GMT+02:00 Ravi Kiran :

> Hi,
> Thank you Sir.
> Also, could you tell me during which stage (parser, optimizer, or
> executor) the table name gets stored, and if possible could you tell me
> which program specifically does that?
>

Usually the parser, optimizer, and command execution work with data from
the pg_class table. I don't understand the second question. If you need
some pointers into the PostgreSQL source code, then the magic word is
"relname".

Regards

Pavel




>
> Thank you.
>
> On Thu, Apr 2, 2015 at 7:32 PM, Pavel Stehule 
> wrote:
>
>> Hi
>>
>> it is in the system catalog - table pg_class, column relname
>>
>> Regards
>>
>> Pavel Stehule
>>
>>
>> 2015-04-02 15:52 GMT+02:00 Ravi Kiran :
>>
>>> Hi,
>>>
>>> I want to know how the relation name is stored in Postgres. In which
>>> part of the Postgres source code could I find the relation name being
>>> stored?
>>>
>>> Thank you
>>>
>>> --
>>> Regards,
>>>
>>> K.Ravikiran
>>>
>>>
>>
>
>
> --
> Regards,
>
> K.Ravikiran
>
>


Re: [GENERAL] How to recover or resent the password for the user 'postgres'

2015-04-02 Thread Arup Rakshit
On Monday, March 30, 2015 06:27:19 AM Adrian Klaver wrote:
> On 03/30/2015 01:09 AM, Arup Rakshit wrote:
> > Hi,
> >
> > I am trying to follow what has been mentioned below **Setting Up Postgres** 
> > (https://www.digitalocean.com/community/tutorials/how-to-setup-ruby-on-rails-with-postgres).
> >
> > But no luck!
> >
> > [shreyas@myapp(add_postgres_addapter)]$ su - postgres
> > Password:
> > su: Sorry
> > [shreyas@myapp (add_postgres_addapter)]$
> >
> > It is asking me the password, whereas I don't know the password of the user 
> > `postgres`. How would I create a new role with a password in this case ?
> 
> How did you install Postgres?

I didn't install it. I joined a company as an application developer, and the
machine I was assigned had PostgreSQL already installed. I changed the
authentication method in pg_hba.conf to *trust*, and it worked.

Thanks for the help to all of you.

-- 

Regards,
Arup Rakshit

Debugging is twice as hard as writing the code in the first place. Therefore, 
if you write the code as cleverly as possible, you are, by definition, not 
smart enough to debug it.

--Brian Kernighan




Re: [GENERAL] Relation name stored in Postgres

2015-04-02 Thread Melvin Davidson
The table name is stored in pg_class when you execute the CREATE TABLE
statement. The PostgreSQL main program, "postmaster", handles all the work.

It appears to me your concept of how PostgreSQL works is very distorted.
Perhaps you would best be served by purchasing and reading "Beginning
Databases With PostgreSQL".

http://www.amazon.com/Beginning-Databases-PostgreSQL-Novice-Professional/dp/1590594789/ref=sr_1_2?s=books&ie=UTF8&qid=1427987843&sr=1-2&keywords=postgresql


On Thu, Apr 2, 2015 at 10:43 AM, Pavel Stehule 
wrote:

>
>
> 2015-04-02 16:26 GMT+02:00 Ravi Kiran :
>
>> Hi,
>> Thank you Sir.
>> Also, could you tell me during which stage (parser, optimizer, or
>> executor) the table name gets stored, and if possible could you tell me
>> which program specifically does that?
>>
>
> Usually the parser, optimizer, and command execution work with data from
> the pg_class table. I don't understand the second question. If you need
> some pointers into the PostgreSQL source code, then the magic word is
> "relname".
>
> Regards
>
> Pavel
>
>
>
>
>>
>> Thank you.
>>
>> On Thu, Apr 2, 2015 at 7:32 PM, Pavel Stehule 
>> wrote:
>>
>>> Hi
>>>
>>> it is in the system catalog - table pg_class, column relname
>>>
>>> Regards
>>>
>>> Pavel Stehule
>>>
>>>
>>> 2015-04-02 15:52 GMT+02:00 Ravi Kiran :
>>>
 Hi,

 I want to know how the relation name is stored in Postgres. In which
 part of the Postgres source code could I find the relation name being
 stored?

 Thank you

 --
 Regards,

 K.Ravikiran


>>>
>>
>>
>> --
>> Regards,
>>
>> K.Ravikiran
>>
>>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Why doesn't `RAISE EXCEPTION` provide error context?

2015-04-02 Thread Taytay
There appears to be a fair amount of nuance here, but I am _very_ impressed
with how quickly you have responded. Thank you for your quick attention to
this issue! (Yet another thing that makes me happy to be using Postgres).

We have a fair amount of business logic in Postgres functions, and the
ability to throw and catch exceptions, and pinpoint their source location
in our logs, will be _very_ helpful for us. I look forward to it.

This is perhaps a separate discussion, but as I await this patch, I'm
looking for a way to "trick" Postgres into throwing an exception that will
include a string of my choosing - something like "SELECT 'This is an
exception' / 0".
That would give me a "real" exception that is not subject to this
filtering, ideally with an error message like "Divide by zero: 'This is an
exception' / 0".
I'm totally making that up of course, but you see what I'm getting at. Does
someone with better Postgres/SQL knowledge know of such an exception?

Thanks again,

Taylor

On Thu, Apr 2, 2015 at 2:07 AM, Pavel Stehule [via PostgreSQL] <
ml-node+s1045698n5844393...@n5.nabble.com> wrote:

>
>
>
> The OP on this thread has introduced a potential compromise.  Keep the
>> current printing behavior for RAISE, but construct the error itself to
>> contain all of the relevant detail so that the caller can get to the
>> suppressed information via, in this instance, GET STACKED DIAGNOSTICS
>> inside an exception handler - a situation where the error is in context
>> but has not yet been printed.
>>
>> Giving the function author the ability, via a new USING clause, to bypass
>> the printing short-circuit is another feature to consider.
>>
>> I haven't fully comprehended the design decisions and trade-offs, but
>> omitting data to facilitate printing doesn't sound like an appropriate
>> solution - not that I have any clue how hard it would be to separate the
>> two aspects.  Likewise with adding in a short-circuit that is judgmental
>> without providing some means to bypass it.  We are not talking about data
>> integrity or performance here, and while I'll admit reducing verbosity is
>> a valid goal, misuse of a provided work-around mechanism is not that
>> serious a transgression, and one readily noticed and somewhat readily
>> corrected.
>>
>
> There are more aspects - the current behavior is a too-simple fix, but it
> works most of the time. We can enhance the RAISE statement, but the default
> behavior should be practical. Usually we don't need a stack for the NOTICE
> level (and maybe not for the WARNING level), and usually we do want a stack
> for the EXCEPTION level. Now there is a workaround via GET STACKED
> DIAGNOSTICS, but it is a workaround - the enhancement of GET DIAGNOSTICS
> was not designed for this purpose. Sure, there are more variants of a fix -
> and we can implement more than one.
>
> Regards
>
> Pavel
>
>
>>
>> David J.
>>
>
>
>
>





Re: [GENERAL] Why doesn't `RAISE EXCEPTION` provide error context?

2015-04-02 Thread Melvin Davidson
I believe the ability to trap error codes and raise the appropriate
message is already available in PL/pgSQL. Please see the two sections below.

http://www.postgresql.org/docs/9.4/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

http://www.postgresql.org/docs/9.4/interactive/errcodes-appendix.html

On Thu, Apr 2, 2015 at 12:01 PM, Taytay  wrote:

> There appears to be a fair amount of nuance here, but I am _very_
> impressed with how quickly you have responded. Thank you for your quick
> attention to this issue! (Yet another thing that makes me happy to be using
> Postgres).
>
> We have a fair amount of business logic in Postgres functions, and the
> ability to throw and catch exceptions, and pinpoint their source location
> in our logs, will be _very_ helpful for us. I look forward to it.
>
> This is perhaps a separate discussion, but as I await this patch, I'm
> looking for a way to "trick" Postgres into throwing an exception that will
> include a string of my choosing - something like "SELECT 'This is an
> exception' / 0".
> That would give me a "real" exception that is not subject to this
> filtering, ideally with an error message like "Divide by zero: 'This is an
> exception' / 0".
> I'm totally making that up of course, but you see what I'm getting at.
> Does someone with better Postgres/SQL knowledge know of such an exception?
>
> Thanks again,
>
> Taylor
>
> On Thu, Apr 2, 2015 at 2:07 AM, Pavel Stehule [via PostgreSQL] <[hidden
> email] > wrote:
>
>>
>>
>>
>> The OP on this thread has introduced a potential compromise.  Keep the
>>> current printing behavior for RAISE, but construct the error itself to
>>> contain all of the relevant detail so that the caller can get to the
>>> suppressed information via, in this instance, GET STACKED DIAGNOSTICS
>>> inside an exception handler - a situation where the error is in context
>>> but has not yet been printed.
>>>
>>> Giving the function author the ability, via a new USING clause, to
>>> bypass the printing short-circuit is another feature to consider.
>>>
>>> I haven't fully comprehended the design decisions and trade-offs, but
>>> omitting data to facilitate printing doesn't sound like an appropriate
>>> solution - not that I have any clue how hard it would be to separate the
>>> two aspects.  Likewise with adding in a short-circuit that is judgmental
>>> without providing some means to bypass it.  We are not talking about data
>>> integrity or performance here, and while I'll admit reducing verbosity is
>>> a valid goal, misuse of a provided work-around mechanism is not that
>>> serious a transgression, and one readily noticed and somewhat readily
>>> corrected.
>>>
>>
>> There are more aspects - the current behavior is a too-simple fix, but it
>> works most of the time. We can enhance the RAISE statement, but the default
>> behavior should be practical. Usually we don't need a stack for the NOTICE
>> level (and maybe not for the WARNING level), and usually we do want a stack
>> for the EXCEPTION level. Now there is a workaround via GET STACKED
>> DIAGNOSTICS, but it is a workaround - the enhancement of GET DIAGNOSTICS
>> was not designed for this purpose. Sure, there are more variants of a fix -
>> and we can implement more than one.
>>
>> Regards
>>
>> Pavel
>>
>>
>>>
>>> David J.
>>>
>>
>>
>>
>>
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Why doesn't `RAISE EXCEPTION` provide error context?

2015-04-02 Thread Taylor Brown
Indeed it is possible, Melvin. I read through those links, and I am afraid I
wasn't clear enough.

If Postgres throws an exception, we can handle it and get the context,
which will allow us to pinpoint exactly where the problem was, and what
functions were called leading up to the error. However, we are trying to
add our own manually RAISED exceptions for particular circumstances. And in
that case, I can "catch" the exception, but there is no context associated
with it. I'm looking for a workaround to tide me over until this is fixed.

Example: We have some functions that take a JSON parameter, extract some
values, and will eventually insert those values into the database. If a
property isn't set on the JSON that gets passed into that function, that
parameter is null, and we'll eventually see an error because the INSERT
into the database violates a NOT NULL constraint. Since Postgres threw that
error, I can handle that exception and return the context, showing me where
to look. Through debugging, I can _eventually_ figure out that the developer
might not have passed the appropriate JSON data into the function (vs.
many other reasons we could have seen a NOT NULL constraint error). So I
would rather put a check like this at the top of my function:

--
important_variable := (p_request::json->>'important_variable')::integer;
IF important_variable IS NULL THEN
    RAISE EXCEPTION 'important_variable must not be NULL.';
END IF;
--

But I won't be able to get the context for that exception, and all I'll be
able to return from the function or write to the logs is
'important_variable must not be NULL.'. If that's the only place I throw
that error, I'll know where to look. Otherwise, I have no context, and
won't be able to determine where my exception was thrown. So I'm actually
better off _not_ throwing my own custom exceptions, even though I would
prefer to be more defensive about this sort of thing in my code.

I thought I might be able to "trick" postgres into throwing another
unrelated exception that would not only include my custom error message,
but allow me to extract the context, telling me the function where I should
begin debugging.
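
One such trick, sketched here for illustration (it was not proposed in the
thread): force a failing cast whose error message embeds the custom string.
Because the SQL engine raises the error rather than RAISE, the handler does
get a full PG_EXCEPTION_CONTEXT:

DO $$
DECLARE
    v_error_stack text;
BEGIN
    -- Fails with: invalid input syntax for type integer:
    -- "important_variable must not be NULL"
    PERFORM ('important_variable must not be NULL')::integer;
EXCEPTION WHEN others THEN
    GET STACKED DIAGNOSTICS v_error_stack = PG_EXCEPTION_CONTEXT;
    RAISE NOTICE 'context: %', v_error_stack;  -- includes the raising location
END $$;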

Many thanks,

Taylor

On Thu, Apr 2, 2015 at 10:45 AM, Melvin Davidson 
wrote:

> I believe the availability of trapping the error codes and raising the
> appropriate message
> is already in PLPGSQL. Please see the two sections below.
>
>
> http://www.postgresql.org/docs/9.4/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
>
> http://www.postgresql.org/docs/9.4/interactive/errcodes-appendix.html
>
> On Thu, Apr 2, 2015 at 12:01 PM, Taytay  wrote:
>
>> There appears to be a fair amount of nuance here, but I am _very_
>> impressed with how quickly you have responded. Thank you for your quick
>> attention to this issue! (Yet another thing that makes me happy to be using
>> Postgres).
>>
>> We have a fair amount of business logic in Postgres functions, and the
>> ability to throw and catch exceptions, and pinpoint their source location
>> in our logs, will be _very_ helpful for us. I look forward to it.
>>
>> This is perhaps a separate discussion, but as I await this patch, I'm
>> looking for a way to "trick" Postgres into throwing an exception that will
>> include a string of my choosing - something like "SELECT 'This is an
>> exception' / 0".
>> That would give me a "real" exception that is not subject to this
>> filtering, ideally with an error message like "Divide by zero: 'This is an
>> exception' / 0".
>> I'm totally making that up of course, but you see what I'm getting at.
>> Does someone with better Postgres/SQL knowledge know of such an exception?
>>
>> Thanks again,
>>
>> Taylor
>>
>> On Thu, Apr 2, 2015 at 2:07 AM, Pavel Stehule [via PostgreSQL] <[hidden
>> email] > wrote:
>>
>>>
>>>
>>>
>>> The OP on this thread has introduced a potential compromise.  Keep the
 current printing behavior for RAISE, but construct the error itself to
 contain all of the relevant detail so that the caller can get to the
 suppressed information via, in this instance, GET STACKED DIAGNOSTICS
 inside an exception handler - a situation where the error is in context
 but has not yet been printed.

 Giving the function author the ability, via a new USING clause, to
 bypass the printing short-circuit is another feature to consider.

 I haven't fully comprehended the design decisions and trade-offs, but
 omitting data to facilitate printing doesn't sound like an appropriate
 solution - not that I have any clue how hard it would be to separate the
 two aspects.  Likewise with adding in a short-circuit that is judgmental
 without providing some means to bypass it.  We are not talking about data
 integrity or performance here, and while I'll admit reducing verbosity is
 a valid goal, misuse of a provided work-around mechanism is not that serious 

Re: [GENERAL] now() vs 'epoch'::timestamp

2015-04-02 Thread James Cloos
> "SC" == Steve Crawford  writes:

SC> Very convoluted calculation as others have noted. As to why it is
SC> "off", you are casting one part of the statement to an integer thus
SC> truncating the microseconds but are not doing the same on the other
SC> side of the calculation.

It wasn't the microsecond difference I asked about, it was the 6 hour 
difference.

The original, ancient code I used needed to return integer seconds.  And
it always gave answers consistent with date +%s.

What I haven't determined is why converting back is off by 21600 seconds.
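
For reference, the usual way to get integer seconds consistent with
date +%s is to extract the epoch directly rather than subtract timestamps -
a one-line sketch:

  SELECT extract(epoch FROM now())::integer;  -- seconds since 1970-01-01 UTC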

-JimC
-- 
James Cloos  OpenPGP: 0x997A9F17ED7DAEA6


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] now() vs 'epoch'::timestamp

2015-04-02 Thread David G. Johnston
On Thu, Apr 2, 2015 at 10:27 AM, James Cloos  wrote:

> > "SC" == Steve Crawford  writes:
>
> SC> Very convoluted calculation as others have noted. As to why it is
> SC> "off", you are casting one part of the statement to an integer thus
> SC> truncating the microseconds but are not doing the same on the other
> SC> side of the calculation.
>
> It wasn't the microsecond difference I asked about, it was the 6 hour
> difference.
>
> The original, ancient code I used needed to return integer seconds.  And
> it always gave answers consistant with date +%s.
>
> What I haven't determined is why converting back is off by 21600 seconds.
>
>
What timezone is your server set to - and/or the client requesting the
calculation?

I haven't looked to see if that is a plausible explanation but if you are
+/- 6hrs from UTC...

David J.


Re: [GENERAL] now() vs 'epoch'::timestamp

2015-04-02 Thread Steve Crawford

On 04/02/2015 10:34 AM, David G. Johnston wrote:
On Thu, Apr 2, 2015 at 10:27 AM, James Cloos wrote:


> "SC" == Steve Crawford mailto:scrawf...@pinpointresearch.com>> writes:

...
What I haven't determined is why converting back is off by 21600
seconds.


What timezone is your server set to - and/or the client requesting
the calculation?

I haven't looked to see if that is a plausible explanation but if
you are +/- 6hrs from UTC...


David J.

I was actually just looking at the microseconds being off. Now I'm 
curious again and haven't been able to come up with a plausible 
explanation. My client and server are in America/Pacific time zone. What 
I've seen so far:


First, there appears to be some lingering automatic casting:
select 'epoch';
 ?column?
----------
 epoch

select 'epoch' at time zone 'UTC';
      timezone
---------------------
 1970-01-01 00:00:00

In the Pacific time zone, I should be -07 from UTC but if I strip down 
James' statement to the following the result shows as -08, not -07:


select 'epoch'::timestamptz;
      timestamptz
------------------------
 1969-12-31 16:00:00-08

Which we can see is correct:
select '1969-12-31 16:00:00-08'::timestamptz at time zone 'UTC';
      timezone
---------------------
 1970-01-01 00:00:00

But something gets crossed up when we add a couple calculations:

select (now() - (now() - 'epoch')) ;
        ?column?
------------------------
 1969-12-31 17:00:00-08

Now we are off by an hour:
select (now() - (now() - 'epoch')) at time zone 'UTC';
      timezone
---------------------
 1970-01-01 01:00:00


select (now()::timestamp without time zone - (now()::timestamp without
time zone - 'epoch'));

      ?column?
---------------------
 1970-01-01 00:00:00

That's all I've discovered so far but I have to run to a meeting.

Cheers,
Steve


Re: [GENERAL] now() vs 'epoch'::timestamp

2015-04-02 Thread James Cloos
> "DGJ" == David G Johnston  writes:

DGJ> ​What timezone is your server set to - and/or the client requesting the
DGJ> calculation?

Everything is in UTC.

-JimC
-- 
James Cloos  OpenPGP: 0x997A9F17ED7DAEA6


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] now() vs 'epoch'::timestamp

2015-04-02 Thread James Cloos
> "SC" == Steve Crawford  writes:

SC> select (now() - (now() - 'epoch')) ;
SC> ?column?
SC> 
SC>  1969-12-31 17:00:00-08

My servers all run in UTC, so that query works here.

For the first query where I noticed this, I had just run date +%s and used
that value in now() - 1427998368::reltime, like:

One term:
  :; date +%s
  1427998617

Other term:

  cloos=# select now() - 1427998617::reltime;
            ?column?
  --------------------------------
   1970-01-01 06:00:03.137866+00 

(As you can see it took me 3 seconds to do the copy-paste...:)

For now()-'epoch' I get the format:

  16527 days 18:27:01.688195

but for 1427999266::reltime I get:

  45 years 3 mons 1 day 12:27:46

I wonder whether the YMD to D conversion takes into account the actual
number of Bissextile years?

If so, I guess that is another nail in reltime's coffin.

now() - to_timestamp(1427999266) worked correctly, but that is not
unexpected given to_timestamp's definition.
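
A sketch of that form, for reference (to_timestamp maps epoch seconds to a
timestamptz, so the subtraction yields a plain interval):

  SELECT now() - to_timestamp(1427999266);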

-JimC
-- 
James Cloos  OpenPGP: 0x997A9F17ED7DAEA6


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] bdr global sequence not initialized

2015-04-02 Thread Vu Nguyen
I installed patched PostgreSQL 9.4.1 with BDR 0.9.0, and set up a BDR group
of 2 Linux hosts, each with 4 replicated databases.  Global sequences are
enabled (configured in postgresql.conf).

When I insert new records into any of 4 databases in the first host (created
via bdr.bdr_group_create), all get replicated to the second host (created
via bdr.bdr_group_join).

In reverse, when I insert new records into any of 4 databases in the second
host, data are only successfully replicated to 3 out of 4 databases.  From
the database where failures occur, I observe this error:

# select nextval('table_id_seq');
ERROR:  global sequence public.table_id_seq is not initialized yet
HINT:  All nodes must agree before the sequence is usable. Try again soon.
Check all nodes are up if the condition persists.

Here are current BDR status:

[first host]# select node_name, node_status from bdr.bdr_nodes;
 node_name | node_status
-----------+-------------
 first     | r
 second    | i

[second host]# select node_name, node_status from bdr.bdr_nodes;
 node_name | node_status
-----------+-------------
 first     | r
 second    | r

I'd like to ask for suggestions on how to troubleshoot and recover from this
situation.

Thank you.







Re: [GENERAL] Would like to know how analyze works technically

2015-04-02 Thread TonyS
On Wed, April 1, 2015 5:50 pm, Tom Lane-2 [via PostgreSQL] wrote:
>

>
> TonyS  writes:
>
>> The analyze function has crashed again while the overcommit entries
>> were as above. The last bit of the PostgreSQL log shows:
>> MdSmgr: 41934848 total in 14 blocks; 639936 free (0 chunks); 41294912 used
>> ident parser context: 0 total in 0 blocks; 0 free (0 chunks); 0 used
>> hba parser context: 7168 total in 3 blocks; 2288 free (1 chunks); 4880 used
>> LOCALLOCK hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
>> Timezones: 83472 total in 2 blocks; 3744 free (0 chunks); 79728 used
>> ErrorContext: 8192 total in 1 blocks; 8160 free (6 chunks); 32 used
>> 2015-04-01 14:23:27 EDT ERROR:  out of memory
>> 2015-04-01 14:23:27 EDT DETAIL:  Failed on request of size 80.
>> 2015-04-01 14:23:27 EDT STATEMENT:  analyze verbose;
>>
>
> We need to see all of that memory map, not just the last six lines of it.
>
>
> regards, tom lane
>


I have used the procedures from this web page to try to get a core dump:
https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD

If I follow the procedure and kill the postmaster pid while psql is
connected to it, it does generate a core dump; however, no core dump is
generated when the error I have been experiencing occurs.

I guess at this point I am just going to rebuild from the Linux
installation up. I also tried changing the work_mem to 16MB, but that
didn't seem to make a difference.






Re: [GENERAL] now() vs 'epoch'::timestamp

2015-04-02 Thread Adrian Klaver

On 04/02/2015 11:01 AM, Steve Crawford wrote:

On 04/02/2015 10:34 AM, David G. Johnston wrote:

On Thu, Apr 2, 2015 at 10:27 AM, James Cloos <cl...@jhcloos.com> wrote:

> "SC" == Steve Crawford mailto:scrawf...@pinpointresearch.com>> writes:

...
What I haven't determined is why converting back is off by 21600
seconds.


What timezone is your server set to - and/or the client requesting
the calculation?

I haven't looked to see if that is a plausible explanation but if
you are +/- 6hrs from UTC...

David J.


I was actually just looking at the microseconds being off. Now I'm
curious again and haven't been able to come up with a plausible
explanation. My client and server are in America/Pacific time zone. What
I've seen so far:

First, there appears to be some lingering automatic casting:
select 'epoch';
  ?column?
----------
  epoch

select 'epoch' at time zone 'UTC';
       timezone
---------------------
  1970-01-01 00:00:00

In the Pacific time zone, I should be -07 from UTC but if I strip down
James' statement to the following the result shows as -08, not -07:


Which would be correct for 12/31/1969 as standard time was in effect.



select 'epoch'::timestamptz;
       timestamptz
------------------------
  1969-12-31 16:00:00-08

Which we can see is correct:
select '1969-12-31 16:00:00-08'::timestamptz at time zone 'UTC';
       timezone
---------------------
  1970-01-01 00:00:00

But something gets crossed up when we add a couple calculations:

select (now() - (now() - 'epoch')) ;
        ?column?
------------------------
  1969-12-31 17:00:00-08


Now you are mixing intervals and timestamps, something I try to avoid 
because of all the converting that goes on.




Now we are off by an hour:
select (now() - (now() - 'epoch')) at time zone 'UTC';
       timezone
---------------------
  1970-01-01 01:00:00


select (now()::timestamp without time zone - (now()::timestamp without
time zone - 'epoch'));
       ?column?
---------------------
  1970-01-01 00:00:00


Or:

test=> select now() - extract('epoch' from (now() -
'epoch'::timestamptz)) * interval '1 sec';

        ?column?
------------------------
 1969-12-31 16:00:00-08



That's all I've discovered so far but I have to run to a meeting.

Cheers,
Steve



--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] The case of PostgreSQL on NFS Server (II)

2015-04-02 Thread Octavi Fors
Hi,

this somewhat overlaps a thread which was already posted to this
list here.

However, I'm newbie in PostgreSQL and would need some help from experts on
two aspects. I apologize if these were already implicitely mentioned in the
thread above.

My situation is the following:
  1-running postgresql-9.2 server under my Ubuntu 12.04 LTS desktop,
  2-user postgres created on the system,
  3-two databases with several tables created, populated and owned by
postgres,
  4-these databases can be queried (SELECT, INSERT, etc.) by any system
user of my desktop.

Now here are my two questions:

a) how can I know where my two databases are stored (I presume in a
directory somewhere in my desktop path)?

b) this is the tough one (at least for me ;)  For data size reasons, I
want to store my two databases on a NAS which I have mounted via NFS, but
still keep the postgresql server on my desktop.
I've seen in this message of the thread above that the NFS noac mounting
option is highly recommended.
Apart from this, I don't see how to migrate the databases from my desktop
directory determined in a) to my NAS. Could someone please provide the
steps to accomplish that?


Thanks so much in advance,

-- 
Octavi Fors


Re: [GENERAL] The case of PostgreSQL on NFS Server (II)

2015-04-02 Thread David G. Johnston
On Thu, Apr 2, 2015 at 1:01 PM, Octavi Fors  wrote:

>  I don't see how to migrate the databases from my desktop directory
> determined in a) to my NAS. Could someone please provide the steps to
> accomplish that?
>

ALTER DATABASE name SET TABLESPACE new_tablespace

You are solely responsible for any future frustration or harm you
experience for choosing to access your data over NFS.
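
A concrete sketch, borrowing hypothetical names used elsewhere in this
thread (note that ALTER DATABASE ... SET TABLESPACE requires that no other
sessions are connected to the database being moved):

CREATE TABLESPACE onnas LOCATION '/NAS-box/pgsql-data';
ALTER DATABASE somedb SET TABLESPACE onnas;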

David J.


Re: [GENERAL] The case of PostgreSQL on NFS Server (II)

2015-04-02 Thread John McKown
On Thu, Apr 2, 2015 at 3:01 PM, Octavi Fors  wrote:
> Hi,
>
> this somewhat overlaps a thread which was already posted in this list
> here.
>
> However, I'm newbie in PostgreSQL and would need some help from experts on
> two aspects. I apologize if these were already implicitely mentioned in the
> thread above.
>
> My situation is the following:
>   1-running postgresql-9.2 server under my Ubuntu 12.04 LTS desktop,

I'm running release 9.4 on Fedora 21, but hopefully what I can test
will be of some use to you.

>   2-user postgres created on the system,

Good.

>   3-two databases with several tables created, populated and owned by
> postgres,
>   4-these databases can be queried (SELECT, INSERT, etc.) by any system user
> of my desktop.
>
> Now here are my two questions:
>
> a) how can I know where my two databases are stored (I presume in a
> directory somewhere in my desktop path)?

You might want to read: https://help.ubuntu.com/community/PostgreSQL

They will be stored in the default location. On Fedora, this is
/var/lib/pgsql/data. I don't know what it is on Ubuntu, but you can
find out by using psql. Perhaps something like:

sudo -u postgres psql -c 'SHOW data_directory;'

Just using my normal id on Fedora, which is a PostgreSQL (not UNIX)
super user, I can see:

tsh009=# SHOW data_directory;
   data_directory
---------------------
 /var/lib/pgsql/data
(1 row)

tsh009=#


>
> b) this is the tough one (at least for me ;)  For data size reasons, I want
> to store my two databases in a NAS which I have mounted via NFS, but still
> keeping the postgresql server on my desktop.

Well, if you insist. But I would not move the default data there. I'd
keep it local. See below for an alternative.

> I've seen in this message of the thread above, that NFS noac mounting option
> is highly recommended.
> Apart from this, I don't see how to migrate the databases from my desktop
> directory determined in a) to my NAS. Could someone please provide the steps
> to accomplish that?

Well, there are a couple of approaches. Both will require UNIX root to
accomplish. If you are running SELinux enabled & enforcing, it is even
more complicated. But basically, create a new, empty subdirectory on
your NAS box. Look at the owner & group on the directory shown above
(/var/lib/pgsql/data) and make the new directory have the same
information. Something like:

sudo mkdir -p /NAS-box/pgsql-data
sudo chown --reference /var/lib/pgsql/data /NAS-box/pgsql-data

This sets up the new NAS-box/pgsql-data directory and assigns it the
proper owner:group for use by the server.

=== Moving the data - BAD IDEA, IMO ==

#stop the postgresql server on your system, however you do that. I
don't know Ubuntu.

sudo cp -a /var/lib/pgsql/data/* /NAS-box/pgsql-data
sudo cp -a /var/lib/pgsql/data/.* /NAS-box/pgsql-data
sudo mount --bind /var/lib/pgsql/data /NAS-box/pgsql-data
sudo restorecon -R /var/lib/pgsql/data # restore SELinux attributes

Commands #1 and #2 copy all the data to the new directory.
Command #3 does a "bind" mount so that any reference to
/var/lib/pgsql/data is redirected to /NAS-box/pgsql-data.
Command #4 applies if you run with SELinux.

Update your fstab to maintain the bind mount when you reboot. The
entry will look something like:

/NAS-box/pgsql-data /var/lib/pgsql/data none bind

=== THE ABOVE IS A BAD IDEA ===

=== Decent idea: ===
Create the NAS directory as above. Do _NOT_ copy anything into it! In
psql, as a postgres super user, create a new TABLESPACE
ref: http://www.postgresql.org/docs/9.2/interactive/sql-createtablespace.html

CREATE TABLESPACE onNAS LOCATION '/NAS-box/pgsql-data';
CREATE DATABASE somedb TABLESPACE onNAS;
\c somedb --connect to new database stored on the NAS
CREATE TABLE sometable ... ;


Now all the stuff in the "somedb" data base will reside on your NAS
box. Now all you need worry about is that it is reliable and fast
enough. Hopefully on at least a 1Gb ethernet.
In this case, I don't know the SELinux commands to set the SELinux
attributes. It is complicated.

=== Opinion.

I have a NAS box. But I would worry about responsiveness. What is
better, IMO, is an external SATA connected DAS box. DAS is "Direct
Attached Storage". Many PCs have a eSATA port on the back side. You
connect your DAS box to that will an eSATA cable. Now you have an
external disk expansion which runs at full SATA speed, just like an
internal disk drive. It's just an alternative. But you'd do the same
thing as with the NAS to get data onto it. Well, except that you could
dedicate an entire filesystem on the DAS to the /var/lib/pgsql/data
with something like:

# /dev/sd?n is the value for the DAS box.
sudo mkfs.ext4 /dev/sd?n #create new filesystem on DAS
sudo mount /dev/sd?n /tmp #mount it
sudo cp -a /var/lib/pgsql/data/* /tmp
sudo cp -a /var/lib/pgsql/data/.* /tmp
sudo umount /tmp
sudo mount /dev/sd?n /var/lib/pgsql/data #mount it
sudo restorecon -R /var/lib/pgsql/data # update SELinux attributes

The restorecon is only needed if you run 

[GENERAL] Error handling in C API function calls in a way that doesn't close client connection

2015-04-02 Thread Igor Stassiy
Hello all,

This question refers to version 9.4 of Postgres. I have a function

Datum do_something(PG_FUNCTION_ARGS)
{
...
if(error_occured) {
ereport(ERROR, (errmsg("some error occured")));
}
...
}

When I call do_something in a way that deliberately causes the error
condition, the client connection (psql) is terminated. However, for example,
when you run this (or many other calls that cause ereport to be invoked):

psql > SELECT ST_IsSimple('POINT(1, 1)');

I simply get the message "ERROR:  parse error - invalid geometry" but the
connection is not closed.

Subqueries also somehow know that the inner query terminated with an error,
without the connection being terminated:

psql> select * from (select st_issimple('point(0, 0)')) as a;

ERROR:  parse error - invalid geometry
I guess this could be done by surrounding the function call with
PG_TRY/PG_CATCH; however, these macros are not present in the PostGIS code.

Can you please suggest how to implement this functionality in my code?

Thanks,
Igor


Re: [GENERAL] The case of PostgreSQL on NFS Server (II)

2015-04-02 Thread Octavi Fors
Thanks John for your extensive and helpful response.

A few quick answers which may clarify my desktop-NAS system details:

If you are running SELinux enabled & enforcing, it is even
> more complicated.
>

-no, I'm not running SELinux.

-My NAS is a Synology DS2415+. I created a volume of ~72.44TB with 12 8TB
Seagate ST8000AS0002 drives, with no encryption and Synology Hybrid RAID-2
(SHR) protection.

-Yes, the NAS is accessible only from two computers in the same Gigabit
LAN. Actually, I have defined trunks on the router for speeding up the
computers<->NAS access.

-The rest of the LAN has ~4 tcp/ip devices with very low traffic.

I have a NAS box. But I would worry about responsiveness. What is
> better, IMO, is an external SATA connected DAS box. DAS is "Direct
> Attached Storage". Many PCs have an eSATA port on the back side.
>

as far as I understand, eSATA is not an option for me. First, because I
already bought the DS2415+ NAS, which does not have eSATA I/O :(
And second, because I need the database to be accessible from two computers
in the same LAN.


See below the results to your comments/commands:

Just using my normal id on Fedora, which is a PostgreSQL (not UNIX)
> super user, I can see:
>
> tsh009=# SHOW data_directory;
>data_directory
> -
>  /var/lib/pgsql/data
>


postgres@ev1:~$ psql -c 'SHOW data_directory;'
        data_directory
------------------------------
 /var/lib/postgresql/9.2/main

Note that inside the /var/lib/postgresql/9.2/main directory there are the
following files and subdirs:

postgres@ev1:~/9.2/main$ cd /var/lib/postgresql/9.2/main
postgres@ev1:~/9.2/main$ ls -ltr
drwx------ 2 postgres postgres 4096 Aug 27  2014 pg_snapshots
drwx------ 2 postgres postgres 4096 Aug 27  2014 pg_serial
-rw------- 1 postgres postgres    4 Aug 27  2014 PG_VERSION
drwx------ 2 postgres postgres 4096 Aug 27  2014 pg_twophase
drwx------ 2 postgres postgres 4096 Aug 27  2014 pg_tblspc
drwx------ 4 postgres postgres 4096 Aug 27  2014 pg_multixact
drwx------ 2 postgres postgres 4096 Sep  3  2014 pg_clog
drwx------ 2 postgres postgres 4096 Sep  3  2014 pg_subtrans
drwx------ 7 postgres postgres 4096 Sep  3  2014 base
-rw------- 1 postgres postgres   93 Mar  9 20:10 postmaster.pid
-rw------- 1 postgres postgres  133 Mar  9 20:10 postmaster.opts
drwx------ 2 postgres postgres 4096 Mar  9 20:10 pg_notify
drwx------ 2 postgres postgres 4096 Mar  9 20:11 global
drwx------ 3 postgres postgres 4096 Mar 27 13:57 pg_xlog
drwx------ 2 postgres postgres 4096 Apr  2 17:35 pg_stat_tmp

postgres@ev1:~/9.2/main$ du -sh ./*
485M    ./base
448K    ./global
624K    ./pg_clog
28K     ./pg_multixact
12K     ./pg_notify
4.0K    ./pg_serial
4.0K    ./pg_snapshots
32K     ./pg_stat_tmp
176K    ./pg_subtrans
4.0K    ./pg_tblspc
4.0K    ./pg_twophase
4.0K    ./PG_VERSION
129M    ./pg_xlog
4.0K    ./postmaster.opts
4.0K    ./postmaster.pid


ofors@ev1:~$ sudo /etc/init.d/postgresql stop
 * Stopping PostgreSQL 9.2 database server [ OK ]

ofors@ev1:~$ sudo mkdir -p /home/ofors/Documents/nas/
ofors@ev1:~$ sudo chown --reference /var/lib/postgresql/9.2/main
/home/ofors/Documents/nas/pgsql-data

ofors@ev1:~$ sudo /etc/init.d/postgresql start
 * Starting PostgreSQL 9.2 database server [ OK ]

postgres@ev1:~/9.2/main$ psql
psql (9.3.6, server 9.2.10)
Type "help" for help.

postgres=# CREATE TABLESPACE onNAS LOCATION
'/home/ofors/Documents/nas/pgsql-data';
CREATE TABLESPACE
postgres=# ALTER DATABASE catalogs SET TABLESPACE onNAS;
ALTER DATABASE

You can see that I used the ALTER from David's last message, instead of
your suggestion of creating the whole database again.


Anyway, it seems the data from the catalogs database is on the NAS.
See the following:

ofors@ev1:~$ sudo ls -l /home/ofors/Documents/nas/pgsql-data/
drwx------ 3 postgres postgres 4096 Apr  2 18:02 PG_9.2_201204301

ofors@ev1:~$ sudo du -sh
/home/ofors/Documents/nas/pgsql-data/PG_9.2_201204301
467M/home/ofors/Documents/nas/pgsql-data/PG_9.2_201204301

postgres@ev1:~/9.2/main$ cd /var/lib/postgresql/9.2/main
postgres@ev1:~/9.2/main$ ls -ltr
drwx------ 2 postgres postgres 4096 Aug 27  2014 pg_snapshots
drwx------ 2 postgres postgres 4096 Aug 27  2014 pg_serial
-rw------- 1 postgres postgres    4 Aug 27  2014 PG_VERSION
drwx------ 2 postgres postgres 4096 Aug 27  2014 pg_twophase
drwx------ 4 postgres postgres 4096 Aug 27  2014 pg_multixact
drwx------ 2 postgres postgres 4096 Sep  3  2014 pg_clog
drwx------ 2 postgres postgres 4096 Sep  3  2014 pg_subtrans
drwx------ 3 postgres postgres 4096 Mar 27 13:57 pg_xlog
drwx------ 2 postgres postgres 4096 Apr  2 17:56 pg_tblspc
drwx------ 6 postgres postgres 4096 Apr  2 18:01 base
-rw------- 1 postgres postgres   91 Apr  2 18:19 postmaster.pid
drwx------ 2 postgres postgres 4096 Apr  2 18:19 pg_notify
-rw------- 1 postgres pos

Re: [GENERAL] The case of PostgreSQL on NFS Server (II)

2015-04-02 Thread John McKown
On Thu, Apr 2, 2015 at 7:09 PM, Octavi Fors  wrote:
> Thanks John for your extensive and helpful response.
>

> You can see that I used the ALTER from David's last message, instead of
> your suggestion of creating the whole database again.

Looks good!


>
> Two only questions remain for me:
>
> 1) could you confirm that I don't have to mount --bind
> /var/lib/postgresql/9.2/main ?

If it is working for you now, then you don't need to mess with the
fstab or the mount --bind at all.

>
> 2) on my my /etc/fstab here is the current line for my NAS:
> nas_ip:/volume1/data /home/ofors/Documents/nas nfs
>
> Which NFS client and server (on NAS side) options/arguments do you suggest
> for optimizing performance? Or in other words, for minimizing database
> corruption in case of NAS (note that NAS drives are in RAID6) or computer
> failure?

Hum, performance is a bit over my head on disk drives, especially NAS.
I have a small NAS, but my main 2Tb disk is eSATA. My fstab entry for
the NAS just looks like:

smartstor1:/VOLUME1/DV7T /SmartStor1/Volume1/DV7T nfs
user,rw,noauto,noexec,nosuid 0 0

which is basically what you have, performance wise, as best as I can tell.

Oh, are you going to try to share a single database file with
different PostgreSQL servers on different machines __AT THE SAME
TIME__??? I think you're going to "Corruption City". There are locks
which work fairly well on local disks (to share something like a file
or directory) which don't work very well on some NAS boxes. At least,
that is what Dr. Richard Hipp, who wrote and maintains SQLite, says
in his support forum. And he is much brighter than I!

>
>
> Thanks so much in advance,
>
> --
> Octavi Fors
>
> On Thu, Apr 2, 2015 at 4:54 PM, John McKown 
> wrote:
>>
>> On Thu, Apr 2, 2015 at 3:01 PM, Octavi Fors  wrote:
>> > Hi,
>> >
>> > this is somehow overlapping one thread which was already posted in this
>> > list here.
>> >
>> > However, I'm newbie in PostgreSQL and would need some help from experts
>> > on
>> > two aspects. I apologize if these were already implicitely mentioned in
>> > the
>> > thread above.
>> >
>> > My situation is the following:
>> >   1-running postgresql-9.2 server under my Ubuntu 12.04 LTS desktop,
>>
>> I'm running release 9.4 on Fedora 21, but hopefully what I can test
>> will be of some use to you.
>>
>> >   2-user postgres created on the system,
>>
>> Good.
>>
>> >   3-two databases with several tables created, populated and owned by
>> > postgres,
>> >   4-these databases can be queried (SELECT, INSERT, etc.) by any system
>> > user
>> > of my desktop.
>> >
>> > Now here are my two questions:
>> >
>> > a) how can I know where my two databases are stored (I presume in a
>> > directory somewhere in my desktop path)?
>>
>> You might want to read: https://help.ubuntu.com/community/PostgreSQL
>>
>> They will be stored in the default location. On Fedora, this is
>> /var/lib/pgsql/data. I don't know what it is on Ubuntu, but you can
>> find out by using psql. Perhaps something like:
>>
>> sudo -u postgres psql -c 'SHOW data_directory;'
>>
>> Just using my normal id on Fedora, which is a PostgreSQL (not UNIX)
>> super user, I can see:
>>
>> tsh009=# SHOW data_directory;
>>data_directory
>> -
>>  /var/lib/pgsql/data
>> (1 row)
>>
>> tsh009=#
>>
>>
>> >
>> > b) this is the tough one (at least for me ;)  For data size reasons, I
>> > want
>> > to store my two databases in a NAS which I have mounted via NFS, but
>> > still
>> > keeping the postgresql server on my desktop.
>>
>> Well, if you insist. But I would not move the default data there. I'd
>> keep it local. See below for an alternative.
>>
>> > I've seen in this message of the thread above, that NFS noac mounting
>> > option
>> > is highly recommended.
>> > Apart from this, I don't see how to migrate the databases from my
>> > desktop
>> > directory determined in a) to my NAS. Could someone please provide the
>> > steps
>> > to accomplish that?
>>
>> Well, there are a couple of approaches. Both will require UNIX root to
>> accomplish. If you are running SELinux enabled & enforcing, it is even
>> more complicated. But basically create a new, empty subdirectory on
>> your NAS box. Look at the owner & group on the directory shown above
>> (/var/lib/pgsql/data) and make the new directory have the same
>> information. Something like:
>>
>> sudo mkdir -p /NAS-box/pgsql-data
>> sudo chown --reference /var/lib/pgsql/data /NAS-box/pgsql-data
>>
>> This sets up the new NAS-box/pgsql-data directory and assigns it the
>> proper owner:group for use by the server.
>>
>> === Moving the data - BAD IDEA, IMO ===
>>
>> #stop the postgresql server on your system, however you do that. I
>> don't know Ubuntu.
>>
>> sudo cp -a /var/lib/pgsql/data/. /NAS-box/pgsql-data  # trailing /. copies dotfiles too
>> sudo mount --bind /NAS-box/pgsql-data /var/lib/pgsql/data
>> sudo restorecon -R /var/lib/pgsql/data # restore SELinux attributes
>>
>> Comma

[GENERAL] ERROR: out of memory

2015-04-02 Thread Dzmitry Nikitsin
Hey folks,
 I have 4 PostgreSQL 9.3.6 servers (on the master I use 9.3.5) configured with
streaming replication - 1 master (30GB RAM, processor - Intel Xeon
E5-2680 v2) and 3 slaves (61 Intel Xeon E5-2670 v2), all on Ubuntu 14.04.1
LTS.

Master configuration:
default_statistics_target = 50

maintenance_work_mem = 1GB

constraint_exclusion = on

checkpoint_completion_target = 0.9

effective_cache_size = 22GB

work_mem = 120MB

wal_buffers = 8MB

checkpoint_segments = 16

shared_buffers = 7GB

max_connections = 300



Slave configuration:



max_connections = 300

shared_buffers = 10GB

effective_cache_size = 45GB

work_mem = 19660kB

maintenance_work_mem = 2GB

checkpoint_segments = 32

checkpoint_completion_target = 0.7

wal_buffers = 16MB

default_statistics_target = 100



I am using the XFS file system; the size of my database is 168GB.

For linux kernel I have settings:

vm.swappiness = 0

vm.overcommit_memory = 2

vm.overcommit_ratio = 50

kernel.shmall = 7864320
kernel.shmmax = 32212254720

kernel.shmmni = 4096



The master is the primary for writing data; the slaves are for reporting. In reality I am
using no more than 100 connections to a slave server at the same time,
writing about 3000 records a minute.



I have one table where I write statistics, which is partitioned by month.
Below are the table sizes of the biggest relations:

public.stats_201408   | 9212 MB

 public.stats_201503   | 8868 MB

 pg_toast.pg_toast_6404464 | 8319 MB

 pg_toast.pg_toast_317921  | 7520 MB

 public.stats_201409   | 7101 MB

 public.stats_201412   | 4458 MB



I see pg_toast here; from the docs I read that large objects live there, but it's not
related to my stats table, which is read/write heavy (the type of the biggest
column in this table is character varying(3000)). I.e. - it's related to a
different table.



My application creates 100 connections & keeps them for its whole life
cycle (usually until the next deploy - that may happen in a couple of days); over time
the connections grow in memory (checked using htop) & free memory goes
down. As a result, with time (usually 3-4 hours) my DB starts throwing:



ERROR:  out of memory

DETAIL:  Failed on request of size 2048.



After I restart my application (reconnecting to the DB), it starts working fine
again.



It even fails on a simple query like:



 SELECT COUNT(*) FROM "stats" WHERE "stats"."bt_id" = $1 AND
(stats.created_at >= '2015-04-02 04:00:00.00') AND (stats.created_at <=
'2015-04-03 03:59:59.99') AND ("stats"."source" IS NOT NULL) AND
"stats"."device" IN (1, 2) AND "stats"."ra" = 0 AND "stats"."paid" = 't'




Any help appreciated.








Re: [GENERAL] The case of PostgreSQL on NFS Server (II)

2015-04-02 Thread Steve Atkins

On Apr 2, 2015, at 5:09 PM, Octavi Fors  wrote:

> 
> And second, because I need the database to be accessible from two computers 
> in the same LAN.

If you do this, you will destroy your database[1].

Why not have the database running on one machine, all the time, potentially 
with a real disk subsystem then just access it from wherever on the LAN you 
need to? Postgresql is a client-server database, and you can access it over the 
network as easily as you can from the machine it's running on.
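
For example - a sketch, assuming the database machine is reachable as c2.lan
and LAN clients are allowed in postgresql.conf/pg_hba.conf (listen_addresses
= '*', plus a host line for your subnet) - the second computer needs nothing
but a client:

    psql -h c2.lan -d catalogs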

Cheers,
  Steve

[1] Almost certainly, unless you make absolutely sure postgresql is only 
started on one machine at a time, amongst other things.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] The case of PostgreSQL on NFS Server (II)

2015-04-02 Thread David G. Johnston
On Thu, Apr 2, 2015 at 5:09 PM, Octavi Fors  wrote:

> Thanks John for your extensive and helpful response.
>
> I have a NAS box. But I would worry about responsiveness. What is
>> better, IMO, is an external SATA connected DAS box. DAS is "Direct
>> Attached Storage". Many PCs have a eSATA port on the back side.
>>
>
> as far as I understand eSATA is not an option for me. First because I
> already have bought the DS2415+ NAS, which does not has eSATA I/O :(
> And second, because I need the database to be accessible from two
> computers in the same LAN.
>
>
This is new - and while the desire to use your multi-terabyte device
may make the risk of running over NFS potentially worthwhile, there is no
reasonable way to have two running databases share data files.  You can
share the NAS device and have each PostgreSQL instance consume an unshared
portion of its resources.

​You appear to either be "mis-speaking" or omitting potentially critically
important details here...

1) could you confirm that I don't have to mount --bind
> /var/lib/postgresql/9.2/main ?
>
>
​/var/lib/... is not on the NAS but, likely, on whatever your primary
internal hard drive is.  Note, the most important aspect of that fact is
that your WAL gets written to your data directory and not to the tablespace
on which the database tables reside. (i.e. WAL does not make it to NAS -
unless you setup wal shipping).
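
A sketch of how to see this from SQL (9.2+): the data directory, under which
pg_xlog lives, and any user tablespace locations can both be read from the
server itself:

    SHOW data_directory;  -- WAL is written under <data_directory>/pg_xlog
    SELECT spcname, pg_tablespace_location(oid) FROM pg_tablespace;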

2) on my /etc/fstab here is the current line for my NAS:
> nas_ip:/volume1/data /home/ofors/Documents/nas nfs
>
> Which NFS client and server (on NAS side) options/arguments do you suggest
> for optimizing performance? Or in other words, for minimizing database
> corruption in case of NAS (note that NAS drives are in RAID6) or computer
> failure?
>
>
​I am a little out of my league here but the main risk is that incomplete
data is sent/received by/from the NAS.  Once the data is in the NAS it is
really no different than any other storage medium in terms of durability
concerns. I do not really know how checkpoints and transient failed reads
interact with PostgreSQL and what circumstances would prevent properly
recorded WAL from being used to restore should a read/write failure occur.

​David J.​​


Re: [GENERAL] ERROR: out of memory

2015-04-02 Thread David G. Johnston
On Thu, Apr 2, 2015 at 5:24 PM, Dzmitry Nikitsin  wrote:

> Hey folks,
>  I have 4 postgresql servers 9.3.6(on master I use 9.3.5) configured with
>  streaming replication - with 1 master (30GB RAM, processor - Intel Xeon
> E5-2680 v2) and 3 slaves(61 Intel Xeon E5-2670 v2), all on Ubuntu 14.04.1
> LTS,
>
> Master configuration:
>
> default_statistics_target = 50
>
> maintenance_work_mem = 1GB
>
> constraint_exclusion = on
>
> checkpoint_completion_target = 0.9
>
> effective_cache_size = 22GB
>
> work_mem = 120MB
>
> wal_buffers = 8MB
>
> checkpoint_segments = 16
>
> shared_buffers = 7GB
>
> max_connections = 300
>
>
> Slave configuration:
>
>
> max_connections = 300
>
> shared_buffers = 10GB
>
> effective_cache_size = 45GB
>
> work_mem = 19660kB
>
> maintenance_work_mem = 2GB
>
> checkpoint_segments = 32
>
> checkpoint_completion_target = 0.7
>
> wal_buffers = 16MB
>
> default_statistics_target = 100
>
>
> I am using XFS file system, size of my database – 168GB.
>
> For linux kernel I have settings:
>
> vm.swappiness = 0
>
> vm.overcommit_memory = 2
>
> vm.overcommit_ratio = 50
> kernel.shmall = 7864320
>
> kernel.shmmax = 32212254720
>
> kernel.shmmni = 4096
>
>
> Master is primary to write data. Slave – for reporting. In reality I am
> using not more then 100 connections to slave server at the same time.
> Writing about 3000 records in a minute.
>
>
> I have one table where I write statistics, that is partitioned by month.
> Below is table size with biggest relations
>
> public.stats_201408   | 9212 MB
>
>  public.stats_201503   | 8868 MB
>
>  pg_toast.pg_toast_6404464 | 8319 MB
>
>  pg_toast.pg_toast_317921  | 7520 MB
>
>  public.stats_201409   | 7101 MB
>
>  public.stats_201412   | 4458 MB
>
>
> I see here pg_toast, from doc I read it’s large objects there, but It’s
> not related to my table stats, which is read/write heavy(type of the
> biggest column in this table => character varying(3000)). I.e. - it’s
> related to different table.
>
>
> My application create 100 connections & keeping them during whole life
> cycle(usually until next deploy – that may happen in couple days), with
> time – connection growing in memory(checking using htop) & free memory
> going down. As result with time(usually 3-4 hours) my DB start throwing
>
>
> ERROR:  out of memory
>
> DETAIL:  Failed on request of size 2048.
>
>
> After I restart my application(reconnect to DB), it start working fine
> again.
>
>
> It even fail on simple query like:
>
>
>  SELECT COUNT(*) FROM "stats" WHERE "stats"."bt_id" = $1 AND
> (stats.created_at >= '2015-04-02 04:00:00.00') AND (stats.created_at <=
> '2015-04-03 03:59:59.99') AND ("stats"."source" IS NOT NULL) AND
> "stats"."device" IN (1, 2) AND "stats"."ra" = 0 AND "stats"."paid" = 't'
>
>
>
> Any help appreciated.
>

​Start here:

http://www.postgresql.org/docs/9.2/static/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW

Research

Ask more specific questions.

I suspect you not only are keeping your 100 sessions directly and
permanently connected to the database but many of them are also keeping
open transactions.
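
For instance - a sketch against the 9.2+ pg_stat_activity columns - sessions
parked inside an open transaction show up like this:

    SELECT pid, usename, state, now() - xact_start AS xact_age, query
    FROM pg_stat_activity
    WHERE state = 'idle in transaction'
    ORDER BY xact_start;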

​I'd also drop any preconception about which tables you think are
problematic - since likely the tables themselves are not the issue.​

David J.​


Re: [GENERAL] ERROR: out of memory

2015-04-02 Thread Melvin Davidson
Well, right off the bat: if your master's shared_buffers = 7GB and 3 slaves'
shared_buffers = 10GB, that is 37GB total, which means you are guaranteed
to exceed the 30GB physical limit on your machine. The general recommendation
is to allocate only 1/4 of total memory for shared_buffers, so start by
cutting back the shared_buffers for ALL PG servers to 6GB (max. 4 or 5GB is
probably better) and things should be happier.
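
In postgresql.conf terms, something like this (a sketch of the suggestion
above, not a tuned value):

    shared_buffers = 6GB    # roughly 1/4 of a 30GB machine; 4 or 5GB may be better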

On Thu, Apr 2, 2015 at 8:24 PM, Dzmitry Nikitsin  wrote:

> Hey folks,
>  I have 4 postgresql servers 9.3.6(on master I use 9.3.5) configured with
>  streaming replication - with 1 master (30GB RAM, processor - Intel Xeon
> E5-2680 v2) and 3 slaves(61 Intel Xeon E5-2670 v2), all on Ubuntu 14.04.1
> LTS,
>
> Master configuration:
>
> default_statistics_target = 50
>
> maintenance_work_mem = 1GB
>
> constraint_exclusion = on
>
> checkpoint_completion_target = 0.9
>
> effective_cache_size = 22GB
>
> work_mem = 120MB
>
> wal_buffers = 8MB
>
> checkpoint_segments = 16
>
> shared_buffers = 7GB
>
> max_connections = 300
>
>
> Slave configuration:
>
>
> max_connections = 300
>
> shared_buffers = 10GB
>
> effective_cache_size = 45GB
>
> work_mem = 19660kB
>
> maintenance_work_mem = 2GB
>
> checkpoint_segments = 32
>
> checkpoint_completion_target = 0.7
>
> wal_buffers = 16MB
>
> default_statistics_target = 100
>
>
> I am using XFS file system, size of my database – 168GB.
>
> For linux kernel I have settings:
>
> vm.swappiness = 0
>
> vm.overcommit_memory = 2
>
> vm.overcommit_ratio = 50
> kernel.shmall = 7864320
>
> kernel.shmmax = 32212254720
>
> kernel.shmmni = 4096
>
>
> Master is primary to write data. Slave – for reporting. In reality I am
> using not more then 100 connections to slave server at the same time.
> Writing about 3000 records in a minute.
>
>
> I have one table where I write statistics, that is partitioned by month.
> Below is table size with biggest relations
>
> public.stats_201408   | 9212 MB
>
>  public.stats_201503   | 8868 MB
>
>  pg_toast.pg_toast_6404464 | 8319 MB
>
>  pg_toast.pg_toast_317921  | 7520 MB
>
>  public.stats_201409   | 7101 MB
>
>  public.stats_201412   | 4458 MB
>
>
> I see here pg_toast, from doc I read it’s large objects there, but It’s
> not related to my table stats, which is read/write heavy(type of the
> biggest column in this table => character varying(3000)). I.e. - it’s
> related to different table.
>
>
> My application create 100 connections & keeping them during whole life
> cycle(usually until next deploy – that may happen in couple days), with
> time – connection growing in memory(checking using htop) & free memory
> going down. As result with time(usually 3-4 hours) my DB start throwing
>
>
> ERROR:  out of memory
>
> DETAIL:  Failed on request of size 2048.
>
>
> After I restart my application(reconnect to DB), it start working fine
> again.
>
>
> It even fail on simple query like:
>
>
>  SELECT COUNT(*) FROM "stats" WHERE "stats"."bt_id" = $1 AND
> (stats.created_at >= '2015-04-02 04:00:00.00') AND (stats.created_at <=
> '2015-04-03 03:59:59.99') AND ("stats"."source" IS NOT NULL) AND
> "stats"."device" IN (1, 2) AND "stats"."ra" = 0 AND "stats"."paid" = 't'
>
>
>
> Any help appreciated.
>
>
>
>
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] ERROR: out of memory

2015-04-02 Thread David G. Johnston
On Thursday, April 2, 2015, Melvin Davidson  wrote:

> Well right of the bat, if your master shared_buffers = 7GB and 3 slaves
> shared_buffers = 10GB, that is 37GB total, which means you are guaranteed
> to exceed the 30GB physical limit on your machine.
>

I don't get why you are adding these together.  There isn't any reason to
assume the slaves are virtual machines sharing the same 30gb as opposed to
each having 30gb for 120gb total between all 4.

David J.


Re: [GENERAL] ERROR: out of memory

2015-04-02 Thread Dzmitry Nikitsin
It's 4 different servers.

From:  "David G. Johnston" 
Date:  Thursday, April 2, 2015 at 9:37 PM
To:  Melvin Davidson 
Cc:  Bob Jones , "pgsql-general@postgresql.org"

Subject:  Re: [GENERAL] ERROR: out of memory

On Thursday, April 2, 2015, Melvin Davidson  wrote:
> Well right of the bat, if your master shared_buffers = 7GB and 3 slaves
> shared_buffers = 10GB, that is 37GB total, which means you are guaranteed to
> exceed the 30GB physical limit on your machine.

I don't get why you are adding these together.  There isn't any reason to
assume the slaves are virtual machines sharing the same 30gb as opposed to
each having 30gb for 120gb total between all 4.

David J.






Re: [GENERAL] ERROR: out of memory

2015-04-02 Thread Dzmitry Nikitsin
Thank you David. I see some queries running for 10+ seconds, but I do not
have transactions there; they are just select queries. More thoughts?

Thanks,
 Dzmitry 

From:  "David G. Johnston" 
Date:  Thursday, April 2, 2015 at 8:57 PM
To:  Bob Jones 
Cc:  "pgsql-general@postgresql.org" 
Subject:  Re: [GENERAL] ERROR: out of memory



On Thu, Apr 2, 2015 at 5:24 PM, Dzmitry Nikitsin
 wrote:
> Hey folks, 
>  I have 4 postgresql servers 9.3.6(on master I use 9.3.5) configured with
> streaming replication - with 1 master (30GB RAM, processor - Intel Xeon E5-2680
> v2) and 3 slaves(61 Intel Xeon E5-2670 v2), all on Ubuntu 14.04.1 LTS,
> 
> Master configuration:
> default_statistics_target = 50
> 
> maintenance_work_mem = 1GB
> 
> constraint_exclusion = on
> 
> checkpoint_completion_target = 0.9
> 
> effective_cache_size = 22GB
> 
> work_mem = 120MB
> 
> wal_buffers = 8MB
> 
> checkpoint_segments = 16
> 
> shared_buffers = 7GB
> 
> max_connections = 300
> 
> 
> 
> Slave configuration:
> 
> 
> 
> max_connections = 300
> 
> shared_buffers = 10GB
> 
> effective_cache_size = 45GB
> 
> work_mem = 19660kB
> 
> maintenance_work_mem = 2GB
> 
> checkpoint_segments = 32
> 
> checkpoint_completion_target = 0.7
> 
> wal_buffers = 16MB
> 
> default_statistics_target = 100
> 
> 
> 
> I am using XFS file system, size of my database – 168GB.
> 
> For linux kernel I have settings:
> 
> vm.swappiness = 0
> 
> vm.overcommit_memory = 2
> 
> vm.overcommit_ratio = 50
> 
> kernel.shmall = 7864320
> kernel.shmmax = 32212254720
> 
> kernel.shmmni = 4096
> 
> 
> 
> Master is primary to write data. Slave – for reporting. In reality I am using
> not more then 100 connections to slave server at the same time. Writing about
> 3000 records in a minute.
> 
> 
> 
> I have one table where I write statistics, that is partitioned by month. Below
> is table size with biggest relations
> 
> public.stats_201408   | 9212 MB
> 
>  public.stats_201503   | 8868 MB
> 
>  pg_toast.pg_toast_6404464 | 8319 MB
> 
>  pg_toast.pg_toast_317921  | 7520 MB
> 
>  public.stats_201409   | 7101 MB
> 
>  public.stats_201412   | 4458 MB
> 
> 
> 
> I see here pg_toast, from doc I read it’s large objects there, but It’s not
> related to my table stats, which is read/write heavy(type of the biggest
> column in this table => character varying(3000)). I.e. - it’s related to
> different table. 
> 
> 
> 
> My application create 100 connections & keeping them during whole life
> cycle(usually until next deploy – that may happen in couple days), with time –
> connection growing in memory(checking using htop) & free memory going down. As
> result with time(usually 3-4 hours) my DB start throwing
> 
> 
> 
> ERROR:  out of memory
> 
> DETAIL:  Failed on request of size 2048.
> 
> 
> 
> After I restart my application(reconnect to DB), it start working fine again.
> 
> 
> 
> It even fail on simple query like:
> 
> 
> 
>  SELECT COUNT(*) FROM "stats" WHERE "stats"."bt_id" = $1 AND (stats.created_at
> >= '2015-04-02 04:00:00.00') AND (stats.created_at <= '2015-04-03
> 03:59:59.99') AND ("stats"."source" IS NOT NULL) AND "stats"."device" IN
> (1, 2) AND "stats"."ra" = 0 AND "stats"."paid" = 't'
> 
> 
> 
> 
> Any help appreciated.

​Start here:

http://www.postgresql.org/docs/9.2/static/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW

Research

Ask more specific questions.

I suspect you not only are keeping your 100 sessions directly and
permanently connected to the database but many of them are also keeping open
transactions.

​I'd also drop any preconception about which tables you think are
problematic - since likely the tables themselves are not the issue.​

David J.​
 





Re: [GENERAL] ERROR: out of memory

2015-04-02 Thread Dzmitry Nikitsin
Actually I checked it wrong: the state for the queries I mentioned is idle, i.e.
they are showing the previous transaction, so I do not see any long-running
transactions right now.

Thanks,
  Dzmitry 

From:  "David G. Johnston" 
Date:  Thursday, April 2, 2015 at 8:57 PM
To:  Bob Jones 
Cc:  "pgsql-general@postgresql.org" 
Subject:  Re: [GENERAL] ERROR: out of memory



On Thu, Apr 2, 2015 at 5:24 PM, Dzmitry Nikitsin
 wrote:
> Hey folks, 
>  I have 4 postgresql servers 9.3.6(on master I use 9.3.5) configured with
> streaming replication - with 1 master (30GB RAM, processor - Intel Xeon E5-2680
> v2) and 3 slaves(61 Intel Xeon E5-2670 v2), all on Ubuntu 14.04.1 LTS,
> 
> Master configuration:
> default_statistics_target = 50
> 
> maintenance_work_mem = 1GB
> 
> constraint_exclusion = on
> 
> checkpoint_completion_target = 0.9
> 
> effective_cache_size = 22GB
> 
> work_mem = 120MB
> 
> wal_buffers = 8MB
> 
> checkpoint_segments = 16
> 
> shared_buffers = 7GB
> 
> max_connections = 300
> 
> 
> 
> Slave configuration:
> 
> 
> 
> max_connections = 300
> 
> shared_buffers = 10GB
> 
> effective_cache_size = 45GB
> 
> work_mem = 19660kB
> 
> maintenance_work_mem = 2GB
> 
> checkpoint_segments = 32
> 
> checkpoint_completion_target = 0.7
> 
> wal_buffers = 16MB
> 
> default_statistics_target = 100
> 
> 
> 
> I am using XFS file system, size of my database – 168GB.
> 
> For linux kernel I have settings:
> 
> vm.swappiness = 0
> 
> vm.overcommit_memory = 2
> 
> vm.overcommit_ratio = 50
> 
> kernel.shmall = 7864320
> kernel.shmmax = 32212254720
> 
> kernel.shmmni = 4096
> 
> 
> 
> Master is primary to write data. Slave – for reporting. In reality I am using
> not more then 100 connections to slave server at the same time. Writing about
> 3000 records in a minute.
> 
> 
> 
> I have one table where I write statistics, that is partitioned by month. Below
> is table size with biggest relations
> 
> public.stats_201408   | 9212 MB
> 
>  public.stats_201503   | 8868 MB
> 
>  pg_toast.pg_toast_6404464 | 8319 MB
> 
>  pg_toast.pg_toast_317921  | 7520 MB
> 
>  public.stats_201409   | 7101 MB
> 
>  public.stats_201412   | 4458 MB
> 
> 
> 
> I see here pg_toast, from doc I read it’s large objects there, but It’s not
> related to my table stats, which is read/write heavy(type of the biggest
> column in this table => character varying(3000)). I.e. - it’s related to
> different table. 
> 
> 
> 
> My application create 100 connections & keeping them during whole life
> cycle(usually until next deploy – that may happen in couple days), with time –
> connection growing in memory(checking using htop) & free memory going down. As
> result with time(usually 3-4 hours) my DB start throwing
> 
> 
> 
> ERROR:  out of memory
> 
> DETAIL:  Failed on request of size 2048.
> 
> 
> 
> After I restart my application(reconnect to DB), it start working fine again.
> 
> 
> 
> It even fail on simple query like:
> 
> 
> 
>  SELECT COUNT(*) FROM "stats" WHERE "stats"."bt_id" = $1 AND (stats.created_at
> >= '2015-04-02 04:00:00.00') AND (stats.created_at <= '2015-04-03
> 03:59:59.99') AND ("stats"."source" IS NOT NULL) AND "stats"."device" IN
> (1, 2) AND "stats"."ra" = 0 AND "stats"."paid" = 't'
> 
> 
> 
> 
> Any help appreciated.

​Start here:

http://www.postgresql.org/docs/9.2/static/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW

Research

Ask more specific questions.

I suspect you not only are keeping your 100 sessions directly and
permanently connected to the database but many of them are also keeping open
transactions.

​I'd also drop any preconception about which tables you think are
problematic - since likely the tables themselves are not the issue.​

David J.​
 





[GENERAL] quick q re execute & scope of new

2015-04-02 Thread Scott Ribe
Easier to give an example than describe the question, any chance of making 
something like this work?

execute('insert into ' || tblname || ' values(new.*)');

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
https://www.linkedin.com/in/scottribe/
(303) 722-0567 voice







-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] quick q re execute & scope of new

2015-04-02 Thread Tom Lane
Scott Ribe  writes:
> Easier to give an example than describe the question, any chance of making 
> something like this work?
> execute('insert into ' || tblname || ' values(new.*)');

Not like that, for certain.  It might work to use EXECUTE ... USING new.*
or some variant of that.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] quick q re execute & scope of new

2015-04-02 Thread Adrian Klaver

On 04/02/2015 08:30 PM, Scott Ribe wrote:

Easier to give an example than describe the question, any chance of making 
something like this work?


You doing this in plpgsql trigger function I presume?



execute('insert into ' || tblname || ' values(new.*)');



So 
http://www.postgresql.org/docs/9.4/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN:


EXECUTE 'insert into ' || quote_ident(tblname) || ' values(' || new.* || ')'

--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] quick q re execute & scope of new

2015-04-02 Thread Scott Ribe
On Apr 2, 2015, at 10:10 PM, Tom Lane  wrote:
> 
> Not like that, for certain.  It might work to use EXECUTE ... USING new.*
> or some variant of that.

Couldn't get a variant of that to work, but this did:

execute('insert into ' || tblnm || ' select $1.*') using new;
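
For context, the pattern might sit in a partition-routing trigger like this
(a sketch with illustrative names - stats, created_at - not my actual schema):

CREATE OR REPLACE FUNCTION route_to_partition() RETURNS trigger AS $$
DECLARE
    tblnm text := 'stats_' || to_char(NEW.created_at, 'YYYYMM');
BEGIN
    -- $1 is bound to the NEW record; SELECT $1.* expands it column-wise
    EXECUTE 'insert into ' || quote_ident(tblnm) || ' select $1.*' USING NEW;
    RETURN NULL;  -- suppress the insert into the parent table
END;
$$ LANGUAGE plpgsql;

-- CREATE TRIGGER stats_route BEFORE INSERT ON stats
--     FOR EACH ROW EXECUTE PROCEDURE route_to_partition();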

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
https://www.linkedin.com/in/scottribe/
(303) 722-0567 voice







-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] quick q re execute & scope of new

2015-04-02 Thread Scott Ribe
On Apr 2, 2015, at 10:14 PM, Adrian Klaver  wrote:
> 
> EXECUTE 'insert into ' || quote_ident(tblname) || ' values(' || new.* || ')'

Not that easy: strings are not quoted correctly, and null values come out blank.
There might be a function to translate new.* into a string as needed for this use,
but I found another way based on Tom's suggestion:

execute('insert into ' || tblnm || ' select $1.*') using new;

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
https://www.linkedin.com/in/scottribe/
(303) 722-0567 voice







-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Error handling in C API function calls in a way that doesn't close client connection

2015-04-02 Thread Tom Lane
Igor Stassiy  writes:
> This question refers to version 9.4 of Postgres. I have a function

> Datum do_something(PG_FUNCTION_ARGS)
> {
> ...
> if(error_occured) {
> ereport(ERROR, (errmsg("some error occured")));
> }
> ...
> }

> When I call do_something in a way to deliberately cause the error
> condition, the client connection (psql) is terminated,

[ shrug... ]  You did not show us whatever is causing that problem.

Given a self-contained example, it would be possible to offer some
useful advice.  On the basis of what you've said here, though, I can
only suggest that you're looking for the problem in the wrong place.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] quick q re execute & scope of new

2015-04-02 Thread Andrew J. Kopciuch
On April 2, 2015, Scott Ribe wrote:
> On Apr 2, 2015, at 10:14 PM, Adrian Klaver  wrote:
> > EXECUTE 'insert into ' || quote_ident(tblname) || ' values(' || new.* ||
> > ')'
>
> Not that easy, strings are not quoted correctly, and null values are blank.
> Might be a function to translate new.* into a string as needed for this
> use, but I found another way based on Tom's suggestion:
>
> execute('insert into ' || tblnm || ' select $1.*') using new;
>

I've done similar in triggers for partition schemes, something like this :

EXECUTE 'INSERT INTO ' || partitionName || ' (SELECT (masterTableName ' || quote_literal(NEW) || ').*)';

I can't remember the reference I found online that helped me get there though.
The key is doing quote_literal on the "NEW", and casting it to a compatible 
type.
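
To make that concrete, with hypothetical names - stats as the parent table
(and row type), stats_201504 as the partition - the string the EXECUTE
assembles for a row (42, 'abc') would be:

    INSERT INTO stats_201504 (SELECT (stats '(42,abc)').*);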

HTH,


Andy




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] The case of PostgreSQL on NFS Server (II)

2015-04-02 Thread Octavi Fors
Hi David, John et al.,

as far as I understand eSATA is not an option for me. First because I
>> already have bought the DS2415+ NAS, which does not has eSATA I/O :(
>> And second, because I need the database to be accessible from two
>> computers in the same LAN.
>>
>
> ​This is new - and while the desire for using your multi-terrabyte device
> may ​make the risk of running over NFS potentially worthwhile there is no
> reasonable way to have two running databases share data files.  You can
> share the NAS device and have each PostgreSQL instance consume an unshared
> portion of its resources.
>
> ​You appear to either be "mis-speaking" or omitting potentially critically
> important details here...
>

Oops, sorry, yes I think I may have "mis-spoken" when explaining my second reason
for not choosing eSATA.
My situation is the following:

-Two computers (C1 & C2) and NAS (with no eSATA I/O) on the same LAN.
-C1 acquires images from a telescope and periodically stores them via NFS
in the NAS (no database involved here, just in the ext4 filesystem).
-C2 is a 12 xeon core-class server designed to analyze the stored images in
the NAS, and compute astrometry & photometry measurements (catalogs & light
curves) for every star & image. These measurements are inserted in the
catalogs database inside the NAS.

Therefore there's only *one* computer (C2) which will run the postgresql server
with the tablespace onNAS.

So does this approach sound like feasible if the NFS parameters are set
properly?

I have a small NAS, but my main 2Tb disk is eSATA. My fstab entry for
> the NAS just looks like:
> smartstor1:/VOLUME1/DV7T /SmartStor1/Volume1/DV7T nfs
> user,rw,noauto,noexec,nosuid 0 0
> which is basically what you have, performance wise, as best as I can tell.
>

I saw a message in this list suggesting the use of the NFS parameters 'sync' (for
synchronizing changes to a file) and 'noac' (for no caching).

Could you confirm that
nas_ip:/volume1/data /home/ofors/Documents/nas nfs noac,sync
would be good options for /etc/fstab?

Any additional NFS parameter?
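
i.e., something like this? (A sketch only - 'hard' is another option I have
seen suggested, so the client retries forever instead of surfacing I/O errors;
I understand sync+noac will cost some performance.)

nas_ip:/volume1/data /home/ofors/Documents/nas nfs hard,sync,noac 0 0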


Note, the most important aspect of that fact is that your WAL gets written
> to your data directory and not to the tablespace on which the database
> tables reside. (i.e. WAL does not make it to NAS - unless you setup wal
> shipping).
>

This is the first time I've heard about the importance of WAL and NFS.
I googled some and found a discussion about the topic.
Any ideas on how to include the options they mention (archive_mode?) on the
NAS or in /etc/fstab?

Thanks a lot in advance,
-- 
Octavi Fors


On Thu, Apr 2, 2015 at 8:47 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Thu, Apr 2, 2015 at 5:09 PM, Octavi Fors  wrote:
>
>> Thanks John for your extensive and helpful response.
>>
>> I have a NAS box. But I would worry about responsiveness. What is
>>> better, IMO, is an external SATA connected DAS box. DAS is "Direct
>>> Attached Storage". Many PCs have a eSATA port on the back side.
>>>
>>
>> as far as I understand eSATA is not an option for me. First because I
>> already have bought the DS2415+ NAS, which does not has eSATA I/O :(
>> And second, because I need the database to be accessible from two
>> computers in the same LAN.
>>
>>
> ​This is new - and while the desire for using your multi-terrabyte device
> may ​make the risk of running over NFS potentially worthwhile there is no
> reasonable way to have two running databases share data files.  You can
> share the NAS device and have each PostgreSQL instance consume an unshared
> portion of its resources.
>
> ​You appear to either be "mis-speaking" or omitting potentially critically
> important details here...
>
> 1) could you confirm that I don't have to mount --bind
>> /var/lib/postgresql/9.2/main ?
>>
>>
> ​/var/lib/... is not on the NAS but, likely, on whatever your primary
> internal hard drive is.  Note, the most important aspect of that fact is
> that your WAL gets written to your data directory and not to the tablespace
> on which the database tables reside. (i.e. WAL does not make it to NAS -
> unless you setup wal shipping).
>
> 2) on my /etc/fstab here is the current line for my NAS:
>> nas_ip:/volume1/data /home/ofors/Documents/nas nfs
>>
>> Which NFS client and server (on NAS side) options/arguments do you
>> suggest for optimizing performance? Or in other words, for minimizing
>> database corruption in case of NAS (note that NAS drives are in RAID6) or
>> computer failure?
>>
>>
> ​I am a little out of my league here but the main risk is that incomplete
> data is sent/received by/from the NAS.  Once the data is in the NAS it is
> really no different than any other storage medium in terms of durability
> concerns. I do not really know how checkpoints and transient failed reads
> interact with PostgreSQL