Re: [GENERAL] triggers and FK cascades

2011-03-18 Thread Grzegorz Jaśkiewicz
There's a generic trigger that sends a signal to a process whenever
changes are made (via the listen/notify mechanism), but when an FK cascade
fires it causes a mass of notifies to be sent out, which I
want to avoid.
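One possible approach, sketched roughly (table and column names here are assumptions, not from the thread): in B's AFTER DELETE row trigger, check whether the referenced row in A still exists. During an ON DELETE CASCADE fired by a delete on A, the parent row is already gone by the time B's trigger runs, while a direct delete on B leaves it in place.

```sql
-- Sketch only: assumes B(a_id) REFERENCES A(id) ON DELETE CASCADE.
CREATE OR REPLACE FUNCTION b_after_delete() RETURNS trigger AS $$
BEGIN
    -- Parent row still present: this was a direct delete on B, so notify.
    IF EXISTS (SELECT 1 FROM A WHERE id = OLD.a_id) THEN
        NOTIFY b_changed;
    END IF;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER b_after_delete AFTER DELETE ON B
    FOR EACH ROW EXECUTE PROCEDURE b_after_delete();
```

(On PostgreSQL 9.2 and later, pg_trigger_depth() offers a more direct test, but it did not exist at the time of this thread.)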


2011/3/18 David Johnston :
> Don't know if this would work but could you check to see if the corresponding 
> PK exists on A?
>
> It may also help to explain why you would want to do such a thing so that 
> someone may be able to provide an alternative solution as opposed to simply 
> responding to a generic feature question.
>
> David J.
>
> -Original Message-
> From: pgsql-general-ow...@postgresql.org 
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Grzegorz Jaskiewicz
> Sent: Thursday, March 17, 2011 6:41 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] triggers and FK cascades
>
> Considering the following example.
> Tables A and B.
> Table A contains some data.
> Table B refers to table A using an FK with 'on delete cascade'. Table B has a 
> trigger on it (after delete, per row).
>
> Now, is there any way I can tell in the trigger on table B that it has been 
> called from a direct delete on that table, as opposed to the indirect (FK) 
> delete on table A?
>
> Trigger is PLpg/SQL or C function.
>
>
> --
> GJ
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make 
> changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
>



-- 
GJ



Re: [GENERAL] Convert Simple Query into tsvector & tsquery format.

2011-03-18 Thread Ivan Sergio Borgonovo
On Fri, 18 Mar 2011 12:30:50 +0530
Adarsh Sharma  wrote:

> Dear all,
> 
> I have a simple query mentioned below :
> 
> select  count(*)  from page_content where (content like
> '%Militant%' OR content like '%jihad%' OR  content like
> '%Mujahid%'  OR content like '%fedayeen%' OR content like
> '%insurgent%'  OR content like '%terrorist%' OR
>   content like '%cadre%'  OR content like '%civilians%' OR content
> like '%police%' OR content like '%defence%' OR content like
> '%cops%' OR content like '%crpf%' OR content like '%dsf%' OR
> content like '%ssb%') AND (content like '%kill%' OR content like
> '%injure%');

It's not clear where the problem is.

select to_tsquery('english', '(yellow | blue) & (red | black)');
 to_tsquery
---------------------------------------------
 ( 'yellow' | 'blue' ) & ( 'red' | 'black' )

select to_tsvector('english', 'yellow red') @@ to_tsquery('english',
'(yellow | blue) & (red | black)');

The only thing I can see that could cause problems is that you may have
pre-"mangled" the words in the LIKE query, whereas you'd normally leave
that task to the text-search engine, which finds the proper lexeme.

Could you be more specific about the problem you're encountering?
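As a rough sketch (untested, with the word list abbreviated and the index name illustrative), the original LIKE query could translate to something like:

```sql
SELECT count(*)
FROM page_content
WHERE to_tsvector('english', content)
      @@ to_tsquery('english',
           '(militant | jihad | insurgent | terrorist) & (kill | injure)');

-- A matching expression index lets this use an index scan
-- instead of reading every row:
CREATE INDEX page_content_fts_idx
    ON page_content USING gin (to_tsvector('english', content));
```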

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] Primary key vs unique index

2011-03-18 Thread Scott Marlowe
Autovacuum is generally more effective, as it can run when it needs to
rather than having to wait until the end of the day.  If you delete big
chunks several times a day, autovacuum can keep up.  Also, it's enabled
by default in 8.4 and up, so the end user would have to actively turn it
off in this instance.

On Fri, Mar 18, 2011 at 6:21 AM, Voils, Steven M  wrote:
> Our application supports pg 8.4 onwards.  The vacuums are run automatically 
> by the application rather than requiring end users to enable autovacuum; the 
> vacuums being performed are regular.
>
> -Original Message-
> From: Scott Marlowe [mailto:scott.marl...@gmail.com]
> Sent: Thursday, March 17, 2011 6:31 PM
> To: Voils, Steven M
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Primary key vs unique index
>
> On Thu, Mar 17, 2011 at 12:59 PM, Voils, Steven M
>  wrote:
>> Thanks for the reply.  I should have mentioned in the first post that we do 
>> delete significant amounts of the table which I thought was the cause of the 
>> bloat.  We are already performing automatic vacuums nightly.
>
> Automatic regular vacuums?  So you do or don't have autovac turned on?
>
> What version of pg are you running (8.3 or before, 8.4 or later?)
>
> Are your nightly vacuums FULL or regular vacuums?
>



-- 
To understand recursion, one must first understand recursion.



[GENERAL] error messages during restore

2011-03-18 Thread Geoffrey Myers
So we are in the process of converting our databases from SQL_ASCII to 
UTF8.  If a particular row won't import because of the encoding issue we 
get an error like:


pg_restore: [archiver (db)] Error from TOC entry 5317; 0 1266711 TABLE 
DATA logs postgres
pg_restore: [archiver (db)] COPY failed: ERROR:  invalid byte sequence 
for encoding "UTF8": 0x90
HINT:  This error can also happen if the byte sequence does not match 
the encoding expected by the server, which is controlled by 
"client_encoding".

CONTEXT:  COPY logs, line 590744

So as far as I can tell, this identifies the table by name (logs in this 
case) and then identifies the actual record by line.


Question is, it would be really nice if we could figure out the actual 
column name in that table.  I do get the line number that produced the 
error, but the fact that this is a binary dump makes it difficult to 
view that line.


Is there a way to view that data line without converting this dump to a 
text dump?


All I'd like to do is know which column in the table caused the problem 
so I could apply my fix to that particular column.
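One hedged approach (a sketch, not a tested recipe): restore the table into a scratch SQL_ASCII database, then probe each text column with a helper that re-validates the bytes as UTF8. The function name and the column name below are illustrative, not from the thread:

```sql
-- Returns false for values whose raw bytes are not valid UTF8.
CREATE OR REPLACE FUNCTION is_valid_utf8(t text) RETURNS boolean AS $$
BEGIN
    PERFORM convert_from(convert_to(t, 'SQL_ASCII'), 'UTF8');
    RETURN true;
EXCEPTION WHEN untranslatable_character OR character_not_in_repertoire THEN
    RETURN false;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

-- Run once per text column to find the offending column and rows:
SELECT ctid FROM logs WHERE NOT is_valid_utf8(some_text_column);
```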


--
Until later, Geoffrey

"I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of them."
- Thomas Jefferson



[GENERAL] SOCK_wait_for_ready function call caused a query to get stuck

2011-03-18 Thread tamanna madaan
Hi All
 
 I am using postgres-8.4.0, with the psqlodbc-08.03.0400-1 and 
unixODBC-2.2.14-000.01 driver to connect to the database. One of the queries 
executed from my application got stuck for an indefinite amount of time, 
causing my application to hang, so I cored the application. The core file 
gives a backtrace which shows it got stuck while waiting for a socket to 
become ready, as follows:
 
(gdb) bt
#0  0x7f1c3e5ed366 in poll () from /lib64/libc.so.6
#1  0x7f1c3a82d0a5 in SOCK_wait_for_ready (sock=0x7f1be67ff720, output=0, 
retry_count=1) at socket.c:531
#2  0x7f1c3a82d8c3 in SOCK_get_next_byte (self=0x7f1be67ff720, peek=0) at 
socket.c:940
#3  0x7f1c3a82db92 in SOCK_get_id (self=0x7f1be67ff720) at socket.c:696
#4  0x7f1c3a8090ca in CC_send_query_append (self=0x7f1be68bf430, 
query=<value optimized out>, qi=0x0, flag=<value optimized out>, stmt=0x0,
appendq=<value optimized out>) at connection.c:2498
#5  0x7f1c3a818ae5 in PGAPI_Transact (henv=0x0, hdbc=0x7f1be68bf430, 
fType=0) at execute.c:1143
#6  0x7f1c3a8424ec in SQLEndTran (HandleType=<value optimized out>, 
Handle=0x7f1be68bf430, CompletionType=-1) at odbcapi30.c:178
#7  0x7f1c3f62fa2b in SQLEndTran (handle_type=<value optimized out>, 
handle=0x7f1beff16b90, completion_type=0) at SQLEndTran.c:360

One other thread of the same process was also stuck :
 
(gdb) bt
#0  0x7f1c3e5ed366 in poll () from /lib64/libc.so.6
#1  0x7f1c3a82d0a5 in SOCK_wait_for_ready (sock=0x2bcde60, output=0, 
retry_count=1) at socket.c:531
#2  0x7f1c3a82d8c3 in SOCK_get_next_byte (self=0x2bcde60, peek=0) at 
socket.c:940
#3  0x7f1c3a82db92 in SOCK_get_id (self=0x2bcde60) at socket.c:696
#4  0x7f1c3a8090ca in CC_send_query_append (self=0x2bd13a0, query=<value 
optimized out>, qi=0x0, flag=<value optimized out>, stmt=0x7f1bf766c380,
appendq=<value optimized out>) at connection.c:2498
#5  0x7f1c3a836c94 in SC_execute (self=0x7f1bf766c380) at statement.c:1879
#6  0x7f1c3a81907e in Exec_with_parameters_resolved (stmt=0x7f1bf766c380, 
exec_end=0x7f1c2c59e4c0) at execute.c:386
#7  0x7f1c3a81a600 in PGAPI_Execute (hstmt=0x7f1bf766c380, flag=<value 
optimized out>) at execute.c:1070
#8  0x7f1c3a83fd82 in SQLExecute (StatementHandle=0x7f1bf766c380) at 
odbcapi.c:374
#9  0x7f1c3f630c77 in SQLExecute (statement_handle=0x7f1be4b069e0) at 
SQLExecute.c:283

I had the same issue while using postgres-8.1.2 and was advised to upgrade 
postgres, but upgrading the postgres version didn't resolve the issue.
There doesn't seem to be any locking issue either.

Can anyone please shed some light on this issue?

Thanks...
Tamanna


Re: [GENERAL] Primary key vs unique index

2011-03-18 Thread Voils, Steven M
Our application supports pg 8.4 onwards.  The vacuums are run automatically by 
the application rather than requiring end users to enable autovacuum; the 
vacuums being performed are regular.

-Original Message-
From: Scott Marlowe [mailto:scott.marl...@gmail.com] 
Sent: Thursday, March 17, 2011 6:31 PM
To: Voils, Steven M
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Primary key vs unique index

On Thu, Mar 17, 2011 at 12:59 PM, Voils, Steven M
 wrote:
> Thanks for the reply.  I should have mentioned in the first post that we do 
> delete significant amounts of the table which I thought was the cause of the 
> bloat.  We are already performing automatic vacuums nightly.

Automatic regular vacuums?  So you do or don't have autovac turned on?

What version of pg are you running (8.3 or before, 8.4 or later?)

Are your nightly vacuums FULL or regular vacuums?



Re: [GENERAL] Primary key vs unique index

2011-03-18 Thread Voils, Steven M
What are the general guidelines under which autovacuum will trigger?  I was 
unaware it was turned on by default for the newer versions.  Would it be 
worthwhile to leave the manual vacuuming on?  Currently it runs immediately 
after large sections of the tables are deleted.  Or would it be expected that 
autovac would pick these changes up and run anyway?  
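For reference (defaults as documented for the 8.4 era), autovacuum vacuums a table once its dead tuples exceed a base threshold plus a fraction of the table size, and both knobs can be overridden per table. The table name below is illustrative:

```sql
-- Autovacuum fires roughly when:
--   dead tuples > autovacuum_vacuum_threshold
--                 + autovacuum_vacuum_scale_factor * reltuples
-- (defaults: 50 tuples + 20% of the table)

-- Per-table overrides (8.4+) for tables that see large periodic deletes:
ALTER TABLE my_big_table SET (
    autovacuum_vacuum_scale_factor = 0.05,
    autovacuum_vacuum_threshold    = 1000
);
```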

-Original Message-
From: Scott Marlowe [mailto:scott.marl...@gmail.com] 
Sent: Friday, March 18, 2011 8:36 AM
To: Voils, Steven M
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Primary key vs unique index

Autovacuum is generally more effective, as it can run when it needs to
rather than having to wait until the end of the day.  If you delete big
chunks several times a day, autovacuum can keep up.  Also, it's enabled
by default in 8.4 and up, so the end user would have to actively turn it
off in this instance.

On Fri, Mar 18, 2011 at 6:21 AM, Voils, Steven M  wrote:
> Our application supports pg 8.4 onwards.  The vacuums are run automatically 
> by the application rather than requiring end users to enable autovacuum; the 
> vacuums being performed are regular.
>
> -Original Message-
> From: Scott Marlowe [mailto:scott.marl...@gmail.com]
> Sent: Thursday, March 17, 2011 6:31 PM
> To: Voils, Steven M
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Primary key vs unique index
>
> On Thu, Mar 17, 2011 at 12:59 PM, Voils, Steven M
>  wrote:
>> Thanks for the reply.  I should have mentioned in the first post that we do 
>> delete significant amounts of the table which I thought was the cause of the 
>> bloat.  We are already performing automatic vacuums nightly.
>
> Automatic regular vacuums?  So you do or don't have autovac turned on?
>
> What version of pg are you running (8.3 or before, 8.4 or later?)
>
> Are your nightly vacuums FULL or regular vacuums?
>



-- 
To understand recursion, one must first understand recursion.



[GENERAL] Re: [GENERAL] Re: [GENERAL] Re: [GENERAL] Different encoding for string values and identifier strings? Or (select 'tést' as tést) returns different values for string and identifier...

2011-03-18 Thread Francisco Figueiredo Jr.
Oh, sorry for that.

My client code is Npgsql. I pulled those bytes from a debugging session
directly from the network stream. I wanted to know what bytes npgsql was
receiving.

This is the method which reads the data:

public static String ReadString(Stream network_stream)
{
    NpgsqlEventLog.LogMethodEnter(LogLevel.Debug, CLASSNAME, "ReadString");

    List<byte> buffer = new List<byte>();
    for (int bRead = network_stream.ReadByte(); bRead != 0; bRead = network_stream.ReadByte())
    {
        if (bRead == -1)
        {
            throw new IOException();
        }
        else
        {
            buffer.Add((byte) bRead);
        }
    }

    if (NpgsqlEventLog.Level >= LogLevel.Debug)
        NpgsqlEventLog.LogMsg(resman, "Log_StringRead",
            LogLevel.Debug, ENCODING_UTF8.GetString(buffer.ToArray()));

    return ENCODING_UTF8.GetString(buffer.ToArray());
}

My database has its encoding set to UTF-8, although my lc_collate is
pt_BR.UTF-8. Might this lc setting have caused some trouble?

I also have problems with the psql client, where the char doesn't appear at
all. Andreas could see the char, though...
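A quick sanity check here (standard commands, not from the thread) is to compare the server, client, and per-database encodings; the datcollate column exists from 8.4 on:

```sql
SHOW server_encoding;
SHOW client_encoding;
SELECT datname, pg_encoding_to_char(encoding) AS encoding, datcollate
FROM pg_database;
```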

I hope it helps.

Thanks in advance.
--
Sent from my Android phone

Francisco Figueiredo Jr.
Npgsql lead developer
fxjr.blogspot.com
twitter.com/franciscojunior
On 18/03/2011 01:29, "Tom Lane" wrote:


Re: [GENERAL] SOCK_wait_for_ready function call caused a query to get stuck

2011-03-18 Thread Merlin Moncure
On Fri, Mar 18, 2011 at 4:31 AM, tamanna madaan
 wrote:
> Hi All
>
>  I am using postgres-8.4.0 and psqlodbc-08.03.0400-1 and
> unixODBC-2.2.14-000.01 driver to connect
> to the database. One of the queries I executed from my application have got
> stuck for an
> indefinite amount of time causing my application to hang. So I cored the
> application. The
> core file gives a backtrace which shows it got stuck while waiting for a
> socket to get
> ready as follows :
>
> (gdb) bt
> #0  0x7f1c3e5ed366 in poll () from /lib64/libc.so.6
> #1  0x7f1c3a82d0a5 in SOCK_wait_for_ready (sock=0x7f1be67ff720,
> output=0, retry_count=1) at socket.c:531
> #2  0x7f1c3a82d8c3 in SOCK_get_next_byte (self=0x7f1be67ff720, peek=0)
> at socket.c:940
> #3  0x7f1c3a82db92 in SOCK_get_id (self=0x7f1be67ff720) at socket.c:696
> #4  0x7f1c3a8090ca in CC_send_query_append (self=0x7f1be68bf430,
> query=<value optimized out>, qi=0x0, flag=<value optimized out>, stmt=0x0,
>     appendq=<value optimized out>) at connection.c:2498
> #5  0x7f1c3a818ae5 in PGAPI_Transact (henv=0x0, hdbc=0x7f1be68bf430,
> fType=0) at execute.c:1143
> #6  0x7f1c3a8424ec in SQLEndTran (HandleType=<value optimized out>,
> Handle=0x7f1be68bf430, CompletionType=-1) at odbcapi30.c:178
> #7  0x7f1c3f62fa2b in SQLEndTran (handle_type=<value optimized out>,
> handle=0x7f1beff16b90, completion_type=0) at SQLEndTran.c:360
> One other thread of the same process was also stuck :
>
> (gdb) bt
> #0  0x7f1c3e5ed366 in poll () from /lib64/libc.so.6
> #1  0x7f1c3a82d0a5 in SOCK_wait_for_ready (sock=0x2bcde60, output=0,
> retry_count=1) at socket.c:531
> #2  0x7f1c3a82d8c3 in SOCK_get_next_byte (self=0x2bcde60, peek=0) at
> socket.c:940
> #3  0x7f1c3a82db92 in SOCK_get_id (self=0x2bcde60) at socket.c:696
> #4  0x7f1c3a8090ca in CC_send_query_append (self=0x2bd13a0, query=<value
> optimized out>, qi=0x0, flag=<value optimized out>, stmt=0x7f1bf766c380,
>     appendq=<value optimized out>) at connection.c:2498
> #5  0x7f1c3a836c94 in SC_execute (self=0x7f1bf766c380) at
> statement.c:1879
> #6  0x7f1c3a81907e in Exec_with_parameters_resolved
> (stmt=0x7f1bf766c380, exec_end=0x7f1c2c59e4c0) at execute.c:386
> #7  0x7f1c3a81a600 in PGAPI_Execute (hstmt=0x7f1bf766c380, flag=<value
> optimized out>) at execute.c:1070
> #8  0x7f1c3a83fd82 in SQLExecute (StatementHandle=0x7f1bf766c380) at
> odbcapi.c:374
> #9  0x7f1c3f630c77 in SQLExecute (statement_handle=0x7f1be4b069e0) at
> SQLExecute.c:283
> I had the same issue while using postgres-8.1.2 and was advised to upgrade
> postgres.
> But upgrading the postgres version didn't resolve the issue  .
> There doesn't seem to be any locking issue either .
>
> Can anyone please shed some light on this issue .

In a separate session with psql, can you query pg_stat_activity and look
for waiting queries? (We need to eliminate a lock issue.)  Are you opening
multiple sessions through the ODBC driver? Is your application
multi-threaded?
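A sketch of that check, using the pre-9.2 column names (procpid, waiting, current_query):

```sql
-- Queries blocked on a lock:
SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
WHERE waiting;

-- Sessions sitting idle inside an open transaction:
SELECT procpid, usename, query_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction';
```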

merlin



Re: [GENERAL] query taking much longer since Postgres 8.4 upgrade

2011-03-18 Thread Davenport, Julie
This helped; the entire script is now down from 14.9 min to 10.9 min.
Thanks.


On 17.3.2011 19:29, Davenport, Julie wrote:
 I have not yet had time to try Tomas' suggestion of bumping up the 
 work_mem first (trying to figure out how to do that from within a 
 coldfusion script).  Many thanks for all your help guys!

>>Well, just execute this 'SQL query' just like the other ones

>>set work_mem='8MB'

>>and it will increase the amount of memory for that connection.

>>Tomas
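For example (the value is per connection, so it can be checked and reverted in the same session):

```sql
SET work_mem = '8MB';  -- affects only this connection
SHOW work_mem;         -- confirm the new value
RESET work_mem;        -- revert to the server default
```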




[GENERAL] TO_CHAR(timestamptz,datetimeformat) wrong after DST change

2011-03-18 Thread jonathansfl
My TO_CHAR function is now an hour off thanks to Daylight Saving Time.
The dates are correct (I'm in EST: TZ=-04) but my function now returns TIME
an hour early
(prior to DST we were TZ=-05).

TIMESTAMPTZ data (v_dt): 2011-03-17 18:21:50-04
FUNCTION SNIPPET: to_char(v_dt, 'mm/dd/yyyy hh:mi AM')
FUNCTION RETURNS: 03/18/2011 09:21 AM
FUNCTION SHOULD RETURN: 03/18/2011 10:21 AM

postgres=# show time zone;
 TimeZone
------------------
 America/New_York
(1 row)



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/TO-CHAR-timestamptz-datetimeformat-wrong-after-DST-change-tp3966319p3966319.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.



[GENERAL] Re: why does extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' at time zone 'utc' ) == 16?

2011-03-18 Thread bubba postgres
I found a workaround... not sure why this is the behavior:
select extract ( HOUR FROM (TIMESTAMP '2010-01-01 00:00:00' at time zone
'utc' ) at time zone 'utc' ) gives what I expect would be the correct
answer
BUT..
select extract ( EPOCH FROM (TIMESTAMP '2010-01-01 00:00:00' at time zone
'utc' ) at time zone 'utc' ) does not...

Can anyone explain this?



On Thu, Mar 17, 2011 at 5:05 PM, bubba postgres wrote:

> Is this the correct behavior? It seems like if I specify the utc offset it
> should be 0, not 16.. It seems to be the opposite behavior from extract
> epoch.
>
> select extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' ) as defhour,
> extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' at time zone 'PST' ) as
> psthour, extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' at time zone
> 'utc' ) as utchour, extract ( epoch FROM TIMESTAMP '2010-01-01 00:00:00' at
> time zone 'utc' ) as utcepoch;
>
> 0,0,16,1262304000
>
>
>
>
> @Test
> public void testFoo() {
> TimeZone  tz  = TimeZone.getTimeZone("GMT");
> GregorianCalendar cal = new GregorianCalendar(tz);
> cal.set(2010,0,1,0,0,0);
> cal.set(GregorianCalendar.MILLISECOND, 0 );
> System.out.println("" + cal.getTimeInMillis() );
> System.out.println("" + String.format( "%1$tY-%1$tm-%1$td
> %1$tH:%1$tM:%1$tS.%1$tL", cal ) );
> System.out.println("" + cal.get(GregorianCalendar.HOUR_OF_DAY ) );
> }
>
> In Java:
> 126230400
> 2010-01-01 00:00:00.000 (UTC)
> 0
>


Re: [GENERAL] why does extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' at time zone 'utc' ) == 16?

2011-03-18 Thread Steve Crawford

On 03/17/2011 05:05 PM, bubba postgres wrote:
Is this the correct behavior? It seems like if I specify the utc 
offset it should be 0, not 16.. It seems to be the opposite behavior 
from extract epoch.


select extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' ) as 
defhour, extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' at time 
zone 'PST' ) as psthour, extract ( HOUR FROM TIMESTAMP '2010-01-01 
00:00:00' at time zone 'utc' ) as utchour, extract ( epoch FROM 
TIMESTAMP '2010-01-01 00:00:00' at time zone 'utc' ) as utcepoch;


0,0,16,1262304000


Looks reasonable to me. It appears that you are in the US/Pacific time 
zone. Per the docs, "Conversions between timestamp without time zone and 
timestamp with time zone normally assume that the timestamp without time 
zone value should be taken or given as timezone local time. A different 
time zone can be specified for the conversion using AT TIME ZONE. "


Eliminating the "extract" from your statements so you can see the actual 
values:


select
TIMESTAMP '2010-01-01 00:00:00'  as defhour,
TIMESTAMP '2010-01-01 00:00:00' at time zone 'PST' as psthour,
TIMESTAMP '2010-01-01 00:00:00' at time zone 'utc' as utchour,
extract ( epoch FROM TIMESTAMP '2010-01-01 00:00:00' at time zone 'utc' 
) as utcepoch;


defhour  | 2010-01-01 00:00:00
psthour  | 2010-01-01 00:00:00-08
utchour  | 2009-12-31 16:00:00-08
utcepoch | 1262304000

So if your timezone is set to PST8PDT, TIMESTAMP '2010-01-01 00:00:00' 
means that you are displaying that time, in local Pacific time, and not 
including any timezone information.


The second one is taking that same time, assumed to be your local 
Pacific time, and specifying a time zone (in this case the same as your 
default time) thus causing the result to be a type timestamptz (note the 
-08).


The third case specifies that the time given is in UTC - basically 
'2010-01-01 00:00:00-00'. Again the result is a timestamptz. And that 
point in time, from the PST8PDT perspective is 2009-12-31 16:00:00-08.


select timestamptz '2010-01-01 00:00:00-00';
      timestamptz
------------------------
 2009-12-31 16:00:00-08

However, if you take a timestamp *with* time zone and specifically ask 
for it to be displayed in a different (or same) time zone the result 
with be a timestamp *without* time zone with the timestamp being correct 
for the zone you requested.


BTW, utcepoch doesn't really make sense - the epoch is identical 
regardless of time zone since it is the number of seconds since January 
1 1970 00:00:00 UTC which is the same everywhere.
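A quick illustration of that point, reusing the epoch value from above:

```sql
SET timezone TO 'UTC';
SELECT extract(epoch FROM timestamptz '2010-01-01 00:00:00+00');  -- 1262304000

SET timezone TO 'US/Pacific';
SELECT extract(epoch FROM timestamptz '2010-01-01 00:00:00+00');  -- still 1262304000
```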


So you need to be *very* careful in understanding the actual date you 
are specifying and the way you are displaying it.


You may want to stick with timestamptz as your data type but beware of 
another foot-gun. Here's your original query changing to timestamptz:


select
TIMESTAMPTZ '2010-01-01 00:00:00'  as defhour,
TIMESTAMPTZ '2010-01-01 00:00:00' at time zone 'PST' as psthour,
TIMESTAMPTZ '2010-01-01 00:00:00' at time zone 'utc' as utchour,
extract ( epoch FROM TIMESTAMP '2010-01-01 00:00:00' at time zone 'utc' 
) as utcepoch;

-[ RECORD 1 ]
defhour  | 2010-01-01 00:00:00-08
psthour  | 2010-01-01 00:00:00
utchour  | 2010-01-01 08:00:00
utcepoch | 1262304000

Still probably not what you wanted since timestamptz '2010-01-01 
00:00:00' is assumed to include the '-08' offset but the timestamptz 
gets converted to timestamp with the 'at time zone'.


You probably want to use timestamptz as the data type and include the 
zone in the source data:

select
TIMESTAMPTZ '2010-01-01 00:00:00'  as defhour,
TIMESTAMPTZ '2010-01-01 00:00:00 PST8PDT' as psthour,
TIMESTAMPTZ '2010-01-01 00:00:00 UTC' as utchour;

defhour | 2010-01-01 00:00:00-08
psthour | 2010-01-01 00:00:00-08
utchour | 2009-12-31 16:00:00-08

Then you can convert whatever fully qualified time you specify into 
whatever time zone you want using "at time zone" or by using 'set 
timezone to ..."


Cheers,
Steve







@Test
public void testFoo() {
TimeZone  tz  = TimeZone.getTimeZone("GMT");
GregorianCalendar cal = new GregorianCalendar(tz);
cal.set(2010,0,1,0,0,0);
cal.set(GregorianCalendar.MILLISECOND, 0 );
System.out.println("" + cal.getTimeInMillis() );
System.out.println("" + String.format( "%1$tY-%1$tm-%1$td 
%1$tH:%1$tM:%1$tS.%1$tL", cal ) );

System.out.println("" + cal.get(GregorianCalendar.HOUR_OF_DAY ) );
}

In Java:
126230400
2010-01-01 00:00:00.000 (UTC)
0





Re: [GENERAL] SOCK_wait_for_ready function call caused a query to get stuck

2011-03-18 Thread tamanna madaan
Thanks for your reply, Merlin.

If I am getting you correctly, you mean that I should check for waiting 
queries in the pg_stat_activity table while my application is hung at the 
SOCK_wait_for_ready function call. Right?

Yes, I am opening multiple sessions with the ODBC driver, and my 
application is multi-threaded.

Tamanna



From: Merlin Moncure [mailto:mmonc...@gmail.com]
Sent: Fri 3/18/2011 8:56 PM
To: tamanna madaan
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] SOCK_wait_for_ready function call caused a query to get 
stuck



On Fri, Mar 18, 2011 at 4:31 AM, tamanna madaan
 wrote:
> Hi All
>
>  I am using postgres-8.4.0 and psqlodbc-08.03.0400-1 and
> unixODBC-2.2.14-000.01 driver to connect
> to the database. One of the queries I executed from my application have got
> stuck for an
> indefinite amount of time causing my application to hang. So I cored the
> application. The
> core file gives a backtrace which shows it got stuck while waiting for a
> socket to get
> ready as follows :
>
> (gdb) bt
> #0  0x7f1c3e5ed366 in poll () from /lib64/libc.so.6
> #1  0x7f1c3a82d0a5 in SOCK_wait_for_ready (sock=0x7f1be67ff720,
> output=0, retry_count=1) at socket.c:531
> #2  0x7f1c3a82d8c3 in SOCK_get_next_byte (self=0x7f1be67ff720, peek=0)
> at socket.c:940
> #3  0x7f1c3a82db92 in SOCK_get_id (self=0x7f1be67ff720) at socket.c:696
> #4  0x7f1c3a8090ca in CC_send_query_append (self=0x7f1be68bf430,
> query=<value optimized out>, qi=0x0, flag=<value optimized out>, stmt=0x0,
> appendq=<value optimized out>) at connection.c:2498
> #5  0x7f1c3a818ae5 in PGAPI_Transact (henv=0x0, hdbc=0x7f1be68bf430,
> fType=0) at execute.c:1143
> #6  0x7f1c3a8424ec in SQLEndTran (HandleType=<value optimized out>,
> Handle=0x7f1be68bf430, CompletionType=-1) at odbcapi30.c:178
> #7  0x7f1c3f62fa2b in SQLEndTran (handle_type=<value optimized out>,
> handle=0x7f1beff16b90, completion_type=0) at SQLEndTran.c:360
> One other thread of the same process was also stuck :
>
> (gdb) bt
> #0  0x7f1c3e5ed366 in poll () from /lib64/libc.so.6
> #1  0x7f1c3a82d0a5 in SOCK_wait_for_ready (sock=0x2bcde60, output=0,
> retry_count=1) at socket.c:531
> #2  0x7f1c3a82d8c3 in SOCK_get_next_byte (self=0x2bcde60, peek=0) at
> socket.c:940
> #3  0x7f1c3a82db92 in SOCK_get_id (self=0x2bcde60) at socket.c:696
> #4  0x7f1c3a8090ca in CC_send_query_append (self=0x2bd13a0, query=<value 
> optimized out>, qi=0x0, flag=<value optimized out>, stmt=0x7f1bf766c380,
> appendq=<value optimized out>) at connection.c:2498
> #5  0x7f1c3a836c94 in SC_execute (self=0x7f1bf766c380) at
> statement.c:1879
> #6  0x7f1c3a81907e in Exec_with_parameters_resolved
> (stmt=0x7f1bf766c380, exec_end=0x7f1c2c59e4c0) at execute.c:386
> #7  0x7f1c3a81a600 in PGAPI_Execute (hstmt=0x7f1bf766c380, flag=<value 
> optimized out>) at execute.c:1070
> #8  0x7f1c3a83fd82 in SQLExecute (StatementHandle=0x7f1bf766c380) at
> odbcapi.c:374
> #9  0x7f1c3f630c77 in SQLExecute (statement_handle=0x7f1be4b069e0) at
> SQLExecute.c:283
> I had the same issue while using postgres-8.1.2 and was advised to upgrade
> postgres.
> But upgrading the postgres version didn't resolve the issue  .
> There doesn't seem to be any locking issue either .
>
> Can anyone please shed some light on this issue .

In a separate session with psql, can you query pg_stat_activity and look
for waiting queries? (We need to eliminate a lock issue.)  Are you opening
multiple sessions through the ODBC driver? Is your application
multi-threaded?

merlin




Re: [GENERAL] TO_CHAR(timestamptz,datetimeformat) wrong after DST change

2011-03-18 Thread Steve Crawford

On 03/18/2011 07:59 AM, jonathansfl wrote:

My TO_CHAR function is now an hour off thanks to Daylight Savings Time.
The dates are correct (I'm in EST: TZ=-04) but my function now returns TIME
an hour early.
(prior to DST we were TZ=-05).

TIMESTAMPTZ data (v_dt): 2011-03-17 18:21:50-04
FUNCTION SNIPPET: to_char(v_dt, 'mm/dd/yyyy hh:mi AM')
FUNCTION RETURNS: 03/18/2011 09:21 AM
FUNCTION SHOULD RETURN: 03/18/2011 10:21 AM

postgres=# show time zone;
  TimeZone
------------------
  America/New_York
(1 row)



You are leaving out something critical. Based on what you've shown us, 
you shouldn't get either answer, you should get "03/17/2011 06:21 PM".


select to_char(timestamptz '2011-03-17 18:21:50-04', 'mm/dd/yyyy hh:mi AM');

       to_char
---------------------
 03/17/2011 06:21 PM

My money is on an accidental conversion between timestamp and 
timestamptz in some part of the function you haven't shown us.


Cheers,
Steve




Re: [GENERAL] query taking much longer since Postgres 8.4 upgrade

2011-03-18 Thread Tomas Vondra
On 18.3.2011 16:42, Davenport, Julie wrote:
> This helped, is now down from 14.9 min to 10.9 min to run the entire script.  
> Thanks.

Still, it's way slower than the 8.0 :-(

regards
Tomas



Re: [GENERAL] TO_CHAR(timestamptz,datetimeformat) wrong after DST change

2011-03-18 Thread Jonathan Brinkman
I was setting TIME ZONE to 'EST' in my formatting function, and I've now
learned that EST is NOT the same as 'America/New_York', as EST is not
DST-sensitive. 

I mistyped, the 2011-03-17 18:21:50-04 should have been 2011-03-17
10:21:50-04

Thank you all!!



-Original Message-
From: Steve Crawford [mailto:scrawf...@pinpointresearch.com] 
Sent: Friday, March 18, 2011 1:05 PM
To: jonathansfl
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] TO_CHAR(timestamptz,datetimeformat) wrong after DST
change

On 03/18/2011 07:59 AM, jonathansfl wrote:
> My TO_CHAR function is now an hour off thanks to Daylight Savings Time.
> The dates are correct (I'm in EST: TZ=-04) but my function now returns
TIME
> an hour early.
> (prior to DST we were TZ=-05).
>
> TIMESTAMPTZ data (v_dt): 2011-03-17 18:21:50-04
> FUNCTION SNIPPET: to_char(v_dt, 'mm/dd/yyyy hh:mi AM')
> FUNCTION RETURNS: 03/18/2011 09:21 AM
> FUNCTION SHOULD RETURN: 03/18/2011 10:21 AM
>
> postgres=# show time zone;
>   TimeZone
> ------------------
>   America/New_York
> (1 row)
>

You are leaving out something critical. Based on what you've shown us, 
you shouldn't get either answer, you should get "03/17/2011 06:21 PM".

select to_char(timestamptz '2011-03-17 18:21:50-04', 'mm/dd/yyyy hh:mi AM');

       to_char
---------------------
 03/17/2011 06:21 PM

My money is on an accidental conversion between timestamp and 
timestamptz in some part of the function you haven't shown us.

Cheers,
Steve




Re: [GENERAL] why does extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' at time zone 'utc' ) == 16?

2011-03-18 Thread Steve Crawford

On 03/18/2011 10:17 AM, bubba postgres wrote:
Thank you for your thorough reply. It will take some time to digest 
your advice, but yes, I am specifically trying to avoid all TZ issues 
by using UTC everywhere all the time. My assumption was that Timestamp 
without timezone meant UTC, guess not.


Regards,
-JD



If you need to deal with multiple time zones, you can't avoid TZ issues. 
But be aware that regardless of how you specify a point in time 
(timestamp with time zone), PostgreSQL stores it internally at UTC and 
that point in time can be displayed in any time zone you wish.


create table tzexamp (mytimestamp timestamptz);

-- The following are equivalent (based on my default timezone of 
US/Pacific):

insert into tzexamp values (timestamptz '2010-01-01 00:00:00');
insert into tzexamp values (timestamptz '2010-01-01 08:00:00-00');
insert into tzexamp values (timestamptz '2010-01-01 03:00:00 EST5EDT');
insert into tzexamp values (abstime(1262332800));
insert into tzexamp values (timestamptz 'January 1 02:00:00 2010 
posix/America/Chicago');

set timezone to 'Asia/Macao';
insert into tzexamp values (timestamptz '2010-01-01 16:00:00');
set timezone to default;


select * from tzexamp;
  mytimestamp
------------------------
 2010-01-01 00:00:00-08
 2010-01-01 00:00:00-08
 2010-01-01 00:00:00-08
 2010-01-01 00:00:00-08
 2010-01-01 00:00:00-08
 2010-01-01 00:00:00-08

But for the client connecting from Japan:

set timezone to 'Asia/Tokyo';

select * from tzexamp;
  mytimestamp
------------------------
 2010-01-01 17:00:00+09
 2010-01-01 17:00:00+09
 2010-01-01 17:00:00+09
 2010-01-01 17:00:00+09
 2010-01-01 17:00:00+09
 2010-01-01 17:00:00+09

Or, of course, GMT:

set timezone to 'UTC';

select * from tzexamp;
  mytimestamp
------------------------
 2010-01-01 08:00:00+00
 2010-01-01 08:00:00+00
 2010-01-01 08:00:00+00
 2010-01-01 08:00:00+00
 2010-01-01 08:00:00+00
 2010-01-01 08:00:00+00

Cheers,
Steve




Re: [GENERAL] SOCK_wait_for_ready function call caused a query to get stuck

2011-03-18 Thread Merlin Moncure
On Fri, Mar 18, 2011 at 11:39 AM, tamanna madaan
 wrote:
> Thanks for your reply Merlin .
>
> If I am getting you correctly, you mean to say that I should check for
> waiting queries in pg_stat_activity table
> while my application is hung at SOCK_wait_for_ready function call  . Right
> ??

correct. let's hope the problem is here (look for waiting = true).
Also take note of any sessions idle in transaction if there are any.
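A query along those lines, as a sketch using the pg_stat_activity columns of that era (`procpid`, `current_query`, and `waiting` were renamed in later PostgreSQL releases):

```sql
-- Show blocked queries and idle-in-transaction sessions
select procpid, datname, usename, waiting, xact_start, current_query
from pg_stat_activity
where waiting
   or current_query = '<IDLE> in transaction';
```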

> Yes I am opening multiple seesions with ODBC driver  and my application is
> multi threaded .

The worst and possibly the most likely case is that you have some sort
of threading/synchronization problem in your application or possibly
the odbc driver itself.

merlin



Re: [GENERAL] query taking much longer since Postgres 8.4 upgrade

2011-03-18 Thread Merlin Moncure
On Fri, Mar 18, 2011 at 10:42 AM, Davenport, Julie  wrote:
> This helped, is now down from 14.9 min to 10.9 min to run the entire script.  
> Thanks.

can you try disabling nestloop and see what happens?   In the session,
before running the query, issue:
set enable_nestloop = false;
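One way to scope that setting safely, as a sketch (the tables in the EXPLAIN are hypothetical):

```sql
begin;
-- set local reverts automatically at commit/rollback, so the planner
-- setting cannot leak into other statements in the session
set local enable_nestloop = false;
explain analyze
  select * from orders o join customers c on c.customer_id = o.customer_id;
rollback;
```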

merlin



[GENERAL] multi-tenant vs. multi-cluster

2011-03-18 Thread Ben Chobot
We're considering using postgres as a way to host database services for many, 
many independent applications. One obvious way to do this is with schemas, 
roles, and proper permissions, but that still leaves open the possibility for 
some poorly written application to leave open transactions and affect others. 

Another possible way to go is to use debian's ability to easily run multiple 
clusters, but while this option gives better isolation, I imagine it comes at a 
cost of more overhead, both in terms of memory and in terms of support. I 
suppose my question is, how much overhead? I'm less worried about support 
(that's what scripts are for) but if we're talking an extra 50MB of memory per 
cluster, that will start to add up.

How have you guys offered multi-tenant postgres services? Am I forgetting 
something?


Re: [GENERAL] multi-tenant vs. multi-cluster

2011-03-18 Thread Ivan Voras

On 18/03/2011 19:17, Ben Chobot wrote:


if we're talking an extra 50MB of memory per cluster, that will start to add up.


Consider this: each such cluster will have:

a) its own database files on the drives (WAL, data - increasing IO)
b) its own postgresql processes (many of them) running in memory
c) its own shared_buffers in memory.

It is highly unlikely that you will manage anything decent with this 
type of configuration with a non-trivial number of clusters.





Re: [GENERAL] multi-tenant vs. multi-cluster

2011-03-18 Thread Ben Chobot
On Mar 18, 2011, at 11:47 AM, Ivan Voras wrote:

> On 18/03/2011 19:17, Ben Chobot wrote:
> 
>> if we're talking an extra 50MB of memory per cluster, that will start to add 
>> up.
> 
> Consider this: each such cluster will have:
> 
> a) its own database files on the drives (WAL, data - increasing IO)

Oh, I hadn't thought about WAL. Good point. 
But data files are a function of tables and indexes, right? Having them in 
different schemas or different clusters isn't going to change that. I guess 
there are system tables but those are relatively trivial - I think?

> b) its own postgresql processes (many of them) running in memory

I believe this is entirely a function of client connections.

> c) its own shared_buffers in memory.

Given that each application will be independent, I don't see a difference 
between clusters and schemas here either.




[GENERAL] How do I do this in plpgsql ?

2011-03-18 Thread Dan S
Hi !

Is there a way to use plpgsql copy type to get an array of a certain type ?

For example if I have a type sample%TYPE
How can I declare a variable that is an array of sample%TYPE
I can't get it to work, is there a way to do it ?

Best Regards
Dan S


Re: [GENERAL] multi-tenant vs. multi-cluster

2011-03-18 Thread Nicholson, Brad (Toronto, ON, CA)
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Ben Chobot
> Sent: Friday, March 18, 2011 3:10 PM
> To: Ivan Voras
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] multi-tenant vs. multi-cluster
> 
> On Mar 18, 2011, at 11:47 AM, Ivan Voras wrote:
> 
> > On 18/03/2011 19:17, Ben Chobot wrote:
> >
> >> if we're talking an extra 50MB of memory per cluster, that will
> start to add up.
> >
> > Consider this: each such cluster will have:
> >
> > a) its own database files on the drives (WAL, data - increasing IO)
> 
> Oh, I hadn't thought about WAL. Good point.
> But data files are a function of tables and indexes, right? Having them
> in different schemas or different clusters isn't going to change that.
> I guess there are system tables but those are relatively trivial - I
> think?

Correct, but with different clusters you are going to have different back ends 
handling writes without regard to each other.  How this unfolds will depend on 
your underlying disk structure and filesystems.  I've had bad experiences in the 
past having multiple Postgres instances fighting for the same disk.

> > b) its own postgresql processes (many of them) running in memory
> 
> I believe this is entirely a function of client connections.

With a single instance, you can use connection pooling to reduce the overall 
number of backend connections which will reduce your memory footprint. 
> 
> > c) its own shared_buffers in memory.
> 
> Given that each application will be independent, I don't see a
> different between clusters and schemas here either.

The difference is that in a single cluster, a single instance is going to make 
decisions about what data to cache or not.  This is an overly simplified 
example - but illustrates the point.  Say you have 4GB of RAM available to 
dedicate to a shared buffers on a server, and two databases (DB A and DB B) to 
run.  You either set up a single instance with a 4GB pool, or two instances 
with 2GB pools each.  Let's say that DB A gets really busy, and DB B is not.  
In the shared instance approach, the instance can evict buffers cached for DB B 
in order to load buffers needed for DB A.  In the split instance, you can't.

Brad.



[GENERAL] pgagent installation -- ccmake - getting selected wxWidgets configuration (version: 2.6, debug: no, static

2011-03-18 Thread akp geek
hi all -

 I am trying to install the pgagent on solaris. when I do the
ccmake /opt/postgres/apps/pgAgent-3.0.1-Source, I am getting the following
error.

 CMake Error at cmake/FindWX.cmake:271 (MESSAGE):
   The selected wxWidgets configuration (version: 2.6, debug: no, static:
yes,
   unicode: yes, modules: base) is not available.
 Call Stack (most recent call first):
   CMakeLists.txt:91 (FIND_PACKAGE)


thanks for the help

Regards


Re: [GENERAL] multi-tenant vs. multi-cluster

2011-03-18 Thread Ben Chobot

On Mar 18, 2011, at 12:34 PM, Nicholson, Brad (Toronto, ON, CA) wrote:

>>> b) its own postgresql processes (many of them) running in memory
>> 
>> I believe this is entirely a function of client connections.
> 
> With a single instance, you can use connection pooling to reduce the overall 
> number of backend connections which will reduce your memory footprint. 

Er, right, for some reason I was thinking I could use connection pooling 
against multiple clusters, but now that I think about it that doesn't make much 
sense, does it?

>> 
>>> c) its own shared_buffers in memory.
>> 
>> Given that each application will be independent, I don't see a
>> different between clusters and schemas here either.
> 
> The difference is that in a single cluster, a single instance is going to 
> make decisions about what data to cache or not.  This is an overly simplified 
> example - but illustrates the point.  Say you have 4GB of RAM available to 
> dedicate to a shared buffers on a server, and two databases (DB A and DB B) 
> to run.  You either set up a single instance with a 4GB pool, or two 
> instances with 2GB pools each.  Let's say that DB A gets really busy, and DB 
> B is not.  In the shared instance approach, the instance can evict buffers 
> cached for DB B in order to load buffers needed for DB A.  In the split 
> instance, you can't.

Ah, that's an illustrative example. Thanks.

OK, so are there any good ways to keep a bad/clueless user from gumming up a 
whole cluster? Something like statement_timeout, but for transactions, seems 
like it would be ideal.


Re: [GENERAL] How do I do this in plpgsql ?

2011-03-18 Thread Merlin Moncure
On Fri, Mar 18, 2011 at 2:20 PM, Dan S  wrote:
> Hi !
>
> Is there a way to use plpgsql copy type to get an array of a certain type ?
>
> For example if I have a type sample%TYPE
> How can I declare a variable that is an array of sample%TYPE
> I can't get it to work, is there a way to do it ?

I don't think it's possible.  Can you give some more detail on what
you are trying to do?

merlin



[GENERAL] Surge 2011 Conference CFP

2011-03-18 Thread Katherine Jeschke
We are excited to announce Surge 2011, the Scalability and Performance
Conference, to be held in Baltimore on Sept 28-30, 2011. The event focuses
on case studies that demonstrate successes (and failures) in Web
applications and Internet architectures. This year, we're adding Hack Day on
September 28th.

The inaugural, 2010 conference (http://omniti.com/surge/2010) was a smashing
success and we are currently accepting submissions for papers through April
3rd. You can find more information about topics online:

http://omniti.com/surge/2011

2010 attendees compared Surge to the early days of Velocity, and our
speakers received 3.5-4 out of 4 stars for quality of presentation and
quality of content! Nearly 90% of first-year attendees are planning to come
again in 2011.

For more information about the CFP or sponsorship of the event, please
contact us at surge (AT) omniti (DOT) com.

-- 
Katherine Jeschke
Marketing Director
OmniTI Computer Consulting, Inc.
7070 Samuel Morse Drive, Ste.150
Columbia, MD 21046
O: 410/872-4910, 222
C: 443/643-6140
omniti.com
circonus.com


Re: [GENERAL] multi-tenant vs. multi-cluster

2011-03-18 Thread Nicholson, Brad (Toronto, ON, CA)
> -Original Message-
> From: Ben Chobot [mailto:be...@silentmedia.com]
> Sent: Friday, March 18, 2011 3:45 PM
> To: Nicholson, Brad (Toronto, ON, CA)
> Cc: pgsql-general General
> Subject: Re: [GENERAL] multi-tenant vs. multi-cluster
> 
> 
> On Mar 18, 2011, at 12:34 PM, Nicholson, Brad (Toronto, ON, CA) wrote:
> 
> >>> b) its own postgresql processes (many of them) running in memory
> >>
> >> I believe this is entirely a function of client connections.
> >
> > With a single instance, you can use connection pooling to reduce the
> overall number of backend connections which will reduce your memory
> footprint.
> 
> Er, right, for some reason I was thinking I could use connection
> pooling against multiple clusters, but now that I think about it that
> doesn't make much sense, does it?

Not for reducing overall numbers of connections on the server.

> >>
> >>> c) its own shared_buffers in memory.
> >>
> >> Given that each application will be independent, I don't see a
> >> different between clusters and schemas here either.
> >
> > The difference is that in a single cluster, a single instance is
> going to make decisions about what data to cache or not.  This is an
> overly simplified example - but illustrates the point.  Say you have
> 4GB of RAM available to dedicate to a shared buffers on a server, and
> two databases (DB A and DB B) to run.  You either set up a single
> instance with a 4GB pool, or two instances with 2GB pools each.  Let's
> say that DB A gets really busy, and DB B is not.  In the shared
> instance approach, the instance can evict buffers cached for DB B in
> order to load buffers needed for DB A.  In the split instance, you
> can't.
> 
> Ah, that's an illustrative example. Thanks.
> 
> OK, so are there any good ways to keep a bad/clueless user from gumming
> up a whole cluster? Something like statement_timeout, but for
> transactions, seems like it would be idle.

statement_timeout will only time out SQL queries, not DB transactions.  There 
is nothing internal for that.  It's a fairly easy query to terminate all IDLE 
transactions, but you have to be careful that you aren't terminating active 
sessions.
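For instance, a sketch of such a query (8.x-era pg_stat_activity column names; requires superuser, and the 10-minute threshold is an arbitrary choice):

```sql
-- Kill only sessions that have sat idle in an open transaction too long,
-- leaving actively working backends alone
select pg_terminate_backend(procpid)
from pg_stat_activity
where current_query = '<IDLE> in transaction'
  and now() - xact_start > interval '10 minutes';
```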

Brad.



Re: [GENERAL] multi-tenant vs. multi-cluster

2011-03-18 Thread Merlin Moncure
On Fri, Mar 18, 2011 at 2:44 PM, Ben Chobot  wrote:
>
> On Mar 18, 2011, at 12:34 PM, Nicholson, Brad (Toronto, ON, CA) wrote:
>
 b) its own postgresql processes (many of them) running in memory
>>>
>>> I believe this is entirely a function of client connections.
>>
>> With a single instance, you can use connection pooling to reduce the overall 
>> number of backend connections which will reduce your memory footprint.
>
> Er, right, for some reason I was thinking I could use connection pooling 
> against multiple clusters, but now that I think about it that doesn't make 
> much sense, does it?
>
>>>
 c) its own shared_buffers in memory.
>>>
>>> Given that each application will be independent, I don't see a
>>> different between clusters and schemas here either.
>>
>> The difference is that in a single cluster, a single instance is going to 
>> make decisions about what data to cache or not.  This is an overly 
>> simplified example - but illustrates the point.  Say you have 4GB of RAM 
>> available to dedicate to a shared buffers on a server, and two databases (DB 
>> A and DB B) to run.  You either set up a single instance with a 4GB pool, or 
>> two instances with 2GB pools each.  Let's say that DB A gets really busy, 
>> and DB B is not.  In the shared instance approach, the instance can evict 
>> buffers cached for DB B in order to load buffers needed for DB A.  In the 
>> split instance, you can't.
>
> Ah, that's an illustrative example. Thanks.
>
> OK, so are there any good ways to keep a bad/clueless user from gumming up a 
> whole cluster? Something like statement_timeout, but for transactions, seems 
> like it would be idle.

single cluster, multiple databases is likely the best way to go.
postgres is a little thin on resource provisioning features but at the
end of the day, restricting a single client app to N simultaneous
connections on a suitably powerful server is going to be your best way
to deal with this.  I highly advise using pgbouncer in front of your
database to do this.
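Postgres itself can also cap connections per role, which is a blunter, built-in version of the same idea (the role name here is hypothetical):

```sql
-- Each tenant application connects as its own role; cap it at N backends
alter role tenant_a connection limit 10;
```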

you can always generate scripts to watch for high load situations and
kill off offending backends in emergencies.

statement_timeout is ok, but don't be too aggressive with it or it
will become annoying.

merlin



Re: [GENERAL] How do I do this in plpgsql ?

2011-03-18 Thread Pavel Stehule
2011/3/18 Dan S :
> Hi !
>
> Is there a way to use plpgsql copy type to get an array of a certain type ?
>
> For example if I have a type sample%TYPE
> How can I declare a variable that is an array of sample%TYPE
> I can't get it to work, is there a way to do it ?

No, it's not supported :(

Regards

Pavel Stehule
>
> Best Regards
> Dan S
>
>



Re: [GENERAL] multi-tenant vs. multi-cluster

2011-03-18 Thread Scott Marlowe
On Fri, Mar 18, 2011 at 1:44 PM, Ben Chobot  wrote:

> OK, so are there any good ways to keep a bad/clueless user from gumming up a 
> whole cluster? Something like statement_timeout, but for transactions, seems 
> like it would be idle.

Newer versions of postgresql aren't quite as susceptible to problems
of open transactions in one DB affecting another in the same cluster /
instance.  I.e. if dbA has an open transaction, that will no longer
prevent dbB from being properly vacuumed etc.

I'd suggest using check_postgres.pl to keep track of things like
open transactions, vacuum wraparound etc.

Note that it might be worthwhile to have two or more but not dozens of
individual clusters. Let's say you've got a critical app, and a bunch
of not so critical apps.  You can make a cluster for the more critical
app(s) so it's / they're isolated from the other apps if things go
wrong.

A rogue query using up all the memory or IO in the machine can still
be a bit of a problem, of course, but it will be limited if it happens
in another cluster.



Re: [GENERAL] pgagent installation -- ccmake - getting selected wxWidgets configuration (version: 2.6, debug: no, static

2011-03-18 Thread Adrian Klaver
On Friday, March 18, 2011 12:36:07 pm akp geek wrote:
> hi all -
> 
>  I am trying to install the pgagent on solaris. when I do the
> ccmake /opt/postgres/apps/pgAgent-3.0.1-Source, I am getting the following
> error.
> 
>  CMake Error at cmake/FindWX.cmake:271 (MESSAGE):
>The selected wxWidgets configuration (version: 2.6, debug: no, static:
> yes,
>unicode: yes, modules: base) is not available.
>  Call Stack (most recent call first):
>CMakeLists.txt:91 (FIND_PACKAGE)

At a guess you do not have the wxWidgets dev package installed.

> 
> 
> thanks for the help
> 
> Regards


-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] pgagent installation -- ccmake - getting selected wxWidgets configuration (version: 2.6, debug: no, static

2011-03-18 Thread akp geek
thank you for the clues. I am downloading the package and will install and
update you

Regards

On Fri, Mar 18, 2011 at 4:45 PM, Adrian Klaver wrote:

> On Friday, March 18, 2011 12:36:07 pm akp geek wrote:
> > hi all -
> >
> >  I am trying to install the pgagent on solaris. when I do the
> > ccmake /opt/postgres/apps/pgAgent-3.0.1-Source, I am getting the
> following
> > error.
> >
> >  CMake Error at cmake/FindWX.cmake:271 (MESSAGE):
> >The selected wxWidgets configuration (version: 2.6, debug: no, static:
> > yes,
> >unicode: yes, modules: base) is not available.
> >  Call Stack (most recent call first):
> >CMakeLists.txt:91 (FIND_PACKAGE)
>
> At a guess you do not have the wxWidgets dev package installed.
>
> >
> >
> > thanks for the help
> >
> > Regards
>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


Re: [GENERAL] How do I do this in plpgsql ?

2011-03-18 Thread Dan S
Well, I had hoped to define the type used for this column in only one
place.
Now I need an array of the same type to do some procedural calculations.
I figured the best way was to get an array of this type by copying the
type used to fill the array.

Anyway I will declare the array the usual way then.

Thank you Merlin and Pavel for your quick answers
Dan S

2011/3/18 Merlin Moncure 

> On Fri, Mar 18, 2011 at 2:20 PM, Dan S  wrote:
> > Hi !
> >
> > Is there a way to use plpgsql copy type to get an array of a certain type
> ?
> >
> > For example if I have a type sample%TYPE
> > How can I declare a variable that is an array of sample%TYPE
> > I can't get it to work, is there a way to do it ?
>
> I don't think it's possible.  Can you give some more detail on what
> you are trying to do?
>
> merlin
>
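A sketch of that "usual way", with a hypothetical table, showing that %TYPE works for the scalar but the array's element type has to be spelled out:

```sql
create table sample (val numeric(10,2));

create or replace function demo() returns numeric as $$
declare
  v   sample.val%type;      -- works: copies the column's declared type
  arr numeric(10,2)[];      -- no array form of %TYPE, so repeat the type here
begin
  v := 1.23;
  arr := array[v, 4.56];
  return arr[2];
end;
$$ language plpgsql;
```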


Re: [GENERAL] error messages during restore

2011-03-18 Thread Tom Lane
Geoffrey Myers  writes:
> So we are in the process of converting our databases from SQL_ASCII to 
> UTF8.  If a particular row won't import because of the encoding issue we 
> get an error like:

> pg_restore: [archiver (db)] Error from TOC entry 5317; 0 1266711 TABLE 
> DATA logs postgres
> pg_restore: [archiver (db)] COPY failed: ERROR:  invalid byte sequence 
> for encoding "UTF8": 0x90
> HINT:  This error can also happen if the byte sequence does not match 
> the encoding expected by the server, which is controlled by 
> "client_encoding".
> CONTEXT:  COPY logs, line 590744

> Question is, it would be really nice if we could figure out the actual 
> column name in that table.

Sorry, no chance of that.  The line is converted to server encoding
before any attempt is made to split it into columns.  Since the column
delimiter is potentially encoding-specific, there's not really any
alternative to doing it that way.

regards, tom lane



Re: [GENERAL] Re: [GENERAL] Re: [GENERAL] Re: [GENERAL] Different encoding for string values and identifier strings? Or (select 'tést' as tést) returns different values for string and identifier...

2011-03-18 Thread Tom Lane
"Francisco Figueiredo Jr."  writes:
> My database has encoding set to UTF-8, although my lc_collate is pt_BR.UTF-8;
> may this lc setting have caused some trouble?

Hmmm ... actually, it strikes me that this may be a downcasing problem.
PG will try to feed an unquoted identifier through tolower(), and that
basically can't work on multibyte characters.  Most implementations of
tolower() are smart enough to not change high-bit-set bytes in UTF8
locales, but maybe your platform's is not ...
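If that turns out to be the cause, a double-quoted identifier sidesteps the downcasing entirely (a minimal sketch):

```sql
-- Quoted identifiers are taken verbatim; only unquoted ones go through
-- the locale-dependent tolower() path
select 'tést' as "tést";
```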

regards, tom lane



[GENERAL] Insert value input syntax of an array of types without ARRAY/ROW nor casting?

2011-03-18 Thread Stefan Keller
Hi,

I'm playing around with arrays of types but can't quite work out the
intuitive syntax.

Given the test snippet below, why do the following insert attempts fail?

The literal constant would be the most intuitive syntax. The attempt
below also closely follows the documentation AFAIK:
http://www.postgresql.org/docs/current/static/arrays.html

INSERT INTO mytypetable VALUES (
  5, '{ ('a', 'aa'), ('b', 'bb') }' );
> ERROR: Syntax Error

This would be close to Oracle (SQL standard?) syntax by using an
implicitly generated constructor with same name as type ('mytypes'):

INSERT INTO mytypetable VALUES (
  6, ARRAY[ mytypes('a', 'aa'), ('b', 'bb') ] );
> ERROR: Function mytypes(unknown, unknown) does not exist

Any help?

Yours, S.


-- Testing arrays of types
CREATE TYPE mytype AS (
   attr1 varchar,
   attr2 varchar
);
CREATE TABLE mytypetable (
  id serial,
  mytypes mytype[10]
);

INSERT INTO mytypetable VALUES ( 0, null );
INSERT INTO mytypetable VALUES ( 1, '{ null, null }' );
INSERT INTO mytypetable VALUES (
  2, ARRAY[ (null, null) ]::mytype[] );
INSERT INTO mytypetable VALUES (
  3, ARRAY[ ('a', 'aa'), ('b', 'bb') ]::mytype[] );
INSERT INTO mytypetable VALUES (
  4, ARRAY[ ROW('a', 'aa'), ROW('b', 'bb') ]::mytype[] );
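For what it's worth, the literal form does seem to work once each composite element is written in the array I/O syntax, i.e. with double quotes around each row value:

```sql
-- Assumed equivalent of the failing attempt above, using array I/O syntax
INSERT INTO mytypetable VALUES (
  5, '{"(a,aa)","(b,bb)"}' );
```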



[GENERAL] Re: [GENERAL] Re: [GENERAL] Re: [GENERAL] Re: [GENERAL] Different encoding for string values and identifier strings? Or (select 'tést' as tést) returns different values for string and identi

2011-03-18 Thread Francisco Figueiredo Jr.
Hm,

I'm using osx 10.6.6 and I compiled PG myself from source. Is there any
configure option or any library I may use to get the correct behavior? Is
there any runtime setting I can make to change this tolower() behavior,
maybe skip the call?

Thanks in advance.

--
Sent from my Android phone

Francisco Figueiredo Jr.
Npgsql lead developer
fxjr.blogspot.com
twitter.com/franciscojunior
On 18/03/2011 22:01, "Tom Lane"  wrote:
> "Francisco Figueiredo Jr."  writes:
>> My database has encoding set to UTF-8 although my lc_collate is
pt.BR.UTF-8
>> this lc setting my have cause some trouble?
>
> Hmmm ... actually, it strikes me that this may be a downcasing problem.
> PG will try to feed an unquoted identifier through tolower(), and that
> basically can't work on multibyte characters. Most implementations of
> tolower() are smart enough to not change high-bit-set bytes in UTF8
> locales, but maybe your platform's is not ...
>
> regards, tom lane