[GENERAL] Problems with pg_upgrade.

2014-07-29 Thread Bror Jonsson

Dear all,

I'm trying to upgrade from 9.0 to 9.2 without any success. The pg_upgrade
script fails with the message "Old and new cluster install users have different
values for pg_authid.oid":

geo-route-in:~ bror$ /opt/local//lib/postgresql92/bin/pg_upgrade -b 
/opt/local//lib/postgresql90/bin/ -B /opt/local//lib/postgresql92/bin/ -d 
/Volumes/ruoteRAID/db/postgresql90/defaultdb/ -D  
/Volumes/ruoteRAID/db/postgresql92/defaultdb/ -u bror
Performing Consistency Checks
-----------------------------
Checking current, bin, and data directories ok
Checking cluster versions   ok
Checking database user is a superuser   ok
Checking for prepared transactions  ok
Checking for reg* system OID user data typesok
Checking for contrib/isn with bigint-passing mismatch   ok
Creating catalog dump   ok
Checking for presence of required libraries ok
Checking database user is a superuser   ok

Old and new cluster install users have different values for pg_authid.oid.
Failure, exiting

The only information about this error I could find is in the source code:

> https://github.com/postgres/postgres/blob/master/contrib/pg_upgrade/check.c

…
	/*
	 * We don't restore our own user, so both clusters must have
	 * matching install-user oids.
	 */
	if (old_cluster.install_role_oid != new_cluster.install_role_oid)
		pg_fatal("Old and new cluster install users have different values for pg_authid.oid.\n");

	/*
	 * We only allow the install user in the new cluster because other defined
	 * users might match users defined in the old cluster and generate an
	 * error during pg_dump restore.
	 */
	if (new_cluster.role_count != 1)
		pg_fatal("Only the install user can be defined in the new cluster.\n");

	check_for_prepared_transactions(&new_cluster);
}
…

pg_authid in the old db looks as follows:

bror=# SELECT * FROM pg_authid;
 rolname  | rolsuper | rolinherit | rolcreaterole | rolcreatedb | rolcatupdate | rolcanlogin | rolconnlimit | rolpassword | rolvaliduntil
----------+----------+------------+---------------+-------------+--------------+-------------+--------------+-------------+---------------
 postgres | t        | t          | t             | t           | t            | t           |           -1 |             |
 bror     | t        | t          | t             | t           | t            | t           |           -1 |             |
 django   | f        | t          | f             | f           | f            | t           |           -1 |             |
(3 rows)

And in the new one:

SELECT * FROM pg_authid;
 rolname | rolsuper | rolinherit | rolcreaterole | rolcreatedb | rolcatupdate | rolcanlogin | rolreplication | rolconnlimit | rolpassword | rolvaliduntil
---------+----------+------------+---------------+-------------+--------------+-------------+----------------+--------------+-------------+---------------
 bror    | t        | t          | t             | t           | t            | t           | t              |           -1 |             |
(1 row)

Any suggestion for how to fix this?


Many thanks!

/Bror Jonsson

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problems with pg_upgrade.

2014-07-29 Thread Tom Lane
Bror Jonsson  writes:
> I'm trying to upgrade from 9.0 to 9.2 without any success. The pg_upgrade
> script fails with the message "Old and new cluster install users have
> different values for pg_authid.oid":

> geo-route-in:~ bror$ /opt/local//lib/postgresql92/bin/pg_upgrade -b 
> /opt/local//lib/postgresql90/bin/ -B /opt/local//lib/postgresql92/bin/ -d 
> /Volumes/ruoteRAID/db/postgresql90/defaultdb/ -D  
> /Volumes/ruoteRAID/db/postgresql92/defaultdb/ -u bror

At a guess, "bror" is not the original superuser in the old cluster
(ie, the one who ran initdb).  You need to use the name of that
superuser, not just any superuser.

regards, tom lane




Re: [GENERAL] Problems with pg_upgrade.

2014-07-29 Thread Bror Jonsson

Thanks!

Is there any way to figure out who the original superuser was? 

:-)B


On Jul 29, 2014, at 12:41, Tom Lane  wrote:

> Bror Jonsson  writes:
>> I'm trying to upgrade from 9.0 to 9.2 without any success. The pg_upgrade
>> script fails with the message "Old and new cluster install users have
>> different values for pg_authid.oid":
> 
>> geo-route-in:~ bror$ /opt/local//lib/postgresql92/bin/pg_upgrade -b 
>> /opt/local//lib/postgresql90/bin/ -B /opt/local//lib/postgresql92/bin/ -d 
>> /Volumes/ruoteRAID/db/postgresql90/defaultdb/ -D  
>> /Volumes/ruoteRAID/db/postgresql92/defaultdb/ -u bror
> 
> At a guess, "bror" is not the original superuser in the old cluster
> (ie, the one who ran initdb).  You need to use the name of that
> superuser, not just any superuser.
> 
>   regards, tom lane





Re: [GENERAL] Problems with pg_upgrade.

2014-07-29 Thread John R Pierce

On 7/29/2014 9:41 AM, Tom Lane wrote:

Bror Jonsson  writes:

> I'm trying to upgrade from 9.0 to 9.2 without any success. The pg_upgrade
> script fails with the message "Old and new cluster install users have different
> values for pg_authid.oid":
> geo-route-in:~ bror$ /opt/local//lib/postgresql92/bin/pg_upgrade
> -b /opt/local//lib/postgresql90/bin/ -B /opt/local//lib/postgresql92/bin/
> -d /Volumes/ruoteRAID/db/postgresql90/defaultdb/ -D
> /Volumes/ruoteRAID/db/postgresql92/defaultdb/ -u bror

At a guess, "bror" is not the original superuser in the old cluster
(ie, the one who ran initdb).  You need to use the name of that
superuser, not just any superuser.


to clarify, the old cluster's install user is postgres, while the new one's 
is bror.  they must be the same.  stop the new cluster, wipe its data dir 
out, and re-run initdb as the postgres user this time, THEN run the 
upgrade script with -u postgres
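As a rough sketch of those steps (the paths are taken from the command posted above and may need adjusting to your installation):

```shell
# Stop the new cluster and wipe its data directory.
pg_ctl -D /Volumes/ruoteRAID/db/postgresql92/defaultdb stop
rm -rf /Volumes/ruoteRAID/db/postgresql92/defaultdb

# Re-create it so the install user matches the old cluster's (postgres).
/opt/local/lib/postgresql92/bin/initdb -U postgres \
    -D /Volumes/ruoteRAID/db/postgresql92/defaultdb

# Re-run pg_upgrade with the matching install user.
/opt/local/lib/postgresql92/bin/pg_upgrade \
    -b /opt/local/lib/postgresql90/bin -B /opt/local/lib/postgresql92/bin \
    -d /Volumes/ruoteRAID/db/postgresql90/defaultdb \
    -D /Volumes/ruoteRAID/db/postgresql92/defaultdb \
    -u postgres
```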




--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [GENERAL] Problems with pg_upgrade.

2014-07-29 Thread Tom Lane
Bror Jonsson  writes:
> Is there any way to figure out who the original superuser was? 

You got a surplus of superusers?

Anyway, it should be the one with OID 10.

select * from pg_authid where oid = 10;

regards, tom lane




Re: [GENERAL] PG_TERMINATE_BACKEND not working.

2014-07-29 Thread byfei...@163.com
The same problem here; any ideas?



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/PG-TERMINATE-BACKEND-not-working-tp5732379p5812909.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] TZ_OFFSET

2014-07-29 Thread Ramesh T
Hello,

 select TZ_OFFSET('US/Eastern') from dual;

In Oracle it returns:
--------
-04:00


but in Postgres

select TZ_OFFSET('US/Eastern');

it returns: function tz_offset('US/Eastern') does not exist.
Let me know how to solve this issue.


Re: [GENERAL] TZ_OFFSET

2014-07-29 Thread Victor Yegorov
2014-07-28 13:19 GMT+03:00 Ramesh T :

> select TZ_OFFSET ('US/Eastern');
>
> it returns: function tz_offset('US/Eastern') does not exist;
>

SELECT *
  FROM pg_catalog.pg_timezone_names
 WHERE name='US/Eastern';
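If an Oracle-style TZ_OFFSET() is wanted, that catalog lookup can be wrapped in a small user-defined function. This is only a sketch: the function is not a built-in, and the name merely mirrors Oracle's.

```sql
-- Hypothetical wrapper emulating Oracle's TZ_OFFSET; not a built-in.
CREATE OR REPLACE FUNCTION tz_offset(tz text) RETURNS interval AS $$
    SELECT utc_offset
      FROM pg_catalog.pg_timezone_names
     WHERE name = $1;
$$ LANGUAGE sql STABLE;

-- SELECT tz_offset('US/Eastern');  -- an interval such as -04:00:00 during DST
```

Note it returns an interval rather than Oracle's formatted text, and pg_timezone_names reflects the current offset (including daylight saving).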


-- 
Victor Y. Yegorov


[GENERAL] Is there any way to recover updated values from the table(Need old value)

2014-07-29 Thread gajendra s v
Hi All,


Is there any way to get historical values in a Postgres database? In one
table we have unfortunately updated a value, but we need to check the old
data. Is there any way to find the old values (the same table has been
updated multiple times)? We need the oldest data; please let me know if
there is any way to recover it.

Thanks in advance
Gajendra


Re: [GENERAL] Is there any way to recover updated values from the table(Need old value)

2014-07-29 Thread John R Pierce

On 7/29/2014 12:50 PM, gajendra s v wrote:
Is there any way to get historical values in a Postgres database? In one 
table we have unfortunately updated a value, but we need to check the old 
data. Is there any way to find the old values (the same table has been 
updated multiple times)? We need the oldest data; please let me know if 
there is any way to recover it.


got backups?

if you have a basebackup and a complete WAL archive, you can do a 
'point-in-time recovery' up to some time prior to the updates that 
overwrote your data and retrieve the old values at that transactional point.
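On a 9.x cluster that sketch looks roughly like this: restore the base backup into a fresh data directory, then add a recovery.conf before starting the server. The archive path and target time below are placeholders, not values from this thread.

```
restore_command = 'cp /path/to/wal_archive/%f "%p"'
recovery_target_time = '2014-07-29 10:00:00'
recovery_target_inclusive = false
```

On startup the server replays WAL up to the target time and stops, leaving the data as it was at that moment.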


--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Is there any way to recover updated values from the table(Need old value)

2014-07-29 Thread John R Pierce

On 7/29/2014 1:06 PM, gajendra s v wrote:

Thanks for the quick reply.
Sorry, I didn't understand what a base backup means, but I have a database 
dump from every day; our database is on AWS EC2.


so take a dump from before you overwrote the data you need, restore it 
to a scratch database, and copy out your data.
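A sketch of those steps; the database name, dump path, table, and key value here are all placeholders for illustration.

```shell
# Restore yesterday's dump into a scratch database.
createdb scratch_restore
pg_restore -d scratch_restore /backups/dump_2014-07-28.dump   # custom-format dump
# (for a plain-SQL dump: psql -d scratch_restore -f /backups/dump_2014-07-28.sql)

# Then copy the old values out:
psql -d scratch_restore \
    -c "\copy (SELECT * FROM mytable WHERE id = 42) TO 'old_rows.csv' CSV HEADER"
```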




--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] tab_to_sting

2014-07-29 Thread David G Johnston
Ramesh T wrote
> Hi,
> when i ran the below statement it works fine:
>    select string_agg(part_id::text,':') from part;
> But,
> SELECT tab_to_largeStringcheck(cast(string_agg(part_id::text,':') as
> t_varchar2_tab)) FROM part
> 
> [image: Inline image 1]
> 
> when i ran it like
> 
> SELECT qa.tab_to_largeStringcheck(string_agg(part_id::text,':'))
> FROM qa.part
> it returns: function (text) does not exist.
> let me know how to solve this issue.
> thanks,

You really need to spend a day reading the PostgreSQL documentation,
especially the parts on what functions and data types are available.  There
are many things that work in Oracle but not PostgreSQL simply because the
names are different.  If you know what you need to do, you should be able
to recognize the features in PostgreSQL that will accomplish the same goal.
You may have to write custom functions too.

You should also explore EDB and the Oracle compatibility stuff they have
written.

Note that "tabtolargestringcheck(text)" is not a known PostgreSQL
function...

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/tab-to-sting-tp5812613p5813223.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Is there any way to recover updated values from the table(Need old value)

2014-07-29 Thread John R Pierce

On 7/29/2014 1:29 PM, gajendra s v wrote:
Sorry to disturb you again. I have imported an old backup; is there any 
way to find how many update transactions have been run against a particular 
table over a 24-hour window? Our Java application has mistakenly updated 
some values.


the backup dump just stores the current values, and postgres stores no 
history of transactions or anything, unless you've implemented some sort 
of auditing mechanism such as triggers that log update/insert events 
into special audit tables you've created.
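Such a mechanism has to be installed before the data you care about changes. A minimal sketch of the trigger approach (all names here are illustrative, not from this thread):

```sql
-- Audit table capturing the prior version of each changed row.
CREATE TABLE mytable_audit (
    changed_at  timestamptz NOT NULL DEFAULT now(),
    operation   text        NOT NULL,   -- 'UPDATE' or 'DELETE'
    old_row     text        NOT NULL    -- textual copy of the row before the change
);

CREATE OR REPLACE FUNCTION mytable_audit_fn() RETURNS trigger AS $$
BEGIN
    INSERT INTO mytable_audit (operation, old_row) VALUES (TG_OP, OLD::text);
    IF TG_OP = 'DELETE' THEN
        RETURN OLD;   -- returning NULL here would cancel the delete
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_audit_trg
    BEFORE UPDATE OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE mytable_audit_fn();
```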





--
john r pierce  37N 122W
somewhere on the middle of the left coast





[GENERAL] Joining on CTE is unusually slow?

2014-07-29 Thread Jon Rosebaugh
I have a CTE that produces some row ids. I want to do a query with a
complicated join based on those row ids. I've tried running them split
into two (run CTE query, collect row ids, then run the complicated query
with id IN (id_1, id_2, id_3)) and it takes only a few seconds to run,
but when I run this as a single query it takes several minutes. Is this
expected behavior?

We have a message table (msg) where each message has a parent_id (a
nullable FK to the same table). I use the CTE to assemble a list of all
messages downstream of a given message (including that message). It
works pretty well:

WITH RECURSIVE downstream_thread(id, parent_id) AS (
    SELECT pn_msg.id AS id, pn_msg.parent_id AS parent_id
    FROM msg_table AS pn_msg
    JOIN msg_asset_table ON pn_msg.id = msg_asset_table.msg_id
    JOIN asset_table ON asset_table.id = msg_asset_table.asset_id
    WHERE pn_msg.message_type = 'interest_notification'
      AND asset_table.type = 'tp'
      AND asset_table.public_id = '59c89bdcaf6711e3b67f12313b0a607d'
    UNION
    SELECT pr_msg.id AS id, pr_msg.parent_id AS parent_id
    FROM msg_table AS pr_msg
    JOIN downstream_thread ON pr_msg.parent_id = downstream_thread.id
)
SELECT id FROM downstream_thread;

There are 2.3 million rows in msg_table, but for any expected asset
public_id, this query will return only a dozen rows at most, and runs in
6 seconds.

Once I have the row ids, I use this complicated join; basically I'm
finding replies to the original message that come from any user at the
company the original message was addressed to. This takes under a
second.

SELECT notification_reply_msg.*
FROM msg_table AS notification_reply_msg
JOIN reference_table AS notification_reply_ref
    ON notification_reply_msg.id = notification_reply_ref.msg_id
JOIN thread_table AS notification_reply_thread
    ON notification_reply_ref.thread_id = notification_reply_thread.id
JOIN user_table AS notification_reply_user
    ON notification_reply_thread.user_id = notification_reply_user.id
JOIN user_table AS interest_notification_user
    ON interest_notification_user.company_id = notification_reply_user.company_id
JOIN thread_table AS interest_notification_thread
    ON interest_notification_thread.user_id = interest_notification_user.id
JOIN reference_table AS interest_notification_ref
    ON interest_notification_ref.thread_id = interest_notification_thread.id
JOIN msg_table AS interest_notification_msg
    ON interest_notification_ref.msg_id = interest_notification_msg.id
WHERE interest_notification_msg.id IN (2648995, 2648996)
  AND notification_reply_msg.id IN (2648995, 2648996)
  AND interest_notification_msg.message_type = 'interest_notification'
  AND interest_notification_ref.header = 'to'
  AND notification_reply_ref.header = 'from';

However, I tried combining the two queries:

WITH RECURSIVE downstream_thread(id, parent_id) AS (
    SELECT pn_msg.id AS id, pn_msg.parent_id AS parent_id
    FROM msg_table AS pn_msg
    JOIN msg_asset_table ON pn_msg.id = msg_asset_table.msg_id
    JOIN asset_table ON asset_table.id = msg_asset_table.asset_id
    WHERE pn_msg.message_type = 'interest_notification'
      AND asset_table.type = 'tp'
      AND asset_table.public_id = '59c89bdcaf6711e3b67f12313b0a607d'
    UNION
    SELECT pr_msg.id AS id, pr_msg.parent_id AS parent_id
    FROM msg_table AS pr_msg
    JOIN downstream_thread ON pr_msg.parent_id = downstream_thread.id
)
SELECT notification_reply_msg.*
FROM msg_table AS notification_reply_msg
JOIN reference_table AS notification_reply_ref
    ON notification_reply_msg.id = notification_reply_ref.msg_id
JOIN thread_table AS notification_reply_thread
    ON notification_reply_ref.thread_id = notification_reply_thread.id
JOIN user_table AS notification_reply_user
    ON notification_reply_thread.user_id = notification_reply_user.id
JOIN user_table AS interest_notification_user
    ON interest_notification_user.company_id = notification_reply_user.company_id
JOIN thread_table AS interest_notification_thread
    ON interest_notification_thread.user_id = interest_notification_user.id
JOIN reference_table AS interest_notification_ref
    ON interest_notification_ref.thread_id = interest_notification_thread.id
JOIN msg_table AS interest_notification_msg
    ON interest_notification_ref.msg_id = interest_notification_msg.id
WHERE interest_notification_msg.id IN (SELECT id FROM downstream_thread)
  AND notification_reply_msg.id IN (SELECT id FROM downstream_thread)
  AND interest_notification_msg.message_type = 'interest_notification'
  AND interest_notification_ref.header = 'to'
  AND notification_reply_ref.header = 'from';

The only difference is that I've added the WITH RECURSIVE expression at
the beginning and changed the id list to "SELECT id FROM
downstream_thread".

This takes over eight minutes to run. Is this the expected behavior when
joining on CTE expressions?

I realize I haven't given the full schema/metadata/explain output as
explained in the "Slow Query Questions" wiki page; I wasn't sure if that
applied in this case because it's more a question of why combining these
two parts is much slower. However, I'm happy to provide it all; I'm just
worried

Re: [GENERAL] Joining on CTE is unusually slow?

2014-07-29 Thread David G Johnston
Jon Rosebaugh wrote
> This takes over eight minutes to run. Is this the expected behavior when
> joining on CTE expressions?
> 
> I realize I haven't given the full schema/metadata/explain output as
> explained in the "Slow Query Questions" wiki page

You should at least provide some explain a/o explain analyse results.
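For reference, a sketch of how to capture that for the combined query (ANALYZE actually executes the statement, and BUFFERS adds I/O counts; the ellipses stand for the query as posted):

```sql
EXPLAIN (ANALYZE, BUFFERS)
WITH RECURSIVE downstream_thread(id, parent_id) AS (
    ...  -- the recursive CTE as posted
)
SELECT notification_reply_msg.*
FROM ...;  -- the rest of the combined query as posted
```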

Not to sound pedantic here, but you are not JOINing on the CTE; you are
pushing it into the WHERE clause via a pair of sub-selects.

I don't see why you wouldn't apply the result of the CTE to the "FROM
msg_table" in the main query...

SELECT ...
FROM (SELECT * FROM msg_table JOIN downstream_thread USING (id)) AS
notification_reply_message
JOIN ...

Or even just

SELECT ...
FROM downstream_thread 
JOIN msg_table USING (id)

Speculation as to your original queries is beyond me without seeing the
explain plans - and possibly even then.

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Joining-on-CTE-is-unusually-slow-tp5813233p5813237.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] statement_timeout doesn't work

2014-07-29 Thread Sergey Konoplev
On Mon, Jul 21, 2014 at 11:32 AM, Sergey Konoplev  wrote:
> On Mon, Jul 21, 2014 at 10:16 AM, David G Johnston
>  wrote:
>>> So, If I separate the commands everything will will work as expected,
>>> correct?
>>
>> I would assume so.
>>
>> If you wait to send the DROP/ALTER index commands until the SET LOCAL
>> command returns successfully then both of those commands will die if they
>> exceed the timeout specified.

So, you were right, when I send the BEGIN/SET LOCAL/DROP/END as a
single command the statement timeout doesn't work.

Below is the test reproducing the problem.

psql -XAte < 'idle' AND pid <> pg_backend_pid();

-[ RECORD 1 ]-+---
pid   | 20071
backend_start | 2014-07-29 22:21:17.322722-07
xact_start| 2014-07-29 22:21:17.32666-07
query_start   | 2014-07-29 22:21:17.328291-07
state_change  | 2014-07-29 22:21:17.328293-07
waiting   | f
state | active
query | SELECT pg_sleep(100);
age   | 00:00:06.855373
-[ RECORD 2 ]-+---
pid   | 20085
backend_start | 2014-07-29 22:21:18.330979-07
xact_start| 2014-07-29 22:21:18.332332-07
query_start   | 2014-07-29 22:21:18.332332-07
state_change  | 2014-07-29 22:21:18.332332-07
waiting   | t
state | active
query | BEGIN;SET LOCAL statement_timeout TO 1000;DROP TABLE test;END;
age   | 00:00:05.849701

The age of the compound statement is more than the specified statement timeout.
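A sketch of the separated form, assuming the compound string above was sent as a single command: feeding the statements to psql one per line makes it dispatch each in its own query message, so the SET LOCAL is already in effect by the time the DROP runs.

```shell
psql -XAte <<'EOF'
BEGIN;
SET LOCAL statement_timeout TO 1000;
DROP TABLE test;
END;
EOF
```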

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (499) 346-7196, +7 (988) 888-1979
gray...@gmail.com

