Re: [GENERAL] Extensions btree_gist and cube collide?

2012-01-31 Thread pasman pasmański
Hi.

I recreated the database and the problem disappeared.
Thanks.




pasman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] [GENERAL] pg_dump -s dumps data?!

2012-01-31 Thread Robert Haas
On Mon, Jan 30, 2012 at 11:18 PM, Tom Lane  wrote:
> I don't recall that we thought very hard about what should happen when
> pg_dump switches are used to produce a selective dump, but ISTM
> reasonable that if it's "user data" then it should be dumped only if
> data in a regular user table would be.

Yep.

> What's not apparent to me is whether there's an argument for doing more
> than that.  It strikes me that the current design is not very friendly
> towards the idea of an extension that creates a table that's meant
> solely to hold user data --- you'd have to mark it as "config" which
> seems a bit unfortunate terminology for that case.  Is it important to
> do something about that, and if so what?

Is this anything more than a naming problem?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[GENERAL] Extending Session / Logged User info

2012-01-31 Thread durumdara
Dear PGSQL Masters!

I have a somewhat complicated question.

Is there any way to extend the session information (what is visible in
pgAdmin's "Server Status")?

Let me explain what I want to do:

We used an alternative database at one site, and I need to port it.
In that database we use a different db user for every real user.
When the program logs in to the server, it asks for a username/password
and logs in with it.
So if John XY logs in, the session is shown as "John XY", who is both a
real user and a db user.

This means that if we have 80 users, we need to create 80 db users.

That is very useful when you want to see who is using the db, but very
tiring when you want to create or manage them (plus roles, etc.).

I thought that (if possible) we could avoid this mechanism if the
session could store additional information.

Then I would need to create only ONE db user; I would have an aux table
with usernames/passwords, and if the login was successful, I would only
need to put the real username into an additional session "field".

In the view I would see all users as one, but an extra column would show
the "real username".

Does PostgreSQL support this kind of session-information extension, or
any other way I can use one db user for several real users and still get
this info (the real username)?

Maybe the solution is a table with:
Session ID (PID???), UserName, Logged in

assuming the PID is unique over the whole lifetime of the server, and I
can get just the last login for each PID.
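
As a rough illustration, the tracking table described above might look like
this (the table and column names are invented for this sketch, not an
existing schema; the pid would have to come from pg_stat_activity):

```sql
-- Hypothetical session-tracking table; nothing here is built into PostgreSQL.
CREATE TABLE session_users (
    pid        integer,                     -- backend PID of the session
    username   text,                        -- the "real" application user
    logged_in  timestamptz DEFAULT now()    -- login timestamp
);

-- Latest login per PID. Note PIDs are reused by the OS over time,
-- so this is only a heuristic, as the poster suspects:
SELECT DISTINCT ON (pid) pid, username, logged_in
FROM session_users
ORDER BY pid, logged_in DESC;
```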


Thanks for your help:
dd



Re: [GENERAL] Why Hard-Coded Version 9.1 In Names?

2012-01-31 Thread Marti Raudsepp
On Tue, Jan 31, 2012 at 00:41, Jerry Richards
 wrote:
> I just installed postgreSQL 9.1 and noticed it hard-codes the folder
> /var/lib/pgsql/9.1 and it hard-codes the service name to be postgresql91.

> Why is the hard-coded version included in the naming?

Note that this is done by Linux distributions, vanilla PostgreSQL
doesn't use version-specific paths.

The reason is that the PostgreSQL on-disk format is not
forward-compatible. In order to upgrade from one Postgres version to
the next, you need to have *both* versions installed at once. As
annoying as it is, version-specific paths is a pretty foolproof way to
enable that.

Regards,
Marti



Re: [GENERAL] Extending Session / Logged User info

2012-01-31 Thread Guillaume Lelarge
On Tue, 2012-01-31 at 04:16 -0800, durumdara wrote:
> Dear PGSQL Masters!
> 
> I have a complicated question.
> 
> Is there any way to extend the session information (what is visible in
> pgAdmin's "Server Status")?
> 
> Let me explain what I want to do:
> 
> We used an alternative database at one site, and I need to port it.
> In that database we use a different db user for every real user.
> When the program logs in to the server, it asks for a username/password
> and logs in with it.
> So if John XY logs in, the session is shown as "John XY", who is both a
> real user and a db user.
> 
> This means that if we have 80 users, we need to create 80 db users.
> 
> That is very useful when you want to see who is using the db, but very
> tiring when you want to create or manage them (plus roles, etc.).
> 
> 

Creating 80 users is not really a big burden. Managing privileges for 80
users will be. So you should create a few roles on which you
grant/revoke privileges, and make each user a member of one or more
of those roles.
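
A minimal sketch of that role setup (all object names here are invented for
illustration; adjust the privileges to your schema):

```sql
-- Group privileges into a non-login role...
CREATE ROLE app_readonly NOLOGIN;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO app_readonly;

-- ...then each real user is a login role that inherits from it.
CREATE ROLE john_xy LOGIN PASSWORD 'change-me';
GRANT app_readonly TO john_xy;
```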


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
PostgreSQL Sessions #3: http://www.postgresql-sessions.org




Re: [GENERAL] Extending Session / Logged User info

2012-01-31 Thread Durumdara
Hi!

2012/1/31 Guillaume Lelarge 

> On Tue, 2012-01-31 at 04:16 -0800, durumdara wrote:
> > Dear PGSQL Masters!
> >
> > I have a complicated question.
> >
> > Is there any way to extend the session information (what is visible in
> > pgAdmin's "Server Status")?
> >
> > Let me explain what I want to do:
> >
> > We used an alternative database at one site, and I need to port it.
> > In that database we use a different db user for every real user.
> > When the program logs in to the server, it asks for a username/password
> > and logs in with it.
> > So if John XY logs in, the session is shown as "John XY", who is both a
> > real user and a db user.
> >
> > This means that if we have 80 users, we need to create 80 db users.
> >
> > That is very useful when you want to see who is using the db, but very
> > tiring when you want to create or manage them (plus roles, etc.).
> >
> >
>
> Creating 80 users is not really a big burden. Managing privileges for 80
> users will be. So you should create a few roles on which you
> grant/revoke privileges, and make each user a member of one or more
> of those roles.
>

This also causes problems when I don't want to install a new PGSQL server,
but they want to see the old, closed archives.
Right now one solution is to put prefixes on the usernames:
main_[name]
archive1_[name]
and have the program set these prefixes automatically.

But it would be easier if one DB user could represent all users, and I
could tell them apart by extending the session info (additional
"fields"):

main_user
archive1_user
archive2_user

Thanks:
dd


Re: [GENERAL] Index on parent/child hierarchy

2012-01-31 Thread Merlin Moncure
On Sun, Jan 29, 2012 at 5:55 AM, Dmitriy Igrishin  wrote:
>> The point here is that you can exploit the tree structure with a btree
>> index.  Before we got recursive queries, this was often the best way
>> to do it, but now it's kind of a niche solution to be used when
>> certain things fall into place.
>>
>> merlin
>
> Another approach is to use ltree. It's easy and robust.
> http://www.postgresql.org/docs/9.1/static/ltree.html

yeah -- good point.

merlin
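
For readers unfamiliar with ltree, a minimal sketch (the table and labels
here are invented for illustration; ltree ships as a contrib module):

```sql
CREATE EXTENSION ltree;

CREATE TABLE tree (path ltree PRIMARY KEY);
INSERT INTO tree VALUES ('top'), ('top.science'), ('top.science.astronomy');

-- GiST index to accelerate ancestor/descendant queries:
CREATE INDEX tree_path_gist ON tree USING GIST (path);

-- All descendants of top.science (including itself):
SELECT path FROM tree WHERE path <@ 'top.science';
```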



Re: [GENERAL] parameter "vacuum_defer_cleanup_age"

2012-01-31 Thread Tulio

  
  
Fabrizio,

I'm using the default value.
The situation arose when I executed a large query on the slave server,
because I'm trying to do load balancing.

Could somebody help me solve this problem?

System: CentOS
PostgreSQL version: 9.1

On 30/01/2012 22:12, Fabrízio de Royes Mello wrote:
> 2012/1/30 Tulio
>> Hi,
>>
>> I have 2 servers, working with Hot Standby and Streaming
>> Replication, and when we execute a very large query it returns the
>> message "canceling statement due to statement timeout".
>> I want to know: how can I calculate a better value for
>> "vacuum_defer_cleanup_age" in my case?
>>
>> Tell me if you recommend changes to any other parameter.
>
> What is the value of the parameter "statement_timeout"?
>
> -- 
> Fabrízio de Royes Mello
> Consultoria/Coaching PostgreSQL
> Blog sobre TI: http://fabriziomello.blogspot.com
> Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
> Twitter: http://twitter.com/fabriziomello

-- 
Túlio Gustavo dos Santos
Database Administrator - IT
Informídia Pesquisas Esportivas Ltda.
Tel./Fax.: 55 11 3673 5566
Email: tu...@informidia.com.br
twitter: @informidia
Web: www.informidia.com.br

Before printing, remember your responsibility and commitment to the
environment.


[GENERAL] Intermittent occurrence of ERROR: could not open relation

2012-01-31 Thread Nykolyn, Andy (AS)
I have been running PostgreSQL 8.4 on Windows Server 2003 Service Pack 2 for 
almost 3 years.  I have recently been intermittently getting the error "ERROR:  
could not open relation base/85599592/121526263: No such file or directory".  
It only appears to be happening during the execution of stored procedures where 
an insert into a temp table is performed.  The error goes away after the user 
reconnects with a new session.

This database periodically gets copied to a PostgreSQL 8.4 server on a Linux 
RedHat 4.5 machine where the same error also has started intermittently 
occurring; again it only appears to happen during the execution of stored 
procedures where an insert into a temp table is performed.

Does anybody have an idea what might be happening?  Are there any config 
settings for temp tables that might help prevent this error?  Any help would be 
much appreciated.  Thank you.

Andy Nykolyn
Northrop Grumman



[GENERAL] Gist cost estimates

2012-01-31 Thread Matthias
Hello,

I've noticed the new range data types in 9.2dev. I'm really looking
forward to using them, so I built postgres 9.2dev on Windows to try them out.

While testing I noticed one thing. I have a simple test table with 1
million rows. There's a column called valid_range (of type int4range)
which is GiST indexed. Now when I do a query like

 select * from mytable where valid_range && int4range(100,200)

it will use the created gist index. But it will completely fail with
the cost estimation. For whatever reason it always assumes 5104 rows
will be returned, while in reality more than 300k rows are returned.
If I change the query to look like

 select * from mytable where valid_range && int4range(null,null)

it will still estimate 5104 rows to be returned (in reality it's 1M
rows -- the whole table). This leads to grossly inefficient query
plans.
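
A self-contained reproduction along those lines might look like this (the
table and column names follow the question; the data is invented):

```sql
-- Hypothetical reproduction of the estimation problem described above.
CREATE TABLE mytable AS
SELECT i AS id, int4range(i, i + 200) AS valid_range
FROM generate_series(1, 1000000) AS s(i);

CREATE INDEX mytable_range_gist ON mytable USING GIST (valid_range);
ANALYZE mytable;

-- Compare the planner's estimated rows against the actual rows:
EXPLAIN ANALYZE
SELECT * FROM mytable WHERE valid_range && int4range(100, 200);
```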

Curiously I have the same problem with postgres' cube data type
(tested on 9.1 and which also estimates exactly 5104 rows). And
postgis indexes have a similar (though maybe unrelated) problem.

 Do you have any explanation for these grossly wrong cost estimates?
Are they unimplemented? What can I do to debug this further?

 Thank you,
 -Matthias

 P.S.: I've already increased the statistics collection size (done by
 vacuum analyze) to no avail



Re: [GENERAL] Intermittent occurrence of ERROR: could not open relation

2012-01-31 Thread Adrian Klaver
On Tuesday, January 31, 2012 8:08:24 am Nykolyn, Andy (AS) wrote:
> I have been running PostgreSQL 8.4 on Windows Server 2003 Service Pack 2
> for almost 3 years.  I have recently been intermittently getting the error
> "ERROR:  could not open relation base/85599592/121526263: No such file or
> directory".  It only appears to be happening during the execution of
> stored procedures where an insert into a temp table is performed.  The
> error goes away after the user reconnects with a new session.
> 
> This database periodically gets copied to a PostgreSQL 8.4 server on a
> Linux RedHat 4.5 machine where the same error also has started
> intermittently occurring; again it only appears to happen during the
> execution of stored procedures where an insert into a temp table is
> performed.
> 
> Does anybody have an idea what might be happening?  Are there any config
> settings for temp tables that might help prevent this error?  Any help
> would be much appreciated.  Thank you.

Some questions first:
1) What language are you using in the stored procedures?
2) How are the temp tables being created? For instance is there an ON COMMIT 
DROP clause?
3) How are the stored procedures being called?
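
For reference, the two temp-table patterns question 2 distinguishes look like
this (table names invented):

```sql
-- Dropped automatically at the end of the current transaction:
CREATE TEMP TABLE scratch (id int) ON COMMIT DROP;

-- Lives until the session ends or it is dropped explicitly:
CREATE TEMP TABLE scratch2 (id int);
DROP TABLE scratch2;
```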

> 
> Andy Nykolyn
> Northrop Grumman

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: EXT :Re: [GENERAL] Intermittent occurrence of ERROR: could not open relation

2012-01-31 Thread Nykolyn, Andy (AS)
Subject: EXT :Re: [GENERAL] Intermittent occurrence of ERROR: could not open 
relation

On Tuesday, January 31, 2012 8:08:24 am Nykolyn, Andy (AS) wrote:
> I have been running PostgreSQL 8.4 on Windows Server 2003 Service Pack 2
> for almost 3 years.  I have recently been intermittently getting the error
> "ERROR:  could not open relation base/85599592/121526263: No such file or
> directory".  It only appears to be happening during the execution of
> stored procedures where an insert into a temp table is performed.  The
> error goes away after the user reconnects with a new session.
> 
> This database periodically gets copied to a PostgreSQL 8.4 server on a
> Linux RedHat 4.5 machine where the same error also has started
> intermittently occurring; again it only appears to happen during the
> execution of stored procedures where an insert into a temp table is
> performed.
> 
> Does anybody have an idea what might be happening?  Are there any config
> settings for temp tables that might help prevent this error?  Any help
> would be much appreciated.  Thank you.

Some questions first:
1) What language are you using in the stored procedures?
The stored procedures are in Pl/Pgsql

2) How are the temp tables being created? For instance is there an ON COMMIT 
DROP clause?
They are created at the start of stored procedure and dropped at the end.  
There is no ON COMMIT 
DROP clause.

3) How are the stored procedures being called?
They are usually called from a java client but I have also seen this issue when 
called from a PgAdmin session screen.
> 
> Andy Nykolyn
> Northrop Grumman

-- 
Adrian Klaver
adrian.kla...@gmail.com



[GENERAL] Help speeding up a left join aggregate

2012-01-31 Thread Nick
I have a pretty well tuned setup, with appropriate indexes and 16GB of
available RAM. Should this be taking this long? I forced it to not use
a sequential scan and that only knocked a second off the plan.

QUERY
PLAN
--
 Hash Right Join  (cost=105882.35..105882.47 rows=3 width=118) (actual
time=3931.567..3931.583 rows=4 loops=1)
   Hash Cond: (songs_downloaded.advertisement_id = a.id)
   ->  HashAggregate  (cost=105881.21..105881.26 rows=4 width=13)
(actual time=3931.484..3931.489 rows=3 loops=1)
 ->  Seq Scan on songs_downloaded  (cost=0.00..95455.96
rows=1042525 width=13) (actual time=0.071..1833.680 rows=1034752
loops=1)
   Filter: (advertiser_id = 6553406)
   ->  Hash  (cost=1.10..1.10 rows=3 width=46) (actual
time=0.050..0.050 rows=4 loops=1)
 Buckets: 1024  Batches: 1  Memory Usage: 1kB
 ->  Seq Scan on advertisements a  (cost=0.00..1.10 rows=3
width=46) (actual time=0.037..0.041 rows=4 loops=1)
   Filter: (advertiser_id = 6553406)
 Total runtime: 3931.808 ms
(10 rows)

SELECT a.id, sd.price, COALESCE(sd.downloads,0) AS downloads,
COALESCE(sd.download_revenue,0) AS download_revenue
FROM advertisements a
LEFT JOIN (SELECT advertisement_id, AVG(price) AS price, SUM(price) AS
download_revenue, COUNT(1) AS downloads FROM songs_downloaded WHERE
advertiser_id = 6553406 GROUP BY advertisement_id) AS sd ON a.id =
sd.advertisement_id
WHERE advertiser_id = 6553406



[GENERAL] HELP - compiling C functions for Windows

2012-01-31 Thread Edwin Quijada

Hi!
I am trying to compile a C function for Windows using VS 2008 Express and I am
getting these errors:

pg2.c
c:\program files\postgresql\9.0\include\server\pg_config_os.h(203) : error C2011: 'timezone' : 'struct' type redefinition
c:\program files\postgresql\9.0\include\server\pg_config_os.h(203) : see declaration of 'timezone'
c:\program files\postgresql\9.0\include\server\pg_config_os.h(212) : error C2011: 'itimerval' : 'struct' type redefinition
c:\program files\postgresql\9.0\include\server\pg_config_os.h(212) : see declaration of 'itimerval'
c:\program files\postgresql\9.0\include\server\utils/elog.h(118) : error C2365: 'errcode' : redefinition; previous definition was 'typedef'
C:\Program Files\Microsoft Visual Studio 9.0\VC\INCLUDE\crtdefs.h(560) : see declaration of 'errcode'

The command that I used to compile is this:

cl /I "\Program Files\postgreSQL\9.0\include" /I "\Program Files\postgreSQL\9.0\include\server" /I "\Program Files\postgreSQL\9.0\include\server\port\win32" /c "D:\exte_c\pg2.c"

Any clues or any ideas? I am using PG 9.0.6.

*---* 
*-Edwin Quijada 
*-Developer DataBase 
*-JQ Microsistemas 

*-Soporte PostgreSQL

*-www.jqmicrosistemas.com
*-809-849-8087
*---*

  

[GENERAL] normalizing & join to table function

2012-01-31 Thread Little, Douglas

Hello,
Need some help.  Hoping some of the smart people might know how to solve this.

I'd like to replace all name/value pairs in a delimited string  with the id of 
the name/value pair in my reference table.
Eg
St=IL&city=Chicago&street=Madison
To
13&50&247
Assuming  St=IL  is id 13,  city=Chicago=50, street=Madison=247

My process is working, but it's taking too long to convert the string to rows.
Any ideas for swapping out the cursor for a SQL trick?
Thanks in advance
Doug

I'm currently cursoring through the input rows, and then converting the string
to rows, first with array_to_string and now with explode_array.
-- Current code
   sql_cmd := ' SELECT   hash_page , log_cs_uri_query  FROM dim_wtlog_page_temp 
';
   FOR recset IN EXECUTE sql_cmd LOOP
insert into pagetag_temp (hash_page, wtlog_tagvalue_text)
 select recset.hash_page   ,qry.* as wtlog_tagvalue_text
 from explode_array (string_to_array(recset.log_cs_uri_query,'&') ) qry 
 ;
   END LOOP;

create or replace function explode_array( in_array anyarray)
returns setof anyelement as
$$
select ($1)[s] from generate_series(1,array_upper($1, 1)) as s;
$$
language sql immutable;
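
As an aside, on 8.4 and later the built-in unnest() does the same job as the
hand-rolled explode_array above (illustrative only, using a sample string from
this message):

```sql
-- unnest() expands an array into rows, like explode_array:
SELECT elem
FROM unnest(string_to_array('St=IL&city=Chicago&street=Madison', '&')) AS u(elem);
-- elem: 'St=IL', 'city=Chicago', 'street=Madison'
```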

Doug Little

Sr. Data Warehouse Architect | Business Intelligence Architecture | Orbitz 
Worldwide
500 W. Madison, Suite 1000  Chicago IL 60661| Office 312.260.2588 | Fax 
312.894.5164 | Cell 847-997-5741
douglas.lit...@orbitz.com
orbitz.com | 
ebookers.com | 
hotelclub.com | 
cheaptickets.com | 
ratestogo.com | 
asiahotels.com


Re: EXT :Re: [GENERAL] Intermittent occurrence of ERROR: could not open relation

2012-01-31 Thread Adrian Klaver
On Tuesday, January 31, 2012 9:42:07 am Nykolyn, Andy (AS) wrote:

> 
> Some questions first:
> 1) What language are you using in the stored procedures?
> The stored procedures are in Pl/Pgsql

Temp table in PL/pgSQL was improved in 8.3 to handle this case, so you should 
be 
covered.

http://www.postgresql.org/docs/8.4/interactive/release-8-3.html

"
Automatically re-plan cached queries when table definitions change or 
statistics 
are updated (Tom)

Previously PL/pgSQL functions that referenced temporary tables would fail if 
the 
temporary table was dropped and recreated between function invocations, unless 
EXECUTE was used. This improvement fixes that problem and many related issues.
"
> 
> 2) How are the temp tables being created? For instance is there an ON
> COMMIT DROP clause?
> They are created at the start of stored procedure and dropped at the end. 
> There is no ON COMMIT DROP clause.

Are you using EXECUTE or doing a straight CREATE TEMP TABLE ...? 
Maybe a sample with sensitive info changed?
The insert you mention happens in the same procedure or is there a nested 
procedure?

> 
> 3) How are the stored procedures being called?
> They are usually called from a java client but I have also seen this issue
> when called from a PgAdmin session screen.

Is there a database pooler in the mix?
Are there any other errors in the logs at the same time that might pertain? 

> 
> > Andy Nykolyn
> > Northrop Grumman

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: EXT :Re: [GENERAL] Intermittent occurrence of ERROR: could not open relation

2012-01-31 Thread Nykolyn, Andy (AS)

Are you using EXECUTE or doing a straight CREATE TEMP TABLE ...? 
Maybe a sample with sensitive info changed?
The insert you mention happens in the same procedure or is there a nested 
procedure?

It is a straight CREATE TEMP TABLE.  It does not happen in the same procedure 
every time.  It happens in different procedures, which are sometimes nested.  
These stored procedures have been working the same way for many years.  Only 
recently has this error been occurring intermittently on temp tables.  
Sometimes it occurs on the CREATE, sometimes on an INSERT, UPDATE or 
DELETE on a temp table.

Is there a database pooler in the mix?
Are there any other errors in the logs at the same time that might pertain? 

There is no database pooler and there are never any other errors in the log at 
the time of this error.




Re: [GENERAL] pg_dump -s dumps data?!

2012-01-31 Thread Martijn van Oosterhout
On Mon, Jan 30, 2012 at 11:18:31PM -0500, Tom Lane wrote:
> I don't recall that we thought very hard about what should happen when
> pg_dump switches are used to produce a selective dump, but ISTM
> reasonable that if it's "user data" then it should be dumped only if
> data in a regular user table would be.  So I agree it's pretty broken
> that "pg_dump -t foo" will dump data belonging to a config table not
> selected by the -t switch.  I think this should be changed in both HEAD
> and 9.1 (note that HEAD will presumably return to the 9.1 behavior once
> that --exclude-table-data patch gets fixed).

Perhaps a better way of dealing with this is providing a way of dumping
extensions explicitly. Then you could say:

pg_dump --extension=postgis -s

to get the data. And you can use all the normal pg_dump options for
controlling the output. The flag currently used to separate the table
schema from the table content could then interact logically. Another
way perhaps:

pg_dump --extension-postgis=data-only
pg_dump --extension-postgis=schema
pg_dump --extension-postgis=all
pg_dump --extension-postgis=none

The last being the default.

Just throwing out some completely different ideas.

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> He who writes carelessly confesses thereby at the very outset that he does
> not attach much importance to his own thoughts.
   -- Arthur Schopenhauer




Re: [GENERAL] list blocking queries

2012-01-31 Thread Scot Kreienkamp
Tom,

There's something wrong with the query that I've written based on what you gave 
me.

Here's what I have:

select bl.pid as Blocked_PID,
a.usename as Blocked_User,
kl.pid as Blocking_PID,
ka.usename as Blocking_User,
to_char(age(now(), a.query_start),'HH24h:MIm:SSs') as Age
from pg_catalog.pg_locks bl
join pg_catalog.pg_stat_activity a on bl.pid = a.procpid
join pg_catalog.pg_locks kl
join pg_catalog.pg_stat_activity ka on bl.locktype = kl.locktype
and bl.database is not distinct from kl.database
and bl.relation is not distinct from kl.relation
and bl.page is not distinct from kl.page
and bl.tuple is not distinct from kl.tuple
and bl.virtualxid is not distinct from kl.virtualxid
and bl.transactionid is not distinct from kl.transactionid
and bl.classid is not distinct from kl.classid
and bl.objid is not distinct from kl.objid
and bl.objsubid is not distinct from kl.objsubid
and bl.pid != kl.pid
where kl.granted and not bl.granted

When I run that I get:
ERROR:  syntax error at or near "where"
LINE 20: where kl.granted and not bl.granted

I think I'm missing something in one of the joins.

Thanks!

Scot Kreienkamp
Senior Systems Engineer
skre...@la-z-boy.com


-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Monday, January 30, 2012 9:49 PM
To: Scot Kreienkamp
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] list blocking queries

Scot Kreienkamp  writes:
> My apologies for the slightly novice post on this, but I'm a bit stumped.  I 
> have this query that I found on the net and adapted a little to find the 
> queries that were blocking:

> "select bl.pid as \"Blocked PID\", a.usename as \"Blocked User\", kl.pid as 
> \"Blocking PID\", ka.usename as \"Blocking User\", to_char(age(now(), 
> a.query_start),'HH24h:MIm:SSs') as \"Age\"  from pg_catalog.pg_locks bl join 
> pg_catalog.pg_stat_activity a on bl.pid = a.procpid join pg_catalog.pg_locks 
> kl join pg_catalog.pg_stat_activity ka on kl.pid = ka.procpid on 
> bl.transactionid = kl.transactionid and bl.pid != kl.pid where not 
> bl.granted;"

Hm, that would only have worked for rather small values of "work",
because it's matching pg_locks entries on the basis of the transactionid
field, which means it will only detect conflicts for locks on
transaction IDs.  There are a lot of other types of locks.  You need
something more like

join ... on bl.locktype = kl.locktype
and bl.database is not distinct from kl.database
and bl.relation is not distinct from kl.relation
and bl.page is not distinct from kl.page
and bl.tuple is not distinct from kl.tuple
and bl.virtualxid is not distinct from kl.virtualxid
and bl.transactionid is not distinct from kl.transactionid
and bl.classid is not distinct from kl.classid
and bl.objid is not distinct from kl.objid
and bl.objsubid is not distinct from kl.objsubid
and bl.pid != kl.pid

Since most of these fields will be nulls in any specific rows, you have
to use "is not distinct from" not just "=".  Tedious, I know.

The WHERE clause seems a few bricks shy of a load as well; you need

where kl.granted and not bl.granted

if you don't want it to claim that fellow blockees are blocking each
other.  (In some cases that would actually be a fair statement, but
I don't think it's possible to tell from pg_locks who's queued behind
whom in the wait-list for a lock, so it's probably best not to try
to show those relationships.)

regards, tom lane






Re: [GENERAL] Gist cost estimates

2012-01-31 Thread Tom Lane
Matthias  writes:
> I've noticed the new range data types in 9.2dev. I'm really looking
> forward to use them, so I built postgres 9.2dev on windows to try.
> ...
>  Do you have any explanation for these grossly wrong cost estimates?

The range operators don't have any selectivity estimation worthy of the
name yet.  I'm still hoping to see that fixed before 9.2 final, but
the days grow short ...

regards, tom lane



Re: EXT :Re: [GENERAL] Intermittent occurrence of ERROR: could not open relation

2012-01-31 Thread Tom Lane
"Nykolyn, Andy (AS)"  writes:
> It is a straight CREATE TEMP TABLE.  It does not happen in the same procedure 
> every time.  It happens in different procedures, which are sometimes nested.  
> These stored procedures have been working the same way for many years.  Only 
> recently has this error been occurring intermittently on temp tables.  
> Sometimes it occurs on the CREATE, sometimes on an INSERT, UPDATE 
> or DELETE on a temp table.

8.4.what exactly, and did you update versions around the time this
started happening?  I'm worried that this may represent a
newly-introduced bug.  Can you provide a self-contained test case?
It doesn't matter if it only fails occasionally, as long as we can
keep running it till it does fail.

regards, tom lane



Re: [GENERAL] list blocking queries

2012-01-31 Thread Tom Lane
Scot Kreienkamp  writes:
> There's something wrong with the query that I've written based on what you 
> gave me.

It looks like you forgot the ON condition for the next-to-last JOIN,
so the parser is still expecting another ON when it runs into the WHERE.

Personally I think this sort of FROM construction is pretty bad style
anyway, because it takes an eagle eye to see which ON goes with which
JOIN, and even worse it's very easy to get them connected up wrongly.
You'd be a lot better off with explicit parentheses, viz

from
(pg_catalog.pg_locks bl join pg_catalog.pg_stat_activity a on bl.pid = 
a.procpid)
  join
(pg_catalog.pg_locks kl join pg_catalog.pg_stat_activity ka on kl.pid = 
ka.procpid)
  on the-long-on-condition-goes-here

regards, tom lane
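
Putting the two suggestions together, the assembled query would look roughly
like this (an untested sketch; pg_stat_activity.procpid applies to the pre-9.2
servers discussed in this thread):

```sql
select bl.pid as blocked_pid,
       a.usename as blocked_user,
       kl.pid as blocking_pid,
       ka.usename as blocking_user,
       to_char(age(now(), a.query_start), 'HH24h:MIm:SSs') as age
from
  (pg_catalog.pg_locks bl
   join pg_catalog.pg_stat_activity a on bl.pid = a.procpid)
  join
  (pg_catalog.pg_locks kl
   join pg_catalog.pg_stat_activity ka on kl.pid = ka.procpid)
  on bl.locktype = kl.locktype
 and bl.database is not distinct from kl.database
 and bl.relation is not distinct from kl.relation
 and bl.page is not distinct from kl.page
 and bl.tuple is not distinct from kl.tuple
 and bl.virtualxid is not distinct from kl.virtualxid
 and bl.transactionid is not distinct from kl.transactionid
 and bl.classid is not distinct from kl.classid
 and bl.objid is not distinct from kl.objid
 and bl.objsubid is not distinct from kl.objsubid
 and bl.pid != kl.pid
where kl.granted and not bl.granted;
```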



Re: [GENERAL] normalizing & join to table function

2012-01-31 Thread Marc Mamin

> Hello,
> Need some help.  Hoping some of the smart people might know how to solve this.
> 
> I'd like to replace all name/value pairs in a delimited string  with the id 
> of the name/value pair in my reference table.
> Eg
> St=IL&city=Chicago&street=Madison
> To
> 13&50&247
> Assuming  St=IL  is id 13,  city=Chicago=50, street=Madison=247
> 
> My process is working, but it's taking too long to convert the string to rows. 
> Any ideas for swapping out the cursor for a SQL trick?


Hello,

I would try to first explode all at once to a temp table using 
regexp_split_to_table,
then update with your IDs, and aggregate back to the desired form.
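
A set-based sketch of that idea, using the table and column names from the
original post (the target table name is invented):

```sql
-- Explode every name/value pair to its own row in one statement,
-- instead of looping over a cursor:
CREATE TEMP TABLE pagetag_temp2 AS
SELECT hash_page,
       regexp_split_to_table(log_cs_uri_query, '&') AS wtlog_tagvalue_text
FROM dim_wtlog_page_temp;
```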

regards,

Marc Mamin

> Thanks in advance
> Doug
> 
> I'm currently cursoring thru the input rows, and then converting the string 
> to rows using  1st array_to_string,  now explode_array.
> -- Current code
>sql_cmd := ' SELECT   hash_page , log_cs_uri_query  FROM 
> dim_wtlog_page_temp ';
>FOR recset IN EXECUTE sql_cmd LOOP
> insert into pagetag_temp (hash_page, wtlog_tagvalue_text)
>  select recset.hash_page   ,qry.* as wtlog_tagvalue_text
>  from explode_array (string_to_array(recset.log_cs_uri_query,'&') ) 
> qry  ;
>END LOOP;
> 
> create or replace function explode_array( in_array anyarray)
> returns setof anyelement as
> $$
> select ($1)[s] from generate_series(1,array_upper($1, 1)) as s;
> $$
> language sql immutable;
> 
> Doug Little
> 
> Sr. Data Warehouse Architect | Business Intelligence Architecture | Orbitz 
> Worldwide
> 500 W. Madison, Suite 1000  Chicago IL 60661| Office 312.260.2588 | Fax 
> 312.894.5164 | Cell 847-997-5741
> douglas.lit...@orbitz.com
>  [cid:image001.jpg@01CCE011.A46685F0]   orbitz.com | 
> ebookers.com | 
> hotelclub.com | 
> cheaptickets.com | 
> ratestogo.com | 
> asiahotels.com
> 
> 
> 
<>
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] [GENERAL] pg_dump -s dumps data?!

2012-01-31 Thread Adrian Klaver

On 01/31/2012 04:36 AM, Robert Haas wrote:
> On Mon, Jan 30, 2012 at 11:18 PM, Tom Lane  wrote:
>> I don't recall that we thought very hard about what should happen when
>> pg_dump switches are used to produce a selective dump, but ISTM
>> reasonable that if it's "user data" then it should be dumped only if
>> data in a regular user table would be.
>
> Yep.
>
>> What's not apparent to me is whether there's an argument for doing more
>> than that.  It strikes me that the current design is not very friendly
>> towards the idea of an extension that creates a table that's meant
>> solely to hold user data --- you'd have to mark it as "config" which
>> seems a bit unfortunate terminology for that case.  Is it important to
>> do something about that, and if so what?
>
> Is this anything more than a naming problem?


Seems to me that would be dependent on what the future plans are for the 
extension mechanism. There is also the issue of backward compatibility 
for those people that are using configuration tables in their extensions 
and would like to maintain that separation. I could see adding another 
function that is similar and would be used to identify strictly user 
data tables.
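
For context, the marking being discussed happens in the extension's install
script via pg_extension_config_dump; a minimal sketch (the extension table
name here is hypothetical):

```sql
-- In the extension's install script: create a table and flag it so that
-- pg_dump treats its contents as user data rather than as part of the
-- extension itself. The second argument is an optional WHERE filter.
CREATE TABLE my_ext_config (key text PRIMARY KEY, value text);
SELECT pg_catalog.pg_extension_config_dump('my_ext_config', '');
```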




--
Adrian Klaver
adrian.kla...@gmail.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help speeding up a left join aggregate

2012-01-31 Thread Alban Hertroys

On 31 Jan 2012, at 4:55, Nick wrote:

> I have a pretty well tuned setup, with appropriate indexes and 16GB of
> available RAM. Should this be taking this long? I forced it to not use
> a sequential scan and that only knocked a second off the plan.
> 
>QUERY
> PLAN
> --
> Hash Right Join  (cost=105882.35..105882.47 rows=3 width=118) (actual
> time=3931.567..3931.583 rows=4 loops=1)
>   Hash Cond: (songs_downloaded.advertisement_id = a.id)
>   ->  HashAggregate  (cost=105881.21..105881.26 rows=4 width=13)
> (actual time=3931.484..3931.489 rows=3 loops=1)
> ->  Seq Scan on songs_downloaded  (cost=0.00..95455.96
> rows=1042525 width=13) (actual time=0.071..1833.680 rows=1034752
> loops=1)
>   Filter: (advertiser_id = 6553406)
>   ->  Hash  (cost=1.10..1.10 rows=3 width=46) (actual
> time=0.050..0.050 rows=4 loops=1)
> Buckets: 1024  Batches: 1  Memory Usage: 1kB
> ->  Seq Scan on advertisements a  (cost=0.00..1.10 rows=3
> width=46) (actual time=0.037..0.041 rows=4 loops=1)
>   Filter: (advertiser_id = 6553406)
> Total runtime: 3931.808 ms
> (10 rows)

I bet the group by query would be far more selective on advertisement_id than 
on the advertiser_id it's currently using, wouldn't it?
Perhaps the query planner chooses the wrong filter here because the 
advertiser_id is in the inner query, while the advertisement_id is outside it. 
You could try and see what happens if you move the advertiser_id into the join 
condition:

SELECT a.id, sd.price, COALESCE(sd.downloads,0) AS downloads,
COALESCE(sd.download_revenue,0) AS download_revenue
FROM advertisements a
LEFT JOIN (SELECT advertiser_id, advertisement_id, AVG(price) AS price,
   SUM(price) AS download_revenue, COUNT(1) AS downloads
   FROM songs_downloaded
   GROUP BY advertiser_id, advertisement_id) AS sd
  ON a.id = sd.advertisement_id AND a.advertiser_id = sd.advertiser_id
WHERE a.advertiser_id = 6553406

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] [GENERAL] pg_dump -s dumps data?!

2012-01-31 Thread Tom Lane
Adrian Klaver  writes:
> On 01/31/2012 04:36 AM, Robert Haas wrote:
>> On Mon, Jan 30, 2012 at 11:18 PM, Tom Lane  wrote:
>>> What's not apparent to me is whether there's an argument for doing more
>>> than that.  It strikes me that the current design is not very friendly
>>> towards the idea of an extension that creates a table that's meant
>>> solely to hold user data --- you'd have to mark it as "config" which
>>> seems a bit unfortunate terminology for that case.  Is it important to
>>> do something about that, and if so what?

>> Is this anything more than a naming problem?

> Seems to me that would be dependent on what the future plans are for the 
> extension mechanism.

My thought exactly --- maybe it's only a minor cosmetic issue that will
affect few people, or maybe this will someday be a major use-case.
I don't know.  I was hoping Dimitri had an opinion.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] INSERT with RETURNING clause inside SQL function

2012-01-31 Thread xavieremv
(k)

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/INSERT-with-RETURNING-clause-inside-SQL-function-tp1923653p5445810.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Sequence Ids are not updating after COPY operation in PostgreSQL

2012-01-31 Thread Siva Palanisamy
Hi there!

I'm inserting bulk records using the COPY statement in PostgreSQL, and it is 
successful. But when I later try to insert a record manually, it throws a 
duplicate key error on the sequence-generated id column. What I realize is that 
the sequence is not getting advanced during the COPY. Should I manually update 
the sequence after performing COPY to account for the copied records? Isn't 
there a way to have COPY itself increment the sequence for the primary key 
field of the table?

For instance, if I COPY in 200 records, COPY works fine and my table shows all 
the records. When I then insert a record manually, I get a duplicate key error. 
This implies that the sequence was not incremented during the COPY, whereas it 
works fine during a normal INSERT. Instead of having to set the sequence to the 
max id myself, isn't there a mechanism to make the COPY command increment the 
sequence during its bulk copying?

Please clarify this for me. Thanks in advance!

Regards,
Siva.




Re: [GENERAL] Sequence Ids are not updating after COPY operation in PostgreSQL

2012-01-31 Thread Guillaume Lelarge
On Wed, 2012-02-01 at 12:08 +0530, Siva Palanisamy wrote:
> Hi there!
> 
> I'm inserting bulk records using the COPY statement in PostgreSQL, and it is 
> successful. But when I later try to insert a record manually, it throws a 
> duplicate key error on the sequence-generated id column. What I realize is 
> that the sequence is not getting advanced during the COPY. Should I manually 
> update the sequence after performing COPY to account for the copied records? 
> Isn't there a way to have COPY itself increment the sequence for the primary 
> key field of the table?
> 
> For instance, if I COPY in 200 records, COPY works fine and my table shows 
> all the records. When I then insert a record manually, I get a duplicate key 
> error. This implies that the sequence was not incremented during the COPY, 
> whereas it works fine during a normal INSERT. Instead of having to set the 
> sequence to the max id myself, isn't there a mechanism to make the COPY 
> command increment the sequence during its bulk copying?
> 
> Please clarify this for me. Thanks in advance!
> 

You usually need to use DEFAULT for the column (or simply omit the column
from the COPY column list), so that the sequence is auto-incremented.

If you don't use DEFAULT, you need to set the sequence's value yourself.

PS: this isn't specific to COPY. INSERT behaves the same.
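
A minimal sketch of resynchronizing the sequence after such a COPY (table,
column, and file names here are hypothetical):

```sql
-- COPY with explicit ids bypasses the sequence, so bump it to the max id:
COPY my_table (id, name) FROM '/tmp/data.csv' WITH CSV;
SELECT setval(pg_get_serial_sequence('my_table', 'id'),
              (SELECT max(id) FROM my_table));

-- Or omit the id column entirely, so each row takes the DEFAULT nextval():
COPY my_table (name) FROM '/tmp/names.csv' WITH CSV;
```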


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com
PostgreSQL Sessions #3: http://www.postgresql-sessions.org


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general