Re: [GENERAL] Database Design: Maintain Audit Trail of Changes

2013-01-03 Thread Rich Shepard

On Thu, 3 Jan 2013, Bèrto ëd Sèra wrote:


if it's a strict legal requirement you may want to enforce it with a
trigger system, so that each time a record is inserted/updated/deleted
you create an exact copy of it in a historical table, that has the
original record plus data about who performed the operation, when,
from which IP, maybe a comment field, etc. So your actual table
remains limited in size and it's performing well, while the size
problem is local to the audit logs.


Bèrto,

  That's in line with Adrian's suggestion and certainly worth doing. It's
not a strict legal requirement, but it provides the company (and potential
investors) with assurance that the data have not been manipulated.


You also want to use triggers to disable updates and deletes on this
historical table, for a matter of additional security (you might end up
needing a procedure to trim it, however, if it grows out of affordable
bounds).


  Yes, the history table will be read-only to all users; writing done by
triggers only.
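
  For the record, a rough sketch of what I have in mind (table and column
names are only placeholders; all writes to the history table go through the
trigger):

-- Hypothetical base table; substitute the real one.
create table measurements (
    id     serial primary key,
    value  numeric,
    staff  text
);

-- History table: one row per insert/update/delete on the base table.
create table measurements_history (
    hist_id    bigserial primary key,
    operation  text not null,                 -- 'INSERT', 'UPDATE' or 'DELETE'
    changed_at timestamptz not null default now(),
    changed_by text not null default current_user,
    client_ip  inet default inet_client_addr(),
    row_data   measurements not null          -- copy of the affected row
);

create or replace function log_measurements_change() returns trigger as $$
begin
    if tg_op = 'DELETE' then
        insert into measurements_history (operation, row_data) values (tg_op, old);
        return old;
    else
        insert into measurements_history (operation, row_data) values (tg_op, new);
        return new;
    end if;
end;
$$ language plpgsql;

create trigger measurements_audit
    after insert or update or delete on measurements
    for each row execute procedure log_measurements_change();

-- Make the history table effectively append-only for ordinary users.
revoke all on measurements_history from public;
grant select on measurements_history to public;

create or replace function history_is_append_only() returns trigger as $$
begin
    raise exception 'history rows may not be modified or deleted';
end;
$$ language plpgsql;

create trigger measurements_history_guard
    before update or delete on measurements_history
    for each row execute procedure history_is_append_only();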

Much appreciated,

Rich





Re: [GENERAL] Database Design: Maintain Audit Trail of Changes

2013-01-03 Thread Fabrízio de Royes Mello
On Thu, Jan 3, 2013 at 2:50 PM, Rich Shepard wrote:
>
>   There should not be many changes in these tables.

Ok.

>
> And historical data
> cannot be purged or the purpose of maintaining a history is lost. The
> history is valuable for tracking changes over time in regulatory agency
> staff and to prevent data manipulation such as was done several years ago by
> the president of Southwestern Resources (a gold mining company) to pump up
> the company's stock price by changing assay results.

I understand it and for this reason I said to "use some strategy to purge
old historical data *OR* make your audit tables partitioned"...

regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
>> Blog sobre TI: http://fabriziomello.blogspot.com
>> Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
>> Twitter: http://twitter.com/fabriziomello


Re: [GENERAL] Large number of rows in pg_type and slow gui (pgadmin) refresh

2013-01-03 Thread Thomas Kellerer

Robert Klaus wrote on 03.01.2013 16:50:

We have 36,000+ rows returned by " SELECT oid, format_type(oid,
typtypmod) AS typname FROM pg_type".

My manager says this is only a small number compared to what is
expected by next summer.


Why do you need so many types?
That sounds like something in your design is not right.


When I run this select statement on the database server it returns in
under 1 second, but it takes a minute to open some tabs using pgAdmin
III (1.16.1).

Is there a workaround for this - most of the rows returned are from
one schema.  If we could eliminate it from the listed schemas it would
help.  Is this possible in pgAdmin III?



That sounds more like a pgAdmin problem and not a PostgreSQL problem.
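
In the meantime, a query along these lines should at least show which
schemas account for the bulk of those pg_type rows (standard catalogs only,
so it should also work on 8.4):

select n.nspname as schema, count(*) as types
from pg_type t
join pg_namespace n on n.oid = t.typnamespace
group by n.nspname
order by count(*) desc;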


Thomas





Re: [GENERAL] [ADMIN] Unable to reload postgresql.conf without restarting

2013-01-03 Thread Matheus de Oliveira
On Thu, Jan 3, 2013 at 2:25 PM, Jose Martinez wrote:

> Hi,
>
> I made some changes to postgresql.conf and I want them to take effect
> without having to restart the server.
>
> I tried
>
> select pg_reload_conf();
> /usr/pgsql-9.1/bin/pg_ctl reload
>
> but when I do 'show all', I see no changes take effect.
>
> The settings I tried to change are:
> -effective_cache_size
> -work_mem
>
>
They should have been reloaded. Have you set them by role, database or
tablespace?
Or have they been set, by mistake, twice in postgresql.conf? If so, the
last occurrence is the one that takes effect.
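
If it helps, queries along these lines can show any per-role/per-database
overrides and where the effective value comes from (as far as I remember,
the sourcefile/sourceline columns are only populated for superusers):

-- Per-role / per-database overrides (ALTER ROLE ... SET / ALTER DATABASE ... SET)
select * from pg_db_role_setting;

-- Where the currently effective value comes from
select name, setting, source, sourcefile, sourceline
from pg_settings
where name in ('effective_cache_size', 'work_mem');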

Regards,
-- 
Matheus de Oliveira
Analista de Banco de Dados PostgreSQL
Dextra Sistemas - MPS.Br nível F!
www.dextra.com.br/postgres


Re: [GENERAL] [ADMIN] Unable to reload postgresql.conf without restarting

2013-01-03 Thread sk baji
> Hi,
>
> I made some changes to postgresql.conf and I want them to take effect
without
> having to restart the server.
>
> I tried
>
> select pg_reload_conf();
> /usr/pgsql-9.1/bin/pg_ctl reload
>
> but when I do 'show all', I see no changes take effect.
>
> The settings I tried to change are:
> -effective_cache_size
> -work_mem
>
> I'm using Postgres 9.1 on CentOS Linux (Amazon EC2)

I think these parameters are duplicated in the postgresql.conf file. Can you
check whether the same parameters appear again at the bottom of the file? If
so, you need to change them at the bottom (or comment them out).


Regards,
Baji Shaik.





Re: [GENERAL] Database Design: Maintain Audit Trail of Changes

2013-01-03 Thread Bèrto ëd Sèra
Hi again,

> I understand it and for this reason I said to "use some strategy to purge
> old historical data *OR* make your audit tables partitioned"...

yes, prepare to scale up in any case, even if it seems a remote
possibility at the moment. If the "untouched" nature of this data is so
critical, you will have no chance to tamper with it in the future, or it
will lose its value. On the contrary, being able to scale up to a very
large amount of historical data can be sold as a plus to the same
audience/market, since you clearly are planning to "think big".

If it cannot be partitioned because of budget concerns, a low-cost
alternative is to print it out and have it authenticated by a notary
(since your historical records bear a progressive number you clearly
cannot hide "sections" in the process). Pretty much what you do with
book-keeping.

Cheers
Bèrto

-- 
==
If Pac-Man had affected us as kids, we'd all be running around in a
darkened room munching pills and listening to repetitive music.




Re: [GENERAL] Large number of rows in pg_type and slow gui (pgadmin) refresh

2013-01-03 Thread Robert Klaus

Yes, I consider it a tool issue and not a database issue.  Is there
somewhere else I should be posting this to?

Thanks,
Robert



[GENERAL] alter default privileges problem

2013-01-03 Thread Gauthier, Dave
v9.1 on linux

Connect to postgres DB, then...

create user "select" password 'select';
create user "insert" password 'insert';
alter default privileges for user "insert" grant select on tables to "select";
alter default privileges for user "insert" grant select on sequences to "select";
alter default privileges for user "insert" grant execute on functions to "select";

Disconnect.  Reconnect as user "insert", then...

create table foo (a text);
insert into foo (a) values ('aaa');

Disconnect.  Reconnect as user "select", expecting to be able to select 
contents of the "foo" table, but fails with "permission denied for relation 
foo".

Bottom line is that I want the "select" user to be able to query any table, 
sequence or use any function created by user "insert".

Thanks for any help !



Re: [GENERAL] [ADMIN] Unable to reload postgresql.conf without restarting

2013-01-03 Thread Jerry Sievers
Jose Martinez  writes:

> Hi,
>
> I made some changes to postgresql.conf and I want them to take effect without
> having to restart the server.

Check your server log for any report of a syntax error in your .conf
file.

If there is one, that will prevent the changes being loaded.

pg_reload_conf() will still return 't' and you'll be unaware of the
problem.

> I tried
>
> select pg_reload_conf();
> /usr/pgsql-9.1/bin/pg_ctl reload
>
> but when I do 'show all', I see no changes take effect.
>
> The settings I tried to change are:
> -effective_cache_size
> -work_mem
>
> I'm using Postgres 9.1 on CentOS Linux (Amazon EC2)
>
> Thanks
>

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800




Re: [GENERAL] Large number of rows in pg_type and slow gui (pgadmin) refresh

2013-01-03 Thread Thomas Kellerer

Robert Klaus wrote on 03.01.2013 18:45:

Yes, I consider it a tool issue and not a database issue.  Is there
somewhere else I should be posting this to?



There is a pgAdmin mailing list, see here: 
http://www.postgresql.org/community/lists/

Thomas





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] alter default privileges problem

2013-01-03 Thread Gauthier, Dave
The fix had to do with connecting as the "insert" user, then setting the 
default privs.  My mistake was to run the "alter default privileges..." as the 
superuser.
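
In other words, roughly this sequence (a sketch based on the description
above, with the same example role names):

-- Connected as the "insert" user, so the defaults apply to objects it creates:
alter default privileges grant select on tables to "select";
alter default privileges grant select on sequences to "select";
alter default privileges grant execute on functions to "select";

-- Tables created by "insert" from now on are then readable by "select":
create table foo (a text);
insert into foo (a) values ('aaa');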



[GENERAL] Unnecessary files that can be deleted/moved in cluster dir?

2013-01-03 Thread John Abraham
I have a little problem: I let my drive get too full.  And then while I was
deleting rows to free space, the autovacuum didn't kick in quite the way I
expected, and I ran out of space entirely.  So the DB shut down and won't
start back up.

So is there anything (other than the logs in pg_log) that I can delete, or
move temporarily, to save some space and allow the database to start up and
finish its vacuum?

Or is there a way to move some of the stuff to another drive?  The whole 
cluster is too big to move entirely to a new physical drive (the machine is in 
another city so I can't just plug in a USB drive or anything, but I can put 
stuff on network shares for now) and there is nothing else on the partition 
other than the cluster.

Thanks,

--
John Abraham






Re: [GENERAL] alter default privileges problem

2013-01-03 Thread Tom Lane
"Gauthier, Dave"  writes:
> create user "select" password 'select';
> create user "insert" password 'insert';
> alter default privileges for user "insert" grant select on tables to "select";
> alter default privileges for user "insert" grant select on sequences to "select";
> alter default privileges for user "insert" grant execute on functions to "select";

> Disconnect.  Reconnect as user "insert", then...

> create table foo (a text);
> insert into foo (a) values ('aaa');

> Disconnect.  Reconnect as user "select", expecting to be able to select 
> contents of the "foo" table, but fails with "permission denied for relation 
> foo".

Works for me.  Maybe you've got some schema search path confusion,
or some such?  "\dp foo" in psql might be enlightening, too.  What
I see is

regression=> \dp foo
Access privileges
 Schema | Name | Type  |   Access privileges   | Column access privileges
--------+------+-------+-----------------------+--------------------------
 public | foo  | table | select=r/insert      +|
        |      |       | insert=arwdDxt/insert |
(1 row)


regards, tom lane




[GENERAL] Curious unnest behavior

2013-01-03 Thread Jeff Trout
I just ran into an interesting thing with unnest and empty arrays.

create table x (
a int,
b int[]
);

insert into x(a,b) values (1, '{}');
insert into x(a,b) values (1, '{}');
insert into x(a,b) values (1, '{}');

select a, b from x;
select a, unnest(b) from x;

insert into x(a,b) values (2, '{5,6}');
select a, unnest(b) from x;

drop table x;

gives me:
CREATE TABLE
INSERT 0 1
INSERT 0 1
INSERT 0 1
 a | b
---+----
 1 | {}
 1 | {}
 1 | {}
(3 rows)

 a | unnest
---+--------
(0 rows)

INSERT 0 1
 a | unnest
---+--------
 2 |      5
 2 |      6
(2 rows)

DROP TABLE

I can understand the likely reasoning behind the behavior, but perhaps a note in
the documentation about it might be of use for others who may get bitten by this
behavior.  (Especially given the structure of the query: had I been doing
select * from unnest(arr) the result would be more intuitive, but with a plain
select and no where clause the vanishing rows can be surprising.)
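
For anyone who wants the rows with empty arrays to remain visible, a
workaround along these lines appears to do it (a sketch only; the empty
arrays come back as a single NULL):

-- Rows whose array is empty return one NULL element instead of vanishing.
select a,
       unnest(coalesce(nullif(b, '{}'), '{NULL}'::int[])) as b_elem
from x;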

thanks

--
Jeff Trout 






[GENERAL] Unable to reload postgresql.conf without restarting

2013-01-03 Thread Jose Martinez
Hi,

I made some changes to postgresql.conf and I want them to take effect
without having to restart the server.

I tried

select pg_reload_conf();
/usr/pgsql-9.1/bin/pg_ctl reload

but when I do 'show all', I see no changes take effect.

The settings I tried to change are:
-effective_cache_size
-work_mem

I'm using Postgres 9.1 on CentOS Linux (Amazon EC2)

Thanks


Re: [GENERAL] Curious unnest behavior

2013-01-03 Thread Patrick Krecker
I have to say, this seems straightforward to me.  An array with N elements
gets N rows in the result set.  I'm curious what other behavior would be
more reasonable.




Re: [GENERAL] PostgreSQL run as process in windows

2013-01-03 Thread Alban Hertroys
FYI: There's a kernel sockets leak in the 64-bit edition of that OS in
combination with multiple CPU cores (meaning on any slightly modern CPU).
You might be running into that now or later. See:
http://support.microsoft.com/?id=2577795

The issue is over a year old and there's still no Windows update that fixes
it, except for the hotfix in the linked article. Apparently the fix will be
in SP2, but there's no ETA for that.


[GENERAL] Postgresql 9.1 - select statement with multiple "with-clauses" becomes very slow

2013-01-03 Thread Opel Fahrer
I'm a noob in writing efficient Postgres queries, so I wrote a first function 
to query multiple linked tables using the PostGIS extension. The query should 
fetch data from multiple tables and finally give me a table with two columns. 
Here's the code:

[code]
    prepare getmydata(real,real,real) AS (
     with
     closeby(id) AS (
      select buildingid from info where ST_DWithin(position, 'POINT($1 $2)', $3)
     ),
     closebuildings(descriptionid,image) AS (
      select descriptionid,image from buildings where id IN (select * from closeby)
     ),
     closebuildingdescriptions(data) AS (
      select data from buildingdescriptions where id IN (select descriptionid from closebuildings)
     )
     select image,data from closebuildings,closebuildingdescriptions;
    );
    execute getmydata(0.0,0.0,10.0);
[/code]

Actually the problem is that this query is VERY slow, even though the database
content is small (it takes around 15 minutes or so). The problem seems to be that
postgres has to make sure that for the select statement both columns have equal
length. If I only do "select image from closebuildings", the results are
delivered in 0.1 secs, and a "select data from closebuildingdescriptions" is
delivered in 7.8 secs.

I ran an "explain analyze" call, but I can't make any sense from the output:

[code]
"Nested Loop  (cost=7816.51..2636821.06 rows=131352326 width=36) (actual 
time=117.125..6723.014 rows=12845056 loops=1)"
"  CTE closeby"
"    ->  Seq Scan on info  (cost=0.00..1753.11 rows=186 width=4) (actual 
time=0.022..5.821 rows=1579 loops=1)"
"  Filter: (("position" && 
'010320797F010005007D3F35DEAC512041E3A59BBC9BE153417D3F35DEAC512041E3A59BBCA0E153417D3F35DED4512041E3A59BBCA0E153417D3F35DED4512041E3A59BBC9BE153417D3F35DEAC512041E3A59BBC9BE15341'::geometry)
 AND ('010120797F7D3F35DEC0512041E3A59B3C9EE15341'::geometry && 
st_expand("position", 10::double precision)) AND _st_dwithin("position", 
'010120797F7D3F35DEC0512041E3A59B3C9EE15341'::geometry, 10::double 
precision))"
"  CTE closebuildings"
"    ->  Hash Semi Join  (cost=6.04..4890.03 rows=1351 width=8) (actual 
time=54.743..61.025 rows=3584 loops=1)"
"  Hash Cond: (closebuildings.id = closeby.buildingid)"
"  ->  Seq Scan on closebuildings  (cost=0.00..4358.52 rows=194452 
width=12) (actual time=0.042..31.646 rows=194452 loops=1)"
"  ->  Hash  (cost=3.72..3.72 rows=186 width=4) (actual 
time=7.073..7.073 rows=1579 loops=1)"
"    Buckets: 1024  Batches: 1  Memory Usage: 56kB"
"    ->  CTE Scan on closeby  (cost=0.00..3.72 rows=186 width=4) 
(actual time=0.023..6.591 rows=1579 loops=1)"
"  CTE closebuildingdescriptions"
"    ->  Nested Loop  (cost=30.40..1173.37 rows=97226 width=516) (actual 
time=117.103..1890.902 rows=3584 loops=1)"
"  ->  HashAggregate  (cost=30.40..32.40 rows=200 width=4) (actual 
time=63.529..66.176 rows=3584 loops=1)"
"    ->  CTE Scan on closebuildings  (cost=0.00..27.02 rows=1351 
width=4) (actual time=54.746..62.316 rows=3584 loops=1)"
"  ->  Index Scan using buildingdescriptions_pkey on 
buildingdescriptions  (cost=0.00..5.69 rows=1 width=520) (actual 
time=0.506..0.507 rows=1 loops=3584)"
"    Index Cond: (id = closebuildings.descriptionid)"
"  ->  CTE Scan on closebuildingdescriptions  (cost=0.00..1944.52 rows=97226 
width=32) (actual time=117.115..1901.993 rows=3584 loops=1)"
"  ->  CTE Scan on closebuildings  (cost=0.00..27.02 rows=1351 width=4) (actual 
time=0.000..0.536 rows=3584 loops=3584)"
"Total runtime: 7870.567 ms"
[/code]


If anyone can come up with a solution or a suggestion how to solve this, I 
would highly appreciate it.

Cheers





Re: [GENERAL] Postgresql 9.1 - select statement with multiple "with-clauses" becomes very slow

2013-01-03 Thread Alban Hertroys
You're ending up with something that's basically a Cartesian product of
closebuildings and closebuildingdescriptions.

Your query looks like a simple join would serve just fine, something like:

prepare getmydata(real,real,real) AS (
select image, data
from info
inner join buildings on (buildings.id = info.building_id)
inner join buildingdescriptions on (buildingdescriptions.id =
buildings.description_id)
where ST_DWithin(position, 'POINT($1 $2)', $3)
)





-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.


Re: [GENERAL] Postgresql 9.1 - select statement with multiple "with-clauses" becomes very slow

2013-01-03 Thread Opel Fahrer
Hi Alban,

thanks a lot - didn't realize it was so simple. It works like a charm!

Cheers




 From: Alban Hertroys 
To: Opel Fahrer  
CC: "pgsql-general@postgresql.org"  
Sent: Thursday, 3 January 2013, 13:25
Subject: Re: [GENERAL] Postgresql 9.1 - select statement with multiple 
"with-clauses" becomes very slow
 

You're ending up with something that's basically a Cartesian product of 
closebuildings and closebuildingdescriptions.

Your query looks like a simple join would serve just fine, something like:

prepare getmydata(real,real,real) AS (

select image, data
from info
inner join buildings on (buildings.id = info.building_id)
inner join buildingdescriptions on (buildingdescriptions.id = 
buildings.description_id)
where ST_DWithin(position, 'POINT($1 $2)', $3)

)




[GENERAL] Database Design: Maintain Audit Trail of Changes

2013-01-03 Thread Rich Shepard

  I have the need to develop an application that will use postgres as the
back end, and most of the design has been worked out, but I've one issue
left to resolve and want help in this. If this is not the appropriate forum
for this type of question, please point me in the right direction.

  For several reasons (including operational and legal) once data are
entered in a table they cannot be changed or deleted without an audit trail
of the change, when it occurred, who made the change, and the reason for it.
Tables might contain laboratory or instrument measurement values or the
names of regulatory staff.

  My current thoughts are that there needs to be a separate table, perhaps
called 'changes', with attribute columns for the source table, identifying
value(s) for the original row, new value, date of change, person making the
change, and the reason for the change. The original table should have an
attribute flag to indicate that a row has been changed.
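
  Concretely, I am picturing something along these lines ('lab_measurements'
and the column names are only placeholders):

create table changes (
    change_id    serial primary key,
    source_table text        not null,  -- which table the original row lives in
    source_pk    text        not null,  -- identifying value(s) of the original row
    column_name  text        not null,
    old_value    text,
    new_value    text,
    changed_at   timestamptz not null default now(),
    changed_by   text        not null default current_user,
    reason       text        not null
);

-- The flag on the original table:
alter table lab_measurements add column has_changes boolean not null default false;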

  The middleware of the application needs to check this table when data are
to be viewed in the UI and present only the current row contents. A separate
view would display a history of changes for that row.

  All thoughts, suggestions, and recommendations based on your expertise and
experience will be most welcome.

TIA,

Rich





[GENERAL] Large number of rows in pg_type and slow gui (pgadmin) refresh

2013-01-03 Thread Robert Klaus
We have 36,000+ rows returned by " SELECT oid, format_type(oid, typtypmod)
AS typname FROM pg_type".

 

My manager says this is only a small number compared to what is expected by
next summer.

 

When I run this select statement on the database server it returns in under
1 second, but it takes a minute to open some tabs using pgAdmin III (1.16.1).

 

Is there a workaround for this - most of the rows returned are from one
schema.  If we could eliminate it from the listed schemas it would help.  Is
this possible in pgAdmin III?

 

Our database server is at 8.4

 

Robert 

Nexgen Wireless, Inc.

Schaumburg, IL 

 

 



Re: [GENERAL] Database Design: Maintain Audit Trail of Changes

2013-01-03 Thread Adrian Klaver

On 01/03/2013 07:38 AM, Rich Shepard wrote:



   The middleware of the application needs to check this table when data
are
to be viewed in the UI and present only the current row contents. A
separate
view would display a history of changes for that row.

   All thoughts, suggestions, and recommendations based on your
expertise and
experience will be most welcome.


As a matter of course I include fields on my tables to record the timestamp 
and user for the insert of a record and for its last update.


For a relatively simple solution see this blog post I put up this summer:

http://aklaver.org/wordpress/2012/06/23/postgres-and-hstore/

I have since expanded that to include updates by using TG_OP to 
determine the operation being done on the table.


There is also pg_audit: https://github.com/jcasanov/pg_audit
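
The general shape of that hstore approach is roughly the following (only a 
sketch of the idea, not the code from the post):

create extension hstore;

-- Generic audit table: the whole row is stored as an hstore key/value map.
create table table_audit (
    audit_id   bigserial primary key,
    table_name text not null,
    operation  text not null,            -- TG_OP: INSERT / UPDATE / DELETE
    row_data   hstore not null,
    changed_by text not null default current_user,
    changed_at timestamptz not null default now()
);

create or replace function audit_row() returns trigger as $$
begin
    if tg_op = 'DELETE' then
        insert into table_audit (table_name, operation, row_data)
        values (tg_table_name, tg_op, hstore(old));
        return old;
    else
        insert into table_audit (table_name, operation, row_data)
        values (tg_table_name, tg_op, hstore(new));
        return new;
    end if;
end;
$$ language plpgsql;

-- Attach to any table you want audited, e.g.:
-- create trigger some_table_audit
--     after insert or update or delete on some_table
--     for each row execute procedure audit_row();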



TIA,

Rich






--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Corrupt indexes on slave when using pg_bulkload on master

2013-01-03 Thread James Cowell
Hi Jeff (and group)
 
Was the reproduction information sufficient?
 
Do I need to submit this officially as a bug or something?
 
At the moment I'm considering rebuilding my cluster with 9.0 to see if that 
works, and if not then reverting back to 9.1 but loading each DB separately.  I 
would really like to understand why a load of 10 sequential rows with 
pg_bulkload produces a corrupt index on node 2, though; it just doesn't make 
sense to me.
 
Thanks and Happy New Year!
 
James
 


 From: James Cowell 
To: "pgsql-general@postgresql.org"  
Cc: Jeff Janes  
Sent: Wednesday, 19 December 2012, 13:11
Subject: Re: [GENERAL] Corrupt indexes on slave when using pg_bulkload on master
  

Actually, scratch that.  The difference in behaviour seems to be in the 
optimiser, which now table scans the 10-row table (which I guess it should 
always have done really) rather than using the index as it did in 9.1.6.
 
The same index corruption occurs, so the same reproduction case stands, it just 
needs a "set enable_seqscan=false" prior to running the selects on the slave.
 
Cheers,
 
James
 


 From: James Cowell 
To: Jeff Janes  
Cc: "pgsql-general@postgresql.org"  
Sent: Monday, 17 December 2012, 20:42
Subject: Re: [GENERAL] Corrupt indexes on slave when using pg_bulkload on master
  

I saw that 9.1.7 was out and it had another bugfix in for WAL playback.
 
I installed it and ran my reproduction case and it no longer corrupts the index.

I reindexed the database I load into and did a full data load and the indexes 
still corrupt on the slave.

It does not appear to be related to constraint violation as one table is new 
rows only.

I will try and put together a new reproduction case this week.

Cheers,

James




 From: James Cowell 
To: Jeff Janes  
Cc: "pgsql-general@postgresql.org"  
Sent: Thursday, 13 December 2012, 12:26
Subject: Re: [GENERAL] Corrupt indexes on slave when using pg_bulkload on master
  

Hi Jeff,
 
Thanks again for your reply.
 
>If there are no constraint violations, do you still see the problem?
 
Yes, I've stripped it down to an empty table with a 10 row load and the pk 
index on the secondary node still corrupts.
 
> Were there any older version on which it worked? 
 
I'm afraid I started on 9.1.5, I upgraded to 9.1.6 when I had the initial 
problem due to the bugfix in the changelog to do with corrupt indexes on the 
secondary node but it hasn't resolved the issue.
 
> Can you post a minimal schema and control file to reproduce the problem?
 
I've attached a text file with details for table, load config file etc, is that 
everything you would need?
 
Cheers,
 
James
 


 From: Jeff Janes 
To: James Cowell  
Cc: "pgsql-general@postgresql.org"  
Sent: Monday, 10 December 2012, 16:53
Subject: Re: [GENERAL] Corrupt indexes on slave when using pg_bulkload on master
  
On Wed, Dec 5, 2012 at 5:17 AM, James Cowell  wrote:
> I'm using pg_bulkload to load large amounts of CSV data into a postgres
> database hourly.
>
> This database is replicated to a second node.
>
> Whenever a bulk load happens the indexes on the updated tables on the
> secondary node corrupt and are unusable until a reindex is run on the
> primary node.  I get the error below on node 2:
>
> ERROR: index "tablename" contains unexpected zero page at block 0
> SQL state: XX002
> Hint: Please REINDEX it.
>
> I'm assuming that this is because of the way pg_bulkload builds the index on
> the primary, and possibly has something to do with the way pg_bulkload
> overwrites rows in the event of a constraint violation,

If there are no constraint violations, do you still see the problem?

> but at the same time
> if something works on the primary shouldn't the replicated node be able to
> process the WAL log?
>
> I've tried this on 9.1.6 and 9.1.5 on RHEL 6.3 with pg_bulkload build
> 3.1.1-1.pg91.rhel6 and it happens every time.

Were there any older version on which it worked?  Can you post a
minimal schema and control file to reproduce the problem?

Cheers,

Jeff

Re: [GENERAL] Database Design: Maintain Audit Trail of Changes

2013-01-03 Thread Fabrízio de Royes Mello
On Thu, Jan 3, 2013 at 2:09 PM, Adrian Klaver wrote:
>
> On 01/03/2013 07:38 AM, Rich Shepard wrote:
>
>>The middleware of the application needs to check this table when data
>> are
>> to be viewed in the UI and present only the current row contents. A
>> separate
>> view would display a history of changes for that row.
>>
>>All thoughts, suggestions, and recommendations based on your
>> expertise and
>> experience will be most welcome.
>
>
> As a matter of course I include fields to record the timestamp and user
for insert of records and last update of record on my tables.
>
> For a relatively simple solution see this blog post I put up this summer:
>
> http://aklaver.org/wordpress/2012/06/23/postgres-and-hstore/
>
> I have since expanded that to include updates by using TG_OP to determine
the operation being done on the table.
>
> There is also pg_audit: https://github.com/jcasanov/pg_audit
>

And keep in mind that kind of table tends to grow quickly, so you must use
some strategy to purge old historical data or make your audit table
partitioned...

I implemented the same thing in our ERP a long time ago using a partitioned
approach, because it makes it easy to purge old historical data.
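
On 9.1 the partitioned variant looks roughly like this (inheritance-based
partitioning by month; names are only illustrative):

-- Parent audit table (stays empty; the children hold the data).
create table audit_log (
    audit_id   bigserial,
    table_name text not null,
    operation  text not null,
    row_data   text,
    changed_at timestamptz not null default now()
);

-- One child per month, constrained so the planner can prune.
create table audit_log_2013_01 (
    check (changed_at >= '2013-01-01' and changed_at < '2013-02-01')
) inherits (audit_log);

-- Route inserts to the right child.
create or replace function audit_log_insert() returns trigger as $$
begin
    if new.changed_at >= '2013-01-01' and new.changed_at < '2013-02-01' then
        insert into audit_log_2013_01 values (new.*);
    else
        raise exception 'no partition for %', new.changed_at;
    end if;
    return null;
end;
$$ language plpgsql;

create trigger audit_log_partition
    before insert on audit_log
    for each row execute procedure audit_log_insert();

-- Purging a month is then just:  drop table audit_log_2013_01;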

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
>> Blog sobre TI: http://fabriziomello.blogspot.com
>> Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
>> Twitter: http://twitter.com/fabriziomello


Re: [GENERAL] Database Design: Maintain Audit Trail of Changes

2013-01-03 Thread Rich Shepard

On Thu, 3 Jan 2013, Fabrízio de Royes Mello wrote:


And keep in mind that kind of table tends to grow quickly, so you must use
some strategy to purge old historical data or make your audit table
partitioned...


Fabrizio,

  There should not be many changes in these tables. And historical data
cannot be purged or the purpose of maintaining a history is lost. The
history is valuable for tracking changes over time in regulatory agency
staff and to prevent data manipulation such as was done several years ago by
the president of Southwestern Resources (a gold mining company) to pump up
the company's stock price by changing assay results.

Rich





Re: [GENERAL] Database Design: Maintain Audit Trail of Changes

2013-01-03 Thread Rich Shepard

On Thu, 3 Jan 2013, Adrian Klaver wrote:

As a matter of course I include fields to record the timestamp and user for 
insert of records and last update of record on my tables.


Adrian,

  This is a useful addition to the application.


For a relatively simple solution see this blog post I put up this summer:
http://aklaver.org/wordpress/2012/06/23/postgres-and-hstore/
I have since expanded that to include updates by using TG_OP to determine the 
operation being done on the table.


There is also pg_audit
:
https://github.com/jcasanov/pg_audit


  Once again you came through with valuable advice and guidance.

Many thanks!

Rich





Re: [GENERAL] Database Design: Maintain Audit Trail of Changes

2013-01-03 Thread Bèrto ëd Sèra
Hi Rich,

if it's a strict legal requirement you may want to enforce it with a
trigger system, so that each time a record is inserted/updated/deleted
you create an exact copy of it in a historical table, that has the
original record plus data about who performed the operation, when,
from which IP, maybe a comment field, etc. So your actual table
remains limited in size and it's performing well, while the size
problem is local to the audit logs.

You also want to use triggers to disable updates and deletes on this
historical table, as a matter of additional security (you might end
up needing a procedure to trim it, however, if it grows beyond
affordable bounds).

Cheers
Bèrto

On 3 January 2013 16:52, Rich Shepard  wrote:
> On Thu, 3 Jan 2013, Adrian Klaver wrote:
>
>> As a matter of course I include fields to record the timestamp and user
>> for insert of records and last update of record on my tables.
>
>
> Adrian,
>
>   This is a useful addition to the application.
>
>
>> For a relatively simple solution see this blog post I put up this summer:
>> http://aklaver.org/wordpress/2012/06/23/postgres-and-hstore/
>> I have since expanded that to include updates by using TG_OP to determine
>> the operation being done on the table.
>>
>> There is also pg_audit
>> :
>> https://github.com/jcasanov/pg_audit
>
>
>   Once again you came through with valuable advice and guidance.
>
> Many thanks!
>
> Rich
>
>
>
>



-- 
==
If Pac-Man had affected us as kids, we'd all be running around in a
darkened room munching pills and listening to repetitive music.




Re: [GENERAL] [ADMIN] Unable to reload postgresql.conf without restarting

2013-01-03 Thread Bruce Momjian
On Thu, Jan  3, 2013 at 11:25:41AM -0500, Jose Martinez wrote:
> Hi,
> 
> I made some changes to postgresql.conf and I want them to take effect without
> having to restart the server.
> 
> I tried 
> 
> select pg_reload_conf();
> /usr/pgsql-9.1/bin/pg_ctl reload
> 
> but when I do 'show all', I see no changes take effect. 
> 
> The settings I tried to change are:
> -effective_cache_size
> -work_mem
>
> I'm using Postgres 9.1 on CentOS Linux (Amazon EC2)

That is quite odd.  Can you show us the commands and the postgresql.conf
line you are changing?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

