[GENERAL] array_agg and partition sorts

2013-06-26 Thread Rory Campbell-Lange
I'm on Postgres 9.1 and I've come across an issue which shows I don't
understand partition sorting:

Given a table like this:

select * from test;
 n_group |   t_name    | t_additional 
---------+-------------+--------------
       1 | Canberra    | Australia
       1 | Vienna      | Austria
       1 | Baku        | Azerbaijan
       1 | Nassau      | The Bahamas
       1 | Dhaka       | Bangladesh
       1 | Porto-Novo  | Benin
       1 | Thimphu     | Bhutan
       1 | Brasilia    | Brazil
       1 | Sofia       | Bulgaria
       1 | Ouagadougou | Burkina Faso
       2 | Bridgetown  | Barbados
       2 | Minsk       | Belarus
       2 | Brussels    | Belgium
       2 | Belmopan    | Belize
       2 | Gaborone    | Botswana

Running this query:

select * from (
    select
        array_agg(t_name) over (partition by n_group) as agg1
        , array_agg(t_name) over (partition by n_group order by t_additional desc) as agg2
        , array_agg(t_name) over (partition by n_group order by t_additional asc) as agg3
        , row_number() over (partition by n_group)
    from test
) x
where
    x.row_number = 1;

Returns:

-[ RECORD 1 ]--------------------------------------------------------------------------------
agg1       | {Canberra,Vienna,Baku,Nassau,Dhaka,Porto-Novo,Thimphu,Brasilia,Sofia,Ouagadougou}
agg2       | {Nassau,Ouagadougou,Sofia,Brasilia,Thimphu,Porto-Novo,Dhaka,Baku,Vienna,Canberra}
agg3       | {Canberra}
row_number | 1
-[ RECORD 2 ]--------------------------------------------------------------------------------
agg1       | {Bridgetown,Minsk,Brussels,Belmopan,Gaborone}
agg2       | {Gaborone,Belmopan,Brussels,Minsk,Bridgetown}
agg3       | {Bridgetown}
row_number | 1


I don't understand why agg3 is missing values.

-- 
Rory Campbell-Lange
r...@campbell-lange.net


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] array_agg and partition sorts

2013-06-26 Thread Rory Campbell-Lange
On 26/06/13, Rory Campbell-Lange (r...@campbell-lange.net) wrote:
> I'm on Postgres 9.1 and I've come across an issue which shows I don't
> understand partition sorting:
> Returns:
> 
> -[ RECORD 1 ]--------------------------------------------------------------------------------
> agg1       | {Canberra,Vienna,Baku,Nassau,Dhaka,Porto-Novo,Thimphu,Brasilia,Sofia,Ouagadougou}
> agg2       | {Nassau,Ouagadougou,Sofia,Brasilia,Thimphu,Porto-Novo,Dhaka,Baku,Vienna,Canberra}
> agg3       | {Canberra}
> row_number | 1
> -[ RECORD 2 ]--------------------------------------------------------------------------------
> agg1       | {Bridgetown,Minsk,Brussels,Belmopan,Gaborone}
> agg2       | {Gaborone,Belmopan,Brussels,Minsk,Bridgetown}
> agg3       | {Bridgetown}
> row_number | 1

I've solved this by reading the docs at
http://www.postgresql.org/docs/9.1/static/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS
more carefully, particularly the part about how sorting affects the frame.

select * from (
    select
        array_agg(t_name) over (partition by n_group) as agg1
        , array_agg(t_name) over (partition by n_group order by t_additional desc
                                  range between unbounded preceding and unbounded following) as agg2
        , array_agg(t_name) over (partition by n_group order by t_additional asc
                                  range between unbounded preceding and unbounded following) as agg3
        , row_number() over (partition by n_group)
    from test
) x;

produces the desired result:

...
-[ RECORD 10 ]-------------------------------------------------------------------------------
agg1       | {Canberra,Vienna,Baku,Nassau,Dhaka,Porto-Novo,Thimphu,Brasilia,Sofia,Ouagadougou}
agg2       | {Nassau,Ouagadougou,Sofia,Brasilia,Thimphu,Porto-Novo,Dhaka,Baku,Vienna,Canberra}
agg3       | {Canberra,Vienna,Baku,Dhaka,Porto-Novo,Thimphu,Brasilia,Sofia,Ouagadougou,Nassau}
row_number | 4
-[ RECORD 11 ]-------------------------------------------------------------------------------
agg1       | {Bridgetown,Minsk,Brussels,Belmopan,Gaborone}
agg2       | {Gaborone,Belmopan,Brussels,Minsk,Bridgetown}
agg3       | {Bridgetown,Minsk,Brussels,Belmopan,Gaborone}
row_number | 1
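
For anyone finding this in the archives: when a window definition includes an
ORDER BY but no explicit frame clause, the frame defaults to running from the
start of the partition up to the current row and its peers, so each row's
aggregate only covers rows up to that point in the sort order. In other words
these two window definitions are equivalent (sketch):

    array_agg(t_name) over (partition by n_group order by t_additional asc)

    array_agg(t_name) over (partition by n_group order by t_additional asc
                            range between unbounded preceding and current row)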


-- 
Rory Campbell-Lange
r...@campbell-lange.net




[GENERAL] JSON to INT[] or other custom type

2017-06-11 Thread Rory Campbell-Lange
I'm playing with plpgsql function parameters to try and come up with a
neat way of sending an array of arrays or array of custom types to
postgres from python and PHP. 

Psycopg works fine with an array of custom types:

In [108]: query = 'select * from fn_test03(%s, %s::dow_session[])'
In [109]: qargs = (5, [(0,2), (1, 3)])
In [110]: cur.execute(query, qargs)   
In [111]: rs = cur.fetchall()  # function plays back looped arrays
In [112]: rs
Out[112]: [(0, 2), (1, 3)]

However PHP's PostgreSQL support isn't very array friendly as far as I
can see, and requires some horrible manual escaping.

Consequently I wondered if I could serialise the array structure into
json and send that to the postgresql function. PHP seems to handle that
fine. My problem therefore moves to handling the json in the plpgsql
function.

I'm hoping, in the plpgsql function, to unfurl the supplied json into a
custom type or at least an array of ints, and I can't work out how to do
that.

select * from json_array_elements_text('[[0, 1], [1, 2]]');
 value  
--------
 [0, 1]
 [1, 2]
(2 rows)

works fine, but I can't seem to turn those values into actual ints or
anything else for that matter, apart from text via the
json_array_elements_text() function.

=>  select n::integer[]
from
json_array_elements_text('[[0, 1], [1, 2]]') as x(n);

ERROR:  malformed array literal: "[0, 1]"
DETAIL:  Missing "]" after array dimensions.

=>  select n::text[]::integer[]
from
json_array_elements_text('[[0, 1], [1, 2]]') as x(n);

ERROR:  malformed array literal: "[0, 1]"
DETAIL:  Missing "]" after array dimensions.
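
The casts fail because PostgreSQL array literals use braces, not brackets:
'{0,1}' is valid input for integer[], whereas '[0, 1]' is JSON syntax. A tiny
Python illustration of the mismatch (a hypothetical helper, not the fix
eventually used in this thread):

```python
import json

def json_to_pg_array(text):
    """Translate a flat JSON array literal like "[0, 1]" into the
    Postgres array-literal form "{0,1}". Hypothetical helper; assumes
    a flat JSON list of integers."""
    return "{" + ",".join(str(int(v)) for v in json.loads(text)) + "}"

print(json_to_pg_array("[0, 1]"))   # prints {0,1}
```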

Thoughts gratefully received;
Rory





Re: [GENERAL] JSON to INT[] or other custom type

2017-06-12 Thread Rory Campbell-Lange
On 11/06/17, Bruno Wolff III (br...@wolff.to) wrote:
> On Sun, Jun 11, 2017 at 22:35:14 +0100,
>  Rory Campbell-Lange  wrote:
> >
> >I'm hoping, in the plpgsql function, to unfurl the supplied json into a
> >custom type or at least an array of ints, and I can't work out how to do
> >that.
> >
> >   select * from json_array_elements_text('[[0, 1], [1, 2]]');
> >value
> >   
> >[0, 1]
> >[1, 2]
> >   (2 rows)
> >
> >works fine, but I can't seem to turn those values into actual ints or
> >anything else for that matter, apart from text via the
> >json_array_elements_text() function.
> 
> Does this example help?
> 
> area=> select (a->>0)::int, (a->>1)::int from json_array_elements('[[0, 1], [1, 2]]') as s(a);
>  int4 | int4 
> ------+------
>     0 |    1
>     1 |    2
> (2 rows)

Hi Bruno

That worked great, thank you very much for the pointer.
I completely failed to see the top of the docs at
https://www.postgresql.org/docs/9.5/static/functions-json.html
-- my apologies.

I've put a working function and caller from PHP, Python below.

Many thanks
Rory

/* plpgsql test function */
CREATE OR REPLACE FUNCTION fn_test05 (
num INT
,ds json
) RETURNS SETOF dow_session AS $$
DECLARE
r test.dow_session;
BEGIN
RAISE NOTICE 'num: %', num;

FOR r IN 
SELECT
(n->>0)::int -- first json element
,(n->>1)::int
FROM
json_array_elements(ds)
AS x(n)
LOOP
RETURN NEXT r;
END LOOP;
RETURN;
END;
$$ LANGUAGE plpgsql;


## python example
import psycopg2
import json
dbc = psycopg2.connect(database=, user=, password=)
cur = dbc.cursor()
query = 'select * from test.fn_test05(%s, %s)'
a_of_a = json.dumps([(0,2), (3,1), (5,2)])
qargs = (5, a_of_a)
cur.execute(query, qargs)
rs = cur.fetchall()
for r in rs:
print r
# (0, 2)
# (3, 1)
# (5, 2)


## php example
<?php
$dbconn = pg_connect("host= dbname= user= password=");
$query = 'select * from test.fn_test05($1, $2)';
$result = pg_query_params($dbconn, $query, array($a, $j));
print_r(pg_fetch_all($result));
# Array
# (
# [0] => Array
# (
# [dow] => 0
# [session] => 1
# )
# 
# [1] => Array
# (
# [dow] => 2
# [session] => 3
# )
# )
?>





Re: [GENERAL] Repmgr + pgbouncer - Notification of master promotion to application level ...

2017-06-14 Thread Rory Campbell-Lange
On 14/06/17, Martin Goodson (kaema...@googlemail.com) wrote:
> The new master's repmgr promote script will execute commands to pause
> pgbouncer, reconfigure pgbouncer to point to the new database address, and
> then resume.

You could just move the service ip address at the new postgresql master
to which the pgbouncer instances on each app server could reconnect.

I assume, with reference to https://github.com/2ndQuadrant/repmgr, that
you could use repmgrd to trigger a script to do just that at
"switchover"; you would possibly need to monitor the "repl_events" table
to achieve this.

Rory




Re: [GENERAL] Repmgr + pgbouncer - Notification of master promotion to application level ...

2017-06-14 Thread Rory Campbell-Lange
On 15/06/17, Martin Goodson (kaema...@googlemail.com) wrote:
> On 14/06/2017 19:54, Rory Campbell-Lange wrote:
> >On 14/06/17, Martin Goodson (kaema...@googlemail.com) wrote:
> >>The new master's repmgr promote script will execute commands to pause
> >>pgbouncer, reconfigure pgbouncer to point to the new database address, and
> >>then resume.
> >
> >You could just move the service ip address at the new postgresql master
> >to which the pgbouncer instances on each app server could reconnect.
> >
> >I assume, with reference to https://github.com/2ndQuadrant/repmgr, that
> >you could use repmgrd to trigger a script to do just that at
> >"switchover", possibly something you would need to monitor the
> >"repl_events" table to achieve.

> I'm just wondering how people may have implemented this. Do people setup
> pgbouncer nodes on the database servers themselves, on application servers,

We have pgbouncer on the application servers and shift the postgres
master ip address to the promoted master.
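
In that setup the pgbouncer config on each app server can stay static,
pointing at the floating address (an illustrative fragment; the database
name and address are invented):

    ; pgbouncer.ini on each application server
    [databases]
    appdb = host=10.0.0.50 port=5432 dbname=appdb
    ; 10.0.0.50 is the service IP that is moved to whichever node is
    ; currently the postgres master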





Re: [GENERAL] Repmgr + pgbouncer - Notification of master promotion to application level ...

2017-06-14 Thread Rory Campbell-Lange
On 15/06/17, Andreas Kretschmer (andr...@a-kretschmer.de) wrote:
> 
> Am 15.06.2017 um 01:18 schrieb Martin Goodson:
> >
> >...Do people setup pgbouncer nodes on the database servers
> >themselves, on application servers, in the middle tier between the
> >application and database, and so forth, or some combination of the
> >three? 
> 
> Usually we recommend to install pgbouncer on the app-servers.
> 
> If you have full control of the application you can try to integrate the
> logic into the application (provide a list of servers, the new pg10-version
> of libpg is working similar in this way:
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=274bb2b3857cc987cfa21d14775cae9b0dababa5
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=721f7bd3cbccaf8c07cad2707826b83f84694832
> )

Hi Andreas

The list of servers idea is a cool enhancement. However would pgbouncer
(or another client) be able to detect which of those servers were in slave
mode?

Otherwise, if there is a temporary glitch in communications with the
master, a client (such as pgbouncer) could move to try inserts on a
slave.

Rory




Re: [GENERAL] Schedule

2017-06-20 Thread Rory Campbell-Lange
On 20/06/17, Steve Clark (steve.cl...@netwolves.com) wrote:
> > 4) Equipment table keyed to location.
> We already have a monitoring system in place that has been in operation circa 
> 2003. Just recently we have
> added a new class of customer whose operation is not 24/7.
> 
> I envision the schedule could be fairly complicated
> including WE and holidays, plus the enduser might shut down for lunch etc. I 
> am looking for more on how to organize the
> schedule, EG a standard weekly schedule then exceptions for holidays etc, or 
> a separate individual schedule for
> each week, also need to consider how easy it is to maintain the schedule, etc.

Assuming you have the ability to disable alerts for a
client/location/group/machine for a period of time, you could use
pg_cron schedules https://github.com/citusdata/pg_cron to enable and
disable this ability on a scheduled basis.
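
With pg_cron that might look something like this (a sketch; the
alert_config table and its enabled flag are hypothetical):

    -- suspend alerts for location 42 at 18:00 on weekdays ...
    SELECT cron.schedule('0 18 * * 1-5',
        $$UPDATE alert_config SET enabled = false WHERE location_id = 42$$);
    -- ... and re-enable them at 08:00
    SELECT cron.schedule('0 8 * * 1-5',
        $$UPDATE alert_config SET enabled = true WHERE location_id = 42$$);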

With some sort of front-end it would be trivial for the customer to
make their own schedules.

Rory




Re: [GENERAL] Would you add a --dry-run to pg_restore?

2017-08-03 Thread Rory Campbell-Lange
On 02/08/17, Steve Atkins (st...@blighty.com) wrote:
> > On Aug 2, 2017, at 9:02 AM, Edmundo Robles  wrote:
> > 
> > I mean,  to   verify the integrity of backup  i do:
> > 
> > gunzip -c backup_yesterday.gz | pg_restore  -d my_database  && echo
> > "backup_yesterday is OK"
> > 
> > but my_database's size, uncompresed,  is too big  more than 15G  and
> > sometimes  i  have  no space  to restore it, so always i must
> > declutter my  disk first. 
...

> If the gunzip completes successfully then the backups weren't
> corrupted and the disk is readable. They're very likely to be "good"
> unless you have a systematic problem with your backup script.
> 
> You could then run that data through pg_restore, redirecting the
> output to /dev/null, to check that the compressed file actually came
> from pg_dump. (gunzip -c backup_yesterday.gz | pg_restore >/dev/null)

A couple of extra steps you can add, short of a full restore (which is
still the best check), are to do a file hash check as part of the
verification, and to insert a token into the database just before
dumping so you can verify it appears in the dump. We do something like
this:

rory:~/db$ gpg -d dump_filename.sqlc.gpg | \
   pg_restore -Fc --data-only --schema audit | \
   grep -A 1 "COPY audit"

output > 
COPY audit (tdate) FROM stdin;
2017-04-25
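
The file hash check mentioned above can be a plain checksum pair: write the
hash at backup time and verify it before attempting any restore
(illustrative filenames; a dummy file stands in for the real dump):

```shell
# at backup time: record a checksum next to the dump
echo "pretend-dump-contents" > dump_filename.sqlc
sha256sum dump_filename.sqlc > dump_filename.sqlc.sha256

# at verification time: check the file before any restore attempt
sha256sum -c dump_filename.sqlc.sha256
```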

Cheers
Rory





[GENERAL] Connection utilisation for pglogical

2017-10-06 Thread Rory Campbell-Lange
Ahead of setting up a testing environment to try out pglogical, I'm keen
to learn of the connection requirements for pglogical publisher and
subscribers.

Our use case is a cluster of (say) 200 databases, and we would like to
look into aggregating data from a certain table using a row filter hook
into a central database, possibly in a remote cluster, for further
processing.

Based on the docs at https://github.com/2ndQuadrant/pglogical, the
configuration parameters appear to be:

wal_level = 'logical'
max_worker_processes = 10   # one per database needed on provider node
# one per node needed on subscriber node
max_replication_slots = 10  # one per node needed on provider node
max_wal_senders = 10# one per node needed on provider node
shared_preload_libraries = 'pglogical'

So in my example I assume we need 200 max_worker_processes,
200 max_replication_slots and 200 max_wal_senders on the provider node?
Does this translate into 200 actual database connections?

Thanks for any advice
Rory






[GENERAL] Debugging pgsql function date error

2009-05-28 Thread Rory Campbell-Lange
I have a long plpgsql function which takes a date as one of its
arguments but I am receiving an error for another date! I'm using 8.3 on
Debian.

CREATE OR REPLACE FUNCTION
fn_alert01_maker(integer, integer, integer, integer, date, integer)
RETURNS SETOF alert_info
AS $$
DECLARE
userid  ALIAS for $1;
alert   ALIAS for $2;
since   ALIAS for $3;
waitALIAS for $4;
date_from   ALIAS for $5;
days_until  ALIAS for $6;
resulteralert_info%rowtype;
since_int   INTERVAL;
wait_intINTERVAL;
days_intINTERVAL;
   
BEGIN

days_int := days_until || ' days';
since_int := since || ' seconds';
wait_int := wait || ' seconds';
...

and has that argument as one of its WHERE conditions:
...
WHERE
rs.dt_modified > ((now() - wait_int) - since_int)
AND
rs.d_date >= date_from
AND
rs.d_date <= date_from + days_int
...

Weirdly, if I call it like this, it is fine:
SELECT * FROM fn_alert01_maker(0, 1, 845, 5, '2009-05-31', 5);

But like this it throws an error:

test=> SELECT * FROM fn_alert01_maker(0, 1, 8450, 5, '2009-05-31', 5);
ERROR:  invalid input syntax for integer: "2009-06-01"
CONTEXT:  PL/pgSQL function "fn_alert01_maker" line 30 at FOR over
SELECT rows


Any ideas?

Rory



[GENERAL] ident authentication problems postgresql 9.2

2013-10-16 Thread Rory Campbell-Lange
I have just done an update on my Debian servers running postgresql 9.2
(Postgres is from 9.2.4-2.pgdg70+1 from apt.postgresql.org) and suddenly
can't login to postgresql as postgres with the normal peer/ident
authentication over a local Unix socket. 

I've worked around it (I'm using md5 for the time being) but I wonder if
anyone else has experienced this problem which is affecting all 3 of my
9.2 servers.

I can't find anything in the recent Debian security updates that could
have broken this (i.e. recent stuff at http://www.debian.org/security/)

By the way there is a key expired error on http://apt.postgresql.org. 

Regards
Rory




Re: [GENERAL] ident authentication problems postgresql 9.2

2013-10-16 Thread Rory Campbell-Lange
On 16/10/13, Adrian Klaver (adrian.kla...@gmail.com) wrote:
> On 10/16/2013 06:56 AM, Rory Campbell-Lange wrote:
> >I have just done an update on my Debian servers running postgresql 9.2
> >(Postgres is from 9.2.4-2.pgdg70+1 from apt.postgresql.org) and suddenly
> >can't login to postgresql as postgres with the normal peer/ident
> >authentication over a local Unix socket.
> 
> So is there any pertinent information in either the Postgres or system logs?

I've just turned the setting in pg_hba.conf from md5 back to peer and
reloaded the server so that it now reads as:

local   all postgrespeer

Attempting to log in fails with the following error on the console:

psql: FATAL:  Peer authentication failed for user "postgres"

Thanks for the suggestion for checking the logs. My apologies for not
checking that. The logs show:

2013-10-16 16:21:16 BST template1 LOG:  local user with ID  does not exist
2013-10-16 16:21:16 BST template1 FATAL:  Peer authentication failed for user "postgres"

However /etc/passwd has

postgres:x:::PostgreSQL administrator,,,:/db:/bin/bash

I do not have an ident server running and never have had as far as I
recall.

> >By the way there is a key expired error on http://apt.postgresql.org.
> 
> An explanation and fix for the key expired error can be found here:
> http://blog.endpoint.com/2013/10/postgresql-debian-apt-repository-gnupg.html

Thanks very much. Removing /etc/apt/trusted.gpg.d/apt.postgresql.org.gpg
fixed the problem I was experiencing.

Thanks very much
Rory

-- 
Rory Campbell-Lange




Re: [GENERAL] ident authentication problems postgresql 9.2

2013-10-16 Thread Rory Campbell-Lange
On 16/10/13, Tom Lane (t...@sss.pgh.pa.us) wrote:
> Rory Campbell-Lange  writes:
> > 2013-10-16 16:21:16 BST template1 LOG:  local user with ID  does not exist
> > 2013-10-16 16:21:16 BST template1 FATAL:  Peer authentication failed for user "postgres"
> 
> > However /etc/passwd has
> > postgres:x:::PostgreSQL administrator,,,:/db:/bin/bash
> 
> Hm.  A look at the PG source code shows conclusively that the "local user
> does not exist" error comes out only if getpwuid() fails.  Why it's
> failing isn't too clear though.  Perhaps your upgrade accidentally made
> /etc/passwd non-world-readable, or some such?

Hi Tom. Thanks for your response.

I'm sure I'm doing something silly, but /etc/passwd is 0644, and I've
checked my etckeeper logs for changes over the past month and the passwd
file hasn't changed.

...
reset
...

I've just done an upgrade to 9.2.5 and the ident issue is resolved. The
package changes were:

-libpq5 9.3.0-2.pgdg70+1
+libpq5 9.3.1-1.pgdg70+1
-postgresql-9.2 9.2.4-2.pgdg70+1
+postgresql-9.2 9.2.5-1.pgdg70+1
-postgresql-client-9.2 9.2.4-2.pgdg70+1
+postgresql-client-9.2 9.2.5-1.pgdg70+1
-postgresql-contrib-9.2 9.2.4-2.pgdg70+1
+postgresql-contrib-9.2 9.2.5-1.pgdg70+1

Regards
Rory







Re: [GENERAL] Security Issues: Allowing Clients to Execute SQL in the Backend.

2014-04-30 Thread Rory Campbell-Lange
On 30/04/14, Hello World (worldani...@gmail.com) wrote:
> I'm developing a web application that needs to display data from a postgres
> backend.
> 
> The most convenient way for the app to get the data is by expressing the
> request in SQL.
> 
> I'm thinking about the following architecture
> 
> [ App/Client ] -> query in SQL ---> [Web server] ---> same SQL query
> --> [PG database]
...
> Given this are there any security other issues about letting client
> applications execute arbitrary SQL commands on the backend database?

If you wrap your queries in plpgsql functions you can provide a managed
interface to clients, using Postgres security, that exposes only calls
which are unlikely to badly affect server performance. For instance you
can not only control the complexity of the query performed but also fix
hard limits such as the number of rows returned. This approach partly
meets your criteria of allowing SQL commands from client apps, but not
arbitrary ones.
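
As a sketch of that approach (table, function and role names are all
invented for illustration):

    CREATE OR REPLACE FUNCTION api_recent_orders(p_customer integer)
    RETURNS TABLE (order_id integer, placed date)
    LANGUAGE sql
    SECURITY DEFINER
    AS $$
        SELECT o.order_id, o.placed
          FROM orders o
         WHERE o.customer_id = p_customer
         ORDER BY o.placed DESC
         LIMIT 100;     -- hard row cap enforced inside the function
    $$;

    REVOKE ALL ON TABLE orders FROM webuser;
    GRANT EXECUTE ON FUNCTION api_recent_orders(integer) TO webuser;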

-- 
Rory Campbell-Lange




[GENERAL] Problem with selecting arrays in set-returning plpgsql function

2011-08-02 Thread Rory Campbell-Lange
I am trying to select arrays in a set-returning function, but receiving
the error: "array value must start with "{" or dimension information".

This issue appears to relate to locationnodes.rs_people_c sometimes
having an empty array. The return type into which
locationnodes.rs_people_c is returned is INTEGER[].

Assistance much appreciated.
Rory

function:
CREATE OR REPLACE FUNCTION fn_matview_location_slots (
    week_start  DATE
) RETURNS setof matview_location_slots_info AS $$
DECLARE
    resulter matview_location_slots_info%ROWTYPE;
BEGIN
    FOR resulter IN
        SELECT
            rs_node AS node,
            rs_date AS dater,
            ...
            COALESCE(rs_people_c, '{}'::INTEGER[]) AS people
        FROM
            locationnodes
        WHERE
            rs_date >= week_start
    LOOP
        RETURN NEXT resulter;
    END LOOP;
END; $$ LANGUAGE plpgsql;

type:
CREATE TYPE matview_location_slots_info AS (
    node   VARCHAR,
    dater  DATE,
    ...
    people INTEGER[]
);

data:
select rs_people_c from locationnodes;
              rs_people_c              
---------------------------------------
 {}
 {}
 {}
 {40}
 {28}
 {}
 {1}
 {}
 {36}
 {731}
 {32}
 {31}
 {66}
 {}
 {}
 {}
 {62}
 {540,72,69,53,37,42,201,65,560,51,58}
 {64}



Re: [GENERAL] Problem with selecting arrays in set-returning plpgsql function

2011-08-02 Thread Rory Campbell-Lange
On 02/08/11, Rory Campbell-Lange (r...@campbell-lange.net) wrote:
> I am trying to select arrays in a set-returning function, but receiving
> the error: "array value must start with "{" or dimension information".
> 
> This issue appears to relate to locationnodes.rs_people_c sometimes
> having an empty array. The return type into which
> locationnodes.rs_people_c is returned is INTEGER[].

My mistake was due to creating another return type expecting an array,
which received a simple INT.
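
For anyone searching the archives on that message, a minimal reproduction
of this class of mistake is feeding a bare integer to an array type (the
exact error text varies between versions):

    SELECT '42'::integer[];
    -- ERROR:  array value must start with "{" or dimension information

    SELECT '{42}'::integer[];   -- a proper array literal parses fine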

Now sorted.
Rory



[GENERAL] UPDATE using query; per-row function calling problem

2011-09-01 Thread Rory Campbell-Lange
I'm doing an UPDATE something like this:

UPDATE
slots
SET
a = 'a'
,b = (SELECT uuid_generate_v1())
WHERE
c = TRUE;

Each updated row in slots is getting the same value for b. Is there a
way of getting a per-row value from uuid_generate_v1() without doing a
PL loop?

Regards
Rory



Re: [GENERAL] UPDATE using query; per-row function calling problem

2011-09-02 Thread Rory Campbell-Lange
On 02/09/11, Tom Lane (t...@sss.pgh.pa.us) wrote:
> Rory Campbell-Lange  writes:
> > I'm doing an UPDATE something like this:
> > UPDATE
> > slots
> > SET
> > a = 'a'
> > ,b = (SELECT uuid_generate_v1())
> > WHERE
> > c = TRUE;
> 
> > Each updated row in slots is getting the same value for b.

> That's Postgres' interpretation of an uncorrelated sub-SELECT: there's
> no reason to do it more than once, so it doesn't.
> 
> > Is there a way of getting a per-row value from uuid_generate_v1()
> > without doing a PL loop?
> 
> Drop the word "SELECT".  Why did you put that in in the first place?

Hi Tom

Good question to which I don't know the answer. Thanks very much for the
advice.

I was able to force a per-row call to uuid_generate_v1 by using this
pattern

UPDATE 
r_slots 
SET b = (SELECT 
y.x 
 FROM 
(select -1 as n, uuid_generate_v1() as x )y 
 WHERE 
y.n != r_slots.id)
...

But 
b = uuid_generate_v1() 
is a lot simpler!

In my "-1" example, am I right in assuming that I created a correlated
subquery rather than an uncorrelated one? I'm confused about the
difference.

Many thanks
Rory






Re: [GENERAL] UPDATE using query; per-row function calling problem

2011-09-05 Thread Rory Campbell-Lange
On 02/09/11, David Johnston (pol...@yahoo.com) wrote:
> > In my "-1" example, am I right in assuming that I created a correlated
> > subquery rather than an uncorrelated one? I'm confused about the
> > difference.
> > 
> Correlated: has a where clause that references the outer query
> Un-correlated: not correlated
> 
> Because of the where clause a correlated sub-query will return a
> different record for each row whereas an un-correlated sub-query will
> return the same record for all rows since the where clause (if any) is
> constant.

Hi David -- thanks for the clarification. However I'm still a little
confused. As I understand it the following is an un-correlated sub-query:

UPDATE
slots
SET
a = 'a'
,b = (SELECT uuid_generate_v1())
WHERE
c = TRUE;

and the following, without a 'WHERE', is a correlated sub-query:

UPDATE
slots
SET
a = 'a'
,b = uuid_generate_v1()
WHERE
c = TRUE;

Is the point that the lower is not a sub-query at all?

Regards
Rory



[GENERAL] Query runs in 335ms; function in 100,239ms : date problem?

2011-09-05 Thread Rory Campbell-Lange
I have a function wrapping a (fairly complex) query.

The query itself runs in about a third of a second. When the same query
is run via 'RETURN QUERY' in a plpgsql function on Postgres 8.4, the
function takes over 100 seconds, about 300 times slower.

The function takes 3 input parameters: 2 dates and a boolean. The dates
(in_date_from, in_date_to) are used several times in the function.

When I replace the two parameters in the body of the query with, for
instance date'2011-05-01' and date'2011-08-01', the function operates
almost as speedily as the straight query.

I would be grateful to know how to work around this date problem.

As the function is rather large I have taken the liberty of posting it
here:
http://campbell-lange.net/media/files/fn_report_pers_leave.sql.html

Rory




Re: [GENERAL] Query runs in 335ms; function in 100,239ms : date problem?

2011-09-05 Thread Rory Campbell-Lange
On 05/09/11, Tomas Vondra (t...@fuzzy.cz) wrote:
> On 5 Září 2011, 23:07, Rory Campbell-Lange wrote:
...
> > The query itself runs in about a 1/3rd of a second. When running the
> > query as a 'RETURN QUERY' function on Postgres 8.4, the function runs in
> > over 100 seconds, about 300 times slower.
...
> > As the function is rather large I have taken the liberty of posting it
> > here:
> > http://campbell-lange.net/media/files/fn_report_pers_leave.sql.html
> 
> Do I understand correctly that you compare a query with literal parameters
> with a parametrized query wrapped in a plpgsql function?

Yes! Certainly I need to make the function perform more quickly.

> Try to run it as a prepared query - I guess you'll get about the same run
> time as with the function (i.e. over 100 seconds).

The prepared query runs in almost exactly the same time as the function,
but thanks for the suggestion. A very useful aspect of it is that I was
able to get the EXPLAIN output which I guess gives a fairly good picture
of the plan used for the function.

The explain output is here:
http://campbell-lange.net/media/files/explain.txt.html

I'm inexperienced in reading EXPLAIN output, but it looks like the
Nested Loop Semi Join at line 72 is running very slowly.

> The problem with prepared statements is that when planning the query, the
> parameter values are unknown - so the optimizer does not know selectivity
> of the conditions etc. and uses "common" values to prepare a safe plan.
> OTOH the literal parameters allow to optimize the plan according to the
> actual parameter values.
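
A common workaround on 8.4 is to build the query text dynamically inside
the function with EXECUTE, so the actual date values reach the planner as
literals (a sketch only; the real query body lives at the URL above and
the alias used here is illustrative):

    RETURN QUERY EXECUTE
        'SELECT ... '                      -- report body, abbreviated
        || ' WHERE p.d_date >= ' || quote_literal(in_date_from)
        || ' AND p.d_date <= '   || quote_literal(in_date_to);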

Thank you very much for the helpful explanation.

Regards
Rory



Re: [GENERAL] Query runs in 335ms; function in 100,239ms : date problem?

2011-09-05 Thread Rory Campbell-Lange
On 05/09/11, Rory Campbell-Lange (r...@campbell-lange.net) wrote:
> On 05/09/11, Tomas Vondra (t...@fuzzy.cz) wrote:
> > On 5 Září 2011, 23:07, Rory Campbell-Lange wrote:
> ...
> > > The query itself runs in about a 1/3rd of a second. When running the
> > > query as a 'RETURN QUERY' function on Postgres 8.4, the function runs in
> > > over 100 seconds, about 300 times slower.

...

> > Try to run it as a prepared query - I guess you'll get about the same run
> > time as with the function (i.e. over 100 seconds).
> 
> The prepared query runs in almost exactly the same time as the function,
> but thanks for the suggestion. A very useful aspect of it is that I was
> able to get the EXPLAIN output which I guess gives a fairly good picture
> of the plan used for the function.
> 
> The explain output is here:
> http://campbell-lange.net/media/files/explain.txt.html
> 
> I'm inexperienced in reading EXPLAIN output, but it looks like the
> Nested Loop Semi Join at line 72 is running very slowly.

I added in more filtering conditions to the clause at line 72 and the
prepared statement dropped in runtime to 24043.902 ms. Unfortunately the
function ran slower -- 47957.796 -- but even that is a 50% improvement.

Thanks very much for your help.

Regards
Rory



[GENERAL] Strange primary key error on insertion

2011-10-06 Thread Rory Campbell-Lange
I have a strange issue (postgres 8.4) trying to insert old rows back
into the s_tbls table. A check on the primary key (n_id) between
s_tbl_import and s_tbls yields no matches, yet the index on s_tbls.n_id
yields an error when attempting to insert:

=> select n_id from s_tbl_import where n_id IN (
select n_id from s_tbls);

 n_id 
------
(0 rows)

=> insert into s_tbls (select * from s_tbl_import);

ERROR:  duplicate key value violates unique constraint "s_tbls_pkey"

                              Table "s_tbls"
   Column    |            Type             |                      Modifiers
-------------+-----------------------------+-------------------------------------------------------
 n_id        | integer                     | not null default nextval('s_tbls_n_id_seq'::regclass)
 dt_created  | timestamp without time zone | default now()
 dt_modified | timestamp without time zone | default now()
 t_node      | text                        | 
...
Indexes:
    "s_tbls_pkey" PRIMARY KEY, btree (n_id)


Thanks for any assistance
Rory









Re: [GENERAL] Strange primary key error on insertion

2011-10-06 Thread Rory Campbell-Lange
On 06/10/11, Filip Rembiałkowski (plk.zu...@gmail.com) wrote:
> 2011/10/6 Rory Campbell-Lange 
> > => insert into s_tbls (select * from s_tbl_import);
> >
> >ERROR:  duplicate key value violates unique constraint "s_tbls_pkey"
> >
> >
> Looks like you had duplicates in s_tbl_import. Try this:
> SELECT * FROM s_tbl_import WHERE n_id IN (
> SELECT n_id from s_tbl_import group by n_id HAVING count(*)>1
> );

Hi Filip

You were right! Thanks for catching my silly error. 
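
For anyone hitting the same error: once the duplicates inside the import table are identified, they can also simply be skipped at insert time, along these lines (a sketch; DISTINCT ON keeps one arbitrary row per n_id):

```sql
INSERT INTO s_tbls
SELECT DISTINCT ON (n_id) *
FROM s_tbl_import
ORDER BY n_id;
```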

Rory



Re: [GENERAL] strategies for segregating client data when using PostgreSQL in a web app

2012-08-03 Thread Rory Campbell-Lange
On 03/08/12, Menelaos PerdikeasSemantix (mperdikeas.seman...@gmail.com) wrote:
> I would like to know what are the best practices / common patterns (or
> pointers to such) for using PostgreSQL in the context of a "big" web
> application with substantial data per user.
...
> [1] use just one database and schema and logically segregate companies data
> by having all tables have a client_id column as part of their primary key.
> [2] use multiple database (in the same server instance) and only the public
> schema in each of them for the customer's data.
> [3] use one database and multiple schemas to separate the different
> customer's data.

We have about 75 clients with 10-50GB of data each, running two main
services. Early on we decided to give each client their own database.

This has the following advantages:
- backup per client is really easy
- moving databases between db servers is easy
- a sequential pg_dump over all the dbs in a server runs pretty quickly
- testing on a subset of dbs works well

Problems:
- the team needs to get really good at running upgrades across all dbs
- if you need to do cross-db work you need to start looking towards
  cross-database tools or messaging e.g. AMQP

Apart from the issues of dealing with cross-database queries (if you
require them) I firmly believe this is the way to go. If you are using
relatively small databases like us, you can scale when you need to, and
use commodity hardware instead of needing something exotic. Put this
together with the new streaming replication stuff, and you have a robust
and easily scalable solution.
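
With one database per client, a quick per-client size report (useful when deciding which databases to move between servers, or how long a sequential pg_dump will take) is just:

```sql
SELECT datname,
       pg_size_pretty(pg_database_size(datname)) AS size
FROM   pg_database
WHERE  NOT datistemplate
ORDER BY pg_database_size(datname) DESC;
```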





[GENERAL] Anyone know about PgMQ?

2010-06-11 Thread Rory Campbell-Lange
I was intrigued to see Chris Bohn's page about PgMQ ("Embedding
messaging in PostgreSQL") on the PGCon website at 
http://www.pgcon.org/2010/schedule/events/251.en.html

I have also had a look at the pgfoundry site at 
http://pgfoundry.org/projects/pgmq/ -- it's empty.

I've tried to email Chris to find out more about his project, but
haven't received a response. Does anyone have any details of this
project?

I am very interested in the possibility of linking Postgres events such
as triggers to RabbitMQ messaging queues.

Rory
-- 
Rory Campbell-Lange
r...@campbell-lange.net



Re: [GENERAL] Anyone know about PgMQ?

2010-06-15 Thread Rory Campbell-Lange
OK -- I've spotted another project, AMQP for PostgreSQL.
http://lethargy.org/~jesus/writes/amqp-for-postgresql

Which looks pretty good.

Rory

On 11/06/10, Rory Campbell-Lange (r...@campbell-lange.net) wrote:
> I was intrigued to see Chris Bohn's page about PgMQ ("Embedding
> messaging in PostgreSQL") on the PGCon website at 
> http://www.pgcon.org/2010/schedule/events/251.en.html
> 
> I have also had a look at the pgfoundry site at 
> http://pgfoundry.org/projects/pgmq/ -- its empty.
> 
> I've tried to email Chris to find out more about his project, but
> haven't received a response. Does any one have any details of this
> project?
> 
> I am very interested in the possibility of linking Postgres events such
> as triggers to RabbitMQ messaging queues.

-- 
Rory Campbell-Lange
r...@campbell-lange.net



Re: [GENERAL] How to import *.sql file to postgresql database

2010-07-18 Thread Rory Campbell-Lange

On 18/07/10, Andre Lopes (lopes80an...@gmail.com) wrote:
> If I do this command gives me the same error:
> 
> [quote]
> ERROR:  invalid byte sequence for encoding "UTF8": 0xff
> HINT:  This error can also happen if the byte sequence does not match the
> encoding expected by the server, which is controlled by "client_encoding".
> [/quote]
> 
> What more can I do?

You still have an encoding/decoding issue, so your load sequence is not
occurring under a LATIN1 encoding in this instance. You may want to look
at iconv to convert your source to a more desirable encoding.

However, from your emails it may be that your file is not a normal text
file dump, but a dump file made with the -Fc flags. Please read the
pg_dump manpage, and, if after inspection of the file itself it proves
to be in the custom, compressed format produced by -Fc, you should use
the pg_restore command to restore from this file.

-- 
Rory Campbell-Lange
r...@campbell-lange.net



Re: [GENERAL] Extracting SQL from logs in a usable format

2009-12-18 Thread Rory Campbell-Lange
On 18/12/09, Chris Ernst (cer...@esoft.com) wrote:
> I have a project where I need to be able to capture every query from a
> production system into a file such that I can "replay" them on a staging
> system.  Does such a thing already exist or should I start writing my
> own log parser?

I'm sure there are a number of log replay systems already in existence.
Perhaps you could use Londiste, and introduce a lag to the replication
process if it is required? 
http://skytools.projects.postgresql.org/doc/londiste.ref.html#toc17

A very simple way of doing this is to log all of the SQL statements by
setting the postgresql.conf parameter log_min_duration_statement to 0.
You can then easily parse out the SQL statements from the log file.
I have done this before quite successfully when wishing to replay a long
set of SQL statements to test an upgrade of a PostgreSQL server.
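
The relevant postgresql.conf settings for that approach look roughly like this (a sketch; reload the server after changing them):

```
log_min_duration_statement = 0   # log every statement together with its duration
log_statement = 'none'           # the duration logging above already captures the SQL
log_line_prefix = '%t [%p] '     # timestamp and pid make replay ordering easier
```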

Rory

-- 
Rory Campbell-Lange
Director
r...@campbell-lange.net

Campbell-Lange Workshop
www.campbell-lange.net
0207 6311 555
3 Tottenham Street London W1T 2AF
Registered in England No. 04551928



Re: [GENERAL] Return unknown resultset from a function

2012-03-04 Thread Rory Campbell-Lange
On 04/03/12, Jan Meyland Andersen (j...@agile.dk) wrote:
> How do I return an unknown resultset from a function
> 
> My main problem is that I do not know how many columns or the data
> type of the columns before runtime.
> Is this possible at all?
> 
> I also tried to return the data as a text array but I also have
> trouble with that.

There is a section on this in the docs at (for instance)
http://www.postgresql.org/docs/8.4/static/xfunc-sql.html#XFUNC-OUTPUT-PARAMETERS

A specific example which may help is

CREATE FUNCTION new_emp() RETURNS emp AS $$
SELECT ROW('None', 1000.0, 25, '(2,2)')::emp;
$$ LANGUAGE SQL;

Note that there are two ways of calling such a function. You probably
want the "SELECT *" form.
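
The two calling forms mentioned are, concretely:

```sql
-- Composite result as a single column:
SELECT new_emp();

-- Expanded into individual columns (usually what you want):
SELECT * FROM new_emp();
```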

Rory

-- 
Rory Campbell-Lange
r...@campbell-lange.net

Campbell-Lange Workshop
www.campbell-lange.net
0207 6311 555
3 Tottenham Street London W1T 2AF
Registered in England No. 04551928



Re: [GENERAL] Return unknown resultset from a function

2012-03-04 Thread Rory Campbell-Lange
On 04/03/12, Rory Campbell-Lange (r...@campbell-lange.net) wrote:
> On 04/03/12, Jan Meyland Andersen (j...@agile.dk) wrote:

> > My main problem is that I do not know how many columns or the data
> > type of the columns before runtime.
> > Is this possible at all?

> There is a section on this in the docs at (for instance)
> http://www.postgresql.org/docs/8.4/static/xfunc-sql.html#XFUNC-OUTPUT-PARAMETERS
> 
> A specific example which may help is
> 
> CREATE FUNCTION new_emp() RETURNS emp AS $$
> SELECT ROW('None', 1000.0, 25, '(2,2)')::emp;
> $$ LANGUAGE SQL;
> 
> Note that there are two ways of calling such a function. You probably
> want the "SELECT *" form.

My apologies -- if you aren't sure about the return type you will need
to use a RECORD.

eg 
http://www.postgresql.org/docs/7.4/static/plpgsql-declarations.html#PLPGSQL-DECLARATION-RECORDS

There is a good answer on stack overflow about returning polymorphic
results here:
http://stackoverflow.com/questions/4547672/how-to-return-multiple-fields-as-a-record-in-postgresql-pl-pgsql
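
A minimal sketch of the RECORD approach (all names hypothetical): because the function itself does not fix the output shape, the caller must supply a column definition list at each call site:

```sql
CREATE OR REPLACE FUNCTION dyn_query(q text) RETURNS SETOF record AS $$
BEGIN
    -- the result shape is only known at runtime
    RETURN QUERY EXECUTE q;
END;
$$ LANGUAGE plpgsql;

-- Caller supplies the column definition list:
SELECT * FROM dyn_query('SELECT 1, ''a''::text') AS t(n int, s text);
```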
-- 
Rory Campbell-Lange
r...@campbell-lange.net

Campbell-Lange Workshop
www.campbell-lange.net
0207 6311 555
3 Tottenham Street London W1T 2AF
Registered in England No. 04551928



[GENERAL] Server choice for small workload : raptors or SSD?

2012-03-21 Thread Rory Campbell-Lange
I presently have about 40 databases on an aging server which runs both
Postgresql and Apache. The databases presently consume about 20GB and I
expect them to be consuming around 40GB in a year or more if demand for
our services expand as we hope.

The present database is 2 x Quad core E5420 Xeon (2.5GHz) with 8GB of
RAM with an LSI battery-backed RAID 10 array of 4no 10K SCSI disks. It
performs fine although reads on complex queries can be a little slow due
to limited RAM.

I plan to buy two servers and split the workload between them, and back
each of these up to a VM over streaming replication. This is in addition
to our current pg_dump processes which run twice a day at present.
Although one server is adequate for our needs, two helps reduce the risk
of any one database going down.

I'd be grateful for comments on whether to go with a server with the A
or B spec. Both servers have the following in common:

E5620 Quad-Core / 4x 2.40GHz
LSI MegaRAID SAS 9260-4i with BBU (6Gbps)
48 GB PC3-10600 DDR3 / 1333MHz / registered ECC RAM

Server A:

4 * 300GB 10K WD raptors in a RAID10 configuration

Server B:

2 * 500GB 7.2K SATA disks in RAID 1
2 * 100GB Intel 710 Solid State SATA 270MBs read, 170MBs write
in RAID 1 **

Both servers cost about the same. 

**  The 710 SSDs use MLC NAND flash. Review here:
http://www.tomshardware.com/reviews/ssd-710-enterprise-x25-e,3038.html

Regards
Rory

-- 
Rory Campbell-Lange
r...@campbell-lange.net

Campbell-Lange Workshop
www.campbell-lange.net
0207 6311 555
3 Tottenham Street London W1T 2AF
Registered in England No. 04551928



Re: [GENERAL] Update table with random values from another table

2009-02-12 Thread Rory Campbell-Lange
On 12/02/09, Rory Campbell-Lange (r...@campbell-lange.net) wrote:
> I have a test system for which I need to replace actual user's data (in
> 'users') with anonymised data from another table ('testnames') on
> postgres 8.3. 
> 
> The tricky aspect is that each row from testnames must be randomised to
> avoid each row in users being updated with the same value.
> 
> I've been trying to avoid a correlated subquery for each column I am trying
> to update, and instead trying the following, which never returns. There are
> 2000 records in testnames and about 200 in users. 

I'm obviously doing something badly wrong because:

UPDATE  
users
SET t_firstname =
(select firstname from testnames order by random() limit 1),
t_surname = 
(select lastname from testnames order by random() limit 1)
WHERE   
n_role IN (2,3);

Doesn't return either after 60 seconds on a 8 core machine with 8GB of
RAM and 15K disks in R10 (no swap in use).

Rory



[GENERAL] Update table with random values from another table

2009-02-12 Thread Rory Campbell-Lange
I have a test system for which I need to replace actual user's data (in
'users') with anonymised data from another table ('testnames') on
postgres 8.3. 

The tricky aspect is that each row from testnames must be randomised to
avoid each row in users being updated with the same value.

I've been trying to avoid a correlated subquery for each column I am trying
to update, and instead trying the following, which never returns. There are
2000 records in testnames and about 200 in users. 

   UPDATE
users   
SET 
t_firstname = x.firstname,
t_surname   = x.lastname,
t_username  = x.username,
t_contact   = x.email   
FROM
(select firstname, lastname, username, email
 from testnames order by random()) x
WHERE
t_firstname <> x.firstname;


Any help much appreciated
Rory 





Re: [GENERAL] Update table with random values from another table

2009-02-12 Thread Rory Campbell-Lange
Hi Grzegorz

Many thanks for your reply.

On 12/02/09, Grzegorz Jaśkiewicz (gryz...@gmail.com) wrote:
> actually forget about that generate_series() in sub queries, I just
> realized that it won't do.
> I don't have too much time to analyse and find solution, but
> essentially you need to do it like in my example.
> 
> See, that's where normalization would help a lot. Ie, having a
> separate table for name, and surname - and than third one to connect
> them into full name.

I realise that for every row in my users table (which has a unique
integer field) I can update it if I construct a matching id field
against a random row from the testnames table.

Something like this:

UPDATE
    users
SET
    t_firstname = x.firstname,
    t_surname   = x.lastname
FROM
    (SELECT 
        dynamic_id, firstname, lastname 
    FROM 
        testnames
     ORDER BY 
     random()
     ) x
WHERE 
    users.id = x.dynamic_id;

However I'm not sure how to generate a dynamic_id for testnames. If I
use generate_series() I get a full join, rather than

1  firstname1 lastname1
2  firstname2 lastname2

Rory



Re: [GENERAL] Update table with random values from another table

2009-02-12 Thread Rory Campbell-Lange
On 12/02/09, Tom Lane (t...@sss.pgh.pa.us) wrote:
> Grzegorz Jaśkiewicz  writes:
> > On Thu, Feb 12, 2009 at 1:10 PM, Rory Campbell-Lange
> >  wrote:
> >> UPDATE
> >> users
> >> SET t_firstname =
> >> (select firstname from testnames order by random() limit 1),
> >> t_surname =
> >> (select lastname from testnames order by random() limit 1)
> >> WHERE
> >> n_role IN (2,3);
> >> 
> >> Doesn't return either after 60 seconds on a 8 core machine with 8GB of
> >> RAM and 15K disks in R10 (no swap in use).
> 
> > That would be because, for every row in users table, postgres has to
> > run two subselects, with order by random() (which in it self is quite
> > expensive).
> 
> Well, no, because those subselects are independent of the parent query;
> I'd expect PG to do them just once.  Do they show up as "SubPlans" or
> "InitPlans" in EXPLAIN?

Hi Tom

I don't know what the problem was, but I restarted my psql session and the
query runs in 2.181 ms. The plan is an InitPlan. Apologies for the specious
post.

I'm still unable to work out how to update some columns in my users table each
with a unique record from my testnames table :).

Rory




Re: [GENERAL] row constructors

2009-02-12 Thread Rory Campbell-Lange
On 12/02/09, Merlin Moncure (mmonc...@gmail.com) wrote:
> On Thu, Feb 12, 2009 at 5:03 AM, Sim Zacks  wrote:
> > Never mind. I found an old post.
> > I just needed to do:
> > insert into a1 select (f2).* from a2;
> >
> > I didn't find it the first time I searched because I was looking for row
> > constructors, and the post I found used the term composite value.
> 
> I'm scheming to get that fixed.  The main reason is that while the
> insert workaround works, there is no similar workaround for 'update'.

Do you mean that the currently unsupported behaviour

  UPDATE accounts SET (contact_last_name, contact_first_name) =
(SELECT last_name, first_name FROM salesmen
 WHERE salesmen.id = accounts.sales_id);  

will be fixed? (with reference to 
http://www.postgresql.org/docs/8.3/static/sql-update.html#AEN61082)

Rory



Re: [GENERAL] Update table with random values from another table

2009-02-12 Thread Rory Campbell-Lange
On 12/02/09, Rory Campbell-Lange (r...@campbell-lange.net) wrote:
> I realise that for every row in my users table (which has a unique
> integer field) I can update it if I construct a matching id field
> against a random row from the testnames table.

I can make my join table pretty well by using the ranking procedures
outlined here: http://www.barik.net/archive/2006/04/30/162447/

CREATE TEMPORARY SEQUENCE rank_seq;
select nextval('rank_seq') AS id, firstname, lastname from testnames;

or

SELECT 
firstname, lastname, 
(SELECT 
count(*) 
FROM 
testnames t2 
WHERE
t2.firstname < t1.firstname) + 2 AS id 
FROM 
testnames t1 
ORDER BY 
id;

The second method skips some ids (probably because I haven't got an
integer column in testnames)? It looks like I will have to go for the
first procedure or write a function with a loop and counter.
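
For what it's worth, the first procedure can be completed along these lines (a sketch, assuming users.id runs sequentially 1..N and testnames has at least N rows; the inner subquery shuffles first so that nextval() numbers the already-randomised rows):

```sql
CREATE TEMPORARY SEQUENCE rank_seq;

UPDATE users
SET    t_firstname = x.firstname,
       t_surname   = x.lastname
FROM   (SELECT nextval('rank_seq') AS id, firstname, lastname
        FROM  (SELECT firstname, lastname
               FROM   testnames
               ORDER BY random()) shuffled) x
WHERE  users.id = x.id;
```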

Any other ideas?

Rory



[GENERAL] Join on virtual table

2004-12-10 Thread Rory Campbell-Lange
Hi. I'd like to return a result set from a plpgsql function constructed
out of a 'virtual table' joined to an actual table, and struggling to
find a sane approach.

I have a table 'recs' with records like this.

day |  nums
---
2   |  1
5   |  3
2   |  2.5

For a particular month in the year I would like to generate all the days
in the month into a virtual table.

'virt'

vday
---
1
... omitted ...
30

I would like a result set something like this:

day |  nums
---
1   |  0
2   |  3.5
3   |  0
4   |  0
5   |  3
6   |  0
... etc.

Below is a first attempt. It fails because it looks like one can't have
two set returning loops defined in the same function, quite apart from
any join oddities joining against 'dayom'.

Thanks! Rory

DROP TYPE dom CASCADE;
CREATE TYPE dom AS ( d INTEGER );

DROP TYPE comb CASCADE;
CREATE TYPE comb AS ( day INTEGER, val INTEGER );

CREATE OR REPLACE FUNCTION fn_test1 ( ) RETURNS setof comb
AS '
DECLARE
dayom   dom%rowtype;
resulter    comb%rowtype;
BEGIN
FOR i IN 1..30 LOOP
dayom.d  = i;
RETURN NEXT dayom;
END LOOP;

FOR resulter IN
SELECT  
dayom.d as day,
recs.nums
FROM
dayom
LEFT OUTER JOIN recs ON dayom.d = recs.day
ORDER BY
dayom.d
LOOP
RETURN NEXT resulter;
END LOOP;


RETURN;
    END;'
LANGUAGE plpgsql;
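
As an aside, in later PostgreSQL releases the virtual table of days can be produced with generate_series(), and the whole result obtained in a single query with no function at all (a sketch):

```sql
SELECT d.day,
       COALESCE(SUM(r.nums), 0) AS nums
FROM   generate_series(1, 30) AS d(day)
       LEFT JOIN recs r ON r.day = d.day
GROUP BY d.day
ORDER BY d.day;
```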



-- 
Rory Campbell-Lange 
<[EMAIL PROTECTED]>


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Select from function for part of column results

2005-02-03 Thread Rory Campbell-Lange
I've written a function to provide information relating to each row in
an inbox. I'd like to do a query returning some information from each
row of the inbox + some columns from the function pertinent to each row
returned.

I've tried the following on 7.4.6:

t4=> select * from function_message_context(inbox.rowid), inbox; 
ERROR:  function expression in FROM may not refer to other 
relations of same query level

Is there a way I can get the function to provide some columns in the
query?

Thanks
Rory

-- 
Rory Campbell-Lange 
<[EMAIL PROTECTED]>




Re: [GENERAL] Select from function for part of column results

2005-02-03 Thread Rory Campbell-Lange
Hi Oisin

On 03/02/05, Oisin Glynn ([EMAIL PROTECTED]) wrote:
> I have something similar working on Windows in V 8.0
> My select looks like:
> 
> SELECT *,function_message_context(inbox.rowid) from inbox; 

This works great for a function returning a single column. I'm not sure
how to do it with a function returning more than one column.

st4=> select *, fn_message_context(n_id) from inbox;
ERROR:  cannot display a value of type record
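
One way to expand a multi-column function result in the select list is the (f(x)).* form, e.g. (a sketch against the tables above; note that older releases evaluate the function once per expanded column):

```sql
SELECT i.*, (fn_message_context(i.n_id)).*
FROM inbox i;
```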

Rory

> - Original Message - 
> From: "Rory Campbell-Lange" <[EMAIL PROTECTED]>
> To: "Postgresql General List" 
> Sent: Thursday, February 03, 2005 15:04
> Subject: [GENERAL] Select from function for part of column results
> 
> > I've written a function to provide information relating to each row in
> > an inbox. I'd like to do a query returning some information from each
> > row of the inbox + some columns from the function pertinent to each row
> > returned.
> > 
> > I've tried the following on 7.4.6:
> > 
> > t4=> select * from function_message_context(inbox.rowid), inbox; 
> > ERROR:  function expression in FROM may not refer to other 
> > relations of same query level
> > 
> > Is there a way I can get the function to provide some columns in the
> > query?

-- 
Rory Campbell-Lange 
<[EMAIL PROTECTED]>




[GENERAL] Using a 250GB RAID10 server for postgres

2005-12-05 Thread Rory Campbell-Lange
[Didn't get any replies on the Perform list -- hoping someone can help
me here]

Hi. We have a server provided for a test of a web application with the
following specifications:

1 Dual core 1.8GHz Opteron chip
6 GB RAM
approx 250GB of RAID10 storage (LSI card + BBU, 4 x 15000 RPM,16MB
Cache SCSI disks)

The database itself is very unlikely to use up more than 50GB of storage
-- however we are going to be storing pictures and movies etc etc on the
server.

I understand that it is better to run pg_xlog on a separate spindle but
we don't have that option available at present.

Normally I use ext3. I wondered if I should make my normal partitions
and then make a +/- 200GB LVM VG and then slice that initially into a
100GB ext3 data directory and a 50GB xfs postgres data area, giving
100GB to use between these as they grow. I haven't used LVM with xfs
before, however.

Advice gratefully received.
Rory





Re: [GENERAL] Using a 250GB RAID10 server for postgres

2005-12-06 Thread Rory Campbell-Lange
In the absence of replies (and sorry to bombard the list), I should
clarify my question:

Is it OK to use logical volume management to run an xfs partition
hosting postgres data?

(The server specs are below.)

Thanks for any replies.

Rory

On 05/12/05, Rory Campbell-Lange ([EMAIL PROTECTED]) wrote:
> [Didn't get any replies on the Perform list -- hoping someone can help
> me here]
> 
> Hi. We have a server provided for a test of a web application with the
> following specifications:
> 
> 1 Dual core 1.8GHz Opteron chip
> 6 GB RAM
> approx 250GB of RAID10 storage (LSI card + BBU, 4 x 15000 RPM,16MB
> Cache SCSI disks)
> 
> The database itself is very unlikely to use up more than 50GB of storage
> -- however we are going to be storing pictures and movies etc etc on the
> server.
> 
> I understand that it is better to run pg_xlog on a separate spindle but
> we don't have that option available at present.
> 
> Normally I use ext3. I wondered if I should make my normal partitions
> and then make a +/- 200GB LVM VG and then slice that initially into a
> 100GB ext3 data directory and a 50GB xfs postgres data area, giving
> 100GB to use between these as they grow. I haven't used LVM with xfs
> before, however.
> 
> Advice gratefully received.
> Rory
> 
> 

-- 
Rory Campbell-Lange 
<[EMAIL PROTECTED]>




Re: [GENERAL] PostgreSQL Advocacy, Thoughts and Comments

2003-12-16 Thread Rory Campbell-Lange
On 29/11/03, Randal L. Schwartz ([EMAIL PROTECTED]) wrote:
> Well, since I need 2.5 ideas per month for the three columns I'm still
> writing, I'm certainly in a position to write nice things about PG,
> although I always have to work it in from a Perl slant.
> 
> Actually, I'm sure that any of the magazines I'm in would appreciate
> an additional article or two from me.
> 
> If you can think of something that fits in 2000 words or so (or 4000
> if it needs part 1 and 2), and can have a Perl wrapper, I'd appreciate
> some inspiration.

Hi Randal

I think I may have an idea for an article which would address a common
problem for people writing database client interfaces:

The problem is simply explained.

Problem title:

The "page of pages" problem (!)

The problem:

You want to return a subset of a large number items using some
fairly complex search criteria. You want to make only one database
call, benefit from a cached query, and don't want to have all the
rows in memory. How do you get the total count of pages for the
relevant search criteria?

Why is this relevant?

Moving logic that is inherent to the database to the database
provides a potentially rich yet simple interface to database
queries that can benefit a number of client applications.

Typically this sort of query would be written as at least two
dynamically generated queries in the client program that has to be
parsed by the backend before it is executed. By using functions we
can hide complex joins behind simple field names, and provide
flexible (if limited) search capabilities, as well as caching and
sensible error messages.

Approach:

Using Postgres one can construct a function and then do either

   SELECT * from fn_explore($searchstring, $limit, $offset);
OR
   SELECT
        *
   FROM
        fn_explore()
   WHERE
        searchterm ~* 'test'
   LIMIT
        5
   OFFSET
        10;

What is cool about the second format is that (if the function
returned a type 'explore_result' as below), your PHP/Perl programmer
can at their interface do something like 

'... where id <> 1 AND author IN ('james', 'bill')...'

However I don't know how you get back the total rows in this case,
also maybe the caching effects are minimised?
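
(In more recent PostgreSQL versions, 8.4 and later, the total can be returned alongside each page row with a window aggregate, avoiding the self join entirely; a sketch, with a hypothetical t_title column:)

```sql
SELECT n_id             AS id,
       count(*) OVER () AS total,   -- total matching rows, computed before LIMIT/OFFSET
       t_title
FROM   db
WHERE  t_title || ' ' || t_text ~* 'test'
ORDER BY dt_modified DESC
LIMIT  5 OFFSET 10;
```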


Type definition:

CREATE TYPE explore_result as (
id INTEGER,   -- some sort of row id
total  INTEGER,   -- total rows for query
author VARCHAR,
image  BYTEA
/*
Not needed unless search is done outside db.
, searchterm VARCHAR
*/
);


Sketch function definition:

CREATE OR REPLACE FUNCTION 
fn_explore (varchar, integer, integer) RETURNS setof explore_result
AS '
DECLARE
searchstring  ALIAS for $1;
offsetter ALIAS for $2;
limiter   ALIAS for $3;
resulter  explore_result%rowtype;
BEGIN

/*
  variable verification section chopped
*/

FOR resulter IN
SELECT 
n_id  as id,
LOJ.pagetotal as total,
pers.t_name   as author,
image.b_contents  as image
/*
need searchterm returned if we are doing search outside
the database
, COALESCE(t_title || '' '' || t_text,  '''') as searchterm
*/

FROM
db
/*
-> self join on db LOJ for unoffset, unlimited row count
   refer to searchterm stuff below
*/
WHERE
/* note, if we are doing a search outside of the
 * function and t_title or t_text could be empty then we
 * need to coalesce to an empty string
 * COALESCE(t_title || '' '' || t_text,  '''') as searchterm
 */
 searchstring ~ t_title || '' '' || t_text
ORDER BY 
dt_modified DESC
LIMIT
    limiter
OFFSET
offsetter

 LOOP

RETURN NEXT
resulter;

END LOOP;

RETURN; 
END;'
LANGUAGE plpgsql;

-- 
Rory Campbell-Lange 
<[EMAIL PROTECTED]>




Re: [GENERAL] Very slow query

2004-05-10 Thread Rory Campbell-Lange
Sorry for replying to my own post, but I'm anxious for an answer. Should
I provide other information?

Thanks
Rory

On 10/05/04, Rory Campbell-Lange ([EMAIL PROTECTED]) wrote:
> The following query on some small datasets takes over a second to run.
> I'd be grateful for some help in understanding the explain output, and
> to remake the code.
> 
> Looks like the sort is using up quite a bit of resources. I made an
> index on boards using columns "b.n_type, b.n_id, b.t_name" but the index
> was not recorded in explain analyze. (see "testindex" below).
> 
> I am using PostgreSQL 7.4.2 on i386-pc-linux-gnu (Debian). The query is
> in a psql function (I've changed variables to hard coded integers
> below). The db is vacuumed every night.
> 
> Thanks for any help;
> Rory
> 
> 
> SELECT DISTINCT
> b.n_id   as id,
> b.n_type,
> CASE b.n_type WHEN 0 THEN 'personal'
>   WHEN 1 THEN 'private'
>   WHEN 2 THEN 'blog'
>   ELSE 'public'
>   ENDas type,
> b.t_name as title
> FROM
> boards b, people p, objects o
> WHERE
>  b.b_hidden = 'f' 
>  AND 
>  (  
> b.n_type = 3 
> OR 
> b.n_creator = 71
> OR
> (   b.n_id = o.n_board_id 
> AND 
> o.n_creator = 71
> AND
> o.n_joined > 0
> )
>  )
> ORDER BY 
> b.n_type, b.n_id ASC, b.t_name;
> 
> trial=> \d boards
>                                   Table "public.boards"
>     Column     |            Type             |                        Modifiers
> ---------------+-----------------------------+-----------------------------------------------------------
>  n_id          | integer                     | not null default nextval('public.boards_n_id_seq'::text)
>  b_hidden      | boolean                     | default false
>  dt_created    | timestamp without time zone | default ('now'::text)::timestamp(6) with time zone
>  dt_modified   | timestamp without time zone | default ('now'::text)::timestamp(6) with time zone
>  t_mobile      | character varying(15)       | 
>  t_email       | character varying(50)       | 
>  n_creator     | integer                     | not null
>  n_type        | smallint                    | not null default 0
>  t_name        | character varying(100)      | not null
>  t_description | character varying(500)      | 
>  n_id_photo    | integer                     | 
>  n_bg_colour   | integer                     | default 0
> Indexes:
>     "boards_pkey" primary key, btree (n_id)
>     "boards_t_email_key" unique, btree (t_email)
>     "boards_t_mobile_key" unique, btree (t_mobile)
>     "testindex" btree (n_type, n_id, t_name)
> Foreign-key constraints:
>     "$1" FOREIGN KEY (n_creator) REFERENCES people(n_id) ON UPDATE CASCADE ON DELETE CASCADE
>     "$2" FOREIGN KEY (n_id_photo) REFERENCES photo(n_id) ON UPDATE CASCADE ON DELETE SET NULL
> Triggers:
>     tr_update_modified_time BEFORE UPDATE ON boards FOR EACH ROW EXECUTE PROCEDURE fn_update_modified_time()
> 
> trial=> \d people
>Table "public.people"
>   Column   |Type |Modifiers  
>
> ---+-+--
>  n_id  | integer | not null default 
> nextval('public.people_n_id_seq'::text)
>  n_object_id   | integer | 
>  n_objects_counter | integer | default 0
>  b_hidden  | boolean | default false
>  dt_created| timestamp without time zone | default 
> ('now'::text)::timestamp(6) with time zone
>  dt_modified   | timestamp without time zone | default 
> ('now'::text)::timestamp(6) with time zone
>  t_nickname| character varying(20)   | not null
>  t_firstname   | character varying(20)   | not null
>  t_surname | character varying(25)   | not null
>  t_mobile  | character varying(15)   | 
>  t_email   | character varying(50)   | 
>  t_password| character varying(15)   | n
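One likely culprit in the quoted query: `people p` appears in the FROM list
but is never referenced in the WHERE clause, so it multiplies the result
rows before DISTINCT collapses them again. A sketch (untested, using the
column names from the quoted schema) of the usual rewrite of an OR that
mixes join and non-join conditions into a UNION, which also replaces the
DISTINCT since UNION removes duplicates:

```sql
SELECT b.n_id AS id, b.n_type,
       CASE b.n_type WHEN 0 THEN 'personal'
                     WHEN 1 THEN 'private'
                     WHEN 2 THEN 'blog'
                     ELSE 'public' END AS type,
       b.t_name AS title
FROM boards b
WHERE b.b_hidden = 'f'
  AND (b.n_type = 3 OR b.n_creator = 71)
UNION
SELECT b.n_id, b.n_type,
       CASE b.n_type WHEN 0 THEN 'personal'
                     WHEN 1 THEN 'private'
                     WHEN 2 THEN 'blog'
                     ELSE 'public' END,
       b.t_name
FROM boards b, objects o
WHERE b.b_hidden = 'f'
  AND b.n_id = o.n_board_id
  AND o.n_creator = 71
  AND o.n_joined > 0
ORDER BY n_type, id, title;
```

Each UNION branch can then use its own index (e.g. on n_type or n_creator)
instead of forcing one large scan and sort.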

Re: [GENERAL] Naive schema questions

2004-05-27 Thread Rory Campbell-Lange
Fabulous stuff! I am so delighted I chose PostgreSQL a couple of years
ago. Thank you for the valuable insights. A comment or two below:

On 27/05/04, Peter Eisentraut ([EMAIL PROTECTED]) wrote:
> Am Donnerstag, 27. Mai 2004 13:15 schrieb Rory Campbell-Lange:
> > I imagined schemas might allow me to globally update functions across a
> > database hosting many schemas with the same structure.

> Put your data tables in separate schemas, put the functions in yet
> another schema, and then when you connect set the schema search path
> to "dataschema, functionschema" (or maybe vice versa).

On 27/05/04, Nick Barr ([EMAIL PROTECTED]) wrote:
> > Put your data tables in separate schemas, put the functions in yet
> > another schema, and then when you connect set the schema search path
> > to "dataschema, functionschema" (or maybe vice versa).

> Or when you make the calls in the web app use the following:
> 
> SELECT function_schema.function1(arg1, arg2);
> instead of just:
> SELECT function1(arg1, arg2);
> But like Peter said have a schema per client/"instance" of your database.

Is it OK to use the public schema for the functions? That would make it
much easier to reload the functions, as one wouldn't need to specify the
search_path.
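A sketch of why that works (schema names hypothetical; the single-quoted
function body with doubled apostrophes is the 7.x-era syntax): public is
searched whenever it appears on the search_path, so functions created
there resolve unqualified from any client schema.

```sql
-- A shared function kept in public:
CREATE FUNCTION public.fn_version() RETURNS text AS '
    SELECT ''1.0''::text;
' LANGUAGE sql;

-- Client schema first, public as the shared fallback:
SET search_path = client_a, public;

SELECT fn_version();   -- not found in client_a, so found via public
```

Note that public must stay on the path: `SET search_path = client_a;`
alone would hide the shared functions.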

On 27/05/04, Jan Wieck ([EMAIL PROTECTED]) wrote:
...
> It is even better. The property that sets your "schema context" is 
> called search_path. This contains a list of schema names. For an 
> unqualified (schema name not explicitly given) object, be that a table, 
> sequence, view, function or whatever, the system looks in all those 
> schemas in that particular order and uses the first found.
> 
> With that, you can have your common or shared objects in a central 
> schema "schema_common", and everything that's application specific in 
> "schema_A", "schema_B". The connection just has to set the search_path 
> at the beginning with
> 
> set search_path = schema_A, schema_common;

This is brilliant. I hadn't noticed this in the documentation.

> >I suppose I'm trying to think of how I might implement the second point
> >in this list (also from dd-schemas.html):
...
> >- To organize database objects into logical groups to make them more
> >  manageable.
...
> Yes, yes and yes. Plus the ability for you to do cross database joins 
> for global analyzing for example.

Just a question on this, Jan. Would one expect to use UNIONs for this
sort of work?

I just did this which is useful anyway:
schematest=> SELECT 
(select count(id) from b.messages) 
+ 
(select count(id) from a.messages);
 ?column? 
----------
        5
(1 row)
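The scalar-subquery sum above works; for row-level cross-schema analysis
the same idea extends to a UNION ALL over the per-schema tables (schema
names as in the example above):

```sql
-- Tag each row with its source schema, then analyse the combined set:
SELECT 'a' AS src, id FROM a.messages
UNION ALL
SELECT 'b' AS src, id FROM b.messages;
```

UNION ALL keeps duplicates, which is what you want here: the same id may
legitimately exist in both schemas.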

I see the horizons expanding! Common data (I often have an 'info' table)
can be shared between schemas. I think my search_path might go:

this_schema, info_schema, public_schema

Thanks very much for the information.

Kind regards,
Rory
-- 
Rory Campbell-Lange 
<[EMAIL PROTECTED]>


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] [OT] Dilemma about OS <-> Postgres interaction

2004-06-18 Thread Rory Campbell-Lange
On 18/06/04, Harald Fuchs ([EMAIL PROTECTED]) wrote:
> In article <[EMAIL PROTECTED]>,
> Rory Campbell-Lange <[EMAIL PROTECTED]> writes:
> 
> > I should have mentioned that we need the messages sent very soon after
> > they have landed in the 'inbox'; otherwise cron would definitely be the
> > way to go -- including an @reboot line.
> 
> This rules out a cronjob.  How about writing a pl/perlu function which
> sends the mail and calling that function by an AFTER INSERT trigger?

I was trying to avoid having an untrusted language in the database.
After all, this may be the way to do it, and I don't need to worry about
daemons or anything like that.

Even further off-topic:
How does a pl/perl function access perl modules? Is there a sensible way
of testing them?
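A sketch of the trigger approach (untested; table and column names here
are hypothetical, a local MTA is assumed, and note that plperl trigger
support, the $_TD trigger-data hash, and the $$ dollar-quoting shown are
all available only from PostgreSQL 8.0 on). Perl modules are pulled in
with `use`, as in any Perl program; the untrusted plperlu is required
because trusted plperl forbids filesystem and network access:

```sql
CREATE OR REPLACE FUNCTION send_inbox_mail() RETURNS trigger AS $$
    use Net::SMTP;   # modules load with "use", as in ordinary Perl
    my $smtp = Net::SMTP->new('localhost');
    $smtp->mail('noreply@example.org');
    $smtp->to($_TD->{new}{recipient});   # NEW row fields via $_TD
    $smtp->data("Subject: new message\n\n" . $_TD->{new}{body});
    $smtp->quit;
    return;
$$ LANGUAGE plperlu;

CREATE TRIGGER inbox_notify AFTER INSERT ON inbox
    FOR EACH ROW EXECUTE PROCEDURE send_inbox_mail();
```

One caveat: the mail is sent during the inserting transaction, so a later
rollback cannot unsend it; a LISTEN/NOTIFY-driven external daemon avoids
that problem at the cost of running a daemon.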

Thanks
Rory
-- 
Rory Campbell-Lange 
<[EMAIL PROTECTED]>




[GENERAL] Schema and Group permissions question

2004-09-05 Thread Rory Campbell-Lange
I'm setting up a system to use schemas:

                  DB
      +-------+------+---------+
      |       |      |         |
    Prod    Test   Funcs   TestFuncs    (schemas)
      |       |
   tables  tables

This sort of arrangement allows me to have a Production set of tables and a
Test set of tables. By loading the plsql functions into either Funcs or
TestFuncs, I can try out new test functions with the test data before putting
them into production.

However, different database users need to load functions into TestFuncs
and to use, and possibly drop and reload, the tables in Test.

I tried the following:

 [EMAIL PROTECTED]> CREATE SCHEMA Test;
 [EMAIL PROTECTED]> CREATE USER User1;
 [EMAIL PROTECTED]> CREATE USER User2;
 [EMAIL PROTECTED]> CREATE GROUP Testers WITH USER User1, User2;
 [EMAIL PROTECTED]> GRANT ALL ON Test TO GROUP Testers;

However, when User1 loads tables into Test, User2 cannot work with them
unless User1 explicitly runs:
 
    GRANT ALL ON <table> TO User2

Is there any way of setting the equivalent of directory g+sw
permissions on Test?
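For reference: the 7.x/8.x servers current when this was asked had no
schema-level equivalent of g+s, but from PostgreSQL 9.0 on, ALTER DEFAULT
PRIVILEGES covers exactly this case (role and schema names as in the
example above; unquoted Test folds to test):

```sql
-- Let the group create and use objects in the schema:
GRANT USAGE, CREATE ON SCHEMA test TO testers;

-- Tables that user1 creates in test from now on are automatically
-- granted to the whole group, no per-table GRANT needed:
ALTER DEFAULT PRIVILEGES FOR ROLE user1 IN SCHEMA test
    GRANT ALL PRIVILEGES ON TABLES TO testers;
```

Each creating role needs its own ALTER DEFAULT PRIVILEGES statement, so
repeat it for user2 if user2 also creates tables.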

Thanks for any help
Rory

-- 
Rory Campbell-Lange 
<[EMAIL PROTECTED]>




Re: [GENERAL] [pgsql-advocacy] interesting PHP/MySQL thread

2003-06-24 Thread Rory Campbell-Lange
I'm a Postgres and PHP newbie. I'm having a great deal of success with
my latest development effort, having moved most of the logic from a
Perl/PHP 'core' to Postgres using plpgsql functions. (Thanks for
all that help, Josh).

I have a few comments to make on the idea of introducing people, PHP
developers especially, to postgresql. I'm not commenting here on how
easy it is to use PHP with postgres (it was transparent for me using
Debian) or whether or not to advocate the use of advanced features to
general users. Rather, it appears to me that the PHP/Postgres
documentation and feature set should be improved.

1)  PHP Documentation

The postgresql "write up" in the PHP html documentation doesn't give
a very good picture of the capabilities of postgres. While the PHP
docs aren't obviously a good place to write up the benefits of
plpgsql functions, some mention should be made to help differentiate
between the capabilities of MySQL and Postgres.

PHP documents: 
ref.pgsql.html; ref.mysql.html

The MySQL examples given for database specific functions are useful
and to the point. The pages for most of the Postgres functions are
sketchy. (No error number in Postgres...)

PHP documents:
function.mysql-errno.html; function.pg-result-error.html

PHP/Postgres provides a set of predefined constants, eg
PGSQL_COMMAND_OK and PGSQL_FATAL_ERROR. The use and parameters of
these constants are not described. The latter appears to provide
inconsistent results under my PHP 4.2.3 install.

2)  PHP<->Postgres bugs

Apart from the PGSQL_FATAL_ERROR problem above, it would be good to
find a simpler, PHP-like approach to catching exceptions and the
like. At the moment I believe one has to do something like:

function test () {
$sql = "
SELECT 
count(n_id) as number
FROM 
people
";

ob_start();
$result = pg_exec ($this->conn, $sql);
$this->status = pg_result_status($result);
ob_end_clean();

$this->result_checker();
if ($this->error != 0) {
echo "An error occurred.\n";
exit;
}
...
return $this;
}

function result_checker () {
// horrible code to check for postgres exceptions
// status numbers sometimes show up
// ghosts of PGSQL_FATAL_ERROR?
if (! isset($this->status) or 
   ($this->status == 5 or $this->status == 7)) {
$this->error = 1;
// weirdly, this always works
$this->error_msg = pg_last_error($this->conn);
return 1;
} else {
return 0;
}
}


On 22/06/03, Bruce Momjian ([EMAIL PROTECTED]) wrote:
> We need to use this opportunity to encourage PHP folks to switch to
> PostgreSQL.

-- 
Rory Campbell-Lange 
<[EMAIL PROTECTED]>




[GENERAL] SELECT too complex?

2003-06-26 Thread Rory Campbell-Lange
fset LOOP
    
    RETURN NEXT
resulter;

END LOOP;

-- 
Rory Campbell-Lange 
<[EMAIL PROTECTED]>




[GENERAL] Rollback for aborted function?

2003-07-03 Thread Rory Campbell-Lange
If a function generates an exception, do I need to rollback manually?

-- 
Rory Campbell-Lange 
<[EMAIL PROTECTED]>




[GENERAL] using EXISTS instead of IN: how?

2003-07-22 Thread Rory Campbell-Lange
I have been informed that at present (postgres 7.3.2) using IN is not
advised, and I should replace it with EXISTS. I can't seem to get it to
work.

I've tried replacing (example):

SELECT
name
FROM
people
WHERE
state IN (
SELECT 
id 
FROM 
states
WHERE
name ~* 'r'
);

with

SELECT
name
FROM
people
WHERE
exists (
SELECT 
1
FROM 
states
WHERE
name ~* 'r'
);

However, the second example simply finds all records in people.
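That is expected: EXISTS only filters when the subquery is correlated with
the outer row; without a correlation it is simply true once, for every
person. A sketch of the correlated form, assuming people.state references
states.id as the IN version implies:

```sql
SELECT
    name
FROM
    people
WHERE
    EXISTS (
        SELECT
            1
        FROM
            states
        WHERE
            states.name ~* 'r'
            AND states.id = people.state  -- the correlation the IN form implied
    );
```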

Thanks for any help,
Rory

-- 
Rory Campbell-Lange 
<[EMAIL PROTECTED]>




Bug in comment parsing? [was: Re: [GENERAL] Comments in .sql files]

2003-08-01 Thread Rory Campbell-Lange
/*
Check to see if a person's code is correct
*/

My Postgres 7.3.2 install chokes on the ' when trying to load a function
from psql \i.

Rory

On 31/07/03, Tom Lane ([EMAIL PROTECTED]) wrote:
> "Chad N. Tindel" <[EMAIL PROTECTED]> writes:
> > 1.  It obviously doesn't like the # notation for comments.  What is the proper
> > way to put comments in schema files?
> 
> The SQL-standard comment syntaxes are
> 
>   -- comment to end of line
> 
>   /* C-like comment, possibly multiple lines */
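The choking is expected on 7.x: the function body passed to CREATE
FUNCTION is an ordinary single-quoted SQL string, so an apostrophe
anywhere inside it, even inside a /* */ comment, terminates the literal
unless doubled. A sketch (function name and body hypothetical):

```sql
-- On 7.x, double every apostrophe inside the quoted body:
CREATE FUNCTION check_code() RETURNS boolean AS '
/*
Check to see if a person''s code is correct
*/
SELECT true;
' LANGUAGE sql;
```

From PostgreSQL 8.0 on, dollar-quoted bodies ($$ ... $$) avoid the
doubling entirely.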

-- 
Rory Campbell-Lange 
<[EMAIL PROTECTED]>




[GENERAL] Outer Join help please

2003-09-19 Thread Rory Campbell-Lange
I'm having trouble doing a left outer self join on a table. The sent
column shows the number of items sent to each recipient from each
source. The received column (generated by the outer join) is incorrect:
it sums the number of messages by recipient, but these need to be
filtered by source too.

How can I do a join on two columns in the master table?

Thanks for any help. 
Rory


 recipient | source | sent | received | outstanding 
---++--+--+-
 22| 1  |3 |2 |   1
 23| 1  |1 |1 |   0
 25| 1  |1 |2 |  -1
 25| 2  |1 |2 |  -1
 26| 2  |2 |0 |
 27| 2  |3 |0 |
(6 rows)
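A sketch of one fix (untested, against the dlr table defined below): make
the aggregated side carry both columns and join on the pair, so received
is counted per (recipient, source) rather than per recipient alone.
COALESCE also turns the NULLs in the last two rows above into zeros:

```sql
SELECT
    dd.t_to   AS recipient,
    dd.t_from AS source,
    count(dd.id) AS sent,
    COALESCE(ok.received_ok, 0) AS received,
    count(dd.id) - COALESCE(ok.received_ok, 0) AS outstanding
FROM
    dlr dd
    LEFT OUTER JOIN (
        SELECT
            t_to, t_from, count(id) AS received_ok
        FROM
            dlr
        WHERE
            dlr = 1
        GROUP BY
            t_to, t_from
    ) AS ok
    -- join on BOTH columns of the master table:
    ON dd.t_to = ok.t_to AND dd.t_from = ok.t_from
GROUP BY
    dd.t_to, dd.t_from, ok.received_ok
ORDER BY
    dd.t_to, dd.t_from;
```

Against the COPY data below this gives, e.g., recipient 22 / source 1:
sent 3, received 2, outstanding 1, matching the intended first row.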



--- function definition ---


DROP TYPE dlr_report CASCADE;

CREATE TYPE dlr_report as (
recipientVARCHAR,
source   VARCHAR,
sent INTEGER,
received INTEGER,
outstanding  INTEGER
);


CREATE OR REPLACE FUNCTION report_on_dlr () 
RETURNS SETOF dlr_report
AS '
DECLARE
resulter   dlr_report%rowtype;
BEGIN

FOR resulter IN
SELECT 
dd.t_toas recipient,
dd.t_from  as source,
count(dd.id)   as sent,
CASE  
WHEN received_ok is NULL THEN 0 
ELSE received_ok
ENDas received,
count(dd.id) - received_ok as outstanding
FROM
dlr dd
LEFT OUTER JOIN (
SELECT
t_to as target, count(id) as received_ok
FROM
dlr 
WHERE
dlr = 1 and t_from = source
GROUP BY
target
) AS ok
ON t_to  = ok.target
GROUP BY
dd.t_to, dd.t_from, received, received_ok
ORDER BY 
dd.t_to, dd.t_from
 LOOP

RETURN NEXT
resulter;

END LOOP;

RETURN; 

END;'
LANGUAGE plpgsql;

- data definition -

CREATE TABLE dlr (
id serial NOT NULL,
t_to character varying(30),
t_from character varying(30),
dlr smallint
);

COPY dlr (id, t_to, t_from, dlr) FROM stdin;
1   22  1   \N
2   22  1   1
3   22  1   1
4   23  1   1
5   25  1   1
6   25  2   1
7   26  2   \N
8   26  2   0
9   27  2   0
10  27  2   0
11  27  2   0
\.

SELECT pg_catalog.setval ('dlr_id_seq', 11, true);



-- 
Rory Campbell-Lange 
<[EMAIL PROTECTED]>

