Re: [GENERAL] a JOIN on same table, but 'slided over'

2007-06-26 Thread Gurjeet Singh

Hi Rafal,

   Just a note that this is not standard SQL... 'distinct on' is an
extension to SQL provided by postgres.

Following query utilizes the standard SQL to get the same results:

select t1.id as id, t2.id as "id+1",
   t1.thread as thread, t2.thread as "thread+1",
   t1.info as info, t2.info as "info+1"
from test as t1, test as t2
where t2.id = ( select min(id) from test as t3 where t3.id > t1.id);

HTH
--
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

17°29'34.37"N  78°30'59.76"E - Hyderabad *
18°32'57.25"N  73°56'25.42"E - Pune

Sent from my BlackLaptop device

On 6/26/07, Rafal Pietrak <[EMAIL PROTECTED]> wrote:


Marvelous! Thenx!

-R

On Tue, 2007-06-26 at 10:06 +0200, hubert depesz lubaczewski wrote:
> On 6/26/07, Rafal Pietrak <[EMAIL PROTECTED]> wrote:
> Is there an SQL construct to get it?
>
> select
> distinct on (t1.id)
> t1.*, t2.*
> from
> test t1
> join test t2 on t2.id > t1.id
> order by t1.id asc, t2.id asc
>
> should do the trick.
>
> depesz
>
> --
> http://www.depesz.com/ - nowy, lepszy depesz

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/



Re: [GENERAL] a JOIN on same table, but 'slided over'

2007-06-26 Thread Gurjeet Singh

I missed the ORDER BY clause... Here it goes:

select t1.id as id, t2.id as "id+1",
   t1.thread as thread, t2.thread as "thread+1",
   t1.info as info, t2.info as "info+1"
from test as t1, test as t2
where t2.id = ( select min(id) from test as t3 where t3.id > t1.id )
order by t1.id asc;
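As a quick sanity check of what this query returns, the pairing logic can be reproduced outside Postgres. The sketch below uses Python's built-in sqlite3 module; the schema and the four sample rows are assumptions for illustration, not the original poster's data:

```python
import sqlite3

# Run the correlated-subquery "slide over" pairing against SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INTEGER PRIMARY KEY, thread INTEGER, info TEXT)")
conn.executemany("INSERT INTO test VALUES (?, ?, ?)",
                 [(1, 10, 'a'), (2, 10, 'b'), (5, 20, 'c'), (9, 20, 'd')])

pairs = conn.execute("""
    SELECT t1.id, t2.id
    FROM test AS t1, test AS t2
    WHERE t2.id = (SELECT min(id) FROM test AS t3 WHERE t3.id > t1.id)
    ORDER BY t1.id ASC
""").fetchall()

# Every row is paired with the row holding the next-higher id; the last
# row has no successor, so the subquery yields NULL and the row drops out.
print(pairs)  # [(1, 2), (2, 5), (5, 9)]
```

Note that gaps in the id sequence are handled naturally, since the subquery asks for the next *existing* id rather than id+1.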

Also note that this query is much cheaper than the 'distinct on' query by
more than two orders of magnitude (217.86 vs. 98040.67):

postgres=# explain
postgres-# select
postgres-# distinct on (t1.id)
postgres-# t1.*, t2.*
postgres-# from
postgres-# test t1
postgres-# join test t2 on t2.id > t1.id
postgres-# order by t1.id asc, t2.id asc;
                                  QUERY PLAN
------------------------------------------------------------------------------
 Unique  (cost=95798.00..98040.67 rows=1160 width=80)
   ->  Sort  (cost=95798.00..96919.33 rows=448533 width=80)
         Sort Key: t1.id, t2.id
         ->  Nested Loop  (cost=0.00..13827.29 rows=448533 width=80)
               ->  Seq Scan on test t1  (cost=0.00..21.60 rows=1160 width=40)
               ->  Index Scan using test_id_key on test t2  (cost=0.00..7.06 rows=387 width=40)
                     Index Cond: (t2.id > t1.id)
(7 rows)
Time: 5.003 ms
postgres=# explain
postgres-# select   t1.id as id, t2.id as "id+1",
postgres-#  t1.thread as thread, t2.thread as "thread+1",
postgres-#  t1.info as info, t2.info as "info+1"
postgres-# from test as t1, test as t2
postgres-# where t2.id = ( select min(id) from test as t3 where t3.id >
t1.id )
postgres-# order by t1.id asc;
                                  QUERY PLAN
------------------------------------------------------------------------------
 Sort  (cost=214.96..217.86 rows=1160 width=80)
   Sort Key: t1.id
   ->  Hash Join  (cost=36.10..155.92 rows=1160 width=80)
         Hash Cond: ((subplan) = t2.id)
         ->  Seq Scan on test t1  (cost=0.00..21.60 rows=1160 width=40)
         ->  Hash  (cost=21.60..21.60 rows=1160 width=40)
               ->  Seq Scan on test t2  (cost=0.00..21.60 rows=1160 width=40)
         SubPlan
           ->  Result  (cost=0.13..0.14 rows=1 width=0)
                 InitPlan
                   ->  Limit  (cost=0.00..0.13 rows=1 width=4)
                         ->  Index Scan using test_id_key on test t3  (cost=0.00..51.02 rows=387 width=4)
                               Index Cond: (id > $0)
                               Filter: (id IS NOT NULL)
(14 rows)
Time: 4.125 ms


Best regards,
--
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

17°29'34.37"N  78°30'59.76"E - Hyderabad *
18°32'57.25"N  73°56'25.42"E - Pune

Sent from my BlackLaptop device

On 6/26/07, Gurjeet Singh <[EMAIL PROTECTED]> wrote:


Hi Rafal,

Just a note that this is not standard SQL... 'distinct on' is an
extension to SQL provided by postgres.

Following query utilizes the standard SQL to get the same results:

select t1.id as id, t2.id as "id+1",
t1.thread as thread, t2.thread as "thread+1",
t1.info as info, t2.info as "info+1"
from test as t1, test as t2
where t2.id = ( select min(id) from test as t3 where t3.id > t1.id);

HTH
--
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

17°29'34.37"N  78°30'59.76"E - Hyderabad *
18°32'57.25"N  73°56'25.42"E - Pune

Sent from my BlackLaptop device

On 6/26/07, Rafal Pietrak <[EMAIL PROTECTED]> wrote:
>
> Marvelous! Thenx!
>
> -R
>
> On Tue, 2007-06-26 at 10:06 +0200, hubert depesz lubaczewski wrote:
> > On 6/26/07, Rafal Pietrak <[EMAIL PROTECTED]> wrote:
> > Is there an SQL construct to get it?
> >
> > select
> > distinct on (t1.id)
> > t1.*, t2.*
> > from
> > test t1
> > join test t2 on t2.id > t1.id
> > order by t1.id asc, t2.id asc
> >
> > should do the trick.
> >
> > depesz
> >
> > --
> > http://www.depesz.com/ - nowy, lepszy depesz
>
>




Re: [GENERAL] a JOIN on same table, but 'slided over'

2007-06-26 Thread Gurjeet Singh

On 6/26/07, Tom Lane <[EMAIL PROTECTED]> wrote:


"news.gmane.org" <[EMAIL PROTECTED]> writes:
> Gurjeet Singh wrote:
>> Also note that this query is much cheaper than the 'distinct on' query
>> by more than two orders of magnitude (217.86 vs. 98040.67):

> No it isn't. The estimate is much lower, but the actual times are very
> close:

> [explain of distinct on]
>> Time: 5.003 ms

> [explain of correlated subquery]
>> Time: 4.125 ms

You're both confused:



???

the planner estimate certainly should not be taken

as gospel,



true

but the actual runtime of an EXPLAIN (not EXPLAIN ANALYZE)

only reflects planning effort.



Agree completely

EXPLAIN ANALYZE output would be a lot more suitable to settle the

question which one is faster.



Agree again. I was using the EXPLAIN output just to make a point that
optimizer thinks the query utilizing a subquery is much cheaper (and hence
maybe faster) than the 'distinct on' query.

In a later mail I posted the analyze o/p too...

--
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

17°29'34.37"N  78°30'59.76"E - Hyderabad *
18°32'57.25"N  73°56'25.42"E - Pune

Sent from my BlackLaptop device


Re: [GENERAL] a JOIN on same table, but 'slided over'

2007-06-28 Thread Gurjeet Singh

On 6/28/07, Alban Hertroys <[EMAIL PROTECTED]> wrote:



This is called a 'correlated subquery'. Basically the subquery is
performed for each record in the top query.

Google gave me this:

http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=/sqlp/rbafycorrs.htm



I think the sub-section titled "Example: Correlated subquery in a WHERE
Clause" is appropriate to explain our query at hand.

Simply put, correlated subqueries are like nested FOR loops in any
high-level programming language.

1. FOR( record R in result of outer-query )
2.   execute inner query, using any R.colname1
3.   compare R.colname2 with the result of the correlated-subquery
4.   produce R in output, iff the above comparison succeeded

Line 2 can be treated as another FOR loop, in which every record of the
inner query is processed and its local expressions are compared with a
column (or expression) that comes from the outer query.

The comparison in step 3 can be against any expression, with columns or
against a pure constant too!

For example, the following query produces the names of all the employees
who manage at least one other employee.

select empno, ename
from   emp e1
where  exists (select 1
               from   emp e2
               where  e2.mgr = e1.empno);
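Under the nested-FOR-loop reading above, this EXISTS query can be sketched in plain Python; the three sample employee rows are made up for illustration:

```python
# Nested-loop reading of the correlated EXISTS subquery: for each outer
# row (e1), re-run the inner query (e2) with e1.empno substituted in,
# and keep e1 iff the inner result is non-empty.
emp = [  # (empno, ename, mgr) -- hypothetical sample rows
    (1, 'alice', None),   # alice has no manager
    (2, 'bob',   1),      # bob reports to alice
    (3, 'carol', 1),      # carol reports to alice
]

managers = []
for empno, ename, mgr in emp:                   # outer query: emp e1
    inner = [1 for e in emp if e[2] == empno]   # inner query: where e2.mgr = e1.empno
    if inner:                                   # EXISTS(...)
        managers.append((empno, ename))

print(managers)  # [(1, 'alice')]
```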

The only thing I would add is that the outer SELECT of our query produces
a cartesian product (there is no join condition between t1 and t2), but
only one row from t2 qualifies for the join, since the WHERE condition is
on a unique column and the correlated subquery returns just the required
value (the lowest of the IDs that are greater than the current t1.ID
being processed).

I know the above one-line-paragraph may sound a bit cryptic for someone new
to correlated subqueries, but if you understand the example in the link
above, then this would start making some sense.

And there's probably more to find. Interestingly enough wikipedia

doesn't seem to have an article on the subject.






Regards,
--
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

17°29'34.37"N  78°30'59.76"E - Hyderabad *
18°32'57.25"N  73°56'25.42"E - Pune

Sent from my BlackLaptop device


Re: [GENERAL] a JOIN on same table, but 'slided over'

2007-06-29 Thread Gurjeet Singh

   It _is_ the optimised version: as you can see from the explain plans
posted in the other mail, the planner shows that its cost is drastically
lower than that of the 'distinct on' version.

   For smaller data-sets the 'distinct on' version might seem faster, but
for reasonably large datasets its performance deteriorates rapidly...
This is because of the nested loop involved in the plan...

   I increased your data-set to 10240 rows by executing the following query
10 times:

insert into test select id+(select max(id) from test), thread, info from
test;
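As a sanity check on the arithmetic: each run of that INSERT ... SELECT copies every existing row with shifted ids, doubling the table, so ten runs turn the 10-row dataset into 10 × 2^10 = 10240 rows:

```python
# Each INSERT ... SELECT doubles the row count;
# ten doublings of a 10-row table give 10240 rows.
rows = 10
for _ in range(10):
    rows *= 2
print(rows)  # 10240
```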

   On such a data-set (which is not very large by any means), the standard
SQL version executes in almost a second; on the other hand, I had to
cancel the EXPLAIN ANALYZE of the 'distinct on' query after letting it run
for over three minutes!!!

postgres=# explain analyze
postgres-# select   t1.id as id, t2.id as "id+1",
postgres-#  t1.thread as thread, t2.thread as "thread+1",
postgres-#  t1.info as info, t2.info as "info+1"
postgres-# from test as t1, test as t2
postgres-# where t2.id = ( select min(id) from test as t3 where t3.id >
t1.id )
postgres-# order by t1.id asc;

                                  QUERY PLAN
------------------------------------------------------------------------------
 Sort  (cost=2971.36..2996.96 rows=10240 width=24) (actual time=1004.031..1030.116 rows=10239 loops=1)
   Sort Key: t1.id
   Sort Method:  external sort  Disk: 416kB
   ->  Merge Join  (cost=840.48..2289.28 rows=10240 width=24) (actual time=834.218..956.595 rows=10239 loops=1)
         Merge Cond: (t2.id = ((subplan)))
         ->  Index Scan using test_id_key on test t2  (cost=0.00..332.85 rows=10240 width=12) (actual time=0.060..24.503 rows=10240 loops=1)
         ->  Sort  (cost=840.48..866.08 rows=10240 width=12) (actual time=834.129..854.776 rows=10240 loops=1)
               Sort Key: ((subplan))
               Sort Method:  quicksort  Memory: 928kB
               ->  Seq Scan on test t1  (cost=0.00..158.40 rows=10240 width=12) (actual time=0.196..797.752 rows=10240 loops=1)
                     SubPlan
                       ->  Result  (cost=0.04..0.05 rows=1 width=0) (actual time=0.062..0.064 rows=1 loops=10240)
                             InitPlan
                               ->  Limit  (cost=0.00..0.04 rows=1 width=4) (actual time=0.047..0.050 rows=1 loops=10240)
                                     ->  Index Scan using test_id_key on test t3  (cost=0.00..121.98 rows=3413 width=4) (actual time=0.038..0.038 rows=1 loops=10240)
                                           Index Cond: (id > $0)
                                           Filter: (id IS NOT NULL)
 Total runtime: 1052.802 ms
(18 rows)
Time: 1056.740 ms

postgres=# explain analyze
postgres-# select
postgres-# distinct on (t1.id)
postgres-# t1.*, t2.*
postgres-# from
postgres-# test t1
postgres-# join test t2 on t2.id > t1.id
postgres-# order by t1.id asc, t2.id asc;
Cancel request sent
ERROR:  canceling statement due to user request
postgres=#



On 6/26/07, Rafal Pietrak <[EMAIL PROTECTED]> wrote:


OK. I have tried this one; it looks close to 6 times slower than the
'non-standard' phrasing with 'distinct on'.

On the small dataset that I've included in my original post (ten rows of
data within TEST), I've run both queries through EXPLAIN ANALYZE, with
the following result summary (for clarity, I've cut away the details
from the EXPLAIN output):

---STANDARD
Total runtime: 10.660 ms
---DISTINCT-ON
Total runtime: 1.479 ms
---

Would there be ways to optimise the standard query to get its
performance closer to the non-standard one?


-R


On Tue, 2007-06-26 at 18:05 +0530, Gurjeet Singh wrote:
> Hi Rafal,
>
> Just a note that this is not standard SQL... 'distinct on' is an
> extension to SQL provided by postgres.
>
> Following query utilizes the standard SQL to get the same results:
>
> select t1.id as id, t2.id as "id+1",
> t1.thread as thread, t2.thread as "thread+1",
> t1.info as info, t2.info as "info+1"
> from test as t1, test as t2
> where t2.id = ( select min(id) from test as t3 where t3.id > t1.id);
>
> HTH
> --
> [EMAIL PROTECTED]
> [EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com
>
> 17°29'34.37"N  78°30'59.76"E - Hyderabad *
> 18°32'57.25"N  73°56'25.42"E - Pune
>
> Sent from my BlackLaptop device
>
> On 6/26/07, Rafal Pietrak <[EMAIL PROTECTED]> wrote:
> Marvelous! Thenx!
>
> -R
>
> On Tue, 2007-06-26 at 10:06 +0200, hubert depesz lubaczewski
> wrote:
> > On 6/26/07,

Re: [GENERAL] Dump/Restore Large Object OID

2007-12-01 Thread Gurjeet Singh
Why not give it a try? Dump and restore once and see for yourself. You may
have done that by now, but if you haven't, do give it a try instead of
waiting any longer. You may learn a thing or two in the process...

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,   78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,   73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco *

Mail sent from my BlackLaptop device

On 11/29/07, Norberto Delle <[EMAIL PROTECTED]> wrote:
>
> Hi all
>
> If I don't use the --oids option when dumping a database with pg_dump,
> can I assure that the "loid" field of the pg_largeobject table will keep
> its value when restoring?
>
> Thanks in advance
>
> Norberto
>


Re: [GENERAL] Initdb problem - catch 22?

2007-12-19 Thread Gurjeet Singh
You should run the commands in the following order:

initdb
pg_ctl -D <datadir> -w start   # (recommended way of starting the server)
createdb <dbname>              # (optional, since the default DB "postgres" is already there)

On Dec 19, 2007 6:17 AM, Glyn Astill <[EMAIL PROTECTED]> wrote:

> Hi people,
>
> Right I've setup postgres numerous times without a problem, and this
> time I'm using all the same stuff but running into a problem where
> initdb won't run without the postgres server running, and the
> postgres server will not run without the postgresql.conf etc that
> initdb creates!
>
> To start with I tried the following:
>
> #/usr/local/pgsql/bin/createdb SEE --encoding='LATIN1'
>
> Which gave me the error :
>
> createdb: could not connect to database
> postgres: could not connect to server: No such file or directory
>Is the server running locally and accepting
>connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
>
> So okay, I'll try and start the server:
>
> #/usr/local/pgsql/bin/postgres -D /data/postgres
> >/data/postgres/psql.log 2>&1 &
>
> and in psql.log I get the error:
>
> "postgres cannot access the server configuration file
> "/data/postgres/postgresql.conf": No such file or directory"
>
> I shout at it: I know initdb should have done it!!
>
> Anyone got any idea what's eating my lunch? Maybe I ran something in
> the wrong order?
>
> Thanks
> Glyn
>
>
>
>
>



-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,   78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,   73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco *

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


Re: [GENERAL] Warm standby system - FATAL: the database system is starting up

2007-12-21 Thread Gurjeet Singh
I would recommend that whenever you are trying to recover a database (be it
WAL shipping or any other method), first change the port it is listening on,
to something that your apps wouldn't assume the DB to be running on, and
then start the recovery.

This way, even if there's a rogue application that you forgot to shut down,
your recovering database wouldn't be bothered about it.

Then, when you have finished the recovery, just change the port back to
your default and restart the server.
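The port switch can be sketched as follows. The data directory here is a throwaway stand-in so the snippet is self-contained, the alternate port 54320 is an arbitrary choice, and `sed -i` is GNU syntax; adjust all of these for your installation:

```shell
# Stand-in data directory; in practice PGDATA points at the real
# recovering cluster's data directory.
PGDATA=$(mktemp -d)
echo "port = 5432" > "$PGDATA/postgresql.conf"

# Before starting recovery: move the standby to a non-default port
# so forgotten applications cannot connect to it.
sed -i 's/^port *= *5432$/port = 54320/' "$PGDATA/postgresql.conf"
grep '^port' "$PGDATA/postgresql.conf"   # port = 54320

# After recovery finishes: switch back, then restart the server.
sed -i 's/^port *= *54320$/port = 5432/' "$PGDATA/postgresql.conf"
grep '^port' "$PGDATA/postgresql.conf"   # port = 5432
```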

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,   78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,   73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco *

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device
On Dec 21, 2007 1:24 PM, Glyn Astill <[EMAIL PROTECTED]> wrote:

> Is there a way I can check what it is?
>
> I see no reason why anything would be trying to connect, any ideas?
>
> Could it be the autovacuum as I suggested?
>
> Also something I omitted to point out in my original post, the
> processes running on the machine (ps -ax) are as follows
>
>  3467 pts/0S  0:00 /usr/local/pgsql/bin/postgres -D
> /data/postgres/
>  3468 ?Ss 0:02 postgres: startup process
>  4858 ?S  0:00 /bin/bash /data/postgres/restore.sh
> /mnt/archive/0001001A0087 pg_xlog/RECOVERYXLOG
>  6371 ?S  0:00 sleep 1
>
> Notice the "postgres: startup process", does that just mean we're in
> WAL reading mode? Or does it mean it's not working.
>
> I know Tom, you said it will be working, but does this extra info
> change that?
>
> Thanks
> Glyn
>
> --- Tom Lane <[EMAIL PROTECTED]> wrote:
>
> > Glyn Astill <[EMAIL PROTECTED]> writes:
> > > I see in my log on the backup machine:
> > > FATAL:  the database system is starting up
> > > Does this mean the backup is not working?
> >
> > No, it means something's trying to connect to the backup
> > postmaster.
> >
> >   regards, tom lane
> >
> >
>
>
>
>
>
>


Re: [GENERAL] how to alter an enum type

2007-12-25 Thread Gurjeet Singh
Here's a possible solution (this may take a long time if the table is too
big). The trick is to add a new column of a newly defined datatype that
picks up its values from the old column. Here's the sample psql script (the
session output follows after that):

create type possible_answers as enum ( 'yes' , 'no' );
create table questionnare( Q varchar(256), A possible_answers );
insert into questionnare values( 'dummy Q1', 'yes' );
insert into questionnare values( 'dummy Q2', 'no' );

begin;
create type possible_answers_new as enum ( 'yes' , 'no', 'maybe' );

alter table questionnare rename column A to B;
alter table questionnare add column A possible_answers_new;

update questionnare set A = B::text::possible_answers_new;

alter table questionnare drop column B;
commit;

vacuum full questionnare;

\d questionnare

insert into questionnare values( 'dummy Q3', 'maybe' );

select * from questionnare;


And here's what the session output looks like:

postgres=# create type possible_answers as enum ( 'yes' , 'no' );
CREATE TYPE
postgres=# create table questionnare( Q varchar(256), A possible_answers );
insert into questionnare values( 'dummy Q1', 'yes' );
CREATE TABLE
postgres=# insert into questionnare values( 'dummy Q1', 'yes' );
begin;
INSERT 0 1
postgres=# insert into questionnare values( 'dummy Q2', 'no' );
INSERT 0 1
postgres=#
postgres=# begin;
BEGIN
postgres=# create type possible_answers_new as enum ( 'yes' , 'no', 'maybe'
);
CREATE TYPE
postgres=#
postgres=# alter table questionnare rename column A to B;
ALTER TABLE
postgres=# alter table questionnare add column A possible_answers_new;
ALTER TABLE
postgres=#
postgres=# update questionnare set A = B::text::possible_answers_new;
UPDATE 2
postgres=#
postgres=# alter table questionnare drop column B;
commit;
ALTER TABLE
postgres=# commit;
COMMIT
postgres=#
postgres=# vacuum full questionnare;

VACUUM
postgres=#
postgres=# \d questionnare

         Table "public.questionnare"
 Column |          Type          | Modifiers
--------+------------------------+-----------
 q      | character varying(256) |
 a      | possible_answers_new   |

postgres=#
postgres=# insert into questionnare values( 'dummy Q3', 'maybe' );
INSERT 0 1
postgres=#
postgres=# select * from questionnare;
    q     |   a
----------+-------
 dummy Q1 | yes
 dummy Q2 | no
 dummy Q3 | maybe
(3 rows)

postgres=#
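The same rename/add/copy/drop recipe works for any column rebuild, not just enums. Here is a self-contained sketch of the mechanics using Python's sqlite3, where a plain TEXT column stands in for the Postgres enum type and the widened value set is enforced with a CHECK on the new column (assumption: SQLite 3.35+ is needed for ALTER TABLE ... DROP COLUMN):

```python
import sqlite3

# Rebuild a column to widen its allowed value set: rename old column,
# add replacement with the new constraint, copy values across, drop old.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE questionnare (q TEXT, a TEXT);
    INSERT INTO questionnare VALUES ('dummy Q1', 'yes'), ('dummy Q2', 'no');

    BEGIN;
    ALTER TABLE questionnare RENAME COLUMN a TO b;
    ALTER TABLE questionnare ADD COLUMN a TEXT
        CHECK (a IN ('yes', 'no', 'maybe'));
    UPDATE questionnare SET a = b;
    ALTER TABLE questionnare DROP COLUMN b;
    COMMIT;
""")

# The widened constraint now accepts the new value.
conn.execute("INSERT INTO questionnare VALUES ('dummy Q3', 'maybe')")
rows_after = conn.execute("SELECT q, a FROM questionnare ORDER BY q").fetchall()
print(rows_after)  # [('dummy Q1', 'yes'), ('dummy Q2', 'no'), ('dummy Q3', 'maybe')]
```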


Hope it helps.
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,   78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,   73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco *

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device
On Dec 24, 2007 12:48 AM, Henrique Pantarotto <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I was wondering how can I alter an ENUM type?  I have created a table
> like this:
>
> create type possible_answers as enum('yes', 'no');
> create table questions ( question text, answers possible_answers);
> insert into questions values ('Do you like me?', 'yes');
>
> So my question is... How can I change "possible_answers" to enum('yes',
> 'no', 'maybe')?
>
> I tried searching the documentation and mailing list, and I couldn't
> figure this one out.
>
>
> Thanks!
>
>
>
>
>


Re: [GENERAL] how to alter an enum type

2007-12-26 Thread Gurjeet Singh
I would still recommend keeping the meanings associated with the values
in the database somehow.

Have you given thought to CHECK constraints? They are easier to alter on
the fly:

create table questionnare( Q varchar(256), A varchar(16)
constraint possible_answers check ( A IN( 'yes', 'no') ) );
insert into questionnare values( 'dummy Q1', 'yes' );
insert into questionnare values( 'dummy Q2', 'no' );

begin;
alter table questionnare drop constraint possible_answers ;
alter table questionnare add constraint possible_answers check( A in ('yes',
'no', 'maybe' ) );
commit;

\d questionnare

insert into questionnare values( 'dummy Q3', 'maybe' );

select * from questionnare;

Again, this operation can take a long time, depending on the number of
rows in the table.

HTH, best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,   78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,   73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco *

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device
On Dec 26, 2007 10:17 AM, Henrique Pantarotto <[EMAIL PROTECTED]> wrote:

> Thanks a lot Gurjeet!  I understood your suggestion... that seems to
> work indeed.  But I really would like to be able to alter the enum type
> on the fly, so instead of using enum, I think I'll just use a "smallint"
> type and tie the "possible results" to the application using flags such
> as 0, 1, 2, 3 and whatever... I think it will be easier for me this way.
>
> But thanks a lot anyway!!
>
> On Tue, 25 Dec 2007 23:08:12 -0800
> "Gurjeet Singh" <[EMAIL PROTECTED]> wrote:
>
> > Here's a possible solution (this may take long time if the table is too
> > big). The trick is to add a new column with a newly defined datatype,
> that
> > picks up values from the old column. Here's the sample psql script (the
> > session output follows after that):
> >
> > create type possible_answers as enum ( 'yes' , 'no' );
> > create table questionnare( Q varchar(256), A possible_answers );
> > insert into questionnare values( 'dummy Q1', 'yes' );
> > insert into questionnare values( 'dummy Q2', 'no' );
> >
> > begin;
> > create type possible_answers_new as enum ( 'yes' , 'no', 'maybe' );
> >
> > alter table questionnare rename column A to B;
> > alter table questionnare add column A possible_answers_new;
> >
> > update questionnare set A = B::text::possible_answers_new;
> >
> > alter table questionnare drop column B;
> > commit;
> >
> > vacuum full questionnare;
> >
> > \d questionnare
> >
> > insert into questionnare values( 'dummy Q3', 'maybe' );
> >
> > select * from questionnare;
> >
> >
> > And here's what the session output looks like:
> >
> > postgres=# create type possible_answers as enum ( 'yes' , 'no' );
> > CREATE TYPE
> > postgres=# create table questionnare( Q varchar(256), A possible_answers
> );
> > insert into questionnare values( 'dummy Q1', 'yes' );
> > CREATE TABLE
> > postgres=# insert into questionnare values( 'dummy Q1', 'yes' );
> > begin;
> > INSERT 0 1
> > postgres=# insert into questionnare values( 'dummy Q2', 'no' );
> > INSERT 0 1
> > postgres=#
> > postgres=# begin;
> > BEGIN
> > postgres=# create type possible_answers_new as enum ( 'yes' , 'no',
> 'maybe'
> > );
> > CREATE TYPE
> > postgres=#
> > postgres=# alter table questionnare rename column A to B;
> > ALTER TABLE
> > postgres=# alter table questionnare add column A possible_answers_new;
> > ALTER TABLE
> > postgres=#
> > postgres=# update questionnare set A = B::text::possible_answers_new;
> > UPDATE 2
> > postgres=#
> > postgres=# alter table questionnare drop column B;
> > commit;
> > ALTER TABLE
> > postgres=# commit;
> > COMMIT
> > postgres=#
> > postgres=# vacuum full questionnare;
> >
> > VACUUM
> > postgres=#
> > postgres=# \d questionnare
> >
> >  Table "public.questionnare"
> >  Column |  Type  | Modifiers
> > ++---
> >  q  | charact

[GENERAL] ERROR during WAL replay

2008-01-12 Thread Gurjeet Singh
Hi All,

    We were trying to move a big database from one machine to the other
using the PITR mechanism. We hit the following LOG message during the
recovery (WAL replay) process:

LOG:  incorrect resource manager data checksum in record at 111/A7738C8

I had used this procedure to do such migrations in the past, and never
had any problem. This particular DB is pretty huge, and under load, so I
would like to consider other options before even trying to redo the whole
process.

Here's the whole scenario:

    The DB is about 420 GB in size. We want to move it to another box with
more storage. I have set up WAL archival, executed pg_start_backup(),
started the $DATA/ copy (it took just over 24 hours), executed
pg_stop_backup(), and, while WAL archival is still enabled, started
recovery on the second machine using the $DATA/ copy taken earlier.

    I have the compressed WAL file, so if someone wishes to take a peek at
it, it will be made available...

Considering this is a weekend, any help/suggestion at all would be
great.

Thanks and best regards,

-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,   78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,   73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco *

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


Re: [GENERAL] Forgot to dump old data before re-installing machine

2008-01-14 Thread Gurjeet Singh
And also remember to use the same version of Postgres as the previous
installation...

It might be helpful to post the tail of your server's log when it fails.

Best Regards,

On Jan 14, 2008 7:58 AM, Tom Lane <[EMAIL PROTECTED]> wrote:

> Stefan Schwarzer <[EMAIL PROTECTED]> writes:
> > I re-installed my machine and "forgot" to dump my database(s). I
> > naturally still have the whole database folders. For the moment I
> > installed the "old" postgres version (8.1) to be able to read my data.
> > But how can I read them? It seems that it doesn't work that I just
> > overwrite the new database folder with the old one... Would be too
> > simple, I guess...
>
> Should work, if you've got the whole $PGDATA directory tree.  Maybe
> you forgot to stop the postmaster while copying the backup into place?
>
>regards, tom lane
>
>



-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,   78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,   73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco *

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


Re: [GENERAL] The first dedicated PostgreSQL forum

2010-11-22 Thread Gurjeet Singh
On Fri, Nov 19, 2010 at 1:07 AM, Thom Brown  wrote:

>
> As has been said previously, an unlinked forum (one which has no
> interaction with the mailing list) is destined to fail, as others have
> in the past.  It creates a fragmented community, and poor support on
> such a forum would reflect badly on the PostgreSQL community.
>
> Mailing lists aren't "old school".  They've just got a long history,
> and I think you'll find most open source projects probably have a
> mailing list (or several) associated with them.  But the reason why a
> forum hasn't been rejected out of hand is that we recognise that there
> is a demographic which probably see a mailing list as a barrier, hence
> our discussion about integrating a bidirectional sync between a forum
> and a mailing list.  And the reason why we like that idea is that it
> ensures that people who would only join a forum benefit from the huge
> pool of knowledge and experience of users on the mailing list.
>
>
How about exposing our archives in a forum format (linear conversations,
like GMail), and allowing registered users to send emails through a web form
that automatically copies the mailing list? This would require real-time
updating of the archives; currently it takes a few minutes for a message
to show up.

This way all of our conversation history would be available even in forum
format.

Regards,
-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurj...@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


Re: [GENERAL] Do we want SYNONYMS?

2010-12-07 Thread Gurjeet Singh
On Tue, Dec 7, 2010 at 1:54 PM, Pavel Stehule wrote:

> Hello
>
> 2010/12/7 Joshua D. Drake :
> > Your perception has been mirrored on the Oracle free list. Really what
> > PostgreSQL people need to come to grips with is whether or not we want
> > to make it easier for others to port to Pg or not. (assuming
> > reasonableness)
> >
>
> it's question if this is task more for EnterpriseDB and less for
> PostgreSQL?
>
>
FWIW, EnterpriseDB Advanced Server has had the SYNONYM feature for quite a
while now: http://www.enterprisedb.com/documentation/ddl-synonims.html

Regards,
-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurj...@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


Re: [GENERAL] did freese in creating a database cluster

2010-12-27 Thread Gurjeet Singh
On Mon, Dec 27, 2010 at 11:05 PM, Fujii Masao  wrote:

> On Tue, Dec 28, 2010 at 11:21 AM, Tsutomu Nakajima 
> wrote:
> > I am adding information about the freeze status when creating a database
> > cluster. The info is that the following processes exist.
> >
> > postgres 1331432  708812   0 11:08:31 pts/10  0:00 /bin/sh
> > /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data -d
> > postgres 1347694 1331432 120 11:08:32 pts/10  0:05
> > /usr/local/pgsql/bin/postgres -boot -x0 -F -D/usr/local/pgsql/data -c
> > shared_buffers=500 -c max_connections=100 template1
> > * TIME is now.
>
> Though I'm not sure what the cause is, I recommend you to upgrade
> your database since 7.4 is already end-of-life.
>
>
Yeah, it looks like you are creating a fresh database, so it is highly
recommended to use the latest version of Postgres, say 9.0 or maybe 8.4.

Regards,

-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurj...@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


Re: [GENERAL] Devart ADO.NET Data Providers Deliver Advanced Model-First and Database-First Support and Improved ORM Solution!

2010-12-28 Thread Gurjeet Singh
I think this is not the right forum for such announcements.

Consider http://archives.postgresql.org/pgsql-announce/ . I think PostgreSQL
Weekly News will pick it up automatically from there.

Regards,

On Tue, Dec 28, 2010 at 8:45 AM, Devart  wrote:

>
> New versions of Devart ADO.NET Data Providers with improved capabilities
> of
> ORM solution and advanced Model-First and Database-First support in ORM
> model designer.
>
> Devart has recently announced the release of the new versions of dotConnect
> Data Providers, the database connectivity solutions built over ADO.NET
> architecture and development frameworks with a number of innovative
> technologies. Devart ADO.NET Data Providers product line includes high
> performance data providers for major databases and offer a complete
> solution
> for developing database-related applications and web sites. Devart
> dotConnect provides support for ADO.NET Entity Framework and LinqConnect.
> LinqConnect is the Devart ORM solution, closely compatible with the
> Microsoft LINQ to SQL, while extending its functionality with its own
> features.
>
> With new versions of dotConnect ADO.NET Data providers - dotConnect for
> Oracle 6.00, dotConnect for MySQL 6.00, dotConnect for PostgreSQL 5.00, and
> dotConnect for SQLite 3.00 – Devart continues to introduce innovative
> approaches for ORM model designing and considerably improves the runtime of
> LinqConnect.
> 1. New versions of dotConnects contains new greatly improved ORM solution -
> LinqConnect
> The functionality of LinqConnect ORM was extended by the following:
>  Now LinqConnect supports Batch Update for DML statements execution on
> SubmitChanges call, which improves performance of update operations.
>  New kind of inheritance hierarchy was supported. In addition to Table Per
> Hierarchy support Table per Type is supported now.
>  LinqConnect package now includes the new templates for ASP.NET Dynamic
> Data
> projects, that allows creation ASP.NET Dynamic Data applications using
> LinqConnect.
>  Upgraded LinqConnect ORM supports Parallel LINQ, which offers performance
> improvements for your applications, using LinqConnect.
>  Visual Studio Debugger Visualizer was added to LinqConnect to improve
> readability of SQL code, generated for your LINQ queries, while debugging
> your application.
> 2. Enhanced own ORM model designer - Entity Developer
> Entity Developer, that is containing in dotConnect Data Providers, allows
> developing LINQ to SQL and Entity Framework models based on different
> database engines. Now it provides advanced Model First support and Update
> model from database functionality:
> Model-First approach implementation in Entity Developer offers automatic
> synchronization of mapping and storage part with the conceptual part of the
> model and easy to use wizards for synchronizing database with the model.
> Synchronizing Database with the Model
> Entity Developer offers you two options for creating database based on your
> model. You may either generate a create script, that creates a new
> database,
> or use Update to Database Wizard that synchronizes an existing database
> with
> the model. This wizard makes only necessary changes to the database and
> retain the data if possible. Even changes inside entity are detected and
> resulted in ALTER statements, the table doesn't need to be dropped and
> created again. The wizard displays the tree of the changed objects and
> their
> changes, allowing you to choose which database objects need to be
> synchronized with the model, specify the renamed objects, etc.
> Mapping Synchronization
> When automatic mapping synchronization is enabled, changes to the
> conceptual
> model are automatically applied to its mapping and storage part. For
> example, when you add a class to the conceptual part, a corresponding table
> is created in the storage part, and the class is automatically mapped to
> this table. Even the most complex conceptual part changes are supported -
> complex types, many-to-many associations, complex hierarchies. All these
> changes can be automatically reflected in the storage part. However you may
> make manual changes to the storage part, and they will be preserved when
> editing conceptual part.
> Update From Database Wizard allows synchronizing your model with the
> database for Entity Framework and LinqConnect in a fast and convenient way.
> Unlike standard Visual Studio Update Wizard, Entity Developer Update From
> Database Wizard tries to preserve manual changes to the model where
> possible. The wizard detects all the database changes that can affect the
> model, e.g. created and deleted tables and views, their columns and foreign
> keys, column datatype changes, created and deleted stored procedures and
> functions, changes to their parameters etc.
>
> Pricing and Availability
> We offer a free Express edition for each product from the dotConnect
> product
> line.
>
> A Single license price starts from as little as $99.95, a

Re: [GENERAL] Restore problem

2010-12-28 Thread Gurjeet Singh
On Tue, Dec 28, 2010 at 6:06 PM, Bob Pawley  wrote:

>   Hi
>
> I have restored a database using psql to windows version 8.4.
>
> During the restore the trigger code became jumbled.
>
> I now have a great number of lines that have moved so that they are now
> included in lines that have been commented out – not to mention that the
> code is hard to read.
>
> Is there some way of correcting this – or re restoring the database, so
> that I don’t have to go through the whole code line by line?
>

I don't believe there's any easy way to do that. Can you show us some
examples of the 'before' and 'after' code? Maybe that will help.

Regards,
-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurj...@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


Re: [GENERAL] Cursors WITH HOLD

2011-01-03 Thread Gurjeet Singh
2010/12/30 pasman pasmański 

> Hello.
>
> I use Postgres 8.4.5 via perl DBI.
> And i try to use cursors WITH HOLD to materialize
> often used queries.
>
> My question is: how many cursors may be
> declared per session, and which memory settings
> should be adjusted for them?
>

I believe there's no hard limit on the number of cursors in Postgres; you are
limited only by the amount of memory available to the Postgres process.
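For what it's worth, a WITH HOLD cursor's result set is materialized when the transaction commits, so work_mem is the setting that matters most: the in-memory part of each held cursor's store is bounded by it, and anything beyond that spills to temporary files. A sketch (table name is hypothetical):

```sql
BEGIN;
DECLARE c1 CURSOR WITH HOLD FOR
    SELECT * FROM some_big_table ORDER BY id;
COMMIT;  -- the result set is materialized here and survives the transaction

FETCH 10 FROM c1;   -- usable in later transactions of the same session
CLOSE c1;           -- held cursors live until closed or the session ends
```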

Regards,
-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurj...@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


Re: [GENERAL] Huge spikes in number of connections doing "PARSE"

2011-03-11 Thread Gurjeet Singh
On Fri, Mar 11, 2011 at 11:13 AM, Noah Misch  wrote:

>
>   gdb -ex=bt /path/to/bin/postgres $pid 
> I've used this on production systems to debug issues like this one, and
> I've
> never observed damage.  The exact effect of debugger attach/detach may be
> OS/kernel-dependent, so it's hard to make categorical guarantees of safety.


At least on Windows NT and derivatives (as of when I last checked, a few years
ago), detaching a debugger kills the debugged process.

Regards,
-- 
Gurjeet Singh
EnterpriseDB <http://www.enterprisedb.com/> Corporation
The Enterprise PostgreSQL <http://www.postgresql.org/> Company


Re: [GENERAL] How do you stop the log from rotating on restart?

2011-03-20 Thread Gurjeet Singh
On Sat, Mar 19, 2011 at 8:00 PM, runner  wrote:

>  Postgres 9.0.3 on Solaris 10
>
>  Can't seem to figure out how to stop the postgres log file from rotating
> on a restart of services.
>
> log_truncate_on_rotation  is commented out
> log_rotation_age = 1d
>
> We just want it to rotate once a day no matter how many times we restart.
>
> Don't see anything else that would seem to affect log file rotation
>

You'll have to play with the format in the log_filename GUC.
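For example, a sketch assuming the stock logging collector: with a date-only pattern in log_filename (no time-of-day escapes), every restart during the same day reopens and appends to that day's file, so rotation effectively happens once per day.

```
# postgresql.conf (sketch)
logging_collector = on
log_filename = 'postgresql-%Y-%m-%d.log'  # date only, no %H%M%S
log_rotation_age = 1d
log_truncate_on_rotation = off            # append on reopen rather than truncate
```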

HTH,
-- 
Gurjeet Singh
EnterpriseDB <http://www.enterprisedb.com/> Corporation
The Enterprise PostgreSQL <http://www.postgresql.org/> Company


Re: [GENERAL] stack depth limit exceeded

2011-03-31 Thread Gurjeet Singh
If you are working with Postgres version >= 8.4, you should look at the WITH
RECURSIVE feature (recursive CTEs):

http://www.postgresql.org/docs/8.4/static/queries-with.html
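As a sketch of the shape this takes, here `edges(parent, child)` is a hypothetical table standing in for the three catalog lookups UNIONed in your function; a recursive CTE walks the whole dependency tree iteratively, with no call-stack recursion:

```sql
-- UNION (not UNION ALL) deduplicates rows, which also stops
-- the recursion if the dependency graph contains a cycle.
WITH RECURSIVE dep(name) AS (
    SELECT child FROM edges WHERE parent = 'my_table'
  UNION
    SELECT e.child
    FROM edges e
    JOIN dep d ON e.parent = d.name
)
SELECT name FROM dep;
```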

Regards,

On Thu, Mar 31, 2011 at 12:19 PM, salah jubeh  wrote:

>
>
> Hello,
>
> I have written this function, which simply returns the entities that
> depend on a certain entity. It works fine if the dependency tree is not
> long. However, if I have an entity that is linked to many other entities, I
> get
>
> stack depth limit exceeded
> HINT:  Increase the configuration parameter "max_stack_depth", after
> ensuring the platform's stack depth limit is adequate.
>
> I wrote this function to know exactly which tables and views will
> be dropped if I use the CASCADE option. I want to get around this issue
> without changing the server configuration.
>
>
> CREATE OR REPLACE FUNCTION dependon(var text)
>   RETURNS SETOF text AS
> $BODY$
> DECLARE
> node record;
> child_node record;
> BEGIN
>
> FOR node IN -- For inheritance
> SELECT objid::regclass::text as relname
> FROM pg_catalog.pg_depend
> WHERE refobjid = $1::regclass::oid AND deptype ='n' AND classid
> ='pg_class'::regclass
> UNION
> -- For rewrite rules
> SELECT ev_class::regclass::text as relname
> FROM pg_rewrite WHERE oid IN ( SELECT objid FROM
> pg_catalog.pg_depend
> WHERE refobjid = $1::regclass::oid AND deptype ='n')
> UNION
> -- For constraints (foreign keys)
> SELECT conrelid::regclass::text as relname
> FROM pg_constraint WHERE oid in (SELECT objid FROM
> pg_catalog.pg_depend
> WHERE refobjid = $1::regclass::oid AND deptype ='n')
>
> LOOP
>
> RETURN NEXT node.relname;
> FOR child_node IN SELECT * FROM dependon(node.relname)
> LOOP
> RETURN NEXT child_node.dependon;
> END LOOP;
>
> END LOOP;
> END
> $BODY$
>   LANGUAGE 'plpgsql'
>
> Regards
>
>


-- 
Gurjeet Singh
EnterpriseDB Corporation
The Enterprise PostgreSQL Company


Re: [GENERAL] Options for passing values to triggers?

2013-02-04 Thread Gurjeet Singh
On Mon, Feb 4, 2013 at 2:01 PM,  wrote:

> Hello.
>
> I'm modelling a system where I'd like to log inserts and deletes
> to two or more tables (with foreign key references between them).
>
> As a (contrived) example:
>
> CREATE TABLE projects (
>   project_id   SERIAL PRIMARY KEY,
>   project_name TEXT UNIQUE NOT NULL
> );
>
> CREATE TABLE project_repositories (
>   repos_id  SERIAL PRIMARY KEY,
>   repos_project INTEGER NOT NULL,
>   repos_url TEXT UNIQUE NOT NULL,
>
>   FOREIGN KEY (repos_project) REFERENCES projects (project_id)
> );
>
> CREATE TABLE tasks (
>   task_id SERIAL PRIMARY KEY,
>   task_repos INTEGER NOT NULL,
>
>   FOREIGN KEY (task_repos) REFERENCES project_repositories (repos_id)
> );
>
> And then the log table:
>
> CREATE TABLE audit (
>   audit_id  BIGSERIAL PRIMARY KEY,
>   audit_time    TIMESTAMP WITH TIME ZONE NOT NULL,
>   audit_user    TEXT NOT NULL,
>   audit_session TEXT NOT NULL,
>   audit_type    TEXT NOT NULL,
>   audit_message TEXT NOT NULL
> );
>
> Note: The audit_user and audit_session columns are NOT postgresql roles
> or sessions; they are from the external application.
>
> So, the intention is that when something is deleted from the projects
> table, an event will be recorded of type 'PROJECT_DELETE', including
> the name of the project and user responsible for the deletion. Similar
> events would be logged for the tasks and project_repositories tables.
> Creation would be logged in the same manner.
>
> I'd like to model this using triggers with cascading deletes (so that
> when a project is deleted, each one of its repositories is deleted and
> logged as having been deleted, and any tasks that depend on those
> repositories too).
>
> The problem: I'm not sure what the most pleasant way (or if it's
> even possible) to pass 'audit_user' and 'audit_session' to the trigger
> functions. The values are created by the external application that
> queries the database and aren't otherwise present in the database in
> any form.
>
> Furthermore: I'm intending to partition the system into separate roles
> such that the role that executes the database queries doesn't have read
> or write permission to the audit table (meaning that any logging is
> going to have to occur via a function with SECURITY DEFINER).
>
> Any advice or "you don't want to do it that way" abuse would be much
> appreciated.
>

I have no opinion on whether this is the right way of going about it, but
here's one way it can be done. Recent versions of Postgres allow you to set
arbitrary session-level variables, so you can use SQL commands to set and get
these variables.

.) At the start of a session, set the app user name in a variable

SET my_app.audit_user = 'app_user_1';

.) Inside your trigger function:

current_app_user := current_setting('my_app.audit_user');
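Putting the two steps together, a minimal sketch against the audit table from your mail (the function and trigger names are made up, and handling of an unset variable, which would raise an error, is omitted):

```sql
-- At session start, from the application:
SET my_app.audit_user = 'app_user_1';
SET my_app.audit_session = 'session_42';

-- A trigger function that reads the variables back:
CREATE OR REPLACE FUNCTION log_project_delete() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit (audit_time, audit_user, audit_session,
                       audit_type, audit_message)
    VALUES (now(),
            current_setting('my_app.audit_user'),
            current_setting('my_app.audit_session'),
            'PROJECT_DELETE',
            'deleted project ' || OLD.project_name);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE TRIGGER projects_audit BEFORE DELETE ON projects
    FOR EACH ROW EXECUTE PROCEDURE log_project_delete();
```

SECURITY DEFINER matches your plan of keeping the audit table unwritable by the querying role.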

PS:
Question to PG-hackers: Why are such variables not visible in pg_settings
view?


[GENERAL] Re: [HACKERS] function for setting/getting same timestamp during whole transaction

2013-02-06 Thread Gurjeet Singh
Removing -hackers and adding -general

On Wed, Feb 6, 2013 at 7:26 AM, Miroslav Šimulčík
wrote:

>
>
>
>>  Alternately, you might be able to use a custom GUC from a rather smaller
>> PL/PgSQL function. At transaction start, issue:
>>
>> set_config('myapp.trigger_time', '', 't');
>>
>
> This is the problem with using a custom GUC - clearing the variable at
> transaction start. Without clearing, it's not a sufficient solution (see my
> response to Pavel's mail). I don't want to do the clearing from the
> application, and as far as I know there is no "transaction start" trigger.
>

I think you can do it by using 2 GUCs.

Put this at the beginning of your trigger:

if current_setting('my_var.now') is distinct from now()::text then
    -- First execution in this transaction: now() is fixed for the whole
    -- transaction, so a mismatch means a new transaction has started.
    -- (Assumes my_var.now was initialized, e.g. to '', at session start.)
    perform set_config('my_var.now', now()::text, false);
    perform set_config('my_var.my_ts', clock_timestamp()::text, false);
end if;
-- Go on happily using my_var.my_ts to stamp your data.

HTH,
-- 
Gurjeet Singh

http://gurjeet.singh.im/


Re: [GENERAL] Function returning subset of columns from table (return type)

2008-02-03 Thread Gurjeet Singh
On Feb 3, 2008 3:23 PM, brian <[EMAIL PROTECTED]> wrote:

> CREATE TYPE your_type
> AS (
>   added CHAR(11),
>   updated CHAR(11),
>   text_col TEXT
> );
>
> CREATE FUNCTION get_note(id INT)
> RETURNS SETOF your_type IMMUTABLE
> AS $$
>
>   DECLARE
> your_row your_type%rowtype;
>
>   BEGIN
> SELECT INTO your_row
>   to_char(added, 'Mon D '),
>   to_char(updated, 'Mon D '),
>   text_col
> FROM note
> WHERE id = $1;
>
> RETURN your_row;
>   END;
> $$ LANGUAGE plpgsql;
>
>
All's okay, except you should not have declared it IMMUTABLE, because the
results depend on a database query.

From the docs:
IMMUTABLE indicates that the function cannot modify the database and always
returns the same result when given the same argument values; that is, it does
not do database lookups or otherwise use information not directly present in
its argument list
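For illustration, here is a sketch of the same function reworked as a plain SQL function and marked STABLE, the right volatility for something that reads tables but does not modify them. (The to_char format string, padded to fill the CHAR(11) columns, is an assumption on my part.)

```sql
CREATE OR REPLACE FUNCTION get_note(id INT)
RETURNS SETOF your_type
STABLE  -- does lookups, so not IMMUTABLE; changes nothing, so not VOLATILE
AS $$
    SELECT to_char(n.added,   'Mon DD YYYY')::char(11),
           to_char(n.updated, 'Mon DD YYYY')::char(11),
           n.text_col
    FROM note n
    WHERE n.id = $1;
$$ LANGUAGE sql;
```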

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,   78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,   73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco *

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


[GENERAL] Reverse key index

2008-02-03 Thread Gurjeet Singh
Hi All,

I have wanted to create a reverse key index for some time now, and it
seems that an evening of reading and half a day of efforts finally paid off.
This is just a proof of concept, and sure, the bit-reversing technique can
use a native language's power for better results.

I started with the following posts:

http://archives.postgresql.org/pgsql-hackers/2002-01/msg01201.php
http://archives.postgresql.org/pgsql-hackers/2002-01/msg01225.php

The operator class that is created at the end uses one function to
populate the index in almost a random manner (reverse binary
representation). And this op-class provides just one operator to compare the
values, as opposed to Tom's suggestion ("all the other members would be
byte-reversed-comparison operators"); this is because if we allow the index
to use any of these other operators (custom or the built-in ones) for range
scans, the range's starting value will be found for sure, but because the
btree index follows the leaf nodes from there on, the results will be
totally different from what we asked for!

The result at the end, INDEX del_i, is an index that helps disperse
heavy sequential INSERTs from different sessions over to different index
blocks, reducing index block contention hence improving performance. Also,
this index can be used of equality operator (but no other operator).

Hackers, of course, comments please. Let me know if I have missed
something, and if this is not exactly what a user would want!

For fun: If you wish to see how a BTree index performs the comparisons
and populates the index, just uncomment the 'raise notice' statement in
rev_int_cmp(). And to compare the bit-reversed mode to the normal mode of
index population, just replace the contents of the declare section with
'rev_a int = a; rev_b int = b;'. :) have fun.

I have uploaded my original, unedited file from the efforts here. It
goes to lengths to create functions and operators and what not; may be
helpful for other noobs chasing operators.
http://www.geocities.com/gurjeet79/reverse_key_index.sql.txt

Best regards,

PS: I think my signature should be:
'Do I LOVE Postgres or what!!'
OR 'I am in LOVE with Postgres'
OR 'Postgres is _is_ *is* BEAutiful!'
OR 

--- CODE ---

--- Support

create or replace function public.reverse_string( str varchar )
returns varchar
strict
immutable
language plpgsql
as $$
declare reversed varchar = '';
begin
  for i in reverse char_length( str ) .. 1 loop
reversed = reversed || substring( str from i for 1 );
  end loop;
  return reversed;
end;
$$;

create or replace function public.rev_int_cmp( a int, b int )
returns int
strict
immutable
language plpgsql
as $$
declare
  rev_a int = reverse_string( a::bit(32)::varchar )::bit(32)::int;
  rev_b int = reverse_string( b::bit(32)::varchar )::bit(32)::int;
begin
  -- raise notice 'rev_int_cmp( %, % ) called', a, b;
  if( rev_a < rev_b ) then
return -1;
  elsif( rev_a > rev_b ) then
return +1;
  else
return 0;
  end if;
end;
$$;

--- Operator class

drop operator class if exists public.rev_int_ops using btree cascade;
create operator class public.rev_int_ops for type int using btree as
operator 3 pg_catalog.=,
function 1 public.rev_int_cmp( int, int );

--- example

drop table if exists del;
create table del( a int, b char(128) );
create index del_i on del( a rev_int_ops );
insert into del select s, s+1 from generate_series( 1, 1000 ) as s; -- rev
vacuum full analyze del;

explain
select * from del;
explain
select * from del order by a;
explain
select * from del where a = 2; -- should use the reverse index
explain
select * from del where a < 200; -- should NOT use the reverse index

truncate del;


-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,   78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,   73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco *

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


Re: [GENERAL] Reload only specific databases from pg_dumpall

2008-02-04 Thread Gurjeet Singh
What was the output format option used (the -f option)? Was it the plain-text
(SQL) or the custom format?

Best regards

On Feb 4, 2008 2:45 AM, Guido Neitzer <[EMAIL PROTECTED]> wrote:

> Hi.
>
> Is it possible to reload only specific databases from a file created
> by pg_dumpall?
>
> Thanks,
> cug
>
>



-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,   78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,   73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco *

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


Re: [GENERAL] Reload only specific databases from pg_dumpall

2008-02-04 Thread Gurjeet Singh
Sorry couldn't respond earlier...

Yeah, there's no -f option to pg_dumpall, I confused it with pg_dump's -F
option.

Since the output of pg_dumpall is plain SQL, and you would use psql to
restore the DB, there's no command-line option to execute only a part of the
script.

Long story short: you have to manually extract the contents of your DB from
the dump file.

Here's what I did: I created 3 databases, test{1,2,3}, and a single table in
each of them. Here's what I see at the head of the dump:

REVOKE ALL ON DATABASE template1 FROM PUBLIC;
REVOKE ALL ON DATABASE template1 FROM gsingh;
GRANT ALL ON DATABASE template1 TO gsingh;
GRANT CONNECT ON DATABASE template1 TO PUBLIC;
CREATE DATABASE test1 WITH TEMPLATE = template0 OWNER = gsingh ENCODING =
'UTF8';
CREATE DATABASE test2 WITH TEMPLATE = template0 OWNER = gsingh ENCODING =
'UTF8';
CREATE DATABASE test3 WITH TEMPLATE = template0 OWNER = gsingh ENCODING =
'UTF8';

So let's say we want to restore DB test2... here's how I would go about it:

Take that dump, remove all other 'CREATE DATABASE' commands except for the
one for test2. Search for string 'test2'; I get to the following line:

\connect test2

Delete everything between the aforementioned 'CREATE DATABASE' command and
this \connect command.

There's another DB after test2 (we saw the order of the 'CREATE DATABASE'
commands, remember?), so I search for the next '\connect' command. I find
this:

\connect test3

From this line on, I delete everything in the file. And I am done.

Now I run:

 psql -p 5444 -f ~/08-02-04.sql -d postgres

And my test2 DB is restored.

psql -p 5444 -d test2 -c "select count(*) from test;"
 count
---
   100
(1 row)

HTH,

Best regards,

On Feb 4, 2008 10:54 AM, Guido Neitzer <[EMAIL PROTECTED]> wrote:

> On 04.02.2008, at 10:41, Gurjeet Singh wrote:
>
> > What was the output format option used (-f option) ? Was it the
> > plain-text (SQL) or custom format?
>
> I cannot see a -f option on pg_dumpall. This is the command:
>
> pg_dumpall > `date "+%y-%m-%d"`.sql
>
> I just want to use an older file from a dump to restore a server, but
> I don't want to reload all databases (because that will take way
> longer).
>
> cug
>



-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,   78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,   73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco *

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


Re: [GENERAL] Reload only specific databases from pg_dumpall

2008-02-04 Thread Gurjeet Singh
I understand this all needs a little bit of 'vi' wizardry (or whichever
editor you are using). Also, as with all open-source suggestions, do not
rely on this procedure until you understand what it does and how it does it.

Best regards,

On Feb 4, 2008 4:39 PM, Gurjeet Singh <[EMAIL PROTECTED]> wrote:

> Sorry couldn't respond earlier...
>
> [snip]



-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,   78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,   73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco *

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


Re: [GENERAL] Reload only specific databases from pg_dumpall

2008-02-04 Thread Gurjeet Singh
Well... I assumed that would be the case; production dumps are not small. This
calls for some scripting: the script should basically automate the steps
mentioned above, and you will end up with a trimmed-down version of the dump
file... :)

But again, the script should be fast enough that this additional step
takes less time than restoring and then dropping the unwanted DBs.

Best regards,

On Feb 4, 2008 5:10 PM, Guido Neitzer <[EMAIL PROTECTED]> wrote:

> On 04.02.2008, at 18:00, Gurjeet Singh wrote:
>
> > I understand it all needs a little bit of 'vi' wizardry, (or
> > whichever editor you are using). Also, as with all open-source
> > suggestions, do not rely on this procedure until you understand what
> > and how it does what it does.
>
> Sure, the only problem is, I'm talking about a large file. Maybe I
> just import the whole bunch and drop some of the databases after that.
> Seems to be less hassle, but takes probably an hour or two longer, so
> I'll take a service window next night ...
>
> Thanks for the hints.
>
> cug
>



-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,   78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,   73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco *

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


Re: [GENERAL] Reload only specific databases from pg_dumpall

2008-02-04 Thread Gurjeet Singh
Nice...

Can this be reliably put into a script? The script would take only the DB
name as a parameter. One gotcha I would look out for is checking that the
CREATE DATABASE's template is nothing but template0, and otherwise warning
the user that the DB may not be reliably restored.

Best regards,

On Feb 4, 2008 6:03 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:

> On Feb 4, 2008 7:10 PM, Guido Neitzer <[EMAIL PROTECTED]> wrote:
> > On 04.02.2008, at 18:00, Gurjeet Singh wrote:
> >
> > > [snip]
> >
> > Thanks for the hints.
>
> Here's what I'd do.  First, use head, tail, and grep to find the lines
> you need to cut at...
>
> for instance, the create database statements will come up first, so
> something like this:
>
> For a test from dumpall I did this:
>
> $ grep -in "Create database"  test.sql
> 34:CREATE DATABASE smarlowe WITH TEMPLATE = template0 OWNER = postgres
> ENCODING = 'LATIN1';
>
> That pulls the create database line out.  Then, I can find the dump
> data with this:
>
> $ grep -in \\connect test.sql
> 5:\connect postgres
> 38:GRANT CONNECT ON DATABASE template1 TO PUBLIC;
> 41:\connect postgres
> 141:\connect smarlowe
> 335:\connect template1
>
> So, for the smarlowe database I need to get lines 141 to 334.  Quick test:
>
> head -n 141 test.sql |tail -n 1
> \connect smarlowe
>  head -n 335 test.sql|tail -n 1
> \connect template1
>
> So, the lines are where they should be, and head / tail lets me grab
> them.  Now, to grab the bits I need:
>
> echo $((335-141))
> 194  -- Number of lines to grab
>
> So, to grab the dump, I can use
>
> $ head -n 334 test.sql|tail -n 194 > smarlowe.sql
>
> Note I'm grabbing 335-1 since I don't actually want the \connect template1
> bit.
>
> Quick test:
> $ head -n 334 test.sql|tail -n 194|head -n 1
> \connect smarlowe
> $ head -n 334 test.sql|tail -n 194|tail -n 4
> --
> -- PostgreSQL database dump complete
> --
>



-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,   78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,   73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco *

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


Re: [GENERAL] pg_restore seems slow

2008-02-09 Thread Gurjeet Singh
On Feb 9, 2008 10:42 AM, Willem Buitendyk <[EMAIL PROTECTED]> wrote:

> I'm trying to restore my database from 8.26 into 8.3 (win32) but find
> the process to be exceedingly slow.  The database has about 60M records.
> I realize there will be differences based on hardware, available memory,
> complexity of records but when I first tried a restore with the verbose
> option I was able to calculate based on the index incrementing that it
> was inserting about 6500 records per minute.
> At that rate it would take 153 hours to restore my db.  I then tried
> minimizing the verbosity window and would open it only after a minute
> and the speed was improved to about 2 records per minute.  I'm
> hoping without the verbose option that the speed increases to at least
> 20 records per minute which would be a fairly reasonable 5 hours.
> So is there any way besides using verbose to calculate the speed at
> which pg_restore is inserting records?  It would be great to have a
> 'progress' option so that a person could time going out for a sail in
> the morning and then return at just the right time.  Guess you know what
> I'd rather be doing instead of staring at the command prompt :)
>
>

By any chance, are you using -d or -D option while doing pg_dump?

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,   78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,   73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco *

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


Re: [GENERAL] LIMIT Question

2008-02-29 Thread Gurjeet Singh
In my opinion (without looking at the code), if you have an aggregate
function or an ORDER BY or GROUP BY clause, then yes, the whole query has to
be executed before the first row of the result set can be shown. But if the
query doesn't have any of these clauses, then the DB can send back the first
row as soon as it produces it (i.e. after WHERE-clause processing) and stop
the query execution there.
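The two cases can be told apart from the plan shape. A sketch, with a made-up table name, of what EXPLAIN typically shows (exact costs and row estimates will differ):

```sql
-- Hypothetical unindexed table "big". Without ORDER BY, the Limit
-- node can stop the underlying Seq Scan after one qualifying row:
EXPLAIN SELECT * FROM big WHERE v > 10 LIMIT 1;
--  Limit
--    ->  Seq Scan on big
--          Filter: (v > 10)

-- With ORDER BY on an unindexed column, a Sort sits under the
-- Limit, and a sort must consume its entire input before it can
-- emit its first row:
EXPLAIN SELECT * FROM big WHERE v > 10 ORDER BY v LIMIT 1;
--  Limit
--    ->  Sort
--          Sort Key: v
--          ->  Seq Scan on big
--                Filter: (v > 10)
```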

Best regards,

On Fri, Feb 29, 2008 at 5:02 AM, Scott Marlowe <[EMAIL PROTECTED]>
wrote:

> On Thu, Feb 28, 2008 at 3:20 PM, Terry Lee Tucker <[EMAIL PROTECTED]>
> wrote:
> > When one uses LIMIT, as in LIMIT 1, is the entire query executed on the
> server
> >  side, but only one record returned?
> >
> >
>  
> 
> >   PostgreSQL 7.4.19 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC)
> 3.4.6
> >  20060404 (Red Hat 3.4.6-9)
>
> Sometimes yes, sometimes no.  Depends on how complex the query is and
> whether or not pgsql's query planner can see a shortcut or not.  It's
> more likely that a later version will have the optimizations needed to
> do that than an older version like 7.4 I'd think.  But I'd ask someone
> more expert on the planner like Tom to be sure.
>
> ---(end of broadcast)---
> TIP 4: Have you searched our list archives?
>
>   http://archives.postgresql.org/
>



-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,  78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,  73° 56' 25.42"E - Pune *
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


Re: [GENERAL] LIMIT Question

2008-02-29 Thread Gurjeet Singh
On Fri, Feb 29, 2008 at 3:55 PM, Martijn van Oosterhout <[EMAIL PROTECTED]>
wrote:

> On Fri, Feb 29, 2008 at 02:53:05PM +0530, Gurjeet Singh wrote:
> > In my opinion (without looking at the code), if you have a
> grouping-function
> > or ORDER BY or GROUP BY clause, then yes, the whole query has to be
> executed
> > to show the first row of the result-set. But if the query doesn't have
> any
> > of these clauses, then the DB has the ability to send back the first row
> > from the result as soon as it processes it (i.e after WHERE clause
> > processing), and stop the query execution there.
>
> Except if you have an index on the column you're ordering by. Then the
> server can really return the first row quickly.


Quickly for sure... but I don't think 'without processing all the rows that
qualify'. I think it will still process all the rows and return just one.
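For reference, the plan shape in the indexed case (a sketch; the table and index names are made up) shows how the server can avoid both the sort and the full scan:

```sql
-- Assuming an index on t(id), ORDER BY id LIMIT 1 can be answered
-- by fetching just the first tuple in index order; there is no
-- Sort node, and the scan stops after one row:
CREATE INDEX t_id_idx ON t (id);
EXPLAIN SELECT * FROM t ORDER BY id LIMIT 1;
--  Limit
--    ->  Index Scan using t_id_idx on t
```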

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,  78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,  73° 56' 25.42"E - Pune *
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


Re: [GENERAL] LIMIT Question

2008-02-29 Thread Gurjeet Singh
On Fri, Feb 29, 2008 at 7:20 PM, Sam Mason <[EMAIL PROTECTED]> wrote:

> On Fri, Feb 29, 2008 at 05:28:29PM +0530, Gurjeet Singh wrote:
> > On Fri, Feb 29, 2008 at 3:55 PM, Martijn van Oosterhout <
> [EMAIL PROTECTED]> wrote:
> > > Except if you have an index on the column you're ordering by. Then the
> > > server can really return the first row quickly.
> >
> > Quickly for sure... but I don't think 'without processing all the rows
> that
> > qualify'.
>
> Postgres will always try to do the smallest amount of work possible.
> Putting a LIMIT in will cause the planner to pick a plan that returns an
> appropriate number of rows quickly, sometimes it can be a very different
> plan.
>
> > I think it will still process all the rows and return just one.
>
> How do you explain that when you run:
>
>  SELECT 1/v FROM (VALUES (1),(0)) c(v);
>
> Without a limit you get a "division by zero" exception, and when you
> have a limit of one row you get a result back.
>

You are correct, and that is what I said in the first mail: your query does
not involve an ORDER BY (or other such clauses), so the first row the
executor encounters satisfies what the query asked for. But if it had an
ORDER BY, you would again get the division error.

Above, I was referring to the case where the planner chooses an index scan.
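Sam's example extends directly into a demonstration of this point (behavior as on the 8.x servers discussed in this thread, where the target list is computed below the Sort):

```sql
-- Stops after the first row; the 1/0 row is never evaluated:
SELECT 1/v FROM (VALUES (1),(0)) c(v) LIMIT 1;

-- The Sort feeding the Limit must read every input row, and 1/v
-- is computed for each of them before sorting, so the division
-- by zero is reached and the query fails:
SELECT 1/v FROM (VALUES (1),(0)) c(v) ORDER BY v LIMIT 1;
-- ERROR:  division by zero
```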

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37"N,  78° 30' 59.76"E - Hyderabad
18° 32' 57.25"N,  73° 56' 25.42"E - Pune *
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


Re: [GENERAL] ISO something like "#if 0 ... #endif" for SQL code

2008-03-11 Thread Gurjeet Singh
On Mon, Mar 10, 2008 at 9:20 PM, Kynn Jones <[EMAIL PROTECTED]> wrote:

>
>
>
> Hi!  When it comes to programming SQL, my newbie approach is to write my
> code in a file test.sql, which I test from within psql by using
>
>   my_db=> \i /some/path/test.sql
>
> ...and (once I'm satisfied with the code) copy and paste it to a different
> file that has the SQL I've written so far for the project.
>
> It's a rather cumbersome system, all the more so because I don't have a
> convenient way to comment out large chunks of code (which may include
> C-style comments).
>
> I'm thinking of something like the trick of surrounding C code with pairs
> of #if 0 and #endif, which effectively comments out code, even when it
> contains /* C-style comments */.
>
> Is there some similar trick for SQL code?
>

If your sole objective is to comment out large chunks of SQL code, which in
turn may have multi-line comments, then the simplest trick is to comment
them using /* multi-line */ itself!

The SQL standard, and Postgres, allow comments to nest; some commercial
RDBMSs do not support this, which is why people often think it's not
possible in SQL.

So the following is possible, and you can \include this file in psql with
your desired results:


/* temporary big comment to test only one query, and ignore others

/* my first query */
select 'hello world';

/* my second query */
select 200*201;

/* my application's query */
select ename from emp where empid = 10;

end big comment */

/* Currently testing this query, and do not want the above queries to be
 executed until I am finished with this. So, put everything above this
 comment in a huge multi-line comment */
select ename, dname from emp, dept where emp.deptid = dept.deptid;




-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

17° 29' 34.37"N, 78° 30' 59.76"E - Hyderabad *
18° 32' 57.25"N, 73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


Re: [GENERAL] postgre vs MySQL

2008-03-14 Thread Gurjeet Singh
On Thu, Mar 13, 2008 at 6:06 PM, rrahul <[EMAIL PROTECTED]> wrote:

>
> Thanks to all you wonderful people out there. I don't know if it's your
> love
> for Postgres or nepotism that makes it look far superior to MySQL.


I wouldn't comment on that, but having read so much about MySQL on Postgres'
lists, I sure have a disliking for MySQL, so much so that I haven't bothered
to download and install it even once!

Does anyone know of threads in MySQL lists/forums where they run a
propaganda against Postgres or if they downplay us? That would be an
interesting read!


> But why doesn't the client list tell that?
> I see MySQL boasting of Google, Yahoo, Alcatel...
> What about Postgres? The list is not that impressive.


I would suggest looking up the customer lists of Postgres derivatives,
such as EnterpriseDB and Greenplum. There are some pretty big names there
too. This, in some ways, reflects customers' faith in Postgres.

Best regards,

-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

17° 29' 34.37"N, 78° 30' 59.76"E - Hyderabad *
18° 32' 57.25"N, 73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


Re: [GENERAL] identify database process given client process

2008-03-18 Thread Gurjeet Singh
On Mon, Mar 17, 2008 at 8:28 PM, hogcia <[EMAIL PROTECTED]> wrote:

> Hi,
> I have to find a Postgres database process pid (or other
> identification) for a given client process pid. Or client processes
> for a database process. How are they connected? I was suggested maybe
> netstat could give me the answer and I think those are two pf_unix
> processes. But maybe there are some PostgreSQL functions that do this?
> How should I approach this topic?
>

If possible, execute the following query from your client, and it will show
the server process the client is connected to.

select pg_backend_pid();

This is an easy, one way route to determine a client's shadow process, be it
local or remote.

If you want the reverse lookup, that is, finding out which backend process
is servicing which client, it is a bit more difficult. You have to query
pg_stat_activity; the client_addr and client_port columns will give you
some information about the remote clients. For local clients using Unix
sockets, these columns are null.
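A reverse-lookup query along those lines (column names as in the 8.x catalogs; note that procpid was later renamed to pid in 9.2):

```sql
-- One row per backend, with the client it is servicing:
SELECT procpid, usename, client_addr, client_port
FROM pg_stat_activity;
-- client_addr and client_port are NULL for Unix-socket clients,
-- so matching those requires OS-level tools such as netstat/lsof.
```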

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

17° 29' 34.37"N, 78° 30' 59.76"E - Hyderabad *
18° 32' 57.25"N, 73° 56' 25.42"E - Pune
37° 47' 19.72"N, 122° 24' 1.69" W - San Francisco

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device


Re: [GENERAL] How to concatenate a boolean to a text string for an EXECUTE stmt in a stored proc

2008-03-23 Thread Gurjeet Singh
On Thu, Mar 20, 2008 at 1:58 AM, Keaton Adams <[EMAIL PROTECTED]> wrote:

>
> Postgres 8.1 on RHEL
>
> How do I formulate this EXECUTE statement so that cust_hold (boolean
> column in the table) can be set to the value in v_cust_on_hold (boolean
> variable in a function)?
>
> v_cust_on_hold  BOOLEAN;
>
> EXECUTE 'UPDATE customer_action_ytd_' || v_tyear || ' ' ||
>   'SET bytes_sc = bytes_sc + ' || v_bytes_sc || ', ' ||
>'cust_hold = ' || v_cust_on_hold || ', ' ||
>'cust_count = cust_count + ' || v_cust_count || ' ' ||
>'WHERE id = ' || v_id || ' ' ||
>'AND cust_type = \'' || v_cust_type || '\' ';
>
>
> psql:runit.sql:1: ERROR:  operator does not exist: text || boolean
> HINT:  No operator matches the given name and argument type(s). You may
> need to add explicit type casts.
> CONTEXT:  SQL statement "SELECT  'UPDATE customer_action_ytd_' ||  $1  ||
> ' ' || 'SET bytes_sc = bytes_sc + ' ||  $2  || ', ' || 'cust_hold = ' ||  $3
>  || ', ' || 'cust_count = cust_count + ' ||  $4  || ' ' || 'WHERE id = ' ||
>  $5  || ' ' || 'AND cust_type = \'' ||  $6  || '\' '"
> PL/pgSQL function "kda_test_ytd_rollup" line 96 at execute statement
>
> I tried a TO_CHAR(v_cust_on_hold) but received:
>
> psql:runit.sql:1: ERROR:  function to_char(boolean) does not exist
>
>

You can simply use the CAST operator like so:

declare
t text;
b boolean;
begin
b = true;
t = 'SELECT 1 where true = ' || b::text || ';';
raise notice '%', t;
end;


One more thing, I noticed that you are trying to escape single quotes
(') with backslashes (\), in the last line of the EXECUTE:

'AND cust_type = \'' || v_cust_type || '\' ';

You might want to do it like this:

'AND cust_type = ''' || v_cust_type || ''' ';

That's the SQL-standard way of escaping single quotes: just precede them
with another quote.
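Alternatively, quote_literal() (available on 8.1) lets the server do the quoting, which keeps the EXECUTE string readable. A sketch using the variable names from the original function:

```sql
-- quote_literal() wraps the value in single quotes and doubles
-- any quotes embedded in it:
EXECUTE 'UPDATE customer_action_ytd_' || v_tyear
     || ' SET bytes_sc = bytes_sc + '  || v_bytes_sc
     || ', cust_hold = '               || v_cust_on_hold::text
     || ', cust_count = cust_count + ' || v_cust_count
     || ' WHERE id = '                 || v_id
     || ' AND cust_type = '            || quote_literal(v_cust_type);
```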

HTH,

Best regards,

-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Can Postgres 8.x start if some disks containing tablespaces are not mounted?

2008-03-31 Thread Gurjeet Singh
On Mon, Mar 31, 2008 at 11:40 PM, Tom Lane <[EMAIL PROTECTED]> wrote:

> "Morris Goldstein" <[EMAIL PROTECTED]> writes:
> > Suppose I have a database with $PGDATA on /dev/sda, and a tablespace
> > directory on /dev/sdb. Will Postgres start successfully if /dev/sda is
> > mounted and /dev/sdb is not? If not, why not?
>
> It will start, but you will have unpleasant failures when you try to use
> tables in the secondary tablespace ... note that if autovacuum is on,
> that is likely to happen even without any explicit action on your part.
>
>
One of the gripes I have with Postgres is that it won't even complain if
one of the segments of a relation goes missing, unless the missing segment
is referred to by an index!

The most troublesome part is that count(*) (i.e. a seq scan) scans only up
to the last sequentially present segment. Here's an example case:

Healthy:

count(*) : 2187001
size: 2441 MB
segments: 17651, .1, .2

Corrupt: 17651.1 missing
-
count(*) : 917503
size: 1024 MB
segments: 17651, .2
select max(a) from temp: 2187001 (uses index to locate the last tuple in
segment .2)

select a from temp where a = (select max(a) from temp)/2
ERROR:  could not read block 156214 of relation 1663/11511/17651: read only
0 of 8192 bytes

restore missing segment:
---
select a from temp where a = (select max(a) from temp)/2
  : 1093500


I think the counter-argument would be that this has never been reported in
the field, but I wish our metadata recorded this somehow and reported an
ERROR on finding that a segment is missing.
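Until the server checks this itself, a crude external check for missing segments is possible by walking a relation's segment files in order. A sketch on tiny stand-in files (the relfilenode 17651 follows the example above); on a real cluster you would run it against $PGDATA/base/&lt;dboid&gt;/:

```shell
# Detect gaps in a relation's segment files. Demo uses stand-in
# files in a temp dir; lexical glob order is fine for fewer than
# 10 extra segments, larger relations would need a numeric sort.
dir=$(mktemp -d)
printf '%1024s' '' > "$dir/17651"      # segment 0, "full"
printf '%512s'  '' > "$dir/17651.2"    # final segment (short is OK)
# 17651.1 is deliberately absent, mimicking the corruption above
prev=-1; missing=""
for f in "$dir"/17651 "$dir"/17651.*; do
  [ -e "$f" ] || continue
  b=$(basename "$f"); n=${b##*.}; [ "$n" = "$b" ] && n=0
  if [ "$n" -ne $((prev + 1)) ]; then
    missing="$missing $((prev + 1))"
    echo "gap: segment $((prev + 1)) of relation 17651 is missing"
  fi
  prev=$n
done
```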

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Can Postgres 8.x start if some disks containing tablespaces are not mounted?

2008-04-01 Thread Gurjeet Singh
On Tue, Apr 1, 2008 at 7:42 PM, Morris Goldstein <
[EMAIL PROTECTED]> wrote:

> But that makes me wonder: what about this sequence of events:
>
> - Postgres running normally on /dev/sda and /dev/sdb.
> - Update to table in /dev/sdb tablespace is committed but still exists in
> WAL.
> - Postgres crashes (e.g. power failure).
> - Postgres starts with /dev/sda only.
> - Recovery needs to update table in /dev/sdb tablespace.
>
> I assume bad things will happen in this case.
>
>
200% correct.

-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] PostgreSQL on Vista

2008-04-21 Thread Gurjeet Singh
I would recommend creating rules in your firewall that let Postgres run and
listen on sockets rather than turn off the firewall altogether. (The
firewall rules _may_ not be the problem in your case, but you can still try)

On an orthogonal note, I just disliked the UAC in Vista... For the first
month or so I tried to cope with it, hoping that I'd get used to it, but it
kept getting in the way so much that I finally had to turn it off... now
life's much easier.

Best regards,

On Tue, Apr 22, 2008 at 5:17 AM, Justin <[EMAIL PROTECTED]> wrote:

>  this new keyboard has problems, or i can't type ;-) .
>
>  it should be off not of
>
> Justin wrote:
>
>
>
> Dirk Verleysen wrote:
>
>
> Hi,
>
> I have been running a Postgres (8.2.4) on a Windows XP for over 3 months.
> Last week this machine died and I bought a new Vista machine today.
> Installed everything on it and a Postgres (8.2.7). The problem is that I
> cannot start the Postgres service. I keep getting the following error:
> FATAL:  could not create any TCP/IP sockets
>
>
> Anyone has any idea what I can do ?
>
> Thanks,
>
> Dirk
>
> turn of the firewall
>
>


-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] table as log (multiple writers and readers)

2008-04-22 Thread Gurjeet Singh
On Wed, Apr 23, 2008 at 12:29 AM, David Wilson <[EMAIL PROTECTED]>
wrote:

> On Tue, Apr 22, 2008 at 2:48 PM, Joris Dobbelsteen
> <[EMAIL PROTECTED]> wrote:
> >
> >  Ah, yes, all visible rows...
> >  My point is that, unless you use a transaction with serializable
> isolation,
> > this all visible rows for the second statement might be different from
> those
> > that you copied into the log table.
> >
> >  With the normal Read committed isolation level you suffer from a
> possible
> > nonrepeatable read that might change tuple visibility between different
> > statements.
>
> That depends on implementation. A select into ... to do the initial
> copy followed by a delete where... with the where clause referencing
> the log table itself to ensure that we delete only things that now
> exist in the log table, or a row by row  insert/delete pair. Either
> would provide the appropriate level of protection from accidental
> deletion of more things than you intended without harming concurrency.
> The delete referencing the log table might require that the log table
> be indexed for performance, but it's likely that such indexing would
> be done anyway for general log use.


I think this plpgsql function would solve the problem of atomic
read-and-delete operation...

create or replace function log_rotate() returns void as $$
declare
  rec record;
begin

for rec in delete from t1 returning * loop
insert into t2 values( rec.a, rec.b );
end loop;

end;
$$ language 'plpgsql';

select log_rotate();
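For what it's worth, on later releases (9.1 and up) a writable CTE performs the same atomic move as a single statement, with no function needed:

```sql
WITH moved AS (
    DELETE FROM t1 RETURNING a, b
)
INSERT INTO t2
SELECT a, b FROM moved;
```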

Best regards,
-- 
[EMAIL PROTECTED]

[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] How to modify ENUM datatypes?

2008-04-22 Thread Gurjeet Singh
On Tue, Apr 22, 2008 at 3:31 PM, D. Dante Lorenso <[EMAIL PROTECTED]> wrote:

> All,
>
> In the past I have used foreign keys to lookup tables for small lists of
> values that I now think ENUM could do the job of.  I was hoping that by
> using ENUM, I would avoid having to do joins in my queries, and that I'd be
> making the data more robust and faster.
>
> I used to have a table for account_status:
>
>  A | Active
>  B | Billing Failed
>  C | Closed
>  D | Deactivated
>
>  account.acct_type CHAR references account_type.acct_type CHAR
>
> But, now I've converted that to an ENUM:
>
>  ACTIVE
>  BILLING_FAILED
>  CLOSED
>  DEACTIVATED
>
>  account.acct_type ENUM account_type
>
> The problem is that once I create a column in my account table that uses
> this 'account_type' datatype, I can't seem to change or add to it any more.
>  I want to add a new value or edit/delete an existing one.
>
> How do you make changes to an ENUM datatype that is already in use?
>

I agree with others that ENUMs stop looking pretty when you need to modify
them...

Here's a thread from recent past where this exact problem was discussed...
maybe it'll interest you...

http://archives.postgresql.org/pgsql-general/2007-12/msg01363.php
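For the archives: on 8.3 there is no ALTER TYPE ... ADD VALUE (that arrived in 9.1), so the usual workaround discussed in such threads is to swap in a replacement type. A sketch using the poster's values; the added value SUSPENDED is hypothetical:

```sql
-- Build the new enum, repoint the column through a text cast,
-- then retire the old type:
CREATE TYPE account_type_new AS ENUM
    ('ACTIVE', 'BILLING_FAILED', 'CLOSED', 'DEACTIVATED', 'SUSPENDED');
ALTER TABLE account
    ALTER COLUMN acct_type TYPE account_type_new
    USING acct_type::text::account_type_new;
DROP TYPE account_type;
-- rename the new type into place, where your version supports it:
ALTER TYPE account_type_new RENAME TO account_type;
```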

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] inheritance. more.

2008-04-28 Thread Gurjeet Singh
On Mon, Apr 28, 2008 at 8:01 PM, Erik Jones <[EMAIL PROTECTED]> wrote:

>
> Postgres doesn't yet handle inheritance of constraints from parent to
> child tables via inheritance.
>
>
Was it done by design or was it a limitation we couldn't get over?

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] inheritance. more.

2008-04-30 Thread Gurjeet Singh
On Tue, Apr 29, 2008 at 11:47 AM, Martijn van Oosterhout <[EMAIL PROTECTED]>
wrote:

> On Tue, Apr 29, 2008 at 06:31:30AM +0530, Gurjeet Singh wrote:
> > On Mon, Apr 28, 2008 at 8:01 PM, Erik Jones <[EMAIL PROTECTED]> wrote:
> > > Postgres doesn't yet handle inheritance of constraints from parent to
> > > child tables via inheritance.
>
> > Was it done by design or was it a limitation we couldn't get over?
>
> Inheritence of most constraints works, just not unique constraints. The
> problem of managing a unique index over multiple tables has not yet
> been solved (it's a reasonably hard problem).
>
>
I completely agree with the difficulty of the problem. One of the advantages
of breaking up your data into partitions, as professed by Simon, I think
(and I agree), is that you have smaller indexes, which improves performance.
And maybe having one huge index managing uniqueness across partitioned
data just defeats the idea of data partitioning!

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] ERROR: could not open relation

2008-05-07 Thread Gurjeet Singh
On Wed, May 7, 2008 at 1:50 PM, Zdenek Kotala <[EMAIL PROTECTED]> wrote:

> Q Master napsal(a):
>
> > I get this strange error
> >
> > Caused by: org.postgresql.util.PSQLException: ERROR: could not open
> > relation 1663/53544/58374: No such file or directory
> >
> > How do I recover from it ? Version 8.2 on windows.
> >
> > I think I had an hardware issue in the past where my box rebooted few
> > times I assume this is due to that thing.
> >
> > I tried to re index them but is not working. Any ideas ?
> >
> >
> 2) get table name (connect to affected database)
>
> select * from pg_class where oid=58374
>

The query should be

select * from pg_class where relfilenode = 58374
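For completeness, 9.0 and later also ship built-in helpers for mapping between relations and on-disk files, so the path in the error message can be checked directly:

```sql
-- relfilenode-to-name lookup, as above:
SELECT relname FROM pg_class WHERE relfilenode = 58374;

-- name-to-path (9.0+): returns something like base/53544/58374
SELECT pg_relation_filepath('some_table');
```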

Best regards
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Scripting function definitions as SQL?

2008-05-12 Thread Gurjeet Singh
On Mon, May 12, 2008 at 10:57 AM, Reece Hart <[EMAIL PROTECTED]> wrote:

> On Sun, 2008-05-11 at 06:12 -0700, Postgres User wrote:
> > Has anyone written a function that scripts out all the functions in a
> > database as full SQL statements (Create Function.)
>
> You could pg_dump the schema in the "custom" format (-Fc), then call
> pg_restore with -l to get the TOC, grep the TOC for functions, and feed
> that back into pg_restore with -L. It sounds like a lot, but it's pretty
> easy in practice, like so:
>
> $ sudo -u postgres pg_dump -Fc -s mydb >mydb.pgdfc
> $ pg_restore -l mydb.pgdfc >mydb.toc
> $ grep -E '^[0-9]+; [0-9]+ [0-9]+ FUNCTION' mydb.toc >mydb-fx.toc
> $ pg_restore -L mydb-fx.toc mydb.pgdfc
>
> The output of pg_restore is sql.
>
> This technique is extremely useful for other kinds of schema elements as
> well.
>
>
It's a bit too much for the task at hand. Recently, I too was faced with a
situation where I wanted to dump just a few functions, but pg_dump does not
have any option to do so.

Can we have an option to dump individual functions?

Best regards,

-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Stripping out slony after / before / during pg_restore?

2008-05-13 Thread Gurjeet Singh
On Tue, May 13, 2008 at 5:42 PM, Glyn Astill <[EMAIL PROTECTED]> wrote:

> Hi people,
>
> I'm setting us up a separate staging / test server and I want to read in a
> pg_dump of our current origin stripping out all the slony stuff.
>
> I was thinking this could serve two purposes a) test out backups restore
> properly and b) provide us with us with the staging / test server
>
> What's the best way to remove all the slony bits?
>
> I was thinking read in the dump, then use uninstall node - but I'd rather
> not have to run the slon daemons.
>
> Or should I just leave all the slony stuff in there... would it cause us
> any problems? There'd be no slons running and the next night it's all wiped
> and restored again...
>
> Anyone got any ideas? Anyone got something similar already?
>

For the restored database to take part in replication, you would need a
slon daemon running, configured to monitor it; the essential settings for
that are the host name, port number, database name and the Slony cluster
name. Since you do not have a slon daemon for the restored database, I
guess you are fine after restoring it.

If you really need to be sure that the restored database does not take
part in replication, you can go ahead and DROP ... CASCADE the replication
schema from the database. For example, if your Slony cluster name is
my_repl_cluster, connect to the restored database and issue 'DROP SCHEMA
_my_repl_cluster CASCADE;' to get rid of the replication information. Now,
even if there is a slon daemon running for this DB, it won't be able to do
anything; you can watch the slon's log for the warnings it will generate.

Best regards,

-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] pg_standby stuck on a wal file size <16MB

2008-05-17 Thread Gurjeet Singh
On Sun, May 18, 2008 at 12:17 AM, Vladimir Kosilov <[EMAIL PROTECTED]>
wrote:

> I've been having problem with pgsql-8.2.5 master/slave warm standby
> replication setup where occasionally master node generates a wal file
> smaller then expected 16MB. pg_standby on slave gets stuck on such short
> files, and replication halts from that moment on. we have to do
> pg_start_backup/ rsync of data to slave / pg_stop_backup and restart slave
> in order to recover. database I'm replicating is write mostly.
>
> this is process on slave that is waiting on log file which is smaller in
> size:
> postgres 14277 11857  0 03:34 ?00:00:00 sh -c
> /usr/local/pgsql/bin/pg_standby -d -k 255 -r 7 -s 10 -w 0 -t
> /usr/local/recovery_control/quit_recovery /usr/local/wal_archive
> 000103400020 pg_xlog/RECOVERYXLOG 2>>
> /usr/local/pgsql/data/standby.log
> postgres 14278 14277  0 03:34 ?00:00:00
> /usr/local/pgsql/bin/pg_standby -d -k 255 -r 7 -s 10 -w 0 -t
> /usr/local/recovery_control/quit_recovery /usr/local/wal_archive
> 000103400020 pg_xlog/RECOVERYXLOG
>
> here is a sample list of wal_archive directory slave, note the size of
> 000103400020 expected wal file is less then 16MB:
>
> ...
> -rw--- 1 postgres postgres 16777216 May 17 03:19
> 000103400017
> -rw--- 1 postgres postgres 16777216 May 17 03:21
> 000103400018
> -rw--- 1 postgres postgres 16777216 May 17 03:22
> 000103400019
> -rw--- 1 postgres postgres 16777216 May 17 03:24
> 00010340001A
> -rw--- 1 postgres postgres 16777216 May 17 03:26
> 00010340001B
> -rw--- 1 postgres postgres 16777216 May 17 03:27
> 00010340001C
> -rw--- 1 postgres postgres 16777216 May 17 03:29
> 00010340001D
> -rw--- 1 postgres postgres 16777216 May 17 03:30
> 00010340001E
> -rw--- 1 postgres postgres 16777216 May 17 03:32
> 00010340001F
>
> -rw--- 1 postgres postgres 13746176 May 17 03:34
> 000103400020
>
> -rw--- 1 postgres postgres 16777216 May 17 03:35
> 000103400021
> -rw--- 1 postgres postgres 16777216 May 17 03:37
> 000103400022
> -rw--- 1 postgres postgres 16777216 May 17 03:38
> 000103400023
> -rw--- 1 postgres postgres 16777216 May 17 03:40
> 000103400024
> -rw--- 1 postgres postgres 16777216 May 17 03:41
> 000103400025
> -rw--- 1 postgres postgres 16777216 May 17 03:43
> 000103400026
> -rw--- 1 postgres postgres 16777216 May 17 03:45
> 000103400027
> ...
>
> skipping a bit further I see there is at least one other instance where wal
> file is shorter then normal 16MB:
>
> -rw--- 1 postgres postgres 16777216 May 17 05:42
> 00010340006F
> -rw--- 1 postgres postgres 16777216 May 17 05:44
> 000103400070
> -rw--- 1 postgres postgres 16777216 May 17 05:46
> 000103400071
> -rw--- 1 postgres postgres 16777216 May 17 05:47
> 000103400072
>
> -rw--- 1 postgres postgres16384 May 17 05:50
> 000103400073
>
> -rw--- 1 postgres postgres 16777216 May 17 05:51
> 000103400074
> -rw--- 1 postgres postgres 16777216 May 17 05:52
> 000103400075
> -rw--- 1 postgres postgres 16777216 May 17 05:54
> 000103400076
>
> why would a master node create a wal file smaller then normal
> checkpoint_segment size and how can this be avoided. I need a reliable
> replication mechanism even at a cost of longer recovery on standby.
>
> master's postgresql.conf:
> ..
> fsync = on
> wal_sync_method = open_sync
> wal_buffers = 128
> checkpoint_segments = 64
> archive_command = 'test ! -f /usr/local/wal_archive_local/%f && cp %p
> /usr/local/wal_archive_local/%f'
>
> archive files are then moved  on master to standby every other minute:
>
> rsync -aq --remove-sent-files /usr/local/wal_archive_local/
> slave::wal_archive/
>
> slave's recovery.conf:
> restore_command = '/usr/local/pgsql/bin/pg_standby -d -k 255 -r 7 -s 10 -w
> 0 -t /usr/local/recovery_control/quit_recovery /usr/local/wal_archive %f %p
> 2>> /u
> sr/local/pgsql/data/standby.log'
>
> both servers are identical Dell PE1950 servers with 4 sas hd w/hardware
> RAID 1+0 running:
> 2.6.18-8.1.8.el5 #1 SMP Tue Jul 10 06:39:17 EDT 2007 x86_64 x86_64 x86_64
> GNU/Linux
>
> Thanks much in advance
> V.


Any reason why you are not using the rsync command in archive_command in
the first place?

[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] pg_standby stuck on a wal file size <16MB

2008-05-17 Thread Gurjeet Singh
On Sun, May 18, 2008 at 5:48 AM, Vlad Kosilov <[EMAIL PROTECTED]> wrote:

> is this wrong of me to expect that postgres would not make a wal file
> available to archive_command unless it was completely ready to let go of
> that wal file?
>

That's a perfectly reasonable expectation, and that's exactly how it works.


>
> Ioannis Tambouras wrote:
>
>>  archive_command = 'test ! -f /usr/local/wal_archive_local/%f && cp %p
>>>  /usr/local/wal_archive_local/%f'
>>>
>>>
>>
>>  The archive command tests if the wal segment exists and is a file, but it
>> does not check if the file is still being written. You need to copy the file
>> after writing has finished (it reached 16777216 bytes). I don't have sources
>> of pg_standby near me, but I remember that the
>> C code checks for complete segment sizes.
>>
>>
>> thanks
>> Ioannis Tambouras
>>
>>
>>
>>
>
>
> --
> 
> Vladimir (Vlad) Kosilov
> Senior Systems Administrator
> Contigo Systems Inc.
> 604.683.3106 (phone)
> 604.648.9886 (fax)
> [EMAIL PROTECTED]
> www.contigo.com
> 
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Seeking rows whit \

2008-05-18 Thread Gurjeet Singh
On Mon, May 19, 2008 at 7:56 AM, Francisco Reyes <[EMAIL PROTECTED]>
wrote:

> Is there a way to search for rows with the character \?
>
> I tried variants of like and  ~.
>
> create table mytable (f1 text);
> insert into mytable values ('Test row 1');
> insert into mytable values (E'Test row 2 \\');
>
> select * from mytable where f1 like E'%\\%'; <-- returned nothing
> select * from mytable where f1 ~ '\'; <-- waiting for single quote
> select * from mytable where f1 ~ E'\\'; <-- Error
>
> And a few more variants.. with no valid reults.
>
> Any suggestions?
>
>
>
It is well documented. Quoting from
http://www.postgresql.org/docs/8.3/interactive/functions-matching.html#FUNCTIONS-LIKE

Thus, writing a pattern that actually matches a literal backslash means
writing four backslashes in the statement.
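Concretely, a sketch of patterns that do match the literal backslash in the
sample table above (assuming the 8.3 default of
standard_conforming_strings = off):

```sql
-- LIKE: backslash is both the string-literal escape and the LIKE escape
-- character, so matching one literal backslash takes four of them.
select * from mytable where f1 like E'%\\\\%';

-- Regex: the pattern needs \\ to match a literal backslash, and each
-- backslash is doubled again inside the E'' literal.
select * from mytable where f1 ~ E'\\\\';
```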

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Join three fields into one on same table

2008-05-19 Thread Gurjeet Singh
On Tue, May 20, 2008 at 5:34 AM, Sam Mason <[EMAIL PROTECTED]> wrote:

> On Mon, May 19, 2008 at 03:36:39PM -0600, Scott Marlowe wrote:
> > On Mon, May 19, 2008 at 4:51 AM, jrivero <[EMAIL PROTECTED]> wrote:
> > > My problem is not that make update query.
> > >
> > >> update table set date=(select year || '-' || month || '-' || day || '
> 01:00:00' as newdate from table)
> >
> > Cast the output of those concatenations to date:
> >
> > update table set date=(select (year || '-' || month || '-' || day || '
> > 01:00:00')::date as newdate from table)
>
> I'd guess the OP doesn't want the sub-query, if he posted the error
> message we'd know for sure.  I'd guess something like:
>
>  update table set date=(year||'-'||month||'-'||day||'01:00:00')::date;


That'd be my guess too. The OP's subquery is possibly returning more than one
row, hence the error.
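For reference, the subquery-free form suggested above would look like this
(column names taken from the thread; the target table name is illustrative,
since "table" itself is a reserved word):

```sql
-- Assumes year, month and day are per-row columns of the same table.
UPDATE mytable SET date = (year || '-' || month || '-' || day)::date;
```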


>
> If the "date" column really is of date type, then the final
> "||'01:00:00'" is somewhat superfluous.
>


-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] pg_standby stuck on a wal file size <16MB

2008-05-20 Thread Gurjeet Singh
On Wed, May 21, 2008 at 4:14 AM, Vlad Kosilov <[EMAIL PROTECTED]> wrote:

> as Greg pointed out: my use of rsync --remove-sent-files option had
> contributed to a short sized wal log file on standby.
> changing master's postgres crontab to the following helped to resolve the
> issue:
>
> # ship logs to standby:
> */2 * * * * rsync -aq /wal_archive_local/ 10.10.10.12::wal_archive/
> # remove files older then remove_check file mtime
> */5 * * * * find /wal_archive_local/ ! -newer
> /wal_archive_local/remove_check -exec rm -f {} \; && touch
> /wal_archive_local/remove_check
>
>
Hmmm, nice trick.

-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Is this correct usage of generate_series?

2008-05-22 Thread Gurjeet Singh
On Thu, May 22, 2008 at 10:41 PM, Hiroaki Nakamura <[EMAIL PROTECTED]>
wrote:

> Hi, there.
>
> These three queries below works as I expect, which is wonderful, but are
> these correct usages?
> If these are intended features, I would like them to be documented at
> http://www.postgresql.org/docs/8.3/interactive/functions-srf.html
>
> => select generate_series(1, 3) as i;
>  i
> ---
>  1
>  2
>  3
> (3 rows)
>
> => select 'a' as a, generate_series(1, 3) as i;
>  a | i
> ---+---
>  a | 1
>  a | 2
>  a | 3
> (3 rows)
>
> => select 'a' as a, i from generate_series(1, 3) i;
>  a | i
> ---+---
>  a | 1
>  a | 2
>  a | 3
> (3 rows)
>
> Here is an example using this technique, which shows column positions and
> names in
> the specified index.
>
> select attnum, attname
> from pg_catalog.pg_attribute a
> join (
> select
> indrelid, indkey[i] as pos
> from (
> select
> indrelid, indkey, generate_series(lb, ub) as i
> from (
> select indrelid, indkey, array_lower(indkey, 1) as lb,
> array_upper(indkey, 1) as ub
> from pg_catalog.pg_index
> where indexrelid = (
> select oid
> from pg_catalog.pg_class
> where relnamespace = (select oid from
> pg_catalog.pg_namespace where nspname = 'public')
> and relkind = 'i'
> and relname = ''
> )
> ) x
> ) y
> ) z on a.attrelid = z.indrelid and a.attnum = z.pos;
>
>
>
All three of these are correct usages. This is an SRF (Set Returning
Function), and Postgres allows you to use an SRF in the SELECT list. People
have been using tricks similar to what you showed above, and it works great.

I have seen discussions in the past on the -hackers mailing list about
deprecating the use of SRFs in the select list, but honestly, I don't see
this usage being deprecated anytime soon; it's pretty useful, and I would
call it an extension to the SQL language. As long as it's not buggy and
doesn't surprise people in a wrong way, I think such usage will remain.

Best regards,

-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] syntax error with execute

2008-05-30 Thread Gurjeet Singh
On Fri, May 30, 2008 at 9:25 PM, A B <[EMAIL PROTECTED]> wrote:

> I have a query like this in a plpgsql function:
>
> EXECUTE 'INSERT INTO '||tablename||' ('||fields||') VALUES
> ('||vals||') RETURNING currval('''||seqname||''') INTO newid'
>
> and I get the response:
>
> ERROR:  syntax error at or near "INTO"
> LINE 1: ...','2008','4',NULL) RETURNING currval('id_seq') INTO newid
>
> And I do not understand this error. If I take the INSERT command and
> run it by hand, it works fine, but it doesn't work in the function
> when called by execute. Anybody has an idea on what is wrong and what
> to do about it?
>

The final INTO clause should be outside the string, like this:

EXECUTE 'INSERT INTO '||tablename||' ('||fields||') VALUES
('||vals||') RETURNING currval('''||seqname||''')' INTO newid

Note the placement of the last quote.

HTH,

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] does postgresql works on distributed systems?

2008-06-04 Thread Gurjeet Singh
On Wed, Jun 4, 2008 at 11:34 AM, Volkan YAZICI <[EMAIL PROTECTED]> wrote:

> I'm planning to make a survey regarding PostgreSQL performance on
> OpenSSI. There are some obstacles mostly caused by shared-memory
> architecture of PostgreSQL, but that claim is -- AFAIK -- totally
> theoratical. There aren't any benchmarks done yet that explains
> shared-memory bottlenecks of PostgreSQL on an OpenSSI framework. If
> anybody have experience with PostgreSQL on OpenSSI, I'll be happy to
> hear them. (Yeah, there were some related posts in the past; but they
> were mostly noise.)


If you search for  OpenSSI Postgres, you hit this link:

http://wiki.openssi.org/go/PostgreSQL_on_OpenSSI_enabled_Knoppix

I did this setup and ran a small test on it about a year ago. The
performance was horrible. I cannot say for sure, but I think, as the OpenSSI
FAQ mentions, it's because of the way Postgres works with shared memory.

I hope you find that article useful in starting your own experiment. Let's
hope there's been some improvement since last year. Do let us all know the
results.

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Annoying messages when copy sql code to psql terminal

2008-06-05 Thread Gurjeet Singh
On Fri, Jun 6, 2008 at 7:58 AM, Merlin Moncure <[EMAIL PROTECTED]> wrote:

> On Tue, May 27, 2008 at 9:24 AM, A B <[EMAIL PROTECTED]> wrote:
> > Whenever I use copy-paste to run code in a terminal window that is
> > running psql, and the code contains a row like
> >
> > IF FOUND THEN
> >
> > then I get the words
> >
> > ABORTCHECKPOINT   COMMIT   DECLARE  EXECUTE
>
> [...]
>
> As others have noted, you have tabs in your sql source.  I'd advise if
> possible, not to use the tab character in sql, for this and other
> reasons.


Can you please elaborate on the other reasons? I have never encountered any
_other_ reason!

And 'using psql' is not reason enough to not indent your SQL code.

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] when to reindex?

2008-06-08 Thread Gurjeet Singh
On Sat, Jun 7, 2008 at 8:07 PM, Jeremy Harris <[EMAIL PROTECTED]> wrote:

> Gregory Stark wrote:
>
>>  REINDEX scans the table
>> precisely once and sorts it.
>>
>
> For the bloat, as opposed to corruption, case -
> what information is needed from the table that
> is not in the old index?  Why would a sequential
> read of the old index alone (then some processing)
> not suffice?


In Postgres, an index does not hold the liveness information of the rows it
points to; that is, there may be a thousand row-pointers in the index, but
not all the rows pointed to by those pointers are known to be live. This is
an implication of MVCC in Postgres.

So every index lookup has to look at the corresponding heap (aka table) row
and decide if that row should be visible to the querying session.

Best regards,

-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Clustering with minimal locking

2008-06-18 Thread Gurjeet Singh
On Wed, Jun 18, 2008 at 9:26 AM, Decibel! <[EMAIL PROTECTED]> wrote:

> On Jun 17, 2008, at 11:37 AM, Scott Ribe wrote:
>
>> BOOM! Deadlock.
>>>
>>
>> No more likely than with the current cluster command. Acquiring the lock
>> is
>> the same risk; but it is held for much less time.
>>
>
>
> Actually, no (at least in 8.2). CLUSTER grabs an exclusive lock before it
> does any work meaning that it can't deadlock by itself. Of course you could
> always do something like
>
> BEGIN;
> SELECT * FROM a;
> CLUSTER .. ON a;
> COMMIT;
>
> Which does introduce the risk of a deadlock


Really!? Am I missing something? How can a single transaction, running
synchronous commands, deadlock itself?

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


[GENERAL] Query to find Foreign Key column data type mismatch

2009-05-31 Thread Gurjeet Singh
Hi All,

    I wanted to find out whether both ends of a foreign key reference use
the same data type. In older versions, mismatched data types could lead to
Postgres not picking the appropriate index; in newer versions, matching
types are still beneficial, since they avoid going through conversion
functions for comparisons, etc.

    So I devised this query and the helper function for the purpose. Just
create the function and then run the query. The query checks all the
foreign keys in the database.
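(The attached script is not reproduced here, but a sketch of the same idea,
using only the system catalogs, might look like this:)

```sql
-- List foreign-key columns whose type differs from the referenced column's
-- type. generate_subscripts() walks the conkey/confkey arrays in parallel.
SELECT con.conname              AS constraint_name,
       con.conrelid::regclass   AS referencing_table,
       fa.attname               AS referencing_column,
       fa.atttypid::regtype     AS referencing_type,
       con.confrelid::regclass  AS referenced_table,
       pa.attname               AS referenced_column,
       pa.atttypid::regtype     AS referenced_type
FROM pg_constraint con
CROSS JOIN generate_subscripts(con.conkey, 1) AS i
JOIN pg_attribute fa ON fa.attrelid = con.conrelid  AND fa.attnum = con.conkey[i]
JOIN pg_attribute pa ON pa.attrelid = con.confrelid AND pa.attnum = con.confkey[i]
WHERE con.contype = 'f'
  AND fa.atttypid <> pa.atttypid;
```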

Feedback/comments welcome.

Best regards,
-- 
Lets call it Postgres

EnterpriseDB  http://www.enterprisedb.com

gurjeet[.sin...@enterprisedb.com
singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com
Mail sent from my BlackLaptop device


Postgres_find_FKey_data_type_mismatch.sql
Description: Binary data

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Division by zero

2009-06-03 Thread Gurjeet Singh
On Wed, Jun 3, 2009 at 9:18 PM, Oliver Kohll - Mailing Lists <
oliver.li...@gtwm.co.uk> wrote:

> Hello,
>
> We have a system that allows users to create views containing calculations
> but divisions by zero are commonly a problem.
>
> An simple example calculation in SQL would be
>
> SELECT cost / pack_size AS unit_cost from products;
>
> Either variable could be null or zero.
>
> I don't think there's a way of returning null or infinity for divisions by
> zero, rather than causing an error but I'd just like to check - and put in a
> vote for that functionality!
>
> If not, I will have to get the system to check for any numeric fields in
> user input calculations and rewrite them similar to
>
> CASE WHEN cost IS NULL THEN null
> WHEN pack_size IS NULL THEN null
> WHEN cost = 0 THEN null
> WHEN pack_size = 0 THEN null
> ELSE cost / pack_size
> AS unit_cost
>
> I don't want to write new functions, I'd rather keep it in plain SQL.
>
>
Putting that in a function is definitely going to be expensive.

You need to take care of only one case here: denominator = 0; the rest of the
cases will be handled sanely by the database.

CASE WHEN packet_size = 0 THEN null ELSE cost/packet_size END as unit_cost
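An equivalent, shorter form uses NULLIF, which turns a zero denominator into
NULL so the division itself yields NULL:

```sql
SELECT cost / NULLIF(pack_size, 0) AS unit_cost FROM products;
```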

Best regards,

-- 
Lets call it Postgres

EnterpriseDB  http://www.enterprisedb.com

gurjeet[.sin...@enterprisedb.com
singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com
Mail sent from my BlackLaptop device


Re: [GENERAL] Can postgresql store its data on raw device now?

2009-06-09 Thread Gurjeet Singh
It still doesn't support raw devices.

On Tue, Jun 9, 2009 at 1:06 PM, Lizzy M  wrote:

> Hello,
>
>I have an problem: can postgresql store its data on the raw disks
> now?
>
>I have checked the mail list and manual, but haven’t found the
> answer. In some early mails, they mentioned pg didn’t support this
> character. But how about now? Raw disk may reduce the risks brought by
> file system, so I want to use it to store my data.
>
>Thanks a lot.
>
> Yours Sincerely,
> Liz
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
Lets call it Postgres

EnterpriseDB  http://www.enterprisedb.com

gurjeet[.sin...@enterprisedb.com
singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com
Mail sent from my BlackLaptop device


Re: [GENERAL] Rounding incompatibility

2009-06-15 Thread Gurjeet Singh
On Mon, Jun 15, 2009 at 4:28 PM, Havasvölgyi Ottó <
havasvolgyi.o...@gmail.com> wrote:

> Hi,
>
> I have found the following strangeness on Windows versions:
>
> create table round_test (id int primary key, value double precision);
> insert into round_test(id, value) values(1, 1.5);
> insert into round_test(id, value) values(2, -1.5);
> insert into round_test(id, value) values(3, 3.5);
> select round(value) from round_test;
>
> psql 8.2.13 returns
>
> 2
> -2
> 4
>
> But psql 8.3.3 returns
>
> 1
> -1
> 3
>
> Trying more values it seems that 8.2 rounding works according to banker's
> rounding rules.
> Can you confirm this?
> How can I avoid this incompatibility or perhaps bug?
>
>
Posting the output of 'select version()' from both databases will help
determine whether a particular build or distribution is to blame.

Best regards,
-- 
Lets call it Postgres

EnterpriseDB  http://www.enterprisedb.com

gurjeet[.sin...@enterprisedb.com
singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com
Mail sent from my BlackLaptop device


Re: [GENERAL] pl/sql resources for pl/pgsql?

2009-06-19 Thread Gurjeet Singh
On Sat, Jun 20, 2009 at 10:52 AM, Jasen Betts  wrote:

> On 2009-06-16, gvimrc  wrote:
> > I'm fairly new to PostgreSQL and completely new to using pl/pgsql
> > though I've used MySQL's procedural language a little.
> > I heard pl/pgsql is similar to Oracle's pl/sql so would it be possible,
> > given that pl/pgsql literature is a bit thin on the ground, to use books
> > on pl/sql for developing pl/pgsql code?
>
> For inspiration perhaps, not as a textbook.
> differences from oracle:
> http://www.postgresql.org/docs/8.3/interactive/plpgsql-porting.html
>
> the postgresql manual chapter on pl-pgsql (and the preceeding chapters)
> contain all that you really need.
>
> pl-pgsql chapter:
> http://www.postgresql.org/docs/8.3/interactive/plpgsql.html
> full manual:
> http://www.postgresql.org/docs/8.3/interactive/index.html
>
> plpgsql is much like any other procedural language
> only you can embed SQL queries very very easily
>
> some hints you may find useful:
>
> The syntax checker doesn't always give useful advice when it rejects
> your code so develop a habit of step-wise testing.
>
> if you say "IF" and forget to say "THEN" it will lead a confusing
> error message.
>
> "ELSIF" (and "ELSEIF") are different to "ELSE IF" in the number of
> "END IF"s youll need to use later.
>
> -- sql comments and
> /* c-style
> comments */ can both be used.
>
>
And a major one is that it cannot detect semantic errors (like a missing
table, a wrong column name, or a wrong expression assignment) until you
execute the function. This implies that if you have branches in the code,
say IF .. THEN .. ELSE .. END IF, you will not see errors from a branch until
that branch of code is executed.
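A minimal sketch of that behavior (table and function names are made up):

```sql
-- The reference to missing_table is not checked at CREATE time; it only
-- fails when the ELSE branch actually runs.
CREATE OR REPLACE FUNCTION branch_demo(flag boolean) RETURNS int AS $$
DECLARE
    n int;
BEGIN
    IF flag THEN
        n := 1;
    ELSE
        SELECT count(*) INTO n FROM missing_table;
    END IF;
    RETURN n;
END;
$$ LANGUAGE plpgsql;

-- SELECT branch_demo(true);   -- works
-- SELECT branch_demo(false);  -- ERROR: relation "missing_table" does not exist
```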

Best regards,
-- 
Lets call it Postgres

EnterpriseDB  http://www.enterprisedb.com

gurjeet[.sin...@enterprisedb.com
singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com
Mail sent from my BlackLaptop device


Re: [GENERAL] Start automatically a function at startup of postgresql server

2009-07-04 Thread Gurjeet Singh
On Sat, Jul 4, 2009 at 2:21 PM, d...@dfx.it  wrote:

> Dear Sirs,
>
> Is it possible to start a function (a my function) when the
> server postgres is started? and how I can do?
>
>
No, there's no way to do that within Postgres itself. You can, however, add
something to your startup scripts that connects to the DB immediately after
starting it and calls the function you need.
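A sketch of such a startup-script addition (paths, database and function
names are assumptions):

```shell
# Start the server and wait (-w) until it accepts connections,
# then invoke the function once.
pg_ctl -D /path/to/data -w start
psql -d mydb -c 'SELECT my_startup_function();'
```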

Best regards,
-- 
Lets call it Postgres

EnterpriseDB  http://www.enterprisedb.com

gurjeet[.sin...@enterprisedb.com
singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com
Mail sent from my BlackLaptop device


Re: [GENERAL] Logging statements longer than 1000ms doesn't appear to work

2009-09-22 Thread Gurjeet Singh
Sometimes, your current postgresql.conf might not be in sync with server
settings, for various reasons. I'd suggest looking at the output of:

select name, setting, source from pg_settings where name like E'log\\_%';

Best regards,

On Wed, Sep 23, 2009 at 3:13 AM, Chris Barnes <
compuguruchrisbar...@hotmail.com> wrote:

>  Hello,
>
> I looked for log_statement and it appears to be off? Strange.
>
> #log_statement = 'none'
> #log_statement_stats = off
>
> > To: compuguruchrisbar...@hotmail.com
> > CC: pgsql-general@postgresql.org
> > Subject: Re: [GENERAL] Logging statements longer than 1000ms doesn't
> appear to work
> > Date: Tue, 22 Sep 2009 15:08:39 -0400
> > From: t...@sss.pgh.pa.us
> >
> > Chris Barnes  writes:
> > > I've have set the parameter in my postgresql.conf file and have
> restarted postgres.
> >
> > > When reviewing the log file I am finding that all of the statements are
> being logged (0.108 ms)?
> >
> > Perhaps you also set log_statement = all, or some other reason that
> > would cause them to be logged?
> >
> > regards, tom lane
>
> --
> We are your photos. Share us now with Windows Live 
> Photos.
>



-- 
Lets call it Postgres

EnterpriseDB  http://www.enterprisedb.com

gurjeet[.sin...@enterprisedb.com

singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com
Twitter: singh_gurjeet
Skype: singh_gurjeet

Mail sent from my BlackLaptop device


Re: [GENERAL] Using complex PRIMARY KEY

2009-10-08 Thread Gurjeet Singh
2009/10/8 Zsolt 

>
> This is our first project using PostgerSQL, where I have a problem I cant
> solve on a neat way (I assume PGSQL should provide a nice solution...).
>
> So we have an old xBase based program we are trying to port to PostgreSQL
> while we should keep the original data structure especially the ID fields
> must be kept as this IDs are already used in other systems.
>
> The problem is with two table, one is storing the data of houses the other
> the data of tenants in a given houses.
>
>
>
>
>
> Something like this:
>
> CREATE TABLE house (
>
> house_id SERIAL,
>
> .
>
> CONSTRAINT pk_house_id PRIMARY KEY(house_id)
>
>
>
> ) WITHOUT OIDS;
>
>
>
>
>
> CREATE TABLE tenant (
>
> tenant_id SERIAL,
>
> house_id INTEGER REFERENCES house(house_id),
>
> .
>
> CONSTRAINT pk_tenant_house_id PRIMARY KEY(tenant_id, house_id)
>
>
>
> ) WITHOUT OIDS;
>
>
>
> For a given house I would like to start the numbering of tenants from 1.
> Each house could have tenant_ID=1, obviously in this case the house_ID will
> differ. The combination of tenant_ID and house_ID will be the unique
> identifier of each tenant.
>
>
>
> I'm just looking for the best solution to insert new rows into the tenant
> database without worrying about keeping the above mentioned logic in mind.
> Should I create a stored procedure to add a new tenant and this will
> calculate the new house_id+tenant_id combination (only the house_id would be
> passed to the stored procedure, the tenat_id will be calculated by the sp).
> In this case how can I avoid that two concurrent user would try to add
> records in the same time without getting an exception due to violating the
> pk_tenant_house_id constraint? Or should I add a  new field in the house
> table storing the last issued tenant_id in the given house (max_tenant_id)
> which will be used while adding a new record to tenant, and will be updated
> by a trigger on the tenant table? Or am I thinking on a wrong way and there
> is a better mechanism provided by PostgreSQL for this problem?
>
>
>
> Any other suggestions would be greatly appreciated.
>

For the logic you mentioned, do not use SERIAL for tenant_id, use
integer/bigint instead.

You'll have to combine two techniques here, probably in a stored procedure.

1) Lock the tenant table for a short duration, possibly using the LOCK command.

2) Get the next number in incremental order, like:

insert into tenant( tenant_id, house_id, ... )
values( (select max(tenant_id)+1 from tenant where house_id = H_id_parameter),
H_id_parameter, ... );

where H_id_parameter is the house_id value your application is trying to
operate on.
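Combined, a hedged sketch of such a stored procedure (all names are
illustrative):

```sql
CREATE OR REPLACE FUNCTION add_tenant(p_house_id integer) RETURNS integer AS $$
DECLARE
    new_id integer;
BEGIN
    -- Serialize concurrent inserts so two sessions can't compute the
    -- same max(tenant_id)+1; the lock is released at transaction end.
    LOCK TABLE tenant IN EXCLUSIVE MODE;

    SELECT coalesce(max(tenant_id), 0) + 1 INTO new_id
      FROM tenant
     WHERE house_id = p_house_id;

    INSERT INTO tenant (tenant_id, house_id) VALUES (new_id, p_house_id);
    RETURN new_id;
END;
$$ LANGUAGE plpgsql;
```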

Best regards,

-- 
Lets call it Postgres

EnterpriseDB  http://www.enterprisedb.com

gurjeet[.sin...@enterprisedb.com

singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com
Twitter: singh_gurjeet
Skype: singh_gurjeet

Mail sent from my BlackLaptop device


[GENERAL] A few user-level questions on Streaming Replication and pg_upgrade

2011-07-19 Thread Gurjeet Singh
Hi,

    Here are a few questions asked by a customer who is trying to assess
the pros and cons of using Postgres and its SR feature. I would like to get
the opinion of someone more involved with the community than me.
.) Will Postgres support Streaming Replication from 9.0.x to 9.1.x; i.e.
across major releases.

I am pretty sure the answer is "no, it won't", but just double-checking
with the community.

.) Is Streaming Replication supported across minor releases, in reverse
direction; e.g. 9.0.3 to 9.0.1

I think the answer is "it depends", since it would depend upon whether
any SR related bug has been fixed in the 'greater' of the minor releases.

I am assuming that smaller minor release to bigger minor release will
always be supported (e.g. 9.0.1 to 9.0.3)

.) How reliable is `pg_upgrade -c` (dry run) currently; that is, how
accurate is pg_upgrade at predicting any potential problem with the eventual
in-place upgrade.

I'd say it is as reliable as it gets since this is the official tool
supported by the project, and it should not contain any known bugs. One has
to use the latest and greatest 'minor' version of the tool for the major
release they are upgrading to, though.

    I'd also like to mention a piece of information that may be surprising
to some. Per Tom at a PGCon dinner, the Postgres project does not promise a
continued guarantee of in-place upgrades across future major releases. The
project will try hard to avoid making any changes that affect the in-place
upgrade capability, but if a case can be made that a feature would give a
significant improvement at the cost of compromising this capability, then
the in-place upgrade capability may be forgone for that release.

Thanks in advance,
-- 
Gurjeet Singh
EnterpriseDB Corporation
The Enterprise PostgreSQL Company


Re: [GENERAL] Primary key vs unique index

2012-07-06 Thread Gurjeet Singh
Bloat in primary key indexes has been a long standing issue (although not
faced by many), and especially since online rebuild of primary keys was
never possible in production environments.

Since version 9.1 we have a nice little feature of being able to change a
primary key's underlying index. Look at the 'table_constraint_using_index'
clause in ALTER TABLE docs [1]. And example in the same doc specifically
shows how to solve the problem in just two commands:


To recreate a primary key constraint, without blocking updates while the
index is rebuilt:

CREATE UNIQUE INDEX CONCURRENTLY dist_id_temp_idx ON distributors (dist_id);

ALTER TABLE distributors DROP CONSTRAINT distributors_pkey,
ADD CONSTRAINT distributors_pkey PRIMARY KEY USING INDEX
dist_id_temp_idx;


[1] http://www.postgresql.org/docs/9.1/static/sql-altertable.html

Best regards,

PS: Shameless plug: I am credited for this feature :)

http://www.postgresql.org/docs/9.1/static/release-9-1.html#AEN107778

On Fri, Jul 6, 2012 at 12:07 PM, rverghese  wrote:

> We are experiencing a similar problem, even though we are on 8.4 and have
> been for a while, and have autovacuum turned on. I have regular concurrent
> reindexes on the indexes but the primary key is seriously bloated. I was
> considering doing the same thing, that is, create another primary key that
> is built on a sequence ( primarily for slony) and then change my current
> multi-column primary key to a unique index. Have you been able to work
> around the problem in any other way?
>
> Thanks
> RV
>
> --
> View this message in context:
> http://postgresql.1045698.n5.nabble.com/Primary-key-vs-unique-index-tp3883778p5715712.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
Gurjeet Singh
EnterpriseDB Corporation
The Enterprise PostgreSQL Company


Re: [GENERAL] Primary key vs unique index

2012-07-06 Thread Gurjeet Singh
On Fri, Jul 6, 2012 at 2:07 PM, rverghese  wrote:

> Yes I am using that option for one of my POstgres 9.1 database and it works
> well. But its still an issue with Foreign keys, which you need to drop and
> recreate .


Having to drop and re-create foreign keys is a legitimate concern. I am
looking into improving that.


> Also I use Slony for replication and it uses the primary key to
> check repl. So I don't want that to be interrupted by dropping PK and
> recreating PK.
>

If you look closely at that example, the DROP and CREATE of the primary key
are done in one command (and hence one transaction), so anything that depends
on this constraint should not be affected, except that the table will be
locked in exclusive mode for the duration of the operation, which should be
very short.

Best regards,
-- 
Gurjeet Singh
EnterpriseDB Corporation
The Enterprise PostgreSQL Company


Re: [GENERAL] two-column primary key (not the typical question)

2012-07-06 Thread Gurjeet Singh
On Fri, Jul 6, 2012 at 10:00 PM, Chip Nowacek  wrote:

> I need a suggestion. I need a two-column primary key that does not depend
> on the order of the entries. That is, for the purposes of the key:
>
> PKColA PKColB
> foobar
> barfoo
>
> is not valid.
>

I don't think it's possible using PKeys. It can be done with a unique
expression index combined with NOT NULL constraints.

Here's a working example:

postgres=# create table test3( a varchar, b varchar);
CREATE TABLE
postgres=# create unique index on test3 ((case when a < b then a || b else
b || a end));
CREATE INDEX
postgres=# alter table test3 alter a set not null, alter b set not null;
ALTER TABLE


postgres=# insert into test3 values('foo', 'bar');
INSERT 0 1
postgres=# insert into test3 values('foo', 'bar');
ERROR:  duplicate key value violates unique constraint "test3_case_idx"
DETAIL:  Key ((
CASE
WHEN a::text < b::text THEN a::text || b::text
ELSE b::text || a::text
END))=(barfoo) already exists.
postgres=# insert into test3 values('bar', 'foo');
ERROR:  duplicate key value violates unique constraint "test3_case_idx"
DETAIL:  Key ((
CASE
WHEN a::text < b::text THEN a::text || b::text
ELSE b::text || a::text
END))=(barfoo) already exists.
postgres=#
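Note that concatenation-based uniqueness can produce false collisions, e.g.
('ab','c') and ('a','bc') both concatenate to 'abc'. A variant that avoids
this indexes the ordered pair directly (a sketch on the same table):

```sql
-- least()/greatest() normalize the column order, so ('foo','bar') and
-- ('bar','foo') map to the same index entry without concatenating.
CREATE UNIQUE INDEX ON test3 (least(a, b), greatest(a, b));
```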

Best regards,
-- 
Gurjeet Singh
EnterpriseDB Corporation
The Enterprise PostgreSQL Company


Re: [GENERAL] Suboptimal query plan fixed by replacing OR with UNION

2012-07-06 Thread Gurjeet Singh
On Thu, Jul 5, 2012 at 7:16 PM, Steven Schlansker wrote:

>
> On Jul 5, 2012, at 3:51 PM, Tom Lane wrote:
>
> > Steven Schlansker  writes:
> >> Why is using an OR so awful here?
> >
> > Because the OR stops it from being a join (it possibly needs to return
> > some rows that are not in the semijoin of the two tables).
> >
> >> Why does it pick a sequential scan?  Is this an optimizer bug
> >
> > No.  It can't transform OR into a UNION because the results might not
> > be the same.  I assume you don't care about removal of duplicates, or
> > have some reason to know that there won't be any ... but the planner
> > doesn't know that.
> >
>
> Thanks for the insight here.  It still seems unfortunate that it picks a
> sequential scan -- but if there really is no more efficient way to do this,
> I will just rewrite the query.
>

It might not be applicable to this case (because of the use of ANY in the
second branch of the OR clause), but some databases provide a feature called
OR-optimization, where the optimizer breaks the query up at OR-clause
boundaries and joins the resulting queries with a UNION ALL operator, just
like you did. The optimizer does need to add additional AND clauses to some
of the branches to make sure the result set is not affected.

Just a thought.
-- 
Gurjeet Singh
EnterpriseDB Corporation
The Enterprise PostgreSQL Company


[GENERAL] Getting random rows from a table

2012-08-31 Thread Gurjeet Singh
This email has been sitting in my drafts folder since Sept 20th, 2008.
Almost 4 years! Getting it off my chest now. I am attaching 2 files for
this solution I developed, because I don't know which is the correct one
(probably both are, with something different in implementation), but I
don't have time or energy to verify that now. I am pretty sure the
randomization works, although it is a tad bit expensive to get random rows.

Although the procedure's parameter names are pretty self descriptive, I'll
explain them in brief here:

p_schemaname   : name of the schema where the table resides
p_tablename: name of the table you want to get random rows from
p_columns  : column list (AFAIR, these can expressions too)
p_where: the WHERE clause you wish to apply, if any.
p_numrows  : how many rows you want in the result.
p_maxretries   : how many times to retry when we can't find a row, before
giving up; null implies 'retry forever'


Hi All,

    For one of my small experiments (which is obviously backed by PG), I
needed to get a set of random rows from a table. Googling around gave me
some pointers, but they were all either inconvenient (they required adding a
column) or not performant enough (sorts on huge result sets!); some had both
problems. In my test table of about 90 MB containing 1 million rows, these
solutions clearly did not perform well!

    One of the solutions I thought of, and which worked for me, was using
the 'Synchronized Sequential Scans' feature of 8.3. You make one of your
connections do sequential scans on the target table in a loop (just do a
count(*) on that table in a loop). And when you want to select, say, 5 rows
from that table, you just fire 'SELECT * FROM mytab LIMIT 5'. Depending on
where the other constantly-looping sequential scan is, you will get 5 rows
from a random location in your table.

    The problem with this approach is that you will always get the first
rows from whichever database block you hit. So, in effect, you will almost
never see the rows that lie at the end of the blocks (unless your LIMIT is
high enough, or all the rows before that row are dead).

    So I developed another solution, which might work for many cases; and
in cases where it doesn't, the code can easily be extended/modified to suit
any query type.
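(As an aside for today's readers: PostgreSQL 9.5 and later ship a built-in
TABLESAMPLE clause that covers many of these cases cheaply:)

```sql
-- BERNOULLI samples individual rows at the given percentage;
-- SYSTEM samples whole blocks and is cheaper but lumpier.
SELECT * FROM mytab TABLESAMPLE BERNOULLI (1) LIMIT 5;
```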

Attached is the file containing the definition of plpgsql function
get_random_rows(), using which we can get a specified number of random
rows. This function returns truly random rows from the mentioned table.
Here are two invocations of this function on a test table:

postgres=> explain analyze select * from get_random_rows( null, 'url',
'{url}', 100, null ) as ( a varchar );
NOTICE:  Number of misses: 17
QUERY PLAN
--
 Function Scan on get_random_rows  (cost=0.00..260.00 rows=1000 width=32)
(actual time=193.790..194.269 rows=100 loops=1)
 Total runtime: 195.017 ms
(2 rows)

postgres=> explain analyze select * from get_random_rows( null, 'url',
'{url}', 100, null ) as ( a varchar );
NOTICE:  Number of misses: 30
QUERY PLAN
--
 Function Scan on get_random_rows  (cost=0.00..260.00 rows=1000 width=32)
(actual time=246.101..246.714 rows=100 loops=1)
 Total runtime: 247.452 ms
(2 rows)

postgres=>

-- 
Gurjeet Singh


get_random_rows.sql
Description: Binary data


get_random_rows_seq.sql
Description: Binary data

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why some GUC parameter names are not lowercased

2012-10-30 Thread Gurjeet Singh
On Tue, Oct 30, 2012 at 12:11 PM, Tom Lane  wrote:

> Gurjeet Singh  writes:
> > Is there a reason why Postgres chose to not use all lowercase characters
> > for these parameters' names.
> >  DateStyle
> >  IntervalStyle
> >  TimeZone
>
> It's historical, for sure.  I think we've discussed changing them and
> decided it would be more likely to break things than improve matters.
>
> In particular, modern style would probably be more like "date_style"
> etc, but we definitely could not insert underscores without breaking
> applications all over the place.  So the best we could do is just
> smash to lowercase, eg "datestyle", which isn't really a readability
> improvement.  And there would still be some risk of breaking
> applications that are expecting these names to print a particular way.
>

Can we develop aliases for these parameters that adhere to the standard?
In the next release we can mark the old ones as deprecated, and a few releases
down the line completely remove the non-standard names and rest easy.

Would be happy to contribute such a patch. I think it'd be trivial.
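
For what it's worth, GUC names are already matched case-insensitively on
input, so aliases would only change the canonical spelling that SHOW and
pg_settings report back:

```sql
-- Both spellings are accepted today:
SET datestyle = 'ISO, MDY';
SET DateStyle = 'ISO, MDY';

-- Only the reported (canonical) name is mixed-case:
SELECT name FROM pg_settings WHERE lower(name) = 'datestyle';
```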

-- 
Gurjeet Singh

http://gurjeet.singh.im/


Re: [GENERAL] setting contrib lo visible to all schemas

2010-05-20 Thread Gurjeet Singh
On Thu, May 20, 2010 at 11:16 AM, Teddy Limousin wrote:

> thanks tom, apparently you're right: in the sql dump file there are some
> set search_path changes
> I tried the set search_path command and used the same query that generated
> the error, and it works in my pgadmin editor.
>
> My question is how can I set the search_path for all users or all
> connections
> so my application will find the lo type.
>


You can use ALTER DATABASE dbname SET search_path = ...,
or do it for an individual user with: ALTER USER uname SET search_path = ...
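
For example (hypothetical database, user, and schema names):

```sql
-- New connections to mydb pick this up automatically:
ALTER DATABASE mydb SET search_path = myschema, public;

-- Or per role:
ALTER USER appuser SET search_path = myschema, public;
```

Note that existing sessions keep their old setting until they reconnect or
run SET search_path themselves.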



>
> thanks
>
> 2010/5/20 Tom Lane 
>
> Teddy Limousin  writes:
>> > I guess this is because when executing the sql command to install lo, it
>> > makes it visible from the public schema only
>> > not from the schema I`m using. So my question is how can I make lo type
>> to
>> > be visible from all schemas
>>
>> > by visible I mean accesible: not need to use for example public.lo to
>> define
>> > a field of type lo
>>
>> It sounds like you removed the public schema from your search_path
>> setting.  Don't do that ...
>>
>>regards, tom lane
>>
>
>
>
> --
> Teddy Limousin
>



-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.enterprisedb.com

singh.gurj...@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


Re: [GENERAL] 9.0RC1 error variable not found in subplantarget lists

2010-09-26 Thread Gurjeet Singh
2010/9/26 Andrus 

>  Here's the patch if it helps.
>>
>
> Thank you.
> When will 9.1 be released?
> Is it possible to remove 9.0 from downloads so that this issue will not
> be propagated?
>
>
If committed, this fix will be available in 9.0.1.

Thanks for the bug report and the reproducible test case.

Regards,
-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurj...@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


Re: [GENERAL] 9.0RC1 error variable not found in subplantarget lists

2010-09-26 Thread Gurjeet Singh
2010/9/26 Andrus 

>  Here's the patch if it helps.
>>
>
> Thank you.
> When will 9.1 be released?
> Is it possible to remove 9.0 from downloads so that this issue will not
> be propagated?
>
>
>
If committed, this fix will be available in release 9.0.1. I cannot comment
on when it will be available though.

Thanks for the bug report and the reproducible test case.

Regards,
-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurj...@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


Re: [GENERAL] Checking for stale reads on hot standby

2010-09-26 Thread Gurjeet Singh
On Mon, Sep 27, 2010 at 1:51 AM, Yang Zhang  wrote:

> Say you have an application using PG asynchronous streaming
> replication to some hot standbys, to distribute the read load. The
> application itself is a typical web application consisting of multiple
> servers, serving a number of sessions (perhaps belonging to different
> users), and the workload is OLTP-ish, with each session continually
> issuing a bunch of transactions. To guarantee session timeline
> consistency for clients of the application, you want to make sure that
> they can read data that's at least as new as anything they've
> read/written previously, never traveling back in time.
>
> With asynchronous replication, after seeing a new version of the data
> from one standby, you may see an older version from a subsequent query
> to another standby. The question: what are some ways to provide this
> form of consistency in the context of PG asynchronous replication?
>
> Is the standard/recommended approach to use a sequence representing
> the global database version? Here, the application is responsible for
> incrementing this from update transactions. In read transactions,
> check that the sequence value is >= the session's highest-seen-value,
> and raise the latter if necessary.
>
>
See the nuggets hidden in section 25.2.5.2. "Monitoring" at
http://www.postgresql.org/docs/9.0/static/warm-standby.html#STREAMING-REPLICATION

After an UPDATE, your application can cache the info from
'pg_current_xlog_location()' result on the primary and then compare that
with the result of  'pg_last_xlog_receive_location()' on the standby to see
if it is seeing fresh enough data.
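
A sketch of that handshake (the location value shown is illustrative):

```sql
-- On the primary, in the same session, right after the write commits:
SELECT pg_current_xlog_location();          -- e.g. 0/3000148; cache this

-- On a standby, before serving the session's next read:
SELECT pg_last_xlog_receive_location();
-- Serve the read from this standby only once the value returned here
-- is >= the location cached from the primary.
```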

Regards,
-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurj...@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


Re: [GENERAL] Preserving order through an inner join

2010-09-27 Thread Gurjeet Singh
On Sun, Sep 26, 2010 at 9:37 PM, Tom Lane  wrote:

>
> The SQL standard explicitly disavows any particular
> output row order unless there is a top-level ORDER BY.  (In fact,
> unless things have changed recently an ORDER BY in a sub-select isn't
> even legal per spec.)
>
>
Not sure about the SQL spec allowing it, but an ORDER BY followed by a LIMIT
does have valid use cases in sub-selects.

Regards,
-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurj...@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


Re: [GENERAL] Copying a column into a temp. table

2010-10-01 Thread Gurjeet Singh
On Fri, Oct 1, 2010 at 10:02 AM, Adrian Klaver wrote:

> On Friday 01 October 2010 6:54:09 am Alexander Farber wrote:
> > How do you copy a column from table please?
> > I'm trying with PostgreSQL 8.1.21/CentOS 5.5:
>


> > select topic_id from phpbb_topics where forum_id=5 and topic_poster=1
> > and age(to_timestamp(topic_time))>interval '14 days' into old_topics;
> > psql:clean-forum.sql:6: ERROR:  syntax error at or near "into" at
> character
> > 124 psql:clean-forum.sql:6: LINE 1: ...
> > age(to_timestamp(topic_time))>interval '14 days' into old_t...
>


> > Of course "select ... into" doesn't work, but how should I do it?
> > Can't find in http://www.postgresql.org/docs/8.1/static/
> >
> > Thank you
> > Alex
>
> http://www.postgresql.org/docs/8.1/interactive/sql-insert.html
>

Short answer is
insert into my_temp_table select a,b from mytable;

Or as Gabriele said, CREATE TEMP TABLE mytable AS SELECT should also work.
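
Applied to the query in question, the CREATE TEMP TABLE ... AS form collapses
the create-then-insert into one step:

```sql
CREATE TEMP TABLE old_topics AS
SELECT topic_id
FROM phpbb_topics
WHERE forum_id = 5
  AND topic_poster = 1
  AND age(to_timestamp(topic_time)) > interval '14 days';
```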

Regards,
-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurj...@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


Re: [GENERAL] Copying a column into a temp. table

2010-10-01 Thread Gurjeet Singh
On Fri, Oct 1, 2010 at 10:52 AM, Alexander Farber <
alexander.far...@gmail.com> wrote:

> Thank you, I've created the following script which seems to work ok.
>
> I just hope, that it won't be hit by the max_stack_depth-problem
> which actually forced me to look for a custom solution -
>
>SQL ERROR [ postgres ]
>ERROR: stack depth limit exceeded HINT: Increase the configuration
>parameter "max_stack_depth". []
>
>SQL
>DELETE FROM phpbb_posts WHERE post_id IN (334767, ..skipped.., 382871)
>
>
>
> # psql -a -f clean-phpbb-forum.sql
> start transaction;
> START TRANSACTION
> create temp table old_topics (topic_id integer) on commit delete rows;
> CREATE TABLE
> create temp table old_posts (post_id integer) on commit delete rows;
> CREATE TABLE
> insert into old_topics select topic_id from phpbb_topics
> where forum_id=5 and topic_poster=1 and
> age(to_timestamp(topic_time))>interval '14 days';
> INSERT 0 14788
> -- select count(topic_id) as "old topics:" from old_topics;
> insert into old_posts select p.post_id from phpbb_posts p, old_topics t
> where p.forum_id=5 and p.poster_id=1 and p.topic_id=t.topic_id;
> INSERT 0 73718
> -- select count(post_id) as "old posts:" from old_posts;
> delete from phpbb_posts where post_id in (select post_id from old_posts);
> DELETE 73718
> delete from phpbb_topics where topic_id in (select topic_id from
> old_topics);
> DELETE 14788
> update phpbb_config set
> config_value = (select count(topic_id) from phpbb_topics)
> where config_name = 'num_topics';
> UPDATE 1
> update phpbb_config set
> config_value = (select count(post_id) from phpbb_posts)
> where config_name = 'num_posts';
> UPDATE 1
> update phpbb_users set
> user_posts = (select count(post_id) from phpbb_posts where poster_id=1)
> where user_id = 1;
> UPDATE 1
> update phpbb_forums set
> forum_posts = (select count(post_id) from phpbb_posts),
> forum_topics = (select count(topic_id) from phpbb_topics),
> forum_topics_real = (select count(topic_id) from phpbb_topics)
> where forum_id = 5;
> UPDATE 1
> commit
> COMMIT
>
>
Please stay on the list.

With temporary tables you shouldn't be hitting that limit. BTW, what
Postgres version are you using? And roughly how many elements does that IN
list have in the query that fails?

You could have also done:

   DELETE FROM phpbb_posts WHERE post_id IN (select xyz from mytable where
...)

assuming that the inner select would return the list 334767, ..skipped..,
382871, avoiding the temp tables altogether.

Regards,
-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurj...@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


Re: [GENERAL] error while autovacuuming

2010-10-03 Thread Gurjeet Singh
On Fri, Oct 1, 2010 at 9:16 PM, Craig Ringer wrote:

> On 2/10/2010 5:44 AM, Scott Marlowe wrote:
>
>> On Fri, Oct 1, 2010 at 2:06 PM, tamanna madaan
>>   wrote:
>>
>>> Hi All
>>>
>>> I am planning to upgrade  postgres to avoid this autovacuum problem.
>>>
>>>  From the release notes of postgres-8.1.x it seems this issue
>>> has been solved in postgres-8.1.6 and hence upgrading
>>> to latest 8.1 release i.e 8.1.21 will solve this issue.
>>>
>>> But I was just wondering if postgres-8.4 will also be having this fix
>>> (as I couldn't find it in release notes of postgres-8.4).
>>>
>>
>> 8.4 has much better autovacuuming, seeing as it's multithreaded and
>> you can have a lot more control over how it behaves.  This fix is
>> definitely in 8.4 and 8.3, or should I say the bug you hit was long
>> gone by the time the 8.4 branch was cut.
>>
>
> OP: Do be aware that if you jump straight to 8.4, there are some
> incompatibilities that may require changes to your code. The one people seem
> to be bitten by most frequently is the removal of implicit casts to text:
>
> http://www.postgresql.org/docs/current/static/release-8-3.html
>
> ... but, in general, you should read the release notes for each major
> version and see if you need to change anything to achieve a smooth upgrade.
>

I think, from the beginning, the OP has been unaware that upgrading to the
latest minor release does not require a huge effort, and that it involves
very low risk of breaking any applications, Slony or otherwise.

The OP should have been pointed to our versioning policy at
http://www.postgresql.org/support/versioning . That would have kept this
thread very short.

Regards,
-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurj...@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


[GENERAL] Confusion about ident sameuser

2008-07-02 Thread Gurjeet Singh
Hi All,

I am confused with the 'ident sameuser' authentication.  Here's my
setup:

[EMAIL PROTECTED] ~]$ grep -v "^#" /pg_hba.conf
local   all all   ident sameuser
hostall all 127.0.0.1/32  ident sameuser
hostall all 10.0.0.0/8password

[EMAIL PROTECTED] ~]$ pg_ctl -D  reload

[EMAIL PROTECTED] ~]$ psql -h 10.10.0.27 -p 6543 -d postgres -U postgres
Password for user postgres:

[EMAIL PROTECTED] ~]$ psql -p 6543 -d postgres -U postgres
Welcome to psql 8.1.11, the PostgreSQL interactive terminal.

[EMAIL PROTECTED] ~]$ psql -h localhost -p 6543 -d postgres -U postgres
psql: FATAL:  Ident authentication failed for user "postgres"

As you can see, if I use the machine's interface or Unix sockets, it either
asks for a password or lets me in. But when I use localhost, it correctly
matches the 127.0.0.1 line for authentication, yet does not let me in!

Is this not supposed to work? Could it be because I don't have an ident
server running on my box? How do I determine whether an ident server is
running?

Thanks in advance,

-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Index question regarding numeric operators

2008-10-01 Thread Gurjeet Singh
On Wed, Oct 1, 2008 at 1:57 PM, Mike Christensen <[EMAIL PROTECTED]> wrote:

> Hi guys, I'm very new to PostgreSQL so please excuse me if this is an easy
> question..
>
> I have a table called Recipes which has a column called CookTime.  I have
> an index on the CookTime column as such:
>
> CREATE INDEX idx_recipes_cooktime
>  ON recipes
>  USING btree
>  (cooktime);
>
> If I run the following query:
>
> select * from recipes where cooktime = 30;
>
> I get the following execution plan:
>
> "Bitmap Heap Scan on recipes  (cost=260.53..22533.22 rows=10870 width=1029)
> (actual time=6.881..281.442 rows=10915 loops=1)"
> "  Recheck Cond: (cooktime = 30)"
> "  ->  Bitmap Index Scan on idx_recipes_cooktime  (cost=0.00..257.82
> rows=10870 width=0) (actual time=4.490..4.490 rows=12568 loops=1)"
> "Index Cond: (cooktime = 30)"
> "Total runtime: 333.061 ms"
>
> As you can see, the index above is being used and the query is very fast.
>
> However, when I change the query to:
>
> select * from recipes where cooktime > 30;
>
> I get the following execution plan:
>
> "Seq Scan on recipes  (cost=0.00..35090.00 rows=187500 width=1029) (actual
> time=0.943..997.372 rows=184740 loops=1)"
> "  Filter: (cooktime > 30)"
> "Total runtime: 1507.961 ms"
>
>
> As you can see the index is not being used and it's doing a seq scan on the
> table directly.  I would think if Postgres is indeed keeping a btree index
> on the column, meaning the values would be stored in numerical order, the
> index would be used to find rows that have a value greater than 30.  I'm
> curious as to why this is not the case, or if perhaps I have my index setup
> incorrectly for this sort of query.
>
>
I'd suggest you try this query after an ANALYZE on the table.
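
That is:

```sql
ANALYZE recipes;
EXPLAIN ANALYZE SELECT * FROM recipes WHERE cooktime > 30;
```

Note, though, that with roughly 184,000 of the table's rows matching
cooktime > 30, a sequential scan may well remain the cheaper plan even with
fresh statistics; indexes pay off for selective predicates, not ones that
match most of the table.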

-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Transactions within a function body

2008-10-02 Thread Gurjeet Singh
On Thu, Oct 2, 2008 at 8:40 PM, Alvaro Herrera
<[EMAIL PROTECTED]>wrote:

> Reg Me Please escribió:
> > Il Thursday 02 October 2008 16:15:10 Alvaro Herrera ha scritto:
>
> > > You can nest blocks arbitrarily, giving you the chance to selectively
> > > rollback pieces of the function.  It's only a bit more awkward.
> >
> > You mean I can issue a ROLLBACK command within a BEGIN...END; block to
> roll it
> > back?
>
> No -- I mean you can use BEGIN/EXCEPTION/END blocks as you like, nesting
> them or putting one after another.  Complementing this with RAISE
> EXCEPTION you can cause savepoints to roll back at will.


I have seen this feature asked for, and this work-around suggested, so
many times. If plpgsql does it internally, why not provide a clean interface
for it? Is there some road-block, or has nobody ever tried it?

If there are no known limitations, I'd like to start work on it.
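
For reference, the work-around being discussed looks like this (table t is
hypothetical):

```sql
CREATE OR REPLACE FUNCTION demo() RETURNS void AS $$
BEGIN
    INSERT INTO t VALUES (1);          -- survives the function
    BEGIN
        INSERT INTO t VALUES (2);      -- undone below
        RAISE EXCEPTION 'undo just this block';
    EXCEPTION WHEN others THEN
        NULL;  -- the inner block's changes are rolled back to its
               -- implicit savepoint; row (1) is unaffected
    END;
END;
$$ LANGUAGE plpgsql;
```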

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Transactions within a function body

2008-10-02 Thread Gurjeet Singh
No, in Oracle too SAVEPOINT and AUTONOMOUS transaction are different beasts.

On Thu, Oct 2, 2008 at 9:27 PM, Bob Henkel <[EMAIL PROTECTED]> wrote:

> Coming from an Oracle background my understanding is they're one in the
> same.
>
>
>
>
> On Thu, Oct 2, 2008 at 10:37 AM, Alvaro Herrera <
> [EMAIL PROTECTED]> wrote:
>
>> Bob Henkel escribió:
>> > Have you looked at creating a function in perl and creating a new
>> > connection? Or using a dblink query which can create a new connection?
>> > These two methods work. I have used them to insert to a log table
>> regardless
>> > of the parent transaction being commited or rolled back.
>>
>> That's a different thing, "autonomous transactions".
>>
>> --
>> Alvaro Herrera
>> http://www.CommandPrompt.com/ 
>> PostgreSQL Replication, Consulting, Custom Development, 24x7 support
>>
>
>


-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] How do I save data and then raise an exception?

2008-10-02 Thread Gurjeet Singh
On Fri, Oct 3, 2008 at 7:41 AM, Jaime Casanova <[EMAIL PROTECTED]
> wrote:

> On Thu, Oct 2, 2008 at 8:44 PM, Alvaro Herrera
> <[EMAIL PROTECTED]> wrote:
> > Rob Richardson wrote:
> >
> >> Here's what I need to do:
> >>
> >> IF query_check_fails THEN
> >> UPDATE some_table SET some_value = 0 WHERE some_condition_is_true;
> >> RAISE EXCEPTION 'Look, you idiot, do it right next time!';
> >> END;
> >>
> >> I need the update to work, but I need to raise the exception so the C++
> >> code recognizes the error.  How can I do both?
> >
> > You need an autonomous transaction, which Postgres does not support
> > directly but you can implement using dblink or a plperl function that
> > connects back to the database.
> >
>
> what about RAISE NOTICE?


A NOTICE wouldn't roll back any part of the transaction! The OP needs part
of the work to commit while the rest of the same transaction fails.

-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] How do I save data and then raise an exception?

2008-10-02 Thread Gurjeet Singh
On Fri, Oct 3, 2008 at 7:14 AM, Alvaro Herrera
<[EMAIL PROTECTED]>wrote:

> Rob Richardson wrote:
>
> > Here's what I need to do:
> >
> > IF query_check_fails THEN
> > UPDATE some_table SET some_value = 0 WHERE some_condition_is_true;
> > RAISE EXCEPTION 'Look, you idiot, do it right next time!';
> > END;
> >
> > I need the update to work, but I need to raise the exception so the C++
> > code recognizes the error.  How can I do both?
>
> You need an autonomous transaction, which Postgres does not support
> directly but you can implement using dblink or a plperl function that
> connects back to the database.


I was also going to suggest that but did not, because an autonomous
transaction won't help here! The data has been INSERTed or UPDATEd in this
transaction, and hence won't be visible to the autonomous transaction,
because the main transaction hasn't committed yet.

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Getting rows in statement-level triggers

2008-10-02 Thread Gurjeet Singh
On Fri, Oct 3, 2008 at 11:42 AM, Artacus <[EMAIL PROTECTED]> wrote:

>
>  So the manual says there is no way for a statement-level trigger to
>> examine the row(s) modified by the statement.
>>
>> Is there any way to get the xmin or cmin of the transaction that fired the
>> trigger? Or can I look up the last xid for a table some where?
>>
>
> Ok, so it took a lot of googling to figure this one out, but you can do it
> with something like so.
>
> SELECT *
> FROM strand_scores
> WHERE xmin::text = txid_current()::text
>
> It appears you can't convert a xid type to int or bigint, not sure why. I
> guess I should leave a comment on the manual page.


That's an interesting find, I'd say.

xmin::bigint doesn't work because that cast doesn't exist. If
needed, I'd use xmin::text::bigint; that should work.
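
So the full query, with the value also surfaced as a bigint, would be
something like:

```sql
SELECT xmin::text::bigint AS xid, *
FROM strand_scores
WHERE xmin::text = txid_current()::text;
```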

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] failed to install posgrest

2008-10-03 Thread Gurjeet Singh
On Sat, Oct 4, 2008 at 2:42 AM, Dave Page <[EMAIL PROTECTED]> wrote:

> On Fri, Oct 3, 2008 at 9:31 PM, Joshua Drake <[EMAIL PROTECTED]> wrote:
> > On Fri, 3 Oct 2008 15:18:26 -0500
> > "Eduardo Arévalo" <[EMAIL PROTECTED]> wrote:
> >
> >> Hi I am trying to install the portgres of 64btis in centos 5.6 but at
> >> the next command ./postgresql-8.3.4-1-linux-x64.bin
> >> the results are as follows:
> >
> > We don't distribute .bin files I would strongly suggest using the
> > rpm based packages for Centos.
>
> Yes we do - and simply switching to the RPM distro is unlikely to
> solve the problem as the OP is obviously not aware of the what the
> problem is and will probably grab the latest version available.
>
> Eduardo; the error you see occurs because the data directory you have
> is not compatible with the PostgreSQL version you are trying to
> install. Given the catalog version number the installer is reporting
> that it's found, your existing data directory is from PostgreSQL 8.2.
> You should therefore install the latest 8.2 RPM (or build from
> source), as the one-click installer is not available for the 8.2
> series.
>
> Why you have a PG 8.2 data directory under /usr/local/postgresql_8.3
> is another question entirely...


The OP has the data directory in /base/data; /usr/local/postgresql_8.3/data
is just the default suggested by the installer:
...
Data Directory [/usr/local/postgresql_8.3/data]: /base/data
...

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Static functions

2008-10-03 Thread Gurjeet Singh
On Sat, Oct 4, 2008 at 1:36 AM, Tom Lane <[EMAIL PROTECTED]> wrote:

> Joseph S <[EMAIL PROTECTED]> writes:
> > Aren't static functions supposed to executed only once per transaction?
>
> There's no promise of that.
>

Can this be changed? Or does it work only in simpler queries, based on some
result-caching criteria?

Shouldn't PG make every effort not to execute something when the result is
already known?

-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Static functions

2008-10-03 Thread Gurjeet Singh
On Sat, Oct 4, 2008 at 8:49 AM, Tom Lane <[EMAIL PROTECTED]> wrote:

> "Gurjeet Singh" <[EMAIL PROTECTED]> writes:
> > Shouldn't PG make all efforts to not execute something when the result is
> > already known?
>
> Not if said effort would cost more than is saved, which would be by far
> the most likely result if we tried to cache all function results.
>

Sorry Tom, I confused STABLE with IMMUTABLE; my bad.

Joseph, you can wrap the STABLE function inside an IMMUTABLE function; then
the whole thing will be executed only once. Use this advice only after you
understand what you are doing.

Here's an example:

create or replace function f_stable() returns int as $$ begin raise NOTICE
'stable'; return 1; end; $$ stable language plpgsql;

create or replace function f_immutable() returns int as $$ begin raise
NOTICE 'immutable'; perform f_stable(); return 1; end; $$ IMMUTABLE language
plpgsql;

postgres=> select f_stable() from generate_series( 1, 2 );
NOTICE:  stable
NOTICE:  stable
 f_stable
--
1
1
(2 rows)

postgres=> select f_immutable() from generate_series( 1, 2);
NOTICE:  immutable
NOTICE:  stable
CONTEXT:  SQL statement "SELECT  f_stable()"
PL/pgSQL function "f_immutable" line 1 at PERFORM
 f_immutable
-
   1
   1
(2 rows)

postgres=>


You can see that if the STABLE function is called directly, it is invoked for
each row; but if we hide the STABLE function inside an IMMUTABLE function,
there is just one invocation of both functions for the whole command.

HTH.

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Static functions

2008-10-04 Thread Gurjeet Singh
On Sat, Oct 4, 2008 at 6:06 PM, Gregory Stark <[EMAIL PROTECTED]>wrote:

> "Gurjeet Singh" <[EMAIL PROTECTED]> writes:
>
> > On Sat, Oct 4, 2008 at 8:49 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> >
> >> "Gurjeet Singh" <[EMAIL PROTECTED]> writes:
> >> > Shouldn't PG make all efforts to not execute something when the result
> is
> >> > already known?
> >>
> >> Not if said effort would cost more than is saved, which would be by far
> >> the most likely result if we tried to cache all function results.
> >>
> >
> > Sorry Tom, I confused STABLE  with IMMUTABLE; my bad.
>
> No, this is equally untrue for immutable.


Yup... I realized that after a bit more testing following the mail.
Immutable functions are single-call-per-command only if you pass
constants as parameters; if the expression involves columns,
they will be called for every row.
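
A quick way to see the difference, using a hypothetical one-argument variant
of the earlier function:

```sql
CREATE OR REPLACE FUNCTION f_imm(int) RETURNS int AS $$
BEGIN RAISE NOTICE 'imm(%)', $1; RETURN $1; END;
$$ IMMUTABLE LANGUAGE plpgsql;

SELECT f_imm(1) FROM generate_series(1, 2);    -- constant arg: folded once
SELECT f_imm(g) FROM generate_series(1, 2) g;  -- column arg: once per row
```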

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Monty on MySQL 5.1: "Oops, we did it again"

2008-12-07 Thread Gurjeet Singh
As I read it, he is supportive of the community process that PG follows; I
am not so sure he promotes Postgres, though :)

On Thu, Dec 4, 2008 at 3:56 AM, Bruce Momjian <[EMAIL PROTECTED]> wrote:

> Jason Long wrote:
> > Greg Smith wrote:
> > > I wonder if I'm the only one who just saved a copy of that post for
> > > reference in case it gets forcibly removed...
> > >
> > > Recently I was thinking about whether I had enough material to warrant
> > > a 2008 update to "Why PostgreSQL instead of MySQL"; who would have
> > > guessed that Monty would do most of the research I was considering for
> > > me?
> > >
> > > --
> > > * Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore,
> MD
> > >
> > I quit using MySQL years ago when the default table type did not have
> > transactions and subqueries were not existent.  The features I was
> > looking for were already in PostgreSQL for several versions.
> >
> > I am surprised to see such an honest post regarding MySQL.
>
> Monty is quite supportive of Postgres.
>
> --
>  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
>  EnterpriseDB http://enterprisedb.com
>
>  + If your life is a hard drive, Christ can be your backup. +
>
>



-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] ChronicDB: Live database schema updates with zero downtime

2010-01-13 Thread Gurjeet Singh
On Wed, Jan 13, 2010 at 12:29 AM, ChronicDB  wrote:

>
> [3] http://chronicdb.com/chronicdb_early_adoption_program


"Page not found"

-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.enterprisedb.com

singh.gurj...@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


[GENERAL] ERROR: failed to find conversion function from "unknown" to text

2009-01-05 Thread Gurjeet Singh
Q1: select '' union all select ''
Q2: select '' union all select * from (select '' ) as s

version: PostgreSQL 8.3.1, compiled by Visual C++ build 1400

Hi All,

Q1 works just fine, but Q2 fails with:

ERROR:  failed to find conversion function from "unknown" to text

Q2 is a generalization of a huge query we are facing, which we cannot
modify. I don't think this is the 'removed casts' problem generally faced in
8.3, but I may be wrong. Will adding some cast resolve this?

Best regards,
-- 
gurjeet[.sin...@enterprisedb.com
singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] [HACKERS] ERROR: failed to find conversion function from "unknown" to text

2009-01-05 Thread Gurjeet Singh
As I mentioned, we cannot change the query, so adding casts to the query is
not an option. I was looking for something external to the query, like a
CREATE CAST command that'd resolve the issue.

Best regards,

On Tue, Jan 6, 2009 at 12:00 PM, Pavel Stehule wrote:

> Hello
>
> 2009/1/6 Gurjeet Singh :
> > Q1: select '' union all select ''
> > Q2: select '' union all select * from (select '' ) as s
> >
> > version: PostgreSQL 8.3.1, compiled by Visual C++ build 1400
> >
> > Hi All,
> >
> > Q1 works just fine, but Q2 fails with:
> >
> > ERROR:  failed to find conversion function from "unknown" to text
> >
> > Q2 is a generalization of a huge query we are facing, which we cannot
> > modify. I don't think this is a 'removed-casts' problem generally faced
> in
> > 8.3, but I may be wrong. Will adding some cast resolve this?
>
> yes
>
> postgres=#  select '' union all select * from (select ''::text ) as s;
>  ?column?
> --
>
>
> (2 rows)
>
> regards
> Pavel Stehule
>
> >
> > Best regards,
> > --
> > gurjeet[.sin...@enterprisedb.com
> > singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com
> >
> > EnterpriseDB  http://www.enterprisedb.com
> >
> > Mail sent from my BlackLaptop device
> >
>



-- 
gurjeet[.sin...@enterprisedb.com
singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] [HACKERS] ERROR: failed to find conversion function from "unknown" to text

2009-01-06 Thread Gurjeet Singh
I took your cue and formulated this solution for 8.3.1:

create or replace function unknown2text(unknown) returns text as
$$ begin return text($1::char); end $$ language plpgsql;

drop cast (unknown as text);

create cast (unknown as text) with function unknown2text( unknown ) as
implicit;

select '' union all select * from (select '' ) as s;

Thanks for your help Pavel.

Best regards,

PS: I was getting the same error as yours (stack depth) on EDB version
8.3.0.12, so I had to use the following body for unknown2text:

return charin( unknownout($1) );

It works for PG 8.3.1 too.
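For completeness, the PS above only gives the replacement body. Wrapping it in the same function definition used earlier (my reconstruction, not code that appeared verbatim in the mail) yields the variant that worked on both EDB 8.3.0.12 and PG 8.3.1:

```sql
-- Sketch assuming the same signature as the plpgsql function above;
-- only the body differs: it round-trips the unknown literal through its
-- output function and then through text's input function.
create or replace function unknown2text(unknown) returns text as
$$ begin return charin( unknownout($1) ); end $$ language plpgsql;

drop cast (unknown as text);

create cast (unknown as text) with function unknown2text( unknown ) as
implicit;
```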

On Tue, Jan 6, 2009 at 12:15 PM, Pavel Stehule wrote:

> 2009/1/6 Gurjeet Singh :
> > As I mentioned, we cannot change the query, so adding casts to the query
> is
> > not an option. I was looking for something external to the query, like a
> > CREATE CAST command that'd resolve the issue.
>
> I am sorry, I was blind - I tested casting on 8.3.0 and it doesn't work
> (but I have an old 8.3)
> postgres=# create function unknown2text(unknown) returns text as
> $$select $1::text$$ language sql;
> CREATE FUNCTION
> postgres=# create cast(unknown as text) with function
> unknown2text(unknown) as implicit;
> CREATE CAST
> postgres=# select '' union all select * from (select '' ) as s;
> ERROR:  stack depth limit exceeded
> HINT:  Increase the configuration parameter "max_stack_depth", after
> ensuring the platform's stack depth limit is adequate.
> CONTEXT:  SQL function "unknown2text" during startup
> SQL function "unknown2text" statement 1
> SQL function "unknown2text" statement 1
> SQL function "unknown2text" statement 1
> SQL function "unknown2text" statement 1
> SQL function "unknown2text" statement 1
>
> It works on 8.4
>
> postgres=# create cast (unknown as text) with inout as implicit;
> CREATE CAST
> postgres=# select '' union all select * from (select '' ) as s;
>  ?column?
> --
>
>
> (2 rows)
>
> regards
> Pavel Stehule
>
>
> >
> > Best regards,
>
>
> >
> > On Tue, Jan 6, 2009 at 12:00 PM, Pavel Stehule 
> > wrote:
> >>
> >> Hello
> >>
> >> 2009/1/6 Gurjeet Singh :
> >> > Q1: select '' union all select ''
> >> > Q2: select '' union all select * from (select '' ) as s
> >> >
> >> > version: PostgreSQL 8.3.1, compiled by Visual C++ build 1400
> >> >
> >> > Hi All,
> >> >
> >> > Q1 works just fine, but Q2 fails with:
> >> >
> >> > ERROR:  failed to find conversion function from "unknown" to text
> >> >
> >> > Q2 is a generalization of a huge query we are facing, which we
> >> > cannot
> >> > modify. I don't think this is a 'removed-casts' problem generally
> faced
> >> > in
> >> > 8.3, but I may be wrong. Will adding some cast resolve this?
> >>
> >> yes
> >>
> >> postgres=#  select '' union all select * from (select ''::text ) as s;
> >>  ?column?
> >> --
> >>
> >>
> >> (2 rows)
> >>
> >> regards
> >> Pavel Stehule
> >>
> >> >
> >> > Best regards,
> >> > --
> >> > gurjeet[.sin...@enterprisedb.com
> >> > singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com
> >> >
> >> > EnterpriseDB  http://www.enterprisedb.com
> >> >
> >> > Mail sent from my BlackLaptop device
> >> >
> >
> >
> >
> > --
> > gurjeet[.sin...@enterprisedb.com
> > singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com
> >
> > EnterpriseDB  http://www.enterprisedb.com
> >
> > Mail sent from my BlackLaptop device
> >
>



-- 
gurjeet[.sin...@enterprisedb.com
singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] [HACKERS] ERROR: failed to find conversion function from "unknown" to text

2009-01-06 Thread Gurjeet Singh
On Tue, Jan 6, 2009 at 2:43 PM, Scott Marlowe wrote:

> On Tue, Jan 6, 2009 at 2:04 AM, Gurjeet Singh 
> wrote:
> > I took your cue, and have formulated this solution for 8.3.1 :
>
> Is there a good reason you're running against a db version with known
> bugs instead of 8.3.5?  Seriously, it's an easy upgrade and running a
> version missing over a year of updates is not a best practice.
>

That's just a development instance that I have kept around for a long time;
the actual issue was on EDB 8.3.0.12, which the customer is using. As noted
in the PS of my previous mail, the solution that worked on PG 8.3.1 didn't
work on EDB 8.3.0.12, so I had to come up with different code for that!

Best regards,
-- 
gurjeet[.sin...@enterprisedb.com
singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] [HACKERS] ERROR: failed to find conversion function from "unknown" to text

2009-01-06 Thread Gurjeet Singh
On Tue, Jan 6, 2009 at 6:31 PM, Tom Lane  wrote:

> "Gurjeet Singh"  writes:
> > create cast (unknown as text) with function unknown2text( unknown ) as
> > implicit;
>
> This is a horrendously bad idea; it will bite your *ss sooner or later,
> probably sooner.
>
>regards, tom lane
>

I guessed so, but couldn't figure out exactly how! That's why I suggested
this only as a temporary solution, until we could confirm it with someone
more knowledgeable.

Can you please let us know how this would be problematic? And can you
suggest a better solution?

Thanks and best regards,
-- 
gurjeet[.sin...@enterprisedb.com
singh.gurj...@{ gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

Mail sent from my BlackLaptop device

