[GENERAL] delphi -> postgresql database connect ???

2005-02-02 Thread
hi,
I am looking for Delphi-to-PostgreSQL database connection commands and free 
source code to download. Please.
Thanks.

Burak BİTİKÇİ
_
Don't trust only your umbrella to avoid getting caught in the rain; trust 
MSN weather! http://www.msn.com.tr/havadurumu/

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Weird PostgreSQL crashes on FC2/FC3 64-bit

2005-02-02 Thread Martijn van Oosterhout
On Tue, Feb 01, 2005 at 10:32:15PM -0800, William Yu wrote:
> Tom Lane wrote:
> >William Yu <[EMAIL PROTECTED]> writes:
> >
> >>Doing a ps -ef | grep postgres, I will see something like:
> >
> >
> >>root 17034 1  0 21:41 ?00:00:00 gdb -q -x /dev/stdin 
> >>postgres 9131
> >>postgres  9131  2712  0 20:31 ?00:00:00 postgres: postgres 
> >>netdata [local] VACUUM
> >
> >
> >So where did the gdb come from?
> 
> To be honest, I do not know. I see the parent process is 1 so something 
> in the FCx kernel is triggering it. I just don't know where the logs for 
> something like this would be.

Parent process 1 just means that it's either spawned by init or its
real parent process has died. I wouldn't put any stock in it. Try using
"ps aux" to get the process state. Chances are it's in T (trace) state.
You can try to kill -CONT it. Maybe strace it.
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




[GENERAL] When is a blank not a null or ''

2005-02-02 Thread mike
I have the following query (I have removed all nulls from the field as
test)

SELECT first_name,work_email FROM tb_contacts  WHERE
tb_contacts.work_email <>'';

However I get loads of blank email addresses coming up

anyone any ideas



Re: [GENERAL] Wierd memory problem with Apache / PHP. Related to

2005-02-02 Thread Jeff Davis
It sounds like a PHP issue to me more than anything else. Perhaps PHP's
garbage collection doesn't free the space fast enough?

However, I was looking at the PG source a little to see if it looked
like it was copying the query, and it appears that pqPutMsgBytes (in
fe-misc.c) copies the entire query to a buffer. That could account for
doubling of the query that you send there (since php still holds the
$data string), but still doesn't explain 16MB. Perhaps a little of both
PHP and postgres?

Developers: am I mistaken about libpq copying the entire query before
sending it to the backend? Is there a reason that libpq wouldn't just
send it along to the backend? After all, it seems strange that postgres
would allow a huge query length, yet expect to be able to copy the
entire thing. 

Regards,
Jeff Davis



On Wed, 2005-02-02 at 08:58 +0300, Michael Ben-Nes wrote:
> Hi all,
> 
> I have a problem with uploading files that hit the memory limit even 
> when their size is not close to the memory limit itself; let me 
> explain.
> 
> Here is the code I made for testing the problem (along the code I 
> echoed the function memory_get_usage() to show how much memory was 
> already allocated for the script):
> 
> $imagefile=$_FILES['imagefile']; // receive the file
> echo memory_get_usage().'';// 118592 memory bytes allocated
>
> $data = pg_escape_bytea(`cat $imagefile[tmp_name]`);
> echo memory_get_usage().'';// 5570280 memory bytes allocated
> 
> $data = "INSERT INTO test_files (bin_data, filename, filesize, filetype)
> VALUES ('$data', '$imagefile[name]', '$imagefile[size]', 
> '$imagefile[type]')";   // creating the sql for the insert; I called the 
> received value also $data because I don't want to keep the previous $data 
> (after all, we want to save our precious memory, no? :))
> echo memory_get_usage().'';// 5570400 memory bytes allocated 
> {changed from b4 only alittle}
> 
> if ( !$res = pg_query ($this->conn, $data) )  // try to insert the sql 
> string
> return 'error';
> else
> return 'gr8';
> echo memory_get_usage().'';// 5570648 memory bytes allocated 
> {again changed only alittle}
> 
> 
> Now as far as I see, the script needed about 5.5MB of memory to upload a 
> file of 4.7MB, but that's what's so weird here... I receive the memory limit 
> error even if the php.ini "memory_limit" is set to 16MB! {twice the 
> default of 8MB!}  At 32MB it works fine... but that's way too much.
> I suspect that the problem is connected to the pg_query function itself 
> but I didn't find what causes it exactly...
> 
> Any ideas, knowledge or even just solutions ;) will be extremely helpful.
> 
> Thanks in advance,
> Ben-Nes Yonatan
> 
> 




Re: [GENERAL] When is a blank not a null or ''

2005-02-02 Thread Troels Arvin
On Wed, 02 Feb 2005 09:59:30 +, mike wrote:

> SELECT first_name,work_email FROM tb_contacts  WHERE
> tb_contacts.work_email <>'';
> 
> However I get loads of blank email addresses coming up
> 
> anyone any ideas

An idea: You have " "-values in your work_email column, i.e. work_email
values consisting of space(s).

-- 
Greetings from Troels Arvin, Copenhagen, Denmark





Re: [GENERAL] When is a blank not a null or ''

2005-02-02 Thread Alban Hertroys
mike wrote:
I have the following query (I have removed all nulls from the field as
test)
SELECT first_name,work_email FROM tb_contacts  WHERE
tb_contacts.work_email <>'';
However I get loads of blank email addresses coming up
anyone any ideas
A blank is never a NULL:
SELECT '' IS NULL;
 ?column?
----------
 f
(1 row)
Try this:
SELECT first_name,work_email FROM tb_contacts  WHERE
tb_contacts.work_email IS NOT NULL;
Or if there are also blanks among those e-mail addresses:
SELECT first_name,work_email FROM tb_contacts  WHERE
tb_contacts.work_email IS NOT NULL AND tb_contacts.work_email != '';
--
Alban Hertroys
MAG Productions
T: +31(0)53 4346874
F: +31(0)53 4346876
E: [EMAIL PROTECTED]
W: http://www.magproductions.nl


Re: [GENERAL] When is a blank not a null or ''

2005-02-02 Thread Sean Davis
Is there a newline or carriage return in the "blank" emails?
Sean
On Feb 2, 2005, at 4:59 AM, mike wrote:
I have the following query (I have removed all nulls from the field as
test)
SELECT first_name,work_email FROM tb_contacts  WHERE
tb_contacts.work_email <>'';
However I get loads of blank email addresses coming up
anyone any ideas


Re: [GENERAL] When is a blank not a null or ''

2005-02-02 Thread Michael Kleiser
mike wrote:

  I have the following query (I have removed all nulls from the field as
test)

SELECT first_name,work_email FROM tb_contacts  WHERE
tb_contacts.work_email <>'';

However I get loads of blank email addresses coming up

anyone any ideas


Are NULLs in work_email possible?
If yes, you have to use:


SELECT first_name,work_email FROM tb_contacts  WHERE
  tb_contacts.work_email <>''
AND 
  tb_contacts.work_email IS NOT NULL;

By the way, in Oracle there is no difference between an empty CHAR or 
VARCHAR column and a NULL one. But that is special to Oracle.







Re: [GENERAL] When is a blank not a null or ''

2005-02-02 Thread Alban Hertroys
mike wrote:
Try this:
SELECT first_name,work_email FROM tb_contacts  WHERE
tb_contacts.work_email IS NOT NULL;
Or if there are also blanks among those e-mail addresses:
SELECT first_name,work_email FROM tb_contacts  WHERE
tb_contacts.work_email IS NOT NULL AND tb_contacts.work_email != '';

no difference
Then you probably have email addresses that consist of white space only. 
You should probably put a constraint on that if it's undesirable.

Try using a regular expression like so:
SELECT first_name,work_email
FROM tb_contacts
WHERE work_email !~ '^[[:space:]]*$';
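The same whitespace-only test is easy to prototype outside the database before adding a constraint. A small Python sketch (the helper name is made up for illustration; the POSIX class [[:space:]] corresponds to \s in Python's re module):

```python
import re

def looks_blank(value):
    """True when value is NULL-like, empty, or whitespace-only --
    the rows the regex '^[[:space:]]*$' would match."""
    return value is None or re.fullmatch(r"\s*", value) is not None

assert looks_blank(None)
assert looks_blank("") and looks_blank("   ") and looks_blank("\t\n")
assert not looks_blank("mike@example.org")
```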
--
Alban Hertroys
MAG Productions
T: +31(0)53 4346874
F: +31(0)53 4346876
E: [EMAIL PROTECTED]
W: http://www.magproductions.nl


Re: [GENERAL] When is a blank not a null or ''

2005-02-02 Thread mike
On Wed, 2005-02-02 at 11:31 +0100, Troels Arvin wrote:
> On Wed, 02 Feb 2005 09:59:30 +, mike wrote:
> 
> > SELECT first_name,work_email FROM tb_contacts  WHERE
> > tb_contacts.work_email <>'';
> > 
> > However I get loads of blank email addresses coming up
> > 
> > anyone any ideas
> 
> An idea: You have " "-values in your work_email column, i.e. work_email
> values consisting of space(s).
> 

nope

SELECT work_email FROM tb_contacts  WHERE tb_contacts.work_email ILIKE
'% %';
 work_email
------------
(0 rows)




Re: [GENERAL] When is a blank not a null or ''

2005-02-02 Thread Chris Green
On Wed, Feb 02, 2005 at 09:59:30AM +, mike wrote:
> I have the following query (I have removed all nulls from the field as
> test)
> 
> SELECT first_name,work_email FROM tb_contacts  WHERE
> tb_contacts.work_email <>'';
> 
> However I get loads of blank email addresses coming up
> 
Maybe you have some entries in the work_email column set to one or
more spaces.

-- 
Chris Green ([EMAIL PROTECTED])

"Never ascribe to malice that which can be explained by incompetence."



Re: [GENERAL] When is a blank not a null or ''

2005-02-02 Thread mike
On Wed, 2005-02-02 at 11:26 +0100, Alban Hertroys wrote:
> mike wrote:
> > I have the following query (I have removed all nulls from the field as
> > test)
> > 
> > SELECT first_name,work_email FROM tb_contacts  WHERE
> > tb_contacts.work_email <>'';
> > 
> > However I get loads of blank email addresses coming up
> > 
> > anyone any ideas
> 
> A blank is never a NULL:

I know, I meant visually a blank


> 
> SELECT '' IS NULL;
>   ?column?
> ----------
>   f
> (1 row)
> 
> 
> Try this:
> 
> SELECT first_name,work_email FROM tb_contacts  WHERE
> tb_contacts.work_email IS NOT NULL;
> 
> Or if there are also blanks among those e-mail addresses:
> 
> SELECT first_name,work_email FROM tb_contacts  WHERE
> tb_contacts.work_email IS NOT NULL AND tb_contacts.work_email != '';
> 

no difference



[GENERAL] query time

2005-02-02 Thread WireSpot
I have a table with about 200.000 entries. Among other things, it
contains an integer field I use as a timestamp, and a variable
character field I use for user names. Certain queries are taking too
long IMO. I'm trying this on both 7.4 and 8.0.

If I do a direct comparison (using =) on the user name field and I
sort by the numeric field, I get about 5 ms. If I do a LIKE on the
user name and I don't sort at all, I get about 5 ms too. But if I use
both LIKE on the user name and sorting on the timestamp, the time
jumps to 2000 ms.

I have indexes on both fields, but I remember reading only one of them
will be used when doing a query.

Is such a serious jump in query times normal or am I doing something wrong?

I'm attaching the explain output for all 3 cases.
explain analyze select * from log_entries where user_name='john_doe' order by 
timestamp desc limit 100 offset 0;

                                  QUERY PLAN
------------------------------------------------------------------------------
 Limit  (cost=0.00..235.47 rows=100 width=175) (actual time=0.945..5.858 
rows=100 loops=1)
   ->  Index Scan Backward using timestamp_log_entries_key on log_entries  
(cost=0.00..39093.47 rows=16602 width=175) (actual time=0.938..5.622 rows=100 
loops=1)
 Filter: ((user_name)::text = 'john_doe'::text)
 Total runtime: 6.175 ms
(4 rows)



explain analyze select * from log_entries where user_name like '%john_doe%' 
limit 100 offset 0;
                                  QUERY PLAN
------------------------------------------------------------------------------
 Limit  (cost=0.00..8250.17 rows=1 width=175) (actual time=0.495..3.364 
rows=100 loops=1)
   ->  Seq Scan on log_entries  (cost=0.00..8250.17 rows=1 width=175) (actual 
time=0.486..3.138 rows=100 loops=1)
 Filter: ((user_name)::text ~~ '%john_doe%'::text)
 Total runtime: 3.657 ms
(4 rows)



explain analyze select * from log_entries where user_name like '%john_doe%' 
order by timestamp desc limit 100 offset 0;
                                  QUERY PLAN
------------------------------------------------------------------------------
 Limit  (cost=8250.18..8250.19 rows=1 width=175) (actual 
time=1880.358..1880.910 rows=100 loops=1)
   ->  Sort  (cost=8250.18..8250.19 rows=1 width=175) (actual 
time=1880.345..1880.701 rows=100 loops=1)
 Sort Key: "timestamp"
 ->  Seq Scan on log_entries  (cost=0.00..8250.17 rows=1 width=175) 
(actual time=0.432..1051.036 rows=15481 loops=1)
   Filter: ((user_name)::text ~~ '%john_doe%'::text)
 Total runtime: 1887.071 ms
(6 rows)


Re: [GENERAL] When is a blank not a null or ''

2005-02-02 Thread Martijn van Oosterhout
Try:

SELECT first_name,'['||work_email||']' FROM tb_contacts  WHERE
tb_contacts.work_email <>'';

Maybe you have spaces?

On Wed, Feb 02, 2005 at 09:59:30AM +, mike wrote:
> I have the following query (I have removed all nulls from the field as
> test)
> 
> SELECT first_name,work_email FROM tb_contacts  WHERE
> tb_contacts.work_email <>'';
> 
> However I get loads of blank email addresses coming up
> 
> anyone any ideas
> 

-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




Re: [GENERAL] When is a blank not a null or ''

2005-02-02 Thread Csaba Nagy
[snip]
> Or if there are also blanks among those e-mail addresses:
> 
> SELECT first_name,work_email FROM tb_contacts  WHERE
> tb_contacts.work_email IS NOT NULL AND tb_contacts.work_email != '';

The "tb_contacts.work_email IS NOT NULL" clause is superfluous; the
other condition will evaluate to null (not true) for a null email anyway:
comparing a null to any value with any ordinary operator yields null,
which fails the WHERE condition. Generally any operator involving a null
results in null, except a few special operators like "IS NULL".

In fewer words, the original statement will filter out both null and
empty-string emails, but not emails consisting of one or more space
characters. For example "   " will be selected, yet to a human it still
looks blank. I wonder what data type you have, because e.g. if you have
char(n), it is automatically padded with space characters
(see http://www.postgresql.org/docs/7.4/static/datatype-character.html).

If you do have space characters in the email field, you could use:

trim(both from tb_contacts.work_email) != ''
or
char_length(trim(both from tb_contacts.work_email)) != 0
See also:
http://www.postgresql.org/docs/7.4/static/functions-string.html

This should filter out all null, empty string, and only space emails.

HTH,
Csaba.
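The three-valued-logic point above can be checked outside PostgreSQL too. A quick sketch in Python using SQLite, which follows the same SQL rule that a comparison involving NULL yields NULL, and a NULL WHERE condition filters the row out (table and data are invented for illustration):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
# A comparison against NULL yields NULL (None), not TRUE or FALSE:
cur.execute("SELECT NULL <> ''")
assert cur.fetchone()[0] is None

cur.execute("CREATE TABLE tb_contacts (work_email TEXT)")
cur.executemany("INSERT INTO tb_contacts VALUES (?)",
                [(None,), ("",), ("   ",), ("mike@example.org",)])
# The <> '' condition alone already drops both NULL and '' rows...
cur.execute("SELECT count(*) FROM tb_contacts WHERE work_email <> ''")
assert cur.fetchone()[0] == 2   # '   ' and the real address remain
# ...while trim() also catches the space-only, "visually blank" row:
cur.execute("SELECT count(*) FROM tb_contacts"
            " WHERE trim(coalesce(work_email, '')) <> ''")
assert cur.fetchone()[0] == 1
```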





Re: [GENERAL] When is a blank not a null or ''

2005-02-02 Thread Berend Tober
>>anyone any ideas
> If yes you should you have to use.
>
> SELECT first_name,work_email FROM tb_contacts  WHERE
>   tb_contacts.work_email <>''
> AND
>   tb_contacts.work_email IS NOT NULL;
>

See what happens with

SELECT first_name, work_email, LENGTH(COALESCE(work_email, ''))
FROM tb_contacts
WHERE LENGTH(TRIM(COALESCE(work_email, ''))) = 0
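The WHERE clause above reduces to a simple rule. A Python analogue (helper name is made up) of LENGTH(TRIM(COALESCE(work_email, ''))) = 0:

```python
def is_visually_blank(work_email):
    """Python analogue of LENGTH(TRIM(COALESCE(work_email, ''))) = 0:
    treats NULL, '', and whitespace-only values alike."""
    return len((work_email or "").strip()) == 0

assert is_visually_blank(None)
assert is_visually_blank("") and is_visually_blank("   ")
assert not is_visually_blank("mike@example.org")
```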




Re: [GENERAL] When is a blank not a null or ''

2005-02-02 Thread Sean Davis
Did you try something like:
select first_name, work_email
FROM tb_contacts
WHERE tb_contacts.work_email !~ '^\\s*$';
If this works, then you may want to do something like:
update tb_contacts set work_email=NULL where work_email ~ '^\\s*$';
to "clean" the data and then use a trigger to do the same process on 
future inserts.

Sean
On Feb 2, 2005, at 6:24 AM, mike wrote:
On Wed, 2005-02-02 at 11:26 +0100, Alban Hertroys wrote:
mike wrote:
I have the following query (I have removed all nulls from the field 
as
test)

SELECT first_name,work_email FROM tb_contacts  WHERE
tb_contacts.work_email <>'';
However I get loads of blank email addresses coming up
anyone any ideas
A blank is never a NULL:
I know, I meant visually a blank

SELECT '' IS NULL;
  ?column?
----------
  f
(1 row)
Try this:
SELECT first_name,work_email FROM tb_contacts  WHERE
tb_contacts.work_email IS NOT NULL;
Or if there are also blanks among those e-mail addresses:
SELECT first_name,work_email FROM tb_contacts  WHERE
tb_contacts.work_email IS NOT NULL AND tb_contacts.work_email != '';
no difference


Re: [GENERAL] Does indexing help >= as well as = for integer columns?

2005-02-02 Thread TJ O'Donnell
I had thought that the Creation of the Index would do something
equivalent to Analyze.  I tried Analyze Verbose and it improved
the scanner's ability to predict when an index would be useful.
Last week, I asked about visualizing B-tree "coverage".  I think
I meant "Can I see the histograms that Analyze creates?"
Are they available anywhere?  The docs mention them (bins) and I
was hoping Analyze Verbose would show them to me.
TJ
Tom Lane wrote:
"TJ O'Donnell" <[EMAIL PROTECTED]> writes:
This I don't get.  Why is an index scan not used?  Isn't an index supposed
to help when using > < >= <= too?
Explain Analyze Select count(smiles) from structure where _c >= 30
Aggregate  (cost=196033.74..196033.74 rows=1 width=32) (actual 
time=42133.432..42133.434 rows=1
loops=1)
 ->  Seq Scan on structure  (cost=0.00..191619.56 rows=1765669 width=32) (actual
time=8050.437..42117.062 rows=1569 loops=1)
   Filter: (_c >= 30)

Have you ANALYZEd the table lately?  That rowcount estimate is off by
about three orders of magnitude :-(
			regards, tom lane


Re: [GENERAL] query time

2005-02-02 Thread Richard Huxton
WireSpot wrote:
I have a table with about 200.000 entries. Among other things, it 
contains an integer field I use as a timestamp, and a variable 
character field I use for user names. Certain queries are taking too 
long IMO. I'm trying this on both 7.4 and 8.0.

If I do a direct comparison (using =) on the user name field and I 
sort by the numeric field, I get about 5 ms. If I do a LIKE on the 
user name and I don't sort at all, I get about 5 ms too. But if I use
 both LIKE on the user name and sorting on the timestamp, the time 
jumps to 2000 ms.

explain analyze select * from log_entries where user_name like
'%john_doe%' limit 100 offset 0;
This will not (and never will be able to) use an index on user_name.
Think about it: you'd need an index that ordered user_name so that
(john_doe, Ajohn_doe, Zjohn_doe1234) were all next to each other.
If you anchor the search (LIKE 'john_doe%') and are using the C locale 
then an index can be used (because it's the same as >='john_doe' AND 
<'john_dof').

If you really need to do indexed searches anywhere in a text-field 
you'll need to look at tsearch2 in the contrib/ directory. That lets you 
build a full-text index, but it's really meant for documents rather than 
user names.
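The anchored-LIKE rewrite described above can be sketched in Python: under plain byte-wise ordering (what the C locale gives you), every string starting with a prefix falls in a half-open range whose upper bound is the prefix with its last character bumped. The helper below is an illustration only and ignores the edge case where the last character is already the maximum:

```python
def prefix_range(prefix: str):
    """Return (low, high) so that, under byte-wise ordering,
    s.startswith(prefix)  iff  low <= s < high."""
    # bump the final character: 'john_doe' -> upper bound 'john_dof'
    return prefix, prefix[:-1] + chr(ord(prefix[-1]) + 1)

low, high = prefix_range("john_doe")
assert (low, high) == ("john_doe", "john_dof")
assert low <= "john_doe1234" < high       # matches the prefix
assert not (low <= "ajohn_doe" < high)    # leading text: no match
```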
--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] query time

2005-02-02 Thread WireSpot
On Wed, 02 Feb 2005 14:48:41 +, Richard Huxton  wrote:
> Think about it, you'd need an index that ordered use_name so that
> (john_doe, Ajohn_doe, Zjohn_doe1234) were all next to each other.
> 
> If you anchor the search (LIKE 'john_doe%') and are using the C locale
> then an index can be used (because it's the same as >='john_doe' AND
> <'john_dof').

Unfortunately, all my cases are LIKE '%john_doe'. So I'm guessing I'll
never get an index.

How about the response time when doing both LIKE user_name and ORDER
BY timestamp? Why does it get blown out of the water like that, from 5
and 5 to 2000 ms? If a LIKE by itself takes 5 ms and an ORDER by
itself takes 5 ms... Doesn't it grab the results matching the LIKE and
then ORDER only those?

While we're on the subject of indexes, is there any way I can speed up
a SELECT DISTINCT user_name FROM log_entries? With 200.000 entries I
get like 46 seconds on this one.

I apologise if these things are already in the manual, I'm only now
getting used to it and I don't find some things right away.



[GENERAL] capturing/viewing sort_mem utilization on a per query basis

2005-02-02 Thread Lonni J Friedman
Greetings,
I've got a pesky J2EE based app that is using PostgreSQL-7.4.x on the
backend.  Lately i've been getting a disturbing large number of
PostgreSQL out of memory exceptions on the J2EE side when running
queries that have some huge joins in them.  I've tried increasing the
sort_mem value for the DB , but that hasn't had any noticable impact. 
Rather than fumbling blindly in the dark, i'd like to get a better
idea of how much memory this query really needs.

Is there a way for me to capture that in the log?

Note, i'm not the person who wrote this app or the query, i'm just the
guy responsible for the DB.  I've already castigated the folks who
wrote this resource hog.

thanks,
Lonni

-- 
~
L. Friedman[EMAIL PROTECTED]
LlamaLand   http://netllama.linux-sxs.org



Re: [GENERAL] Does indexing help >= as well as = for integer columns?

2005-02-02 Thread Martijn van Oosterhout
On Wed, Feb 02, 2005 at 06:51:13AM -0800, TJ O'Donnell wrote:
> I had thought that the Creation of the Index would do something
> equivalent to Analyze.  I tried Analyze Verbose and it improved
> the scanner's ability to predict when an index would be useful.

Create index creates an index, analyze collects statistics. Neither
happens without being asked for...

> Last week, I asked about visualizing B-tree "coverage".  I think
> I meant "Can I see the histograms that Analyze creates?"
> Are they available anywhere?  The docs mention them (bins) and I
> was hoping Analyze Verbose would show them to me.

Maybe pg_statistic? You may need the oid of the column definition to
work out what goes where...

Hope this helps,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




Re: [GENERAL] PL/PgSQL, Inheritance, Locks, and Deadlocks

2005-02-02 Thread Tom Lane
"Thomas F.O'Connell" <[EMAIL PROTECTED]> writes:
> The linking table is a pure linking table. It has a user_id and a 
> group_id, each a foreign key. The user_id ties to the appropriate 
> subclass user table. The group_id ties to the groups table, which is 
> not part of an inheritance hierarchy. A multicolumn primary key covers 
> both foreign keys in the linking table, and the secondary column of the 
> key also has its own index.

Inserts/updates in a table that has a foreign key result in locks on the
referenced rows in the master table.  Could this explain your problem?

regards, tom lane



[GENERAL] NewsServer down ?

2005-02-02 Thread Postgre . News . Firma
Hi.

There should exist a news-server at:
news://news.postgresql.org/
but I can't connect to it.
It seems it does not exist.
Even the name can't get resolved.

Is there a new one out there or isn't there one now ?

Cu,
Andreas


PS:
Btw, I got the news-server address from 
http://www.postgresql.org/community/lists/
...
The mailing lists are also available at the PostgreSQL .
However, in order to post to a list, you must still subscribe to that list
(but it is possible to subscribe without receiving mail).
where  is a link.




Re: [GENERAL] Does indexing help >= as well as = for integer columns?

2005-02-02 Thread Tom Lane
"TJ O'Donnell" <[EMAIL PROTECTED]> writes:
> Last week, I asked about visualizing B-tree "coverage".  I think
> I meant "Can I see the histograms that Analyze creates?"
> Are they available anywhere?

See pg_stats

regards, tom lane



[GENERAL] simple case syntax oddity in 8.0.0

2005-02-02 Thread Vlad
Hi there:

Postgresql 8.0.0, FreeBSD 5.3


test=> select case 0 when 0 then null else 1/0 end as test;
ERROR:  division by zero

test=> select case when 0=0 then null else 1/0 end as test;
 test
------

(1 row)

test=>

Postgresql 7.4.5, FreeBSD 5.3

test => select case 0 when 0 then null else 1/0 end as test;
 test
------

(1 row)

test => select case when 0=0 then null else 1/0 end as test;
 test
------



-- 
Vlad
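Whatever the cause of the change in 8.0.0, the 7.4.5 output matches the usual expectation that CASE evaluates only the chosen branch. Python's conditional expression illustrates that short-circuit behavior:

```python
x = 0
# Only the selected branch is evaluated, so no ZeroDivisionError here,
# analogous to CASE WHEN x = 0 THEN NULL ELSE 1/x END:
result = None if x == 0 else 1 // x
assert result is None

x = 5
assert (None if x == 0 else 1 // x) == 0   # integer division 1 // 5
```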



Re: [GENERAL] query time

2005-02-02 Thread Richard Huxton
WireSpot wrote:
On Wed, 02 Feb 2005 14:48:41 +, Richard Huxton  wrote:
Think about it, you'd need an index that ordered user_name so that
(john_doe, Ajohn_doe, Zjohn_doe1234) were all next to each other.
If you anchor the search (LIKE 'john_doe%') and are using the C locale
then an index can be used (because it's the same as >='john_doe' AND
<'john_dof').

Unfortunately, all my cases are LIKE '%john_doe'. So I'm guessing I'll
never get an index.
Well, if you want names *ending* in john_doe you can create a function 
reverse() and then create a functional index on it. If you want 
something in the middle, tough.
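The reverse() trick works because a suffix match on user_name is a prefix match on the reversed string, and prefix matches are index-friendly. A sketch of the idea in Python (reverse() here stands in for the SQL function you would create and wrap in a functional index):

```python
def reverse(s: str) -> str:
    # stands in for the SQL reverse() one would index on
    return s[::-1]

# user_name LIKE '%john_doe'  is equivalent to
# reverse(user_name) LIKE reverse('john_doe') || '%'
assert reverse("bigjohn_doe").startswith(reverse("john_doe"))
assert not reverse("john_doe42").startswith(reverse("john_doe"))
```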

How about the response time when doing both LIKE user_name and ORDER
BY timestamp? Why does it get blown out of the water like that, from 5
and 5 to 2000 ms? If a LIKE by itself takes 5 ms and an ORDER by
itself takes 5 ms... Doesn't it grab the results matching the LIKE and
the ORDER only those?
In your first '%john_doe%' example you had a LIMIT 100 without a sort. 
So - the sequential scan started and when PG found the 100th match it 
stopped. If you order by timestamp, it has to find all the matches, sort 
them by timestamp and then discard the 101st onwards.

Now, the reason it doesn't use the timestamp index instead is that it 
thinks that your LIKE is going to be very specific:
  (cost=0.00..8250.17 rows=1 width=175)
In actual fact, there isn't one matching row there are 15,481:
  (actual time=0.432..1051.036 rows=15481 loops=1)
I'm not sure that there's any way for PG to know how many matches there 
are going to be - it keeps track of values, but there's no easy way to 
model substrings of a column.

Are you sure you need to match user_name against a double-wildcarded 
value? If you do, all I could suggest is perhaps limiting the search to 
one day/week/whatever at a time, which might make the timestamp index 
seem appealing.

While we're on the subject of indexes, is there any way I can speed up
a SELECT DISTINCT user_name FROM log_entries? With 200.000 entries I
get like 46 seconds on this one.
Not sure there's a simple way to avoid a seq-scan of the whole table. PG 
has good concurrency support with MVCC, but the downside is that the 
indexes don't store whether something is visible or not. That means you 
can't just look at the index to determine what values are visible to 
your current transaction.

> I apologise if these things are already in the manual, I'm only now
> getting used to it and I don't find some things right away.
No problem - all sensible questions. Worth checking the mailing-list 
archives too though.

--
  Richard Huxton
  Archonet Ltd


[GENERAL] PostgreSQL on cluster

2005-02-02 Thread Yury Don
Hello All,

Does anybody have a live cluster with 2 or more computers running
PostgreSQL, connected to single database on shared storage or to
replicated database with load balancing between them?

-- 
Best regards,
 Yury  mailto:[EMAIL PROTECTED]




Re: [GENERAL] Wierd memory problem with Apache / PHP. Related to

2005-02-02 Thread Tom Lane
Jeff Davis <[EMAIL PROTECTED]> writes:
> Developers: am I mistaken about libpq copying the entire query before
> sending it to the backend? Is there a reason that libpq wouldn't just
> send it along to the backend?

That's a feature, not a bug.  libpq marshals whole messages before
sending them so that it can be sure it doesn't have a problem with
a half-sent message.  The only way out of such a problem would be to
drop the connection, because there's no way to regain message boundary
sync with the backend.

If your SQL queries are so long as to pose a memory threat by
themselves, you might want to rethink your approach anyway.  I'd
expect such things to hit bottlenecks all over the place.  In particular
the backend is quite certain to make multiple copies of any long literal
constant during parsing/planning.

regards, tom lane
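The point about message boundaries is easiest to see with a sketch of length-prefixed framing, loosely modeled on PostgreSQL's v3 wire format (one type byte, then a 4-byte length that counts itself): because the sender marshals the whole message first, the receiver can always determine where one message ends and the next begins, instead of losing sync on a half-sent message.

```python
import struct

def frame(msg_type: bytes, payload: bytes) -> bytes:
    """Marshal a complete message: type byte + 4-byte big-endian length
    (the length counts itself, as in the v3 protocol) + payload."""
    return msg_type + struct.pack("!I", 4 + len(payload)) + payload

msg = frame(b"Q", b"SELECT 1;\x00")
assert msg[:1] == b"Q"
(length,) = struct.unpack("!I", msg[1:5])
assert len(msg) == 1 + length   # the boundary is fully determined
```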



Re: [GENERAL] NewsServer down ?

2005-02-02 Thread Marc G. Fournier

Just connected to it from here ... is this still proving to be a problem 
for you?

On Wed, 2 Feb 2005 [EMAIL PROTECTED] wrote:
Hi.
There should exist a news-server at:
news://news.postgresql.org/
but I can't connect to it.
It seems it does not exist.
Even the name can't get resolved.
Is there a new one out there or isn't there one now ?
Cu,
Andreas
PS:
Btw, I got the news-server address from
http://www.postgresql.org/community/lists/
...
The mailing lists are also available at the PostgreSQL .
However, in order to post to a list, you must still subscribe to that list
(but it is possible to subscribe without receiving mail).
where  is a link.

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: [GENERAL] PL/PgSQL, Inheritance, Locks, and Deadlocks

2005-02-02 Thread Thomas F.O'Connell
Doubtful, because users never share groups, so even though the groups 
table is not part of an inheritance hierarchy, there shouldn't be any 
overlap between foreign keys in the users1_groups table and the 
users2_groups table in the groups table.

users1_groups links all users in the users1 subclass to groups that 
will be completely distinct from the groups in which users2 users could 
be categorized.

We were seeing, for instance, the stored procedure I posted, which was 
unique to users1, acquire an AccessShareLock on the users2_groups 
table. And as it ran (which took a while, since it does several 
counts), it seemed to acquire locks on a few different linking tables 
from itself (e.g., users3_groups and users4_groups, as well).

The extra locks it was acquiring seemed to be related to some of the 
deadlocks I've been seeing during CREATE statements (during standard 
operation of the database) on a variety of the subclass tables (both 
user tables and linking tables).

-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Feb 2, 2005, at 9:41 AM, Tom Lane wrote:
"Thomas F.O'Connell" <[EMAIL PROTECTED]> writes:
The linking table is a pure linking table. It has a user_id and a
group_id, each a foreign key. The user_id ties to the appropriate
subclass user table. The group_id ties to the groups table, which is
not part of an inheritance hierarchy. A multicolumn primary key covers
both foreign keys in the linking table, and the secondary column of 
the
key also has its own index.
Inserts/updates in a table that has a foreign key result in locks on 
the
referenced rows in the master table.  Could this explain your problem?

			regards, tom lane



[GENERAL] basic pg lock question

2005-02-02 Thread Rick Schumeyer
I have a question about whether or not I need to do locking to a pg
table being accessed from a php application.

Let's say two users select rows from the table and display them in
their browser.  User A clicks on row 1 to edit it.  Since this is php,
so far it just selects the current values from the row.  Eventually,
user A will submit his changes and update the row.

In the meantime, when User B looks at his web page, there will still
be an 'edit' link for row 1.  I'm pretty sure that I don't want User B
to try to edit the row, but as far as I understand the default postgres
locking will not prevent this.  When user A selects row 1, do I need to
manually lock the row to prevent another select?

I'm sorry if this is basic stuff!



Re: [GENERAL] PostgreSQL on cluster

2005-02-02 Thread Richard Huxton
Yury Don wrote:
Hello All,
Does anybody have a live cluster with 2 or more computers running
PostgreSQL, connected to single database on shared storage or to
replicated database with load balancing between them?
No, because you can't share the same database files between two 
independent servers. You could run PG on a cluster if it could 
distribute shared memory, but without specialised hardware it would be 
very slow.

Replication is fine, although load-balancing needs to be thought 
through. There's been plenty of discussion of this recently, search the 
archives for "cluster" or "replication" or "slony".
--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] When is a blank not a null or ''

2005-02-02 Thread Michael Kleiser
Maybe other whitespace or non-printable character.
Try:

SELECT first_name, '[' || work_email || ']', ASCII(work_email) FROM tb_contacts  WHERE
tb_contacts.work_email <>'';

mike wrote:
> On Wed, 2005-02-02 at 11:31 +0100, Troels Arvin wrote:
>> On Wed, 02 Feb 2005 09:59:30 +, mike wrote:
>>> SELECT first_name,work_email FROM tb_contacts  WHERE
>>> tb_contacts.work_email <>'';
>>>
>>> However I get loads of blank email addresses coming up
>>>
>>> anyone any ideas
>>
>> An idea: You have " "-values in your work_email column, i.e. work_email
>> values consisting of space(s).
>
> nope
>
> SELECT work_email FROM tb_contacts  WHERE tb_contacts.work_email ILIKE
> '% %';
>  work_email
> ------------
> (0 rows)


Re: [GENERAL] capturing/viewing sort_mem utilization on a per query basis

2005-02-02 Thread Tom Lane
Lonni J Friedman <[EMAIL PROTECTED]> writes:
> I've got a pesky J2EE based app that is using PostgreSQL-7.4.x on the
> backend.  Lately i've been getting a disturbing large number of
> PostgreSQL out of memory exceptions on the J2EE side when running
> queries that have some huge joins in them.

Such an error on the client side would indicate that the query retrieved
more rows than the client app had memory for.  Perhaps your problem is
incorrectly-constrained joins that are returning way more rows than
you expect?

regards, tom lane



Re: [GENERAL] simple case syntax oddity in 8.0.0

2005-02-02 Thread Tom Lane
Vlad <[EMAIL PROTECTED]> writes:
> Postgresql 8.0.0, FreeBSD 5.3

> test=> select case 0 when 0 then null else 1/0 end as test;
> ERROR:  division by zero

Hmm ... unexpected side effect of the change to avoid evaluating the
test-expression multiple times.  This used to be transformed into
this at parse time:

select case when 0=0 then null else 1/0 end as test;

and then constant-folding would do the right thing.
But now it's transformed into something sort of like

select (let x = 0 in case when x=0 then ...)

and the when-clause therefore appears not to be constant.  I think
I can fix this for 8.0.2, though.
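
In the meantime, writing the searched form directly (the one the parser used to produce) still gets constant-folded, and NULLIF avoids the division entirely; a minimal sketch:

```sql
-- Searched CASE: the WHEN clause is a plain constant expression,
-- so the ELSE arm's 1/0 is never evaluated
SELECT CASE WHEN 0 = 0 THEN NULL ELSE 1/0 END AS test;

-- NULLIF(divisor, 0) turns a zero divisor into NULL, and
-- dividing by NULL yields NULL rather than an error
SELECT 1 / NULLIF(0, 0) AS test;
```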

regards, tom lane



[GENERAL] Invalid headers and xlog flush failures

2005-02-02 Thread Bricklen Anderson
Hi all,
I recently came across some apparent corruption in one of our databases around 
a month ago.
version: postgresql 8 (originally 8r3, now at 8.0.1), debian box
The messages that we were originally getting in our syslog were about invalid 
page headers. After
googling around, then dumping the page with pg_filedump, I decided to drop and 
recreate the affected
table. This seemed to work for a while, until this message cropped up in the 
syslog during a heavy load:
Feb  1 11:17:49 dev94 postgres[4959]: [470-2] 2005-02-01 11:17:49 PST> CONTEXT: 
 writing block 47272
of relation 1663/17235/57800
Feb  1 11:17:49 dev94 postgres[4959]: [471-1] 2005-02-01 11:17:49 PST> WARNING: 
 could not write
block 47272 of 1663/17235/57800
Feb  1 11:17:49 dev94 postgres[4959]: [471-2] 2005-02-01 11:17:49 PST> DETAIL:  
Multiple failures
--- write error may be permanent.
Feb  1 11:17:50 dev94 postgres[4959]: [472-1] 2005-02-01 11:17:50 PST> ERROR:  
xlog flush request
972/FC932854 is not satisfied --- flushed only to 73/86D2640
This maps to an index. I reindexed it (and several other tables), and a 3 hours 
later, restarted my
load process. Shortly after that, the same thing happened again (with different 
numbers this time):
Feb  1 14:36:05 dev94 postgres[12887]: [626-2] 2005-02-01 14:36:05 PST> 
CONTEXT:  writing block 7502
of relation 1663/17235/151565
Feb  1 14:36:05 dev94 postgres[12887]: [627-1] 2005-02-01 14:36:05 PST> 
WARNING:  could not write
block 7502 of 1663/17235/151565
Feb  1 14:36:05 dev94 postgres[12887]: [627-2] 2005-02-01 14:36:05 PST> DETAIL: 
 Multiple failures
--- write error may be permanent.
Feb  1 14:36:06 dev94 postgres[12887]: [628-1] 2005-02-01 14:36:06 PST> ERROR:  
xlog flush request
973/3EF36C2C is not satisfied --- flushed only to 73/419878B4
Both sets are repeated continuously through the syslog.
I pursued some references to XID wraparound, but that didn't seem likely from 
what I could see
(unless I'm misreading the numbers)
SELECT datname, age(datfrozenxid) FROM pg_database where datname='dev17';
 datname |    age
---------+------------
 dev17   | 1074008776
Here is a pg_filedump of 151565:
$pg_filedump -i -f -R 7502 /var/postgres/data/base/17235/151565
***
* PostgreSQL File/Block Formatted Dump Utility - Version 3.0
*
* File: /var/postgres/data/base/17235/151565
* Options used: -i -f -R 7502
*
* Dump created on: Tue Feb  1 14:34:14 2005
***
Block 7502 
 -
 Block Offset: 0x03a9c000 Offsets: Lower 988 (0x03dc)
 Block: Size 8192  Version 2  Upper 3336 (0x0d08)
 LSN:  logid115 recoff 0x39e855f4  Special  8176 (0x1ff0)
 Items:  242   Free Space: 2348
 Length (including item array): 992
 Error: Invalid header information.
  : 7300 f455e839 0100 dc03080d  sU.9
  0010: f01f0220 cc912800 e0912800 f4912800  ... ..(...(...(.
  0020: 08922800 1c922800 30922800 44922800  ..(...(.0.(.D.(.

 --
 Item   1 -- Length:   20  Offset: 4556 (0x11cc)  Flags: USED
  Block Id: 9016  linp Index: 2  Size: 20
  Has Nulls: 0  Has Varwidths: 16384
  11cc: 3823 02001440 0b00 022000cf  [EMAIL PROTECTED] ..
  11dc: 66f06500 f.e.
 Item   2 -- Length:   20  Offset: 4576 (0x11e0)  Flags: USED
  Block Id: 9571  linp Index: 8  Size: 20
  Has Nulls: 0  Has Varwidths: 16384
  11e0: 6325 08001440 0b00 022000cf  [EMAIL PROTECTED] ..
  11f0: 66f06400 f.d.
 Item   3 -- Length:   20  Offset: 4596 (0x11f4)  Flags: USED
  Block Id: 9571  linp Index: 3  Size: 20
  Has Nulls: 0  Has Varwidths: 16384
  11f4: 6325 03001440 0b00 022000cf  [EMAIL PROTECTED] ..
  1204: 66f06400 f.d.

 -
 BTree Index Section:
  Flags: 0x0001 (LEAF)
  Blocks: Previous (1314)  Next (1958)  Level (0)
  1ff0: 2205 a607  0100  "...
*** End of Requested Range Encountered. Last Block Read: 7502 ***
Can anyone suggest what I should try next, or if you need more information, 
I'll happily supply what
I can.
Inline are the changes I made to pg_filedump to get it to compile and work, 
as such I can't attest
to pg_filedump's accuracy (which could be the source of those invalid header 
messages)
#
--- pg_filedump.c.old   2004-02-23 12:58:58.0 -0800
+++ ../pg_filedump-3.0/pg_filedump.c.new 2005-01-31 09:24:36.0
-0800
@@ -742,8 +742,8 @@
  printf ("  XID: min (%u)  CMIN|XMAX: %u  CMAX|XVAC: %u\n"
  "  Block Id: %u  linp Index: %u   Attributes: %d Size: %d\n",
- htup->t_xmin, htup->t_field2.t_cmin,
- htup->t_field3.t_cmax,
+ htup->t_choice.t_heap.t_xmin,htup->t_choice.t_heap.t_cmin,
+ htup->t_choice.t_heap.t_field4.t_cmax,
  

Re: [GENERAL] basic pg lock question

2005-02-02 Thread Richard Huxton
Rick Schumeyer wrote:
I have a question about whether or not I need to do locking to a pg
table being accessed from a php application.
Let's say two users select rows from the table and display them in
their browser.  User A clicks on row 1 to edit it.  Since this is php,
so far it just selects the current values from the row.  Eventually,
user A will submit his changes and update the row.
In the meantime, when User B looks at his web page, there will still
be an 'edit' link for row 1.  I'm pretty sure that I don't want User B
to try to edit the row, but as far as I understand the default postgres
locking will not prevent this.  When user A selects row 1, do I need to
manually lock the row to prevent another select?
I'm sorry if this is basic stuff!
You can't solve this problem with row locking. Since PHP is web-based 
you can't hold the connection open to the user's web-browser for longer 
than it takes to process one web-page.

So - you'll need one of:
1. Make sure all items are owned by only one user and only that user can 
edit them.
2. Have a user_locks table which your PHP code inserts to when user A 
starts to edit the object. Alternatively have a "locked_by" column on 
the target table and set that. You'll also need to write code to deal 
with the case when user A locks the object and goes to lunch.
3. Check the value(s) of the object before you update it. If it has 
changed since you started editing it, someone else has changed it - stop 
and tell the user. An auto-incrementing sequence can provide a useful 
single-value check for whether an object has changed. Another check 
would be OID, but make sure you understand its limitations.

I've used all three, and choosing between 2/3 will depend on which will 
make your users' lives easier.
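
Option 3's single-value check can be as simple as a version counter carried through the edit form; a minimal sketch, with illustrative table and column names:

```sql
-- Read the row, remembering its version along with the editable fields
SELECT data, version FROM items WHERE id = 1;

-- On save, update only if nobody bumped the version in the meantime;
-- an update count of 0 means the row changed and the user must retry
UPDATE items SET data = 'new value', version = version + 1
 WHERE id = 1 AND version = 7;
```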
--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] basic pg lock question

2005-02-02 Thread Tom Lane
"Rick Schumeyer" <[EMAIL PROTECTED]> writes:
> In the meantime, when User B looks at his web page, there will still
> be an 'edit' link for row 1.  I'm pretty sure that I don't want User B
> to try to edit the row, but as far as I understand the default postgres
> locking will not prevent this.  When user A selects row 1, do I need to
> manually lock the row to prevent another select?

You can, but it's usually considered bad practice to hold locks for long
enough for people to edit rows.  (What if they go off to lunch or
something while your app is holding the row locked?)

Better is to add some logic that checks when the user hits "update" to
see if the row has changed since you read it in.  If so, you can either
punt or try to merge the changes.  This requires a very short-term lock
(just long enough to avoid a race condition from simultaneous update
attempts) which you can grab with SELECT FOR UPDATE while reading the
current value of the row.
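
A sketch of that short-term lock pattern, with illustrative table and column names:

```sql
BEGIN;
-- Lock just this row for the instant of the check-and-write
SELECT data FROM items WHERE id = 1 FOR UPDATE;
-- Compare the value with what the user originally saw; if unchanged,
-- apply the edit, otherwise ROLLBACK and report the conflict
UPDATE items SET data = 'edited value' WHERE id = 1;
COMMIT;
```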

There are discussions of this issue in the mailing list archives.

regards, tom lane



Re: [GENERAL] pgpool 2.5b2 released

2005-02-02 Thread Bruce Momjian
Tatsuo Ishii wrote:
> Pgpool 2.5b2 supports "master slave mode" which can cope with
> master/slave replication softwares such as Slony-I. In this mode
> pgpool sends non SELECT queries to master only. SELECTs are load
> balanced by pgpool.
> 
> Other features of 2.5b2 include:
> - ability to add timestamp to each log entry
> - control to whether cache connection info or not
> 
> pgpool 2.5b2 is available at:
> http://pgfoundry.org/projects/pgpool/

Wow, that is great!  I know Jan was waiting for this.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] basic pg lock question

2005-02-02 Thread Scott Marlowe
On Wed, 2005-02-02 at 10:07, Rick Schumeyer wrote:
> I have a question about whether or not I need to do locking to a pg
> table being accessed from a php application.
> 
> Let's say two users select rows from the table and display them in
> their browser.  User A clicks on row 1 to edit it.  Since this is php,
> so far it just selects the current values from the row.  Eventually,
> user A will submit his changes and update the row.
> 
> In the meantime, when User B looks at his web page, there will still
> be an 'edit' link for row 1.  I'm pretty sure that I don't want User B
> to try to edit the row, but as far as I understand the default postgres
> locking will not prevent this.  When user A selects row 1, do I need to
> manually lock the row to prevent another select?
> 
> I'm sorry if this is basic stuff!

Contrary to popular belief, application level locking (what you'll have
to do here) is not basic stuff, and is prone to errors.  The two basic
models are 1:  Use a separate field to hold a lock key of some kind, and
time it out every so often to prevent permanently locked records because
User A went to lunch and forgot about his edit, or 2:  Conflict
resolution handled at checkin time.  

Method 2 often provides all the protection you need and is quite easy to
program.  You basically do something like:

test=# select data, md5(data) from locktest where id=1;

 data |   md5
------+----------------------------------
 abc  | 900150983cd24fb0d6963f7d28e17f72
(1 row)

And then when you insert it, you make sure the md5 sums match:

test=# update locktest set data='xyz' where id=1 and
md5(data)='900150983cd24fb0d6963f7d28e17f72';
UPDATE 1

Note that if someone had changed the data field underneath your app, you
would see this instead:

test=# update locktest set data='xyz' where id=1 and
md5(data)='900150983cd24fb0d6963f7d28e17f72';
UPDATE 0

So you can use pg_affected_rows to see if the change took place and
handle it in userland.



Re: [GENERAL] Invalid headers and xlog flush failures

2005-02-02 Thread Tom Lane
Bricklen Anderson <[EMAIL PROTECTED]> writes:
> Feb  1 11:17:50 dev94 postgres[4959]: [472-1] 2005-02-01 11:17:50 PST> ERROR: 
>  xlog flush request
> 972/FC932854 is not satisfied --- flushed only to 73/86D2640

Hmm, have you perhaps played any games with pg_resetxlog in this database?

I would have suggested that maybe this represented on-disk data
corruption, but the appearance of two different but not-too-far-apart
WAL offsets in two different pages suggests that indeed the end of WAL
was up around segment 972 or 973 at one time.  And now it's evidently
ending at 73.  Not good.  What file names do you see in pg_xlog/, and
what does pg_controldata show?

regards, tom lane



Re: [GENERAL] capturing/viewing sort_mem utilization on a per query basis

2005-02-02 Thread Lonni J Friedman
On Wed, 02 Feb 2005 11:17:59 -0500, Tom Lane <[EMAIL PROTECTED]> wrote:
> Lonni J Friedman <[EMAIL PROTECTED]> writes:
> > I've got a pesky J2EE based app that is using PostgreSQL-7.4.x on the
> > backend.  Lately i've been getting a disturbing large number of
> > PostgreSQL out of memory exceptions on the J2EE side when running
> > queries that have some huge joins in them.
> 
> Such an error on the client side would indicate that the query retrieved
> more rows than the client app had memory for.  Perhaps your problem is
> incorrectly-constrained joins that are returning way more rows than
> you expect?

You might be right, i'm honestly not sure. What we're seeing on the
client side is:
org.postgresql.util.PSQLException: ERROR: out of memory

Although, looking in the log postgresql is generating, I'm seeing the
following at the same time as that OOM above so it certainly looks
like the DB is barfing:
###
TopMemoryContext: 71448 total in 7 blocks; 22544 free (120 chunks); 48904 used
TopTransactionContext: 8192 total in 1 blocks; 8136 free (0 chunks); 56 used
DeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used
MessageContext: 1040384 total in 7 blocks; 395024 free (1021 chunks);
645360 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 1024 total in 1 blocks; 800 free (0 chunks); 224 used
ExecutorState: 2121752 total in 9 blocks; 538864 free (12 chunks); 1582888 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashBatchContext: 451796 total in 2 blocks; 260800 free (0 chunks); 190996 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashBatchContext: -1934683632 total in 295 blocks; 135760 free (879
chunks); -1934819392 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 24576 total in 2 blocks; 11128 free (4 chunks); 13448 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
CacheMemoryContext: 2088960 total in 8 blocks; 1595032 free (3190
chunks); 493928 used
relationship_pk: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
relation_type: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
relation_target: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
relation_origin: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
field_value_pk: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
field_class_idx: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
field_value_idx: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
f_val_fid_val_idx: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
report_param_pk: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
report_parm_report: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
projectmember_pk: 1024 

Re: [GENERAL] capturing/viewing sort_mem utilization on a per query basis

2005-02-02 Thread Tom Lane
Lonni J Friedman <[EMAIL PROTECTED]> writes:
> Although, looking in the log postgresql is generating, I'm seeing the
> following at the same time as that OOM above so it certainly looks
> like the DB is barfing:

OK, then it's a backend issue not a client-side issue.

> HashBatchContext: -1934683632 total in 295 blocks; 135760 free (879
> chunks); -1934819392 used

Hmm, looks like a hash join ran out of memory.  What PG version is this
again, and what do you have sort_mem set to?  Can you show an EXPLAIN
for the query that is failing like this?

regards, tom lane



Re: [GENERAL] Invalid headers and xlog flush failures

2005-02-02 Thread Bricklen Anderson
Tom Lane wrote:
Bricklen Anderson <[EMAIL PROTECTED]> writes:
Feb  1 11:17:50 dev94 postgres[4959]: [472-1] 2005-02-01 11:17:50 PST> ERROR:  
xlog flush request
972/FC932854 is not satisfied --- flushed only to 73/86D2640

Hmm, have you perhaps played any games with pg_resetxlog in this database?
I would have suggested that maybe this represented on-disk data
corruption, but the appearance of two different but not-too-far-apart
WAL offsets in two different pages suggests that indeed the end of WAL
was up around segment 972 or 973 at one time.  And now it's evidently
ending at 73.  Not good.  What file names do you see in pg_xlog/, and
what does pg_controldata show?
			regards, tom lane
Hi Tom,
Nope, never touched pg_resetxlog.
My pg_xlog list ranges from 000100730041 to 0001007300FE, with no breaks. 
There are also these: 00010074 to 00010074000B

$ pg_controldata
pg_control version number:74
Catalog version number:   200411041
Database system identifier:   4738750823096876774
Database cluster state:   in production
pg_control last modified: Wed 02 Feb 2005 12:38:22 AM PST
Current log file ID:  115
Next log file segment:66
Latest checkpoint location:   73/419A4BDC
Prior checkpoint location:73/419A4B80
Latest checkpoint's REDO location:73/419A4BDC
Latest checkpoint's UNDO location:0/0
Latest checkpoint's TimeLineID:   1
Latest checkpoint's NextXID:  4161807
Latest checkpoint's NextOID:  176864
Time of latest checkpoint:Wed 02 Feb 2005 12:38:22 AM PST
Database block size:  8192
Blocks per segment of large relation: 131072
Bytes per WAL segment:16777216
Maximum length of identifiers:64
Maximum number of function arguments: 32
Date/time type storage:   floating-point numbers
Maximum length of locale name:128
LC_COLLATE:   en_CA
LC_CTYPE: en_CA


Re: [GENERAL] capturing/viewing sort_mem utilization on a per query basis

2005-02-02 Thread Lonni J Friedman
On Wed, 02 Feb 2005 12:13:59 -0500, Tom Lane <[EMAIL PROTECTED]> wrote:
> Lonni J Friedman <[EMAIL PROTECTED]> writes:
> > Although, looking in the log postgresql is generating, I'm seeing the
> > following at the same time as that OOM above so it certainly looks
> > like the DB is barfing:
> 
> OK, then it's a backend issue not a client-side issue.
> 
> > HashBatchContext: -1934683632 total in 295 blocks; 135760 free (879
> > chunks); -1934819392 used
> 
> Hmm, looks like a hash join ran out of memory.  What PG version is this
> again, and what do you have sort_mem set to?  Can you show an EXPLAIN
> for the query that is failing like this?

This is PostgreSQL-7.4.6 running on RHES-3 (using the RPMs).
sort_mem=10

I've attached the explain for the query that is blowing up.  Thanks
for looking at this.


-- 
~
L. Friedman[EMAIL PROTECTED]
LlamaLand   http://netllama.linux-sxs.org


oom.out
Description: Binary data



[GENERAL] plpython.so

2005-02-02 Thread elein
On two different machines I've built pg 7.4 --with-python.

When I createlang plpythonu db I get this error:
createlang: language installation failed: ERROR:  could not load library 
"/usr/local/pgsql/lib/plpython.so": /usr/local/pgsql/lib/plpython.so: undefined 
symbol: PyDict_Copy

PyDict_Copy is a python standard call since forever.
nm shows it is not defined.
nm also shows some glib.c functions not defined.

build --with-python and createlang works fine on some other machines.  

What is the problem here?  Some python module I'm missing?

Help!

--elein
[EMAIL PROTECTED]





Re: [GENERAL] capturing/viewing sort_mem utilization on a per query basis

2005-02-02 Thread Tom Lane
Lonni J Friedman <[EMAIL PROTECTED]> writes:
> On Wed, 02 Feb 2005 12:13:59 -0500, Tom Lane <[EMAIL PROTECTED]> wrote:
>> Hmm, looks like a hash join ran out of memory.  What PG version is this
>> again, and what do you have sort_mem set to?  Can you show an EXPLAIN
>> for the query that is failing like this?

> I've attached the explain for the query that is blowing up.

One of the three Hash steps must be absorbing a whole lot more rows than
the planner is expecting, but all of them look like fairly
straightforward estimation situations:

   ->  Hash  (cost=108.96..108.96 rows=28 width=24)
 ->  Index Scan using mntr_subscr_usrevt on mntr_subscription 
sfmain_monitoringsubscriptio0  (cost=0.00..108.96 rows=28 width=24)
   Index Cond: (((user_id)::text = 'user1187'::text) AND 
((event_operation)::text = 'update'::text))

   ->  Hash  (cost=701.44..701.44 rows=3 width=24)
 ->  Seq Scan on field_value tracker_artifact_group0  
(cost=0.00..701.44 rows=3 width=24)

 ->  Hash  (cost=5.74..5.74 rows=1 width=80)
   ->  Index Scan using project_path on project 
tracker_artifact_extension_f1  (cost=0.00..5.74 rows=1 width=80)
 Index Cond: (("path")::text = 
'projects.meeting_broker_v3'::text)
 Filter: ((("path")::text = 
'projects.meeting_broker_v3'::text) OR (("path")::text ~~ 
'projects.meeting_broker_v3.%'::text))

Perhaps one of these tables hasn't been vacuumed/analyzed and is way
bigger than the planner thinks?  Can you check into how many rows
actually meet the identified conditions?
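
Concretely, comparing the estimates above against reality would mean counting the rows that satisfy each quoted condition (run against the live data; the seq scan's filter was not captured above, so its table's total size serves as a ceiling):

```sql
-- Planner expects 28 rows here:
SELECT count(*) FROM mntr_subscription
 WHERE user_id = 'user1187' AND event_operation = 'update';

-- Planner expects 3 rows from the field_value seq scan:
SELECT count(*) FROM field_value;

-- Planner expects 1 row here:
SELECT count(*) FROM project
 WHERE path = 'projects.meeting_broker_v3'
    OR path LIKE 'projects.meeting_broker_v3.%';
```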

regards, tom lane



[GENERAL] is this index bloat?

2005-02-02 Thread Patrick Hatcher




PG=7.4.5

I guess I never noticed this during vacuum verbose before, but is it common
for the index to be 2 to 3 times the number of rows in a table?  I've tried
reindexing and then dropping and re-adding them.  Still the same number of
rows.
Indexes are all btree

mdc_oz=# select count(*) from kst;
 count
---
   919
(1 row)

mdc_oz=# vacuum full verbose analyze  kst;
INFO:  vacuuming "public.kst"
INFO:  "kst": found 0 removable, 2757 nonremovable row versions in 64 pages
DETAIL:  1838 dead row versions cannot be removed yet.
Nonremovable row versions range from 141 to 235 bytes long.
There were 0 unused item pointers.
Total free space (including removable row versions) is 9112 bytes.
0 pages are or will become empty, including 0 at the end of the table.
8 pages containing 5328 free bytes are potential move destinations.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "xie1kst" now contains 2757 row versions in 9 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "xie2kst" now contains 2757 row versions in 9 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "kst_dpt_pk" now contains 2757 row versions in 9 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "kst": moved 0 row versions, truncated 64 to 64 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  analyzing "public.kst"
INFO:  "kst": 64 pages, 919 rows sampled, 919 estimated total rows
VACUUM
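
The "1838 dead row versions cannot be removed yet" line usually means some transaction old enough to still see those versions is open; with statistics collection enabled, a quick look (7.4-era view and column names assumed):

```sql
-- Long-running backends pin dead tuples and keep VACUUM from reclaiming them
SELECT procpid, usename, current_query, query_start
  FROM pg_stat_activity
 ORDER BY query_start;
```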


# - Free Space Map -

max_fsm_pages = 300 # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 500 # min 100, ~50 bytes each

TIA


Patrick Hatcher




Re: [GENERAL] capturing/viewing sort_mem utilization on a per query basis

2005-02-02 Thread Lonni J Friedman
On Wed, 02 Feb 2005 12:58:49 -0500, Tom Lane <[EMAIL PROTECTED]> wrote:
> Lonni J Friedman <[EMAIL PROTECTED]> writes:
> > On Wed, 02 Feb 2005 12:13:59 -0500, Tom Lane <[EMAIL PROTECTED]> wrote:
> >> Hmm, looks like a hash join ran out of memory.  What PG version is this
> >> again, and what do you have sort_mem set to?  Can you show an EXPLAIN
> >> for the query that is failing like this?
> 
> > I've attached the explain for the query that is blowing up.
> 
> One of the three Hash steps must be absorbing a whole lot more rows than
> the planner is expecting, but all of them look like fairly
> straightforward estimation situations:
> 
>->  Hash  (cost=108.96..108.96 rows=28 width=24)
>  ->  Index Scan using mntr_subscr_usrevt on mntr_subscription 
> sfmain_monitoringsubscriptio0  (cost=0.00..108.96 rows=28 width=24)
>Index Cond: (((user_id)::text = 'user1187'::text) AND 
> ((event_operation)::text = 'update'::text))
> 
>->  Hash  (cost=701.44..701.44 rows=3 width=24)
>  ->  Seq Scan on field_value tracker_artifact_group0  
> (cost=0.00..701.44 rows=3 width=24)
> 
>  ->  Hash  (cost=5.74..5.74 rows=1 width=80)
>->  Index Scan using project_path on project 
> tracker_artifact_extension_f1  (cost=0.00..5.74 rows=1 width=80)
>  Index Cond: (("path")::text = 
> 'projects.meeting_broker_v3'::text)
>  Filter: ((("path")::text = 
> 'projects.meeting_broker_v3'::text) OR (("path")::text ~~ 
> 'projects.meeting_broker_v3.%'::text))
> 
> Perhaps one of these tables hasn't been vacuumed/analyzed and is way

I'm doing a 'vacuumdb -a -z' every 15 minutes, and a full vacuum once/day.

> bigger than the planner thinks?  Can you check into how many rows
> actually meet the identified conditions?

I'm afraid i'm not clear on what i'm supposed to be checking here. 
Which conditions should I be looking at?  thanks.

-- 
~
L. Friedman[EMAIL PROTECTED]
LlamaLand   http://netllama.linux-sxs.org

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] PL/PgSQL, Inheritance, Locks, and Deadlocks

2005-02-02 Thread Thomas F . O'Connell
One thing that is curious, though, is that when the AccessShareLock is 
acquired by the stored procedure on an unrelated linking table, there 
is also an AccessShareLock acquired on the primary key of the groups 
table. The latter lock is understandable, but why would the procedure 
need any locks whatsoever on linking tables on which it has no direct 
effect (either reading or writing)?

-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Feb 2, 2005, at 9:41 AM, Tom Lane wrote:
"Thomas F.O'Connell" <[EMAIL PROTECTED]> writes:
The linking table is a pure linking table. It has a user_id and a
group_id, each a foreign key. The user_id ties to the appropriate
subclass user table. The group_id ties to the groups table, which is
not part of an inheritance hierarchy. A multicolumn primary key covers
both foreign keys in the linking table, and the secondary column of 
the
key also has its own index.
Inserts/updates in a table that has a foreign key result in locks on 
the
referenced rows in the master table.  Could this explain your problem?

			regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] capturing/viewing sort_mem utilization on a per query basis

2005-02-02 Thread Tom Lane
Lonni J Friedman <[EMAIL PROTECTED]> writes:
> I'm afraid i'm not clear on what i'm supposed to be checking here. 
> Which conditions should I be looking at?  thanks.

Well, for instance,

>> ->  Hash  (cost=108.96..108.96 rows=28 width=24)
>> ->  Index Scan using mntr_subscr_usrevt on mntr_subscription 
>> sfmain_monitoringsubscriptio0  (cost=0.00..108.96 rows=28 width=24)
>> Index Cond: (((user_id)::text = 'user1187'::text) AND 
>> ((event_operation)::text = 'update'::text))

select count(*) from mntr_subscription where (((user_id)::text =
'user1187'::text) AND ((event_operation)::text = 'update'::text));

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Invalid headers and xlog flush failures

2005-02-02 Thread Tom Lane
Bricklen Anderson <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> I would have suggested that maybe this represented on-disk data
>> corruption, but the appearance of two different but not-too-far-apart
>> WAL offsets in two different pages suggests that indeed the end of WAL
>> was up around segment 972 or 973 at one time.

> Nope, never touched pg_resetxlog.
> My pg_xlog list ranges from 000100730041 to 
> 0001007300FE, with no breaks. 
> There are also these: 00010074 to 00010074000B

That seems like rather a lot of files; do you have checkpoint_segments
set to a large value, like 100?  The pg_controldata dump shows that the
latest checkpoint record is in the 73/41 file, so presumably the active
end of WAL isn't exceedingly far past that.  You've got 200 segments
prepared for future activity, which is a bit over the top IMHO.

But anyway, the evidence seems pretty clear that in fact end of WAL is
in the 73 range, and so those page LSNs with 972 and 973 have to be
bogus.  I'm back to thinking about dropped bits in RAM or on disk.
IIRC these numbers are all hex, so the extra "9" could come from just
two bits getting turned on that should not be.  Might be time to run
memtest86 and/or badblocks.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] plpython.so

2005-02-02 Thread Tom Lane
[EMAIL PROTECTED] (elein) writes:
> When I createlang plpythonu db I get this error:
> createlang: language installation failed: ERROR:  could not load library 
> "/usr/local/pgsql/lib/plpython.so": /usr/local/pgsql/lib/plpython.so: 
> undefined symbol: PyDict_Copy

> build --with-python and createlang works fine on some other machines.  

What are the exact platforms on which it works or doesn't work for you?
Are they exactly the same Python versions?

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] is this index bloat?

2005-02-02 Thread Tom Lane
Patrick Hatcher <[EMAIL PROTECTED]> writes:
> I guess I never noticed this during vacuum verbose before, but is it common
> for the index to be 2 to 3 times the number of rows in a table?

Hm?  Your vacuum output shows exactly 2757 rows in the table and in each
index, so I'm unclear what you are looking at.

regards, tom lane
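For anyone comparing table and index sizes on this era of PostgreSQL, pg_class keeps the page and row-count estimates that VACUUM/ANALYZE maintain; a sketch of the check, using the "kst" table from the VACUUM output above:

```sql
-- Pages and estimated rows for a table and each of its indexes.
-- relpages/reltuples are maintained by VACUUM and ANALYZE.
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname = 'kst'
UNION ALL
SELECT c2.relname, c2.relpages, c2.reltuples
FROM pg_class c
JOIN pg_index i ON i.indrelid = c.oid
JOIN pg_class c2 ON c2.oid = i.indexrelid
WHERE c.relname = 'kst';
```

An index several times larger than its table, measured in pages, is the usual sign of real index bloat; matching row counts between table and index (as in the VACUUM output above) are normal.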

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] capturing/viewing sort_mem utilization on a per query basis

2005-02-02 Thread Lonni J Friedman
On Wed, 02 Feb 2005 13:23:11 -0500, Tom Lane <[EMAIL PROTECTED]> wrote:
> Lonni J Friedman <[EMAIL PROTECTED]> writes:
> > I'm afraid i'm not clear on what i'm supposed to be checking here.
> > Which conditions should I be looking at?  thanks.
> 
> Well, for instance,
> 
> >> ->  Hash  (cost=108.96..108.96 rows=28 width=24)
> >> ->  Index Scan using mntr_subscr_usrevt on mntr_subscription 
> >> sfmain_monitoringsubscriptio0  (cost=0.00..108.96 rows=28 width=24)
> >> Index Cond: (((user_id)::text = 'user1187'::text) AND 
> >> ((event_operation)::text = 'update'::text))
> 
> select count(*) from mntr_subscription where (((user_id)::text =
> 'user1187'::text) AND ((event_operation)::text = 'update'::text));

OK, i think i see what you're requesting.  How's this:

select count(*) from mntr_subscription where (((user_id)::text =
'user1187'::text) AND ((event_operation)::text = 'update'::text));
 count 
---
 9
(1 row)

select count(*) from project where (("path")::text =
'projects.meeting_broker_v3'::text);
 count 
---
 1
(1 row)

select count(*) from project where ((("path")::text =
'projects.meeting_broker_v3'::text) OR (("path")::text ~~
'projects.meeting_broker_v3.%'::text));
 count 
---
 1
(1 row)

Let me know if you need more.  thanks

-- 
~
L. Friedman[EMAIL PROTECTED]
LlamaLand   http://netllama.linux-sxs.org

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] plpython.so

2005-02-02 Thread elein
Debian--works (mine)
Fedora--doesn't work (not mine)
RedHat--doesn't work (not mine)

Is it possible that there is a problem with glibc?
If so, what does one do about that?

--elein

On Wed, Feb 02, 2005 at 01:25:03PM -0500, Tom Lane wrote:
> [EMAIL PROTECTED] (elein) writes:
> > When I createlang plpythonu db I get this error:
> > createlang: language installation failed: ERROR:  could not load library 
> > "/usr/local/pgsql/lib/plpython.so": /usr/local/pgsql/lib/plpython.so: 
> > undefined symbol: PyDict_Copy
> 
> > build --with-python and createlang works fine on some other machines.  
> 
> What are the exact platforms on which it works or doesn't work for you?
> Are they exactly the same Python versions?
> 
>   regards, tom lane
> 
> ---(end of broadcast)---
> TIP 8: explain analyze is your friend
> 

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Invalid headers and xlog flush failures

2005-02-02 Thread Bricklen Anderson
Tom Lane wrote:
Bricklen Anderson <[EMAIL PROTECTED]> writes:
Tom Lane wrote:
I would have suggested that maybe this represented on-disk data
corruption, but the appearance of two different but not-too-far-apart
WAL offsets in two different pages suggests that indeed the end of WAL
was up around segment 972 or 973 at one time.

Nope, never touched pg_resetxlog.
My pg_xlog list ranges from 000100730041 to 0001007300FE, with no breaks. 
There are also these: 00010074 to 00010074000B

That seems like rather a lot of files; do you have checkpoint_segments
set to a large value, like 100?  The pg_controldata dump shows that the
latest checkpoint record is in the 73/41 file, so presumably the active
end of WAL isn't exceedingly far past that.  You've got 200 segments
prepared for future activity, which is a bit over the top IMHO.
But anyway, the evidence seems pretty clear that in fact end of WAL is
in the 73 range, and so those page LSNs with 972 and 973 have to be
bogus.  I'm back to thinking about dropped bits in RAM or on disk.
IIRC these numbers are all hex, so the extra "9" could come from just
two bits getting turned on that should not be.  Might be time to run
memtest86 and/or badblocks.
			regards, tom lane
Yes, checkpoint_segments is set to 100, although I can set that lower if you feel that that is more 
appropriate. Currently, the system receives around 5-8 million inserts per day (across 3 primary 
tables), so I was leaning towards the "more is better" philosophy.

We ran e2fsck with badblocks option last week and didn't turn anything up, along with a couple of 
passes with memtest. I will run a full-scale memtest and post any interesting results.

I've also read that kill -9 postmaster is "not a good thing". I honestly can't vouch for whether or 
not this may or may not have occurred around the time of the initial creation of this database. It's 
possible, since this db started its life as a development db at 8r3 then was bumped to 8r5, then on
to 8 final where it has become a dev-final db.

Assuming that the memtest passes cleanly, as does another run of badblocks, do you have any more 
suggestions on how I should proceed? Should I run for a while with zero_damaged_pages set to true 
and accept the data loss, or just recreate the whole db from scratch?

Thanks again for your help.
Cheers,
Bricklen
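For the record, the zero_damaged_pages route mentioned above looks roughly like this (it is a superuser-only setting, and the table name is a placeholder; dump whatever you can first, since zeroed pages are gone for good):

```sql
-- Session-local: only commands run in this session zero out bad pages.
SET zero_damaged_pages = on;
VACUUM FULL some_damaged_table;  -- reads every page, zeroing unreadable ones
RESET zero_damaged_pages;
```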
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [GENERAL] capturing/viewing sort_mem utilization on a per query basis

2005-02-02 Thread Tom Lane
Lonni J Friedman <[EMAIL PROTECTED]> writes:
> OK, i think i see what you're requesting.  How's this:

What about field_value, does it in fact have circa 3 rows?

These tables are surely nowhere near big enough to bloat a hash table to
upwards of 2Gb, as your log shows happened.  I'm thinking that you must
have found some sort of bug, or memory leak maybe.  Is there a way I can
get access to your data set to trace through the behavior with a debugger?

(We should probably take the discussion offlist at this point.)

regards, tom lane
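As the list footer keeps saying, EXPLAIN ANALYZE is the tool for this estimate-versus-reality check: it actually runs the query and prints real row counts next to the planner's guesses, e.g. for the hash input discussed above:

```sql
-- "rows=" inside the cost parentheses is the planner's estimate;
-- "actual time=... rows=..." is what really happened at run time.
EXPLAIN ANALYZE
SELECT count(*)
FROM mntr_subscription
WHERE user_id = 'user1187' AND event_operation = 'update';
```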

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] plpython.so

2005-02-02 Thread Tom Lane
[EMAIL PROTECTED] (elein) writes:
> Debian--works (mine)
> Fedora--doesn't work (not mine)
> RedHat--doesn't work (not mine)

[ blink ]  plpython works for me on Fedora Core 3, using
python-2.3.4-11.  Anyone else see a problem with it?

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] Problems with filter on timestamp

2005-02-02 Thread Alex Turner
I have a database that logs website requests, and I'm trying to get
all unique visitors within the last 1 minute, 5 minutes and 10
minutes.  I have a table that I have the schema for below that includes
a field remote_ip.

When I perform the query with the filter, and the query without, the
results are the same:

dblogger=# select count(distinct(remote_ip)) from weblog_entry where
request_time>(current_time-interval '1 minute');
 count 
---
   313
(1 row)

dblogger=# select count(distinct(remote_ip)) from weblog_entry;
 count 
---
   316
(1 row)

dblogger=# explain select count(distinct(remote_ip)) from weblog_entry
where request_time>(current_time-interval '1 minute');
   QUERY PLAN
-
 Aggregate  (cost=951.98..951.98 rows=1 width=40)
   ->  Seq Scan on weblog_entry  (cost=0.00..940.85 rows=4452 width=40)
 Filter: ((request_time)::text > ((('now'::text)::time(6) with
time zone - '00:01:00'::interval))::text)
(3 rows)

This explain plan seems to me to be saying that it's casting the
timestamp to a text type before comparing with my
current_time-interval.

If I cast request_time directly to a time(6), then it works:

dblogger=# explain select count(distinct(remote_ip)) from weblog_entry
where request_time::time(6)>(current_time-interval '1 minute');

QUERY PLAN
-
 Aggregate  (cost=1177.00..1177.00 rows=1 width=40)
   ->  Seq Scan on weblog_entry  (cost=0.00..1165.46 rows=4612 width=40)
 Filter: (((request_time)::time(6) without time zone)::time
with time zone > (('now'::text)::time(6) with time zone -
'00:01:00'::interval))
(3 rows)

dblogger=# select count(distinct(remote_ip)) from weblog_entry where
request_time::time(6)>(current_time-interval '1 minute');
 count 
---
18
(1 row)

Is this the right behaviour?

I'm using Postgresql 8.0.1 on AMD64 compiled from source with -m64
-march=k8 -O3 on SuSe 9.2

Alex Turner
NetEconomist

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] plpython.so

2005-02-02 Thread elein
Fedora as listed below is really RedHat Enterprise Linux ES 2.1
I'm not sure about the second failing system.

--elein

On Wed, Feb 02, 2005 at 10:32:06AM -0800, elein wrote:
> Debian--works (mine)
> Fedora--doesn't work (not mine)
> RedHat--doesn't work (not mine)
> 
> Is it possible that there is a problem with glibc?
> If so, what does one do about that?
> 
> --elein
> 
> On Wed, Feb 02, 2005 at 01:25:03PM -0500, Tom Lane wrote:
> > [EMAIL PROTECTED] (elein) writes:
> > > When I createlang plpythonu db I get this error:
> > > createlang: language installation failed: ERROR:  could not load library 
> > > "/usr/local/pgsql/lib/plpython.so": /usr/local/pgsql/lib/plpython.so: 
> > > undefined symbol: PyDict_Copy
> > 
> > > build --with-python and createlang works fine on some other machines.  
> > 
> > What are the exact platforms on which it works or doesn't work for you?
> > Are they exactly the same Python versions?
> > 
> > regards, tom lane
> > 
> > ---(end of broadcast)---
> > TIP 8: explain analyze is your friend
> > 
> 
> ---(end of broadcast)---
> TIP 8: explain analyze is your friend
> 

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] PostgreSQL Security Release

2005-02-02 Thread Andrey V. Semyonov
Marc G. Fournier wrote:
In order to address a potential security hole recently identified with 
the "LOAD" option, the PostgreSQL Global Development Group is 
announcing the release of new versions of PostgreSQL going back to the 
7.2.x version.

As always, these releases are available on all mirrors...
I have never used PostgreSQL before the 8.0.0 release, so please help me 
understand: does the development team provide quick patch sets for 
patching a source tree from 8.0.0 to 8.0.1, for example? Or do I need to 
download the whole 8.0.1 tarball from scratch?

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] PostgreSQL Security Release

2005-02-02 Thread Marc G. Fournier
On Wed, 2 Feb 2005, Andrey V. Semyonov wrote:
Marc G. Fournier wrote:
In order to address a potential security hole recently identified with the 
"LOAD" option, the PostgreSQL Global Development Group is announcing the 
release of new versions of PostgreSQL going back to the 7.2.x version.

As always, these releases are available on all mirrors...
I have never used PGSQL before 8.0.0 Release. So, make me understand such a 
thing: do the developers team make any quick-patch sets for patching source 
tree from 8.0.0 to 8.0.1 for example? Or do I need to download the whole 
tarball of 8.0.1 from scratch?
You need to download the whole tarball ...

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] PostgreSQL Security Release

2005-02-02 Thread Geoffrey
Marc G. Fournier wrote:
In order to address a potential security hole recently identified with 
the "LOAD" option, the PostgreSQL Global Development Group is announcing 
the release of new versions of PostgreSQL going back to the 7.2.x version.

As always, these releases are available on all mirrors, as listed at:
http://wwwmaster.postgresql.org/download/mirrors-ftp
Anyone know what's up with the mirrors?
"We could not query the database or no mirrors could be found!"
Is what I get.
--
Until later, Geoffrey
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] plpython.so

2005-02-02 Thread Tom Lane
[EMAIL PROTECTED] (elein) writes:
> Fedora as listed below is really RedHat Enterprise Linux ES 2.1

ES 2.1 is pretty old; I'd not be at all surprised if it has a very
obsolete python version.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Problems with filter on timestamp

2005-02-02 Thread Tom Lane
Alex Turner <[EMAIL PROTECTED]> writes:
>->  Seq Scan on weblog_entry  (cost=0.00..940.85 rows=4452 width=40)
>  Filter: ((request_time)::text > ((('now'::text)::time(6) with
> time zone - '00:01:00'::interval))::text)

> This explain plan seems to me to be saying that it's casting the
> timestamp to a text type before comparing with my
> current_time-interval.

Yup.

> Is this the right behaviour?

Hard to say.  You did not say what the datatype of request_time is.

The whole thing looks a bit bogus though --- isn't it going to fail near
midnight due to wraparound?  Seems like you should be using timestamps
not times of day.

regards, tom lane
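Assuming request_time is (or becomes) a timestamp column, the wraparound-safe form of the filter compares against current_timestamp rather than current_time:

```sql
SELECT count(DISTINCT remote_ip)
FROM weblog_entry
WHERE request_time > current_timestamp - interval '1 minute';
```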

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL Security Release

2005-02-02 Thread Marc G. Fournier
On Wed, 2 Feb 2005, Geoffrey wrote:
Marc G. Fournier wrote:
In order to address a potential security hole recently identified with the 
"LOAD" option, the PostgreSQL Global Development Group is announcing the 
release of new versions of PostgreSQL going back to the 7.2.x version.

As always, these releases are available on all mirrors, as listed at:
http://wwwmaster.postgresql.org/download/mirrors-ftp
Anyone know what's up with the mirrors?
"We could not query the database or no mirrors could be found!"
Is what I get.
Should be fixed now ...

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] plpython.so

2005-02-02 Thread elein
It did have an old python and we updated it to 
python 2.1 and the problem still occurs.

Is there a documented minimum python version required?

--elein

On Wed, Feb 02, 2005 at 04:16:03PM -0500, Tom Lane wrote:
> [EMAIL PROTECTED] (elein) writes:
> > Fedora as listed below is really RedHat Enterprise Linux ES 2.1
> 
> ES 2.1 is pretty old; I'd not be at all surprised if it has a very
> obsolete python version.
> 
>   regards, tom lane
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
> 

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] delphi -> postgresql database connect ???

2005-02-02 Thread Daniel Schuchardt
Burak B&B wrote:
hi,
delphi to postgresql database connect commands and source code free 
download . Please.
Thanks.

Burak BİTİKÇİ

---(end of broadcast)---
TIP 8: explain analyze is your friend
The only free option is
www.zeoslib.net
or
ODBC and BDE
or OLEDB and ADO (never tried it; I don't like Delphi with ADO)
Daniel.
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] plpython.so

2005-02-02 Thread Scott Marlowe
On Wed, 2005-02-02 at 16:26, elein wrote:
> It did have an old python and we updated it to 
> python 2.1 and the problem still occurs.
> 
> Is there a documented minimum python version required?
> 
> --elein
> 
> On Wed, Feb 02, 2005 at 04:16:03PM -0500, Tom Lane wrote:
> > [EMAIL PROTECTED] (elein) writes:
> > > Fedora as listed below is really RedHat Enterprise Linux ES 2.1
> > 
> > ES 2.1 is pretty old; I'd not be at all surprised if it has a very
> > obsolete python version.


Are you sure you're not missing some -devel library or something like
that?

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] PostgreSQL Security Release

2005-02-02 Thread Geoffrey
Marc G. Fournier wrote:
On Wed, 2 Feb 2005, Geoffrey wrote:
Marc G. Fournier wrote:
In order to address a potential security hole recently identified 
with the "LOAD" option, the PostgreSQL Global Development Group is 
announcing the release of new versions of PostgreSQL going back to 
the 7.2.x version.

As always, these releases are available on all mirrors, as listed at:
http://wwwmaster.postgresql.org/download/mirrors-ftp

Anyone know what's up with the mirrors?
"We could not query the database or no mirrors could be found!"
Is what I get.

Should be fixed now ...
Yes, thank you.
--
Until later, Geoffrey
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] plpython.so

2005-02-02 Thread elein
Installation of a new version of python (2.1) did the trick.
The apparent failure of 2.1 was because 1.x remains on the
system and I had to convince configure to ignore it.
(PYTHON=python2)

Thank you all.

--elein

On Wed, Feb 02, 2005 at 05:01:21PM -0600, Scott Marlowe wrote:
> On Wed, 2005-02-02 at 16:26, elein wrote:
> > It did have an old python and we updated it to 
> > python 2.1 and the problem still occurs.
> > 
> > Is there a documented minimum python version required?
> > 
> > --elein
> > 
> > On Wed, Feb 02, 2005 at 04:16:03PM -0500, Tom Lane wrote:
> > > [EMAIL PROTECTED] (elein) writes:
> > > > Fedora as listed below is really RedHat Enterprise Linux ES 2.1
> > > 
> > > ES 2.1 is pretty old; I'd not be at all surprised if it has a very
> > > obsolete python version.
> 
> 
> Are you sure you're not missing some -devel library or something like
> that?
> 
> ---(end of broadcast)---
> TIP 9: the planner will ignore your desire to choose an index scan if your
>   joining column's datatypes do not match
> 

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] multidimensional arrays

2005-02-02 Thread mstory


I'm working with a two-dimensional array column, INTEGER[3][], that holds data I need
to parse through to check for zeros.  The user inserts into this array one row at a
time.  When I insert the first row into this array it works fine, but when I check
array_dims it gives me [1:3], not the [1:3][1:1] that I expected.  Also, array_lower
and array_upper for the second dimension both return NULL.  I checked the
documentation and it says that multidimensional arrays act strangely, but nothing
I've read would give me the impression that something like this should have
happened.  Any help would be appreciated.

matt

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Problems with filter on timestamp

2005-02-02 Thread Alex Turner
Yes - I am a complete idiot:

The query is indeed completely wrong, it should be current_timestamp
not current_time.  I finaly figured this out after staring at the
screen for twenty minutes trying to figure out what was going wrong.

DOH!

Alex Turner
NetEconomist


On Wed, 02 Feb 2005 16:14:58 -0500, Tom Lane <[EMAIL PROTECTED]> wrote:
> Alex Turner <[EMAIL PROTECTED]> writes:
> >->  Seq Scan on weblog_entry  (cost=0.00..940.85 rows=4452 width=40)
> >  Filter: ((request_time)::text > ((('now'::text)::time(6) with
> > time zone - '00:01:00'::interval))::text)
> 
> > This explain plan seems to me to be saying that it's casting the
> > timestamp to a text type before comparing with my
> > current_time-interval.
> 
> Yup.
> 
> > Is this the right behaviour?
> 
> Hard to say.  You did not say what the datatype of request_time is.
> 
> The whole thing looks a bit bogus though --- isn't it going to fail near
> midnight due to wraparound?  Seems like you should be using timestamps
> not times of day.
> 
> regards, tom lane
> 
>

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] multidimensional arrays

2005-02-02 Thread Tom Lane
[EMAIL PROTECTED] writes:
> I'm writing a 2 dimensional array INTEGER[3][] that holds data that i
> need to parse through to check for zeros.  The user will insert into
> this array one row at a time, when i insert the first row into this
> array it works fine but when i check array_dims it gives me [1:3] not
> the [1:3][1:1] that i expected,

You assigned a one-dimensional value to the array, not a two-dimensional
value.

regards, tom lane
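To illustrate the point: a one-dimensional value assigned to the column stays one-dimensional, no matter how the column was declared; to get two dimensions the value itself must be nested (note that array_dims lists the outer dimension first):

```sql
CREATE TABLE t (a INTEGER[3][]);

INSERT INTO t VALUES ('{1,2,3}');    -- 1-D value: array_dims(a) is [1:3]
INSERT INTO t VALUES ('{{1,2,3}}');  -- 2-D value: array_dims(a) is [1:1][1:3]

-- Appending another "row" keeps it two-dimensional: [1:2][1:3]
UPDATE t SET a = a || '{{4,5,6}}'::integer[]
WHERE array_dims(a) = '[1:1][1:3]';
```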

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Scanning the PGSQL DB

2005-02-02 Thread Anil
Thanks for the reply.
I can do a select of some basic tables, but I was looking for some
scanner sort of thing.
I am not running with the disabled sync.


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] postgresql.conf - add_missing_from

2005-02-02 Thread Niederland
Yes I removed the comment...

Tail end of postgresql.conf..

#---
# VERSION/PLATFORM COMPATIBILITY
#---

# - Previous Postgres Versions -

# do not allow the database engine to change the from clause
add_missing_from = false
#regex_flavor = advanced# advanced, extended, or basic
#sql_inheritance = true
#default_with_oids = true
# - Other Platforms & Clients -

#transform_null_equals = false
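For reference, what the setting changes: with add_missing_from = false, a query that references a table not listed in its FROM clause raises an error instead of being silently rewritten (table names here are made up):

```sql
-- With add_missing_from = off this fails with
--   ERROR:  missing FROM-clause entry for table "tb_b"
-- instead of the engine adding tb_b to the FROM list for you.
SELECT tb_b.some_col FROM tb_a;
```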


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] Introducing the future Debian multiversion/multicluster architecture

2005-02-02 Thread Martin Pitt
Hi!

As an ultra-short introduction, I am the primary Debian developer for
PostgreSQL packages. 

Our current packages became pretty hard to maintain; in particular
automatic upgrades to new major versions have always been a pain and
fail very often. There are also many requests for supporting more than
one cluster.

Thus we are currently developing a completely new packaging system and
architecture for future PostgreSQL packages which allows managing
arbitrarily many clusters (also of different versions) in parallel.
This makes it possible to do cluster upgrades in a running system,
independently of package upgrades, and will generally be much more
robust and useful for web hosters as well.

I have heard that some guys of other distributions want to do
something similar in the future, so I was asked to publish the general
ideas for public review and discussion:

  http://people.debian.org/~mpitt/postgresql-ng.html

Thanks in advance for any thoughts,

Martin

(Please respect Mail-Followup-To or keep me CC'ed, I'm not
subscribed).

-- 
Martin Pitt   http://www.piware.de
Ubuntu Developerhttp://www.ubuntulinux.org
Debian GNU/Linux Developer   http://www.debian.org




Re: [GENERAL] change table to view problem

2005-02-02 Thread Sim Zacks
Actually it's only the views that have this problem. The functions have kept
the original table name, which is now the name of the view.



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] change table to view problem

2005-02-02 Thread Sim Zacks
I merged 2 tables into 1 table and created a view  for each of the tables so
as not to break existing applications. I renamed the old tables with the
_old suffix in case there was a problem so I could verify against the old
data.

The problem is that all the views and functions switched the name of the
table to the renamed table and do not use the new views I have created. Is
there a recommended way of updating all the views and functions that  use
the table to start using the view?

Thank You
Sim



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] [OT] PostgreSQL and Namo's WebEditor

2005-02-02 Thread Andrew L. Gould
WebEditor has support for PHP and several databases, including support 
for ODBC and JDBC.  Has anyone used it with PostgreSQL via ODBC?  If 
so, do you recommend it?

WebEditor is a Windows product; and current versions do not support 
PostgreSQL natively.  Now that PostgreSQL is available for Windows, I 
have sent a request to Namo that they add PostgreSQL support to 
WebEditor.

Thanks,

Andrew

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] NewsServer down ? (PostgreSql.org: trusted sender for your account)

2005-02-02 Thread Andreas Duffner
> > There should exist a news-server at: news://news.postgresql.org/
> > but I can't connect to it.
> > It seems it does not exist.
> > Even the name can't get resolved.

> Just connected to it from here ... is this still proving to be a problem 
> be a problem 
> for you?

Oh, NOW I start thinking. :-(
I think my corporation blocks all but http-traffic.
Sorry, just did not think.

Thanks anyway,
Andreas


---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] modifying views

2005-02-02 Thread Sim Zacks
I read the following thread from Nov 2002 on the impossibilities of
modifying a view and I was wondering if anything had changed in this regard
since then?
http://archives.postgresql.org/pgsql-hackers/2002-11/msg00609.php

Basically I want to remove a column from a table. The column is used in a
view. The view, but not the column that I want to remove, is used in 24
other views. A number of those views are also used in other views...

I can't remove the columns from the view without dropping over 100 other
views, removing the columns from this view and recreating them. I was able
to remove the columns from the table by modifying the view so that instead of
"table1.field1" it has "null::varchar AS field1".
The problem is that the field is still in the view and I don't want it there
(obviously).

I read somewhere that a possible solution would be to do a pg_dump, manually
change the text file and then do a pg_restore. Unfortunately, that means
taking the system offline, which I can't do.

Even adding a column to a view isn't allowed. So if I want a new field in my
table to be accessible in the same manner that the view would be accessible
in, I would have to go through the same process of dropping 100 views,
making my change and then recreating 100 views. Talk about inefficient.

Are there any realistic methods for changing the number of columns in a
view? Is it possible to manually modify the backend? I looked at the
pg_views view SQL and it seemed to work completely internally.
The definition comes from the call "pg_get_viewdef(c.oid) AS
definition", and the pg_get_viewdef(oid) function just calls
'pg_get_viewdef' with language 'internal' (as shown below):

CREATE OR REPLACE FUNCTION pg_get_viewdef(oid)
  RETURNS text AS
'pg_get_viewdef'
  LANGUAGE 'internal' STABLE STRICT;
GRANT EXECUTE ON FUNCTION pg_get_viewdef(oid) TO public;
COMMENT ON FUNCTION pg_get_viewdef(oid) IS 'select statement of a view';

I could use any help that you can give me.
Thanks
Sim
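
For readers hitting the same wall, the null-placeholder workaround described
above looks roughly like this (a sketch with made-up table/view names;
CREATE OR REPLACE VIEW keeps the column list unchanged, which is what lets
the dependent views survive):

```sql
-- Hypothetical schema: table1(field1 varchar, field2 integer),
-- with a view "someview" that other views depend on.
BEGIN;

-- Keep the view's column list intact, but replace the reference to
-- the doomed column with a typed NULL placeholder.
CREATE OR REPLACE VIEW someview AS
    SELECT null::varchar AS field1, field2
    FROM table1;

-- The view no longer depends on field1, so the table column can go.
ALTER TABLE table1 DROP COLUMN field1;

COMMIT;
```

The leftover field1 column in the view is exactly the complaint in the
message: CREATE OR REPLACE VIEW will not let that column be removed later.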



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] pgpool 2.5b2 released

2005-02-02 Thread Julian Scarfe
From: "Tatsuo Ishii" <[EMAIL PROTECTED]>


> Pgpool 2.5b2 supports "master slave mode" which can cope with
> master/slave replication softwares such as Slony-I. In this mode
> pgpool sends non SELECT queries to master only. SELECTs are load
> balanced by pgpool.

Sounds good!

Does it attempt any interaction with Slony when it detects a failure of the
master?  It would seem a pity to have pgpool watching the pair to detect
failure but having to have a separate watcher process to tell Slony to
failover.

Julian Scarfe



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] Problem with the sequence

2005-02-02 Thread sid tow
HI
 
 I have a problem locating the documentation for sequences. I want detailed information about the columns present in a sequence table, i.e. when I do:
 
psql=# select * from foo_seq; 
 sequence_name | last_value | increment_by |      max_value      | min_value | cache_value | log_cnt | is_cycled | is_called
---------------+------------+--------------+---------------------+-----------+-------------+---------+-----------+-----------
 foo_seq       |         11 |            1 | 9223372036854775807 |         1 |           1 |       0 | f         | t
(1 row)
Where do I find documentation explaining what these columns specify? I have already checked the man page for CREATE SEQUENCE and did not find much there.
Can somebody give me a link to such information?
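
Not an authoritative answer, but as a quick sketch: a sequence behaves like a
one-row table, so its state columns can be inspected directly, and the main
ones relate to nextval()/setval() roughly as follows (meanings paraphrased
from the CREATE SEQUENCE parameters, so double-check against the docs):

```sql
CREATE SEQUENCE foo_seq;

SELECT nextval('foo_seq');  -- advances the sequence and sets is_called

SELECT last_value, increment_by, is_called FROM foo_seq;
-- last_value:   value most recently handed out (or the start value,
--               if is_called is still false)
-- increment_by: step added by each nextval() call
-- is_called:    whether nextval() has run since the last setval()
-- min_value / max_value: bounds; is_cycled: wrap around at the bounds
-- cache_value:  values preallocated per backend; log_cnt is internal
```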

[GENERAL] psql question on echo output

2005-02-02 Thread John DeSoi
I have output set to go to a file with \o. Now I want to process a file 
with \i. With ECHO set to all, I would like the statement to be echoed 
in the file before the output, but instead it comes to the screen 
(standard output). Is there some setting or trick I missed to 
accomplish this? I realize I can get all output in the file with 
something like psql -f file > out, but I want to use the \i option so 
the connection state and psql settings can be persisted across 
different file executions.

Thanks,
John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[GENERAL] SQL query question

2005-02-02 Thread Uwe C. Schroeder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


Maybe it's too late for me to think correctly (actually I'm sure of that). I'm 
going to ask anyway.
I have a table like

id int4
user_id int4
photo varchar
image_type char(1)

where image_type is either G or X
What I want to do is have ONE query that gives me the count of images of each 
type per user_id.
So if user 3 has 5 photos of type G and 3 photos of type X 
I basically want to have a result 5,3
It's got to be possible to write a query like that, but somehow it eludes me 
tonight.

Any pointers are greatly appreciated.

UC

- --
Open Source Solutions 4U, LLC   2570 Fleetwood Drive
Phone:  +1 650 872 2425 San Bruno, CA 94066
Cell:   +1 650 302 2405 United States
Fax:+1 650 872 2417
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.3 (GNU/Linux)

iD8DBQFCAdOMjqGXBvRToM4RApgvAJsEUsdl6hrVGqRwJ+NI7JrqQqQ5GgCgkTQN
pavTkx47QUb9nr7XO/r/v5k=
=B3DH
-END PGP SIGNATURE-
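
A conditional-aggregation sketch for the question above (the table name
"photos" is assumed, since the original message doesn't name the table):

```sql
SELECT user_id,
       SUM(CASE WHEN image_type = 'G' THEN 1 ELSE 0 END) AS g_count,
       SUM(CASE WHEN image_type = 'X' THEN 1 ELSE 0 END) AS x_count
FROM photos
GROUP BY user_id;
-- For user 3 with 5 'G' photos and 3 'X' photos,
-- that user's row would read: 3 | 5 | 3
```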


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] how to release a transaction lock on a table?

2005-02-02 Thread Michael Fuhr
On Tue, Feb 01, 2005 at 02:27:37PM -0800, Si Chen wrote:

> You are right.  The transactions are idle--when I do a "ps auxw" on my 
> database server, I see "idle in transaction".  Is this what you 
> meant, and would the steps you talked about with pg_stat_activity help 
> me track down the transactions that are idle?

Here's an example of what you'll get when stats_command_string is "on":

SELECT * FROM pg_stat_activity;

 datid | datname | procpid | usesysid | usename |             current_query             |          query_start
-------+---------+---------+----------+---------+---------------------------------------+-------------------------------
 17232 | test    |   26484 |      100 | mfuhr   | <IDLE> in transaction                 | 2005-02-03 00:20:11.072507-07
 17232 | test    |   26482 |      100 | mfuhr   | <IDLE>                                | 2005-02-03 00:19:24.445995-07
 17232 | test    |   26483 |      100 | mfuhr   | ALTER TABLE foo ADD COLUMN name text; | 2005-02-03 00:20:36.089689-07
(3 rows)

> What's strange is that usually this does not cause problems.  It is just 
> occasionally that I am unable to gain access to a table.  Unfortunately 
> that also makes it hard to track down the source of the problem.

Joining pg_locks and pg_stat_activity can be useful.  Here's an example:

SELECT p.procpid, p.usename, p.datname, l.relation::regclass,
   l.granted, p.current_query, p.query_start
FROM pg_stat_activity AS p
JOIN pg_locks AS l ON l.pid = p.procpid
WHERE l.relation IN (SELECT relation FROM pg_locks WHERE granted IS FALSE)
ORDER BY l.relation;

 procpid | usename | datname | relation | granted |             current_query             |          query_start
---------+---------+---------+----------+---------+---------------------------------------+-------------------------------
   26484 | mfuhr   | test    | foo      | t       | <IDLE> in transaction                 | 2005-02-03 00:20:11.072507-07
   26483 | mfuhr   | test    | foo      | f       | ALTER TABLE foo ADD COLUMN name text; | 2005-02-03 00:20:36.089689-07
(2 rows)

If a connection holding a lock (granted = t) is idle, then you might
want to investigate what it's doing.  It might simply have issued
a query, acquiring an AccessShareLock on the table, which conflicts
with the AccessExclusiveLock that ALTER TABLE wants.  If that's
the case, then maybe the application could be modified to COMMIT
or ROLLBACK its transactions if it knows it's going to be idle for
a while.
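
To see the conflict for yourself, the situation above can be reproduced with
two psql sessions (a sketch, reusing the table name from the example):

```sql
-- Session 1: take an AccessShareLock and go idle inside the transaction.
BEGIN;
SELECT * FROM foo;
-- (no COMMIT yet)

-- Session 2: blocks, because ALTER TABLE requests an
-- AccessExclusiveLock, which conflicts with session 1's
-- AccessShareLock until that transaction commits or rolls back.
ALTER TABLE foo ADD COLUMN name text;
```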

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq