[GENERAL] regarding select into

2005-09-26 Thread surabhi.ahuja
 Hello,

Is it not possible to write such an SQL statement:

select ser into val1, count(*) into val2 from tab1 where id = $1;

Do I need to perform the 2 selects separately? Won't that impact the performance?

In the table tab1, id is the primary key.
 
Thank You
 
Regards
Surabhi Ahuja

Re: [GENERAL] Slow connection to the database

2005-09-26 Thread Csaba Nagy
This is a known issue with Java 1.5 networking code: it tries to reverse
look up the IP addresses in certain circumstances (which obviously occur
in the JDBC driver) if you use IPs in your connection addresses.

You're best off making sure DNS is set up locally on your client
machines for all the known servers you connect to. That would be
/etc/hosts on Linux; on other OSes I don't know.
This will make sure you have no timeouts whether you use IPs or host
names, but you have to maintain it.
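
For example, a hypothetical /etc/hosts entry (use your server's real
address and host name):

192.0.2.10    dbserver.example.com    dbserver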

Cheers,
Csaba.


On Sun, 2005-09-25 at 12:32, Poul Møller Hansen wrote:
> > How do you know the delay isn't in the client's lookup of the server?
> > 
> 
> You are absolutely right!
> I am using the ip address to connect to, so I thought that was it.
> But the client is doing netbios queries, as seen with ethereal.
> 
> Thanks to both of you.
> 
> 
> Poul
> 
> 
> 
> ---(end of broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
>subscribe-nomail command to [EMAIL PROTECTED] so that your
>message can get through to the mailing list cleanly


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] regarding select into

2005-09-26 Thread Alban Hertroys

surabhi.ahuja wrote:

 Hello,
 
Is it not possible to write such an sql statement:
 
select ser into val1, count(*) into val2 from tab1 where id = $1;
 
do i need to perform the 2 selects separately ..wont that impact the 
performance?


Well, considering id is your primary key and therefore very probably 
unique, you could as well do: select ser, 1 from tab1 where id=$1;
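
For the archives, a minimal PL/pgSQL sketch of fetching both values in one
statement (the function wrapper, parameter type and variable names are
hypothetical; only tab1, ser and id come from the original post, and the
dollar quoting assumes 8.0 or later):

CREATE OR REPLACE FUNCTION get_ser(p_id integer) RETURNS text AS $$
DECLARE
    val1 text;
    val2 bigint;
BEGIN
    -- id is the primary key, so at most one row matches; one statement
    -- returns both the value and the matching row count.
    SELECT ser, count(*) INTO val1, val2
      FROM tab1
     WHERE id = p_id
     GROUP BY ser;
    RETURN val1;
END;
$$ LANGUAGE plpgsql;

Since id is unique, the "select ser, 1" form above gives the same result
without the GROUP BY.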


Otherwise, whatever interface you're using is likely to have a rowcount 
value for you.
If that's not what you're looking for, please don't make us guess what 
you want to achieve.


--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

//Showing your Vision to the World//

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] RI_ConstraintTrigger question

2005-09-26 Thread [EMAIL PROTECTED]
We have a database with about 30 tables and some RI. The RI constraints,
however, were not named upon creation of the database 2-3 years ago and
now when we get an error it contains <unnamed> for the constraint.


I tried Google and the documentation, and I still have 2 questions -

1. Is it possible to rename RI_ConstraintTrigger, so that we do not get
<unnamed> in the errors.


2. Is there an explanation somewhere of what the RI_FKey_ procedures mean?

Thank you.

Iv


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] insertion becoming slow

2005-09-26 Thread surabhi.ahuja
I have seen that after inserting, say, 20k rows, insertion into the
tables becomes slow. Why is that?

Is there any way to stop the performance from degrading?

Re: [GENERAL] What is an 'unused item pointer'

2005-09-26 Thread Jim C. Nasby
On Sun, Sep 25, 2005 at 12:09:24AM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > On Sat, Sep 24, 2005 at 07:19:10PM -0400, Tom Lane wrote:
> >> Currently, when a tuple is reclaimed by VACUUM, we just mark its item
> >> pointer as unused (and hence recyclable).  I think it might be safe to
> >> decrease pd_lower if there are unused pointers at the end of the page's
> >> pointer array, but we don't currently do that.
> 
> > Sounds like a good newbie TODO?
> 
> Uh, no, because the $64 question is whether it actually *is* safe, or
> perhaps would be safe with more locking than we do now.  I'm not sure of
> the answer myself, and would have zero confidence in a newbie's answer.
> 
> Decreasing pd_lower would definitely be a win if we can do it free or
> cheaply.  If it requires significant additional locking overhead, then
> maybe not.

Ok, sounds like a non-newbie TODO then. :)
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Data Entry Tool for PostgreSQL

2005-09-26 Thread Brent Wood


On Fri, 23 Sep 2005, Ian Overton wrote:

> Hello,
>
> We have a web application using PHP, Linux and PostgreSQL.  We need a
> simple data entry tool to allow non-IT people to edit data in our database
> through a web-browser.  Are there any free or commercial data management
> tools out there, not database management tools like pgAdmin.

I would have thought a php application would be able to do this fairly
easily. My data entry scripts insert the new records with just a primary
key, then iterate through the various fields using an update sql for each
one which is not null.

This sort of approach could easily be used to populate an on-screen table
using php, then update any changed fields as required.


Cheers,


  Brent Wood


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Index use in BETWEEN statement...

2005-09-26 Thread Cristian Prieto

Hello pals, I have the following table in Postgresql 8.0.1

Mydb# \d geoip_block
Table "public.geoip_block"
   Column|  Type  | Modifiers
-++---
 locid   | bigint |
 start_block | inet   |
 end_block   | inet   |

mydb# explain analyze select locid from geoip_block where
'216.230.158.50'::inet between start_block and end_block;
  QUERY PLAN

---
 Seq Scan on geoip_block  (cost=0.00..142772.86 rows=709688 width=8) (actual
time=14045.384..14706.927 rows=1 loops=1)
   Filter: (('216.230.158.50'::inet >= start_block) AND
('216.230.158.50'::inet <= end_block))
 Total runtime: 14707.038 ms

Ok, now I decided to create an index to "speed up" the query a little

Mydb# create index idx_ipblocks on geoip_block(start_block, end_block);
CREATE INDEX

clickad=# explain analyze select locid from geoip_block where
'216.230.158.50'::inet between start_block and end_block;
  QUERY PLAN

--
 Seq Scan on geoip_block  (cost=0.00..78033.96 rows=230141 width=8) (actual
time=12107.919..12610.199 rows=1 loops=1)
   Filter: (('216.230.158.50'::inet >= start_block) AND
('216.230.158.50'::inet <= end_block))
 Total runtime: 12610.329 ms
(3 rows)

I guess the planner is doing a sequential scan on the table; why doesn't it use the
compound index? Do you have any idea how to speed up this query?

Thanks a lot!


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Function keys cause psql to segfault

2005-09-26 Thread Douglas McNaught
CSN <[EMAIL PROTECTED]> writes:

> Hmm, in putty (Terminal->Keyboard) I changed "the
> function keys and keypad" from "ESC[n~" to "Linux".
> Hitting F1-5 in psql outputs "ABCDE" - no segfaults!
> Setting it to "Xterm R6" also results in function keys
> 1-4 causing segfaults (there are also options for
> "VT400", "VT100+", and "SCO" - haven't tried those).

Sounds like it might be useful to compile psql with debugging symbols
and get a backtrace from the crash.

-Doug

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Data Entry Tool for PostgreSQL

2005-09-26 Thread Sean Davis
On 9/25/05 8:49 PM, "Brent Wood" <[EMAIL PROTECTED]> wrote:

> 
> 
> On Fri, 23 Sep 2005, Ian Overton wrote:
> 
>> Hello,
>> 
>> We have a web application using PHP, Linux and PostgreSQL.  We need a
>> simple data entry tool to allow non-IT people to edit data in our database
>> through a web-browser.  Are there any free or commercial data management
>> tools out there, not database management tools like pgAdmin.
> 
> I would have thought a php appliction would be able to this fairly
> easily. My data entry scripts insert the new records with just a primary
> key, then iterate through the various fields using an update sql for each
> one which is not null.
> 
> This sort of approach coul easily be used to populate an on-screen table
> using php, then update any changed fields as required.

These types of applications are typically called "CRUD" applications
(Create, Read, Update, and Delete).  Perl, Ruby, and Java, at least, have
frameworks for building CRUD applications in a fairly straightforward way
(depending on needs and customization).

Sean
 


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



Re: [GENERAL] Index use in BETWEEN statement...

2005-09-26 Thread Sean Davis
On 9/26/05 11:26 AM, "Cristian Prieto" <[EMAIL PROTECTED]> wrote:

> 
> Hello pals, I have the following table in Postgresql 8.0.1
> 
> Mydb# \d geoip_block
> Table "public.geoip_block"
>  Column|  Type  | Modifiers
> -++---
> locid   | bigint |
> start_block | inet   |
> end_block   | inet   |
> 
> mydb# explain analyze select locid from geoip_block where
> '216.230.158.50'::inet between start_block and end_block;
> QUERY PLAN
> 
> ---
> Seq Scan on geoip_block  (cost=0.00..142772.86 rows=709688 width=8) (actual
> time=14045.384..14706.927 rows=1 loops=1)
>  Filter: (('216.230.158.50'::inet >= start_block) AND
> ('216.230.158.50'::inet <= end_block))
> Total runtime: 14707.038 ms
> 
> Ok, now I decided to create a index to "speed" a little the query
> 
> Mydb# create index idx_ipblocks on geoip_block(start_block, end_block);
> CREATE INDEX
> 
> clickad=# explain analyze select locid from geoip_block where
> '216.230.158.50'::inet between start_block and end_block;
> QUERY PLAN
> 
> --
> Seq Scan on geoip_block  (cost=0.00..78033.96 rows=230141 width=8) (actual
> time=12107.919..12610.199 rows=1 loops=1)
>  Filter: (('216.230.158.50'::inet >= start_block) AND
> ('216.230.158.50'::inet <= end_block))
> Total runtime: 12610.329 ms
> (3 rows)
> 
> I guess the planner is doing a sequential scan in the table, why not use the
> compound index? Do you have any idea in how to speed up this query?

Did you vacuum analyze the table after creating the index?

Sean


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] could not send data to server: Operation not supported

2005-09-26 Thread ruben

Hi:

A PHP script that is running every 30 minutes for three years has 
crashed when updating a Postgres database table with this error message:


"pg_exec() query failed:  could not send data to server: Operation not 
supported"


I just ran it again and it worked fine. I could not find a single page 
about this error in Google. This is the output of the logfile:


LOG:  server process (PID 7069) was terminated by signal 11
LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back 
the current transaction and exit, because another server process exited 
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and 
repeat your command.

LOG:  all server processes terminated; reinitializing
LOG:  database system was interrupted at 2005-09-26 16:49:43 CEST
LOG:  checkpoint record is at 125/858E0144
LOG:  redo record is at 125/858E0144; undo record is at 0/0; shutdown FALSE
LOG:  next transaction ID: 2270061; next OID: 30820346
LOG:  database system was not properly shut down; automatic recovery in 
progress

LOG:  record with zero length at 125/858E0184
LOG:  redo is not required
LOG:  database system is ready
LOG:  unexpected EOF on client connection


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Error migrating from 7.4.3 to 8.0.3

2005-09-26 Thread Raj Gupta

Hi,

 We are trying to migrate our database from 7.4.3 of Postgresql to 
8.0.3.


 While migrating a table, we got the following error:

 ERROR:  could not write block 2830 of relation 1663/2276041/4965853: 
Operation not permitted


 This came when pg was trying to create an index on the relation. Has 
anyone seen this behavior before? Are we doing something wrong?


 Thanks

Raj

 --
 Raj Gupta                      [EMAIL PROTECTED]
 1684 Nightingale Avenue     Suite 201
 Sunnyvale, CA 94087        408-733-2737(fax)

 http://www.zeesource.net


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Slow search.. quite clueless

2005-09-26 Thread Yonatan Ben-Nes

Oleg Bartunov wrote:
contrib/tsearch2
(http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/)
might work for you. "Might", because performance depends on the cardinality
of your keywords.


 Oleg
On Tue, 20 Sep 2005, Yonatan Ben-Nes wrote:


Hi all,

I'm building a site where users can search for products with up to
4 different keywords, all of which MUST match each product returned
by the search.


I got 2 tables (which are relevant to the issue :)), one is the
product table (5 million rows) and the other is the keyword table
which holds the keywords of each product (60 million rows).


The schema of the tables is as follows:

        Table "public.product"
    Column     |     Type      | Modifiers
---------------+---------------+-----------
 product_id    | text          | not null
 product_name  | text          | not null
 retail_price  | numeric(10,2) | not null
 etc...
Indexes:
    "product_product_id_key" UNIQUE, btree (product_id)

     Table "public.keyword"
   Column   | Type | Modifiers
------------+------+-----------
 product_id | text | not null
 keyword    | text | not null
Indexes:
    "keyword_keyword" btree (keyword)

The best query I have succeeded with so far joins the keyword
table once for each keyword searched; for example, if someone searches for
"belt" & "black" & "pants" it will create the following query:


poweraise.com=# EXPLAIN ANALYZE SELECT 
product_id,product_name,product_image_url,short_product_description,long_product_description,discount,discount_type,sale_price,retail_price 
FROM product INNER JOIN keyword t1 USING(product_id) INNER JOIN 
keyword t2 USING(product_id) INNER JOIN keyword t3 USING(product_id) 
WHERE t1.keyword='belt' AND t2.keyword='black' AND t3.keyword='pants' 
LIMIT 13;


  QUERY PLAN
--- 

Limit  (cost=37734.15..39957.20 rows=13 width=578) (actual 
time=969.798..1520.354 rows=6 loops=1)
  ->  Hash Join  (cost=37734.15..3754162.82 rows=21733 width=578) 
(actual time=969.794..1520.337 rows=6 loops=1)

Hash Cond: ("outer".product_id = "inner".product_id)
->  Nested Loop  (cost=18867.07..2858707.34 rows=55309 
width=612) (actual time=82.266..1474.018 rows=156 loops=1)
  ->  Hash Join  (cost=18867.07..2581181.09 rows=55309 
width=34) (actual time=82.170..1462.104 rows=156 loops=1)

Hash Cond: ("outer".product_id = "inner".product_id)
->  Index Scan using keyword_keyword on keyword t2 
(cost=0.00..331244.43 rows=140771 width=17) (actual 
time=0.033..1307.167 rows=109007 loops=1)

  Index Cond: (keyword = 'black'::text)
->  Hash  (cost=18851.23..18851.23 rows=6337 
width=17) (actual time=16.145..16.145 rows=0 loops=1)
  ->  Index Scan using keyword_keyword on 
keyword t1 (cost=0.00..18851.23 rows=6337 width=17) (actual 
time=0.067..11.050 rows=3294 loops=1)

Index Cond: (keyword = 'belt'::text)
  ->  Index Scan using product_product_id_key on product 
(cost=0.00..5.01 rows=1 width=578) (actual time=0.058..0.060 rows=1 
loops=156)

Index Cond: (product.product_id = "outer".product_id)
->  Hash  (cost=18851.23..18851.23 rows=6337 width=17) (actual 
time=42.863..42.863 rows=0 loops=1)
  ->  Index Scan using keyword_keyword on keyword t3 
(cost=0.00..18851.23 rows=6337 width=17) (actual time=0.073..36.120 
rows=3932 loops=1)

Index Cond: (keyword = 'pants'::text)
Total runtime: 1521.441 ms
(17 rows)

Sometimes the query works fast even for 3 keywords, but that doesn't help
me if at other times it takes ages.


Now, finding a result for 1 keyword is really fast, so I also tried
making 3 queries and doing an INTERSECT between them, but that turned out
to be extremely slow...


What makes this query slow, as far as I understand, is all the merging
between the results of each table... I tried to divide the keyword
table into lots of keyword tables which each hold keywords starting
only with a specific letter; it did improve the speed but not in a
really significant way.. tried clusters, indexes, SET STATISTICS, WITHOUT
OIDS on the keyword table and what not.. I'm quite clueless...


Actually I even started to look on other solutions and maybe you can 
say something about them also.. maybe they can help me:

1. Omega (From the Xapian project) - http://www.xapian.org/
2. mnoGoSearch - http://www.mnogosearch.org/doc.html
3. Swish-e - http://swish-e.org/index.html

To add on everything I want at the end to be able to ORDER BY the 
results like order the product by price, but im less concerned about 
that cause I saw that wi

Re: [GENERAL] Error migrating from 7.4.3 to 8.0.3

2005-09-26 Thread Poul Møller Hansen

Raj Gupta wrote:

Hi,

 We are trying to migrate our database from 7.4.3 of Postgresql to 8.0.3.

 While migrating a table, we got the following error:

 ERROR:  could not write block 2830 of relation 1663/2276041/4965853: 
Operation not permitted


 This came when pg was trying to create an index on the relation. Has 
anyone seen this behavior before? Are we doing something wrong?




Have you dumped and restored the database ?
The structure of the two major versions is not compatible.

Poul


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Error migrating from 7.4.3 to 8.0.3

2005-09-26 Thread Vivek Khera


On Sep 26, 2005, at 12:43 PM, Raj Gupta wrote:


 While migrating a table, we got the following error:

 ERROR:  could not write block 2830 of relation  
1663/2276041/4965853: Operation not permitted


 This came when pg was trying to create an index on the relation.  
Has anyone seen this behavior before? Are we doing something wrong?




This is during the pg_restore step when loading your data  into the  
8.0.3 instance?  it looks suspiciously like some sort of low-level  
fault.  Try running some hardware diags on your system.



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Error migrating from 7.4.3 to 8.0.3

2005-09-26 Thread Raj Gupta
Yes, I followed the procedure for dumping and restoring binary data (to 
include large objects).


It creates all the tables, etc. fine, and all the other indices are 
created. It's just this one index that it gives me a problem.


The problem also happens if I manually try to create the same index on 
that table. Otherwise, everything else seems to be okay.


Raj


On Sep 26, 2005, at 10:09 AM, Poul Møller Hansen wrote:


Raj Gupta wrote:

Hi,
 We are trying to migrate our database from 7.4.3 of Postgresql to 
8.0.3.

 While migrating a table, we got the following error:
 ERROR:  could not write block 2830 of relation 1663/2276041/4965853: 
Operation not permitted
 This came when pg was trying to create an index on the relation. Has 
anyone seen this behavior before? Are we doing something wrong?


Have you dumped and restored the database ?
The structure of the two major versions is not compatible.

Poul


---(end of 
broadcast)---

TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Data Entry Tool for PostgreSQL

2005-09-26 Thread John DeSoi


On Sep 26, 2005, at 11:45 AM, Sean Davis wrote:


These types of applications are typically called "CRUD" applications
(Create, Read, Update, and Delete).  Perl, Ruby, and Java, at  
least, have

frameworks for building CRUD applications in a fairly straightforward
(depending on needs and customization).


Cake looks like an interesting CRUD framework if you are using PHP:

https://trac.cakephp.org/wiki


John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Error migrating from 7.4.3 to 8.0.3

2005-09-26 Thread Raj Gupta
This is both while doing  a pg_restore and also when doing a 'create 
index' command.


This is running on Mac OS X 10.3

Postgresql 7.4 runs fine, and creates the index, so I would be 
surprised if this is a hardware error?


Raj


On Sep 26, 2005, at 10:23 AM, Vivek Khera wrote:



On Sep 26, 2005, at 12:43 PM, Raj Gupta wrote:


 While migrating a table, we got the following error:

 ERROR:  could not write block 2830 of relation 1663/2276041/4965853: 
Operation not permitted


 This came when pg was trying to create an index on the relation. Has 
anyone seen this behavior before? Are we doing something wrong?




This is during the pg_restore step when loading your data  into the 
8.0.3 instance?  it looks suspiciously like some sort of low-level 
fault.  Try running some hardware diags on your system.



---(end of 
broadcast)---

TIP 6: explain analyze is your friend



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] insertion becoming slow

2005-09-26 Thread Jim C. Nasby
On Mon, Sep 26, 2005 at 05:55:18PM +0530, surabhi.ahuja wrote:
> i have seen that after insertion say (20 k rows) the insertion to tables 
> becomes slow..why is it?

Most likely due to indexes.

> is there any way in which u can stop the performance from degrading

If you're loading from scratch, don't create the indexes until after the
load is done.
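
A minimal sketch of that pattern (table, column and file names are
hypothetical):

COPY mytable FROM '/tmp/mytable.copy';          -- bulk load, no secondary indexes yet
CREATE INDEX mytable_col_idx ON mytable (col);  -- build the index once, afterwards
ANALYZE mytable;                                -- refresh planner statistics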
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Index use in BETWEEN statement...

2005-09-26 Thread Cristian Prieto

mydb=# explain analyze select locid from geoip_block where
'216.230.158.50'::inet between start_block and end_block;
  QUERY PLAN

--
 Seq Scan on geoip_block  (cost=0.00..78033.96 rows=230141 width=8) (actual
time=13015.538..13508.708 rows=1 loops=1)
   Filter: (('216.230.158.50'::inet >= start_block) AND
('216.230.158.50'::inet <= end_block))
 Total runtime: 13508.905 ms
(3 rows)

mydb=# alter table geoip_block add constraint pkey_geoip_block primary key
(start_block, end_block);
NOTICE:  ALTER TABLE / ADD PRIMARY KEY will create implicit index
"pkey_geoip_block" for table "geoip_block"
ALTER TABLE

mydb=# vacuum analyze geoip_block; 

mydb=# explain analyze select locid from geoip_block where
'216.230.158.50'::inet between start_block and end_block;
  QUERY PLAN

---
 Seq Scan on geoip_block  (cost=0.00..101121.01 rows=308324 width=8) (actual
time=12128.190..12631.550 rows=1 loops=1)
   Filter: (('216.230.158.50'::inet >= start_block) AND
('216.230.158.50'::inet <= end_block))
 Total runtime: 12631.679 ms
(3 rows)

mydb=#


As you can see it is still using a sequential scan on the table and ignoring the
index; any other suggestions?

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Sean Davis
Sent: Lunes, 26 de Septiembre de 2005 10:24 a.m.
To: Cristian Prieto; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Index use in BETWEEN statement...

On 9/26/05 11:26 AM, "Cristian Prieto" <[EMAIL PROTECTED]> wrote:

> 
> Hello pals, I have the following table in Postgresql 8.0.1
> 
> Mydb# \d geoip_block
> Table "public.geoip_block"
>  Column|  Type  | Modifiers
> -++---
> locid   | bigint |
> start_block | inet   |
> end_block   | inet   |
> 
> mydb# explain analyze select locid from geoip_block where
> '216.230.158.50'::inet between start_block and end_block;
> QUERY PLAN
>

> ---
> Seq Scan on geoip_block  (cost=0.00..142772.86 rows=709688 width=8)
(actual
> time=14045.384..14706.927 rows=1 loops=1)
>  Filter: (('216.230.158.50'::inet >= start_block) AND
> ('216.230.158.50'::inet <= end_block))
> Total runtime: 14707.038 ms
> 
> Ok, now I decided to create a index to "speed" a little the query
> 
> Mydb# create index idx_ipblocks on geoip_block(start_block, end_block);
> CREATE INDEX
> 
> clickad=# explain analyze select locid from geoip_block where
> '216.230.158.50'::inet between start_block and end_block;
> QUERY PLAN
>

> --
> Seq Scan on geoip_block  (cost=0.00..78033.96 rows=230141 width=8) (actual
> time=12107.919..12610.199 rows=1 loops=1)
>  Filter: (('216.230.158.50'::inet >= start_block) AND
> ('216.230.158.50'::inet <= end_block))
> Total runtime: 12610.329 ms
> (3 rows)
> 
> I guess the planner is doing a sequential scan in the table, why not use
the
> compound index? Do you have any idea in how to speed up this query?

Did you vacuum analyze the table after creating the index?

Sean


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Replicating new sequences

2005-09-26 Thread Jan Wieck

On 9/22/2005 3:54 PM, Todd Eddy wrote:

I know this gets asked all the time, but I'm having issues with  
replication.  I got Slony setup between two computers and that does  
replication of transactions.  But we have a table that because of how  
it works new sequences are added on a somewhat regular basis, maybe a  
couple times a day.  Also replication needs to be "almost" instant  
because we distribute the load so updates go to one database and  
selects happen on another database.  So having these sequences  
replicate on a fairly quick basis is important.


I think what we need is a log based replication solution instead of  
the trigger based solution of Slony or other replication systems out  


I think it makes little difference how the changes are recorded 
(triggers or WAL log analysis). The thing that matters is that both are 
asynchronous replication methods which allow the master to run ahead.


What you are asking for is a synchronous replication system.

I hope that your load balancing system is smart enough to at least 
guarantee that "SELECT ... FOR UPDATE" queries are executed in the same 
"master transaction" as their corresponding "UPDATE" queries. If that is 
the case, you shouldn't really have much of a problem. If not, I am not 
even sure a synchronous replication system under MVCC can guarantee 
consistency for you.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] SQL command to dump the contents of table failed: PQendcopy()

2005-09-26 Thread ruben

Thanks Tom and Michael:

Michael Fuhr wrote:


On Fri, Sep 23, 2005 at 05:11:19PM +0200, ruben wrote:

The operating system is Red Hat Linux release 8.0 (Psyche) and 
PostgreSQL version is 7.4.6., without non-standard extensions.


I cannot find any core dump in the PGDATA directory /usr/local/pgsql (I 
don't know how to debug it to get a stack trace, I'll find out).



Did you look everywhere under $PGDATA or just in that directory?
As I recall, released versions of PostgreSQL usually dump core under
$PGDATA/base/.  However, it's also possible that your
coredumpsize resource limit prevents core dumps; you could fix that
by putting a command like "ulimit -c unlimited" in your PostgreSQL
startup script and then stopping and restarting PostgreSQL.

Once you have a core dump, you can get a stack trace with gdb:

$ gdb /path/to/postgres /path/to/core
...
(gdb) bt

If your postgres binary was built with debugging symbols then the
stack trace should show function names, file names, and line numbers.


I cannot find a core file, ulimit is set to unlimited. I guess I'm doing
something wrong:


-bash-2.05b$ ulimit
unlimited
-bash-2.05b$ find /usr/local/pgsql -name '*core*' -print
-bash-2.05b$



Can you duplicate the backend crash from psql if you issue the COPY
command that pg_dump complained about?  


-bash-2.05b$ /usr/local/pgsql/bin/pg_dump -Fc -t llamadas heos -f 
/home/buheos/5/llamadas3.dump

pg_dump: socket not open
pg_dump: SQL command to dump the contents of table "llamadas" failed: 
PQendcopy() failed.

pg_dump: Error message from server: socket not open
pg_dump: The command was: COPY public.llamadas (cod_empresa, 
fecha_llamada, tfno_origen, tfno_destino, duracion_llamada, 
hora_llamada, cod_destino_llamada, cod_pais_destino, 
cod_destino_internacional, franja_horaria, importe, cod_fuente, 
precio_coste_llamada, observaciones_llamada, coment_llamada, 
fecha_factura, num_factura, fecha_alta, fecha_ult_mod, fecha_sis_alta, 
usuario_alta, i_a_alta, fecha_sis_ult_mod, usuario_ult_mod, i_a_ult_mod, 
periodicidad_facturacion, cod_operador, franja_horaria_operador, 
fichero_origen, cod_destino_internacional_operador) TO stdout;



What about if you issue a SELECT for all records in the table?  



heos=# select * from llamadas;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: WARNING: 
terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back 
the current transaction and exit, because another server process exited 
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and 
repeat your command.

Failed.

In the logfile:

LOG:  server process (PID 7069) was terminated by signal 11
LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back 
the current transaction and exit, because another server process exited 
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and 
repeat your command.

LOG:  all server processes terminated; reinitializing
LOG:  database system was interrupted at 2005-09-26 16:49:43 CEST
LOG:  checkpoint record is at 125/858E0144
LOG:  redo record is at 125/858E0144; undo record is at 0/0; shutdown FALSE
LOG:  next transaction ID: 2270061; next OID: 30820346
LOG:  database system was not properly shut down; automatic recovery in 
progress

LOG:  record with zero length at 125/858E0184
LOG:  redo is not required
LOG:  database system is ready
LOG:  unexpected EOF on client connection



> What does "\d tablename" show for the table in question?

heos=# \d llamadas;
  Table "public.llamadas"
   Column   |   Type   | 
  Modifiers

+--+
 cod_empresa| smallint | not null
 fecha_llamada  | date |
 tfno_origen| character(15)|
 tfno_destino   | character(15)|
 duracion_llamada   | integer  |
 hora_llamada   | time without time zone   | 
default ('now'::text)::time(6) with time zone

 cod_destino_llamada| character(1) |
 cod_pais_destino   | integer  |
 cod_destino_internacional  | character(15)|
 franja_horaria | character(1) |
 importe| real |
 cod

Re: [GENERAL] Slow search.. quite clueless

2005-09-26 Thread Oleg Bartunov

On Mon, 26 Sep 2005, Yonatan Ben-Nes wrote:


Hi again everyone,

Oleg, I tried tsearch2 and happily it does work wonderfully for me, returning
results extremely fast; actually it's working even better than I wanted, with
all of those neat features like lexemes, weights & stop words.


I've got only one problem, which is that when I want the results to be ordered by a
different field (like a plain INT field) it takes quite a lot of time
if the query can return lots of results (for example a search for the
word "computer"), and that's even if I limit the results.
The best way to improve the speed of such queries (that I've found...) is
to create an index on the field I want to order by and CLUSTER
the table on it; after the clustering I drop the index so it won't be used when
I run queries with ORDER BY on that field. That seems to improve the speed; if
anyone has a better idea I'll be glad to hear it.


what's your actual query ?  have you tried multicolumn index ?




Anyway thanks alot everyone!
 Ben-Nes Yonatan



Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] SQL command to dump the contents of table failed: PQendcopy()

2005-09-26 Thread Tom Lane
ruben <[EMAIL PROTECTED]> writes:
>> As I recall, released versions of PostgreSQL usually dump core under
>> $PGDATA/base/.  However, it's also possible that your
>> coredumpsize resource limit prevents core dumps; you could fix that
>> by putting a command like "ulimit -c unlimited" in your PostgreSQL
>> startup script and then stopping and restarting PostgreSQL.

> I cannot find a core file, ulimit is set to unlimit. I guess I'm doing 
> something wrong:

> -bash-2.05b$ ulimit
> unlimited
> -bash-2.05b$ find /usr/local/pgsql -name '*core*' -print

(1) The fact that it's unlimited in your user environment doesn't prove
that it's unlimited in the environment the postmaster is started in.

(2) I forget which constraint ulimit-with-no-argument prints, but it's
not core file size.  (Try "ulimit -a")

Please actually follow the advice given to you above.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Error migrating from 7.4.3 to 8.0.3

2005-09-26 Thread Tom Lane
Raj Gupta <[EMAIL PROTECTED]> writes:
> Postgresql 7.4 runs fine, and creates the index, so I would be 
> surprised if this is a hardware error?

"Operation not permitted" during a write() is not very sensible;
the kernel should have complained when the file was open()'d if there
were permission issues.  So I'd say there is something pretty broken
about either your kernel or the filesystem the database is on.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Error migrating from 7.4.3 to 8.0.3

2005-09-26 Thread Raj Gupta
This is a stock Mac OS 10.3 (Panther). Has anyone else seen this 
problem with 8.0.3 on Panther?


According to the INSTALL file 10.3 should work fine, as reported by 
Andrew Rawnsley, cc'd on this message.


Raj

On Sep 26, 2005, at 12:23 PM, Tom Lane wrote:


"Operation not permitted" during a write() is not very sensible;
the kernel should have complained when the file was open()'d if there
were permission issues.  So I'd say there is something pretty broken
about either your kernel or the filesystem the database is on.

regards, tom lane



ERROR:  could not write block 2830 of relation 1663/2276041/4965853: 
Operation not permitted


 This came when pg was trying to create an index on the relation.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] How many insert + update should one transaction handle?

2005-09-26 Thread Yonatan Ben-Nes

Robert Treat wrote:

On Fri, 2005-09-23 at 14:53, Dawid Kuroczko wrote:


On 9/23/05, Yonatan Ben-Nes < [EMAIL PROTECTED]  >
wrote:

Hi all,

Every few days I need to DELETE all of the content of few tables and
INSERT new data in them.
The amount of new data is about 5 million rows and each row get about 3
queries (INSERT + UPDATE).



 


Or you could even try (haven't tested it):
BEGIN;
CREATE new_table;
SELECT INTO new_table * FROM temp_table;
DROP TABLE table;
ALTER TABLE new_table RENAME TO table;
COMMIT; -- leaving you with fresh 5mln new tuples table
...with a risk of loosing all the changes made to old table after BEGIN;




yeah, i was thinking 


create newtable;
~~ load data into newtable

begin;
  drop oldtable;
  alter table newtable rename to oldtable
commit;

this seperates the data loading piece from the piece where you promote
the data to live data, plus then the time you have to hold the
transaction open is only for the drop and rename, which will be quite
fast. 


the only potential issues would be making sure you dont have FK/View
type issues, but it doesn't sound like it would apply here.


Robert Treat


Sorry everyone for not responding... I just didn't know that the
discussion had continued :)


Anyway I saw the idea:
BEGIN;
CREATE new_table;
SELECT INTO new_table * FROM temp_table;
DROP TABLE table;
ALTER TABLE new_table RENAME TO table;
COMMIT;

Where, if I understood correctly, "table" is the final table, "temp_table"
is the table that receives all the processing and at the end of it has
10mil deleted tuples and 5mil active, and finally "new_table" is the
recipient of all of the active tuples from "temp_table".


It's looking quite promising to me, but I did a little check and saw that
between the drop table command & the commit I get a lock on the table
(obvious but problematic for a 24/7 site), so I'm wondering to myself how
much time such a transaction will take from the drop command point?


If users won't be able to access the table for some extremely small amount
of time (less than a second obviously) then, though I don't like it much,
it is better than running a vacuum full which will slow my whole server
for a considerable amount of time...


So does anyone know how much time such a process takes? (I tried to explain
analyze it with no success :)).


Thanks alot everyone!
  Ben-Nes Yonatan

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Index use in BETWEEN statement...

2005-09-26 Thread Tom Lane
"Cristian Prieto" <[EMAIL PROTECTED]> writes:
> mydb=# explain analyze select locid from geoip_block where
> '216.230.158.50'::inet between start_block and end_block;

> As you see it still using a sequential scan in the table and ignores the
> index, any other suggestion?

That two-column index is entirely useless for this query; in fact btree
indexes of any sort are pretty useless.  You really need some sort of
multidimensional index type like rtree or gist.  There was discussion
just a week or three ago of how to optimize searches for intervals
overlapping a specified point, which is identical to your problem.
Can't remember if the question was about timestamp intervals or plain
intervals, but try checking the list archives.

regards, tom lane
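
For the archives, a different workaround that does work with a plain btree;
a sketch assuming the GeoIP blocks never overlap (find the nearest
start_block, then verify the end):

CREATE INDEX geoip_block_start_idx ON geoip_block (start_block);

SELECT locid, end_block
  FROM geoip_block
 WHERE start_block <= '216.230.158.50'::inet
 ORDER BY start_block DESC
 LIMIT 1;

The single candidate row is then checked, here or in the application,
against end_block >= '216.230.158.50'::inet.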

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Data Entry Tool for PostgreSQL

2005-09-26 Thread Mike Nolan
> I would have thought a php appliction would be able to this fairly
> easily. My data entry scripts insert the new records with just a primary
> key, then iterate through the various fields using an update sql for each
> one which is not null.

A generalized program to do this for (nearly) any table is not a trivial 
task.  For a start you have to deal with knowing the difference between 
an insert and an update, perhaps provide some kind of record-locking 
scheme so it works in multi-user mode, do something about record keys 
(whether using OIDs or some other unique single field), and deal with 
quotes and other characters that cause problems for either web pages 
or SQL statements.

Things like supporting a variety of search features, data type checking 
(eg, making sure that a date or an integer is valid BEFORE trying an 
insert/update), lookups on related data (for example, displaying the name 
from a customer record when the ID appears in an order record), user 
passwords, data access security levels, data formatting, etc. all add 
complexity.

The main program I've been working on for about two years now is nearly 
3200 lines long at this point.  It has about 95% of the items on my 
original wish list of features.  It's been in use at a client's office
since March of 2004 and is used to maintain their database of over 600,000
members, among other things.  

Could I write separate PHP programs to handle each table?  Yes, and in
fact I've been doing that where I've needed to.  

But I can build a full-featured query tool (with search, insert, update 
and delete capabilities) for a new table in under 20 minutes, and it will
have the same look and feel as a couple dozen other programs for other
tables.  That's saved me a BUNCH of time both in development and in training.
--
Mike Nolan


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] SQL command to dump the contents of table failed: PQendcopy()

2005-09-26 Thread Peter Wiersig
On Mon, Sep 26, 2005 at 07:03:06PM +0200, ruben wrote:
> 
> I guess I'm doing something wrong:
> 
> -bash-2.05b$ ulimit
> unlimited

Please read manpages, in this case bash: ulimit -a

Peter

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Slow search.. quite clueless

2005-09-26 Thread Yonatan Ben-Nes

Oleg Bartunov wrote:

On Mon, 26 Sep 2005, Yonatan Ben-Nes wrote:


Hi again everyone,

Oleg, I tried tsearch2 and happily it does work wonderfully for me,
returning results extremely fast; actually it's working even better
than I wanted, with all of those neat features like lexemes, weights &
stop words.


I've got only one problem, which is that when I want the results to be ordered
by a different field (like a plain INT field) it takes quite a lot of time
if the query can return lots of results (for example a search for the
word "computer"), and that's even if I limit the results.
The best way to improve the speed of such queries (that I've
found...) is to create an index on the field I want to order by
and CLUSTER the table on it; after the clustering I drop the
index so it won't be used when I run queries with ORDER BY on that
field. That seems to improve the speed; if anyone has a better idea I'll
be glad to hear it.



what's your actual query ?  have you tried multicolumn index ?




Anyway thanks alot everyone!
 Ben-Nes Yonatan



Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83


Hi Oleg,

I can't use a multicolumn index because I already use the tsearch2 index
on that table; here is the query:


EXPLAIN ANALYZE SELECT product_id,final_price FROM product WHERE 
keywords_vector @@ to_tsquery('cat') ORDER BY retail_price LIMIT 13;


QUERY PLAN

 Limit  (cost=4.02..4.03 rows=1 width=39) (actual time=367.627..367.654 
rows=13 loops=1)
   ->  Sort  (cost=4.02..4.03 rows=1 width=39) (actual 
time=367.622..367.630 rows=13 loops=1)

 Sort Key: retail_price
 ->  Index Scan using product_keywords_vector_idx on product 
(cost=0.00..4.01 rows=1 width=39) (actual time=0.056..276.385 rows=14295 
loops=1)

   Index Cond: (keywords_vector @@ '\'cat\''::tsquery)
 Total runtime: 370.916 ms
(6 rows)

Now this is the result after it's already in the cache (I made such a query
before); the first time I ran this query it took a few seconds...
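
For reference, the CLUSTER-then-drop-index trick described above, in 8.0
syntax (the index name is hypothetical):

CREATE INDEX product_retail_price_idx ON product (retail_price);
CLUSTER product_retail_price_idx ON product;  -- rewrite the table in retail_price order
DROP INDEX product_retail_price_idx;          -- keep the physical order, lose the index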


Thanks as always :),
  Ben-Nes Yonatan

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] insertion becoming slow

2005-09-26 Thread Scott Marlowe
On Mon, 2005-09-26 at 12:58, Jim C. Nasby wrote:
> On Mon, Sep 26, 2005 at 05:55:18PM +0530, surabhi.ahuja wrote:
> > i have seen that after insertion say (20 k rows) the insertion to tables 
> > becomes slow..why is it?
> 
> Most likely due to indexes.
> 
> > is there any way in which u can stop the performance from degrading
> 
> If you're loading from scratch, don't create the indexes until after the
> load is done.

And don't forget the corollary: don't analyze an empty table and then insert
thousands of rows.

(I know Jim knows this, this is for surabhi)

Let's say we do:

truncate table;
analyze table;  (alternately, analyze parent table)
insert into table ...   (repeat 20,000 times)

And that this table has fk references or check constraints that need to
be checked on insert.  Now the query planner looks at the stats and says,
the table has only 1 or so rows, so I'll sequentially scan it for
matches.

Well, it did have 0 or 1 rows when we started, but somewhere along the
line, as we approached 20,000 rows, the planner needed to switch to an
index scan.

The simple fix is:

Don't analyze an empty table before insertion.  

Slightly less simple fix:

analyze your table every 1,000 or so inserts, especially at the
beginning.
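
A minimal sketch of the simple fix (the table name is hypothetical):

truncate mytable;
-- load the 20,000 rows with the old (non-empty) statistics still in place,
-- or re-run ANALYZE after every 1,000 rows or so
analyze mytable;   -- once the bulk of the data is in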

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] How many insert + update should one transaction handle?

2005-09-26 Thread Jim C. Nasby
On Mon, Sep 26, 2005 at 08:41:03PM +0200, Yonatan Ben-Nes wrote:
 
> Anyway I saw the idea:
> BEGIN;
> CREATE new_table;
> SELECT INTO new_table * FROM temp_table;
> DROP TABLE table;
> ALTER TABLE new_table RENAME TO table;
> COMMIT;
> 
> Where if I understood correctly "table" is the final table, "temp_table"
> is the table that receive all the proccess and at the end of it got 
> 10mil delete tuples and 5mil active and finally "new_table" is the 
> receipent of all of the active tuples from "temp_table".
> 
> Its looking quite promising to me but I did alittle check and saw that 
> between the drop table command & the commit I get a lock on the table 
> (obvious but problematic to a 24/7 site) so im wondering to myself how 
> much time such a transaction will take from the drop command point?
> 
> If users wont be able to access the table for some extremly small amount 
> of time (less then a second obviously) then though I dont like it much 
> it is better then running a vacuum full which will slow all my server 
> for a considerable amount of time...
> 
> So anyone know how much time does such a process take? (tried to explain 
> analyze it with no success :)).

Based on http://lnk.nu/developer.postgresql.org/44b.c, line 1478 on,
there's not a lot that happens during the ALTER TABLE. Likewise DROP
(line 517) doesn't do much either. So basically, anything trying to
access the old table will block for a while waiting for the update to
happen.

But keep in mind that 'a while' will depend on what's happening on the
system. Imagine...

Start long transaction involving table
Run code above; drop acquires lock on table

Everything else against table will now block, waiting for the DROP to
happen.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Index use in BETWEEN statement...

2005-09-26 Thread Don Isgitt



Cristian Prieto wrote:


mydb=# explain analyze select locid from geoip_block where
'216.230.158.50'::inet between start_block and end_block;
 QUERY PLAN

--
Seq Scan on geoip_block  (cost=0.00..78033.96 rows=230141 width=8) (actual
time=13015.538..13508.708 rows=1 loops=1)
  Filter: (('216.230.158.50'::inet >= start_block) AND
('216.230.158.50'::inet <= end_block))
Total runtime: 13508.905 ms
(3 rows)

mydb=# alter table geoip_block add constraint pkey_geoip_block primary key
(start_block, end_block);
NOTICE:  ALTER TABLE / ADD PRIMARY KEY will create implicit index
"pkey_geoip_block" for table "geoip_block"
ALTER TABLE

mydb=# vacuum analyze geoip_block; 


mydb=# explain analyze select locid from geoip_block where
'216.230.158.50'::inet between start_block and end_block;
 QUERY PLAN

---
Seq Scan on geoip_block  (cost=0.00..101121.01 rows=308324 width=8) (actual
time=12128.190..12631.550 rows=1 loops=1)
  Filter: (('216.230.158.50'::inet >= start_block) AND
('216.230.158.50'::inet <= end_block))
Total runtime: 12631.679 ms
(3 rows)

mydb=#


As you see it still using a sequential scan in the table and ignores the
index, any other suggestion?

Cristian,
 

Please note that the planner thinks 308324 rows are being returned,
while there is actually only 1 (one!). You might try altering the statistics
target for the relevant column(s), analyzing the table, and then trying again. If
that doesn't give you a more accurate row estimate, though, it won't help.
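
For example (the statistics target of 1000 is arbitrary):

ALTER TABLE geoip_block ALTER COLUMN start_block SET STATISTICS 1000;
ALTER TABLE geoip_block ALTER COLUMN end_block SET STATISTICS 1000;
ANALYZE geoip_block;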


Don


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] [SQL] add column if doesn't exist (fwd)

2005-09-26 Thread Brandon Metcalf
I figured pgsql-general would be a more appropriate forum for the
question below.

-- 
Brandon

-- Forwarded message --
Date: Mon, 26 Sep 2005 15:38:56 -0500 (CDT)
From: "Metcalf, Brandon [SC100:CM21:EXCH]" <[EMAIL PROTECTED]>
To: pgsql-sql@postgresql.org
Subject: [SQL] add column if doesn't exist

Is there a way to check for the existence of a column in a table other
than, say, doing a SELECT on that column name and checking the output?

I'm basically looking to do an ALTER TABLE foo ADD COLUMN bar if bar
doesn't exist.

Thanks.

-- 
Brandon

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match



[GENERAL] c-language function .h file changes between (major) versions

2005-09-26 Thread TJ O'Donnell

I've been developing c-language functions in 7.4.5 and 7.4.8.
I had not encountered a need to recompile because of pg .h file
or other differences.  I expected a need to do so when upgrading
to 8.0.3.  But I forgot to do so and it worked!!  I figured I
lucked out, a suspicion which was confirmed when my functions
failed with 8.1beta1, but only some functions.  A recompile
restored correct operation under 8.1beta1.

So, my question is: Is there some hard and fast rule about when
one needs to recompile c-language functions from one pg version to
another?  Always for a major version change?  minor?  I suppose
the wise thing is to recompile with ANY pg version change.

Thanks,
TJ

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] c-language function .h file changes between (major) versions

2005-09-26 Thread Tom Lane
"TJ O'Donnell" <[EMAIL PROTECTED]> writes:
> So, my question is: Is there some hard and fast rule about when
> one needs to recompile c-language functions from one pg version to
> another?  Always for a major version change?  minor?  I suppose
> the wise thing is to recompile with ANY pg version change.

You should definitely expect to recompile for a major version change.
You can generally get away without it in minor version updates, although
once in a while we will change internal APIs in a minor update if it's
necessary to fix a bug.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Can't change language through LC_MESSAGES

2005-09-26 Thread Thomas Kellerer

Hello all,

I'm trying to change the server messages back to English (initdb created
'German_Germany.1251' because I'm running a German Windows) by setting
the lc_messages property to 'C'. But the messages from e.g. pgsql or
pg_ctl still show up in German. I reloaded the config and I restarted
the PG service, but to no avail. I also tried 'English' and 'POSIX' but
none of them changed the message language.
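
A quick way to check what the running server actually has in effect, from
psql (the SET form only affects the current session and needs superuser
rights):

SHOW lc_messages;
SET lc_messages TO 'C';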


I have now simply removed the locale subdirectory in $PGHOME/share and 
that worked, but this doesn't feel right :)


What am I missing here?

Best regards
Thomas


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] could not send data to server: Operation not supported

2005-09-26 Thread Qingqing Zhou

"ruben" <[EMAIL PROTECTED]> wrote
>
> A PHP script that is running every 30 minutes for three years has crashed 
> when updating a Postgres database table with this error message:
>
> "pg_exec() query failed:  could not send data to server: Operation not 
> supported"
>
> I just ran it again and it worked fine. I could not find a single page 
> about this error in Google. This is the output of the logfile:
>
> LOG:  server process (PID 7069) was terminated by signal 11
>

It is a segmentation fault in a Postgres process. You can compile another PG using
exactly the same environment with --enable-debug support (if this version is
not built with it) and trace the core dump (if you got it). Then you can bt the
core dump frames and post them here.

Regards,
Qingqing



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] [SQL] add column if doesn't exist (fwd)

2005-09-26 Thread Qingqing Zhou

""Brandon Metcalf"" <[EMAIL PROTECTED]> wrote
>
> Is there a way to check for the existence of a column in a table other
> than, say, doing a SELECT on that column name and checking the output?
>

Take a look at pg_attribute system table. 
http://www.postgresql.org/docs/8.0/static/catalog-pg-attribute.html
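
For example, a minimal existence check (using the table and column names
from the question):

SELECT 1
  FROM pg_attribute a
  JOIN pg_class c ON c.oid = a.attrelid
 WHERE c.relname = 'foo'
   AND a.attname = 'bar'
   AND a.attnum > 0
   AND NOT a.attisdropped;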

Regards,
Qingqing 



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Can't change language through LC_MESSAGES

2005-09-26 Thread Tom Lane
Thomas Kellerer <[EMAIL PROTECTED]> writes:
> I'm trying to change the server messages back to english (initdb created 
> 'German_Germany.1251' because I'm running a German Windows) by setting 
> the lc_messages property to 'C'. But the messages from e.g. pgsql or 
> pg_ctl still show up in German.

What PG version is that, and on exactly what version of Windows?  There
are some ugly Windows-specific kluges in pg_locale.c ... maybe your copy
doesn't have them, or the code still doesn't work on your Windows
version.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Can not get ODBC to log. Trace is not working.

2005-09-26 Thread Qingqing Zhou

"Roy Souther" <[EMAIL PROTECTED]> wrote
>I am trying to get the ODBC client to log but I just can't. I have
>included my odbc.ini and odbcinst.ini. Can anyone tell me why there is
>never a log file created? I connect to the database with isql and
>OpenOffice, run queries and get data but never do I ever get a log file.

Take a look here: http://support.microsoft.com/kb/q274551/

Regards,
Qingqing




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] How many insert + update should one transaction handle?

2005-09-26 Thread Gnanavel S
On 9/27/05, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
> On Mon, Sep 26, 2005 at 08:41:03PM +0200, Yonatan Ben-Nes wrote:
> > Anyway I saw the idea:
> > BEGIN;
> > CREATE new_table;
> > SELECT INTO new_table * FROM temp_table;
> > DROP TABLE table;

Instead of dropping it here, just rename it to a different name and then,
after doing the next step, drop the table.

> > ALTER TABLE new_table RENAME TO table;
> > COMMIT;

You can do it like this:

BEGIN;
CREATE new_table;
SELECT INTO new_table * FROM temp_table;
ALTER TABLE table RENAME TO temp_table_orig;
ALTER TABLE new_table RENAME TO table;
COMMIT;
DROP TABLE temp_table_orig;

> > Where if I understood correctly "table" is the final table, "temp_table"
> > is the table that receive all the proccess and at the end of it got
> > 10mil delete tuples and 5mil active and finally "new_table" is the
> > receipent of all of the active tuples from "temp_table".
> >
> > Its looking quite promising to me but I did alittle check and saw that
> > between the drop table command & the commit I get a lock on the table
> > (obvious but problematic to a 24/7 site) so im wondering to myself how
> > much time such a transaction will take from the drop command point?
> >
> > If users wont be able to access the table for some extremly small amount
> > of time (less then a second obviously) then though I dont like it much
> > it is better then running a vacuum full which will slow all my server
> > for a considerable amount of time...
> >
> > So anyone know how much time does such a process take? (tried to explain
> > analyze it with no success :)).
>
> Based on http://lnk.nu/developer.postgresql.org/44b.c, line 1478 on,
> there's not a lot that happens during the ALTER TABLE. Likewise DROP
> (line 517) doesn't do much either. So basically, anything trying to
> access the old table will block for a while waiting for the update to
> happen.
>
> But keep in mind that 'a while' will depend on what's happening on the
> system. Imagine...
>
> Start long transaction involving table
> Run code above; drop aquires lock on table
>
> Everything else against table will now block, waiting for the DROP to
> happen.
> --
> Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
> Pervasive Software  http://pervasive.com  work: 512-231-6117
> vcard: http://jim.nasby.net/pervasive.vcf  cell: 512-569-9461

--
with regards,
S.Gnanavel
Satyam Computer Services Ltd.


Re: [GENERAL] RI_ConstraintTrigger question

2005-09-26 Thread George Essig
On 9/26/05, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> We have a database with about 30 tables and some RI. The RI constraints,
> however, were not named upon creation of the database 2-3 years ago and
> now when we get an error it contains <unnamed> for the constraint.
>
> I tried Google and the documentation, and I still have 2 questions -
>
> 1. Is it possible to rename RI_ConstraintTrigger, so that we do not get
> <unnamed> in the errors.
>
> 2. Is there somewhere explanation what the RI_FKey_ procedures mean?

I think RI stands for referential integrity.  Foreign keys used to
be implemented using 'create constraint trigger', which automatically
names triggers 'RI_ConstraintTrigger_' followed by some integer, which I
guess is an oid (object id).

Constraint triggers execute functions to implement a constraint.
RI_FKey_... are the functions that implement foreign key constraints
for different events like insert, update, and delete.

When you upgrade a database it's likely that the oids for different
database objects will change.  It sounds like somehow you upgraded
and retained references to old oids which don't exist anymore.
Just a guess.

I suggest you upgrade to a newer version of PostgreSQL, drop all of
the 'RI_ConstraintTrigger_' triggers, and recreate the foreign keys.

George Essig
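
For anyone cleaning this up, the old-style triggers can be listed first; a
sketch that should work on 7.x/8.0, where pg_trigger still carries the
constraint name:

SELECT tgrelid::regclass AS tablename, tgname, tgconstrname
  FROM pg_trigger
 WHERE tgname LIKE 'RI_ConstraintTrigger%'
 ORDER BY 1;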



[GENERAL] insertion becoming slow

2005-09-26 Thread surabhi.ahuja
Thanks for the help.
I removed a few indexes, but I can't remove the primary key and unique key
constraints, which means that indexes will be made on these 2 attributes.

I have four tables, each having 2 such attributes: one is the primary key
and the other is the unique key.

The insertion into these 4 tables is done by means of 4 stored procedures
(1 for each table).

These stored procedures are being called within a transaction, which I
begin from the application.

I noticed by putting timers that it is the "COMMIT TRANSACTION" statement
to the database (after calling the 4 stored procedures) that is taking time.

But commit transaction is not slow each time; in fact it stays very fast for
a large number of inserts, then suddenly rises (becomes almost 8 times
slower) and then falls again.

Why such a behaviour?
 
thanks
regards
surabhi