Re: [GENERAL] planner picks a bad plan (seq-scan instead of index)

2006-11-09 Thread Alban Hertroys

Thomas H. wrote:

hi list.

as soon as i left-join an additional table, the query takes 24sec 
instead of 0.2sec, although the added fields have no impact on the 
resultset:



SELECT * FROM shop.dvds
LEFT JOIN oldtables.movies ON mov_id = dvd_mov_id
WHERE (lower(mov_name) LIKE '%superman re%' OR lower(dvd_name) like 
'%superman re%' OR lower(dvd_edition) LIKE '%superman re%')


Hash Left Join  (cost=8402.16..10733.16 rows=39900 width=1276) (actual 
time=260.712..260.722 rows=2 loops=1)

 Hash Cond: (dvds.dvd_mov_id = movies.mov_id)
 Filter: ((lower((movies.mov_name)::text) ~~ '%superman re%'::text) OR 
(lower((dvds.dvd_edition)::text) ~~ '%superman re%'::text))
 ->  Seq Scan on dvds  (cost=0.00..1292.00 rows=39900 width=1062) 
(actual time=0.036..23.594 rows=20866 loops=1)
 ->  Hash  (cost=8194.93..8194.93 rows=82893 width=214) (actual 
time=168.121..168.121 rows=37417 loops=1)
   ->  Seq Scan on movies  (cost=0.00..8194.93 rows=82893 width=214) 
(actual time=0.024..131.401 rows=37417 loops=1)

Total runtime: 264.193 ms
2 rows fetched



That's a pretty bad plan already, considering it does two seq-scans. I'm 
pretty sure you can get that query to return in something close to 1ms.


Do you have indexes on any of dvds.dvd_mov_id, movies.mov_id, 
lower(mov_name), lower(dvd_edition) or lower(dvd_name)?


I think that'd help.
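If not, creating them would look roughly like this (names taken from the query above; note that btree expression indexes on lower() only help anchored or equality matches, not a leading-% LIKE):

```sql
-- join column on the dvds side (mov_id is presumably movies' primary key already)
CREATE INDEX dvds_mov_id_idx ON shop.dvds (dvd_mov_id);

-- expression indexes matching the lower(...) comparisons in the WHERE clause
CREATE INDEX movies_lower_name_idx   ON oldtables.movies (lower(mov_name));
CREATE INDEX dvds_lower_name_idx     ON shop.dvds (lower(dvd_name));
CREATE INDEX dvds_lower_edition_idx  ON shop.dvds (lower(dvd_edition));
```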

If you already do have those indices, you may be running out of memory; 
check how much memory your postgres is configured to use, the defaults 
are rather modest.


now, an additional table (containing 600k records) is added through a 
left join. all of a sudden the query takes 24sec. although there are 
indices on both dvds.dvd_ean and data_soundmedia.sm_info_ean, the planner 
does not make use of the indices but rather chooses to do 2 seq-scans.



SELECT * FROM shop.dvds
LEFT JOIN oldtables.movies ON mov_id = dvd_mov_id
LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean


Make sure you have indexes on both sm_info_ean and dvd_ean.

Regards,
--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] planner picks a bad plan (seq-scan instead of index)

2006-11-09 Thread Richard Huxton

Thomas H. wrote:


SELECT * FROM shop.dvds
LEFT JOIN oldtables.movies ON mov_id = dvd_mov_id
LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean
WHERE (lower(mov_name) LIKE '%superman re%' OR lower(dvd_name) like 
'%superman re%' OR lower(dvd_edition) LIKE '%superman re%')




Try putting your conditions as part of the join:
SELECT * FROM shop.dvds
LEFT JOIN
  oldtables.movies
ON
  mov_id = dvd_mov_id
  AND (
lower(mov_name) LIKE '%superman re%'
OR lower(dvd_name) like '%superman re%'
OR lower(dvd_edition) LIKE '%superman re%'
  )
LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean

I'd also be tempted to look at a tsearch2 setup for the word searches.
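A tsearch setup would look roughly like this (a sketch with invented column and index names; the 8.1-era contrib tsearch2 API differs in details from the later built-in syntax shown here):

```sql
-- add a searchable tsvector column and index it
ALTER TABLE shop.dvds ADD COLUMN dvd_tsv tsvector;
UPDATE shop.dvds
   SET dvd_tsv = to_tsvector(coalesce(dvd_name, '') || ' ' ||
                             coalesce(dvd_edition, ''));
CREATE INDEX dvds_tsv_idx ON shop.dvds USING gist (dvd_tsv);

-- word searches can then use the index instead of a seq-scan:
SELECT * FROM shop.dvds WHERE dvd_tsv @@ to_tsquery('superman');
```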
--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] RULE - special variables?

2006-11-09 Thread Richard Huxton

Matthew Peter wrote:

Do rules get special variables like triggers? Can I set variables in them like
triggers? 


You get NEW/OLD but they mean something subtly different. Rules act on a 
query-tree so they are more like placeholders.


You can't set variables in triggers. You do so in a function. If your 
rule calls a function, that could have variables.


Spend some time playing with rules - make sure you understand the 
difference between them and triggers.
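A minimal example of NEW as a placeholder in a rewrite rule (table names invented):

```sql
CREATE TABLE items     (id integer, name text);
CREATE TABLE items_log (id integer, name text, logged_at timestamptz);

-- the rule rewrites every INSERT on items into two statements;
-- NEW here stands for the inserted rows in the query tree,
-- not a per-row variable as in a trigger
CREATE RULE items_audit AS ON INSERT TO items
    DO ALSO INSERT INTO items_log VALUES (NEW.id, NEW.name, now());
```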

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] TRIGGERS - access sql query_string that called it?

2006-11-09 Thread Richard Huxton

Matthew Peter wrote:

Is it possible to access the entire sql query_string that called the trigger?


No. It doesn't necessarily exist in any useful form (think about what 
happens with rules on a view, or triggers making updates which fire 
other triggers - what is the "original" query?).


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] dblink / plpgsql - definition list cheats for record

2006-11-09 Thread Richard Huxton

Matthew Peter wrote:

Is there any way to use CREATE TYPE or an VIEW/TABLE defintion list instead of
manually defining it?


Yes, just define your function as returning that type rather than 
RECORD. That way the calling context will know what to expect.
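A sketch of that, with invented names (the dblink connection string is a placeholder):

```sql
CREATE TYPE product_t AS (id integer, name text);

CREATE FUNCTION remote_products() RETURNS SETOF product_t AS $$
    SELECT * FROM dblink('dbname=shop', 'SELECT id, name FROM products')
        AS t(id integer, name text);
$$ LANGUAGE sql;

-- callers no longer need a column definition list:
SELECT * FROM remote_products();
```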


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Table design - unknown number of column

2006-11-09 Thread Richard Ollier

Hello,

For a project I have a table containing products and flags.
The columns of this table are of 2 kinds :
- Not null data (id, column1, column2)
- Flags (100 different flags set to 1 or 0)

Over the time the number of flag will increase from 100 to 200 or more.

So I have 2 choices :
- Use a unique table and redesign this table and my application each 
time I need to add a flag


- Split this table in 2 tables : a table containing the Not null data 
and a table containing 3 columns (id, flag_name, flag_value). But in 
this case how can I do a select where (flag_a=1 and flag_c=0 and 
flag_k=1...)? I would have to use 200 or more joins on the main table.
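For what it's worth, the split design doesn't actually need one join per flag; a single join plus grouping can test any number of flags (a sketch, table and flag names invented):

```sql
SELECT p.id
FROM products p
JOIN product_flags f ON f.id = p.id
WHERE (f.flag_name, f.flag_value)
      IN (('flag_a', 1), ('flag_c', 0), ('flag_k', 1))
GROUP BY p.id
HAVING count(*) = 3;  -- all three flag conditions matched
```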


What would be the cleanest and most recommended solution ?

Thanks a lot for your help

Richard




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] RULE - special variables?

2006-11-09 Thread Albe Laurenz
> Do rules get special variables like triggers? Can I set 
> variables in them like
> triggers? 

A rule is one or more SQL-Statements - how do you set variables in SQL?

Don't think of a rule as a program!

In a statement in a rule, you have the pseudotables 'OLD' and 'NEW'
which refer to the affected rows of the table on which the rule is
defined, OLD (in DELETE and UPDATE rules) referring to the values before
the triggering statement takes place, and NEW (in INSERT and UPDATE)
referring to the values afterwards.

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Table design - unknown number of column

2006-11-09 Thread Alban Hertroys

Richard Ollier wrote:

Hello,

For a project I have a table containing products and flags.
The columns of this table are of 2 kinds :
- Not null data (id, column1, column2)
- Flags (100 different flags set to 1 or 0)

Over the time the number of flag will increase from 100 to 200 or more.


Having 200 flags as 200 fields... Writing queries on that is going to be 
painful.


I would probably look at bitwise operations, although 200 bits is quite 
a bit larger than a bigint. Maybe an array of ints would work...


I don't suppose you expect 2^200 different combinations, maybe some of 
the flags can be grouped together?


--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Table design - unknown number of column

2006-11-09 Thread Ragnar
On fim, 2006-11-09 at 10:56 +0100, Alban Hertroys wrote:
> Richard Ollier wrote:
> > For a project I have a table containing products and flags.
> > The columns of this table are of 2 kinds :
> > - Not null data (id, column1, column2)
> > - Flags (100 different flags set to 1 or 0)
> > 
> > Over the time the number of flag will increase from 100 to 200 or more.
> 
> Having 200 flags as 200 fields... Writing queries on that is going to be 
> painful.
> 
> I would probably look at bitwise operations, although 200 bits is quite 
> a bit larger than a bigint. Maybe an array of ints would work...

maybe the BIT VARYING datatype could be useful.

http://www.postgresql.org/docs/8.1/interactive/datatype-bit.html
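A sketch of how that could look (names invented):

```sql
-- one flags column instead of 200 boolean columns
CREATE TABLE products (id integer PRIMARY KEY, flags bit varying(200));

-- test flag number 5 (bits count from 1, left to right):
SELECT id FROM products WHERE substring(flags from 5 for 1) = B'1';

-- bitwise operators work on equal-length bit strings:
SELECT B'1010' & B'0110';  -- 0010
```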

gnari



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Can PostgreSQL notify a client that a trigger has fired?

2006-11-09 Thread Richard Huxton

Bill wrote:

Jorge Godoy wrote:


I have the impression that you're missing a lot of sections in the
manual...  How about some time to re-read it?


I don't know about you but for me a 1500 page manual is at least two
weeks of full time reading. I have read several sections of it but I
am trying to decide if PostgreSQL should be considered for a project
and I don't have 80 hours to make the evaluation. 


If you don't have 80 hours to evaluate a new database, I'd suggest 
sticking with whatever you're already familiar with. You only have to 
hit a couple of minor problems with your implementation to consume more 
than 80 hours. If you're up against timescales that short, then stick to 
technologies you already know front-to-back.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] [SQL] [ADMIN] Is there anyway to...

2006-11-09 Thread Richard Huxton

[EMAIL PROTECTED] wrote:

i'm wanting to learn something here so i'm going to
chime in.

the way i read what you are saying is that you'd have
start_date and number_days columns in your table.

each day a query would run and pull the start_date and
numbers_days columns.

the application (or postgresql function) would then
take the current date, subtract start date and
compare it to number of days.  if it is above that
number, the code will take some sort of action.

is that about it or have i missed something?


I think that's what Jim's talking about. Databases are good at filtering 
large numbers of rows.


You'll also want some sort of status code or "processed" flag, so you 
know not to look at rows you've already handled.


The other alternative would be an "expiry_list" table that stores the 
target row's key and when it expires. Add a row when the contract is 
created. Delete the row when the contract is paid. Keep it all 
up-to-date with triggers.
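A sketch of that trigger setup (column names start_date, number_days and paid are invented for illustration):

```sql
CREATE TABLE expiry_list (contract_id integer PRIMARY KEY, expires_on date);

CREATE FUNCTION track_expiry() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        -- contract created: remember when it expires
        INSERT INTO expiry_list
        VALUES (NEW.id, NEW.start_date + NEW.number_days);
    ELSIF TG_OP = 'UPDATE' AND NEW.paid THEN
        -- contract paid: nothing left to expire
        DELETE FROM expiry_list WHERE contract_id = NEW.id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER contract_expiry AFTER INSERT OR UPDATE ON contracts
    FOR EACH ROW EXECUTE PROCEDURE track_expiry();
```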


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] planner picks a bad plan (seq-scan instead of index)

2006-11-09 Thread Gregory S. Williamson
I am admittedly speaking up somewhat late here, and may be completely off base, 
but it seems to me that the "LIKE" operation is almost always going to be a 
loser, performance-wise, when there is an initial wildcard, e.g. "%superman 
re%" will require a sequential scan, while "superman re%" would not (assuming 
proper indexes matching case and type).
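To illustrate (index name invented; text_pattern_ops makes the btree usable for LIKE even under non-C locales):

```sql
CREATE INDEX dvds_lower_name_idx
    ON shop.dvds (lower(dvd_name) text_pattern_ops);

-- can use the index (anchored prefix):
SELECT * FROM shop.dvds WHERE lower(dvd_name) LIKE 'superman re%';

-- cannot use a btree index (leading wildcard):
SELECT * FROM shop.dvds WHERE lower(dvd_name) LIKE '%superman re%';
```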

I'd suggest tsearch2, possibly, which uses GIST indexes and may perhaps be a 
better match for this sort of problem.

HTH,

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Richard Huxton
Sent:   Thu 11/9/2006 1:22 AM
To: Thomas H.
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] planner picks a bad plan (seq-scan instead of index)

Thomas H. wrote:
> 
> SELECT * FROM shop.dvds
> LEFT JOIN oldtables.movies ON mov_id = dvd_mov_id
> LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean
> WHERE (lower(mov_name) LIKE '%superman re%' OR lower(dvd_name) like 
> '%superman re%' OR lower(dvd_edition) LIKE '%superman re%')
> 

Try putting your conditions as part of the join:
SELECT * FROM shop.dvds
LEFT JOIN
   oldtables.movies
ON
   mov_id = dvd_mov_id
   AND (
 lower(mov_name) LIKE '%superman re%'
 OR lower(dvd_name) like '%superman re%'
 OR lower(dvd_edition) LIKE '%superman re%'
   )
LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean

I'd also be tempted to look at a tsearch2 setup for the word searches.
-- 
   Richard Huxton
   Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match








---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] RULE - special variables?

2006-11-09 Thread Shane Ambler

Albe Laurenz wrote:
Do rules get special variables like triggers? Can I set 
variables in them like
triggers? 


A rule is one or more SQL-Statements - how do you set variables in SQL?



SET [ SESSION | LOCAL ] name { TO | = } { value | 'value' | DEFAULT }

http://www.postgresql.org/docs/8.1/interactive/sql-set.html


--

Shane Ambler
[EMAIL PROTECTED]

Get Sheeky @ http://Sheeky.Biz

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] autovacuum blues

2006-11-09 Thread Anton Melser

Hi,
I just can't understand why autovacuum is not working. I have a test
db/table which I insert values into (by the thousands) and can't work
out why my stats don't get updated. Could someone have a quick look at
my attached .conf and tell me what I am doing?
I am running it on FC5 8.1.4.fc5.1.
Cheers
Antoine


postgresql.conf
Description: Binary data

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] planner picks a bad plan (seq-scan instead of index)

2006-11-09 Thread Russell Smith

Thomas H. wrote:

hi list.

as soon as i left-join an additional table, the query takes 24sec 
instead of 0.2sec, although the added fields have no impact on the 
resultset:



SELECT * FROM shop.dvds
LEFT JOIN oldtables.movies ON mov_id = dvd_mov_id
WHERE (lower(mov_name) LIKE '%superman re%' OR lower(dvd_name) like 
'%superman re%' OR lower(dvd_edition) LIKE '%superman re%')


Hash Left Join  (cost=8402.16..10733.16 rows=39900 width=1276) (actual 
time=260.712..260.722 rows=2 loops=1)

 Hash Cond: (dvds.dvd_mov_id = movies.mov_id)
 Filter: ((lower((movies.mov_name)::text) ~~ '%superman re%'::text) OR 
(lower((dvds.dvd_edition)::text) ~~ '%superman re%'::text))
 ->  Seq Scan on dvds  (cost=0.00..1292.00 rows=39900 width=1062) 
(actual time=0.036..23.594 rows=20866 loops=1)
 ->  Hash  (cost=8194.93..8194.93 rows=82893 width=214) (actual 
time=168.121..168.121 rows=37417 loops=1)
   ->  Seq Scan on movies  (cost=0.00..8194.93 rows=82893 
width=214) (actual time=0.024..131.401 rows=37417 loops=1)

Total runtime: 264.193 ms
2 rows fetched


1. You MUST seq-scan dvds, as there is no way to do an index search 
on a LIKE with % at the beginning.
2. You are asking for a left join on dvds, which means you want all 
records, so you must seq-scan dvds.  The filters are all OR, so you 
can't say that a record is excluded until AFTER the join is completed.
3. The join estimates that 39900 rows will come out, but only 2 do; 
that's out by a large factor, 10^4, which means that it's not going to 
join movies after the filter is applied.


now, an additional table (containing 600k records) is added through a 
left join. all of a sudden the query takes 24sec. although there are 
indices on both dvds.dvd_ean and data_soundmedia.sm_info_ean, the 
planner does not make use of the indices but rather chooses to do 2 
seq-scans.
The 2 items from the first query are still true.  You just have a lot 
more records to play with now by joining in a 3rd table.  Which means 
there is no way to reduce the possible output set before the join.


->  Sort  (cost=286162.37..287781.38 rows=647601 width=660) (actual 
time=19336.011..20328.247 rows=646633 loops=1)
This sort here is where nearly all of the 24 seconds goes. I am assuming 
at this point that the Merge Join is chosen because of the stats problem 
listed next;


>  Merge Left Join  (cost=309190.05..313899.09 rows=159086 width=1722) 
(actual time=19876.552..21902.007 rows=20866 loops=1)
This estimate is out by a factor of 10.  I'd increase the statistics on 
the join columns to see if it decides to use a different join method.
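Something along these lines (the default statistics target is only 10):

```sql
-- raise the sample size for the join columns, then re-analyze
ALTER TABLE shop.data_soundmedia
    ALTER COLUMN sm_info_ean SET STATISTICS 100;
ALTER TABLE shop.dvds
    ALTER COLUMN dvd_ean SET STATISTICS 100;
ANALYZE shop.data_soundmedia;
ANALYZE shop.dvds;
```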


If the estimate for the Merge join is fixed to be closer, then it's 
likely an index scan would be chosen; 159000 is about 25% of the table.  
Assuming it's small on disk, then it's faster to do a seq-scan than all 
the random I/O to read the index, then the heap, to produce results.


How big is data_soundmedia? Mb size, not rows.
How much is your sort_mem/work_mem?



SELECT * FROM shop.dvds
LEFT JOIN oldtables.movies ON mov_id = dvd_mov_id
LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean
WHERE (lower(mov_name) LIKE '%superman re%' OR lower(dvd_name) like 
'%superman re%' OR lower(dvd_edition) LIKE '%superman re%')


Hash Left Join  (cost=317592.21..326882.92 rows=159086 width=1936) 
(actual time=21021.023..22242.253 rows=2 loops=1)

 Hash Cond: (dvds.dvd_mov_id = movies.mov_id)
 Filter: ((lower((movies.mov_name)::text) ~~ '%superman re%'::text) OR 
(lower((dvds.dvd_name)::text) ~~ '%superman re%'::text) OR 
(lower((dvds.dvd_edition)::text) ~~ '%superman re%'::text))
 ->  Merge Left Join  (cost=309190.05..313899.09 rows=159086 
width=1722) (actual time=19876.552..21902.007 rows=20866 loops=1)

   Merge Cond: ("outer"."?column20?" = "inner"."?column29?")
   ->  Sort  (cost=23027.68..23127.43 rows=39900 width=1062) 
(actual time=507.886..520.143 rows=20866 loops=1)

 Sort Key: (dvds.dvd_ean)::text
 ->  Seq Scan on dvds  (cost=0.00..1292.00 rows=39900 
width=1062) (actual time=0.047..100.415 rows=20866 loops=1)
   ->  Sort  (cost=286162.37..287781.38 rows=647601 width=660) 
(actual time=19336.011..20328.247 rows=646633 loops=1)

 Sort Key: (data_soundmedia.sm_info_ean)::text
 ->  Seq Scan on data_soundmedia  (cost=0.00..31080.01 
rows=647601 width=660) (actual time=0.074..2834.831 rows=647601 loops=1)
 ->  Hash  (cost=8194.93..8194.93 rows=82893 width=214) (actual 
time=177.033..177.033 rows=37417 loops=1)
   ->  Seq Scan on movies  (cost=0.00..8194.93 rows=82893 
width=214) (actual time=0.118..129.716 rows=37417 loops=1)

Total runtime: 24419.939 ms
2 rows fetched


shouldn't the planner join the additional table *after* filtering? even 
if it does first joining then filtering, why isn't the existing index 
used?

I would fix the above anomalies before aski

Re: [GENERAL] RULE - special variables?

2006-11-09 Thread Richard Huxton

Shane Ambler wrote:

Albe Laurenz wrote:
Do rules get special variables like triggers? Can I set variables in 
them like
triggers? 


A rule is one or more SQL-Statements - how do you set variables in SQL?



SET [ SESSION | LOCAL ] name { TO | = } { value | 'value' | DEFAULT }

http://www.postgresql.org/docs/8.1/interactive/sql-set.html


That's configuration settings, not user-defined variables though.

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] autovacuum blues

2006-11-09 Thread Richard Huxton

Anton Melser wrote:

Hi,
I just can't understand why autovacuum is not working. I have a test
db/table which I insert values into (by the thousands) and can't work
out why my stats don't get updated. Could someone have a quick look at
my attached .conf and tell me what I am doing?
I am running it on FC5 8.1.4.fc5.1.


1. Check "ps auxw | grep postgres" to see if it's running.
2. If so, check your PG logs and see if you see any activity. Turn 
connection and statement logging on and see what appears.
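For step 2, the relevant postgresql.conf lines would look roughly like this; note in particular that on 8.1 autovacuum silently does nothing unless the row-level stats collector is enabled:

```
# autovacuum depends on the row-level statistics collector on 8.1
stats_start_collector = on
stats_row_level = on
autovacuum = on

# logging, to see what is actually happening
log_connections = on
log_statement = 'all'
```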


Oh, and you might want to upgrade to 8.1.5 when convenient - that 
shouldn't affect this though.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Can PostgreSQL notify a client that a trigger has fired?

2006-11-09 Thread Jorge Godoy
"Bill" <[EMAIL PROTECTED]> writes:

> Jorge Godoy wrote:
>
>> I have the impression that you're missing a lot of sections in the
>> manual...  How about some time to re-read it?
>
> I don't know about you but for me a 1500 page manual is at least two
> weeks of full time reading. I have read several sections of it but I

It took me 5s to type "postgresql notify" (all words from the subject) into
Google and choose the first link from the results.  I don't see how this
compares to reading a 1500-page manual page by page. 
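The mechanism that search turns up is LISTEN/NOTIFY; a minimal trigger-based sketch, with table and channel names invented:

```sql
CREATE FUNCTION notify_change() RETURNS trigger AS $$
BEGIN
    NOTIFY table_changed;   -- wakes up every session that ran LISTEN
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_notify AFTER INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE notify_change();

-- in the client's session:
LISTEN table_changed;
```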

Anyway, just ignore this kind of answer if you wish.  I just believe that
you'll miss much more than what you'll get if you don't see the whole
picture. 

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Changing the number and type of columns in a view

2006-11-09 Thread Russell Smith

Hi General,

Is there a reason why it is not possible to redefine a view with a 
different number of columns or a different column type? 

It's been possible to change the datatypes of a table, and the column 
numbers, for a long time.  What are the restrictions on making this 
possible for views?


I know you can drop and recreate the view, but if the view has dependent 
objects it becomes more difficult.


I'm currently running 8.1 when I say it's not possible.

Thanks

Russell Smith

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] planner picks a bad plan (seq-scan instead of index) when adding an additional join

2006-11-09 Thread Thomas H.
1. You MUST seq-scan dvds, as there is no way to do an index search 
on a LIKE with % at the beginning.
2. You are asking for a left join on dvds, which means you want all 
records, so you must seq-scan dvds.  The filters are all OR, so you 
can't say that a record is excluded until AFTER the join is completed.


true, but that's fast (200ms).

3. The join estimates that 39900 rows will come out, but only 2 do; 
that's out by a large factor, 10^4, which means that it's not going to 
join movies after the filter is applied.



20866 is the total number of rows in the dvd table. the planner is now 
showing the accurate count after another (auto)vacuum run. of course it 
can't know the selectivity of a '%...%' comparison, so estimating the full 
result set is ok.


now, an additional table (containing 600k records) is added through a 
left join. all of a sudden the query takes 24sec. although there are 
indices on both dvds.dvd_ean and data_soundmedia.sm_info_ean, the planner 
does not make use of the indices but rather chooses to do 2 seq-scans.
The 2 items from the first query are still true.  You just have a lot more 
records to play with now by joining in a 3rd table.  Which means there is 
no way to reduce the possible output set before the join.



well, in normal cases the output set would be determined by index 
lookups (see bottom)




->  Sort  (cost=286162.37..287781.38 rows=647601 width=660) (actual 
time=19336.011..20328.247 rows=646633 loops=1)
This sort here is where nearly all of the 24 seconds goes. I am assuming 
at this point that the Merge Join is chosen because of the stats problem 
listed next;



exactly. but that sort shouldn't happen as there is an index on the 
join-field, and that is usually pretty fast (~400ms), but not here...




>  Merge Left Join  (cost=309190.05..313899.09 rows=159086 width=1722)
(actual time=19876.552..21902.007 rows=20866 loops=1)
This estimate is out by a factor of 10.  I'd increase the statistics on 
the join columns to see if it decides to use a different join method.


If the estimate for the Merge join is fixed to be closer, then it's likely 
an index scan would be chosen; 159000 is about 25% of the table.  Assuming 
it's small on disk, then it's faster to do a seq-scan than all the random 
I/O to read the index, then the heap, to produce results.



the discs are not the fastest, but there is plenty of free ram available. 
seq_page_cost is in its default state (1.0). should I raise this?



How big is data_soundmedia? Mb size, not rows.
How much is your sort_mem/work_mem?


data_soundmedia is 195mb + 105mb for indices (6 fields indexed).
work_mem = 30MB, sort_mem is undefined in 8.2's postgresql.conf

what troubles me is that it's only slow with this table (data_soundmedia). 
we have other tables (400k and 200k entries) that are joined with the same 
query in under 400ms total. these tables have the exact same structure 
and indices defined, but in those joins the planner properly uses an Index 
Scan...


if i force enable_seqscan = off, the planner makes use of the index, 
resulting in acceptable query speed:


Nested Loop Left Join  (cost=8402.16..257761.36 rows=83223 width=1067) 
(actual time=361.931..713.405 rows=2 loops=1)
 ->  Hash Left Join  (cost=8402.16..11292.37 rows=20873 width=407) (actual 
time=322.085..666.519 rows=2 loops=1)

   Hash Cond: (dvds.dvd_mov_id = movies.mov_id)
   Filter: ((lower((movies.mov_name)::text) ~~ '%superman re%'::text) 
OR (lower((dvds.dvd_name)::text) ~~ '%superman re%'::text) OR 
(lower((dvds.dvd_edition)::text) ~~ '%superman re%'::text))
   ->  Seq Scan on dvds  (cost=0.00..804.73 rows=20873 width=193) 
(actual time=11.781..329.672 rows=20866 loops=1)
   ->  Hash  (cost=8194.93..8194.93 rows=82893 width=214) (actual 
time=200.823..200.823 rows=37418 loops=1)
 ->  Seq Scan on movies  (cost=0.00..8194.93 rows=82893 
width=214) (actual time=0.070..155.178 rows=37418 loops=1)
 ->  Index Scan using data_soundmedia_info_ean_idx on data_soundmedia 
(cost=0.00..11.76 rows=4 width=660) (actual time=23.424..23.428 rows=1 
loops=2)
   Index Cond: ((data_soundmedia.sm_info_ean)::text = 
(dvds.dvd_ean)::text)

Total runtime: 716.988 ms

2 rows fetched (821 ms)


could it be the index gets somehow corrupted? but on the other hand, if i 
do a TRUNCATE before loading new data, it should be rebuilt anyway, 
shouldn't it?


thanks,
thomas 




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] planner picks a bad plan (seq-scan instead of index)

2006-11-09 Thread Thomas H.

Try putting your conditions as part of the join:
SELECT * FROM shop.dvds
LEFT JOIN
  oldtables.movies
ON
  mov_id = dvd_mov_id
  AND (
lower(mov_name) LIKE '%superman re%'
OR lower(dvd_name) like '%superman re%'
OR lower(dvd_edition) LIKE '%superman re%'
  )
LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean




unfortunately it's getting optimized into the same plan :-)



I'd also be tempted to look at a tsearch2 setup for the word searches.



tsearch2 doesn't work that well for exact matches (including special chars). 
but the culprit here isn't the '%...%' seqscan, but rather the additionally 
joined table (where no lookup except for the join-column takes place) that 
makes the query go from 200ms to 24sec.


regards,
thomas 




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Changing the number and type of columns in a view

2006-11-09 Thread Richard Huxton

Russell Smith wrote:

Hi General,

Is there a reason why it is not possible to redefine a view with a 
different number of columns or a different column type?
It's been possible to change the datatypes of a table, and the column 
numbers, for a long time.  What are the restrictions on making this 
possible for views?


I know you can drop and recreate the view, but if the view has dependent 
objects it becomes more difficult.


That's why - the dependent objects might have to change with it.

I suppose it should be possible to add columns to a view without causing 
too many problems. Or change a column-definition provided it isn't 
referenced anywhere.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Too many open cursors

2006-11-09 Thread Sandeep Kumar Jakkaraju
 
Hi All,

I have used Oracle 9i in the past and am currently working on Postgres.

Oracle throws the SQLException "Too many open cursors" when there are too
many connections to the database.

I have found that Postgres instead locks the DB and doesn't throw this
exception.

Am I right, or could mine be a special case?
-- Sandeep Kumar Jakkaraju


Re: [GENERAL] planner picks a bad plan (seq-scan instead of index)

2006-11-09 Thread Richard Huxton

Thomas H. wrote:

Try putting your conditions as part of the join:
SELECT * FROM shop.dvds
LEFT JOIN
  oldtables.movies
ON
  mov_id = dvd_mov_id
  AND (
lower(mov_name) LIKE '%superman re%'
OR lower(dvd_name) like '%superman re%'
OR lower(dvd_edition) LIKE '%superman re%'
  )
LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean


unfortunately its getting optimized into the same plan :-)


OK - in that case try explicit subqueries:

SELECT ... FROM
(SELECT * FROM shop.dvds
 LEFT JOIN oldtables.movies ON mov_id = dvd_mov_id
 WHERE lower(mov_name) LIKE ...
) AS bar
LEFT JOIN shop.data_soundmedia ON sm_info_ean = bar.dvd_ean


I'd also be tempted to look at a tsearch2 setup for the word searches.



tsearch2 doesn't work that well for exact matches (including special 
chars). but the culprit here isn't the '%...%' seqscan, but rather the 
additionally joined table (where no lookup except for the join-column 
takes place) that makes the query go from 200ms to 24sec.


Agreed, but I'd still be inclined to let tsearch do a first filter then 
limit the results with LIKE.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] autovacuum blues

2006-11-09 Thread Anton Melser

On 09/11/06, Richard Huxton  wrote:

Anton Melser wrote:
> Hi,
> I just can't understand why autovacuum is not working. I have a test
> db/table which I insert values into (by the thousands) and can't work
> out why my stats don't get updated. Could someone have a quick look at
> my attached .conf and tell me what I am doing?
> I am running it on FC5 8.1.4.fc5.1.

1. Check "ps auxw | grep postgres" to see if it's running.
2. If so, check your PG logs and see if you see any activity. Turn
connection and statement logging on and see what appears.



...
postgres  1300  0.0  1.1  20180  3048 ?S12:03   0:00
/usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
postgres  1302  0.0  0.2   9968   544 ?S12:03   0:00
postgres: logger process
postgres  1304  0.0  0.4  20316  1188 ?S12:03   0:00
postgres: writer process
postgres  1305  0.0  0.6  10968  1544 ?S12:03   0:00
postgres: stats buffer process
postgres  1306  0.0  0.3  10200   796 ?S12:03   0:00
postgres: stats collector process
...
Any chance you could give me some pointers on activating logging? My
thoughts were to log
log_planner_stats = on
log_min_messages = info

Anything else?
Cheers
Antoine

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] [PL/pgSQL] How should I use FOUND special variable. Documentation is little unclear for me

2006-11-09 Thread Jeremiasz Miedzinski
Hello.

I'm porting some procedures from PL/SQL and I encountered the following
problem. In PL/SQL I'm using this statement related to a cursor:

OPEN crs_cnt(start_millis, end_millis);
LOOP
    FETCH crs_cnt INTO row_cnt;
    EXIT WHEN crs_cnt%NOTFOUND;
    INSERT INTO spm_audit_stats VALUES (SEQ_SPM_ID_AUTO_INC.nextval,
        start_millis, base_stat_period, row_cnt.adt_count, row_cnt.adt_avg,
        row_cnt.adt_max, row_cnt.adt_min, row_cnt.adt_stdev,
        row_cnt.adt_service_name, row_cnt.adt_root_user);
    global_counter := global_counter + 1;
END LOOP;
CLOSE crs_cnt;

Now I need to do the same in PL/pgSQL. It's rather simple, but I don't know
how to use the FOUND variable described in the documentation:

"FETCH retrieves the next row from the cursor into a target, which may be a
row variable, a record variable, or a comma-separated list of simple
variables, just like SELECT INTO. As with SELECT INTO, the special variable
FOUND may be checked to see whether a row was obtained or not."

When I try to use it the Oracle way, my DB reports an error. I also tried
to use it like this:

IF NOT crs_cnt%FOUND THEN ...

But that also didn't work for me.

Thanks for any help.
Kind Regards.
-- audi vide sile --


Re: [GENERAL] reproducing this issue on PG 8.0.0 ERROR: index "patient_pkey" is not a btree

2006-11-09 Thread Alvaro Herrera
surabhi.ahuja wrote:
>  hi
> I am using Postgres 8.0.0 and we found this issue "ERROR:  index
> "patient_pkey" is not a btree" I have been informed that we should
> shift to Postgres 8.0.9
>  
> I discussed this with my team member and they are asking if we can
> upgrade to the latest Postgres version i.e. 8.1.5

The difference is that updating to 8.0.9 would be dead simple (just
upgrade the binaries), but upgrading to 8.1.5 is more difficult because
it requires you to dump and restore the database, and there may be minor
incompatibilities for your apps.  So an idea might be to have a
medium-term plan to migrate to a new major series (say 8.1 or 8.2), and
a very short term plan to migrate to 8.0.9.

> I have some questions regarding this:
> 1. will this version solve the problem that I have mentioned?

If it is solved by 8.0.9 then it will be solved by 8.1.5 as well.  It
may be something independent of Postgres, for example hardware failure,
in which case it probably won't be solved by either.

-- 
Alvaro Herrera                 http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] [PL/pgSQL] How should I use FOUND special variable.

2006-11-09 Thread Matthias . Pitzl
Hi!
 
Just use this:

FETCH crs_cnt into row_cnt;
EXIT WHEN NOT FOUND;
 
Greetings,
Matthias

  
  -----Original Message-----
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jeremiasz Miedzinski
  Sent: Thursday, November 09, 2006 1:15 PM
  To: pgsql-general@postgresql.org
  Subject: [GENERAL] [PL/pgSQL] How should I use FOUND special variable. Documentation is little unclear for me
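[Putting Matthias's FETCH / EXIT WHEN NOT FOUND answer into one complete PL/pgSQL function, as a sketch only: the cursor query, table, and column names below are invented for illustration and are not from the thread.]

```sql
CREATE OR REPLACE FUNCTION count_audit_rows(p_start bigint, p_end bigint)
RETURNS integer AS $$
DECLARE
    -- hypothetical parameterized cursor, mirroring the Oracle crs_cnt
    crs_cnt CURSOR (c_start bigint, c_end bigint) FOR
        SELECT * FROM spm_audit
        WHERE adt_millis BETWEEN c_start AND c_end;  -- invented column
    row_cnt        record;
    global_counter integer := 0;
BEGIN
    OPEN crs_cnt(p_start, p_end);
    LOOP
        FETCH crs_cnt INTO row_cnt;
        EXIT WHEN NOT FOUND;  -- PL/pgSQL equivalent of EXIT WHEN crs_cnt%NOTFOUND
        global_counter := global_counter + 1;
    END LOOP;
    CLOSE crs_cnt;
    RETURN global_counter;
END;
$$ LANGUAGE plpgsql;
```

FOUND is a bare boolean in PL/pgSQL; there is no cursor%FOUND syntax.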



Re: [GENERAL] autovacuum blues

2006-11-09 Thread Matthias . Pitzl
Hi Anton!

I'm not sure how this is with 8.1 but on 7.4.14 we have to enable row level
statistics collection for autovacuum:
stats_row_level = true

Greetings,
Matthias

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Anton Melser
> Sent: Thursday, November 09, 2006 1:12 PM
> To: Richard Huxton
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] autovacuum blues

> postgres  1300  0.0  1.1  20180  3048 ?  S  12:03  0:00
> /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
> postgres  1302  0.0  0.2   9968   544 ?  S  12:03  0:00
> postgres: logger process
> postgres  1304  0.0  0.4  20316  1188 ?  S  12:03  0:00
> postgres: writer process
> postgres  1305  0.0  0.6  10968  1544 ?  S  12:03  0:00
> postgres: stats buffer process
> postgres  1306  0.0  0.3  10200   796 ?  S  12:03  0:00
> postgres: stats collector process
> ...
> Any chance you could give me some pointers on activating logging? My
> thoughts were to log
> log_planner_stats = on
> log_min_messages = info
> 



Re: [GENERAL] planner picks a bad plan (seq-scan instead of index)

2006-11-09 Thread Thomas H.

OK - in that case try explicit subqueries:

SELECT ... FROM
(SELECT * FROM shop.dvds
 LEFT JOIN shop.oldtables.movies
 WHERE lower(mov_name) LIKE ...
) AS bar
LEFT JOIN shop.data_soundmedia



same result, have tried this as well (22sec). it's the LEFT JOIN 
shop.data_soundmedia for which the planner picks a seqscan instead of an 
index scan, no matter what...



I'd also be tempted to look at a tsearch2 setup for the word searches.



tsearch2 doesn't work that well for exact matches (including special 
chars). but the culprit here isn't the '%...%' seqscan, rather the 
additional joined table (where no lookup except for the join column takes 
place) that makes the query go from 200ms to 24sec.


Agreed, but I'd still be inclined to let tsearch do a first filter then 
limit the results with LIKE.



would be a way to probably speed up the seqscan on shop.dvds that now takes 
200ms. unfortunately, tsearch2 is broken for me in 8.2 (filling tsearch2 
tsvector columns crashes the backend). but that's a different story :-)


- thomas 






Re: [GENERAL] [PL/pgSQL] How should I use FOUND special variable.

2006-11-09 Thread brian



Jeremiasz Miedzinski wrote:

Hello.

I'm porting some procedures from PL/SQL  and I encountered following
problem:
In PL/SQL I'm using this statement related to cursor:

OPEN crs_cnt(start_millis, end_millis);
LOOP
FETCH crs_cnt into row_cnt;
   EXIT WHEN crs_cnt%NOTFOUND;
   insert into spm_audit_stats values(SEQ_SPM_ID_AUTO_INC.nextval,
start_millis, base_stat_period, row_cnt.adt_count, row_cnt.adt_avg,
row_cnt.adt_max, row_cnt.adt_min, row_cnt.adt_stdev,
row_cnt.adt_service_name, row_cnt.adt_root_user);
global_counter := global_counter + 1;
END LOOP;
CLOSE crs_cnt;

Now, I need to do the same action in PL/pgSQL. It's rather simple, but I
don't know how to use FOUND variable described in documentation:

FETCH retrieves the next row from the cursor into a target, which may be a
row variable, a record variable, or a comma-separated list of simple
variables, just like SELECT INTO. As with SELECT INTO, the special variable
FOUND may be checked to see whether a row was obtained or not.

When I'm trying to use it in Oracle way, my DB reports error. Also I tried
to use it like that:

IF NOT crs_cnt%FOUND THEN ...

But it also doesn't worked for me.

Thanks for any help.

Kind Regards.



EXIT WHEN NOT FOUND

brian



Re: [GENERAL] Table design - unknown number of column

2006-11-09 Thread Richard Ollier

Alban Hertroys wrote:

Richard Ollier wrote:

Hello,

For a project I have a table containing products and flags.
The columns of this table are of 2 kinds :
- Not null data (id, column1, column2)
- Flags (100 different flags set to 1 or 0)

Over the time the number of flag will increase from 100 to 200 or more.


Having 200 flags as 200 fields... Writing queries on that is going to 
be painful.


I would probably look at bitwise operations, although 200 bits is 
quite a bit larger than a bigint. Maybe an array of ints would work...


I don't suppose you expect 2^200 different combinations, maybe some of 
the flags can be grouped together?

Thanks for your time !

Even if I know it won't happen, there are 2^200 different combinations 
possible, as products are very different from each other...
If I use an array of int, is the search fast and efficient? I've actually 
never had to use an array in postgres till now..






Re: [GENERAL] converting Informix outer to Postgres

2006-11-09 Thread Harco de Hilster

I am not familiar with Informix but:

- is OUTER() a LEFT or FULL outer join?
- it is important where you put your join condition in Postgres wrt NULL 
insertions of OUTER joins
E.g. Tables A(k,a) with (k1,a1), (k2, a2)  records and table B(k,b) with 
(k1, b1) will result in:


A LEFT OUTER JOIN B ON a.k = b.k
AxB
k1,a1,k1,b1
k2,a2,NULL,NULL

and

A LEFT OUTER JOIN B ON a.k = b.k WHERE a.k = b.k
AxB
k1,a1,k1,b1

and

A LEFT OUTER JOIN B WHERE a.k = b.k
AxB
k1,a1,k1,b1


Since you moved your join condition from the WHERE to the ON part of the 
query, you might run into this subtle difference in joining (been there, 
done that ;-)).
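[Harco's ON-vs-WHERE point can be reproduced directly in psql, using his table and column names:]

```sql
CREATE TABLE a (k text, a text);
CREATE TABLE b (k text, b text);
INSERT INTO a VALUES ('k1', 'a1');
INSERT INTO a VALUES ('k2', 'a2');
INSERT INTO b VALUES ('k1', 'b1');

-- join condition in ON: the unmatched row survives with NULLs
SELECT * FROM a LEFT OUTER JOIN b ON a.k = b.k;
--  k1 | a1 | k1 | b1
--  k2 | a2 |    |

-- same condition repeated in WHERE: the NULL-extended row fails the
-- test, and the outer join silently degenerates to an inner join
SELECT * FROM a LEFT OUTER JOIN b ON a.k = b.k WHERE a.k = b.k;
--  k1 | a1 | k1 | b1
```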


Regards,

Harco

[EMAIL PROTECTED] wrote:

Hi all,
I have been working on this Informix SQL query which has an outer join.
I have attached Informix query and my "supposedly" solution to this query
but I cannot get the same count. I appreciate for any help.
Thanks.

--Informix query
select count(u.id)
from user u, invention i, inv_contracts ic, inv_milestones im1, milestonedef 
mdef1,
OUTER inv_milestones im2,
milestonedef mdef2
where u.id = i.user_id and
ic.inv_id = i.id and
ic.contract_id = mdef1.contract_id and
im1.inv_id = i.id and
mdef1.id = im1.milestone_id and
im1.datesent BETWEEN '2005-05-05' AND '2005-05-10' and
ic.contract_id = mdef2.contract_id and
im2.inv_id = i.id and
mdef2.id = im2.milestone_id and
im1.datereceived IS NULL

--Postges query
select count(u.id)
from dbuser u, inv_contracts ic, inv_milestones im1, milestonedef mdef1,
--OUTER inv_milestones im2,
milestonedef mdef2 LEFT OUTER JOIN inv_milestones im2 ON mdef2.id = 
im2.milestone_id
LEFT OUTER JOIN invention i ON im2.inv_id = i.id
where u.id = i.user_id and 
ic.inv_id = i.id and 
ic.contract_id = mdef1.contract_id and 
im1.inv_id = i.id and 
mdef1.id = im1.milestone_id and 
im1.datesent BETWEEN '2005-05-05' AND '2005-05-10' and

ic.contract_id = mdef2.contract_id and
--im2.inv_id = i.id and 
--mdef2.id = im2.milestone_id and 
im1.datereceived IS NULL


-
This mail sent through IMP: www.resolution.com





Re: [GENERAL] Table design - unknown number of column

2006-11-09 Thread Sean Davis
I think the typical way of attacking a problem would be a second and third 
table.  The second table would look like:

flag_type table
  flag_type_id
  flag_type (like the column name in your original table)
  flag_type_description (BONUS:  you can describe each flag)

product_flag table
  product_id (fk to your original table)
  flag_type_id (fk to the flag_type table)
  flag_value (0, 1, or whatever you want to store here)

The advantages with this method are several:
1)  No wasted storage for all those NULLs where a flag is not needed
2)  Should be very fast to lookup by product_id to get all flags
3)  You can expand to an arbitrary number of flags
4)  Your schema remains normalized

Sean
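[As DDL, Sean's two extra tables might look like the sketch below; the parent products table and its key are assumptions here, not from the thread.]

```sql
CREATE TABLE flag_type (
    flag_type_id          serial PRIMARY KEY,
    flag_type             text NOT NULL UNIQUE,  -- like the old column name
    flag_type_description text                   -- bonus: document each flag
);

CREATE TABLE product_flag (
    product_id   integer  NOT NULL REFERENCES product (product_id),
    flag_type_id integer  NOT NULL REFERENCES flag_type (flag_type_id),
    flag_value   smallint NOT NULL DEFAULT 1,
    PRIMARY KEY (product_id, flag_type_id)       -- one row per product/flag pair
);
```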



Re: [GENERAL] planner picks a bad plan (seq-scan instead of index)

2006-11-09 Thread Richard Huxton

Thomas H. wrote:

OK - in that case try explicit subqueries:

SELECT ... FROM
(SELECT * FROM shop.dvds
 LEFT JOIN shop.oldtables.movies
 WHERE lower(mov_name) LIKE ...
) AS bar
LEFT JOIN shop.data_soundmedia



same result, have tried this as well (22sec). it's the LEFT JOIN 
shop.data_soundmedia for which the planner picks a seqscan instead of 
an index scan, no matter what...


Two things to try:
1. "SET enable_seqscan = false" and see if that forces it. If not 
there's something very odd
2. Try adding a LIMIT 99 to the inner query (bar) so PG knows how many 
(few) rows will emerge.


I'm guessing we're up against PG's poor estimate on the '%...%' filter. 
If you were getting 160,000 rows in the final result then a seq-scan 
might well be the way to go.


The only workaround that I can think of (if we can't persuade the 
planner to cooperate) is to build a temp-table containing dvd_ean's for 
the first part of the query then analyse it and join against that. That 
way PG's row estimate will be accurate regardless of your text filtering.
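[The temp-table workaround Richard describes would look roughly like this; table and column names are taken from the thread, the LIKE pattern from the original query:]

```sql
BEGIN;

CREATE TEMP TABLE tmp_dvds AS
    SELECT * FROM shop.dvds
    WHERE lower(dvd_name) LIKE '%superman re%'
       OR lower(dvd_edition) LIKE '%superman re%';

ANALYZE tmp_dvds;   -- now the planner knows the real (small) row count

SELECT * FROM tmp_dvds
LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean;

COMMIT;   -- the temp table vanishes at session end (or use ON COMMIT DROP)
```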


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] autovacuum blues

2006-11-09 Thread Alvaro Herrera
Anton Melser wrote:
> Hi,
> I just can't understand why autovacuum is not working. I have a test
> db/table which I insert values into (by the thousands) and can't work
> out why my stats don't get updated. Could someone have a quick look at
> my attached .conf and tell me what I am doing?
> I am running it on FC5 8.1.4.fc5.1.

You can tell whether autovacuum is running by using

SHOW autovacuum;


Note that if you have a firewall of some sort stopping UDP messages from
being transmitted inside your machine, the stats collector may not
start, which may stop the autovacuum daemon from starting.  If you're
not seeing stat updates then there's probably something like that going
on.  Maybe the system has emitted a warning message at server start;
check the logs.

-- 
Alvaro Herrera                 http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] Table design - unknown number of column

2006-11-09 Thread Leif B. Kristensen
On Thursday 9. November 2006 09:34, Richard Ollier wrote:
>Hello,
>
>For a project I have a table containing products and flags.
>The columns of this table are of 2 kinds :
>- Not null data (id, column1, column2)
>- Flags (100 different flags set to 1 or 0)
>
>Over the time the number of flag will increase from 100 to 200 or
> more.
>
>So I have 2 choices :
>- Use a unique table and redesign this table and my application each
>time I need to add a flag
>
>- Split this table in 2 tables : a table containing the Not null data
>and a table containing 3 columns (id, flag_name, flag_value). But in
>this case how can I do a select where (flag_a=1 and flag_c=0 and
>flag_k=1...) ? I would like to have 200 or more joins on the main
> table.
>
>What would be the cleanest and most recommended solution ?

I'd go for alternative b. If the only flag values are 0 and 1, you can  
skip the value column entirely and just enter the records where the 
value is 1.

Rather than having the flag names in this table, I'd break the names out 
in another table flag_types with the columns flag_id and flag_name.

Then, your flag table becomes a standard many-to-many crosstable:

create table flags (
product_fk integer references products (product_id),
flag_fk integer references flag_types (flag_id)
);

select product_fk, flag_name from flags, flag_types
where flags.flag_fk = flag_types.flag_id
and product_fk = 42

will give you every flag that is set for this product. If you need to 
set a flag for any product to 0, just delete the row:

delete from flags where flag_fk = 120 and product_fk = 42;
-- 
Leif Biberg Kristensen | Registered Linux User #338009
http://solumslekt.org/ | Cruising with Gentoo/KDE
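[With Leif's schema, the "flag_a=1 and flag_c=0" style of question from the original post becomes an EXISTS / NOT EXISTS pair; the flag names below are invented for illustration:]

```sql
-- products that have flag_a set but flag_c not set
SELECT p.product_id
FROM products p
WHERE EXISTS (
        SELECT 1 FROM flags f
        JOIN flag_types t ON f.flag_fk = t.flag_id
        WHERE f.product_fk = p.product_id AND t.flag_name = 'flag_a')
  AND NOT EXISTS (
        SELECT 1 FROM flags f
        JOIN flag_types t ON f.flag_fk = t.flag_id
        WHERE f.product_fk = p.product_id AND t.flag_name = 'flag_c');
```

An absent row means "0", so no 200-way join is needed.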



[GENERAL] Why overlaps is not working

2006-11-09 Thread Andrus
set datestyle to iso,iso;

select 1 where ('2006-10-31'::date, '-12-31'::date) OVERLAPS
   ('2006-10-16'::DATE, '2006-10-31':: DATE)


does not return any rows.

Why ?
How to make overlaps to return correct result?

Andrus.





Re: [GENERAL] Why overlaps is not working

2006-11-09 Thread Matthias . Pitzl
Hm, why not this one:

select ('2006-10-31'::date, '-12-31'::date) OVERLAPS
('2006-10-16'::DATE, '2006-10-31':: DATE);

 overlaps
--
 f
(1 row)

Greetings,
Matthias

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Andrus
> Sent: Thursday, November 09, 2006 2:47 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Why overlaps is not working
> 
> 
> set datestyle to iso,iso;
> 
> select 1 where ('2006-10-31'::date, '-12-31'::date) OVERLAPS
>('2006-10-16'::DATE, '2006-10-31':: DATE)
> 
> 
> does not return any rows.
> 
> Why ?
> How to make overlaps to return correct result?
> 
> Andrus.
> 
> 
> 



[GENERAL] database dump then restore on another system?

2006-11-09 Thread Rick Schumeyer

To date I have always used pg on a system where I had pg superuser status.
I'm trying to move a database from such a system to one where I am just 
a user, and I'm having a couple of problems.


The first is, the output of pg_dump has a lot of lines like:

ALTER FUNCTION some_function OWNER TO rick;

The user 'rick' does not exist on the target system.  I've looked at the 
help for pg_dump...it looks like specifying "no-owner" will skip these 
lines.  Are there any side effects I need to be aware of?


The second problem is the statement:

CREATE PROCEDURAL LANGUAGE plpgsql;

Apparently I need to be a pg superuser to do this?  Do I need to get the 
db admin to run this statement before I load the database?  What if, 
during testing, I need to drop/create the database?  Do I need the admin 
to run something every time?
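[A possible workflow, as a sketch: the database name is a placeholder, and the language creation needs superuser rights, so it is a one-time favour from the admin per (re)created database. Not runnable without a live server.]

```shell
# dump without ALTER ... OWNER TO and GRANT/REVOKE statements
pg_dump --no-owner --no-privileges mydb > mydb.sql

# admin, once per (re)created database:
#   createlang plpgsql mydb

# restore as an ordinary user; objects end up owned by the restoring role
psql -d mydb -f mydb.sql
```

The main side effect of --no-owner is exactly what you want here: every restored object belongs to whoever runs the restore.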




Re: [GENERAL] Why overlaps is not working

2006-11-09 Thread Alban Hertroys

Andrus wrote:

set datestyle to iso,iso;

select 1 where ('2006-10-31'::date, '-12-31'::date) OVERLAPS
   ('2006-10-16'::DATE, '2006-10-31':: DATE)


does not return any rows.

Why ?


They're adjacent, they don't overlap. Check the documentation on 
OVERLAPS, I'm sure it's explicit about whether it is inclusive or 
exclusive (the latter apparently).



How to make overlaps to return correct result?


 select 1 where ('2006-10-30'::date, '-12-31'::date) OVERLAPS
('2006-10-16'::DATE, '2006-10-31':: DATE);
 ?column?
--
1
(1 row)

--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] Why overlaps is not working

2006-11-09 Thread A. Kretschmer
On Thu, 09.11.2006 at 15:46:50 +0200, Andrus wrote:
> set datestyle to iso,iso;
> 
> select 1 where ('2006-10-31'::date, '-12-31'::date) OVERLAPS
>('2006-10-16'::DATE, '2006-10-31':: DATE)
> 
> 
> does not return any rows.
> 
> Why ?
> How to make overlaps to return correct result?

Because they don't overlap.

Example:

test=*# select ('2006-10-01'::date, '2006-10-11'::date) OVERLAPS 
('2006-10-11'::DATE, '2006-10-20'::DATE);
 overlaps
--
 f
(1 row)

test=*# select ('2006-10-01'::date, '2006-10-12'::date) OVERLAPS 
('2006-10-11'::DATE, '2006-10-20'::DATE);
 overlaps
--
 t
(1 row)


Your date ranges don't overlap, because one of them ends on '2006-10-31' and 
the other begins on '2006-10-31'. And your query can't return anything 
because the where-condition returns false.


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47215,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [GENERAL] Why overlaps is not working

2006-11-09 Thread Andrus
I have a number of select statements (in 8.1 and 8.2beta) which assume that 
overlaps returns true for those cases.

Which the best way to fix them ?

Should I use AND, OR and date comparison operators instead of OVERLAPS ?

Andrus. 





Re: [GENERAL] Why overlaps is not working

2006-11-09 Thread Andrus
> They're adjacent, they don't overlap. Check the documentation on OVERLAPS, 
> I'm sure it's explicit about whether it is inclusive or exclusive (the 
> latter apparently).

The 8.2 doc does not explain the term "overlap". It only says:

"This expression yields true when two time periods (defined by their 
endpoints) overlap"

>> How to make overlaps to return correct result?
>
>  select 1 where ('2006-10-30'::date, '-12-31'::date) OVERLAPS
> ('2006-10-16'::DATE, '2006-10-31':: DATE);

In real queries I have column names and parameters instead of data 
constants.
The only way seems to be to replace the OVERLAPS operator with AND, OR and 
<= operators.

Is that so?

Andrus. 
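[One hedged way to do that: for closed intervals [s1, e1] and [s2, e2], "overlaps inclusively" reduces to s1 <= e2 AND s2 <= e1. The table and column names below are hypothetical stand-ins for the real queries:]

```sql
-- rows whose [period_start, period_end] touches or overlaps the given range
SELECT *
FROM bookings b
WHERE b.period_start <= '2006-10-31'::date   -- s1 <= e2
  AND '2006-10-16'::date <= b.period_end;    -- s2 <= e1
```

Unlike OVERLAPS, this treats a shared endpoint as an overlap.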





Re: [GENERAL] Why overlaps is not working

2006-11-09 Thread William Leite Araújo
2006/11/9, Andrus <[EMAIL PROTECTED]>:
> > They're adjacent, they don't overlap. Check the documentation on
> > OVERLAPS, I'm sure it's explicit about whether it is inclusive or
> > exclusive (the latter apparently).
>
> 8.2 doc does not explain term overlap. It only says:
> "This expression yields true when two time periods (defined by their
> endpoints) overlap"
>
> The only way it seems to replace OVERLAPS operator with AND, OR, <=
> operators.
> Is it so ?
> Andrus.

Maybe:

    ('2006-10-16'::DATE BETWEEN '2006-10-30'::date AND '-12-31'::date) OR
    ('2006-10-31'::DATE BETWEEN '2006-10-30'::date AND '-12-31'::date)
-- William Leite Araújo


[GENERAL] cannot connect anymore from a remote host

2006-11-09 Thread Luca Ferrari
Hi all,
after a crash of my machine I restarted the pgsql as usual, and I can connect 
from the machine itself, but no more from a remote host. I checked the 
pg_hba.conf file and it's ok, but either from psql or pgadmin I cannot 
connect to the host. Nmapping my host I cannot see the daemon listening on 
the port 5432.
I've started the daemon as:
postmaster -D /mnt/data/database &
and my pg_hba.conf gile contains:
local   all all trust
# IPv4-style local connections:
hostall all 127.0.0.1 255.255.255.255   trust
hostall all 192.168.1.0   255.255.255.0 trust
hostall all 192.168.2.0   255.255.255.0 trust
# IPv6-style local connections:
hostall all ::1   
:::::::trust


Any suggestion?
Thanks,
Luca



[GENERAL] posgres headers

2006-11-09 Thread Antonios Katsikadamos
Hi all. Sorry to bother. Does anyone know where the postgres headers are 
stored? Which files constitute the postgres headers?

kind regards,

Antonios


Re: [GENERAL] Why overlaps is not working

2006-11-09 Thread Tom Lane
Alban Hertroys <[EMAIL PROTECTED]> writes:
> They're adjacent, they don't overlap. Check the documentation on 
> OVERLAPS, I'm sure it's explicit about whether it is inclusive or 
> exclusive (the latter apparently).

It's not very clear, but the spec defines (S1,T1) OVERLAPS (S2,T2)
as

  ( S1 > S2 AND NOT ( S1 >= T2 AND T1 >= T2 ) )
  OR
  ( S2 > S1 AND NOT ( S2 >= T1 AND T2 >= T1 ) )
  OR
  ( S1 = S2 AND ( T1 <> T2 OR T1 = T2 ) )

(for the simple case where there are no nulls and S1 <= T1, S2 <= T2).
So it looks to me like the intervals are actually considered to be
half-open intervals [S1, T1).  Which is something that has its uses,
but it's a bit surprising compared to, say, BETWEEN.

If you don't like it, write your own comparison function ...

regards, tom lane



Re: [GENERAL] posgres headers

2006-11-09 Thread Adrian Klaver
On Thursday 09 November 2006 06:33 am, Antonios Katsikadamos wrote:
> Hi all. Sorry to bother. Does anyone know where the postgres headers are
> stored? Which files constitute postgres headers?
>
>  kind regards,
>
>  Antonios
>

I answered this question for you on the psycopg list.
-- 
Adrian Klaver   
[EMAIL PROTECTED]



[GENERAL] [PL/pgSQL] Commit every N rows. Is it possible ?

2006-11-09 Thread Jeremiasz Miedzinski
Hello,

I'm still trying to convert my PL/SQL stored procedures into PL/pgSQL. Now I 
have a problem with committing the transaction every N rows:

loop
    fetch csr_ac into row_id;
    if not FOUND then
        exit;
    end if;
    counter := counter + 1;
    delete from spm_audit where adt_id=row_id;
    delete from spm_audit_pipeline_data where apd_adt_id=row_id;
    global_counter := global_counter + 1;
    if counter = rows_between_commit then
        counter := 0;
        commit;
    end if;
end loop;

I'm digging into the postgresql documentation, but maybe I'm just not smart 
enough to understand the way transactions are processed in PL/pgSQL. Is it 
possible to port the above code to PL/pgSQL?

Kind Regards.
-- audi vide sile --


Re: [GENERAL] [PL/pgSQL] Commit every N rows. Is it possible ?

2006-11-09 Thread Richard Huxton

Jeremiasz Miedzinski wrote:

I'm digging into the postgresql documentation but maybe I'm just not smart
enough to understand the way transactions are processed in PL/pgSQL. Is it
possible to port the above code to PL/pgSQL?


All functions, including pl/pgsql functions take place within a 
transaction. You cannot commit, but you can catch errors and roll back 
to savepoints within a transaction.


If you want to have multiple transactions you will need to step outside 
of the database.


It's not clear to me why your function does what it does anyway. I can't 
see why you wouldn't just do this as standard queries.
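[If the loop really only deletes, it can probably be collapsed into two plain statements in one transaction. The cursor's selection criterion is not shown in the thread, so the WHERE condition below is an invented placeholder:]

```sql
BEGIN;

-- child rows first (column names taken from the posted loop)
DELETE FROM spm_audit_pipeline_data
WHERE apd_adt_id IN (SELECT adt_id FROM spm_audit
                     WHERE adt_millis < 1162000000000);  -- placeholder condition

DELETE FROM spm_audit
WHERE adt_millis < 1162000000000;                        -- same placeholder

COMMIT;
```

With a set-based DELETE there is nothing to commit every N rows; the whole thing is one atomic operation.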


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] cannot connect anymore from a remote host

2006-11-09 Thread Jeremiasz Miedzinski
2006/11/9, Luca Ferrari <[EMAIL PROTECTED]>:
> Hi all,
> after a crash of my machine I restarted the pgsql as usual, and I can connect
> from the machine itself, but no more from a remote host. I checked the
> pg_hba.conf file and it's ok, but either from psql or pgadmin I cannot
> connect to the host. Nmapping my host I cannot see the daemon listening on

Hi Luca,

Have you checked the network interfaces on your machine? Maybe some of them 
didn't start up properly...

Rgds.
-- audi vide sile --


Re: [GENERAL] cannot connect anymore from a remote host

2006-11-09 Thread Richard Huxton

Luca Ferrari wrote:

Hi all,
after a crash of my machine I restarted the pgsql as usual, and I can connect 
from the machine itself, but no more from a remote host. I checked the 
pg_hba.conf file and it's ok, but either from psql or pgadmin I cannot 
connect to the host. Nmapping my host I cannot see the daemon listening on 
the port 5342.

I've started the daemon as:
postmaster -D /mnt/data/database &


Is that how you normally start your DB server? Not via pg_ctl or your 
system startup scripts?



and my pg_hba.conf gile contains:


If you say it's not listening on port 5432 then the pg_hba.conf doesn't 
matter. Check your postgresql.conf for valid settings on 
listen_addresses and port.


http://www.postgresql.org/docs/8.1/static/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS

Once it's listening on the right port, then look at your pg_hba.conf
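[The relevant postgresql.conf lines, as a sketch; the 8.0+ spelling is shown, with the pre-8.0 equivalent noted:]

```
listen_addresses = '*'   # or a comma-separated address list; default 'localhost'
port = 5432
# pre-8.0 equivalent:
# tcpip_socket = true
```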
--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Table design - unknown number of column

2006-11-09 Thread Merlin Moncure

On 11/9/06, Richard Ollier <[EMAIL PROTECTED]> wrote:

Hello,

For a project I have a table containing products and flags.
The columns of this table are of 2 kinds :
- Not null data (id, column1, column2)
- Flags (100 different flags set to 1 or 0)

Over the time the number of flag will increase from 100 to 200 or more.

So I have 2 choices :
- Use a unique table and redesign this table and my application each
time I need to add a flag

- Split this table in 2 tables : a table containing the Not null data
and a table containing 3 columns (id, flag_name, flag_value). But in
this case how can I do a select where (flag_a=1 and flag_c=0 and
flag_k=1...) ? I would like to have 200 or more joins on the main table.


this is much cleaner.  you have a table of products, a table of flags,
and a crossref table, let's call it product_flags.  if you want to
query products with a very specific set of flags, you can write a
query with a join to get it out, or use arrays.

maybe if you query out products with related flags a lot, you can make
another table, product_flag_group, which you can use to simplify things
a bit.

merlin



Re: [GENERAL] 8.1.2 postmaster died

2006-11-09 Thread Martijn van Oosterhout
On Thu, Nov 09, 2006 at 11:52:48AM +0500, Shoaib Mir wrote:
> Have a look at "16.4.3. Linux Memory Overcommit" on
> http://www.postgresql.org/docs/8.1/interactive/kernel-resources.html

If you look at the beginning of the thread you'd have seen we're talking
about HPUX here...

Maybe they have an OOM killer too? Although maybe it's quotas?

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] cannot connect anymore from a remote host

2006-11-09 Thread Luca Ferrari
On Thursday 09 November 2006 15:57 Jeremiasz Miedzinski's cat, walking on the 
keyboard, wrote:

> Have You checked network interfaces on your machine ? Maybe some of them
> doesn't start-up properly...


Interfaces are ok, and in fact other services on such interface are running 
correctly.

Luca



Re: [GENERAL] cannot connect anymore from a remote host

2006-11-09 Thread Tom Lane
Richard Huxton  writes:
> Luca Ferrari wrote:
>> I've started the daemon as:
>> postmaster -D /mnt/data/database &

> Is that how you normally start your DB server?

Ditto.  But if you are using a stock postgresql.conf then in fact the
postmaster will *not* be listening to TCP with that command line.
You need to check listen_addresses or tcpip_socket depending on which
Postgres release this is.
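Tom's point about listen_addresses vs. tcpip_socket can be sketched as the following postgresql.conf fragment (the values shown are examples, not from the original thread):

```
# PostgreSQL 8.0 and later:
listen_addresses = '*'    # default is 'localhost'; '*' listens on all interfaces
port = 5432

# PostgreSQL 7.x instead used:
# tcpip_socket = true
```

A restart (or reload, depending on the setting) is needed after changing these, and pg_hba.conf only matters once the server is actually listening.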

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Why overlaps is not working

2006-11-09 Thread Alban Hertroys

Andrus wrote:
> I have a number of select statements (in 8.1 and 8.2beta) which assume
> that overlaps returns true for those cases.
>
> Which is the best way to fix them?
>
> Should I use AND, OR and date comparison operators instead of OVERLAPS?

Why not just subtract/add 1, so that the check includes the boundary dates?

Like so;
 select 1 where ('2006-10-31'::date -1, '2006-12-31'::date +1) OVERLAPS
   ('2006-10-16'::date -1, '2006-10-31'::date +1)

Regards,
--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] authentication question

2006-11-09 Thread Alvaro Herrera
Craig White wrote:

> logs say...
> Nov  8 20:18:26 srv1 postgresql: Starting postgresql service:  succeeded
> Nov  8 20:18:39 srv1 postgres[21020]: PAM audit_open() failed:
> Permission denied
> Nov  8 20:18:39 srv1 postgres[21020]: [2-1] LOG:  pam_authenticate
> failed: System error
> Nov  8 20:18:39 srv1 postgres[21020]: [3-1] FATAL:  PAM authentication
> failed for user "craig"

I'm not at all familiar with PAM error message wording, but are you
aware that you must create the user "craig" inside the database _as
well_ as on whatever PAM layer you use?

The "audit_open(): Permission denied" message sounds like Postgres is
not authorized to consult PAM though.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] ROWTYPE initialization question

2006-11-09 Thread Alban Hertroys

'lo list,

I have a plpgsql SP where I loop through a cursor. I have an internal 
variable that keeps the previous row, so that I can compare it with the 
current row in the cursor.

Like so;

DECLARE
current table%ROWTYPE;
previous table%ROWTYPE;
BEGIN
LOOP
FETCH tableCur INTO current;

-- Do stuff

But, in this loop I need to compare the previous row to the current one. 
To do that I need to know whether a row was assigned to 'previous', or 
there'll be very little to compare (if it doesn't throw an error).

How do I check for that?

I guess it would be like this, but I'd like to be sure.

IF previous IS NOT NULL
THEN
-- Compare previous and current column values
END IF

previous := current;
END LOOP;
END;
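For what it's worth, a sketch of one way to guard the first iteration (the table name `t` is assumed; an explicit boolean flag is more reliable than testing the row variable, since `previous IS NULL` is also true for a fetched row whose columns happen to all be null):

```sql
DECLARE
    current       t%ROWTYPE;
    previous      t%ROWTYPE;
    have_previous boolean := false;  -- set once 'previous' holds a real row
BEGIN
    LOOP
        FETCH tableCur INTO current;
        EXIT WHEN NOT FOUND;         -- stop at end of cursor

        IF have_previous THEN
            NULL;  -- compare previous.* with current.* here
        END IF;

        previous      := current;
        have_previous := true;
    END LOOP;
END;
```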

Thanks in advance,
--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Why overlaps is not working

2006-11-09 Thread Andrus

> Why not just subtract/add 1, so that the check includes the boundary 
> dates?
>
> Like so;
>  select 1 where ('2006-10-31'::date -1, '2006-12-31'::date +1) OVERLAPS
>    ('2006-10-16'::date -1, '2006-10-31'::date +1)

Alban,

thank you. I use only dates as OVERLAPS arguments.
I changed all my WHERE clauses from

WHERE (a,b) OVERLAPS (c,d)

to

WHERE (a-1,b+1) OVERLAPS (c-1,d+1)

Will this give correct results ?

Andrus. 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] 8.1.2 postmaster died

2006-11-09 Thread Shoaib Mir
Look at the database server logs and see what actually was happening just
before the server crashed... Do you have autovacuum running in the backend?
Or was there some client making a connection just when the server crashed?
Your db logs can actually help you here.

Thank you,
---
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 11/9/06, Martijn van Oosterhout wrote:
> On Thu, Nov 09, 2006 at 11:52:48AM +0500, Shoaib Mir wrote:
> > Have a look at "16.4.3. Linux Memory Overcommit" on
> > http://www.postgresql.org/docs/8.1/interactive/kernel-resources.html
>
> If you look at the beginning of the thread you'd have seen we're talking
> about HPUX here...
>
> Maybe they have an OOM killer too? Although maybe it's quotas?
>
> Have a nice day,
> --
> Martijn van Oosterhout  <kleptog@svana.org>  http://svana.org/kleptog/
> > From each according to his ability. To each according to his ability to
> > litigate.



Re: [GENERAL] cannot connect anymore from a remote host

2006-11-09 Thread Shoaib Mir
You might want to check the iptables rules as well, to see whether they have
the required entries or not.

Thanks,
---
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 11/9/06, Richard Huxton wrote:
> Luca Ferrari wrote:
> > Hi all,
> > after a crash of my machine I restarted the pgsql as usual, and I can
> > connect from the machine itself, but no more from a remote host. I
> > checked the pg_hba.conf file and it's ok, but either from psql or
> > pgadmin I cannot connect to the host. Nmapping my host I cannot see
> > the daemon listening on the port 5432.
> > I've started the daemon as:
> > postmaster -D /mnt/data/database &
>
> Is that how you normally start your DB server? Not via pg_ctl or your
> system startup scripts?
>
> > and my pg_hba.conf file contains:
>
> If you say it's not listening on port 5432 then the pg_hba.conf doesn't
> matter. Check your postgresql.conf for valid settings on
> listen_addresses and port.
>
> http://www.postgresql.org/docs/8.1/static/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS
>
> Once it's listening on the right port, then look at your pg_hba.conf
>
> --
>   Richard Huxton
>   Archonet Ltd



Re: [GENERAL] authentication question

2006-11-09 Thread Craig White
On Thu, 2006-11-09 at 12:34 -0300, Alvaro Herrera wrote:
> Craig White wrote:
> 
> > logs say...
> > Nov  8 20:18:26 srv1 postgresql: Starting postgresql service:  succeeded
> > Nov  8 20:18:39 srv1 postgres[21020]: PAM audit_open() failed:
> > Permission denied
> > Nov  8 20:18:39 srv1 postgres[21020]: [2-1] LOG:  pam_authenticate
> > failed: System error
> > Nov  8 20:18:39 srv1 postgres[21020]: [3-1] FATAL:  PAM authentication
> > failed for user "craig"
> 
> I'm not at all familiar with PAM error message wording, but are you
> aware that you must create the user "craig" inside the database _as
> well_ as on whatever PAM layer you use?
> 
> The "audit_open(): Permission denied" message sounds like Postgres is
> not authorized to consult PAM though.

I did create a user 'craig' in postgres but I agree, that isn't the
issue at this point.

I checked the source rpm to make sure that it was compiled with the pam
option and it appears to me that it was.

I haven't had to fool too much with pam for authenticating other
services so I'm a little bit out of my knowledge base but I know that it
was simple to add netatalk into the pam authentication and expected that
postgresql would be similar.

I have to believe that other people are using pam for authentication
because otherwise, you have to maintain passwords for each user
within postgresql itself - which seems unwise for many sites.

Still struggling with this...

Craig


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] authentication question

2006-11-09 Thread Tom Lane
Craig White <[EMAIL PROTECTED]> writes:
> I haven't had to fool too much with pam for authenticating other
> services so I'm a little bit out of my knowledge base but I know that it
> was simple to add netatalk into the pam authentication and expected that
> postgresql would be similar.

FWIW, we ship this PAM config file in the Red Hat PG RPMs:

#%PAM-1.0
auth    include system-auth
account include system-auth

which AFAIR looks about the same as the corresponding files for other
services.  It's installed as /etc/pam.d/postgresql.

I concur with the other response that you need to find out where the
"Permission denied" failure is coming from.  There is no "audit_open"
in the Postgres sources so it sounds like an internal failure in the PAM
libraries.  If nothing else comes to mind, try strace'ing the postmaster
to see what kernel call draws that failure.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Datum problem

2006-11-09 Thread Enrico
Hi,
I'm a newbie at programming PostgreSQL server side.
I wrote the function below, and when I execute

SELECT (anag_art, 150) AS esistenza from anag_art order by 1;

the result is something like this:

| esistenza |

 ("(""002  "",""ARTICOLO PREZ. VEND.  "",,PZ,32,1,20,""1
  "","" "",,N,,0,-67.0,-23.0,,N,N)",150)


So PostgreSQL outputs a tuple instead of the integer value I want.
Can anyone help me?

Thanks in advance.
Enrico


--- Start C code ---

#include "postgres.h"
#include "executor/executor.h"

PG_FUNCTION_INFO_V1(visualizza_esistenza);

Datum
visualizza_esistenza(PG_FUNCTION_ARGS)
{
HeapTupleHeader  t = PG_GETARG_HEAPTUPLEHEADER(0);
int16limit = PG_GETARG_INT16(1);
bool isnull;
Datum esistenza;
int16 es;

esistenza = GetAttributeByName(t, "esistenza", &isnull);
es = DatumGetInt16(esistenza);
if (esistenza > limit)
  PG_RETURN_INT16(es);

}

--- End C code ---

CREATE OR REPLACE FUNCTION visualizza_esistenza(anag_art, integer) RETURNS int4
as '/home/scotty/enrico/postgresql_c/prova.so', 'visualizza_esistenza'
LANGUAGE C;


-- 
If Bill Gates had a penny for everytime Windows crashed,he'd be a 
multi-billionaire by now ...oh look, he already is 
[EMAIL PROTECTED] - Skype:sscotty71
http://www.linuxtime.it/enricopirozzi

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Mac OS X

2006-11-09 Thread Nathan Leon Pace, MD, MStat
I run an older version of psql (7.2.4) on a Linux machine.

I wish to migrate psql to an Intel Mac running OS X 10.4.8.

The supported platforms table in postgresql-8.1-US.pdf (page 259) lists
Mac OS X with a PPC cpu as being supported, but not Mac OS X with an
Intel cpu.

Can psql run on an Intel Mac?

Have I addressed my question to the correct list?

Nathan

Nathan Leon Pace, MD, MStat
University of Utah
Salt Lake City, UT 84132
Office: 801.581.6393
Fax: 801.581.4367
Cell: 801.205.1019
Pager: 801.291.9019
Home: 801.467.2925

Re: [GENERAL] Why overlaps is not working

2006-11-09 Thread Jorge Godoy
"Andrus" <[EMAIL PROTECTED]> writes:

>> Why not just subtract/add 1, so that the check includes the boundary 
>> dates?
>>
>> Like so;
>>  select 1 where ('2006-10-31'::date -1, '2006-12-31'::date +1) OVERLAPS
>>    ('2006-10-16'::date -1, '2006-10-31'::date +1)
>
> Alban,
>
> thank you. I use only dates as OVERLAPS arguments.
> I changed all my WHERE clauses from
>
> WHERE (a,b) OVERLAPS (c,d)
>
> to
>
> WHERE (a-1,b+1) OVERLAPS (c-1,d+1)
>
> Will this give correct results ?

It might give you false positives...

2006-11-30 -- 2006-12-05   AND   2006-12-06 -- 2006-12-15  (original) -- FALSE
2006-11-29 -- 2006-12-06   AND   2006-12-05 -- 2006-12-16  (changed)  -- TRUE
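The false positive is easy to reproduce directly (a sketch; OVERLAPS treats each pair as a half-open interval, so after padding the two ranges now intersect):

```sql
-- Original ranges: disjoint, so OVERLAPS is false.
SELECT ('2006-11-30'::date, '2006-12-05'::date)
       OVERLAPS ('2006-12-06'::date, '2006-12-15'::date);

-- Padded by one day on each side: the ranges now share 2006-12-05,
-- so OVERLAPS is true -- a false positive for the original ranges.
SELECT ('2006-11-29'::date, '2006-12-06'::date)
       OVERLAPS ('2006-12-05'::date, '2006-12-16'::date);
```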


Be seeing you,
-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Mac OS X

2006-11-09 Thread Joshua D. Drake
On Thu, 2006-11-09 at 10:10 -0700, Nathan Leon Pace, MD, MStat wrote:
> I run an older version of psql (7.2.4) on a Linux machine.
> 
> 
> I wish to migrate psql to an Intel Mac running OS x 10.4.8.
> 
> 
> The supported platforms table in postgresql-8.1-US.pdf (page 259)
> lists Mac OS X with a PPC cpu as being supported, but not Mac OS X
> with a Intel cpu.
> 
> 
> Can psql run on an Intel Mac?

Yes, absolutely. It is just that 8.1 came out before the Intel Macs. It
will perform wonderfully on an Intel Mac.

Joshua D. Drake

-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] autovacuum blues

2006-11-09 Thread Anton Melser

On 09/11/06, Alvaro Herrera <[EMAIL PROTECTED]> wrote:

> Anton Melser wrote:
> > Hi,
> > I just can't understand why autovacuum is not working. I have a test
> > db/table which I insert values into (by the thousands) and can't work
> > out why my stats don't get updated. Could someone have a quick look at
> > my attached .conf and tell me what I am doing?
> > I am running it on FC5 8.1.4.fc5.1.
>
> You can tell whether autovacuum is running by using
>
> SHOW autovacuum;
>
> Note that if you have a firewall of some sort stopping UDP messages from
> being transmitted inside your machine, the stats collector may not
> start, which may stop the autovacuum daemon from starting.  If you're
> not seeing stat updates then there's probably something like that going
> on.  Maybe the system has emitted a warning message at server start;
> check the logs.


Thanks for that. Just a clarification, can someone tell me what the
"number of tuples" means in the context of the multipliers? I mean,
when the vacuum min multiplier is at 0.4, we multiply 0.4 by what? I
get the min val, but what are we multiplying by 0.4? The total number
of tuples in the table? The total modified/deleted?
I guess it is just that I assumed that it would kick in regularly, and
it doesn't seem to. Probably it is not needed but I always got the
feeling that after a good vacuum performance seemed better.
Cheers
Antoine

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Mac OS X

2006-11-09 Thread Steve Atkins


On Nov 9, 2006, at 9:10 AM, Nathan Leon Pace, MD, MStat wrote:


I run an older version of psql (7.2.4) on a Linux machine.

I wish to migrate psql to an Intel Mac running OS x 10.4.8.

The supported platforms table in postgresql-8.1-US.pdf (page 259)  
lists Mac OS X with a PPC cpu as being supported, but not Mac OS X  
with a Intel cpu.


Can psql run on an Intel Mac?


It's running fine on this one. No problems building from
source.

There are several people providing prebuilt universal
binary packages for OS X.

http://www.entropy.ch/software/macosx/postgresql/
is one.

Cheers,
  Steve




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Datum problem

2006-11-09 Thread Martijn van Oosterhout
On Thu, Nov 09, 2006 at 06:12:08PM +0100, Enrico wrote:
> Hi, 
> I'm newbie of programming postgresql sever side,
> I write the function below and when I execute
> 
> SELECT (anag_art, 150) AS esistenza from anag_art order by 1;
> 
> result is something about this:

Well, you didn't actually call any function, so it just returned the
two parameters as you asked... You have other problems though.

> visualizza_esistenza(PG_FUNCTION_ARGS)
> {
> HeapTupleHeader  t = PG_GETARG_HEAPTUPLEHEADER(0);
> int16limit = PG_GETARG_INT16(1);

An "integer" is not an "int16", so this is wrong, you want INT32

> bool isnull;
> Datum esistenza;
> int16 es;
> 
> esistenza = GetAttributeByName(t, "esistenza", &isnull);
> es = DatumGetInt16(esistenza);

Possibly same problem here.

> if (esistenza > limit)
>   PG_RETURN_INT16(es);

You declared you were going to return an int4, yet you're returning
something else.

> }

What are you returning if the if statement doesn't get run?

Hope this helps,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] Datum problem

2006-11-09 Thread A. Kretschmer
am  Thu, dem 09.11.2006, um 18:12:08 +0100 mailte Enrico folgendes:
> Hi, 
> I'm newbie of programming postgresql sever side,
> I write the function below and when I execute
> 
> SELECT (anag_art, 150) AS esistenza from anag_art order by 1;

Try select *, 150 AS esistenza from anag_art order by 1;


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47215,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Too many open cursors

2006-11-09 Thread Scott Marlowe
On Thu, 2006-11-09 at 06:07, Sandeep Kumar Jakkaraju wrote:
>  
> Hi All ..
>  
> I have used Oracle 9i in the past ...currently working on postgres ...
>  
> Oracle throws this SQLException ... Too many open cursors
> when there are too many connections to the database. 

That may be the exception you've seen, but I'm pretty sure Oracle also
has an exception for too many connections that says something about
there being too many connections.
 
> I have found that  ..but postgres ..locks ..the DB ..and doesnt throw
> this exception

Ummm.  what do you mean by "locks"???  The db stops working completely,
even for the clients you already have connected?  Won't accept new
connections but keeps processing the old ones?
 
> I am true or ...mine can be a special case !!???

We need more quoting of the error messages you're getting.  What we've
got now doesn't help a lot.

You should get an error message saying something about too many
connections open.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] authentication question

2006-11-09 Thread Craig White
On Thu, 2006-11-09 at 11:51 -0500, Tom Lane wrote:
> Craig White <[EMAIL PROTECTED]> writes:
> > I haven't had to fool too much with pam for authenticating other
> > services so I'm a little bit out of my knowledge base but I know that it
> > was simple to add netatalk into the pam authentication and expected that
> > postgresql would be similar.
> 
> FWIW, we ship this PAM config file in the Red Hat PG RPMs:
> 
> #%PAM-1.0
> auth  include system-auth
> account   include system-auth
> 
> which AFAIR looks about the same as the corresponding files for other
> services.  It's installed as /etc/pam.d/postgresql.

that doesn't work at all... /var/log/messages reports...
Nov  9 10:26:33 srv1 postgres[6034]: PAM unable to
dlopen(/lib/security/system-auth)
Nov  9 10:26:33 srv1 postgres[6034]: PAM
[dlerror: /lib/security/system-auth: cannot open shared object file: No
such file or directory]

as for what is included BY Red Hat in their postgresql-server rpm...
# rpm -ql postgresql-server|grep pam

returns nothing which makes me double check the spec file on the RPM
which has...
# grep pam /usr/src/redhat/SPECS/postgresql.spec
#work around the undefined or defined to 1 build 6x interaction with the
pam stuff
%{!?build6x:%define non6xpamdeps 1}
%{?build6x:%define non6xpamdeps 0}
%{!?pam:%define pam 1}
%if %pam
%if %non6xpamdeps
BuildPrereq: pam-devel
%if %pam
--with-pam \

a search of Red Hat's bugzilla shows all postgresql bugs closed and
nothing reporting a problem with pam  ;-(

and since it does attempt to call pam (as I am seeing in logs), I am
certain that the pam option is compiled in.

I'm virtually certain that I am better off pointing
to /etc/pam.d/system-auth which clearly works for sshd logins

> 
> I concur with the other response that you need to find out where the
> "Permission denied" failure is coming from.  There is no "audit_open"
> in the Postgres sources so it sounds like an internal failure in the PAM
> libraries.  If nothing else comes to mind, try strace'ing the postmaster
> to see what kernel call draws that failure.

pretty short strace but I can't see anything that jumps at me and says
aha...

# strace -p 3267
Process 3267 attached - interrupt to quit
select(6, [3 5], NULL, NULL, {95, 566000}) = 1 (in [3], left {88,
881000})
rt_sigprocmask(SIG_SETMASK, ~[ILL TRAP ABRT BUS FPE SEGV CONT SYS RTMIN
RT_1], NULL, 8) = 0
accept(3, {sa_family=AF_INET, sin_port=htons(56844),
sin_addr=inet_addr("192.168.2.10")}, [16]) = 9
getsockname(9, {sa_family=AF_INET, sin_port=htons(5432),
sin_addr=inet_addr("192.168.2.1")}, [16]) = 0
setsockopt(9, SOL_TCP, TCP_NODELAY, [1], 4) = 0
setsockopt(9, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|
SIGCHLD, child_tidptr=0xb7f2e708) = 5921
close(9)= 0
time(NULL)  = 1163093004
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
select(6, [3 5], NULL, NULL, {89, 0})   = ? ERESTARTNOHAND (To be
restarted)
--- SIGCHLD (Child exited) @ 0 (0) ---
rt_sigprocmask(SIG_SETMASK, ~[ILL TRAP ABRT BUS FPE SEGV CONT SYS RTMIN
RT_1], NULL, 8) = 0
waitpid(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], WNOHANG) = 5921
send(6, "\2\0\0\0\30\0\0\0\0\0\0\0!\27\0\0\0\0\0\0\0\0\0\0", 24, 0) = 24
waitpid(-1, 0xbfecf5fc, WNOHANG)= 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
sigreturn() = ? (mask now [])
rt_sigprocmask(SIG_SETMASK, ~[ILL TRAP ABRT BUS FPE SEGV CONT SYS RTMIN
RT_1], NULL, 8) = 0
time(NULL)  = 1163093004
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
select(6, [3 5], NULL, NULL, {89, 0} 
Process 3267 detached



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Problem with pg_dump

2006-11-09 Thread Alex Turner
I seem to be having a problem with pg_dump in 8.1.2, it's not dumping
indexes at all.  Is this a known problem, should I just do an upgrade?

Thanks,

Alex Turner
Mint Pixels


Re: [GENERAL] Can PostgreSQL notify a client that a trigger has

2006-11-09 Thread Jeff Davis
On Thu, 2006-11-09 at 10:16 +0000, Richard Huxton wrote:
> Bill wrote:
> > Jorge Godoy wrote:
> > 
> >> I have the impression that you're missing a lot of sections in the
> >> manual...  How about some time to re-read it?
> > 
> > I don't know about you but for me a 1500 page manual is at least two
> > weeks of full time reading. I have read several sections of it but I
> > am trying to decide if PostgreSQL should be considered for a project
> > and I don't have 80 hours to make the evaluation. 
> 
> If you don't have 80 hours to evaluate a new database, I'd suggest 
> sticking with whatever you're already familiar with. You only have to 
> hit a couple of minor problems with your implementation to consume more 
> than 80 hours. If you're up against timescales that short, then stick to 
> technologies you already know front-to-back.
> 

If we turn away everyone who was "trying to decide if PostgreSQL should
be considered", I think that we're failing in the advocacy department.
It may be just a preliminary analysis to see which databases claim to
meet the application's requirements. Perhaps he doesn't already have a
team of DBAs that know any RDBMS front-to-back.

I think he should consider PostgreSQL, and try to find out if it is a
possible solution for his application. Asking questions on -general or
IRC, in addition to googling the docs, are probably the best way to get
answers to his questions quickly. It took me a long time before I really
understood where to find things in the docs very quickly*. And just like
mathematics, sometimes asking the question out loud answers itself :)

However, you're right. If Bill is already familiar with one product,
it's probably a mistake to jump blind into any new RDBMS on an important
project; and that includes the world's most advanced open source
database. He should see if it looks good, then evaluate and test it as
thoroughly as he would any other RDBMS.

Regards,
Jeff Davis

* The docs are great, but it's a complex subject, everything's inter-
related, and the words you're looking for aren't always obvious.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Problem with pg_dump

2006-11-09 Thread A. Kretschmer
am  Thu, dem 09.11.2006, um 13:04:31 -0500 mailte Alex Turner folgendes:
> I seem to be having a problem with pg_dump in 8.1.2, it's not dumping indexes
> at all.  Is this a known problem, should I just do an upgrade?

I can't see a necessity to dump an index. But, I hope, and I'm sure,
pg_dump dumps the definition for an index...


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47215,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Why overlaps is not working

2006-11-09 Thread Andrus
>> WHERE (a,b) OVERLAPS (c,d)
>>
>> to
>>
>> WHERE (a-1,b+1) OVERLAPS (c-1,d+1)
>>
>> Will this give correct results ?
>
> It might give you false positives...
>
> 2006-11-30 -- 2006-12-05   AND2006-12-06 -- 2006-12-15  (original) --
> FALSE
> 2006-11-29 -- 2006-12-06   AND2006-12-05 -- 2006-12-16  (changed)  --
> TRUE

Jorge,

Thank you very much. Now I will try William Leite Araújo's solution by replacing

WHERE (a,b) OVERLAPS (c,d)

with

WHERE  ( c BETWEEN a AND b ) OR  ( d BETWEEN a AND b )

Is this OK ?
This requires writing a and b expressions twice. How to avoid repeating 
expressions ?
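For comparison, the textbook inclusive-overlap predicate for closed ranges [a,b] and [c,d] writes each expression only once and also catches a case the two BETWEENs miss: when (a,b) lies entirely inside (c,d). A sketch, not from the original thread:

```sql
-- Two closed ranges [a,b] and [c,d] overlap (boundaries included)
-- exactly when each one starts no later than the other ends:
--     a <= d AND c <= b
-- Example: (a,b) = Oct 20..Oct 25 lies inside (c,d) = Oct 16..Oct 31;
-- neither c nor d is BETWEEN a AND b, yet the ranges clearly overlap.
SELECT 1
WHERE  '2006-10-20'::date <= '2006-10-31'::date   -- a <= d
  AND  '2006-10-16'::date <= '2006-10-25'::date;  -- c <= b
```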

Andrus. 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Can PostgreSQL notify a client that a trigger has fired?

2006-11-09 Thread Richard Huxton

Jeff Davis wrote:
> On Thu, 2006-11-09 at 10:16 +0000, Richard Huxton wrote:
>> Bill wrote:
>>> Jorge Godoy wrote:
>>>> I have the impression that you're missing a lot of sections in the
>>>> manual...  How about some time to re-read it?
>>>
>>> I don't know about you but for me a 1500 page manual is at least two
>>> weeks of full time reading. I have read several sections of it but I
>>> am trying to decide if PostgreSQL should be considered for a project
>>> and I don't have 80 hours to make the evaluation.
>>
>> If you don't have 80 hours to evaluate a new database, I'd suggest
>> sticking with whatever you're already familiar with. You only have to
>> hit a couple of minor problems with your implementation to consume more
>> than 80 hours. If you're up against timescales that short, then stick to
>> technologies you already know front-to-back.
>
> If we turn away everyone who was "trying to decide if PostgreSQL should
> be considered", I think that we're failing in the advocacy department.

True enough. But it's better he use something else than have a bad
experience with PG. Look at how long the "difficult to install" story
hung around after the 6.5 days (when it was too fiddly, to be fair).

> It may be just a preliminary analysis to see which databases claim to
> meet the application's requirements. Perhaps he doesn't already have a
> team of DBAs that know any RDBMS front-to-back.

Part of what was worrying me was Bill was asking a series of small
questions. Of course he's not to know that the PG lists are among the
best on the 'net, and he's probably trying to keep his questions specific
and answerable (which they were).

> However, you're right. If Bill is already familiar with one product,
> it's probably a mistake to jump blind into any new RDBMS on an important
> project;

I'm sure we've all done it, either from enthusiasm or management diktat.
Invariably I've regretted trying to do something new with new (to me)
technology.

> * The docs are great, but it's a complex subject, everything's inter-
> related, and the words you're looking for aren't always obvious.

The .pgpass file is my favourite. I can see how it's part of the libpq
chapter, but I'd always have to spend an hour trying to remember that fact.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] authentication question

2006-11-09 Thread Tom Lane
Craig White <[EMAIL PROTECTED]> writes:
> On Thu, 2006-11-09 at 11:51 -0500, Tom Lane wrote:
>> FWIW, we ship this PAM config file in the Red Hat PG RPMs:

> that doesn't work at all... /var/log/messages reports...

Sorry, I should have mentioned that that was for recent Fedora branches.
In RHEL4 I think this would work:

#%PAM-1.0
auth    required    pam_stack.so service=system-auth
account required    pam_stack.so service=system-auth

> pretty short strace but I can't see anything that jumps at me and says
> aha...

You seem to have only strace'd the postmaster itself --- the interesting
events would be in the child process it forked off.  Try "strace -f -p ..."

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Installation Instruction For Window

2006-11-09 Thread Vernon _
Where can I find the instructions on-line? I only see the UNIX/LINUX
instructions. I installed PG on Windows before, but can't remember how it
is done right now.


BTW, is the 8.2 beta good for a development environment?

Thanks,

vernon

_
Get FREE company branded e-mail accounts and business Web site from 
Microsoft Office Live 
http://clk.atdmt.com/MRT/go/mcrssaub0050001411mrt/direct/01/



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] authentication question

2006-11-09 Thread Alvaro Herrera
Tom Lane wrote:
> Craig White <[EMAIL PROTECTED]> writes:
> > I haven't had to fool too much with pam for authenticating other
> > services so I'm a little bit out of my knowledge base but I know that it
> > was simple to add netatalk into the pam authentication and expected that
> > postgresql would be similar.
> 
> FWIW, we ship this PAM config file in the Red Hat PG RPMs:
> 
> #%PAM-1.0
> auth  include system-auth
> account   include system-auth
> 
> which AFAIR looks about the same as the corresponding files for other
> services.  It's installed as /etc/pam.d/postgresql.

For this to work you need a system-auth file in /etc/pam.d, which would
have lines for auth/account/required etc, and not just "includes".

PAM seems to be another area on which Linux distributors have been
diverging wildly for a long time; for example here on Debian the include
lines look like

auth    requisite   pam_nologin.so
auth    required    pam_env.so
@include common-auth
@include common-account
session required    pam_limits.so

so I doubt one distro's config files are applicable to any other.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] autovacuum blues

2006-11-09 Thread Jeff Davis
On Thu, 2006-11-09 at 18:16 +0100, Anton Melser wrote:
> On 09/11/06, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> > Anton Melser wrote:
> > > Hi,
> > > I just can't understand why autovacuum is not working. I have a test
> > > db/table which I insert values into (by the thousands) and can't work
> > > out why my stats don't get updated. Could someone have a quick look at
> > > my attached .conf and tell me what I am doing?
> > > I am running it on FC5 8.1.4.fc5.1.
> >
> > You can tell whether autovacuum is running by using
> >
> > SHOW autovacuum;
> >
> >
> > Note that if you have a firewall of some sort stopping UDP messages from
> > being transmitted inside your machine, the stats collector may not
> > start, which may stop the autovacuum daemon from starting.  If you're
> > not seeing stat updates then there's probably something like that going
> > on.  Maybe the system has emitted a warning message at server start;
> > check the logs.
> 
> Thanks for that. Just a clarification, can someone tell me what the
> "number of tuples" means in the context of the multipliers? I mean,
> when the vacuum min multiplier is at 0.4, we multiply 0.4 by what? I
> get the min val, but what are we multiplying by 0.4? The total number
> of tuples in the table? The total modified/deleted?

http://www.postgresql.org/docs/8.1/static/runtime-config-autovacuum.html

I read that to mean that it multiplies the total number of tuples in the
table by autovacuum_vacuum_scale_factor and adds the
autovacuum_vacuum_threshold to that number. If the number of
updated/deleted tuples exceeds that sum, autovacuum will issue a vacuum.
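In concrete numbers (a sketch; the settings below are just illustrative
values, not recommendations):

```sql
-- With autovacuum_vacuum_threshold = 1000 and
-- autovacuum_vacuum_scale_factor = 0.4, a table holding 50000 tuples
-- gets vacuumed once its dead-tuple count exceeds:
SELECT 1000 + 0.4 * 50000 AS vacuum_trigger_point;  -- 21000
```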

The docs could maybe be more clear on this.

Regards,
Jeff Davis


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] authentication question

2006-11-09 Thread Craig White
On Thu, 2006-11-09 at 16:34 -0300, Alvaro Herrera wrote:
> Tom Lane wrote:
> > Craig White <[EMAIL PROTECTED]> writes:
> > > I haven't had to fool too much with pam for authenticating other
> > > services so I'm a little bit out of my knowledge base but I know that it
> > > was simple to add netatalk into the pam authentication and expected that
> > > postgresql would be similar.
> > 
> > FWIW, we ship this PAM config file in the Red Hat PG RPMs:
> > 
> > #%PAM-1.0
> > auth    include system-auth
> > account include system-auth
> > 
> > which AFAIR looks about the same as the corresponding files for other
> > services.  It's installed as /etc/pam.d/postgresql.
> 
> For this to work you need a system-auth file in /etc/pam.d, which would
> have lines for auth/account/required etc, and not just "includes".
> 
> PAM seems to be another area on which Linux distributors have been
> diverging wildly for a long time; for example here on Debian the include
> lines look like
> 
> auth    requisite   pam_nologin.so
> auth    required    pam_env.so
> @include common-auth
> @include common-account
> session requiredpam_limits.so
> 
> so I doubt one distro's config files are applicable to any other.

I'm on a Red Hat system, which Tom is obviously familiar with since he
is the packager for the RH / postgres RPMs. I don't think that is the
issue, but I have adopted his pam file anyway.

Thanks

Craig


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Why overlaps is not working

2006-11-09 Thread Jorge Godoy
"Andrus" <[EMAIL PROTECTED]> writes:

> Jorge,
>
> Thank you very much. Now I try to William Leite Araújo solution by replacing
>
> WHERE (a,b) OVERLAPS (c,d)
>
> with
>
> WHERE  ( c BETWEEN a AND b ) OR  ( d BETWEEN a AND b )
>
> Is this OK ?

From bare tests this looks OK. 

> This requires writing a and b expressions twice. How to avoid repeating 
> expressions ?

You can use a function for that and use variables for the four arguments:

CREATE OR REPLACE FUNCTION f_v_same_day_overlaps(date, 
   date, date, date, out overlaps bool) as 
$_$ 
SELECT (($3 between $1 and $2) or ($4 between $1 and $2)); 
$_$ language sql;
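A usage sketch of the function above (the sample dates are made up). One
caveat worth noting: the BETWEEN rewrite misses the case where the second
interval strictly contains the first, which SQL's OVERLAPS does catch; a
body of `SELECT $1 <= $4 AND $3 <= $2;` would cover that case as well.

```sql
-- Does 2006-01-15..2006-02-10 overlap 2006-01-01..2006-01-31?
SELECT f_v_same_day_overlaps('2006-01-01', '2006-01-31',
                             '2006-01-15', '2006-02-10');  -- true

-- Containment case the BETWEEN form misses: the ranges do overlap,
-- but neither endpoint of the second pair falls inside the first.
SELECT f_v_same_day_overlaps('2006-01-10', '2006-01-20',
                             '2006-01-01', '2006-01-31');  -- false
```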


Be seeing you,
-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] autovacuum blues

2006-11-09 Thread Alvaro Herrera
Anton Melser wrote:

> Thanks for that. Just a clarification, can someone tell me what the
> "number of tuples" means in the context of the multipliers? I mean,
> when the vacuum min multiplier is at 0.4, we multiply 0.4 by what? I
> get the min val, but what are we multiplying by 0.4? The total number
> of tuples in the table? The total modified/deleted?

pg_class.reltuples

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Can non-superuser install c functions ?

2006-11-09 Thread Rick Schumeyer
I am transferring a database from a system where I am a pg superuser to 
one where I am not.


The database uses tsearch2.  I am unable to install any of the 
functions.  For example:


CREATE FUNCTION gtsvector_in(cstring) RETURNS gtsvector
   AS '$libdir/tsearch2', 'gtsvector_in'
   LANGUAGE c STRICT;

results in: permission denied for language c

Do I need to get the pg administrator to install my database ?





---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Can non-superuser install c functions ?

2006-11-09 Thread Martijn van Oosterhout
On Thu, Nov 09, 2006 at 04:04:33PM -0500, Rick Schumeyer wrote:
> I am transferring a database from a system where I am a pg superuser to 
> one where I am not.



> results in: permission denied for language c
> 
> Do I need to get the pg administrator to install my database ?

Yes. Letting someone install C language functions is equivalent to
giving them access to all data in the database and possibly most of the
rest of the system. So it's restricted.

So yes, the pg superuser in charge of the cluster needs to install
tsearch2 for you. The rest you can do yourself.

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
From each according to his ability. To each according to his ability to
litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] autovacuum blues

2006-11-09 Thread Anton Melser

On 09/11/06, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Anton Melser wrote:
> 
> > Thanks for that. Just a clarification, can someone tell me what the
> > "number of tuples" means in the context of the multipliers? I mean,
> > when the vacuum min multiplier is at 0.4, we multiply 0.4 by what? I
> > get the min val, but what are we multiplying by 0.4? The total number
> > of tuples in the table? The total modified/deleted?
> 
> pg_class.reltuples


And so...

For efficiency reasons, reltuples and relpages are not updated
on-the-fly, and so they usually contain somewhat out-of-date values.
They are updated by VACUUM, ANALYZE, and a few DDL commands such as
CREATE INDEX. A stand-alone ANALYZE, that is one not part of VACUUM,
generates an approximate reltuples value since it does not read every
row of the table. The planner will scale the values it finds in
pg_class to match the current physical table size, thus obtaining a
closer approximation.

So I am still a little unclear... I need to analyse to get relevant
stats for autovacuum... but autovacuum needs to be used to update the
relevant stats? Can I REALLY do without a cronjob, or am I just
thinking wishfully (I can invent expressions if I want!).
Cheers
Antoine

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] autovacuum blues

2006-11-09 Thread Alvaro Herrera
Anton Melser wrote:
> On 09/11/06, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> >Anton Melser wrote:
> >
> >> Thanks for that. Just a clarification, can someone tell me what the
> >> "number of tuples" means in the context of the multipliers? I mean,
> >> when the vacuum min multiplier is at 0.4, we multiply 0.4 by what? I
> >> get the min val, but what are we multiplying by 0.4? The total number
> >> of tuples in the table? The total modified/deleted?
> >
> >pg_class.reltuples
> 
> So I am still a little unclear... I need to analyse to get relevant
> stats for autovacuum... but autovacuum needs to be used to update the
> relevant stats? Can I REALLY do without a cronjob, or am I just
> thinking wishfully (I can invent expressions if I want!).

This means that we use the previously-known value of tuples in the table, plus
the number of new tuples that have been inserted, deleted and/or updated
(numbers you can see in the pg_stat_* views), and compare them to the
thresholds.  If I'm being too unclear, here is the relevant code comment:

 * A table needs to be vacuumed if the number of dead tuples exceeds a
 * threshold.  This threshold is calculated as
 *
 * threshold = vac_base_thresh + vac_scale_factor * reltuples
 *
 * For analyze, the analysis done is that the number of tuples inserted,
 * deleted and updated since the last analyze exceeds a threshold calculated
 * in the same fashion as above.  Note that the collector actually stores
 * the number of tuples (both live and dead) that there were as of the last
 * analyze.  This is asymmetric to the VACUUM case.

(http://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/postmaster/autovacuum.c?rev=1.28;content-type=text%2Fx-cvsweb-markup)


The idea is that you _can_ do without a cronjob.  You may need to do a
first ANALYZE just to get things warmed up, and then let autovacuum do
its job.
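For instance, a minimal warm-up along those lines ('mytable' here is
just a placeholder name):

```sql
-- One-time priming so pg_class.reltuples is populated; autovacuum
-- keeps it reasonably fresh from here on.
ANALYZE;
SELECT relname, reltuples
  FROM pg_class
 WHERE relname = 'mytable';
```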

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] planer picks a bad plan (seq-scan instead of index) when adding an additional join

2006-11-09 Thread Tom Lane
"Thomas H." <[EMAIL PROTECTED]> writes:
> SELECT * FROM shop.dvds
> LEFT JOIN oldtables.movies ON mov_id = dvd_mov_id
> LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean
> WHERE (lower(mov_name) LIKE '%superman re%' OR lower(dvd_name) like 
> '%superman re%' OR lower(dvd_edition) LIKE '%superman re%')

Um, what's the datatype of sm_info_ean and dvd_ean exactly?

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] planer picks a bad plan (seq-scan instead of index) when adding an additional join

2006-11-09 Thread Thomas H.

> > SELECT * FROM shop.dvds
> > LEFT JOIN oldtables.movies ON mov_id = dvd_mov_id
> > LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean
> > WHERE (lower(mov_name) LIKE '%superman re%' OR lower(dvd_name) like
> > '%superman re%' OR lower(dvd_edition) LIKE '%superman re%')
> 
> Um, what's the datatype of sm_info_ean and dvd_ean exactly?

varchar(15) and varchar(14)

i can make them the same width if that could help - i just saw that the
same field on the other tables is limited to 14...


regards,
thomas 




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] zero values in statistic views

2006-11-09 Thread Cornelia Boenigk

Hi all

I would like to see what is going on in a database using the statistic 
views.


I am superuser,
stats_start_collector = on
stats_row_level = on
stats_block_level = on

select * from pg_stat_user_tables;
select * from pg_stat_user_indexes;
select * from pg_statio_user_tables;
select * from pg_statio_user_indexes;

are showing zeroes in all columns where UPDATEs, INSERTs, block-reads,
hits ... should be shown.


select * from pg_stat_database where datname = 'mydb';
shows between 20 and 60 numbackends and from 11 to 50 commits.

What could be the reason for this behaviour? What am I missing?
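One thing worth double-checking (an aside: stats_start_collector in
particular only takes effect at server start) is whether the running
server actually picked up those settings:

```sql
-- If any of these come back 'off', the postgresql.conf change
-- hasn't reached the running server yet.
SHOW stats_start_collector;
SHOW stats_row_level;
SHOW stats_block_level;
```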

Thanks in advance
Conni

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] planer picks a bad plan (seq-scan instead of index) when adding an additional join

2006-11-09 Thread Tom Lane
"Thomas H." <[EMAIL PROTECTED]> writes:
>> Um, what's the datatype of sm_info_ean and dvd_ean exactly?

> varchar(15) and varchar(14)

OK.  I was wondering if you'd tried to use the new contrib/isn code and
it was messing up the estimates somehow.  Seems like a red herring.

After looking more closely, I think the issue is that the size of the
dvds/movies join is estimated at 39900 rows when it's really only 2,
and this estimate discourages the planner from using a nestloop join of
that join against data_soundmedia.  We were thinking this was entirely
because of poor estimation of the ~~ operators, but I suddenly realize
that what's happening is that the size of the left join is being clamped
to be at least the size of its left input --- that is, the planner is
failing to distinguish JOIN/ON clauses (which can't suppress left-side
rows) from WHERE clauses (which can).  Per comment in
set_joinrel_size_estimates:

 * Basically, we multiply size of Cartesian product by selectivity.
 *
 * If we are doing an outer join, take that into account: the output must
 * be at least as large as the non-nullable input.  (Is there any chance
 * of being even smarter?)  (XXX this is not really right, because it
 * assumes all the restriction clauses are join clauses; we should figure
 * pushed-down clauses separately.)

This didn't matter a whole lot back when the planner couldn't reorder
outer joins, but now that it can, it's more important that the estimates
be accurate.
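In the meantime, one workaround sketch (assuming the nestloop really is
the better plan here) is to disable hash joins for just this query and
compare the plans:

```sql
-- Session-local experiment; SET LOCAL reverts at transaction end.
BEGIN;
SET LOCAL enable_hashjoin = off;
EXPLAIN ANALYZE
SELECT * FROM shop.dvds
  LEFT JOIN oldtables.movies ON mov_id = dvd_mov_id
  LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean
 WHERE lower(mov_name) LIKE '%superman re%'
    OR lower(dvd_name) LIKE '%superman re%'
    OR lower(dvd_edition) LIKE '%superman re%';
COMMIT;
```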

I'm not sure if this is feasible to fix before 8.2, but I'll take a
look.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] AutoVacuum on demand?

2006-11-09 Thread Glen Parker
I would like a way to run the autovacuum daemon on demand periodically.
Every night at 2 AM, for example.


Anybody know if this is possible?  If not, it's a feature request :-)

-Glen

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Remote tables infrastructure.

2006-11-09 Thread Dawid Kuroczko

Hello.

One nice feature of PostgreSQL's CREATE FUNCTION/RULE/VIEW system
is ability to create objects which "may appear closer than they really are".
Hence, all the dbi-link-like modules.

The usual method is to create a view on a set returning function (if
reasonable), create RULEs for managing INSERT/UPDATE/DELETE, and voila.

This approach has some limitations.  First of all, if you add WHERE
restrictions to such a view, PostgreSQL will first slurp all the data
from the SRF, and then do the hard work of limiting/ordering.  To
circumvent that you may want to pass the actual query string at the
application level.

Methinks it would be good idea to discuss a more general approach, i.e.
don't "create" links using existing infrastructure but rather provide something
conceptually similar to CREATE AGGREGATE statements. In other words,
make a remote link a first class object inside PostgreSQL.

The first thing, a connection:
CREATE DBLINK name (
   connection = "some connection string",
   start_transaction = foo_start_transaction_func,
   rollback = foo_rollback_func,
   before_commit = foo_prepare_transaction_func,
   after_commit = foo_commit_prepared_func,
   ...
);

A start_transaction_func should receive ISOLATION LEVEL of course.
A 2PC infrastructure could be used to ensure that both transactions
will either commit or rollback.  I think the remote transaction should
start (initialize connection) on first access to the remote table (as not
to cause connection storms upon backend process startup).  If it
fails to initialize it, then our transaction should rollback.
I think it may be reasonable to provide timeout on IDLE (not IDLE in
transaction) remote link.  Of course "nontransactional" remote links
should be possible without inconveniences.  And we would need
to take (and keep) connection string which will allow us to reuse
the library functions with different connections.

Normally one would prototype using PLperlU/PLpythonU PLs and
if one wants more robust interface -- one would made a .so library
providing more robust interface.

Then we should have ability to access remote tables.  I think PostgreSQL
should not assume that it knows what's on the other side of the link, and
just blindly follow user's request for a remote table (and throw an error
if table does not exist).  I'll elaborate further on access functions later on.
For instance accessing table bar on foo DBLINK would be possible through:

SELECT * FROM foo.public.bar WHERE baz = 123;
 foo -- being connection name,
 public -- being schema name,
 bar -- being table name.

You see now where am I cruising to: this way we're using the
connection.schema.table naming scheme, and if we provide an
implicit (or explicit) dblink library for connecting other DBs
in the current (or remote) cluster -- we've got a nice new feature
"for free".

Once CREATEd DBLINK should remain in system tables until
somebody DROPs it.  It might be good idea to also provide
CREATE TEMPORARY DBLINK similar in concept to temporary
tables.

At minimum we should provide (in CREATE DBLINK) extra functions
for accessing and modifying data.  Something like
CREATE DBLINK foo (
 table_select = foo_tab_select_func
);

It should be possible to provide "where" parameters, "order by" and "limit"
somehow -- if remote supports it.  And we should do it so that
remote end could easily transform LIMIT into ROWNUM for instance.
I think PostgreSQL should reevaluate results -- if possible after all it's
PostgreSQL, and we trust PostgreSQL.  Then again there is a big
question how to provide them, and if simpler cases of GROUP BY
should be also possible (like count(*) ?).

At simplest it might be something like:
table_select(table_name text, ret_cols text[], where_cond text,
where_vals text[], order_cols text[], limit int, offset int);

SELECT a, t0, t1-t0
 FROM foo.public.bar
 WHERE (a > 4 AND b IN ('a','z')) OR t=now()
 ORDER BY a
 LIMIT 100 OFFSET 5;

foo_tab_select_func('bar',
 '{a,t0,t1}',
 '("a" > ? AND "b" IN (?,?)) OR "t" = ?', '{4,a,z,2006-11-10T01:13}',
 'a',
 100, 5);

I know it looks a bit awkward, and I do hope someone comes with
a better idea altogether.  My idea was to "transform" original query
conditions into perl DBI prepare-like string -- with column names
properly quoted, and arguments supplied as a separate argument.
I.e provide strict rules and limited number of operators to the query
(brackets, AND, OR, NOT, >, <, =, >=, <=, IN (...), perhaps
IS [NOT] NULL).  The remote end would try to implement as much
as possible but could as well put it on main backend's back.

There is a case with LIMIT/OFFSET -- we must know if remote end
will do the work for us or not.  Also, it would be nice to be able to
push more advanced parts like GROUP BY queries, or even JOINs
(if applicable) to the remote end.  Perhaps as a special mode for
PostgreSQL-compatible backends -- i.e. if some dblink driver
tries to understand PostgreSQL requests -- it is free to do so;
if it quacks like a duck, it is a duck.  I haven't got idea what format
of s

Re: [GENERAL] AutoVacuum on demand?

2006-11-09 Thread Joshua D. Drake
On Thu, 2006-11-09 at 17:00 -0800, Glen Parker wrote:
> I would like a way to run the autovacuum daemon on demand periodically.
> Every night at 2 AM, for example.
> 
> Anybody know if this is possible?  If not, it's a feature request :-)

use vacuumdb and cron.

Joshua D. Drake


> 
> -Glen
> 
> ---(end of broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
>subscribe-nomail command to [EMAIL PROTECTED] so that your
>message can get through to the mailing list cleanly
> 
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] blocking function in PL/Python

2006-11-09 Thread Ottavio Campana
Suppose we have a stored procedure written in PL/Python.

What happens if the function blocks for a while?

Does the server still work for the other clients?



signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] AutoVacuum on demand?

2006-11-09 Thread Glen Parker

Joshua D. Drake wrote:
> On Thu, 2006-11-09 at 17:00 -0800, Glen Parker wrote:
> > I would like a way to run the autovacuum daemon on demand periodically.
> > Every night at 2 AM, for example.
> > 
> > Anybody know if this is possible?  If not, it's a feature request :-)
> 
> use vacuumdb and cron.

Cron yes, vacuumdb no.  I would like to AUTO vacuum periodically.

-Glen

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly

