[GENERAL] Help speeding up this query - maybe need another index?
Here's my SQL query.  I don't think it's too gigantic, but it is kind of beastly:

SELECT COUNT(r) FROM trainer_hand_results r, trainer_scenarios s,
trainer_scenario_stats stats WHERE r.user_id=1 AND
r.trainer_scenario_id=s.id AND s.id=stats.trainer_scenario_id AND
r.action=stats.correct_action;

When I EXPLAIN it, I get:

Aggregate  (cost=18.12..18.13 rows=1 width=32)
  ->  Nested Loop  (cost=0.00..18.12 rows=1 width=32)
        ->  Nested Loop  (cost=0.00..12.28 rows=1 width=40)
              Join Filter: (("outer"."action")::text = ("inner".correct_action)::text)
              ->  Seq Scan on trainer_hand_results r  (cost=0.00..6.56 rows=1 width=181)
                    Filter: (user_id = 1)
              ->  Index Scan using trainer_scenario_stats_trainer_scenario_id_index on trainer_scenario_stats stats  (cost=0.00..5.71 rows=1 width=149)
                    Index Cond: (stats.trainer_scenario_id = "outer".trainer_scenario_id)
        ->  Index Scan using trainer_scenarios_pkey on trainer_scenarios s  (cost=0.00..5.82 rows=1 width=4)
              Index Cond: ("outer".trainer_scenario_id = s.id)
(10 rows)

I don't have a lot of experience with getting queries to go faster.  The things that jump out at me though are two nested loops and a sequential scan.  What could I do to speed this up?

Pat

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] Help speeding up this query - maybe need another index?
Pat Maddox wrote:

  Here's my SQL query.  I don't think it's too gigantic, but it is kind of beastly:

  SELECT COUNT(r) FROM trainer_hand_results r, trainer_scenarios s,
  trainer_scenario_stats stats WHERE r.user_id=1 AND
  r.trainer_scenario_id=s.id AND s.id=stats.trainer_scenario_id AND
  r.action=stats.correct_action;

  When I EXPLAIN it, I get:

  Aggregate  (cost=18.12..18.13 rows=1 width=32)
    ->  Nested Loop  (cost=0.00..18.12 rows=1 width=32)
          ->  Nested Loop  (cost=0.00..12.28 rows=1 width=40)
                Join Filter: (("outer"."action")::text = ("inner".correct_action)::text)
                ->  Seq Scan on trainer_hand_results r  (cost=0.00..6.56 rows=1 width=181)
                      Filter: (user_id = 1)
                ->  Index Scan using trainer_scenario_stats_trainer_scenario_id_index on trainer_scenario_stats stats  (cost=0.00..5.71 rows=1 width=149)
                      Index Cond: (stats.trainer_scenario_id = "outer".trainer_scenario_id)
          ->  Index Scan using trainer_scenarios_pkey on trainer_scenarios s  (cost=0.00..5.82 rows=1 width=4)
                Index Cond: ("outer".trainer_scenario_id = s.id)
  (10 rows)

  I don't have a lot of experience with getting queries to go faster.  The things that jump out at me though are two nested loops and a sequential scan.  What could I do to speed this up?

Have you analyzed the tables in question? Post the result of 'explain analyze' rather than just explain.

--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
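For reference, refreshing the planner statistics Chris is asking about is a one-liner per table (a sketch, using the table names from the query above; run it as the table owner or a superuser):

ANALYZE trainer_hand_results;
ANALYZE trainer_scenarios;
ANALYZE trainer_scenario_stats;

With up-to-date statistics the planner's row estimates usually get much closer to reality, which is what lets it pick a sensible join strategy.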
Re: [GENERAL] Help speeding up this query - maybe need another index?
On 6/13/06, Chris <[EMAIL PROTECTED]> wrote:

Pat Maddox wrote:
> Here's my SQL query.  I don't think it's too gigantic, but it is kind
> of beastly:
>
> SELECT COUNT(r) FROM trainer_hand_results r, trainer_scenarios s,
> trainer_scenario_stats stats WHERE r.user_id=1 AND
> r.trainer_scenario_id=s.id AND s.id=stats.trainer_scenario_id AND
> r.action=stats.correct_action;
>
> When I EXPLAIN it, I get:
>
> Aggregate  (cost=18.12..18.13 rows=1 width=32)
>   ->  Nested Loop  (cost=0.00..18.12 rows=1 width=32)
>         ->  Nested Loop  (cost=0.00..12.28 rows=1 width=40)
>               Join Filter: (("outer"."action")::text = ("inner".correct_action)::text)
>               ->  Seq Scan on trainer_hand_results r  (cost=0.00..6.56 rows=1 width=181)
>                     Filter: (user_id = 1)
>               ->  Index Scan using trainer_scenario_stats_trainer_scenario_id_index on trainer_scenario_stats stats  (cost=0.00..5.71 rows=1 width=149)
>                     Index Cond: (stats.trainer_scenario_id = "outer".trainer_scenario_id)
>         ->  Index Scan using trainer_scenarios_pkey on trainer_scenarios s  (cost=0.00..5.82 rows=1 width=4)
>               Index Cond: ("outer".trainer_scenario_id = s.id)
> (10 rows)
>
> I don't have a lot of experience with getting queries to go faster.
> The things that jump out at me though are two nested loops and a
> sequential scan.  What could I do to speed this up?

Have you analyzed the tables in question?

Post the result of 'explain analyze' rather than just explain.

--
Postgresql & php tutorials
http://www.designmagick.com/

Here is the result from EXPLAIN ANALYZE on that query:

Aggregate  (cost=18.12..18.13 rows=1 width=32) (actual time=4.924..4.925 rows=1 loops=1)
  ->  Nested Loop  (cost=0.00..18.12 rows=1 width=32) (actual time=0.243..4.878 rows=15 loops=1)
        ->  Nested Loop  (cost=0.00..12.28 rows=1 width=40) (actual time=0.222..4.613 rows=15 loops=1)
              Join Filter: (("outer"."action")::text = ("inner".correct_action)::text)
              ->  Seq Scan on trainer_hand_results r  (cost=0.00..6.56 rows=1 width=181) (actual time=0.039..1.302 rows=285 loops=1)
                    Filter: (user_id = 1)
              ->  Index Scan using trainer_scenario_stats_trainer_scenario_id_index on trainer_scenario_stats stats  (cost=0.00..5.71 rows=1 width=149) (actual time=0.009..0.009 rows=0 loops=285)
                    Index Cond: (stats.trainer_scenario_id = "outer".trainer_scenario_id)
        ->  Index Scan using trainer_scenarios_pkey on trainer_scenarios s  (cost=0.00..5.82 rows=1 width=4) (actual time=0.012..0.014 rows=1 loops=15)
              Index Cond: ("outer".trainer_scenario_id = s.id)
Total runtime: 5.494 ms
(11 rows)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] ECPG and Curors.
On Mon, Jun 12, 2006 at 11:14:24PM -0400, Peter L. Berghold wrote:
> what I don't see is how to detect that I've fetched the last row from a
> query. Is there more complete doco on this process somewhere?

You could either handle a NOT FOUND exception, or make use of SQLCODE or SQLSTATE like this...

EXEC SQL DECLARE csr_fred CURSOR FOR SELECT ...
EXEC SQL OPEN csr_fred;

while (SQLCODE == 0)
{
    EXEC SQL FETCH csr_fred INTO :jim, :sheila, ...;

    if (SQLCODE == 0)
    {
        /* body of row processing here */
    }
}

EXEC SQL CLOSE csr_fred;

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
[GENERAL] BLOB & Searching
Hi, I am not 100% sure what the best solution would be, so I was hoping someone could point me in the right direction. I usually develop in MS tools, such as .net, ASP, SQL Server etc..., but I really want to expand my skillset and learn as much about Postgres as possible. What I need to do is design a DB that will index and store approximately 300 Word docs, each with a size no more than 1MB. They need to be able to search the Word documents for keywords/phrases to be able to identify which one to use. So, I need to write 2 web interfaces. A front end and a back end. Front end for the users who will search for their documents, and a backend for an admin person to upload new/amended documents to the DB to be searchable. NOW. I could do this in the usual MS tools that I work with using BLOBs and the built-in full-text searching that comes with SQL Server, but I don't have these to work with. I am working with Postgres & JSP pages. What I was hoping someone could help me out with was identifying the best possible solution to use. 1. How can I store the Word docs in the DB, would it be best to use a BLOB data type? 2. Does Postgres support full text searching of a Word document once it is loaded into the BLOB column & how would this work? Would I have to unload each BLOB object, convert it back to text to search, or does Postgres have the ability to complete the full-text search of a BLOB, like MS SQL Server & Oracle do? 3. Is there a way to export the Word doc from the BLOB column and dump it into a PDF format (I guess I am asking if someone has seen or written a PDF generator script/storedProc for Postgres)? If someone could help me out, it would be greatly appreciated. cheers, James ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] pg_dump: missing pg_database entry
Hello, I'm experiencing a strange problem with PostgreSQL 7.4.9. One of my database production servers has 2 large databases, it's still possible to connect to them and pass queries, but the pg_database system table is empty, which prohibits such actions as dumping the databases. For instance:

[EMAIL PROTECTED]:~$ pg_dump maf
pg_dump: missing pg_database entry for database "maf"

does not work, but:

[EMAIL PROTECTED]:~$ psql maf
Welcome to psql 7.4.9, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help on internal slash commands
       \g or terminate with semicolon to execute query
       \q to quit

maf=#

This works, but the pg_database looks like it's empty:

maf=# select * from pg_database;
 datname | datdba | encoding | datistemplate | datallowconn | datlastsysoid | datvacuumxid | datfrozenxid | datpath | datconfig | datacl
---------+--------+----------+---------------+--------------+---------------+--------------+--------------+---------+-----------+--------
(0 rows)

I tried to repair the system indexes, but it doesn't work either... Thanks in advance for your help.

---(end of broadcast)---
TIP 6: explain analyze is your friend
[GENERAL] Searching BLOB
Hi, I am not 100% sure what the best solution would be, so I was hoping someone could point me in the right direction. I usually develop in MS tools, such as .net, ASP, SQL Server etc..., but I really want to expand my skillset and learn as much about Postgresql as possible. What I need to do is design a DB that will index and store approximately 300 Word docs, each with a size no more than 1MB. They need to be able to search the Word documents for keywords/phrases to be able to identify which one to use. So, I need to write 2 web interfaces. A front end and a back end. Front end for the users who will search for their documents, and a backend for an admin person to upload new/amended documents to the DB to be searchable. NOW. I could do this in the usual MS tools that I work with using BLOBs and the built-in full-text searching that comes with SQL Server, but I don't have these to work with at the moment. I am working with Postgres & JSP pages. What I was hoping someone could help me out with was identifying the best possible solution to use. 1. How can I store the Word docs in the DB, would it be best to use a BLOB data type? 2. Does Postgres support full text searching of a Word document once it is loaded into the BLOB column & how would this work? Would I have to unload each BLOB object, convert it back to text to search, or does Postgres have the ability to complete the full-text search of a BLOB, like MS SQL Server & Oracle do? 3. Is there a way to export the Word doc from the BLOB column and dump it into a PDF format (I guess I am asking if someone has seen or written a PDF generator script/storedProc for Postgres)? If someone could help me out, it would be greatly appreciated. cheers, James
[GENERAL] Error: Server doesn't listen
Every two days, the Postgres 8.1 server becomes unconfigured without any apparent changes having been made. The service under Windows XP is always stopped. I work under XP SP2. Does somebody have any idea? Does the list accept messages in Spanish? ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Help with storing spatial (map coordinates) data?
On Mon, 12 Jun 2006, John Tregea wrote: > Hi, > > I have recently switched to PostgreSQL and had no problem bringing our > existing (my)SQL databases and data into the environment. I am now > extending the functionality of our databases and want to start storing > spatial information. > > The information is made up of latitude and longitude coordinates that > define a point or location on the earth's surface. e.g. degrees, minutes > and seconds north/south and degrees, minutes and seconds east/west. > > I have read up on custom data types (with input and output functions) in > the docs but am not sure if that is the best way to go. Can anyone point > me to a simple, workable implementation of storing and managing this > type of data or advise me on how to structure a series of fields that > could combine to the required string? > > I am running postgreSQL 8.1.4 under WinXP Pro and currently evaluating > an X-Talk front end called Revolution for the GUI development and have > only some general experience with SQL. I strongly suggest you do not use the native Postgres geometry capability, but install PostGIS and use this instead. See www.postgis.org Brent Wood ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Help with storing spatial (map coordinates) data?
On Tue, 13 Jun 2006, John Tregea wrote: > Hi Brent, > > I will look at postGIS today. I will try and keep the whole GIS > functionality as a separate schema to avoid confusing myself, so a > postGIS may be exactly what I am looking for. > Ummm... one caution: The lovely side effect, apart from all the SQL functions to query & analyse spatial data in Postgres, is that any table with a properly created geometry attribute is automatically available as a GIS layer in a GIS map window, using GIS applications like QGIS, mezoGIS, JUMP & uDIG (even ArcInfo via the PostGIS SDE), or to a less well integrated extent, GRASS. It can also be a layer in a web map server application using something like UMN mapserver. However, not all of these support the concept of schemas, so only tables in the public schema may be able to be plotted/mapped. Also, from a data modelling perspective, a geometry attribute is not inherently different to a numeric, int, varchar or text attribute, so unless there is some other reason to divide entities with geometries into a separate schema from those without, I'm not sure it is good practice. Cheers, Brent ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Help with storing spatial (map coordinates) data?
On Tue, 13 Jun 2006, John Tregea wrote: > Thanks Brent, > > I will be cautious in my approach. The public schema is the place that I > wanted to use to store the geometry attributes, so from your points, > that sounds like the best place. The other schemas contain controlled > (security) information in proprietary data structures so I that was my > reluctance to modify those tables with the necessary geometry functions, > types etc. > Sounds eminently sensible :-) One point you might note, the AddGeometryColumn() function does two things. It adds a geometry column of the appropriate projection & type to the specified table. It also writes a metadata record to the geometry_columns table. This is where many application look to find tables with geometries. If you create a view on a table with a geometry column, or create a table with a geometry column without using the AddGeometryColumn() function (eg: create table foo1 as select * from foo0;), then some applications will not recognise the table or view as a "GIS" table. If you are adding geometries to tables via views, which it sounds like you may be doing, you may need to manually insert the appropriate data into the geometry_columns table to be fully compliant with the OGC specs & PostGIS implementation. If you create such a geometry table or view & the GIS package you are using fails to make it available as a data source, this is almost certainly why :-) Cheers, Brent ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
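Spelled out, the two paths Brent describes look roughly like this (a sketch only: the table and view names are invented, and SRID 4326 here simply means plain WGS84 lat/long, which may not match your data):

-- normal case: let PostGIS add the column and register it for you
SELECT AddGeometryColumn('public', 'locations', 'geom', 4326, 'POINT', 2);

-- a view (or a table made with CREATE TABLE ... AS) has to be registered by hand
INSERT INTO geometry_columns
       (f_table_catalog, f_table_schema, f_table_name, f_geometry_column,
        coord_dimension, srid, "type")
VALUES ('', 'public', 'locations_view', 'geom', 2, 4326, 'POINT');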
Re: [GENERAL] Searching BLOB
James Watson wrote: What I was hoping someone could help me out with was identifying the best possible solution to use. 1. How can I store the word doc's in the DB, would it be best to use a BLOB data type? You can use the column type "bytea", which can store (nearly) arbitrary amounts of binary data. 2. Does Postgres support full text searching of a word document once it is loaded into the BLOB column & how would this work? Would I have to unload each BLOB object, convert it back to text to search, or does Postgres have the ability to complete the full-text search of a BLOB, like MSSQL Server & Oracle do? There is fulltext indexing support for postgres, look for tsearch2 in the contrib module of postgres. A bytea-column is basically used like a string, so there is no need to load/unload the blob. There is also the concept of a LOB as a distinct entity in postgresql. Accessing those lobs needs special support from your client library (standard libpq provides that support of course). They have the advantage that you can open/seek/close them like a regular file. But the disadvantage is that you can't store them in columns - they are referenced via oids, and you need to store those oids. You also can't put triggers on those LOBs, and I'm not sure how transaction-safe they are. 3. Is there a way to export the Word Doc From the BLOB colum and dump it into a PDF format (I guess I am asking if someone has seen or written a PDF generator script/storedProc for Postgres)? You can use java as a backend language with postgresql (google for pljava). So you can pretty much do whatever you can do with java. greetings, Florian Pflug ---(end of broadcast)--- TIP 6: explain analyze is your friend
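A minimal sketch of that combination, assuming the tsearch2 contrib module is installed and that the application extracts the plain text from each Word file before inserting (table and column names here are only examples):

CREATE TABLE documents (
    id       serial PRIMARY KEY,
    filename text NOT NULL,
    content  bytea,      -- the original .doc file
    body     text,       -- plain text extracted by the application
    body_idx tsvector    -- searchable form of body
);

UPDATE documents SET body_idx = to_tsvector('default', body);
CREATE INDEX documents_body_gist ON documents USING gist (body_idx);

SELECT id, filename FROM documents
 WHERE body_idx @@ to_tsquery('default', 'keyword & phrase');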
Re: [GENERAL] pg_dump: missing pg_database entry
[EMAIL PROTECTED] wrote: Hello, I'm experiencing a strange problem with PostgreSQL 7.4.9. One of my database production servers has 2 large databases, it's still possible to connect to them and pass queries, but the pg_database system table is empty, which prohibits such actions as dumping the databases. I believe that postgresql keeps a plaintext copy of the database table, because it can't access that table until you are connected. I'd suggest you make a backup of your whole data directory immediately, in case things get worse (e.g. someone tries to create a database, and this causes the plaintext copy to be overwritten). One reason the pg_database table seems to be empty could be oid wraparound. Has this database been vacuumed regularly? If not, try doing a "vacuum full" now - according to some earlier discussion about oid wraparound on this list this should fix the problem if the wraparound hasn't happened too long ago. But, in any case, take a (filesystem level) backup of your database NOW, before you do anything else. greetings, Florian Pflug ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
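In practical terms that advice boils down to something like the following, run while connected to each affected database (a sketch only; VACUUM FULL takes exclusive locks and can run a long time on large tables, and datfrozenxid is just one of the 7.4-era pg_database columns listed above):

VACUUM FULL;
-- afterwards, check whether the missing entries have reappeared:
SELECT datname, datfrozenxid FROM pg_database;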
Re: [GENERAL] Help speeding up this query - maybe need another index?
Pat Maddox wrote: Here's my SQL query. I don't think it's too gigantic, but it is kind of beastly: SELECT COUNT(r) FROM trainer_hand_results r, trainer_scenarios s, trainer_scenario_stats stats WHERE r.user_id=1 AND r.trainer_scenario_id=s.id AND s.id=stats.trainer_scenario_id AND r.action=stats.correct_action;

The only indices that can help here are:

trainer_hand_results: (user_id), (trainer_scenario_id)
trainer_scenarios: (id)
trainer_scenario_stats: (trainer_scenario_id), (correct_action)

Which of those help depends on the size of your tables. greetings, Florian Pflug ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
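Spelled out as statements, that comes to something like the following (index names are arbitrary; trainer_scenarios.id and trainer_scenario_stats.trainer_scenario_id already appear to be indexed, judging by the plan earlier in the thread):

CREATE INDEX trainer_hand_results_user_id_idx
    ON trainer_hand_results (user_id);
CREATE INDEX trainer_hand_results_scenario_idx
    ON trainer_hand_results (trainer_scenario_id);
CREATE INDEX trainer_scenario_stats_correct_action_idx
    ON trainer_scenario_stats (correct_action);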
Re: [GENERAL] Searching BLOB
Save yourself some effort and use Lucene to index a directory of your 300 word documents. I'm pretty sure that Lucene includes an extension to read Word documents, and you can use PDFBox to read/write PDF files. Marrying the searching and displaying of results to your web application should be trivial since you're wanting to use java anyway. Lucene has full character set support and is blindingly fast If you're looking for a solution to this problem using Postgres, then you'll be creating a ton extra work for yourself. If you're wanting to learn more about postgres, then maybe it'll be worthwhile. John James Watson said: > Hi, > I am not 100% sure what the best solution would be, so I was hoping > someone could point me in the right direction. > > I usually develop in MS tools, such as .net, ASP, SQL Server etc..., > but I really want to expand my skillset and learn as much about > Postgresqlas > possible. > > What I need to do, is design a DB that will index and store > approximately 300 word docs, each with a size no more that 1MB. They > need to be able to seacrh the word documents for keyword/phrases to be > able to identify which one to use. > > So, I need to write 2 web interfaces. A front end and a back end. Front > end for the users who will search for their documents, and a backend > for an admin person to upload new/ammended documents to the DB to be > searchable. > > NOW. I could do this in the usual MS tools that I work with using > BLOB's and the built in Full-text searching that comes with SQL Server, > but i don't have these to work with at the mometn. I am working with > PostGres & JSP > pages > > What I was hoping someone could help me out with was identifying the > best possible solution to use. > > 1. How can I store the word doc's in the DB, would it be best to use a > BLOB data type? > > 2. Does Postgres support full text searching of a word document once it > is loaded into the BLOB column & how would this work? Would I have to > unload each BLOB object, convert it back to text to search, or does > Postgres have the ability to complete the full-text search of a BLOB, > like MSSQL Server & Oracle do? > > 3. Is there a way to export the Word Doc From the BLOB colum and dump > it into a PDF format (I guess I am asking if someone has seen or > written a PDF generator script/storedProc for Postgres)? > > If someone could help me out, it would be greatly appreciated. > > cheers, > James > ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Error: Server doesn't listen
Fernando Sánchez Cervera wrote: Every two days, Postgres 8.1 Server is unconfigured without apparent changes done. The service under windows XP is always stoped. I work under XP SP2. has somebody any idea? What do the logs show? Either PostgreSQL's or Window's logs should show something happening. Do the list accept messages in spanish? You can try, but there are separate Spanish resources available too: http://www.postgresql.org/community/lists/ http://www.postgresql.org/community/international -- Richard Huxton Archonet Ltd ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] A slow query
Hi all,

We're using some 3rd party product that uses inheritance, and the following query is rather slow on PostgreSQL 7.4.7 (debian stable). Any suggestions how to speed it up?

explain analyze SELECT otype,owner,rnumber,dir,number,dnumber,pos,snumber FROM mm_posrel posrel ORDER BY number DESC LIMIT 25;

QUERY PLAN
Limit  (cost=7996.04..7996.10 rows=25 width=60) (actual time=2329.505..2329.767 rows=25 loops=1)
  ->  Sort  (cost=7996.04..8157.42 rows=64553 width=60) (actual time=2329.495..2329.585 rows=25 loops=1)
        Sort Key: posrel.number
        ->  Result  (cost=0.00..1510.51 rows=64553 width=60) (actual time=0.045..1644.541 rows=75597 loops=1)
              ->  Append  (cost=0.00..1510.51 rows=64553 width=60) (actual time=0.034..977.543 rows=75597 loops=1)
                    ->  Seq Scan on mm_posrel posrel  (cost=0.00..1510.51 rows=64551 width=39) (actual time=0.027..436.501 rows=75597 loops=1)
                    ->  Seq Scan on mm_menu_item posrel  (cost=0.00..0.00 rows=1 width=60) (actual time=0.005..0.005 rows=0 loops=1)
                    ->  Seq Scan on mm_cms_operation posrel  (cost=0.00..0.00 rows=1 width=60) (actual time=0.005..0.005 rows=0 loops=1)
Total runtime: 2332.136 ms
(9 rows)

The tables look like (I added the pkeys after the indexes on number, it didn't change the problem):

      Table "public.mm_posrel"
 Column  |  Type   | Modifiers
---------+---------+-----------
 number  | integer | not null
 otype   | integer | not null
 owner   | text    | not null
 snumber | integer | not null
 dnumber | integer | not null
 rnumber | integer | not null
 dir     | integer |
 pos     | integer |
Indexes:
    "mm_posrel_pkey" primary key, btree (number)
    "mm_posrel_dnumber_idx" btree (dnumber)
    "mm_posrel_number_idx" btree (number)
    "mm_posrel_rnumber_idx" btree (rnumber)
    "mm_posrel_snumber_idx" btree (snumber)
Inherits: mm_insrel

     Table "public.mm_menu_item"
 Column  |  Type   | Modifiers
---------+---------+-----------
 number  | integer | not null
 otype   | integer | not null
 owner   | text    | not null
 snumber | integer | not null
 dnumber | integer | not null
 rnumber | integer | not null
 dir     | integer |
 pos     | integer |
 name    | text    | not null
Indexes:
    "mm_menu_item_pkey" primary key, btree (number)
    "mm_menu_item_dnumber_idx" btree (dnumber)
    "mm_menu_item_number_idx" btree (number)
    "mm_menu_item_rnumber_idx" btree (rnumber)
    "mm_menu_item_snumber_idx" btree (snumber)
Inherits: mm_posrel

   Table "public.mm_cms_operation"
 Column  |  Type   | Modifiers
---------+---------+-----------
 number  | integer | not null
 otype   | integer | not null
 owner   | text    | not null
 snumber | integer | not null
 dnumber | integer | not null
 rnumber | integer | not null
 dir     | integer |
 pos     | integer |
 m_type  | text    | not null
 getvars | text    |
Indexes:
    "mm_cms_operation_pkey" primary key, btree (number)
    "mm_cms_operation_dnumber_idx" btree (dnumber)
    "mm_cms_operation_number_idx" btree (number)
    "mm_cms_operation_rnumber_idx" btree (rnumber)
    "mm_cms_operation_snumber_idx" btree (snumber)
Inherits: mm_posrel

      Table "public.mm_insrel"
 Column  |  Type   | Modifiers
---------+---------+-----------
 number  | integer | not null
 otype   | integer | not null
 owner   | text    | not null
 snumber | integer | not null
 dnumber | integer | not null
 rnumber | integer | not null
 dir     | integer |
Indexes:
    "mm_insrel_dnumber_idx" btree (dnumber)
    "mm_insrel_number_idx" btree (number)
    "mm_insrel_rnumber_idx" btree (rnumber)
    "mm_insrel_snumber_idx" btree (snumber)
Inherits: mm_object

      Table "public.mm_object"
 Column | Type    | Modifiers
--------+---------+-----------
 number | integer | not null
 otype  | integer | not null
 owner  | text    | not null
Indexes:
    "mm_object_pkey" primary key, btree (number)

--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.
T: ++31(0)534346874 F: ++31(0)534346876 M: I: www.magproductions.nl A: Postbus 416 7500 AK Enschede // Integrate Your World // ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] delete seems to be getting blocked
however, if for every occurrence of the foreign key there are some 2000 rows in the table, is it ok to still have an index on that foreign key? also, will an index scan still take place or will postgres itself choose to do a sequential scan? however, i have also noticed that even though the indexes exist, the sequential scan still takes place. how can this be avoided, will i have to set enable_seqscan to off?

thanks, regards
surabhi

From: [EMAIL PROTECTED] on behalf of Michael Fuhr
Sent: Mon 6/12/2006 6:18 PM
To: surabhi.ahuja
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] delete seems to be getting blocked

On Mon, Jun 12, 2006 at 12:54:15PM +0530, surabhi.ahuja wrote:
> i have four tables in my database
> TAB1, has one primary key T1
>
> TAB2, has 2 fields, one is the primary key T2 and the other one
> is the foreign key T1 (from TAB1)
>
> TAB3 also has 2 fields, one is the primary key T3 and the other
> is the foreign key T2 (from TAB2)
>
> TAB4 has 2 fields again, primary key T4 and a foreign key T3 (from TAB3)

Do the foreign key columns in TAB2, TAB3, and TAB4 have indexes? How many rows do the tables have?

> the disk is 100% full.
>
> i open psql and do
> delete from TAB1
>
> nothing seems to be happening for a long time, although when i do
> top, it shows postgres taking 99%.

For each record you delete in TAB1 the database must search TAB2 to check for referential integrity violations or cascading operations (ON DELETE CASCADE, ON DELETE SET NULL, etc.). If the foreign key column in TAB2 doesn't have an index then each row deleted from TAB1 will result in a sequential scan on TAB2; likewise with TAB3 if you modify TAB2 and with TAB4 if you modify TAB3.

If the tables are large then make sure you have indexes on the foreign key columns. If you create indexes then you might need to start a new session due to plan caching.

--
Michael Fuhr

---(end of broadcast)---
TIP 4: Have you searched our list archives? http://archives.postgresql.org
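Concretely, the indexes Michael is describing are just plain indexes on the referencing columns; with the hypothetical names from the question above they would be created like this (adjust to the real table and column names):

CREATE INDEX tab2_t1_idx ON TAB2 (T1);
CREATE INDEX tab3_t2_idx ON TAB3 (T2);
CREATE INDEX tab4_t3_idx ON TAB4 (T3);

With those in place the per-row referential integrity checks can use index scans instead of repeated sequential scans, which is normally a much better fix than turning enable_seqscan off globally.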
Re: [GENERAL] A slow query
Alban Hertroys wrote: Hi all, We're using some 3rd party product that uses inheritence, and the following query is rather slow on PostgreSQL 7.4.7 (debian stable). Any suggestions how to speed it up? A few more datapoints: - Database was vacuum full analyzed just before the query. - The same query on mm_insrel takes just over 11s. - The actual and estimated row numbers are accurate. -- Alban Hertroys [EMAIL PROTECTED] magproductions b.v. T: ++31(0)534346874 F: ++31(0)534346876 M: I: www.magproductions.nl A: Postbus 416 7500 AK Enschede // Integrate Your World // ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Password for postgresql superuser?
- Original Message - From: "Chris Browne" <[EMAIL PROTECTED]> To: pgsql-general@postgresql.org Sent: Thursday, June 08, 2006 01:30 PM Subject: [GENERAL] Password for postgresql superuser? > [EMAIL PROTECTED] ("jqpx37") writes: > > > Is there any security risk in the postgresql superuser having a > > password? > > > > I installed a Linux distro recently and had it install Postgresql. > > It automatically set up the postgres account; the account was set up > > with no password. > > > > I could of course create a password, but it's not clear to me that's > > a good thing from a security standpoint. > > That depends on your security policies. > > There's a pretty good argument to be made that a 'postgres' account > should only permit people in via "su -", in which case it might not > need to have an individual password... Thanks for your response. I found allusions to the point you're making, though no detailed explanation. It makes sense even without a thorough explication. Best wishes > -- > (format nil "[EMAIL PROTECTED]" "cbbrowne" "cbbrowne.com") > http://www3.sympatico.ca/cbbrowne/oses.html > "If you give someone Fortran, he has Fortran. > If you give someone Lisp, he has any language he pleases." > -- Guy L. Steele Jr. > > ---(end of broadcast)--- > TIP 3: Have you checked our extensive FAQ? > >http://www.postgresql.org/docs/faq ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] PostgreSQL and Apache
I'm working on a project involving PostgreSQL and Apache. Anyone know of any good books or online how-to's on getting PostgreSQL and Apache to work together? (I'm also using PHP.) TIA ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] Can't create language - "could not load library"
Hello All. I have installed version 8.1.4 (located in D:/PostgreSQL/8.1) on Windows XP and try to create a language with the CREATE LANGUAGE command or the createlang command line tool, but either way I get an error:

postgres=# create language plperl;
ERROR:  could not load library "D:/PostgreSQL/8.1/lib/plperl.dll": The specified module could not be found.

and the same for any language except plpgsql. Also I have tried to create language java from the command line: java org.postgresql.pljava.deploy.Deployer -install but got the same error. All of the dll libraries exist in dynamic_library_path and are in their places. I've read the thread http://archives.postgresql.org/pgsql-bugs/2006-03/msg00039.php and some referenced ones but got nothing. Is this a real bug, or is it possible to resolve it, and how? Thanks.
Re: [GENERAL] Aggregate functions not allowed in WHERE clause
On Mon, Jun 12, 2006 at 08:40:29PM -0700, pradeep singh wrote: > i think this query can be rewritten as > > SELECT claim_id,sum(invoices),sum(payments) > FROM logs > GROUP BY claim_id > HAVING sum(invoices) > 0 OR sum(payments) > 0; > > having clause can be used with aggregate functions but > those functions should be the part of column > list/expression list in the SELECT statement. PostgreSQL has no such requirement; see "The GROUP BY and HAVING Clauses" in the documentation: http://www.postgresql.org/docs/8.1/interactive/queries-table-expressions.html#QUERIES-GROUP "Tip: Grouping without aggregate expressions effectively calculates the set of distinct values in a column." "Note that the aggregate expressions do not necessarily need to be the same in all parts of the query." Offhand I don't know if the SQL standard requires expressions in the HAVING clause to be present in the select list -- can you cite reference from the standard that supports the assertion that they should be? -- Michael Fuhr ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
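To illustrate the tip being quoted, the following is legal in PostgreSQL even though no aggregate appears in the select list (column names taken from the query under discussion):

SELECT claim_id
  FROM logs
 GROUP BY claim_id
HAVING sum(invoices) > 0 OR sum(payments) > 0;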
Re: [GENERAL] PostgreSQL and Apache
On Tuesday 13. June 2006 15:39, jqpx37 wrote: >I'm working on a project involving PostgreSQL and Apache. > >Anyone know of any good books or online how-to's on getting PostgreSQL > and Apache to work together? (I'm also using PHP.) AFAIK, there are no dependencies between Apache and PostgreSQL. PHP is what you'll use as the glue between them. I've worked with PHP and MySQL for some years, and found the transition to PostgreSQL rather painless, but still I've considered buying the "Beginning PHP and PostgreSQL 8: From Novice to Professional" by W. Jason Gilmore and Robert H. Treat. -- Leif Biberg Kristensen | Registered Linux User #338009 http://solumslekt.org/ | Cruising with Gentoo/KDE ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] PostgreSQL and Apache
PHP is one alternative, another is PERL with CGI to write web based programs that can GET/POST with input/output from the browser, and to interface with *SQL - i.e. postgresql - you can use PERL's DBI interface Leif B. Kristensen wrote: On Tuesday 13. June 2006 15:39, jqpx37 wrote: I'm working on a project involving PostgreSQL and Apache. Anyone know of any good books or online how-to's on getting PostgreSQL and Apache to work together? (I'm also using PHP.) AFAIK, there are no dependencies beween Apache and PostgreSQL. PHP is what you'll use as the glue between them. I've worked with PHP and MySQL for some years, and found the transition to PostgreSQL rather painless, but still I've considered buying the "Beginning PHP and PostgreSQL 8: From Novice to Professional" by W. Jason Gilmore and Robert H. Treat. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] PostgreSQL and Apache
louis gonzales wrote: PHP is one alternative, another is PERL with CGI to write web based programs that can GET/POST with input/output from the browser, and to interface with *SQL - i.e. postgresql - you can use PERL's DBI interface Leif B. Kristensen wrote: On Tuesday 13. June 2006 15:39, jqpx37 wrote: I'm working on a project involving PostgreSQL and Apache. Anyone know of any good books or online how-to's on getting PostgreSQL and Apache to work together? (I'm also using PHP.) AFAIK, there are no dependencies between Apache and PostgreSQL. PHP is what you'll use as the glue between them. I've worked with PHP and MySQL for some years, and found the transition to PostgreSQL rather painless, but still I've considered buying the "Beginning PHP and PostgreSQL 8: From Novice to Professional" by W. Jason Gilmore and Robert H. Treat. Another one is 'Whitebeam' - which is primarily targeted at developing web-applications using server-side JavaScript, XML and PostgreSQL. It runs as an Apache module (like PHP). The Whitebeam interface to PostgreSQL allows you to do just about anything you could do in libpq and in a slightly more object orientated way than the raw PHP interface (although for PHP there are a number of wrappers). Whitebeam at http://www.whitebeam.org The Postgres interface is documented at: http://www.whitebeam.org/library/guide/TechNotes/postgres.rhtm Best regards Pete ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] pg_dump: missing pg_database entry
[EMAIL PROTECTED] writes: > I'm experiencing a strange problem with PostgreSQL 7.4.9. > One of my database production servers has 2 large databases, it's still > possible to connect to them and pass queries, but the pg_database > system table is empty, which prohibits such actions as dumping the > databases. Sounds like XID wraparound. Does vacuuming pg_database make the problem go away? If so, I'd recommend doing database-wide vacuums in all your databases ASAP. And then instituting a regular vacuum maintenance schedule. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] XID comparations
Hi, I would like to understand better the logic to determine when an xid is older than another one. As I could understand, the XID is always incremented, never reset. If that is true, then we can have rows with cmin ranging from 1 to 4.294.967.295 (2^32-1). When the xid overflows (32 bits) the next one will be 3 (1 and 2 are reserved). In this case, we could have lines with cmin 4.294.967.295 and lines with cmin 3. How are they compared to determine that rows with cmin 3 are newer than rows with cmin 4.294.967.295? Thanks in advance, Reimer ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully
> What say we just stop right there and call Date's Relational Model > what it is: a silly edifice built atop wrong premises. SQL was a quick and dirty hack (Systems R and R* needed some way to interface with data) with multiple deficiencies recognized and documented right within the very first paper by its own authors. To hold it up as any kind of paradigm is really misinformed. -- Scott Ribe [EMAIL PROTECTED] http://www.killerbytes.com/ (303) 722-0567 voice ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] XID comparations
"Carlos H. Reimer" <[EMAIL PROTECTED]> writes: > I would like to understand better the logic to determine when a xid is older > than another one. It's circular mod 2^32, with a special case for FrozenXID. It's a mistake to imagine that XIDs are unsigned ints, really --- the comparison doesn't work that way. For an XID of say 1billion, XIDs from 1billion to 3billion are "after", the rest "before". So once a row is created, it has to be deleted or frozen within 2 billion transactions, else its XID wraps around and appears to be "in the future" rather than "in the past" compared to current XIDs. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] XID comparations
On Tue, Jun 13, 2006 at 12:10:26PM -0300, Carlos H. Reimer wrote: > When xid overflows (32 bits) the next one will be 3 (1 and 2 are reserved). > > In this case, we could have have lines with cmin 4.294.967.295 and lines > with cmin 3. How are they compared to determine that > rows with cmin 3 are newer than rows with cmin 4.294.967.295? The same way you handle any circular numbering system, compare the difference. i.e. if( (xmin1-xmin2) mod (2^32) < 2^31 ) { xmin1 is newer than xmin2 } In the example you give: (3 - 4.294.967.295) mod (2^32) = -4294967292 mod (2^32) = 4 Hence XID 3 is newer compared to XID 4.294.967.295. Note in such a circular system, it is possible for A < B, B < C and C < A to all be simultaneously true. However, we always know where the current transaction is, so everything is compared to that. I hope this clears it up for you. Have a nice day, -- Martijn van Oosterhout http://svana.org/kleptog/ > From each according to his ability. To each according to his ability to > litigate.
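The same arithmetic can be tried out directly as a small throwaway function (purely an illustration of the comparison rule above; the server itself does this in C on 32-bit integers, not in plpgsql):

CREATE OR REPLACE FUNCTION xid_follows(newer bigint, older bigint) RETURNS boolean AS '
DECLARE
    diff bigint;
BEGIN
    -- distance from older to newer, reduced into the range 0 .. 2^32-1
    diff := (newer - older + 4294967296) % 4294967296;
    -- newer really is ahead if it lies in the half of the circle after older
    RETURN diff > 0 AND diff < 2147483648;
END;
' LANGUAGE plpgsql;

SELECT xid_follows(3, 4294967295);  -- true: XID 3 counts as newer than 4.294.967.295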
[GENERAL] PostgreSQL and Apache: authentication and authorization
Thanks to everyone who replied to my previous post, "PostgreSQL and Apache," where I asked about references describing how to get PostgreSQL and Apache working together. Here, I'll ask the specific questions which motivated that post. The setup: I'm working on a database project. We chose PostgreSQL (PG) (right now 8.1.3 is installed, though we will soon switch to the latest release, 8.1.4 I believe). For a user interface, I decided to have users access the DB via a web server, and chose Apache 2 (relatively recent version) for that. The dynamic HTML and DB access will be handled by PHP (tentatively I want to use PHP5). The web server is not being used to put up a public webpage. Rather, it's just a choice for a user interface. Both the database and the web server will reside on the same Linux box. That box will sit on an intranet, behind a firewall; and all users will connect from within the firewall from web browsers. Users will be able to both read and write data. Security is paramount; only authorized users will be able to connect. Each user will have access to only parts of the data (authorization). My question concerns authentication and access. I can envision, broadly, two authentication schemes: (1) Users authenticate to Apache. (2) Users authenticate to PG. (For security reasons, I'm not considering any scheme where users authenticate/log into the Linux server; or, where authentication to PG or Apache relies on an OS-level password file listing OS-level users.) Regarding scheme (1), once a user authenticates to Apache, it's not clear to me how to pass on authorization (access) rights to PG. One method would be to have a single user, say, www_user (aside from the PG superuser), and pass access rights information along in some kind of flag. But then I wouldn't be able to rely on PG's built-in authorization model (ROLEs). (Instead either access rights would be handled in PHP, where I don't think it belongs, or in some extra DB logic, which would be messy.) Another possibility would be to have the PG recognize the same users that Apache does, and use one of the weaker authentication methods for PG itself. That is, the "real" authentication would lie with Apache; once authenticated there, a user would be passed on to PG by the weaker method. I'm not sure, however, if this method is secure. (The hope is it would be secure because Apache was secured, the OS/box is otherwise secure, and PG would accept local connections only.) Re (2), it appears that there is at least one Apache module that allows Apache to handle authentication based on a PG authentication file. (So while there would still be Apache-level authentication, the data for that would reside within PG.) This avoids problems with ROLEs, since the users would be PG users, but I'm not sure how it would fit in with other Apache-level authentication I want to do (e.g., client certificates, etc). Nor am I sure how secure it is. My question is the pros and cons of the possible methods. I also need pointers to information on how to do these things. (There are lots of Apache books, and a few PG books, out there, but I haven't found lengthy material on the connections between the two that I'm wondering about.) Thanks, and apologies for any conceptual confusion evidence here. ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully implementing
On Tue, Jun 13, 2006 at 09:18:17AM -0600, Scott Ribe wrote: > > What say we just stop right there and call Date's Relational Model > > what it is: a silly edifice built atop wrong premises. > > SQL was a quick and dirty hack (Systems R and R* needed some way to > interface with data) with multiple deficiencies recognized and > documented right within the very first paper by its own authors. Perfection isn't a human attribute. There isn't a whole lot of convincing evidence that it's a divine attribute. Did you have a point to make? > To hold it up as any kind of paradigm is really misinformed. SQL had something that relational algebra/relational calculus did not have, which is that somebody without a math degree can stare at it a short while and *do* something with it right away. That it also has other properties that are extremely useful and powerful (the ability to specify states of ignorance using NULL, do arithmetic, use aggregates, etc.) is what has made it such a smashing success. Now, there's another thing that makes it amazingly hard to displace: imagining what would be better *enough* to justify the many millions of people-years and even more billions of dollars needed to move away from it. Despite Date's many whines over the decades, his still-vaporware Relational Model doesn't even vaguely approximate that criterion. Cheers, D -- David Fetter <[EMAIL PROTECTED]> http://fetter.org/ phone: +1 415 235 3778AIM: dfetter666 Skype: davidfetter Remember to vote! ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully
> Now, there's another thing that makes it amazingly hard to displace: > imagining what would be better *enough* to justify the many millions of > people-years and even more billions of dollars needed to move away from > it. Despite Date's many whines over the decades, his still-vaporware > Relational Model doesn't even vaguely approximate that > criterion. 1) Please understand Date is not a programmer, he is a lecturer, therefore, he is not undertaking nor does he wish to undertake any implementation. Ideally, he wouldn't endorse any particular implementation (but he does- that's his option). 2) Re: "still-vaporware Relational Model"- the relational model is a mathematical model for data representation. Your comment makes as much sense as claiming that "Newtonian physics" is vaporware. 3) From your comments, it is clear that you wish to only consider existing software as proof of usefulness and you are not interested in considering alternative ideas. This is precisely the difference between a researcher and a rote programmer. I would rather be someone in between. Regardless of what you think of the relational model, I would urge you to be more open-minded, even about "vaporware". Much of the world's most interesting software has not yet been written. I'm done with this thread. Good luck. -M ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully implementing
On 6/13/06, David Fetter <[EMAIL PROTECTED]> wrote: > SQL was a quick and dirty hack (Systems R and R* needed some way to > interface with data) with multiple deficiencies recognized and > documented right within the very first paper by its own authors. Perfection isn't a human attribute. There isn't a whole lot of convincing evidence that it's a divine attribute. Did you have a point to make? so your logic is that since perfection is not a human attribute it follows that it is not worthwhile finding better alternatives to existing methods of data management? SQL had something that relational algebra/relational calculus did not have, which is that somebody without a math degree can stare at it a short while and *do* something with it right away. That it also has other properties that are extremely useful and powerful (the ability to specify states of ignorance using NULL, do arithmetic, use aggregates, etc.) is what has made it such a smashing success. SQL is a smashing success because at the time of its invention it was better than its alternatives. It also received heavy backing from major software shops of the time. Its relative merit for relational applications was not a factor here. Compared to ISAM, for example, SQL is an improvement for most applications. Also, I think the relational model is easier to understand precisely because it is so grounded in mathematics...the terse mathematical notation commonly used may be difficult for some to follow but it could be 'dumbed down' as it were for easier consumption. Now, there's another thing that makes it amazingly hard to displace: imagining what would be better *enough* to justify the many millions of people-years and even more billions of dollars needed to move away from it. Despite Date's many whines over the decades, his still-vaporware Relational Model doesn't even vaguely approximate that criterion. So you are justifying investment in 'A' so as not to consider application or consideration of 'B'. While this may be an argument not to drop everything and move to 'B', 'B' should still be considered for the long term advantages it might provide. Anyways, I think Date and Pascal are pragmatic about this particular point. I think what they are concerned about is the combination of social factors which cause illogical arguments such as the above to get so much traction. Merlin ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] PostgreSQL and Apache: authentication and authorization
jqpx37 wrote: I can envision, broadly, two authentication schemes: (1) Users authenticate to Apache. (2) Users authenticate to PG. This is a little too complicated. The most accurate and precise security is obtained by having the user log in with a real postgres account, and to grant table privileges to that account (though usually we make the account a member of a group that has certain privileges (except now we call them all roles to make it more confusing)). In our system we store the username's userid and password in the PHP session and nothing more. On each round trip we make a connection and run the request. No Apache configuration is required. Our package that does all of this is available as GPL, you are free to trawl the code: http://docs.secdat.com. Folks with a heavy coding background who distrust databases will usually recommend connecting as superuser and enforcing security with application code. This is bad because A) a mistake in code could be catastrophic, and B) it takes so much longer to code up security in the app layer and C) anybody connecting directly to the database can subvert it all. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
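In 8.1 terms that setup is just a group role holding the table privileges plus one login role per person, along these lines (all names here are invented):

CREATE ROLE app_users NOLOGIN;
GRANT SELECT, INSERT, UPDATE ON some_table TO app_users;
CREATE ROLE alice LOGIN PASSWORD 'secret' IN ROLE app_users;

The web layer then connects as alice using the credentials held in the PHP session, and the database itself enforces what she may touch.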
Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully implementing
On Tue, Jun 13, 2006 at 12:51:57PM -0400, Merlin Moncure wrote: > On 6/13/06, David Fetter <[EMAIL PROTECTED]> wrote: > >> SQL was a quick and dirty hack (Systems R and R* needed some way > >> to interface with data) with multiple deficiencies recognized and > >> documented right within the very first paper by its own authors. > > > >Perfection isn't a human attribute. There isn't a whole lot of > >convincing evidence that it's a divine attribute. Did you have a > >point to make? > > so your logic is that since perfection is not a human attribute it > follows that it is not worthwhile finding better alternatives to > existing methods of data management? If there are better alternatives, they will need to show some real-world attributes, not mathematically-inspired fantasies, because they're not "better" unless actual people in real-world situations can use them. > >SQL had something that relational algebra/relational calculus did > >not have, which is that somebody without a math degree can stare at > >it a short while and *do* something with it right away. That it > >also has other properties that are extremely useful and powerful > >(the ability to specify states of ignorance using NULL, do > >arithmetic, use aggregates, etc.) is what has made it such a > >smashing success. > > SQL is a smashing success because at the time it was invention it > was better than it's alternatives. And it still is. If you have evidence to the contrary that you can point to in real-world software that's actually deployed, please show it. > It also received heavy backing from major software shops of the > time. It's relitive merit to relational applications were not a > factor here. Compared to ISAM, for example, SQL is an improvement > for most applications. Also, I think the relational model is easier > to understand precisely because it is so grounded in > mathematics... Date's Relational Model is not the only one out there. Codd wrote about one which was different, as have Stonebraker, Libkin, etc. That Date and his dour crew have spent more time yelling louder is not by itself (or any other way) a recommendation for the model they endorse. It's *certainly* not a logical argument for that model. > the terse mathematical notation commonly used may be difficult for > some to follow but it could be 'dumbed down' as it were for easier > consumption. Again, if you have a piece of software you can point to that does this thing, please do so. What might be done is an interesting question, but what hasn't been done despite hugely many opportunities is also a significant piece of information. > >Now, there's another thing that makes it amazingly hard to > >displace: imagining what would be better *enough* to justify the > >many millions of people-years and even more billions of dollars > >needed to move away from it. Despite Date's many whines over the > >decades, his still-vaporware Relational Model doesn't even vaguely > >approximate that criterion. > > So you are justifying investment in 'A' as not to consider > application or consideration of 'B'. While this may be an agument > not to drop everything and move to 'B', 'B' should still be > considered for long term advantages it might provide. Please feel free. I've read Date, Darwen and Pascal's stuff over time, and it looks to me like the increasingly strident whines of other frustrated ideologues whose theories don't match reality. 
I take it as significant that nobody's managed to implement this extreme purist model in actual software, as the computing world--academic, commercial and FLOSS--has had decades to do it. > Anyways, I think Date and Pascal are pragmatic about this particular > point. Could you quote me something somewhere that shows evidence of pragmatism on either of their parts? Preferably something from the current decade, although something from earlier would be OK, too. > I think what they are concerned about it the combination of social > factors which cause illogical arguments such as the above to get so > much traction. "Social factors," as you call them, are some of the principle differences between an abstruse theory that nobody cares about and a useful tool that people actually get work done with. Cheers, D -- David Fetter <[EMAIL PROTECTED]> http://fetter.org/ phone: +1 415 235 3778AIM: dfetter666 Skype: davidfetter Remember to vote! ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] More on state variables
Yesterday on this list I found out about the nifty setting custom_variable_classes='global' which allows the setting and retrieving of arbitrary values that persist across statements. I wonder if there is a way to do the same thing within the scope of a transaction? Is there a "magic" value of custom_variable_classes that displays any such behavior? Or is it available by other means? ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully
2) Re: "still-vaporware Relational Model"- the relational model is a mathematical model for data representation. Your comment makes as much sense as claiming that "Newtonian physics" is vaporware. If we're discussing the "luminiferous aether", then, yes, vaporware seems /somewhat/ appropriate. Thanks - i'll be here all week. b ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] plpgsql Result Sets
When using a temp table in plpgsql functions that has columns comprised from many tables populated by joins, how do you specify a temp table return type when its generated by select into and dropped dynamically? I get an error when I specify returns setof temp_table. Also when I specify a permanent table as the return type, I get extra blank columns in the result set which are the additional columns of the permanent table I don't need and the data is displayed in the wrong columns. I tried defining composite types, but get a runtime error that it isn't available unless I first define it outside of the function. Since requests are dynamic, its impossible for me to determine all possible composite types required ahead of time. In plpgsql, how do you return back a result set that is determined and generated at runtime based on a report request? Also why does plpgsql require you to define what is returned? I'm relatively new to PostgreSQL and really impressed by its capabilities. Do you have any examples showing how to define and return a dynamic result set at runtime within a function? Any help would be greatly appreciated. Thanks, Brandon ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
RES: [GENERAL] XID comparations
Thanks, In my first question I meant xmin rather than cmin, but even so I could follow the logic. So for each XID there are 2 billion XIDs that are considered lower than it and the other 2 billion are considered higher than it. About row visibility: are all the rows with xmin higher than my XID considered to be in the future, and therefore not visible to my transaction? Thanks in advance! Reimer ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] How can I retrieve a function result?
Hi, Thanks a lot for your answer. Let me give you a simple example with more detail: In my postgres database I have a function called "myFunction". // CREATE OR REPLACE FUNCTION myFunction(INT) RETURNS INT AS ' DECLARE indice ALIAS FOR $1; BEGIN IF indice > 0 THEN return -900; ELSE /* something to do */ return 0; END IF; END; ' LANGUAGE 'plpgsql'; // I have a program in C/C++ which calls the function 'myFunction': // res = PQexecParams ( conn, "select myFunction($1)" , 1, NULL, paramValues, paraLenghts, paramFormats, resultFormat); // The program works fine; I can verify that it executes the FUNCTION 'myFunction'. However, I don't know how I can get the return value of the FUNCTION 'myFunction' (as you can see in the little example the return value can be 0 or -900). Regards, Thanks in advance! On 6/12/06, Joachim Wieland <[EMAIL PROTECTED]> wrote: Luis, On Mon, Jun 12, 2006 at 06:24:24PM -0500, Luis Alberto Pérez Paz wrote: > res = PQexecParams ( conn, "select myFunction($1,$2,$3)" , 3, NULL, > paramValues, paraLenghts, paramFormats, resultFormat); > It works fine, however I don't know how I can retrieve the result that > the FUNCTION "myFunction" returns. "myFunction" is a postgres FUNCTION which > returns an INT. libpq functions for retrieving query results are described here: http://www.postgresql.org/docs/8.1/static/libpq-exec.html#LIBPQ-EXEC-SELECT-INFO Have you tried them and do you have a special problem with one of those? If so, please give more detail. Getting the result of a function does not differ from getting the result of a regular select of a table or view. Joachim -- paz, amor y comprensión (1967-1994)
[GENERAL] Short circuit evaluation of expressions in query
Say I have the following parameterized query in a function: select * from foo where ($1 = -1 or foo.status = $1) and (…) where the (…) part consists of more parameterized conditions similar to the first one. Suppose that at runtime $1 is supplied a value of -1; does the foo.status = $1 condition still have to be evaluated? Assuming the $1 = -1 case, should the above query be as fast as the following? select * from foo where (…) Thanks Jimmy
[GENERAL] MAX() across partition tables
Hi, In Postgres 8.1, MAX() was rewritten to use a backward index scan instead of a seq_scan. This achieves a huge performance gain. But I found that when I use MAX() on a partitioned table, it falls back to the old seq_scan plan. Wouldn't it be more efficient to get MAX() from each partition table (using its index) and then select the MAX() among the max values? Attached is the EXPLAIN output. I have an index on 'time' on each partition table. --Qiao report_log=> explain select max(time) from url_access_base; QUERY PLAN Aggregate (cost=542341.71..542341.72 rows=1 width=8) -> Append (cost=0.00..490943.97 rows=20559097 width=8) -> Seq Scan on url_access_base (cost=0.00..15.00 rows=500 width=8) -> Seq Scan on url_access_2006_06_02 url_access_base (cost=0.00..7728.43 rows=326343 width=8) -> Seq Scan on url_access_2006_06_03 url_access_base (cost=0.00..23818.00 rows=1005000 width=8) -> Seq Scan on url_access_2006_06_04 url_access_base (cost=0.00..23890.00 rows=1008000 width=8) -> Seq Scan on url_access_2006_06_05 url_access_base (cost=0.00..126426.85 rows=5291885 width=8) -> Seq Scan on url_access_2006_05_31 url_access_base (cost=0.00..7220.01 rows=302001 width=8) -> Seq Scan on url_access_2006_05_22 url_access_base (cost=0.00..292.11 rows=12211 width=8) -> Seq Scan on url_access_2006_05_23 url_access_base (cost=0.00..8.00 rows=300 width=8) -> Seq Scan on url_access_2006_05_24 url_access_base (cost=0.00..13021.30 rows=544630 width=8) -> Seq Scan on url_access_2006_05_25 url_access_base (cost=0.00..59.50 rows=2450 width=8) -> Seq Scan on url_access_2006_05_26 url_access_base (cost=0.00..735.00 rows=31000 width=8) -> Seq Scan on url_access_2006_06_01 url_access_base (cost=0.00..208633.43 rows=8726643 width=8) -> Seq Scan on url_access_2006_06_07 url_access_base (cost=0.00..78903.17 rows=3300017 width=8) -> Seq Scan on url_access_2006_06_08 url_access_base (cost=0.00..190.00 rows=8000 width=8) -> Seq Scan on url_access_2006_05_01 url_access_base (cost=0.00..3.17 rows=117 width=8) (17 rows) report_log=> explain select max(time) from url_access_2006_05_23; QUERY PLAN -- Result (cost=0.03..0.04 rows=1 width=0) InitPlan -> Limit (cost=0.00..0.03 rows=1 width=8) -> Index Scan Backward using url_access_2006_05_23_time_idx on url_access_2006_05_23 (cost=0.00..10.30 rows=300 width=8) Filter: ("time" IS NOT NULL) (5 rows) report_log=> explain select max(time) from url_access_2006_05_24; QUERY PLAN Result (cost=0.03..0.04 rows=1 width=0) InitPlan -> Limit (cost=0.00..0.03 rows=1 width=8) -> Index Scan Backward using url_access_2006_05_24_time_idx on url_access_2006_05_24 (cost=0.00..15458.93 rows=544630 width=8) Filter: ("time" IS NOT NULL) (5 rows)
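P.S. For example, using just two of the partitions above, something like this (a rough sketch) gets the fast backward index scan on each child table and then simply picks the largest of the per-partition results:

select max(t) from (
    select max(time) as t from url_access_2006_05_23
    union all
    select max(time) from url_access_2006_05_24
    -- ... one branch per partition ...
) as per_partition;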
Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully implementing
David Fetter wrote: On Tue, Jun 13, 2006 at 12:51:57PM -0400, Merlin Moncure wrote: On 6/13/06, David Fetter <[EMAIL PROTECTED]> wrote: SQL was a quick and dirty hack... > If there are better alternatives, they will need to show some real-world attributes, not mathematically-inspired fantasies, because they're not "better" unless actual people in real-world situations can use them that you can point to in real-world software that's > actually deployed, please show it. KDB [1] from KX Systems has both terse mathematical notations to access data directly and an awkwardly(?) extended SQL syntax to handle queries not easily handled by SQL alone. Time-series related queries are one such case [2,3]. Examples can be seen in various places on the web [3]. It has plenty of real-world usage on large databases at places including Deutsche Bank, Fidelity Investments, JP Morgan, Merrill Lynch, the US Army, etc. the terse mathematical notation commonly used... Again, if you have a piece of software you can point to that does this thing, please do so. I seriously doubt it follows Date or Pascal religiously, but it does have a convenient and very terse mathematical notation so might count as a real-world piece of software that you were asking for. [1] http://www.kx.com/news/press-releases/arthur-interview.php [2] http://www.kx.com/news/in-the-news/sql-timeseries.php [3] http://www.intelligententerprise.com/010327/celko_online.jhtml;jsessionid=NDIHEWXGL4TNKQSNDBNSKHSCJUMEKJVN [4] http://www.cs.nyu.edu/courses/fall02/G22.3033-007/kintro.html ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Short circuit evaluation of expressions in query
On Tue, Jun 13, 2006 at 02:34:36PM -0400, Jimmy Choi wrote: > Suppose that at runtime, $1 is supplied a value of -1, does the > foo.status = $1 condition still have to be evaluated? SQL doesn't offer any kind of guarantees about order of evaluation, and neither does postgres. It may occasionally short-circuit in an effort to reduce execution time, but there are no guarantees. Have a nice day, -- Martijn van Oosterhout http://svana.org/kleptog/ > From each according to his ability. To each according to his ability to > litigate.
Re: [GENERAL] State of Postgresql Replication?
On 10/6/2006 4:37, "Jim C. Nasby" <[EMAIL PROTECTED]> wrote: > On Fri, Jun 09, 2006 at 01:51:23PM -0500, Scott Marlowe wrote: >> On Fri, 2006-06-09 at 12:51, Jim C. Nasby wrote: >>> On Fri, Jun 09, 2006 at 01:35:44PM -0400, Wei Weng wrote: Is there any OSS solutions (stable) for postgresql replication for postgresql 8.0? >>> >>> Slony, pgmirror, and I think there's another one. >>> google:postgresql replication >> >> pgpool does syncrhonous query level replication, in addition to >> connection pooling. It also makes a good front end to slony. > > There's also PITR, depending on your needs... Also look at pgcluster - you can find it at http://pgfoundry.org/projects/pgcluster/ ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully implementing
Ron Mayer <[EMAIL PROTECTED]> wrote: > David Fetter wrote: >>> the terse mathematical notation commonly used... >> Again, if you have a piece of software you can point to that does >> this >> thing, please do so. > > I seriously doubt it follows Date or Pascal religiously, but > it does have a convenient and very terse mathematical notation > so might count as a real-world piece of software that you were > asking for. > > [1] http://www.kx.com/news/press-releases/arthur-interview.php > [2] http://www.kx.com/news/in-the-news/sql-timeseries.php > [3] > http://www.intelligententerprise.com/010327/celko_online.jhtml;jsessionid=NDIHEWXGL4TNKQSNDBNSKHSCJUMEKJVN > [4] http://www.cs.nyu.edu/courses/fall02/G22.3033-007/kintro.html The sample problem in [3] is one that shows pretty nicely a significant SQL weakness; it's very painful to build SQL to do complex things surrounding cumulative statistics. Unfortunately, across that set of URLs, I don't actually see a single presentation of their terse notation for handling time series, so it's not quite an answer, either. I should probably try to take another look at Tutorial D to see if it actually *does* provide something that would help make aggregates "play better." I'm not certain it tries nearly hard enough... I'm not sure what the Right Answer would be. I'm not certain there necessarily is one, short of either programming using vector statements (a la K) or lambda expressions (Lisp), neither of which are likely to be considered "generally acceptable." I'd rather see some attempts at it than the present "head in the sand" approach... -- output = ("cbbrowne" "@" "gmail.com") http://cbbrowne.com/info/finances.html If the automobile had improved as much as the computer in the last 25 years, it would get 1,000,000 miles to the gallon, cost $1, have a top speed of 30,000 mph, and explode at least once a year, killing all passengers.
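P.S. To make the "cumulative statistics" pain concrete, even a simple running total winds up as a self-join along the lines of the following (made-up table, and it goes quadratic as the table grows):

select s1.sale_date, sum(s2.amount) as running_total
  from sales s1, sales s2
 where s2.sale_date <= s1.sale_date
 group by s1.sale_date
 order by s1.sale_date;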
Re: [GENERAL] BLOB & Searching
On Mon, Jun 12, 2006 at 02:44:34PM -0700, [EMAIL PROTECTED] wrote: > 1. How can I store the word doc's in the DB, would it be best to use a > BLOB data type? Use a bytea field. > 2. Does Postgres support full text searching of a word document once it Nope. > is loaded into the BLOB column & how would this work? Would I have to > unload each BLOB object, convert it back to text to search, or does > Postgres have the ability to complete the full-text search of a BLOB, > like MSSQL Server & Oracle do? You'd want to store the plain-text version of the doc and then index that using tsearch2. Actually, there may be a way to avoid storing the text representation if you get clever with tsearch2... worst case you might need to extend tsearch2 so you can feed it an arbitrary function instead of a field. > 3. Is there a way to export the Word Doc From the BLOB colum and dump > it into a PDF format (I guess I am asking if someone has seen or > written a PDF generator script/storedProc for Postgres)? No, but you should be able to make that happen using an untrusted language/function and some external tools. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
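P.S. Something along these lines (an untested sketch; table and column names are made up, and it assumes the tsearch2 contrib module is installed), with the .doc-to-text conversion done by some external tool before the row is stored:

CREATE TABLE documents (
    id        serial PRIMARY KEY,
    doc       bytea,      -- the original .doc file
    doc_text  text,       -- plain-text version extracted outside the database
    doc_tsv   tsvector    -- tsearch2 search vector
);

UPDATE documents SET doc_tsv = to_tsvector(doc_text);
CREATE INDEX documents_tsv_idx ON documents USING gist (doc_tsv);

-- full text search:
SELECT id FROM documents WHERE doc_tsv @@ to_tsquery('contract & renewal');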
Re: [GENERAL] Partitioning...
On Mon, Jun 12, 2006 at 06:58:06PM -0400, Tom Lane wrote: > "Milen Kulev" <[EMAIL PROTECTED]> writes: > > What is wrong with random() ? > > Not guaranteed to be stable across the multiple evaluations that the > rule will perform ... remember a rule is a macro and has the usual > multiple-evaluation gotchas in the face of volatile arguments. I believe a safe alternative would be... INSERT INTO ... SELECT * FROM (SELECT random()*20 FROM ...) ; You might need to add an ORDER BY to the subquery to ensure PostgreSQL doesn't pull it into the main query. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] plpgsql Result Sets
If you're trying to do what I think you're trying to do, you might need to create the temp table before you create the function (in the same session). Posting a test case people could look at somewhere would be easier than reading a bunch of description about the problem. On Tue, Jun 13, 2006 at 12:59:33PM -0500, Brandon E Hofmann wrote: > > When using a temp table in plpgsql functions that has columns comprised > from many tables populated by joins, how do you specify a temp table return > type when its generated by select into and dropped dynamically? I get an > error when I specify returns setof temp_table. Also when I specify a > permanent table as the return type, I get extra blank columns in the result > set which are the additional columns of the permanent table I don't need > and the data is displayed in the wrong columns. > > I tried defining composite types, but get a runtime error that it isn't > available unless I first define it outside of the function. Since requests > are dynamic, its impossible for me to determine all possible composite > types required ahead of time. > > In plpgsql, how do you return back a result set that is determined and > generated at runtime based on a report request? Also why does plpgsql > require you to define what is returned? > > I'm relatively new to PostgreSQL and really impressed by its capabilities. > Do you have any examples showing how to define and return a dynamic result > set at runtime within a function? > > Any help would be greatly appreciated. > > Thanks, > > Brandon > > > ---(end of broadcast)--- > TIP 9: In versions below 8.0, the planner will ignore your desire to > choose an index scan if your joining column's datatypes do not > match -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
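P.S. In other words, something like this (an untested sketch with made-up names), all in the same session:

CREATE TEMP TABLE report_tmp (id int, total numeric);

CREATE OR REPLACE FUNCTION run_report() RETURNS SETOF report_tmp AS '
DECLARE
    r report_tmp%ROWTYPE;
BEGIN
    -- populate the temp table however the report requires
    INSERT INTO report_tmp SELECT 1, 100;
    FOR r IN SELECT * FROM report_tmp LOOP
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
' LANGUAGE 'plpgsql';

SELECT * FROM run_report();

The function is tied to that temp table, of course, so if the table goes away you have to recreate both.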
Re: [GENERAL] How can I retrieve a function result?
Luis, On Tue, Jun 13, 2006 at 01:19:32PM -0500, Luis Alberto Pérez Paz wrote: > The program works fine, actually I can verify that it executes the FUNCTION > 'myFunction', however I don't know how I can get the return value of the > FUNCTION 'myFunction' (as you can see in the little example the return > value can be 0 or -900). This is similar to retrieving the result of a query that returned one row and one column. So you just have to use PQgetvalue(res, 0, 0). Here are a few other examples: Check if there were rows at all: if (PQntuples(res) == 0) { /* no rows */ PQclear(res); return (char*) 0; } Check if there was anything other than one column per row: if (PQnfields(res) != 1) { /* did not get only 1 column back */ PQclear(res); return (char*) 0; } Check whether or not the first row, first column field is NULL: if (PQgetisnull(res, 0, 0)) { /* got NULL */ PQclear(res); return (char*) 0; } Get the first row, first column value as char*: db_value = PQgetvalue(res, 0, 0); I hope this gives you an idea. Those functions are in this section of the documentation: http://www.postgresql.org/docs/8.1/static/libpq-exec.html#LIBPQ-EXEC-SELECT-INFO Joachim ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] postgres vs. oracle for very large tables
On Mon, May 15, 2006 at 12:24:51PM -0700, TJ O'Donnell wrote: > I've written some extensions to postgres to implement > chemical structure searching. I get inquiries about > the performance of postgres vs. oracle. This is a huge > topic, with lots of opinions and lots of facts. But, > today I got some feedback stating the opinion that: > "Postgres performance diminishes with large tables > (we'll be going to upwards of hundreds of millions of rows)." > > Is this pure speculation, opinion, known fact? > Does anyone know of measured performance of postgres > vs. oracle, specifically with very large tables? You're more likely to run into problems with large fields being toasted than plain large tables. IIRC Oracle's large object support is better than PostgreSQL's. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] How can I retrieve a function result?
Thank you Joachim, it works! Regards, Luis A Perez Paz
Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully implementing
On Tuesday 2006-06-13 09:26, David Fetter wrote: > On Tue, Jun 13, 2006 at 09:18:17AM -0600, Scott Ribe wrote: > > > What say we just stop right there and call Date's Relational Model > > > what it is: a silly edifice built atop wrong premises. > > > > SQL was a quick and dirty hack (Systems R and R* needed some way to > > interface with data) with multiple deficiencies recognized and > > documented right within the very first paper by its own authors. > > Perfection isn't a human attribute. There isn't a whole lot of > convincing evidence that it's a divine attribute. Did you have a > point to make? > > > To hold it up as any kind of paradigm is really misinformed. > > SQL had something that relational algebra/relational calculus did not > have, which is that somebody without a math degree can stare at it a > short while and *do* something with it right away. That it also has > other properties that are extremely useful and powerful (the ability > to specify states of ignorance using NULL, do arithmetic, use > aggregates, etc.) is what has made it such a smashing success. > > Now, there's another thing that makes it amazingly hard to displace: > imagining what would be better *enough* to justify the many millions > of people-years and even more billions of dollars needed to move away > from it. Despite Date's many whines over the decades, his > still-vaporware Relational Model doesn't even vaguely approximate that > criterion. > > Cheers, > D COBOL and VisualBasic are better than Haskell by the same argument. (SQL always reminds me a lot of COBOL.) ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] postgres vs. oracle for very large tables
On Tuesday 2006-06-13 16:19, Jim C. Nasby wrote: > On Mon, May 15, 2006 at 12:24:51PM -0700, TJ O'Donnell wrote: > > I've written some extensions to postgres to implement > > chemical structure searching. I get inquiries about > > the performance of postgres vs. oracle. This is a huge > > topic, with lots of opinions and lots of facts. But, > > today I got some feedback stating the opinion that: > > "Postgres performance diminishes with large tables > > (we'll be going to upwards of hundreds of millions of rows)." > > > > Is this pure speculation, opinion, known fact? > > Does anyone know of measured performance of postgres > > vs. oracle, specifically with very large tables? > > You're more likely to run into problems with large fields being toasted > than plain large tables. IIRC Oracle's large object support is better > than PostgreSQL's. There's more to it than that. If the huge tables grow, VACUUMING for XID maintenance could put Postgres at a disadvantage relative to Oracle. There are "behavioral" variables involved. Furthermore, it may be possible to trade DBA tricks for initial cost of ownership. Usually the accounting doesn't work out (DBA salaries are even more expensive than Oracle licenses) ... but grad students work cheap. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] BLOB & Searching
Jim C. Nasby wrote: On Mon, Jun 12, 2006 at 02:44:34PM -0700, [EMAIL PROTECTED] wrote: 1. How can I store the word doc's in the DB, would it be best to use a BLOB data type? Use a bytea field. 2. Does Postgres support full text searching of a word document once it Nope. Not natively. It would however be possible to use one of the .doc libs out there to read the binary document and run it through a parser and feed that to Tsearch2. Sincerely, Joshua D. Drake -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] postgres vs. oracle for very large tables
Jim C. Nasby wrote: On Mon, May 15, 2006 at 12:24:51PM -0700, TJ O'Donnell wrote: I've written some extensions to postgres to implement chemical structure searching. I get inquiries about the performance of postgres vs. oracle. This is a huge topic, with lots of opinions and lots of facts. But, today I got some feedback stating the opinion that: "Postgres performance diminishes with large tables (we'll be going to upwards of hundreds of millions of rows)." It really depends. I have many customers with hundreds of millions of rows that don't have ANY problems. Sincerely, Joshua D. Drake -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Short circuit evaluation of expressions in query
"Jimmy Choi" <[EMAIL PROTECTED]> writes: > Say I have the following parameterized query in a function: > select * from foo where ($1 = -1 or foo.status = $1) and (...) > Suppose that at runtime, $1 is supplied a value of -1, does the > foo.status = $1 condition still have to be evaluated? The PG manual points out that left-to-right short circuit is NOT part of the language specification: http://www.postgresql.org/docs/8.1/static/sql-expressions.html#SYNTAX-EXPRESS-EVAL However, the real problem with thinking as you do above is that it's micro-optimization on the wrong level. Usually the sort of optimization you need to think about in SQL is whether the query allows an index to be used to fetch the relevant rows. In the above, even if you have an index on foo.status, it won't be used because the OR means that potentially *every* row of foo matches the OR condition. If you really have a need to sometimes fetch all the rows and sometimes fetch only the ones with status = X, I'd suggest generating different queries in those two cases. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] Problem Connecting to 5432
All, I have read message after message and searched the internet for hours, yet I still can’t get a remote computer to connect to port 5432 on my Fedora Core 3 system running Postgresql 7.4.7. What I have done: 1) Stopped the iptables service 2) Modified postgresql.conf and added the following lines tcpip_socket = true port = 5432 3) Modified pg_hba.conf and added host all all (my ip address) 255.255.255.255 trust 4) Modified the postgresql startup script to use the -i flag 5) Verified that postmaster is running with the -i flag… ps ax | grep postmaster output: 4259 pts/1 S 0:00 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data -i 6) Tried to verify that the server was listening on port 5432 only to find out that it isn’t. The netstat output follows: tcp 0 0 127.0.0.1:8438 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN tcp 0 0 :::80 :::* LISTEN tcp 0 0 :::22 :::* LISTEN tcp 0 0 :::443 :::* LISTEN As you can see it is only listening on the loopback interface. I’m quite certain the issue is how I am starting the service, but I’ve added the -i flag. I’m all out of ideas on this one. Any and all help is greatly appreciated. Regards, Bart
Re: [GENERAL] Problem Connecting to 5432
Try using the following format in the pg_hba.conf file: host all all(or your_user_account) your_IP/32 trust (The 32 is the same as 255.255.255.255 but in CIDR format) As for the command line you started postmaster with, doesn't the "-i" require an interface such as an IP address too? If you look below in your comments, you specify "-i" after your DATA directory but never give the "-i" an argument? Casey, J Bart wrote: All, I have read message after message and searched the internet for hours, yet I still can’t get a remote computer to connect to port 5432 on my Fedora Core 3 system running Postgresql 7.4.7. What I have done: 1) Stopped the iptables service 2) Modified postgresql.conf and added the following lines tcpip_socket = true port = 5432 3) Modified pg_hba.conf and added host all all (my ip address) 255.255.255.255 trust 4) Modified the postgresql startup script to use the –i flag 5) Verified that postmaster is running with the –i flag… ps ax | grep postmaster output: 4259 pts/1 S 0:00 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data –i 6) Tried to verify that the server was listening on port 5432 only to find out that it isn’t. The netstat output follows: tcp 0 0 127.0.0.1:8438 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN tcp 0 0 :::80 :::* LISTEN tcp 0 0 :::22 :::* LISTEN tcp 0 0 :::443 :::* LISTEN As you can see it is only listening on the loopback interface I’m quite certain the issue is how I am starting the service, but I’ve added the –i flag. I’m all out of ideas on this one. Any and all help is greatly appreciated. Regards, Bart ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Problem Connecting to 5432
My mistake, the "-h host_IP" explicitly states which IP address to listen on. /usr/bin/postmaster -h your_IP -p 5432 -D /var/lib/pgsql/data –i I'm not sure if postgresql v7.x.y already used the pg_ctl command which is essentially a wrapper for postmaster, if so use, pg_ctl -w -o "-h your_IP -p your_PORT" -l logfile(if you wish) start if you use "your_IP = 0.0.0.0" it will listen on all valid TCP/IP interfaces, including 127.0.0.1(a.k.a. localhost) louis gonzales wrote: Try using the following format in the pg_hba.conf file: host all all(or your_user_account) your_IP/32 trust (The 32 is the same as 255.255.255.255 but in CIDR format) As for the command line you started postmaster with, doesn't the "-i" require an interface such as an IP address too? If you look below in your comments, you specify "-i" after your DATA directory but never give the "-i" an argument? Casey, J Bart wrote: All, I have read message after message and searched the internet for hours, yet I still can’t get a remote computer to connect to port 5432 on my Fedora Core 3 system running Postgresql 7.4.7. What I have done: 1) Stopped the iptables service 2) Modified postgresql.conf and added the following lines tcpip_socket = true port = 5432 3) Modified pg_hba.conf and added host all all (my ip address) 255.255.255.255 trust 4) Modified the postgresql startup script to use the –i flag 5) Verified that postmaster is running with the –i flag… ps ax | grep postmaster output: 4259 pts/1 S 0:00 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data –i 6) Tried to verify that the server was listening on port 5432 only to find out that it isn’t. The netstat output follows: tcp 0 0 127.0.0.1:8438 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN tcp 0 0 :::80 :::* LISTEN tcp 0 0 :::22 :::* LISTEN tcp 0 0 :::443 :::* LISTEN As you can see it is only listening on the loopback interface I’m quite certain the issue is how I am starting the service, but I’ve added the –i flag. I’m all out of ideas on this one. Any and all help is greatly appreciated. Regards, Bart ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq