[GENERAL] [PGSQL v8.2.5] Similar queries behave differently
Hi all. On the very same database and session I have two different (but similar) queries behaving in a very different way as far as timings go.

This is the first one:

prove=# explain analyze select d.* from t_vcol natural join v_dati_attuali d natural join tt_elem where vtab_id='TEST';
                                                           QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=5.65..8562012.60 rows=88104022 width=73) (actual time=36.579..36.772 rows=7 loops=1)
   ->  Hash Join  (cost=1.19..442967.06 rows=408730 width=73) (actual time=36.547..36.660 rows=7 loops=1)
         Hash Cond: (d.camp_id = t_vcol.camp_id)
         ->  Nested Loop  (cost=0.00..430860.08 rows=1603700 width=73) (actual time=36.480..36.558 rows=24 loops=1)
               ->  Seq Scan on tt_elem  (cost=0.00..29.40 rows=1940 width=8) (actual time=0.012..0.013 rows=1 loops=1)
               ->  Index Scan using i_dati_0 on t_dati d  (cost=0.00..211.74 rows=827 width=73) (actual time=36.461..36.498 rows=24 loops
                     Index Cond: (d.elem_id = tt_elem.elem_id)
                     Filter: dato_flag
         ->  Hash  (cost=1.12..1.12 rows=5 width=15) (actual time=0.039..0.039 rows=5 loops=1)
               ->  Seq Scan on t_vcol  (cost=0.00..1.12 rows=5 width=15) (actual time=0.015..0.026 rows=5 loops=1)
                     Filter: (vtab_id = 'TEST'::text)
   ->  Bitmap Heap Scan on tt_data  (cost=4.46..16.62 rows=216 width=8) (actual time=0.009..0.009 rows=1 loops=7)
         Recheck Cond: ((d.dato_validita <= tt_data.data_corr) AND (d.dato_scadenza > tt_data.data_corr))
         ->  Bitmap Index Scan on tt_data_pkey  (cost=0.00..4.41 rows=216 width=0) (actual time=0.006..0.006 rows=1 loops=7)
               Index Cond: ((d.dato_validita <= tt_data.data_corr) AND (d.dato_scadenza > tt_data.data_corr))
 Total runtime: 36.922 ms
(16 rows)

And this is the second one:

prove=# explain analyze SELECT d.* from t_campi_ricerche natural join v_dati_attuali d natural join tt_rice where rice_id='CODICE';
                                                           QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=43.29..38167065.82 rows=409498649 width=73) (actual time=2927.890..56922.415 rows=1 loops=1)
   ->  Hash Join  (cost=38.83..430557.39 rows=1899736 width=73) (actual time=2915.990..56910.510 rows=1 loops=1)
         Hash Cond: (d.dato_t = tt_rice.dato_t)
         ->  Hash Join  (cost=1.15..402765.04 rows=2335285 width=73) (actual time=191.261..55238.816 rows=2394966 loops=1)
               Hash Cond: (d.camp_id = t_campi_ricerche.camp_id)
               ->  Seq Scan on t_dati d  (cost=0.00..326867.12 rows=14011712 width=73) (actual time=16.612..42797.766 rows=14011712 loops
                     Filter: dato_flag
               ->  Hash  (cost=1.09..1.09 rows=5 width=15) (actual time=0.053..0.053 rows=5 loops=1)
                     ->  Seq Scan on t_campi_ricerche  (cost=0.00..1.09 rows=5 width=15) (actual time=0.031..0.041 rows=5 loops=1)
                           Filter: (rice_id = 'CODICE'::text)
         ->  Hash  (cost=22.30..22.30 rows=1230 width=32) (actual time=0.009..0.009 rows=1 loops=1)
               ->  Seq Scan on tt_rice  (cost=0.00..22.30 rows=1230 width=32) (actual time=0.003..0.004 rows=1 loops=1)
   ->  Bitmap Heap Scan on tt_data  (cost=4.46..16.62 rows=216 width=8) (actual time=11.885..11.886 rows=1 loops=1)
         Recheck Cond: ((d.dato_validita <= tt_data.data_corr) AND (d.dato_scadenza > tt_data.data_corr))
         ->  Bitmap Index Scan on tt_data_pkey  (cost=0.00..4.41 rows=216 width=0) (actual time=0.033..0.033 rows=1 loops=1)
               Index Cond: ((d.dato_validita <= tt_data.data_corr) AND (d.dato_scadenza > tt_data.data_corr))
 Total runtime: 56922.563 ms
(17 rows)

The v_dati_attuali view is common to both queries. The structure of the indexes on t_vcol and t_campi_ricerche is very similar, and both tt_rice and tt_elem have just one row with one field, which is the primary key. Of course I'd like the second query to behave like the first one, but I have no clue on how to achieve that. Is there any hint?
Re: [GENERAL] [PGSQL v8.2.5] Similar queries behave differently
On Thursday 25 October 2007 13:20:40, Gregory Stark wrote:
> "Gregory Stark" <[EMAIL PROTECTED]> writes:
> > "Reg Me Please" <[EMAIL PROTECTED]> writes:
> >> ->  Seq Scan on tt_elem  (cost=0.00..29.40 rows=1940 width=8)
> >>     (actual time=0.012..0.013 rows=1 loops=1)
> >
> > The discrepancy between the estimated rows and actual rows makes me think
> > you've not analyzed this table in a long time. It's probably best to
> > analyze the whole database to have a consistent set of statistics and to
> > catch any other old table stats.
> >
> > There could be other misestimations due to Postgres limitations, but
> > first fix the out-of-date stats and re-post both plans.
>
> Actually it's pretty clear there are some other bad estimations as well.
> You should send along the view definition too.
>
> And I would recommend you try it with a normal JOIN ON/USING instead of the
> NATURAL JOIN. It's possible it's joining on some unexpected columns --
> though that doesn't really look like it's the case here.

I'm not sure whether my previous message has reached the list. In any case, the tables have been created with a pg_restore and, thus, not many stats should be available, nor out-of-date ones. I'd actually like to better understand how to compose queries (and indexes) in order to make them appealing to the query planner.

Subject: Re: [PGSQL v8.2.5] Similar queries behave differently
Date: Thursday 25 October 2007
From: Reg Me Please <[EMAIL PROTECTED]>
To: pgsql-general@postgresql.org

Hi all again. Maybe I've solved the problem, but would like to have some hint on "why". In the second query I've substituted the last join (natural join tt_rice) with an additional "where" condition. I can do this as I am sure that the tt_rice table will always contain just one row with one field. The main difference with the first query is that in the first case the single row with a single field is a "bigint", while in the second one it is "text". Otherwise the two queries are almost identical, apart from the number of result rows and the size of the joined tables.

Is there any deeper tutorial on how to read (and understand) the EXPLAIN ANALYZE output?

Many thanks again.
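For reference, the explicit-join form Gregory suggests might look like the sketch below. The USING columns here are only inferred from the Hash Cond lines in the plans above, not from the real view definition, so treat them as guesses:

SELECT d.*
FROM t_campi_ricerche c
JOIN v_dati_attuali d USING (camp_id)  -- guessed from "Hash Cond: (d.camp_id = t_campi_ricerche.camp_id)"
JOIN tt_rice USING (dato_t)            -- guessed from "Hash Cond: (d.dato_t = tt_rice.dato_t)"
WHERE c.rice_id = 'CODICE';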
Re: [GENERAL] Crosstab Problems
On Thursday 25 October 2007 16:29:33, Scott Marlowe wrote:
> On 10/24/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> > Joe Conway <[EMAIL PROTECTED]> writes:
> > > Tom Lane wrote:
> > >> 1. Treat NULL rowid as a category in its own right. This would
> > >> conform with the behavior of GROUP BY and DISTINCT, for instance.
> > >
> > > In any case, the attached changes the behavior to #1 for both flavors
> > > of crosstab (the original crosstab(text, int) and the usually more
> > > useful crosstab(text, text)).
> > >
> > > It is appropriate for 8.3 but not back-patching as it changes behavior
> > > in a non-backward compatible way and is probably too invasive anyway.
> >
> > Um, if the previous code crashed in this case, why would you worry about
> > being backward-compatible with it? You're effectively changing the
> > behavior anyway, so you might as well make it do what you've decided is
> > the right thing.
>
> As a crosstab user, I agree with Tom.

If I can throw in my EUR 0.01 contribution, I would agree with Joe (thanks for your wonderful crosstab). If crosstab in 8.3 will have a different behaviour *and* it's not part of the core features, then I'd prefer to correct it. In any case developers will have to cope with discrepancies when going to 8.3, and you can bet they won't stay with 8.2 once 8.3 is rolled out.

And, by the way, why not include crosstab as a standard feature? I think it deserves it!
Re: [GENERAL] subversion support?
Ever tried Druid? http://druid.sourceforge.net/

On Thursday 25 October 2007 18:02:51, Tino Wildenhain wrote:
> Hi,
>
> Roberts, Jon wrote:
> > I could use psql instead of pgAdmin then, which isn't what I want.
> >
> > Having used Quest Software's SQL Navigator since '97 for Oracle, and then
> > migrated to Toad for Oracle — both products have integration with
> > source control — it is hard to revert back to a command line or text file
> > solution.
>
> Well, you can still use gui tools and just let them work against a
> development database. With a little scripting you can just dump
> the schema of that database periodically and check it in to SVN.
>
> Hook scripts can then take over the deployment (ideally based
> on tag creation).
>
> > pgAdmin should graphically show differences between the committed version
> > and the database.
>
> Does SQL Nav do this? At least the SQL Navigator/Toad support seems
> to depend heavily on server-side code to help. This looks very unclean,
> to say the very least.
>
> > It should allow me to click a button in the tool and commit it to the
> > repository.
> >
> > It should allow me to revert back to a previous version and have the tool
> > take care of restoring the function automatically.
>
> You can test before you commit in the database - unlike Oracle, Postgres
> supports transactions even for DDL :-) (ok, I've yet to find the button
> in pgadmin to disable auto commit :-)
>
> Regards
> Tino
Re: [GENERAL] Query_time SQL as a function w/o creating a new type
You could try this:

CREATE OR REPLACE FUNCTION foo(
    out procpid integer,
    out client_addr inet,
    out query_time interval,
    out current_query text
) RETURNS SETOF RECORD AS $BODY$
...
$BODY$ LANGUAGE PLPGSQL VOLATILE;

(Thanks to Joe Conway for showing this in tablefunc!)

On Friday 26 October 2007 08:24:46, Ow Mun Heng wrote:
> Hi,
>
> After Erik Jones gave me the idea for this, I started to become too lazy
> to type this into the sql every time I want to see how long a query
> is taking... so I thought I'd create a function to do just that.
> I ended up with:
>
> CREATE OR REPLACE FUNCTION query_time()
>   RETURNS SETOF query_time AS
> $BODY$
> DECLARE
>     rec RECORD;
> BEGIN
>     FOR rec IN
>         SELECT procpid, client_addr, now() - query_start as query_time,
>                current_query
>         FROM pg_stat_activity
>         ORDER BY query_time DESC
>     LOOP
>         RETURN NEXT rec;
>     END LOOP;
>     RETURN;
> END;
> $BODY$
>   LANGUAGE 'plpgsql' VOLATILE;
>
> But the issue with the above is that I need to create a type:
>
> CREATE TYPE query_time AS
>    (procpid integer,
>     client_addr inet,
>     query_time interval,
>     current_query text);
>
> Is there a method by which I'm able to return a result set w/o needing to
> declare/create a new type?
>
> I tried to use language 'sql' but it only returned me 1 column, with all
> the fields concatenated together with commas separating the fields.
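A complete, runnable version of that sketch might look like this — the body below is mine, not from tablefunc; it just wraps the query from the quoted message:

CREATE OR REPLACE FUNCTION query_time(
    OUT procpid       integer,
    OUT client_addr   inet,
    OUT query_time    interval,
    OUT current_query text
) RETURNS SETOF record AS $BODY$
    -- the OUT parameters define the result row type, so no CREATE TYPE is needed
    SELECT procpid, client_addr,
           now() - query_start AS query_time,
           current_query
    FROM pg_stat_activity
    ORDER BY query_time DESC;
$BODY$ LANGUAGE sql VOLATILE;

-- usage: SELECT * FROM query_time();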
[GENERAL] How to ALTER a TABLE to change the primary key?
Hi all. I'd need to modify the primary key definition in an already populated table. How can I do it? Thanks.
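A minimal sketch, assuming a table named foo whose current primary key constraint has the default name foo_pkey (check the real name with \d foo first):

BEGIN;
-- drop the old key, then add the new one; in PostgreSQL both steps are transactional
ALTER TABLE foo DROP CONSTRAINT foo_pkey;
ALTER TABLE foo ADD PRIMARY KEY (col_a, col_b);  -- the new key columns must be NOT NULL and duplicate-free
COMMIT;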
[GENERAL] INDEX and JOINs
Hi all. I have a setup like this:

CREATE TABLE T_FIELDS (
  TABL_ID TEXT NOT NULL,
  COLU_ID TEXT NOT NULL,
  FIEL_ID TEXT PRIMARY KEY,
  UNIQUE( TABL_ID, COLU_ID )
); -- < 200 ROWS

CREATE TABLE T_DATA (
  ITEM_ID INT8 NOT NULL,
  FIEL_ID TEXT NOT NULL REFERENCES T_FIELDS,
  DATA_T TEXT NOT NULL,
  PRIMARY KEY( FIEL_ID, ITEM_ID )
); -- > 10M ROWS

When I run

SELECT * FROM T_DATA WHERE FIEL_ID='TABL.FIEL';

it's very fast (of course!). But when I run:

SELECT * FROM T_DATA NATURAL JOIN T_FIELDS WHERE TABL_ID='TABL';

it's very slow. The EXPLAIN says that in the second case it has to do a sequential scan on T_DATA, and this explains the timing. Is there a way to avoid such behaviour by acting on indexes?

Thanks.
Re: [GENERAL] INDEX and JOINs
On Friday 26 October 2007 13:05:10, Martijn van Oosterhout wrote:
> On Fri, Oct 26, 2007 at 12:34:06PM +0200, Reg Me Please wrote:
> > it's very fast (of course!). But when I run:
> >
> > SELECT * FROM T_DATA NATURAL JOIN T_FIELDS WHERE TABL_ID='TABL';
> >
> > it's very slow.
> > The EXPLAIN says that in the second case it has to do a sequential
> > scan on T_DATA. And this explains the timing.
> > Is there a way to avoid such a behaviour by acting on indexes?
>
> Firstly, have you run ANALYZE recently? Secondly, you'll have to show
> us the output of EXPLAIN ANALYZE if you want some useful help.
>
> Have a nice day,

Yes, I often run ANALYZE while trying to sort this kind of thing out. This is the output:

prove=# explain analyze SELECT * from t_dati natural left join t_campi where tabe_id='CONTE';
                                                       QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=3.95..382140.91 rows=274709 width=91) (actual time=1.929..57713.305 rows=92 loops=1)
   Hash Cond: (t_dati.camp_id = t_campi.camp_id)
   ->  Seq Scan on t_dati  (cost=0.00..326851.72 rows=14010172 width=73) (actual time=0.028..43814.946 rows=14011712 loops=1)
   ->  Hash  (cost=3.91..3.91 rows=3 width=33) (actual time=0.129..0.129 rows=3 loops=1)
         ->  Seq Scan on t_campi  (cost=0.00..3.91 rows=3 width=33) (actual time=0.040..0.121 rows=3 loops=1)
               Filter: (tabe_id = 'CONTE'::text)
 Total runtime: 57713.449 ms

(I translated the table and column names. The substance is the same.)
Re: [GENERAL] INDEX and JOINs
On Friday 26 October 2007 13:56:20, Martijn van Oosterhout wrote:
> On Fri, Oct 26, 2007 at 01:10:42PM +0200, Reg Me Please wrote:
> > prove=# explain analyze SELECT * from t_dati natural left join t_campi
> > where tabe_id='CONTE';
> > Hash Join  (cost=3.95..382140.91 rows=274709 width=91)
> >   (actual time=1.929..57713.305 rows=92 loops=1)
> >   Hash Cond: (t_dati.camp_id = t_campi.camp_id)
>
> Neither of the columns are indexed according to the schema you sent, so
> that's the problem. Or you broke something while "translating".
>
> > (I translated the table and column names. The substance is the same.)
>
> Try not translating, and we might get somewhere...
>
> Have a nice day,

prove=# \d t_dati
                                  Table "public.t_dati"
    Column     |           Type           | Modifiers
---------------+--------------------------+-----------------------------------------------------------
 elem_id       | bigint                   | not null
 camp_id       | text                     | not null
 dato_t        | text                     | not null
 dato_r        | double precision         |
 dato_validita | timestamp with time zone | not null default '-infinity'::timestamp with time zone
 dato_scadenza | timestamp with time zone | not null default 'infinity'::timestamp with time zone
 dato_flag     | boolean                  | not null default true
 dato_data     | timestamp with time zone | not null default now()
 dato_id       | bigint                   | not null default nextval('t_dati_dato_id_seq'::regclass)
Indexes:
    "t_dati_pkey" PRIMARY KEY, btree (dato_id)
    "i_dati_0" btree (elem_id)
    "i_dati_1" btree (camp_id)
    "i_dati_2" btree (dato_t text_pattern_ops)
    "i_dati_3" btree (dato_flag, dato_validita, dato_scadenza)
    "i_dati_4" btree (dato_data)
Foreign-key constraints:
    "t_dati_camp_id_fkey" FOREIGN KEY (camp_id) REFERENCES t_campi(camp_id)

prove=# \d t_campi
      Table "public.t_campi"
 Column  | Type | Modifiers
---------+------+-----------
 tabe_id | text | not null
 colo_id | text | not null
 camp_id | text | not null
Indexes:
    "t_campi_pkey" PRIMARY KEY, btree (camp_id)
    "i_t_campi_0" btree (tabe_id)
Foreign-key constraints:
    "t_campi_colo_id_fkey" FOREIGN KEY (colo_id) REFERENCES t_colonne(colo_id)
    "t_campi_tabe_id_fkey" FOREIGN KEY (tabe_id) REFERENCES t_tabelle(tabe_id)

They seem to be indexed.
Re: [GENERAL] INDEX and JOINs
On Friday 26 October 2007 15:18:04, Tom Lane wrote:
> Reg Me Please <[EMAIL PROTECTED]> writes:
> >>> (cost=3.95..382140.91 rows=274709 width=91) (actual
> >>> time=1.929..57713.305 rows=92 loops=1)
> >>> Hash Cond: (t_dati.camp_id = t_campi.camp_id)
>
> I think the reason it doesn't want to use an indexed join is the large
> estimate of the number of join result rows. You need to try to get that
> number down to something nearer the reality. Increasing the statistics
> target for the larger table might help.
>
> regards, tom lane

How can I "increase the statistics target for the larger table"? I'm sorry for asking, but I'm not that deep into RDBMSs.

Thanks.
Re: [GENERAL] INDEX and JOINs
On Saturday 27 October 2007 08:51:09, Andreas Kretschmer wrote:
> Reg Me Please <[EMAIL PROTECTED]> wrote:
> > How can I "increase the statistics target for the larger table"?
> > I'm sorry for asking, but I'm not that deep into RDBMSs.
>
> alter table <table> alter column <column> SET STATISTICS <target>;
>
> Andreas

How can I display the statistics for a table/column/index/whatever applies?

Thanks again.
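Putting the two answers together — a sketch using the t_dati table from earlier in the thread (the target value 100 is just an example; the 8.2 default is 10):

ALTER TABLE t_dati ALTER COLUMN camp_id SET STATISTICS 100;
ANALYZE t_dati;  -- the new target only takes effect at the next ANALYZE

-- the gathered statistics are visible in the pg_stats view:
SELECT attname, n_distinct, most_common_vals
FROM pg_stats
WHERE tablename = 't_dati';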
Re: [GENERAL] [PGSQL v8.2.5] Similar queries behave differently
Hi all again. Maybe I've solved the problem, but would like to have some hint on "why".

In the second query I've substituted the last join (natural join tt_rice) with an additional "where" condition. I can do this as I am sure that the tt_rice table will always contain just one row with one field. The main difference with the first query is that in the first case the single row with a single field is a "bigint", while in the second one it is "text".

Is there any deeper tutorial on how to read (and understand) the EXPLAIN ANALYZE output?

Many thanks again.

On Thursday 25 October 2007 10:17:23, Reg Me Please wrote:
> [...]
Re: [GENERAL]
You can also create the language in template1, and then you'll have it in any other database you create (from template1).

On Wednesday 31 October 2007 08:21:08, Martijn van Oosterhout wrote:
> On Wed, Oct 31, 2007 at 11:07:36AM +0800, carter ck wrote:
> > Hi all,
> >
> > I was trying to create a function in postgres, but it returns an error
> > mentioning the language is NOT defined.
> >
> > ERROR: language "plpgsql" does not exist
> > HINT: Use CREATE LANGUAGE to load the language into the database.
> >
> > I was wondering why it is not included by default? Or have I missed out
> > something in the configuration!
>
> It's included by default, just not enabled by default. Try "create
> language plpgsql" as administrator.
>
> > Also, how to do a better text search? I have come across the bad
> > performance of the LIKE statement.
>
> See tsearch2.
>
> Have a nice day,

-- Reg me Please
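A minimal sketch of the template1 route (run as a superuser):

-- connect to template1 first, e.g. with: \c template1
CREATE LANGUAGE plpgsql;

-- every database created from template1 (the default template) now has plpgsql:
CREATE DATABASE mydb;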
Re: [GENERAL] Join between tables of two or more databases
On Wednesday 31 October 2007 11:47:46, T.J. Adami wrote:
> Hi people. I have 2 databases named as follows:
>
> 1) company_a
> 2) company_b
>
> These two databases do not have the same ER model. However, I want to
> access tables on "company_a" that are in "company_b". I want to use
> them as local tables (as other databases like Sybase allow, once you
> have opened a database link).
>
> The question is: can I do this using remote database servers
> (different hosts)? If not, can I do it at least with local
> databases on the same server?

As long as they are "different databases", you cannot do it at the moment. It doesn't matter whether they are local or not. The only option would be to do it at the application level with two connections.

-- Reg me Please
Re: [GENERAL] Securing stored procedures and triggers
On Wednesday 31 October 2007 15:39:55, mgould wrote:
> We are currently migrating from Sybase's ASA 9/10 to PostGres 8.2.4. One
> of the features that is really nice in ASA is the ability to add the
> "hidden" attribute to a CREATE PROCEDURE, CREATE FUNCTION or CREATE
> TRIGGER. Essentially what this does is encrypt the code so that if anyone
> or any utility gets into the database they cannot see any of the actual
> code. This is a great feature for protecting intellectual processing
> techniques. I don't know if there is any way to do this in PostGres.
> Before the hidden feature was added, we had a competitor steal some of
> our stored procedure processing code. Is there any way to protect this
> from happening in PostGres?
>
> Best Regards,
>
> Michael Gould
> All Coast Intermodal Services, Inc.
> 904-376-7030

I'm not sure about that feature being supported by PGSQL. I don't think so, anyway. But if your competitor needs to steal your code in order to compete, then you are ahead of him: you think, he copies.

-- Reg me Please
Re: [GENERAL] Securing stored procedures and triggers
There's no bulletproof way, in my opinion. If they copy the whole DB structure *and* the object binaries, they'll have the very same functionality!

On Wednesday 31 October 2007 16:13:23, Douglas McNaught wrote:
> mgould <[EMAIL PROTECTED]> writes:
> > [...]
>
> The only bulletproof way to do this currently is to write all your
> stored functions in C and load them as a shared library.
>
> -Doug

-- Reg me Please
[GENERAL] COPY ... FROM and index usage
Hi all.

I'd like to know whether the indexes on a table are updated or not during a "COPY ... FROM" request. That is, should I drop all indexes before a "COPY ... FROM" in order to gain the maximum speed to load data?

Thanks.

-- Reg me Please
Re: [GENERAL] COPY ... FROM and index usage
On Sunday 04 November 2007 14:59:10, Josh Tolley wrote:
> On 11/4/07, Reg Me Please <[EMAIL PROTECTED]> wrote:
> > Hi all.
> >
> > I'd like to know whether the indexes on a table are updated or not during
> > a "COPY ... FROM" request.
> >
> > That is, should I drop all indexes during a "COPY ... FROM" in order to
> > gain the maximum speed to load data?
> >
> > Thanks.
>
> Although questions of "which is faster" often depend very heavily on
> the data involved, the database schema, the hardware, etc., typically
> people find it best to drop all indexes during a large import and
> recreate them afterward.
>
> - Josh/eggyknap

This sounds very reasonable to me. But the first question remains unanswered: are the indexes updated during the COPY ... FROM?

Thanks again.

-- Reg me Please
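For reference, the drop-and-recreate pattern Josh describes usually looks something like the sketch below. The table, file path and index are placeholders; the real index definitions can be captured beforehand with: SELECT indexdef FROM pg_indexes WHERE tablename = 'big_table';

BEGIN;
-- drop the secondary indexes (the primary key can usually stay)
DROP INDEX big_table_col_idx;
-- bulk-load with no index maintenance overhead
COPY big_table FROM '/path/to/data.txt';
-- rebuild from the saved definitions
CREATE INDEX big_table_col_idx ON big_table (col);
COMMIT;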
Re: [GENERAL] COPY ... FROM and index usage
On Sunday 04 November 2007 16:21:41, Erik Jones wrote:
> On Nov 4, 2007, at 9:15 AM, Reg Me Please wrote:
> > [...]
> > This sounds very reasonable to me.
> > But the first question remains unanswered:
> >
> > Are the indexes updated during the COPY ... FROM ?
>
> Of course. Why would you think that data could be inserted into a table
> by any means without it updating the table's indexes? That would
> make the indexes worthless.
>
> Erik Jones
>
> Software Developer | Emma®
> [EMAIL PROTECTED]
> 800.595.4401 or 615.292.5888
> 615.292.0777 (fax)
>
> Emma helps organizations everywhere communicate & market in style.
> Visit us online at http://www.myemma.com

I do understand your remarks and would agree. But I was thinking about the COPY ... FROM request not as a normal INSERT INTO. If that were the case, I could have run a REINDEX TABLE afterwards.

-- Reg me Please
[GENERAL] Locale and indexes: howto?
Hi all.

While reading chapter 11 of the v8.2 documentation I've encountered this sentence:

  However, if your server does not use the C locale you will need to create
  the index with a special operator class to support indexing of
  pattern-matching queries.

Well, I'd like to use the C locale, at least for PGSQL. According to my system (Linux, of course) these are my locale settings:

LANG=it_IT.UTF-8
LC_CTYPE="it_IT.UTF-8"
LC_NUMERIC="it_IT.UTF-8"
LC_TIME="it_IT.UTF-8"
LC_COLLATE=C
LC_MONETARY="it_IT.UTF-8"
LC_MESSAGES="it_IT.UTF-8"
LC_PAPER="it_IT.UTF-8"
LC_NAME="it_IT.UTF-8"
LC_ADDRESS="it_IT.UTF-8"
LC_TELEPHONE="it_IT.UTF-8"
LC_MEASUREMENT="it_IT.UTF-8"
LC_IDENTIFICATION="it_IT.UTF-8"
LC_ALL=

(I'm Italian, I think.)

So the "locale" itself seems to be a little more complex than I thought. I already use the C collation scheme, very useful in directory listings. Should I install PGSQL with LC_CTYPE=C as well? Or what?

Many thanks in advance.

-- Reg me Please
Re: [GENERAL] Locale and indexes: howto?
On Monday 05 November 2007 21:27:27, Reg Me Please wrote:
> [...]
> So the "locale" itself seems to be a little more complex than I thought.
> Should I install PGSQL with LC_CTYPE=C as well? Or what?

OK, I solved it by myself.

You need to have *both* LC_COLLATE=C *and* LC_CTYPE=C in order to comply with the "C locale" mentioned in the abovequoted paragraph. It's written in the initdb man page.

In my humble opinion, the documentation could be clearer about this point, especially in the "Indexes" chapter (11th in v8.2). By "locale" we mean a lot of things, while just two aspects make sense in this context.

-- Reg me Please
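For an existing cluster, both settings can be checked from SQL; they are read-only parameters, because they can only be chosen at initdb time:

SHOW lc_collate;
SHOW lc_ctype;
-- or both at once:
SELECT name, setting FROM pg_settings WHERE name IN ('lc_collate', 'lc_ctype');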
[GENERAL] How to create primary key
Hi all. What'd be the syntax to create a primary key on an already built table? Thanks.

-- Reg me Please
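A minimal sketch, assuming a table foo and a key column id (the column must be NOT NULL and contain no duplicates, or the command fails):

ALTER TABLE foo ADD CONSTRAINT foo_pkey PRIMARY KEY (id);
-- or let PostgreSQL pick the constraint name:
ALTER TABLE foo ADD PRIMARY KEY (id);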
Re: [GENERAL] generic crosstab
I was having the same problem. I defined a "crosstab definition table" where I declare the things I want out of crosstab, and then I dynamically build views that add the needed "AS ct42( )" clause. Maybe someone else can suggest a better solution ...

On Tuesday 06 November 2007 13:46:06, Toni Casueps wrote:
> I'm using the PostgreSQL crosstab functions to do something like this:
>
> From this table ...
>
> a x 10
> b x 13
> a y 11
> b y 17
>
> ... I get:
>
>    x   y
> a  10  11
> b  13  17
>
> To get the second table I have to specify the field list when I call the
> crosstab function (in this example it would be AS t(x integer, y integer)).
> My problem arises when that field list changes, for example when someone
> inserts a record in the first table with a new value, for example "z". In
> that case I have to change my queries to AS t(x integer, y integer, z
> integer).
>
> Do you know how to declare a "generic" field list, or some other library
> similar to crosstab?

-- Reg me Please
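A sketch of that view-wrapping idea — all names here are illustrative, and crosstab(text) comes from contrib/tablefunc:

-- the AS clause fixes the output shape, so callers just query the view
CREATE VIEW ct42 AS
SELECT *
FROM crosstab('SELECT row_name, cat, val FROM raw ORDER BY 1, 2')
     AS ct(row_name text, x integer, y integer);

-- when a new category (say "z") shows up, rebuild the view with one more column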
Re: [GENERAL] Postgresql simple query performance question
I have no doubt you're right, Pavel. But why not? It could be a simple enhancement.

On Tuesday 06 November 2007 15:11:02, Pavel Stehule wrote:
> Hello
>
> PostgreSQL doesn't use an index for COUNT(*):
>
> http://www.varlena.com/GeneralBits/18.php
> http://sql-info.de/en/postgresql/postgres-gotchas.html#1_7
>
> Regards
> Pavel Stehule
>
> On 06/11/2007, SHARMILA JOTHIRAJAH <[EMAIL PROTECTED]> wrote:
> > Hi
> > We are in the process of testing for migration of our database from
> > Oracle to Postgresql.
> > I have a simple query:
> >
> > Select count(*) from foo
> >
> > This table has 29384048 rows and is indexed on foo_id.
> >
> > The tables are vacuumed and the explain plan for postgresql is:
> >
> > Aggregate  (cost=1194020.60..1194020.61 rows=1 width=0)
> >   (actual time=68797.280..68797.280 rows=1 loops=1)
> >   ->  Seq Scan on foo  (cost=0.00..1120560.48 rows=29384048 width=0)
> >       (actual time=0.232..60657.948 rows=29384048 loops=1)
> > Total runtime: 68797.358 ms
> >
> > The explain plan for oracle is:
> >
> > OPERATION            OBJECT      ACCESS_PREDICATES  FILTER_PREDICATES
> > SELECT STATEMENT ()  (null)      (null)             (null)
> > SORT (AGGREGATE)     (null)      (null)             (null)
> > INDEX (FULL SCAN)    foo_IDX_ID  (null)             (null)
> >
> > Oracle uses an index for the count(*) query in this case.
> > This query in Oracle takes only 5 sec, and in postgresql it takes
> > 1 min 10 sec.
> >
> > The same query in oracle without the index and a full table scan (like
> > in postgresql) has an explain plan like this and takes 34 sec:
> >
> > select /*+ full(foo1) */ count(*) from foo1
> >
> > OPERATION            OBJECT  ACCESS_PREDICATES  FILTER_PREDICATES
> > SELECT STATEMENT ()  (null)  (null)             (null)
> > SORT (AGGREGATE)     (null)  (null)             (null)
> > TABLE ACCESS (FULL)  foo     (null)             (null)
> >
> > In short, the query "Select count(*) from foo" takes the following time:
> > Postgresql - 1m 10 sec
> > Oracle (index scan) - 5 sec
> > Oracle (full table scan) - 34 sec
> >
> > How can I speed up this query in postgresql? The other postgres settings
> > are:
> >
> > max_connections = 100
> > shared_buffers = 5
> > temp_buffers = 5000
> > work_mem = 16384
> > maintenance_work_mem = 262144
> > fsync = on
> > wal_sync_method = fsync
> > effective_cache_size = 30
> > random_page_cost = 4
> > cpu_tuple_cost = 0.01
> > cpu_index_tuple_cost = 0.001
> > cpu_operator_cost = 0.0025
> >
> > Are there any tunings that need to be done on the OS or database side?
> > I have attached the iostat and vmstat results of postgresql.
> >
> > Thanks

-- Reg me Please
Re: [GENERAL] Postgresql simple query performance question
While I would not spend resources in fine-tuning count(*), I would spend some to understand why and how the other ones do it better. Just to be better.

On Tuesday 06 November 2007 15:29:34, Bill Moran wrote:
> In response to Reg Me Please <[EMAIL PROTECTED]>:
> > I have no doubt you're right, Pavel.
> > But why not?
> > It could be a simple enhancement.
>
> It's not simple. Do some searches on the mailing lists and you will
> find discussion of why it's difficult to do.
>
> > [...]

-- Reg me Please
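For what it's worth, one well-known workaround when an approximate count is acceptable (reltuples is only as fresh as the last VACUUM or ANALYZE on the table):

SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'foo';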
Re: [GENERAL] How does the query planner make its plan?
It may depend on the index definition versus the locale settings:

1. define the index with the text_pattern_ops operator class (a sketch follows after the quoted message below)
2. run VACUUM ANALYZE on the table
3. re-run the EXPLAIN

See chapter 11 (especially 11.8) of the v8.2 documentation.

On Tuesday 06 November 2007 16:25:09, Christian Schröder wrote:
> Hi list,
> once again I do not understand how the query planner works and why it
> apparently does not find the best result.
> I have a table with about 125 million rows. There is a char(5) column
> with a (non-unique) index. When I try to find the distinct values in
> this column using the following sql statement:
>
> select distinct exchange from foo
>
> the query planner chooses not to use the index, but performs a
> sequential scan. When I disfavour the use of sequential scans ("set
> enable_seqscan = off") the performance is more than 6 times better. Why
> does the query planner's plan go wrong? The table has been vacuum
> analyzed just before I ran the queries.
>
> Here is the plan when I let the query planner alone:
>
>  Unique  (cost=23057876.40..23683350.48 rows=4 width=9)
>    ->  Sort  (cost=23057876.40..23370613.44 rows=125094816 width=9)
>          Sort Key: exchange
>          ->  Seq Scan on quotes  (cost=0.00..3301683.16 rows=125094816 width=9)
> (4 rows)
>
> This is what really happens:
>
>  Unique  (cost=23057876.40..23683350.48 rows=4 width=9)
>    (actual time=1577159.744..1968911.024 rows=4 loops=1)
>    ->  Sort  (cost=23057876.40..23370613.44 rows=125094816 width=9)
>        (actual time=1577159.742..1927400.118 rows=125094818 loops=1)
>          Sort Key: exchange
>          ->  Seq Scan on quotes  (cost=0.00..3301683.16 rows=125094816 width=9)
>              (actual time=0.022..169744.162 rows=125094818 loops=1)
>  Total runtime: 1969844.753 ms
> (5 rows)
>
> With "enable_seqscan = off" I get this plan:
>
>  Unique  (cost=0.00..89811549.81 rows=4 width=9)
>    ->  Index Scan using quotes_exchange_key on quotes
>        (cost=0.00..89498812.77 rows=125094816 width=9)
> (2 rows)
>
> And again with execution times:
>
>  Unique  (cost=0.00..89811549.81 rows=4 width=9)
>    (actual time=0.079..313068.922 rows=4 loops=1)
>    ->  Index Scan using quotes_exchange_key on quotes
>        (cost=0.00..89498812.77 rows=125094816 width=9)
>        (actual time=0.078..273787.493 rows=125094818 loops=1)
>  Total runtime: 313068.967 ms
> (3 rows)
>
> I understand that from looking at the estimations (89811549.81 with
> index scan vs. 23683350.48 with sequential scan) the query planner had
> to choose the sequential scan. So maybe I have to tune the planner cost
> constants? Indeed I did some changes to these values, but in my opinion
> this should make index scans preferable:
>
> #seq_page_cost = 1.0            # measured on an arbitrary scale
> #random_page_cost = 4.0         # same scale as above
> random_page_cost = 1.0
> #cpu_tuple_cost = 0.01          # same scale as above
> #cpu_index_tuple_cost = 0.005   # same scale as above
> cpu_index_tuple_cost = 0.001
> #cpu_operator_cost = 0.0025     # same scale as above
> #effective_cache_size = 128MB
> effective_cache_size = 4GB
>
> The machine is a dedicated database server with two dual-core xeon
> processors and 8 GB memory.
>
> Thanks for your help,
> Christian

-- Reg me Please
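A sketch of step 1 against the quotes table above. Since exchange is char(5), the matching operator class would be bpchar_pattern_ops (text_pattern_ops is the text-column equivalent); whether it helps this particular DISTINCT query is exactly what the re-run EXPLAIN would show:

CREATE INDEX quotes_exchange_pattern_idx
    ON quotes (exchange bpchar_pattern_ops);
VACUUM ANALYZE quotes;
-- then re-check the plan:
EXPLAIN ANALYZE SELECT DISTINCT exchange FROM quotes;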
Re: [GENERAL] external editor for psql
On Tuesday 06 November 2007 16:40:27, John Smith wrote:
> http://www.nabble.com/need-good-editor-for-postgreSQL-tf3660882.html#a10229560
>
> how do u get gvim (portable?) to do syntax color highlighting for
> sql/plpgsql on files in buffer?? you're right- notepad is not enough
> (and pgedit is not free).
>
> cheers, jzs

1. I've looked for the PostgreSQL syntax script by Manni Wood (I don't remember from where, maybe www.vim.org).
2. I've installed it as /usr/share/vim/vim71/syntax/postgresql.vim
3. I've edited /usr/share/vim/vim71/filetype.vim in order to add this line:

   au BufNewFile,BufRead *.psql setf postgresql

Now all .psql files get the Postgres highlighting.

-- Reg me Please
[GENERAL] Syntax error in a large COPY
Hi all. I'm generating an SQL script to load some million rows into a table. I'm trying to use the COPY command in order to speed the load up.

At a certain point I get an error telling about an "invalid input syntax for type numeric". The incriminated line number is the last one (the one containing the \.).

Is there a way to know which line is really malformed?

Thanks.

-- Reg me Please
Re: [GENERAL] Syntax error in a large COPY
That seems not to be the case. The last line has a \. on its own, and the last but one is well formed.

On Tuesday 06 November 2007 19:14:00, Tom Lane wrote:
> Reg Me Please <[EMAIL PROTECTED]> writes:
> > At a certain point I get an error telling about a
> > "invalid input syntax for type numeric"
> > The incriminated line number is the last one (the one containing the \.).
> > Is there a way to know which line is really malformed?
>
> Why do you think the report is inaccurate?
>
> I can reproduce this by putting a few spaces in front of \., for
> instance.
>
> regression=# create table foo(f1 numeric);
> CREATE TABLE
> regression=# copy foo from stdin;
> Enter data to be copied followed by a newline.
> End with a backslash and a period on a line by itself.
> >> 123
> >> \.
> regression=# copy foo from stdin;
> Enter data to be copied followed by a newline.
> End with a backslash and a period on a line by itself.
> >>   \.
> >> \.
> ERROR:  invalid input syntax for type numeric: "  "
> CONTEXT:  COPY foo, line 1, column f1: "  "
> regression=#
>
> As the psql prompt mentions, \. has to be alone on a line.
>
> regards, tom lane

-- Reg me Please
Re: [GENERAL] Syntax error in a large COPY
On Tuesday 06 November 2007 22:13:15, you wrote:
> On 11/6/07, Reg Me Please <[EMAIL PROTECTED]> wrote:
> > [...]
> > I'm considering a "branch and bound" approach ... but it'd be quite long
> > and tedious, as the program generating the script has not been written
> > to do such things.
>
> Split it in half, with the appropriate sql on each end so the data
> still works, and see which half causes a problem. Keep splitting the
> one that causes a problem in half until you have a small one with the
> problem still.
>
> I'm guessing the problem will become obvious then.

That's the "branch and bound". Editing a 29M+ line file takes some time. But this is the way I'm going to go right now.

-- Reg me Please
Re: [GENERAL] Syntax error in a large COPY
On Tuesday 06 November 2007 19:43:38, Scott Marlowe wrote:
> On 11/6/07, Reg Me Please <[EMAIL PROTECTED]> wrote:
> > That seems not to be the case.
> > The last line has a \. on its own and the last but one is
> > well formed.
>
> (Please don't top post...)
>
> Got a self contained test case you can post?

Back to the original topic ...

I'm trying to understand what and where. The point is that I have this 29M+ line script telling me there's a problem somewhere. A self-contained test, at the moment, would be that long!

I'm considering a "branch and bound" approach ... but it'd be quite long and tedious, as the program generating the script has not been written to do such things.

I'm pretty sure that the interpreter knows exactly where the problem is. It just doesn't tell me. Unless there's some friendly advice.

-- Reg me Please
Re: [GENERAL] Syntax error in a large COPY
On Tuesday 06 November 2007 22:37:12, Scott Marlowe wrote:
> On 11/6/07, Reg Me Please <[EMAIL PROTECTED]> wrote:
> > [...]
> > That's the "branch and bound". Editing a 29M+ line file takes some time.
> > But this is the way I'm going to go right now.
>
> Oh, we called it half-splitting in the military.
>
> Using something like head / tail in unix to do it. Should be fairly
> fast, especially if you keep cutting it in half after the first test.
> I can't imagine editing something that large even in vi being very
> fast.

My laptop has a fairly slow disk, and even with Linux and vi it takes time. Well, better diagnostic messages would help in any case.

-- Reg me Please
Re: [GENERAL] returning dynamic record
On Wednesday 07 November 2007 06:35:55, Tom Lane wrote:
> nick <[EMAIL PROTECTED]> writes:
> > Just wondering, if it is possible to do the following
> > create function foo() RETURNS Record(int, varchar, int)
>
> OUT parameters (in reasonably recent PG releases).
>
> regression=# create function foo (f1 out int, f2 out varchar, f3 out int)
> as $$ select 42, 'foo'::varchar, 43 $$ language sql;
> CREATE FUNCTION
> regression=# select * from foo();
>  f1 | f2  | f3
> ----+-----+----
>  42 | foo | 43
> (1 row)
>
> regards, tom lane

Maybe

create function foo (f1 out int, f2 out varchar, f3 out int)
returns setof record as $body$ ...

will return the set.

-- Reg me Please
Re: [GENERAL] Syntax error in a large COPY
On Wednesday 07 November 2007 01:29:44, Alvaro Herrera wrote:
> Reg Me Please wrote:
> > [...]
> > That's the "branch and bound". Editing a 29M+ line file takes some time.
> > But this is the way I'm going to go right now.
>
> Huh, why not just use pgloader?

Because I never heard about it. Because it's not included in my distribution's package list. And because I was trusting the "core tools" to work reasonably.

I'll compile and use that.

By the way, unless you want to have logs at the debug level, no information can be found in the logs about the offending line(s) in the 29M+ COPY script.

-- Reg me Please
Re: [GENERAL] Syntax error in a large COPY
On Wednesday 07 November 2007 07:54:41, Reg Me Please wrote:
> On Wednesday 07 November 2007 01:29:44, Alvaro Herrera wrote:
> > Huh, why not just use pgloader?
>
> [...]
> I'll compile and use that.

pgloader seems not that easy to use for a newbie like myself, also because the documentation seems too skinny.

In any case, each "goto line, add lines, save, run" cycle requires about 10 minutes on my PC, and the logs don't provide any useful detail. So, again, better logging would help in any case.

-- Reg me Please
Re: [GENERAL] Syntax error in a large COPY
On Wednesday 07 November 2007 11:10:40, Dimitri Fontaine wrote:
> On Wednesday 07 November 2007, Reg Me Please wrote:
> > pgloader seems not that easy to use for a newbie like myself.
> > Also because the documentation seems too skinny.
>
> Sorry about this, writing documentation in English is not that easy when
> it's not one's native language... I'll accept any comment/patch to the
> documentation, the aim of it being to ease users' lives, of course ;)
> http://pgloader.projects.postgresql.org/
>
> Short story: you have to make a pgloader.conf file where you explain where
> the data file is and what pgloader should expect in it (csv, text, which
> delimiter and quotes, etc.), then run
>
>   pgloader -Tc pgloader.conf
>
> The -T option will TRUNCATE the configured table(s) before COPYing data
> into them.
>
> > In any case each "goto line, add lines, save, run" cycle requires about
> > 10 minutes on my PC. And the logs don't provide any useful detail.
> > So, again, better logging would help in any case.
>
> pgloader would certainly give this, at the first run... It seems to me it
> is worth the effort of reading the manual...

Hi. pgloader rocks!

Maybe just a complete example would suffice: let's say a table structure, a CSV and a raw text file, a config file and the run output.

Thanks.

P.S. Why not include pgloader in the main tarball?

-- Reg me Please
Re: [GENERAL] Syntax error in a large COPY
On Wednesday 07 November 2007 11:26:56, Dimitri Fontaine wrote:
> On Wednesday 07 November 2007, Reg Me Please wrote:
> > Maybe just a complete example would suffice. Let's say a table structure,
> > a CSV and a raw text file, a config file and the run output.
>
> Do you mean something like the included examples, which I tend to also use
> as (regression) tests?
> http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgloader/pgloader/examples/

Right. I installed the .deb. The man page has not been included; the examples have been copied into /usr/share/doc/pgloader/examples. The examples are OK.

> Regards,

Thanks for your work.

-- Reg me Please
Re: [GENERAL] Syntax error in a large COPY
On Tuesday 06 November 2007 19:05:52, Reg Me Please wrote:
> Hi all.
> I'm generating an SQL script to load some million rows into a table.
> I'm trying to use the COPY command in order to speed the load up.
>
> At a certain point I get an error telling about an
> "invalid input syntax for type numeric"
>
> The incriminated line number is the last one (the one containing the \.).
>
> Is there a way to know which line is really malformed?
>
> Thanks.

Blame on me!

The problem (spotted thanks to pgloader) was that I was using "\n" for null values instead of "\N" (capital N). As stated in the friendly documentation, "\n" stands for "new line", while "\N" stands for NULL. While being clearly stated, this choice is a little bit confusing, at least for newbies like myself.

Thanks everyone, and Dimitri Fontaine for his pgloader.

-- Reg me Please
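For reference, this is what the difference looks like in the COPY text format (columns are TAB-separated; \N is the default null marker; table and columns here are made up):

COPY t (a, b) FROM stdin;
1	\N
2	42
\.
-- the \N above loads a NULL into column b;
-- a literal \n would instead be read as a newline character inside the value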
Re: [GENERAL] Postgresql simple query performance question
On Wednesday 07 November 2007 13:08:46 André Volpato wrote:
> > Reid Thompson wrote:

Would it be possible to avoid the so-called "HTML email body"?

--
Reg me Please
Re: [GENERAL] prepared statements suboptimal?
On Wednesday 07 November 2007 12:25:46 rihad wrote:
> I don't understand why postgres couldn't plan this:
> SELECT foo.e, foo.f
> FROM foo
> WHERE pk=$1 AND b=$2 AND status='1' AND c <= $3;
>
> to be later executed any slower than
>
> SELECT foo.e, foo.f
> FROM foo
> WHERE pk='abcabc' AND b='13' AND status='1' AND c <= '2007-11-20 13:14:15';

The point is that the planner does its work when it sees the query.
At that time the comparisons are all against unknown values.

Try the same with dynamic SQL, that is, a query string you build by placing
the current values where the $1, $2 and $3 placeholders are. In this case
the planner will see the query with the actual values. Then you execute it
and compare the timings.

Maybe you get better scores: there's no guarantee of better performance,
because you are going to send the whole query again and again to the
planner. Of course you need a PL/pgSQL function for this.

--
Reg me Please
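A minimal sketch of the dynamic-SQL approach in PL/pgSQL, reusing the table
and columns from the question (the function name and the loop are mine;
quote_literal() guards against quoting problems):

CREATE OR REPLACE FUNCTION foo_dynamic( p_pk text, p_b text, p_c timestamp )
RETURNS SETOF foo AS $$
DECLARE
  r foo%ROWTYPE;
BEGIN
  -- The query string is built with the current values, so the planner
  -- sees real constants instead of the $1, $2 and $3 placeholders.
  FOR r IN EXECUTE
    'SELECT * FROM foo WHERE pk = ' || quote_literal( p_pk ) ||
    ' AND b = '                     || quote_literal( p_b )  ||
    ' AND status = ''1'' AND c <= ' || quote_literal( p_c::text )
  LOOP
    RETURN NEXT r;
  END LOOP;
  RETURN;
END;
$$ LANGUAGE plpgsql;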
Re: [GENERAL] Postgresql simple query performance question
On Wednesday 07 November 2007 13:47:26 SHARMILA JOTHIRAJAH wrote:
> Hi
> we are testing with version PostgreSQL 8.2.3.

Why not use at least the current 8.2.5?
Read http://www.postgresql.org/docs/current/static/release.html for details.

--
Reg me Please
Re: [GENERAL] what is the date format in binary query results
On Thursday 08 November 2007 17:09:22 Tom Lane wrote:
> Reg Me Please <[EMAIL PROTECTED]> writes:
> > On Thursday 08 November 2007 16:18:58 Tom Lane wrote:
> >> It's either an int8 representing microseconds away from 2000-01-01
> >> 00:00:00 UTC, or a float8 representing seconds away from the same
> >> origin.
> >
> > Does this mean that negative numbers are for timestamps before y2k?
>
> Right.
>
> > Why and when is there a choice between int8 and float8 representation?
>
> configure --enable-integer-datetimes.

Wow: it's at compile time!
How can I tell which one has been chosen by my distribution (Ubuntu)?

--
Reg me Please
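For the record, the compile-time choice is exposed as a read-only parameter,
so a simple

SHOW integer_datetimes;

reports "on" for an --enable-integer-datetimes build and "off" otherwise.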
Re: [GENERAL] what is the date format in binary query results
On Thursday 08 November 2007 16:18:58 Tom Lane wrote:
> It's either an int8 representing microseconds away from 2000-01-01
> 00:00:00 UTC, or a float8 representing seconds away from the same
> origin.

Does this mean that negative numbers are for timestamps before y2k?
Why and when is there a choice between int8 and float8 representation?

--
Reg me Please
[GENERAL] Warning about max_fsm_pages: what's that?
Hi all.

During a routine "VACUUM ANALYZE" I got this message in the logs:

WARNING:  relation "public.t_dati" contains more than "max_fsm_pages" pages
with useful free space
HINT:  Consider compacting this relation or increasing the configuration
parameter "max_fsm_pages".

The table in question contains some 14M+ rows.

Would it be possible to get some reference or suggestion in order to better
understand the provided hint? This is the very first time I see this message.

Many thanks.

--
Reg me Please
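For reference, a sketch of the two knobs involved (the value below is only
an example; it must be sized above the need reported by the warning):

-- postgresql.conf: changing max_fsm_pages requires a server restart
max_fsm_pages = 2000000        # page slots available to track free space

-- a database-wide VACUUM VERBOSE prints the current FSM usage in its
-- final lines, which tells how many page slots are actually needed
VACUUM VERBOSE;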
Re: [GENERAL] PIPELINED Functions
On Friday 09 November 2007 16:06:34 Cesar Alvarez wrote:
> Hello everyone,
> I'm working on a project that uses Oracle 10g, and a nice concept I
> learned is PIPELINED functions and functions that return a table.
> Is there something alike in Postgres?

You do have functions returning sets of rows.
I have no idea about pipelined functions.

--
Reg me Please
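A minimal sketch of a set-returning function, the closest PostgreSQL
analogue (note that, unlike Oracle's pipelining, PL/pgSQL materializes the
whole result set before returning it to the caller):

CREATE OR REPLACE FUNCTION gen_numbers( n integer )
RETURNS SETOF integer AS $$
BEGIN
  FOR i IN 1 .. n LOOP
    RETURN NEXT i;   -- appends one row to the result set
  END LOOP;
  RETURN;
END;
$$ LANGUAGE plpgsql;

SELECT * FROM gen_numbers( 3 );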
[GENERAL] Filter tables
Hi all.

I have this sample setup:

CREATE TABLE t1 ( t text, id int );
CREATE TABLE f1 ( t text );

INSERT INTO t1 VALUES
  ( 'field1',1 ), ( 'field2',1 ), ( 'field3',1 ), ( 'field1',2 ), ( 'field3',3 );
INSERT INTO f1 VALUES ( 'field1' ), ( 'field2' );

What I'd need to do is to "filter" t1 against f1 to get only the rows
( 'field1',1 ) and ( 'field2',1 ).

Of course neither t1 nor f1 has a fixed number of rows, though usually t1
should be much bigger than f1.

I have a rather complex solution in mind, with loops in a PL/pgSQL function,
and am wondering whether there is a simpler one.

Thanks a lot.

--
Reg me Please
Re: [GENERAL] Filter tables
On Monday 12 November 2007 17:05:18 Dimitri Fontaine wrote:
> Hi,
>
> On Monday 12 November 2007, Reg Me Please wrote:
> > What I'd need to do is to "filter" t1 against f1 to get only the rows
> > ( 'field1',1 ) and ( 'field2',1 ).
>
> select * from t1 natural join f1 where t1.id = 1;
>    t    | id
> --------+----
>  field1 |  1
>  field2 |  1
> (2 rows)
>
> I'm not sure about how you wanted to filter out the ('field1', 2) row of
> table t1, so I used the where t1.id = 1 restriction.
>
> Hope this helps,

I think I've surely not been clear enough.

The rows in t1 should be seen as grouped by the field id.
A group of such rows matches the filter f1 (made of two rows in my example)
if I can find all the values of f1 in the field t of that group.

So, in my example, the group of rows in t1 with id=2 (actually made of only
one row) doesn't match the filter because it's lacking a row with
t='field2'. In the same way the group of rows with id=3 won't match, as it
lacks both values that are in f1.

What I'd like to see as the output of the query/function is id 1, as only
the group with id=1 has both values. Of course, f1 could have any number of
different values.

--
Reg me Please
Re: [GENERAL] Filter tables
On Monday 12 November 2007 18:10:40 Osvaldo Rosario Kussama wrote:
> Try:
> SELECT DISTINCT t1.id FROM t1
>  WHERE NOT EXISTS (SELECT f1.t FROM f1
>                     WHERE NOT EXISTS (SELECT x1.t FROM t1 x1
>                                        WHERE f1.t = x1.t
>                                          AND t1.id = x1.id));
>
> Osvaldo

Nice, it seems to work.
But I fear it won't scale with a longer f1 filter table.
Let me think about it.

--
Reg me Please
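An equivalent formulation that avoids the nested NOT EXISTS, counting the
matches per group instead (the DISTINCT guards against duplicated rows in
t1; f1 is assumed to have no duplicates):

SELECT t1.id
  FROM t1
  JOIN f1 USING ( t )
 GROUP BY t1.id
HAVING count( DISTINCT t1.t ) = ( SELECT count(*) FROM f1 );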
Re: [GENERAL] Postgres table size
On Tuesday 13 November 2007 17:36:30 SHARMILA JOTHIRAJAH wrote:
> Hi
> I have a table with 29384048 records in Oracle and PostgreSQL. The table
> has 47 columns (16 numeric, 27 varchar and the rest timestamp). The
> table size in PostgreSQL is twice as much as the table size in Oracle
> (for the same number of rows and columns). There are no updates or
> deletes in this table. It is a test table that is used only for querying.
> The tables are vacuumed regularly.
>
> Even a simple seqscan query takes twice as much time in Postgres as in
> Oracle. Does PostgreSQL generally occupy more space than Oracle tables?
> Thanks
> Sharmila

This is an interesting point for sure, as long as the data types of the two
tables are comparable.
If this holds true, the more space an RDBMS occupies, the slower the access.
I think.

--
Reg me Please
Re: [GENERAL] Bulk Load Ignore/Skip Feature
On Wednesday 14 November 2007 05:50:36 Willem Buitendyk wrote:
> Will PostgreSQL ever implement an ignore-on-error feature when bulk
> loading data? Currently it is my understanding that any record that
> violates a unique constraint will cause the "copy from" command to halt
> execution instead of skipping over the violation and logging it - as is
> done in Oracle and DB2.
>
> Are there alternative ways of dealing with this scenario that won't
> consume as much time?
>
> Appreciate any help - would love to migrate away from Oracle.
>
> Cheers

pgloader
http://pgfoundry.org/projects/pgloader/

--
Reg me Please
[GENERAL] Variable LIMIT and OFFSET in SELECTs
Hi all.

I'd need to implement a "parametric windowed select" over a table called
"atable". The idea is to have a one-row table to hold the LIMIT and the
OFFSET for the selects.

If I try this:

create table limoff( l int, o int );
insert into limoff values ( 10,2 );

select a.* from atable a, limoff limit l offset o;

I get "ERROR: argument of OFFSET must not contain variables".
(You get the same error on LIMIT if you put a constant as the OFFSET.)

But I can do the following:

create or replace function f_limoff_1( l int, o int )
returns setof atable as $$
  select * from atable limit $1 offset $2
$$ language sql;

create or replace function f_limoff()
returns setof atable as $$
  select * from f_limoff_1( (select l from limoff), (select o from limoff) );
$$ language sql;

Of course, in my opinion at least, there's no real reason for the above
syntax limitation, as the semantics is not a problem.

Wouldn't it be a nice enhancement to allow variable LIMIT and OFFSET in
SELECTs?

--
Reg me Please
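For the record, the two functions can also be collapsed into a single
PL/pgSQL function with EXECUTE; a sketch (the function name is mine, and
limoff is assumed to hold exactly one row):

CREATE OR REPLACE FUNCTION f_limoff_dyn()
RETURNS SETOF atable AS $$
DECLARE
  lo limoff%ROWTYPE;
  r  atable%ROWTYPE;
BEGIN
  SELECT * INTO lo FROM limoff;
  FOR r IN EXECUTE
    'SELECT * FROM atable LIMIT ' || lo.l || ' OFFSET ' || lo.o
  LOOP
    RETURN NEXT r;
  END LOOP;
  RETURN;
END;
$$ LANGUAGE plpgsql;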
Re: [GENERAL] Variable LIMIT and OFFSET in SELECTs
In any case, what would be the benefit of not allowing "variables" as the
LIMIT and OFFSET arguments?

--
Reg me Please
Re: [GENERAL] Variable LIMIT and OFFSET in SELECTs
On Thursday 15 November 2007 17:55:42 Sam Mason wrote:
> On Thu, Nov 15, 2007 at 05:34:43PM +0100, Reg Me Please wrote:
> > On Thursday 15 November 2007 14:09:16 Trevor Talbot wrote:
> > > On 11/15/07, Reg Me Please <[EMAIL PROTECTED]> wrote:
> > > > In any case, what would be the benefit of not allowing "variables"
> > > > as the LIMIT and OFFSET arguments?
> > >
> > > When you can fully describe the semantics of your example, you'll
> > > probably be able to answer that question too :)
> >
> > The question is: why not correct the syntax checker to allow also the
> > first solution?
>
> In relational algebra terms, try thinking about what would happen if you
> did something like:
>
>   SELECT * FROM foo LIMIT val;
>
> where the table foo has more than one row (and val had different values
> for each row). Which row would the database use? I believe these are
> the semantics Trevor was referring to.
>
> In implementation terms, the problem is that a query is planned without
> getting any data from the database. If you're planning a query it helps
> to know how many rows you're getting back. If you're getting few rows
> back then it's probably better to make the query work differently than
> if it's returning lots of rows. Therefore, knowing what the LIMIT is,
> at planning time, makes a lot of difference. How would this work in the
> presence of arbitrary expressions for LIMIT?
>
>   Sam

Your remarks make a lot of sense. Of course.
But then why allow LIMIT and OFFSET values coming from function argument
evaluations? Isn't this breaking the planner?

Of course I would say no, as in my case it works just fine; it is only more
complex to write than the "simpler" way.

--
Reg me Please
Re: [GENERAL] Variable LIMIT and OFFSET in SELECTs
On Thursday 15 November 2007 20:28:17 you wrote:
> Reg Me Please wrote:
> > Sorry but I don't understand.
> >
> > Either LIMIT and OFFSET have to be definitely CONSTANT or not.
>
> They must be constant during the execution of the query.
>
> > In the SQL function body the LIMIT and the OFFSET *are definitely not*
> > CONSTANT. And the planner can do its job at best as usual.
>
> Well, they're constant during one execution of the query, but I grant
> you the planner doesn't know what values they will have.
>
> > As Sam says I should be able to "put an IMMUTABLE expression into a
> > LIMIT or OFFSET". And under some circumstances (SQL function body) it's
> > true even with VARIABLE expressions like function call arguments.
>
> And you can.
>
> CREATE FUNCTION limfunc() RETURNS integer AS 'SELECT 2' LANGUAGE SQL
> IMMUTABLE;
>
> SELECT * FROM fit LIMIT limfunc();
>  a | b
> ---+----
>  1 | 43
>  2 | 43
> (2 rows)
>
> > In my opinion I would say it's more a problem with the syntax checker
> > than with the planner ("semantics" in my lingo). But I could be wrong.
>
> Well, what it won't let you do is have a subquery in the LIMIT clause.
> That's probably due to a combination of:
> 1. The spec probably says something about it
> 2. There is an obvious approach involving functions/prepared queries
> 3. You're the first person to have asked for it.
>
> Now if you can get a couple of hundred to join you at #3, you might have
> a feature request :-)

Hmmm ... It also works with STABLE functions, though.

1. Unluckily I've been too lazy to read the specs.
2. I am not willing to put subqueries there, I just need to drive the
   "windowing" mechanism by means of a second table (limoff).
3. Don't think so :)

The solution I'm using now (two functions) allows for really variable LIMIT
and OFFSET arguments. It's just UGLY to write. But it works.
And if it works, it should be made available in an easier way (unless the
specs advise otherwise), with a simple table field (or a function
parameter) as the argument for the LIMIT and for the OFFSET. Maybe with a
huge warning about possible drawbacks with the query planner.

I'll check whether I can drop a feature request, even without hundreds of
fellows.

--
Reg me Please
Re: [GENERAL] Variable LIMIT and OFFSET in SELECTs
Sorry but I don't understand.

Either LIMIT and OFFSET have to be definitely CONSTANT or not.

In the SQL function body the LIMIT and the OFFSET *are definitely not*
CONSTANT. And the planner can do its job at best as usual.

As Sam says, I should be able to "put an IMMUTABLE expression into a LIMIT
or OFFSET". And under some circumstances (SQL function body) it's true even
with VARIABLE expressions like function call arguments.

In my opinion I would say it's more a problem with the syntax checker than
with the planner ("semantics" in my lingo). But I could be wrong.

--
Reg me Please
Re: [GENERAL] Variable LIMIT and OFFSET in SELECTs
On Friday 16 November 2007 07:38:30 Tom Lane wrote:
> IIRC, it used to be restricted to a constant, a few revisions back.
> In current releases the only restriction that stems from laziness is
> not allowing a sub-select. (If anyone were to put forward a serious
> use-case, we'd probably go fix that.)

Mine was not serious enough! Sigh! :-)

> The OP's complaint is that we don't allow a variable of the query's own
> level, but AFAICT he's still not grasped the point that that leads to an
> indeterminate limit value ...

So it works, but it's not serious enough to be unlocked. Sigh! :-)

--
Reg me Please
Re: [GENERAL] Variable LIMIT and OFFSET in SELECTs
On Thursday 15 November 2007 23:08:10 Richard Huxton wrote:
> Reg Me Please wrote:
> > On Thursday 15 November 2007 20:28:17 you wrote:
> >> Reg Me Please wrote:
> >>> In my opinion I would say it's more a problem with the syntax checker
> >>> than with the planner ("semantics" in my lingo). But I could be wrong.
> >>
> >> Well, what it won't let you do is have a subquery in the LIMIT clause.
> >> That's probably due to a combination of:
> >> 1. The spec probably says something about it
> >> 2. There is an obvious approach involving functions/prepared queries
> >> 3. You're the first person to have asked for it.
> >>
> >> Now if you can get a couple of hundred to join you at #3, you might
> >> have a feature request :-)
> >
> > Hmmm ... It also works with STABLE functions, though.
>
> Well, STABLE means the value won't change during the query.
>
> > 1. Unluckily I've been too lazy to read the specs.
>
> A wise choice. They're not my idea of fun either.
>
> > 2. I am not willing to put subqueries there, I just need to drive the
> > "windowing" mechanism by means of a second table (limoff).
>
> And how do you plan to get information from limoff unless it's by a
> subquery?
>
> > 3. Don't think so :)
> >
> > The solution I'm using now (two functions) allows for really variable
> > LIMIT and OFFSET arguments. It's just UGLY to write. But it works.
> > And if it works, it should be made available in an easier way (unless
> > the specs advise otherwise), with a simple table field (or a function
> > parameter) as the argument for the LIMIT and for the OFFSET. Maybe with
> > a huge warning about possible drawbacks with the query planner.
>
> But you're back to subqueries here and the possibility of multiple
> values from limoff. Even if you do something like:
>    SELECT ... LIMIT (SELECT l FROM limoff LIMIT 1)
> that doesn't guarantee you one row - you might get zero.
>
> > I'll check whether I can drop a feature request, even without hundreds
> > of fellows.
>
> Hey, anyone can request a feature. You're more likely to get it
> implemented with a mix of coding skills, money and user-demand though.

You are right: in SQL anything is a query. So I'll close this.

The thing is doable and working. The syntax parser doesn't allow it, maybe
because of a bug, but there's a workaround. This is enough for me.

--
Reg me Please
Re: [GENERAL] Variable LIMIT and OFFSET in SELECTs
On Friday 16 November 2007 08:33:14 Tom Lane wrote:
> Reg Me Please <[EMAIL PROTECTED]> writes:
> >> The OP's complaint is that we don't allow a variable of the query's own
> >> level, but AFAICT he's still not grasped the point that that leads to
> >> an indeterminate limit value ...
> >
> > So it works, but it's not serious enough to be unlocked.
>
> You really don't have a clue what this is about, do you?
> Let me try to explain one more time. You propose allowing
>
> select ... from
>   table1 join table2 on table1.x = table2.y
>   limit table1.z
>
> Now this would be somewhat well-defined if there were only one row in
> table1, or if there were many rows but they all had the same value
> in column z. But what if that's not the case? If there are multiple
> possible values for z then the query is just not sensible.
>
>   regards, tom lane

You're right, maybe I have no clue at all.
(Now it seems I'm maybe a little bit less clueless.)

And you are right, the fragment you propose depicts my goal.
And, once again, you are right with "this would be somewhat well-defined
if there were only one row in table1". I know this:

tmp=# CREATE SEQUENCE s1;
CREATE SEQUENCE
tmp=# CREATE SEQUENCE s2;
CREATE SEQUENCE
tmp=# CREATE TABLE seqs ( seq text primary key );
CREATE TABLE
tmp=# INSERT INTO seqs VALUES ( 's1' );
INSERT 0 1
tmp=# SELECT * from nextval( (select seq from seqs) );
 nextval
---------
       1
(1 row)

tmp=# INSERT INTO seqs VALUES ( 's2' );
INSERT 0 1
tmp=# SELECT * from nextval( (select seq from seqs) );
ERROR:  more than one row returned by a subquery used as an expression

(Semantics: I put a sequence name into a table and nextval() will be
computed over that table's row content at call time. If I put more than one
row in, I get an error, so I need to ensure that the table contains just
one row.)

I would not lock features (or capabilities) just to avoid people making
mistakes, because you could hinder careful people from exploiting them.

In any case, I need to thank you all very much for the new things I've
learnt about PostgreSQL.

--
Reg me Please
Re: [GENERAL] Variable LIMIT and OFFSET in SELECTs
On Thursday 15 November 2007 14:09:16 Trevor Talbot wrote:
> On 11/15/07, Reg Me Please <[EMAIL PROTECTED]> wrote:
> > In any case, what would be the benefit of not allowing "variables" as
> > the LIMIT and OFFSET arguments?
>
> When you can fully describe the semantics of your example, you'll
> probably be able to answer that question too :)

OK, I presume I've been unclear.

I need a "function returning a set of records" to send back a "window" of
the complete data set. In my mind, the LIMIT and OFFSET predicates are
meant for this purpose.

My original idea was a solution like this:

create table limoff( l int, o int ); -- only 1 line in this table
insert into limoff values ( 10,2 );

select a.* from atable a, limoff limit l offset o;

Unluckily this yields the known problem about "variables".
I've tried to work around the problem, and in fact this is doable.

First step, I encapsulate the LIMIT+OFFSET predicate in an SQL function:

create or replace function f_limoff_1( l int, o int )
returns setof atable as $$
  select * from atable limit $1 offset $2
$$ language sql;

It works.

Second step, I encapsulate the access to the limoff table in another
function:

create or replace function f_limoff()
returns setof atable as $$
  select * from f_limoff_1( (select l from limoff), (select o from limoff) );
$$ language sql;

This also works.

Please note that neither the LIMIT nor the OFFSET argument is constant:
both are contained in the limoff table. So, in my opinion, a variable LIMIT
and OFFSET is not a real problem, as in both cases the actual values of the
arguments are known only at run time.

But for some reason, the first, simpler solution leads to an error.
The question is: why not correct the syntax checker to allow the first
solution too?

--
Reg me Please
Re: [GENERAL] Timestamp without timezone
On Tuesday 20 November 2007 15:01:53 T.J. Adami wrote:
> I'm using Microsoft Visual FoxPro 9 developing an ERP application,
> using PostgreSQL 8.2.5 and an ODBC connection in version 7.
>
> If I upgrade my ODBC drivers to use "PostgreSQL ANSI", I get an
> error like this:
>
> "function saldo_estoque("unknown", "unknown", "unknown", "unknown",
> timestamp without time zone) does not exist"
>
> However, the "unknown" types are BPCHAR on function "saldo_estoque",
> and "timestamp without time zone" is the DATE type.
>
> How can I solve this?

It's very likely that you have to do some explicit casting on all the
parameter types: the only one the DB has recognised is the last one, and it
recognised it as TIMESTAMP WITHOUT TIME ZONE while the function expects a
DATE, so that one needs a cast too.

--
Reg me Please
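A sketch of what the explicit casting could look like on the client side
(the argument values are hypothetical; the point is that every parameter
gets a cast, so the parser can resolve the function signature):

SELECT saldo_estoque( 'A'::bpchar, 'B'::bpchar, 'C'::bpchar, 'D'::bpchar,
                      '2007-11-20'::date );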
[GENERAL] Maybe a FAQ
Hi all.

What would be the right place to put a "feature request" for the next
releases, and for bugs in the current one?

Thanks.

--
Reg me Please
[GENERAL] Table filter
Hi all.

I have the following concept:

create table t_data (
  property   text,
  item       int8,
  prop_value text
);

(I've cut away some fields needed only for data history.)

The meaning is that an entity identified by the value of "item" has a
number of properties, each named by "property", with value "prop_value".
So, for a single "item" there can be many different properties, each with
its own value.

create table t_filters (
  filter text
);

create table t_filter_def (
  filter     text references t_filters,
  property   text,
  prop_value text
);

A filter is a list of property values needed to qualify an entity as
"good". An entity evaluates as good only when all property values in the
filter match the ones associated with an item in t_data.

What's missing to me is how to apply a filter to t_data and get the list of
the items that evaluate as good.

--
Reg me Please
Re: [GENERAL] Table filter
On Wednesday 21 November 2007 16:41:03 Rodrigo De León wrote:
> On Nov 21, 2007 9:21 AM, Reg Me Please <[EMAIL PROTECTED]> wrote:
> > Hi all.
> >
> > I have the following concept.
>
> This smells like EAV.
>
> Please read
> http://joecelkothesqlapprentice.blogspot.com/2006/04/using-one-table-vs-many.html
>
> and consider reevaluating the schema according to valid relational
> design (tables, columns, check constraints, etc.).
>
> In any case, good luck.

You are right, Rodrigo. It smells like EAV because it is.

But I need to take into account the history of changes for every single
attribute of every single item. And, what's worse (for me), attributes can
be changed both for future values (that is, values becoming valid in the
future) and for the past ("history gets fixed").

A variant of the well-known time travel won't apply properly, as single
attributes can change. Getting the proper attribute list for an item at a
certain date is a snap in EAV, while it's a nightmare in the usual
fields-in-a-table approach. This is why I'm going EAV (+ timestamps and
flags, of course).

So the table actually is:

create table t_data (
  property   text,
  item       int8,
  prop_value text,
  flag       bool,
  valid      timestamptz,
  expires    timestamptz
);

And I hope that there's better advice for such a scenario.

--
Reg me Please
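With that layout, the "attributes of an item at a given date" lookup stays
a single range test; a sketch using the columns above (42 and now() stand
for the item and the reference timestamp):

SELECT property, prop_value
  FROM t_data
 WHERE item = 42
   AND flag
   AND valid   <= now()
   AND expires >  now();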
Re: [GENERAL] Table filter
On Wednesday 21 November 2007 20:22:46 Joe Conway wrote:
> Reg Me Please wrote:
> > The meaning is that an entity identified by the value of "item" has a
> > number of properties, each named by "property", with value "prop_value".
> > So, for a single "item" there can be many different properties, each
> > with its own value.
> >
> > A filter is a list of property values needed to qualify an entity as
> > "good". An entity evaluates as good only when all property values in the
> > filter match the ones associated with an item in t_data.
> >
> > What's missing to me is how to apply a filter to t_data and get the
> > list of the items that evaluate as good.
>
> I haven't played with it myself, but it seems to me that you could do
> this with an inner join of two crosstabs -- one on t_data and one on the
> filters (i.e. you join on the columns of the filter crosstab to the
> matching ones in the t_data crosstab).
>
> Joe

This is a good point.
I just need to avoid doing crosstabs over a very large t_data: I fear it
would kill the application.

--
Reg me Please
[GENERAL] EAV or not to EAV?
Hi all.

I have to take into account the update history of every single field in a
number of tables. Those updates can apply to the future (next values) but
also to the past (history fixes), and only to some fields, usually not to
the entire record.

Every lookup in those tables is always related to a timestamp that normally
is the current timestamp, but can also be a past one.

I've come up with a "traditionally regretted" EAV design with the addition
of timestamps for the history. And the related problems, some of which have
been solved by Joe Conway's crosstab contrib.

Is there a better idea than mine? I hope so.

--
Reg me Please
Re: [GENERAL] LIBPQ: Can we have buffered PGresult (i.e. retrieval by chunks)?
On Thursday 22 November 2007 16:23:25 Abraham, Danny wrote:
> Is there a way to break the PGresult array into chunks,
> like Oracle?
>
> Thanks
>
> Danny Abraham
> BMC Software
> CTM&D Business Unit
> 972-52-4286-513
> [EMAIL PROTECTED]

I presume that the usual LIMIT+OFFSET solution is not OK. Right?

--
Reg me Please
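If it is not, the usual alternative at the libpq level is a server-side
cursor fetched in chunks; the statements below would be sent one at a time
through PQexec() (the cursor name and chunk size are made up):

BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM atable;
FETCH 1000 FROM big_cur;   -- repeat until fewer than 1000 rows come back
CLOSE big_cur;
COMMIT;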
[GENERAL] "Suspending" indexes and constraint updates
Hi all.

I'd need to do large inserts into already populated tables as a maintenance
activity. The inserts are generated by software and are consistent with the
constraints defined in the tables.

Up to now I've been looking at the index and constraint definitions,
dropping them, doing the inserts and then re-creating both indexes and
constraints.

Is there a way to "suspend" the index updates and the constraint checks
before the inserts, in order to later re-enable them and do a reindex?

TIA.

--
Reg me, please!
Re: [GENERAL] "Suspending" indexes and constraint updates
On Tuesday 04 December 2007 11:50:21 Peter Eisentraut wrote:
> On Tuesday, 4 December 2007, Reg Me Please wrote:
> > Is there a way to "suspend" the index updates and the constraint checks
> > before the inserts, in order to later re-enable them and do a reindex?
>
> You can disable foreign-key constraints (see ALTER TABLE ... DISABLE
> TRIGGER; not quite obvious, but there is a trigger beneath each
> foreign-key constraint). But there is no general solution for all
> constraints and indexes.
>
> It might also be helpful to investigate the order in which pg_dump puts
> out things, since it faces mostly the same issues. It puts index and
> constraint creation at the end, after the data load.

As far as I know, pg_dump usually starts with an empty DB.
Then it creates plain tables to be filled with COPY.
And at last it creates indexes and constraints.
Which is not what I need.

In any case, thanks for the hint.

--
Reg me, please!
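A sketch of the trigger-disabling route Peter mentions (the table name is
hypothetical; beware that re-enabling does NOT re-check the rows loaded in
between, so the data must be known to be consistent, and disabling the
internal FK triggers requires superuser rights):

BEGIN;
ALTER TABLE atable DISABLE TRIGGER ALL;  -- also disables the FK triggers
-- ... bulk INSERTs / COPY here ...
ALTER TABLE atable ENABLE TRIGGER ALL;
COMMIT;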
[GENERAL] COPY speedup
Hi all.

In order to speed up the COPY ... FROM ... command, I've disabled
everything (primary key, not null, references, default and indexes) in the
table definition before doing the actual COPY.
Later I can restore them with ALTER TABLE ... and CREATE INDEX ...

My question is: is all this necessary, or could I keep some of them (maybe
just the DEFAULTs) at no speed cost?

Is there a way to "automate" this by using the information_schema?

Many thanks in advance.

--
Reg me, please!
Re: [GENERAL] COPY speedup
On Thursday 13 December 2007 19:56:02 Tom Lane wrote:
> Reg Me Please <[EMAIL PROTECTED]> writes:
> > In order to speed up the COPY ... FROM ... command, I've
> > disabled everything (primary key, not null, references, default and
> > indexes) in the table definition before doing the actual COPY.
> > Later I can restore them with ALTER TABLE ... and CREATE INDEX ...
> >
> > My question is: is all this necessary, or could I keep some of them
> > (maybe just the DEFAULTs) at no speed cost?
>
> Indexes and foreign key references are the only things that benefit
> from this treatment. DEFAULTs are irrelevant to a COPY, and simple
> constraints (NOT NULL and CHECK) are not any faster to verify later
> --- which makes dropping them slower, since you'll need an additional
> table scan to verify them when they're re-added.
>
>   regards, tom lane

I suppose then that foreign keys are to be "disabled" in order to speed
things up. Right?

--
Reg me, please!
[GENERAL] Rich LIKE inheritance
Hi all.

Is there a way to inherit from a table with the LIKE construct and also get
its indexes along with constraints, DEFAULTs and NOT NULLs?

An example:

CREATE TABLE story_base (
  flag     BOOL      NOT NULL DEFAULT TRUE,
  starting TIMESTAMP NOT NULL DEFAULT '-INFINITY',
  ending   TIMESTAMP NOT NULL DEFAULT 'INFINITY'
);

CREATE INDEX i_story_base ON story_base( flag, starting, ending );

CREATE TABLE atable (
  sometext TEXT,
  LIKE story_base INCLUDING DEFAULTS
);

I'd like atable to also "inherit" an index like the one defined for
story_base.

Any hint?

--
Reg me, please!
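As of 8.3 the LIKE clause can copy more than the defaults; a sketch of what
should work there (NOT NULLs are always copied; CHECK constraints come with
INCLUDING CONSTRAINTS, indexes with INCLUDING INDEXES):

CREATE TABLE atable (
  sometext TEXT,
  LIKE story_base INCLUDING DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES
);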
[GENERAL] COPY with composite type column
Hi all.

I have this composite type:

create type ct as (
  ct1 text,
  ct2 int
);

Then I have this table:

create table atable (
  somedata  numeric,
  otherdata text,
  compo     ct
);

When I try to COPY data into that table with the following command I get a
syntax error message:

COPY atable( somedata, (compo).ct1 ) from stdin;
ERROR:  syntax error at or near "("
LINE 1: COPY atable( somedata, (compo).ct1 ) from stdin;
                               ^

The caret points to the open parenthesis in "(compo)".
The same error is reported on the first open parenthesis if I write
"((compo).ct1)".

Any hint on how to write this COPY?

--
Reg me, please!
Re: [GENERAL] COPY with composite type column
On Wednesday 26 December 2007 12:58:34 Reg Me Please wrote:
> When I try to COPY data into that table with the following command I get
> a syntax error message:
>
> COPY atable( somedata, (compo).ct1 ) from stdin;
> ERROR:  syntax error at or near "("
>
> Any hint on how to write this COPY?

OK, I've managed to walk the first step.

The correct grammar is:

COPY atable( somedata, compo ) FROM STDIN;

That is, you have to address the composite-type column as a whole, as
stated in the COPY manual. The data to be entered for the composite column
is to be enclosed within parentheses, with the individual sub-columns
separated by commas (!).

Is there a way to enter just some of the sub-columns of a composite-type
column?

--
Reg me, please!
Re: [GENERAL] COPY with composite type column
On Wednesday 26 December 2007 14:31:04 Reg Me Please wrote:
> The correct grammar is:
>
> COPY atable( somedata, compo ) FROM STDIN;
>
> The data to be entered for the composite column is to be enclosed within
> parentheses, with the individual sub-columns separated by commas (!).
>
> Is there a way to enter just some of the sub-columns of a composite-type
> column?

The answer is YES: by leaving empty values between the commas.
I found it by trial and error; I've been too lazy to dig very deeply into
the 8.2 manual.

What instead seems not to be possible is defining default values for the
single sub-columns of a composite type. Right?

Sorry for this "auto-answering", though!

--
Reg me, please!
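For illustration, a complete round trip with the tables above (the values
are made up; the two COPY fields are tab-separated, and the empty slot
after the comma stands for a NULL sub-column):

COPY atable( somedata, compo ) FROM STDIN;
1.5	(abc,42)
2.5	(def,)
\.

SELECT somedata, (compo).ct1, (compo).ct2 FROM atable;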
[GENERAL] Hinting the planner
Hello all.

I have some tables that contain exactly 1 row and that I use for searches
with JOIN.

Does it make any sense to hint the planner about this?
If so, how can I send such hints to it?

--
Reg me, please!
[GENERAL] INSERT with a composite column from query
Hi all.

I have two tables like these:

create table compo (
  t text,
  i int
);

create table tab (
  x int,
  c compo
);

Then I have a function like this:

create or replace function f_compo()
returns setof compo as $body$
  ...
$body$ language sql stable;

What I need to do is to insert the results from f_compo() into the table
tab, along with a value for x. I expected something like this to work:

insert into tab
  select 42, row( c.* ) from f_compo() c;

But I get:

ERROR:  cannot cast type record to compo

Any hint?

TIA

--
Reg me, please!
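Two variants that should work here (sketches): pass the composite value
through unchanged, or cast the row constructor to the target type
explicitly instead of leaving it an anonymous record.

-- f_compo() already returns values of type compo:
INSERT INTO tab
  SELECT 42, c FROM f_compo() c;

-- or keep row() and cast it:
INSERT INTO tab
  SELECT 42, ROW( c.t, c.i )::compo FROM f_compo() c;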
[GENERAL] Accessing composite type columns from C
Hi all.

Is there a way with libpq to access the "sub-columns" of a composite-type
column? The documentation (8.2) seems not to mention this case.

Thanks.

--
Reg me, please!
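Not directly: libpq only sees the columns the query returns. The usual
trick is to flatten the composite in the SELECT, so that each sub-column
arrives as an ordinary result column retrievable with PQgetvalue():

SELECT somedata, (compo).ct1, (compo).ct2 FROM atable;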
Re: [GENERAL] Question about indexes
Any hint?

> Hi all.
> I usually create indexes according to the queries used in my software.
> This means that more often than not I have composite indexes over more
> than one column.
> What would be, in PGSQL (v8.3+), the pros and cons of having only
> one-column indexes instead?
> Thanks in advance.
>
> RMP.
Re: [GENERAL] Question about indexes
As I told you, I usually design indexes based upon the queries, the WHERE
clauses especially. My fear is that in PGSQL the runtime "index
composition" can be a drawback for performance when compared to a "static
index composition".

Is this true according to your experience?
Is there any "best common practice" for this issue in PGSQL?

Thanks again.

On Tuesday 16 September 2008 20:41:22 Scott Marlowe wrote:
> On Tue, Sep 16, 2008 at 12:24 PM, Merlin Moncure <[EMAIL PROTECTED]> wrote:
> > On Tue, Sep 9, 2008 at 10:52 AM, regme please <[EMAIL PROTECTED]> wrote:
> >> Hi all.
> >> I usually create indexes according to the queries used in my software.
> >> This means that more often than not I have composite indexes over more
> >> than one column.
> >> What would be, in PGSQL (v8.3+), the pros and cons of having only
> >> one-column indexes instead?
> >> Thanks in advance.
> >
> > Pro:
> > Fewer, smaller, well-traveled indexes (more efficient from a
> > cache perspective).
> >
> > Con:
> > Operations that look up multiple fields simultaneously run slower
> > (sometimes much slower).
> >
> > Note that when reviewing SQL written by other people on the
> > performance list, probably the most common optimization suggestion is
> > to use composite indexes.
>
> The other, closely related optimization is functional indexes. If you
> need to look up stuff based on date_trunc() then create indexes on
> that. Next I'd say partial indexes.
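For what it's worth, a sketch of the two designs for a WHERE a = ... AND
b = ... query (table and column names are made up): since 8.1 the planner
can combine single-column indexes at run time with a BitmapAnd, but the
composite index usually still wins for this exact clause.

-- "runtime composition": two small, well-traveled indexes
CREATE INDEX i_t_a ON t ( a );
CREATE INDEX i_t_b ON t ( b );

-- "static composition": one composite index matching the WHERE clause
CREATE INDEX i_t_ab ON t ( a, b );

EXPLAIN ANALYZE SELECT * FROM t WHERE a = 1 AND b = 2;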
Re: [GENERAL] PDF Documentation for 8.3?
Nothing bad, except that a number of tables are actually unreadable and
some code example lines run past the right margin.

Apart from this, I would say it's great documentation.

On Sunday 21 September 2008 11:52:44 Sven Marcel Buchholz wrote:
> Michelle Konzack wrote:
> > Hello,
> >
> > I am using Debian GNU/Linux Etch with PostgreSQL 8.1.11, and since the
> > next release of Debian will use 8.3 I am searching for documentation
> > which can be printed out...
> >
> > My last printed version was "Practical PostgreSQL" from O'Reilly, which
> > covers only 7.4.
> >
> > I was searching the site but there are no PDFs for 8.3 in A4 format, or
> > am I missing something?
> >
> > Note: The American "Letter" format sucks, because I am printing
> > two A4 pages on ONE A4 side, and with the "Letter" format
> > I get very huge borders...
> >
> > Thanks, greetings and nice day/evening
> > Michelle Konzack
> > Systemadministrator
> > 24V Electronic Engineer
> > Tamay Dogan Network
> > Debian GNU/Linux Consultant
>
> Hello,
> what is wrong with this PDF?
> http://www.postgresql.org/files/documentation/pdf/8.3/postgresql-8.3-A4.pdf
>
> Greetings from Berlin
> Sven Marcel Buchholz
[GENERAL] Counting rows in a PL/PgSQL CURSOR without fetching?
Hi all.

Is there a way in PL/pgSQL to get the number of rows resulting from:

OPEN curs1 SCROLL FOR EXECUTE query;

before actually fetching any?

Unluckily

MOVE LAST FROM curs1;

won't work with

GET DIAGNOSTICS cnt = ROW_COUNT;

Any hint?
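One workaround at the PL/pgSQL level, given that the query is available as
a string anyway: run a separate count over the same text before opening the
cursor. A sketch (names are mine; it costs one extra execution of the
query):

CREATE OR REPLACE FUNCTION f_open_counted( query text,
                                           OUT curs refcursor,
                                           OUT cnt bigint )
LANGUAGE plpgsql AS $$
BEGIN
  EXECUTE 'SELECT count(*) FROM ( ' || query || ' ) AS s' INTO cnt;
  curs := 'curs1';
  OPEN curs SCROLL FOR EXECUTE query;
END;
$$;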
[GENERAL] Dynamically created cursors vanish in PLPgSQL
Hi all.

I'm running PGSQL v8.3.3.

I tried to adapt the examples from the friendly manual (38.7.3.5) in order
to have a function that creates cursors based on a parametric query string:

CREATE SEQUENCE s_cursors;

CREATE OR REPLACE FUNCTION f_cursor( query text, out curs refcursor )
LANGUAGE PLPGSQL STRICT
AS $BODY$
DECLARE
  c refcursor;
BEGIN
  c := 'cursor_' || nextval( 's_cursors' );
  OPEN c SCROLL FOR EXECUTE query;
  curs := c;
END;
$BODY$;

SELECT f_cursor( 'SELECT * FROM pg_tables' );

   curs
----------
 cursor_1
(1 row)

FETCH 10 FROM cursor_1;

ERROR:  cursor "cursor_1" does not exist

SELECT * from pg_cursors;

 name | statement | is_holdable | is_binary | is_scrollable | creation_time
------+-----------+-------------+-----------+---------------+---------------
(0 rows)

The cursor is (or should have been) created, as there's no error, but it
seems to vanish as soon as the creating function returns. As if it were
created "WITHOUT HOLD", which doesn't make much sense in a function
returning a refcursor; this is why there is (seems to be) no "HOLD" clause
for cursor creation in PL/pgSQL.

I think it's more likely that I'm making some mistake. But I have no idea
where.

Any hint?

Thanks in advance

RMP.
Re: [GENERAL] Dynamically created cursors vanish in PLPgSQL
Nice suggestion.

In the meanwhile I've found a "workaround" that works for me (unless
there's a hidden pitfall):

CREATE OR REPLACE FUNCTION f_cursor2( query text, out curs refcursor )
LANGUAGE PLPGSQL STRICT
AS $BODY$
DECLARE
  c refcursor;
BEGIN
  c := 'cursor_' || nextval( 's_cursors' );
  EXECUTE 'DECLARE ' || c || ' SCROLL CURSOR WITH HOLD FOR ' || query;
  curs := c;
END;
$BODY$;

SELECT f_cursor2( 'SELECT * FROM pg_tables' );

   curs
----------
 cursor_2
(1 row)

FETCH 10 FROM cursor_2;

     schemaname     |        tablename         | tableowner | tablespace | hasindexes | hasrules | hastriggers
--------------------+--------------------------+------------+------------+------------+----------+-------------
 information_schema | sql_features             | postgres   | [NULL]     | f          | f        | f
 information_schema | sql_implementation_info  | postgres   | [NULL]     | f          | f        | f
 pg_catalog         | pg_statistic             | postgres   | [NULL]     | t          | f        | f
 information_schema | sql_languages            | postgres   | [NULL]     | f          | f        | f
 information_schema | sql_packages             | postgres   | [NULL]     | f          | f        | f
 information_schema | sql_parts                | postgres   | [NULL]     | f          | f        | f
 information_schema | sql_sizing               | postgres   | [NULL]     | f          | f        | f
 pg_catalog         | pg_authid                | postgres   | pg_global  | t          | f        | t
 information_schema | sql_sizing_profiles      | postgres   | [NULL]     | f          | f        | f
 pg_catalog         | pg_database              | postgres   | pg_global  | t          | f        | t
(10 rows)

SELECT * from pg_cursors;

   name   |                              statement                               | is_holdable | is_binary | is_scrollable |         creation_time
----------+-----------------------------------------------------------------------+-------------+-----------+---------------+-------------------------------
 cursor_2 | DECLARE cursor_2 SCROLL CURSOR WITH HOLD FOR SELECT * FROM pg_tables | t           | f         | t             | 2008-09-26 10:05:38.963548+02
(1 row)

I would then say that PL/pgSQL should also have the "WITH / WITHOUT HOLD"
option; otherwise a function that creates a cursor needs a transaction even
though the cursor is read-only. In my very humble opinion.

On Thursday 25 September 2008 19:58:07 Pavel Stehule wrote:
> Hello
>
> try to look at
> http://okbob.blogspot.com/2008/08/using-cursors-for-generating-cross.html
>
> regards
> Pavel Stehule
>
> p.s. you should use a transaction
>
> 2008/9/25 Reg Me Please <[EMAIL PROTECTED]>:
> > I tried to adapt the examples from the friendly manual (38.7.3.5) in
> > order to have a function that creates cursors based on a parametric
> > query string: [...]
[GENERAL] Transactions within a function body
Hi all. Is there a way to have (sub)transactions within a function body? I'd like to execute some code (a transaction!) inside a function and later decide whether that transaction is to be committed or not. Thanks. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Transactions within a function body
Well, if it is a limitation, and having it would lead to a "better product", why not making it a feature for the next still-open release? In my opinion that's more than a limitation, it's a missing feature. In your code you often need to create savepoints to delay the decision for the commitment. A Pl/PgSQL function is just a bunch of code you want to move into the DB. So the need for savepoints seems to me to be still there. Useless to say I would vote for a "GO". On Thursday 02 October 2008 11:01:37 Albe Laurenz wrote: > Alvaro Herrera wrote: > > > > Is there a way to have (sub)transactions within a function body? > > > > I'd like to execute some code (a transaction!) inside a function and > > > > later decide whether that transaction is to be committed or not. > > > > > > You could issue a "SAVEPOINT name". If at the end you don't want your > > > changes to apply, you can issue a "ROLLBACK to name" > > > > Actually you can't use SAVEPOINT nor ROLLBACK TO within a function. In > > PL/pgSQL you can use EXCEPTION blocks (if you don't like the changes, > > just do a RAISE EXCEPTION, and the exception block is run). > > After a discussion on comp.databases.postgresql I realized that this > is actually a limitation. > > Consider the following: > > BEGIN >UPDATE ... >UPDATE ... >UPDATE ... > EXCEPTION >WHEN integrity_constraint_violation THEN > ... > END; > > If the first UPDATE succeeds but the second one bombs, there is no way > to undo the first update short of having the whole transaction cancelled. > > So while exceptions are implemented using savepoints, they give you only > part of the functionality, namely to make a group of statements > all-or-nothing within one transaction. > > If you need all three of these UPDATEs to either all succeed or fail, > but the whole transaction should continue, you cannot do that in PL/pgSQL. > > Is there a chance to get savepoint support in PL/pgSQL at some point? > Does it make sense to raise this on -hackers? > > Yours, > Laurenz Albe -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Transactions within a function body
Hi. My humble opinion follows. One point here is that the decision for the ROLLBACK could possibly be different from errors. It could simply be based upon a generic expression, not just the conditions seen in "Appendix A" of the manual. An exception is something different from a transaction, despite the former is implemented with the latter. On Thursday 02 October 2008 11:53:17 Richard Huxton wrote: > Albe Laurenz wrote: > > After a discussion on comp.databases.postgresql I realized that this > > is actually a limitation. > > > > Consider the following: > > > > BEGIN > >UPDATE ... > >UPDATE ... > >UPDATE ... > > EXCEPTION > >WHEN integrity_constraint_violation THEN > > ... > > END; > > > > If the first UPDATE succeeds but the second one bombs, there is no way > > to undo the first update short of having the whole transaction cancelled. > > No, I think you've got that backwards Albe. You can even nest exceptions. > > > If you need all three of these UPDATEs to either all succeed or fail, > > but the whole transaction should continue, you cannot do that in > > PL/pgSQL. > > Try the following script. By commenting out the second INSERT you can > change whether you get one or no rows inserted into t1. The > BEGIN...EXCEPTION...END block has a savepoint set at the "BEGIN". > > > BEGIN; > > CREATE TABLE t1 (a integer); > > CREATE OR REPLACE FUNCTION test_exception() > RETURNS boolean AS $$ > DECLARE > n integer; > BEGIN > INSERT INTO t1 (a) VALUES (1); > -- INSERT INTO t1 (a) VALUES ('b'); > BEGIN > INSERT INTO t1 (a) VALUES (2); > INSERT INTO t1 (a) VALUES ('c'); > EXCEPTION > WHEN OTHERS THEN > SELECT INTO n count(*) FROM t1; > RAISE NOTICE 'n2 = %', n; > RETURN false; > END; > RETURN true; > EXCEPTION > WHEN OTHERS THEN > SELECT INTO n count(*) FROM t1; > RAISE NOTICE 'n1 = %', n; > RETURN false; > END; > $$ LANGUAGE plpgsql; > > SELECT test_exception(); > > SELECT count(*) FROM t1; > > ROLLBACK; -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Transactions within a function body
Il Thursday 02 October 2008 16:15:10 Alvaro Herrera ha scritto: > Reg Me Please escribió: > > Well, if it is a limitation, and having it would lead to a "better > > product", why not making it a feature for the next still-open release? > > Because no one is working on implementing it? > > > In my opinion that's more than a limitation, it's a missing feature. > > In your code you often need to create savepoints to delay the decision > > for the commitment. > > A Pl/PgSQL function is just a bunch of code you want to move into the DB. > > So the need for savepoints seems to me to be still there. > > You can nest blocks arbitrarily, giving you the chance to selectively > rollback pieces of the function. It's only a bit more awkward. You mean I can issue a ROLLBACK command within a BEGIN...END; block to roll it back? -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Transactions within a function body
Il Thursday 02 October 2008 17:10:23 Alvaro Herrera ha scritto: > Reg Me Please escribió: > > Il Thursday 02 October 2008 16:15:10 Alvaro Herrera ha scritto: > > > You can nest blocks arbitrarily, giving you the chance to selectively > > > rollback pieces of the function. It's only a bit more awkward. > > > > You mean I can issue a ROLLBACK command within a BEGIN...END; block to > > roll it back? > > No -- I mean you can use BEGIN/EXCEPTION/END blocks as you like, nesting > them or putting one after another. Complementing this with RAISE > EXCEPTION you can cause savepoints to roll back at will. Now I understand. (Sorry, me dumb!) Looks quirky, but I trust it's working. I'll give that a try. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] NATURAL JOINs
Hi all.

I'm running v8.3.3.

First point: is there a way to know how a NATURAL JOIN is actually done?
That is, which fields are actually used for the join? The EXPLAIN directive
doesn't show anything useful.

Second point: I have this:

CREATE TABLE tab_dictionary ( item text primary key );

CREATE TABLE tab_atable (
  item1 TEXT NOT NULL REFERENCES tab_dictionary( item ),
  item2 TEXT NOT NULL REFERENCES tab_dictionary( item ),
  trans NUMERIC NOT NULL
);

INSERT INTO tab_dictionary VALUES ( 'meters' ), ( 'feet' );
INSERT INTO tab_atable VALUES ( 'meters','feet',3.28084 );

SELECT * FROM tab_atable NATURAL JOIN tab_dictionary;

 item1  | item2 |  trans  |  item
--------+-------+---------+--------
 meters | feet  | 3.28084 | meters
 meters | feet  | 3.28084 | feet
(2 rows)

Very likely I'm wrong, but this output looks wrong to me (and should be
wrong also according to the documentation).

Is there any good explanation for this behaviour?

Thanks.
Re: [GENERAL] NATURAL JOINs
On Wednesday 15 October 2008 17:55:03 Tom Lane wrote:
> "Richard Broersma" <[EMAIL PROTECTED]> writes:
> > For this reason, clients passing natural joins to the server can have
> > dangerous result sets returned with no warning.
>
> Yeah. A lot of people consider that NATURAL JOIN is simply a bad idea
> and shouldn't be used ever --- it's too easy to shoot yourself in the
> foot with a careless column addition or rename. Explicitly spelling out
> the join columns with ON or USING is a lot less prone to silent breakage
> after a schema change.
>
>   regards, tom lane

Both are perfectly right, indeed.

Nonetheless, in my opinion a NATURAL JOIN exploiting the FKs instead of the
column names would be much more helpful and much less error prone!
As far as I know there is no way to exploit FKs in JOINs, right?

THANKS
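For the record, the surprising two-row result above has a simple cause: the
two tables share no column name at all (item vs. item1/item2), and a
NATURAL JOIN with no common columns degenerates into a cross join. The
FK-following query has to be spelled out, e.g.:

SELECT a.trans, d1.item AS item1, d2.item AS item2
  FROM tab_atable a
  JOIN tab_dictionary d1 ON d1.item = a.item1
  JOIN tab_dictionary d2 ON d2.item = a.item2;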
[GENERAL] CREATE DOMAIN with REFERENCES
Hello all.

I have a number of tables that are actually dictionaries. This means that,
in order to keep referential integrity, in a number of other tables I have
FK definitions like these:

CREATE TABLE dict1 ( d1 TEXT PRIMARY KEY, ... );

CREATE TABLE user1 ( d1 TEXT NOT NULL REFERENCES dict1 DEFAULT 'value1' );

I am trying to simplify this by defining DOMAINs with the proper "NOT NULL"
and "DEFAULT" predicates. But it seems there's no way to CREATE a DOMAIN
with a "REFERENCES ..." predicate, possibly in the optional CHECK.

At the moment I have solved the issue with a function that CHECKs whether
the VALUE is already in the proper dictionary. But this way I lose the
explicit referential integrity.

Is there any better advice for this need?

Thanks.
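A sketch of the function-based CHECK described above (the names are mine;
note that the check only fires when a value is written, so rows removed
from dict1 afterwards go unnoticed, which is exactly the lost referential
integrity):

CREATE OR REPLACE FUNCTION is_in_dict1( text ) RETURNS boolean AS
$$ SELECT EXISTS ( SELECT 1 FROM dict1 WHERE d1 = $1 ) $$
LANGUAGE sql STABLE;

CREATE DOMAIN dom_dict1 AS TEXT
  NOT NULL
  DEFAULT 'value1'
  CHECK ( is_in_dict1( VALUE ) );

CREATE TABLE user1 ( d1 dom_dict1 );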
[GENERAL] [PGSQL 8.3.5] How to handle FKs with partitioning?
Hi all.

I need to implement something very similar to temporal table partitioning as described in the documentation at chapter 5.9.

My issues come from the fact that I have other tables that reference (via FKs) the table(s) to be partitioned. Those references are enforced by means of DRI statements (REFERENCES ...).

As the table containing the referenced data will not be a single table, will I be forced to drop DRI? The referencing table(s) don't need to be partitioned, though, and also have other FKs to other tables.

Is there any other solution, as I would like to keep DRI?

Thanks in advance.
Re: [GENERAL] [PGSQL 8.3.5] How to handle FKs with partitioning?
My idea is that DRI will help during the JOINs I'll need to make later. Creating a trigger to check the consistency would not help for that case, unless my idea is wrong. In that case I'd follow the great Merlin's hint.

So the question is now: does DRI impact JOIN efficiency? What'd be the gain?

The table in question should easily reach 20+M rows, possibly up to 50+M a year. The partitioning would ensure about 2M rows per partition, and the trigger should work according to this (dynamic) schema. So, along with the loss of efficiency due to the trigger, I would also get some other loss because of the external table needed for the partitioning.

On Friday December 19 2008 17:15:56 Merlin Moncure wrote:
> On Fri, Dec 19, 2008 at 6:04 AM, Reg Me Please wrote:
> > [...]
> > Is there any other solution, as I would like to keep DRI?
>
> Write a trigger.
>
> merlin
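Along the lines of Merlin's hint, a minimal sketch of the trigger-based check (all names are hypothetical): the referencing table drops its REFERENCES clause, and a BEFORE trigger verifies that the key exists somewhere in the partitioned parent (a scan of the parent also visits its child partitions). This does not replicate full DRI -- deletes from the partitions go unnoticed unless a matching trigger is added on that side too.

CREATE OR REPLACE FUNCTION fk_check_partitioned_parent() RETURNS trigger AS $$
BEGIN
  -- the parent table scan includes all child partitions
  PERFORM 1 FROM partitioned_parent p WHERE p.parent_id = NEW.parent_id;
  IF NOT FOUND THEN
    RAISE EXCEPTION 'parent_id % not found in partitioned_parent', NEW.parent_id;
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_fk_check
  BEFORE INSERT OR UPDATE ON referencing_table
  FOR EACH ROW EXECUTE PROCEDURE fk_check_partitioned_parent();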
[GENERAL] [PGSQL 8.3.5] Use of a partial indexes
Hi all.

I have an 8M+ rows table over which I run a query with an AND-only WHERE condition. The table has been periodically VACUUMed and ANALYZEd.

In the attempt of speeding that up I added a partial index in order to limit the size of the index. Of course that index is modeled after a "slowly variable" part of the WHERE condition I have in my query.

And timings actually dropped dramatically (I do know the problems with caching etc. and I paid attention to that) to about 1/20th (from about 800ms to an average of 40ms, actually). So I turned to EXPLAIN to see how the partial index was used. Incredibly, the partial index was not used! So I tried to drop the new index and, incredibly, the performance was still very good.

While I can understand that the planner can decide not to use a partial index (despite in my mind it'd make a lot of sense), I'd like to understand how it comes that I get benefits from an enhancement that is not used! What would be the explanation (if any) for this behavior?

Thanks.
Re: [GENERAL] [PGSQL 8.3.5] Use of a partial indexes
Hi.

The WHERE condition can be divided into a "slowly changing" part and a "random" one. The random part is the one I change at every query to avoid result caching. The planner seems to be smart enough to "learn" while working, but then I should see a change in the EXPLAIN output, which never happens. I also tried to restart PostgreSQL in order to force a cache flush, but again, once the good performance is in, it doesn't go away!

Disk cache could explain the thing, but then why did I get the high performance right after the partial index was created? By chance?

On Monday December 29 2008 15:24:33 Gauthier, Dave wrote:
> Not sure if this applies to your case, but I've seen cases where an initial
> run of a particular query is a lot slower than subsequent runs even though
> no changes were made between the two. I suspect that the initial run did
> all the disk IO needed to get the data (slow), and that the subsequent runs
> were just reading the data out of memory (fast) as it was left over in the
> PG data buffer cache, the server's caches, the disk server's cache, etc.
>
> Try the same query only with different search criteria. IOW, force it to
> go back out to disk. You may find that the slow performance returns.
>
> Good Luck !
>
> -dave
>
> [...]
Re: [GENERAL] [PGSQL 8.3.5] Use of a partial indexes
Only one question remains in my mind: why is the planner not using the partial index?

The partial index covers 2 of the 3 predicates used in the WHERE condition. Actually there is a boolean flag (to exclude "disabled" rows), a timestamp (for row age) and an int8 (an FK to another table). The first two are in the partial index in order to exclude "disabled" and older rows. The int8 is the "random" key I mentioned earlier.

So the WHERE condition reads like:

flag AND tstz >= '2008-01-01'::timestamptz AND thekey = 42

I can see in the EXPLAIN that there is no mention of the partial index. Please keep in mind that the table has 8+M rows, few of which are flagged, about 70% don't match the age limit and a few dozen match the key. In my opinion the partial index should help a lot.

On Monday December 29 2008 16:36:49 Scott Ribe wrote:
> > The WHERE condition can be divided into a "slowly changing" part and
> > a "random" one. The random part is the one I change at every query to
> > avoid result caching.
>
> The first query will leave in cache at least many of the index pages needed
> by the second query, and likely actual rows needed by the second query.
>
> > Disk cache could explain the thing, but then why did I get the high
> > performance right after the partial index was created? By chance?
>
> Creating the partial index reads rows, and the pages are left in the disk
> cache. The only way to do proper comparisons is to reboot between trials in
> order to compare queries with cold caches, or use the latter of multiple
> runs in order to compare queries with hot caches.
Re: [GENERAL] [PGSQL 8.3.5] Use of a partial indexes
Here it comes:

-- DDL

CREATE TABLE gm_t_movimenti_magazzini
(
  gm_movi_unic INT8 NOT NULL REFERENCES gm_t_movimenti,
  gm_moti_unic TEXT NOT NULL REFERENCES gm_t_movimenti_tipi,
  ap_prod_unic INT8 NOT NULL REFERENCES ap_t_prodotti,
  gm_maga_unic TEXT NOT NULL REFERENCES gm_t_magazzini,
  gm_moma_qnta NUMERIC NOT NULL,
  gm_moma_flag BOOL NOT NULL,
  gm_moma_vali TIMESTAMPTZ NOT NULL
);

CREATE INDEX i_gm_t_movimenti_magazzini_gm_movi_unic
  ON gm_t_movimenti_magazzini( gm_movi_unic );

CREATE INDEX i_gm_t_movimenti_magazzini_gm_moti_unic
  ON gm_t_movimenti_magazzini( gm_moti_unic );

CREATE INDEX i_gm_t_movimenti_magazzini_ap_prod_unic
  ON gm_t_movimenti_magazzini( ap_prod_unic );

CREATE INDEX i_gm_t_movimenti_magazzini_gm_maga_unic
  ON gm_t_movimenti_magazzini( gm_maga_unic );

CREATE INDEX i_gm_t_movimenti_magazzini_gm_moma_flag
  ON gm_t_movimenti_magazzini( gm_moma_flag );

CREATE INDEX i_gm_t_movimenti_magazzini_gm_moma_vali
  ON gm_t_movimenti_magazzini( gm_moma_vali );

CREATE INDEX i_gm_t_movimenti_magazzini_partial
  ON gm_t_movimenti_magazzini( (gm_moma_flag AND gm_moma_vali>='2008-01-01'::TIMESTAMPTZ) )
  WHERE gm_moma_flag AND gm_moma_vali>='2008-01-01'::TIMESTAMPTZ;

-- DML

SELECT SUM( gm_moma_qnta )
FROM gm_t_movimenti_magazzini
WHERE
  gm_moma_flag AND
  gm_moma_vali >= '2008-01-01'::TIMESTAMPTZ AND
  ap_prod_unic = <value>
;

where <value> changes from query to query.

On Tuesday December 30 2008 15:12:33 Scott Marlowe wrote:
> On Tue, Dec 30, 2008 at 2:02 AM, Reg Me Please wrote:
> > [...]
> > I can see in the EXPLAIN that there is no mention of the partial index.
>
> Can you show us the DDL for the index creation and the select query as
> well?
Re: [GENERAL] [PGSQL 8.3.5] Use of a partial indexes
Well, it should look like the one you suggest. But maybe I've missed some important concept in partial index theory! :-)

As soon as I read your posting I understood the problem. I was thinking of creating a big cut on an index containing the flag and the timestamp, while the concept is to cut the other index using the flag+timestamp part of the condition.

Thanks a lot for the "satori".

On Tuesday December 30 2008 17:20:05 Picavet Vincent wrote:
> Hello,
> Why do you index a boolean of your condition ?
> Isn't it better to have the partial index like the following ?
>
> CREATE INDEX i_gm_t_movimenti_magazzini_partial
>   ON gm_t_movimenti_magazzini( ap_prod_unic )
>   WHERE gm_moma_flag AND gm_moma_vali>='2008-01-01'::TIMESTAMPTZ;
>
> Hope this helps,
> Vincent
>
> > [...]
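Once the index is rebuilt as Vincent suggests, a quick check that the planner actually picks it up (a sketch; 42 stands in for a real product key):

EXPLAIN ANALYZE
SELECT SUM( gm_moma_qnta )
FROM gm_t_movimenti_magazzini
WHERE gm_moma_flag
  AND gm_moma_vali >= '2008-01-01'::TIMESTAMPTZ
  AND ap_prod_unic = 42;
-- the plan should now mention i_gm_t_movimenti_magazzini_partial
-- (typically as an Index Scan or Bitmap Index Scan) instead of a
-- sequential scan over all 8M+ rows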
Re: [GENERAL] auto insert data every one minute
You need to write a process that will do it. The simplest tool is cron, if you are lucky enough to be on Unix.

On Saturday January 3 2009 11:46:47 searchelite wrote:
> Daniel Verite wrote:
> > searchelite wrote:
> >
> > How about using pg_sleep ?
> >
> > INSERT ...;
> > COMMIT;
> > SELECT pg_sleep(60);
> >
> > INSERT ...;
> > COMMIT;
> > SELECT pg_sleep(60);
>
> i can use pg_sleep..but i have thousands of rows to be inserted..is there
> any better way than using pg_sleep?
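A minimal sketch of the cron approach (database name, paths and table are hypothetical): put the INSERT in a plain SQL script and let cron run it once a minute.

-- /path/to/insert_row.sql: one batch of work per run
INSERT INTO my_table( payload, created_at ) VALUES ( 'sample', now() );

# hypothetical crontab entry (crontab -e): every minute, appending output to a log
* * * * * psql -d mydb -f /path/to/insert_row.sql >> /tmp/insert_row.log 2>&1

For thousands of pre-staged rows, the script could instead pull the next batch from a queue table, so that each cron run inserts only the rows due at that minute.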
Re: [GENERAL] Error: column "host" does not exist
Is there any name clash with a function argument?

On Thursday 08 January 2009 08:30:07 Mayuresh Nirhali wrote:
> Hello,
>
> I am working with 8.1.4 pgsql as my database backend. I have a function
> written in plpgsql language, that queries a particular table as below:
>
> select host from table_host where ip_address = ip_array[i] and port =
> port_array[i];
>
> The query is running in a for loop. I see ip_array and port_array
> getting populated with correct values during the execution.
> The error I see is:
>
> ERROR: column "host" does not exist
> CONTEXT: SQL statement "SELECT host"
> PL/pgSQL function "sp_get_ip_data" line 286 at raise
>
> Through psql, I can select from table_host fine.
> Column "host" is a primary key here.
>
> I don't understand why I see the "column does not exist" error; any idea?
>
> cheers
> Mayuresh
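A minimal sketch of the kind of clash being asked about (all names here are hypothetical). The CONTEXT line says the failure happens "at raise": a RAISE parameter that is not a declared variable or function argument gets evaluated by PL/pgSQL as the standalone SQL expression "SELECT host", which fails exactly this way because no table is in scope at that point.

CREATE OR REPLACE FUNCTION demo_raise_clash() RETURNS void AS $$
DECLARE
  hostname text;  -- note: the variable is "hostname", not "host"
BEGIN
  -- fine: here "host" is a column of table_host
  SELECT host INTO hostname FROM table_host LIMIT 1;
  -- fails: "host" is not a variable, so PL/pgSQL evaluates "SELECT host"
  RAISE NOTICE 'host is %', host;
END;
$$ LANGUAGE plpgsql;

Changing the RAISE parameter to the declared variable (hostname) makes the error disappear, which is why checking for a variable/argument name mismatch is the first thing to try.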