[GENERAL] type cast in index
Hello, I have read several messages on the mailing list where different people use this format to create a functional index on a column using a type cast:

    CREATE INDEX fecha_creacion_ordtrab ON orden_trabajo_cabecera
        USING btree ((time_stamp_creacion::date));

but on my PostgreSQL 8.3 installation I get this error:

    ERROR: functions in index expression must be marked IMMUTABLE

I have tried the usual CAST syntax too, with the same error. The column in the table is:

    Column              | Type                     | Modifiers
    time_stamp_creacion | timestamp with time zone | default now()

What can the problem be? Thanks. Regards, Miguel Angel.
Re: [GENERAL] type cast in index
Yes, it seems you are right, Chris. I have been using 'timestamp with time zone' without needing it, because I had read on the mailing list that it was good practice in case you one day use the database across more time zones, but I only ever use the app in a single time zone, so I will convert the column datatype to a plain timestamp wherever I need an index on a cast to date. Thanks.

Chris Spotts wrote:
> CREATE INDEX fecha_creacion_ordtrab ON orden_trabajo_cabecera
>     USING btree ((time_stamp_creacion::date));
> but in my postgresql 8.3 version i get this error:
> ERROR: functions in index expression must be marked IMMUTABLE
If your time_stamp_creacion is a timestamp with time zone I believe that the cast is not immutable.
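For the record, the cast fails because converting timestamptz to date depends on the session's TimeZone setting, so it is only STABLE. A minimal sketch of a workaround that keeps the timestamptz column by pinning the index to one fixed zone (the zone name here is just an example):

    -- timestamptz AT TIME ZONE <zone> yields a plain timestamp, and
    -- timestamp::date is immutable, so this index expression is allowed:
    CREATE INDEX fecha_creacion_ordtrab
        ON orden_trabajo_cabecera
        USING btree (((time_stamp_creacion AT TIME ZONE 'Europe/Madrid')::date));

    -- queries must repeat the exact same expression to use the index:
    SELECT *
      FROM orden_trabajo_cabecera
     WHERE (time_stamp_creacion AT TIME ZONE 'Europe/Madrid')::date = '2011-04-01';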
[GENERAL] 10 missing features
Hi all, I just want to link this blog post: http://blog.kimiensoftware.com/2011/04/top-10-missing-postgresql-features . I think he may have some good points. Miguel Angel.
[GENERAL] question about query
Hi, I am trying to obtain from the database what we have been doing in an Excel sheet for some time. I have some tables where I store the documents associated with our customers (invoices, payments and others). What I need to get from these tables is a movement log showing what payment is still pending from the customer after each row, for example:

    movement        | qty    | pending
    invoice N-111   | 1000.0 | 1000.0
    payment 1 N-111 |  200.0 |  800.0
    payment 2 N-111 |  150.0 |  650.0
    invoice N-112   |  350.0 | 1000.0

How could I calculate this pending column, which does not exist in the database and needs to be based on the previous result row? I would like to do it in raw SQL, but having to use a procedural language would not be a problem. Thanks! Regards, Miguel Angel.
Re: [GENERAL] question about query
Sorry, I had done some more searching for a way to do something like this, but I should have posted the tables anyway. I have a few more document tables, but they all have a structure like this:

invoices:

    CREATE TABLE factura
    (
      id_factura serial NOT NULL,       -- invoice id
      id_cliente integer NOT NULL,      -- customer relation
      fecha date NOT NULL,              -- date of invoice
      importe numeric(10,2) NOT NULL,   -- total invoice qty
      time_stamp_creacion timestamp without time zone NOT NULL DEFAULT now(),
      CONSTRAINT factura_cabecera_pkey PRIMARY KEY (id_factura)
    )
    WITH (
      OIDS=FALSE
    );

payments:

    CREATE TABLE factura_fecha_cobro
    (
      id_factura integer NOT NULL,      -- invoice id
      id_cobro integer NOT NULL,        -- number of payment for this invoice
      fecha date NOT NULL,              -- date of payment
      importe numeric(10,2) NOT NULL,   -- qty paid
      time_stamp_creacion timestamp without time zone NOT NULL DEFAULT now(),
      CONSTRAINT factura_fecha_cobro_pkey PRIMARY KEY (id_factura, id_cobro),
      CONSTRAINT factura_fecha_cobro_id_factura_fkey FOREIGN KEY (id_factura)
          REFERENCES factura (id_factura) MATCH SIMPLE
          ON UPDATE NO ACTION ON DELETE CASCADE
    )
    WITH (
      OIDS=FALSE
    );

Regards, Miguel Angel.

On 30/06/2011 15:11, Craig Ringer wrote:
> On 30/06/2011 9:00 PM, Linos wrote:
>> How could I calculate this pending column, which does not exist in the database and needs to be based on the previous result row? I would like to do it in raw SQL, but having to use a procedural language would not be a problem.
> Without knowing how you've structured your data in the database, that's pretty much impossible to answer. What are your table definitions? What does your raw data currently look like? Show some example rows.
> -- Craig Ringer
Re: [GENERAL] question about query
On 30/06/11 15:44, David Johnston wrote:
> [original question trimmed]
> I would create an "Accounts Receivable" (A/R) relation (a view probably, but you could create a table as well) that is basically a UNION between the invoice and payment tables. This will get you the "qty" column (but make sure you use positive and negative numbers). I would then create a table-returning function that will calculate and output the running total "pending". This calculation may possibly be done using a window function, but if not you can query the A/R relation and step over each result record in order to calculate the running total.
> David J.

I have created the union to get all the records (giving payments a negative sign), but what I don't know how to do is use a window function to create the column with the running total. Any short example of the syntax, please? Miguel Angel.
Re: [GENERAL] question about query
On 30/06/11 15:44, David Johnston wrote:
> [...]
> I would then create a table-returning function that will calculate and output the running total "pending". This calculation may possibly be done using a window function [...]
> David J.

Thanks for pointing me to this, David. With the idea and the help from the article at http://www.postgresonline.com/journal/index.php?/archives/119-Running-totals-and-sums-using-PostgreSQL-8.4-Windowing-function.html I think I've got it. Miguel Angel.
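For anyone landing on this thread, a minimal sketch of the window-function running total over the tables posted earlier; the view name "movimientos" and the concepto labels are illustrative, and window functions need PostgreSQL 8.4 or later (the article above is about 8.4):

    CREATE VIEW movimientos AS
        SELECT id_cliente, fecha,
               'invoice ' || id_factura AS concepto,
               importe
          FROM factura
        UNION ALL
        SELECT f.id_cliente, c.fecha,
               'payment ' || c.id_cobro || ' ' || c.id_factura,
               -c.importe               -- payments negated
          FROM factura_fecha_cobro c
          JOIN factura f USING (id_factura);

    SELECT concepto,
           importe AS qty,
           SUM(importe) OVER (PARTITION BY id_cliente
                              ORDER BY fecha, concepto) AS pending
      FROM movimientos
     ORDER BY id_cliente, fecha, concepto;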
[GENERAL] integer column to serial "type"
Hello, is there some way to get an old integer primary key column to show the "type" serial in the table description?

I had a table like this:

    CREATE TABLE test
    (
      test_id integer NOT NULL,
      CONSTRAINT test_pkey PRIMARY KEY (test_id)
    );

that I would like to see as:

    CREATE TABLE test
    (
      test_id serial NOT NULL,
      CONSTRAINT test_pkey PRIMARY KEY (test_id)
    );

I have tried with:

    CREATE SEQUENCE test_test_id_seq;
    ALTER TABLE test ALTER COLUMN test_id SET DEFAULT nextval('test_test_id_seq'::regclass);
    ALTER SEQUENCE test_test_id_seq OWNED BY test.test_id;

and now I have this table:

    CREATE TABLE test
    (
      test_id integer NOT NULL DEFAULT nextval('test_test_id_seq'::regclass),
      CONSTRAINT test_pkey PRIMARY KEY (test_id)
    );

I know this works the same as serial, but is there some way to get this table to show the test_id column as "serial" type without having to recreate the table and restore the original rows? Regards, Miguel Angel.
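One step worth adding when retrofitting a sequence like this (a sketch, not part of the original post): sync the sequence with the existing data so the next nextval() does not collide with rows already in the table:

    SELECT setval('test_test_id_seq',
                  COALESCE((SELECT max(test_id) FROM test), 0) + 1,
                  false);  -- false: the next nextval() returns exactly this value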
Re: [GENERAL] integer column to serial "type"
On 25/10/12 17:39, Tom Lane wrote:
> Linos writes:
>> Is there some way to get an old integer primary key column to show the "type" serial in the table description?
>
> No, because there is no such type. "serial" is just a convenient macro that you can use in CREATE TABLE; it's not a real type.
>
> regards, tom lane

Yeah, I knew it's not a real type and suspected the outcome; I only wanted to be sure. I find it much more pleasant to see:

    test_id serial NOT NULL,

than

    test_id integer NOT NULL DEFAULT nextval('test_test_id_seq'::regclass),

but I can live with it, thanks! Regards, Miguel Angel.
Re: [GENERAL] integer column to serial "type"
On 25/10/12 19:45, Raymond O'Donnell wrote:
> On 25/10/2012 16:32, Linos wrote:
>> Is there some way to get an old integer primary key column to show the "type" serial in the table description?
>> [...]
>
> PgAdmin manages it somehow, because it's able to show columns as "serial"... maybe it infers it from the ownership dependency between the sequence and the column.
>
> Ray.

Yeah, I was referring to PgAdmin. I hadn't noticed that columns shown as serial in PgAdmin are shown as integer with a nextval(sequence) default by \d in psql. I will try to ask on the PgAdmin mailing list how they detect a "serial" column, because I have tried to replicate the exact setup (ownership included) of a serial column and it doesn't work. Regards, Miguel Angel.
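For reference, the usual way tools detect a serial-style column is the automatic ('a') dependency that ALTER SEQUENCE ... OWNED BY records in pg_depend. A sketch of such a check, using the "test" table from this thread:

    -- lists columns of table "test" that own a sequence, i.e. look serial-like
    SELECT a.attname AS column_name, s.relname AS sequence_name
      FROM pg_depend d
      JOIN pg_class s ON s.oid = d.objid AND s.relkind = 'S'
      JOIN pg_attribute a ON a.attrelid = d.refobjid
                         AND a.attnum = d.refobjsubid
     WHERE d.refobjid = 'test'::regclass
       AND d.classid = 'pg_class'::regclass
       AND d.refclassid = 'pg_class'::regclass
       AND d.deptype = 'a';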
[GENERAL] savepoint problems
Hello, I have migrated from MaxDB to PostgreSQL recently and I am having a speed problem with large transactions over slow links, because of PostgreSQL's rollback-on-error behaviour. I create data in some tables with triggers on other tables, and I do large inserts of the data created in these tables into other PostgreSQL servers (for replication purposes); for this example maybe we can say 2 rows. I want to do this in a transaction so I can roll back on certain errors, but I use a fallback: if a duplicate is found, I re-send the last insert's data as an UPDATE to the existing row. So I have to set a savepoint and release it after each successful insert, and my traffic flow looks like this:

    client                 server
    begin              -->
                       <-- ok
    savepoint          -->
                       <-- ok
    insert             -->
                       <-- ok
    release savepoint  -->
                       <-- ok
    savepoint          -->
                       <-- ok
    insert             -->
                       <-- error: duplicate key
    update             -->
                       <-- ok
    release savepoint  -->
                       <-- ok
    ... 2 rows later ...
    commit             -->
                       <-- ok

Obviously over a slow link this is slow as hell. I posted this same email in Spanish on pgsql-es-ayuda, where Alvaro Herrera replied with some solutions (thanks Alvaro for your great support on the Spanish mailing list!), mainly two:

1- Create a function that uses EXCEPTION to save data traffic, like the upsert function in example 38-1 at http://www.postgresql.org/docs/8.3/static/plpgsql-control-structures.html , but this still has the savepoint overhead.

2- Create a function that does a SELECT locking the table before deciding whether to insert or update.

Well, I would like to know if anyone can help with any other idea or notes on this problem. Another question I have is how I could create a function without being sure of the number of columns to insert/update. Thanks in advance. Best Regards, Miguel Angel.
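For readers following along, this is the upsert pattern from the 8.3 docs example linked above ("db" and its columns are the documentation's placeholder names). The EXCEPTION block still costs an implicit savepoint, but it happens server-side, so the slow link carries only one round trip per row:

    CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

    CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
    $$
    BEGIN
        LOOP
            -- first try to update the key
            UPDATE db SET b = data WHERE a = key;
            IF found THEN
                RETURN;
            END IF;
            -- not there, so try to insert the key; if someone else inserts
            -- the same key concurrently, we could get a unique-key failure
            BEGIN
                INSERT INTO db(a,b) VALUES (key, data);
                RETURN;
            EXCEPTION WHEN unique_violation THEN
                -- do nothing, and loop to try the UPDATE again
            END;
        END LOOP;
    END;
    $$
    LANGUAGE plpgsql;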
Re: [GENERAL] savepoint problems
David Wilson wrote:
> On Fri, Aug 1, 2008 at 12:30 PM, Linos <[EMAIL PROTECTED]> wrote:
>> Well, I would like to know if anyone can help with any other idea or notes on this problem. Another question I have is how I could create a function without being sure of the number of columns to insert/update. Thanks in advance.
>
> You could do:
>
>     begin;
>     create temporary table tmp (...);
>     [insert (or better yet, COPY) into tmp table]
>     [delete from real table where exists in temporary table];
>     insert into real_table select * from tmp;
>     drop table tmp;
>     commit;
>
> Your client <--> server communication should be extremely small.

I think this is probably the better solution if I can get JDBC to use the COPY command, but I still don't know how to make a function with a variable column number; maybe I can simply list all the columns and let the missing ones insert/update as NULL. Regards, Miguel Angel.
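A concrete sketch of that batch pattern, assuming a target table named real_table with primary key id (both names are placeholders):

    BEGIN;
    -- same structure as the target table; dropped automatically at commit
    CREATE TEMPORARY TABLE tmp (LIKE real_table) ON COMMIT DROP;

    -- one cheap bulk load for the whole batch
    COPY tmp FROM STDIN;

    -- turn would-be duplicates into updates by replacing matching rows
    DELETE FROM real_table r
     WHERE EXISTS (SELECT 1 FROM tmp t WHERE t.id = r.id);
    INSERT INTO real_table SELECT * FROM tmp;
    COMMIT;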
Re: [GENERAL] savepoint problems
Tom Lane wrote:
> Martijn van Oosterhout <[EMAIL PROTECTED]> writes:
>> On Fri, Aug 01, 2008 at 06:30:36PM +0200, Linos wrote:
>>> i have migrated from Maxdb to Postgresql recently and i am having a speed problem in large transactions over slow links because of autorollback
>> If the goal is to reduce latency costs, the best way could be: [ move it to the server side ]
>
> Or move the logic into a server-side function, if you prefer to stick with your existing procedural approach.
>
> regards, tom lane

When you say "move the logic into a server-side function", do you mean send the data with a COPY command (or many inserts) to a temporary table and load it from there with server-side functions, as David and Martijn suggested, or are you suggesting some other way to do it? Could you elaborate, please? Thanks. Regards, Miguel Angel.
Re: [GENERAL] syncing with a MySQL DB
Brandon Metcalf wrote:
> [earlier exchange trimmed: Brandon needs to keep a PostgreSQL table and a MySQL table synchronized, plans to use triggers plus PL/Perl on the pgsql side, and was pointed at dbi-link]
>
> OK. I'll dig in and can probably figure everything out. I'll contact David if I can't. Thanks.

Maybe you could also use a multi-master replication solution that supports MySQL and PostgreSQL, like SymmetricDS: http://symmetricds.sourceforge.net . It is developed in Java and can work as a library or a daemon. I am using it now between PostgreSQL machines, but it supports replicating heterogeneous databases (a common usage in the forums is Oracle in a central office with many MySQL clients). It is trigger-based replication, so I think it could be a good fit for your needs. Regards, Miguel Angel.
Re: [GENERAL] Large Objects and Replication question
SymmetricDS is a replication solution that handles large objects; it is asynchronous and multi-master. I have been using it between 30 separate PostgreSQL servers connected by a slow link, and so far it has been working without problems. I think this project should be on the wiki. http://symmetricds.codehaus.org/ Regards, Miguel Angel.

On 02/12/09 15:23, Howard Cole wrote:
> Does anyone know of a replication solution that can handle large objects? Preferably on a per-database rather than per-cluster basis. Incidentally - out of interest - why doesn't Slony handle large objects? Thanks.
> Howard
> www.selestial.com
[GENERAL] function to return rows as columns?
Hello, I have a query that returns a result set like this:

    item | size | stock
    123  | XL   | 10
    123  | XXL  | 5
    123  | XS   | 3

and I would like to get the results like this:

    item | XL | XXL | XS
    123  | 10 | 5   | 3

I have been thinking about how to do it with a plpgsql function, but the number of sizes depends on the item being queried, so I cannot create a fixed type and return it. I could build the result as text, concatenating the stock and size of every row and returning the complete line, but then I would have to parse it in the application anyway, so I am looking for a solution that lets me return it as a record. Searching the list, it seems I could declare the record type in the function call itself, running a query first to get the number of sizes so the application knows which columns to specify in the call. But maybe I am missing something important here? Is there a better (or more correct) way to do this? Thanks. Regards, Miguel Angel.
Re: [GENERAL] function to return rows as columns?
Thomas Kellerer wrote:
> Linos, 27.02.2009 11:41:
>> [original question trimmed]
>
> Check out the "crosstab" function in the "tablefunc" module: http://www.postgresql.org/docs/current/static/tablefunc.html

I knew I was missing something, hehehe. Thanks, Thomas.
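A sketch of crosstab for the example above; the source relation name item_stock is invented for illustration, and on 8.3 tablefunc is installed by running the contrib script (share/contrib/tablefunc.sql) rather than CREATE EXTENSION:

    SELECT *
      FROM crosstab(
             $$ SELECT item, size, stock FROM item_stock ORDER BY 1 $$,
             $$ VALUES ('XL'), ('XXL'), ('XS') $$
           ) AS ct(item integer, "XL" integer, "XXL" integer, "XS" integer);

Note the output column list still has to be spelled out in the call, so with ~200 sizes (as mentioned later in this thread) you would typically generate this statement from a first query against the sizes.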
Re: [GENERAL] function to return rows as columns?
A. Kretschmer wrote:
> In response to Linos:
>> [original question trimmed]
>
> Another solution with plain SQL:
>
>     test=*# select * from linos;
>      item | size | stock
>     ------+------+-------
>       123 | XL   |    10
>       123 | XXL  |     5
>       123 | XS   |     3
>     (3 rows)
>
>     test=*# select item,
>         sum(case when size='XL'  then stock else 0 end) as "XL",
>         sum(case when size='XXL' then stock else 0 end) as "XXL",
>         sum(case when size='XS'  then stock else 0 end) as "XS"
>     from linos where item=123 group by item;
>      item | XL | XXL | XS
>     ------+----+-----+----
>       123 | 10 |   5 |  3
>     (1 row)
>
> Andreas

I think this approach has a problem, at least with my data: I have a somewhat large number of different sizes, about 200 or so (although I presented a limited example, I know). Thanks anyway for the alternative way to do it, Andreas. Regards, Miguel Angel.
[GENERAL] strange performance problem
Hello, I have the same table with the same data on my development machine and on a small production server. The table is this:

    Table "modelo_subfamilia"
        Column     |         Type          | Modifiers
    ---------------+-----------------------+-----------
     nombre        | character varying(40) | not null
     subfamilia_id | character(4)          | not null
     id_familia    | character(4)          | not null
     hasta         | character(4)          | not null
     foto          | bytea                 |
     id_seccion    | integer               |
     id_categoria  | integer               |
    Indexes:
     "modelo_subfamilia_pkey" PRIMARY KEY, btree (subfamilia_id)
     "uq_hasta_index_modsubfam" UNIQUE, btree (hasta)
     "nombre_index_modsubfam" btree (nombre)

The column "foto" has a TOAST table. Aside from the fact that the server copy has three triggers, the tables are exactly the same, with the same data too. My development machine runs version 8.3.6 (Linux kernel 2.6.28) and the production server runs 8.3.3 (kernel 2.6.26). The settings in postgresql.conf are nearly the same except for work_mem (24 on the server, 36 on the development machine) and effective_cache_size (1024 server, 1536 development machine); they have the same sysctl settings and limits, and the same mount options for the ext3 filesystem holding the data. My development machine has a single SATA disk (WD VelociRaptor); the server uses Linux software RAID 10 with 4 SATA disks.

I have noticed that a simple query from the application I am developing in Qt is really fast on my machine and takes far too long on the production server. I am logging queries that take more than 500 ms, so I have this in the log:

    2009-02-27 13:51:15 CET 127.0.0.1 LOG: duration: 4231.045 ms statement: SELECT "nombre", "subfamilia_id", "id_familia", "hasta", "foto", "id_seccion", "id_categoria" FROM "modelo_subfamilia"

So I have been testing the same query on my machine and on the server to see the difference.

EXPLAIN ANALYZE:
- development: Seq Scan on modelo_subfamilia (cost=0.00..11.68 rows=368 width=73) (actual time=0.010..0.092 rows=368 loops=1) Total runtime: 0.174 ms
- server: Seq Scan on modelo_subfamilia (cost=0.00..6.10 rows=368 width=69) (actual time=0.008..0.158 rows=368 loops=1) Total runtime: 0.289 ms

psql with \timing:
- development: Time: 72,441 ms
- server: Time: 78,762 ms

But if I run it from Qt or from pgAdmin I get more than 4 seconds on the server and ~100 ms on the development machine. If I try the query without the "foto" column I get 2 ms on development and 30 ms on the server, so the difference is there anyway, but not from the psql command line; it seems to happen only from a graphical front end. With the complete query (foto column included) the postgresql process eats 90% of the CPU for the whole 4 seconds it takes to send me the result, so it does not seem to be CPU usage from the graphics libraries (it happens with both Qt and wxWidgets). How could I debug this problem? Where should I begin to search? Thanks. Regards, Miguel Angel.
Re: [GENERAL] strange performance problem
Richard Huxton wrote:
> Linos wrote:
>> [log line and timings trimmed; see previous message]
>
> OK, so:
> 1. No "foto" - both quick
> 2. psql + "foto" - both slow
> 3. QT + "foto" - slow only on server

Not quite:
1. No "foto" - both quick, but still a noticeable difference between them: 2 ms development, 30 ms server.
2. psql + "foto" - both quick, really; they are about 70 ms, not bad given that "foto" is a bytea with small PNG images.
3. Qt or wxWidgets + "foto" - slow only on the server, yes.

> The bit that puzzles me is why both are slow in #2 and not in #3. First things first: run "VACUUM FULL VERBOSE modelo_subfamilia" on both systems and see how many pages were being taken up. I'm guessing it will be more on the server, but is it a *lot* more? Then run "REINDEX TABLE modelo_subfamilia" to clean up your indexes. If it's still a problem that suggests the two systems are doing something different with the bytea encoding/decoding. Check:
> 1. Connection settings - is one tcp/ip and the other unix sockets?
> 2. Binary/text mode - are you using a binary format for query results on the development machine?
> 3. Are you fetching the entire result-set on the server and only the first row(s) on your dev. machine?
> 4. Encoding/locale differences - can't see how this would matter for bytea, but worth ruling out.

After the VACUUM FULL VERBOSE and REINDEX, still the same problem (I had tried vacuum before).
1- The same on the two machines: tcp/ip with localhost.
2- I run exactly the same code on the two machines, and the same pgadmin3 version too.
3- Always the entire result set.
4- I am using es_ES.UTF8 on the two machines.

What can wxWidgets and Qt be using to access postgresql that psql is not using? libpq? Regards, Miguel Angel.
Re: [GENERAL] strange performance problem
Richard Huxton wrote:
> [earlier exchange trimmed; see previous messages in this thread]
>
> OK - well, the fact that both psql runs are fast means there's nothing too wrong with your setup. It must be something to do with the application libraries. Well, I'm pretty sure that pgadmin will be using libpq at some level, even if there is other code above it. Either:
> 1. One machine (the fast one) is actually using unix sockets and not tcp/ip+localhost like you think.
> 2. The networking setup is different on each.
> 3. Something your code is doing with the bytea data is slower on one machine than another.
> I seem to remember that pgadmin used to be quite slow at displaying large amounts of data. They did some work on that, but it might be that your use-case still suffers from it.
> For #1 try the psql test again, but with "-h localhost" and "-h /tmp" (or whatever directory your unix socket is in - might be /var/run/postgresql or similar too).
> For #2, you can always try timing "psql -h localhost ... > /dev/null" on both machines. If you capture port 5432 with something like "tcpdump -w ip.dump host localhost and port 5432" you can then use wireshark to see exactly why it's slow.
> For #3, I guess you'd need to reduce your code to just fetching the data and time that. You may have already done this of course.
> HTH

Ok, thanks for the trick; now I know where to search. After trying with -h localhost, psql is slow on the server too: from 80,361 ms with \timing over the unix socket to 4259,453 ms using -h localhost. Any ideas what the problem can be here? I am going to do what you suggest and capture and analyze the traffic. After finding this hole I have tried on another Debian server with the same kernel 2.6.26 and I have the same problem (my development machine is Arch Linux with 2.6.28). Regards, Miguel Angel.
Re: [GENERAL] strange performance problem
Linos wrote:
> [earlier exchange trimmed; see previous messages in this thread]

I have been testing with tcpdump but I don't see the problem in the traffic (aside from the fact that there are big jumps in ms between data packets, but I don't know why). I have tested on other Debian machines with the same result. I have upgraded the kernel to 2.6.28 and postgresql to 8.3.6 (the same versions as my Arch Linux development machine), but I still have the same problem:

- query with \timing with "psql -d database": ~110 ms
- query with \timing with "psql -d database -h localhost": ~4400 ms

Using TCP, the postgresql process spikes to all the CPU it can get for the duration of the query. I have attached the tcpdump logs of a Debian machine and the Arch one (maybe someone can see something in them that I cannot). How can I test pure speed on the loopback interface? I have tried iperf, but it seems to be CPU-bound, so the results may be misleading.
Re: [GENERAL] strange performance problem
> [previous message trimmed: tcpdump tests, ~110 ms over the unix socket vs ~4400 ms over -h localhost, tcpdump capture files attached]

The pcap files are here for download; the mail with the attachment does not seem to have made it to the list, so here is a link to download them: http://www.linos.es/pcap_files.tar.bz2 Regards, Miguel Angel.
Re: [GENERAL] strange performance problem (SOLVED)
Linos wrote:
> [earlier exchange trimmed; see previous messages in this thread]

Okay, I have found the problem: in postgresql.conf the parameter "ssl = true" seems to slow down plain TCP connections (not SSL-enabled ones) very, very much. This does not affect my Arch Linux machine, only the Debian ones, so I will contact the Debian package maintainer so they can investigate it. Thanks for the help. Regards, Miguel Angel.
Re: [GENERAL] strange performance problem (SOLVED)
Linos wrote:
> [earlier exchange trimmed; see previous messages in this thread]
>
> Okay, I have found the problem: in postgresql.conf the parameter "ssl = true" seems to slow down plain TCP connections (not SSL-enabled ones) very, very much. This does not affect my Arch Linux machine, only the Debian ones, so I will contact the Debian package maintainer so they can investigate it.

The difference here between Arch and Debian is that Debian ships with an SSL cert (in the pgsql-performance mailing