[GENERAL] Temporary table already exists
Hi there, in my database I'm using several stored functions that take advantage of temporary tables. The application connected to Postgres is a Java web application in a GlassFish application server; it connects through a JDBC connection pool provided by GlassFish with these settings:

Resource type -> javax.sql.ConnectionPoolDataSource
Datasource Classname -> org.postgresql.ds.PGConnectionPoolDataSource
Transaction Isolation -> read-uncommitted

The problem is that during concurrent executions of a function, I receive a "relation already exists" error. The relation that causes the issue is exactly my temporary table.

My question is: why do I get this type of error, and is there a way to avoid this situation?

Thanks in advance.

Mephysto

--
View this message in context: http://postgresql.1045698.n5.nabble.com/Temporary-table-already-exists-tp5789852.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Reindexing and tablespaces
You were right. I just reindexed the DB and saw no real change in disk usage (those tablespaces are on separate hard disk volumes).

Thanks
Re: [GENERAL] Temporary table already exists
mephysto wrote:
> in my database I'm using several stored functions that take advantage of temporary tables.
> The application connected to Postgres is a Java web application in a GlassFish application
> server; it connects through a JDBC connection pool provided by GlassFish with these settings:
>
> Resource type -> javax.sql.ConnectionPoolDataSource
> Datasource Classname -> org.postgresql.ds.PGConnectionPoolDataSource
> Transaction Isolation -> read-uncommitted
>
> The problem is that during concurrent executions of a function, I receive a "relation already
> exists" error. The relation that causes the issue is exactly my temporary table.
>
> My question is: why do I get this type of error, and is there a way to avoid this situation?

You probably have a connection pool that reuses a connection in which you already created the temporary table.

I see two options:
- Explicitly drop the temporary table when you are done.
- Create the table with ON COMMIT DROP and put your work into a transaction.

Yours,
Laurenz Albe
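Both options can be sketched in plain SQL (the table name tmp_work is illustrative, not from the original application):

```sql
-- Option 1: drop the table explicitly when done, so a pooled
-- connection does not still hold it on reuse.
CREATE TEMPORARY TABLE tmp_work (id bigint);
-- ... use tmp_work ...
DROP TABLE tmp_work;

-- Option 2: have PostgreSQL drop it at commit. Wrap the work in an
-- explicit transaction; under autocommit the table would already be
-- dropped at the end of the statement that created it.
BEGIN;
CREATE TEMPORARY TABLE tmp_work (id bigint) ON COMMIT DROP;
-- ... use tmp_work in further statements ...
COMMIT;  -- tmp_work is gone here
```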
Re: [GENERAL] Temporary table already exists
Hi Albe, this is the code of my stored function:

The connection pool reuses connections, of course, but as you can see from my code, the temporary table deck_types is already defined with the ON COMMIT DROP clause, so I think that my work is not in a transaction. Am I right? If so, how can I put my code in a transaction?

Many thanks.

Mephysto
Re: [GENERAL] Grep'ing for a string in all functions in a schema?
On Thu, Jan 30, 2014 at 12:52:35PM -0800, bricklen wrote:
> > Since Postgres does not consider a table as a dependency of a function if
> > that table is referenced in the function (probably a good reason), I often
> > find myself in a position of asking "is this table/sequence/index
> > referenced in any of these N number of functions?"
> >
> > Is there an easy way of essentially grep'ing all of the functions in a
> > given schema for a string?
>
> A method I've used in the past is to create a view of function source which
> can then be searched.

Why not simply:

    SELECT p.oid::regproc
    FROM pg_proc p
    JOIN pg_namespace n ON p.pronamespace = n.oid
    WHERE n.nspname = 'your-schema'
      AND p.prosrc ~ 'searched-string';

depesz
Re: [GENERAL] Temporary table already exists
mephysto wrote:
> Hi Albe, this is the code of my stored function:
> CREATE OR REPLACE FUNCTION :FUNCTION_SCHEMA.get_deck_types
[...]
> BEGIN
[...]
> CREATE LOCAL TEMPORARY TABLE deck_types
> ON COMMIT DROP
> AS
> SELECT stored_functions_v0.get_card_deck_types(t1.id_master_card) AS deck_type_ids
> FROM ccg_schema.deck_composition T0
>     ,ccg_schema.cards_per_user T1
> WHERE id_deck = p_id_deck
>   AND t1.id_owner = l_id_user
>   AND t0.id_card = t1.id_card;
[...]
> END;

> The connection pool reuses connections, of course, but as you can see from my code,
> the temporary table deck_types is already defined with the ON COMMIT DROP clause,
> so I think that my work is not in a transaction. Am I right? If so, how can I put
> my code in a transaction?

Hmm, unless you explicitly use the SQL statements BEGIN (or START TRANSACTION) and COMMIT, PostgreSQL executes each statement in its own transaction.

In that case, the statement that contains the function call would be in its own transaction, and you should be fine.

There are two things I can think of:
- The function is called more than once in one SQL statement.
- You use longer transactions without being aware of it (something in
  your stack does it unbeknownst to you).

You could try to set log_statement to "all" and see what SQL actually gets sent to the database.

You could also include "EXECUTE 'DROP TABLE deck_types';" in your function.

Yours,
Laurenz Albe
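The last suggestion can be sketched like this; using the DROP TABLE IF EXISTS form avoids an error on a connection that has no leftover table (the function body here is a placeholder, not the original query):

```sql
CREATE OR REPLACE FUNCTION get_deck_types(p_id_deck bigint)
RETURNS bigint[] AS $$
BEGIN
    -- Defensive cleanup: a pooled connection may still hold a
    -- deck_types left over from an earlier, longer transaction.
    DROP TABLE IF EXISTS deck_types;

    CREATE LOCAL TEMPORARY TABLE deck_types
    ON COMMIT DROP
    AS SELECT 1::bigint AS deck_type_ids;  -- placeholder for the original query

    -- ... rest of the function as before ...
    RETURN ARRAY[]::bigint[];
END;
$$ LANGUAGE plpgsql;
```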
Re: [GENERAL] Temporary table already exists
I had a similar problem once. The pool is reusing connections and the temporary tables are still there. Now I always create new temporary tables with a unique name, like this:

    tmpTableId = "TMP" + Math.abs(generateUUID().hashCode());
    if (tmpTableId.length() > 15)
        tmpTableId = tmpTableId.substring(tmpTableId.length() - 15, tmpTableId.length());

    conn.setAutoCommit(true);
    tableStmt = conn.createStatement();
    try {
        // create global temporary tables
        tableStmt.executeUpdate("create temporary table TABLE_NAME_" + tmpTableId
                + "( ... ) on commit preserve rows");
    ...

Then you have to add the tmpTableId to every statement in your code, but it should work fine.

Sent: Friday, 31 January 2014, 12:04
From: mephysto
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Temporary table already exists

Hi Albe, this is the code of my stored function:

    CREATE OR REPLACE FUNCTION :FUNCTION_SCHEMA.get_deck_types
    (
        p_id_deck BIGINT
    )
    RETURNS BIGINT[] AS
    $$
    DECLARE
        l_id_user      BIGINT;
        l_cards_number INTEGER;
        l_deck_type    BIGINT;
        l_result       BIGINT[];
    BEGIN
        SELECT INTO STRICT l_id_user id_user
          FROM ccg_schema.decks_per_user
         WHERE id = p_id_deck;

        CREATE LOCAL TEMPORARY TABLE deck_types
        ON COMMIT DROP
        AS
        SELECT stored_functions_v0.get_card_deck_types(t1.id_master_card) AS deck_type_ids
          FROM ccg_schema.deck_composition t0
              ,ccg_schema.cards_per_user t1
         WHERE id_deck = p_id_deck
           AND t1.id_owner = l_id_user
           AND t0.id_card = t1.id_card;

        SELECT INTO l_cards_number COUNT(*) FROM deck_types;

        FOR l_deck_type IN SELECT DISTINCT unnest(deck_type_ids) FROM deck_types LOOP
            IF (l_cards_number = (SELECT COUNT(*)
                                    FROM (SELECT unnest(deck_type_ids) AS id FROM deck_types) t0
                                   WHERE id = l_deck_type)) THEN
                l_result := array_append(l_result, l_deck_type);
            END IF;
        END LOOP;

        RETURN l_result;
    END;
    $$ LANGUAGE PLPGSQL VOLATILE;

The connection pool reuses connections, of course, but as you can see from my code, the temporary table deck_types is already defined with the ON COMMIT DROP clause, so I think that my work is not in a transaction. Am I right? If so, how can I put my code in a transaction?

Many thanks.

Mephysto
Re: [GENERAL] Temporary table already exists
Thank you Felix, but I would like to create the temporary table from the stored procedure, not from application code.

Thanks again.

Meph
Re: [GENERAL] Temporary table already exists
2014-01-31 Albe Laurenz:

> mephysto wrote:
> > Hi Albe, this is the code of my stored function:
> > CREATE OR REPLACE FUNCTION :FUNCTION_SCHEMA.get_deck_types
> [...]
> > BEGIN
> [...]
> > CREATE LOCAL TEMPORARY TABLE deck_types
> > ON COMMIT DROP
> > AS
> > SELECT stored_functions_v0.get_card_deck_types(t1.id_master_card) AS deck_type_ids
> > FROM ccg_schema.deck_composition T0
> >     ,ccg_schema.cards_per_user T1
> > WHERE id_deck = p_id_deck
> >   AND t1.id_owner = l_id_user
> >   AND t0.id_card = t1.id_card;
> [...]
> > END;
>
> > The connection pool reuses connections, of course, but as you can see from my code,
> > the temporary table deck_types is already defined with the ON COMMIT DROP clause,
> > so I think that my work is not in a transaction. Am I right? If so, how can I put
> > my code in a transaction?
>
> Hmm, unless you explicitly use the SQL statements BEGIN (or START TRANSACTION)
> and COMMIT, PostgreSQL executes each statement in its own transaction.
>
> In that case, the statement that contains the function call would be in
> its own transaction, and you should be fine.
>
> There are two things I can think of:
> - The function is called more than once in one SQL statement.
> - You use longer transactions without being aware of it (something in
>   your stack does it unbeknownst to you).
>
> You could try to set log_statement to "all" and see what SQL actually
> gets sent to the database.
>
> You could also include "EXECUTE 'DROP TABLE deck_types';" in your function.

I would recommend issuing DISCARD ALL before returning the connection to the pool anyway, but that is not about the current problem. The OP's problem is why ON COMMIT DROP does not work.

--
// Dmitry.
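DISCARD ALL resets session state, including temporary tables, so a pooler can issue it whenever a connection is returned (the table name below is illustrative):

```sql
-- Session state left behind on a pooled connection...
CREATE TEMPORARY TABLE leftover_tmp (id int);

-- ...is wiped before the connection is handed to the next client:
DISCARD ALL;
-- leftover_tmp no longer exists; prepared statements, session GUCs,
-- and advisory locks held by the session are released as well.
```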
[GENERAL] windows binaries for FDW implementations?
Is there a trove of Windows installers for FDWs? I'd like to try a proof of concept with ODBC_FDW.

In the meantime, I'll try to figure out how to build from source on Windows, but if I can avoid that learning curve for my POC, that would be great.

-Chris

--
I asked the Internet how to train my cat, and the Internet told me to get a dog.
Re: [GENERAL] windows binaries for FDW implementations?
Chris Curvey wrote:
> Is there a trove of Windows installers for FDWs? I'd like to try a proof of concept
> with ODBC_FDW.
>
> In the meantime, I'll try to figure out how to build from source on Windows, but if
> I can avoid that learning curve for my POC, that would be great.

I guess the "easiest" way is to set up a build system for Windows, add the module like a contrib module and build.

Yours,
Laurenz Albe
Re: [GENERAL] Temporary table already exists
Dmitriy Igrishin wrote
> 2014-01-31 Albe Laurenz:
>> mephysto wrote:
>> [...]
>> You could try to set log_statement to "all" and see what SQL actually
>> gets sent to the database.
>>
>> You could also include "EXECUTE 'DROP TABLE deck_types';" in your
>> function.
>
> I would recommend issuing DISCARD ALL before returning the connection to
> the pool anyway, but that is not about the current problem. The OP's
> problem is why ON COMMIT DROP does not work.
>
> --
> // Dmitry.

Is it possible that it is the read-uncommitted transaction isolation level?
Re: [GENERAL] Temporary table already exists
On 01/31/2014 06:49 AM, mephysto wrote:
> Is it possible that it is the read-uncommitted transaction isolation level?

No:

http://www.postgresql.org/docs/9.3/interactive/transaction-iso.html#XACT-READ-COMMITTED

"In PostgreSQL, you can request any of the four standard transaction isolation levels. But internally, there are only three distinct isolation levels, which correspond to the levels Read Committed, Repeatable Read, and Serializable."

When you select the level Read Uncommitted you really get Read Committed.

The issue would seem to be here, from your initial post: "The problem is that in a concurrent execution of a function, I received error of relation already exists."

Per a previous post, you will need to crank up the logging and see exactly how your statements are being sent to the backend.

--
Adrian Klaver
adrian.kla...@gmail.com
[GENERAL] postgres cache vs ZFS cache
I'm looking at some performance tuning for a server running FreeBSD with the DB on top of ZFS. I keep reading the recommendation to turn off the primary data cache in ZFS, since Postgres does its own caching. The recommendations also say to set effective_cache_size to a significant fraction of RAM; the claim is that this avoids double caching of the data.

My understanding from past research is that effective_cache_size only tells Postgres how big the OS cache of the on-disk data is, so there really is no double cache; it is just a hint so the planner knows roughly how much to expect the OS to hold on to. Based on this understanding, I believe the recommendation to turn off the ZFS primary cache for Postgres data is wrong. Or is there some other place Postgres caches data beyond the shared memory pool?

I find it mentioned here: http://open-zfs.org/wiki/Performance_tuning#PostgreSQL among other places.

Thanks for any clarifications and advice here.
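For reference, the tuning usually discussed looks like the following (dataset name and sizes are illustrative; whether to disable the ZFS data cache at all is exactly what is in question here):

```
# ZFS side: cache only metadata for the dataset holding PGDATA
# (this is the recommendation under discussion)
zfs set primarycache=metadata tank/pgdata

# postgresql.conf side
shared_buffers = 4GB            # postgres' own cache
effective_cache_size = 12GB     # a *hint* about OS/filesystem cache size;
                                # allocates nothing, only guides the planner
```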
Re: [GENERAL] Temporary table already exists
mephysto wrote:
> Is it possible that it is the read-uncommitted transaction isolation level?

No; there is no such thing in PostgreSQL. The lowest isolation level is READ COMMITTED.

Yours,
Laurenz Albe
[GENERAL] Large objects and savepoints - Snapshot reference leak
Hi,

I'm trying to read/write large objects via libpq. I encapsulated the operations in a transaction, but I wanted to put a savepoint before doing any operations so I can roll back in case anything fails without breaking the current transaction. Now, when something actually fails and the transaction is rolled back to the savepoint, the next commit results in a warning:

Snapshot reference leak: Snapshot 0xb5e4b0 still referenced

I'm not sure what to make of that. Can it be ignored? Is rolling back large object operations not possible?

--
with best regards,

Andreas Lubensky
Software Engineer
Re: [GENERAL] Large objects and savepoints - Snapshot reference leak
Hello,

this bug was fixed a few months ago by Heikki.

Regards

Pavel

On 31.1.2014 17:35, "Andreas Lubensky" wrote:
> Hi,
>
> I'm trying to read/write large objects via libpq. I encapsulated the
> operations in a transaction but I wanted to put a savepoint before doing
> any operations, so I can do a rollback in case anything fails without
> breaking the current transaction. Now, when something actually fails and the
> transaction is rolled back to the savepoint, the next commit results in
> a warning:
> Snapshot reference leak: Snapshot 0xb5e4b0 still referenced
> I'm not sure what to make of that. Can it be ignored? Is rolling back
> large object operations not possible?
>
> --
> with best regards,
>
> Andreas Lubensky
>
> Software Engineer
Re: [HACKERS] [GENERAL] Insert result does not match record count
On Wed, Jul 24, 2013 at 08:08:32PM +0200, Andres Freund wrote:
> On 2013-07-24 13:48:23 -0400, Tom Lane wrote:
> > Vik Fearing writes:
> > > Also worth mentioning is bug #7766.
> > > http://www.postgresql.org/message-id/e1tlli5-0007tr...@wrigleys.postgresql.org
> >
> > Yeah, did you read that whole thread? The real issue here is going to
> > be whether client-side code falls over on wider-than-32-bit counts.
> > We can fix the backend and be pretty sure that we've found all the
> > relevant places inside it, but we'll just be exporting the issue.
> > I did look at libpq and noted that it doesn't seem to have any internal
> > problem, because it returns the count to callers as a string (!).
> > But what do you think are the odds that callers are using code that
> > won't overflow? I'd bet on finding atoi() or suchlike in a lot of
> > callers. Even if they thought to use strtoul(), unsigned long is
> > not necessarily 64 bits wide.
>
> Application code that relies on the values already has problems though
> since the returned values are pretty bogus now. Including the fact that
> it can return 0 as the number of modified rows, which is checked for more
> frequently than the actual number IME...
> So I think client code that uses simplistic stuff like atoi isn't worse
> off afterwards since the values will be about as bogus. I am more
> worried about code that does range checks like Java's string conversion
> routines...
>
> I think fixing this for 9.4 is fine, but due to the compat issues I
> think it's too late for 9.3.

Where are we on this? There was a posted patch, attached, but Vik Fearing said it was insufficient and he was working on a new one:

http://www.postgresql.org/message-id/51eff67a.7020...@dalibo.com

--
Bruce Momjian  http://momjian.us
EnterpriseDB   http://enterprisedb.com

+ Everyone has their own god. +
*** a/src/backend/commands/createas.c
--- b/src/backend/commands/createas.c
***************
*** 172,178 **** ExecCreateTableAs(CreateTableAsStmt *stmt, const char *queryString,
  	/* save the rowcount if we're given a completionTag to fill */
  	if (completionTag)
  		snprintf(completionTag, COMPLETION_TAG_BUFSIZE,
! 				 "SELECT %u", queryDesc->estate->es_processed);

  	/* and clean up */
  	ExecutorFinish(queryDesc);
--- 172,178 ----
  	/* save the rowcount if we're given a completionTag to fill */
  	if (completionTag)
  		snprintf(completionTag, COMPLETION_TAG_BUFSIZE,
! 				 "SELECT " UINT64_FORMAT, queryDesc->estate->es_processed);

  	/* and clean up */
  	ExecutorFinish(queryDesc);

*** a/src/backend/tcop/pquery.c
--- b/src/backend/tcop/pquery.c
***************
*** 195,201 **** ProcessQuery(PlannedStmt *plan,
  	{
  		case CMD_SELECT:
  			snprintf(completionTag, COMPLETION_TAG_BUFSIZE,
! 					 "SELECT %u", queryDesc->estate->es_processed);
  			break;
  		case CMD_INSERT:
  			if (queryDesc->estate->es_processed == 1)
--- 195,201 ----
  	{
  		case CMD_SELECT:
  			snprintf(completionTag, COMPLETION_TAG_BUFSIZE,
! 					 "SELECT " UINT64_FORMAT, queryDesc->estate->es_processed);
  			break;
  		case CMD_INSERT:
  			if (queryDesc->estate->es_processed == 1)
*** 203,217 ****
  			else
  				lastOid = InvalidOid;
  			snprintf(completionTag, COMPLETION_TAG_BUFSIZE,
! 					 "INSERT %u %u", lastOid, queryDesc->estate->es_processed);
  			break;
  		case CMD_UPDATE:
  			snprintf(completionTag, COMPLETION_TAG_BUFSIZE,
! 					 "UPDATE %u", queryDesc->estate->es_processed);
  			break;
  		case CMD_DELETE:
  			snprintf(completionTag, COMPLETION_TAG_BUFSIZE,
! 					 "DELETE %u", queryDesc->estate->es_processed);
  			break;
  		default:
  			strcpy(completionTag, "???");
--- 203,217 ----
  			else
  				lastOid = InvalidOid;
  			snprintf(completionTag, COMPLETION_TAG_BUFSIZE,
! 					 "INSERT %u " UINT64_FORMAT, lastOid, queryDesc->estate->es_processed);
  			break;
  		case CMD_UPDATE:
  			snprintf(completionTag, COMPLETION_TAG_BUFSIZE,
! 					 "UPDATE " UINT64_FORMAT, queryDesc->estate->es_processed);
  			break;
  		case CMD_DELETE:
  			snprintf(completionTag, COMPLETION_TAG_BUFSIZE,
! 					 "DELETE " UINT64_FORMAT, queryDesc->estate->es_processed);
  			break;
  		default:
  			strcpy(completionTag, "???");

*** a/src/include/nodes/execnodes.h
--- b/src/include/nodes/execnodes.h
***************
*** 375,381 **** typedef struct EState
  	List	   *es_rowMarks;	/* List of ExecRowMarks */

! 	uint32		es_processed;	/* # of tuples processed */
  	Oid			es_lastoid;		/* last oid processed (by INSERT) */

  	int			es_top_eflags;	/* eflags passed to ExecutorStart */
--- 375,381 ----
  	List	   *es_rowMarks;	/* List of ExecRowMarks */

! 	uint64		es_processed;	/* # of tuples processed */
  	Oid			es_lastoid;		/* last oid processed (by INSERT) */

  	int			es_top_eflags;	/* eflags passed to ExecutorStart */
Re: [HACKERS] [GENERAL] Insert result does not match record count
On 01/31/2014 06:19 PM, Bruce Momjian wrote: > On Wed, Jul 24, 2013 at 08:08:32PM +0200, Andres Freund wrote: >> On 2013-07-24 13:48:23 -0400, Tom Lane wrote: >>> Vik Fearing writes: Also worth mentioning is bug #7766. http://www.postgresql.org/message-id/e1tlli5-0007tr...@wrigleys.postgresql.org >>> Yeah, did you read that whole thread? The real issue here is going to >>> be whether client-side code falls over on wider-than-32-bit counts. >>> We can fix the backend and be pretty sure that we've found all the >>> relevant places inside it, but we'll just be exporting the issue. >>> I did look at libpq and noted that it doesn't seem to have any internal >>> problem, because it returns the count to callers as a string (!). >>> But what do you think are the odds that callers are using code that >>> won't overflow? I'd bet on finding atoi() or suchlike in a lot of >>> callers. Even if they thought to use strtoul(), unsigned long is >>> not necessarily 64 bits wide. >> Application code that relies on the values already has problems though >> since the returned values are pretty bogus now. Including the fact that >> it can return 0 as the number of modified rows which is checked for more >> frequently than the actual number IME... >> So I think client code that uses simplistic stuff like atoi isn't worse >> off afterwards since the values will be about as bogus. I am more >> worried about code that does range checks like java's string conversion >> routines... >> >> I think fixing this for 9.4 is fine, but due to the compat issues I >> think it's to late for 9.3. > Where are we on this? There was a posted patch, attached, but Vik > Fearing said it was insufficent and he was working on a new one: > > http://www.postgresql.org/message-id/51eff67a.7020...@dalibo.com > Unfortunately, I gave up on it as being over my head when I noticed I was changing the protocol itself. I should have notified the list so someone else could have taken over. 
--
Vik
Re: [GENERAL] Large objects and savepoints - Snapshot reference leak
Andreas Lubensky wrote:
> I'm trying to read/write large objects via libpq. I encapsulated the operations in a
> transaction, but I wanted to put a savepoint before doing any operations so I can roll
> back in case anything fails without breaking the current transaction. Now, when something
> actually fails and the transaction is rolled back to the savepoint, the next commit
> results in a warning:
> Snapshot reference leak: Snapshot 0xb5e4b0 still referenced
> I'm not sure what to make of that. Can it be ignored? Is rolling back large object
> operations not possible?

What version are you running? I wonder if this can be attributed to a bug fixed by this commit:

Author: Heikki Linnakangas
Branch: master [357f75213] 2013-09-30 12:53:14 +0300
Branch: REL9_3_STABLE Release: REL9_3_1 [f609d0743] 2013-09-30 12:53:56 +0300
Branch: REL9_2_STABLE Release: REL9_2_5 [fc7a38f32] 2013-09-30 12:54:37 +0300
Branch: REL9_1_STABLE Release: REL9_1_10 [cd6c03b5c] 2013-09-30 12:55:57 +0300
Branch: REL9_0_STABLE Release: REL9_0_14 [c5c87f065] 2013-09-30 12:58:51 +0300
Branch: REL8_4_STABLE Release: REL8_4_18 [fef01d419] 2013-09-30 13:00:00 +0300

    Fix snapshot leak if lo_open called on non-existent object.

    lo_open registers the currently active snapshot, and checks if the
    large object exists after that. Normally, snapshots registered by
    lo_open are unregistered at end of transaction when the lo descriptor
    is closed, but if we error out before the lo descriptor is added to
    the list of open descriptors, it is leaked. Fix by moving the
    snapshot registration to after checking if the large object exists.

    Reported by Pavel Stehule. Backpatch to 8.4. The snapshot
    registration system was introduced in 8.4, so prior versions are not
    affected (and not supported, anyway).
--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] [GENERAL] Insert result does not match record count
On Fri, Jan 31, 2014 at 06:34:27PM +0100, Vik Fearing wrote:
> >> Application code that relies on the values already has problems though
> >> since the returned values are pretty bogus now. Including the fact that
> >> it can return 0 as the number of modified rows, which is checked for more
> >> frequently than the actual number IME...
> >> So I think client code that uses simplistic stuff like atoi isn't worse
> >> off afterwards since the values will be about as bogus. I am more
> >> worried about code that does range checks like Java's string conversion
> >> routines...
> >>
> >> I think fixing this for 9.4 is fine, but due to the compat issues I
> >> think it's too late for 9.3.
> > Where are we on this? There was a posted patch, attached, but Vik
> > Fearing said it was insufficient and he was working on a new one:
> >
> > http://www.postgresql.org/message-id/51eff67a.7020...@dalibo.com
>
> Unfortunately, I gave up on it as being over my head when I noticed I
> was changing the protocol itself. I should have notified the list so
> someone else could have taken over.

OK, so that brings up a good question. Can we change the protocol for this without causing major breakage? Tom seems to indicate that it can be done for 9.4, but I thought protocol breakage was a major issue. Are we really changing the wire protocol here, or just the type of string we can pass back to the interface? I know the libpq API we give to clients is a string, so it is OK.

--
Bruce Momjian  http://momjian.us
EnterpriseDB   http://enterprisedb.com

+ Everyone has their own god. +
Re: [GENERAL] postgres FDW cost estimation options unrecognized in 9.3-beta1
On Fri, Jul 26, 2013 at 06:28:05PM -0400, Tom Lane wrote:
> Our documentation appears not to disclose this fine point, but a look
> at the SQL-MED standard says it's operating per spec. The standard also
> says that ADD is an error if the option is already defined, which is a
> bit more defensible, but still not exactly what I'd call user-friendly.
> And the error we issue for that case is pretty misleading too:
>
> regression=# ALTER SERVER cuda_db10 OPTIONS (use_remote_estimate 'true') ;
> ALTER SERVER
> regression=# ALTER SERVER cuda_db10 OPTIONS (use_remote_estimate 'false') ;
> ERROR: option "use_remote_estimate" provided more than once
>
> I think we could do with both more documentation, and better error
> messages for these cases. In the SET-where-you-should-use-ADD case,
> perhaps
>
> ERROR: option "use_remote_estimate" has not been set
> HINT: Use ADD not SET to define an option that wasn't already set.
>
> In the ADD-where-you-should-use-SET case, perhaps
>
> ERROR: option "use_remote_estimate" is already set
> HINT: Use SET not ADD to change an option's value.
>
> The "provided more than once" wording would be appropriate if the same
> option is specified more than once in the command text, but I'm not sure
> that it's worth the trouble to detect that case.

Where are we on this?

--
Bruce Momjian  http://momjian.us
EnterpriseDB   http://enterprisedb.com

+ Everyone has their own god. +
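The SQL-MED distinction Tom quotes means the required keyword depends on whether the option is already defined on the server object; a minimal illustration (server name taken from the quoted session):

```sql
-- First definition of the option: plain (or explicit ADD) works
ALTER SERVER cuda_db10 OPTIONS (ADD use_remote_estimate 'true');

-- Changing an option that is already defined requires SET
ALTER SERVER cuda_db10 OPTIONS (SET use_remote_estimate 'false');

-- Removing it again
ALTER SERVER cuda_db10 OPTIONS (DROP use_remote_estimate);
```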
[GENERAL] Info
Hi, I don't want to receive emails from mailing list users in my inbox because there are too many. Please help me, thank you!
Re: [GENERAL] Info
On 31/01/2014 19:30, Gary Rodríguez Ramírez wrote:
> Hi, I don't want to receive emails from mailing list users in my inbox because there
> are too many. Please help me, thank you!

Just unsubscribe: instructions are in the footer of every email sent from the list.

Ray.

--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie
Re: [HACKERS] [GENERAL] Insert result does not match record count
Bruce Momjian writes:
> On Fri, Jan 31, 2014 at 06:34:27PM +0100, Vik Fearing wrote:
>> Unfortunately, I gave up on it as being over my head when I noticed I
>> was changing the protocol itself. I should have notified the list so
>> someone else could have taken over.

> OK, so that brings up a good question. Can we change the protocol for
> this without causing major breakage? Tom seems to indicate that it can
> be done for 9.4, but I thought protocol breakage was a major issue. Are
> we really changing the wire protocol here, or just the type of string we
> can pass back to the interface?

What I said about it upthread was "this is effectively a protocol change, albeit a pretty minor one, so I can't see back-patching it".

The discussion in bug #7766 shows that some client-side code is likely to need fixing, and that such fixing might actually be nontrivial for them. So changing this in a minor release is clearly a bad idea. But I don't have a problem with widening the counters in a major release where we can document it as a potential compatibility issue.

I took a quick look and noted that CMDSTATUS_LEN and COMPLETION_TAG_BUFSIZE are set to 64, and have been for quite a long time, so command status string buffer sizes should not be a problem.

I think we probably just need to widen es_processed and touch related code. Not sure what else Vik saw that needed doing.

			regards, tom lane
Re: [HACKERS] [GENERAL] Insert result does not match record count
On Fri, Jan 31, 2014 at 04:38:21PM -0500, Tom Lane wrote: > Bruce Momjian writes: > > On Fri, Jan 31, 2014 at 06:34:27PM +0100, Vik Fearing wrote: > >> Unfortunately, I gave up on it as being over my head when I noticed I > >> was changing the protocol itself. I should have notified the list so > >> someone else could have taken over. > > > OK, so that brings up a good question. Can we change the protocol for > > this without causing major breakage? Tom seems to indicate that it can > > be done for 9.4, but I thought protocol breakage was a major issue. Are > > we really changing the wire protocol here, or just the type of string we > > can pass back to the interface? > > What I said about it upthread was "this is effectively a protocol change, > albeit a pretty minor one, so I can't see back-patching it". > > The discussion in bug #7766 shows that some client-side code is likely to > need fixing, and that such fixing might actually be nontrivial for them. > So changing this in a minor release is clearly a bad idea. But I don't > have a problem with widening the counters in a major release where we > can document it as a potential compatibility issue. > > I took a quick look and noted that CMDSTATUS_LEN and > COMPLETION_TAG_BUFSIZE are set to 64, and have been for quite a long time, > so command status string buffer sizes should not be a problem. > > I think we probably just need to widen es_processed and touch related > code. Not sure what else Vik saw that needed doing. OK, thanks for the feedback. I understand now. The contents of the string will potentially have a larger integer, but the byte length of the string in the wire protocol doesn't change. Let's wait for Vik to reply and I think we can move forward. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + Everyone has their own god. 
+
[GENERAL] pg_basebackup on standby node failed
Hi all, I intend to do a base backup on a hot standby node. I followed the instructions on http://www.postgresql.org/docs/9.2/static/app-pgbasebackup.html to turn full_page_writes on at the master, along with the other settings on the standby. When I try select pg_start_backup('backuplabel'); on the standby instance, it gives me the following: ERROR: recovery is in progress HINT: WAL control functions cannot be executed during recovery. I tried select pg_xlog_replay_pause(); before the select pg_start_backup('backuplabel'); but that does not help. Also, why does select pg_is_in_recovery(); always return true when select pg_is_xlog_replay_paused(); is true? I thought that after select pg_xlog_replay_pause(); the standby should no longer be in recovery. Please help if you know anything about this. Thank you. best, Ying
Re: [GENERAL] pg_basebackup on standby node failed
On Sat, Feb 1, 2014 at 7:07 AM, Ying He wrote: > hi, All, > > I intend to do a basebackup on a hot standby node. I followed the > instructions on > http://www.postgresql.org/docs/9.2/static/app-pgbasebackup.html to turn > full_page_writes on at master and other settings on standby. > > When I try to select pg_start_backup('backuplabel'); on the standby > instance, it give me the following: > ERROR: recovery is in progress > HINT: WAL control functions cannot be executed during recovery. > > I tried select pg_xlog_replay_pause(); before the select > pg_start_backup('backuplabel'); but that does not help. > > Also why select pg_is_in_recovery(); always returns true when select > pg_is_xlog_replay_paused(); is true? I thought when I do select > pg_xlog_replay_pause(); the standby should not be in recovery > > Please help if you know anything about this. Thank you. Taking a backup with pg_basebackup and with pg_start/stop_backup are two different things on a standby. In short, you are not allowed to run pg_start/stop_backup during recovery on a standby because pg_stop_backup needs to write a WAL record called XLOG_BACKUP_END once it is done. Only pg_basebackup is able to take backups from a standby, because it uses the replication protocol to take the backup and bypasses that WAL record by waiting until all the needed WAL files have been archived. So use pg_basebackup for a standby :) Regards, -- Michael
Re: [HACKERS] [GENERAL] Insert result does not match record count
On 01/31/2014 10:56 PM, Bruce Momjian wrote: > On Fri, Jan 31, 2014 at 04:38:21PM -0500, Tom Lane wrote: >> Bruce Momjian writes: >>> On Fri, Jan 31, 2014 at 06:34:27PM +0100, Vik Fearing wrote: Unfortunately, I gave up on it as being over my head when I noticed I was changing the protocol itself. I should have notified the list so someone else could have taken over. >>> OK, so that brings up a good question. Can we change the protocol for >>> this without causing major breakage? Tom seems to indicate that it can >>> be done for 9.4, but I thought protocol breakage was a major issue. Are >>> we really changing the wire protocol here, or just the type of string we >>> can pass back to the interface? >> What I said about it upthread was "this is effectively a protocol change, >> albeit a pretty minor one, so I can't see back-patching it". >> >> The discussion in bug #7766 shows that some client-side code is likely to >> need fixing, and that such fixing might actually be nontrivial for them. >> So changing this in a minor release is clearly a bad idea. But I don't >> have a problem with widening the counters in a major release where we >> can document it as a potential compatibility issue. >> >> I took a quick look and noted that CMDSTATUS_LEN and >> COMPLETION_TAG_BUFSIZE are set to 64, and have been for quite a long time, >> so command status string buffer sizes should not be a problem. >> >> I think we probably just need to widen es_processed and touch related >> code. Yes. >> Not sure what else Vik saw that needed doing. Quite a lot, actually. It seemed to me at the time to be a pretty big rabbit hole. > OK, thanks for the feedback. I understand now. The contents of the > string will potentially have a larger integer, but the byte length of > the string in the wire protocol doesn't change. > > Let's wait for Vik to reply and I think we can move forward. Unfortunately, I just did some cleanup last week and removed that branch. 
Had I waited a bit longer, I would still have all the work I had done. I'll see how quickly I can redo it and get back to the part where I got scared of what I was doing. It will have to wait until next week though; I am currently at FOSDEM. -- Vik
Re: [HACKERS] [GENERAL] Insert result does not match record count
On Sat, Feb 1, 2014 at 02:25:16AM +0100, Vik Fearing wrote: > > OK, thanks for the feedback. I understand now. The contents of the > > string will potentially have a larger integer, but the byte length of > > the string in the wire protocol doesn't change. > > > > Let's wait for Vik to reply and I think we can move forward. > > Unfortunately, I just did some cleanup last week and removed that > branch. Had I waited a bit more I still would have had all the work I > had done. I'll see how quickly I can redo it to get to the part where I > got scared of what I was doing. > > It will have to wait until next week though; I am currently at FOSDEM. OK, thanks. I thought it only required passing the int64 around until it got into the string passed to the client. The original patch is in the email archives if you want it. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + Everyone has their own god. +
[GENERAL] Fwd: lots of errors from fmgr.h when I try to write a C UDF
Hi, I had Postgres 9.3 set up and running successfully in Eclipse (Mac OS X). I am trying to write a C user-defined function:

#include ...

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif
...

However, when I tried to compile the code with gcc -fpic -c ..., it gave me a lot of errors and warnings from fmgr.h:

...
/Users/atpham/workspacecdt/pgsql/src/include/fmgr.h:59:2: error: unknown type name 'MemoryContext'
    MemoryContext fn_mcxt;    /* memory context to store fn_extra in */
    ^
/Users/atpham/workspacecdt/pgsql/src/include/fmgr.h:71:2: error: unknown type name 'Oid'
    Oid fncollation;    /* collation for function to use */
    ^
/Users/atpham/workspacecdt/pgsql/src/include/fmgr.h:72:2: error: unknown type name 'bool'
    bool isnull;    /* function must set true if result is NULL */
...

Any suggestion as to where I went wrong? Thank you :)
Re: [GENERAL] Fwd: lots of errors from fmgr.h when I try to write a C UDF
2014-02-01 Anh Pham : > Hi, > I had Postgres 9.3 setup and run successfully in Eclipse (MacOSX) > I am trying to write a C user-defined function: > > #include > ... > #ifdef PG_MODULE_MAGIC > PG_MODULE_MAGIC; > #endif > ... > > However, when I tried to compile the code: > gcc -fpic -c ... > It gave me a lot of errors and warnings from fmgr.h: > > ... > /Users/atpham/workspacecdt/pgsql/src/include/fmgr.h:59:2: error: unknown > type name 'MemoryContext' > MemoryContext fn_mcxt; /* memory context to store fn_extra > in */ > ^ > /Users/atpham/workspacecdt/pgsql/src/include/fmgr.h:71:2: error: unknown > type name 'Oid' > Oid fncollation;/* collation for function to > use */ > ^ > /Users/atpham/workspacecdt/pgsql/src/include/fmgr.h:72:2: error: unknown > type name 'bool' > boolisnull; /* function must set true if > result is NULL */ > … You'll need to include the appropriate header files. Certainly "postgres.h". Regards Ian Barwick
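For reference, a minimal skeleton with the include order this advice implies looks like the following. This is a sketch along the lines of the standard fmgr documentation, not the poster's actual code; the function name add_one is a hypothetical example. The key point is that postgres.h must be the first include, because fmgr.h uses types such as Oid, bool, and MemoryContext that postgres.h declares, which is exactly what the "unknown type name" errors above are complaining about.

```
/* postgres.h must come first: it defines Oid, bool, MemoryContext, etc.,
 * which fmgr.h (and almost every other server header) relies on. */
#include "postgres.h"
#include "fmgr.h"

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

/* Hypothetical example function: adds one to an int4 argument. */
PG_FUNCTION_INFO_V1(add_one);

Datum
add_one(PG_FUNCTION_ARGS)
{
    int32 arg = PG_GETARG_INT32(0);

    PG_RETURN_INT32(arg + 1);
}
```

This cannot be built standalone; it needs the PostgreSQL server headers, so the compile line (paths assumed) would be something like gcc -fpic -c -I$(pg_config --includedir-server) add_one.c, where the -I flag points the compiler at the server include directory.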
Re: [GENERAL] Fwd: lots of errors from fmgr.h when I try to write a C UDF
Hi, I actually included 'postgres.h'. However, I found out that when I stopped the server, the code then compiled successfully. It's weird. On Fri, Jan 31, 2014 at 10:53 PM, Ian Lawrence Barwick wrote: > 2014-02-01 Anh Pham : > > Hi, > > I had Postgres 9.3 setup and run successfully in Eclipse (MacOSX) > > I am trying to write a C user-defined function: > > > > #include > > ... > > #ifdef PG_MODULE_MAGIC > > PG_MODULE_MAGIC; > > #endif > > ... > > > > However, when I tried to compile the code: > > gcc -fpic -c ... > > It gave me a lot of errors and warnings from fmgr.h: > > > > ... > > /Users/atpham/workspacecdt/pgsql/src/include/fmgr.h:59:2: error: unknown > > type name 'MemoryContext' > > MemoryContext fn_mcxt; /* memory context to store fn_extra > > in */ > > ^ > > /Users/atpham/workspacecdt/pgsql/src/include/fmgr.h:71:2: error: unknown > > type name 'Oid' > > Oid fncollation;/* collation for function to > > use */ > > ^ > > /Users/atpham/workspacecdt/pgsql/src/include/fmgr.h:72:2: error: unknown > > type name 'bool' > > boolisnull; /* function must set true if > > result is NULL */ > > ... > > You'll need to include the appropriate header files. Certainly > "postgres.h". > > Regards > > Ian Barwick > -- Anh T Pham Computer Science Worcester Polytechnic Institute