[BUGS] BUG #2752: Website bug
The following bug has been logged online:

Bug reference:      2752
Logged by:          Denis
Email address:      [EMAIL PROTECTED]
PostgreSQL version: 8.1.5
Operating system:   Gentoo 2006.1 server x86
Description:        Website bug
Details:

Don't know where to file this bug, but when I submit a bug report I get this page:

    Thank you for your bug report
    The report (reference: 2751) will be forwarded to the development team
    for further investigation.
    Return to the original page   <- this LINK IS BROKEN
[BUGS] BUG #5749: Case sensitivity of names of sequences.
The following bug has been logged online:

Bug reference:      5749
Logged by:          Denis
Email address:      dolgalevde...@mail.ru
PostgreSQL version: 9.0.1
Operating system:   Windows XP
Description:        Case sensitivity of names of sequences.
Details:

Hello. I found a problem with sequence names. I am new to PostgreSQL, so sorry if this problem has already been discussed. I wanted to make a simple table with an ID column whose value must be taken from a sequence (automatically incremented by 1 for every new record).

1. In the GUI I created a sequence named "NameSeq" (uppercase "N" and "S").

2. Then I tried to create a table using this simple query:

CREATE TABLE "Names"
(
  "NameId" bigint NOT NULL DEFAULT nextval('NameSeq') primary key,
  "Name" char(20),
  "FirstName" char(30),
  "SecondName" char(30)
)
WITH (
  OIDS = FALSE
);

But I get the error:

  ERROR: relation "nameseq" does not exist
  SQL state: 42P01

As you can see, the sequence name in the error message is written in lowercase, but in the query I used "NameSeq".

3. I created a new sequence "nameseq" ("n" and "s" in lowercase) and the query completed without errors.

Should sequence names always be lowercase, or is it a bug?
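For readers hitting the same error: this is PostgreSQL's usual identifier folding. Unquoted names are folded to lowercase, while a name created with double quotes (which is what GUI tools typically do behind the scenes) keeps its exact case. A minimal sketch of the two spellings (a hypothetical session, not from the original report), assuming the sequence was created as a quoted mixed-case name:

    -- Created with double quotes, so the mixed-case name is preserved:
    CREATE SEQUENCE "NameSeq";

    -- Unquoted, the name is folded to lowercase, so this looks for "nameseq":
    SELECT nextval('NameSeq');    -- ERROR: relation "nameseq" does not exist

    -- Quoting the name inside the nextval() argument preserves the case:
    SELECT nextval('"NameSeq"');  -- works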
Re: [BUGS] BUG #4389: FATAL: could not reattach to shared memory(key=1804, addr=018E0000): 487
Hi. Thanks for the fast reply!

I had always completely removed PostgreSQL before each installation attempt (file system, system services). I have tried your advice and, of course, it did not help. It seems some Windows update made changes, critical for PostgreSQL, to the Windows system libraries it uses.

Best regards,
Denys.

--- Original message ---
From: Zdenek Kotala <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Date: September 1, 19:55:38
Subject: Re: [BUGS] BUG #4389: FATAL: could not reattach to shared memory (key=1804, addr=018E): 487

could not reattach to shared memory wrote:
> The following bug has been logged online:
>
> Bug reference:      4389
> Logged by:          could not reattach to shared memory
> Email address:      [EMAIL PROTECTED]
> PostgreSQL version: 8.3.3-1
> Operating system:   any 8.3.*
> Description:        FATAL: could not reattach to shared memory (key=1804, addr=018E): 487
> Details:
>
> This error came a week ago.
> From that 'black' day I cannot use PostgreSQL.
> I have reinstalled several 8.3.* versions (including the last version with
> vcredist_x86.exe) and nothing helps me.

try to remove the postgresql.pid file in the data directory.

Zdenek
Re: [BUGS] BUG #4389: FATAL: could not reattach to shared memory(key=1804, addr=018E0000): 487
Hi!

The reason was a corrupted system library, and "sfc /scannow" helped me. I hope you remember this solution and will advise it to other people. Thanks.

--- Original message ---
From: Zdenek Kotala <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Date: September 1, 19:55:38
Subject: Re: [BUGS] BUG #4389: FATAL: could not reattach to shared memory (key=1804, addr=018E): 487

could not reattach to shared memory wrote:
> The following bug has been logged online:
>
> Bug reference:      4389
> Logged by:          could not reattach to shared memory
> Email address:      [EMAIL PROTECTED]
> PostgreSQL version: 8.3.3-1
> Operating system:   any 8.3.*
> Description:        FATAL: could not reattach to shared memory (key=1804, addr=018E): 487
> Details:
>
> This error came a week ago.
> From that 'black' day I cannot use PostgreSQL.
> I have reinstalled several 8.3.* versions (including the last version with
> vcredist_x86.exe) and nothing helps me.

try to remove the postgresql.pid file in the data directory.

Zdenek
[BUGS] BUG #4562: ts_headline() adds space when parsing url
The following bug has been logged online:

Bug reference:      4562
Logged by:          Denis Monsieur
Email address:      [EMAIL PROTECTED]
PostgreSQL version: 8.3.4
Operating system:   Debian etch
Description:        ts_headline() adds space when parsing url
Details:

My system is 8.3.4, but people in #postgresql with 8.3.5 have confirmed the issue. The problem is a space being added to text in the form of http://some.url/path. Compare the output:

shs=# SELECT ts_headline('http://some.url', to_tsquery('sometext'));
   ts_headline
-----------------
 http://some.url
(1 row)

shs=# SELECT ts_headline('http://some.url/path', to_tsquery('sometext'));
      ts_headline
-----------------------
 http:// some.url/path
(1 row)
[BUGS] Strange behavior with to_char and dates
Hello,

Today is the 9th of January 2009. The following query, performed on version 8.0.8 or on 8.3.5, gives the same strange result. As you can see below, to_char((current_date - 11), 'DD MM IYYY') gives "29 12 2009" instead of "29 12 2008". With 12 or with 8 the result is good. EXTRACT is a good workaround. What did I do wrong? Is my query bad or is it a bug?

Best regards,

select to_char(current_date, 'DD MM IYYY') as good_curdate,
       current_date - 10 as good_cur_10,
       to_char((current_date - 10), 'DD MM IYYY') as bad_date1_10,
       to_char((date(now())-10), 'DD MM IYYY') as bad_date2_10,
       to_char((current_date - 13), 'DD MM IYYY') as good_date_13,
       to_char((current_date - 12), 'DD MM IYYY') as good_date_12,
       to_char((current_date - 11), 'DD MM IYYY') as bad_date_11,
       to_char((current_date - 10), 'DD MM IYYY') as bad_date_10,
       to_char((current_date - 9), 'DD MM IYYY') as bad_date_9,
       to_char((current_date - 8), 'DD MM IYYY') as good_date_8,
       to_char((current_date - 7), 'DD MM IYYY') as good_date_7,
       EXTRACT(day FROM (date(now())-10)) as good_day_10,
       EXTRACT(month FROM (date(now())-10)) as good_month_10,
       EXTRACT(YEAR FROM (date(now())-10)) as good_year_10;

good_curdate:  "09 01 2009"
good_cur_10:   "2008-12-30"
bad_date1_10:  "30 12 2009"
bad_date2_10:  "30 12 2009"
good_date_13:  "27 12 2008"
good_date_12:  "28 12 2008"
bad_date_11:   "29 12 2009"
bad_date_10:   "30 12 2009"
bad_date_9:    "31 12 2009"
good_date_8:   "01 01 2009"
good_date_7:   "02 01 2009"
good_day_10:   30
good_month_10: 12
good_year_10:  2008

Denis Percevault
d.perceva...@pnsconcept.fr
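If I read the pattern right, the key detail is that IYYY is the ISO 8601 week-numbering year, not the calendar year; around the new year the two can differ (2008-12-29 falls in ISO week 1 of 2009), which matches the "bad" dates above. A minimal sketch contrasting the two patterns, using one date from the report (output shown as I would expect it, not taken from the original message):

    -- YYYY is the calendar year, IYYY the ISO week-numbering year.
    SELECT to_char(date '2008-12-29', 'DD MM YYYY') AS calendar_year,  -- 29 12 2008
           to_char(date '2008-12-29', 'DD MM IYYY') AS iso_year;       -- 29 12 2009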
[BUGS] BUG #4680: Server crashed if using wrong (mismatch) conversion functions
The following bug has been logged online:

Bug reference:      4680
Logged by:          Denis Afonin
Email address:      v...@itkm.ru
PostgreSQL version: 8.3.6
Operating system:   Linux Debian Lenny
Description:        Server crashed if using wrong (mismatch) conversion functions
Details:

I do:

=cut=
postg...@sunset:~$ createdb test -E KOI8
postg...@sunset:~$ psql test
Welcome to psql 8.3.6, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

test=# SHOW server_version;
 server_version
----------------
 8.3.6
(1 row)

test=# CREATE DEFAULT CONVERSION test1 FOR 'LATIN1' TO 'KOI8' FROM ascii_to_mic;
CREATE CONVERSION
test=# CREATE DEFAULT CONVERSION test2 FOR 'KOI8' TO 'LATIN1' FROM mic_to_ascii;
CREATE CONVERSION
test=# set client_encoding to 'LATIN1';
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!> \q
=end cut=

In the logs:

=cut=
2009-02-27 10:29:40 UTC LOG: database system was shut down at 2009-02-27 10:29:38 UTC
2009-02-27 10:29:40 UTC LOG: autovacuum launcher started
2009-02-27 10:29:40 UTC LOG: database system is ready to accept connections
2009-02-27 10:29:50 UTC ERROR: expected source encoding "MULE_INTERNAL", but got "KOI8"
2009-02-27 10:29:50 UTC STATEMENT: set client_encoding to 'LATIN1';
2009-02-27 10:29:50 UTC ERROR: expected source encoding "MULE_INTERNAL", but got "KOI8"
2009-02-27 10:29:50 UTC STATEMENT: set client_encoding to 'LATIN1';
2009-02-27 10:29:50 UTC ERROR: expected source encoding "MULE_INTERNAL", but got "KOI8"
2009-02-27 10:29:50 UTC ERROR: expected source encoding "MULE_INTERNAL", but got "KOI8"
2009-02-27 10:29:50 UTC ERROR: expected source encoding "MULE_INTERNAL", but got "KOI8"
2009-02-27 10:29:50 UTC PANIC: ERRORDATA_STACK_SIZE exceeded
2009-02-27 10:29:50 UTC PANIC: expected source encoding "MULE_INTERNAL", but got "KOI8"
2009-02-27 10:29:50 UTC PANIC: expected source encoding "MULE_INTERNAL", but got "KOI8"
2009-02-27 10:29:50 UTC PANIC: expected source encoding "MULE_INTERNAL", but got "KOI8"
2009-02-27 10:29:50 UTC PANIC: expected source encoding "MULE_INTERNAL", but got "KOI8"
2009-02-27 10:29:50 UTC PANIC: ERRORDATA_STACK_SIZE exceeded
2009-02-27 10:29:50 UTC PANIC: ERRORDATA_STACK_SIZE exceeded
2009-02-27 10:29:50 UTC PANIC: expected source encoding "MULE_INTERNAL", but got "KOI8"
2009-02-27 10:29:50 UTC PANIC: expected source encoding "MULE_INTERNAL", but got "KOI8"
2009-02-27 10:29:50 UTC PANIC: expected source encoding "MULE_INTERNAL", but got "KOI8"
2009-02-27 10:29:50 UTC PANIC: expected source encoding "MULE_INTERNAL", but got "KOI8"
2009-02-27 10:29:50 UTC PANIC: ERRORDATA_STACK_SIZE exceeded
2009-02-27 10:29:50 UTC PANIC: expected source encoding "MULE_INTERNAL", but got "KOI8"
2009-02-27 10:29:50 UTC PANIC: expected source encoding "MULE_INTERNAL", but got "KOI8"
2009-02-27 10:29:50 UTC PANIC: expected source encoding "MULE_INTERNAL", but got "KOI8"
2009-02-27 10:29:50 UTC LOG: server process (PID 4958) was terminated by signal 11: Segmentation fault
2009-02-27 10:29:50 UTC LOG: terminating any other active server processes
2009-02-27 10:29:50 UTC LOG: all server processes terminated; reinitializing
2009-02-27 10:29:50 UTC LOG: database system was interrupted; last known up at 2009-02-27 10:29:40 UTC
2009-02-27 10:29:50 UTC LOG: database system was not properly shut down; automatic recovery in progress
=end cut=
[BUGS] postgresql 8.1.5 psql -P recordsep='\n' does not work
Hello All.

I want to output the result of a query with records separated by a double newline. For example:

psql -At -P recordsep="\n\n" -U postgres -c "select generate_series(1,3)"

1

2

3

But my result is:

psql -At -P recordsep="\n\n" -U postgres -d chronopay -c "select generate_series(1,3)"
1\n\n2\n\n3

A simple newline does not work either:

psql -At -P recordsep="\n" -U postgres -d chronopay -c "select generate_series(1,3)"
1\n2\n3

--
psql -At -P recordsep='\n\n' -U postgres -d chronopay -c "select generate_series(1,3)"
psql -At -R '\n\n' -U postgres -d chronopay -c "select generate_series(1,3)"
psql -At -R "\n\n" -U postgres -d chronopay -c "select generate_series(1,3)"

no result.

Is it a bug?

p.s. PostgreSQL 8.1.5 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-3)

Thanks.
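For what it's worth, the literal \n in the output above suggests psql takes the recordsep value verbatim rather than interpreting backslash escapes. A possible workaround (a sketch, assuming a bash-like shell where $'...' expands escape sequences; not verified against 8.1.5) is to pass real newline characters:

psql -At -R $'\n\n' -U postgres -d chronopay -c "select generate_series(1,3)"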
[BUGS] BUG #1756: PQexec eats huge amounts of memory
The following bug has been logged online:

Bug reference:      1756
Logged by:          Denis Vlasenko
Email address:      [EMAIL PROTECTED]
PostgreSQL version: 8.0.1
Operating system:   Linux
Description:        PQexec eats huge amounts of memory
Details:

Verbatim from http://bugs.php.net/bug.php?id=33587:

Description:
Seen on php-4.3.4RC2.

Since I was just testing how good PG fares compared to Oracle, and I am not feeling any real pain from this (IOW: not my itch to scratch), I do not research this in depth, apart from submitting a bug report. Sorry.

Symptom: even the simplest query

    $result = pg_query($db, "SELECT * FROM big_table");

eats enormous amounts of memory on the server (proportional to table size). I think this is a problem with the PostgreSQL client libs. php's source is included for easy reference.

PHP_FUNCTION(pg_query)
{
    ...
    pgsql_result = PQexec(pgsql, Z_STRVAL_PP(query));
    if ((PGG(auto_reset_persistent) & 2) && PQstatus(pgsql) != CONNECTION_OK) {
        PQclear(pgsql_result);
        PQreset(pgsql);
        pgsql_result = PQexec(pgsql, Z_STRVAL_PP(query));
    }

    if (pgsql_result) {
        status = PQresultStatus(pgsql_result);
    } else {
        status = (ExecStatusType) PQstatus(pgsql);
    }

    switch (status) {
        case PGRES_EMPTY_QUERY:
        case PGRES_BAD_RESPONSE:
        case PGRES_NONFATAL_ERROR:
        case PGRES_FATAL_ERROR:
            php_error_docref(NULL TSRMLS_CC, E_WARNING, "Query failed: %s.", PQerrorMessage(pgsql));
            PQclear(pgsql_result);
            RETURN_FALSE;
            break;
        case PGRES_COMMAND_OK: /* successful command that did not return rows */
        default:
            if (pgsql_result) {
                pg_result = (pgsql_result_handle *) emalloc(sizeof(pgsql_result_handle));
                pg_result->conn = pgsql;
                pg_result->result = pgsql_result;
                pg_result->row = 0;
                ZEND_REGISTER_RESOURCE(return_value, pg_result, le_result);
            } else {
                PQclear(pgsql_result);
                RETURN_FALSE;
            }
            break;
    }
}
Re: [BUGS] BUG #1756: PQexec eats huge amounts of memory
On Wednesday 06 July 2005 16:52, Harald Armin Massa wrote:
> Denis,
>
> $result = pg_query($db, "SELECT * FROM big_table");
>
> you are reading a big result (as I suspect from big_table) into memory. It
> is perfectly normal that this uses large amounts of memory.

No, I am not reading it into memory. I am executing the query _on the server_, fetching the result row-by-row and discarding rows as they are processed (i.e. without accumulating all rows in _client's memory_) in the part of the php script which you snipped off.

A similar construct with Oracle, with a 10x larger table, does not use Apache (php) memory significantly.

php's pg_query() calls PQexec(), a PostgreSQL client library function, which is likely implemented so that it fetches all rows and stores them in the client's RAM before completion. Oracle OCI8 does not work this way; it keeps the result set on the db server (in the form of a cursor or something like that).

> [it would be rather suspicious if loading a big file / big resultset would
> not use big amounts of memory]
--
vda
Re: [BUGS] BUG #1756: PQexec eats huge amounts of memory
On Thursday 07 July 2005 08:54, Neil Conway wrote:
> Denis Vlasenko wrote:
> > Symptom: even the simplest query
> >     $result = pg_query($db, "SELECT * FROM big_table");
> > eats enormous amounts of memory on server
> > (proportional to table size).
>
> Right, which is exactly what you would expect. The entire result set is
> sent to the client and stored in local memory; if you only want to
> process part of the result set at a time, use a cursor.

The same php script run against Oracle does not have this behaviour.

> (And I'm a little suspicious that the performance of "SELECT * FROM
> big_table" will contribute to a meaningful comparison between database
> systems.)

I wanted to show colleagues who are Oracle admins that the peak data fetch rate of PostgreSQL is way better than Oracle's. While that turned out to be true (Oracle+WinNT = 2kb TCP output buffer, ~1Mb/s over 100Mbit; PostgreSQL+Linux = 8kb buffer, ~2.6Mb/s), I was ridiculed instead when my php script failed miserably, crashing Apache with an OOM condition, while the analogous script for Oracle ran to completion just fine.
--
vda
Re: [BUGS] BUG #1756: PQexec eats huge amounts of memory
On Thursday 07 July 2005 20:43, Alvaro Herrera wrote:
> On Thu, Jul 07, 2005 at 08:17:23AM -0700, John R Pierce wrote:
> > Neil Conway wrote:
> > > Denis Vlasenko wrote:
> > >> The same php script but done against Oracle does not have this
> > >> behaviour.
> > >
> > > Perhaps; presumably Oracle is essentially creating a cursor for you
> > > behind the scenes. libpq does not attempt to do this automatically; if
> > > you need a cursor, you can create one by hand.
> >
> > I do not understand how a cursor could be autocreated by a query like
> >
> >     $result = pg_query($db, "SELECT * FROM big_table");
> >
> > php will expect $result to contain the entire table (yuck!).
>
> Really? I thought what really happened is you had to get the results
> one at a time using the pg_fetch family of functions. If that is true,
> then it's possible to make the driver fake having the whole table by
> using a cursor. (Even if PHP doesn't do it, it's possible for OCI to do
> it behind the scenes.)

Even without a cursor, the result can be read incrementally.

I mean, the query result is transferred over the network, right? We can just stop read()'ing before we reach the end of the result set, and continue at pg_fetch as needed. This way the server does not need to do any cursor creation/destruction work. Not a big win, but combined with reduced memory usage on the client side, it is a win-win situation.
--
vda
Re: [BUGS] BUG #1756: PQexec eats huge amounts of memory
On Monday 11 July 2005 03:38, Alvaro Herrera wrote:
> On Sun, Jul 10, 2005 at 01:05:10PM +0300, Denis Vlasenko wrote:
> > On Thursday 07 July 2005 20:43, Alvaro Herrera wrote:
> > > Really? I thought what really happened is you had to get the results
> > > one at a time using the pg_fetch family of functions. If that is true,
> > > then it's possible to make the driver fake having the whole table by
> > > using a cursor. (Even if PHP doesn't do it, it's possible for OCI to do
> > > it behind the scenes.)
> >
> > Even without cursor, result can be read incrementally.
> >
> > I mean, query result is transferred over network, right?
> > We just can stop read()'ing before we reached the end of result set,
> > and continue at pg_fetch as needed.
>
> It's not that simple. libpq is designed to read whole result sets at a
> time; there's no support for reading incrementally from the server.
> Other problem is that neither libpq nor the server know how many tuples
> the query will return, until the whole query is executed. Thus,
> pg_numrows (for example) wouldn't work at all, which is a showstopper
> for many PHP scripts.
>
> In short, it can be made to work, but it's not as simple as you put it.

This sounds reasonable.

Consider my posts in this thread as a user wish for:

* the libpq and network protocol to be changed to allow incremental reads of executed queries and multiple outstanding result sets, or, if the above looks insurmountable at the moment,

* a libpq-only change to allow incremental reads of a single outstanding result set. An attempt to use pg_numrows, etc., or an attempt to execute another query would force libpq to read and store all remaining rows in the client's memory (i.e. the current behaviour).
--
vda
Re: [BUGS] BUG #1756: PQexec eats huge amounts of memory
On Wednesday 13 July 2005 17:43, Tom Lane wrote:
> Denis Vlasenko <[EMAIL PROTECTED]> writes:
> > Consider my posts in this thread as a user wish for
> > * libpq and network protocol to be changed to allow for incremental reads
> >   of executed queries and for multiple outstanding result sets,
> > or, if the above thing looks insurmountable at the moment,
> > * a libpq-only change to allow incremental reads of a single outstanding
> >   result set. Attempt to use pg_numrows, etc, or attempt to execute
> >   another query forces libpq to read and store all remaining rows
> >   in client's memory (i.e. current behaviour).
>
> This isn't going to happen because it would be a fundamental change in
> libpq's behavior and would undoubtedly break a lot of applications.
> The reason it cannot be done transparently is that you would lose the
> guarantee that a query either succeeds or fails: it would be entirely
> possible to return some rows to the application and only later get a
> failure.
>
> You can have this behavior today, though, as long as you are willing to
> work a little harder at it --- just declare some cursors and then FETCH
> in convenient chunks from the cursors.

Thanks, I already tried that. It works.
--
vda
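For readers landing here from a search, a minimal sketch of the cursor-based approach Tom describes (the cursor name and batch size are made up for illustration):

    BEGIN;
    DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
    FETCH 1000 FROM big_cur;   -- process this batch, then repeat until no rows come back
    CLOSE big_cur;
    COMMIT;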
[BUGS] BUG #2438: error connect with odbc
The following bug has been logged online:

Bug reference:      2438
Logged by:          MATTASOGLIO DENIS
Email address:      [EMAIL PROTECTED]
PostgreSQL version: 8.1
Operating system:   WINDOWS XP
Description:        error connect with odbc
Details:

Hello world,

I work with PowerBuilder, ODBC and Postgres. I can't manage transactions properly and I get an error message from the ODBC driver. Does somebody know about this error:

CONN ERROR: func=PGAPI_GetInfo, desc='', errnum=215, errmsg='The buffer was too small for the InfoValue.'

Thanks.

The ODBC driver log follows:

Global Options: Version='08.02.0002', fetch=100, socket=4096, unknown_sizes=0, max_varchar_size=254, max_longvarchar_size=8190
disable_optimizer=1, ksqo=0, unique_index=1, use_declarefetch=0
text_as_longvarchar=1, unknowns_as_longvarchar=0, bools_as_char=1 NAMEDATALEN=64
extra_systable_prefixes='dd_;', conn_settings='' conn_encoding='OTHER'
[ PostgreSQL version string = '8.1.2' ]
[ PostgreSQL version number = '8.1' ]
conn=31a3df8, query='select oid, typbasetype from pg_type where typname = 'lo''
[ fetched 0 rows ]
[ Large Object oid = -999 ]
[ Client encoding = 'UTF8' (code = 6) ]
conn=31a3df8, PGAPI_DriverConnect(out)='DSN=gc_pgsql;DATABASE=gc;SERVER=localhost;PORT=5432;UID=postgres;PWD=;CA=d;A6=;A7=100;A8=4096;B0=254;B1=8190;BI=0;C2=dd_;;CX=1b50dbb;A1=7.4-1'
CONN ERROR: func=PGAPI_GetInfo, desc='', errnum=215, errmsg='The buffer was too small for the InfoValue.'
henv=31a3dc0, conn=31a3df8, status=1, num_stmts=16
sock=31a6920, stmts=31a8998, lobj_type=-999
Socket Info --- socket=532, reverse=0, errornumber=0, errormsg='(NULL)'
buffer_in=52062600, buffer_out=52066704
buffer_filled_in=77, buffer_filled_out=0, buffer_read_in=77
conn=31a3df8, query='show max_identifier_length'
[ fetched 1 rows ]
conn=31a9338, PGAPI_DriverConnect( in)='DSN=gc_pgsql;UID=postgres;PWD=;', fDriverCompletion=1
DSN info: DSN='gc_pgsql',server='localhost',port='5432',dbase='gc',user='postgres',passwd='x'
onlyread='0',protocol='7.4',showoid='0',fakeoidindex='0',showsystable='0'
conn_settings='',conn_encoding='OTHER'
translation_dll='',translation_option=''
Global Options: Version='08.02.0002', fetch=100, socket=4096, unknown_sizes=0, max_varchar_size=254, max_longvarchar_size=8190
disable_optimizer=1, ksqo=0, unique_index=1, use_declarefetch=0
text_as_longvarchar=1, unknowns_as_longvarchar=0, bools_as_char=1 NAMEDATALEN=64
extra_systable_prefixes='dd_;', conn_settings='' conn_encoding='OTHER'
[ PostgreSQL version string = '8.1.2' ]
[ PostgreSQL version number = '8.1' ]
conn=31a9338, query='select oid, typbasetype from pg_type where typname = 'lo''
[ fetched 0 rows ]
[ Large Object oid = -999 ]
[ Client encoding = 'UTF8' (code = 6) ]
conn=31a9338, PGAPI_DriverConnect(out)='DSN=gc_pgsql;DATABASE=gc;SERVER=localhost;PORT=5432;UID=postgres;PWD=;CA=d;A6=;A7=100;A8=4096;B0=254;B1=8190;BI=0;C2=dd_;;CX=1b50dbb;A1=7.4-1'
CONN ERROR: func=PGAPI_GetInfo, desc='', errnum=215, errmsg='The buffer was too small for the InfoValue.'
henv=31a3dc0, conn=31a9338, status=1, num_stmts=16
sock=31a8ed8, stmts=31aef68, lobj_type=-999
Socket Info --- socket=540, reverse=0, errornumber=0, errormsg='(NULL)'
buffer_in=52084320, buffer_out=53411912
buffer_filled_in=77, buffer_filled_out=0, buffer_read_in=77
conn=31a9338, query='show max_identifier_length'
[ fetched 1 rows ]
conn=31a9338, query='INSERT INTO public.gc_pc ( exr_id, pc_id, pc_typ, statut ) VALUES ( 8, 24312, 'AA', 'O' )'
conn=31a9338, query='COMMIT'
[BUGS] [Fwd: ERROR: cannot extract system attribute from minimal tuple]
---------- Forwarded message ----------
From: Denis Feklushkin
To: sub...@bugs.debian.org
Subject: ERROR: cannot extract system attribute from minimal tuple
Date: Sat, 05 Feb 2011 09:50:06 +0700

Package: postgresql-9.0
Version: 9.0.3-1
Severity: normal
Tags: upstream

Query:

SELECT currency_id1 FROM bug0.currency_pairs p FOR SHARE;

returns error:

ERROR: cannot extract system attribute from minimal tuple

Schema:

---
CREATE SCHEMA bug0;

SET search_path = bug0, pg_catalog;

CREATE VIEW insider AS
    SELECT true AS insider;

CREATE TABLE pairs (
    currency_id1 text,
    currency_id2 text,
    hidden boolean,
    pair_id integer
);

CREATE VIEW currency_pairs AS
    SELECT p.pair_id, p.currency_id1, p.currency_id2
    FROM (pairs p CROSS JOIN insider i)
    WHERE ((NOT p.hidden) OR i.insider)
    ORDER BY p.pair_id;

COPY pairs (currency_id1, currency_id2, hidden, pair_id) FROM stdin;
BTC RUB f 1
\.
---
[BUGS] additional message to the bug #7499
I now have a VERY strong argument to consider this a bug: there is a sequence of queries, understandable to anyone who knows SQL, which sorts in a different fashion when "LIMIT" is added. I tried the same with a last name starting with "G" (there is also more than one entry with identical surnames there) and it worked OK (the results came back as I expected). This last example brings me to consider it a bug.

 id  | str_last_name
-----+---------------
  83 | GX
 175 | GX

and

 id  | str_last_name
-----+---------------
  83 | GX
(1 row)

select id, str_last_name from tbl_owners_individual order by str_last_name offset 26;

and

select id, str_last_name from tbl_owners_individual order by str_last_name offset 26 limit 1;

corresponding...

and even sorting by id:

select id, str_last_name from tbl_owners_individual where id in (83,175,111,1) order by str_last_name;

 id  | str_last_name
-----+---------------
  83 | GX
 175 | GX
   1 | Kolesnik
 111 | Kolesnik
(4 rows)

select id, str_last_name from tbl_owners_individual where id in (83,175,111,1) order by id;

 id  | str_last_name
-----+---------------
   1 | Kolesnik
  83 | GX
 111 | Kolesnik
 175 | GX
(4 rows)

Anyway, when sorted by id, the record with id "1" appears before the record with id "111".
[BUGS] bug #7499 additional comments
My arguments are these. Even for

select id, str_last_name from tbl_owners_individual where id in (83,175,111,1) order by id;

 id  | str_last_name
-----+---------------
   1 | Kolesnik
  83 | GX
 111 | Kolesnik
 175 | GX
(4 rows)

select id, str_last_name from tbl_owners_individual where id in (83,175,111,1) order by str_last_name;

 id  | str_last_name
-----+---------------
  83 | GX
 175 | GX
   1 | Kolesnik
 111 | Kolesnik
(4 rows)

compare these two results and you see that even if the records with the same last names do not come directly one after the other, "id 1" is always closer to the top than "id 111", and "id 83" is always closer to the top than "id 175". It proves that the sorting by id always remains, at least among records with the same last name.

Suppose a person with basic SQL knowledge wants to learn in practice how a query behaves when the clause "limit 1" is added to it, and sees the results of this query:

select id, str_last_name from tbl_owners_individual order by str_last_name offset 26 limit 1;

 id  | str_last_name
-----+---------------
  83 | GX
(1 row)

and compares the result to the query:

select id, str_last_name from tbl_owners_individual order by str_last_name offset 26;

 id  | str_last_name
-----+---------------
  83 | GX
 175 | GX
...

Then one concludes that the sorting by id remains in both cases. But if one changes these queries to:

select id, str_last_name from tbl_owners_individual order by str_last_name limit 1 offset 53;

 id  | str_last_name
-----+---------------
 111 | Kolesnik
(1 row)

select id, str_last_name from tbl_owners_individual order by str_last_name offset 53;

 id  | str_last_name
-----+---------------
   1 | Kolesnik
 111 | Kolesnik
...

then the person comes to a misunderstanding.

You would suggest that one should read the documentation, in ...PostgreSQL\9.1\doc\postgresql\html\queries-limit.html (where "..." stands for the directory in which PostgreSQL is installed):

"...When using LIMIT, it is important to use an ORDER BY clause that constrains the result rows into a unique order..."

Here it asks to use "ORDER BY", which is done in every query above.

"...The query optimizer takes LIMIT into account when generating query plans, so you are very likely to get different plans (yielding different row orders) depending on what you give for LIMIT and OFFSET. Thus, using different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent results unless you enforce a predictable result ordering with ORDER BY. This is not a bug; ..."

The values given to ORDER BY for the LIMIT/OFFSET queries are not different, as you see. All requirements are fulfilled. The part about the query optimizer taking LIMIT into account when generating query plans would explain that adding "LIMIT" results in some unexplained ordering of the data, but given the part "using different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent results unless you enforce a predictable result ordering with ORDER BY", the query with the results you see here:

select id, str_last_name from tbl_owners_individual where str_last_name='Kolesnik' order by str_last_name limit 2 offset 2;

 id  | str_last_name
-----+---------------
 111 | Kolesnik
 144 | Kolesnik
(2 rows)

should not give inconsistent results, since a predictable ordering with ORDER BY is enforced. The ORDER BY here is predictable and present, but:

select id, str_last_name from tbl_owners_individual order by str_last_name;
...
  49 | Kolesnik
 224 | Kolesnik
 144 | Kolesnik
   1 | Kolesnik
 111 | Kolesnik
...

As you see, offset 2 should have returned "144 | Kolesnik", so the results are "inconsistent". Nowhere on this page of the documentation (as I read it; correct me if I am wrong) is it stated that "inconsistent results" does not apply to the following two queries:

select id, str_last_name from tbl_owners_individual order by str_last_name;
select id, str_last_name from tbl_owners_individual order by str_last_name limit 2 offset 2;

I, and not only I, reading this page of the documentation, will conclude, not without reason, that queries differing only in the presence or absence of "limit 2 offset 2" should return consistent results. Based on this I conclude that it is a bug.

With respect,
Denis Kolesnik.

On 8/22/12, Kevin Grittner wrote:
> Denis Kolesnik wrote:
>> I have now VERY strong argument to consider it is as a bug:
>
> No, you appear to have very strong feelings about it, but you are
> not making an argument that holds water.
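For completeness, the behaviour described above is what the documentation means by a non-unique ordering: rows with equal str_last_name may legally come back in any relative order, and LIMIT/OFFSET can expose that. A minimal sketch of the deterministic version (reusing the column names from the report; not part of the original thread) simply adds a unique tiebreaker:

    select id, str_last_name
    from tbl_owners_individual
    order by str_last_name, id   -- id is unique, so the full ordering is unique
    offset 26 limit 1;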
[BUGS] the bug #7499 is no longer a bug, but my misunderstanding (RESOLVED INVALID)
The bug #7499 is no longer a bug, but my misunderstanding (RESOLVED INVALID).

My arguments are:

> create table tbl_test
> (id int not null primary key,
> str_last_name text not null,
> misc text);
>
> insert into tbl_test values
> (1, 'Kolesnik'),
> (83, 'GX'),
> (111, 'Kolesnik'),
> (175, 'GX');
>
> select id, str_last_name from tbl_test
> where id in (83,175,111,1) order by str_last_name;
>
> update tbl_test set misc = 'x' where id = 1;
>
> select id, str_last_name from tbl_test
> where id in (83,175,111,1) order by str_last_name;
>
> analyze tbl_test;
>
> select id, str_last_name from tbl_test
> where id in (83,175,111,1) order by str_last_name;

Here you are right: after "analyze tbl_test;" the records with the str_last_name value "Kolesnik" are now sorted in a different order, and the same happens for the last name "GX".

> No, it asked to specify ORDER BY such that it "constrains the result
> rows into a unique order" -- which you are not doing in your
> examples. That is exactly what you *should* do to get the results
> you want.

Here you are also right, because it now seems that if "order by id" is missing, the results of a query can vary depending on changes made to a record (or other algorithms).

Let's close this bug.

With deep respect,
Denis Kolesnik.
[BUGS] Bug reporting form :-(
Hi!

The bug reporting form at http://www.postgresql.org/support/submitbug/ does not work: it slows down and fails with "CSRF verification failed" (in Google Chrome, at least).
[BUGS] PQsendQueryParams() causes prepared statement error
In 9.2.0:

PQsendQueryParams() with a multi-command statement (like "select 123; select 456") unexpectedly causes the error:

"cannot insert multiple commands into a prepared statement."

But PQsendQuery() works fine.
Re: [BUGS] PQsendQueryParams() causes prepared statement error
On Wed, 07/11/2012 at 11:48 -0500, Tom Lane wrote:
> Denis Feklushkin writes:
> > in 9.2.0:
> > PQsendQueryParams() with multiple command statement (like "select 123;
> > select 456") unexpectedly causes error:
> >
> > "cannot insert multiple commands into a prepared statement."
>
> This is not a bug, it's intentional (and documented) behavior.

Can you advise how I can work around this? I need a simple form of overlapped processing, as in the manual: "the client can be handling the results of one command while the server is still working on later queries in the same command string."
[BUGS] Beta5 on Linux Alpha
I have just tested your 7.0beta5. My system is RH Linux 6.1 2.2.14 on Alpha. gcc --version says: egcs-2.91.66. I still have two major problems:

1. ./configure doesn't recognize my OS. ./config.guess reports:
   alphaev5-unknown-linux-gnu
   so ./configure chooses 'linux' and fails.

2. I compiled and installed all the stuff successfully using
   ./configure --with-x --with-tcl --with-template=linux_alpha

The environment is set as follows:
PGDATA=/usr/local/pgsql/data
PGLIB=/usr/local/pgsql/lib
LD_LIBRARY_PATH=/usr/local/pgsql/lib

Running initdb causes the following:

> This database system will be initialized with username "postgres".
> This user will own all the data files and must also own the server process.
>
> Creating database system directory /usr/local/pgsql/data
> Creating database system directory /usr/local/pgsql/data/base
> Creating database XLOG directory /usr/local/pgsql/data/pg_xlog
> Creating template database in /usr/local/pgsql/data/base/template1
>
> FATAL: s_lock(2030d400) at spin.c:115, stuck spinlock. Aborting.
>
> FATAL: s_lock(2030d400) at spin.c:115, stuck spinlock. Aborting.
>
> initdb failed.
> Removing /usr/local/pgsql/data.
> Removing temp file /tmp/initdb.32548.

I have faced exactly the same trouble since PostgreSQL 6.4.

Sincerely yours,

Denis N. Stepanov.
BINP SB RAS, Novosibirsk, Russia.
[BUGS] 7.0RC1 on Linux Alpha
The following is confirmed for your 7.0RC1 (except that the stuck spinlock arises at spin.c:116). Please do something about this before the official release comes!

> I just have tested your 7.0beta5. My system is RH Linux 6.1 2.2.14 on Alpha.
> gcc --version says: egcs-2.91.66. I still have two major problems:
>
> 1. ./configure doesn't recognize my OS. ./config.guess reports:
>    alphaev5-unknown-linux-gnu
>    so ./configure chooses 'linux' and fails.
>
> 2. I compiled and installed all the stuff successfully using
>    ./configure --with-x --with-tcl --with-template=linux_alpha
>
> Environment is set as follows:
> PGDATA=/usr/local/pgsql/data
> PGLIB=/usr/local/pgsql/lib
> LD_LIBRARY_PATH=/usr/local/pgsql/lib
>
> Running initdb causes the following:
>
> > This database system will be initialized with username "postgres".
> > This user will own all the data files and must also own the server process.
> >
> > Creating database system directory /usr/local/pgsql/data
> > Creating database system directory /usr/local/pgsql/data/base
> > Creating database XLOG directory /usr/local/pgsql/data/pg_xlog
> > Creating template database in /usr/local/pgsql/data/base/template1
> >
> > FATAL: s_lock(2030d400) at spin.c:115, stuck spinlock. Aborting.
> >
> > FATAL: s_lock(2030d400) at spin.c:115, stuck spinlock. Aborting.
> >
> > initdb failed.
> > Removing /usr/local/pgsql/data.
> > Removing temp file /tmp/initdb.32548.
>
> I faced with exactly the same trouble since PostgreSQL 6.4.
>
> Sincerely yours,
>
> Denis N. Stepanov.
> BINP SB RAS, Novosibirsk, Russia.
[BUGS] postmaster dies on EOF?
r/local/pgsql/bin/postmaster: ServerLoop:handling reading 61
/usr/local/pgsql/bin/postmaster: ServerLoop:handling writing 61
/usr/local/pgsql/bin/postmaster: ServerLoop:handling reading 62
/usr/local/pgsql/bin/postmaster: ServerLoop:handling reading 62
/usr/local/pgsql/bin/postmaster: ServerLoop:handling writing 62
/usr/local/pgsql/bin/postmaster: ServerLoop:handling reading 63
/usr/local/pgsql/bin/postmaster: ServerLoop:handling reading 63
FATAL 1: ReleaseLruFile: No open files available to be closed
proc_exit(0)
shmem_exit(0)
exit(0)

i'm running:

pgsql [30] $ psql -c 'select version();' template1
                               version
---------------------------------------------------------------------
 PostgreSQL 7.0.2 on i386-unknown-openbsd2.6, compiled by gcc 2.95.1
(1 row)

pgsql [31] $

configuration options were:

./configure \
  --with-tcl \
  --prefix=/usr/local/pgsql \
  --with-template=openbsd \
  --with-includes=/usr/local/include \
  --with-libraries=/usr/local/lib \
  --with-tkconfig=/usr/local/lib/tk8.0

Thanks!
--
Denis A. Doroshenko -- VAS/IN group engineer
[Address: Omnitel Ltd., T.Sevcenkos 25, Vilnius 2600, Lithuania]
[Phone: +370 98 63207]
[E-mail: mailto:[EMAIL PROTECTED]]
[BUGS] Output of date_part('quarter', date)
2000-10-16 00:00:00+07 2000-10-16 00:00:00+07 2000-10-17 00:00:00+07 2000-10-17 00:00:00+07 2000-10-18 00:00:00+07 2000-10-18 00:00:00+07 2000-10-18 00:00:00+07 2000-10-19 00:00:00+07 2000-10-19 00:00:00+07 2000-10-19 00:00:00+07 2000-10-20 00:00:00+07 2000-10-20 00:00:00+07 2000-10-23 00:00:00+07 2000-10-23 00:00:00+07 2000-10-24 00:00:00+07 2000-10-24 00:00:00+07 2000-10-25 00:00:00+07 2000-10-25 00:00:00+07 2000-10-26 00:00:00+07 2000-10-26 00:00:00+07 2000-10-26 00:00:00+07 2000-10-27 00:00:00+07 2000-10-27 00:00:00+07 2000-10-30 00:00:00+06 2000-10-30 00:00:00+06 2000-10-31 00:00:00+06 2000-10-31 00:00:00+06 2000-10-31 00:00:00+06 2000-10-31 00:00:00+06 2000-10-31 00:00:00+06 2000-11-02 00:00:00+06 2000-11-02 00:00:00+06 2000-11-03 00:00:00+06 2000-11-03 00:00:00+06 2000-11-03 00:00:00+06 2000-11-04 00:00:00+06 2000-11-04 00:00:00+06 2000-11-04 00:00:00+06 2000-11-04 00:00:00+06 2000-11-06 00:00:00+06 2000-11-08 00:00:00+06 2000-11-09 00:00:00+06 2000-11-09 00:00:00+06 2000-11-10 00:00:00+06 2000-11-10 00:00:00+06 2000-11-10 00:00:00+06 2000-11-13 00:00:00+06 2000-11-14 00:00:00+06 2000-11-14 00:00:00+06 2000-11-15 00:00:00+06 2000-11-15 00:00:00+06 2000-11-16 00:00:00+06 2000-11-16 00:00:00+06 2000-11-16 00:00:00+06 2000-11-17 00:00:00+06 2000-11-17 00:00:00+06 2000-11-20 00:00:00+06 2000-11-20 00:00:00+06 2000-11-21 00:00:00+06 2000-11-21 00:00:00+06 2000-11-22 00:00:00+06 2000-11-22 00:00:00+06 2000-11-22 00:00:00+06 2000-11-23 00:00:00+06 2000-11-23 00:00:00+06 2000-11-24 00:00:00+06 2000-11-24 00:00:00+06 2000-11-27 00:00:00+06 2000-11-27 00:00:00+06 2000-11-27 00:00:00+06 2000-11-28 00:00:00+06 2000-11-28 00:00:00+06 2000-11-28 00:00:00+06 2000-11-29 00:00:00+06 2000-11-29 00:00:00+06 2000-11-30 00:00:00+06 2000-11-30 00:00:00+06 2000-11-30 00:00:00+06 2000-11-30 00:00:00+06 2000-11-30 00:00:00+06 (203 rows) billing=> select InDate from FirmICO where date_part('year', indate)=2000 and date_part('quarter', indate)=4 ORDER BY InDate; indate 2000-12-04 00:00:00+06 2000-12-04 00:00:00+06 2000-12-04 00:00:00+06 2000-12-05 00:00:00+06 2000-12-05 00:00:00+06 2000-12-06 00:00:00+06 2000-12-06 00:00:00+06 2000-12-06 00:00:00+06 2000-12-06 00:00:00+06 2000-12-07 00:00:00+06 2000-12-08 00:00:00+06 2000-12-08 00:00:00+06 2000-12-09 00:00:00+06 2000-12-09 00:00:00+06 2000-12-13 00:00:00+06 2000-12-13 00:00:00+06 2000-12-14 00:00:00+06 2000-12-14 00:00:00+06 2000-12-15 00:00:00+06 2000-12-15 00:00:00+06 2000-12-18 00:00:00+06 2000-12-18 00:00:00+06 2000-12-18 00:00:00+06 2000-12-19 00:00:00+06 2000-12-20 00:00:00+06 2000-12-20 00:00:00+06 2000-12-20 00:00:00+06 2000-12-21 00:00:00+06 2000-12-21 00:00:00+06 2000-12-22 00:00:00+06 2000-12-22 00:00:00+06 2000-12-22 00:00:00+06 2000-12-22 00:00:00+06 2000-12-25 00:00:00+06 2000-12-25 00:00:00+06 2000-12-25 00:00:00+06 2000-12-26 00:00:00+06 2000-12-27 00:00:00+06 2000-12-27 00:00:00+06 2000-12-27 00:00:00+06 2000-12-27 00:00:00+06 2000-12-27 00:00:00+06 2000-12-28 00:00:00+06 2000-12-29 00:00:00+06 2000-12-29 00:00:00+06 (45 rows) Denis Osadchy Russia, Novosibirsk
Re: [BUGS] WIN32 Non Blocking
gt; --- 1114,1120 > if (pqWait(0, 1, conn)) > { >conn->status = CONNECTION_BAD; > + sprintf(FLastError,conn->errorMessage.data); >return 0; > } > break; > *** > *** 1110,1115 > --- 1122,1128 > default: > /* Just in case we failed to set it in PQconnectPoll */ > conn->status = CONNECTION_BAD; > + sprintf(FLastError,conn->errorMessage.data); > return 0; > } > > *** > *** 1208,1222 > { > ACCEPT_TYPE_ARG3 laddrlen; > > - #ifndef WIN32 > - int optval; > - > - #else > - char optval; > - > - #endif > - ACCEPT_TYPE_ARG3 optlen = sizeof(optval); > - > /* >* Write ready, since we've made it here, so the >* connection has been made. > --- 1221,1226 > *** > *** 1226,1235 >* Now check (using getsockopt) that there is not an error >* state waiting for us on the socket. >*/ > > if (getsockopt(conn->sock, SOL_SOCKET, SO_ERROR, > ! (char *) &optval, &optlen) == -1) > ! { >printfPQExpBuffer(&conn->errorMessage, > "PQconnectPoll() -- getsockopt() failed: " > "errno=%d\n%s\n", > --- 1230,1241 >* Now check (using getsockopt) that there is not an error >* state waiting for us on the socket. >*/ > + #ifndef WIN32 > + int optval; > + ACCEPT_TYPE_ARG3 optlen = sizeof(optval); > > if (getsockopt(conn->sock, SOL_SOCKET, SO_ERROR, > ! (char *) &optval, &optlen) == -1){ >printfPQExpBuffer(&conn->errorMessage, > "PQconnectPoll() -- getsockopt() failed: " > "errno=%d\n%s\n", > *** > *** 1247,1252 > --- 1253,1272 >connectFailureMessage(conn, "PQconnectPoll()", optval); >goto error_return; > } > + #else > + char far optval[8]; > + ACCEPT_TYPE_ARG3 optlen = sizeof(optval); > + > + int OptResult=getsockopt(conn->sock, SOL_SOCKET, SO_ERROR,optval, > &optlen); > + if (OptResult==SOCKET_ERROR){ > + printfPQExpBuffer(&conn->errorMessage, > + "PQconnectPoll() -- getsockopt() failed: " > +"errno=%i\n", > +WSAGetLastError()); > + connectFailureMessage(conn, "PQconnectPoll()", OptResult); > + goto error_return; > + } > + #endif > > /* Fill in the client address */ > laddrlen = sizeof(conn->laddr); > *** > *** 1929,1934 > --- 1949,1955 > #endif >if (conn->sock >= 0) > #ifdef WIN32 > + //WSACleanup(); > closesocket(conn->sock); > #else > close(conn->sock); > *** > *** 2699,2706 > char * > PQerrorMessage(const PGconn *conn) > { >if (!conn) > ! return "PQerrorMessage: conn pointer is NULL\n"; > >return conn->errorMessage.data; > } > --- 2720,2732 > char * > PQerrorMessage(const PGconn *conn) > { > + //char ErrBuffer[200]; >if (!conn) > ! #ifdef WIN32 > !return FLastError; > ! #else > !return "PQerrorMessage: conn pointer is NULL\n"; > ! #endif > >return conn->errorMessage.data; > } > > > > > ---(end of broadcast)--- > TIP 3: if posting/reading through Usenet, please send an appropriate > subscribe-nomail command to [EMAIL PROTECTED] so that your > message can get through to the mailing list cleanly -- Denis A. Doroshenko [GPRS engineer] .-._|_ | [Omnitel Ltd., T.Sevcenkos st. 25, Vilnius, Lithuania] | | _ _ _ .| _ | [Phone: +370 9863486 E-mail: [EMAIL PROTECTED]] |_|| | || |||(/_|_ ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/users-lounge/docs/faq.html
Re: [BUGS] Bug #878: different format of float values in 7.2.and
On Mon, 20 Jan 2003, Tom Lane wrote:

TL> [EMAIL PROTECTED] writes:
TL> > The following line:
TL> > SELECT 1875/1000.0
TL> > produces different results. In 7.2.3 it is:
TL> >  ?column?
TL> > ----------
TL> >     1.875
TL>
TL> > while in 7.3.1 it is:
TL> >  ?column?
TL> > ----------
TL> >  1.87500
TL>
TL> The above expression is taken as NUMERIC datatype in 7.3, rather than
TL> FLOAT8 as it was in 7.2. To get the same output as before, try
TL> SELECT 1875/1000.0::float8;

Thanks. The main thing I understood is that it was done intentionally. Is there a kind of document (a mail message maybe) that describes the intention to make typecasts more strict and/or the peculiarities of such changes? I guess many users have faced similar problems; probably it was already explained somewhere.

--
Regards, Den.
Re: [BUGS] BUG #1044: snprintf() shipped with PostgreSQL is not
On Thu, 8 Jan 2004, Tom Lane wrote:

TL> "PostgreSQL Bugs List" <[EMAIL PROTECTED]> writes:
TL> > Some OSes lack proper snprintf()/vsnprintf() functions so PostgreSQL includes
TL> > its own version (src/port/snprintf.c) during building. Unfortunately, this
TL> > version of snprintf() is not reentrant (it uses global vars to keep internal
TL> > state), so for example running libpq-based concurrent applications (threads)
TL> > causes libpq functions to fail sometimes.
TL>
TL> What platforms have workable thread support but not snprintf? I think
TL> this change is not likely to accomplish much except clutter the snprintf
TL> code ...

I discovered this problem while porting libpq (the client interface) to RTEMS OS (rtems.org). This is an embedded OS and, as with many other embedded OSes, it lacks non-ANSI C functions (at least the RTEMS image from my vendor does not have them). The snprintf()/vsnprintf() functions are not ANSI-compliant, so they should be used with care.

This OS has POSIX thread support, though I did not use it (i.e. I keep all PgSQL activity in one thread, so the code was compiled without --enable-thread-safety). The difficulty I observed is this: even if I keep PgSQL calls serialized, calling bare snprintf() from some other thread would likely cause a concurrent PgSQL call to fail. Quite a strange result for such an inoffensive action, don't you think so?

Anyway, I have fixed this for my code, but if you think the change is inappropriate for the mainline then let it be. I guess you will hear some more complaints as there are more ports to embedded platforms.

TL> regards, tom lane

--
Thanks, Denis.
Re: [BUGS] BUG #1044: snprintf() shipped with PostgreSQL is not
On Sun, 11 Jan 2004, Peter Eisentraut wrote:

PE> Denis N. Stepanov wrote:
PE> > snprintf()/vsnprintf() functions are not ANSI-compliant
PE>
PE> Yes, they are.

Sorry, I was talking about ANSI X3.159-1989, which certainly does not declare snprintf(). In practice it is difficult to count on, say, a C99-compliant C runtime, especially for embedded systems.

Thanks, Denis.
[BUGS] overlapping rules can let you break referential integrity
Step by step how to reproduce:

-- nodes
CREATE TABLE nodes
(
  node_id serial,
  CONSTRAINT nodes_pkey PRIMARY KEY (node_id)
) WITHOUT OIDS;

-- domains
CREATE TABLE domains
(
  domain_id int NOT NULL,
  domain_is_public bool NOT NULL default false,
  CONSTRAINT domains_pkey PRIMARY KEY (domain_id),
  CONSTRAINT domains_domain_id_fkey FOREIGN KEY (domain_id)
    REFERENCES nodes (node_id) MATCH SIMPLE
    ON UPDATE CASCADE ON DELETE CASCADE
) WITHOUT OIDS;

-- drop_domain: drop the node and rely on the delete cascade
CREATE OR REPLACE RULE drop_domain AS
ON DELETE TO domains
DO INSTEAD
  DELETE FROM nodes WHERE node_id = OLD.domain_id;

-- public_domain_delete_protect: add delete protection
CREATE OR REPLACE RULE public_domain_delete_protect AS
ON DELETE TO domains
WHERE domain_is_public = true
DO INSTEAD NOTHING;

-- version check
select version();
-- 8.1.1 on i686-pc-mingw32 yada yada (standard binary on WinXP SP2)

-- create a node
insert into nodes default values;
-- 1 row affected, normal

-- create a domain
insert into domains (domain_id, domain_is_public)
values (currval('nodes_node_id_seq'), true);
-- 1 row affected, normal

-- delete the domain
delete from domains;
-- 1 row affected, not normal -- 0 expected because the public domain is write protected

-- lookup nodes
select * from nodes;
-- 0 rows, normal since the write protection didn't work

-- lookup domains
select * from domains;
-- 1 row -- ouch! this piece of data is now corrupt

I'm not familiar with the pgsql internals, but it looks as if:

1. delete on domains
2. rewritten as delete on nodes via drop_domain
3. triggers cascade delete on domains via foreign key
4. rewritten as do nothing via public_domain_delete_protect
   <-- missing integrity check and/or rollback here (things work fine without this step)

Best,
Denis
Re: [BUGS] overlapping rules can let you break referential integrity
Pardon if I insist, but accepting data that contradicts a foreign key constraint without raising an error is a bug and by no means a feature.

Denis

> -----Original Message-----
> From: Tom Lane [mailto:[EMAIL PROTECTED]]
> Sent: Friday, February 10, 2006 11:35 PM
> To: Denis de Bernardy
> Cc: pgsql-bugs@postgresql.org
> Subject: Re: [BUGS] overlapping rules can let you break referential integrity
>
> "Denis de Bernardy" <[EMAIL PROTECTED]> writes:
> > Step by step how to reproduce:
>
> This is not a bug, it's a feature.
>
> regards, tom lane
[BUGS] expanded mode + wrapping in psql
Wrapping apparently doesn't want to work in expanded mode... Lengthier discussion here: http://stackoverflow.com/questions/6306063/

test=# \t
Showing only tuples.
test=# \pset border 0
Border style is 0.
test=# \pset format wrapped
Output format is wrapped.
test=# \pset columns 20
Target width for "wrapped" format is 20.

This works as expected:

test=# select id, name from test;
2 abc abc abc abc .
abc abc abc abc .
abc abc abc abc .
(etc.)

This doesn't:

test=# \x
Expanded display is on.
test=# select id, name from test;
id   2
name abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc

Expected result would be more like this, since wrapping for a column width of 20 was set:

test=# select id, name from test;
id   2
name abc abc abc abc .
abc abc abc abc .
abc abc abc abc .
(etc.)

D.
Re: [BUGS] PG regression with row comparison when btree_gist is enabled (BUG)
I only did some cursory tests, but the patch (applied to MacPorts' beta2 distribution) seems to be working on my dev box (OS X / Snow Leopard). I'll report back if I run into oddities further down the road.

Thanks a lot!
Denis

> From: Jeff Davis
> To: o...@sai.msu.su
> Cc: Denis de Bernardy ; Teodor Sigaev ; pgsql-bugs@postgresql.org
> Sent: Sunday, June 19, 2011 7:23 PM
> Subject: Re: PG regression with row comparison when btree_gist is enabled (BUG)
>
> On Sat, 2011-06-18 at 13:20 -0700, Jeff Davis wrote:
> > Interesting problem... the bug is in get_op_btree_interpretation() which
> > has code like this:
> >
> >     /*
> >      * If we can't find any opfamily containing the op, perhaps it is a <>
> >      * operator. See if it has a negator that is in an opfamily.
> >      */
> >     op_negated = false;
> >     if (catlist->n_members == 0)
> >
> > However, that's a bogus test, because btree_gist puts <> into an
> > opfamily. Thus, catlist->n_members == 1 even though we really do need to
> > look for the negator. Really, we need to unconditionally search for the
> > operator as well as unconditionally searching for the negator.
>
> Patch attached.
>
> Regards,
>     Jeff Davis
Re: [BUGS] BUG #8226: Inconsistent unnesting of arrays
The actual query was something like:

select id, person, unnest(groups) as grp from people …

where groups is a crazy column containing an array that needed to be joined with another table. In this case, you cannot use your suggested solution, which would look like this:

select id, person, grp from people, unnest(groups) as grp

Admittedly, there are other ways to rewrite the above, but — if I may — that's entirely beside the point of the bug report.

The Stack Overflow question got me curious about what occurs when two separate arrays are unnested. Testing revealed the inconsistency, which I tend to view as a bug.

This statement works as expected, unnesting the first array, then cross joining the second accordingly:

>> select 1 as a, unnest('{2,3}'::int[]) as b, unnest('{4,5,6}'::int[])

This seems to only unnest one of the arrays, and match each element with the element that has the same subscript in the other array:

>> select 1 as a, unnest('{2,3}'::int[]) as b, unnest('{4,5}'::int[])

Methinks the behavior should be consistent. It should always do one (presumably like in the first statement) or the other (which leads to undefined behavior in the first statement). Or it should raise some kind of warning, e.g. "you're using undocumented/unsupported/deprecated/broken syntactic sugar".

Denis

On Jun 12, 2013, at 12:05 PM, Greg Stark wrote:

> On Wed, Jun 12, 2013 at 9:58 AM, wrote:
>> denis=# select 1 as a, unnest('{2,3}'::int[]) as b, unnest('{4,5}'::int[])
>
> set returning functions in the target list of the select don't behave
> the way you're thinking. What you probably want to do is move the
> unnest() to the FROM clause:
>
> select 1 as a, b, c from unnest('{2,3}'::int[]) as b(b),
> unnest('{4,5}'::int[]) as c(c)
>
> --
> greg
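For context on the observed behaviour (my reading, not part of the original thread): with set-returning functions in the target list, older PostgreSQL versions repeat rows up to the least common multiple of the functions' row counts, so {2,3} with {4,5,6} yields 6 rows that look like a cross join, while {2,3} with {4,5} yields only 2 "zipped" rows. Moving the unnests into FROM, as in the quoted reply, gives the plain cross join regardless of array lengths; a sketch:

    select 1 as a, b, c
    from unnest('{2,3}'::int[]) as b(b),
         unnest('{4,5}'::int[]) as c(c);
    -- 4 rows: every combination of b and c

    select 1 as a, b, c
    from unnest('{2,3}'::int[]) as b(b),
         unnest('{4,5,6}'::int[]) as c(c);
    -- 6 rows: again, every combination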