[BUGS] libber library not found on RC1
Hi! I updated the server to the RC1 version, and after calling pg_ctl to start it I get this error:

opt/PostgreSQL/9.1/bin/pg_ctl: error while loading shared libraries: liblber-2.3.so.0: cannot open shared object file: No such file or directory

But I have version 2.4 of this library, and I see in ldd pg_ctl that this library was added in RC1. Why can't I start the server without LDAP auth?

--
View this message in context: http://postgresql.1045698.n5.nabble.com/libber-library-not-found-on-RC1-tp4733621p4733621.html
Sent from the PostgreSQL - bugs mailing list archive at Nabble.com.

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs
[BUGS] BUG #6176: pg_dump dumps pg_catalog tables
The following bug has been logged online:

Bug reference:      6176
Logged by:          Chander Ganesan
Email address:      chan...@otg-nc.com
PostgreSQL version: 9.0.4
Operating system:   CentOS 5.6
Description:        pg_dump dumps pg_catalog tables
Details:

Normally, the pg_dump command ignores the pg_catalog tables when performing a dump. However, when given the '--table' argument it fails to ignore them.

For example, suppose I had tables p1-p10 that I wanted to dump. I could use the following command:

    pg_dump test_db --table 'p*'

This command would dump the requested tables, but it would also dump all the tables (in all schemas) whose names start with 'p'. Generally speaking, there are no "excluded schemas" when using the pg_dump command with the '--table' argument.

It is my belief that the pg_catalog tables should almost always be ignored (lest restores fail miserably).
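[Editor's note] The reported behavior is easy to model: with an unqualified --table pattern, the wildcard is applied to table names in every schema. A hypothetical Python sketch (fnmatch stands in for pg_dump's actual pattern matching, and the table list is made up for illustration):

```python
from fnmatch import fnmatchcase

# Made-up catalog of (schema, table) pairs; the pg_catalog names are real
# system tables, the public ones stand in for the user's p1-p10 tables.
tables = [
    ("public", "p1"),
    ("public", "p10"),
    ("pg_catalog", "pg_class"),
    ("pg_catalog", "pg_proc"),
]

# An unqualified --table pattern is matched against the table name alone,
# with no regard to schema -- so 'p*' also sweeps up pg_catalog tables.
matched = [(s, t) for s, t in tables if fnmatchcase(t, "p*")]
print(matched)  # all four tables match, including the pg_catalog ones
```

Qualifying the pattern as 'public.p*' avoids the problem, which is the workaround until pg_dump itself excludes system schemas.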
Re: [BUGS] libber library not found on RC1
alexondi writes:
> I updated the server to the RC1 version, and after calling pg_ctl to
> start it I get this error:

> opt/PostgreSQL/9.1/bin/pg_ctl: error while loading shared libraries:
> liblber-2.3.so.0: cannot open shared object file: No such file or directory

> But I have version 2.4 of this library, and I see in ldd pg_ctl that this
> library was added in RC1. Why can't I start the server without LDAP auth?

How did you install RC1? The only obvious explanation for this error is that you are trying to use somebody else's executables that were built for a different environment than you have (specifically, ones wanting different revision numbers of some shared libraries). If so, you may need to build the software locally to get something that will work for you.

			regards, tom lane
Re: [BUGS] libber library not found on RC1
On Thu, Aug 25, 2011 at 3:17 PM, Tom Lane wrote:
> alexondi writes:
>> I updated the server to the RC1 version, and after calling pg_ctl to
>> start it I get this error:
>>
>> opt/PostgreSQL/9.1/bin/pg_ctl: error while loading shared libraries:
>> liblber-2.3.so.0: cannot open shared object file: No such file or directory
>>
>> But I have version 2.4 of this library, and I see in ldd pg_ctl that this
>> library was added in RC1. Why can't I start the server without LDAP auth?
>
> How did you install RC1? The only obvious explanation for this error is
> that you are trying to use somebody else's executables that were built
> for a different environment than you have (specifically, wanting
> different revision numbers of some shared libraries). If so, you may
> need to build the software locally to get something that will work for
> you.

It's an installer bug that's being worked on at the moment (we added LDAP support, and ran into an rpath issue).

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [BUGS] BUG #6176: pg_dump dumps pg_catalog tables
"Chander Ganesan" writes:
> Normally, the pg_dump command ignores the pg_catalog tables when
> performing a dump. However, when given the '--table' argument it fails
> to ignore them.

> For example, suppose I had tables p1-p10 that I wanted to dump. I could
> use the following command:
> pg_dump test_db --table 'p*'

> This command would dump the requested tables, but it would also dump all
> the tables (in all schemas) whose names start with 'p'. Generally
> speaking, there are no "excluded schemas" when using the pg_dump command
> with the '--table' argument.

> It is my belief that the pg_catalog tables should almost always be
> ignored (lest restores fail miserably).

This proposal seems overly simplistic to me: if we did this, it would be impossible to use pg_dump to dump a catalog's contents at all. (I don't care whether the resulting script is restorable; sometimes you just need to see what's actually in pg_class.)

I wonder whether it would be helpful to provide a default setting for --exclude-schema that lists pg_catalog, information_schema, etc. If we approached it that way, it'd be possible to override the default at need. However, I'm not sure how that switch interacts with wildcard --table specs ...

			regards, tom lane
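[Editor's note] A rough sketch of how the default-exclude idea might behave, in Python rather than pg_dump's C; the function, default list, and table data are all hypothetical, purely to illustrate the proposed semantics of an overridable default:

```python
from fnmatch import fnmatchcase

# Hypothetical default, overridable the way a default --exclude-schema
# setting might be: schemas here are skipped unless the caller says otherwise.
DEFAULT_EXCLUDED_SCHEMAS = ("pg_catalog", "information_schema")

def select_tables(tables, table_pattern, excluded_schemas=DEFAULT_EXCLUDED_SCHEMAS):
    """Apply a --table-style wildcard, then drop excluded schemas."""
    return [
        (schema, name)
        for schema, name in tables
        if fnmatchcase(name, table_pattern) and schema not in excluded_schemas
    ]

tables = [("public", "p1"), ("pg_catalog", "pg_class")]

# Default: catalog tables are filtered out even though they match 'p*'.
print(select_tables(tables, "p*"))  # [('public', 'p1')]

# Overriding the default keeps it possible to dump a catalog table.
print(select_tables(tables, "p*", excluded_schemas=()))
```

The second call shows why an overridable default addresses Tom's objection: dumping pg_class stays possible, it just requires asking for it explicitly.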
Re: [BUGS] BUG #6176: pg_dump dumps pg_catalog tables
On 8/25/11 10:52 AM, Tom Lane wrote:
> "Chander Ganesan" writes:
>> Normally, the pg_dump command ignores the pg_catalog tables when
>> performing a dump. However, when given the '--table' argument it fails
>> to ignore them.
>> [...]
>> It is my belief that the pg_catalog tables should almost always be
>> ignored (lest restores fail miserably).
>
> This proposal seems overly simplistic to me: if we did this, it would be
> impossible to use pg_dump to dump a catalog's contents at all. (I don't
> care whether the resulting script is restorable; sometimes you just need
> to see what's actually in pg_class.)

Hence the "almost always" in my proposal - I agree with you. I think the common use case would want to preclude the export of pg_catalog tables, and most folks reading the documentation would end up getting confusing output in their dumps. At the very least, the documentation might include a caveat to warn users of the side effect, especially since the default behavior is to exclude pg_catalog.

> I wonder whether it would be helpful to provide a default setting for
> --exclude-schema that lists pg_catalog, information_schema, etc. If we
> approached it that way, it'd be possible to override the default at
> need. However, I'm not sure how that switch interacts with wildcard
> --table specs ...

I tried that; at present it seems that the --exclude-schema flag is ignored when the --table flag is used. I'd love to see those work together (i.e., all tables starting with 'p' except those in schema 'old_data').

I'd hate to see a command like this result in pg_catalog being dumped (which might be more backwards-incompatible than my suggestion below):

    pg_dump test_db --exclude-schema old_stuff

How about making those schemas *always* excluded except when specified in the '--schema' flag (so one could explicitly say "include pg_catalog")? In that case the use case to dump pg_class would be:

    pg_dump test_db --schema pg_catalog --table pg_class

Such a flag wouldn't break the existing behavior (which is, by default, to exclude system schemas).

Chander
Re: [BUGS] BUG #6170: hot standby wedging on full-WAL disk
On Mon, Aug 22, 2011 at 2:57 AM, Heikki Linnakangas wrote:
> So the problem is that walreceiver merrily writes so much future WAL that
> it runs out of disk space? A limit on the maximum number of future WAL
> files to stream ahead would fix that, but I can't get very excited about
> it. Usually you do want to stream as much ahead as you can, to ensure
> that the WAL is safely on disk on the standby, in case the master dies.
> So the limit would need to be configurable.

It seems like perhaps what we really need is a way to make replaying WAL (and getting rid of now-unneeded segments) take priority over getting new ones.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
[BUGS] BUG #6177: Size field type TEXT
The following bug has been logged online:

Bug reference:      6177
Logged by:          Claudio Oliveira
Email address:      claudio...@hotmail.com
PostgreSQL version: 9.1rc1
Operating system:   Windows 7
Description:        Size field type TEXT
Details:

Hello,

I used version 8.4 and had no issues with the field type TEXT. In version 9.1rc1 it is limited to 4680 characters. Where do I change that size?

Thank you.
Re: [BUGS] BUG #6177: Size field type TEXT
"Claudio Oliveira" wrote:

> I used version 8.4 and had no issues with the field type TEXT.
>
> In version 9.1rc1 it is limited to 4680 characters.
>
> Where do I change that size?

test=# create table txt (val text);
CREATE TABLE
test=# insert into txt values (repeat('long string', 100));
INSERT 0 1
test=# select char_length(val) from txt;
 char_length
-------------
        1100
(1 row)

What makes you think it's limited to 4680 characters?

-Kevin
Re: [BUGS] BUG #6170: hot standby wedging on full-WAL disk
On 25.08.2011 19:11, Robert Haas wrote:
> On Mon, Aug 22, 2011 at 2:57 AM, Heikki Linnakangas wrote:
>> So the problem is that walreceiver merrily writes so much future WAL
>> that it runs out of disk space? A limit on the maximum number of future
>> WAL files to stream ahead would fix that, but I can't get very excited
>> about it. Usually you do want to stream as much ahead as you can, to
>> ensure that the WAL is safely on disk on the standby, in case the
>> master dies. So the limit would need to be configurable.
>
> It seems like perhaps what we really need is a way to make replaying
> WAL (and getting rid of now-unneeded segments) take priority over
> getting new ones.

With the defaults, we start to kill queries after a while if they get in the way of WAL replay; Daniel had specifically disabled that. Of course, even with the query-killer disabled, it's possible for WAL replay to fall so badly behind that you fill the disk, so a backstop might be useful anyway, although that seems a lot less likely in practice, and if your standby can't keep up you're in trouble anyway.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
Re: [BUGS] BUG #6177: Size field type TEXT
Kevin Grittner wrote:
> "Claudio Oliveira" wrote:
>
>> I used version 8.4 and had no issues with the field type TEXT.
>>
>> In version 9.1rc1 it is limited to 4680 characters.
>>
>> Where do I change that size?
>
> test=# create table txt (val text);
> CREATE TABLE
> test=# insert into txt values (repeat('long string', 100));
> INSERT 0 1
> test=# select char_length(val) from txt;
>  char_length
> -------------
>         1100
> (1 row)
>
> What makes you think it's limited to 4680 characters?

My guess is there is an index on the column:

test=> create table txt (val text);
CREATE TABLE
test=> create index i_txt on txt(val);
CREATE INDEX
test=> insert into txt values (repeat('long string', 100));
ERROR:  index row requires 125944 bytes, maximum size is 8191

You should probably not index long columns but rather index an md5 hash of the value.

--
  Bruce Momjian        http://momjian.us
  EnterpriseDB         http://enterprisedb.com

  + It's impossible for everything to be true. +
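[Editor's note] The reason hashing helps is that a digest has a fixed size no matter how long the indexed value is, so the index entry always fits under the btree row-size limit. A quick Python illustration, with hashlib's md5 standing in for PostgreSQL's md5() function:

```python
import hashlib

# A value far too large to fit in a btree index entry (limit ~8191 bytes)...
big_value = "long string" * 20000  # 11 chars x 20000 = 220,000 characters

# ...hashes down to a fixed-size digest, which indexes comfortably.
digest = hashlib.md5(big_value.encode()).hexdigest()
print(len(big_value))  # 220000
print(len(digest))     # 32 -- md5 hex digests are always 32 characters
```

The SQL equivalent would be an expression index along the lines of `create index txt_md5_idx on txt (md5(val))`, queried with `where md5(val) = md5('...')`. Note this supports only exact-match lookups (no range scans or LIKE), and equality via a hash is in principle subject to collisions.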
Re: [BUGS] BUG #6169: a non fatal error occured during cluster.... problem with environment variables
On Fri, Aug 19, 2011 at 2:19 PM, ondro wrote:
>
> The following bug has been logged online:
>
> Bug reference:      6169
> Logged by:          ondro
> Email address:      balu...@horizon.sk
> PostgreSQL version: 8.4.8
> Operating system:   Windows XP
> Description:        a non fatal error occurred during cluster
> initialisation; problem with environment variables
> Details:
>
> During installation the error "a non fatal error occurred during cluster
> initialisation" appears, and after installation PostgreSQL does not work.
>
> The problem is that the PostgreSQL installation expects the path
> "c:\windows\system32" to be in the Windows PATH environment variable. If
> c:\windows\system32 is not in the path, the installation fails with this
> error.

Why on earth would the system32 directory not be in the path? That's the *nix equivalent of not having /bin and /sbin in the path.

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [BUGS] BUG #6170: hot standby wedging on full-WAL disk
On Thu, Aug 25, 2011 at 10:16 AM, Heikki Linnakangas wrote:
> On 25.08.2011 19:11, Robert Haas wrote:
>> It seems like perhaps what we really need is a way to make replaying
>> WAL (and getting rid of now-unneeded segments) take priority over
>> getting new ones.
>
> With the defaults, we start to kill queries after a while if they get in
> the way of WAL replay; Daniel had specifically disabled that. Of course,
> even with the query-killer disabled, it's possible for WAL replay to
> fall so badly behind that you fill the disk, so a backstop might be
> useful anyway, although that seems a lot less likely in practice, and if
> your standby can't keep up you're in trouble anyway.

I do think it's not a bad idea to have Postgres prune unnecessary WAL at least enough so it can get the WAL segment it wants -- basically unsticking the recovery command so progress can be made. Right now someone (like me) has to go and trim away what appear to be unnecessary WAL segments in (what is currently) a manual process.

Also, I'm not sure if the segments that are downloaded via restore_command during the fall-behind time are "counted" towards replay when un-sticking after a restart of postgres: in particular, I believe that PG will want to copy the segments a second time, although I'm not 100% sure right now. Regardless, not being able to restart properly or make progress after killing the offensive backend are unhappy things.

More thoughts?

--
fdr
Re: [BUGS] BUG #6177: Size field type TEXT
Hello,

I was doing the test in pgAdmin. It must be a bug in pgAdmin; I'm sorry, I had not tested in psql.

create table txt (val text);
insert into txt values (repeat('x', 4500));
select char_length(val) from txt;
insert into txt values (repeat('x', 4685));
select char_length(val) from txt;

select *, length(val), val is null, (val ~ 'x') from txt;

Thank you.

Claudio Oliveira
http://www.msisolucoes.com.br

> From: br...@momjian.us
> Subject: Re: [BUGS] BUG #6177: Size field type TEXT
> To: kevin.gritt...@wicourts.gov
> Date: Thu, 25 Aug 2011 13:20:22 -0400
> CC: claudio...@hotmail.com; pgsql-bugs@postgresql.org
>
> Kevin Grittner wrote:
> > "Claudio Oliveira" wrote:
> >
> > > I used version 8.4 and had no issues with the field type TEXT.
> > >
> > > In version 9.1rc1 it is limited to 4680 characters.
> > >
> > > Where do I change that size?
> >
> > test=# create table txt (val text);
> > CREATE TABLE
> > test=# insert into txt values (repeat('long string', 100));
> > INSERT 0 1
> > test=# select char_length(val) from txt;
> >  char_length
> > -------------
> >         1100
> > (1 row)
> >
> > What makes you think it's limited to 4680 characters?
>
> My guess is there is an index on the column:
>
> test=> create table txt (val text);
> CREATE TABLE
> test=> create index i_txt on txt(val);
> CREATE INDEX
> test=> insert into txt values (repeat('long string', 100));
> ERROR:  index row requires 125944 bytes, maximum size is 8191
>
> You should probably not index long columns but rather index an md5 hash
> of the value.
Re: [BUGS] BUG #6177: Size field type TEXT
Claudio Oliveira wrote:

> I was doing the test in pgAdmin. It must be a bug in pgAdmin; I'm sorry,
> I had not tested in psql.
>
> create table txt (val text);
> insert into txt values (repeat('x', 4500));
> select char_length(val) from txt;
> insert into txt values (repeat('x', 4685));
> select char_length(val) from txt;
>
> select *, length(val), val is null, (val ~ 'x') from txt;

Hmm. Maybe you should try taking this to the pgadmin-support list. Your script came out sort of mangled in email, and apparently has funny characters in it because I couldn't copy/paste and modify -- I had to retype. But this runs fine in psql for me (printing of the hundreds of x's omitted from the post, but that looks OK to me, too):

test=# create table txt (val text);
CREATE TABLE
test=# insert into txt values (repeat('x', 4500));
INSERT 0 1
test=# insert into txt values (repeat('x', 4685));
INSERT 0 1
test=# select char_length(val), val is null, (val ~ 'x') from txt;
 char_length | ?column? | ?column?
-------------+----------+----------
        4500 | f        | t
        4685 | f        | t
(2 rows)

-Kevin
[BUGS] BUG #6178: date_trunc : interval units "week" not supported contradicts documentation
The following bug has been logged online:

Bug reference:      6178
Logged by:          Noah Hamerslough
Email address:      n...@pcc.com
PostgreSQL version: 8.4
Operating system:   Windows Vista
Description:        date_trunc: interval units "week" not supported
contradicts documentation
Details:

http://www.postgresql.org/docs/8.4/static/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC

The documentation for date_trunc('field', source) lists 'week' as a valid value for 'field'. However, if the source is an interval, 'week' is not supported:

    select date_trunc('week', '1 month 15 days'::interval);
    ERROR:  interval units "week" not supported
    SQL state: 0A000

Either 'week' should be supported or the documentation should be updated to reflect that it is not.
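[Editor's note] One way to see why intervals are the hard case: an interval is stored as separate months, days, and (micro)seconds fields, and a week is exactly 7 days, but a month has no fixed length in days, so truncating '1 month 15 days' to weeks has no single obvious answer. A toy Python sketch of one possible semantics -- an illustration only, not what PostgreSQL does:

```python
# A toy interval as (months, days, seconds), mirroring the months/days/time
# split that PostgreSQL uses internally for the interval type.

def trunc_weeks(months, days, seconds):
    """Truncate the day/time part down to whole weeks; the months field is
    left alone, since a month has no fixed length in days."""
    return (months, (days // 7) * 7, 0)

# '1 month 15 days' -> months=1, days=15: is '1 month 14 days' the right
# answer, or should the month contribute ~4 weeks as well? The ambiguity
# is arguably why date_trunc refuses 'week' for intervals.
print(trunc_weeks(1, 15, 0))  # (1, 14, 0)
```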