[BUGS] BUG #7917: Error while migrating data

2013-03-05 Thread renjith . pt
The following bug has been logged on the website:

Bug reference:  7917
Logged by:  Renjith
Email address:  renjith...@galfar.com
PostgreSQL version: 9.1.0
Operating system:   CentOS
Description:

I am using the Alfresco application with PostgreSQL 9.1. It throws an error
after data is migrated from another PostgreSQL 9.1 database. Kindly suggest a
solution.

Log details:

[org.springframework.extensions.webscripts.AbstractRuntime] Exception from
executeScript - redirecting to status template error: SqlMapClient
operation; SQL [];   
--- The error occurred in
alfresco/ibatis/#resource.dialect#/avm-common-SqlMap.xml.  
--- The error occurred while applying a parameter map.  
--- Check the alfresco.avm.parameter_IdPatternMap.  
--- Check the statement (query failed).  
--- Cause: org.postgresql.util.PSQLException: ERROR: invalid escape string
  Hint: Escape string must be empty or one character.; nested exception is
com.ibatis.common.jdbc.exception.NestedSQLException:   
--- The error occurred in
alfresco/ibatis/#resource.dialect#/avm-common-SqlMap.xml.  
--- The error occurred while applying a parameter map.  
--- Check the alfresco.avm.parameter_IdPatternMap.  
--- Check the statement (query failed).  
--- Cause: org.postgresql.util.PSQLException: ERROR: invalid escape string
  Hint: Escape string must be empty or one character.
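
For reference, PostgreSQL raises this error whenever the ESCAPE clause of a
LIKE expression is given more than one character. The failing Alfresco query
is not visible in the log, so the statements below are only a minimal sketch
that reproduces the same error in psql, not the actual query:

-- works: the escape string is a single character
SELECT 'a_b' LIKE 'a!_%' ESCAPE '!';

-- fails: the escape string is two characters long
SELECT 'a_b' LIKE 'a!_%' ESCAPE '!!';
-- ERROR:  invalid escape string
-- HINT:  Escape string must be empty or one character.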






Re: [BUGS] BUG #7914: pg_dump aborts occasionally

2013-03-05 Thread Shin-ichi MORITA

Hi Tom,



s-mor...@beingcorp.co.jp writes:

I encountered a strange error during pg_dump as below:

pg_dump: Dumping the contents of table "t_file_data" failed: PQgetCopyData() failed.
pg_dump: Error message from server: lost synchronization with server: got
message type "d", length 21861


Hm.  Can you create a reproducible test case for this?


This issue does not always happen.
It seems to depend on the environment and the load.
My test environment is actually a virtual one as follows:
   Hypervisor: VMware vSphere Hypervisor 5.1
   Allocated vCPUs: 1
   Allocated Memory: 4GB
   OS: Windows Server 2012

1. Create test data using the script below.
2. Run pg_dump:
  pg_dump --host localhost --port 5432 --username postgres
  --format custom --verbose --file "..."
  --table public.t_data test_db


CREATE TABLE t_data
(
 c_id integer NOT NULL,
 c_data bytea,
 CONSTRAINT t_data_pkey PRIMARY KEY (c_id)
)
WITH (
 OIDS=FALSE
);

CREATE LANGUAGE plpgsql;

CREATE FUNCTION create_data(n INT) RETURNS VOID AS
$$
BEGIN
   FOR i IN 1..n LOOP
   INSERT INTO t_data VALUES (i, rpad('', 2, 'x')::bytea);
   END LOOP;
END;
$$
LANGUAGE plpgsql;

SELECT create_data(30);



I think that pqCheckInBufferSpace() should be called only when inBuffer is
left-justified, like this:

1432:  if (conn->inStart == 0 && pqCheckInBufferSpace(conn->inCursor +
           (size_t) msgLength - 4,

I think changing that code that way would break the considerations
mentioned in pqParseInput3; nor is it clear why this would help, anyway.
It doesn't seem likely that the buffer is so large that it can't be
enlarged by another 20K.


I think this issue happens when pg_dump is slower than the backend
for some reason.
Actually, pg_dump's memory consumption grows past 500MB
when this issue is happening.

If pg_dump is slower than the backend for some reason
and each message is not exactly 8K,
inBuffer will frequently overflow in getCopyDataMessage().

This will cause frequent calls to pqCheckInBufferSpace()
without left-justification.
Each call will double inBuffer, so inBuffer will grow rapidly
even if each message is only about 20K.



regards, tom lane


Regards,
Shin-ichi MORITA






Re: [BUGS] BUG #7913: TO_CHAR Function & Turkish collate

2013-03-05 Thread Tom Lane
Euler Taveira  writes:
> As you suggested, s/str_toupper/pg_toupper/ in the else block (no TM) is the
> right fix. I'm not aware of another locale that would break if we apply such a
> change in a stable branch. Do you want me to post a fix?

Thanks, but I have a fix mostly written already.

regards, tom lane




Re: [BUGS] BUG #7913: TO_CHAR Function & Turkish collate

2013-03-05 Thread Tom Lane
Peter Eisentraut  writes:
> On Sun, 2013-03-03 at 10:42 -0500, Tom Lane wrote:
>> Anybody have an opinion on whether to back-patch such a fix?

> I think it's a bug that should be backpatched.

Done.  In addition to day/month names, I found that there were
case-folding hazards for timezone abbreviations ('tz' format)
and Roman numerals for numbers ('rn' format) ... though, curiously,
not for Roman numerals for months.
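
For anyone following along, these are the lowercase format patterns in
question; the outputs in the comments are just what I would expect (time zone
output depends on the TimeZone setting), and the hazard was in how these
strings get case-folded through the locale-aware path:

SELECT to_char(now(), 'HH24:MI tz');  -- lowercase time zone abbreviation, e.g. "14:30 utc"
SELECT to_char(2013, 'rn');           -- lowercase Roman-numeral number, e.g. "mmxiii" (blank-padded)
SELECT to_char(now(), 'rm');          -- lowercase Roman-numeral month, e.g. "iii" for March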

regards, tom lane




Re: [BUGS] BUG #7914: pg_dump aborts occasionally

2013-03-05 Thread Tom Lane
"Shin-ichi MORITA"  writes:
> Hi Tom,
>> Hm.  Can you create a reproducible test case for this?

> I think this issue happens when pg_dump is slower than the backend
> for some reason.

If so, perhaps injecting a sleep() delay into the right place in pg_dump
or libpq would make it reproducible?  I wouldn't have any problem
crediting a test with such modified code as being a valid test.

regards, tom lane




Re: [BUGS] BUG #7916: memory leak with array_agg

2013-03-05 Thread Tom Lane
eshkin...@gmail.com writes:
> It looks like array_agg has a memory leak. In my example it eats 3 GB RSS, but
> an equivalent user-defined aggregate
> that I created myself uses only 7 MB RSS.

AFAICT there's no actual leak here; array_agg is just optimized for
speed rather than space.  It eats about 8K per hashtable entry.
While the planner knows that, it's got no good idea how many groups
will be produced by the query, so it underestimates the space needed
--- and the HashAggregate code is not currently capable of spilling
the hashtable to disk, so the table balloons well past the intended
work_mem limit.
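
Until that changes, one session-level workaround is to disable hash
aggregation so the grouping is planned as a sort plus GroupAggregate, which
does respect work_mem. This is only a sketch; the table and column names
below are hypothetical, since the original query was not posted:

SET enable_hashagg = off;        -- force a sort-based GroupAggregate
SELECT k, array_agg(v)           -- hypothetical key and value columns
FROM some_table
GROUP BY k;
RESET enable_hashagg;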

Although no real fix for this is within easy reach, it strikes me
that we could possibly ameliorate things a bit by tweaking the
memory context size parameters used by accumArrayResult().
It would likely be reasonable to set the min size to 1K or so not 8K.
This would make things a trifle slower when the actual space requirement
exceeds 1K, but probably not by enough to notice.

BTW, I don't believe your assertion that the handmade aggregate does
this in 7MB.  Even a very optimistic calculation puts the space needed
for 70 10-element integer arrays at forty-some MB, and when I try
it I see more like 100MB consumed thanks to hashtable overhead.

regards, tom lane




Re: [BUGS] BUG #7916: memory leak with array_agg

2013-03-05 Thread Sergey Burladyan
Tom Lane  writes:

> AFAICT there's no actual leak here; array_agg is just optimized for
> speed rather than space.  It eats about 8K per hashtable entry.
> While the planner knows that, it's got no good idea how many groups
> will be produced by the query, so it underestimates the space needed
> --- and the HashAggregate code is not currently capable of spilling
> the hashtable to disk, so the table balloons well past the intended
> work_mem limit.

Aha, I found this "8K per hashtable entry" with ltrace and now understand
what is going on here:
  === array_agg malloc calls count ===    === myagg malloc calls count ===
       3 malloc(1024)                          3 malloc(1024)
       3 malloc(1048576)                       3 malloc(1048576)
       3 malloc(131072)                        3 malloc(131072)
       1 malloc(16056)                         4 malloc(16384)
       5 malloc(16384)                         2 malloc(16440)
       2 malloc(16440)                         1 malloc(2048)
       1 malloc(2048)                          3 malloc(2097152)
       3 malloc(2097152)                       3 malloc(262144)
       3 malloc(262144)                        3 malloc(32768)
       3 malloc(32768)                         1 malloc(32824)
       1 malloc(32824)                         1 malloc(4096)
       1 malloc(4096)                          3 malloc(4194304)
       3 malloc(4194304)                       3 malloc(524288)
       3 malloc(524288)                        3 malloc(65536)
       3 malloc(65536)                        12 malloc(8192)
  724151 malloc(8192)                          1 malloc(8296)
       1 malloc(8296)                         29 malloc(8360)
      44 malloc(8360)                         16 malloc(8388608)
       8 malloc(8388608)

Thank you for the answer, Tom!

> Although no real fix for this is within easy reach, it strikes me
> that we could possibly ameliorate things a bit by tweaking the
> memory context size parameters used by accumArrayResult().
> It would likely be reasonable to set the min size to 1K or so not 8K.
> This would make things a trifle slower when the actual space requirement
> exceeds 1K, but probably not by enough to notice.

Looks good.

> BTW, I don't believe your assertion that the handmade aggregate does
> this in 7MB.  Even a very optimistic calculation puts the space needed
> for 70 10-element integer arrays at forty-some MB, and when I try
> it I see more like 100MB consumed thanks to hashtable overhead.

Yes you are right, Tom. My mistake.

-- 
Sergey Burladyan




[BUGS] BUG #7918: limitation of pagination with LIMIT and OFFSET

2013-03-05 Thread adrianopatrick
The following bug has been logged on the website:

Bug reference:  7918
Logged by:  Adriano Patrick do Nascimento Cunha
Email address:  adrianopatr...@gmail.com
PostgreSQL version: 8.4.7
Operating system:   Windows/Linux
Description:

Hello guys, today I needed to process a query that returned somewhere around
20 million records. I planned to paginate it with LIMIT and OFFSET, keeping
the LIMIT fixed at 5000 records and incrementing the OFFSET, but once the
OFFSET reached about 400,000 the query became very slow, taking almost as long
as the query without any LIMIT and OFFSET. I noticed that while the OFFSET is
below about 400,000 the query keeps the same execution plan, but beyond that
value the plan changes and the query takes far too long.
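
For what it's worth, a large OFFSET is inherently expensive: the server still
has to produce and then discard every skipped row, so the cost grows with the
offset, and past a certain point the planner may also switch to a plan that is
cheaper overall but slower to return the first rows. Keyset (seek) pagination
avoids that by remembering the last key seen. A sketch, assuming a
hypothetical table big_table with an indexed primary key id (the real table
and columns are not shown in the report):

-- first page
SELECT * FROM big_table ORDER BY id LIMIT 5000;

-- next page: seek past the last id returned by the previous page
-- (:last_seen_id is a placeholder for that value)
SELECT * FROM big_table WHERE id > :last_seen_id ORDER BY id LIMIT 5000;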


