Re: [BUGS] BUG #6763: Severe memory leak with arrays and hstore

2012-07-26 Thread Craig Ringer

On 07/26/2012 09:32 AM, karave...@mail.bg wrote:

Finally I have managed to migrate it in batches of 100-200k user ids and
disconnecting after each query in order to free the backend and leaked
memory.
If you do it in batches, but you do NOT disconnect and reconnect, does 
the backend continue to grow?
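
A rough way to test that (batch bounds and database name below are
placeholders, not your real values) is to push several batches through one
psql session and watch that backend's RSS from another shell:

psql -d pg <<'SQL'
SELECT pg_backend_pid();   -- note this PID, then in another shell run e.g.
                           --   watch -n 5 'ps u -p <PID>'
INSERT INTO new_preferences
  SELECT user_id, hstore(array_agg(name), array_agg(value))
  FROM old_prefs WHERE user_id < 100000 GROUP BY user_id;
INSERT INTO new_preferences
  SELECT user_id, hstore(array_agg(name), array_agg(value))
  FROM old_prefs WHERE user_id >= 100000 AND user_id < 200000 GROUP BY user_id;
-- ...more batches in the same session...
SQL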


What's the output of:

SELECT count(sub.user_id), to_char(AVG(sub.n_prefs), '9.99') FROM (
SELECT user_id, count(name) AS n_prefs FROM old_prefs GROUP BY user_id) 
AS sub;


and

SELECT pg_size_pretty(pg_total_relation_size('old_prefs'));

?

--
Craig Ringer

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] BUG #6763: Severe memory leak with arrays and hstore

2012-07-26 Thread karavelov
- Craig Ringer (ring...@ringerc.id.au), on 26.07.2012 at 11:17 -
> On 07/26/2012 09:32 AM, karave...@mail.bg wrote:
>> Finally I have managed to migrate it in batches of 100-200k user ids and
>> disconnecting after each query in order to free the backend and leaked
>> memory.
> If you do it in batches, but you do NOT disconnect and reconnect, does
> the backend continue to grow?
>
> What's the output of:
>
> SELECT count(sub.user_id), to_char(AVG(sub.n_prefs), '9.99') FROM (
> SELECT user_id, count(name) AS n_prefs FROM old_prefs GROUP BY user_id)
> AS sub;
>
> and
>
> SELECT pg_size_pretty(pg_total_relation_size('old_prefs'));
>
> ?
>
> --
> Craig Ringer

OK, I will do the procedure again, taking notes at each step.

First, here are the results of the queries you asked:

pg=> SELECT count(sub.user_id), to_char(AVG(sub.n_prefs), '9.99') FROM (
       SELECT user_id, count(name) AS n_prefs FROM old_prefs GROUP BY user_id) AS sub;
  count  | to_char 
---------+---------
 1257262 |  2.26
(1 row)

pg=> SELECT pg_size_pretty(pg_total_relation_size('old_prefs'));
 pg_size_pretty 
----------------
 264 MB
(1 row)

pg=> \d old_prefs
        Table "public.old_prefs"
 Column  |       Type        | Modifiers 
---------+-------------------+-----------
 user_id | integer           | not null
 name    | character varying | not null
 value   | character varying | not null
Indexes:
    "old_prefs_user_id_ids" btree (user_id)

Also there are max of 34 rows per user_id in old_prefs.

Here is the new table I just created:

pg=> \d new_preferences
   Table "public.new_preferences"
 Column  |  Type   | Modifiers 
---------+---------+-----------
 user_id | integer | not null
 prefs   | hstore  | 
Indexes:
    "new_preferences_pkey" PRIMARY KEY, btree (user_id)
Foreign-key constraints:
    "new_preferences_user_id_fkey" FOREIGN KEY (user_id) REFERENCES users(user_id) ON DELETE CASCADE

Here is the newly connected backend:

root@pg:/var/log# ps axu | egrep '10.0.2.71|USER' | grep -v grep
USER       PID %CPU %MEM    VSZ    RSS TTY     STAT START   TIME COMMAND
postgres 19121  0.0  0.0 2266944   3448 ?      Ss   15:23   0:00 postgres: pg pg 10.0.2.71(51734) idle

Migrating the first 200k of the users to the new scheme:

pg=> select count(*) from old_prefs where user_id<20;
 count  
--------
 174767
(1 row)

pg=> INSERT INTO new_preferences SELECT user_id,hstore(array_agg(name), array_agg(value))
     FROM old_prefs WHERE user_id<20 GROUP BY user_id;
INSERT 0 48993
pg=> commit;
COMMIT

Here is the backend:
USER       PID %CPU %MEM    VSZ    RSS TTY     STAT START   TIME COMMAND
postgres 19121  0.8  7.1 3081772 582712 ?      Ss   15:23   0:02 postgres: pg pg 10.0.2.71(51734) idle

Migrating another batch of users:

pg=> select count(*) from old_prefs where user_id>=20 and user_id<60;
 count  
--------
 193824
(1 row)

pg=> INSERT INTO new_preferences SELECT user_id,hstore(array_agg(name), array_agg(value))
     FROM old_prefs WHERE user_id>=20 AND user_id<60 GROUP BY user_id;
INSERT 0 54157
pg=> commit;
COMMIT

USER       PID %CPU %MEM    VSZ    RSS TTY     STAT START   TIME COMMAND
postgres 19121  1.1  8.5 3176164 697444 ?      Ss   15:23   0:05 postgres: pg pg 10.0.2.71(51734) idle

Another batch:

pg=> select count(*) from old_prefs where user_id>=60 and user_id<110;
 count  
--------
 190504
(1 row)

pg=> INSERT INTO new_preferences SELECT user_id,hstore(array_agg(name), array_agg(value))
     FROM old_prefs WHERE user_id>=60 AND user_id<110 GROUP BY user_id;
INSERT 0 56199
pg=> commit;
COMMIT

USER       PID %CPU %MEM    VSZ    RSS TTY     STAT START   TIME COMMAND
postgres 19121  0.7  9.6 3210224 791404 ?      Ss   15:23   0:08 postgres: pg pg 10.0.2.71(51734) idle

Another batch:

pg=> select count(*) from old_prefs where user_id>=110 and user_id<160;
 count  
--------
 194965
(1 row)

pg=> INSERT INTO new_preferences SELECT user_id,hstore(array_agg(name), array_agg(value))
     FROM old_prefs WHERE user_id>=110 AND user_id<160 GROUP BY user_id;
INSERT 0 60257
pg=> commit;
COMMIT

USER       PID %CPU %MEM    VSZ    RSS TTY     STAT START   TIME COMMAND
postgres 19121  0.9 10.8 3277412 889860 ?      Ss   15:23   0:11 postgres: pg pg 10.0.2.71(51734) idle

So the Pg backend keeps growing by about 100 MB per 200k rows from the old
table, which become 50-60k rows in the new table. Proceeding with another batch:

pg=> select count(*) from old_prefs where user_id>=160 and user_id<240;
 count  
--------
 170858
(1 row)

pg=> INSERT INTO new_preferences SELECT user_id,hstore(array_agg(name), array_agg(value))
     FROM old_prefs WHERE user_id>=160 AND user_id<240 GROUP BY user_id;
INSERT 0 55447
pg=> commit;
COMMIT

USER       PID %CPU %MEM    VSZ    RSS TTY     STAT START   TIME COMMAND
postgres 19121  0.9 11.5 3277412 945560 ?      Ss   15:23   0:15 postgres: pg pg 10.0.2.71(51734) idle

Another batch:

pg=> select count(*) from old_prefs where user_id>=240 and user_id
pg=> INSERT INTO new_preferences SELECT user_id,hstore(array_agg(name), array_agg(value))
     FROM old_prefs WHERE user_id>=240 AND user_id
pg=> commit;
COMMIT

USER       PID %CPU %MEM    VSZ     RSS TTY    STAT START   TIME COMMAND
postgres 19121  1.2 16.2 3736968 1331796 ?     Ss   15:23   0:20 postgres: pg pg 10.0.2.71(51734) idle

Another batch:

pg=> select count(*) from old_prefs where user_id>=340 and user_id
pg=> INSERT INTO new_preferences SELECT user_id,hstore(array_agg(name), array_agg(value))
     FROM old_prefs WHERE user_id>=340 AND user_id
pg=> rollback;
ROLLBACK

Oops... have to clean up the old_prefs, some users were deleted in the meantime:

pg=> delete from old_prefs where user_id not in (select user_id from users);
DELETE 7
pg=> commit;
COMMIT

USER       PID %CPU %MEM    VSZ    RSS TTY     STAT START

Re: [BUGS] BUG #6763: Severe memory leak with arrays and hstore

2012-07-26 Thread luben karavelov

On Jul 26, 2012, at 11:17 AM, Craig Ringer wrote:

> On 07/26/2012 09:32 AM, karave...@mail.bg wrote:
>> Finally I have managed to migrate it in batches of 100-200k user ids and
>> disconnecting after each query in order to free the backend and leaked
>> memory.
> If you do it in batches, but you do NOT disconnect and reconnect, does the 
> backend continue to grow?
> 
> What's the output of:
> 
> SELECT count(sub.user_id), to_char(AVG(sub.n_prefs), '9.99') FROM (
> SELECT user_id, count(name) AS n_prefs FROM old_prefs GROUP BY user_id) AS 
> sub;
> 
> and
> 
> SELECT pg_size_pretty(pg_total_relation_size('old_prefs'));
> 
> ?
> 
> --
> Craig Ringer
> 

-
Sorry for the broken formatting in the last message. Here it is again:

OK, I will do the procedure again, taking notes at each step.

First, here are the results of the queries you asked:

pg=> SELECT count(sub.user_id), to_char(AVG(sub.n_prefs), '9.99') FROM (
  SELECT user_id, count(name) AS n_prefs FROM old_prefs GROUP BY 
user_id)  AS sub;

  count  | to_char 
---------+---------
 1257262 |  2.26
(1 row)

pg=> SELECT pg_size_pretty(pg_total_relation_size('old_prefs'));
 pg_size_pretty 
----------------
 264 MB
(1 row)

pg=> \d old_prefs
Table "public.old_prefs"
 Column  |   Type| Modifiers 
-+---+---
 user_id | integer   | not null
 name| character varying | not null
 value   | character varying | not null
Indexes:
"old_prefs_user_id_ids" btree (user_id)

Also there are max of 34 rows per user_id in old_prefs

Here is the new table I just created:

pg=> \d new_preferences
Table "public.new_preferences"
 Column  |  Type   | Modifiers 
-+-+---
 user_id | integer | not null
 prefs   | hstore  | 
Indexes:
"new_preferences_pkey" PRIMARY KEY, btree (user_id)
Foreign-key constraints:
"new_preferences_user_id_fkey" FOREIGN KEY (user_id) REFERENCES 
users(user_id) ON DELETE CASCADE


Here is the newly connected backend:

root@pg:/var/log# ps axu | egrep '10.0.2.71|USER' | grep -v grep
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres 19121  0.0  0.0 2266944 3448 ?Ss   15:23   0:00 postgres: pg 
pg 10.0.2.71(51734) idle   


Migrating the first 200k of the users to the new scheme:

pg=> select count(*) from old_prefs where user_id<20;
 count  
--------
 174767
(1 row)

pg=> INSERT INTO new_preferences SELECT user_id,hstore(array_agg(name), 
array_agg(value)) FROM old_prefs WHERE user_id<20 GROUP BY user_id;
INSERT 0 48993
pg=> commit;
COMMIT

Here is the backend:
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres 19121  0.8  7.1 3081772 582712 ?  Ss   15:23   0:02 postgres: pg 
pg 10.0.2.71(51734) idle

Migrating another batch of users:

pg => select count(*) from old_prefs where user_id>=20 and user_id<60;
 count  
--------
 193824
(1 row)

pg=> INSERT INTO new_preferences SELECT user_id,hstore(array_agg(name), 
array_agg(value)) FROM old_prefs WHERE user_id>=20 AND user_id<60 GROUP 
BY user_id;
INSERT 0 54157
pg=> commit;
COMMIT

USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres 19121  1.1  8.5 3176164 697444 ?  Ss   15:23   0:05 postgres: pg 
pg 10.0.2.71(51734) idle

Another batch:
pg=> select count(*) from old_prefs where user_id>=60 and user_id<110;
 count  
--------
 190504
(1 row)

pg=> INSERT INTO new_preferences SELECT user_id,hstore(array_agg(name), 
array_agg(value)) FROM old_prefs WHERE user_id>=60 AND user_id<110 
GROUP BY user_id;
INSERT 0 56199
pg=> commit;
COMMIT

USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres 19121  0.7  9.6 3210224 791404 ?  Ss   15:23   0:08 postgres: pg 
pg 10.0.2.71(51734) idle

Another batch:

pg=> select count(*) from old_prefs where user_id>=110 and user_id<160;
 count  
--------
 194965
(1 row)

pg=> INSERT INTO new_preferences SELECT user_id,hstore(array_agg(name), 
array_agg(value)) FROM old_prefs WHERE user_id>=110 AND user_id<160 
GROUP BY user_id;
INSERT 0 60257
pg=> commit;
COMMIT

USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres 19121  0.9 10.8 3277412 889860 ?  Ss   15:23   0:11 postgres: pg 
pg 10.0.2.71(51734) idle


So the Pg backend keeps growing by about 100 MB per 200k rows from the old 
table, which become 50-60k rows in the new table.
Proceeding with another batch:

pg=> select count(*) from old_prefs where user_id>=160 and user_id<240;
 count  
--------
 170858
(1 row)

Time: 83,994 ms
pg=> INSERT INTO new_preferences SELECT user_id,hstore(array_agg(name), 
array_agg(value)) FROM old_prefs WHERE user_id>=160 AND user_id<240 
GROUP BY user_id;
INSERT 0 55447
pg=> commit;
COMMIT

USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres 19121  0.9 11.5 3277412 945560 ?  Ss   15:23   0:15 postgr

Re: [BUGS] BUG #6758: ./configure script sets HAVE_WCSTOMBS_L 1

2012-07-26 Thread Jez Wain
Alvaro, 

This is indeed the same problem; my apologies for posting a duplicate.

I've checked on two AIXv7.1 servers, one using an xlCv11 runtime, the other 
using an xlCv12 runtime. 

On the v11 neither mbstowcs_l nor wcstombs_l are defined.  On the v12, both 
are. So to answer the question posed on the archive link you provided, the 
assumption made by your configure script appears to be valid.  This points to 
there being a problem in the way the presence of these APIs is detected in the 
configure script.
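
For what it's worth, the style of link probe configure runs can be reproduced
by hand on each box; this is only a sketch (the compiler name is an
assumption, and the dummy declaration mirrors what autoconf generates rather
than the real prototype):

cat > conftest.c <<'EOF'
char mbstowcs_l ();
int main () { return mbstowcs_l (); }
EOF
${CC:-xlc} -o conftest conftest.c && echo "mbstowcs_l links" \
                                  || echo "mbstowcs_l does not link"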

I built postgres on the V11 platform when I posted my bug report.  I'll have a 
go on the V12 to ensure that this error goes away.

Thanks for your reply,

   Jez


On 25 Jul 2012, at 18:17, Alvaro Herrera wrote:

> 
> Excerpts from jez.wain's message of Wed Jul 25 05:49:35 -0400 2012:
> 
>> ld: 0711-317 ERROR: Undefined symbol: .mbstowcs_l
>> ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more
>> information.
>> 
>> AIX7 supports the mbstowcs API but not mbstowcs_l.  In
>> src/include/pg_config.h I commented out the define for HAVE_WCSTOMBS_L 1,
>> recompiled src/backend/utils/adt/pg_locale.c and the build completed
>> successfully.
> 
> This had been reported some time ago, see bug #6585:
> http://archives.postgresql.org/message-id/e1sinxs-0001zy...@wrigleys.postgresql.org
> 
> -- 
> Álvaro Herrera 
> The PostgreSQL Company - Command Prompt, Inc.
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support




Re: [BUGS] BUG #6758: ./configure script sets HAVE_WCSTOMBS_L 1

2012-07-26 Thread Tom Lane
Jez Wain  writes:
> On the v11 neither mbstowcs_l nor wcstombs_l are defined.  On the v12, both 
> are. So to answer the question posed on the archive link you provided, the 
> assumption made by your configure script appears to be valid.  This points to 
> there being a problem in the way the presence of these APIs is detected in 
> the configure script.

OK, now we understand what we're dealing with at least.  Can you look
into why mbstowcs_l is misdetected on v11?  The portion of config.log
where configure is probing for that function might be illuminating.
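
Something like the following, run from the top of the build tree, usually
pulls the relevant probe out of config.log:

grep -n -B 2 -A 25 'mbstowcs_l' config.log | less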

regards, tom lane



Re: [BUGS] primary and hot standby database don' work now

2012-07-26 Thread Craig Ringer

On 07/22/2012 02:00 PM, leo xu wrote:

hello everyone:
   my PG version is 9.1.2, running on Red Hat 5.4 x64 servers. One day I
found that all database indexes had stopped working; after a REINDEX DATABASE
they worked again, but some tables with a primary key contained the same
record twice.


When did this start happening? At random? After a server crash or power 
loss? After an administrative action?


Are you running with fsync=off or using storage that has write caching 
enabled?
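
Quick ways to check both (the device name is only an example):

psql -c "SHOW fsync;"                  # should be "on"
psql -c "SHOW synchronous_commit;"
sudo hdparm -W /dev/sda                # shows whether the drive's write cache is enabled
# For hardware RAID, check the controller's cache/BBU settings instead.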


--
Craig Ringer



Re: [BUGS] BUG #6763: Severe memory leak with arrays and hstore

2012-07-26 Thread Craig Ringer
Woah. Your email client did something insane, and I cannot read your 
message. See below:


On 07/26/2012 09:37 PM, karave...@mail.bg wrote:

- Craig Ringer (ring...@ringerc.id.au), on 26.07.2012 at 11:17 -

On 07/26/2012 09:32 AM, karave...@mail.bg wrote: >> Finally I have

managed to migrate it in batches of 100-200k user ids and >> disconnecting
after each query in order to free the backend and leaked >> memory. > If
you do it in batches, but you do NOT disconnect and reconnect, does > the
backend continue to grow? > > What's the output of: > > SELECT
count(sub.user_id), to_char(AVG(sub.n_prefs), '9.99') FROM ( > SELECT
user_id, count(name) AS n_prefs FROM old_prefs GROUP BY user_id) > AS sub;


Re: [BUGS] BUG #6763: Severe memory leak with arrays and hstore

2012-07-26 Thread Craig Ringer

Hi all

Here's a fully self-contained, automated test case that demonstrates the leak.


Example output on my system, pasted as quote to stop Thunderbird 
mangling it:



$ ./hstore-leak-demo.sh
NOTICE:  extension "hstore" already exists, skipping
CREATE EXTENSION
DROP TABLE
CREATE TABLE
CREATE TABLE
INSERT 0 10
INSERT 0 80175
INSERT 0 72267
INSERT 0 50649
INSERT 0 30430
     avg    | max 
------------+-----
 3.33521000 |  20
(1 row)

Backend PID is: 3167
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres  3167  0.0  0.0 504276  4368 ?Ss   10:29   0:00 postgres: 
craig regress [local] idle
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres  3167 91.0 13.0 2163384 1055652 ? Ss   10:29   0:00 postgres: 
craig regress [local] idle
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres  3167 77.5 13.2 2163408 1071496 ? Ss   10:29   0:01 postgres: 
craig regress [local] idle
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres  3167  108 13.4 2163408 1084056 ? Ss   10:29   0:02 postgres: 
craig regress [local] idle
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres  3167 93.0 13.5 2163408 1092244 ? Ss   10:29   0:02 postgres: 
craig regress [local] idle
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres  3167 86.0 13.6 2163408 1100444 ? Ss   10:29   0:03 postgres: 
craig regress [local] idle
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres  3167  101 13.7 2163408 1108704 ? Ss   10:29   0:04 postgres: 
craig regress [local] idle
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres  3167 93.8 13.8 2163408 1116896 ? Ss   10:29   0:04 postgres: 
craig regress [local] idle
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres  3167 88.8 13.9 2163408 1125048 ? Ss   10:29   0:05 postgres: 
craig regress [local] idle
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres  3167 99.1 14.0 2163408 1133228 ? Ss   10:29   0:05 postgres: 
craig regress [local] idle




hstore-leak-demo.sh
Description: application/shellscript
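
(The attachment itself is not reproduced in the archive. A rough sketch of a
test in the same spirit - database name, row counts, value shapes and the
monitoring approach are guesses, not the script that was attached - would be:)

#!/bin/bash
# Build a synthetic old_prefs table, then repeat the hstore/array_agg
# aggregation in ONE backend while its memory use is watched from outside.
DB=regress
psql -d "$DB" -c "CREATE EXTENSION IF NOT EXISTS hstore"
psql -d "$DB" <<'SQL'
DROP TABLE IF EXISTS old_prefs, new_preferences;
CREATE TABLE old_prefs (user_id int NOT NULL, name varchar NOT NULL, value varchar NOT NULL);
CREATE TABLE new_preferences (user_id int PRIMARY KEY, prefs hstore);
-- synthetic preferences; sizes are illustrative only
INSERT INTO old_prefs
  SELECT (random()*100000)::int, 'pref_' || (random()*20)::int, md5(random()::text)
  FROM generate_series(1, 250000);
SELECT pg_backend_pid();   -- watch this PID from another shell, e.g.
                           --   watch -n 2 'ps u -p <PID>'
-- repeat the aggregation a few times in the same backend
TRUNCATE new_preferences;
INSERT INTO new_preferences
  SELECT user_id, hstore(array_agg(name), array_agg(value))
  FROM old_prefs GROUP BY user_id;
TRUNCATE new_preferences;
INSERT INTO new_preferences
  SELECT user_id, hstore(array_agg(name), array_agg(value))
  FROM old_prefs GROUP BY user_id;
SQL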



Re: [BUGS] BUG #6763: Severe memory leak with arrays and hstore

2012-07-26 Thread Craig Ringer

On 07/26/2012 09:55 PM, luben karavelov wrote:


Ok, I will do the procedure again with taking notes on each step.


Thank you for taking the time to do this in detail.


First, here are the results of the queries you asked:
   count  | to_char 
 ---------+---------
  1257262 |  2.26


  pg_size_pretty 
 ----------------
  264 MB


OK, none of that looks obviously insanely huge to me. That's a lot of 
hstores, but with your query I wouldn't expect them to all sit around in 
memory, and nothing individually is particularly huge.



Also there are max of 34 rows per user_id in old_prefs


Thanks, I forgot to ask that. Again, nothing particularly big.


USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
postgres 19121  0.0  0.0 2266944 3448 ?Ss   15:23   0:00 postgres: pg 
pg 10.0.2.71(51734) idle
postgres 19121  0.8  7.1 3081772 582712 ?  Ss   15:23   0:02 postgres: pg 
pg 10.0.2.71(51734) idle
postgres 19121  1.1  8.5 3176164 697444 ?  Ss   15:23   0:05 postgres: pg 
pg 10.0.2.71(51734) idle
postgres 19121  0.7  9.6 3210224 791404 ?  Ss   15:23   0:08 postgres: pg 
pg 10.0.2.71(51734) idle
postgres 19121  0.9 10.8 3277412 889860 ?  Ss   15:23   0:11 postgres: pg 
pg 10.0.2.71(51734) idle
postgres 19121  0.9 11.5 3277412 945560 ?  Ss   15:23   0:15 postgres: pg 
pg 10.0.2.71(51734) idle
postgres 19121  1.2 16.2 3736968 1331796 ? Ss   15:23   0:20 postgres: pg 
pg 10.0.2.71(51734) idle



1. Backend does not free allocated memory between transactions.
2. Rolled back transactions also leak memory.
3. Leaked memory is not linear to work done - 2 transactions with 200k keys 
will leak less than 1 transaction with 400k keys


Ouch. Sure looks like a leak to me, yeah.

Thank you for taking the time to do this. It's common to see "leaks" 
reported here that are really just queries that require lots of memory, 
so the first response tends to be somewhat cautious.


This doesn't look like one of those reports.

I don't know if I can be much use with the actual tracking down of the 
leak, but there certainly appears to be one, and you've provided a 
pretty clear illustration of it.


--
Craig Ringer



Re: [BUGS] BUG #6763: Severe memory leak with arrays and hstore

2012-07-26 Thread Craig Ringer

On 07/27/2012 10:30 AM, Craig Ringer wrote:

Hi all

Here's a fully self contained, automated test case that demonstrates 
the leak.


Gah. Except it doesn't now, as shown by the text I pasted. WTF?

I was *definitely* seeing this on my system. What's changed?

Will follow up.



[BUGS] BUG #6768: Failure in OBDC

2012-07-26 Thread fabio . lunkes
The following bug has been logged on the website:

Bug reference:  6768
Logged by:  Fábio Hentz Lunkes
Email address:  fabio.lun...@gmail.com
PostgreSQL version: 9.1.0
Operating system:   Windows 7
Description:

Hello.
I am testing application development with Microsoft Access, ODBC and Postgres.
When SELECT is granted on only one column and the permissions on the other
columns are revoked, the table cannot be opened from Microsoft Access: it
fails with a permission-denied error. In Microsoft MS Query there is no error.
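
A sketch of that setup (object and role names are made up); a client that
fetches every column, as Access typically does, hits the revoked ones:

psql <<'SQL'
CREATE TABLE t (id int, secret text);
CREATE ROLE access_user LOGIN;
GRANT SELECT (id) ON t TO access_user;
SET ROLE access_user;
SELECT id FROM t;   -- allowed: only the granted column is referenced
SELECT * FROM t;    -- ERROR: permission denied for relation t
RESET ROLE;
SQL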




Re: [BUGS] BUG #6763: Severe memory leak with arrays and hstore

2012-07-26 Thread Craig Ringer
OK, it's certainly leaking, but not in the same drastic way I was able 
to reproduce manually a couple of times earlier. Self-contained test 
case attached.


$ bash hstore-leak-demo.sh
NOTICE:  extension "hstore" already exists, skipping
CREATE EXTENSION
DROP TABLE
CREATE TABLE
CREATE TABLE
INSERT 0 10
INSERT 0 80014
INSERT 0 72434
INSERT 0 50340
INSERT 0 30077
     avg    | max 
------------+-----
 3.32865000 |  21
(1 row)

Backend PID is: 4823
 USER   PID %CPU %MEMVSZ   RSS TTY STAT START   
TIME COMMAND
Before first postgres  4823  0.0  0.0 504276  4312 ? Ss   11:19   
0:00 postgres: craig regress [local] idle
Iteration 1  postgres  4823  0.0  0.3 536908 25416 ? Rs   11:19   
0:00 postgres: craig regress [local] INSERT
Iteration 2  postgres  4823 33.0 13.1 2163384 1056560 ? Rs   11:19   
0:00 postgres: craig regress [local] INSERT
Iteration 3  postgres  4823 56.0 13.3 2163408 1072300 ? Rs   11:19   
0:01 postgres: craig regress [local] INSERT
Iteration 4  postgres  4823 58.7 13.4 2163408 1084936 ? Rs   11:19   
0:02 postgres: craig regress [local] INSERT
Iteration 20 postgres  4823 85.3 14.3 2173776 1156784 ? Rs   11:19   
0:13 postgres: craig regress [local] INSERT
Iteration 40 postgres  4823 92.0 16.3 2176848 1314700 ? Rs   11:19   
0:28 postgres: craig regress [local] INSERT
Iteration 60 postgres  4823 94.1 16.4 2173776 1322208 ? Rs   11:19   
0:43 postgres: craig regress [local] INSERT
Iteration 80 postgres  4823 96.0 16.4 2173776 1323768 ? Rs   11:19   
0:58 postgres: craig regress [local] INSERT
Iteration 100postgres  4823 95.7 16.5 2176848 1329880 ? Rs   11:19   
1:14 postgres: craig regress [local] INSERT
Iteration 120postgres  4823 97.1 16.4 2176848 1329132 ? Rs   11:19   
1:31 postgres: craig regress [local] INSERT
Iteration 140postgres  4823 96.8 16.4 2176848 1329524 ? Rs   11:19   
1:48 postgres: craig regress [local] INSERT




hstore-leak-demo.sh
Description: application/shellscript



Re: [BUGS] BUG #6763: Severe memory leak with arrays and hstore

2012-07-26 Thread Tom Lane
Craig Ringer  writes:
> OK, it's certainly leaking, but not in the same drastic way I was able 
> to reproduce manually a couple of times earlier. Self-contained test 
> case attached.

Using HEAD with stock parameters, I don't see any significant change in
allocated address space (VSZ): it sits right around 170MB.  The reported
resident set size (RSS) starts from very little and rises to about
140MB, but I think that's just an artifact of the process touching more
and more of the shared-buffers array as it runs.  The actual backend
memory consumption seems to be just a few meg.

I can get it to blow out memory if I set work_mem large enough to
persuade the planner to use hash aggregation (whereupon it tries to run
all the array_agg aggregates in parallel).  However, that requires
work_mem set to a couple of GB, and I don't think it's really a bug when
the backend goes ahead and uses a couple of GB after I told it it could.

It's possible that the OP's problem boiled down to the planner making
a drastic underestimate of the number of GROUP BY groups, which could
mislead it into applying hash aggregation when there's not room; or
if the space used per aggregate was a lot more than the 8K that the
planner assumes when dealing with array_agg.  But neither of those
errors seems to be happening in this example case.
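
(For anyone following along, the difference is easy to see in the plan; the
table and column names are the ones from this thread, the work_mem values
are arbitrary:)

psql -d pg <<'SQL'
SET work_mem = '20MB';
EXPLAIN SELECT user_id, hstore(array_agg(name), array_agg(value))
        FROM old_prefs GROUP BY user_id;   -- expect Sort + GroupAggregate
SET work_mem = '2GB';
EXPLAIN SELECT user_id, hstore(array_agg(name), array_agg(value))
        FROM old_prefs GROUP BY user_id;   -- may switch to HashAggregate, which
                                           -- keeps every group's array_agg state
                                           -- in memory at once
SQL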

regards, tom lane



Re: [BUGS] BUG #6763: Severe memory leak with arrays and hstore

2012-07-26 Thread Craig Ringer

On 07/27/2012 01:47 PM, Tom Lane wrote:

Craig Ringer  writes:

OK, it's certainly leaking, but not in the same drastic way I was able
to reproduce manually a couple of times earlier. Self-contained test
case attached.

Using HEAD with stock parameters, I don't see any significant change in
allocated address space (VSZ): it sits right around 170MB.  The reported
resident set size (RSS) starts from very little and rises to about
140MB, but I think that's just an artifact of the process touching more
and more of the shared-buffers array as it runs.

Gah. I should know better than that. Sorry.

This makes me wonder if the "leak-like" pattern I saw earlier was just a 
similar growth in shared_buffers, and carried on more steeply rather 
than tapering off because I was working with a smaller data set.
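
(One way to separate the two effects on Linux - the PID below is just the
example from earlier in the thread - is to look at the backend's private
memory rather than its total RSS:)

PID=19121
awk '/^Private_(Clean|Dirty):/ {sum += $2} END {print sum " kB private"}' /proc/$PID/smaps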


--
Craig Ringer




Re: [BUGS] BUG #6763: Severe memory leak with arrays and hstore

2012-07-26 Thread Craig Ringer

On 07/27/2012 10:31 AM, Craig Ringer wrote:


Ouch. Sure looks like a leak to me, yeah.
... but it turns out I'm not thinking very straight. I forgot to check 
the size of your `shared_buffers' or `work_mem' and forgot to get you to 
report `free -m' output after each run to measure _real_ memory use.


During testing I did get a backend crash when running an INSERT - which 
I didn't expect given that I have only 20MB of work_mem and 256MB of 
shared_buffers; I would not have expected that query to require a huge 
gob of RAM. I didn't dig much further as I'm on a swapless system with 
overcommit enabled ('cos the Java VM doesn't work with overcommit off), 
which isn't exactly a recommended Pg configuration.
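
(For reference, the usual recommendation for a dedicated Pg box on Linux is
strict overcommit accounting, roughly:)

sysctl -w vm.overcommit_memory=2   # allocations fail instead of invoking the OOM killer
sysctl -w vm.overcommit_ratio=80   # percentage of RAM, plus swap, counted as committable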


--
Craig Ringer
