As I understand it, a single execution of a PL/Perl function will not be affected by the Perl memory issue, so I don't think that is your problem.

My guess is that you are reading a large query result into Perl, so the whole result set is held in memory at once (and you can't use more memory than you have). For a large query this can be a huge amount of memory indeed. You could use another language such as PL/pgSQL, which supports cursors and looping over query results, or, in plperl, you could use DBI (rather than spi_exec_query) and loop over the rows.
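
For example (just a sketch; the connection parameters, table, and batch size here are made up), with DBI and DBD::Pg you can declare a server-side cursor and fetch in batches, so only one batch of rows is ever held in Perl at a time:

    use DBI;

    # Placeholder connection parameters; cursors only live inside a
    # transaction, hence AutoCommit => 0.
    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'password',
                           { RaiseError => 1, AutoCommit => 0 });

    $dbh->do('DECLARE curs CURSOR FOR SELECT id, payload FROM big_table');

    while (1) {
        my $rows = $dbh->selectall_arrayref('FETCH 1000 FROM curs');
        last unless @$rows;                 # no rows left
        for my $row (@$rows) {
            my ($id, $payload) = @$row;
            # ... process one row here; only this batch is in memory ...
        }
    }

    $dbh->do('CLOSE curs');
    $dbh->commit;
    $dbh->disconnect;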

Hope this helps,
Sean

On Mar 30, 2005, at 9:33 AM, FERREIRA William (COFRAMI) wrote:

I have a similar problem.
I'm running PostgreSQL on a Pentium 4 with 1 GB of RAM under Windows 2000.
I have a large database and a big processing job that takes more than 4 hours.
During the first hour, PostgreSQL uses as much physical memory as virtual memory, which I find strange (growing to more than 800 MB).


And during the execution I get:

    out of memory
    Failed on request of size 56

and at the end PostgreSQL is using 300 MB of memory and more than 2 GB of virtual memory.


Can this problem be resolved by tuning PostgreSQL settings?
Here are my parameters:
shared_buffers = 1000
work_mem = 131072
maintenance_work_mem = 131072
max_stack_depth = 4096
I tried work_mem at 512 MB and at 2 MB, and I get the same error...
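
From what I read, work_mem is specified in kilobytes, so 131072 means 128 MB, and each sort or hash operation in a query can claim that much. For illustration only (these values are not a recommendation), a more conservative configuration might look like:

    shared_buffers = 1000            # 8 kB pages, about 8 MB
    work_mem = 8192                  # 8 MB per sort/hash operation
    maintenance_work_mem = 65536     # 64 MB for VACUUM and CREATE INDEX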

I read the whole thread, but I don't know how to configure Perl on Windows...

Thanks in advance,

        Will

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On behalf of Dan Sugalski
Sent: Friday, March 25, 2005 19:34
To: Greg Stark; pgsql-general@postgresql.org
Subject: Re: [GENERAL] plperl doesn't release memory



At 6:58 PM -0500 3/24/05, Greg Stark wrote:
>Dan Sugalski <[EMAIL PROTECTED]> writes:
>
>>  Anyway, if perl's using its own memory allocator you'll want to rebuild it
>>  to not do that.
>
>You would need to do that if you wanted to use a debugging malloc. But there's
>no particular reason to think that you should need to do this just for things
>to work properly.
>
>Two mallocs can work fine alongside each other. They each call mmap or sbrk to
>allocate new pages and they each manage the pages they've received. They won't
>have any idea why the allocator seems to be skipping pages, but they should be
>careful not to touch those pages.


Perl will only use a single allocator, so there's not a huge issue
there. It's either the external allocator or the internal one, which
is for the best since you certainly don't want to be handing back
memory to the wrong allocator. That way lies madness and unpleasant
core files.

The bigger issue is that perl's memory allocation system, the one you
get if you build perl with usemymalloc set to yes, never releases
memory back to the system -- once the internal allocator gets a chunk
of memory from the system it's held for the duration of the process.
This is the right answer in many circumstances, and the allocator's
pretty nicely tuned to perl's normal allocation patterns; it's just
not really the right thing in a persistent server situation where
memory usage bounces up and down. It can happen with the system
allocator too, though it's less likely.
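
If you're not sure how your perl was built, you can ask it for its build
configuration (this works the same on Windows builds):

    % perl -V:usemymalloc
    usemymalloc='n';

A 'y' there means perl's internal allocator is in use, and memory it
grabs from the system stays with the process until exit.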

One of those engineering tradeoff things, and not much to be done
about it really.
--
                                Dan

--------------------------------------it's like this-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED] have teddy bears and even
                                       teddy bears get drunk
