> No, explain analyze for the query that wouldn't execute before but now
> does, with, I assume, a large work_mem. I'd like to see how it
> differs from the one with smaller work_mem.
Ah, I pasted that in an earlier email, sent February 10, 2009 9:58:00 AM
GMT+13:00... that plan was the one usin
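For reference, plan differences driven by work_mem can be reproduced on a toy dataset like the one below; the table and query are only placeholders, not the failing query from this thread:

CREATE TEMP TABLE t AS SELECT generate_series(1, 1000000) AS id;
SET work_mem = '128MB';
EXPLAIN ANALYZE SELECT id FROM t ORDER BY id;  -- typically an in-memory quicksort
SET work_mem = '1MB';
EXPLAIN ANALYZE SELECT id FROM t ORDER BY id;  -- typically spills to an external merge sort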
>> Yes... and indeed changing vm.overcommit_ratio to 80 does allow that
>> previously-failing query to execute successfully. Do you think this is
>> also what caused the out-of-memory error we saw today just when a
>> transaction was initiated?
>
> Curious, what's the explain analyze look like for
> I don't think changing work_mem down is actually going to reduce the
> memory allocated without changing the plan to something less optimal.
> In the end, all of this is putting off the inevitable, if you get enough
> PGs going and enough requests and whatnot, you're going to start running
> out
> I'd do both. But only after I'd reduced work_mem. Given that
> reducing work_mem removed the problem, it looks to me like pgsql is
> requesting several large blocks of RAM, then only using a small portion
> of them. But overcommit set to 2 means that the OS will not allow an
> overcommit of memor
> with 100 concurrent postgres connections, if they all did something
> requiring large amounts of work_mem, you could allocate 100 * 125MB (I
> believe that's what you said it was set to?) which is like 12GB :-O
>
> in fact a single query that's doing multiple sorts of large datasets for
> a messy
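A minimal sketch of dialing work_mem down without a restart; the role name here is hypothetical:

SET work_mem = '32MB';                        -- affects only the current session
ALTER ROLE app_user SET work_mem = '32MB';    -- picked up by new sessions for that role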
> I suspect this may be it... Apparently, while you're only using about
> 2G, you've got 10G or so of outstanding commitments, and Linux is
> refusing to allocate more.
>
> You probably want to up your overcommit_ratio, esp. in light of the fact
> that you've only got 2G of swap on this box. I'd
> * Matt Magoffin (postgresql@msqr.us) wrote:
>> [r...@170226-db7 ~]# cat /proc/meminfo
>> CommitLimit: 10312588 kB
>> Committed_AS: 9760756 kB
>
> I suspect this may be it... Apparently, while you're only using about
> 2G, you've got 10G or so
> Which is a longwinded way of saying "that doesn't seem to match the
> symptoms, but ..." If you're not dead certain that your kernel is
> configured to allow *well* north of 7000 open files, you might consider
> cutting max_files_per_process in half at your next restart.
I think it is, looking
> Err, yea, overcommit_memory=2, a small amount of swap space, and a low
> overcommit_ratio could cause this to happen... The default
> ratio is 50 though, which should mean, on this system, there is about
> 10G available for user processes, but his usage shows only a bit over 2G
> being used outs
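For reference, with overcommit_memory=2 the commit limit works out as swap + RAM * overcommit_ratio/100; here that is 2096440 kB + 16432296 kB * 0.50 = 10312588 kB, exactly the CommitLimit shown in /proc/meminfo, so raising overcommit_ratio (or adding swap) raises that ceiling.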
> So did the backend crash on this one, or did it just produce an 'out of
> memory' message?
No crash, just the error message.
-- m@
>> We have 100+ postgres processes running, so for an individual process,
>> could the 1024 file limit be doing anything to this query? Or would I
>> see an explicit error message regarding this condition?
>
> you would get one of "Open files rlimit 1024 reached for uid " in
> syslog (which
> [r...@170226-db7 ~]# su -l postgres -c "ulimit -a"
> core file size (blocks, -c) 0
> data seg size (kbytes, -d) unlimited
> max nice(-e) 0
> file size (blocks, -f) unlimited
> pending signals (-i) 139264
> max locked memory
> erp, that stinks. Must be on an older kernel? I've got it under (a
> Debian-built) 2.6.26. I can't recall if there's another way to get
> limit info for an active process... Could use Tom's suggestion of
> echoing ulimit -a out to a file somewhere during database start-up.
Yes, this is a RHE
> Yeah, I remember we found a few xml-related leaks based on your reports.
> However, there's not anything here to suggest that this query is
> straining the capabilities of a 64-bit system with lots of RAM. It seems
> certain you're hitting some artificial process-size limit, and the only
> one I
> Agreed, that ulimit isn't reflecting any such limit, but is that really
> the same environment the postmaster gets started in? I wouldn't trust
> a system startup script to be launched in the same environment that a
> login shell gets. You might try adding
> ulimit -a >/tmp/something
> to
> How about cat /proc/<pid>/limits for the postmaster?
> And maybe:
> status
> stat
> maps
>
> Though I'm kinda grasping at straws here, to be honest. I've had PG up
> and running through >16G of memory at a time before.
There is no /proc/<pid>/limits file, but here are
status:
Name: postmaster
State:
> Hmm ... a gig here, a gig there, pretty soon you're talking about real
> memory? He's got several sorts and hashes that are each taking over
> 100MB according to the memory context dump, so it seems impossible that
> it all fits into a strict 32-bit address space. There's surely no harm
> in do
> I think it must be compiled 64-bit, or he'd not be able to get
> shared_buffers that high to start with. However, it's possible that the
> postmaster's been started under a ulimit setting that constrains each
> backend to just a few hundred meg of per-process memory.
Here's the output of ulimit
>>              total       used       free     shared    buffers     cached
>> Mem:      16432296   16273964     158332          0     173536   14321340
>> -/+ buffers/cache:   1779088   14653208
>> Swap:      2096440        560    2095880
>
> That certainly looks fine.. And you've got 14G or so
>> PostgreSQL 8.3.5 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)
>> 4.1.2 20070626 (Red Hat 4.1.2-14)
>
> Does the result from 'free' look reasonable on this box?
I think so:
             total       used       free     shared    buffers     cached
Mem:      16432296   16273964     1583
> erm.. How much memory do you have in the system?
This system has 16GB of RAM, and Postgres is basically the only service
running on the box.
>> shared_buffers = 4000MB
>
> I hope you've got a fair bit more than 4G of memory if you're going to
> use 4G for shared buffers... Once that memory is
We've been having persistent out-of-memory errors occur in our production
8.3 deployment, which is now running 8.3.5. I'm not sure the query here is
the cause of the problem, but this is our most-recent example which
triggered an out-of-memory error for us.
Perhaps our configuration needs tweaking
> Huh, I was overthinking the problem. xml_out has a memory leak :-(
>
> More than one in fact: a large leak (equal to size of output) in
> the executor context (hence query lifespan) and a small one in
> LibxmlContext (hence transaction lifespan).
>
> Try the patch here:
> http://archives.postgre
> These numbers don't even have any demonstrable connection to Postgres,
> let alone to an xpath-related memory leak. You're going to need to come
> up with a concrete test case if you want anyone to investigate.
>
> regards, tom lane
As I said at the start of this thread, t
> Gregory Stark <[EMAIL PROTECTED]> writes:
>> That's just a special case of what would be expected to happen with
>> memory allocation anyways though. Few allocators return memory to the
>> OS anyways.
>
> Well, that does happen on Linux for instance. Since Matt knew in his
> original report t
> Probably the right thing for you to do now is just to install the known
> fix, and keep an eye on your server for awhile to see if you still see
> any indication of the long-term leak behavior.
Certainly, that is my plan. Once I can get the patch rolled out to these
systems, I should be able to
> This part seems to match the bug though --- the leak is approximately
> the same size as all the text returned by xpath() within the current
> transaction.
>
> So there may be a second issue remaining to be found. Can you put
> together a test case for the long-term small leak?
>
>
>> I think this should fix it.
>>
>> Kris Jurka
>
> It looks like xml.c source has changed considerably since 8.3 (looking at
> revision 1.68.2.2 from the 8.3.3 release). Do you know where/if this
> patch would apply to the 8.3 branch?
I diff'ed 1.74 and 1.68.2.2, and I'm guessing this new line c
>> I'm able to duplicate the memory leak in this function with the current
>> Fedora 8 libxml2 (2.6.32). The leak is definitely inside libxml2
>> itself, because the bloat shows up here:
>>
>
> I think this should fix it.
>
> Kris Jurka
It looks like xml.c source has changed considerably since 8.
> I looked through the libxml2 sources a little bit but couldn't
> immediately find the problem. I'm fairly confident though that
> this could be reproduced outside Postgres, by replicating the sequence
> of libxml2 calls we make in xpath(). The next step should probably be
> to build a reproduce
>> OK, I'll try to come up with something. Do you have a recommended way of
>> capturing the amount of memory being used by Postgres related to this?
>> I was
>> thinking I would have a plpgsql function that loops a large number of
>> times, calling a few xpath() calls,
>
> Yeah, that's what I'd try f
> Ugh. Sounds like "small memory leak inside libxml2" --- probably not
> going to be easy to find. Can you put together a self-contained test
> case?
OK, I'll try to come up with something. Do you have a recommended way of
capturing the amount of memory being used by Postgres related to this? I was
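One way to shape that kind of test, as a rough sketch (the function name and XML document are made up; memory would be watched from outside with top or ps):

CREATE OR REPLACE FUNCTION xpath_leak_test(iterations integer) RETURNS void AS $$
DECLARE
  i integer;
  result xml[];
BEGIN
  FOR i IN 1 .. iterations LOOP
    -- exercise xpath() repeatedly so any per-call leak accumulates visibly
    result := xpath('/doc/item/text()',
                    '<doc><item>one</item><item>two</item></doc>'::xml);
  END LOOP;
END;
$$ LANGUAGE plpgsql;

SELECT xpath_leak_test(1000000);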
I've been using Postgres 8.3 in a production environment for some time
now, and have noticed a difference in long-term non-shared memory use by
Postgres since upgrading the system from 8.1. After initially making the
switch to 8.3 and making only one schema change (changed a text column to
xml) the
I was trying to extract XML fragments within a pl/pgsql function by nested
xpath() function calls, and found that when namespaces are used in the
XML, only the first xpath() call is able to correctly use namespaces.
First here is an example that works, when no namespaces are used:
BEGIN;
CREATE T
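For context, a minimal sketch of xpath() with its namespace-mapping argument; the document and names here are illustrative only, not the truncated example above:

SELECT xpath('/my:a/text()',
             '<my:a xmlns:my="http://example.com/ns">hello</my:a>'::xml,
             ARRAY[ARRAY['my', 'http://example.com/ns']]);
-- expected result: {hello}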
> But it's true that it's possible for a slow expression to make the recheck
> very expensive. The planner doesn't have a very good understanding of
> how to tell whether the expression is likely to be slow.
>
> The case I ran into is things like "WHERE x = ANY $1::integer[]" which
> become very
>> The problem for me is, the Recheck Cond is then on the xpath() function
>> used by the function-based index. My understanding is that then the
>> database must actually call the xpath() function again on all matches
>> from the index lookup.
>
> This is mistaken. It only happens if there are
Hello, I'm trying to find a way to use a text[] index lookup using an
xpath() function in 8.3, but I suspect this situation is not specific to
8.3 or this exact query style. The query plan looks like
Bitmap Heap Scan on lead (cost=37.39..7365.22 rows=2206 width=8)
Recheck Cond:
((xpath('/als:
> Not sure --- when are you thinking of, and what was the context?
> I don't usually keep sample data unless the issue still seems open.
I was referring to a dump I sent you a link to, called
"pg83-leads-sanitized.db", around 20 Dec, with the email subject "Re:
[GENERAL] 8.3b2 XPath-based
> This was reported before,
> http://archives.postgresql.org/pgsql-general/2007-12/msg00716.php
> but neither he nor you have provided anything approximating a
> reproducible test case. The interactions with libxml are messy
> enough that I'm not even going to think about fixing this without
> a t
Hello, I'm using 8.3b4 and keep experiencing server crashes when I execute
various queries using XML functions. The crash backtraces look like this:
---
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x3f847ae1
0x004b140c in xmlCleanupCharEn
> "Matt Magoffin" <[EMAIL PROTECTED]> writes:
>> Thanks very much, that helps. Now I'm wondering if it's also possible to
>> then fill in another nested element level in the XML output, from the
>> rows that are aggregated into the c
> "Matt Magoffin" <[EMAIL PROTECTED]> writes:
>> Hello, I'm trying to write a query to return an XML document like
>>
>>
>>
>> ...
>>
>
> Something like this:
>
> regression=# select xmlelement(name root, xmlag
Hello, I'm trying to write a query to return an XML document like
...
I started with
select xmlelement(name range, xmlattributes(m.range, count(s.id) as "count"))
from mb_sale s
inner join mb_lead m on m.sale_id = s.id
where
s.sale_date >= date('2007-08-01') and s.sale_date <=
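A hedged sketch of one way to nest the per-range elements under a single root, aggregating in a subquery so count() and xmlagg() stay at different levels; the grouping column and output shape are assumptions:

SELECT xmlelement(name root, xmlagg(x.elem))
FROM (
  SELECT xmlelement(name range,
           xmlattributes(m.range, count(s.id) AS "count")) AS elem
  FROM mb_sale s
  INNER JOIN mb_lead m ON m.sale_id = s.id
  WHERE s.sale_date >= date '2007-08-01'
  GROUP BY m.range
) AS x;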
> So what is the state-of-the-art in the PostgreSQL world if I _do_ want
> synchronous replication? 2-phase commit from the client application? Any
> success/horror stories about doing it in Java?
For Java, you could check out Sequoia (http://sequoia.continuent.org/) or
their commercial version un
> AFAICT that's exactly what it does.
>
> regression=# select xpath('//[EMAIL PROTECTED]"mykey"]/text()',
> 'ABC key="mykey">XYZRSTDEF');
>xpath
> ---
> {XYZ,RST}
> (1 row)
>
> regression=#
>
> Of course this is of type xml[], but you can cast to text[] and then
> index.
Ugh, you're
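A minimal sketch of that cast-then-index approach; the table, column, and XPath here are hypothetical:

CREATE TABLE docs (id serial PRIMARY KEY, data xml);
CREATE INDEX docs_mykey_idx ON docs
  USING gin ((xpath('//item[@key="mykey"]/text()', data)::text[]));
-- a matching query can then use GIN-supported array operators such as @>
SELECT id FROM docs
 WHERE xpath('//item[@key="mykey"]/text()', data)::text[] @> ARRAY['value1'];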
> "Matt Magoffin" <[EMAIL PROTECTED]> writes:
>> 2) Even if I could have an xpath() result return an array with multiple
>> values, like {value1,value2,value3} I wasn't able to define a GIN index
>> against the xml[] type. Should this be possible?
>
I've been working with XML in Postgres 8.3 and am trying to find a way to
create a text-based index from an XPath that returns multiple nodes. For
example, if I have an XPath like
/[EMAIL PROTECTED]"mykey"]/text()
which might return a few text nodes like
value1
value2
value3
I'd like 3 index values
> "Matt Magoffin" <[EMAIL PROTECTED]> writes:
>> I took the latest snapshot from /dev, but I still get the same crash:
>
> Drat :-(. Please try a bit harder at generating a self-contained
> test case. Given where the crash is happening, I suspect it may be
>>> (gdb) bt
>>> #0 0x8820 in __memcpy ()
>>> #1 0x004d9098 in xmlBufferAdd ()
>>> #2 0x004e0dc4 in xmlParserInputBufferCreateMem ()
>>> #3 0x004ced98 in xmlCtxtReadMemory ()
>>> #4 0x0026ea0c in xpath (fcinfo=0x37) at xml.c:3183
>>> #5 0x001095bc in ExecMakeFunctionResultNoSets (fcache=
> "Matt Magoffin" <[EMAIL PROTECTED]> writes:
>> (gdb) bt
>> #0 0x8820 in __memcpy ()
>> #1 0x004d9098 in xmlBufferAdd ()
>> #2 0x004e0dc4 in xmlParserInputBufferCreateMem ()
>> #3 0x004ced98 in xmlCtxtReadMemory ()
>> #4 0x0026ea0c
> "Matt Magoffin" <[EMAIL PROTECTED]> writes:
>> (gdb) bt
>> #0 0x8820 in __memcpy ()
>> #1 0x004d9098 in xmlBufferAdd ()
>> #2 0x004e0dc4 in xmlParserInputBufferCreateMem ()
>> #3 0x004ced98 in xmlCtxtReadMemory ()
>> #4 0x0026ea0c
> "Matt Magoffin" <[EMAIL PROTECTED]> writes:
>> I understand. I cannot make the SELECT query nor the ADD INDEX command
>> break on an empty database. I cannot share this database data, either.
>
> So try to make a test case using dummy data, or with suitably
> On Mon, Nov 19, 2007 at 10:02:20AM +1300, Matt Magoffin wrote:
>> Sorry if I left any relevant details out. I've been looking at this for
>> a while so many things are probably obvious only to me. Could you hint
>> at which additional details you think would be use
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Tom Lane) wrote:
> "Matt Magoffin" <[EMAIL PROTECTED]> writes:
> > So I explored with a SELECT statement, thinking there was some specific
> > XML document causing the crash. I could consistently exec
Hello, I currently have a table in Postgres 8.1 with a text column that
contains XML. I use the xml2 module to define several XPath-based function
indices on that column, and this has worked very well.
I'm now trying to evaluate the native XML support in 8.3b2. I dumped this
table from 8.1, then l
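For context, a hedged sketch of the xml2-contrib style of functional index being described; the table, column, and path are made up, and xpath_string() comes from the contrib/xml2 module:

-- contrib/xml2 must be installed for xpath_string()
CREATE TABLE lead_81 (id serial PRIMARY KEY, xml_doc text);
CREATE INDEX lead_81_status_idx
  ON lead_81 (xpath_string(xml_doc, '/lead/status'));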
ue has to be written in C language (plpgsql lang doesn't
> allow parameter passing for trigger functions).
>
> Has someone already written that function?
>
> regards, nico
>
>
> > From: "Matt Magoffin" <[EMAIL PROTECTED]>
> > X-Newsgroups: comp.
> I'd guess you could use TG_RELID or TG_RELNAME inside your trigger.
>
> On Tue, 20 Mar 2001, Matt Magoffin wrote:
>
> > Is there any way to make use of the tableoid either as an argument to
> > the function or as a reference within the function (in plpgsql)? For
> > exam
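For context, a hedged sketch of the TG_RELNAME suggestion; the tables and trigger are made up, and modern plpgsql syntax is used rather than what was current in 2001:

CREATE TABLE change_log (table_name name, changed_at timestamptz);
CREATE TABLE my_table (id integer);

CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
  -- TG_RELNAME holds the name of the table that fired the trigger, so one
  -- function can serve many tables without passing the name as an argument
  INSERT INTO change_log (table_name, changed_at) VALUES (TG_RELNAME, now());
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_log AFTER INSERT OR UPDATE ON my_table
  FOR EACH ROW EXECUTE PROCEDURE log_change();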