Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-06-02 Thread Tom Wilcox
So for a system being used to serve many clients (a web service, etc.) it would be fine. But for my purposes, where I am using a single session to process large tables of data (such as a mammoth UPDATE statement normalising and encoding 25 million rows of string data), the 32-bit version i

Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-06-02 Thread Magnus Hagander
It does when you have many sessions. But each individual session can only use "32 bits worth of memory", and shared memory counts in all processes. The memory can be used for *os level cache*, not postgresql buffercache. //Magnus On Wed, Jun 2, 2010 at 16:08, Tom Wilcox wrote: > Hi Stephen, > >
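
Magnus's point can be made concrete with arithmetic (an illustration, not from the thread itself): a 32-bit backend has at most a 4 GiB address space, however much RAM the 64-bit OS can devote to its filesystem cache.

```python
# Illustrative only: why a 32-bit backend process cannot address more than
# 4 GiB, regardless of how much RAM the 64-bit OS can use as filesystem cache.
ADDRESS_BITS_32 = 32
addressable_bytes = 2 ** ADDRESS_BITS_32          # theoretical per-process ceiling
addressable_gib = addressable_bytes / 1024 ** 3

print(f"32-bit process address space: {addressable_gib:.0f} GiB")
# In practice the usable share is smaller still (user/kernel address-space split).
```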

Re: Fwd: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-06-02 Thread Tom Wilcox
Stephen, You're a legend! That is exactly the answer I needed to hear from someone who actually knows the score. I am now powering on with Plan B: Postgres64 on Linux64. After relentless searching I have become fairly convinced that a stable release of 64-bit postgres for Windows doesn't exist ye

Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-06-02 Thread Tom Wilcox
Hi Stephen, The impression I was getting from Magnus Hagander's blog was that a 32-bit version of Postgres could make use of >4 GB RAM when running on 64-bit Windows due to the way PG passes on the responsibility for caching onto the OS. Is this definitely not the case then? Here's where I'm getti

Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-06-02 Thread Stephen Frost
* Tom Wilcox (hungry...@googlemail.com) wrote: > My plan now is to try increasing the shared_buffers, work_mem, > maintenance_work_mem and apparently checkpoint_segments and see if that > fixes it. Er, work_mem and maintenance_work_mem aren't *limits*, they're more like *targets*. The ou
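
Stephen's warning that work_mem is a per-operation target, not a session cap, is worth quantifying. A back-of-envelope estimate (a rule-of-thumb sketch, not from the thread; the multiplier per query is an assumption) shows how a large work_mem multiplies across sorts and sessions:

```python
# Rough worst-case memory estimate, assuming work_mem can be allocated once
# per sort/hash node, several times within a single complex query.
def worst_case_mb(shared_buffers_mb, work_mem_mb, max_connections, ops_per_query=3):
    """Very rough upper bound on server memory use, in MB."""
    return shared_buffers_mb + max_connections * ops_per_query * work_mem_mb

# e.g. 8 GB shared_buffers, 256 MB work_mem, 100 connections:
print(worst_case_mb(8192, 256, 100))  # 84992 MB, i.e. ~83 GB
```

Even a box with 96 GB of RAM can be exhausted this way, which is why raising work_mem to "fix" out-of-memory errors can make them worse.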

Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-06-02 Thread Tom Wilcox
I have now hit a new query that produces Out of memory errors in a similar way to the last ones. Can anyone please suggest why I might be getting this error, and any way I can go about diagnosing or fixing it? The error I get is: ERROR: out of memory SQL state: 53200 Detail: Failed on request

Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-05-31 Thread Tom Wilcox
I am having difficulties. I have rerun my update that uses the Python functions: (1) UPDATE nlpg.match_data SET org = normalise(org); And some other similar queries on neighbouring fields in the table. They have all now worked, without any changes to the configuration. I have done one thing

Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-05-29 Thread Tom Wilcox
Thanks Bill, That sounds like good advice. I am rerunning my query with the Python function peppered with plpy.notice("msg") calls. Hopefully that'll shed some light on which inputs it's crashing on. Does anyone know of a way to measure the memory being consumed by the function/query so that
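
One crude way to answer Tom's question (an assumption on my part, not something the thread prescribes) is to read the backend's own peak memory usage from inside the PL/Python function via the stdlib resource module, and emit it with plpy.notice() between batches:

```python
import resource

def current_peak_rss():
    """Peak resident set size of this process. Note the units of ru_maxrss
    are platform-dependent: kilobytes on Linux, bytes on macOS."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

peak = current_peak_rss()
print(f"peak RSS so far: {peak}")
# Inside a PL/Python function one could log this with plpy.notice(str(peak))
# at intervals to see which inputs drive memory growth.
```

This only shows the high-water mark of the whole backend process, but a steadily climbing value between notices is usually enough to locate a leak.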

Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-05-29 Thread Bill Moran
On 5/28/10 8:43:48 PM, Tom Wilcox wrote: I ran this query: EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org; And I got this result: "Seq Scan on match_data (cost=0.00..9762191.68 rows=32205168 width=206) (actual time=76873.592..357450.519 rows=2961 loops=1)" "Total runtime: 8028212.36

Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-05-28 Thread Tom Wilcox
I ran this query: EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org; And I got this result: "Seq Scan on match_data (cost=0.00..9762191.68 rows=32205168 width=206) (actual time=76873.592..357450.519 rows=2961 loops=1)" "Total runtime: 8028212.367 ms" On 28 May 2010 19:39, Tom Wilcox w

Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-05-28 Thread Alban Hertroys
On 28 May 2010, at 20:39, Tom Wilcox wrote: > out = '' > for tok in toks: > ## full word replace > if tok == 'house' : out += 'hse'+ADDR_FIELD_DELIM > elif tok == 'ground' : out += 'grd'+ADDR_FIELD_DELIM > elif tok == 'gnd' : out += 'grd'+ADDR_FIELD_DELIM >
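
The quoted elif chain can be collapsed into a dictionary lookup, which stays flat however many replacements are added. A sketch using the replacement pairs visible in the snippet; the delimiter value and the handling of unmatched tokens are assumptions, since both are defined outside the quoted excerpt:

```python
# Sketch of the quoted elif chain as a table lookup. The replacement pairs
# come from the snippet; ADDR_FIELD_DELIM's real value is defined elsewhere.
ADDR_FIELD_DELIM = ' '  # assumption

REPLACEMENTS = {
    'house': 'hse',
    'ground': 'grd',
    'gnd': 'grd',
}

def normalise_tokens(toks):
    """Replace known tokens, pass others through (assumed), join with delimiter."""
    out = ''
    for tok in toks:
        out += REPLACEMENTS.get(tok, tok) + ADDR_FIELD_DELIM
    return out

print(normalise_tokens(['house', 'gnd', 'floor']))  # "hse grd floor "
```

Beyond readability, a dict lookup avoids re-testing every branch per token, which matters when the function runs 25 million times.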

Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-05-28 Thread Tom Wilcox
Oops. Sorry about that. I am having this problem with multiple queries; however, I am confident that a fair number may involve the custom plpython "normalise" function which I have made myself. I didn't think it would be complicated enough to produce a memory problem. Here it is: -- Normalises com

Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-05-28 Thread Bill Moran
In response to Tom Wilcox : > In addition, I have discovered that the update query that runs on each row > of a 27 million row table and fails with an Out of memory error will work when > limited to 1 million rows, in a much shorter period of time: > > EXPLAIN ANALYZE > UPDATE nlpg.match_
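
Since the update succeeds on 1 million rows but fails on 27 million, one workaround (my suggestion, not something the thread prescribes) is to drive the UPDATE in primary-key ranges and commit between batches. The range logic alone, independent of any database driver:

```python
def id_batches(min_id, max_id, batch_size):
    """Yield inclusive (lo, hi) primary-key ranges covering [min_id, max_id],
    suitable for driving e.g. 'UPDATE ... WHERE id BETWEEN %s AND %s'
    once per batch, with a COMMIT in between."""
    lo = min_id
    while lo <= max_id:
        hi = min(lo + batch_size - 1, max_id)
        yield (lo, hi)
        lo = hi + 1

print(list(id_batches(1, 10, 4)))  # [(1, 4), (5, 8), (9, 10)]
```

Committing per batch bounds per-transaction memory (pending trigger events, in particular, accumulate for the whole transaction) at the cost of losing all-or-nothing semantics for the full update.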

Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-05-28 Thread Bill Moran
In response to Tom Wilcox : > Also, can anyone give me any pointers for configuring postgres to use > ALL 96GB of RAM in my new machine? I would like to know it was using > everything available, especially when it is possible to load an entire > 30m row table into memory! I am currently usin
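
Common rules of thumb (general PostgreSQL-tuning folklore, not advice given in this thread) size shared_buffers at roughly 25% of RAM and effective_cache_size at around 75%, leaving the remainder to the OS filesystem cache. A quick calculator under those assumptions:

```python
def suggest_settings(ram_gb):
    """Rule-of-thumb postgresql.conf sizing (assumption: ~25% shared_buffers,
    ~75% effective_cache_size); always validate against the real workload."""
    return {
        'shared_buffers_gb': ram_gb // 4,
        'effective_cache_size_gb': ram_gb * 3 // 4,
    }

print(suggest_settings(96))
# {'shared_buffers_gb': 24, 'effective_cache_size_gb': 72}
```

Note that effective_cache_size is only a planner hint, not an allocation, so "using all 96 GB" mostly means letting the OS cache the table files rather than giving every byte to Postgres directly.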

Re: [GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-05-28 Thread Stephen Frost
* Tom Wilcox (hungry...@googlemail.com) wrote: > UPDATE tbl SET f1 = COALESCE(f2,'') || ' ' || COALESCE(f3); > > Can anyone suggest reasons why I might be running out of memory on such > a simple query? Do you have any triggers on that table? Or FKs? Stephen

[GENERAL] Out of Memory and Configuration Problems (Big Computer)

2010-05-28 Thread Tom Wilcox
Hi, I am fairly new to postgres and I have been using it with Python to develop a set of functions that operate on some moderately large tables (30 million rows with 13-15 columns). I have spent some time messing with the configuration file to get PG to use indexes when planning queries, etc