Jeremy Palmer writes:
> Ok I removed the geometry column from the cursor query within the function
> and the session still runs out of memory. I'm still seeing the same error
> message as well:
> PortalHeapMemory: 16384 total in 4 blocks; 5944 free (0 chunks); 10440
> used
> Executor
> No, given the info from the memory map I'd have to say that the leakage
> is in the cursor not in what you do in the plpgsql function. The cursor
> query looks fairly unexciting except for the cast from geometry to text.
> I don't have PostGIS installed here so I can't do any testing, but I
> wo
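For readers following along, a hypothetical sketch of the kind of cursor query Tom is pointing at (the table and column names are assumptions borrowed from the DDL quoted later in the thread, not Jeremy's actual code):

```sql
-- Hypothetical: a cursor whose SELECT casts a geometry column to text.
-- Each fetched row then carries a full text rendering of the shape,
-- which is the per-row allocation Tom suspects is not being released.
DECLARE title_cur CURSOR FOR
    SELECT number_owners, shape::text AS shape_text
    FROM titles;
```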
Jeremy Palmer writes:
> The plpgsql code that could be to blame is in the snippet below. I had a
> look and I'm not sure why it might be leaking. Is it because I assign
> v_id1 and v_id2 to the return table's 'id' record, return it, and then
> assign to v_id1 or v_id2 again from the cursor?
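For readers without the original attachment, the loop pattern Jeremy describes would look roughly like this (the function, table, and column names here are invented for illustration, not his actual code):

```sql
-- Hypothetical reconstruction of the pattern described above.
-- Note: reassigning v_id1/v_id2 on each iteration is not in itself a
-- leak; plpgsql frees a scalar variable's old value on overwrite.
CREATE OR REPLACE FUNCTION changed_ids()
RETURNS TABLE (id INT8) AS $$
DECLARE
    v_id1 INT8;
    v_id2 INT8;
BEGIN
    FOR v_id1, v_id2 IN
        SELECT a.id, b.id
        FROM table_a a
        JOIN table_b b USING (common_key)   -- placeholder join
    LOOP
        id := v_id1;
        RETURN NEXT;   -- emit a row for the first id
        id := v_id2;
        RETURN NEXT;   -- and another for the second
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

As Tom says above, the per-iteration assignments are an unlikely leak source; the cursor query itself is the suspect.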
ULL,
guarantee_status TEXT NOT NULL,
estate_description TEXT,
number_owners INT8 NOT NULL,
part_share BOOLEAN NOT NULL,
shape GEOMETRY
);
CREATE INDEX shx_title_shape ON titles USING gist (shape);
Thanks,
Jeremy
From: Tom Lane [t...@sss.pgh
Jeremy Palmer writes:
> Ok I have attached the map, or least what I think the map is.
Yup, that's what I was after. It looks like the main problem is here:
> PortalHeapMemory: 16384 total in 4 blocks; 5944 free (0 chunks); 10440
> used
> ExecutorState: 122880 total in 4 blocks; 63984
Jeremy Palmer writes:
> I'm running PostgreSQL 9.0.3 and getting an out of memory error while
> running a big transaction. This error does not crash the backend.
If it's a standard "out of memory" message, there should be a memory
context map dumped to postmaster's stderr. (Which is inconvenient
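If the postmaster's stderr is hard to get at (for instance because logging goes through syslog), one option, assuming you can edit postgresql.conf and restart, is to route stderr into log files with the logging collector:

```ini
# postgresql.conf -- capture the postmaster's stderr, where memory
# context maps are written on an out-of-memory error
logging_collector = on       # background process captures stderr
log_destination = 'stderr'
log_directory = 'pg_log'     # relative to the data directory
```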
Hi All,
I'm running PostgreSQL 9.0.3 and getting an out of memory error while running a
big transaction. This error does not crash the backend.
The nature of this transaction is that it sequentially applies data updates to a
large number (104) of tables; then, after applying those updates, a serie