<[EMAIL PROTECTED]> writes:
> I have a table with about 1,400,000 rows in it. Each DELETE cascades to
> about 7 tables. When I do a 'DELETE FROM events' I get the following
> error:
> ERROR: Memory exhausted in AllocSetAlloc(84)

> I'm running a default install. What postgres options do I need
> to tweak to get this delete to work?

It isn't a Postgres tweak.  The only way you can fix it is to allow the
backend process to grow larger, which means increasing the kernel limits
on process data size.  This might be as easy as tweaking "ulimit" in the
postmaster's environment, or it might be painful, depending on your OS.
You might also have to increase swap space.

There's a TODO item to allow the list of pending trigger events (which is,
I believe, what's killing you) to be pushed out to temp files when it gets
too big.  However, that will have negative performance implications of its
own, so...

> Also, if my table grows to 30,000,000 rows will the same tweaks
> still work?  Or do I have to use a different delete strategy, such
> as deleting 1000 rows at a time?

On the whole, deleting a few thousand rows at a time might be your best
bet.  BTW: make sure you have indexes on the referencing columns, or this
will take a REALLY long time.
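Something along these lines, say --- the child table and column names here
are invented, so substitute whatever your schema actually uses (I'm
assuming events has an integer primary key, event_id):

	-- index each referencing column, e.g. for a hypothetical child
	-- table event_details(event_id) that references events(event_id):
	CREATE INDEX event_details_event_id_idx ON event_details (event_id);

	-- then delete in chunks keyed on the primary key, so each statement
	-- only has to queue a bounded number of pending trigger events:
	DELETE FROM events WHERE event_id < 10000;
	DELETE FROM events WHERE event_id < 20000;
	-- ... and so on (or drive it from a small script), vacuuming
	-- every so often along the way.

The point is just to bound how many cascaded trigger events any one
statement has to hold in memory.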
			regards, tom lane