On Mon, Oct 1, 2012 at 5:15 PM, Stefan Keller wrote:
> Sorry for the delay. I had to sort out the problem (among other things).
>
> It's mainly about swapping.
Do you mean ordinary file I/O? Or swapping of an actual process's
virtual memory? The latter shouldn't happen much unless you have
somet
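To tell real VM swapping apart from ordinary file I/O, one way is to watch the kernel's swap counters directly. A minimal sketch, assuming a Linux box with /proc mounted (the thread doesn't say which OS, so this is illustrative):

```shell
# How much swap is configured and how much is free right now.
grep -E 'SwapTotal|SwapFree' /proc/meminfo

# Cumulative pages swapped in/out since boot; if these counters grow
# between two samples, the workload is genuinely swapping, not just
# doing heavy file I/O through the page cache.
grep -E '^pswp(in|out)' /proc/vmstat
```

Running the second command twice a minute apart and diffing the numbers is usually enough to settle the question.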
On 10/01/2012 07:15 PM, Stefan Keller wrote:
> Any ideas? Partitioning?
Yes. Make sure you have a good column to partition on. Tables this large
are just bad performers in general, and heaven forbid you ever have to
perform maintenance on them. We had a table that size, and simply
creating an
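For reference, the sort of range partitioning being suggested could be sketched with inheritance-based child tables and CHECK constraints, the mechanism PostgreSQL offered at the time. The partition column (id) and the boundary values here are illustrative, not from the thread:

```sql
-- Hypothetical child partitions of nodes, split by id range.
CREATE TABLE nodes_p0 (CHECK (id >= 0 AND id < 1000000000)) INHERITS (nodes);
CREATE TABLE nodes_p1 (CHECK (id >= 1000000000 AND id < 2000000000)) INHERITS (nodes);

-- With constraint exclusion enabled, queries filtering on id
-- skip the child tables whose CHECK constraint rules them out.
SET constraint_exclusion = partition;

-- A trigger routes inserts on the parent into the right child.
CREATE OR REPLACE FUNCTION nodes_insert_router() RETURNS trigger AS $$
BEGIN
  IF NEW.id < 1000000000 THEN
    INSERT INTO nodes_p0 VALUES (NEW.*);
  ELSE
    INSERT INTO nodes_p1 VALUES (NEW.*);
  END IF;
  RETURN NULL;  -- row was already stored in a child table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER nodes_insert_route
  BEFORE INSERT ON nodes
  FOR EACH ROW EXECUTE PROCEDURE nodes_insert_router();
```

The maintenance win is that a child table can be reindexed, clustered, or dropped on its own instead of touching the full 80 GB heap at once.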
Stefan --
- Original Message -
> From: Stefan Keller
> To: Ivan Voras
> Cc: pgsql-performance@postgresql.org
> Sent: Monday, October 1, 2012 5:15 PM
> Subject: Re: [PERFORM] Inserts in 'big' table slowing down the database
>
Sorry for the delay. I had to sort out the problem (among other things).
It's mainly about swapping.
The table nodes contains about 2^31 entries and occupies about 80 GB of
disk space, plus indexes.
If one stored the geom values in a big array (with id as the
array index), it would only make up
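The back-of-the-envelope math behind those figures, treating the quoted 80 GB as GiB (the per-row number is approximate either way):

```python
# Rough per-row heap footprint of the nodes table,
# using the figures quoted in the thread.
table_bytes = 80 * 2**30      # ~80 GB of heap, excluding indexes
rows = 2**31                  # ~2.1 billion node rows
bytes_per_row = table_bytes / rows
print(bytes_per_row)          # 40.0 bytes of heap per row
```

At roughly 40 bytes per row, a large share of each row is PostgreSQL's fixed per-tuple header rather than payload, which is why a dense array keyed by id looks so much smaller on paper.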
On 03/09/2012 13:03, Stefan Keller wrote:
> Hi,
>
> I'm having performance issues with a simple table containing 'Nodes'
> (points) from OpenStreetMap:
>
> CREATE TABLE nodes (
>     id bigint PRIMARY KEY,
>     user_name text NOT NULL,
>     tstamp timestamp without time zone NOT NULL,
>