On Tue, Mar 27, 2012 at 11:54 PM, Francois Deliege wrote:
Hi list,

I have the following table with millions of rows:

CREATE TABLE table1
(
    col1 text,
    col2 text,
    col3 text,
    col4 text,
    col5 text,
    col6 text
);

select col1 from table1 group by col1 limit 1;
select distinct on (col1) col1 from table1 limit 1;

select col1 from table1 group by col1 limit 2;
select distinct on (col1) col1 from table1 limit 2;

Performing any of these four queries is very slow.
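If it helps, here is a sketch of the usual workaround for this pattern (not taken from the thread itself): emulate a "loose index scan" with a recursive CTE, assuming an index on col1 and that NULLs in col1 can be ignored. Each recursive step does a single index probe for the next distinct value, so LIMIT 2 reads a handful of rows instead of scanning millions. The index name below is illustrative.

CREATE INDEX table1_col1_idx ON table1 (col1);

WITH RECURSIVE d AS (
    (SELECT min(col1) AS col1 FROM table1)   -- first distinct value
    UNION ALL
    SELECT (SELECT min(t.col1)               -- next value above the last one
            FROM table1 t
            WHERE t.col1 > d.col1)
    FROM d
    WHERE d.col1 IS NOT NULL                 -- stop once values are exhausted
)
SELECT col1 FROM d WHERE col1 IS NOT NULL LIMIT 2;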
This may just be a typo, but if you really did create write (dirty) block device cache by writing the pg_dump file somewhere, then that is what it's supposed to do ;) Linux is more aggressive about write cache and will allow more of it to build up than e.g. HP-UX, which starts to throttle writing processes much sooner.
On Tue, Mar 27, 2012 at 5:06 PM, Joshua Berkus wrote:
> In an effort to test this, we deliberately ran a pg_dump. This did grow the
> cache to all available memory, but Linux rapidly cleared the cache (flushing
> to disk) down to 25GB within an hour.
This would happen if some queries (or some other processes) were claiming enough memory to force those pages out.
Have run across some memory behavior on Linux I've never seen before.
Server running RHEL6 with 96GB of RAM.
Kernel 2.6.32
PostgreSQL 9.0
208GB database with fairly random accesses over 50% of the database.
Now, here's the weird part: even after a week of uptime, only 21 to 25GB of
cache is ever in use.
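For what it's worth (this is not from the surviving messages), the stock pg_buffercache module shows what PostgreSQL itself keeps resident in shared_buffers, which is separate from the Linux page cache being discussed here. A minimal sketch, assuming the module is installed (CREATE EXTENSION on 9.1+, or the contrib SQL script on 9.0):

-- Top relations by number of 8kB buffers resident in shared_buffers
-- for the current database.
SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;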
Peter,
Check out pg_fincore. Still kind of risky on a production server, but does an
excellent job of measuring page access on Linux.
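A minimal sketch of what that looks like; the table name is just an example, and the column list assumes pgfincore's standard per-segment output:

-- rel_os_pages: OS pages the segment occupies on disk;
-- pages_mem: how many of them are resident in the Linux page cache.
SELECT relpath, segment, rel_os_pages, pages_mem
FROM pgfincore('mytable'::regclass);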
- Original Message -
> Baron Schwartz's recent post [1] on working set size got me to thinking.
> I'm well aware of how I can tell when my database's working set exceeds
> available memory.