Hi

I'm pretty sure PostgreSQL can handle this.
But since you framed it as a theoretical question,
it's probably worth looking at column stores as well (like [1]).
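
For a feel of what that looks like, here is a minimal sketch using cstore_fdw
(table and column names are hypothetical; the extension has to be built and
installed first):

    -- one columnar foreign server per database
    CREATE EXTENSION cstore_fdw;
    CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;

    -- a hypothetical analytics table, stored column-wise and compressed
    CREATE FOREIGN TABLE metrics_archive (
        metric_id   bigint,
        recorded_at timestamptz,
        value       numeric
    )
    SERVER cstore_server
    OPTIONS (compression 'pglz');

Keep in mind cstore_fdw is append-only (INSERT/COPY, no UPDATE or DELETE), so
it suits the read-mostly columns rather than the bookkeeping ones.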

-S.

[1] http://citusdata.github.io/cstore_fdw/

2015-01-19 22:47 GMT+01:00 Jonathan Vanasco <postg...@2xlp.com>:
> This is really a theoretical/anecdotal question, as I'm not at a scale yet
> where this would be measurable.  I want to investigate while this is fresh
> in my mind...
>
> I recall reading that unless a row has TOASTed columns, an `UPDATE` is
> essentially an `INSERT` + `DELETE`, with the previous row version left
> dead for vacuuming.
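
That's essentially right: MVCC means every UPDATE writes a new row version and
leaves the old one dead until VACUUM reclaims it. Easy to see with a throwaway
table (names here are made up):

    CREATE TABLE t (id int PRIMARY KEY, v int);
    INSERT INTO t VALUES (1, 0);

    SELECT ctid FROM t WHERE id = 1;   -- physical location, e.g. (0,1)
    UPDATE t SET v = 1 WHERE id = 1;
    SELECT ctid FROM t WHERE id = 1;   -- now (0,2): a fresh row version

    -- dead-tuple count; the stats collector updates this asynchronously
    SELECT n_dead_tup FROM pg_stat_user_tables WHERE relname = 't';

(The TOAST caveat: large out-of-line values aren't rewritten when they are
unchanged, but the heap tuple itself is still copied.)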
>
> A few of my tables have the following characteristics:
>         - The primary key is the target of FKEYs from many other tables/columns.
>         - Many columns (30+) of small data size
>         - Most columns (90%) are 1 WRITE(UPDATE) for 1000 READS
>         - Some columns (10%) do a bit of internal bookkeeping and are 1 
> WRITE(UPDATE) for 50 READS
>
> Has anyone done testing/benchmarking on the potential efficiency savings of
> consolidating the frequently UPDATEd columns into their own table?
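
No benchmarks from me, but the usual shape of that split (a sketch with
hypothetical names) is a 1:1 side table on the same PK, so each bookkeeping
UPDATE rewrites a narrow row instead of the wide 30+-column one:

    -- the wide, mostly-read columns stay where they are
    CREATE TABLE account (
        id    bigint PRIMARY KEY,
        name  text,
        email text
        -- ... plus the other mostly-read columns
    );

    -- the hot bookkeeping columns move to their own narrow relation
    CREATE TABLE account_stats (
        account_id   bigint PRIMARY KEY REFERENCES account(id),
        login_count  bigint NOT NULL DEFAULT 0,
        last_seen_at timestamptz
    );

    UPDATE account_stats
       SET login_count = login_count + 1, last_seen_at = now()
     WHERE account_id = 1;

One caveat: if the updated columns aren't indexed, HOT updates already spare
the indexes on the wide table, so the win may be smaller than expected.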

