On Mon, Jan 2, 2017 at 2:57 PM, Rob Sargent <robjsarg...@gmail.com> wrote:

> Perhaps this is your opportunity to correct someone else's mistake. You
> need to show the table definition to convince us that it cannot be
> improved. That it may be hard work really doesn't mean it's not the right
> path.
>

This may not be possible. The data might be coming in from an external
source. I imagine you've run into the old "well, _we_ don't have any
problems, so it must be on your end!" scenario.

Example: we receive CSV files from an external source. These files are
_supposed_ to be validated, but we have often received files where NOT NULL
fields have "nothing" in them. E.g. a customer bill which has _everything_
in it _except_ the customer number (or has an invalid one such as "123{"),
or which is missing some other vital piece of information.

In this particular case, the OP might want to do what we did in a similar
situation. We had way too many columns in a table and the performance was
horrible. We did an analysis and, as usual, the majority of the selects
were for a small subset of the columns, about 15% of the total. We split
the table into a "high use" columns table and a "low use" columns table,
then used triggers to make sure that adding or deleting a row in one table
created or deleted the corresponding row in the other.
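To make that concrete, here is a minimal sketch of the split (all names
invented; the real tables had far more columns):

    CREATE TABLE cust_hot (       -- the ~15% "high use" columns
        cust_id  bigint PRIMARY KEY,
        name     text NOT NULL,
        balance  numeric
    );

    CREATE TABLE cust_cold (      -- everything else
        cust_id  bigint PRIMARY KEY,
        notes    text,
        fax      text
    );

    CREATE FUNCTION cust_cold_sync() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            -- create the matching "low use" row as a stub
            INSERT INTO cust_cold (cust_id) VALUES (NEW.cust_id);
            RETURN NEW;
        ELSE  -- DELETE
            DELETE FROM cust_cold WHERE cust_id = OLD.cust_id;
            RETURN OLD;
        END IF;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER cust_hot_sync
        AFTER INSERT OR DELETE ON cust_hot
        FOR EACH ROW EXECUTE PROCEDURE cust_cold_sync();

The occasional query that needs everything just joins the two tables on the
key; you can hide that behind a view if you like.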



-- 
There’s no obfuscated Perl contest because it’s pointless.

—Jeff Polk

Maranatha! <><
John McKown
