"Merlin Moncure" writes:
> On Fri, Dec 26, 2008 at 12:38 PM, Tom Lane wrote:
>> Yeah, the average expansion of bytea data in COPY format is about 3X :-(
>> So you need to get the max row length down to around 300mb. I'm curious
>> how you got the data in to start with --- were the values assembl
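To make that 3X figure concrete, here is a minimal sketch (the table name big_table and bytea column payload are placeholders, not from the thread) that flags values whose raw size exceeds the ~300MB budget implied by 3X expansion against the 1GB field limit. Roughly speaking, in the pre-9.0 escape output format a non-printable byte becomes a five-character sequence (\\nnn) in the text dump while printable bytes stay at one character, which averages out to about 3X for binary data.

-- Hypothetical names: big_table, payload. With ~3X expansion in COPY's
-- text format, a raw bytea value over ~300MB can exceed 1GB when dumped.
SELECT ctid, octet_length(payload) AS raw_bytes
FROM big_table
WHERE octet_length(payload) > 300 * 1024 * 1024
ORDER BY raw_bytes DESC;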
600MB, measured with octet_length() on the data. If there is a better way to
measure row/cell size, please let me know, because we thought it was the
1GB-limit problem too. We thought we were being conservative by getting rid
of the larger rows, but I guess we need to get rid of even more.

Thanks,
Ted
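A small sketch of the measurement, again with placeholder names (big_table, payload, id): octet_length() reports the raw datum size, which is what COPY has to expand, while pg_column_size() reports what is actually stored after TOAST compression, so the two can disagree substantially.

-- Hypothetical names. datum_bytes is what COPY must expand on output;
-- stored_bytes is the post-TOAST on-disk size.
SELECT id,
       octet_length(payload)   AS datum_bytes,
       pg_column_size(payload) AS stored_bytes
FROM big_table
ORDER BY datum_bytes DESC
LIMIT 20;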
The thing to keep in mind is that every update creates a new row version,
and that new version has to be entered into every index on the table, not
just the indexes on the column you updated. You can test the weight of the
indexes by copying the table (the copy comes over without any indexes) and
then trying your query again.
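A minimal sketch of that test, with hypothetical names (docs, docs_copy, flag, id): CREATE TABLE ... AS copies the rows but none of the indexes, so comparing the same UPDATE on both tables exposes the index-maintenance overhead.

-- Hypothetical names. The copy has no indexes, so the second UPDATE
-- pays no index-maintenance cost (it will seq-scan, though).
CREATE TABLE docs_copy AS SELECT * FROM docs;

\timing on
UPDATE docs      SET flag = true WHERE id < 10000;  -- maintains every index
UPDATE docs_copy SET flag = true WHERE id < 10000;  -- no indexes to update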
I've heard tell that if you have a table [...]