Philip Crotwell <[EMAIL PROTECTED]> writes:
> Is there significant overhead involved in using large objects that aren't
> very large?
Yes, since each large object is a separate table in 7.0.* and before.
The allocation unit for table space is 8K, so your 10K objects chew up
16K of table space.
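If you want to see where the space is going, you can look at pg_class. As a rough sketch, and assuming the per-object tables still use the pre-7.1 xinv/xinx naming on your system (check relname if they don't), something like this lists the pages each large object and its index occupy:

    -- one row per large-object table or index; relpages * 8 gives kilobytes
    SELECT relname, relkind, relpages, relpages * 8 AS kbytes
    FROM pg_class
    WHERE relname LIKE 'xinv%'   -- LO data tables
       OR relname LIKE 'xinx%'   -- their indexes
    ORDER BY relname;

Given the arithmetic above, each of your ~10K objects should show up with
relpages = 2 (16K) for the data table alone, before counting its index.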
Hi
I'm putting lots of small (~10Kb) chunks of binary seismic data into large
objects in postgres 7.0.2. Basically just arrays of 2500 or so ints that
represent about a minute's worth of data. I put the data in at a rate of
about 1.5Mb per hour, but the disk usage of the database is growing at