The following bug has been logged online:

Bug reference:      1333
Logged by:          Keith Halewood

Email address:      [EMAIL PROTECTED]

PostgreSQL version: 7.4.5

Operating system:   HP-UX 11i

Description:        vacuum full apparently fails to complete

Details: 

We have a database of approximately 22 GB, of which about 18 GB is large 
objects. The spread of sizes is roughly 16000x 2-6 MB objects, 16000x 1-2 MB 
objects and 140000x 1-9 KB objects. 

When this database is copied, record by record, to another blank database 
cluster, a vacuum full completes in approximately an hour, presumably 
because it has nothing to vacuum. 

In general, records are added to ordinary tables on a daily basis (some 45 
per day to one table), and this results in approximately 450 large objects 
being created (1x 2-6 MB, 2x 1-2 MB and about 10x 1-9 KB). 

Over the past few weeks, a large number of changes to accumulated data have 
been made, mostly resulting in large objects being discarded and recreated. 
There are triggers in the database which ensure that, just before a record is 
deleted, the large objects in any of its columns are unlinked. Similarly, 
triggers for record update ensure that changes to the oid columns result in 
the appropriate unlinks first. 
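For illustration, the delete-time triggers are along these lines — a sketch only, in 7.4-era syntax (quoted function body); the table "documents" and its oid column "body" are invented names, not our actual schema:

```sql
-- Hypothetical sketch of a delete-time unlink trigger. The table
-- "documents" and its oid column "body" are invented for illustration.
CREATE FUNCTION unlink_body() RETURNS trigger AS '
BEGIN
    IF OLD.body IS NOT NULL THEN
        -- server-side lo_unlink removes the large object itself
        PERFORM lo_unlink(OLD.body);
    END IF;
    RETURN OLD;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER documents_unlink_body
    BEFORE DELETE ON documents
    FOR EACH ROW EXECUTE PROCEDURE unlink_body();
```

The update-time variant is similar, but compares OLD.body with NEW.body and unlinks only when the column actually changes.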

All other tables vacuum full successfully. Here is the result of removing 
approximately 140000 large objects (of the 1-9 KB size) and 230 objects (of 
the 2-6 MB size): 

INFO:  "pg_largeobject": found 1052029 removable, 8638984 nonremovable row 
versions in 3086785 pages 
DETAIL:  0 dead row versions cannot be removed yet.
Nonremovable row versions range from 36 to 2084 bytes long.
There were 313026 unused item pointers.
Total free space (including removable row versions) is 7324762424 bytes.
242145 pages are or will become empty, including 1102 at the end of the 
table. 
3085017 pages containing 7315763688 free bytes are potential move 
destinations. 
CPU 495.25s/62.74u sec elapsed 3943.42 sec.
INFO:  index "pg_largeobject_loid_pn_index" now contains 8638984 row 
versions in 3819 

At this point, the vacuum is *still* in progress (after 8 hours), and disc 
activity is exclusively the result of this vacuum: about 140 blocks/second 
with no disc queues. Is this likely to be a bug, or just a scalability issue 
involving vacuum and the large object table? 
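For scale, the figures in the VACUUM output above can be cross-checked with a quick query, assuming the default 8 KB block size:

```sql
-- Cross-check of the reported VACUUM figures, assuming the default
-- 8 KB (8192-byte) block size:
SELECT 3086785::bigint * 8192             AS table_bytes,    -- ~23.5 GiB on disk
       7324762424.0 / 3086785             AS free_per_page,  -- ~2373 bytes per page
       7324762424.0 / (3086785 * 8192.0)  AS free_fraction;  -- ~29% of the table is free space
```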

Configuration:

HP 9000/J2240, 2x CPU, 4 GB RAM, HP-UX 11i, June 2003 required/gold patches, 
HP ANSI-C developers bundle B.11.11.04. No HP AutoRAID performance problems. 
PostgreSQL 7.4.5 

Non-default contents of postgresql.conf in this db cluster:

max_connections=50
shared_buffers=10000
sort_mem=8192
vacuum_mem=81920
max_fsm_pages=40000
wal_buffers=32

The major reason we continue to use large objects rather than bytea columns 
directly in tables is the poor/buggy handling of binary column values in the 
Perl DBI and DBD::Pg modules. 

Hoping for some insights.

Keith Halewood

