Thomas Hallgren <[EMAIL PROTECTED]> writes:
> What is the quality of the large object solution today? Does it have
> known flaws that nobody cares about since it's discontinued, or is it
> considered a maintained and worthy part of the overall solution?
More the former than the latter, I think, at least in the minds of the
usual suspects for backend work.

The main problem I'd see with the idea of supporting over-2GB LOs is
that we store all LOs in a database in the same table (pg_largeobject),
and so you would run into the table size limit (around 16TB IIRC) with
not an amazingly large number of such LOs.  We used to store each LO in
its own table, but that was no better, as a few thousand LOs could
easily bring the filesystem to its knees (on platforms where the
directory lookup mechanism doesn't scale to huge numbers of entries in a
single directory).  I don't think there'd be any point in upgrading the
LO support to 64 bits without some rethinking of the underlying storage
structure.

A generic issue with LOs is the extreme pain involved in dump/reload:
not only the difficulty of transporting the LOs themselves, but that of
updating references to them from the database.  Vacuuming
no-longer-referenced LOs is a serious problem too.  If LOs were
considered a first-class feature, then I'd want to see more interest in
dealing with those problems.

Lesser issues with LOs are protection (there isn't any), user-accessible
locking (there isn't any), and MVCC (there isn't any).  The latter has
been on the to-do list since
http://archives.postgresql.org/pgsql-hackers/2002-05/msg00875.php
I think it could actually be fixed now without too much pain, because
there is a mechanism for finding out the surrounding query's snapshot,
which functions could not do before 8.0.

			regards, tom lane
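For scale, a back-of-envelope calculation makes the table-size concern concrete. Assuming the ~16TB table limit and 2GB per-LO cap quoted above (and ignoring the per-chunk tuple overhead in pg_largeobject, which makes the real ceiling lower still):

```python
# Back-of-envelope capacity check for pg_largeobject.
# Both figures are assumptions taken from the message above; chunk and
# tuple overhead are ignored, so the real ceiling is lower.
TABLE_LIMIT_BYTES = 16 * 2**40   # ~16 TB table size limit
MAX_LO_BYTES = 2 * 2**30         # 2 GB per-large-object cap

max_full_size_los = TABLE_LIMIT_BYTES // MAX_LO_BYTES
print(max_full_size_los)  # 8192
```

So a database holding only maximum-size LOs tops out at a few thousand of them, which is what "not an amazingly large number" amounts to.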
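On the vacuuming point: the usual workaround (what tools like contrib/vacuumlo do) is a set difference: gather every OID that user columns actually reference, subtract that from the set of OIDs present in pg_largeobject, and lo_unlink the remainder. A minimal sketch of that logic, with hypothetical OID sets standing in for the real catalog queries:

```python
# Sketch of orphan-LO detection in the style of contrib/vacuumlo.
# The OID sets below are hypothetical stand-ins for catalog queries:
#   all_los    <- SELECT DISTINCT loid FROM pg_largeobject
#   referenced <- union of SELECT <oid-column> FROM <each user table>
all_los = {16384, 16385, 16386, 16387}
referenced = {16384, 16386}

# LOs present in pg_largeobject but referenced by no user column are
# candidates for lo_unlink().
orphans = sorted(all_los - referenced)
print(orphans)  # [16385, 16387]
```

The pain Tom describes is that nothing in the server does this automatically; whether an LO is "referenced" depends entirely on application conventions about which columns hold LO OIDs.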