Re: [GENERAL] Large objects and savepoints - Snapshot reference leak

2014-02-03 Thread Andreas Lubensky
Thanks Alvaro, that is good to know. At the moment we are stuck with version 9.1.9 and have to stay there, at least for Linux. But do I understand correctly that the warning can be ignored for the moment? On Fri, 2014-01-31 at 15:15 -0300, Alvaro Herrera wrote: > Andreas Lubensky wrote: > > Hi, >

Re: [GENERAL] Large objects and savepoints - Snapshot reference leak

2014-01-31 Thread Alvaro Herrera
Andreas Lubensky wrote: > Hi, > > I'm trying to read/write large objects via libpq. I encapsulated the > operations in a transaction but I wanted to put a savepoint before doing > any operations, so I can do a rollback in case anything fails without > breaking the current transaction. Now, when st

Re: [GENERAL] Large objects and savepoints - Snapshot reference leak

2014-01-31 Thread Pavel Stehule
Hello This bug was fixed a few months ago by Heikki Regards Pavel On 31.1.2014 17:35 "Andreas Lubensky" wrote: > Hi, > > I'm trying to read/write large objects via libpq. I encapsulated the > operations in a transaction but I wanted to put a savepoint before doing > any operations, so I can do

[GENERAL] Large objects and savepoints - Snapshot reference leak

2014-01-31 Thread Andreas Lubensky
Hi, I'm trying to read/write large objects via libpq. I encapsulated the operations in a transaction, but I wanted to put a savepoint before doing any operations so I can roll back in case anything fails without breaking the current transaction. Now, when something actually fails and the transactio
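The pattern being described can be sketched in plain SQL (a minimal sketch; the actual libpq large-object calls would run between the savepoint commands):

```sql
BEGIN;
SAVEPOINT before_lo;
-- large-object operations (lo_open, loread, lowrite, ...) happen here,
-- inside the surrounding transaction
-- if any of them fails, recover without aborting the whole transaction:
ROLLBACK TO SAVEPOINT before_lo;
-- ... other work in the same transaction can continue ...
COMMIT;
```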

Re: [GENERAL] Large objects system

2013-10-04 Thread John R Pierce
On 10/3/2013 2:22 AM, Rafael B.C. wrote: My real doubt right now is why bytea does not get processed by the TOAST system even when it grows enough, since I've read that tuples are not allowed to expand over several database pages. A tuple can't expand over ONE database page, and generally it prefers
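A quick way to see whether a table's wide values are in fact being moved out of line is to look up its TOAST relation in the catalog (a sketch; 'my_table' is a hypothetical table name):

```sql
-- shows the TOAST relation backing a table, or "-" if it has none
SELECT reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE relname = 'my_table';
```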

Re: [GENERAL] Large objects system

2013-10-04 Thread Albe Laurenz
Rafael B.C. wrote: > I am dealing with the old decision about how to store data objects and trying > to understand in depth the > Postgres system, including TOAST, the pg_largeobject table and so on. > > My real doubt right now is why bytea does not get processed by the TOAST system > even when it grows e

[GENERAL] Large objects system

2013-10-03 Thread Rafael B.C.
Hello, I am dealing with the old decision about how to store data objects and trying to understand in depth the Postgres system, including TOAST, the pg_largeobject table and so on. My real doubt right now is why bytea does not get processed by the TOAST system even when it grows enough, since I've read tha

Re: [GENERAL] Large Objects and and Vacuum

2012-01-03 Thread Albe Laurenz
Simon Windsor wrote: [pg_largeobject keeps growing] > The data only has to be kept for a few days, and generally the system is > performing well, but as stated in the email, regular use of vacuumlo, vacuum > and autovacuum leaves the OS disc space slowly shrinking. > > As a last resort this week,

Re: [GENERAL] Large Objects and and Vacuum

2012-01-02 Thread Simon Windsor
regularly. Simon -Original Message- From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce Sent: 02 January 2012 11:18 To: pgsql-general@postgresql.org Subject: Re: [GENERAL] Large Objects and and Vacuum On 12/30/11 3:54 PM, Sim

Re: [GENERAL] Large Objects and and Vacuum

2012-01-02 Thread John R Pierce
On 12/30/11 3:54 PM, Simon Windsor wrote: I am struggling with the volume and number of XML files a new application is storing. how big are these XML files? large_object was meant for storing very large files, like videos, etc. multi-megabyte to gigabytes. XML stuff is typically a lot sma

Re: [GENERAL] Large Objects and and Vacuum

2012-01-02 Thread Alban Hertroys
On 31 December 2011 00:54, Simon Windsor wrote: > I am struggling with the volume and number of XML files a new application is > storing. The table pg_largeobjects is growing fast, and despite the efforts > of vacuumlo, vacuum and auto-vacuum it keeps on growing in size I can't help but wonder wh

Re: [GENERAL] Large Objects and and Vacuum

2012-01-02 Thread Albe Laurenz
Please don't send HTML mail to this list. Simon Windsor wrote: > I am struggling with the volume and number of XML files a new application is storing. The table > pg_largeobjects is growing fast, and despite the efforts of vacuumlo, vacuum and auto-vacuum it keeps > on growing in size. Have you c

[GENERAL] Large Objects and and Vacuum

2011-12-30 Thread Simon Windsor
Hi, I am struggling with the volume and number of XML files a new application is storing. The table pg_largeobject is growing fast, and despite the efforts of vacuumlo, vacuum and autovacuum it keeps on growing in size. The main tables that hold large objects are partitioned and every few
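For context on the symptom described here: plain VACUUM only marks dead space inside pg_largeobject as reusable; actually returning disk space to the OS generally requires a table rewrite, which takes an exclusive lock (a sketch, assuming superuser privileges):

```sql
-- run after vacuumlo has lo_unlink'ed the orphans; rewrites the catalog
-- tables and gives freed space back to the operating system
VACUUM FULL pg_largeobject;
VACUUM FULL pg_largeobject_metadata;  -- exists on 9.0 and later only
```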

Re: [GENERAL] Large Objects: Sizeof and Deleting Unlinked LOs

2010-02-11 Thread Harald Fuchs
In article <4b72aeb3.4000...@selestial.com>, Howard Cole writes: > Is there an SQL function to determine the size of a large object? I'm using a pgsql helper function for that: CREATE FUNCTION lo_size(oid oid) RETURNS integer LANGUAGE plpgsql AS $$ DECLARE fd int; res i
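The helper function quoted above is cut off by the archive; a complete version along the same lines might look like this (a sketch using the client-visible constants INV_READ = 0x40000 and whence 2 = seek-to-end; on 9.3 and later, `SELECT length(lo_get(loid))` does the same job directly):

```sql
CREATE FUNCTION lo_size(loid oid) RETURNS integer LANGUAGE plpgsql AS $$
DECLARE
    fd  integer;
    sz  integer;
BEGIN
    fd := lo_open(loid, 262144);   -- 262144 = 0x40000 = INV_READ
    sz := lo_lseek(fd, 0, 2);      -- seeking to the end returns the size
    PERFORM lo_close(fd);
    RETURN sz;
END;
$$;
```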

[GENERAL] Large Objects: Sizeof and Deleting Unlinked LOs

2010-02-10 Thread Howard Cole
Is there an SQL function to determine the size of a large object? Also, can I safely delete all the large objects in pg_catalog.pg_largeobject? For example: select lo_unlink(loid) from (select distinct loid from pg_catalog.pg_largeobject) as loids where loid not in (select my_oid from my_onl
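Deleting unreferenced large objects is exactly what the contrib tool vacuumlo automates; in SQL it amounts to something like the following (a sketch — `my_table.my_oid` stands in for whichever columns actually reference large objects in a given schema):

```sql
-- unlink every large object whose OID is not referenced anywhere;
-- on 9.0+ you can read from pg_largeobject_metadata instead
SELECT lo_unlink(l.loid)
FROM (SELECT DISTINCT loid FROM pg_catalog.pg_largeobject) AS l
WHERE NOT EXISTS (SELECT 1 FROM my_table t WHERE t.my_oid = l.loid);
```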

Re: [GENERAL] Large Objects and Replication question

2009-12-12 Thread Linos
SymmetricDS is a replication solution that handles large objects; it is asynchronous and multi-master. I have been using it between 30 separate PostgreSQL instances connected by a slow link, and so far it has worked without problems. I think this project should be in the wiki. http://symmetricd

Re: [GENERAL] Large Objects and Replication question

2009-12-02 Thread Tatsuo Ishii
> > However you need to use newer API > > of libpq to create large objects: > > > > Oid lo_create(PGconn *conn, Oid lobjId); > [...] > > You cannot use old API lo_creat() since it relies on OID, which > > pgpool-II does not guarantee OIDs can be replicated. > > Does it mean that lo_create(conn,

Re: [GENERAL] Large Objects and Replication question

2009-12-02 Thread Daniel Verite
Tatsuo Ishii wrote: > However you need to use newer API > of libpq to create large objects: > > Oid lo_create(PGconn *conn, Oid lobjId); [...] > You cannot use old API lo_creat() since it relies on OID, which > pgpool-II does not guarantee OIDs can be replicated. Does it mean that lo_cr

Re: [GENERAL] Large Objects and Replication question

2009-12-02 Thread Alexey Klyukin
On Dec 2, 2009, at 5:48 PM, Tatsuo Ishii wrote: > BTW > >> Additionally there is a list of available open-source replication solutions >> here: >> http://wiki.postgresql.org/wiki/Replication%2C_Clustering%2C_and_Connection_Pooling > > The link http://www.slony2.org/ mentioned in the wiki page

Re: [GENERAL] Large Objects and Replication question

2009-12-02 Thread Tatsuo Ishii
BTW > Additionally there is a list of available open-source replication solutions > here: > http://wiki.postgresql.org/wiki/Replication%2C_Clustering%2C_and_Connection_Pooling The link http://www.slony2.org/ mentioned in the wiki page above apparently has nothing to do with Slony-II. Can someon

Re: [GENERAL] Large Objects and Replication question

2009-12-02 Thread Tatsuo Ishii
> Does anyone know of a replication solution that can handle large > objects? Preferably on a per-database rather than per-cluster basis. pgpool-II can handle large objects. However, you need to use the newer libpq API to create large objects: Oid lo_create(PGconn *conn, Oid lobjId); I'm not sur
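The distinction between the two creation APIs also exists in the server-side SQL functions, which makes it easy to demonstrate (a sketch; 424242 is an arbitrary example OID):

```sql
SELECT lo_creat(-1);       -- server assigns the OID; unsafe under statement-based replication
SELECT lo_create(424242);  -- caller supplies the OID, so every replica creates the same one
```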

Re: [GENERAL] Large Objects and Replication question

2009-12-02 Thread Alexey Klyukin
On Dec 2, 2009, at 4:23 PM, Howard Cole wrote: > Does anyone know of a replication solution that can handle large objects? > Preferably on a per-database rather than per-cluster basis. Take a look at Mammoth Replicator: https://projects.commandprompt.com/public/replicator. Additionally there

[GENERAL] Large Objects and Replication question

2009-12-02 Thread Howard Cole
Does anyone know of a replication solution that can handle large objects? Preferably on a per-database rather than per-cluster basis. Incidentally - out of interest - why doesn't Slony handle large objects? Thanks. Howard www.selestial.com -- Sent via pgsql-general mailing list (pgsql-genera

Re: [GENERAL] Large objects oids

2008-06-10 Thread Tom Lane
David Wall <[EMAIL PROTECTED]> writes: > Tom Lane wrote: >> Yup, and in practice you'd better have a lot less than that or assigning >> a new OID might take a long time. > What's a rough estimate of "a lot less"? Are we talking 2 billion, 3 > billion, 1 billion? It's difficult to say --- the as

Re: [GENERAL] Large objects oids

2008-06-10 Thread David Wall
Tom Lane wrote: David Wall <[EMAIL PROTECTED]> writes: Since large objects use OIDs, does PG 8.3 have a limit of 4 billion large objects across all of my various tables Yup, and in practice you'd better have a lot less than that or assigning a new OID might take a long time. What

Re: [GENERAL] Large objects oids

2008-06-10 Thread Tom Lane
David Wall <[EMAIL PROTECTED]> writes: > Since large objects use OIDs, does PG 8.3 have a limit of 4 billion > large objects across all of my various tables Yup, and in practice you'd better have a lot less than that or assigning a new OID might take a long time. > (actually, I presume OIDs > a

[GENERAL] Large objects oids

2008-06-10 Thread David Wall
Since large objects use OIDs, does PG 8.3 have a limit of 4 billion large objects across all of my various tables (actually, I presume OIDs are used elsewhere besides just large objects)? Is there any plan on allowing large objects to support more than 2GB? As data gets larger and larger, I c

Re: [GENERAL] large objects,was: Restoring 8.0 db to 8.1

2008-01-08 Thread Chris Browne
[EMAIL PROTECTED] ("Harald Armin Massa") writes: >> Not likely to change in the future, no. Slony uses triggers to manage the >> changed rows. We can't fire triggers on large object events, so there's no >> way for Slony to know what happened. > > that leads me to a question I often wanted to ask

Re: [GENERAL] Large Objects

2007-02-23 Thread Richard Huxton
[EMAIL PROTECTED] wrote: Hi all! I'm working on a database that needs to handle insertion of about 10 large objects (50..60GB) a day. It should be able to run 200 days, so it will become about 10TB eventually, mostly of 200..500KB large objects. How does access to large objects work? I giv

Re: [GENERAL] Large Objects

2007-02-22 Thread Albe Laurenz
[EMAIL PROTECTED] wrote: > I'm working on a database that needs to handle insertion of > about 10 large objects (50..60GB) a day. It should be > able to run 200 days, so it will become about 10TB > eventually, mostly of 200..500KB large objects. > How does access to large objects work? I gi

[GENERAL] Large Objects

2007-02-22 Thread haukinger
Hi all! I'm working on a database that needs to handle insertion of about 10 large objects (50..60GB) a day. It should be able to run 200 days, so it will become about 10TB eventually, mostly of 200..500KB large objects. How does access to large objects work? I give the oid and get the lar
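For reference, the basic server-side access pattern looks like this (a sketch: the paths and the OID 16385 are hypothetical, and the server-side lo_import/lo_export read and write files on the server, so they require superuser privileges):

```sql
SELECT lo_import('/tmp/video.bin');        -- store a file, returns the new large object's OID
SELECT lo_export(16385, '/tmp/copy.bin');  -- write a large object back out to a file
```

From libpq, the equivalent client-side calls (lo_import/lo_export on a PGconn) work with files on the client machine and need no special privileges beyond access to the objects themselves.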

Re: [GENERAL] Large Objects

2005-01-03 Thread Pierre-Frédéric Caillaud
gives me (and those on high) the warm fuzzies. If I store files (PDFs of varying sizes, by the way, say from 500k to 50M) as large objects, will I still be able to restore the _whole_ database from a single pg_dump tar file? Don't forget one thing: If you put a webserver in front of this

Re: [GENERAL] Large Objects

2005-01-03 Thread Robby Russell
On Sat, 2005-01-01 at 19:50 -0600, Dan Boitnott wrote: > On Jan 1, 2005, at 11:40 AM, Joshua D. Drake wrote: > > > > >>> > >> Intresting. > >> What is the size when bytea become inafective ? > >> > >> Currently i keep all my products images in bytea record. is it > >> practical ? > > > > Well I a

Re: [GENERAL] Large Objects

2005-01-03 Thread Dan Boitnott
On Jan 1, 2005, at 11:40 AM, Joshua D. Drake wrote: Interesting. What is the size at which bytea becomes ineffective? Currently I keep all my product images in bytea records. Is it practical? Well, I am going to make the assumption that your product images are small... sub-100k or something. Bytea is

Re: [GENERAL] Large Objects

2005-01-02 Thread Karsten Hilbert
> > > BYTEA is not always pragmatic. What if the file is 100 megs? 256 megs? > > What is the size at which bytea becomes ineffective? > I don't think it's so much a matter of effectiveness; it makes no > difference at all in storage space. Ah, thanks, good to know. Something new to learn every day...

Re: [GENERAL] Large Objects

2005-01-01 Thread Joshua D. Drake
Interesting. What is the size at which bytea becomes ineffective? Currently I keep all my product images in bytea records. Is it practical? Well, I am going to make the assumption that your product images are small... sub-100k or something. Bytea is just fine for that. The problem is when the binary yo

Re: [GENERAL] Large Objects

2005-01-01 Thread Martijn van Oosterhout
On Sat, Jan 01, 2005 at 01:28:04PM +0300, Michael Ben-Nes wrote: > Joshua D. Drake wrote: > >Frank D. Engel, Jr. wrote: > >>I'd advise use of BYTEA as well. It's much simpler to work with than > >>the OIDs, and has simpler semantics. You do need to escape data > >>before handing it to the query

Re: [GENERAL] Large Objects

2005-01-01 Thread Michael Ben-Nes
Joshua D. Drake wrote: Frank D. Engel, Jr. wrote: I'd advise use of BYTEA as well. It's much simpler to work with than the OIDs, and has simpler semantics. You do need to escape data before handing it to the query string, and handle escaped results

Re: [GENERAL] Large Objects

2004-12-31 Thread Joshua D. Drake
Frank D. Engel, Jr. wrote: I'd advise use of BYTEA as well. It's much simpler to work with than the OIDs, and has simpler semantics. You do need to escape data before handing it to the query string, and handle escaped results (see the docs), but ov

Re: [GENERAL] Large Objects

2004-12-31 Thread Frank D. Engel, Jr.
I'd advise use of BYTEA as well. It's much simpler to work with than the OIDs, and has simpler semantics. You do need to escape data before handing it to the query string, and handle escaped results (see the docs), but overall much nicer than worki

Re: [GENERAL] Large Objects

2004-12-30 Thread Bruno Wolff III
On Mon, Dec 27, 2004 at 10:39:48 -0600, Dan Boitnott <[EMAIL PROTECTED]> wrote: > I need to do some investigation into the way Postgres handles large > objects for a major project involving large objects. My questions are: I don't know the answer to all of your questions. >* Is it practic

[GENERAL] Large Objects

2004-12-30 Thread Dan Boitnott
I need to do some investigation into the way Postgres handles large objects for a major project involving large objects. My questions are: * Can large objects be stored within a field value or must they be referenced by OID? * Are large objects backed up in the normal way or does special

Re: [GENERAL] Large objects [BLOB] again - general howto

2003-12-01 Thread Rick Gigger
Thanks! This is exactly what I wanted to know when I first asked the question. And it is the only response that seems to make sense. Does anyone else have experience with this? rg < Here's a quick list of my experiences with BLOB's and such. Performance is just fine, I get

Re: [GENERAL] Large objects [BLOB] again - general howto

2003-11-29 Thread Paul Thomas
On 25/11/2003 21:55 Jeremiah Jahn wrote: [snip] I have found that it is best to have a separate connection for BLOB's and one for everything else. Mind you, this is with Java, but the autocommit settings on the connection don't appear to be thread safe, so in high traffic you can accidentally cut o

Re: [GENERAL] Large objects [BLOB] again - general howto

2003-11-28 Thread Jeremiah Jahn
Here's a quick list of my experiences with BLOBs and such. Performance is just fine; I get about 1M hits a month and haven't had any problems. Use a BLOB if you don't need to search through the data, the main reason being that bytea and text types are parsed. To explain: your entire SQL statement

[GENERAL] Large objects [BLOB] again - general howto

2003-11-23 Thread Lada 'Ray' Lostak
Hi there :) Someone asked about the performance of Large Objects [LO] in PgSql [PG]. It was interesting for me, because I haven't worked with them yet and will have to soon. I tried searching the web, the docs and the mailing lists, but I didn't find any adequate reply. I would be happy if someone who ha

[GENERAL] Large Objects in serializable transaction question

2003-07-15 Thread Andreas=20Sch=F6nbach
I have a test program (using libpq) reading data from a cursor, and large objects according to the result of the cursor. The cursor is opened in a serializable transaction. Just for test reasons I now tried the following: I started the test program that reads the data from the cursor and that rea

[GENERAL] Large objects in web applications

2001-06-26 Thread wsheldah
Hi, Has there been any substantial change in the way large objects are handled with the coming of 7.1 and the expanded row size limit? Some old online articles suggested that would change things, but the current docs seem say I still need to use functions like lo_import. Assuming things haven

Re: [GENERAL] Large objects

2000-11-02 Thread Denis Perchine
> On Thu, Nov 02, 2000 at 05:35:04PM +0600, Denis Perchine wrote: > > Except in one case... When you would like to be sure of transaction safety... > Ok, but not for an image gallery. Again... If you can accept that you may end up with half of an image, it's OK. If not... -- Sincerely Yours, Denis Perchine -

Re: [GENERAL] Large objects

2000-11-02 Thread Igor Roboul
On Thu, Nov 02, 2000 at 02:39:54PM +0300, Igor Roboul wrote: > On Thu, Nov 02, 2000 at 05:35:04PM +0600, Denis Perchine wrote: > > Except in one case... When you would like to be sure of transaction safety... > Ok, but not for an image gallery. I was answering the argument about transactions -- Igor Rob

Re: [GENERAL] Large objects

2000-11-02 Thread Igor Roboul
On Thu, Nov 02, 2000 at 05:35:04PM +0600, Denis Perchine wrote: > Except in one case... When you would like to be sure of transaction safety... Ok, but not for an image gallery. -- Igor Roboul, Unix System Administrator & Programmer @ sanatorium "Raduga", Sochi, Russia http://www.brainbench.com/transcr

Re: [GENERAL] Large objects

2000-11-02 Thread Denis Perchine
> > I want to make a image catalogue. I will use postgresql, perl and php. > > > > What are the advantages of having the images in the database instead of > > having them out in a directory? > > > > After all, to show the images I need them on a directory? > > Really, you can show images from data

Re: [GENERAL] Large Objects

2000-09-19 Thread dyp
Hello Steven, Tuesday, September 19, 2000, 11:00:02 PM, you wrote: SL> A couple of questions and concerns about Blobs. SL> I'm wondering what kind of performance hits do BLOBS have on a database SL> large database. SL> Currently working on implementing a database with images. I guess i'm SL>

[GENERAL] Large Objects

2000-09-19 Thread Steven Lacroix
A couple of questions and concerns about Blobs. I'm wondering what kind of performance hit BLOBs have on a large database. Currently working on implementing a database with images. I guess I'm looking for some numbers showing the performance. Note that it would be for web databas

Re: [GENERAL] Large objects

2000-06-14 Thread Tatsuo Ishii
> Hi everyone, I have to insert a few PDF files into my database, and I > am not able to do it ... > I have to use PHP3, and all I could do was use the lo_import, but in > this case, I am not able to insert any data because I am not an > administrator. Is there any other way of doing it, using,

[GENERAL] Large objects

2000-06-13 Thread Luis Martins
Hi everyone, I have to insert a few PDF files into my database, and I am not able to do it ... I have to use PHP3, and all I could do was use lo_import, but in this case I am not able to insert any data because I am not an administrator. Is there any other way of doing it, using, let's sa

[GENERAL] Large objects...

2000-04-22 Thread Andrew Schmeder
Hello all, I am attempting to use large objects to store chunks of text and binary data. I am using PHP and sometimes need to do things through psql also. PHP has a function to "unlink" a large object, i.e. delete it. Is there an explicit way to delete a large object via psql? I have been using
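For the question above: yes — lo_unlink() can be called directly from psql, which also provides a matching meta-command (16385 is a hypothetical OID):

```sql
SELECT lo_unlink(16385);
-- or, using psql's built-in meta-command:
-- \lo_unlink 16385
```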

Re: [GENERAL] Large objects + JDBC

1999-12-13 Thread Gunther Schadow
Marcin Mazurek - Multinet SA - Poznan wrote: > Hi, > I put several gifs into a table. I did it as an exercise :) from psql using: > INSERT INTO images (id, data) > VALUES (3, lo_import('/usr/local/apache/servlets/images/a.gif')); are you sure this lo_import(...) thing in the SQL will work? I hav

[GENERAL] Large objects + JDBC

1999-12-12 Thread Marcin Mazurek - Multinet SA - Poznan
Hi, I put several gifs into a table. I did it as an exercise :) from psql using: INSERT INTO images (id, data) VALUES (3, lo_import('/usr/local/apache/servlets/images/a.gif')); but I have a problem with creating a Java stream to read these data. Here are several lines of code I was using: PreparedSta

[GENERAL] large objects

1999-07-26 Thread Tim Joyce
Hi, I have an existing table and want to change the type of a column from text to oid. I have tried pg_dump and psql -e, but get "broken pipe" when inserting the data into the new table. postgresql 6.4 on linux thanks for your help timj [EMAIL PROTECTED]

[GENERAL] large objects

1999-06-30 Thread Lauri Posti
Hi! I've been trying to get the postgres LO interface to work with python. I have been successful with three configurations: 1) pgsql 6.4.2 & PyGreSQL 2.2 on Linux/x86 2) pgsql 6.5beta1 & PyGreSQL 2.3 on Linux/x86 3) pgsql 6.5beta1 & PyGreSQL 2.3 on SPARC/Solaris 2.6 And failed with all other: * 6

[GENERAL] Large Objects

1998-11-19 Thread David Giffin
I'm working on integrating Large Objects into a database. I'm curious about a couple of different things. 1) Can you view large objects from the psql interface? 2) Is there a way to reference the large object from a standard query, or do I have to create a method of running a query, then opening the