Thanks Alvaro, that is good to know. At the moment we are stuck with
version 9.1.9 and have to stay there, at least for Linux. But do I
understand correctly that the warning can be ignored for the moment?
On Fri, 2014-01-31 at 15:15 -0300, Alvaro Herrera wrote:
Andreas Lubensky wrote:
> Hi,
>
> I'm trying to read/write large objects via libpq. I encapsulated the
> operations in a transaction but I wanted to put a savepoint before doing
> any operations, so I can do a rollback in case anything fails without
> breaking the current transaction. Now, when st…
Hello
This bug was fixed a few months ago by Heikki.
Regards
Pavel
On 31.1.2014 17:35, "Andreas Lubensky" wrote:
> Hi,
>
> I'm trying to read/write large objects via libpq. I encapsulated the
> operations in a transaction but I wanted to put a savepoint before doing
> any operations, so I can do…
Hi,
I'm trying to read/write large objects via libpq. I encapsulated the
operations in a transaction but I wanted to put a savepoint before doing
any operations, so I can do a rollback in case anything fails without
breaking the current transaction. Now, when something actually fails and the
transactio…
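The pattern being discussed, sketched at the SQL level (a libpq client would issue the same commands through PQexec; the OID and savepoint name below are placeholders):

BEGIN;
SAVEPOINT lo_guard;
SELECT lo_open(16397, 131072);  -- 131072 = INV_WRITE; 16397 is a placeholder OID
-- ... loread()/lowrite() calls on the returned descriptor ...
-- on failure: ROLLBACK TO SAVEPOINT lo_guard;  (the outer transaction survives)
-- on success: RELEASE SAVEPOINT lo_guard;
COMMIT;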
On 10/3/2013 2:22 AM, Rafael B.C. wrote:
My real doubt right now is why bytea does not get processed by the TOAST
system even when it grows large enough, since I've read that tuples are not
allowed to expand over several database pages.
a tuple can't expand beyond ONE database page, and generally it prefers…
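A quick way to watch TOAST at work, assuming a hypothetical table "images" with a bytea column "data":

-- compare the raw value size with what the tuple actually carries on disk;
-- large bytea values are compressed and/or moved out of line by TOAST
SELECT octet_length(data) AS raw_bytes,
       pg_column_size(data) AS stored_bytes
FROM images;
-- the per-column policy can be changed, e.g. out-of-line and uncompressed:
ALTER TABLE images ALTER COLUMN data SET STORAGE EXTERNAL;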
Rafael B.C. wrote:
> I am dealing with the old decision about how to store data objects and trying
> to understand the Postgres system in depth, including TOAST, the
> pg_largeobject table and so on.
>
> My real doubt right now is why bytea does not get processed by the TOAST
> system even when it grows e…
Hello,
I am dealing with the old decision about how to store data objects and
trying to understand the Postgres system in depth, including TOAST, the
pg_largeobject table and so on.
My real doubt right now is why bytea does not get processed by the TOAST
system even when it grows large enough, since I've read tha…
Simon Windsor wrote:
[pg_largeobject keeps growing]
> The data only has to be kept for a few days, and generally the system is
> performing well, but as stated in the email, regular use of vacuumlo, vacuum
> and autovacuum leaves the OS disc space slowly shrinking.
>
> As a last resort this week, …
regularly.
Simon
-----Original Message-----
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
Sent: 02 January 2012 11:18
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Large Objects and and Vacuum
On 12/30/11 3:54 PM, Sim…
On 12/30/11 3:54 PM, Simon Windsor wrote:
I am struggling with the volume and number of XML files a new
application is storing.
how big are these XML files? large_object was meant for storing very
large files, like videos, etc., multi-megabyte to gigabytes. XML stuff
is typically a lot sma…
On 31 December 2011 00:54, Simon Windsor wrote:
> I am struggling with the volume and number of XML files a new application is
> storing. The table pg_largeobject is growing fast, and despite the efforts
> of vacuumlo, vacuum and auto-vacuum it keeps on growing in size
I can't help but wonder wh…
Please don't send HTML mail to this list.
Simon Windsor wrote:
> I am struggling with the volume and number of XML files a new
> application is storing. The table pg_largeobject is growing fast, and
> despite the efforts of vacuumlo, vacuum and auto-vacuum it keeps
> on growing in size.
Have you c…
Hi
I am struggling with the volume and number of XML files a new application is
storing. The table pg_largeobject is growing fast, and despite the efforts
of vacuumlo, vacuum and auto-vacuum it keeps on growing in size.
The main tables that hold large objects are partitioned and every few…
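One detail worth restating here: plain VACUUM (and autovacuum) only marks the space of unlinked large objects as reusable inside pg_largeobject; it does not return it to the OS. A sketch of the heavier option (it takes an exclusive lock, so schedule it carefully):

-- rewrites the table and gives the freed disk space back to the OS
VACUUM FULL pg_largeobject;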
In article <4b72aeb3.4000...@selestial.com>,
Howard Cole writes:
> Is there an SQL function to determine the size of a large object?
I'm using a pgsql helper function for that:
CREATE FUNCTION lo_size(oid oid) RETURNS integer
LANGUAGE plpgsql AS $$
DECLARE
  fd int;
  res int;
BEGIN
  fd := lo_open(oid, 262144);  -- 262144 = INV_READ
  res := lo_lseek(fd, 0, 2);   -- seek to end; the returned offset is the size
  PERFORM lo_close(fd);
  RETURN res;
END;
$$;
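Assuming the function body ends as reconstructed above, usage is simply (placeholder OID):

SELECT lo_size(16397);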
Is there an SQL function to determine the size of a large object?
Also, can I safely delete all the large objects in
pg_catalog.pg_largeobject? For example:
select lo_unlink(loid) from (select distinct loid from
pg_catalog.pg_largeobject) as loids where loid not in (select my_oid
from my_onl…
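A completed sketch of the same idea; "my_table" and "my_oid" below stand in for whatever actually references the large objects (the contrib utility vacuumlo automates exactly this orphan check):

-- unlink every large object no longer referenced by the application table
SELECT lo_unlink(l.loid)
FROM (SELECT DISTINCT loid FROM pg_catalog.pg_largeobject) AS l
WHERE l.loid NOT IN (SELECT my_oid FROM my_table WHERE my_oid IS NOT NULL);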
SymmetricDS is a replication solution that handles large objects; it is
asynchronous and multi-master. I have been using it between 30 separate PostgreSQL
servers connected by a slow link, and until now it has been working without
problems. I think this project should be in the wiki.
http://symmetricd…
> > However you need to use newer API
> > of libpq to create large objects:
> >
> > Oid lo_create(PGconn *conn, Oid lobjId);
> [...]
> > You cannot use old API lo_creat() since it relies on OID, which
> > pgpool-II does not guarantee OIDs can be replicated.
>
> Does it mean that lo_create(conn,…
Tatsuo Ishii wrote:
> However you need to use newer API
> of libpq to create large objects:
>
> Oid lo_create(PGconn *conn, Oid lobjId);
[...]
> You cannot use old API lo_creat() since it relies on OID, which
> pgpool-II does not guarantee OIDs can be replicated.
Does it mean that lo_cr…
On Dec 2, 2009, at 5:48 PM, Tatsuo Ishii wrote:
> BTW
>
>> Additionally there is a list of available open-source replication solutions
>> here:
>> http://wiki.postgresql.org/wiki/Replication%2C_Clustering%2C_and_Connection_Pooling
>
> The link http://www.slony2.org/ mentioned in the wiki page
BTW
> Additionally there is a list of available open-source replication solutions
> here:
> http://wiki.postgresql.org/wiki/Replication%2C_Clustering%2C_and_Connection_Pooling
The link http://www.slony2.org/ mentioned in the wiki page above
apparently has nothing to do with Slony-II. Can someon…
> Does anyone know of a replication solution that can handle large
> objects? Preferably on a per-database rather than per-cluster basis.
pgpool-II can handle large objects. However you need to use newer API
of libpq to create large objects:
Oid lo_create(PGconn *conn, Oid lobjId);
I'm not sur…
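The distinction can be illustrated with the server-side counterparts of the two calls (the second OID is an arbitrary example):

SELECT lo_creat(-1);      -- server picks the OID: may differ across pgpool nodes
SELECT lo_create(54321);  -- caller supplies the OID: identical on every node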
On Dec 2, 2009, at 4:23 PM, Howard Cole wrote:
> Does anyone know of a replication solution that can handle large objects?
> Preferably on a per-database rather than per-cluster basis.
Take a look at Mammoth Replicator:
https://projects.commandprompt.com/public/replicator.
Additionally there…
Does anyone know of a replication solution that can handle large
objects? Preferably on a per-database rather than per-cluster basis.
Incidentally - out of interest - why doesn't Slony handle large objects?
Thanks.
Howard
www.selestial.com
David Wall <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Yup, and in practice you'd better have a lot less than that or assigning
>> a new OID might take a long time.
> What's a rough estimate of "a lot less"? Are we talking 2 billion, 3
> billion, 1 billion?
It's difficult to say --- the as…
Tom Lane wrote:
David Wall <[EMAIL PROTECTED]> writes:
Since large objects use OIDs, does PG 8.3 have a limit of 4 billion
large objects across all of my various tables
Yup, and in practice you'd better have a lot less than that or assigning
a new OID might take a long time.
What
David Wall <[EMAIL PROTECTED]> writes:
> Since large objects use OIDs, does PG 8.3 have a limit of 4 billion
> large objects across all of my various tables
Yup, and in practice you'd better have a lot less than that or assigning
a new OID might take a long time.
> (actually, I presume OIDs a…
Since large objects use OIDs, does PG 8.3 have a limit of 4 billion
large objects across all of my various tables (actually, I presume OIDs
are used elsewhere besides just large objects)?
Is there any plan on allowing large objects to support more than 2GB?
As data gets larger and larger, I c…
[EMAIL PROTECTED] ("Harald Armin Massa") writes:
>> Not likely to change in the future, no. Slony uses triggers to manage the
>> changed rows. We can't fire triggers on large object events, so there's no
>> way for Slony to know what happened.
>
> that leads me to a question I often wanted to ask
[EMAIL PROTECTED] wrote:
Hi all!
I'm working on a database that needs to handle insertion of about
10^5 large objects (50..60 GB) a day. It should be able to run 200
days, so it will become about 10 TB eventually, mostly of 200..500 KB
large objects. How does access to large objects work? I giv…
[EMAIL PROTECTED] wrote:
> I'm working on a database that needs to handle insertion of
> about 10^5 large objects (50..60 GB) a day. It should be
> able to run 200 days, so it will become about 10 TB
> eventually, mostly of 200..500 KB large objects.
> How does access to large objects work? I gi…
Hi all!
I'm working on a database that needs to handle insertion of about 10^5 large
objects (50..60 GB) a day. It should be able to run 200 days, so it will become
about 10 TB eventually, mostly of 200..500 KB large objects.
How does access to large objects work? I give the oid and get the lar…
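In outline, access goes through a file-style API inside a transaction; a minimal session, with placeholder OID and descriptor values:

BEGIN;
SELECT lo_open(16397, 262144);  -- 262144 = INV_READ; returns a descriptor, say 0
SELECT loread(0, 65536);        -- read up to 64 kB from descriptor 0
SELECT lo_close(0);
COMMIT;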
gives me (and those on high) the warm-fuzzies. If I store files (PDFs
of varying sizes by the way, say from 500k to 50M) as large objects,
will I still be able to restore the _whole_ database from a single
pg_dump tar file?
Don't forget one thing:
if you put a webserver in front of this…
On Sat, 2005-01-01 at 19:50 -0600, Dan Boitnott wrote:
> On Jan 1, 2005, at 11:40 AM, Joshua D. Drake wrote:
>
> >
> >>>
> >> Interesting.
> >> What is the size at which bytea becomes ineffective?
> >>
> >> Currently I keep all my product images in bytea records. Is it
> >> practical?
> >
> > Well I a…
On Jan 1, 2005, at 11:40 AM, Joshua D. Drake wrote:
Interesting.
What is the size at which bytea becomes ineffective?
Currently I keep all my product images in bytea records. Is it
practical?
Well I am going to make the assumption that your product images are
small...
sub 100k or something. Bytea is…
> > > BYTEA is not always pragmatic. What if the file is 100 megs? 256 megs?
> > What is the size at which bytea becomes ineffective?
> I don't think it's so much a matter of effectiveness; it makes no
> difference at all in storage space.
Ah, thanks, good to know. Something new to learn every day...
Interesting.
What is the size at which bytea becomes ineffective?
Currently I keep all my product images in bytea records. Is it
practical?
Well I am going to make the assumption that your product images are small...
sub 100k or something. Bytea is just fine for that. The problem is when
the binary yo…
On Sat, Jan 01, 2005 at 01:28:04PM +0300, Michael Ben-Nes wrote:
> Joshua D. Drake wrote:
> >Frank D. Engel, Jr. wrote:
> >>I'd advise use of BYTEA as well. It's much simpler to work with than
> >>the OIDs, and has simpler semantics. You do need to escape data
> >>before handing it to the query
Joshua D. Drake wrote:
Frank D. Engel, Jr. wrote:
I'd advise use of BYTEA as well. It's much simpler to work with than
the OIDs, and has simpler semantics. You do need to escape data
before handing it to the query string, and handle escaped results
Frank D. Engel, Jr. wrote:
I'd advise use of BYTEA as well. It's much simpler to work with than
the OIDs, and has simpler semantics. You do need to escape data
before handing it to the query string, and handle escaped results (see
the docs), but ov…
I'd advise use of BYTEA as well. It's much simpler to work with than
the OIDs, and has simpler semantics. You do need to escape data before
handing it to the query string, and handle escaped results (see the
docs), but overall much nicer than worki…
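For illustration, the escaping being referred to, in the octal form used by releases of that era (table and column names are made up):

-- each non-printable byte is written as \\nnn (octal) inside the literal
INSERT INTO images (name, data)
VALUES ('icon.gif', '\\377\\330\\377'::bytea);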
On Mon, Dec 27, 2004 at 10:39:48 -0600,
Dan Boitnott <[EMAIL PROTECTED]> wrote:
> I need to do some investigation into the way Postgres handles large
> objects for a major project involving large objects. My questions are:
I don't know the answer to all of your questions.
> * Is it practic…
I need to do some investigation into the way Postgres handles large
objects for a major project involving large objects. My questions are:
* Can large objects be stored within a field value or must they be
referenced by OID?
* Are large objects backed up in the normal way or does special…
Thanks! This is exactly what I wanted to know when I first asked the
question. And it is the only response that seems to make sense. Does
anyone else have experience with this?
rg
Here's a quick list of my experiences with BLOB's and such.
Performance is just fine, I get
On 25/11/2003 21:55 Jeremiah Jahn wrote:
[snip]
I have found that it is best to have a separate connection for BLOBs
and one for everything else. Mind you, this is with Java, but the
autocommit settings on the connection don't appear to be thread-safe, so
in high traffic you can accidentally cut o…
Here's a quick list of my experiences with BLOB's and such.
Performance is just fine, I get about 1M hits a month and haven't had
any problems. Use a BLOB if you don't need to search through the data.
The main reason being that bytea and text types are parsed. To explain,
your entire SQL statement…
Hi there :)
Someone asked about performance of Large Objects [LO] in PgSql [PG]. It
was interesting for me because I haven't worked with them yet and I will
have to soon. I tried searching the web, the docs and mailing lists, but I
didn't find any adequate reply. I would be happy if someone who ha…
I have a test program (using libpq) reading data from a cursor and large objects
according to the result of the cursor. The cursor is opened in a serializable
transaction.
Just for test reasons I now tried the following:
I started the test program that reads the data from the cursor and that rea…
Hi,
Has there been any substantial change in the way large objects are handled with
the coming of 7.1 and the expanded row size limit? Some old online articles
suggested that would change things, but the current docs seem to say I still need
to use functions like lo_import.
Assuming things haven…
> On Thu, Nov 02, 2000 at 05:35:04PM +0600, Denis Perchine wrote:
> > Except in one case... when you would like to be sure of transaction safety...
>
> OK, but not for an image gallery.
Again... if you can accept that you may end up with half an image, it's OK.
If not...
--
Sincerely Yours,
Denis Perchine
On Thu, Nov 02, 2000 at 02:39:54PM +0300, Igor Roboul wrote:
> On Thu, Nov 02, 2000 at 05:35:04PM +0600, Denis Perchine wrote:
> > Except in one case... when you would like to be sure of transaction safety...
> OK, but not for an image gallery.
I was answering the argument about transactions
--
Igor Rob
On Thu, Nov 02, 2000 at 05:35:04PM +0600, Denis Perchine wrote:
> Except in one case... when you would like to be sure of transaction safety...
OK, but not for an image gallery.
--
Igor Roboul, Unix System Administrator & Programmer @ sanatorium "Raduga",
Sochi, Russia
http://www.brainbench.com/transcr
> > I want to make an image catalogue. I will use postgresql, perl and php.
> >
> > What are the advantages of having the images in the database instead of
> > having them out in a directory?
> >
> > After all, to show the images I need them on a directory?
>
> Really, you can show images from data…
Hello Steven,
Tuesday, September 19, 2000, 11:00:02 PM, you wrote:
SL> A couple of questions and concerns about BLOBs.
SL> I'm wondering what kind of performance hit BLOBs have on a large
SL> database.
SL> Currently working on implementing a database with images. I guess I'm
SL> …
A couple of questions and concerns about BLOBs.
I'm wondering what kind of performance hit BLOBs have on a large
database.
Currently working on implementing a database with images. I guess I'm
looking for some numbers showing the performance. Note that it would be
for a web databas…
> Hi everyone, I have to insert a few PDF files into my database, and I
> am not able to do it ...
> I have to use PHP3, and all I could do was use the lo_import, but in
> this case, I am not able to insert any data because I am not an
> administrator. Is there any other way of doing it, using,
Hi everyone, I have to insert a few PDF files into my database, and
I am not able to do it ...
I have to use PHP3, and all I could do was use lo_import, but
in this case, I am not able to insert any data because I am not an
administrator. Is there any other way of doing it, using, let's sa…
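Worth noting for this thread: lo_import() called in SQL runs inside the server process, so it reads the *server's* filesystem and, in the releases discussed here, required superuser rights; psql's client-side \lo_import meta-command reads the file on the client and has no such requirement. A sketch of the server-side form (path and table are hypothetical):

INSERT INTO docs (id, pdf)
VALUES (1, lo_import('/var/lib/pgsql/incoming/report.pdf'));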
Hello all,
I am attempting to use large objects to store chunks of text and binary data.
I am using PHP and sometimes need to do things through psql also.
PHP has a function to "unlink" a large object, i.e. delete it.
Is there an explicit way to delete a large object via psql? I have been using…
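psql offers the client-side meta-command \lo_unlink for exactly this, and the same thing is available as plain SQL (placeholder OID shown):

SELECT lo_unlink(16397);  -- removes the large object with OID 16397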
Marcin Mazurek - Multinet SA - Poznan wrote:
> Hi,
> I put several GIFs into a table. I did it as an exercise :) from psql using:
> INSERT INTO images (id, data)
> VALUES (3, lo_import('/usr/local/apache/servlets/images/a.gif'));
are you sure this lo_import(...) thing in the SQL will work? I hav…
Hi,
I put several GIFs into a table. I did it as an exercise :) from psql using:
INSERT INTO images (id, data)
VALUES (3, lo_import('/usr/local/apache/servlets/images/a.gif'));
but I have a problem with creating a Java stream to read this data. Here
are several lines of the code I was using:
PreparedSta…
Hi,
I have an existing table and want to change the type of a column from text to oid.
I have tried pg_dumping and psql -e, but get "broken pipe" when inserting the data
into the new table.
postgresql 6.4 on linux
thanks for your help
timj
[EMAIL PROTECTED]
Hi!
I've been trying to get the postgres LO interface to work with Python.
I have been successful with three configurations:
1) pgsql 6.4.2 & PyGreSQL 2.2 on Linux/x86
2) pgsql 6.5beta1 & PyGreSQL 2.3 on Linux/x86
3) pgsql 6.5beta1 & PyGreSQL 2.3 on SPARC/Solaris 2.6
And failed with all others:
* 6…
I'm working on integrating Large Objects into a database. I'm curious
about a couple of different things. 1) Can you view large objects from the
psql interface? 2) Is there a way to reference the large object from a
standard query, or do I have to create a method of running a query and then
opening the…