Stefan Kaltenbrunner writes:
> well, the usual problem is that it is fairly easy to get large (several
> hundred megabyte) bytea objects into the database, but upon retrieval
> we tend to use up to 3x the size of the object in actual memory
> consumption, which causes us to hit all kinds of problems.
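One plausible accounting for the 3x figure: a detoasted server-side copy of the value, the text-format encoding of bytea on the wire (which can roughly double the byte count), and the client buffering the entire result. If the whole value isn't needed at once, fetching it in slices avoids materializing it all. A minimal sketch; the table "docs", column "body", and the 10MB slice size are hypothetical:

    -- Sketch only: "docs" and "body" are hypothetical names.
    -- Storing the column uncompressed lets substring() read just the
    -- TOAST chunks it needs instead of detoasting the whole value
    -- (this affects values stored after the change, not existing ones):
    ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTERNAL;

    -- Fetch ~10MB at a time and reassemble on the client:
    SELECT octet_length(body) FROM docs WHERE id = 1;  -- total size, to know when to stop
    SELECT substring(body FROM 1 FOR 10485760) FROM docs WHERE id = 1;         -- first slice
    SELECT substring(body FROM 10485761 FOR 10485760) FROM docs WHERE id = 1;  -- next slice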
Arturas Mazeika wrote:
> shared_buffers is set to 128MB, and work_mem
> is set to 1GB. We've got 16GB of memory in total.
Each connection can allocate up to work_mem, potentially multiple
times -- once for each sort or hash node in its query plan.
-Kevin
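To make the arithmetic concrete: with work_mem = 1GB, a single query whose plan contains a hash join, a hash aggregate, and a sort could claim roughly 3GB in one backend, so only a handful of such connections would exceed the machine's 16GB. A sketch of the effect; the tables "a" and "b" are hypothetical:

    -- work_mem caps each sort/hash node, not the query or the server.
    SET work_mem = '1GB';
    EXPLAIN
    SELECT a.grp, count(*)
    FROM a JOIN b USING (id)   -- hash join build side: may use up to work_mem
    GROUP BY a.grp             -- hash aggregate: may use up to work_mem
    ORDER BY count(*) DESC;    -- sort: may use up to work_mem
    -- Three such nodes in one plan ~ 3 x work_mem for a single connection,
    -- multiplied again by the number of concurrent connections.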
Arturas Mazeika wrote:
> Hi Dave,
>
> Thanks for the info, this explains a lot.
>
> Yes, I am upgrading from the 32-bit version to the 64-bit one.
>
> We have pretty large databases (some with over 1 trillion rows, and some
> containing large documents in blobs). Giving the server a bit more memory
> than 4GB is the main motivation for the move.
Bruce Momjian writes:
>> On 10/30/2010 7:33 PM, Dave Page wrote:
>>> upgrade from a 32-bit 8.3 server to a 64-bit 9.0 server, which isn't
>>> going to work without a dump/restore. With pg_upgrade, the two builds
>>> need to be from the same platform, same word size, and have the same
>>> configuration options.
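Since pg_upgrade cannot cross word sizes, the 32-bit-to-64-bit path is a dump piped into the new cluster. A minimal sketch, assuming the old 8.3 cluster listens on port 5432 and the new 9.0 cluster on 5433 (both ports are assumptions):

    # Run the NEW version's pg_dumpall against the OLD server, piping
    # straight into the new one; the ports here are assumptions.
    pg_dumpall -p 5432 | psql -d postgres -p 5433

Using the newer version's pg_dumpall is the documented recommendation, since it knows how to dump the old catalogs in a form the new server accepts.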