On Apr 9, 2011, at 9:23 PM, Stephen Frost wrote:
> Actually, Tom has a point in that work_mem can be set above 1GB (which
> is where I had it set previously..). I didn't think it'd actually do
> anything given the MaxAlloc limit, but surprisingly, it does (at least,
> under 8.4). I'm currently trying ...
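
The two ceilings being conflated here live in different places in the
source, which is why raising work_mem past 1GB "works" while a single
palloc() still cannot. A minimal sketch of the distinction, using the
values from the 8.4/9.0-era headers (the demo program is illustrative
only, not backend code):

    #include <stdio.h>
    #include <limits.h>
    #include <stddef.h>

    typedef size_t Size;

    /* src/include/utils/memutils.h: hard cap on one palloc() request */
    #define MaxAllocSize   ((Size) 0x3fffffff)   /* 1 gigabyte - 1 */

    /* src/include/utils/guc.h (64-bit build): cap on work_mem, in kB */
    #define MAX_KILOBYTES  INT_MAX

    int
    main(void)
    {
        /* work_mem may legally hold ~2TB worth of kilobytes ... */
        printf("GUC ceiling:    %d kB\n", MAX_KILOBYTES);
        /* ... while no single palloc() request may exceed ~1GB. */
        printf("palloc ceiling: %zu bytes\n", (Size) MaxAllocSize);
        return 0;
    }
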
* Greg Stark (gsst...@mit.edu) wrote:
> On Sat, Apr 9, 2011 at 6:20 PM, Tom Lane wrote:
> > BTW, it sounded like your argument had to do with whether it would use
> > HashAgg or not -- that is *not* dependent on the per-palloc limit, and
> > never has been.
>
> His point was he wanted to be allowed to set work_mem > 1GB. ...

On Sun, 2011-04-10 at 03:05 +0100, Greg Stark wrote:
> On Sat, Apr 9, 2011 at 6:20 PM, Tom Lane wrote:
> > BTW, it sounded like your argument had to do with whether it would use
> > HashAgg or not -- that is *not* dependent on the per-palloc limit, and
> > never has been.
> >
>
> His point was he wanted to be allowed to set work_mem > 1GB. ...

On Sat, Apr 9, 2011 at 6:20 PM, Tom Lane wrote:
> BTW, it sounded like your argument had to do with whether it would use
> HashAgg or not -- that is *not* dependent on the per-palloc limit, and
> never has been.
>
His point was he wanted to be allowed to set work_mem > 1GB. This is
going to become ...
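
To make that concrete: the hash-vs-sort decision compares an *estimated*
hash table footprint against work_mem, so raising work_mem past 1GB
really can change the plan even though no single palloc() may exceed
MaxAllocSize. A minimal sketch of the test, loosely modeled on the
8.4-era choose_hashed_grouping() logic (names simplified for
illustration):

    #include <stdbool.h>

    static bool
    hashagg_fits(double hashentrysize,  /* estimated bytes per group */
                 double dNumGroups,     /* estimated number of groups */
                 int work_mem_kb)       /* the work_mem GUC, in kB */
    {
        /*
         * The comparison is against work_mem alone; MaxAllocSize never
         * enters into it. Plan choice and the per-palloc limit are
         * independent questions.
         */
        return hashentrysize * dNumGroups <= work_mem_kb * 1024.0;
    }
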
Stephen Frost writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> If we were actually trying to support such large allocations,
>> what I'd be inclined to do is introduce a separate call along the lines
>> of MemoryContextAllocLarge() that lacks the safety check.
> This sounds like the right approach ...
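
A hypothetical sketch of what such a call could look like; the names
MaxAllocLargeSize and MemoryContextAllocInternal() are invented here for
illustration, not taken from the tree. (A variant along these lines did
eventually land in PostgreSQL 9.4 as MemoryContextAllocHuge().)

    #include <stdint.h>
    #include <stddef.h>

    typedef size_t Size;
    typedef struct MemoryContextData *MemoryContext;

    /* relax the 1GB sanity check to a larger, still-sane bound */
    #define MaxAllocLargeSize  (SIZE_MAX / 2)    /* hypothetical */

    extern void *MemoryContextAllocInternal(MemoryContext cxt, Size size);
    extern void  report_alloc_error(Size size);

    void *
    MemoryContextAllocLarge(MemoryContext cxt, Size size)
    {
        /* still reject clearly-corrupt requests, just with a higher cap */
        if (size > MaxAllocLargeSize)
            report_alloc_error(size);
        return MemoryContextAllocInternal(cxt, size);
    }
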
Tom, all,
Having run into issues caused by small work_mem, again, I felt the need
to respond to this.
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> You would break countless things. It might be okay anyway in a trusted
> environment, ie, one without users trying to crash the system, but there
> are a ...

Tom Lane wrote:
> Bruce Momjian writes:
> > Tom Lane wrote:
> >> You seem to be confusing one limitation in one code path with the
> >> overall meaning of maintenance_work_mem.
>
> > Oh, OK, so sorts are limited, but not hash sizes? Are there any other
> > uses? Should this be documented somehow? ...

Tom Lane wrote:
> Bruce Momjian writes:
> > Well, I figure it will be hard to allow larger maximums, but can we make
> > the GUC variable maximums be more realistic? Right now it is
> > MAX_KILOBYTES (INT_MAX).
>
> You seem to be confusing one limitation in one code path with the
> overall meaning of maintenance_work_mem.

Bruce Momjian writes:
> Tom Lane wrote:
>> You seem to be confusing one limitation in one code path with the
>> overall meaning of maintenance_work_mem.
> Oh, OK, so sorts are limited, but not hash sizes? Are there any other
> uses? Should this be documented somehow? What is the actual sort ...
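
The sort-side limitation being asked about, as I understand it: tuplesort
keeps its in-memory tuple headers in one palloc'd array, so that array can
never pass MaxAllocSize no matter how large (maintenance_)work_mem is. A
rough sketch of the growth check, loosely modeled on tuplesort.c's
grow_memtuples() (simplified, not the actual code):

    #include <stdbool.h>
    #include <stddef.h>

    typedef size_t Size;
    #define MaxAllocSize ((Size) 0x3fffffff)  /* 1 gigabyte - 1 */

    typedef struct SortTuple { void *tuple; /* ... */ } SortTuple;

    static bool
    can_grow_memtuples(int memtupcount, int memtupsize)
    {
        if (memtupcount < memtupsize)
            return true;        /* room left, no growth needed */
        /* a single array may not be repalloc'd past MaxAllocSize ... */
        if ((Size) memtupsize * 2 * sizeof(SortTuple) > MaxAllocSize)
            return false;       /* ... so the sort must spill to disk */
        return true;
    }

A hash table, by contrast, is built out of many smaller allocations, so
its total size can pass 1GB without any single request tripping the check.
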
Bruce Momjian writes:
> Well, I figure it will be hard to allow larger maximums, but can we make
> the GUC variable maximums be more realistic? Right now it is
> MAX_KILOBYTES (INT_MAX).
You seem to be confusing one limitation in one code path with the
overall meaning of maintenance_work_mem.
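
For reference, the ceiling being quoted comes from guc.h; from memory it
is roughly the following (the 32-bit branch divides by 1024 so that
expressions like work_mem * 1024L cannot overflow):

    /* src/include/utils/guc.h, approximate 9.0-era form */
    #if SIZEOF_SIZE_T > 4 && SIZEOF_LONG > 4
    #define MAX_KILOBYTES   INT_MAX
    #else
    #define MAX_KILOBYTES   (INT_MAX / 1024)
    #endif
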
--On 20. February 2011 15:48:06 +0100 Bernd Helmle wrote:
> > Well, I figure it will be hard to allow larger maximums, but can we make
> > the GUC variable maximums be more realistic? Right now it is
> > MAX_KILOBYTES (INT_MAX).
> This is something I proposed some time ago, too. At least, it will stop ...

--On 20. February 2011 09:32:02 -0500 Bruce Momjian wrote:
> Well, I figure it will be hard to allow larger maximums, but can we make
> the GUC variable maximums be more realistic? Right now it is
> MAX_KILOBYTES (INT_MAX).
This is something I proposed some time ago, too. At least, it will stop u...

Devrim GÜNDÜZ wrote:
> On Wed, 2011-02-16 at 23:24 +0200, Peter Eisentraut wrote:
> >
> > > But before expending time on that, I'd want to see some evidence that
> > > it's actually helpful for production situations. I'm a bit dubious
> > > that you're going to gain much here.
> >
> > If you want to build an index on a 500GB table and you have 1TB RAM ...

On Wed, 2011-02-16 at 23:24 +0200, Peter Eisentraut wrote:
>
> > But before expending time on that, I'd want to see some evidence that
> > it's actually helpful for production situations. I'm a bit dubious
> > that you're going to gain much here.
>
> If you want to build an index on a 500GB table and you have 1TB RAM ...

Frederik Ramm writes:
> The single query where pg9.0 beat pg8.3 by a country mile was a CREATE
> INDEX statement on a BIGINT column to a table with about 500 million
> records - this cost 2679 seconds on normal 8.3, 2443 seconds on
> large-memory 8.3, and around 1650 seconds on 9.0, large-memory ...

Tom & Kevin,
thank you for your replies. Kevin, I had already employed all the
tricks you mention, except using temporary tables which would be hard
for me due to the structure of my application (but I could try using
something like pgbouncer or so), but thanks a lot for sharing the ideas.

On Mon, 2011-02-14 at 10:11 -0500, Tom Lane wrote:
> But before expending time on that, I'd want to see some evidence that
> it's actually helpful for production situations. I'm a bit dubious
> that you're going to gain much here.
If you want to build an index on a 500GB table and you have 1TB RAM ...

Frederik Ramm wrote:
> I am (ab)using a PostgreSQL database (with PostGIS extension) in
> a large data processing job - each day, I load several GB of data,
> run a lot of analyses on it, and then throw everything away again.
> Loading, running, and dumping the results takes about 18 hours
> eve ...

Frederik Ramm writes:
> Now I assume that there are reasons that you're doing this. memutils.h
> has the (for me) cryptic comment about MaxAllocSize: "XXX This is
> deliberately chosen to correspond to the limiting size of varlena
> objects under TOAST. See VARATT_MASK_SIZE in postgres.h.", but ...
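
The definition being discussed, from src/include/utils/memutils.h; the
quoted comment sits just above it, and AllocSizeIsValid() is the check
every ordinary palloc() request must pass:

    #define MaxAllocSize    ((Size) 0x3fffffff) /* 1 gigabyte - 1 */

    #define AllocSizeIsValid(size)  ((Size) (size) <= MaxAllocSize)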