On Tue, 2004-11-16 at 16:25 +1100, Neil Conway wrote:
> Attached is a revised patch
Applied to HEAD, and backpatched to REL7_4_STABLE.
-Neil
Neil Conway <[EMAIL PROTECTED]> writes:
> Attached is a revised patch -- I just did the check at the end of
> transformStmt(),
Looks OK offhand.
> BTW I figure this should be backpatched to REL7_4_STABLE. Barring any
> objections I will do that (and apply to HEAD) this evening.
No objection here
On Mon, 2004-11-15 at 20:53 -0500, Tom Lane wrote:
> I think the SELECT limit should be MaxTupleAttributeNumber not
> MaxHeapAttributeNumber.
Ah, true -- I forgot about the distinction...
> What I think needs to happen is to check p_next_resno at some point
> after the complete tlist has been built
On Mon, 2004-11-15 at 21:08 -0500, Tom Lane wrote:
> Are we going to try to test whether the behavior is appropriate when
> running out of memory to store the tlist?
We absolutely should: segfaulting on OOM is not acceptable behavior.
Testing that we recover safely when palloc() elogs (or _any_ ro
Neil Conway <[EMAIL PROTECTED]> writes:
> On Sun, 2004-11-14 at 11:24 +, Simon Riggs wrote:
>> Does this mean that we do not have
>> regression tests for each maximum setting ... i.e. are we missing a
>> whole class of tests in the regression tests?
> That said, there are some minor logistical
Neil Conway <[EMAIL PROTECTED]> writes:
> Attached is a patch. Not entirely sure that the checks I added are in
> the right places, but at any rate this fixes the three identified
> problems for me.
I think the SELECT limit should be MaxTupleAttributeNumber not
MaxHeapAttributeNumber. The point o
On Sun, 2004-11-14 at 11:24 +, Simon Riggs wrote:
> This seems too obvious a problem to have caused a bug
Well, I'd imagine that we've checked CREATE TABLE et al. with
somewhat-too-large values (like 2000 columns), which wouldn't be
sufficiently large to trigger the problem.
> presumably this
On Sun, 2004-11-14 at 18:29 -0500, Tom Lane wrote:
> Good analysis. We can't check earlier than DefineRelation AFAICS,
> because earlier stages don't know about inherited columns.
>
> On reflection I suspect there are similar issues with SELECTs that have
> more than 64K output columns. This pro
Neil Conway <[EMAIL PROTECTED]> writes:
> This specific assertion is triggered because we represent attribute
> numbers throughout the code base as a (signed) int16 -- the assertion
> failure has occurred because an int16 has wrapped around due to
> overflow. A fix would be to add a check to DefineRelation()
On Sun, 2004-11-14 at 10:05, Neil Conway wrote:
> Joachim Wieland wrote:
> > this query makes postmaster (beta4) die with signal 11:
> >
> > (echo "CREATE TABLE footest(";
> > for i in `seq 0 66000`; do
> > echo "col$i int NOT NULL,";
> > done;
> echo "PRIMARY KEY(col0));") | psql test
Joachim Wieland wrote:
this query makes postmaster (beta4) die with signal 11:
(echo "CREATE TABLE footest(";
for i in `seq 0 66000`; do
echo "col$i int NOT NULL,";
done;
echo "PRIMARY KEY(col0));") | psql test
ERROR: tables can have at most 1600 columns
LOG: serve