Regarding the schema: I'm afraid the schema cannot be changed at this point, though I appreciate the suggestions.
Regarding an INSERT performance test, what kind of table shape would you like me to exercise? The patch as submitted may actually shave some cycles off the insertion of rows with trailing nulls even when there are fewer than 64 columns, because it avoids iterating over the null columns a second time in heap_fill_tuple(), so I want to be sure that I pick something that you feel is properly representative.

Thanks.

-Jamie

________________________________
From: Tom Lane <t...@sss.pgh.pa.us>
To: Jameison Martin <jameis...@yahoo.com>
Cc: "pgsql-hackers@postgresql.org" <pgsql-hackers@postgresql.org>
Sent: Tuesday, April 17, 2012 9:57 PM
Subject: Re: [HACKERS] patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap

Jameison Martin <jameis...@yahoo.com> writes:
> The use-case I'm targeting is a schema that has multiple tables with ~800
> columns, most of which have only the first 50 or so values set. 800 columns
> would require 800 bits in a bitmap which equates to 100 bytes. With 8-byte
> alignment the row bitmap would take up 104 bytes with the current
> implementation. If only the first 50 or so columns are actually non-null,
> then the minimum bitmap size wouldn't need to be more than 8 bytes, which
> means the proposed change would save 96 bytes. For the data set I have in
> mind roughly 90% of the rows would fall into the category of needing only 8
> bytes for the null bitmap.

I can't help thinking that (a) this is an incredibly narrow use-case, and (b) you'd be well advised to rethink your schema design anyway. There are a whole lot of inefficiencies associated with having that many columns; the size of the null bitmap is probably one of the smaller ones. I don't really want to suggest an EAV design, but perhaps some of the columns could be collapsed into arrays, or something like that?

> What kind of test results would prove that this is a net win (or not a net
> loss) for typical cases? Are you interested in some insert performance tests?
> Also, how would you define a typical case (e.g. what kind of data shape)?

Hmm, well, most of the tables I've seen have fewer than 64 columns, so that the probability of win is exactly zero. Which would mean that you've got to demonstrate that the added overhead is unmeasurably small. Which maybe you can do, because there's certainly plenty of cycles involved in a tuple insertion, but we need to see the numbers.

I'd suggest an INSERT/SELECT into a temp table as probably stressing tuple formation speed the most. Or maybe you could write a C function that just exercises heap_form_tuple followed by heap_freetuple in a tight loop --- if there's no slowdown measurable in that context, then a fortiori we don't have to worry about it in the real world.

            regards, tom lane
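
[Editor's sketch] For concreteness, here is a rough sketch of the kind of INSERT/SELECT-into-a-temp-table test being discussed, assuming a hypothetical wide table whose trailing columns are left null. The table name, column names, and row count are illustrative only, not part of the submitted patch or an agreed test plan:

-- Hypothetical wide table: only the first few columns are populated,
-- the trailing columns stay NULL. Scale the column count past 64 to
-- exercise the case the patch targets, or keep it narrow as a control.
CREATE TABLE wide_source (
    id bigint,
    c1 int, c2 int, c3 int, c4 int, c5 int
    -- ... additional nullable columns as needed ...
);

INSERT INTO wide_source (id, c1, c2, c3)
SELECT g, g % 10, g % 100, g % 1000
FROM generate_series(1, 1000000) AS g;

-- Stress tuple formation: time a bulk INSERT/SELECT into a temp table.
CREATE TEMP TABLE wide_copy (LIKE wide_source);
\timing on
INSERT INTO wide_copy SELECT * FROM wide_source;
\timing off

Running the same shape with a sub-64-column control table would cover the no-win case Tom describes as well as the trailing-null case the patch is aimed at.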