Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread david
On Wed, 22 Apr 2009, Stephen Frost wrote: * Glenn Maynard (glennfmayn...@gmail.com) wrote: On Wed, Apr 22, 2009 at 5:51 PM, Stephen Frost wrote: For a single column table, I wouldn't expect much either.  With more columns I think it would be a larger improvement. Maybe. I'm not sure why parsing ...

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Glenn Maynard
On Wed, Apr 22, 2009 at 9:48 PM, Stephen Frost wrote: > Erm..  Prepared queries is about using PQexecPrepared(), not about > sending a text string as an SQL EXECUTE().  PQexecPrepared takes an > array of arguments.  That gets translated into a Bind command in the > protocol with a defined number of ...
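
For reference, a minimal libpq sketch of the distinction being drawn here; the table name, column count, and statement name are assumed for illustration, not taken from the thread:

    #include <libpq-fe.h>

    /* Parse and plan the INSERT once; afterwards each execution ships its
     * values inside the protocol's Bind message, so they never pass
     * through the SQL parser. */
    void insert_prepared(PGconn *conn)
    {
        PQclear(PQprepare(conn, "ins",
                          "INSERT INTO log VALUES ($1, $2, $3)", 3, NULL));

        const char *params[3] = {"1", "2", "3"};
        PQclear(PQexecPrepared(conn, "ins", 3, params, NULL, NULL, 0));
    }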

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Stephen Frost
* da...@lang.hm (da...@lang.hm) wrote: > On Wed, 22 Apr 2009, Glenn Maynard wrote: >> You're talking about round-trips to a *local* server, on the same >> system, not a dedicated server with network round-trips, right? > > the use-case for a production setup for logging servers would probably > i...

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Stephen Frost
* Glenn Maynard (glennfmayn...@gmail.com) wrote: > On Wed, Apr 22, 2009 at 5:51 PM, Stephen Frost wrote: > > For a single column table, I wouldn't expect much either.  With more > > columns I think it would be a larger improvement. > > Maybe. I'm not sure why parsing "(1,2,3,4,5)" in an EXECUTE parameter ...

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread david
On Wed, 22 Apr 2009, Glenn Maynard wrote: On Wed, Apr 22, 2009 at 4:53 PM, James Mansion wrote: And I'm disagreeing with that.  Single row is a given, but I think you'll find it pays to have one round trip if at all possible and invoking multiple prepared statements can work against this. You're talking about round-trips ...

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Glenn Maynard
On Wed, Apr 22, 2009 at 5:51 PM, Stephen Frost wrote: > For a single column table, I wouldn't expect much either.  With more > columns I think it would be a larger improvement. Maybe. I'm not sure why parsing "(1,2,3,4,5)" in an EXECUTE parameter should be faster than parsing the exact same thing ...
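
By contrast, a sketch of the textual path being questioned here: a SQL-level EXECUTE arrives as an ordinary query string, so its parameter list is re-parsed on every call (statement and table names are illustrative):

    #include <libpq-fe.h>

    /* SQL-level PREPARE/EXECUTE: each EXECUTE is itself SQL text, so the
     * value list "(1, 2, 3, 4, 5)" goes through the parser every time. */
    void insert_textual(PGconn *conn)
    {
        PQclear(PQexec(conn, "PREPARE ins (int, int, int, int, int) AS "
                             "INSERT INTO log VALUES ($1, $2, $3, $4, $5)"));
        PQclear(PQexec(conn, "EXECUTE ins(1, 2, 3, 4, 5)"));
    }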

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Stephen Frost
* Glenn Maynard (glennfmayn...@gmail.com) wrote: > >> separate inserts, no transaction: 21.21s > >> separate inserts, same transaction: 1.89s > >> 40 inserts, 100 rows/insert: 0.18s > >> one 4000-value insert: 0.16s > >> 40 prepared inserts, 100 rows/insert: 0.15s > >> COPY (text): 0.10s > >> COPY ...
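
For readers skimming the numbers, a hedged sketch of the multi-row INSERT shape being timed (table and column names assumed); packing 100 rows into one statement amortizes the per-statement parse and round-trip cost:

    #include <libpq-fe.h>

    /* One statement, many rows: the tests used 100 VALUES groups per
     * INSERT; five are shown here to keep the sketch short. */
    void insert_multirow(PGconn *conn)
    {
        PQclear(PQexec(conn,
            "INSERT INTO log (v) VALUES (1), (2), (3), (4), (5)"));
    }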

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Glenn Maynard
On Wed, Apr 22, 2009 at 4:53 PM, James Mansion wrote: > And I'm disagreeing with that.  Single row is a given, but I think you'll > find it pays to have one > round trip if at all possible and invoking multiple prepared statements can > work against this. You're talking about round-trips to a *local* server, on the same system ...

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Glenn Maynard
On Wed, Apr 22, 2009 at 4:37 PM, Stephen Frost wrote: > Thanks for doing the work.  I had been intending to but hadn't gotten to > it yet. I'd done similar tests recently, for some batch import code, so it was just a matter of recreating it. >> separate inserts, no transaction: 21.21s >> separate inserts, same transaction: 1.89s ...

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Joshua D. Drake
On Wed, 2009-04-22 at 21:53 +0100, James Mansion wrote: > Stephen Frost wrote: > > You're re-hashing things I've already said. The big win is batching the > > inserts, however that's done, into fewer transactions. Sure, multi-row > > inserts could be used to do that, but so could dropping begin/commits ...

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Glenn Maynard
On Wed, Apr 22, 2009 at 4:07 PM, da...@lang.hm wrote: > are these done as separate round trips? > > i.e. > begin > insert > insert > .. > end > > or as one round trip? All tests were done by constructing a file and using "time psql -f ...". >> 40 inserts, 100 rows/insert: 0.18s >> one 4000-value insert: 0.16s ...
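
On the round-trip question, libpq will accept several semicolon-separated statements in a single PQexec() call, so a whole begin/insert/commit batch can travel in one trip; a minimal sketch (table name assumed):

    #include <libpq-fe.h>

    /* The entire batch is sent as one query string, i.e. one round trip;
     * PQexec returns the result of the last command (the COMMIT). */
    void insert_one_trip(PGconn *conn)
    {
        PQclear(PQexec(conn,
            "BEGIN;"
            "INSERT INTO log (v) VALUES (1);"
            "INSERT INTO log (v) VALUES (2);"
            "COMMIT;"));
    }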

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread James Mansion
Stephen Frost wrote: You're re-hashing things I've already said. The big win is batching the inserts, however that's done, into fewer transactions. Sure, multi-row inserts could be used to do that, but so could dropping begin/commits in right now which probably takes even less effort. Well, ...

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Tom Lane
Stephen Frost writes: > * Glenn Maynard (glennfmayn...@gmail.com) wrote: >> separate inserts, no transaction: 21.21s >> separate inserts, same transaction: 1.89s >> 40 inserts, 100 rows/insert: 0.18s >> one 4000-value insert: 0.16s >> 40 prepared inserts, 100 rows/insert: 0.15s >> COPY (text): 0.10s ...
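
The COPY path that wins these timings looks roughly like this through libpq (text format; table and column names assumed):

    #include <libpq-fe.h>

    /* Text-format COPY: rows stream in as newline-terminated lines,
     * bypassing per-row statement parsing entirely. */
    void copy_rows(PGconn *conn)
    {
        PGresult *res = PQexec(conn, "COPY log (v) FROM STDIN");
        if (PQresultStatus(res) == PGRES_COPY_IN)
        {
            PQputCopyData(conn, "1\n2\n3\n", 6);
            PQputCopyEnd(conn, NULL);
            PQclear(PQgetResult(conn)); /* command-completion status */
        }
        PQclear(res);
    }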

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Stephen Frost
Glenn, * Glenn Maynard (glennfmayn...@gmail.com) wrote: > This is all well-known, covered information, but perhaps some numbers > will help drive this home. 4000 inserts into a single-column, > unindexed table; with predictable results: Thanks for doing the work. I had been intending to but hadn't gotten to it yet. ...

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread david
On Wed, 22 Apr 2009, Glenn Maynard wrote: On Wed, Apr 22, 2009 at 8:19 AM, Stephen Frost wrote: Yes, as I believe was mentioned already, planning time for inserts is really small.  Parsing time for inserts when there's little parsing that has to happen also isn't all *that* expensive and the same goes for ...

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Glenn Maynard
On Wed, Apr 22, 2009 at 8:19 AM, Stephen Frost wrote: > Yes, as I believe was mentioned already, planning time for inserts is > really small.  Parsing time for inserts when there's little parsing that > has to happen also isn't all *that* expensive and the same goes for > conversions from textual ...

Re: [PERFORM] GiST index performance

2009-04-22 Thread Matthew Wakeling
On Wed, 22 Apr 2009, Matthew Wakeling wrote: I will post a patch when I have ported my bioseg code over to the seg data type. Here is my patch ported over to the seg contrib package, attached. Apply it to seg.c and all should be well. A similar thing needs to be done to cube, but I haven't lo...

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Simon Riggs
On Mon, 2009-04-20 at 14:53 -0700, da...@lang.hm wrote: > the big win is going to be in changing the core of rsyslog so that it can > process multiple messages at a time (bundling them into a single > transaction) That isn't necessarily true as a single "big win". The reason there is an overhead ...
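
A minimal sketch of that bundling idea for a log writer; the function, table, and column names here are hypothetical, not rsyslog's:

    #include <libpq-fe.h>

    /* Flush a group of accumulated messages in one transaction instead
     * of paying a commit per message. */
    void flush_batch(PGconn *conn, const char **msgs, int n)
    {
        PQclear(PQexec(conn, "BEGIN"));
        for (int i = 0; i < n; i++)
        {
            const char *params[1] = { msgs[i] };
            PQclear(PQexecParams(conn,
                "INSERT INTO log (msg) VALUES ($1)",
                1, NULL, params, NULL, NULL, 0));
        }
        PQclear(PQexec(conn, "COMMIT"));
    }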

Re: [PERFORM] GiST index performance

2009-04-22 Thread Matthew Wakeling
On Tue, 21 Apr 2009, Matthew Wakeling wrote: Unfortunately, it seems there is another bug in the picksplit function. My patch fixes a bug that reveals this new bug. The whole picksplit algorithm is fundamentally broken, and needs to be rewritten completely, which is what I am doing. I have no...

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Stephen Frost
* James Mansion (ja...@mansionfamily.plus.com) wrote: > Fine. But like I said, I'd suggest measuring the fractional improvement > for this > when sending multi-row inserts before writing something complex. I > think the > big win will be doing multi-row inserts at all. You're re-hashing things I've already said. ...

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Stephen Frost
David, * da...@lang.hm (da...@lang.hm) wrote: > in a recent thread about prepared statements, where it was identified > that since the planning took place at the time of the prepare you > sometimes have worse plans than for non-prepared statements, a proposal > was made to have a 'pre-parsed, b...

Re: [PERFORM] problem with alter table add constraint......

2009-04-22 Thread roopabenzer
Tom Lane wrote: > > "Albe Laurenz" writes: >> roopasatish wrote: >>> I have an issue with the add foreign key constraint which >>> goes for waiting and locks other queries as well. >>> >>> ALTER TABLE ONLY holding_positions ADD CONSTRAINT >>> holding_positions_stock_id_fkey FOREIGN KEY (stock_id) ...