Re: [PERFORM] performance for high-volume log insertion

2009-05-01 Thread Glenn Maynard
On Fri, May 1, 2009 at 8:29 PM, PFC wrote: >        Roundtrips can be quite fast but they have a hidden problem, which is > that everything gets serialized. The client and server will serialize, but what usually matters most is avoiding serializing against disk I/O--and that's why write-back cach

Re: [PERFORM] performance for high-volume log insertion

2009-05-01 Thread david
On Sat, 2 May 2009, PFC wrote: Blocking round trips to another process on the same server should be fairly cheap--that is, writing to a socket (or pipe, or localhost TCP connection) where the other side is listening for it; and then blocking in return for the response. The act of writing to an

Re: [PERFORM] performance for high-volume log insertion

2009-05-01 Thread PFC
Blocking round trips to another process on the same server should be fairly cheap--that is, writing to a socket (or pipe, or localhost TCP connection) where the other side is listening for it; and then blocking in return for the response. The act of writing to an FD that another process is wait

Re: [PERFORM] performance for high-volume log insertion

2009-04-27 Thread david
On Sun, 26 Apr 2009, Kris Jurka wrote: Scott Marlowe wrote: On Sun, Apr 26, 2009 at 11:07 AM, Kris Jurka wrote: As a note for non-JDBC users, the JDBC driver's batch interface allows executing multiple statements in a single network roundtrip. This is something you can't get in libpq, so be

Re: [PERFORM] performance for high-volume log insertion

2009-04-27 Thread Scott Marlowe
On Mon, Apr 27, 2009 at 12:45 AM, Kris Jurka wrote: > Scott Marlowe wrote: >> >> On Sun, Apr 26, 2009 at 11:07 AM, Kris Jurka wrote: >>> >>> As a note for non-JDBC users, the JDBC driver's batch interface allows >>> executing multiple statements in a single network roundtrip.  This is >>> somethi

Re: [PERFORM] performance for high-volume log insertion

2009-04-26 Thread Kris Jurka
Scott Marlowe wrote: On Sun, Apr 26, 2009 at 11:07 AM, Kris Jurka wrote: As a note for non-JDBC users, the JDBC driver's batch interface allows executing multiple statements in a single network roundtrip. This is something you can't get in libpq, so beware of this for comparison's sake.. Re

Re: [PERFORM] performance for high-volume log insertion

2009-04-26 Thread Scott Marlowe
On Sun, Apr 26, 2009 at 11:07 AM, Kris Jurka wrote: > > > On Thu, 23 Apr 2009, Thomas Kellerer wrote: > >> Out of curiosity I did some tests through JDBC. >> >> Using a single-column (integer) table, re-using a prepared statement took >> about 7 seconds to insert 10 rows with JDBC's batch inte

Re: [PERFORM] performance for high-volume log insertion

2009-04-26 Thread Thomas
Kris Jurka wrote on 26.04.2009 19:07: Despite the size of the batch passed to the JDBC driver, the driver breaks it up into internal sub-batch sizes of 256 to send to the server. It does this to avoid network deadlocks from sending too much data to the server without reading any in return. If
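The sub-batching behavior described above can be sketched generically. This is a minimal illustration of the idea (not the JDBC driver's actual code): split a large batch into chunks of at most 256 statements so neither side's network buffers fill while the other is only writing.

```python
def sub_batches(statements, max_batch=256):
    """Yield sub-batches no larger than max_batch, mirroring the
    deadlock-avoidance strategy described for the JDBC driver."""
    for i in range(0, len(statements), max_batch):
        yield statements[i:i + max_batch]

# e.g. a batch of 1000 statements goes out as 256 + 256 + 256 + 232
batch = ["INSERT INTO log VALUES (%d)" % n for n in range(1000)]
sizes = [len(b) for b in sub_batches(batch)]
```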

Re: [PERFORM] performance for high-volume log insertion

2009-04-26 Thread Kris Jurka
On Thu, 23 Apr 2009, Thomas Kellerer wrote: Out of curiosity I did some tests through JDBC. Using a single-column (integer) table, re-using a prepared statement took about 7 seconds to insert 10 rows with JDBC's batch interface and a batch size of 1000 As a note for non-JDBC users,

Re: [PERFORM] performance for high-volume log insertion

2009-04-23 Thread Thomas Kellerer
Stephen Frost wrote on 22.04.2009 23:51: What about 4 individual prepared inserts? Just curious about it. 4 inserts, one prepared statement each (constructing the prepared statement only once), in a single transaction: 1.68s I'm surprised that there's any win here at all. For a sin

Re: [PERFORM] performance for high-volume log insertion

2009-04-23 Thread Stephen Frost
* Glenn Maynard (glennfmayn...@gmail.com) wrote: > I'd suggest this be mentioned in the sql-prepare documentation, then, > because that documentation only discusses using prepared statements to > eliminate redundant planning costs. (I'm sure it's mentioned in the > API docs and elsewhere, but if i

Re: [PERFORM] performance for high-volume log insertion

2009-04-23 Thread Stephen Frost
* da...@lang.hm (da...@lang.hm) wrote: > On Wed, 22 Apr 2009, Stephen Frost wrote: >> Erm.. Prepared queries is about using PQexecPrepared(), not about >> sending a text string as an SQL EXECUTE(). PQexecPrepared takes an >> array of arguments. That gets translated into a Bind command in the >>

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread david
On Wed, 22 Apr 2009, Stephen Frost wrote: * Glenn Maynard (glennfmayn...@gmail.com) wrote: On Wed, Apr 22, 2009 at 5:51 PM, Stephen Frost wrote: For a single column table, I wouldn't expect much either.  With more columns I think it would be a larger improvement. Maybe. I'm not sure why pa

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Glenn Maynard
On Wed, Apr 22, 2009 at 9:48 PM, Stephen Frost wrote: > Erm..  Prepared queries is about using PQexecPrepared(), not about > sending a text string as an SQL EXECUTE().  PQexecPrepared takes an > array of arguments.  That gets translated into a Bind command in the > protocol with a defined number o

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Stephen Frost
* da...@lang.hm (da...@lang.hm) wrote: > On Wed, 22 Apr 2009, Glenn Maynard wrote: >> You're talking about round-trips to a *local* server, on the same >> system, not a dedicated server with network round-trips, right? > > the use-case for a production setup for logging servers would probably > i

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Stephen Frost
* Glenn Maynard (glennfmayn...@gmail.com) wrote: > On Wed, Apr 22, 2009 at 5:51 PM, Stephen Frost wrote: > > For a single column table, I wouldn't expect much either.  With more > > columns I think it would be a larger improvement. > > Maybe. I'm not sure why parsing "(1,2,3,4,5)" in an EXECUTE

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread david
On Wed, 22 Apr 2009, Glenn Maynard wrote: On Wed, Apr 22, 2009 at 4:53 PM, James Mansion wrote: And I'm disagreeing with that.  Single row is a given, but I think you'll find it pays to have one round trip if at all possible and invoking multiple prepared statements can work against this. Yo

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Glenn Maynard
On Wed, Apr 22, 2009 at 5:51 PM, Stephen Frost wrote: > For a single column table, I wouldn't expect much either.  With more > columns I think it would be a larger improvement. Maybe. I'm not sure why parsing "(1,2,3,4,5)" in an EXECUTE parameter should be faster than parsing the exact same thin
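A rough sketch of the two forms being compared may help; the statement name and values below are hypothetical, and the PQexecPrepared call appears only as a comment. The point of contention is that the textual EXECUTE form still ships value syntax the server must parse, while protocol-level Bind ships values as separate parameters.

```python
def execute_statement(name, params):
    """Render the textual EXECUTE form; every value here is sent as SQL
    text and re-parsed server-side."""
    return "EXECUTE %s(%s)" % (name, ", ".join(str(p) for p in params))

# With PQexecPrepared the equivalent call ships only the statement name
# plus an array of parameter values, with no value syntax to parse:
#   PQexecPrepared(conn, "log_ins", 5, values, NULL, NULL, 0);
stmt = execute_statement("log_ins", [1, 2, 3, 4, 5])
```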

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Stephen Frost
* Glenn Maynard (glennfmayn...@gmail.com) wrote: > >> separate inserts, no transaction: 21.21s > >> separate inserts, same transaction: 1.89s > >> 40 inserts, 100 rows/insert: 0.18s > >> one 4-value insert: 0.16s > >> 40 prepared inserts, 100 rows/insert: 0.15s > >> COPY (text): 0.10s > >> COPY

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Glenn Maynard
On Wed, Apr 22, 2009 at 4:53 PM, James Mansion wrote: > And I'm disagreeing with that.  Single row is a given, but I think you'll > find it pays to have one > round trip if at all possible and invoking multiple prepared statements can > work against this. You're talking about round-trips to a *lo

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Glenn Maynard
On Wed, Apr 22, 2009 at 4:37 PM, Stephen Frost wrote: > Thanks for doing the work.  I had been intending to but hadn't gotten to > it yet. I'd done similar tests recently, for some batch import code, so it was just a matter of recreating it. >> separate inserts, no transaction: 21.21s >> separat

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Joshua D. Drake
On Wed, 2009-04-22 at 21:53 +0100, James Mansion wrote: > Stephen Frost wrote: > > You're re-hashing things I've already said. The big win is batching the > > inserts, however that's done, into fewer transactions. Sure, multi-row > > inserts could be used to do that, but so could dropping begin/c

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Glenn Maynard
On Wed, Apr 22, 2009 at 4:07 PM, wrote: > are these done as separate round trips? > > i.e. > begin > insert > insert > .. > end > > or as one round trip? All tests were done by constructing a file and using "time psql -f ...". >> 40 inserts, 100 rows/insert: 0.18s >> one 4-value insert:
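A test file like the ones timed here could be generated with a short script. This is a reconstruction under assumptions (the table name `t`, single integer column, and file name are illustrative, not taken from the thread):

```python
def write_test_file(path, rows, per_insert=1, transaction=False):
    """Write a psql script inserting `rows` integers into a one-column
    table, `per_insert` rows per INSERT, optionally in one transaction."""
    with open(path, "w") as f:
        if transaction:
            f.write("BEGIN;\n")
        for start in range(0, rows, per_insert):
            vals = ",".join("(%d)" % v
                            for v in range(start, min(start + per_insert, rows)))
            f.write("INSERT INTO t VALUES %s;\n" % vals)
        if transaction:
            f.write("COMMIT;\n")

# 4000 rows as 40 inserts of 100 rows each, then: time psql -f batch.sql
write_test_file("batch.sql", 4000, per_insert=100, transaction=True)
```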

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread James Mansion
Stephen Frost wrote: You're re-hashing things I've already said. The big win is batching the inserts, however that's done, into fewer transactions. Sure, multi-row inserts could be used to do that, but so could dropping begin/commits in right now which probably takes even less effort. Well,

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Tom Lane
Stephen Frost writes: > * Glenn Maynard (glennfmayn...@gmail.com) wrote: >> separate inserts, no transaction: 21.21s >> separate inserts, same transaction: 1.89s >> 40 inserts, 100 rows/insert: 0.18s >> one 4-value insert: 0.16s >> 40 prepared inserts, 100 rows/insert: 0.15s >> COPY (text): 0.

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Stephen Frost
Glenn, * Glenn Maynard (glennfmayn...@gmail.com) wrote: > This is all well-known, covered information, but perhaps some numbers > will help drive this home. 4 inserts into a single-column, > unindexed table; with predictable results: Thanks for doing the work. I had been intending to but ha

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread david
On Wed, 22 Apr 2009, Glenn Maynard wrote: On Wed, Apr 22, 2009 at 8:19 AM, Stephen Frost wrote: Yes, as I believe was mentioned already, planning time for inserts is really small.  Parsing time for inserts when there's little parsing that has to happen also isn't all *that* expensive and the s

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Glenn Maynard
On Wed, Apr 22, 2009 at 8:19 AM, Stephen Frost wrote: > Yes, as I believe was mentioned already, planning time for inserts is > really small.  Parsing time for inserts when there's little parsing that > has to happen also isn't all *that* expensive and the same goes for > conversions from textual

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Simon Riggs
On Mon, 2009-04-20 at 14:53 -0700, da...@lang.hm wrote: > the big win is going to be in changing the core of rsyslog so that it can > process multiple messages at a time (bundling them into a single > transaction) That isn't necessarily true as a single "big win". The reason there is an overh

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Stephen Frost
* James Mansion (ja...@mansionfamily.plus.com) wrote: > Fine. But like I said, I'd suggest measuring the fractional improvement > for this > when sending multi-row inserts before writing something complex. I > think the > big win will be doing multi-row inserts at all. You're re-hashing t

Re: [PERFORM] performance for high-volume log insertion

2009-04-22 Thread Stephen Frost
David, * da...@lang.hm (da...@lang.hm) wrote: > in a recent thread about prepared statements, where it was identified > that since the planning took place at the time of the prepare you > sometimes have worse plans than for non-prepared statements, a proposal > was made to have a 'pre-parsed, b

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread James Mansion
Stephen Frost wrote: apart again. That's where the performance is going to be improved by going that route, not so much in eliminating the planning. Fine. But like I said, I'd suggest measuring the fractional improvement for this when sending multi-row inserts before writing something compl

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread Robert Haas
On Tue, Apr 21, 2009 at 8:12 PM, wrote: >> Using prepared queries, at least if you use PQexecPrepared or >> PQexecParams, also reduces the work required on the client to build the >> whole string, and the parsing overhead on the database side to pull it >> apart again.  That's where the performan

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread david
On Tue, 21 Apr 2009, Stephen Frost wrote: * James Mansion (ja...@mansionfamily.plus.com) wrote: da...@lang.hm wrote: on the other hand, when you have a full queue (lots of stuff to insert) is when you need the performance the most. if it's enough of a win on the database side, it could be wort

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread Greg Smith
On Tue, 21 Apr 2009, da...@lang.hm wrote: 1) Disk/controller has a proper write cache. Writes and fsync will be fast. You can insert a few thousand individual transactions per second. in case #1 would you expect to get significant gains from batching? doesn't it suffer from problems similar

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread Stephen Frost
* James Mansion (ja...@mansionfamily.plus.com) wrote: > da...@lang.hm wrote: >> on the other hand, when you have a full queue (lots of stuff to >> insert) is when you need the performance the most. if it's enough of a >> win on the database side, it could be worth more effort on the >> applic

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread James Mansion
da...@lang.hm wrote: 2. insert into table values (),(),(),() Using this structure would be more database agnostic, but won't perform as well as the COPY options I don't believe. It might be interesting to do a large "insert into table values (),(),()" as a prepared statement, but then you'd ha
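A sketch of what preparing such a multi-row insert involves; the generator below is illustrative only. The awkwardness alluded to is that each (column-count, row-count) shape needs its own numbered-placeholder statement prepared:

```python
def multirow_insert(table, ncols, nrows):
    """Build the SQL text of a multi-row insert with numbered
    placeholders, suitable for PQprepare/PQexecPrepared.  A separate
    statement must be prepared per (ncols, nrows) shape, which is the
    awkward part of preparing "values (),(),()" inserts."""
    rows = []
    p = 1
    for _ in range(nrows):
        rows.append("(" + ",".join("$%d" % (p + c) for c in range(ncols)) + ")")
        p += ncols
    return "INSERT INTO %s VALUES %s" % (table, ",".join(rows))

sql = multirow_insert("log", 2, 3)
```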

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread Stephen Frost
* da...@lang.hm (da...@lang.hm) wrote: > by the way, now that I understand how you were viewing this, I see why > you were saying that there would need to be a SQL parser. I was missing > that headache, by going the direction of having the user specify the > individual components (which has its

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread Stephen Frost
* da...@lang.hm (da...@lang.hm) wrote: >> Ignoring the fact that this is horrible, horrible non-SQL, > > that example is for MySQL, nuff said ;-) indeed. > for some reason I was stuck on the idea of the config specifying the > statement and variables seperatly, so I wasn't thinking this way, ho

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread david
On Tue, 21 Apr 2009, da...@lang.hm wrote: I see that you use %blah% to define variables inside your string. That's fine. There's no reason why you can't use this exact syntax to build a prepared query. No user-impact changes are necessary. Here's what you do: for some reason I was stuck o
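The suggested approach, reusing the existing %blah% syntax to build a prepared query with no user-visible config change, might look roughly like this (the function name is made up; only the %var% convention comes from the thread):

```python
import re

def template_to_prepared(template):
    """Turn a config template such as
    "insert into log values (%timestamp%, %msg%)" into prepared-statement
    text plus the ordered list of variable names to bind at execute time."""
    names = []
    def repl(match):
        names.append(match.group(1))
        return "$%d" % len(names)   # $1, $2, ... in order of appearance
    sql = re.sub(r"%(\w+)%", repl, template)
    return sql, names

sql, names = template_to_prepared("insert into log values (%timestamp%, %msg%)")
```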

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread david
On Tue, 21 Apr 2009, Stephen Frost wrote: * da...@lang.hm (da...@lang.hm) wrote: I think the key thing is that rsyslog today doesn't know anything about SQL variables, it just creates a string that the user and the database say looks like a SQL statement. err, what SQL variables? You mean th

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread Kenneth Marshall
On Tue, Apr 21, 2009 at 11:09:18AM -0700, da...@lang.hm wrote: > On Tue, 21 Apr 2009, Greg Smith wrote: > >> On Mon, 20 Apr 2009, da...@lang.hm wrote: >> >>> while I fully understand the 'benchmark your situation' need, this isn't >>> that simple. in this case we are trying to decide what API/int

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread david
On Tue, 21 Apr 2009, Greg Smith wrote: On Mon, 20 Apr 2009, da...@lang.hm wrote: while I fully understand the 'benchmark your situation' need, this isn't that simple. in this case we are trying to decide what API/interface to use in an infrastructure tool that will be distributed in common di

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread Stephen Frost
* da...@lang.hm (da...@lang.hm) wrote: > I think the key thing is that rsyslog today doesn't know anything about > SQL variables, it just creates a string that the user and the database > say looks like a SQL statement. err, what SQL variables? You mean the $NUM stuff? They're just placeholde

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread Greg Smith
On Mon, 20 Apr 2009, da...@lang.hm wrote: while I fully understand the 'benchmark your situation' need, this isn't that simple. in this case we are trying to decide what API/interface to use in an infrastructure tool that will be distributed in common distros (it's now the default syslog packa

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread david
On Tue, 21 Apr 2009, Stephen Frost wrote: * Ben Chobot (be...@silentmedia.com) wrote: On Mon, 20 Apr 2009, da...@lang.hm wrote: one huge advantage of putting the sql into the configuration is the ability to work around other users of the database. +1 on this. We've always found tools much ea

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread Stephen Frost
* Ben Chobot (be...@silentmedia.com) wrote: > On Mon, 20 Apr 2009, da...@lang.hm wrote: >> one huge advantage of putting the sql into the configuration is the >> ability to work around other users of the database. > > +1 on this. We've always found tools much easier to work with when they > coul

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread Ben Chobot
On Mon, 20 Apr 2009, da...@lang.hm wrote: one huge advantage of putting the sql into the configuration is the ability to work around other users of the database. +1 on this. We've always found tools much easier to work with when they could be adapted to our schema, as opposed to changing our

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread Kenneth Marshall
Cheers, Ken > On Tue, 21 Apr 2009, Kenneth Marshall wrote: > >> Date: Tue, 21 Apr 2009 08:33:30 -0500 >> From: Kenneth Marshall >> To: Richard Huxton >> Cc: da...@lang.hm, Stephen Frost , >> Greg Smith , pgsql-performance@postgresql.org >> Sub

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread david
Kenneth Marshall wrote: Date: Tue, 21 Apr 2009 08:33:30 -0500 From: Kenneth Marshall To: Richard Huxton Cc: da...@lang.hm, Stephen Frost , Greg Smith , pgsql-performance@postgresql.org Subject: Re: [PERFORM] performance for high-volume log insertion Hi, I just finished reading this thread. W

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread Kenneth Marshall
Hi, I just finished reading this thread. We are currently working on setting up a central log system using rsyslog and PostgreSQL. It works well once we patched the memory leak. We also looked at what could be done to improve the efficiency of the DB interface. On the rsyslog side, moving to prepa

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread Richard Huxton
da...@lang.hm wrote: On Tue, 21 Apr 2009, Stephen Frost wrote: * da...@lang.hm (da...@lang.hm) wrote: while I fully understand the 'benchmark your situation' need, this isn't that simple. It really is. You know your application, you know its primary use cases, and probably have some data t

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread david
On Tue, 21 Apr 2009, Stephen Frost wrote: David, * da...@lang.hm (da...@lang.hm) wrote: I thought that part of the 'efficiency' and 'performance' to be gained from binary modes was avoiding the need to parse commands, if it's only the savings in converting column contents from text to specifi

Re: [PERFORM] performance for high-volume log insertion

2009-04-21 Thread david
On Tue, 21 Apr 2009, Stephen Frost wrote: * da...@lang.hm (da...@lang.hm) wrote: while I fully understand the 'benchmark your situation' need, this isn't that simple. It really is. You know your application, you know its primary use cases, and probably have some data to play with. You're c

Re: [PERFORM] performance for high-volume log insertion

2009-04-20 Thread Stephen Frost
David, * da...@lang.hm (da...@lang.hm) wrote: > is this as simple as creating a database and doing an explain on each of > these? or do I need to actually measure the time (at which point the > specific hardware and tuning settings become an issue again) No, you need to measure the time. An

Re: [PERFORM] performance for high-volume log insertion

2009-04-20 Thread Stephen Frost
* da...@lang.hm (da...@lang.hm) wrote: > while I fully understand the 'benchmark your situation' need, this isn't > that simple. It really is. You know your application, you know its primary use cases, and probably have some data to play with. You're certainly in a much better situation to at

Re: [PERFORM] performance for high-volume log insertion

2009-04-20 Thread Stephen Frost
David, * da...@lang.hm (da...@lang.hm) wrote: > I thought that part of the 'efficiency' and 'performance' to be gained > from binary modes was avoiding the need to parse commands, if it's only > the savings in converting column contents from text to specific types, > it's much less importan

Re: [PERFORM] performance for high-volume log insertion

2009-04-20 Thread david
On Mon, 20 Apr 2009, Stephen Frost wrote: Greg, * Greg Smith (gsm...@gregsmith.com) wrote: The win from switching from INSERT to COPY can be pretty big, further optimizing to BINARY you'd really need to profile to justify. Have you done any testing to compare COPY vs. INSERT using prepared s

Re: [PERFORM] performance for high-volume log insertion

2009-04-20 Thread david
On Mon, 20 Apr 2009, Greg Smith wrote: On Mon, 20 Apr 2009, da...@lang.hm wrote: any idea what sort of difference binary mode would result in? The win from switching from INSERT to COPY can be pretty big, further optimizing to BINARY you'd really need to profile to justify. I haven't foun

Re: [PERFORM] performance for high-volume log insertion

2009-04-20 Thread david
On Tue, 21 Apr 2009, Stephen Frost wrote: David, * da...@lang.hm (da...@lang.hm) wrote: the database structure is not being defined by (or specifically for) rsyslog. so at compile time we have _no_ idea how many variables of what type there are going to be. my example of ($timestamp,$msg) was in

Re: [PERFORM] performance for high-volume log insertion

2009-04-20 Thread Stephen Frost
David, * da...@lang.hm (da...@lang.hm) wrote: > any idea what sort of difference binary mode would result in? It depends a great deal on your application.. > currently rsyslog makes use of its extensive formatting capabilities to > format a string along the lines of > $DBformat="insert into t

Re: [PERFORM] performance for high-volume log insertion

2009-04-20 Thread Stephen Frost
David, * da...@lang.hm (da...@lang.hm) wrote: > the database structure is not being defined by (or specifically for) > rsyslog. so at compile time we have _no_ idea how many variables of what > type there are going to be. my example of ($timestamp,$msg) was intended > to just be a sample (avoi

Re: [PERFORM] performance for high-volume log insertion

2009-04-20 Thread david
On Mon, 20 Apr 2009, Stephen Frost wrote: David, * da...@lang.hm (da...@lang.hm) wrote: any idea what sort of difference binary mode would result in? It depends a great deal on your application.. currently rsyslog makes use of its extensive formatting capabilities to format a string along

Re: [PERFORM] performance for high-volume log insertion

2009-04-20 Thread Stephen Frost
Greg, * Greg Smith (gsm...@gregsmith.com) wrote: > The win from switching from INSERT to COPY can be pretty big, further > optimizing to BINARY you'd really need to profile to justify. Have you done any testing to compare COPY vs. INSERT using prepared statements? I'd be curious to know how

Re: [PERFORM] performance for high-volume log insertion

2009-04-20 Thread Greg Smith
On Mon, 20 Apr 2009, da...@lang.hm wrote: any idea what sort of difference binary mode would result in? The win from switching from INSERT to COPY can be pretty big, further optimizing to BINARY you'd really need to profile to justify. I haven't found any significant difference in binary mo
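For reference, COPY's text format is simple enough to generate client-side, which is part of why it wins over INSERT before BINARY even enters the picture. A minimal encoder, assuming the default tab delimiter and \N null marker (the column values shown are made up):

```python
def copy_text_row(values):
    r"""Encode one row in COPY's text format: tab-separated fields,
    \N for NULL, with backslash, tab and newline escaped."""
    out = []
    for v in values:
        if v is None:
            out.append(r"\N")
        else:
            s = str(v)
            # escape backslash first so the other escapes aren't doubled
            s = s.replace("\\", "\\\\").replace("\t", "\\t").replace("\n", "\\n")
            out.append(s)
    return "\t".join(out) + "\n"

# Rows in this form are streamed to the server with PQputCopyData in
# libpq, or the equivalent batch/copy call in higher-level drivers.
line = copy_text_row(["2009-04-20 14:53:00", None, "kernel: oops"])
```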

Re: [PERFORM] performance for high-volume log insertion

2009-04-20 Thread david
On Mon, 20 Apr 2009, Stephen Frost wrote: David, * da...@lang.hm (da...@lang.hm) wrote: I am working with the rsyslog developers to improve its performance in inserting log messages to databases. Great! currently they have a postgres interface that works like all the other ones, where rsy

Re: [PERFORM] performance for high-volume log insertion

2009-04-20 Thread Stephen Frost
David, * da...@lang.hm (da...@lang.hm) wrote: > I am working with the rsyslog developers to improve its performance in > inserting log messages to databases. Great! > currently they have a postgres interface that works like all the other > ones, where rsyslog formats an insert statement, pa

[PERFORM] performance for high-volume log insertion

2009-04-20 Thread david
I am working with the rsyslog developers to improve its performance in inserting log messages to databases. currently they have a postgres interface that works like all the other ones, where rsyslog formats an insert statement, passes that to the interface module, that sends it to postgres (