On 03/08/2011 09:46 PM, Stefan Certic wrote:
> Hi Sebastian,
>
> Thanks for the response. The problem with a separate log file is that
> the solution doubles the number of I/O transactions: at some point the
> data needs to be parsed into the database and written to disk. I'm
> afraid that doubling the operations will cause bottlenecks during
> high-load traffic peaks and reduce maximum throughput.
AFAIK bind does not do transactional logging, and it has no logging mode
in which it will stop answering queries if logging fails. Personally I
consider this a good thing.
If I were you, I would log to files using standard bind "file" logging,
and use an asynchronous, stateful "tail" of the logfiles to generate
database records. Something like:
    open logfile
    loop:
        begin transaction
        select lastposition from logfile_state for update
        seek to lastposition
        read X lines -> create SQL rows
        update logfile_state set lastposition
        commit
Since you're storing both the query logs and the file position in the
same SQL transaction, this should be pretty much bombproof. Obviously
you'll need to handle filename changes/rotation but that's fairly
trivial. I've used code like this before - it's handy because you can
periodically rsync the files to do an incremental "remote tail" (you
need to code in support for partial lines in that case).
I really, really wouldn't stop answering queries if logging stops, but
if you must - you could add a failure mode to the above process which
terminates bind or blocks port 53.
HTH
_______________________________________________
bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users