For those who are interested in the end result, I decided to post my
code on my blog:
http://garnser.blogspot.com/2009/04/dns-query-parser.html
The code creates a FIFO that the BIND query log is written to. Once the
script receives data, it is parsed, cached, and written to a database.
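A rough sketch of that reader side (the path and permissions here are
assumptions, not the exact code from the post):

    use strict;
    use warnings;
    use POSIX qw(mkfifo);

    my $fifo = '/var/log/named/query.fifo';
    -p $fifo or mkfifo($fifo, 0660) or die "mkfifo $fifo: $!";

    # named's querylog channel writes into the FIFO; the read blocks
    # until data arrives, so there is no polling involved.
    open my $fh, '<', $fifo or die "open $fifo: $!";
    while (my $line = <$fh>) {
        # parse the line, cache it, and batch-write to the database
    }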
I'll continue to make ad
Thanks for the tip; however, the main problem I'm seeing is that
Perl + MySQL becomes a bottleneck if this approach were used. I ran
some tests yesterday showing that caching 500k rows in a variable and
sending them to MySQL in one batch was ten times as fast (90k vs. 9k)
as doing individual writes.
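For what it's worth, a minimal sketch of the batching approach using
DBI (the table and column names are made up, and the querylog regex may
need adjusting for your BIND version):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('DBI:mysql:database=dnslog;host=localhost',
                           'dbuser', 'dbpass', { RaiseError => 1 });

    my @rows;
    my $BATCH = 1000;  # tune; huge batches can exceed max_allowed_packet

    sub flush_rows {
        return unless @rows;
        # One multi-row INSERT instead of one statement per row.
        my $placeholders = join ',', ('(?,?,?)') x @rows;
        my $sth = $dbh->prepare(
            "INSERT INTO queries (ts, client, qname) VALUES $placeholders");
        $sth->execute(map { @$_ } @rows);
        @rows = ();
    }

    while (my $line = <STDIN>) {
        # e.g. 28-Apr-2009 10:01:02.345 client 192.0.2.1#1234: query: example.com IN A +
        next unless $line =~ /^(\S+ \S+) client ([\d.]+)#\d+:.*? query: (\S+)/;
        push @rows, [ $1, $2, $3 ];
        flush_rows() if @rows >= $BATCH;
    }
    flush_rows();

LOAD DATA INFILE on a temporary file is usually even faster than
multi-row INSERTs, if you can live with the extra moving part.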
Hi,
You can forward your logs to another machine (e.g. one dedicated to
logs) and parse the log file there.
It's a good solution if you have more than one server.
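As a sketch, assuming named logs queries to the local6 syslog facility
(the facility and host names below are placeholders):

    # /etc/syslog.conf on each name server: ship query logs off-box
    local6.*        @loghost.example.com

    # /etc/syslog.conf on the loghost: collect them in one file
    local6.*        /var/log/named-queries.log

With classic sysklogd the loghost also has to run syslogd with -r to
accept messages from the network.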
Best regards,
Sebastian Tymków
You may be interested in using circular buffers, instead of a log file.
http://www.finalcog.com/replace-logs-emlog-circular-buffer
I've used emlog successfully in the past and have been very pleased
with its performance.
Hope this is useful.
Chris.
2009/4/29 Scott Haneda :
> I have read the other
After feedback and running some tests today, I've found that the most
"cost-effective" approach as far as performance goes is to use the
native querylog and rotate it often enough to keep the data as "live"
as possible.
Some quick notes (all tests done with Perl):
- Parsing the querylog, 500,000 queries:
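For reference, one way to set this up is to let named rotate the file
itself; a sketch of the logging channel (path, size and version count
are just examples):

    logging {
        channel querylog_file {
            // named rotates the file once it reaches 20 MB,
            // keeping the 10 most recent versions
            file "/var/log/named/query.log" versions 10 size 20m;
            severity info;
            print-time yes;
        };
        category queries { querylog_file; };
    };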
I have read the other posts here, and it looks like you are settling on
tail, or a pipe, but that log rotation is causing you headaches.
I have had to deal with things like this in the past, and took a
different approach. Here are some ideas to think about.
Since you mentioned below you wan
Ah, i.e. I'm using an incorrect log facility... that would explain things.
Either way, I did try to parse tcpdump output for queries; the problem
I'm running into is that Perl isn't the best option for this, so I'm
going to look into whether things could be sped up with Python or
something.
/Jonathan
2009/4/28
On Tue, 28 Apr 2009, Jonathan Petersson wrote:
> I did try to run the following option:
> syslog named;
syslog should define a "syslog facility".
Look in the openlog, syslog and/or syslog.conf manual pages to see lists
of facilities. The ARM says: "The syslog destination clause directs the
c
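In short: point the channel at a real facility. A sketch (the choice of
local3 is arbitrary):

    logging {
        channel query_syslog {
            // "named" is not a syslog facility; use daemon
            // or one of local0..local7 instead
            syslog local3;
            severity info;
        };
        category queries { query_syslog; };
    };

    # and in /etc/syslog.conf:
    local3.*        /var/log/named-queries.log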
I did try to run the following option:
syslog named;
but when matching on named.* in syslog.conf there's no output.
/Jonathan
2009/4/28 JINMEI Tatuya / 神明達哉 :
> At Tue, 28 Apr 2009 10:01:02 -0700,
> Jonathan Petersson wrote:
>
>> So I gave tail a try in Perl both via File::Tail and by putting tail
>> -f in a pipe.
At Tue, 28 Apr 2009 10:01:02 -0700,
Jonathan Petersson wrote:
> So I gave tail a try in Perl both via File::Tail and by putting tail
> -f in a pipe. Neither seems to handle the log rotation well. In my
> case I'm running a test sending 1 million queries; of those, half are
> picked up by File::
Just realized something else: since I'm using Perl in this case, it's
going to be a permanent bottleneck regardless of whether I use
syslog/tcpdump/querylog; it just isn't quick enough for that kind of
data flow...
Back to the drawing board
/Jonathan
On Tue, Apr 28, 2009 at 10:49 AM, Jonathan Pete
I don't think the cost of having query logging enabled is that great.
Running the same test using dnsperf shows a 43% performance increase
with it disabled, but 70,000 queries per second is still acceptable
with query logging enabled.
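(For anyone wanting to reproduce this, a typical invocation looks
something like the following; the query file name is an assumption:

    dnsperf -s 127.0.0.1 -d query.txt -l 60

where query.txt contains one "name type" pair per line, e.g.
"www.example.com A".)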
/Jonathan
On Tue, Apr 28, 2009 at 10:05 AM, Alan Clegg wrote:
> Jonathan Petersso
Jonathan Petersson wrote:
> So I gave tail a try in Perl both via File::Tail and by putting tail
> -f in a pipe.
As was stated previously in this thread, you are going down a bad path
by using the query log for any purpose beyond short debugging sessions.
The loss in performance is rather painful.
T
On Tue, 28 Apr 2009, Gregory Hicks wrote:
> From: Jonathan Petersson
> Date: Tue, 28 Apr 2009 08:13:25 -0700
> Subject: Re: approach on parsing the query-log file
> To: niall.orei...@ucd.ie
> Cc: Bind Mailing
>
> Yeah I've thought about using tail but I'm not sure how locking would
> be managed when logrotate kicks in, does anyone know?
> From: Jonathan Petersson
> Date: Tue, 28 Apr 2009 08:13:25 -0700
> Subject: Re: approach on parsing the query-log file
> To: niall.orei...@ucd.ie
> Cc: Bind Mailing
>
> Yeah I've thought about using tail but I'm not sure how locking would
> be managed when logrotate kicks in, does anyone know?
Yeah I've thought about using tail but I'm not sure how locking would
be managed when logrotate kicks in, does anyone know?
On Tue, Apr 28, 2009 at 3:41 AM, Niall O'Reilly wrote:
> On Mon, 2009-04-27 at 22:26 -0700, Jonathan Petersson wrote:
>> The obvious question that occurs is: what's the best approach to do
>> this?
The problem I'm seeing with this is that we'll get data that may be
inconsistent. Just because a query is sent to a server doesn't mean
there's a name server there to answer; I believe parsing the log file
one way or another would give a more accurate picture of the load, etc.
On Tue, Apr 28, 200
On Mon, 2009-04-27 at 22:26 -0700, Jonathan Petersson wrote:
> The obvious question that occurs is: what's the best approach to do
> this?
I've not used it, but a colleague is very keen on File::Tail
(http://search.cpan.org/~mgrabnar/File-Tail-0.99.3/Tail.pm).
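From the documentation, a minimal loop would look something like this
(path and intervals are guesses, since I've not used it myself):

    use File::Tail;

    my $tail = File::Tail->new(
        name        => '/var/log/named/query.log',
        maxinterval => 1,   # check for new data at least once a second
        tail        => 0,   # start from the end of the file
    );

    while (defined(my $line = $tail->read)) {
        chomp $line;
        # parse and process $line here
    }

It also claims to notice truncation and reopen the file, which may help
with the logrotate concern.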
On Apr 28, 2009, at 5:26 AM, Jonathan Petersson wrote:
> Hi all,
> I'm thinking of writing a quick tool to archive the query-log in a
> database to allow for easier reports.
If it were me, I would turn off query logging and use a packet sniffer.
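Something along these lines, assuming Net::Pcap and Net::DNS are
available (the interface name and header offsets are assumptions; this
sketch expects IPv4 without IP options):

    use Net::Pcap;
    use Net::DNS::Packet;

    my $err;
    my $pcap = Net::Pcap::open_live('eth0', 1500, 0, 100, \$err)
        or die "pcap: $err";

    # Capture only DNS queries arriving on UDP port 53.
    Net::Pcap::compile($pcap, \my $filter, 'udp dst port 53', 1, 0) == 0
        or die 'bad filter';
    Net::Pcap::setfilter($pcap, $filter);
    Net::Pcap::loop($pcap, -1, \&handle_packet, '');

    sub handle_packet {
        my ($user, $hdr, $pkt) = @_;
        # Skip Ethernet (14) + IPv4 (20) + UDP (8) headers.
        my $payload = substr($pkt, 42);
        my $dns = eval { Net::DNS::Packet->new(\$payload) } or return;
        my ($q) = $dns->question or return;
        printf "%s %s\n", $q->qname, $q->qtype;
    }

This also has the advantage of seeing queries even when named is too
busy to log them.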
Chris Buxton
Professional Services
Men & Mice