Michael Shirk wrote:

> On May 23, 2015 10:42, "Predrag Punosevac" <punoseva...@gmail.com>
> wrote:
> >
> > 5. Finally I am open for simpler ideas. Any opinions on
> sysutils/logfmon
> > Is it possible to visualize on the web output from logfmon?
> >
> > Best,
> > Predrag Punosevac
> >
> 
> There is another aspect to log analysis tools that bothers me the
> most: why must we risk system security to review log files?
> 
> Any of the tools that "work well" open you up to web vulnerabilities,
> or cost money in the case of Splunk. I have not had time to work on
> it, but I would like to create a tool that avoids all of the issues of
> running a web service or requiring java.
> 
> My interest is in UNIX system logs and IDS/IPS events, with full
> packet captures. The simplest form I have used is with automated
> processing of IDS events, firewall logs, and full pcap data as static
> files shared on a webserver. I would be interested in a CLI log viewer
> with ncurses, or scripted output (maybe using pipecut to process data
> as you search for what you want in the simplest UNIX way).
> 
> --
> Michael Shirk
> Daemon Security, Inc.
> http://www.daemon-security.com


I am resurrecting this old thread I started almost a year ago in an
attempt to learn how other OpenBSD users are managing their centralized
logging servers. I also wanted to revisit the issues raised by 
Mr. Shirk. 

Namely, the problem I am trying to solve seems very common. I am running
a centralized logging server (syslog-ng) on an OpenBSD host. This server
receives log files from my heterogeneous network consisting of OpenBSD
machines (running syslogd), Red Hat machines (rsyslog), and FreeBSD
machines running the FreeBSD version of syslogd. I noticed that sending
log files generates lots of traffic on my monitoring server, in part
because I am recording lots of noise like

last message repeated 10 times
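
The only stopgap I have come up with so far (and I am not claiming it is
the right approach) is to post-filter the noise out of the archived
files from cron, along the lines of the sketch below; the script name
and the pattern list are just placeholders:

    #!/usr/bin/env python3
    # drop-noise.py -- copy stdin to stdout, dropping obvious noise
    # lines; the patterns below are placeholders to be extended.
    import re
    import sys

    NOISE = [
        re.compile(r'last message repeated \d+ times'),
    ]

    for line in sys.stdin:
        if any(p.search(line) for p in NOISE):
            continue
        sys.stdout.write(line)

Of course that only trims what gets archived, not what travels over the
network, so it does not really address the traffic problem.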

The next problem is properly rotating, archiving, and deleting the
monthly directories containing the log files of all my servers. For
example, the directory

/var/log/syslog-ng/HOSTS/2016-05

contains the log files of all my servers for this month. That is not too
useful. Storing them per day would probably be better, but having fewer
log files covering just the important things would be better still.
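
For comparison, the interim idea I have been toying with is a nightly
cron job that compresses old files and expires old monthly directories,
roughly like this (the paths and retention periods are only examples):

    #!/usr/bin/env python3
    # expire-syslog-ng.py -- compress old files and drop old monthly
    # directories under the syslog-ng tree; paths and retention periods
    # below are only an example.
    import gzip
    import os
    import shutil
    import time

    BASE = '/var/log/syslog-ng/HOSTS'
    COMPRESS_AFTER = 2 * 86400   # gzip files untouched for two days
    KEEP_FOR = 90 * 86400        # drop monthly dirs after ~3 months
    now = time.time()

    for month in sorted(os.listdir(BASE)):        # e.g. 2016-05
        mdir = os.path.join(BASE, month)
        if not os.path.isdir(mdir):
            continue
        if now - os.path.getmtime(mdir) > KEEP_FOR:
            shutil.rmtree(mdir)                   # expire the whole month
            continue
        for name in os.listdir(mdir):
            path = os.path.join(mdir, name)
            if name.endswith('.gz') or not os.path.isfile(path):
                continue
            if now - os.path.getmtime(path) < COMPRESS_AFTER:
                continue                          # may still be written to
            dst = path + '.gz'
            with open(path, 'rb') as fin, gzip.open(dst, 'wb') as fout:
                shutil.copyfileobj(fin, fout)
            os.unlink(path)

I would much rather hear how people solve this properly, though.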

Log files are useless unless some kind of analytics is run on them.
I would like to be able to do real-time monitoring for anomalies using a
daemon. The following seem like obvious anomalies:

1. SMART errors (I am a big data/machine learning guy so I want to
replace failing HDDs in a timely fashion), even though the SMART daemon
already sends a separate e-mail

2. failing hardware (sensors, IPMI, mcelog)

3. firewall logs

4. IDS/IPS events 



A daemon should be able to send me an e-mail every couple of hours
containing as little noise as possible.
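
To make the requirement concrete, what I have in mind is something as
dumb as the sketch below, run from cron every few hours. The patterns,
paths, and addresses are placeholders (just my guesses at what the
relevant daemons log), and a real version would have to remember file
offsets instead of rescanning everything:

    #!/usr/bin/env python3
    # log-digest.py -- mail a short digest of "interesting" lines from
    # the central syslog-ng directory; meant to run from cron every few
    # hours. Paths, regexes, and addresses are placeholders.
    import glob
    import os
    import re
    import smtplib
    import time
    from email.message import EmailMessage

    LOGDIR = time.strftime('/var/log/syslog-ng/HOSTS/%Y-%m')
    PATTERNS = [
        re.compile(r'smartd.*(fail|prefail)', re.I),  # 1. SMART errors
        re.compile(r'mce|machine check|ipmi|sensor', re.I),  # 2. hardware
        re.compile(r'\bpf\b.*block', re.I),           # 3. firewall logs
        re.compile(r'snort|suricata', re.I),          # 4. IDS/IPS events
    ]

    hits = []
    for path in glob.glob(LOGDIR + '/*'):
        if not os.path.isfile(path):
            continue
        with open(path, errors='replace') as fh:
            for line in fh:
                if any(p.search(line) for p in PATTERNS):
                    hits.append(line.rstrip())

    if hits:
        msg = EmailMessage()
        msg['Subject'] = 'log digest: %d interesting lines' % len(hits)
        msg['From'] = 'logdigest@example.com'
        msg['To'] = 'admin@example.com'
        msg.set_content('\n'.join(hits[:500]))  # cap the size of the mail
        with smtplib.SMTP('localhost') as s:
            s.send_message(msg)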

So far I have found the following daemons in ports:

1. security/logsurfer (package exists only for i386 and I use amd64)

2. sysutils/logfmon (from looking at /etc/logfmon.conf it looks like it
is written to monitor log files on a single OpenBSD machine running
syslogd; I don't see how I could monitor entire syslog-ng directories)

3. I noticed that syslog daemons do not work very well with SQL
databases as storage backends. For example, LibreNMS has an interface
for displaying and searching syslog files (manually, which makes it
close to useless), and on top of that MariaDB has to be restarted quite
frequently.

4. I am not sure what to think of ELK anymore. The more I learn, the
less I like it.

5. Finally, I stumbled upon echofish

https://echothrust.github.io/echofish/

which seems to repeat the old pattern: using an SQL database as a
backend and providing a UI for searching messages (I can do that using
grep), but no e-mail notification when trouble is found.


What am I missing here? How do people monitor their log files in real
time? That would seem to be such an obvious topic for people who care
about security.

Predrag
