On Mon, 22 Aug 2016, 3:40 p.m. Thomas Güttler, <guettl...@thomas-guettler.de>
wrote:

>
>
> Am 19.08.2016 um 19:59 schrieb Andy Colson:
> > On 8/19/2016 2:32 AM, Thomas Güttler wrote:
> >> I want to store logs in a simple table.
> >>
> >> Here my columns:
> >>
> >>   Primary-key (auto generated)
> >>   timestamp
> >>   host
> >>   service-on-host
> >>   loglevel
> >>   msg
> >>   json (optional)
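Just to make that concrete, a rough sketch of what such a table could look
like, created from Python with psycopg2. The column types, the index and
the connection string are my assumptions, not a recommendation:

import psycopg2

# Rough sketch of the log table described above; types are assumptions.
DDL = """
CREATE TABLE IF NOT EXISTS logs (
    id        bigserial PRIMARY KEY,             -- auto-generated key
    ts        timestamptz NOT NULL DEFAULT now(),
    host      text NOT NULL,
    service   text NOT NULL,                     -- service-on-host
    loglevel  text NOT NULL,
    msg       text NOT NULL,
    extra     jsonb                              -- optional json payload
);
CREATE INDEX IF NOT EXISTS logs_ts_idx ON logs (ts);
"""

with psycopg2.connect("dbname=logs") as conn:    # DSN is a placeholder
    with conn.cursor() as cur:
        cur.execute(DDL)

At 200k rows per day you would probably also want to partition by month and
drop old partitions instead of deleting rows, but that is beyond this sketch.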
> >>
> >> I am unsure which DB to choose: Postgres, ElasticSearch or ...?
> >>
> >> We don't have high traffic. About 200k rows per day.
> >>
> >> My heart beats for postgres. We have been using it for several years.
> >>
> >> On the other hand, the sentence "Don't store logs in a DB" is
> >> stuck somewhere in my head ...
> >>
> >> What do you think?
> >>
> >>
> >>
> >
> > I played with ElasticSearch a little, mostly because I wanted to use
> > Kibana, which looks really pretty.  I dumped a ton of logs into it, and
> > made a pretty dashboard ... but in the end it didn't really help me, and
> > wasn't that useful.  My problem is, I don't want to have to go look at
> > it.  If something goes bad, then I want an email alert, at which point
> > I'm going to go run top, and tail the logs.
> >
> > Another problem I had with Kibana/ES is that the search syntax is
> > different from what I'm used to.  It made it hard to find things in
> > Kibana.
> >
> > Right now, I have a Perl script that reads the Apache logs and fires off
> > updates into PG to keep stats.  But it's an hourly summary, which the
> > website then queries to show pretty usage graphs.
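For what it's worth, that kind of hourly roll-up can be a single statement
fired from the script. A rough Python/psycopg2 sketch, where the access_log
and hourly_stats tables (and the unique constraint on hourly_stats.hour that
ON CONFLICT needs) are my assumptions:

import psycopg2

# Roll the previous hour of raw access_log rows up into hourly_stats.
ROLLUP = """
INSERT INTO hourly_stats (hour, hits)
SELECT date_trunc('hour', ts) AS hour, count(*) AS hits
FROM access_log
WHERE ts >= date_trunc('hour', now()) - interval '1 hour'
  AND ts <  date_trunc('hour', now())
GROUP BY 1
ON CONFLICT (hour) DO UPDATE SET hits = EXCLUDED.hits;
"""

with psycopg2.connect("dbname=weblogs") as conn:   # DSN is a placeholder
    with conn.cursor() as cur:
        cur.execute(ROLLUP)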
>
> You use Perl to read the Apache logs. Does this work reliably?
>
> Forwarding logs reliably is not easy. Logs are streams, but files in Unix
> are not streams. Sooner or later the files get rotated. RELP exists, but
> AFAIK its usage is not widespread:
>
>    https://en.wikipedia.org/wiki/Reliable_Event_Logging_Protocol
>
> Let's see how to get the logs into postgres ....
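If you do script it yourself, the rotation problem is usually handled by
re-opening the file when its inode changes. A very rough, Unix-only sketch
(the log path, table name, DSN and the non-parsing of the line are all
placeholders I made up; a real version would need batching, error handling
and a saved resume position):

import os
import time
import psycopg2

LOG_PATH = "/var/log/myapp/app.log"         # placeholder path

def follow(path):
    """Yield new lines forever, re-opening the file after rotation."""
    f = open(path, "r")
    inode = os.fstat(f.fileno()).st_ino
    while True:
        line = f.readline()
        if line:
            yield line.rstrip("\n")
            continue
        # No new data: has the file been rotated (inode changed)?
        try:
            if os.stat(path).st_ino != inode:
                f.close()
                f = open(path, "r")
                inode = os.fstat(f.fileno()).st_ino
                continue
        except FileNotFoundError:
            pass                             # rotation in progress
        time.sleep(1)

conn = psycopg2.connect("dbname=logs")       # DSN is a placeholder
conn.autocommit = True
with conn.cursor() as cur:
    for line in follow(LOG_PATH):
        cur.execute(
            "INSERT INTO logs (host, service, loglevel, msg) "
            "VALUES (%s, %s, %s, %s)",
            (os.uname().nodename, "myapp", "INFO", line),  # no real parsing
        )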
>
> > In the end, PG or ES, it all depends on what you want.
>
> Most of my logs start from an HTTP request. I want a unique id per request
> in every log line that gets created. This way I can trace the request,
> even if its impact spans several hosts and systems that do not receive
> HTTP requests themselves.
>
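That part is independent of the storage choice. A common pattern is to take
an incoming X-Request-ID header (or generate an id at the edge) and stamp it
on every log record. A minimal Python sketch using the standard logging
module, with the framework hook that would call set_request_id() left out
and assumed:

import contextvars
import logging
import uuid

# The id travels with the request through the call stack via a contextvar.
request_id = contextvars.ContextVar("request_id", default="-")

class RequestIdFilter(logging.Filter):
    def filter(self, record):
        record.request_id = request_id.get()
        return True

logging.basicConfig(
    format="%(asctime)s %(levelname)s [%(request_id)s] %(message)s",
    level=logging.INFO,
)
logging.getLogger().addFilter(RequestIdFilter())

def set_request_id(incoming=None):
    """Call once per request; reuse an incoming X-Request-ID so the same
    id shows up on every host the request touches."""
    request_id.set(incoming or uuid.uuid4().hex)

set_request_id()                       # e.g. in your request middleware
logging.info("handling request")       # this line now carries the id

Stored in its own column (or inside the json field), that id is what you
would index and filter on when tracing a request across hosts.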

You may decide not to use Elasticsearch, but take a look at the other
components of the Elastic Stack, like Logstash and Beats. They can be
helpful even when you use Postgres as the endpoint. Otherwise (IMHO) you
would spend a lot of time writing scripts and jobs to capture and stream
logs. If I were you, I would not want to do that.

> Regards,
>    Thomas Güttler
>
>
> --
> Thomas Guettler http://www.thomas-guettler.de/
>
>
>
-- 
Best Regards
Sameer Kumar | DB Solution Architect
ASHNIK PTE. LTD.

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350

Skype: sameer.ashnik | www.ashnik.com
