On 2024-12-15 21:57, Gerald Galster via Postfix-users wrote:
>
> By default journald keeps about 4 GB of logs, which will only retain a
> few hours on a busy server. One might try to overcome that by setting
[...]
> when you discover some needed log data is not available anymore ...
>
> Storing logs in an organized form like journald does, has its advantages
> but it's also a lot slower compared to grep on plain log files.
It also uses much more space, even on btrfs with compression enabled,
because the journal directory is marked chattr +C (no copy-on-write;
otherwise btrfs extent handling becomes the bottleneck).
I'm dealing with this problem ... with /etc/logrotate.d/journal:

    /var/log/journal/*/user-*@*.journal
    /var/log/journal/*/system@*.journal {
        olddir archive
        createolddir 750 root logs
        daily
        missingok
        nocreate
    }

[with rotate 1000 and compress defined globally]
which in turn requires decompressing the rotated files before
journalctl --file can read them.
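For illustration, a minimal sketch of that decompression step, assuming the
global "compress" means logrotate's default gzip, and using a dummy file in
place of a real rotated journal (the archive path and filename are invented):

```shell
# journalctl --file cannot read gzip-compressed rotated journals,
# so they have to be decompressed first.  Simulated on a dummy file:
mkdir -p archive
echo "fake journal payload" > archive/system@0001.journal
gzip archive/system@0001.journal         # what logrotate's "compress" does
gzip -dk archive/system@0001.journal.gz  # decompress, keeping the .gz around
cat archive/system@0001.journal
# On a real host you would then run:
#   journalctl --file archive/system@0001.journal
```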
And in general - if you have some predefined log retention policy,
especially required by law, there's no way to enforce it in journald.
You cannot expire log entries selectively (e.g. keep smtpd, remove
postscreen) unless you've set LogNamespace= in advance (for the entire
service).
But grepping text files? No, it's not faster, unless you know every
possible combination to grep for (or are looking for something trivial). There
are cursors in journalctl and selectors that leverage structured logs.
But first you need to have them structured...
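To illustrate the difference, here is a sketch using a made-up maillog
fragment (hostname, queue IDs and addresses are invented): the grep side has
to encode the exact pattern by hand, while journalctl can match on structured
fields; the journalctl lines are shown as comments since they need a live
journal to run against.

```shell
# Hypothetical maillog excerpt.
cat > maillog.sample <<'EOF'
Dec 15 21:57:01 mx1 postfix/smtpd[1234]: connect from example.org[192.0.2.10]
Dec 15 21:57:02 mx1 postfix/postscreen[1235]: CONNECT from [198.51.100.7]:42422
Dec 15 21:57:03 mx1 postfix/smtpd[1234]: disconnect from example.org[192.0.2.10]
EOF

# Plain-text approach: you must know the exact token to grep for.
grep ' postfix/smtpd\[' maillog.sample

# Structured approach: match on journal fields instead of guessed patterns,
#   journalctl SYSLOG_IDENTIFIER=postfix/smtpd --since "2024-12-15 21:00"
# and resume a previous query exactly where it left off with a cursor:
#   journalctl --after-cursor "$(cat last.cursor)" --show-cursor
```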
I'm using different approaches, including one system that parses log
entries, maps elements to structures using regexps, and stores them in
PostgreSQL with cstore_fdw (previously GreenplumDB with ~a hundred
partitions).
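A toy version of that parse-and-map step, reduced to a single sed regexp
emitting tab-separated columns ready for COPY ... FROM STDIN; the sample
line and the column layout (month, day, time, host, program, pid, message)
are invented for illustration:

```shell
# Map a syslog-format postfix line to TSV columns suitable for
# bulk-loading into PostgreSQL / cstore_fdw via COPY.
printf '%s\n' \
  'Dec 15 21:57:01 mx1 postfix/smtpd[1234]: connect from example.org[192.0.2.10]' |
sed -E 's|^([A-Z][a-z]{2}) +([0-9]+) ([0-9:]+) ([^[:space:]]+) ([^[:space:]]+)\[([0-9]+)\]: (.*)$|\1\t\2\t\3\t\4\t\5\t\6\t\7|'
```

A real parser needs more patterns per program (smtpd, qmgr, cleanup, ...),
which is exactly the regexp-to-structure mapping described above.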
Nowadays there are other solutions for storing such amounts of logs; I
wouldn't advise anyone to keep 4 GB / a few hours' worth in either the
journal or plain text files.
_______________________________________________
Postfix-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]