On 28/10/2015 22:53, Tim Chase wrote:
> On 2015-10-29 09:38, Chris Angelico wrote:
>> On Thu, Oct 29, 2015 at 9:30 AM, Marc Aymerich
>> <glicer...@gmail.com> wrote:
>>> I'm writing an application that saves historical state in a log
>>> file. I want to be really efficient in terms of bytes used.
>>
>> Why, exactly?
>>
>> By zipping the state, you make it utterly opaque.
>
> If it's only zipped, it's not opaque.  Just `zcat` or `zgrep` and
> process away.  The whole base64+minus_newlines thing does opaquify
> and doesn't really save all that much for the trouble.
>
>> Disk space is not expensive. Even if you manage to cut your file by
>> a factor of four (75% compression, which is entirely possible if
>> your content is plain text, but far from guaranteed)
>
> Though one also has to consider the speed of reading it off the drive
> for processing.  If you have spinning-rust drives, it's pretty slow
> (and SSD is still not like accessing RAM), and reading zipped
> content can shovel a LOT more data at your CPU than if it is coming
> off the drive uncompressed.  Logs aren't much good if they aren't
> being monitored and processed for the information they contain.  If
> nobody is monitoring the logs, just write them to /dev/null for 100%
> compression. ;-)
>
> -tkc
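To make the "zipped but not opaque" point concrete, here is a minimal sketch in Python using the stdlib gzip module (function names are illustrative, not from the original post): log lines are written through gzip, and reading them back requires no special handling because decompression is transparent. The same file also remains greppable from the shell with `zcat`/`zgrep`.

```python
import gzip

def append_states(path, states):
    # "at" = append, text mode; each appended batch becomes another
    # gzip member, which gzip.open reads back seamlessly
    with gzip.open(path, "at", encoding="utf-8") as f:
        for state in states:
            f.write(state + "\n")

def read_states(path):
    # gzip.open decompresses on the fly, so the processing code
    # never needs to know the log was compressed at all
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f]
```

From the shell, `zgrep 'event=' state.log.gz` works on the same file, which is the sense in which a plain-gzipped log stays transparent, unlike the base64-without-newlines variant.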


Can you get better than 100% compression if you write them to somewhere other than /dev/null?

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list
