On 30Oct2015 00:35, Marc Aymerich wrote:
> Usually I use my home router (which has an attached HDD) for
> downloading movies and stuff (big files) from the WAN... it has an
> 800MHz MIPS CPU... anyway my experience with it is that:
> rsync tops at ~400Kbps
Rsync is not a maximally efficient file trans
On Thu, Oct 29, 2015 at 11:57 PM, Laura Creighton wrote:
> In a message of Fri, 30 Oct 2015 09:47:42 +1100, Cameron Simpson writes:
>>Another post suggests that the OP is transferring log info in UDP packets and
>>hopes to keep the state within a maximum packet size, hence his desire for
>>compact representation.
On 29Oct2015 23:16, Laura Creighton wrote:
> In a message of Fri, 30 Oct 2015 08:28:07 +1100, Cameron Simpson writes:
>> On 29Oct2015 09:15, Laura Creighton wrote:
>>> Did the OP say he wanted to keep his compressed logfiles on a
>>> local disk? What if he wants to send them across the internet
>>> to some other machine and would like the transfer to happen as
>>> quickly as possible?
In a message of Fri, 30 Oct 2015 09:47:42 +1100, Cameron Simpson writes:
>Another post suggests that the OP is transferring log info in UDP packets and
>hopes to keep the state within a maximum packet size, hence his desire for
>compact representation. I suspect that personally I'd be going for s
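If that guess about UDP is right, the constraint would look something like the sketch below. It is purely illustrative: the payload budget, address, and state content are invented, not details from the thread.

    import socket
    import zlib
    from base64 import b64encode

    MAX_PAYLOAD = 1400  # assumed per-datagram budget, comfortably under a typical Ethernet MTU

    def send_state(state_bytes, addr, sock):
        """Compress the state and send it as a single UDP datagram if it fits."""
        payload = b64encode(zlib.compress(state_bytes, 9))  # one newline-free blob
        if len(payload) > MAX_PAYLOAD:
            raise ValueError("state too big for one datagram: %d bytes" % len(payload))
        sock.sendto(payload, addr)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_state(b"node=3 changes=[...]", ("127.0.0.1", 9999), sock)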
In a message of Fri, 30 Oct 2015 08:28:07 +1100, Cameron Simpson writes:
>On 29Oct2015 09:15, Laura Creighton wrote:
>>Did the OP say he wanted to keep his compressed logfiles on a
>>local disk? What if he wants to send them across the internet
>>to some other machine and would like the transfer to happen as
>>quickly as possible?
On 29Oct2015 09:15, Laura Creighton wrote:
> Did the OP say he wanted to keep his compressed logfiles on a
> local disk? What if he wants to send them across the internet
> to some other machine and would like the transfer to happen as
> quickly as possible?
Then he's still better off keeping them un
On Thu, Oct 29, 2015 at 11:52 AM, Chris Angelico wrote:
> On Thu, Oct 29, 2015 at 9:35 PM, Marc Aymerich wrote:
>> 1) Each node on the cluster needs to keep track of *all* the changes
>> that ever occurred. So far, each node is storing each change as
>> individual lines on a file (the "historical state log" I was referring
>> to, the concept is very similar t
On Thu, Oct 29, 2015 at 9:35 PM, Marc Aymerich wrote:
> 1) Each node on the cluster needs to keep track of *all* the changes
> that ever occurred. So far, each node is storing each change as
> individual lines on a file (the "historical state log" I was referring
> to, the concept is very similar t
On Wed, Oct 28, 2015 at 11:30 PM, Marc Aymerich wrote:
> Hi,
> I'm writing an application that saves historical state in a log file.
> I want to be really efficient in terms of used bytes.
>
> What I'm doing now is:
>
> 1) First use zlib.compress
> 2) And then remove all new lines using binascii.b2a_base64, so I have
> a log entry per line.
Did the OP say he wanted to keep his compressed logfiles on a
local disk? What if he wants to send them across the internet
to some other machine and would like the transfer to happen as
quickly as possible?
Laura
Hello Marc,
I think you have gotten quite a few answers already, but I'll add my
voice.
> I'm writing an application that saves historical state in a log
> file.
If I were in your shoes, I'd probably use the logging module rather
than saving state in my own log file. That allows the applic
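A minimal sketch of that suggestion, assuming a rotating file handler and an invented format string (neither is specified in the thread):

    import logging
    from logging.handlers import RotatingFileHandler

    # One line per state change; rotation keeps the file size bounded.
    handler = RotatingFileHandler("state.log", maxBytes=10 * 1024 * 1024, backupCount=5)
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))

    log = logging.getLogger("historical_state")
    log.setLevel(logging.INFO)
    log.addHandler(handler)

    log.info("node=3 change=added-user id=42")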
On 2015-10-29 00:21, Mark Lawrence wrote:
> On 28/10/2015 22:53, Tim Chase wrote:
>> If nobody is monitoring the logs, just write them to /dev/null
>> for 100% compression. ;-)
>
> Can you get better than 100% compression if you write them to
> somewhere other than /dev/null/ ?
Well, /dev/null is
On Thu, Oct 29, 2015 at 12:09 PM, Cameron Simpson wrote:
> On 29Oct2015 11:39, Chris Angelico wrote:
>>>
>>> If it's only zipped, it's not opaque. Just `zcat` or `zgrep` and
>>> process away. The whole base64+minus_newlines thing does opaquify
>>> and doesn't really save all that much for the trouble.
On 29Oct2015 11:39, Chris Angelico wrote:
> If it's only zipped, it's not opaque. Just `zcat` or `zgrep` and
> process away. The whole base64+minus_newlines thing does opaquify
> and doesn't really save all that much for the trouble.
If you zip the whole file as a whole, yes. If you zip individual
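A sketch of the contrast being drawn here (file names and record format invented): a log appended through the gzip module stays readable with zcat/zgrep, while per-entry zlib+base64 lines do not.

    import base64
    import gzip
    import zlib

    record = b"2015-10-29 node=3 change=added-user id=42\n"

    # Whole-file compression: each append adds a new gzip member, and
    # zcat/zgrep read concatenated members transparently, so the log
    # remains greppable with standard tools.
    with gzip.open("state.log.gz", "ab") as f:
        f.write(record)

    # Per-entry compression: every line becomes an opaque zlib+base64 blob
    # that only a custom reader can interpret.
    with open("state.log", "ab") as f:
        f.write(base64.b64encode(zlib.compress(record)) + b"\n")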
On Thu, Oct 29, 2015 at 11:21 AM, Mark Lawrence wrote:
>> Though one also has to consider the speed of reading it off the drive
>> for processing. If you have spinning-rust drives, it's pretty slow
>> (and SSD is still not like accessing RAM), and reading zipped
>> content can shovel a LOT more d
On Thu, Oct 29, 2015 at 9:53 AM, Tim Chase wrote:
> On 2015-10-29 09:38, Chris Angelico wrote:
>> On Thu, Oct 29, 2015 at 9:30 AM, Marc Aymerich wrote:
>> > I'm writing an application that saves historical state in a log
>> > file. I want to be really efficient in terms of used bytes.
>>
>> Why, exactly?
On 28/10/2015 22:53, Tim Chase wrote:
> On 2015-10-29 09:38, Chris Angelico wrote:
>> On Thu, Oct 29, 2015 at 9:30 AM, Marc Aymerich wrote:
>>> I'm writing an application that saves historical state in a log
>>> file. I want to be really efficient in terms of used bytes.
>>
>> Why, exactly?
>>
>> By zipping the state, you make it utterly opaque.
On 2015-10-29 09:38, Chris Angelico wrote:
> On Thu, Oct 29, 2015 at 9:30 AM, Marc Aymerich wrote:
> > I'm writing an application that saves historical state in a log
> > file. I want to be really efficient in terms of used bytes.
>
> Why, exactly?
>
> By zipping the state, you make it utterly opaque.
On Thu, Oct 29, 2015 at 9:30 AM, Marc Aymerich wrote:
> I'm writing an application that saves historical state in a log file.
> I want to be really efficient in terms of used bytes.
Why, exactly?
By zipping the state, you make it utterly opaque. It'll require some
sort of tool to tease it apart
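That "sort of tool" would be something like this hypothetical reader for the format described in the original post (zlib output wrapped in base64, one entry per line); the file name is invented:

    import binascii
    import zlib

    # Decode a log where each line is zlib-compressed state wrapped in base64.
    with open("state.log", "rb") as f:
        for line in f:
            print(zlib.decompress(binascii.a2b_base64(line)))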
Hi,
I'm writing an application that saves historical state in a log file.
I want to be really efficient in terms of used bytes.
What I'm doing now is:
1) First use zlib.compress
2) And then remove all new lines using binascii.b2a_base64, so I have
a log entry per line.
but b2a_base64 is far fro
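For reference, the two steps above amount to roughly this (the entry content is invented). Base64 expands every 3 bytes of compressed output into 4 bytes, a fixed overhead of about 33%, which is presumably the inefficiency being complained about.

    import binascii
    import zlib

    def encode_entry(state_bytes):
        """Compress one entry and flatten it to a single newline-free line."""
        compressed = zlib.compress(state_bytes, 9)
        return binascii.b2a_base64(compressed)  # no internal newlines, one trailing b"\n"

    with open("state.log", "ab") as f:
        f.write(encode_entry(b"node=3 change=added-user id=42"))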