Max,

On 23.07.2013 at 17:27, Max Leske <maxle...@gmail.com> wrote:

> 
> On 23.07.2013, at 15:32, Mariano Martinez Peck <marianop...@gmail.com> wrote:
> 
>> 
>> 
>> 
>> On Tue, Jul 23, 2013 at 9:48 AM, Norbert Hartl <norb...@hartl.name> wrote:
>> Mariano,
>> 
>> On 23.07.2013 at 14:43, Mariano Martinez Peck <marianop...@gmail.com> wrote:
>> 
>>> Norbert, does the model contain lots of strings? If so, you could try 
>>> using the compression; it may increase speed. There is always a trade-off 
>>> against the additional time spent compressing and uncompressing. But if 
>>> the model contains a substantial number of strings, it could end up 
>>> faster, since writing to disk is also slow and the final stream would be 
>>> smaller. 
>>> 
>> thanks. Yes, the model consists of a lot of Dictionaries with string keys, 
>> and there are many more strings besides. The usage pattern of the model is 
>> write once, read many, and 30 ms is good enough for me. So I might try 
>> compression just to see the difference, and maybe to reduce storage size: 
>> the mcz is currently 261 kB. 
>> 
>> Thanks, I'll take a look, but I am already satisfied with the current approach. 
>> 
>> 
>> Hmmm, I thought it was ready; sorry, my bad. It seems it was neither 
>> finished nor integrated: https://code.google.com/p/fuel/issues/detail?id=160
>> But I might be wrong. 
>> Max?
> 
> No, that issue is still open. You're welcome to use the experimental version 
> though (Fuel-MaxLeske.757.mcz in http://www.squeaksource.com/FuelExperiments) 
> but keep in mind that this is a few months behind stable.
> 
I'm not sure the priority is high enough for me to try it.

> The other idea, however, the one Mariano describes in the linked issue, is 
> available but probably not desirable: simply pass a compressing stream to 
> Fuel (see FLGZippedBasicSerializationTest for an example of how to do that). 
> That will compress *all* data, not only strings (string-only compression 
> being the idea outlined in the linked issue).
> 
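For concreteness, a rough sketch of what that would look like (untested; it 
assumes the GZipWriteStream / GZipReadStream classes that ship with the image 
and the same FLSerializer / FLMaterializer API used elsewhere in this thread):

    | gzip compressed |
    "Let Fuel write straight into a compressing stream."
    gzip := GZipWriteStream on: ByteArray new.
    FLSerializer serialize: TCAP current model on: gzip.
    gzip close.
    compressed := gzip contents.
    "Decompress first, then materialize from the plain bytes."
    FLMaterializer materializeFromByteArray:
        (GZipReadStream on: compressed) upToEnd
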
I don't think that will gain anything. A Monticello package is already written 
as a zip archive, so I wouldn't expect much from an additional round of 
compression.
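
Whether gzip would shave anything off the raw Fuel bytes themselves is easy to 
measure, though. A rough sketch (untested, reusing the serialization snippet 
from the quoted mail below):

    | raw gzip |
    raw := ByteArray streamContents: [ :stream |
        FLSerializer serialize: TCAP current model on: stream ].
    gzip := GZipWriteStream on: ByteArray new.
    gzip nextPutAll: raw; close.
    "Ratio of compressed to raw size; a value close to 1 means little gain."
    gzip contents size / raw size

Inside the mcz, any gain would largely be absorbed by the archive's own zip 
pass anyway.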

thanks,

Norbert

> Cheers,
> Max
> 
> (Note: I don't read the users list so please put me on the CC explicitly if 
> you want me to read on)
> 
>> 
>> Cheers, 
>> 
>>  
>> Norbert
>> 
>>> Cheers, 
>>> 
>>> 
>>> On Tue, Jul 23, 2013 at 9:40 AM, Norbert Hartl <norb...@hartl.name> wrote:
>>> 
>>> On 23.07.2013 at 14:23, "jtuc...@objektfabrik.de" 
>>> <jtuc...@objektfabrik.de> wrote:
>>> 
>>> > Norbert,
>>> >
>>> >> On 23.07.13 at 14:07, Norbert Hartl wrote:
>>> >> Joachim,
>>> >>
>>> >> I'm aware that I can do the serialization/deserialization myself. I was 
>>> >> asking for something to help along the way, because I have quite a few 
>>> >> model classes to cover and want a more generalized way of doing it. As 
>>> >> so often, I really think about things _after_ I've sent the mail :)
>>> >
>>> > We seem to have a lot in common here. Look at it this way: sending the 
>>> > mail made you think again, so the mail itself was anything but useless.
>>> >> I think I have two options:
>>> >>
>>> >> 1. The most generic one: serialize with Fuel and compile the resulting 
>>> >> ByteArray into a method.
>>> >> 2. The most readable one: serialize with STON and compile the STON 
>>> >> string into a method.
>>> >>
>>> >> Maybe I should try both.
>>> > I'd be very interested in what you find. I guess Fuel is smaller and 
>>> > maybe even faster. STON, OTOH, is much friendlier for comparing versions 
>>> > and such.
>>> >
>>> Well, I can't really read the method, but then I don't find ByteArrays 
>>> that interesting anyway. The actual size of the serialized byte array,
>>> 
>>> (ByteArray streamContents: [ :stream |
>>>     FLSerializer serialize: TCAP current model on: stream ]) size
>>> 
>>> is 237445 bytes. Serializing
>>> 
[ MAPExampleModels class compile: 'tcap
>>>     ^ ', (ByteArray streamContents: [ :stream |
>>>         FLSerializer serialize: TCAP current model on: stream ]) 
>>>             storeString ] timeToRun
>>> 
>>> took 647 ms
>>> 
>>> but
>>> 
>>> [ FLMaterializer materializeFromByteArray: MAPExampleModels tcap ] timeToRun
>>> 
>>> took only 30 ms. Basically, I'm not fond of putting too much binary stuff 
>>> into compiled methods, but it is the easiest and most reliable way of 
>>> providing tests with test data. The whole grammar is stored in methods as 
>>> well. That is text, but…
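>>> 
>>> For comparison, option 2 from above (STON) would be something like the 
>>> following sketch (untested; STON toString: / fromString: are the standard 
>>> entry points, and the selector #tcapSton is just a made-up name):
>>> 
>>> "Compile the STON string into a method; readable and diff-friendly."
>>> MAPExampleModels class compile: 'tcapSton
>>>     ^ ', (STON toString: TCAP current model) storeString.
>>> 
>>> "Read it back."
>>> STON fromString: MAPExampleModels tcapSton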
>>> 
>>> I think I'll stay with this approach for a while.
>>> 
>>>  Norbert
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> -- 
>>> Mariano
>>> http://marianopeck.wordpress.com
>> 
>> 
>> 
>> 
>> -- 
>> Mariano
>> http://marianopeck.wordpress.com
> 
