Just when I found a way to reduce the total size of the changes file, I
noticed the latest versions in the repository don't have this issue.
What a good way of wasting one's time learning about the guts of
Monticello :)

But just for the record:
The problem happened after installing an MCMethodDefinition
(#dropTables:) into GlorpSession. I logged every single method
installation, and the point where things went bananas was right after
installing that culprit method:

After a MCMethodDefinition(dropTables):      1201686 bytes (~1.2 MB)
After a MCMethodDefinition(dropTables:):   135419895 bytes (~135 MB)
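
A sketch of the kind of per-method logging described above: print the
changes-file size after every method installation. The announcement
class (MethodAdded) and the FileSystem calls (Smalltalk changesName,
#asFileReference, #size) are assumptions about the Pharo 3 API, not
taken from this thread:

```
SystemAnnouncer uniqueInstance
    when: MethodAdded
    do: [ :announcement |
        "Log which installation just happened and the resulting file size."
        Transcript
            show: announcement printString , ' -> ' ,
                Smalltalk changesName asFileReference size printString , ' bytes';
            cr ].
```

With something like this in place, the jump from ~1.2 MB to ~135 MB shows
up immediately after the offending method.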

I did a fileOut, deleted the method, and then filed it in again; after
versioning (and reloading), the changes file increased only ~1 MB after
installing Glorp.

I didn't find anything particularly suspicious about it, not in the
source code nor in the bytecodes.

Now, after running PharoChangesCondenser condense, the changes file
effectively shrinks.
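
The condense step, plus a before/after size check, could look like this.
PharoChangesCondenser condense is the expression mentioned above; the
size check via Smalltalk changesName and #asFileReference is an
assumption about Pharo 3's FileSystem API:

```
| changes before after |
changes := Smalltalk changesName asFileReference.
before := changes size.          "size in bytes before condensing"
PharoChangesCondenser condense.  "rewrites the changes file, keeping one version per method"
after := changes size.           "re-reads the size from disk"
Transcript
    show: before printString , ' -> ' , after printString , ' bytes';
    cr.
```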

Regards!


Esteban A. Maringolo


2014-10-28 12:31 GMT-03:00 Sven Van Caekenberghe <s...@stfx.eu>:
>
>> On 28 Oct 2014, at 14:58, Esteban A. Maringolo <emaring...@gmail.com> wrote:
>>
>> 2014-10-28 5:41 GMT-03:00 Sven Van Caekenberghe <s...@stfx.eu>:
>>> Esteban,
>>>
>>> The Reddit example's CI Job 
>>> (https://ci.inria.fr/pharo-contribution/job/Reddit/) also loads Seaside and 
>>> Glorp (two big packages) in Pharo 3 and it too results in a 110 Mb image 
>>> and 142 Mb changes file. After condensing, that goes to 276 Mb !
>>
>>> I would say something is wrong here ;-)
>>
>> Either something is wrong, or we feature it and rename
>> PharoChangesCondenser>>#condense to PharoChangesExpander>>#expand :)
>>
>>> Note that condensing changes on newly loaded code should not make much 
>>> difference (in essence, all multiple versions of methods are reduced to 1).
>>
>>> I think we should create some issues out of this.
>>
>> This is true; however, I find a changes file of 140 megs to be MASSIVE.
>
> I totally agree, this is unacceptable. However, I did some tests, and the 
> problem is with Glorp (or one of its sub packages). I tried building both my 
> Reddit and HP35 examples from scratch in Pharo 3 on Linux.
>
> $ ./pharo reddit.image config 
> http://smalltalkhub.com/mc/SvenVanCaekenberghe/Reddit/main 
> ConfigurationOfReddit --install=stable
>
> $ ./pharo hp35.image config 
> http://smalltalkhub.com/mc/SvenVanCaekenberghe/HP35/main ConfigurationOfHP35 
> --install=stable --group=Web-UI
>
> Both load Seaside and some other stuff, but only Reddit loads Glorp and 
> PostgresV2.
>
> I did a condense changes on the HP35 image; the one on the Reddit image ran 
> out of physical memory (granted, it was a small machine)!
>
> Here are the resulting sizes:
>
> $ ls -lah
> total 337M
> drwxr-xr-x  4 root root 4.0K Oct 28 15:09 .
> drwx------ 12 root root 4.0K Oct 28 12:28 ..
> -rw-r--r--  1 root root 5.8M Oct 28 13:17 hp35.changes
> -rw-r--r--  1 root root 5.6M Oct 28 14:38 hp35-condense-test.changes
> -rw-r--r--  1 root root 5.8M Oct 28 14:27 hp35-condense-test.changes.bak
> -rw-r--r--  1 root root  29M Oct 28 14:38 hp35-condense-test.image
> -rw-r--r--  1 root root  29M Oct 28 13:17 hp35.image
> drwxr-xr-x  2 root root 4.0K Oct 28 13:12 package-cache
> -rwxr-xr-x  1 root root  367 Oct 28 12:29 pharo
> -rw-rw-r--  1 root root 265K Oct 24 14:42 Pharo.changes
> -rw-rw-r--  1 root root  21M Oct 24 14:42 Pharo.image
> -rwxr-xr-x  1 root root  354 Oct 28 12:29 pharo-ui
> drwxr-xr-x  3 root root 4.0K Oct 28 12:29 pharo-vm
> -rw-r--r--  1 root root 136M Oct 28 12:48 reddit.changes
> -rw-r--r--  1 root root 105M Oct 28 12:48 reddit.image
>
> Note how the HP35 image and changes sizes are totally acceptable/normal, 
> while the Reddit one explodes, with the difference being Glorp.
>
> But I have no idea what causes this.
>
> There is no way Glorp can generate 100 MB of changes when Seaside is OK with 6 
> MB.
>
>> I remember loading ~4000 classes in the order of 10^6 LOC in Dolphin, and
>> changes never got half that size. Sizes were ~28/55MB image/changes.
>> The same in VAST, any image beyond the 20MB was an alert of something
>> being leaked.
>>
>>> Hmm, we need more tests and data points, I will try on Linux command line 
>>> later on.
>>
>> I ran it in Linux (Ubuntu) through the command line.
>>
>>
>> Regards!
>>
>
>
