On Tue, Aug 27, 2013 at 12:50 PM, Henrik Lindberg <
[email protected]> wrote:

>
> This is a very interesting discussion.
>
> In Geppetto the approach is to parse everything up-front. This is not the
> same as what is typically referred to as parsing in the puppet community,
> which seems to also involve evaluation and linking. We often talk about
> "parse order" when we actually mean "evaluation order".
>
>
Yes, I find this confusion of terminology makes it really hard to talk
accurately about parts of the system.


> Parsing is quite straightforward: simply turn the DSL source into an AST
> and remember it. It is the linking (and evaluation) that is tricky,
> especially when changes can take place to files mid-transaction.
>
> In Geppetto it is not feasible to keep all information about all files in
> memory all the time, so a technique is used to populate an index with
> references to the source positions where referable elements are located;
> this index is used during linking. A graph of all dependencies is created
> in a way that makes it possible to compute whether a change to a "name =>
> element" mapping will affect other resolutions (it was missing and now
> exists; it existed and was resolved, but should now resolve differently).
>
>
Right. Geppetto encounters many of the same problems that the puppet master
deals with right now, because the files could be changing at any time and it
needs to track that. I suspect Geppetto also shares a problem with other
Eclipse editors: if you change the file on disk via another mechanism (edit
in vim, for example) it kinda "freaks out", as it were. I think that has to
do with the caching that Eclipse does in order to avoid hitting the disk as
often.
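To make the index idea above concrete, here is a toy sketch (not Geppetto's actual implementation; all class and method names are hypothetical) of an index that maps referable names to source positions instead of holding full ASTs in memory, plus the check for whether a "name => element" change invalidates other resolutions:

```ruby
# A referable element is recorded only as a (file, offset) position.
SourceRef = Struct.new(:file, :offset)

class ExportIndex
  def initialize
    @refs = {} # name => SourceRef
  end

  # Record where a referable element (class, define, ...) is declared.
  def add(name, file, offset)
    @refs[name] = SourceRef.new(file, offset)
  end

  def remove(name)
    @refs.delete(name)
  end

  # Linking resolves a name to a source position; the AST can then be
  # re-read on demand rather than kept in memory.
  def resolve(name)
    @refs[name]
  end

  # A change affects other resolutions when the name was missing and now
  # exists, or existed but should now resolve to a different position.
  def change_invalidates?(name, new_ref)
    old = @refs[name]
    old.nil? || old != new_ref
  end
end
```

The point of the sketch is the memory trade-off: linking only needs to know *where* a name is defined, so the heavy AST can be dropped after indexing.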


> The build system that kicks in on any change makes use of the dependency
> information to link the resulting AST in the correct order.
> Sometimes it is required for the builder to make an extra pass due to lack
> of information (the dependencies were not yet computed), or when there is
> circularity.
>
> Puppet is a very tricky language to link since many of the links can be
> dynamic (not known until evaluation takes place). You can see this in
> Geppetto; sometimes you have to do a "build clean" because the interactive
> state does not know that issues were resolved. Also, in many cases it is
> not possible to validate/link these constructs at all.
>
>
Yes, the evaluation of the puppet language is not very static. There is a
lot that is determined dynamically, but the AST itself is mostly static. I
say mostly because there is always the possibility that a custom function
manipulates the known_resource_types at runtime, but I really hope nothing
is doing that.
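The relink-in-dependency-order behavior Henrik describes, including the extra pass when information is missing or there is circularity, can be sketched roughly like this (an illustrative toy, not the Geppetto builder's code):

```ruby
# deps: { file => [files it depends on] }
# Returns [ordered, remaining]: files linkable in dependency order, and
# files left over (unresolved dependencies or a cycle) that need the
# builder's extra pass.
def link_order(deps)
  ordered = []
  remaining = deps.keys
  until remaining.empty?
    # A file is ready when everything it depends on is already linked.
    ready = remaining.select { |f| (deps[f] - ordered).empty? }
    break if ready.empty? # circularity, or dependencies not yet computed
    ordered.concat(ready)
    remaining -= ready
  end
  [ordered, remaining]
end
```

This is just a topological sort with the leftover set made explicit; the "extra pass" falls out naturally as whatever remains when no file is ready.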


> In Geppetto, the majority of the computing time is to validate the links.
>
> With all of that said: in a Puppet master we could parse all files to AST,
> but when we start evaluation we start over from the beginning every time
> (check whether a file is stale and, if so, reparse it). Holding more state
> than that in memory is very problematic (esp. when environments are
> involved).
>
>
Yes, and getting that state correct has been the cause of a lot of problems
and confusion :(
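The "start over every time" model above amounts to an mtime-keyed cache: on each request, check whether a file is stale and reparse only if so. A minimal sketch, with a placeholder parse step and hypothetical names (not the master's actual code):

```ruby
class AstCache
  Entry = Struct.new(:mtime, :ast)

  # The block is the parse step: source text in, AST (here any value) out.
  def initialize(&parser)
    @parser = parser
    @entries = {} # path => Entry
  end

  def fetch(path)
    mtime = File.mtime(path)
    entry = @entries[path]
    if entry.nil? || entry.mtime != mtime # stale (or never seen): reparse
      entry = Entry.new(mtime, @parser.call(File.read(path)))
      @entries[path] = entry
    end
    entry.ast
  end
end
```

Even this simple scheme shows where the state bugs come from: the cache is only correct if every code path that reads a file goes through the same staleness check, which gets much harder once environments multiply the caches.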


> I was talking with Eric Dalén, and he said they tried running the master
> in a way where each catalog request got a fresh process. If I understood
> him correctly, this did not have much (if any) negative impact on
> performance! I can think of many reasons why: Ruby is not the best at
> garbage collection; when things are done in a virgin state there are fewer
> things to check (poor cache implementations are sometimes worse than no
> cache); and memory is better organized. I can also imagine that speedups
> that were measured in the past may not be as relevant today, when disks
> are much faster (SSDs even) and with speed improvements in the Ruby
> runtime. It is high time to measure again where the bottlenecks are.
>
> So - Andy, I am very much looking forward to hearing about the
> measurements you are doing on parse time. Are you running both the old and
> new parsers, on both 1.8.7 and 1.9.3?
>

I'm not doing the measurements. I'm trying to crowd-source them :) Adding
which Ruby version is in use would also be interesting, but I didn't think
of adding that to my little measurement script.
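For what it's worth, a measurement script of that shape might look roughly like this. This is only a guess at the idea, not the actual script; the parse step is a placeholder block, and the RUBY_VERSION field is the addition discussed above rather than something the original script captured:

```ruby
require 'benchmark'

# Time a parse-only pass over a set of manifest files and report the
# Ruby version alongside the result.
def measure_parse(files, &parse)
  elapsed = Benchmark.realtime do
    files.each { |f| parse.call(File.read(f)) }
  end
  { ruby: RUBY_VERSION, files: files.size, seconds: elapsed }
end
```

Running it against a site's manifests with the old and the new parser plugged in as the block would give comparable numbers per Ruby version.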


>
> Regards
> - henrik



-- 
Andrew Parker
[email protected]
Freenode: zaphod42
Twitter: @aparker42
Software Developer

*Join us at PuppetConf 2014, September 23-24 in San Francisco*

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/puppet-dev.
For more options, visit https://groups.google.com/groups/opt_out.
