#memoized is one of the most effective but also hardest optimizations to
apply. It cannot be applied efficiently in an automated way; it depends on
the input. The best approach is to identify repeated invocations of the
same parser combinator at the same position for a typical input. PP2 has
tooling support for this; I wrote a chapter about #memoized in PP2 [1].
PP2 does a poor man's version of memoization (based on grammar analysis)
automatically, simply by calling #optimize.
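For illustration, a minimal sketch (PP2 in Pharo; the grammar is made up
and selector names are from memory, so the exact API may differ):

    | number sum |
    number := #digit asPParser plus memoized.
    "When the first alternative consumes the digits and then fails on the
     missing '+', the choice backtracks and re-invokes number at the same
     position - the memo turns that second invocation into a cache hit.
     Memoizing the root instead would not help here."
    sum := (number , $+ asPParser , number) / number.
    sum parse: '42'.
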

If really needed, provide me with the parser and input, and I can check
and suggest optimizations.

There should be no fundamental issue with porting PP2 to VW. As far as I
know, there is an automated tool to do so, right? On the other hand, PP is
stable and does not change, while PP2 is maintained and updated from time
to time (mostly adding optimizations), so there might be an overhead of
keeping the VW port in sync with PP2.

Cheers,
Jan

[1]:
https://kursjan.github.io/petitparser2/pillar-book/build/Chapters/memoization.html

On Fri, Oct 5, 2018, 13:26 Steffen Märcker <merk...@web.de> wrote:

> Hi Doru!
>
> > I assume that you tried the original PetitParser. PetitParser2 offers
> > the possibility to optimize the parser (kind of a compilation), and
> this
> > provides a significant speedup:
> > https://github.com/kursjan/petitparser2
> >
> > Would you be interested in trying this out?
>
> Yes, I'd like to give this a shot, too. However, as far as I know, PP2 is
> only available for Pharo and not VW, is it?
>
> Speaking of optimizations, I also tried memoizing the petit parser.
> However, the times got worse instead of better. Is there a rule of thumb
> where to apply #memoized in a sensible way? As far as I understand,
> applying it to the root parser does not memoize subsequent parsers, does
> it?
>
> Kind regards, Steffen
>
>
