Alex! I just pulled the changes you made and rebuilt the Falcon code base
and parsers.

I just ran my asdoc tool on the flex-asjs repo; I think another screenshot
is in order. :)

http://snag.gy/TDXfe.jpg

Mike

On Fri, Jun 5, 2015 at 3:06 PM, Alex Harui <aha...@adobe.com> wrote:

> I just pushed a change that seems to get ASDoc working.  There was a rule
> that looked like it tried to eliminate double-spaces, and somehow the
> lexer ended up in that rule at the end of the string instead of
> recognizing it was done.
>
> I have no idea what that rule was for, since the output currently seems
> to capture line-feeds in the asdoc, so whitespace in general probably
> needs to be trimmed before turning it into its final form.
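
For reference, a minimal sketch of the kind of whitespace trimming being
described; the method name normalizeASDocText is only an illustration, not
an existing Falcon API:

    // Illustrative only: collapse runs of whitespace (including the
    // captured line-feeds) into single spaces and trim the ends before
    // the asdoc text is turned into its final form.
    public static String normalizeASDocText(String raw)
    {
        if (raw == null)
            return "";
        return raw.replaceAll("\\s+", " ").trim();
    }
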
>
> Let me know if you think that lexer rule was important and we can revisit
> why we get stuck there.
>
> -Alex
>
> On 6/4/15, 12:13 PM, "Michael Schmalle" <teotigraphix...@gmail.com> wrote:
>
> >Yeah, sorry to confuse you. The Velocity stuff doesn't matter; for that
> >matter, neither does the asdoc framework I wrote (it was just showing I
> >had all this working in the context of Falcon). We just need the
> >ASDocTokenizer to tokenize the comment data given to the ASDocDelegate.
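
To make that concrete, here is a minimal sketch of the hand-off, using only
the tokenizer calls that appear in the compile() code further down the
thread; the method name dumpComment and the idea of printing each token are
just for illustration:

    // Sketch: tokenize one raw comment string handed over by the delegate.
    // Uses java.io.StringReader; the tokenizer calls mirror the compile()
    // method quoted below.
    void dumpComment(String data)
    {
        ASDocTokenizer tokenizer = new ASDocTokenizer(false);
        tokenizer.setReader(new StringReader(data));
        ASDocToken tok = tokenizer.next();
        while (tok != null)
        {
            System.out.println(tok.getType() + " -> " + tok.getText());
            tok = tokenizer.next();   // the call that was hanging at the end of the comment
        }
    }
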
> >
> >Mike
> >
> >On Thu, Jun 4, 2015 at 3:12 PM, Michael Schmalle
> ><teotigraphix...@gmail.com>
> >wrote:
> >
> >> The way I did it was exactly what you did: I implemented the
> >> ASDocDelegate and saved the tokens as it parsed all the files.
> >>
> >> Then, like you, I fed the token's String into the ASDocTokenizer and
> >> parsed it with the loop I showed you above.
> >>
> >> I would add the DocTag and related pieces once you can get the
> >> ASDocTokenizer working the way I have it in the code above. I already
> >> wrote an API for easy access to the tags and comment in an ASDocComment
> >> class that has a list of DocTags.
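
As a rough sketch of the shape that API could take (the fields and method
names here are illustrative; the real ASDocComment and DocTag classes may
well differ):

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative shape only -- not necessarily the actual API.
    class DocTag
    {
        private final String name;      // tag name without the '@', e.g. "param"
        private String description;

        DocTag(String name) { this.name = name; }

        String getName() { return name; }
        String getDescription() { return description; }
        void setDescription(String description) { this.description = description; }
    }

    class ASDocComment
    {
        private String description;     // the text before the first tag
        private final List<DocTag> tags = new ArrayList<DocTag>();

        String getDescription() { return description; }
        void setDescription(String description) { this.description = description; }
        List<DocTag> getTags() { return tags; }
        void addTag(DocTag tag) { tags.add(tag); }
    }
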
> >>
> >> Mike
> >>
> >> On Thu, Jun 4, 2015 at 3:09 PM, Alex Harui <aha...@adobe.com> wrote:
> >>
> >>> For this exercise though, we don’t care about the output as Velocity
> >>> or XSL, right?  All you want is ASDocTokens in the AST?  IIRC, in
> >>> Falcon you retrieve ASDoc comments via node.getASDocComment() and get
> >>> an ASDocComment instance.  Do you want the Token to be the root of a
> >>> mini-tree of parsed nodes?
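
For context, the retrieval being described looks roughly like this; the
node variable and its type are placeholders, and only getASDocComment() is
taken from the description above:

    // Sketch: pull the ASDocComment off a documentable node, then hand its
    // raw text to the ASDocTokenizer as in the compile() loop quoted below.
    ASDocComment comment = node.getASDocComment();   // node type assumed
    if (comment != null)
    {
        // tokenize the raw /** ... */ text here
    }
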
> >>>
> >>> -Alex
> >>>
> >>>
> >>> On 6/4/15, 11:47 AM, "Michael Schmalle" <teotigraphix...@gmail.com>
> >>> wrote:
> >>>
> >>> >I actually wrote a WHOLE NEW asdoc program that uses Apache Velocity
> >>> >templates instead of XSL.
> >>> >
> >>> >That DocTag is my class.
> >>> >
> >>> >Mike
> >>> >
> >>> >On Thu, Jun 4, 2015 at 2:45 PM, Alex Harui <aha...@adobe.com> wrote:
> >>> >
> >>> >> I don’t see any signs of ASDoc support in flex-falcon.  I see
> >>> >> ASDocTokenizer and ASDocToken, but no ASDOC.java that would be
> >>> >> equivalent to MXMLC.java and have a main() method.  The current Flex
> >>> >> SDK has an ASDoc.jar.  Shouldn’t we have these pieces?  Do you have
> >>> >> them around somewhere?  Otherwise I will try to quickly create them.
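
A bare-bones sketch of the kind of entry point being asked for; the class
name, method names, and structure here are hypothetical, not what
MXMLC.java actually does:

    // Hypothetical ASDOC.java entry point -- a placeholder shape only.
    public class ASDOC
    {
        public static void main(String[] args)
        {
            // 1. parse command-line options (source path, library path, output dir)
            // 2. run the Falcon front end so ASDoc comments get collected
            // 3. walk the definitions and emit the documentation output
            System.exit(new ASDOC().run(args));
        }

        private int run(String[] args)
        {
            // placeholder: the real tool would drive the compiler here
            return 0;
        }
    }
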
> >>> >>
> >>> >> -Alex
> >>> >>
> >>> >> On 6/4/15, 11:36 AM, "Michael Schmalle" <teotigraphix...@gmail.com>
> >>> >>wrote:
> >>> >>
> >>> >> >BTW, the loop always happens at the VERY end of the comment, so
> >>> >> >when you get to the end (on the last call of next(), which should
> >>> >> >return null):
> >>> >> >
> >>> >> >tok = tokenizer.next();
> >>> >> >
> >>> >> >never returns; it gets stuck trying to exit.
> >>> >> >
> >>> >> >Mike
> >>> >> >
> >>> >> >On Thu, Jun 4, 2015 at 2:34 PM, Michael Schmalle
> >>> >> ><teotigraphix...@gmail.com>
> >>> >> >wrote:
> >>> >> >
> >>> >> >> I posted about this a couple weeks ago, and I tried recompiling
> >>> >> >> with JFlex 1.5 I think (the older version) and still had the
> >>> >> >> problem.
> >>> >> >>
> >>> >> >> Maybe I messed something up, but I tried with my same asdoc code
> >>> >> >> when I fixed the build for the FlexJS asdocs. I wanted to see it
> >>> >> >> work with my version of a documentor.
> >>> >> >>
> >>> >> >> I think, IIRC, I actually tried a simple test case and it would
> >>> >> >> work.
> >>> >> >>
> >>> >> >> I have code that uses the tokenizer:
> >>> >> >>
> >>> >> >>
> >>> >> >>     public void compile()
> >>> >> >>     {
> >>> >> >>         if (token == null)
> >>> >> >>             return;
> >>> >> >>
> >>> >> >>         String data = token.getText();
> >>> >> >>         ASDocTokenizer tokenizer = new ASDocTokenizer(false);
> >>> >> >>         tokenizer.setReader(new StringReader(data));
> >>> >> >>         ASDocToken tok = tokenizer.next();
> >>> >> >>         boolean foundDescription = false;
> >>> >> >>         DocTag pendingTag = null;
> >>> >> >>
> >>> >> >>         try
> >>> >> >>         {
> >>> >> >>             while (tok != null)
> >>> >> >>             {
> >>> >> >>                 if (!foundDescription
> >>> >> >>                         && tok.getType() == ASTokenTypes.TOKEN_ASDOC_TEXT)
> >>> >> >>                 {
> >>> >> >>                     description = tok.getText();
> >>> >> >>                 }
> >>> >> >>                 else
> >>> >> >>                 {
> >>> >> >>                     // do tags
> >>> >> >>                     if (tok.getType() == ASTokenTypes.TOKEN_ASDOC_TAG)
> >>> >> >>                     {
> >>> >> >>                         if (pendingTag != null)
> >>> >> >>                         {
> >>> >> >>                             addTag(pendingTag);
> >>> >> >>                             pendingTag = null;
> >>> >> >>                         }
> >>> >> >>                         pendingTag = new DocTag(tok.getText().substring(1));
> >>> >> >>                     }
> >>> >> >>                     else if (tok.getType() == ASTokenTypes.TOKEN_ASDOC_TEXT)
> >>> >> >>                     {
> >>> >> >>                         pendingTag.setDescription(tok.getText());
> >>> >> >>                         addTag(pendingTag);
> >>> >> >>                         pendingTag = null;
> >>> >> >>                     }
> >>> >> >>                 }
> >>> >> >>
> >>> >> >>                 foundDescription = true;
> >>> >> >>
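> >>> >> >>                 // (per the thread, this last call to next() at
> >>> >> >>                 // the end of the comment is where it hung)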
> >>> >> >>                 tok = tokenizer.next();
> >>> >> >>             }
> >>> >> >>         }
> >>> >> >>         catch (Exception e)
> >>> >> >>         {
> >>> >> >>             e.printStackTrace();
> >>> >> >>         }
> >>> >> >>     }
> >>> >> >>
> >>> >> >> Mike
> >>> >> >>
> >>> >> >>
> >>> >> >> On Thu, Jun 4, 2015 at 2:30 PM, Alex Harui <aha...@adobe.com> wrote:
> >>> >> >>
> >>> >> >>>
> >>> >> >>>
> >>> >> >>> On 6/4/15, 11:23 AM, "Michael Schmalle" <teotigraphix...@gmail.com>
> >>> >> >>> wrote:
> >>> >> >>> >>Hmm.  Maybe I should spend some time looking into fixing
> >>> >> >>> >>ASDocTokenizer?  Was the problem that it didn’t work on every
> >>> >> >>> >>AS file we currently have?
> >>> >> >>> >>
> >>> >> >>> >
> >>> >> >>> >
> >>> >> >>> >It doesn't work on anything; there is an infinite loop in the
> >>> >> >>> >scanner that is generated by JFlex, so the RawASDocTokenizer is
> >>> >> >>> >broken.
> >>> >> >>> >
> >>> >> >>> >What is weird is that I was using the SAME code base when I
> >>> >> >>> >wrote the asdoc documenter I have two years ago, and it worked
> >>> >> >>> >fine.
> >>> >> >>>
> >>> >> >>> We upgraded the version of JFlex, IIRC.  I’ll take a look.  What
> >>> >> >>> setup did you have for trying it?  Did you run it on the Flex SDK
> >>> >> >>> or the FlexJS SDK, or did it even loop on a simple test case?
> >>> >> >>>
> >>> >> >>> -Alex
> >>> >> >>>
> >>> >> >>>
> >>> >> >>
> >>> >>
> >>> >>
> >>>
> >>>
> >>
>
>
