On 23.04.2017 17:08, Cédric Champeau wrote:
[...]
Any tool that performs bytecode analysis or computes signatures is
affected. So if the compiler is not reproducible, since you're at the
bottom of the execution, you affect all tools that are built on top.
This affects Gradle, but also Grails, IntelliJ (because different
bytecode forces reindexing), and tools which perform bytecode decoration
(intercepting Foo#foo() is not the same as intercepting Bar#foo()), so
it also affects correctness.

I don't 100% agree, but let's leave it at that.

    Also I doubt we can be 100% reproducible as long as we use
    reflection somewhere, simply because reflection is not stable. Only
    because of the great contribution from IntelliJ to be able to read
    class files directly without reflection in the compiler do we have
    the ability for *most* classes to get a stable result, depending on
    the order of occurrence in the bytecode. But for, for example, the
    Groovy classes directly known to the compiler, like Closure, this
    won't be the case.

    100% stability is not achievable simply by using linked maps and sets.

I agree, but I don't get the point of not trying to do better.

I didn't want to say that. I would like to have an achievable and specific goal.

It should
be pretty obvious that for the same sources and the same compile classpath,
we should produce the same output. I don't get why we wouldn't try to
fix this, even incrementally, as we find bugs. This is better.

Because "trying to do better" is no different from what we did before. Ok, maybe I misunderstood the original mail's intention.

I agree
that we cannot achieve 100% reproducibility today, because we don't implement
equals/hashCode, which means we're system-dependent (and probably
JDK-dependent too).

Implementing equals/hashCode does not give you any order in a Set or Map that you can rely on. Really, I still don't get what you are targeting with that. You require them in a Set to avoid duplicates.
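
To illustrate (a toy sketch, nothing Groovy-specific; plain Strings stand in for the element type):

import java.util.*;

public class SetOrderDemo {
    public static void main(String[] args) {
        List<String> insertionOrder =
                Arrays.asList("GroovyObject", "Serializable", "Cloneable", "Comparable");

        // All elements implement equals/hashCode, yet a plain HashSet iterates in a
        // hash-table order, not necessarily the insertion order, and that order is
        // not part of any contract you can rely on.
        System.out.println(new HashSet<>(insertionOrder));

        // A LinkedHashSet, by contrast, guarantees insertion order by contract.
        System.out.println(new LinkedHashSet<>(insertionOrder));
    }
}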

So if you give back a Set<ClassNode>, then it is potentially wrong to use a Set as long as ClassNode does not implement equals and hashCode, *or* this Set should actually be a List - unless of course you can ensure that two supposedly equal ClassNodes are never used here, but then again the Set should be a List. And I also think it is no solution to internally use a LinkedHashSet but externally expose only a Set. If the order is a given and guaranteed element of your API, then it should be reflected in the type, and LinkedHashSet should be used for the parameter or return type.
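
If the guarantee is meant to be part of the API, the signatures could say so directly - a purely illustrative sketch (the node type is a stand-in, not the real ClassNode):

import java.util.LinkedHashSet;
import java.util.List;

class FakeNode {
    final String name;
    FakeNode(String name) { this.name = name; }
}

interface NodeQueries {
    // If insertion order is a guaranteed element of the contract, the return type can express it:
    LinkedHashSet<FakeNode> allInterfacesInResolutionOrder(FakeNode node);

    // If "equal" nodes may legitimately occur more than once, a List is the honest type:
    List<FakeNode> allInterfacesWithDuplicates(FakeNode node);
}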

Of course the same goes for the usage of MethodNode as a set element.

And if we talk about Maps, then the same should be said about the Map type at least, because I hope we do not use AST elements as keys anywhere.

This means for me that at least the fix you did for getAllInterfaces is not sufficient, since that solution still allows for duplicates. And I don't think ClassNode should have an equals and hashCode either, not while it has the twin role of being either a reference to a type or the type itself.
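
A small sketch of the duplicate problem (again with a stand-in type, assuming equals/hashCode are not implemented, as in ClassNode today):

import java.util.LinkedHashSet;
import java.util.Set;

class FakeClassNode {
    final String name;
    FakeClassNode(String name) { this.name = name; }
    @Override public String toString() { return name; }
}

public class DuplicateDemo {
    public static void main(String[] args) {
        Set<FakeClassNode> interfaces = new LinkedHashSet<>();
        // Two distinct instances that are supposed to represent the same interface:
        interfaces.add(new FakeClassNode("java.io.Serializable"));
        interfaces.add(new FakeClassNode("java.io.Serializable"));

        // Without equals/hashCode the set falls back to identity, so both survive:
        System.out.println(interfaces.size()); // prints 2
    }
}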

And implementing those is going to be hard, because our
structures are not immutable (so mutating something that is already in
the set/map is problematic).

Then don't. Let's change the API to not return a Set.

Anyway, I'm not asking for 100%
reproducibility today. I'm asking for every dev to realize that we have a
problem that leads to non-deterministic behavior, and that everyone
should try to take that into consideration when writing code, and
furthermore, tests.

Fair enough... only I would add that this should also be taken into consideration when doing reviews.

    once you include reflection here... Those cases you could solve by
    sorting though.

Yes, we should avoid reflection as much as possible. I wonder if there
are still lots of places in the compiler where we do this: reflection
means loading the classes and, too often, initializing them, which is
probably no good in any case.

We do not initialize the classes unless we must, which is only the case for annotations and annotations using constants - if reflection is used for them. Reflection is our fallback. For example GroovyClassLoader/GroovyShell/Eval/GroovyScriptEngine/basically any runtime compilation logic will in most cases use reflection. And you won't have a transform path or compile classpath there either. GroovyScriptEngine could possibly be changed here somewhat... but I don't see that for the more general cases of GCL, shell and eval.
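
For the places where reflection has to stay, the sorting idea mentioned above could look roughly like this (only a sketch, assuming any deterministic order is good enough; which sort key to use is up for debate):

import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Comparator;

public class SortedReflection {
    public static void main(String[] args) {
        // getDeclaredMethods() makes no guarantee about the order of the returned array,
        // so the raw result is not a reproducible input for the compiler.
        Method[] methods = java.util.Map.class.getDeclaredMethods();

        // Sorting by the full generic signature gives a deterministic and unique order
        // across runs and JDKs.
        Arrays.sort(methods, Comparator.comparing(Method::toGenericString));

        for (Method m : methods) {
            System.out.println(m);
        }
    }
}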

        Especially, the order of interfaces in a type declaration matters
        (and it is reproducible today). We *must not* reorder them, or it
        would change semantics (typically for traits). I have fixed the
        bugs we, in the Gradle team, have identified, but my email was
        there to mention that we should take better care of this, because
        we do a pretty bad job at checking that the behavior of the
        compiler is deterministic.

    Well, that was no real option for a long time and no target till
    now. But if you say your fixes are enough for Gradle, then your
    tests should ensure it stays that way. And then we don't do a bad
    job at it anymore, do we?

I have added tests for the cases I fixed, yes. They are not
cross-platform, though.

Well, I do think the fixes are not sufficient, but for other reasons than you do ;)

        Regarding the AST transformation classpath, I had actually
        forgotten that Gradle integrates directly with the compiler, so
        it can use the 2 different "classpaths". But AFAIK, our CLI
        doesn't. This should be easy to fix.

    Easy fix? Depends... first of all... what will you gain from that?
    Since I have no idea where you get an advantage from those things
    outside a build tool that goes beyond something like make and ant, I
    fail to see the advantage.

There would be an advantage for the Groovy compiler itself: today, if
you use `groovyc`, you put everything on the compile classpath. We don't
distinguish between classes that need to be found for the implementation
of the compiled classes and those which are dependencies of AST
transformations. The consequence is the same as the one people have with
annotation processors when they are on the compile classpath instead of
the annotation processor path: mixing classpaths which have nothing to
do with each other. In particular, say the user code requires Guava 19,
but an AST transformation was built using Guava 17. Then you have a
conflict, but the user shouldn't care about this, because Guava 17 is an
implementation dependency of an AST transformation: nothing should
prevent them from using the class they wish, because once compiled,
Guava 17 is not required anymore. It's about better modelling of
dependencies, which has direct consequences on user pain.

Ok, yes, I agree with that one. I still think that, for simplicity, the current behaviour has to be the default.

bye Jochen
