On Sat, 20 Oct 2018 at 11:37, Axel Wagner <axel.wagner...@googlemail.com>
wrote:

>
> Thanks, I feel I can work with this :)
>
> On Sat, Oct 20, 2018 at 10:58 AM roger peppe <rogpe...@gmail.com> wrote:
>
>> That transformation might be valid for the simple case you have there,
>> where there's a single parameter of exactly the interface type, but generic
>> functions allow much more than that. You can have types that include the
>> generic types as a component too. To take one simple example, how would
>> your transformation work with a function that looks like this?
>>
>>     func G(type A, R someContract)(a func() []A) *R
>>
>> In general, I believe the representation of generic values *must* be the
>> same as that of the concrete values in other non-generic parts of the
>> program because both can alias each other. This precludes boxing
>> implementations of generics such as you're describing there.
>>
>
> First, am I concluding correctly, that you are thus also doubting the dual
> implementation property of the contracts design as it stands? If you do,
> that's fair and probably not a fight I want to get into :)
>

Yes. The draft proposal says "generic functions, rather than generic types,
can probably be compiled using an interface-based approach. That will
optimize compile time, in that the package is only compiled once, but there
will be some run-time cost." I don't think this is possible as stated. I
don't even know what it means to compile a generic type, as types
themselves aren't compiled - unless they're talking about compiling methods
on generic types, in which case I don't see how they differ from generic
functions.

I do believe it's possible to avoid generating code for every possible set
of type parameters to a generic function. There's a continuum of
efficiency. At one end, we *could* generate code for each possible set of
type parameters (giving efficient code but binary bloat), and at the other,
I think it's possible to generate a function containing reflect-based code
that will work on all possible type parameters (*) but is comparatively
very slow. Somewhere in the middle is probably a happy medium: perhaps
something like "generate a version of the function for all types with the
same size and pointer map".

(*) some things aren't currently possible, like iterating over maps in O(1)
space, but it's likely those problems can be solved in time.
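
For concreteness, here's a rough sketch of the reflect-based end of that
continuum (entirely my own illustration, nothing from the draft): one
compiled body for a hypothetical Map function that works for any element
types, but pays for a trip through reflect on every element:

    package main

    import (
        "fmt"
        "reflect"
    )

    // reflectMap is a single compiled body that works for any slice
    // element and result types. A hypothetical generic
    //     func Map(type A, B)(s []A, f func(A) B) []B
    // could be lowered to something like this.
    func reflectMap(slice, f interface{}) interface{} {
        sv := reflect.ValueOf(slice)
        fv := reflect.ValueOf(f)
        out := reflect.MakeSlice(reflect.SliceOf(fv.Type().Out(0)), sv.Len(), sv.Len())
        for i := 0; i < sv.Len(); i++ {
            out.Index(i).Set(fv.Call([]reflect.Value{sv.Index(i)})[0])
        }
        return out.Interface()
    }

    func main() {
        squares := reflectMap([]int{1, 2, 3}, func(x int) int { return x * x }).([]int)
        fmt.Println(squares) // [1 4 9]
    }

A real implementation would presumably work at the level of the compiler's
intermediate representation rather than on interface{} values, but the cost
profile is the same: a single copy of the code, lots of indirection per
operation.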

> To answer your question: I handwavily put that under the same "operator"
> headline of a) (lord, all of this would be so much easier with a formal
> type system. That was the intention of a) - restrict the set of
> type-constructors we have to talk about to func). An extremely simple way
> would be
>
> type A = interface { … }
> type R = interface { … }
> func G(a func() []A) *R
>
> and at the call site
>
> var x []CA
>

Your code here assumes that x is known at the call site. That's not a valid
general assumption. The function might well be generating the slice on
every call.

> var y *CR
> _y := G(func() []A {
>     var out []A
>     for _, a := range x {
>         out = append(out, a)
>

Making a copy of the slice isn't valid in general either, as the slice
might legitimately alias some other slice elsewhere in the program. When G
mutates the slice, it's important that the elements that it mutates are the
same ones as in the slice originally returned by the function.

By way of example, think about how your transformation could work on this
code: https://play.golang.org/p/_nfS2rEJIJy
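
To spell the problem out (this is not the playground example above, just an
interface-based sketch of the same issue): if the call site copies the
elements into a fresh slice, any writes that G makes land in that copy and
are lost:

    package main

    import "fmt"

    // G is the function after the proposed interface-based rewrite:
    // it zeroes every element of the slice it is handed.
    func G(get func() []interface{}) {
        s := get()
        for i := range s {
            s[i] = 0
        }
    }

    func main() {
        x := []int{1, 2, 3}

        // The transformation copies x into a fresh []interface{}, so
        // G's writes go into the copy, not into x.
        G(func() []interface{} {
            out := make([]interface{}, len(x))
            for i, v := range x {
                out[i] = v
            }
            return out
        })

        // A real generic G(type A)(get func() []A), called as
        // G(func() []int { return x }), would have printed [0 0 0].
        fmt.Println(x) // [1 2 3]: the mutation was lost
    }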


>     }
>     return out
> })
> if _y != nil {
>     tmp := (*_y).(CR)
>     y = &tmp
> }
>
> For the full argument, we'd also need that the compiler is able to
> recognize and reverse these patterns. There are ways to do this
> transformation that make this reversal simpler (if you transform
> slice-accesses in the parametric function into method calls and generate
> appropriate wrappers, the reverse transformation becomes simply inlining
> and the trivial devirtualization), but I'm too lazy to write them down now.
>

I don't understand why you think the compiler needs to be able to recognize
and reverse these patterns.

> For the record, I'm fully aware of the problems with this approach
> <https://blog.merovius.de/2018/06/03/why-doesnt-go-have-variance-in.html>
> in general. However, the big advantage we have here is that we can ignore
> type-safety. This is an internal optimization after the type-checking pass
> that has already guaranteed that what we're doing is safe. Which is pretty
> much the crux of b) - being able to ignore type-safety allows us to do
> abhorrent, reflect/unsafe based things that we wouldn't want to do in the
> language proper but which allow us to just assume any co-/contravariance
> we need.
>

I don't think it's possible to assume co-/contravariance even with unsafe,
because the runtime representation of interface values really is different
from that of concrete values. The only way of doing this would be to make
all values in the program the same size, which is indeed the approach taken
by many languages, so []byte would have exactly the same layout as
[]io.Reader, for example. I don't think this is an acceptable approach for
Go though.
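
As a concrete illustration: on a typical 64-bit platform a byte occupies a
single byte while an interface value occupies two words, so []byte and
[]io.Reader have incompatible element layouts and no amount of unsafe can
view one as the other in place:

    package main

    import (
        "fmt"
        "io"
        "unsafe"
    )

    func main() {
        var b byte
        var r io.Reader
        // A concrete byte is 1 byte; an interface value is two words
        // (type descriptor + data pointer), i.e. 16 bytes on amd64.
        fmt.Println(unsafe.Sizeof(b), unsafe.Sizeof(r)) // 1 16
        // So converting a []byte to a []io.Reader necessarily means
        // allocating a new backing array and boxing every element; it
        // can never be a free reinterpretation of the same memory.
    }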

>>> c) Given a function
>>> func F(a A) R { … }
>>> with A, R interface types and concrete types CA, CR implementing A and R
>>> respectively and assuming every return statement of F can be proven to
>>> return a value of dynamic type CR, we can create a devirtualized version of
>>> F with the signature
>>> func CF(CA) CR
>>> such that for any a, the expression `F(a).(CR)` is equivalent to `CF(a)`
>>> and the expression `F(a)` is equivalent to `R(CF(a))`.
>>> (and similarly for the func value CF when passing it around)
>>>
>>
>> "assuming every return statement of F can be proven to return a value of
>> dynamic type CR". This is a big assumption, and can only be done with flow
>> analysis and then only in a limited set of cases. For cases where you can't
>> prove it, you can't make a devirtualized version of F.
>>
>
> It may be a big assumption - but that's not what c) is about. c) is the
> lemma "assuming we have this proof, no matter where it came from". d) is
> then the lemma that at least the cases resulting from b) fall within that
> "limited set of cases" :) So, for clarity: Is this your only problem with
> c)? Because then I'm going to assume we are discussing d).
>
> Again, I agree with you that, *in general*, obtaining this proof is
> non-trivial. But we don't have to care about the general case, we only have
> to care about the case where the function resulted from our self-devised
> generic rewrites, which gives us a very good idea of how these functions
> look and what to look out for.
>

Originally you claimed that "the analysis [of the exact set of possible
types that a type parameter can be] for interfaces is [...] *exactly as
hard* [as that for generic type-parameterized functions]".

As I understand it, this means that you're claiming that taking some
arbitrary piece of Go code that uses interfaces and determining the exact
set of possible dynamic types that those interfaces can be is *exactly as
hard* as taking a generic function and determining the set of possible
types it can be called with. I guess I must be misinterpreting you here,
because it's quite easy to see that if I have a method Foo:

    type T struct {
        I interface { A() }
    }
    func (t *T) Foo() {
        t.I.A()
    }

then it's only possible to know what types t.I can hold by finding every
place in the entire program that might refer to t.I and working out what
dynamic types those places might store in it. This is the problem solved by Guru's
"pointsto" query - it's slow and hard and inaccurate (to do it right, you
really have to trace reflect-based code too).
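
Here's a contrived but complete illustration of why the analysis has to be
global: nothing in Foo, or even in its package, constrains what t.I holds;
that's decided by whatever code anywhere in the program (or in reflect-using
code) happens to store into it:

    package main

    import "fmt"

    type T struct {
        I interface { A() }
    }

    func (t *T) Foo() {
        // Devirtualizing this call requires knowing every dynamic type
        // that any code in the whole program might ever store in t.I.
        t.I.A()
    }

    type loud struct{}

    func (loud) A() { fmt.Println("loud") }

    type quiet struct{}

    func (quiet) A() {}

    func main() {
        t := &T{I: loud{}}
        t.Foo()
        // Any package that can see *T (or reflect-based code) can swap
        // in a different dynamic type at any time.
        t.I = quiet{}
        t.Foo()
    }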

> Note that if we *absolutely* had to, we could even add a fast-path to
> whatever heuristic we come up with by keeping the information from b)
> around (e.g. in a special `//go:` directive - or just a
> `map[FuncDecl]ParametricFuncDecl`). As long as we don't *solely* rely on
> that, this wouldn't weaken the overall argument. But I'm pretty sure that
> the fact that we know what transformations we did in b) makes writing this
> heuristic easy(ish).
>

Ah, I *think* I see where you might be coming from now. You're thinking
that the compiler could do some kind of Go-to-Go transformation for generic
functions so that they looked exactly as if they'd been written using
interfaces, and then later transform that back into devirtualized calls.
Even if it were possible in general (I'm fairly sure it's not), I don't see
why anyone would want to take such an indirect approach when it's quite
possible for the compiler to compile to an intermediate format that
preserves all the information needed and decide how to generate the code
based on that.
