Ken Fox wrote:
> 
> IMHO type inference is the best way to get typing into Perl.
> We don't lose any expressiveness or hurt backwards compatibility.
> About the only thing we need to make visible are a few additional
> pragmas to enable type inference warnings.
> 
> Steve Fink wrote:
> > Types should be inferred whenever possible, and optional type qualifiers
> > may be used to gain whatever level of type stricture desired.
> 
> This will impact the bytecode storage format because we don't
> want to lose any type information present in the op tree. Reverse
> engineering source from bytecode is easier, but I think performance
> and distribution (not obfuscation) are driving the need for a
> bytecode format.

There are many possible goals for any typing-related stuff we do. I'd
say the top three are:

- compile-time warnings
   - definitely unsafe operations
   - probably unsafe operations
- runtime storage optimization
- runtime time optimization

I was shooting for only the first. But "compile-time" is perhaps too
simplistic. We have BEGIN{}, bytecode, AUTOLOAD, etc. We probably ought
to agree on the goals of any typing first.

> > I propose that we create a type hierarchy, such as
> >
> >    any
> >       list
> >          list(T)
> >       hash
> >          hash(T -> T)
> >       scalar
> >          reference
> >              ref(T)
> >          nonref
> >              number
> >                 integer
> >       void
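
(For my own benefit, here is that hierarchy encoded as a parent map with
a subtype test -- a rough sketch in Python; the encoding is mine, not
part of the proposal:)

```python
# The quoted type hierarchy as a parent map; is_subtype(t, u) walks
# up from t looking for u.  Type names follow the sketch above.
PARENT = {
    "list": "any", "list(T)": "list",
    "hash": "any", "hash(T -> T)": "hash",
    "scalar": "any",
    "reference": "scalar", "ref(T)": "reference",
    "nonref": "scalar", "number": "nonref", "integer": "number",
    "void": "any",
}

def is_subtype(t, u):
    """True if t is u or a descendant of u in the hierarchy."""
    while t is not None:
        if t == u:
            return True
        t = PARENT.get(t)
    return False
```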
> 
> The top level types you sketch are easily deduced from Perl's
> syntax alone -- we don't need any deep analysis. In order to
> produce anything useful, global dataflow analysis is required
> with all the optimization stages delayed until all the source
> has been seen. That's not a small thing to bite off and I don't
> see any way to move the decision to a module -- it's got to be
> in the core.

I was thinking much less radically -- no global analysis, just local
inference that throws up its hands when calling unseen functions. And
then mitigating the drawbacks by explicit type declarations. It seems
too constraining to require seeing all of the source. I don't agree that
you can't do useful stuff without global analysis, but I can't prove it
yet.
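
To make "throws up its hands" concrete, a toy sketch (Python, all the
names are mine) of local inference that only propagates types it can
see and assigns "any" to calls into unseen functions:

```python
# Toy local inference over a straight-line sequence of assignments.
# Calls to functions we haven't seen infer to "any" -- the system
# gives up rather than guessing.
KNOWN_RETURNS = {"length": "integer"}   # locally visible signatures

def infer(stmts):
    """stmts: list of (var, expr) where expr is ('lit', type),
    ('call', fname), or ('var', name).  Returns var -> inferred type."""
    env = {}
    for var, expr in stmts:
        kind, arg = expr
        if kind == "lit":
            env[var] = arg
        elif kind == "call":
            env[var] = KNOWN_RETURNS.get(arg, "any")  # unseen: give up
        elif kind == "var":
            env[var] = env.get(arg, "any")
    return env
```

An unseen mystery() infers to "any", while a locally known length()
still yields "integer" -- which is where explicit declarations would
come in to recover the lost information.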

> Many Perl operators are polymorphic, so they don't reveal a whole
> lot about their operand types (++$a is a good example). I propose
> that we abandon the typical view of a type hierarchy like the one
> listed above and adopt a capability-oriented one. It would be
> relatively easy for a sub to publish its prototype in terms of
> what operations must be defined on the arguments.

Just to confirm: so with capabilities, it would be something like

preincrement: (T such that T is incrementable) -> T
and addable(T) implies incrementable(T)

as opposed to

preincrement: number -> number
| preincrement: string -> string

?

Hm, really need to sit down and play with some examples.
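
One way to play with it: represent a constraint as the set of
capabilities an argument must support, with implication rules like
addable => incrementable. A sketch (Python; the representation is my
own experiment, not a proposal):

```python
# Capability-style constraints: a "type" is the set of operations a
# value must support.  IMPLIES records rules like "anything addable
# is also incrementable" (addable(T) implies incrementable(T)).
IMPLIES = {"addable": {"incrementable"}}

def provides(caps):
    """Close a capability set under the implication rules."""
    out = set(caps)
    changed = True
    while changed:
        changed = False
        for c in list(out):
            extra = IMPLIES.get(c, set()) - out
            if extra:
                out |= extra
                changed = True
    return out

def satisfies(required, provided):
    """A value meets a constraint if it provides every required cap."""
    return set(required) <= provides(provided)
```

So preincrement would require {"incrementable"}, and a value known only
to be addable would still satisfy it via the implication.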

> Warnings would be more like "numeric comparison required" instead
> of "integer required".

It does sound promising.

> > Now say we insert C<my $x : number> at the beginning of the example
> > (or some other syntax). That means that we are asserting that I<$x>
> > will I<always> be of type C<number>
> 
> Type inference and static typing are two different issues. I think
> they should be proposed in different RFCs.

Only if you think it's possible to do global analysis. If you assume
there are areas where you just won't be able to figure out the type of
f(), then you need some way of telling the system what it can't figure
out on its own, in order to hang onto some benefit. But then, I wasn't
doing static typing -- that was intended to be just a directive to the
type inference system, telling it to warn if the user's declaration were
ever _known_ to be violated. That's backwards from the usual type
inference, which *must* trigger an error whenever a type rule is
suspected of being violated (either because it really is violated or
because the system isn't powerful enough to tell). Otherwise you end up
with an unsound system, and your generated code is incorrect because it
depends on the inferred types being correct.

I don't know if this upside-down sort of inference would actually work,
but it doesn't sound completely crazy to me yet. I was shooting for
something that would be incrementally applicable, so I thought it would
be nice to have the default behavior accept all programs, and only
complain about those things (variables) that the user specifically asked
for, but only when it's sure that something is going wrong.
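
A sketch of that "only warn when definitely wrong" rule (Python; my own
framing, with a made-up "unknown" type standing in for anything the
system couldn't infer, such as a regex capture):

```python
# "Upside-down" checking: warn only when a declared type is *known*
# to be violated.  Unknown inferences pass silently -- unlike sound
# inference, which must reject anything it cannot prove safe.
SUBTYPES = {"integer": {"integer"},
            "number": {"integer", "number"},
            "string": {"string"}}

def check(declared, inferred):
    """Return a warning string, or None if no definite violation."""
    if inferred == "unknown":          # e.g. the result of a match
        return None                    # not *known* to be violated
    if inferred in SUBTYPES.get(declared, {declared}):
        return None
    return f"declared {declared} but assigned {inferred}"
```

That gives exactly the behavior in the examples quoted below: assigning
a match result to an integer-declared variable says nothing, while
assigning a string literal warns.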

> > Note that error messages are only generated when two things with
> > strong types collide. So C<my ($x : integer) = /(\d.*)/> will not
> > complain, but C<my $x : integer = "string"> will.
> 
> That's not useful -- it needs to be more general. Warnings should
> occur if a capability is required that the value does not implement.
> This is orthogonal to static typing.

It seems to me that capabilities vs (polymorphic) type hierarchies is
one decision, and static typing is another, but static typing would
interact with inference regardless of whether you use capabilities or
not. Am I missing something?

SML uses type inference, but allows you to tell it the type of a
function if you need to.

When you say static typing, do you mean that C<my $x : integer>
guarantees you'll be able to store $x in 32 bits? Or do you mean
that all type checking is done at compile time (as opposed to dynamic or
runtime typing)?

I'm in a little over my head here. I'm going to ask someone who knows
about this stuff whether I'm completely crazy.
