Re: File.seek() interface

2005-07-07 Thread Luke Palmer
On 7/7/05, wolverian <[EMAIL PROTECTED]> wrote:
> On Thu, Jul 07, 2005 at 08:18:40PM +0300, wolverian wrote:
> > I'm a pretty high level guy, so I don't know about the performance
> > implications of that. Maybe we want to keep seek() low level, anyway.
> 
> Sorry about replying to myself, but I want to ask a further question on
> this.
> 
> Would it be possible to make this work, efficiently:
> 
> for =$fh[-10 ...] -> $line { ... }
> 
> to iterate over the last ten lines?

No.  Most notably because -10 ... gives (-10, -9, ... -1, 0, 1, 2, 3,
...).  I also don't think that without a special interface filehandles
can behave as an array of lines.  If they could, then you'd have:

for $fh[-10..-1] -> $line {...}

> Can we generalise that to be as performance-effective as seek()?

Perhaps.  That's what tail(1) does.  But it's a tricky problem.  You
have to guess where the end should be, then do a binary search on the
number of lines after your position.  Sounds like a job for a
specialized module to me.

If you don't care about speed, then I suppose you could even do:

for [ =$fh ].[-10..-1] -> $line {...}

Which won't be speed efficient, and may or may not be memory
efficient, depending on the implementation.  I'd guess not.
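The backward scan that tail(1) does is easy to sketch; here is an illustrative Python version (the helper name and block size are invented for the example, and this is a model of the idea, not a proposed Perl 6 interface):

```python
import io
import os
import tempfile

def last_lines(path, n, block=4096):
    """Read the last n lines by scanning backward in fixed-size blocks,
    so the file is never slurped whole."""
    with open(path, "rb") as fh:
        fh.seek(0, io.SEEK_END)
        pos = fh.tell()
        data = b""
        # Walk backward until the buffer holds more than n newlines
        # (the extra one guards against a partial first line).
        while pos > 0 and data.count(b"\n") <= n:
            step = min(block, pos)
            pos -= step
            fh.seek(pos)
            data = fh.read(step) + data
        return [l.decode() for l in data.splitlines()[-n:]]

# Demo: a 100-line file, from which we want the last ten lines.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("".join(f"line{i}\n" for i in range(100)))
tail = last_lines(f.name, 10)
os.unlink(f.name)
print(tail[0], tail[-1])   # line90 line99
```

The binary-search refinement Luke mentions would replace the fixed block size with a guess based on average line length, but the backward block walk is the heart of it.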

Luke


Re: Hackathon notes

2005-07-08 Thread Luke Palmer
On 7/8/05, "TSa (Thomas Sandlaß)" <[EMAIL PROTECTED]> wrote:
> > * Constrained types in MMD position, as well as value-based MMDs, are _not_
> >   resolved in the type-distance phase, but compile into a huge given/when
> >   loop that accepts the first alternative.  So this:
> >
> > multi sub foo (3) { ... }
> > multi sub foo (2..10) { ... }
> >
> >   really means:
> >
> > multi sub foo ($x where { $_ ~~ 3 }) { ... }
> > multi sub foo ($x where { $_ ~~ 2..10 }) { ... }
> >
> >   which compiles two different long names:
> >
> > # use introspection to get the constraints
> > &foo
> > &foo
> >
> >   which really means this, which occurs after the type-based MMD tiebreaking
> >   phase:
> >
> > given $x {
> > when 3 { &foo.goto }
> > when 2..10 { &foo.goto }
> > }
> >   in the type-based phase, any duplicates in MMD are rejected as ambiguous; but
> >   in the value-based phase, the first conforming one wins.
> 
> I hope that is a temporary "solution". Usually one would expect 3 being a
> more specific type than 2..10 irrespective of definition sequence.

Not unless you want to write the Halting engine that determines that 3
is in fact more specific than 2..10.  It's based on definition order,
so that if you have dependencies in your conditions (which you
oughtn't), you'd better define the multis together to get well-defined
semantics.

> >   The upshot is that these are now errors:
> >
> > sub foo ($x) is rw { $x }
> > my $a;
> foo($a) = 4;   # runtime error - assign to constant
> 
> I assumed lvalue subs would implicitly return void and an
> assignment goes to the function slot of the args used in the assignment
> and subsequent calls with these args return exactly this value.
> In that respect arrays and hashes are the prime examples of lvalue
> subs. Other uses are interpolated data, Delaunay triangulation etc.

Well, in the absence of optimization, what's usually going on is that
the lvalue sub is returning a tied proxy object, which you then call
STORE on.
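The proxy trick can be sketched in Python (the names store/fetch and the Proxy class are stand-ins for whatever Perl 6's tie interface ends up calling them):

```python
class Proxy:
    """What an lvalue sub returns: a handle that knows where the value
    lives, so assignment can be routed through store()."""
    def __init__(self, container, key):
        self._container = container
        self._key = key

    def fetch(self):                 # rvalue use
        return self._container[self._key]

    def store(self, value):          # the STORE hook: `foo(...) = value`
        self._container[self._key] = value

data = {"x": 1}

def lvalue_x():
    return Proxy(data, "x")

lvalue_x().store(4)      # plays the role of `foo($a) = 4`
print(data["x"])         # 4
```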

Luke


Re: Hackathon notes

2005-07-08 Thread Luke Palmer
On 7/8/05, Yuval Kogman <[EMAIL PROTECTED]> wrote:
> I have a draft of a proposition for what I think is proper MMD
> dispatching order:
> 
> http://svn.openfoundry.org/pugs/docs/mmd_match_order.txt

He meant:

http://svn.openfoundry.org/pugs/docs/notes/mmd_match_order.txt

Luke


Re: Hackathon notes

2005-07-08 Thread Luke Palmer
On 7/8/05, Yuval Kogman <[EMAIL PROTECTED]> wrote:
> I have a draft of a proposition for what I think is proper MMD
> dispatching order:
> 
> http://svn.openfoundry.org/pugs/docs/mmd_match_order.txt
--
> Order of definition tie breaking:
> 
>   Two signatures defined in the same file:
>
>   the one EARLIER in the file wins
>
>   Two signatures defined in a different file:
>
>   the one defined LATER in the file wins

Hmm.  I wonder if we should just make the later ones win in all cases.
Generally when I structure code, I find it most natural to go from
general to specific.  If we're going to make a choice for the user
(something we usually avoid), we might as well go with the one that I
would pick :-)

I like the idea of your tree of match order, I just don't like the
tree itself too much.  If we're going to reorder things for the user,
it does need to happen in a predictable way, even if it's not correct
100% of the time.  I find your tree to be pretty complex (that could
be because I don't understand the reasoning for the ordering
decisions).  I'd prefer something more like:

1. Constants
2. Junctions / Ranges
3. Regexes
4. Codeblocks

Where none of them is recursively descended into for matching.  That
particular order has no special significance, it just felt natural.
I'm just pointing out that it should be simple[1].

Still, I very much agree with your desire to be able to extend someone
else's interface, which we can solve by messing with the tiebreaking
order.

Luke

[1] That is also my complaint about the Manhattan metric for
multimethod resolution: it is only simple and predictable when you
have the whole class hierarchy in your head.


Re: Hackathon notes

2005-07-08 Thread Luke Palmer
On 7/8/05, Yuval Kogman <[EMAIL PROTECTED]> wrote:
> > If we're going to reorder things for the user,
> > it does need to happen in a predictable way, even if it's not correct
> > 100% of the time.  I find your tree to be pretty complex (that could
> > be because I don't understand the reasoning for the ordering
> > decisions).  I'd prefer something more like:
> >
> > 1. Constants
> > 2. Junctions / Ranges
> > 3. Regexes
> > 4. Codeblocks
> 
> This is pretty much the same as what I proposed...
> 
> The sub points are usually clarifications, not a tree.  Did you
> actually read it?

I suppose I was mostly commenting on the junctions part.  I'm
proposing that All Junctions Are Created Equal.  That is, there is no
specificity measuring on junctions.  I also didn't really understand
your right-angle-tree-ratio measure.  Does it have a name, and is
there a mathematical reason that you chose it?

Anyway, I think that once we start diving inside expressions to
measure their specificity, we've gotten too complex to be predictable.

Luke


Re: How do I... create a value type?

2005-07-11 Thread Luke Palmer
On 7/11/05, Ingo Blechschmidt <[EMAIL PROTECTED]> wrote:
> Hi,
> 
>   my $x = 42;
>   my $y = $x;
>   $y++;
>   say $x; # Still 42, of course
> 
> 
>   class Foo {
> has $.data;
> method incr () { $.data++ }
> 
> # Please fill in appropriate magic here
>   }

I think it's just `class Foo is value`. It's probably wrong to do it
by overloading &infix:<=> like Perl 5 did.

Luke


Re: How do I... create a value type?

2005-07-11 Thread Luke Palmer
On 7/11/05, Ingo Blechschmidt <[EMAIL PROTECTED]> wrote:
>   class Foo {
> has $.data;
> method incr () { $.data++ }
> 
> # Please fill in appropriate magic here
>   }
> 
>   my Foo $x .= new(:data(42));
>   my Foo $y  = $x;
>   $y.incr();
>   say $x.data;# Should still be 42
>   say $x =:= $y;  # Should be false

Whoops, I didn't read that carefully enough.  You shouldn't mutate
inside a value type (the body of incr() is a no-no).  Ideally, it
should be implemented like so:

class Foo is value {
has $.data;
method succ () { $.data + 1 }
}
my $x = Foo.new(data => 42);
my $y = $x;
$y.=succ;
...

Then you don't need any special magic.  But if you want to have
mutator methods that aren't in .= form (which I recommend against for
value types), maybe the `is value` trait makes the class do the
`value` role, which gives you a method, `value_clone` or something,
with which you specify that you're about to mutate this object, and
that Perl should do any copying it needs to now.
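The non-mutating style has a rough Python rendering (frozen dataclasses approximate `is value`; this is an analogy for illustration, not the Perl 6 semantics):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)     # frozen ≈ "is value": instances never mutate
class Foo:
    data: int

    def succ(self):
        # Return a fresh value instead of mutating in place --
        # the moral equivalent of $y.=succ rebinding $y to a new Foo.
        return replace(self, data=self.data + 1)

x = Foo(data=42)
y = x
y = y.succ()        # $y.=succ
print(x.data)       # 42 -- x is untouched
print(y.data)       # 43
print(x is y)       # False, like $x =:= $y being false
```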

Luke


Re: MML dispatch

2005-07-14 Thread Luke Palmer
Thanks for your very detailed explanation of your views on the Pure
MMD scheme, Damian.  I finally understand why you're opposed to it.  I
could never really buy your previous argument: "Manhattan distance is
better".

Damian writes:
> Similarly, since the number of potential variants is the Cartesian product of
> the total sizes of the class hierarch(y|ies) for each parameter position,
> getting adequate coverage of the MMD search space quickly becomes tedious if
> most of the search space is (by default) ambiguous, as in "pure ordering"
> dispatch schemes.

Indeed, pure MMD will be ambiguous in more cases.  If you think
narrowly, then it causes you to write many disambiguating cases which
*usually* end up being what a manhattan metric would give you anyway. 
But you can get away from that terrible duplication by being more
precise about your types.  You can define type classes using empty
roles[1], where you simply organize your type dag into abstractions
that make sense to your dispatcher.  The upshot of all this is that it
forces you to think in a way so that the "*usually*" above becomes an
"always".  And it involves defining more (very small) types that make
sense to humans, not gratuitously many MMD variants which are pretty
hard to think about.

> One very common problem with pure ordering schemes is that subsequent changes
> in one or more class hierarchies can cause previously unambiguous cases to
> become ambiguous, by extending a zone of ambiguity in the search space.  In
> contrast, because a metric approach always fully partitions the entire search
> space, hierarchy changes may alter where a particular call dispatches to, but
> only ever to a "closer", more appropriate variant.

You just made my primary argument against manhattan distance for me. 
If you change something in the middle of the class hierarchy,
manhattan distance causes multimethods on the leaves to change
semantics.  You say that pure MMD causes them to break when a change
occurs.  Isn't that better than changing?  Presumably you ran it
through the ambiguity checker and a test suite and got it right once,
and then you have to do it again when you refactor.  This comes with
the meaningless "unit derivation" that a metric scheme defines.

Perhaps I've made this argument before, but let me just ask a
question:  if B derives from A, C derives from A, and D derives from
C, is it sensible to say that D is "more derived" from A than B is? 
Now consider the following definitions:

class A { }
class B is A {
method foo () { 1 }
method bar () { 2 }
method baz () { 3 }
}
class C is A {
method foo () { 1 }
}
class D is C {
method bar () { 2 }
}

Now it looks like B is more derived than D is.  But that is, of
course, impossible to tell.  Basically I'm saying that you can't tell
the relative relationship of D and B when talking about A.  They're
both derived by some "amount" that is impossible for a compiler to
detect.  What you *can* say is that D is more derived than C.

In conclusion, the reason that manhattan distance scares me so, and
the reason that I'm not satisfied with "use mmd 'pure'" is that for
the builtins that heavily use MMD, we require *precision rather than
dwimmyness*.  A module author who /inserts/ a type in the standard
hierarchy can change the semantics of things that aren't aware that
that type even exists.  If you're going to go messing with the
standard types, you'd better be clear about your abstractions, and if
you're not, the program deserves to die, not "dwim" around it.

Oh, and the mmd style should probably look like:

multi foo (...) is mmd {...}
multi bar (...) is mmd {...}

Rather than a pragma.

> Note too that Perl 6 *will* still support a form of "pure ordered" dispatch--a
> left-most-closest-match scheme like that used by CLOS--via "invocant groups":
> 
> multi sub Foo(A: B: C:) {...}
> multi sub Foo(A: D: C:) {...}
> multi sub Foo(F: B: G:) {...}
> 
> This, of course, is not "global pure ordering", but rather "left-biased pure
> ordering".
> 
> To summarize: pure ordering renders more of the MMD search space "ambiguous",
> which is potentially safer but much less DWIMish. Metric schemes have far
> fewer ambiguities and usually dispatch in a predictable way. Metric schemes
> can still provide pure-ordering analyses via static analysis tools or special
> warning modes, but pure ordering schemes can't avoid ambiguities or DWIM.
> 
> Of course, none of this prevents:
> 
> use MMD ;
> 
> 
> Damian

Luke

[1] And I find this to be useful even when using manhattan distance. 
I'd like to be able to define such type classes out-of-band, like:

role Foo
defines Bar  # Bar does Foo now
defines Baz  # Baz does Foo now
{ }


Re: Type::Class::Haskell does Role

2005-07-16 Thread Luke Palmer
On 16 Jul 2005 12:22:31 -, David Formosa (aka ? the Platypus)
<[EMAIL PROTECTED]> wrote:
> On Sat, 16 Jul 2005 12:14:24 +0800, Autrijus Tang
> <[EMAIL PROTECTED]> wrote:
> 
> [...]
> 
> > On Sat, Jul 16, 2005 at 12:24:21AM +0300, Yuval Kogman wrote:
> >> > There is a new generic comparison operator known as ~~.
> >>
> >> ~~ is just Eq, there is also Ord
> >
> > Hmm, <~ and ~> for generic comparators? ;)
> 
> Unfortunately we have ~< and ~> meaning shift string left and shift
> right.

Well, those can always move aside for more important operators.  In
fact, there's something to be said for being like Prolog and avoiding
all arrow-looking things that aren't actually arrows, so:

=< >=    less-equal, greater-equal
~< >~    string shift left, string shift right
...

But, there's also something to be said against it, especially for
those two. ~ is acting as a kind of meta-prefix, so putting it on the
end is silly.  =< reads like "equal or less than", which is weird.

Anyway, what I really meant to say is that for higher level code, you
see generic comparisons a lot more than you see numeric or string
comparisons.  But for scripting code, the opposite is true.  Last
year, Larry used the operator "equal" for generic equality (I
personally would have picked "equals").  But I think that's a little
awkward if it is to be used often.

I'm going to have some coffee mugs thrown at me for saying this, but perhaps:

             Generic      String          Numeric          Identity
           +-----------+---------------+----------------+---------------+
Equality   |    ==     |      =~=      |      =+=       |      =:=      |
           +-----------+---------------+----------------+---------------+
Comparison | > < >= <= | ~> ~< ~>= ~<= | +> <+ +<= +>=  | $a.id < $b.id |
           +-----------+---------------+----------------+---------------+

Another possibility is to embrace the concept that two objects with
reference semantics are not equal unless they are identical, and also
say that 3 =:= "3" (because they behave the same in every way).  Then
we could keep our old way of life with numeric == and string eq, and
use <: and :> for generic comparison (by analogy with =:= for generic
equality).  But if we want to do that, then we'd have to define <: and
:> in terms of .id or some other thing that's not overloadable except
for value types.  For the record, this sounds great to me, because I
use value types for everything.  But I expect some people would be
pretty irritated by that.

All in all, generic equality and comparison is something that Perl 5
did really poorly.  Some people overloaded eq, some overloaded ==,
some wrote a ->equal method, and there was no way to shift between the
different paradigms smoothly.  This is one of the times where we have
to choose for them.

Luke


Re: Type::Class::Haskell does Role

2005-07-16 Thread Luke Palmer
On 7/16/05, Luke Palmer <[EMAIL PROTECTED]> wrote:
> I'm going to have some coffee mugs thrown at me for saying this, but perhaps:
> 
>              Generic      String          Numeric          Identity
>            +-----------+---------------+----------------+---------------+
> Equality   |    ==     |      =~=      |      =+=       |      =:=      |
>            +-----------+---------------+----------------+---------------+
> Comparison | > < >= <= | ~> ~< ~>= ~<= | +> <+ +<= +>=  | $a.id < $b.id |
>            +-----------+---------------+----------------+---------------+

Oh darn, I forgot !=.   The best I can come up with is:

!==    !=~=   !=+=   !=:=

Luke


Re: Type::Class::Haskell does Role

2005-07-17 Thread Luke Palmer
On 7/17/05, Yuval Kogman <[EMAIL PROTECTED]> wrote:
> I have another view.
> 
> The Num role and the Str role both consume the Eq role. When your
> class tries to both be a Num and a Str, == conflicts.
> 
> I have two scenarios:
> 
> class Moose does Num does Str { ... }
> 
> # Moose was populated with:
> multi method infix:<==> (Moose does Num, Moose does Num) { ... }
> multi method infix:<==> (Moose does Str, Moose does Str) { ... }

Which is an ambiguity error.

> OR
> 
> # Str and Num try to add the same short name, and a class
> # composition error happenns at compile time.

Which is a composition error.  So they both amount to the same thing.

> Recently I discussed MMD with chromatic, and he mentioned two things
> that were very important, in my opinion:
> 
> * The Liskov substitution principle sort of means that MMD
>   between two competing superclasses, no matter how far, is
>   equal

Amen.  For those of you who have not been exposed to this "paradox",
here's an example:

class A  {...}
class B is A {...}
class C is B {...}
class D is A {...}

multi foo(C $x, A $y) { "(1)" }
multi foo(A $x, D $y) { "(2)" }

If you call foo(C.new, D.new), (1) will be called (because it has
distance 1, while (2) has distance 2).  Now suppose I refactor to
prepare for later changes, and add two *empty* classes:

class A  {...}
class B is A {...}
class C is B {...}
class E is A { }
class F is E { }
class D is F {...}

Now if you call foo(C.new, D.new), (2) will be called instead of (1)
(because (1) has distance 3, while (2) still has distance 2).  That is
how Liskov subtly breaks.  As a matter of taste, classes that don't do
anything shouldn't do anything!  But here they do.  If I had only put
E in and omitted F, we would have moved from a functioning program to
a breaking one.
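The flip is easy to check mechanically.  Here is an illustrative Python model of the Manhattan metric over exactly those two hierarchies (a toy model of the metric, not a real dispatcher):

```python
def distance(cls, ancestor, parents):
    """Number of derivation steps from cls up to ancestor."""
    steps = 0
    while cls != ancestor:
        cls = parents[cls]
        steps += 1
    return steps

def manhattan_dispatch(arg_types, variants, parents):
    """Pick the variant whose summed per-parameter distance is smallest."""
    def total(name):
        return sum(distance(a, p, parents)
                   for a, p in zip(arg_types, variants[name]))
    return min(variants, key=total)

variants = {"(1)": ("C", "A"), "(2)": ("A", "D")}

before = {"B": "A", "C": "B", "D": "A"}                       # original
after  = {"B": "A", "C": "B", "E": "A", "F": "E", "D": "F"}   # + empty E, F

print(manhattan_dispatch(("C", "D"), variants, before))   # (1): total 1 vs 2
print(manhattan_dispatch(("C", "D"), variants, after))    # (2): total 3 vs 2
```

Nothing about the variants or the call changed; only the two empty classes were inserted, yet the winner moves from (1) to (2).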

> * Coercion of parameters and a class's willingness to coerce
>   into something is a better metric of distance

Well, if you think metrics at all are a good way to do dispatch.

> Under these rules the way this would be disambiguated is one of:
> 
> - Moose provided its own infix:<==>
> - Moose said which <==> it prefers, Num or Str. A syntax:
> 
> multi method infix:<==> from Str;
> 
>   (this could also be used for importing just part of a
>   hierarchy?)

Well, I like your proposal.  It's a very generics-oriented world view,
which I hold very dear.  However, you didn't actually solve anything. 
What happened to our numeric == and string eq.  Are you proposing that
we toss out string eq and let MMD do the work?

If I recall correctly the reason for == and eq existing and being
distinct is so that you don't have to do:

[EMAIL PROTECTED]()+$expression] == +%another{long($hairy % expression()) }
^..^

You can't see what's going on according to that operator, because the
eye scanning distance is too great.  We still need to satisfy the
scripters who like the distinction between numeric and string
comparison.  It's hard for me to argue for them, since I'm not one of
them.

One more possibility is operator adverbs.  We could assume that == is
generic unless you give it a :num or :str adverb:

$a == $b   # generic
$a == $b :num  # numeric
$a == $b :str  # string

But that has the eye scanning distance problem again.  Maybe that's a
flaw in the design of adverbs this time...

Luke


Re: The Use and Abuse of Liskov (was: Type::Class::Haskell does Role)

2005-07-19 Thread Luke Palmer
On 7/17/05, Damian Conway <[EMAIL PROTECTED]> wrote:
>  "You keep using that word. I do not think
>   it means what you think it means"
>   -- Inigo Montoya

Quite.  I abused Liskov's name greatly here.  Sorry about that.

Anyway, my argument is founded on another principle -- I suppose it
would be Palmer's zero-effect principle, that states:

"In absence of other information, a derived class behaves just
like its parent."

I can argue that one into the ground, but it is a postulate and
doesn't fall out of anything deeper (in my thinking paradigm, I
suppose).  My best argument is that, how can you expect to add to
something's behavior if it changes before you start?

Every SMD system that I can think of obeys this principle (if you
ignore constructors for languages like Java and C++).

And as I showed below, the Manhattan metric for MMD dispatch is
outside the realm of OO systems that obey this principle.  In fact,
I'm pretty sure (I haven't proved it) that any MMD system that relies
on "number of derivations" as its metric will break this principle.

> All of which really just goes to show that the standard LSP is
> simply not a concept that is applicable to multiple dispatch. LSP is a
> set of constraints on subtype relationships within a single hierarchy.
> But multiple dispatch is not an interaction mediated by a single-hierarchy
> subtyping relationship; it's an interaction *between* two or more hierarchies.

I agree, now.

>  > As a matter of taste, classes that don't do
>  > anything shouldn't do anything!  But here they do.
> 
> But your classes *do* do something that makes them do something. They
> change the degree of generalization of the leaf classes under an L[1]
> metric. Since they do that, it makes perfect sense that they also change
> the resulting behaviour under an L[1] metric. If the resulting behaviour
> didn't change then the L[1] *semantics* would be broken.

Yep.  And I'm saying that L[1] is stupid.  In fact, (as I believe
you've already picked up on), I'm not picking on the Manhattan
combinator in particular, but using a derivation metric at all!

In another message, you wrote:
> In MMD you have an argument of a given type and you're trying to find the most
> specifically compatible parameter. That means you only ever look upwards in a
> hierarchy. If your argument is of type D, then you can unequivocally say that
> C is more compatible than A (because they share more common components), and
> you can also say that B is not compatible at all. The relative derivation
> distances of B and D *never* matter since they can never be in competition,
> when viewed from the perspective of a particular argument.
> 
> What we're really talking about here is how do we *combine* the compatibility
> measures of two or more arguments to determine the best overall fit. Pure
> Ordering does it in a "take my bat and go home" manner, Manhattan distance
> does it by weighing all arguments equally.

For some definition of "equal".  In the message that this was
responding to (which I don't think is terribly important to quote) I
was referring to the absurdity of "number of derivations" as a metric.
Picture a mathematician's internal monologue:

Well, B is a subset of A, C is a subset of A, and D is a subset of
C.  Clearly, D has fewer elements than B.

This mathematician is obviously insane.  Then I think about the way
you're using numbers to describe this, and I picture his friend
responding to his thoughts (his friend is telepathic) with:

You can make a stronger statement than that: The difference of the
number of elements between A and D is exactly twice the difference of
elements between A and B.

And now maybe you see why I am so disgusted by this metric.  You see,
I'm thinking of a class simply as the set of all of its possible
instances.  And then when you refer to L[1] on the number of
derivations, I put it into set-subset terms, and mathematics explodes.

Here's how you can satisfy me: argue that Palmer's zero-effect
principle is irrelevant, and explain either how Manhattan dispatch
makes any sense in a class-is-a-set world view, or why that world view
itself doesn't make sense.

Or just don't satisfy me.

Luke


Re: Hash creation with duplicate keys

2005-07-21 Thread Luke Palmer
On 7/20/05, Ingo Blechschmidt <[EMAIL PROTECTED]> wrote:
> Hi,
> 
> # Perl 5
> my %hash = (a => 1, b => 2, a => 3);
> warn $hash{a};   # 3
> 
> But I vaguely remember having seen...:
> 
> # Perl 6
> my %hash = (a => 1, b => 2, a => 3);
> say %hash<a>;   # 1
> 
> Can somebody confirm this?

Yes.  This is for the case:

foo(a => 1, *%defaults);

So that foo's $a gets one even if it exists in %defaults.
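For what it's worth, Python's dict merging has the mirror-image rule (the last duplicate wins), so the same defaults idiom puts the override after the defaults; a sketch for comparison only:

```python
defaults = {"a": 99, "b": 2}

# Perl 6 (as described above): foo(a => 1, *%defaults) keeps a => 1
# because the FIRST duplicate wins.  Python keeps the LAST, so the
# explicit override goes at the end instead:
args = {**defaults, "a": 1}
print(args)   # {'a': 1, 'b': 2}
```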

Luke


Re: Do slurpy parameters auto-flatten arrays?

2005-07-27 Thread Luke Palmer
On 7/26/05, Ingo Blechschmidt <[EMAIL PROTECTED]> wrote:
> Hi,
> 
> are the following assumptions correct?
> 
>   sub foo (*@args) { @args[0] }
> 
>   say ~foo("a", "b", "c"); # "a"

Yep.

>   my @array = <a b c d>;
>   say ~foo(@array);# "a b c d" (or "a"?)
>   say ~foo(@array, "z");   # "a b c d" (or "a"?)

"a" for both of these.  The *@ area behaves just like Perl 5's calling
conventions.  I could argue for never auto flattening arrays, but then
there'd really be no difference between @ and $.

>   say ~foo(*@array);   # "a"
>   say ~foo(*(@array, "z"));# "a"

Hmm, *(@array, "z")... what does that mean?  Whatever it means, you're
correct in both of these.  In the latter, the @array is in a
flattening context, so it gets, well, flattened.

>   sub bar (*@args) { +@args }
> 
>   say bar(1,2,3);  # 3
>   say bar(@array); # 1 (or 4?)

4

>   say bar(@array, "z");# 2 (or 5?)

5

>   say bar(*@array);   # 4

Yep.
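Python's *args draws the same line, except that there the flattening is always explicit at the call site; an illustrative analogy (not the Perl 6 semantics themselves):

```python
def foo(*args):
    return args[0]

def bar(*args):
    return len(args)      # plays the role of +@args

array = ["a", "b", "c", "d"]

print(foo("a", "b", "c"))    # a
print(foo(*array))           # a  (flattened, like Perl 5 calling conventions)
print(foo(array))            # ['a', 'b', 'c', 'd']  (whole array as one argument)
print(bar(*array))           # 4
print(bar(*array, "z"))      # 5
```

The unstarred call `foo(array)` is the behavior a `$` parameter would give: the array arrives as a single item rather than a list of elements.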

Luke


Re: Exposing the Garbage Collector (Iterating the live set)

2005-07-27 Thread Luke Palmer
On 7/26/05, "TSa (Thomas Sandlaß)" <[EMAIL PROTECTED]> wrote:
> Piers Cawley wrote:
> > I would like to be able to iterate over all the
> > objects in the live set.
> 
> My idea actually is to embed that into the namespace syntax.
> The idea is that of looking up non-negative integer literals
> with 0 being the namespace owner.
> 
>for ::Namespace -> $instance
>{
>   if +$instance != 0 { reconfigure $instance }
>}

Oh, so a namespace can behave as an array then.  Well, to avoid
auto-flattening problems in other, more common places, we ought to
make that:

for *::Namespace -> $instance {...}

However, this is very huffmanly incorrect.  First of all, what you're
doing may take a quarter of a second or more for a large program
(which isn't a short amount of time by any means).  Second, the fact
that you're using it means you're doing something evil.  Third, only a
fraction of 1/omega perl programmers will be using this feature. 
Therefore, it should probably look like:

use introspection;
for introspection::ALL_INSTANCES(::Namespace) -> $instance {...}

And it might even be platform-specific, given the constraints of some
of our targets.

Luke


Messing with the type hierarchy

2005-07-27 Thread Luke Palmer
http://repetae.net/john/recent/out/supertyping.html

This was a passing proposal to allow supertype declarations in
Haskell.  I'm referencing it here because it's something that I've had
in the back of my mind for a while for Perl 6.  I'm glad somebody else
has thought of it.

Something that is worth looking into is Sather's type system.  I
haven't read anything about it yet, but worry not, I will.

Anyway, on to the proposal.

I've often thought that it would be very powerful to put types
in-between other types in the hierarchy.  This includes making
existing types do your roles (which Damian describes as "spooky action
at a distance").  Allow me to provide an example.

Let's say that Perl 6 does not provide a complex number class by
default.  How would you go about writing one?  Well, let's follow the
standard Perl practice of turning the words your users are supposed to
say in their code into roles.

role Complex { 
# implementation details are unimportant (as always :-p)
}

Now, where does it belong in the type hierarchy so it can interact well
with standard types?  It belongs *above* Num (and below whatever is
above Num).  Everything that is a Num is a Complex, right?  It just has
a zero imaginary part.  But currently that is impossible.  So we have
to define conversions, which behave quite differently from simple
interface compatibilities.  For one, we have to reference a concrete
complex type.  Basically, we've made Complex feel like an outsider to
the Perl standard hierarchy.

As another example, let's say I'm implementing my own Junction class,
MyJunction.  I want it to be lower precedence than standard Junctions
(for an appropriate definition of precedence; in this case, it means
it will be threaded first).  Put aside for the moment how we define
the multimethods for all existing subs at once.

The safe way to implement a threading object is to give it its own
level in the type hierarchy.  For example, to do Junction, we
structure the type hierarchy like so:

Any
 |- Junction
 |- JunctionObject   # or some other appropriate name
     |- Object
         |- ...

Then we can safely define MMD variants and be sure that they won't
change their semantics when derivation levels change under manhattan
distance.  Under pure ordering, we prevent against ambiguity errors
(which is in fact how I came up with this pattern).

So, anyway, to define MyJunction, I'd like the hierarchy to look like this:

Any
 |- MyJunction
 |- MyJunctionObject
     |- Junction
     |- JunctionObject
         |- Object
             |- ...

This is a case where it is absolutely necessary to supertype in order
to achieve certain semantics.

Okay, now that I have the need out of the way, the syntax is obvious:

role Complex
does Object
contains Num
{...}

There is probably a better word than "contains".  I was thinking set
theory when I came up with that one.

It might be wise only to allow this for roles if we go with a
derivation metric-based MMD scheme.  Allowing this for classes would
mean that you could add depth to the metric that the author of the
classes in question didn't intend, and I've already shown that you can
screw up manhattan MMD semantics at a distance, spookily, if you do
that.
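As an aside, Python's abc module later provided a limited version of this "make existing types do your role" idea via virtual subclassing; a sketch of just the membership half (it does nothing for supertyping or dispatch metrics, which is the part Perl 6 would add):

```python
from abc import ABC

class Complex(ABC):
    """A role-like marker type: it declares membership, adds no behavior."""

# Spooky action at a distance: the built-in numeric types now
# "do" Complex without their definitions being touched.
Complex.register(int)
Complex.register(float)

print(isinstance(3, Complex))     # True
print(isinstance(2.5, Complex))   # True
print(isinstance("x", Complex))   # False
```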

Luke


Re: The Use and Abuse of Liskov

2005-07-27 Thread Luke Palmer
On 7/19/05, Damian Conway <[EMAIL PROTECTED]> wrote:
> > And now maybe you see why I am so disgusted by this metric.  You see,
> > I'm thinking of a class simply as the set of all of its possible
> > instances.
> 
> There's your problem. Classes are not isomorphic to sets of instances and
> derived classes are not isomorphic to subsets.

Ahh, I understand now.  If you think that way, then there is no way to
convince you, since that is the piece of mathematics that my whole
argument is based on.  Please seriously consider this world model and
its implications (especially regarding my new thread about
superclassing).  I'll give up on the theoretical side now.

~

I've just released Class::Multimethods::Pure, which gives an account of
how pure ordering works in practice.  As the first case study, see
Class::Multimethods::Pure itself (it bootstraps itself :-).  The
ambiguities it pointed out to me turned out to be very important
design-wise, and I noticed that under manhattan distance it would have
silently worked and then broken (in ambiguity) later.

Readers, do your best to follow along.  This is pretty complex, but
that's exactly what I'm arguing: that a derivation metric like
Manhattan will deceive you when things get complex.

The piece was the junction factoring that I described in my other
thread (I use junctions in MMD to implement type junctions).  At first
I had this model:

Object
 |- Junction
     |- Disjunction
     |- Conjunction
     |- Injunction
 |- Constrained
     |- Subtype
 |- PackageType
 |- ...

And the multis defined as:

multi subset (Junction, Object)   {...}
multi subset (Object, Junction)   {...}
multi subset (Junction, Junction) {...}

Which made recursive calls to subset on their constituent types.  The
various Junction subclasses have a "logic" method which knows how to
evaluate the junction in boolean context.  I also had:

multi subset (Subtype, Object)  {...}
multi subset (Object, Subtype)  {...}
multi subset (Subtype, Subtype) {...}

Then:

multi subset (Package, Package) {...}

Etc. for all the other non-combinatoric types, and:

multi subset (Object, Object) { 0 }

As the fallback.  Naturally, when I called:

subset(Disjunction.new(...),  Subtype.new(...))

I got an ambiguity.  Did you mean (Junction, Object) or (Object,
Subtype)?  Something was wrong with my design: I needed to structure
my types to tell the MMD system which one I wanted to thread first. 
This is an error that you'd expect, right?  I didn't tell the compiler
something it needed to know.

However, look at the applicable candidates:

subset(Junction, Object)   # 1 + 2 = 3
subset(Object, Subtype)    # 2 + 0 = 2
subset(Object, Object)     # 2 + 2 = 4

The second variant, (Object, Subtype), matches.  Oh goody, it worked! 
Now I can go on my merry way documenting and releasing my module.

Now Mr. Joe Schmoe comes along and decides that he wants to write a
new subtype type -- one that accepts his new statically-analyzable
subtyping language or something.  He decides to reuse code and derive
from the existing Subtype type.  The new type hierarchy follows:

Object
|- Junction
   |- Disjunction
   |- Conjunction
   |- Injunction
|- Constrained
   |- Subtype
  |- MagicSubtype# the new type
|- PackageType
|- ...

Now look at what happens for subset(Disjunction.new(...),
MagicSubtype.new(...)) (distances computed against my actual
hierarchy, which by this point lacked the Constrained intermediary):

subset(Junction, Object) # 1 + 2 = 3
subset(Object, Subtype)  # 2 + 1 = 3
subset(Object, Object)   # 2 + 2 = 4

Oh no!  An ambiguity!  What the hell, Joe's just trying to extend
Subtype a little, and now he has to write a specialized MMD variant
just for that, which delegates *exactly* to the (Object, Subtype)
variant.

I'll also point out that if you remove the Constrained intermediate
type, which I did (!), you also end up in ambiguity for the call
subset(Disjunction.new(...), Subtype.new(...)).

And that's it.  Two innocent changes, and a working program breaks
into ambiguity errors.  And the person who sees the ambiguity errors
is not the person who wrote -- or even touched -- the multimethods. 
Keep in mind: these multimethods could be for internal use, so the
extender may not even know they exist.
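The case study can be run in numbers.  Below is a small Manhattan-metric
dispatcher sketch in Python (a stand-in for illustration, not
Class::Multimethods::Pure itself; the class names mirror the hierarchy
above), showing how dropping the Constrained intermediary turns a unique
winner into a silent tie:

```python
from math import inf

def dist(cls, target, parents):
    """Derivation steps from cls up to target; inf if unrelated."""
    steps, cur = 0, cls
    while cur is not None:
        if cur == target:
            return steps
        cur, steps = parents.get(cur), steps + 1
    return inf

def winners(args, variants, parents):
    """All variants tied for the smallest summed (Manhattan) distance."""
    score = {v: sum(dist(a, p, parents) for a, p in zip(args, v))
             for v in variants}
    best = min(score.values())
    return sorted(v for v, s in score.items() if s == best)

variants = [("Junction", "Object"), ("Object", "Subtype"),
            ("Object", "Object")]
call = ("Disjunction", "Subtype")

with_constrained = {"Junction": "Object", "Disjunction": "Junction",
                    "Constrained": "Object", "Subtype": "Constrained"}
without_constrained = {"Junction": "Object", "Disjunction": "Junction",
                       "Subtype": "Object"}

print(winners(call, variants, with_constrained))
# [('Object', 'Subtype')] -- a unique winner, so everything "works"
print(winners(call, variants, without_constrained))
# [('Junction', 'Object'), ('Object', 'Subtype')] -- now ambiguous
```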

Using pure ordering, we saw the ambiguity early and were forced to
think about the design and come up with one that passed the tests. 
When I did that, I was able to factor things to avoid duplication and
needless disambiguating variants[1].  It is impossible to break the
new factoring by simply deriving from any class.  You would have to
add a new generic, like Junction, to the top in order to break
existing code.  Manhattan distance suffers from the same problem.  See
the supertyping thread for a solution :-)

I'm seeing after this case study, and something that I suspected all
along, that Manhattan MMD is to pure ordering as mixins are to roles. 
Roles don't provide any extra semantic

Re: The Use and Abuse of Liskov (was: Type::Class::Haskell does Role)

2005-07-27 Thread Luke Palmer
I just realized something that may be very important to my side of the
story.  It appears that I was skimming over your example when I should
have been playing closer attention:

On 7/18/05, Damian Conway <[EMAIL PROTECTED]> wrote:
> Consider the following classes:
> 
>    class A           {...}   #   A       B
>    class B           {...}   #           |
>    class C is B      {...}   #           C   D
>    class D           {...}   #            \ /
>    class E is C is D {...}   #             E
> 
>multi sub foo(A $x, B $y) { "(1)" }
>multi sub foo(A $x, C $y) { "(2)" }
> 
>foo(A.new, E.new);
> 
> Clearly this produces "(2)" under either Pure Ordering or Manhattan Metric.
> 
> Now we change the class hierarchy, adding *zero* extra empty classes
> (which is surely an even stricter LSP/Meyer test-case than adding one
> extra empty class!)
> 
> We get this:
> 
>    class A           {...}   #   A       B
>    class B           {...}   #          / \
>    class C is B      {...}   #         C   D
>    class D is B      {...}   #          \ /
>    class E is C is D {...}   #           E
> 
>multi sub foo(A $x, B $y) { "(1)" }
>multi sub foo(A $x, C $y) { "(2)" }
> 
>foo(A.new, E.new);
> 
> Manhattan Metric dispatch continues to produce "(2)", but Pure Ordering
> now breaks the program.

Um... no it doesn't.  Pure ordering continues to produce "(2)" as
well.  Here are the precise semantics of the algorithm again:

A variant a is said to be _more specific than_ a variant b if:
* Every type in a's signature is a subset (derived from or
equal) of the corresponding type in b's signature.
* At least one of these is a proper subset (not an equality).
A variant is dispatched if there is a unique most specific method
applicable to the given arguments. That is, there exists an applicable
variant a such that for all other applicable variants x, a is more
specific than x.

A is equal to A, and C is a proper subset of B, therefore the latter
variant is more specific than the former.  Since the latter is the
unique most specific variant, and it is applicable to the argument
types, it is chosen.
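The rule as quoted can be checked mechanically.  Here is a Python sketch
(a stand-in dispatcher, not real Perl 6; names mirror Damian's second
hierarchy, where C and D both derive from B):

```python
def derives(x, y, supers):
    """True if x is y or is (transitively) derived from y."""
    seen, todo = set(), [x]
    while todo:
        c = todo.pop()
        if c == y:
            return True
        if c not in seen:
            seen.add(c)
            todo.extend(supers.get(c, ()))
    return False

def more_specific(a, b, supers):
    """Every parameter of a is a subset of b's, at least one properly."""
    return (all(derives(x, y, supers) for x, y in zip(a, b))
            and any(x != y for x, y in zip(a, b)))

def dispatch(args, variants, supers):
    applicable = [v for v in variants
                  if all(derives(a, p, supers) for a, p in zip(args, v))]
    best = [v for v in applicable
            if all(v == w or more_specific(v, w, supers)
                   for w in applicable)]
    return best[0] if len(best) == 1 else None   # None means ambiguous

supers = {"C": ["B"], "D": ["B"], "E": ["C", "D"]}
variants = [("A", "B"), ("A", "C")]

print(dispatch(("A", "E"), variants, supers))   # ('A', 'C') -- "(2)"
```

The first hierarchy (D not deriving from B) gives the same answer, since
the comparison between the two variants never involves D.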

How did you think pure ordering worked?  Were we arguing over
different definitions of that algorithm?

Luke


Re: Messing with the type heirarchy

2005-07-27 Thread Luke Palmer
On 7/27/05, Ingo Blechschmidt <[EMAIL PROTECTED]> wrote:
> Luke Palmer wrote:
> > role Complex
> > does Object
> > contains Num
> > {...}
> 
> I've probably misunderstood you, but...:
> 
> role Complex does Object {...}
> Num does Complex;
> # That should work and DWYM, right?

Supposing that you can actually do that, and that "Num does Complex"
gets executed at compile time.  I didn't know you could add "does"
declarations to classes referring to other classes (rather than making
the class object do a metaclass role... though I admit that that would
only be warranted by a pretty bizarre situation).


Re: Messing with the type heirarchy

2005-07-30 Thread Luke Palmer
On 7/27/05, Larry Wall <[EMAIL PROTECTED]> wrote:
> On Wed, Jul 27, 2005 at 11:00:20AM +0000, Luke Palmer wrote:
>> Everything that is a Num is a Complex right?
> 
> Not according to Liskov.  Num is behaving more like a constrained
> subtype of Complex as soon as you admit that "isa" is about both
> implementation and interface.  By the interface definition it's
> slightly truer to say that Complex is a Num because it extends Num's
> interface.  But this is one of the standard OO paradoxes, and we're
> hoping roles are the way out of it. 

Well, everything that is a Num is a Complex in a value-typed world,
which Num and Complex are in.  I don't like reference types much
(though I do admit they are necessary in a language like Perl), and
I'm not sure how this fits there anymore.  Anyway, that's beside the
point, since a supertyping need is still there for referential types.

Luke


Re: Curious use of .assuming in S06

2005-07-31 Thread Luke Palmer
On 7/29/05, Autrijus Tang <[EMAIL PROTECTED]> wrote:
> In S06's Currying section, there are some strange looking examples:
> 
> &textfrom := &substr.assuming(:str($text) :len(Inf));
> 
> &textfrom := &substr.assuming:str($text):len(Inf);
> 
> &woof ::= &bark:(Dog).assuming :pitch;
> 
> Why is it allowed to omit comma between adverbial pairs, and even
> omit parens around method call arguments?  Is .assuming a special form?

I don't think you can pass pairs without parens on method calls that
way.  However, omitting commas between argument pairs has been a main
idea from the beginning of the colon pair syntax.

Luke


Re: zip with ()

2005-08-01 Thread Luke Palmer
On 8/1/05, Ingo Blechschmidt <[EMAIL PROTECTED]> wrote:
> In general, (@foo, @bar) returns a new list with the element joined,
> i.e. "@foo.concat(@bar)". If you want to create a list with two sublists,
> you've to use ([EMAIL PROTECTED], [EMAIL PROTECTED]) or ([EMAIL PROTECTED], 
> [EMAIL PROTECTED]). But of course, I could
> be totally wrong. :)

I think that's right.  However, it might be a good idea not to
auto-enreference such bare lists:

sub foo ($x) {...}
foo (1,2,3,4,5);   # foo gets called with [1,2,3,4,5]

When you could just as easily have said:

foo [1,2,3,4,5];

And we'll probably catch a lot of Perl 5 switchers that way.  That
actually makes a lot of sense to me.  The statement:

my $x = (1,2,3,4,5);

Looks like an error more than anything else.  That's the "scalar
comma", which has been specified to return a list.  But maybe it
should be an error.  The main reason that we've kept a scalar comma is
for:

loop (my $x = 0, my $y = 0; $x*$y <= 16; $x++, $y++)
{...}

However, I think we can afford to hack around that.  Make the first
and last arguments to loop take lists and just throw them away.  Can
anyone think of any other common uses of the scalar comma?

Luke


Re: Do slurpy parameters auto-flatten arrays?

2005-08-03 Thread Luke Palmer
On 8/3/05, Aankhen <[EMAIL PROTECTED]> wrote:
> On 8/3/05, Piers Cawley <[EMAIL PROTECTED]> wrote:
> > So how *do* I pass an unflattened array to a function with a slurpy 
> > parameter?
> 
> Good question.  I would have thought that one of the major gains from
> turning arrays and hashes into references in scalar context is the
> ability to specify an unflattened array or a hash in a sub call
> without any special syntax...

Well, you can, usually.  The tricky part is specifically the flattening
context.  In most cases, for instance:

sub foo ($a, $b) { say $a }
my @a = (1,2,3);
foo(@a, "3");

Passes the array into $a.  If nothing flattened by default, then you'd
have to say, for example:

map {...} *@a;

And even:

for *@a -> $x {...}

Which I'm not sure people want.  

And the way you pass an array in slurpy context as a single reference
is to backwhack it.  What it comes down to is that either you're
backwhacking things a lot or you're flattening things a lot.  Perl
currently solves it by making the common case the default in each
"zone" of parameters.

I would be interested to hear arguments to the contrary, however. 

Luke


Re: What role for exceptional types?

2005-08-03 Thread Luke Palmer
On 8/3/05, Nigel Hamilton <[EMAIL PROTECTED]> wrote:
> Instead of passing the "buck" from object to object via parameter lists
> and type inference (traversing OO hierarchies etc) maybe we could ..
> 
> Model the flow of control through a program as a simple linear queue of
> topic changes. A central "Controller" holds the current "Topic" and guides
> the conversation between "Objects"? Rather than the Topic moving from
> lexical scope to lexical scope it stays in one place (i.e., it doesn't get
> dispatched).
> 
> The Controller only discusses the current Topic with Objects that are
> qualified to listen (e.g., based on their Role/Type etc). Objects that
> earn the ear of the Controller can change the Topic and influence what
> happens next. The Controller doesn't dispatch to Objects based on
> parameter lists - instead it grants access to the current Topic based on
> the Types/Roles of interested objects - messages are not passed around
> with traditional parameter lists but via changes in the central Topic.
> 
> As a programmer you spend time modelling how your Objects respond to
> events and keeping the Controller on Topic. The Topic also contains the
> current scratchpad of variables so the need to transfer parameters is
> eliminated/reduced - they now form part of the current Topic.
> 
> System events (e.g., socket closed) could then propagate into your program
> via the Controller. For example, a new topic has arrived, "socket closed
> exception" - the Controller then finds an interested object.
> 
> In traditional OO models exception objects propagate up until somebody
> catches them. Imagine if your program worked like exceptions currently do?
> Instead of raising exceptions you constantly "raise a new topic". The
> Controller then must decide what object should handle the change in topic
> (i.e., who catches the Exception).
> 
> Which I think brings me around to the initial problem ... hmm.

You're worried that the flow of control could be unpredictable and
then propose this? :-)

Anyway, I think you have an interesting idea for an experimental
language or module.  If it works well, then maybe it can underlie Perl
and be hidden from everyone but the interested user.  That's how we're
putting in most advanced features these days.

Write a Perl 5 module that implements this control style, if you think
it's possible.

Luke


If topicalization

2005-08-03 Thread Luke Palmer
I vaguely recall that we went over this already, but I forgot the
conclusion if we did.

In Damian and Larry's talk here at OSCON, I saw the example:

if foo() -> $foo {
# use $foo
}

How can that possibly work?  If a bare closure { } is equivalent to ->
?$_ is rw { }, then the normal:

if foo() {...}

Turns into:

if foo() -> ?$_ is rw { }

And every if topicalizes! I'm sure we don't want that.

Luke


Re: $pair[0]?

2005-08-04 Thread Luke Palmer
On 8/4/05, Ingo Blechschmidt <[EMAIL PROTECTED]> wrote:
> Hi,
> 
> my $pair = (a => 1);
> say $pair[0];  # a?
> say $pair[1];  # 1?
> 
> I've found this in the Pugs testsuite -- is it legal?

Nope.  That's:

say $pair.key;
say $pair.value;

Also:

say $pair<a>;   # 1
say $pair{anything else};   # undef

But we don't implicitly cast references like that.

Luke


Re: undef.chars?

2005-08-04 Thread Luke Palmer
On 8/4/05, Ingo Blechschmidt <[EMAIL PROTECTED]> wrote:
> Hi,
> 
> (found in the Pugs testsuite.)
> 
> my $undef = undef;
> say $undef.chars?   # 0? undef? die?
> say chars $undef;   # 0? undef? die?
> 
> I'd opt for "undef.chars" to be an error ("no such method") and "chars
> undef" to return 0 (with a warning printed to STDERR^W$*ERR).

Well, I think that "chars $undef" should be exactly equivalent to
"$undef.chars".  In fact, I think it is: "chars $undef" is just the
indirect object form.

So perhaps "method not found" errors "fail" instead of "die".

Luke


Data constructors / Unidirectional unification

2005-08-04 Thread Luke Palmer
I'm writing a new module that optimizes sets of conditions into
decision trees.  Initially I allowed the user to specify conditions as
strings, and if that condition began with a "!", it would be the
inverse of the condition without the "!".

But then I thought, "the user will more than likely have condition
*objects* if the conditions are anything but trivial".  Then you can't
just put a "!" on the front.  The way Haskell and ML do this is by
allowing data constructors: symbols that can take arguments and be
pattern matched against.  I thought that this was a particularly
elegant way to solve the problem, so I implemented it in the
Symbol::Opaque module.  Now I want it for Perl 6.

Here's my proposal.  Let's generalize the backtick from unit support
into data constructor support.  The following are equivalent:

4`meters
`meters(4)

The postfix form is only available for single-argument constructors,
but the prefix form can be used with more than one argument:

`foo(4, 5)

These things don't need to be declared, but you can use a "data"
declaration to give them a type (which does Symbol, the type of all
such constructors):

data Quux (`foo, `bar, `baz);

Now whenever you create a `foo, it is a Quux.  These can overlap:

data Foo (`baz);
data Bar (`baz);

A `baz object is now both a Foo and a Bar.  These could be easily
extended to allow type signatures, to come up with those nice
type-checked data structures that we're using for PIL.  But I'm not
proposing that part yet.

Here's what makes them so useful:  they can be bound against:

sub to_SI (`meters($m)) { `meters($m) }
sub to_SI (`feet($f))   { `meters(feet_to_meters($f)) }

Here's an excerpt from my module (perl6ized):

sub invert ($in) {
my `not($x) := $in ?? $x :: `not($in);
}

Or maybe that's:

sub invert ($in) {
`not(my $x) := $in ?? $x :: `not($in);
}

Anyway, the point is that bindings can fail.  In boolean context, they
return whether they succeed; in void context, they blow up if they
fail (probably "fail").

As multimethods:

multi invert (`not($x)) { $x }
multi invert ($x)   { `not($x) }

Which I like the best.

Pairs are values: like numbers.  `foo =:= `foo.  They can just have sub-values.
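A rough Python stand-in for the proposal (this is not Symbol::Opaque's
real API; Sym, sym, and invert are illustrative names): data
constructors are values that compare by name and sub-values, and a
failed match lets a multi fall through to the next variant:

```python
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass(frozen=True)
class Sym:
    """An opaque data constructor: a name plus sub-values."""
    name: str
    args: Tuple[Any, ...]

def sym(name, *args):
    return Sym(name, args)

def invert(x):
    # multi invert (`not($x)) { $x }
    if isinstance(x, Sym) and x.name == "not" and len(x.args) == 1:
        return x.args[0]
    # multi invert ($x) { `not($x) }
    return sym("not", x)

assert sym("meters", 4) == sym("meters", 4)   # `foo =:= `foo: pure values
assert invert(sym("not", "raining")) == "raining"
assert invert("raining") == sym("not", "raining")
```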


Re: Reassigning .ref and .meta? Rebinding class objects?

2005-08-05 Thread Luke Palmer
On 8/5/05, Ingo Blechschmidt <[EMAIL PROTECTED]> wrote:
> Hi,
> 
> my $str   = "Hello";
> $str.ref  = Int;  # allowed?
> $str.meta = &some_sub.meta;   # allowed?

I hardly think those work.  Both of those require a change of
implementation, which we can't do generically.  So people would have
to specify how the implementation changes in that way, which they
generally won't/can't.  You can do the first using user-defined
methods like so:

$str = $str as Int# $str as= Int ?

I'm not sure about the latter.

> my $str   = "Hello";
> Str ::= Int;  # allowed?
> ::Str   ::= ::Int;# or is this allowed?

One of these two is probably allowed.  But maybe it should be louder.

> say $str; # still "Hello"? Or is it an Int now?

That's a hard question.  If say is implemented like so (this is
hypothetical; it will surely be implemented in terms of print):

multi say () { internal_put_str "\n" }
multi say (*$first, *@stuff) {
given $first {
when Str { internal_put_str $first }
when Int { internal_put_int $first }
...
}
say @stuff;
}

Then the first case becomes equivalent to "when Int", which would call
internal_put_str with an Int, which could be very dangerous.  This is
why rebinding names globally is a bad idea.  And in that case, I don't
know how or whether we should provide the ability.

Globally subclassing, however, isn't so dangerous:

Str does role {
method blah () {...}
};

But then comes to your question:

my $foo = "hello";
Str does role {
method blah () {...}
};
$foo.blah;   # allowed

That is, do existing Strs all "does" the new role as well?

Luke


Re: Various questions on .ref and .meta

2005-08-05 Thread Luke Palmer
On 8/5/05, Ingo Blechschmidt <[EMAIL PROTECTED]> wrote:
> Hi,
> 
> ~Str;# ""? "Str"?

"Str"

> ~::Str;  # ""? "Str"?

I don't know how :: works anymore.  I'll avoid these.

> ~Str.meta;   # ""? (fill in please)?

"Class"

> ~::Str.meta; #  ""? (fill in please)?
> 
> +Str; +::Str;
> +Str.meta; +::Str.meta;  # all errors?

Yep... unless Str somehow specifies that it can be numified, which it shouldn't.

> "some string".ref =:= Str;   # true?

Yeah, if .ref is what we end up calling it.

> Str.ref; # (fill in please)

Class ?

> Str.meta.ref;# (fill in please)

ETOOMUCHMETA at Luke line 40.

I don't know about this one.

Luke


Re: $object.meta.isa(?) redux

2005-08-10 Thread Luke Palmer
On 8/9/05, Larry Wall <[EMAIL PROTECTED]> wrote:
> So why not just use "describes"?  Then maybe Object.isa(Foo) delegates
> to $obj.meta.describes(Foo).

Hmm.  We have a similar problem with the new class-set notation. 
These two things:

$a.does(Foo);
Bar.does(Foo);

Mean two different things:

$a (e) Foo;# or whatever we decide for `elem`
Bar (<=) Foo;

"does" is a nice dwimmy name--it works linguistically for both of
those concepts--but it would be nice to have two names that are
unambiguous for the two cases that "does" delegates to.

Of course, we could use "element" and "subset", but that doesn't work
well for people who don't like to think of types in terms of sets. 
Infinite sets aren't exactly beginner-friendly.

Any ideas?  I've never been very good at coming up with good names.

Luke


Re: Perl 6 Meta Object Protocols and $object.meta.isa(?)

2005-08-10 Thread Luke Palmer
On 8/10/05, Autrijus Tang <[EMAIL PROTECTED]> wrote:
> But it's an toplevel optimization, which is not applicable to
> module authors.  So I'd very much welcome a lexical pragma that
> forces static binding of subroutine calls.

Yeah, but the whole point of not allowing that is so that you can
override modules when you need to.  However, I think the toplevel
closed requirement might be a little too harsh.  After all, you could
just say:

use dynamic 'Foo';

Or something, to unforce that on a module's behalf.  That declaration
should probably not load the module for you, so that you can force
dynamicity on modules that other modules use.

use dynamic;

Would force every module to be dynamic, and should probably only be
allowed at the top level.

In summary: You're allowed to say "I don't do funny stuff to myself",
and someone else is allowed to say "but I do funny stuff to you".

The only thing I see as problematic is a module that knows it's evil
and has to "use dynamic". The module could request that the user of
the module "use dynamic", so that people know when they're using a
module that will slow everything down.  The other way is to hack
around it using something like a source filter, pretending that the
user of the module actually did write that.  But that's not $&,
because it's quite a bit more difficult to do. If you encapsulate that
difficulty into a module, that module will eventually be shunned by
the speed-centric part of the community.

Luke


Re: $object.meta.isa(?) redux

2005-08-10 Thread Luke Palmer
On 8/10/05, TSa <[EMAIL PROTECTED]> wrote:
> HaloO,
> 
> Luke Palmer wrote:
> > On 8/9/05, Larry Wall <[EMAIL PROTECTED]> wrote:
> >
> >>So why not just use "describes"?  Then maybe Object.isa(Foo) delegates
> >>to $obj.meta.describes(Foo).
> >
> >
> > Hmm.  We have a similar problem with the new class-set notation.
> > These two things:
> 
> Did I miss something? What is the class-set notation?

A new development in perl 6 land that will make some folks very happy.
 There is now a Set role.  Among its operations are (including
parentheses):

(+)   Union
(*)   Intersection
(-)   Difference
(<=)  Subset
(<)   Proper subset
(>=)  Superset
(>)   Proper superset
(in)  Element
(=)   Set equality

I believe the unicode variants are also allowed.
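For readers who want to poke at the semantics, the operators above map
one-for-one onto Python's built-in set operators (the values here echo
the <1 dog 42 cat 666.5> example from the related thread):

```python
a = {1, "dog", 42, "cat", 666.5}
b = {42, "cat", 7}

assert a | b == {1, "dog", 42, "cat", 666.5, 7}   # (+)  union
assert a & b == {42, "cat"}                       # (*)  intersection
assert a - b == {1, "dog", 666.5}                 # (-)  difference
assert {1, 42} <= a                               # (<=) subset
assert {1, 42} < a                                # (<)  proper subset
assert a >= {1, 42}                               # (>=) superset
assert a > {1, 42}                                # (>)  proper superset
assert "dog" in a                                 # (in) element
assert a == {666.5, "cat", 42, "dog", 1}          # (=)  set equality
```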

And now we're doing away with junctive types in favor of set types. 
This was mostly because:

(A|B) (<=) A
Expands to
A (<=) A || B (<=) A

Which is true.  Such a type cannot exist, lest every type be equal to
every other type.

So now Type does Set, so we have:

A (+) B # type representing a value that is either A or B
A (*) B # type representing a value that is both A and B
A (-) B # type representing a value that is A but not B
etc.

So types are merely sets of (hypothetical, eventual) instances,
together with a few operations to make introspection etc. easier.

You might even get Piers's all-instance introspection by Foo.elements.
 That's not a guarantee though.

> > $a.does(Foo);
> > Bar.does(Foo);
> >
> > Mean two different things:
> 
> Really?

Yep:

$a.does(Foo)   $a (in) Foo
Bar.does(Foo)  Bar (<=) Foo

Assuming that $a is not a type.

If there is any extra structure beyond set structure that we need in a
type lattice, I think it's your job to tell us what that is.

Luke


Re: Perl 6 Meta Object Protocols and $object.meta.isa(?)

2005-08-10 Thread Luke Palmer
On 8/10/05, TSa <[EMAIL PROTECTED]> wrote:
> Here is an example of a 2D distance method
> 
>role Point
>{
>  has Num $.x;
>  has Num $.y;
>}
>method distance( Point $a, Point $b --> Num )
>{
>   return sqrt( ($a.x - $b.x)**2 + ($a.y - $b.y)**2 );
>}
> 
> Now comes the not-yet-Perl6 part:

I wouldn't say that...

>Complex does Point where { $.x := $.real; $.y := $.imag }

use self './';

Complex does role {
does Point;
method x () is rw { ./real }
method y () is rw { ./imag }
}

>Array[Num] does Point
>  where { $.size >= 2 or fail;
>  $.x := @.values[0]; $.y := @.values[1] }

# This one is quite a bit more dubious than the last
Array[Num] where { .elems >= 2 } does role {
does Point;
method x () is rw { ./[0] }
method y () is rw { ./[1] }
}

That subtype would have to be an "auto-subtype", that is, it would
have to know that it is a member of that type without being told so. 
Such subtypes can introduce major inefficiencies in the program, and
are best left either out of the language, or somewhere in the dark
depths of I-know-what-I'm-doing-land.

Luke


Re: Typed type variables (my Foo ::x)

2005-08-11 Thread Luke Palmer
On 8/11/05, TSa <[EMAIL PROTECTED]> wrote:
> HaloO,
> 
> Autrijus Tang wrote:
> > On Thu, Aug 11, 2005 at 08:02:00PM +1000, Stuart Cook wrote:
> >>my Foo ::x;
> >>a) ::x (<=) ::Foo (i.e. any type assigned to x must be covariant wrt. Foo)
> >>b) ::x is an object of type Foo, where Foo.does(Class)
> >>c) Something else?
> >
> > My current reading is a) -- but only if ::x stays implicitly
> > "is constant".  So your "assigned" above should read "bound".
> 
> Same reading here! ::x is declared as a subtype of Foo.
> 
> Note that the subtype relation is *not* identical to the
> subset relation, though. E.g. Foo (=) (1,2,3) gives
> Foo (<=) Int but Foo is not a subtype of Int if Int requires
> e.g. closedness under ++. That is forall Int $x the constraint
> $t = $x; $x++; $t + 1 == $x must hold. Foo $x = 3 violates
> it because 4 is not in Foo. 

That's perfectly okay.  The following is a valid subtype in Perl:

subtype Odd of Int where { $_ % 2 == 1 };

Even though it doesn't obey the algebraic properties of Int.
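The membership rule is just a base-type check plus a predicate.  A
Python stand-in (constrained subtypes aren't native there; Perl 6 would
check the where-block on binding):

```python
def subtype(base, pred):
    """A where-style constrained subtype as a membership predicate."""
    return lambda v: isinstance(v, base) and pred(v)

is_odd = subtype(int, lambda n: n % 2 == 1)

assert is_odd(3)
assert not is_odd(3 + 1)   # Odd is not closed under ++: 4 is only an Int
assert not is_odd("three")
```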

The way to state algebraic properties of a type is the same way as in
Haskell: use type classes.

macro instance (Type $type, Role $class) {
$type does $class; ""
}

role Addable[::T] {
multi sub infix:<+> (T, T --> T) {...}
}

instance Addable[Int], Int;

This would require the definition of a "multi sub infix:<+> (Int, Int
--> Int)", thus declaring that Ints are closed under addition. 
However, the subtype Odd above also does Addable, but it does
Addable[Int], not Addable[Odd].  That means that you can add two Odds
together and all you're guaranteed to get back is an Int.

Hmm, if we can have K&R C-style where clauses on subs, then we can
constrain certain parameters in the signature.  For instance:

sub foo ($x, $y)
where Int $x
where Int $y
{...}

Using this style, we can constrain types as well:

sub foo (T $x, T $y)
where Addable[T] ::T
{...}

foo is now only valid on two types that are closed under addition.

Notationally this has a few problems.  First, you introduce the
variable ::T after you use it, which is parserly and cognitively
confusing.  Second, you say T twice.  Third, there is a conflict in
the returns clause:

sub foo ($x) returns Int
where Int $x
{...}

The where there could go with Int as a subtype or with foo as a constraint.

Luke


Re: $object.meta.isa(?) redux

2005-08-11 Thread Luke Palmer
On 8/10/05, Sam Vilain <[EMAIL PROTECTED]> wrote:
> On Wed, 2005-08-10 at 21:00 -0400, Joe Gottman wrote:
> >Will there be an operator for symmetric difference?  I nominate (^).
> 
> That makes sense, although bear in mind that the existing Set module for
> Perl 6, and the Set::Scalar and Set::Object modules for Perl 5 use % for
> this (largely due to overloading restrictions, of course).
> 
> There is no unicode or mathematical operator for symmetric difference,
> it seems.

I usually see infix delta (which probably comes from /\, where \ is
set difference).  That makes (^) seem all the sweeter, since it kinda
looks like a delta.  But then maybe we should make set difference (\).

Luke


Re: "set" questions -- Re: $object.meta.isa(?) redux

2005-08-11 Thread Luke Palmer
On 8/10/05, Flavio S. Glock <[EMAIL PROTECTED]> wrote:
> I wonder if infinite sets (recurrences) will be supported - then I'll
> move all(ext/Recurrence, ext/Span, ext/Set-Infinite) to
> Perl6::Container::Set::Ordered - cool.

Note "there is now a Set role".   Emphasis on role.  There will be a
finite set class to go with it, but really these operators just
specify an interface (and a few default implementations when they can
be inferred).  You can implement whatever you like that implements
this interface.

You might not want to call it a container, since it's not a container.

Luke

> - Flavio S. Glock
> 
> 2005/8/10, Dave Whipp <[EMAIL PROTECTED]>:
> > Luke Palmer wrote:
> >
> > > A new development in perl 6 land that will make some folks very happy.
> > >  There is now a Set role.  Among its operations are (including
> > > parentheses):
>


Re: Set operators in Perl 6 [was Re: $object.meta.isa(?) redux]

2005-08-11 Thread Luke Palmer
On 8/10/05, Dave Rolsky <[EMAIL PROTECTED]> wrote:
> [changing the subject line for the benefit of the summarizer ...]
> 
> On Wed, 10 Aug 2005, Larry Wall wrote:
> 
> > And now some people will begin to wonder how ugly set values will look.
> > We should also tell them that lists (and possibly any-junctions)
> > promote to sets in set context, so that the usual way to write a set
> > of numbers and strings can simply be
> >
> ><1 dog 42 cat 666.5>
> 
> Groovy, but what about this?
> 
>   <1 dog 42 cat 42>
> 
> Maybe a warning with an optional fatality under "use strict 'sets'"?

I doubt that should be any kind of warning or error.  It's just that
your set will end up having four elements instead of five.  But you
really don't want to warn in this case:

@myset (+) <1>;

By using the (+) operator (instead of the list concatenation, er,
operator?), the user is implying that he wants duplicates in @myset
thrown away.

Luke


Re: Ambiguity of parsing numbers with underscores/methods

2005-08-16 Thread Luke Palmer
On 8/16/05, Ingo Blechschmidt <[EMAIL PROTECTED]> wrote:
> Hi,
> 
> 1_234;  # surely 1234
> 1e23;   # surely 1 * 10**23
> 
> 1._5;   # call of method "_5" on 1?
> 1._foo; # call of method "_foo" on 1?
> 
> 1.e5;   # 1.0 * 10**5?
> 1.efoo; # call of method "efoo" on 1?
> 1.e_foo;# call of method "e_foo" on 1?
> 
> 0xFF.dead;  # call of method "dead" on 0xFF?

I think we should go with the method call semantics in all of the
ambiguous forms, mostly because "no such method: Int::e5" is clearer
than silently succeeding and the error coming up somewhere else.

Luke


Re: my $pi is constant = 3;

2005-08-17 Thread Luke Palmer
On 8/17/05, Larry Wall <[EMAIL PROTECTED]> wrote:
> You could still reason about it if you can determine what the initial
> value is going to be.  But certainly that's not a guarantee, which
> is one of the reasons we're now calling this write/bind-once behavior
> "readonly" and moving true constants to a separate declarator:
> 
> my $pi is readonly;
> $pi = 3;
> 
> vs
> 
> constant $pi = 3;
> 
> or
> 
> constant Num pi = 3;
> 
> or if you like, even
> 
> constant π = 3;

Minor detail: when does the right side get evaluated?  That is, what
happens here:

constant pi = make_pi();
sub make_pi() { 4*atan2(1,1) }

If you want this to succeed, then this must fail:

constant pi = 4*atan2(1,1);
BEGIN { say "pi = {pi}" }

Is it even possible to evaluate constants at CHECK time and then
constant-fold them in before the program is run?

Luke


Re: Hoping that Params::Validate is not needed in Perl6

2005-08-17 Thread Luke Palmer
On 8/17/05, Dave Rolsky <[EMAIL PROTECTED]> wrote:
> I'm going to go over the various features in P::V and see if there are
> equivalents in Perl6, and bring up any questions I have.  I think this
> will be interesting for folks still new to P6 (like myself) and existing
> P::V users (I think there's a fair number, but maybe I flatter myself ;)

Thanks!

> P::V also allows one to specify a class membership ("isa)" or one or more
> methods ("can") a given object/class must have.  In Perl6 we can just
> specify a class:
> 
>   sub transmit (Receiver $receiver)
> 
> If I understand this correctly, Receiver is a role here, and one that many
> different classes may use/implement.  This basically combines the isa &
> can concepts.  Instead of specifying required _methods_, we specify the
> role, which seems conceptually cleaner anyway.

... Sometimes.  We are missing the "can" functionality (except from
where clauses).  That is, how do you specify that you want an object
that does a particular role, but doesn't know it yet.  I'm thinking
something like:

role Mortal is automatic {
method die () {...}
}

That is, anything that can "die" is Mortal, even though it didn't say
that it was.  Then what really gets tricky is this:

role Mortal is automatic {
method die () {...}
method jump_off_building() { die }
}

class Human {
method die () { die "Auuugh" }
}

Human.new.jump_off_building;   # no method found or "Auuugh"?

Anyway, that's beside the point.  Moving on.

> Dependencies, Exclusions, and "Require one-of"
> 
> With P::V I can do this:
> 
>{ credit_card_number =>
>  { optional => 1,
>depends => [ 'credit_card_expiration', 'credit_card_holder_name' ] },
> 
>  credit_card_expiration => { optional => 1 },
> 
>  credit_card_holder_name => { optional => 1 },
>}
> 
> I have no idea how I might do this in Perl6, but I would love to see it
> supported as part of parameter declarations

You sortof can:

sub validate (+$credit_card_number, 
  +$credit_card_expiration,
  +$credit_card_holder_name)
where { defined $credit_card_number xor 
  defined $credit_card_expiration && 
  defined $credit_card_holder_name }
{...}

But that's really yucky.
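[Editorial sketch: the "depends" rule itself is easy to express as a runtime check. Here is a minimal Python stand-in (the helper name `validate_card` is invented; this is neither P::V nor Perl 6):]

```python
def validate_card(params):
    # "depends": credit_card_number is optional, but when it is given,
    # the expiration and holder name must be given along with it
    if params.get("credit_card_number") is not None:
        for dep in ("credit_card_expiration", "credit_card_holder_name"):
            if params.get(dep) is None:
                raise ValueError(f"credit_card_number depends on {dep}")

validate_card({})  # all three optional: passes
validate_card({"credit_card_number": "4111",
               "credit_card_expiration": "01/07",
               "credit_card_holder_name": "A. Coder"})  # passes
```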

> Similarly, something I've wanted to add to P::V is exclusions:
> 
>{ credit_card_number =>
>  { optional => 1,
>excludes => [ 'ach_bank_account_number' ] },
>}
> 
> Another thing that would be really nice would be to require one of a set
> of parameters, or one set out of multiple sets, so we could say "we need
> credit card info _or_ bank account info".

Yeah, I suppose that would be nice.  However, when you're doing these
kinds of complex dependencies, you'd like to provide your own error
messages when they fail.  That is, instead of:

'$credit_card_number excludes $ach_bank_account_number'

You could say:

'You can't give a credit card number and a bank account number, silly!'

So I wonder whether this kind of logic is better for a P::V module in
Perl 6.  Let somebody else think about the hard stuff like that.

> Transformations
> 
> Another potential future feature for P::V is the ability to specify some
> sort of transformation callback for a parameter.  This is handy if you
> want to be flexible in what inputs you take, but not explicitly write code
> for all cases:
> 
>{ color => { regex => qr/^(?:green|blue|red)$/i,
> transform => sub { lc $_[0] } }
>}
> 
> I suspect that this could be implemented by a user-provide trait like "is
> transformed":
> 
>sub print_error ($color where m:i/^ [green | blue | red] $/ is transformed 
> { lc })
> 
> Presumably this can be done with the existing language.  It doesn't add
> anything at compile time, so it really doesn't need to be part of the
> language.

Even things that do add things at compile time don't need to be part
of the language.  That's why "use" is a macro.  :-)

Luke


Rebinding binding

2005-08-17 Thread Luke Palmer
Two years ago or so, I became very happy to learn that the left side
of binding works just like a routine signature.  So what if binding
*were* just a routine signature.  That is, could we make this:

sub foo () {
say "hello";
my $x := bar();
say "goodbye $x";
}

Equivalent to this:

sub foo () {
say "hello";
-> $x {
say "goodbye $x"; 
}.(bar());
}

Then observe:

sub foo() {
say "hello";
my $x := 1|2|3;
say "goodbye $x";
}

foo();
__END__
hello
goodbye 1
goodbye 2
goodbye 3

If you bind an existing variable (instead of introducing a new one
with my), you get a delimited continuation for the scope of that
variable.  Consequently, if you bind a global, you get a full
continuation.
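[Editorial sketch: the observable effect of binding to a junction-valued `bar()` can be mimicked in Python by re-running the remainder of the scope once per junction state. Names here are made up; this models only the semantics, not the continuation machinery:]

```python
def bar():
    return (1, 2, 3)          # stands in for the junction 1|2|3

def foo():
    print("hello")
    def rest(x):              # the rest of foo's scope, as -> $x { ... }
        print(f"goodbye {x}")
    for state in bar():       # "binding" re-enters rest once per state
        rest(state)

foo()
# hello
# goodbye 1
# goodbye 2
# goodbye 3
```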

I know, I know, this is putting continuations into the realm of
!/{10...}/, but I have an argument for this.  The only people
who will ever see the continuations are the people *implementing*
junctions and junction-like things.  And those people are already
gurus, so they should be comfortable in dealing with continuations.

The only other time that a user might get caught with continuation
"weirdness" is if they're binding to junctions as above (since there
are no other such values in the language... just yet).  And with the
default "no junctions", junctions aren't allowed in variables anyway,
so what do they expect?

Luke


Demagicalizing pairs

2005-08-19 Thread Luke Palmer
We've seen many problems come up with the current special treatment of
pairs.  Here's what I can think of:

* Pairs are restricted to a particular position in the argument list, which 
leads to confusion (why isn't this being passed named?) and poor
end-weight in something like this:

foo {
   # lots of code in here
   # blah blah blah
   ...
} :delay(1)

which would probably be more readable with :delay(1) up at the top

* We had to special-case pointy blocks not to take named arguments because 
of this construct:

for %hash.pairs -> $pair { ... }

* (IIRC) We still have to scan potentially infinite lists when using the
foo(*@args) caller-side flattening form.

I propose that we move the magic out of the Pair type, and into a
syntactic form.  Here's the best we have (from #perl6) at the moment:

a => b# always a plain-vanilla pair, never a named argument
:a(b) # always a named argument (in sub calls)
  # degenerates to a plain pair when not in a sub call
named(a => b) # make an ordinary pair (or list of pairs) into 
  # runtime named arguments
named(%named) # same

This has a couple of advantages.  First of all, in the call:

foo($a, $b, $c)

You *know* that you're passing three positionals.  It looks like what
it is.  Also, you don't have to take extra cautionary measures if
you're dealing with generic data.

It's much less work for the runtime.  You don't have to scan for
pairs, you just have the caller stuff them where you need them.

You only lose a *little* bit of flexibility at a great gain of
usability of pairs.  That little bit is this ability:

my @args = (1, 2, 3, foo => 'bar');
baz(*@args);   # $foo gets argument 'bar'

And yet, you can fudge that back in on the callee side:

sub baz (*@args) {
my (@pos, %named);
for @args {
when Pair { $named{.key} = .value; }
default   { push @pos: $_ }
}
real_baz(*@pos, named(%named))
}

And you lose the ability to pretend that you're taking named arguments
when you're not; that is, you say you're not being order dependent
when you are.

I think this is fixable with a trait on the sub:

sub form (*@args) is unnameable  # refuse named parameters altogether
{ }

Anything more magical than that can be dealt with explicitly.

Luke


Multidimensional hyper

2005-08-19 Thread Luke Palmer
What is the resulting data structure in each of the following:

-<< [1, 2]
-<< [[1,2], [3,4]]
-<< [[1,2], 3]
[[1,2], 3] >>+<< [[4,5], 6]
[1, 2, [3]] >>+<< [[4,5], 6]

Luke


Synopsis 3 Update

2005-08-19 Thread Luke Palmer
Here is an update to Synopsis 3 incorporating recent additions.  If
any of this is wrong or disagreeable, this is the time to say so.

Luke


S03.pod.diff
Description: Binary data


*%overflow

2005-08-21 Thread Luke Palmer
Output?

sub foo (+$a, *%overflow) {
say "%overflow{}";
}

foo(:a(1), :b(2));   # b2
foo(:a(1), :overflow{ b => 2 }); # b2
foo(:a(1), :overflow{ b => 2 }, :c(3));  # ???

Luke


Re: Need correction to S06

2005-08-22 Thread Luke Palmer
On 8/22/05, Larry Wall <[EMAIL PROTECTED]> wrote:
> I think the simplest thing is to say that you can't bind to the name
> of the slurpy hash.  You give a name to it so that you can refer to it
> inside, but that name is not visible to binding.

Fixed in https://svn.perl.org/perl6/doc.  Thanks.

Luke


Re: Can a scalar be "lazy" ?

2005-08-22 Thread Luke Palmer
On 8/22/05, Yiyi Hu <[EMAIL PROTECTED]> wrote:
> my( $s, $t ); $s = "value t is $t"; $t = "xyz"; print $s;

I have an answer for you that is much more detailed than what you want
to hear.  The short answer is "yes".

This is possible to implement, provided you appropriately declare $t. 
It all depends on how deep you want the lazy semantics to go.  The
shallowest you can get is using the junctive hook (which I'll call
CallHook right now):

role CallHook {
# this is the interface of a role that junctions use for
# autothreading

# call this with the appropriately curried function whenever
# a value that does this role is passed to a function
method CALLHOOK (&call) {
call($?SELF);
}

# call this with the return continuation whenever this
# value is returned from a function
method RETURNHOOK (&retcont) {
retcont($?SELF);
}

# call this with the current continuation whenever this value
# is assigned to a variable
method ASSIGNHOOK (&cont) {
cont($?SELF);
}

# perhaps this role provides more hooks
}

class LazyValue does CallHook {
has &.thunk = { undef };
method CALLHOOK (&call) {
LazyValue.new( :thunk{ call(.eval) } );
}
method RETURNHOOK (&retcont) {
retcont(.eval);
}
method eval () {
.thunk()();
}
method set_thunk (&.thunk) { }
}

my $s = LazyValue.new;
my $t = "Hello, $s!";
$s = "World";
say $t.eval;    # Hello, World![1]

You could get that pesky eval out of there by defining print (and say)
multimethod variants yourself.  You have to tell it when to evaluate
sometime.
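[Editorial sketch: the CALLHOOK idea reduces to a plain thunk. A Python stand-in (method names like `call_hook` are mine, not Perl 6):]

```python
class LazyValue:
    """A value that defers all computation until .eval() (sketch only)."""
    def __init__(self, thunk=lambda: None):
        self.thunk = thunk

    def call_hook(self, f):
        # applying f to a lazy value yields another lazy value
        return LazyValue(lambda: f(self.eval()))

    def eval(self):
        return self.thunk()

s = LazyValue()
t = s.call_hook(lambda v: f"Hello, {v}!")  # like interpolating $s into $t
s.thunk = lambda: "World"                  # like $s = "World"
print(t.eval())  # Hello, World!
```

Note that `t` captures `s` itself, not its current (empty) value, which is what makes the later assignment visible.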

Of course, this behavior will not be default, since it can get you
into a lot of trouble.  You say that in Perl 6 arrays will behave
lazily, but this is *not* what is meant:

my @array;
my $string = "Hello, @array[]";
@array = ("World!");
say $string;# "Hello, "  !!!

What we mean by a lazy array is that laziness propagates automatically:

my @array = <>;   # <> returns a lazy array, so @array is now lazy
@array = (1,2,3); # (1,2,3) is not lazy, so @array becomes unlazy

Luke

[1] The well-typed version of this follows.  I have hand-checked it
(fairly well, but humans do make mistakes).  Hooray for strong typing
when we want it!

use traits ;
role CallHook[::of] {
method CALLHOOK ( &call:(::of $value --> ::T) --> ::T ) {
call($?SELF);
}

method RETURNHOOK ( &retcont:(::of $value --> ::T) --> ::T ) {
retcont($?SELF); 
}

method ASSIGNHOOK ( &cont:(::of $value --> ::T) --> ::T ) {
cont($?SELF);
}
}

# I'm ignoring that you can only parameterize roles, because that's
# a silly policy, and nobody is executing this code, so I'm
# writing in Luke's dialect.

class LazyValue[::of] {
does CallHook of ::of;

# is mock() tells the type checker to trust that I conform to
# this interface without checking[2].
is mock(::of);

has &.thunk = { undef };
method CALLHOOK (&call(::of $value --> ::T) --> ::T) {
LazyValue[::T].new( :thunk{ call(.eval) } );
}
method RETURNHOOK (&retcont(::of $value --> ::T) --> ::T) {
retcont(.eval);
}
method eval (--> ::of) {
.thunk()();
}
method set_thunk (&.thunk:(--> ::of)) { }
}

[2] Which is safe to do, because I intercept all method calls on this
object and replace them with something that takes what the methods
take and returns what the methods return[3].

Also, such an "is mock" declaration must turn off optimizations
associated with the type in question.  So all you get is the type
checking.

[3] Unless that method is eval() or set_thunk(), in which case I don't
really conform to ::of.  That probably means that eval() and
set_thunk() should be out-of-band functions, not methods.


Re: Calling positionals by name in presence of a slurpy hash

2005-08-23 Thread Luke Palmer
On 8/23/05, Ingo Blechschmidt <[EMAIL PROTECTED]> wrote:
> Hi,
> 
> (asking because a test testing for the converse was just checked in to
> the Pugs repository [1])
> 
> sub foo ($n, *%rest) {...}
> 
> foo 13;
> # $n receives 13, of course, %rest is ()
> 
> foo 13, foo => "bar";
> # $n receives 13 again, %rest is (foo => "bar")
> 
> foo n => 13;
> # $n receives 13, %rest is (), right?
> 
> foo n => 13, foo => "bar";
> # $n receives 13, %rest is (foo => "bar"), right?

Yep, that's all correct.  Matter of fact, what %rest actually gets has
not been defined. "Maybe %rest mirrors all the named arguments, maybe
it doesn't".  I can see a very small utility if it does, but it seems
like it would be faster[1] if it didn't.  I think it's fair to say no
here.

[1] Yeah, yeah, premature optimization and whatnot.  You always have
the sig (*%hash) if you really want to.

Luke


Re: Demagicalizing pairs

2005-08-25 Thread Luke Palmer
On 8/24/05, Damian Conway <[EMAIL PROTECTED]> wrote:
> Larry wrote:
> 
> > Plus I still think it's a really bad idea to allow intermixing of
> > positionals and named.  We could allow named at the beginning or end
> > but still keep a constraint that all positionals must occur together
> > in one zone.
> 
> If losing the magic from =>'d pairs isn't buying us named args wherever we
> like, why are we contemplating it?

Well, that was one of the nice side-effects of the proposal, solving
something that had been bugging me. But the main reason for this
proposal was to demote Pair into a regular data type that wouldn't
sneak into somebody's named argument when we weren't looking.  In the
Old Regime, I fear that I would never ever use Pair *except* for named
arguments precisely because I need to keep far too much information in
my head to use them safely.

> > so we should put some thought into making it syntactically trivial, if
> > not automatic like it is now. 

The whole point was to deautomatize it!  However, here's an
interesting solution:  pairs are scanned for *syntactically* *on the
top level* of a function call (allowing named() or however we spell it
as a fallback when we want to be dynamic).  However, :foo(bar) and foo
=> bar are equivalent again.

foo $x, $y;   # two positionals, regardless of what they contain
foo $x, :y($y)# a positional and a named
foo $x, y => $y   # a positional and a named
foo $x, (y => $y) # two positionals: $x and the pair y => $y
foo $x, (:y($y))  # same

In the fourth example, y => $y is no longer on the syntactic top
level, so it is not interpreted as a named argument.


> 
> > I hate to say it, but the named args should probably be marked
> > with : instead of + in the signature.

That's pretty cool.  Can't say I like the secondary sigil: it's really
not marking a property of the variable, but a property of the
parameter list.  That information should probably be kept inside the
parameter list alone.

Luke


Re: Perl 6 code - a possible compile, link, run cycle

2005-08-25 Thread Luke Palmer
On 8/25/05, Yuval Kogman <[EMAIL PROTECTED]> wrote:
> On Thu, Aug 25, 2005 at 11:16:56 -, David Formosa (aka ? the Platypus) 
> wrote:
> > On Wed, 24 Aug 2005 16:13:03 +0300, Yuval Kogman <[EMAIL PROTECTED]> wrote:
> >
> > [...]
> >
> > > perl6 creates a new instance of the perl compiler (presumably an
> > > object). The compiler will only compile the actual file 'foo.pl',
> > > and disregard any 'require', 'use', or 'eval' statements.
> >
> > use has the potentional to change the way the compiler
> > parses the code.  So use needs to be regarded.
> 
> Hmm... Good point.
> 
> I don't know how this is dealt with WRT to the "every module is
> compiled with it's own compiler" approach perl 6 is supposed to
> have.

It's pretty simple, really.  If module A "use"s module B, then you go
and compile B first.  Then, when you get to "use B" in module A, you
just call "B::import" at compile time.

That is, the modules are compiled separately, but you're allowed to
run things from an already compiled module while you are compiling
another one.
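[Editorial sketch: that compile-before-use ordering is just a depth-first walk of the "use" graph. A rough Python model (hypothetical `uses` table, no cycle detection):]

```python
def compile_order(uses, module, done=None):
    # compile everything `module` uses before the module itself
    done = [] if done is None else done
    for dep in uses.get(module, ()):
        if dep not in done:
            compile_order(uses, dep, done)
    if module not in done:
        done.append(module)  # now this module's import() is callable
    return done

print(compile_order({"A": ["B", "C"], "B": ["C"], "C": []}, "A"))
# ['C', 'B', 'A']
```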

Luke


Manuthreading

2005-08-28 Thread Luke Palmer
While nothingmuch and I are gutting junctions and trying to find the
right balance of useful/dangerous, I'm going to propose a new way to
do autothreading that doesn't use junctions at all.

First, let me show you why I think junctions aren't good enough:

I can't extract the information that the threaded call returns to me. 
For instance, I want to say:

my ($val1, $val2, $val3) = map { foo("bar", $_, "baz") } 1,2,3

But without obscuring the nature of what I'm doing with that unsightly
"map".  That is, I want the following (using pseudosyntax):

my ($val1 | $val2 | $val3) = foo("bar", 1|2|3, "baz")

But this is impossible using junctions, because the order is lost
before it comes back to my declaration.

So I propose a simple extension to the hyper syntax.  The hyper meta
operator always points to the operator that is being hypered:

+<< @a   # prefix
@a >>*<< @b  # infix
@a >>++  # postfix

Now I'm going to propose a variant for circumfix:

foo(1, <<@a>>, 2);

Where the meta operator is pointing to the parentheses around the
call.  Then it is easy to do my map above:

my ($val1, $val2, $val3) = foo("bar", <<1,2,3>>, "baz")
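[Editorial sketch: the circumfix hyper amounts to calling the function once per element of the marked argument, in order. A Python approximation (the helper name `hyper_call` is invented):]

```python
def hyper_call(f, args, idx, values):
    # call f once per value, substituting each value at position idx
    return [f(*(args[:idx] + (v,) + args[idx + 1:])) for v in values]

print(hyper_call(lambda a, x, b: f"{a}{x}{b}",
                 ("bar", None, "baz"), 1, [1, 2, 3]))
# ['bar1baz', 'bar2baz', 'bar3baz']
```

Unlike a junction, the result list preserves order, so it can be bound positionally.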

Luke


Operator sub names are not special

2005-08-31 Thread Luke Palmer
Let me just clarify something that my intuition led me to believe:

sub foo(&infix:<+>) { 1 + 2 }
sub bar($a, $b) { say "$a,$b" }
foo(&bar); # "1,2"

That is, operator names can be lexically bound just like any other
name.  Also, this doesn't have any effect on implicit coercions, etc. 
(That is, lexically binding &prefix:<+> does not change things in
numeric context; only when there's actually a + in front of them)

Luke


Re: Operator sub names are not special

2005-08-31 Thread Luke Palmer
On 8/31/05, Yuval Kogman <[EMAIL PROTECTED]> wrote:
> On Wed, Aug 31, 2005 at 04:56:25 -0600, Luke Palmer wrote:
> 
> > (That is, lexically binding &prefix:<+> does not change things in
> > numeric context; only when there's actually a + in front of them)
> 
> Unless you override &prefix:<+> ?
> 
> sub foo (&prefix:<+>) { +1 }

Uh yeah, I think that's what I was saying.  To clarify:

sub foo (&prefix:<+>) { 1 == 2 }# 1 and 2 in numeric context
foo(&say);   # nothing printed

But:

sub foo (&prefix:<+>) { +1 == +2 }
foo(&say);# "1" and "2" printed

Luke


Re: Operator sub names are not special

2005-09-01 Thread Luke Palmer
On 9/1/05, Yuval Kogman <[EMAIL PROTECTED]> wrote:
> On Wed, Aug 31, 2005 at 13:43:57 -0600, Luke Palmer wrote:
> > Uh yeah, I think that's what I was saying.  To clarify:
> >
> > sub foo (&prefix:<+>) { 1 == 2 }# 1 and 2 in numeric context
> > foo(&say);   # nothing printed
> >
> > But:
> >
> > sub foo (&prefix:<+>) { +1 == +2 }
> > foo(&say);# "1" and "2" printed
> >
> > Luke
> 
> Furthermore, even if:
> 
> sub &infix:<==> ($x, $y) { +$x == +$y }
> sub foo (&prefix:<+>) { 1 == 2 }
> 
> foo(&say); # nothing printed
> 
> but if
> 
> sub foo (&*prefix:<+>) { 1 == 2 }
> 
> then what?

Um, you can't do that.  You can't lexically bind a global; that's why
it's global. You could do:

sub foo (&plus) { temp &*prefix:<+> = &plus;  1 == 2 }

And then something would be printed... maybe, depending on the
implementation of == (with your implementation above, it would).

Luke


Re: for $arrayref {...}

2005-09-01 Thread Luke Palmer
On 9/1/05, Juerd <[EMAIL PROTECTED]> wrote:
> Ingo Blechschmidt skribis 2005-09-01 20:29 (+0200):
> > for ($arrayref,) {...};  # loop body executed only one time
> 
> Yes: scalar in list context.
> 
> > for ($arrayref)  {...};  # loop body executed one or three times?
> 
> Same thing: scalar in list context. So once.
> 
> > for  $arrayref   {...};  # loop body executed one or three times?
> 
> Same thing: scalar in list context. So once.
> 
> Scalars only automatically dereference in *specific* contexts. An
> arrayref not used in Array context is still an arrayref, a hashref not
> used in Hash context is still a hashref.

I would probably say that scalars never automatically dereference. 
It's lists and hashes that automatically dereference/enreference. 
That is, everything is a scalar, really, but if you have an @ or a %
on the front of your variable, that means that you flatten yourself
into specific kinds of contexts.

Luke


Re: for $arrayref {...}

2005-09-02 Thread Luke Palmer
On 9/2/05, Juerd <[EMAIL PROTECTED]> wrote:
> Luke Palmer skribis 2005-09-01 23:43 (+):
> > I would probably say that scalars never automatically dereference.
> > It's lists and hashes that automatically dereference/enreference.
> 
> arrays

Yes, arrays, right.

> > That is, everything is a scalar, really, but if you have an @ or a %
> > on the front of your variable, that means that you flatten yourself
> > into specific kinds of contexts.
> 
> sub foo (@bar) { ... }
> 
> foo $aref;
> 
> Here $aref is dereferenced because of the Array context. The scalar
> can't do this by itself, of course.

No, my view is that $foo and @foo are really the same kind of thing. 
They're both references to lists (yes, lists).  So binding @bar :=
$aref is not doing anything special, it's just like binding two
scalars.  The only thing that makes @bar different from $aref is that
in list context, @bar flattens out the list it's holding whereas $aref
doesn't.

Luke


perl6-language@perl.org

2005-09-03 Thread Luke Palmer
On 9/3/05, Stuart Cook <[EMAIL PROTECTED]> wrote:
> On 03/09/05, Yuval Kogman <[EMAIL PROTECTED]> wrote:
> > A multi sub is a collection of variants, so it doesn't have arity,
> > each variant has arity.
> >
> > I'd say it 'fail's.
> 
> But if the reason you're calling `&foo.arity` is to answer the
> question "Can I call this sub with three arguments?" then that kind of
> behaviour isn't going to help very much.
> 
> Which leads me to believe that instead of writing
> 
>   if &foo.arity == 3 { ... }
> 
> you should probably be writing something like this:
> 
>   if &foo.accepts(3) { ... }

That's a nice reformulation.  However, .arity is still important.  But
maybe .arity doesn't exist, and all you get are .accepts, .min_arity,
and .max_arity.  After all, "for" has to know how many things to take
off of the list, and doing:

my @pass;
given ({ &foo.accepts($_) }) {
when 1 { @pass = @args.shift(0,1) }
when 2 { @pass = @args.splice(0,2) }
...
}

Is unacceptable.  But *the* arity of a function in Perl is rather ill-defined.
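[Editorial sketch: the `.accepts` / `.min_arity` reformulation, modeled in Python with a multi as a bag of variants (a hypothetical interface, not any real API):]

```python
import inspect

class Multi:
    """A multi sub as a collection of variants (sketch)."""
    def __init__(self, *variants):
        self.variants = variants

    def _arities(self):
        return [len(inspect.signature(v).parameters)
                for v in self.variants]

    def accepts(self, n):
        # "can I call this sub with n arguments?"
        return n in self._arities()

    def min_arity(self):
        return min(self._arities())

m = Multi(lambda a: a, lambda a, b, c: (a, b, c))
print(m.accepts(3), m.accepts(2), m.min_arity())  # True False 1
```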

(As a matter of fact, I use the existence of numeric casing to determine
when a language is not general enough in a particular area;  C++ had
to do numeric casing to implement typelists, Haskell has to do numeric
casing to implement variadic functions and lifts, etc.   So far, I've
never had to do it in Perl. :-)

Luke


multi scoping

2005-09-04 Thread Luke Palmer
Here's a good Perl 6 final exam question:

Spot the mistake (hint: it's not in the math):

module Complex;

sub i() is export { 
Complex.new(0,1)
}
multi sub infix:<+> (Complex $left, Complex $right) is export {
Complex.new($left.real + $right.real, $left.imag + $right.imag);
}
multi sub infix:<*> (Complex $left, Complex $right) is export {
Complex.new(
$left.real * $right.real - $left.imag * $right.imag,
$left.real * $right.imag + $right.real * $left.imag)
}
# ...

Give up?

When you add two complex numbers, you get into an infinite loop. 
That's because infix:<+> adds things using the plus operator, which
is, you guessed it, infix:<+>.  Now you'd think that multimethods
would handle that, but they don't because by defining "multi sub
infix:<+>" you are defining a *package* operator which *masks* the
global operator!  So this turns into infinite recursion.

By the way, this was done in Set.pm.  Pugs's scoping rules were
recently fixed, so suddenly the subtraction operator became an
infinite loop.
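[Editorial sketch: the trap is language-independent. The same mistake in Python, plus a version of solution 2, reaching for the symbol the name would have meant without this definition:]

```python
import operator

def plus(left, right):
    # intended to delegate to the ordinary +, but the name
    # resolves to this very definition: unbounded recursion
    return plus(left, right)

def plus_fixed(left, right):
    # solution 2: explicitly use the symbol that would have been
    # visible had this definition not existed
    return operator.add(left, right)

try:
    plus(1, 2)
except RecursionError:
    print("infinite loop")   # as with Complex's infix:<+>
print(plus_fixed(1, 2))      # 3
```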

I see two solutions:

1) Disallow a "multi" to introduce a symbol if it masks another
symbol of the same name.  You must explicitly ask for masking by
predeclaring the symbol without "multi" somewhere before you define
the multi.  This still doesn't solve anything when you accidentally
put "is export" on an operator (as was done in Set.pm).
2) Make multi automatically find the symbol that you would have
referred to if the definition had not been there, and add a multi case
to that symbol.  So in the example above, the innermost infix:<+> that
existed before you said "multi" was *infix:<+>, so the multi
definition would basically infer that you meant to say multi
*infix:<+> and do the right thing.

There could also be some combination of the two.  Maybe something else
entirely.  Ideas?

Luke


Re: multi scoping

2005-09-04 Thread Luke Palmer
On 9/4/05, Yuval Kogman <[EMAIL PROTECTED]> wrote:
> I always saw scoping of multis as something that applies to the
> variants...
> 
> multi sub foo {
> 
> }
> 
> {
> my multi sub foo {
> 
> }
> 
> # the second variant is just for this scope, but 
> neither masks
> # the other
> }

Reading over A12 again (in the section "Multiple Dispatch"), it
appears that you are right.

So let's figure out if that's actually the right way or the wrong way.
I don't understand the second-to-last paragraph of that section,
which apparently explains why single variants mask.

> > You must explicitly ask for masking
> 
> I think this should be an option... You can either mask off a single
> variant by declaring one that overrides it in a tighter scope, with
> yadda yadda as the body, or you could ask a whole scope to be
> omitted from the possible variants.

Just to clarify, is this what you are suggesting?

multi foo (Int $x) { $x + 1 }
multi bar (Int $x) { $x + 1 } 
{
my sub foo ($x) {...}
my multi foo (Str $x) { "x" ~ $x }

my multi bar (Str $x) { "x" ~ $x }

bar("hi"); # xhi
bar(1);# 2
foo("hi"); # xhi
foo(1);# error, no compatible sub found
}

Though if that's the case, then it almost feels to me like the "multi"
should go before the "my".  But that would be screwing with the
consistency of the grammar a little too much, I think.

> > 2) Make multi automatically find the symbol that you would have
> > referred to if the definition had not been there, and add a multi case
> > to that symbol.  So in the example above, the innermost infix:<+> that
> > existed before you said "multi" was *infix:<+>, so the multi
> > definition would basically infer that you meant to say multi
> > *infix:<+> and do the right thing.
> 
> I don't agree with this... It takes the lexical scoping semantics
> out of things.

Oh, by the way, I was suggesting that a *bare* "multi" would do this. 
If you said "our" or "my", you are stating exactly what you mean. 
Well, almost (what does "my multi foo" mean if there is an outer
lexical multi foo -- mask or append?).

> There is one more problem though:
> 
> class Complex {
> multi sub &infix:<*> { ... }
> }
> 
> package Moose;
> use Complex;
> use SomeMathLib ;
> 
> ...
> 
> function($some_complex_number); # if function calls infix:<*> on
> # it's operand, somehow... What happens?

This is actually my *whole* problem, and the reason that I don't think
that variants should mask, but entire symbols should mask.  It seems
like you'd use a lexically scoped multi variant about as much as you'd
use a lexically scoped method on a class.  Can you think of a use for
that?

Isn't the point of lexical scoping so that you don't have to worry
whether somebody else called something the same thing you did?  I can
picture this:

multi combine (Any $x, Any $y) { ZCombinator.new($x, $y) }
multi combine (@x, @y) { ZList.new([ @x, @y ]) }   # concat
multi combine (Str $x, Str $y) { ZStr.new($x ~ $y) }

# ... many lines pass ...

sub process($x, $y) {
my multi combine (Any $a, Any $b) { die "Cannot combine" }
my multi combine (Int $a, Int $b) { $a + $b }

return combine($x, $y);
}

process("Foo", "Bar");
# Gets back a... what? ZStr?  What the heck is that

Clearly the author of process intended that two integers get added and
anything else dies.  He was not aware of the combine defined many many
lines above.   But he was smart and made his multis lexically scoped
so he didn't have to worry.

Really, he should have written process like so:

sub process($x, $y) { 
my sub combine ($a, $b) {...}
my multi combine (Any $a, Any $b) { die "Cannot combine" }
my multi combine (Int $a, Int $b) { $a + $b }

return combine($x, $y);
}

That case, to me, seems a lot more common than lexically overriding a
multi variant.  But I am open to counterexamples.

Assuming we have no information about the frequency, there is another
question to ask whether it makes sense to override or extend:  Is it
natural to do it both ways?  That is, does it feel right?  (Please,
get your mind out of the gutter and pay attention! :-)

It seems fairly natural when we go with the extension mechanism that
A12 seems to propose.  "multis extend, subs mask", so if you want to
mask, you just define it as a sub first.  It's a little awkward, but
it'll do.  The other way, however, is not nearly as natural:

# let's say that he wanted process() to do what I said he didn't intend
sub process($x, $y) { 
my multi combine (Any $x, Any $y) { OUTER::combine($x, $y) }
my multi combine (Int $x, Int $y) 

Re: our constant pi, my constant pi?

2005-09-05 Thread Luke Palmer
On 9/5/05, Ingo Blechschmidt <[EMAIL PROTECTED]> wrote:
> Hi,
> 
> quick questions:
> 
> constant pi = 3;  # works
>   # Is &pi package- or lexically-scoped?
> 
> our constant pi = 3;  # legal?
> 
> my  constant pi = 3;  # legal?

Yep.  Bare constant is package, just like everything else bare
(including classes now, yay!).

> This is consistent with "sub foo", "our sub foo", and "my sub foo",
> which are all allowed.

Luke


Re: Proposal: split ternary ?? :: into binary ?? and //

2005-09-05 Thread Luke Palmer
On 9/5/05, Juerd <[EMAIL PROTECTED]> wrote:
> Thomas Sandlass skribis 2005-09-05 14:38 (+0200):
> >b) if this is true, ?? evaluates its rhs such that it
> >   can't be undef
> 
> But
> 
> $foo ?? undef // 1
> 
> then is a problem.

Yeah.  Hmm, but I kinda like the look of ?? //, and I don't like the
overloading of :: in that way anymore.   So it's possible just to add
a ternary ?? // in addition to, and unrelated to (from the parser's
perspective), the regular //.

?? !! ain't bad either.

Luke


Re: Proposal: split ternary ?? :: into binary ?? and //

2005-09-05 Thread Luke Palmer
On 9/6/05, Damian Conway <[EMAIL PROTECTED]> wrote:
> Luke wrote:
> 
>  > Yeah.  Hmm, but I kinda like the look of ?? //, and I don't like the
>  > overloading of :: in that way anymore.   So it's possible just to add
>  > a ternary ?? // in addition to, and unrelated to (from the parser's
>  > perspective), the regular //.
> 
> Bad idea. This useful construct would then be ambiguous:
> 
>  $val = some_cond()
>?? $arg1 // $arg1_default
>// $arg2 // $arg2_default;

Huh, yeah.  We'd have to go one way or the other on that, and neither
of those are what you intend.

Not that being explicit is always a bad thing:

$val = some_cond()
  ?? ($arg1 // $arg1_default)
  // ($arg2 // $arg2_default)

And I question your notion of "highly useful" in this case.  Still, it
is probably linguistically not a good idea to overload // like that.

> 
>  > ?? !! ain't bad either.
> 
> It's definitely much better than sabotaging the (highly useful) // operator
> within (highly useful) ternaries.

I guess the thing that I really think is nice is getting :: out of
that role and into the type-only domain.

Luke


Re: Proposal: split ternary ?? :: into binary ?? and //

2005-09-06 Thread Luke Palmer
On 9/6/05, Thomas Sandlass <[EMAIL PROTECTED]> wrote:
> Right. To make :: indicate type or meta was my primary concern.

Okay, now why don't you tell us about this new binary :: you're proposing.

Luke


perl6-language@perl.org

2005-09-06 Thread Luke Palmer
On 9/3/05, Damian Conway <[EMAIL PROTECTED]> wrote:
> H. The arity of a given multi might be 3 or 4 or 5.
> 
> If *only* there were a way to return a single value that was simultaneously
> any of 3 or 4 or 5.
> 
> Oh, wait a minute...

Well, we'd better document that pretty damn well then, and provide
min_arity and max_arity, too.   This is one of those places where
Yuval is spot on about autothreading being evil.  This is also one of
those places where I am spot on about Junctive logic being evil.

It looks like returning a junction is the dwimmiest thing we can do here:

given &code.arity {
when 1 { code(1) }
when 2 { code(1,2) }
}

So if &code is a multimethod that has variants that take two
parameters or three parameters, great, we call it with (1,2), which
will succeed.  And if it has variants that take one parameter or three
parameters, great, we call it with (1), which will succeed.  And if it
has variants that take one parameter or two parameters, um..., great,
we call it with (1), which will succeed.

In that last case though, this is not equivalent to the above:

given &code.arity {
when 2 { code(1,2) }
when 1 { code(1) }
}

That may be a little... surprising.  Still, it's fixed to succeed
either way, so that's probably okay, right?

But let's do a little thinking here.  You're asking a code reference
for its arity.  It's pretty likely, that unless you are doing
dwimminess calculations (which aren't that uncommon, especially in
Damian's modules), that you have no idea what the arguments are
either.  An example of a function that uses arity is &map.

sub map (&code, *@list) {
gather {
my @args = @list.splice(0, &code.arity);
take &code(*@args);
}
}

In the best case (depending on our decision), this fails with "can't
assign a junction to an array".  In the worst case, it autothreads
over splice, returning a junction of lists, makes @args a junction of
lists, and then returns a list of junctions for each arity the
multimethod could have taken on.  I don't think that's correct... at
all.   The correct way to have written that function is:

sub map (&code, *@list) {
gather {
my @args = @list.splice(0, min(grep { ?$_ } &code.arity.states));
take &code(*@args);
}
}

Not quite so friendly anymore.  In order to use this, we had to access
the states of the junction explicitly, which pretty much killed the
advantage of it being a junction.

Junctions are logical travesty, and it seems to me that they cease to
be useful in all but the situations where the coder knows
*everything*.

But I still like them.

Here's how I'm thinking they should work.  This is a minimalistic
approach: that is, I'm defining them in the safest and most limited
way I can, and adding useful cases, instead of defining them in the
richest and most general way possible and forbidding cases that are
deemed "unsafe".

Throw out all your notions about how Junctions work.  We're building
from scratch.

Things that come on the right side of smart match do a role called
Pattern, which looks like this:

role Pattern {
method match(Any --> Bool) {...}
}

Among the things that do pattern are Numbers, Strings, Arrays, ...
(equivalence); Types, Sets, Hashes (membership); Bools, Closures
(truth); and Junctions (pattern grep).  That is, a Junction is just a
collection of Patterns together with a logical operation.  It, in
turn, is a pattern that can be smart-matched against.  Therefore, we
sidestep the issue of the existence of junctions making every ordered
set have exactly one element[1], because there is nothing illogical
about testing against a pattern.  You can also safely pass it to
functions, and they can use it in their smart matches, and everything
is dandy.

That's it for the base formulation.  A junction is just a pattern, and
it makes no sense to use it outside of the smart-match operator.

But that means that we have to change our idioms:

if $x == 1 | 2 | 3   {...}
# becomes
if $x ~~ 1 | 2 | 3   {...}# not so bad

if $x < $a | $b  {...}
# becomes
if ($x ~~ { $_ < $a } | { $_ < $b }) {...}  # eeeyuck
if $x < $a || $x < $b {...}  # back to square one

# from E06
if any(@newvals) > any(@oldvals) {
say "Already seen at least one smaller value";
}
# becomes
if (grep { my $old = $_; grep { $_ > $old } @newvals } @oldvals) {
say "I don't remember what I was going to say because the
condition took so long to type"
}

So, that sucks.  But I'm beginning to wonder whether we're really
stuck there.  The two yucky examples above can be rewritten, once we
take advantage of some nice properties of orderings:

if $x < max($a, $b) {...}
if max(@newvals) > min(@oldvals) {...}

But that's a solution to the specific comparison problems, not the
general threaded 

perl6-language@perl.org

2005-09-07 Thread Luke Palmer
On 9/7/05, Damian Conway <[EMAIL PROTECTED]> wrote:
> Luke wrote:
>  > In that last case though, this is not equivalent to the above:
>  >
>  > given &code.arity {
>  > when 2 { code(1,2) }
>  > when 1 { code(1) }
>  > }
>  >
>  > That may be a little... surprising.  Still, it's fixed to succeed
>  > either way, so that's probably okay, right?
> 
> It's not surprising at all. The order of C<when> tests (usually) matters,
> because a series of C<when> statements (usually) short-circuits.

Okay, fair enough.  The reason that I thought it was surprising is
because 1 and 2 are usually orthogonal patterns.  But, I guess in the
presence of junctions I'm not able to assume that (as I'll explain in
my conclusion later, junctions make it impossible for me to assume
just about anything).

>  > Junctions are logical travesty,
> 
> Well, that's very emotive, but I don't believe it's either a useful or an
> accurate characterization. I would agree that junctions can be logically
> *sophisticated*, but then I'd argue that *all* programming constructs are 
> that.

No, that wasn't emotive, that was a logical statement explaining that
junctions are not logical.  Maybe that's what I should have said
before.  See below.

>  > and it seems to me that they cease to be useful in all but the
>  > situations where the coder knows *everything*.
> 
> What does that mean, exactly? How can anyone *ever* write sensible code
> without knowing what kind of values they're processing?

Have you ever heard of generic programming?

How can anyone *ever* write a sensible generic Set class when you are
required to know what kind of thing you have in the set?

> Otherwise hard things that junctions make a lot easier:
> 
>  if 0 <= @coefficients < 1 {...}

Ummm... that's an array in numeric context...

>  if 0 <= all(@new_coefficients) < all(@prev_coefficients) < 1 {...}

I'll give you this one.  This takes twice the amount of code.

if 0 <= min(@new_coefficients) && 
max(@new_coefficients) < min(@prev_coefficients) &&
max(@prev_coefficients) < 1   {...}

>  if 0 <= all(@new_coefficients) != all(@prev_coefficients) < 1 {...}
> 
>  if 0 <= any(@new_coefficients) != all(@prev_coefficients) < 1 {...}

Okay, you may be convincing me here.  Until I find a good way to do
these.  The fact that I have to look is already junctions++.

>  if ! defined one(@inputs) {...}

I don't get how this could possibly be useful.

>  > $a <= any($a, $b).
>  > any($a, $b) <= $b.
>  > Therefore, $a <= $b.

...

>  Luke is no smarter than Luke or Luke is no smarter than a rock.
>  Luke is no smarter than a rock or a rock is no smarter than a rock.
> 
> Remove the false assertions (since nothing correct can ever be deduced
> from a false assertion):

Sure, everything correct can be deduced from a false premise.  Just...
um... everything incorrect can too.  :-)

>  Luke is no smarter than Luke
>  a rock is no smarter than a rock.
> 
> But now the remaining assertions support *no* new conclusion.

And this is based on lexical expansion.  Which is cool.  In fact, once
upon a time I was going to propose that junctions are a purely lexical
entity, expanded into greps and whatnot by the compiler; that you
can't ever stick them in variables.  Your examples above are just more
attestment to that, since there is not one of them that I can't write
confining all junctions to lexical areas.

I think you missed my original point.  Here is a similar proof:

Assume for the sake of contradiction that:
  For all $a,$b,$c:
$a < $b && $b < $c implies $a < $c;

let $a = 3, $b = any(1,4), and $c = 2

Substituting:
3 < any(1,4) && any(1,4) < 2 implies 3 < 2
True                         implies False
Contradiction!

I just proved that < is not transitive.

I can do that for every boolean operator that Perl has.  They no
longer have any general properties, so you can't write code based on
assumptions that they do.   In particular, testing whether all
elements in a list are equal goes from an O(n) operation to an O(n^2)
operation, since I can't make the assumption that equality is
transitive.

So my original point was that, as cool as junctions are, they must not
be values, lest logical assumptions that code makes be violated.  I
can tell you one thing: an ordered set class assumes that < is
transitive.  You had better not make an ordered set of junctions!

Luke


perl6-language@perl.org

2005-09-07 Thread Luke Palmer
On 9/7/05, Brent 'Dax' Royal-Gordon <[EMAIL PROTECTED]> wrote:
> Here's a Real Live Perl 6 module I wrote recently.  I've omitted a few
> magic portions of the code for clarity.

Thanks for real live perl 6 code.  It's always nice to have real examples.

However, I'm arguing for logical stability without losing expressive
power.  The case that convinces me is the one where something becomes
a lot harder without lexical junctions.  This one doesn't:

> module Trace-0.01-BRENTDAX;
> 
> my $active;
> ...
> 
> sub activate(*%newtags) {
> $active |= any(keys %newtags);
   @active.push(keys %newtags);
> }
> 
> sub trace(*@msg is copy, *%to is copy) is export {
> ...
> if $active eq any('all', keys %to) {
  if any(@active) eq any('all', keys %to) {
> ...
> print $ERR: @msg;
> return [EMAIL PROTECTED] #but true;
> }
> return;
> }

And that is clearer to me at least.  You can tell the nature of the
comparison: that you're checking a list of pattern against a list of
active objects, rather than a list of patterns against a single
object, which is what it looked like before.  YMMV on that angle,
though.

The reason I think that this approach won't lose expressive power is
mostly because of our new Set class.  The one remaining thing is when
you build up a nested junction in terms of ors and ands and check
that, as was given in one of Damian's old examples.  I really haven't
come up with a reason you'd want to do that yet, but I *am* looking. 
I'm not on a mission to destroy junctions, really. I'm just on a
mission to make things make sense. :-/

Luke


perl6-language@perl.org

2005-09-07 Thread Luke Palmer
On 9/8/05, Damian Conway <[EMAIL PROTECTED]> wrote:
> Luke wrote:
> 
>  > Okay, fair enough.  The reason that I thought it was surprising is
>  > because 1 and 2 are usually orthogonal patterns.
> 
> It depends what they're doing. Matched against a regex like /[12]/ they're
> not orthogonal either.

Well, then they're not patterns; they're the things being matched
against a pattern.  But then you could think of junctions in the same
way.  In fact, the proposal was to do precisely that.  But I think
you've knocked me off of that one (with your "multiple junctions in
the same expression" examples).

>  > that was a logical statement explaining that junctions are not logical.
> 
> Right. So "Luke's statement is a logical travesty" isn't an
> unsubstantiated emotive assertion either? ;-)

Hey. I backed up my claim with a proof!  Let's settle on "It doesn't
matter!" :-)

>  >> if ! defined one(@inputs) {...}
>  >
>  > I don't get how this could possibly be useful.
> 
> That doesn't mean it's not. ;-)

You don't need to sell me on these, by the way.  You already did. 
That was just a comment on this particular example.

>  > So my original point was that, as cool as junctions are, they must not
>  > be values, lest logical assumptions that code makes be violated.  I
>  > can tell you one thing: an ordered set class assumes that < is
>  > transitive.  You had better not make an ordered set of junctions!
> 
> You can rest assured that I won't try to make one. Because it doesn't
> makes *sense* to even talk about an ordered set of junctions. Any more
> than it makes sense to talk about an ordered set of vectors, or lists,
> or regexes. Will you also be recommending that lists and regexes not be
> values???

Well, none of those things define a < operator.  I suppose one could
say that lists do, but he'd be wrong.  They coerce to numbers, which
in turn have a < operator, so you'd end up with a set of numbers...

I think I see where you're coming from though.  If we go with
Haskell's type classes, then I could see how this would work. 
Junction simply wouldn't do Eq or Ord (equality or ordering), since it
doesn't meet the prerequisites for doing those type classes
(transitivity and antisymmetry).  Then the Junctive > would be about
as similar to numeric > as IO::All's > is, at least from Perl's
perspective.

> More seriously, since the ordered set's methods presumably *won't* use
> (Item(+)Junction) type arguments, why will this be a problem? The
> ordered set will never *see* junctions.

Set of Junction?  If the methods are declared with parameters ::T (the
parameterization type of Set), then it certainly would accept
junctions.

Admittedly, that should just fail since it doesn't make any sense. 
Admittedly, it would probably end up succeeding and giving you an
infinite loop, and then comp.lang.perl.misc or #perl would give you a
belated compile-time error.  Or Set could say "No junctions!", but
then we're getting into special cases.

Hmm, incidentally, if we have:

theory Value[::T] {
# in order to do Value, eqv must be transitive:
# $a eqv $b and $b eqv $c implies $a eqv $c
multi infix:<eqv> (::T, ::T --> Bool) {...}
# ... some useful methods
}

(For those of you following along at home, don't worry if you don't
know what a "theory" is, I haven't proposed it yet.  For those of you
lambdacamels following along at home, a "theory" is basically a type
class.)

Then Junction shouldn't do Value, since it doesn't meet one of the
implicit requirements.  Since Sets only operate on Values (no
reference types allowed: they can change under your feet), then a Set
would reject a Junction as a valid parameterization.  Oh, I guess
Junction's absence of the Ordered class already did that.

Alright, this seems to be working out mathematically if we get type
classes.  It's really not a solution I like much, as it's basically
saying "yeah, junctions exist, but they really don't participate in
the algebra of your program".  Actually, since they don't do Value,
you can't make any aggregate of Junctions whether you declare it or
not (sub parameters are still okay when declared though; they don't
have to be Values).

Okay, with that, my position changes.  I no longer see anything wrong
with Junctions from a pure perspective.  I still think it's wrong to
Humans to have something that looks ordered but in fact isn't, and I
still think that if Junctions are values, you ought to be able to
metaprogram with them.  But I don't really have any ideas other than
the ones I've proposed in this thread and the one involving Haskell's
"M" word.

Luke


Re: !!/nor and ??!! vs. ??::

2005-09-08 Thread Luke Palmer
On 9/8/05, Benjamin Smith <[EMAIL PROTECTED]> wrote:
> Pugs currently implements &infix: as an ugly version of the
> &infix: operator.
> 
> Are these in the spec?

No they are not.  Destroy!

Luke


Hyper fmap

2005-09-10 Thread Luke Palmer
I think we should generalize the hyper stuff a little bit more.  I
want hyper operators serve as "fmap", or "functor map", rather than
just list.  This is a popular concept, and a pretty obvious
generalization.

A functor is any object $x on which you can do "fmap" such that it
satisfies these laws:

$x.fmap:{$_} eqv $x   # identity maps to identity

$x.fmap:{ f($_) }.fmap:{ g($_) } eqv
$x.fmap:{ g(f($_)) }  # transparent to composition

Another way to think of a functor is just some sort of aggregate data
structure, where "fmap" is what you use to do something to each of its
elements.

In order to do this, I want to desymmetricalize hyper operators again.
That is, you put the arrows on the side of the operator where you have
the list (now functor).  This is so that if you have a functor on either
side of a hyper operator, it knows which side to map.

Here's how it works:

$x >>+ $y #  $x.fmap:{ $_ + $y }
$x +<< $y #  $y.fmap:{ $x + $_ }
+<< $x    #  $x.fmap:{ +$_ }
foo(1, >>$x<<, 2) #  $x.fmap:{ foo(1, $_, 2) }

etc.

For the current hyper semantics where (1,2,3) >>+<< (4,5,6) eqv
(5,7,9), there is also a "binary functor".  I call the mapping
function fmap2 (but maybe we can find a better name).

fmap2($a, $b, { $^x + $^y })

It takes a two argument function and two functors and returns the
appropriate combination.  Not all functors have to be binary functors
(it's not clear that binary functors have to be regular functors
either).

$x >>+<< $y   # fmap2($x, $y, &infix:<+>)

Luke


Re: Regarding Roles and $?ROLE

2005-09-10 Thread Luke Palmer
On 9/11/05, Stevan Little <[EMAIL PROTECTED]> wrote:
> Hello all.
> 
> I have some questions about how Roles will behave in certain
> instances, and when/where/what $?ROLE should be bound too.
> 
> 1) Given this example, where 'bar' is a method stub (no implementation)
> 
> role Foo {
>  method bar { ... }
> }
> 
> Should the eventually implemented method still have a binding for $?
> ROLE?

The way you're referring to $?ROLE here sounds kind of akin to asking
*the* type (not class) of a particular object.  That is, you're asking
the compiler for the answer to a question that doesn't make any sense.

> 4) If a Role has a method stub, you are in effect creating a contract
> with any class which consumes that Role. The class must implement
> that method. However, what happens if the class consumes another Role
> which implements that method. Do they still conflict? or does the
> second Role's version fufill the first Role's contract?

The second role fulfills the first role's contract.  Definitely. 
There's a school of design based precisely on this notion (see Modern
C++ Design and do s/<.*?>//g).

Luke


Re: Unified prelude, FFI, multiple runtimes

2005-09-12 Thread Luke Palmer
On 9/12/05, Yuval Kogman <[EMAIL PROTECTED]> wrote:
> Hi,

Hi.  These are superficial thoughts, before I've had time to really
think about the Big Picture.

> 2.  each block of code has a cryptographic digest, which is the hash
> of it's body with the digests of all the functions it calls.
> This digest is stable regardless of optimization semantics, and
> is applied to the PIL structure.

Okay, a little clarification needed here.  Is the digest of the code
itself a collection of digests, one for the lexical body, and one for
each function it calls (1)?  Or is it a hash that combines all of
those somehow?

How do you deal with recursive/mutually recursive functions?  (I'm
pretty sure there's a way, I just can't think of it)

What about "temp &foo = sub {...}" ?

> 5.  Functions have a notion of equivelence. This is managed based on
> the digest. For example
> 
> my &c_int_mul = BEGIN { Code::Foreign.new(:language<C>, 
> :body("
> int foo (int x, int y) { return x * y }
> ") };
> 
> multi &infix:<*> (int $x, int $y --> int) {
> [+] $x xx $y;
> }
> 
> my $str = &infix:<*><int, int --> int>.digest; # must specify 
> the variant

You mean:

my $str = &infix:<*>:(int, int --> int).digest;

Also, you said that the digest contains the digests of all called
functions.  How do you deal with multis there, which may depend on the
runtime types?

> &c_int_mul.set_equiv($str); # could be in another file
> 
> # or, if they are maintained together
> 
> &c_int_mul.set_equiv(&infix:<*><int, int --> int>);
> 
> This equivelence is with respect to the semantics of the input
> and output. The test suite supposedly can assure that these are
> really equivelent by running the same tests against either
> version.

Okay, good.  So you have a way of marking that two functions do the
same thing, without having the program try to figure that out (which
is impossible).  That's something that Haskell didn't figure out :-)

I suppose it would be nice to have subtype semantics, in that
"function A does the same thing as function B for all arguments that
are valid for function B (but function A may have additional
functionality for arguments that are not valid for B)".  Then you
specify equivalence by specifying subtype equivalence in both
directions (with a shortcut, of course).

> 9.  static analysis may be leveraged to compile direct calls to
> native functions when compile time resolution is possible. In
> the example graph, for example, no eval or symbol table
> assignments are made, so there is no way the code will ever
> change. Hence the entire program can be pre-resolved. This
> should be controlled via the 'optimize' pragma.

Rather than having the compiler try to infer for itself, which would
come up negative 99% of the time and just waste compile time.

> Since FFIs are going to be a core feature of perl 6, they can be
> used to bootstrap the whole compilation process. In effect, the
> "primitive" operations are now just FFI calls to the runtime we
> happen to be executing on.

And if you have a circular reference implementation, the implementors
can decide to implement whatever generating subset is easiest and get
a working Perl.  I like this.

Hmm, could we pull the idea of a generating subset out to
roles/theories?  That is, have a role specify that "a" and "b" are
implemented in terms of each other, so if you provide one, you fulfill
the role contract.  In Haskell, if you don't fulfill a class contract
that's mutually recursive, you get infinite loops.  It'd be nice to
catch that.

Then I guess we would have "theory PerlCore".  Neat.

> To make things modular, the paring of FFI and pure perl functions is
> orthogonal to their location of definition based on the hashing
> scheme.

So as we're hashing PIL, we'd leave out line number information and whatnot.

> WRT MMD, you can set the entire MM equivalent to
> a certain foreign function, and you can also set any variant
> individually. You can even set a single variant to be equivalent to
> a multimethod to make the FFI implementation simpler. The compiler
> simply presents the runtime with all the possible MMD choices, and
> lets the runtime choose between conflicting ones.

Like a nice "wrapping" MMD scheme ought to.

All in all, I like the idea.  I hope there are no major technical
hurdles.  The hashing scheme scares me a little, because it's easy to
create equivalent code that does not look the same from PIL, but it
seems like you covered that base.

Luke


Re: coercion and context

2005-09-14 Thread Luke Palmer
On 9/14/05, Juerd <[EMAIL PROTECTED]> wrote:
> Instead, if you don't want something to coerce, be explicit:
> $foo.does(Blah) or fail;, or even: $foo.isa(Blah) or fail;.)

We've been thinking of changing .isa to something longer, or a method
on .meta, because it's a notion that is often misused.  Use .does
instead; that's what you mean anyway (unless you're snooping around
the guts of an object).

> my Int $int = $num;
> 
> Explicit coercion, however, isn't done with context: it is done with the
> .as() method: $num.as(Int). I think that's weird.

Not to mention the fact that you might have put an Int there for
typechecking purposes instead of coersion purposes, and you actually
want it to die if $num is a string.  Hmmm, how do we get both at once?

> Why not see "bare" types as functions that create context? We'd then
> have Int $num, Int($num), and automatically, $num.Int. All are easy to
> recognise because of the capital I.

You realize that those functions would all be the identity map with
various expectations on their input.  That's okay, it's just an
interesting notion.

> Is this weird syntax? Perhaps so, but we've known it from undef for
> ages. undef without arguments is just undef, but when you give it
> arguments, it suddenly actually *does* something.
> 
> undef($foo) makes $foo undef, undef() just returns undef.

Except we changed that because it was biting people:

undef is null-ary and represents the undefined value; undef $foo is illegal
undefine($foo) is mandatory unary, and always undefines its argument

> Compare with:
> 
> Int($foo) makes $foo Int, Int() just returns ::Int.

That bugs me a little.  But it's a bug I could get used to pretty
quickly I reckon.

I just wonder what kind of role coercion plays in the larger scheme of
things.  Does coercing to a Str work with anything that has a
stringify operation (conversely, is ~ just a Str context applicator?)?
 If a parent class defines a coercion operation, do you get it too
(and what are the implications of that)?  What role does coercion play
in multimethod dispatch?

I can't seem to map the notion of coercion into my world model, so I'd
personally like to see a proposal that covers these questions from
someone who can map it into his world model.  I'll bash it to pieces
of course, but it'd be good to have somewhere to start.  :-)

Luke


Junctions, patterns, and fmap again

2005-09-18 Thread Luke Palmer
Okay, due to some discussion on #perl6, I'll assume that the reason my
fmap proposal was Warnocked is because a fair number of people didn't
understand it.  Also, for the people who did understand, this message
includes a complete proposal for the workings of Junctions that will
fluster Damian again.  :-)

Part 1: fmap

De-symmetricalize hyper.  So, what used to be @foo »+« 1 is now @foo
»+ 1; the hyper marker is only on the side of the operator that is
being hypered over.  Of course, we still have @foo »+« @bar.

An object may do the role[1] Functor, in which case it defines a
method 'fmap' which is a generalization of map.  For instance, let's
try it with a tree.

class Tree does Functor {
method fmap (&code) {...}
}
class Branch is Tree {
has Tree $.left;
has Tree $.right;
method fmap (&code) {
# Return an identical tree with the leaves mapped with &code
return Branch.new(
left  => $.left.fmap(&code),
right => $.right.fmap(&code),
);
}
}
class Leaf is Tree {
has $.data;
method fmap (&code) {
# Just apply &code to the value in the leaf
return Leaf.new( data => &code($.data) )
}
}

Now if we have a $tree that looks like:

+---+---3
|   +---4
+---5

$tree.fmap:{ $_ * 2 } returns:

+---+---6
|   +---8
+---10

The formal signature of fmap is, if T is a Functor:

multi fmap (T[::U], &code:(::U --> ::V) --> T[::V])

That is, it takes a T of some type, and a code that maps some type to
some other type, and returns a T of that other type.

Now, hypers are just syntactic sugar for various forms of fmap:

$x »+ $y# $x.fmap:{ $_ + $y }
$x +« $y# $y.fmap:{ $x + $_ }
foo(»$x«)   # $x.fmap:{ foo($_) }

I have a plan for the $x »+« $y form (and also foo(»$x«, »$y«, »$z«)),
but I don't want to go into that right now.  It basically involves
zipping the structures up into tuples and applying the function to the
tuples.

Part 2: Junctions

A Junction is a set of values together with a logical operation to be
applied when the Junction is in boolean context.  When you add to a
Junction, you add to each of its values.  When you pass a Junction to
a function, you call the function for each of its values and
reconstruct based on the return values.  All in all, a Junction sounds
like a perfect candidate to be a Functor.  Except all the fmapping is
implicit, which is what makes Junctions break the transitivity of <,
the orthogonality of the patterns "1" and "2", and all that other
stuff that people like me covet so.

So my proposal is to make a Junction into a plain old Functor.  So
what used to be:

if any(@values) == 4 {...}

Is now:

if any(@values) »== 4 {...}

And the only thing that makes junctions different from Sets (which are
also Functors) is their behavior in boolean context (and their ability
to be Patterns; see below).

Yeah, it's a little teeny bit longer, but I think it is pretty easy to
get used to.  And now junctive threading looks like threading (just
like with hypers).  The best part is that now it is okay to pass
Junctions to functions since they don't screw with your logical
assumptions, and the signature (Junction $x) is no longer semantically
special; it is a type check just like any other typed signature (and
optional just like any other typed signature :-).

Part 3: Patterns

Like I've said before, a Pattern is a thing that can be on the right
side of a ~~ operator.  Most builtin things are Patterns: numbers,
strings, regexes, lists (maybe), booleans, closures, classes, and
junctions.  Pattern is really a role[2] that requires a match(Any -->
Bool).  So then $x ~~ $y is equivalent to $y.match($x).

Here is a table of standard Patterns:

numbers, strings:  eqv
regexes:   match (note this gives us /foo/.match($str) for free)
lists: dunno
booleans:  boolean truth (ignores left side)
closures:  apply closure and test truth
classes:   .does
junctions: see below

A Junction of Patterns does Pattern under its logical operation.  So
$x ~~ any($y,$z) is equivalent to $x ~~ $y || $x ~~ $z.  This is the
only autothreading operation.  And that is so you can say:

given $value {
when 1 | 2 | 3   {...}
when /^foo/ | /^bar/ {...}
}

The signature of grep is:

sub grep (Pattern $pat, *@values) {...}

So then these all make sense:

grep /foo/, @values;
grep 1|2,   @values;
grep Node,  @objects

And the regular closure form of grep is only for straight boolean
tests against the argument:

grep { lc eq 'foo' } @strings;

This really doesn't give us anything new.  But in my opinion, it
solidifies what we already have greatly, and makes it much easier to
think about and work with (no more guessing :-).  It also trivializes
the smart match table in S04.

Luke

Re: Junctions, patterns, and fmap again

2005-09-19 Thread Luke Palmer
On 9/19/05, Stuart Cook <[EMAIL PROTECTED]> wrote:
> On 19/09/05, Luke Palmer <[EMAIL PROTECTED]> wrote:
> > Part 1: fmap
> >
> > I have a plan for the $x »+« $y form (and also foo(»$x«, »$y«, »$z«)),
> > but I don't want to go into that right now.  It basically involves
> > zipping the structures up into tuples and applying the function to the
> > tuples.
> 
> Does this mean that 'unary' (one-side) hyper would be
> structure-preserving, but 'binary' (two-side) hyper would not? Or
> would you take the final list of tuples and re-build a structure?

Well, I've written up the details in a 40 line Haskell program to make
sure it worked.  I think I deleted the program, though.

The basic idea is that, alongside Functor, you have a Zippable theory
which defines:

theory Zippable[::T] {
multi zip (T[::A], T[::B] --> T[:(::A, ::B)]) {...}
}

Where that last coloney madness is a yet-to-be-proposed tuple type
(but tuples can be emulated if they are not in the core language, so
it's no biggie).  That is, zip takes two structures and figures out
how to combine them in a reasonable way into pairs of values.  So:

zip([1,2,[3,4]], [["a","b"], "c", "d"])

Gives:

[[:(1,"a"), :(1,"b")], :(2,"c"), [:(3,"d"), :(4,"d")]]

In order to be consistent with the specced semantics.  In order to
keep with the specced semantics of junctions, you'll probably see:

zip(1|2, 3&4)

Give:

(:(1,3) & :(1,4)) | (:(2,3) & :(2,4))

So it's really up to the zippable functor itself to figure out the
best way to zip.  After the structures are zipped up, you fmap the
binary function on each of the tuples, resulting in a reasonable
functor structure again.

Hmmm, that should probably be fzip or something. 

Luke


Re: Junctions, patterns, and fmap again

2005-09-19 Thread Luke Palmer
On 9/19/05, Luke Palmer <[EMAIL PROTECTED]> wrote:
> Well, I've written up the details in a 40 line Haskell program to make
> sure it worked.  I think I deleted the program, though.

Nope.  Here it is.  And it was 22 lines. :-)

http://svn.luqui.org/svn/misc/luke/work/code/haskell/hyper.hs

Luke


Re: skippable arguments in for loops

2005-09-22 Thread Luke Palmer
On 9/22/05, Carl Mäsak <[EMAIL PROTECTED]> wrote:
> FWIW, to me it looks fairly intuitive. undef here means "don't alias
> the element, just throw it away"... gaal joked about using _ instead
> of undef. :)

Joked?  Every other language that has pattern matching signatures that
I know of (that is, ML family and Prolog) uses _.  Why should we break
that?  IMO, it's immediately obvious what it means.

Something tells me that in signature unification, "undef" means "this
has to be undef", much like "1" means "this has to be 1".

Luke


Re: Exceptuations

2005-09-25 Thread Luke Palmer
On 9/25/05, Yuval Kogman <[EMAIL PROTECTED]> wrote:
> I propose a new model - each exception has a continuation that
> allows it to be unfatalized.

I think we've already talked about something like this.  But in the
presence of "use fatal", it makes a lot more sense.

Something comes to mind:

use fatal;
sub foo()  { bar()  }
sub bar()  { baz()  }
sub baz()  { quux() }
sub quux() { fail }
{
say foo();
CATCH { $!.continue(42) }
}

Exactly which exception is continued?  Where do we cut off the call
chain and replace our own value?  This comes up again with open(). 
Let's say open is implemented with a series of five nested calls, the
innermost which knows how to fail and propagate outwards.  However,
the programmer using open() has no idea of its internals, so it ought
to override the return value of open() itself, rather than its utility
functions.  However, we can't go with outermost, because then you'd
only be "fixing" the lexical call ("say foo()" above).  So it's
somewhere in between.  Where?

Luke


Re: matching colors (was Stringification, numification, and booleanification of pairs)

2005-09-25 Thread Luke Palmer
On 9/25/05, Juerd <[EMAIL PROTECTED]> wrote:
> We can do better than equivalence testing for colors. Instead, try to
> match. Surely a *smart* match operator really is smart?
>
> $color ~~ '#FF00FF'
>==
> $color ~~ 'magenta'
>==
> $color ~~ [ 255, 0, 255 ]

Hmm.  That violates my proposal that the right side is the thing that
determines how the left side is matched.  So there's something wrong
with one of the two...

If we keep my proposal, then we get:

$color ~~ color('#FF00FF')
$color ~~ color('magenta')

etc., for some definition of color().   In fact, that's probably the
right thing to do, is to make color a pattern constructor.  Then you
can even put it in signatures:

   sub name(color('#FF00FF')) { "magenta" }

Whatever that buys you...

If we ditch my proposal, then we can write it as you did.  But I feel
uncomfortable with that (thus the proposal), since you never know how
people may have overloaded ~~ for "dwimminess", and when you say:

sub print_colored ($color, $text) {
    $color = do given $color {
        when 'red'   { Color.new('#FF0000') }
        when 'green' { Color.new('#00FF00') }
        when Color   { $color }
        default      { fail "Unknown color" }
    }
}

Though in this case it's not much of an issue, unless the red pattern
matches /#../ or something, in which case you are losing
granularity.  I guess the reason it makes me uncomfortable is that old
obsession of mine where things that look orthogonal ought to be, like
'red' (a string) and Color (a class).

Then again, would one expect:

$foo ~~ 'bar'

To be equivalent to:

~$foo eq 'bar'

Or not?  I mean, that can be the string pattern's job, but then 'red'
and Color aren't really equivalent anymore.

/me punts patterns until he understands more.

Luke


Re: Look-ahead arguments in for loops

2005-09-29 Thread Luke Palmer
On 9/29/05, Dave Whipp <[EMAIL PROTECTED]> wrote:
>for grep {defined} @in -> $item, ?$next {
>  print $item unless defined $next && $item eq $next;
>}

This is an interesting idea.  Perhaps "for" (and "map") shift the
minimum arity of the block from the given list and bind the maximum
arity.  Of course, the minimum arity has to be >= 1 lest an infinite
loop occur.  But then perhaps you have another way to avoid integer
indices:

for @list -> $this, *@rest {
    ...
}

As long as you don't look backwards.  Looking backwards makes problems
for GC in lazy contexts, so this might just be perfect.
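A rough sketch of that windowed binding, in Python for illustration since the Perl 6 being discussed here was not yet implemented (the helper name `iter_with_lookahead` and its `n_ahead` parameter are invented for the sketch):

```python
def iter_with_lookahead(items, n_ahead=1):
    """Yield (current, rest): rest holds up to n_ahead upcoming items,
    so it shrinks (optional bindings go unbound) near the end of the list."""
    items = list(items)
    for i, current in enumerate(items):
        yield current, items[i + 1 : i + 1 + n_ahead]

# The duplicate-skipping loop from the quoted example:
out = [cur for cur, rest in iter_with_lookahead([1, 1, 2, 3, 3])
       if not (rest and cur == rest[0])]
print(out)  # [1, 2, 3]
```

Note that only looking forward is needed: the window is always a slice of what the iterator has not yet consumed.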

Luke


Re: Look-ahead arguments in for loops

2005-09-29 Thread Luke Palmer
On 9/29/05, Austin Hastings <[EMAIL PROTECTED]> wrote:
> Luke Palmer wrote:
> >>This is an interesting idea.  Perhaps "for" (and "map") shift the
> >>minimum arity of the block from the given list and bind the maximum
> >>arity.  Of course, the minimum arity has to be >= 1 lest an infinite
> >>loop occur.

> Or not. We've already seen idioms like
>
>   for (;;) ...
>
> If you specify your minimum arity as 0, then you're obviously planning to 
> deal with it. This presumes that iterators can handle behind-the-scenes 
> updating, of course.

Well, I see two reasons for not allowing arity zero.  First, I think
it's too easy to come up with a function with minimum arity zero:

my @lengths = @list.map:&length   # oops, infinite loop

Second, you don't get anything by doing this:

for @list -> *@items {
    ...
}

As it's equivalent to:

loop {
...
}

Where you use @list instead of @items.

Luke


Maybe it's Just Nothing (was: Look-ahead arguments in for loops)

2005-09-29 Thread Luke Palmer
On 9/29/05, Austin Hastings <[EMAIL PROTECTED]> wrote:
> Matt Fowles wrote:
> >
> >for (1, 2) -> ?$prev, $cur, ?$next {
> >   say "$prev  -> $cur" if $prev;
> >   say $cur;
> >   say "$cur -> $next" if $next;
> >   say "next";
> >}
> >
> [...]
>
> I assume so because it's the only execution path that seems to work. But
> that would be assuming there was always at least one non-optional
> binding. Given that Luke's against all-optional signatures, too, I'll
> withdraw that part of the suggestion. And with at least one required
> binding, then there's no reason that we can't have the window extend on
> both sides of the current value.
>
> Luke?

Hm, I'm being called upon now.

Well, then I start to ask questions like:

for 1..10 -> ?$a, $b, ?$c, $d, ?$e {...}

Which simply doesn't make any sense to me.  Also, figuring out such
things (as is the case with lookbehind arguments) needs a little bit
too much knowledge about the signature of the function you're viewing.

So instead of sticking that in the signature, we could do it with
adverbs on for:

for 1..10, :lookbehind(1) :lookahead(1) -> $cur, ?$left, ?$right {
...
}

You'll note that, unfortunately, the lookbehind has to come *after*
the cur argument because it must be optional.  Hmm... actually, that
doesn't work, because at the beginning of the list you won't have a
$left, and at the end you won't have a $right.

It's possible that it's time to start kicking undef into gear.  Until
now in Perl culture, undef has been just a little bit special, with
people not fearing to put it in lists and data structures.  There may
be benefit in making it more special, so that people shouldn't define
their own meaning for it.  Making exceptions undefs is a step in that
direction.  If we take another step in that exact same direction, we
could make undefs exceptions (the converse of before):

sub foo() {
return undef;# almost the exact same as "fail"
}

That is, under "use fatal", all undef return values are converted into
exceptions.
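A loose Python analogue of that conversion, sketching how a "use fatal"-style pragma could promote undef (None) returns into exceptions (the decorator name and the choice of RuntimeError are invented for the sketch):

```python
def fatal(fn):
    """Promote a None return into an exception, roughly what
    'use fatal' would do to 'return undef'."""
    def wrapped(*args, **kwargs):
        result = fn(*args, **kwargs)
        if result is None:
            raise RuntimeError(f"{fn.__name__} failed (returned None)")
        return result
    return wrapped

@fatal
def foo():
    return None  # plays the role of 'return undef' / 'fail'

try:
    foo()
    failed = False
except RuntimeError as err:
    failed = True
    print(err)  # foo failed (returned None)
```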

That was somewhat beside the point, but not really.  If undefs are a
bit taboo--for example, actually writing "undef" in most code is
considered bad style--then we can steal them for the language's
purposes, such as passing to a for block as lookbehind and lookahead
parameters when you're near the beginning or end of a list.

It seems like I'm making a pretty big deal out of just passing undef
when there is no look-behind/-ahead, but I really want to be able to
distinguish between "there was an undef in the list" and "we're almost
done, so there was actually nothing here".  Of course, I still don't
get to distinguish those, but a list with an undef in it becomes much
less common.

The way we can help ease the pain of undef not being available for
user purposes anymore is to allow easy manipulation of Either types. 
If you define "easy" weakly, then union types give us that:

union Maybe[::t] (Nothing, Just(::t));

Mmm, Haskellicious.  But of course you wouldn't need to declare your
types everywhere because of Perl's dynamic typing/type inference
(depending on your mood).  Nothing is nice, but I wouldn't call
working with Just "easy" for Joe Schmoe.  It's nice and safe, but it's
annoying sometimes.  What I might think I want is:

union Maybe[::t] (Nothing, ::t);   # not legal union syntax

You lose a lot with that, though.  For instance, Just(Nothing) becomes
unrepresentable.  And consequently, nested calls to things that return
Maybes become woosy.  So that's not a good idea.

So allowing the definition of Maybe is a good start.   But it's
difficult to know whether Perl programmers will put up with it.  It's
easier--lazier--just to use undef.
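The same Maybe shape can be sketched in Python tagged classes, which makes the representability point concrete: the tagged form keeps Just(Nothing) distinct, while the collapsed form cannot.

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass
class Just(Generic[T]):
    """Tagged wrapper: Maybe[t] = Just(t) | Nothing."""
    value: T

@dataclass
class Nothing:
    pass

# Nested Maybes survive because Just tags its payload:
nested = Just(Nothing())
assert isinstance(nested, Just) and isinstance(nested.value, Nothing)

# The collapsed 'Nothing or plain t' form can't tell these apart:
collapsed = Nothing()  # "no value"? or "the value Nothing"? ambiguous
```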

Maybe we ought to call the whole thing off.  Undefs have no stigma,
and everything is as usual.  If you want to iterate over a list with
lookahead and lookbehind, you shouldn't have put undefs in your list.

The last thing to do, if we want to keep the undef status quo, is to
define Maybe in the language and use it for things like for's
lookahead binding.  It's kind of like a "formal undef", something like
"listen, it's pretty common that I won't give you a value here, so I'm
going to mark it specially when I both do and do not".  Again, not
easy enough; too much abstraction to think about for an everyday task.
I think my favorite so far is the previous paragraph's resolution.
Just because it's my favorite doesn't mean I'm happy with it.

Oh, right, and as for my favorite actual usage of for:

for @list, :lookbehind(2) :lookahead(1)
-> $behind1, $behind2, $value, $ahead {
...
}

Luke


Re: Look-ahead arguments in for loops

2005-09-30 Thread Luke Palmer
On 9/30/05, Damian Conway <[EMAIL PROTECTED]> wrote:
> Rather than adding Yet Another Feature, what's wrong with just using:
>
> for @list ¥ @list[1...] -> $curr, $next {
> ...
> }
>
> ???

Thanks.  I missed that one.

However, I think your point is pretty much the same as mine. 
Certainly adding this to specialized syntax in signature matching is
an overfeature, so I tried to squish it down into options, which we
can add at will without really complexifying the core language.  But
without options, like this, is even better.

Incidentally, the undef problem just vanishes here (being replaced by
another problem).  Since zip takes the shorter of its argument lists,
you'll never even execute the case where $next is undef.
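Python's zip has the same minimal semantics, which shows why the undef case disappears, and what the replacement problem is (silent truncation):

```python
names = ["alice", "bob", "carol"]
scores = [10, 9]  # shorter list: "carol" is silently dropped

pairs = list(zip(names, scores))
print(pairs)  # [('alice', 10), ('bob', 9)]
# The $next-is-undef branch never runs; the cost is that the
# length mismatch passes without any warning.
```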

Luke


Re: Look-ahead arguments in for loops

2005-10-01 Thread Luke Palmer
On 10/1/05, John Macdonald <[EMAIL PROTECTED]> wrote:
> I forget what the final choice was for syntax for the reduce
> operator (it was probably even a different name from reduce -
> that's the APL name), but it would be given a list and an
> operator and run as:
>
> my $running = op.identity;
> $running = $running op $_ for @list;
>
> So, to get a loop body that knows the previous value, you
> define an operator whose identity is the initial value of the
> list and reduce the rest of the list.

And that was never quite resolved.  The biggest itch was with
operators that have no identity, and operators whose codomain is not
the same as the domain (like <, which takes numbers but returns
bools).

Anyway, that syntax was

$sum = [+] @items;

And the more general form was:

$sum = reduce { $^a + $^b } @items;

Yes, it is called reduce, because "foldl" is a miserable name.
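For comparison, both forms exist in Python's terms: functools.reduce is a left fold, and its optional initial value stands in for the operator's identity.

```python
from functools import reduce
import operator

items = [1, 2, 3, 4]

total = reduce(operator.add, items)  # $sum = [+] @items
print(total)  # 10

# An explicit identity keeps the empty list from being an error:
assert reduce(operator.add, [], 0) == 0

# The codomain problem from above: folding '<' is meaningless,
# because each step would compare the previous *bool* result
# against the next number, not number against number.
```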

Luke


Re: seeing the end of the tunnel

2005-10-01 Thread Luke Palmer
On 10/1/05, David Storrs <[EMAIL PROTECTED]> wrote:
> All in all, I think that might just be the end of the tunnel up
> ahead.  Go us for getting here, and loud applause to @Larry for
> guiding us so well!

Applause for p6l for hashing out the issues that we didn't think of.

I recently wrote a "Perl 6 design TODO", which was surprisingly small,
which enumerated the things to be done before I considered the design
of Perl 6 to be finished.  Larry replied with a couple more items.  In
particular:

> Here are the chapters which haven't been covered yet:
>
>  * 17. Threads
>  * 26. Plain Old Documentation
>  * 29. Functions

Luke


Re: zip: stop when and where?

2005-10-04 Thread Luke Palmer
On 10/4/05, Juerd <[EMAIL PROTECTED]> wrote:
> What should zip do given 1..3 and 1..6?
>
> (a) 1 1 2 2 3 3 4 5 6
> (b) 1 1 2 2 3 3 undef 4 undef 5 undef 6
> (c) 1 1 2 2 3 3
> (d) fail
>
> I'd want c, mostly because of code like
>
> for @foo Y 0... -> $foo, $i { ... }
>
> Pugs currently does b.

I think (c) is correct, precisely for this reason.  The idiom:

for 0... Y @array -> $index, $elem {...}

Is one we're trying to create.  If it involves a pain like:

for 0... Y @array -> $index, $elem {
    $elem // last;
}

Then it's not going to be a popular idiom.

If you want behavior (b), SWIM:

for 0... Y @array, undef xx Inf -> $index, $elem {
...
}

If that ends up being common, we could create a syntax for it, like
postfix:<...>:

@array...  # same as (@array, undef xx Inf)
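Both behaviours already exist as library pieces in Python, for comparison: itertools.count is the `0...` idiom, and zip_longest is the explicit undef-padding:

```python
from itertools import count, zip_longest

array = ["a", "b", "c"]

# for 0... Y @array -> $index, $elem {...}: the infinite counter
# is cut off when @array runs out (stop-at-shortest, behaviour (c)).
indexed = list(zip(count(), array))
print(indexed)  # [(0, 'a'), (1, 'b'), (2, 'c')]

# Explicit padding, behaviour (b): exhausted lists yield None (undef).
padded = list(zip_longest(array, [1, 2, 3, 4, 5]))
print(padded)   # [('a', 1), ('b', 2), ('c', 3), (None, 4), (None, 5)]
```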

Luke


Re: zip: stop when and where?

2005-10-04 Thread Luke Palmer
On 10/4/05, Luke Palmer <[EMAIL PROTECTED]> wrote:
> If that ends up being common, we could create a syntax for it, like
> postfix:<...>:
>
> @array...  # same as (@array, undef xx Inf)

No, no, that's a bad idea, because:

@array...# same as @array.elems..Inf

So I think I'm pretty much with Damian on this one.  I don't like the
idea of it discriminating between finite and infinite lists, though. 
What about things like =<>, for which it is never possible to know if
it is infinite?

I don't think people make assumptions about the zip operator.  "Does
it quit on the shortest one or the longest one?" seems like a pretty
common question for a learning Perler to ask.  That means they'll
either write a little test or look it up in the docs, and we don't
need to be so strict about its failure.  I'd like to go with the
minimum.

I was thinking a good name for the adverbs would be :long and :short.

Luke


Re: A listop, a block and a dot

2005-10-05 Thread Luke Palmer
On 10/4/05, Miroslav Silovic <[EMAIL PROTECTED]> wrote:
> Playing with pugs, I ran into this corner case:
>
> sub f($x) { say $x; }
> f {1}.(); # ok, outputs 1
>
> sub f(*@_) { say @_; }
> f {1}.(); # outputs block, tries to call a method from the return of say,
> dies
>
> Whitespace after f doesn't change the behaviour (in either case). Is this
> behaviour a bug in pugs? Should . bind stronger than non-parenthesised
> function call regardless of slurpy?

I think that's what it should do, yes.

Luke


Re: A listop, a block and a dot

2005-10-05 Thread Luke Palmer
On 10/5/05, TSa <[EMAIL PROTECTED]> wrote:
> IIRC, this puts f into the named unary precedence level
> which is below method postfix.

We're trying to stop using the words "below" and "above" for
precedence.  Use "looser" and "tighter" instead, as there is not
ambiguity with those.

>(f ({1}.()))
>
>((f {1}).())

Listop application has always had looser precedence than unary op
application.  Eg.

% perl -le 'print(length "foo" < 4)'
1
% perl -le 'print 3 < 4'
1

With parentheses:

print((length "foo") < 4)
print(3 < 4)

So this was quite a disturbing bug.


Re: A listop, a block and a dot

2005-10-05 Thread Luke Palmer
On 10/5/05, Autrijus Tang <[EMAIL PROTECTED]> wrote:
> However:
> f:{1}.()
>
> still parses as
>
> (&f(:{1})).()
>
> as the "adverbial block" form takes precedence.  Is that also wrong?

No, that seems right to me, much in the same way that:

$x.{1}.{2}

Binds to the left.

Luke


Re: Exceptuations

2005-10-05 Thread Luke Palmer
On 10/5/05, Yuval Kogman <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 05, 2005 at 16:57:51 +0100, Peter Haworth wrote:
> > On Mon, 26 Sep 2005 20:17:05 +0200, TSa wrote:
> > > Whow, how does a higher level exception catcher *in general* know
> > > what type it should return and how to construct it? The innocent
> > > foo() caller shouldn't bother about a quux() somewhere down the line
> > > of command. Much less of its innards.
> >
> > Well said.
>
>
> No! Not well said at all!
>
> The exception handler knows *EVERYTHING* because it knows what
> exception it caught:

I don't think it was a "how is this possible", but more of a "what
business does it have?".  And as far as I gathered, they're saying
pretty much what you've been saying, but in a different way.  It's
about the continuation boundary; that is, if you're outside a module,
you have no say in how the module does its business.  You can continue
only at the module boundary, replacing a return value from its public
interface.

Of course, exactly how this "public interface" is declared is quite undefined.

Luke


Re: Roles and Trust

2005-10-05 Thread Luke Palmer
On 10/5/05, Ovid <[EMAIL PROTECTED]> wrote:
>   sub _attributes {
> my ($self, $attrs) = @_;
> return $$attrs if UNIVERSAL::isa( $attrs, 'SCALAR' );
>
> my @attributes = UNIVERSAL::isa( $attrs, 'HASH' )
>   ? %$attrs : @$attrs;
> return unless @attributes;
> # more code here
>   }
>
> This was a private method, but $attrs is an argument that is passed in
> by the person using my class.  It would be nice if I could just assign
> an "attributes" role to the SCALAR, ARRAY, and HASH classes and say
> that only my class could see the method(s) it provides.  Thus, my
> caller would be blissfully unaware that I am doing this:
>
>   $attrs->attributes; # even if it's an array reference

sub _attributes($ref) {
my multi attributes ($scalar) { $$scalar }
my multi attributes (@array) { @array }
my multi attributes (%hash) { %hash }
attributes($ref)
}

Those attributes look suspiciously identityish.  There is some context
magic going on.

Or doesn't this solve your problem?
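The lexically scoped multi above maps fairly directly onto Python's functools.singledispatch, shown here for comparison (the function name `attributes` is kept from the quoted code; the string variant stands in for the SCALAR case):

```python
from functools import singledispatch

@singledispatch
def attributes(arg):
    raise TypeError(f"no attributes variant for {type(arg).__name__}")

@attributes.register
def _(arg: list):   # the @array case
    return arg

@attributes.register
def _(arg: dict):   # the %hash case
    return list(arg.items())

@attributes.register
def _(arg: str):    # stands in for the $scalar case
    return [arg]

print(attributes({"k": 1}))  # [('k', 1)]
```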

Luke


Re: zip: stop when and where?

2005-10-05 Thread Luke Palmer
On 10/5/05, Damian Conway <[EMAIL PROTECTED]> wrote:
> So I now propose that C<zip> works like this:
>
> C<zip> interleaves elements from each of its arguments until
> any argument is (a) exhausted of elements I<and> (b) doesn't have
> a C property.
>
> Once C<zip> stops zipping, if any other element has a known finite
> number of unexhausted elements remaining, the C<zip> fails.

Wow, that's certainly not giving the user any credit.

I'm just wondering why you feel that we need to be so careful.

Luke


Re: zip: stop when and where?

2005-10-06 Thread Luke Palmer
On 10/5/05, Damian Conway <[EMAIL PROTECTED]> wrote:
> Luke wrote:
>  > I'm just wondering why you feel that we need to be so careful.
>
> Because I can think of at least three reasonable and useful default behaviours
> for zipping lists of differing lengths:
>
>  # Minimal (stop at first exhausted list)...
>  for @names ¥ @addresses -> $name, $addr {
>  ...
>  }
>
>
>  # Maximal (insert undefs for exhausted lists)...
>  for @finishers ¥ (10..1 :by(-1))  -> $name, $score {
>  $score err next;
>  ...
>  }
>
>
>  # Congealed (ignore exhausted lists)...
>  for @queue1 ¥ @queue2 -> $server {
>  ...
>  }
>
> Which means that there will be people who expect each of those to *be* the
> default behaviour for unbalanced lists.

Perhaps that makes sense.  That certainly makes sense for other kinds
of constructs.  Something makes me think that this is a little
different.  Whenever somebody asks what "Y" is on #perl6, and I tell
them that it interleaves two lists, a follow-up question is *always*
"what does it do when the lists are unbalanced."  Now, that may just
be a behavior of #perl6ers, but I'm extrapolating.  It means that
there isn't an assumption, and if they weren't #perl6ers, they'd RTFM
about it.

When I learned Haskell and saw zip, I asked the very same question[1].
 I was about as comfortable writing Haskell at that point as beginning
programmers are with writing Perl, but it still took me about ten
seconds to write a test program to find out.  The rest of Perl doesn't
trade a reasonable default behavior for an error, even if it *might*
be surprising the first time you use it.  It doesn't take people long
to discover that kind of error and never make that mistake again.

If we make zip return a list of tuples rather than an interleaved
list, we could eliminate the final 1/3 of those errors above using the
typechecker.  That would make the for look like this:

for @a Y @b -> ($a, $b) {...}

An important property of that is the well-typedness of the construct. 
With the current zip semantics:

my A @a;
my B @b;
for @a Y @b -> $a, $b {
# $a has type A (+) B
# $b has type A (+) B
}

With tuple:

my A @a;
my B @b;
for @a Y @b -> ($a, $b) {
# $a has type A
# $b has type B
}

Which is more correct.  No... it's just correct, no superlative
needed.  It also keeps things like this from happening:

for @a Y @b -> $a, $b {
say "$a ; $b"
}
# a1 b1
# a2 b2
# a3 b3
# ...

"Oh, I need a count," says the user:

for @a Y @b Y 0... -> $a, $b {  # oops, forgot to add $index
say "$a ; $b"
}
# a1 b1
# 0  a2
# b2 1
# ...
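The off-by-one above is easy to reproduce: re-chunking a flattened zip by the block's arity goes out of phase the moment the arity and the zip width disagree, whereas tuple-returning zip cannot misalign.

```python
a = ["a1", "a2"]
b = ["b1", "b2"]

# Tuple-returning zip: each iteration gets one complete, aligned tuple.
tuples = list(zip(a, b, range(2)))
print(tuples)  # [('a1', 'b1', 0), ('a2', 'b2', 1)]

# Interleaved zip re-chunked by an arity of 2 (the forgotten $index):
flat = [x for triple in tuples for x in triple]
chunks = [flat[i:i + 2] for i in range(0, len(flat), 2)]
print(chunks)  # [['a1', 'b1'], [0, 'a2'], ['b2', 1]]
```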

Luke

[1] But I didn't need to.  The signature told me everything:

zip :: [a] -> [b] -> [(a,b)]

It *has* to stop at the shortest one, because it has no idea how to
create a "b" unless I tell it one.  If it took the longest, the
signature would have looked like:

zip :: [a] -> [b] -> [(Maybe a, Maybe b)]

Anyway, that's just more of the usual Haskell praise.


Re: zip: stop when and where?

2005-10-06 Thread Luke Palmer
On 10/6/05, Juerd <[EMAIL PROTECTED]> wrote:
> for @foo Y @bar Y @baz -> $quux, $xyzzy { ... }
>
> is something you will probably not see very often, it's still legal
> Perl, even though it looks asymmetric. This too makes finding the
> solution in arguments a non-solution.

Don't be silly.  There's no reason we can't break that; it's not an
idiom anybody is counting on.  If you still want the behavior:

for flatten(@foo Y @bar Y @baz) -> $quux, $xyzzy {...}

But your point about Y returning a list and therefore not being
for-specific is quite valid.

Luke


Re: $value but lexically ...

2005-10-06 Thread Luke Palmer
On 10/6/05, Dave Whipp <[EMAIL PROTECTED]> wrote:
> sub foo( $a, ?$b = rand but :is_default )
> {
> ...
> bar($a,$b);
> }
>
> sub bar( $a, ?$b = rand but :is_default )
> {
>warn "defaulting \$b = $b" if $b.is_default;
>...
> }
>
>
> It would be unfortunate if the "is_default" property attached in &foo
> triggers the warning in &bar. So I'd like to say somthing like
>
>sub foo( $a, ?$b = 0 but lexically :is_default ) {...}
> or
>sub foo( $a, ?$b = 0 but locally :is_default ) {...}
>
> to specify that I don't want the property to the propagated.

This came up before when I proposed "lexical properties".  That was
before we knew that a property was just a role.  So you can do a
lexical property like so:

{
my role is_default {}   # empty
sub foo($a, ?$b = 0 but is_default) {...}
}
{
my role is_default {}
sub bar($a, ?$b = rand but is_default) {...}
}

If this turns out to be a common want, I can see:

sub bar($a, ?$b = rand but my $is_default) {
warn "Defaulted to $b" if $b.does($is_default);
}

But I don't think it will be, and the empty role is easy enough.
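The closest everyday Python analogue of a lexical is_default role is a module-private sentinel default: only code in the defining scope can test for it, so callers can neither forge nor observe it (the value 42 stands in for rand; names are invented for the sketch).

```python
# Private sentinel: plays the role of 'my role is_default {}'.
_IS_DEFAULT = object()

def bar(a, b=_IS_DEFAULT):
    if b is _IS_DEFAULT:            # ~ $b.does($is_default)
        b = 42                      # stand-in for 'rand'
        print(f"Defaulted to {b}")  # ~ warn "Defaulted to $b"
    return a, b

print(bar(1))     # prints the warning, returns (1, 42)
print(bar(1, 7))  # no warning, returns (1, 7)
```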

Luke


Re: Exceptuations

2005-10-06 Thread Luke Palmer
On 10/6/05, Yuval Kogman <[EMAIL PROTECTED]> wrote:
> when i can't open a file and $! tells me why i couldn't open, i
> can resume with an alternative handle that is supposed to be the
> same
>
> when I can't reach a host I ask a user if they want to wait any
> longer
>
> when disk is full I ask the user if they want to write somewhere
> else
>
> when a file is unreadable i give the user the option to skip

I'm not bashing your idea, because I think it has uses.  But I'll
point out that all of these can be easily accompilshed by writing a
wrapper for open().  That would be the usual way to abstract this kind
of thing.

Luke


Re: $value but lexically ...

2005-10-06 Thread Luke Palmer
On 10/6/05, Juerd <[EMAIL PROTECTED]> wrote:
> Luke Palmer skribis 2005-10-06 14:23 (-0600):
> > my role is_default {}   # empty
> > sub foo($a, ?$b = 0 but is_default) {...}
>
> Would this work too?
>
> 0 but role {}

Most certainly, but you would have no way to refer to that role later,
so it is questionable how useful that construct is.  No, it's not
questionable.  That is a useless construct.

Luke


Type annotations

2005-10-06 Thread Luke Palmer
Autrijus convinced me that we have to really nail down the semantics
of type annotation without "use static".   Let's first nail down what
I meant by "semantics" in that sentence.  Basically, when do various
things get checked?  Run time or compile time (not coercion; I have a
proposal for that coming).

Oh, I'm asking p6l here, not Larry in particular.  This part of the
language is yet-undesigned, so some arguments one way or the other
would be nice to hear.

So we're in line one of a Perl program, with static typing/inference
disabled (or at least inference *checking* disabled; perl may still
use it for optimization).  When do the following die: compile time
(which includes CHECK time), run time, or never?

my Array $a = 97;  # dies eventually, but when?
my Int   $b = 3.1415;  # dies at all?

sub foo (Int $arg) {...}
foo("hello");  # should die at the latest when foo() is called

sub bar (Int $arg --> Str) {...}
foo(bar(42));

sub static  (Code $code, Array $elems --> Array) {
[ $elems.map:{ $code($_) } ]
}
sub dynamic ($code, $elems) {
[ $elems.map:{ $code($_) } ]
}
static({ $_+1 }, dynamic("notcode", [1,2,3,4,5]));
dynamic("notcode", static({ $_+1 }, [1,2,3,4,5]));

That should cover most of the interesting cases.

Luke


Re: Type annotations

2005-10-07 Thread Luke Palmer
On 10/7/05, chromatic <[EMAIL PROTECTED]> wrote:
> If I added a multisub for Array assignment so that assigning an integer
> value set the length of the array, would 97 be compatible with Array?

You're not allowed to overload assignment.

But you are allowed to overload coercion.  Essentially, every
expression gets a coerce:($expr, $current_context) wrapped around
it (where these are optimized away when they do nothing).  If you
allow definition of these at runtime, there are two side-effects:

1) No typechecking can ever take place in any form.
2) No coerce calls can ever be optimized away.

These are very unfortunate.  So I'm inclined to say that you can't
overload coercion at runtime.

> Juerd writes:
> > Do remember that some programs run for weeks or months, rather than a
> > few seconds. It's nice to get all the certain failures during compile
> > time.

There is a tradeoff around typecheckers that bounce on either side of
the Halting problem.  Either there are programs you call erroneous
when they are not; or there are programs you call correct when they
are erroneous.  I get the impression that most of us want the latter
kind for annotations (in the absence of "use static").

Luke


