Re: Ways to add behavior

2005-11-07 Thread TSa
HaloO,

Larry Wall wrote:
> : ::Takes3Ints ::= :(Int,Int,Int --> Any);
> : 
> : my &foo:(Takes3Ints);
> 
> I'd say that has to be something like:
> 
> my &foo:(Takes3Ints:);
> 
> or maybe one of
> 
> my &foo:(Takes3Ints \!);
> my &foo:(\Takes3Ints);
> my &foo\(Takes3Ints);
> 
> since Takes3Ints is the implementation and/or arglist type.

Sorry, you lost me. Why is there an invocant colon in the
first example, and what exactly does the ref indicator
mean in all three different alternatives?


> Otherwise how do you distinguish
> 
> my &foo:(Takes3Ints);
> my &foo:(Int, Int, Int);

You mean the latter &foo as non-code, simple 3-tuple type?
Otherwise I would think the & sigil implies the optional
default &:( --> Any). But then again I haven't fully grasped
your notion of the pure type name before the first colon.


> The colon would still be required in an rvalue context.  But the extension
> of that to subs seems to be:
> 
> my &sub(Sub: \$args)
> 
> Hmm, does that mean that a method actually looks like this?
> 
> my &meth(Method: $self: \$args)

I have ranted that I see method invocation as a two step
call with a covariant dispatch first and then a normal
contravariant call with argument type check, but why the
second colon? And why the first on &sub(Sub: \$args)? Just
to name the intended code subtype?
-- 


proposal: rename 'subtype' declarator to 'set'

2005-11-09 Thread TSa

Larry Wall wrote:
> On Mon, Nov 07, 2005 at 01:05:16PM +0100, TSa wrote:
> : With the introduction of kind capture variables ^T we could completely
> : drop the subtype special form. As you pointed out the adding of
> : constraints happens with the where clause anyway. Thus we return to
> : the usage of the compile time name assignment form
> :
> :   ::Blahh ::= ::Fasel where {...}
> :
> : where the :: in all cases means working on the MIR (Meta Information
> : Repository) which is the world of names (In Russian, Mir (Мир) means
> : "peace," and connotes "community." --- cited from Wikipedia).
>
> Except that a type declarator can remove the need for all those extra ::
> markers.  (Actually, the ::Fasel can be presumably written Fasel if
> Fasel is already declared, so the only strangeness is on the left.
> We could hack identifier-left-of ::= like we did identifier-left-of =>,
> or we could turn ::= into more of a general alias macro idea.  But I think
> people generally like declarators out to the left for readability.)

So, why not call the thing what it is---a set *type* declarator!

  set SmallInt of Int where { abs < 10 };

  set SomeNums of Num = (3.14, 4, 89, 23.42);

  set Bit of Int = (0,1);


Enumerations are then just sets of pairs

  set NumWords of Pair = { zero => 0, one => 1, two => 2, three => 3 };
# or with ( ... ) instead?

  enum NumWords = <zero one two three>; # short form of the above
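For illustration only, the proposed `set ... of ... where` declarator can be modeled as a membership predicate over a base type. This is a Python sketch, not the proposal itself; the helper `set_type` and all names are hypothetical:

```python
# Hypothetical model of the proposed 'set' type declarator:
# a named subset of a base type, defined by a predicate and/or
# an explicit value list.

def set_type(name, base, where=None, values=None):
    """Build a membership test for a constrained subset of `base`."""
    def contains(x):
        if not isinstance(x, base):
            return False
        if values is not None and x not in values:
            return False
        if where is not None and not where(x):
            return False
        return True
    contains.__name__ = name
    return contains

# set SmallInt of Int where { abs < 10 };
SmallInt = set_type("SmallInt", int, where=lambda x: abs(x) < 10)

# set Bit of Int = (0, 1);
Bit = set_type("Bit", int, values=(0, 1))

print(SmallInt(9), SmallInt(10), Bit(1), Bit(2))  # True False True False
```

The point the model makes is that such a "set" is a *type*: a test applied to candidate values, not a container of them.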


Without the 'of' it doesn't look so good

  set Num SomeNums = (3.14, 4, 89, 23.43);

but OTOH, it looks nicely like

  my Num @SomeNums = (3.14, 4, 89, 23.43);

  my Int %NumWords = { zero => 0, one => 1, two => 2, three => 3 };

YMMV, though ;)


Does that work? Perhaps even while maintaining compatibility with the
set *value* creator sub that exists in Pugs, IIRC?

  my $x = set( 1, 2, 3 ); # $x contains a set reference at best
--
$TSa.greeting := "HaloO"; # mind the echo!



Re: given too little

2005-11-10 Thread TSa

HaloO,

Gaal Yahas wrote:

I know why the following doesn't work:

 given $food {
 when Pizza | Lazagna { .eat }
 when .caloric_value > $doctors_orders { warn "no, no no" }
 # ...
 }

The expression in the second when clause is smart-matched against $food,
not tested for truth like an if.


The smart match is hopefully *type* aware. In the above second case
the match could be :( Item ~~ Code ): the Item is $food, the Code is the
block { .caloric_value > $doctors_orders }. So far so bad, you say?!
But I think the type inferencer will type the Code as :(Item --> bool)
and as such we could specify that a booleanizing, inline block just
does what you expect. If the same should be the case for

   sub foo:( --> bool) {...}

   given $food
   {
   when foo {...} # call foo and enter if true is returned
   }

I would like that.
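The type-aware dispatch described above can be sketched in Python (a model of the idea, not the Perl 6 smart-match table; `smart_match` and the class names are hypothetical):

```python
# Sketch of type-aware smart matching: when the right-hand side is
# code, call it on the topic and booleanize the result; when it is
# a type, do a type check; otherwise fall back to value equality.

def smart_match(topic, matcher):
    if callable(matcher) and not isinstance(matcher, type):
        return bool(matcher(topic))        # Item ~~ Code: call, booleanize
    if isinstance(matcher, type):
        return isinstance(topic, matcher)  # Item ~~ Type: type check
    return topic == matcher                # fallback: value equality

class Pizza: pass

food = Pizza()
print(smart_match(food, Pizza))                    # True
print(smart_match(2500, lambda cal: cal > 2000))   # True  -- "no, no no"
```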
--


Re: co/contra variance of roles/factories in theory.pod

2005-11-14 Thread TSa

HaloO,

Larry Wall wrote:

Another possibility is to take $? away from the compiler.  All the
compiler variables could go under $= instead, since pod is actually
just one particular kind of compiler-time data, and there's really
no particular mnemonic relationship between ? and the compiler.
But $?foo is harder to type than $!foo.  Maybe we hold $?foo in
reserve for some other kind of attribute scope.


Well, I would like to see a very deep running opportunistic ? and
existential ! markup. That is

  *in*-quest
  *ex*-pect

That would make $? the natural choice for the INvocant like $!
is the EXception. In multi methods the invocants would naturally
be aliased to @?, then.

Perhaps this also gives the in/ex ?/! equivalence we are searching
for syntactically. I mean that the 'in' prefix is partly a negative and
partly a positive prefix in many Latin-derived English words. So,
we drop 'true' and use 'in' or 'inex' to complement 'not' as the
loose precedence booleanizer.


And the rest falls in beautifully

  $*  global (infinitely outer env)
  $+  one step out and then outwards env

BTW, how is the lazy flattening op disambiguated?

  $x = *foo 1,2,3;  #1 call global foo on 1,2,3; or
#2 flatten return values of foo into 1,2,3; ?

I would like to disambiguate as

  $x = &*foo 1,2,3; # global call, same as *::foo 1,2,3;
  $x = *&foo 1,2,3; # flatten result
  $x = * foo 1,2,3; # unary prefix a bit ugly, but OK.
--


Re: Test Case: Complex Numbers

2005-11-17 Thread TSa

HaloO,

Jonathan Lang wrote:

Complex numbers come in two representations: rectilinear coordinates
and polar coordinates:


I think there's also the Riemannian two-angle form on the complex
number sphere with r = 0.5 around (0,0,0.5), touching the plane at
the origin (0,0) and reaching up to (0,0,1) in space. But admittedly
the resulting math is that of projective space.
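The sphere just described can be made concrete with the standard stereographic parametrization (the exact formula is my choice of a common one, not something from this thread): project from the "north pole" (0,0,1) so that z = x + iy lands on the sphere of radius 0.5 centered at (0,0,0.5).

```python
# Stereographic projection of the complex plane onto the sphere of
# radius 0.5 centered at (0, 0, 0.5), tangent to the plane at the
# origin; the origin maps to itself and infinity maps to (0, 0, 1).

def to_sphere(z):
    x, y, r2 = z.real, z.imag, abs(z) ** 2
    return (x / (1 + r2), y / (1 + r2), r2 / (1 + r2))

p = to_sphere(3 + 4j)                      # |z|^2 == 25
dist2 = p[0]**2 + p[1]**2 + (p[2] - 0.5)**2
print(round(dist2, 12))                    # 0.25 -- on the r = 0.5 sphere
```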

Then there is the classic fact that you should make e.g. role Complex
an intentional supertype of Num while the class Complex needs a wider
extensional type with the added imaginary part in one form or another.

As Luke pointed out, a plain role/class pair might not be enough to
model the structural properties of the commonalities and differences
of Num and Complex. We need to define a family of related types that
manage the different interpretations of a 2-tuple of Nums while sharing
the underlying data layout---not to mention being compatible with more
generic vector stuff like the scalar product.

Another idea is to model nums to have a directional bit where the
polar complex have a full range angle.
--


Re: =>'s autoquoted identifiers

2005-11-18 Thread TSa

HaloO,

Luke Palmer wrote:

I think => gets special treatment from the parser; i.e. it is
undeclarable.  It's probably not even declarable as a macro, since it
needs to look behind itself for what to quote.

And I think this is okay.  For some reason, we are not satisfied if
"if" is undeclarable, but => being so is fine.  I can't put my finger
on why I feel this way, though.


The 'operator orientation' of Perl to me dictates that the core
where the recognition process---be it machine or human---starts
is non-alphanumeric. Named functionality is a derived concept in this
respect. The sigils fall into the same atomicity category as the
key part of pairs, which links a compile time name to a varying value
at runtime as well.

I hope all these are now the same:

  foo => bar  ;  # result of evaluating bar available under foo key
 :foo(   bar );
 :foo<bar>   ;  # does that exist?

In a certain way these are inner pairs where the label syntax is an
outer pair

  foo:   bar;  # perhaps also written as:  foo --> bar

that names the landing pad for the bar evaluation. The question to me
is where the symbol foo is inserted. E.g. if it were available
in the innermost syntactical scope---this email---the following

  $x = foo;

would unequivocally mean to copy a pre-calculated value into $x. In the
end this makes => a detachable *postfix* sigil! The only thing that
surpasses such a thing is an attached prefix sigil, right? Thus

  $foo => bar;

links the variadic key of $foo to an eager bar evaluation right there.
Here my thought stream reaches a branch point :)

  $foo => *bar; # lazy value or global symbol or both?

and as we know, not even the one can see past such choices!
I can argue it both ways with a slight preference for global symbol
because the lazy evaluation could be written as

  $foo => {bar}; # circumfix anonymous closure,
  $foo => &bar;  # ref to not yet called code invocation

or

  $foo => * bar; # detached unary prefix.

These lead me to the question: which environment is captured in each
of these cases? And what is the latest on the twigil cases?

   :$         $:         # void          name
   :foo$      $foo:      # infix         name
   :$foo      $:foo      # postfix       name
   :foo$bar   $foo:bar   # postcircumfix names

And similarly with ^$ and $^
--


Re: \x{123a 123b 123c}

2005-11-21 Thread TSa

HaloO,

Patrick R. Michaud wrote:

There's also <sp>, unless someone redefines the <sp> subrule.
And in the general case that's a slightly more expensive mechanism 
to get a space (it involves at least a subrule lookup).  Perhaps 
we could also create a visible meta sequence for it, in the same 
way that we have visible metas for \e, \f, \r, \t.  But I have 
no idea what letter we might use there.


How about \x and \X respectively? Note the *space* after it :)
I mean that much more seriously than it might sound, err, read.
I hope the concept of unwritten things in the source being
interesting values of void/undef always applies.

OTOH, I'm usually not saying anything in the area of the grammar
subsystem, but I still try to wrap my brain around the underlying
unified conceptual level where rules and methods or subs and macros
are indistinguishable. So, please consider this as a well-meaning
question. And please forgive the syntax errors.

With something like

   # or token? perhaps even sub?
   macro   x ( HexLiteral *[$char = 32, [EMAIL PROTECTED] )
   is parsed( * )
   {...}

and \ in match strings escaping out to the macro level when
the circumfix match creator is invoked, I would expect

   m/  \x   /;  # single space is required
   m/  \x20 /;  # same
   m/ <{x}> /;  # same?
   m/  \X   /;  # any single char except space
   m/  \x\x\x   /;  # exactly three spaces
   m/  \x[20,20,20] /;  # same, as proposed by Larry
   m/  \xy  /;  # parse error 'y not a hex digit'
   m/  \x y /;  # one space then y

to insert verbatim, machine level chars into the match definition.
In particular *no* lookup is compiled in.
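The table above has a close analogue in existing regex engines, where `\x20` is the hex escape for a literal space; this Python sketch shows the corresponding behavior (it models the proposal, it is not the proposed Perl 6 semantics):

```python
import re

# The proposed \x (exact one-space matcher) and \X (single-char
# excluder) behave like a hex escape and a negated class:

assert re.fullmatch(r"\x20", " ")          # m/ \x /   one space
assert re.fullmatch(r"\x20{3}", "   ")     # m/ \x\x\x / exactly three
assert re.fullmatch(r"[^\x20]", "y")       # m/ \X /   any char but space
assert not re.fullmatch(r"[^\x20]", " ")
assert re.fullmatch(r"\x20y", " y")        # m/ \x y / one space, then y

try:
    re.compile(r"\xy")                     # m/ \xy /  parse error:
except re.error:                           # 'y' is not a hex digit
    print("parse error")
```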

I would call \x the single character *exact* matcher and \X
the *excluder*. BTW, the definition of the latter could just be

   &X ::= !&x; # or automagically defined by up-casing and outer negation

if ? and ! play in the meta operator league.


I don't think I like this, but perhaps  C<< <> >> becomes  
and C<< < > >> becomes <' '>?  Seems like not enough visual distinction

there...


I strongly agree. I would ask the moot question *how* the single space
in / / is removed ---as leading, trailing or separating space---when the
parser goes over it. But I would never expect the source space to make it
into the compiled match code!
--


Re: statement_control() (was Re: lvalue reverse and array views)

2005-11-21 Thread TSa

HaloO,

Luke Palmer wrote:

On 11/21/05, Ingo Blechschmidt <[EMAIL PROTECTED]> wrote:


Of course, the compiler is free to optimize these things if it can prove
that runtime's &statement_control:<if> is the same as the internal
optimized &statement_control:<if>.



Which it definitely can't without some pragma.


Isn't the question just 'when'? I think at the latest it could be
optimized JIT before the first execution, or so. The relevant AST
branch stays for later eval calls which in turn branch off the
surrounding module's version from within the running system such
that the scope calling the eval sees the new version. And this in
turn might be optimized and found unchanged in its optimized form.

Sort of code morphing of really first class code. Everything else
makes closures second class ;)
--


Re: dis-junctive patterns

2005-11-22 Thread TSa

HaloO,

Gaal Yahas wrote:

In pugs, r7961:

 my @pats = /1/, /2/;
 say "MATCH" if 1 ~~ any @pats; # MATCH
 say "MATCH" if 0 ~~ any @pats; # no match

So far so good. But:

 my $junc = any @pats;
 say "MATCH" if 1 ~~ $junc; # no match
 say "MATCH" if 0 ~~ $junc; # no match

Bug? Feature?


Ohh, interesting. This reminds me of my proposal
that junctions are code types and exert their magic
only when recognized as such. The any(@pats) form
constructs such a code object right in the match while
the $junc var hides it. My idea was to explicitly
request a code evaluation by one of

  my &junc = any @pats; # 1: use code sigil
  say "MATCH" if 1 ~~ junc;

  say "MATCH" if 1 ~~ do $junc; # 2: do operator

  say "MATCH" if 1 ~~ $junc();  # 3: call operator

But this might just be wishful thinking on my side.
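The "junctions are code types" reading can be sketched in Python, where an any-junction over patterns is literally a closure and matching against it means calling it (`any_junction` is a hypothetical helper, not a Pugs API):

```python
import re

# Sketch: an any-junction built from patterns is just a closure;
# smart-matching a topic against it means calling it on the topic.

def any_junction(*pats):
    return lambda topic: any(p.search(str(topic)) for p in pats)

pats = [re.compile("1"), re.compile("2")]
junc = any_junction(*pats)     # my &junc = any @pats;

print(junc(1))                 # True  -- MATCH
print(junc(0))                 # False -- no match
```

Under this model the reported bug is simply that `$junc` hides the code object, so the match never calls it.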
--


Re: implied looping

2005-11-23 Thread TSa

Larry Wall wrote:

On Wed, Nov 23, 2005 at 06:21:56PM +0100, Juerd wrote:
: Larry Wall skribis 2005-11-23  9:19 (-0800):
: > ^5.each { say }
: 
: Without colon?


Yeah, that one doesn't work a couple of ways.  Unfortunately .each still
binds tighter than ^ too.  So it'd have to be:

(^5).each: { say }

unless we made an infix:<each> operator of looser precedence:

^5 each { say }

Or we don't have infix:<each>, so maybe

^5 do { say }


Well, or we go back to ^ being the kind sigil that, like $5 retrieves
matched data, captures some countably 5 numbers singly stepping
to result in the lazy list (0,1,2,3,4) with five elements
as ordered. I would call that 'five out of void' :)

Note that ^5[-1] obviously is 4. And also ^5[-5] == 0. Welcome to the
heart of cardinal numbers. Or was that a card attack? Perhaps we call
that idiom a cardiogram? Or cardiarray? Might sound funny, but is
running very deep! To the heart of Perl6 arrays, actually.
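The `^5` idiom described above behaves exactly like a half-open integer range, which Python already has; a quick check of the claimed index behavior:

```python
# ^5 as a lazy list of five singly-stepping values starting at zero,
# modeled with Python's half-open range.

r = list(range(5))
print(r)        # [0, 1, 2, 3, 4]
print(r[-1])    # 4  -- ^5[-1] obviously is 4
print(r[-5])    # 0  -- ^5[-5] == 0
```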
--


Re: type sigils redux, and new unary ^ operator

2005-11-24 Thread TSa

HaloO,

Ruud H.G. van Tol wrote:

Yes, it could use a step:

^42.7   = (0, 7, 14, 21, 28, 35)
^42.-7  = (35, 28, 21, 14, 7, 0)


OK, fine if the step sign indicates reversal after creation.
That is, the modulus is 7 in both cases.



^-42.7  = (-35, -28, -21, -14, -7, 0)
^-42.-7 = (0, -7, -14, -21, -28, -35)


I would make these

  ^-42.7  == (-42, -35, -28, -21, -14,  -7)
  ^-42.-7 == ( -7, -14, -21, -28, -35, -42)



and (^-42.7 + ^42.7) has length 11, maybe better expressed as ^-42.7.42,


And the fact that you concatenate two six-element lists and get one with 
*11* elements doesn't strike you as odd? I find it very disturbing! E.g.

when shifting by 42 rightwards I would expect

  ^-42.7.42 == (-42, -35, -28, -21, -14, -7, 0, 7, 14, 21, 28, 35)

to become

  ^84.7 == (0, 7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77)

and of course

  ^84.7 »- 42;

or to more resemble your notation

  -42 +« ^84.7;

being two other forms to write this kind. Ahh, and should there be
a warning about a remainder for

  ^45.7 == (0, 7, 14, 21, 28, 35, 42) # rest 3

and how should a negative---err---endpoint be handled? I opt for

  ^-45.7 == (-49, -42, -35, -28, -21, -14, -7) # rest 4

But the warning could be avoided with some dwimmery after we
observe that 45 == 42 + 3 and -45 == -49 + 4 the respective rests
mean to shift the list rightwards accordingly

  ^45.7 == (3, 10, 17, 24, 31, 38, 45)# shift right 3
 ^-45.7 == (-46, -39, -32, -25, -18, -11, -4) # same


  ^-45.7 == (-45, -38, -31, -24, -17, -10, -3) # shift right 4
   ^45.7 == (4, 11, 18, 25, 32, 39, 46)# same

If you find the above odd, then use the homogeneous cases

  ^45.7  == ( 3,  10,  17,  24,  31,  38,  45)  # shift right  3

and

 ^-45.-7 == (-3, -10, -17, -24, -31, -38, -45)  # reversed shift right -3
 == -« ^45.7

which results in pairwise nullification as expected

  ^45.7 »+« ^-45.-7 == ^7.0 == (0,0,0,0,0,0,0)
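The pairwise nullification claimed above checks out when the two kinds are modeled as Python stepped ranges (a model of the proposed list literals, under the shift-by-remainder interpretation given in the text):

```python
# ^45.7 with the remainder 3 shifted in, and its negation ^-45.-7;
# elementwise addition gives the all-zero list ^7.0.

a = list(range(3, 46, 7))               #  ^45.7  == (3, 10, ..., 45)
b = [-x for x in a]                     # ^-45.-7 == (-3, -10, ..., -45)
print(a)                                # [3, 10, 17, 24, 31, 38, 45]
print([x + y for x, y in zip(a, b)])    # [0, 0, 0, 0, 0, 0, 0]
```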

Let's switch to a shorter example list and use the , to build
some subsets of int

 ^-21.7.0 , ^21.7.0 == (-21, -14, -7, 0,  7, 14)  # length: 42/7 == 6
 ^-21.7.1 , ^21.7.1 == (-20, -13, -6, 1,  8, 15)
 ^-21.7.2 , ^21.7.2 == (-19, -12, -5, 2,  9, 16)
 ^-21.7.3 , ^21.7.3 == (-18, -11, -4, 3, 10, 17)
 ^-21.7.4 , ^21.7.4 == (-17, -10, -3, 4, 11, 18)
 ^-21.7.5 , ^21.7.5 == (-16,  -9, -2, 5, 12, 19)
 ^-21.7.6 , ^21.7.6 == (-15,  -8, -1, 6, 13, 20)
 ^-21.7.7 , ^21.7.7 == (-14,  -7,  0, 7, 14, 21)

If the lists were extended on both sides to infinity then a
shift of 7 changes nothing, as can be seen from the last line.

Hmm, the syntax is ambiguous with respect to the . if we want to
allow steps < 1. Looks like a job for the colon:

 ^21:7:0 == (0, 7, 14)

 ^1:0.25 == (0, 0.25, 0.5, 0.75)

 ^1:0.2:0.2 == (0.2, 0.4, 0.6, 0.8, 1.0)

which perhaps just mean

 ^1:step(0.2):shift(0.2)

Please note that all of the above are *list literals* not
prefix ^ operator invocations. If one wants to become variable
in this type/kind then a @var is needed. A ^$x might be just
a short form of capturing the kind of $x into ^x which does not
auto-listify. Thus

  my ^x $x = 7;

  say ^x;   # Int
  say +$x;  # 7

but

  my ^a @a = (0,0,0);

  say +@a;  # 3
  say ^a;   # Array is shape(3) of Int
# Array[^3] of Int
# Array[ shape => 3, kind => Int ]

or however the structure of an array is printed.



which makes '^5' the short way to write '^5.1.0'.


And ^0 is *the* empty list. Hmm, and ^Inf.0 the infinite
list full of zeros (0, 0, 0, ...), ^Inf.1 are of course
the non-negative integers in a list (0, 1, 2, ...). Then
if we hyperate it and pick the last entry (^Inf.1 »+ 1)[-1]
we get the first transfinite ordinal Omega[0]. From there we
keep counting transfinitely...

And of course 10 * ^0.pi == 3.14...
--
$TSa.greeting := "HaloO"; # mind the echo!



Re: directional ranges

2005-11-25 Thread TSa

HaloO,

Ruud H.G. van Tol wrote:

Not at all: they just overlap at 0.


OK, to me this spoils regularity. Like 'ab ' ~ ' xy'
becoming 'ab xy' is at least surprising, if not outright wrong.
That is

   (~$x).chars + (~$y).chars == ((~$x) ~ (~$y)).chars

should always hold. Same thing for list concatenation which
in Perl6 is provided by comma

   (-3,-2,-1,0),(0,1,2,3) »==« (-3,-2,-1,0,0,1,2,3) »==« -3..0,0..3

where the length of each is 4 + 4 == 8.
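The invariant being argued for, that concatenation lengths add with nothing dropped at the seam, is easy to state in Python:

```python
# Lengths add under concatenation; nothing overlaps away at the seam.

x, y = "ab ", " xy"
assert len(x) + len(y) == len(x + y)           # string concatenation

a, b = [-3, -2, -1, 0], [0, 1, 2, 3]
assert a + b == [-3, -2, -1, 0, 0, 1, 2, 3]    # both zeros survive
assert len(a) + len(b) == len(a + b) == 8
print("ok")
```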



I hope those all resulted from the spoiling step, because I got lost.


Well, I tried to argue for maintaining ^ as the kind/type capture
sigil and make it available for structural analyses of arrays.

  my ^x ^y ^z @a = ( ( ( 1, 2); ( 3, 4); ( 5, 6) );
                     ( ( 7, 8); ( 9,10); (11,12) );
                     ( (13,14); (15,16); (17,18) );
                     ( (19,20); (21,22); (23,24) ) );

  +@a == 24; ^a == Array[^24] of Int;
  +^z == 24; ^z == (0..23).kind;
  +^y == 12; ^y == (0..11).kind;
  +^x ==  4; ^x == (0,1,2,3).kind;

This depends if capturing is from the outer-level inwards or the
other way round. In my example ^x captures the outer dimension.
But that could also appear in ^z and ^y and ^x would then drill
further into the structure of the type.



And ^0 is *the* empty list.


Unintentional, but that's how many great things are found.


You are not trying to make fun of me? What I tried to say is
that ^0 is the empty list type, not an empty list value. That
is (,). In other words

  $foo = ^0; # syntax error or on the fly type instanciation?


Note that as Larry set up this thread, this would now be
written

  $foo = ::0;

again. And it does not look like an empty list anymore.

In a more detailed view the type of ^3 might actually
be :(0=>0, 1=>1, 2=>2) or with the arrow :(0-->0, 1-->1, 2-->2).
And  ^3.0 might be :(0=>0, 1=>0, 2=>0).

This nicely lifts Juerd's concerns about mixing the last index as
ordinal number with the cardinal number of elements in the array
from the value level up to the type level. Not making off-by-one
errors is easier for the compiler than the programmer ;)

Apart from having a nice syntax for ranges I would like to get
array variables as *the* integer variables when you don't happen
to care for the data in the array. That is things like

  my @a = 10;
  my @b = -3;

  say @a + @b; # prints 7

should first of all be allowed and imply proper integer semantics
at a negligible performance cost if any. And I wouldn't mind if this
arithmetic spilled over to concatenating the lists of data in the array
lazily at the top level:

  @a = (0,1,2);
  @b = (3,4,5,6);

  @c = @a + @b; # higher precedence than (@a,@b)

  say +@c;  # prints 7

  say @c[]; # prints 0 1 2 3 4 5 6
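The dual view proposed above, an array that acts as its element count in arithmetic while `+` also concatenates the data, can be sketched with a hypothetical Python class (this models the proposal, nothing more):

```python
# Sketch: an array whose numeric value is its element count, and
# whose + concatenates the underlying data at the top level.

class NumArray(list):
    def __add__(self, other):
        return NumArray(list(self) + list(other))   # concat the data
    def __int__(self):
        return len(self)                            # numeric view: count

a = NumArray([0, 1, 2])
b = NumArray([3, 4, 5, 6])
c = a + b

print(int(a) + int(b))   # 7 -- "say @a + @b"
print(len(c))            # 7 -- "say +@c"
print(list(c))           # [0, 1, 2, 3, 4, 5, 6] -- "say @c[]"
```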

This would also allow the distinction of

  @a = 1..3;   # [EMAIL PROTECTED] == 3
  @b = 1000..1003; # [EMAIL PROTECTED] == 3

  @added = @a + @b; # @added = (1,2,3,1001,1002,1003); [EMAIL PROTECTED] == 6

and

  @range = @a..@b;  # @range = 1..1003; +@range == 1003

  @range[207] == undef;

BTW, should then

  @range[2] = 99;

add a single element separated by an 18997 elements long gap and
consequently +@range == 1004? This would re-optimize the index-set
on assignment:

  @packed = @range; # +@packed == +@range == 1004
# but @packed.last == 1003 and @range.last = 2

  say @packed[1003]; # prints 99

What would be the advantage of +@range == 20001? Other than
allowing the dubious @range.last == +@range - 1? The gap
could be made fillable by something to the effect of

  @range.replenish(42); # fills undef cells eagerly, thus +@range == 20001

  @range.gaps = 23; # lazy fill and +@range == 1004 but
  say @range[]; # prints 23

Comments?
--
$TSa.greeting := "HaloO"; # mind the echo!



Re: type sigils redux, and new unary ^ operator

2005-11-25 Thread TSa

HaloO,

Michele Dondi wrote:
IMHO the former is much more useful and common. Mathematically (say, in 
combinatorics or however dealing with integers) when I happen to have to 
do with a set of $n elements chances are to a large extent that it is 
either 0..$n or 1..$n; 0..$n may lead to confusion since it actually 
has $n+1 elements.


Not to mention border cases like 0..0 == (0, 0) or (0,)? How many
elements should $x..$y have in general? In particular when fractional
parts are allowed like in $x = 3.2 and $y = 4.6. Does that yield (3.2, 4.2)
or (3.2, 4.2, 4.6) to reach the end exactly? Or would a non-integer number
force the range to have infinitely many members?

How do ranges relate to list concatenation? (0..4,4..6) looks odd, but then
would better be written (0..6) if no double entry 4 is intended. Perhaps we
can live with the numerically lower end always being part of the range, the
larger one never, irrespective of the side of the .. they are written on.
Swapping them just means reversing the list:

  0 .. 5 == ( 0, 1, 2, 3, 4)
 -5 .. 0 == (-5,-4,-3,-2,-1)
  0 ..-5 == (-1,-2,-3,-4,-5)
 -5 .. 5 == (-5,-4,-3,-2,-1, 0, 1, 2, 3, 4)

which is how array indices work as well. This also gives proper modulo
semantics which is 5 in all cases above, and applied two times in the
last line.
--


Re: type sigils redux, and new unary ^ operator

2005-11-25 Thread TSa

HaloO,

  0 .. 5 == ( 0, 1, 2, 3, 4)


Hmm, and 0..5.1 == (0,1,2,3,4,5) to "rescue" the end.
--


Re: the $! too global

2005-12-05 Thread TSa

HaloO,

Luke Palmer wrote:

The most immediate offender here is the referential passing semantics.


IIRC, the default is to be a read-only ref. Not even local modifications
are permitted if the 'is copy' property is missing.



 Here is a code case:

sub foo ($x, &code) {
&code();
say $x;
}
my $bar = 41;
foo($bar, { $bar++ });

As far as I recall, the spec says that this should output "42".  My
intuition tells me that it should output "41".


My intuition also opts for 41. Evaluating &code() inside foo either
violates the constancy of its own $x if the changes propagate back
into the already bound/passed argument or violates the caller's
encapsulation. The latter depends on what exactly the second argument
of the call of foo closes over. Theoretically the binding of
CALLER<$bar> to foo<$x> could spill over to the closure which then
would mean foo(x => $bar, { $x++ }) and require $x being declared
either 'is copy' or 'is rw'. In other words it could be marked as an error.
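Luke's example can be transcribed into Python to show both readings (a sketch of the two semantics under discussion, not of Perl 6 itself): passing the bare value snapshots it and foo says 41, while passing a shared one-cell box makes the closure's increment visible, which is what the spec'd 42 would require.

```python
# Luke's foo/$bar example, with a one-cell list standing in for a
# shared container.

def foo(x, code):
    code()          # &code()
    return x        # say $x

bar = [41]          # the container behind $bar

def bump():         # the closure { $bar++ }
    bar[0] += 1

print(foo(bar[0], bump))    # 41 -- x snapshotted the value at call time
bar[0] = 41                 # reset
print(foo(bar, bump)[0])    # 42 -- x shares the container, ++ is seen
```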
--
$TSa.greeting := "HaloO"; # mind the echo!


Re: the $! too global

2005-12-05 Thread TSa

HaloO,

Darren Duncan wrote:
The problem is that $! is being treated too much like a global variable 
and not enough like a lexical variable.  Consider the following example:


Wasn't the idea to have $! only bound in CATCH blocks?



  sub foo () {
    try {
      die MyMessage.new( 'key' => 'dang', 'vars' => {'from'=>'foo()'} );
    };
    CATCH {
      if ($!) {
        display_message( $! );
      }
      else {
        display_message( MyMessage.new(
          'key' => 'nuthin_wrong', 'vars' => {'from'=>'foo()'} ) );
      }
    } # end of CATCH
  }


Note that, while my examples don't use a CATCH block (which isn't yet 
implemented in Pugs), you get the exception from $! or an alias thereof 
anyway, so whether or not I used CATCH is factored out of the discussion 
at hand here.


Well, I think it brings us back to the discussion about the exception
semantics ;)
A CATCH block is not supposed to be executed under normal circumstances
and $! is undef outside of CATCH blocks. Your code mixes these two
things.


The big problem here is that almost any argument for any subroutine 
could potentially come from $!, and it would be ridiculous for them all 
to be 'is copy' just to handle the possibility.


Oh, you are advocating exception handling while an exception is handled!
Thus after handling the exception exception the one-level-down exception
handler proceeds normally exceptional again. Finally the normal,
non-exceptional layer is re-instated, or some such.


Nor is it reasonable for a caller to have to copy $! to another variable 
before passing it to a subroutine just because that sub *may* have its 
own try block ... a caller shouldn't have to know about the 
implementation of what it called.


I welcome feedback on this issue and a positive, elegant resolution.


I'm still advocating a semantic for exceptions that drops out of the
normal flow of events and calls code explicitly dedicated to that task.
Returning to the normal flow as if nothing exceptional happened is
particularly *not* guaranteed! Thus the context of $! is *not* the
runtime context, just like the compile context isn't runtime either.

Actually, many of the exceptions might be compiled-in assertions that
were silenced warnings of the compiler: "didn't I say we'll meet again
:)".
--
$TSa.greeting := "HaloO"; # mind the echo!


Re: the $! too global

2005-12-05 Thread TSa

HaloO,

Nicholas Clark wrote:

No, I think not, because the closure on the last line closes over a
read/write variable. It happens that a read-only reference to the same
variable is passed into the subroutine, but that's fine, because the
subroutine never writes to *its* reference.


So, you argue that Luke's example should print 42?



Thinking about it in C terms, where pointers and values are explicit, it's as
if function arguments are always passed as const pointers to a value in a
outer scope. This seems fine and consistent to me, but you can never be sure
whether anyone else has another pointer to that same value, which they can
modify at any time.


Sure. But I consider this a semantic nightmare. And the optimizer thinks
the same. Basically every variable then becomes volatile in C terms :(
Note that lazy evaluation does *not* imply permanent refetching.
Actually the opposite is true, lazy evaluation implies some capturing
of former state as far as the lazied part needs it to produce its
content. And item parameters aren't even lazy by default!



If nearly all function parameters are PMCs (internally) I don't think that
this is an efficiency problem, as PMCs are always reference semantics, even
when loaded into Parrot's registers.


The point I'm arguing about is how exposed subs like Luke's foo should
be to this. Essentially I want the invocation of foo to create a COW
branchpoint in the outer $bar such that when, from wherever, this
value is changed, the former $bar that foo's $x was bound to remains
unharmed. And is garbage collected when foo is done with it.
--
$TSa.greeting := "HaloO"; # mind the echo!


Re: the $! too global

2005-12-07 Thread TSa

HaloO,

Larry Wall wrote:

My gut-level feeling on this is that $! is going to end up being an
"env" variable like $_.


I just re-read about exceptions. Well, I understand now that $! is
intended as a variable with a spectrum of meanings ranging from

  1) the return value of a sub, through
  2) an error return value, up to
  3) a non-return value.

Of these only the last is what I know as exceptions. I'm for now
undecided to regard these three purposes of $! as very clever or
as big potential for subtle problems. From the viewpoint of the
execution model this environment approach is very clear, though.
It nicely unifies return values with in-band errors and out-band
exceptions. Which leaves just one wish from my side: make CATCH
blocks dispatched. That is they get a signature what kind of
exception they handle. The optimizer will make good use of this
information.
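The "dispatched CATCH blocks" wish, handlers selected by the signature of the exception they accept, corresponds to what typed handler clauses already do in other languages; a Python sketch:

```python
# Handler selection by exception type: each except clause is a
# "CATCH block with a signature", and dispatch picks the matching one.

def handle(action):
    try:
        action()
    except ZeroDivisionError:
        return "division handler"
    except KeyError:
        return "key handler"
    return "no exception"

print(handle(lambda: 1 / 0))          # division handler
print(handle(lambda: {}["missing"]))  # key handler
print(handle(lambda: None))           # no exception
```

The static type information is exactly what an optimizer can exploit: handlers for exception types that cannot arise in a scope can be pruned.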



Then the problem reduces
to what you do with an unhandled $! at the end of a lexical scope,


I don't understand this. The end of scope is either the closing
brace or an explicit statement like next, leave, return, yield,
fail or die. In the above spectrum of $! these are resultizer
just as given, for and friends are topicalizer for $_. But I think
simple assignment to $! is not foreseen, or is it?

For an example let's assume that / returns 'undef but division by zero'
in $! if its rhs is zero.

  {  # begin of scope

 my $x = 3;
 my $y = 4;

 say $x/$y;

 $x = foo; # foo could return 0

 my $z = $y/$x; # $! == 'undef but division by zero' if $x == 0
# Is that assignable to $z?

 $z++;  # This is now a statement in a potential CATCH block?
  }
--
$TSa.greeting := "HaloO"; # mind the echo!


Re: handling undef better

2005-12-22 Thread TSa

HaloO,

Larry Wall wrote:

And replying to the thread in general, I'm not in favor of stricter
default rules on undef, because I want to preserve the fail-soft
aspects of Perl 5.


Also replying to the thread in general, I feel that undef as a
language concept mixes too many useful concepts into a single
syntactic construct. The problem is the same as overloaded $!.

Autovivification e.g. is hardly a problem as far as lvalues
are concerned because the undef content is overwritten with
the rvalue anyway. Operators like ++ could have an Undef of Int
specialisation that writes 1 into the affected container. So
the point here is more about defining behaviour than actually
trying to dwim the undwimable!



 And that is better served by promotion of
undef to "" or 0 with warnings than it is by letting one undef rot
all your inputs.


I agree. Every entity operating large software systems should
have well-defined procedures for reading log output of their
systems and *do* something about warnings in the long run. It
is not the task of a low-level routine to prevent whatever context
asks for its services from continuing just because
the service can't carry out its business. Hmm, on this level of
unspecificity this last sentence hardly makes a point.

So let me illustrate what I mean by picking an example. Let's
consider &open:( Str --> File ) where we all know that not all
strings denote a file. But returning undef in that case---at least
to me---feels like violating the interface even if the particular
undef has got type Undef of File.

In other words the designer of the File type should foresee a special
instance that implements the interface of File in a reasonable way.
E.g. returning EOF on .read and swallowing arbitrary input on .write and,
more importantly, can be tested for. The latter requirement nicely maps to
the Perl6 concept of booleanification where *only* this particular
invalid file instance returns false on its .bit property.

If a fallback instance isn't what the designer wants, the interface
should read &open(Str --> File ^ Undef) in the first place. Not
specifying the return type at all is of course a generic solution
or implementation of unspecificity :)
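
To make this concrete, here is a rough Python sketch of the
fallback-instance (null object) idea; the names NullFile and
open_file are mine, not anything from the synopses:

```python
class NullFile:
    """Fallback instance that implements the File interface reasonably."""
    def read(self, size=-1):
        return ""            # always at EOF
    def write(self, data):
        return len(data)     # swallow arbitrary input
    def __bool__(self):
        return False         # *only* this instance booleanifies to false

def open_file(path):
    """Like &open:( Str --> File ): never returns undef/None."""
    try:
        return open(path)
    except OSError:
        return NullFile()

f = open_file("/no/such/file")
if not f:                    # failure is testable, yet f honours the interface
    print("fallback in use")
print(repr(f.read()))        # ''
```

The caller can keep calling .read and .write without special-casing a
missing file, and can still test for failure explicitly.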



 When they put multiple actuators on an airplane
control surface, they do so with the assumption that some subset of
the actuators might freeze in some position or other.  It would be
ridiculous in such a situation to throw away the authority of the
other actuators even though "tainted" by the undefined values of some
of the actuators.  That's failsoft behavior.


But note that the failsoftness is a property of the construction
and not of nature itself! In the context of Perl6 this means to
me that the language must address both sides of the deal in a fashion
that clearly separates error communication from defaulting/fallback
values. In other words this thread tries to address the practice of
using undef as a don't-care value indicating the intent to get the
default behaviour. Thus the concept of a caller-provided undef being
a value that is defined as far as falling back to the default of the
interface is concerned, but undef as far as the actual value is
concerned, is bad:

   sub foo( ?$bar = "default" ) { say $bar }

and then

   foo( undef );  # doesn't print "default"
   foo(); # prints "default"
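
The same pitfall exists in Python; a private sentinel separates
"argument omitted" from "an explicit nothing was passed" (a sketch,
with _MISSING as an illustrative convention):

```python
_MISSING = object()          # sentinel meaning 'argument not supplied'

def foo(bar=_MISSING):
    if bar is _MISSING:      # only an *omitted* argument falls back
        bar = "default"
    print(bar)
    return bar

foo()        # prints: default
foo(None)    # prints: None -- an explicit 'nothing' is a value, not a default
```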



Strict typing is all very well at compile time when you can do
something about it,


Perl6 has many levels of compile time! That is like airplanes
having a strict pre-flight check, which is of course counterproductive
en route. And then exception handling for the airline and passengers
might e.g. mean a stay in a hotel until the machine is repaired or
a replacement has arrived etc.



but you do not want your rocket control software
throwing unexpected exceptions just because one of your engine
temperature sensors went haywire.  That's a good way to lose a rocket.


But then again you might want to do something about temperatures out
of deadbands. Otherwise you could just optimize the temperature sensor
away in the first place. IIRC that is the case for rockets used on
New Year's eve ;) Here the question to the operator is how to handle
a rocket that didn't lift off. Walking there and inspecting it with
bare hands and *eyes* might be a bad idea!
--


Re: Problem with dwimmery

2005-12-22 Thread TSa

HaloO,

Luke Palmer wrote:

Recently, I believe we decided that {} should, as a special case, be
an empty hash rather than a do-nothing code, because that's more
common.


Oops, is that distinction needed eagerly? Wouldn't a do-nothing
code return a value that, when coerced into hashness,
results in an empty hash? Avoiding this superfluous call is more of
interest to the optimizer than to code correctness. OTOH, implementing
an empty block is easy as well.



However, what do we do about:

while $x-- && some_condition($x) {}


The {} obviously has no influence on the boolean result
tested by while. Why suddenly bother whether it is a code or a hash?
Same with {;} or { $x => random, foo => bar }. The lexer is
concerned with the closing brace terminating the while statement,
not with an empty hash consumed by the return value of &some_condition
or whatever.

So the underlying question is how should

  %hash = undef;  # same as %hash = {}; ?

behave where I assume the undef is the return value of the
do-nothing code.

This touches again on the many meanings of undef and the question
whether the above is a valid idiom to detach a hash container from its
former content, or whether it indicates "don't touch %hash unless you
are prepared to catch an exception". And how long would such
touchiness last? E.g. would

  %hash = 7;

blow-up but

  %hash = { a => 7 };

cure the undef?
--


Re: Problem with dwimmery

2005-12-22 Thread TSa

HaloO,

Juerd wrote:

I think it should be both.


So do I.



my $foo = {};
$foo();  # It was a sub


The postfix () is valid syntax irrespective of the former
assignment, right?



my $foo = {};
$foo<bar> = 1;  # It was a hash


Would you expect the second line to work without the first?
I would, but perhaps with a more convoluted sequence
of how that comes to pass:

  1) $foo has no particular constraints
  2) postfix <> lazily autovivifies a hash element ref
 which has key bar and stores a not-yet ref to the
 not-yet hash holding the key into $foo
  3) the lvalue from 2) is the lhs arg of infix =
  4) the rhs obviously is 1
  5) now infix = creates a new hash with the single key value
 pair bar => 1 and actualizes the not-yet ref in $foo



It's not as if we're allergic to intelligent autovivifithingy... :)


No allergy on my side, as long as the magic is under kind control ;)
--


Re: handling undef better

2005-12-23 Thread TSa

HaloO,

Nicholas Clark wrote:

I think that Larry is referring to slightly larger and more expensive rockets
than regular fireworks: http://www.siam.org/siamnews/general/ariane.htm


I know. But where would we put Perl 6 in a range of programming
languages parallel to rockets ranging from fireworks to the Ariane 5?

Do you assume Perl 6 will replace Ada or subsets of it like SPARK-Ada?

The conclusion of the article is exactly my point: preparing for
handling your own errors. This is not directly the task of the
underlying language. It has to provide the means, though. And I agree
that the idea is to confine errors and not propagate them. But that
is more the task of catch blocks than of undef values.
--


Re: Problem with dwimmery

2005-12-23 Thread TSa

HaloO,

Nicholas Clark wrote:

Well, I assume that the do-nothing sub is assigned into the variable, and
gets re-evaluated each time the variable is used. Which would mean that you'd
get a new (different) empty hash each time. Whereas an empty hash constructor
gives you a hash reference to keep. (not just for Christmas)


Hmm, I think the type of value stored in $foo after

   my $foo = {};

is Code ^ Hash, which is viable for &postfix:{'<','>'} and which, when
assigned to, drops the Code from the type. And I further think that
a plain {} empty code literal doesn't have the is rw property set
and thus something like

  $foo()<blahh> = 23;

is not allowed. The above shall mean that the code in $foo is evaluated
and delivers a fresh, empty hash which gets the key/value pair
blahh => 23 and then is lost.



It's like the autovivification bug:

sub stuff {
  my $h = shift;
  $h->{k}++;
}


Do you consider that a Perl5 bug or a program bug? I find it
inconsistent: either $h is a const alias to $a and thus always
a detached copy after the shift, or the autovivified hash should
always be installed into $a as well. But
the mixed mode of directly accessing the hashref in $a if it
refers to a hash already is at best surprising. The Perl6
sub declarations make this much clearer:

  sub stuff (%h is rw)
  {
     %h<k>++;
  }


my $a;
$a->{$_}++ for @ARGV;

stuff($a);
print "$a->{k}\n";
__END__

--


real ranges

2005-12-23 Thread TSa

HaloO Everybody,

here's an idea from me about making range objects a bit like
junctions. That is, a range object has a $.min and $.max and the
following comparison behaviour:

str  num

lt   <    strictly inside        ---+
gt   >    strictly outside       --+|
eq   ==   exactly on boundary    -+||
                                  ||| negation
ne   !=   not on boundary        -+||
le   <=   inside or on boundary  --+|
ge   >=   outside or on boundary ---+

# strictly inside
($a <  3..6) === (3 <  $a <  6) === (3 <  $a && $a <  6)

#strictly outside
($a >  3..6) === (3 >  $a >  6) === (3 >  $a || $a >  6)

# inside or boundary
($a <= 3..6) === (3 <= $a <= 6) === (3 <= $a && $a <= 6)

# outside or boundary
($a >= 3..6) === (3 >= $a >= 6) === (3 >= $a || $a >= 6)
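
These semantics can be sketched in Python via the reflected comparison
methods of a hypothetical RealRange class (all names here are mine):

```python
class RealRange:
    """A real interval with junction-like comparison behaviour."""
    def __init__(self, lo, hi):
        self.min, self.max = lo, hi
    def __gt__(self, a):     # invoked for: a < range  (strictly inside)
        return self.min < a < self.max
    def __lt__(self, a):     # invoked for: a > range  (strictly outside)
        return a < self.min or a > self.max
    def __ge__(self, a):     # invoked for: a <= range (inside or on boundary)
        return self.min <= a <= self.max
    def __le__(self, a):     # invoked for: a >= range (outside or on boundary)
        return a <= self.min or a >= self.max

r = RealRange(3, 6)
print(4 < r)     # True: 3 <  4 <  6
print(7 > r)     # True: 7 is strictly outside
print(3 <= r)    # True: on the boundary
```

Python reflects `4 < r` into `r.__gt__(4)`, which is what makes the
notation line up with the proposed chained comparisons.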

Please note what light these comparisons shed on chained comparison
in general!

The driving idea is to make the .. range not a list-constructing
operator but a real uncountably infinite set of reals bounded by
the two nums, the measure of the interval always being the difference.

 +(3..7) == 7 - 3 == 4

Such a range might be iterated, of course, but only if the stepping
is given. Well, a range might be subdivided into subranges. But I'm
unsure what the default number of sections should be---10 perhaps?

   (3..7) == (3..4..5..6..7) # array/list of subranges
  [0][1][2][3]

   (3..7)[2] == 5..6

   (3..7).split(10) ==   # or perhaps (3..7)/10 returning a list
   (3.0..3.4..3.8..4.2..4.6..5.0..5.4..5.8..6.2..6.6..7.0)
[0]  [1]  [2]  [3]  [4]  [5]  [6]  [7]  [8]  [9]
[-10][-9] [-8] [-7] [-6] [-5] [-4] [-3] [-2] [-1]

   # step = (7-3)/10 = 0.4
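
A minimal Python sketch of the proposed .split, assuming equal-measure
subdivision (split_range is my name, not anything specified):

```python
def split_range(lo, hi, n=10):
    """Divide the interval [lo, hi] into n equal-measure subranges."""
    step = (hi - lo) / n
    return [(lo + i * step, lo + (i + 1) * step) for i in range(n)]

parts = split_range(3, 7, 10)
print(len(parts))   # 10
print(parts[0])     # (3.0, 3.4)
```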

Driving it even further we get closure bounded ranges. Here is
a circle with radius 5:

  my Point $p = ( x => 2, y => 3);

  $circle = $p..{ (.x - self.min.x)**2
 +(.y - self.min.y)**2 <= 5**2 };

The self is the range and .x and .y are applied to $_
when it comes to inside checking as in

  if $some_point < $circle { ... }

I'm not sure how well this numerics-inspired idea fits into Perl6.


The former meaning of .. constructing a list could be implemented
with ,, and ;; as follows.

 + ^5 == +(0,1,2,3,4) == 5

 +(3,,7) == +(3,4,5,6,7) == 5

The mnemonic is that the ,, just fills in the left-out parts
by iterating, indicated by parens here:

 +(1,,5,5,,8) == +(1,(2,3,4),5,5,(6,7),8) == 9

Note the duplicated 5 which was there before expansion.
Left out ends default to 0 naturally:

  (,,7)   == ( 0,, 7) == ( 0, 1, 2, 3, 4, 5, 6, 7)
(7,,) == ( 7,, 0) == ( 7, 6, 5, 4, 3, 2, 1, 0)
 (,,-7)   == ( 0,,-7) == ( 0,-1,-2,-3,-4,-5,-6,-7)
   (-7,,) == (-7,, 0) == (-7,-6,-5,-4,-3,-2,-1, 0)

But counting over zero doesn't duplicate the 0 of course:

(-3,,3) == (-3,-2,-1, 0, 1, 2, 3)
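
A sketch of this ,, gap-filling in Python, with a sentinel standing in
for the elided run (GAP and fill are my names). Boundary elements stay
as written, so the duplicated 5 survives, and counting through zero
does not duplicate the 0:

```python
GAP = object()   # stands in for the proposed ',,' marker

def fill(*items):
    """Expand each GAP by counting from its left neighbour to its
    right neighbour, excluding both (they are already in the list)."""
    out = []
    for i, item in enumerate(items):
        if item is GAP:
            lo, hi = items[i - 1], items[i + 1]
            step = 1 if hi >= lo else -1
            out.extend(range(lo + step, hi, step))
        else:
            out.append(item)
    return out

print(fill(3, GAP, 7))             # [3, 4, 5, 6, 7]
print(fill(1, GAP, 5, 5, GAP, 8))  # [1, 2, 3, 4, 5, 5, 6, 7, 8]
print(fill(7, GAP, 0))             # [7, 6, 5, 4, 3, 2, 1, 0]
```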

Here are some ideas on how to extend this to semicolon lists:

  +(1,2 ;; 3,4) == +(1,2,3 ; 2,3,4) == 2 * 3 == 6

  +(1;1 ;; 3;3) == +( (1;1),(1;2),(1;3),
  (2;1),(2;2),(2;3),
  (3;1),(3;2),(3;3) ) == 9

Combining , and .. gives

  +(1..5,5..8) == 2  # List of Range

not a list from 1 to 8 or so.

Comments?
--


Re: real ranges

2005-12-23 Thread TSa

Oops,

I forgot to mention things like

 +('a0',,'e3') == +( ('a0',,'a3'),
                     ('b0',,'b3'),
                     ('c0',,'c3'),
                     ('d0',,'d3'),
                     ('e0',,'e3') )

== +( 'a0','a1','a2','a3',
  'b0','b1','b2','b3',
  'c0','c1','c2','c3',
  'd0','d1','d2','d3',
  'e0','e1','e2','e3' )
== 20

  +('a0';;'e3') == +( ('a0',,'a3');
                      ('b0',,'b3');
                      ('c0',,'c3');
                      ('d0',,'d3');
                      ('e0',,'e3') )

== +( 'a0','a1','a2','a3';
  'b0','b1','b2','b3';
  'c0','c1','c2','c3';
  'd0','d1','d2','d3';
  'e0','e1','e2','e3' )

== 5 * 4 == 20 # == (4,4,4,4,4) ?

  [+]('a0';;'e3') ==  +('a0','a1','a2','a3') +
  +('b0','b1','b2','b3') +
  +('c0','c1','c2','c3') +
  +('d0','d1','d2','d3') +
  +('e0','e1','e2','e3')

  == [+](4,4,4,4,4) == 5 * 4 == 20
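
The count of 20 is just the product of the per-position character
ranges, which a short Python sketch confirms (str_range is my name;
equal-length strings ordered per position are assumed):

```python
from itertools import product

def str_range(lo, hi):
    """Cross product of per-character ranges: 'a0',,'e3' -> (a..e) x (0..3)."""
    axes = [[chr(c) for c in range(ord(a), ord(b) + 1)]
            for a, b in zip(lo, hi)]
    return ["".join(t) for t in product(*axes)]

cells = str_range("a0", "e3")
print(len(cells))   # 20
print(cells[:4])    # ['a0', 'a1', 'a2', 'a3']
```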
--


Re: Deep copy

2005-12-23 Thread TSa

HaloO,

Luke Palmer wrote:

That's an interesting idea.  A "deep reference".


I also instantaneously loved the idea to distinguish between
the types Hash and Ref of Hash. Or Array etc.
--


Re: $/ and $! should be env (and explanation of env variables)

2006-01-02 Thread TSa

HaloO,

happy new year to Everybody!

Luke Palmer wrote:

Env variables are implicitly passed up through any number of call
frames.


Interesting to note that you imagine the call chain to grow upwards
where I would say 'implicitly passed down'. Nevertheless I would
also think of upwards as being the positive direction where you find
your CALLER's environment with a $+ twigil var or where the $^ twigiled
vars come from ;)

Since exception handling is about *not* returning but calling your
CALLER's CATCH blocks in the same direction as the environments are
traversed there should be some way of navigating in the opposite
direction of these environments up---or down---to the point where
the error occured. Could this be through $- twigil vars? Or by
having an @ENV array that is indexed in opposite call direction yielding
this frame's %ENV? Thus @ENV[-1] would nicely refer to the actual
error environment. And [EMAIL PROTECTED] tells you how far away you are catching
the error. But this would not allow to retrieve non-exceptional
environmental data frames unless forward indexing from @[EMAIL PROTECTED] and
beyond is supported.

Well, alternatively we could have an ERR namespace. BTW, how is the
object data environment handled? Is there a SELF namespace? How much
of that is automagically accessible in CATCH blocks?



$/ was formerly lexical, and making it environmental, first of all,
allows substitutions to be regular functions:

$foo.subst(rx| /(.*?)/ |, { "$1" })

Since .subst could choose to bind $/ on behalf of its caller.


Just let me get that right. Whatever $foo contains is stringified
and passed as $?SELF to .subst with $_ as a readonly alias?
Then the string is copied into an 'env $/ is rw' inside subst and
handed over to some engine-level implementation routine which actually
performs the substitution. Finally .subst returns the value of this $/.
Well, and for an in-place modification of $foo the call reads
$foo.=subst(...), I guess?

Or was the idea to have three declarations

  env $_ is rw = CALLER<$_>; # = $?SELF in methods?
  env $/ is rw = CALLER<$/>; # = $?SELF in rules?
  env $! is rw = undef;  # = CALLER<$!> when exceptional

being implicitly in effect at the start of every sub (or block)?
Thus subst could just modify $+/ without disturbing CALLER<$foo>
which might actually be invisible to subst?



It is also aesthetically appealing to have all (three) punctuation
variables being environmental.


And it clears up the strange notion of lexical my-vars :)
I hope lexical now *only* means scanning braces outwards on
the source code level with capturing of state in the (run)time
dimension thrown in for closure creation, lazy evaluation etc.

Talking about lazy evaluation: does it make sense to unify
the global $*IN and $*OUT with lazy evaluation in general?
That is, $OUT is where yield writes its value(s) to and $IN
is for retrieving the input. Iteration constructs automatically
stall or resume coroutines as needed.
--


Re: Deep copy

2006-01-02 Thread TSa

HaloO,

Larry Wall wrote:

I think that deep copying is rare enough in practice that it should
be dehuffmanized to .deepcopy, perhaps with optional arguments saying
how deep.


So perhaps .copy:deep then?



 Simple shallow copy is .copy, whereas .clone is a .bless
variant that will copy based on the deep/shallow preferences of the
item being cloned.  The default might be identical to .copy though.
Perhaps those two can/should be unified.


Yes! Variations of the same theme should go into variations at the
call site with the underlying implementation being generic enough
to support all variants parametrically.
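
For comparison, Python's standard copy module draws exactly this
distinction between a shallow .copy and a recursive .deepcopy; its
per-class __deepcopy__ hooks are the parametric knob for deep/shallow
preferences:

```python
import copy

nested = {"outer": {"inner": [1, 2, 3]}}

shallow = copy.copy(nested)      # top level only; inner structures shared
deep    = copy.deepcopy(nested)  # recurses; fully detached

nested["outer"]["inner"].append(4)

print(shallow["outer"]["inner"])  # [1, 2, 3, 4] -- still shared
print(deep["outer"]["inner"])     # [1, 2, 3]    -- detached
```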
--


Re: real ranges

2006-01-02 Thread TSa

HaloO Eric,

you wrote:

#strictly outside
($a >  3..6) === (3 >  $a >  6) === (3 >  $a || $a >  6)


Just looking at that hurts my head, how can $a be smaller than three
and larger than 6?  That doesn't make even a little sense.


To my twisted brain it does ;)

The idea is that
   outside === !inside
   === !($a < 3..6)
   === !(3 < $a < 6)
   === !(3 < $a && $a < 6)
   === !(3 < $a) || !($a < 6)  # DeMorgan of booleans
   ===  3 >= $a || $a >= 6
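
The De Morgan step can be checked mechanically over sample values,
e.g. with a quick Python loop:

```python
# For every sample a:  not(3 < a < 6)  ==  (3 >= a) or (a >= 6)
for a in [x / 2 for x in range(-10, 20)]:   # -5.0, -4.5, ..., 9.5
    inside  = 3 < a < 6
    outside = (3 >= a) or (a >= 6)
    assert outside == (not inside)
print("outside === !inside holds on all samples")
```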

Well, stricture complicates the picture a bit.

  strictly outside === !strictly !inside

that is also the case if comparison operators could be
negated directly

  > === !<=
    === !( < || == )
    === !< && !=
    === >= && !=

We could write that as operator junctions

 infix:{'>'}  ::= none( infix:{'<'}, infix:{'=='} )
 infix:{'>='} ::= any(  infix:{'>'}, infix:{'=='} )



($a >  3..6) === ($a < 3 || $a > 6)


I would write that ($a < 3 || 6 < $a) which is just the flipped
version of my (3 > $a || $a > 6) and short circuit it to (3 > $a > 6).
That makes doubled > and < sort of complementary orders when you think
of the Nums as wrapping around from +Inf to -Inf. In fact the number
line is the projection of the unit circle. In other words the range
6..3 might be considered as the inverse of 3..6, that is all real
numbers outside of 3..6 not including 3 and 6.



Your intermediate step makes no sense at all.  I would think that (and expect)

($a >  3..6) === ($a > 6);

You could always use.

my $range = 3..6;
if ($range.min < $a < $range.max) == inside
if ($range.min > $a || $a > $range.max) == outisde

Then you don't have to warp any meaning  of > or <, sure its longer
but its obvious to anyone exactly what it means.


With Juerd's remark of using ~~ and !~ for insideness checking on
ranges, and with < and <= meaning completely below and > and >=
meaning completely above in the order, there remains to define what
== and != should mean. My current idea is to let == mean on the
boundary and != then obviously everything strictly inside or outside.
But it might also be in the spirit of reals as used in maths to have
no single Num $n being equal to any range but the zero-measure range
$n..$n. This makes two ranges equal if and only if their .min and
.max are equal.

This gives the set of 4 dual pairs of ops

 <   <=  ==  ~~

 >=  >   !=  !~ # negation of the above


The driving force behind this real-ranges thing is that I consider
nums in general as coroutines iterating smaller and smaller intervals
around a limit. That actually is how the reals are defined! And all math
functions are just parametric versions of such number-iterating subs.

Writing programs that implement certain nums means adhering to a
number of roles like Order, which brings in < and >, or Equal, which
requires == and !=; taken together these also give <=, >= and <=>.
Finally, role Range brings in .., ~~ and !~ for insideness checking.

Role Division is actually very interesting because there are just the 4
division algebras of the Reals, Complex, Quaternions and Octonions.

Sorry, if this is all too far off-topic.
--


Re: real ranges

2006-01-02 Thread TSa

HaloO,

Luke Palmer wrote:

In fact, it might even bug me more.  I'm a proponent of the idea that
one name (in a particular scope) is one concept.  We don't overload +
to mean "concatenation", we don't overload << to mean "output", and we
don't overload > to mean "outside".


I agree. And I have converted inside to be implemented by ~~ as
hinted by Juerd. But even with the previous idea 1..10 < -5
would have been false I guess:

  1..10 < -5 === 1 < -5 < 10
 === 1 < -5 && -5 < 10 === false

Actually +(10..1) == -9 because the measure is always .max - .min
even if .max < .min! But the Range role might be applicable where
the strict orderings < and > aren't. E.g. in the complex plane ranges
form rectangles or ring segments depending on the parametrisation
chosen. In the infinitesimal (point) limit both coincide, of course!



Supposing you fixed those operators, I really think that this is a job
for a module.  All the syntax and semantics you propose can be easily
implemented by a module, except for the basic definition of "..". 


Which was the actual reason to post the idea in the first place.
Note that the proposed ,, and ;; nicely complement the other
list constructors. The infinite lazy list might actually be
written with pre- or postfix ,,, without running into the same
problem as ..., which in my conception of .. as numeric nicely denotes
a Code that doesn't actually iterate a Num. Also the ^..^ forms
are much less important since we've got the unary ^ now. Character
doubling and tripling is actually a theme in Perl6 operator design.

Just compare @a[[;]0,1,2] to @a[0;;2] visually. Not to mention
the---at least to my eye---very strange @; slice vars. BTW, is @foo
automagically related to @;foo? S02 is actually very close to
@foo[1;;5] with its @foo[1;*;5] form. The fact that the [;] reduce
form as slice actually means [<==] doesn't make things easier.

Here is an idea to replace @;x just with a code var:

  my &x;
  &x <== %hash.keys.grep: {/^X/};
  &x <== =<>;
  &x <== 1,,,;  # note the ,,,
  &x <== gather { loop { take rand 100 } };

  %hash{x} # nice usage of closure index

which to me looks much nicer and increases the importance of
the & sigil and code types! At the first invocation of <== the
Pipe is autovivified in &x and the second invocation adds to the
outer dimension of the List of Pipe. Note that I think that in

  x <== 1,,4;

operator <== sees a :(Call of x) type as lhs. What semantics that
might have I don't know. But if that's not an error then perhaps
the former content is lost, an empty code ref is autovivified into &x
and given to <== as lhs. YMMV, though ;)

Actually evaluating a Pipe stored in a & var might just result
in unboxing the Pipe without changing the former content.

 &x <== 1,,4;

 &x <== x <== x <== x; # &x now is Pipe (1,,4;1,,4;1,,4)


Also I realized that the zip, each, roundrobin and cat functions
mentioned in S03 are somewhat redundant with proper typing.
What differentiates

 cat(@x;@y)

from

 (@x,@y)  #?

Or

   for zip(@names; @codes) -> [$name, $zip] {...}

from

   for (@names; @codes) -> [$name, $zip] {...} #?

And why is

   for each(@names; @codes) -> $name, $zip {...}

not just

   for (@names; @codes) -> $name, $zip {...}  #?

Apart from being good documentation, of course.


Yet another thing I didn't mention so far is that I see , and ;
as corresponding to ~ for strings. So parts of the conglomerate of
roles for Num and List are replicated with a different syntactical
wrapper for Str. And user-defined classes can hook into this at will
by means of role composition and operator overloading.

At a lower level all this is unified syntactically! But how much
the syntax there resembles the humanized forms of Perl6, I can only
guess. On the engine level the only thing that remains is MMD and
error handling.



Maybe you could make it your first Perl 6 module?


Uhh, let's see what I can do about that. Unfortunately
I'm lazy there... *$TSa :)
--


Re: relationship between slurpy parameters and named args?

2006-01-02 Thread TSa

HaloO,

Austin Frank wrote:
It seems to me like these are related contexts-- arguments to a sub are 
supposed to fulfill its parameter list.  This makes the overloading of 
prefix:<*> confusing to me.


Would an explicit type List help?


I'm pretty sure we don't need slurpiness in argument lists, and I don't 
know if the prefix:<*> notation for named arguments would be useful in 
parameter lists.  But just because I can reason that, for example, 
prefix:<*> in an argument list can't mean slurpiness, that doesn't make 
it clear to me what prefix:<*> _does_ mean in that context.


Slurpiness in a parameter list is a property of a *zone*, not of a
single parameter, IIRC. In a parameter list * actually is not an
operator but syntactic type markup! As an arity indicator it could
actually be given after the param like the !, ? and = forms.

If the slurpy zone has more type constraints than just 'give me all',
these have to be met by the dynamic args. The very rudimentary split
in the types is on behalf of named versus positional, reflected in the
@ versus % sigils. Using prefix:<*> at the call site just defers the
matching to dispatch time unless the type inferencer knows that there
is no chance of meeting the requirements! And parens are needed to
itemize pairs syntactically.

Note that the invocant zone supports neither slurpiness with *
nor optionality with ?. And I'm not sure how explicit types like
List or Pipe behave with respect to dispatch.


I think I understand that prefix:<*> is available outside of parameter 
lists because that's the only place we need it to mean slurpiness.


No, I think outside of parameter declarations prefix:<*> indicates
lazy evaluation, which I see as a snapshotting of the part of the state
that is relevant for producing the actual values later in the program.
But this might contradict the synopses, which talk of 'when a value
becomes known', which sounds like an ASAP-bound ref whose value at deref
time depends on side-effects between the lazification and the deref!


So, is there a conceptual connection between imposing named argument 
interpretation on pairs in an arg list and slurping up the end of a 
parameter list?  Are there other meanings of prefix:<*> that relate to 
one or the other of these two meanings?


I see the following type correspondences:

  *$   Item of List  # accessible through the $ variable
  *@   List of Item  # accessible through the @ variable
  *%   List of Pair  # accessible through the % variable

and perhaps

  *&   Item of Code  # accessible through the & variable

but this last one is not slurping up more than one block.
Hmm, I'm also unsure whether a single *$ param suffices to
bind a complete list of args into an Item of Ref of List.
OTOH, it is clearly stated that *$ params after a *@ or *% never
receive values.
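
Python's parameter zones give a rough analogue of this table: a
leftover-positional zone that slurps into a list of items, and a
leftover-named zone that slurps into pairs (the function below is
purely illustrative):

```python
def show(first, *rest, **named):
    """first   ~  an ordinary positional parameter
       *rest   ~  *@  List of Item: leftover positionals
       **named ~  *%  List of Pair: leftover named args"""
    return first, list(rest), sorted(named.items())

print(show(1, 2, 3, a=4, b=5))
# (1, [2, 3], [('a', 4), ('b', 5)])
```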
--


Junctions again (was Re: binding arguments)

2006-01-02 Thread TSa

HaloO,

Luke Palmer wrote:

The point was that you should know when you're passing a named
argument, always.  Objects that behave specially when passed to a
function prevent the ability to abstract uniformly using functions.[1]
...
[1] This is one of my quibbles with junctions, too.


You mean the fact that after $junc = any(1,2,3) there is
no syntactical indication of non-scalar magic in subsequent
uses of $junc e.g. when subs are auto-threaded? I strongly
agree. But I'm striving for a definition where the predicate
nature of the junctions is obvious and the magic under control
of the type system.

The least I think should be done here is to restrict the magic
to happen from within & vars combined with not too much auto-enref
and -deref of junction (code) refs. The design is somewhat too
friendly to the "junctions are values" idea but then not auto-hypering
list operations...

But I have no idea for this nice syntax, yet. Perhaps something like

  my &junc = any(1,2,3);
  my $val = 1;

  if junc( &infix:<==>, $val ) {...}

which is arguably clumsy. The part that needs smarting up is handing
in the boolean operator ref. Might a slurpy block work?

  if junc($val): {==} {...}

Or

  if junc:{==}($val) {...}

Or $val out front

  if $val == :{junc} {...}

which doesn't work with two junctions.

Or reduce syntax:

  if [==] junc, $val {...}

OTOH, explicit overloads of all ops applicable to junctions
might end up where we are now:

  if  $junc == $val {...}

Hmm, wasn't that recently defined as actually being

  if +$junc === +$val {...}  #  3 === 1 --> false

Or how else does a junction numerify? Thus the problem only remains with
generic equivalence if explicit, inhomogeneous overloads for junctions
exist. And there are no generic <, >, >= and <=. Does a !== or ^^^
antivalence op exist? I guess not.
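
The "junction in a code var with an explicit operator" idea can be
sketched in Python: the junction stores its eigenstates and the boolean
operator is handed in openly, so no hidden auto-threading occurs
(Junction and its kind names are my own):

```python
import operator

class Junction:
    """A junction whose predicate nature is explicit at the call site."""
    def __init__(self, kind, *values):
        self.kind, self.values = kind, values

    def test(self, op, other):
        results = [op(v, other) for v in self.values]
        return {"any":  any(results),
                "all":  all(results),
                "none": not any(results),
                "one":  results.count(True) == 1}[self.kind]

junc = Junction("any", 1, 2, 3)
print(junc.test(operator.eq, 1))   # True  -- cf. junc(&infix:<==>, $val)
print(junc.test(operator.eq, 7))   # False
```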
--


Re: Junctions again (was Re: binding arguments)

2006-01-03 Thread TSa

HaloO,

Luke Palmer wrote:
Which "reads nicely", but it is quite opaque to the naive user. 


I guess many things are opaque to naive users ;)



Whatever solution we end up with for Junctions, Larry wants it to
support this:

if $x == 1 | 2 | 3 {...}

And I'm almost sure that I agree with him.


This actually would be easy if Larry decided that *all* operators
are recognized syntactically and dispatched at runtime. But he likes
to have == strictly mean numeric comparison. Well, with the twist
that the numericness is imposed on the values retrieved from the list
backing the any-junction and the whole thing being settled at compile
time syntactically.

But if we assume that all information that is available to the
compiler is available to the runtime as well, then I don't see
any reason why the evaluation of the condition above might not be
a dispatch on the if flow controller multi sub or some such with
some JITing and auto-loading thrown in.

Let me illustrate the point with the well-known set of the two
inner operators of nums, + and *, where * is sort of defined in
terms of +:

  3 * 4 --> 12   # symmetric MMD, infix notation,
 # atomic evaluation

 (3 *)4 --> 4+4+4 # asymmetric prefix, left adjoined operator,
  # two step evaluation (addition used for illustration)
  #   1: curry lhs to form prefix op
  #   2: apply op from step 1

  3(* 4) --> 3+3+3+3 # asymmetric postfix, right adjoined operator,
 # two step evaluation

I think the compiler decides which of the alternatives is applicable
and generates code accordingly. But why shouldn't it defer the
generation of the semantic tree until dispatch time, when the
associativity of the op and the involved types are known? I would
call that syntax tree dispatch or perhaps multi-stage MMD. With caching
of the results in a local multi this appears doable to me.



There is a conflict of design interest here.  We would like to maintain:

* Function abstraction
* Variable abstraction


With container types these two are embedded into the type system,
which I still hope supports arrow types properly. I've sort of developed
the mental model of --> being a co-variant forward flow of control
or program execution and <-- being a contra-variant keeping of achieved
state:

   -->   sub junction /   iteration / implication  / flow
  <--  resub junction / reiteration / replication  / keep
  <--> bisub junction / equivalence / rw container / sync / snap (shot)

The latter two are not yet Perl 6, and the --> has a contra-variant part
in the non-invocant params, which are of course sync or snap points of
a program. Note that lazy params aren't snapped valueish but typeish in
the sense that the constraints imposed *at that moment* in execution
time hold. Well, and if they don't then we have of course 'ex falso
quodlibet' later on :)



There's got to be a good solution lying around here somewhere...


Hmm, operator junctions perhaps? I think that the three relations <, ==
and > are the prototypical order ops. I do not see a priori numerics in
them. This is at best Perl 5 legacy. Note that <=, != and >= are
two-element any-junctions from the set of the three primary comparators.
And <=> is a one-junction of all three with a subsequent mapping of the
ops to -1, 0 and +1.

Together with negation as center point we get the Fano Plane of the
comparison ops in a crude ASCII chart:

       -1|0     0     0|+1
       {<=}---{==}---{>=}
          \     |     /
           \   {!}   /
           {<}     {>}
 none(0,1) <-- -1 \ / +1 --> none(-1,0)
                {!=}
          -1|+1 --> none(0)

 <=> --> one(-1,0,+1) as circle connecting {<} {==} {>}
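
The junction reading of the derived comparators can be verified by
building them from the three primaries (a Python sketch; any_of and
spaceship are my names):

```python
import operator as op

PRIMARY = (op.lt, op.eq, op.gt)        # the three primary comparators

def any_of(*ops):
    """A comparator built as an any-junction of primary comparators."""
    return lambda a, b: any(o(a, b) for o in ops)

le = any_of(op.lt, op.eq)   # <=  ==  any(<, ==)
ge = any_of(op.gt, op.eq)   # >=  ==  any(>, ==)
ne = any_of(op.lt, op.gt)   # !=  ==  any(<, >) for totally ordered values

def spaceship(a, b):
    """<=>: exactly one primary fires; map it onto -1, 0, +1."""
    return [-1, 0, 1][[o(a, b) for o in PRIMARY].index(True)]

for a in range(-2, 3):
    for b in range(-2, 3):
        assert le(a, b) == (a <= b)
        assert ge(a, b) == (a >= b)
        assert ne(a, b) == (a != b)
        assert spaceship(a, b) == (a > b) - (a < b)
print("derived comparators agree with the built-ins")
```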

Can type theory capture these things in a systematic framework?
--


Re: Junctions again (was Re: binding arguments)

2006-01-04 Thread TSa

HaloO,

Luke Palmer wrote:

Junctions are frightfully more abstract than that.  They only take on
meaning when you evaluate them in boolean context.  Before that, they
represent only a potential to become a boolean test.


This is very well spoken, err, written---except that I would use
'beautifully' there :)
Well, and their meaning is sort of reduced to one(0,1) in
boolean context. It does not originate there ;)
So junctions are mean outside boolean context :))

BTW, is 'the means' just the plural of 'the mean' or are
these two completely unrelated substantives? E.g. does
one say 'a means' with 'means' being a singular noun?
How about the type Mean or Means as supertype of the four
junction types? For example contrasting them with Plain
values?


So, to support the four junction generators the compiler has to
decide at a boolean node of the AST if the subtree originating
there could possibly be junctive and prepare the code for it!

If there is no proof or disproof, this decision must be deferred
to runtime when the actual data---that is the involved parties
not their types---are available.

OTOH, the programmer might short-cut very easily with a 'no junctions'
in the scope. BTW, does that imply that junctions fall back to list
behaviour when it comes to evaluating the node? How does this lexical
pragma traverse the dynamic scopes at runtime? In other words how do
junctive and non-junctive parts of a program interact?

Once again this boosts my belief that junctions should travel
in code vars. E.g.

   my &test = any(1,2,3).assuming:op<==>;

   my $x = 3;

   if $x.test {???}

Also the typeish aspects of first class closures is somewhat
under-represented in Perl6---or I don't see it. What should e.g.
the following

  my $sci := (&sin == &cos); # intersections of sin and cos

put into the var? I think it is a lazy list(ref), an iterator or
generator or simply a calculated code object(ref) with type
:(Ref of List of Num where {sin == cos}). In other words

  @sci = pi/4 »+« pi »*« (-Inf..+Inf); # :(Array of List ...)

where I would like the last thing being written (-Inf,,+Inf)
or (-Inf,,,). Actually it is the Int type as list, set or any.
With real ranges it might be (-Inf..+Inf):by(pi) or so.

To summarize: I propose to unify the invocations of junctions
with embedded closure calls in general. Of these Perl6 has got
many: there are pipes, lazy lists, slices, control structures,
sorting, key extraction, ...
--


Re: Junctions again (was Re: binding arguments)

2006-01-04 Thread TSa

HaloO,

Rob Kinyon wrote:

I'm confused at the confusion. To me, junctions are just magical
values, not magical scalars. In theory, one should be able to create
junctions of arrays, hashes, or subs just as easily.

my @junc = any( @a, @b, @c );
my %junc = any( %a, %b, %c );


Hmm, and this is unflattened

  my @joj = any( @a; @b; @c );

and

  [EMAIL PROTECTED] --> any( [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL 
PROTECTED] );

then? How long is the junctiveness kept?



Then,

if @junc[2] == 9 { ... }

would imply

if @a[2] == 9 || @b[2] == 9 || @c[2] == 9 { ... }


Uhh, and

  if @joj[2] == 9 { ... }

should effectively mean

  if @joj[2;*] == 9 { ... }

with re-junctification of the sliced out arrays?

 if any( [;] @joj.values[2;*] ) == 9 { ... }



IMHO, one thing that may be causing issues with the junction concept
is that I've never seen anyone talk about the implicit read-only
behavior of a junction. To me, a junction would be required to be
read-only (except for overall assignment). To modify the junction
would be to overwrite it. So, given

my $junc = any( $a, $b, $c );


Like Ref values, then?



If you wanted to add $d into there, you'd have to do it this way:

$junc = any( $junc, $d );

Obviously, modifications to $a, $b, or $c would carry through.


Which first of all smells strongly like closure creation, that
is Code types! But shouldn't the read-only enreferencing be
more prominent notationally? Like

   my $junc = any( \$a, \$b, \$c );

or through an Arglist (which means malice in German):

   my $junc = any\( $a, $b, $c );

Otherwise I would strongly opt for value snapshotting
when the junction closure is created. And of course
the underlying functionality must be easily available
for user defined junctions. Things like set-, min-, max-
and sort-junctions come to mind...


Doing
this means that array, hash, and sub junctions make sense and behave
no differently than any other readonly variable.


Combined with lazy evaluation that yields on unavailable
values we enter the realm of constraint programming.
The grand unification then comes with multi threading
this concurrently. With side-effects that becomes *REALLY*
challenging!



In fact, this behavior seems mandated given the possibility of tying
variables (or is this a Perl5ism that is being discarded in Perl6?)


That is what := does these days? BTW, a rw junction is just
three chars around the corner:

  my $junc = any:rw( $a = 1, $b = 2, $c = 4);#1
  my $junc = any( $a = 1, $b = 2, $c = 4, :rw);  #2
  my $junc = any( $a = 1, $b = 2, $c = 4 ):rw;   #3

  $junc = 3; # means $junc.pick = 3?

I'm not sure which syntax of #1 to #3 really hits the junction's
creator with a named arg. And it might not support it for good
reasons! Just think what fun breaks loose if someone hands around
an array that contains (rw => 1):

  push @array, (rw => 1);

  $surprise = any(@array);


Or simply

  my $junc := any( $a = 1, $b = 2, $c = 4);

So, do we want junctive assignment? Or should it complain like

  my $val := 3; # error: can't bind constant

hopefully does.
--


Re: Junctions again (was Re: binding arguments)

2006-01-04 Thread TSa

HaloO,

Luke Palmer wrote:

But junctions are so "special", that this abstraction wouldn't work.


Well my point is that I doubt that it is tractable for the compiler
to come up with the dwimmery to pull the invocation of .values on the
return value out of toomany and leave the cardinality check in,
depending on the input being a junction or not. That is to re-write
the junctive case to:

  sub toomany($j)
  {
 if $j > 4 { die "Too many values" }
 return $j;
  }
  my ($j,$k,$l) = map { toomany(get_junction().values) } ^3;

Sort of a junctive compile :)
My syntactical idea is that this might work if you naively write
it without explicit .value and a junction numerifying to its
cardinality. To get the junctive behaviour to trigger as you outline
would be

  sub toomany(&j)  # or perhaps even *&j
  {
 if j > 4 { die "Too many values" }
 return &j;
  }

while an &j.values in the if still keeps the junction untriggered.
Then an uncertain j.values might result in the compile time warning
"return value of j not necessarily a junction". The only remaining
thing for @Larry is to define an exclusive set of methods for querying
junctions. Sort of like the "handle with care" stickers on containers!



Yes, yes, you say "oh, just put a 'Junction' type on the $j
parameter".  That's not the point.  The point is that now I have to be
aware whenever I'm using a junction, and I have to declare all my
generics as being able to take junctions, instead of it just being
obvious that they do because I didn't put any constraints on the
values.  I figure that if you don't have any constraints on the
values, you're not assuming anything about them, so using a junction
should be fine.


Could it be that you think about it the wrong way around? I mean
the values should be in the object grammar slot: "..., so a
junction using *them* should be fine."



Of course, this was introduced for a reason:

sub min($x,$y) {
$x <= $y ?? $x !! $y
}
sub min2($x, $y) {
if $x <= $y { return $x }
if $x > $y { return $y }
}

In the presence of junctions, these two functions are not equivalent.
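The divergence can be made concrete with a toy any-junction modelled in Python (a hypothetical sketch, not Perl 6): both conditionals in min2 can succeed for the same junction, so the order of the ifs decides the result.

```python
# A toy any-junction: comparisons autothread over the eigenstates.
class Any:
    def __init__(self, *vals):
        self.vals = vals
    def __le__(self, other):
        return any(v <= other for v in self.vals)
    def __gt__(self, other):
        return any(v > other for v in self.vals)

def min_(x, y):
    return x if x <= y else y     # the ternary version

def min2(x, y):
    if x <= y: return x           # first conditional wins...
    if x > y:  return y           # ...even though this one holds too

j = Any(1, 10)
assert (j <= 5) and (j > 5)       # both branches are "true" for j
assert min_(j, 5) is j
assert min2(j, 5) is j
# Swapping the two ifs in min2 would return 5 instead -- the wackiness
# the quoted text describes.
```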


Neither are they in the presence of references

  min(\3,{\3}) === min2({\3},\3)

when the outer non-closure refs are created and optimized away
by the compiler and the inner ones later depending on the order
in which === evaluates its args. I mean that it sees

 3 === \3

and returns false. Well, or I don't understand === ;)



In fact, it is possible that both or neither of the conditionals
succeed in min2(), meaning you could change the order of the if
statements and it would change the behavior.  This is wacky stuff, so
we said that you have to be aware that you're using a junction for
"safety".


But control is outside. Junctions help people to obfuscate things like

   if min2(1,0) && min2(0,1) {die}

where the inputs to && are returned from different spots in min2.



 I figure that if something
says it's totally ordered (which junctions do simply by being allowed
arguments to the <= function), both of these functions should always
be the same.  The fact is that junctions are not totally ordered, and
they shouldn't pretend that they are.


Junctions aren't totally ordered, of course. But neither are they
arguments to <=. The reverse is true: <= is an argument to the junction.
And my point originally was about inventing a pleasing syntax that
highlights this fact.

E.g. the Complex type might support <= such that it returns true
in case of equality and undef otherwise. The latter might lead
people to regard it as false and hopefully cross-check with > and
then draw the right conclusion after receiving another undef and
promoting it to false. Namely that the net effect was a negated
equality check! A gray view of a picture emerges from a blurred
array of black and white pixels, and it takes more than two primal
constituents for a color picture to emerge, e.g. black, white and
fast motion with the right patterns deceive the human eye into
perceiving color!

Or think of contexts where 5 is not considered prime! Is it
always? If not, in which context?

The real objective of a type system is to quench undefinedness
out of the essential parts of a program.



The other thing that is deeply disturbing to me, but apparently not to
many other people, is that I could have a working, well-typed program
with explicit annotations.  I could remove those annotations and the
program would stop working!


The thing that haunts me is that junctions only make sense if they
bubble up into the control heaven so far that to achieve invariance
a *complete* program has to be run for all conceivable permutations
of paths through the call graph! Might be nice for testing if given
enough time, though :)



 In the literature, the fact that a
program is well-typed has nothing to do with the annotations in the
program; they're there for redundancy, so they can catch you more
easily when you write a poorly-typed program.  But

Re: Junctions again (was Re: binding arguments)

2006-01-05 Thread TSa

HaloO,

Jonathan Lang wrote:

Rob Kinyon wrote:


To me, this implies that junctions don't have a complete definition.
Either they're ordered or they're not.


So, is there a number between 0 and 1? Shades between black and white?
When is a 360 degree turn not returning a system into its initial state?
What has a 720 degree turn to do with sign reversal, negation and
twisted arms? Anyone know the Eisenstein integers? Is 5 a prime complex
number? What is its quadratic residue?

(-1)**2 == -i**2 == +1
  \
   ?-->0  # if <=>
  /
i**2 == -1
-- mirror symmetry axis
j**2 == -1
  \
   !-->0  # unless <=>
  /
(-1)**2 == -j**2 == +1

But note that the *same* symmetry is broken! Ever wondered why a mirror
changes left and right, but not up and down? And what has all that
to do with anagrams like 'OTTO' and 'TOOT' or with bits '.**.' and
'*..*'? Or the range operators ^..^ .^^.?

Conceptually the 0 has no sign unless you want it to be junctive ;)
If you rotate the above vector diagrams by 120 degrees endlessly
clockwise and counter clockwise you get

   upper:  ...,0, 1,-1,0,...
   lower:  ...,0,-1, 1,0,...

which when superimposed additively lets the infinity of the
lists nicely vanish. But the same is not true if one is
shifted, the other squared and then subtracted, or some such.
The thing is you end up with the infinite list (...,!0,!0,...).

The net effect is the same as reading the list's string representations
in opposite directions. There is no dark side of the moon really!

Here's the same in another ASCII picture, with <=> at center stage
flanked by -**2 and +**2, and with i**2 == -1 and -i**2 on the
diagonals:

center stage we find <=> which returns one(-1,0,+1) or rotations
thereof: one(0,+1,-1) or one(+1,-1,0). Or the odd permutations
one(0,-1,+1), one(-1,+1,0) and one(+1,0,-1). In other words three
pairs of ones that cancel each other if zipped additively. The only
thing we need is a neutral element to get a group. Let's take the
empty one() junction. But an empty any() would do as well. These
sort of represent no comparison at all and a comparison you don't
care to constrain any further! In the latter case an undef return
value is not unexpected, I hope.

This is how that looks in an example:

   'foo' cmp 'bor'

-->  »cmp«  <--

-->  (-1, +0, +1) | (+1, -0, -1)  <--

The arrows indicate reading direction of the respective string.
In an optimised implementation the outer f/b and r/o give
the invariant relative positions of one('foo','oof') and
one('bor','rob') in the order of strings in viewing direction.
At least that is the observable behaviour. Endianess of the
machine, representation of junctions and what not may differ
for concrete implementations of Perl6 :)

Note that

   [?] (-1, +0, +1) »+« (+1, -0, -1)
   --> [?] (0,0,0)
   --> false

With (-++) and (+--) being dual Minkowski metrics.



Either I can put them in a <=
expression and it makes sense or I can't.


HiHi, this is a statement about the programmer's abilities...
not about Perl 6 the language!



If it makes sense, then that
implies that if $x <= $y is true, then $x > $y is false. Otherwise,
the definitions of <= and > have been violated.


I'll beg to differ.  If you insist on that kind of restriction on
Junctions, they won't be able to serve their original (and primary)
purpose of aggregating comparison tests together.  Remember:

  if $x <= 1 & 5 {...}

is intended to be 'shorthand' for

  if $x <= 1 && $x <= 5 {...}


I beg to differ, two err too. I mean potentially from both depending
on what they intended to say and how I understood them.

My point is that the junctive case sort of means:

  if $x.all:{<=}(1,5)  {...}

where :{<=} receives similar meta treatment as [<=] reduce would.
BTW, isn't bareword suffixing {<=} reserved for type theoretic
manipulations? I still regard the junctions as Code subtypes and
thus any{<=} and friends as parametric meta operation.
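The `$x.all:{<=}(1,5)` reading can be sketched in Python (a hypothetical model of the proposed meta treatment, names invented for illustration): the op is handed to the junction, which threads it over its eigenstates.

```python
import operator

def all_with(op, probe, *values):
    """$x.all:{<=}(1,5) read as: op holds between $x and every value."""
    return all(op(probe, v) for v in values)

# if $x <= 1 & 5 {...}  is intended shorthand for
# if $x <= 1 && $x <= 5 {...}
x = 0
assert all_with(operator.le, x, 1, 5) is True
x = 3
assert all_with(operator.le, x, 1, 5) is False   # 3 <= 1 fails
```

Note how the comparison operator appears as an *argument* here, which is the point being made: <= is handed to the junction, not the other way around.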



Therefore,

  $x = 3;
  if $x <= 1 & 5 {say 'smaller'}
  if $x > 1 & 5 {say 'larger'}

should produce exactly the same output as

  $x = 3;
  if $x <= 1 && $x <= 5 {say 'smaller'}


This is slightly untrue, because if the junction contains two
identical values or an undef ordered object the < part is
essentially stripped away:

if $x <= 5 && $x <= 5 {say 'smaller'}

can be permuted into

if $x <= 5 && 5 > $x {say 'smaller'}

and optimized to

if $x == 5 {say 'smaller'}



  if $x > 1 && $x > 5 {say 'larger'}

If it doesn't, then Junctions are useless and should be stricken from Perl 6.

And the definition of '>' is "greater than", not "not (less than or
equal to)".  The latter is a fact derived from how numbers and letters
behave, not anything inherent in the comparison operator.


The complete group of comparison operators imposes the

S02: reserved namespace ops

2006-02-21 Thread TSa

HaloO,

S02 states "(Directly subscripting the type with either square brackets 
or curlies is reserved for various generic type-theoretic operations. In 
most other matters type names and package names are interchangeable.)"


What are these type-theoretic operations? And what do directly
attached < > pointy brackets on a type mean? How are they related
to « »? Is there something like ::_ as 'topic' type?

Here are the details of all four bracketing ops:

Foo($bar) # function call? constructor?
Foo .($bar)   # same

Foo::($bar)   # symbolic symbol lookup with outward scanning
Foo .::($bar) # same


Foo::<$bar>   # direct symbol lookup without outward scanning
Foo::{'$bar'} # same
Foo .::<$bar> # same


Foo{$bar} # type constraint check?

Foo<$bar> # valid?
Foo«$bar» # valid?

Foo[$bar] # parametric type instantiation?

Foo::[$bar]   # valid?
Foo .::[$bar] # same?
--


Re: S02: reserved namespace ops

2006-02-23 Thread TSa

HaloO,

Larry Wall wrote:

Um, I always thought that "is reserved" in a spec means "we don't have
the foggiest idea what we'll do with this, but we have a suspicion
that if we let people use this particular thing right now, we'll
regret it someday."


OK, but how official is theory.pod? I mean is it part of the spec?
It definitely doesn't fit the synopsis numbering. It's more like
the track the Hogwarts express uses ;)
--


why no negative (auto reversed) ranges?

2006-03-20 Thread TSa

HaloO,

S03 does explicitly disallow auto-reversed ranges.
And I'm not sure if the upto operator has a downto
cousin where ^-4 == (-1, -2, -3, -4) returns a list
that is suitable for indexing an array from the back.
Why is that so?

With negative ranges, negative array and list length
becomes a logical extension and .reverse just flips
the sign of the array. But I know that code snippets
like 'if @array < 10 {...}' then need to be "upgraded"
to explicitly take the abs: 'if abs @array < 10 {...}'
which is good documentation but somewhat inconvenient.
OTOH, using contentless arrays as kind of integer becomes
even more attractive ;)

Is there a .reversed property for arrays, lists and
ranges that allows querying the direction of growth?
And is .reverse working lazily with respect to the
list or array, just flipping this property?
--


Re: [svn:perl6-synopsis] r8573 - doc/trunk/design/syn

2006-04-06 Thread TSa

HaloO,

[EMAIL PROTECTED] wrote:

* S02: fix the three places where the old form:
$x  .(...)
  needs to be replaced to the new form:
$x.  (...)



-&foo.($arg1, $arg2);
+&foo.   ($arg1, $arg2);


What is the reason for this change? I find the
old definition of whitespace before the dot much
more pleasing. The trailing dot looks somewhat lost
and does not link to the arglist visually, while
the preceding dot in .() made it look like a
method that binds leftwards. The same aesthetics apply
to postfix ops where

  $x  .++;

looks better to my eyes than

  $x.  ++;

Does the same apply for method calls?

  $x.  foo;

I've always seen the method dot as a kind of pseudo
sigil. A metaphor that doesn't work with whitespace
between dot and identifier.
--


Re: [svn:perl6-synopsis] r8573 - doc/trunk/design/syn

2006-04-06 Thread TSa

HaloO,

Damian Conway wrote:

We can't. The problem is that:

foo .bar

has to mean:

foo($_.bar)

So the only way to allow whitespace in dot operations is to put it after 
the dot.


The obvious alternative is to make 'foo .bar' simply mean
'call foo and dispatch .bar on the return value'. The topic
comes into play only if there's no syntactical invocant.
Kind of last resort fallback to $_ before 'method on void'
error. Why should the dispatch on topic get such a privilege?

Note that a prominent, typical foo actually reads:

  self .bar;

And a self($_.bar) is pretty much useless. In other words
wrongly huffmanized.
--


Re: $a.foo() moved?

2006-04-07 Thread TSa

HaloO,

Larry Wall wrote:

Sure, that one might be obvious, but quick, tell me what these mean:

say .bar
say .()
say .1
when .bar
when .()
when .1
foo .bar
foo .()
foo .1
.foo .bar
.foo .()
.foo .1

I'd rather have a rule you don't have to think about so hard.  To me
that implies something simple that lets you put whitespace *into*
a postfix without violating the "postfixes don't take preceding
whitespace" rule.


To me your examples fall into three categories that are distinguished
by the type of characters following the dot

  .bar  # identifier --> postfix method
  .()   # parens --> postfix invocation
  .1# number literal

From there on the difficulty comes not from the syntax but the
intended semantics! E.g. the non-whitespace when(), when.() and
when.bar are syntax errors, right? So why should the whitespace
version suddenly mean invocation or dispatch on topic?

The only thing that bothers me is that more parens and explicit
$_ will be needed. E.g. say (.bar) or say $_.bar in the first triple.
And the compiler has to know something about the bare foo. Apart
from that the syntactic rules are simple.
--


foo..bar or long dot and the range operator

2006-04-11 Thread TSa

HaloO,

I'm unsure what the outcome of the recent long dot discussions is
as far as the range operator is concerned. Since the whole point
of the long dot is to support alignment styles the following cases
shouldn't mean different things:

   foobar #0  single call to foobar (OK, that is different)
   foo.bar#1  method .bar on retval of foo
   foo..bar   #2  range from retval of foo to retval of bar?
   foo. .bar  #3  same as #1

   $x++   #0
   $x.++  #1
   $x..++ #2  or: $x. ++?
   $x. .++#3

   foo()  #0
   foo.() #1
   foo..()#2
   foo. .()   #3

Are all the #2 cases valid syntax? Does the range operator
need whitespace then? Or is there a hole for spacing level 2?

BTW, the above dot sequences let the successions of long dots
look like a wedge, so there are some merits in the idea.
But isn't forced whitespace on the range too high a price?
--


Re: foo..bar or long dot and the range operator

2006-04-12 Thread TSa

HaloO,

Larry Wall wrote:

On Tue, Apr 11, 2006 at 12:41:30PM +0200, TSa wrote:
: I'm unsure what the outcome of the recent long dot discussions is
: as far as the range operator is concerned.

.. is always the range operator.  The "dot wedge" just has a discontinuity
in it there.  I can't think of any wedgey applications that wouldn't work
about as well by starting the wedge with $x. .y instead of $x.y.


Doesn't that discontinuity devalue the long dot? Its purpose is
alignment in the first place. For a one-char difference in length one
now needs

   foo.  .bar;
   self. .bar;

instead of

   foo .bar;
   self.bar;

with the rules as before long dot was invented. Why are calls
on the topic so important? Wouldn't it be cleaner to force
a leading zero in numeric literals?

I might be too blind to see it, but could someone give some
examples where the cleanliness of the new parsing is obvious?
I mean compared to the old rules, not counting intended calls
on the topic.
--


Re: Perl 6 built-in types

2006-04-28 Thread TSa

HaloO,

Darren Duncan wrote:
Long story shortened, if we consider the point in time where an 
"immutable" object's constructor returns as being when the object is 
born, then we have no problem.  Any type of object is thereby immutable 
if it can not be changed after its constructor returns.


My lines of thinking about programs run along yours with values
existing eternally. But I think of the constructor of a constant
not as building the object in the first place *but* as making it
part of the current program. I imagine a conceptual membrane in
type space that encloses the program and shrinks and grows while
it is running. Mutability says something about the membrane's
changeability *not* about the value. This inside/outside metaphor
is very strong. A mutable object is an aggregate of changing pieces
of the membrane in a fixed membrane frame of program real estate.

After understanding values as above, I just want to add that
the type of objects makes statements about all possible values
of its class. The runtime component of the type system enforces
all constraints placed on values of the type while the program
attempts to modify the membrane to catch the values into it
or some such :)

In yet another view an object conceptually is a One Junction
over all possible values. At any given time in a program flow
the program space representing the object has a well defined
value. Whether it matches the intent of the programmer remains to
be seen!
--


Re: Perl 6 built-in types

2006-04-28 Thread TSa

HaloO,

Larry Wall wrote:

On Fri, Apr 28, 2006 at 04:41:41AM +, Luke Palmer wrote:
: It seems like a hash whose values are the unit type.  Does Perl have a
: unit type?  I suppose if it doesn't, we could define one:
: 
:subtype Unit of Int where 1;
: 
: (Assuming that "where" groks whatever "when" does).
: 
: Then your mutable set is:
: 
:my Hash of Unit $set;


Hmm, well, if values are just single-element subsets, then:

my %set of 1;
my 1 %set;


And presumably also

  my %set of true;
  my true %set;

But the real question is: "Does the storage optimization
fall out naturally or does it need explicit instantiation
of a generic hash container for the unit type?"
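The "hash whose values are the unit type" reading of a set can be sketched in Python (a hypothetical model; class name invented for illustration): membership is key existence, and every stored value is the same uninformative unit.

```python
# A set as a hash whose values carry no information (the unit type):
# membership is key existence; the stored value is always the same.
UNIT = True

class HashSet:
    def __init__(self, *elems):
        self._h = {e: UNIT for e in elems}
    def add(self, e):
        self._h[e] = UNIT
    def __contains__(self, e):
        return e in self._h

s = HashSet('a', 'b')
s.add('c')
assert 'b' in s and 'c' in s
assert 'z' not in s
# A real implementation could drop the unit values entirely,
# which is exactly the storage optimization in question.
```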
--


Re: "normalized" hash-keys

2006-05-08 Thread TSa

HaloO,

Dr.Ruud wrote:

What would be the way to define-or-set that a specific hash has
non-case-sensitive keys?


There are two things in this:
 (1) The syntax to type the keys of a hash---too bad that I forgot it
 and currently don't find it in the Synopses. Pointers welcome!
 (2) A way to constrain a string to be case insensitive.

The latter could be nicely stated as

  subset CaseInsensitive of Str where { .lc eq .uc }

but it actually is a constraint on the &infix:<eq>, not on
the strings. So I doubt that it works as expected. But then
I'm not sure if one can make subsets of operators at all:

  subset CaseInsensitive of eq where { .lc eq .uc }

I guess you have to simply define CaseInsensitive as an alias---that is
an unconstrained subset---and overload &infix:<eq>. Then the hash
hopefully uses this operator when its key type is CaseInsensitive.
--


Re: "normalized" hash-keys

2006-05-09 Thread TSa

HaloO,

Smylers wrote:

But why would a hash be doing equality operations at all?


I think it does so in the abstract. A concrete implementation
might use the .id method to get a hash value directly.



 Assuming that
a hash is implemented efficiently, as a hash, then it needs to be able
to map directly from a given key to its corresponding value, not to have
to compare the given key in turn against each of the stored keys to see
if they happen to match under some special meaning of eq.

You snipped Ruud's next bit:



Or broader: that the keys should be normalized (think NFKC()) before
usage?


As $Larry pointed out, this all boils down to getting the key type
of the hash wired into the hash somehow. Assuming it dispatches on
the .id method the following might do:

  class CaseInsensitive does Str
  {
 method id { self.uc.Str::id }
  }

  my %hash{CaseInsensitive};

As usual the compiler might optimize dynamic method lookup away
if the key type is known at compile time.



That seems the obvious way to implement this, that all keys are
normalized (say with C, for this specific example) both on storage
and look-up.  Then the main hashy bit doesn't have to change at all.


Yep. But the @Larry need to confirm that the Hash calls .id on strings
as well as on objects. Note that Array might rely on something similar
to access the numeric key. Then you could overload it e.g. to get a
modulo index:

  class ModuloIndex does Int
  {
 method id { self % 10 }
  }

  my @array[ModuloIndex];
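Both examples boil down to piping every key through an id hook before the container sees it. A hypothetical Python sketch of that dispatch (class name invented for illustration):

```python
class KeyedDict(dict):
    """A dict that pipes every key through an 'id' hook, like the
    proposed Hash/Array dispatch on the key type's .id method."""
    def __init__(self, key_id):
        super().__init__()
        self.key_id = key_id
    def __setitem__(self, k, v):
        super().__setitem__(self.key_id(k), v)
    def __getitem__(self, k):
        return super().__getitem__(self.key_id(k))

# CaseInsensitive: id is the case-folded string
h = KeyedDict(str.upper)
h['Foo'] = 1
assert h['FOO'] == 1 and h['foo'] == 1

# ModuloIndex: id is the index modulo 10
a = KeyedDict(lambda i: i % 10)
a[3] = 'x'
assert a[13] == 'x'
```

The "main hashy bit" stays untouched, just as suggested: only the key-to-id mapping changes.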
--


Re: Can foo("123") dispatch to foo(Int) (was: Mutil Method Questions)

2006-06-27 Thread TSa

HaloO,

Paul Hodges wrote:

so back to foo("bar"). What's the default behavior? String doesn't Num,
does it? though it does convert if the value is good


I think that Str and Num are disjoint mutually exclusive types.
If you want to get coercive behaviour you need an overloaded
&foo:(Str|Num) that handles conversions and redispatches.



Does that mean foo("123") should or should not dispatch to foo(Int)?
Or even foo(Num), for that matter? Oy, I could see some headaches
around setting these rules in mind, lol. What's the DWIMmiest
behavior here?


I strongly believe that foo("123") should never *dispatch* to
&foo:(Int). The best in dwimmery would be an auto-generated
&foo:(Item) that handles coercions. Would that be switchable
by a 'use autocoercions' or so?



I assume foo(1) would dispatch to foo(Int) rather than foo(Num), as the
most restrictive fit, and I could see how that could blow up in my
face. Is there some basic compile time warning (and a way to tell it
"shut up, I know what I'm doing") that says "hey, these are
suspiciously close signatures, are you sure this is what you wanted?"


Last time I checked the subtyping of Perl6 was set-theoretic. That means
to me that Int is a strict subset of Num along the lines of

  subset Int of Num where { .frac == 0 }

Note that Num might be implemented as a subclass of Int that adds the
fractional part! But subclassing is not subtyping.
--


Re: clarifying the spec for 'ref'

2006-09-04 Thread TSa

HaloO,

Luke Palmer wrote:

Removing abilities, counterintuitive though it may seem on the
surface, makes the type *larger*.  It is not adding constraints, it is
removing them (you might not be able to call set($x) on this object
anymore).


Welcome to the co- and contra-variance problem again. We must
distinguish two sets:
 (1) the set of all conceivable instances
 (2) the set of constraints

Larry means (2) while Luke is talking about (1) in the
particular case of record subtyping I think. That is
methods are arrow typed slots of the object record (That is
they have type :(Object --> ::X --> ::Y)). Interestingly Perl6
doesn't provide a sound sublanguage for defining constraints
in a way that is amenable for predicate dispatch. I would
expect the where blocks to be under very strict control of the
type system but seemingly they aren't.



In order to resolve the linguistic conundrum of "when Array", we could
say that the Array role in fact implements a constant Array.  If you
would like to be a mutable array, you implement MutableArray or
something.  That would make code like this:

   given $x {
 when Array { $x[42] = 42; }
   }

broken from a pure point of view.  You may not be able to set element
42 of that array.  Perl would still allow the code to compile, of
course, because Perl is a laid-back language.


The point is that reference types are co-variant for reading and
contra-variant for writing. The only escape for rw access is mandating
type equality which in Perl6 comes as 'does Array' and has got the
rw interface.



Constness is something that exists, so we have to solve it somehow.


Yes, but it's only half the story! The other thing that has to be
solved is writeonlyness. Both in isolation result in type soundness
in the presence of subtype directed dispatch. But both at the same
time lose this and mandate type equality.

Note that a rw Array is nicely applicable where either a readonly
or writeonly subtype is expected. The only difference is in the
return type handling, of course! Also sharing parts of the Array
implementation is orthogonal to the question of subtyping.



But
pretending that const arrays are arrays with the added capability that
they throw an error when you try to write to them is going to get us
to a broken type model.


I think type inference should be strong enough to find out that
an Array parameter of a sub is consistently used readonly or writeonly
and advertise this property accordingly to the typechecker and the
dispatcher.


Regards,
--


Re: but semantics (was Re: Naming the method form of s///)

2006-09-04 Thread TSa

HaloO,

Trey Harris wrote:
I do not think that C<but> should mutate its LHS, regardless what its
RHS is.


I strongly agree. We have the mutating version

  $p but= { .y = 17 };

which is just one char longer and nicely blends as a meta operator.
But are assignment ops allowed as initializer?

  my $z = $p but= { .y = 17 };

Regards,
--


class interface of roles

2006-09-19 Thread TSa

HaloO,

After re-reading about the typing of mixins in
http://www.jot.fm/issues/issue_2004_11/column1
I wonder how the example would look like in Perl6.
Here is what I think it could look like:

role GenEqual
{
   method equal( : GenEqual $ --> Bool ) {...}
}

role GenPointMixin
{
   has Int $.x;
   has Int $.y;
   method equal( ::?CLASS GenEqual $self: ::?CLASS $p --> Bool )
   {
   return super.equal($p) and  # <-- handwave
 self.x == $p.x and self.y == $p.y;
   }
}

class GenSquare does GenEqual does GenPointMixin
{
   has Int $.side;
   method equal ( : ::?CLASS $p --> Bool )
   {
  return self.side == $p.side;
   }
}

The handwave part is the interface of the composed role to
the class it is composed into and the typing of this interface.
The article proposes to expand the mixin self type prior to the
class interface. The latter then poses type constraints onto the
class. Do roles work like that in Perl6? I mean would the approach
of the article of using two F-bounded quantifications (see the last
formula in section 8) be a valid type model for class composition?


Regards, TSa.
--


Re: Nitpick my Perl6 - parametric roles

2006-09-25 Thread TSa

HaloO,

Sam Vilain wrote:

Anyone care to pick holes in this little expression of some Perl 6 core
types as collections?  I mean, other than missing methods ;)


My comments follow.



  role Collection[\$types] {
   has Seq[$types] @.members;
  }


This is a little wrapper that ensures that collections have got
a @.members sequence of arbitrary type. This immediately raises
the question how Seq is defined.



  role Set[::T = Item] does Collection[T] where {
  all(.members) =:= one(.members);
  };


Nice usage of junctions! But how is the assertion of
uniqueness carried over into methods that handle the adding
of new values? In other words: I doubt that this comes
for free out of the type system's inner workings; it is
more a statement of the intent of a Set. Which leads
to the question of when this where clause will be checked
or used as a basis for MMD decisions.



  role Pair[::K = Item, ::V = Item] does Seq[K,V] {
  method key of K { .[0] }
  method value of V { .[1] }
  };
 
  role Mapping[::K = Item, ::V = Item] does Collection[Pair[K,V]] {

  all(.members).does(Pair) and
  all(.members).key =:= one(.members).key;
  }


I guess this is a typo and you wanted a where clause. The first
assertion of the members doing the Pair role should be guaranteed
by using Pair as the type argument when instantiating the Collection
role. Are you sure that the underlying Seq type handles the one and
two parameter forms you've used so far?



  role Hash[Str ::K, ::V = Item] does Mapping[K, V]


Will the Str assertion not be too specific for non-string hashes?


  where { all(.members).key == one(.members).key }
  {
  method post_circumfix:<{ }> (K $key) of V|Undefined {


Nice union type as return type. But isn't the type name Undef?


  my $seq = first { .key == $key } &.members;


Wasn't that @.members?


  $seq ?? $seq.value :: undef;


Ternary is spelled ?? !! now.



  }
  }
 
  role ArrayItem[::V = Item] does Seq[Int, V] {

  method index of Int { .[0] }
  method value of Item { .[1] }
  };


Here we see the two parameter version of Seq at work.



  role Array of Collection[ArrayItem]
  where { all(.members).index == one(.members).index }
  {
  method post_circumfix:<[ ]> (Int $index) of Item {
  my $seq = first { .index == $index } &.members;


Is this first there a grep-like function? Shouldn't it then
read 'first { .index == $index }, @.members'?



  $seq ?? $seq.value :: undef;
  }
  }

I'll take the feedback I get, and try to make a set of Perl 6 classes in
the pugs project that look and feel just like regular Perl 6 hash/arrays
but are expressed in more elementary particles.


This might be very useful in future debates about these types. But
I think everything hinges on the basic Seq type.


Regards, TSa.
--


Re: Nitpick my Perl6 - parametric roles

2006-09-25 Thread TSa

HaloO,

Miroslav Silovic wrote:

TSa wrote:

Nice usage of junctions!



But buggy - one means *exactly* one. So for an array of more than 1 
element, all(@array) never equals one(@array) - if they're all the same, 
it's more than 1, otherwise it's 0.


Doesn't all(1,2,3) == one(1,2,3) expand the all junction first?
So that we end up with 1 == one(1,2,3) && 2 == one(1,2,3)
&& 3 == one(1,2,3) which is true. In the case of duplicated
entries we get a false if the one-junction supports that.
That is, how does e.g. one(1,1,2) work? Is it failing for 1?
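Since the Perl 6 junctions under discussion weren't runnable at the time, here is a small Python model (all names here are hypothetical, chosen just for this sketch) of the expansion semantics described above, both with and without the duplicate elimination raised in the follow-up:

```python
def one(candidates, value):
    # A 'one' junction test: true iff value matches exactly one candidate.
    return sum(1 for c in candidates if c == value) == 1

def all_eq_one(xs, dedup=False):
    # Model all(xs) == one(xs): every expanded value must match exactly one.
    pool = sorted(set(xs)) if dedup else list(xs)
    return all(one(pool, x) for x in pool)

print(all_eq_one([1, 2, 3]))              # True: every value occurs once
print(all_eq_one([1, 1, 2]))              # False: 1 matches two candidates
print(all_eq_one([1, 1, 2], dedup=True))  # True: duplicates stripped first
```

Under this model the test does detect duplicates, but only if the junction keeps them: if the eigenstates are deduplicated first, all(1,1,2) == one(1,1,2) behaves exactly like all(1,2) == one(1,2).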

Regards,
--


Re: Nitpick my Perl6 - parametric roles

2006-09-26 Thread TSa

HaloO,

Sam Vilain wrote:

perl -MPerl6::Junction=one,all -le '@foo=qw(1 2 3 4); print "yes" if
(all(@foo) eq one(@foo))'
yes


But does it fail for duplicates? I guess not because junctions
eliminate duplicates and you end up testing unique values as
above. E.g. all(1,1,2) == one(1,1,2) might actually give the
same result as all(1,2) == one(1,2).

Regards,
--


Re: [svn:perl6-synopsis] r12398 - doc/trunk/design/syn

2006-09-26 Thread TSa

HaloO,

Luke Palmer wrote:

Woohoo!  I was about to complain about this whole "capture sigil"
nonsense, but I'm guessing somebody else already did.  I also like the
new [,] :-)


I'm very glad, too. Even though I would like the new operator
spelled / for aesthetic reasons.

Regards,
--


Re: class interface of roles

2006-09-26 Thread TSa

HaloO,

is this subject not of interest? I just wanted to start a
discussion about the class composition process and how a
role designer can require the class to provide an equal
method and then augment it to achieve the correct behavior.
Contrast that with the need to do the same in every class that the
equal method is composed into, if the role doesn't have a superclass
interface as described in the article.


Regards,
--


Re: Nitpick my Perl6 - parametric roles

2006-09-26 Thread TSa

HaloO,

Sam Vilain wrote:

Ah, yes, a notable omission.  I understood a Seq as a list with
individual types for each element, which are applied positionally.


I can understand that the type-checker can produce this type
for immutable sequences.



 The
superclass for things like Pair.


Hmm, have to think about that. Tuple subtyping usually has the
reverse logic where a larger tuple is a subtype if the common
parts are subtypes.



Here's a quick mock-up of what I mean:

   role Seq[ \$types ] {
   submethod COMPOSE {  # a la BUILD for classes
   loop( my $i = 0; $i < $types.elems; $i++ ) {
   # assumption: "self" here is the class
   # we're composing to, and "has" is a class
   # method that works a little like Moose
   self.has( $i++ => (isa => $types.[$i]) );


I don't understand this line. First of all, why the increment?
Then what does the double pair achieve when given to the .has
method? I guess you want a numeric attribute slot.


   }
   }
   my subset MySeqIndex of Int
   where { 0 <= $_ < $types.elems };

   method post_circumfix:<[ ]>( $element: MySeqIndex ) {
   $?SELF.$element;


This is also unclear. Firstly there's no $?SELF compile time
variable anymore. Secondly you are using the invocant as method?
Shouldn't that be the sequence index for retrieving the numerical
attribute slot?


   }
   }

Seq is certainly interesting for various reasons.  One is that it is a
parametric role that takes an arbitrary set of parameters.  In fact,
it's possible that the type arguments are something more complicated
than a list (making for some very "interesting" Seq types); the above
represents a capture, but the above code treats it as a simple list.


Can type parameters of roles be handled through normal variables as
your code does?


Regards,
--


Re: RFC: multi assertions/prototypes: a step toward programming by contract

2006-09-28 Thread TSa

HaloO,

Miroslav Silovic wrote:
What bugs me is a possible duplication of functionality. I believe that 
declarative requirements should go on roles. And then packages could do 
them, like this:


package Foo does FooMultiPrototypes {
...
}


I like this idea because it makes roles the central bearer of type
information. I think there has to be a way for a role to define
constraints onto the thing it is going to be composed into---see my
unreplied mail 'class interface of roles'. The type requirements
of a role could be mostly inferred from the role definition, especially
if we have something like a super keyword that in a role generically
refers to the thing it's composed into before the role is composed.
Note that self refers to the object *after* composition and instance
creation.


Of course, I hadn't quite thought this through - as packages aren't 
classes, there would probably have to be heavy constraints on what 
FooMultiPrototypes may declare.


Basically instance data declarations and methods require a class
for composition. Everything else can also go into packages and modules.
IIRC, additional methods can come as package qualified as in

role Foo
{
   method Array::blahh (...) {...}  # goes into Array class
}

and as such require the package to make an Array class available.


Regards,
--


Re: RFC: multi assertions/prototypes: a step toward programming by contract

2006-09-28 Thread TSa

HaloO,

Trey Harris wrote:

I would hate for Perl 6 to start using C or C in the
sort of ways that many languages abuse "Object" to get around the 
restrictions of their type systems.  I think that, as a rule, any
prototype encompassing all variants of a multi should not only
specify types big enough to include all possible arguments, but also
specify types small enough to exclude impossible arguments.


As Miroslav proposed to handle the specification of the interface with
role composition we get another thing as well: implicit type parameter
expansion. That is, a role can be defined in terms of the self type and
e.g. use that as parameter type, return type and type constraints. All
of which nicely expand to the module or package type the role is
composed into!


Regards, TSa.
--


Re: class interface of roles

2006-10-05 Thread TSa

HaloO,

Brad Bowman wrote:

Sam Vilain wrote:

This will be the same as requiring that a class implements a
method, except the method's name is infix:<==>(::T $self: T $other)
or some such.


Sure. The point is, how does a role designer mix in the x and y
coordinate attributes *and* augment the notion of equality to
encompass these new attributes *without* shifting this burden onto
the class implementor!



Does the "class GenSquare does GenEqual does GenPointMixin" line
imply an ordering of class composition?  This would seem to be
required for the super.equal hand-wave to work but part of the Traits
Paper goodness came from avoiding an ordering.  Composition is just
order insensitive flattening.  Conflicts like the equal method in the
OP have to be explicitly resolved in the target class, either using
aliases or fully qualified names.  So there's no super needed.


Hmm, my aim was more at the class composition process as such.
I envision a type bound calculation for all composed roles. This
bound then is available as super to all roles. Inter-role conflicts
are orthogonal to this. If the bound is fulfilled the order of
role composition doesn't matter. In the case of the equality checks
the different participants of the composition call each other and
use logical and to yield the final equality check. To the outside
world the equality method appears to be a single call on the type
of the objects created from the class. Note that it is type correct
in the sense that all participants are considered. My hope is that
this is achieved automatically as outcome of the composition process
and not by intervention of the class implementor.



(I should confess that I haven't yet read the OP linked article),


To understand it you might actually need to read previous articles
in the series, too.


Regards,
--


Re: RFC: multi assertions/prototypes: a step toward programming by contract

2006-10-05 Thread TSa

HaloO,

Larry Wall wrote:

Basically, all types do Package whenever they need an associated
namespace.


Great! This is how I imagined things to be. And the reason why
the :: sigil is also the separator of namespaces.



 And most of the Package role is simply:

method postfix:<::> () { return %.HOW.packagehash }

or some such, so "$type.::" returns the symbol table hash associated
with the type, if any.  It's mostly just a convention that the Foo
prototype and the Foo:: package are considered interchangable for
most purposes.


Do these namespaces also give a structural type definition in the
sense of record subtyping? That is, a package is a subtype of another
package if its set of labels is a superset and the types of the slots
available through the common labels are in a subtype relation?


Regards,
--


Re: class interface of roles

2006-10-06 Thread TSa

HaloO,

Stevan Little wrote:

As for how the example in the OP might work, I would suspect that
"super" would not be what we are looking for here, but instead a
variant of "next METHOD".


I'm not familiar with the next METHOD syntax. How does one get the
return value from it and how are parameters passed? Would the respective
line in the equal method then read:

   return next METHOD($p) and self.x == $p.x and self.y == $p.y;

I think that a super keyword might be nice syntactic sugar for this.


> However, even with that an ordering of some
> kind is implied

The only ordering I see is that the class is "up" from the role's
perspective. When more than one role is combined and all require
the presence of an equal method I think the roles can be combined
in any order and the super refers to the class combined so far.
IOW, at any given time in the composition process there is a current
version of the class' method. The final outcome is a method WALK
or however this is called in composition order. Conceptually this
is method combination: seen from outside the class has just one
type correct method equal. Theoretical background can be found in
http://www.jot.fm/issues/issue_2004_01/column4



I suppose this is again where the different concepts of classes and
roles can get very sticky. I have always looked at roles, once composed
into the class, as no longer needing to exist. In fact, if it weren't
for the idea of runtime role composition and runtime role
introspection, I would say that roles themselves could be garbage
collected at the end of the compile time cycle.


I see that quite different: roles are the primary carrier of type
information! Dispatch depends on a partial ordering of roles. I
think all roles will form a type lattice that is available at
runtime for type checks. With parametric roles there will be dynamic
instantiations as needed.


Regards,
--


Re: class interface of roles

2006-10-06 Thread TSa

HaloO,

Stevan Little wrote:

On 10/2/06, Jonathan Lang <[EMAIL PROTECTED]> wrote:

This notion of exclusionary roles is an interesting one, though.  I'd
like to hear about what kinds of situations would find this notion
useful; but for the moment, I'll take your word that such situations
exist and go from there.


Well to be honest, I haven't found a real-world usage for it yet (at
least in my travels so far), but the Fortress example was this:

 trait OrganicMolecule extends Molecule
 excludes { InorganicMolecule }
 end
 trait InorganicMolecule extends Molecule end


Wouldn't that be written in Perl6 the other way around?

  role OrganicMolecule {...}
  role InorganicMolecule {...}

  role Molecule does OrganicMolecule ^ InorganicMolecule {...}

Which is a nice usage of the xor role combinator.



And from that I could see that given a large enough set of roles you
would surely create roles which conflicted with one another on a
conceptual level rather than on a methods/attribute (i.e. more
concrete) level.


I don't subscribe to that. If roles conceptually model the same
entity, their vocabularies should conflict as well, unless some
differing coding conventions accidentally produce non-conflicting
roles. The whole point of type systems relies on the fact that
concrete conflicts indicate conceptual ones!


Regards,
--


Re: class interface of roles

2006-10-09 Thread TSa

HaloO,

Stevan Little wrote:

I think that maybe we need to separate the concept of roles as types
from roles as partial classes; they seem to me to be in conflict with
one another. And even if they are not in conflict with one another, I
worry they will bloat the complexity of roles usage.


The bloat aside I believe it is essential to have roles as the key
players of the type system. I propose to handle the typeish aspect of
roles as described in the paper I linked to: there should be a trait
'is augmented' that is applicable to a role and to methods in a role.
In such a method a call to next METHOD should invoke the class's method.
Alternatively we could make the method combination behavior the default
and have a class method trait 'is disambig' or 'is override' for the
case where the class needs to have the final word.

Note that the superclass interface of roles should be mostly inferred
from the usage of next METHOD. As such it is a useful guidance for
error reports in the class composition process.



My experiences thus far with roles in Moose have been that they can be
a really powerful means of reuse. I point you towards Yuval Kogman's
latest work on Class::Workflow, which is basically a loose set of
roles which can be composed into a highly customizable workflow
system. This is where I see the real power of roles coming into play.


Is this a set of free mixins or are they dependent on the class to
provide a certain interface to fully achieve the means of the roles?
I also consider roles a powerful tool but I believe that the type
system should play a role in the composition process that goes beyond
simple checking of name clashes.


Regards,
--


Re: class interface of roles

2006-10-09 Thread TSa

HaloO,

Jonathan Lang wrote:

TSa wrote:
 > Dispatch depends on a partial ordering of roles.

Could someone please give me an example to illustrate what is meant by
"partial ordering" here?


In addition to Matt Fowles explanation I would like to
give the following example lattice build from the roles

  role A { has $.x }
  role B { has $.y }
  role C { has $.z }

There can be the four union combined roles A|B, A|C, B|C
and A|B|C which complete the type lattice:

                Any={}
               /  |  \
              /   |   \
             /    |    \
         A={x}  B={y}  C={z}
            | \  / \  / |
            |  \/   \/  |
            |  /\   /\  |
            | /  \ /  \ |
    A|B={x,y} A|C={x,z} B|C={y,z}
               \  |  /
                \ | /
                 \|/
             A|B|C={x,y,z}

Note that A = (A|B) & (A|C) is the intersection type of A|B and A|C.
Note further that A|B is a subtype of A and B written A|B <: A and
A|B <: B and so on. Usually the A|B|C is called Bottom or some such.
I think it is the Whatever type of Perl6. It is the glb (greatest lower
bound) of A, B and C. In a larger lattice this type gets larger as well.
Any is the trivial supertype of all types.

This lattice can then be used for type checks and specificity
comparisons in dispatch. BTW, such a lattice can be calculated lazily
from any set of packages. In pure MMD the selected target has to be
the most specific in all dispatch relevant positions.
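A minimal Python sketch of this lattice, under the simplifying assumption (mine, not the spec's) that a role's type is modelled by its set of attribute names: join (|) unions the members and moves down the lattice, meet (&) intersects them and moves up, and s <: t holds when s carries at least t's members.

```python
A, B, C = {"x"}, {"y"}, {"z"}

def join(*roles):
    # A|B: union of members, a more specific type (further down the lattice)
    return set().union(*roles)

def meet(a, b):
    # A&B: common members, a more general type (further up the lattice)
    return a & b

def is_subtype(s, t):
    # s <: t iff s provides at least everything t promises
    return s >= t

AB, AC, ABC = join(A, B), join(A, C), join(A, B, C)
assert meet(AB, AC) == A                         # A = (A|B) & (A|C)
assert is_subtype(AB, A) and is_subtype(AB, B)   # A|B <: A and A|B <: B
assert is_subtype(ABC, AB)                       # A|B|C is the bottom-most node
assert all(is_subtype(r, set()) for r in (A, B, C))  # Any={} sits on top
```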

Regards,
--


Re: class interface of roles

2006-10-09 Thread TSa

HaloO,

TSa wrote:

Note that the superclass interface of roles should be mostly inferred
from the usage of next METHOD. As such it is a useful guidance for
error reports in the class composition process.


Actually 'next METHOD' doesn't catch all superclass interface issues.
There is the simple case of calling e.g. accessor methods on super
which should result in the requirement to provide them. So I still
propose a super keyword that in roles means the object as seen from
the uncomposed class.


Regards,
--


Re: class interface of roles

2006-10-09 Thread TSa

HaloO,

Stevan Little wrote:

I do not think method combination should be the default for role
composition, it would defeat the composeability of roles because you
would never have conflicts.


I don't get that. The type system would give compile time errors.
The current spec means that in case of a "conflicting" method the
class version simply overrides the role version. That is, there is
simple short- or long-name-based "conflict" checking, with priority
given to the class.



 > I see that quite different: roles are the primary carrier of type
 > information!

Well yes, they do seem to have taken on this role ;).


If it is not roles that carry type information then the Perl6 type
system is as of now completely unspecced. Even with roles I miss
some clear statements about their theoretical background.



However, roles
as originally envisioned in the Traits paper are not related to the
type system, but instead related to class/object system. In fact the
Trait paper gave it's examples in Smalltalk, which is not a strongly
typed language (unless you count the idea that *everything* is an
object and therefore that is their type).


Remember that the paper did not include state for traits and thus
nicely avoided several typing issues, particularly in Smalltalk, which
is based on single inheritance and dispatches along those lines.



I think we need to be careful in how we associate roles with the type
system and how we assocaite them with the object system. I worry that
they will end up with conflicting needs and responsibilities and roles
will end up being too complex to be truly useful.


My current understanding is that properly typed roles can obviate
the need for the things described in theory.pod and go directly with
F-bounded polymorphism as the theoretical model of the type system.
It e.g. is strong enough to model Haskell type classes. Note that there
are free mixins that pose no requirements on the class in the theory
as described in the article.



 > Dispatch depends on a partial ordering of roles.

Type based dispatch does (MMD), but class based method dispatch
doesn't need it at all.


I strongly agree. The two hierarchies should be separated. The
only interweaving that occurs is the class composition process.
And this process should IMHO be directed by the type system and
provide for method combination when the correct typing of the role
requires it. Typical examples that need method combination are
equality checking, sorting support and generic container types.

There seems to be another connection from the class hierarchy to
the role hierarchy: a class has a role of the same name, so that
class names can be used where a type is expected, or however this
is supposed to work. In the end there shall be some mixing of class-
and type-based dispatch, or some kind of embedding of class dispatch
into type dispatch.



The whole fact that dispatching requires roles to be partially ordered
actually tells me that maybe roles should not be so hinged to the type
system since roles are meant to be unordered.


But how else do we define a subtype relation if not through a role
type lattice?



Possiblely we should be seeing roles as a way of *implementing* the
types, and not as a core component of the type system itself?


Hmm, again what is the type system then? All indications from the
Synopsis and this list go for roles taking over the responsibility
of key player in the type department. E.g. type parameters also go
with roles not with classes.


Regards,
--


Re: class interface of roles

2006-10-09 Thread TSa

HaloO,

TSa wrote:

Note that A = (A|B) & (A|C) is the intersection type of A|B and A|C.
Note further that A|B is a subtype of A and B written A|B <: A and
A|B <: B and so on. Usually the A|B|C is called Bottom or some such.
I think it is the Whatever type of Perl6. It is the glb (greatest lower
bound) of A, B and C. In a larger lattice this type gets larger as well.
Any is the trivial supertype of all types.


Damn it! I always mix up glb and lub (least upper bound). So it should
read lub there. This is because the intension set gets larger even
though a subtype is formed. Note that the extension set "the instances"
becomes smaller! In a limiting case the Whatever type has got no
instances at all :)

Sorry,
--


Re: Nitpick my Perl6 - parametric roles

2006-10-10 Thread TSa

HaloO,

Darren Duncan wrote:
Within a system that already has an underlying set-like type, the 
Junction in this case, a test for uniqueness is (pardon any spelling):


  all(@items).elements.size === @items.size

The all() will strip any duplicates, so if the number of elements in 
all(@items) is the same as @items, then @items has no duplicates.


OK, but you are not using the 'set with additional behavior' of
junctions. How would that be spelled with a pure set?

   set(@items).elements.size === @items.size

perhaps? This would nicely blend with the junction forms. Would
it even be the case that Set is a super role of the junctions?
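For what it's worth, the uniqueness test that both spellings express reduces, in Python terms (used here only because it runs today), to comparing sizes before and after duplicate elimination:

```python
def is_unique(items):
    # set(@items).elements.size === @items.size, modelled with Python's set
    return len(set(items)) == len(items)

print(is_unique([1, 2, 3]))  # True
print(is_unique([1, 1, 2]))  # False: the set strips the duplicate 1
```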

Regards,
--


Re: class interface of roles

2006-10-11 Thread TSa

HaloO,

Jonathan Lang wrote:

What do you mean by "uncomposed class"?


The self always refers to the object as an instance of the composed
class. Methods therefore resolve to the outcome of the composition
process. But super in a role refers to methods from the class
definition, even when the final method comes from method combination
in the composition process.



I think the word "super" is already too overloaded to be used for
this purpose.  Setting that aside for now...


Better ideas? Is there a super keyword already? Or do you mean
overloaded in general OO and type speak?



Do you mean that super.blah() appearing in a method defined in a
role A should require that all classes which do A need to provide a
blah implementation?  (or produce a composition time error if they
don't)


Yes, this is exactly what I mean with superclass interface of roles.



I hope not; that's exactly what declaring an unimplemented method in
a role is supposed to do.


My idea is that the type system calculates a type bound for the class
from the definition of the role. That includes attributes and their
accessor methods. It's a matter of taste how explicit this interface
is declared or how much of it is inferred.



 (And declaring an implemented method does
the same thing, with the addition that it also suggests an
implementation that the class is free to use or ignore as it sees
fit.)


We have a priority conflict here. The question is whether the class or
the role sees the other's methods for method combination.



A Role should be able to say "to do this Role you need to implement
these methods" and have a compile/composition time error if not.


Agreed.


I agree as well. But on a wider scope then just method provision.



(There does need to be a way to call, in a Role A, both the "blah"
 defined in A and whatever the "blah" the final class may use.


Yes, this is the subject of the current debate. I'm opting for a
method combination semantics that allows the role to call the class
method.



$self.blah() is the later, $self.A::blah() or similar is likely to
be the former.)


No, there doesn't.  Given that Foo does both A and B, there needs to
be a way to call, in class Foo, both the "blah" defined in Foo, the
"blah" defined in A (so that Foo can reimplement A's version as a
different method or as part of its own), and the "blah" defined in B;


From the class all composed parts are available through namespace
qualified names. But a role is a classless and instanceless entity.
The self refers to the objects created from the composed class. The role
is not relevant in method dispatch. That is a method is never dispatched
to a role. But the role should be able to participate in the method
definition of the composed class.


and there needs to be a way to call, in role A, 
both the "blah" defined in Foo and the "blah" defined B; but role A 
does not need a way to explicitly call a method defined in A.


I'm not sure if I get this right. But as I said above, a role cannot
be dispatched to. Which method do you think should take precedence,
the role's or the class's? That is, who is the defining entity in the
method combination process? I would hope it is the role, if some as yet
unknown syntax has declared it so. Perhaps that should even be the
default. The rationale for my claim is that a role is composed several
times, and every class doing the role then automatically gets the
correct version. Otherwise all classes are burdened with caring for
the role's part in the method.


It should assume that if Foo overrides A's implementation of blah, Foo
knows what it's doing; by the principle of least surprise, Foo should
never end up overriding A's implementation of blah only to find that
the original implementation is still being used by another of the
methods acquired from A.


Could you give an example? I don't understand what you mean by the
original implementation and how that would be used by role methods.
Method dispatch is on the class, never on the role. As far as dispatch
is concerned the role is flattened out. But the question is how the
class's method is composed in the first place.


Regards,
--


Re: class interface of roles

2006-10-11 Thread TSa

HaloO,

Jonathan Lang wrote:

So if I'm reading this right, a class that does both A and B should be
"lower" in the partial ordering than a class that does just one or the
other.  And if A does B, then you'll never have a class that does just
A without also doing B, which trims out a few possible nodes and paths
from the lattice for practical purposes:

         Any={}
           |  \
           |   \
           |    \
         B={y}  C={z}
          /  \    |
         /    \   |
        /      \  |
   A|B={x,y}  B|C={y,z}
         \      /
          \    /
           \  /
       A|B|C={x,y,z}


Correct. The lattice is a structural analysis of the roles.



I note that while the lattice is related to whatever role hierarchies
may or may not exist, it is not the same as them.  In particular,
roles that have no hierarchal relationship to each other _will_ exist
in the same lattice.  In fact, all roles will exist in the same
lattice, on the first row under "Any".  Right?


Yes, if they are disjoined structurally. Otherwise intersection roles
appear as nodes under Any.



 Or does the fact that
"A does B" mean that A would be placed where "A|B" is, and "A|C" would
end up in the same node as "A|B|C"?


To get at the node labeled A|B above you either need a definition

 role A does B { has $.x }

or an outright full definition

 role A { has $.x; has $.y }

So, yes the node should be called A and A|C coincides with A|B|C.

I'm not sure if this ordering of roles can be called duck typing
because it would put roles that have the same content into the
same lattice node. The well known bark method of Dog and Tree
comes to mind. But the arrow types of the methods will be different.
One has type :(Dog --> Dog) the other :(Tree --> Tree) and a joined
node will have type :(Dog&Tree --> Dog|Tree). This might just give
enough information to resolve the issues surrounding the DogTree
class.
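That arrow-type arithmetic can be sketched in Python (types again modelled as capability sets, purely for illustration): when two same-named methods are merged into one lattice node, the argument side takes the meet (contravariant) and the return side takes the join (covariant).

```python
# Hypothetical capability sets standing in for the Dog and Tree roles.
Dog  = frozenset({"bark", "fetch"})
Tree = frozenset({"bark", "grow"})

def meet(a, b):
    # Dog&Tree: shared capabilities, the common supertype
    return a & b

def join(a, b):
    # Dog|Tree: combined capabilities, the common subtype
    return a | b

def merge_arrow(sig_a, sig_b):
    # Merge two one-argument arrow types :(A --> A) and :(B --> B):
    # arguments combine contravariantly, returns covariantly.
    (arg_a, ret_a), (arg_b, ret_b) = sig_a, sig_b
    return (meet(arg_a, arg_b), join(ret_a, ret_b))

arg, ret = merge_arrow((Dog, Dog), (Tree, Tree))
assert arg == frozenset({"bark"})                   # :(Dog&Tree --> ...)
assert ret == frozenset({"bark", "fetch", "grow"})  # :(... --> Dog|Tree)
```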



By "most specific", you'd mean "closest to the top"?


No, closer to the bottom. The join operator | of the lattice produces
subtypes with a larger interface that is more specific. It's like
the more derived class in a class hierarchy.


Regards, TSa.
--


Re: Runtime role issues

2006-10-11 Thread TSa

HaloO,

Ovid wrote:

Third, http://dev.perl.org/perl6/doc/design/syn/S12.html says:

  You can also mixin a precomposed set of roles:

$fido does Sentry | Tricks | TailChasing | Scratch;

Should that be the following?

$fido does Sentry & Tricks & TailChasing & Scratch;


If you follow my idea of a type lattice build from roles with | as join
and & as meet operator these two mean something completely different.
The first is the join or union of the roles while the second is their
meet or intersection. The first creates a subtype of the roles the
second a supertype. But the typishness of roles is debated.

Regards, TSa.
--


Re: Runtime role issues

2006-10-11 Thread TSa

HaloO,

Ovid wrote:

First, when a role is applied to a class at runtime, an instance of
that class in another scope may specifically *not* want that role.
Is there a way of restricting a role to a particular lexical scope
short of applying that role to instances instead of classes?


I think you'll need an intermediate class that you use to apply your
role. Classes are open, and that implies the possibility of merging
further roles into them. But this applies to all users of the class.
How this works when there are already instances I don't know.



Second, how can I remove roles from classes?


Is that a wise thing to do? Roles are not assigned and removed
as a regular operation. What is your use case?


Regards, TSa.
--


signature subtyping and role merging

2006-10-11 Thread TSa

HaloO,

with my idea of deriving a type lattice from all role definitions
the problem of subtyping signatures arises. Please help me to think
this through. Consider

role Foo
{
   sub blahh(Int, Int) {...}
}
role Bar
{
   sub blahh(Str) {...}
}
role Baz does Foo does Bar # Foo|Bar lub
{
  # sub blahh(Int&Str,Int?) {...}
}

The role Baz has to be the lub (least upper bound) of Foo and Bar.
That is the join of two nodes in the lattice. This means first of
all the sub blahh has to be present. And its signature has to be
in a subtype relation <: to :(Int,Int) and :(Str). Note that
Int <: Int&Str and Int|Str <: Int. The normal contravariant subtyping
rules for functions gives

         +------ :> ------+
         |                |
   :(Int&Str, Int?)  <:  :(Int, Int)
              |                 |
              +------- :> ------+

and

         +------ :> ------+
         |                |
   :(Int&Str, Int?)  <:  :(Str)

I hope you see the contravariance :)

The question mark shall indicate an optional parameter that
allows the subtype to be applicable in both call sites that
have one or two arguments.

The choice of glb for the first parameter requires the implementor
of the sub in Baz to use the supertype of Int and Str, which in turn
allows the substitution of Int and Str arguments, which are
subtypes---that is, types with a larger interface.
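This contravariant rule can be sketched as a small executable check. The type names, the subtype table, and the optional-parameter handling below are illustrative assumptions for this example only, not Perl 6 semantics:

```python
# Toy model of contravariant signature subtyping. SUBTYPES encodes the
# relations used above: Int <: Int&Str, Str <: Int&Str, Int|Str <: Int, ...
SUBTYPES = {
    ("Int", "Int&Str"), ("Str", "Int&Str"),
    ("Int|Str", "Int"), ("Int|Str", "Str"),
}

def is_subtype(a, b):
    return a == b or (a, b) in SUBTYPES

def sig_subtype(sub_params, sub_optional, sup_params):
    """True if :(sub_params) <: :(sup_params), i.e. the sub signature
    accepts every call the sup signature accepts.  sub_optional counts
    trailing optional parameters of the sub signature (the 'Int?')."""
    required = len(sub_params) - sub_optional
    if not (required <= len(sup_params) <= len(sub_params)):
        return False
    # Contravariance: each sup parameter must be a subtype of the
    # corresponding sub parameter.
    return all(is_subtype(p_sup, p_sub)
               for p_sup, p_sub in zip(sup_params, sub_params))

print(sig_subtype(["Int&Str", "Int"], 1, ["Int", "Int"]))  # True
print(sig_subtype(["Int&Str", "Int"], 1, ["Str"]))         # True
```

Both calls mirror the two diagrams above: :(Int&Str, Int?) is a subtype of :(Int, Int) and of :(Str).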

Going the other way in the type lattice the meet Foo&Bar of the
two roles Foo and Bar is needed. But here the trick with the
optional parameter doesn't work and it is impossible to reconcile
the two signatures. This could simply mean to drop sub blahh from
the interface. But is there a better way? Perhaps introducing a
multi?

Apart from the arity problem the lub Int|Str works for the first
parameter:

            +------ <: ------+
            |                |
   :(Int|Str, Int)   :>  :(Int, Int)
                 |               |
                 +------ <: -----+

          +------ <: ------+
          |                |
   :(Int|Str)    :>    :(Str)


Regards, TSa.
--


Re: class interface of roles

2006-10-12 Thread TSa
riginal implementation
of foo(), while C represents the final implementation
chosen by the class.


OK, thanks for the example.



If you were to allow a role method to directly refer to its own
implementation, you could do something like:

role A {
 method foo() { say "Ah..." }
 method bar() { $self.A::foo() }
}


Yes, this should call the role closure from within the class.


With precedence given to class methods, my original example should read

role GenPointMixin
{
   has Int $.x;
   has Int $.y;
   method class_equal ( : ::?CLASS $p ) {...}
   method equal( : ::?CLASS $p --> Bool )
   {
   return self.class_equal($p) and
 self.x == $p.x and self.y == $p.y;
   }
}

which is clumsy and relies on the fact that the class is *not*
overriding the equal method but of course provides the class_equal
method. I want that combination process to be available under the
equal name slot. Which requires the role to take precedence at least
for the equal method. The question is what syntax to use for this
feature. I propose a 'is augmented' trait on the role method.



Re: Partial Ordering

I'm not sure if this ordering of roles can be called duck typing
because it would put roles that have the same content into the
same lattice node. The well known bark method of Dog and Tree
comes to mind.


IIRC, duck typing is based on how well the set of available method
names match, its main flaw coming from the possibility of homonymous
methods.  With the Dog and Tree example, consider the possibility that
both versions of "bark" take no parameters other than the invocant,
and neither version returns anything.  Their signatures would thus be
identical (since the invocant's type is the final class' type).


Conceptually the methods take and return the invocant type. This
is why I gave them as :(Dog --> Dog) and :(Tree --> Tree) respectively.
That's what I meant with arrow types. But Void as return type is fine
as well. The question is now how these two signatures are merged
together to form the signature required from the class disambiguation.
The glb Dog&Tree to me means that a common supertype be implemented
in the class. Hmm, this at least is what contravariant arrow types
demand. But I'm as of now puzzled whether it's not better to require the
class to implement the lub Dog|Tree, based on the idea that this makes
instances of the class subtypes of both interfaces, the implementation
of which falls to the class. Well, invocant parameters are covariant.
So, yes, it should be a method on Dog|Tree, the type of instances of a
class that does Dog and Tree.



But the arrow types of the methods will be different.
One has type :(Dog --> Dog) the other :(Tree --> Tree) and a joined
node will have type :(Dog&Tree --> Dog|Tree). This might just give
enough information to resolve the issues surrounding the DogTree
class.


Not sure what you mean by "the arrow types".  Are you referring to
embedding a return type in the signature?


I mean arrow types as used in formal treatments of type systems.
Functions are denoted with arrows there. This is also the reason why
we have the --> in signatures.


Regards, TSa.
--


Re: class interface of roles

2006-10-13 Thread TSa

HaloO,

Jonathan Lang wrote:

class GenSquare does GenPoint does GenEqual
{
  has Int $.side;
  method equal ( GenSquare $p --> Bool )
  {
 return $self.GenPoint::equal($p) and $self.side == $p.side;
  }
}


This is exactly what I don't want. Such an equal method needs to be
written in each and every class the role GenPoint is composed into.
This is what I call a burden instead of code reuse by means of role
composition. But I guess our points of view are hard to reconcile.
You and the current spec see the class having the final word whereas
I think that the role takes over responsibility. This does not require
a telepathic compiler. Simply giving priority to the role suffices.

Having the role in charge nicely matches the fact that the guarantee
of doing a role makes a stronger claim than being an instance of a
class. Doing the role happens with objects from different classes.
And now imagine that some classes implemented equality correctly and
some don't. With my approach the chance for this is diminished to
a correct class specific implementation that is required and used by
the role.


Regards, TSa.
--


Re: class interface of roles

2006-10-17 Thread TSa

HaloO Jonathan,

you wrote:
> Of course, you then run into a problem if the class _doesn't_ redefine
> method equal; if it doesn't, then what is GenPointMixin::equal
> calling?

This is the reason why there is a type bound on the class that should
result in a composition error when the equal method is missing.

Shouldn't the 'divert' be a trait of the method instead of a key/value
pair on the class? And what does your syntax mean? Looks like the key
is indicating the method and the value is the namespace where method
lookup starts.


>  And you also run into a problem if you want the class to
> track the position in polar coordinates instead of rectilinear ones -
> it would still represent a point conceptually, but the implementation
> of method equal (at least) would need to be overridden.

Oh, yes. Doing the right thing is difficult. Changing representation
of the point while keeping the interface can only be done in the class.
OTOH, the interface would include the rectilinear accessor methods.
And these are called in the equal method. Thus a class doing the
GenPoint role correctly needs to provide .x and .y methods even if
they aren't simple auto-generated accessors of attributes. But note
that these two are not going through the superclass interface but the
self type.

In the case of the equal method the dispatch slot contains the role's
closure which calls into the class' closure. There is no dispatch to the
role because the role is flattened out in the composition process.

It is interesting to think of another PolarPoint role that also has
an equal method that would conflict with the GenPoint one. Then the
class has to disambiguate. Which in turn requires the class to provide
an equal method that overrides both role versions. Note that I think
the conflict detection of role methods prevents the composition of the
equal method through the superclass interface. I admit that this warps
the meaning of the class' equal method from being an aspect in the
role's method to the definer of the method. This can be a source of
subtle bugs. That is the class composer can't distinguish an aspect
method from a disambiguation one unless we introduce e.g. an 'is
disambig' trait. And e.g. an 'is override' trait when the class designer
wishes to replace a role method even if there's no conflict.


> A cleaner solution would be to define a private helper method which
> the role then demands that the class override.  This is _very_ similar
> to the solution that you described as "clumsy" a few posts back, with
> the main difference being that the helper method, being private, can't
> be called outside of the class.  To me, the only clumsiness of this
> solution comes directly from the cluminess of the overall example, and
> I consider your proposed alternative to be equally clumsy - it merely
> trades one set of problems for another.

There's a lot of truth in that. But I don't consider the example as
clumsy. It is an important issue that arises whenever method
recombination is needed to deal with the guarantees that the role
as a type makes. Calculation of a type bound on the class that
has to be met in the composition is a strong tool.


>> And yes, in Perl 6 the method isn't called equal but eqv or === and
>> has a default implementation that retrieves the .WHICH of both args.
>
> What inspired that comment?

Sorry, I didn't want to intimidate you. But I wanted to prevent comments
that choosing a method equal is not the right approach and MMD should be
used instead. But I think all of what we discussed so far stays valid if
the equal method is a multi. IIRC there is just one namespace slot for
the short name. And we are discussing how this slot is filled in the
class composition process.

BTW, why have you gone off-list?


Regards, TSa.
--


Re: class interface of roles

2006-10-17 Thread TSa

HaloO,

Jonathan Lang wrote:
> TSa wrote:

Note that I think the conflict detection of role methods prevents the
composition of the equal method through the superclass interface.


Just to make sure we're speaking the same language: by "superclass",
you're referring to classes brought in via "is"; right?


No. I used the term 'superclass interface' as it appears in the
article I referred to. There the role has a type constraint interface
to the uncomposed class which is available through the super keyword.
Let's settle for the terms 'uncomposed class' and 'composed class'.
Then I meant the interface of the role to the uncomposed class.

It might be the case that you should think of the role composition
to happen on a new anonymous class that inherits everything from the
uncomposed class and therefore has a proper superclass interface.
Conflicts between roles prevent the installation of a composed method
and hence leave the task of defining it to the composed class.
I admit that the distinction between these cases can come as a surprise
to class developers. But composing more than one role is equal in
complexity to multiple inheritance. So a warning is a good thing here.
The 'is override' trait might be needed to silence the warning. Well,
and you can switch it off with a pragma or the trait on the class.

The whole point of the article is to calculate the composed class type.
This process is akin to inheritance but needs two implicit type
parameters to model it in F-bounded polymorphism. Free mixins just go
into the composed class unconstraint. Constraint mixins require certain
features from the class. This inevitably shifts the balance between
roles and classes towards roles which is OK in my eyes. In particular
since roles have taken on the meaning of type in Perl 6 and as such
make the stronger claims when used where a type is expected.



 If so, you're
correct: the presence of a valid method in any of the roles that a
class composes is sufficient to prevent the dispatch process from ever
reaching any of the superclasses.


I'm more thinking along the lines of closure composition and a flattened
view of the class. That is conceptually the class' namespace directly
maps slot names to methods. That this mapping is searching the
inheritance graph behind the scenes is an implementation detail. In the
case of roles the composed method calls the uncomposed one, that's all.
In your terms this means 'dispatch starts at the role', I guess.

On a side track I want to note: the optimizer might munge all combined
methods into an optimized version that doesn't walk the class graph. In
the GenSquare case this method reads

   return self.side == $p.side and self.x == $p.x and self.y == $p.y

and is installed in the equal slot of the composed class. I'm
interpreting roles from a type theoretical background. This is viable
because all information needed is available at compile time whereas
classes are open.

Note that the combination process as I see it nicely explains runtime
role composition in the same way as compile time role composition. There
is no special rule that at runtime the role takes precedence or forces
its definition of methods. Also the difference between successive does
and a precomposed role with | falls out naturally. In the latter case
conflicts let role methods drop out and leave the class method as
"disambiguation". But the compiler can easily issue a warning for that.


Regards, TSa.
--


Re: class interface of roles

2006-10-17 Thread TSa

HaloO,

Jonathan Lang wrote:

Shouldn't the 'divert' be a trait of the method instead of a key/value
pair on the class?


I thought about doing it that way; but then the class wouldn't know to
look for it when composing the role.


I figure you see the class in a very active role when composing roles.
I see them more as a passive entity that the class composer is dealing
with. My scope is on producing a predictable result from the typing
point of view. Dispatch comes later.

Regards, TSa.
--


Re: Edge case: incongruent roles

2006-10-17 Thread TSa

HaloO,

Larry Wall wrote:

On Fri, Oct 13, 2006 at 04:56:05PM -0700, Jonathan Lang wrote:
: Trey Harris wrote:
: >All three objects happen to be Baz's, yes.  But the client code doesn't
: >see them that way; the first snippet wants a Foo, the second wants a Bar.
: >They should get what they expect, or Baz can't be said to "do" either.
: 
: In principle, I agree; that's how it _should_ work.  I'm pointing out

: that that's not how things work in practice according to the current
: documentation.

The current documentation already conjectures this sort of disambiguation
at S12:996, I believe.


Help me to get that right with a little, more concrete example.

  my Num $a = 5; # dynamic type is Int
  my Num $b = 4;

  say $a/$b; # 1 or 1.25?

When we assume that Int is a subtype of Num and leave co- and
contravariance issues of container types out of the picture
and further assume the availability of dispatch targets :(Int,Int-->Int)
and :(Num,Num-->Num) in multi infix:</>, then there is a conflict
between the static type information of the container and the dynamic
type of the values. And it resolves to the static container type
unless it is typed as Any, then the dynamic type is used. Right?

I know that the return type of / could be Num in "reality" but that
spoils the example. Sorry if the above is a bad example.

Regards, TSa.
--


Re: Edge case: incongruent roles

2006-10-17 Thread TSa

HaloO,

TSa wrote:

I know that the return type of / could be Num in "reality" but that
spoils the example. Sorry if the above is a bad example.


Pinning the return type to Num is bad e.g. if you want multi targets
like :(Complex,Complex-->Complex). Should that also numerify complex
values when stored in a Num container? If yes, how?
--


Re: Edge case: incongruent roles

2006-10-18 Thread TSa

HaloO,

Jonathan Lang wrote:

If at all possible, I would expect Complex to compose Num, thus
letting a Complex be used anywhere that a Num is requested.


This will not work. The Num type is ordered; the Complex type isn't.
The operators <, <=, > and >= are not available in Complex. Though
I can imagine Num as a subtype of Complex. But I'm not sure if there
are cases where this breaks down as well---the square root function
comes to mind. With the root of a negative Num returning NaN or
Undef of Num which are technically Nums we could save the subtyping
relation.

In the end I think the two types Num and Complex both do a number
of common roles but don't have the same set of roles they do. E.g.
Comparable is available only for Nums. Basic arithmetic operations
like +,-,* and / are shared. The bad thing is that it will be 
inconvenient to use. E.g. just using Num as parameter type in a sub

that does not use comparison should allow calling with a Complex even
though the nominal type is incompatible. IOW, the type inferencer
should determine a much more differentiated type than simple Num for
the parameter. How this type is then advertised I don't know.


Regards, TSa.
--


Re: class interface of roles

2006-10-18 Thread TSa

HaloO Jonathan,

you wrote:

I think I found the core of the issue here; it has to do with the
differences between roles and mixins, with an analogous difference
between compile-time composition and runtime composition.  Details
follow.


I think we are basically through the discussion. Thank you for your
patience so far. It's interesting to note that you make a distinction
between mixins and roles. I think of roles as being mixins.



Ah.  Unfortunately, said article is part 15 of a series.  While it's
true that only a handful of them are directly referenced by the
article, that's still more than I've had time to read so far - and the
unreferenced articles still matter in that they provide the
grammatical context in which Article 15 is phrased.


The series is worth reading, indeed.



Just to clarify some terminology: in the above statement,
'ComposedClass' composes 'Role', and 'ComposedClass' inherits from
'UncomposedClass'.  Conversely, 'Role' is composed into
'ComposedClass'.


This is still not how I think of it. I'm more thinking of the
composition process as having the *same* class in a pre- and a
post-composition state. The role poses a type constraint on the
pre-state and can access this state. There's no inheritance
between two classes.

As you describe it, the current compile-time composition let's
the role define a type bound of the composition result in the
sense that instances of the class do the role.



"F-bounded polymorphism"?


See e.g.
http://www.cs.utexas.edu/~wcook/papers/FBound89/CookFBound89.pdf
http://www.cs.washington.edu/research/projects/cecil/www/Vortex-Three-Zero/doc-cecil-lang/cecil-spec-86.html

The basic idea is to define the type T of instances of a class
or role as all types that are subtypes of a type function:

  forall T <: Generator[T]

The crux is that T appears on both sides of the typing relation.
I would like to find out how it can be applied to Perl 6.
So if you find the time to read through the series we can try
together to understand the type system of Perl 6.
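For illustration, here is a minimal Python rendering of the F-bounded pattern. The class and method names (GenEqual, Point, equal) are assumptions mirroring the examples in this thread, not a translation of the article's code:

```python
from typing import Generic, TypeVar

# F-bounded quantification: T ranges over subtypes of GenEqual[T] itself,
# so an implementor's equal method takes the implementor's *own* type.
T = TypeVar("T", bound="GenEqual")

class GenEqual(Generic[T]):
    def equal(self: T, other: T) -> bool:
        raise NotImplementedError

class Point(GenEqual["Point"]):
    def __init__(self, x: int, y: int) -> None:
        self.x, self.y = x, y

    def equal(self, other: "Point") -> bool:
        return self.x == other.x and self.y == other.y

print(Point(1, 2).equal(Point(1, 2)))  # True
```

Note how Point appears on both sides of its own declaration, which is exactly the `forall T <: Generator[T]` shape.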



Free mixins just go into the composed class unconstraint.


Do you mean "unconstrained"?


Yes, sorry for the typo.


Regards, TSa.
--


Re: class interface of roles

2006-10-19 Thread TSa

HaloO,

Larry Wall wrote:

You've got it inside out.  Unordered is just "does A | B | C" or
some such, the | there really being coerced to a set construction,
not a junction.  In fact, & would work just as well.  I only used |
because it's more readable.  Autocoercion of junctions to sets is,
of course, conjectural.  You really have

does set(A,B,C)


I would like "does A & B & C" mean the intersection type of A, B and C.
That is a supertype of all three roles. In addition we might need
negation to get what Jonathan Lang envisioned for the Complex type that
does Num & !Comparable. IOW, I'm opting for a role combination syntax
by means of logical operators that operate on the intension set of the
involved roles. Could we get that?

BTW, the synopses reserve Foo[Bar] and Foo{Bar} for type theoretical
operations. The former is used for parametric types. How is the latter
used?


Regards, TSa.
--


Re: class interface of roles

2006-10-19 Thread TSa

HaloO

TSa wrote:

I would like "does A & B & C" mean the intersection type of A, B and C.
That is a supertype of all three roles. In addition we might need
negation to get what Jonathan Lang envisioned for the Complex type that
does Num & !Comparable. IOW, I'm opting for a role combination syntax
by means of logical operators that operate on the intension set of the
involved roles. Could we get that?


And while we're at it, could we also introduce the subtype operator <:
and perhaps >: as the supertype operator? This would come in handy for
expressing type constraints in does clauses.
--


Re: class interface of roles

2006-10-19 Thread TSa

HaloO,

Hmm, no one seems to read the article! There actually is another class
GenLocSquare that combines GenSquare and GenPointMixin. With that we
get a modified version of my code as follows:


role GenEqual
{
   method equal( : GenEqual $ --> Bool ) {...}
}

role GenPointMixin
{
   has Int $.x;
   has Int $.y;
   method equal( ::?CLASS GenEqual $self: ::?CLASS $p --> Bool )


This additional GenEqual type bound on the self type is all
that is needed to get the superclass interface as described
in the article.


   {
  return super.equal($p) and  # <-- handwave


return call($p) and  # normal superclass call, but I still
 # think that super.equal reads better.


 self.x == $p.x and self.y == $p.y;
   }
}

class GenSquare does GenEqual does GenPointMixin


  class GenSquare does GenEqual


{
   has Int $.side;
   method equal ( : ::?CLASS $p --> Bool )
   {
  return self.side == $p.side;
   }
}


And finally the combined class

  class GenLocSquare is GenSquare does GenPointMixin
  {}

I initially dropped it because I thought that roles can see
the pre-composed class somehow. But as Jonathan explained they
don't---at least not in compile-time class composition. And
for runtime composition you get the empty class for free!


Regards, TSa.
--


Re: set operations for roles

2006-10-20 Thread TSa

HaloO,

Jonathan Lang wrote:

   role R3 does A & B { ... }
   R3.does(A) # false: R3 can't necessarily do everything that A can.
   A.does(R3) # false: A can't necessarily do everything that R3 can.


That last one should be true. Role R3 contains the things that A and B
have in common. Hence each of them has everything that R3 has.



And because you have the ability to add methods to R3 that aren't in A
or B, you can't make use of the fact that A & B is a subset of A for
type-checking purposes.


The really funny thing is what can be written into the body of role R3.
Should it be forbidden to introduce new methods? Or should new methods
automatically enter into A and B? The latter would be more useful but
then roles become somewhat open like classes because code that did not
typecheck before the superrole creation suddenly typechecks after it.
Or if the role R3 adds a yada method that propagates down the
composition chains, classes might retroactively fail to compile!

Regards, TSa.
--


Re: set operations for roles

2006-10-20 Thread TSa

HaloO,

Jonathan Lang wrote:

In short, R3 isn't necessarily a subset of A; it's a superset of A &
B.  In a partial ordering graph, there's no reliable ordering between
R3 and A.

The standard syntax for creating roles can't reliably produce a subset
of an existing role, because it always allows you to add to it.


Yes, but I was conjecturing that the additions to A&B are pushed
down to A and B such that their intension sets remain strict supersets
of A&B.



The only problem that might crop up is the use of 'A | B' in
signatures to mean 'can match any of A, B' - that is: in signatures,
'A | B' refers to the junctive operator; while in the above proposal,
'A | B' refers to the set union operator.  Different semantics.


In fact if we decide to specify a role combination syntax then it
should be the same everywhere. That means in a signature A|B would
require a more specific type and pure A or B wouldn't be admissible.
To get the old meaning of | you have to write A&B or perhaps the
juxtaposition which currently means what A|B should mean. Alternatively
the meaning of the role combination A&B could be defined to mean the
union and A|B the intersection.


Regards, TSa.
--


Re: set operations for roles

2006-10-20 Thread TSa

HaloO,

I wrote:

In fact if we decide to specify a role combination syntax then it
should be the same everywhere. That means in a signature A|B would
require a more specific type and pure A or B wouldn't be admissible.
To get the old meaning of | you have to write A&B or perhaps the
juxtaposition which currently means what A|B should mean. Alternatively
the meaning of the role combination A&B could be defined to mean the
union and A|B the intersection.


Here is yet another idea to go with the two lattice operations:

  /\ meet   also: infimum,  intersection, glb (greatest lower bound)
  \/ join   also: supremum, union,        lub (least upper bound)

These have nice graphical mnemonics

         meet = {x}
            /\
           /  \
          /    \
    A={a,x}    B={b,x}
          \    /
           \  /
            \/
         join = {a,b,x}

and also read nicely as English words:

  role J joins A, B, C; # same as role J does A \/ B \/ C
  role M meets A, B, C; # same as role M does A /\ B /\ C
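On the intension (method-name) sets this is plain set arithmetic; a sketch using the sets from the diagram above (the single-letter method names are placeholders):

```python
# Intension (method) sets of the two roles from the diagram above.
A = {"a", "x"}
B = {"b", "x"}

meet = A & B  # {'x'}: fewer guarantees, hence a supertype of A and B
join = A | B  # {'a', 'b', 'x'}: all guarantees of both, hence a subtype

print(sorted(meet), sorted(join))  # ['x'] ['a', 'b', 'x']
```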

Comments?
--


Re: set operations for roles

2006-10-20 Thread TSa

HaloO,

I wrote:

Yes, but I was conjecturing that the additions to A&B are pushed
down to A and B such that their intension sets remain strict supersets
of A&B.


Think of the Complex example that might read

  role Complex does Num & !Comparable
  {
  method im { return 0; }
  method re { return self as Num } # a no-op for Num
  }
  class Complex does Complex
  {
  has $.re; # accessor overwrites role method
  has $.im; # accessor overwrites role method
  }

Apart from the fact that the role and the class compete for the
same name slot this looks like what you need to make Num applicable
wherever a Complex is expected:

  module A
  {
 use Complex;
 Num $a;
 say $a.im; # statically type correct, prints 0
  }
  module B
  {
 Num $b;
 say $b.im; # syntactically admissible, but produces runtime error
  }

Actually the 'self as Num' should return the self type ::?CLASS to
preserve as much type information as possible. Hmm, could we get
the keyword Self for that?


Have a nice weekend, TSa.
--


Re: set operations for roles

2006-10-23 Thread TSa
HaloO,

Larry Wall wrote:
> I now think it's a bad idea to overload | or & to do type construction,

Is it then a good idea to overload the set operations? At least the
type constructions are set theoretic on the intension set of the
roles.


> especially since the tendency is to define them backwards from the
> operational viewpoint that most Perl programmers will take.

Can you give an example how these directions collide? Is it the
fact that A(|)B produces a subtype of A and B, and that A(&)B
produces a supertype? I can imagine that 'does A&B' could be
read as doing both interfaces.


BTW, what is set complement? Is it (!)?


Regards, TSa.
-- 


Re: set operations for roles

2006-10-23 Thread TSa
HaloO,

Jonathan Lang wrote:
> OK.  My main dog in this race is the idea of defining new roles
> through the concepts of the intersection or difference between
> existing roles (even the union was thrown in there mainly for the sake
> of completion), with the consequent extension of the type system in
> the opposite direction from the usual one (toward the more general);

I strongly agree. Having a language that allows supertyping is a novelty.
But I think that union is not there just for completeness but as an
integral part when it comes to defining a type lattice, which I still
believe is the most practical approach to typing. This includes computed
types, that
is "artificial" nodes in the lattice. These intermediate types are
usually produced during type checking and automatic program reasoning.

Think e.g. of the type of an Array:

  my @array = (0,1,2); # Array of Int
  @array[3] = "three"; # Array of Int(&)Str

This Int(&)Str type might actually be Item. The flattened |@array
has type Seq[Int,Int,Int,Str] but the unflattend array should say
something that is applicable to all its contents. The array might
actually maintain this content type at runtime.
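A runtime version of such a content-type join can be sketched over Python's class hierarchy; walking the MROs is an assumed stand-in for a proper type lattice:

```python
def content_type(items):
    """Least upper bound of the element types: the first class that
    appears in every element's MRO.  Assumes a non-empty list."""
    mros = [type(x).__mro__ for x in items]
    return next(c for c in mros[0] if all(c in m for m in mros[1:]))

print(content_type([0, 1, 2]))           # <class 'int'>
print(content_type([0, 1, 2, "three"]))  # <class 'object'>
```

Assigning the string widens the maintained content type from int to the common supertype, just as the @array example widens Int to Int(&)Str.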


> And yes, this "roles as sets" paradigm would presumably mean that you
> could examine roles using '⊂', '⊃', '∈', and so on.

BTW, are the ASCII equivalents spelled (<), (>) and (in)?


Regards,
-- 


Re: set operations for roles

2006-10-23 Thread TSa
HaloO,

Jonathan Lang wrote:
> OK.  My main dog in this race is the idea of defining new roles
> through the concepts of the intersection or difference between
> existing roles

Note that you should not call these 'intersection type' because
this term is used for the union of role interfaces. That is the
typist intersects the extension sets of objects doing the roles
that are combined. IOW, set operations for roles could also be
defined the other way around. If that solves the perceptive
dissonance of a typical Perl programmer that Larry mentioned,
I don't know.


> And yes, this "roles as sets" paradigm would presumably mean that you
> could examine roles using '⊂', '⊃', '∈', and so on.  Given the
> semantic aspect of roles, I don't think that I'd go along with saying
> that 'A ⊃ B' is equivalent to 'A.does(B)' - although I _would_ agree
> that if 'A.does(B)' then 'A ⊃ B'.  Rather, I'd think of 'A ⊃ B' as
> being the means that one would use for duck-typing, if one really
> wanted to (presuming that one can mess with how perl 6 does
> type-checking).

I guess up to now it is undefined how structural and how nominal the
Perl 6 type system is. But I agree that when a role A says that it
does B that the type system should check if A ⊃ B. I believe that it
should be possible to fill a node in the type lattice with a named
role precisely to catch dispatches to this intersection, union or
another combination interface. Or you instantiate parametric roles
for the same purpose.

Note that union interfaces might need some merging of signatures as
I tried to argue elsewhere. Also, we might allow the subrole to change
signatures in accordance with the intended subtype relation.


Regards, TSa.
-- 


Re: set operations for roles

2006-10-23 Thread TSa

HaloO,

Ruud H.G. van Tol wrote:

TSa schreef:


A(|)B produces a subtype of A and B, and that A(&)B
produces a supertype


Are you sure?


Very sure ;)

In record subtyping a record is a mapping of labels to types.
In Perl 6 speak this is what a package does. One record type
is a subtype if it has a superset of the label set and the
types of the common labels are subtypes. This record is the
intension set of types. When it grows the "number" of objects
in the extension set decreases. The limiting cases are the
universal set Any that has the empty intension set and Undef
or Whatever with the universal intension set but no defined
instances.
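Record subtyping as described can be sketched with dicts mapping labels to types; the Num/Bool label sets below are illustrative assumptions, not the real Perl 6 interfaces:

```python
def is_record_subtype(sub, sup):
    """A record subtype carries a superset of the supertype's labels;
    for simplicity the shared labels must carry identical types here
    (the full rule would recurse with a subtype check on each label)."""
    return all(label in sub and sub[label] == sup[label] for label in sup)

# Illustrative label sets: Bool supports everything Num does, plus '&&'.
Num  = {"+": "code", "-": "code", "<": "code"}
Bool = {"+": "code", "-": "code", "<": "code", "&&": "code"}

print(is_record_subtype(Bool, Num))  # True: larger intension set
print(is_record_subtype(Num, Bool))  # False
```

Growing the intension set (more labels) shrinks the extension set (fewer values), which is the inversion discussed in this thread.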



I see "&" as "limiting; sub" and "|" as "enlarging;
super".
To me, "&" is connected to multiplication (and inner product, statistics,
fuzzy logic), and "|" to addition (and outer product).


As I just wrote elsewhere this is the extensional view of the
sets underlying types. The extension of Bool e.g. is {0,1} and
that of Int is {...,-2,-1,0,1,2,...} from which one argues that
Bool is a subtype of Int and that Bool(&)Int (=) Bool. On the
interface side of things Bool has to support all methods that
Int has, e.g. +, -, *, and /. Note that both types are not closed
under these operations: 1 + 1 == 2, 5/4 == 1.25. Bool adds logical
operators like && and || to the intension set.


Regards, TSa.
--


Re: [svn:perl6-synopsis] r13252 - doc/trunk/design/syn

2006-10-23 Thread TSa

HaloO,

[EMAIL PROTECTED] wrote:

Log:
"does" operator is non-associative according to S03.  Leave it that way for now.
[..]
-$fido does Sentry | Tricks | TailChasing | Scratch;
+$fido does (Sentry, Tricks, TailChasing, Scratch);


Does that apply to the trait verbs used in class definitions as well?
I find it a sane thing to write

  class C is (A,B,C) does (X,Y,Z) {...}


Regards, TSa.
--


Re: set operations for roles

2006-10-23 Thread TSa

HaloO,

Smylers wrote:

In which case that provides a handy example supporting Larry's
suggestion that this is confusing, with some people expecting it to work
exactly opposite to how it does.


So the mere fact that there are two sets involved rules out the
set operators as well?



It doesn't really matter which way is right -- merely having some people
on each side, all naturally deriving what makes sense to them -- shows
that implementing this would cause much confusion.


Better suggestions? Other than just writing one or the other into the
spec, I mean. I would opt for A(&)B producing the subtype, on the
grounds that this is usually called an intersection type, even though
the interfaces are merged.


Regards, TSa.
--


Re: signature subtyping and role merging

2006-10-23 Thread TSa

HaloO,

Jonathan Lang wrote:

Please, no attempts to merge signatures.  Instead, use multiple
dispatch


That avoids the problem before it occurs. But do you expect
every role to provide its methods as multi just in case?



Also, sub is an odd choice to use while illustrating role composition;
while subs _are_ allowed in roles AFAIK, they're generally not put
there.  Methods and submethods are by far more common.


The problem remains the same. After method lookup, the arrow type of
the non-invocant parameters has to be a contravariant supertype of
the two merged signatures. This signature then becomes a requirement
on the composed class that indirectly does both roles.
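The contravariance requirement stated above can be made concrete with a small sketch: when two role methods with differing parameter types are merged, the combined method must accept every argument either original accepted, so its parameter type must widen to a supertype of both. Parameter types are modelled here as plain sets of type names; this is an illustration of the variance rule, not of any actual Perl 6 dispatcher.

```python
# Contravariance of parameter types under signature merging: the merged
# method's parameter type is a supertype (modelled as a set union) of
# the parameter types required by each role.

role_a_param = {"Int"}     # role A's method takes Int
role_b_param = {"Str"}     # role B's method takes Str

# Widening, not narrowing: the merge must not reject arguments that
# either original signature accepted.
merged_param = role_a_param | role_b_param

def accepts(param_types, arg_type):
    """True if a parameter of this (set-modelled) type admits arg_type."""
    return arg_type in param_types

# The merged signature is usable wherever either original was required:
assert accepts(merged_param, "Int")
assert accepts(merged_param, "Str")
# ...while each original signature alone is too narrow:
assert not accepts(role_a_param, "Str")
```

Note the direction is opposite to the invocant: dispatch on the invocant is covariant (more specific wins), while the remaining parameters must go contravariant (more general wins), which is the "two step call" TSa describes earlier in the archive.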


Regards, TSa.
--


Re: set operations for roles

2006-10-23 Thread TSa

HaloO,

Jonathan Lang wrote:

If we make a point of highlighting the "set operations" perspective


You know that there are two sets involved. So which one do you mean?



and avoiding traditional type theory
terminology (which, as Larry pointed out and TSa demonstrated, is very
much inside out from how most people think), we can avoid most of the
confusion you're concerned about.


Well, the type theory terminology has it all. You just have to be
careful what you pick and how you combine the terms. "Sub" and
"super", be it in class, role, or type, connote an ordering that is
in fact there as a partial order, or preferably as a lattice. The
rest is about choosing a syntax. I for my part can live happily with
whatever flipping of (&) and (|) we settle on, as long as I know to
which set they apply.

That being said, I would think that prior art dictates (&) as meaning
subtype creation. That puts it in line with & for the all-junction
and && as a logical connective. Note that the counterintuitive notation
for pre-composed roles using | is gone. It still exists in
signatures, though.


Regards, TSa.
--

