Re: Another adverb on operator question

2008-08-07 Thread Jon Lang
Perhaps I'm missing something; but why couldn't you say '[lt]:lc $a, $b, $c'?

That is, I see the "reducing meta-operator" as a way of taking a
list-associative infix operator and treating it as a function call,
albeit one with a funny name.  As such, you should be able to do
things with a reduced operator in the same way that you could with any
function that has a signature of (*@values, *%adverb).  Am I wrong?
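
To make the "funny-named function" reading concrete, here's a minimal
sketch of reduction used as a plain list operator (whether an adverb
such as :lc can ride along on it is exactly the open question, so I've
left that part out):

  say [+]  1, 2, 3;        # 6    -- same as 1 + 2 + 3
  say [~]  'a', 'b', 'c';  # abc
  say [lt] 'a', 'b', 'c';  # True -- the chaining comparison, reduced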

-- 
Jonathan "Dataweaver" Lang


Re: [svn:perl6-synopsis] r14574 - doc/trunk/design/syn

2008-08-08 Thread Jon Lang
But are 'twas and -x valid identifiers?  IMHO, they should not be.

-- 
Jonathan "Dataweaver" Lang


Re: Differential Subscripts

2008-08-08 Thread Jon Lang
On Fri, Aug 8, 2008 at 7:41 PM, John M. Dlugosz
<[EMAIL PROTECTED]> wrote:
> How is @array[*-2] supposed to be implemented?
>
> S09
> // reported again 8-Aug-2008
>
> Is this magic known to the parser at a low level, or is it possible to
> define your own postcircumfix operators that interact with the
> interpretation of the argument?

IMHO, this can best be answered by the implementors: it depends on how
difficult it would be to enable user-defined code.  My own bias would
be toward the latter; but I'm not an implementor.

> Does this use of * apply to any user-defined postcircumfix:<[ ]> or just
> those defining the function for the Positional role, or what?  And whether
> or not it always works or is particular to the built-in Array class, we need
> to define precisely what the * calls on the object, because it may be
> overridden. For example, it is obvious that a lone * in an expression calls
> .length.  But the *{$idx} and *[$idx] work by calling some method on the
> Array object, right?  Something like @array.user_defined_index to
> discover that May maps to 5, or @array.user_defined_index[5] to produce May.
>  That is, the accessor .user_defined_index() returns an object that supports
> [] and {} just like an Array.  It may or might not be an Array populated
> with the inverse mappings. But it behaves that way.
>
> But that doesn't work for multidimensional arrays.  The meaning of *{} is
> dependent on its position in the list.  This means that it must be able to
> count the semicolons!  If the slice is composed at runtime, it needs to
> operate at runtime as well, and this implies that the called function is
> told which dimension it is operating upon!

This is also true when using a lone * in an expression: the number of
elements that it represents depends on which dimension it appears in.
As well, a similar point applies to **.

IIRC, there's supposed to be a 'shape' method that provides
introspection on the index.  Exactly what it's capable of and how it
works has not been established AFAIK.

> Is it possible to write:
>    @i = (1, 3, *-1);
>    say @data[@i];
> and get the same meaning as
>    say @data[1, 3, *-1]?

I believe so.  IIRC, evaluation of * is lazy; it gets postponed until
there's enough context to establish what its meaning is supposed to
be.
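
A sketch of the two forms side by side, assuming the *-1 simply turns
into a small closure that the subscript applies once it knows which
array it's indexing:

  my @data = <a b c d e>;
  say @data[1, 3, *-1];   # (b d e)

  my @i = 1, 3, *-1;      # the *-1 is carried around unevaluated...
  say @data[@i];          # ...and applied here: (b d e)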

-- 
Jonathan "Dataweaver" Lang


Re: Multiple Return Values - details fleshed out

2008-08-09 Thread Jon Lang
John M. Dlugosz wrote:
> I wrote  to clarify and
> extrapolate from what is written in the Synopses.

A few comments:

1. I was under the impression that identifiers couldn't end with - or '.
2. While list context won't work with named return values, "hash
context" ought to.
3. The positional parameters of a Capture object essentially act as a
list with a user-defined index, don't they?  There _is_ some crossover
between the named parameter keys and the positional parameter
user-defined index, in that you ought to be able to provide a name and
have the Capture figure out which one you're asking for.  This works
because there should be no overlap between the keys of the hash and
the user-defined indices of the list.

-- 
Jonathan "Dataweaver" Lang


Re: The False Cognate problem and what Roles are still missing

2008-08-20 Thread Jon Lang
On Wed, Aug 20, 2008 at 3:16 PM, Aristotle Pagaltzis <[EMAIL PROTECTED]> wrote:
> Hi $Larry et al,
>
> I brought this up as a question at YAPC::EU and found to my
> surprise that no one seems to have thought of it yet. This is
> the mail I said I'd write. (And apologies, Larry. :-) )
>
> Consider the classic example of roles named Dog and Tree which
> both have a `bark` method. Then there is a class that for some
> inexplicable reason, assumes both roles. Maybe it is called
> Mutant. This is standard fare so far: the class resolves the
> conflict by renaming Dog's `bark` to `yap` and all is well.
>
> But now consider that Dog has a method `on_sound_heard` that
> calls `bark`.
>
> You clearly don't want that to suddenly call Tree's `bark`.
>
> Unless, of course, you actually do.
>
> It therefore seems necessary to me to specify dispatch such that
> method calls in the Dog role invoke the original Dog role methods
> where such methods exist. There also needs to be a way for a
> class that assumes a role to explicitly declare that it wants
> to override that decision. Thus, by default, when you say that
> Mutant does both Dog and Tree, Dog's methods do not silently
> mutate their semantics. You can cause them to do so, but you
> should have to ask for that.
>
> I am, as I mentioned initially, surprised that no one seems to
> have considered this issue, because I always thought this is what
> avoiding the False Cognate problem of mixins, as chromatic likes
> to call it, ultimately implies at the deepest level: that roles
> provide scope for their innards that preserves their identity and
> integrity (unless, of course, you explicitly stick your hands in),
> kind of like the safety that hygienic macros provide.

My thoughts:

Much of the difficulty comes from the fact that Mutant _doesn't_
rename Dog::bark; it overrides it.  That is, a conflict exists between
Dog::bark and Tree::bark, so a class or role that composes both
effectively gets a bark method that automatically fails.  You then
create an explicit Mutant::bark method that overrides the conflicted
one; in its body, you call Tree::bark (or Dog::bark, or both in
sequence, or neither, or...).  As such, there's no obvious link
between Mutant::bark and Tree::bark.  Likewise, you don't rename
Dog::bark; you create a new Mutant::yap that calls Dog::bark.

One thing that might help would be a trait for methods that tells us
where a given method came from - that is, which (if any) of the
composed methods it calls.  For instance:

  role Mutant does Dog does Tree {
      method bark() was Tree::bark;
      method yap() was Dog::bark;
  }

As I envision it, "was" sets things up so that you can query, e.g.,
Mutant::yap and find out that it's intended as a replacement for
Dog::bark.  Or you could ask the Mutant role for the method that
replaces Dog::bark, and it would return Mutant::yap.

It also provides a default code block that does nothing more than to
call Dog::bark; unless you override this with your own code block, the
result is that Mutant::yap behaves exactly like Dog::bark.

By default, this is what other methods composed in from Dog do: they
ask Mutant what Dog::bark is called these days, and then call that
method.  All that's left is to decide how to tell them to ask about
Tree::bark instead, if that's what you want them to do.

-- 
Jonathan "Dataweaver" Lang


Re: adverbial form of Pairs notation question

2008-09-08 Thread Jon Lang
TSa wrote:
> Ahh, I see. Thanks for the hint. It's actually comma that builds lists.
> So we could go with () for undef and require (1,) and (,) for the single
> element and empty list respectively. But then +(1,2,3,()) == 4.

Actually, note that both infix:<,> and circumfix:<[ ]> can be used to
build lists; so [1] and [] can be used to construct single-element and
empty lists, respectively.
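
For instance (a quick sketch; numifying an Array just counts its
elements):

  say +[1];            # 1 -- a one-element list
  say +[];             # 0 -- an empty one
  say +[1, 2, 3, []];  # 4 -- a nested [] counts as a single element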

>> On the other hand, there's just that sort of double discontinuity in
>> English pluralization, where we say "2 (or more) items", then "1
>> item", but then "no items".  So perhaps it's justifiable in Perl6 as
>> well.
>
> I would also opt for () meaning empty list as a *defined* value. Pairs
> that shall receive the empty list as value could be abbreviated from
> :foo(()) to :foo(,). As long as the distinction between Array and List
> doesn't matter one can also say :foo[], of course.

Personally, I'd like to see '()' capture the concept of "nothing" in
the same way that '*' captures the concept of "whatever".  There _may_
even be justification for differentiating between this and "something
that is undefined" (which 'undef' covers).  Or not; I'm not sure of
the intricacies of this.  One possibility might be that '1, 2, undef'
results in a three-item list '[1, 2, undef]', whereas '1, 2, ()'
results in a two-item list '[1, 2]' - but that may be a can of worms
that we don't want to open.

-- 
Jonathan "Dataweaver" Lang


Re: Should $.foo attributes without "is rw" be writable from within the class

2008-09-19 Thread Jon Lang
Daniel Ruoso wrote:
> TSa wrote:
>> May I pose three more questions?
>>
>> 1. I guess that even using $!A::bar in methods of B is an
>> access violation, right? I.e. A needs to trust B for that
>> to be allowed.
>
> Yes
>
>> 2. The object has to carry $!A::bar and $!B::bar separately, right?
>
> Yes
>
>> 3. How are attribute storage locations handled in multiple inheritance?
>> Are all base classes virtual and hence their slots appear only once
>> in the object's storage?
>
> In SMOP, it is handled based on the package of the Class, the private
> storage inside the object is something like
>
>   $obj.^!private_storage<$!bar>
>
> and
>
>   $ojb.^!private_storage<$!bar>

Note that this ought only be true of class inheritance; with role
composition, there should only be one $!bar in the class, no matter
how many roles define it.

-- 
Jonathan "Dataweaver" Lang


Re: Should $.foo attributes without "is rw" be writable from within the class

2008-09-19 Thread Jon Lang
Daniel Ruoso wrote:
> Jon Lang wrote:
>> Note that this ought only be true of class inheritance; with role
>> composition, there should only be one $!bar in the class, no matter
>> how many roles define it.
>
> er... what does that mean exactly?

Unless something has drastically changed since I last checked the
docs, roles tend to be even more ethereal than classes are.  You can
think of a class as being the engine that runs objects; a role, OTOH,
should be thought of as a blueprint that is used to construct a class.
 Taking your example:

>   role B {
>   has $!a;
>   }
>
>   role C {
>   has $!a;
>   }
>
>   class A does B, C {
>   method foo() {
>  say $!a;
>   }
>   }
>
> I think in this case $!B::a and $!C::a won't ever be visible, while the
> reference to $!a in class A will be a compile time error.

:snip:

> Or does that mean that
>
>class A does B, C {...}
>
> actually makes the declarations in B and C as if it were declared in the
> class A?

Correct.  Declarations in roles are _always_ treated as if they were
declared in the class into which they're composed.  And since only
classes are used to instantiate objects, the only time that a role
actually gets used is when it is composed into a class.
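
A small sketch of what that flattening means in practice (Greeter and
Host are just made-up names):

  role Greeter {
      has $.greeting = 'Hello';
      method greet() { say $.greeting }
  }

  class Host does Greeter { }          # Greeter's declarations land in Host

  Host.new.greet;                      # Hello
  Host.new(greeting => 'Hi').greet;    # Hi -- the attribute really is Host's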

-- 
Jonathan "Dataweaver" Lang


Re: XPath grammars (Was: Re: globs and trees in Perl6)

2008-10-02 Thread Jon Lang
For tree-oriented pattern matching syntax, I'd recommend for
inspiration the RELAX NG Compact Syntax, rather than XPath.
Technically, RELAX NG is an XML schema validation language; but the
basic principle that it uses is to describe a tree-oriented pattern,
and to consider the document to be valid if it matches the pattern.

XPath, by contrast, isn't so much about pattern matching as it is
about creating a tree-oriented addressing scheme.

Also note that S05 includes an option near the end about matching
elements of a list rather than characters of a string; IMHO, a robust
"structured data"-oriented pattern-matching technology for perl6 ought
to use that as a starting point.

-- 
Jonathan "Dataweaver" Lang


Re: globs and rules and trees, oh my! (was: Re: XPath grammars (Was: Re: globs and trees in Perl6))

2008-10-03 Thread Jon Lang
Timothy S. Nelson wrote:
>>  note to treematching folks: it is envisaged that signatures in
>> a rule will match nodes in a tree
>>
>>My question is, how is this expected to work?  Can someone give an
>> example?
>
>I'm assuming that this relates to Jon Lang's comment about using
> rules to match non-strings.

Pretty much - although there are some patterns that one might want to
use that can't adequately be expressed in this way - at least, not
without relaxing some of the constraints on signature definition.
Some examples:

A signature always anchors its "positional parameters" pattern to the
first and last positional parameters (analogous to having implicit '^'
and '$' markup at the start and end of a textual pattern), and does
not provide any sort of "zero or more"/"one or more" qualifiers, other
than a single tail-end "slurpy list" option.  Its "zero or one"
qualifier is likewise constrained in that once you use an optional
positional, you're limited to optionals and slurpies from that point
on.  This makes it difficult to set up a pattern that matches, e.g.,
"any instance within the list of a string followed immediately by a
number".

The other issue that signatures-as-patterns doesn't handle very well
is that of capturing and returning matches.  I suppose that this could
be handled, to a limited extent, by breaking the signature up into
several signatures joined together by <,>, and then indicating which
"sub-signatures" are to be returned; but that doesn't work too well
once hierarchical arrangements are introduced.

Perhaps an approach more compatible with normal rules syntax might be
to introduce a series of xml-like tags:

<[> ... <]> lets you denote a nested list of patterns - analogous to
what [ ... ] does outside of rules.  Within its reach, '^' and '$'
refer to "just before the first element" and "just after the last
element", respectively.  Otherwise, this works just like the "list of
objects and/or strings" patterns currently described in S05.

<{> ... <}> does likewise with a nested hash of values, with standard
pair notation being used within in order to link key patterns to value
patterns.  Since hashes are not ordered, '^' and '$' would be
meaningless within this context.  Heck, order in general is
meaningless within this context.

 replaces  as the object-based equivalent of '.' ('elem'
is too list-oriented of a term).  I'd recommend doing this even if you
don't take either of the suggestions above.

You might even do a <[[> ... <]]> pairing to denote a list that is
nested perhaps more than one layer down.  Or perhaps that could be
handled by using '<[>+' or the like.

> But how would it be if I wanted to search a tree for all nodes
> whose "readonly" attribute were true, and return an array of
> those nodes?

This can already be done, for the most part:

/ (<.does(ro)>) /

Mind you, this only searches a list; to make it search a tree, you'd
need a drill-down subrule such as I outline above:

/ <[>* (<.does(ro)>) <]>* /

-- 
Jonathan "Dataweaver" Lang


Re: [svn:perl6-synopsis] r14586 - doc/trunk/design/syn

2008-10-05 Thread Jon Lang
<[EMAIL PROTECTED]> wrote:
> Log:
> Add missing series operator, mostly for readability.

Is there a way for the continuing function to access its index as well
as, or instead of, the values of one or more preceding terms?  And/or
to access elements by counting forward from the start rather than
backward from the end?

There is a mathematical technique whereby any series that takes the
form of "F(n) = A*F(n-1) + B*F(n-2) + C*F(n-3)" can be reformulated as
a function of n, A, B, C, F(0), F(1), and F(2).  (And it is not
limited to three terms; it can be as few as one or as many as n-1 -
although it has to be the same number for every calculated term in the
series.)  For the Fibonacci series, it's something like:

F(n) = (pow((1 + sqrt(5))/2, n) - pow((1 - sqrt(5))/2, n))/sqrt(5)

...or something to that effect.  It would be nice if the programmer
were given the tools to do this sort of thing explicitly instead of
having to rely on the optimizer to know how to do this implicitly.
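
Written out as an ordinary sub, that closed form (Binet's formula)
would be something like the following - the rounding just soaks up
floating-point noise:

  sub binet(Int $n) {
      my $phi = (1 + sqrt 5) / 2;
      my $psi = (1 - sqrt 5) / 2;
      (($phi ** $n - $psi ** $n) / sqrt 5).round;
  }

  say binet(10);   # 55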

-- 
Jonathan "Dataweaver" Lang


Re: [svn:perl6-synopsis] r14586 - doc/trunk/design/syn

2008-10-06 Thread Jon Lang
Larry Wall wrote:
> On Sun, Oct 05, 2008 at 08:19:42PM -0700, Jon Lang wrote:
> : <[EMAIL PROTECTED]> wrote:
> : > Log:
> : > Add missing series operator, mostly for readability.
> :
> : Is there a way for the continuing function to access its index as well
> : as, or instead of, the values of one or more preceding terms?  And/or
> : to access elements by counting forward from the start rather than
> : backward from the end?
>
> That's what the other message was about.  @_ represents the entire list
> generated so far, so you can look at its length or index it from the
> beginning.  Not guaranteed to be as efficient though.

If I understand you correctly, an "All even numbers" list could be written as:

  my @even = () ... { 2 * +@_ }

And the Fibonacci series could be written as:

  my @fib = () ... { (pow((1 + sqrt(5))/2, +@_)
                      - pow((1 - sqrt(5))/2, +@_)) / sqrt(5) }

Mind you, these are bulkier than the versions described in the patch.
And as presented, they don't have any advantage to offset their
bulkiness, because you still have to determine every intervening
element in sequential order.  If I could somehow replace '+@_' in the
above code with an integer that identifies the element that's being
asked for, it would be possible to skip over the unnecessary elements,
leaving them undefined until they're directly requested.  So:

  say @fib[4];

would be able to calculate the fifth fibonacci number without first
calculating the prior four.
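
(For reference, the compact seeded forms from the patch that I'm
comparing against look roughly like this, if I'm reading it right:

  my @odds = 1, 3, 5 ... *;        # step deduced from the seed terms
  my @fib  = 1, 1, * + * ... *;    # each term built from the two before it

Both of those still walk the series strictly from the front.)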

It's possible that the '...' series operator might not be the right
way to provide random access to elements.  Perhaps there should be two
series operators, one for sequential access (i.e., 'infix:<...>') and
one for random access (e.g., 'infix:<...[]>').  This might clean
things up a lot: the sequential access series operator would feed the
last several elements into the generator:

  0, 1 ... -> $a, $b { $a + $b }

while the random access series operator would feed the requested index
into the generator:

  () ...[] -> $n { (pow((1 + sqrt(5))/2, $n)
                    - pow((1 - sqrt(5))/2, $n)) / sqrt(5) }

I'd suggest that both feed the existing array into @_.
__
> : It would be nice if the programmer
> : were given the tools to do this sort of thing explicitly instead of
> : having to rely on the optimizer to know how to do this implicitly.
>
> Um, I don't understand what you're asking for.  Explicit solutions
> are always available...

This was a reaction to something I (mis)read in the patch, concerning
what to do when the series operator is followed by a 'whatever'.
Please ignore.
__
On an additional note: the above patch introduces some ambiguity into
the documentation.  Specifically, compare the following three lines:

X  List infix     Z minmax X X~X X*X XeqvX ...
R  List prefix    : print push say die map substr ... [+] [*] any $ @

N  Terminator     ; <==, ==>, <<==, ==>>, {...}, unless, extra ), ], }

On the first line, '...' is the name of an operator; on the second and
third lines, '...' is documentation intended to mean "...and so on"
and "yadda-yadda", respectively.  However, it is not immediately
apparent that this is so: a casual reader will be inclined to read the
first line as "...and so on" rather than 'infix:<...>', and will not
realize his error until he gets down to the point where the series
operator is defined.
__
Another question: what would the following do?

  0 ... { $_ + 2 } ... &infix:<+> ... *

If I'm reading it right, this would be the same as:

  infix:<...> (0; { $_ + 2 }; &infix:<+>; *)

...but beyond that, I'm lost.
__
Jonathan "Dataweaver" Lang


Re: [svn:perl6-synopsis] r14598 - doc/trunk/design/syn

2008-10-17 Thread Jon Lang
"also" seems to be the junctive equivalent of "andthen".  Should there
be a junctive equivalent of "orelse"?

-- 
Jonathan "Dataweaver" Lang


Re: Are eqv and === junction aware?

2008-11-13 Thread Jon Lang
Larry Wall wrote:
> eqv and === autothread just like any other comparisons.  If you really
> want to compare the contents of two junctions, you have to use the
> results of some magical .eigenmumble method to return the contents
> as a non-junction.  Possibly stringification will be sufficient, if
> it canonicalizes the output order.

Perhaps there should be a way of disabling the autothreading?
Something similar to the way that Lisp can take a block of code and
tag it as "do not execute at this time".  The idea is that there may
be some cases where one might want to look at a Junction as an object
in and of itself, rather than as a superposition of other objects; and
simply extracting its contents into a list or set won't always do,
since the details you want to look at might be Junction-specific.  As
long as Junctions autothread by default, I don't see them losing any
power.
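
(For context, a minimal sketch of the autothreading in question:

  my $answer = (1|2) == 2;
  say $answer;       # any(False, True) -- the comparison threads
  say so $answer;    # True             -- collapsed to a plain Bool

It's that first, uncollapsed view of the Junction that I'd sometimes
like to poke at directly.)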

-- 
Jonathan "Dataweaver" Lang


Re: [perl #60732] Hash indexes shouldn't work on array refs

2008-11-24 Thread Jon Lang
On Mon, Nov 24, 2008 at 7:19 AM, Rafael Garcia-Suarez
<[EMAIL PROTECTED]> wrote:
> Moritz Lenz wrote in perl.perl6.compiler :
>> jerry gay wrote:
>>> On Fri, Nov 21, 2008 at 10:43, via RT Moritz Lenz
>>> <[EMAIL PROTECTED]> wrote:
 # New Ticket Created by  Moritz Lenz
 # Please include the string:  [perl #60732]
 # in the subject line of all future correspondence about this issue.
 # http://rt.perl.org/rt3/Ticket/Display.html?id=60732 >


 From #perl6 today:

 19:33 < moritz_> rakudo: my $x = [ 42 ]; say $x<0>
 19:33 < p6eval> rakudo 32984: OUTPUT[42␤]

 I don't think that should be allowed.

>>> the real test is:
>>>
>>> (8:52:47 PM) [particle]1: rakudo: my $x = [42]; say $x<0_but_true>;
>>> (8:52:49 PM) p6eval: rakudo 32998: OUTPUT[42␤]
>>> (8:53:38 PM) [particle]1: rakudo: my $x = [42]; say $x;
>>> (8:53:40 PM) p6eval: rakudo 32998: OUTPUT[42␤]
>>> (8:53:50 PM) [particle]1: rakudo: my $x = [42]; say $x;
>>> (8:53:52 PM) p6eval: rakudo 32998: OUTPUT[42␤]
>>> (8:54:37 PM) [particle]1: rakudo: my $x = ['a', 42]; say $x;
>>> (8:54:39 PM) p6eval: rakudo 32998: OUTPUT[a␤]
>>> (8:58:41 PM) [particle]1: rakudo: my $x = ['a', 42]; say $x<1.4>;
>>> (8:58:44 PM) p6eval: rakudo 32998: OUTPUT[42␤]
>>> (8:58:48 PM) [particle]1: rakudo: my $x = ['a', 42]; say $x<0.4>;
>>> (8:58:50 PM) p6eval: rakudo 32998: OUTPUT[a␤]
>>>
>>> so, the index is coerced to an integer. is that really wrong?
>>> ~jerry
>>
>> IMHO yes, because Perl explicitly distinguishes between arrays and
>> hashes (and it's one of the things we never regretted, I think ;-). Any
>> intermixing between the two would only lead to confusion, especially if
>> somebody writes a class whose objects are both hashe and array.
>
> Yes, that leads to confusion. (And confusion leads to anger, and so on)
> Which is why we removed pseudo-hashes from Perl 5.10.

Perl 6 explicitly _does_ allow hash-reference syntax, for the specific
purpose of customized indices.  That said, the sample code would not
work, since you must explicitly define your custom index before you
use it.  Examples from the Synopses include a custom index that's
based on the months of the year, so that you can say, e.g., @x<Jan>
instead of @x[0].

-- 
Jonathan "Dataweaver" Lang


Re: Support for ensuring invariants from one loop iteration to the next?

2008-12-03 Thread Jon Lang
Aristotle Pagaltzis wrote:
> * Bruce Gray <[EMAIL PROTECTED]> [2008-12-03 18:20]:
>> In Perl 5 or Perl 6, why not move the grep() into the while()?
>
> Because it's only a figurative example and you're supposed to
> consider the general problem, not nitpick the specific example…

But how is that not a general solution?  You wanted something where
you only have to set the test conditions in one place; what's wrong
with that one place being inside the while()?
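
Concretely, the "one place" I have in mind is just this - a sketch that
reuses the thread's hypothetical @stuff array and its .valid and
.do_something methods:

  my $i = 0;
  while ( @stuff = grep { .valid }, @stuff ) {
      .do_something(++$i) for @stuff;
  }

The test runs once per iteration, before the body, and the loop ends as
soon as the grep leaves nothing behind.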

-- 
Jonathan "Dataweaver" Lang


Re: Support for ensuring invariants from one loop iteration to the next?

2008-12-03 Thread Jon Lang
Mark J. Reed wrote:
> Mark J. Reed wrote:
>> loop
>> {
>>   doSomething();
>>   last unless someCondition();
>>   doSomethingElse();
>> }
>
> That is, of course, merely the while(1) version from Aristotle's
> original message rewritten with Perl 6's loop keyword.  As I said, I'm
> OK with that, personally, but it's clearly not what he's looking for.

But maybe it is.  I suspect that the difficulty with the while(1)
version was the kludgey syntax; the loop syntax that you describe does
the same thing (i.e., putting the test in the middle of the loop block
instead of at the start or end of it), but in a much more elegant
manner.  The only thing that it doesn't do that a more traditional
loop construct manages is to make the loop condition stand out
visually.

-- 
Jonathan "Dataweaver" Lang


Re: how to write literals of some Perl 6 types?

2008-12-03 Thread Jon Lang
Darren Duncan wrote:
> Now, with some basic types, I know how to do it, examples:
>
>  Bool # Bool::True

Please forgive my ignorance; but are there any cases where
'Bool::True' can be spelled more concisely as 'True'?  Otherwise, this
approach seems awfully cluttered.

-- 
Jonathan "Dataweaver" Lang


Re: Roles and IO?

2008-12-11 Thread Jon Lang
Leon Timmermans wrote:
> What I propose is using role composition for *everything*. Most
> importantly that includes the roles Readable and Writable, but also
> things like Seekable, Mapable, Pollable, Statable, Ownable, Buffered
> (does Readable), Socket, Acceptable (does Pollable), and more.
>
> That may however make some interfaces is a bit wordy. I think that can
> be conveyed using a subset like this (though that may be abusing the
> feature).
>
> subset File of Mapable & Pollable & Statable & Ownable;

subset is the wrong approach: a subset is about taking an existing
role and restricting the range of objects that it will match.  What
you're really asking for are composite roles:

  role File does Mappable does Pollable does Statable does Ownable {}

One of the things about roles is that once you have composed a bunch
of them into another role, they're considered to be composed into
whatever that role is composed into.  So "does File" would be
equivalent to "does Mappable does Pollable does Statable does Ownable"
(barring potential conflicts between Mappable, Pollable, Statable, and
Ownable which File would presumably resolve).
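
A tiny sketch of that compose-through behavior (Handle is just a
stand-in name):

  role Mappable {}
  role Pollable {}
  role Statable {}
  role Ownable  {}
  role File does Mappable does Pollable does Statable does Ownable {}

  class Handle does File {}

  say Handle ~~ File;      # True
  say Handle ~~ Pollable;  # True -- doing File pulls in the rest as well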

-- 
Jonathan "Dataweaver" Lang


Re: List.end - last item and last index mixing

2008-12-12 Thread Jon Lang
Moritz Lenz wrote:
> From S29:
>
> : =item end
> :
> :  our Any method end (@array: ) is export
> :
> : Returns the final subscript of the first dimension; for a one-dimensional
> : array this simply the index of the final element.  For fixed dimensions
> : this is the declared maximum subscript.  For non-fixed dimensions
> (undeclared
> : or explicitly declared with C<*>), the actual last element is used.
>
>
> The last sentence  seems to suggest that not the index of the last
> element is returned, but the element itself. (Which I think is pretty weird)
>
>
> And S02:
>
> : The C<$#foo> notation is dead.  Use C<@foo.end> or C<@foo[*-1]> instead.
> : (Or C<@foo.shape[$dimension]> for multidimensional arrays.)
>
> That doesn't clean it up totally either.
>
> So what should .end return? 2 or 'c'?
> (Currently pugs and elf return 2, rakudo 'c').

@foo[*-1] would return 'c'.  @foo[*-1]:k would return 2.  So the
question is whether @foo.end returns @foo[*-1] or @foo[*-1]:k.
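
A quick sketch of those two spellings (the :k adverb asks the subscript
for its key - that is, the index - rather than the value):

  my @foo = <a b c>;
  say @foo[*-1];     # c
  say @foo[*-1]:k;   # 2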

You might also allow 'end' to take an adverb the way that 'postfix:[]'
does, allowing you to explicitly choose what you want returned; but
that still doesn't answer the question of what to return by default.

-- 
Jonathan "Dataweaver" Lang


Re: Roles and IO?

2008-12-12 Thread Jon Lang
Leon Timmermans wrote:
> I assumed a new role makes a new interface. In other words, that a
> type that happens to do Pollable, Mappable, Statable and Ownable
> wouldn't automatically do File in that case. If I was wrong my abuse
> of subset wouldn't be necessary. Otherwise, maybe there should be a
> clean way to do that.

Hmm... true enough.

There was another contributor here who proposed an "is like" modifier
for type matching - I believe that he used £ for the purpose, in that
'File' would mean 'anything that does File', while '£ File' would mean
'anything that can be used in the same sort of ways as File'.  That
is, perl 6 uses nominative typing by default, while '£' would cause it
to use structural typing (i.e., "duck-typing") instead.

FWIW, I'd be inclined to have anonymous roles use duck-typing by
default - that is, "$Foo.does role {does Pollable; does Mappable; does
Statable; does Ownable}" would be roughly equivalent to "$Foo.does £
role {does Pollable; does Mappable; does Statable; does Ownable}" -
the theory being that there's no way that a role that you generate on
the fly for testing purposes will ever match any of the roles composed
into a class through nominative typing; so the only way for it to be
at all useful is if it uses structural typing.

(I say "roughly equivalent" because I can see cause for defining "£
Foo" such that it only concerns itself with whether or not the class
being tested has all of the methods that Foo has, whereas the
anonymous "role {does Foo}" would match any role that does Foo.  As
such, you could say things like:

  role {does Foo; does £ Bar; has $baz}

to test for an object that .does Foo, has all of the methods of Bar,
and has an accessor method for $baz.)

-- 
Jonathan "Dataweaver" Lang


Re: What does a Pair numify to?

2008-12-15 Thread Jon Lang
Mark Biggar wrote:
> The only use case I can think of is sorting a list of pairs;
>  should it default to sort by key or value?

But this isn't a case of numifying a Pair, or of stringifying it - or
of coercing it at all.  If you've got a list of Pairs, you use a
sorting algorithm that's designed for sorting Pairs (which probably
sorts by key first, then uses the values to break ties).  If you've
got a list that has a mixture of Pairs and non-Pairs, I think that the
sorting algorithm should complain: it's clearly a case of being asked
to compare apples and oranges.

When are you going to be asked to stringify or numify a Pair?  Actual
use-cases, please.  Personally, I can't think of any.

-- 
Jonathan "Dataweaver" Lang


Re: What does a Pair numify to?

2008-12-16 Thread Jon Lang
On Mon, Dec 15, 2008 at 10:26 PM, Larry Wall  wrote:
> On Mon, Dec 15, 2008 at 04:43:51PM -0700, David Green wrote:
>> I can't really think of a great example where you'd want to numify a
>> pair, but I would expect printing one to produce something like "a =>
>> 23" (especially since that's what a one-element hash would print,
>> right?).
>
> Nope, would print "a\t23\n" as currently specced.

The point, though, is that stringification of a pair incorporates both
the key and the value into the resulting string.  This is an option
that numification doesn't have.

As well, I'm pretty sure that "a\t23\n" doesn't numify.  I'm beginning
to think that Pairs shouldn't, either; but if they do, they should
definitely do so by numifying the value of the pair.

-- 
Jonathan "Dataweaver" Lang


Re: What does a Pair numify to?

2008-12-16 Thread Jon Lang
TSa wrote:
> I see no problem as long as say gets a pair as argument. Then it can
> print the key and value separated with a tab. More problematic are
> string concatenations of the form
>
>   say "the pair is: " ~ (foo => $bar);
>
> which need to be written so that say sees the pair
>
>   say "the pair is: ", (foo => $bar);
>
> and not a string that only contains the value of the pair. I'm not
> sure if the parens are necessary to pass the pair to say as argument
> to be printed instead of a named argument that influences how the
> printing is done.

That's a good point.  Is there an easy way to distinguish between
passing a pair into a positional parameter vs. passing a value into a
named parameter?  My gut instinct would be to draw a distinction
between the different forms that Pair syntax can take, with "foo =>
$bar" being treated as an instance of the former and ":foo($bar)"
being treated as an instance of the latter.  That is:

  say 'the pair is: ', foo => $bar;

would be equivalent to:

  say 'the pair is: ', "foo\t$bar\n";

while:

  say "the pair is: ", :foo($bar);

would pass $bar in as the value of the named parameter 'foo' (which, I
believe, would cause 'say' to squawk, as its signature doesn't allow
for a named parameter 'foo').

-- 
Jonathan "Dataweaver" Lang


Re: What does a Pair numify to?

2008-12-16 Thread Jon Lang
Moritz Lenz wrote:
> Off the top of my head, see S06 for the gory details:
>
> my $pair = a => 'b';
>
> named(a => 'b');
> named(:a);
> named(|$pair);
>
> positional((a => 'b'));
> positional((:a));
> positional($pair);

As you say: the gory details, emphasis on gory.  But if that's the way
of things, so be it.

-- 
Jonathan "Dataweaver" Lang


Re: Support for ensuring invariants from one loop iteration to the next?

2008-12-16 Thread Jon Lang
How do you compute '*'?  That is, how do you know how many more
iterations you have to go before you're done?

Should you really be handling this sort of thing through an "iteration
count" mechanism?  How do you keep track of which iteration you're on?
 Is it another detail that needs to be handled behind the scenes, or
is the index of the current iteration available to the programmer?
(Remember, we're dealing with 'while' and 'loop' as well as 'for'.)

-- 
Jonathan "Dataweaver" Lang


Re: Support for ensuring invariants from one loop iteration to the next?

2008-12-18 Thread Jon Lang
Aristotle Pagaltzis wrote:
> And it says exactly what it's supposed to say in the absolutely
> most straightforward manner possible. The order of execution is
> crystal clear, the intent behind the loop completely explicit.

If it works for you, great!  Personally, it doesn't strike me as being
as straightforward as putting a "last unless" clause into the middle
of an otherwise-infinite loop; but then, that's why Perl (both 5 and
6) works on the principle of TIMTOWTDI: you do it your way, and I'll
do it mine.

-- 
Jonathan "Dataweaver" Lang


Re: Support for ensuring invariants from one loop iteration to the next?

2008-12-19 Thread Jon Lang
Like I said: if the goto approach works for you, more power to you.  Me, I find:

   loop {
   @stuff = grep { $_->valid } @stuff;
   TEST: last unless @stuff;

   $_->do_something( ++$i ) for @stuff;
   }

to be at least as straightforward as:

   goto INVARIANT;

   while ( @stuff ) {
   $_->do_something( ++$i ) for @stuff;

   INVARIANT:
   @stuff = grep { $_->valid } @stuff;
   }

It strikes me as being more concise; and the use of the (superfluous)
label makes the position and significance of the "last unless..." line
stand out.

I'm not telling you that you're wrong; TIMTOWTDI.  I'm telling you
that it's a matter of taste.

On Fri, Dec 19, 2008 at 5:52 AM, Aristotle Pagaltzis  wrote:
> * Jon Lang  [2008-12-19 03:50]:
>> Personally, it doesn't strike me as being as straightforward
>> as putting a "last unless" clause into the middle of an
>> otherwise-infinite loop
>
> You have to keep more state in your head to read
>
>while(1) {
># ...
>last if $foo;
>}
>
> than to read
>
>while($foo) {
># ...
>}
>
> The goto in the code I gave happens just once and doesn't modify
> the loop semantics. Basically, any one point in the code I gave
> can be read in isolation, and is "locally complete" (I don't know
> how to say this better), whereas in the infinite loop the overall
> effect of certain points is depenent on other points.
>
> In Schwern's terms, goto'ing into the middle of a terminating
> loop is more skimmable than last'ing out of the middle of an
> infinite loop.
>
> Regards,
> --
> Aristotle Pagaltzis // <http://plasmasturm.org/>
>



-- 
Jonathan "Dataweaver" Lang


Re: returning one or several values from a routine

2009-01-05 Thread Jon Lang
Daniel Ruoso wrote:
>
> Hi,
>
> As smop and mildew now support ControlExceptionReturn (see
> v6/mildew/t/return_function.t), an important question raised:
>
>  sub plural { return 1,2 }
>  sub singular { return 1 }
>  my @a = plural();
>  my $b = plural();
>  my @c = singular();
>  my $d = singular();
>
> What should @a, $b, @c and $d contain?
>
> Note that the spec says explicitly that a Capture should be returned,
> delaying the context at which the value will be used, this allows
>
>  sub named { return :x<1> }
>  my $x := |(named);
>
> So, this also means that assigning
>
>  my @a = plural();
>  my @c = singular();
>
> forces list context in the capture, which should return all positional
> parameters, as expected. But
>
>  my $b = plural();
>  my $d = singular();
>
> would force item context in the capture, and here is the problem, as a
> capture in item context was supposed to return the invocant.

If item context is supposed to return the invocant, then it would seem
to me that returning a single value from a sub would put that value
into the capture object's invocant.  This would mean that the problem
crops up under 'my @c = singular()' instead of 'my $b = plural()'.

The idea in the spec is that the capture object can hold an item, a
distinct list, and a distinct hash all at once.  The problem that
we're encountering here is that there are times when the difference
between an item and a one-item list is fuzzy.  We _could_ kludge it by
saying that when a sub returns an item $x, it gets returned as a
capture object ($invocant := $x: $param1 := $x) or some such; but this
_is_ a kludge, which has the potential for unexpected and unsightly
developments later on.

Another option would be to change the way that applying item context
to a capture object works in general, to allow for the possibility
that a single-item list was actually intended to be a single item: if
there's no invocant, but there is exactly one positional parameter,
return the positional parameter instead:

  $a = |("title": 1)
  $b = |("title":)
  $c = |(1)

  $x = item $a; # $x == "title"
  $x = item $b; # $x == "title"
  $x = item $c; # $x == 1

  $x = list $a; # $x == [1]
  $x = list $b; # $x == []
  $x = list $c; # $x == [1]

With this approach, return values would return values as positional
parameters unless a conscious effort was made to do otherwise.

But let's say that you really wanted to get the invocant of a capture
object.  You can still do so:

  |($x:) = $a; # $x == "title"
  |($x:) = $b; # $x == "title"
  |($x:) = $c; # $x == undef

Likewise, you could specify that you want the first positional
parameter of the capture object by saying:

  |($x) = $a; # $x == 1
  |($x) = $b; # $x == undef
  |($x) = $c; # $x == 1

This isn't as clean as a straight mapping of invocant to item,
positional to list, and named to hash; but I think that it's got
better dwimmery.

--
Jonathan "Dataweaver" Lang


Re: returning one or several values from a routine

2009-01-06 Thread Jon Lang
Daniel Ruoso wrote:
> I've just realized we were missing some spots, so remaking the list of
> possibilities
>
>  my $a = sub s1 { return a => 1 }
>  my $b = sub s2 { return a => 1, b => 2 }
>  my $c = sub s3 { return 1, 2, 3, a => 1, b => 2 }
>  my $d = sub s4 { return 1 }
>  my $e = sub s5 { return 1, 2, 3 }
>  my $f = sub s6 { return 1: #< it doesnt matter > }
>  my $g = sub s7 { return }
>
> But while writing this email, I've realized that a Capture object is
> supposed to implement both .[] and .{}, so maybe we can just simplify...

Bear in mind that some list objects can use .{} (for customized
indices) as well as .[] (for standard indices).  As such, $e ought to
get a List rather than a Capture.  And if you're going to go that far,
you might as well go the one extra step and say that $b gets a Hash
rather than a Capture.

But I agree about $a, $c, $d, $f, and $g.

>  $g is an undefined Object
>  $f is 1
>  $d is 1
>  $a is a Pair
>  everything else is the Capture itself

Of course, that's only a third of the problem.  What should people
expect with each of these:

  my @a = sub s1 { return a => 1 }
  my @b = sub s2 { return a => 1, b => 2 }
  my @c = sub s3 { return 1, 2, 3, a => 1, b => 2 }
  my @d = sub s4 { return 1 }
  my @e = sub s5 { return 1, 2, 3 }
  my @f = sub s6 { return 1: }
  my @g = sub s7 { return }

  my %a = sub s1 { return a => 1 }
  my %b = sub s2 { return a => 1, b => 2 }
  my %c = sub s3 { return 'a', 1, 'b', 2, a => 1, b => 2 }
  my %d = sub s4 { return 1 }
  my %e = sub s5 { return 'a', 1, 'b', 2 }
  my %f = sub s6 { return 1: }
  my %g = sub s7 { return }

Should @a == (), or should @a == ( a => 1 )?  Or maybe even @a == ( 'a', 1 )?
Likewise with @b and @f.

Should %e == {} or { a => 1, b => 2 }?

-- 
Jonathan "Dataweaver" Lang


Re: returning one or several values from a routine

2009-01-06 Thread Jon Lang
Daniel Ruoso wrote:
> Hmm... I think that takes the discussion to another level, and the
> question is:
>
>  "what does a capture returns when coerced to a context it doesn't
> provide a value for?"
>
> The easy answer would be undef, empty array and empty hash, but that
> doesn't DWIM at all.
>
> The hard answer is DWIM, and that can be:
>
>  1) in item context, without an invocant
>   a) if only one positional argument, return it
>   b) if only one named argument, return it as a pair
>   c) if several positional args, but no named args,
>  return an array
>   d) if several named args, but no positional args,
>  return a hash
>   e) if no args at all, return undefined Object
>   f) return itself otherwise
>  2) in list context, without positional arguments
>   a) if one or more named arguments,
>  return a list of pairs
>   b) return an empty list otherwise
>  3) in hash context, without named arguments
>   a) if there are positional arguments,
>  return a hash taking key,value.
>  if an odd number of positional arguments,
>  last key has an undef Object as the
>  value and issue a warning.
>   b) return an empty hash otherwise

Elaborate further to account for the possibility of an invocant.  For
example, if you have a capture object with just an invocant and you're
in list context, should it return the invocant as a one-item list, or
should it return an empty list?  Should hash context of the same
capture object give you a single Pair with an undef value and a
warning, or should it give you an empty hash?

I don't mind the dwimmery here, even though that means that you can't
be certain that (e.g.) 'list $x' will return the positional parameters
in capture object $x.  If you want to be certain that you're getting
the positional parameters and nothing else, you can say '$x.[]'; if
you want to be certain that you're getting the named parameters, you
can say '$x.{}'.  The only thing lost is the ability to ensure that
you're getting the invocant; and it wouldn't be too hard to define,
say, a postfix:<_> operator for the Capture object that does so:

  item($x) # Dwimmey use of item context.
  list($x) # Dwimmey use of list context.
  hash($x) # Dwimmey use of hash context.
  $x._ # the Capture object's invocant, as an item.
  $x.[] # the Capture object's positional parameters, as a list.
  $x.{} # the Capture object's named parameters, as a hash.

-- 
Jonathan "Dataweaver" Lang


Re: returning one or several values from a routine

2009-01-06 Thread Jon Lang
TSa wrote:
> Jon Lang wrote:
>>   item($x) # Dwimmey use of item context.
>
> IIRC this is the same as $$x, right? Or does that
> extract the invocant slot without dwimmery?

Umm... good question.  This is a rather nasty paradox: on the one
hand, we want to be able to stack $, @, and % with capture objects in
analogy to Perl 5's references, which would indicate that they should
tie directly to the invocant, positional parameters, and named
parameters, respectively.  OTOH, the intuitive meaning for these
symbols would seem to be "item context", "list context", and "hash
context", respectively, which would argue for the dwimmery.

The question is which of these two sets of semantics should be
emphasized; once that's answered, we need to be sure to provide an
alternative syntax that gives us the other set.  IOW, which one of
these is "make common things easy", and which one is "make uncommon
things possible"?

>>   $x._ # the Capture object's invocant, as an item.
>
> How about $x.() here? That looks symmetric to the other
> postfix operators and should be regarded as a method
> dispatched on the invocant or some such.

Symmetric, but ugly.  I suggested the underscore because '.foo' is the
same as '$_.foo'; so you can think of the invocant of a capture object
as being roughly analogous to a topic.

-- 
Jonathan "Dataweaver" Lang


Re: returning one or several values from a routine

2009-01-06 Thread Jon Lang
Dave Whipp wrote:
> Daniel Ruoso wrote:
>> Hmm... I think that takes the discussion to another level, and the
>> question is:
>>
>>  "what does a capture returns when coerced to a context it doesn't
>> provide a value for?"
>
> I'd like to take one step further, and ask what it is that introduced
> capture semantics in the first place. And I suggest that the answer should
> be "the use of a signature"
>
> I'd also suggest that we get rid of the use of backslash as a
> capture-creation operator (the signature of Capture::new can do that) and
> instead re-task it as a "signature" creation operator.

I believe that we already have a signature creation operator, namely
":( @paramlist )".

Note also that retasking '\' destroys the analogy that currently
exists between perl 6 captures and perl 5 references.

> If we do that, then I think we can reduce the discussion of the semantics of
> multi-returns to the semantics of assignments:
>
> If the sub/method defines a return-signature then that is used (with
> standard binding semantics), otherwise the result is semantically a flat
> list.
>
> If the LHS is an assignment is a signature, then the rhs is matched to it:
>
> my  (@a, %b) = 1,2,3, b => 4; ## everything in @a; %b empty
> my \(@a, %b) = 1,2,3, b => 4; ## @a = 1,2,3; %b = (b=>4)

Change that second line to:

  my :(*@a, *%b) = 1, 2, 3, b => 4;

@a and %b have to be slurpy so that you don't get a signature
mismatch.  There's also the matter of how a signature with an invocant
would handle the assignment:

  my :($a: *@b, *%c) = 1, 2, 3, b => 4;

Either $a == 1 and @b == (2, 3) or $a == undef and @b == (1, 2, 3).
Which one is it?  Probably the latter.

Regardless, the magic that makes this work would be the ability to
assign a flat list of values to a signature.  Is this wise?

-- 
Jonathan "Dataweaver" Lang


Writing to an iterator

2009-01-07 Thread Jon Lang
I was just reading through S07, and it occurred to me that if one
wanted to, one could handle stacks and queues as iterators, rather
than by push/pop/shift/unshift of a list.  All you'd have to do would
be to create a stack or queue class with a private list attribute and
methods for reading from and writing to it.  The first two parts are
easy: "has @!list;" handles the first, and "method prefix:<=> { .pop
}" handles the second (well, mostly).

How would I define the method for writing to an iterator?

-- 
Jonathan "Dataweaver" Lang


Re: r24819 - docs/Perl6/Spec

2009-01-08 Thread Jon Lang
Darren Duncan wrote:
> pugs-comm...@feather.perl6.nl wrote:
>>
>> Log:
>> [S02] clarify that Pairs and Mappings are mutable in value, but not in key
>
> 
>>
>> KeyHash Perl hash that autodeletes values matching default
>> KeySet  KeyHash of Bool (does Set in list/array context)
>> KeyBag  KeyHash of UInt (does Bag in list/array context)
>> +PairA single key-to-value association
>> +Mapping Set of Pairs with no duplicate keys
>
> 
>>
>> +As with C types, C and C are mutable in their
>> +values but not in their keys.  (A key can be a reference to a mutable
>> +object, but cannot change its C<.WHICH> identity.  In contrast,
>> +the value may be rebound to a different object, just as a hash
>> +element may.)
>
> Following this change, it looks to me like Mapping is exactly the same as
> Hash.  So under what circumstances should one now choose whether they want
> to use a Hash or a Mapping?  How do they still differ? -- Darren Duncan

I don't think they do.  IMHO, Mapping should definitely be immutable
in both key and value; it is to Hash as Seq is to Array.  (Side note:
why is List considered to be immutable?  Doesn't it change whenever
its iterator is read?)

The question in my mind has to do with Pair: if Pair is being
redefined as mutable in value, should it have an immutable
counterpart?  If so, what should said counterpart be called?

-- 
Jonathan "Dataweaver" Lang


Re: Extending classes in a lexical scope?

2009-01-12 Thread Jon Lang
Ovid wrote:
> Is it possible to modify the core Perl6Array class like that (without extra 
> keywords)?  If so, is it possible for each programmer to make such a change 
> so that it's lexically scoped?

AFAIK, it is not possible to modify a core class; however, I believe
that it _is_ possible to derive a new class whose "name" differs from
an existing class only in terms of version information, such that it
is substituted for the original class within the lexical scope where
it was defined, barring explicit inclusion of version information when
the class is referenced.

-- 
Jonathan "Dataweaver" Lang


Re: Not a bug?

2009-01-12 Thread Jon Lang
On Mon, Jan 12, 2009 at 2:15 AM, Carl Mäsak  wrote:
> Ovid (>):
>>  $ perl6 -e 'my $foo = "foo";say "{" ~ $foo ~ "}"'
>>   ~ foo ~
>
> Easy solution: only use double quotes when you want to interpolate. :)
>
> This is not really an option when running 'perl6 -e' under bash, though.

$ perl6 -e 'my $foo = "foo";say q:qq({" ~ $foo ~ "})'

...or something to that effect.

-- 
Jonathan "Dataweaver" Lang


Re: Not a bug?

2009-01-12 Thread Jon Lang
On Mon, Jan 12, 2009 at 1:08 PM, Larry Wall  wrote:
> On Mon, Jan 12, 2009 at 03:43:47AM -0800, Jon Lang wrote:
> : On Mon, Jan 12, 2009 at 2:15 AM, Carl Mäsak  wrote:
> : > Ovid (>):
> : >>  $ perl6 -e 'my $foo = "foo";say "{" ~ $foo ~ "}"'
> : >>   ~ foo ~
> : >
> : > Easy solution: only use double quotes when you want to interpolate. :)
> : >
> : > This is not really an option when running 'perl6 -e' under bash, though.
> :
> : $ perl6 -e 'my $foo = "foo";say q:qq({" ~ $foo ~ "})'
> :
> : ...or something to that effect.
>
> Assuming that's what was wanted.  I figgered they want something more
> like:
>
>$ perl6 -e 'my $foo = "foo"; say q[{] ~ $foo ~ q[}];'

True enough.  Either one of these would be more clear than the
original example in terms of user intent.

As well, isn't there a way to escape a character that would otherwise
be interpolated?  If the intent were as you suppose, the original
could be rewritten as:

  $ perl6 -e 'my $foo = "foo";say "\{" ~ $foo ~ "}"'

(Or would you need to escape the closing curly brace as well as the
opening one?)

-- 
Jonathan "Dataweaver" Lang


Re: Extending classes in a lexical scope?

2009-01-12 Thread Jon Lang
Ovid wrote:
> Actually, I'd prefer to go much further than this:
>
>  use Core 'MyCore';
>
> And have that override core classes lexically.
>
> That solves the "but I want it MY way" issue that many Perl and Ruby 
> programmers have, but they don't shoot anyone else in the foot.

Since 'use' imports its elements into the current lexical scope, the
version-based approach can do this.

The only catch that I can think of has to do with derived classes:
does the existence of a customized version of a class result in
same-way-customized versions of the classes that are derived from the
original class?  That is, if I added an "updated" version of Foo, and
Bar has previously been defined as being derived from Foo, would I get
a default "updated version" of Bar as well?  Or would I have to
explicitly update each derived class to conform to the updated base
class?

-- 
Jonathan "Dataweaver" Lang


Re: Extending classes in a lexical scope?

2009-01-12 Thread Jon Lang
Ovid wrote:
> - Original Message 
>
>> From: Jon Lang 
>
>> > Actually, I'd prefer to go much further than this:
>> >
>> >  use Core 'MyCore';
>> >
>> > And have that override core classes lexically.
>> >
>> > That solves the "but I want it MY way" issue that many Perl and Ruby
>> programmers have, but they don't shoot anyone else in the foot.
>>
>> Since 'use' imports its elements into the current lexical scope, the
>> version-based approach can do this.
>>
>> The only catch that I can think of has to do with derived classes:
>> does the existence of a customized version of a class result in
>> same-way-customized versions of the classes that are derived from the
>> original class?  That is, if I added an "updated" version of Foo, and
>> Bar has previously been defined as being derived from Foo, would I get
>> a default "updated version" of Bar as well?  Or would I have to
>> explicitly update each derived class to conform to the updated base
>> class?
>
>
> I'm not sure I understand you.  If 'Bar' inherits from 'Foo' and 'Foo' has 
> extended the core Array class to lexically implement a .shuffle method, then 
> I would expect 'Bar' to have that also.

No, you don't understand me.  The Foo/Bar example I was giving was
independent of your example.  Rephrasing in your terms, consider the
possibility of a class that's derived from Array, for whatever reason;
call it "Ring".  Now you decide that you want to redefine Array to
include a shuffle method, and so you implement an "Array version 2.0".
 Would you be given a "Ring version 2.0" that derives from Array
version 2.0, or would you have to explicitly ask for it?

As long as you limit your use of class inheritance, the above remains
manageable.  But consider something like the Tk widgets implemented as
a class hierarchy; then consider what happens if you reversion one of
the root widgets.  If you manually have to reversion each and every
widget derived from it, and each and every widget derived from those,
and so on and so forth, in order for your changes to the root to
propagate throughout the class hierarchy...

Instead, I'd rather see an approach where you need only reversion the
base class and those specific derived classes where problems would
otherwise arise due to your changes.

-- 
Jonathan "Dataweaver" Lang


Re: design of the Prelude (was Re: Rakudo leaving the Parrot nest)

2009-01-15 Thread Jon Lang
Forgive my ignorance, but what is a Prelude?

-- 
Jonathan "Dataweaver" Lang


Re: design of the Prelude (was Re: Rakudo leaving the Parrot nest)

2009-01-15 Thread Jon Lang
On Thu, Jan 15, 2009 at 6:45 PM, Jonathan Scott Duff
 wrote:
> On Thu, Jan 15, 2009 at 8:31 PM, Jon Lang  wrote:
>>
>> Forgive my ignorance, but what is a Prelude?
>>
>> --
>> Jonathan "Dataweaver" Lang
>
> The stuff you load (and execute) to bootstrap the language into utility on
> each invocation.  Usually it's written in terms of the language you're
> trying to bootstrap as much as possible with just a few primitives to get
> things started.

OK, then.  If I'm understanding this correctly, the problem being
raised has to do with deciding which language features to treat as
primitives and which ones to bootstrap from those primitives.  The
difficulty is that different compilers provide different sets of
primitives; and you're looking for a way to avoid having to write a
whole new Prelude for each compiler.  Correct?

Note my use of the term "language features" in the above.  Presumably,
you're going to have to decide on some primitive functions as well as
some primitive datatypes, etc.  For instance: before you can use
meta-operators in the Prelude, you're going to have to define them in
terms of some choice of primitive functions - and that choice is
likely to be compiler-specific.  So how is that any easier to address
than the matter of defining datatypes?  Or is it?

-- 
Jonathan "Dataweaver" Lang


Re: Operator sleuthing...

2009-01-15 Thread Jon Lang
Mark Lentczner wrote:
> STD has sym<;> as both an infix operator ( --> Sequencer), and as a
> terminator.
> ?? Which is it? Since I think most people think of it as a statement
> terminator, I plan on leaving it off the chart.

It is both.  Examples where it is used as an infix operator include:

  loop (my $i = 1; $i < 10; $i *= 2) { ... }

  my @@slice = (1, 2; 3, 4; 5, 6)

Presumably, Perl is capable of distinguishing the meanings based on
what the parser is expecting when it finds the semicolon.

-- 
Jonathan "Dataweaver" Lang


Re: r25060 - docs/Perl6/Spec src/perl6

2009-01-27 Thread Jon Lang
On Tue, Jan 27, 2009 at 9:43 AM,   wrote:
> +=head2 Reversed comparison operators
> +
> +Any infix comparison operator returning type C may be transformed 
> into its reversed sense
> +by prefixing with C<->.
> +
> +-cmp
> +-leg
> +-<=>
> +
> +To avoid confusion with the C<-=> operator, you may not modify
> +any operator already beginning with C<=>.
> +
> +The precedence of any reversed operator is the same as the base operator.

If there are only a handful of operators to which the new
meta-operator can be applied, why do it as a meta-operator at all?

This could be generalized to allow any infix operator returning a
signed type (which would include C<Order>) to reverse the sign.  In
effect, "$x -op $y" would be equivalent to "-($x op $y)".  (Which
suggests the possibility of a more generalized rule about creating
"composite operators" by applying prefix or postfix operators to infix
operators in an analogous manner; but that way probably lies madness.)

Also, wouldn't the longest-token rule cause C<-=> to take precedence
over C<=> prefixed with C<->?  Or, in the original definition, the
fact that C<=> isn't a comparison operator?

-- 
Jonathan "Dataweaver" Lang


Re: r25060 - docs/Perl6/Spec src/perl6

2009-01-27 Thread Jon Lang
Larry Wall wrote:
> Jon Lang wrote:
> : If there are only a handful of operators to which the new
> : meta-operator can be applied, why do it as a meta-operator at all?
>
> As a metaoperator it automatically extends to user-defined comparison
> operators, but I admit that's not a strong argument.  Mostly I want
> to encourage the meme that you can use - to reverse a comparison
> operator, even in spots where the operator is named by strings, such
> as (potentially) in an OrderingPair, which currently can be written
>
>&extract_key => &infix:<-leg>
>
> but that might be abbreviate-able to
>
>:extract_key<-leg>
>
> or some such.  That's not a terribly strong argument either, but
> perhaps they're additive, if not addictive.  :)

So "$a -<=> $b" is equivalent to "$b <=> $a", not "-($a <=> $b)".  OK.
 I'd suggest choosing a better character for the meta-operator (one
that conveys the meaning of reversal of order rather than opposite
value); but I don't think that there is one.
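
For what it's worth, the typical use under this reading would be
something like this (draft syntax; not implemented anywhere yet):

    @names.sort: &infix:<-leg>;    # descending sort: each comparison is $b leg $a
    # ...which should give the same result as: @names.sort: { $^b leg $^a }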

> : Also, wouldn't the longest-token rule cause C<-=> to take precedence
> : over C<=> prefixed with C<->?  Or, in the original definition, the
> : fact that C<=> isn't a comparison operator?
>
> It would be a tie, since both operators are the same length.

I guess I don't understand the longest-token rules, then.  I'd expect
the parser to be deciding between operator "-=" (a single
two-character token) vs. meta-operator "-" (a one-character token)
followed by operator "=" (a separate one-character token).  Since
meta-op "-" is shorter than op "-=", I'd expect op "-=" to win out.

-- 
Jonathan "Dataweaver" Lang


Re: r25102 - docs/Perl6/Spec

2009-01-30 Thread Jon Lang
Larry Wall wrote:
> So I'm open to suggestions for what we ought to call that envelope
> if we don't call it the prelude or the perlude.  Locale is bad,
> environs is bad, context is bad...the wrapper?  But we have dynamic
> wrappers already, so that's bad.  Maybe the setting, like a jewel?
> That has a nice static feeling about it at least, as well as a feeling
> of surrounding.
>
> Or we could go with a more linguistic contextual metaphor.  Argot,
> lingo, whatever...
>
> So anyway, just because other languages call it a prelude doesn't
> mean that we have to.  Perl is the tail that's always trying to
> wag the dog...
>
> What is the sound of one tail wagging?

whoosh, whoosh.

I tend to like "setting", because it makes me think of the setting of
a play, in which the actors (i.e., objects) perform their assigned
roles in following the script.

-- 
Jonathan "Dataweaver" Lang


Re: r25200 - docs/Perl6/Spec t/spec

2009-02-04 Thread Jon Lang
 wrote:
> -=item --autoloop-split, -F *expression*
> +=item --autoloop-delim, -F *expression*
>
>  Pattern to split on (used with -a).  Substitutes an expression for the 
> default
>  split function, which is C<{split ' '}>.  Accepts unicode strings (as long as

Should the default pattern be ' ', or should it be something more like /\s+/?

-- 
Jonathan "Dataweaver" Lang


Re: r25200 - docs/Perl6/Spec t/spec

2009-02-05 Thread Jon Lang
On Thu, Feb 5, 2009 at 9:21 AM, Larry Wall  wrote:
> On Thu, Feb 05, 2009 at 07:47:01AM -0800, Dave Whipp wrote:
>> Jon Lang wrote:
>>>>  Pattern to split on (used with -a).  Substitutes an expression for the 
>>>> default
>>>>  split function, which is C<{split ' '}>.  Accepts unicode strings (as 
>>>> long as
>>>
>>> Should the default pattern be ' ', or should it be something more like 
>>> /\s+/?
>>
>> // ?
>
> You guys are all doing P5Think.  The default should be autocomb, not 
> autosplit,
> and the default comb is already correct...

In that case, the -a option needs to be renamed to autocomb, rather
than autosplit as it is now.

-- 
Jonathan "Dataweaver" Lang


Re: 2 questions: Implementations and Roles

2009-02-06 Thread Jon Lang
Timothy S. Nelson wrote:
>Also, is there a simple way to know when I should be using a class
> vs. a role?

If you plan on creating objects with it, use a class.  If you plan on
creating classes with it, use a role.
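
A quick illustration:

    role Greeter {                      # meant to be composed into classes
        method greet() { say "Hello, { self.name }!" }
    }

    class Person does Greeter {         # meant to be instantiated into objects
        has $.name;
    }

    Person.new(name => 'Ada').greet;    # "Hello, Ada!"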

-- 
Jonathan "Dataweaver" Lang


Re: References to parts of declared packages

2009-02-11 Thread Jon Lang
On Wed, Feb 11, 2009 at 12:15 PM, Jonathan Worthington
 wrote:
> Hi,
>
> If we declared, for example:
>
> role A::B {};
>
> Then what should a reference to A be here? At the moment, Rakudo treats it
> as a post-declared listop, however I suspect we should be doing something a
> bit smarter? If so, what should the answer to ~A.WHAT be?
>
> Thanks,

I'd go with one of two possibilities:

* Don't allow the declaration of A::B unless A has already been declared.

or

* A should be treated as a post-declared package.

-- 
Jonathan "Dataweaver" Lang


Re: References to parts of declared packages

2009-02-11 Thread Jon Lang
Carl Mäsak wrote:
>> * A should be treated as a post-declared package.
>
> Whatever this means, it sounds preferable. :)

It means that you can define package A without ever declaring it, by
declaring all of its contents using such statements as 'role A::B ',
'sub A::Foo', and so on.
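
For instance (as I read the proposal):

    role A::B  { method hi() { "hi from B" } }
    class A::C { }

    # No explicit 'package A { ... }' block appears anywhere, yet A now
    # exists as a namespace whose symbol table holds B and C.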

-- 
Jonathan "Dataweaver" Lang


S03: how many metaoperators?

2009-02-11 Thread Jon Lang
With the addition of the reversing metaoperator, the claim that there
are six metaoperators (made in the second paragraph of the meta
operators section) is no longer true.  Likewise, the reduction
operator is no longer the fourth metaoperator (as stated in the first
sentence of the reduction operators section).  For now, the cross
operator _is_ still the final metaoperator, as it states in its first
paragraph; but it's possible that that might change eventually.

-- 
Jonathan "Dataweaver" Lang


S06: named arguments and long names

2009-02-12 Thread Jon Lang
Are required named parameters (e.g., ':$foo!') considered to be part
of the long name provided by a signature?  (S06 seems to indicate that
they aren't.)

Either way, can their status with respect to the long name be changed?
 That is, if they aren't included in the long name, can they be added
to it?  If they are included, can they be removed?

Suggestion: exclude them by default; but use a double exclamation mark
to indicate a required named parameter that is included in the long
name.  ':$foo!' would not be part of the long name, but ':$foo!!'
would be.
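
Spelled out with a concrete (and entirely hypothetical) pair of multis:

    multi sub render(:$width!)   { ... }   # required named arg; not part of the long name
    multi sub render(:$height!!) { ... }   # required named arg that, under this proposal,
                                           # does participate in the long name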

-- 
Jonathan "Dataweaver" Lang


Re: References to parts of declared packages

2009-02-13 Thread Jon Lang
TSa wrote:
> Does that imply that packages behave like C++ namespaces? That is
> a package can be inserted into several times:
>
>   package A
>   {
>   class Foo {...}
>   }
>   # later elsewhere
>   package A
>   {
>   class Bar {...}
>   }
>
> I would think that this is just different syntax to the proposed
> form
>
>   class A::Foo {...}
>   class A::Bar {...}

Well, we _do_ have a mechanism in place for adding to an existing
class (e.g., "class Foo is also { ... }"), and classes are a special
case of modules; so I don't see why you shouldn't be able to do
likewise with modules and even packages.  That said, I'd be inclined
to suggest that if this is to be allowed at all, it should be done
using the same mechanism that's used for classes - that is, an "is
also" trait.

-- 
Jonathan "Dataweaver" Lang


Synopsis for Signatures?

2009-02-13 Thread Jon Lang
At present, signatures appear to serve at least three rather diverse
purposes in Perl 6:

* parameter lists for routines (can also be used to specify what a
given routine returns; explored in detail in S06).
* variable declaration (see "declarators" in S03).
* parametric roles (currently only addressed to any extent in A12;
presumably, this will be remedied when S14 is written).

Given that signatures have grown well beyond their origins as
subroutine parameter lists, and given that signatures have their own
syntax, perhaps they should be moved out of S06?  I could see S08
being retasked to address signatures (and perhaps captures, given the
intimate connection between these two), since its original purpose
(i.e., references) has been deprecated.

-- 
Jonathan "Dataweaver" Lang


Re: References to parts of declared packages

2009-02-13 Thread Jon Lang
Larry Wall wrote:
> Jon Lang wrote:
> : Well, we _do_ have a mechanism in place for adding to an existing
> : class (e.g., "class Foo is also { ... }"), and classes are a special
> : case of modules; so I don't see why you shouldn't be able to do
> : likewise with modules and even packages.  That said, I'd be inclined
> : to suggest that if this is to be allowed at all, it should be done
> : using the same mechanism that's used for classes - that is, an "is
> : also" trait.
>
> These are actually package traits, not class traits, so your reasoning
> is backwards, with the result that your wish is already granted. :)

Darn it... :)

> However, it's possible we'll throw out "is also" and "is instead"
> in favor of "multi" and "only", so it'd be:
>
>multi package A {
>class Foo {...}
>}
>
>multi package A {
>class Bar {...}
>}
>
> to explicitly allow extended declarations.  Modifying a package:
>
>package A { # presumed "only"
>class Foo {...}
>}
>...
>multi package A {
>class Bar {...}
>}
>
> would result in an error.  In that case, the pragma
>
>use MONKEY_PATCHING;
>
> serves to suppress the redefinition error. It would also allow:
>
>package A { # presumed "only"
>class Foo {...}
>}
>...
>only package A {
>class Bar {...}
>}
>
> to do what "is instead" now does.
>
> Apart from the parsimony of reusing an existing concept, the advantage
> of doing it with "multi/only" is that the parser knows the multiness
> at the point it sees the name, which is when it wants to stick A into
> the symbol table.  Whereas "is also" has to operate retroactively on
> the name.
>
> This also lets us mark a package as explicitly monkeyable by design,
> in which case there's no need for a MONKEY_PATCHING declaration.

And with package versioning, you may not need an "is instead"
equivalent: if you want to "redefine" a package, just create a newer
version of it in a tighter lexical scope than the original package was
in.  You can still access the original package if you need to, by
referring to its version information.

And that, I believe, would put the final nail in the coffin of the
MONKEY_PATCHING declaration; without that, you'd still need the
declaration for the purpose of allowing redefinitions.
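
Roughly the sort of thing I have in mind (a sketch; the :ver adverbs
follow the draft S11 syntax):

    class Dog:ver<1.0> { method speak() { "Woof" } }

    {   # some tighter lexical scope
        my class Dog:ver<2.0> { method speak() { "WOOF!" } }
        say Dog.new.speak;    # WOOF! - the newer declaration shadows the original here
    }
    say Dog.new.speak;        # Woof - outside that scope, the original is untouched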

-- 
Jonathan "Dataweaver" Lang


Re: References to parts of declared packages

2009-02-13 Thread Jon Lang
Larry Wall wrote:
> Jon Lang wrote:
> : And with package versioning, you may not need an "is instead"
> : equivalent: if you want to "redefine" a package, just create a newer
> : version of it in a tighter lexical scope than the original package was
> : in.  You can still access the original package if you need to, by
> : referring to its version information.
> :
> : And that, I believe, would put the final nail in the coffin of the
> : MONKEY_PATCHING declaration; without that, you'd still need the
> : declaration for the purpose of allowing redefinitions.
>
> Except that the idea of monkey patching is that you're overriding
> something for everyone, not just your own lexical scope.

...right.

> While taking a shower I refined the design somewhat in my head,
> thinking about the ambiguities in package names when you're redefining.
> By my previous message, it's not clear whether the intent of
>
>multi package Foo::Bar {...}
>
> is to overload an existing package Foo::Bar in some namespace that
> we search for packages, or to be the prototype of a new Foo::Bar
> in the current package.  In the former case, we should complain if
> an existing name is not found, but in the latter case we shouldn't.
> So those cases must be differentiated.
>
> Which I think means that multi package always modifies an existing
> package, period.  To establish a new multi package you must use
> proto or some equivalent.  So our previous example becomes either:
>
>proto package A {
>class Foo {...}
>}
>
>multi package A {
>class Bar {...}
>}
>
> or
>
>proto package A {...}
>
>multi package A {
>class Foo {...}
>}
>
>multi package A {
>class Bar {...}
>}
>
> Then we'd say that if you want to retro-proto-ize an existing class you
> must do it with something like:
>
>proto class Int is MONKEY_PATCHING(CORE::Int) {...}
>multi class Int {
>method Str () { self.fmt("%g") }
>}
>
> or some such monkey business.

By "re-proto-ize", I'm assuming that you mean "open an existing class
up to modification".  With that in mind, I'd recommend keeping
something to the effect of "is instead" around, with the caveat that
it can only be used in conjunction with the multi keyword.  So if I
wanted to redefine package Foo from scratch, I'd say something like:

  proto package Foo { ... }

  multi package Foo is instead { ... }

But:

  package Foo is instead { ... }

would result in an error.

-- 
Jonathan "Dataweaver" Lang


infectious traits and pure functions

2009-02-13 Thread Jon Lang
In reading over the Debugging draft (i.e., the future S20), I ran
across the concept of the infectious trait - that is, a trait that
doesn't just get applied to the thing to which it is explicitly
applied; rather, it tends to spread to whatever else that thing comes
in contact with.  "Taint" is the primary example of this, although not
the only one.

In reading about functional programming, I ran across the concept of
the "pure function" - i.e., a function that doesn't produce side
effects.  The article went on to say that "While most compilers for
imperative programming languages detect pure functions, and perform
common-subexpression elimination for pure function calls, they cannot
always do this for pre-compiled libraries, which generally do not
expose this information, thus preventing optimisations that involve
those external functions. Some compilers, such as gcc, add extra
keywords for a programmer to explicitly mark external functions as
pure, to enable such optimisations."

It occurred to me that this business of marking functions as pure
could be done in perl by means of traits - that is, any function might
be defined as "is pure", promising the compiler that said function is
subject to pure function optimization.  It further occurred to me that
a variation of the contagious trait concept (operating on code blocks
and their component statements instead of objects; operating at
compile time rather than run time; and spreading via "all
participants" rather than "any participant") could be used to auto-tag
new pure functions, provided that all so-called "primitive pure
functions" are properly tagged to begin with.  The compiler could then
rely on this tagging to perform appropriate optimization.
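
As a sketch of how the tagging might look in practice ("is pure" is the
proposed trait here, not something the Synopses currently define):

    sub hypot($x, $y) is pure {    # explicitly promised to be side-effect-free
        sqrt($x * $x + $y * $y);
    }

    sub double($n) { $n * 2 }      # calls nothing but pure primitives, so it is a
                                   # candidate for auto-tagging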

Such auto-tagging strikes me as being in keeping with Perl's virtue of
laziness: writers of new code don't have to care about tagging pure
functions (or even know that they exist) in order for the tagging to
take place, leading to a potentially more robust library of functions
overall (in the sense of "everything in the library is properly
annotated").  As well, I could see it having some additional benefits,
such as in concurrent programming: "is critical" could be similarly
infectious, in that any function that might call a critical code block
should itself be tagged as critical.  Indeed, "is critical" and "is
pure" seem to be mutually exclusive concepts: a given code block might
be critical, pure, or neither; but it should never be both.

Am I onto something, or is this meaningless drivel?

-- 
Jonathan "Dataweaver" Lang


Re: infectious traits and pure functions

2009-02-16 Thread Jon Lang
Darren Duncan wrote:
> There are ways to get what you want if you're willing to trade for more
> restrictiveness in the relevant contexts.
>
> If we have a way of marking types/values and routines somehow as being pure,
> in the types case marking it as consisting of just immutable values, and in
> the routines case marking it as having no side effects, then we are telling
> the compiler that it needs to examine these type/routine definitions and
> check that all other types and routines invoked by these are also marked as
> being pure, recursively down to the system-defined ones that are also marked
> as being pure.
>
> Entities defined as being pure would be restricted in that they may not
> invoke any other entities unless those are also defined as being pure.  This
> may mean that certain Perl features are off-limits to this code.  For
> example, you can invoke system-defined numeric or string etc operators like
> add, multiply, catenate etc, but you can't invoke anything that uses
> external resources such as check the current time or get a truly random
> number.  The latter would have to be fetched first by an impure routine and
> be given as a then pure value to your pure routine to work with.  Also, pure
> routines can't update or better yet can't even read non-lexical variables.
>
> If it is possible with all of Perl's flexibility to make it restrictive of
> certain features' use within certain contexts, then we can make this work.

True.  In effect, this would be a sort of design-by-contract: by
applying the "is pure" trait to an object or function, you're
guaranteeing that it will conform to the behaviors and properties
associated with pure functions and immutable values.  A programmer who
chooses to restrict himself to functions and values thus tagged would,
in effect, be using a "functional programming dialect" of Perl 6, and
would gain all of the optimization benefits that are thus entailed.

It would also help the compiler to produce more efficient executable
code more easily, since (barring a module writer who mislabels
something impure as pure) it would be able to simply check for the
purity trait to decide whether or not functional programming
optimization can be tried.  If the purity tag is absent, a whole
segment of optimization attempts could be bypassed.

As well, my original suggestion concerning the auto-tagging of pure
functions was never intended as a means of tagging _everything_ that's
pure; rather, the goal was to set things up in such a way that if it
can easily be determined that thus-and-such a function is pure, it
should be auto-tagged as such (to relieve module writers of the burden
of having to do so manually); but if there's any doubt about the
matter (e.g., conclusively proving or disproving purity would be
NP-complete or a halting problem), then the auto-tagging process
leaves the function in question untagged, and the purity tag would
have to be added in by the module writer.  This is still a valid idea
so long as the population of autotaggable pure functions (and, with
Darren's suggestions, truly invariant objects, etc.) is usefully
large.

In short, I don't want the perfect to be the enemy of the good.  Set
up a good first approximation of purity (one that errs on the side of
false negatives), and then let the coder tweak it from there.

> Now Perl in general would be different, with the majority of a typical Perl
> program probably impure without problems, but I think it is possible and
> ideal for users to be able to construct a reasonably sizeable pure sandbox
> of sorts within their program, where it can be easier to get correct results
> without errors and better performing auto-threading etc code by default.  By
> allowing certain markings and restrictions as I mention, this can work in
> Perl.

Exactly what I was thinking.  The key is that we're not trying to
force programmers to use the pure sandbox if they don't want to;
rather, we're trying to delineate the pure sandbox so that if they
want to work entirely within it, they'll have a better idea of what
they have to work with.

-- 
Jonathan "Dataweaver" Lang


Re: infectious traits and pure functions

2009-02-16 Thread Jon Lang
Martin D Kealey wrote:
> On Mon, 16 Feb 2009, Jon Lang wrote:
>> if there's any doubt about the matter (e.g., conclusively proving or
>> disproving purity would be NP-complete or a halting problem), then
>
> Deciding whether you have a halting problem IS a halting problem... :-)

You're making the perfect the enemy of the good.

I'm not saying that it needs to decide whether or not you have a
halting problem; I'm saying that if there's any possibility that you
_might_ have one, you should stop looking.  Let's take it as a given
that things such as exceptions, threads, and co-routines make the
automated establishment of whether or not a given function is pure a
nightmare.  The easy solution for this would be to say that if a given
function makes use of exceptions, threads, or co-routines, it will not
be auto-tagged as pure.  The process of auto-tagging pure functions
would not be perfect, in that there are likely to be a number of
functions that are in fact pure but don't get auto-tagged as such; but
it could still be _good_, in the sense that a useful set of pure
functions _would_ be auto-tagged.

-- 
Jonathan "Dataweaver" Lang


Re: Comparing inexact values (was "Re: Temporal changes")

2009-02-24 Thread Jon Lang
Larry Wall wrote:
> So it might be better as a (very tight?) operator, regardless of
> the spelling:
>
>    $x ~~ $y within $epsilon

I like this: it's readable and intuitive.  As well, it leaves ±
available for use in its mathematical sense.

> For what it's worth, ± does happen to be in Latin-1, and therefore
> officially fair game for Standard Perl.  By the way, the mathematical
> definition can be derived from the engineering definition with
>
>    if $x == ($x ± $epsilon).minmax.any
>
> The problem with defining it the other direction is that junctions
> tend to lose ordering information of their eigenstates, and we can't
> just flip mins and maxes when we feel like it, or degenerate null
> ranges get broken.

OTOH, there aren't going to be very many cases where you're going to
want to derive either from the other.  You're more likely to derive
both from the same base stock:

  $y ± 5  # same as ($y - 5) | ($y + 5)
  $y within 5 # same as ($y - 5) .. ($y + 5)

-- 
Jonathan "Dataweaver" Lang


Re: Comparing inexact values (was "Re: Temporal changes")

2009-02-24 Thread Jon Lang
Daniel Ruoso wrote:
> What about...
>
>  if $x ~~ [..] $x ± $epsilon {...}
>
> That would mean that $x ± $epsilon in list context returned each value,
> where in scalar context returned a junction, so the reduction operator
> could do its job...

(I'm assuming that you meant something like "if $y ~~ [..] $x ±
$epsilon {...}", since matching a value to a range that's centered on
that value is tautological.)

Junctions should not return individual values in list context, since
it's possible for one or more of said values to _be_ lists.  That
said, I believe that it _is_ possible to ask a Junction to return a
set of its various values (note: set; not list).  Still, we're already
at a point where:

  if $y ~~ $x within $epsilon {...}

uses the same number of characters and is more legible.  _And_ doesn't
have any further complications to resolve.

-- 
Jonathan "Dataweaver" Lang


Re: Comparing inexact values (was "Re: Temporal changes")

2009-02-24 Thread Jon Lang
TSa wrote:
> Larry Wall wrote:
>> So it might be better as a (very tight?) operator, regardless of
>> the spelling:
>>
>>     $x ~~ $y within $epsilon
>
> This is a pretty add-on to smartmatch but I still think
> we are wasting a valueable slot in the smartmatch table
> by making numeric $x ~~ $y simply mean $x == $y. What
> is the benefit?

Larry's suggestion wasn't about ~~ vs. ==; it was about "within" as an
infix operator vs. "within" as a method or an adverb.

-- 
Jonathan "Dataweaver" Lang


Re: Comparing inexact values (was "Re: Temporal changes")

2009-02-24 Thread Jon Lang
On Tue, Feb 24, 2009 at 1:39 PM, Daniel Ruoso  wrote:
> On Tue, 2009-02-24 at 13:34 -0800, Jon Lang wrote:
>> Daniel Ruoso wrote:
>> >  if $y ~~ [..] $x ± $epsilon {...}
>> Junctions should not return individual values in list context,
>
> It is not the junction that is returning the individual values, but the
> infix:<±> operator...

Hmm... true point.  Thinking through it some more, I'm reminded of an
early proposal to do something similar with square roots in
particular, and with non-integer exponents in general.  e.g., "sqrt(4)
=== ±2".  IIRC, the decision was made that such a capability would
work best as part of an advanced math-oriented module, and that things
should be arranged such that sqrt($x).[0] in that advanced module
would be equivalent to sqrt($x) in Perl's default setting.  That would
mean that sqrt(4) would have to produce (+2, -2) in list context,
rather than (-2, +2) - which, in turn, would mean that ±2 should do
likewise.  And however prefix:<±> works, infix:<±> should follow suit,
returning addition first and subtraction second.

Which would further mean that you should use the reversal metaoperator as well:

  if $y ~~ [R..] $x ± $epsilon {...}

-- 
Jonathan "Dataweaver" Lang


Re: Comparing inexact values (was "Re: Temporal changes")

2009-02-24 Thread Jon Lang
Doug McNutt wrote:
> Thinking about what I actually do. . .
>
> A near equal test of a float ought to be a fractional error based on the
> current value of the float.
>
> $x  tested for between $a*(1.0 + $errorfraction) and $a*(1.0 -
> $errorfraction)
>
> If you're dealing with propagation of errors during processing of data the
> fractional error is usually the one that's important. Finances might be
> different  but floating dollars have their own set of problems relating to
> representation of decimal fractions.

Half-baked idea here: could we somehow use some dwimmery akin to
Whatever magic to provide some meaning to a postfix:<%> operator?
Something so that you could say:

  $x within 5%

And it would translate it to:

  $x within 0.05 * $x

?

Something along the lines of:

1. infix:<within> sets Whatever to its lhs while evaluating its rhs
2. postfix:<%> returns the requested percentage of Whatever; if
Whatever is undefined, return the requested percentage of 1.

-- 
Jonathan "Dataweaver" Lang


S14 markup

2009-02-24 Thread Jon Lang
Someone should go through the Parametric Roles section and properly
indent the code blocks.  They're not rendering properly at
http://perlcabal.org/syn/S14.html

-- 
Jonathan "Dataweaver" Lang


Re: Synopsis for Signatures?

2009-02-24 Thread Jon Lang
On Fri, Feb 13, 2009 at 11:49 AM, Larry Wall  wrote:
> On Fri, Feb 13, 2009 at 10:24:14AM -0800, Jon Lang wrote:
> : Given that signatures have grown well beyond their origins as
> : subroutine parameter lists, and given that signatures have their own
> : syntax, perhaps they should be moved out of S06?  I could see S08
> : being retasked to address signatures (and perhaps captures, given the
> : intimate connection between these two), since its original purpose
> : (i.e., references) has been deprecated.
>
> That has been the intent, though nobody's got around to doing it.

Some questions:

Can anyone think of a use for an invocant or a slurpy code block in a
signature that's being used to initialize variables or to parameterize
a role?

Can anyone think of a use for named parameters in a signature that's
being used to initialize variables?

In general, what features of subroutine signatures might not be
applicable to role signatures and/or variable declaration signatures?

-- 
Jonathan "Dataweaver" Lang


Re: Comparing inexact values (was "Re: Temporal changes")

2009-02-25 Thread Jon Lang
Mark J. Reed wrote:
> I do quite like the magical postfix %, but I wonder how far it should
> go beyond ±:
>
> $x += 5%;   # becomes $x += ($x * .05)?  Or maybe $x *= 1.05  ?
> $x * 5%;   # becomes $x * .05 ?

If it works with ±, it ought to work with + and -.  Rule of thumb: if
there's no easy way to answer "5% of what?" then default to "5% of
1.0", or 0.05.  +, -, and ± would need to be set up to provide the
necessary answer for "of what?" by means of setting Whatever; and by
basing it on Whatever, you have other options, such as:

@a[50%] # accesses the middle item in the list, since Whatever is
set to the length of the list.

--

Concerning "-> $val, $err { [..^] $val - $err, $val + $err }" vs "->
$val, $err { any $val - $err, $val + $err }": I'm not sold on the
notion that Huffman coding might imply that ± should go with the
former.  Perhaps an argument can be made for it; but I suspect that
the two uses are roughly equally common (which, per Huffman coding,
would imply that their names should have similar lengths).  Whichever
one we go with, we have a conundrum: if ± names the latter, then the
only intuitive name proposed so far for the former ("within") is too
long; if ± names the former, then no intuitive name at all has been
proposed for the latter.

So: what we need are proposals for short, understandable names for
each operator.  Suggestions?

-- 
Jonathan "Dataweaver" Lang


Re: Comparing inexact values (was "Re: Temporal changes")

2009-02-26 Thread Jon Lang
TSa wrote:
> HaloO,
>
> Jon Lang wrote:
>>
>>   @a[50%] # accesses the middle item in the list, since Whatever is
>> set to the length of the list.
>
> I don't understand what you mean with setting Whatever. Whatever is
> a type that mostly behaves like Num and is used for overloaded
> postcircumfix:<[ ]>:(Array @self: Whatever $i). So *-1 just casts
> the -1. Postfix % would do the same for Ratio. Then one can overload
> postcircumfix<[ ]>:(Array @self: Ratio $r) and multiply $r with the
> size of the array to calculate the index. BTW, this indexing reminds
> me of the way how textures are indexd in OpenGL. With that in mind
> the index ratio could also interpolate between entries of a texture
> array.

I'm not certain about the exact mechanism to use; all I know for
certain is that I want a kind of dwimmery with postfix:<%> that
returns a percentage of some other value when it's reasonably easy to
decide what that other value should be, and a percentage of the number
one otherwise.  Perhaps that _is_ best handled by means of a "Ratio"
type (or whatever), perhaps not.  If so, I think that Rat (as in "a
Rational Number") should be kept distinct from "Ratio" (as in "a
portion of something else").  Perhaps the latter should be called a
"Portion" rather than a "Ratio".

> So here comes some rant about Num as the type that in contrast to Rat
> carries an approximation error and an approximation closure that can be
> called to decrease the error. That is e.g. sqrt(2) returns such a thing
> that can be called again to continue the iteration. Numeric equality
> checks would then not only compare the approximate values but also the
> identity of the iteration closure. Whereas ~~ would check if the lhs
> number's interval falls into the range of the rhs which is build from
> the current approximation and the error. The approximation closure is
> never invoked by ~~. Infix ± would then not create a range but a Num
> with explicit error.

I'm not sold on the notion that Num should represent a range of values
(and I use "range" here in its mathematical sense of "any number
that's between the given lower and upper bounds", as opposed to its
Perlish sense of "a discrete list of numbers").  However, I _do_ like
the idea of distinguishing between "is exactly equal to" and "is
approximately equal to"; and tracking the margin of error would be
essential to getting the latter option to work.

> Another question is if we define some arithmetic on these closures
> so that asin(sin(2)) == 2 exactly.

I don't know how relevant this is; but this sounds like the sort of
optimization that pure functional programming allows for - that is, if
the compiler ever sees a call like asin(sin($x)), it might optimize
the code by just putting $x in there directly, and bypassing both the
sin and asin calls - but only because both sin and asin are pure
functions (i.e., they don't produce any side effects).

As well, I have a certain fondness for the idea of lazy evaluation of
mathematical functions (and pure functions in general), on the premise
that whenever you actually carry out an operation such as sqrt or sin,
you potentially introduce some error into the value with which you're
working; and if you postpone the calculation long enough, you may find
that you don't need to perform it at all (such as in the
aforementioned "asin(sin(2))" example).

-- 
Jonathan "Dataweaver" Lang


Re: Comparing inexact values (was "Re: Temporal changes")

2009-02-26 Thread Jon Lang
Daniel Ruoso wrote:
> On Thu, 2009-02-26 at 17:01 +0100, TSa wrote:
>>      $y.error = 0.001;
>>      $x ~~ $y;
>
> Looking at this I just started wondering... why wouldn't that be made
> with:
>
>  my $y = 10 but Imprecise(5%);
>  $x ~~ $y;

That's not bad; I like it.

-- 
Jonathan "Dataweaver" Lang


Re: Comparing inexact values (was "Re: Temporal changes")

2009-02-26 Thread Jon Lang
Jon Lang wrote:
> TSa wrote:
>> Jon Lang wrote:
>>>
>>>   @a[50%] # accesses the middle item in the list, since Whatever is
>>> set to the length of the list.
>>
>> I don't understand what you mean with setting Whatever. Whatever is
>> a type that mostly behaves like Num and is used for overloaded
>> postcircumfix:<[ ]>:(Array @self: Whatever $i). So *-1 just casts
>> the -1. Postfix % would do the same for Ratio. Then one can overload
>> postcircumfix<[ ]>:(Array @self: Ratio $r) and multiply $r with the
>> size of the array to calculate the index. BTW, this indexing reminds
>> me of the way how textures are indexd in OpenGL. With that in mind
>> the index ratio could also interpolate between entries of a texture
>> array.
>
> I'm not certain about the exact mechanism to use; all I know for
> certain is that I want a kind of dwimmery with postfix:<%> that
> returns a percentage of some other value when it's reasonably easy to
> decide what that other value should be, and a percentage of the number
> one otherwise.  Perhaps that _is_ best handled by means of a "Ratio"
> type (or whatever), perhaps not.  If so, I think that Rat (as in "a
> Rational Number") should be kept distinct from "Ratio" (as in "a
> portion of something else").  Perhaps the latter should be called a
> "Portion" rather than a "Ratio".

Another possible approach would be to define postfix:<%> as returning
a code block:

  sub postfix:<%>(Num $x) generates { $x / 100 * ($_ \\ 1.0) }

This would let it share nearly all of Whatever's dwimmery, save for
the purely Whatever-specific bits.

Brandon S. Allbery wrote:
> Jon Lang wrote:
>> I'm not sold on the notion that Num should represent a range of values
>
> Arguably a range is the only sane meaning of a floating point number.

Perhaps; but a Num is not necessarily a floating point number - at
least, it shouldn't always be.

Doug McNutt wrote:
> Jon Lang wrote:
>> I don't know how relevant this is; but this sounds like the sort of
>> optimization that pure functional programming allows for - that is, if
>> the compiler ever sees a call like asin(sin($x)), it might optimize
>> the code by just putting $x in there directly, and bypassing both the
>> sin and asin calls - but only because both sin and asin are pure
>> functions (i.e., they don't produce any side effects).
>
> Don't get too hung up on that example.
>
> If $x is 2*pi,  asin(sin($x)) would return 0.0 and not 2*pi.

True enough: asin is not the inverse function of sin, although it's
probably as close as you can get.  And even there, some sort of
compiler optimization could potentially be done, replacing the
composition of asin and sin (both of which have the potential to
intensify error) with a normalization of the value into the -pi ..^ pi
range (which might also introduce error).

-- 
Jonathan "Dataweaver" Lang


Re: Comparing inexact values (was "Re: Temporal changes")

2009-02-26 Thread Jon Lang
Martin D Kealey wrote:
> On Thu, 26 Feb 2009, Jon Lang wrote:
>> asin is not the inverse function of sin, although it's probably as close
>> as you can get.  And even there, some sort of compiler optimization could
>> potentially be done, replacing the composition of asin and sin (both of
>> which have the potential to intensify error) with a normalization of the
>> value into the -pi ..^ pi range (which might also introduce error).
>
> Hmmm ... the normal mathematical range of arc-sine is (-π,+π], rather than
> [-π,+π), especially where complex numbers are concerned: arg(-1) == +π.
> (Well, so much for consistently using lower half-open ranges.)

...you're right.  Sorry; my error.

-- 
Jonathan "Dataweaver" Lang


Re: Range and continuous intervals

2009-02-27 Thread Jon Lang
Darren Duncan wrote:
> I don't know if this was previously discussed and dismissed but ...
>
> Inspired by some recent discussion in the "comparing inexact values" thread
> plus some temporal discussion and some older thoughts ...
>
> I was thinking that Perl 6 ought to have a generic interval type that is
> conceptually like Range, in that it is defined using a pair of values of an
> ordered type and includes all the values between those, but unlike Range
> that type is not expected to have discrete consecutive values that can be
> iterated over.

Note that smart-matching currently treats Range as an interval.  The
question is whether we need intervals for any other purpose.  If we
do, perhaps we could still press Range into service, but indicate that
there are no discrete consecutive values by saying something like
":step(0)" (where "0" is being used to mean "arbitrarily small" rather
than truly zero).  This would disable some uses (such as using it as a
looping mechanism or counting the number of items that it
encompasses), but might enable some others.

> I'm thinking of a Range-alike that one could use with Rat|Num or Instant
> etc, and not just Int etc.  There would be operators to test membership of a
> value in the interval, and set-like operators to compare or combine
> intervals, such as is_inside, is_subset, is_overlap, union, intersection,
> etc.  Such an interval would be what you use for inexact matching and would
> be the result of a ± infix operator or % postfix operator.  Also, as Range
> has a .. constructor, this other type should have something.

(Not postfix:<%>, please; that should produce a single value by itself.)

Instead of saying "set-like operators", etc., why not just say that
Interval does Set?  The conceptual difference would be that it is a
set of continuous values rather than a set of discrete values... This
should also allow you to do things like "any($i)" (where $i is an
Interval).  Indeed, a Set membership test strikes me as being
equivalent to smart-matching an any-junction of the Set.

As for a constructor: as hinted above, ".." could probably pull
double-duty as a constructor for both Ranges and Intervals, based on
the spacing (or absence thereof) between members.  One _might_ even go
so far as to say that :step (or whatever it's called) defaults to 1
for discrete types (such as Int) and to 0 for continuous types (such
as Rat); so "0 .. 1" and "0.0 .. 1.0 :step(1)" would each create a
two-item Range, while "0 .. 1 :step(0)" and "0.0 .. 1.0" would each
create an Interval.

-- 
Jonathan "Dataweaver" Lang


Re: r25626 - docs/Perl6/Spec

2009-02-27 Thread Jon Lang
Let me see if I'm grasping the concept here: by default, all functions
are British in the sense that they always do things the British way no
matter where they are in the world: their behavior is determined by the
culture in which they were raised.  In contrast, lifted code "goes
native" and does things according to the local culture and customs
unless you explicitly tell it to do otherwise.  Does this sound about
right?


Re: r25626 - docs/Perl6/Spec

2009-02-27 Thread Jon Lang
> +Note that in each piec of lifted code there are references to

Typo: s/piec/piece/

-- 
Jonathan "Dataweaver" Lang


Re: Range and continuous intervals

2009-02-27 Thread Jon Lang
On Fri, Feb 27, 2009 at 1:19 AM, Darren Duncan  wrote:
> Jon, I like all of your stated ideas in general.  I also don't care about
> the postfix %; that was just a concept example pulled from the inexact
> comparison thread.  The idea of using zero is also appropriate conceptually
> with shades of how calculus works.  I actually prefer the existing Range
> constructor of .. visually, as long as it works unambiguously in both roles.

Yeah; I was thinking of calculus when I made that suggestion; still,
it's intuitive enough of an approach that I don't think that someone
who lacks calculus training would have a problem with it.  And I was
aiming for an unambiguous way of distinguishing between Range
construction and Interval construction.

>  It does make a lot of sense for Range to do Set.  So we just need some way,
> probably a role, to distinguish whether this broader definition of a set is
> enumerable or not on a case by case basis.  When talking about any/all/etc
> this also means that junctions would have to work with the non-enumerable
> set. -- Darren Duncan

I'm not sure that it _does_ make sense for Range to do Set: in
S02, Range is among the immutable types that are said to do
Positional, while Set is among the ones that are said to do
Associative; and I'm leery about mixing Positional and Associative
without good reason.

More generally, though, both Positional and Associative carry an
unstated assumption that the object in question will be using a
discrete type such as Int or Str as the index or key.  In the
abstract, what we're talking about here is how one would handle an
object that uses a continuous type such as Num as the index or key.
As mentioned before, "loop through the members" (or, in the case of
Junctions, "autothread the members") simply isn't feasible with a
continuous key.

One question is whether Intervals should be Positional (i.e.,
list-like) or Associative (i.e., Set-like).  The former has the
advantage that Ranges are Positional, meaning that Intervals would
conform closely to Ranges, which is the intuitive result of having
them share a constructor.  The latter is what we both jumped to when
we first considered the idea of Intervals in their own right: we want
the Set operations, which Positional objects don't have; and indices
aren't particularly useful for Intervals.

As well, your concept about "disjoint intervals" maps quite nicely to
a Set of Nums (or other continuous types such as Rat or Instant); with
that concept, I'm not sure that we would actually _need_ an Interval
type: we could just have the .. constructor return a Set of Num (or
Rat, or Instant, etc.) when the step is zero, and a Range object
otherwise.  This, of course, assumes that Set is up to the task of
handling the continuous nature of Num, and doesn't merely collect a
bunch of discrete points on the Num line.

So start by addressing the issue of how to handle continuous indices
and/or keys.  Are lists conceptually compatible with the notion of
continuous indices, or is the whole idea too much of a divergence from
the underlying concept?  A basic assumption about lists is that if you
provide custom indices for a list, they can be mapped one-to-one to
non-negative integers; and continuous values, by definition, cannot be
so mapped.  So I'd say that whether or not continuous indices in
general are viable, they are _not_ viable for List.

Keys, OTOH, don't have any such requirement; so continuous keys may
very well be doable.  If they _are_ doable, you have to ask questions
such as "how do I assign values to a continuous interval of keys?"  To
truly be robust, we ought also answer this question in terms of
multidimensional keys, which can be a nightmare.

-- 
Jonathan "Dataweaver" Lang


pod variables?

2009-02-27 Thread Jon Lang
Under the section about twigils in S02, "$=var" is described as a "pod
variable".  I'm not finding any other references to pod variables;
what are they, and how are they used?  (In particular, I'm wondering if
they're a fossil; if they aren't, I'd expect further information about
them to be in S26.)

-- 
Jonathan "Dataweaver" Lang


Re: Range and continuous intervals

2009-02-28 Thread Jon Lang
Darren Duncan wrote:
> In reply to Jon Lang,
>
> What I'm proposing here in the general case, is a generic collection type,
> "Interval" say, that can represent a discontinuous interval of an ordered
> type.  A simple way of defining such a type is that it is a "Set of Pair of
> Ordered", where each Pair defines a continuous interval in terms of 2
> ordered end-points, and the whole set is discontinuous (or replace Pair with
> 2-element Seq).

Note that you lose the distinction between inclusive and exclusive
endpoints if you do this.  That is one benefit that Range has over
Pair: Range _does_ keep track of whether its endpoints are inclusive
or exclusive.  The main problem with Range is that it is (usually?)
enumerable - which is generally considered to be a feature, but which
gets in the way in the case of using it to represent a continuous
interval.  Thus my suggestion of somehow doing away with the
enumerability if the step size is zero.  As such, I'd be inclined to
define Interval as a Range-like role that isn't enumerable,
representing a continuous span between its lower and upper boundaries.
 A "disjoint interval" would be a role that usually delegates to an
internal Set of Interval of Type, except when it needs to do otherwise
in order to resemble a Set of Type.
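
A minimal sketch of such a role (parametric-role syntax per the drafts;
the names are mine):

    role Interval[::T] {
        has T $.min;
        has T $.max;
        has Bool $.min_excluded = False;
        has Bool $.max_excluded = False;

        # smart-matching tests membership; nothing here enumerates values
        method ACCEPTS(T $x) {
            ($.min_excluded ?? $x > $.min !! $x >= $.min)
            && ($.max_excluded ?? $x < $.max !! $x <= $.max);
        }
    }

    my $unit = Interval[Real].new(min => 0, max => 1, max_excluded => True);
    0.5 ~~ $unit;    # True
    1   ~~ $unit;    # False - the upper bound is excluded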

-- 
Jonathan "Dataweaver" Lang


Re: Range and continuous intervals

2009-03-01 Thread Jon Lang
Thomas Sandlaß wrote:
> The benefit of a dedicated Interval type comes from supporting set
> operations (&), (|) etc. which are still unmentioned in S03.

Have set operations been implemented in either Rakudo or Pugs?

> BTW,
> what does (1..^5).max return? I think it should be 4 because this
> is the last value in the Range. Only in 4.7 ~~ 1..^5 does the five
> matter. How does ~~ retrieve that information? For open intervals
> the .min and .max methods should return the bound outside. Or better,
> we should introduce infimum and supremum as .inf and .sup respectively.

No offense, but I've noticed a tendency on your part to suggest highly
technical names for things.  "Infimum" and "supremum" may be
technically accurate; but I wouldn't know, since I don't know for
certain what they mean.  "Min" and "max" may be misleading in terms of
the specifics; but they get the general point across.

>> I'm thinking of a Range-alike that one could use with Rat|Num or Instant
>> etc, and not just Int etc.  There would be operators to test membership of
>> a value in the interval, and set-like operators to compare or combine
>> intervals, such as is_inside, is_subset, is_overlap, union, intersection,
>> etc.  Such an interval would be what you use for inexact matching and would
>> be the result of a ± infix operator or % postfix operator.  Also, as Range
>> has a .. constructor, this other type should have something.
>
> Since the Interval type shall be an addition to the Set subsystem I
> propose the (..) Interval creation operator together with the open
> versions. Admittedly that leaves (^..^) at a lengthy six chars. But
> Jon's proposal of setting the by to zero isn't any shorter. Note that
> the open versions are more important for Interval than they are for Range
> because for Range you can always write it without ^ at the ends.

In defense of my proposal, note that it _can be_ shorter if you say
that :by defaults to zero when the endpoints are of continuous types
such as Num, Instant, or Rat.  That said, I admit that my proposal is
something of a kludge - albeit the type of kludge that Perl tends to
embrace (where reasonable, make a best guess as to the programmer's
intentions; but provide a means for him to be more explicit if your
guess is wrong).

-- 
Jonathan "Dataweaver" Lang


Re: new Capture behavior (Was: Re: r25685 - docs/Perl6/Spec)

2009-03-05 Thread Jon Lang
Daniel Ruoso wrote:
> Daniel Ruoso escreveu:
>> What really got me confused is that I don't see what problem this change
>> solves, since it doesn't seem that a signature that expects an invocant
>> (i.e.: cares about invocant) will accept a call without an invocant, so
>> "method foo($b,$c) is export" still need to have a transformed signature
>> in the sub version of foo.
>
> Thinking again,
>
> Unless that actually means that we're really removing all the runtime
> semantics around the invocant... and methods implicitly do "my $self =
> shift" all the time...
>
> That'd be sad... we loose "invocant" semantics... SMOP will require a
> HUGE refactoring... :(

Remember that Captures are also being used as Perl 6's answer to
references.  When used in that way, problems arise when you treat "a
single item" as being fundamentally different from "a one-item list".

-- 
Jonathan "Dataweaver" Lang


Re: new Capture behavior (Was: Re: r25685 - docs/Perl6/Spec)

2009-03-05 Thread Jon Lang
Darren Duncan wrote:
> Here's a question:
>
> Say I had an N-adic routine where in OO terms the invocant is one of the N
> terms, and which of those is the invocant doesn't matter, and what we really
> want to have is the invocant automatically being a member of the input list.

How about allowing submethods to be declared outside of classes and
roles as well as inside them?  When declared inside a class or role,
it is declared in "has" scope, and works as described; when declared
elsewhere, it is declared in "my" scope, and works just like a sub,
except that you're forced to specify an invocant in the signature(s).
You could then use the same sort of tricks that subs can use to
specify alternate signatures, including the one that lets the
positional arguments be filled out in any order.

-- 
Jonathan "Dataweaver" Lang


Re: new Capture behavior (Was: Re: r25685 - docs/Perl6/Spec)

2009-03-05 Thread Jon Lang
OK; let me get a quick clarification here.  How does:

say "Hello, World!";

differ from:

"Hello, World!".say;

or:

say $*OUT: "Hello, World!";

in terms of dispatching?  And more generally, would there be a
reasonable way to write a single routine (i.e., implementation) that
could be invoked by a programmer's choice of these calling
conventions, without redirects (i.e., code blocks devoted to the sole
task of calling another code block)?

Could you use binding?

    my sub say (String *$first, String *@rest, OStream :$out = $*OUT,
                OStream :$err = $*ERR)
    { ... }

    role String {
        has &say:(String $first: String *@rest, OStream :$out = $*OUT,
                  OStream :$err = $*ERR)
            := &OUTER::say;
    }

That (or something like it) might be doable.  But in the spirit of
TIMTOWTDI, I'd like to explore another possibility: what difficulties
would arise from allowing subs to have signatures with invocants,
which in turn allow the sub to be called using method-call syntax
(though possibly not method dispatch semantics)?  In effect, allow
some syntactic sugar that allows properly-sigged subs outside of a
role to masquerade as methods of that role.
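
For instance (the invocant colon inside a sub signature is exactly the
speculative part):

    sub shouted(Str $s:) { $s.uc ~ '!' }    # an ordinary sub, but with an invocant

    say shouted("help");    # normal sub call
    say "help".shouted;     # proposed: method-call syntax would also find it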

-- 
Jonathan "Dataweaver" Lang


Re: r25807 - docs/Perl6/Spec

2009-03-12 Thread Jon Lang
> +To declare an item that is parsed as a simple term, you must use the
> +form C<< term:<foo> >>, or some other form of constant declaration such
> +as an enum declaration.  Such a term never looks for its arguments,
> +is never considered a list prefix operator, and may not work with
> +subsequent parentheses because it will be parsed as a function call
> +instead of the intended term.  (The function in question may or
> +may not exist.)  For example, C<rand> is a simple term in Perl 6
> +and does not allow parens, because there is no C<rand()> function
> +(though there's a C<$n.rand> method).

So if I were to say:

rand $n:

is the compiler smart enough to notice that trailing colon and
recognize this as an indirect method call rather than two adjacent
terms?  Or would I have to say:

rand($n:)

to get the indirect method call?

-- 
Jonathan "Dataweaver" Lang


Re: r25807 - docs/Perl6/Spec

2009-03-14 Thread Jon Lang
On Sat, Mar 14, 2009 at 7:29 AM, Larry Wall  wrote:
> : So if I were to say:
> :
> :     rand $n:
> :
> : is the compiler smart enough to notice that trailing colon and
> : recognize this as an indirect method call rather than two adjacent
> : terms?
>
> No, currently under STD you get:
>
>    Obsolete use of rand(N); in Perl 6 please use N.rand or (1..N).pick 
> instead at (eval) line 1:
>
> : Or would I have to say:
> :
> :     rand($n:)
> :
> : to get the indirect method call?
>
> That would work, but then why not:
>
>    rand*$n
>    $n*rand
>    $n.rand
>    (1..$n).pick
>
> In fact, given that you usually want to integerize anyway, I could
> almost argue myself out of supporting the $n.rand form as well...

It's largely a matter of principle: if I can say $x.foo, I expect to
be able to say foo $x: as well.  Every time you introduce an exception
to the rule, you're throwing something in that has the potential to
cause confusion; so you should do so with some caution.  I think that
"best uses" should include something to the effect of "avoid using the
same identifier for both a term and a routine".  IMHO, rand should
either be a term or a method; but not both.

There are also some linguistic reasons for this "best uses" proposal:
people tend to think of terms as nouns and routines as verbs.  And
gerunds are more akin to "&foo" than to "term:<foo>".

Left-field idea here: there was recently some discussion on this list
about the possibility of continuous ranges, which would be in contrast
to how 1..$n is a discrete list of options.  If you were to do this,
then you could use .pick on a continuous range to generate a random
number anywhere within its bounds.  So:

(1 to 5).pick

(where infix:<to> creates a continuous range, inclusive of both
boundaries) would in theory be as likely to return 2.5 or pi as 3.
IMHO, this does a better job of handling what most people want rand to
do when they start thinking in terms of assigning parameters to it.
And with that in place, rand could become a term that's short for
something like:

pick (0 to^ 1):

-- 
Jonathan "Dataweaver" Lang


Re: a junction or not

2009-03-15 Thread Jon Lang
This isn't the first (or second, or third, or fourth...) time that
I've seen complications arise with regard to junctions.  Every time,
the confusion arises when some variation of the question "is it a
junction?" is raised.  Ultimately, this is because Perl is trying it's
darnedest to treat Junctions as their members; this is something that
it doesn't try anywhere else, leading to a counterintuitive approach.
The other big example of this phenomenon that I've seen has to do with
passing parameters into a routine: Should the code auto-parallelize,
executing the code separately for each value in the junction, or
should it execute only once, passing the parallelization buck on to
whatever routines it happens to call?  That is, should parallelization
be eager or lazy?  (I'm pretty sure that this was resolved, although
I'm not recalling how.)

The problem that Richard just identified is that Junctions don't fully
manage to hide themselves when it comes to their method calls.  Let's
say that I'm writing a role that deals with quantum mechanics  (say,
Quantum), and for whatever reason I decide that I need an eigenstate
method.  Now we've got a problem: if I ever find myself dealing with a
Junction that has at least one Quantum among its eigenstates, what
happens when I call the .eigenstates method?  I'm betting that I'll
get Junction.eigenstates, rather than a Junction of
Quantum.eigenstates.  In fact, I see no easy way to get the latter.

I'm thinking that maybe Junction shouldn't be a type.  Instead, it
should be a "meta-type", in (very rough) analogy to the concept of the
meta-operator.  In particular, Junction bears a certain resemblance to
the hyper-operator.  Thus far, it's the only meta-type; and, like
meta-operators, additional meta-types should be added sparingly.

As I see it, the difference between a type and a meta-type is that all
of the meta-type's methods are accessed via HOW.  You wouldn't say
"$x.eigenstates" to access a Junction's eigenstates; you'd say
"$x.^eigenstates" for that.  (Further speculation: maybe there's a
meta-type that defines the default HOW methods.)

That still doesn't solve the problem that Richard first raised,
though.  To do that, I'm thinking that his original suggestion should
also be implemented: in the same way that an item can be treated as a
one-item list for the purposes of list context (and vice versa), a
non-Junction should be able to be treated as a Junction with a single
eigenstate (i.e., a Singleton) for the purposes of junctive semantics.
 That is, $x.^eigenstates === ($x) if $x is not a junction.  Not only
does this reduce the need to test for junctions, but it also makes
that test fairly straightforward: count the eigenstates.  If you only
have one, it isn't a junction.  (Further speculation: perhaps
undefined values have _no_ eigenstates...)

-- 
Jonathan Lang


Re: Regex syntax

2008-03-18 Thread Jon Lang
Moritz Lenz wrote:
>  I noticed that in larger grammars (like STD.pm and many PGE grammars in
>  the parrot repo) string literals are always quoted for clarity
>
>  regex foo {
> 'literal' <subregex>
>  }
>
>  Since I think this is a good idea, we could just as well enforce that, and
>  drop the <...> around method/regex calls:
>
>  regex foo {
> 'literal' subregex
>  }
>
>  This way we'll approximate "normal" Perl 6 syntax, and maybe even improve
>  huffman coding.
>
>  I guess such a syntax wouldn't be acceptable in short regexes, which are
>  typically written as m/.../, rx/.../ or s[...][..], so we could preserve
>  the old syntax for these regexes.

Nitpick: s[...][..] isn't valid syntax anymore; the correct P6
equivalent is s[...] = qq[..], for reasons that were hashed out on
this list some time ago.  However, s/.../../ is still valid.
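
For anyone who missed that thread, the contrast looks roughly like
this (the patterns and replacements are just placeholders):

    s/foo/bar/;            # still valid
    s[foo] = 'bar';        # the P6 way to spell the old s[foo][bar]
    s[foo] = qq[$x bar];   # the replacement is an ordinary quoted
                           # expression on the right-hand side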

I'm not in favor of the so-called "short forms" having a different
syntax from the "long forms", and I personally like the current syntax
for both.  That said, all's fair if you predeclare: I could see
someone creating a module that allows you to tweak the regex syntax in
a manner similar to what you're proposing, if there's enough of a
demand for it.

-- 
Jonathan "Dataweaver" Lang


Re: Musings on operator overloading

2008-03-26 Thread Jon Lang
Larry Wall wrote:
>  So here's another question in the same vein.  How would mathematicians
>  read these (assuming Perl has a factorial postfix operator):
>
> 1 + a(x)**2!
> 1 + a(x)²!

The "1 + ..." portion is not in dispute: in both cases, everything to
the right of the addition sign gets evaluated before the addition
does.  As such, I'll now concentrate on the right term.

As TsA pointed out, mathematicians wouldn't write the former case at
all; they'd use a superscript, and you'd be able to distinguish
between "a(x) to the two-factorial power" and "(a(x) squared)
factorial" based on whether or not the factorial is superscripted.  So
the latter would be "(a(x) squared) factorial".

OTOH, you didn't ask how mathematicians would write this; you asked
how they'd read it.  As an amateur mathematician (my formal education
includes linear algebra and basic differential equations), I read the
former as "a(x) to the two-factorial power": all unary operators, be
they prefix or postfix, should be evaluated before any binary operator
is.
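
Spelled out with explicit parentheses, the two candidate readings are:

    1 + a(x)**(2!)    # factorial binds to the 2: my reading
    1 + (a(x)**2)!    # factorial applies last: the superscript reading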

-- 
Jonathan "Dataweaver" Lang


Re: Musings on operator overloading

2008-03-26 Thread Jon Lang
Larry Wall wrote:
>  Now, I think I know how to make the parser use precedence on either
>  a prefix or a postfix to get the desired effect (but perhaps not going
>  both directions simulatenously).  But that leads me to a slightly
>  different parsing question, which comes from the asymmetry of postfix
>  operators.  If we make postfix:<!> do the precedence trick above with
>  respect to infix:<**> in order to emulate the superscript notation,
>  then the next question is, are these equivalent:
>
> 1 + a(x)**2!
> 1 + a(x)**2.!

To me, both of these should raise a(x) to the 2-factorial power; so
yes, they should be equivalent.

>  likewise, should these be parsed the same?
>
> $a**2i
> $a**2.i

In terms of form, or function?  Aesthetically, my gut reaction is to
see these parse the same way, as "raise $a to the power of 2i"; in
practice, though, "2i" on a chalkboard means "2 times i", where "i" is
unitary.  Hmm...

Again, though, this isn't a particularly fair example, as mathematical
notation generally uses superscripting rather than "**" to denote
exponents, allowing the presence or absence of superscripting on the
"i" to determine when it gets evaluated.  That is, the mathematical
notation for exponents includes an implicit grouping mechanism.

>  and if so, how to we rationalize a class of postfix operators that
>  *look* like ordinary method calls but don't parse the same.  In the
>  limit, suppose someone defines a postfix "say" looser than comma:
>
> (1,2,3)say
> 1,2,3say
> 1,2,3.say
>
>  Would those all do the same thing?

Gut reaction: the first applies "say" to the list "1, 2, 3"; the
second and third apply "say" to 3.

>  Or should we maybe split postfix
>  dot notation into two different characters depending on whether
>  we mean normal method call or a postfix operator that needs to be
>  syntactically distinguished because we can't write $a.i as $ai?
>
>  I suppose, if we allow an unspace without any space as a degenerate
>  case, we could write $a\i instead, and it would be equivalent to ($a)i.
>  And it resolves the hypothetical postfix: above:
>
> 1,2,3.say   # always a method, means 1, 2, (3.say)
> 1,2,3\ say  # the normal unspace, means (1, 2, 3)say
> 1,2,3\say   # degenerate unspace, means (1, 2, 3)say
>
>  This may also simplify the parsing rules inside double quoted
>  strings if we don't have to figure out whether to include postfix
>  operators like .++ in the interpolation.

I'm in favor of the "minimalist unspace" concept, independent of the
current concerns.  In effect, it lets you insert the equivalent of
whitespace anywhere you want (except in the middle of tokens), even if
whitespace would normally be forbidden; and it does so in a way that
takes up as little space as possible.

>  It does risk a visual
>  clash if someone defines postfix:<t>:
>
> $x\t# means ($x)t
> "$x\t"  # means $x ~ "\t"
>
>  I deem that to be an unlikely failure mode, however.

:nod: Alphanumeric postfixes ought to be rare compared to symbolic postfixes.

-- 
Jonathan "Dataweaver" Lang


Re: Musings on operator overloading

2008-03-26 Thread Jon Lang
TSa wrote:
>  Jon Lang wrote:
>  > all unary operators, be
>  > they prefix or postfix, should be evaluated before any binary operator
>  > is.
>
>  Note that I see ** more as a parametric postscript then a real binary.
>  That is $x**$y sort of means $x(**$y).

That's where we differ, then.  I'm having trouble seeing the benefit
of that perspective, and I can clearly see a drawback to it - namely,
you have to think of infix:<**> as being a different kind of thing
than infix:«+ - * /», despite having equivalent forms.

>  Note also that for certain
>  operations only integer values for $y make sense. E.g. there's no
>  square root of a function.

...as opposed to a square root of a function's range value.  That is,
you're talking in terms of linear algebra here, where "D²(x)" means
"D(D(x))", as opposed to basic algebra, where "f²(x)" means "(f(x))²".
 This is similar to your earlier "the other Linear" comment.

This is a case where the meaning of an operator will depend on the
system that you're dealing with.  Math is full of these, especially
when it comes to superscripts and subscripts.  I'd recommend sticking
to the basic algebraic terminology for the most part (e.g., "f²(x) :=
(f(x))²"), and apply "all's fair if you predeclare" if you intend to
use a more esoteric paradigm.  So if you want:

  D²(x) + 2D(x) + x

to mean:

  D(D(x)) + 2 * D(x) + x

You should say:

  use linearAlgebra;
  D²(x) + 2D(x) + x

-- 
Jonathan "Dataweaver" Lang


Re: Musings on operator overloading

2008-03-26 Thread Jon Lang
On Wed, Mar 26, 2008 at 12:03 PM, Larry Wall <[EMAIL PROTECTED]> wrote:
> On Wed, Mar 26, 2008 at 11:00:09AM -0700, Jon Lang wrote:
>
> : all unary operators, be they prefix or postfix, should be evaluated
>  : before any binary operator is.
>
>  And leaving the pool of voting mathematicians out of it for the moment,
>  how would you parse these:
>
> sleep $then - $now
> not $a eq $b
> say $a ~ $b
> abs $x**3

Those don't strike me as being unary operators; they strike me as
being function calls that have left out the parentheses.  Probably
because they're alphanumeric rather than symbolic in form.

>  These all work only because unaries can be looser than binaries.
>  And Perl 5 programmers will all expect them to work, in addition to
>  -$a**2 returning a negative value.

True enough.  Perhaps I should have said "as a rule of thumb..."

>  And we have to deal with unary
>  precedence anyway, or !%seen{$key}++ doesn't work right...
>
>  Larry
>



-- 
Jonathan "Dataweaver" Lang


Re: Musings on operator overloading

2008-03-26 Thread Jon Lang
Mark J. Reed wrote:
> Jon Lang wrote:
>  >  Those don't strike me as being unary operators; they strike me as
>  >  being function calls that have left out the parentheses.
>
>  At least through Perl5, 'tain't no difference between those two in Perl land.

True enough - though the question at hand includes whether or not
there should be.

Perhaps a distinction should be made between prefix and postfix.
After all, postfix already forbids whitespace.  True, the original
reason for doing so was to distinguish between infix and postfix; but
it tends to carry the implication that postfix operators are kind of
like method calls, while prefix operators are kind of like function
calls.

Personally, I'd like to keep that parallel as much as possible.
Unless it can be shown to be unreasonable to do so. :)

>  As for binary !, you could say posit that the second operand is the
>  degree of multifactorialhood, defaulting to 1; e.g. x!2 for what
>  mathematicians would write as x‼, which definitely does not mean
>  (x!)!.

Wouldn't that be "x ! 2"?  Mandatory whitespace (last I checked),
since infix:<!> had better not replace postfix:<!>.

>  Oh, and I've always mentally filed -x as shorthand for (-1*x); of
>  course, that's an infinitely recursive definition unless -1 is its own
>  token rather than '-' applied to '1'.

It's also only guaranteed to work if x is numeric; even in math,
that's not certain to be the case. :)

-- 
Jonathan "Dataweaver" Lang


Re: Musings on operator overloading

2008-03-26 Thread Jon Lang
Thom Boyer wrote:
>  But the main point I was trying to make is just that I didn't see the
>  necessity of positing
>
>  1,2,3\say
>
>  when (if I understand correctly) you could write that as simply as
>
>  1,2,3 say

Nope.  This is the same situation as the aforementioned '++' example,
in that you'd get into trouble if anyone were to define an infix:<say>
operator.

>  That seems better to me than saying that there's no tab character in
>
>  say "blah $x\t blah"

Whoever said that?

>  Backslashes in double-quotish contexts are already complicated enough!

...and they'd remain precisely as complicated as they are now, because
backslashes in interpolating quotes and in patterns would continue to
behave precisely as they do now.  Backslash-as-unspace would remain
unique to "code" context, as it is now, changing only in that it gets
followed by \s* instead of \s+.  In particular, if you were to define
a postfix:<t> operator, you'd embed it in a string as:

say "blah {$x\t} blah"

-- 
Jonathan "Dataweaver" Lang


postfix and postcircumfix

2008-04-02 Thread Jon Lang
In "Question on your last change to S02", Larry Wall wrote:
>  (By the way, you'll note the utility of being able to talk about a
>  postfix by saying .[], which is one of the reasons we allow the optional
>  dot there. :)

Can I take this as an indication that the rules for postcircumfix
operators are an extension of the rules for postfix operators?

-- 
Jonathan "Dataweaver" Lang


Re: First look: Advanced Polymorphism whitepaper

2008-04-28 Thread Jon Lang
John M. Dlugosz wrote:
> Here is a first look at the ideas I've worked up concerning the Perl 6 type
> system.  It's an overview of the issues and usage of higher-order types in
> comparison with traditional subtyping subclasses.
>
>  http://www.dlugosz.com/Perl6/

Very interesting, if puzzling, read.

I'm having some difficulty understanding the business with £.  I
_think_ that you're saying that £ sort of acts as a prefix operator
that changes the meaning of the type with which it is associated; and
the only time that a change in meaning occurs is if the type in
question makes use of ::?CLASS or a generic parameter.

You say that in Perl 6, a role normally treats ::?CLASS as referring
to the role.  Perhaps things have changed while I wasn't looking; but
I was under the impression that Perl 6 roles try to be as transparent
as possible when it comes to the class hierarchy.  As such, I'd expect
::?CLASS to refer to whatever class the role is being composed into,
rather than the role that's being composed.  If you have need to
reference the role itself, I'd expect something like ::?ROLE to be
used instead.  Of course, without something like your '£', the only
way to decide whether you actually want the class type or the role
type would be within the definition of the role itself; and without a
sort of "reverse '£'", there would be no way to take a role that
refers to ::?ROLE and make it refer to ::?CLASS instead.  That said,
I'm not convinced that this is a problem.
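
A sketch of what I mean, with the role, class, and method names all
invented for illustration:

    role Cloneable {
        method dup ( --> ::?CLASS ) { ... }   # the composing class,
                                              # on my reading; use
                                              # something like ::?ROLE
                                              # to mean Cloneable itself
    }

    class Dog does Cloneable { }
    # Dog.new.dup would then be typed as returning a Dog,
    # not a Cloneable.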

As for classes and roles that have generic parameters: here, you've
completely lost me.  How does your proposed '£' affect such classes
and roles?

-- 
Jonathan "Dataweaver" Lang


Re: First look: Advanced Polymorphism whitepaper

2008-04-28 Thread Jon Lang
TSa wrote:
>  The use of £ in
>
>   sub foo (£ pointlike ::PointType $p1, PointType $p2 --> PointType)
>
>  is that of *structural* subtyping. Here FoxPoint is found to be
>  pointlike. In that I would propose again to take the 'like' operator
>  from JavaScript 2. Doing that the role should be better named Point
>  and foo reads:
>
>   sub foo (like Point ::PointType $p1, PointType $p2 --> PointType)
>
>  This is very useful to interface between typed and untyped code.
>  With the 'like' the role Point has to be *nominally* available
>  in the argument. There's no problem with 'like'-types being more
>  expensive than a nominal check.

Ah; that clears things up considerably.  If I understand you
correctly, John is using '£' to mean "use Duck Typing here".  _That_,
I can definitely see uses for.  As well, spelling it as 'like' instead
of '£' is _much_ more readable.  With this in mind, the above
signature reads as "$p1 must be like a Point, but it needn't actually
be a Point.  Both $p2 and the return value must be the same type of
thing that $p1 is."

What, if anything, is the significance of the fact that pointlike (in
John's example; 'Point' in TSa's counterexample) is generic?

-- 
Jonathan "Dataweaver" Lang


Re: First look: Advanced Polymorphism whitepaper

2008-04-28 Thread Jon Lang
chromatic wrote:
> Jon Lang wrote:
>  > Ah; that clears things up considerably.  If I understand you
>  > correctly, John is using '£' to mean "use Duck Typing here".  _That_,
>  > I can definitely see uses for.  As well, spelling it as 'like' instead
>  > of '£' is _much_ more readable.  With this in mind, the above
>  > signature reads as "$p1 must be like a Point, but it needn't actually
>  > be a Point.  Both $p2 and the return value must be the same type of
>  > thing that $p1 is."
>
>  That was always my goal for roles in the first place.  I'll be a little sad if
>  Perl 6 requires an explicit notation to behave correctly here -- that is, if
>  the default check is for subtyping, not polymorphic equivalence.

By my reading, the default behavior is currently nominal typing, not
duck-typing.  That said, my concern isn't so much about which one is
the default as it is about ensuring that the programmer isn't stuck
with the default.  Once it's decided that Perl 6 should support both
duck-typing and nominal typing, _then_ we can argue over which
approach should be the default, and how to represent the other
approach.

-- 
Jonathan "Dataweaver" Lang


Re: First look: Advanced Polymorphism whitepaper

2008-04-28 Thread Jon Lang
John M. Dlugosz wrote:
> TSa wrote:
> > Jon Lang wrote:
> > > I'm having some difficulty understanding the business with £.  I
> > > _think_ that you're saying that £ sort of acts as a prefix operator
> > > that changes the meaning of the type with which it is associated; and
> > > the only time that a change in meaning occurs is if the type in
> > > question makes use of ::?CLASS or a generic parameter.
> >
> > The difference seems to be the two definitions of bendit
> >
> >  sub bendit (IBend ::T $p -->T)
> >  {
> > IBend $q = get_something;
> > my T $result= $p.merge($q);
> > return $result;
> >  }
> >
> >  sub bendit (£ IBend ::T $p -->T)
> >  {
> > T $q = get_something;
> > my T $result= $p.merge($q);
> > return $result;
> >  }
> >
> > The interesting thing that is actually left out is the return type
> > of get_something. I think in both cases it does the IBend role but
> > in the second definition it is checked against the actual type T
> > which is Thingie if called with a Thingie for $p. So the advantage
> > of this code is that the compiler can statically complain about the
> > return type of get_something. But I fail to see why we need £ in
> > the signature to get that.
>
>  In the top example, merge has to be declared with invariant parameter
> types, so the actual type passed "isa" IBend.  That means merge's parameter
> is IBend.  If get_something returned the proper type, it would be lost.
>
>  In the lower example, the merge parameter is allowed to be covariant.  The
> actual type is not a subtype of IBend.  The parameter to merge is checked to
> make sure it is also T.  The £ means "use the higher-order £ike-this" rather
> than "isa" substitutability.
>
>  The issue is how to give covariant parameter types =and= minimal type
> bounds for T at the same time.

Perhaps it would be clearer if you could illustrate the difference between

   sub bendit (£ IBend ::T $p -->T)
   {
  T $q = get_something;
  my T $result= $p.merge($q);
  return $result;
   }

and

   sub bendit (IBend ::T $p -->T)
   {
  T $q = get_something;
  my T $result= $p.merge($q);
  return $result;
   }

Or perhaps it would be clearer if I actually understood what
"covariant" means.

> > The use of £ in
> >
> >  sub foo (£ pointlike ::PointType $p1, PointType $p2 --> PointType)
> >
> > is that of *structural* subtyping. Here FoxPoint is found to be
> > pointlike. In that I would propose again to take the 'like' operator
> > from JavaScript 2. Doing that the role should be better named Point
> > and foo reads:
> >
> >  sub foo (like Point ::PointType $p1, PointType $p2 --> PointType)
> >
> > This is very useful to interface between typed and untyped code.
> > With the 'like' the role Point has to be *nominally* available
> > in the argument. There's no problem with 'like'-types being more
> > expensive than a nominal check.
>
>  Yes, with Point would work for matching as well as pointlike.  When the
> covariant parameter type destroys the "isa" relationship between Point and
> Point3D, "£ Point" will still indicate conformance to the "like" rules.
>
>  I like "like" as the ASCII synonym to £, but didn't want to get into that
> in the whitepaper.  I wanted to concentrate on the need for a higher-order
> type check, not worry about how to modify the grammar.

OK; how does "higher-order type checks" vs. "isa relationships" differ
from "duck typing" vs. "nominal typing"?

-- 
Jonathan "Dataweaver" Lang


Re: treatment of "isa" and inheritance

2008-04-30 Thread Jon Lang
Brandon S. Allbery KF8NH wrote:
> TSa wrote:
> > I totally agree! Using 'isa' pulls in the type checker. Do we have the
> > same option for 'does' e.g. 'doesa'? Or is type checking always implied
> > in role composition? Note that the class can override a role's methods
> > at will.
>
>  It occurs to me that this shouldn't be new keywords, but adverbs, i.e. ``is
> :strict Dog''.

Agreed.  I'm definitely in the category of people who find the
difference between "is" and "isa" to be, as Larry put it, eye-glazing.
 I can follow it, but that's only because I've been getting a crash
course in type theory.

Brandon's alternative has the potential to be less confusing given the
right choice of adverb, and has the added bonus that the same adverb
could apply equally well to both 'is' and 'does'.

On a side note, I'd like to make a request of the Perl 6 community
with regard to coding style: could we please have adverbal names that
are, well, adverbs?  "is :strict Dog" brings to my mind the English
"Fido is a strict dog", rather than "Fido is strictly a dog".  Not
only is "is :strictly Dog" more legible, but it leaves room for the
possible future inclusion of adjective-based syntax such as "big Dog"
(which might mean the same thing as "Dog but is big" or "Dog where
.size > Average").  To misquote Einstein, things should be as simple
as is reasonable, but not simpler.
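
Concretely, the sort of thing I'd like to be able to write (the
adverb and class names are illustrative only):

    class Chihuahua is :strictly Dog { ... }       # "is strictly a Dog"
    class Watchdog does :strictly Barking { ... }  # same adverb on does

    # with the adjectival form left open for later, e.g.
    # my big Dog $fido;    # perhaps "Dog where .size > Average"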

-- 
Jonathan "Dataweaver" Lang


Re: OK, ::?CLASS not virtual

2008-04-30 Thread Jon Lang
John M. Dlugosz wrote:
>  And you can use CLASS in a role also, confident that it will be looked up
> according to the normal rules when the class is composed using that role,
> just like any other symbol that is not found when the role is defined.
> Using ::?CLASS in a role is an error (unless you mean the class surrounding
> this role's definition, in which case it is a warning).

Can a role inherit from a class?  If so, to which class does CLASS
refer: the inherited one, or the one into which the role is composed?
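
That is (names invented, and assuming such inheritance is even legal):

    class Animal { }

    role Barking is Animal {
        method species { CLASS }   # Animal, or the class this role
                                   # ends up composed into?
    }

    class Dog does Barking { }     # what does Dog.new.species return?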

-- 
Jonathan "Dataweaver" Lang


Re: First look: Advanced Polymorphism whitepaper

2008-04-30 Thread Jon Lang
On Wed, Apr 30, 2008 at 9:58 PM, Brandon S. Allbery KF8NH
<[EMAIL PROTECTED]> wrote:
>
>  On May 1, 2008, at 0:53 , chromatic wrote:
>
>
> > correctness sense.  Sadly, both trees and dogs bark.)
> >
>
>  Hm, no.  One's a noun, the other's a verb.  Given the linguistic
> orientation of Perl6, it seems a bit strange that the syntax for both is the
> same:  while accessors and mutators are *implemented* as verbs, they should
> *look* like nouns.

In defense of chromatic's point, both people and syrup run.

-- 
Jonathan "Dataweaver" Lang


  1   2   3   4   >