Test Case: Complex Numbers
The following is an attempt to put a number of Perl 6 concepts into practice, in order to see how useful and intuitive they actually are.

Complex numbers come in two representations: rectilinear coordinates and polar coordinates:

    class complexRectilinear {
        has $.x, $.y;

        method infix:+ ($a is complexRectilinear, $b is complexRectilinear) returns complexRectilinear {
            return new complexRectilinear($a.x + $b.x, $a.y + $b.y);
        }

        ...

        method coerce:complexPolar () returns complexPolar {
            return new complexPolar ($.x * $.x + $.y * $.y,
                                     atn($.y / $.x) * (sgn($.x) || sgn($.y)));
        }

        ...
    }

... and a similar definition for class polar.

(Technically, coerce:complexPolar and infix:+ are "generators" in the sense that they create new instances of the class; but ISTR something about generators not being allowed in classes.)

You should then be able to say:

    union complex {
        class complexRectilinear;
        class complexPolar;
    }

...or something of the sort. At this point, you have the ability to freely represent complex numbers in either coordinate system and to switch between them at will. Am I right so far?

Next:

    multi exp($a is complex, $b is complex) returns complex { ... }

This defines a function that raises one complex number to the power of another complex number and returns a third complex number. One problem is that if $b's real component isn't an integer, there's no guarantee that there will be a unique answer. To address this, we could provide a second version of this function:

    multi exp($a is complex, $b is complex) returns list of complex { ... }

However, if $b's real component isn't a rational number, you won't have a finite number of elements in the list. Here's where I get a little fuzzy: IIRC, there's some means of defining a list by providing an element generator:

    generator element($index) returns complex { ... }

Is there a way of setting things up so that an attempt to ascertain the list's length would call a separate generator for that purpose?

    generator length() returns complex { ... }

At any point above, am I abusing the concept of theories, or failing to use them to their fullest extent?

With this example, it would be useful if the list of results could extend in both directions, so that one could advance through them in a clockwise or counterclockwise direction. If $b's real component is rational, it makes some sense to treat [-1] as the last element in the list, [-2] as the second-to-last, and so on; but if $b's real component is irrational, there _is_ no positive end to the list, and it would make sense if [-1] referred to the first result that you reach by rotating in the clockwise direction from the primary result. Can an "infinitely long" list be set up so that [-1] still has meaning?

Also, even if $b's real component is rational, there are times that one might want to access a version of the result that is rotated some multiple of 2*pi radians from one of the main results. As an example, let's say that $b = 0.5. This means that the first result is rotated zero radians counterclockwise from $a's direction, and corresponds to [0]. [1] would correspond to being rotated pi radians, and the length generator would say that there are only two elements in the list. Could you set things up so that you could nonetheless access [2] (which is rotated by 2*pi radians), [3] (rotated by 3*pi radians), or possibly even have [-1] represent a rotation by -pi radians?

-- Jonathan "Dataweaver" Lang
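For reference, the conversion that the coerce:complexPolar method above is reaching for is the standard magnitude/angle one; note that the magnitude needs a square root, and atan2 sidesteps the atan-and-sign juggling. A minimal sketch in present-day Raku spelling, with illustrative sub names (none of this comes from a Synopsis):

    # Illustrative only: spells out the standard conversion formulas.
    sub rect-to-polar($x, $y) {
        # magnitude is sqrt(x**2 + y**2); atan2 handles the quadrant logic
        return sqrt($x * $x + $y * $y), atan2($y, $x);
    }

    sub polar-to-rect($magnitude, $direction) {
        return $magnitude * cos($direction), $magnitude * sin($direction);
    }

    my ($mag, $dir) = rect-to-polar(3, 4);        # 5, ~0.9273
    my ($x,   $y)   = polar-to-rect($mag, $dir);  # back to 3, 4 (within rounding)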
Re: Test Case: Complex Numbers
Luke Palmer wrote:
> Just some initial thoughts and syntax issues. I'll come back to it on
> the conceptual side a little later.

I'm looking forward to it.

> Jonathan Lang wrote:
> >     method coerce:complexPolar () returns complexPolar {
> >         return new complexPolar ($.x * $.x + $.y * $.y,
> >                                  atn($.y / $.x) * (sgn($.x) || sgn($.y)));
> >     }
>
> Hmm. I don't know why you marked this with coerce:?

Neither do I; I was assuming that the purpose of coerce: is to allow implicit type conversions - that is, if I feed a complexRectilinear object into an argument slot that's looking for a complexPolar object, this method would be called implicitly and the result would be used instead.

> >         ...
> >     }
> >
> > ... and a similar definition for class polar.
> >
> > (Technically, coerce:complexPolar and infix:+ are "generators" in the
> > sense that they create new instances of the class; but ISTR something
> > about generators not being allowed in classes.)
>
> Ahh, you read that in theory.pod. It's not clear that the "factory"
> and "generator" abstractions are all that useful (I'm still looking
> for a good use though). I just included them for symmetry (like
> Maxwell :-).

FWIW, I didn't read theory.pod as thoroughly as I probably should have, nor did I grasp it as well as I would like. But when I looked at the concept of "virtual lists", the union/factory/generator triad seemed to be, conceptually, a perfect fit: the "virtual list" is actually a union, whose factory includes generators that produce information about the "list" on demand. More generally, a union would be the appropriate tool for the magic behind what Perl 5 used ties for.

> Don't think that you're not allowed to return instances of the class
> you are defining from methods. You can. Generators are more
> restrictive: they say that any subclass must return an instance of
> *itself* from that method; i.e. it probably wouldn't be able to use
> your implementation. But that breaks subclassing laws, which is why
> they aren't allowed in classes.

I'm not sure I'm following this.

> > You should then be able to say:
> >
> >     union complex {
> >         class complexRectilinear;
> >         class complexPolar;
> >     }
> >
> > ...or something of the sort. At this point, you have the ability to
> > freely represent complex numbers in either coordinate system and to
> > switch between them at will. Am I right so far?
>
> Hmm, that union looks like a backwards role. That is, you're creating
> a role "complex" and saying that the interface of that role is
> whatever is common between these two classes. Interesting idea... not
> sure how fruitful that is though.

That wasn't the intent; the intent was to define a "something-or-other" C that represents the fact that whatever does this sometimes behaves like a complexRectilinear and other times behaves like a complexPolar. Even the underlying information (the attributes involved) may vary from role to role, though what those attributes represent ought to be the same regardless of which role is in use.

> The idea of several isomorphic implementations of the same thing has
> occurred to me several times, but I never figured out how it might
> work. Let me think about that.

C was probably a bad name for it, especially considering that theory.pod already uses C for a very specific purpose. C might have been a better name.

> > However, if $b's real component isn't a rational number, you won't
> > have a finite number of elements in the list. Here's where I get a
> > little fuzzy: IIRC, there's some means of defining a list by providing
> > an element generator:
> >
> >     generator element($index) returns complex { ... }
> >
> > Is there a way of setting things up so that an attempt to ascertain
> > the list's length would call a separate generator for that purpose?
> >
> >     generator length() returns complex { ... }
> >
> > At any point above, am I abusing the concept of theories, or failing
> > to use them to their fullest extent?
>
> Well, you're not using generator correctly.
>
> You're really just trying to define a lazy list or an iterator, right?
> That most likely involves implementing some role:
>
>     role ComplexPowers does Lazy {...}
>
> or:
>
>     role ComplexPowers does Iterator {...}
>
> And then creating one of those objects from within your pow function.
> Nothing deeply type-theoretical going on here.

That's the most traditional way, yes; but is there anything wrong
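For what it's worth, the lazy-list shape Luke describes can be sketched concretely: the k-th value of $a ** $b is the principal value rotated by exp(2*pi*i*$b*k). A rough sketch in present-day Raku, with made-up names, ignoring the negative-index and length-generator questions raised above:

    sub complex-powers($a, $b) {
        my $principal = $a ** $b;
        # an infinite, lazy sequence of the multiple values, indexed 0, 1, 2, ...
        (0 .. Inf).map({ $principal * exp(2 * pi * 1i * $b * $_) });
    }

    my @roots = complex-powers(1, 0.5);   # lazy: only the elements you ask for exist
    say @roots[0];   # 1+0i      -- the principal square root
    say @roots[1];   # ~ -1      -- rotated by pi
    say @roots[2];   # ~ 1 again -- rotated by 2*pi, as the original post discusses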
Re: Test Case: Complex Numbers
Doug McNutt wrote:
> As for complex operations which have multiple results I think a principal
> value approach makes more sense than a list. It's well established for the
> inverse trigonometric functions. Leave RootOf( ) to Maple and Mathematica.

In the hypothetical module that I'm describing, the principal value approach _would_ be used - in scalar context. The only time the "list of all possible results" approach would be used would be if you use list context. If you have no need of the list feature, then you don't have to use it.

I believe that what I have outlined is a reasonable approach to handling complex numbers. It may or may not be the best way to do so (and I suspect that it isn't), but that isn't really the point. The point is that it _is_ reasonable, and I'd like to see how well Perl 6 handles it.

Michele Dondi wrote:
> I must say that I didn't follow the discussion very much. But this makes
> me think of this too: the two representations are handy for different
> calculations. It would be nice if they somehow declared what they can do
> better (or at all) and have the "union" take care of which one's methods
> to use.

That's one of the points I had in mind. Rectilinear coordinates are useful for addition and subtraction, while polar coordinates fare better for multiplication, division, powers, logs, trigonometric functions, and hyperbolic functions. So define addition and subtraction for the rectilinear coordinate system and define the remaining operations for the polar coordinate system.

The problem is that the two coordinate systems don't just differ in terms of which methods they make available; they also differ in terms of how they represent the value that they represent. So if you want to multiply two complex numbers that were defined using rectilinear coordinates, you'd need to transform them to polar coordinates first, and then multiply those numbers.

Phrased in terms of roles, the rectilinear role would have $.x and $.y attributes (which its methods would use), while the polar role would have $.magnitude and $.direction attributes (which _its_ methods would use). That's two sets of attributes which contain identical data in vastly different formats; simply having one class or role do both of these roles would result in extra memory usage (as both sets of attributes would need storage allocated to them), not to mention the likelihood that the two sets of attributes may end up getting out of synch with each other.

What I'm looking at is a way to declare that the two roles are different representatives of the same thing. Memory would be allocated to exactly one set of attributes at a time, and transformations between the different sets would be called whenever you change the underlying representation of the data. In the former regard, this would be much like C's union; but it would differ from it in that it would be designed to preserve the logical value (what the data represents) rather than the physical value (the sequence of bits in memory).

-- Jonathan "Dataweaver" Lang
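To make the division of labour concrete, here is a rough sketch in present-day Raku (illustrative names; the post phrases this in terms of roles, but plain classes are used for brevity). It deliberately stops short of the one-representation-at-a-time behaviour, which is exactly the "union" machinery the post says is missing:

    class Rectilinear {
        has $.x;
        has $.y;
        # addition is cheap in rectilinear coordinates
        method add(Rectilinear $other) {
            Rectilinear.new(x => $.x + $other.x, y => $.y + $other.y);
        }
    }

    class Polar {
        has $.magnitude;
        has $.direction;
        # multiplication is cheap in polar coordinates
        method mul(Polar $other) {
            Polar.new(magnitude => $.magnitude * $other.magnitude,
                      direction => $.direction + $other.direction);
        }
    }

    my $a = Rectilinear.new(x => 1, y => 2);
    my $b = Rectilinear.new(x => 3, y => 4);
    say $a.add($b);   # a new Rectilinear with x == 4, y == 6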
Problem with dwimmery
Luke Palmer wrote:
> Recently, I believe we decided that {} should, as a special case, be
> an empty hash rather than a do-nothing code, because that's more
> common.
>
> However, what do we do about:
>
>     while $x-- && some_condition($x) {}
>
> Here, while is being passed a hash, not a do-nothing code. Should we
> force people to write:
>
>     while $x-- && some_condition($x) {;}
>
> Or should we make a counter-special case of some sort? Or should we
> take out the {} special case altogether and force people to write
> hash()?

As a third possibility, could we huffman-code "do nothing" clauses by leaving out the appropriate argument? That is:

    while $x-- && some_condition($x);

or

    loop ( ; some_condition($x) ; $x--);

-- Jonathan "Dataweaver" Lang
Re: Junctions again (was Re: binding arguments)
Luke Palmer wrote:
> Whatever solution we end up with for Junctions, Larry wants it to
> support this:
>
>     if $x == 1 | 2 | 3 {...}
>
> And I'm almost sure that I agree with him. It's too bad, because
> except for that little "detail", fmap was looking pretty darn nice for
> junctions.

Not really. If I read the fmap proposal correctly,

    if any($x, $y, $z) »==« any(1, 2, 3) {...}

would do the same thing as

    if $x == 1 || $y == 2 || $z == 3 {...}

...which fails to dwim. Not to mention

    if all($x, $y, $z) »==« any(1, 2, 3) {...}
    if any($x, $y, $z) »~~« all($a, $b, $c) {...}

...which ought to work like

    if ($x == 1 || $x == 2 || $x == 3) &&
       ($y == 1 || $y == 2 || $y == 3) &&
       ($z == 1 || $z == 2 || $z == 3) {...}

    if ($x ~~ $a && $x ~~ $b && $x ~~ $c) ||
       ($y ~~ $a && $y ~~ $b && $y ~~ $c) ||
       ($z ~~ $a && $z ~~ $b && $z ~~ $c) {...}

And then there's the (minor) ugliness of having to remember to include hyperspanners (» and/or «) whenever you want to evaluate junctions.

--

Side note: "any(1, 2, 3)" is indistinguishable from "one(1, 2, 3)", because the values 1, 2, and 3 are mutually exclusive. People often use the inclusive disjunction when they mean the exclusive one, and get away with it only because the values that they're dealing with are mutually exclusive.

Another issue is that one-junctions conceptually represent a single value at a time - though which one is unknowable - whereas any-junctions can also represent several values at once, all-junctions usually do so, and none-junctions can even represent no values. Conceptually, one-junctions are scalar-like, while the other kinds of junctions are list-like; so one ought to think of "$a ~~ one(@set)" as matching a scalar to a scalar, whereas "$a ~~ any(@set)" would be matching a scalar to a list (and thus would more properly be "$a ~~« any(@set)"). But because the distinction between inclusive and exclusive disjunctions is moot when the component choices are already mutually exclusive, there's an advantage to any-junctions and one-junctions behaving in the same way.

-- Jonathan "Dataweaver" Lang
S3 vs. S4: parallel lists
I think there might be a discrepancy between S3 and S4.

S3:
> In order to support parallel iteration over multiple arrays,
> Perl 6 has a zip function that builds tuples of the elements of
> two or more arrays.
>
>     for zip(@names; @codes) -> [$name, $zip] {
>         print "Name: $name; Zip code: $zip\n";
>     }
>
> zip has an infix synonym, the Unicode operator ¥.
>
> To read arrays in parallel like zip but just sequence the
> values rather than generating tuples, use each instead of zip.
>
>     for each(@names; @codes) -> $name, $zip {
>         print "Name: $name; Zip code: $zip\n";
>     }

S4:
> To process two arrays in parallel, use either the zip function:
>
>     for zip(@a;@b) -> $a, $b { print "[$a, $b]\n" }
>
> or the "zipper" operator to interleave them:

Shouldn't S4 replace "zip" with "each"?

-- Jonathan "Dataweaver" Lang
choice of signatures
Instead of

    multi sub *infix:<~>(ArabicStr $s1, ArabicStr $s2) {...}
    multi sub *infix:<~>(Str $s1, ArabicStr $s2) {...}
    multi sub *infix:<~>(ArabicStr $s1, Str $s2) {...}

could you say

    multi sub *infix:<~>(ArabicStr $s1, ArabicStr | Str $s2) | (Str $s1, ArabicStr $s2) {...}

or something to that effect?

-- Jonathan "Dataweaver" Lang
Re: Junctions again (was Re: binding arguments)
Luke Palmer wrote:
> Of course, this was introduced for a reason:
>
>     sub min($x,$y) {
>         $x <= $y ?? $x !! $y
>     }
>     sub min2($x, $y) {
>         if $x <= $y { return $x }
>         if $x >  $y { return $y }
>     }
>
> In the presence of junctions, these two functions are not equivalent.
> In fact, it is possible that both or neither of the conditionals
> succeed in min2(), meaning you could change the order of the if
> statements and it would change the behavior. This is wacky stuff, so
> we said that you have to be aware that you're using a junction for
> "safety".

Hmm... min2 behaves differently from min because the > function doesn't act as the negation of the <= function when dealing with junctions. As you say, wacky stuff.

> But now I'm convinced, but I've failed to convince anyone else, that
> the behavior's being wacky doesn't mean that it should be declared,
> but that the behavior is just plain wrong. I figure that if something
> says it's totally ordered (which junctions do simply by being allowed
> arguments to the <= function), both of these functions should always
> be the same. The fact is that junctions are not totally ordered, and
> they shouldn't pretend that they are.

If junctions say that they're totally ordered by virtue of being usable in the <= function, and junctions shouldn't say that they're totally ordered, it follows that they shouldn't be usable in the <= function. Since one of the main purposes of junctions is to be usable in the <= function, this constitutes a problem.

IMHO, the fallacy is the claim that something is totally ordered simply by being allowed arguments to the <= function. Total ordering is achieved by being allowed arguments to a <=> function that always returns one of -1, 0, or +1. When a junction is fed into a <=> function, it will not always return one of -1, 0, or +1; it could instead return a junction of -1's, 0's, and/or +1's (or maybe it should fail?).

> The other thing that is deeply disturbing to me, but apparently not to
> many other people, is that I could have a working, well-typed program
> with explicit annotations. I could remove those annotations and the
> program would stop working! In the literature, the fact that a
> program is well-typed has nothing to do with the annotations in the
> program; they're there for redundancy, so they can catch you more
> easily when you write a poorly-typed program. But in order to use
> junctions, using and not using annotations can both produce well-typed
> programs, *and those programs will be different*.

I'm not following; could you give an example?

-- Jonathan "Dataweaver" Lang
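A small illustration of the divergence being discussed, in present-day Raku (the Mu parameter type keeps the junction from autothreading the whole call; the 2006 semantics under discussion differ in detail, and the sub names and sentinel value are made up):

    sub min-a(Mu $x, Mu $y) { $x <= $y ?? $x !! $y }
    sub min-b(Mu $x, Mu $y) {
        if $x <= $y { return $x }
        if $x >  $y { return $y }
        return 'fell through';   # added so the third outcome is visible
    }

    my $j = all(1, 5);
    say min-a($j, 3);   # 3            -- the test all(1, 5) <= 3 collapses to False
    say min-b($j, 3);   # fell through -- *neither* comparison collapses to True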
Re: Junctions again (was Re: binding arguments)
Rob Kinyon wrote:
> To me, this implies that junctions don't have a complete definition.
> Either they're ordered or they're not. Either I can put them in a <=
> expression and it makes sense or I can't. If it makes sense, then that
> implies that if $x <= $y is true, then $x > $y is false. Otherwise,
> the definitions of <= and > have been violated.

I'll beg to differ. If you insist on that kind of restriction on Junctions, they won't be able to serve their original (and primary) purpose of aggregating comparison tests together. Remember:

    if $x <= 1 & 5 {...}

is intended to be 'shorthand' for

    if $x <= 1 && $x <= 5 {...}

Therefore,

    $x = 3;
    if $x <= 1 & 5 {say 'smaller'}
    if $x >  1 & 5 {say 'larger'}

should produce exactly the same output as

    $x = 3;
    if $x <= 1 && $x <= 5 {say 'smaller'}
    if $x >  1 && $x >  5 {say 'larger'}

If it doesn't, then Junctions are useless and should be stricken from Perl 6.

And the definition of '>' is "greater than", not "not (less than or equal to)". The latter is a fact derived from how numbers and letters behave, not anything inherent in the comparison operator. Just like '<=' is "less than or equal to", as opposed to "not greater than". You aren't violating the definitions of <= and > by failing to insist that they be logical negations of each other; you're only violating the common wisdom.

-- Jonathan "Dataweaver" Lang
Re: Junctions again (was Re: binding arguments)
Me no follow. Please use smaller words? -- Jonathan "Dataweaver" Lang
Re: Table of Perl 6 "Types"
Dave Whipp wrote:
> An Int is Enumerable: each value that is an Int has well defined succ
> and pred values. Conversely, a Real does not -- and so arguably should
> not support the ++ and -- operators. Amongst other differences, a
> Range[Real] is an infinite set, whereas a Range[Int] has a finite
> cardinality.

I think that Dave has a point about a Range[Real] being an infinite set: According to DWIM, if I see "4.5..5.7", I don't think of "4.5, 5.5"; I think of "numbers greater than or equal to 4.5 but less than or equal to 5.7". Likewise, "4.5^..^5.3" contains "numbers greater than 4.5 but less than 5.3", not "an empty list".

Mind you, I generally wouldn't be using such a range for iterative purposes; rather, I'd be using it as a matching alternative to comparison operators:

    C<4.5..5.7>   === C<< if 4.5 <= $_ <= 5.7 {...; break} >>
    C<4.5^..^5.3> === C<< if 4.5 < $_ < 5.3 {...; break} >>
    C === C<< if 1.2 <= $_ {...; break} >>
    C === C<< if $_ < 7.6 {...; break} >>
    C === C<< if $_ < 4.5 || 5.7 < $_ {...; break} >> === C

That said: if I _did_ use it for iterative purposes, I'd rather get the current approach of turning it into a step-by-one incremental list than have perl 6 complain about trying to iterate over an infinite set. Well, unless I said something like "loop ...4.3 {...}"; in that case, I'd expect Perl to complain about not knowing where to start.

-- Jonathan "Dataweaver" Lang
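For reference, a few concrete cases of the scalar-versus-list distinction, spelled in present-day Raku and consistent with the smart-matching behaviour quoted in the next message:

    say 4.5 ~~ 4.5 .. 5.7;    # True  -- endpoints included
    say 5.0 ~~ 4.5^..^5.3;    # True  -- exclusive endpoints still admit the interior
    say 5.3 ~~ 4.5^..^5.3;    # False
    say (4.5 .. 5.7).list;    # (4.5 5.5) -- iteration steps by one from the start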
Table of Perl 6 "Types"
Luke Palmer wrote:
> That's good, because that's what it does. A "range object" in list
> context expands into a list, but in scalar context it is there for
> smart-matching purposes:
>
>     3.5 ~~ 3..4   # true
>     4 ~~ 3..^4    # false
>
> etc.
>
> The only remaining problem is that we have no syntax for ...3, which
> doesn't make sense as a list, but does make sense as a range.

So just include a ... prefix operator that's useful for pattern matching in scalar context, but fails if you try to use it in list context. More precisely, cause any range with an indeterminate lower bound to fail if you try to use it in list context; so not only would "loop ...5 {...}" fail, but so would "loop (not 3..4) {...}" or "loop (not 5...) {...}"; but "loop (not ...5) {...}" would work, generating a list of 6, 7, 8, 9, and so on.

BTW: where is the behavior in scalar context documented? I don't remember seeing it.

(I also seem to recall seeing something about "$a ..^ $b" being equivalent to "$a .. ($b - 1)"; but I could be mistaken. I hope that's not the case, as the only time that it would make sense would be when both bounds are whole numbers and the range is being used in list context.)

-- Jonathan "Dataweaver" Lang
Re: Table of Perl 6 "Types"
Larry Wall wrote:
> On Thu, Jan 12, 2006 at 08:29:29PM +, Luke Palmer wrote:
> : The only remaining problem is that we have no syntax for ...3, which
> : doesn't make sense as a list, but does make sense as a range.
>
> Well, it could be a lazy list that you only ever pop, I suppose.
> In any event, it doesn't work syntactically because ... is where a
> term is expected, so it's a yada-yada-yada with an unexpected term
> following it.

Too bad, because that's exactly the syntax that I'd expect to use.

-- Jonathan "Dataweaver" Lang
Re: Indeterminate forms for the Num type.
Audrey Tang wrote:
> Assuming "num" uses the underlying floating point semantic (which may
> turn 0/0 into NaN without raising exception), what should the default
> "Num" do on these forms?
>
>     0/0
>     0*Inf
>     Inf/Inf
>     Inf-Inf
>     0**0
>     Inf**0
>     1**Inf
>
> Several options:
> - Use whatever the underlying "num" semantics available
> - Always "fail"
> - Always "die"

Given a choice between the latter two, I'd rather go with Always "fail" (i.e., return "undefined"). This lets people handle matters with error-trapping if they like, without removing the possibility of handling it by testing the result for "definedness".

> - Specify them to return some definite value.
>
> At this moment, Pugs defines them ad-hocly to:
>
>     0/0 == die "Illegal division by zero"

See above; besides, math classes generally teach that 0/0 is undefined.

>     0*Inf   == NaN
>     Inf/Inf == NaN
>     Inf-Inf == NaN

These expressions are no more NaN than Inf is. I'd handle them as:

    0*Inf   == undef
    Inf/Inf == Inf
    Inf-Inf == Inf

>     0**0   == 1
>     Inf**0 == 1

I know that x**0 == 1 if x is anything other than 0 or Inf; meanwhile, 0**y == 0 if y is anything other than 0, and Inf**y == Inf if y is anything other than 0. I'd say that these two should normally be undef.

>     1**Inf == 1

1**y == 1 when y is anything but Inf; x**Inf == Inf if |x| is greater than 1; and x**Inf == 0 if |x| is less than 1. Again, 1**Inf == undef.

> But I'd like to seek a more definite ruling.

Ditto.

-- Jonathan "Dataweaver" Lang
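As a sketch of what the "always fail" option buys the user (present-day Raku spelling, illustrative names; this is not a claim about whatever ruling was eventually made): a soft failure can be screened with a definedness test rather than error-trapping.

    sub safe-div($a, $b) {
        fail "0/0 is indeterminate" if $a == 0 && $b == 0;
        return $a / $b;
    }

    my $r = safe-div(0, 0);
    if defined $r {
        say "got $r";
    }
    else {
        say "no defined result";   # this branch runs; the Failure was handled
    }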
Re: split on empty string
Mark Reed wrote:
> Perl6 "".split(/whatever/) is equivalent to split(/whatever/,"") in Perl5.

I'm hoping that the Perl 5 syntax will still be valid in Perl 6.

-- Jonathan "Dataweaver" Lang
perl6-all@perl.org
Larry Wall wrote:
> But my hunch is that it's
> a deep tagmemic/metaphorical problem we're trying to solve here.
> Such issues arise whenever you start making statements of the form
> "I want to use an A as if it were a B." The problem is much bigger
> than just how do I translate Perl 5 to Perl 6. It's questions like:
>
>     What makes a particular metaphor work?
>     Will the cultural context support use of an A as if it were a B?
>     How do we translate the user's thoughts to the computer's thoughts?
>     How do we translate one user's thoughts to another user's thoughts?
>     How do we know when such a translation is "good enough"?
>     How do we know when our mental model of an object is adequate?
>     How do we know when the computer's mental model of an object is adequate?
>     What does adequate mean in context?
>     Will the culture support partially instantiated objects? :-)
>
> That's tagmemics, folks. The main problem with tagmemics is that, while
> it helps you ask good questions, it doesn't give you easy answers for 'em...

Let me take a (feeble) crack at this: It sounds like we're talking about something vaguely akin to C++'s typecasting, where you treat an object of a given class as if it were an object of a different class. Another way to phrase this is that you temporarily want to add a role to the object. "but" doesn't work because you don't want the new role's default behavior to clobber existing behavior that the object already has. Perhaps "like" instead, with the understanding that "$obj like Hash" means "try $obj's dispatching first; if that doesn't work, try Hash's dispatching".

The role in question needs to be an emulator, in that its methods should have defaults that will make sense when mixed into the object in question. Therefore, Hash probably _wouldn't_ work, since Hash probably isn't set up to work with $obj's internals. You'd either have to explicitly provide an appropriate emulator, or implicitly have Hash find it (i.e., maintain a list of roles that do Hash, along with a way to test each against $obj's class for compatibility). The latter is probably a bit too much effort.

Or am I completely missing the point?

> But we'll not get true AI until a computer can understand a sentence like
>
>     In a hole in the ground there lived a hobbit.
>
> as if it were a human. A human has the ability to execute "new Hobbit"
> before "class Hobbit" is even defined!

The essence of this is that you don't need to know everything about class Hobbit in order to make use of it. As with Stevan Little's reference to "lazy objects", you only need to know as much about Hobbit as is to be used to complete your current task. In this case, you don't need to know _anything_ about a Hobbit, yet. By the time you _do_ need to know something about it (such as how to form a mental image of one), the script will presumably have given you the necessary information. If not, you're likely to say something like "what's a Hobbit?" a.k.a. "forward referencing". :)

-- Jonathan Lang
Re: The definition of 'say'
IMHO, people who set $/ are, by definition, saying that they expect lines to terminate with something other than a newline; they should expect 'say' to conform to their wishes. I don't like the notion of perl second-guessing the programmer's intentions here; "Do what I mean, not what I say" only carries so far. That said, I very rarely set $/, so this aspect of 'say' doesn't really affect me. -- Jonathan Lang
Re: overloading the variable declaration process
Consider "my Dog $spot". From the Perl6-to-English Dictionary: Dog: a dog. $spot: the dog that is named Spot. ^Dog: the concept of a dog. Am I understanding things correctly? If so, here's what I'd expect: a dog can bark, or Spot can bark; but the concept of a dog cannot bark: can Dog "bark"; # answer: yes can $spot "bark"; # answer: yes can ^Dog "bark"; # answer: no -- Jonathan "Dataweaver" Lang
Re: overloading the variable declaration process
Stevan Little wrote:
> Yes, that is correct, because:
>
>     Dog.isa(Dog)   # true
>     $spot.isa(Dog) # true
>     ^Dog.isa(Dog)  # false
>
> In fact ^Dog isa MetaClass (or Class whatever you want to call it).
>
> At least that is how I see/understand it.

OK. To help me get a better idea about what's going on here, what sorts of attributes and methods would ^Dog have?

-- Jonathan "Dataweaver" Lang
Re: overloading the variable declaration process
Stevan Little wrote:
> Jonathan Lang wrote:
> > OK. To help me get a better idea about what's going on here, what
> > sorts of attributes and methods would ^Dog have?
>
> Well, a metaclass describes the behaviors and attributes of a class,
> and ^Dog is an *instance* of the metaclass. So actually ^Dog would not
> actually have attributes and methods since it is an instance.

Huh? A dog can bark; so the Dog class should have a method called 'bark'. Or does 'can' not mean what it seems to mean?

> That said, I think ^Dog would probably respond to methods like
> these (some of which are described in S12):

OK; apparently, what I meant when I asked "what methods and attributes does ^Dog have?" is what you're talking about when you speak of which methods ^Dog will respond to. To me, an object has whatever methods that it responds to.

>     ^Dog.name       # Dog
>     ^Dog.version    # 0.0.1 (or something similar of course)
>     ^Dog.authority  # cpan:LWALL or email:[EMAIL PROTECTED]
>
>     ^Dog.identifier # returns the string Dog-0.0.1-cpan:LWALL

Would it be valid to speak of ^$spot? If so, what would ^$spot.name be?

> I would like to see some methods like this:
>
>     # dynamically add a method that
>     # Dog and $spot would respond to
>     ^Dog.add_method(bark => method () { ... });
>
> Which would be like doing this in Perl 5:
>
>     no strict 'refs';
>     *{'Dog::bark'} = sub { ... };

IIRC, you can always create a new method for a class, even outside of its definition, simply by ensuring that the first parameter to be passed in will be an object of that type:

    method bark (Dog $_) { ... }

or maybe

    method Dog.bark () { ... }

> And of course if you can add a method, you will need to be able to
> fetch and delete them as well, so a &get_method and &remove_method
> would be in order as well.

To fetch a method, why not have .can() return a reference to the method upon success? I might even go so far as to treat can() as an lvalue, using the assignment of a coderef as an alternate way of adding or changing the object's behavior on the fly:

    method bark (Dog $_:) { ... };
    Dog.can("bark") = method () { ... }; # Teach the dog a new trick
    Dog.can("bark") = true;   # inform the dog that it ought to know how to bark,
                              # without telling it how, yet; equivalent to a
                              # literal "= method { ... }".
    Dog.can("bark") = false;  # tell the dog to forget how to bark.
    Dog.can("bark") = undef;  # Ditto.

(Doing this to Dog DWIMs to modifying the behavior of all dogs at once - you're declaring that "dogs can bark" or "this is how dogs bark", whereas doing it to $spot DWIMs to modifying the behavior of $spot only: "$brutus.can('bark') = false": my best friend's pet dog seems to have lost the capacity to bark in its old age; that doesn't mean that dogs in general can't bark.)

Similar things might be done with .has (for attributes), .isa (for superclasses), and .does (for roles).

-- Jonathan "Dataweaver" Lang
Re: overloading the variable declaration process
Stevan Little wrote:
> Jonathan Lang wrote:
> > OK; apparently, what I meant when I asked "what methods and attributes
> > does ^Dog have?" is what you're talking about when you speak of which
> > methods ^Dog will respond to. To me, an object has whatever methods
> > that it responds to.
>
> I disagree, an object is an instance of a class. A class "has" the
> methods that the object will respond to. You would not want to store
> all the methods in each instance, it would not make sense. Each
> instance needs to share a set of methods, and those methods are stored
> in the class.

I think that we're talking past each other: you're trying to educate me on how a programmer should think about objects and classes; I'm trying to figure out how a non-programmer thinks of them. To non-programmers (and amateur programmers), objects aren't instances of classes; classes are categories of related objects. Objects have behaviors and characteristics; classes are a convenient shorthand for describing behaviors and characteristics common to a set. Objects come first, while classes are thought of in the context of objects.

The same implementation can be used for both perspectives: "dogs can bark; Spot is a dog; therefore, Spot can bark" is a form of reasoning that underlies using the class-and-instance model for the "back-end" implementation of the object-and-category paradigm. "classes have methods; objects respond to them" is part of the classes-and-instances paradigm; but that isn't really how people think.

In terms of practical differences: under the classes-and-instances paradigm, if you want to create an object whose behavior differs slightly from a given class, you need to create a subclass that has the desired behavior and to create the object as an instance of that subclass. With the object-and-category paradigm, there's nothing that insists that every object's behavior must conform precisely to the behaviors described by its classes; the latter are "merely" rules of thumb to apply to the former until you learn differently, and behaviors can be added, removed, or tweaked on an individual basis. This is why (last I checked) "but" creates a behind-the-scenes 'singleton' subclass for the new object instead of demanding that a new subclass be explicitly created, and why I suggested the possibility of adding, replacing, or removing methods for individual objects as well as for classes (which, presumably, would also be implemented under the hood by replacing the object's class by a 'singleton' subclass).

> > >     ^Dog.name       # Dog
> > >     ^Dog.version    # 0.0.1 (or something similar of course)
> > >     ^Dog.authority  # cpan:LWALL or email:[EMAIL PROTECTED]
> > >
> > >     ^Dog.identifier # returns the string Dog-0.0.1-cpan:LWALL
> >
> > Would it be valid to speak of ^$spot? If so, what would ^$spot.name be?
>
> There is no such thing as a ^$spot.

OK. The only reason I was thinking in those terms was because of the possibility that $spot might be based on one of those behind-the-scenes customized subclasses that I mentioned earlier: if

    my Dog $brutus but cannot("bark");

how do you access the subclass of Dog that $brutus is the instance of?

> > IIRC, you can always create a new method for a class, even outside of
> > its definition, simply by ensuring that the first parameter to be
> > passed in will be an object of that type:
> >
> >     method bark (Dog $_) { ... }
>
> I don't think this is true unless it is a multi method, in which case
> it is not actually a method of the class, but instead just
> DWIMs because of MMD and the fact we allow an invocant calling style
> freely.

I was under the impression that the distinction between a method and a multi-method was how many of the parameters get used to dispatch: methods aren't really "owned" by classes, any more than class definition is a declarative process; it just looks that way on the surface. Am I wrong about this?

> > or maybe
> >
> >     method Dog.bark () { ... }
>
> Yes that works too. But TIMTOWTDI, and each has its own benefits.

I'm aware of that, and was proposing this as Another Way.

> Your above approach works fine while you are writing the code, but is
> not as useful for dynamically adding a method at runtime (unless you
> use eval(), but that gets ugly).

I was under the impression that class definition was fundamentally a functional process dressed up as a declarative process. "method Dog.bark () { ... }" would seem to me to be a means of continuing that process "after the fact" - that is, adding a method to a class after you've left the class definition.
Re: overloading the variable declaration process
Thomas Sandlass wrote:
> > > or maybe
> > >
> > >     method Dog.bark () { ... }
> >
> > Yes that works too.
>
> Shouldn't that read Dog::bark? Why the dot?

Because I'm not 100% sure of the proper syntax of things. The intent was to add a bark() method to Dog at runtime.

-- Jonathan "Dataweaver" Lang
Re: Selective String Interpolation
Piers Cawley wrote:
> And backwhacking braces in generated code is *not* a pretty solution
> to my eyes. I'd *like* to be able to have a quasiquoting environment
> along the lines of lisp's backquote (though I'm not sure about the
> unquoting syntax):

Let me see if I understand this correctly: Instead of interpolation being enabled by default with backwhacks selectively disabling it, you want something where interpolation is disabled by default with "anti-backwhacks" selectively enabling it. Right?

-- Jonathan "Dataweaver" Lang
Re: Selective String Interpolation
Brad Bowman wrote:
> Jonathan Lang wrote:
> > Let me see if I understand this correctly: Instead of interpolation
> > being enabled by default with backwhacks selectively disabling it, you
> > want something where interpolation is disabled by default with
> > "anti-backwhacks" selectively enabling it. Right?
>
> Not speaking for Piers...
>
> Something like that, although trying to find universal anti-backwhacks
> in the strings handled by Perl is much harder than finding meta-syntax
> for S-expressions. That's why I was suggesting a mechanism to selectively
> enable interpolation by name rather than syntax.

I don't see why you'd need a universal anti-backwhack, any more than you need universal quote delimiters. I could see introducing an adverb to the quoting mechanism that lets you define a custom backwhack character in qq strings (for those rare cases where literal backslashes are common occurrences), and then press the same adverb into service in q strings to define a custom anti-backwhack. The only difference is that there's a default backwhack character (the backslash), but there is no default anti-backwhack. So:

    my $code = q:interp(`){
        package `$package_name;
        sub `$sub_name {
            `$the_complex_value_we_want_to_use_in_generated_code
        }
        sub `$another_sub_name {
            [EMAIL PROTECTED](';')
        }
    };

I'd go so far as to say that the anti-backwhack would only have a special meaning in front of a $, @, %, {, \, or another anti-backwhack; in all other cases, it gets treated as a literal character just like anything else. There may be some edge conditions that I haven't thought of; but I think that this is a decent baseline to work from.

I think the Huffman encoding is correct on this, up to and including the length of the adverb needed to define a custom backwhack or anti-backwhack.

-- Jonathan "Dataweaver" Lang
Re: Selective String Interpolation
Brad Bowman wrote:
> I don't like the idea of sharing the adverb between escaping and
> force-interpolating since stacking other adverbs can turn q into qq
> and vice-versa. That's a minor quibble though.

And a reasonable one as well. I was trying to minimize the proliferation of adverbs, but I may have gone overboard by trying to fit crucially different things under the same tent. Separate :esc and :interp adverbs probably would be better overall.

> Analogous issues occur with quasiquoting, so a solution that can be
> naturally shared would be ideal.
>
>     CODE :interp(`) { say `$a + $b }
>
> ... hmm, looks ok...

I'm not familiar with quasiquoting, so I wouldn't know. The interp part seems fairly straightforward; if $a = 5 and $b = 4, I'd expect the above to be equivalent to CODE { say 5 + $b }, whatever that means.

-- Jonathan "Dataweaver" Lang
s29 and Complex numbers
If you're going to have versions of sqrt in S29 that deal with Complex numbers, you ought to do likewise with a number of other functions:

    multi sub abs (: Complex ?$x = $CALLER::_ ) returns Num

should return the magnitude of a complex number. abs($x) := $x.magnitude, or whatever the appropriate method for extracting the magnitude of a Complex number is. (I'm unaware of the exact terminology used within Complex, or even if it has been established yet.)

    multi sub sign (: Complex ?$x = $CALLER::_) returns Complex

Since I would expect $x == sign($x) * abs($x) to be true for Nums, the sign of a complex number would logically be a unitary complex number, or zero if $x is zero: sign($x) := ($x == 0) ?? 0 :: $x / abs($x). Thus, this would be distinct from $x.angle, or whatever it's called, which would return a Num representing the direction as measured in radians, degrees, etc.

    multi sub exp (: Complex ?$exponent = $CALLER::_, Complex +$base) returns Complex
    multi sub log (: Complex ?$exponent = $CALLER::_, Complex +$base) returns Complex
    multi sub func (: Complex ?$x = $CALLER::_, +$base) returns Complex

(where func is any trig-related function)

IIRC, raising a base to a Complex exponent raises the base's absolute value by the real component of the exponent, and rotates its angle (as measured in radians) by the imaginary component. All of the trig functions can be aliased to expressions using exp and log. This may be useful for the purpose of defining the Complex versions.

All of these functions, as well as sqrt, need to restrict the range of valid return values: frex, sqrt ought to always return something with either a positive real component or a zero real component and a non-negative imaginary component.

(Another possibility would be to return a list of every possible result when in list context, with the result that you'd get in scalar context being element zero of the list. This even has its uses wrt sqrt(Num), providing a two-element list of the positive and negative roots, in that order. This option would apply to sqrt, exp, log, and the trig functions.)

This has implications for the infix:<**> operator as well.

-- Jonathan "Dataweaver" Lang
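As a concrete reading of the sign() proposal above, a minimal sketch in present-day Raku spelling (?? !! for the ternary, .abs for the magnitude); the sub name and signature are illustrative and not from S29:

    multi sub c-sign(Complex $x --> Complex) {
        return $x == 0 ?? 0i !! $x / $x.abs;
    }

    say c-sign(3+4i);   # 0.6+0.8i -- a complex number of magnitude one
    say c-sign(0i);     # 0+0i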
Re: s29 and Complex numbers
Mark A. Biggar wrote:
> For the complex trig functions the result set is infinite with no
> obvious order to return the list in even lazily that provides anything
> useful.

Technically, the result set is one element (the principal value), since a mathematical function - by definition - produces a single result for any given input. That said:

> The results are all of the form A + (B + 2*pi*N)i where N can
> be any integer.

A lazy list would be feasible if you order it based on N, as long as you're not forbidden from using negative indices for anything other than reverse indexing.

> It is worth much more effort to just return a
> consistent principal value and getting the branch cuts and the handling
> of signed zero correct than to worry about trying to return multiple
> values in these cases.

Agreed. That's why my mention of list context for these things was phrased as an afterthought: such a thing is doable, and I believe that it might even be practical (once the other issues you mention above are dealt with); but it's more "this would be nice" than "we need this".

Furthermore, a list context approach would be an extension of the functions' built-in capabilities: scalar context would give you the principal value, while list context would give you every possible value. This means that you could add list-context capabilities at some later date (through either packages or revisions) without damaging code developed in the meantime. You can't do this with the junction approach, as (last I checked) a junction is a scalar; you'd have to replace the "return the principal result" behavior with "return a junction" behavior.

Finally, it's easy to go from list context to junction: just place the math function in a junctive function ("any", "all", etc.). This makes it easy to distinguish between "sqrt($x)" (which returns the principal value) and "any sqrt($x)" (which returns a disjunction of every possible result): more flexible and more readable than trying for implicit junctional return values.

-- Jonathan Lang
Re: s29 and Complex numbers
Doug McNutt wrote:
> Jonathan Lang wrote:
> > Technically, the result set is one element (the principal value),
> > since a mathematical function - by definition - produces a single
> > result for any given input.
>
> Please be careful of "definitions" like that. Computer science has quite
> different ideas about mathematics than those of chalkboard algebra.

...which is why I specified "mathematical function"; I was intentionally referring to the chalkboard algebra meaning. But I can see where the confusion came from.

> x^(1/2) is a multivalued function and x^2 is a single valued
> function but they are both pow(x, y).

Actually, that's an interesting point: I didn't find a pow() function in the s29 draft. There was an exp() function which could do the same thing, albeit with the arguments in reverse order: exp($exp, $base) := $base ** $exp; but if you want to do a function that places the arguments the other way around, you apparently have to say something like "infix:<**> ($base, $exp)". I'd recommend adding a pow() function such that pow($base, $exp) := $base ** $exp; it's redundant, but it's a very useful bit of redundancy.

> The likes of yacc have
> other ideas causing -2^2 to become +4 (thankfully not in
> perl)

Oh? Last I checked, prefix:<-> was a higher-priority operator than infix:<**>; so -2**2 is equivalent to (-2)**2, not -(2**2).

> and sqrt(x) to become single valued positive definite.
> -2^(1/2) is not the same as -sqrt(2) in some implementations.

I should hope not. That said, I really hope that "sqrt($x)" will effectively be the same as "$x ** 0.5": I want both of them to return a single value in scalar context, and for each to behave like the other when in list context.

-- Jonathan "Dataweaver" Lang
Re: Multisubs and multimethods: what's the difference?
Larry Wall wrote:
> A multi sub presents only an MMD interface, while a multi method presents
> both MMD and SMD interfaces. In this case, there's not much point in the
> SMD interface since .. used as infix is always going to call the MMD interface.

So:

    multi method: MMD and SMD
    multi sub:    MMD only
    method:       SMD only
    sub:          no dispatching

right? And just so that it's clear: when you talk about the SMD interface, you're talking about being able to say "$var.foo" instead of "foo $var", right? Whereas MMD doesn't have any special calling syntax. Otherwise, the only difference between MMD and SMD would be the number of arguments that are used for dispatching purposes (all of them vs. just the first one).

Can subs be declared within classes? Can methods be declared without classes? If the answers to both of these questions are "no", then it occurs to me that you _could_ unify the two under a single name, using the class boundary as the distinguishing factor (e.g., a method is merely a sub declared within a class). If the answer to either is "yes", I'd be curious to know how it would work.

-- Jonathan "Dataweaver" Lang
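As a rough illustration of how the four cases in that summary come out, here is a sketch in present-day Raku; all the names are made up, and the 2006-era semantics under discussion may differ in detail:

    class Point {
        method describe() { "a point" }                        # plain method: SMD on the invocant
        multi method scale(Int $n) { "scaled by integer $n" }  # multi method: SMD + MMD
        multi method scale(Rat $n) { "scaled by ratio $n" }
    }

    multi sub draw(Point $p) { "drawing a point" }             # multi sub: MMD only
    multi sub draw(Str   $s) { "drawing the text '$s'" }

    sub plain($x) { "no dispatch beyond the single candidate" }  # plain sub: no dispatching

    my $p = Point.new;
    say $p.describe;     # a point
    say $p.scale(2);     # scaled by integer 2
    say $p.scale(0.5);   # scaled by ratio 0.5
    say draw($p);        # drawing a point
    say draw("hi");      # drawing the text 'hi'
    say plain($p);       # no dispatch beyond the single candidate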
Fwd: Multisubs and multimethods: what's the difference?
Stevan Little wrote:
> Jonathan Lang wrote:
> > Can subs be declared within classes? Can methods be declared without
> > classes?
>
> I would say "yes".
>
> Having subs inside classes makes creating small utility functions
> easier. You could also use private methods for this, but if I don't
> need to pass the object instance, why make me? I will say that I think
> this distinction will be difficult at first for people steeped in Perl
> 5 OO.

Sounds reasonable. I'm curious: can anyone think of any _public_ uses for subs that are declared within classes?

> Having methods outside of classes is less useful, and most of its
> uses are pretty esoteric, however I see no good reason not to allow it
> (especially anon methods, as they are critical to being able to do
> some of the cooler meta-model stuff).

OK; so declaring a method outside of a class could let you define one method that applies to a wide range of classes, without having to declare a separate method for each class. I can see how that might come in handy.

> A method probably cannot be invoked without first being attached to a
> class somehow because it needs something to SMD off of.

Why not? IIRC, SMD differs from MMD dispatch-wise in terms of how many of its parameters are used (one instead of all). If I were to define "method foo(Complex $bar)" and "multi foo(Num $bar)", I could then say "foo i" and expect to be dispatched to the method; whereas "foo e" would dispatch to the sub. Likewise, "i.foo" would dispatch to the method, while "e.foo" would die due to improper syntax. At least, that's how I see it.

What's the official position on what happens when you mix SMD, MMD, and/or no dispatch versions of a routine?

> But you could
> almost look at a bare (and named) method as a mini-role, so that:
>
>     method unattached_method ($::CLASS $self:) { ... }
>
> is essentially equivalent to this:
>
>     role unattached_method {
>         method unattached_method ($::CLASS $self:) { ... }
>     }
>
> which of course brings up the possibility of this syntax:
>
>     $object does unattached_method;
>     ^Object does unattached_method;
>
> as a means of adding methods to a class or object (ruby-style
> singleton methods).

(Wouldn't that be "^$object does unattached_method;"?)

Hmm: I don't see a need for this with respect to adding methods to a class; just declare a method that takes a first parameter of the appropriate class. OTOH, TIMTOWTDI: I could see arguments for allowing "^$object does method foo() { ... }" as well. OTGH, adding methods to an individual object would pretty much require the use of an anonymous role. So the idea that "does" wraps bare methods in an anonymous role does seem to have merit.

-- Jonathan Lang
Re: Multisubs and multimethods: what's the difference?
Stevan Little wrote:
> Jonathan Lang wrote:
> > Stevan Little wrote:
> > >     $object does unattached_method;
> > >     ^Object does unattached_method;
> >
> > (Wouldn't that be "^$object does unattached_method;"?)
>
> No, I am attaching the method (well role really) to the class ^Object.
> There is no such thing as ^$object IIRC.

Upon a closer reading of S12, it appears that both are valid: "prefix:<^>" is syntactic sugar for method meta, and it returns the object's metaclass; thus, ^$object is equivalent to $object.meta. I'm still in the process of trying to grasp the concept of prototype objects; but it appears that ^Object is the same as ^$object.

-- Jonathan Lang
Re: UNIX commands: chmod
Damian Conway wrote:
> One might argue that it would be more useful to return a result object whose
> boolean value is the success or failure, whose numeric and string values are
> the number of files *un*changed, and whose list value is the list of those
> *un*changed files.
>
> Then you could write:
>
>     unless chmod 0o664, @files -> $failed {
>         warn "Couldn't change the following $failed file(s): $failed[]";
>     }
>
> See below for a partial Perl 5 implementation.
>
> BTW, I think that *many* of Perl 6's list-taking builtins could conform to the
> same (or to a very similar) general interface.

The biggest problem that I have with this is that you devalue the sigils: it becomes the accepted norm that $failed is nearly as likely to contain a list-like object as a scalar-like object. How important is it that perl 6 maintains the notion that $foo and @foo are entirely different things? If it isn't that important, the above could be rewritten as

    unless chmod 0o664, @files -> $failed {
        warn "Couldn't change the following $failed file(s): @failed";
    }

...with the choice of sigil determining whether the object gets treated as a scalar or as a list.

Also, there's the matter of "unnecessary paperwork": if the only thing that I use the return value for is a boolean test, then all of the effort involved in loading the filenames into a list was wasted effort. Is there a way that the "lazy evaluation" concept could be extended to function return values? Something like:

> Partial Perl 5 implementation:
>
>     sub chmod {
>         my ($mode, @files) = @_;
>
>         $mode = _normalize_mode($mode);
>
>         my @failed;
>         FILE:
>         for my $file (@files) {
>             next FILE if chmod $mode, $file;
>             push @failed, $file;
>         }
>
>         use Contextual::Return;

# at this point, have the sub temporarily suspend operation.

>         return
>             BOOL   { [EMAIL PROTECTED] }
>             SCALAR { [EMAIL PROTECTED] }
>             LIST   { @failed }
>     }
>
>     sub _normalize_mode {
>         return shift @_;  # Extra smarts needed here ;-)
>     }
>
>     unless (my $failed = &chmod(0664, qw(test1 test2 test5))) {
>         warn "Failed to chmod $failed file(s): @$failed";
>     }

...$failed is evaluated in boolean context; so the chmod sub resumes operation, determines the boolean value, and suspends operation again, because neither LIST nor SCALAR has been evaluated.

...when $failed is evaluated in scalar context, the chmod sub resumes operation, determines the scalar value, and suspends operation again, because LIST still hasn't been evaluated.

...when $failed is evaluated in list context, the chmod sub resumes operation, determines the list value, and closes out, because all three of BOOL, SCALAR, and LIST have now been evaluated.

Meanwhile,

    die unless chmod(0664, qw(test1 test2 test5));

...the anonymous return value is evaluated in boolean context; so chmod resumes operation, determines the boolean value, and suspends operation again.

...the anonymous return value goes out of scope, so the suspended chmod sub gets discarded along with it.

Or have some compiler optimization which checks the contexts that the return value gets used in, and only returns values for those contexts.

-- Jonathan "Dataweaver" Lang
Re: UNIX commands: chmod
Mark Overmeer wrote:
> * Larry Wall ([EMAIL PROTECTED]) [060327 01:07]:
> > On Sun, Mar 26, 2006 at 02:40:03PM -0800, Larry Wall wrote:
> > : On the original question, I see it more as a junctional issue.
> > : Assuming we have only chmod($,$), this should autothread:
> > :
> > :     unless chmod MODE, all(@files) -> $oops {
> > :         ???;
> > :         profit();
> > :     }
> >
> > Except that junctional logic is allowed to fail as soon as it can be
> > falsified,
>
> $oops being the filename or the error? To produce a good error
> report, you need both.
>
> To be compatible with Perl5, the order of the @files must be preserved.
>
> Is it really worth it to design a new syntax to avoid the use of 'for'
> with chmod? In more characters as well?

What about this:

    unless all(chmod MODE, [EMAIL PROTECTED]) -> $oops {
        ???;
        profit();
    }

Hyperoperate on the list in order to convert a "($$) returns $" signature into a de facto "($@) returns @" signature, then feed the resulting list into a junctive function to collapse it into a single truth value.

-- Jonathan "Dataweaver" Lang
Re: UNIX commands: chmod
Damian Conway wrote:
> In other words, this is another example of "Don't use junctions in actions
> with side-effects".

Why not tag functions with side-effects as such (using a trait to that effect), and then say that short-circuit operators don't short-circuit when side-effects are involved? Or provide adverbs for the junctive functions that can be used to change their short-circuiting behavior. Or both.

-- Jonathan Lang
curly-quotes
Given Perl 6's use of Unicode as a basis, could we get "curly quotes", both single and double, to do the same things that straight quotes do? That is: “text” does the same thing as "text", and ‘text’ does the same thing as 'text'.

Other than "looks neat", why do this? Because curly-quotes come in matching sets, like parentheses and brackets do; this lets you nest them.

(This seems so simple and obvious that I'll be surprised if someone hasn't already proposed this; however, I don't recall seeing it anywhere.)

-- Jonathan "Dataweaver" Lang
Set Theory (Was: Do junctions support determining interesections of lists)
Will Perl 6 Sets include set negation and/or a universal set? In effect, an internal flag that says, "this set contains every possible element _except_ the ones listed"?

-- Jonathan "Dataweaver" Lang
Re: Set Theory (Was: Do junctions support determining interesections of lists)
Larry Wall wrote:
> On Tue, Apr 04, 2006 at 11:02:55AM -0700, Jonathan Lang wrote:
> : Will Perl 6 Sets include set negation and/or a universal set? In
> : effect, an internal flag that says, "this set contains every possible
> : element _except_ the ones listed"?
>
> Arguably, that's what none() is.

...except that none() is a Junction, not a Set. As such, the same arguments that say that any() and all() aren't suitable for use as Sets apply to none(), don't they?

-- Jonathan "Dataweaver" Lang
Re: Set Theory (Was: Do junctions support determining interesections of lists)
Larry Wall wrote:
> You're confusing the map with the territory. We're trying to decide
> *how* Junctions are like Sets, not defining them into two different
> universes. I'm saying that all() is the Junction that is most like
> a Set. A none() Junction can be viewed as the specification for an
> infinite set of sets that do not intersect with the corresponding all()
> junction. Infinite sets are a bit hard to compute with directly.

OK, then; what would be the specification for a _single_ set that contains everything that doesn't intersect with a corresponding all() Junction (the sort of thing that I'd use if I wanted to find the largest subset of A that doesn't intersect with B)?

> A one() junction is the spec for a number of sets corresponding to the
> values of the corresponding all() junction, each of which contains only
> one element from that set. An any() Junction is all possible subsets
> not counting the null set.

Yeah; I got that.

-- Jonathan "Dataweaver" Lang
Re: Another dotty idea
Delimiter-terminated quotes. Really nice idea. I'd put the dot inside the comment: "#.x", with x being an optional quote delimiter (excluding dots). If a delimiter is included, the comment is terminated by the matching quote delimiter; if absent, the comment is terminated by the next dot. $x#.[].foo(); $x.#.[]foo(); $x#. ..foo(); $x.#. .foo(); would all become $x.foo(); The third form would be legal, if a bit illegible. -- Jonathan Lang
Re: Another dotty idea
Patrick R. Michaud wrote: > But if one is going to go this route (and I'm not sure that we should), > then when the delimiter is absent have the comment terminate at > the first non-whitespace character. ...which makes "#.\s" good only for inserting whitespace where it normally wouldn't belong. On the one hand, there's something nice about > $x#..foo() > $x#..() > $x#.() instead of > $x#...foo() > $x#...() > $x#..() OTOH, wasn't the whole point to get away from "long dot is a special case"? Though having the "long dot" as a special case of the "delimited comment" concept, rather than being a "raw special case", is more acceptable to my mind. -- Jonathan Lang
Re: Another dotty idea
Larry Wall wrote: > I really prefer the form where .#() looks like a no-op method call, > and can provide the visual dot for a postfix extender. Although inline and multiline comments are very likely to be used in situations where method calls simply aren't appropriate: .#(+---+ | Hello! | ++) is something that I wouldn't be surprised to see. > It also is > somewhat less likely to happen by accident the #., I think. And I > think the front-end shape of .# is more recognizable as different > from #, while #. requires a small amount of visual lookahead, and > is the same "square" shape on the front, and could easily be confused > with a normal line-ending comment. All true. But it avoids the headache of figuring out whether "..#" is supposed to parse as a double-dot followed by a line-gobbling comment or as a single dot followed by a delimited comment. -- Jonathan "Dataweaver" Lang
Re: Another dotty idea
Larry Wall wrote: > It's only a problem when some tries to write > > .=#( ... :-) [tries to grok the meaning of "$foo.=#(Hello, World!)"] [fails] > : All true. But it avoids the headache of figuring out whether "..#" is > : supposed to parse as a double-dot followed by a line-gobbling comment > : or as a single dot followed by a delimited comment. > > One-pass, longest-token parsing says it has to be a .. followed by > a # comment. No headache, really. And nobody in their right mind > would write that anyway. Many perl programmers aren't in their right mind. :) Seriously, the question is which paradigm makes more sense: a null method (dot precedes pound), or a special kind of comment (dot follows pound)? The former emphasizes the "you don't have to put it at the end of a line" aspect, while the latter emphasizes the "you can strip it out without harming the surrounding code" aspect. IMHO, the latter is the more important point to emphasize - especially since the former brings so much baggage with it. And I suspect that the confusion between # and #."" would be minor, _especially_ with syntax highlighters and the like in common use. -- Jonathan Lang
Adverbs
How do you define new adverbs, and how does a subroutine go about accessing them? -- Jonathan Lang
Re: Adverbs
Larry Wall wrote: > Jonathan Lang wrote: > : How do you define new adverbs, and how does a subroutine go about > : accessing them? > > Adverbs are just optional named parameters. Most of the magic is in > the call syntax. Ah. So every part of a Capture Object has an alternate call syntax: act $foo, @list, bar => 'baz'; is the same as @list ==> $foo.act:bar('baz'); right? (And if this is the case, the one capability that the adverb notation provides that the more traditional named parameter notation doesn't have is a way to let a particular key to exist without being defined.) -- Jonathan "Dataweaver" Lang
Re: Adverbs
Larry Wall wrote: > You might have to write that > >@list ==> $foo.act :bar('baz'); > > I think or the colon on the method would be taken as starting a list. > I dunno, depends on whether .act: is considered a "longest token", > I guess. I could argue it the other way as well, and :bar is a longest > token compared to :. Eh? What's this bit about lists and colons? (This is one of the things that worries me about Perl 6: there seem to be all sorts of "edge cases" which crop up at the most unexpected times.) -- Jonathan "Dataweaver" Lang
Re: A shorter long dot
Damian Conway wrote: Juerd wrote: > Audrey cleverly suggested that changing the second character would also > work, and that has many more glyphs available. So she came up with > >> and propose ".:" as a solution > $xyzzy.:foo(); > $fooz. :foo(); > $foo. :foo(); This would make the enormous semantic difference between: foo. :bar() and: foo :bar() depend on a visual difference of about four pixels. :-( We've strived to eliminate homonyms from Perl 6. I'd much rather not introduce one at this late stage. Is there a reason that we've been insisting that a long dot should use whitespace as filling? To me, "foo. .bar" shares a common problem with "foo. :bar" - in both cases, my gut instinct is to consider "foo" and "bar" to be separate entities, disconnected from each other - quite the opposite of what's intended. OTOH, this problem would go away if the filler was primarily, say, underscores: foo.___.bar or foo.___:bar visually look like foo and bar are attached to each other. Of course, without any sort of whitespace, there would be no way for a long dot to span lines; so you might want to allow newline characters as well. But then, you run into problems such as: foo. ___.bar being illegal, because the second line contains leading whitespace characters... Perhaps you would be best off allowing both whitespace characters and underscores as filler, and strongly suggesting that whitespace only be used to span lines: by convention, the only whitespace that ought to be used in a long dot would be something that matches /\n\s*/. With this in place, the distinction would be between foo.:bar and foo :bar ...very easy to distinguish. -- Jonathan "Dataweaver" Lang
Re: A shorter long dot
Juerd wrote: > foo.___:bar Would suffice for my needs. Not sure if people are willing to give up their underscore-only method names, though. When is the last time that you saw an underscore-only method name? Gaal Yahas wrote: But it doesn't work across lines: Take another look at my original proposal: I didn't suggest _replacing_ whitespace with underscores; I suggested _supplementing_ it[1] - so $xyzzy.foo() $fooz.:foo() $foo. :foo() $fa. :foo() $and_a_long_one_I_still_want_to_align. :foo() $etc. :foo() would still work, but so would $foo._:foo() $fa.__:foo() $and_a_long_one_I_still_want_to_align._ __:foo() $etc._:foo() ...and the latter five would be the recommended way of doing it. -- Jonathan Lang [1] This is a nod to TIMTOWTDI; technically, one could be as restrictive as requiring any block of whitespace in a long dot to begin with a newline (thus restricting its use to line wrapping and alignment of the new line) and still have a perfectly viable long dot syntax.
Re: A shorter long dot
Larry Wall wrote: Seems so to me too. I don't see much downside to \. as a long dot. The only remaining problem that I see for the long dot is largely orthogonal to the selection of the first and last characters - namely, that your only choice for filler is whitespace. Although the C<\.> option opens an intriguing possibility of defining the long dot pattern as "backslash, dot or whitespace (repeated zero or more times), dot": $xyzzy.foo() $fooz\.foo() $foo\..foo() $fa\...foo() $and_a_long_one_I_still_want_to_align\ ...foo() $etc\..foo() -- Jonathan Lang
Re: Fw: Logic Programming for Perl6 (Was Re: 3 Good Reasons... (typo alert!))
Hmm... How about this: Treat each knowledge base as an object, with at least two methods: .fact() takes the argument list and constructs a prolog-like fact or rule out of it, which then gets added to the knowledge base. .query() takes the argument list, constructs a prolog-like query out of it, and returns a lazy list of the results. There would be a default knowledge base, meaning that you wouldn't have to explicitly state which knowledge base you're using every time you declare a fact or make a query. -- Jonathan Lang
Concurrency: hypothetical variables and atomic blocks
How does an atomic block differ from one in which all variables are implicitly hypotheticalized? I'm thinking that a "retry" exit statement may be redundant; instead, why not just go with the existing mechanisms for successful vs. failed block termination, with the minor modification that when an atomic block fails, the state rolls back? Also, what can "retry_with" do that testing the atomic block for failure can't? -- Jonathan "Dataweaver" Lang
Re: Concurrency: hypothetical variables and atomic blocks
Larry Wall wrote: The way I see it, the fundamental difference is that with ordinary locking, you're locking in real time, whereas with STM you potentially have the ability to virtualize time to see if there's a way to order the locks in virtual time such that they still make sense. Then you just pretend that things happened in that order. Forgive this ignorant soul; but what is "STM"? -- Jonathan "Dataweaver" Lang
Methods vs. Subs
Is there anything that you can do with a sub (first parameter being some sort of object) that you cannot do with a method? Frex, given: multi method my_method($invocant:); would &topical_call := &my_method.assuming :invocant<$_>; be legal? -- Jonathan "Dataweaver" Lang
Re: ===, =:=, ~~, eq and == revisited (blame ajs!) -- Explained
David Green wrote: I think I understand it... (my only quibble with the syntax is that === and eqv look like spin-offs of == and eq, but I don't know what to suggest instead (we're running short of combinations of = and : !)) Agreed. So there are three basic kinds of comparison: whether the variables are the same (different names, but naming the same thing); whether the values are the same (deep comparison, i.e. recursively all the way down in the case of nested containers); and in-between (shallow comparison, i.e. we compare the top-level values, but we don't work out *their* values too, etc., the way a deep comparison would). If I've got it right, this is what =:=, eqv, and === give us, respectively. Apparently, there are _four_ basic kinds of comparison: the ones mentioned above, and == (I believe that eq works enough like == that whatever can be said about one in relation to ===, =:=, or eqv can be said about the other). I'd be quite interested in an expansion of David's example to demonstrate how == differs from the others. -- Jonathan "Dataweaver" Lang
Re: ===, =:=, ~~, eq and == revisited (blame ajs!) -- Explained
Yuval Kogman wrote: Jonathan Lang wrote: > Apparently, there are _four_ basic kinds of comparison: the ones > mentioned above, and == (I believe that eq works enough like == that > whatever can be said about one in relation to ===, =:=, or eqv can be > said about the other). I'd be quite interested in an expansion of > David's example to demonstrate how == differs from the others. sub &infix:<==> ( Any $x, Any $y ) { +$x === +$y; # propagate coercion failure warnings to caller } sub &infix: ( Any $x, Any $y ) { ~$x === ~$y } So the purpose of === is to provide a means of comparison that doesn't implicitly coerce its arguments to a particular type? -- Jonathan "Dataweaver" Lang
Fwd: ===, =:=, ~~, eq and == revisited (blame ajs!) -- Explained
Dave Whipp wrote: Darren Duncan wrote: > Assuming that all elements of $a and $b are themselves immutable to all > levels of recursion, === then does a full deep copy like eqv. If at any > level we get a mutable object, then at that point it turns into =:= (a > trivial case) and stops. ( 1, "2.0", 3 ) === ( 1,2,3 ) True or false? More imprtantly, how do I tell perl what I mean? The best I can think of is: [&&] (@a »==« @b) Vs [&&] (@a »eq« @b) But this only works for nice flat structures. IIRC, this is because the implicit coercion to number or string gets in the way. IMHO, "@a == @b" ought to be synonymous with "all (@a »==« @b)" - it's far more likely to DWIM. Likewise with "@a eq @b" and "all (@a »eq« @b)". If what you want to do is to compare the lengths of two lists, you ought to do so explicitly: "[EMAIL PROTECTED] == [EMAIL PROTECTED]". Getting back to the notion of immutability: can someone give me an example of a realistic immutable analog to a list, and then give an example demonstrating the practical distinction between === and eqv based on that? I want to see why it's important to distinguish between comparing mutable data types and comparing immutable data types. (Incidently, I think that a suitable word-based synonym for =:= would be "is".) -- Jonathan "Dataweaver" Lang
Fwd: Classes / roles as sets / subsets
I accidently sent this directly to Richard. Sorry about that, folks... -- Forwarded message -- From: Jonathan Lang <[EMAIL PROTECTED]> Date: Aug 29, 2006 1:24 PM Subject: Re: Classes / roles as sets / subsets To: Richard Hainsworth <[EMAIL PROTECTED]> Richard Hainsworth wrote: I find classes and roles, and multiple inheritance in general, difficult to understand. Larry Wall talked about subsets, so I have tried to analyse various situations using the idea of sets and subsets and Venn diagrams for demonstrating the relations between sets and subsets. The idea is that the 'space' encompassed by a set in the diagram (labelled as sets) is method functionality and attributes. This may well miss the mark. Some of the most important differences between classes and roles come in when the sets of method namespaces don't match up with the sets of method implementations - that is, A and B both include a method m, but A's method m does something different than B's method m. See diagram Case 1 (Class B is a subset of class A): Note that in Case 1, there isn't much difference between classes and roles. My understanding of inheritance in other languages is that Class A 'isa' Class B, and inherits all of the attributes and functionality of Class B, and extends functionality and attributes. So far, so good. If you want A.m to do something different than B.m, A redefines m. It is also possible for Class B to be ('isa') Class A, and ignore the extra functionality of A, presumably to create objects that are smaller than the more general class. Not according to clean OO programming theory. If A isa B, then A will carry with it all of B's internal mechanisms, even the ones that it has closed off from public access. If it fails to do this, it isn't really a B. Even when A redefines how m works, it includes an implicit mechanism for getting at the original definition. As such, inheritance can only add to a class; it cannot remove or change anything in any substantive manner. The only point at which stuff gets removed is when the code optimizer comes along and trims the fat. Let's extend Case 1 by adding a Class C, which is a superset of Class A. C isa A, but A isa B; C can still access B.m, as well as A.m. Think of A, B, and C as being layers that get added on top of each other: A covers B, but doesn't actually replace it. My suggested interpretation of roles: Class B is a role, and Class A is built from Class B, extending its functionality. A role without extra code becomes a class if it is instantiated. The motto for role A is "anything you can do, I can do too." Note that when role A declares that it does role B, it makes a promise about _what_ it can do, but makes no promises about _how_ it will do it. In fact, roles don't even make a promise to supply any implementation information at all: B doesn't have to say a word about how m works - and for the purpose of composing roles, what it _does_ say about how m works should be taken not so much as a rule than as a guideline. As long as the composer has no reason _not_ to accept B's version of m, it will; but at the first hint of trouble, B's version of m will get jettisoned. In Case 1, the only way that role B's version of m will get jettisoned is if role A provides its own version. However, A is not a B; it is its own unique entity. If A invalidates B's suggestion for how m should work, then nothing that does A will implicitly have access to B's version of m. Unlike classes, roles can change from what they're composed from, not just add to them. 
However, because they don't guarantee to tell you how anything works, they can't be used to intantiate objects; that's what Classes are for. See diagram case 2 (Class A and Class B intersect): Usual OO technique: Class B inherits the functionality of Class A, extends the functionality (in one 'direction') but then over-rides (anuls) some of the functionality of A (say, by redefining methods/attributes used in A). Question: Is this the sort of behaviour that is forbidden in some languages. Very definitely, yes. Removing capabilities during inheritance is a no-no. If you wanted Classes A abd B to be part of the same inheritance tree, you'd have to define a common ancestor class that contains the intersection of A and B, and then each of A and B inherit from it. As described, though, A and B are completely unrelated by inheritance; each is a unique class which stands on its own. The fact that both contain a method m is a coincidence, nothing more. This _is_ a useful case to consider, though, in terms of what happens if you add a new class (C) which encompasses both A and B. Assuming that m is an element of both A and B, A.m doesn't neccessarily work the same way as B.m; C.m needs to decide which version t
Re: could 'given' blocks have a return value?
Mark Stosberg wrote: Sometimes I use 'given' blocks to set a value. To save repeating myself on the right hand side of the given block, I found I kept want to do this: my $foo = given { } ...and have whatever value that was returned from when {} or default {} populate $foo. Isn't it still the case that the last expression evaluated within a closure is returned by the closure? And isn't a given block just a fancy kind of closure? The question is whether or not a code block can be used where the parser expects an expression; for instance, could one say: my $foo = if condition {"BAR"} else {"BAZ"}; ? I'm no expert, but it occurs to me that allowing this could be a parsing nightmare. ISTR a programming construct along the lines of "eval" that is effectively shorthand for "sub { ... }()". It turns out pugs already allow this, through the trick of wrapping the given block in an anonymoose sub...which is then immediately executed: my $rm = sub { given $rm_param { when Code { $rm_param(self) } when Hash { %rm_param } default{ self.query.param($rm_param) } }}(); Not only do you get implicit matching on the left side, you get implicit return values on the right! I'd just like to be able to clean that up a little to: my $rm = given $rm_param { when Code { $rm_param(self) } when Hash { %rm_param } default{ self.query.param($rm_param) } }; So what happens if you forget to include a default in the given? -- Jonathan "Dataweaver" Lang
Re: A suggestion for a new closure trait.
Joe Gottman wrote: Since a FIRST block gets called at loop initialization time, it seems to me that it would be useful to have a block closure trait, RESUME, that gets called at the beginning of every loop iteration except the first. Thus, at the beginning of each loop iteration either FIRST or RESUME but not both would get called. Other possible names for this block include REENTER, SUBSEQUENT, or NOTFIRST. So RESUME would be to FIRST as NEXT is to LAST? -- Jonathan "Dataweaver" Lang
Re: Naming the method form of s///
Luke Palmer wrote: On 8/31/06, Juerd <[EMAIL PROTECTED]> wrote: > Still, though, How would you specify :g? It doesn't make a lot of sense > on rx// -- just like you can't use it with qr// in Perl 5. It is a good point that it doesn't belong on the regex. Perhaps: $foo.subst(/bar/, "baz", :g) That seems to work, though the weight is at the front again. Oh. $foo.subst(:g, /bar/, "baz") IIRC, :g is an adverb, and adverbs are merely syntactic sugar for named parameters. So perhaps the signature for the substitution method should include a slurpy hash of modifiers... The question then becomes whether or not you can pass a modifier to the regex in by means of the slurpy hash, or if it _has_ to be packaged with the regex. -- Jonathan "Dataweaver" Lang
Re: renaming "grep" to "where"
Smylers wrote: Damian Conway writes: > I don't object in principle to renaming "grep" to something more self > explanatory (except for the further loss of backwards compatability > and historical Unix reference...though that didn't stop us with > "switch" vs "given" ;-) But while C had precedence in computer science in general it didn't have this in Perl; your Switch module is not used much in Perl 5, and Perl 6's C is a substantial improvement over the C statement in most languages. Whereas C is a well-known and well-used Perl 5 function, and this functionality is not being changed in Perl 6 (other than being available as a method as well as a function). IMHO, syntax should be left alone until a compelling reason to change it is found. While I think it would be nice to have a more intuitive name for grep, I don't think that this qualifies as a compelling reason to change it - especially since it's so easy to add aliases via modules, such as the aforementioned "use English". My recommendation: leave it as grep, but leave a note for whomever is going to create the perl6 analog of the English module that they might want to provide a more intuitive name for it. -- Jonathan "Dataweaver" Lang
Re: renaming "grep" to "where"
Jonathan Scott Duff wrote: On Tue, Sep 19, 2006 at 04:38:38PM +0200, Thomas Wittek wrote: > Jonathan Lang schrieb: > > IMHO, syntax should be left alone until a compelling reason to change > > it is found. While I think it would be nice to have a more intuitive > > name for grep > > What would be the disadvantage of renaming it to a more intuitive name? > I can only see advantages. Lost culture perhaps. There's a long strong tradition of the term "grep" in perl and it would be a shame to toss that away without some serious thought. Not just that; but also because that's one more thing that perl programmers are going to have to relearn when and if they migrate from 5 to 6. And the more things there are to relearn, the more likely it will be "if" rather than "when". > > I don't think that this qualifies as a compelling > > reason to change it - especially since it's so easy to add aliases via > > modules > As Smylers said above: Please, no more aliases. They only create confusion. Sure, but "all's fair if you predeclare". Aliases imposed on us all may cause confusion, but presumably, if an individual has asked for an alias, they are willing to risk the potential confusion. Precisely. -- Jonathan "Dataweaver" Lang
Re: renaming "grep" to "where"
Larry Wall wrote: Mark J. Reed wrote: : I have no horse in this race. My personal preference would be to : leave grep as "grep". My second choice is "select", which to me is : more descriptive than "filter"; it also readily suggests an antonym of : "reject" to do a "grep -v" (cf. "if !" vs "unless"). But I'd accept : "filter", too. Given a choice between 'grep', 'filter', and 'select/reject', I'd prefer the third model: it counterbalances the break from tradition with additional functionality. But which *ect do we call the one that returns both? In short: if 'select/reject' would be analogous to 'if/unless', should 'select' be allowed the equivalent of an 'else' as well as a 'then'? Personally, I think that this would be an unneccessary complication - much like 'unless' doesn't get to split code into true and false branches, and statement modifiers can't be nested. Meanwhile, your examples seem to be illustrating another possibility: something analogous to 'grep' that uses the 'given/when' paradigm instead of the 'if/then' paradigm. This, I think, has promise - though, as you say, there's already a way to do this using gather. What I'd be looking for would be a more compact syntax: (@good, @bad, @ugly) = @stuff.divvy { when .sheep { @good } when .goat { @bad } default { @ugly } } ...or something of the sort. Regardless, this oughtn't actually be the replacement for grep, IMHO; it should _supplement_ grep instead. Anyway, it's not clear to me that grep always has an exact opposite. I don't see why it ever wouldn't: you test each item in the list, and the item either passes or fails. 'select' would filter out the items that fail the test, while 'reject' would filter out the ones that pass it. -- Jonathan "Dataweaver" Lang
Re: renaming "grep" to "where"
[EMAIL PROTECTED] wrote: I envision a select, reject, and partition, where @a.partition($foo) Returns the logical equivalent of [EMAIL PROTECTED]($foo), @a.select($foo)] But only executes $foo once per item. In fact. I'd expect partition to be the base op and select and reject to be defined as partition()[1] and partition()[0] respectively... Can the optimizer handle something like this without too much trouble? If so, I like this. It's expandable: if you allow additional matches to be passed in, it could be made into a switch-like statement: @a.partition( .sheep, .goat) would return the logical equivalent of [EMAIL PROTECTED](any(.sheep, .goat)), @a.select(.sheep), @a.select(.goat)] If you only give it one criterion, this would be equivalent to your version. The main debate I'm having is whether the 'default'/'reject' output should go to element 0 or to the last element; I think that a case could be made that the more intuitive arrangement would be to return the true portion first, followed by the false portion (or, in the multipartition version, the first case first, the second case second, and so on, with whatever's left over going last). -- Jonathan "Dataweaver" Lang
Re: Capture sigil
Larry Wall wrote: Okay, I think this is worth bringing up to the top level. Fact: Captures seem to be turning into a first-class data structure that can represent: argument lists match results XML nodes anything that requires all of $, @, and % bits. Fact: We're currently going through contortions to try to get these to behave when they're stored in a scalar variable: [,] =$capture Fact: This contrasts with the ease with which we can request arrayness and hashness using the @ and % sigils. Conjecture: We need a corresponding sigil to request captureness. As with @ and %, you can store a capture in a $ to hide it, but we don't have the ability to have capture variables that know how to behave like captures without fakey syntactic help. Let me see if I'm following you correctly: ¤args = \(1,2,3,:mice) Is the backslash still neccessary, or is the use of the Capture sigil enough to indicate that the rvalue should be treated as a capture object? $¤args; # would this return 1 or an indication that nothing's there? @¤args; # would this return [1, 2, 3], or [2, 3]? %¤args; # this would return { mice -> 'blind' } Would '¤¤args' mean anything? Does '&¤args' mean anything? (I like '¤' for capture objects, because it reminds me of '*', and capture objects remind me of Perl 5's typeglobs. Perhaps the ASCII workaround could be '**'?) '$¤args' would mean "retrieve the scalar portion of the capture object 'args'"; '¤$args' would mean "treat the scalar object 'args' as if it were a capture object". Right? (And what, precisely, is meant by treating a scalar as if it were a capture object?) Which leads me to wonder if there's a Latin-1 synonym for @@, like § maybe for "sectional", or µ for "multidimensional", or (r) for, er, repetitious or something. Of these, I like the idea of § for Latin-1 equivalent of @@. Not only does it have the meaning of "section", but it registers in my brain as "this looks sigilish" - perhaps due to its vague visual resemblance to the dollar sign (oddly enough, ¢ doesn't look sigilish to me; I don't know why not, but it doesn't). Do this, and the sigil set becomes: $ scalar @ ordered array § multislice view of @ (ASCII alias: @@) % unordered hash, i.e. associative array ¤ capture object (ASCII alias: **) & code/rule/token/regex :: package/module/class/role/subset/enum/type/grammar Hmm, then (r)(c)foo could take the multidimensional feeds out of capture "foo". Maybe µ¢foo looks better though. Or maybe we could work the € in there somewhere for "extra" dimensional... ☺ Ack! I beg you; stop the line noise! On the other hand, we could just make |foo a capture because it inclusive-ORs several other structures. Main downside is that it looks too much like an "ell". That's probably not a showstopper in practice. Much of the time we'd be using it to qualify other variables anyway, so instead of scattering [,] all over the place we would have things like foo(|$foo) foo(|@foo) foo(|%foo) foo(|foo) I'm lost. What would '|@foo' mean? Visually it kinda works as an "insert this here" marker too. And most of the places you'd use it wouldn't be using | operators. Another factor is that it kind of resonates visually with the \ that makes captures. Please remind me: what does perl 6 do with '$(...)' and '@(...)'? And oughtn't it do something analogous with '|(...)', '¤(...)', or whatever the capture sigil turns out to be? Would that differ from '\(...)'? -- Jonathan "Dataweaver" Lang
Capture Literals
How would I construct a capture literal that has both an invocant and at least one positional argument? How do I distinguish this from a capture literal that has no invocant and at least two positional arguments? Gut instinct: if the first parameter in a list is delimited from the rest using a colon instead of a comma, treat it as the invocant; otherwise, treat it as the first positional argument. This would mean that the rules for capturing are as follows: * Capturing something in scalar context: If it is a pair, it is captured as a named argument; otherwise, it is captured as the invocant. * Capturing something in list context: Pairs are captured as named arguments; the first non-pair is captured as the invocant if it is followed by a colon, but as a positional argument otherwise; all other non-pairs are captured as positional arguments. So: $x = /$a; # $$x eqv $a $x = /:foo;# %$x eqv { foo => 1 } $x = /($a,); # @$x eqv ( $a ); is the comma neccessary, or are the () enough? $x = /($a:); # $$x eqv $a $x = /(:foo); # %$x eqv { foo => 1 }; assuming that adverbs can go inside (). $x = /($a, $b) # @$x eqv ($a, $b) $x = /($a: $b) # $$x eqv $a; @$x eqv ($b) $x = /:foo ($a: $b, $c):bar <== $d, $e <== flag => 0; # results on next three lines: # $$x eqv $a # @$x eqv ($b, $c, $d, $e) # %$x eqv { foo => 1, bar => 'baz', flag => 0 } Note that this approach makes it impossible for a pair to end up anywhere other than as a named argument in the capture object; while this makes sense when the capture object is being used as a proxy argument list, it makes less sense when it is being used as the equivalent of perl 5's references, and thus is probably a bug. -- Jonathan "Dataweaver" Lang
Re: Capture sigil
Two questions: 1. How would the capture sigil affect the use of capture objects as replacements for perl5's references? 2. With the introduction of the capture sigil, would it be worthwhile to allow someone to specify a signature as a capture object's 'type'? That is: my :(Dog: Str $name, Num :legs) |x = /($spot: "Spot"):legs<4>; in analogy to 'my Dog $spot'? -- Jonathan "Dataweaver" Lang
Re: Capture sigil
Larry Wall wrote: Jonathan Lang wrote: : Two questions: : : 1. How would the capture sigil affect the use of capture objects as : replacements for perl5's references? I don't see how it would have any effect at all, unless the P5 ref happened to be to a typeglob, or had both array and hash semantics tied to it. The regular $$x, @$x, and %$x look much like they do in P5. But $x is a scalar; wouldn't you need to use '|x' to denote a capture object, thus making the above '$|x', '@|x', and '%|x', respectively? we currently don't allow assignment to a capture, only binding. IOW, if you want someone to be able to say '$|x' and get '$a' as a result, you'd have to say '|x := \$a' (or perhaps '$|x := $a') instead of '|x = \$a'. Right? -- Jonathan "Dataweaver" Lang
Re: Capture Literals
Larry Wall wrote: : This would mean that the rules for capturing are as follows: : : * Capturing something in scalar context: If it is a pair, it is : captured as a named argument; otherwise, it is captured as the : invocant. : : * Capturing something in list context: Pairs are captured as named : arguments; the first non-pair is captured as the invocant if it is : followed by a colon, but as a positional argument otherwise; all other : non-pairs are captured as positional arguments. Capture literals ignore their context like [...] does. What got me thinking about this was that I couldn't find decent documentation about Capture literals in the synopses. : So: : : $x = \$a; # $$x eqv $a : $x = \:foo;# %$x eqv { foo => 1 } : $x = \($a,); # @$x eqv ( $a ); is the comma neccessary, or are the : () enough? I think the () is probably enough. Problem: S02 explicitly states that '\3' is the same as '\(3)'. So: both of them put 3 into the scalar slot of the capture object, or both of them put the single-item list '(3)' into the array slot of the capture object. Whichever way they go, how would you do the other? : $x = \($a:); # $$x eqv $a : $x = \(:foo); # %$x eqv { foo => 1 }; assuming that adverbs can go : inside (). : $x = \($a, $b) # @$x eqv ($a, $b) : $x = \($a: $b) # $$x eqv $a; @$x eqv ($b) : $x = \:foo ($a: $b, $c):bar <== $d, $e <== flag => 0; # results : on next three lines: :# $$x eqv $a :# @$x eqv ($b, $c, $d, $e) :# %$x eqv { foo => 1, bar => 'baz', flag => 0 } Ignoring the syntax error, yes. Please don't ignore the syntax error; I'm not seeing it. : Note that this approach makes it impossible for a pair to end up : anywhere other than as a named argument in the capture object; while : this makes sense when the capture object is being used as a proxy : argument list, it makes less sense when it is being used as the : equivalent of perl 5's references, and thus is probably a bug. If you say "flag" => 0 it comes in as a pair rather than a named arg. I was under the impression that the left side of '=>' still gets auto-quoted in perl 6. Anyway, you're saying that if I capture a pair, it will be stored in the array portion of the capture object (the 'positional args'); if I capture an adverb, it will be stored in the hash portion of the capture object (the 'named args'). Right? -- Jonathan "Dataweaver" Lang
Re: Capture sigil
Larry Wall wrote: You don't need to use | to store a capture any more than you need @ to store an array. Just as $x = @b; @$x; gives you the original array, Huh. I'm not used to this happening. So what would the following code do, and why? my @b = ('foo', 'bar'); my $x = @b; say $x; likewise $x = [EMAIL PROTECTED]; @$a; gives you the original array as well. Err... don't you mean '@$x' instead of '@$a'? : >we currently don't allow assignment to a capture, only binding. : : IOW, if you want someone to be able to say '$|x' and get '$a' as a : result, you'd have to say '|x := \$a' (or perhaps '$|x := $a') instead : of '|x = \$a'. Right? Yes. That's how it's currently specced, anyway. (The \ is probably required, or it'll try to bind to the contents of $a instead.) ??? There's some potential non-dwimmery here - either that, or there's a steep learning curve (that I haven't mastered) before dwimmery can be applied[1]. I would expect $|x to refer to the scalar slot of the capture object |x; as such, '$|x := ...' would mean 'bind the scalar slot of |x to ...'. Likewise, I would expect '... := $a' to mean 'bind ... to the scalar variable $a'. Or is the distinction between '$a' and 'the contents of $a' similar to the distinction between a Unix filename and a Unix file ID? That is, are we talking about the difference between hard links and symbolic links in Unix? -- Jonathan "Dataweaver" Lang [1] I'm not sure that there's much of a difference between the two statements.
Re: Capture sigil
Aaron Sherman wrote: IMHO most of the confusion here goes away if capture variables ONLY store parameter-list-like captures, and any other kind of capture should, IMHO, permute itself into such a structure if you try to store it into one. That way, their use is carefully constrained to the places where Perl 6 can't already do the job. My understanding is that perl6 treats references as subsets of the capture object - something along the lines of the following (WARNING - this is going to border on 'internals' territory): When you bind a variable to an object, that variable becomes a synonym of that object for all purposes. However, barring auto-capturing, the variable's sigil must match the object's type: bind a scalar variable to an object, and the object better know how to behave like a scalar, else you have trouble. I understood that list-like objects generally don't know how to behave like scalars, and vice versa. A capture object is an object that's intended to allow for indirection: it's a perl6opaque object with (at least) three attributes inside, all of which are to be used exclusively for binding purposes: in effect, it's something like: role Capture { has $invocant, @positional, %named, ::returnType; } Note that you don't need any & or | attributes in order to capture a parameter list. The '\' prefix operator constructs a Capture object and binds the various parts of its parameter list to the appropriate attributes. The '$' prefix operator returns the Capture object's $invocant attribute; the '@' prefix operator returns the Capture object's @positional attribute; the Capture object's '%' prefix operator returns the Capture object's %named attribute; and the Capture object's '::' prefix operator returns the Capture object's ::returnType attribute. Such a Capture object would be unable to capture codeblocks or other Capture objects without extending its capabilities. That is, '\&code' wouldn't have an attribute to bind to &code. There _might_ be a way around this without introducing a new attribute, but only if code objects know how to behave like scalars. I may be off in terms of some of the details; but am I on the right course in general? -- Jonathan "Dataweaver" Lang
Re: RFC: multi assertions/prototypes: a step toward programming by contract
Minor nitpick: Any types used will constrain multis to explicitly matching those types or compatible types, so: our Int proto max(Seq @seq, *%adverbs) {...} Would not allow for a max multi that returned a string (probably not a good idea). IIRC, perl 6 doesn't pay attention to the leading Int here except when dealing with the actual code block attached to this - that is, "Int" isn't part of the signature. If you want Int to be part of the signature, say: our proto max(Seq @seq, *%adverbs -> Int) {...} More to the point, I _could_ see the use of type parameters here (apologies in advance if I get the syntax wrong; I'm going by memory): our proto max(Seq of ::T [EMAIL PROTECTED], *%adverbs -> ::T) {...} This would restrict you to methods where the return type matches the list item type. -- Jonathan "Dataweaver" Lang
Re: RFC: multi assertions/prototypes: a step toward programming by contract
Aaron Sherman wrote: TSa wrote: > Miroslav Silovic wrote: >> package Foo does FooMultiPrototypes { >> ... >> } > > I like this idea because it makes roles the central bearer of type > information. Type information is secondary to the proposal, but I'll run with what you said. This (the example, above) is a promise made by a class to meet its own specification. Actually, it's a promise made by a package (not a class) to meet the specification given by a role (which can, and in this case probably does, reside in a separate file - quite likely one heavily laced with POD). Specifically, the role states which subroutines the package must define, including the signatures that they have to be able to support. IOW, it defines what the package is required to do, as opposed to what the package is forbidden from doing (as your proposal does). If I'm understanding the idea correctly, you write the package role as a file, hand it to the programmer, and tell him to produce a package that does the role. -- Jonathan "Dataweaver" Lang
Re: RFC: multi assertions/prototypes: a step toward programming by contract
Aaron Sherman wrote: Jonathan Lang wrote: > Actually, it's a promise made by a package (not a class) to meet the > specification given by a role (which can, and in this case probably > does, reside in a separate file - quite likely one heavily laced with > POD). That's a fine thing to want to do. Not something that I was thinking of initially, and only tangentially related, but a good idea. I think you get this for free by embedding a proto (or perhaps a "sigform") inside of a role: role Foo { sigform bar($baz) { ... } } What would be the difference between this and role Foo { sub bar($baz) { ... } } ? IOW, what's the difference between a 'sigform' declaration and a "to be defined later" subroutine declaration? Notice the lack of export which forces this to only apply to the class or module to which the role is applied via composition, not to a module which imports that class or module. True enough. That said, it wouldn't be hard to change this. Consider the possibility of an "exported" trait, which causes whatever it's applied to to be exported whenever a module imports its package. Thus, you could say something like: role Foo; sub bar($baz) is exported { ... } At which point anything that imports a module that composes Foo will import bar as well. And I'm making the (probably erroneous) assumption that Perl 6 doesn't have a robust, intuitive means of marking package components for export already. I'm sure that a few moments with the appropriate Synopsis would correct said error. -- Jonathan "Dataweaver" Lang
RFC: multi assertions/prototypes: a step toward programming by contract
Trey Harris wrote: One thing that occurs to me: following this "contract" or "promise" analogy, what does C<...> mean in a role or class? Unless I've missed somewhere in the Synopses that explicates C<...> differently in this context, yada-yada-yada is just code that "complains bitterly (by calling C) if it is ever executed". So that's fine for an abstract routine at runtime--code calls it, and if it hasn't been reimplemented, it fails. But unless something else is going on with C<...>, as far as the language is concerned, a routine with body C< {... }> *is* implemented, as surely as a routine with body C<{ fail }> is implemented. So the routine is only "abstract" insofar as you'll need to reimplement it to do anything useful with it. -snip- Is my inference correct? I hope not. My understanding is that '{ ... }' is supposed to represent the notion of abstract routines: if you compose a role that has such routines into a class or package, I'd expect the package to complain bitterly if any such routines are left with yada-yadas as their codeblocks, on the basis that while roles can be abstract, classes and packages should not be. -- Jonathan "Dataweaver" Lang
Re: RFC: multi assertions/prototypes: a step toward programming by contract
Terminology issue: IIRC (and please correct me if I'm wrong), Perl 6 uses 'module' to refer to 'a perl 5-or-earlier module', and uses 'package' to refer to the perl 6-or-later equivalent. Aaron Sherman wrote: Details: Larry has said that programming by contract is one of the many paradigms that he'd like Perl 6 to handle. To that end, I'd like to suggest a way to assert that "there will be multi subs defined that match the following signature criteria" in order to better manage and document the assumptions of the language now that methods can export themselves as multi wrappers. Let me explain why. OK. My understanding of "programming by contract" as a paradigm is that one programmer figures out what tools he's going to need for the application that he's working on, and then farms out the actual creation of those tools to another programmer. Second, when you mention 'signature criteria', what immediately comes to mind is the notion of the signature, which applies restrictions on the various parts of an argument list: :(@array, *%adverbs) This applies two restrictions: there can be only one positional parameter, and it must do the things that a list can do. Change the comma to a colon, and you have a signature that says that there must be a list-like invocant, and that there can be no positional parameters. The only aspect of the signature that is not concerned with argument types is the part that determines how many of a particular kind of parameter (invocant, positional, or named) you are required or permitted to have: even the @-sigil in the first positional parameter (or the invocant, in the method-based signature) is specifying type information, as it's placing the requirement that that parameter needs to behave like a list. In effect, I could see thinking of a signature as being a regex-like entity, but specialized for matching against parameter lists (i.e., capture objects) instead of strings. In the continuing evolution of the API documents and S29, we are moving away from documentation like: our Scalar multi max(Array @list) {...} our Scalar multi method Array::max(Array @array:) {...} toward exported methods: our Scalar multi method Array::max(Array @array:) is export {...} "is export" forces this to be exported as a function that operates on its invocant, wrapping the method call. OK, that's fine, but Array isn't the only place that will happen, and the various exported max functions should probably have some unifying interface declared. This would seem to be a case for changing the above to something along the lines of: our Scalar multi submethod max(@array:) is export {...} This removes all references to Array from the signature, and leaves it up to the @-sigil to identify that the invocant is supposed to be some sort of list-like entity. The change above from 'method' to 'submethod' is predicated on the idea that methods have to be defined within a class or role, much like attributes have to be; if this is incorrect, then it could be left as 'method'. I'm thinking of something like: our proto max(@array, *%adverbs) {...} The Synposes already define a 'proto' keyword for use with routines; it's listed right alongside 'multi'. Were you intending to refer to this existing keyword, or did you have something else in mind? This suggests that any "max" subroutine defined as multi in--or exported to--this scope that does not conform to this prototype is invalid. 
Perl will throw an error at compile-time if it sees this subsequently: In short, you want to define a signature that every routine with the given name must conform to, whether that routine is a sub or submethod defined in the package directly, or if it is a method defined in a class or role that is in turn defined in the package. While 'role Foo { our method max(@array:) { ... } }' specifies that whatever composes the role in question must include a method called max that takes a list-like object as an invocant, you want to be able to say that any method, sub, submethod, or other routine defined in a given package that is called 'max' must match the signature ':(@array, *%adverbs)'. This would seem to bear some resemblance to Perl 6's notion of 'subtypes', which add matching criteria to objects, and throw exceptions whenever you try to assign a value to the object that doesn't meet the criteria. The goal, here, is to allow us to centrally assert that "Perl provides this subroutine" without defining its types or behavior just yet. Here's the thing: the above doesn't seem to require that any such subroutine be defined. That is, the coder could forego defining _any_ 'max' routines when he implements your documentation, and the first indication you'd have of this oversight would be when your application complains that the function doesn't exist. That is, you're not saying 'this module provides this subroutine'; you're saying 'if this
Re: RFC: multi assertions/prototypes: a step toward programming by contract
Mark J. Reed wrote: Jonathan Lang wrote: > Terminology issue: IIRC (and please correct me if I'm wrong), Perl 6 > uses 'module' to refer to 'a perl 5-or-earlier module', and uses > 'package' to refer to the perl 6-or-later equivalent. Other way around. "package" is Perl 5, because that's the P5 keyword, and seeing a "package" declaration is an indicator to Perl6 that the file it's processing is written in P5. In P6, there are both "module"s and "class"es, but no "package"s other than those inherited from P5 code.. Right. Thank you; I'm not sure how I got those flipped. -- Jonathan "Dataweaver" Lang
Re: RFC: multi assertions/prototypes: a step toward programming by contract
Larry Wall wrote: but only if self.HOW does Hash. And here I thought you were a responsible, law-abiding citizen... :P -- Jonathan "Dataweaver" Lang
Re: Automatic coercion and context
My understanding is that "does" will prevent coercion. In particular, it is erroneous to say that 'Str does Num' or that 'Num does Str'. If you say 'Foo does Bar', what this means is that anything Bar can do, Foo can do, too. As such, any routine that asks for a Bar can just as easily be given a Foo. It doesn't convert the Foo into a Bar; it simply uses it as is. As such, the only time coercion might take place is when the object that you're trying to use _doesn't_ do the role that's being asked for. -- Jonathan "Dataweaver" Lang
Re: Automatic coercion and context
Jonathan Scott Duff wrote: I hope you're way off the mark. Automatic coercion was one of the annoyances I remember from C++. Debugging becomes more difficult when you have to not only chase down things that are a Foo, but anything you've compiled that might know how to turn itself into a Foo. OTOH, there is a time and place for so-called "automatic" coercion, such as the way that perl freely converts between Str and Num. Indeed, ISTR something about standardized stringifying and numifying method names that let _any_ object turn into a string or number (or boolean, IIRC) when placed in the appropriate context. And I can see some benefit to extending this ability to other types, as long as it's used sparingly. In particular, I'm thinking about complex numbers - it would be nice to see perl convert between the rectilinear and polar representations of complex numbers in the same way that it converts between Num and Str. -- Jonathan "Dataweaver" Lang
Re: class interface of roles
Brad Bowman wrote: Hi, Did you mean to go off list? No, I didn't. Jonathan Lang wrote: > Brad Bowman wrote: >> Does the "class GenSquare does GenEqual does GenPointMixin" line imply >> an ordering of class composition? > > No. This was a conscious design decision: the order in which you > compose roles has no effect on the result. Great. That's what I want. >> I would like a way to make one Role to require that the target class >> "does" another abstract Role, is there already such a technique? > > What's wrong with just having the role compose the other abstract role > itself? That is, instead of "role Bar requires Baz; class Foo does > Bar does Baz", why not say "role Bar does Baz; class Foo does Bar"? > That would work, as long as you get a compile time error when Foo doesn't implement the Baz abstract interface. ...which is exactly what happens. Mind you, the compile-time error won't report that Baz::method is unimplemented, or even that Bar::method is unimplemented; it will report that Foo::method is unimplemented. It's up to the programmer to figure out where Foo::method came from, on the off chance that it matters. It's perhaps also less clear that the indirect Baz mixin must be implemented. If Bar does Baz, you can read that as "Bar requires Baz to be implemented, too." There's a tendency, when dealing with traditional inheritance systems, to think of the primary function of a superclass as being a supplier of implementations for any classes that inherit from it. I submit that this is the wrong way to think of roles: rather, a role is first and foremost a source of interface requirements for whatever does that role. If a role includes a method declaration, that should be read as "anything that does this role must provide a method that matches this one's name and signature." If a role does another role, that should be read as "anything that does this role should conform to the requirements of this other role as well." Any implementations that a role provides should be viewed as _sample_ implementations, to be used if and only if you can find no reason not to use them. BTW, this includes attributes: if a role declares a public attribute, this should be read as that role requiring an accessor method for that attribute; if whatever does the role redefines the methods (including the accessor method) such that none of them refer to the attribute, the attribute should not be mixed in. If a role defines a private attribute and then fails to define any methods that access that attribute, the only way that that attribute should end up getting mixed into something else is if whatever does the role that the attribute is in provides the methods in question. In summary, attributes and method bodies in roles should be taken as _suggestions_ - only the public method names and signatures should be viewed as requirements. I guess any role could just declare some yada methods and leave it at that. That too. Bear in mind that when you compose a role into another role, you are under no obligation to replace yada methods with defined ones. In fact, it's even conceivable for you to replace a defined method with a yada method if the default implementation from the other role isn't suited to the current one. The only time that you're required to replace yada methods with defined methods is when you compose into a class. -- Jonathan "Dataweaver" Lang
Re: Re: class interface of roles
Stevan Little wrote: Brad Bowman wrote: > How does a Role require that the target class implement a method (or > do another Role)? IIRC, it simply needs to provide a method stub, like so: method bar { ... } This will tell the class composer that this method must be created before everything is finished. Correct. I suppose this is again where the different concepts of classes are roles can get very sticky. I have always look at roles, once composed into the class, as no longer needing to exist. In fact, if it weren't for the idea of runtime role compostion and runtime role introspection, I would say that roles themselves could be garbage collected at the end of the compile time cycle. Again, you've hit the nail on the head. To elaborate on this a little bit, the only reason that perl needs to keep track of a role hierarchy at all is for parameter matching purposes (if Foo does Bar and Bar does Baz, Foo can be used if a signature asks for Baz). > I would like a way to make one Role to require that the target class > "does" another abstract Role, is there already such a technique? I am not familiar with one, but I have had this need as well lately in using Moose roles. We have a concept in Moose (stolen from the Fortress language) where a particular role can exclude the use of another role, but not the ability to require it, although I see no reason why it couldn't be done. As I mentioned before, having role Bar require that Baz also be composed is a simple matter of saying "role Bar does Baz". This notion of exclusionary roles is an interesting one, though. I'd like to hear about what kinds of situations would find this notion useful; but for the moment, I'll take your word that such situations exist and go from there. I wonder if it would be worthwhile to extend the syntax of roles so that you could prepend a "no" on any declarative line, resulting in a compilation error any time something composing that role attempts to include the feature in question. So, for instance, you might have role Bar { no method baz (Num, Str); } class Foo does Bar { method baz (Num $n, Str $s) { ... } # compilation error: Bar forbade this method! } or role Bar no does Baz { # granted, the english grammar is all wrong... } class Foo does Bar does Baz { # compilation error: Bar forbade the inclusion of Baz! } This is not the same as removing something that a composed role brought in, which is a separate potentially useful notion. The former is "Foo doesn't play well with Bar; so don't try to use them together"; the latter is "Foo can do _almost_ everything Bar can, but not quite." Mind you, if I ever see something to the effect of "Foo does Bar except baz()" as valid syntax, I'll expect a query to the effect of "Foo does Bar?" to answer to the negative. This would include "can Foo be used when Bar is asked for?" -- Jonathan "Dataweaver" Lang
"Don't tell me what I can't do!"
Twice now in the last week or so, I've run across suggestions to the effect of including syntax that forbids otherwise valid code from being used. First was during the discussion about coming up with a way to program by contract, where the poster suggested a means of saying "any declaration of method 'm' that doesn't conform to this signature should be illegal"; the second was in a recent thread concerning roles, where a poster commented that he'd like to be able to declare in role 'A' that anything composing it should be forbidden from composing role 'B' also.

I'm not used to programming styles where a programmer intentionally and explicitly forbids the use of otherwise perfectly legal code. Is there really a market for this sort of thing?

-- Jonathan "Dataweaver" Lang
Re: "Don't tell me what I can't do!"
jerry gay wrote:

Jonathan Lang wrote:
> I'm not used to programming styles where a programmer intentionally
> and explicitly forbids the use of otherwise perfectly legal code. Is
> there really a market for this sort of thing?

use strict;

Hmm... granted. But that does tend to sidestep the main thrust of my question. The examples I gave involved specific roles or routines being forbidden from use in certain situations; my gut instinct is that if you don't think that it's appropriate to use a particular role or routine somewhere, you should simply not use it there; I can't see why you'd want the compiler or runtime to enforce not using it there.

-- Jonathan "Dataweaver" Lang
Re: "Don't tell me what I can't do!"
Dave Whipp wrote:

Smylers wrote:
>> use strict;
>
> That's different: it's _you_ that's forbidding things that are otherwise
> legal in your code; you can choose whether to do it or not.

Which suggests that the people wanting to specify the restrictions are actually asking for a way to specify additional strictures for users of their modules, which are still controlled by /[use|no] strict/. While it is true that any module is free to implement its C<import> method to allow its users to specify a level of strictness, it would be nice to abstract this type of thing into the "use strict" mechanism.

Before we start talking about how such a thing might be implemented, I'd like to see a solid argument in favor of implementing it at all. What benefit can be derived by letting a module specify additional strictures for its users? Ditto for a role placing restrictions on the classes that do it.

-- Jonathan "Dataweaver" Lang
Re: "Don't tell me what I can't do!"
Dave Whipp wrote:

Or we could view it purely in terms of the design of the core "strict" and "warnings" modules: is it better to implement them as centralised rulesets, or as a distributed mechanism by which "core" modules can register module-specific strictures/warnings/diagnostics. Question: if module A uses strict, and module B uses module A, does module B effectively use strict?

I hope not. I was under the impression that pragmas are local to the package in which they're declared. If that's the case, then pragmas will not work for allowing one module to impose restrictions on another unless there's a way to export pragmas.

-- Jonathan "Dataweaver" Lang
Abstract roles, classes and objects
Trey Harris wrote:

It sounds like the assumption thus far has been that the existence of roles implies that abstract classes are disallowed, so you'd write:

    role Dog {
        method bark { ... } #[ ... ]
    }

    class Pug does Dog {
        method bark { .vocalize($.barkNoise) }
    }

S12 says: "Classes are primarily for instance management, not code reuse. Consider using C<roles> when you simply want to factor out common code."

To me, "instance management" means "the package can create, build, and destroy objects" - not "the package initializes and cleans up attributes". A 'class' that is forbidden from creating, building, and destroying objects isn't a class; it's a role. In fact, you can think of 'role' as being shorthand for 'abstract class' - after all, the only difference between a concrete class and an abstract class is that the former must implement everything and can manage instances, while the latter cannot manage instances but doesn't have to implement everything.

-snip-

In Perl 6, the abstract SystemMonitor could be a role, and a concrete ScriptedMonitor could be a class that does SystemMonitor, but it's not at all clear to me what HardwareMonitor would be, since classes can't be abstract and roles can't inherit from classes.

S12 says:
*> A role is allowed to declare an additional inheritance for its
*> class when that is considered an implementation detail:
*>
*>     role Pet {
*>         is Friend;
*>     }

So:

    role SystemMonitor { ... }
    class CPUMonitor does SystemMonitor { ... }
    class DiskMonitor does SystemMonitor { ... }
    class ScriptedMonitor does SystemMonitor { ... }
    role HardwareMonitor is ScriptedMonitor { ... }
    class FanMonitor does HardwareMonitor { ... }
    class TempMonitor does HardwareMonitor { ... }
    class PowerSupplyMonitor does HardwareMonitor { ... }
    # and so on

is perfectly valid, and is shorthand for

    role SystemMonitor { ... }
    role HardwareMonitor { ... }
    class CPUMonitor does SystemMonitor { ... }
    class DiskMonitor does SystemMonitor { ... }
    class ScriptedMonitor does SystemMonitor { ... }
    class FanMonitor is ScriptedMonitor does HardwareMonitor { ... }
    class TempMonitor is ScriptedMonitor does HardwareMonitor { ... }
    class PowerSupplyMonitor is ScriptedMonitor does HardwareMonitor { ... }
    # and so on

ISTR that it's also possible to treat a class as if it were a role (e.g., "does classname" is valid, both as a statement in another role or class and as an expression used to test the truth of the claim), although I can't seem to find documentation for this at the moment.

-- Jonathan "Dataweaver" Lang
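A minimal sketch of that S12 "additional inheritance" idea as present-day Raku accepts it (the class names and the poll method are invented for illustration):

    class ScriptedMonitor {
        method poll { "polling via script" }
    }

    # The role carries the inheritance as an implementation detail of whoever does it.
    role HardwareMonitor is ScriptedMonitor { }

    class FanMonitor does HardwareMonitor { }

    say FanMonitor.new.poll;                  # polling via script
    say FanMonitor.new ~~ ScriptedMonitor;    # True: the inheritance came in via the role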
Re: "Don't tell me what I can't do!"
chromatic wrote:

jesse wrote:
> Ok. So, I think what you're saying is that it's not a matter of "don't let
> people write libraries that add strictures to code that uses those modules"
> but a matter of "perl should always give you enough rope to turn off any
> stricture imposed on you by external code."
>
> Do I have that right?

Yes. You might also add "... or enable further strictures", but that sounds like what I was trying to say.

Consider the following idea, bearing in mind that chromatic will probably consider it to be a waste of time and that we shouldn't hold up the release of p6 in order to develop it even if it's deemed acceptable:

A 'policy' is a thing which places restrictions on what the programmer can do. Policies can be named or anonymous. Named policies can be exported. A module that exports policies is also known as a contract. As a means of making this easier, a 'contract' trait exists for modules which applies the 'export' trait to all policies found within. Proper use of policies is to define them in a contract, then import them into the appropriate lexical scope with 'use' - this lets you deport them later (using 'no') when you no longer want their restrictions. Example:

    module strict is contract;
    policy vars { ... }; # Handwavium for the definition of a policy named 'vars'.
    ...
    # EOF

    # somewhere in another module:
    use strict; # imposes all of strict's policies on this lexical scope.
    ...
    {
        no strict <vars>; # rescinds the 'vars' policy within this codeblock.
        ...
    }

AFAICT, this can't be done using the current toolkit, as there's nothing akin to policy definition. Compared to that, adding a 'contract' trait is trivial. Alternatively, contracts could be promoted to the same level as classes and modules, with a keyword of their own and an additional requirement that policy definition must occur within a contract. You'd still make use of a policy by importing it from a contract where appropriate, and all that would change in the above example would be that the first line would change to "contract strict;"

Another benefit of this approach is that it might be possible to say that a contract is exempt from its own policies; you have to import policies from a contract in order for them to take effect. This keeps contract writers from having to abide by the policies that they're writing while they write them.

-- Jonathan "Dataweaver" Lang
import collisions
What if I import two modules, both of which export a 'foo' method? IMHO, it would be nice if this sort of situation were resolved in a manner similar to how role composition occurs: call such a conflict a fatal error, and provide an easy technique for eliminating such conflicts.

One such technique would be to allow the import list to rename items by means of Pairs: any Pair that occurs in the import list uses its key as the item's old name and its value as the new name. Thus,

    use Foo (foo => 'bar');
    use Bar (:foo<baz>);

would import 'foo' from 'Foo', but it would appear in the current lexical scope as 'bar'. Likewise, it would import 'foo' from Bar, but would rename it as 'baz'. Another possibility would be to supply a substitution rule that transforms the names.

-- Jonathan "Dataweaver" Lang
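For comparison, a sketch of the collision itself in present-day Raku, assuming two inline modules that each define a foo routine; absent any renaming syntax, fully qualified names are one way to keep both callable:

    module Foo { our sub foo is export { "Foo's foo" } }
    module Bar { our sub foo is export { "Bar's foo" } }

    # Importing both sets of exports into one lexical scope would clash on &foo,
    # so call the two routines through their package names instead.
    say Foo::foo();   # Foo's foo
    say Bar::foo();   # Bar's foo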
Re: class interface of roles
TSa wrote:

I'm not familiar with the next METHOD syntax.

It's simple: if a multi method says "next METHOD;" then execution of the current method gets aborted, and the next MMD candidate is tried; it uses the same parameters that the current method used, and it returns its value to the current method's caller. In effect, "next METHOD" is an aspect of MMD that allows an individual method to say "I'm not the right guy for this job", and to punt to whoever's next in line. If not for the possibility of side effects that occur before the punt, one could pretend that the current method was never tried in the first place.

I see that quite different: roles are the primary carrier of type information! Dispatch depends on a partial ordering of roles. I think all roles will form a type lattice that is available at runtime for type checks.

True: the relationships between various roles and classes ("who does what?") are needed for runtime type checking. However, the _contents_ of the roles are only important for composing classes and for the occasional runtime introspection of a role. If roles are never composed or inspected at runtime, the only details about them that need to be kept are "who does what?" - and if all type-checking takes place at compile-time, not even this is needed. But now we're getting dangerously close to perl6internals territory...

-- Jonathan "Dataweaver" Lang
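A small sketch of the same punting behavior in present-day Raku, where the old "next METHOD" spelling became nextsame (defer to the next matching candidate, reusing the same arguments); the candidates here are invented for illustration:

    multi sub categorize(Int $n) {
        return "small integer" if $n < 10;
        nextsame;                        # punt: let the next candidate answer for us
    }
    multi sub categorize(Any $n) { "something else" }

    say categorize(3);     # small integer
    say categorize(42);    # something else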
Re: Re: class interface of roles
TSa wrote:

Dispatch depends on a partial ordering of roles.

Could someone please give me an example to illustrate what is meant by "partial ordering" here?

-- Jonathan "Dataweaver" Lang
S5: substitutions
S5 says:

There is no /e evaluation modifier on substitutions; instead use:

    s/pattern/{ doit() }/

Instead of /ee say:

    s/pattern/{ eval doit() }/

In my perl5 code, I would occasionally take advantage of the "pairs of brackets" quoting mechanism to do something along the lines of:

    s(pattern) { doit() }e

Translating this to perl 6, I'm hoping that perl6 is smart enough to let me say:

    s(pattern) { doit() }

Instead of

    s(pattern) { { doit() } }

--

In a similar vein, I tend to write other perl5 substitutions using parentheses for the pattern so that I can use double-quotes for the substitution expression:

    s(pattern) "expression"

This highlights to me the fact that the expression is _not_ a pattern, and uses a syntax more akin to interpolated strings than to patterns. The above bit about executables got me to thinking: _if_ perl6 is smart enough to recognize curly braces and automatically treat the second argument as an executable expression, would there be any benefit to letting perl6 apply customized quoting semantics to the second argument as well, based on the choice of delimiters? e.g., using single quotes would disable variable substitutions and the like (useful in cases where the substitution doesn't make use of the captures done by the pattern, if any).

-- Jonathan "Dataweaver" Lang
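As it happens, the closure-in-the-replacement form works in present-day Raku much as S5 describes; a small illustration with invented data:

    my $price = "total: 10 USD";
    $price ~~ s/ (\d+) /{ $0 * 2 }/;   # braces interpolate a closure, so no /e is needed
    say $price;                        # total: 20 USD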
S5: perl5 regex flags
It's been indicated that several regex modifiers that are found in Perl5 are gone. That's all well and good, unless you're using the Perl5 modifier to port code to perl6. What happens if you're trying to port in a regex that made use of one of the now-obsolete modifiers? Bear in mind that there are new s and x modifiers that have nothing to do with their perl5 homonyms.

Suggestion: let the Perl5 modifier take an optional argument that consists of perl5-compatible, perl6-incompatible modifiers: m, s, and/or x. The other regex-related perl5-compatible modifiers (e, g, and o) have more to do with the match and substitute functions than with the patterns themselves, and so can be updated to perl6 standards without regard to the pattern used.

-- Jonathan "Dataweaver" Lang
Re: S5: perl5 regex flags
Larry Wall wrote:

On Sat, Oct 07, 2006 at 03:28:04PM -0700, Jonathan Lang wrote:
: It's been indicated that several regex modifiers that are found in
: Perl5 are gone. That's all well and good, unless you're using the
: Perl5 modifier to port code to perl6. What happens if you're trying
: to port in a regex that made use of one of the now-obsolete modifiers?

You just put them inside with (?smx).

Ah; gotcha. I had been under the impression that there were subtle differences between (?msx) and //msx, such as the former allowing you to turn the flags on or off mid-pattern. Not that it matters if you put this at the front of the pattern...

-- Jonathan "Dataweaver" Lang
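For what it's worth, a sketch of Larry's suggestion as I'd expect it to look in present-day Raku, assuming the :P5 adverb (which selects Perl 5 regex syntax) honors inline modifiers the way Perl 5 does:

    my $text = "First line\nSECOND LINE";

    # Perl 5 syntax via :P5; (?msi) switches on /m, /s and /i inside the pattern itself.
    say so $text ~~ m:P5/(?msi)^second line$/;   # True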
Re: S5: substitutions
Larry Wall wrote:

Jonathan Lang wrote:
: Translating this to perl 6, I'm hoping that perl6 is smart enough to let me
: say:
:
: s(pattern) { doit() }

Well, the () are illegal without intervening whitespace because that makes s() a function call, but we'll leave that alone.

Thank you; I noticed this after I had sent it.

Perl 5 let certain choose-your-own quotes introduce various kinds of odd semantics, and that was generally viewed as a mistake. That is why S02 says:

    For these "q" forms the choice of delimiters has no influence on the
    semantics. That is, C<''>, C<"">, C<< <> >>, C<«»>, C<``>, C<()>, C<[]>,
    and C<{}> have no special significance when used in place of C as
    delimiters.

We could make an exception for the second part of s///, but certainly for this case I think it's easy enough to write:

    .subst(/pattern/, { doit })

However, taken as a macro, s/// is a rather odd fish. The right side isn't just a string, but a deferred string, which implies that there are always curlies there, much like the right side of && implies deferred evaluation.

Perhaps quotes should be given the same "defer or evaluate as appropriate to the context" capability that regexes and closures have? That is, 'q (text)' is always a Quote object, which may be evaluated immediately in certain contexts and be passed as an object in others. As a first cut, consider using the same rule for this that regexes use: in a value context (void, boolean, string, or numeric) or as an explicit argument of ~~, a quote is immediately evaluated; otherwise, it's passed as an object to be evaluated later.

The main downside I see to this is that there's no way to force one approach or the other; a secondary issue has to do with the usefulness of an unevaluated string: with regexes and closures, the unevaluated versions are useful in part because they can be made to do different things when evaluated, based on the circumstances: $x ~~ $regex will do something different than $y ~~ $regex, and closures can potentially be fed arguments that allow one closure to do many things. A quote, OTOH, isn't necessarily that flexible. Or is it? Is there benefit to extending the analogy all the way, letting someone define a parameterized quote?

But it's possible that some syntactic relief of a dwimmy sort is in order here. One could view s[pattern] as a kind of metaprefix on the following expression, sort of a self-contained unary &&. I wonder how often we'd have to explain why

    s/pattern/ "expression"

doesn't do that, though. 'Course, it's already like that in Perl 5.

Probably not too often - although I _would_ recommend that you emphasize the distinction between standard regex notation used everywhere else and the "extended" regex notion used by s///.

I _do_ like the idea of reserving this behavior to situations where the pattern delimiters are a matched set, letting you freely choose some other delimiter for the expression. In particular, I'm not terribly fond of the idea of s'pattern'expression' applying single-quote semantics to the expression.

Unlike in Perl 5, this approach would rule out things like:

    s[pattern] !foo!

which would instead have to be written:

    s[pattern] qq!foo!

Fine by me.

This would also let you easily apply quote modifiers to the expression. As a unary lazy prefix, you could even just say

    s[pattern] doit();

Of course, then people will wonder why .subst(/pattern/, doit()) doesn't work.

Perhaps. But people quickly learn that different approaches in perl often have their own unique quirks; this would just be one more example.
Which makes me want to build it into the pattern somewhere where there's already deferred evaluation that just happens to be triggered at the right moment:

    /pattern {subst doit}/
    /pattern {subst "($0)"}/
    /pattern {subst q:to'END'}/
    a new line
    END

We can give the user even more rope to shoot themselves in the dark with:

    /pattern {$/ = doit}/
    /pattern {$0 = "($0)"}/
    /pattern {$() = q:to'END'}/
    a new line
    END

The possibilities are endless...

These aren't syntaxes that I'd want to use; but then, TIMTOWTDI. The main problem that I have with this approach is that it could interfere with being able to use the venerable s/pattern/expression/ notation; I'm looking to open up new possibilities, not to remove a perfectly workable existing one.

Well, not quite. One syntax we *can't* allow is

    /pattern/{ doit }

because that's already used to pull named captures out of the match object.

...which brings up another potential conflict
Re: S5: substitutions
Larry Wall wrote:

As a unary lazy prefix, you could even just say

    s[pattern] doit();

Of course, then people will wonder why .subst(/pattern/, doit()) doesn't work.

Another possibility: make it work. Add a "delayed" parameter trait that causes evaluation of that parameter to be postponed until the first time that the parameter actually gets used in the routine. If it never gets used, then it never gets evaluated. I could see uses for this outside of the narrow scope of implementing substitutions.

-- Jonathan "Dataweaver" Lang
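The proposed "delayed" trait is hypothetical, but the effect can be approximated in present-day Raku by passing an explicit block and only calling it when needed; a rough sketch with invented names:

    sub render-report { say "  (expensive work happens here)"; "full report" }

    # &message binds a code object; its body runs only if and when we call it.
    sub log-if(Bool $wanted, &message) {
        say message() if $wanted;
    }

    log-if(False, { render-report() });   # never evaluated
    log-if(True,  { render-report() });   # evaluated only when actually used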
Re: S5: substitutions
Jonathan Lang wrote: Another possibility: make it work. Add a "delayed" parameter trait... ...although "lazy" might be a better name for it. :) -- Jonathan "Dataweaver" Lang
Re: S5: substitutions
Larry Wall wrote:

On Sat, Oct 07, 2006 at 07:49:48PM -0700, Jonathan Lang wrote:
: Another possibility: make it work. Add a "delayed" parameter trait
: that causes evaluation of that parameter to be postponed until the first
: time that the parameter actually gets used in the routine. If it
: never gets used, then it never gets evaluated. I could see uses for
: this outside of the narrow scope of implementing substitutions.

Tell me how you plan to do MMD on a value you don't have yet.

MMD is based on types, not values; and you don't necessarily have to evaluate something in order to know its type. Also, you don't necessarily need every argument in order to do MMD: if there's a semi-colon in any of the candidates' signatures prior to the argument in question, MMD stands a decent chance of selecting a candidate before the question of its type comes up. Worst case scenario (a Code object without a return type being compared to a non-Code parameter), you can treat the argument's type as "Any", and let the method redispatch once the type is known, if it's appropriate to do so.

That said, the real problem here is figuring out what to do if some candidates ask for a given parameter to be lazily evaluated and others don't. It would probably be best to restrict the "lazy evaluation" option to the prototype's parameters, so that it always applies across the board.

--

Consider this as another option: instead of a parameter trait, apply a trait to the method prototype. With this trait in play, all parameter evaluations are postponed as long as possible. If the first candidate needs only the first two parameters to test its viability, only evaluate the first two parameters before testing it. If the dispatch succeeds, the other parameters remain unevaluated until they actually get used in the body. If all of the two-parameter candidates fail, evaluate the next batch of parameters and go from there. This approach doesn't guarantee that a given parameter won't be evaluated before its first appearance within the routine; but it does remove the guarantee that it will be.

--

In the case of subst, there's an additional wrinkle: you can't always evaluate the expression without making reference to the pattern's Match object, which won't be known until the pattern is applied to the invocant. In particular, closures that refer to $0, $1, etc. will only work properly if called by the method itself, and only after $0, $1, etc. have been set. All things considered, the best solution for subst might be to treat the timing of quote evaluation in a manner analogous to regex evaluation.

-- Jonathan "Dataweaver" Lang
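On that last point, present-day Raku does behave this way for .subst with a closure replacement, as far as I can tell: the closure runs once per match, after the match variables are bound, so $0 and friends are usable inside it. A one-line illustration:

    say "3 + 4 = ?".subst(/ (\d+) \s* '+' \s* (\d+) /, { $0 + $1 });   # 7 = ?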
S5: substitutions
Smylers wrote:

Jonathan Lang writes:
> Translating this to perl 6, I'm hoping that perl6 is smart enough to
> let me say:
>
> s(pattern) { doit() }
>
> Instead of
>
> s(pattern) { { doit() } }

That special case is nasty if you don't know about it -- you inadvertently execute as code something which you just expected to be a string. Not a good trap to have in the language.

If you expected it to be a string, why did you use curly braces? While I'm completely on board with the idea that _pattern_ delimiters shouldn't affect the _pattern's_ semantics, the second half of the search-and-replace syntax isn't a pattern. Conceptually, it's either a string or an expression that returns a string.

Larry pretty much summed up what I'm looking for in this regard - change the s/// syntax so that it has two distinctive forms:

    s/pattern/string/

(where '/' can be replaced by any valid non-bracketing delimiter, and string is always evaluated as an interpolated string) or

    s[pattern] expression

(where '[' and ']' can be replaced by any valid pair of bracketing delimiters, and expression is evaluated as a perl6 expression)

-- Jonathan "Dataweaver" Lang
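As it turns out, present-day Raku adopted something very close to that second form, spelling the replacement expression as an assignment; a small illustration (not the 2006 proposal itself):

    my $a = "n = 2";
    $a ~~ s/2/two/;            # classic form: the replacement is an interpolating string
    say $a;                    # n = two

    my $b = "n = 2";
    $b ~~ s[\d+] = 2 ** 10;    # bracketed form: the right-hand side is an ordinary expression
    say $b;                    # n = 1024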
Re: Synposis 26 - Documentation [alpha draft]
The only thing that I'd like to see changed would be to allow a more flexible syntax for formatting codes - in particular, I'd rather use something analogous to the 'embedded comments' described in S02, replacing the leading # with an appropriate capital letter (as defined by Unicode) and insisting on a word break just prior to it. I'd also prefer a more Wiki-like dialect at some point (e.g., '__underlined text__', '_italicized text_' and '*bold*' instead of 'U', 'I' and 'B'); but that can wait. Otherwise, looks good. -- Jonathan "Dataweaver" Lang
S5: substitutions
Smylers wrote:

Jonathan Lang writes:
> If you expected it to be a string, why did you use curly braces?

Because it isn't possible to learn all of Perl (5 or 6) in one go. And in general you learn rules before exceptions to rules.

Agreed.

In general in Perl the replacement part of a substitution is a string, one that takes interpolation like double-quoted strings do.

Here's where I differ from you. In general, string literals are delimited by quotes; pattern literals are delimited by forward slashes; and closures (i.e., code literals) are delimited by curly braces. And once you learn that other delimiters are possible for patterns and strings, that knowledge comes with the added fact that in order to use a non-standard delimiter, you have to preface it with a short tag clearly identifying what is being delimited.

In general, you can use literals anywhere you can use variables, and vice versa. There are two crucial exceptions to this, both of which apply to the topic at hand: one is minor, and the other is major.

The minor exception is the pattern-matching macro, m//. This macro takes a pattern literal and applies it as a match criterion to either the current topic ($_) or whatever is attempting to match (via ~~). m// _must_ take a pattern literal; it cannot take a variable containing a Regex object. To do the latter, you have to embed the Regex object in a pattern literal. m// is a _minor_ exception because it can be viewed as being the complement of rx// - both can be thought of as pattern literals, with m// being a pattern literal that always attempts to create a Match object and rx// being a pattern literal that always tries to create a Regex object. Meanwhile, you have the .match method: unlike m//, .match isn't choosy about where it gets its Regex object; it can come from a pattern literal, as with m//; it can be passed in by means of a variable; it can be composed on the spot by an expression; and so on. The possibilities are endless.

The s/// macro is a Frankenstein Monster, stitched together from the bodies of a pattern literal and an interpolating string literal and infused with the spirit of a search-and-replace algorithm. Like the m// macro, s/// can only work on literals, never on variables (unless, as above, you embed them in literals). In addition, if you choose a non-bracketing delimiter for the pattern literal, you _must_ use the same delimiter for the string literal. (This is more of a handicap than at first it seems: in general, different kinds of literals use different delimiters. With s///, you're forced to use the same delimiter for two different kinds of literals: a pattern and a string.) Using bracketed delimiters for the pattern gets around this problem, but you're still left with the fact that the delimiters for the string no longer follow the common-sense rule of "either double-quotes or 'qq' followed by something else" - no matter what delimiters you apply here, the semantics remain the same - unlike anywhere else that string literals are used. And short of embedding them in the literal (notice a trend here?), you cannot apply any modifiers to the string - only to the pattern or to the search-and-replace algorithm. In this regard, the string literal is the odd man out - s/// could be thought of as a pattern literal with an auxiliary string literal attached to it, but not the other way around. There's nothing natural about this beastie; and if it wasn't so darn useful, I'd advocate dropping it.
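A small present-day Raku illustration of the m// / rx// / .match distinction drawn above:

    my $rx = rx/ \d+ /;            # rx// builds a Regex object without applying it
    say "abc 42" ~~ $rx;           # smartmatching applies it: ｢42｣
    say "abc 42".match($rx);       # .match happily accepts the same Regex object
    say "abc 42" ~~ m/ \d+ /;      # m// is a match literal: it matches on the spot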
The .subst method bypasses _all_ of these problems, letting you use distinct and independently modifiable literals for each of the pattern and the string, or even letting you use a variable or expression to supply the pattern (or string) in lieu of literals. On the downside, the .subst syntax isn't nearly as streamlined as the s/// syntax. In addition, there's the issue about delayed evaluation (or lack thereof) of the string argument, currently being discussed.

In general in Perl if the default delimiter for something is inconvenient you can pick a different delimiter -- this includes patterns, and also strings. And if you pick any sort of brackets for your delimiters then they match -- which is handy, cos it means that they can still be used even if the string inside contains some of those brackets.

As noted above, if you choose non-standard delimiters, you have to explicitly tag them; and with the exception of s/// and tr///, a given set of delimiters delimits one thing at a time. s/// and tr/// are exceptions to the general rule.

So it's quite possible for somebody to have picked up all the above, and have got used to using C<q{}> or C<qq{}> when he wishes to quote long strings. The form with braces has the advantage that they are relatively uncommon in text (and HTML, and SQL, and many other typically encountered long strings). And he will be used to saying 'qq{long