Re: The Use and Abuse of Liskov (was: Type::Class::Haskell does Role)

2005-07-19 Thread Luke Palmer
On 7/17/05, Damian Conway <[EMAIL PROTECTED]> wrote:
>  "You keep using that word. I do not think
>   it means what you think it means"
>   -- Inigo Montoya

Quite.  I abused Liskov's name greatly here.  Sorry about that.

Anyway, my argument is founded on another principle -- I suppose it
would be Palmer's zero-effect principle, that states:

"In absence of other information, a derived class behaves just
like its parent."

I can argue that one into the ground, but it is a postulate and
doesn't fall out of anything deeper (in my thinking paradigm, I
suppose).  My best argument is this: how can you expect to add to
something's behavior if it changes before you start?

Every SMD system that I can think of obeys this principle (if you
ignore constructors for languages like Java and C++).

And as I showed below, the Manhattan metric for MMD dispatch is
outside the realm of OO systems that obey this principle.  In fact,
I'm pretty sure (I haven't proved it) that any MMD system that relies
on "number of derivations" as its metric will break this principle.
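That claim can be made concrete with a toy model (a Python sketch of my own, not Perl 6's actual dispatcher; all class and variant names here are hypothetical). Distance is the argument class's MRO offset from the parameter class, and a variant's score is the Manhattan sum over arguments. Inserting an empty intermediate class, which by the zero-effect principle should change nothing, changes the dispatch result:

```python
# Toy Manhattan-metric dispatcher (illustrative; not Perl 6's real rules).
def derivation_distance(arg_cls, param_cls):
    """Inheritance steps from arg_cls up to param_cls (via the MRO)."""
    return arg_cls.__mro__.index(param_cls)

def manhattan_dispatch(variants, arg_classes):
    """variants: {name: tuple of parameter classes}. Returns the
    lowest-scoring variant name(s); more than one means ambiguity."""
    scores = {
        name: sum(derivation_distance(a, p)
                  for a, p in zip(arg_classes, params))
        for name, params in variants.items()
        if all(p in a.__mro__ for a, p in zip(arg_classes, params))
    }
    best = min(scores.values())
    return sorted(name for name, s in scores.items() if s == best)

class A: pass
class B(A): pass
class C(A): pass
variants = {"g(B, A)": (B, A), "g(A, C)": (A, C)}
# Both variants score 1 for a (B, C) call: ambiguous.
assert manhattan_dispatch(variants, (B, C)) == ["g(A, C)", "g(B, A)"]

# Now insert an *empty* class D between C and A...
class A2: pass
class B2(A2): pass
class D2(A2): pass
class C2(D2): pass
variants2 = {"g(B, A)": (B2, A2), "g(A, C)": (A2, C2)}
# ...and the same call is no longer ambiguous: g(A, C) now wins.
assert manhattan_dispatch(variants2, (B2, C2)) == ["g(A, C)"]
```

Here the empty class D carries no behaviour at all, yet it silently turns an ambiguity into a successful dispatch: exactly the kind of action at a distance being objected to.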

> All of which really just goes to show that the standard LSP is
> simply not a concept that is applicable to multiple dispatch. LSP is a
> set of constraints on subtype relationships within a single hierarchy.
> But multiple dispatch is not an interaction mediated by a single-hierarchy
> subtyping relationship; it's an interaction *between* two or more hierarchies.

I agree, now.

>  > As a matter of taste, classes that don't do
>  > anything shouldn't do anything!  But here they do.
> 
> But your classes *do* do something that makes them do something. They
> change the degree of generalization of the leaf classes under an L[1]
> metric. Since they do that, it makes perfect sense that they also change
> the resulting behaviour under an L[1] metric. If the resulting behaviour
> didn't change then the L[1] *semantics* would be broken.

Yep.  And I'm saying that L[1] is stupid.  In fact (as I believe
you've already picked up on), I'm not picking on the Manhattan
combinator in particular, but on the use of a derivation metric at all!

In another message, you wrote:
> In MMD you have an argument of a given type and you're trying to find the most
> specifically compatible parameter. That means you only ever look upwards in a
> hierarchy. If your argument is of type D, then you can unequivocally say that
> C is more compatible than A (because they share more common components), and
> you can also say that B is not compatible at all. The relative derivation
> distances of B and D *never* matter since they can never be in competition,
> when viewed from the perspective of a particular argument.
> 
> What we're really talking about here is how do we *combine* the compatibility
> measures of two or more arguments to determine the best overall fit. Pure
> Ordering does it in a "take my bat and go home" manner, Manhattan distance
> does it by weighing all arguments equally.

For some definition of "equal".  In the message that this was
responding to (which I don't think is terribly important to quote), I
was referring to the absurdity of "number of derivations" as a metric.
Picture a mathematician's internal monologue:

Well, B is a subset of A, C is a subset of A, and D is a subset of
C.  Clearly, D has fewer elements than B.

This mathematician is obviously insane.  Then I think about the way
you're using numbers to describe this, and I picture his friend
responding to his thoughts (his friend is telepathic) with:

You can make a stronger statement than that: The difference of the
number of elements between A and D is exactly twice the difference of
elements between A and B.

And now maybe you see why I am so disgusted by this metric.  You see,
I'm thinking of a class simply as the set of all of its possible
instances.  And then when you refer to L[1] on the number of
derivations, I put it into set-subset terms, and mathematics explodes.
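The absurdity is easy to exhibit with concrete sets (hypothetical sets of my own choosing): derivation depth says nothing about relative cardinality, so reading "number of derivations" as a distance between sets has no mathematical content.

```python
# Hypothetical sets standing in for classes A, B, C, D (B < A, C < A, D < C).
A = {1, 2, 3, 4, 5, 6}
B = {1}             # one derivation step below A
C = {2, 3, 4, 5}
D = {2, 3, 4}       # two derivation steps below A, via C
assert B < A and D < C < A    # the subset relations all hold
assert len(D) > len(B)        # ...yet D has *more* elements than B
```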

Here's how you can satisfy me: argue that Palmer's zero-effect
principle is irrelevant, and explain either how Manhattan dispatch
makes any sense in a class-is-a-set world view, or why that world view
itself doesn't make sense.

Or just don't satisfy me.

Luke


Re: DBI v2 - The Plan and How You Can Help

2005-07-19 Thread Kiran Kumar
We could have an option to do bulk inserts.


Re: MML dispatch

2005-07-19 Thread TSa (Thomas Sandlaß)

HaloO Larry,

you wrote:

> Implicit is that role distance is N + the distance to the nearest
> class that incorporates that role for small values of N.
>
> If class Dog does role Bark and also does role Wag, then passing a
> Dog to
>
> multi (Bark $x)
> multi (Wag $x)
>
> should result in ambiguity.  The interesting question is whether N should
> be 0 or 1 (or possibly epsilon).  If we have
>
> multi (Bark $x)
> multi (Dog $x)
>
> arguably Dog is more specialized than Bark, which says N maybe
> shouldn't be 0.  Whether it should be epsilon or 1 depends on how
> you think of role composition, and whether you think of roles more
> like immediate parent classes or generic paste-ins.  You can think
> of them as immediate parent classes, but in that case they're actually
> closer than a real parent class that would have distance 1, since
> role methods override parent class methods.  That argues for N < 1.


Sorry, has Perl6 now reverted to setting classes above types?

I would think that the programmer specifies what type a class
implements by letting it do a set of roles. This implies that
by default a class does the very unspecific Any. In your example
the type system needs the information if Dog does a supertype or
subtype. The syntax could be junctive:

class Dog does Bark | Wag {...} # union type (least upper bound)
class Dog does Bark & Wag {...} # intersection type (greatest lower bound)
class Dog does Bark , Wag {...} # same as & or typeless composition?

If Dog is made a supertype then neither multi (Bark $x) nor multi (Wag $x)
is in the applicability set. For the subtype case both are applicable
and an ambiguity type error occurs. This needs disambiguation through
defining multi (Dog $x). The question is at what time this error occurs
and what restrictions allow a compile time detection. Note that the
ambiguity doesn't go away with a metric approach because there are no
other parameters that could compensate.
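A toy Python model of those applicability sets (my own sketch; Perl 6's semantics were still under design, and all names here are illustrative): a Dog that does both Bark and Wag matches both single-role variants, and the tie is only resolved by an explicit Dog variant.

```python
# Toy model of multi applicability with roles modelled as classes.
class Bark: pass
class Wag: pass
class Dog(Bark, Wag): pass   # Dog does Bark and Wag

variants = {"multi (Bark $x)": Bark, "multi (Wag $x)": Wag}

def applicable(arg_cls, vs):
    """All variants whose parameter type the argument satisfies."""
    return sorted(name for name, p in vs.items() if issubclass(arg_cls, p))

# A Dog argument is applicable to both variants: an ambiguity error.
assert applicable(Dog, variants) == ["multi (Bark $x)", "multi (Wag $x)"]

# Disambiguation through an explicit multi (Dog $x): it is now the
# unique most specific applicable variant.
variants["multi (Dog $x)"] = Dog
most_specific = [
    name for name, p in variants.items()
    if issubclass(Dog, p) and not any(
        issubclass(q, p) and q is not p
        for q in variants.values() if issubclass(Dog, q))
]
assert most_specific == ["multi (Dog $x)"]
```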

Regards,
--
TSa (Thomas Sandlaß)




How do subroutines check types?

2005-07-19 Thread Ingo Blechschmidt
Hi, 
 
class Foo {...} 
Foo.new.isa(Foo);   # true 
Foo.isa(Foo);   # true (see [1]) 
Foo.does(Class);# true 
 
sub blarb (Foo $foo, $arg) { 
  ...;   # Do something with instance $foo 
} 
 
blarb Foo.new(...), ...; 
# No problem 
 
blarb Foo,  ...; 
# Problem, as &blarb expects an *instance* of Foo, 
# not the class Foo. 
 
How do I have to annotate the type specification in the 
declaration of the subroutine to not include the class Foo, but 
only allow instances of Foo? 
 
Or is the default way to check the types of arguments 
something like the following, in which case my first question 
doesn't arise? 
 
if $param ~~ $expected_type and not $param ~~ Class { 
  # ok 
} else { 
  die "Type error: ..."; 
} 
 
(But this feels special-casey...) 
 
(And, just curious -- how can I override the default checking 
routine?) 
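For comparison, a rough Python analogue of that two-part check (a sketch of mine, not Perl 6). Note the mismatch: the problem only arises because Perl 6's Foo.isa(Foo) is true, whereas Python's isinstance(Foo, Foo) is already false, so in Python only the "not a class object" half does any extra work.

```python
# Rough Python analogue of the proposed check (illustrative only).
class Foo: pass

def type_check(param, expected_type):
    # Mirrors: $param ~~ $expected_type and not $param ~~ Class
    return isinstance(param, expected_type) and not isinstance(param, type)

assert type_check(Foo(), Foo) is True    # instance of Foo: accepted
assert type_check(Foo, Foo) is False     # the class object itself: rejected
```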
 
 
--Ingo 
 
[1] http://www.nntp.perl.org/group/perl.perl6.language/0 
 
--  
Linux, the choice of a GNU | Row, row, row your bits, gently down the 
generation on a dual AMD   | stream...   
Athlon!| 



Re: MML dispatch

2005-07-19 Thread TSa (Thomas Sandlaß)

HaloO Damian,

you wrote:
> No. If the dispatch behaviour changes under a Manhattan metric, then it
> only ever changes to a more specific variant.


This statement contradicts itself. A metric chooses the *closest*,
not the most specific, target. Take e.g. the three-argument distance sums
7 == 1+2+4 == 0+0+7 == 2+2+3, which are all outperformed by 6 == 2+2+2.
But is that more specific? If yes, why?


> Since MMD is all about
> choosing the most specific variant, that's entirely appropriate, in the
> light of the new information. If you change type relationships, any
> semantics based on type relationships must naturally change.


Again: do you identify 'most specific' with closest? The point of
pure MMD is that specificity relations are not definable for all
combinations of types. That is, the types form a partial order.


> On the other hand, under a pure ordering scheme, if you change type
> relationships, any semantics based on type relationships immediately
> *break*. That's not a graceful response to additional information. Under
> pure ordering, if you  make a change to the type hierarchy, you have to
> *keep* making changes until the dispatch semantics stabilize again. For
> most developers that will mean a bout of "evolutionary programming",
> where they try adding extra types in a semi-random fashion until they
> seem to get the result they want. :-(


Well, designing a type hierarchy is a difficult task. But we shouldn't
mix it with *using* types! My view is that the common folks will write
classes which are either untyped---which basically means type Any---or
conform to a built-in type like Str or Num. That's the whole point of
having a type system. What is the benefit of re-inventing a
module-specific Int?



>> Perhaps I've made this argument before, but let me just ask a
>> question:  if B derives from A, C derives from A, and D derives from
>> C, is it sensible to say that D is "more derived" from A than B is?
>> Now consider the following definitions:
>>
>> class A { }
>> class B is A {
>> method foo () { 1 }
>> method bar () { 2 }
>> method baz () { 3 }
>> }
>> class C is A {
>> method foo () { 1 }
>> }
>> class D is C {
>> method bar () { 2 }
>> }
>>
>> Now it looks like B is more derived than D is.  But that is, of
>> course, impossible to tell.  Basically I'm saying that you can't tell
>> the relative relationship of D and B when talking about A.  They're
>> both derived by some "amount" that is impossible for a compiler to
>> detect.  What you *can* say is that D is more derived than C.



> Huh. I don't understand this at all.


My understanding is that Luke tries to express that the metric distances
are D -> A == 2 and B -> A == 1. And that this naturally leads to a
programmer thinking of B being more "specific" than D because in multis
where instances of both classes are applicable---that is, ones with
formal parameters of type A---it gives smaller summands for the total
distance. E.g. calling multi (A,A,A) with (B,B,B) gives distance 3 and
distance 6 for (D,D,D).
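That arithmetic can be checked with a few lines of Python, using MRO depth as a stand-in for "number of derivations" (a toy model, not Perl 6 itself):

```python
# Toy model: derivation distance as MRO depth (illustrative only).
class A: pass
class B(A): pass
class C(A): pass
class D(C): pass

def dist(arg, param):
    # Number of inheritance steps from arg up to param.
    return arg.__mro__.index(param)

assert dist(B, A) == 1      # B -> A is one derivation
assert dist(D, A) == 2      # D -> A goes via C

# Under a Manhattan metric, calling multi (A, A, A):
assert sum(dist(B, A) for _ in range(3)) == 3   # with (B, B, B)
assert sum(dist(D, A) for _ in range(3)) == 6   # with (D, D, D)
```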



> In MMD you have an argument of a given type and you're trying to find
> the most specifically compatible parameter. That means you only ever
> look upwards in a hierarchy. If your argument is of type D, then you can
> unequivocally say that C is more compatible than A (because they share
> more common components), and you can also say that B is not compatible
> at all. The relative derivation distances of B and D *never* matter
> since they can never be in competition, when viewed from the perspective
> of a particular argument.


Nice description of methods defined in classes. BTW, can there be
methods *outside* of class definitions? These would effectively be
multi subs with a single invocant without privileged access.


> What we're really talking about here is how do we *combine* the
> compatibility measures of two or more arguments to determine the best
> overall fit. Pure Ordering does it in a "take my bat and go home"
> manner, Manhattan distance does it by weighing all arguments equally.


We should first agree on *what* the dispatch actually works on: the
class hierarchy or the type hierarchy. In the former case I think
it can't be allowed to insert multi targets from unconnected
hierarchies, while in the latter the privileged access to the implementing
class environment cannot be granted at all. But this might actually
be the distinction between multi methods and multi subs!

The thing that the pure approach spares the designer is the doubt that
there might be parallel targets that are more type-specific on *some*
of the parameters. This might give rise to introspection orgies
and low-level programming.



In conclusion, the reason that manhattan distance scares me so, and
the reason that I'm not satisfied with "use mmd 'pure'" is that for
the builtins that heavily use MMD, we require *precision rather than
dwimmyness*.  A module author who /inserts/ a type in the standard
hierarchy can change the semantics of things that aren't aware that
that ty

Re: How do subroutines check types?

2005-07-19 Thread TSa (Thomas Sandlaß)

Ingo Blechschmidt wrote:
> How do I have to annotate the type specification in the
> declaration of the subroutine to not include the class Foo, but
> only allow instances of Foo?


My understanding is that Foo.does(Foo) is false and sub params
are checked with .does(). This automatically excludes class args.
That is, you have to explicitly order them:

sub blarb (Foo|Class[Foo] $foo, $arg) # or with CLASS[Foo] ?
{
   ...;   # Do something with instance or class $foo
}

Foo|Class[Foo] is a supertype of Foo and Class[Foo] and as
such allows its subtypes Foo and the Class[Foo] parametric
role instantiation. Note that Foo|Class allows calls like
blarb( Int, "HaHa" ).

Actually I'm not convinced that class Foo should automatically
constitute a (proper) type other than Any. But since .does falls
back to .isa it comes out the same, since the only way here to get
a Foo doer is by instantiating the class Foo.
--
TSa (Thomas Sandlaß)




Re: More on Roles, .does(), and .isa() (was Re: Quick OO .isa question)

2005-07-19 Thread TSa (Thomas Sandlaß)

HaloO chromatic,

you wrote:

> Have I mentioned before that I think you should be able to say:
>
> class Foo
> {
> method foo { ... }
> method more_foo { ... }
> }
>
> class Bar does Foo
> {
> method foo { ... }
> }
>
> ... probably get a compile-time error that Bar doesn't support
> more_foo()?


We've discussed that when I was ranting about the type lattice
and co- and contravariance...

I'm not sure what my position back then was but now I agree
because I see CLASS as a subtype of ROLE. The only problem
is that $Larry said that he doesn't want parametric classes.


> I see a reason to differentiate between roles and classes on the
> metalevel, but the argument is not as strong on the user-level.
>
> I go further to see little reason to distinguish between role, class,
> and type names (and what reason there is is for introspective
> capabilities, not standard user-level type checking).


I strongly agree. They should share the same namespace. Since
code objects constitute types they also share this namespace.
This means that any two lines of

class   Foo {...}
role    Foo {...}
sub     Foo {...}
method  Foo {...}
subtype Foo of Any where {...}

in the same scope should be a simple redefinition/redeclaration error.
OK, sub and method can escape from this fate by means of the keyword
multi and different sigs.

Since sigils somehow define references, &Foo could mean e.g. a classref
while ::Foo could be a Code type. If we then also promote . to a primary
sigil for Method... hmm, have to think about that!
--
TSa (Thomas Sandlaß)




Referring to package variables in the default namespace in p6

2005-07-19 Thread Matthew Hodgson

Hi all,

I've spent some of the afternoon wading through A12 and S10 trying to 
thoroughly understand scope in perl 6, in light of the death of use vars 
and the addition of class (as well as package & module) namespaces.


In the process I came up against some confusion concerning how the default 
package namespace should work.  Currently, pugs does:


% pugs -e '$main::foo="foo"; say $foo'
foo

which contradicts S10, which states:

The "::*" namespace is not "main".  The default namespace for the main 
program is "::*Main".


This turned out to be an oversight - but there was then confusion as to 
how one actually refers to variables in the "::*Main" namespace, as if 
$Foo::bar looks up the ::Foo object and fetches $bar from it, then 
presumably $*Main::foo should look up the ::*Main object, and fetch $foo 
from it.


However (from #perl6):

 Arathorn: when you see $Foo::bar, it means looking up the ::Foo 
object, then fetch $bar from it
 and ::Foo, just like %Foo, can be lexical or package scoped or 
global (%*Foo)
 to restrict the lookup to ::*Foo you can't use the ordinary 
qualifying syntax, I think.

 but I may be completely wrong
 so it sounds as if to get the variable $bar from the global 
packagename ::*Foo (or just *Foo if disambiguation is not necessary), 
you'd use $*Foo::bar then.

 that may be the case, yes.
 $?Foo::bar means $?bar in Foo::
 but $*Foo::bar can't mean $*bar in Foo::
 because Foo:: will never contain a $*bar.
 so it must mean $bar in *Foo::
 this is very weird.

So the question is: what is the correct syntax for referring to package 
variables in the default namespace?


Also, what is the correct syntax for referring to package variables in 
your 'current' namespace?  $::foo?  $?PACKAGENAME::foo? 
$::($?PACKAGENAME)::foo?  %PACKAGENAME::?


cheers,

Matthew.


--

Matthew Hodgson   [EMAIL PROTECTED]   Tel: +44 7968 722968
 Arathorn: Co-Sysadmin, TheOneRing.net®

Re: Do I need "has $.foo;" for accessor-only virtual attributes?

2005-07-19 Thread Larry Wall
On Tue, Jul 19, 2005 at 02:21:44PM +1200, Sam Vilain wrote:
: Larry Wall wrote:
: > > Users of the class includes people subclassing the class, so to them
: > > they need to be able to use $.month_0 and $.month, even though there
: > > is no "has $.month_0" declared in the Class implementation, only
: > > "has $.month".
: >We thought about defining the attribute variables that way,
: >but decided that it would be clearer if they only ever refer to
: >real attributes declared in the current class.
: 
: Clearer in what way?
: 
: This implies that you cannot;
: 
:   - refactor classes into class heirarchies without performing
: code review of all methods in the class and included roles.
: 
:   - "wrap" internal attribute access of a superclass in a subclass
: 
: This in turn implies that the $.foo syntax is, in general, bad
: practice!

Well, maybe it's bad outside of submethods, where we must have a way
to devirtualize.  I see what you're saying, but I'll have to think
about it a little.  It does seem a bit inconsistent that we're forcing
virtualization of class names within methods but not attributes.
Perhaps $.foo should be used to refer to the actual attribute
storage only within submethods, and when you declare "has $.foo"
you're not declaring an accessor method but rather a submethod that
wraps the actual attribute.  The question is then whether normal
methods should treat $.foo as an error or as $?SELF.foo().

Yes, I know your preference.  :-)

Anyway, I have to do a bit of driving the next two days, so hopefully
I'll have a chance to think about it s'more.  But my gut feeling here
is that we both oughta be right on some level, so it probably just
means the current design is drawing some border in the wrong place.
The right place might or might not be the method/submethod boundary.

Larry


Re: MML dispatch

2005-07-19 Thread chromatic
On Tue, 2005-07-19 at 12:37 +0200, "TSa (Thomas Sandlaß)" wrote:

> I would think that the programmer specifies what type a class
> implements by letting it do a set of roles. This implies that
> by default a class does the very unspecific Any.

Why would a class not also define a type?

-- c



Re: More on Roles, .does(), and .isa() (was Re: Quick OO .isa question)

2005-07-19 Thread chromatic
On Tue, 2005-07-19 at 18:47 +0200, "TSa (Thomas Sandlaß)" wrote:

> I strongly agree. They should share the same namespace. Since
> code objects constitute types they also share this namespace.
> This means that any two lines of
> 
> class   Foo {...}
> role    Foo {...}
> sub     Foo {...}
> method  Foo {...}
> subtype Foo of Any where {...}
> 
> in the same scope should be a simple redefinition/redeclaration error.

I don't understand this.  What does a scope have to do with a namespace?
Why does a code object constitute a type?

I can understand there being separate types, perhaps, for Method,
Submethod, MultiSub, MultiMethod, and so on, but I don't understand the
purpose of sharing a namespace between types and function names, nor of
having functions declare/define/denote/de-whatever types.

-- c



Re: How do I... create a value type?

2005-07-19 Thread Nicholas Clark
On Mon, Jul 11, 2005 at 12:30:03PM -0700, Larry Wall wrote:

> Or we go the other way and, in a binge of orthogonality, rationalize
> all the "on write" traits:
> 
> Current   Conjectural
> ===   ===
> is constant   is dow  "die on write"
> is copy   is cow  "copy on write"
> is rw is mow  "mutate on write"
> 
> But thankfully Perl 6 is not intended to be a "perfect language".

That conjecture reminded me of Allison's "togs" and "dogs". I think I like
the verbose version we're going to keep, even though you are suggesting it
is *im*perfect. :-)

Nicholas Clark


Re: Referring to package variables in the default namespace in p6

2005-07-19 Thread Larry Wall
On Tue, Jul 19, 2005 at 07:25:35PM +0100, Matthew Hodgson wrote:
: Hi all,
: 
: I've spent some of the afternoon wading through A12 and S10 trying to 
: thoroughly understand scope in perl 6, in light of the death of use vars 
: and the addition of class (as well as package & module) namespaces.
: 
: In the process I came up against some confusion concerning how the default 
: package namespace should work.  Currently, pugs does:
: 
: % pugs -e '$main::foo="foo"; say $foo'
: foo
: 
: which contradicts S10, which states:
: 
: The "::*" namespace is not "main".  The default namespace for the main 
: program is "::*Main".
: 
: This turned out to be an oversight - but there was then confusion as to 
: how one actually refers to variables in the "::*Main" namespace, as if 
: $Foo::bar looks up the ::Foo object and fetches $bar from it, then 
: presumably $*Main::foo should look up the ::*Main object, and fetch $foo 
: from it.
: 
: However (from #perl6):
: 
:  Arathorn: when you see $Foo::bar, it means looking up the ::Foo 
: object, then fetch $bar from it
:  and ::Foo, just like %Foo, can be lexical or package scoped or 
: global (%*Foo)
:  to restrict the lookup to ::*Foo you can't use the ordinary 
: qualifying syntax, I think.
:  but I may be completely wrong
:  so it sounds as if to get the variable $bar from the global 
: packagename ::*Foo (or just *Foo if disambiguation is not necessary), 
: you'd use $*Foo::bar then.
:  that may be the case, yes.
:  $?Foo::bar means $?bar in Foo::
:  but $*Foo::bar can't mean $*bar in Foo::
:  because Foo:: will never contain a $*bar.
:  so it must mean $bar in *Foo::
:  this is very weird.
: 
: So the question is: what is the correct syntax for referring to package 
: variables in the default namespace?

The * looks like a twigil but it isn't really.  It's short for "*::",
where the * is a wildcard package name, so in theory we could have
$=*foo, meaning the $=foo in the *:: package.  (But most of the
twigils imply a scope that is immiscible with package scope.)

: Also, what is the correct syntax for referring to package variables in 
: your 'current' namespace?  $::foo?  $?PACKAGENAME::foo? 
: $::($?PACKAGENAME)::foo?  %PACKAGENAME::?

That's currently:

$OUR::foo

though presumably

$::($?PACKAGENAME)::foo

would also work as a symbolic reference.  I'm not sure whether $::foo
is usefully distinct from $foo these days.  It almost seems to imply
that ::foo is in type space and we have to dereference it somehow.
There's a sense in which :: only implies type space after a name.
We somehow seem to have the situation where :: is simultaneously
trying to be a leading sigil, a trailing sigil, and a separator.

Larry


Re: The Use and Abuse of Liskov

2005-07-19 Thread Damian Conway

Luke wrote:


> "In absence of other information, a derived class behaves just
> like its parent."
>
> I can argue that one into the ground, but it is a postulate and
> doesn't fall out of anything deeper (in my thinking paradigm, I
> suppose).  My best argument is this: how can you expect to add to
> something's behavior if it changes before you start?


But the act of deriving is already a change...in identity and type 
relationship. Derivation is not symmetrical and derived classes are not 
identical to their bases. *Especially* under multiple inheritance.




> Every SMD system that I can think of obeys this principle


Perl 5's SMD doesn't. At least, not under multiple inheritance.
Perl 6's SMD doesn't either. At least, not wrt submethods.



> And as I showed below, the Manhattan metric for MMD dispatch is
> outside the realm of OO systems that obey this principle.


Yep. Because MMD relies on type relationships and type relationships change 
under inheritance.




> In fact, I'm pretty sure (I haven't proved it) that any MMD system that
> relies on "number of derivations" as its metric will break this principle.


But, as I illustrated in my previous message, Pure Ordering breaks under this 
principle too. Deriving a class can break existing method dispatch, even when 
"degree of derivation" isn't a factor.




>> But your classes *do* do something that makes them do something. They
>> change the degree of generalization of the leaf classes under an L[1]
>> metric. Since they do that, it makes perfect sense that they also change
>> the resulting behaviour under an L[1] metric. If the resulting behaviour
>> didn't change then the L[1] *semantics* would be broken.


> Yep.  And I'm saying that L[1] is stupid.


Ah yes, a compelling argument. ;-)



>> What we're really talking about here is how do we *combine* the compatibility
>> measures of two or more arguments to determine the best overall fit. Pure
>> Ordering does it in a "take my bat and go home" manner, Manhattan distance
>> does it by weighing all arguments equally.


> For some definition of "equal".


Huh. It treats all arguments as equally significant in determining overall 
closeness. Just like Pure Ordering does. I don't see your point.




> And now maybe you see why I am so disgusted by this metric.  You see,
> I'm thinking of a class simply as the set of all of its possible
> instances.


There's your problem. Classes are not isomorphic to sets of instances and 
derived classes are not isomorphic to subsets.




> And then when you refer to L[1] on the number of
> derivations, I put it into set-subset terms, and mathematics explodes.


Sure. But if I think of a class as a piece of cheese and subclasses as 
mousetraps, the L[1] metric doesn't work either. The fault, dear Brutus, lies 
not in our metrics, but in our metaphors! ;-)




> Here's how you can satisfy me: argue that Palmer's zero-effect
> principle is irrelevant,


It's not irrelevant. It's merely insufficient. As I mention above, deriving a 
new class does not make the new class identical to the old in any context 
where derivation is part of the measure of similarity. Since MMD is such a 
context, derivation itself cannot be considered zero-effect, so the effects of 
derivation cannot be zero.




> and explain either how Manhattan dispatch
> makes any sense in a class-is-a-set world view,


It doesn't. But that's not a problem with the Manhattan metric. ;-)



> or why that world view itself doesn't make sense.


Because a class isn't a set of instances...it's a recipe for creating 
instances and a specification for the unique behaviour of those instances. Two 
sets which happen to contain the same components are--by 
definition--identical. Two classes which happen to specify the same set of 
instance behaviours are--again by definition--*not* identical.




> Or just don't satisfy me.


I suspect this is the option that will occur. You seem to be looking for
mathematical correctness and theoretical purity...a laudable goal. But I'm
merely looking for practical utility and convenience.

So far, the twain seem destined never to meet.

Damian


Re: Referring to package variables in the default namespace in p6

2005-07-19 Thread Matthew Hodgson


On Tue, 19 Jul 2005, Larry Wall wrote:


> On Tue, Jul 19, 2005 at 07:25:35PM +0100, Matthew Hodgson wrote:
> :
> : So the question is: what is the correct syntax for referring to package
> : variables in the default namespace?
>
> The * looks like a twigil but it isn't really.  It's short for "*::",
> where the * is a wildcard package name, so in theory we could have
> $=*foo, meaning the $=foo in the *:: package.  (But most of the
> twigils imply a scope that is immiscible with package scope.)


So, if I understand this correctly, the conclusion for referring to the 
default program namespace is $*Main::foo - which is shorthand for 
$*::Main::foo, which pulls the variable $foo out of the globally-rooted 
namespace called Main.  I'll plonk some tests into pugs to try to steer 
towards that...


In terms of resolving non-fully-qualified namespaces, presumably 
$Main::foo would work most of the time, assuming that you weren't 
currently within any package/module/class namespace scopes called Main - 
as S02 says 'the "*" may generally be omitted if there is no inner 
declaration hiding the global name.'?



> : Also, what is the correct syntax for referring to package variables in
> : your 'current' namespace?  $::foo?  $?PACKAGENAME::foo?
> : $::($?PACKAGENAME)::foo?  %PACKAGENAME::?
>
> That's currently:
>
>    $OUR::foo


right... I guess that goes with $MY:: and $OUTER:: in S06 and their 
respective hash access methods.


I'm very surprised that package variables end up in OUR::, however - 
because surely they're not necessarily lexically scoped - and the whole 
point of 'our' was lexical global scoping, right? :/


Is being able to:

% perl6 -e '{ $*Main::foo = "foo"; } say $OUR::<$foo>'

really a feature?  Or is OUR:: actually intended to include non-lexically 
scoped package variables too as a special case?



> I'm not sure whether $::foo
> is usefully distinct from $foo these days.  It almost seems to imply
> that ::foo is in type space and we have to dereference it somehow.
> There's a sense in which :: only implies type space after a name.
> We somehow seem to have the situation where :: is simultaneously
> trying to be a leading sigil, a trailing sigil, and a separator.


I've tried to wrap my head around all these behaviours of :: and summarize 
them here for future readers: inevitable corrections more than welcome ;)


::Foo  # leading sigil: disambiguates Foo as being in type space
$Foo::bar  # trailing sigil: indicates preceding term is in type space 
$Foo::Baz::bar # separator: type space hierarchy separator


::($foo)   # special case leading form which behaves more like the
   # trailing sigil & separator form for building up type
   # terms from expressions, as ($foo):: would be ambiguous.

I assume that I am correctly following your lead in referring to 
package/module/class namespaces as 'type space' here...


As regards $::foo, making a special case for it being a synonym for 
$*Main::foo would be yet another complication entirely without precedent in 
the current behaviour.  But then again, using $::x in the past as a quick way 
to disambiguate between lexical & package variables in simple package-less 
scripts was quite handy...


thanks hugely for the quick feedback :)

M.