Re: Prototypes

2001-09-04 Thread Bryan C. Warnock

On Monday 03 September 2001 11:56 pm, Bryan C. Warnock wrote:
> The third value is a "peek" value.  Do the runtime checking, but don't do
> any magic variable stuff.  As a matter of fact, don't run any user-code at
> all.  Simply return a true or false value if the arguments *would* match.
> (This allows us to check incoming coderefs, to see that they take the
> arguments that *they* expect.  Similar to the whole "pointer to a function
> that takes a pointer to a function, and an int."  Of course, no checking
> the return value.  But they're supposed to handle your want()s.)

Er, scratch this.  Blows up if the sub isn't prototyped.  A much *better* 
way is to make the prototype of any sub a property (trait) of that sub.  We 
can always query for a property.


-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: LangSpec: Statements and Blocks

2001-09-04 Thread Bryan C. Warnock

On Tuesday 04 September 2001 12:27 am, Damian Conway wrote:
> C<try> and C<catch>

[ LABEL: ] 
try { block }
[ [ catch [ ( expr ) ] { block } ] ... ]

?

>
> (C is not nearly so certain.)
>
>> Conditional Statement Modifiers
>>
>>  6. [ LABEL: ] expr if expr;
>>  7. [ LABEL: ] expr unless expr;
>
> I'm not at all sure modifiers will be stackable, as this grammar implies.

Er, parsing error.  Are you saying I've got it right or wrong?  (I'm 
intending non-stackable.)

>
>> Iterative Block Constructs
>>
>> 20. [ LABEL: ] for[each] [ scalar ] ( list ) { block } # Note 4
>
> I am hoping that Larry will also give us:
>
>  [ LABEL: ] for[each] (scalar, scalar ...) ( list ) { block }
>
>> Subroutine Code Blocks # Note 6
>>
>> 21. sub identifier [ ( prototype ) ] [ :properties ] { block }
>> 22. sub [ ( prototype ) ] { block }# Note 7
>
> Currently:
>
>  21. sub identifier [ ( prototype ) ] [ is properties ] { block }
>  22. sub [ ( prototype ) ] [ is properties] { block } [is properties]
>
> Though I would *much* prefer to see:
>
>  21. sub identifier [ ( prototype ) ] [ :traits ] { block }
>  22. sub [ ( prototype ) ] [ :traits] { block } [is properties]

Ah, traits is what I meant.  But that's not final yet?

>
>> A statement consists of zero or more expressions, followed by an
>> optional modifier and its expression, and either a statement
>> terminator (';') or a block closure ('}' or EOF).
>
> Need to recast this in terms of statement separators and null statements.

Wouldn't a null statement be covered by a statement of 0 expressions?

>
>> A block consists of zero or more blocks and statements. A file is
>> considered a block, delimited by the file boundaries.   Semantically,
>> I will define a block only in terms of its affect on scoping.
>
> 
>   "its effect on scoping"
>   (we probably don't care about its psychological demeanor ;-)
> 

Thanks,

And while I'm at it, I have some questions for you!  Would you *please* 
consider reforming the 'when expr : { block }' clause as
when ( expr ) { block }
?

'if', 'unless', 'elsif', 'given', 'while', 'until', the looping 'for[each]', 
and potentially the 'catch' clauses all use that form - 'keyword ( expr ) { 
block }'.  'when' is the odd man out.   

Secondly, do 'when' clauses have targetable labels?

given ( $a ) {
     when /a/ : { foo($a); next BAR }
     when /b/ : { ... }
BAR: when /c/ : { ... }
     ...
}


-- 
Bryan C. Warnock
[EMAIL PROTECTED]



RE: Prototypes

2001-09-04 Thread Garrett Goebel

From: Bryan C. Warnock [mailto:[EMAIL PROTECTED]]
> 
> On Monday 03 September 2001 11:56 pm, Bryan C. Warnock wrote:
> > The third value is a "peek" value.  Do the runtime 
> > checking, but don't do any magic variable stuff.  As a
> > matter of fact, don't run any user-code at all.  Simply
> > return a true or false value if the arguments *would*
> > match. (This allows us to check incoming coderefs, to
> > see that they take the arguments that *they* expect.
> >  Similar to the whole "pointer to a function that takes
> > a pointer to a function, and an int."  Of course, no
> > checking the return value.  But they're supposed to
> > handle your want()s.)
> 
> Er, scratch this.  Blows up if the sub isn't prototyped.  A 
> much *better* way is to make the prototype of any sub a
> property (trait) of that sub.  We can always query for a
> property.

This is possible now:

$foo = sub ($) { print "hello world\n" };
print prototype $foo;



RE: Expunge implicit @_ passing

2001-09-04 Thread Hong Zhang


> >The only good justification I've heard for "final" is as a directive
> >for optimization. If you declare a variable to be of a final type, then
> >the compiler (JIT, or whatever) can resolve method dispatch at
> >compile-time. If it is not final, then the compiler can make no such
> >assumption because java code can load in extra classes later.
> 
> This is the only real reason I've seen to allow final. (And it's not a bad
> reason, honestly, though not necessarily one appropriate in all cases) It 
> does allow a fair amount of optimization to be done, which can be 
> especially important when you can't see all the source. (Pretty much the 
> case in all languages that compile down to object modules you 
> link together later)

If our intention is only for optimization, I prefer to use word "inline" 
instead of "final". The word "final" already has been abused. It is very
awkward to use it for this purpose.

Hong



Re: What's up with %MY?

2001-09-04 Thread Uri Guttman

> "DC" == Damian Conway <[EMAIL PROTECTED]> writes:

  DC> Dan revealed:

  >> That's easy--you slip the pumpking or internals designer a 10-spot.
  >> Amazing what it'll do... :)

  DC> And how do you think I got five of my modules into the 5.8 core???

i heard it was blackmail. you got a hold of pictures of jarkko's
honeymoon.

:-)

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture and Stem Development -- http://www.stemsystems.com
Search or Offer Perl Jobs  --  http://jobs.perl.org



RE: What's up with %MY?

2001-09-04 Thread Garrett Goebel

From: Ken Fox [mailto:[EMAIL PROTECTED]]
> 
> Can we have an example of why you want run-time
> symbol table manipulation?

How about being able to dump and restore subroutines and closures along with
their lexical environment?

Perhaps this next example doesn't have to fall under the runtime category,
but I personally would like to be able to use Perl5 attributes to take the
argument from an attribute specification's parameter list and convert it
from q{} into an anonymous subroutine with the same lexical context as the
subroutine implementation of which it is an attribute. Here's a contrived
example:

{
  my $year = 2001;
  sub year
    : Pre($_[0] >= $year or die q{can't go back})
    { @_ ? $year = shift : $year }
}


> If the alias gets more complicated, I'm not sure the
> symbol table approach works well at all.
> 
> >> Modifying the caller's environment:
> >> 
> >>   $lexscope = caller().{MY};
> >>   $lexscope{'&die'} = &die_hard;
> 
> This only modifies the caller's scope? It doesn't modify
> all instances of the caller's scope, right? For example,
> if I have an counter generator, and one of the generated
> closures somehow has its' symbol table modified, only that
> *one* closure is affected even though all the closures
> were cloned from the same symbol table.

In the example above, any future use of '&die' within the caller's lexical
scope would execute &die_hard instead of whatever &die used to refer to.

In your example it depends on whether by cloning you mean that each
generated closure has its own symbol table which is a copy of the values
from the original symbol table, or whether it is aliased to refer to the
same underlying values from the original symbol table.
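
For concreteness, here is Ken's counter generator written out in plain Perl 5
(sub and variable names invented for the example); each call to make_counter()
gives its closure a distinct $count, which is the distinction I mean above:

    sub make_counter {
        my $count = 0;
        return sub { return $count++ };   # closes over this call's $count
    }
    my $c1 = make_counter();
    my $c2 = make_counter();
    $c1->(); $c1->();                     # $c1's $count is now 2
    print $c2->(), "\n";                  # prints 0 -- $c2 has its own $count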


> What about if the symbol doesn't exist in the caller's scope
> and the caller is not in the process of being compiled? Can
> the new symbol be ignored since there obviously isn't any
> code in the caller's scope referring to a lexical with that
> name?

My preference would be for autovivification: new symbols would be pushed
onto the scratchpad/symbol-table of the target lexical scope. Using another
contrived example in Perl5 syntax, what about:

{
  my $foo = sub {'hello'};
  sub say {
    $foo;                       # mention $foo so it ends up in &say's pad
    my $a = eval qq{$_[0]};
    eval(qq{\$$a})->();
  };
}
print say('foo');

What if you'd like to insert an anonymous $bar subroutine into the
scratchpad of &say?


> Do we favor expression too much over verification? I'm
> not qualified to answer because I know I'm biased towards
> expression. (The %MY issues I'm raising mostly because of
> performance potential.)

What are the performance problems? Don't Cv's already have their own
scratchpads which could potentially be modified by code using Inline.pm or
XS code at runtime? It's not like we're adding anything new here... are we?
Isn't this just making something that is currently very difficult to do
easier?


> This particular issue is causing trouble because it has a big
> impact on local variable analysis -- which then causes problems
> with optimization. I'd hate to see lots of pragmas for turning
> features on/off because it seems like we'll end up with a more
> fragmented language that way.

Is this really a pragma issue?

I thought part of the Parrot thing was to allow simpler objects and their
associated vtables to be promoted to more complex ones. So we can have bells
and whistles, without having them impact performance until you start making
use of them. And then only affecting the performance of those objects which
are promoted. So write access to a Cv's local scope might done with one of
those scary polymorphic function objects which is less efficent than the
base function object where a function call is 'just a function call'.



RE: Expunge implicit @_ passing

2001-09-04 Thread Dan Sugalski

At 09:30 AM 9/4/2001 -0700, Hong Zhang wrote:

> > >The only good justification I've heard for "final" is as a directive
> > >for optimization. If you declare a variable to be of a final type, then
> > >the compiler (JIT, or whatever) can resolve method dispatch at
> > >compile-time. If it is not final, then the compiler can make no such
> > >assumption because java code can load in extra classes later.
> >
> > This is the only real reason I've seen to allow final. (And it's not a bad
> > reason, honestly, though not necessarily one appropriate in all cases) It
> > does allow a fair amount of optimization to be done, which can be
> > especially important when you can't see all the source. (Pretty much the
> > case in all languages that compile down to object modules you
> > link together later)
>
>If our intention is only for optimization, I prefer to use word "inline"
>instead of "final". The word "final" already has been abused. It is very
>awkward to use it for this purpose.

Fair enough. I don't much care what its called, as long as I know what it 
does.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: What's up with %MY?

2001-09-04 Thread Dave Mitchell

If there is to be a %MY, how does its semantics pan out?

for example, what (if anything) do the following do:

sub Foo::import {
    my %m = caller(1).{MY}; # or whatever
    %m{'$x'} = 1;
}

sub Bar::import {
    my %m = caller(1).{MY}; # or whatever
    delete %m{'$x'};
}


sub f {
    my $x = 9;
    use Foo; # does $x become 1, or $x redefined, or runtime error, or ... ?
    {
        # is this the same as 'my $x = 1' at this point,
        # or is the outer $x modified?
        use Foo;
        ...
    }
    use Bar; # is $x now still in scope?
    print $x; # compile error? or runtime error? or prints 9 (or 1...)

    Bar::import(); # any difference calling it at run time?
}

and so on

IE what effects do the standard hash ops of adding, modifying,
deleting, testing for existence, testing for undefness, etc etc map onto
when applied to some sub's %MY, at either compile or run time.



I'd be a lot happier about this concept if I knew how it was supposed
to behave!

Dave M.




RE: What's up with %MY?

2001-09-04 Thread Garrett Goebel

From: Dave Mitchell [mailto:[EMAIL PROTECTED]]
> 
> If there is to be a %MY, how does its semantics pan out?
> 
> for example, what (if anything) do the following do:
> 
> sub Foo::import {
> my %m = caller(1).{MY}; # or whatever
> %m{'$x'} = 1;
> }

IMO: Sets the value of the lexical $x in the caller's scope to 1,
autovivifying '$x' if it doesn't exist.

 
> sub Bar::import {
> my %m = caller(1).{MY}; # or whatever
> delete %m{'$x'};
> }

hmm... when:

{ my $x = 1; sub incr {$x++} }

is compiled, the $x++ in &incr refers to the lexical $x. So deleting it
would remove it from the scratchpad of &incr. But I would guess that future
calls to &incr would have to autovivify $x in the scratchpad and start
incrementing it from 0. I.e., ignoring a package $x if it exists. I could
see people preferring it either way...


> sub f {
> my $x = 9;
> use Foo; # does $x become 1, or $x redefined, or runtime 
>  # error, or ... ?

do you mean Foo::import()? 'use' is handled like:

BEGIN {
  require Foo;
  Foo->import(LIST);  # a method call, made at compile time
}

So 'use Foo' would modify the caller, which being processed at compile time
would be 'main'. So this would create a my $x in the scope of the main_cv.
Which would then be removed by the later 'use Bar'.

Assuming Foo::import(), I would guess that the $x which is local to &f would
be assigned the value 1.


> {
>   # is this the same as 'my $x = 1' at this point,
>   # or is the outer $x modified?

If you indeed meant 'use Foo', then the lexical $x of 'main' would be
created and set to 1.

Or, assuming you meant Foo::import() the answer would be neither. It would
neither modify the outer $x or create a new my $x, but modify the value of
the $x which exists within the scope of &f.


>   use Foo;
>   ...
> }
> use Bar; # is $x now still in scope?

'use Bar' would occur at compile time, and would remove the $x from main's
lexical scratchpad which had been created when you did 'use Foo'.

Or had you said Bar::import(): My guess would be that at this point, $x
would be removed from the stash of &f.


> print $x; #compile error? or runtime error? or prints 9 
>   #(or 1...) 

If you used 'use Foo' and 'use Bar', it would print 9. Because the $x local
to &f would never have been touched.

If you meant Foo::import() and Bar::import() and had warnings turned on, it
would print:

  Use of uninitialized value in print at ...

 
> Bar::import(); # any difference calling it at run time?

Yes... as mention above. One happens at compile, the other at runtime so the
caller and consequently lexical scope is different in each case.


> }
> 
> and so on
> 
> IE what effects do the standard hash ops of adding, modifying,
> deleting, testing for existence, testing for undefness, etc 
> etc map onto when applied to some sub's %MY, at either compile
> or run time.

I would hope that it would be identical to the current behavior we
experience when modifying a package's stash. Or however the new behavior for
stashes maps to Perl6.




RE: What's up with %MY?

2001-09-04 Thread Dan Sugalski

At 12:50 PM 9/4/2001 -0500, Garrett Goebel wrote:
> > sub Bar::import {
> > my %m = caller(1).{MY}; # or whatever
> > delete %m{'$x'};
> > }
>
>hmm... when:
>
>{ my $x = 1; sub incr {$x++} }
>
>is compiled, the $x++ in &incr refers to the lexical $x. So deleting it 
>would remove it from the scratchpad of &incr. But I would guess that 
>future calls to &incr would have to autovivify $x in the scratchpad and 
>start incrementing it from 0. I.e., ignoring a package $x if it exists. I 
>could see people preferring it either way...

Folks might also want that to then refer to the $x in the enclosing scope. 
Don't think that's going to happen, though. (Lots and lots of runtime 
overhead in that case)

> >
> > IE what effects do the standard hash ops of adding, modifying,
> > deleting, testing for existence, testing for undefness, etc
> > etc map onto when applied to some sub's %MY, at either compile
> > or run time.
>
>I would hope that it would be identical to the current behavior we 
>experience when modifying a package's stash. Or however the new behavior 
>for stashes maps to Perl6.

I can see allowing read/write/change/iterate access (possibly enforcing 
types when writing) but not delete. That opens up a number of cans of worms 
I'd rather stay closed for now.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: CLOS multiple dispatch

2001-09-04 Thread David L. Nicol

Dan Sugalski wrote:

> It'll probably be something like "Here's the function name. Here's the
> parameters. Do The Right Thing." I don't think there's much need for
> cleverness on the part of the interface. The actual dispatch code could be
> nasty, but that's someone else's problem. :)
> 
> Dan

What form are the parameters in? Blessed perl references?  Are there
flags to indicate lexically constant information, such as "this will 
always be a stuffed animal of some kind even though it might not 
always be a medium Gund polar bear, and all stuffed animals have
a machine_washable_p() method" for optimization purposes?

That is the need for cleverness on the part of the interface.  Without
a standard way to say this is constant, this is dynamic there isn't
much gain over writing redispatch functions.

Although this whole line of unattached skypie debate may show nothing
more than how, exactly, Java, which was designed by a committee, got the
"interface" system it has.

-- 
   David Nicol 816.235.1187
A few months ago, in a convenience store in New Jersey...



Re: explicitly declare closures???

2001-09-04 Thread Mark-Jason Dominus


Says Dave Mitchell:

> Closures ... can also be dangerous and counter-intuitive, especially to
> the uninitiated. For example, how many people could say what the
> following should output, with and without $x commented out, and why:
> 
> {
> my $x = "bar";
> sub foo {
> # $x  # <- uncommenting this line changes the outcome
> return sub {$x};
> }
> }
> print foo()->();
> 

That is confusing, but it is not because closures are confusing.  It
is confusing because it is a BUG.  In Perl 5, named subroutines are
not properly closed.

If the bug were fixed, the result would be 'bar' regardless of whether
or not $x was commented.

This would solve the problems with mod_perl also.

The right way to fix this is not to eliminate closures, or to require
declarations.  The right way to fix this is to FIX THE BUG.
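
For anyone who hasn't been bitten by it, the canonical Perl 5 symptom is the
"will not stay shared" case (sub and variable names here are just for
illustration):

    use warnings;
    sub outer {
        my $x = shift;
        sub inner { return $x }   # warns: Variable "$x" will not stay shared
        return inner();
    }
    print outer("first"), "\n";   # prints "first"
    print outer("second"), "\n";  # still prints "first"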




Re: CLOS multiple dispatch

2001-09-04 Thread David L. Nicol

Me wrote:

> I found just one relevant occurrence of 'mop' in perl6-all archives:
> 
> http://www.mail-archive.com/perl6-all@perl.org/msg10432.html
> 
> And not a single reply...
> 
> I'd really like to see what Dan / lisp folks have to say about mops
> and perl6...

How about some nice introductory links for MOP theory?  The above-linked
post is also the only time I recall seeing aspect theory mentioned in
here either.   Someone explained aspectJ to me at a PM meeting and
it sounded like a sure recipe for completely impossible AAAD bugs.


-- 
   David Nicol 816.235.1187
A long time ago, in a galaxy far far away...



Re: Expunge implicit @_ passing

2001-09-04 Thread Michael G Schwern

On Tue, Sep 04, 2001 at 09:30:19AM -0700, Hong Zhang wrote:
> > This is the only real reason I've seen to allow final. (And it's not a bad
> 
> > reason, honestly, though not necessarily one appropriate in all cases) It 
> > does allow a fair amount of optimization to be done, which can be 
> > especially important when you can't see all the source. (Pretty much the 
> > case in all languages that compile down to object modules you 
> > link together later)
> 
> If our intention is only for optimization, I prefer to use word "inline" 
> instead of "final". The word "final" already has been abused. It is very
> awkward to use it for this purpose.

Ummm... there should be no *language* reason why we can't override
inline methods.  It's purely an internal distinction.

The unfortunate problem with saying "inline methods cannot be
overriden" is people are not going to realize this, slap 'inline' on
their methods (cuz it's faster, you see) and screw their subclassers.
Or they will realize it and slap it on anyway, either because they
think the speed is more important than subclassing, or because they
really want 'final'.

Trying to optimize methods so they are "inline" in a dynamic language
like perl is going to have all sorts of weird side-effects.  Object
method calls are currently only about 15% slower than function calls.
I expect that gap to close in Perl 6 just with the introduction of
proper vtables.

Preventing subclassing is not worth 10%.
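
For reference, this is the garden-variety Perl 5 overriding that a
can't-be-overridden 'inline'/'final' marking would quietly break (class
names invented for the example):

    package Animal;
    sub new   { return bless {}, shift }
    sub speak { return "..." }           # imagine this marked 'inline'

    package Dog;
    our @ISA = ('Animal');
    sub speak { return "Woof!" }         # the subclasser expects this to win

    package main;
    print Dog->new->speak, "\n";         # "Woof!" -- unless dispatch was
                                         # frozen to Animal::speak at compile time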


-- 

Michael G. Schwern   <[EMAIL PROTECTED]>http://www.pobox.com/~schwern/
Perl6 Quality Assurance <[EMAIL PROTECTED]>   Kwalitee Is Job One
This is my sig file.  Is it not nify?  Worship the sig file.
http://www.sluggy.com



RE: CLOS multiple dispatch

2001-09-04 Thread David Whipp

David L. Nicol wrote:
> How about some nice introductory links for MOP theory?  The 
> above-linked post is also the only time I recall seeing aspect
> theory mentioned in here either. Someone explained aspectJ to
> me at a PM meeting and it sounded like a sure recipe for
> completely impossible AAAD bugs.

Here's a few links. I think I agree that Aspects provide
copious quantities of rope; and that people may use this
rope in ways that do not promote longevity :-) OTOH, it
seems safe compared to the concept of redefining the Perl6
parser at every lexical scope.

http://dev.perl.org/rfc/92.pod
http://www.parc.xerox.com/csl/projects/aop/
http://c2.com/cgi/wiki?MetaObjectProtocol
http://citeseer.nj.nec.com/context/1534/0


Dave.
--
Dave Whipp, Senior Verification Engineer,
Fast-Chip inc., 950 Kifer Rd, Sunnyvale, CA. 94086
tel: 408 523 8071; http://www.fast-chip.com
Opinions my own; statements of fact may be in error. 



Re: Prototypes

2001-09-04 Thread Bryan C. Warnock

On Tuesday 04 September 2001 11:17 am, Garrett Goebel wrote:
> > Er, scratch this.  Blows up if the sub isn't prototyped.  A
> > much *better* way is to make the prototype of any sub a
> > property (trait) of that sub.  We can always query for a
> > property.
>
> This is possible now:
>
> $foo = sub ($) { print "hello world\n" };
> print prototype $foo;

Well, it's nice to know that when I reinvent the wheel, it's still round.

-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: Expunge implicit @_ passing

2001-09-04 Thread Dan Sugalski

At 03:54 PM 9/4/2001 -0400, Michael G Schwern wrote:
>Ummm... there should be no *language* reason why we can't override
>inline methods.  It's purely an internal distinction.

I'm not so much thinking about inline methods as inline subs.

>The unfortunate problem with saying "inline methods cannot be
>overriden" is people are not going to realize this, slap 'inline' on
>their methods (cuz it's faster, you see) and screw their subclassers.
>Or they will realize it and slap it on anyway, either because they
>think the speed is more important than subclassing, or because they
>really want 'final'.
>
>Trying to optimize methods so they are "inline" in a dynamic language
>like perl is going to have all sorts of weird side-effects.  Object
>method calls are currently only about 15% slower than function calls.
>I expect that gap to close in Perl 6 just with the introduction of
>proper vtables.

It's not method calls as such that'll be faster or not with methods marked 
as definitive. There's also the potential to inline the sub code and then 
put the inlined code through the optimizer. Some code can get a pretty 
significant speedup that way. (And, then again, some code can't...)

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: CLOS multiple dispatch

2001-09-04 Thread Dan Sugalski

At 01:27 PM 9/4/2001 -0500, David L. Nicol wrote:
>Dan Sugalski wrote:
>
> > It'll probably be something like "Here's the function name. Here's the
> > parameters. Do The Right Thing." I don't think there's much need for
> > cleverness on the part of the interface. The actual dispatch code could be
> > nasty, but that's someone else's problem. :)
>
>What form are the parameters in? Blessed perl references?

PMCs. This sort of code would be operating a step below the normal perl level.

>Are there
>flags to indicate lexically constant information, such as "this will
>always be a stuffed animal of some kind even though it might not
>always be a medium Gund polar bear, and all stuffed animals have
>a machine_washable_p() method" for optimization purposes?

Dunno. Probably not, but if the language is designed such that we can count 
on things like that, it'd be fine by me.

>That is the need for cleverness on the part of the interface.  Without
>a standard way to say this is constant, this is dynamic there isn't
>much gain over writing redispatch functions.

Umm all this is is a redispatch function.


Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: What's up with %MY?

2001-09-04 Thread Damian Conway


Ken wrote:

   > Damian Conway wrote:
   > > It would seem *very* odd to allow every symbol table *except*
   > > %MY:: to be accessed at run-time.
   > 
   > Well, yeah, that's true. How about we make it really
   > simple and don't allow any modifications at run-time to
   > any symbol table?

Err. No thanks. Being able to mess with the symbol table is one
of the things I most like about Perl.

   
   > Can we have an example of why you want run-time
   > symbol table manipulation? Aliases are interesting,

...but not what I had in mind.

The main uses are (surprise):

* introducing lexically scoped subroutines into a caller's scope
* introducing lexically scoped variables into a caller's scope

In other words, everything that Exporter does, only with lexical
referents not package referents. This in turn gives us the ability to
easily write proper lexically-scoped modules.
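
Roughly speaking (and reusing the speculative caller().{MY} syntax from
earlier in this thread -- the module and sub names are invented purely for
illustration):

    sub Lexical::Handy::import {
        my $lexscope = caller().{MY};
        $lexscope{'&handy'} = &Lexical::Handy::handy;   # lexical sub for the caller
        $lexscope{'$handy'} = \my $handy_var;           # lexical scalar for the caller
    }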

Other important uses are:

* modifying existing lexically scoped subroutines in a caller's scope
* modifying existing lexically scoped variables in a caller's scope

I can imagine a Lexically::Verbose module that modifies all variables and/or
subroutines in a scope to report their own activity:

while (whatever) {
    use Lexically::Verbose 'vars';
    my $x;  # logs: 'created $x at line 4'
    $x++;   # logs: 'incremented $x to 1 at line 5'
}


   
   > >> Modifying the caller's environment:
   > >> 
   > >>   $lexscope = caller().{MY};
   > >>   $lexscope{'&die'} = &die_hard;
   > 
   > This only modifies the caller's scope? It doesn't modify
   > all instances of the caller's scope, right?

Right. Lexical symbol tables are themselves lexical variables. At the
end of the caller's scope, they vanish.
   

   > For example, if I have a counter generator, and one of the
   > generated closures somehow has its symbol table modified, only
   > that *one* closure is affected even though all the closures were
   > cloned from the same symbol table.

Yep.


   > What about if the symbol doesn't exist in the caller's scope
   > and the caller is not in the process of being compiled? Can
   > the new symbol be ignored since there obviously isn't any
   > code in the caller's scope referring to a lexical with that
   > name?

No. Because some other subroutine called from the caller's scope might
also access caller().{MY}. In fact, you just invented a new pattern, in
which a set of subroutines called within a scope can communicate invisibly
but safely through that scope's lexical symbol table.
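
(A hypothetical illustration of that pattern, again in the speculative
caller().{MY} syntax, with invented names:)

    sub remember { caller().{MY}{'$__shared'} = \$_[0] }
    sub recall   { return ${ caller().{MY}{'$__shared'} } }

    sub client {
        my $__shared;           # the slot the two helpers communicate through
        remember("a secret");
        print recall();         # prints "a secret"
    }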


   > > Between source filters and Inline I can do pretty much whatever I like
   > > to your lexicals without your knowledge. ;-)
   > 
   > Those seem more obvious. There will be a "use" declaration

Not necessarily with Inline. Nor with source filters for that matter (the
C<use> could be 500 lines and 10 nested scopes away at the top of the file)


   > I wrote and I already know that "use" can have side-effects on
   > my current name space. IMHO this could become a significant problem
   > as we continue to make Perl more expressive. Macros, filters,
   > self-modifying code, mini-languages ... they all make expressing
   > a solution easier, and auditing code harder. Do we favor
   > expression too much over verification?

I would have said that was Perl's signature feature. ;-)


   > We also want Perl 6 to be fast and cleanly implemented.

Accessing lexicals will be no slower than accessing package variables
is today. Because in Perl 6 both lexicals and package variables will
use the same look-up mechanism: look up the appropriate symbol table
entry and that's your SV reference (or whatever replaces SVs in Perl 6).
Think of symbol table entries as vtables for variables.

   
   > > How am I expected to produce fresh wonders if you won't let me warp the
   > > (new) laws of the Perl universe to my needs?
   > 
   > You constantly amaze me and everyone else. That's never
   > been a problem.

Thank-you. But I have to contend with the inflation of expectations.
Last year I wow'd them with simple quantum physics. This year, I needed
a quantum cellular automaton simulation of molecular thermodynamics
written in Klingon. What will it take next year???

;-)

 
   > One of the things that I haven't been seeing is the exchange
   > of ideas between the implementation side and the language side.
   > I've been away for a while, so maybe it's just me.

A great deal of that happens off-list: especially between Dan and I and
between Dan and Larry.


   > It vaguely worries me though that we'll be so far down the
   > language side when implementation troubles arise that it will
   > be hard to change the language. 

I'm really not worried about that. Larry has consistently demonstrated
he's open to reassessing his design decisions when necessary.

Damian



Re: What's up with %MY?

2001-09-04 Thread Uri Guttman

> "DC" == Damian Conway <[EMAIL PROTECTED]> writes:

  DC> Thank-you. But I have to contend with the inflation of expectations.
  DC> Last year I wow'd them with simple quantum physics. This year, I needed
  DC> a quantum cellular automaton simulation of molecular thermodynamics
  DC> written in Klingon. What will it take next year???

  DC> ;-)

a fully functional perl6 in 1 page of self modifying (at the parser
level, of course) perl6 code.

hey, if lisp can do it, why not perl6?

:)

  >> It vaguely worries me though that we'll be so far down the
  >> language side when implementation troubles arise that it will
  >> be hard to change the language. 

  DC> I'm really not worried about that. Larry has consistently demonstrated
  DC> he's open to reassessing his design decisions when necessary.

and there is nothing we are doing in the implementation so far that is
restricting the language. we are actually making parrot more flexible to
support other languages. i can see a day where python and ruby, etc. are
defaulting to using the parrot back end as it will be faster, and
compatible with perl etc. imagine the work reduction if all/most of the
interpreted languages can share one common back end.

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture and Stem Development -- http://www.stemsystems.com
Search or Offer Perl Jobs  --  http://jobs.perl.org



Re: What's up with %MY?

2001-09-04 Thread Dan Sugalski

At 09:20 AM 9/5/2001 +1100, Damian Conway wrote:
>The main uses are (surprise):
>
> * introducing lexically scoped subroutines into a caller's scope

I knew there was something bugging me about this.

Allowing lexically scoped subs to spring into existence (and variables, for 
that matter) will probably slow down sub and variable access, since we 
can't safely resolve at compile time what variable or sub is being 
accessed. Take, for example:

   my $foo;
   my sub bar {print "baz\n"}
   {
     bang();
     $foo = bar();
     print $foo;
   }

Now, what I want to do is to have bar() resolve to "previous pad, entry 2" 
and  bar to "previous pad, entry 1". Which they essentially do now. Those 
lookups are snappy, at best we need to walk up the pad pointer chain. No 
biggie.

However...

If we can inject lexicals into the caller's scope, bang() could add both a 
$foo and a bar() inside the block. That means, for this to work right, I 
*can't* resolve to a pad#/offset pair--instead I need to look up by name, 
potentially every time. For any sort of speed I'd also need to do some sort 
of caching scheme with multi-level snooping and cache invalidation, since 
if the variables in question resolve in pad N at compile time, and I use 
them at pad 0, I need to potentially check that pads 1-N have had changes 
to them. I can see this making closures odd too, if I mess with pads at 
runtime. (Odd in the "walking down from pad N just got more interesting" sense)

Not that I'm arguing against it, just that I can see some efficiency issues.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: LangSpec: Statements and Blocks

2001-09-04 Thread Damian Conway


Bryan wrote:

   > > C<try> and C<catch>
   > 
   > [ LABEL: ] 
   > try { block }
   > [ [ catch [ ( expr ) ] { block } ] ... ]

the "expr" is more likely to be a "parameter_specification".


   > >> Conditional Statement Modifiers
   > >>
   > >>  6. [ LABEL: ] expr if expr;
   > >>  7. [ LABEL: ] expr unless expr;
   > >
   > > I'm not at all sure modifiers will be stackable, as this grammar implies.
   > 
   > Er, parsing error.  Are you saying I've got it right or wrong?  (I'm 
   > intending non-stackable.)

Hmmm. I had assumed that your grammar implies a statement is an expr.
But perhaps it didn't.


   > > Though I would *much* prefer to see:
   > >
   > >  21. sub identifier [ ( prototype ) ] [ :traits ] { block }
   > >  22. sub [ ( prototype ) ] [ :traits] { block } [is properties]
   > 
   > Ah, traits is what I meant.  But that's not final yet?

By no means. Larry has not told me what he thought of my revised
properties/traits proposal.


   > >> A statement consists of zero or more expressions, followed by an
   > >> optional modifier and its expression, and either a statement
   > >> terminator (';') or a block closure ('}' or EOF).
   > >
   > > Need to recast this in terms of statement separators and null statements.
   > 
   > Wouldn't a null statement be covered by a statement of 0 expressions?

Oops. Yes. I missed that. So you just need to s/terminator/separator/


   > And while I'm at it, I have some questions for you!

Curses! ;-)

   
   > Would you *please* consider reforming the 'when expr : { block }' clause as
   > when ( expr ) { block }
   > ?

That's Larry's syntax. And Larry's decision. For what it's worth, I have
previously argued against that colon. And you'll note that Switch.pm doesn't
require it (except in Perl 6 compatibility mode)
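
(For the record, the colon-free Switch.pm form is ordinary Perl 5 -- a usage
sketch along the lines of the module's synopsis, not a Perl 6 commitment:)

    use Switch;
    my $val = 'a';
    switch ($val) {
        case 1      { print "number 1" }
        case "a"    { print "string a" }
        case /\w+/  { print "pattern" }
        else        { print "previous case not true" }
    }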


  
   > Secondly, do 'when' clauses have targetable labels?

They are not 'clauses', they are statements. So yes, a C<when> can definitely
take a label.


   > given ( $a ) {
   > when /a/ : { foo($a); next BAR }
   > when /b/ : { ... }
   >BAR: when /c/ : { ... }
   > ...
   > }

That would be:

 given ( $a ) {
 when /a/ : { foo($a); goto BAR }
 when /b/ : { ... }
BAR: when /c/ : { ... }
 ...
 }

Using C<next BAR> would (presumably) cause control to head up-scope
to the first *enclosing* block labelled 'BAR'.

Damian



Re: What's up with %MY?

2001-09-04 Thread Damian Conway


Dave Mitchell asked:
 
   > If there is to be a %MY, how does its semantics pan out?

That's %MY::. The colons are part of the name.

   
   > for example, what (if anything) do the following do:
   > 
   > sub Foo::import {
   > my %m = caller(1).{MY}; # or whatever
   > %m{'$x'} = 1;
   > }

That would be:

 sub Foo::import {
 my $m = caller(1).{MY};
 $m{'$x'} = \1;
 }

Symbol table entries store references (to the actual storage), not values.
My above example would make the lexical $x variable in the caller's scope
equivalent to:

my $x : const = 1;

  
   > sub Bar::import {
   > my %m = caller(1).{MY}; # or whatever
   > delete %m{'$x'};
   > }

That would be:

 sub Bar::import {
 my $m = caller(1).{MY};
 delete $m{'$x'};
 }

which would cause the next attempted access to the lexical $x in the
caller's scope to throw an exception.


   > sub f {
   > my $x = 9;
   > use Foo; # does $x become 1, or $x redefined, or runtime error, or...?

$x becomes constant, with value 1.

   > {
   ># is this the same as 'my $x = 1' at this point,

Yes.

   ># or is the outer $x modified?

No.

   >use Foo;
   >...
   > }

Because %MY:: is lexical to each scope.

   > use Bar; # is $x now still in scope?

No.

   > print $x; #compile error? or runtime error? or prints 9 (or 1...) 

Run-time exception.


   > Bar::import(); # any difference calling it at run time?

No.


   > IE what effects do the standard hash ops of adding, modifying,
   > deleting, testing for existence, testing for undefness, etc etc map onto
   > when applied to some sub's %MY, at either compile or run time.

No difference between compile- and run-times.
Effects:

What                               Effect

add entry to %MY::                 Creates new lexical in scope
modify entry in %MY::              Changes implementation of lexical
delete entry in %MY::              Prematurely removes lexical from scope
existence test entry in %MY::      Does lexical exist in scope?
definition test entry in %MY::     Is lexical implemented in scope?

Damian



Re: What's up with %MY?

2001-09-04 Thread Me

>> What about if the symbol doesn't exist in the caller's scope
>> and the caller is not in the process of being compiled? Can
>> the new symbol be ignored since there obviously isn't any
>> code in the caller's scope referring to a lexical with that
>> name?
>
> No. Because some other subroutine called from the caller's scope might
> also access caller().{MY}. In fact, you just invented a new pattern, in
> which a set of subroutines called within a scope can communicate invisibly
> but safely through that scope's lexical symbol table.

Foxy variables. Nice.




RE: What's up with %MY?

2001-09-04 Thread Damian Conway

Dan wrote:

   > At 12:50 PM 9/4/2001 -0500, Garrett Goebel wrote:
   > >
   > >So deleting it 
   > >would remove it from the scratchpad of &incr. But I would guess that 
   > >future calls to &incr would have to autovivify $x in the scratchpad and 
   > >start incrementing it from 0. I.e., ignoring a package $x if it exists. I 
   > >could see people preferring it either way...
   > 
   > Folks might also want that to then refer to the $x in the enclosing scope. 
   > Don't think that's going to happen, though. (Lots and lots of runtime 
   > overhead in that case)

I agree (both that people might want that, and that it's probably not
going to happen ;-)


   > I can see allowing read/write/change/iterate access (possibly
   > enforcing types when writing) but not delete. That opens up a
   > number of cans of worms I'd rather stay closed for now.

Why not C<delete>? It merely requires that the internals equivalent of:

sub LEXICAL::SCALAR::FETCH ($varname) {
$scalar_ref = caller().{MY}{$varname};
return $$scalar_ref;
}

sub LEXICAL::SCALAR::STORE ($varname, $newval) {
$scalar_ref = caller().{MY}{$varname};
$$scalar_ref = $newval;
}

becomes:

sub LEXICAL::SCALAR::FETCH ($varname) {
$scalar_ref = caller().{MY}{$varname}
or throw "lexical $varname no longer in scope";
return $$scalar_ref;
}

sub LEXICAL::SCALAR::STORE ($varname, $newval) {
$scalar_ref = caller().{MY}{$varname}
or throw "lexical $varname no longer in scope";
$$scalar_ref = $newval;
}

I don't understand why you think that's particularly wormy?

Damian



Re: Prototypes

2001-09-04 Thread Damian Conway

Bryan wrote:


   > > > Er, scratch this. Blows up if the sub isn't prototyped. A much
   > > > *better* way is to make the prototype of any sub a property
   > > > (trait) of that sub. We can always query for a property.
   > >
   > > This is possible now:
   > > $foo = sub ($) { print "hello world\n" };
   > > print prototype $foo;
   > 
   > Well, it's nice to know that when I reinvent the wheel, it's still round.

But I strongly agree that the parameter list of a subroutine ought to be
accessed via a trait, rather than a builtin function.

Damian



Re: LangSpec: Statements and Blocks

2001-09-04 Thread Bryan C. Warnock

On Tuesday 04 September 2001 06:39 pm, Damian Conway wrote:
> the "expr" is more likely to be a "parameter_specification".

Urk.  I'll wait for the movie, I think.

>> >>  6. [ LABEL: ] expr if expr;
>> >>  7. [ LABEL: ] expr unless expr;
>> >
>> > I'm not at all sure modifiers will be stackable, as this grammar
>> > implies.
>>
>> Er, parsing error.  Are you saying I've got it right or wrong?  (I'm
>> intending non-stackable.)
>
> Hmmm. I had assumed that your grammar implies a statement is an expr.
> But perhaps it didn't.

The simplest statement is an expression.  I'm trying to couch the definition 
of what composes an expression to exclude 'if', 'while', 'for', etc.
Apparently right poorly, at that.

>> Wouldn't a null statement be covered by a statement of 0 expressions?
>
> Oops. Yes. I missed that. So you just need to s/terminator/separator/

Done.

>> Would you *please* consider reforming the 'when expr : { block }'
>> clause as when ( expr ) { block }
>> ?
>
> That's Larry's syntax. And Larry's decision. For what it's worth, I have
> previously argued against that colon. And you'll note that Switch.pm
> doesn't require it (except in Perl 6 compatibility mode)

Okay... well, Larry, if you're listening in.

>
>> Secondly, do 'when' clauses have targetable labels?
>
> They are not 'clauses', they are statements. So yes, a C<when> can
> definitely take a label.
>
>> given ( $a ) {
>> when /a/ : { foo($a); next BAR }
>> when /b/ : { ... }
>>BAR: when /c/ : { ... }
>> ...
>> }
>
> That would be:
>
>  given ( $a ) {
>  when /a/ : { foo($a); goto BAR }
>  when /b/ : { ... }
> BAR: when /c/ : { ... }
>  ...
>  }

If they were statements, wouldn't that be:

 when /a/ : { foo($a); goto BAR };
 when /b/ : { ... };
BAR: when /c/ : { ... };
 ...

That's why I was considering them blocks, which I, of course, mislabelled 
clauses.  Like if blocks and while blocks.

>
> Using C<next BAR> would (presumably) cause control to head up-scope
> to the first *enclosing* block labelled 'BAR'.

But wasn't a bare 'next' supposed to continue on to the next statement?

given ( expr ) {
when /a/ : { foo; next }
when /b/ : { bar }
}

If /a/ is true, do foo(), and then continue on to the next statement.
If that was/is still the case, then wouldn't a 'next LABEL' imply continuing 
on to the next statement labelled LABEL?

Of course, if it is no longer 'next', then that's fine, too.  We want things 
to be consistently different.

-- 
Bryan C. Warnock
[EMAIL PROTECTED]



RE: What's up with %MY?

2001-09-04 Thread Dan Sugalski

At 10:04 AM 9/5/2001 +1100, Damian Conway wrote:
>Dan wrote:
>Why not C<delete>? It merely requires that the internals equivalent of:

[Snippy]

>I don't understand why you think that's particularly wormy?

Ah, but what people will want is:

   my $x = "foo\n";
   {
 my $x = "bar\n";
 delete $MY::{'$x'};
 print $x;
   }

to print foo. That's where things get tricky. Though I suppose we could put 
some sort of placeholder with auto-backsearch capabilities. Or something.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: What's up with %MY?

2001-09-04 Thread Bryan C. Warnock

On Tuesday 04 September 2001 07:25 pm, Dan Sugalski wrote:
> Ah, but what people will want is:
>
>my $x = "foo\n";
>{
>  my $x = "bar\n";
>  delete $MY::{'$x'};
>  print $x;
>}
>
> to print foo. That's where things get tricky. Though I suppose we could
> put some sort of placeholder with auto-backsearch capabilities. Or
> something.

Other than the obvious run-time requirements of this, what's wrong with 
simply looking in the current pad, seeing it's not there, then looking in 
the previous pad...?  (Assuming you know the variable by name)

-- 
Bryan C. Warnock
[EMAIL PROTECTED]



RE: What's up with %MY?

2001-09-04 Thread Damian Conway

Dan sighed:

   > >I don't understand why you think that's particularly wormy?
   > 
   > Ah, but what people will want is:
   > 
   >my $x = "foo\n";
   >{
   >  my $x = "bar\n";
   >  delete $MY::{'$x'};
   >  print $x;
   >}
   > 
   > to print foo. That's where things get tricky. Though I suppose we
   > could put some sort of placeholder with auto-backsearch
   > capabilities. Or something.

Exactly. 

Damian



Re: What's up with %MY?

2001-09-04 Thread Damian Conway


Dan wept:

   > I knew there was something bugging me about this.
   > 
   > Allowing lexically scoped subs to spring into existence (and
   > variables, for that matter) will probably slow down sub and
   > variable access, since we can't safely resolve at compile time what
   > variable or sub is being accessed. 
   > 
   >[snippage]
   > 
   > Not that I'm arguing against it, just that I can see some
   > efficiency issues.

Understood. And that's why you get the big bucks. ;-)

Damian



Re: What's up with %MY?

2001-09-04 Thread Dan Sugalski

At 10:34 AM 9/5/2001 +1100, Damian Conway wrote:

>Dan wept:
>
>> I knew there was something bugging me about this.
>>
>> Allowing lexically scoped subs to spring into existence (and
>> variables, for that matter) will probably slow down sub and
>> variable access, since we can't safely resolve at compile time what
>> variable or sub is being accessed.
>>
>>[snippage]
>>
>> Not that I'm arguing against it, just that I can see some
>> efficiency issues.
>
>Understood. And that's why you get the big bucks. ;-)

I'm getting paid? Keen! :-P

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: What's up with %MY?

2001-09-04 Thread Dan Sugalski

At 07:24 PM 9/4/2001 -0400, Bryan C. Warnock wrote:
>On Tuesday 04 September 2001 07:25 pm, Dan Sugalski wrote:
> > Ah, but what people will want is:
> >
> >my $x = "foo\n";
> >{
> >  my $x = "bar\n";
> >  delete $MY::{'$x'};
> >  print $x;
> >}
> >
> > to print foo. That's where things get tricky. Though I suppose we could
> > put some sort of placeholder with auto-backsearch capabilities. Or
> > something.
>
>Other than the obvious run-time requirements of this, what's wrong with
>simply looking in the current pad, seeing it's not there, then looking in
>the previous pad...?  (Assuming you know the variable by name)

Absolutely nothing. The issue is speed. Looking back by name is, well, 
slow. The speed advantage that lexicals have is that we know both what pad 
a variable lives in and what offset in the pad it's living at. We don't 
have to do any runtime lookup--it's all compile time. If we lose that 
compile-time resolution, things get a lot slower. (Runtime lexical name 
lookup is a lot slower than runtime global lookup because we potentially 
have a lot of pads to walk up)

Certainly doable. Just potentially slow, which is what I'm worried about. 
Making it not slow has both potential significant complexity and memory 
usage. If we have to, that's fine. Just want to make sure the cost is known 
before the decision's made. :)

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: What's up with %MY?

2001-09-04 Thread Damian Conway

Dan concluded:

   > Certainly doable. Just potentially slow, which is what I'm worried
   > about. Making it not slow has both potential significant complexity
   > and memory usage. If we have to, that's fine. Just want to make
   > sure the cost is known before the decision's made. :)

I rather liked the "delete-means-install-a-pad-walking-placeholder" notion.
That way things only get slow if you actually do something evil.

Damian



Re: What's up with %MY?

2001-09-04 Thread Bryan C. Warnock

On Tuesday 04 September 2001 08:32 pm, Dan Sugalski wrote:
> Absolutely nothing. The issue is speed. Looking back by name is, well,
> slow. The speed advantage that lexicals have is that we know both what pad
> a variable lives in and what offset in the pad it's living at. We don't
> have to do any runtime lookup--it's all compile time. If we lose that
> compile-time resolution, things get a lot slower. (Runtime lexical name
> lookup is a lot slower than runtime global lookup because we potentially
> have a lot of pads to walk up)
>
> Certainly doable. Just potentially slow, which is what I'm worried about.
> Making it not slow has both potential significant complexity and memory
> usage. If we have to, that's fine. Just want to make sure the cost is
> known before the decision's made. :)

Well, the ultimate trade-off for speed is memory.  Right now, pads are 
differential - the list of things that were defined in my lexical 
scope.  Those things are evaluated at compile-time to the pad ancestry (how 
far up was something defined) and the pad offset (where within that pad) the 
value of a lexical exists.

my $foo;
{
    my $bar;
    {
        my $baz = $foo + $bar;
    }
}

(Although I don't know why I'm explaining this to you, because you know this 
far better than I do.)

Anyway, that's really...

[0][0];
{
    [1][0];
    {
        [2][0] = [0][0] + [1][0];
    }
}

# Pad 0 = [ foo ]
# Pad 1 = [ bar ]
# Pad 2 = [ baz ]

...awful.  But you get the idea.

Of course, you can't inject a new lexical foo at the inner-most loop because 
it's already looking two pads up.

But what if we went ahead and made pads additive - comprehensive, so to 
speak?

[0][0];
{
    # Dup pad 0, and build on it.
    [1][1];
    {
        # Dup pad 1, and build on it.
        [2][2] = [2][0] + [2][1];
    }
}

# Pad 0 = [ foo ]
# Pad 1 = [ foo, bar ]
# Pad 2 = [ foo, bar, baz ]

Yes, this is akin to redeclaring every lexical variable every time you 
introduce a new scope.   Not pretty, I know.  But if you want run-time 
semantics with compile-time resolution

Let's see what this buys us...  Enough for another gig of RAM, I hope.

To replace the innermost foo, all you do is change where [2][0] points.

To delete the innermost foo, all you do is replace it with the value from 
the next most innermost foo, which (if you implement this as an array) is at 
the same offset.  Of course, if the variable was introduced at that scope, 
then you point to the global instead.

To add to the innermost scope Hmm.  I'm not dumb enough to suggest 
copying all the globals in there..  Okay, how about this?  Package 
qualified globals will always resolve to a global variable, so they continue 
to handle lookups like before.  So that leaves us unqualified globals to 
take the brunt of the performance hit, which I'm okay with... so far.
Now, the unqualified globals need to first check to see if they've been 
lexicalized.  If they have been, they'd appear in the pad at an offset 
beyond where the next higher pad left off.  (Since otherwise, they'd have a 
pad entry already.)  Since most of the time, that would be empty, it'd only 
be a brief glimpse before pursuing the global.  If there are some, then it 
would have to scan for itself, and use whatever was appropriate.

It's ugly but quick... er.

-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: LangSpec: Statements and Blocks

2001-09-04 Thread Damian Conway


Bryan asked:

   > > That would be:
   > >
   > >  given ( $a ) {
   > >  when /a/ : { foo($a); goto BAR }
   > >  when /b/ : { ... }
   > > BAR: when /c/ : { ... }
   > >  ...
   > >  }
   > 
   > If they were statements, wouldn't that be:
   > 
   >  when /a/ : { foo($a); goto BAR };
   >  when /b/ : { ... };
   > BAR: when /c/ : { ... };
   >  ...
   > 
   > That's why I was considering them blocks, which I, of course, mislabelled 
   > clauses.  Like if blocks and while blocks.

A C<when> is a statement, just as an C<if> or a C<while> is a statement.


   > > Using C<next BAR> would (presumably) cause control to head up-scope
   > > to the first *enclosing* block labelled 'BAR'.
   > 
   > But wasn't a bare 'next' supposed to continue on to the next statement?

Yes.


   > given ( expr ) {
   > when /a/ : { foo; next }
   > when /b/ : { bar }
   > }
   > 
   > If /a/ is true, do foo(), and then continue on to the next statement.

Yes.


   > If that was/is still the case, then wouldn't a 'next LABEL' imply
   > continuing on to the next statement labelled LABEL?

I guess that would be consistent too. Hmmm.


   > Of course, if it is no longer 'next', then that's fine, too.

No. As far as I know, it's still C<next>. That's just an extension of
the C<next> semantics I hadn't considered. Thanks.

Damian



Re: LangSpec: Statements and Blocks

2001-09-04 Thread Bryan C. Warnock

On Tuesday 04 September 2001 09:09 pm, Damian Conway wrote:
> A C<when> is a statement, just as an C<if> or a C<while> is a statement.

Okay, then I simply need to rethink/redefine how I'm defining a statement 
(which is currently in terms of the statement separator).


-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: What's up with %MY?

2001-09-04 Thread Dan Sugalski

At 08:59 PM 9/4/2001 -0400, Bryan C. Warnock wrote:
>Yes, this is akin to redeclaring every lexical variable every time you
>introduce a new scope.   Not pretty, I know.  But if you want run-time
>semantics with compile-time resolution

That is exactly what it is, alas. If we allow lexicals to get injected in, 
we need to either do this (Basically having every non-package variable 
getting an entry in the scope's pad) or search backward. I don't much like 
either option, but I think this is the best of the lot.

So much for the "Extra braces don't carry any runtime penalty to speak of" 
speech in class... :)

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: What's up with %MY?

2001-09-04 Thread Dan Sugalski

At 12:00 PM 9/5/2001 +1100, Damian Conway wrote:
>Dan concluded:
>
>> Certainly doable. Just potentially slow, which is what I'm worried
>> about. Making it not slow has both potential significant complexity
>> and memory usage. If we have to, that's fine. Just want to make
>> sure the cost is known before the decision's made. :)
>
>I rather liked the "delete-means-install-a-pad-walking-placeholder" notion.
>That way things only get slow if you actually do something evil.

Insert needs one too. Or, rather, there needs to be one there already, and 
we may need to walk back pad by pad if a pad's changed.

I think we're going to have to go with a doubly-linked tree structure for 
pads with some sort of runtime invalidation of fake entries when the pad 
itself is messed with. Have to think on that one a bit.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: What's up with %MY?

2001-09-04 Thread Ken Fox

Damian wrote:
> Dan wept:
>> I knew there was something bugging me about this.
>> 
>> Allowing lexically scoped subs to spring into existence (and
>> variables, for that matter) will probably slow down sub and
>> variable access, since we can't safely resolve at compile time what
>> variable or sub is being accessed. 
> 
> Understood. And that's why you get the big bucks. ;-)

Efficiency is a real issue! I've got 30,000 lines of *.pm in my
latest application -- another 40,000 come from CPAN. The lines
of code run a good deal less, but it's still a pretty big chunk
of Perl.

The thought of my app suddenly running slower (possibly *much*
slower after seeing the semantics of Perl 6 lexicals) doesn't
make me real happy. IMHO it would fork the language, even if
the fork was only done with pragmas.

- Ken



Re: What's up with %MY?

2001-09-04 Thread Dan Sugalski

At 10:23 PM 9/4/2001 -0400, Ken Fox wrote:
>Efficiency is a real issue! I've got 30,000 lines of *.pm in my
>latest application -- another 40,000 come from CPAN. The lines
>of code run a good deal less, but it's still a pretty big chunk
>of Perl.
>
>The thought of my app suddenly running slower (possibly *much*
>slower after seeing the semantics of Perl 6 lexicals) doesn't
>make me real happy. IMHO it would fork the language, even if
>the fork was only done with pragmas.

I still have the "perl 6 must run faster than perl 5" mandate. Things 
*will* be faster. Somehow. We may have to do Weird Magic (or I have to 
convince Larry that demonstrated performance won't get any better) but I 
think we can get there. I think we're going to have to sacrifice some 
memory for it, though.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: What's up with %MY?

2001-09-04 Thread Ken Fox

Damian wrote:
> In other words, everything that Exporter does, only with lexical
> referents not package referents. This in turn gives us the ability to
> easily write proper lexically-scoped modules.

Great! Then we don't need run-time lexical symbol table
frobbing. A BEGIN block can muck with its caller's symbol
table at compile time.
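
The package-table half of that is stock Perl 5 already; a minimal sketch, 
just to pin down the compile-time part (the injected greet() sub is made 
up, and note this reaches the *package* table -- Perl 5 has no way to reach 
a lexical pad like this):

    #!/usr/bin/perl
    # Compile-time injection into the package symbol table from a BEGIN
    # block.  This is plain Perl 5; the lexical-pad version is the part
    # Perl 6 would have to add.
    use strict;
    use warnings;

    BEGIN {
        no strict 'refs';
        # Install main::greet before the call below is even compiled.
        *{'main::greet'} = sub { print "hello from an injected sub\n" };
    }

    greet();    # resolves fine -- the symbol existed at compile time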

> I can imagine a Lexically::Verbose module, that modifies all variables and/or
> subroutines in a scope to report their own activity:
> 
> while (whatever) {
> use Lexically::Verbose 'vars';

Another compile time example.

> In fact, you just invented a new pattern, in
> which a set of subroutines called within a scope can communicate invisibly
> but safely through that scope's lexical symbol table.

Hey, don't make me an accomplice in this... ;)

> Accessing lexicals will be no slower than accessing package variables
> is today.

Actually I'm not sure about that. Package variables only work well
because they have global definitions. Lexicals don't. IMHO in order
to have the speed of package variables, we'll have to make lexical
scope changes trigger a re-compile (at least a re-link) of the
affected code. Besides, I was hoping for Perl 6 lexicals to be a
great deal *faster* than package variables...

How much stuff currently depends on dynamic lexicals? (Ugh. Why
are we even *talking* about something that horrible?) If there were
a pragma to eliminate them, would it break much?

- Ken



Re: What's up with %MY?

2001-09-04 Thread Uri Guttman

> "DC" == Damian Conway <[EMAIL PROTECTED]> writes:

  DC> Dan concluded:
  >> Certainly doable. Just potentially slow, which is what I'm worried
  >> about. Making it not slow has both potential significant complexity
  >> and memory usage. If we have to, that's fine. Just want to make
  >> sure the cost is known before the decision's made. :)

  DC> I rather liked the
  DC> "delete-means-install-a-pad-walking-placeholder" notion.  That way
  DC> things only get slow if you actually do something evil.

but that doesn't cover adding something into the parent scope. that
can't fit into the pad since it was preallocated and has no slot for the
new symbol. maybe then a hash based pad is added and checked
first/last. again, this is a major slowdown.

even with a linear pad, if a lower scope references a parent, it has to
do a linear search. what about a similar idea to the above, add a hash
based pad whenever the pad set of symbols is changed (deletes or
additions). the original pad is still used for code compiled in the
pad's scope so those vars are always found with a pad offset and are
fast. the first time you do something to a pad from another scope, a
hash of it is made and attached to the pad. that hash is used for all
external access to %MY::.

seems like a win as it only penalizes users who poke at %MY:: from another
scope and modify its symbol table. and then the hash is a speedup
for later uses of that pad.

another way to look at it is that compiled code in a scope uses compile
time pad offsets and external access to a pad at runtime is via a
hash. the hash entries for the compiled symbols refer to pad offsets and
the others are stored in the hash itself.
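
something like that, in a perl 5 sketch (every name here is invented, just 
to show the shape of it -- this isn't internals code):

    # toy sketch: compiled code in the scope keeps using fixed offsets
    # into @pad; %overlay is only built the first time some other scope
    # pokes at this pad via %MY::, and symbols added later live only in
    # the hash.
    use strict;
    use warnings;

    my @pad     = ('foo value', 'bar value');  # slots laid out at compile time
    my %offsets = ('$foo' => 0, '$bar' => 1);  # known to the compiler
    my %overlay;                               # built lazily, on first %MY:: use

    sub external_fetch {                       # what a %MY:: lookup might do
        my ($name) = @_;
        unless (%overlay) {                    # one-time hashing of the pad
            $overlay{$_} = \$pad[ $offsets{$_} ] for keys %offsets;
        }
        return $overlay{$name};                # ref to the slot, or undef
    }

    sub external_store {                       # adding a symbol at runtime
        my ($name, $value) = @_;
        external_fetch($name);                 # make sure the overlay exists
        unless (exists $overlay{$name}) {
            my $new_slot;                      # new symbols live only in the
            $overlay{$name} = \$new_slot;      # hash, never in the array
        }
        ${ $overlay{$name} } = $value;
    }

    print "compiled access: $pad[0]\n";        # always a fixed offset
    external_store('$baz', 'added later');
    my $baz_ref = external_fetch('$baz');
    my $foo_ref = external_fetch('$foo');
    print "external access: $$baz_ref\n";
    print "external access: $$foo_ref\n";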

i can't believe i got sucked into this thread. :) 

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture and Stem Development -- http://www.stemsystems.com
Search or Offer Perl Jobs  --  http://jobs.perl.org



Re: LangSpec: Statements and Blocks

2001-09-04 Thread Randal L. Schwartz

> "Bryan" == Bryan C Warnock <[EMAIL PROTECTED]> writes:

Bryan> The simplest statement is an expression.  I'm trying to couch the definition 
Bryan> of what composes an expression to exclude 'if', 'while', 'for', etc.
Bryan> Apparently right poorly, at that.

If you treat statement as

EXPR;

(yes, semicolon INCLUDED) or

if (EXPR) BLOCK

then you won't have any problems defining where semicolons go.

A statement is an expression followed by a semicolon, or an if, or a while,
or a naked block, or something in that class.

That clearly explains why

EXPR if EXPR;

is a statement, not an EXPR, so we can't use it recursively.

The only oddness is that a closing brace acts as if it is semicolon-brace
if needed.  That way { EXPR; EXPR; EXPR } still parses, because
it acts like you wrote { EXPR; EXPR; EXPR; }.

This seems to be the most natural approach.  Define statement as
expression followed by semicolon.  Don't try to take the Pascal approach
of "semicolon is statement separator".  Take the *C* approach.

-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
<[EMAIL PROTECTED]> <http://www.stonehenge.com/merlyn/>
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!



Re: What's up with %MY?

2001-09-04 Thread Bryan C . Warnock

On Tuesday 04 September 2001 10:10 pm, Dan Sugalski wrote:
> At 08:59 PM 9/4/2001 -0400, Bryan C. Warnock wrote:
> >Yes, this is akin to redeclaring every lexical variable every time you
> >introduce a new scope.   Not pretty, I know.  But if you want run-time
> >semantics with compile-time resolution
>
> That is exactly what it is, alas. If we allow lexicals to get injected in,
> we need to either do this (Basically having every non-package variable
> getting an entry in the scope's pad) or search backward. I don't much like
> either option, but I think this is the best of the lot.
>
> So much for the "Extra braces don't carry any runtime penalty to speak of"
> speech in class... :)

Well, they still wouldn't.  Mostly.

All the pads could *still* be set up at compile time.  All lexicals within a 
scope would be grouped together, which might (doubtful) help reduce paging.
If pads were still arrays, the original construction would consist of  
memcopys - about as cheap a duplication as you'll get.  And the 
performance hits would be taken only by a) the unqualified globals, and b) 
the actual twiddling of the lexical variables (both in lookup, and in 
manipulation).  If you're going to take hits, that's where to take them.
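
Here's a rough way to see that construction cost, with Perl 5 arrays and 
hashes standing in for the real pads (a toy benchmark under invented names, 
not anything from the actual core):

    # Toy comparison only: setting up an array-based pad is one bulk copy,
    # while a hash-based pad pays a per-name cost on every scope entry.
    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my @template = (1 .. 32);                  # compile-time pad layout
    my @names    = map "var$_", 1 .. 32;       # names for the hash-based pad

    cmpthese(-1, {
        array_pad => sub { my @pad = @template },                 # bulk copy
        hash_pad  => sub { my %pad; @pad{@names} = @template },   # per-key stores
    });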

Of course, then you've got the bloat to worry about.  Which might make your 
decision to go ahead and be slow an easy one

But why are we on the language list for this?  Back to internals we go..

-- 
Bryan C. Warnock
[EMAIL PROTECTED]



debugger API PDD, v1.1

2001-09-04 Thread Dave Storrs

=head1 TITLE

API for the Perl 6 debugger.

=head1 VERSION

1.1

=head2 CURRENT

 Maintainer: David Storrs ([EMAIL PROTECTED])
 Class: Internals
 PDD Number: ?
 Version: 1.1
 Status: Developing
 Last Modified: August 18, 2001
 PDD Format: 1
 Language: English

=head2 HISTORY

=over 4

=item Version 1.1

=item Version 1

First version

=back

=head1 CHANGES

 1.1 - Minor edits throughout
     - Explicit and expanded list of how breakpoints may be set
     - Explicit mention of JIT compilation
     - Added mention of edit-and-continue functionality
     - Added "remote debugging" section
     - Added "multithreaded debugging" section

 1   - None.  First version

=head1 ABSTRACT

This PDD describes the API for the Perl6 debugger.

=head1 DESCRIPTION

The following is a simple English-language description of the
functionality that we need.  Implementation is described in a later
section.  Descriptions are broken out by which major system will need
to provide the functionality (interpreter, optimizer, etc) and the
major systems are arranged in (more or less) the order in which the
code passes through them.  Within each section, functionality is
arranged according to (hopefully) logical groupings.


=head2 Compiler

=head3 Generating Code on the Fly

=over 4

=item *

Compile and return the bytecode stream for a given expression. Used
for evals of user-specified code and edit/JIT compiling of source.
Should be able to compile in any specified context (e.g., scalar,
array, etc).

=item *

Show the bytecode stream emitted by a particular expression, either a
part of the source or user-specified.  (This is basically just the
above method with a 'print' statement wrapped around it.)

=item *

Do JIT compilation of source at runtime (this is implied by the first
item in this list, but it seemed better to mention it explicitly).

=back # Closes 'Generating Code on the Fly' section
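
This is not the proposed API; the following is only the nearest Perl 5
analogue, included to make the intent concrete.  The C<compile_expression>
and C<show_expression> names are invented for this sketch, and C<B::Concise>
stands in for whatever the real bytecode dumper turns out to be.

    #!/usr/bin/perl
    # Sketch only: compile an expression on the fly in a requested
    # context, then dump the op sequence it produced.
    use strict;
    use warnings;
    use B::Concise ();

    sub compile_expression {
        my ($source, $context) = @_;           # context: 'scalar' or 'list'
        my $wrapped = $context eq 'list'
            ? "sub { my \@r = ($source); \@r }"
            : "sub { my \$r = ($source); \$r }";
        my $code = eval $wrapped or die "compile failed: $@";
        return $code;
    }

    sub show_expression {                      # the print-wrapped variant
        my ($source, $context) = @_;
        my $code = compile_expression($source, $context);
        B::Concise::compile('-exec', $code)->();
    }

    show_expression('2 + int rand 10', 'scalar');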



=head2 Optimizer

=head3 Generating and Comparing Optimizations

=over 4

=item *

Optimize a specified bytecode stream in place.

=item *

Return an optimized copy of the specified bytecode stream.

=item *

Show the diffs between two bytecode streams (presumably pre- and
post-optimization versions of the same stream).

=back # Closes 'Generating and Comparing Optimizations' section




=head2 Interpreter

=head3 Manipulating the Bytecode Stream

=over 4

=item *

Display the bytecodes for a particular region.

=item *

Fetch the next bytecode from the indicated stream.

// @@NOTE: from a design perspective, this is nicer than doing
"(*bcs)" everywhere, but we definitely don't want to pay a function
call overhead every time we fetch a bytecode.  Can we rely on all
compilers to inline this properly?

=item *

Append/prepend all the bytecodes in 'source_stream' to 'dest_stream'.
Used for things like JIT compilation.

=back # Closes 'Manipulating the Bytecode Stream' section



=head3 Locating Various Points in the Code

=over 4

=item *

Locate the beginning of the next Perl expression in the specified
bytestream (which could be, but is not necessarily, the head of the
stream).

=item *

Locate the beginning of the next Perl source line in the specified
bytestream (which could be, but is not necessarily, the head of the
stream).

=item *

Search the specified bytestream for the specified bytecode.  Return
the original bytecode stream, less everything up to the located
bytecode.

// @@NOTE: Should the return stream include the searched-for bytecode
or not?  In general, I think this will be used to search for 'return'
bytecodes, in order to support the "step out of function"
functionality. In that case, it would be more convenient if the return
were B<still> there.

=item *

Search the specified bytecode stream for the specified line number.
This line may appear in the current module (the default), or in
another module, which must then be specified.

=item *

Search the specified bytecode stream for the beginning of the
specified subroutine.

=item *

Locate the beginning of the source line which called the function for
which the current stack frame was created.

=item *

Locate the next point, or all points, where a specified file is 'use'd
or 'require'd

=back # Closes 'Locating Various Points in the Code' section.



=head3 Moving Through the Code

=over 4

=item *

Continue executing code, stopping at the end of the code or at the first
breakpoint found.

=item *

Continue up to a specified line, ignoring breakpoints on the way.

=item *

In the source which produced a specified bytecode stream, search
forwards for a specified pattern.

=item *

In the source which produced a specified bytecode stream, search
backwards for a specified pattern.

=item *

In the source which produced a specified bytecode stream, search
forwards for lines where a specified expression is satisfied.

=item *

In the source which produced a specified bytecode stream, search
backwards for lines where a specified expression is satisfied.

=back # Closes 'Moving th