Re: What's up with %MY?

2001-09-06 Thread Bryan C. Warnock

On Thursday 06 September 2001 06:16 am, Dave Mitchell wrote:
> One further worry of mine concerns the action of %MY:: on unintroduced
> variables (especially the action of delete).
>
> my $x = 100;
> {
> my $x = (%MY::{'$x'} = \200, $x+1);
> print "inner=$x, ";
> }
> print "outer=$x";
>
> I'm guessing this prints inner=201, outer=200

Perhaps I missed something, but %MY:: refers to my lexical scope, and not my 
parent's, correct?

Why isn't this inner=201, outer=100?

>
> As for
>
> my $x = 50;
> {
> my $x = 100;
> {
>   my $x = (delete %MY::{'$x'}, $x+1);
>   print "inner=$x, ";
> }
> print "middle=$x, ";
> }
> print "outer=$x";
>
> If delete 'reexposes' an outer version of that variable, then I'd
> speculate the output would be
>
> inner=51, middle=50, outer=50

Again, I thought that %MY:: referred to my current scope, in which case the 
delete doesn't do anything.  That would make it 101, 100, 50.

Is my understanding incorrect?

-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: What's up with %MY?

2001-09-06 Thread Bart Lateur

On Tue, 04 Sep 2001 18:38:20 -0400, Dan Sugalski wrote:

>At 09:20 AM 9/5/2001 +1100, Damian Conway wrote:
>>The main uses are (surprise):
>>
>> * introducing lexically scoped subroutines into a caller's scope
>
>I knew there was something bugging me about this.
>
>Allowing lexically scoped subs to spring into existence (and variables, for 
>that matter) will probably slow down sub and variable access, since we 
>can't safely resolve at compile time what variable or sub is being 
>accessed.

Eh... isn't the main use for this, in import()? That's still compile
time, isn't it?

So you optimize it for compile time. Once compile time is over, the
lexical variables table is frozen. If you want to add more stuff at 
runtime, it'll cost you.

BTW if you add new variables at the back of the table, I don't see how
any old references to existing variables would be compromised.

-- 
Bart.



Re: What's up with %MY?

2001-09-06 Thread Dave Mitchell

"Bryan C. Warnock" <[EMAIL PROTECTED]> wrote:
> On Thursday 06 September 2001 06:16 am, Dave Mitchell wrote:
> > One further worry of mine concerns the action of %MY:: on unintroduced
> > variables (especially the action of delete).
> >
> > my $x = 100;
> > {
> > my $x = (%MY::{'$x'} = \200, $x+1);
> > print "inner=$x, ";
> > }
> > print "outer=$x";
> >
> > I'm guessing this prints inner=201, outer=200
> 
> Perhaps I missed something, but %MY:: refers to my lexical scope, and not my 
> parent's, correct?
> 
> Why isn't this inner=201, outer=100?

Because on the RHS of a 'my $x = ...' expression, $x is not yet in scope
(ie hasn't been "introduced"), so

my $x = 100; { my $x = $x+1; print $x }

prints 101, not 1 - the $x in the '$x+1' expression refers to the $x in the
outer scope.

I was just trying to confirm whether similar semantics apply to the use of
%MY:: - ie when used where a lexical has been defined but not yet introduced,
does %MY{'$x'} pick up the inner or outer lex?

I especially wanted to confirm whether delete %MY{'$x'} will delete the outer
$x because the inner one isn't yet quite in scope.




Re: What's up with %MY?

2001-09-06 Thread Bryan C. Warnock

%MY:: manipulates my lexical pad.  If, to resolve a variable, I have to 
search backwards through multiple pads (that's a metaphysical search, so as 
to not dictate a physical search as the only behavior), that's a different 
beastie.

Consider it like, oh, PATH and executables:
`perl` will search PATH and execute the first perl found, but 'rm perl' will 
not.  It would only remove a perl in my current scope..., er, directory.
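
To make the analogy concrete with something that actually runs today, here is a
rough Perl 5 sketch using package symbol tables and @ISA as a stand-in for
nested pads (Perl 5 has no %MY::, and the package names are made up, so this is
purely illustrative): deleting a name from the innermost table doesn't touch
the table that lookup falls back to.

    package Outer;
    sub hello { return "outer hello" }

    package Inner;
    our @ISA = ('Outer');           # lookup falls back to Outer, like an outer scope
    sub hello { return "inner hello" }

    package main;
    print Inner->hello(), "\n";     # inner hello
    delete $Inner::{hello};         # remove only Inner's own entry
    print Inner->hello(), "\n";     # outer hello - the fallback is untouched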

On Thursday 06 September 2001 08:28 am, Dave Mitchell wrote:
> > > my $x = 100;
> > > {
> > > my $x = (%MY::{'$x'} = \200, $x+1);
> > > print "inner=$x, ";
> > > }
> > > print "outer=$x";
> > >
> > > I'm guessing this prints inner=201, outer=200

Oops, I may have been wrong.  This might give you {some random number}, 100,
depending on how the reference is handled.

What you are, in essence, doing is creating a lexical $x in my current 
scope, and setting that to be a reference to 200.  You're then taking that 
newly created lexical $x, adding 1 to it (which currently is adding one to 
the address of the constant, but whatever), and that is being stored in, 
effectively, itself.
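
Hand-expanding that reading into plain Perl 5 (the %MY:: assignment is written
out by hand, since Perl 5 can't do it; everything else runs as shown, so this
is only a sketch of the semantics being described):

    my $x = 100;
    {
        my $x;                # the inner lexical springs into existence
        $x = \200;            # what %MY::{'$x'} = \200 would do to it
        $x = $x + 1;          # '$x+1' then adds 1 to the numified reference
        print "inner=$x, ";   # inner={some random number}
    }
    print "outer=$x";         # outer=100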

> >
>
> I was just trying to confirm whether similar semantics apply to the use of
> %MY:: - ie when used where a lexical has been defined but not yet
> introduced, does %MY{'$x'} pick up the inner or outer lex?
>
> I especially wanted to confirm whether delete %MY{'$x'} will delete the
> outer $x because the inner one isn't yet quite in scope.

The delete should be no-oppish, as the lexical variable doesn't exist yet 
in the current lexical scope.  If you want to mess with your parent's scope, 
you have to mess with it directly, not indirectly.

-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: What's up with %MY?

2001-09-06 Thread Dave Mitchell

"Bryan C. Warnock" <[EMAIL PROTECTED]> mused:
> Consider it like, oh, PATH and executables:
> `perl` will search PATH and execute the first perl found, but 'rm perl' will 
> not.  It would only remove a perl in my current scope..., er, directory.

But surely %MY:: allows you to access/manipulate variables that are in scope,
not just variables that are defined in the current scope, ie

my $x = 100;
{
print $MY::{'$x'};
}

I would expect that to print 100, not 'undef'. Are your expectations different?

I think any further discussion hinges on that.




RE: What's up with %MY?

2001-09-06 Thread Garrett Goebel

From: Dave Mitchell [mailto:[EMAIL PROTECTED]]
> "Bryan C. Warnock" <[EMAIL PROTECTED]> mused:
> > Consider it like, oh, PATH and executables:
> > `perl` will search PATH and execute the first perl
> > found, but 'rm perl' will not.  It would only remove
> > a perl in my current scope..., er, directory.
> 
> But surely %MY:: allows you to access/manipulate
> variables that are in scope, not just variables that are
> defined in the current scope, ie
> 
> my $x = 100;
> {
> print $MY::{'$x'};
> }
> 
> I would expect that to print 100, not 'undef'. Are your 
> expectations different?

Hmm, shouldn't that print something like SCALAR(0x1b9289c)?

If you meant ${$MY::{'$x'}} then I'd agree...
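
The same distinction shown with an ordinary Perl 5 hash of references standing
in for the pad (illustrative only; %pad is made up):

    my %pad = ( '$x' => \100 );       # pretend pad: name => ref to the value
    print $pad{'$x'}, "\n";           # SCALAR(0x...)  - the reference itself
    print ${ $pad{'$x'} }, "\n";      # 100            - the value it refers to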







Re: What's up with %MY?

2001-09-06 Thread Dan Sugalski

At 02:19 PM 9/6/2001 +0200, Bart Lateur wrote:
>On Tue, 04 Sep 2001 18:38:20 -0400, Dan Sugalski wrote:
>
> >At 09:20 AM 9/5/2001 +1100, Damian Conway wrote:
> >>The main uses are (surprise):
> >>
> >> * introducing lexically scoped subroutines into a caller's scope
> >
> >I knew there was something bugging me about this.
> >
> >Allowing lexically scoped subs to spring into existence (and variables, for
> >that matter) will probably slow down sub and variable access, since we
> >can't safely resolve at compile time what variable or sub is being
> >accessed.
>
>Eh... isn't the main use for this, in import()? That's still compile
>time, isn't it?

Sure, but the issue isn't compile time, it's runtime. The main use for 
source filters was to allow compressed source, but look what Damian's done 
with 'em... :)

>So you optimize it for compile time. Once compile time is over, the
>lexical variables table is frozen. If you want to add more stuff at
>runtime, it'll cost you.

Right. But since you have to take into account the possibility that a 
variable outside your immediate scope (because it's been defined in an 
outer level of scope) might get replaced by a variable in some intermediate 
level, things get tricky. (On the other hand, Simon and I were chatting and 
I think we have a good solution, but it'll take some time to pan out)

>BTW if you add new variables at the back of the table, I don't see how
>any old references to existing variables would be compromised.

Well, if you have this:

   my $foo = 'a';
   {
     {
       %MY[-1]{'$foo'} = 'B';
       print $foo;
     }
   }

(modulo syntax) should the print print a, or B? We bound to a $foo way 
outside our scope, then injected a $foo in the middle.

Personally, I'd argue that it should print 'B'. Others may differ. I don't 
care much, as long as I know what I should be doing at the internals level.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: What's up with %MY?

2001-09-06 Thread Ken Fox

Dan Sugalski wrote:
> ... you have to take into account the possibility that a
> variable outside your immediate scope (because it's been defined in an
> outer level of scope) might get replaced by a variable in some intermediate
> level, things get tricky.

Other things get "tricky" too. How about when the compiler
replaces a lexical with a constant? How about when
the compiler optimizes away a sub call because it's side-
effect-free and called with a constant? What about dead
code elimination? What about when the compiler selects a
register-based call op because of the prototype and then
the sub gets replaced with an incompatible sub at run-time?
What about inlining?

We're not just talking symbol table frobbing. The whole ball
of wax is on the table.
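
To pick just the first case, here's a hedged Perl sketch of what's at stake
(the %MY:: line is hypothetical and only appears in a comment):

    my $debug = 0;                    # provably 0 in this scope
    {
        # The compiler would love to drop this statement as dead code...
        warn "tracing\n" if $debug;
        # ...but it can only do that if nothing is allowed to do something
        # like %MY::{'$debug'} = \1 in this scope at run time.
    }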

> Personally, I'd argue that it should print 'B'.

I totally agree. What's the point in injecting *broken* lexicals?!

Yeah, I can see it now. Perl 6 has three kinds of variables:
dynamically scoped package variables, statically scoped lexical
variables and "Magical Disappearing Reappearing Surprise Your
Friends Every Time" variables. Oh, and by the way, lexicals
are really implemented using "Magical Disappearing Reappearing
Surprise Your Friends Every Time" variables, so I guess we only
have two kinds of variables...

- Ken



Re: What's up with %MY?

2001-09-06 Thread Uri Guttman

> "DS" == Dan Sugalski <[EMAIL PROTECTED]> writes:

  DS>my $foo = 'a';
  DS>{
  DS>  {
  DS>%MY[-1]{'$foo'} = 'B';
  DS>print $foo;
  DS>  }
  DS> }

explain %MY[-1] please.

my impression is that that's illegal/meaningless in perl6. maybe you meant
something with caller and getting the last scope.

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture and Stem Development -- http://www.stemsystems.com
Search or Offer Perl Jobs  --  http://jobs.perl.org



Re: What's up with %MY?

2001-09-06 Thread Dan Sugalski

At 11:51 AM 9/6/2001 -0400, Uri Guttman wrote:
> > "DS" == Dan Sugalski <[EMAIL PROTECTED]> writes:
>
>   DS>my $foo = 'a';
>   DS>{
>   DS>  {
>   DS>%MY[-1]{'$foo'} = 'B';
>   DS>print $foo;
>   DS>  }
>   DS> }
>
>explain %MY[-1] please.
>
>my impression is that is illegal/meaningless in perl6. maybe you meant
>something with caller and getting the last scope.

Yup. I don't know that caller will be right when walking up plain block 
scopes, so I punted on the syntax.

I could probably give you the bytecode... :)

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: CLOS multiple dispatch

2001-09-06 Thread David L. Nicol

Hong Zhang wrote:

> How do you define the currently loaded? If things are lazy loaded,
> the stuff you expect has been loaded may not have been loaded.

We could load placeholders that go and load the bigger methods
as needed, for instance.  

-- 
   David Nicol 816.235.1187
do be do be do



Re: What's up with %MY?

2001-09-06 Thread Dan Sugalski

At 11:44 AM 9/6/2001 -0400, Ken Fox wrote:
>Yeah, I can see it now. Perl 6 has three kinds of variables:
>dynamically scoped package variables, statically scoped lexical
>variables and "Magical Disappearing Reappearing Surprise Your
>Friends Every Time" variables. Oh, and by the way, lexicals
>are really implemented using "Magical Disappearing Reappearing
>Surprise Your Friends Every Time" variables, so I guess we only
>have two kinds of variables...

Not to put a damper on your rant here (and a nice one it was... :) but this 
is doable now. Heck, I can write a sub that completely rewrites the code 
for its caller at will now, in perl 5. Your code is only safe by 
convention. That safety isn't enforced in any way by the interpreter.
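
For instance, here's a small, runnable Perl 5 illustration of that point
(package and sub names invented for illustration): a sub that reaches into its
caller's package and silently replaces one of its subs.

    package Innocent;
    sub greet { print "hello\n" }

    package Meddler;
    sub meddle {
        my $pkg = caller;                                  # whoever called us
        no strict 'refs';
        *{"${pkg}::greet"} = sub { print "who, me?\n" };   # swap out their sub
    }

    package Innocent;
    greet();               # hello
    Meddler::meddle();
    greet();               # who, me?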

I think you're also overestimating the freakout factor. You already have 
the easy potential for global variables and subroutines to change contents 
and behaviour at whim, and the world's not come to an end. If someone does 
go so far as to write a module that does truly bizarre and, more to the 
point, unexpected and undocumented things, then nobody'll use it.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




RE: What's up with %MY?

2001-09-06 Thread Dave Mitchell

One further worry of mine concerns the action of %MY:: on unintroduced
variables (especially the action of delete).

my $x = 100;
{
my $x = (%MY::{'$x'} = \200, $x+1);
print "inner=$x, ";
}
print "outer=$x";

I'm guessing this prints inner=201, outer=200

As for

my $x = 50;
{
my $x = 100;
{
my $x = (delete %MY::{'$x'}, $x+1);
print "inner=$x, ";
}
print "middle=$x, ";
}
print "outer=$x";

If delete 'reexposes' an outer version of that variable, then I'd speculate
the output would be

inner=51, middle=50, outer=50




Re: What's up with %MY?

2001-09-06 Thread David L. Nicol

Damian Conway wrote:


> proper lexically-scoped modules.

sub foo { print "outer foo\n"};
{
local *foo = sub {print "inner foo\n"};
foo();
};
foo();


did what I wanted it to.   Should I extend Pollute:: to make
this possible:

in file Localmodules.pm:

use Pollute::Locally;
use Carp;

and in blarf.pl:

sub Carp {print "outer carp\n"};

{
use Localmodules;
local *{$_} foreach @PolluteList;
Pollute();
Carp("Inner Carp"); # goes to STDERR

}

Carp(); #prints "outer carp\n"



Or is that just too complex?  Local must be visible at compile time,
I suppose.

local use Carp;

is how a local import should look, IMO; that might have gotten into an RFC.




-- 
   David Nicol 816.235.1187
Refuse to take new work - finish existing work - shut down.



Re: What's up with %MY?

2001-09-06 Thread Bryan C. Warnock

On Thursday 06 September 2001 07:15 am, David L. Nicol wrote:
> in file Localmodules.pm:
>
>   use Pollute::Locally;
>   use Carp;
>
> and in blarf.pl:
>
>   sub Carp {print "outer carp\n"};

sub frok { Carp(); }

>
>   {
>   use Localmodules;
>   local *{$_} foreach @PolluteList;
>   Pollute();
>   Carp("Inner Carp"); # goes to STDERR

frok();# We want this to print "outer carp"

>
>   }
>
>   Carp(); #prints "outer carp\n"
>



-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: pads and lexicals

2001-09-06 Thread Dave Mitchell

Simon Cozens <[EMAIL PROTECTED]> wrote:
> On Thu, Sep 06, 2001 at 11:05:37AM +0100, Dave Mitchell wrote:
> > I'm trying to get my head round the relationship between pad lexicals,
> > pad tmps, and registers (if any).
> 
> It's exactly the same as the relationship between auto variables, C
> temporaries and machine registers.

Hmmm, except that at the hardware level, registers can store the actual
temporary values themselves, whereas the PMC registers in the parrot VM
only hold pointers to the values, and storage for these values needs to be 
allocated by some other mechanism.




Re: pads and lexicals

2001-09-06 Thread Ken Fox

Dave Mitchell wrote:
> Hmmm, except that at the hardware level, registers can store the actual
> temporary values themselves

register struct value *hardware_registers_can_be_pointers_too;

The PMC registers act like pointer-to-struct registers. Other register
sets can hold immediate values. This is exactly the same as real
hardware having registers for floating point, addresses, words, etc.
Parrot just uses more types.

- Ken



Re: pads and lexicals

2001-09-06 Thread Dave Mitchell

whoops, forgot to CC the list

- Begin Forwarded Message -

Date: Thu, 6 Sep 2001 14:32:19 +0100 (BST)
From: Dave Mitchell 
Subject: Re: pads and lexicals
To: [EMAIL PROTECTED]

Ken Fox <[EMAIL PROTECTED]> wrote:
> Dave Mitchell wrote:
> > Hmmm, except that at the hardware level, registers can store the actual
> > temporary values themselves
> 
> register struct value *hardware_registers_can_be_pointers_too;
> 
> The PMC registers act like pointer-to-struct registers. Other register
> sets can hold immediate values. This is exactly the same as real
> hardware having registers for floating point, addresses, words, etc.
> Parrot just uses more types.

Just to clarify what I'm talking about. I'm referring to ops that return
a PMC (as opposed to a raw int, for example).

In C, the code a = a + a*b needs to store the intermediate result (a*b)
somewhere. The compiler can do this by storing it in a temporary
hardware register. The Perl equivalent $a = $a + $a*$b requires a
temporary PMC to store the intermediate result ($a*$b). I'm asking
where this tmp PMC comes from. A C compiler doesn't have the same
problem, which is why the analogy with C and h/w registers breaks down
slightly.

- End Forwarded Message -





Re: Should MY:: be a real symbol table?

2001-09-06 Thread Ken Fox

Bart Lateur wrote:
> On Mon, 03 Sep 2001 19:29:09 -0400, Ken Fox wrote:
> > The concept isn't the same. "local" variables are globals. 
> 
> This is nonsense.
> ...
> How are globals conceptually different than, say, globally scoped
> lexicals? Your description of global variables might just as well apply
> to file scoped lexicals.

Try this example:

  use vars qw($x);

  $x = 1;

  sub foo { ++$x }

  sub bar {
    local($x);

    $x = 2;
    foo();
    print "$x\n";
  }

  bar();
  print "$x\n";

That prints:

  3
  1

If you use lexicals, you get the behavior that was *probably*
intended. "local" merely saves a copy of the variable's current
value and arranges for the variable to be restored when the
block exits. This is *fundamentally* different than lexical
variables.

It is possible to simulate the behavior of lexicals with
globals if you generate unique symbols for each instance of
each lexical. For example:

  foreach (1..10) { my $x; push @y, \$x }

could compile to something like:

  foreach (1..10) {
    local($_temp) = gensym;
    $::{$_temp} = undef;
    push @y, \$::{$_temp}
  }

The local used for $_temp must be guaranteed to never be
changed by any other sub. Ideally $_temp would be visible
only in the scope of the original lexical, but then $_temp
would be a lexical... ;)

- Ken



Re: pads and lexicals

2001-09-06 Thread Simon Cozens

On Thu, Sep 06, 2001 at 02:35:53PM +0100, Dave Mitchell wrote:
> The Perl equivalent $a = $a + $a*$b requires a
> temporary PMC to store the intermediate result ($a*$b)

Probably a temporary INT or NUM register, in fact. But I see
your point. I wouldn't be surprised if some of the PMC registers
had spare PMCs knocking around to store this kind of thing.

Simon



Re: Should MY:: be a real symbol table?

2001-09-06 Thread Bart Lateur

On Mon, 03 Sep 2001 19:29:09 -0400, Ken Fox wrote:

>> *How* are they "fundamentally different"?
>
>Perl's "local" variables are dynamically scoped. This means that
>they are *globally visible* -- you never know where the actual
>variable you're using came from. If you set a "local" variable,
>all the subroutines you call see *your* definition.
>
>Perl's "my" variables are lexically scoped. This means that they
>are *not* globally visible. Lexicals can only be seen in the scope
>they are introduced and they do not get used by subroutines you
>call. This is safer and a bit easier to use because you can tell
>what code does just by reading it.
>
>> But in this case the pad is actually a full symbol table.  The
>> concept is the same, the data structure is different.
>
>The concept isn't the same. "local" variables are globals. 

This is nonsense.

First of all, currently, you can localize an element from a hash or an
array, even if the variable is lexically scoped. This works:

use Data::Dumper;
my %hash = ( foo => 42, bar => '007' );
{
 local $hash{foo} = 123;
 print "Inner: ", Dumper \%hash;
}
print "Outer: ", Dumper \%hash;
-->
Inner: $VAR1 = {
  'foo' => 123,
  'bar' => '007'
};
Outer: $VAR1 = {
  'foo' => 42,
  'bar' => '007'
};

So local and global are not one and the same concept.

Unfortunately, this doesn't work with plain lexical scalars. I wonder
why. Really.

How are globals conceptually different than, say, globally scoped
lexicals? Your description of global variables might just as well apply
to file scoped lexicals. Currently, that is the largest possible scope,
but why stop there?

Typeglobs are on the verge of extinction. Perhaps the current concept of
symbol tables may well follow the same route? A symbol table will be
different in perl6, anyway. If the implementation of lexicals is
consistently faster than that of globals, perhaps globals ought to be
implemented in the same way as lexicals?

Off the top of my head, I can already think of one reason against:
dynamic creation of new global variables, for example while loading new
source code. It's a situation you just can't have with lexicals. For
globals, it can, and will, happen, and it would require extending the
"global pad" or something like that.

-- 
Bart.



Re: pads and lexicals

2001-09-06 Thread Dave Mitchell

Simon Cozens <[EMAIL PROTECTED]> wrote:
> On Thu, Sep 06, 2001 at 02:35:53PM +0100, Dave Mitchell wrote:
> > The Perl equivalent $a = $a + $a*$b requires a
> > temporary PMC to store the intermediate result ($a*$b)
> 
> Probably a temporary INT or NUM register, in fact. But I see
> your point. I wouldn't be surprised if some of the PMC registers
> had spare PMCs knocking around to store this kind of thing.

especially as most vtable methods (eg mult) work on PMCs and return
PMCs - or rather, instead of returning PMCs, they expect the address of an
existing PMC which they can scribble on with their result.

So I guess I'm asking whether we're abandoning the Perl 5 concept
of a pad full of tmp targets, each hardcoded as the target for individual
ops to store their tmp results in.

If a certain number of PMC regs are 'hardcoded' with pointers to
PMC tmps, then we need to address register overflow, eg an expression like

foo($x+1, $x+2, ..., $x+65);




Re: pads and lexicals

2001-09-06 Thread Ken Fox

Dave Mitchell wrote:
> The Perl equivalent $a = $a + $a*$b requires a
> temporary PMC to store the intermediate result ($a*$b). I'm asking
> where this tmp PMC comes from.

The PMC will stashed in a register. The PMC's value will be
stored either on the heap or in a special memory pool reserved
for temps. (I'm guessing we won't have a real generational
garbage collector, but we will be able to know when/if a temp
is destroyed at block exit.)

Dan can say for sure since he's read the code. (nudge, nudge ;).

BTW, I think we will be able to optimize this code in some
instances to use the floating point registers instead of the
PMC registers. (This is the main reason I'm totally against
run-time modification of the current scope -- essentially
we'd have to treat *everything* as a PMC and we lose all of
our optimization possibilities.)

- Ken



Re: pads and lexicals

2001-09-06 Thread Simon Cozens

On Thu, Sep 06, 2001 at 02:54:29PM +0100, Dave Mitchell wrote:
> So I guess I'm asking whether we're abandoning the Perl 5 concept
> of a pad full of tmp targets, each hardcoded as the target for individual
> ops to store their tmp results in.

Not entirely; the last thing we want to be doing is creating PMCs at
runtime.

> If a certain number of PMC regs are 'hardcoded' with pointers to
> PMC tmps, then we need to address register overflow, eg an expression like
> 
> foo($x+1, $x+2, ..., $x+65);

That's slightly different, though, because that'll all be passed in as
a list.

Simon



Re: pads and lexicals

2001-09-06 Thread Dave Mitchell

Simon Cozens <[EMAIL PROTECTED]> wrote:
> On Thu, Sep 06, 2001 at 02:54:29PM +0100, Dave Mitchell wrote:
> > So I guess I'm asking whether we're abandoning the Perl 5 concept
> > of a pad full of tmp targets, each hardcoded as the target for individual
> > ops to store their tmp results in.
> 
> Not entirely; the last thing we want to be doing is creating PMCs at
> runtime.

Sorry, I thought you were suggesting that at compile time a fixed number of
tmp PMCs would be created, and slots 1-N of the PMC registers would be set
permanently to point to them. Which is why I was concerned about the
possibility of N+1 tmps being needed.

> > If a certain number of PMC regs are 'hardcoded' with pointers to
> > PMC tmps, then we need to address register overflow, eg an expression like
> > 
> > foo($x+1, $x+2, ..., $x+65);
> 
> That's slightly different, though, because that'll all be passed in as
> a list.

So how does that all work then? What does the parrot assembler for

foo($x+1, $x+2, ..., $x+65)

look like roughly - and where do the 65 tmp PMCs come from? In Perl 5 they're
the 65 pad tmps associated with the add ops.

PS - I'm not trying to "catch anyone out", I'm just trying to understand :-)




Re: An overview of the Parrot interpreter

2001-09-06 Thread Simon Cozens

On Sun, Sep 02, 2001 at 11:56:10PM +0100, Simon Cozens wrote:
> Here's the first of a bunch of things I'm writing which should give you
> practical information to get you up to speed on what we're going to be doing
> with Parrot so we can get you coding away. :) Think of them as having a
> Apocalypse->Exegesis relationship to the PDDs. 

I want to get on with writing all the other documents like this one, but
I don't want the questions raised in this thread to go undocumented and
unanswered. I would *love* it if someone could volunteer to send me a patch
to the original document tightening it up in the light of this thread.

Anyone fancy doing that?

Simon



Re: pads and lexicals

2001-09-06 Thread Ken Fox

Dave Mitchell wrote:
> So how does that all work then? What does the parrot assembler for
> 
>   foo($x+1, $x+2, ..., $x+65)

The arg list will be on the stack. Parrot just allocates new PMCs and
pushes the PMC on the stack.

I assume it will look something like

  new_pmc pmc_register[0]
  add pmc_register[0], $x, 1
  push pmc_register[0]

  new_pmc pmc_register[0]
  add pmc_register[0], $x, 2
  push pmc_register[0]

  ...

  call foo, 65

It would be nice if we knew the lifetime of those temps so that
we could optimize the allocation. In Perl 5, closures don't capture
@_ -- I hope Perl 6 won't capture them either. So the only thing
we need to worry about is code taking a reference to @_. That
should be something the compiler can catch.

Hmm. It didn't occur to me that raw values might go on the call
stack. Is the call stack going to store PMCs only? That would
simplify things a lot.

- Ken



Re: An overview of the Parrot interpreter

2001-09-06 Thread Ken Fox

Simon Cozens wrote:
> I want to get on with writing all the other documents like this one, but
> I don't want the questions raised in this thread to go undocumented and
> unanswered. I would *love* it if someone could volunteer to send me a patch
> to the original document tightening it up in the light of this thread.

Sure. I can do that while *waiting patiently* for Parrot to be
released. ;)

- Ken



Re: An overview of the Parrot interpreter

2001-09-06 Thread Simon Cozens

On Thu, Sep 06, 2001 at 10:46:56AM -0400, Ken Fox wrote:
> Sure. I can do that while *waiting patiently* for Parrot to be
> released. ;)

Don't tell Nat I said this, but we're hoping for around the
beginning of next week.

Simon



Re: Should MY:: be a real symbol table?

2001-09-06 Thread Dan Sugalski

At 10:44 AM 9/6/2001 +0200, Bart Lateur wrote:
>On Mon, 03 Sep 2001 19:30:33 -0400, Dan Sugalski wrote:
>
> >The less real question, "Should pads be hashes or arrays", can be answered
> >by "whichever is ultimately cheaper". My bet is we'll probably keep the
> >array structure with embedded names, and do a linear search for those rare
> >times you're actually looking by name.
>
>Perhaps a lookup hash for the names, containing the offsets?

Considered that, but I don't know of any hash structure that's modifiable 
that doesn't have absolute addresses to the current incarnation of the 
structure in it. If we need to go this route, I expect I have some 
literature diving to do.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Should MY:: be a real symbol table?

2001-09-06 Thread Dan Sugalski

At 10:41 AM 9/6/2001 +0200, Bart Lateur wrote:
>First of all, currently, you can localize an element from a hash or an
>array, even if the variable is lexically scoped.

This doesn't actually have anything to do with lexicals, globals, or pads. 
And the reason the keyword local works on elements of lexical aggregates is 
actually a bug.

Perl has a generic "save stack". The interpreter can push a structure 
containing an address, type, and value on the stack. It can also push 
'marks' on the stack.

When control flow hits the start of a scope, perl pushes a mark. Then, 
every time you do something that requires remembering, perl pushes one of 
these memory structures on the stack. When perl leaves a scope, it then 
walks up the save stack and finds all the memory structures on it up to the 
topmost mark. It removes those structures and, more importantly, puts all 
the values in those structures back at the addresses in the structures.

So what's happening when you localise an array element is perl is pushing a 
structure on the stack that looks like:

  where:    address of $array[18]
  what:     SV
  contents: SV * pointing to the scalar containing "12"

then sticks an empty new sv in $array[18]. When you leave the scope, perl 
sees that structure and puts the pointer back where it got it. Perl knows 
there are SVs involved so the restore handles refcounts right. (You can 
also put plain addresses, IVs and NVs on the stack)
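
Here's a toy Perl model of that save stack (all names invented; the real thing
is C structures inside the interpreter), just to show the mark/save/restore
dance:

    my @save_stack;

    sub enter_scope { push @save_stack, 'MARK' }

    sub save_aelem {                       # roughly what 'local $array[18]' pushes
        my ($aref, $idx) = @_;
        push @save_stack, [ $aref, $idx, $aref->[$idx] ];   # where/what/contents
        $aref->[$idx] = undef;             # stick an empty value in its place
    }

    sub leave_scope {
        while ( (my $rec = pop @save_stack) ne 'MARK' ) {
            my ($aref, $idx, $old) = @$rec;
            $aref->[$idx] = $old;          # put the value back where it came from
        }
    }

    my @array = (0) x 20;
    $array[18] = 12;
    enter_scope();
    save_aelem(\@array, 18);
    $array[18] = 'temporary';
    leave_scope();
    print "$array[18]\n";                  # 12 again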

You can, in XS code, actually use this to dynamically and temporarily 
modify pieces of a C structure. That's kinda nasty, though. (But clever...)

The reason you can localize an element in a lexical array or hash is 
because perl doesn't check the lexicalness of the container when you 
localize an element. It should, but doesn't.

>Unfortunately, this doesn't work with plain lexical scalars. I wonder
>why. Really.

Well, now you know! :)

>Typeglobs are on the verge of extinction. Perhaps the current concept of
>symbol tables may well follow the same route?

It probably will. The big issue is that lexicals can be resolved to pad 
entry numbers at compile time, so they're all accessed like "get variable 3 
from pad 4". The global table isn't all known at compile time so we can't 
know that, so we have to look up by name. Given the extra certainty with 
lexicals, it makes sense to choose a different data structure, one we can 
access more quickly because we have more certainty.
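
A toy Perl model of the two lookups (entirely illustrative; just to show why
the extra certainty suggests a different data structure):

    # Lexicals: the compiler has already turned names into (pad, slot) numbers.
    my @pads = ( [ 'val-a', 'val-b', 'val-c' ] );      # pad 0, slots 0..2
    sub get_lexical { my ($pad, $slot) = @_; return $pads[$pad][$slot] }

    # Globals: new names can appear at run time, so lookup stays by name.
    my %globals = ( '$answer' => 42 );
    sub get_global { my ($name) = @_; return $globals{$name} }

    print get_lexical(0, 2), "\n";        # "get variable 2 from pad 0" -> val-c
    print get_global('$answer'), "\n";    # 42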


Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Should MY:: be a real symbol table?

2001-09-06 Thread Bart Lateur

On Mon, 03 Sep 2001 19:30:33 -0400, Dan Sugalski wrote:

>The less real question, "Should pads be hashes or arrays", can be answered 
>by "whichever is ultimately cheaper". My bet is we'll probably keep the 
>array structure with embedded names, and do a linear search for those rare 
>times you're actually looking by name.

Perhaps a lookup hash for the names, containing the offsets?

-- 
Bart.



Re: pads and lexicals

2001-09-06 Thread Dan Sugalski

At 10:45 AM 9/6/2001 -0400, Ken Fox wrote:
>Dave Mitchell wrote:
> > So how does that all work then? What does the parrot assembler for
> >
>   foo($x+1, $x+2, ..., $x+65)
>
>The arg list will be on the stack. Parrot just allocates new PMCs and
>pushes the PMC on the stack.

No, it won't actually. It'll be in a list. I'll get to that in a minute in 
the next message.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: CLOS multiple dispatch

2001-09-06 Thread Dan Sugalski

At 05:08 PM 9/5/2001 -0500, David L. Nicol wrote:
>what if:
>
> *>  there is a way to say that no new classes will be introduced

Then pigs will probably be dive-bombing the Concorde, and demons ice 
skating. This is the language Damian programs in, after all... :)

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: pads and lexicals

2001-09-06 Thread Dan Sugalski

At 03:21 PM 9/6/2001 +0100, Dave Mitchell wrote:
>Simon Cozens <[EMAIL PROTECTED]> wrote:
> > On Thu, Sep 06, 2001 at 02:54:29PM +0100, Dave Mitchell wrote:
> > > So I guess I'm asking whether we're abandoning the Perl 5 concept
> > > of a pad full of tmp targets, each hardcoded as the target for individual
> > > ops to store their tmp results in.
> >
> > Not entirely; the last thing we want to be doing is creating PMCs at
> > runtime.
>
>Sorry, I thought you were suggesting that at compile time a fixed number of
>tmp PMCs would be created, and slots 1-N of the PMC registers would be set
>permanently to point to them. Which is why I was concerned about the
>possibility of N+1 tmps being needed.

What we're going to do is have a get_temp opcode to fetch temporary PMCs. 
Where do they come from? Leave a plate of milk and cookies on your back 
porch and the Temp PMC Gnomes will bring them. :)

Seriously, we'll have a bunch of them handy for fetching as need be. The 
interpreter will manage it for us. (And we may go with the 'preallocated 
list of temps generated at scope entry' method that perl 5 uses)

> > > If a certain number of PMC regs are 'hardcoded' with pointers to
> > > PMC tmps, then we need to address register overflow, eg an expression 
> like
> > >
> > > foo($x+1, $x+2, ..., $x+65);
> >
> > That's slightly different, though, because that'll all be passed in as
> > a list.
>
>So how does that all work then? What does the parrot assembler for
>
> foo($x+1, $x+2, ..., $x+65)
>
>look like roughly - and where do the 65 tmp PMCs come from? In Perl 5 they're
>the 65 pad tmps associated with the add ops.

 new P0, list # New list in P0
 get_lex P1, $x  # Find $x
 get_type I0, P1 # Get $x's type
 set_i I1, 1 # Set our loop var
$10:   new P2, I0   # Get a temp of the same type as $x
 add P2, P1, I1  # Add counter to $x, store result in P2
 push P0, P2 # Push it into the list
 eq I1, 65, $20, $10 # If loop counter's 65 goto $20, else $10
$20:   call foo # Call the sub


At least that's one way to do it. The compiler may generate different code.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




RE: pads and lexicals

2001-09-06 Thread Garrett Goebel

From: Dave Mitchell [mailto:[EMAIL PROTECTED]]
Subject: pads and lexicals
> 
> Dave "confused as always" M.
> 

I just wanted to say that I'm really enjoying this pad/lexical thread.

There's a lot of info passing back and forth that I don't believe is clearly
documented in perlguts, etc. I expect when this thread runs its course,
you'll be a whole lot less confused... and I may have graduated from a
general state of ignorance to confusion.

It would be very nice if someone were to write something up in pod comparing
and contrasting pads/lexicals in the context of Perl5 vs. Perl6.



RE: pads and lexicals

2001-09-06 Thread Dan Sugalski

At 10:11 AM 9/6/2001 -0500, Garrett Goebel wrote:
>I just wanted to say that I'm really enjoying this pad/lexical thread.
>
>There's a lot of info passing back and forth that I don't believe is clearly
>documented in perlguts, etc. I expect when this thread runs its course,
>you'll be a whole lot less confused... and I may have graduated from a
>general state of ignorance to confusion.

Cool! Next comes understanding, then madness. (Or is it the other way 
around? I can never remember :) We'll make a core hacker out of you yet.

>It would be very nice if someone were to write something up in pod comparing
>and contrasting pads/lexicals in the context of Perl5 vs. Perl6.

Well, that'll be reasonably tough as we don't have anything running for 
perl 6. Yet. ;-)

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: An overview of the Parrot interpreter

2001-09-06 Thread Paolo Molaro

On 09/05/01 Nick Ing-Simmons wrote:
> >It's easier to generate code for a stack machine 
> 
> True, but it is easier to generate FAST code for a register machine.
> A stack machine forces a lot of book-keeping either run-time inc/dec of sp, 
> or alternatively compile-time what-is-offset-now stuff. The latter is a real 
> pain if you are trying to issue multiple instructions at once.

There is an issue that is at the roots of all these discussions:
should parrot execute also low-level opcodes or not?
We all know from the paper that instruction decoding/dispatch is
a bottleneck in interpreter's execution speed and that it's better to
execute few heavy opcodes than many smaller ones.
The current perl has high-level opcodes and when most of the time
is spent in the opcode implementation it's quite fast (the regex engine
for example). However perl5 falls short when you need to perform integer/fp 
operations and subroutine calls.
Now, the proposal for parrot is also to handle low-level opcodes.
This is all well and good and the proposed architecture to handle it
is a register machine. If the register machine is implemented in sw
any talk about issuing multiple instructions at once is moot, it's
not under our control but the compiler's. If anyone has any
evidence that coding a stack-based virtual machine or a register one
provides for better instructions scheduling in the dispatch code,
please step forward.

I believe that a stack-based machine will have roughly the same
performance when interpreted as a register-based machine, but
it easily allows taking a step further and JIT compiling the bytecode
to machine code. If we are going to execute low-level opcodes,
no matter what architecture you choose for the interpreter,
JIT code runs faster:-)

> is a pain.) Explicit stack ops are going to give them indigestion.
> The P-III+ model is that most things are "on the C stack" i.e. offsets
> from the few "base" registers. The hardware then "aliases" those offsets 
> into its real registers. I don't think Parrot's register files will give 
> it much trouble, but throwing away the right inc/dec-of-pointer ops that
> a stack machine implies will (there are obviously HW special cases for x86's

With the difference that the registers are malloc()ed while the eval
stack in a stack machine is in the actual cpu stack. A good compiler
will put the stack pointer in a register, anyway.

lupus

-- 
-
[EMAIL PROTECTED] debian/rules
[EMAIL PROTECTED] Monkeys do it better



Re: pads and lexicals

2001-09-06 Thread Dave Mitchell

Dan Sugalski <[EMAIL PROTECTED]> wrote:
> What we're going to do is have a get_temp opcode to fetch temporary PMCs. 
> Where do they come from? Leave a plate of milk and cookies on your back 
> porch and the Temp PMC Gnomes will bring them. :)

Ah, things are starting to make sense!

>  new P0, list# New list in P0
>  get_lex P1, $x  # Find $x
>  get_type I0, P1 # Get $x's type
>  set_i I1, 1 # Set our loop var
> $10:   new P2, I0   # Get a temp of the same type as $x
>  add P2, P1, I1  # Add counter to $x, store result in P2
>  push P0, P2 # Push it into the list
>  eq I1, 65, $20, $10 # If loop counter's 65 goto $20, else $10
> $20 call foo# Call the sub
> 

should "new P2, I0" be "get_temp P2, I0" given what you said on the
first line above?

Also, it would be nice to have

new P0, list, 65

to pre-extend the list, as the compiler knows in advance how many args
it's going to push. But I'm bikeshedding now :-)




Re: An overview of the Parrot interpreter

2001-09-06 Thread Paolo Molaro

On 09/05/01 Dan Sugalski wrote:
> >It's easier to generate code for a stack machine
> 
> So? Take a look at all the stack-based interpreters. I can name a bunch, 
> including perl. They're all slow. Some slower than others, and perl tends 
> to be the fastest of the bunch, but they're all slow.

Have a look at the shootout benchmarks. Yes, we all know that
benchmarks lie, but...
The original mono interpreter (that didn't implement all the semantics
required by IL code that slow down interpretation) ran about 4 times
faster than perl/python on benchmarks dominated by branches, function calls,
integer ops or fp ops.

> >That said, I haven't seen any evidence a register based machine is going to
> >be (significantly?) faster than a stack based one.
> >I'm genuinely interested in finding data about that.
> 
> At the moment a simple mix of ops takes around 26 cycles per opcode on an 
> Alpha EV6. (This is an even mix of branch, test, and integer addition 
> opcodes)  That's with everything sticking in cache, barring task switches. 
> It runs around 110 cycles/op on the reasonably antique machine I have at 
> home. (A 300MHz Celeron (the original, with no cache))

Subliminal message: post the code... :-)

> You're also putting far too much emphasis on registers in general. Most of 
> the work the interpreter will be doing will be elsewhere, either in the 
> opcode functions or in the variable vtable functions. The registers are 

That is true when executing high-level opcodes and a register or stack
machine doesn't make any difference for that. It's not true for
the low-level opcodes that parrot is supposed to handle according to the overview
posted by Simon.

> It'll be faster than perl for low-level stuff because we'll have the option 
> to not carry the overhead of full variables if we don't need it. It should 
> be faster than perl 5 with variables too, which will put us at the top of 
> the performance heap, possibly duking it out with Java. (Though I think 
> perl 5's faster than java now, but it's tough to get a good equivalence 
> there)

Rewriting perl will leave behind all the cruft that accumulated over the years,
so it should not be difficult for parrot to run faster;-)
Java is way faster than perl currently in many tasks: it will be difficult
to beat it starting from a dynamic language like perl; we'll all pay
the price to have a useful language like perl.
Most of us are here because we don't want to program in a strongly typed
language, more than because of perl's speed. Note also that while java is faster than
perl most of the time, this advantage is completely wasted when you realize
you need 20 megs of RAM to run hello world:-)

> >The only difference in the execution engine is that you need to update
> >the stack pointer. The problem is when you need to generate code
> >for the virtual machine.
> 
> Codegen for register architectures is a long-solved problem. We can reach 
> back 30 or more years for it if we want. (We don't, the old stuff has been 

... when starting from a suitable intermediate representation (i.e., not
machine code for another register machine).

> push 0
>  pushaddr i
> store
> foo:  | push i
>   | push 1000
>   | branchgt end
>   | push 7
>   | push i
>   | add
>   | pushaddr i
>   | store
>   | jump foo
> end:
> 
> 
> with the ops executed in the loop marked with pipes. The corresponding 
> parrot code would be:
> 
>getaddr P0, i
>store   P0, 0
>store   I0, 1000
> foo: | branchgt end, P0, I0
>  | add P0, P0, 7
>  | jump foo
[...]
> So, best case (combined store, branch with constant embedded) the stack 
> based scheme has 7 opcodes in the loop, while parrot has 3. With the likely 
> case (what you see above) it's 9.

Well, it would be up to us to design the bytecode, so I'd say it's likely 7.

> Do you really think the stack-based way will be faster?

The speed of the above loop depends a lot on the actual implementation
(the need to do a function call in the current parrot code would blow
away any advantage gained skipping stack updates, for example).
Also, this example doesn't take into account the convention to do
a function call: where do you put the arguments for a call? Will
you need to push/copy them?

As I said in another mail, I think the stack-based approach will not
necessarily be faster, but it will allow more optimizations down the path.
It may well be 20 % slower in some cases when interpreted, but if it allows 
me to easily JIT it and get 400 % faster, it's a non issue.

lupus

-- 
-
[EMAIL PROTECTED] debian/rules
[EMAIL PROTECTED] Monkeys do it better



Re: pads and lexicals

2001-09-06 Thread Dan Sugalski

At 05:00 PM 9/6/2001 +0100, Dave Mitchell wrote:
>Dan Sugalski <[EMAIL PROTECTED]> wrote:
> > What we're going to do is have a get_temp opcode to fetch temporary PMCs.
> > Where do they come from? Leave a plate of milk and cookies on your back
> > porch and the Temp PMC Gnomes will bring them. :)
>
>Ah, things are starting to make sense!

See? I told you it would! Trust the Good Folk, they make your program run. :)

> >  new P0, list# New list in P0
> >  get_lex P1, $x  # Find $x
> >  get_type I0, P1 # Get $x's type
> >  set_i I1, 1 # Set our loop var
> > $10:   new P2, I0   # Get a temp of the same type as $x
> >  add P2, P1, I1  # Add counter to $x, store result in P2
> >  push P0, P2 # Push it into the list
> >  eq I1, 65, $20, $10 # If loop counter's 65 goto $20, else $10
> > $20 call foo# Call the sub
> >
>
>should "new P2, I0" be "get_temp P2, I0" given what you said on the
>first line above?

Hmmm. Yes, in fact it should. That code will end up with a list of 65 
identical scalars in it. Bad Dan! No cookie for me.

>Also, it would be nice to have
>
> new P0, list, 65
>
>to pre-extend the list, as the compiler knows in advance how many args
>it's going to push. But I'm bikeshedding now :-)

That'll probably be:

get_temp P0
new P0, list

Dunno about preextending (which isn't bikeshedding, FWIW) since it doesn't 
quite do what you want with arrays, and lists will be an awful lot like arrays.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: An overview of the Parrot interpreter

2001-09-06 Thread Ken Fox

Paolo Molaro wrote:
> If anyone has any
> evidence that coding a stack-based virtual machine or a register one
> provides for better instructions scheduling in the dispatch code,
> please step forward.

I think we're going to have some evidence in a few weeks. I'm not
sure which side the evidence is going to support though... ;)

Eric Raymond posted on python-dev that he's doubtful, but has
a "wait and see" approach. Seems sensible. I think even Dan would
say he's just hopeful, not committed.

> I believe that a stack-based machine will have roughly the same
> performance when interpreted as a register-based machine, but
> it easily allows to take a step further and JIT compile the bytecode
> to machine code.

I don't really see much difference. You don't have to map the VM
registers to hardware registers to pick up a lot of speed. If the
JIT wanted to, it could have an optional peep-hole optimization
pass. That actually sounds easier to do with a register-based VM
than a stack-based one. (Of course the run-time environment is a
big wild-card here. We need a fast interface between the dispatcher
and the run-time. That's going to want registers too.)

> With the difference that the registers are malloc()ed while the eval
> stack in a stack machine is in the actual cpu stack.

Is there really a difference in memory access between the heap
and the stack? I've always thought a page is a page and it doesn't
matter where in memory the page is. I'm not a hardware guy though...

Allocating register pools on the heap (making sure that malloc()
is used sensibly) might be faster if you want your VM to handle
continuations and co-routines. Check out stackless Python for
a good example. I'm not sure if Appel was the first, but he has
written quite a bit about the advantages of allocating activation
records on the heap. (He also points out that a garbage collector
can make heap allocation as fast as stack allocation.)

- Ken



pads and lexicals

2001-09-06 Thread Dave Mitchell

I'm trying to get my head round the relationship between pad lexicals,
pad tmps, and registers (if any).

The PMC registers are just a way of allowing the address of a PMC to
be passed to an op, and possibly remembered for soonish reuse, right?

So presumably we still have the equivalent of a padsv op, except that now
it puts the address of a pad lexical in a nominated PMC register rather than
pushing it on the stack. Then as an optimisation, the compiler may remember
that the address is now in register 5 say, and can remove further padsv
ops for the same variable in subsequent steps?

I'm less clear about pad tmps. Will we still statically allocate tmp PMCs
to ops and store them in pad slots à la Perl 5? ie is

$a = $a + $a*$b

compiled to

getaddr P0, PADOFFSET($a)
getaddr P1, PADOFFSET($b)
getaddr P2, PADOFFSET(firsttmp)
mult P0, P1, P2 # tmp = a*b
add  P0, P2, P0 # a = tmp + a

where PADOFFSET(...) are compile-time constants.

Or are we going for some other mechanism?

NB for what it's worth, I really dislike Perl 5's tendency to have a
whole bunch of intermediate results left languishing in pad tmps until
the end of the program, eg
$a = ' ' x 100_000_000;
$a = ' ' x 100_000_000;
$a = ' ' x 100_000_000;
$a = ' ' x 100_000_000;
$a = ' ' x 100_000_000;
Leaves 500Mb of useless data sitting around in op_repeat targets.

Dave "confused as always" M.







Re: pads and lexicals

2001-09-06 Thread Simon Cozens

On Thu, Sep 06, 2001 at 12:13:11PM -0400, Dan Sugalski wrote:
> Hmmm. Yes, in fact it should. That code will end up with a list of 65 
> identical scalars in it. Bad Dan! No cookie for me.

Damn. I guess that means we have to write a compiler after all. I was
looking forward to having Dan assemble all my Perl 6 code for me.

Simon



Re: An overview of the Parrot interpreter

2001-09-06 Thread Paolo Molaro

On 09/05/01 Hong Zhang wrote:
> I think we need to get some initial performance characteristics of register
> machine vs stack machine before we go too far. There is not much points left
> debating in email list.

Unfortunately getting meaningful figures is quite hard, there are
so many things to take into account that we'd spend a lot of time
only in evaluating the differences (it might be an interesting thesis
for a CS student, though:-).

lupus

-- 
-
[EMAIL PROTECTED] debian/rules
[EMAIL PROTECTED] Monkeys do it better



Re: An overview of the Parrot interpreter

2001-09-06 Thread Dan Sugalski

At 06:12 PM 9/6/2001 +0200, Paolo Molaro wrote:
>On 09/05/01 Dan Sugalski wrote:
> > >It's easier to generate code for a stack machine
> >
> > So? Take a look at all the stack-based interpreters. I can name a bunch,
> > including perl. They're all slow. Some slower than others, and perl tends
> > to be the fastest of the bunch, but they're all slow.
>
>Have a look at the shootout benchmarks. Yes, we all know that
>benchmarks lie, but...
>The original mono interpreter (that didn't implement all the semantics
>required by IL code that slow down interpretation) ran about 4 times
>faster than perl/python on benchmarks dominated by branches, function calls,
>integer ops or fp ops.

Right, but mono's not an interpreter, unless I'm misunderstanding. It's a 
version of .NET, so it compiles its code before executing. And the IL it 
compiles is darned close to x86 assembly, so the conversion's close to trivial.

For this, it doesn't surprise me that Mono would wipe the floor with perl, 
since you don't have the interpreter loop and opcode dispatch overhead to 
deal with. Heck, if we *still* beat you with Mono compiling, I'd have to 
take the T over and give Miguel a hard time. :)

> > >That said, I haven't seen any evidence a register based machine is 
> going to
> > >be (significantly?) faster than a stack based one.
> > >I'm genuinely interested in finding data about that.
> >
> > At the moment a simple mix of ops takes around 26 cycles per opcode on an
> > Alpha EV6. (This is an even mix of branch, test, and integer addition
> > opcodes)  That's with everything sticking in cache, barring task switches.
> > It runs around 110 cycles/op on the reasonably antique machine I have at
> > home. (A 300MHz Celeron (the original, with no cache))
>
>Subliminal message: post the code... :-)

Anon CVS and surrounding tools (bug tracking system and such) being set up 
even as I type. (Though not by me, I don't have that many hands... :) 
Expect code to check out and build sometime early next week.

> > You're also putting far too much emphasis on registers in general. Most of
> > the work the interpreter will be doing will be elsewhere, either in the
> > opcode functions or in the variable vtable functions. The registers are
>
>That is true when executing high-level opcodes and a register or stack
>machine doesn't make any difference for that. It's not true for
>the low-level opcodes that parrot is supposed to handle according to the 
>overview
>posted by Simon.

Sure, but there'll be a mix of high and low level code. Yes, we're going to 
get hosed with low-level ops because of the interpreter loop overhead. No way 
around that as long as we're an interpreter. (FWIW, the benchmark I posted 
was all low-level ops, and I'm not really unhappy with a 26 cycle/op number 
because of that. I'd like it smaller, and I think we have some ways around 
it (we can cut out the function call overhead with sufficient cleverness), 
but I'm not currently unhappy) So just because we're going to be able to 
add integers doesn't mean we're not also going to be adding full-blown 
variables. Or executing a single map or grep opcode.

The low-level ops are the places where we'll win the most when going either 
the TIL or straight compile route, since the loop and function call 
overhead will be cut out entirely.

> > It'll be faster than perl for low-level stuff because we'll have the 
> option
> > to not carry the overhead of full variables if we don't need it. It should
> > be faster than perl 5 with variables too, which will put us at the top of
> > the performance heap, possibly duking it out with Java. (Though I think
> > perl 5's faster than java now, but it's tough to get a good equivalence
> > there)
>
>Rewriting perl will leave behind all the cruft that accumulated over the 
>years,
>so it should not be difficult for parrot to run faster;-)

Boy I hope so. (Try benchmarking perl 5.004_04 against 5.6.1. I did, the 
results were embarrassing)

>Java is way faster than perl currently in many tasks:

Only when JITed. In which case you're comparing apples to oranges. A better 
comparison is against Java without JIT. (Yes, I know, Java *has* a JIT, but 
for reasonable numbers at a technical level (and yes, I also realize that 
generally speaking most folks don't care about that--they just want to know 
which runs faster) you need to compare like things)

>it will be difficult
>to beat it starting from a dynamic langauge like perl, we'll all pay
>the price to have a useful language like perl.

Unfortunately (and you made reference to this in an older mail I haven't 
answered yet) dynamic languages don't lend themselves to on-the-fly 
compilation quite the way that static languages do. Heck, they don't tend 
to lend themselves to compilation (certainly not optimization, and don't 
get me started) period as much as static languages. That's OK, it just 
means our technical challenges are similar to but not the same as for 
Java/C/C++/C#/Whatever.


Re: pads and lexicals

2001-09-06 Thread Buddha Buck

At 10:45 AM 09-06-2001 -0400, Ken Fox wrote:
>Dave Mitchell wrote:
> > So how does that all work then? What does the parrot assembler for
> >
> >   foo($x+1, $x+2, ..., $x+65)
>
>The arg list will be on the stack. Parrot just allocates new PMCs and
>pushes the PMC on the stack.
>
>I assume it will look something like
>
>   new_pmc pmc_register[0]
>   add pmc_register[0], $x, 1
>   push pmc_register[0]
>
>   new_pmc pmc_register[0]
>   add pmc_register[0], $x, 2
>   push pmc_register[0]
>
>   ...
>
>   call foo, 65

Hmmm, I assumed it would be something like:

load $x, P0 ;; load $x into PMC register 0
new P2  ;; Create a new PMC in register 2
push p0,p2  ;; Make P2 be ($x)
add p0,#1,p1;; Add 1 to $x, store in PMC register 1
push p1,p2  ;; Make P2 be ($x,$x+1)
add p0,#2,p1;; Add 2 to $x, store in PMC register 1
push p1,p2  ;; Make P2 be ($x,$x+1,$x+2)
...
call foo,p2 ;; Call foo($x,$x+1,...,$x+65)

Although this would be premature optimization, since I see this idiom being 
used a lot, it may be useful to have some special-purpose ops to handle 
creating arg-lists, like a "new_array size,register" op, that would create 
a new PMC containing a pre-sized array (thus eliminating repeatedly growing 
the array with the push ops), or a "push5 destreg, reg1, reg2, reg3, reg4, 
reg5" op (and corresponding pushN ops for N=2 to 31) that push the 
specified registers (in order) onto the destreg.



>Hmm. It didn't occur to me that raw values might go on the call
>stack. Is the call stack going to store PMCs only? That would
>simplify things a lot.

If ops and functions should be able to be used interchangeably, I wouldn't 
expect any function arguments to be stored on the stack, but passed via 
registers (or lists referenced in registers).


>- Ken




Re: An overview of the Parrot interpreter

2001-09-06 Thread Dan Sugalski

(Firstly, I'd say trust Nick's expertise--he has spent a good-sized chunk 
of his career doing software simulations of CPUs, and knows whereof he 
speaks, both in terms of software running on hardware and software running 
on software)

At 05:33 PM 9/6/2001 +0200, Paolo Molaro wrote:
>I believe that a stack-based machine will have roughly the same
>performance when interpreted as a register-based machine, but
>it easily allows to take a step further and JIT compile the bytecode
>to machine code. If we are going to execute low-level opcodes,
>no matter what architecture you choose for the interpreter,
>JIT code runs faster:-)

On x86 machines. Maybe. I think you both underestimate

> > is a pain.) Explicit stack ops are going to give them indigestion.
> > The P-III+ model is that most things are "on the C stack" i.e. offsets
> > from the few "base" registers. The hardware then "aliases" those offsets
> > into its real registers. I don't think Parrot's register files will give
> > it much trouble, but throwing away the right inc/dec-of-pointer ops that
> > a stack machine implies will (there are obviously HW special cases for 
> x86's
>
>With the difference that the registers are malloc()ed while the eval
>stack in a stack machine is in the actual cpu stack.

Absolutely *not*. No way is the eval stack going to be the real CPU stack. 
That puts nasty artificial limits, or massive memory requirements, on the 
interpreter. It also makes GC a major pain in the neck, since it means 
walking the stack and trying to extract real variable usage info. Bletch. 
Even if perl 6 goes the stack route, we won't be using the system stack.

The registers will be from heap memory, but that's not a problem in and of 
itself. Cache lines are cache lines.

Anyway, I think the point's moot for now. Parrot's register based unless 
either performance is substandard and demonstrably because of the register 
architecture, or because I get bussed and my successor's more comfortable 
with stacks. (The first is certainly possible)

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: pads and lexicals

2001-09-06 Thread Ken Fox

Dan Sugalski wrote:
> > Dan Sugalski <[EMAIL PROTECTED]> wrote:
> > > Where do they come from? Leave a plate of milk and cookies on your back
> > > porch and the Temp PMC Gnomes will bring them. :)

> Bad Dan! No cookie for me.

You aren't fooling anybody anymore... You might just as well stop the
charade and write "Dan "The Temp PMC Gnome" Sugalski" in your sig. ;)

At least we know where temps *really* come from now...

- Ken



Re: pads and lexicals

2001-09-06 Thread Dan Sugalski

At 01:21 PM 9/6/2001 -0400, Ken Fox wrote:
>Dan Sugalski wrote:
> > > Dan Sugalski <[EMAIL PROTECTED]> wrote:
> > > > Where do they come from? Leave a plate of milk and cookies on your back
> > > > porch and the Temp PMC Gnomes will bring them. :)
>
> > Bad Dan! No cookie for me.
>
>You aren't fooling anybody anymore... You might just as well stop the
>charade and write "Dan "The Temp PMC Gnome" Sugalski" in your sig. ;)

Well, my e-mail address *is* in the .sidhe.org domain for a reason... :)

>At least we know where temps *really* come from now...

Yep. I keep a bucket of 'em in my bottom desk drawer. They're tasty with a 
little cheese sauce, too.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: An overview of the Parrot interpreter

2001-09-06 Thread Dan Sugalski

At 06:12 PM 9/6/2001 +0200, Paolo Molaro wrote:
>As I said in another mail, I think the stack-based approach will not
>be necessarily faster, but it will allow more optimizations down the path.
>It may well be 20 % slower in some cases when interpreted, but if it allows
>me to easily JIT it and get 400 % faster, it's a non issue.

Okay, I just did a test run, converting my sample program from interpreted 
to compiled. (Hand-conversion, unfortunately, to C that went through GCC)

Went from 2.72M ops/sec to the equivalent of 22.5M ops/sec. And with -O3 on 
it went to 120M ops/sec. The smaller number is more appropriate, since a 
JIT/TIL version of the code won't do the sort of aggressive optimization 
that GCC can do.

I'm not sure if I like those numbers (because they show we can speed things 
up with a translation to native code) or dislike them (because they show 
how much time the interpreter's burning). Still, they are numbers.

When I get the assembler to spit out C instead of bytecode I'll add it into 
the repository.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




RE: pads and lexicals

2001-09-06 Thread Brent Dax

Dave Mitchell:
# Simon Cozens <[EMAIL PROTECTED]> wrote:
# > On Thu, Sep 06, 2001 at 02:54:29PM +0100, Dave Mitchell wrote:
# > > So I guess I'm asking whether we're abandoning the Perl 5 concept
# > > of a pad full of tmp targets, each hardcoded as the
# target for individual
# > > ops to store their tmp results in.
# >
# > Not entirely; the last thing we want to be doing is creating PMCs at
# > runtime.
#
# Sorry, I thought you were suggesting that at compile time a
# fixed number of
# tmp PMCs would be created, and slots 1-N of the PMC registers
# would be set
# permanently to point to them. Which is why I was concerned about the
# possibility of N+1 tmps being needed.
#
# > > If a certain number of PMC regs are 'hardcoded' with pointers to
# > > PMC tmps, then we need to address register overflow, eg
# an expression like
# > >
# > > foo($x+1, $x+2, ..., $x+65);
# >
# > That's slightly different, though, because that'll all be
# passed in as
# > a list.
#
# So how does that all work then? What does the parrot assembler for
#
#   foo($x+1, $x+2, ..., $x+65)
#
# look like roughly - and where do the 65 tmp PMCs come from?
# In Perl 5 they're
# the 65 pad tmps associated with the add ops.

If foo is an unprototyped function (and thus takes a list in P0) we can
immediately push the values of those calculations on to the list,
something like (in a lame pseudo-assembler that doesn't use the right
names for instructions):

load $x, I1
load 1, I2
add I1, I2, I3
push P0, I3
load 2, I2
add I1, I2, I3
push P0, I3
(lather, rinse, repeat)

In the more general case, however (say, $x*1+$x*2+...$x*65) that's an
interesting question.  Could we just do some fun stuff with lists?  What
do real CPUs do?

--Brent Dax
[EMAIL PROTECTED]

"...and if the answers are inadequate, the pumpqueen will be overthrown
in a bloody coup by programmers flinging dead Java programs over the
walls with a trebuchet."




Re: An overview of the Parrot interpreter

2001-09-06 Thread Paolo Molaro

On 09/06/01 Dan Sugalski wrote:
> >The original mono interpreter (that didn't implement all the semantics
> >required by IL code that slow down interpretation) ran about 4 times
> >faster than perl/python on benchmarks dominated by branches, function 
> >calls,
> >integer ops or fp ops.
> 
> Right, but mono's not an interpreter, unless I'm misunderstanding. It's a 
> version of .NET, so it compiles its code before executing. And the IL it 
> compiles is darned close to x86 assembly, so the conversion's close to 
> trivial.

Nope, if we had written a runtime, library, compiler and JIT engine in two 
months we'd be all on vacation now ;-)
The figures are actually for a stack-based interpreter that executes IL opcodes,
no assembly whatsoever. And, no, IL is not close to x86 assembly:-)
I don't expect a new perl to run that fast, but there is a lot of room for
improvement.

> >Java is way faster than perl currently in many tasks:
> 
> Only when JITed. In which case you're comparing apples to oranges. A better 
> comparison is against Java without JIT. (Yes, I know, Java *has* a JIT, but 
> for reasonable numbers at a technical level (and yes, I also realize that 
> generally speaking most folks don't care about that--they just want to know 
> which runs faster) you need to compare like things)

It's not so much that java *has* a JIT, but that it *can* have it. My point is,
take it into consideration when designing parrot. There's no need to
code it right from the start, that would be wrong, but allow for it in the design.

> >it will be difficult
> >to beat it starting from a dynamic langauge like perl, we'll all pay
> >the price to have a useful language like perl.
> 
> Unfortunately (and you made reference to this in an older mail I haven't 
> answered yet) dynamic languages don't lend themselves to on-the-fly 
> compilation quite the way that static languages do. Heck, they don't tend 
> to lend themselves to compilation (certainly not optimization, and don't 
> get me started) period as much as static languages. That's OK, it just 
> means our technical challenges are similar to but not the same as for 
> Java/C/C++/C#/Whatever.

Yep, but for many things there is an overlap. As for the dynamic language
issue, I'd like the ActiveState people that worked on perl <-> .net
integration to share their knowledge on the issues involved.

> >The speed of the above loop depends a lot on the actual implementation
> >(the need to do a function call in the current parrot code whould blow
> >away any advantage gained skipping stack updates, for example).
> 
> A stack interpreter would still have the function calls. If we were 
> compiling to machine code, we'd skip the function calls for both.

Nope, take the hint: inline the code in a big switch and voila', no
function call ;-)

> numbers. (This is sounding familiar--at TPC Miguel tried to convince me 
> that .Net was the best back-end architecture to generate bytecode for) 

I know the ActiveState people did work in this area. Too bad their stuff
is not accessible on Linux (some msi file format stuff).
I don't know if .net is the best back-end arch, but it's certainly going
to be a common runtime to target since it's going to be fast and
support GC, reflection, etc. With time and input from the dynamic language
people, it may become a compelling platform to run perl/python on.

lupus

-- 
-
[EMAIL PROTECTED] debian/rules
[EMAIL PROTECTED] Monkeys do it better



Re: An overview of the Parrot interpreter

2001-09-06 Thread Paolo Molaro

On 09/06/01 Dan Sugalski wrote:
> Okay, I just did a test run, converting my sample program from interpreted 
> to compiled. (Hand-conversion, unfortunately, to C that went through GCC)
> 
> Went from 2.72M ops/sec to the equivalent of 22.5M ops/sec. And with -O3 on 
> it went to 120M ops/sec. The smaller number is more appropriate, since a 
> JIT/TIL version of the code won't do the sort of aggressive optimization 
> that GCC can do.
> 
> I'm not sure if I like those numbers (because they show we can speed things 
> up with a translation to native code) or dislike them (because they show 
> how much time the interpreter's burning). Still, they are numbers.

A 10x slowdown on that kind of code is normal for an interpreter
(where 10x can range from 5x to 20x, depending on the semantics).
But I think this is not a big issue: speed optimizations need
to be possible, there's no need to implement them right now.

lupus

-- 
-
[EMAIL PROTECTED] debian/rules
[EMAIL PROTECTED] Monkeys do it better



Re: pads and lexicals

2001-09-06 Thread Simon Cozens

On Thu, Sep 06, 2001 at 11:05:37AM +0100, Dave Mitchell wrote:
> I'm trying to get my head round the relationship between pad lexicals,
> pad tmps, and registers (if any).

It's exactly the same as the relationship between auto variables, C
temporaries and machine registers.

Simon



RE: pads and lexicals

2001-09-06 Thread Brent Dax

Dan Sugalski:
...
#  new P0, list# New list in P0
#  get_lex P1, $x  # Find $x
#  get_type I0, P1 # Get $x's type
#  set_i I1, 1 # Set our loop var
# $10:   new P2, I0   # Get a temp of the same type as $x
#  add P2, P1, I1  # Add counter to $x, store
# result in P2
#  push P0, P2 # Push it into the list
#  eq I1, 65, $20, $10 # If loop counter's 65 goto
# $20, else $10
# $20 call foo# Call the sub
...

Are you expecting the optimizer to be *that* powerful?  If so, I think
I'll stay with the execution engine... :^)

--Brent Dax
[EMAIL PROTECTED]

"...and if the answers are inadequate, the pumpqueen will be overthrown
in a bloody coup by programmers flinging dead Java programs over the
walls with a trebuchet."




Re: An overview of the Parrot interpreter

2001-09-06 Thread Dan Sugalski

At 09:11 PM 9/6/2001 +0200, Paolo Molaro wrote:
>On 09/06/01 Dan Sugalski wrote:
> > >The original mono interpreter (that didn't implement all the semantics
> > >required by IL code that slow down interpretation) ran about 4 times
> > >faster than perl/python on benchmarks dominated by branches, function
> > >calls,
> > >integer ops or fp ops.
> >
> > Right, but mono's not an interpreter, unless I'm misunderstanding. It's a
> > version of .NET, so it compiles its code before executing. And the IL it
> > compiles is darned close to x86 assembly, so the conversion's close to
> > trivial.
>
>Nope, if we had written a runtime, library, compiler and JIT engine in two
>months we'd be all on vacation now ;-)

Then I'm impressed. I expect you've done some things that I haven't yet. 
The implementation of the current interpreter is somewhat naive.

>no assembly whatsoever. And, no, IL is not close to x86 assembly:-)

I dunno about that. From reading the Microsoft docs on it, it doesn't look 
that far off. (OK, the x86 doesn't do objects, but the rest maps in pretty 
darned well)

Also, while I have numbers for Parrot, I do *not* have comparable numbers 
for Perl 5, since there isn't any equivalence there. By next week we'll 
have a basic interpreter you can build so we can see how it stacks up 
against Mono.

> > >Java is way faster than perl currently in many tasks:
> >
> > Only when JITed. In which case you're comparing apples to oranges. A 
> better
> > comparison is against Java without JIT. (Yes, I know, Java *has* a JIT, 
> but
> > for reasonable numbers at a technical level (and yes, I also realize that
> > generally speaking most folks don't care about that--they just want to 
> know
> > which runs faster) you need to compare like things)
>
>It's not so much that java *has* a JIT, but that it *can* have it. My 
>point is,
>take it into consideration when designing parrot. There's no need to
>code it right from the start, that would be wrong, but allow for it in the 
>design.

Ah. That's a long-done deal. (You missed it by about a year... ;) TIL 
capabilities are on the list of things to remember, as is a straight-to-C 
translator for bytecode. Compilation to java bytecode, .NET, and tie-ins to 
native compilers (GCC, GEM, whatever) are as well.

That's in for the design. We're not doing them to start, but they're there 
for the design.

> > >it will be difficult
> > >to beat it starting from a dynamic langauge like perl, we'll all pay
> > >the price to have a useful language like perl.
> >
> > Unfortunately (and you made reference to this in an older mail I haven't
> > answered yet) dynamic languages don't lend themselves to on-the-fly
> > compilation quite the way that static languages do. Heck, they don't tend
> > to lend themselves to compilation (certainly not optimization, and don't
> > get me started) period as much as static languages. That's OK, it just
> > means our technical challenges are similar to but not the same as for
> > Java/C/C++/C#/Whatever.
>
>Yep, but for many things there is an overlap. As for the dynamic language
>issue, I'd like the ActiveState people that worked on perl <-> .net
>integration to share their knowledge on the issues involved.

That one was easy. They embedded a perl interpreter into the .NET execution 
engine as foreign code and passed any perl to be executed straight to the 
perl interpreter.

> > >The speed of the above loop depends a lot on the actual implementation
> > >(the need to do a function call in the current parrot code whould blow
> > >away any advantage gained skipping stack updates, for example).
> >
> > A stack interpreter would still have the function calls. If we were
> > compiling to machine code, we'd skip the function calls for both.
>
>Nope, take the hint: inline the code in a big switch and voila', no
>function call ;-)

Not everywhere. We've run tests, and we'll run more later, but... given the 
various ways of dispatch--big switch, computed goto, and function 
calls--the right way is platform dependent. Different architectures like 
different methods of calling, depending on the chip design and compiler. 
We're actually going to test for the right way at configure time and build 
the core interpreter loop accordingly.

Also, we can't do away with some of the function calls, since we've 
designed in the capability to have lexically scoped opcode functions, which 
means we have to dispatch to a function for all but the base opcodes.

> > numbers. (This is sounding familiar--at TPC Miguel tried to convince me
> > that .Net was the best back-end architecture to generate bytecode for)
>
>I know the ActivState people did work on this area. Too bad their stuff
>is not accessible on Linux (some msi file format stuff).
>I don't know if .net is the best back-end arch, it's certanly going
>to be a common runtime to target since it's going to be fast and
>support GC, reflection etc. With time and the input from the dynamic language
>people may become a c

Re: An overview of the Parrot interpreter

2001-09-06 Thread Dan Sugalski

At 09:22 PM 9/6/2001 +0200, Paolo Molaro wrote:
>A 10x slowdown on that kind of code is normal for an interpreter
>(where 10x can range from 5x to 20x, depending on the semantics).

If we're in the normal range, then, I'm happy.

Well, until we get equivalent benchmarks for Mono, in which case I shall 
be  unhappy if we're slower. :)

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




RE: pads and lexicals

2001-09-06 Thread Dan Sugalski

At 12:34 PM 9/6/2001 -0700, Brent Dax wrote:
>Dan Sugalski:
>...
>#  new P0, list# New list in P0
>#  get_lex P1, $x  # Find $x
>#  get_type I0, P1 # Get $x's type
>#  set_i I1, 1 # Set our loop var
># $10:   new P2, I0   # Get a temp of the same type as $x
>#  add P2, P1, I1  # Add counter to $x, store
># result in P2
>#  push P0, P2 # Push it into the list
>#  eq I1, 65, $20, $10 # If loop counter's 65 goto
># $20, else $10
># $20 call foo# Call the sub
>...
>
>Are you expecting the optimizer to be *that* powerful?

Well, yeah. Why wouldn't it be? The code to be compiled isn't at all 
tricky, and neither is it at all likely to be unusual for what we see in 
perl. (Granted, I'd have written the function parameters to be 
"$x+1..$x+65" in which case we'd have just created an iterator instead of 
flattening things out) The translation was very straightforward, though I 
could see having 65 separate creations (if we weren't sure we really did go 
from 1 to 65) or creating a list from 1 to 65 at compile time and doing an 
add of $x to it in list context, or creating a list of 65 $x and the 65 
integers and adding the two lists together.

Which pattern the compiler took would probably be an interesting thing to 
consider.

>If so, I think I'll stay with the execution engine... :^)

Works--we can use all the execution people we can get our hands on. 
(Compiler people too, but I know that there are fewer compiler folks around)

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




RE: pads and lexicals

2001-09-06 Thread Dan Sugalski

At 12:04 PM 9/6/2001 -0700, Brent Dax wrote:
>If foo is an unprototyped function (and thus takes a list in P0) we can
>immediately push the values of those calculations on to the list,
>something like (in a lame pseudo-assembler that doesn't use the right
>names for instructions):

FWIW, it's:

op, dest, source, source, source

>In the more general case, however (say, $x*1+$x*2+...$x*65) that's an
>interesting question.  Could we just do some fun stuff with lists?  What
>do real CPUs do?

Real CPUs don't do lists. It's just one big addressable byte array...

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: What's up with %MY?

2001-09-06 Thread Ken Fox

Dan Sugalski wrote:
> At 02:05 PM 9/6/2001 -0400, Ken Fox wrote:
> >You wrote on perl6-internals:
> >
> >get_lex P1, $x  # Find $x
> >get_type I0, P1 # Get $x's type
> >
> >[ loop using P1 and I0 ]
> >
> >That code isn't safe! If %MY is changed at run-time, the
> >type and location of $x will change. You really need to put
> >the code for finding $x *inside the loop*.
> 
> Only if $x is active. (i.e. tied) In that case we need to do some other
> things as well. I was assuming the passive case, for which the code was
> valid since there wasn't any way for it to be changed.

Could you compile the following for us with the assumption that
g() does not change its caller?

  sub f {
my $sum = 0;
for (0..9) {
  $sum += g()
}
$sum
  }

Now what if g() is:

  sub g {
my $parent = caller().{MY};
my $zero = 0;
$parent{'$sum'} = \$zero;
1
  }

What if g() *appears* to be safe when perl compiles the loop, but
later on somebody replaces its definition with the scope-changing
one? Does perl go back and re-compile the loop?

The compiler could watch for uses of %MY, but I bet that most
modules will eventually use %MY to export symbols. Can the
compiler tell the difference between run-time and compile-time
usage of %MY?
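
For example (speculative syntax, following the caller().{MY} usage earlier
in this thread; the module and sub names are invented):

  # compile-time use: import() runs while the caller is being compiled
  package Lexical::Croak;                # hypothetical module
  sub import {
      my $m = caller().{MY};
      $m{'&croak'} = \&Carp::croak;      # export a *lexical* &croak
  }

  # run-time use: the same interface, called from ordinary code
  sub rebind_sum {
      my $m = caller().{MY};
      $m{'$sum'} = \my $zero;            # rewire the caller while it runs
  }

Both look identical to the compiler; only the moment they execute differs.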

> Now, granted, it might be such that a single "uses string eval" or "uses
> MY" in the program shuts down optimization the same way that $& kills RE
> performance in perl 5, but we are in the position of tracking that.

To quote Timone: "And you're okay with that?"

- Ken



Re: What's up with %MY?

2001-09-06 Thread Dan Sugalski

At 02:44 PM 9/6/2001 -0400, Ken Fox wrote:
>Could you compile the following for us with the assumption that
>g() does not change its' caller?

Maybe later. Pressed for time at the moment, sorry.

>What if g() *appears* to be safe when perl compiles the loop, but
>later on somebody replaces its' definition with the scope changing
>one? Does perl go back and re-compile the loop?

Maybe. We might also do any number of other things, including refetching 
every time.

On the other hand, if we put the address of the lexical's PMC into a 
register, it doesn't matter if someone messes with it, since they'll be 
messing with the same PMC, and thus every time we fetch its value we'll Do 
The Right Thing.
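
In Perl 5 terms, it's the difference between caching a reference to the
container and looking the name up each time -- just an analogy, not the
actual implementation:

  my $x = 1;
  my $cached = \$x;       # like holding the PMC's address in a register
  $x = 42;                # changes through the name are still visible
  print $$cached;         # 42

As long as nobody swaps out the container behind the name, the cached
address stays right.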

>The compiler could watch for uses of %MY, but I bet that most
>modules will eventually use %MY to export symbols. Can the
>compiler tell the difference between run-time and compile-time
>usage of %MY?

Sure. We scan the syntax tree after we're done with compilation, and if 
there aren't any MY accesses, we're OK. Mostly. do, require and string eval 
are still issues.

> > Now, granted, it might be such that a single "uses string eval" or "uses
> > MY" in the program shuts down optimization the same way that $& kills RE
> > performance in perl 5, but we are in the position of tracking that.
>
>To quote Timone: "And you're okay with that?"

Yes. "I shut off the optimizer and my code ran slow!" "Well, don't do that."

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: What's up with %MY?

2001-09-06 Thread Ken Fox

Dan Sugalski wrote:
> On the other hand, if we put the address of the lexical's PMC into a
> register, it doesn't matter if someone messes with it, since they'll be
> messing with the same PMC, and thus every time we fetch its value we'll Do
> The Right Thing.

Hmm. Shouldn't re-binding affect only the *variable* and not
the value bound to the variable? Maybe I misunderstand a PMC, but
if the PMC represents a value, then re-binding a lexical should
create a new PMC and bind it to the variable.

I think we have a language question... What should the following
print?

  my $x = 1;
  my $y = \$x;
  my $z = 2;
  %MY::{'$x'} = \$z;
  $z = 3;
  print "$x, $$y, $z\n"

a. "2, 1, 3"
b. "2, 2, 3"
c. "3, 1, 3"
d. "3, 3, 3"
e. exception: not enough Gnomes

I think I would expect behavior (c), but it's not obvious to me.

Anyways, it looks like you just reached the same conclusion I have: we
can't shadow a named variable in a non-PMC register. This might have
a surprising effect on the speed of

  foreach (1..10)

vs.

  foreach my $i (1..10)

- Ken



RE: What's up with %MY?

2001-09-06 Thread Garrett Goebel

From: Ken Fox [mailto:[EMAIL PROTECTED]]
> 
> I think we have a language question... What should the following
> print?
> 
>   my $x = 1;
>   my $y = \$x;
>   my $z = 2;
>   %MY::{'$x'} = \$z;
>   $z = 3;
>   print "$x, $$y, $z\n"
> 
> a. "2, 1, 3"
> b. "2, 2, 3"
> c. "3, 1, 3"
> d. "3, 3, 3"
> e. exception: not enough Gnomes
> 
> I think I would expect behavior (c), but it's not obvious to me.

I would have said (c) as well.

And if I can figure it out... it ain't that tricky.



what lexicals do?

2001-09-06 Thread Dave Mitchell

Here's a list of what any Perl 6 implementation of lexicals must be able to
cope with (barring additions from future Apocalypses). Can anyone think of 
anything else?

From Perl 5:

* multiple instances of the same variable name within different scopes
of the same sub

* The notion of introduction - a variable that has been defined but does
not yet mask an outer var, e.g.  my $x = 1; { my $x = $x+1; ... }

* an inner sub referring to a lexical in a lexically enclosing outer sub
- i.e. closures (see the snippet below)

* eval - i.e. delayed compilation of an inner sub with restored scope

* typed lexicals

* our


New in Perl 6:

* %MY:: dynamically changing the values and visibility of lexicals

* lexically scoped named subs - so caller(){MY::}{'&die'} = &mydie does
something useful.
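
To make the closure and eval items concrete (plain Perl 5 here), an
implementation has to get cases like these right:

  sub make_counter {
      my $n = 0;
      return sub { ++$n };        # closure: $n outlives make_counter()
  }
  my $c = make_counter();
  print $c->(), $c->();           # prints 12

  my $y = 10;
  eval 'print $y + 1';            # delayed compilation with restored scope:
                                  # the string eval still sees $y, prints 11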





Re: What's up with %MY?

2001-09-06 Thread Bryan C . Warnock

On Thursday 06 September 2001 08:53 am, Dave Mitchell wrote:
> But surely %MY:: allows you to access/manipulate variables that are in
> scope, not just variables are defined in the current scope, ie
>
> my $x = 100;
> {
> print $MY::{'$x'};
> }
>
> I would expect that to print 100, not 'undef'. Are your expectations
> different?

Yes.  I would expect that to print 'undef'.  '$x' doesn't exist as a key in 
%MY::

>
> I think any further discussion hinges on that.

Yes.  My expectations are different. My expectations are exactly like my 
previous PATH example.  

my $x = 100;
{
$MY::{'$x'} = 200;   # Equivalent to 'my $x = 200'
print $x;
}
print $x;

That should print 200, and 100, should it not?
You are creating a lexical in the current scope, and assigning it the value 
of 200.  You are not finding a currently existing $x and assigning it the 
value of 200, resulting in 200 / 200.  

But let's be a little more pragmatic about it, shall we?  Look beyond the 
fire and brimstone for a moment. As Dan said, we can already screw up your 
entire world.  So other than a couple clever hacks from Damian, how will 
they be used?

Generically speaking, modules aren't going to be running amok and making a 
mess of your current lexical scope - they'll be introducing, possibly 
repointing, and then possibly deleting specific symbols out of that scope's 
symbol table.  Very precise actions - not random reassignment of values from 
hither and yon.  Furthermore, unlike the value determination of a variable, 
which meanders through the various scopes looking for the most applicable 
target, %MY:: table manipulation is a singular entity, and needs to be 
treated as such.  You certainly don't want to be targeting random scopes 
'n' levels up.  You know exactly which level you need control over - 99% of 
the time, the immediate parent - and that is where any change should be 
limited to. 
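
For instance, the well-behaved pattern is a pragma that introduces a symbol
into its immediate parent and later removes exactly that symbol (names and
syntax speculative, reusing the caller().{MY} notation from this thread):

  package assertions;                    # hypothetical pragma
  sub import {
      my $m = caller().{MY};             # exactly one level up
      $m{'&assert'} = sub { die "assertion failed" unless $_[0] };
  }
  sub unimport {
      my $m = caller().{MY};
      delete $m{'&assert'};              # remove only what we added
  }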

Believe it or not, this feature is designed to reduce action at a distance - 
why would we want to create even more?

my $x = 100;
{
use some_pragma; # Introduces some $x
foo($x);
bar($x);
}
# The original pragma's scope has ended... why should we be using the
# same $x?  We shouldn't.  The $x was created in the inner scope, and
# we're back to ours

%MY:: accesses the pad, not the variable.

-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: What's up with %MY?

2001-09-06 Thread Bryan C . Warnock

On Thursday 06 September 2001 05:52 pm, Ken Fox wrote:
> I think we have a language question... What should the following
> print?
>
>   my $x = 1;
>   my $y = \$x;
>   my $z = 2;
>   %MY::{'$x'} = \$z;
>   $z = 3;
>   print "$x, $$y, $z\n"
>
> a. "2, 1, 3"
> b. "2, 2, 3"
> c. "3, 1, 3"
> d. "3, 3, 3"
> e. exception: not enough Gnomes
>
> I think I would expect behavior (c), but it's not obvious to me.

SCALAR(addr),SCALAR(addr), 3

$$x,$$$y,$z = "3,3,3"

My $x container contains 1.  ($x = 1)
My $y container contains a ref to the $x container.  ($x = 1, $y = \$x)
My $z container contains 2.  ($x = 1, $y = \$x, $z = 2)
My $x container now contains a ref to the $z container. 
   ($x = \$z, $y = \$x, $z = 2)
My $z container now contains 3.  
   ($x = \$z, $y = \$x, $z = 3, or $$x = 3, $$y = \$z, $z = 3, or 
   $$x = 3, $$$y = 3, $z = 3)


-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: What's up with %MY?

2001-09-06 Thread Bryan C . Warnock

On Thursday 06 September 2001 06:01 pm, Garrett Goebel wrote:
> From: Ken Fox [mailto:[EMAIL PROTECTED]]
>
> > I think we have a language question... What should the following
> > print?
> >
> >   my $x = 1;
> >   my $y = \$x;
> >   my $z = 2;
> >   %MY::{'$x'} = \$z;
> >   $z = 3;
> >   print "$x, $$y, $z\n"
> >
> > a. "2, 1, 3"
> > b. "2, 2, 3"
> > c. "3, 1, 3"
> > d. "3, 3, 3"
> > e. exception: not enough Gnomes
> >
> > I think I would expect behavior (c), but it's not obvious to me.
>
> I would have said (c) as well.
>
> And if I can figure it out... it ain't that tricky.

%MY:: ain't no different than %main::, except its contents are heavily 
restricted to the current scope level.  Whatever you used to be able to do 
with globals, you'll now be able to do with lexicals.  You just lose the 
globalness of it.
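
That is, the same kind of rebinding you can already do through the package
symbol table, just scoped (the Perl 6 line is speculative syntax):

  our $y = 2;
  *main::x = \$y;        # Perl 5: $main::x now aliases $y, program-wide
  print $main::x;        # 2

  # Perl 6 equivalent for a lexical (speculative):
  # %MY::{'$x'} = \$y;   # only the current lexical scope sees the change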

-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: What's up with %MY?

2001-09-06 Thread Damian Conway


Bryan thought:

   > >   my $x = 1;
   > >   my $y = \$x;
   > >   my $z = 2;
   > >   %MY::{'$x'} = \$z;
   > >   $z = 3;
   > >   print "$x, $$y, $z\n"
   > 
   > My $x container contains 1.  ($x = 1)
   > My $y container contains a ref to the $x container.  ($x = 1, $y = \$x)
   > My $z container contain 2.  ($x = 1, $y = \$x, $z = 2)
   > My $x container now contains a ref to the $z container. 
   >($x = \$z, $y = \$x, $z = 2)

Bzzzt! The line:

%MY::{'$x'} = \$z;

assigns a reference to $z to the *symbol table entry* for $x, not to $x itself.

"3, 1, 3" is the correct answer.

Damian



Re: What's up with %MY?

2001-09-06 Thread Bryan C . Warnock

On Thursday 06 September 2001 07:44 pm, Damian Conway wrote:
> Bzzzt! The line:
>
>   %MY::{'$x'} = \$z;
>
> assigns a reference to $z to the *symbol table entry* for $x, not to $x
> itself.

So you're saying that the symbol table entry contains a reference to the 
variable it represents?  Okay, I'll buy that for now.

-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: what lexicals do?

2001-09-06 Thread David L. Nicol

Dave Mitchell wrote:
> 
> Here's a list of what any Perl 6 implementation of lexicals must be able to
> cope with (barring additions from future apocalyses). Can anyone think of
> anything else?

I would like 

perl -le 'my $Q = 3; {local $Q = 4; print $Q}'

to print 4 instead of crashing in confusion.  In other words,
have a lexical shadow a temporary (that's what we decided to call
locals, right?) when the temporary is asking for a name that has
been associated with a lexical.

Doing this takes two pieces.  The first piece is package-justifying
any argument to C<local> before replacing symbols with scratchpad
lookup hooks.  The second is creating a lexical to hide the same-named
enclosing lexical and making it an alias to the new temporary.

perl -le'$Q=2;my$Q=3;print do{local$main::Q=4;($Q,$main::Q)},$Q,$main::Q'

Or did I miss the meeting where it was declared that temporaries
are just history.

It is not a big deal anyway; I can't think of a situation where this
would be useful.


-- 
   David Nicol 816.235.1187
Refuse to take new work - finish existing work - shut down.



Re: What's up with %MY?

2001-09-06 Thread Ken Fox

Dan Sugalski wrote:
> I think you're also overestimating the freakout factor.

Probably. I'm not really worried about surprising programmers
when they debug their code. Most of the time they've requested
the surprise and will at least have a tiny clue about what
happened.

I'm worried a little about building features with global effects.
Part of Perl 6 is elimination of action-at-a-distance, but now
we're building the swiss-army-knife-of-action-at-a-distance.

What worries me the most is that allowing %MY to change at run-time
slows down code that doesn't do it. Maybe we can figure out how
to reduce the impact, but that's time IMHO better spent making
existing code run faster.

You wrote on perl6-internals:

   get_lex P1, $x  # Find $x
   get_type I0, P1 # Get $x's type

   [ loop using P1 and I0 ]

That code isn't safe! If %MY is changed at run-time, the
type and location of $x will change. You really need to put
the code for finding $x *inside the loop*.

Maybe we can detect a few cases when it's safe to move
get_lex out of a loop, but if the loop calls any subs or
non-core ops we're stuck.

- Ken



Re: what lexicals do?

2001-09-06 Thread Ken Fox

Dave Mitchell wrote:
> Can anyone think of anything else?

You omitted the most important property of lexical variables:

  [From perlsub.pod]

  Unlike dynamic variables created by the C<local> operator, lexical
  variables declared with C<my> are totally hidden from the outside
  world, including any called subroutines.  This is true if it's the
  same subroutine called from itself or elsewhere--every call gets
  its own copy.
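
That property is what makes recursion work without any extra effort:

  sub fact {
      my $n = shift;          # this $n belongs to this call only
      return $n < 2 ? 1 : $n * fact($n - 1);
  }
  print fact(5);              # 120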

- Ken



Re: What's up with %MY?

2001-09-06 Thread Dan Sugalski

At 02:05 PM 9/6/2001 -0400, Ken Fox wrote:
>Dan Sugalski wrote:
[stuff I snipped]

>I'm worried a little about building features with global effects.
>Part of Perl 6 is elimination of action-at-a-distance, but now
>we're building the swiss-army-knife-of-action-at-a-distance.

I don't know how much of a stated design goal this is. Most of the globals 
that are getting eliminated are going away because of coarseness issues ($/, say)

>What worries me the most is that allowing %MY to change at run-time
>slows down code that doesn't do it. Maybe we can figure out how
>to reduce the impact, but that's time IMHO better spent making
>existing code run faster.

Maybe, but... with this sort of thing, if we know about it before we code, 
we're fine. It's one of the things worth spending some design time on. Even 
if we ultimately don't do it, I think it'll be time well spent. (There have 
been a number of features that have been discussed that ultimately were 
discarded, but thinking about them expanded the interpreter's design in 
pleasant ways)

>You wrote on perl6-internals:
>
>get_lex P1, $x  # Find $x
>get_type I0, P1 # Get $x's type
>
>[ loop using P1 and I0 ]
>
>That code isn't safe! If %MY is changed at run-time, the
>type and location of $x will change. You really need to put
>the code for finding $x *inside the loop*.

Only if $x is active. (i.e. tied) In that case we need to do some other 
things as well. I was assuming the passive case, for which the code was 
valid since there wasn't any way for it to be changed.

>Maybe we can detect a few cases when it's safe to move
>get_lex out of a loop, but if the loop calls any subs or
>non-core ops we're stuck.

Maybe. I think we're going to end up assuming ops have no side effects 
unless explicitly noted to have some, and those rules will be given to the 
compiler. As for subs, we do have to worry some, but we are in the nice 
position of being able to know if a sub does or doesn't change things 
globally. We're certainly not limited to keeping C's pathetic "just 
parameters" as the only metadata stored about functions. We can have "uses 
string eval", "uses MY", "OK but calls x, y, and Z", or whatever stored for 
each sub so we can have an idea of what alters things.
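
Purely as an illustration (all the names here are invented, not a real
Parrot structure), the compiler-side bookkeeping could be as simple as:

  my %sub_metadata = (
      'main::g' => {
          uses_string_eval => 0,
          uses_MY          => 1,            # g() pokes at its caller's pad
          calls            => ['main::h'],
      },
  );

  # the optimizer hoists a get_lex out of a loop only if every sub called
  # in the loop body has uses_MY == 0 and uses_string_eval == 0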

Now, granted, it might be such that a single "uses string eval" or "uses 
MY" in the program shuts down optimization the same way that $& kills RE 
performance in perl 5, but we are in the position of tracking that.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




RE: What's up with %MY?

2001-09-06 Thread Garrett Goebel

From: Ken Fox [mailto:[EMAIL PROTECTED]]
> Dan Sugalski wrote:
> >
> > I think you're also overestimating the freakout factor.
> 
> Probably. I'm not really worried about surprising programmers
> when they debug their code. Most of the time they've requested
> the surprise and will at least have a tiny clue about what
> happened.
> 
> I'm worried a little about building features with global effects.
> Part of Perl 6 is elimination of action-at-a-distance, but now
> we're building the swiss-army-knife-of-action-at-a-distance.

Would it be possible/desirable to have 'static' and 'dynamic' properties for
lexical scopes? Could we have static lexical scopes for things that can be
resolved before runtime, yet could be explicitly promoted to dynamic scopes
at runtime if needed?

Speaking from a solid position of ignorance, I must ask: does supporting one
exclude support for the other?

is static|dynamic {
  my $pop = 0;
  sub incr { ++$pop }
}



RE: pads and lexicals

2001-09-06 Thread Brent Dax

Dan Sugalski:
# At 12:04 PM 9/6/2001 -0700, Brent Dax wrote:
# >If foo is an unprototyped function (and thus takes a list in
# P0) we can
# >immediately push the values of those calculations on to the list,
# >something like (in a lame pseudo-assembler that doesn't use the right
# >names for instructions):
#
# FWIW, it's:
#
# op, dest, source, source, source

Yeah, I was just being too lazy to go open the assembler PDD and look
for the right instructions, so I missed the format too.  :^)

# >In the more general case, however (say, $x*1+$x*2+...$x*65) that's an
# >interesting question.  Could we just do some fun stuff with
# lists?  What
# >do real CPUs do?
#
# Real CPUs don't do lists. It's just one big addressable byte array...

Those were two separate questions.  :^)  First, I thought that we could
generalize the P0 case to all similar cases.  Then, I thought "Hey, this
has to happen on real CPUs all the time.  How do they handle it?"  See?
Two separate ideas.

--Brent Dax
[EMAIL PROTECTED]

"...and if the answers are inadequate, the pumpqueen will be overthrown
in a bloody coup by programmers flinging dead Java programs over the
walls with a trebuchet."




RE: pads and lexicals

2001-09-06 Thread Dan Sugalski

At 01:43 PM 9/6/2001 -0700, Brent Dax wrote:
>Dan Sugalski:
># At 12:04 PM 9/6/2001 -0700, Brent Dax wrote:
># >In the more general case, however (say, $x*1+$x*2+...$x*65) that's an
># >interesting question.  Could we just do some fun stuff with
># lists?  What
># >do real CPUs do?
>#
># Real CPUs don't do lists. It's just one big addressable byte array...
>
>Those were two separate questions.  :^)  First, I thought that we could
>generalize the P0 case to all similar cases.  Then, I thought "Hey, this
>has to happen on real CPUs all the time.  How do they handle it?"  See?
>Two separate ideas.

Ah. Well, on real CPUs, generally parameters go in registers with overflows 
on the stack. (Assuming you have registers, of course) Most of the time you 
won't see functions called with more than two or three parameters. Seven is 
wildly unusual.

As for generating the list, that's all compiler/programmer dependent. 
Generally data's static, so you'd load in the value from $x to a register, 
then do math and push the results on the stack or into registers, then make 
the call.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: An overview of the Parrot interpreter

2001-09-06 Thread Paolo Molaro

On 09/06/01 Dan Sugalski wrote:
> Then I'm impressed. I expect you've done some things that I haven't yet. 

The only optimizations that interpreter had, were computed goto and
allocating the eval stack with alloca() instead of malloc().
Of course, now it's slower, because I implemented the full semantics required
by IL code (the biggest slowdown came from having to consider
arguments and local vars of any arbitrary size; making the ALU opcodes
work for any data type only slowed it down by 10%); but the parrot
interpreter doesn't need to deal with that kind of stuff that slows
down interpretation big time. Still, it's about 2x faster than perl5
on the same benchmarks, though I haven't tried to optimize the new code, yet.

> Also, while I have numbers for Parrot, I do *not* have comparable numbers 
> for Perl 5, since there isn't any equivalence there. By next week we'll 
> have a basic interpreter you can build so we can see how it stacks up 
> against Mono.

See above; I expect it to be faster, at least at handling the low-level stuff,
since I hope you're not going to add int8, uint8, etc. type handling.

> >Yep, but for many things there is an overlap. As for the dynamic language
> >issue, I'd like the ActiveState people that worked on perl <-> .net
> >integration to share their knowledge on the issues involved.
> 
> That one was easy. They embedded a perl interpreter into the .NET execution 
> engine as foreign code and passed any perl to be executed straight to the 
> perl interpreter.

I think they also worked on outputting IL bytecode...

> >I know the ActivState people did work on this area. Too bad their stuff
> >is not accessible on Linux (some msi file format stuff).
> >I don't know if .net is the best back-end arch, it's certanly going
> >to be a common runtime to target since it's going to be fast and
> >support GC, reflection etc. With time and the input from the dynamic 
> >language
> >people may become a compelling platform to run perl/python on.
> 
> Doubt it. That'd require Microsoft's involvement, and I don't see that 
> happening. Heck, most of what makes dynamic languages really useful is 

The ECMA people are not (only) m$; there are people in the committee
interested in both other implementations and input on the specs.

> completely counter to what .NET (and java, for that matter) wants to do. 
> Including runtime compilation from source and runtime redefinition of 
> functions. (Not to mention things like per-object runtime changeable 
> multimethod dispatch, which just makes my head hurt)

Naah, with the reflection support you can create types and methods on the fly,
the rest can probably be done with a couple of ad-hoc opcodes.

lupus

-- 
-
[EMAIL PROTECTED] debian/rules
[EMAIL PROTECTED] Monkeys do it better



language agnosticism and internal naming

2001-09-06 Thread Benjamin Stuhl

I had a thought this morning on function/struct/global
prefixes for Parrot. If we really plan to also run
Python/Ruby/whatever on it, it does not look good for the
entire API to be prefixed with "perl_". We really (IMHO)
ought to pick something else so that we don't give people a
convenient target for FUD.

For lack of anything better, I propose "par_" for
functions. We might stll be able to get away with "PL_ "
for globals (Parrot Library?), but I doubt it.

Just something else to consider. (But hopefully a topic
that won't make Dan's brain hurt any more than it probably
does. :-)

-- BKS

__
Do You Yahoo!?
Get email alerts & NEW webcam video instant messaging with Yahoo! Messenger
http://im.yahoo.com



Re: What's up with %MY?

2001-09-06 Thread Ken Fox

Damian Conway wrote:
> Bzzzt! The line:
> 
> %MY::{'$x'} = \$z;
> 
> assigns a reference to $z to the *symbol table entry* for $x, not to $x itself.

So I should have said:

  %MY::{'$x'} = $z;

That's pretty magical stuff isn't it? Sorry I used the wrong syntax.
I'm just taking it from your "Perl 6 Overview" PDF slides. This stuff
changes *fast*, eh?

If I want to bind a sub, do I use:

  %MY::{'&die'} = &my_die;  # reference to sub my_die? (from slides)

or

  %MY::{'&die'} = my_die;   # magical symbol table assignment rule?

> "3, 1, 3" is the correct answer.

That's what I thought. Dan's not going to be happy. ;)

- Ken



Re: What's up with %MY?

2001-09-06 Thread Bryan C . Warnock

On Friday 07 September 2001 12:13 am, Ken Fox wrote:
> Damian Conway wrote:
> > Bzzzt! The line:
> >
> > %MY::{'$x'} = \$z;
> >
> > assigns a reference to $z to the *symbol table entry* for $x, not to $x
> > itself.
>
> So I should have said:
>
>   %MY::{'$x'} = $z;
>
> That's pretty magical stuff isn't it? Sorry I used the wrong syntax.
> I'm just taking it from your "Perl 6 Overview" PDF slides. This stuff
> changes *fast*, eh?

No, he was buzzing me.  I was still thinking in terms of pads, and missed 
the extra pointer.  You were correct...  Forget what you just 
unremembered


-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: What's up with %MY?

2001-09-06 Thread Ken Fox

"Bryan C. Warnock" wrote:
> Generically speaking, modules aren't going to be running amok and making a
> mess of your current lexical scope - they'll be introducing, possibily
> repointing, and then possibly deleting specific symbols

How much do you want to pay for this feature? 10% slower code? 50%
slower? Do you want the feature at any price?

I don't like run-time frobbing of the symbol table. Not even
precise tweaking. ;) I think it's in bad style and inconsistent with
the purpose of lexicals. *But* bad style isn't a good argument
and I wouldn't be pursuing this if it were just a style issue.

The trouble lies in running the code. Lexicals used to be known at
compile time. Now they can change practically anywhere. It's like
using C and having *everything* be volatile. Except worse because
you don't even know the address where something is going to be.

A simple solution would be to allow lexical scope editing at
compile time, but not run-time. Change a BEGIN block's caller() so
that it is the scope being compiled instead of main. This achieves
the majority of the benefits (lexical imports at compile time)
without any downside.
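
A compile-time-only injection might then look like this (speculative,
reusing the caller().{MY} notation from this thread, and assuming caller()
inside BEGIN names the scope being compiled):

  {
      BEGIN {
          my $m = caller().{MY};            # the block being compiled
          $m{'&trace'} = sub { warn @_ };   # inject a lexical &trace
      }
      trace("resolved at compile time");    # binding fixed before run time
  }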

There are two other things that are easy to add. If the
compiler knew in advance which lexicals might dynamically change,
it could generate better code and not slow everything down. A
trait called ":volatile" or something. IMHO this would also
show intent to the people reading the code that something funny
might happen to the variable. (Macros or compile-time injected
lexicals could declare the :volatile trait, so I would imagine
that some pretty interesting packages could still be written.)
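
Something like this, say (the trait name and spelling are purely
illustrative):

  my $sum is volatile;        # warn the compiler: this lexical may be
                              # rebound through %MY:: at run time
  for (0..9) {
      $sum += g();            # so re-fetch $sum's container each pass
  }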

The other thing is to permit attaching attributes to a
lexical scope. This allows undetected channels of communication
between callees. There are interesting things it could be
used for (carrying state between function calls in the same
scope, or simple thread-local storage). And it wouldn't impact
compiled code at all.

- Ken