Re: RFC 289 (v1) Generate module dependencies easily

2001-09-01 Thread Dave Storrs


How would this handle code and/or packages that are generated at run
time?  Or would that be another caveat?

Dave

On Fri, 31 Aug 2001, Steve Simmons wrote:

> > Perl6 should ship with a simple utility that shows all modules a program
> > uses, and all modules those modules use.
> 
> Presumably with the caveat that no usage list can be generated for any
> missing modules.
> 




Re: RFC 289 (v1) Generate module dependencies easily

2001-09-01 Thread Michael G Schwern

On Fri, Aug 31, 2001 at 04:08:18PM -0700, Dave Storrs wrote:
> How would this handle code and/or packages that are generated at run
> time?  Or would that be another caveat?

If a piece of code depends on a package it generates itself... doesn't
sound like much of a dependency.

Anyhow, RFC 289 (v2) already states:

Even if the dependency checker could not detect runtime dependencies,
it would still be damn useful.

which pretty much sums it up.
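For illustration, here is a minimal static sketch of the kind of tool the
RFC describes (my sketch, not the RFC's proposed utility).  By construction
it shares both caveats: it cannot descend into modules it cannot find, and
it never sees code that is generated or loaded at runtime.

    #!/usr/bin/perl -w
    # Minimal static dependency lister -- an illustrative sketch only.
    # It only sees literal use/require statements, so string evals and
    # runtime-generated packages are invisible to it.
    use strict;
    use File::Spec;

    my $start = shift @ARGV or die "usage: $0 program.pl\n";
    my %seen;
    scan($start);
    print "$_\n" for sort keys %seen;

    sub scan {
        my $file = shift;
        local $_;
        open my $fh, '<', $file or return;   # missing module: no sub-list
        while (<$fh>) {
            next unless /^\s*(?:use|require)\s+([A-Za-z_][\w:]*)/;
            my $module = $1;
            next if $seen{$module}++;
            (my $path = $module) =~ s{::}{/}g;
            for my $dir (@INC) {
                my $full = File::Spec->catfile($dir, "$path.pm");
                if (-e $full) { scan($full); last }   # follow its deps too
            }
        }
    }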


-- 

Michael G. Schwern   <[EMAIL PROTECTED]>http://www.pobox.com/~schwern/
Perl6 Quality Assurance <[EMAIL PROTECTED]>   Kwalitee Is Job One
viscosity dawns
creamy, juicy, filling paste
open wide fucker
-- imploded



Re: Multiple-dispatch on functions

2001-09-01 Thread Dan Sugalski

At 10:03 PM 8/30/2001 -0400, Michael G Schwern wrote:
>Thinking about what Zhang was saying about multiple-dispatch not being
>inherently OO.  I think he's sort of right.  Multiple-dispatch need
>not be confined to method lookups.

There is the potential for a pretty significant cost to this, since we'd 
need to evaluate the args at runtime for each call. (Possibly we could do 
some compile time optimization, but not in a lot of places, alas)

I think it'd be cool, but it won't be free at runtime.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




!< and !>

2001-09-01 Thread raptor

hi,
I was looking at Interbase SELECT syntax and saw these two handy shortcuts :

 = {= | < | > | <= | >= | !< | !> | <> | !=}

!<  and !>

Personally I never liked the if (! ...) construct much, so even though
getting used to "unless" is harder for a non-English speaker, I think it is
much cleaner. Particularly because if(!...) is harder to spot... but moving
the negation next to the comparison operator looks good. Of course, it is
not the same with !ne, !eq...
=
iVAN
[EMAIL PROTECTED]
=
PS. In my native language we don't have a word that maps well to
"unless" (if not) ...







Re: CLOS multiple dispatch

2001-09-01 Thread Dan Sugalski

At 04:09 PM 8/31/2001 -0500, Me wrote:
> > If the dispatcher is drop-in replacable, what does its
> > interface look like?
>
>I'm thinking this is either deep in mop territory, or a probably quite
>straightforward set of decisions about dispatch tables, depending
>on how you look at things.

It'll probably be something like "Here's the function name. Here's the 
parameters. Do The Right Thing." I don't think there's much need for 
cleverness on the part of the interface. The actual dispatch code could be 
nasty, but that's someone else's problem. :)
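For illustration, one minimal Perl-space shape such an interface could take
(a sketch with invented names; it says nothing about what the actual vtable
interface will look like): the dispatcher is just a code ref that receives
the invocant, the name, and the parameters, and a class can swap in its own.

    package Dispatch;
    use strict;

    our %dispatcher;              # per-class overrides, keyed by class name

    sub default_dispatch {
        my ($invocant, $method, @args) = @_;
        my $code = $invocant->can($method)
            or die "No method '$method' via ", ref($invocant) || $invocant;
        return $code->($invocant, @args);
    }

    sub call {                    # "Here's the name, here's the parameters..."
        my ($invocant, $method, @args) = @_;
        my $class = ref($invocant) || $invocant;
        my $d = $dispatcher{$class} || \&default_dispatch;
        return $d->($invocant, $method, @args);
    }

    1;

A class that wanted, say, automatic redispatch would install its own code
ref in %Dispatch::dispatcher; everything else keeps paying only for the
default path.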

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Will subroutine signatures apply to methods in Perl6

2001-09-01 Thread Ken Fox

Uri Guttman wrote:
[Re: use strict 'typing'; my $rex = new Dog; $rex.bark]
> then it should be a compile time error at the assignment to $rex
> and not later. you can't trace $rex at compile time to see what
> kind of object (if any) was assigned to it. so the illegal method
> call can't (easily) be detected at compile time. it has to be a
> runtime error.

I agree with this comment, but I think the approach has serious
usability problems. From my experience with using "const" in C++,
it looks like this pragma will be *very* difficult to use.
Perl code also tends to be highly generic and polymorphic.

Wouldn't it be better to handle strict typing as a warning in
the places where the type information isn't known? In the Dog
example, "my $rex = new Dog" would generate a warning, unless
Dog::new was typed.

Also, I think it would be excellent to have an "assumptions"
pragma to complement strict typing. If the Dog package does not
declare type information, I could write an assumption to quiet
the strict type warnings. If the assumption is false (maybe
the author of Dog adds type declarations someday), then an
"invalid assumption" error should occur at compile time.

- Ken



Re: Multiple-dispatch on functions

2001-09-01 Thread Michael G Schwern

On Sat, Sep 01, 2001 at 01:10:58PM -0400, Dan Sugalski wrote:
> At 10:03 PM 8/30/2001 -0400, Michael G Schwern wrote:
> >Thinking about what Zhang was saying about multiple-dispatch not being
> >inherently OO.  I think he's sort of right.  Multiple-dispatch need
> >not be confined to method lookups.
> 
> There is the potential for a pretty significant cost to this, since we'd 
> need to evaluate the args at runtime for each call. (Possibly we could do 
> some compile time optimization, but not in a lot of places, alas)

Hmmm... shouldn't be any worse than a multi-method call.  And it'll
only affect those functions with the 'multi' flag.


-- 

Michael G. Schwern   <[EMAIL PROTECTED]>http://www.pobox.com/~schwern/
Perl6 Quality Assurance <[EMAIL PROTECTED]>   Kwalitee Is Job One
"Let's face it," said bearded Rusty Simmons, opening a can after the
race.  "This is a good excuse to drink some beer."  At 10:30 in the
morning?  "Well, it's past noon in Dublin," said teammate Mike
[Joseph] Schwern.  "It's our duty."
-- "Sure, and It's a Great Day for Irish Runners" 
   Newsday, Sunday, March 20, 1988



Re: Multiple-dispatch on functions

2001-09-01 Thread Ken Fox

[EMAIL PROTECTED] wrote:
> Dan, I don't immediately see how per object/class dispatch
> control helps to make multimethods pluggable.

The way to approach this problem is to profile
Class::MultiMethods and figure out (a) where the hot spots
are and (b) what core support would help eliminate those
hot spots.

The one thing I'm curious about is whether different syntactic
conventions affect the dispatcher or whether this is all just
sugar for a single dispatch. Perl 5 uses several different
sub call syntaxes:

  $obj->method(...)
  method $obj ...;
  function($x, ...);

  $obj->Scope::method(...)
  Scope::method $obj ...;
  Scope::function($x, ...);

Does the C style dispatch differently from the C++ style?

I'd expect indirect object syntax to dispatch like C++
syntax since indirect object syntax is just sugar. But is
it possible to tell the difference between C style and
indirect object style for functions of arity 1?

IMHO it would be very nice if everything was just sugar
for "function($x, ...)". It would be harder to define
the nearest multi-method, but at least we wouldn't
have to worry about different dispatchers kicking in when
we didn't expect them to.
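Perl 5 already shows how thin that sugar is: aside from the inheritance
search, the arrow call and the fully qualified function call end up running
the same code (example mine).

    package Dog;
    sub new  { bless {}, shift }
    sub bark { my $self = shift; print "woof ($self)\n" }

    package main;
    my $rex = Dog->new;
    $rex->bark();       # method syntax: search Dog and its @ISA, then call
    Dog::bark($rex);    # function syntax: same sub, invocant passed by hand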

> Is there a general mop for dispatch?

Some pointers to meta-object protocol would be nice, if
only to establish common terminology. Do we all agree with
RFC 92's terminology and references? I only know a bit
about CLOS and I'm afraid that I'm going to confuse generic
MOP concepts with the CLOS implementation.

I have a few questions about MOPs.

Is there a difference between a meta-object system and
a meta-object protocol? They seem like data-structure
vs. algorithm views of the same thing.

Dispatch is a small part of a MOP. Is there a difference
between attribute (slot) lookup and method lookup? Is
dispatch trivial once method lookup is defined?

Is it a requirement for a MOP to be defined in the target
language, or can it exist outside the language? For
example, CLOS' MOP can be changed using Lisp. C++'s MOP,
if it has one, can't.

How much of the MOP do we expose to Perl programs?

- Ken



Re: Multiple-dispatch on functions

2001-09-01 Thread Dan Sugalski

At 03:06 PM 9/1/2001 -0400, Michael G Schwern wrote:
>On Sat, Sep 01, 2001 at 01:10:58PM -0400, Dan Sugalski wrote:
> > At 10:03 PM 8/30/2001 -0400, Michael G Schwern wrote:
> > >Thinking about what Zhang was saying about multiple-dispatch not being
> > >inherently OO.  I think he's sort of right.  Multiple-dispatch need
> > >not be confined to method lookups.
> >
> > There is the potential for a pretty significant cost to this, since we'd
> > need to evaluate the args at runtime for each call. (Possibly we could do
> > some compile time optimization, but not in a lot of places, alas)
>
>Hmmm... shouldn't be any worse than a multi-method call.  And it'll
>only affect those functions with the 'multi' flag.

Nope, the cost will be paid on all sub calls. We at least need to check on 
every sub call to see if there are multiple versions of the functions. (We 
can't tell at compile time if it's a single or multi-method sub call, since 
it can change at runtime) Granted, it's not a huge expense for 
non-multi-method calls, but it does still impose an overhead everywhere.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Multiple-dispatch on functions

2001-09-01 Thread Michael G Schwern

On Sat, Sep 01, 2001 at 03:12:17PM -0400, Dan Sugalski wrote:
> Nope, the cost will be paid on all sub calls. We at least need to check on 
> every sub call to see if there are multiple versions of the functions. (We 
> can't tell at compile time if it's a single or multi-method sub call, since 
> it can change at runtime) Granted, it's not a huge expense for 
> non-multi-method calls, but it does still impose an overhead everywhere.

Sounds like it could be solved with a function call cache similar to
the method call cache we have now.  Just blow it away if anything
touches that package's symbol table.
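For illustration, a rough user-space picture of that cache (the symbol-table
hook and the generation counter are assumptions here; in the real core they
would live in C, not Perl):

    my %call_cache;     # "Package::name" => { code => ..., gen => ... }
    my %generation;     # bumped by whatever watches each symbol table

    sub cached_call {
        my ($pkg, $name, @args) = @_;
        my $key = "${pkg}::${name}";
        my $gen = $generation{$pkg} || 0;
        my $hit = $call_cache{$key};

        if (!$hit || $hit->{gen} != $gen) {     # missing or stale: re-resolve
            my $code = $pkg->can($name) or die "no sub $key";
            # a real version would also decide single vs. multi dispatch here
            $hit = $call_cache{$key} = { code => $code, gen => $gen };
        }
        return $hit->{code}->(@args);
    }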


-- 

Michael G. Schwern   <[EMAIL PROTECTED]>http://www.pobox.com/~schwern/
Perl6 Quality Assurance <[EMAIL PROTECTED]>   Kwalitee Is Job One
That which stirs me, stirs everything.
-- Squonk Opera, "Spoon"



Re: !< and !>

2001-09-01 Thread Russ Allbery

raptor <[EMAIL PROTECTED]> writes:

> I was looking at Interbase SELECT syntax and saw these two handy
> shortcuts :

>  = {= | < | > | <= | >= | !< | !> | <> | !=}

> !<  and !>

How is !< different from >=?

-- 
Russ Allbery ([EMAIL PROTECTED]) 



Re: Multiple-dispatch on functions

2001-09-01 Thread Dan Sugalski

At 03:43 PM 9/1/2001 -0400, Michael G Schwern wrote:
>On Sat, Sep 01, 2001 at 03:12:17PM -0400, Dan Sugalski wrote:
> > Nope, the cost will be paid on all sub calls. We at least need to check on
> > every sub call to see if there are multiple versions of the functions. (We
> > can't tell at compile time if it's a single or multi-method sub call, 
> since
> > it can change at runtime) Granted, it's not a huge expense for
> > non-multi-method calls, but it does still impose an overhead everywhere.
>
>Sounds like it could be solved with a function call cache similar to
>the method call cache we have now.  Just blow it away if anything
>touches that package's symbol table.

Sure, if we do decide to do it we'll have to come up with some way to do it 
efficiently. There'll probably be a cost of some sort no matter what we do, 
though it might turn out to be paid by the core coders not the user. (Which 
is OK, but I'm trying to minimize those costs where I can too)

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




RE: !< and !>

2001-09-01 Thread Sterin, Ilya

Though it might prove convenient (just more syntax, another way to do it),
it's equivalent: !< is the same as >= and !> the same as <=, so it might be
too confusing.  Though I myself would think that since != and ne are
implemented, !< and !> would make sense to implement as well.

Ilya

> -Original Message-
> From: raptor [mailto:[EMAIL PROTECTED]]
> Sent: Saturday, September 01, 2001 11:18 AM
> To: [EMAIL PROTECTED]
> Subject: !< and !>
>
>
> hi,
> I was looking at Interbase SELECT syntax and saw these two handy
> shortcuts :
>
>  = {= | < | > | <= | >= | !< | !> | <> | !=}
>
> !<  and !>
>
> Personaly i didn't liked  if (! ...) construct too much, so even that
> starting to use "unless" is harder for non-english speaker, I
> think is much
> cleaner and good. Particulary 'cause if(!...) is harder to spot... but
> moving it near to comparison operator looks good  of cource
> not the same
> with !ne , !eq ..
> =
> iVAN
> [EMAIL PROTECTED]
> =
> PS. In my native language we doesn't have a word that fit best to
> "unless"(if not) ...
>
>
>



RE: !< and !>

2001-09-01 Thread Sterin, Ilya



> -Original Message-
> From: Russ Allbery [mailto:[EMAIL PROTECTED]]
> Sent: Saturday, September 01, 2001 4:03 PM
> To: [EMAIL PROTECTED]
> Subject: Re: !< and !>
> 
> 
> raptor <[EMAIL PROTECTED]> writes:
> 
> > I was looking at Interbase SELECT syntax and saw these two handy
> > shortcuts :
> 
> >  = {= | < | > | <= | >= | !< | !> | <> | !=}
> 
> > !<  and !>
> 
> How is !< different from >=?

It's just more syntax just like foo != bar 
is the same as (foo > bar || foo < bar).

It might prove convenient to express the expression.

Ilya


> 
> -- 
> Russ Allbery ([EMAIL PROTECTED]) 
> 



Re: !< and !>

2001-09-01 Thread Russ Allbery

Sterin, Ilya <[EMAIL PROTECTED]> writes:
>> From: Russ Allbery [mailto:[EMAIL PROTECTED]]

>> How is !< different from >=?

> It's just more syntax just like foo != bar 
> is the same as (foo > bar || foo < bar).

> It might prove convenient to express the expression.

It's the same number of characters.  How can it be more convenient?

-- 
Russ Allbery ([EMAIL PROTECTED]) 



Re: !< and !>

2001-09-01 Thread Bryan C . Warnock

On Saturday 01 September 2001 05:40 pm, Russ Allbery wrote:
> Sterin, Ilya <[EMAIL PROTECTED]> writes:
> >> From: Russ Allbery [mailto:[EMAIL PROTECTED]]
> >>
> >> How is !< different from >=?
> >
> > It's just more syntax just like foo != bar
> > is the same as (foo > bar || foo < bar).
> >
> > It might prove convenient to express the expression.
>
> It's the same number of characters.  How can it be more convenient?

You only have to manipulate the shift key once!  ;-)

I'm waiting for someone to say that in tri-state logic, '!<' != '>='

-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: !< and !>

2001-09-01 Thread Andrew Wilson

On Sat, Sep 01, 2001 at 02:40:40PM -0700, Russ Allbery wrote:
> >> How is !< different from >=?
> 
> > It's just more syntax just like foo != bar 
> > is the same as (foo > bar || foo < bar).

Not if you're using Quantum::SuperPositions ;-)

> > It might prove convenient to express the expression.
> 
> It's the same number of characters.  How can it be more convenient?

It may help you to express yourself better in the way that you
understand the problem you're solving.  Every little bit helps.

cheers

Andrew



Re: Multiple-dispatch on functions

2001-09-01 Thread Damian Conway

Ken wrote:

   > The way to approach this problem is to profile
   > Class::MultiMethods and figure out (a) where the hot spots
   > are and (b) what core support would help eliminate those
   > hot spots.

But please don't do that until I release the next update of C::MM,
which will use a new dispatch mechanism that is pluggable.

   > The one thing I'm curious about is whether different syntactic
   > conventions affect the dispatcher or whether this is all just
   > sugar for a single dispatch. 

Multiple dispatch is certainly not (practically) implementable via single
dispatch. Oh, there *are* techniques, but they're subject to exponential
blow-out in the number of intermediate methods required to resolve all 
possible dispatches.

Syntactically, C::MM dispatches the call:

$obj1->multimethod($obj2, $obj3);

exactly the same as:

multimethod($obj1, $obj2, $obj3);

The only internal difference is that the first version has to do the
normal Perl single dispatch look-up before it discovers that
&multimethod is a multimethod (the second version doesn't), whilst the
second version requires that &multimethod be declared a multimethod in
the current scope (the first version doesn't).
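For illustration, a hand-rolled toy -- not Class::MultiMethods itself, whose
real interface differs -- that mirrors that description: both call forms
funnel into the same type-keyed lookup, and the method form merely pays one
ordinary single dispatch to reach it.

    package UFO;
    sub new { bless {}, shift }

    my %variant = (
        'Asteroid,Ship' => sub { "rock hits ship" },
        'Ship,Ship'     => sub { "ships collide"  },
    );

    sub collide {                      # the "multimethod", in the base class
        my $sig  = join ',', map { ref $_ } @_;
        my $body = $variant{$sig} or die "no variant of collide for ($sig)";
        return $body->(@_);
    }

    package Asteroid; our @ISA = ('UFO');
    package Ship;     our @ISA = ('UFO');

    package main;
    my ($rock, $ship) = (Asteroid->new, Ship->new);
    print $rock->collide($ship), "\n";      # single dispatch finds UFO::collide,
                                            # which then looks at both types
    print UFO::collide($rock, $ship), "\n"; # function form: straight to the lookup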

Damian



LangSpec: Statements and Blocks

2001-09-01 Thread Bryan C . Warnock

A couple weeks ago I alluded that I was working on some documentation.  
After a brief hiatus, I've picked it back up, and am ready to release
an entire half document.  Big whoopee.

Anyway, what I'm working on is more or less a Statement of Fact, from a Perl 
6 language perspective.  It is intended to be a seed for an actual language 
spec to be produced and maintained... someday.

Larry's Apocalypses give an overview on what is changing for Perl 6.  
Damian's Exegeses show the Apocalypse in motion.  [1]  This is the third 
step - to give a more comprehensive view of exactly how Perl 6 will look and 
feel.

Using information gathered from the A&E series, the Camel III, Vroman's 
Pocket Guide, the mailing lists, pointed questions, existing Perl (5.6.1) 
behavior, and, when all else fails, my own judgement, I'm attempting to 
document in excruciatingly mundane detail [2] a complete picture of Perl 6 - 
the language, and how it interacts with the internals - and an accurate 
target for the Perl 6 language, so that coders know exactly what they are 
coding, and testers know exactly what to test.

A secondary goal is to assist Larry and company in the language design.
A lot of information was presented, and there are some very good ideas.  But 
it is a very big picture, and perhaps a complete, distilled view may show 
some areas that may need review, rethinking, or removal.

I've got two currently in the works.  The first is on "Statements and 
Blocks", which is included below.  The second is on operators.
I'm not particularly happy with the format, but I haven't found a decent one
yet.  One of the first things I wish to add is references to the actual
source of the information I base each decision on.  (Apo 2, p5p, 5.6.1, etc.)

Lastly, it's a living document.  The language is in flux, and this document
will attempt to shadow it as closely as possible.

Suggestions, corrections, and, of course, additions, are more than welcome.
As is, of course, friendly debate... :-)

[1] Apologies to Dave Barry, but wouldn't "Apocalypse in Motion" be a good 
name for a band?  Perhaps at the next Perl Jam
[2] ie, no humorous footnotes. [3]
[3] Not even this one.

--

Perl 6 Reference - Statements and Blocks 
(0.1/2001-09-01)

Syntax Overview

Keywords
continue, do, else, elsif, for, foreach, given, goto, grep, if, last,
map, next, redo, sort, sub, unless, until, when, while 

Basic Constructs

 1. [ LABEL: ] expr;
 2. [ LABEL: ] { block } [ continue { block } ]
 3. << grep | map >> { block } list   # Note 1
 4. sort [ { block } ] list   # Note 1
 5. do { block }   # Note 1


Conditional Statement Modifiers

 6. [ LABEL: ] expr if expr;
 7. [ LABEL: ] expr until expr;


Looping Statement Modifiers

 8. [ LABEL: ] expr while expr;
 9. [ LABEL: ] do { block } while expr;   # Note 2
10. [ LABEL: ] expr until expr;
11. [ LABEL: ] do { block } until expr;   # Note 3


Iterative Statement Modifiers

12. [ LABEL: ] expr for[each] list;   # Note 4


Conditional Block Constructs

13. [ LABEL: ] if ( expr ) { block } 
   [ [ elsif  ( expr ) { block } ] ... ]
   [ else  { block } ]
14. [ LABEL: ] until ( expr ) { block }
   [ [ elsif  ( expr ) { block } ] ... ]
   [ else  { block } ]
15. [ LABEL: ] given  ( expr ) { block }
16. [ LABEL: ] when expr : { block }   # Note 5

Looping Block Constructs

17. [ LABEL: ] while ( expr ) { block } [ continue { block } ]
18. [ LABEL: ] until ( expr ) { block } [ continue { block } ]
19. [ LABEL: ] for[each] ( expr; expr; expr )  # Note 4
 { block }


Iterative Block Constructs

20. [ LABEL: ] for[each] [ scalar ] ( list ) { block } # Note 4

Subroutine Code Blocks # Note 6

21. sub identifier [ ( prototype ) ] [ :properties ] { block }
22. sub [ ( prototype ) ] { block }   # Note 7


Definitions

An expression (expr) consists of one or more terms, operators, and
expressions.

A list consists of zero or more expressions.  List members may either be
an explicit expression, separated via a comma (','), or may be interpolated 
from two expressions via either of the two range operators
('..' and '...').  A list of zero elements must be delimited by
parentheses.

A statement consists of zero or more expressions, followed by an optional
modifier and its expression, and either a statement terminator (';') or a
block closure ('}' or EOF).

A block consists of zero or more blocks and statements. A file is
considered a block, delimited by the file boundaries.  Semantically, I
will define a block only in terms of its effect on scoping.  (Blocks are
sometimes referenced by their interaction with flow control.  However, this
definition isn't consistent, and I will avoid it.)


Flow Control Expressions

A. goto

RE: Multiple-dispatch on functions

2001-09-01 Thread Brent Dax

# -Original Message-
# From: Ken Fox [mailto:[EMAIL PROTECTED]]
# Sent: Saturday, September 01, 2001 9:44 AM
# To: Me
# Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; Michael G Schwern; Dan
# Sugalski
# Subject: Re: Multiple-dispatch on functions
...
# The one thing I'm curious about is whether different syntactic
# conventions affect the dispatcher or whether this is all just
# sugar for a single dispatch. Perl 5 uses several different
# sub call syntaxes:
#
#   $obj->method(...)
#   method $obj ...;
#   function($x, ...);
#
#   $obj->Scope::method(...)
#   Scope::method $obj ...;
#   Scope::function($x, ...);
#
# Does the C style dispatch differently from the C++ style?
#
# I'd expect indirect object syntax to dispatch like C++
# syntax since indirect object syntax is just sugar. But is
# it possible to tell the difference between C style and
# indirect object style for functions of arity 1?
#
# IMHO it would be very nice if everything was just sugar
# for "function($x, ...)". It would be harder to define
# the nearest multi-method, but at least we wouldn't
# have to worry about different dispatchers kicking in when
# we didn't expect them to.

I think what we have to do is separate dispatch into two steps.  The
first should identify the name of the function being called and run
through all the packages it might be in, passing the name and the
parameters to the second step.  The second step should see if a function
by that name exists in the package passed in.  The logic goes something
like this for a simple function call:

sub first_step_normal {
    my ($function, @params) = @_;
    my ($package, $name) = $function =~ /^(.*)::(\w+)$/;
    $package ||= "main";
    $name ||= $function;

    if ($address = second_step($package, $name, @params)) {
        return &$address(@params);
    } elsif ($address = second_step($package, "AUTOLOAD", @params)) {
        $AUTOLOAD = $name;
        return &$address(@params);
    }
    else {
        die "subroutine $name not found in package $package";
    }
}

For a method call, it's more complicated:

sub first_step_method {
    my ($function, @params) = @_;
    my ($package, $name) = $function =~ /^(.*)::(\w+)$/;
    $package ||= ref $params[0];
    $name ||= $function;

    @packages = ($package, get_all_parents_in_traversal_order($package));

    for (@packages) {
        if ($address = second_step($_, $name, @params)) {
            return &$address(@params);
        }
    }

    for (@packages) {
        if ($address = second_step($_, "AUTOLOAD", @params)) {
            $AUTOLOAD = $name;
            return &$address(@params);
        }
    }

    die "method $name not found for object of type $package";
}

(Both of these examples leave out complications involving the package
$AUTOLOAD is in.)

The second step takes care of resolving the address of the function; it
handles picking which (if any) of the prototypes available for the
method is appropriate for those parameters.  Its implementation is left
as an exercise to the reader.  :^)
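Since Perl 5 only allows one sub per name per package, a minimal guess at
second_step (mine, not Brent's) mostly reduces to a symbol-table lookup plus
a crude check against the prototype:

    sub second_step {
        my ($package, $name, @params) = @_;
        no strict 'refs';
        my $code = *{"${package}::${name}"}{CODE} or return;

        if (defined(my $proto = prototype $code)) {
            # crude arity check: count mandatory slots before any ';'
            my $mandatory = (split /;/, $proto)[0];
            $mandatory = '' unless defined $mandatory;
            my $min = () = $mandatory =~ /[\$\@\%\&\*]/g;
            return if @params < $min;
        }
        return $code;       # the "address" the first step expects
    }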

--Brent Dax (who finds it very amusing that the spell checker tried to
change Sugalski to Sealskin)
[EMAIL PROTECTED]

"...and if the answers are inadequate, the pumpqueen will be overthrown
in a bloody coup by programmers flinging dead Java programs over the
walls with a trebuchet."




Re: LangSpec: Statements and Blocks

2001-09-01 Thread Uri Guttman

> "BCW" == Bryan C Warnock <[EMAIL PROTECTED]> writes:

  BCW> Keywords
  BCW> continue, do, else, elsif, for, foreach, given, goto, grep, if, last,
  BCW> map, next, redo, sort, sub, unless, until, when, while 

  BCW> Basic Constructs

  BCW>  1. [ LABEL: ] expr;
  BCW>  2. [ LABEL: ] { block } [ continue { block } ]
  BCW>  3. << grep | map >> { block } list# Note 1
  BCW>  4. sort [ { block } ] list# Note 1
  BCW>  5. do { block }   # Note 1

i don't consider sort/map/grep blocks to be basic like the others. also
sort/map can take expressions which is a different syntax.

what about eval BLOCK? i think that is being renamed to throw/catch but
it takes a code block too.

  BCW> Conditional Statement Modifiers

  BCW>  6. [ LABEL: ] expr if expr;
  BCW>  7. [ LABEL: ] expr until expr;
   ^
unless

  BCW> Looping Statement Modifiers

  BCW>  8. [ LABEL: ] expr while expr;
  BCW>  9. [ LABEL: ] do { block } while expr;# Note 2

i see the note, but that is not special. just a simple expression with a
modifier. now, if the do BLOCK while() were to support loop semantics,
it would be special here.

  BCW> 10. [ LABEL: ] expr until expr;
  BCW> 11. [ LABEL: ] do { block } until expr;# Note 3


  BCW> Iterative Statement Modifiers

  BCW> 12. [ LABEL: ] expr for[each] list;# Note 4


  BCW> Conditional Block Constructs

  BCW> 13. [ LABEL: ] if ( expr ) { block } 
  BCW>[ [ elsif  ( expr ) { block } ] ... ]
  BCW>[ else  { block } ]
  BCW> 14. [ LABEL: ] until ( expr ) { block }
  ^

unless (again :-)



  BCW> A statement consists of zero or more expressions, followed by an
  BCW> optional modifier and its expression, and either a statement
  BCW> terminator (';') or a block closure ('}' or EOF).

how do you have multiple expressions in a statement? when you combine
expressions you just get one larger expression. 

also perl has statement separators, not terminators. with that
definition you don't need to mention block close or EOF.

  BCW> A block consists of zero or more blocks and statements. A file is
  BCW> considered a block, delimited by the file boundaries.
  BCW> Semantically, I will define a block only in terms of its affect
  BCW> on scoping.  (Blocks are sometimes referenced by their
  BCW> interaction with flow control.  However, this definition isn't
  BCW> consistent, and I will avoid it.)

a good definition of a BLOCK and its scoping and loop semantics is
needed. the do BLOCK while() controversy is one area that needs clearing
up. 

  BCW> Flow Control Expressions

  BCW> A. goto 
  BCW> B. 

B. was intentionally left blank.

  BCW> last
  BCW> next
  BCW> redo


overall a good idea and thankfully written in english and not
ANSI-speak. a more proper (but not officially sanctioned by some
organization) perl specification will be useful. i doubt there will be
multiple implementations like you get with many other languages that
have specs, but it will make it easier to create test suites, fix bugs,
etc. when we get to exposing various API's it will be very useful to
have a clean spec and a document style/format for them (like the PDD).

uri

-- 
Uri Guttman  -  [EMAIL PROTECTED]  --  http://www.sysarch.com
SYStems ARCHitecture and Stem Development -- http://www.stemsystems.com
Search or Offer Perl Jobs  --  http://jobs.perl.org



RE: !< and !>

2001-09-01 Thread Sterin, Ilya



> -Original Message-
> From: Andrew Wilson [mailto:[EMAIL PROTECTED]]
> Sent: Saturday, September 01, 2001 6:06 PM
> To: Russ Allbery
> Cc: [EMAIL PROTECTED]
> Subject: Re: !< and !>
>
>
> On Sat, Sep 01, 2001 at 02:40:40PM -0700, Russ Allbery wrote:
> > >> How is !< different from >=?
> >
> > > It's just more syntax just like foo != bar
> > > is the same as (foo > bar || foo < bar).
>
> Not if you're using Quantum::SuperPositions ;-)
>
> > > It might prove convenient to express the expression.
> >
> > It's the same number of characters.  How can it be more convenient?
>
> It may help you to express yourself better in the way that you
> understand the problem you're solving.  Every little bit helps.
>

That's exactly what I was getting at: the readability of the program.
Otherwise, why have more than one way to do it? :-)

> cheers
>
> Andrew



RE: !< and !>

2001-09-01 Thread Sterin, Ilya



> -Original Message-
> From: Bryan C. Warnock [mailto:[EMAIL PROTECTED]]
> Sent: Saturday, September 01, 2001 5:59 PM
> To: Russ Allbery; [EMAIL PROTECTED]
> Subject: Re: !< and !>
>
>
> On Saturday 01 September 2001 05:40 pm, Russ Allbery wrote:
> > Sterin, Ilya <[EMAIL PROTECTED]> writes:
> > >> From: Russ Allbery [mailto:[EMAIL PROTECTED]]
> > >>
> > >> How is !< different from >=?
> > >
> > > It's just more syntax just like foo != bar
> > > is the same as (foo > bar || foo < bar).
> > >
> > > It might prove convenient to express the expression.
> >
> > It's the same number of characters.  How can it be more convenient?
>
> You only have to manipulate the shift key once!  ;-)
>
> I'm waiting for someone to say that in tri-state logic, '!<' != '>='

I was actually thinking something similar before I sent the initial reply,
but I just can't see a scenario where !<=> would be easily expressed. :-)

Ilya


>
> --
> Bryan C. Warnock
> [EMAIL PROTECTED]



Re: Multiple-dispatch on functions

2001-09-01 Thread Dan Sugalski

At 04:35 PM 8/31/2001 -0500, Me wrote:
>Dan, I don't immediately see how per object/class dispatch
>control helps to make multimethods pluggable.

There's going to be a "method call" entry in the variable's vtable. You 
want a different method call method, you change the entry. Probably by 
changing the class shared vtable.

Globally overriding the "leftmost depth-first" (or breadth-first dispatch, 
or dispatch with automatic redispatch) dispatch method's certainly 
possible, and I know Damian'd like to do it, but I don't like the thought 
of throwing in another level of indirection on a method call. Yeah, I know, 
it's only a single extra pointer deref, and compared to the rest of the 
code it's not much of a performance loss, but we have a lot of places where 
it's "Just one more level of indirection" and those little pieces will 
ultimately add up.

Not to say we won't, but I make it a point to question these decisions. 
That way folks take the performance implications into account. I don't mind 
*doing* them, mind, but I want people to think about the costs when 
deciding the value of the feature.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Multiple-dispatch on functions

2001-09-01 Thread Piers Cawley

Dan Sugalski <[EMAIL PROTECTED]> writes:

> At 03:06 PM 9/1/2001 -0400, Michael G Schwern wrote:
> >On Sat, Sep 01, 2001 at 01:10:58PM -0400, Dan Sugalski wrote:
> > > At 10:03 PM 8/30/2001 -0400, Michael G Schwern wrote:
> > > >Thinking about what Zhang was saying about multiple-dispatch not being
> > > >inherently OO.  I think he's sort of right.  Multiple-dispatch need
> > > >not be confined to method lookups.
> > >
> > > There is the potential for a pretty significant cost to this, since we'd
> > > need to evaluate the args at runtime for each call. (Possibly we could do
> > > some compile time optimization, but not in a lot of places, alas)
> >
> >Hmmm... shouldn't be any worse than a multi-method call.  And it'll
> >only affect those functions with the 'multi' flag.
> 
> Nope, the cost will be paid on all sub calls. We at least need to
> check on every sub call to see if there are multiple versions of the
> functions. (We can't tell at compile time if it's a single or
> multi-method sub call, since it can change at runtime) Granted, it's
> not a huge expense for non-multi-method calls, but it does still
> impose an overhead everywhere.

Can't you do it with a scary polymorphic function object? (Handwaving
starts which could be *completely* off base.) Then you just have to
rely on the 'call_this_function' vtable method to DTRT, which, unless
it's a 'multi' function, will simply do what function calls have
always done. You only have to do the more complex stuff if the
function object is a 'multi' function, in which case it'll have a
different handler in the vtable.
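For illustration, a rough Perl-space analogue of that handwave (nothing to
do with the real vtable layout): each sub is an object whose invoke slot is
fixed when the sub is defined, so the common case stays a plain call.

    package Func;
    sub plain {
        my ($class, $code) = @_;
        bless { invoke => $code }, $class;      # ordinary subs: no extra work
    }
    sub multi {
        my ($class, %by_arity) = @_;            # toy criterion: argument count
        bless { invoke => sub {
            my $body = $by_arity{ scalar @_ }
                or die "no variant taking ", scalar @_, " args";
            $body->(@_);
        } }, $class;
    }
    sub call { my $self = shift; $self->{invoke}->(@_) }

    package main;
    my $hello = Func->plain(sub { print "hello @_\n" });
    $hello->call('world');                      # plain path

    my $log = Func->multi(
        1 => sub { print "LOG: $_[0]\n" },
        2 => sub { print "LOG [$_[0]]: $_[1]\n" },
    );
    $log->call('starting up');                  # 1-arg variant
    $log->call('debug', 'still alive');         # 2-arg variant

Dispatching on argument count rather than argument types keeps the sketch
short; the point is only that the 'multi' machinery lives in the one
function object that asked for it.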

-- 
Piers Cawley
www.iterative-software.com




RE: Expunge implicit @_ passing

2001-09-01 Thread Dan Sugalski

At 05:23 PM 8/28/2001 -0700, David Whipp wrote:
> > They list two reasons to make your class final.  One is security
> > (which might actually be valid, but I doubt it will hold up to
> > determined attack), the other though...
> >
> > You may also wish to declare a class as final for object-oriented
> > design reasons. You may think that your class is
> > "perfect" or that,
> > conceptually, your class should have no subclasses.
> >
> > The idea that a class is either 'perfect' or 'complete' has to be the
> > silliest, most arrogant thing I've ever heard!
>
>The only good justification I've heard for "final" is as a directive
>for optimization. If you declare a variable to be of a final type, then
>the compiler (JIT, or whatever) can resolve method dispatch at
>compile-time. If it is not final, then the compiler can make no such
>assumption because java code can load in extra classes later.

This is the only real reason I've seen to allow final. (And it's not a bad 
reason, honestly, though not necessarily one appropriate in all cases) It 
does allow a fair amount of optimization to be done, which can be 
especially important when you can't see all the source. (Pretty much the 
case in all languages that compile down to object modules you link together 
later)

You can, with sufficiently aggressive analysis, determine whether a class 
is subclassed if you have a language that doesn't allow you to change the 
rules at runtime, if all the source is available. Perl, alas, doesn't fall 
into this class of languages, so we're going to have to do something 
clever. (Probably some form of conditional branch--"Branch if not changed 
since time X"--that checks to see if the inlined version's safe to use)

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




RE: CLOS multiple dispatch

2001-09-01 Thread Dan Sugalski

At 06:34 PM 8/30/2001 -0700, Hong Zhang wrote:
>With an optimized C compiler, we can achieve similar performance
>with obviously more code. Let's say C is only 80% of Fortran on math, I 
>still don't see the reason to put math into C language for the last 20% of 
>speed. It may be my personal preference. I am not going to argue on this 
>any more.

Well, in this case I think you underestimate the speed benefit of Fortran. 
More importantly, you underestimate the ease of writing the code, which is 
more important.

You can do everything that perl does in assembly. The reason it's far 
easier (and to some extent, possible in the first place for most 
programmers) is because you're starting from a higher point--the language 
gives you a significant conceptual boost up. Most of the interesting new 
features being bandied about are of this sort, giving the programmer a 
boost. They don't have to hand-roll some high-level fundamental constructs 
(if they even can, as many people don't have the time, talent, and/or
background to do that. How many people are really up to building closures 
properly themselves? Or even something as reasonably straightforward as 
math with complex numbers?) so it's possible they can start a program and 
be half-done already because the groundwork'd been provided.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Deoptimizations

2001-09-01 Thread Bryan C . Warnock

Random musings from a discussion I had yesterday.  (And check me on my 
assumptions, please.)

One of the more common lamentations is that a dynamic language (like Perl) 
doesn't mix well with optimizations, because the majority of optimizations 
are done at compile time, and the state at compile time isn't always the 
state at runtime.  A common declaration is, "We'd like to optimize that, but 
we can't, because foo may change at runtime."

Perl 5 optimizations replace (or, more accurately, null out) a more complex 
opcode stream [1] with a simpler one.  Constant folding is one such example.

5 -> add -> 10 -> add -> 15 -> store

becomes

30 -> store 

plus a bunch of null ops.  (The null ops are faster than attempting to 
splice the new opcode stream [1] in place of the old one, but I don't know 
by how much.)
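The folding is easy to see with B::Deparse (bundled with 5.6), since it
deparses the already-folded op tree:

    $ perl -MO=Deparse -e 'print 5 + 10 + 15;'
    print 30;
    -e syntax OK

The 5, 10, and 15 are gone before any backend sees the tree; internally the
nulled ops still sit in their old slots.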

Consider the following:

Create an optimization op, which is more or less a very fancy branch operator.
Whenever you elect to do an aggressive optimization, place the opt op as a 
branch between the newly created... [1] and the old, full one.

The op could decide ( From a switch or variable - turn on and off 
optimizations while running. Or from state introspection, perhaps, since you 
probably have a good idea of what changes would invalidate it. )  whether to 
exercise the optimized code, or revert to the original, unoptimized version.
I suppose that if you were to implement an advanced JIT, you could replace an 
invalidated optimization with its newly optimized variant.  

That would also work with a couple of tie-ins with the language.  First, 
of course, the ubiquitous pragma, which could affect which optimizations 
(assuming we categorized them) we should run, and which we shouldn't, based 
on the suggestions from the programmer.  And perhaps some hook into the 
internals for the same reason.

sub foo {
no optimizations;
...
}

or
{
local $opt = (ref $obj eq "SomeNewObject"); 
# If the $obj has changed, don't run any optimizations
}

Is this possible?  Feasible?  Way out there?

[1] Chain?  Branch?  What's the correct terminology here?
-- 
Bryan C. Warnock
[EMAIL PROTECTED]



RE: Deoptimizations

2001-09-01 Thread Brent Dax

# -Original Message-
# From: Bryan C. Warnock [mailto:[EMAIL PROTECTED]]
# Sent: Saturday, September 01, 2001 12:29 PM
# To: [EMAIL PROTECTED]
# Subject: Deoptimizations
#
#
# Random musings from a discussion I had yesterday.  (And check
# me on my
# assumptions, please.)
#
# One of the more common lamentations is that a dynamic
# language (like Perl)
# doesn't mix well with optimizations, because the majority of
# optimizations
# are done at compile time, and the state at compile time isn't
# always the
# state at runtime.  A common declaration is, "We'd like to
# optimize that, but
# we can't, because foo may change at runtime."
#
# Perl 5 optimizations replace (or, more accurately, null out)
# a more complex
# opcode stream [1] with a simpler one.  Constant folding is
# one such example.
#
#   5 -> add -> 10 -> add -> 15 -> store
#
# becomes
#
#   30 -> store
#
# plus a bunch of null ops.  (The null ops are faster than
# attempting to
# splice the new opcode stream [1] in place of the old one, but
# I don't know
# by how much.)
#
# Consider the following:
#
# Create an optimization op, which is more a less a very fancy
# branch operator.
# Whenever you elect to do an aggressive optimization, place
# the opt op as a
# branch between the newly created... [1] and the old, full one.
#
# The op could decide ( From a switch or variable - turn on and off
# optimizations while running. Or from state introspection,
# perhaps, since you
# probably have a good idea of what changes would invalidate
# it. )  whether to
# exercise the optimized code, or revert to the original,
# unoptimized version.
# I supposed, if you were to implement an advanced JIT, you
# could replace an
# invalidated optimization with its newly optimized variant.
#
# That would also work with a couple of tie-ins with the
# language.  First,
# of course, the ubiquitous pragma, which could affect which
# optimizations
# (assuming we categorized them) we should run, and which we
# shouldn't, based
# on the suggestions from the programmer.  And perhaps some
# hook into the
# internals for the same reason.
#
# sub foo {
# no optimizations;
# ...
# }
#
# or
# {
# local $opt = (ref $obj eq "SomeNewObject");
# # If the $obj has changed, don't run any optimizations
# }
#
# Is this possible?  Feasible?  Way out there?

I think it's a good idea!  ++[bwarnock]!

Of course, the hard part is detecting when the optimization is invalid.
While there are simple situations:

sub FOO {"foo"}

print FOO;

evaluating to:

/-no--"foo"-\
  opt: FOO redefined? -< >---print
\-yes-call FOO--/

there could also be some more complicated situations, in which the
situations where the optimizations are invalid are harder to define.

I'd also suggest a different pragma:

use less 'optimization';

--Brent Dax
[EMAIL PROTECTED]

"...and if the answers are inadequate, the pumpqueen will be overthrown
in a bloody coup by programmers flinging dead Java programs over the
walls with a trebuchet."




Re: Deoptimizations

2001-09-01 Thread Bryan C . Warnock

On Saturday 01 September 2001 05:07 pm, Brent Dax wrote:
> Of course, the hard part is detecting when the optimization is invalid.
> While there are simple situations:
>
>   sub FOO {"foo"}
>
>   print FOO;
>
> evaluating to:
>
> /-no--"foo"-\
>   opt: FOO redefined? -< >---print
> \-yes-call FOO--/
>
> there could also be some more complicated situations, in which the
> situations where the optimizations are invalid are harder to define.

Granted, code can always be written more complexly than our ability to
understand it, in which case we very well may throw out the warnings that
Thou Shalt Not (if thou wants consistent, accurate results).

But for many optimizations, although perhaps more with peephole 
optimizations than with full code analysis type of optimizations, simply 
identifying a possible optimization usually identifies its potential undoing 
as well.  

After all, optimizations don't just happen.  They are, more or less, a set 
of known patterns that you look for.  For a dynamic language, part of the 
original identification of those patterns may very well be the additional 
identification of what pieces are critical to the optimization.

Of course, with as *highly* a dynamic language as Perl, there may be several 
hundred things that could invalidate a given optimization - it would be less
optimal to detect all those things than to simply run the unoptimized code!

But in many cases, it may only be one or two.  

For instance, optimization within (or of) an object's methods could very 
well be dependent solely on whether that object was ever redefined.  You 
could then implement something akin to a dirty flag simply to check whether 
it had been updated, and if so, deoptimize.
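For illustration, a Perl-space stand-in for that dirty flag (names and
mechanism mine): remember the code ref the optimization was made against,
and deoptimize the moment it no longer matches.

    package Counter;
    sub new  { bless { n => 0 }, shift }
    sub step { $_[0]{n} += 1 }

    package main;
    my $optimized_against = \&Counter::step;   # what the "optimizer" saw

    sub fast_step {
        my $obj = shift;
        if (\&Counter::step == $optimized_against) {
            return ++$obj->{n};                # the "inlined" body
        }
        return $obj->step;                     # dirty: fall back to a real call
    }

    my $c = Counter->new;
    fast_step($c);                             # takes the inlined path
    *Counter::step = sub { $_[0]{n} += 10 };   # someone redefines the class...
    fast_step($c);                             # ...and the guard catches it
    print $c->{n}, "\n";                       # 11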

Of course, what changes may have been made wouldn't necessarily negate the 
original optimization - but in our case, we can be conservative with it 
because in most cases the object may not be updated at all.  80/10 vs 90/50 
[1].

Another example that comes up often with multiple fetches and stores on 
variables is tying and overloading.  If a variable were non-magical, 
multiple retrievals and stores could potentially be reordered and combined. 
However, if it were tied, the explicit operations coded could be critical.  
(Then again, maybe not.  But we have to assume that they are.)  Our logic 
for this particular form of optimization can check to see if we know the 
variable is tied.  If it is, we may as well move on, (even if, during 
runtime, the variable in question is *never* tied when it gets to this area 
of the code).  But if we determine that it isn't, we know that someone could 
play symtable tricks on us and replace it with one that is.  So we could 
check to see if the symbol table had changed, if the symbol table for that 
package had changed (or pseudo package, if it were lexically scoped), or if 
the symtable entry for the variable had changed - whichever had the best 
longterm results (in terms of the speed trade-off for doing the optimization 
versus a false hit to deoptimize).  Or we could just say we won't optimize 
it, or we can say that we will - caveat scriptor!

Now, it could very well be that very few aggressive optimizations will result 
in such a small set of code that could be optimized like this, in which case 
it's not worth it.  This is Perl, after all, and TMTOWTDI.  You could write 
the Perl code itself to differentiate between the cases, and force 
non-optimal code when it wasn't safe.  This is precisely what Memoization 
does.  You could optimize code at the opcode level and memoize it there, but 
there's so much that could change.  So you implement it yourself it perl 
space.

{
local $OPTIMIZATIONS = not defined tied $thingy;
# A bunch of code that could be optimized.
# If thingy is tied, non-optimal code is forced.
}
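For reference, the user-space precedent costs only a couple of lines with
the Memoize module from CPAN:

    use Memoize;

    sub slow_fib {
        my $n = shift;
        return $n < 2 ? $n : slow_fib($n - 1) + slow_fib($n - 2);
    }
    memoize('slow_fib');        # replaces &slow_fib with a caching wrapper

    print slow_fib(30), "\n";   # 832040; one real computation per distinct $n

Whether a cached result is still valid is the user's problem there, which is
exactly the trade-off being described.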

-- 
Bryan C. Warnock
[EMAIL PROTECTED]



RE: Deoptimizations

2001-09-01 Thread Brent Dax

# -Original Message-
# From: Bryan C. Warnock [mailto:[EMAIL PROTECTED]]
# Sent: Saturday, September 01, 2001 3:01 PM
# To: Brent Dax; [EMAIL PROTECTED]
# Subject: Re: Deoptimizations
#
#
# On Saturday 01 September 2001 05:07 pm, Brent Dax wrote:
# > Of course, the hard part is detecting when the optimization
# is invalid.
# > While there are simple situations:
# >
# > sub FOO {"foo"}
# >
# > print FOO;
# >
# > evaluating to:
# >
# > /-no--"foo"-\
# >   opt: FOO redefined? -< >---print
# > \-yes-call FOO--/
# >
# > there could also be some more complicated situations, in which the
# > situations where the optimizations are invalid are harder to define.
#
# Granted, code can always be written more complexly than our
# ability to
# understand it, in which case we very well may throw out the
# warnings that
# Thou Shall Not (if thou wants consistant, accurate results).
#
# But for many optimizations, although perhaps more with peephole
# optimizations than with full code analysis type of
# optimizations, simply
# identifying a possible optimization usually identifies its
# potential undoing
# as well.
#
# After all, optimizations don't just happen.  They are, more
# or less, a set
# of known patterns that you look for.  For a dynamic language,
# part of the
# original identification of those patterns may very well be
# the additional
# identification of what pieces are critical to the optimization.
#
# Of course, with as *highly* a dynamic language as Perl, there
# may be several
# hundred things that could invalidate a given optimization -
# it would be less
# optimal to detect all those things that to simply run the
# unoptimized code!
#
# But in many cases, it may only be one or two.

You'd be surprised how quickly the possibilities pile up:

sub FOO {"FOO"}
sub BAR {"BAR"}
sub BAZ {"BAZ"}
sub QUUX ("QUUX")

print FOO.BAR.BAZ.QUUX;

When you try to do inlining and constant folding, you end up with
something like this:

OPT: Redefined FOO or BAR or BAZ or QUUX?
  no:  "FOOBARBAZQUUX"  ----------------------------------->  Print
  yes: OPT: Redefined FOO?
           yes: call FOO
           no:  "FOO"
       OPT: Redefined BAR?
           yes: call BAR
           no:  "BAR"
       Concatenate
       OPT: Redefined BAZ?
           yes: call BAZ
           no:  "BAZ"
       Concatenate
       OPT: Redefined QUUX?
           yes: call QUUX
           no:  "QUUX"
       Concatenate  --------------------------------------->  Print

(Yes, I have _way_ too much time on my hands...)

Actually, even this structure is sub-optimal--if only one