Re: Test Case: Complex Numbers

2005-11-14 Thread Dave Whipp

Jonathan Lang wrote:


In the hypothetical module that I'm describing, the principle value
approach _would_ be used - in scalar context.  The only time the "list
of all possible results" approach would be used would be if you use
list context.  If you have no need of the list feature, then you don't
have to use it.


This might surprise people:

@a = (1,4,9);
@b = @a.map:{sqrt};

ok(+@a == +@b); # fails because of list context
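The surprise can be illustrated outside Perl 6. This is a Python sketch, assuming the thread's hypothetical module where sqrt in list context yields every root (`sqrt_all` is an invented name for that behaviour):

```python
import math

def sqrt_all(x):
    """List-context sqrt from the hypothetical module: every value
    whose square is x, not just the principal root."""
    r = math.sqrt(x)
    return [r, -r] if r else [r]

a = [1, 4, 9]

# Scalar-context behaviour: one principal root per element.
principal = [math.sqrt(x) for x in a]

# List-context behaviour: each element contributes two roots, so the
# mapped result is longer than the input -- the surprise above.
all_roots = [r for x in a for r in sqrt_all(x)]
```

A count comparison between the input and the mapped output fails precisely because list context doubled the number of elements.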



The problem is that the two coordinate systems don't just differ in
terms of which methods they make available; they also differ in terms
of how they represent the value that they represent.


And they also differ in what values they can represent, assuming the 
underlying representation of real numbers uses a finite number of bits.
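The finite-precision point can be shown concretely in Python (`cmath` is Python's complex-math module, not anything from the thread): round-tripping a value between Cartesian and polar form is rarely exact, because the two coordinate systems represent slightly different sets of floating-point values.

```python
import cmath

# A value chosen in Cartesian form...
z = complex(1.0, 3.0)

# ...converted to polar coordinates and back.
r, theta = cmath.polar(z)
z2 = cmath.rect(r, theta)

# The round-trip error is tiny but generally nonzero.
error = abs(z - z2)
```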



In the former
regard, this would be much like C's union; but it would differ from
them in that it would be designed to preserve the logical value (what
the data represents) rather than the physical value (the sequence of
bits in memory).


Which is not possible in the general case.

That probably doesn't mean that there isn't a good use for the concept, 
but I'd be wary of it for precision math stuff.


Re: HLL Debug Segments

2005-11-15 Thread Dave Whipp

Will Coleda wrote:

Right, the hard bit here was that I needed to specify something other  
than "file". Just agreeing that we need something other than just  
"file/line".


I'd have thought the onus is the other way: justify the use of 
"file/line" as the primitive concept.


We're going to have a set of "parrot compiler tools", which represent the 
high-level language and its subsequent transformations as trees. If these 
trees are available, then all that is needed for debug traceability is a 
pointer/reference to nodes in the tree. If the node has a "get 
file/line" method, then the node (attribute grammar?) can be responsible 
for chaining the information back to the source code, even when things 
like common-subexpression optimizations have been done (the method can 
query the callstack, etc., to resolve this).
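A minimal Python sketch of that idea, with invented `Node`/`CSENode` names (nothing here is real Parrot or compiler-tools API): the debug info is a reference to a tree node, and a node produced by an optimization chains back to the source locations of the nodes it merged.

```python
# Debug traceability via tree nodes instead of bare file/line pairs.
class Node:
    def __init__(self, file, line):
        self.file, self.line = file, line

    def source_locations(self):
        return [(self.file, self.line)]

class CSENode(Node):
    """Node created by common-subexpression elimination; it has no
    single source line, so it forwards to the nodes it merged."""
    def __init__(self, merged):
        self.merged = merged

    def source_locations(self):
        return [loc for n in self.merged for loc in n.source_locations()]

a = Node("foo.p6", 10)
b = Node("foo.p6", 42)
cse = CSENode([a, b])
```

The point is that "get file/line" becomes a method the node answers, rather than a primitive field the debug format must carry.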


Re: \x{123a 123b 123c}

2005-11-22 Thread Dave Whipp

Larry Wall wrote:


And there aren't that many regexish languages anyway.  So I think :syntax
is relatively useless except for documentation, and in practice people
will almost always omit it, which makes it even less useful, and pretty
nearly kicks it over into the category of multiplied entities for me.


It's surprising how many are out there. Even if we ignore the various 
dialects of standard rexen, we can find interesting examples such as 
PSL, a language for specifying temporal assertions for hardware design: 
http://www.project-veripage.com/psl_tutorial_5.php. Whether one would 
want to fold this syntax into a C<rule> is a different question.


There are actually a number of competing languages in this space. E.g. 
http://www.pslsugar.org/papers/pslandsva.pdf.


Re: relational data models and Perl 6

2005-12-15 Thread Dave Whipp

Darren Duncan wrote:

As an addendum to what I said before ...

...
I would want the set operations for tuples to be like that, but the 
example code that Luke and I expressed already, with maps and greps etc, 
seems to smack too much of telling Perl how to do the job.


I don't want to have to use maps or greps or whatever, to express the 
various relational operations.


I think you're reading too many semantics into C<grep> and C<map>: they 
don't tell perl *how* to implement the search, any more than 
C<SELECT> would. The example was:


  INSERT INTO NEWREL SELECT FROM EMP WHERE DNO = 'D2';
Vs
  my $NEWREL = $EMP.grep:{ $.DNO eq 'D2' };

The implementation of $EMP.grep depends very much on the class of $EMP. 
If this is an array-ref, then it is reasonable to think that the grep 
method would iterate the array in-order. However, if the class is 
"unordered set", then there is no such expectation on the implementation.


The deeper problem is probably the use of the "eq" operator in the test. 
Without knowing a-priori what operations (greps) will be performed on 
the relation, it is not possible to optimize the data structure for 
those specific operations. For example, if we knew that $EMP should 
store its data based on the {$.DNO eq 'D2'} equivalence class then this 
grep would have high performance (possibly at the expense of its creation).
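The equivalence-class idea can be sketched in Python, with invented names (`Relation`, `grep_eq`; `EMP`/`DNO` follow the example): if the relation knows ahead of time that it will be filtered on DNO equality, it can store rows indexed by DNO, and the grep becomes a dictionary lookup instead of a scan.

```python
from collections import defaultdict

class Relation:
    """A relation that pre-indexes its rows on one attribute."""
    def __init__(self, rows, index_on):
        self.index_on = index_on
        self.index = defaultdict(list)
        for row in rows:
            self.index[row[index_on]].append(row)

    def grep_eq(self, key, value):
        if key == self.index_on:
            return list(self.index[value])        # O(1) equivalence-class lookup
        return [r for rows in self.index.values() # fallback: full scan
                  for r in rows if r[key] == value]

emp = Relation([{"ENO": 1, "DNO": "D2"},
                {"ENO": 2, "DNO": "D3"},
                {"ENO": 3, "DNO": "D2"}], index_on="DNO")
newrel = emp.grep_eq("DNO", "D2")
```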


In theory, a sufficiently magical module could examine the parse tree 
(post type-inference), and find all the calls to C<grep> on everything 
that's a tuple -- and use that to attempt optimizations of a few special 
cases (e.g. a code block that contains just an "eq" test against an 
attribute). I'm not sure how practical this would be, but I don't see 
how a different syntax (e.g. s/grep/where/) would be any more 
declarative in a way that makes this task easier.


Re: Table of Perl 6 "Types"

2006-01-04 Thread Dave Whipp

Larry Wall wrote:


: -
:   Num : : Base Numeric type
: Int   : :
: Float : :
: Complex   : :

This bothers me.  The reason we put in Num in the first place was to
get rid of things like Float and Double.  Every time I see "Float"
or "Double" in a Perl 6 program I will feel like a failure.


I'd suggest that the concepts of "Real" and "Int" are sufficiently 
distinct to warrant types. (The term "Float" is bad IMO because it 
implies a specific style of implementation -- a Real could, in 
principle, be implemented as a fixed-point value.)


An Int is Enumerable: each value that is an Int has well defined succ 
and pred values. Conversely, a Real does not -- and so arguably should 
not support the ++ and -- operators. Amongst other differences, a 
Range[Real] is an infinite set, whereas a Range[Int] has a finite 
cardinality.
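In Python terms (a sketch of the distinction, not Perl 6 semantics): integers are enumerable, so `succ` and `pred` are well defined and an integer range has a countable, finite cardinality; no analogous enumeration exists for a real interval.

```python
# Enumerability of Int: well-defined successor and predecessor.
def succ(n: int) -> int:
    return n + 1

def pred(n: int) -> int:
    return n - 1

# A Range[Int] has finite cardinality; a Range[Real] would not.
def int_range_cardinality(lo: int, hi: int) -> int:
    return max(0, hi - lo + 1)
```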


(perhaps this discussion belongs on p6l)


Re: Table of Perl 6 "Types"

2006-01-12 Thread Dave Whipp

>>(perhaps this discussion belongs on p6l)
> It sure does;)

(this reply moved to p6l)

[EMAIL PROTECTED] wrote:

Dave Whipp wrote:


An Int is Enumerable: each value that is an Int has well defined succ
and pred values. Conversely, a Real does not -- and so arguably should
not support the ++ and -- operators. Amongst other differences, a
Range[Real] is an infinite set, whereas a Range[Int] has a finite
cardinality.



++ and -- aren't meant to increment or decrement to the next/last value
in the set; they increment or decrement by one (see perlop). I can
see your point about them not making sense for Real since it's not an
enumerable set like integers but I don't think it would be in the
spirit of DWIM ...


Imagine I have a concrete type FixedPoint8_1000 that stores numbers from 
0 to 1000 in an 8-bit value, and "does Real". Incrementing a value 
stored in this type by one isn't a meaningful operation.
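The hypothetical FixedPoint8_1000 type can be sketched in Python: 8 bits spanning 0..1000 gives a representable step of 1000/255 (about 3.9), so "add one" cannot land on a distinct representable value.

```python
# Sketch of the hypothetical FixedPoint8_1000: values 0..1000 stored
# in an 8-bit code 0..255, so the quantization step exceeds 1.
STEP = 1000 / 255

def to_fixed(x):
    """Quantize to the nearest representable value (8-bit code)."""
    return round(x / STEP)

def from_fixed(code):
    return code * STEP

# Adding one and re-quantizing lands back on the same stored value:
# incrementing by one is not a meaningful operation for this type.
one_up = from_fixed(to_fixed(from_fixed(100) + 1))
```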


wrt the perlop reference, we manipulate strings with ++ and --; and 
we're going to have enumerated types (albeit backed by intergers). I'm 
sort-of hoping that we'll be able to use the operators on iterators, 
too. I think what I'm saying is that "succ/pred" semantics are more 
general than just "+/- 1"; and perl6 does not need to be bound by 
perl5's perlop. I can't find a formal defn of autoincrement in the perl6 
docs.


I wouldn't see a problem with defining a "Real" role that has a fairly 
sparse set of operations. Afterall, a type that does support ++ and -- 
(e.g. Int, Num) could easily "does Enumerable" if it wants to declare 
that it supports them.



Dave.


Re: Table of Perl 6 "Types"

2006-01-12 Thread Dave Whipp

Rob Kinyon wrote:

I wouldn't see a problem with defining a "Real" role that has a fairly
sparse set of operations. After all, a type that does support ++ and --
(e.g. Int, Num) could easily "does Enumerable" if it wants to declare
that it supports them.



What about the scripty-doo side of Perl6? One of the overriding design
considerations that Larry put forward at the very beginning was that
the "easy things are easy" part of the philosophy would still remain.
I want to still be able to do something like

perl -pia -e '@F[2]++' somefile.xsv

And have it just DWIM for numbers like 1.2 ( -> 2.2). If Real is what 1.2
is implicitly coerced into, what do I do now?


Scripty-code (without explicit types) uses Num, not Real.


Pattern matching and "for" loops

2006-01-13 Thread Dave Whipp

Today I wrote some perl5 code for the umpteenth time. Basically:

  for( my $i=0; $i < $#ARGV; $i++ )
  {
 next unless $ARGV[$i] eq "-f";
 $i++;
 $ARGV[$i] = absolute_filename $ARGV[$i];
  }
  chdir "foo";
  exec "bar", @ARGV;

I'm trying to work out if there's a clever perl6 way to write this using 
pattern matching:


  for @*ARGV -> "-f", $filename {
$filename .= absolute_filename;
  }

Would this actually work, or would it stop at the first elem that 
doesn't match ("-f", ::Item)?


Is there some way to associate alternate codeblocks for different 
patterns (i.e. local anonymous MMD)?



Dave.


(ps. if anyone on this list is looking for a $perl_job in Santa Clara, 
CA, please contact me by email: dwhipp at nvidia -- no promises, but we 
might even pay you to work on p6)


Re: Pattern matching on arrays and "for" loops

2006-01-13 Thread Dave Whipp

Luke Palmer wrote:

On 1/13/06, Dave Whipp <[EMAIL PROTECTED]> wrote:



Would this actually work, or would it stop at the first elem that
doesn't match ("-f", ::Item)?


If by "stop" you mean "die", yes it would stop.


not what I wanted :-(


Is there some way to associate alternate codeblocks for different
patterns (i.e. local anonymous MMD)?


As Austin points out, that's called "given".  I'll also point out that
if we have lookahead in "for" (a feature that I think is best reserved
for third-party modules):

for @*ARGV -> $cur, ?$next is rw {
if $cur eq '-f' {
$next .= absolute_filename;
}
}

Which isn't as slick as a pattern matching approach, but it gets the
job done without having to use indices.


What happens if I simply abandon the attempt at anonymous MMD and use a 
named multi-sub, instead:


{
  my multi sub process_arg("-f", Str $f is rw) {
 $f .= absolute_filename
  }
  my multi sub process_arg("--quux", Str $arg1, Str $arg2) { ... }
  ...
  my multi sub process_arg(Str $) {} # skip unrecognised args

  for @*ARGV: &process_arg;
}


something between "state" and "my"

2006-02-02 Thread Dave Whipp

(from p6i)

Larry Wall wrote:

On Thu, Feb 02, 2006 at 07:12:08PM +0100, Leopold Toetsch wrote:
: >... Anyway,
: >the P6 model of "state" is more like a persistent lexical than like
: >C's static.  
: 
: Sorry for my dumb question - what's the difference then? (Besides that C 
: doesn't have closures ;)


That *is* the difference. 

[...]

I was thinking about this discussion on p6i, and I started thinking that 
there's something between "my" and "state": a "state" variable that is 
created/initialized on each invocation of a sub, but only if it's not 
already in the dynamic scope (i.e. if it's not a recursive call). A 
slightly silly example of its use (using "temp state" as the qualifier) 
would be:


  sub factorial(Int $x) {
  temp state Int $result = 1;
  $result *= $x;
  factorial $x-1 if $x > 2;
  return $result if want;
  }
  say factorial 6;

This code is essentially the same as any other recursive factorial 
function, except that it doesn't use call-stack semantics to maintain 
its state. (I know P6 will do tail recursion, but sometimes I find 
myself building complex args/return values to pass state from one 
iteration to the next.)


Equivalent code without this feature is:

  sub factorial(Int $x) {
  my Int $result = 1;
  my sub fact1(Int $x) {
 $result *= $x;
 fact1 $x-1 if $x > 2;
  }
  fact1 $x;
  return $result;
  }

Dave.


Re: something between "state" and "my"

2006-02-03 Thread Dave Whipp

Larry Wall wrote:


But that's just my current mental model, which history has shown
is subject to random tweakage.  And maybe "env $+result" could be a
special squinting construct that does create-unless-already-created.
Doesn't feel terribly clean to me though.  If we stick with the +
twigil always meaning at least one CALLER::, then clarity might be
better served by

env $result := $+result // 1;

assuming that $+result merely returns undef in the outermost env context.


Wouldn't that bind $result to a constant at the outermost scope -- and 
therefore to that same constant in all inner scopes? If so, then later 
attempts to assign $result would be an error.


Re: comment scope

2006-03-14 Thread Dave Whipp

Ruud H.G. van Tol wrote:
Perl6 could introduce (lexical, nestable) comment scope. 


In P5 I often use q{...} in void context -- P6 seems to be attaching tags 
to the quote operator, so q:comment{...} might fall out naturally.


Capture Object: why no verb?

2006-04-17 Thread Dave Whipp
Reading about capture objects, I see that they represent an arglist, and 
the object to which you're going to send those args. What it doesn't 
capture is the method name (the verb) that's being called. This feels 
like a slightly strange omission.


Compare:

  $message = &Shape::draw.prebind( x=>0, y=>0 );
  $capture = \( $shape: x=>0, y=>0 );

  $shape.$message;
  draw *$capture;

These are both doing very similar things. The difference is only in the 
thing that's being associated with the arglist. In the case of currying 
it's the method whereas for the capture it's the invocant.


Perhaps I'm not fully grokking the abstraction of the capture-object, but 
it seems to me that there should be a slot in it for the method. For 
dispatch, all three things are needed (invocant, method, args); so if 
you're going to put two of them in one object, then maybe the third 
thing belongs, too.
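The three-slot point can be sketched in Python, using `functools.partial` as a rough analogue of `.prebind` (the `Shape`/`draw` names follow the example; the tuple layouts are invented for illustration):

```python
from functools import partial

class Shape:
    def draw(self, x, y):
        return f"draw at ({x},{y})"

shape = Shape()

# Currying: verb + args bound, invocant missing.
message = partial(Shape.draw, x=0, y=0)

# Capture: invocant + args bound, verb missing.
capture = (shape, {"x": 0, "y": 0})

# A "full" capture with all three dispatch ingredients.
full = (shape, "draw", {"x": 0, "y": 0})

r1 = message(shape)                 # supply the missing invocant
obj, verb, kwargs = full
r2 = getattr(obj, verb)(**kwargs)   # dispatch needs all three slots
```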


Re: Capture Object: why no verb?

2006-04-22 Thread Dave Whipp
Audrey Tang wrote:

> Hm, Perl 6 actually has two different ways of putting Capture to some
> Code object... Following yesterday's P6AST draft I'll call them Call and
> Apply respectively:
> 
> moose($obj: 1, 2); # this is Call
> &moose.($obj: 1, 2);   # this is Apply
> 
> elk(named => 'arg');   # this is Call
> &elk.(named => 'arg'); # this is Apply
> 
> The difference is whether the Code is an object (Apply), or a "message"
> name (Call).  As the same argument list can be reused in both cases, it
> seems the "method" part is better left out from the abstraction.  That
> allows:

My understanding is that Capture objects are intended to supersede perl5
references. Synopsis 2 states:

| You may retrieve parts from a Capture object with a prefix sigil
| operator:
|
|$args = \3; # same as "$args = \(3)"
|$$args; # same as "$args as Scalar" or "Scalar($args)"
|@$args; # same as "$args as Array"  or "Array($args)"
|%$args; # same as "$args as Hash"   or "Hash($args)"
|
| When cast into an array, you can access all the positional arguments;
| into a hash, all named arguments; into a scalar, its invocant.

I find myself wanting to add the obvious extra case to this list. Should
it read:

   &$args; # 'fail' ? (runtime error, or compile time?)

I'd prefer it to do something more analogous to a perl5 coderef
dereference -- assuming that it is possible to create a reference to
code using a Capture.

Also, I'm a bit confused by the idea that the invocant is obtained by a
scalar dereference, because I know that arrays and hashes can be
invocants, too. E.g. @a.pop. So, if I do:

  my $args = \(@a:);
  my $b  = $$args;  # @a as a scalar
  my @c  = @$args;  # empty list
  my @d := $$args;  # bound to @a

Is there any way that a deref can determine that the invocant stored in
the capture was placed there using the '@' sigil? Perhaps this leads to
the question of whether there is ever a reason for code to distinguish
between @ invocants and $ invocants. I'm guessing that the answer must
be "no".


error building pugs: "Could not find module `Data.ByteString'"

2006-04-30 Thread Dave Whipp
I'm trying play with pugs for the first time. I checked it out from the 
repository (r10142) and, after installing ghc 6.4.2, attempted to build 
pugs. Fairly quickly, the build dies with the message below. Does anyone 
have any hints what the problem might be (I'm not a Haskell person yet, 
but I did confirm that my ghc installation builds "hello, world". My 
perl is 5.8.8; Linux rhel3)


...
configure: No cpphs found
configure: No greencard found
The field "hs-source-dir" is deprecated, please use hs-source-dirs.
Preprocessing library Pugs-6.2.11...
Building Pugs-6.2.11...
Chasing modules from: 
Pugs,Pugs.AST,Pugs.AST.Internals,Pugs.AST.Internals.Instances,Pugs.AST.Pad,Pugs.AST.Pos,Pugs.AST.Prag,Pugs.AST.SIO,Pugs.AST.Scope,Pugs.Bind,Pugs.CodeGen,Pugs.CodeGen.JSON,Pugs.CodeGen.PIL1,Pugs.CodeGen.PIL2,Pugs.CodeGen.PIR,Pugs.CodeGen.PIR.Prelude,Pugs.Prelude,Pugs.CodeGen.Perl5,Pugs.CodeGen.YAML,Pugs.Compat,Pugs.Compile,Pugs.Compile.PIL2,Pugs.Compile.Haskell,Pugs.Compile.Pugs,Pugs.Config,Pugs.Cont,Pugs.DeepSeq,Pugs.Embed,Pugs.Embed.Haskell,Pugs.Embed.Parrot,Pugs.Embed.Perl5,Pugs.Embed.Pugs,Pugs.Eval,Pugs.Eval.Var,Pugs.External,Pugs.External.Haskell,Pugs.Help,Pugs.Internals,Pugs.Junc,Pugs.Lexer,Pugs.Monads,Pugs.PIL1,Pugs.PIL1.Instances,Pugs.PIL2,Pugs.PIL2.Instances,Pugs.Parser,Pugs.Parser.Operator,Pugs.Parser.Number,Pugs.Parser.Program,Pugs.Parser.Types,Pugs.Parser.Unsafe,Pugs.Parser.Export,Pugs.Parser.Doc,Pugs.Parser.Literal,Pugs.Parser.Util,Pugs.Pretty,Pugs.Prim,Pugs.Prim.Code,Pugs.Prim.Eval,Pugs.Prim.FileTest,Pugs.Prim.Keyed,Pugs.Prim.Lifts,Pugs.Prim.List,Pugs.Prim.Match

,Pugs.Prim.Numeric,Pugs.Prim.Param,Pugs.Prim.Yaml,Pugs.Rule,Pugs.Rule.Expr,Pugs.Run,Pugs.Run.Args,Pugs.Run.Perl5,Pugs.Shell,Pugs.Types,Pugs.Version,Emit.Common,Emit.PIR,Emit.PIR.Instances,Data.DeepSeq,Data.Yaml.Syck,DrIFT.JSON,DrIFT.Perl5,DrIFT.YAML,RRegex,RRegex.PCRE,RRegex.Syntax,System.FilePath,UTF8
Could not find module `Data.ByteString':
  use -v to see a list of the files searched for
  (imported from src/Pugs/AST/Internals/Instances.hs)
Build failed: 256 at util/build_pugs.pl line 96.
make: *** [pugs] Error 2


Re: error building pugs: "Could not find module `Data.ByteString'"

2006-05-01 Thread Dave Whipp

Dave Whipp wrote:

Could not find module `Data.ByteString':


I updated to r10166: Audrey's update to third-party/fps/... fixed my 
problem.


Thanks.


Dave.


Why does p6 always quote non-terminals?

2006-06-27 Thread Dave Whipp
I was reading the slides from PM's YAPC::NA, and a thought drifted into
my mind (more of a gentle alarm, actually). One of the examples struck me:

  rule parameter_list { <parameter> [ , <parameter> ]* }

It seems common in the higher layers of a grammar that there are more
non-terminals than terminals in each rule, so maybe the current "rule"
isn't properly huffmanized (also, the comma seemed somehow out of place
-- most symbols will need to be quoted if used in similar context). A
more traditional YACC/YAPP coding of the
rule would be:

  rule parameter_list { parameter [ "," parameter ]* }

Is there a strong reason (ambiguity) why every nonterminal needs to be
quoted (or could we at least have a form ( C< rule:y {...} > ) where
they are not)? I see this as increasingly important when rules are used
to process non-textual streams. In these cases every token will need to
be quoted using angle brackets, which at that point might become little
more than line noise.


Dave.


Re: ===, =:=, ~~, eq and == revisited (blame ajs!) -- Explained

2006-07-14 Thread Dave Whipp

Darren Duncan wrote:

Assuming that all elements of $a and $b are themselves immutable to all 
levels of recursion, === then does a full deep copy like eqv.  If at any 
level we get a mutable object, then at that point it turns into =:= (a 
trivial case) and stops.


  ( 1, "2.0", 3 ) === ( 1,2,3 )

True or false?

More importantly, how do I tell perl what I mean? The best I can think of is:

  [&&] (@a »==« @b)
Vs
  [&&] (@a »eq« @b)

But this only works for nice flat structures. For arbitrary tree 
structures, we probably need adverbs on a comparison op (I think Larry 
mentioned this a few posts back) ... but if we're going with adverbs, do 
we really need 5 different base operators? Are all of the 5 so common 
that it would be cumbersome to require adverbs for their behavior?
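The difference between the two hyperoperator comparisons can be sketched in Python: numeric comparison treats "2.0" and 2 as equal, string-wise comparison does not.

```python
a = [1, "2.0", 3]
b = [1, 2, 3]

# Analogue of [&&] (@a >>==<< @b): elementwise numeric equality.
numeric_equal = all(float(x) == float(y) for x, y in zip(a, b))

# Analogue of [&&] (@a >>eq<< @b): elementwise string equality.
string_equal = all(str(x) == str(y) for x, y in zip(a, b))
```

So the answer to "true or false?" depends entirely on which comparison the === semantics delegate to, which is the question the post raises.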


Also, when sorting things, maybe deep inequalities would be useful, too.


Re: === and array-refs

2006-08-17 Thread Dave Whipp

David Green wrote:



No, look at the example I've been using.  Two arrays (1, 2, \@x) and (1, 
2, \@y) clearly have different (unevaluated) contents.  "eqv" only tells 
me whether they have the same value (when @x and @y are evaluated).  
That's a different question -- yes, it's the more common question, but I 
think the comparison I want to make is just as reasonable as ===, except 
there's no easy way to do it.



does "*$a === *$b" work? I.e. splat the two arrays into sequences, and 
then do the immutable compare on those sequences.


Re: Mutability vs Laziness

2006-09-25 Thread Dave Whipp

Aaron Sherman wrote:
It seems to me that there are three core attributes, each of which has 
two states:


Mutability: true, false
Laziness: true, false
Ordered: true, false


I think there's a 4th: exclusivity: whether or not duplicate elements 
are permitted/exposed (i.e. the difference between a set and a bag). 
This is orthogonal to orderedness.


Re: "Don't tell me what I can't do!"

2006-10-02 Thread Dave Whipp

Smylers wrote:

use strict;


That's different: it's _you_ that's forbidding things that are otherwise
legal in your code; you can choose whether to do it or not.


Which suggests that the people wanting to specify the restrictions are 
actually asking for a way to specify additional strictures for users of 
their modules, which are still controlled by /[use|no] strict/. While it 
is true that any module is free to implement its C<import> method to 
allow its users to specify a level of strictness, it would be nice to 
abstract this type of thing into the "use strict" mechanism.


Re: "Don't tell me what I can't do!"

2006-10-02 Thread Dave Whipp

Jonathan Lang wrote:

Before we start talking about how such a thing might be implemented,
I'd like to see a solid argument in favor of implementing it at all.
What benefit can be derived by letting a module specify additional
strictures for its users?  Ditto for a role placing restrictions on
the classes that do it.


Or we could view it purely in terms of the design of the core "strict" 
and "warnings" modules: is it better to implement them as centralised 
rulesets, or as a distributed mechanism by which "core" modules can 
register module-specific strictures/warnings/diagnostics. If it makes 
sense for the core strictures to be decentralized, then the ability for 
non-core modules to make use of the same mechanism comes almost for free 
(and therefore it doesn't need much justification beyond the fact that 
some people think it might be a nice toy to use, abuse, and generally 
experiment with).


Re: "Don't tell me what I can't do!"

2006-10-02 Thread Dave Whipp

Jonathan Lang wrote:

Dave Whipp wrote:


Or we could view it purely in terms of the design of the core "strict"
and "warnings" modules: is it better to implement them as centralised
rulesets, or as a distributed mechanism by which "core" modules can
register module-specific strictures/warnings/diagnostics.



Question: if module A uses strict, and module B uses module A, does
module B effectively use strict?  I hope not.

I was under the impression that pragmas are local to the package in
which they're declared.  If that's the case, then pragmas will not
work for allowing one module to impose restrictions on another unless
there's a way to export pragmas.



I think your hopes are fulfilled: strictness is not transitive. However, 
that's not what I was suggesting. What I was suggesting was that if 
Module A uses "strict", and Module A uses something from Module B, then 
Module B should be able to do additional checks (at either runtime or 
compile time) based on the strictness of its caller. For example I might 
write:


  use strict;
  my Date $date = "29 February 2001";

and get a compile time error (or perhaps warning); but without the "use 
strict" the Date module might interpret the string (which comes from 
its caller) as "1 March 2001".


Re: Synposis 26 - Documentation [alpha draft]

2006-10-08 Thread Dave Whipp

Damian Conway wrote:
> Delimited blocks are bounded by C<=begin> and C<=end> markers...
> ...Typenames that are entirely lowercase (for example: C<=begin
> head1>) or entirely uppercase (for example: C<=begin SYNOPSIS>)
> are reserved.

I'm not a great fan of this concept of "reservation" when there is no 
mechanism for its enforcement (and this is perl...). Typical programmers 
ignore it, just as they ignore similar reservations of the type 
"lower-case subroutine names are reserved".


If "use strict" will flag an error for their use, then perhaps "is 
reserved" would become "must be predeclared" (imported via =use). Then 
any module will be able to add its own typenames, without needing some 
distinguishing "this is a core module" trait to enable the typename. 
Reservation then simply becomes a note to module authors, not part of 
the language specification.


Re: Non-integers as language extensions (was Re: Numeric Semantics)

2007-01-04 Thread Dave Whipp

Darren Duncan wrote:

For example, the extra space of putting them aside will let us expand 
them to make them more thorough, such as dealing well with exact vs 
inexact, fixed vs infinite length, fuzzy or interval based vs not, 
caring about sigfigs or not, real vs complex vs quaternon, etc.


I agree with the general idea that this is non-core (from an 
implementation perspective); but one thing struck me here (slightly off 
topic, but not too far): a quaternion cannot be a Num, because anyone 
using a "Num" will assume that multiplication is commutative (for 
quaternions, $a*$b != $b*$a).


It would be good if the type system could catch this type of thing; e.g. 
as a trait on the infix:<*> operator that would prevent the composition 
of the Num role from the Quaternion role because of this operator 
behavioral mismatch. The fundamental types should offer very strong 
guarantees of their behavior: implementations can differ in their 
precision and accuracy; but not much more.
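The non-commutativity is easy to demonstrate. This Python sketch implements the standard Hamilton product on (w, x, y, z) tuples and shows i*j = k while j*i = -k:

```python
def qmul(p, q):
    """Hamilton product of two quaternions as (w, x, y, z) tuples."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)

ij = qmul(i, j)   # k
ji = qmul(j, i)   # -k: order matters
```

Any generic code that assumed `$a*$b == $b*$a` for Nums would silently give wrong answers on quaternions, which is exactly the guarantee mismatch the trait should catch.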


The S13 "is commutative" trait

2007-01-16 Thread Dave Whipp
Synopsis 13 mentions an "is commutative" trait in its discussion of 
operator overloading syntax:


> Binary operators may be declared as commutative:
>
>multi sub infix:<+> (Us $us, Them $them) is commutative {
>myadd($us,$them) }

A few questions:

Is this restricted to only binary operators, or can I tag any 
function/method with the trait? The semantics would be that the current 
seq of ordered args to the function would be treated as a true 
(unordered) set for purposes of matching.


Does the fact that a match was obtained by reordering the arguments 
affect the distance metric of MMD?


Will the use of this trait catch errors such as the statement "class 
Quaternion does Num" that came up a few days ago on this list? 
(Multiplication of quaternions isn't commutative, but that of Nums is.)


Does the trait only apply within one region of the arglist, or can I 
create a 1-arg method that is commutative between the "self" arg and its 
data arg? (I assume not -- I can't quite work out what that would mean)


Re: Numeric Semantics

2007-01-22 Thread Dave Whipp

Doug McNutt wrote:

At 00:32 + 1/23/07, Smylers wrote:


% perl -wle 'print 99 / 2'
49.5



> I would expect the line to return 49 because you surely meant integer
> division. Perl 5 just doesn't have a user-available type integer.

I'd find that somewhat unhelpful. Especially on a one-liner, literals 
should be Num-bers, because that's what's usually intended. Either that, 
or infix:</>(Int,Int-->Num) -- except when MMD on return type finds a 
more constrained form.


Coercion of non-numerics to numbers

2007-03-05 Thread Dave Whipp
I was wondering about the semantics of coercion of non-numbers, so I 
experimented with the interactive Pugs on feather:


pugs> +"42"
42.0
pugs> +"x42"
0.0

I assume that pugs is assuming "no fail" in the interactive environment. 
However, is "0.0" the correct answer, or should it be one of "undef" or 
"NaN"?
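The candidate behaviours can be sketched in Python (for comparison, Python's own `float()` takes the "fail" branch; the function names here are invented):

```python
import math

def coerce_zero(s):
    """Coerce-to-zero semantics, as the pugs session above shows."""
    try:
        return float(s)
    except ValueError:
        return 0.0

def coerce_nan(s):
    """Coerce-to-NaN semantics: a quiet 'not a number' marker."""
    try:
        return float(s)
    except ValueError:
        return math.nan
```

The difference matters downstream: 0.0 silently participates in arithmetic, NaN poisons it, and fail/undef stops it.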



In my experiments, I also noticed:

pugs> my Int $a = "42"
"42"
pugs> $a.WHAT
::Str


It seems that the explicit type of $a is being ignored in the 
assignment. Am I right to assume this is simply a bug?


testcase (if someone with commit bits can add it):

{
  no fail;
  my Int $a = "not numeric";
  is $a.WHAT, ::Int, "coercion in initial assignment";
}


Re: HOWTO: Writing Perl6 Tests (was: Project Start: Section 1)

2002-11-11 Thread Dave Whipp
Garrett Goebel wrote:

> Can anyone write up a detailed document describing how one would go about
> writing Perl6 test cases and submitting them to Parrot? The parrot
> documentation on testing, is understandably focused on testing parrot...
> not the languages running on parrot.
> 
> I can't find any writeup or overview on the Perl5 regression test
> framework. Which is odd, because I'd expect something like that to exist
> in the core documentation. -Perhaps I'm just missing it...

I'm not going to attempt the "detailed" document you ask for; but I would
like to offer some thoughts.

Test suites for Perl5 modules can be somewhat opaque. It is not obvious
what is being tested. Anyone familiar with "Agile" programming methods
will know how important good tests are, and that test-code should
be as well written as actual production code. We should attempt to set
a good example.

It is often hard to write good tests. Especially if they are written
as a derivative of a spec (or, Argh, of the code itself). Writing tests
should be one of the most important activities of any module writer,
so we should want it to be one of the easiest things that one can do
in Perl6.

Making things easy is hard!

So we should be prepared to endure some pain to get it right. We should
expect to implement some tests in one way, then throw them out (or
convert them) when we find a better way. We MUST avoid setting our
testing approach in stone from Day-1.

We probably want to orient towards something like the xUnit test-suites;
but we want to do it in a way that incorporates a module's documentation
(the module in question may be CORE:: -- so lets not think of the
Perl6 documentation project as a special case).

I've just said we should expect to throw away tests when we find better
ways of doing things; so here's one to throw away:


=chapter 0 Getting Started

Every programming book starts with a simple program that prints the message 
"hello, world" to the screen. The Perl6 documentation is no exception. This 
is how to write this simple program in Perl6:

=test-src hello_world

print "hello, world\n";

=explanation
When you run this program, you should see the message on the screen

=test-stdout
hello, world
=test-end


OK, so it's a trivial test; but we must be able to write them. We'll find
various problems with alternative output devices, etc.; but this shouldn't 
put us off.

The emphasis is on using a test to explore a behaviour, from the perspective 
of a user. It is not a white-box test. The test should be readable as part 
of the documentation: not something separate, that only gurus can fathom. 
We need a standard script that runs tests that are embedded in 
documentation.


Dave.



Re: HOWTO: Writing Perl6 Tests (was: Project Start: Section 1)

2002-11-11 Thread Dave Whipp
"Sean O'Rourke" <[EMAIL PROTECTED]> wrote:
> languages/perl6/t/*/*.t is what we've got, though they're intended to
> exercise the prototype compiler, not the "real language" (which looks like
> it's changing quite a bit from what's implemented).

OK, let's take a specific test. builtins/array.t contains a test for the
reverse (array) function. Ignoring the fact that it is obviously incomplete,
I'd like to focus on the format of the test.

The test currently looks like (with a bit of cleanup):

#!perl
use strict;
use P6C::TestCompiler tests => 1;
##
output_is(<<'CODE', <<'OUT', "reverse");
sub main() {
@array = ("perl6", "is", "fun");
print @array, "\n";
print reverse @array;
print "\n", @array, "\n";
}
CODE
perl6isfun
funisperl6
perl6isfun
OUT
##

This is fine as a test, but not as documentation. Furthermore, it
depends on the "print" statement for its comparison (not necessarily bad;
but I find that "golden-output" style tests tend to become difficult to
maintain -- specific assertions tend to work better).

So what would documentation for this feature look like?

=context array ops
=function reverse
The reverse function takes an existing array, and returns a new array that
contains the same elements in the reverse order. Like all functions on
arrays, it can also be called as a method on the array, using "dot"
notation. The following example demonstrates its use:

=test simple_array_reverse
my @original = qw( foo bar baz );
my @result = reverse @original;

assert_equal( @result, qw( baz bar foo ) );
assert_equal( @original, qw( foo bar baz ) );

my @reversed_again = @result.reverse;
assert_equal( @reversed_again, qw( foo bar baz ) );
=test-end

=cut

I'm not claiming that the test is any better. But its context is that of
documentation, not code. An obvious question is how to extend it to be a
more thorough test, whilst not spoiling the documentation. We'd want to
intersperse text with the test-code; and probably mark a few bits as
"hidden" from a normal documentation view (levels of hiding might be
defined for overview vs. reference vs. guru levels of understanding).


Dave.





Re: HOWTO: Writing Perl6 Tests (was: Project Start: Section 1)

2002-11-11 Thread Dave Whipp

> Hm.  I'm not sure how well it goes with the Perl philosophy ("the perl
> language is what the perl interpreter accepts"), but we could embed the
> _real_ test cases in whatever formal spec happens.  This would be the
> excruciatingly boring document only read by people trying to implement
> perl6.  I don't think real tests, which exercise specific corner cases,
> mix very well with user-level documentation of any sort.

Yes, we should identify 2 types of tests: those that explore user-centric
corner cases; and those that explore implementation-centric corner cases.
User-centric tests are "real", but they aren't "unit-tests".

One of the goals of perl6 is to create a reasonably regular language --
without too many exceptions to exceptions of a context-specific rule. ;). If
this goal is attained, then there won't be too many user-visible corner
cases ... so the document won't be too tedious.

The perl6.documentation project should focus on these user-centric tests. It
is possible (likely) that people creating these tests will find things to
spill over onto the implementation-tests; but that probably shouldn't be a
goal of the documentation.


Dave.





Re: HOWTO: Writing Perl6 Tests (was: Project Start: Section 1)

2002-11-11 Thread Dave Whipp
"Sean O'Rourke" <[EMAIL PROTECTED]> wrote in message:
> One thing the "golden-output" has going for it is that it gets into and
> out of perl6 as quickly as possible.  In other words, it relies on
> perl6/parrot to do just about the minimum required of it, then passes
> verification off to outside tools (e.g. perl5).  I realize they can be
> fragile, but at least for the moment, we can't rely on a complete
> perl6 Test::Foo infrastructure, and I think that in general, we
> _shouldn't_ require such a thing for the very basic tests.  Because if we
> do, and something basic is broken, all the tests will break because of it,
> making the bug-hunter's job much more difficult.

I see where you are coming from ... but is the IO infrastructure really the
most primitive thing to rely on? It may be at the moment; but I expect
that it will become more complex. C may be a built-in right now;
but it should probably move to a module later.

If we can't rely on C to kill a test (and C), then
things are pretty badly broken (assuming that C exists).

If we are going to pick a very small subset on which almost all tests
will depend ... isn't it better to pick the test-infrastructure itself to be
that dependency, rather than some arbitrary module (like IO)?


Dave.





Re: HOWTO: Writing Perl6 Tests (was: Project Start: Section 1)

2002-11-11 Thread Dave Whipp
"Joseph F. Ryan" <[EMAIL PROTECTED]> wrote in message
news:3DD0674C.1080708@;osu.edu...
> A module?  For something as basic as print?
> I hope not, that would certainly be a pain.

My understanding is that C will be a method on C (or
whatever), which has a default invocant of $stdout. This module might be
included automatically, so users don't need to know about it.

Anyway, calling C "basic" is a very biased point of view. It's the
viewpoint of someone who knows how things are implemented.

> >If we can't rely on C to kill a test (and C;
> >then things are pretty badly broken (assuming that C exists).
> >
>
> I think the point that Sean was trying to make was that for some kind of
> test infrastructure to be available to Perl 6 to test Perl 6 with, it
> would have to be implemented in Perl6 first.  The problems with this are:
>
> 1.) Perl 6 at the moment is *extremely* slow.  This has much to do with
> the huge grammar PRD has to handle, but why make the testing any slower
> than it already is?
>
> 2.) If there is a problem with some part of the compiler that the testing
> mechanism was built with, it will break the testing mechanism.  This
> will needlessly break all of the tests, and could be very painful to debug.

I think it would be a mistake to assume that we can't use a testing
infrastructure for tests. Sure, there need to be some implementation-centric
unit tests that depend on almost nothing: but those tests should be run as a
guard around a main regression (i.e. a regression script should say "do a
clean checkout; run sanity checks; if OK, then run regression"). If
C fails, then things are broken.

It is also a mistake to base the decision on the fact that perl is currently
too slow. This will be fixed: performance of dispatch will be crucial to all
perl scripts, not just tests.

> >If we are going to pick a very small subset on which almost all tests
> >will depend ... isn't it better to pick the test-infrastructure itself to be
> >that dependency; rather than some arbitrary module (like IO).
>
> Well, the P6C print implementation is as basic as it gets; all it
> does is take a list of arguments, loop through them, and
> call parrot's print op on them.  Although the full implementation will
> be more complex (as it will have to handle printing to filehandles other
> than stdout), the testing implementation won't have to deal with that.
> However, assert is bound to be more complex, since it will have to handle
> and compare many different types of data structures.

This feels like a false argument (even if the conclusions are true).

Contrast: C takes a single arg, and evaluates it in boolean context.
If the result is false, then execution terminates.

OTOH, C takes a list of args; iterates through them, and calls a
polymorphic (in parrot) method on each.

If I make a primitive assert op in parrot, then the perl implementation is
rather basic.

assert_equal is a bit more complex. But the complexity can be structured. We
don't need, initially, to implement heterogeneous comparisons, for example.
At a more detailed level: a test that compares integers doesn't need strcmp
to work.
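That layering can be sketched directly. Python is used as a stand-in here, and assert_equal is the hypothetical helper under discussion, not a defined perl6 builtin:

```python
def assert_equal(got, want):
    # A structured comparison built up in layers: support homogeneous
    # types first; heterogeneous comparisons can be added later.
    if type(got) is not type(want):
        raise AssertionError("heterogeneous comparison not implemented yet")
    if got != want:
        raise AssertionError(f"{got!r} != {want!r}")

assert_equal(3, 3)            # comparing integers needs no strcmp
assert_equal([1, 2], [1, 2])  # homogeneous lists work the same way
```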

You seem to be concerned about a boot-strapping problem. I want to make the
assumption that there is a level of functionality below which it is
pointless to run user-level tests.


Dave.





Re: HOWTO: Writing Perl6 Tests (was: Project Start: Section 1)

2002-11-11 Thread Dave Whipp
"Andrew Wilson" <[EMAIL PROTECTED]> wrote
> Perl's tests are built on Test::More, it uses ok() and is() not
> assert().  If we're going to be doing test cases for perl 6 then we
> should do them using perl's standard testing format (i.e. Test::More,
> Test::Harness, etc.)

I would argue that we should write our tests using perl6's standard
format -- and we need to define that format. There may be a good
argument for using the perl5 standards; but we should explore the
alternatives. "assert" is a generally accepted term: I'm not sure
why perl should be different.

> If your program can't do basic I/O it's probably pretty broken.  Even if
> we we're to only rely on the test modules, they also need to be able to
> communicate with the outside world.

My day-job is ASIC verification: in a previous job I tested
microprocessor cores. We generally found mechanisms to communicate
pass/fail without requiring any IO capability. A common method is to
use the program counter -- if we execute the instruction at address
0x123456, then exit as a PASS; if we reach 0x654321, then fail. (we
used the assembler to get the addresses of specific pass/fail labels).

We don't need to go to these extremes for perl testing, because we have
an exit(int) capability: exit(0) means pass; anything else (including
timeout) is a fail.
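A minimal exit-status harness along those lines, sketched in Python (the run_test name and the 10-second timeout are illustrative choices, not a proposed API):

```python
import subprocess
import sys

def run_test(script, timeout=10):
    # exit(0) means PASS; any other exit status, including a timeout,
    # is a FAIL -- no IO capability is required of the program under test.
    try:
        proc = subprocess.run([sys.executable, "-c", script],
                              timeout=timeout)
    except subprocess.TimeoutExpired:
        return False
    return proc.returncode == 0

assert run_test("raise SystemExit(0)") is True
assert run_test("raise SystemExit(1)") is False
```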

The fact that we don't need C is not a good argument for
not using it. Perl tests should assume that Parrot works!


Dave.

--
Dave Whipp, Senior Verification Engineer,
Fast-Chip inc., 950 Kifer Rd, Sunnyvale, CA. 94086
tel: 408 523 8071; http://www.fast-chip.com
Opinions my own; statements of fact may be in error.





Re: HOWTO: Writing Perl6 Tests (was: Project Start: Section 1)

2002-11-12 Thread Dave Whipp
Joseph F. Ryan wrote:

Dave Whipp wrote:

The fact that we don't need C is not a good argument for
not using it. Perl tests should assume that Parrot works!



Right, so whats wrong with using one of parrot's most basic ops?  Thats 
all perl6 print
is;  a small wrapper around a basic parrot feature.


The problem is not that you are (or aren't) using a primitive op. The 
problem is that the testing methodology is based on an output-diff.

The most obvious drawback of a text-diff is that it separates
the "goodness" criteria from the detection point.
Instead of saying "assert $a==5", you say

{
  ...
  print $a, "\n";
  ...
}
CODE
...
5
...
OUT


If you are new to the test, there is a cognitive problem associating
the "5" with the $a. You could try using comments to match them:

{
  ...
  print $a, ' # comparing value of $a', "\n"; # Expect 5
  ...
}
CODE
...
5 # comparing value of $a
...
OUT

but now you're adding extra fluff: if you change the test,
you've got more places to make mistakes. The alternative
"assert" localises the condition in both time and space.

C is also more scalable. If you are comparing 2 complex
structures (e.g. cyclic graphs), then you have to tediously
produce the golden output (you might choose to run the test,
and then cut-and-paste, but that violates the rule that you
should know the expected result before you run the test).

The alternative to the big output would be to internally check
various predicates on the data, and then print an OK/not-OK
message: but that's just an unencapsulated assert.

And talking of predicates: there are times when you want
to be fuzzy about the expected result: there may be some
freedom of implementation, where exact values are
unimportant, but some trend must be true. Consider:

  $start_time = time;
  $prev_time = $start_time;
  for 1..10 {
sleep 1;
$new_time = time;
assert($new_time >= $prev_time);
$prev_time = $new_time;
  }
  assert ($new_time - $start_time <= 11);

Writing that as a golden-diff test is somewhat awkward.
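For comparison, the predicate style is directly runnable. A Python transliteration of the sketch above (shorter sleeps so it runs quickly; the bounds are illustrative):

```python
import time

start = prev = time.monotonic()
for _ in range(5):
    time.sleep(0.01)
    now = time.monotonic()
    # the clock must never run backwards ...
    assert now >= prev
    prev = now
# ... and the loop must finish in a plausible amount of time;
# the exact elapsed value is deliberately left fuzzy.
assert now - start < 5.0
```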

There are occasionally reasons to use "print" style
testing; but it should be the exception, not the rule.



Dave.



Re: HOWTO: Writing Perl6 Tests (was: Project Start: Section 1)

2002-11-12 Thread Dave Whipp
Richard Nuttall wrote:


I agree with that. Take the example of reverse (array) in this thread.
Really, the testing should include a number of other tests to be complete,
including thorough testing of boundary conditions.

e.g. - tests of reverse on
0. undef
1. Empty list
2. (0..Inf) - Error ?
3. Mixed data types
4. Complex data types
5. Big (long) lists
6. Big (individual items) lists
7. Array References
8. Things that should raise a compile-time error
9. Things that should raise a run-time error

This gets pretty boring in main documentation.
Writing a complete test suite really also needs reasonable knowledge
of how the internals are written in order to understand the kinds of
tests that are likely to provoke errors. (More thoughts on this if 
requested).

This get back to defining the focus/level of the testing that we want to 
achieve. Some of these items may make sense for paranoid testing; but 
not as part of a comprehensive test suite.

Consider item 0. Do we need to test C? The answer is 
probably "no": conversion of undef to an empty list is a property of the 
list context in the caller. We tested that as part of our comprehensive 
set of arglist evaluations. The reverse function never sees the undef, 
so we don't need to test it.

Item 1 may be worth testing; but then we see that what we really have is 
a parameterised test: one control path with multiple datasets. One 
parameterised test would cover most of the other items.

The most interesting case is Item 2. This is a question that a user 
might want to ask. The question is a language issue, not implementation. 
Derivative thoughts ask about lazy lists in general (is the reverse 
still lazy?); and hence about tied lists. Perhaps there is an interface 
to list-like objects: perhaps we need to document/test that.

In summary: yes, lists of tests can get boring; and yes, we would want 
to construct documentation that hides most of the tedious details. I 
treat it as a challenge: to create a unified infrastructure for 
documentation and testing, which is neither too tedious for the user, 
nor too vague for the tester; but which has projections (viewpoints) 
that give the desired level of boredom. Perhaps it's not possible, but we 
should at least try. Perhaps we can only go as far as creating 
executable examples as tests. But if we can get that far, then most of 
the infrastructure for a more detailed (boring) document will be in place.


Dave.



Re: Docs Testing Format (was Re: HOWTO: Writing Perl6 Tests)

2002-11-12 Thread Dave Whipp
"Chromatic" <[EMAIL PROTECTED]> wrote:
> Advantages of inline tests:
> - close to the documentation
> - one place to update
> - harder for people to update docs without finding code

Plus, it gives us a mechanism to validate example-code
within documents

> Disadvantages:
> - doc tools must skip tests normally
> - pure-doc patches are harder
> - some tests (bugs, regressions) don't naturally fit in docs
> - makes docs much larger
> - adds an extra step to extract tests before running them
> - adds weight to installed documents (or they must be extracted)

These seem to be reasonable arguments; though the issues of file-size and the
need for extraction seem a bit weak. Oh, and bugs do belong in documents:
"errata".

The majority of tests do not belong in documents, for the simple reason that
they are implementation-centric, not user-centric. But then, in perl, every
file can contain POD, so even external test files can be documents
(documents that are not part of the perl6 documentation project).

> Advantages of external tests:
> - tests can be grouped by purpose/thing tested within tests
> - test files can be smaller
> - individual tests can be run during development
> - tests can be grouped by subsystem/purpose

All of these seem to be red herrings: obviously the individual files are
smaller if you split them; but the boilerplate probably makes the total
size greater than for the integrated form. Similarly, arguments based on
structure are probably neutral. All formats will have structure; and all
will/should allow us to run individual tests. The important arguments should
be based on human psychology: which is easier to comprehend and maintain?

> Disadvantages of external tests:
> - proper organization is important
> - multiplies files
> - doc patchers may find test to patch and vice versa

These are red herrings too. Proper organisation is always important; you'll
always have multiple files (and if you include tests as part of the document
files, you'll probably end up with more doc files). It may be true that doc
patchers need to find corresponding tests ... but it's quite easy to miss a
test even if it's in the same file.

> On the whole, I prefer external tests.  Brent's schema looks good.
>
> In Perl 5 land, the culture expects documentation and test patches with
the
> addition of a new feature.  I think we can cultivate that for Perl 6.  As
> Brent also alluded, there will probably be standalone regression tests
anyway.

Maybe there's a terminology problem: but what is a regression test? In my
world, we create a regression by running existing tests: we don't write a
special test suite for the regression. There may be a small number of tests
that we exclude from the regression, but then the special case is the
non-regressed tests.

I'm happy pick a format and run with it. When we've a few micro-sections
done, then we can review. I see (in another post) that Mike has opted for
external, "without objection". I'm abstaining. But I would like to see
executable examples as part of the docs.


Dave.





Re: Literal Values

2002-11-12 Thread Dave Whipp
> > output_is(<<'CODE', <<'OUT', "Simple Floats");
> > print 4.5;
> > print 0.0;
> > print 13.12343
> > CODE
> > 4.50.013.12343
> > OUT
> >
> >I'd be more comfortable with a newline between the numbers, just in case.
It's
> >not an issue in the string tests.
>
> Alright, fine by me; I was wondering on that myself.  Done & Updated.

When I look at this, I find myself wanting to separate the control from the
data. Here's an alternative:

my @input  = qw( 4.5   0.0   13.12343 );
my @output = qw( 4.5   0.0   13.12343 ); # can't assume that input==output

my $code = join(";", map {"print $_"} @input);
my $expect = join( "", @output);
output_is($code, $expect, "Simple Floats");

This is, perhaps, slightly harder to grok initially. But it's easier to
extend the test data; and also to make control-path changes (such as adding
the \n to the print statement). It might be better to use a hash for the
test data.

It is possible to make this type of test much easier to read. A mechanism I
have used in the past is to put the test data into a table (I used HTML).
Then, you have the test data and expected output as a nice table in a
document; and a simple piece of code to extract tests from it (assuming you
use a perl5 module to parse the table).
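A sketch of the idea in Python (the table layout and the build_golden_test name are made up for illustration):

```python
# Each row pairs an input literal with the output we expect print to
# produce; the table is the single source of truth for the test.
CASES = [
    ("4.5",      "4.5"),
    ("0.0",      "0.0"),
    ("13.12343", "13.12343"),
]

def build_golden_test(cases):
    # Derive both the program text and its golden output from the table,
    # so control-path changes happen in exactly one place.
    code = ";".join(f"print {src}" for src, _ in cases)
    expect = "".join(out for _, out in cases)
    return code, expect

code, expect = build_golden_test(CASES)
assert code == "print 4.5;print 0.0;print 13.12343"
assert expect == "4.50.013.12343"
```

Adding a row, or adding "\n" to every print, is then a one-line change.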


Dave.





Re: Literals, take 2

2002-11-13 Thread Dave Whipp

> except for obfuscatory purposes.  Besides, if we allow dots for
> floating point numbers how do we represent this integer:
>
> 256:234.254

Using this notation is cute: a generalization that lets us specify a strange
thing. But what are the reasons for using such a thing?

1) an alternative to C
2) an IP address.

Creating a byte-string is not a common thing (in most modules). It's the type
of thing that is better done as a function than as syntax (especially if
functions can be evaluated at compile time).

The most common reason for dotted-decimal notation is IP addresses. If
that is the motivation, then the cute trick doesn't suffice: I can't use it
to specify IPv6 addresses (which are hex numbers separated by colons, with a
double-colon representing zero-padding in the middle). Syntax for bases 2,
10 and 16 seems like a good thing. If that syntax generalizes, then fine: but
do we really need to go beyond base 36? If so, is dotted-decimal notation
the most appropriate (I might want to use dotted-hex???).

The original question (snipped) concerned the use of the binary point for
specifying floating-point numbers. Such things are used, though only in
restricted domains. For example, IEEE 754 single-precision numbers use a
23-bit mantissa, which is simply the 23 bits that follow the binary point
(well, almost). Sure, we don't usually write it that way; but the concept is valid.

I'd prefer to use the dot as a radix point, in whatever base the number
is specified. Why should base 10 be an exception to a generalization? IP
addresses are probably better represented with an explicit prefix: e.g.
IP:255.255.255.0; or maybe even a function that accepts raw string input
(another example of such things is C).


Dave.





Re: Numeric Literals (Summary)

2002-11-14 Thread Dave Whipp
"Michael Lazzaro" <[EMAIL PROTECTED]> wrote
> exponential:
> -1.23e4 # num
> -1.23E4 # num (identical)

And now I know why we can't use C<.> as a floating point in base 16:

1.5e1 == 15
16:1.5e1 != (1 + 5/16) * 16

There would be an ambiguity as to the meaning of 'e', so it should probably
be a syntax error.

Question: is 10:1.5e1 also a syntax error?

What about 2:1.1e1? error, or ==3?

I like the idea of generalizing; but there's a conflict with dotted-decimal:
does 2:1.1 == 10:3; or does it equal 1.5? What does 10:1.1 equal?
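The two rival readings are easy to pin down as code. This Python sketch (dotted and fractional are hypothetical names, used only to show where the interpretations diverge):

```python
def dotted(base, digits):
    # Dotted notation: each dot-separated group is one digit in `base`,
    # as in 256:192.168.0.1 for an IP address.
    value = 0
    for group in digits.split("."):
        value = value * base + int(group)
    return value

def fractional(base, digits):
    # Radix-point notation: digits after the dot are negative powers
    # of `base`, so 2:1.1 would mean 1 + 1/2.
    whole, _, frac = digits.partition(".")
    value = int(whole, base) if whole else 0
    for i, d in enumerate(frac, start=1):
        value += int(d, base) / base ** i
    return value

assert dotted(10, "1.5") == 15        # 10:1.5 as dotted-decimal
assert fractional(10, "1.5") == 1.5   # 10:1.5 as a float
assert dotted(2, "1.1") == 3          # 2:1.1 == 10:3 ...
assert fractional(2, "1.1") == 1.5    # ... or does it equal 1.5?
```

One literal syntax cannot serve both meanings, which is the conflict described above.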


Confused,

Dave.





Re: Perl 6 Test Organization

2002-11-15 Thread Dave Whipp
Chromatic wrote:

I'm prepared to start checking in Perl 6 tests on behalf of the Perl 6
documentation folks.  These should be considered functional tests -- they are
exploring the behavior we expect from Perl 6.  Anything that's not yet
implemented will be marked as a TODO test, and we'll figure out a way to extract
the unimplemented features so people will have small tasks to accomplish.

Brent Dax had a nice suggestion for Perl 6 test organization.  I like it
tremendously.

I repost it here to solicit comments -- to make this work, I'll need to change
the Perl 6 test harness to find all of the tests.  I'll also need to create
several different subdirectories as appropriate.  Rearranging things after the
fact is a bit of a chore with CVS (if you want to keep revision history), so I
welcome feedback.

Brent's post follows.

[...]


I agree that CVS makes it difficult to change things later (I assume you 
don't want to risk Subversion in its current state). However, I would 
suggest that you hold off creating the elaborate structure until we need 
it. Bucket loads of empty subdirectories benefit no one. And CVS has no 
problem adding things later.

Currently we are concentrating on the literals: we should be able to do 
most of those tests in 3-4 files (I still hope to persuade people to use 
a table-driven approach for these tests): so no subdirs are necessary. 
Also, I have a preference for flatter structures than that proposed. It 
is fair to assume that there are multiple classifications possible for 
any test: a deep structure tends to force unnecessary specialisation, 
and commitment to one particular viewpoint.

Could you elaborate on why you need to change the test harness to "find all 
of the tests": are you proposing something more complex than "find all 
*.t files (recursively)"?


Dave.



Re: [Fwd: Re: Numeric Literals (Summary)]

2002-11-15 Thread Dave Whipp
Richard Nuttall wrote:


How about

my $a = 256:192.169.34.76;
my $b = $a.base(10);
my $c = '34:13.23.0.1.23.45'.base(16);


This coupling makes me nervous. A number is a number: its value is not 
affected by its representation.

I can see that, in some scripts, it might be useful to define a property 
such that:

my $a = 26 but string_fmt("%02x"); # == ("%16.02r")
print $a, "$a", ~$a, sprintf("%s", +$a);

would print "26 1a 1a 26" (without the spaces).

However, I would not like to see such a property set by default. It has 
the potential to cause too many surprises for the unsuspecting recipient
of a number, who suddenly finds it behaves in funny ways.


Dave.



Re: Numeric Types

2002-11-15 Thread Dave Whipp
Michael Lazzaro wrote:


Does someone from internals want to take on the task of finalizing this 
list?  We need to decide if we want to support none, some, or all of 
these types/aliases.

-

The Full List of Numeric Types

In addition to the standard int and num, there are a great number of 
other numeric types available. If your program makes use of code written 
in other languages, such as C, C++, Java, C#, etc., or if you want your 
program to make use of low-level system or library calls, you will 
frequently need to use more exact types that correspond to what the 
other language expects to see. You might also wish to use the more 
specific primitive types when you can guarantee certain bounds 
restrictions on numbers, and simply want to squeeze every unnecessary 
byte out of the generated code.
>
> [ very long list snipped]

I still don't understand why we want to go to all this hassle of
completing a vast list of primitives to support mappings onto languages and 
architectures that have yet to be invented. I still prefer to keep 
things simple:


my Number $a is ctype("unsigned long int") = 42;
my Number $b is ctype("my_int32") = 42;

The definition of this type depends on the system you're running on. If 
you require a specific range, independent on compiler/architecture: then 
primitive types is the wrong mechanism:

my Integer $a is range(0..16:) = 42;
my Real $b is range(0..^1;0.001) = rand; # correct syntax?
my String $x is null_terminated is ctype(const char *) = "hello";


You can rename the types if you want; but properties are a better 
representation of constraints than type names: more precise, and more 
flexible.


Dave.



Re: Docs Testing Format

2002-11-15 Thread Dave Whipp
Piers Cawley wrote:


I'm not arguing that the unit tests themselves shouldn't carry
documentation, but that documentation (if there is any) should be
aimed at the perl6 developer. 

Depends what you mean by "perl6 developer": is that the internals 
people, or the lucky user?

Unit tests should be aimed at internals people: it would obviously be 
nice to have a few comments/POD in there.

Our focus should be the user. There are really two deliverables: 
documentation that details how to use the language; and a "language 
reference manual": which pins down every detail in an unambiguous 
manner, but is not necessarily very readable. This reference could just be 
the set of tests; but they'd have to be sufficiently readable that an 
outsider could decode them without an Enigma machine.

BTW, from my point of view, the tests should be authoritative. If the
tests and the docs disagree then, unless someone in authority rules
otherwise, the test wins.






Re: Numeric Types

2002-11-15 Thread Dave Whipp
[EMAIL PROTECTED] wrote:

Dave Whipp writes:
 > 
 > You can rename the types if you want; but properties are a better 
 > representation of constraints than type names: more precise, and more 
 > flexible.
 > 

but types *are* properties.

arcadi 

True :-(

But I think my examples somehow withstand that oversight. Parameterized 
properties are more expressive than named types.


Dave.



Re: Numeric Types

2002-11-15 Thread Dave Whipp

"Michael Lazzaro" <[EMAIL PROTECTED]> wrote
>[...]
>  So if you *knew* you were dealing with
> 16-bit unsigned integers, you could say
>
> my uint16 @numarray;
>
> and it would generate the optimal code for such an array.  You could
> instead say:
>
> my Int @numarray is ctype("unsigned short int");
>
>  but that's obviously more work, and we still have to support all
> possible combinations of unsigned/long/short/int/etc.

But if we had a (compile-time) range property, then the compiler can infer
the use of a smaller type. For example:

my int $a is range(1000..1255) is unchecked;

could be stored in just one byte (plus overhead). If you add these
properties as an optimisation (rather than to make code more expressive),
then it is good to make it a bit verbose.
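The arithmetic behind the one-byte claim checks out; a quick Python verification (the offset encoding shown is one possible implementation choice, not a defined mechanism):

```python
import math

lo, hi = 1000, 1255
distinct = hi - lo + 1                         # 256 representable values
bits = max(1, math.ceil(math.log2(distinct)))
assert bits == 8                               # fits in a single byte

# An implementation could store the offset from `lo`, not the raw value:
stored = 1042 - lo
assert 0 <= stored < 2 ** bits
```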

> So the decision, I think, is whether or not using such types should be
> encouraged, or discouraged.  I'd actually like to encourage them, for
> expert users: using primitive types when you want fast, primitive
> behaviors.

OK. My bias is slightly different: I prefer to have declarative properties
that experts use to guide the optimiser. I'd also like to see the
inter-language integration so good that it feels natural to move things into
(e.g.) C/C++ when extreme optimization is needed. If we can bind (tie?) an
array to a C++ std::vector, then we can create efficient code. But this
doesn't require primitive types in the core language.

> (One of my own "broad goals" is to see Perl be a valid choice for
> things like gif/jpeg manipulation -- not as fast as C, but not crippled
> either -- and other binary data.  I think having enough builtin types
> to mirror the C types would make that goal more explicit.)

Yes, it would make it more explicit. I guess that's why it makes me
uncomfortable. Most of the benefits could be achieved with a simple "packed
array" concept: a property on an array that says that the array is a single
value (and therefore its members can't have run-time properties, etc.).
Combined with a C compile-time property, the compiler could optimize
storage: possibly better then you could (e.g. packed, 5-bit values); and
definitely more portably. The more options you pile on, the better it can do
(e.g. direct the optimizer for time vs space). Of course, someone has to
write the optimizers: but hopefully they'll be pluggable modules.
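Packing sub-byte fields is straightforward to sketch. Python used for illustration; pack_bits/unpack_bits are made-up helpers, not a proposed parrot API:

```python
def pack_bits(values, width):
    # Pack small unsigned integers into one big int, `width` bits apiece.
    packed = 0
    for v in values:
        assert 0 <= v < (1 << width), "value out of range for field width"
        packed = (packed << width) | v
    return packed

def unpack_bits(packed, width, count):
    # Recover the fields in their original order.
    mask = (1 << width) - 1
    out = []
    for _ in range(count):
        out.append(packed & mask)
        packed >>= width
    return out[::-1]

vals = [3, 17, 30, 0, 9]  # 5-bit values: range 0..31
assert unpack_bits(pack_bits(vals, 5), 5, len(vals)) == vals
```

Five 5-bit values fit in 25 bits, versus 40 bits for byte-aligned storage; that is the kind of decision an optimizer could make from a packed-array property.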

If you have an oft-used type, that is a combination of 5 or 6 properties:
then there is a mechanism to create an alias (typedef/class) of that
composite.


Dave.





Re: Numeric Literals (Summary)

2002-11-15 Thread Dave Whipp
"Michael Lazzaro" <[EMAIL PROTECTED]> wrote
> > 1.5e1 == 15
> > 16:1.5e1 != (1 + 5/16) * 16
>
> Due to ambiguities, the proposal to allow floating point in bases other
> than 10 is therefore squished.  If anyone still wants it, we can ask
> the design team to provide a final ruling.

So what about

10:1.5

is that dotted decimal (i.e. ==15) or a float ( == 1.5)?

Both answers feel wrong to me.


Dave.





Re: Numeric Literals (Summary)

2002-11-15 Thread Dave Whipp
A couple more corner cases:

$a =  1:0; #error? or zero

$b = 4294967296:1.2.3.4  # base 2**32

printf "%32x", $b;

0001000200030004
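One nit: each base-2**32 digit spans eight hex digits, so the fully padded output would be 32 characters wide, not 16. A quick cross-check (sketched in Python rather than Perl):

```python
# Interpret 4294967296:1.2.3.4 -- dotted notation with base 2**32 digits.
digits = [1, 2, 3, 4]
value = 0
for d in digits:
    value = value * 2**32 + d

# Each base-2**32 digit occupies eight hex digits.
assert format(value, "032x") == "00000001000000020000000300000004"
```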


Dave.





Re: Numeric Literals (Summary)

2002-11-17 Thread Dave Whipp
Dave Storrs wrote:

[...] Just as an aside, this gives me an idea: would it be
feasible to allow the base to be specified as an expression instead of
a constant? (I'm pretty sure it would be useful.)  For example:



4294967296:1.2.3.4  # working with a really big base, hard to grok
2**32:1.2.3.4   # ah, much better



I very much hope that perl will do constant propagation as a 
compile-time optimization. So you could just write:

  eval( 2**32 _ ":1.2.3.4" ); # or whatever strcat is, this week.

or

  literal("$(2**32):1.2.3.4"); # have we decided on the fn name yet?


Dave.



Re: String concatentation operator

2002-11-18 Thread Dave Whipp
Dan Sugalski wrote:

The expensive part is the shared data. All the structures in an 
interpreter are too large to act on atomically without any sort of 
synchronization, so everything shared between interpreters needs to have 
a mutex associated with it. Mutex operations are generally cheap, but if 
you do enough of them they add up.

Why do we need to use preemptive threads? If Parrot is a VM, then surely 
the threading can be implemented at its level, or even higher. If it is 
the VM that implements the threading, then its data structures don't 
need to be locked. The main problem with that approach is that the 
multithreading would not be able to preempt C-level callouts: but that 
could be solved by spawning a true thread only when code makes calls out 
of the parrot VM.


Dave.



Re: Design Team Issues: Numeric Types

2002-11-18 Thread Dave Whipp
"Michael Lazzaro" <[EMAIL PROTECTED]> wrote
> (A) How shall C-like primitive types be specified, e.g. for binding
> to/from C library routines, etc?
>
>Option 1: specify as property
>
>  my numeric $a is ctype("unsigned long int");  # standard C type
>  my numeric $b is ctype("my_int32");   # user-defined
>  my numeric $c is ctype("long double");
>
>  my int $a is range(1000..1255) is unchecked;  # auto-infer 8bit

Just to clarify: I think of the latter (C) for efficient
packing into arrays (e.g. a 5-bit range can be packed efficiently,
even though there is no 5-bit c-type): binding to C routines is
probably best done explicitly.

The C property would enable performance
optimizations. Checked-ranges probably catch a few more
bugs, though.


Dave.





Re: String concatentation operator

2002-11-18 Thread Dave Whipp

"Damian Conway" <[EMAIL PROTECTED]> wrote > >my $file = open "error.log"
& "../some/other.log"; # I hope this is legal
>
> Under my junctive semantics it is. It simply calls C twice, with
> the two states, and returns a conjunction of the resulting filehandles.
> Though you probably really want a *dis*junction there.

The thing that's worrying me is: what happens when one of them throws an
exception? Can I catch half of the junction? Do the two threads ever join?
Does the exception get deferred until after all the threads have completed?
If both throw an exception: what happens then?


Dave.





Re: Help! Strings -> Numbers

2002-11-19 Thread Dave Whipp
"Michael Lazzaro" <[EMAIL PROTECTED]> wrote:
> OK, back to this, so we can finish it up: we have a number of proposals
> & questions re: string-to-num conversions, and from Luke we have some
> initial docs for them that need completing.  Can I get more feedback on
> these issues, plz, and any other String -> Number proposals/questions?
>
>
> (A) Unification of Literal <--> Stringified Numeric Behaviors

I vote yes

> (B) We want to be able to convert strings to numbers, using strings in
> any of the formats that literals understand.  If (A) is accepted, this
> is unnecessary:

Actually, even with A, I think it's good to have it explicit. It gives us a
name to hook pragmas and adjectives onto [see (E), below].

> (C) We sometimes want to be able to specify the _exact_ format a string
> should be expected as.  Suppose you wanted a user to always enter a hex
> value: they should be able to just enter "10" or "ff", and you wanted
> to convert that to 16 and 255, respectively.  So you'd maybe use
> something like:
>
>  my $s = "ff";# note there's no '0x' in front
>
>  my int $i = hex $s;
>  my int $i = $s.hex;

Yes to both these

>  my int $i = $s.numeric('%2x');  # ???

This doesn't seem to buy anything, really. I'd prefer:

  my $i = $s.sscanf('%2x');

Then all formats can be used: not just those that return numbers.

> (D) There were proposals to allow a given string to _always_ be
> numified in a certain way by attaching a property, e.g.:
>
>  my str $s is formatted('%2x');
>  $s = 'ff';
>  my int $i = $s;  # 255

My initial reaction is to axe it. If someone could demonstrate a real-world
example where it helps (and is better than a regex-based solution), I might
reconsider.

> now I'm wondering if that's buying us anything, though the inverse
> seems much more useful:
>
>  my int $i is formatted('%4x');
>  $i = 255;
>  print $i;# prints '00ff';

I can see where it might help. Though perhaps a runtime property might be
more useful. Personally though, I'd axe it for now. Perhaps someone can
write the module in a few years time.

> (E) We need to finalize what happens when a string isn't a number, or
> has additional trailing stuff.  Damian gave his initial preference, but
> we need to verify & set in stone:
>
>  my int $i =  'foo';   # 0, NaN, undef, or exception?
>  my int $i = '5foo';   # 5, NaN, undef, or exception?
>
>  my Int $i =  'foo';   # 'Int' probably makes a difference
>  my Int $i = '5foo';

This is probably a case where "it depends". It would be good to have a
predicate that checks a string for its numberness:

  my $x = "5foo";
  fail unless is_number($x);

Going further, if the string-to-number conversion [see (B), above] is a named
function, then we can have a pragma:

  use fail 'literal';

which defines the rules for the current lexical scope. The default behavior
can depend on the warning/strictness pragmas currently in force:

  use strict 'literals';
  use warnings 'literals';

should be available.
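The "it depends" policy sketched above — a numberness predicate plus a pragma that chooses between silent coercion and failure — can be illustrated outside Perl. This is a Python sketch; the function names (`is_number`, `numify`) and the `strict` flag are mine, standing in for the proposed pragma, not proposed API:

```python
import re

_NUM = re.compile(r'[+-]?\d+(?:\.\d+)?')

def is_number(s):
    # True only when the whole string is numeric (the strict reading).
    return _NUM.fullmatch(s) is not None

def numify(s, strict=False):
    m = _NUM.match(s)
    if strict:
        # Mimics `use strict 'literals'`: trailing junk is an error.
        if m is None or m.end() != len(s):
            raise ValueError(f"not a number: {s!r}")
        return float(m.group())
    # Lenient mode mimics Perl 5: leading numeric prefix, else 0.
    return float(m.group()) if m else 0.0

print(numify("5foo"))    # -> 5.0 (lenient: leading prefix)
print(is_number("5foo")) # -> False
```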


Dave.





Re: String to Num (was Re: Numeric Literals (Summary))

2002-11-20 Thread Dave Whipp

"Larry Wall" <[EMAIL PROTECTED]> wrote in message
[EMAIL PROTECTED]">news:[EMAIL PROTECTED]...
> On Wed, Nov 20, 2002 at 11:57:33AM -0800, Michael Lazzaro wrote:
> :  and _I'm_ trying to promote the reuse of the old "oct/hex"
> : functions to do a similar both-way thing, such that:
>
> What's a two-way function supposed to return if you pass it something
> that has both a string and a numeric value?  Convert it both directions?

It returns a junction, of course ;-)

Seriously though, I dislike the proposal. Of course, if we have a
'do-it-as-a-string' operator modifier (As discussed for bitops, etc), then
we could have:

  ~hex(10) eq 'a'
  hex(10) eq '16'

But I still tend to read that ~ as a 1's complement.


Dave.





Re: Numeric Literals (Summary)

2002-11-20 Thread Dave Whipp
"Martin D Kealey" <[EMAIL PROTECTED]> wrote
> I would suggest that exponent-radix should default to the same as radix.
>
> So
>
>   10:1.2.3:4.5:6== 12345
>   2:1:1:1110== 0x6000
>   60:22.0.-27::-2   == 21.9925
>

For some reason, I find those almost impossible to read.

We have constant-propagation in the compiler (or can assume we do); and we
have overloading of functions. So the following could behave like a literal:

  $a = num( radix=>256, digits => [ [0xff, 0b] , [255, 0c377] ],
exponent => 2<<16 );

  assert( $a == 0x );

Anyone who wants to do something that nasty shouldn't care that it looks
like a function call. It's a very rare thing to do, so it doesn't need
special syntax.


Dave.





Re: Perl 6 Test Organization

2002-11-20 Thread Dave Whipp
"Nicholas Clark" <[EMAIL PROTECTED]> wrote
> On Thu, Nov 14, 2002 at 08:53:02PM -0800, chromatic wrote:
> > Brent Dax had a nice suggestion for Perl 6 test organization.  I like it
> > tremendously.
> >
> > I repost it here to solicit comments -- to make this work, I'll need
> > to change
>
> Did anyone comment on it? It seems sane to me, and I certainly can't
> suggest a better way to do it.

Yes, I did. I feel it is too complex for our current status: we are
currently talking about literals, and we only have about 2 files-worth of
tests so far. We could guess that the rest of the structure seems reasonable
... but given the problems with CVS when you want to change hierarchies, it
is best not to commit ourselves until we have something to put in all those
subdirs.

A problem that I do have with the structure is that I'm not sure exactly
where the literals tests will go -- I'm not even sure precisely how to
partition them. I currently see:

  numeric_literals.t
  string_literals.t

There may be no good reason to have these as two files. One might suffice
... or perhaps 3 is better (separate file for error cases?). Do they all
belong in a single subdir?

I prefer to work this type of thing bottom-up: but CVS can cause problems.
So it's better to avoid commitment for as long as possible. I propose that
all we have so far is:

  t/scalar/literals

and perhaps insufficient files to justify the literals subdir.


Dave.





Re: Perl 6 Test Organization

2002-11-20 Thread Dave Whipp
"David Whipp" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
m...
>
> Here's an updated numbers.t file: I'm not sure that everything is
> up-to-date; but I find it clearer. I fixed a few bugs, and merged in the
> radii tests.
>

The attachments on that previous post seemed to go wrong: here it is,
inline. (Note that this version actually prints the .t file to stdout: it
would be easy to have it call C<output_is> directly.) I'm pretty sure a few
of the test cases are incorrect (e.g. upper-case 0X00cc should be an
error) -- easier for people to see/comment this way, anyway.

print <<'HEADER';
#!perl
use strict;
use P6C::TestCompiler tests => 18;
use Test::More qw(skip);

HEADER

my $todo = 0;
my $test_name = "";
my @cases = ();

sub write_test
{
print "TODO: {\n" if $todo;

my @results = ();
print "output_is(<<'CODE', <<'OUT', '$test_name');\n";
my $count = 1;
while (@cases)
{
my @case = @{shift @cases};
push @results, pop @case;
if (@case == 1)
{
printf 'print %s; print "\n";', $case[0];
}
elsif (@case == 2)
{
printf '%s $t%d = %s; print $t%d; print "\n";',
$case[0], $count, $case[1], $count;
}
else
{
die "unexpected test-case format: @case";
}
print "\n";
$count++;
}
print "CODE\n";
print join("\n", @results);
print "\nOUT\n";

print "}\n" if $todo;
}

do {
/TEST: (.*)/ and do { write_test(); $test_name = $1; $todo=0;
next };
/TODO: (.*)/ and do { write_test(); $test_name = $1; $todo=1;
next };

my @F = split;

next unless @F;

push @cases, [@F];

} for split "\n", <<'END_OF_TEST_DATA'; write_test;

TEST: Simple Integers

42   42
1    1
0    0


TEST: Negative Numbers

-5 -5


TEST: Simple Floats

4.5   4.5
0.0   0.0
13.12343  13.12343


TEST: Int Type

int 1    1
int 2.2  2


TEST: Num type

num 1    1
num 2.2  2.2


TODO: Format

1234567890    1234567890
1_234_567_890 1234567890
12_34_56_78_90    1234567890
1_2_3_4_5_6_7_8_9 1234567890
1_234_567_890.3_5 1234567890.35


TODO: Exponential

1.23e1  12.3
1.23E2  123
-1.23e3 -1230
-1.23E4 -12300


TODO: Big Numbers

4611686018427387904   4611686018427387904
1.3209583289235829340 1.3209583289235829340


TODO: Bigger Than Big Number (Infinity)

Inf Inf


TODO: NaN

NaN NaN


TODO: Binary

0b0110    6
0B0110    6
-0b0110  -6
0b0_1_1_0 6


TODO: Octal

0c0777    511
0C0777    511
-0c0777  -511
0c0_7_7_7 511


TODO: Hex

0x00ff    255
0x00CC    204
0X00ff    255
0X00CC    204
-0x00ff  -255
0x0_0_f_f 255


TODO: Simple Radii

2#101110  78
3#1210112 1311
8#1270    696
16#1E3A7  123815


TODO: Floating Radii

2#101110.0101 46.3125
3#1210112.21  1311.778
8#1270.674    696.8671875
16#1E3A7.AE7  123815.681396484


TODO: Formatted Radii

2#10_11_10 78
16#1_E3_A7 123815


TODO: High Range Radii

20#1gj   550
20#1GJ   550
20#1_g_j 550
20#1_GJ  550


TODO: Dotted Notation

12#10:11:10   1582
256#255:255:0:34  4294901794

END_OF_TEST_DATA
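As a sanity check on the radix rows above, here is a small converter for the dotted and fractional forms (Python, since the generated .t targets a then-unimplemented Perl 6; the `radix` function and its signature are mine, not part of any proposal). It confirms, for instance, the 256#255:255:0:34 and 8#1270.674 rows:

```python
def radix(base, digits, frac=()):
    # `digits` is the integer part, most-significant-first (dotted
    # notation); `frac` holds the digits after the radix point.
    value = 0
    for d in digits:
        value = value * base + d
    scale = 1.0
    for d in frac:
        scale /= base
        value += d * scale
    return value

print(radix(256, [255, 255, 0, 34]))     # -> 4294901794
print(radix(12, [10, 11, 10]))           # -> 1582
print(radix(8, [1, 2, 7, 0], [6, 7, 4])) # -> 696.8671875
```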





Re: Perl 6 Test Organization

2002-11-20 Thread Dave Whipp
Tanton Gibbs wrote:
> We also might want some way of specifying a test that will cause an
> error...for example
> 0b19  ERROR
>
> I'm not exactly sure how to specify this, but it is often important to
> document what is not allowed along with what is allowed.

I definitely agree that we need some error tests: I agree with your "ERROR"
indicator as the trigger: but I need an example of what the output of the
code-generator should look like. After that: yes, we should add a whole load
of negative tests.

I think that it'd also be nice to get some consensus on which format of test
we should maintain: the table version, or the raw-code version.


Dave.





Re: Perl 6 Test Organization

2002-11-20 Thread Dave Whipp
I wrote:
>I think that it'd also be nice to get some consensus on which format of
> test we should maintain: the table version, or the raw-code version.

"Joseph F. Ryan" wrote:
> I think the consensus when Chromatic brought the subject
> up was to use the testing system that Parrot uses; however,
> your table version is kinda nice.

Well, with a very minor tweak, it does use the same testing system
as Parrot: it's just a bit more structured.

The minor tweak is to generate the CODE and OUTput chunks
into strings (instead of stdout), and then call C<output_is> directly.
That style can be harder to debug, though.

"Tanton Gibbs" <[EMAIL PROTECTED]> wrote:
> I also agree that the table version is nice.  However, I wonder how
> easy it will extend.  Right now all we're doing is printing literals
> and making sure they appear correctly.  It may not be so easy when we
> have to concat 5 strings and interpolate entries.  Therefore, I would
> recommend sticking with the testing system Parrot uses.

I was wondering when someone would bring that up (someone always
does). Extensibility doesn't matter: the code generator's specific purpose
is to generate tests of numeric literals. If that isn't what you want, use
a different generator; or just stick with hand-coding.

If people are happy to use these data-oriented test-scripts, then I'm
happy to examine various groups of tests and find their abstractions.
It's just basic data-modeling, applied to source code. By modeling
each file independently, I avoid the problems associated with
infinitely flexible abstractions. I usually find that after a few rounds
of refactoring, some of the abstractions become reusable.

To address your specific "concat * 5 + interpolate" issue: presumably
such things are not done in isolation. There will be a family of tests,
which do N concatenations with M interpolation styles. A (different)
code generator can easily generate such a family, exhaustively.

For each family of tests, I think the correct approach is to start by
writing the test-code manually. But as soon as abstractions become
apparent, a session of refactoring can make the tests much more
readable and maintainable.
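The refactor-into-a-generator workflow described above is easy to sketch. This toy version (Python; the `CASES` table and `generate` function are invented for illustration) turns a literal/expected table into self-checking assertions rather than a .t file, but the shape — data table in, test code out — is the same:

```python
# A data-oriented test table: (expression, expected output), in the
# spirit of the numeric-literal generator above.
CASES = [
    ("42", "42"),
    ("-5", "-5"),
    ("4.5", "4.5"),
]

def generate(cases):
    # Produce one self-checking line of code per table row.
    return "\n".join(
        f"assert str(eval({expr!r})) == {expected!r}"
        for expr, expected in cases
    )

code = generate(CASES)
exec(code)  # all rows pass silently
print(len(CASES), "cases checked")
```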


Dave.





Re: Perl 6 Test Organization

2002-11-20 Thread Dave Whipp
"Tanton Gibbs" <[EMAIL PROTECTED]> wrote:
> So, we can either use one generic test script, and write the perl6
> ourselves... or we can create N specific test scripts which generate
> the perl6 for us given a particular data set and after we have written
> the perl6 ourselves.  Sounds like duplication of effort and a
> maintenance problem to me.  I would rather stick with writing the
> perl6 and its output.

I don't think I've got the energy to debate basic SW development philosophy:
just do a google on "merciless refactoring" or "agile software development"
(or even "extreme programming").

The maintenance is only made more difficult if the abstractions are wrong.
With the correct abstractions, most changes will be to the dataset (e.g.
"add a test for -ve exponent")  and are very simple. A few changes are
extensions to the abstractions (e.g. "add an ERROR token as an output
value"), and require a simple change to the code generator. If you're
uncomfortable with using abstraction, then by all means stick with the
low-level stuff (sorry, that sounds a bit harsh). Abstractions in code are
best introduced by refactoring., not foresight.


Dave.





Re: Perl 6 Test Organization

2002-11-21 Thread Dave Whipp
"Chromatic" <[EMAIL PROTECTED]> asked:
> Where would you like to generate the test files?  Would it be part of the
> standard 'make' target?  Would it happen at the start of 'make test'?
Would we
> do it before checking the test files into source control?

My usual approach is to checkin the generator, then run it as part
of the test. If we were using Make for tests, then I could imagine:

foo.result: foo.t
foo.t > foo.result

foo.t: foo.t.pl
perl -w foo.t.pl > foo.t

I haven't actually looked at your harness, but I'm sure something
similar would work.

If the test-suite is run on multiple platforms from a shared
source-directory, then you might choose to only run the generator once.
But OTOH, it might be that the generator knows of platform-specific
issues and customizes the tests.


Dave.





Re: Help! Strings -> Numbers

2002-11-21 Thread Dave Whipp
"Michael Lazzaro" <[EMAIL PROTECTED]> wrote :
>
> 1) "Formats" as classes.  What I _want_ to do is to be able to
> associate a named "format" with a given class/instance/output, because
> I tend to use the same few formats over and over.  So if I want to
> frequently output numbers as '%-4.2d', I just call it "MoneyFormat" or
> something:
>
> [ ... snip ... ]
>
> and not deal with sprintf syntax at all, but still have something
> that's eminently adjustable, self-documenting, and has decent
> compile-time optimization possibilities.  And could change, runtime,
> according to user preferences.

An alternative, with almost the same benefits, is

  $fmt{money} = "%-4.2d";
  printf "The amount is: $fmt{money}\n", $v;

One of the things that I like about the printf style is that the values
are separated from the format. So $v can be spelt @_.

> 2) An analogue to a "rule", but for output instead of input.  As we can
> say:
> [... cut ... ]
>
> print "blah blah: <$i:MoneyFormat>";

The inverse-regex was discussed on perl6-language a few months
back (I think the context was an alternative to pack/unpack): I
don't recall any definitive conclusions. A simple proposal to pass
a match object/hash as an input to the rule falls foul of too many
reverse ambiguities. But a constrained regex syntax could be made
to work.

For your specific example, the following is probably valid syntax:

print "blah blah $( Money.fmt $v )\n";

(assuming a class-method named fmt on the Money class). If Money
were a rule, insead of a class, then a similar thing might work.

If $v were of type Money, then Money's AS_STRING method could
be over-ridden for the interpolation. I guess that's how the fmt
property would be implemented: it creates a run-time subclass that
overrides the method.


Dave.





Re: Status Summary; next steps

2002-11-25 Thread Dave Whipp

"Michael Lazzaro" <[EMAIL PROTECTED]> wrote:
> [...] and a type that matches every
> context (except void).

Actually, it might be nice to have a void type. It might seem useless:
but then, so does /dev/null.

An example, from another language, is C++ templates. Its amazing
how often I find myself needing to create a template specialization
for the void type, because I cannot declare/assign-to void variables
in C++. Even if Perl won't have templates, we still tend to be
quite dynamic with the code (code generators, eval, etc.). The
ability to declare a variable of type-Void could be helpful to
avoid special casing it in the generators.


Dave.





Re: Syntax of using Perl5 modules?

2005-05-25 Thread Dave Whipp

Autrijus Tang wrote:

So, this now works in Pugs (with a "env PUGS_EMBED=perl5" build):

use Digest--perl5;

my $cxt = Digest.SHA1;
$cxt.add('Pugs!');

# This prints: 66db83c4c3953949a30563141f08a848c4202f7f
say $cxt.hexdigest;

This includes the "Digest.pm" from Perl 5.  DBI.pm, CGI.pm etc will
also work.

Now my question is, is my choice of using the "perl5" namespace indicator a 
sane way to handle this?  Is it okay for Perl 6 to fallback to using Perl 5

automatically?  Or should I use something else than "use" entirely?



To my mind, the coupling seems wrong. If the "use" statement needs to 
state the language of the included module, then it would be difficult to 
later re-implement that module in p6.


My understanding is that there are clear rules (see S11) to distinguish 
p5 modules from p6: if the first keyword in the file is "module" or 
"class", then it's p6; otherwise assume p5.


Your use of hyphens puts your "perl5" indicator in the "URI" field 
(again, see S11), which is expected to refer to the author. "perl5" 
doesn't seem correct in this context. Even if you got rid of one of the 
hyphens (so that "perl5" is the "version"), then it still feels wrong.
Each version of a module should be free to choose its language, but that 
should be encapsulated within the module.


If you do want to impose the language from the "use"er, then I'd expect
an adverbial syntax: "use:p5 Digest;".



Dave.


Re: reduce metaoperator on an empty list

2005-05-31 Thread Dave Whipp

Damian Conway wrote:

0 args:  fail (i.e. thrown or unthrown exception depending on use 
fatal)

...


$sum  = ([+] @values err 0);
$prod = ([*] @values err 1);
$prob = ([*] @probs  err 0);


Just wanted to check, if I've said "use fatal": will that "err 0" DWIM, 
or will the fatalness win? Would I need to write


Re: reduce metaoperator on an empty list

2005-05-31 Thread Dave Whipp

Damian Conway wrote:

And what you'd need to write would be:

$sum  = (try{ [+] @values } err 0);


The "err ..." idiom seems too useful to have it break in this case. 
Afterall, the purpose of "err 0" is to tell the stupid computer that I 
know what to do with the empty-array scenario.


Feels like fine grained control over fatalness is needed (and not just 
per-function). For example "use fatal :void_context_only" would be nice, 
but wouldn't help in this case. This needs "use fatal :void_or_assign".


Re: reduce metaoperator on an empty list

2005-06-01 Thread Dave Whipp

Luke Palmer wrote:


For something like:

$ordered = [<] @array;

If @array is empty, is $ordered supposed to be true or false?   It
certainly shouldn't be anything but those two, because < is a boolean
operator.


I have no problem with 3-state logic systems (true, false, undef) if 
this is what is required to let me choose the corner-case behavior.


Damian previously wrote:
> 2+ args: interpolate specified operator
> 1 arg:   return that arg
> 0 args:  fail (thrown or unthrown exception depending on use fatal)

The 1-arg case doesn't seem to work right with the [<] operator:

  ? [<] 1
  ? [<] 0

If the [<] is taken to mean "ordered", then it doesn't seem right that 
these two tests would give different results. In this case, I need to 
special case both the 0-arg and 1-arg scenarios. We either need to hard 
code these special-cases into perl (belch), or we need to make it easy 
to code both special-cases inline in the code, or we need a better 
general-case rule.


One approach might be to reverse the direction of the definitions. That 
is, instead of defining the binary form and then autogeneralizing in 
terms of "join", we might define operators in terms of thier reduction 
behavior, and then autospecialize to the binary case. Of course, that 
still doesn't help for Damian's "product Vs factorial" example for the 
0-arg case.


Or we take the well trodden road of ignoring mathematical correctness 
and simply state "this is what perl does: take it or leave it".
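The [<] corner cases do have a conventional answer elsewhere: treat "ordered" as a pairwise-all test, which is vacuously true for zero and one element, sidestepping the 0-arg/1-arg asymmetry above. A sketch (the `ordered` function is mine, illustrating the convention, not a Perl 6 proposal):

```python
def ordered(xs):
    # True iff each adjacent pair is strictly increasing; empty and
    # single-element sequences are vacuously ordered.
    return all(a < b for a, b in zip(xs, xs[1:]))

print(ordered([]))         # -> True
print(ordered([1]))        # -> True
print(ordered([1, 0]))     # -> False
print(ordered([1, 2, 3]))  # -> True
```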


Re: sub my_zip (...?) {}

2005-06-16 Thread Dave Whipp

Larry Wall wrote:

 You must
specify @foo[[;[EMAIL PROTECTED] or @foo[()] <== @bar to get the special mark.


I'm uncomfortable with the specific syntax of @a[()] because generated
code might sometimes want to generate an empty list, and special-casing 
that sort of thing is always a pain (and fragile). An empty list of 
subscripts should return an empty slice.


What this mark is really trying to say is "The definition of the indices 
is coming from elsewhere". I'm wondering if these semtantics would make 
it appropriate to use the yada operator here:


   @foo[...] <== @bar;


Dave.


Re: Time::Local

2005-07-05 Thread Dave Whipp

Larry Wall wrote:


The time function always returns the time in floating point.


I don't understand why time() should return a numeric value at all. 
Surely it should return a DateTime (or Time) object. Using epochs in a 
high level language seems like a really bad thing to be doing. If I want 
"duration since epoch" then I should subtract the epoch from the time -- 
resulting in a duration (which may indeed be a floating point value).


  my DateTime $epoch is constant = DateTime "2000-01-01 00:00:00";
  my Num $seconds_since_epoch = time - $epoch;


> In fact,
> all numeric times are Num in Perl 6.  Then you don't have to worry
> about whether picosecond resolution is good enough.  Time and
> duration objects can, of course, do whatever they like internally,
> and among themselves, but someone who says sleep($PI) should get a
> $PI second sleep.

For the sleep function, it seems reasonable to accept either a DateTime 
or a Duration, which would sleep either until the requested time, or for 
the requested duration.



Sorry about the rant, but you seem to have pushed one of my hot buttons...


Ditto


Larry


Re: Time::Local

2005-07-05 Thread Dave Whipp

Douglas P. McNutt wrote:

At 10:55 -0700 7/5/05, Dave Whipp wrote:


I don't understand why time() should return a numeric value at all.


Some of us like to use epoch time, as an integer, to create unique file names which sort 
"right" in a shell or GUI.



You can use "{time - $epoch}" or "{time.as<%d>}" or "{int time}". (That 
last one is not "{+time}", because that would be a floating-point value, 
not an integer).


Re: Time::Local

2005-07-05 Thread Dave Whipp

Darren Duncan wrote:
The object 
should not store anything other than this single numerical value 
internally (smart caching of conversions aside).


I think we can all either agree with that, or dont-care it. The internal 
implementation is an implementation issue (or library). It doesn't need 
to be defined by the language. The one important thing is that the
language shouldn't define semantics that require more than this single
value (e.g. we shouldn't associate the epoch with the object).


Re: Time::Local -- and lexical scope

2005-07-05 Thread Dave Whipp

Dave Whipp wrote:

You can use "{time - $epoch}" or "{time.as<%d>}" or "{int time}". (That 
last one is not "{+time}", because that would be a floating-point value, 
not an integer).


I was thinking: an epoch is just a time, and "int time" is a duration -- 
the number of seconds since the current epoch. So, the following should 
work:


for 1 .. 2 -> {
   use epoch time();
   sleep 6;
   say int time;
}

This should print something close to "6", twice.

But something niggled me: does the value of the RHS of a "use" get
evaluated at run time, or at compile time? In perl5, that would execute
the C<time()> call only once.


I could see 3 possible behaviors:

1. C<use> sets the epoch for each iteration of the loop, thus calling
time() once per iteration


2. C<use> executes just once, at compile time. Thus the second iteration
prints approximately "12"


3. C<use> does a compile-time binding of the epoch to the time()
function. So each iteration prints "0".



Which actually happens?


Re: File.seek() interface

2005-07-07 Thread Dave Whipp

Wolverian wrote:

Or maybe we don't need such an adverb at all, and instead use

$fh.seek($fh.end - 10);

I'm a pretty high level guy, so I don't know about the performance
implications of that. Maybe we want to keep seek() low level, anyway.



Any thoughts/decisions?


We should approach this from the perspective that $fh is an iterator, so
the general problem is "how do we navigate a random-access iterator?".


I have a feeling that the "correct" semantics are closer to:

  $fh = $fh.file.end - 10

though the short form ($fh = $fh.end - 10) is a reasonable shortcut.


Re: Hackathon notes

2005-07-08 Thread Dave Whipp

Rod Adams wrote:


   multi method foo#bar (Num x) {...}
   multi method foo#fiz (String x) {...}

   $y = 42;
   $obj.foo#fiz($y); # even though $y looks like a Num
   $obj.foo($z); # let MMD sort it out.



Having additional tags might also give us something to hang priority 
traits off: "foo#bar is more_specific_than(foo#baz);" might influence 
the order of clauses in the implicit given/when block. It feels like 
there should be a generalization of operator precidence here (even 
thought he two are superficially dis-similar, the looser/tighter concept 
appears valid).


Re: Perl 6 Summary for 2005-07-05 through 2005-07-12

2005-07-13 Thread Dave Whipp

Damian Conway wrote:


Important qualification:

  Within a method or submethod, C<.method> only works when
  C<$_ =:= $?SELF>.

C<.method> is perfectly legal on *any* topic anywhere that $?SELF
doesn't exist.


Just to be clear, this includes any method/submethod with an explicitly 
named invocant, I hope.


Re: Optimization pipeline

2005-07-14 Thread Dave Whipp

Yuval Kogman wrote:


- optimizers stack on top of each other
- the output of each one is executable
- optimizers work in a coroutine, and are preemptable
- optimizers are small
- optimizers operate with a certain section of code in mind


> ...

Optimizers get time slices to operate on code as it is needed. They
get small portions - on the first run only simple optimizations are
expected to actually finish.

> ...

A couple of thoughts spring to mind: in these coming times of ubiquitous 
multi-core computing with software transaction support, perhaps it would 
be realistic to place optimisation on a low-priority thread. So much 
code is single-threaded that anything we can do to make use of 
dual-cores is likely to improve system efficiency.


The other thing that I thought of was the question of errors detected 
during optimisations. It is possible that an optimiser will do a more 
in-depth type inference (or dataflow analysis, etc.) and find errors in 
the code (e.g. gcc -O2 adds warnings for uninitialised variables). This 
would be a compile-time error that occurs while the code is running. If 
a program has been running for several hours when the problem is found, 
what do you do with the error? Would you even want to send a warning to 
stderr?


Re: Referring to package variables in the default namespace in p6

2005-07-21 Thread Dave Whipp

"TSa (Thomas Sandlaß)" wrote:


Here your expectations might be disappointed, sorry.

The non-symbolic form $*Main::foo = 'bar' creates code that
makes sure that the lhs results in a proper scalar container.
The symbolic form might not be so nice and return undef!
Then undef = 'bar' of course let's your program die.


When something knows that it is being evaluated in lvalue context, it 
should probably return something like "undef but autovivify:{...}". The
assignment operator could then check for the "autovivify" property when
its LHS is undefined.


Re: Messing with the type heirarchy

2005-07-31 Thread Dave Whipp

Luke Palmer wrote:


Everything that is a Num is a Complex right?


Not according to Liskov.  But this is one of the standard OO
paradoxes, and we're hoping roles are the way out of it.


Well, everything that is a Num is a Complex in a value-typed world,
which Num and Complex are in.  I don't like reference types much
(though I do admit they are necessary in a language like Perl), and
I'm not sure how this fits there anymore.  Anyway, that's beside the
point, since a supertyping need is still there for referential types.


Doesn't the problem largely go away if we allow Num to be a more general 
numeric type, and introduce, say, Real for the more constrained set of 
numbers that Num currently represents. Of course, if it were truly the
most general, then it'd permit quaternions, etc., but I think that most 
people would be happy for Num to be a simplest possible complete 
arithmetic type.


"set" questions -- Re: $object.meta.isa(?) redux

2005-08-10 Thread Dave Whipp

Luke Palmer wrote:


A new development in perl 6 land that will make some folks very happy.
 There is now a Set role.  Among its operations are (including
parentheses):

(+)   Union
(*)   Intersection
(-)   Difference
(<=)  Subset
(<)   Proper subset
(>=)  Superset
(>)   Proper superset
(in)  Element
(=)   Set equality



Do Sets get a sigil? I'd guess that % would be appropriate, because a 
hash is simply "Set of Pair" where the membership equivalence class is 
simply $^member.key. What syntax is used to associate the equiv-class 
with a set?
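The operator table above corresponds one-for-one with the set operations available in most languages; a quick mapping onto Python sets (the sample values are mine):

```python
a, b = {1, 2, 3}, {2, 3, 4}

# Sorted for deterministic display; sets themselves are unordered.
print(sorted(a | b))  # (+) union         -> [1, 2, 3, 4]
print(sorted(a & b))  # (*) intersection  -> [2, 3]
print(sorted(a - b))  # (-) difference    -> [1]
print({2, 3} <= a)    # (<=) subset       -> True
print({2, 3} < a)     # (<) proper subset -> True
print(2 in a)         # (in) element      -> True
print(a == {3, 2, 1}) # (=) set equality  -> True
```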


Re: Demagicalizing pairs

2005-08-24 Thread Dave Whipp
I've been trying to think about how to make this read right without too
much line noise. I think Luke's keyword approach ("named") is on the
right track.


If we want named params at both start and end, then it's bound to be a
bit confusing. But perhaps we can say that they're always at the end -- 
but either at the end of the invocant section or the end of the args.


Also, "named" is a bit of a clumsy name. "Where" and "given" are taken, 
so I'll use "with":


I think something like these read nicely, without too much line noise:

  draw_polygon $canvas: @vertices with color => "red";

  draw_polygon $canvas with color => "red": @vertices;


Dave.


Parsing indent-sensitive languages

2005-09-08 Thread Dave Whipp
If I want to parse a language that is sensitive to whitespace 
indentation (e.g. Python, Haskell), how do I do it using P6 rules/grammars?


The way I'd usually handle it is to have a lexer that examines leading 
whitespace and converts it into "indent" and "unindent" tokens. The 
grammar can then use these tokens in the same way that it would any
other block-delimiter.


This requires a stateful lexer, because to work out the number of 
"unindent" tokens on a line, it needs to know what the indentation 
positions are. How would I write a P6 rule that defines <indent> and 
<unindent> tokens? Alternatively (if a different approach is needed) how 
would I use P6 to parse such a language?
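The stateful-lexer approach is easy to sketch outside of rules; a minimal 
Python illustration (assuming space-only indentation -- tab expansion is 
left out):

```python
# A minimal stateful lexer that turns leading whitespace into INDENT /
# DEDENT tokens, the classic approach for Python/Haskell-style layout.
# The stack of indentation columns is the state the post refers to.
def layout_tokens(lines):
    indents = [0]
    for line in lines:
        if not line.strip():
            continue                      # blank lines carry no layout
        width = len(line) - len(line.lstrip(" "))
        if width > indents[-1]:
            indents.append(width)
            yield ("INDENT", width)
        while width < indents[-1]:        # one line may close many blocks
            indents.pop()
            yield ("DEDENT", width)
        yield ("LINE", line.strip())
    while len(indents) > 1:               # close blocks left open at EOF
        indents.pop()
        yield ("DEDENT", 0)

src = ["if x:", "    a", "    if y:", "        b", "c"]
toks = list(layout_tokens(src))
```

Note that the final "c" line produces two DEDENT tokens at once, which is 
exactly the multiple-unindent bookkeeping the post asks about.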


Re: Parsing indent-sensitive languages

2005-09-08 Thread Dave Whipp

Damian Conway wrote:


Alternatively, you could define separate rules for the three cases:

{
    state @indents = 0;

    rule indent {
        ^^ $<ws>:=(\h*)
        { $<indent> = expand_tabs($<ws>).chars }
        <( $<indent> > @indents[-1] )>
        { let @indents = (@indents, $<indent>) }
    }

    rule outdent {
        ^^ $<ws>:=(\h*)
        { $<outdent> = expand_tabs($<ws>).chars }
        <( $<outdent> < @indents[-1] )>
        { pop @indents while @indents && $<outdent> < @indents[-1];
          let @indents = (@indents, $<outdent>);
        }
    }

    rule samedent { ... }
}


I have a couple of questions about this:

1.  It's quite possible that a code-block in a parser could call a 
function that reads a different file (e.g. for an "include <file>" 
statement). How does the state, @indents, get associated with a 
particular match? (Sure, I could do an explicit save/restore; but things 
might get harder if I was using coroutines to get concurrent matches to 
implement, say, a smart-diff script.)


2. How does the outdent rule work in the case where a line does 2 
outdents? It looks to me as if I'd only get one match of <outdent>: the 
/\h*/ match will advance the match pos, so /^^/ won't match for the 
second <outdent> on the same line, which would cause problems if I'm 
trying to match up nested blocks.



Dave.


Re: skippable arguments in for loops

2005-09-29 Thread Dave Whipp

Luke Palmer wrote:


Joked?  Every other language that has pattern matching signatures that
I know of (that is, ML family and Prolog) uses _.  Why should we break
that?  IMO, it's immediately obvious what it means.

Something tells me that in signature unification, "undef" means "this
has to be undef", much like "1" means "this has to be 1".


In Perl 6 we currently have at least two ways to say "don't care": in a 
regex, we say /./ to match anything; in a type signature, we use "Any" 
to mean that we don't care what the type is. I don't think we need 
another way to say "don't care". In fact, we could unify things:


  rules: /<Any>/ matches anything (/./ is shorthand synonym)
  binding: ($a, Any, $b) := (1,2,3);

I'll admit that "Any" doesn't immediately shout "skip", but it would 
really be the fact that there's no variable associated with it that 
means "skip". If we'd wanted to skip an integer, we could say:


  ($a, Int, $b) := (1,2,3);

Why would Perl need to add "_" as a synonym for "Any"? It's only a 
couple of characters shorter! The argument for /./ being a synonym in 
rexen is easier to make: it's common, it's legacy, and it's 4 chars shorter.
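For what it's worth, Python's `_` works the way the ML family's does, 
purely by convention, in ordinary unpacking:

```python
# `_` is just a conventional throwaway name in Python unpacking;
# nothing special happens, it simply binds and is ignored.
a, _, b = (1, 2, 3)
assert (a, b) == (1, 3)

# The same skip works for a whole run of positions:
first, *_, last = [1, 2, 3, 4, 5]
assert (first, last) == (1, 5)
```

So the precedent for `_` is strong, but as the post argues, nothing about 
it is load-bearing: any unused name would do.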


Look-ahead arguments in for loops

2005-09-29 Thread Dave Whipp

Imagine you're writing an implementation of the unix "uniq" function:

    my $prev;
    for grep {defined} @in -> $x {
        print $x unless defined $prev && $x eq $prev;
        $prev = $x;
    }

This feels clumsy. $prev seems to get in the way of what I'm trying to 
say. Could we imbue optional binding with the semantics of not being 
consumed?


  for grep {defined} @in -> $item, ?$next {
      print $item unless defined $next && $item eq $next;
  }

The same behavior, but without the variable outside the loop scope.
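For comparison, the same "no state outside the loop" effect is available 
in Python via itertools.groupby, which collapses runs of equal adjacent 
items (a sketch; the None test stands in for grep {defined}):

```python
from itertools import groupby

# uniq without a $prev variable: groupby collapses runs of equal,
# adjacent items, so the loop never needs state from a prior iteration.
def uniq(items):
    return [key for key, _run in groupby(x for x in items if x is not None)]

assert uniq(["a", "a", None, "b", "b", "b", "a"]) == ["a", "b", "a"]
```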


It would also be good not to overload the meaning of ?$next to also tell 
us if we're at the end of the loop. In addition to FIRST{} and LAST{} 
blocks, could we have some implicit lexicals:


  for @in -> $item, ?$next {
      print $item if $?LAST || $item ne $next
  }


Re: Look-ahead arguments in for loops

2005-09-30 Thread Dave Whipp

Damian Conway wrote:

Rather than addition Yet Another Feature, what's wrong with just using:

for @list ¥ @list[1...] -> $curr, $next {
...
}

???


There's nothing particularly wrong with it -- just as there's nothing 
particularly wrong with any number of other "we don't need this, because 
we can program it" things. Perl 5 had many of these: "we don't need a 
switch statement", "we don't need function signatures", etc.


My original idea, not consuming optional bindings, is barely a new 
feature: just a clarification of the rules in a corner-case of the 
language. Others took the idea and ran with it and added the bells and 
whistles. I guess the best alternative is to say that optional bindings 
aren't allowed in this context -- that leaves the issue open for Perl 
6.1 (or a module).
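For readers following along in another language: Damian's 
`@list ¥ @list[1...]` is the zip-against-own-tail idiom, which in Python 
looks like this (a sketch):

```python
# zip a list against its own tail to walk (current, next) pairs;
# zip truncates at the shorter input, so the last element simply
# gets no partner.
lst = [3, 1, 4, 1, 5]
pairs = list(zip(lst, lst[1:]))
assert pairs == [(3, 1), (1, 4), (4, 1), (1, 5)]

# e.g. one-pass uniq: keep items that differ from their successor,
# plus the final item.
lst2 = [1, 1, 2, 2, 3]
deduped = [cur for cur, nxt in zip(lst2, lst2[1:]) if cur != nxt] + lst2[-1:]
assert deduped == [1, 2, 3]
```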


Re: Exceptuations

2005-10-05 Thread Dave Whipp

Luke Palmer wrote:

Of course, exactly how this "public interface" is declared is quite undefined.


Reading this thread, I find myself wondering how a resumable exception 
differs from a dynamically scoped function. Imagine this code:


sub FileNotWriteable( Str $filename ) {
  die "can't write file: $filename";
}

sub write_file (Str $filename)  {
  FileNotWriteable($filename) unless -w $filename;
  ...
}


sub my_program {

  temp sub FileNotWriteable( Str $filename ) {
return if chmod "+w", $filename;
OUTER::FileNotWriteable( $filename );
  }

  ...
  write_file("foo.txt");
  ...
}


Ignoring syntactic sugar, what semantics does exception handling have 
that a dynamically scoped function does not?
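To make the comparison concrete, here is a hedged Python sketch of the 
chmod example with an explicit handler stack standing in for `temp sub` 
(all names -- signal, handler, the "ro_" writability test -- are 
illustrative, not any real API):

```python
import contextlib

_handlers = {}

def signal(condition, *args):
    # Call the innermost dynamically installed handler, or die.
    stack = _handlers.get(condition, [])
    if not stack:
        raise RuntimeError(f"unhandled condition: {condition}")
    return stack[-1](*args)

@contextlib.contextmanager
def handler(condition, fn):
    # Dynamically scoped: installed on entry, removed on exit.
    _handlers.setdefault(condition, []).append(fn)
    try:
        yield
    finally:
        _handlers[condition].pop()

log = []

def write_file(name):
    if name.startswith("ro_"):               # stand-in for `-w $filename`
        signal("FileNotWriteable", name)     # handler returning == resume
    log.append(f"wrote {name}")

with handler("FileNotWriteable", lambda name: log.append(f"chmod +w {name}")):
    write_file("ro_foo.txt")                 # resumed after handler runs

assert log == ["chmod +w ro_foo.txt", "wrote ro_foo.txt"]
```

The "resume" behaviour falls out for free: the handler is just a call, so 
returning from it continues write_file where it left off -- which is the 
point of the question.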


In the case of non-resumable exceptions, we see things like deferred 
handling -- the exception is passed as a property of an undef value. I 
assume such an exception cannot be resumed, so it does appear to me that 
there are fundamental differences between resumable things, and 
non-resumable, deferrable, exceptions. What is the benefit of unifying 
them under a common syntax (CATCH blocks)?


Larry suggested that the exception mechanism might be a way of unifying 
errors and warnings; but perhaps the opposite is true. Perhaps what we 
see is a need to generalize the distinction between warnings and errors.



Dave.


Re: zip: stop when and where?

2005-10-06 Thread Dave Whipp

Luke Palmer wrote:


zip :: [a] -> [b] -> [(a,b)]

It *has* to stop at the shortest one, because it has no idea how to
create a "b" unless I tell it one.  If it took the longest, the
signature would have looked like:

zip :: [a] -> [b] -> [(Maybe a, Maybe b)]

Anyway, that's just more of the usual Haskell praise.


Given that my idea about using optional binding for look-ahead didn't 
fly, maybe it would work here, instead:


  @a Y @b ->  $a,  $b { ... } # stop at end of shortest
  @a Y @b ->  $a, ?$b { ... } # keep going until @a is exhausted
  @a Y @b -> ?$a, ?$b { ... } # keep going until both are exhausted

I think we still need a way to determine if an optional arg is bound. 
Can the C<exists> function be used for that ("if exists $b {...}")?
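Python draws exactly this line between the two behaviours, for 
comparison: zip stops at the shortest input, and zip_longest plays the 
role of the ?$a/?$b form, with the fill value standing in for the "is it 
bound?" test:

```python
from itertools import zip_longest

a = [1, 2, 3]
b = ["x", "y"]

# Plain zip stops at the end of the shortest list...
assert list(zip(a, b)) == [(1, "x"), (2, "y")]

# ...while zip_longest keeps going, marking the missing slots.
# A unique sentinel makes "was this slot ever bound?" unambiguous.
MISSING = object()
out = [(x, y) for x, y in zip_longest(a, b, fillvalue=MISSING)]
assert out[-1] == (3, MISSING)
assert all(y is not MISSING for _, y in out[:-1])
```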



Dave.


$value but lexically ...

2005-10-06 Thread Dave Whipp
C<but> properties get attached to a value, and are available when the 
value is passed to other functions, etc. I would like to be able to 
define a property of a value that is trapped in the lexical scope where 
it is defined. The example that set me thinking down this path is


sub foo( $a, ?$b = rand but :is_default )
{
   ...
   bar($a,$b);
}

sub bar( $a, ?$b = rand but :is_default )
{
  warn "defaulting \$b = $b" if $b.is_default;
  ...
}


It would be unfortunate if the "is_default" property attached in &foo 
triggers the warning in &bar. So I'd like to say something like


  sub foo( $a, ?$b = 0 but lexically :is_default ) {...}
or
  sub foo( $a, ?$b = 0 but locally :is_default ) {...}

to specify that I don't want the property to be propagated.


Re: Sane (less insane) pair semantics

2005-10-10 Thread Dave Whipp

Austin Hastings wrote:


How about "perl should DWIM"? In this case, I'm with Juerd: splat should
pretend that my array is a series of args.

So if I say:

foo *@args;

or if I say:

foo(*@args);

I still mean the same thing: shuck the array and get those args out
here, even the pairs.


The trouble is, an array doesn't contain enough information:

Compare:
  foo( (a=>1), b=>2 );

With
  @args = ( (a=>1), b=>2 );
  foo( *@args );

If we have an arglist ctor, then we could have

  @args = arglist( (a=>1), b=>2 );
  foo( *@args );

  say @args.perl
## (
##   (a=>1) but is_positional,
##   (b=>2) but is_named,
## )


but without such a constructor, it would be difficult to DWIM correctly.

Of course, for the case of $?ARGS, constructing the array with 
appropriate properties wouldn't be a problem.
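The information loss is easy to demonstrate in Python, where *args and 
**kwargs make the positional/named split explicit (illustrative sketch; 
the dict-shaped "arglist" at the end is an assumption, not a real API):

```python
# Once arguments are flattened into a plain sequence, "was it named?"
# is gone -- the same problem the post describes for splatted arrays.
def foo(*args, **kwargs):
    return args, kwargs

# Direct call: the tuple is positional, b=2 is named.
assert foo(("a", 1), b=2) == ((("a", 1),), {"b": 2})

# Round-tripped through a list, both become positional.
saved = [("a", 1), ("b", 2)]
assert foo(*saved) == ((("a", 1), ("b", 2)), {})

# Only a richer container -- separate positional and named parts,
# like the proposed arglist ctor -- can reconstruct the call.
arglist = {"positional": [("a", 1)], "named": {"b": 2}}
assert foo(*arglist["positional"], **arglist["named"]) == \
    ((("a", 1),), {"b": 2})
```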


Re: Proposal to make class method non-inheritable

2005-10-11 Thread Dave Whipp

Stevan Little wrote:
I would like to propose that class methods do not get inherited along  
normal class lines.


One of the things that has annoyed me with Java is that its class 
methods don't inherit (dispatch polymorphically). This means that you 
can't apply the "template method" pattern to static (class) methods. I 
hope Perl 6 doesn't copy this "feature".


Re: Proposal to make class method non-inheritable

2005-10-11 Thread Dave Whipp

Stevan Little wrote:

David,

...
If you would please give a real-world-useful example of this usage of  
class-methods, I am sure I could show you, what I believe, is a  better 
approach that does not use class methods.

...

The example I've wanted to code in Java is along the lines of:

public class Base {
  public static void main(String[] args) {
 init();
 do_it(args);
 cleanup();
  }
}

and then define a bunch of derived classes as my main class.

public class Derived extends Base {
  static void init() { print("doing init"); }
  static void do_it(String[] args) { print("doing body"); }
  static void cleanup() { print("doing cleanup"); }
}

% javac Derived
% java Derived

In other words, I wanted to not have a main function on the class that I 
run as the application.


This example, of course, doesn't apply to Perl -- but I think that the 
basic pattern is still useful.
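For comparison, Python classmethods do dispatch polymorphically, so the 
template-method shape works as-is (a sketch of the same Base/Derived 
structure):

```python
# Unlike Java statics, Python classmethods dispatch through the class
# object, so a "main" on the base class can drive hooks supplied by
# whichever subclass it is invoked on.
class Base:
    @classmethod
    def main(cls):
        return [cls.init(), cls.do_it(), cls.cleanup()]

class Derived(Base):
    @classmethod
    def init(cls):
        return "doing init"

    @classmethod
    def do_it(cls):
        return "doing body"

    @classmethod
    def cleanup(cls):
        return "doing cleanup"

assert Derived.main() == ["doing init", "doing body", "doing cleanup"]
```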


Thoughs on Theory.pm

2005-10-13 Thread Dave Whipp

(ref: http://svn.openfoundry.org/pugs/docs/notes/theory.pod)

>theory Ring{::R} {
>multi infix:<+>   (R, R --> R) {...}
>multi prefix:<->  (R --> R){...}
>multi infix:<->   (R $x, R $y --> R) { $x + (-$y) }
>multi infix:<*>   (R, R --> R) {...}
># only technically required to handle 0 and 1
>multi coerce: (Int --> R)  {...}
>}
>
> This says that in order for a type R to be a ring, it must
> supply these operations.  The operations are necessary but
> not sufficient to be a ring; you also have to obey some
> algebraic laws (which are, in general, unverifiable
> programmatically), for instance, associativity over
> addition: C<(a + b) + c == a + (b + c)>.

I started thinking about the "in general, unverifiable programmatically" 
bit. While obviously true, perhaps we can get closer than just leaving 
them as comments. It should be possible to associate a 
unit-test-generator with the theory, so I can say:


theory Ring(::R) {
   ...
   axiom associative(R ($a, $b, $c)) {
 is_true( (($a+$b)+$c) - ($a+($b+$c)) eqv R(0) );
   }
   ...
}

And then say "for type T, please generate 1000 random tests of T using 
axioms of Ring".
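A hedged sketch of what such a test generator might do, in Python over 
plain integers (which genuinely form a ring); the sample-generator 
argument here is an assumption standing in for whatever a real framework 
would derive from the axiom declaration:

```python
import random

# "1000 random tests of T using the axioms of Ring": draw random
# triples from the type's value space and check associativity of +.
def check_associativity(sample, trials=1000, seed=42):
    rng = random.Random(seed)             # seeded, so failures reproduce
    for _ in range(trials):
        a, b, c = (sample(rng) for _ in range(3))
        assert (a + b) + c == a + (b + c)
    return trials

# Integers satisfy the axiom exactly; floats, notably, would not.
assert check_associativity(lambda rng: rng.randint(-10**9, 10**9)) == 1000
```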


In an ultra-slow debug mode, the axioms could be propagated as post 
conditions on every public mutator of T, so that we test them 
continuously in a running application (or test suite).


Re: Thoughs on Theory.pm

2005-10-13 Thread Dave Whipp

David Storrs wrote:

While I like the idea, I would point out that 1000 tests with  randomly 
generated data are far less useful than 5 tests chosen to  hit boundary 
conditions.


I come from a hardware verification background. The trend in this 
industry is driven by the fact that the computer can generate (and run) 
1000 random tests more quickly than a human can write 5 directed 
tests. And a quick question: just what are the boundary cases of 
"a+(b+c)==(a+b)+c" for a generic Ring type?


Of course, in the hardware world we give hints (constraints) to the 
generators to bias them towards interesting cases. Plus the tools use 
coverage data to drive the tests towards uncovered code (not entirely 
automatic). Finally, we have tools that can spend 48+ hours analyzing a 
small block (statically) to find a really good set of tests.


Re: Standard library for perl6? (graphical primitives)

2005-10-18 Thread Dave Whipp

Markus Laire wrote:

I'm not completely sure if it would be worth the trouble to support only 
most primitive graphical commands "in core", (no windows, etc..), and 
leave the rest to the modules (or to the programmer).


To a large extent, I'd want to leave most details to modules, etc. But 
what would be nice (tm) would be to establish a framework within which 
graphics libraries can be created. Sort of like DBI/DBD: a core set of 
capabilities ala DBI, implemented in multiple ways via drivers (DBD). 
The only problem is ... it's hard.


But things like "create window" are the sort of interfaces that you 
would want to become defacto standards. Drawing pixels/lines is much 
less interesting (except as exposed by a "canvas" widget)


The thing that makes it feasible is perhaps that most look-and-feel 
stuff is already externalized from specific applications. So it is 
reasonable to have a generic "open window" that "just works". Perhaps 
the equivalent of DBI is semantically the same as the interface to web 
browsers (client side) -- that's probably the closest we have to a 
broadly accepted standard.


Re: new sigil

2005-10-21 Thread Dave Whipp

Luke Palmer wrote:

As I mentioned earlier, most programmers in a corporate environment
have limited access to system settings.

And in those kinds of corporate environments, you're not going to be
working with any code but code written in-house.  Which means that
nobody is going to be using Latin-1, and everyone will be using the
ASCII synonyms.  What's the problem?


My experience is that this isn't true: we use lots of external code, but 
I still need to file requests with IT to get system-settings changed.


That said, I have no objection to Latin-1 sigils. So it's only your 
argument that's bogus, not the conclusion ;-).


Re: Status Summary; next steps

2002-11-26 Thread Dave Whipp
"Larry Wall" <[EMAIL PROTECTED]> wrote:
> On Mon, Nov 25, 2002 at 07:46:57PM -0500, Bryan C. Warnock wrote:
> : Should an explicit bool type be part of the language? If so, how should
> : it work?  C storing only a truth property but
> : no value makes little sense in the context of the larger language.  So
> : does handling truth as something other than a property.
>
> Don't think of truth as a property--it's a predicate that can pay
> attention to the property if it's defined.  But if there's no property,
> truth is defined by the C method on the class of the object.
> Again, a property is just a singleton method override.

So is it fair to say that Bool is an Interface? No concrete class,
no boolean literals: but if you store a value in a variable of type
Bool, then you will still have access to its c and c
methods.

If we define Boolean-ness in this way, then we can say that
context is an interface. I think it is true to say that a type is
a junction of interfaces -- so we could allow that context is
a type.

If (that's a 72 point "if") we accept the preceding, then it is natural
(IMO) to employ the Null-Object pattern for the remaining
special case: Void is an interface with no methods (or perhaps
a junction of zero interfaces)!

I think we need a strong technical definition of terms here:
classes, interfaces, types, contexts and objects. Let's see:

An object is a value, associated with implementations
of the methods that access and/or manipulate that value.
An interface is a grouping of method signatures. A
class is an object, whose value is an implementation of
one or more methods.

A variable is a reference to an object. A Type is a set of
constraints on a variable. It may define the signatures of
methods that must be supported by object(s) stored in the
variable; and it may impose additional constraints on the
value stored.

A Context defines how an object will be used. This may
constrain the required methods of the object, or may provide
information, which is not a mandatory constraint: for
example, if the result is a list, then the context may give the
"natural" length of the list to which the value will be assigned.
If the result is to be assigned to a scalar variable, then the
context may provide access to the object currently stored
in that variable.

In writing these definitions, I came to the conclusion that a
context is more than a type. Others may disagree.


Dave.





Re: Status Summary; next steps

2002-11-26 Thread Dave Whipp
"Michael Lazzaro" <[EMAIL PROTECTED]> wrote
> I'm trying to think of a counterexample, in which you have a context
> that _cannot_ be represented as a "type" according to this very broad
> definition.  I don't think it should be possible, is it?  If it _is_
> possible, does that represent a flaw/limitation of the perl6 "types"?

Let us imagine that an object can tell us that it will be destroyed
as a result of the assignment (we don't have ref-counts, but
some variables are scoped lexically). Now imagine that the
output object is expensive to create: but that the current
object (which will be destroyed as a result of the assignment)
can be cheaply modified. There might be some optimization
potential here.

Perhaps an example will clarify my thoughts:

  my $big_heavy_object = Foo.new;
  $big_heavy_object = add($big_heavy_object, $bar);

If the context provides access to the lvalue, then it
may be possible to optimize. Effectively, we have the
information to create an implicit C<+=> operator. The
add method should be able to utilize the fact that it
can recycle the old lvalue.

If the old-lvalue is available in the context, then the context
is more than a type.
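Python's augmented assignment exposes precisely this optimization hook; a 
sketch with a hypothetical big-heavy class (Foo, __iadd__ here are 
illustrative):

```python
# __iadd__ may recycle the soon-to-be-dead lvalue instead of building
# a fresh object -- the "implicit +=" the post describes.
class Foo:
    def __init__(self, items=()):
        self.items = list(items)

    def __add__(self, other):           # expensive: copies everything
        return Foo(self.items + other.items)

    def __iadd__(self, other):          # cheap: mutates in place
        self.items.extend(other.items)
        return self

big = Foo([1, 2, 3])
alias = big
big += Foo([4])                         # no copy: the old lvalue is reused
assert alias is big
assert big.items == [1, 2, 3, 4]
```

The key difference from the post's scenario is that Python makes the 
programmer opt in with the += spelling, whereas a context carrying the 
old lvalue would let `$x = add($x, $y)` discover the opportunity itself.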


Dave.





Re: Numeric literals, take 3

2002-11-27 Thread Dave Whipp
"Angel Faus" <[EMAIL PROTECTED]> wrote:
> Alphanumeric digits: Following the common practice,
> perl will interpret the A letter as the digit 10, the B
> letter as digit 11, and so on. Alphanumeric digits are case
> insensitive:
>
>   16#1E3A7  # base 16
>   16:1e3a5  # the same

Should that second example be "0x1e3a5" ?

There doesn't seem much point in supporting two
general-purpose radix indicators.
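For comparison, Python settles on exactly one general-purpose mechanism: 
int() with an explicit radix, plus the 0x/0o/0b literal prefixes for the 
common bases:

```python
# Letter digits are case-insensitive, so one general-purpose form
# covers both spellings in the quoted examples.
assert int("1E3A7", 16) == int("1e3a7", 16) == 0x1e3a7

# An arbitrary base needs no second radix notation:
assert int("777", 8) == 0o777 == 511
assert int("zz", 36) == 35 * 36 + 35
```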





Re: Numeric literals, take 3

2002-11-27 Thread Dave Whipp
"Luke Palmer" <[EMAIL PROTECTED]> wrote

> > This notation is designed to let you write very large or
> > very small numbers efficiently. The left portion of the
> > C<e> is the coefficient, and the right is the exponent,
> > so a number of the form C<XeY> is actually interpreted
> > as C<X * 10**Y>.
>
> Your "coefficient" is usually referred to as the mantissa.  But it's
> clear either way.

Technically, the mantissa is the part of the coefficient that
comes after the point.


Dave.





Indenting HERE docs

2002-12-02 Thread Dave Whipp
At various times, I have seen proposals for using indentation with HERE
docs. Was this issue ever resolved?


Dave.




