Xpath language for referencing embedded data

2001-11-05 Thread David Nicol

http://www.25hoursaday.com/StoringAndQueryingXML.html#samplexpath


Like Plan 9, XPath uses a slash instead of a dot to get inside things.
Note the slicing syntax.



RE: Yet another switch/goto implementation

2001-11-05 Thread Brent Dax

Michael Fischer:
# On Nov 04, Brent Dax <[EMAIL PROTECTED]> took up a keyboard
# and banged out
# > Michael Fischer:
# > # In the goto case, we spin. And perhaps I am broken there. End
# > # really wants to return, not just set the pc, but I hadn't thought
# > # of a clever way to do that corner case, and wanted to see what
# > # the behavior would be without it. I suspect I need it.
# >
# > Can't you just break()?
#
# Out of a function?

Isn't the win in computed goto that you inline the sub bodies and can
loop implicitly instead of explicitly, thus saving a jump or two?

goto *lookup[*pc];

op0:
    return;
op1:
    pc += 1;
    goto *lookup[*pc];
op2:
    /* whatever */
    pc += size_of_op_2;
    goto *lookup[*pc];
op3:
    /* this one may halt */
    if (whatever) {
        pc += size_of_op_3;
        goto *lookup[*pc];
    }
    else {
        return;
    }

vs.


while (pc) {
    goto *lookup[*pc];

op0:
    pc = Parrot_op_end(pc, interp);
    continue;
op1:
    pc = Parrot_op_noop(pc, interp);
    continue;
op2:
    pc = Parrot_op_whatever(pc, interp);
    continue;
op3:
    pc = Parrot_op_whatever_else(pc, interp);
    continue;
    ...
}

The second example really is no better than a switch (and perhaps worse,
since the compiler can't get a high-level view of things and maybe come
up with a better way to do it).

--Brent Dax
[EMAIL PROTECTED]
Configure pumpking for Perl 6

When I take action, I'm not going to fire a $2 million missile at a $10
empty tent and hit a camel in the butt.
--Dubya




RE: Rules for memory allocation and pointing

2001-11-05 Thread Brent Dax

Michael L Maraist:
# On Sunday 04 November 2001 02:39 pm, Dan Sugalski wrote:
# > At 08:32 PM 11/4/2001 +0100, Benoit Cerrina wrote:
# > > > There will be a mechanism to register PMCs with the
# interpreter to note
# > > > they're pointed to by something that the interpreter
# can't reach. (For
# > > > example, a structure in your extension code, or via a
# pointer stashed
# > > > in the depths of a buffer object, or referenced by
# another interpreter)
# > > > This "foreign access" registry is considered part of an
# interpreter's
# > > > root set.
# > >
# > >If this is the case, how do you want to move the PMC, I
# thought you wanted
# > > a copying collector?
# >
# > While the PMC structures themselves don't move (no real
# need--they're of
# > fixed size so you can't fragment your allocation pool,
# though it makes
# > generational collection easier to some extent) the data
# pointed to by the
# > PMC can. That's the bit that moves.
# >
#
# Ok, so far, here's what I see:
#
# There are two main memory segments.  A traditional alloc/free
# region.  Within
# this area are arenas (for fixed-size memory objects).  This
# region must be
# efficient within MT, and hopefully not too wasteful or
# fragmenting.  This
# region is mostly for core operation and the non arena
# allocations are to be
# minimized; memory leaks are critical as usual. The
# utilization patterns
# should be analyzable, well known and compensated for (with respect to
# fragmentation, thread-contention, etc).  My vmem-derivative
# might still be
# valuable here, but I suppose we need to let the core's
# needs/characteristics
# flesh out further.
#
# Then there's a GC region which is primarily the allocation
# space of the
# interpreted app, which obviously can't be trusted with
# respect to memory
# leaks or usage (fragmentation) patterns.  PMC's and strings
# are handles into
# this GC-region, though the handles are to be stored in
# core-memory (above).

My understanding is that we will pretty much only allocate PMCs out of
the arena and any buffers are allocated out of the GC region.  (I could
be wrong, of course...)

# Question: Are there other types of references?  I can't think of any.
#
# The GC algorithm (being a separate thread or called
# incrementally when memory
# is low (but not yet starved)), needs to quickly access this
# GC heap's values,
# which I believe is Dan's reason for requiring a maximum of two levels of
# indirection.  I suppose it just runs down the known PMC lists
# including the
# PMC and string register sets, the stacks and the stashes for each
# interpreter.  You do, however refer to a "foreign address"
# region, and my
# first impression is that it's the wrong way of going about it.
#
# First of all, how are arrays of arrays of arrays handled?
#
# // create a buffer object of PMCs
# PMC_t p1 = new Array(50)
#
# for i in 0 .. 49:
#   p1->data[ i ] = new Array(50)
#
# In order to comply with the max-depth, the array creation
# will have to
# register each sub-array-entry in the foreign access region,
# or am I missing
# something?

I think the comment about 'two levels deep' just meant that he's not
going to look at arrays of arrays of arrays of arrays of buffers.  I
think PMCs do recursive things.  Once again, however, I could be wrong.

# First of all, this will mean that the foreign access
# data-structure will grow
# VERY large when PMC arrays/hashes are prevalent.  What's worse, this
# data-structure is stored within the core, which means that there is
# additional burden on the core memory fragmentation / contention.

No...foreign access (if I understand correctly, once again) is just a way
of saying 'hey, I'm pointing at this', a bit like incrementing a
refcount.

# Additionally, what happens when an array is shared by two
# threads (and thus
# two interpreters).  Whose foreign access region is it stored
# in?  My guess is
# that to avoid premature freeing, BOTH.  So now a work-q used
# by a 30-thread
# producer/consumer app is allocating and freeing LOTS of
# core-memory on each
# enqueue / dispatch..  Again, with the details this fuzzy, I'm
# probably going
# off half-cocked; but I did qualify that this was my initial
# impression.

I think there's going to be a virtual shared interpreter that all shared
objects belong to.

# My suggestion is to not use a foreign references section; or
# if we do, not
# utilize it for deep data-structure nesting.  And instead
# incorporate a doubly
# linked list w/in PMCs and strings...  Thus whenever you
# allocate a PMC or
# string, you attach it to the chain of allocated handles.
# Whenever the PMC is
# free'd, you detach it.  The GC then has the laughably simple task of
# navigating this linked list, which spans all threads.  This
# can incorporate
# mark-and-sweep or copying or what-ever.  By adding 8 or 16
# bytes to the size
# of a PMC / string, you avoid many memory related problems.
# Not to mention
# the fact that we are free of the concern of depth away from the
# 

RE: Rounding?

2001-11-05 Thread Brent Dax

Zach Lipton:
# I'm working on learning some parrot asm, but if I write
# something like this:
#
# set N0,2
# set N1,2
# add N3, N0, N1
# print N3
#
#
# I get:
#
# 4.00
#
# Is there any way to round this, or at least chop the 0's off the end?

In this case at least, you could convert N3 to an I register with the
ntoi op and then print the integer.

--Brent Dax
[EMAIL PROTECTED]
Configure pumpking for Perl 6

When I take action, I'm not going to fire a $2 million missile at a $10
empty tent and hit a camel in the butt.
--Dubya




Re: Rounding?

2001-11-05 Thread Leon Brocard

Zach Lipton sent the following bits through the ether:

> Is there any way to round this, or at least chop the 0's off the end?

Right. I'd just like to clear this up completely. The N registers are
for numerics (well, ok, floating point) and the I registers are for
integers. Currently, quite a bit of precision is used when printing
the N registers. There are two ways to get 4 out of your code: either
convert the N to an I or work with Is:

set N0,2
set N1,2
add N3, N0, N1
print N3   # prints 4.0
print "\n"
ntoi I0, N3
print I0   # prints 4
print "\n"
set I0, 2
set I1, 2
add I2, I0, I1
print I2   # prints 4
print "\n"
end

Hope this helps, Leon
-- 
Leon Brocard.http://www.astray.com/
Nanoware...http://www.nanoware.org/

 Tonight's the night: Sleep in a eucalyptus tree



Re: [PATCHES] Multiple operation tables

2001-11-05 Thread Simon Cozens

On Mon, Nov 05, 2001 at 12:44:38AM -0500, Jeff wrote:
> This (rather large) set of patches adds the ability for parrot to use
> +interpreter->profile = (INTVAL *)mem_sys_allocate((core_numops+obscure_numops+vtable_numops) * sizeof(INTVAL));

Nice idea, but I'm afraid I'm not convinced that this is sufficiently
extensible.

How about a Perl script that reads in all the .ops files and writes out
a header file describing them and the total number of ops?

-- 
You advocate a lot of egg sucking but you're not very forthcoming with the 
eggs. - Phil Winterbottom (to ken)



Re: [PATCH] Computed goto, super-fast dispatching.

2001-11-05 Thread Tom Hughes

In message <[EMAIL PROTECTED]>
Daniel Grunblatt <[EMAIL PROTECTED]> wrote:

> Do you want me to give you an account on my Linux machine where I have
> installed gcc 3.0.2 so that you can see it?

I'm not sure that will achieve anything - it's not that I don't
believe you, it's just that I'm not seeing the same thing.

I have now tried on a number of other machines, and the results
are summarised in the following table:

       Standard                     Computed Gotos
       Interpreted   Compiled      Interpreted     Compiled
A         3.35        33.56         4.63 (+38%)    29.83 (-11%)
B         5.69        85.24        14.08 (+147%)   78.60 (-8%)
C        15.09       314.91        31.83 (+111%)  259.34 (-18%)
D        45.87       774.73        62.37 (+36%)   795.30 (+3%)

Machine A is a 90MHz Pentium running RedHat 7.1 with gcc 2.96
Machine B is a dual 200MHz Pentium Pro running RedHat 6.1 with egcs 1.1.2
Machine C is a 733MHz Pentium III running FreeBSD 4.3-STABLE with gcc 2.95.3
Machine D is a 1333MHz Athlon running RedHat 7.1 with gcc 2.96

Clearly the speedup varies significantly between systems with some
giving much greater improvements than others.

One other thing that I did notice is that there is quite a bit of
fluctuation between runs on some of the machines, possibly because
we are measuring real time and not CPU time.

Tom

-- 
Tom Hughes ([EMAIL PROTECTED])
http://www.compton.nu




Re: [PATCH] Computed goto, super-fast dispatching.

2001-11-05 Thread Simon Cozens

On Sun, Nov 04, 2001 at 06:22:59PM -0300, Daniel Grunblatt wrote:
> Do you want me to give you an account on my Linux machine where I have
> installed gcc 3.0.2 so that you can see it?

How much effort do we want to put into something that shows a speedup
on one particular version of one particular compiler?

-- 
The most effective debugging tool is still careful thought, coupled with 
judiciously placed print statements. -Kernighan, 1978



Re: [PATCH] Computed goto, super-fast dispatching.

2001-11-05 Thread Daniel Grunblatt

As you can see, the problem is still that you are not using gcc 3.0.2;
please take 10 minutes and compile gcc 3.0.2. I will now compile 3.0.1
just to see what happens.

For the compiled version, I attached a diff between the current mops.c and
the patched mops.c; enlighten me on how that difference can affect the
speed. In any case, the current pbc2c.pl DOESN'T work with any .pasm that
uses jump or ret; did you ever notice? We have to decide how we are going
to handle these two cases when we don't have computed goto.

Daniel Grunblatt.

On 5 Nov 2001, Tom Hughes wrote:

> In message <[EMAIL PROTECTED]>
> Daniel Grunblatt <[EMAIL PROTECTED]> wrote:
>
> > Do you want me to give you an account on my Linux machine where I have
> > installed gcc 3.0.2 so that you can see it?
>
> I'm not sure that will achieve anything - it's not that I don't
> believe you, it's just that I'm not seeing the same thing.
>
> I have now tried on a number of other machines, and the results
> are summarised in the following table:
>
>        Standard                     Computed Gotos
>        Interpreted   Compiled      Interpreted     Compiled
> A         3.35        33.56         4.63 (+38%)    29.83 (-11%)
> B         5.69        85.24        14.08 (+147%)   78.60 (-8%)
> C        15.09       314.91        31.83 (+111%)  259.34 (-18%)
> D        45.87       774.73        62.37 (+36%)   795.30 (+3%)
>
> Machine A is a 90MHz Pentium running RedHat 7.1 with gcc 2.96
> Machine B is a dual 200MHz Pentium Pro running RedHat 6.1 with egcs 1.1.2
> Machine C is a 733MHz Pentium III running FreeBSD 4.3-STABLE with gcc 2.95.3
> Machine D is a 1333MHz Athlon running RedHat 7.1 with gcc 2.96
>
> Clearly the speedup varies significantly between systems with some
> giving much greater improvements than others.
>
> One other thing that I did notice is that there is quite a bit of
> fluctuation between runs on some of the machines, possibly because
> we are measuring real time and not CPU time.
>
> Tom
>
> --
> Tom Hughes ([EMAIL PROTECTED])
> http://www.compton.nu
>
>




Re: [PATCH] Computed goto, super-fast dispatching.

2001-11-05 Thread Daniel Grunblatt

A lot, since it's the latest one, and as I said in a previous mail, we can
let everyone download binaries. But read the previous mail sent by Tom
Hughes: there IS a speedup anyway on the older versions, so why shouldn't
we implement this anyway?

Daniel Grunblatt.

On Mon, 5 Nov 2001, Simon Cozens wrote:

> On Sun, Nov 04, 2001 at 06:22:59PM -0300, Daniel Grunblatt wrote:
> > Do you want me to give you an account on my Linux machine where I have
> > installed gcc 3.0.2 so that you can see it?
>
> How much effort do we want to put into something that shows a speedup
> on one particular version of one particular compiler?
>
> --
> The most effective debugging tool is still careful thought, coupled with
> judiciously placed print statements. -Kernighan, 1978
>





Re: [PATCH] Computed goto, super-fast dispatching.

2001-11-05 Thread Tom Hughes

In message <[EMAIL PROTECTED]>
Daniel Grunblatt <[EMAIL PROTECTED]> wrote:

> As you can see the problem is still that you are not using gcc 3.0.2,
> please take 10' minutes and compile gcc 3.0.2, I will now compile 3.0.1
> just to see what happens.

I have been having a very hard time believing that a point release
of the compiler would make such a huge difference but I have now built
a copy of 3.0.2 on one of the boxes and it really does make a vast
difference:

       Standard                     Computed Gotos
       Interpreted   Compiled      Interpreted     Compiled
A         3.35        33.56         4.63 (+38%)    29.83 (-11%)
B         5.69        85.24        14.08 (+147%)   78.60 (-8%)
C        15.09       314.91        31.83 (+111%)  259.34 (-18%)
D        45.87       774.73        62.37 (+36%)   795.30 (+3%)
E        45.89       747.37       142.57 (+210%)  776.41 (+4%)

Machine A is a 90MHz Pentium running RedHat 7.1 with gcc 2.96
Machine B is a dual 200MHz Pentium Pro running RedHat 6.1 with egcs 1.1.2
Machine C is a 733MHz Pentium III running FreeBSD 4.3-STABLE with gcc 2.95.3
Machine D is a 1333MHz Athlon running RedHat 7.1 with gcc 2.96
Machine E is a 1333MHz Athlon running RedHat 7.1 with gcc 3.0.2

The last two lines are for the same machine but with the different
versions of the compiler. I haven't tried it on the others because
it would take much longer to build gcc on those boxes than it does
on the 1.3GHz Athlon ;-) Doubtless a similar effect would be seen
though.

Tom

-- 
Tom Hughes ([EMAIL PROTECTED])
http://www.compton.nu




Re: [PATCHES] Multiple operation tables

2001-11-05 Thread Gregor N. Purdy

Simon and Jeff --

> > This (rather large) set of patches adds the ability for parrot to use
> > +interpreter->profile = (INTVAL *)mem_sys_allocate((core_numops+obscure_numops+vtable_numops) * sizeof(INTVAL));
> 
> Nice idea, but I'm afraid I'm not convinced that this is sufficiently
> extensible.

I don't think this is intended to be extensible (yet). I'm happy to have
code to play with and try to merge with my dynamic oplib loading stuff,
which *does* get us extensibility.

I don't know if I'm going to have time today, but I do plan to apply
this patch here and see about getting it working with the dynamic
loading stuff I've posted.

We now have two platforms with platform/*.[hc] code to do the dynamic
loading. If we can get code in there for the other core platforms, we
could start planning for a committed version of multi oplib code not
too long thereafter.

> How about a Perl script that reads in all the .ops files and writes out
> a header file describing them and the total number of ops?

This would be OK as an interim measure, I suppose, but as long as we
are doing build-time workarounds rather than dynamic loading, I don't
think it matters too much if it's hardcoded or generated. Of course,
that opinion is based on the hope that we can get the real stuff in
sooner rather than later.


Regards,

-- Gregor
 _ 
/ perl -e 'srand(-2091643526); print chr rand 90 for (0..4)'  \

   Gregor N. Purdy  [EMAIL PROTECTED]
   Focus Research, Inc.http://www.focusresearch.com/
   8080 Beckett Center Drive #203   513-860-3570 vox
   West Chester, OH 45069   513-860-3579 fax
\_/




Re: [PATCH] Computed goto, super-fast dispatching.

2001-11-05 Thread Dan Sugalski

At 10:16 AM 11/5/2001 -0500, Sam Tregar wrote:
>On Mon, 5 Nov 2001, Simon Cozens wrote:
>
> > On Sun, Nov 04, 2001 at 06:22:59PM -0300, Daniel Grunblatt wrote:
> > > Do you want me to give you an account on my Linux machine where I have
> > > installed gcc 3.0.2 so that you can see it?
> >
> > How much effort do we want to put into something that shows a speedup
> > on one particular version of one particular compiler?
>
>When we're talking about the most current release of GCC - lots!  By the
>time we reach beta, 3.0.2+ will be in every Linux and *BSD distro worth
>using.

If people want to fiddle with it, that's keen, but micro-optimizing the 
source to match the optimizer of a particular version of a C compiler seems 
a bit much, even for me...

Our target audience is also significantly larger than that group of people 
who won't install either Linux or *BSD for another year or so.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Win32 build and WINVER

2001-11-05 Thread Dan Sugalski

At 06:27 PM 11/4/2001 -0500, James Mastros wrote:
>On Sun, Nov 04, 2001 at 01:38:58PM -0500, Dan Sugalski wrote:
> > Currently, I don't want to promise back before Win98, though if Win95 
> is no
> > different from a programming standpoint (I have no idea if it is) then
> > that's fine too. Win 3.1 and DOS are *not* target platforms, though if
> > someone gets it going I'm fine with it.
>I'd tend to say that we should support back to win95 (original, not sp2).
>AFAIK, there's nothing that changed that should affect core perl/parrot.
>The one big exception is Unicode support, NT-based systems have much better
>Unicode.  Specifically, you can output Unicode to the console.  However, only
>targeting NT machines is absolutely not-an-option, for obvious reasons.

Win95 probably isn't a problem--ActiveState certainly manages now.

Hopefully someone who writes a fair amount of Win9x code can keep us honest.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




RE: A serious stab at regexes

2001-11-05 Thread Angel Faus

Hi Brent,

># I have been incapable of expressing nested groups or
># alternation with your model, and I would say that this
># is because the engine needs some way to save not only
># the index into the string, but also the point of the
># regex where it can branch on a backtrack.

>I've been a bit worried that this might be the case.  The best solution
>I've been able to think of is to push a "mark" onto the stack, like what
>Perl 5 does with its call stack to indicate the end of the current
>function's arguments.  If a call to rePopindex popped a mark, it would
>be considered to have failed, so it would branch to $1 if it had a
>parameter.

I am not sure I understand it fully. If I get it right, an action can
branch if:

  a) A submatch fails.
  b) A popIndex fails, because there is a mark set

It looks to me a bit too confusing, both from the developer and the compiler
point of view. Maybe it is just a matter of taste, but isn't it going to be
more complex for the compiler?...

> .. a larger example ..
>   reLiteral "b"   #and what if this fails?

It just branches to $fail, because that's the address that was set in
OnFail.


>However, this may not be a good example, as I'm seriously looking at the
>possibility of making reAdvance independent of the stack
>(cur_re->startindex or something) to ease implementation of reSubst
>(substitution) and related nonsense.  Here's a better example:
>

I was thinking in a different way of implementing substitution:

On the cur_re struct you would store all the info about the match:
index of the first letter of the match, index of the last letter of the
match, info about the position of the capturing groups, etc..

Then, after the regexp has matched, you simply read this structure and apply
the substitution to the string.

From the engine, the only thing that you need to do is have a pair of ops
that store the indexes. Something like this:

reInitGroup i
    Saves the current index position as the start position of group i.
reEndGroup i
    Saves the current index position as the end position of group i.

The whole match would be capturing group 0.

So a full match with substitution looks like:

RE:
    reOnFail $fail
    rePushIndex $advance
    reInitGroup 0
    reLiteral "literal"
    reEndGroup 0
    set I0, 1
    reSubst "substitution", S1 ## probably not here, but on the caller.
    reFinished
$advance:
    reAdvance
$fail:
    set I0, 0
    reFinished

Back to backtracking:   :)

>   #/xa*.b*[xb]/
>   branch $start
>$advance:
>   reAdvance $fail #no longer using stack in this example
>$start:
>   reLiteral "x", $advance
>$finda:
>   reLiteral "a", $findany
>   rePushindex
>   branch $finda
>$findany:
>   reAnything $backa
>   rePushmark
>$findb:
>   reLiteral "b", $findxb
>   rePushindex
>   branch $findb
>$findxb:
>   reOneof "xb", $backb
>   set I0, 1
>   reFinished
>$backb:## two backtracks
>   rePopindex $backa
>   branch $findxb
>$backa:
>   rePopindex $advance
>   branch $findany
>$fail:
>   set I0, 0
>   reFinished


Yup, but you could do this with the alternative model too:

#/xa*.b*[xb]/
reOnFail $fail
branch $start
$start:
    rePushindex $advance
    reLiteral "x"
$finda:
    rePushindex $findany
    reLiteral "a"
    branch $finda
$findany:
    reAnything
$findb:
    rePushindex $findxb
    reLiteral "b"
    branch $findb
$findxb:
    reOneof "xb"
    set I0, 1
    reFinished
$fail:
    set I0, 0
    reFinished
$advance:
    reAdvance
    branch $start

I have adapted the little compiler to output code for your ops proposal
including the modifications I suggest. This way we can see what the assembly
would look like for complex regular expressions. I am attaching it to this
e-mail.

Now it supports:

 * literals
 * nested groups (do not capture)
 * alternation
 * classes
 * .
 * simple quantifiers: *, +, ?

I am working on merging your patch with my proposal for implicit
backtracking, so the output of this compiler can actually run. Not much
work, but I don't have time now. Hope I can send it in a day or so.

-angel



BabyRegex.pm
Description: Binary data


regexc.pl
Description: Binary data


Re: [PATCH] Computed goto, super-fast dispatching.

2001-11-05 Thread Daniel Grunblatt

Right, now, what about the audience with an operating system with gcc
3.0.2? Can't we ship compiled versions for every platform/operating
system?

By the way, the patch that I sent is already 2.5 - 3 times faster on *BSD.

Daniel Grunblatt.

On Mon, 5 Nov 2001, Dan Sugalski wrote:

> At 10:16 AM 11/5/2001 -0500, Sam Tregar wrote:
> >On Mon, 5 Nov 2001, Simon Cozens wrote:
> >
> > > On Sun, Nov 04, 2001 at 06:22:59PM -0300, Daniel Grunblatt wrote:
> > > > Do you want me to give you an account on my Linux machine where I have
> > > > installed gcc 3.0.2 so that you can see it?
> > >
> > > How much effort do we want to put into something that shows a speedup
> > > on one particular version of one particular compiler?
> >
> >When we're talking about the most current release of GCC - lots!  By the
> >time we reach beta, 3.0.2+ will be in every Linux and *BSD distro worth
> >using.
>
> If people want to fiddle with it, that's keen, but micro-optimizing the
> source to match the optimizer of a particular version of a C compiler seems
> a bit much, even for me...
>
> Our target audience is also significantly larger than that group of people
> who won't install either Linux or *BSD for another year or so.
>
>   Dan
>
> --"it's like this"---
> Dan Sugalski  even samurai
> [EMAIL PROTECTED] have teddy bears and even
>   teddy bears get drunk
>
>




RE: A serious stab at regexes

2001-11-05 Thread Brent Dax

Angel Faus:
# ># I have been incapable of expressing nested groups or
# ># alternation with your model, and I would say that this
# ># is because the engine needs some way to save not only
# ># the index into the string, but also the point of the
# ># regex where it can branch on a backtrack.
#
# >I've been a bit worried that this might be the case.  The
# best solution
# >I've been able to think of is to push a "mark" onto the
# stack, like what
# >Perl 5 does with its call stack to indicate the end of the current
# >function's arguments.  If a call to rePopindex popped a
# mark, it would
# >be considered to have failed, so it would branch to $1 if it had a
# >parameter.
#
# I am not sure I understand it fully. If I get it right an
# action can branch
# if:
#
#   a) A submatch fails.
#   b) A popIndex fails, because there is a mark set

An action can branch if it fails.  For most ops, 'fail' means that it
didn't match; for rePopindex, 'fail' means that it popped a mark instead
of an index.  rePopindex already has a failure mode: if there isn't
anything left on the stack to pop, rePopindex jumps to its parameter if
it has one.  I'm just changing that failure mode.

# It looks to me a bit too confusing, both from the developer
# and the compiler
# point of view. Maybe it is just a matter of taste, but isn't it
# going to be
# more complex for the compiler?...

It just means you have to be more explicit.  I consider that a Good
Thing--Perl 5's regular expressions are compact enough to be represented
like:

   1: EXACT (3)
   3: STAR(6)
   4:   EXACT (0)
   6: EXACT (8)
   8: END(0)

but the internals to support them are an absolute jungle.  I'd rather
have a few exposed ops than have a chunk of code like Perl 5's regular
expressions.  Besides, being able to /see/ where we jump to on each op
when we fail will help when debugging the RE compiler and such.

How do you plan to support lookahead?  Unless I'm mistaken, with my
proposal it goes something like:

rePushindex
rePushmark
#lookahead code in here
rePopmark
rePopindex

There are advantages and disadvantages to both proposals.  The question
is, which one (or both) is flexible enough to do what we want it to do?

# > .. a larger example ..
# > reLiteral "b"   #and what if this fails?
#
# It just branches to $fail, because that's the address that was set in
# OnFail.

But I pushed $continue onto the stack just a few lines up...

# >However, this may not be a good example, as I'm seriously
# looking at the
# >possibility of making reAdvance independent of the stack
# >(cur_re->startindex or something) to ease implementation of reSubst
# >(substitution) and related nonsense.  Here's a better example:
# >
#
# I was thinking in a different way of implementing substitution:
#
# On the cur_re struct you would store all the info about the match:
# index of the first letter of the match, index of the last
# letter of the
# match, info about the position of the capturing groups, etc..
#
# Then, after the regexp has matched, you simply read this
# structure and apply
# the substitution to the string.
#
# From the engine, the only thing that you need to do is have a
# pair of ops
# that store the indexes. Something like this:
#
# reInitGroup i
# Saves the current index position as the start position of
# the i group.
# reEndGroup i
# Saves the current index position as the end position of
# the i group.
#
# The whole match would be capturing group 0.
#
# So a full match with substitution looks like:
#
# RE:
#   reOnFail $fail
#   rePushIndex $advance
#   reInitGroup 0
#   reLiteral "literal"
#   reEndGroup 0
#   set I0, 1
#   reSubst "substitution", S1 ## probably not here, but on
# the caller.
#   reFinished
# $advance:
#   reAdvance
# $fail:
#   set I0, 0
#   reFinished

I was thinking of extending the re_info struct to include another string
field.  Then, when you hit reFinished, before you null the cur_re
structure for GCing, it checks a flag to see if it should be
substituting.  The whole thing would go something like this:

reSubst RE_0, "matchagainst", "subststring"

RE_0:
#do whatever
reFinished  #does the substituting

Much of the idea of this design is that it likes qr//ed regular
expressions just fine--they're just ones you have to jump to instead of
branching.  That means that substitution regular expressions shouldn't
call additional ops--they should just have the logic tucked away
somewhere.

# Back to backtracking:   :)
#
# > #/xa*.b*[xb]/
# > branch $start
# >$advance:
# > reAdvance $fail #no longer using stack in this example
# >$start:
# > reLiteral "x", $advance
# >$finda:
# > reLiteral "a", $findany
# > rePushindex
# > branch $finda
# >$findany:
# > reAnything $backa
# > rePushmark
# >$findb:
# > reLiteral "b", $findxb
# > rePushindex
# > branch $findb
# >$findxb:
# > reOneof 

Re: Opcode numbers

2001-11-05 Thread Brian Wheeler

On Sat, 2001-11-03 at 22:11, Gregor N. Purdy wrote:
> Brian --
> 
> > > None of these are issues with the approach I've been working on /
> > > advocating. I'm hoping we can avoid these altogether.
> > > 
> > 
> > I think this is a cool concept, but it seems like a lot of overhead with
> > the string lookups.  
> 
> I'm hoping we can keep the string lookups in order to sidestep the
> versioning issue. They can be made pretty cheap with a hashtable or search
> tree, and the lookups only happen once when we load. And, we may even be
> able to create the tree or hash table structure as part of the oplib.so,
> so we don't even have to pay to construct it at run time. I guess I'm
> making the provisional assumption that by the time we go out and
> dynamically load the oplib, a few op lookups by name won't be too big a
> deal if we are smart about it. Of course, I could be wrong, but I'd like
> to see it in action before passing judgement on it.
> 
> [snip stuff about versioning]
> 
> > Thoughts?  Or am I too tired to be sending email? :)
> 
> I think it's a fine suggestion. I'm just hoping we don't end up having to
> go there. I like the simplicity of doing things by name. We don't have to
> care what else happens to an oplib as long as the ops we cared about are
> still there.
> 

After thinking about it more, you're right, this is nicer :)  

Brian


> 
> Regards,
> 
> -- Gregor
> 




RE: Yet another switch/goto implementation

2001-11-05 Thread Brent Dax

Daniel Grunblatt:
# On Mon, 5 Nov 2001, Brent Dax wrote:
#
# > Michael Fischer:
# > # On Nov 04, Brent Dax <[EMAIL PROTECTED]> took up a keyboard
# > # and banged out
# > # > Michael Fischer:
# > # > # In the goto case, we spin. And perhaps I am broken there. End
# > # > # really wants to return, not just set the pc, but I
# hadn't thought
# > # > # of a clever way to do that corner case, and wanted to see what
# > # > # the behavior would be without it. I suspect I need it.
# > # >
# > # > Can't you just break()?
# > #
# > # Out of a function?
# >
# > Isn't the win in computed goto that you inline the sub
# bodies and can
# > loop implicitly instead of explicitly, thus saving a jump or two?
#
# Exactly, that's why I suggested not to use computed goto when tracing,
# checking bounds or profiling; there is no way, I think, to
# use it without
# losing speed.

When you implement bounds, tracing, or profiling, you *will* lose speed
in *any* system.  In this case, you can do these things by inserting the
appropriate checks just before the goto.

--Brent Dax
[EMAIL PROTECTED]
Configure pumpking for Perl 6

When I take action, I'm not going to fire a $2 million missile at a $10
empty tent and hit a camel in the butt.
--Dubya




Re: [PATCH] Computed goto, super-fast dispatching.

2001-11-05 Thread Dan Sugalski

At 10:24 AM 11/5/2001 -0300, Daniel Grunblatt wrote:
>Right, now, what about the audience with an operating system with gcc
>3.0.2?

What about 'em? They build the same way everyone else does.

Gearing code specifically towards the quirks of a specific compiler 
version's usually a good way to get really disappointed when the next point 
release comes out and tosses away the win you found.

>Can't we ship compiled versions for every platform/operating
>system?

That's a rhetorical question, right? The answer's "no", of course.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




RE: Yet another switch/goto implementation

2001-11-05 Thread Daniel Grunblatt

No, I totally disagree: if we do that we will lose the speed gained
before. I still don't know why we can't stay with the current dispatch
method when tracing, etc., and use computed goto when running without any
command line switch?

Daniel Grunblatt.

On Mon, 5 Nov 2001, Brent Dax wrote:

> Daniel Grunblatt:
> # On Mon, 5 Nov 2001, Brent Dax wrote:
> #
> # > Michael Fischer:
> # > # On Nov 04, Brent Dax <[EMAIL PROTECTED]> took up a keyboard
> # > # and banged out
> # > # > Michael Fischer:
> # > # > # In the goto case, we spin. And perhaps I am broken there. End
> # > # > # really wants to return, not just set the pc, but I
> # hadn't thought
> # > # > # of a clever way to do that corner case, and wanted to see what
> # > # > # the behavior would be without it. I suspect I need it.
> # > # >
> # > # > Can't you just break()?
> # > #
> # > # Out of a function?
> # >
> # > Isn't the win in computed goto that you inline the sub
> # bodies and can
> # > loop implicitly instead of explicitly, thus saving a jump or two?
> #
> # Exactly, that's why I suggested not to use computed goto when tracing,
> # checking bounds or profiling, there is no way, I think, to
> # use it without
> # losing speed.
>
> When you implement bounds, tracing, or profiling, you *will* lose speed
> in *any* system.  In this case, you can do these things by inserting the
> appropriate checks just before the goto.
>
> --Brent Dax
> [EMAIL PROTECTED]
> Configure pumpking for Perl 6
>
> When I take action, I'm not going to fire a $2 million missile at a $10
> empty tent and hit a camel in the butt.
> --Dubya
>
>





RE: Yet another switch/goto implementation

2001-11-05 Thread Brent Dax

Daniel Grunblatt:
# No, I totally disagree on that if I do that we will lose the
# speed gained
# before, I still don't know why we can't stay with the actual
# dispatch method
# when tracing, etc and use computed goto when running without
# any command
# line switch?

If we enable tracing with computed goto, we should only expect to get
better performance than tracing with switch or tracing with
function-pointer.  What I'm saying is:

op27:
/* stuff */
pc+=3;
/* tracing or whatever code goes here */
goto *lookup[*pc];

That'll still be faster than switch()ing or function-pointer; it just
won't be as fast as untracing, which I think users can accept and
understand.

--Brent Dax
[EMAIL PROTECTED]
Configure pumpking for Perl 6

When I take action, I'm not going to fire a $2 million missile at a $10
empty tent and hit a camel in the butt.
--Dubya




RE: Yet another switch/goto implementation

2001-11-05 Thread Daniel Grunblatt

I'm definitely having a hard time trying to make myself clear; sorry guys,
I'm still learning English :( .

The point is that, in my opinion, we don't really need to be faster than
now when tracing, etc., but we DO have to be faster when running like:

# ./test_prog mops.pbc



On Mon, 5 Nov 2001, Brent Dax wrote:

> Daniel Grunblatt:
> # No, I totally disagree on that if I do that we will lose the
> # speed gained
> # before, I still don't know why we can't stay with the actual
> # dispatch method
> # when tracing, etc and use computed goto when running without
> # any command
> # line switch?
>
> If we enable tracing with computed goto, we should only expect to get
> better performance than tracing with switch or tracing with
> function-pointer.  What I'm saying is:
>
>   op27:
>   /* stuff */
>   pc+=3;
>   /* tracing or whatever code goes here */
^

This is exactly what I'm trying to avoid; this is a big overhead, because,
if I'm understanding right, you are suggesting to add an if here, right?
Well, imagine that if being made every time, even when we are not tracing.

Unless what you mean is having separate functions, and we call
cg_core() by default, cg_core_trace() when tracing, and so on; is that what
you are suggesting?

Daniel Grunblatt.

>   goto *lookup[*pc];
>
> That'll still be faster than switch()ing or function-pointer; it just
> won't be as fast as untracing, which I think users can accept and
> understand.
>
> --Brent Dax
> [EMAIL PROTECTED]
> Configure pumpking for Perl 6
>
> When I take action, I'm not going to fire a $2 million missile at a $10
> empty tent and hit a camel in the butt.
> --Dubya
>
>




Re: Yet another switch/goto implementation

2001-11-05 Thread Simon Cozens

On Mon, Nov 05, 2001 at 11:46:50AM -0300, Daniel Grunblatt wrote:
> The point is that,in my opinion, we don't really need to be faster than
> now when tracing, etc but we DO have to be faster when running like:

I agree completely. I'd like to see configure-time options for the
runops loop.

-- 
"You can have my Unix system when you pry it from my cold, dead fingers."



Re: Yet another switch/goto implementation

2001-11-05 Thread Dan Sugalski

At 05:32 PM 11/5/2001 +, Simon Cozens wrote:
>On Mon, Nov 05, 2001 at 11:46:50AM -0300, Daniel Grunblatt wrote:
> > The point is that,in my opinion, we don't really need to be faster than
> > now when tracing, etc but we DO have to be faster when running like:
>
>I agree completely. I'd like to see configure-time options for the
>runops loop.

Definitely.

One thing we should consider when building the alternate (i.e. not 
blazingly fast) runops loops is size. While the fully-indirect function 
dispatch form is slowest, it's also got the smallest incremental cost with 
multiple dispatch loops.

We might want to have one fast and potentially big loop (switch or computed 
goto) with all the alternate (tracing, Safe, and debugging) loops use the 
indirect function dispatch so we're not wedging another 250K per loop or 
something.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Yet another switch/goto implementation

2001-11-05 Thread Daniel Grunblatt

You already got them in my last patch, posted yesterday, but now I'm
working on a new version which will have nicer code. In that patch I
didn't add an if to the Makefile, because I thought it was not
portable, but Brent Dax told me that I can use it. What do you think about
this? Should we keep the Makefile without the if, or not?

Daniel Grunblatt.

On Mon, 5 Nov 2001, Simon Cozens wrote:

> On Mon, Nov 05, 2001 at 11:46:50AM -0300, Daniel Grunblatt wrote:
> > The point is that,in my opinion, we don't really need to be faster than
> > now when tracing, etc but we DO have to be faster when running like:
>
> I agree completely. I'd like to see configure-time options for the
> runops loop.
>
> --
> "You can have my Unix system when you pry it from my cold, dead fingers."
>





Re: Yet another switch/goto implementation

2001-11-05 Thread Bryan C . Warnock

On Monday 05 November 2001 09:46 am, Daniel Grunblatt wrote:
> This is exactly what I'm trying to avoid, this is a big overhead, because
> if I'm understanding right you are suggesting to add an if here, right?
> well imagine that if being made every time even when we are not tracing.
>
> Unless that what you mean is having separated functions and we call
> cg_core() by default, cg_core_trace when tracing and so on, is that what
> you are suggesting?

That is what is currently done, but it isn't extensible.  Ultimately, you 
need two - one streamlined to run a series of ops as fast as possible, and
another to provide hooks for pre- and post- dispatch callbacks.  That limits 
the number of opcode dispatchers for each variant to two, and allows 
arbitrary code (be it tracing, profiling, debugging) to be added to any of 
the variants. 

-- 
Bryan C. Warnock
[EMAIL PROTECTED]



Re: Multi-dot files

2001-11-05 Thread Tom Hughes

In message <[EMAIL PROTECTED]>
  Dan Sugalski <[EMAIL PROTECTED]> wrote:

> At 08:48 PM 11/4/2001 -0500, James Mastros wrote:
> >For that matter, why are we avoiding filenames with more then one dot?  It'd
> >be easy to teach a Makefile to get core.ops.c from core.ops; much harder to
> >tell it how to get core_ops.c.  (Note that in the current Makefile, we
> >special-case it.)
> 
> Some platforms only allow a single dot in their filenames. (Could be
> worse--we could be shooting for 8.3 uniqueness...)

We could just have a .ops -> .c rule and create core.c from core.ops
and so on. If we're worred about basename clashes then we can always
move all the ops files into an ops subdirectory.

Tom

-- 
Tom Hughes ([EMAIL PROTECTED])
http://www.compton.nu/




RE: Yet another switch/goto implementation

2001-11-05 Thread Daniel Grunblatt



On Mon, 5 Nov 2001, Brent Dax wrote:

> Michael Fischer:
> # On Nov 04, Brent Dax <[EMAIL PROTECTED]> took up a keyboard
> # and banged out
> # > Michael Fischer:
> # > # In the goto case, we spin. And perhaps I am broken there. End
> # > # really wants to return, not just set the pc, but I hadn't thought
> # > # of a clever way to do that corner case, and wanted to see what
> # > # the behavior would be without it. I suspect I need it.
> # >
> # > Can't you just break()?
> #
> # Out of a function?
>
> Isn't the win in computed goto that you inline the sub bodies and can
> loop implicitly instead of explicitly, thus saving a jump or two?

Exactly, that's why I suggested not to use computed goto when tracing,
checking bounds or profiling; there is no way, I think, to use it without
losing speed.

Daniel Grunblatt.

>
>   goto *lookup[*pc];
>
>   op0:
>   return;
>   op1:
>   pc += 1;
>   goto *lookup[*pc];
>   op2:
>   /* whatever */
>   pc += size_of_op_2;
>   goto *lookup[*pc];
>   op3:
>   /* this one may halt */
>   if(whatever) {
>   pc += size_of_op_3;
>   goto *lookup[*pc];
>   }
>   else {
>   return;
>   }
>
> vs.
>
>
>   while(pc) {
>   goto *lookup[*pc];
>
>   op0:
>   pc=Parrot_op_end(pc, interp);
>   continue;
>   op1:
>   pc=Parrot_op_noop(pc, interp);
>   continue;
>   op2:
>   pc=Parrot_op_whatever(pc, interp);
>   continue;
>   op3:
>   pc=Parrot_op_whatever_else(pc, interp);
>   continue;
>   ...
>   }
>
> The second example really is no better than a switch (and perhaps worse,
> since the compiler can't get a high-level view of things and maybe come
> up with a better way to do it).
>
> --Brent Dax
> [EMAIL PROTECTED]
> Configure pumpking for Perl 6
>
> When I take action, I'm not going to fire a $2 million missile at a $10
> empty tent and hit a camel in the butt.
> --Dubya
>
>




Re: vmem memory manager

2001-11-05 Thread Dan Sugalski

At 06:03 PM 11/4/2001 -0500, James Mastros wrote:
>On Sun, Nov 04, 2001 at 01:47:44PM -0500, Dan Sugalski wrote:
> > I've not made any promises as to what type of GC system we'll use. I'm
> > gearing things towards a copying collector, but I'm also trying to make
> > sure we don't lock ourselves out of a generational scheme.
>I'd really like to hear that you were planning on not locking us out of
>/any/ scheme.  I'd like to see a lot of pluggablity here, so we can get
>custom solutions for those needing multiprocessor, huge memory optimized
>schemes, and with tiny machines with poor processors, or on a handheld with
>tiny memory.  Hell, even segmented memory, if they're really brave.

I doubt there'll be GC pluggability. (Unless you consider "Ripping out the 
guts of resources.c and gc.c and replacing them" pluggability... :) If it 
works out that way, great, but I don't know that it's really something I'm 
shooting for.

> > I know things are a little fuzzy in the GC arena, but that's on purpose 
> for
> > the moment.
>Hell.  I've got very, very little knowledge about gc.  But I'd love to see
>the GC pluggable to the point where different modules can have different
>GCs... but I don't think it's reasonably possible.
>
>Without doubt, there should be a way for parrot code to modify the
>properties of the GC, like the frequency of calling, and to specify "run the
>GC now".

That will be settable. I should go add the ops to the docs.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Yet another switch/goto implementation

2001-11-05 Thread Ken Fox

Dan Sugalski wrote:
> We might want to have one fast and potentially big loop (switch or computed 
> goto) with all the alternate (tracing, Safe, and debugging) loops use the 
> indirect function dispatch so we're not wedging another 250K per loop or 
> something.

Absolutely. There's no gain from doing computed goto for those anyway
because the per-op overhead makes direct threading impossible. Brent Dax
already posted an example of why this is bad.

Function calls are not slow. It's the extra jumps and table lookups
that are slow. If a mode has extra over-head it won't see any advantage
with computed goto over function calls. (At least this is what I've
found testing on Pentium III and Athlon. Most RISC systems should see
similar effects. Older CISC systems with slow function calls may be a
different story.)

BTW, 250K for the size of the inlined dispatch loop is way too big. The
goal should be to put the hot ops inline and leave the other ones out.
Ideally the dispatch loop will fit into L1 cache -- maybe 8k or so. IMHO
we'd be a lot better inlining some of the PMC methods as ops instead of
trig functions. ;)

- Ken



Re: Yet another switch/goto implementation

2001-11-05 Thread Simon Cozens

On Mon, Nov 05, 2001 at 02:08:21PM -0500, Ken Fox wrote:
> we'd be a lot better inlining some of the PMC methods as ops instead of
> trig functions. ;)

Won't work. We can't predict what kind of PMCs will be coming our way, let
alone what vtables they'll use, let alone what methods those vtables will use
most often.

-- 
"Jesus ate my mouse" or some similar banality.
-- Megahal (trained on asr), 1998-11-06



Re: [PATCH] Computed goto, super-fast dispatching.

2001-11-05 Thread Sam Tregar

On Mon, 5 Nov 2001, Simon Cozens wrote:

> On Sun, Nov 04, 2001 at 06:22:59PM -0300, Daniel Grunblatt wrote:
> > Do you want me to give you an account in my linux machine where I have
> > install gcc 3.0.2 so that you see it?
>
> How much effort do we want to put into something that shows a speedup
> on one particular version of one particular compiler?

When we're talking about the most current release of GCC - lots!  By the
time we reach beta, 3.0.2+ will be in every Linux and *BSD distro worth
using.

-sam




Multi-dot files

2001-11-05 Thread Dan Sugalski

At 08:48 PM 11/4/2001 -0500, James Mastros wrote:
>For that matter, why are we avoiding filenames with more then one dot?  It'd
>be easy to teach a Makefile to get core.ops.c from core.ops; much harder to
>tell it how to get core_ops.c.  (Note that in the current Makefile, we
>special-case it.)

Some platforms only allow a single dot in their filenames. (Could be 
worse--we could be shooting for 8.3 uniqueness...)

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Regex helper opcodes

2001-11-05 Thread Steve Fink

> >It's pretty
> >much functional, including reOneof.  Still, these could be useful
> >internal functions... *ponder*
> 
> I was thinking that the places they could come in really handy for were 
> character classes. \w, \s, and \d are potentially a lot faster this way, 
> 'specially if you throw in Unicode support. (The sets get rather a bit 
> larger...) It also may make some character-set independence easier.

But why would you be generating character classes at runtime? For
ASCII or iso-8859 or whatever regular ol' bytes are properly called, I
would expect \w \s \d charclasses to be constants. In fact, all
character classes would be constants. And as Dax mentioned, the
constructors for those constants would properly be internal functions.

For UTF-32 etc., I don't know. I was thinking we'd have to have
something like a multi-level lookup table for character classes. I see
a character class as a full-blown ADT with operators for
addition/unions, subtraction/intersections, etc.

You aren't thinking that the regular expression _compiler_ needs to be
written in Parrot opcodes, are you? I assumed you'd reach it through
some callout mechanism in the same way that eval"" will be handled.



[PATCH] Set Windows Target Version (1/1)

2001-11-05 Thread Richard J Cox

Sets defines to ensure that post Win95 functions are not defined in 
windows.h.

Richard

-- 
[EMAIL PROTECTED]


win32_h_WINVER.diff
Description: Binary data


Re: Win32 build and WINVER

2001-11-05 Thread Richard J Cox

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] 
(James Mastros) wrote:

> On Sun, Nov 04, 2001 at 01:38:58PM -0500, Dan Sugalski wrote:
> > Currently, I don't want to promise back before Win98, though if Win95 
> > is no different from a programming standpoint (I have no idea if it 
> > is) then that's fine too. Win 3.1 and DOS are *not* target platforms, 
> > though if someone gets it going I'm fine with it.

There is relatively little difference amongst Win95 thru ME. Some extras, 
but in practice I don't think we're going to want them (not in the core in 
any case).

> I'd tend to say that we should support back to win95 (original, not 
> sp2).
> AFAIK, there's nothing that changed that should effect core perl/parrot.
> The one big exception is Unicode support, NT-based systems have much 
> better
> Unicode.  Specificly, you can output unicode to the console.  However, 
> only
> targeting NT machines is absolutly not-an-option, for obvious reasons.

No and yes. No, in that the UNICODE[1] support in NT[2] is all pervasive 
(i.e. the ascii APIs are translated into UNICODE to be passed into the 
kernel).

> It might be that we end up with an NT binary with support for printing
> Unicode to the console, and a generic binary without.  (Come to think 
> of it,
> the only thing that should care is the opcode library that implements
> print(s|sc).)  There's a lot of other differences, of course, but for
> everything the win95 versions should be sufficent.  (For example, if we 
> want
> to set security properties on open, we need to use APIs that won't work 
> on
> 95,98, or Me.  But so long as we don't care, the security descriptor
> parameter can be NULL, and it will work fine on both.)

I would think (given Perl's roots) that's exactly where Perl can gain an 
advantage, the ability to programmatically manipulate ACLs without having 
to take the security APIs full on is going to be a big win (oops:).

The one big benefit an NT only build is that it could use UNICODE (but see 
[1]) as its native character set and avoid all the ASCII <-> UNICODE 
conversions in the APIs; however this may not be a really big gain in 
practice (however I can only speak as someone who rarely uses the upper 
codes of Latin-1, let alone all the other sets that UNICODE provides -- 
e.g. to create filenames[3]).

The answer I think is to move as much UNICODE enabled functionality into 
modules, the selecting of which would switch in the native UNICODE support 
and only be supported on NT. The other alternative might be "Microsoft 
Layer for Unicode" which emulates much of the NT Unicode support on 
Win9x/ME, however I need to finish reading the info on this (and since 
it's rather new...)

Once I'm caught up on these messages and a few others I'll put together a 
patch to setup the defines before including windows.h to limit us to be 
Win95 compatible in the core.

> I should note, BTW, that I don't write windows programs when I can 
> manage
> not to, and I don't run NT.

[OT] If you're going to run Windows then  2k is a far easier environment 
(once working) than 9x (except for most games that is).

> -=- James Mastros
> 
> 

[1] Strictly speaking UNICODE assuming UCS-2, i.e. pre-V3.0, without the 
extension beyond 64k code points.
[2] That is NT, 2000 and XP (at the time of writing).
[3] For example (this is C++, or IIRC C99):

#include <windows.h>
#include <stdio.h>

int wmain() {   // UNICODE entry point, like main for ASCII
wchar_t fn[2];
fn[0] = 0x4f09; // Some CJK Unified Ideograph
fn[1] = 0;
HANDLE h = CreateFileW(fn, GENERIC_WRITE, 0, 0, CREATE_ALWAYS,
0, 0);
if (INVALID_HANDLE_VALUE == h) {
wprintf(L"Couldn't create file %ld\n", GetLastError());
} else {
CloseHandle(h);
}
return 0;
}

works fine... and displays quite nicely (once I had selected a typeface 
with the symbol in it.)

-- 
[EMAIL PROTECTED]



RE: Rules for memory allocation and pointing

2001-11-05 Thread Dan Sugalski

At 12:23 AM 11/5/2001 -0800, Brent Dax wrote:
>Michael L Maraist:
># On Sunday 04 November 2001 02:39 pm, Dan Sugalski wrote:
>My understanding is that we will pretty much only allocate PMCs out of
>the arena and any buffers are allocated out of the GC region.  (I could
>be wrong, of course...)

That's dead on. Requiring all the PMCs to be in arenas makes some GC stuff 
easier, as well as making better use of the cache in normal operations.

># First of all, how are arrays of arrays of arrays handled?

You'll have a PMC whose data pointer points to a buffer of PMCs. That works 
fine--we'll tromp through the buffer of PMC pointers and put them on the 
list of live PMCs and GC them as needed.

># First of all, this will mean that the foreign access
># data-structure will grow
># VERY large when PMC arrays/ hashes are prevalant.  What's worse, this
># data-structure is stored within the core, which means that there is
># additional burden on the core memory fragmentation / contention.
>
>No...foreign access (if I undestand correctly, once again) is just a way
>of saying 'hey, I'm pointing at this', a bit like incrementing a
>refcount.

Yep. Only used for real foreign access.

># Additionally, what happens when an array is shared by two
># threads (and thus
># two interpreters).  Who's foreign access region is it stored
># in?  My guess is
># that to avoid premature freeing, BOTH.

The owner's (i.e. the thread that created it). Only the owner of a PMC can 
GC the thing.

># My suggestion is to not use a foreign references section; or
># if we do, not
># utilize it for deep data-structure nesting.

We aren't using it for deep nesting, so we're fine there.

># Beyond this, I think I see some problems with not having PMCs
># relocatable.
># While compacting the object-stores that are readily resized
># can be very
># valuable, the only type of memory leak this avoids is
># fragmentation-related.

Yep, that's one of the benefits of a compacting collector, along with fast 
allocation on average.

># The PMCs themselves still need to be tested against memory
># leaks.

That's why the dead object detection phase exists, to see which PMCs are in 
use. Unused PMCs get reclaimed, and the space taken by their contents reused.

>#  Now I'm
># still in favor of some form of reference counting; I think
># that in the most
># common case, only one data-structure will reference a PMC and
># thus when it
># goes away, it should immediately cause the deallocation of
># the associated
># object-space (sacraficing a pitance of run-time CPU so that
># the GC and free
># memory are relaxed).

Don't underestimate the amount of CPU time taken up by reference counting. 
Amortized across the run of the program, yes, but not at all insignificant. 
Also horribly error prone, as most XS module authors will tell you. 
(assuming they even notice their own leaks, which many don't)

># But I hear that we're not relying on an
># integer for
># reference counting (as with perl5), and instead are mostly
># dependant on the
># GC.

You're conflating dead object detection with GC. Don't--the two things are 
separate and if you think of them that way it makes things clearer.

># Well, if we use a copying GC, but never move the PMC,
># then how are we
># freeing these PMCs?

The dead object detection phase notes them. They're destroyed if necesary, 
then thrown back in the interpreter's PMC pool.

[Fairly complex GC scheme snipped]

That was clever, but too much work. The PMC, buffer header, and memory 
pools are interpreter-private, which eliminates the need for locking in 
most cases. Only the thread that owns a PMC will need to collect it or its 
contents.

For all intents and purposes, an interpreter can consider its pools of 
memory and objects private, except in the relatively rare shared case. (And 
sharing is going to be mildly expensive, which is just too bad--our 
structures are too complex for it not to be) The standalone-interpreter 
model makes GC straightforward, and all we really need to do to expand it 
to a multiple interpreter model is:

*) Make sure the off-interpreter references of shared PMCs do active cleanup 
properly
*) Make sure shared PMCs allocate memory in ways we can reasonably clean up 
after.



Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: [PATCH] Computed goto, super-fast dispatching.

2001-11-05 Thread Alex Gough

On Mon, 5 Nov 2001, Dan Sugalski wrote:
> At 10:24 AM 11/5/2001 -0300, Daniel Grunblatt wrote:
> >Right, now, what about the audience with an operating system with gcc
> >3.0.2?
> 
> What about 'em? They build the same way everyone else does.
> 
> Gearing code specifically towards the quirks of a specific compiler 
> version's usually a good way to get really disappointed when the next point 
> release comes out and tosses away the win you found.
> 

Hurrah that man!  For information, Irix isn't working any more, because
Configure thinks it's linux.  I'll try to sort it out when I'm further
away from the evil modem monster.

Alex Gough 




RE: Rules for memory allocation and pointing

2001-11-05 Thread Michael Maraist

On Mon, 5 Nov 2001, Dan Sugalski wrote:

> At 12:23 AM 11/5/2001 -0800, Brent Dax wrote:
> >Michael L Maraist:

[reordered for clarity]
>
> > But I hear that we're not relying on an
> > integer for
> > reference counting (as with perl5), and instead are mostly
> > dependant on the
> > GC.
>
> You're conflating dead object detection with GC. Don't--the two things are
> separate and if you think of them that way it makes things clearer.

Ok, this clarifies things a bit.  I know you're purposefully standing
back from the details, though that conflicts with my head, buried
among the trees, being unable to accept that it's all part of a
forest.  You're saying that the dod (dead object detector) has to have
intimate knowledge of
the parrot internals, while the gc is part of the memory-manager
black-box.  The main reason I joined the two was to reduce the number
of passes in my presented algorithm.

> >First of all, how are arrays of arrays of arrays handled?
>
> You'll have a PMC whose data pointer points to a buffer of PMCs. That works
> fine--we'll tromp through the buffer of PMC pointers and put them on the
> list of live PMCs and GC them as needed.

Just for clarification, does this mean that (depending on the
algorithm) the dod will check if the PMC is an
array, and thus recurse into the object buffer?  Or does this mean that
at array creation time, the nested PMCs are added to some top-level
list of live PMCs?

>
> >#  Now I'm
> ># still in favor of some form of reference counting; I think
> ># that in the most
> ># common case, only one data-structure will reference a PMC and
> ># thus when it
> ># goes away, it should immediately cause the deallocation of
> ># the associated
> ># object-space (sacraficing a pitance of run-time CPU so that
> ># the GC and free
> ># memory are relaxed).
>
> Don't underestimate the amount of CPU time taken up by reference counting.
> Amortized across the run of the program, yes, but not at all insignificant.
> Also horribly error prone, as most XS module authors will tell you.
> (assuming they even notice their own leaks, which many don't)

With respect to XS memory leaks, I wasn't suggesting that ref-flagging
supersede explicit dod, but provide an optional and
efficient quick-check.  The suggestion was that by checking a flag we more
quickly free resources.  Otherwise, for for-loops that alloc/free data, this can
quickly starve memory (and regularly invoke the gc), whereas the
ref-flagging would successfully reuse the same memory space each time
(conserving cache-space as well).
While I was mostly considering using the API to attach references which
would avoid the XS mem-leakage, I suppose even a simple flag can be abused.

With respect to "cpu" time, I'd be curious to see benchmarks.  I'd
agree that adding an if-statement to every assignment adds overhead
(especially since it's an unpredictable hardware branch).  But I can't
think of a good dod that's not O(n).  Under heavy
memory loads, I'd argue that (short of always requesting more memory)
relying exclusively on GCs will always be slower than ref-counting:
  For every m allocations (with sufficiently large m):
o with ref-counting/flagging you have m maintenance operations
o with dod/gc, you have roughly 1 triggered gc with O(n) objects to
  navigate through.
  Note that for sufficiently large memory requests, both will invoke
  the gc.  I focused mainly on the tight inner loop of allocs/frees.

Still, I'd agree that if XS-code can't be trusted with a modified form
of reference-management, then gc does make developers' lives easier.

-Michael




Re: Regex helper opcodes

2001-11-05 Thread Steve Fink

Quoting Dan Sugalski ([EMAIL PROTECTED]):
> At 11:54 AM 11/5/2001 -0800, Steve Fink wrote:
> > > >It's pretty
> > > >much functional, including reOneof.  Still, these could be useful
> > > >internal functions... *ponder*
> > >
> > > I was thinking that the places they could come in really handy for were
> > > character classes. \w, \s, and \d are potentially a lot faster this way,
> > > 'specially if you throw in Unicode support. (The sets get rather a bit
> > > larger...) It also may make some character-set independence easier.
> >
> >But why would you be generating character classes at runtime?
> 
> Because someone does:
> 
>while (<>) {
>  next unless /[aeiou]/;
>}
> 
> and we want that character class to be reasonably fast?

? So don't generate it at runtime. When you generate the opcode
sequence for the regex, emit a bit vector into the constant table and
refer to it by address in the matchCharClass op's arguments. Be fancy
and check that you haven't already emitted that bit vector. Am I
missing something?

> >For
> >ASCII or iso-8859 or whatever regular ol' bytes are properly called, I
> >would expect \w \s \d charclasses to be constants. In fact, all
> >character classes would be constants. And as Dax mentioned, the
> >constructors for those constants would properly be internal functions.
> 
> Sure, the predefined ones would be, and they'd get loaded up along with the 
> character encoding libraries.

Ok, so they're even more constant :-), but I'm talking about constants
in the sense that

my $x = 18.34;

emits a constant 18.34 floating point value in the same way that

if (/[aeiou]/) 

would emit a constant vowel charclass?

> >For UTF-32 etc., I don't know. I was thinking we'd have to have
> >something like a multi-level lookup table for character classes. I see
> >a character class as a full-blown ADT with operators for
> >addition/unions, subtraction/intersections, etc.
> 
> Ah, point. A bitmap won't work too well with the full UTF-32 set.
> 
> Having a good set of set operations would be useful for the core, though.

No argument there.

> >You aren't thinking that the regular expression _compiler_ needs to be
> >written in Parrot opcodes, are you? I assumed you'd reach it through
> >some callout mechanism in the same way that eval"" will be handled.
> 
> The core of the parser's still a bit up in the air. Larry's leaning towards 
> it being in perl.

When you say "parser", do you mean parser + bytecode generator +
optimizer + syntax analyzer? (Of which only the bytecode generator is
relevant to [:classes:], I suppose.)



Re: Regex helper opcodes

2001-11-05 Thread Dan Sugalski

At 11:54 AM 11/5/2001 -0800, Steve Fink wrote:
> > >It's pretty
> > >much functional, including reOneof.  Still, these could be useful
> > >internal functions... *ponder*
> >
> > I was thinking that the places they could come in really handy for were
> > character classes. \w, \s, and \d are potentially a lot faster this way,
> > 'specially if you throw in Unicode support. (The sets get rather a bit
> > larger...) It also may make some character-set independence easier.
>
>But why would you be generating character classes at runtime?

Because someone does:

   while (<>) {
 next unless /[aeiou]/;
   }

and we want that character class to be reasonably fast?

>For
>ASCII or iso-8859 or whatever regular ol' bytes are properly called, I
>would expect \w \s \d charclasses to be constants. In fact, all
>character classes would be constants. And as Dax mentioned, the
>constructors for those constants would properly be internal functions.

Sure, the predefined ones would be, and they'd get loaded up along with the 
character encoding libraries.

>For UTF-32 etc., I don't know. I was thinking we'd have to have
>something like a multi-level lookup table for character classes. I see
>a character class as a full-blown ADT with operators for
>addition/unions, subtraction/intersections, etc.

Ah, point. A bitmap won't work too well with the full UTF-32 set.

Having a good set of set operations would be useful for the core, though.

>You aren't thinking that the regular expression _compiler_ needs to be
>written in Parrot opcodes, are you? I assumed you'd reach it through
>some callout mechanism in the same way that eval"" will be handled.

The core of the parser's still a bit up in the air. Larry's leaning towards 
it being in perl.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




make clean

2001-11-05 Thread Daniel Grunblatt




Index: Makefile.in
===
RCS file: /home/perlcvs/parrot/Makefile.in,v
retrieving revision 1.43
diff -u -r1.43 Makefile.in
--- Makefile.in 2001/11/02 12:11:15 1.43
+++ Makefile.in 2001/11/06 02:38:03
@@ -130,7 +130,7 @@
 	cd docs; make
 
 clean:
-	$(RM_F) *$(O) *.s core_ops.c $(TEST_PROG) $(PDISASM) $(PDUMP)
+	$(RM_F) *$(O) encodings/*$(O) *.s core_ops.c $(TEST_PROG) $(PDISASM) $(PDUMP)
 	$(RM_F) $(INC)/vtable.h
 	$(RM_F) $(INC)/oplib/core_ops.h
 	$(RM_F) $(INC)/oplib/vtable_ops.h vtable_ops.c vtable.ops



Re: Yet another switch/goto implementation

2001-11-05 Thread Ken Fox

Simon Cozens wrote:
> On Mon, Nov 05, 2001 at 02:08:21PM -0500, Ken Fox wrote:
> > we'd be a lot better inlining some of the PMC methods as ops instead of
> > trig functions. ;)
> 
> Won't work. We can't predict what kind of PMCs will be coming our way, let
> alone what vtables they'll use, let alone what methods those vtables will use
> most often.

This is not appropriate for Parrot, but Perl should definitely have
some of its own PMC ops inlined.

IMHO Perl is getting some static typing ability, so it should be able
to emit bytecode that doesn't go through the PMC vtable. If somebody
adds type inferencing there would be even more opportunities for
skipping the vtable.

If Perl isn't able to infer the type of a PMC, it could still win by
inlining a type check and the common case in a custom op. If the VM is
automatically generated this could be done by looking at profiling
info and inlining the hottest PMC methods. It seems feasible to even
have multiple dispatch loops optimized for different types of apps,
say PDL number crunching vs. XML munging.

That's a lot of "ifs" and "shoulds" -- but I'd rather hear that
instead of "can't".

- Ken



Re: Rules for memory allocation and pointing

2001-11-05 Thread Benoit Cerrina



> You're conflating dead object detection with GC. Don't--the two things are
> separate and if you think of them that way it makes things clearer.
>
> ># Well, if we use a copying GC, but never move the PMC,
> ># then how are we
> ># freeing these PMCs?
>
> The dead object detection phase notes them. They're destroyed if necesary,
> then thrown back in the interpreter's PMC pool.
>
> [Fairly complex GC scheme snipped]
Sorry to be obtuse, but I don't see how you can consider the two separate. What
you call dead object detection sounds like the mark phase of a mark-and-sweep
collector, but that belongs to the collector. Some collectors do not have two
distinct steps for finding whether an object is garbage and for handling the
garbage (or the live objects); copying collectors, for instance, do both in one
pass. Can you explain what you mean by GC if it doesn't detect the garbage? Can
you point at some literature explaining it?  I believe this is a very
well studied field for both compiled and interpreted languages, but normally
the garbage detection, the read and/or write barrier, and the actual handling
of the garbage are all considered part of the collector.
Benoit