Uninstalling perl

2000-11-01 Thread pruna

Hi:

I've installed the latest version of Perl from source on my Cobalt
cache Raq2. Now I want to uninstall it and leave the previous version in
place.
My questions are:
1. How can I uninstall it?
2. When I do it, will the previous version be active again, or must I
reinstall it?
3. Does anyone know which the previous (original) version was, and
where I can find it?

Thanks!


Andrés.



Re: Uninstalling perl

2000-11-01 Thread Casey R. Tweten

Today around 5:41pm, [EMAIL PROTECTED] hammered out this masterpiece:

: Hi:

This isn't an appropriate forum for these questions.  Consult perlfaq2
for a better place to ask.

I happen to have a Cobalt cache Raq2 so I'll give you some pointers,
but reply to me directly, not the list.

:   I've installed the latest version of Perl from source on my Cobalt
: cache Raq2. Now I want to uninstall it and leave the previous version in
: place.
:   My questions are:
: 1. How can I uninstall it?

On a Raq2, Perl 5.004 is installed at /usr/bin/perl. I am assuming
you chose to make a link from /usr/bin/perl to
/usr/local/bin/perl.  This isn't very good because the Raq2 uses its
own hacked-up version of Perl 5.004, IIRC.

Basically, you have to delete any files that your Perl install wrote.

: 2. When I do it, will the previous version be active again, or must I
: reinstall it?

No, and good luck getting your box to work after installing
5.004 at /usr (not /usr/local).

: 3. Does anyone know which the previous (original) version was, and
: where I can find it?

Look at the CPAN for Perl 5.004.


-- 
print(join(' ', qw(Casey R. Tweten)));my $sig={mail=>'[EMAIL PROTECTED]',site=>
'http://home.kiski.net/~crt'};print "\n",'.'x(length($sig->{site})+6),"\n";
print map{$_.': '.$sig->{$_}."\n"}sort{$sig->{$a}cmp$sig->{$b}}keys%{$sig};
my $VERSION = '0.01'; #'patched' by Jerrad Pierce 




Re: Larry's ALS talk summary

2000-11-01 Thread Larry Wall

Jarkko Hietaniemi writes:
: >   * XS, the system for extending Perl with C or C++, will be replaced
: > with something much easier to use.  This will give people very
: > convenient access to existing code libraries, and let them write C or
: > C++ subroutines that can be called as Perl subroutines from Perl code
: > to take advantage of C's speed and memory flexibility.
: 
: I disclaim having any inner access to Larry's current thoughts on this,
: but just as a datapoint, check out the latest Perl Journal and its
: article on the Inline module.
: 
: Also, having recently been subjected to the pain and horrors, ummm,
: thrills and joys, of trying to use a very deeply C-like (1) library
: from Java, allow me to voice my fervent, errr, humble hope that we
: steer clear of the JNI disaster (2).
: 
: (1) Synchronous and asynchronous callbacks implemented using
: function pointers, with void pointers being used both
: as event-type-specific data containers and as user-specified
: opaque cookies.  Yum.
: 
: (2) There are 200+ functions in the JNI API -- but enough to do (1).
: I do understand that mapping a type-unsafe language to a type-safe
: O-O language is hard, but the resulting API is simply disgusting
: (and even Sun admits this) and slow as a drunken limping pig.

The hope is to extend Perl's subroutine declaration syntax (via types
and attributes) to the point where a "forward" declaration in Perl of a
C, Java, or C# routine can supply all the glue information formerly
supplied by XS.  While this will undoubtedly give us some rather
strange looking Perl, I'd rather look at potentially strange Perl than
certainly strange XS.

Larry
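
(For context, the C glue that xsubpp generates today for even a trivial
two-argument function looks roughly like the hand-written sketch below;
this is the boilerplate such a Perl-side forward declaration could supply
automatically. The module and function names are invented for
illustration, and this is an approximation rather than actual xsubpp
output.)

#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"

/* The existing C routine we want to call from Perl (illustrative). */
static int
add_ints(int a, int b)
{
    return a + b;
}

/* Roughly the glue xsubpp would emit for:  int add_ints(int a, int b) */
XS(XS_MyMod_add_ints)
{
    dXSARGS;                            /* set up 'items' and the argument stack */
    if (items != 2)
        croak("Usage: MyMod::add_ints(a, b)");
    {
        int a      = (int)SvIV(ST(0));  /* unwrap Perl scalars into C ints      */
        int b      = (int)SvIV(ST(1));
        int RETVAL = add_ints(a, b);
        ST(0) = sv_newmortal();         /* wrap the C result back into an SV    */
        sv_setiv(ST(0), (IV)RETVAL);
    }
    XSRETURN(1);
}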



Re: virtual machine implementation options

2000-11-01 Thread Steve Fink

Ken Fox wrote:
> 
> I was noodling around with a couple different VM implementations
> and thought I would ask other people if they were doing the same.
> It would be nice to divide and conquer the exploration of different
> implementations.
> 
> I looked into 3 different dispatching techniques. Here's the list
> with timings on a 600 MHz Pentium III running Linux and gcc 2.95:
> 
>                                -g        -O
> Switch/case                    6.0 sec   2.0 sec
> Computed goto (gcc feature)    6.1 sec   1.4 sec
> Function call                 11.9 sec  11.8 sec

Cool!

I have the same machine (600MHz PIII) running Linux but using a
different gcc:

% gcc -v
Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/egcs-2.91.66/specs
gcc version egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)

and I get the same numbers for -O (I used -O3) but different numbers
without optimization. Maybe we should assume optimization?
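
(For reference, the three dispatch styles being compared look roughly
like the sketch below, reduced to a toy two-opcode machine. The opcode
set and function names are invented for illustration; this is not Ken's
actual test program.)

/* Toy opcode set, for illustration only. */
enum { OP_INC, OP_HALT };

/* 1. switch/case dispatch */
void run_switch(const int *pc, long *acc)
{
    for (;;) {
        switch (*pc++) {
        case OP_INC:  (*acc)++; break;
        case OP_HALT: return;
        }
    }
}

/* 2. computed goto (gcc extension: label addresses and 'goto *ptr') */
void run_goto(const int *pc, long *acc)
{
    static void *labels[] = { &&do_inc, &&do_halt };
    goto *labels[*pc++];
do_inc:
    (*acc)++;
    goto *labels[*pc++];
do_halt:
    return;
}

/* 3. function-call dispatch through a table of pointers */
typedef int (*op_fn)(long *acc);          /* returns 0 to stop the loop */
static int op_inc(long *acc)  { (*acc)++; return 1; }
static int op_halt(long *acc) { (void)acc; return 0; }

void run_funcall(const int *pc, long *acc)
{
    static const op_fn table[] = { op_inc, op_halt };
    while (table[*pc++](acc))
        ;
}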

I implemented a few variants of the function calling version.
FUNCALL_PREDICTABLE is exactly the same as your USE_FUNCALL except that
instead of jumping directly through a function pointer, it walks a nested
else-if chain. I had a vague idea that this would help branch prediction
without losing flexibility, because you could always have a final default
'else' that used a function pointer for unexpected functions. It also
allows the compiler to inline the function bodies.
(For completeness, I should really put the bodies in a separate
compilation unit; I'm not sure whether it inlined them or not. Asking
them to be inlined showed a small effect, so probably not? Or I could
just disassemble.) That got a big speedup: 10.2sec -> 4.9sec. So maybe
keeping things as separate functions isn't so harmful after all, though
that computed goto is very tempting. (But before you believe this branch
prediction BS, see below.)
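
(A reduced sketch of the nested else-if scheme, with an invented opcode
set rather than the code actually benchmarked: known opcodes are tested
explicitly and can be open-coded or inlined, and only the final default
'else' goes through a function pointer.)

enum { OP_INC, OP_HALT };
typedef int (*op_fn)(long *acc);          /* returns 0 to stop */

void run_predictable(const int *pc, long *acc, const op_fn *fallback)
{
    for (;;) {
        int op = *pc++;
        if (op == OP_INC) {
            (*acc)++;                     /* known opcode, open-coded */
        } else if (op == OP_HALT) {
            return;
        } else if (!fallback[op](acc)) {  /* unexpected opcodes still work,
                                             via a pointer */
            return;
        }
    }
}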

Since function addresses aren't compile-time constants in gcc (so they
can't be used as switch case labels), that requires a nested else-if
construct. I took a shot at comparing the linear search with a
switch, but this is probably a ridiculous test because of the tiny
number of opcodes. The comparison is between FUNCALL_HYBRID and
FUNCALL_HYBRID_LINEAR.

FUNCALL_NAIVE looks at what happens if you don't rewrite opcode ids with
function pointers, and instead do a lookup for every opcode. It slows it
down some, though the whole thing is so slow it's not a big deal
percentage-wise.
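
(In sketch form, the distinction being measured is roughly the
following; names are invented.)

typedef int (*op_fn)(long *acc);          /* returns 0 to stop */

/* Rewritten stream: each cell of the code stream already holds a
   function pointer, so dispatch is a single indirect call. */
void run_resolved(const op_fn *pc, long *acc)
{
    while ((*pc++)(acc))
        ;
}

/* Naive stream: each cell still holds an opcode id, so every dispatch
   pays for an extra table lookup before the indirect call. */
void run_naive(const int *pc, const op_fn *table, long *acc)
{
    while (table[*pc++](acc))
        ;
}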

FUNCALL_HYBRID is an attempt to use a switch with separate functions,
avoiding function pointers. It gives the same bottom line: on the PIII,
(1) function pointers are expensive, and (2) the compiler doesn't do a
very good job of inlining (with inlining on, FUNCALL_HYBRID should be
the same as SWITCH, but it's half the speed).
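
(Roughly, the hybrid is a switch whose cases make direct calls to
ordinary functions, so there is no pointer indirection, but the bodies
stay out of line unless the compiler chooses to inline them. Names are
invented, not the benchmarked code.)

enum { OP_INC, OP_HALT };
static void op_inc(long *acc) { (*acc)++; }

void run_hybrid(const int *pc, long *acc)
{
    for (;;) {
        switch (*pc++) {
        case OP_INC:  op_inc(acc); break;  /* direct call, no pointer */
        case OP_HALT: return;
        }
    }
}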

I'm not sure I really buy the branch prediction argument, so I tried
another version FUNCALL_PREDICTABLE_HALF with half of the opcodes
replaced with function pointers. So branch prediction should work almost
as well since there's still always a single destination for every IP
value (although there's still an extra level of indirection -- major
handwaving here), but things go through a function pointer half the
time. It slows things down some, but much less than half the difference
between function pointers and no function pointers, so it seems to
support the branch prediction argument. But I'm way too far out on a
limb to believe anything I'm saying any more. :-)

Algorithm                  Inline?  Optimization  Time (sec)
GOTO                       -        -O3            1.35
GOTO                       -        none           5.87
SWITCH                     -        -O3            2.19
SWITCH                     -        none           5.72
FUNCALL_ORIG               no       -O3           10.15
FUNCALL_ORIG               no       none          12.37
FUNCALL_ORIG               yes      -O3           10.16
FUNCALL_ORIG               yes      none          12.37
FUNCALL_NAIVE              no       -O3           11.66
FUNCALL_NAIVE              no       none          13.51
FUNCALL_NAIVE              yes      -O3           11.67
FUNCALL_NAIVE              yes      none          13.51
FUNCALL_HYBRID             no       -O3            4.21
FUNCALL_HYBRID             no       none          10.08
FUNCALL_HYBRID             yes      -O3            4.20
FUNCALL_HYBRID             yes      none          10.05
FUNCALL_HYBRID_LINEAR      no       -O3            4.67
FUNCALL_HYBRID_LINEAR      no       none          10.06
FUNCALL_HYBRID_LINEAR      yes      -O3            4.67
FUNCALL_HYBRID_LINEAR      yes      none          10.07
FUNCALL_PREDICTABLE        no       -O3            4.92
FUNCALL_PREDICTABLE        no       none           9.85
FUNCALL_PREDICTABLE        yes      -O3            4.87
FUNCALL_PREDICTABLE        yes      none           9.84
FUNCALL_PREDICTABLE_HALF   no