Re: Vocabulary
Larry Wall <[EMAIL PROTECTED]> writes: > On Wed, Dec 17, 2003 at 12:11:59AM +, Piers Cawley wrote: > : When you say CHECK time, do you mean there'll be a CHECK phase for > : code that gets required at run time? > > Dunno about that. When I say CHECK time I'm primarily referring > to the end of the main compilation. Perl 5 appears to ignore CHECK > blocks declared at run time, so in the absence of other considerations > I suspect Perl 6 might do the same. I feared that might be the case. -- Beware the Perl 6 early morning joggers -- Allison Randal
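[For reference, a minimal Perl 5 sketch of the behaviour Larry describes; the script is illustrative, and the exact warning text can vary between perl versions.]

    use warnings;

    CHECK { print "CHECK during main compilation\n" }   # runs normally

    # Code compiled at run time is too late for CHECK; perl 5.8 issues
    # a "Too late to run CHECK block" warning and skips the block.
    eval q{ CHECK { print "CHECK declared at run time\n" } };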
Re: Vocabulary
[EMAIL PROTECTED] (Michael Lazzaro) writes:

> Well, just for clarification; in my anecdotal case (server-side web
> applications), the speed I actually need is "as much as I can get",
> and "all the time". Every N cycles I save represents an increase in
> peak traffic capabilities per server, which is, from a marketing
> perspective, essential.

The desire to optimize the hell out of Perl 6 is a good one, but surely
you optimize when there is a problem, not before. Is there a problem
with the speed you're getting from Perl 6 at the moment?

-- 
I often think I'd get better throughput yelling at the modem.
RE: Vocabulary
Michael Lazzaro wrote:

> I don't think so; we're just talking about whether you can extend a
> class at _runtime_, not _compiletime_. Whether or not Perl can have
> some degree of confidence that, once a program is compiled, it won't
> have to assume the worst-case possibility of runtime alteration of
> every class, upon every single method call, just in case
> you've screwed with something.

That's a cute way of glossing over the problem.

How do you truly know when runtime is in the first place? Imagine an
application server which parses and loads code from files on-demand.
This shouldn't be difficult. Imagine that that code references a system
of modules.

Imagine if Perl "finalizes" classes after "primary compilation" (after
parsing, say, an ApacheHandler file), and proceeds to behave quite
differently indeed afterwards.

Imagine that a perfectly innocent coder finds that his class library
doesn't run the same (doesn't run at all) under the application server
as it does when driven from command line scripts: His method overrides
don't take effect (or, worse, Perl tells him he can't even compile them
because the class is already "finalized"! And he thought Perl was a
dynamic language!).

What's his recourse? Nothing friendly. Tell Perl that he's going to
subclass the classes he subclasses? Why? He already subclasses them!
Isn't that "tell" enough? And how? Some obscure configuration file of
the application server, no doubt. And now the app server needs to be
restarted if that list changes. His uptime just went down. And now he
can't have confidence that his system will continue to behave
consistently over time; "apachectl restart" becomes a part of his
development troubleshooting lexicon.

Java doesn't make him do that; HotSpot can make this optimization at
runtime and back it out if necessary. Maybe he'll just write a JSP
instead.

C# and VB.NET do likewise. ASP.NET isn't looking so bad, either. The
.NET Frameworks are sure a lot less annoying than the Java class
library, after all.

Point of fact, for a large set of important usage cases, Perl simply
can't presume that classes will EVER cease being introduced into the
program. That means it can NEVER make these sorts of optimizations
unless it is prepared to back them out. Even in conventional programs,
dynamic class loading is increasingly unavoidable. Forcing virtuous
programmers to declare "virtual" (lest their program misbehave or their
perfectly valid bytecode fail to load, or their perfectly valid source
code fail to compile) is far worse than allowing foolish programmers to
declare "final."

Making semantic distinctions of this scale between "compile time" and
"runtime" will be a significant blow to Perl, which has always been
strengthened by its dynamism. Its competitors do not include such
artifacts; they perform class finalization optimizations on the fly,
and, despite the complexity of the task, are prepared to back out these
optimizations at runtime--while the optimized routines are executing,
if necessary. Yes, this requires synchronization points, notifications
(or inline checks), and limits code motion. Better than the
alternative, I say. It is very simply a huge step backwards to create a
semantic wall between primary compilation and program execution.

So write the complicated code to make it work right.

    - or -

Take the performance hit and go home.

Dynamism has a price. Perl has always paid it in the past. What's
changed?

-- 
Gordon Henriksen
IT Manager
ICLUBcentral Inc.
[EMAIL PROTECTED]
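[To make the app-server scenario concrete, here is a minimal, hypothetical Perl 5 sketch; the package and method names are invented for illustration. Code loaded at run time redefines a method, which only works if method dispatch stays fully dynamic.]

    package Greeter;
    sub new   { bless {}, shift }
    sub greet { "hello" }

    package main;
    my $g = Greeter->new;
    print $g->greet, "\n";    # "hello"

    # Much later, unambiguously at run time, the "app server" pulls in
    # new code that overrides the method (think: a file loaded on demand).
    eval q{
        package Greeter;
        no warnings 'redefine';
        sub greet { "bonjour" }
    };
    print $g->greet, "\n";    # "bonjour", unless the class was already "finalized"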
Re: Yet another keyed ops proposal.
Dan Sugalski <[EMAIL PROTECTED]> wrote:
> At 11:56 AM +0100 12/12/03, Leopold Toetsch wrote:
>>
>> But having multi-keyed variants of all relevant opcodes would burst
>> our opcode count to #of-keyed-opcodes * #of-key-permutations. That's
>> not feasable.
>
> Definitely not. Here's an alternative.

I meant #of-implemented-key-permutations, which is anything from 1 to
4^3 (or more if string keys are supported in ops, or we consider the
no-key case too ...)

> ... I'd originally planned on
> there being a single keyed variant of each op, so the above would be
> written:
>
>    add P0[S0], P1[S1], P2[]
>
> Note that a key is passed in for $k, just a NULL key, all keys were
> meant to be in S registers, and there weren't going to be constant
> keys or pure-integer. Things have changed a bit. :)

I know that The Plan was to use one *all*-keyed opcode. But - "Things
have changed a bit" - when going with only one multi-keyed we have:

- one constant PMC per argument; these have to be constructed at
  program load time. Albeit keys are folded (identical keys in
  different instructions are only generated once) and they don't need
  marking, it's some overhead
- each keyed access needs a check whether the key is non-NULL - that's
  three ifs per vtable meth
- and finally extracting the index

This all reduces the advantage of having all keyed ops towards zero.

Further: we then have e.g. add_p_p_p and add_p_k_p_k_p_k. Overloading
the add operator would need to delegate 2 vtable methods and implement
2 different subs to do the job.

> Note that I'm perfectly OK mandating the following:
> 1) All keys must be of the same type (integer key or key struct keys)

Already starting permutations ...

>> 1) The assembler splits above PASM statement into two:
>
> This is the one thing that I'll grumble about -- we can argue over
> other things, but the assembler should *not* implicitly split stuff
> out like this. Maybe (Only maybe!) IMCC should, but I'd argue not
> there as well.

Imcc *is* the assembler. Assembler input is a multi-keyed opcode as
defined in pdd06. It's like a macro expansion: one line generating 2
opcodes.

> ... It should be explicitly specified in the source where
> the keys should be fetched.

These 2 operations are always adjacent.

>> 5) These returned pointers are stored in REG_PMC(x) .. REG_PMC(z)
>>    (x = 32, y = 33, z = 34) [2]
>>
>>      struct PReg has PMC *registers[NUM_REGISTERS + 3];
>
> This would be the clever bit -- key registers. I'm fine with key
> registers however... if we're going to add them, why not just have a
> separate set of key registers and be done with it?

The problem is addressing these key registers in the opcode. A non
keyed C calls a vtable on and with PREG(i), where PREG(i) is
REG_PMC(cur_opcode(i)). When key registers are accessed with a
different scheme, we again get code duplication with all the drawbacks.

So my proposal uses key registers, which are more or less special
depending on whether they are outside or inside the regular register
file. But e.g. using REG_PMC(32) doesn't need any further change to the
vtables. C is C nothing more. And there is no overhead for e.g. adding
a constant to a keyed operand. We can continue using all the optimized
(e.g. integer indexed) vtable meths.

leo
Re: Namespaces, part 2
Dan Sugalski <[EMAIL PROTECTED]> wrote:

>   load_global $P1, ['foo'; 'bar'] '$baz'
>   load_global $P2, ['foo'; 'bar'] '$xyzzy'
>
> The difference there being that, rather than having two separate
> constant keys we have one constant key and two string constants. This
> should result in less memory usage and a faster startup time for
> bytecode that accesses globals (which should pretty much be all of
> it).

I already commented on the last one: the above syntax is hmmm - strange
(bracketed access on nothing doesn't really fit). So why not use a
scheme that matches current syntax:

  getinterp $P1                        # (or) get_namespace PMC
  set $P2, $P1['foo'; 'bar'; '$baz']   # get var

and (optimized for multiple access)

  set $P3, $P1['foo'; 'bar']           # get foo::bar namespace
  set $P2, $P3['$baz']                 # get var

This needs a bit of work (CSE) and eventually one more register, but
makes it usable for all multi-keyed opcodes - no special syntax is
needed, just a namespace PMC.

Anyway - my original question was on attributes: how are these
accessed?

leo
Re: [perl #24682] [BUG] parrot compile fails on MacOS 10.3.1 - possibly dynaloading patch?
Allison Randal <[EMAIL PROTECTED]> wrote:

> Excellent. It compiles now. I do have two failing tests which may be
> related (catching SIGFPE):
>
> Failed Test  Stat Wstat Total Fail  Failed  List of Failed
> ---------------------------------------------------------
> t/op/hacks.t    2   512     2    2 100.00%  1-2

Could you please run these 2 standalone:

  $ parrot t/op/hacks_1.pasm
  catched it
  error -8
  severity 0

  $ parrot t/op/hacks_2.pasm
  catched it
  ok

leo
-lpthread
After updating and building I notice...

  make[1]: Entering directory `/home/abergman/Dev/ponie/perl'
  cc -L/home/abergman/Dev/ponie/parrot/blib/lib -o miniperl \
      miniperlmain.o opmini.o libperl.a -lnsl -ldl -lm -lcrypt -lutil -lc -lparrot
  /home/abergman/Dev/ponie/parrot/blib/lib/libparrot.a(events.o): In function `init_events_first':
  /home/abergman/Dev/ponie/parrot/src/events.c:83: undefined reference to `pthread_create'
  /home/abergman/Dev/ponie/parrot/blib/lib/libparrot.a(tsq.o): In function `queue_timedwait':
  /home/abergman/Dev/ponie/parrot/src/tsq.c:164: undefined reference to `pthread_cond_timedwait'
  collect2: ld returned 1 exit status

Am I right to assume that I always need to build a threaded perl if I
want to link against parrot?

Arthur
Re: -lpthread
On Wed 17 Dec 2003 12:29, Arthur Bergman <[EMAIL PROTECTED]> wrote:
> After updating and building I notice...
>
> make[1]: Entering directory `/home/abergman/Dev/ponie/perl'
> cc -L/home/abergman/Dev/ponie/parrot/blib/lib -o miniperl \
>     miniperlmain.o opmini.o libperl.a -lnsl -ldl -lm -lcrypt -lutil -lc -lparrot
> /home/abergman/Dev/ponie/parrot/blib/lib/libparrot.a(events.o): In function `init_events_first':
> /home/abergman/Dev/ponie/parrot/src/events.c:83: undefined reference to `pthread_create'
> /home/abergman/Dev/ponie/parrot/blib/lib/libparrot.a(tsq.o): In function `queue_timedwait':
> /home/abergman/Dev/ponie/parrot/src/tsq.c:164: undefined reference to `pthread_cond_timedwait'
> collect2: ld returned 1 exit status
>
> Am I right to assume that I always need to build a threaded perl if I
> want to link against parrot?

Unacceptable IMHO. Many people getting prebuilt binaries on commercial
OS's have no choice.

> Arthur

-- 
H.Merijn Brand        Amsterdam Perl Mongers (http://amsterdam.pm.org/)
using perl-5.6.1, 5.8.0, & 5.9.x, and 806 on HP-UX 10.20 & 11.00, 11i,
AIX 4.3, SuSE 8.2, and Win2k.  http://www.cmve.net/~merijn/
http://archives.develooper.com/[EMAIL PROTECTED]/  [EMAIL PROTECTED]
send smoke reports to: [EMAIL PROTECTED], QA: http://qa.perl.org
Re: -lpthread
On Wednesday, December 17, 2003, at 11:35 am, H.Merijn Brand wrote:

> Unacceptable IMHO. Many people getting prebuilt binaries on commercial
> OS's have no choice.

I don't see how this is relevant: prebuilt perl or prebuilt parrot? I
don't think we need to worry about prebuilt parrot, and ponie will need
to be built anyway.

However, I would still like a flag that doesn't make it link against
libpthread.

Arthur
Re: -lpthread
Arthur Bergman <[EMAIL PROTECTED]> wrote:

> Am I right to assume that I always need to build a threaded perl if I
> want to link against parrot?

I don't think that perl needs building with threads. But parrot needs
libpthread for platforms that have pthread.h and include thr_pthread.h
in their platform header files. These are currently:

  $ grep thr_pthread config/gen/platform/*.h
  config/gen/platform/darwin.h:# include "parrot/thr_pthread.h"
  config/gen/platform/generic.h:# include "parrot/thr_pthread.h"
  config/gen/platform/openbsd.h:# include "parrot/thr_pthread.h"

If that file isn't included, dummy defines in thread.h take over.

Of course a config option would be nice to have.

> Arthur

leo
Re: -lpthread
On Wednesday, December 17, 2003, at 12:38 pm, Leopold Toetsch wrote:

> $ grep thr_pthread config/gen/platform/*.h
> config/gen/platform/darwin.h:# include "parrot/thr_pthread.h"
> config/gen/platform/generic.h:# include "parrot/thr_pthread.h"
> config/gen/platform/openbsd.h:# include "parrot/thr_pthread.h"

However, I am building this on

  Linux switch.work.fotango.com 2.4.23-rc5 #3 SMP Wed Nov 26 10:05:52 GMT 2003 i686 unknown

and I still get undefined references to libpthread, meaning I need to
link to it, meaning I need to build my perl threaded.

Arthur
Re: -lpthread
At 12:50 PM + 12/17/03, Arthur Bergman wrote: On Wednesday, December 17, 2003, at 12:38 pm, Leopold Toetsch wrote: $ grep thr_pthread config/gen/platform/*.h config/gen/platform/darwin.h:# include "parrot/thr_pthread.h" config/gen/platform/generic.h:# include "parrot/thr_pthread.h" config/gen/platform/openbsd.h:# include "parrot/thr_pthread.h" However, I am building this on Linux switch.work.fotango.com 2.4.23-rc5 #3 SMP Wed Nov 26 10:05:52 GMT 2003 i686 unknown And I still get unknown references to lipthread, meaning I need link to it, meaning I need to build my perl threaded. Well... yes and no. You need to make sure Parrot links against the thread libraries. You don't, strictly speaking, need to have perl linked against the threading libraries except... several (perhaps most) platforms *really* hate it when you dlopen (or its equivalent) the thread libraries and *haven't* linked your main executable against them. Tends to crash or lock up your process, which kind of sucks. If you have it such that parrot is linked directly into the main perl executable so that it's loaded as part of the process startup, then you don't need to link in the thread libraries to perl. If you're loading parrot as a perl extension, then you will. (It isn't necessary to build a threaded perl for this, FWIW, you just need to make sure perl loads in the thread library) -- Dan --"it's like this"--- Dan Sugalski even samurai [EMAIL PROTECTED] have teddy bears and even teddy bears get drunk
Re: -lpthread
On Wednesday, December 17, 2003, at 02:06 pm, Dan Sugalski wrote: Well... yes and no. You need to make sure Parrot links against the thread libraries. You don't, strictly speaking, need to have perl linked against the threading libraries except... several (perhaps most) platforms *really* hate it when you dlopen (or its equivalent) the thread libraries and *haven't* linked your main executable against them. Tends to crash or lock up your process, which kind of sucks. If you have it such that parrot is linked directly into the main perl executable so that it's loaded as part of the process startup, then you don't need to link in the thread libraries to perl. If you're loading parrot as a perl extension, then you will. (It isn't necessary to build a threaded perl for this, FWIW, you just need to make sure perl loads in the thread library) -- Dan Yes, but making sure perl loads the thread library is pretty much the same as saying that perl needs be threaded :). I don't really like that you cannot build parrot without linking in pthread. Arthur
Re: -lpthread
On Wed 17 Dec 2003 15:11, Arthur Bergman <[EMAIL PROTECTED]> wrote: > > On Wednesday, December 17, 2003, at 02:06 pm, Dan Sugalski wrote: > > > > > Well... yes and no. You need to make sure Parrot links against the > > thread libraries. You don't, strictly speaking, need to have perl > > linked against the threading libraries except... several (perhaps > > most) platforms *really* hate it when you dlopen (or its equivalent) > > the thread libraries and *haven't* linked your main executable against > > them. Tends to crash or lock up your process, which kind of sucks. > > > > If you have it such that parrot is linked directly into the main perl > > executable so that it's loaded as part of the process startup, then > > you don't need to link in the thread libraries to perl. If you're > > loading parrot as a perl extension, then you will. (It isn't necessary > > to build a threaded perl for this, FWIW, you just need to make sure > > perl loads in the thread library) > > -- > > Dan > > > > Yes, but making sure perl loads the thread library is pretty much the > same as saying that perl needs be threaded :). I don't agree. All my HP-UX perls are non-threaded, but have libcl and libpthread linked in to enable DBD::Oracle later on which will not build/run if one does not link them to perl Building a threaded perl (I read this as: perl supports threads) will give me a 25% performance hit on HP-UX which I am not willing to take > I don't really like that you cannot build parrot without linking in > pthread. > > Arthur -- H.Merijn BrandAmsterdam Perl Mongers (http://amsterdam.pm.org/) using perl-5.6.1, 5.8.0, & 5.9.x, and 806 on HP-UX 10.20 & 11.00, 11i, AIX 4.3, SuSE 8.2, and Win2k. http://www.cmve.net/~merijn/ http://archives.develooper.com/[EMAIL PROTECTED]/ [EMAIL PROTECTED] send smoke reports to: [EMAIL PROTECTED], QA: http://qa.perl.org
[CVS ci] parrot-threads-1
  getinterp   P2
  find_method P0, P2, "thread"
  find_global P6, "_foo"
  clone       P5, P2
  invoke                        # start the thread

This little piece of code runs the subroutine "_foo" in a separate
thread inside interpreter P5. There are of course a lot of things
missing, ParrotInterpreter::clone does only very basic things and so
on, but it runs.

Some remarks and thoughts:

1) We have to store the Parrot_thread somewhere - I'm thinking of:
   struct thread_data inside the interpreter structure
2) All interpreters get stored in a global Interp_Array, the index
   being the user visible thread id for e.g. join.
3) Shared PMCs should probably get their own pmc-pool memory.
4) above code snippet could be one single op:

     thread (out INT, in PMC)   # run sub $2 threaded, ret thread id in $1

Further pointers and comments are very welcome.

Have fun,
leo
Re: [ANNOUNCE] Test::Benchmark
Fergal Daly wrote:

> NAME
>     Test::Benchmark - Make sure something really is faster
>
> SYNOPSIS
>     is_faster(-10, sub {...}, sub {...}, "this is faster than that")
>     is_faster(5, -10, sub {...}, sub {...}, "this is 5 times faster than that")
>     is_n_times_faster(5, -10, sub {...}, sub {...}, "this is 5 times faster than that")
>
>     is_faster(-10, $bench1, $bench2, "res1 was faster than res2");

Hi Fergal,

I'd like to see a slightly different interface:

   is_faster( sub{}, sub{}, "1st is faster");
   is_faster( 5.0, sub{}, sub{}, "1st is 5.0 x faster");
   is_faster( 0.5, sub{}, sub{}, "1st is 1/2 as fast");
   is_faster( 1000, sub{}, sub{}, "1st is faster over 1000 iterations");
   is_faster( -3, sub{}, sub{}, "1st is faster over 3 second test");

ie - with optional arguments, and the ability to test for a float-val.
OTOH - this might be too DWEOM? ish

or, more like Benchmark::timethese(). this form also allows 3 or more
tests:

   is_faster( test1,
              { test1 => sub{}, test2 => sub{}, test3 => sub{} },
              "test1 is fastest");

it is doable, since

   { no warnings;
     if ( $_[0] and $_[0] == 0 and $_[0] ne '' )   # like timethese

FWIW, I started messing about with the TB <=> Benchmark relationship..
with the notion that a few new Benchmark::* classes could represent the
Benchmark results more portably than the various versions of Benchmark
do. (notably the old ones)

   package Benchmark::Table;
   # the AoA returned by Benchmark::cmpthese() - non-existent in 5.00503's version
   # for a 2 test compare, gives 3 x 3 matrix: labels, slower, faster

   package Benchmark::Comparison;
   # an array of Benchmark objects, returned by timethese

Also, FWIW, I still want some support for throwing to screen via diag()

Other comments;

> DEPENDENCIES
>     Benchmark, Test::Builder but they come with most Perl's.

is that perls, perl's, Perls ?  You can avoid the whole issue;

   Benchmark, which is standard with perl5.00503+
   Test::Builder, which is standard with perl 5.6.[01] ?

> HISTORY
>     This came up on the perl-qa mailing list, no one else.

incomplete sentence
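[For reference, a minimal test script using the announced interface might look like the sketch below; the plan via Test::More, the two subs, and the description string are illustrative assumptions, not taken from the module's documentation.]

    use Test::More tests => 1;
    use Test::Benchmark;

    # -10 means "run each sub for at least 10 CPU seconds", as in Benchmark.pm
    is_faster(-10,
        sub { my @sq = map { $_ * $_ } 1 .. 1000 },        # candidate
        sub { my @sq; push @sq, $_ * $_ for 1 .. 1000 },   # baseline
        "map is faster than push in a loop");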
Re: [ANNOUNCE] Test::Benchmark
Fergal Daly wrote:
> On Wed, Dec 17, 2003 at 09:28:48AM -0700, Jim Cromie wrote:
> > Hi Fergal,
> >
> > I'd like to see a slightly different interface:
> >
> >    is_faster( sub{}, sub{}, "1st is faster");
>
> This would be nice but what value should I use for iterations? I
> suppose -1

Benchmark defaults to -3. It's hard to argue against keeping (or
implicitly using) that default.

> > ie - with optional arguments, and the ability to test for a float-val.
> > OTOH - this might be too DWEOM? ish
>
> DWEOM?

aka Magic. a term that Tom Christiansen dredged out of old English,
(or is that Middle Earth) see perlopentut or
http://www.gregorpurdy.com/gregor/wow/000438.html

> I don't fancy the float stuff. I think it's doable but if someone
> who's not familiar with the module is reading the test script they
> could very easily misunderstand it, unless they read the docs very
> carefully.

> > or, more like Benchmark::timethese(). this form also allows 3 or more
>
> I think I'll call that is_fastest().

you're way too logical :-)

> > FWIW, I started messing about with the TB <=> Benchmark relationship..
> > with the notion that a few new Benchmark::* classes could represent the
> > Benchmark results more portably than the various versions of Benchmark
> > do. (notably the old ones)
>
> Benchmark itself could do with refactoring, I though about doing it
> but then decided against it because people would have to upgrade to
> use it or I'd have to write two versions of T::B.

yeah - that's why I was thinking to slap a few new classes on/into it
via TB. A well-marked living-room invasion as it were.

> > Also, FWIW, I still want some support for throwing to screen via diag()
>
> It dumps the benchmarks to the screen when the test fails. I can stick
> in a verbose flag somewhere to make it do that all the time.

that sounds good - what I'm hoping for is something like these for
5.00503:

  Benchmark: running DD, EzDD, each for at least 3 CPU seconds...
  DD:   3  3.15 0.01  0  0  2399 none @ 759.18/s (n=2399)
  EzDD: 3  3    0.01  0  0  2400 none @ 797.34/s (n=2400)

  5.6.2
  # EzDD 10582/3.23 = 3276.161
  # DD    8616/3.17 = 2717.981

  5.8.2
  #        Rate    DD  EzDD
  # DD   2011/s    --  -14%
  # EzDD 2335/s   16%    --

my t/speed.t does various unholy version contortions, that would
hopefully be obsoleted by Test::Benchmark
Re: [ANNOUNCE] Test::Benchmark
On Wed, Dec 17, 2003 at 09:28:48AM -0700, Jim Cromie wrote: > Hi Fergal, > > Id like to see a slightly different interface: > >is_faster( sub{}, sub{}, "1st is faster"); This would be nice but what value should I use for iterations? I suppose -1 would be safe enough, anything that takes longer than 1 second probably doesn't need more than 1 iteration to see if it's faster - unless the times are very close, in which case the test is probably pointless. So that's a yes I guess. >is_faster( 5.0, sub{}, sub{}, "1st is 5.0 x faster"); >is_faster( 0.5, sub{}, sub{}, "1st is 1/2 as fast"); >is_faster( 1000, sub{}, sub{}, "1st is faster over 1000 iterations"); >is_faster( -3, sub{}, sub{}, "1st is faster over 3 second test"); > > ie - with optional arguments, and the ability to test for a float-val. > OTOH - this might be too DWEOM? ish DWEOM? I don't fancy the float stuff. I think it's doable but if someone who's not familiar with the module is reading the test script they could very easily misunderstand it, unless they read the docs very carefully. It also suffers from the "how long should I run this for?" problem except now I'm not sure that -1 is a suitable value for these because now there's a potentially large factor multiplying the result. > or, more like Benchmark::timethese(). this form also allows 3 or more tests > >is_faster( test1, { test1 => sub{}, test2 => sub{}, test3=> sub{} }, > "test1 is fastest"); > > it is doable, since > { no warnings; if ( $_[0] and $_[0] == 0 and $_[0] ne '' ) # like timethese I think I'll call that is_fastest(). > FWIW, I started messing about with the TB <=> Benchmark relationship.. > with the notion that a few new Benchmark::* classes could represent the > Benchmark results more portably than the various versions of Benchmark do. > (notably the old ones) Benchmark itself could do with refactoring, I though about doing it but then decided against it because people would have to upgrade to use it or I'd have to write two versions of T::B. > Also, FWIW, I still want some support for throwing to screen via diag() It dumps the benchmarks to the screen when the test fails. I can stick in a verbose flag somewhere to make it do that all the time. > >DEPENDENCIES > > Benchmark, Test::Builder but they come with most Perl's. > > > > > > is that perls, perl's, Perls ? You can avoid the whole issue; > Benchmark, which is standard with perl5.00503+ > Test::Builder, which is standard with perl 5.6.[01] ? Indeed, "Perls" is correct. I think I've seen too many corner shop signs. F
Re: -lpthread
At 2:11 PM + 12/17/03, Arthur Bergman wrote:
>On Wednesday, December 17, 2003, at 02:06 pm, Dan Sugalski wrote:
>>Well... yes and no. You need to make sure Parrot links against the
>>thread libraries. You don't, strictly speaking, need to have perl
>>linked against the threading libraries except... several (perhaps
>>most) platforms *really* hate it when you dlopen (or its equivalent)
>>the thread libraries and *haven't* linked your main executable
>>against them. Tends to crash or lock up your process, which kind of
>>sucks.
>>
>>If you have it such that parrot is linked directly into the main perl
>>executable so that it's loaded as part of the process startup, then
>>you don't need to link in the thread libraries to perl. If you're
>>loading parrot as a perl extension, then you will. (It isn't necessary
>>to build a threaded perl for this, FWIW, you just need to make sure
>>perl loads in the thread library)
>
>Yes, but making sure perl loads the thread library is pretty much the
>same as saying that perl needs be threaded :).

No, it isn't. It's perfectly possible, and reasonably simple, to get
perl linking against thread libraries--the VMS port does this by
default, and I'm surprised that the other platforms don't. (I never
bothered looking) Access any of the pthread library routines, even
pthread_self, and it's in. Perl won't be threaded, but the thread
libraries will be linked in.

>I don't really like that you cannot build parrot without linking in
>pthread.

Tough. :) Parrot's threaded, and links in threads. We don't do much
with them yet, but we're working on that. (So things will only get
worse, not better, as time goes on for threads)
-- 
                                        Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                     have teddy bears and even
                                      teddy bears get drunk
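[A quick, illustrative way to see the distinction Dan is drawing on a typical Linux build; the exact output shown here is hypothetical.]

    $ perl -V:libs
    libs='-lnsl -ldl -lm -lcrypt -lutil -lpthread -lc';

If -lpthread appears in the libs line, the thread library is on perl's link line even when perl itself was built without ithreads support, which is all that loading parrot as an extension requires.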
Re: restore N via win32 CreateProcessA
At 5:01 AM + 12/17/03, Pete Lomax wrote:
>On Tue, 16 Dec 2003 19:54:25 -0500, Dan Sugalski <[EMAIL PROTECTED]>
>wrote:
>>At 11:38 PM + 12/16/03, Pete Lomax wrote:
>>>Hi, I've hit a very strange problem:
>>>
>>>   set N18, 86
>>>   save N18
>>>   restore N18
>
>Solved. I forgot I was using -O2 when executing via CreateProcessA,
>which I wasn't when running at the DOS prompt. Under -O2 the above
>code outputs:
>
>   set N18, 86
>   save 86
>   restore N18
>
>which explains things. I'll stop using -O2 (for now).
>
>Maybe pushing an int and popping a num should be allowed?

I've considered making pop autoconverting, but the assumption at the
moment is that only compiler-generated code (and the odd hand-written
stuff) will be pushing and popping things onto the stack, so if there's
a type mismatch it indicates a problem. It does make conversion a
pop/set combo rather than just a pop, but I'm not sure that's a
problem.

Also, if we disallow autoconversion we can possibly shrink down the
size of stack entries some by not keeping the type information around
at all. (Though that has the possibility of exploitation by malicious
code and DOD issues, so it's probably not worth it)
-- 
                                        Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                     have teddy bears and even
                                      teddy bears get drunk
Re: [perl #24682] [BUG] parrot compile fails on MacOS 10.3.1 - possibly dynaloading patch?
Leo wrote:
>
> > Failed Test  Stat Wstat Total Fail  Failed  List of Failed
> > ---------------------------------------------------------
> > t/op/hacks.t    2   512     2    2 100.00%  1-2
>
> Could you please run these 2 standalone:
>
> $ parrot t/op/hacks_1.pasm
> catched it
> error -8
> severity 0
> $ parrot t/op/hacks_2.pasm
> catched it
> ok

$ parrot t/op/hacks_1.pasm
not reached
$ parrot t/op/hacks_2.pasm
not reached
$

Allison
Re: [perl #24682] [BUG] parrot compile fails on MacOS 10.3.1 - possibly dynaloading patch?
Allison Randal <[EMAIL PROTECTED]> wrote: > $ parrot t/op/hacks_1.pasm > not reached Very likely that SIGFPE isn't defined. Does F have an entry for SIGFPE? Is PARROT_HAS_HEADER_SIGNAL defined? > Allison leo
Re: Vocabulary
On Wed, Dec 17, 2003 at 06:20:22AM -, Rafael Garcia-Suarez wrote: : Larry Wall wrote in perl.perl6.language : : > On Wed, Dec 17, 2003 at 12:11:59AM +, Piers Cawley wrote: : >: When you say CHECK time, do you mean there'll be a CHECK phase for : >: code that gets required at run time? : > : > Dunno about that. When I say CHECK time I'm primarily referring : > to the end of the main compilation. Perl 5 appears to ignore CHECK : > blocks declared at run time, so in the absence of other considerations : > I suspect Perl 6 might do the same. : : This has proven to be inconvenient except for a few specialized usages, : such as the B::/O compiler framework. : : There's a need (more or less) for special blocks that can be run at the : end of the compilation phase of any arbitrary compilation unit. Well, that's what I'd call an "other consideration". :-) Larry
Re: Vocabulary
On Tue, Dec 16, 2003 at 06:55:56PM -0500, Gordon Henriksen wrote:
: Michael Lazzaro wrote:
:
: > I don't think so; we're just talking about whether you can extend a
: > class at _runtime_, not _compiletime_. Whether or not Perl can have
: > some degree of confidence that, once a program is compiled, it won't
: > have to assume the worst-case possibility of runtime alteration of
: > every class, upon every single method call, just in case
: > you've screwed with something.
:
: That's a cute way of glossing over the problem.
:
: How do you truly know when runtime is in the first place? Imagine an
: application server which parses and loads code from files on-demand.
: This shouldn't be difficult. Imagine that that code references a
: system of modules.
:
: Imagine if Perl "finalizes" classes after "primary compilation"
: (after parsing, say, an ApacheHandler file), and proceeds to behave
: quite differently indeed afterwards.
:
: Imagine that a perfectly innocent coder finds that his class
: library doesn't run the same (doesn't run at all) under the
: application server as it does when driven from command line scripts:
: His method overrides don't take effect (or, worse, Perl tells him he
: can't even compile them because the class is already "finalized"! And
: he thought Perl was a dynamic language!).
:
: What's his recourse? Nothing friendly. Tell Perl that he's going
: to subclass the classes he subclasses? Why? He already subclasses
: them! Isn't that "tell" enough? And how? Some obscure configuration
: file of the application server, no doubt. And now the app server needs
: to be restarted if that list changes. His uptime just went down. And
: now he can't have confidence that his system will continue to behave
: consistently over time; "apachectl restart" becomes a part of his
: development troubleshooting lexicon.

Any such application server would probably just use DYNAMIC_EVERYTHING;
(or whatever we call it) and have done with it.

: Java doesn't make him do that; HotSpot can make this optimization at
: runtime and back it out if necessary. Maybe he'll just write a JSP
: instead.

If Parrot turns out to be able to make this optimization, then the
individual declarations of dynamism merely become hints that it's not
worth trying to optimize a particular class because it'll get
overridden anyway. It's still useful information on an individual class
basis. The only thing that is bogus in that case is the global
DYNAMIC_EVERYTHING declaration in the application server. So I could be
argued into making that the default. A program that wants a static
analysis at CHECK time for speed would then need to declare that.

The downside of making that the default is that then people won't
declare which classes need to remain extensible under such a regime.
That's another reason such a declaration does not belong with the class
itself, but with the users of the class. If necessary, the main program
can pick out all the classes it thinks need to remain dynamic:

    module Main;
    use STATIC_CLASS_CHECK;
    use class Foo is dynamic;
    use class Bar is dynamic;

or whatever the new C syntax will be in A11...

: C# and VB.NET do likewise. ASP.NET isn't looking so bad, either. The
: .NET Frameworks are sure a lot less annoying than the Java class
: library, after all.

On the other hand, those guys are also doing a lot more mandatory
static typing to get their speed, and that's also annoying.
(Admittedly, they're working on supporting dynamic languages better.)
: Point of fact, for a large set of important usage cases, Perl simply : can't presume that classes will EVER cease being introduced into the : program. That means it can NEVER make these sorts of optimizations : unless it is prepared to back them out. Even in conventional programs, : dynamic class loading is increasingly unavoidable. Forcing virtuous : programmers to declare "virtual" (lest their program misbehave or : their perfectly valid bytecode fail to load, or their perfectly valid : source code fail to compile) is far worse than allowing foolish : programmers to declare "final." The relative merit depends on who declares the "final", methinks. But if we can avoid both problems, I think we should. : Making semantic distinctions of this scale between "compile time" : and "runtime" will be a significant blow to Perl, which has always been : strengthened by its dynamism. Its competitors do not include such : artifacts; they perform class finalization optimizations on the fly, : and, despite the complexity of the task, are prepared to back out these : optimizations at runtime--while the optimized routines are executing, : if necessary. Yes, this requires synchronization points, notifications : (or inline checks), and limits code motion. Better than the : alternative, I say. It is very simply a huge step backwards to : create a semantic wall between primary compilation and program : execution. : : So write the complicated code
pdd03 and method calls
While playing with calling threaded subs, I came across a thing which I
think might be suboptimal: pdd03 states that the method PMC should go
into P2.

This doesn't really play well with Perl5 <-> Perl6 interoperability
IMHO. Perl5 methods are plain subs, where the first param is the
object. I dunno if Ponie will ever use ParrotClass/ParrotObject, but
I'm sure that calling Perl6 methods should work (and vv).

So me thinks that the method PMC should be the first PMC argument,
living in P5:

  sub meth {
      my ($self, $arg1) = @_;   # P5P6
      ...

Comments?

leo
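[For context, a tiny Perl 5 sketch of the convention Leo refers to; the package and method names are made up. A Perl 5 method is just a sub whose first argument is the invocant, so a method call and a plain sub call are interchangeable.]

    package Dog;
    sub new   { my ($class, %args) = @_; bless { %args }, $class }
    sub speak { my ($self, $what) = @_; print "$self->{name} says $what\n" }

    package main;
    my $dog = Dog->new(name => 'Rex');
    $dog->speak('woof');         # method-call syntax
    Dog::speak($dog, 'woof');    # same call: the object is just the first argument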
remarks WRT clone
In former days and before YAPC::EU I changed the original clone vtable,
which was IIRC:

  PMC* clone()   # return new clone of pmc

to the now existing form, which gets an uninitialized destination PMC.
This change was necessary at that time for the reasons described in F,
keyword "Variant 2: Anchor early, anchor often".

This is solved, stackwalking during DOD works, so /me thinks that we
can again use the original signature of the clone vtable. This also
simplifies switching clone to Parrot_clone (the real and final clone
via freeze/thaw), which just happens to return a newly created PMC.

leo