Attached is a quick-and-dirty parrotbench. Instead of a complicated test harness it uses bash to take the time measurements, so adding support for new languages is very simple.
Currently it's just a proof of concept, but if you like it I will make a better version with pretty-printing, extended reports and such.
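To make the idea concrete, here is a minimal sketch of such a bash harness. It is not the attached patch; the benchmark names, the examples/benchmarks/ layout and the one-script-per-language naming are assumptions:

    #!/bin/bash
    # Minimal sketch of a bash timing harness (not the attached patch).
    # Assumed layout: one script per language in examples/benchmarks/,
    # sharing a basename (oo1.pasm, oo1.pl, oo1.py, oo1.rb, ...).

    cd examples/benchmarks || exit 1

    printf '%-12s %9s %9s %9s %9s\n' '' parrot perl python ruby
    for name in oo1 stress primes; do       # assumed benchmark names
        printf '%-12s' "$name"
        for interp in parrot perl python ruby; do
            case $interp in
                parrot) script=$name.pasm ;;
                perl)   script=$name.pl   ;;
                python) script=$name.py   ;;
                ruby)   script=$name.rb   ;;
            esac
            if type -p "$interp" >/dev/null && [ -f "$script" ]; then
                # bash's `time' keyword prints per TIMEFORMAT; %R = real seconds
                t=$( { TIMEFORMAT=%R; time "$interp" "$script" >/dev/null 2>&1; } 2>&1 )
                printf ' %9s' "$t"
            else
                printf ' %9s' -             # interpreter or script missing
            fi
        done
        echo
    done

Missing interpreters or missing scripts simply show up as "-", which is exactly what the run below does.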
Here's an example run (times in seconds):
                              parrot      perl    python      ruby
  addit                        8.469     7.379         -         -
  arriter                          -     1.657         -         -
  bench_newp                   1.827         -         -         -
  fib                              -     0.594         -         -
  freeze                       0.783     1.65          -         -
  gc_alloc_new                 0.191         -         -         -
  gc_alloc_reuse               4.068         -         -         -
  gc_generations               6.363         -         -         -
  gc_header_new                1.168         -         -         -
  gc_header_reuse              5.772         -         -         -
  gc_waves_headers             1.302         -         -         -
  gc_waves_sizeable_data       1.074         -         -         -
  gc_waves_sizeable_headers    3.702         -         -         -
  oo1                          3.571     1.189     0.689         -
  primes                      27.991   383.851         -         -
  primes2                     17.325         -    44.379         -
  primes2_p                   29.753         -         -         -
  prop                         0.14          -         -         -
  shared_ref                   0.552    11.563         -         -
  stress                       1.988     0.905         -         -
  stress1                     27.539    17.312         -         -
  stress2                      3.908     3.440         -         -
  stress3                     19.050         -         -         -
  utf8                         0.13          -         -         -
  vpm                              -    40.057         -         -
Cheers, Sebastian
Leopold Toetsch wrote:
I had a short look at perlbench from CPAN. It inspired me to the following idea:
examples/benchmarks/* has a bunch of programs, e.g. oo1.pasm, oo1.pl, oo1.py, stress.pasm, stress.pl, ...
Now, just as perlbench is able to compare the run times of different perl versions, the goal of this task is to provide a script that compares different interpreters and finally spits out:
          Parrot-j  Parrot-C     Perl  Python  Ruby
  oo1         100%      103%      75%     50%     -
  mops        100%      200%   40000%
  stress       ...                        -      -
or some such.
To simplify the task, we could of course move the tests used into a separate directory. Unavailable interpreters (or missing scripts for a language) are simply skipped.
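For illustration only, here is a sketch of how such a relative report might be computed: each time is expressed as a percentage of the first column's time, which is taken as the 100% baseline (so bigger means slower). The hard-coded times come from Sebastian's run above; a real script would collect them itself and would need to handle a missing baseline:

    #!/bin/bash
    # Sketch of the relative report: percent of the first (baseline) time.

    report_row() {
        name=$1; shift
        baseline=$1                   # first time is the 100% reference
        printf '%-8s' "$name"
        for t in "$@"; do
            if [ "$t" = "-" ]; then
                printf ' %7s' -
            else
                # integer percentage via bc, since bash has no float math
                printf ' %6s%%' "$(echo "$t * 100 / $baseline" | bc)"
            fi
        done
        echo
    }

    printf '%-8s %7s %7s %7s %7s\n' '' parrot perl python ruby
    report_row oo1     3.571   1.189   0.689  -
    report_row primes 27.991 383.851   -      -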
Any takers?
leo
parrotbench.patch.gz