+1

 

These benchmarks seem to test the performance of specific algorithms, 
implemented in different languages. Including startup or compile time doesn't 
make any sense to me in that case.

 

From: [email protected] [mailto:[email protected]] On 
Behalf Of Stefan Karpinski
Sent: Wednesday, January 27, 2016 2:39 PM
To: Julia Users <[email protected]>
Subject: Re: [julia-users] Re: Julia Benchmarks Feedback

 

It's true. However, interpreters don't take much time to start up – an 
interpreter is the lowest-latency way to get from source code to running 
program (compared to AOT or JIT compilation). There's an argument to be made 
for including all of the things that take time when benchmarking. However, we 
don't time benchmarks because we're interested in how fast language 
implementations can run the benchmarks – we're trying to get some sense of how 
fast they'll run real programs. And if you care about speed, you probably have 
a program that's going to take a non-trivial amount of time to run – which 
means that runtime startup and JIT become negligible. But we don't 
really want benchmarks that run for minutes. So instead we eliminate fixed 
overheads like runtime startup and JIT time when measuring benchmarks – not 
because they don't happen, but because they're a fixed cost that ends up 
being insignificant for real-life programs. The only scenario where I can 
imagine really caring about runtime startup and JIT is if you were evaluating 
how good a language implementation is for running short-lived command-line 
programs. Then it would absolutely make sense to measure runtime startup and 
JIT. But I don't think that's what these benchmarks are trying to measure.
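For what it's worth, the measurement pattern described above – run the kernel once untimed to pay any fixed compilation cost, then time only the steady-state run inside the already-running process – can be sketched like this. (Python used here only for illustration; `fib` is a hypothetical stand-in workload, not one of the benchmarks in this thread. In a JIT-compiled language like Julia, the untimed warm-up call is where compilation happens.)

```python
import time

# Hypothetical stand-in workload, not one of the kostya benchmarks.
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Warm-up call: in a JIT-compiled implementation this is where
# compilation would happen, so it is deliberately left untimed.
fib(25)

# Only the steady-state run is timed; runtime startup and any
# JIT cost are excluded, which is what the benchmarks do.
start = time.perf_counter()
result = fib(25)
elapsed = time.perf_counter() - start
print(result)
```

The point is simply that the timer starts after the process is up and the code is compiled, so the number reflects the algorithm rather than fixed per-process overhead.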

 

On Wed, Jan 27, 2016 at 4:48 PM, Ismael Venegas Castelló 
<[email protected] <mailto:[email protected]> > wrote:

But then again, the benchmarks are flawed, because other implementations are 
also timing the interpreter's startup time.
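One way to see that fixed startup cost in isolation is to time the runtime from outside on an empty program, so the measurement contains nothing but startup. A rough sketch, assuming a `python3` binary is on the PATH (substitute any interpreter you want to compare):

```python
import subprocess
import time

# Timing from outside the process: the child does no work,
# so the elapsed time is essentially pure interpreter startup.
start = time.perf_counter()
subprocess.run(["python3", "-c", "pass"], check=True)
startup_cost = time.perf_counter() - start
print("startup-inclusive time:", startup_cost)
```

Comparing that number against a benchmark's total runtime shows whether startup is actually a significant fraction for a given workload.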



On Tuesday, January 26, 2016 at 22:07:54 (UTC-6), George wrote:

I was surprised to see the results of the following benchmarks:

https://github.com/kostya/benchmarks

 


Some benchmarks of different languages


Brainfuck
https://github.com/kostya/benchmarks/tree/master/brainfuck


bench.b


Language           Time, s   Memory, MB

Nim Clang            3.21       0.7
Felix                4.07       1.3
Nim Gcc              4.52       0.6
Java                 4.94     147.6
C++                  5.08       1.1
Rust                 5.46       4.9
Scala                5.90     116.3
D                    6.57       1.0
D Ldc                6.61       0.9
Crystal              6.97       1.3
Go                   7.29       1.3
Javascript Node      8.74      15.0
D Gdc                8.87       1.0
Julia                9.25      59.0
Javascript V8        9.41       8.1
Go Gcc              13.60      10.0
Python Pypy         13.94      55.4
Javascript Jx       17.14      11.0
C# Mono             18.08      15.4
OOC                 48.86       1.3
Ruby JRuby          87.05     124.1
Ruby Topaz         112.91      36.0
Ruby JRuby9K       160.15     297.2
Ruby               226.86       8.0
Tcl                262.20       2.7
Python             452.44       4.9
Ruby Rbx           472.08      45.0
Python3            480.78       5.5


mandel.b

Mandel in Brainfuck
https://github.com/kostya/benchmarks/blob/master/brainfuck/mandel.b


Language           Time, s   Memory, MB

Nim Clang           28.96       1.0
Felix               40.06       3.7
D Ldc               43.30       0.9
D                   45.29       1.2
Rust                46.34       4.9
Crystal             48.62       1.3
Nim Gcc             50.45       0.9
Go                  52.56       7.6
Java                55.14      69.9
Cpp                 56.63       1.1
Scala               64.37     126.4
D Gdc               70.12       1.5
Go Gcc              85.67      10.7
Javascript Node     92.65      15.8
Julia               94.33      56.9

...

 
