On Friday, March 27, 2015 at 12:12:19 PM UTC+11, Phil Tomson wrote:
>
>
>
> On Thursday, March 26, 2015 at 5:51:59 PM UTC-7, [email protected] 
> wrote:
>>
>> Lots of useful answers here. This is an issue for me a lot too. Here are 
>> two StackOverflow links that provide some more interesting reading:
>>
>>
>> http://stackoverflow.com/questions/26173635/performance-penalty-using-anonymous-function-in-julia
>>
>> http://stackoverflow.com/questions/28356437/julia-compiler-does-not-appear-to-optimize-when-a-function-is-passed-a-function
>>
>> Stefan Karpinski answers in one of them that the problem will be fixed in 
>> an upcoming overhaul of the type system. My current understanding of the 
>> roadmap is that it is definitely planned to be fixed by v1.0, but that 
>> there is quite a lot of support for a fix in v0.4 (not sure yet whether it 
>> will happen).
>>
>> Cheers,
>>
>> Colin
>>
>>
> Colin, 
>
> Thanks for the links. It is a bit encouraging that you can specify the 
> return type as shown in the second link there and achieve a little bit of a 
> speedup (still not great performance, but about a 20% speedup in the small 
> testcase I tried).
>
>  
>

It's not actually specifying the return type; it's asserting that the value 
is an Int (or whatever). The assertion is checked immediately on return, so 
all the rest of the code can be optimised on the assumption that it is an 
Int, which gives you some speedup. But there is actually no restriction on 
the function passed as a parameter. Once it becomes possible to specify that 
only functions returning Ints may be passed for that parameter, the check 
can be removed and the rest of the code optimised on that assumption.
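To make that concrete, here is a sketch of the assertion trick applied to the benchmark below (the name `test_time_asserted` is made up for illustration; the assertion is `::Float64` rather than `::Int` because the accumulator is a Float64):

```julia
# Variant of the test_time benchmark that asserts the return type of the
# passed function. The ::Float64 assertion is checked on every call; as long
# as it holds, the rest of the loop body can be compiled assuming `sum`
# stays a Float64, which avoids some of the boxing.
function test_time_asserted(func::Function)
    sum = 1.0
    for i in 1:1000000
        sum += func(sum)::Float64  # throws a TypeError if func returns anything else
    end
    sum
end

test_time_asserted(abs)  # same Inf result as the unasserted version
```

Note that this only narrows the types *inside* the loop; it does nothing to restrict which functions can be passed in, which is why the check must stay.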

Cheers
Lex
 

>
>>
>> On Thursday, 26 March 2015 05:41:10 UTC+11, Phil Tomson wrote:
>>>
>>>  Maybe this is just obvious, but it's not making much sense to me.
>>>
>>> If I have a reference to a function (pardon me if that's not the correct 
>>> Julia terminology - basically just a variable that holds a Function 
>>> type) and call it, it runs much more slowly than calling the function 
>>> directly - presumably because it allocates a lot more memory.
>>>
>>> Maybe that's not so clear, so let me show an example using the abs 
>>> function:
>>>
>>>     function test_time()
>>>         sum = 1.0
>>>         for i in 1:1000000
>>>             sum += abs(sum)
>>>         end
>>>         sum
>>>     end
>>>
>>> Run it a few times with @time:
>>>
>>>    julia> @time test_time()
>>>     elapsed time: 0.007576883 seconds (96 bytes allocated)
>>>     Inf
>>>
>>>    julia> @time test_time()
>>>     elapsed time: 0.002058207 seconds (96 bytes allocated)
>>>     Inf
>>>
>>>     julia> @time test_time()
>>>     elapsed time: 0.005015882 seconds (96 bytes allocated)
>>>     Inf
>>>
>>> Now let's try a modified version that takes a Function on the input:
>>>
>>>     function test_time(func::Function)
>>>         sum = 1.0
>>>         for i in 1:1000000
>>>             sum += func(sum)
>>>         end
>>>         sum
>>>     end
>>>
>>> So essentially the same function, but this time the function is passed 
>>> in. Running this version a few times:
>>>
>>>     julia> @time test_time(abs)
>>>     elapsed time: 0.066612994 seconds (32000080 bytes allocated, 31.05% 
>>> gc time)
>>>     Inf
>>>  
>>>     julia> @time test_time(abs)
>>>     elapsed time: 0.064705561 seconds (32000080 bytes allocated, 31.16% 
>>> gc time)
>>>     Inf
>>>
>>> So it's roughly 10X slower, probably because of the much larger amount 
>>> of memory allocated (32000080 bytes vs. 96 bytes).
>>>
>>> Why does the second version allocate so much more memory? (I'm running 
>>> Julia 0.3.6 for this testcase)
>>>
>>> Phil
>>>
>>>
>>>

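For reference, the workaround commonly suggested at the time was to pass a dedicated type rather than a bare Function, so that the loop specializes on it. A sketch in Julia 0.3 syntax follows; `AbsFun`, `evaluate`, and `test_time_functor` are invented names, patterned after the approach used by the NumericFuns.jl package:

```julia
# "Functor" workaround: wrap the operation in its own type, so that
# test_time_functor compiles a specialized version for each wrapper type
# instead of treating the argument as a generic, unspecialized Function.
immutable AbsFun end                 # Julia 0.3/0.4 syntax; `struct AbsFun end` in 1.0+
evaluate(::AbsFun, x) = abs(x)       # the wrapped operation, dispatched on the type

function test_time_functor{F}(f::F)  # 0.3 parametric syntax; `f::F) where {F}` in 1.0+
    sum = 1.0
    for i in 1:1000000
        sum += evaluate(f, sum)      # statically dispatched once F is known
    end
    sum
end

test_time_functor(AbsFun())          # same Inf result as test_time(abs)
```

Since `F` is a type parameter, the compiler knows exactly which `evaluate` method is being called and can inline it, avoiding the per-call boxing seen in the `test_time(abs)` timings above.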