On Mon, 5 Aug 2024 17:20:24 GMT, Afshin Zafari <azaf...@openjdk.org> wrote:

>> Why would the execution time grow logarithmically when we do linearly more 
>> work? When we run this with `N2` we will perform `10_000 * log(10_000, 2)` 
>> units of work, and for `N1` we will perform `1_000 * log(1_000, 2)` units of 
>> work, approximately. A closer approximation is `O(log(1)) + O(log(2)) + ... 
>> + O(log(n))` for `n = N2` or `n = N1`.
>> 
>> Let's calculate that:
>> 
>> 
>>>>> import math
>>>>> def time_it(n):
>> ...     t = 0
>> ...     for i in range(1, n):
>> ...     t = t + math.log(i)
>> ...     return [t, 3*t] # Approximate best and worst case
>> ... 
>>>>> time_it(1000)
>> [5905.220423209189, 17715.661269627566]
>>>>> time_it(10_000)
>> [82099.71749644217, 246299.15248932652]
>>>>> def compare(a, b):
>> ...     ab, aw = a
>> ...     bb, bw = b
>> ...     return [ bb / ab, bw / aw ]
>> ... 
>>>>> compare(time_it(1000), time_it(10_000))
>> [13.902904821938064, 13.902904821938066]
>>>>> # Ouch, that's outside of our linear bound!
>> 
>> 
>> What I said previously also applies, especially the performance of `malloc` 
>> will impact this I suspect.
>
> It is assumed that `malloc` and other external events behave the same in both 
> cases. If we know there might be noise from one source or another, we should 
> check for it and disable it. This is the approach I have been talking about. 
> How can we avoid noise from the `malloc` side?

When an algorithm is said to have log(n) time complexity, it means that if the 
input grows n times, the time grows log(n) times. The tree data structure has 
log(n) time complexity. VMATree may not have exactly log(n) response times due 
to self-balancing costs, but it is still expected to be less than O(n).
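To make the two readings of "log(n)" concrete, here is a minimal sketch in the same style as the REPL session above. It assumes a simplified cost model where one balanced-tree operation on n nodes costs log2(n) units; the function names `per_op` and `total` are illustrative, not part of VMATree. It shows that a single operation does grow logarithmically (the ratio is a small constant), while the total cost of performing n such operations grows like n*log(n), which is why the measured ratio in the quoted calculation exceeds the 10x input growth:

```python
import math

def per_op(n):
    # Assumed cost model: one lookup/insert in a balanced tree of n nodes.
    return math.log2(max(n, 2))

def total(n):
    # Total cost of building the tree by n successive insertions:
    # log2(1) + log2(2) + ... + log2(n), i.e. Theta(n * log n).
    return sum(per_op(i) for i in range(1, n + 1))

# One operation: input grows 10x, cost grows only by a constant factor.
per_op_ratio = per_op(10_000) / per_op(1_000)   # = 4/3, approximately 1.33

# n operations: cost grows faster than the 10x input growth.
total_ratio = total(10_000) / total(1_000)      # approximately 13.9
```

Under this model, the benchmark above (which performs N insertions, not one) should be compared against an n*log(n) bound, not a log(n) one.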

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/20425#discussion_r1704427407
