It could be an issue that has already been fixed, since the V8 version in 
Node is pinned at release time, particularly if you are using an older 
version of Node. You might also want to try Node 10, which was recently 
released.

On Saturday, April 7, 2018 at 12:48:51 AM UTC+2, J Decker wrote:
>
> Was formulating a crbug and realized I was using Node.  I'm doing the 
> console.log with Node; when I grab the code and run it in a webpage it 
> doesn't suffer the same slowdown.
>
> On Fri, Apr 6, 2018 at 3:13 PM, J Decker <d3c...@gmail.com> wrote:
>
>> Sorry, I got busy with other things.
>>
>> This gist is fast:
>> https://gist.github.com/d3x0r/be849400be3ea30877568e5656a86ca3
>>
>> How to slow it down:
>>
>> (line 1)
>> function pcg_setseq_128_srandom_r()
>> {
>>      //const state = new Uint32Array([0,0,0,0,0,0,0,0]);
>>      const state = [0,0,0,0,0,0,0,0];
>>
>> Uncomment the first line and comment out the second const state line to 
>> use a Uint32Array (a typed array) instead of a plain array.
>>
>> As is, with a standard array, my system reports (Done in 2045 /ms 
>> 48899.75550122249 1564792.1760391197).
>> With the Uint32Array, if it runs more than 4 seconds I end it, because it 
>> will take 15-20 seconds (Done in 22615 /ms 4421.843908910016 
>> 141499.0050851205).
>>
>> With the speedups mentioned below, the Uint32Array version IS faster.
>> Applying the first one, with the state replaced by a Uint32Array, when 
>> it's fast it reports (Done in 1530 /ms 65359.47712418301 
>> 2091503.2679738563).
>>
>> ----------
>> Okay?  Is that reproducible?
>>
>>
>>
>> How to speed it up:
>> 1) Remove the console.log( testRng ); on line 46.  (It's done before the 
>> loop; no, the logging itself is not measured in the timing.)
>> or 2) Comment out state: new Uint32Array([0,0,0,0,0,0,0,0]),  on line 10. 
>> This is a Uint32Array that is put into the returned RNG object, and if 
>> console.log never logs a Uint32Array the test is fast (a minimal sketch of 
>> that shape follows below).
>>
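>> For reference, a minimal sketch of the repro shape described above; 
>> makeRng and next are illustrative names, not the gist's actual ones:
>>
>>     function makeRng() {
>>       return {
>>         // holding a typed array in the object that gets logged is the trigger
>>         state: new Uint32Array([0,0,0,0,0,0,0,0]),
>>         next() {
>>           // simple 32-bit LCG step, just so the loop touches the typed state
>>           this.state[0] = (Math.imul(this.state[0], 747796405) + 2891336453) >>> 0;
>>           return this.state[0];
>>         }
>>       };
>>     }
>>
>>     const testRng = makeRng();
>>     console.log( testRng );       // logging the object (and its Uint32Array)
>>     const start = Date.now();     // happens once, before the timed loop
>>     let sum = 0;
>>     for( let i = 0; i < 10000000; i++ ) sum += testRng.next();
>>     console.log( "Done in", Date.now() - start, "ms", sum );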
>>
>> On Friday, April 6, 2018 at 2:28:40 AM UTC-7, Jakob Gruber wrote:
>>>
>>> If you do end up with a good repro for the performance difference 
>>> between typed arrays and arrays, please post it at crbug.com/v8/new. 
>>>
>>> On Fri, Apr 6, 2018 at 8:33 AM, <mog...@syntheticsemantics.com> wrote:
>>>
>>>> Are you able to hoist the memory allocations out of the library, so the 
>>>> caller can allocate the buffers it needs and reuse them from call to 
>>>> call?
>>>>
>>>> new in JS has GC overhead not present in C++ alloc/free.  Aside from 
>>>> those variables, the rest are stack allocations, and GC won't play much 
>>>> of a role.
>>>>
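>>>> For example, a sketch of that shape (makeNoiseInto and scratch are 
>>>> made-up names, not your actual API):
>>>>
>>>>     // Before: a fresh buffer on every call, so every call feeds the GC.
>>>>     function makeNoise( n ) {
>>>>       const out = new Float64Array( n );
>>>>       for( let i = 0; i < n; i++ ) out[i] = Math.random();
>>>>       return out;
>>>>     }
>>>>
>>>>     // After: the caller owns the buffer and reuses it from call to call.
>>>>     function makeNoiseInto( out ) {
>>>>       for( let i = 0; i < out.length; i++ ) out[i] = Math.random();
>>>>       return out;
>>>>     }
>>>>
>>>>     const scratch = new Float64Array( 1024 );   // allocated once by the caller
>>>>     for( let frame = 0; frame < 1000; frame++ ) makeNoiseInto( scratch );
>>>>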
>>>>                      -J
>>>>
>>>>
>>>> On Wednesday, April 4, 2018 at 7:03:05 PM UTC-7, J Decker wrote:
>>>>>
>>>>> How long of a story to tell(?)...
>>>>>
>>>>> I have a procedural random generator that uses the result of sha2 as a 
>>>>> stream of bits, regenerating another sha2 hash when all 256 bits have 
>>>>> been consumed.  I started to use this for Perlin noise generation, and 
>>>>> thought I had found that sha2 was the culprit consuming most of the 
>>>>> time.  Since I'm making something just generally random, I went looking 
>>>>> for alternative, lightweight RNGs.  After digging for a while I stumbled 
>>>>> on PCG (http://www.pcg-random.org/using-pcg.html).  It's basically a 
>>>>> header-only library, because it attempts to generate everything as 
>>>>> inline functions... 
>>>>>
>>>>> I made this JS port of it:
>>>>> https://gist.github.com/d3x0r/345b256be6569c0086c328a8d1b4be01
>>>>> This is the first revision:
>>>>> https://gist.github.com/d3x0r/345b256be6569c0086c328a8d1b4be01/fffa8e906d5723e66f7e9baa950b3b3d5b4895c7
>>>>> That first revision has a flow matching what the C code does more 
>>>>> closely; the current version is fast, generating 115k bits per 
>>>>> millisecond (vs the 9.3k bits/ms of sha2).  However, compared to the C 
>>>>> version, which generates 1.1M bits/ms, it's a factor of 10 off... and 
>>>>> the routine is generally only doing 64-bit integer math (though the test 
>>>>> was compiled in 32-bit mode, so it was really just 32-bit registers 
>>>>> emulating 64-bit values).
>>>>>
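>>>>> (To sketch what 'emulating 64 bit' means in JS terms: a 32x32 -> 64-bit 
>>>>> unsigned multiply has to be built from 16-bit halves so every 
>>>>> intermediate stays exact in a double.  This mul64 is illustrative only, 
>>>>> not the gist's actual code.)
>>>>>
>>>>>     function mul64( a, b ) {        // a, b are unsigned 32-bit values
>>>>>       const aLo = a & 0xffff, aHi = a >>> 16;
>>>>>       const bLo = b & 0xffff, bHi = b >>> 16;
>>>>>       const p0  = aLo * bLo;                             // exact
>>>>>       const mid = aHi * bLo + aLo * bHi + (p0 >>> 16);   // < 2^34, still exact
>>>>>       const lo  = (((mid & 0xffff) << 16) | (p0 & 0xffff)) >>> 0;
>>>>>       const hi  = (aHi * bHi + Math.floor( mid / 0x10000 )) >>> 0;
>>>>>       return { hi, lo };            // hi:lo is the full 64-bit product
>>>>>     }
>>>>>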
>>>>> If I just change the arrays created in getState() (the first function) 
>>>>> to Uint32Array(), it runs MUCH slower... 
>>>>>
>>>>> ----
>>>>> As I write this I have been updating things, and some of my numbers 
>>>>> from before are a factor of 8 off because I was counting bytes, not 
>>>>> bits; except for sha2, which really is slow... But I would like to take 
>>>>> this opportunity to say...
>>>>>
>>>>>     crypto.subtle.digest("SHA-256", buffer).then(hash=>hash );
>>>>>
>>>>> produces the same output type as the JavaScript version I'm using 
>>>>> (forked from a fork of the forge library and consolidated to just the 
>>>>> one return type...), but is another 10x slower than my JavaScript 
>>>>> SHA-256.
>>>>>
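>>>>> (The digest resolves to an ArrayBuffer, so consuming it as bytes looks 
>>>>> roughly like this; sha256Bytes is just an illustrative wrapper name.)
>>>>>
>>>>>     async function sha256Bytes( buffer ) {
>>>>>       const digest = await crypto.subtle.digest( "SHA-256", buffer );
>>>>>       return new Uint8Array( digest );   // byte view over the ArrayBuffer
>>>>>     }
>>>>>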
>>>>> I keep thinking, 'Oh, I'll just compile this, and even use the Intel 
>>>>> accelerated sha2msg1 and sha2msg2 instructions to make the C version 8x 
>>>>> faster than it is in straight C, which itself was already faster than 
>>>>> the JS version, and hook it into a...' (oh wait, I want to do this on a 
>>>>> webpage! A Node addon is no use there...).  
>>>>>
>>>>> Well... back to optimizing.
>>>>> ----
>>>>>
>>>>> I was also working on a simple test case to show where using a plain 
>>>>> array vs a typed array causes a speed difference, but it's not 
>>>>> immediately obvious what I'm doing that's causing it to deoptimize... so 
>>>>> I'll work on building that up until it breaks, or conversely strip the 
>>>>> other down until it speeds up.
>>>>>
>>>>>
>>>>> https://github.com/d3x0r/-/blob/master/org.d3x0r.common/salty_random_generator.js#L86
>>>>>   
>>>>> This is getting the bits from a typed array, and it's really not that 
>>>>> complex (especially when only getting 1 bit at a time, which is what I 
>>>>> was last speed testing with).  It turns out all the time is really spent 
>>>>> here: swapping out sha2 for pcg (without typed arrays) dropped that from 
>>>>> 150ms to 50ms, but the remainder was still 3500ms... so I misread the 
>>>>> initial performance graph, I guess... 
>>>>>
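>>>>> (Roughly the shape of that bit-pulling code, reduced to a sketch; 
>>>>> makeBitReader and getBits are made-up names, not the actual routine, and 
>>>>> it assumes a read never straddles a 32-bit word boundary.)
>>>>>
>>>>>     function makeBitReader( words /* Uint32Array */ ) {
>>>>>       let index = 0;   // current 32-bit word
>>>>>       let used  = 0;   // bits already consumed from it
>>>>>       return { getBits( count ) {
>>>>>         const mask = count === 32 ? 0xffffffff : (1 << count) - 1;
>>>>>         const bits = (words[index] >>> used) & mask;
>>>>>         used += count;
>>>>>         if( used >= 32 ) { used = 0; index++; }
>>>>>         return bits >>> 0;
>>>>>       } };
>>>>>     }
>>>>>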
>>>>> There's a stack of what were C macros, kept to make the whole thing more 
>>>>> readable:
>>>>> https://github.com/d3x0r/-/blob/master/org.d3x0r.common/salty_random_generator.js#L25
>>>>> If I inline these there's no improvement, so I guess they're all small 
>>>>> enough to qualify for automatic inlining anyway.  The version that's 
>>>>> currently on GitHub ended up creating a new Uint32Array(1) for every 
>>>>> result; I moved that out locally so I can use just a single buffer for 
>>>>> the result, and that sped up the initialization from 700ms to 200ms 
>>>>> (cumulative times).  But there's still something like 80% of the time in 
>>>>> the remainder of the getBuffer routine; maybe I need to move things out 
>>>>> of the Uint8Arrays (the data from sha2/pcg).
>>>>>
>>>>>
>>
>
>
