On Mon, Jul 26, 2021 at 11:19 AM Leszek Swirski <lesz...@chromium.org>
wrote:

> On Fri, Jul 23, 2021 at 1:18 AM Vitali Lovich <vlov...@gmail.com> wrote:
>
>> What's the best way to measure script parse time vs lazy function
>> compilation time? It's been a few months since I last looked at this, so my
>> memory is a bit hazy on whether I was timing the instantiation of
>> v8::ScriptCompiler::Source, v8::ScriptCompiler::CompileUnboundScript, or
>> the combined time of both (although I suspect both count as script parse
>> time?). I do recall that on my laptop, using the code cache basically
>> halved whatever I was measuring on larger scripts; I suspect it was the
>> overall time to instantiate the isolate with a script (the cache made no
>> difference on smaller scripts, so I suspect we're talking about script
>> parse time).
>>
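
(Inline note: in case it helps pin down what was being timed, here is a rough
sketch of the code-cache flow described above. It assumes you are already
inside an Isolate/HandleScope, omits error handling, and the timing and the
CompileWithOptionalCache name are purely illustrative.)

  #include <chrono>
  #include "v8.h"

  v8::Local<v8::UnboundScript> CompileWithOptionalCache(
      v8::Isolate* isolate, v8::Local<v8::String> source_text,
      v8::ScriptCompiler::CachedData* cache /* may be nullptr */) {
    // Source takes ownership of |cache| when one is passed in.
    v8::ScriptCompiler::Source source(source_text, cache);
    auto options = cache ? v8::ScriptCompiler::kConsumeCodeCache
                         : v8::ScriptCompiler::kNoCompileOptions;
    auto start = std::chrono::steady_clock::now();
    v8::Local<v8::UnboundScript> script =
        v8::ScriptCompiler::CompileUnboundScript(isolate, &source, options)
            .ToLocalChecked();
    auto elapsed = std::chrono::steady_clock::now() - start;
    // |elapsed| is the compile/parse time being compared with vs without a
    // code cache.
    (void)elapsed;
    return script;
  }

  // After a first (cache-less) compile, the cache itself comes from:
  //   v8::ScriptCompiler::CachedData* data =
  //       v8::ScriptCompiler::CreateCodeCache(unbound_script);
  // and the bytes in data->data / data->length can be persisted and fed back
  // in as a new CachedData for the consuming compile above.
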
>
> The best way is to run with --runtime-call-stats; this will give you
> detailed scoped timers for almost everything we do, including compilation.
> Script deserialisation is certainly faster than script compilation, so I'm
> not surprised it has a big impact when the two are compared against each
> other; I'm more curious how it compares to overall worklet runtime.
>
>
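
(Inline note: for an embedder, the flag can be set before V8 is initialized,
roughly as below. This is just a sketch -- I believe the stats table is
printed when the isolate is torn down, but exactly where the output lands may
depend on the V8 version, so verify locally. With d8 it's simply
"d8 --runtime-call-stats script.js".)

  #include <memory>
  #include "libplatform/libplatform.h"
  #include "v8.h"

  int main() {
    // Set flags before V8::Initialize().
    v8::V8::SetFlagsFromString("--runtime-call-stats");
    std::unique_ptr<v8::Platform> platform =
        v8::platform::NewDefaultPlatform();
    v8::V8::InitializePlatform(platform.get());
    v8::V8::Initialize();
    // ... create the isolate, compile and run the worklet, dispose ...
    v8::V8::Dispose();
    v8::V8::ShutdownPlatform();
    return 0;
  }
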
>> FWIW, if it's helpful: when I profiled a stress test of isolate
>> construction on my machine with a release build, I saw V8 spending a lot of
>> time deserializing the snapshot (seemingly once for the isolate & then
>> again for the context).
>>
>
> Yeah, the isolate snapshot is the ~immutable, context-independent one
> (think of things like the "undefined" value), which is deserialized once per
> isolate, and the context snapshot contains the things that are mutable
> (think of things like the "Math" object) and so have to be fresh per new
> context. Note that these snapshots use the same serialization mechanism as
> the code cache, but are otherwise entirely distinct.
>
>
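
(Inline note: in embedder-API terms, my understanding is that the split
corresponds roughly to the two calls below -- sketch only, with cleanup
omitted.)

  // Isolate snapshot: deserialized once per isolate, during Isolate::New().
  v8::Isolate::CreateParams params;
  params.array_buffer_allocator =
      v8::ArrayBuffer::Allocator::NewDefaultAllocator();
  v8::Isolate* isolate = v8::Isolate::New(params);

  // Context snapshot: deserialized again for every Context::New().
  {
    v8::Isolate::Scope isolate_scope(isolate);
    v8::HandleScope handle_scope(isolate);
    v8::Local<v8::Context> context = v8::Context::New(isolate);
  }
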
>> Breakdown of the flamegraph:
>> * ~22% of total runtime to run NewContextFromSnapshot. Within that, ~5% of
>> total runtime was spent just decompressing the snapshot & the rest (17%)
>> was deserializing it. I thought there was only one snapshot. Couldn't the
>> decompression happen once in V8System instead?
>>
>
> It's possible that the decompression could happen once per isolate, although
> there is the memory impact to consider.
>

Snapshot compression can be disabled at build time; see
https://source.chromium.org/chromium/chromium/src/+/main:v8/BUILD.gn;l=288;drc=67960ba110803b053a772eff7aeac6c5d2f23143


>
>
>> * 9% of total runtime spent decompressing the snapshot for the isolate (in
>> other words, a combined 14% of total runtime was spent decompressing
>> snapshots).
>>
>> In our use case we construct a lot of isolates in the same process. I'm
>> curious if there are opportunities to extend V8 to use COW to reduce the
>> memory & CPU impact of deserializing the snapshot multiple times. Is my
>> guess correct that deserialization is actually doing non-trivial work like
>> relocating objects, or do you think there's a zero-copy approach that could
>> be taken with serializing/deserializing the snapshot so that it's prebuilt
>> in the right format (perhaps even without any compression)?
>>
>
> There are definitely relocations happening during deserialisation. For the
> isolate, we've wanted to share the "read-only space", which contains
> immutable, immortal objects (like "undefined"), but under pointer
> compression this has technical issues because of limited guarantees when
> using mmap (IIRC). I imagine COW for the context snapshot would have
> similar issues, combined with the COW getting immediately defeated as soon
> as the GC runs (because it has to mutate the data to set mark bits). It's a
> direction worth exploring, but it hasn't been enough of a priority for us.
>
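
(Inline note: as I understand it, the serialized blob itself can already be
shared across isolates -- it's the deserialized, relocated heap that can't be.
A rough sketch of that current state, where LoadSnapshotBytes() is a
hypothetical helper for however the embedder gets the blob into memory:)

  // Registered once per process, before V8::Initialize(); the bytes in
  // |blob| are shared by every isolate.
  v8::StartupData blob = LoadSnapshotBytes();  // hypothetical helper
  v8::V8::SetSnapshotDataBlob(&blob);

  // But each of these calls still deserializes and relocates into the new
  // isolate's own heap (as in the earlier snippet), which is the per-isolate
  // cost that COW or lazy deserialization would cut down.
  v8::Isolate* isolate = v8::Isolate::New(create_params);  // params as above
  v8::Local<v8::Context> context = v8::Context::New(isolate);
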
> Another thing we're considering looking into is deserializing the context
> snapshot lazily, so that unused functions/classes never get deserialized in
> the first place. Again, not something we've had time to prioritise, but
> something we're much more likely to work on at some point in the future,
> since it becomes more web relevant every time new functionality is
> introduced.
>
>> I fully understand. I'm definitely interested in the snapshot format since
>> presumably anything that helps the web here will also help us. Is there a
>> paper I can reference to read up more on the proposal? I've seen a few in
>> the wild from the broader JS community but nothing about V8's plans here. I
>> have no idea if that will help our workload but it's certainly something
>> we're open to exploring.
>>
>
> You're probably thinking of BinaryAST, which is unrelated to this. We
> haven't talked much about web snapshots yet because it's still very
> preliminary, very much a prototype, and we don't want to make any promises
> or guarantees about it ever materialising. +Marja Hölttä
> <ma...@chromium.com> is leading this effort; she'll know the current
> state.
>
