Hi Phillip,

When we faced a similar situation, we were concerned about the 
startup/restore cost of parsing all that JSON every time the program ran. 
We concluded that a lazy-evaluation strategy worked best for us: we 
decomposed the object for serialization and storage, then reconstituted it 
later when it was referenced.  Your additional step of compression would 
complicate this, but the basic idea holds.
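
For instance, a rough sketch of the decomposition side (the write(key, 
string) helper is just a placeholder for whatever storage you use):

    // Store each top-level slice of the object separately instead of
    // one monolithic JSON.stringify() of the whole thing.
    function storeDecomposed(obj, write) {   // write(key, string) is assumed
      for (const key of Object.keys(obj)) {
        // each slice can be parsed independently on restore
        write(key, JSON.stringify(obj[key]));
      }
    }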

If you really need a monolithic snapshot, your plan of implementing your 
own based on the existing deep-copy and stringify code is probably your 
most surprise-free, permanent solution.  If you publish it, you may 
discover you're not the only one reading and writing large JSON objects.

However, if it is possible to leverage your knowledge of the data to 
decompose it for serialization, a small amount of JS to paper over the 
decomposition may be all you need.
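
By "paper over" I mean something like lazy getters that parse a slice only 
on first access; a rough sketch, assuming a read(key) helper that returns 
the stored string for each slice:

    // Reconstitute the object lazily: each slice is parsed only when
    // first referenced, then cached for later reads.
    function loadLazily(keys, read) {        // read(key) is assumed
      const obj = {};
      for (const key of keys) {
        let cached;                          // parse once, reuse afterwards
        Object.defineProperty(obj, key, {
          enumerable: true,
          get() {
            if (cached === undefined) cached = JSON.parse(read(key));
            return cached;
          }
        });
      }
      return obj;
    }

A Proxy would do the same job if you don't know the keys up front.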

            -J
