Hi Andrei.

What makes me believe the cache would use approximately 96MB is that the method 
is called “cacheSize” and takes a size in MB ;) 
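
For reference, this is the call I mean (builder usage written from memory against the H2 jar, so treat it as a sketch rather than exact API):

```java
MVStore store = new MVStore.Builder()
        .fileName("data.mvstore")
        .cacheSize(96)   // documented as a size in MB
        .open();
```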

I do realise that there’s probably no general and efficient way to get the 
actual heap size of an object graph. I have played with 
java.lang.instrument.Instrumentation.getObjectSize in the past, but of course 
you need to walk the graph, which is difficult in general and likely to be very 
expensive.
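
To make the "walk the graph" point concrete, here's roughly what I had played with. The shallow sizer is pluggable because Instrumentation.getObjectSize is only available inside a -javaagent, so the demo stubs it with a flat 16 bytes per object; the sharing/cycle handling is the easy part, the cost (and the module-system restrictions on reflecting into JDK classes) is the problem:

```java
import java.lang.reflect.Array;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.IdentityHashMap;
import java.util.function.ToLongFunction;

public class DeepSize {

    // Demo payload: a small linked list of our own classes, so the walk
    // never needs to reflect into JDK internals (blocked by the module
    // system on recent JDKs).
    static final class Node {
        final int value;   // primitive, so not followed
        final Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    /**
     * Walks the object graph from root, summing shallowSize over each
     * distinct object (identity-based, so shared objects count once).
     */
    public static long deepSize(Object root, ToLongFunction<Object> shallowSize) {
        IdentityHashMap<Object, Boolean> seen = new IdentityHashMap<>();
        Deque<Object> stack = new ArrayDeque<>();
        if (root != null) stack.push(root);
        long total = 0;
        while (!stack.isEmpty()) {
            Object o = stack.pop();
            if (seen.put(o, Boolean.TRUE) != null) continue; // already counted
            total += shallowSize.applyAsLong(o);
            Class<?> c = o.getClass();
            if (c.isArray()) {
                if (!c.getComponentType().isPrimitive()) {
                    for (int i = 0; i < Array.getLength(o); i++) {
                        Object e = Array.get(o, i);
                        if (e != null) stack.push(e);
                    }
                }
                continue;
            }
            for (; c != null; c = c.getSuperclass()) {
                if (c.getName().startsWith("java.")) break; // skip JDK internals
                for (Field f : c.getDeclaredFields()) {
                    if (Modifier.isStatic(f.getModifiers())
                            || f.getType().isPrimitive()) continue;
                    try {
                        f.setAccessible(true);
                        Object v = f.get(o);
                        if (v != null) stack.push(v);
                    } catch (ReflectiveOperationException | RuntimeException e) {
                        // inaccessible field: undercount rather than fail
                    }
                }
            }
        }
        return total;
    }

    public static void main(String[] args) {
        Node chain = new Node(1, new Node(2, new Node(3, null)));
        // Stub sizer: pretend every object costs 16 bytes; a real agent
        // would call instrumentation.getObjectSize(o) here instead.
        System.out.println(deepSize(chain, o -> 16L)); // 3 nodes -> 48
    }
}
```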

The thing that set off alarm bells for me is that two servers with identical 
configuration but different data sets ended up, after running for some days, 
with greatly differing cache heap usage: one was more than triple the other. I 
can handle tweaking the “96” to something else by trial and error, but with 
that level of variance it looks like I’d have to set the number very low to 
ensure I don’t blow the heap as the caches fill.

I’m running the latest MVStore build in test at the moment with the dataset 
that blew out. So far the cache seems more stable at a lower heap usage, but it 
will take a while to be sure. I also have some standalone load tests that I’ll 
throw at it.

I was just about to ask whether it would make sense to provide an interface 
that clients can optionally implement to estimate the size of an object, but 
it just hit me that I may have made an incorrect assumption about 
DataType.getMemory: I assumed it returns the serialised size of the object, 
but does it? Is getMemory actually what I want here?
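
To make that concrete, this is the kind of per-type estimate I was imagining clients could supply — sketched against a hypothetical SizeEstimator interface whose shape mirrors how I read getMemory (an estimate in bytes of heap, not serialised bytes). The names and the byte constants are mine, not H2’s; the constants are ballpark figures for a 64-bit JVM with compressed oops:

```java
public class UserSize {

    // Hypothetical interface mirroring the shape of DataType.getMemory as
    // I read it: a per-type callback returning an estimate in bytes.
    interface SizeEstimator<T> {
        int getMemory(T obj);
    }

    static final class User {
        final long id;
        final String name;
        User(long id, String name) { this.id = id; this.name = name; }
    }

    // Rough heap estimate: object header plus field data, plus the String
    // object and its backing array. All constants are guesses, not measured.
    static final SizeEstimator<User> USER_SIZE = u ->
            16 + 8 + 4                      // User: header + id + name ref
            + 24                            // String object itself
            + 16 + u.name.length();         // backing byte[] (Java 9+, Latin-1)

    public static void main(String[] args) {
        System.out.println(USER_SIZE.getMemory(new User(1L, "matthew"))); // 75
    }
}
```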

Cheers,

Matthew.

-- 
You received this message because you are subscribed to the Google Groups "H2 
Database" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/h2-database.
For more options, visit https://groups.google.com/d/optout.