Hi,

The URLs, and the memory needed for a sample database, are:

url: jdbc:h2:~/temp/test   - 34 MB
url: jdbc:h2:mem:test      - 219 MB
url: jdbc:h2:memFS:test    - 61 MB
url: jdbc:h2:memLZF:test   - 45 MB

Sample code: http://h2database.com/p.html#f7c3a26ce7d9eb460c1316c120cbebf3

memFS and memLZF use the "file system abstraction" (persisting to in-memory
byte arrays); see the docs.
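
Roughly, such a comparison can be run along these lines (a simplified sketch,
not necessarily identical to the linked sample; the row count and table layout
here are just placeholders):

    import java.sql.*;

    public class MemComparison {
        public static void main(String... args) throws Exception {
            String[] urls = {
                    "jdbc:h2:~/temp/test",
                    "jdbc:h2:mem:test",
                    "jdbc:h2:memFS:test",
                    "jdbc:h2:memLZF:test" };
            for (String url : urls) {
                try (Connection conn = DriverManager.getConnection(url, "sa", "");
                        Statement stat = conn.createStatement()) {
                    stat.execute("create table test(id bigint primary key, v int)");
                    PreparedStatement prep = conn
                            .prepareStatement("insert into test values(?, ?)");
                    for (int i = 0; i < 1000000; i++) {
                        prep.setLong(1, i);
                        prep.setInt(2, i);
                        prep.execute();
                    }
                    // very rough heap measurement after the insert
                    System.gc();
                    long usedMb = (Runtime.getRuntime().totalMemory()
                            - Runtime.getRuntime().freeMemory()) / (1024 * 1024);
                    System.out.println(url + ": " + usedMb + " MB used");
                    stat.execute("drop all objects");
                }
            }
        }
    }

The exact numbers of course depend on the JVM, the heap settings and when the
GC runs, so treat them as rough indicators only.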

Regards,
Thomas


On Tuesday, October 20, 2015, Christian MICHON <[email protected]>
wrote:

> What would be the JDBC URL for such an nio in-memory mode? I could not find
> a way to make it work...
>
> On Monday, October 19, 2015 at 8:40:58 PM UTC+2, Noel Grandin wrote:
>>
>> You might want to try the "nio" in-memory mode; it's likely to be more
>> space-efficient and easier on the GC, since it stores data off the GC heap.
>> On Mon, 19 Oct 2015 at 18:43, Csaba Sarkadi <[email protected]>
>> wrote:
>>
>>> Hi,
>>>
>>> This is not really a question, rather an experience summary with H2 (we
>>> are using H2 1.4.190).
>>>
>>> We had a business need for a huge in-memory cache (around 110M records) to
>>> give our statistical queries a faster query option.
>>> In the end we decided to use H2 rather than a caching layer (this way we
>>> have to modify less existing code and can keep using our existing JDBC
>>> pooling handlers).
>>>
>>> So basically, the cached records are simple ones with 1 long and 6 integer
>>> columns (so 32 bytes of raw data per record).
>>> Leaving aside the load time (on a simple server, copying from the existing
>>> MS SQL db takes about 2-4 hours, depending on the current load), here are
>>> our results (a rough code sketch follows the list):
>>>
>>>    1. 110M records in an in-memory H2 db take about 35-36 GB of memory.
>>>    2. Simple queries are extremely fast (thanks Thomas!):
>>>       1. select count(*) takes about 1 ms;
>>>       2. selecting records and counting them by integer ranges takes at
>>>       most 35 seconds without indexes (wow - it is not really faster with
>>>       indexes on a normal SQL Server).
>>>    3. Due to the storage mechanics, memory usage is not linear in the
>>>    record count:
>>>       1. 10M records took about 9 GB of memory;
>>>       2. 25M records took about 21 GB of memory;
>>>       3. 110M records took 35-36 GB of memory.
>>>    4. HASH index creation after the table was filled killed the server:
>>>       1. => create the hash index before populating the table.
>>>
>>> Hope these data points help everyone (if I have any more to add, I will).
>>> Also, if it is possible, we would like to make some personal contact with
>>> Thomas (couldn't find your mail, just this mailing list) - so both of us
>>> could learn from handling bigger in-memory DBs :)
>>>
>>>
>>> Thanks,
>>>
>>> Csaba Sarkadi
>>>
>>>
>>>
>>>

