Thanks Göran,

I'll let you know if I find something.

On Tuesday, February 7, 2017 at 12:41:40 UTC+3, Göran Schwarz wrote:
>
> Hi Gönenç
>
> No I didn't find the root cause of the issue!
> I did a "workaround" where I just start a "new" database when the storage 
> time got too long...
> In a "normal" database (where you must have all data in one db) this is 
> not possible, but since I use H2 as storage for collecting and 
> storing performance data, I could just "spill over" into a new database when 
> severe storage problems occurred.
>
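[Editor's note: the "spillover" workaround described above could be sketched roughly as follows. This is a hypothetical illustration; the class name, the threshold, and the URL scheme are my assumptions, not Göran's actual code.]

```java
import java.time.Duration;

// Hypothetical sketch of the "spillover" idea: when commits against the
// current H2 database file become too slow, generate a fresh JDBC URL and
// continue collecting performance data in a new database file.
class SpilloverTracker {
    private final Duration threshold;
    private int dbIndex = 0;

    SpilloverTracker(Duration threshold) {
        this.threshold = threshold;
    }

    // True when the last observed commit was slow enough to roll over.
    boolean shouldSpillOver(Duration lastCommitTime) {
        return lastCommitTime.compareTo(threshold) > 0;
    }

    // Next embedded-mode H2 URL, e.g. "jdbc:h2:./perfdata_1".
    String nextUrl(String baseName) {
        dbIndex++;
        return "jdbc:h2:./" + baseName + "_" + dbIndex;
    }
}
```

The caller would time each commit, ask `shouldSpillOver(...)`, and on `true` close the current connection and reopen against `nextUrl(...)`.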
> So my guess is that the issue is still there (but I haven't had time to 
> dig into the code, understand how it works, and suggest or make a fix).
> /Goran
>
> 2017-02-07 9:09 GMT+01:00 Gönenç Doğan <[email protected]>:
>
>> Hey Göran, did you find anything helpful?
>>
>> On Monday, June 27, 2016 at 18:52:57 UTC+3, Göran Schwarz wrote:
>>
>>> Hi Noel, and thanks for the answer!
>>>
>>> MULTI_THREADED is normally OFF (it was just a test to see if that would 
>>> help, but it did not, so I'm running *without* that option now).
>>>
>>> LOB: Sorry, I can't remove the LOB columns; that would mean "redesigning" 
>>> everything from scratch (i.e. breaking the CLOBs into smaller chunks, storing 
>>> them as varchar(255) or similar, then reassembling them on selects, 
>>> which is a *major* job).
>>>
>>> Open Trans: I have no open transactions; according to SELECT * FROM 
>>> INFORMATION_SCHEMA.SESSIONS, the CONTAINS_UNCOMMITTED column is false 
>>> between my "insert batches".
>>>
>>> Restart the app:
>>> * No, the problem still exists after the application is restarted (using the 
>>> same URL, i.e. the big db file).
>>> * But if I create a *new* database on the fly (close the current db file and 
>>> create/open a new one), the "storage" time goes down *drastically*: from 
>>> several minutes (approx. 8-10 minutes) to "normal" time, which is approx. 1-2 
>>> seconds for 15,000 records applied in ~20 different transactions.
>>>
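[Editor's note: the per-batch commit times quoted above (1-2 s normally vs. 8-10 minutes on a large file) could be captured with a tiny helper like the one below. This is my own illustrative helper, not part of H2; the `conn::commit` usage assumes a `java.sql.Connection` named `conn`.]

```java
// Small illustrative helper for timing a single commit (or any action
// that may throw a checked exception such as SQLException).
class CommitTimer {
    @FunctionalInterface
    interface ThrowingAction { void run() throws Exception; }

    // Runs the action and returns the elapsed wall-clock time in milliseconds.
    static long timeMillis(ThrowingAction action) throws Exception {
        long start = System.nanoTime();
        action.run();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

Usage would be something like `long ms = CommitTimer.timeMillis(conn::commit);`, logging a warning whenever `ms` crosses some threshold.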
>>> So my guess is that we do too much cleanup work in the commit phase (or 
>>> spend too much time trying to *find* things to clean up)... and when the 
>>> database becomes too large it just "loops" or "spends too much time" at 
>>> org.h2.mvstore.Page$PageChildren.removeDuplicateChunkReferences(Page.java:1094).
>>> I haven't really investigated the code path it takes in depth, but at 
>>> some point it reaches a threshold where a commit simply takes too long!
>>>
>>> In my mind, a commit should be quick: write the "last" log pages to 
>>> disk and possibly (if we want recoverability and consistency) wait for 
>>> the IO to complete; any "cleanup" work should be done in some 
>>> "housekeeping" phase afterwards...
>>> but I'm not sure how H2 is designed in this respect. The design is 
>>> not really my concern; it's simply the long commit times when a database 
>>> becomes too big!
>>>
>>> BTW: I have to thank you for all the hard work you have put into H2
>>> /Goran
>>>
>>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "H2 Database" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected].
>> To post to this group, send email to [email protected].
>> Visit this group at https://groups.google.com/group/h2-database.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
