This seems to be a useful and interesting
link: https://commons.apache.org/proper/commons-jcs/JCSvsEHCache.html
I suggest you take it up and add MVStore and/or NitroCache to it for
establishing a benchmark.
You may find MVStore competitive (enough), and the achievable speed may
well match your needs.
Greetings!
How about by-passing JDBC/H2 and pushing the data into the MV Store
directly?
This way you eliminate any possible bottleneck.
The next step would be to compare MVStore performance vs. other
implementations (e.g. EHCache).
After that, compare against Postgres LOAD and/or DuckDB reading from
Parquet.
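As a rough sketch of what writing into the MVStore directly could look like
(the file name, map name, key/value types and row counts below are made-up
assumptions, not taken from this thread):

import org.h2.mvstore.MVMap;
import org.h2.mvstore.MVStore;

public class MvStoreBulkLoad {
    public static void main(String[] args) {
        // Open (or create) the store file directly, with no JDBC or SQL layer in between.
        MVStore store = new MVStore.Builder()
                .fileName("bulk.mv")
                .open();
        try {
            // Map name and key/value types are assumptions made for this sketch.
            MVMap<Long, String> map = store.openMap("records");
            for (long i = 0; i < 1_000_000L; i++) {
                map.put(i, "record-" + i);
                if (i % 10_000 == 0) {
                    store.commit(); // persist a chunk every 10k rows
                }
            }
            store.commit(); // final commit before closing
        } finally {
            store.close();
        }
    }
}

The trade-off of skipping the SQL layer this way is that the data is only
reachable through the MVStore map API afterwards, not through SQL.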
> 1) Ensure that all Indexes and constraints are turned off

Yes, that is faster.

> 2) Reduce the commit size. As far as I can see you create one very large
> commit over all records. Instead, commit per 1k or 4k records or so.

I tried 1k and 10k records per commit; there is not much difference, and
sometimes 1k is slower than 10k.
By 1xk I mean I can insert 10-15 thousand records into H2 per second, thanks.
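A minimal sketch of suggestions 1) and 2) above, assuming a hypothetical
table TEST(ID, NAME), a made-up index name, and illustrative batch size and
row count:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class BatchedInsert {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./testdb", "sa", "")) {
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE TABLE IF NOT EXISTS TEST(ID INT PRIMARY KEY, NAME VARCHAR(255))");
                // 1) keep secondary indexes and constraints off during the load
                stmt.execute("DROP INDEX IF EXISTS IDX_TEST_NAME");
            }
            try (PreparedStatement ps =
                         conn.prepareStatement("INSERT INTO TEST(ID, NAME) VALUES (?, ?)")) {
                final int batchSize = 1_000; // 2) commit per ~1k records instead of one huge commit
                for (int i = 0; i < 1_000_000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "name-" + i);
                    ps.addBatch();
                    if ((i + 1) % batchSize == 0) {
                        ps.executeBatch();
                        conn.commit();
                    }
                }
                ps.executeBatch(); // flush the last partial batch
                conn.commit();
            }
            try (Statement stmt = conn.createStatement()) {
                // recreate the index only after all data is in
                stmt.execute("CREATE INDEX IF NOT EXISTS IDX_TEST_NAME ON TEST(NAME)");
            }
            conn.commit();
        }
    }
}

The batch size of 1k is only a starting point; as noted above, the measured
difference between 1k and 10k per commit was small in this case.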
On Friday 12 January 2024 at 16:41:03 UTC+8 Andreas Reichel wrote:
Forgot one:
try multi-threading, e.g. populating one prepared statement while
another is executed/written.
Not guaranteed that this will really be faster, though.
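A hedged sketch of that idea: the main thread keeps building the next batch
of rows while a writer thread executes and commits the previous one over
JDBC. The table TEST(ID, NAME), queue capacity, batch size and row count are
illustrative assumptions, and the table is expected to exist already:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PipelinedInsert {
    // Marker object telling the writer thread that no more batches will arrive.
    private static final List<Object[]> POISON = new ArrayList<>();

    public static void main(String[] args) throws Exception {
        BlockingQueue<List<Object[]>> queue = new ArrayBlockingQueue<>(4);

        // Writer thread: takes finished batches and executes/commits them.
        Thread writer = new Thread(() -> {
            try (Connection conn = DriverManager.getConnection("jdbc:h2:./testdb", "sa", "")) {
                conn.setAutoCommit(false);
                try (PreparedStatement ps =
                             conn.prepareStatement("INSERT INTO TEST(ID, NAME) VALUES (?, ?)")) {
                    while (true) {
                        List<Object[]> batch = queue.take();
                        if (batch == POISON) {
                            break;
                        }
                        for (Object[] row : batch) {
                            ps.setObject(1, row[0]);
                            ps.setObject(2, row[1]);
                            ps.addBatch();
                        }
                        ps.executeBatch();
                        conn.commit();
                    }
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        writer.start();

        // Producer: builds the next batch while the writer is busy with the previous one.
        List<Object[]> batch = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            batch.add(new Object[] { i, "name-" + i });
            if (batch.size() == 10_000) {
                queue.put(batch);
                batch = new ArrayList<>();
            }
        }
        if (!batch.isEmpty()) {
            queue.put(batch);
        }
        queue.put(POISON); // tell the writer to finish
        writer.join();
    }
}

Whether this actually helps depends on where the time goes: if the bottleneck
is writing and syncing to disk rather than populating the statement, the gain
may be small, which matches the caveat above.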
On Fri, 2024-01-12 at 15:38 +0700, Andreas Reichel wrote:
Greetings.
On Fri, 2024-01-12 at 00:17 -0800, mche...@gmail.com wrote:
Hi. I am running an AMD 3900X with 128 GB RAM and an NVMe SSD. Currently I can
insert 1xk records per second, which is very fast. But how can I make it 10
times more? What hardware can do that?
Thanks
Peter