Thanks for the quick response!

- Depending on the exact query, your table structure, the version you use, the H2 URL params you use, you may indeed be triggering some kind of O(n^2) behaviour. -> But how/why?
- What do you mean by "dedicated transaction"? I'm working in the H2 console.
- Is the updated column indexed too? -> No
- Is the primary key a standard integer? -> BIGINT
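In case "dedicated transaction" simply means running the statement with auto-commit turned off, this is what I would try in the console (just my guess at what you meant, using the UPDATE from the test script below):

SET AUTOCOMMIT OFF;
-- run the update inside an explicit transaction instead of per-statement auto-commit
UPDATE test SET c = 200 WHERE e;
COMMIT;
SET AUTOCOMMIT ON;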
I've struggled to create a test case with the same problem. The script below runs perfectly fine (< 10,000 ms), even though it has far more rows (1,000,000) than my table:

CREATE TABLE test(a BIGINT PRIMARY KEY AUTO_INCREMENT, b BIGINT, c SMALLINT, d VARCHAR, e BOOLEAN);
@LOOP 1000000 INSERT INTO test VALUES(NULL, RAND() * 100000, RAND() * 100, 'abc', RAND() >= 0.5);
UPDATE test SET c = 200 WHERE e;

The update operation is the same as in my production database (a WHERE on a BOOLEAN column, updating a SMALLINT column). So the question is what else I need to add to the test case to reproduce the slow result. These are a couple of things from my production DB that I could add:

- More tables. The production DB is > 5 GB.
- Foreign keys (not on the updated column)
- Other unique indices (not on the updated column)
- More columns

Any ideas?
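To make the foreign key / unique index / extra column points concrete, this is roughly the kind of extended script I had in mind (names like parent, fk_test_parent and ux_test_f are just placeholders, and I haven't verified yet whether it actually reproduces the slowdown):

DROP TABLE IF EXISTS test;
DROP TABLE IF EXISTS parent;

-- extra table to hang a foreign key off
CREATE TABLE parent(id BIGINT PRIMARY KEY AUTO_INCREMENT, name VARCHAR);
@LOOP 1000 INSERT INTO parent VALUES(NULL, 'p');

-- same table as above, plus extra columns, a foreign key and a unique index;
-- none of them touch the updated column c or the filter column e
CREATE TABLE test(
    a BIGINT PRIMARY KEY AUTO_INCREMENT,
    b BIGINT,
    c SMALLINT,
    d VARCHAR,
    e BOOLEAN,
    f VARCHAR,         -- extra column, kept unique via the index below
    parent_id BIGINT,
    CONSTRAINT fk_test_parent FOREIGN KEY(parent_id) REFERENCES parent(id)
);
CREATE UNIQUE INDEX ux_test_f ON test(f);

@LOOP 1000000 INSERT INTO test VALUES(NULL, RAND() * 100000, RAND() * 100, 'abc', RAND() >= 0.5, CAST(RANDOM_UUID() AS VARCHAR), 1 + RAND() * 999);

-- the same update as before
UPDATE test SET c = 200 WHERE e;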
