Hi all,
I have a weird problem when deleting a large number of rows (>60,000) 
from a large table (~1,200,000 rows).
I use the following SQL:

DELETE FROM TABLE1 WHERE PERSID IN (SELECT PERSID FROM TABLE1_T)

If I run this from a Java program, it never finishes and the DB file 
grows and grows (to more than 2x its original size of 2 GB). If I run it 
in SQuirreL ( http://squirrel-sql.sourceforge.net/ ), it completes in 
approx. 3 seconds. Both use H2 as an embedded database.

Both have auto-commit on.
I even tried -Xmx1024m for the standalone program, to no avail.
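
For reference, this is essentially what the standalone program does, 
stripped down to the relevant part (the JDBC URL and class name here are 
just placeholders, not my real ones):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DeleteTest {
    public static void main(String[] args) throws Exception {
        // Embedded H2 database; auto-commit is on by default.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./data/mydb");
             Statement stmt = conn.createStatement()) {
            int n = stmt.executeUpdate(
                "DELETE FROM TABLE1 WHERE PERSID IN (SELECT PERSID FROM TABLE1_T)");
            System.out.println(n + " rows deleted");
        }
    }
}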

I tried dropping all indexes, and alternatively creating an index on 
just the PERSID field. No change either way.

Background:
I do this to sync two databases. The original has a last-modified 
timestamp on each row. I copy all rows changed since a given point in 
time into a "temporary" table (TABLE1_T), which is however a regular 
table, i.e. created with "CREATE TABLE TABLE1_T", not "CREATE TEMP 
TABLE TABLE1_T".
Next, I DELETE all updated rows from the destination table by matching 
on the unique key PERSID.
Last, I INSERT all rows from the temp table into the destination table 
and drop the temp table. The sequence is sketched in code below.
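
In code, the whole sequence looks roughly like this (the copy from the 
source database into TABLE1_T is elided, and the URL is again a 
placeholder):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SyncSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./data/dest");
             Statement stmt = conn.createStatement()) {
            // 1. Staging table with the same structure as TABLE1
            //    (a regular table, not a TEMP one).
            stmt.execute("CREATE TABLE TABLE1_T AS SELECT * FROM TABLE1 WHERE 1=0");

            // ... copy all rows changed since the cutoff from the
            //     source DB into TABLE1_T (not shown) ...

            // 2. Delete the stale versions of the updated rows.
            stmt.executeUpdate(
                "DELETE FROM TABLE1 WHERE PERSID IN (SELECT PERSID FROM TABLE1_T)");

            // 3. Insert the fresh versions and drop the staging table.
            stmt.executeUpdate("INSERT INTO TABLE1 SELECT * FROM TABLE1_T");
            stmt.execute("DROP TABLE TABLE1_T");
        }
    }
}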

I could use UPDATE instead, but since the table has 20+ attributes the 
attribute list would be awkward to write, and even more awkward to 
maintain whenever columns are added.


How can I find out why it behaves so differently in those two 
environments?

Any alternative strategies for achieving the sync?

Thanks,
Chris.
