Hi. One question and two bug reports. I am using 5.1 on Linux 2.0.37 with glibc 2.0.

Firstly, I am performing a big copy, around 1.5 million rows. The speed, whilst slower than I would like, is acceptable, and more my problem for having an underpowered machine. My problem is that while I perform this copy, any other modification on this table becomes very, very slow. I am attempting to simply

    delete from table where refnum < somenumber

As soon as the copy is finished, it speeds right up again. Now, I am aware that *any* action being performed during a copy is an improvement on pre-5.0, but I was wondering if there was any way to speed it up. I do not need access to the data I am copying until it is all inserted, so maybe there is something I can do with that? I'm not sure.

Secondly, a bug report. I killed a process doing a destroydb (yes, yes, it's a silly thing to do, but it took a long time and I thought it had crashed). When I tried to createdb the same db again, it told me it couldn't create it. When I tried to destroy it, it told me it didn't exist. I eventually rectified the problem by deleting the files in the data directory. However, a slightly better solution seems desirable. I am assuming that the procedure for a destroydb is in several stages, and that if any stage says the database doesn't exist, the destroy exits. It seems to me that you should continue, try to perform all the tasks, and only then declare that it doesn't exist. I'm sure it can't be that much slower, and it would stop the problem.

Thirdly, a second bug report. I had just performed a BIG copy into a table, and when I tried to select() on this table the backend crashed. I tried a select again after reconnecting, and again it crashed. I then did a select on another big table, and that worked. Tried again on the first table and that also worked this time. Unfortunately, I am unable to reproduce this crash, or give any more detailed a report, as I couldn't find a core dump related to the backend process.

Thanx

M Simms