Hi folks, I've been warned that re-importing an InnoDB MySQL database is slow, but I hadn't expected it to be *this* slow.
I'm dumping a catalog instance that is about 55 GB in size (as a dump), while the File table alone occupies 294 GB in the database, so obviously there's a lot of unused / deleted data in there. Dumping the catalog instance takes about 90 minutes, which is fine; the dump is 22 GB lzo-compressed, 55 GB uncompressed. I've copied this dump over to a test machine (8 GB RAM, hardware similar to the bacula server) and am re-importing it there. I started the mysql import on March 14th. Now I'd like to direct your attention to the modification times of the table files (we use innodb_file_per_table on this instance):

-rw-rw---- 1 mysql mysql   65 14. Mär 14:01 db.opt
-rw-rw---- 1 mysql mysql 8,6K 14. Mär 14:01 BaseFiles.frm
-rw-rw---- 1 mysql mysql    0 14. Mär 14:01 BaseFiles.MYD
-rw-rw---- 1 mysql mysql 8,5K 14. Mär 14:01 CDImages.frm
-rw-rw---- 1 mysql mysql 1,0K 14. Mär 14:01 BaseFiles.MYI
-rw-rw---- 1 mysql mysql 8,6K 14. Mär 14:01 Client.frm
-rw-rw---- 1 mysql mysql 8,6K 14. Mär 14:01 Counters.frm
-rw-rw---- 1 mysql mysql 9,1K 14. Mär 14:01 Device.frm
-rw-rw---- 1 mysql mysql 8,7K 14. Mär 14:01 File.frm
-rw-rw---- 1 mysql mysql  96K 14. Mär 14:01 Device.ibd
-rw-rw---- 1 mysql mysql  96K 14. Mär 14:01 Counters.ibd
-rw-rw---- 1 mysql mysql 112K 14. Mär 14:01 Client.ibd
-rw-rw---- 1 mysql mysql  96K 14. Mär 14:01 CDImages.ibd
-rw-rw---- 1 mysql mysql 8,5K 15. Mär 01:51 FileSet.frm
-rw-rw---- 1 mysql mysql 5,0K 15. Mär 01:51 FileSet.MYI
-rw-rw---- 1 mysql mysql  14K 15. Mär 01:51 FileSet.MYD
-rw-rw---- 1 mysql mysql 8,5K 15. Mär 01:51 Filename.frm
-rw-rw---- 1 mysql mysql  75G 15. Mär 01:51 File.ibd
-rw-rw---- 1 mysql mysql 5,7G 16. Mär 10:44 Filename.ibd

As you can see, the File table import seems to have finished on March 15th at 01:51, which seems fine given its size.
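For reference, the dump is produced roughly like this; the database name, credentials and paths below are my assumptions for illustration, not the exact commands from our scripts:

```shell
# Rough sketch of the dump pipeline (assumed database name "bacula",
# assumed output path): a consistent InnoDB snapshot, streamed through lzop.
mysqldump --single-transaction --quick bacula \
  | lzop > /backup/bacula-catalog.sql.lzo

# On the test machine, the restore is the reverse pipeline:
lzop -dc /backup/bacula-catalog.sql.lzo | mysql bacula
```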
What worries me is the Filename and the Path table: apparently 1.5 days have already been spent importing the Filename table (about 12 GB on the bacula server), and it's still being updated by the import process, with the Path table, which is even larger on the production machine, still to come. Needless to say, this mechanism is more or less useless for a disaster-recovery scenario if re-creating the catalog from a dump takes over three days!

My question: is there some way to optimize the catalog dump so the import runs faster, e.g. by omitting indices and re-creating them manually once the import has completed? Seeing that the Path table is another 19 GB, its import probably won't have finished before our Sun goes nova. ;)

I'm now considering dumping only the File table (to get 200 GB back from its original 294 GB on disk), dropping the table in place and then re-importing it from the dump.

I'd be most grateful for any advice and / or comments.

All the best,
Uwe

PS: Here's the relevant table size list on the production server for reference:

-rw-rw---- 1 mysql mysql  96K 12. Mär 10:27 MediaType.ibd
-rw-rw---- 1 mysql mysql  96K 13. Mär 20:39 Storage.ibd
-rw-rw---- 1 mysql mysql 160K 15. Mär 19:46 Pool.ibd
-rw-rw---- 1 mysql mysql 560K 16. Mär 00:21 RestoreObject.ibd
-rw-rw---- 1 mysql mysql  19G 16. Mär 05:46 Path.ibd
-rw-rw---- 1 mysql mysql 112K 16. Mär 08:46 Client.ibd
-rw-rw---- 1 mysql mysql  11M 16. Mär 09:58 Job.ibd
-rw-rw---- 1 mysql mysql  12G 16. Mär 10:44 Filename.ibd
-rw-rw---- 1 mysql mysql 736K 16. Mär 10:48 Media.ibd
-rw-rw---- 1 mysql mysql  28M 16. Mär 10:48 JobMedia.ibd
-rw-rw---- 1 mysql mysql 294G 16. Mär 10:52 File.ibd

--
NIONEX -- A company of Bertelsmann SE & Co. KGaA
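PPS: To make the question concrete, here's roughly the kind of import tuning I have in mind; the database name, file name and concrete values are assumptions on my part, so treat this as a sketch rather than a tested recipe:

```shell
# Sketch of a faster restore (assumed database name "bacula" and dump file name).
#
# In my.cnf on the restore box, before starting mysqld, something like:
#   innodb_buffer_pool_size = 5G        # give InnoDB most of the 8 GB RAM
#   innodb_flush_log_at_trx_commit = 2  # relax durability just for the restore
#   innodb_log_file_size = 512M         # fewer log checkpoints during bulk load
#
# Then wrap the dump so per-row checks are disabled for the session and
# everything runs in one big transaction instead of per-statement commits:
{ echo "SET foreign_key_checks=0; SET unique_checks=0; SET autocommit=0;";
  lzop -dc bacula-catalog.sql.lzo;
  echo "COMMIT;"; } | mysql bacula
```

Whether dropping the secondary indices from the dump and adding them back with ALTER TABLE afterwards buys anything on top of this is exactly what I'd like to hear about.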
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users