On 03/16/13 06:18, Jérôme Blion wrote:
> On 16/03/2013 10:54, Uwe Schuerkamp wrote:
>> My question: Is there some way to optimize the catalog dump to make
>> the import faster, like maybe omitting indices and re-creating them
>> manually once the import has completed? Seeing the Path table also has
>> 19GB, its import probably won't have finished before our Sun goes
>> nova. ;)
>>
> Hello,
>
> There are several ways to speed it up.
> First:
> - use --disable-keys when dumping
> - use other tools for the backup/restore. You can try:
>   * mydumper: each table goes into its own dump file, so you can
>     recreate the database using multiple threads in parallel.
>   * mylvmbackup: you restore a filesystem snapshot, which is the
>     fastest option you can get (though the backup will be much larger).
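For reference, the dump/restore side of that advice might look roughly like the sketch below. The database name and dump path are made up; the flags are standard mysqldump/mysql client options. The commands are built into variables and printed rather than executed, so they can be reviewed first.

```shell
#!/bin/sh
# Minimal sketch of a speed-oriented catalog dump and restore,
# assuming a hypothetical database "bacula" and dump path.

DB="bacula"
DUMP="/var/backups/bacula.sql"

# --single-transaction: consistent InnoDB dump without a global lock
# --quick:              stream rows instead of buffering whole tables
# --extended-insert:    multi-row INSERTs, much faster to replay
# --disable-keys:       defer index maintenance until each table loads
DUMP_CMD="mysqldump --single-transaction --quick --extended-insert --disable-keys $DB"

# --init-command: skip binary logging while the dump is replayed
RESTORE_CMD="mysql --init-command='SET sql_log_bin=0' $DB"

echo "$DUMP_CMD > $DUMP"
echo "$RESTORE_CMD < $DUMP"
```

Note that --disable-keys only defers index builds for MyISAM tables; for InnoDB the big win is usually --extended-insert plus disabling binary logging during the replay.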
Snapshot-based MySQL backup schemes work with varying results depending on the underlying OS and filesystem. Using ZFS snapshots on Solaris 10/11, for example, or presumably FreeBSD, a snapshot backup scheme works very well. At my company we have experimented with snapshot backup schemes using Linux LVM, and frankly, by comparison they don't work well at all: LVM snapshots are too slow and require too much reserved disk space to make the technique viable on a large DB.

--
Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
ala...@caerllewys.net ala...@metrocast.net p...@co.ordinate.org
Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
It's not the years, it's the mileage.

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
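The ZFS snapshot approach discussed above might be sketched as follows. The dataset name "tank/mysql" is hypothetical, and the commands are only printed here for review. One real-world caveat baked into the comments: FLUSH TABLES WITH READ LOCK is released when the issuing client disconnects, so a production script must hold the lock open in a single session while the snapshot is taken (tools like mylvmbackup handle this for you).

```shell
#!/bin/sh
# Sketch of a ZFS snapshot backup of a MySQL datadir, assuming a
# hypothetical dataset "tank/mysql". Commands are printed, not run.

SNAP="tank/mysql@catalog-$(date +%Y%m%d)"

# Quiesce writes and flush logs before snapshotting. NOTE: the read
# lock lasts only as long as this client session stays connected, so
# these three steps must really happen inside one held-open session.
LOCK_CMD="mysql -e 'FLUSH TABLES WITH READ LOCK; FLUSH LOGS;'"
SNAP_CMD="zfs snapshot $SNAP"
UNLOCK_CMD="mysql -e 'UNLOCK TABLES'"

# The lock need only be held for the instant the snapshot is created,
# which is why this approach is so much faster than a logical dump.
echo "$LOCK_CMD"
echo "$SNAP_CMD"
echo "$UNLOCK_CMD"
```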