Quoting "Luis H. Forchesatto" <luisforchesa...@gmail.com>:
> Greetings.
>
> I'd like to discuss my situation: I have a job that backs up only
> 200GB in files, but it has more than 2 million files to save in the
> catalog (MySQL). When the file copy is complete, the storage server,
> which also runs the director, spends many hours saving the file
> entries to the catalog. This also causes poor mysqld performance
> overall.
>
> Is there any tip to optimize the catalog operation or MySQL
> performance to make the job less resource hungry?
>
> The job runs once a week with no concurrent jobs. Director and
> storage run on the same computer, but the client is another server on
> the same network, which can transfer up to 1Gb/s between the servers.
>
> Any tips will be appreciated.

Maybe the wrong database? Our main filer has ~2.7 million files per
full backup, which pile up to ~1GB of attribute data going into the
backup database (PostgreSQL). It finishes despooling the attributes
within 23 minutes on commodity hardware.

If you have to stick with MySQL, use Bacula's attribute spooling
feature and tune MySQL for fast inserts along these general lines (see
the sketch below):

- Raise the size of the redo logs
- Enlarge the buffers
- Use a really fast dedicated I/O channel for the database

Some further reading is here:
http://stackoverflow.com/questions/7585301/how-tos-for-mysql-innodb-insert-performance-optimization

Regards
Andreas
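As a rough sketch of those two knobs (not tested against your setup;
the job name and the sizes are assumptions and should be scaled to your
RAM and disk): attribute spooling goes in the Job resource, and the
redo log / buffer pool settings go in the [mysqld] section of my.cnf.

  # bacula-dir.conf (Job resource) -- spool attributes so catalog
  # inserts happen in one batch after the data is written, instead of
  # one insert per file during the backup
  Job {
    Name = "weekly-filer"        # hypothetical job name
    # ... other job directives ...
    Spool Attributes = yes
  }

  # my.cnf, [mysqld] section -- example values only
  innodb_log_file_size           = 512M     # bigger redo logs
  innodb_buffer_pool_size        = 4G       # enlarge the buffers
  innodb_flush_log_at_trx_commit = 2        # fewer fsyncs per commit
  innodb_flush_method            = O_DIRECT # skip the OS page cache

Note that innodb_flush_log_at_trx_commit = 2 trades a little durability
(up to a second of transactions on a crash) for noticeably faster
inserts, which is usually acceptable for a catalog you can rebuild.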