>>>>> On Wed, 11 Apr 2012 11:09:12 +0100, Joe Nyland said:
>
> Ok, firstly: sorry for not following up sooner. There have been several
> replies to this thread since my last reply, so thanks, everyone, for your
> input.
>
> Phil, you point out that the problem here is most likely due to the fact
> that other jobs are running whilst the full DB backup is being run, and I
> agree with this.
>
> Now, to me, the simple resolution to this is to assign a backup window for
> the catalog backup to run in, when no other backups are running. There are,
> however, two issues I can foresee:
> - The MySQL backup script is being called by a Bacula job as a run-before
> script, so there will always be a job 'running' whilst the catalog dump is
> being performed.
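[For context, the run-before-script arrangement described above usually looks something like the following in bacula-dir.conf. This is a sketch based on the stock catalog-backup example shipped with Bacula; the job name, schedule, and script path are illustrative, not necessarily what Joe is running.]

```conf
# Sketch of a catalog backup job whose run-before script performs the
# MySQL dump. Because the dump runs as a "before" script, this job is
# itself 'running' for the whole duration of the dump.
Job {
  Name = "BackupCatalog"
  JobDefs = "DefaultJob"
  Level = Full
  FileSet = "Catalog"
  Schedule = "WeeklyCycleAfterBackup"
  # Illustrative path; the stock script dumps the named catalog database.
  RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
}
```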
Should be OK, because Bacula isn't in the middle of inserting File records
for that job when the before script runs.

> - One backup job in particular takes approx. 24-48 hrs to complete, so this
> will be constantly updating the catalog with file records, no matter when I
> schedule the full catalog backup.

Yes, that will be a problem.

> I'm therefore back to square one.
>
> Whilst I understand the complications with temporary tables (and threads)
> that Phil has pointed out, I do agree with what Martin was suggesting, that
> there may be a better way to use temporary tables. I don't, however, have
> enough MySQL knowledge to provide any further suggestions to improve this
> in later versions of Bacula.
>
> Stephen, yes, I am running the backup whilst other backups are running, for
> the reasons stated above. You are right - if there were no jobs running,
> the temp tables wouldn't be referred to in the first place, as they would
> not exist.
>
> Your suggestion of keeping a 'buffer' of bin logs is interesting, and this
> was the last thing I tried to resolve this problem with. I basically
> removed the '--delete-master-logs' option from the mysqldump line, so that
> old binary logs are not removed. (Side point: I'm not sure when
> '--delete-master-logs' actually deletes the binary logs - the mysqldump
> docs state that it "[sends a] PURGE BINARY LOGS statement to the server
> __after__ performing the dump operation". This doesn't make sense to me,
> as anything that's created between the dump being initiated and the dump
> completing will be deleted from the logs, and there's no guarantee it's
> included in the dump.) So I removed the '--delete-master-logs' option, in
> the hope this would somehow allow me to see the CREATE TEMPORARY TABLE
> statements earlier in the binary logs, but it didn't.
>
> What I didn't do, that you have suggested, is to restore older logs first,
> i.e.
> logs from before the dump, then let it work through them, skipping any
> duplicate key errors. My issue with this workaround, though, is that I
> fear that it could corrupt the database in some way (unbeknownst to me at
> the time of restoring) if I am just 'skipping errors'.

Yes, it sounds like a risky approach. I think you would have to filter the
older logs to find the commands that are needed to make the newer logs work.

> To me, this all does beg the question: "Why use temporary tables in the
> first place?" Again, I may not have enough MySQL experience to realise
> this for myself yet, but it's my view at this stage, after battling this
> for a while now!

The main reason was probably simplicity with multiple connections to the
catalog (each connection automatically gets its own temporary tables). I
see that some of the accurate-mode code is using a different approach,
though, with a table named after the JobId to make it unique.

__Martin

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
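[Editor's note: the risky replay workaround discussed above - restoring binlogs from before the dump and skipping duplicate-key errors - would look roughly like the following. This is a sketch only: the binlog file names and database name are illustrative, and `--force` makes the mysql client continue past SQL errors, which is exactly the corruption risk Joe describes.]

```
# Decode the older binary logs (names illustrative) into SQL...
mysqlbinlog mysql-bin.000042 mysql-bin.000043 > /tmp/replay.sql

# ...then replay them into the restored catalog, continuing past
# errors such as duplicate-key violations. Risky: errors are ignored,
# not resolved.
mysql --force -u bacula -p bacula < /tmp/replay.sql
```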