I worked around the issue from my first email by removing the "Accurate =
yes" line from the Job definition; I'll put it back once the Fulls have run.
I realize that's not the proper solution, but here we are :)
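
For reference, the workaround is just commenting the directive out of the
Job resource. Roughly (resource names here are illustrative, not my real
config):

Job {
  Name = "someclient-job"        # illustrative
  Client = someclient-fd         # illustrative
  FileSet = "Full Set"
  Base = "someclient-base"       # the Base job this one builds on
# Accurate = yes                 # removed until the Fulls have run
  ...
}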

This issue evidently also affects restores (this is me restoring a
non-active client), with essentially the same behavior as the problem I
reported previously. Here's what MySQL shows:

| 65 | bacula | localhost | bacula | Query | 2794 | Sending data |

SELECT Path.Path, Filename.Name, T1.FileIndex, T1.JobId, LStat, DeltaSeq
FROM ( SELECT FileId, Job.JobId AS JobId, FileIndex,
              File.PathId AS PathId, File.FilenameId AS FilenameId,
              LStat, File.DeltaSeq AS DeltaSeq, Job.JobTDate AS JobTDate
       FROM Job, File,
            ( SELECT MAX(JobTDate) AS JobTDate, PathId, FilenameId, DeltaSeq
              FROM ( SELECT JobTDate, PathId, FilenameId, DeltaSeq
                     FROM File JOIN Job USING (JobId)
                     WHERE File.JobId IN (111208)
                     UNION ALL
                     SELECT JobTDate, PathId, FilenameId, DeltaSeq
                     FROM BaseFiles JOIN File USING (FileId)
                          JOIN Job ON (BaseJobId = Job.JobId)
                     WHERE BaseFiles.JobId IN (111208) ) AS tmp
              GROUP BY PathId, FilenameId, DeltaSeq ) AS T1
       WHERE (Job.JobId IN ( SELECT DISTINCT BaseJobId FROM BaseFiles
                             WHERE JobId IN (111208))
              OR Job.JobId IN (111208))
         AND T1.JobTDate = Job.JobTDate
         AND Job.JobId = File.JobId
         AND T1.PathId = File.PathId
         AND T1.FilenameId = File.FilenameId ) AS T1
JOIN Filename ON (Filename.FilenameId = T1.FilenameId)
JOIN Path ON (Path.PathId = T1.PathId)
WHERE FileIndex > 0
ORDER BY T1.JobTDate, FileIndex ASC
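
For what it's worth, the plan MySQL picks can be inspected by pasting the
processlist query into the mysql client with EXPLAIN in front of it; a
sketch (the derived-table body is elided here, it's the same query as
above):

mysql> EXPLAIN SELECT Path.Path, Filename.Name, T1.FileIndex, T1.JobId,
    ->        LStat, DeltaSeq
    -> FROM ( ... same derived T1 subquery as above ... ) AS T1
    -> JOIN Filename ON (Filename.FilenameId = T1.FilenameId)
    -> JOIN Path ON (Path.PathId = T1.PathId)
    -> WHERE FileIndex > 0
    -> ORDER BY T1.JobTDate, FileIndex ASC;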

This is MySQL 5.1.52 on Scientific Linux 6.1, 64-bit, using the my-huge.cnf
sample configuration shipped with MySQL.
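
As far as I can tell, my-huge.cnf doesn't raise the temporary-table limits,
so given the "Copying to tmp table" state in my earlier message, one thing I
may experiment with is bumping them in my.cnf (the values below are guesses,
not recommendations):

[mysqld]
# In-memory temporary tables spill to on-disk MyISAM tables once they
# exceed MIN(tmp_table_size, max_heap_table_size)
tmp_table_size      = 256M
max_heap_table_size = 256M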

Is this just something with MySQL? MySQL 5?

Looking at the FAQ, I don't believe I have any extraneous indexes on File:

mysql> show indexes from File;
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| File  |          0 | PRIMARY  |            1 | FileId      | A         |   392744826 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | JobId    |            1 | JobId       | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | JobId    |            2 | PathId      | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | JobId    |            3 | FilenameId  | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
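
So that's just the stock PRIMARY on FileId plus the composite (JobId,
PathId, FilenameId) index. If an extra one had crept in over the years,
dropping it would be along these lines (index name hypothetical):

mysql> ALTER TABLE File DROP INDEX some_extra_index;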

Based on an older email thread, I'm also running an 'optimize table' on
File, shown below.
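
(It locks the File table while it runs, so I'm doing it while no jobs are
active.)

mysql> OPTIMIZE TABLE File;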

Thanks!

Mark

On Nov 19, 2011, at 6:08 AM, Bober, Mark wrote:

> 
> Greetings! I'm upgrading from Bacula 2.x to 5.2.1, and I want to start 
> using Base Jobs, since I'm backing up to disk now.
> 
> I'd previously hit the exact same problem when running a Full against an 
> existing Base job:
> 
> http://www.mail-archive.com/bacula-devel@lists.sourceforge.net/msg06023.html
> 
> I wanted to mention that the corrected version of the first patch Eric 
> Bollengier posted fixed that problem, as shown here:
> 
> http://www.mail-archive.com/bacula-devel@lists.sourceforge.net/msg06054.html
> 
> I'm running Bacula 5.2.1 on Scientific Linux 6, 64-bit; MySQL (from the 
> RPM) is 5.1.52. The catalog database is extremely large (70+ GB), has been 
> upgraded with the supplied scripts from the 2.0 schema up to the current 
> one, and holds over 7 years of history.
> 
> Now an Incremental has started as scheduled, based on the Base/Full I 
> mentioned above, and MySQL is hanging again. I can't tell the job status, 
> because the database is locked to the point that bconsole can't run the 
> queries it needs.
> 
> Here's what MySQL shows:
> 
> |  3 | bacula | localhost | bacula | Query | 16780 | Copying to tmp table |
> 
> SELECT Path.Path, Filename.Name, T1.FileIndex, T1.JobId, LStat, DeltaSeq
> FROM ( SELECT FileId, Job.JobId AS JobId, FileIndex,
>               File.PathId AS PathId, File.FilenameId AS FilenameId,
>               LStat, DeltaSeq, Job.JobTDate AS JobTDate
>        FROM Job, File,
>             ( SELECT MAX(JobTDate) AS JobTDate, PathId, FilenameId
>               FROM ( SELECT JobTDate, PathId, FilenameId
>                      FROM File JOIN Job USING (JobId)
>                      WHERE File.JobId IN (112775)
>                      UNION ALL
>                      SELECT JobTDate, PathId, FilenameId
>                      FROM BaseFiles JOIN File USING (FileId)
>                           JOIN Job ON (BaseJobId = Job.JobId)
>                      WHERE BaseFiles.JobId IN (112775) ) AS tmp
>               GROUP BY PathId, FilenameId ) AS T1
>        WHERE (Job.JobId IN ( SELECT DISTINCT BaseJobId FROM BaseFiles
>                              WHERE JobId IN (112775))
>               OR Job.JobId IN (112775))
>          AND T1.JobTDate = Job.JobTDate
>          AND Job.JobId = File.JobId
>          AND T1.PathId = File.PathId
>          AND T1.FilenameId = File.FilenameId ) AS T1
> JOIN Filename ON (Filename.FilenameId = T1.FilenameId)
> JOIN Path ON (Path.PathId = T1.PathId)
> WHERE FileIndex > 0
> ORDER BY T1.JobTDate, FileIndex ASC
> 
> I'm hoping Eric or someone has another brilliant little snippet of SQL 
> that can solve this issue as well. As far as I can tell, MySQL created the 
> temporary table for the job within a minute of the job starting and then 
> hung: essentially the same behavior as in the old 2010 thread about the 
> Base/Full issue.
> 
> 
> Thanks!
> 
> Mark Bober
> 
