Bacula: 9.6.5
PostgreSQL: 8.4.20

After some large (12TB, 7TB) backup jobs finished and were in the process of 
despooling attributes to the database, I got the following errors:

        12-Jan 12:43 vbacula JobId 131494: Fatal error: sql_create.c:841 Fill 
File table Query failed: INSERT INTO File (FileIndex, JobId, PathId, 
FilenameId, LStat, MD5, DeltaSeq) SELECT batch.FileIndex, batch.JobId, 
Path.PathId, Filename.FilenameId,batch.LStat, batch.MD5, batch.DeltaSeq FROM 
batch JOIN Path ON (batch.Path = Path.Path) JOIN Filename ON (batch.Name = 
Filename.Name): ERR=ERROR:  could not close file 
"base/pgsql_tmp/pgsql_tmp25672.4702": Input/output error
        ERROR:  could not close file "base/pgsql_tmp/pgsql_tmp25672.5134": 
Input/output error
        ERROR:  could not close file "base/pgsql_tmp/pgsql_tmp25672.5134": Bad 
file descriptor
        ERROR:  could not close file "base/pgsql_tmp/pgsql_tmp25672.5134": Bad 
file descriptor
        ERROR:  could not close file "base/pgsql_tmp/pgsql_tmp25672.5134": Bad 
file descriptor
        PANIC:  ERRORDATA_STACK_SIZE exceeded
        ERROR:  could not close file "base/pgsql_tmp/pgsql_tmp25672.4702": 
Input/output error
        ERROR:  could not close file "base/pgsql_tmp/pgsql_tmp25672.5134": 
Input/output error
        ERROR:  could not close file "base/pgsql_tmp/pgsql_tmp25672.5134": Bad 
file descriptor
        ERROR:  could not close file "base/pgsql_tmp/pgsql_tmp25672.5134": Bad 
file descriptor
        ERROR:  could not close file "base/pgsql_tmp/pgsql_tmp25672.5134": Bad 
file descriptor
        PANIC:  ERRORDATA_STACK_SIZE exceeded

Bacula shows the jobs as completed, with a final status of:

        12-Jan 12:59 vbacula JobId 131403: Error: Bacula cbica-infr-vbacula 
9.6.5 (11Jun20):
          Build OS:               x86_64-redhat-linux-gnu redhat 
          JobId:                  131403
          Job:                    home-b.2022-01-07_18.36.40_23
          Backup Level:           Full (upgraded from Incremental)
          Client:                 "bicic-share" 
          FileSet:                "home-b" 2022-01-07 18:36:40
          Pool:                   "Full" (From Job FullPool override)
          Catalog:                "MyCatalog" (From Client resource)
          Storage:                "q80" (From Pool resource)
          Scheduled time:         07-Jan-2022 18:36:38
          Start time:             07-Jan-2022 18:36:44
          End time:               12-Jan-2022 12:47:53
          Elapsed time:           4 days 18 hours 11 mins 9 secs
          Priority:               10
          FD Files Written:       19,404,384
          SD Files Written:       0
          FD Bytes Written:       12,668,185,563,535 (12.66 TB)
          SD Bytes Written:       0 (0 B)
          Rate:                   30817.7 KB/s
          Software Compression:   100.0% 1.0:1
          Comm Line Compression:  16.6% 1.2:1
          Snapshot/VSS:           no
          Encryption:             no
          Accurate:               no
          Volume name(s):         
          Volume Session Id:      955
          Volume Session Time:    1640221422
          Last Volume Bytes:      0 (0 B)
          Non-fatal FD errors:    2
          SD Errors:              0
          FD termination status:  OK
          SD termination status:  SD despooling Attributes
          Termination:            *** Backup Error **
        
I would really like to avoid re-running these jobs if the data is valid.

Does anyone have a suggestion for the [best|easiest|fastest] way to verify that 
the info within the database is valid and each backup would be usable for a 
restore?
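One check I'm considering (sketched here, not a definitive method): compare the number of File rows that actually made it into the catalog against the JobFiles count Bacula recorded for each job. This uses the standard Bacula catalog schema (Job and File tables); the JobIds are the ones from the log above, and you'd run it with psql against whatever your catalog database is named:

```sql
-- Sketch: if catalog_files matches reported_files for a job, the
-- attribute despooling appears to have completed despite the errors.
-- JobIds taken from the log above; adjust for the other affected jobs.
SELECT j.JobId,
       j.JobFiles AS reported_files,
       (SELECT COUNT(*) FROM File f WHERE f.JobId = j.JobId) AS catalog_files
  FROM Job j
 WHERE j.JobId IN (131403, 131494);
```

Beyond row counts, a Bacula Verify job with Level=VolumeToCatalog would also check the written volumes against the catalog entries, though that means re-reading the tapes/disk volumes.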

Does anyone have a suggestion for ways to debug or avoid this issue at the 
conclusion of the next backup?
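For what it's worth, the failing files all live under base/pgsql_tmp, which is where Postgres spills sorts/hashes that exceed work_mem, so the big batch-table JOIN at despool time is generating heavy temp-file traffic (and the "Input/output error" suggests the underlying storage is worth checking with dmesg/SMART as well). One thing I may try, purely illustrative values rather than tuned recommendations:

```
# postgresql.conf -- illustrative, not tuned recommendations.
# A larger work_mem lets the batch-table JOIN sort/hash in memory,
# reducing traffic through base/pgsql_tmp (8.4 default is 1MB).
work_mem = 256MB
# Log every temporary file created, to see how much is spilling.
log_temp_files = 0
```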

Thanks,

Mark

-- 
Mark Bergman                                           voice: 215-746-4061      
mark.berg...@pennmedicine.upenn.edu                      fax: 215-614-0266
http://www.med.upenn.edu/cbica/
IT Technical Director, Center for Biomedical Image Computing and Analytics
Department of Radiology                         University of Pennsylvania


_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users