Hi

Yes, I commented the incremental runs out because they were also being promoted to full backups; for now I only attempt a differential each weekend. I want daily incrementals, but it does not work (every run ends up as a full).
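
As far as I understand, an Incremental is upgraded to Full when Bacula finds no prior Full for the same Job/Client/FileSet in the catalog, or when the FileSet has changed since the last backup. A quick way to check what the catalog actually holds for this job (in bconsole; job name taken from my config below):

list job=FileServer_Full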

The last log is:

07-Apr 23:05 -dir JobId 136: Start Backup JobId 136, 
Job=FileServer_Full.2023-04-07_23.05.00_04
07-Apr 23:05 -dir JobId 136: Using Device "LTO9-1" to write.
07-Apr 23:05 -sd JobId 136: Error: 07-Apr 23:05 -sd JobId 136: Volume 
"000020L9" previously written, moving to end of data.
07-Apr 23:06 -sd JobId 136: Ready to append to end of Volume "000020L9" at 
file=2789.
07-Apr 23:06 -sd JobId 136: Spooling data ...
08-Apr 02:37 -sd JobId 136: User specified Job spool size reached: 
JobSpoolSize=3,000,000,026,157 MaxJobSpoolSize=3,000,000,000,000
08-Apr 02:37 -sd JobId 136: Writing spooled data to Volume. Despooling 
3,000,000,026,157 bytes ...
08-Apr 08:14 -sd JobId 136: Despooling elapsed time = 05:36:38, Transfer rate = 
148.5 M Bytes/second
08-Apr 08:14 -sd JobId 136: Spooling data again ...
08-Apr 11:40 -sd JobId 136: User specified Job spool size reached: 
JobSpoolSize=3,000,000,008,520 MaxJobSpoolSize=3,000,000,000,000
08-Apr 11:40 -sd JobId 136: Writing spooled data to Volume. Despooling 
3,000,000,008,520 bytes ...
08-Apr 17:59 -sd JobId 136: Despooling elapsed time = 06:18:52, Transfer rate = 
131.9 M Bytes/second
08-Apr 17:59 -sd JobId 136: Spooling data again ...
08-Apr 22:57 -sd JobId 136: User specified Job spool size reached: 
JobSpoolSize=3,000,000,059,190 MaxJobSpoolSize=3,000,000,000,000
08-Apr 22:57 -sd JobId 136: Writing spooled data to Volume. Despooling 
3,000,000,059,190 bytes ...
09-Apr 05:47 -sd JobId 136: Despooling elapsed time = 06:49:38, Transfer rate = 
122.0 M Bytes/second
09-Apr 05:47 -sd JobId 136: Spooling data again ...
09-Apr 10:35 -sd JobId 136: Error: Error writing header to spool file. Disk 
probably full. Attempting recovery. Wanted to write=64512 got=692
09-Apr 10:35 -sd JobId 136: Writing spooled data to Volume. Despooling 
2,999,394,019,676 bytes ...
09-Apr 17:35 -sd JobId 136: Despooling elapsed time = 06:59:31, Transfer rate = 
119.1 M Bytes/second
09-Apr 20:28 -sd JobId 136: Error: Error writing header to spool file. Disk 
probably full. Attempting recovery. Wanted to write=64512 got=3793
09-Apr 20:28 -sd JobId 136: Writing spooled data to Volume. Despooling 
2,999,392,366,895 bytes ...
10-Apr 03:33 -sd JobId 136: Despooling elapsed time = 07:04:44, Transfer rate = 
117.6 M Bytes/second
10-Apr 06:22 -sd JobId 136: Committing spooled data to Volume "000020L9". 
Despooling 1,510,003,913,646 bytes ...
10-Apr 09:38 -sd JobId 136: Despooling elapsed time = 03:16:01, Transfer rate = 
128.3 M Bytes/second
10-Apr 09:38 -sd JobId 136: Elapsed time=58:31:43, Transfer rate=78.27 M 
Bytes/second
10-Apr 09:38 -sd JobId 136: Sending spooled attrs to the Director. Despooling 
1,885,573,459 bytes ...
10-Apr 09:43 -dir JobId 136: Bacula -dir 11.0.6 (10Mar22):
  Build OS:               x86_64-suse-linux-gnu openSUSE Tumbleweed
  JobId:                  136
  Job:                    FileServer_Full.2023-04-07_23.05.00_04
  Backup Level:           Full
  Client:                 "-fd" 11.0.6 (10Mar22) 
x86_64-suse-linux-gnu,openSUSE,Tumbleweed
  FileSet:                "Full Set" 2023-03-18 23:05:00
  Pool:                   "Tape" (From Job resource)
  Catalog:                "MyCatalog" (From Client resource)
  Storage:                "AutoChangerLTO" (From Job resource)
  Scheduled time:         07-Apr-2023 23:05:00
  Start time:             07-Apr-2023 23:05:03
  End time:               10-Apr-2023 09:43:52
  Elapsed time:           2 days 10 hours 38 mins 49 secs
  Priority:               10
  FD Files Written:       6,600,638
  SD Files Written:       6,600,638
  FD Bytes Written:       16,491,080,239,221 (16.49 TB)
  SD Bytes Written:       16,492,258,152,160 (16.49 TB)
  Rate:                   78109.0 KB/s
  Software Compression:   None
  Comm Line Compression:  64.5% 2.8:1
  Snapshot/VSS:           no
  Encryption:             no
  Accurate:               no
  Volume name(s):         000020L9
  Volume Session Id:      2
  Volume Session Time:    1680860753
  Last Volume Bytes:      19,288,951,428,096 (19.28 TB)
  Non-fatal FD errors:    0
  SD Errors:              3
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK -- with warnings

10-Apr 09:43 -dir JobId 136: Begin pruning Jobs older than 6 months .
10-Apr 09:43 -dir JobId 136: No Jobs found to prune.
10-Apr 09:43 -dir JobId 136: Begin pruning Files.
10-Apr 09:43 -dir JobId 136: No Files found to prune.
10-Apr 09:43 -dir JobId 136: End auto prune.
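
A side note on the three SD errors above: each despool cycle hit the 3 TB job spool limit, and twice the spool disk itself ran short ("Disk probably full"), so the spool partition appears to be slightly smaller than the configured limit. A sketch of the relevant bacula-sd.conf Device directives (the path and sizes here are assumptions, not my actual values):

Device {
  Name = "LTO9-1"
  # ... existing tape/changer settings unchanged ...
  Spool Directory = /mnt/spool            # needs at least Maximum Spool Size free
  Maximum Spool Size = 2900000000000      # keep a little below the partition size
  Maximum Job Spool Size = 2900000000000
}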

JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  Client = -fd
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = AutoChangerLTO
  Messages = Standard
  Pool = Tape
  SpoolAttributes = yes
  Priority = 10
  Write Bootstrap = "/mnt/data5/bacula/%c.bsr"
}
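
Note that Level = Incremental here is only a default; each Run line in the schedule overrides it, and a specific level can also be forced from bconsole for testing, e.g.:

run job=FileServer_Full level=Incremental yes

That is only how I would test it manually, not part of the production setup.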

Job {
  Name = "FileServer_Full"
  JobDefs = "DefaultJob"
  Storage = AutoChangerLTO
  Spool Data = yes    # Avoid shoe-shine
  Pool = Tape
}
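
For reference, this is the schedule I want to end up with once the promotion issue is solved (the current version is quoted further down); just a sketch, re-enabling the daily incremental line:

Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st fri at 23:05
  Run = Differential 2nd-5th fri at 23:05
  Run = Incremental sat-thu at 23:05
}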

The defs are based on the vanilla config file provided with Bacula.

Cheers

T


Radosław Korzeniewski wrote on 21.04.23 at 11:54:
Hi,

Fri, 21 Apr 2023 at 11:10 Dr. Thorsten Brandau <thorsten.bran...@brace.de> wrote:

    Hi J/C

    Thank you.

    The configuration is:

    Schedule {
      Name = "WeeklyCycle"
      Run = Full 1st fri at 23:05
      Run = Differential 2nd-5th fri at 23:05
    #  Run = Incremental sat-thu at 23:05
    }

    So how should it be configured if that does not work?

Please share your logs and job definition, so we can check why it is not working in your setup. There are a few situations where incremental or differential backups are forced to be full.

By the way, you commented out the Incremental level in your schedule; are you aware of this?

Radek
--
Radosław Korzeniewski
rados...@korzeniewski.net
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
