Hi
This is the log of a job run with the following configuration:
- Two 400 GB SSDs in RAID 0 on my storage server, mounted at /data_spooling.
- My library is a Scalar i500 with two LTO-7 Fibre Channel drives.

LAN speed on the servers: 10 Gb/s
Fibre Channel: 8 Gb/s
Read/write speed on the RAID 0 SSDs: about 8 Gb/s

The average bandwidth depends only on how fast data can be read from the
client and on the time spent despooling data to tape.
In my case: about 20 min per 344 GB despooled.

Your LTO drive should receive data from the spool for at least ten
minutes at a time (to avoid shoe-shining).
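The transfer rates Bacula prints are simply bytes despooled divided by elapsed time (in decimal megabytes). A minimal sketch to sanity-check the first despool cycle in the log below (the helper name is my own):

```python
def despool_rate_mb(bytes_despooled: int, elapsed: str) -> float:
    """Return the transfer rate in M Bytes/second for an hh:mm:ss elapsed time."""
    h, m, s = (int(x) for x in elapsed.split(":"))
    seconds = h * 3600 + m * 60 + s
    return bytes_despooled / seconds / 1_000_000

# First despool cycle in the log: 375,809,964,486 bytes in 00:20:44
rate = despool_rate_mb(375_809_964_486, "00:20:44")
print(f"{rate:.1f} M Bytes/second")  # close to the 302.0 reported by the SD
```

The same arithmetic on the whole job (4.42 TB over 19h24m) gives the ~64 MB/s overall rate: the tape runs at ~300-430 MB/s while despooling, but sits idle while the spool refills from the client.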

Best regards


PJG095SRBacula-dir: No prior Full backup Job record found. No prior or suitable Full backup found in catalog. Doing FULL backup.
PJG095SRBacula-dir: Start Backup JobId 5417, Job=B-ALL_partage-EXCEPT_PJGN.2017-03-20_17.41.40_31
  Using Volume "000001" from 'Scratch' pool.
  Using Device "Drive-1" to write.
pjg095srsav002-sd: 3307 Issuing autochanger "unload slot 34, drive 0" command for vol 000001.
pjg095srsav002-sd: 3304 Issuing autochanger "load slot 1, drive 0" command for vol 000001.
pjg095srsav002-sd: 3305 Autochanger "load slot 1, drive 0", status is OK for vol 000001.
pjg095srsav002-sd: Wrote label to prelabeled Volume "000001" on tape device "Drive-1" (/dev/tape/by-id/scsi-3500308c3a347d000-nst)
pjg095srsav002-sd: Spooling data ...
pjg095srsav002-sd: User specified Device spool size reached: DevSpoolSize=375,809,964,486 MaxDevSpoolSize=375,809,638,400
  Writing spooled data to Volume. Despooling 375,809,964,486 bytes ...
pjg095srsav002-sd: Despooling elapsed time = 00:20:44, Transfer rate = 302.0 M Bytes/second
pjg095srsav002-sd: Spooling data again ...
pjg095srsav002-sd: User specified Device spool size reached: DevSpoolSize=375,809,964,644 MaxDevSpoolSize=375,809,638,400
  Writing spooled data to Volume. Despooling 375,809,964,644 bytes ...
pjg095srsav002-sd: Despooling elapsed time = 00:21:52, Transfer rate = 286.4 M Bytes/second
pjg095srsav002-sd: Spooling data again ...
pjg095srsav002-sd: User specified Device spool size reached: DevSpoolSize=375,809,964,295 MaxDevSpoolSize=375,809,638,400
  Writing spooled data to Volume. Despooling 375,809,964,295 bytes ...
pjg095srsav002-sd: Despooling elapsed time = 00:21:11, Transfer rate = 295.6 M Bytes/second
pjg095srsav002-sd: Spooling data again ...
pjg095srsav002-sd: User specified Device spool size reached: DevSpoolSize=375,809,964,285 MaxDevSpoolSize=375,809,638,400
  Writing spooled data to Volume. Despooling 375,809,964,285 bytes ...
pjg095srsav002-sd: Despooling elapsed time = 00:18:00, Transfer rate = 347.9 M Bytes/second
pjg095srsav002-sd: Spooling data again ...
pjg095srsav002-sd: User specified Device spool size reached: DevSpoolSize=375,809,964,450 MaxDevSpoolSize=375,809,638,400
  Writing spooled data to Volume. Despooling 375,809,964,450 bytes ...
pjg095srsav002-sd: Despooling elapsed time = 00:14:59, Transfer rate = 418.0 M Bytes/second
pjg095srsav002-sd: Spooling data again ...
pjg095srsav002-sd: User specified Device spool size reached: DevSpoolSize=375,809,964,442 MaxDevSpoolSize=375,809,638,400
  Writing spooled data to Volume. Despooling 375,809,964,442 bytes ...
pjg095srsav002-sd: Despooling elapsed time = 00:17:54, Transfer rate = 349.9 M Bytes/second
pjg095srsav002-sd: Spooling data again ...
pjg095srsav002-sd: User specified Device spool size reached: DevSpoolSize=375,809,964,110 MaxDevSpoolSize=375,809,638,400
  Writing spooled data to Volume. Despooling 375,809,964,110 bytes ...
pjg095srsav002-sd: Despooling elapsed time = 00:18:43, Transfer rate = 334.6 M Bytes/second
pjg095srsav002-sd: Spooling data again ...
pjg095srsav002-sd: User specified Device spool size reached: DevSpoolSize=375,809,964,665 MaxDevSpoolSize=375,809,638,400
  Writing spooled data to Volume. Despooling 375,809,964,665 bytes ...
pjg095srsav002-sd: Despooling elapsed time = 00:16:50, Transfer rate = 372.0 M Bytes/second
pjg095srsav002-sd: Spooling data again ...
pjg095srsav002-sd: User specified Device spool size reached: DevSpoolSize=375,809,964,730 MaxDevSpoolSize=375,809,638,400
  Writing spooled data to Volume. Despooling 375,809,964,730 bytes ...
pjg095srsav002-sd: Despooling elapsed time = 00:14:27, Transfer rate = 433.4 M Bytes/second
pjg095srsav002-sd: Spooling data again ...
pjg095srsav002-sd: User specified Device spool size reached: DevSpoolSize=375,809,964,457 MaxDevSpoolSize=375,809,638,400
  Writing spooled data to Volume. Despooling 375,809,964,457 bytes ...
pjg095srsav002-sd: Despooling elapsed time = 00:17:35, Transfer rate = 356.2 M Bytes/second
pjg095srsav002-sd: Spooling data again ...
pjg095srsav002-sd: User specified Device spool size reached: DevSpoolSize=375,809,963,641 MaxDevSpoolSize=375,809,638,400
  Writing spooled data to Volume. Despooling 375,809,963,641 bytes ...
pjg095srsav002-sd: Despooling elapsed time = 00:21:09, Transfer rate = 296.1 M Bytes/second
pjg095srsav002-sd: Spooling data again ...
pjg095srsav002-sd: Committing spooled data to Volume "000001". Despooling 288,229,988,549 bytes ...
pjg095srsav002-sd: Despooling elapsed time = 00:14:50, Transfer rate = 323.8 M Bytes/second
pjg095srsav002-sd: Elapsed time=19:13:54, Transfer rate=63.85 M Bytes/second
pjg095srsav002-sd: Sending spooled attrs to the Director. Despooling 2,025,520,288 bytes ...
PJG095SRBacula-dir: No Files found to prune.

Bacula PJG095SRBacula-dir 7.4.2 (06Jun16):
  Build OS:               x86_64-unknown-linux-gnu debian 7.8
  JobId:                  5417
  Job:                    B-ALL_partage-EXCEPT_PJGN.2017-03-20_17.41.40_31
  Backup Level:           Full (upgraded from Incremental)
  Client:                 "PJG095SRFIC001-FD" 7.4.2 (06Jun16) amd64-portbld-freebsd10.1,freebsd,10.1-RELEASE-p37
  FileSet:                "FIC001-ALL_partage-EXCEPT_PJGN-FS" 2017-03-12 19:00:00
  Pool:                   "FULL-POOL-SCALAR" (From Job FullPool override)
  Catalog:                "MyCatalog" (From Client resource)
  Storage:                "PJG095SRSAV002-STORAGE" (From Pool resource)
  Scheduled time:         20-mars-2017 17:41:20
  Start time:             20-mars-2017 17:41:43
  End time:               21-mars-2017 13:05:31
  Elapsed time:           19 hours 23 mins 48 secs
  Priority:               8
  FD Files Written:       4,738,350
  SD Files Written:       4,738,350
  FD Bytes Written:       4,419,508,257,191 (4.419 TB)
  SD Bytes Written:       4,420,646,851,428 (4.420 TB)
  Rate:                   63291.3 KB/s
  Software Compression:   None
  Snapshot/VSS:           no
  Encryption:             no
  Accurate:               no
  Volume name(s):         000001
  Volume Session Id:      1
  Volume Session Time:    1490014938
  Last Volume Bytes:      4,422,000,339,968 (4.422 TB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK
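For reference, the ~375 GB spool limit hit in each cycle above is set in the Storage Daemon's Device resource, and spooling is turned on per Job. A minimal sketch of the relevant directives (resource names here are illustrative, not taken from my actual config):

```
# bacula-sd.conf -- Device resource for the tape drive
Device {
  Name = Drive-1
  Spool Directory = /data_spooling
  Maximum Spool Size = 350gb     # spool fills to this size, then despools to tape
  ...
}

# bacula-dir.conf -- enable spooling for the job
Job {
  Name = B-ALL_partage-EXCEPT_PJGN
  Spool Data = yes
  ...
}
```

A larger spool means fewer spool/despool cycles and longer uninterrupted tape writes, at the cost of a longer idle period while the first spool fills.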


On Thu, 7 Feb 2019 at 17:08, Dmitri Maziuk via Bacula-users <
bacula-users@lists.sourceforge.net> wrote:

> On Thu, 07 Feb 2019 11:32:29 +0100
> Wolfgang Denk <w...@denx.de> wrote:
>
> > Dear Adam,
> >
> > In message <20190207185030.35830...@teln.shikadi.net> you wrote:
> > >
>
> > Also, disk space is cheap - where is the problem of using a much
> > bigger spool area?  I use only LTO4 tapes so far, and I have a
> > 1.5 TB spool area.  Where is the problem?
>
> Running it on spinning rust is suboptimal, so, over here in murka:
>  ~2TB of SATA SSD is under $300
>  1.6TB SAS SSD is ~$600
>  1.9 TB NVMe is ~$600
> and an 11TB U.2 NVMe is over $4K.
>
> Which one you can use depends on what connector you can free up in your
> hardware. If you only have a 2.5" NVMe slot, spool space is not cheap at
> all.
>
> > > ...  However with Bacula, my spool file
> > > must be 800GB to achieve the same result, and even this makes the
> > > process take much longer because the tape is idle while the spool
> > > file is filling up the first time.
>
> Your clients can stream data over the net at your LTO-whatever's full
> throughput, and you can't afford an 800 GB SSD? Interesting setup you
> have.
>
> --
> Dmitri Maziuk <dmaz...@bmrb.wisc.edu>
>
>
> _______________________________________________
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
