Since I upgraded Bacula (5 to 7) and PostgreSQL (8.3 to 9.3), restores and all
access to the catalog have been slow.
Is there something (a script?) that should be run after upgrading Bacula and
PostgreSQL?
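One common cause after a major PostgreSQL upgrade is missing planner statistics: neither pg_upgrade nor a dump/restore carries them over, so catalog queries run unoptimized until the statistics are regenerated. A minimal sketch (the database name "bacula" is an assumption; adjust to your setup):

```shell
# Regenerate planner statistics for all databases (safe to run online)
vacuumdb --all --analyze

# Or just the catalog database (name "bacula" is assumed here)
psql -d bacula -c 'ANALYZE;'
```

If the catalog is still slow afterwards, it is worth comparing the Bacula catalog indexes against the ones the Bacula 7 make_postgresql_tables script creates.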
Thx
Antoine
--
Hi all,
what are the advantages and disadvantages of VirtualFull backups?
I want to use them to save bandwidth, but I'm afraid the consolidation jobs will
use too much of the Bacula server's time and CPU.
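For reference, a VirtualFull is consolidated entirely on the Storage Daemon from existing volumes, so no data crosses the network from the client; the cost is disk I/O on the SD and catalog work on the Director. A schedule sketch (the resource name and times are assumptions, not your config):

```
Schedule {
  Name = "WeeklyCycle"                      # name is an assumption
  Run = Level=VirtualFull 1st sun at 23:05  # consolidated on the SD, no client traffic
  Run = Level=Incremental mon-sat at 23:05
}
```

Note that the consolidated job is written into the pool named by "Next Pool" in the Pool resource, and the read and write sides must not use the same volume.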
Thanks.
Antoine
---
dedup
On 2013-10-04 09:46, Silver Salonen wrote:
> On Friday 04 October 2013 09:37:57 Dan Langille wrote:
> On 2013-10-04 09:33, Radosław Korzeniewski wrote:
> > Hello,
> >
> > 2013/10/4 Dan Langille
> >
> > On 2013-09-27 14:17, Radosław Korzeniewski wrote:
For the past week I've been testing backups on a powerful server (32 GB RAM / 4 CPUs).
I do a full backup on weekends and incrementals every weekday.
Why is my dedup rate so low?
[root@Bacula fs01]# zdb -DD bacula
DDT-sha256-zap-unique: 17612028 entries, size 295 on disk, 154 in core
DDT histogr
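One likely explanation: Bacula writes its own block and record headers into its volumes, so identical file data rarely lands at the same record-aligned offsets between runs, which commonly keeps ZFS dedup ratios low on Bacula volumes. The overall ratio ZFS reports is worth checking alongside the zdb output (pool name "bacula" taken from the command above):

```shell
# Overall dedup ratio ZFS has achieved for the pool
zpool get dedupratio bacula
```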
I found something:
in webacula, when you want to run a job, it says POOL (yes/no) instead of
SPOOL!
I think it's a mistake!
Antoine
From: BOURGAULT Antoine [mailto:antoine.bourga...@sib.fr]
Sent: Thursday, 19 September 2013 15:38
To: bacula-users@lists.sourceforge.net
Subject: Re:
Job {
  Name = abysse
  Client = abysse
  Type = Backup
  Schedule = Tot
  FileSet = abysse
  Max Full Interval = 6 days
  Pool = abysse
  Messages = Standard
  Write Bootstrap = "/var/spool/bacula/abysse.bsr"
  Spool Data = no
}
Any ideas?
From: BOURGAULT Antoine [mailto:an
Hello,
When I back up a very large server (a file server), I get a "temporary" file
in the working directory that is very, very large.
This file is:
/var/spool/bacula/bacula-sd.data.47.MyClientName.2013-09-19_14.59.17_22.MyClientName.spool
Is there any way to cut this very large file into fe
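That file is the data-spool file the SD fills before despooling to the volume. Its size can be capped in bacula-sd.conf, which makes the job despool and reuse the space in chunks instead of accumulating one huge file. A sketch (the device name and the size limits are assumptions):

```
Device {
  Name = FileStorage            # name is an assumption
  # ... existing Archive Device, Media Type, etc. ...
  Maximum Spool Size = 50G      # total spool space this device may use
  Maximum Job Spool Size = 10G  # per-job cap; the job despools when it is reached
}
```

Alternatively, setting "Spool Data = no" in the Job resource disables data spooling entirely for that job, at the cost of interleaved writes to the volume.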