Hello,
I want to schedule my backups like this, so that one backup per month is
kept for a longer time period:
Schedule {
  Name = "mybackup-schedule"
  Run = Level=Full Pool=1YearPool on 1 at 02:00
  Run = Level=Full Pool=1WeekPool on 2-31 at 02:00
}
However, the only way I currently know to get this
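For reference, the longer retention for the monthly full is then configured on the pool rather than in the schedule; a minimal sketch, with the pool names from the schedule above and retention periods that are assumptions:

```conf
Pool {
  Name = 1YearPool
  Pool Type = Backup
  Volume Retention = 1 year   # keep the monthly full for a year
  AutoPrune = yes
}

Pool {
  Name = 1WeekPool
  Pool Type = Backup
  Volume Retention = 1 week   # regular fulls expire after a week
  AutoPrune = yes
}
```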
> On 4 Dec 2015, at 12:31, Martin Reissner <mailto:mreiss...@wavecon.de> wrote:
>
> - Original Message -
>> From: "Martin Reissne
Hello Bacula-Users,
I want to create an offsite archive for our bacula setups that keeps
backups on different storage with a longer retention time, and I'm
looking for suggestions on how best to implement this.
We're running three bacula instances, each with its own dir/sd and in
total
Hello Bacula-Users,
I'm currently setting up an offsite/longterm archive and I want to use
Copy Jobs to copy selected Jobs from my standard SD to an offsite SD and
to a Pool with a longer retention time. Fwiw, all of my storage is disks
and I'm running Bacula 7.4.3.
I did some basic tests and the
Copy Jobs parallel to Backup Jobs.
Regards,
Martin
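A minimal sketch of the Copy Job approach described above; all names here are hypothetical, and the essential pieces are Type = Copy, a Selection Type, and a Next Pool on the source pool pointing at the offsite storage:

```conf
Job {
  Name = "copy-to-offsite"
  Type = Copy
  Level = Full
  Client = "dummy-fd"               # required syntactically, not contacted
  FileSet = "dummy-fileset"
  Messages = Standard
  Pool = "StandardPool"             # source pool whose jobs get copied
  Selection Type = PoolUncopiedJobs # copy each job once
}

Pool {
  Name = "StandardPool"
  Pool Type = Backup
  Next Pool = "OffsitePool"         # copies land here: offsite SD, longer retention
}
```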
On 09/14/2016 05:07 PM, bacula-users-requ...@lists.sourceforge.net wrote:
> --
>
> Message: 2
> Date: Wed, 14 Sep 2016 12:00:38 +0200
> From: Martin Reissner
> Subject: [Bacula-users] Concurrent Co
Hello,
once again I have a problem with running concurrent jobs on my bacula
setup. I'm using Version: 3.0.3 (18 October 2009) on DIR and SD and all
data goes to a single SD where multiple Device Resources are configured
(RAID-6 Harddisk Storage). Running concurrent jobs on different Device
Resour
>Martin Reissner wrote:
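For concurrency on a single SD with multiple file devices, the usual knobs are the Maximum Concurrent Jobs settings at each level; a rough sketch with assumed names and values (directive availability varies by Bacula version, so check the docs for 3.0.x):

```conf
# bacula-dir.conf
Director {
  Name = mydir-dir
  Maximum Concurrent Jobs = 10
  # ...
}

Storage {
  Name = "file-sd"
  Maximum Concurrent Jobs = 10
  # ...
}

# bacula-sd.conf
Device {
  Name = "FileDev1"
  Media Type = File
  Archive Device = /backup/dev1
  Maximum Concurrent Jobs = 1   # one job per device; add devices for parallelism
  # ...
}
```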
Hello,
I had to remove several clients (fds) from my bacula setup and I found
no instructions on how to do this properly. Can someone please help me out?
What I want to achieve is that all database entries related to those
clients (clients, jobs, files, ...) are removed, as this is quite some
dat
On 05/07/2012 03:00 PM, John Drescher wrote:
> On Mon, May 7, 2012 at 8:13 AM, Martin Reissner wrote:
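For reference, one common approach: remove the Client resource from bacula-dir.conf, reload the Director, then clean the catalog. A rough SQL sketch against the standard Bacula catalog schema; the client name is a placeholder, and a catalog backup beforehand is strongly advised:

```sql
-- Remove all catalog rows for one retired client ('oldhost-fd' is hypothetical).
-- File rows reference Job via JobId, so delete them first.
DELETE FROM File WHERE JobId IN
  (SELECT JobId FROM Job WHERE ClientId =
    (SELECT ClientId FROM Client WHERE Name = 'oldhost-fd'));
DELETE FROM Job WHERE ClientId =
  (SELECT ClientId FROM Client WHERE Name = 'oldhost-fd');
DELETE FROM Client WHERE Name = 'oldhost-fd';
```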
Hello and sorry for the generic subject. My issue is as follows:
I have a centralized director which should be used to backup several
setups with multiple clients/fds in a cloud environment. In those setups
there is only one gateway/jumphost with a public ip, the actual
clients/fds only have a
ommand line tools, which I
haven't done but imagine is possible using openvpn or similar.
I'm sure there might be bacula features that cover these eventualities,
but I'm not a big enough bacula expert to know about them.
Robert Gerber
As we're starting to upgrade our systems from Debian 11 to 12 I noticed
there are no community packages for Debian 12 yet. Does somebody know if
this is planned or when we can expect those packages?
I'm really happy those packages are available to the community at all so
no complaints here, I'
rightaway.
On 19.12.23 08:42, Martin Reissner wrote:
Hey Rob,
thank you for the detailed reply. To be honest I had not thought about
VPN because of performance/throughput concerns but those are unwarranted
as my clients push to s3 via a storage daemon which has a public ip and
can be rea
).
Let me know whether this also happens for you.
Best,
J/C
On 19. Dec 2023, at 10:56, Martin Reissner <mailto:mreiss...@wavecon.de> wrote:
For future reference I wanted to add that I found the "Client Behind
NAT Support with the Connect To Director Directive" feature tod
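For anyone finding this later, that feature is configured on the FD side, roughly as below; this is a sketch based on the feature name above, with assumed resource names, and the exact directive spelling plus the Director-side counterpart should be checked against the current documentation:

```conf
# bacula-fd.conf on the NAT-ed client: the FD initiates the connection
# to the Director instead of waiting to be contacted.
Director {
  Name = mydir-dir
  Password = "xxx"                 # placeholder
  Address = director.example.com   # public address of the Director (assumed)
  Connect To Director = yes
}
```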
Hello,
we've recently switched some of our backups from using disk/filestorage
to cloudstorage in our own ceph rgw/s3 and this has been working great
so far, but today I ran into two cache related issues which are giving
me a headache.
Firstly somehow the server running the sd which does the
eem to consume any resources most of the time since the S3
driver seems to check the sync state first and only upload the bad or
missing data.
-Chris Wilkinson
On Tue, 16 Jan 2024, 21:19 Martin Reissner <mailto:mreiss...@wavecon.de> wrote:
Hello,
by now I am mostly using our Ceph RGW with the S3 driver as storage and this
works just fine but time and again requests towards the RGW time out.
This is of course our business and not Bacula's but due to a behaviour I can't
understand this causes us more trouble than it should.
When o
the "Amazon" driver, instead
of the "S3" driver. You can simply change the "Driver" in the cloud resource, and restart
the SD. I'm not sure the Amazon driver is available in 13.0.2, but you can have a try.
The Amazon driver is much more stable to such timeout issues.
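In other words, a change along these lines in the SD's cloud resource (resource name assumed):

```conf
Cloud {
  Name = "ceph-rgw"     # existing cloud resource, name assumed
  Driver = "Amazon"     # previously "S3"
  # ... all other directives unchanged ...
}
```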
Best regar
have been
improving the Amazon driver continuously. Thus, I would move to this one. If
you still see this issue related to the volume not marked as error, then we
should investigate it.
Best,
Ana
On Wed, May 15, 2024 at 12:04 PM Martin Reissner <mailto:mreiss...@wavecon.de> wrote:
Hel
Hello,
we're running a Bacula 15.0.2 setup which stores everything in S3 storage and
due to having a lot of data to backup every day
we use

  Truncate Cache = AfterUpload
  Upload = EachPart

in our "Cloud" resources, to ensure the systems running the SDs do not run out
of disk space. This wor
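For context, the two directives sit in the SD's cloud resource roughly like this; host, bucket, and credentials are placeholders:

```conf
Cloud {
  Name = "s3-cloud"
  Driver = "Amazon"
  HostName = "rgw.example.internal"   # placeholder endpoint
  BucketName = "bacula"               # placeholder bucket
  AccessKey = "placeholder"
  SecretKey = "placeholder"
  Protocol = HTTPS
  Truncate Cache = AfterUpload   # drop local cache parts once uploaded
  Upload = EachPart              # upload parts as they are written
}
```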
attention!
All the best,
Martin
On 19.06.24 19:01, Eric Bollengier wrote:
Hello Martin,
On 6/19/24 12:57, Martin Reissner wrote:
Hello,
this question is purely theoretical for now (and hopefully forever); it just
popped into my head and I couldn't find a satisfactory answer, so I thought I'd
try here, as I have already gotten a lot of really useful information from the list.
We are using bacula (15.0.2) with our S3 compatible Ceph
Hello,
I already sent this a while ago but never saw it in the list so I'm
trying again.
We are using bacula (15.0.2) with our S3 compatible Ceph-RGW and the
"Amazon" driver as storage and the new "Volume Encryption" feature for
data encryption which works great but I recently thought how I
Hello,
I'm using Bacula 15.0.2 from the community repository on a Debian 12
server and I want to use the RunScript option to prune the Cloudcache
after a Verifyjob. My Job config is as follows:
Job {
  Name = "replica-db-1-verify"
  Client = "replica-fd"
  VerifyJob = "replica-db-1"
  Type =
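For comparison, a complete sketch of what such a job could look like; the FileSet, pool, storage name, and the exact `cloud prune` console command syntax are assumptions and should be verified against the 15.0.2 documentation:

```conf
Job {
  Name = "replica-db-1-verify"
  Type = Verify
  Level = VolumeToCatalog
  Client = "replica-fd"
  VerifyJob = "replica-db-1"
  FileSet = "replica-db-1-fileset"   # name assumed
  Messages = Standard
  Pool = "Default"                   # name assumed
  RunScript {
    RunsWhen = After
    RunsOnClient = No
    # Console command and storage name are assumptions:
    Console = "cloud prune storage=S3Storage allpools"
  }
}
```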