> install needed bacula-*
> packages as usual.
There are no bacula-fd packages in any Ubuntu 22.04 repo, not even in backports.
--
/Thomas
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
daemon client to storage-daemon backup
with)
* Pool: File
* Storage: Tape1
* Priority: 10
Best regards
Thomas
MaximumConcurrentJobs = 5
}
Device {
Name = "LTO5-Drive1"
MediaType = "LTO3000"
DeviceType = "Tape"
ArchiveDevice = "/dev/nst0"
RemovableMedia = yes
RandomAccess = no
AutomaticMount = yes
LabelMedia = yes
AlwaysOpen = yes
MaximumFileSize = 100
MaximumConcurrentJobs = 5
LabelType = "Bacula"
}
Thank you very much in advance
Thomas
I'm having a RAID5 array of about 40TB in size. A separate RAID
controller card handles the disks. I'm planning to use the normal ext4
file system. It's standard and well known, most probably not the
fastest though. That will not have any great impact, as there is a 4TB
NVMe SSD drive, which
>Can Bacula use my 4 disks in the same way, filling up backup1 and then
using backup2 etc?
The short answer is yes. We've been doing this for over a decade using
sym links to create one logical Bacula storage area that then points off
to 40-50 disks worth of volume data on each server. In g
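A minimal sketch of that symlink approach (all paths below are made-up stand-ins, not the poster's actual layout):

```shell
# Sketch: one logical Bacula storage directory whose volume files
# actually live on several disks. Every path here is a temp stand-in.
base=$(mktemp -d)                       # stands in for /bacula
mkdir -p "$base/disk1" "$base/disk2"    # stand-ins for separate mount points
mkdir -p "$base/logical"                # directory the SD Device would point at

# Pretend two volume files already exist on different disks:
touch "$base/disk1/Vol-0001" "$base/disk2/Vol-0002"

# Symlink them into the single logical storage area:
ln -s "$base/disk1/Vol-0001" "$base/logical/Vol-0001"
ln -s "$base/disk2/Vol-0002" "$base/logical/Vol-0002"

ls -l "$base/logical"
```

The storage daemon only ever sees the one logical directory; moving a volume to another disk just means re-pointing its symlink.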
for example:
>
> date.timezone = "Europe/Warsaw"
>
> Full list timezones you can find here:
>
> https://www.php.net/manual/en/timezones.php
>
> At the end you need to restart or reload web server.
>
> Best regards,
> Marcin Haba (gani)
>
> On Thu,
Error 1000 - Internal error. [Warning] strtotime(): It is not safe to rely
on the system's timezone settings. You are *required* to use the
date.timezone setting or the date_default_timezone_set() function. In case
you used any of those methods and you are still getting this warning, you
most likel
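The fix for this warning is the date.timezone setting described in the reply above; a hedged sketch (the php.ini here is a temp stand-in for your real one):

```shell
# Sketch: set date.timezone in php.ini. On a real system the file might
# be /etc/php.ini or similar; here we use a temp copy so the demo runs.
ini=$(mktemp)
printf '; minimal php.ini stand-in\n' > "$ini"
echo 'date.timezone = "Europe/Warsaw"' >> "$ini"
grep '^date.timezone' "$ini"
```

After editing the real php.ini, restart or reload the web server so PHP picks up the change.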
Bacula DOES NOT LIKE and does not handle network interruptions _at all_
if backups are in progress. This _will_ cause backups to abort - and
these aborted backups are _not_ resumable
Hi,
My feeble two cents is that this has been a bit of an Achilles heel for
us even though we are a LAN backup
.noarch
> rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by
> baculum-api-httpd-9.6.3-1.fc31.noarch
> rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by
> baculum-common-9.6.3-1.fc31.noarch
>
> I don't know how it could be possible.
>
> Best regards,
> Marcin Hab
000
>
> Best regards,
> Marcin Haba (gani)
>
> On Thu, 21 May 2020 at 23:04, Jeff Thomas wrote:
> >
> > I'm following the instructions to the letter and then 'WHAM!'
> >
> > Running transaction check
> > ERROR You need to update
I'm following the instructions to the letter and then 'WHAM!'
Running transaction check
ERROR You need to update rpm to handle:
rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by
baculum-api-9.6.3-1.fc31.noarch
rpmlib(PayloadIsZstd) <= 5.4.18-1 is needed by
baculum-api-httpd-9.6.3-1.fc31.noarch
rpmlib
No, same issue.
On Fri, May 15, 2020 at 5:43 AM Radosław Korzeniewski <
rados...@korzeniewski.net> wrote:
> Hello,
>
> czw., 14 maj 2020 o 18:03 Jeff Thomas napisał(a):
>
>> Problem solved. FQDN needed in bconsole.conf
>>
>>
> I'm very glad that
Problem solved. FQDN needed in bconsole.conf
On Thu, May 14, 2020 at 8:55 AM Radosław Korzeniewski <
rados...@korzeniewski.net> wrote:
> Hello,
>
> First of all, you should respond to the list and not to me directly,
> please.
>
> czw., 14 maj 2020 o 15:30 Jeff Thom
to the list and not to me directly,
> please.
>
> czw., 14 maj 2020 o 15:30 Jeff Thomas napisał(a):
>
>> The director is running as 'bacula'
>>
>> bacula 27410 1 0 May13 ? 00:00:00 /opt/bacula/bin/bacula-sd
>> -fP -c /opt/bacula/etc/
Greetings all,
I would greatly appreciate some pointers to resolve this issue. I
installed using the Community Installation Guide with:
Centos 7, Bacula 9.6.3 and Postgresql 9.2
The bconsole command returns to the shell immediately.
[root@costello working]# sudo -u bacula ../bin/bconsole
Con
Hello
I am used to this principle with Linux, but I don't understand why it is all
taken when Bacula is working, and why it slows down the server so much that I
can no longer access it over ssh.
How is your storage allocated on the server? i.e. how are things
partitioned with regard to your backup
%Cpu(s):  0.1 us,  0.2 sy,  0.0 ni, 52.9 id, 46.5 wa,  0.0 hi,  0.2 si,  0.0 st
KiB Mem : 29987532 total,   220092 free,   697356 used, 29070084 buff/cache
KiB Swap: 15138812 total, 15138812 free,        0 used. 28880936 avail Mem
It looks like your memory is being used by the Linux file cach
Hi,
How many files and total space on each client? 6 TB is not necessarily
a huge total amount but you may want to consider splitting each client
job into smaller chunks. Also, what does the status of the jobs show?
Does it show that it is indeed backing up data? Unfortunately, if they
ar
registration email. Copying the URI
from that email will be one of the simplest ways to set this up correctly.
https://blog.bacula.org/whitepapers/CommunityInstallationGuide.pdf
But I cannot find where to register to get that access-key.
Can somebody help me?
Thomas
Hi Kern, yes, I know - I should have mentioned that we're still running
an earlier version of Bacula. But my main point was that Postgres 10
doesn't seem to have any issues for us.
cheers,
--tom
On 09/07/2018 02:41 PM, Kern Sibbald wrote:
On 09/07/2018 12:05 PM, Thomas Lo
FWIW we have not seen any compatibility problems in v.10, but we're not
using it with bacula. All I can see in bacula is
/usr/libexec/bacula/create_postgresql_database:
We've been using Bacula with Postgres 10.x on RH Enterprise 7.5 for a
few months now with no issues. The only change to Ba
Hi Dan,
Thanks for your interest!
So I let bscan run for almost 48 hours and then thought I should try
and see if any records were added to the database: nothing was added
except a job number and the volume name! I cancelled bscan and went into
bconsole to verify and yes, nothing except as m
LTO-7
Media Type = LTO-7
}
Any idea ?
Best regards
Thomas
--
Thomas Franz
Data-Service GmbH
Beethovenstr. 2A
23617 Stockelsdorf
Amtsgericht Lübeck, HRB 318 BS
Geschäftsführer: Wilfried Paepcke, Dr. Andreas Longwitz, Josef Flatau
> One of the queued backups is the next incremental backup of "archive".
> My expectation was that the incremental backup would run only some hours
> after the full backup finishes, so the difference is really small and it
> only takes some minutes and only requires a small amount of tape
> storage
problem.
Maybe there is a more elegant way, but it works.
Finally we can switch to bacula7.
Best regards,
Thomas
--- src/stored/block_util.c.orig  2016-07-06 21:03:41.0 +0200
+++ src/stored/block_util.c       2016-11-11 20:57:49.36519 +0100
@@ -205,7 +205,6 @@
Dmsg3(200
On 10/02/2015 04:54 PM, Thomas Eriksson wrote:
> Hi,
>
> I updated my director from 7.0.5 to 7.2.0 on a CentOS 7 box, using
> Simone's COPR repository. I ran the database update script and have
> successfully run some backups after the upgrade.
>
> However, the
error:
pg_dump: [archiver (db)] query failed: ERROR: permission denied for
relation snapshot
pg_dump: [archiver (db)] query was: LOCK TABLE public.snapshot IN ACCESS
SHARE MODE
Anyone know how to correct the permissions?
thanks,
T
in the Messages Resource.
Why are these messages not of type "skipped" ?
Any suggestions to suppress this message?
best regards
Thomas
--
Thomas Franz
Data-Service GmbH
Beethovenstr. 2A
23617 Stockelsdorf
Amtsgericht Lübeck, HRB 318 BS
Geschäftsführer: Wilfried Pae
No, because the end time of Full job #1 occurred after the end time of
the failed job #2. Bacula doesn't see any failed jobs occurring after
the end time of successful job #1 which is all it cares about - at least
in our patched version.
--tom
> Wouldn't this changed behavior run into the pr
> Ok, so the option "Allow Duplicate Job=no" can at least prevent multiple
> full backups of the same server in a row as stated before?
As others mentioned, I think it may help in your case but it may not
completely solve the problem that you saw. It looks like you had 5
instances of the same j
>> The question now is: bacula decides if it will upgrade jobs when it
>> queues the jobs or when it starts the jobs? According to the logs
>> above I think it is when it starts.
>>
>
> To my mind it's upgraded when it's queued... I hope I'm wrong :)
Hi, it is done when the job is queued to run.
> On 25/06/15 13:21, Silver Salonen wrote:
>
>>> But why it upgraded the other incrementals in the queue if the first
>>> incremental was upgraded to full?
>
> Because the algorithm is broken. It should only make that decision when
> the job exits the queue.
>
> I filed a bug against this a long ti
> Even though, IMHO, spooling disks backup is just "muda" (Japanese
> Term): http://en.wikipedia.org/wiki/Muda_(Japanese_term)
Not necessarily - if you have a number of backups that tend to flake out
halfway through for whatever reasons (network, client issues, user
issues, etc) e.g. then by spoo
> is there a quick way to set the schedule to be "every other week"
> (to create full backups every 14 days i.e. on even weeks since
> 01.01.1971 for example)
>
> If there is no predefined keyword, is there a way to trigger this
> based on the result of an external command?
Hi, you may also want t
This is probably a question for Kern or perhaps should be better posted
to bacula-devel but I'll send it here since others may have experienced
or have comments on this.
Assume you are running Virtual Fulls every x days (aka the Max Full
Interval for Virtual Fulls) and also have retention perio
Hello!
is it possible to get a list of all saved files of my last incremental backup, and the size of the files?
I need it because my incremental backup of one server is greater than 10 GB every day, and I want to know which files are so big.
Thanks
Regards
Thomas
> First let me thank you all for your responses, I really appreciate
> them. As Joe, I think the problem here is the bacula jobids. Is
> there any way to tell bacula to start from (let's say) job id 900?
> I think that's an easy way to fix the whole problem, as I will be able
I am not familiar e
label unlabeled media
Random Access = Yes;
AutomaticMount = yes; # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 20
}
On both hosts, zlib is compiled in.
What is the problem?
Thanks
Thomas
-
> My volumes are of type files so using new volumes vs recycling expired
> ones just fills up the file system with old data. It makes it hard to
> manage and forecast filesystem space needs.
>
> I have never understood Bacula's desire to override my policy and insist
> on preserving data that I alr
> "StartTime" does not get updated when migrating a job. Is this a bug or
> is it the way it is supposed to be?
>
I believe that this is the way it is supposed to work. When
copying/migrating a job or when creating a virtual Full job from
previous jobs, the start time of the new job gets set to
> According to
> http://www.baculasystems.com/windows-binaries-for-bacula-community-users,
> 6.0.6 is still the latest version. Does this mean the bug was never
> fixed there, or is it the text on that page that needs updating? Or
> is there still something else entirely, and is it not this bug tha
> Because traffic is going through those firewalls, I had already
> configured keepalive packets (heartbeat) at 300 seconds. In my first
> tests, backups *did* fail because that was missing. Now they don't
> seem to fail anymore, but there's that "socket terminated" message
> every now and then t
I've seen this error before on and off on one particular client.
Nothing changes with regard to the configuration and yet the error will
crop up. Usually a combination of the following "fixes" it -
cancel/restart the job, restart the Bacula client, or restart the Bacula
storage daemon. Since
> thank you, so the only way is to configure the volume to be used in only
> 1 job, so if a job fails I can delete the entire volume. I'll try this.
Hi, you can also choose to spool jobs before they are written to your
actual volumes. This way if jobs tend to fail in the middle for
whatever reason
> I guess I will go with Sven's suggestion, or does anyone have any
> other recommendation on running a weekly backup with 7 days archive?
Hi, this may be the same as Sven's recommendation but if you want to
guarantee the ability to restore data as it was 7 days ago then
you'll need to set your re
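An illustrative retention setup for that guarantee, printed as a config fragment (directive names are standard Bacula ones; the pool name and values are examples only, not from this thread):

```shell
# Illustrative only: a pool that guarantees restores 7 days back.
# Written to a temp file so the demo runs anywhere.
pool=$(mktemp)
cat > "$pool" <<'EOF'
Pool {
  Name = WeeklyFile              # hypothetical pool name
  Pool Type = Backup
  Volume Retention = 8 days      # a little longer than the 7 you must restore
  AutoPrune = yes
  Recycle = yes
}
EOF
cat "$pool"
```

The key point is that Volume Retention (plus job/file retention on the client) must exceed the restore window you promise.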
> Bacula can't write "volume" files into more than one directory. If you
You can get around this restriction fairly easily by using symbolic
links or some other "redirection" technique. Of course, you then have
to manage that along with the "real" storage. We had problems with
vchanger for ou
> C:\Documents and Settings\lkemp>sc create Bacula-FD binpath=
> "C:\Program Files\Bacula>bacula-fd.exe" -c "C:/Program
> Files/Bacula/bacula-fd.conf" type= share start= auto
>
> C:\Documents and Settings\lkemp>sc create Bacula-FD binpath=
> "C:\Program Files\Bacula>bacula-f
Hi,
I just wanted to verify that Bacula uses the JobTDate field for a job
when determining if a job should be pruned including for VirtualFull
backups. In the latter case, that field gets set to the same JobTDate
as the last good "real" backup for that job. I'm just trying to figure
out if a
Hi,
We now have certificates for our backup clients that have been signed by
two different certificate authorities. Does anyone know if Bacula has
any issues with dealing with a CA Certificate file that contains
multiple CAs? Does anyone have any experience with doing this?
thanks for any as
> It did. Thanks a lot for your help - I highly appreciate it.
> If we ever should run into each other in real life please remember me
> that I owe you some beer...
No problem :) - glad that you got it working.
--tom
--
4
Archive Device = /dev/tape/by-id/scsi-3500110a0008dde5f-nst
...
}
The by-id name will not change between reboots.
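A quick way to see what a by-id name resolves to (the real by-id path is shown in comments; the runnable part uses a stand-in symlink so it works without a tape drive):

```shell
# Sketch: persistent tape device naming. On a real system,
# /dev/tape/by-id/* are symlinks udev keeps stable across reboots:
#   ls -l /dev/tape/by-id/
#   readlink -f /dev/tape/by-id/scsi-3500110a0008dde5f-nst   # e.g. /dev/nst0
# Demo of the same resolution with a stand-in symlink:
d=$(mktemp -d)
touch "$d/nst0"                          # stand-in for the kernel device node
ln -s "$d/nst0" "$d/scsi-EXAMPLEID-nst"  # stand-in for the by-id link
readlink -f "$d/scsi-EXAMPLEID-nst"
```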
-Thomas
> I tried that, but it fails:
>
> Enter SQL query: alter sequence fileset_filesetid_seq restart with 76;
> Query failed: ERROR: must be owner of relation fileset_filesetid_seq
>
> I ran this under "bconsole", i. e. as user bacula - is this not the
> right thing to do?
Wolfgang,
As some
Wolfgang,
> Dear Thomas,
>
> In message <52d555c5.9070...@mtl.mit.edu> you wrote:
>> My guess is that during the migration from MySQL to Postgres, the
>> sequences in Bacula did not get seeded right and probably are starting
>> with a seed value of 1.
>
&
My guess is that during the migration from MySQL to Postgres, the
sequences in Bacula did not get seeded right and probably are starting
with a seed value of 1.
the filesetid field in the fileset table is automatically populated by
the fileset_filesetid_seq sequence.
Run the following two quer
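The message is cut off before the queries themselves; a sketch of what reseeding those sequences typically looks like (not necessarily the author's exact statements, and only printed here, not executed; verify table and sequence names against your own schema):

```shell
# Sketch: generate SQL to reseed Bacula's Postgres sequences after a
# MySQL -> Postgres migration. This only prints the statements; you
# would feed them to psql as the database owner.
sql=$(cat <<'EOF'
SELECT setval('fileset_filesetid_seq', (SELECT MAX(filesetid) FROM fileset));
SELECT setval('job_jobid_seq',         (SELECT MAX(jobid)     FROM job));
EOF
)
printf '%s\n' "$sql"
# To apply (as the Postgres admin or table owner):
#   printf '%s\n' "$sql" | psql bacula
```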
> That seems a working solution, but creating a symbolic link for every
> volume required by a restore job introduces a manual operation that
> would be better to avoid, especially if a lot of incremental volumes are
> being considered.
We use symbolic links here and have never had any problems.
> 10-dic 17:46 thisdir-sd JobId 762: acquire.c:121 Changing read
> device. Want Media Type="JobName_diff" have="JobName_full"
> device="JobName_full" (/path/to/storage/JobName_full)
I think that you want to make sure the Media Type for each Storage
Device is "File". It looks like
> 25-Nov 13:38 home-server-dir JobId 144: Fatal error: Network error with FD
> during Backup: ERR=Connection reset by peer
> 25-Nov 13:38 home-server-dir JobId 144: Fatal error: No Job status returned
> from FD.
> 25-Nov 13:38 home-server-dir JobId 144: Error: Bacula home-server-dir 5.2.5
> (26J
> - heartbeat: enabling on the SD (60 seconds) and
> net.ipv4.tcp_keepalive_time also set to 60
In glancing at your error (Connection reset by peer) and your config
files, I didn't see the Heartbeat Interval setting in all the places
that it may need to be. Make sure it is in all the following
We do something like this by running a job within Bacula every morning
that scans all client configuration files, builds a list of expected
current jobs/clients and then queries the Bacula DB to see when/if
they've been successfully backed up or not (i.e. marked with a T). If
it's been more th
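A toy sketch of the "expected clients" half of such a report (the config file, client names, and layout are hypothetical; the catalog query is only shown as a comment):

```shell
# Sketch: build the list of clients that should have recent backups by
# scanning director config files. Stand-in config in a temp file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
Client { Name = web01-fd; Address = web01 }
Client { Name = db01-fd;  Address = db01 }
EOF
clients=$(grep -o 'Name = [A-Za-z0-9_-]*' "$conf" | awk '{print $3}')
printf '%s\n' "$clients"
# The second half would cross-check the catalog, e.g. (not run here):
#   echo "SELECT name, MAX(endtime) FROM client JOIN job USING (clientid)
#         WHERE jobstatus = 'T' GROUP BY name;" | psql bacula
```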
> We are having a problem between a Bacula server version 5.2.5
> (SD and
> Dir) and a Windows client running Bacula-fd 5.2.10.
While this may not be your problem, in general, I recall it is best to
keep the client versions <= to the server versions.
--tom
---
> Yes, for disk storage, it does not make much sense to have data spooling
> turned off.
> I would suggest to always turn attribute spooling on (default off) so
> that attributes
> will be inserted in batch mode (much faster), and if possible ensure
> that the
> working directory, where attributes
it seems that bacula's limit is "<= 400"
from src/stored/block.c :
>   if (block_len > 400) {
>      Dmsg3(20, "Dump block %s 0x%x blocksize too big %u\n", msg, b, block_len);
>      return;
>   }
another limit i found is this one from the output of "dmesg | grep st" :
> [3.6
, but backup and restore are working
fine.
Best regards
Thomas
--
[:O]###[O:]
On 9/16/2013 12:52 PM, Greg Woods wrote:
> On Sat, 2013-09-14 at 14:02 -0600, Greg Woods wrote:
>
>> My question is whether there is any such thing as a USB tape drive that
>> is known to work with Bacula.
>
> It's clear from the responses I got that I left out an important detail,
> since all the
won't be able to match
all his needs with rsync.
Regards
On 27/06/2013 13:00, Philip Gaw wrote:
On 27/06/2013 11:52, Florent THOMAS wrote:
Hi,
Thanks for your answer, I was expecting this kind of answer. It
confirms what I was thinking.
rsync will be a good solution.
I still have
eed a bacula
daemon on a NAS otherwise (unworkable).
On 27/06/2013 11:00, Florent THOMAS wrote:
Hi folks,
Let me explain my context. I have a web agency as a customer that stores its
production data on a NAS. They work from their iMacs and share
files on the NAS. They want to be more secure
Hi folks,
Let me explain my context. I have a web agency as a customer that stores its
production data on a NAS. They work from their iMacs and share files
on the NAS. They want to be more secure and make some incremental
backups. Of course they need some "GUI" because they are not as geeky as
me ;
Hi,
We have jobs that we want to limit their time either sitting and waiting
or running to certain number of hours. In addition, we want these jobs
to reschedule on error - essentially, start the job at X time, keep
trying to run but after Y hours end no matter what. I've found that if
you u
> I have not made any changes to the Fileset. I have not purged any
> volume containing the last full backup of this client. In fact I was
> able to do a small file restore from the last Full backup
> successfully that tells me the last full backup is good.
>
> In my client-dir.conf file I have the
> One idea I can think of is using a list of filesystem types that matter.
> That way you can handle most things and also exclude cluster
> filesystems like ocfs2 that should best be backed up with a different
> job and separate fd.
This is what we do for our UNIX systems. We actually define each
> Yesterday I waited for the job to finish the first tape and then wait
> for me to insert the next one.
>
> I opened wireshark to see if there is a heartbeat during waiting -
> and there was none. During the job the heartbeat was active.
>
>> From what you wrote the heartbeat should be active whe
> I now could check if bacula fd to sd connection timed out because of
> the network switches. This was not the case. My job still cancels.
My experience is that the heartbeat setting has not helped us with our
"Connection Reset by Peer" issues that occur occasionally. Something
more is going
> Tom: How did you restart the job. Did you have a script or do you do it
> by hand?
There are Job options to reschedule jobs on error:
Reschedule On Error = yes
Reschedule Interval = 30 minutes
Reschedule Times = 18
The above will reschedule the job 30 minutes after the failure and it'll
try a
> 2012-09-19 22:58:45 bacula-dir JobId 13962: Start Backup JobId 13962,
> Job=nina_systemstate.2012-09-19_21.50.01_31
> 2012-09-19 22:58:46 bacula-dir JobId 13962: Using Device "FileStorageLocal"
> 2012-09-19 23:02:41 nina-fd JobId 13962: DIR and FD clocks differ by 233
> seconds, FD autom
> Hi folks.
>
> I've got a problem whereby my email and web servers sometimes fail to backup.
>
> These two servers are inside the DMZ and backup to the server inside my LAN.
>
> The problem appears to be the inactivity on the connection after the data has
> been backed up while the database is bei
image. I tested with incremental
backup; it seems bacula can't copy deltas within files, but instead redoes a full
backup of those containers because they have been changed.
Any solution to deal with this ?
Thomas Lau
Senior Technology Analyst
Principle One Limited
27/F Kinwick Centre, 32 Hollywood
Since adding Heartbeat Interval (set to 15 seconds) on our clients'
FileDaemon definition as well as the Director definition in
bacula-dir.conf and the Storage definition in bacula-sd.conf, it has
fixed some of the firewall timeout issues that we've had backing up some
clients but we've also st
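A small check along those lines, verifying the directive is present in each config (stand-in files in a temp dir; a real install would point at the actual bacula-dir.conf, bacula-sd.conf, and each client's bacula-fd.conf):

```shell
# Sketch: flag any config file missing a Heartbeat Interval directive.
# Stand-in configs; the fd one is deliberately missing the directive.
d=$(mktemp -d)
printf 'Director {\n  Heartbeat Interval = 15\n}\n' > "$d/bacula-dir.conf"
printf 'Storage {\n  Heartbeat Interval = 15\n}\n'  > "$d/bacula-sd.conf"
printf 'FileDaemon {\n}\n'                          > "$d/bacula-fd.conf"
for f in "$d"/*.conf; do
  if grep -q 'Heartbeat Interval' "$f"; then
    echo "OK   $f"
  else
    echo "MISS $f"
  fi
done
```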
>>> "bat ERROR in lib/smartall.c:121 Failed ASSERT: nbytes >0"
>>
>> This particular message is generated because some calling method is
>> passing in a 0 to the SmartAlloc methods as the number of bytes to
>> allocate. This is not allowed via an ASSERT condition at the top of the
>> actual smallo
> "bat ERROR in lib/smartall.c:121 Failed ASSERT: nbytes >0"
This particular message is generated because some calling method is
passing in a 0 to the SmartAlloc methods as the number of bytes to
allocate. This is not allowed via an ASSERT condition at the top of the
actual smalloc() method in
I downloaded the latest stable QT open source version (4.8.2 at the
time) and built it before building Bacula 5.2.10. Bat seems to work
fine with it. If you do this, just be aware that the first time you
build it, it will probably find the older 4.6.x RH QT libraries and
embed their location
This may be a stupid question but is the working state data, that are
cached on the client and used to display the recent job history of a
client from the tray monitor, limited to the most recent 10 job events?
Or is there a way to configure this to show and/or cache more than
just 10?
thank
Hi,
We're running 5.2.10 for both Windows 7 clients and our servers. My
system admins have noticed that during restores of files to a
Windows 7 client, the restored files are all hidden, which requires
them to then go in and uncheck the hide protected operating system files
option. A
This actually is a hardcoded "sanity" check in the code itself. Search
the mailing lists from the past year. I'm pretty sure I posted where in
the code this was and what needed to be changed. We have no jobs that
run more than a few days so have not made such changes ourselves so I
can't gua
>> I am running version 5.2.9 on my director and file daemon. I am
>> able to backup successfully but when I attempt to restore data onto
>> the 32bit Windows 2003 file daemon the bacula service terminates on
>> the 2003 server and the restore job fails. I can choose a Linux
>> file daemon as the
>> Restores to the Windows client systematically crash the FD on the
>> client without restoring anything. This seems to be a known, as
>> yet unsolved problem. There are several posts on this on the list.
Yes, we have the same problem. For now, we have rolled back our Windows
clients to 5.0.3 w
Jon,
I believe I posted this same issue back in April and didn't get any
replies. I never did submit it as a bug but it does seem to be a bug to me.
http://sourceforge.net/mailarchive/forum.php?thread_name=4F8ECD71.8080203%40mtl.mit.edu&forum_name=bacula-users
Perhaps I'll go ahead and post a
vel
and try googling. there were discussions about 5.2 on this mailinglist.
- Thomas
Before I submit this as a possible bug, I just wanted to see if perhaps
it is the expected behavior for Bacula.
We have a few long running jobs that take > 24 hours to do a Full
backup. Because of this, we have the following set:
Allow Duplicate Jobs = no
Cancel Lower Level Duplicates = yes
Ca
application references the OID
columns in some way (e.g., in a foreign key constraint).
Otherwise, this option should not be used.
This is Bacula 3 PostgreSQL 8.
Thanks,
Thomas
Thomas McMillan Grant Bennett Appalachian
s the next time a job is run. How do I avoid this?
I would run the bacula-fd (maybe a second instance, to not interfere with
other backups on the node) with the same configuration on every shared
storage node where you would like to back up the storage, and set up a DNS
entry to point at the de
e configured location, that would make me up and running?
Yes. But I would recommend testing the procedure and writing something down
somewhere. :)
- Thomas
>>
>>> Hi List,
>>>
>>> We are backing up to disks not tapes, now I need to set up a plan in
>>> c
g dump somewhere easily
accessible (another server (in another building), Amazon S3, whatever) to
do a fast recovery of the bacula backup service.
- Thomas
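A sketch of such a dump-and-ship step (the pg_dump line is commented out since it needs a live catalog; directory, database, and user names are examples):

```shell
# Sketch: dated catalog dump you can copy off-host. Only the naming and
# file-creation part runs here; the real dump is shown as a comment.
outdir=$(mktemp -d)                 # stands in for an offsite-synced dir
stamp=$(date +%Y-%m-%d)
dump="$outdir/bacula-catalog-$stamp.sql.gz"
# pg_dump -U bacula bacula | gzip > "$dump"     # on the real director
: | gzip > "$dump"                  # placeholder so the demo completes
ls "$outdir"
# Then ship it, e.g.: rsync "$dump" otherserver:/backup/  (or push to S3)
```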
or you just live with the fact that you need to back up the whole file
every day.
- Thomas
Am Thu, 16 Feb 2012 08:01:57 -0500 schrieb Phil Stracchino:
> On 02/16/2012 05:01 AM, Thomas Mueller wrote:
>> In theory encryption needs just the public key; decrypting needs
>> the private key. But I don't know if it is possible to provide only the
>> public key to
like "defragment" for tapes.
you could use a migration job to move the data away from tapes to have
the old one recycled.
- Thomas
; but it fails with "Failed to load private key for File daemon".
- Thomas
>
>
>
> On 2/16/12 12:01 PM, "Thomas Mueller" wrote:
>
>> Am Wed, 15 Feb 2012 11:07:40 +0200 schrieb Wassim Zaarour:
>>
>>> Hello,
>>>
>>> Current
elf create the encryption cert and try to use only the
public-key in the sd.
In theory encryption needs just the public key; decrypting needs the
private key. But I don't know if it is possible to provide only the
is not supported (... yet, it's on the projects list:
http://www.bacula.org/git/cgit.cgi/bacula/plain/bacula/projects?h=Branch-5.2)
as Steve said, (some?) LTO tapes support on-drive encryption. never used
it by myself.
- Thomas
The update postgres script for 5.2.x is missing these two lines which
you can run manually from within psql (connect to the bacula db as your
Postgres admin db user):
grant all on RestoreObject to ${bacula_db_user};
grant select, update on restoreobject_restoreobjectid_seq to
${bacula_db_user};
I got this error:
>
> Error: restore.c:944 Missing cryptographic signature for
> /path/to/my/file
>
I had problems restoring files from an encrypted backup if "Replace:
always" was not selected on the restore job.
But I do not remember the exact error message.
- Thomas