--On Friday, December 04, 2015 08:19:00 AM -0600 Richard Robbins
wrote:
> The OS mounts the NFS share at that point and I'm able to read and write
> files without difficulty but when I fire up Bacula the program hangs with
> accompanying warning messages "Warning: bsock.c:112 Could not connect
--On Thursday, November 19, 2015 01:03:59 PM + Martin Simmons
wrote:
> Does Bacula ever check for expired [data encryption] certs? I suspect
> not, so the question about rollover strategy is a moot one.
I've empirically verified this to be the case; I performed a backup
using a short-lived
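For anyone wanting to watch for this themselves, openssl can both report and test a cert's expiry date. A sketch using a throwaway self-signed cert (the paths and subject are invented for the example; point the last two commands at your real FD cert):

```shell
# Generate a disposable self-signed cert just to have something to inspect.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=fd-test" \
    -keyout /tmp/fd-test.key -out /tmp/fd-test.pem -days 30 2>/dev/null

# Print the expiry date.
openssl x509 -enddate -noout -in /tmp/fd-test.pem

# Exit status says whether it survives another 14 days -- handy for alerting.
if openssl x509 -checkend $((14 * 24 * 3600)) -noout -in /tmp/fd-test.pem >/dev/null; then
    echo "cert ok"
else
    echo "cert expires within 14 days"
fi
```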
--On Thursday, November 19, 2015 10:49:07 AM +0100 Marcin Haba
wrote:
> You can renew your certs.
True, as long as you're OK with continuing to use the old key. However, that won't work if, for example, you need to expand your key size.
> I think that it is important to understand that data stored by Bacula is not
My alerting system tells me that I have some file daemons that have been
merrily encrypting their data for quite a while. In particular, the
expiry dates for the data encryption x509 certs are coming up soon.
Well, this brings up an interesting question that I'd not really
considered in depth: G
--On Thursday, May 21, 2015 12:21:23 PM -0500 Dimitri Maziuk
wrote:
> How much work is required to mount a filesystem on such drive? That is,
> one drive from md raid1 with extX on it "just mounts" when you stick it
> in a usb cradle. Do you need to jump through extra hoops with zfs?
The ZFS ca
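For comparison, a hedged sketch of what importing a ZFS disk from a cradle involves. The pool name `backup` and the dataset name are assumptions; there is no `zfs` available here to run against, so treat this as an operational outline:

```shell
# Scan attached devices for importable pools, then import read-only.
# Pool name "backup" is made up for the example.
zpool import -d /dev/disk/by-id -o readonly=on backup
zfs list -r backup          # datasets come back with the pool
zfs mount backup/home       # only needed if a dataset didn't auto-mount
# When finished:
zpool export backup
```

So it is a couple of commands rather than a bare `mount`, but nothing that needs the original host.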
--On Thursday, May 21, 2015 06:50:31 PM +0200 "Radosław Korzeniewski"
wrote:
> Why do you need to use a 500MB volume in size? These days it is like
> distributing movies on floppies instead of DVD/BR.
Some of my older deployments had data patterns where incrementals
are typically small, but at t
--On Thursday, May 21, 2015 11:09:58 AM -0500 dweimer
wrote:
> I have three systems two of which are using disk backup, then copy to
> tape both of those are running on CentOS with 25G volume sizes [...]
> The third system is using 46G File volumes
Thanks. Good to know.
> I wouldn't worry abo
--On Thursday, May 21, 2015 09:06:41 AM +0200 Kern Sibbald
wrote:
> Bacula does keep 64 bit addresses.
Excellent. Not surprisingly, I'm not dealing with file sizes near 2^63,
but I *do* need to back up files that are in the 2^39 range (from
filesystems that are in the 2^46 range onto virtual c
I was under the impression that the maximum size of a file
that can be backed up would be either 2^63 or 2^64 bytes, but
I can't seem to find anything in the manuals or via google-fu
that confirms this.
Does anyone have any positive information regarding the
maximum file size limit?
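For scale, the sizes under discussion, worked out in shell arithmetic (bash arithmetic is itself 64-bit signed, which is exactly the limit in question):

```shell
echo $(( 2**39 ))           # 549755813888 bytes
echo $(( 2**39 / 2**30 ))   # 512  (GiB)
echo $(( 2**46 / 2**40 ))   # 64   (TiB)
# A signed 64-bit offset (off_t on modern systems) tops out at 2^63 - 1:
echo 9223372036854775807
```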
Devin
--On Monday, January 26, 2015 01:46:07 PM -0600 ramesh penugonda
wrote:
> I am getting the encryption context error on a freebsd server when trying
> to backup data encrypted.
>
> 1) when i run in daemon mode i get the encryption error
> 26-Jan 19:33 bksrv1-dir JobId 123428: Start Backup Job
Also check both your Solaris and attached switch statistics
for the *late* collision count. If it is anything but zero,
it is indicative of an autonegotiation problem. Solaris
100Mb interfaces are infamous for needing autoneg forced off
and locked at a set value at both the server and switch
wher
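As a sketch of the forced-speed approach, assuming an `hme` interface at instance 0 (adjust both for your hardware; this can't be tested off-box, and the switch port must be locked to the same setting or you just trade one mismatch for another):

```shell
# Force 100 Mb full duplex on Solaris hme0 -- interface name and
# instance number are assumptions for the example.
ndd -set /dev/hme instance 0
ndd -set /dev/hme adv_100fdx_cap 1
ndd -set /dev/hme adv_100hdx_cap 0
ndd -set /dev/hme adv_10fdx_cap 0
ndd -set /dev/hme adv_10hdx_cap 0
ndd -set /dev/hme adv_autoneg_cap 0   # turn autoneg off last
```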
I have a situation where I have one bacula installation locally
for normal backups and another at a geographically remote
location for DR (disaster recovery) purposes.
For the DR site I would like to minimize network traffic, so I've
set things up such that I manually triggered one Full backup of
--On Thursday, January 20, 2011 09:27:37 PM -0500 Dan Langille
wrote:
> On 1/17/2011 4:27 PM, Devin Reade wrote:
>>
>> Eh? Have I missed some important point about using MySQL for Bacula's
>> catalog, or is this just the author's prejudices speaking?
>
> A
--On Monday, January 17, 2011 10:50:39 AM -0500 Dan Langille
wrote:
(In another thread)
> Even though you are doing migrate, this might help, because Migrate and
> Copy are so similar.
>
>http://www.freebsddiary.org/bacula-disk-to-tape.php
I started to scan that document, and I saw this s
--On Friday, January 07, 2011 12:23:43 AM -0700 Devin Reade
wrote:
> I'd like to (additionally) set things up so that server A
> performs offsite backup for network B, and server B does the
> converse for network A (for DR purposes).
I'm also open to other configurations, such
I have a situation where I have two geographically separate
networks, both of which are using bacula locally. I'll call
them network A and network B. Currently, (bacula) server A
on network A backs up network A machines, and server B on
network B backs up network B machines.
I'd like to (additio
Maybe try looking at your database and seeing if there are any
locks still in place?
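A sketch of what checking for leftover locks might look like; the `bacula` database and user names are assumptions, and these obviously need a live catalog server to run against:

```shell
# MySQL catalog:
mysql -u bacula -p bacula -e 'SHOW FULL PROCESSLIST'
mysql -u bacula -p bacula -e 'SHOW OPEN TABLES WHERE In_use > 0'

# PostgreSQL catalog -- ungranted locks are the interesting ones:
psql -U bacula -d bacula -c \
  'SELECT pid, relation::regclass, mode, granted FROM pg_locks WHERE NOT granted'
```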
I've got a Linux-HA resource agent for bacula-fd that is suitable
for backing up clustered filesystems. I've cc'd the pacemaker list
and mentioned that anyone is welcome to incorporate it into the
resource-agents RPM.
The RA is available from:
ftp://ftp.gno.org/pub/tools/bacula-contrib
Se
Martin Simmons wrote:
> I read that glusterfs uses FUSE, so it might be checking something more than
> the uid. That would explain why a root shell can access the files. Note that
> the error is "Operation not permitted", which is different from the normal
> "Permission denied" error you get fr
Martin Simmons wrote:
>>>>>> On Wed, 22 Dec 2010 14:09:50 -0700, Devin Reade said:
>> I have set up bacula clients on these nodes and, in addition to the
>> usual ext3 filesystems (/, /usr, et cetera), I'm trying to back up
>> the glusterfs-mounted /home
That's kernel space, not application space. Your most likely culprits
are bad memory or excess heat. Try to log/graph lmsensors for the
latter, and run memtest86 (for at least one full run) for the former.
Also look at any other hardware monitors you might have available,
including smartd.
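A minimal sketch of the logging side, assuming lm_sensors is installed; the log path is invented:

```shell
# Append a timestamped sensor reading every minute; graph or grep it later.
while true; do
    { date; sensors; echo; } >> /var/log/temps.log
    sleep 60
done
```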
Eve
Yes, this is bacula related, but first some background.
I've got a new two-node HA cluster where I am trying the new (for me)
mechanism of using glusterfs for /home. (For anyone not familiar with
this, both nodes have native filesystems mounted elsewhere -- in this
case, /gluster/home -- and /hom
Thank you very much for doing the work of making this release (in
particular, the el5 release which I am using on CentOS 5).
It doesn't appear that the bacula-bat RPMs are up on sourceforge in
the fschwarz directory as they were with previous releases. I'm assuming
that this is related to the qt
Over the years I've learned that the ability to read DVDs on a drive
other than the one that originally wrote them is much improved
if the disc is written at speed=1. I recently set up a DVD drive on my
SD and would like to continue this level of paranoia.
I was looking through the dvd-handler script
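With growisofs, forcing 1x is a single flag; the device and paths below are invented and this is only a sketch of the idea, not what dvd-handler itself does:

```shell
# Burn a prebuilt image at 1x:
growisofs -speed=1 -Z /dev/dvd=/var/bacula/dvd/volume-0001.iso
# Or build the filesystem on the fly, still at 1x:
growisofs -speed=1 -Z /dev/dvd -R -J /var/bacula/dvd/current/
```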
Piotr Gbyliczek wrote:
> On Sunday 14 December 2008 04:52:50 Devin Reade wrote:
>
>> If you have clients that are not on the same network as one of the
>> SD interfaces (or if you have clients with dumb resolvers that won't
>> do result record sorting based on connec
Piotr Gbyliczek wrote:
> I'm having problem with configuring bacula properly. We have one director,
> three storages and lot of clients. Clients are in different networks, so we
> need to use different IP to connect from client to storage and different IP
> to connect from director to storage.
Arno Lehmann <[EMAIL PROTECTED]> wrote:
> Well, I suppose lack of transactions can make MySQL sound less reliable,
> but unless your catalog server tends to crash or damage its filesystem I
> don't think this is the most important issue...
I was thinking more of the situation where concurrent d
I'm just in the process of setting up bacula and debating postgres
vs mysql (I've used both, but have more experience with the latter).
I see that the current online users' manual mentions that one advantage
of using Postgres is support for transactions and stored procedures.
MySQL also has suppo
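MySQL's transaction support depends on the storage engine: InnoDB tables are transactional, MyISAM tables are not. One way to spot non-transactional tables in the catalog (database and user names are assumptions, and this needs a live server):

```shell
# Any rows returned use a non-InnoDB engine, i.e. no transactions.
mysql -u bacula -p bacula -e \
  "SHOW TABLE STATUS WHERE Engine <> 'InnoDB'"
```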
Magnus Ahl <[EMAIL PROTECTED]> wrote:
> Our server came with blank tapes so there was no problem with labeling. How
> about unmounting them from bacula, feeding them to the drive manually using
> mtx and erasing them with "/bin/mt -f /dev/nst0 erase"?
Assuming that it is in fact okay to destroy the