yep, senior moment. Works fine
+--
|This was sent by p...@isone.biz via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--
---
While trying to use bsmtp on a FreeNAS system, in a jail: it seems to reach the
Postfix SMTP server correctly, but hangs at the point of sending the data and
ending. I am trying to run it by hand to see the error, and it looks like a
bsmtp problem. I did not have any problems on a CentOS Bacula box. I ha
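For hand-running it, bsmtp's debug trace is the quickest way to see where the conversation stalls. A sketch only: the mail host, port, and addresses below are placeholders to adjust, and it needs the jail to be able to reach the Postfix listener.

```
# -d 99 turns on bsmtp's debug output; host/addresses are hypothetical
echo "test body" | bsmtp -d 99 -h 192.168.1.10:25 \
    -f "bacula@example.com" -s "bsmtp test" root@example.com
```

If the trace stops right after the DATA phase, that points at the jail's networking or the Postfix data-timeout rather than bsmtp itself.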
Ok, so this would be a feature request.
Either a utility that reads every file in the db and updates its stored ctime
to the ctime of the file currently residing on disk.
Or...
A new switch used in the Job section called:
datamoved=yes
If it sees that, during the next run, looking for files that have changed
ok, that def explains it.
Is it possible to force Bacula to ignore, but save, current ctimes, for one
incr run?
Using mtimeonly, it could use mtime to back up an incremental of nothing, but
save the new ctimes it finds.
After the 'special' incr run, take out the mtimeonly line and everything is
back to normal
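If that approach is workable at all, the switch lives in the FileSet's Options block. A sketch only, with the resource name and path made up for illustration:

```
FileSet {
  Name = "NAS-Set"            # hypothetical name
  Include {
    Options {
      signature = MD5
      mtimeonly = yes         # compare mtime only, ignoring ctime, for this run
    }
    File = /mnt/nas           # hypothetical path
  }
}
```

Reload the director, run the one incremental, then remove the mtimeonly line again.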
Something is wrong here.
I moved 48TB from one NAS to the other with rsync -avz --numeric-ids.
All dates/times were identical between the two, but when I mounted the new one
at the same mount points, every other application pointing at the NAS was
happy, EXCEPT BACULA, which proceeded to do incrementals, selecting
I just posted about this exact thing.
Yes, I used -Aavz --numeric-ids.
The dates, times (creation and modification), user ID, and group IDs match the
original.
Bacula still wrote everything to tape again for my test directory!
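The likely culprit is ctime: rsync -Aa preserves mtime, ownership, and ACLs, but ctime is stamped by the destination kernel at copy time and no tool can carry it over, so Bacula's default change detection sees every file as modified. A minimal generic-Linux demonstration, using GNU coreutils `stat` and `cp -p` standing in for rsync's timestamp preservation:

```shell
tmp=$(mktemp -d)
echo data > "$tmp/f"
sleep 2                          # make the two creation moments distinguishable
cp -p "$tmp/f" "$tmp/g"          # -p preserves mtime, like rsync -a
mtime_f=$(stat -c %Y "$tmp/f"); mtime_g=$(stat -c %Y "$tmp/g")
ctime_f=$(stat -c %Z "$tmp/f"); ctime_g=$(stat -c %Z "$tmp/g")
echo "mtime equal: $((mtime_f == mtime_g))  ctime equal: $((ctime_f == ctime_g))"
rm -rf "$tmp"
```

The copy keeps the original mtime but gets a fresh ctime, which is exactly what Bacula notices after a bulk rsync migration.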
I moved my NAS files to a new FreeNAS unit.
I used rsync to move them, keeping numeric ids, perms, owners, groups, etc. A
full copy of everything.
Mounted the new NAS at the exact mount points the old unit used.
Plex, pytivo, etc. did not see a difference.
Bacula, on the other hand, saw something, as it still
ok, fixed
Somehow the bacula-dir startup file was old
Pulled a copy, I am running now
Thanks for all help
Phil
ok, tried a restore:
[root@bkpcentos6 mysql]# mysql -u root -D bacula -p <
/root/bacula/bin/working/bacula.sql
Enter password:
[root@bkpcentos6 mysql]# cp /dev/null /var/log/bacula/bacula.log
cp: overwrite `/var/log/bacula/bacula.log'? y
[root@bkpcentos6 mysql]# /etc/init.d/bacula-dir start
Star
>I think it's the other way around: sometime between reboots MySQL got
>upgraded to version 15, but not restarted, so the old version of MySQL
>was still running. Then after the reboot the new version of MySQL was
>loaded in memory, causing Bacula to emit this error.
Ok, I stopped mysql, and start
So, after a bit more reading:
The database is v15, so why would bacula-dir look for v12?
It is the same date as the bacula-fd, -sd, and bconsole.
bconsole -v shows 7.2
Hi
The actual version is 7.2.
I did the upgrade way back in January, and have run 400+ jobs since then. I
did a new install of 7.2 and started over back then.
I run bconsole -v and it does show v7.2.
I run bconsole -l and it does show the bacula-dir that is defined.
Is the catalog dump you talk of,
More info
Bacula 7.0.5
Mysqld is running
I can get into database using credentials and mysql command
Question. How do I restore the catalog if the db is hosed (If that is the
problem)?
Any way to debug this would be appreciated.
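For the record, the usual catalog recovery looks roughly like the transcript later in this thread; a sketch only, with the database name, credentials, and dump path taken from that transcript and assumed to match your install:

```
# Stop the director first, then recreate and reload the catalog
# from the last good dump written by the catalog-backup job:
mysql -u root -p -e "DROP DATABASE bacula; CREATE DATABASE bacula;"
mysql -u root -p -D bacula < /root/bacula/bin/working/bacula.sql
/etc/init.d/bacula-dir start
```

Anything backed up after that dump was taken will be missing from the catalog until the volumes are rescanned.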
Hi,
I was backing up an NFS volume from a NAS when the NAS crashed and the backup
aborted.
The next backup got a database error, so I rebooted and started up bacula-fd
and bacula-sd without problem, but bacula-dir dies on startup with:
[root@bkpcentos6 bacula]# cat /var/log/bacula/bacula.lo
I recovered by doing a umount, then using mtx to load another appendable blank
tape, then told Bacula to mount the tape, and it worked.
To permanently fix it, I did an update to mark the almost-full tape as full.
Re: update slots, yes, I did do an update slots, and it showed the correct
slots and tapes
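In bconsole, the permanent fix looks roughly like this; the volume name is a placeholder, and the storage name matches the one shown later in this thread:

```
*update volume=PH0024L4 volstatus=Full
*update slots
*mount storage=LTO-4
```

Marking the volume Full stops the director from ever selecting it for append again, so the next job goes straight to an appendable tape.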
I am running Bacula 7.4 with a 24-tape loader (HP/Sun) and LTO4 (800G tape). I
use a single pool named "LT04Pool".
It works fine. Trying to do a 20TB backup in 4 jobs, so more tapes than slots.
The system wrote 24 tapes and completed with the last tape, #24, at 804G used,
i.e. 99+% full.
Since it i
Won't I lose the 4 jobs on the tape?
If so, will Bacula know it and re-backup the new items in those 4 savesets?
I have an LTO4 tape system. All jobs write to the tapes in one pool. A specific
tape was in the drive, half full or so from older backup jobs.
During a tape job today, we had a power failure due to flickering power. Nothing
went off completely, but the tape stopped writing and Bacula timed out and marke
That was it
Thank you. I did move some tapes around to add the new ones. Forgot that
command. Here is the output:
*update slots
The defined Storage resources are:
1: File1
2: File2
3: LTO-4
Select Storage resource (1-3): 3
Connecting to Storage daemon LTO-4 at bkpcentos7:910
I added 5 tapes to my 24-slot library. I noticed Bacula did not continue using
the previous tape, in append mode, that it had been using. It skipped to the
next.
Then, after I labelled the new tapes (label barcodes slots=17-22), the next job
skipped a blank tape and went to another new tape.
Actually, what happened is I had forgotten to set retention to 2 years, so the
default, 60 days, was hit the same day the NAS expanded... coincidence!
I would, except I reloaded the autoloader after manually loading tapes to make
the backup in progress finish.
After the reload, it is now working as expected.
Still think it is a bug, but it's not affecting me anymore.
Thanks - worked fine!
ok, tried to do that; it seems to have failed, but list volumes shows retention
is now correct:
*update
Update choice:
1: Volume parameters
2: Pool from resource
3: Slots from autochanger
4: Long term statistics
Choose catalog item to update (1-4): 1
Parameters to modify:
1:
No
All tapes are physically labelled with barcodes as well as labelled by Bacula,
shown as status=Append with 64,512 bytes used, and in the same pool.
ok, changed prune off and all retention to 2 years.
I have a unique situation where nothing is ever removed and only a little is
ever added, so 2 years on a full will be fine.
So now, how can I go into postgresql to change retention to 2 years on already
written stuff?
Thanks!
phil
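The supported route for already-written volumes is bconsole's update (Volume parameters, then Volume Retention, which offers to apply the value to all volumes in the pool). Straight in PostgreSQL it would be roughly the following, noting that Bacula stores retention in seconds; table and column names are from the standard Bacula schema, and the blanket UPDATE assumes, as here, that every volume should get the same value:

```
-- 2 years = 2 * 365 * 24 * 3600 = 63072000 seconds
UPDATE Media SET VolRetention = 63072000;
UPDATE Pool  SET VolRetention = 63072000;  -- so newly labelled volumes inherit it
```

Stop the director (or at least avoid running jobs) while editing the catalog by hand.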
>On 1/5/2015 9:20 AM, philhu wrote:
>I added 8tb to my nas, all machines show the new size/free etc, but bacula
>decided that because
>the NAS drive size is bigger, it will do a Full on all my backups/sets.
>
>Why should the nas getting bigger cause all new fulls? The data
I have a tape loader.
Slots 1-5 have labeled tapes in the pool, labelled PH0018L4-PH0022L4, media
id=1-5.
Slots 6-22 have labeled tapes in the pool, labelled PH0001L4-PH0017L4, media
id=11-27.
And the pool also has 5 labeled tapes out of the unit, PH0023L4-PH0027L4, media
id=6-10.
All are labelled and in the pool.
I added 8TB to my NAS; all machines show the new size/free space, etc., but
Bacula decided that because the NAS drive size is bigger, it will do a Full on
all my backups/sets.
Why should the NAS getting bigger cause all new fulls? The data on the drive
did not change.
Does it happen every month, or just when the 1st occurs on Mon-Sat?
Your schedule says always on the first, and incr Mon-Sat, so if the 1st occurs
on Mon-Sat, you will get both.
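The overlap goes away if the full is pinned to a day the incrementals never run, as in the stock sample schedule shipped with bacula-dir.conf:

```
Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st sun at 23:05
  Run = Differential 2nd-5th sun at 23:05
  Run = Incremental mon-sat at 23:05
}
```

With the Full on Sundays only, a 1st falling on a weekday just produces the normal incremental.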
Hi. I did not notice the date/time, sorry.
I restarted my Bacula after a weird bug when I expanded my NAS. The expansion
made every job run a new full. So I reset everything, and yes, I had disabled
all jobs after the restart and manually started the fulls before any jobs would
be scheduled.
Bacula 7.0.
I have a bunch of scheduled jobs, but they don't seem to start. I did do a
run of one as a test and it started up fine. How can I release, or whatever,
the scheduled jobs to run? I did the 'mount' command to start them, and the
one manually run job is running now.
A 'status dir
BTW, this did not promote to a full job; it's still marked incremental, just
re-taping ALL files.
I changed my NAS from 24T to 32T by adding drives to my NAS.
Not a lot of changes to the data on the NAS.
Bacula 7.0.5 is working ok, but it decided all files were eligible for backup
during the first incremental after the NAS resizing.
I assume it saw that the drive size had changed, which was enough.
Hi. I am using Bacula 7.0.5. I have disk spooling turned on (80G). During
backups, it spools to the data spool and attr spool, then writes data to tape
as 80G fills up.
When it is done, the data spool shows empty, but the attr spool doesn't:
Used Volume status:
Reserved volume: PH0036L4 on
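For reference, the spooling knobs involved are split between the Job and the SD's Device resource. A sketch only, with the 80G size from this post and the resource names and path made up for illustration:

```
Job {
  Name = "NAS-Backup"                   # hypothetical
  Spool Data = yes                      # spool job data before writing to tape
  Spool Attributes = yes                # spool catalog attribute inserts too
}

Device {
  Name = "LTO-4"                        # hypothetical
  Spool Directory = /var/spool/bacula   # hypothetical path
  Maximum Spool Size = 80G
}
```

The attribute spool is normally flushed into the catalog at job end, so leftovers there usually point at the despool-to-database step, not the tape write.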
I am seeing a weird autoloader problem in CentOS 7.
I have an HP (Sun) SL24 changer with one LTO4 drive in it. It works fine in
mt/mtx/bacula, etc., perfect.
The problem I am seeing is that if I pull a magazine to change tapes, or use
the mailslot to load a tape, the system hangs on a device wait.