Starting up the Director, I can only get so far and I do not understand
why. I am running 5.0.0_1 from Ports. This is a FreeBSD system recently
upgraded from 6.2 to 7.3-RELEASE-p1. When I start up bacula-dir in debug
mode I get the following:
/var/run]# /usr/local/sbin/bacula-dir -d 500 -u
2010/6/10 bwellsnc
> Here is what I have setup for my conf's. I have my conf files in a conf.d
> directory. I added this to my bacula-dir.conf file:
>
>
Did you run all the suggested tests for the mtx-changer script, and did they
all pass?
_
{Beto|Norberto|Numard} Meijome
"
Here is what I have setup for my conf's. I have my conf files in a conf.d
directory. I added this to my bacula-dir.conf file:
@|"sh -c 'for f in /etc/bacula/conf.d/*.conf ; do echo @${f} ; done'"
I then setup a conf file just for the tape pool and tape storage:
# Tape pool definition
Pool {
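As an aside, the @| line above can be dry-run from a plain shell before the Director ever parses it. The directory below is a throwaway stand-in for /etc/bacula/conf.d; the loop prints one @-include line per fragment, which is exactly what bacula-dir consumes:

```shell
# Throwaway stand-in for /etc/bacula/conf.d with two empty fragments.
mkdir -p /tmp/confd-demo
: > /tmp/confd-demo/pool.conf
: > /tmp/confd-demo/storage.conf

# Same loop the @| directive runs: each fragment becomes an @/path line,
# which the Director then reads as an inline include.
for f in /tmp/confd-demo/*.conf ; do echo @${f} ; done
```

If the loop prints nothing, the glob did not match, which is the first thing to check when the Director seems to ignore conf.d.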
On 06/09/10 17:07, Jeremiah D. Jester wrote:
> Every time I try to delete a volume by id I get an error. Is this
> normal? Deleting by VolumeName works fine.
>
> Enter *MediaId or Volume name: 5
>
> sql_get.c:1062 Media record for Volume "5" not found.
Since about Bacula 3.x, volumes can
2010/6/9 Jeremiah D. Jester :
> Every time I try to delete a volume by id I get an error. Is this normal?
> Deleting by VolumeName works fine.
>
> Thanks,
>
> Jj
>
> *delete
>
> In general it is not a good idea to delete either a
>
> Pool or a Volume since they may contain data.
>
On 6/9/2010 1:22 PM, Robbie Base wrote:
What version of CentOS are you using?
I did find a configure.log file and a configure.out, but that was all. Both
look OK.
Nothing really jumps out, but I am no expert.
Bacula does not start, so there is no log.
Thanks for your help and direction.
On Wed, Jun 9, 20
Every time I try to delete a volume by id I get an error. Is this normal?
Deleting by VolumeName works fine.
Thanks,
Jj
*delete
In general it is not a good idea to delete either a
Pool or a Volume since they may contain data.
You have the following choices:
1: volume
2: pool
3:
2010/6/9 bwellsnc :
> The loader is set to Random. It looks more like an issue with mtx and
> the mtx-changer script. Like I said, it will write for client1-job1, then
> when it goes to client2-job1 it will move the tape to slot 1 and then won't
> bring it back. I want the tape to continue to fi
The loader is set to Random. It looks more like an issue with mtx and
the mtx-changer script. Like I said, it will write for client1-job1, then
when it goes to client2-job1 it will move the tape to slot 1 and then won't
bring it back. I want the tape to continue to fill until it's full. If it's
On Jun 9, 2010, at 3:22 PM, Robbie Base wrote:
What version of CentOS are you using?
CentOS 5.5 x86_64
I did find a configure.log file and a configure.out, but that was all.
Both look OK.
Nothing really jumps out, but I am no expert.
Right, these won't tell you why your service isn't s
What version of CentOS are you using?
I did find a configure.log file and a configure.out, but that was all. Both
look OK.
Nothing really jumps out, but I am no expert.
Bacula does not start, so there is no log.
Thanks for your help and direction.
On Wed, Jun 9, 2010 at 2:29 PM, Charlie Reddington <
char
I have had luck with a Quantum Superloader 3 in the past. Can you let
me know what mode the loader is set to (it should be set to Random).
Thanks -Jason
On Wed, 2010-06-09 at 15:02 -0400, bwellsnc wrote:
> OS: CentOS 5.5
> Bacula Version: 5.0.2
>
>
> Hello. I
OS: CentOS 5.5
Bacula Version: 5.0.2
Hello. I am able to write data to my Quantum Superloader 3 only by
basically moving the tape manually and disabling the autochanger. Here is
the situation. This unit has an 8 tape magazine. I have a tape in SLOT 1.
When I attempt to run a backup job to tap
I see there are a lot of guides for installation on CentOS but don't see any
upgrade guides.
+--
|This was sent by adam.gr...@comarch.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+---
On Wed, Jun 9, 2010 at 2:31 PM, Jeremiah D. Jester
wrote:
> Thanks. I ran the command and it looks like it scanned all my tapes (see 2nd
> section). Also, how can I remove the tapes I've already labeled and relabel
> them with their barcodes?
You need to manually get each tape into the drive using either
This may be of some interest too. I ran 'update slots' and it didn't seem to
change anything.
*status slots
The defined Storage resources are:
1: File
2: Tape
Select Storage resource (1-2): 2
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Enter autochanger drive[0]
Thanks. I ran the command and it looks like it scanned all my tapes (see 2nd
section). Also, how can I remove the tapes I've already labeled and relabel
them with their barcodes? (see first section) Also, I have many more tapes in
here that aren't allocated to any of the below pools; only 8 tapes of 22
On Jun 9, 2010, at 12:59 PM, Robbie Base wrote:
> 1) Has anyone tried to run Bacula under Oracle Enterprise Linux
> version 5.x?
>
> After doing the following:
> ./configure
> make
> make install
> cd /d02/bacula/sbin
>
> vi the /etc/ld.so.conf file and added the following line /usr/lib64/
On Wed, Jun 9, 2010 at 2:08 PM, Jeremiah D. Jester
wrote:
> Sorry, John. Yes, it does have a bar code reader. I'll try label barcodes.
>
Here is some info on that:
http://www.bacula.org/manuals/en/concepts/concepts/Autochanger_Resource.html#SECTION001911
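For reference, a minimal sketch of the SD-side Autochanger resource that manual section documents; every name and device path here is an assumption for illustration, not taken from the poster's setup:

```
Autochanger {
  Name = "Scalar24"            # hypothetical name
  Device = Drive-0             # must match a Device resource in bacula-sd.conf
  Changer Device = /dev/sg0    # assumed SCSI generic node for the robot arm
  Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
}
```

With a resource like this in place, `label barcodes` and `update slots` can drive the changer instead of manual tape loads.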
Sorry, John. Yes, it does have a bar code reader. I'll try label barcodes.
-Original Message-
From: John Drescher [mailto:dresche...@gmail.com]
Sent: Wednesday, June 09, 2010 11:07 AM
To: Jeremiah D. Jester
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] cannot find app
> Note, this is a tape robot (ADIC Scalar 24) and
> has an auto changer mechanism.
Again, does the changer have a barcode reader? If so, you do not want to
give the volumes names by hand. You run a single command
label barcodes
and put volumes in a pool. Since I have lots of pools I just put them
into the Scr
1) Has anyone tried to run Bacula under Oracle Enterprise Linux version
5.x?
After doing the following:
./configure
make
make install
cd /d02/bacula/sbin
vi the /etc/ld.so.conf file and added the following line /usr/lib64/mysql
ldconfig
./bacula start
I get nothing started. Bacula does not s
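A hedged checklist for this symptom, assuming the /d02/bacula install prefix from the post (the config file path below is a guess); a silent non-start right after touching ld.so.conf is usually an unresolved shared library:

```shell
# Check whether the Director binary can resolve its MySQL client library;
# any "not found" line in this output is the usual cause of a silent
# non-start after a source build against MySQL.
ldd /d02/bacula/sbin/bacula-dir | grep -i mysql

# If libmysqlclient shows as "not found", register its directory and
# refresh the linker cache (this is what the ld.so.conf edit was for):
echo '/usr/lib64/mysql' >> /etc/ld.so.conf
ldconfig

# Then run the daemon in the foreground with debug output, so any error
# reaches the terminal instead of being lost by the wrapper script
# (the -c path is an assumption; adjust to your actual config location):
/d02/bacula/sbin/bacula-dir -f -d 100 -c /d02/bacula/etc/bacula-dir.conf
```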
Forgot to include this as well...
*list volumes
Pool: Onsite
No results to list.
Pool: Catalog
No results to list.
Pool: Offsite
+-+-+---+-+--+--+--+-+--+---+---+-+
| MediaId | V
Hello,
Another question for the list. Every time I run a backup of a client I get a
message similar to the one below...
09-Jun 10:43 bacula01-sd JobId 3: Job kojak-onsite.2010-06-09_10.19.21_06 is
waiting. Cannot find any appendable volumes.
Please use the "label" command to create a new Volume fo
Hi
I am trying to build the 5.0.2 win32 director using the cross-tools included
in the source for win32 on Ubuntu 10.04.
I type sudo ./build-win32-cross-tools as instructed and I get the following
error due to -Werror being present.
Could someone let me know where I should disable the warni
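A hedged workaround, not a confirmed fix: strip -Werror from whichever cross-tools script or generated Makefile carries it, so warnings stop aborting the build. The exact file varies by release, so this is demonstrated on a throwaway file; locate the real offenders with `grep -rl -- -Werror .` and run the same sed against each of them:

```shell
# Throwaway stand-in for a Makefile fragment that carries -Werror.
printf 'CFLAGS = -O2 -Werror -Wall\n' > /tmp/werror-demo.mk

# Drop the flag in place (GNU sed); other flags are left untouched.
sed -i 's/ -Werror//g' /tmp/werror-demo.mk
cat /tmp/werror-demo.mk   # CFLAGS = -O2 -Wall
```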
> I have a Scalar i500 library with 5 drives and 135 slots. I have a Red Hat 5
> server with a 1Gb NIC. The setup works fine for backups on most systems. The
> problem I have is an NFS share with 11TB of data that I need to back up. Every
> time I run this job it will write about 600GB of data to
On Wed, 09 Jun 2010 11:29:37 -0400, ekke85 wrote:
> Hi
>
> I have a Scalar i500 library with 5 drives and 135 slots. I have a Red
> Hat 5 server with a 1Gb NIC. The setup works fine for backups on most
> systems. The problem I have is an NFS share with 11TB of data that I need
> to back up. Every
Hi
I have a Scalar i500 library with 5 drives and 135 slots. I have a Red Hat 5
server with a 1Gb NIC. The setup works fine for backups on most systems. The
problem I have is an NFS share with 11TB of data that I need to back up. Every
time I run this job it will write about 600GB of data to a t
OK, then I have to start studying the Bacula upgrade on CentOS 4.5.
As far as I know the database has to be upgraded too. I will simulate on a
test computer.
Thank You John
On Wed, Jun 9, 2010 at 10:22 AM, mst wrote:
>
> It looks like some network error
>
> 09-Jun 10:05 merlin-dir: pcuser.2010-06-09_09.59.50 Fatal error: Network
> error with FD during Backup: ERR=Connection reset by peer
> 09-Jun 10:05 merlin-dir: pcuser.2010-06-09_09.59.50 Fatal error: No Job
On Wed, Jun 9, 2010 at 10:05 AM, mst wrote:
>
> That's correct I have RAID 10.
>
> I have just moved one of the files from u01 to see if space is the problem but I
> have this:
>
> 09-Jun 09:59 merlin-dir: No prior Full backup Job record found.
> 09-Jun 09:59 merlin-dir: No prior or suitable Full back
It looks like some network error
09-Jun 10:05 merlin-dir: pcuser.2010-06-09_09.59.50 Fatal error: Network error
with FD during Backup: ERR=Connection reset by peer
09-Jun 10:05 merlin-dir: pcuser.2010-06-09_09.59.50 Fatal error: No Job status
returned from FD.
09-Jun 10:05 merlin-dir: pcu
That's correct I have RAID 10.
I have just moved one of the files from u01 to see if space is the problem but I
have this:
09-Jun 09:59 merlin-dir: No prior Full backup Job record found.
09-Jun 09:59 merlin-dir: No prior or suitable Full backup found in catalog.
Doing FULL backup.
09-Jun 09:59 merl
Hi there,
I'm using Bacula 5.0.1 under CentOS 5.4 64-bit with a tape library (3
drives, 133 slots). It's been working great, non-stop. I just noticed one
small item:
We sent a bunch of tapes offsite. After taking them out of the robot, I
issued an update slots command, selected the robot device, it d
The job failed with an unexpected error:
08-Jun 00:20 tsi-vms01-fd JobId 267: Fatal error: backup.c:892 Network send
error to SD. ERR=Input/output error
which means that the fd got Input/output error while writing to the sd via a
socket.
I've never seen ERR=Input/output error from a socket conn
These errors:
08-Jun 00:20 tsi-vms01-fd JobId 267: Fatal error: backup.c:892 Network
send error to SD. ERR=Input/output error
08-Jun 00:16 backup01-sd JobId 267: Fatal error: append.c:242 Network
error reading from FD. ERR=No data available
Maybe your SD and FD are listening on the loopback inter
Hi Francisco,
Thanks for your quick reply! I've modified the bacula-sd.conf to use /dev/nst0
instead of /dev/st0.
Here is the output of the btape test command on /dev/nst0:
r...@backup01:/etc/bacula# btape /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:284 Using device: "/
You must use the "/dev/nst0" device in your Storage Daemon config
file (Device section) rather than "/dev/st0". Try to run the btape
test and post the results for more accurate help.
Javier.
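To make the distinction concrete, here is a minimal sketch of a bacula-sd.conf Device section using the non-rewinding node; the resource and media names are invented for illustration. With /dev/st0 the kernel rewinds the tape on every close, so earlier data gets overwritten instead of appended:

```
Device {
  Name = "LTO-Drive"          # hypothetical name
  Media Type = LTO            # must match the Storage resource in bacula-dir.conf
  Archive Device = /dev/nst0  # non-rewinding node: tape position survives close()
  AutomaticMount = yes
  AlwaysOpen = yes
  RemovableMedia = yes
  RandomAccess = no
}
```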
2010/6/9 Niek Linnenbank :
> Hello,
>
>
>
> We are using Bacula for several months now as our primary ba
Hello,
We have been using Bacula for several months now as our primary backup
solution, yet we are still struggling to get the automatic backup to tape
working.
I temporarily disabled them and ran the backup jobs manually. I inserted an
empty
tape and labelled it 'Maandag'. The jobs started and w
On Wed, Jun 09, 2010 at 12:05:34PM +0400, Alexander Pyhalov wrote:
> Hello.
> I have some questions about Virtual Full backups. As I understand it, VF
> backups should use a separate volume pool. We are going to use Bacula for
> making backups of a lot of data (>2 TB, about 10 servers). In these
> cir
2010/6/8 Jeremiah D. Jester :
>
>
> I've just got Bacula into a working state where it can connect to clients and
> the required daemons. I'm now trying to prepare a volume to write to via
> bconsole. Is manual load usually required for Bacula or is this a
> configuration issue?
>
>
>
> Thanks.
>
> J
Hello.
I have some questions about Virtual Full backups. As I understand it, VF
backups should use a separate volume pool. We are going to use Bacula for
making backups of a lot of data (>2 TB, about 10 servers). In these
circumstances I'd like to avoid making Full backups. I'd like to make one
initia
>> I've just got Bacula into a working state where it can connect to clients
>> and the required daemons. I'm now trying to prepare a volume to write to
>> via bconsole. Is manual load usually required for Bacula or is this a
>> configuration issue?
>>
>> Thanks.
>> JJ
>>
>> *label
>> Automaticall