Thanks, John.
I'll take a look at my concurrency setup - it may be that it's not high
enough.
An upgrade is on my ToDo list ...
Alan Davis
Senior Architect
Ruckus Network, Inc.
703.464.6578 (o)
410.365.7175 (m)
[EMAIL PROTECTED]
alancdavis AIM
> -Original Message-
autochanger or can it use drive 1 if drive 0 is busy?
2a. If restore can use drive 1, how do I tell it to do that?
Thanks!
Alan Davis
Senior Architect
Ruckus Network, Inc.
703.464.6578 (o)
410.365.7175 (m)
[EMAIL PROTECTED]
alancdavis AIM
---
options that are available to users of proprietary products - find
another support vendor or use financial incentives with the vendor to
improve response from them.
/rant mode off/
Alan Davis
Senior Architect
Ruckus Network, Inc.
703.464.6578 (o)
410.365.7175 (m)
[EMAIL PROTECTED]
Item n: Implement NDMP protocol support
Origin: Alan Davis
Date: 06 March 2007
Status: Submitted
What: Network Data Management Protocol (NDMP) is implemented by a number
of NAS filer vendors to enable backups using third-party software.
Why: This would allow NAS filer
Item n: Implement Catalog directive for Pool resource in Director
configuration
Origin: Alan Davis [EMAIL PROTECTED]
Date: 6 March 2007
Status: Submitted
What: The current behavior is for the director to create all pools found
in the configuration file in all catalogs. Add
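Notes: A rough sketch of how the proposed directive might look in a Pool
resource (the directive name and the pool/catalog names below are only
placeholders for illustration, not an agreed syntax):

  Pool {
    Name = Weekly
    Pool Type = Backup
    Catalog = MyCatalog   # proposed: create this pool only in the named catalog
  }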
On Tue, 6 Mar 2007 09:39:35 +0100
Kern Sibbald <[EMAIL PROTECTED]> wrote:
> On Monday 05 March 2007 23:57, Alan Davis wrote:
>> I understand the sanity check - but the job wasn't idle - the FD and
>> SD were both working and data was being written to tape
seemed to indicate that the director was trying to talk to
the FD but couldn't, or was expecting a response to the mount that it
never got.
Alan Davis
Senior Architect
Ruckus Network, Inc.
703.464.6578 (o)
410.365.7175 (m)
[EMAIL PROTECTED]
alancdavis AIM
> -Original Messag
It's clear that the db record for the job doesn't get updated with the
number of files written, etc.
Is it possible, given the other data that I have available, to
synthesize enough of the job record entry that it could be marked 'T'
(terminated)
to try
to duplicate the problem exactly. I will try to create a reproducer with
a smaller backup set once I have the archive backup completed.
Any insight on the possible cause(s) would be greatly appreciated.
Alan Davis
Senior Architect
Ruckus Network, Inc.
703.464.6578 (o)
410.365.7175 (m)
and create pools there and populate them with volumes first?
Are there commands or command options that I'm missing that direct
operations to use the catalog other than "use"?
Thanks
>
>
> Alan Davis
> Senior Architect
> Ruckus Network, Inc.
> 703.464.6578
ng
other than the mention that it's possible.
The db create scripts hard-code the db name so they aren't much help
without modifying them.
Is there a write-up anywhere on how to configure a second catalog that
would be written with the non-db-savvy admin in mind?
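For reference, my guess is that the Director side would need a second
Catalog resource along these lines (untested, and the database name, user
and password below are just placeholders):

  Catalog {
    Name = Catalog2
    dbname = bacula2
    user = bacula
    password = "xxx"
  }

with the relevant Client resources pointed at "Catalog2" - it's the
database-creation side that I can't find documented.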
Thanks
A
"Planning"
> Pool = Weekly, Daily
>}
># Autochanger Dell PV132T (21x LTO3)
>Storage {
> Name = Autochanger
> Address = srv-mpp-lrs
> SDPort = 9103
> Password = "xxx"
>  Device = Dell-PV132T           # must be same as Device in Storage daemon
>
>co-ordinate the jobs so that the hosts pass the drive around seamlessly
>without fighting over it.
>
>Is this feasible with Bacula ?
>
>If so, how would you do it ?
>If not, any alternate suggestions ?
>
>Thanks in Adva
copy of job.c
Change the 30 to something larger - 90 would make it wait for 90
minutes.
In the .../src/stored directory run "make && make install"
Start the SD.
Alan Davis
Senior Architect
Ruckus Network, Inc.
determine what practical limit, if any, the FD has.
Alan Davis
Senior Architect
Ruckus Network, Inc.
703.464.6578 (o)
410.365.7175 (m)
[EMAIL PROTECTED]
alancdavis AIM
> -Original Message-
> From: Alan Davis [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, January 30, 2007
a mix of very large db files and many small files - I expect
the backup to take at least 12 hours based on prior experience when
using a more "normal" fileset specification based on directory names
rather than individual files.
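To illustrate the difference, the directory-based FileSet I'd normally use
looks roughly like the first sketch below, while this run lists individual
files explicitly (the paths here are made-up examples, not the real ones):

  FileSet {
    Name = "ByDirectory"
    Include {
      Options { signature = MD5 }
      File = /data/db
    }
  }

  FileSet {
    Name = "ByFile"
    Include {
      Options { signature = MD5 }
      File = /data/db/datafile0001.dbf
      File = /data/db/datafile0002.dbf
      # ... one File = line per individual file
    }
  }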
Alan Davis
Senior Architect
Ruckus Network, Inc.
6,047
max_bufs=298,836
The SD waited for the FD to connect and is running the job as expected.
Alan Davis
Senior Architect
Ruckus Network, Inc.
703.464.6578 (o)
410.365.7175 (m)
[EMAIL PROTECTED]
alancdavis AIM
> -Original Message-
> From: Alan Davis [mailto:[EMAIL PROTECTED]
>
      /* wait for the FD to connect to the SD, or give up when the timeout expires */
      errstat = pthread_cond_timedwait(&jcr->job_start_wait, &mutex,
                                       &timeout);
      if (errstat == 0 || errstat == ETIMEDOUT) {
         break;
      }
   }
   V(mutex);
Alan Davis
Senior Architect
Ruckus Network, Inc.
703.464.6578 (o)
410.365.7175 (m)
[EMAIL PROTECTED]
alancdavis AIM
> -Original Messa
would be the FD's capability of handling a file list of
10K+ entries.
Thanks.
Alan Davis
Senior Architect
Ruckus Network, Inc.
703.464.6578 (o)
410.365.7175 (m)
[EMAIL PROTECTED]
alancdavis AIM
> -Original Message-
> From: Kern Sibbald [mailto:[EMAIL PROTECTED]
> Sent: M
s the potential to return tens of thousands of files stored in
hundreds of directories.
Thanks
----
Alan Davis
Senior Architect
Ruckus Network, Inc.
703.464.6578 (o)
410.365.7175 (m)
[EMAIL PROTECTED]
alancdavi
> -Original Message-
> From: Kern Sibbald [mailto:[EMAIL PROTECTED]
> Sent: Thursday, January 11, 2007 3:26 AM
> To: Alan Davis; bacula-users; bacula-devel
> Subject: Re: [Bacula-users] Optimizing bacula in large filesystems
>
> On Thursday 11 January 2
mention here on the list of others backing up multi-terabyte
servers - I'm looking for suggestions on how to optimize and speed up
the backup process.
Thanks!
Alan
Alan Davis
Senior Architect
Ruckus Network, Inc.
703.464.6578 (o)
410.365.7175 (m)
[EMAIL PROTECTED]
>o show an interest, and though it has been mentioned,
>there doesn't seem to be much interest. In addition, we need someone to
>implement it -- it is unlikely to be me as I have already defined my
>priorities for the next release (next 9 months at least).
>
>Regards,
Storage Element 8:Full :VolumeTag=EJB059
Storage Element 9:Full :VolumeTag=DAI259
Storage Element 10:Full
# mtx load 1 0
Loading media from Storage Element 1 into drive 0...done
# mt status
DEC DLT TZ89 tape drive
tape volume and load a
new one if a job specifies a separate pool rather than just using the
most available drive.
Alan Davis
Senior Architect
Ruckus Network, Inc.
703.464.6578 (o)
410.365.7175 (m)
[EMAIL PROTECTED]
alancdavis AIM
> Message: 2
> Date: Thu, 21 Dec 2006 12:40:54 +01
Prefer Mounted Volumes = no
You will also need to increase 'Maximum Concurrent Jobs' appropriately.
Keywords: bacula simultaneous jobs autochanger concurrent interleave
multiple drives
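For example, the pieces involved end up looking something like this in the
Director configuration (resource names and the numbers are only
illustrative - tune them to your setup):

  Director {
    ...
    Maximum Concurrent Jobs = 20
  }
  Storage {
    Name = Autochanger
    ...
    Maximum Concurrent Jobs = 10      # allow several jobs to use this storage
  }
  JobDefs {
    Name = "DefaultJob"
    ...
    Prefer Mounted Volumes = no       # let a job take an idle drive instead of queueing
  }

The Storage daemon's own Storage resource has a Maximum Concurrent Jobs
setting that may need raising as well.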
Alan Davis
Senior Architect
Ruckus Network, Inc.
703.464.6578 (o)
410.365.7175 (m)
[EMAIL PROTECTED]
are:
Offline On Unmount = no
Hardware End of Medium = yes
BSF at EOM = yes
Backward Space Record = yes
Backward Space File = yes
Fast Forward Space File = yes
Use MTIOCGET = yes
TWO EOF = yes
Alan Davis
Senior Architect
Ruckus Network, Inc.
other sections of the manual and FAQ it suggests that you should allow
bacula to choose the volume that it wants to use; this seems
counter-intuitive.
I've looked at the Use Volume Once, Maximum Volume Jobs, retention and
recycle time directives but none of them seem appropriate.
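For reference, these are the directives I mean, shown in a Pool resource
(the values are only placeholders):

  Pool {
    Name = Daily
    Pool Type = Backup
    Use Volume Once = yes        # write a volume once, then mark it used
    Maximum Volume Jobs = 1      # or limit the number of jobs per volume
    Volume Retention = 14 days
    Recycle = yes
  }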
Alan Da
Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
Offline On Unmount = no
Hardware End of Medium = no
BSF at EOM = yes
Backward Space Record = no
Fast Forward Space File = no
TWO EOF = yes
LabelMedia = yes;                   # lets Bacula label unlabeled
out how to reserve/mount the device yet. I'll be digging into
the developer's manual but if anyone wants to take a look at it I'd
certainly appreciate it.
The bottom line is that I think I've convinced myself that the btape
error is specific to btape and bacula_sd will function as
e "list" command
#cat ${TMPFILE} | grep " *Storage Element [0-9]*:.*Full" | awk "{print \$3 #\$4}" | sed "s/Full *\(:VolumeTag=\)*//"
cat ${TMPFILE} | grep ' *Storage Element [0-9]*:.*Full' | awk '{print $3 $4}' | sed 's/F
"Use the Autochanger-Directive in the SD configuration. Use that device
with btape and (usually) in the DIR config."
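If I'm reading that right, the SD side comes out roughly like this (a
guess based on the sample configs - the names are placeholders):

  Autochanger {
    Name = Autochanger
    Device = Drive-0
    Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
    Changer Device = /dev/changer
  }
  Device {
    Name = Drive-0
    Drive Index = 0
    Archive Device = /dev/nst0
    Autochanger = yes
    ...
  }

with the Director's Storage resource using Device = Autochanger and
Autochanger = yes, and btape pointed at the same device.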
Alan Davis
Senior Architect
Ruckus Network, Inc.
703.464.6578 (o)
410.365.7175 (m)
[EMAIL PROTECTED]
alancdavis AIM
-Original Message-----
From: Alan Davis [mai
Changer Device = /dev/changer
# Enable the Alert command only if you have the mtx package loaded
Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
Offline On Unmount = no
Hardware End of Medium = no
BSF at EOM = yes
Backward Space Record =