On 04/02/2012 06:06 PM, Stephen Thompson wrote:
>
>
> First off, thanks for the response Phil.
>
>
> On 04/02/2012 01:11 PM, Phil Stracchino wrote:
>> On 04/02/2012 01:49 PM, Stephen Thompson wrote:
>>> Well, we've made the leap from MyISAM to InnoDB, seems like we win on
>>> transactions, but
I don't think those commands ever worked. I tried using them as well a
while back, but had to use Run Before Job and Run After Job to get
automatic mounting/unmounting to work. I no longer do that, but that's
what you'll want to do.
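For what it's worth, here is a minimal sketch of that approach in the Director's
Job resource. The device node /dev/da1p1 and the mount point /backup are
placeholders, and it assumes the Director and Storage Daemon run on the host
the eSATA disk is attached to:

Job {
  Name = "backup-to-esata"
  JobDefs = "DefaultBackup"
  # Mount the disk before the job starts, unmount it when the job ends.
  RunBeforeJob = "/sbin/mount /dev/da1p1 /backup"
  RunAfterJob  = "/sbin/umount /backup"
}

If the disk should be unmounted even when a job fails, a RunScript block with
RunsOnFailure = yes can be used in place of RunAfterJob.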
On 4/2/2012 4:00 PM, Oliver Lehmann wrote:
> Hi,
>
> I'm bac
First off, thanks for the response Phil.
On 04/02/2012 01:11 PM, Phil Stracchino wrote:
> On 04/02/2012 01:49 PM, Stephen Thompson wrote:
>> Well, we've made the leap from MyISAM to InnoDB, seems like we win on
>> transactions, but lose on read speed.
>
> If you're finding InnoDB slower than My
Hi,
I'm backing up with bacula to an eSATA hard disk that I'd like to
automatically mount before bacula accesses it and unmount
after the access is done.
I'm running FreeBSD 9/amd64.
I have the following bacula-sd.conf part:
Device {
  Name = FileStorage
  Media Type = File
  Archive Device
On 4/2/2012 3:08 PM, Murray Davis wrote:
> Thank you, Josh, for your response. I ended up doing two things...
>
> 1) I changed the permissions on /mnt/sdb1 using chmod 777 so
> everyone has read/write/execute privilege. This seemed like overkill
> since I thought that my problem was related to
On 04/02/2012 01:49 PM, Stephen Thompson wrote:
> Well, we've made the leap from MyISAM to InnoDB, seems like we win on
> transactions, but lose on read speed.
If you're finding InnoDB slower than MyISAM on reads, your InnoDB buffer
pool is probably too small.
> That aside, I'm seeing something
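For reference, the buffer pool is sized in my.cnf and read at server startup.
A minimal sketch follows; the 2G figure is only an illustration and should be
sized to the catalog and the RAM available on the database host:

[mysqld]
# InnoDB caches data and index pages in the buffer pool; the compiled-in
# default is small, which forces most catalog reads back to disk.
innodb_buffer_pool_size = 2G

You can check the current value with SHOW VARIABLES LIKE
'innodb_buffer_pool_size'; on the MySQL releases current in 2012 the setting
is not dynamic, so mysqld has to be restarted for the change to take effect.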
Thank you, Josh, for your response. I ended up doing two things...
1) I changed the permissions on /mnt/sdb1 using chmod 777 so everyone has
read/write/execute privilege. This seemed like overkill since I thought
that my problem was related to my labels and pools not being defined
properly. So,
Hello,
is it possible to use chained copy jobs? For example, I would like to
copy my full backups from local disk to a USB disk and after that to
NAS storage.
Job {
  Name = "backup-all"
  JobDefs = "DefaultBackup"
  Client = backup-fd
  FileSet = "backup-all"
  Storage = backup
  Full Backup Poo
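A rough sketch of the first hop (local disk to USB) as a Copy job follows. The
pool names Disk-Full and USB-Full and the storage resources backup and usb are
placeholders; copies are written to the Next Pool of the pool being read:

Pool {
  Name = Disk-Full
  Pool Type = Backup
  Storage = backup
  Next Pool = USB-Full          # destination for Copy jobs reading this pool
}

Pool {
  Name = USB-Full
  Pool Type = Backup
  Storage = usb
}

Job {
  Name = "copy-full-to-usb"
  Type = Copy
  Selection Type = PoolUncopiedJobs
  Pool = Disk-Full              # read side; copies land in its Next Pool
  Client = backup-fd            # required by the parser, not used for copying
  FileSet = "backup-all"
  Messages = Standard
}

Whether the second hop (USB to NAS) can itself be driven as a Copy of the copy
is version-dependent; the usual caveat on the list is that Copy jobs select
only Backup jobs, so copying a copy may not work and is worth testing before
relying on it.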
On 02/06/2012 02:45 PM, Phil Stracchino wrote:
> On 02/06/2012 05:02 PM, Stephen Thompson wrote:
>> So, my question is whether anyone had any ideas about the feasibility of
>> getting a backup of the Catalog while a single "long-running" job is
>> active? This could be in-band (database dump) or o
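With an InnoDB catalog, an in-band dump can be taken without blocking the
running job by using mysqldump's --single-transaction option, which reads from
a consistent snapshot instead of locking the tables. A minimal sketch, with
paths and credentials as placeholders (the stock make_catalog_backup script
could be adapted the same way):

Job {
  Name = "BackupCatalog"
  JobDefs = "DefaultBackup"
  FileSet = "Catalog"
  # Consistent snapshot of an all-InnoDB catalog; long-running backup jobs
  # keep inserting file records while the dump is written.
  RunBeforeJob = "/usr/bin/mysqldump --single-transaction -u bacula bacula -r /var/lib/bacula/bacula.sql"
  RunAfterJob  = "/bin/rm -f /var/lib/bacula/bacula.sql"
}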
On 04/02/2012 11:10 AM, Phil Stracchino wrote:
> On 04/02/2012 10:39 AM, Juan Pablo Botero wrote:
>>
>> Hi All.
>>
>> I'm sorry for the message in Spanish before.
>>
>> How can I add more than one client to a job?
>
> You don't. You create a job per client.
Now, what you CAN do is create a Job
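A minimal sketch of that pattern, with the client names and the other resource
names as placeholders; the shared settings live in the JobDefs and each Job
only overrides the Client:

JobDefs {
  Name = "DefaultBackup"
  Type = Backup
  Level = Incremental
  FileSet = "backup-all"
  Schedule = "WeeklyCycle"
  Storage = backup
  Pool = Default
  Messages = Standard
}

Job {
  Name = "backup-host1"
  JobDefs = "DefaultBackup"
  Client = host1-fd
}

Job {
  Name = "backup-host2"
  JobDefs = "DefaultBackup"
  Client = host2-fd
}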
In the message dated: Mon, 02 Apr 2012 11:12:35 EDT,
The pithy ruminations from "Clark, Patricia A." on
<[Bacula-users] Best method for managing mtx "Data Transfer Element" numbers to
scsi tape> were:
=> I am new to Bacula and I am in the process of installing and configuring the
software. Some
After modifying src/dird/ua_tree.c and recompiling bacula, it's ok now.
It needs comments in src/dird/ua_tree.c :
...
static int cdcmd(UAContext *ua, TREE_CTX *tree)
{
   TREE_NODE *node;
   char cwd[2000];
   if (ua->argc != 2) {
      ua->error_msg(_("Too few or too many arguments. Try us
On Sat, 24 Mar 2012, James Harper wrote:
>> more than one client is available to backup the (shared) storage. If I change
>> the name of the client in the Job definition, a full backup always occurs the
>> next time a job is run. How do I avoid this?
>
> That's definitely going to confuse Bacula.
On 04/02/2012 10:39 AM, Juan Pablo Botero wrote:
>
> Hi All.
>
> I'm sorry for the message in Spanish before.
>
> How can I add more than one client to a job?
You don't. You create a job per client.
--
Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
ala...@caerllew
I am new to Bacula and I am in the process of installing and configuring the
software. Something that is giving me some headaches is the mtx numbering on
the drives vs the /dev/st assignments. The output of the mtx status on the
auto changer and the device assignment is below:
Data Trans
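In bacula-sd.conf the link between the two numbering schemes is the Drive Index
directive, which must match the mtx Data Transfer Element number; the /dev/nstN
names are assigned by the kernel in discovery order and do not necessarily
follow it. A sketch with placeholder device paths, media type, and mtx-changer
location:

Autochanger {
  Name = Autochanger
  Changer Device = /dev/sg4
  Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
  Device = Drive-0, Drive-1
}

Device {
  Name = Drive-0
  Drive Index = 0                # mtx "Data Transfer Element 0"
  Archive Device = /dev/nst0     # verify this really is element 0's drive
  Media Type = LTO-4
  Autochanger = yes
  AutomaticMount = yes
}

Device {
  Name = Drive-1
  Drive Index = 1                # mtx "Data Transfer Element 1"
  Archive Device = /dev/nst1
  Media Type = LTO-4
  Autochanger = yes
  AutomaticMount = yes
}

A simple way to confirm the mapping is to use mtx to load a known tape into
each drive in turn and check which /dev/nst device can read it.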
Hi All.
I'm sorry for the message in Spanish before.
How can I add more than one client to a job?
Thanks.
--
Cordially:
Juan Pablo Bote
On 3/30/2012 5:54 PM, Murray Davis wrote:
> ...
> Here are the permissions for my second hard drive...
>
> root@cablemon /mnt/sdb1# ls -la
> total 28
> drwxrwxr-x 4 root bacula 4096 Mar 30 10:14 .
> drwxrwxr-x 3 root bacula 4096 Mar 29 15:10 ..
> drwxrwxr-x 2 root bacula 4096 Mar 30 10:14 backu
Rob Becker 2co.com> writes:
>
> I'd assume putting everything into a shell script would work, but that's
exactly what I'm trying to get away
> from. Putting a new script on every server would require change control,
documentation, etc. If I'm
> able to control everything from the Director I
I'd assume putting everything into a shell script would work, but that's
exactly what I'm trying to get away from. Putting a new script on every server
would require change control, documentation, etc. If I'm able to control
everything from the Director I can sidestep some of those process
On Mon, Apr 02, 2012 at 10:56:43AM +, Rob Becker wrote:
> RunScript {
> RunsWhen = Before
> Runs On Client = Yes
> Command = "/bin/echo `/bin/hostname` >
> /usr/local/bacula/working/restore_file"
>Command = "/bin/date +%%F >> /usr/local/bacula/working/restore_file"
> }
Hello all,
I'm hoping someone will be able to help me solve a problem that's been causing
me some frustration over the weekend. I'm working on a process to automate
restore jobs to confirm the validity of the backup jobs. The restore job will
restore a single file that gets created during t
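One way to express the verification end of that is sketched below as a
Restore-type job with a RunScript that checks the restored marker file. Every
name and path here is a placeholder, and it assumes an After-RunScript on a
Restore job behaves the same way it does on a Backup job:

Job {
  Name = "VerifyRestore"
  Type = Restore
  Client = backup-fd
  FileSet = "backup-all"          # required by the parser; the files actually
                                  # restored are chosen when the job is run
  Storage = backup
  Pool = Default
  Messages = Standard
  Where = /tmp/bacula-restores
  RunScript {
    RunsWhen = After
    Runs On Client = Yes
    # Compare the restored copy of the marker file with the live one.
    Command = "/usr/bin/cmp /tmp/bacula-restores/usr/local/bacula/working/restore_file /usr/local/bacula/working/restore_file"
  }
}

The restore itself can then be driven from cron, or from another job, through
bconsole.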
On 30/03/12 09:39, Alex Crow wrote:
> We tried removing the compression on some jobs, and we got a great speed
> boost. However, the SSL compression was either absent or minimal, even
> though OpenSSL libs are compiled with zlib:
They probably use Z0 or Z1 for best speed.
If that's the case the
Greetings.
I would like to know how to add several clients to a JobDefs?
Thanks.
--
Cordially:
Juan Pablo Botero
Systems Administrator
Fedora Ambassador for Colombia
http://www.jpilldev.net