Hi all,
I'm trying to figure out how to write a schedule for "the first day of
January", "the first day of Feb-Jun", "the first day of Jul", and "the first
day of Aug-Dec".
The syntax I used doesn't work:
"jan 1st at 00:00am"
Do you know what the correct one is?
Thanks
--
Nicolas
http://www.shivaserv.fr
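One way this is often written, as a hedged sketch: in a Bacula Schedule the
month keyword, the day of the month and the time are ANDed together, and
"1st" is normally the week-of-month keyword (as in "1st sun") rather than the
day, which may be why "jan 1st" is rejected. The resource name, levels and
the 00:05 time below are placeholders; please verify against the Schedule
resource documentation for your Director version.
  Schedule {
    Name = "FirstOfMonthGroups"
    # first day of January
    Run = Level=Full jan 1 at 00:05
    # first day of February through June
    Run = Level=Full feb-jun 1 at 00:05
    # first day of July
    Run = Level=Full jul 1 at 00:05
    # first day of August through December
    Run = Level=Full aug-dec 1 at 00:05
  }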
On 04/05/2012 07:27 PM, Murray Davis wrote:
> When logged on as root, I can run the following script to backup my
> MySQL databases to a local folder:
[...]
> 05-Apr 16:18 cablemon-dir JobId 27: shell command: run BeforeJob
> "/usr/local/sbin/backupdbs"
> 05-Apr 16:18 cablemon-dir JobId 27: Befo
When logged on as root, I can run the following script to backup my MySQL
databases to a local folder:
#!/bin/bash
BACKUPLOCATION=/var/local/mysqlbackups
LOGFILE=/usr/local/sbin/backupdbs.log
GZIP="$(which gzip)"
NOW=$(date +"%d-%m-%Y")
RETENTION=30
#remove .gz files greater than 30 days old
find
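The find command above is cut off in the archive. A typical cleanup line
using the script's own variables might look like the following; the exact
options are an assumption, not necessarily the original author's:
  # delete compressed dumps older than $RETENTION days (assumed reconstruction)
  find "$BACKUPLOCATION" -name "*.gz" -mtime +$RETENTION -delete >> "$LOGFILE" 2>&1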
On 04/05/2012 06:46 PM, Stephen Thompson wrote:
> On 04/05/2012 03:19 PM, Joe Nyland wrote:
>> As I think it may be useful, here's the line taken from my MySQL
>> 'RunBeforeJob' script when the full backup is taken:
>>
>> mysqldump --all-databases --single-transaction --delete-master-logs
>> --flu
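For reference, a complete command along those lines might look like this.
Only the flags quoted above are from the original; the assumed continuation
of the truncated "--flu" flag, the gzip step and the target path are
illustrative guesses:
  # consistent full dump of all databases; rotate the binary logs and purge
  # the ones that precede the dump so the hourly binlog backups start fresh
  mysqldump --all-databases --single-transaction --delete-master-logs \
      --flush-logs | gzip > /var/local/mysqlbackups/full-$(date +%d-%m-%Y).sql.gz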
On Thu, Apr 5, 2012 at 6:24 PM, Wolfgang Denk wrote:
> Dear John Drescher,
>
> In message
> you
> wrote:
>>
>> >> Do you have any restrictions on how many jobs go per volume?
>> >
>> > No.
>>
>> Is the same volume used by both clients? I mean you are not using a
>> different pool per client or
On 04/05/2012 03:19 PM, Joe Nyland wrote:
> On 5 Apr 2012, at 22:37, Stephen Thompson wrote:
>
>> On 04/05/2012 02:27 PM, Joe Nyland wrote:
>>> Hi,
>>>
>>> I've been using Bacula for a while now and I have a backup procedure in
>>> place for my MySQL databases, where I perform a full (dump) backup
Dear Mark,
In message <3158.1333653400@localhost> you wrote:
>
> => I wonder why I see situations where a client is waiting for another job
> => to complete that is only despooling, i.e. one that does not block any
> => resources on the client:
>
> This has been discussed several times. Check the li
Dear John Drescher,
In message
you wrote:
>
> >> Do you have any restrictions on how many jobs go per volume?
> >
> > No.
>
> Is the same volume used by both clients? I mean you are not using a
> different pool per client or something like that?
All these jobs use the same pool, so both jobs a
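For readers following the thread, the per-volume restriction being asked
about is usually the pool's Maximum Volume Jobs directive; a sketch with an
illustrative value:
  Pool {
    Name = Full-Pool
    Pool Type = Backup
    # once this many jobs have been written, the volume is marked Used
    # and Bacula moves on to the next volume
    Maximum Volume Jobs = 1
  }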
On 5 Apr 2012, at 22:37, Stephen Thompson wrote:
> On 04/05/2012 02:27 PM, Joe Nyland wrote:
>> Hi,
>>
>> I've been using Bacula for a while now and I have a backup procedure in
>> place for my MySQL databases, where I perform a full (dump) backup nightly,
>> then incremental (bin log) backups
On 04/05/2012 02:27 PM, Joe Nyland wrote:
> Hi,
>
> I've been using Bacula for a while now and I have a backup procedure in place
> for my MySQL databases, where I perform a full (dump) backup nightly, then
> incremental (bin log) backups every hour through the day to capture changes.
>
> I basic
Hi,
I've been using Bacula for a while now and I have a backup procedure in place
for my MySQL databases, where I perform a full (dump) backup nightly, then
incremental (bin log) backups every hour through the day to capture changes.
I basically have a script which I have written which is run a
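A rough sketch of what the hourly "incremental" side of such a scheme can
look like, assuming the binary logs are copied into a spool directory that a
Bacula FileSet then picks up. The paths, the binlog base name and the flush
step are assumptions, not Joe's actual script:
  #!/bin/bash
  # close the current binary log and open a new one
  mysqladmin flush-logs
  # copy the closed binary logs to a directory included in the Bacula FileSet
  BINLOG_DIR=/var/lib/mysql                 # assumed datadir
  SPOOL_DIR=/var/local/mysqlbackups/binlogs
  mkdir -p "$SPOOL_DIR"
  cp -n "$BINLOG_DIR"/mysql-bin.[0-9]* "$SPOOL_DIR"/   # -n: never overwrite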
On 04/05/2012 02:41 PM, Stephen Thompson wrote:
> On 04/02/2012 03:33 PM, Phil Stracchino wrote:
>> (Locking the table for batch attribute insertion actually isn't
>> necessary; MySQL can be configured to interleave auto_increment inserts.
>> However, that's the way Bacula does it.)
>
> Are you
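Presumably the setting referred to is InnoDB's auto-increment lock mode,
which only applies to InnoDB tables and needs a server restart; a my.cnf
sketch:
  [mysqld]
  # 2 = "interleaved": no table-level AUTO_INCREMENT lock, so concurrent
  # inserts can interleave id allocation (note: not safe with
  # statement-based binary logging)
  innodb_autoinc_lock_mode = 2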
Thanks for the help, guys. You are right, it was this file right here:
/var/log/lastlog... I will exclude it from our backups.
You're the best, thanks again!
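For anyone hitting the same problem, the exclusion plus the Sparse option
suggested below look roughly like this in the FileSet; the resource name and
the include path are placeholders:
  FileSet {
    Name = "FullSet"
    Include {
      Options {
        # store sparse files compactly instead of backing up the holes
        Sparse = yes
      }
      File = /
    }
    Exclude {
      # huge sparse file, safe to skip
      File = /var/log/lastlog
    }
  }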
On 4/5/2012 1:03 PM, Pablo Marques wrote:
> Abdullah:
>
> Make sure you have this in your fileset definition
>
> Sparse = yes
>
> Also you can
In the message dated: Thu, 05 Apr 2012 14:18:22 +0200,
The pithy ruminations from Wolfgang Denk on
<[Bacula-users] parallelizing jobs> were:
=>
=> Hi,
=>
=> I wonder why I see situations where a client is waiting for another job
=> to complete that is only despooling, i.e. one that does not block a
On 04/02/2012 03:33 PM, Phil Stracchino wrote:
> On 04/02/2012 06:06 PM, Stephen Thompson wrote:
>>
>>
>> First off, thanks for the response Phil.
>>
>>
>> On 04/02/2012 01:11 PM, Phil Stracchino wrote:
>>> On 04/02/2012 01:49 PM, Stephen Thompson wrote:
Well, we've made the leap from MyISAM t
On Thu, Apr 5, 2012 at 2:19 PM, Wolfgang Denk wrote:
> Dear John Drescher,
>
> In message
> you
> wrote:
>>
>> Do you have any restrictions on how many jobs go per volume?
>
> No.
Is the same volume used by both clients? I mean you are not using a
different pool per client or something like th
Dear John Drescher,
In message
you wrote:
>
> Do you have any restrictions on how many jobs go per volume?
No.
Best regards,
Wolfgang Denk
--
DENX Software Engineering GmbH, MD: Wolfgang Denk & Detlev Zundel
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+
Abdullah:
Make sure you have this in your fileset definition
Sparse = yes
Also, you can do this in bconsole:
estimate job=client_job_whatever listing
It will print the list of files to be backed up.
Look for big files in the list.
Pablo
- Original Message -
From: "Abdullah Sofizada"
On 4/5/12 8:21 AM, Abdullah Sofizada wrote:
> Hi guys, this is a very weird one. I've been trying to tackle this for the
> past two weeks or so, to no avail...
>
> My director runs Bacula 5.0.2 on RHEL 5.5. My clients are also running
> Bacula 5.0.2 on RHEL 5.5.
>
> Each of the bacula cli
Hi guys, this is a very weird one. I've been trying to tackle this for the
past two weeks or so, to no avail...
My director runs Bacula 5.0.2 on RHEL 5.5. My clients are also running
Bacula 5.0.2 on RHEL 5.5.
Each of the Bacula clients has less than 15 GB of data. Backups of each
cli
Hello,
I am looking at moving towards virtual full backups as a way to cut down
on our backup times. The situation is that our full backups on several
of our file servers are running over their window. My question is: what
could the drawbacks be of moving towards virtual full backups?
I curren
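For context, a Virtual Full is consolidated on the storage daemon from the
last full plus subsequent incrementals/differentials, so the client is not
read at all. A minimal sketch of the pieces that usually have to be in
place, assuming disk volumes and a second pool for the consolidated job
(names are placeholders, and the Accurate requirement should be checked
against the manual for your version):
  Pool {
    Name = Disk-Inc
    Pool Type = Backup
    Storage = File1
    # the consolidated Virtual Full is written into this pool
    Next Pool = Disk-Full
  }
  Job {
    Name = "fileserver-backup"
    JobDefs = "DefaultJob"
    # generally required for Virtual Full consolidation
    Accurate = yes
  }
The consolidation itself is then started like any other job, e.g. from
bconsole:
  run job=fileserver-backup level=VirtualFull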
On Thu, Apr 5, 2012 at 8:18 AM, Wolfgang Denk wrote:
>
> Hi,
>
> I wonder why I see situations where a client is waiting for another job
> to complete that is only despooling, i.e. one that does not block any
> resources on the client:
>
> 53100 Increme SD despooling Data
> 53101 Increme is wai
Hi,
I wonder why I see situations where a client is waiting for another job
to complete that is only despooling, i.e. one that does not block any
resources on the client:
53100 Increme SD despooling Data
53101 Increme is waiting on max Client jobs
This is with bacula 5.0.3 as distributed wit
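The "waiting on max Client jobs" state is controlled by the concurrency
limits, and the client slot is apparently held until despooling finishes, so
the usual workaround is to raise them. A sketch with illustrative values
(the other directives of each resource are left out):
  # bacula-dir.conf
  Client {
    Name = myclient-fd
    # allow several jobs for this client to run at once
    Maximum Concurrent Jobs = 4
  }
  # bacula-fd.conf on the client
  FileDaemon {
    Name = myclient-fd
    Maximum Concurrent Jobs = 4
  }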
Hello Uwe,
Am 05.04.2012 13:08, schrieb Uwe Schuerkamp:
> On Thu, Apr 05, 2012 at 12:47:02PM +0200, Dennis Hoppe wrote:
>> maybe you could send me your config files? Which distribution and
>> version are you using?
>
> I'm running Bacula 5.2.6 compiled from source on CentOS 6.2 (64bit)
> with a M
On Thu, Apr 05, 2012 at 12:47:02PM +0200, Dennis Hoppe wrote:
> maybe you could send me your config files? Which distribution and
> version are you using?
>
Hello Dennis,
I'm running Bacula 5.2.6 compiled from source on CentOS 6.2 (64bit)
with a MySQL backend.
Here's the relevant config for
Hello Uwe,
Am 05.04.2012 12:38, schrieb Uwe Schuerkamp:
>> the "Selection Type" is defined at the following "JobDefs". I read
>> somwhere that i have to use a "Selection Type" instead of
>> "PoolUncopiedJobs", because it does not set a "priorjobid".
>
> sorry I must have overlooked that bit. Than
On Thu, Apr 05, 2012 at 12:30:43PM +0200, Dennis Hoppe wrote:
> Hello Uwe,
> the "Selection Type" is defined at the following "JobDefs". I read
> somwhere that i have to use a "Selection Type" instead of
> "PoolUncopiedJobs", because it does not set a "priorjobid".
>
Hi Dennis,
sorry I must ha
Hello Uwe,
Am 05.04.2012 10:47, schrieb Uwe Schuerkamp:
> I see your definition below is lacking the "SQLQuery" for the
> "Selection Type", might this be part of the problem?
the "Selection Type" is defined at the following "JobDefs". I read
somwhere that i have to use a "Selection Type" instead
On Mon, Apr 02, 2012 at 08:20:17PM +0200, Dennis Hoppe wrote:
> Hello,
>
> is it possible to use chained copy jobs? For example, I would like to
> copy my full backups from local disk to a USB disk and after that to
> NAS storage.
>
Hi Dennis,
I see your definition below is lacking the "SQLQue
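For reference, the general shape of a disk-to-disk copy job is sketched
below; pool and storage names are placeholders, and whether copies of copies
get selected depends on the Selection Type, which is what the rest of this
thread is about:
  Pool {
    Name = LocalDisk
    Pool Type = Backup
    Storage = LocalFile
    # copy jobs reading from this pool write into the USB pool
    Next Pool = UsbDisk
  }
  Job {
    Name = "copy-to-usb"
    Type = Copy
    Pool = LocalDisk
    # copy every job in the pool that has not been copied yet
    Selection Type = PoolUncopiedJobs
    Messages = Standard
    # plus the usual Client/FileSet/Storage directives the Job resource
    # insists on, even though they are not used for the selection itself
  }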