Michel, I upgraded the AVG virus database to version 269.10.17/915 and
it no longer alarms on the trojan horse. Thanks for the
submission.
regards,
Diky
On 7/24/07, Michel Meyers <[EMAIL PROTECTED]> wrote:
Hi friends!!!
I'm here again.
Before trying, I would like to know if it's possible (or makes sense) to use
bls, bextract and similar tools with file storage.
Thanks for all the help...
JC Júnior
Dear All
The major error seems to be
24-Jul 14:37 elizabeth-dir: civeng54.2007-07-24_09.05.01 Error: open mail
pipe /usr/sbin/bsmtp -h localhost -f "(Bacula) [EMAIL PROTECTED]" -s "Bacula:
Backup Fatal Error of civeng54 Full" [EMAIL PROTECTED] failed: ERR=Cannot
allocate memory
Several errors li
Dear Shon
I have a similar situation and solved it by using this strategy.
Firstly, the disk volumes should be treated as "Tape" drives in that only
one volume can be opened at a time, but you can have many concurrent jobs
writing to that volume - my setting is for a maximum of 5 concurrent jobs.
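For what it's worth, a rough sketch of what that kind of setup can look like (the
resource names, address and the limit of 5 below are only illustrative, not taken
from an actual configuration):

# bacula-dir.conf (sketch only)
Storage {
  Name = DiskStorage
  Address = backup-sd.example.com     # assumed SD hostname
  SDPort = 9103
  Password = "sd-password"            # placeholder
  Device = FileStorage
  Media Type = File
  Maximum Concurrent Jobs = 5         # lets several jobs interleave onto the one mounted volume
}
# Maximum Concurrent Jobs must also be raised in the Director (and usually the Job)
# resources, otherwise the overall limit stays at its default of 1.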
On Tue, 2007-07-24 at 16:27 +0200, Olivier wrote:
> Richard Mortimer wrote:
> > I have answered in detail below but the label and other commands are
> > complaining that /tmp/backup does not exist. You do mention that it exists,
> > but is your SD on a different server from the director? /tmp/backup ne
One more thing to consider is the Run Before Job directive. I had the
problem that a downed client meant Bacula would hang for something like 40
minutes before giving up. I solved that by pinging the client three times
with Run Before Job -- if the ping failed the job was rescheduled, if not it ran.
~Kyl
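A minimal sketch of that approach (untested here; the names, ping deadline and
retry counts are assumptions, not the poster's actual values):

Job {
  Name = "client1-backup"             # illustrative names throughout
  Type = Backup
  Level = Incremental
  Client = client1-fd
  FileSet = "Full Set"
  Storage = DiskStorage
  Pool = Default
  Messages = Standard
  Schedule = "DailyCycle"
  Run Before Job = "/bin/ping -c 3 -w 10 client1.example.com"   # non-zero exit aborts the job
  Reschedule On Error = yes           # re-queue the aborted job instead of leaving it failed
  Reschedule Interval = 1 hour
  Reschedule Times = 10
}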
On Tue, 24 Jul 2007, Brian Debelius wrote:
> ohhh, ahhh, is this new?
>
> brian-
>
> Junior Cunha wrote:
>> Steve Poulsen wrote:
>>
>>> I am backing up several machines, but a couple of them are only on
>>> about 10% of the day. Is there any kind of option on the daily
>>> backups to have it ret
Hi,
24.07.2007 20:19, Brian Debelius wrote:
> ohhh, ahhh, is this new?
>
> brian-
>
> Junior Cunha wrote:
>> Steve Poulsen wrote:
>>
>>> I am backing up several machines, but a couple of them are only on
>>> about 10% of the day. Is there any kind of option on the daily
>>> backups to h
This probably has no meaning whatsoever (I'm sure it has no
meaning)... but it appears to me that there are more Windows downloads
from SourceForge for Bacula than all the others combined, or it's darn
close. Just a curious observation.
brian-
--
> On Tue, 24 Jul 2007 12:05:47 +0200, Jordi Moles said:
>
> Hi, I don't know if what I'm trying to do is actually possible; I've
> been googling for days with no answers at all.
>
> I perform a backup every night from many servers.
> One of them has grown so much that it made me realize I wo
I'm trying to understand job concurrency in Bacula and what strategy I
should use for backing up clients.
It's likely that I will have to back up around 50 clients. My strategy is to
write incrementals to disk volumes and fulls to tapes. I assume it's possible
for Bacula to write to multiple disk vo
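Level-specific pool overrides in the Job resource are one common way to split
fulls and incrementals between devices. A hedged sketch (pool, storage and client
names are invented; whether each pool is bound to its storage in the Pool resource
or per Job depends on the Bacula version in use):

Job {
  Name = "client1-backup"
  Type = Backup
  Client = client1-fd
  FileSet = "Full Set"
  Messages = Standard
  Schedule = "WeeklyCycle"
  Pool = DiskPool                     # default pool
  Full Backup Pool = TapePool         # fulls go to tape volumes
  Incremental Backup Pool = DiskPool  # incrementals go to disk volumes
}
Pool {
  Name = DiskPool
  Pool Type = Backup
  Storage = DiskStorage               # File device on the SD
}
Pool {
  Name = TapePool
  Pool Type = Backup
  Storage = TapeStorage               # tape drive or autochanger
}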
Hello,
the last 2 tests:
Tuesday, July 24, 2007, 2:00:43 PM:
FS> Actually, that gives me another idea. While I've never used it myself, you
FS> may be able to get more details by running some jobs with strict mode turned
FS> on on your mysql catalog.
FS> http://dev.mysql.com/doc/refman/5.0/en
I didn't see these included with the CSWbacula package. Is there somewhere
besides the bacula source code to get these?
-Shon
ohhh, ahhh, is this new?
brian-
Junior Cunha wrote:
> Steve Poulsen wrote:
>
>> I am backing up several machines, but a couple of them are only on
>> about 10% of the day. Is there any kind of option on the daily
>> backups to have it retry every hour so that when the machine comes
>> bac
Steve Poulsen wrote:
> I am backing up several machines, but a couple of them are only on
> about 10% of the day. Is there any kind of option on the daily
> backups to have it retry every hour so that when the machine comes
> back up it can get backed up?
Hi Steve,
You can use these options i
I am backing up several machines, but a couple of them are only on about 10%
of the day. Is there any kind of option on the daily backups to have it
retry every hour so that when the machine comes back up it can get backed
up?
Thanks,
Steve
--
create_database.cmd
  add:    SET MYSQLUSER=root
  add:    SET MYSQLPASS=mysqlpassword
  change: "C:\Program Files\MySQL\MySQL Server 5.0\bin\mysql" *-u %MYSQLUSER% --password=%MYSQLPASS%* %* -e "CREATE DATABASE bacula;"
drop_database.cmd
  add:    SET MYSQLUSER=root
  add:    SET MYSQLPASS=mysqlpassword
  c
Many thanks. This was it. The svn database backup script did a chmod
-R at the end which caused everything to be backed up again.
On 7/16/07, Alan Brown <[EMAIL PROTECTED]> wrote:
> On Mon, 16 Jul 2007, Steve Poulsen wrote:
>
> > I have one directory that continues to show up in all of my increm
Just to say that the discrepancy I had between "Files Expected"
and "Files Restored" has been resolved by running dbcheck.
I don't understand why my database had inconsistencies (I never had
any hard reboot, power cut or the like)... should dbcheck be executed at
regular intervals?
On Tue
Hi all,
I've got several job definitions that are similar, but the following is not
working:
Job {
  Name = sede_Vol2Samba_job
  Enabled = no
  Type = Backup
  Level = Incremental
  Client = sede-fd
  FileSet = sede_Vol2Samba_fileset
  Storage = sede-samba-sd
  Messages = Daemon
Hello,
Tuesday, July 24, 2007, 2:00:43 PM:
FS> Also, it's been suggested that you try turning on spooling. Have you done
FS> so?
Good news (or bad, who knows): I enabled spooling (Maximum Job Spool Size
= 500m), it performed the same, and AGAIN in the first job I tested to restore,
~44K files are missing:
On 7/24/07, John Drescher <[EMAIL PROTECTED]> wrote:
> On 7/24/07, Jordi Moles <[EMAIL PROTECTED]> wrote:
> > Hi, I don't know if what I'm trying to do is actually possible; I've
> > been googling for days with no answers at all.
> >
> > I perform a backup every night from many servers.
> > One of
Richard Mortimer wrote:
I have answered in detail below but the label and other commands are
complaining that /tmp/backup does not exist. You do mention that it exists,
but is your SD on a different server from the director? /tmp/backup needs
to exist on the SD server. My other thought is that yo
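In other words, the path given in Archive Device is opened by the storage daemon,
not by the director, so it has to exist on the machine running bacula-sd. A hedged
sketch of the two pieces involved (hostnames and passwords are placeholders):

# bacula-dir.conf on the director host
Storage {
  Name = BackupDisk
  Address = sd-host.example.com       # the SD machine; /tmp/backup must exist there
  SDPort = 9103
  Password = "sd-password"
  Device = FileStorage
  Media Type = File
}

# bacula-sd.conf on sd-host.example.com
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /tmp/backup        # checked and opened on the SD host, not the director
  Random Access = yes
  Automatic Mount = yes
  Label Media = yes
}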
Hello,
Tuesday, July 24, 2007, 2:00:43 PM:
FS> Okay, so it looks like you can reproduce the symptoms just with multiple
FS> concurrent jobs, regardless of the gzip settings.
I am sure the files/dirs being backed up are important! I bet the developers
have tested concurrent jobs enough, but if they didn't ca
Has anyone managed to get automatic mount and unmount working just by configuring
the storage daemon the 'right' way with Bacula 2.0.3 (maybe on Debian
Linux)?
The goal is that changing the media should be sufficient for the normal
backup to work. No manual intervention at all should be needed.
The close
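I have not tried this on 2.0.3 myself, but the directives that usually matter for
removable media live in the Device resource of bacula-sd.conf. A hedged sketch
(device name and mount point are assumptions):

Device {
  Name = RemovableDisk
  Media Type = File
  Archive Device = /mnt/backupdisk    # mount point of the removable medium
  Removable Media = yes               # tell the SD the medium can be swapped
  Automatic Mount = yes               # mount/read the volume when a job needs it
  Always Open = no                    # release the device between jobs so media can change
  Random Access = yes
  Label Media = yes
  # Some setups also use Requires Mount with Mount Command / Unmount Command so the
  # SD mounts the filesystem itself; that is version-dependent and not shown here.
}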
Mair Wolfgang-awm013 wrote:
> Spooling? Does this also apply if my backup goes directly to files?
It would in this case, yes. With spooling, the data goes to the spooling file
first, and is then unspooled in chunks. Without spooling, all of the data
from the multiple jobs goes straight to the v
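For file-based storage the spooling knobs sit in the Job (or JobDefs) and Device
resources; a minimal hedged sketch, with paths and sizes picked only as examples:

# bacula-dir.conf
JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Pool = Default
  Messages = Standard
  Spool Data = yes                    # write to the spool file first, despool in large chunks
}

# bacula-sd.conf
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /export/bacula-dump
  Spool Directory = /var/bacula/spool # assumed path
  Maximum Spool Size = 10g            # total spool space for this device
  Maximum Job Spool Size = 500m       # per-job cap (the value mentioned elsewhere in this thread)
}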
Spooling? Does this also apply if my backup goes directly to files?
Here is my setting:
sd:
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /export/bacula-dump
  LabelMedia = yes;                  # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount =
Doytchin Spiridonov wrote:
> Hello,
>
> done. I found where the problem is after some more tests (and once again
> it is not in our hardware or OS or broken things). It is where I
> initially suggested - the concurrent jobs.
So you can reliably reproduce the problem now? Excellent!
> After the fi
On 7/24/07, Mair Wolfgang-awm013 <[EMAIL PROTECTED]> wrote:
> Hello,
>
> This is exactly what I experienced last week. I submitted this under the
> subject: ' Restore Error of linux-install-fdFul'.
>
> However, I haven't had the time yet to track this down as far as Doytchin
> did. Great work!
>
Hello,
This is exactly what I experienced last week. I submitted this under the
subject: ' Restore Error of linux-install-fdFul'.
However, I haven't had the time yet to track this down as far as Doytchin did.
Great work!
This morning (before reading through all this) I also found that if I do
On 7/24/07, Jordi Moles <[EMAIL PROTECTED]> wrote:
> Hi, I don't know if what I'm trying to do is actually possible; I've
> been googling for days with no answers at all.
>
> I perform a backup every night from many servers.
> One of them has grown so much that it made me realize I would need a new
On 7/24/07, Jordi Moles <[EMAIL PROTECTED]> wrote:
> Hi, I don't know if what I'm trying to do is actually possible; I've
> been googling for days with no answers at all.
>
> I perform a backup every night from many servers.
> One of them has grown so much that it made me realize I would need a new
Hi, I don't know if what I'm trying to do is actually possible; I've
been googling for days with no answers at all.
I perform a backup every night from many servers.
One of them has grown so much that it made me realize I would need a new
bacula-dir configuration. It turns out that this server's