On 8/30/2010 1:00 PM, Josh Fisher wrote:
> On 8/30/2010 10:50 AM, Dan Langille wrote:
>> This just came up on IRC. As an aid to newcomers, a ready-made VM may
>> be useful. A potential user could download and run the VM, without
>> having to go through the install process.
>>
>> If you use VMs (eg
Hello,
Recently we went through the following scenario:
An external USB drive to which JobA was supposed to write became unusable. JobA
requested a mount. Jobs B through Z that were scheduled to run after JobA did
not run. As a result we lost two days of backups.
How can we make sure that jobs w
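One common mitigation for the scenario above (not necessarily the right one for every site) is to bound how long a job may sit waiting for media, so jobs queued behind it are not starved. A sketch using the Job-level Max Wait Time directive; the resource name and value here are placeholders, not the poster's config:

```conf
JobDefs {
  Name = "DefaultJob"        # placeholder name
  # Cancel a job that has waited this long for an operator mount,
  # instead of letting it block everything scheduled after it.
  Max Wait Time = 30 minutes
}
```

Raising Maximum Concurrent Jobs on the Director and Storage resources can also let the other jobs proceed while one waits, at the cost of interleaving.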
2010/8/30 Sergio Belkin :
> 2010/8/30 Sebastian Bachmann :
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA1
>>
>> mh you should add a yes to the end, not comment=yes
>>
>> hth
>>
It's getting better
echo "run job=Backup_centeno_Martes-LTO3 client=spectrum-fd
fileset=dumpcenteno storage=LTO3 w
2010/8/30 Sebastian Bachmann :
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> mh you should add a yes to the end, not comment=yes
>
> hth
>
> On 30.08.10 21:32, Sergio Belkin wrote:
>> 2010/8/30 :
>>> when=
>>
>> Thanks, I did the following:
>>
>> [r...@pepino ~]# echo "run job=Backup_cente
Hello,
As you can see from the email below, we have been planning to hold a Bacula
Developers' conference on the 27th and 28th of September. Despite some early
enthusiasm, there are currently not enough people signed up to hold the
conference.
However disappointing that is, I am hoping that ther
Bacula 5.0.3 Ubuntu 10.04 64bit MySQL
Since upgrading to 5.0.3 from 3.0.x, I get the following duplicate job
error when running from a schedule. If I run the copy job manually, it
runs without error. The following was typed out by hand, so there may be
some typos.
Brian-
25-Aug 09:02 marlin2
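The 5.0.x duplicate-job control directives are the usual knobs for this class of error. A sketch only: the directive names are real, but the job name and values are illustrative, not necessarily the fix for this configuration:

```conf
Job {
  Name = "NightlyCopy"              # placeholder job name
  Type = Copy
  # 5.0.x duplicate-job controls:
  Allow Duplicate Jobs = no
  Cancel Queued Duplicates = yes    # drop the newer, still-queued duplicate
  Cancel Running Duplicates = no    # never kill the one already running
}
```

If the scheduled copy job collides with itself (or with the job it copies), loosening Allow Duplicate Jobs or enabling Cancel Queued Duplicates is typically where to start.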
I assume you mean SD and Dir in a VM. I have been using Bacula in a
qemu-kvm guest for some time now. Dir and database run beautifully in a
VM. SD is a bit trickier. SD works well with disk storage. All works as
expected even with USB disk, but only at USB 1 speeds. That is only due
to qemu-k
On 08/30/2010 06:35 PM, Tobias Brink wrote:
> Steve Costaras writes:
>
>> Could be due to a transient error (transmission or wild/torn read at
>> time of calculation). I see this a lot with integrity checking of
>> files here (50TiB of storage).
>>
>> Only way to get around this now is to do a
'Paul Mather' wrote:
>On Aug 30, 2010, at 6:41 AM, Henrik Johansen wrote:
>
>> Like most ZFS related stuff it all sounds (and looks) extremely easy but
>> in reality it is not quite so simple.
>
>Yes, but does ZFS make things easier or harder?
It hides a lot of the complexity involved. In ZFS it
Steve Costaras writes:
> Could be due to a transient error (transmission or wild/torn read at
> time of calculation). I see this a lot with integrity checking of
> files here (50TiB of storage).
>
> Only way to get around this now is to do a known-good sha1/md5 hash of
> data (2-3 reads of the
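The "known-good hash from repeated reads" idea quoted above can be sketched in shell: hash the file several times and trust the digest only if every read agrees. `stable_sha1` is a hypothetical helper, not a Bacula tool:

```shell
# Hash a file three times; print the digest only if all reads agree,
# guarding against a transient/torn read during any single pass.
stable_sha1() {
    f=$1
    d1=$(sha1sum "$f" | awk '{print $1}')
    d2=$(sha1sum "$f" | awk '{print $1}')
    d3=$(sha1sum "$f" | awk '{print $1}')
    if [ "$d1" = "$d2" ] && [ "$d2" = "$d3" ]; then
        echo "$d1"
    else
        echo "inconsistent reads: $f" >&2
        return 1
    fi
}
```

At 50 TiB the extra reads are expensive, which is presumably why the poster calls this the "only way" rather than a good one.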
Hello,
At the moment I have the following pools:
- FileServer-Pool-full
- FileServer-Pool-Incr
- FileServer-Pool-Diff
- MailServer-Pool-fulll
- MailServer-Pool-Incr
- MailServer-Pool-Diff
I have 7 tapes in my library and all of them were put in the Scratch
pool. I have calculated that
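For reference, volumes sitting in the Scratch pool are pulled automatically when any pool needs media, and the Recycle Pool directive sends purged volumes back. A sketch with assumed retention values (the pool name is one of those listed above):

```conf
Pool {
  Name = FileServer-Pool-Full
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 30 days   # illustrative value only
  # When a volume in this pool is purged, return it to Scratch
  # so any of the six pools can claim it again.
  Recycle Pool = Scratch
}
```

With only 7 tapes shared across six pools, setting Recycle Pool = Scratch in each pool keeps the media circulating instead of stranding tapes in whichever pool used them first.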
On Aug 30, 2010, at 6:41 AM, Henrik Johansen wrote:
> Like most ZFS related stuff it all sounds (and looks) extremely easy but
> in reality it is not quite so simple.
Yes, but does ZFS make things easier or harder?
Silent data corruption won't go away just because your pool is large. :-)
(But,
This just came up on IRC. As an aid to newcomers, a ready-made VM may
be useful. A potential user could download and run the VM, without
having to go through the install process.
If you use VMs (eg VMWare, VirtualBox), then I think this would be a
great way for you to contribute to the proj
Thank you, the runbefore script solved the problem!!
Truly appreciated.
|This was sent by informat...@dhul.es via Backup Central.
Hello,
*help run
Command Description
===
Hello!
I'd like to kindly ask if someone could post a reasonable Windows 7
FileSet!
Sincerely,
Ralph
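Not authoritative, but a common starting point looks roughly like the sketch below; the paths and options are typical choices for Windows 7, not a canonical answer:

```conf
FileSet {
  Name = "Win7-FileSet"
  Enable VSS = yes              # snapshot open files via Volume Shadow Copy
  Include {
    Options {
      signature = MD5
      compression = GZIP
    }
    File = "C:/Users"
    File = "C:/ProgramData"
  }
  Exclude {
    File = "C:/pagefile.sys"
    File = "C:/hiberfil.sys"
    File = "C:/Windows/Temp"
  }
}
```

Excluding the pagefile and hibernation file avoids backing up gigabytes of contents that are useless on restore.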
Hi,
Let's say you had a problem with the last scheduled job and you need to
run a backup tonight. I don't want to create a new job in bacula-dir.conf
just for a single use. I'd love to add an entry to the crontab file that
does it, but I don't know how (if it is even possible). For example I can
run:
echo "s
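A one-off run from cron does not need any change to bacula-dir.conf: pipe a run command into bconsole. The job and client names below are taken from the earlier thread as placeholders, and the trailing "yes" answers bconsole's confirmation prompt:

```shell
# Build the one-shot bconsole command (names are placeholders).
JOB="Backup_centeno_Martes-LTO3"
CLIENT="spectrum-fd"
CMD="run job=${JOB} client=${CLIENT} yes"
echo "$CMD"
# In a crontab you would submit it, e.g. (bconsole path assumed):
#   0 1 31 8 *  echo "run job=${JOB} client=${CLIENT} yes" | /usr/sbin/bconsole
```

The run command also accepts when="YYYY-MM-DD HH:MM:SS" if the job should be queued for a later time rather than started immediately.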
'Steve Costaras' wrote:
> A little mis-quoted there:
>
>On 2010-08-30 02:59, Henrik Johansen wrote:
>>> On Aug 28, 2010, at 7:12 AM, Steve Costaras wrote:
>>>
>>> Could be due to a transient error (transmission or wild/torn read at
>>> time of calculation). I see this a lot with integrity checkin
Hi folks,
looks like extending the value range did not fix the error
entirely. When running an incremental backup, I receive the following
error message:
Fatal error: Can't fill File table Query failed: INSERT
INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5)SELECT
batch.FileIndex,
batch.Jo
On Mon, Aug 30, 2010 at 08:45:50AM +0200, Uwe Schuerkamp wrote:
> Hi folks,
>
> over the weekend, our mysql bacula db seems to have hit the maximum
> possible auto_increment counter for an unsigned int(10), as I'm getting
> duplicate key errors on inserts into the File table.
>
> Is there a way
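For completeness, the schema-side fix usually discussed for this is widening the counter column. A sketch only: verify the column names against your Bacula version's make_mysql_tables script, and dump the catalog before altering anything:

```sql
-- Widen File's primary key from unsigned INT (wraps at ~4.29 billion
-- rows) to unsigned BIGINT so the auto_increment counter keeps going.
ALTER TABLE File
  MODIFY COLUMN FileId BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;
```

Note the follow-up above reports that extending the range alone did not fix the batch-insert failure entirely, so treat this as necessary rather than sufficient.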
A little mis-quoted there:
On 2010-08-30 02:59, Henrik Johansen wrote:
>> On Aug 28, 2010, at 7:12 AM, Steve Costaras wrote:
>>
>> Could be due to a transient error (transmission or wild/torn read at
>> time of calculation). I see this a lot with integrity checking of
>> files here (50TiB of st
'Paul Mather' wrote:
>On Aug 28, 2010, at 7:12 AM, Steve Costaras wrote:
>
>Could be due to a transient error (transmission or wild/torn read at
>time of calculation). I see this a lot with integrity checking of
>files here (50TiB of storage).
>
>Only way to get around this now is to do a known-go