I'm having some trouble getting vchanger to work, partly because the
documentation that comes with it seems to be out of date. For instance,
the keywords described in the howto HTML file that comes with the
source are invalid. The example config file has the right
keywords, but I'm havi
Thanks for the reply Uwe, when you mention "upgrade bacula tables"
scripts, are these included with the compiled Bacula software itself?
Thanks
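For reference, the catalog upgrade scripts do ship with Bacula itself: they are
built in the source tree and installed alongside the other director scripts. A
rough sketch of finding and running them; the paths and the catalog name
"bacula" below are assumptions and vary by distribution and install prefix:

  # Back up the catalog before touching the schema
  mysqldump bacula > bacula-catalog-backup.sql
  # Typical packaged locations (adjust to your install):
  ls /etc/bacula/scripts/update_*_tables
  ls /usr/share/bacula-director/update_*_tables
  # Run the script matching your catalog backend, e.g. for MySQL:
  /etc/bacula/scripts/update_mysql_tables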
On Tue, 2013-09-17 at 13:13 +0300, Guy wrote:
> Yes I do this with vchanger...
I've run into a wall trying to get vchanger to compile on Raspbian (a
limited version of Debian for the Raspberry Pi).
When I run configure, it notes that I do not have libuuid. I believe
this is correct; there is a s
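In case it helps anyone else hitting the same configure error: on Debian and
Raspbian the libuuid headers configure looks for normally come from the
uuid-dev package. A minimal sketch, assuming default configure options for
vchanger:

  sudo apt-get install uuid-dev   # libuuid development headers
  ./configure
  make
  sudo make install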
Dear Team,
While labeling a new volume, my system is asking me to enter the
autochanger drive.
See Below command output:
*label barcodes slot=1 pool=pool1
Automatically selected Catalog: catalog1
Using Catalog "catalog1"
The defined Storage resources are:
1: File
2: Autochanger
Select
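If you want to avoid the interactive storage and drive prompts, they can be
given on the command line. A sketch using the resource names from the listing
above; drive=0 is an assumption (the first drive in the changer):

  *label barcodes storage=Autochanger pool=pool1 slot=1 drive=0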
I'm not an expert, but my guess would be that no, this won't affect
which drive will be used for backup, except to the extent that the tape
in question is left in that drive until the backup runs. My
understanding is that a mounted (and suitable) tape, in whichever
drive, is preferred when running a
On 9/20/2013 2:17 PM, Greg Woods wrote:
On Tue, 2013-09-17 at 13:13 +0300, Guy wrote:
Yes I do this with vchanger...
I've run into a wall trying to get vchanger to compile on Raspbian (a
limited version of Debian for the Raspberry Pi).
When I run configure, it notes that I do not have libuuid
On Fri, 2013-09-20 at 12:17 -0600, Greg Woods wrote:
> On Tue, 2013-09-17 at 13:13 +0300, Guy wrote:
> > Yes I do this with vchanger...
>
> I've run into a wall trying to get vchanger to compile on Raspbian (a
> limited version of Debian for the Raspberry Pi).
I did solve the compile issue. With
Hi Kevin,
Just trying to understand... The error below:
Restore-public-Disk.2013-09-20_12.43.48_32 is waiting
for Client to connect to Storage daemon
...
20-Sep 12:19 public-fd JobId 19084: Fatal error: Authorization key rejected
by Storage daemon.
Please see http://www.bacula.org/rel-manual/faq.
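The usual first check for this error is that the password in the Director's
Storage resource matches the one in the storage daemon's Director resource,
and that the Address there is reachable from the restoring client. A sketch
with purely illustrative names, addresses and passwords:

  # bacula-dir.conf: Storage resource used by the restore job
  Storage {
    Name = Autochanger
    Address = sd.example.org      # the *client* must be able to reach this
    SD Port = 9103
    Password = "sd-secret"        # must match bacula-sd.conf below
    Device = Drive-1
    Media Type = LTO-4
  }

  # bacula-sd.conf: Director resource on the storage daemon
  Director {
    Name = backup1-dir
    Password = "sd-secret"
  }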
Greetings -
I'm running Bacula 5.2.5 on an Ubuntu 12.04.3 server, with about 61TB of
disk storage and an attached Dell PVTL2000 tape library. Backups are
working great, but restores are presenting a problem.
Here's my dilemma:
When attempting to restore a file, I get this status for about 10 m
On 20/09/13 15:03, Andreas Koch wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hi Alan,
>
> can you let me know what hardware (SAS Controller) and OS (kernel version)
> you use?
Everything is FC connected using QLA 2430-series controllers.
When Linux first connects to the drives it
On 09/20/2013 04:21 AM, Thomas wrote:
> Hi Andreas,
>
> we are using also LTO-5 with 2M Blocksize and without any Problems.
>
> Drives and Kernel are:
>
> Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1+deb7u1 x86_64 GNU/Linux
> Medium Changer OVERLAND NEO Series
> IBM ULTRIUM-TD5
> IBM ULT
I am curious to find out: if I change the block size, will I be able to
go back and restore data from tapes that have the old block size?
Thanks,
URao
-Original Message-
From: Alan Brown [mailto:a...@mssl.ucl.ac.uk]
Sent: Friday, September 20, 2013 9:44 AM
To: Andreas Koch
Cc: bacula-users
It seems that Bacula's limit is "<= 400".
From src/stored/block.c:
> if (block_len > 400) {
>    Dmsg3(20, "Dump block %s 0x%x blocksize too big %u\n", msg, b, block_len);
>    return;
> }
Another limit I found is this one, from the output of "dmesg | grep st":
> [3.6
On 20/09/13 13:22, Andreas Koch wrote:
>
> Many thanks for the data point! When we use Bacula (not just btape) with
> larger block sizes (512 KB), our backups abort when bacula fails to read the
> tape's header block.
>
Did you attempt to mix blocksizes on the same physical tape?
That will not wo
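For context, the block size is a per-Device setting in bacula-sd.conf, and a
tape written with one Maximum Block Size generally has to be read back with
the same setting. A minimal sketch with illustrative names and a 2 MB block
size:

  Device {
    Name = LTO5-Drive-0
    Media Type = LTO-5
    Archive Device = /dev/nst0
    AutoChanger = yes
    Maximum Block Size = 2097152   # 2 MB, applies to volumes written from now on
    Minimum Block Size = 0         # keep variable-length blocks
  }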
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 09/20/2013 03:43 PM, Alan Brown wrote:
> On 20/09/13 13:22, Andreas Koch wrote:
>>
>> Many thanks for the data point! When we use Bacula (not just btape)
>> with larger block sizes (512 KB), our backups abort when bacula fails
>> to read the tape's
On 2013-09-20, at 6:08 AM, Uwe Schuerkamp wrote:
> Nice work, thanks for sharing!
>
> Maybe a word or two on preventing this situation in the first place
> might be helpful, like restricting volume sizes & number?
>
> Also, you could consider documenting deleting a volume from a disk
> pool
Original Message
*Subject:* Re: [Bacula-users] Is anyone using >128K blocks with LTO-4 or
LTO-5
drives?
*Date:* Fri, 20 Sep 2013 14:22:55 +0200
*From:* Andreas Koch
*To:* Thomas
*CC:* bacula-users@lists.sourceforge.net
Many thanks for the data point! When we use Bacula (not just bt
Hello,
2013/9/20
> Hi,
>
> I am using uncompressed backup.
>
So check your backup job logs, or post them here.
And please do not top-post.
best regards
--
Radosław Korzeniewski
rados...@korzeniewski.net
-
Hello,
2013/9/20
> Thanks for the reply!
>
> I came to know that there was no more space for the MySQL database, so
> the Catalog was not able to grow.
>
> I am facing one more issue.
>
> I have three volumes in my data pool which I have
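Two quick checks that would have shown the catalog running out of room; this
sketch assumes MySQL keeps its data under /var/lib/mysql and that the catalog
database is named "bacula":

  df -h /var/lib/mysql
  mysql -e "SELECT table_schema,
                   ROUND(SUM(data_length + index_length)/1024/1024) AS size_mb
            FROM information_schema.tables
            WHERE table_schema = 'bacula';"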
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 09/20/2013 10:21 AM, Thomas wrote:
> Hi Andreas,
>
> we are using also LTO-5 with 2M Blocksize and without any Problems.
>
> Drives and Kernel are:
>
> Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1+deb7u1 x86_64 GNU/Linux
> Medium Changer OVERLAN
Hi,
I am using uncompressed backup.
From: Radosław Korzeniewski [mailto:rados...@korzeniewski.net]
Sent: Friday, September 20, 2013 5:33 PM
To: Deepak Pal (WI01 - GIS - RCT); bacula-users
Subject: Re: [Bacula-users] Why Marking Volume "XX" in Error in Catalog.
Hello,
2013/9/20 mailto:deepak
Nice work, thanks for sharing!
Maybe a word or two on preventing this situation in the first place
might be helpful, like restricting volume sizes & number?
Also, you could consider documenting how to delete a volume from a disk
pool (update pool from resource and so on) once the reader has cleaned
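For the size/number restriction, a sketch of the Pool directives usually used
for this (the figures are placeholders, not a recommendation):

  Pool {
    Name = pool1
    Pool Type = Backup
    Maximum Volume Bytes = 50G   # cap the size of each volume
    Maximum Volumes = 20         # cap how many volumes the pool may create
    Volume Retention = 30 days
    AutoPrune = yes
    Recycle = yes
  }

Removing a volume by hand then usually means deleting its file on disk,
running "delete volume=Vol0001" in bconsole for the catalog side (volume name
made up here), and refreshing the pool with "update" / "Pool from resource".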
On Thu, Sep 19, 2013 at 04:30:58PM -0700, Jared Kelley wrote:
> Bconsole tells me I'm running this version of bacula.
> *version
> backup1-dir Version: 3.0.1 (30 April 2009) i686-pc-linux-gnu debian 5.0.1
>
> I inherited this setup and need to upgrade on a new server because the raid
> controller
On Thu, Sep 19, 2013 at 11:07:47AM -0500, Dimitri Maziuk wrote:
>
> Mysql is known for "features" like inserting 0 in a NOT NULL column
> instead of throwing errors, case-sensitive identifiers, and so on and so
> forth, with Oracle lawyers looming in the background. I wouldn't pick it
> for anythi
Hi Andreas,
We are also using LTO-5 with a 2M block size, without any problems.
Drives and kernel are:
Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1+deb7u1 x86_64 GNU/Linux
Medium Changer OVERLAND NEO Series
IBM ULTRIUM-TD5
IBM ULTRIUM-TD5
The btape tests fail like in your example, b
Hello,
2013/9/19
> Dear Team,
>
> While running an Incremental Job on a volume, I received the errors below.
>
> 19-Sep 22:06 backup-sd JobId 129: 3305 Autochanger "load slot 5, drive 0",
> status is OK.
>
> 19-Sep 22:06 backup-sd JobId 129: Volume "A00044L4" previousl
Quoting Mauro:
> On 19 September 2013 21:53, wrote:
>
>>
>> Quoting Konstantin Khomoutov:
>>
>>
>> Our biggest single job has about 1TB data with some 3.3 million files.
>> This leads to around 1GB of spooled attributes, which will be absorbed in
>> less than 2 minutes. Database is Postgres wit
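For anyone wanting the same behaviour, attribute spooling is switched on per
Job; a minimal sketch where the job, client, fileset and other resource names
are placeholders:

  Job {
    Name = big-fileserver
    Type = Backup
    Client = fileserver-fd
    FileSet = fileserver-all
    Schedule = WeeklyCycle
    Storage = Autochanger
    Pool = pool1
    Messages = Standard
    Spool Attributes = yes   # spool file attributes to disk and insert them
                             # into the catalog in one batch at end of job
  }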