Bill Moran wrote:
> Alan Brown <[EMAIL PROTECTED]> wrote:
>> Our current main fire/data safe is a Phoenix Data Commander 4623, which is
>> capable of taking 720 LTO tapes in current configuration (39 per drawer,
>> cased, increasing to 45 uncased)
>>
>> See http://www.phoenixsafeusa.com/
>> or ht
spool
Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
Maximum Network Buffer Size = 262144
}
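For context, a fuller sketch of the kind of SD Device resource this fragment appears to come from (resource name, device path, and spool path are assumptions, not the poster's actual config):

```conf
# Hedged reconstruction of a bacula-sd.conf Device resource. "Alert Command"
# runs tapeinfo to surface TapeAlert flags in job output, and
# "Maximum Network Buffer Size" is the directive discussed later in the thread.
Device {
  Name = "LTO2-Drive"                    # assumed name
  Archive Device = /dev/nst0             # assumed path
  Media Type = LTO-2
  Spool Directory = /var/spool/bacula    # assumed path
  Alert Command = "sh -c 'tapeinfo -f %c | grep TapeAlert | cat'"
  Maximum Network Buffer Size = 262144
}
```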
--
-se
Steve Ellis
---
SF.Net email is sponsored by:
Tame your development challenges with Apache's Geroni
for me for now, especially once I
get the LTO2 drive online, making nearly all of my backups a 1 tape
affair.
Thanks!
--
-se
Steve Ellis
---
ng to adapt
to 1.38 from 1.36, as well as switching to LTO2 from DDS4, so it could
well be pilot error for me, but at least I found a workaround, and that
workaround was effective 3 times last night (my home media server is way
too big...3 LTO2 tapes!).
H
This should probably work its way into the manual, but a warning for
anyone who tries to move to significantly larger Maximum Network Buffer
Size numbers:
At least in bacula-1.38, Maximum Network Buffer Size _must_ be less than
51, or restores will crash the storage daemon. I was playing with
administrator, then you
should be able to grant full control to another user (and have him claim
ownership of the files, if necessary).
I'm not a Windows guru (and don't want to be), so I can't be held
accountable for bad advice, but something close to this worked for
ay (in /etc/my.cnf).
Perhaps someone else who understands mysql better than I do can figure out why
the connection didn't use to time out (at least with mysql 4)
--
-se
Steve Ellis
---
On 8/5/2009 9:34 AM, Shawn wrote:
Yes, the problem is the "Hello command" which was introduced in 3.x
On a 2.x director, it will simply state "Hello command rejected" as
the failure in connecting to the FD from the director, I've tested
this before and got the same results regardless of the pla
On 9/30/2009 10:04 AM, Joseph L. Casale wrote:
> Hmm,
> I had the following:
>RunScript {
>RunsWhen = After
>RunsOnFailure = Yes
>FailJobOnError = Yes
>Command = "scp -i /path/to/key -o StrictHostkeyChecking=no
> /var/lib/bacula/*.bsr u...@host:/path/Bacula/"
>
Arno Lehmann wrote:
> Hi,
>
> 19.01.2008 15:52, Dan Langille wrote:
>> Jesper Krogh wrote:
>>> Hi.
>>>
>>> I'd like to configure this simple policy:
>>>
>>> Bacula should SpoolData = No if it is doing a Full backup (even if it
>>> has been automatically upgraded from Incremental), otherwise it sh
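One hedged way to express part of such a policy: Run directives in a Schedule accept a SpoolData override, so scheduled Fulls and Incrementals can differ. Names and times below are examples only, and note this would not catch a job auto-upgraded from Incremental to Full:

```conf
# Sketch of per-level spooling via Schedule Run overrides (illustrative names).
Schedule {
  Name = "WeeklyCycle"
  Run = Level=Full SpoolData=no 1st sun at 23:05
  Run = Level=Incremental SpoolData=yes mon-sat at 23:05
}
```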
You don't mention the technology behind your tape drive, database backend,
CPU, RAM, or what your disk subsystem looks like--all of which would be
useful to have a reasonable chance to analyze even vaguely properly, but
I'll wade in nonetheless.
You are almost certainly shoeshining the heck out of
Due to disk layout on my system, I have the DB dump stored elsewhere on my
server, and I changed the catalog backup to not delete the DB dump.
Assuming some kind of less catastrophic crash, my hope would be to restore
the DB from the on-disk copy. I maintain the DB dump, bootstrap files and
'impo
the make_catalog_backup script would need to be changed to have a different
working directory (at least with my version of bacula). I made that change
myself some time back (I also like to keep the bacula.sql file around--so I
didn't change delete_catalog_backup). Alas, since I install from fedor
My bacula installation runs night-time backups, from a Linux-based server, of
several Windows machines (win7 and xp) that are typically suspended when the
backups start. I made sure to enable Wake-on-LAN on all of the
windows clients, and arranged a script to wake them, but Windows often
believe
I'm not an expert, but my guess would be that no, this won't affect which
drive will be used for backup--except to the extent that the tape in
question is left in that drive until the backup runs--my understanding is
that a mounted (& suitable) tape, in whichever drive, is preferred when
running a
There may be a better solution, but I've been using a script of my own
devising to wake and keep (at least Windows) machines awake during a
backup. I didn't know if there was a client-side API to call to keep the
machine awake, so I implemented mine as repeated WoL packets (say, once
every 5 minut
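The repeated-WoL approach described above can be sketched roughly as follows. This is not the author's actual script; the MAC address, broadcast address, port, and interval are placeholders:

```python
# Rough sketch of a keep-awake loop built on Wake-on-LAN magic packets.
import socket
import time

def magic_packet(mac: str) -> bytes:
    """Build a WoL magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
    raw = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(raw) != 6:
        raise ValueError("MAC must be 6 bytes")
    return b"\xff" * 6 + raw * 16

def keep_awake(mac: str, broadcast: str = "255.255.255.255",
               port: int = 9, interval: float = 300.0) -> None:
    """Send a magic packet every `interval` seconds (e.g. every 5 minutes)."""
    pkt = magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        while True:
            s.sendto(pkt, (broadcast, port))
            time.sleep(interval)
```

In practice you would run this only for the duration of the backup window, then let the clients suspend again.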
I have what sounds like a less-powerful system than yours, and I
see significantly faster performance from Bacula 3.0.2 (and before that
with 2.4 and earlier). My system uses a 3ware 9500 connected merely via
32-bit PCI, and I have a single separate spool drive connected via the
motherbo
On 12/10/2009 9:33 AM, Hayden Katzenellenbogen wrote:
> Steve,
>
> Here is a quick snap of my top during a full backup.
>
> top - 09:32:35 up 1 day, 18:38, 1 user, load average: 11.41, 11.60,
> 10.75
> Tasks: 161 total, 1 running, 160 sleeping, 0 stopped, 0 zombie
> Cpu0 : 0.3%us, 0.3%sy
On 12/14/2009 7:22 AM, Craig White wrote:
> A relatively new bacula 3.0.3 installation on CentOS 5
>
> I get an error every day from logwatch...
>
...
> Cannot find shared script applybaculadate
>
...
> *ApplyBaculaDate =
>
> What is it that I am supposed to do?
>
> Craig
>
I had the s
On 1/6/2010 3:55 PM, Terry L. Inzauro wrote:
> On 01/06/2010 05:40 PM, brown wrap wrote:
>
>> I tried compiling it, and received errors which I posted, but didn't
>> really get an answer to. I then started to look for RPMs. I found the
>> client rpm, but not the server rpm unless I don't know
I don't know if this is specific to mysql or not. My system: Fedora 12
x86-64, Bacula 5.0.0 (installed from fedora's rawhide), mysql 5.1.42.
I've been happily running bacula 3.0.3 on this particular machine and
config for several months without issue (and earlier bacula releases,
but without
On 2/19/2010 5:55 PM, Frank Sweetser wrote:
The best way to get more data about what's going on is to use the 'explain'
mysql command. First, get the complete SQL query that's taking too long to
run by using the 'show full processlist' command - that way the results won't
get truncated.
Then,
On 2/25/2010 4:28 PM, Erik P. Olsen wrote:
> I've changed the server IP-address in the storage resource to a host name and
> the windows client didn't know how to resolve that. Changing it back to
> IP-address solved the problem.
>
> I wonder if Windows Vista uses a host name resolution file which
On 8/26/2010 5:23 PM, Ben Beuchler wrote:
> I just hooked up my shiny new Powervault 124T with dual magazines to
> an Ubuntu 10.04 server via SAS. For some reason, mtx can only see the
> first (left) magazine. The output of mtx is below.
>
> The front panel interface sees all 16 slots just fine
On 8/31/2010 5:44 AM, Marco Lertora wrote:
>Hi!
>
> I've the same problem! anyone found a solution?
>
> I have 3 concurrent jobs, which backup from different fd to the same
> device on sd.
> All jobs use the same pool and the pool use "Maximum Volume Bytes" as
> volume splitting policy, as su
On 9/1/2010 7:09 AM, Brian Debelius wrote:
>Is Maximum Volume Bytes set in the catalog for these tapes?
>
> On 9/1/2010 9:15 AM, Rodrigo Ferraz wrote:
>> Certainly. The schedule comprises 6 different tapes, between monthly and
>> weekly pools, and the problem is exactly the same with all of
I believe the error is in the file that you had previously commented out
(as I suspect you also believe). Below I've snipped what I think is the
problematic line in the pc-agenda_full.conf file:
On 12/27/2010 12:00 PM, der_Angler wrote:
> pc-agenda_full.conf
>
...
> FileSet {
> Name =
On 1/10/2011 7:29 AM, Guy wrote:
> Indeed it was and that for me is the right thing. It's all in subversion
> which is itself backed up.
>
> ---Guy
> (via iPhone)
>
> On 10 Jan 2011, at 15:18, Dan Langille wrote:
>
However, if you have people planning on making commits to subversion,
you've a
On 1/20/2011 7:18 AM, Peter Zenge wrote:
>>
>>> Second, in the Device Status section at the bottom, the pool of LF-F-
>> 0239 is
>>> listed as "*unknown*"; similarly, under "Jobs waiting to reserve a
>> drive",
>>> each job wants the correct pool, but the current pool is listed as
>> "".
>>
> Admit
I've got a Dell 124T w/ an LTO3 drive (bought used for <$800 on Ebay
last year), and so far I've had no trouble (Bacula 5.0.3 w/ Fedora 14).
I may not be a very heavy user, however, as I'm running it on a home
network (clients are: Fedora 14, WinXP & Win7-64)--sending ~1.5TB to
tapes in a month
On 2/28/2011 9:37 AM, Jeremiah D. Jester wrote:
>
> Steve,
> I’m using a 124t w/ LTO4 tapes. I would appreciate it if I could see your conf
> files for comparison.
> Gracias,
> JJ
>
> Jeremiah Jester
> Informatics Specialist
> Microbiology – Katze Lab
> 206-732-6185
>
>
Tapeinfo output for the
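The reply is truncated before the promised output, but for comparison purposes a 124T autochanger is typically described to the storage daemon along these lines. This is a hedged sketch only; the resource names, device paths, and mtx-changer location are assumptions, not taken from the thread:

```conf
# Illustrative bacula-sd.conf autochanger setup for a PowerVault 124T-style unit.
Autochanger {
  Name = "PV124T"                 # assumed name
  Device = "LTO3-Drive"
  Changer Device = /dev/sg3       # assumed path; check lsscsi/sg_map
  Changer Command = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"
}
Device {
  Name = "LTO3-Drive"
  Media Type = LTO-3
  Archive Device = /dev/nst0      # assumed path
  Autochanger = yes
  AutomaticMount = yes
  RemovableMedia = yes
}
```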
On 3/3/2011 6:52 AM, Fabio Napoleoni - ZENIT wrote:
>
> JobId 7: Spooling data ...
> JobId 7: Job write elapsed time = 00:13:07, Transfer rate = 1.295 M
> Bytes/second
> JobId 7: Committing spooled data to Volume "FullVolume-0004". Despooling
> 1,021,072,888 bytes ...
> JobId 7: Despooling elapse
On 3/12/2011 2:20 AM, Raczka wrote:
> Hello everyone!
>
> Bacula (currently 5.0.3) has been running in my environment under FreeBSD for
> about a year without problems.
> Two days ago the daemon started sending the message below (every 5 minutes):
>
> bckserver1: ERROR in authenticate.c:304 UA Hello from
> cl
On 3/17/2011 7:30 AM, Mike Hendrie wrote:
*I cannot telnet from the windows machine to any 9102 port, is that
the problem?*
Yes. That is definitely a problem (may not be the only one). I
haven't been following this thread closely, but it could easily be
either the windows firewall or lin
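The telnet check above can also be scripted; a minimal sketch of an equivalent TCP reachability test (the host value is a placeholder, 9102 is the stock FD port):

```python
# Quick check: can we open a TCP connection to a Bacula file daemon port?
import socket

def port_open(host: str, port: int = 9102, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False from the director's side, a firewall (Windows or otherwise) between director and FD is the usual suspect.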
On 4/13/2011 4:48 AM, Steffen Fritz wrote:
> Hey folks,
>
>
> something strange is happening within my bacula configuration. Every help is
> much appreciated!
>
> 1. This is, what bconsole--> status tells me about my tape drive. No pool?
>
> Device "Drive-1" (/dev/nst0) is mounted with:
> Vo
On 4/20/2011 3:26 PM, John Drescher wrote:
>> This rule is not the real truth.
>> I'm backing up a 2.4.4 (Debian Lenny) client on a 5.0.2 (Debian Squeeze)
>> Director (and Storage)
> That does not violate the rule I gave.
>
>> Does the Bacula team plan to provide such compatibility matrix ?
>>
> Th
On 5/2/2011 1:48 AM, obviously wrote:
> Hi all
>
> My first post here. So don't shoot me if I say/do stupid things.
>
> I got a problem with Bacula. The version I use is 2.4.4 on debian etch.
Since 2.4.4 is now nearly 4 years old, you really ought to try a more
recent version.
> My Bacula runs smo
On 5/10/2011 12:36 AM, mulle78 wrote:
> Hello @all, I've purged a volume and appended a new backup to the tape. Is
> there a way to find out the free space on a tape to be sure that a reorg of
> a tape has been done successfully?!
>
> +--
Alan-
I've actually not used encryption, but certainly encryption will mean
that you will get no benefit from whatever compression your tape
hardware may be capable of--possibly doubling backup time right there,
if you were able to keep your tape drive writing at full speed. I do
know that e
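The compression point can be illustrated with a toy experiment: ciphertext is effectively random, and random bytes don't compress, while typical file data does. Here os.urandom merely stands in for encrypted output; this is an illustration, not a measurement of any real tape drive:

```python
# Repetitive data compresses heavily; high-entropy (encryption-like) data does not,
# which is why hardware compression buys nothing once data is encrypted.
import os
import zlib

plain = b"bacula backup stream " * 4096   # repetitive, compresses well
cipher_like = os.urandom(len(plain))      # high-entropy, like ciphertext

print(len(zlib.compress(plain)) / len(plain))              # tiny ratio
print(len(zlib.compress(cipher_like)) / len(cipher_like))  # close to 1.0
```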
On 6/23/2011 1:31 PM, Troy Kocher wrote:
> Listers,
>
> I'm trying to restore data from medicaid 27, but it appears there are no
> files. There is a file corresponding with this still on the disk, so I think
> it's just been purged from the database.
>
> Could someone help me thru the restore pr
On 7/25/2011 6:14 PM, James Harper wrote:
>> 2011/7/25 Rickifer Barros:
>>> Hello Guys...
>>>
>>> This weekend I did a backup with a size of 41.92 GB that took 1 hour
> and 24
>>> minutes with a rate of 8.27 MB/s.
>>>
>>> My Bacula Server is installed in a IBM server connected in a Tape
> Drive LTO
On 7/26/2011 5:04 AM, Konstantin Khomoutov wrote:
> On Tue, 26 Jul 2011 00:18:05 -0700
> Steve Ellis wrote:
>
> [...]
>> Another point, even with your current config, if you
>> aren't doing data spooling you are probably slowing things down
>> further, as wel
On 1/10/12 1:12 PM, Craig Van Tassle wrote:
> I'm sorry if this has been asked before.
>
> I'm running a Scalar 50 with HP LTO-4 Drives. I want to encrypt the
> data that is put on the tape, We already have encryption going between
> the Dir/SD and FD's. I just want to encrypt the data that will be
On 1/24/12 2:22 PM, mark.berg...@uphs.upenn.edu wrote:
> In the message dated: Tue, 24 Jan 2012 19:09:15 GMT,
> The pithy ruminations from Martin Simmons on
>
>
> Thanks for replying.
>
>
> backups unreadable> were:
> => > On Mon, 23 Jan 2012 18:47:31 -0500, mark bergman said:
> => >
>
On 3/8/12 9:38 AM, Erich Weiler wrote:
> Thanks for the suggestions!
>
> We have a couple more questions that I hope have easy answers. So, it's
> been strongly suggested by several folks now that we back up our 200TB
> of data in smaller chunks. This is our structure:
>
> We have our 200TB in on
On 3/8/12 3:34 PM, Gary Stainburn wrote:
> On Thursday 08 March 2012 20:35:10 Andrea Conti wrote:
>> Hello,
>>
>>> I've added exclude entries for most of the folders
>>> I don't want to back up but they're still being included,
>> Which folders are still being included? All of them or just some?
>>
On 3/20/12 6:50 AM, Gustavo Gibson da Silva wrote:
> Hi there,
>
> I have several machines with different disk configurations (some have
> c:,d: and e:, others have c: and others have c: and d:) sharing the same
> fileset. If Bacula 5.0.3 should not find some folders (for instance
> d:\systemstate
On 3/23/12 1:20 AM, Kern Sibbald wrote:
> Hello,
>
> This is in response to the email from Jesper (see below). As it is
> not always obvious, I am not in the least upset in any way. This is
> meant to be information about our future direction, and more
> directly a response to Jesper's concerns a
On 4/5/12 8:21 AM, Abdullah Sofizada wrote:
> Hi guys, this is a very weird one. I've been trying to tackle this for the
> past two weeks or so to no avail...
>
> My director runs on Redhat Rhel 5.5 running bacula 5.0.2. My clients are
> Redhat Rhel 5.5 running bacula 5.0.2.
>
> Each of the bacula cli
On 4/10/12 4:36 PM, Steve Costaras wrote:
> I'm running bacula 5.2.6 under ubuntu 10.04LTS this is a pretty simple setup
> just backing up the same server that bacula is on as it's the main fileserver.
>
> For some background: The main fileserver array is comprised of 96 2TB drives
> in a raid-6