> > One thing I am missing are patched packages; rpm and also deb packages
> > have a mechanism to include patch files and apply them during the build
> > process.
> >
> > For most Bacula users it would be much easier to apply patches by
> > "rpm -U ..." or "dpkg install ...".
>
> Yes, but you have to build, test an
> So this deviation would be eliminated by:
> 1. changing %script_dir to %_libdir/bacula
> 2. changing the installed location of the rescue files from %sysconf_dir
>to %script_dir?
I think this is correct, based on my reading of the FHS. It does seem to
address the complaints the SRB raised.
> Personally I think the official rpms should be FHS compliant
> for reasons that David Boyes articulated. He is quite correct about
> large enterprise IT departments. I could have you buy me beers for an
> entire evening and regale you with stupid IT stories from US Airways.
> R
> Just to expand on this point a bit - as security enhancements such as
> SELinux become more commonplace, it's very realistic to expect that
> files that are not in directories indicated by security policy as
> holding executables simply can't be run as programs. If Bacula throws
> binary execu
> There are standards such as FHS, and these are good and useful for most
> programs, but they really do a big disservice to Bacula users when we
> are dealing with recovery. If you spread the Bacula installation all
> around your computer filesystem as most packages do and as the standards spec
elaborate on what this means to you a bit more?
>
> I think I was confused and stated it backwards. Anyway, the Job Proximity
> directive was proposed by David Boyes, so perhaps he could give us a
> definitive definition :-)
My suggestion was to define it as the minimum guard time between
> > I'd suggest separating installation and configuration (they're really
> > two very different problems),
>
> Yes, I had considered doing that, but one of the goals was to try to get
> each manual below 300 pages. The combined Installation and configuration
> is only 244 pages, and the two
> Recently, I split the Bacula documentation (800+ pages) into the
> following 7 documents:
>
> Concepts and Overview Guide
> Installation and Configuration Guide
> Console and Operators Guide
> Problem Resolution Guide
> Catalog Database Guide
> Utility Programs
> Developers' Guid
> I hadn't planned to cancel the lower priority job, but I had thought
> about the possibility. However, now that you mention it, I think we need
> some keyword to do this. Any suggestions? -- CancelLower, HigherWithCancel?
On further thought, perhaps the right thing to do would be to prom
> 1. Allow Duplicate Jobs = Yes | No | Higher (Yes)
Looks OK. One question: if a lower level job is running, and a higher
level job attempts to start, does this imply that the lower level job is
cancelled, and the higher level one is run instead? I think this is
desirable behavior (if possible)?
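For concreteness, here is a rough sketch of how the proposed directive might
sit in a Job resource. Only the "Allow Duplicate Jobs = Yes | No | Higher"
line comes from the proposal quoted above; every other name and value is a
hypothetical placeholder, and the final syntax may well differ:

   # Sketch only -- proposed directive, not an existing feature.
   Job {
     Name = "nightly-backup"            # hypothetical
     Type = Backup
     Client = example-fd                # hypothetical
     FileSet = "Full Set"
     Schedule = "Nightly"
     Storage = File1
     Pool = Default
     Messages = Standard
     Allow Duplicate Jobs = Higher      # Yes | No | Higher (default: Yes)
   }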
> On Tuesday 09 October 2007 16:10, David Boyes wrote:
Not me. That was Kenny Dail.
> In the first baby step, there is probably no need for a database change.
> However, the key to understanding the difficulties, and something that
> is not going to change, is that Bacula is J
> My thoughts on this would be to make the SD-MUX a totally separate
> daemon with perhaps its own DB, and the mux logic left completely
> out of the Director.
The director has to be involved to some degree to ensure that device
reservations are properly registered (to prevent it from tryin
> On Wednesday 12 September 2007 17:03, David Boyes wrote:
> > I'm not sure I understand you completely. If you have a storage pool
> > defined in one SD and another storage pool defined in a separate SD,
> > you should be able to migrate from one SD to the other, as lon
I'm not sure I understand you completely. If you have a storage pool
defined in one SD and another storage pool defined in a separate SD, you
should be able to migrate from one SD to the other, as long as the
director in a particular instance knows about both SDs. We do that now,
and it seems to wo
> Item 1: Implement a Copy job type that will copy the
> jobdata from one device to another, for example,
> copy from a file Tape to a real Tape.
This function already exists in Bacula. Define a nextpool resource in
the disk pool definition and define a migration job, and schedule it as
norm
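For anyone who has not set this up before, a minimal sketch of the kind of
configuration being described, as I understand it (all resource names below
are hypothetical; check the Migration chapter of the manual for the exact
directives in your version):

   # Disk pool whose data gets migrated to a tape pool.
   Pool {
     Name = DiskPool
     Pool Type = Backup
     Storage = FileStorage            # hypothetical disk storage
     Next Pool = TapePool             # where migrated data goes
   }
   Pool {
     Name = TapePool
     Pool Type = Backup
     Storage = TapeStorage            # hypothetical tape storage
   }
   # Migration job that reads DiskPool and writes to its Next Pool.
   Job {
     Name = "migrate-disk-to-tape"
     Type = Migrate
     Pool = DiskPool
     Selection Type = Volume
     Selection Pattern = ".*"
     Client = example-fd              # required by the Job resource; hypothetical
     FileSet = "Full Set"
     Messages = Standard
     Schedule = "WeeklyCycle"
   }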
> Couldn't the migrate capability be altered ever so slightly to allow
> the "migration" of a job without purging the old job from the
> catalog? This would allow bitwise identical backup(s) to be created
> without having to create a forking/muxing SD/FD.
>
> This, of course, does not create the
> Copypools
> Extract capability (#25)
> Continued enhancement of bweb
> Threshold triggered migration jobs (not currently in list, but will be
> needed ASAP)
> Client triggered backups
> Complete rework of the scheduling system (not in list)
> Performance and usage instrumentation (not in list)
>
> Of all the projects on the projects list, which 2 or 3 do you think are
> most important from an enterprise standpoint?
That's a very open-ended question...8-) Careful what you wish for.
IMHO, here's what my wish list would be:
Copypools
Extract capability (#25)
Continued enhancement of bw
> I would like to second this. Right now I have duplicates of everything
> to first do a local backup and 7 hours later another backup of the same
> data (but without the scripts and longer runtime) to an offsite storage
> to mirror the data.
In our shop, this wouldn't be sufficient to satisfy the
> Item 8: Implement Copy pools
> Date: 27 November 2005
> Origin: David Boyes (dboyes at sinenomine dot net)
> Status:
>
> What: I would like Bacula to have the capability to write copies
> of backed-up data on multiple physical volumes sele
> I have to say i like it quite a bit. Very powerful search features and
> ways of customizing your view of the tickets in the system.
Another vote from here. It's a nice tool, if you add a few features
around it.
> Perl comes bundled with a tool called perlbug which knows how to write
> a bug r
Small request: can we start to clip the extra text in responses? It's
getting very difficult to find the actual content in these
discussions...
> >Downside -- it is probably better to compute the signature from the
> >actual file written in case something goes wrong in writing it.
> Howeve
> > 1. I am considering changing the default installation location for
> > Bacula on Windows to be the same as it was previously -- that is, the
> > \bacula directory on the main disk. The current installation places
> > files in the "standard" Windows locations, but IMO, it is very inconven
> > > In non-autochanger configurations, the StorageId is not significant
> > > since the volume can be mounted on any device with the same Media Type.
> >
> > Would this argue for defining manual drives as an autochanger with a
> > special/null changer device? That would allow the devic
> PS: for non-autochanger configurations, I have noticed that Bacula can
> label a volume and will often set InChanger=1, which is incorrect, and
> could lead to the same problem noted above (message: no appendable
> volume found, ...). In non-autochanger configurations, the StorageId is not
> One thing that might be different between my needs and others is that
> I never want to use more than one drive at a time for a single pool. I
> am not sure that made sense so I will try to say it a different way...
> I mean that if there are concurrent jobs running for a single pool I
> want thi
> > OK, neat. Does that also allow regeneration of the file while the
> > daemon is running, eg, what happens if you issue a reload command?
> > Do you get an updated file?
>
> Yes. Whenever Bacula tries to open a configuration file, it instead
> opens a pipe.
Cool. That'll work for me. Nice job,
> > Since he has implemented it as an enhancement of the conf file
> > specification at the lexical scanner level, it will apply to *all*
> > Bacula components that accept the -c option, which is every component
> > (or tool) that needs a conf file.
> Yes, that's exactly what the patch doe
> I have no problem with putting the Bacula conf files into the database
> if someone wants to do so. However, I don't foresee any change in the
> ASCII format that Bacula requires -- i.e. I don't foresee Bacula reading
> its conf file directly from an SQL database.
Fair enough.
Thinking
> On Apr 26, 2007, at 8:16 AM, Ryan Novosielski wrote:
> > This has been rejected before, and the reason being that you really
> > don't want to have to have a database in order to read your tapes.
I would disagree with you here. What I want is to get back to a
controlled restore process that non
> > The design of a GUI for Bacula may revolve around interfacing with
> > the daemons. I was wondering
> > if just designing a GUI for config file modification might simply
> > suffice, i.e. a config file editor with
> > the smarts of syntax checking and linking of all the resources in
> > the
> detach attach
> float attach
I prefer detach/attach, but float/attach would also work, at somewhat of
a disadvantage. I'm not sure J Random User would understand the float
terminology, as I think that implies that the window would always be on
top, and that probably isn't the case.
---
> > # bacula-dir -c '|/usr/local/sbin/generate-dir-config'
> >
> > ... would cause it to run /usr/local/sbin/generate-dir-config to
> > generate a new configuration file.
>
>
> Hey, that sounds pretty useful. When you say "globally", I assume you
> mean that I can just toss a similar line int
> 1. Previously, when Bacula needed a new Volume, it would prune *all*
> volumes in the current pool. Now it prunes only one at a time, until
> it finds one that has been Purged. This means that less pruning will
> be done, and database records will tend to remain longer (possibly muc
> 1. With the batch insert code turned on there are a number of regression
> tests that fail. They must all pass without errors prior to production
> release.
> Responsible: Eric
> Deadline: Roughly the end of March
Makes sense.
> - First, have a default limit of the number of records that wil
> > I'm a fan of frequent releases. Smaller changes each time is a good
> > thing. Both from a user and a sysadmin point of view.
In principle this is a great idea. Can I suggest a minor variant?
Frequent releases benefit sites that are capable of experimentation.
Enterprise deployments don't
> I also think that for a longer term, "cleaner" solution it might be a
> good idea to add a RunAfterJob, which runs one or more jobs after the
> current job, and by "runs", I mean it starts the job within Bacula, not
> through a script as RunScript does. This, however, needs to be carefully e
> > "VanHelsing" did spring to mind - A tool to tame Bacula :)
Garlic, anyone? 8-)
> On 1/5/2007 12:24 AM, Dan Langille wrote:
> ...
> > If you've always wanted to contribute to the project, this is
> > something that anyone can do. Please help to spread the news and get
> > us a wider audience.
> >
> > We have a German version[3] of the PR.
>
> Just for your information: The G
> Yes, I think that is a good idea. First, though, we must create such a
> tar file, which doesn't currently exist as such. If you load and run
> the rescue make, it will exist as a directory.
One other thing that might be an issue is encryption information. I'm not
sure if the current scr
> 1. A snapshot of your hard disk configuration.
> 2. A copy of your current Bacula file daemon that can be run on
> a rescue system (i.e. probably statically linked).
> 3. A bunch of scripts that can be used to do various recovery tasks
> (bring up the network, repartition your hard disks as t
> Now, I'd like to get this fixed. So, David, could you try disabling the
> virus scanner you use and see what happens? Which antivirus software are
> you running?
The problem workstation runs WinXP SP2 + all the up-to-date fixes and a
copy of Norton AV 2005. I disabled it last night, and voila -- no p
> > Since migration is a significant new
> > feature, it strikes me as a very good opportunity to start heading in
> > that direction.
> Yes, that is a good point that I probably did not consider enough.
Well, it's not like you don't have anything else to do...8-)
> Yes, on second thought we pr
> Thanks for your thoughts. I think there are several points that you have
> minimized or overlooked in your response:
>
> 1. Bacula currently permits specifying multiple Media Types in a Pool.
> 2. Bacula currently permits Storage devices to be specified in the Job
> resource
> 3. Bacula current
> I did not always do that, but I learned that the flexibility of pools
> across different storage devices and media types is less valuable than
> a simple setup.
Multi-type pools also make backup and storage management policy almost
impossible to implement. If we define a pool as an administrativ
> Defining and figuring out what storage device is going to be used has
> become a bit too complicated in version 1.39.x. This is mainly due to
> the fact that we have two different storage devices for each job and
> three possible places to specify them:
My 2 cents worth:
This can be simp
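To illustrate the overlap being complained about, a sketch only (all names
below are hypothetical, and this is not a recommended setup): a Storage
specification can appear in the Job resource, again in the Pool resource,
and can be overridden a third time at run time.

   Job {
     Name = "example-backup"
     Type = Backup
     Client = example-fd
     FileSet = "Full Set"
     Schedule = "WeeklyCycle"
     Messages = Standard
     Pool = DiskPool
     Storage = File1              # place 1: the Job resource
   }
   Pool {
     Name = DiskPool
     Pool Type = Backup
     Storage = File2              # place 2: the Pool resource
   }
   # place 3: a run-time override from the console, e.g.
   #   run job=example-backup storage=File1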
> Everyone can start by looking over the current Migration documentation
> that I
> have just updated at:
>
> http://www.bacula.org/dev-manual/Migration.html
>
> To make testing easier, I will release version 1.39.20 beta in the next
> few days.
Outstanding! This plus the encryption support m
nt as it could be (compared to something like TSM's
server-to-server migration function), but that's possible with the current
state of the code in 1.39.
David Boyes
Sine Nomine Associates
Arno,
I like it! That's pretty close to what I wanted.
Suggestions:
In the location definition screen, add a column for "Next Location", so
that one can specify what location is used next.
On the volume display screen, add a check box to select particular
volumes, and a "Move to Next Locati
> Another solution might be to have a "Support form" that must be completed
> and sent before support is given. This support form could have a few
> simple questions and requests for information that could speed up a lot
> of support requests.
IMHO, this is the only possibility that's likely
> 1. To make migration (and later copy) work correctly, after a migration
> job has run (moved data from one pool to another), it is inserted in the
> catalog with the time and date of the original job. If I didn't do this,
> a restore would not be possible since the restore is based on job level a
> > Kind of a misleading name, though. It's really a volume state
> > indication -- eg, the volume is mountable or not.
> In its current usage, I find it to be the best name. It indicates if the
> volume is in the changer or not.
It does work, but I think it reflects an assumption born of Bacula'
> Sorry, but I don't understand what a "MOVE VOLUME" -- or rather what an
> operator area is. I do understand the concept of an area that is
> accessible from the outside without opening the library case.
Sorry -- terminology thing again. In the very large libraries (the big
STK and IBM silos),
> > I'm also unclear on whether Bacula currently tracks which volumes are
> > in an autochanger. Right now, I have two volumes that are both
> > recorded in the database as "InChanger = 1" and "Slot = 4", but it's
> > not obvious which tape is actually in the changer.
An interesting question. Wou
> In my mind, uid and jobuid are not the same thing, but we can
> also call it jobuuid ?
> (uuid stands for a Universal Unique IDentifier)
That would work.
> > We can call it "jobuid" ?
>
> Yes. That is really nice. It is exactly what I was looking
> for. Thanks.
How about "jobseq" or "jobserial"? My first reaction to "jobuid" is "the
uid that this job runs under".
> Are you saying that instead I should write the tray icon to
> reflect some status rather than problems? ie: showing the
> number of running jobs.
I may have been confusing the tray icon for the client with a minimized
icon/tray icon for the operator interface you've been working on -- I
occasiona
> I'm sending this email to both lists because I think this is
> of interest to both groups.
> I haven't seen the already working Bacula tray icon, and I
> want to add to the PyGTK console some sort of tray icon so that
> the operator can be notified of interesting events.
I think I would argu
> > And these values apply to all volumes in the pool, right?
>
> Yes, which may encourage some users to create different Pools
> for different Media Types.
Good. This is a productive direction..8-).
> > Operationally, I think you'll be
> > more concerned with the entire pool case, using th
> What I have implemented already is (passes regression
> testing, so all existing features work despite the new code):
> - Separation of read/write descriptors in the Storage daemon.
> - Separation of the read/write Storage names in the Director that
> are sent to the Storage daemon (both a re
> On 22/11/05, Kern Sibbald <[EMAIL PROTECTED]> wrote:
>
> > Notes: By Kern: I would prefer to have a single new Resource called
> > RunScript, and within that resource an additional directive:
> >
> >RunBeforeJob = Yes|No
> >
> > If no, it runs it after the
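As a rough sketch of the shape being proposed (the RunScript resource and the
RunBeforeJob = Yes|No directive come from the note above; the command path and
the surrounding Job directives are hypothetical, and the implemented syntax may
end up different):

   Job {
     Name = "example-backup"            # hypothetical
     Type = Backup
     Client = example-fd
     FileSet = "Full Set"
     Schedule = "Nightly"
     Storage = File1
     Pool = Default
     Messages = Standard
     RunScript {
       RunBeforeJob = Yes               # Yes = run before the job; No = run after it
       Command = "/usr/local/sbin/prepare-backup"   # hypothetical script
     }
   }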
> > Item 1: "spool-only" backup
> >Origin: Frank Volf ([EMAIL PROTECTED])
> >Date: 17 November 2005
> >Status:
This is another variation of the pool-to-pool migration idea. If a disk pool
were available as the first pool in the migration chain, then this feature
request would be s
> Sounds like a great plan. It's probably too late but I
> suggest changing the acronym from RFC to something else to
> avoid conflict and possible confusion with RFC = "Request for
> Comment", used extensively to document Internet and other standards.
>
> What about RFF = "Request for Featur
> Well, in my opinion
>
> > i.e. a drive that
> > reads/writes two media types;
>
> Very, very rare, to the point of being almost non-existent in
> the open systems world. Occurs occasionally in the mainframe
> world with 3480/3490 media (same physical tape cartridge, but
> R/W density, f
> > I think you've answered a slightly different question. Right now, what
> > goes on the tape is block-for-block identical to what is reported by
> > the FD during the scope of that run, however flawed. If I want to do
> > parallel copies of those tapes at the same time for redundancy or
> > Hmm. I don't think that would pass our auditors. If there's a
> > significant chance that the copies are not identical (and it sounds
> > like this approach pretty much guarantees that the copies will not be
> > identical), I don't think it would be sufficient or useful for this
> > purp
I scoped the problem as two major projects:
> >
> > 1) implementation of "copy pools" -- where files written to a pool
> > were automatically also written to up to 3 additional pools using the
> > same volume selection criteria as exist now (essentially getting the
> > SD to act as an FD
> > Hmm. Does the second job transfer the data from the FD again? If so,
> > then that doesn't (IMHO) quite do what I want to do here. I really
> > want to transfer the data only once (the only guarantee we have of
> > getting the same data on all the copies) and create the replicas on t
I scoped the problem as two major projects:
1) implementation of "copy pools" -- where files written to a pool were
automatically also written to up to 3 additional pools using the same volume
selection criteria as exist now (essentially getting the SD to act as an FD
to more than one FD to ensu
> We want to backup all our clients to disk and then mirror
> these backups to tape. We want the ability to do quick
> restores from the disk (last
> 2-4 weeks depending on backup size) but also create a tape
> archive spanning several months if we need anything that is
> not on the disk-stora
> > I think another aspect that we haven't seen a lot of discussion on is
> > transparency and accountability, which is often the big catch with
> > commercial donors.
>
> I don't think this is a really big problem. First, I am
> someone very open. I have no problem with keeping things
> t
> Yes, indeed. This is a very interesting article. I was aware
> of the problems of funding, especially bad feelings that can
> develop when certain developers are paid and others not, but
> I had never considered it from an angle of "crowding-out" of
> volunteer programmers. This "crowding-
> > Actually, this is closer to the "volume set" idea I alluded to --
> > being able to define a single volume name representing a "set" of
> > volumes where any member of the set would be acceptable to satisfy a
> > mount request for backup or restore. It's only moderately related to
>
> > nope .. i want the RECENT backup on tape; it's just that i also want
> > the last 2 backups on HD for quicker restore, in case the tape fails
> > etc.
>
> Now, that doesn't much change what I wrote about bacula not
> allowing this. But I think your ideas could perhaps
> contribute to
> I would like any such companies to step forward, because the
> idea here for Bacula is not to make money, but to cover out
> of pocket costs of development.
I'm up for it.
If a foundation controls the actual Bacula code ownership, it's fairly
simple to have "support" providers contribute
I'd also point out that this is the route that OpenAFS took. It seems to
scale pretty well, with one or two commercial providers contributing funds
and development hardware from support contract revenue. OpenAFS created a
foundation to manage the contributions and hardware, thus providing an
audit
> 2. I've looked into the idea of creating a Bacula Foundation,
> and if done here in Switzerland where I live, it will cost
> about $2000-3000 to create and $2000-3000 per year for
> administrative fees (accounting, audit, ...) to run.
> At this point, this is not feasible.
If you wouldn't
> > $query = $queries[STUFF_FROM_TABLE1][SELECTED_DRIVER];
> > $query = sprintf($query, $parameter1, $parameter2);
>
> I'd vote for trying to do what Bacula does -- simplify the
> SQL and not use any non-standard MySQL SQL unless absolutely necessary.
> Doing that, we should be able to k
Title: Bacula-Client only on Solaris 2.5.1
It built fine with the gcc packages from sunsite.unc.edu on my 2.5.1
system. If this is "Trusted Solaris" (an odd variation of 2.5.1 still used
by the government in various places), you have to use the Sun acc compiler
so as not to break the "trusted
> I can
> implement a script that automatically rejects all email by
> nonsubscribed users (it also rejects all other email held for
> administrative approval).
Every other significant mailing list I work with has this requirement to
serve its subscribers, and it's the accepted practice. Uns
> http://www.linux-magazin.de/Artikel/ausgabe/2005/06/bacula/bacula.html
> As my German is still rather at an elementary level, I'm
> hoping the article is
> positive.
Quite positive. It's a very good description of what Bacula is and does.
-- db