[Bacula-users] client1 job error -- but not defined

2005-11-07 Thread mark
Hi all,

I keep getting this error message from Bacula. I do not have a job
called client1 defined in bacula-dir.conf. I have done a 'grep -r -i
client1 /etc/bacula' and found nothing. I am using Gentoo and the Gentoo
ebuild of Bacula.


===

08-Nov 08:05 slain-sd: Job Client1.2005-11-06_01.05.01 waiting. Cannot
find any appendable volumes.
Please use the "label" command to create a new Volume for:
Storage: FileStorage
Media type: File
Pool: Default
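
For what it's worth, the fix that message asks for looks like this in
bconsole (the volume name is made up; with autolabeling enabled via
"LabelMedia = yes" in the SD's Device resource, this manual step goes
away):

```
*label storage=FileStorage pool=Default volume=Vol0001
```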


Any help appreciated.

thanks




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] replication/migration strategy for multiple bacula servers

2006-06-12 Thread mark

Hello, all!

I've got 2 Bacula servers, one of which is a primary box doing all the
backing up; the other is supposed to be an exact copy of the first one
(it's located in a "hot site" facility far away). As part of the
disaster recovery plan, I would like to make sure that the second box
always has a more or less current (+/- 30 min old) replica of the data
on the first box, so that I can cut over to it once disaster strikes.

What is the most proper way of doing this in the Bacula world? Or
should I be using rsync and friends instead?

 

Kern mentioned migration jobs in one of his previous responses to
someone. Could that be the answer, and if so, is this feature already
available?

 

Thank you in advance.

 

Best Regards,

    Mark Gimelfarb.



[Bacula-users] Removing Volumes from Catalog

2008-10-21 Thread Mark
Hiya list,

I added a bunch of volumes with the add command but both the volume 
sequence numbers and slot locations came out wrong. I then discovered 
the label barcode command which started to add the volumes in correctly 
and would have if I hadn't canceled it partway through.

So my question is how do I remove volumes from the catalog?

Mark



[Bacula-users] Removing Volumes from Catalog

2008-10-21 Thread Mark
hai, i r noob

RTFM for the win.

I was looking for the remove command.
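
For anyone who lands on this thread from the archives, the catalog-side
removal is done in bconsole roughly like this (volume name is made up;
the command prompts for confirmation, and it only touches the catalog,
not the media itself):

```
*delete volume=Vol0001
```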

Sorry to spam the list, gents.

Mark


> Hiya list,
> 
> I added a bunch of volumes with the add command but both the volume 
> sequence numbers and slot locations came out wrong. I then discovered 
> the label barcode command which started to add the volumes in correctly 
> and would have if I hadn't canceled it partway through.
> 
> So my question is how do I remove volumes from the catalog?
> 
> Mark






Re: [Bacula-users] How does bacula handled interrupted jobs?

2013-08-14 Thread Mark
Hi Barak,

>> If I can't ever get a full because of unstable links then bacula
>> is useless in my particular setup.

Out of curiosity, what backup product are you currently using that _does_
cleanly handle loss of communication between the client and the backup
server, resuming the next backup at exactly where the previous one failed?
 I don't know how many remote sites you're going to be backing up, but if
it were me and just a few sites, I'd probably deploy a bacula host in each
site (you don't need a lot of expensive server horsepower to deploy a
perfectly functional and reliable bacula server).  If your links are
unstable enough that it's a problem for backups, then imagine when you need
to do restores and you're having those same circuit stability issues while
everyone is frantically asking you, "How much longer?!?!".  People don't
much care if a backup takes 8 hours overnight, but they care *immensely*
when everything has ground to a halt during a business day and you're
explaining that the restore will take that same 8+ hours, and may have to
start over completely because your links are bad.  At restore time, I'd
much rather be restoring at near wire speed from a local system.

Regards,
Mark


On Wed, Aug 14, 2013 at 9:16 AM, Barak Griffis wrote:

> That's not friendly with unstable/slow links...  Does anyone have any
> other insightful ideas on how to handle this sort of situation?  If I
> can't ever get a full because of unstable links then bacula is useless
> in my particular setup.
>
> On 08/14/2013 08:59 AM, John Drescher wrote:
> >> Feel free to direct me to a URL, since this seems like an obvious newb
> >> question, but I don't see an obvious search result on the webs.
> >>
> >> If a job gets interrupted (say network drops out midway through a
> >> full).  What happens the next time?  does it pick up where it left off
> >> or does it start over?
> >>
> > It marks the job as failed and the next job will start over.
> >
> > John


Re: [Bacula-users] Backing up catalog into a Pool defeats the purpose?

2013-11-25 Thread Mark
Hello,

On Mon, Nov 25, 2013 at 4:37 PM, jackbroni
wrote:

> Doesn't backing up the catalog into a Bacula Pool of volumes defeat the
> purpose as it would be inconvenient to retrieve the catalog in the case of
> failure?
>
>
Yes, it certainly could be.  You'll notice the default RunAfter job for the
catalog backup just deletes the resulting db dump after it has been backed
up.  An easy change is to replace it with your own RunAfter script that
copies that db dump and your various other files (configs, bootstraps,
whatever will simplify a rebuild) away to another host.  Offsite,
preferably.
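
As a sketch of that idea (every path here is made up, and the real dump
location depends on your install; the demo stages everything under /tmp
so it can be tried safely):

```shell
#!/bin/sh
# Hypothetical RunAfter replacement: copy the catalog dump somewhere
# off the director host instead of just deleting it.
# For this demo we fabricate a dump under /tmp; in production DUMP would
# be the file your catalog backup job writes, and DEST a remote mount
# (or you'd use scp/rsync to another host).
DUMP=/tmp/bacula-demo/bacula.sql
DEST=/tmp/bacula-demo/offsite
mkdir -p "$(dirname "$DUMP")" "$DEST"
echo "-- fake catalog dump" > "$DUMP"
cp "$DUMP" "$DEST/bacula.sql.$(date +%F)"   # keep a dated copy
ls "$DEST"
```

In a real setup you would point DEST at the offsite location and add
copies of your bootstrap files and daemon configs alongside the dump.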

HTH,
Mark


Re: [Bacula-users] Failure Backing Up Windows Client Machine

2014-02-25 Thread Mark
Hello,

On Tue, Feb 25, 2014 at 2:28 PM, John Drescher  wrote:

> Mine bacula-5.2.10 x64 on windows 7 seems to support the -c parameter even
> though the help does not list that option
>
>
Indeed, if you look at the service entry on a Windows machine after a
successful installation of the client, it's simply:

"C:\Program Files\Bacula\bacula-fd.exe" /service -c "C:\Program
Files\Bacula\bacula-fd.conf"

I don't know why it's failing to install the service for him, but that
command line should get a functioning FD, provided the config is valid.

Regards,
Mark


Re: [Bacula-users] latest bacula client (bacula-fd) for Windows

2014-04-25 Thread Mark
On Fri, Apr 25, 2014 at 11:22 AM, compdoc  wrote:

> > Not to worry: as of windows 7, windows backup is still broken
>
> Windows 7  Pro backup is ok, but it does break easily and doesn't tell you
> that its stopped working...
>
>
How are you running the Windows backup?  As a scheduled task on Windows?
 Here, I have bacula fire off a batch file on the windows host as a
ClientRunBeforeJob.  That batch script triggers wbadmin to do a backup out
to a Samba share, and when it finishes, bacula grabs the resulting Windows
image from the samba server.  If the Windows backup encounters a problem,
the batch exits with an error and I get a nice "backup failed" email from
bacula.  It works well for us, though it's only a handful of Windows
hosts...  I don't think it'd scale well to be doing full Windows backups
nightly to a Samba share from dozens and dozens of hosts.
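
A bare-bones sketch of that batch file (the UNC path is made up, and
wbadmin's flags should be double-checked against your Windows version;
the nonzero exit code is what makes Bacula flag the job as failed):

```bat
@echo off
rem Hypothetical ClientRunBeforeJob script: image the machine to a Samba
rem share, and propagate any wbadmin failure to Bacula via the exit code.
wbadmin start backup -backupTarget:\\backupsrv\winimages -include:C: -quiet
if errorlevel 1 exit /b 1
exit /b 0
```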


Re: [Bacula-users] rpm repo for CentOS 6 anyone?

2014-05-08 Thread Mark
Hello,

On Thu, May 8, 2014 at 12:02 PM, Jari Fredriksson  wrote:

> 08.05.2014 15:12, Richard Fox kirjoitti:
> > Hi Jari & all,
> >
> > On Thu, 8 May 2014, Jari Fredriksson wrote:
> >
> >> CentOS 6 has bacula 5.0.0 which has a bug so that it prunes jobs older
> >> than 43 years, no matter how it was defined for director.
>

I don't have an alternate repo to offer you; I just wanted to ask if "43
years" is a typo, or if you really have retention times set that go
beyond that time frame.  More a curiosity than anything, I've certainly
heard of people with 7 or 10 year retentions, but you're talking about
it being an issue if jobs you run today get purged in 2057?  Sorry, it
just really piqued my interest because it's so far outside any use case
I've heard of.

Mark


Re: [Bacula-users] Cannot find any appendable volumes - But auto labeling is enabled?

2011-01-04 Thread Mark
On Tue, Jan 4, 2011 at 9:14 AM, Mister IT Guru  wrote:

>
>   LabelMedia = yes; appears in all my device definitions, and the
> storage daemon has been reloaded, restarted, reconfigured, reloaded, and
> restarted again. :)
>
>

Sorry if you've already posted it and I missed it, but what's that section
of your SD config look like?  Does the user (likely 'bacula') that the SD is
running as have rw access to the path given in your File section, and the
path exists?
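
For comparison, a working file-storage Device resource in bacula-sd.conf
typically looks something like this (the path is an example; it must
exist and be writable by the user the SD runs as):

```
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /var/bacula/storage  # example path, rw for the SD user
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
```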

Regards,
Mark


Re: [Bacula-users] Help on retention period

2011-01-12 Thread Mark
On Wed, Jan 12, 2011 at 9:35 AM, Valerio Pachera  wrote:

>
> SCOPE: We want the possibility of restore any file till 2 weeks ago.
>

...


> at sunday of the third week, the first full backup get overwritten.
>
> _ _ _ _ _ _ | _ _ _ _ _ _ |
>
> This means that, of the first week, I can only restore file present in
> the incremental backup.
> In other words I do not have a cycle of 2 weeks but 1.
>
>
When your first week's full backup gets overwritten, what are those
incremental backups "incremental to"?  What you're describing sounds like
what I expect fulls and incrementals to be.  When you overwrite the full,
you've essentially orphaned the incrementals that were created based on that
full backup.


Re: [Bacula-users] incremental backups too large

2011-01-13 Thread Mark
On Thu, Jan 13, 2011 at 4:42 AM, Bart Swedrowski  wrote:

>
> I think what Lawrence meant was that say full backup takes 33GB, as
> the one below.
>
> | 1,089 | tic FS  | 2011-01-08 02:05:03 | B| F |
> 464,798 | 33,390,404,320 | T |
>
> Now, if you do Incremental backup, it's going to be reported by
> bconsole as even bigger, eg.:
>
> | 1,097 | tic FS  | 2011-01-09 02:05:03 | B| I |
> 6,573 | 39,758,701,241 | T |
> | 1,105 | tic FS  | 2011-01-10 02:05:08 | B| I |
> 4,585 | 39,502,153,253 | T |
>


Have you done a 'list files jobid=' for one of your incrementals?
 Maybe you have a few really large files that are getting changed every day,
and therefore getting backed up each day.  I have no such behavior from
bconsole, my incrementals show only the small amount of change that I'd
expect:

   273  Full   23,002   2.278 G  OK   05-Jan-11 03:23  Backup-dev3
   276  Incr      422   21.52 M  OK   06-Jan-11 03:28  Backup-dev3
   279  Incr      104   17.43 M  OK   07-Jan-11 03:38  Backup-dev3
   282  Incr      306   24.18 M  OK   08-Jan-11 03:47  Backup-dev3
   285  Incr       74   17.41 M  OK   09-Jan-11 03:18  Backup-dev3
   288  Incr      151   17.80 M  OK   10-Jan-11 03:20  Backup-dev3
   292  Incr      158   18.30 M  OK   11-Jan-11 03:53  Backup-dev3
   295  Diff      151   17.53 M  OK   12-Jan-11 03:31  Backup-dev3
   298  Incr      145   1.001 M  OK   13-Jan-11 03:27  Backup-dev3

(the 'list jobs' output also shows the small numbers for incrementals, but
the output of 'status client' fits into 80 columns better  : )


Re: [Bacula-users] newbie question

2011-01-17 Thread Mark
On Mon, Jan 17, 2011 at 5:46 PM, Randy Katz wrote:

> Dan, I said I compiled and installed the program but had not done
> configuration, implies
> I had some exposure to some docs somewhere.

> If you care to help here is a reply to a guy that gave me an off-list
> reply of sorts, as of yet I have not received what he spoke about
> which might prove to be partially helpful

Randy, you're sort of coming across as though you feel a sense of
entitlement, "I built Bacula from source but I don't feel like reading
documentation or putting the slightest bit of effort into this, so someone
give me a cookie-cutter setup".  That may well just be due to the nature of
email as a communications medium, so don't take offense to that unless
that's really how you mean it.

Backups are probably not the best item to take shortcuts with (especially at
a hosting company!), and you can spend one afternoon with the excellent
documentation and have all the understanding you need to get Bacula up and
running in your environment, in a way that suits your needs.  Can you make
some time for the documentation?  You'll be far better protected than if you
just throw in some configurations that you don't understand and start
running it.  I guarantee the list is helpful when you show up and ask an
informed question that shows you've done at least a bit of homework.


Re: [Bacula-users] backup encryption

2011-01-31 Thread Mark
On Sun, Jan 30, 2011 at 7:44 PM, Dan Langille  wrote:
>
>I keep thinking, the private key (used only for decryption) does not
>need to be on the FD... only the public key (the one used for encryption).
>
>Can someone confirm?
>

I just did a quick test, the FD won't start if the private key isn't
present:

# bacula restart
Starting the Bacula File daemon
31-Jan 08:27 bacula-fd: Fatal Error at filed.c:421 because:
Failed to load private key for File daemon "bacula-fd" in
/etc/bacula/bacula-fd.conf.
31-Jan 08:27 bacula-fd: ERROR in filed.c:221 Please correct configuration
file: /etc/bacula/bacula-fd.conf
#

Mark


Re: [Bacula-users] backup encryption

2011-01-31 Thread Mark
On Mon, Jan 31, 2011 at 8:48 AM, Paulo Martinez
wrote:

> Am 31.01.2011 um 15:31 schrieb Mark:
> > On Sun, Jan 30, 2011 at 7:44 PM, Dan Langille 
> > wrote:
> > >
> > >I keep thinking, the private key (used only for decryption) does not
> > >need to be on the FD... only the public key (the one used for
> > encryption).
> > >
> > >Can someone confirm?
> > >
> >
> > I just did a quick test, the FD won't start if the private key isn't
> > present:
> >
>
>
> How did you make the quick test?
>
> Deletion of an entry in the conf file or manipulation of the pem file?
>
> Cheers
>
> PM
>


Yes, simply removed the private key from the PEM file.


Re: [Bacula-users] minimum full interval?

2011-06-04 Thread Mark
Hi Craig,

On Sat, Jun 4, 2011 at 8:51 AM, Craig Isdahl  wrote:

> I'm looking for functionality along the lines of MinFullInterval (if it
> existed).
>
> I'd like to grab a full backup but only after x days have passed since the
> last full backup.  Ideas, thoughts and suggestions welcome!
>

What are you looking for that Max Full Interval doesn't do?  If you set Max
Full Interval to 30 days, and only schedule incremental jobs, then every 30
days one of your incrementals gets bumped to a full.  Are you looking to
prevent someone from manually firing off a full?
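
To illustrate the approach (the job and schedule names are made up, and
the other mandatory Job directives are omitted):

```
Job {
  Name = "nightly"                 # example name
  Schedule = "DailyIncrementals"   # a schedule that only runs Level=Incremental
  Max Full Interval = 30 days      # upgrades a job to Full once 30 days pass since the last Full
  # Client, FileSet, Storage, Pool, etc. as usual
}
```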

Mark


Re: [Bacula-users] need help (step by step) for setting up certificates

2011-07-24 Thread Mark
Hi,


> my certs now have the following permissions:
>
> - -rw-r--r-- 1 root   bacula 3195 2011-07-23 16:53 home1.crt
> - -r 1 bacula bacula  887 2011-07-23 16:53 home1.key
> - -rw-r--r-- 1 root   bacula 1359 2011-07-23 16:52 myca.crt
>
> so bacula should be able to read them all now, yet i'm still getting the
> same error `TLS negotiation failed` when trying to run bconsole.
>
>
As you can see there, the only users on the system who can read home1.key
are root and bacula.

When you run bconsole, it runs as you, not as the bacula user.  The
_daemons_ run as root and/or bacula (depending on whether you're talking
about FD, SD, or DIR), but bconsole is just a client to the director.  If
you're logging in as "scar", change home1.key's permissions so that the
group can read it (mode 640) and add "scar" to the bacula group (note that
I'm not sure if bacula will complain about the key's permissions being too
loose, but it's quick to change back if so), or if the filesystem is mounted
with ACL support, just do a setfacl and allow the user "scar" to read the
file.
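
To make the group-read change concrete (demonstrated on a scratch file
under /tmp rather than the real /etc/bacula/home1.key, since the usermod
step needs root):

```shell
# Stand-in for: chgrp bacula home1.key && chmod 640 home1.key
# followed by: usermod -a -G bacula scar   (as root, then log in again)
key=/tmp/home1.key.demo
touch "$key"
chgrp "$(id -gn)" "$key"   # demo uses the current group instead of 'bacula'
chmod 640 "$key"           # owner rw, group read, others nothing
stat -c '%a' "$key"        # prints 640
```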

HTH,
Mark


Re: [Bacula-users] Fwd: Re: Backing up lvm snapshots?

2011-09-19 Thread Mark
Hi,

On Mon, Sep 19, 2011 at 3:22 PM, Tobias Schenk wrote:

> Oh, sorry, I really got confused with this mail.
>
> Hello,
>
> thank you for the comments regarding readlink. Unfortunately, it does
> not work :(
> if I do:
> bash>readlink /dev/vmguests/backup
> I get:
> ../dm-6
>
>
Does it work if you add the ' --canonicalize' option?

readlink --canonicalize /dev/vmguests/backup
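
As a quick illustration of the difference, on a throwaway symlink (the
/tmp paths are made up for the demo):

```shell
# readlink alone prints the raw link text; --canonicalize (-f) resolves
# it to an absolute path, which is what you want to hand to bacula.
mkdir -p /tmp/rl-demo
touch /tmp/rl-demo/target
ln -sf target /tmp/rl-demo/link
readlink /tmp/rl-demo/link                  # prints: target
readlink --canonicalize /tmp/rl-demo/link   # prints: /tmp/rl-demo/target
```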


HTH,
Mark


Re: [Bacula-users] Fwd: Re: Backing up lvm snapshots?

2011-09-19 Thread Mark
On Mon, Sep 19, 2011 at 3:27 PM, Mark  wrote:

> Hi,
>
> On Mon, Sep 19, 2011 at 3:22 PM, Tobias Schenk wrote:
>
>> Oh, sorry, I really got confused with this mail.
>>
>>
>> bash>readlink /dev/vmguests/backup
>> I get:
>> ../dm-6
>>
>>
> Does it work if you add the ' --canonicalize' option?
>
> readlink --canonicalize /dev/vmguests/backup
>
>

Sorry for the noise, I see that's not your problem now, your issue is that
the link doesn't exist yet.  It's Monday, at least that's my current
excuse...

Mark


Re: [Bacula-users] Encryption keys

2011-10-11 Thread Mark
Hi Jon,

2011/10/11 Jon Schewe 

> Is there any reason (besides good security) that I can't use the same
> private key for all bacula clients? Can I use the same pem file as well?
>
> Jon
>
>
Works fine for me here... I'm not trying to protect my machines' data from
each other, only to ensure it's encrypted when offsite.  They all use the
same client cert and key.

Regards,
Mark


Re: [Bacula-users] Problem backup Exchange 2010

2011-10-17 Thread Mark
Hi Joseph,

On Mon, Oct 17, 2011 at 5:24 AM, Joseph L. Casale  wrote:

> >I have a client that upgrade his infrastructure with Exchange 2010.
> >If I enable the exchange Plugins the backup fails
> >
> >Client is Windows 2008 R2 (64bit)
> >Server -> centos + 5.0.1 (24 February 2010) i686-pc-linux-gnu redhat
> >
> >What I have to do make it work?
>
> I do this with a vss script that creates and exposes a snapshot of the
> drives that the
> exchange stores Its data on, but as the DB's are small at about 30gig I
> always do fulls
> every day. After the backup, it releases the exposed drive and drops the
> snapshot.
>
> This flushes the transaction logs and is perfectly valid backup, I verify
> the restore in a
> mock environment once and a while and it works well.
>
>
Since the Windows Bacula agent uses VSS snapshots to do its backups, what is
your script doing differently than simply configuring Bacula to back up, e.g.,
the D: drive (if your Exchange DBs are on D:)?  Honest question, not being
critical.  I've been doing backups as another replier mentioned, using
Windows Backup and then backing up that resulting image with Bacula.
 Perhaps that's overkill?

Thanks,
Mark


[Bacula-users] FreeBSD 9 and ZFS with compression - should be fine?

2012-02-08 Thread Mark
Just checking to see if people are having success with storage daemon
running on FreeBSD 9.0 with ZFS and compression enabled?  I ask because I'm
having issues with the backups completing without any errors reported, but
then an immediate restore attempt fails due to block checksum mismatches,
or trying something like 'bls -j -v -V Full-0079 FileStorage1' will fail
and exit with a block checksum mismatch.

The pool is a raidz1 made up of 5 1.5TB drives.  I can run a scrub, get a
clean result, run a backup, then have a restore from that backup fail (only
for larger backups, small ones seem fine).  A scrub run after that will
also report errors, usually 1 or 2 out of roughly 600GB of data, and it
will show them as repaired.  I'm just trying to determine if I'm being
bitten by the SATA controller, it's an ' '
and I have to set the storage type to IDE instead of AHCI and set
'hint.ahci.0.msi=0' in loader.conf or the system can't even see the drives.
 Or maybe what I'm trying here is a bad idea?  I'd just like the
compression without the overhead and slowdown on the clients that comes
from enabling compression in the fileset.  I'm a FreeBSD neophyte, is the
SATA/AHCI stuff just not good yet or would much better results be likely
with a newer board/controller?

Thanks for any info,
Mark


Re: [Bacula-users] FreeBSD 9 and ZFS with compression - should be fine?

2012-02-09 Thread Mark
Hello again,

On Thu, Feb 9, 2012 at 12:22 PM, Steven Schlansker wrote:

>
> On Feb 9, 2012, at 10:07 AM, Martin Simmons wrote:
>
> >>>>>> On Wed, 8 Feb 2012 20:22:46 -0600, Mark  said:
> >>
> >> Just checking to see if people are having success with storage daemon
> >> running on FreeBSD 9.0 with ZFS and compression enabled?  I ask because
> I'm
> >> having issues with the backups completing without any errors reported,
> but
> >> then an immediate restore attempt fails due to block checksum
> mismatches,
> >> or trying something like 'bls -j -v -V Full-0079 FileStorage1' will fail
> >> and exit with a block checksum mismatch.
> >>
> >> The pool is a raidz1 made up of 5 1.5TB drives.  I can run a scrub, get
> a
> >> clean result, run a backup, then have a restore from that backup fail
> (only
> >> for larger backups, small ones seem fine).  A scrub run after that will
> >> also report errors, usually 1 or 2 out of roughly 600GB of data, and it
> >> will show them as repaired.  I'm just trying to determine if I'm being
> >> bitten by the SATA controller, it's an '  controller>'
> >> and I have to set the storage type to IDE instead of AHCI and set
> >> 'hint.ahci.0.msi=0' in loader.conf or the system can't even see the
> drives.
> >> Or maybe what I'm trying here is a bad idea?  I'd just like the
> >> compression without the overhead and slowdown on the clients that comes
> >> from enabling compression in the fileset.  I'm a FreeBSD neophyte, is
> the
> >> SATA/AHCI stuff just not good yet or would much better results be likely
> >> with a newer board/controller?
> >
> > I think your ZFS setup should work fine, but I don't know about your
> specific
> > hardware.
> >
> > You must have hardware problems -- not necessarily in the SATA controller
> > though.  The block checksum mismatches suggest that the wrong data was
> written
> > to the disk.
> >
> > Have you got EC RAM?  Have you run Memtest86?
>
> I run a FreeBSD 9.0 setup with ZFS (8 drives in raidz2).  Compression and
> dedup on.
>
> It runs like a champ, no problems at all so far.  I'll second the guess
> that you
> have hardware problems.
>
>

Thank you Martin and Steven for your replies.  I'm inclined to agree with
you, as a motherboard swapout has resolved the issue.  I don't know if it's
the different RAM or SATA controller, but everything else (cables, drives,
etc) is the same and the problem is gone.  Thanks, I just wanted to make
sure I wasn't doing something sketchy, but it certainly seemed like it was
a logical and pretty simple setup.

Steven, out of curiosity, do you see any benefit with dedup (assuming that
bacula volumes are the only thing on a given zfs volume).  I did some
initial trials and it appeared that bacula savesets don't dedup much, if at
all, and some searching around pointed to the bacula volume format writing
a unique value (was it jobid?) to every block, so no two blocks are ever
the same.  I'd backup hundreds of gigs of data and the dedupratio always
remained 1.00x.

Regards,
Mark
--
Virtualization & Cloud Management Using Capacity Planning
Cloud computing makes use of virtualization - but cloud computing 
also focuses on allowing computing to be delivered as a service.
http://www.accelacomm.com/jaw/sfnl/114/51521223/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] feature request: add option to assign purged volumes to Scratch pool

2007-02-08 Thread mark . bergman

Item  1:  add option to assign purged volumes to Scratch pool

  Date:   08 Feb, 2007 
  Origin: Mark Bergman 
  Status: 

  What:  
Add an option to allow purged volumes to be automatically assigned
to the Scratch pool upon reuse.

  Why:   
It's often difficult to predict how much media each pool will require 
for a backup. The "Scratch" pool, which allows media to be automatically
assigned to any other pool, is an excellent way of dealing with this.

However, as media are purged and recycled, they stay within the pool
to which they were originally assigned. While this has some advantages,
I'd like to see the option to have purged media automatically reassigned
to the Scratch pool, so that (upon reuse) they can be moved to any pool.

  Notes:
It would be helpful to set the "Assign to Scratch Pool Upon Purge"
option individually for each piece of media. This setting could be
recorded in the database or on the media label itself.

For example, if I've got a mix of LTO-2 and LTO-3 tapes in an auto
changer, I may want to assign the LTO-3 tapes to the "Full" pool, and
have them fixed in that pool.

However, the LTO-2 tapes would begin by being assigned to the Scratch
pool. As they are purged, I'd like them to be re-assigned to the Scratch
pool.
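For illustration, the proposed behavior might be configured along these lines (a sketch only; the "RecyclePool" directive name is taken from this proposal, not from a release current at the time of this request):

```
Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  # hypothetical: on purge, return the volume to the Scratch pool
  # rather than keeping it assigned here
  RecyclePool = Scratch
}
```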



Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu



The information contained in this e-mail message is intended only for the 
personal and confidential use of the recipient(s) named above. If the reader of 
this message is not the intended recipient or an agent responsible for 
delivering it to the intended recipient, you are hereby notified that you have 
received this document in error and that any review, dissemination, 
distribution, or copying of this message is strictly prohibited. If you have 
received this communication in error, please notify us immediately by e-mail, 
and delete the original message.



[Bacula-users] mysql db dump problems, making sense of dbcheck results

2007-02-16 Thread mark . bergman

I'm running bacula under Linux, and I'm experiencing sporadic segmentation
faults when dumping the catalog database. The seg faults appear about
every 10-20 days and, once they start, keep recurring until the
database is restarted.

Even after the database is restarted, it sometimes takes a few successive dumps 
to get a dumpfile written successfully.

The command being used to dump the database is:

/usr/bin/mysqldump -u $2$MYSQLPASSWORD -f -v \
--skip-opt  \
--add-drop-table\
--add-locks \
--create-options\
--extended-insert   \
--flush-logs\
--quick \
--set-charset   \
--single-transaction\
 $1 >$1.sql 2> $1.sqldump.errs

When the dump is unsuccessful, the bacula.sqldump.errs file simply ends with the
lines:
-- Retrieving table structure for table File...
-- Sending SELECT query...
-- Retrieving rows...
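As an aside, a truncated dump like this can be caught mechanically. A hedged sketch (it relies on the trailing "-- Dump completed" comment that mysqldump appends by default; file names here are illustrative):

```shell
# Sketch: treat a dump as complete only if its last line carries the
# "Dump completed" marker that mysqldump normally appends on success.
dump_looks_complete() {
    tail -n 1 "$1" | grep -q 'Dump completed'
}

# demo with a fake dump file
printf 'CREATE TABLE t (a int);\n-- Dump completed on 2007-02-16\n' > /tmp/fake.sql
if dump_looks_complete /tmp/fake.sql; then echo complete; else echo truncated; fi
# prints "complete"
```

A wrapper around the dump script could retry, or alert, whenever this check fails.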


The seg faults began when the database was using ISAM tables, and have continued
through a few db upgrades, and even after dropping and reloading the database.

All other bacula and db operations seem fine (backups, restores, creating/
dropping indices, optimizing the database). The bacula server runs other 
functions as well (cluster head node and computational server)--and there 
aren't any indications of memory or other hardware errors.

I am clearly not a DBA, but I suspect some data within the database is causing
the problem. Does that sound reasonable?


Here's the environment:

Bacula 1.38.11 (soon to upgrade)
Linux 2.4.26 (FC1 based)
Mysql 5.0.22
About 20 clients

Currently, the database is about 11GB while running, and about 3.3GB when 
dumped. I'm using the InnoDB engine. 

Here are some database stats:
File table contains     26654181 rows
Filename table contains  6482800 rows
Job table contains          1883 rows
JobMedia table contains    28763 rows
Path table contains       597237 rows

The database has File_PathId_idx and File_FilenameId_idx indices.

Full backups are run monthly, and retained for 6 months plus 2 weeks.
Differential backups are run weekly and retained for a month plus a week.
Incremental backups are run nightly and retained for a month plus a week. 

The database has been in production use since August.

Running dbcheck shows:

Found 0 bad Filename records.
Found 0 bad Path records.
Found 0 duplicate Filename records.
Found 0 duplicate Path records.
Found 0 orphaned JobMedia records.
Found 0 orphaned File records.
Found 194350 orphaned Path records.
Found 1212673 orphaned Filename records.
Found 3 orphaned FileSet records.
Found 0 orphaned Client records.
Found 2 orphaned Job records.
Found 0 Admin Job records.
Found 11 Restore Job records.

My questions are:

[1] Are segmentation faults from "mysqldump" a known problem, and is 
there a known solution?

[2] Do the database statistics seem "normal", given the usage that I 
described?

[3] Are the number of orphaned Path and Filename entries found by 
dbcheck reasonable?

[4] Is it safe to have dbcheck delete the orphaned records?

[5] Are there any obvious things that can be done to improve the 
database reliability, stability, or performance?

I understand that there's probably too little information in this post to give 
detailed answers to all these questions, and that doing database tuning via 
e-mail is difficult at best, but I'd appreciate your insight.

Thanks,

Mark


Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu






Re: [Bacula-users] mysql db dump problems, making sense of dbcheck results

2007-02-16 Thread mark . bergman


In the message dated: Fri, 16 Feb 2007 20:07:58 +0100,
The pithy ruminations from Per Andreas Buer were:
=> [EMAIL PROTECTED] wrote
=> >[1] Are segmentation faults from "mysqldump" a known problem, and is 
=> >there a known solution?
=> >   
=> 
=> No. It's usually the result of a bug in either mysql(dump | library | 

Hmmm... The problem has persisted through several versions of mysql.

=> server) or some serious corruption. Does the mysql error log reveal 
=> anything? How about "check table" on the relevant tables?

Running
check table $TABLE;
for each table produces the message:

+---------------+-------+----------+----------+
| Table         | Op    | Msg_type | Msg_text |
+---------------+-------+----------+----------+
| bacula.$TABLE | check | status   | OK       |
+---------------+-------+----------+----------+

for each $TABLE. There were no errors reported.



=> 
=> > (..)
=> >
=> >[5] Are there any obvious things that can be done to improve the 
=> >database reliability, stability, or performance?
=> >   
=> 
=> You might have a memory error. Have you tried running memtest? Could you 

Nope, I haven't tried it. However, this is a production machine, running a lot 
of other things--and there aren't other indications of a memory problem.

=> post your my.cnf together with your machine specs (uname -a and free -m) 
=> gives us the interesting parts.

Sure.

  uname -a:
Linux parthenon 2.4.26-openmosix1 #10 SMP Wed Sep 14 10:18:08 EDT 2005 
i686 i686 i386 GNU/Linux


  free -m
             total       used       free     shared    buffers     cached
Mem:         11841      11594        246          0         97      10076
-/+ buffers/cache:        1419      10421
Swap:         6655        392       6263


###   /etc/my.cnf   ###
# Example MySQL config file for large systems.
#
# This is for a large system with memory = 512M where the system runs mainly
# MySQL.

# The following options will be passed to all MySQL clients
[client]
#password   = your_password
port= 3306
socket  = /var/lib/mysql/mysql.sock

# Here follows entries for some specific programs

# The MySQL server
[mysqld]
port= 3306
socket  = /var/lib/mysql/mysql.sock
skip-locking
key_buffer = 256M
max_allowed_packet = 8M
table_cache = 256
sort_buffer_size = 1M
read_buffer_size = 1M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size= 16M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 4


# Replication Master Server (default)
# binary logging is required for replication
log-bin=mysql-bin

# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id   = 1


# Point the following paths to different dedicated disks
tmpdir  = /san3/var/tmp # It's a faster disk than /
#log-update = /path-to-dedicated-directory/hostname

# Uncomment the following if you are using BDB tables
#bdb_cache_size = 64M
#bdb_max_lock = 10

# Uncomment the following if you are using InnoDB tables
innodb_data_home_dir = /var/lib/mysql/
innodb_data_file_path = ibdata1:100M:autoextend
innodb_log_group_home_dir = /var/lib/mysql/
innodb_log_arch_dir = /var/lib/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 512M
#innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 256M # Default is 64M
innodb_log_buffer_size = 64M    # Default is 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 75

[mysqldump]
quick
max_allowed_packet = 64M

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
safe-updates

[isamchk]
key_buffer = 128M
sort_buffer_size = 128M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 128M
sort_buffer_size = 128M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout
###

Yes, the machine is doing way too many things to be the best choice for a 
bacula server...but that's going to change very soon.


Thanks for all your help,

Mark

=> 
=> 
=> Per.
=> 






Re: [Bacula-users] 'PURGE' command

2007-02-22 Thread mark . bergman


In the message dated: Thu, 22 Feb 2007 16:47:42 EST,
The pithy ruminations from David Romerstein on 
<[Bacula-users] 'PURGE' command> were:
=> Well, I believe that I may have just done something terribly stupid.
=> 
=> As stated before, I'm using bacula (2.0.0, RHEL 4) to archive a very large 
=> amount of data to tape, so that I can remove it from our overloaded 
=> filestore.
=> 
=> While I was running a backup, another admin bounced the MySQL service, 
=> which took less than a minute to come back up. In that time, 1600+ inserts 
=> into table Filename failed. Once I realized what had happened, I thought 
=> to myself, "No problem - I'll kill this job, recycle the tape so I don't 

Hmmm...you must have a typo there. I'm almost certain you meant to say:
"No problem - I'll kill the other admin, then recycle the tape..."

=> have to drive to the colo and physically swap tapes, restart the job, and 
=> all will be well".

Good plan.

=> 
=> I grab the handy-dandy printout of TFM I that have here, and I look up 
=> "Manually Recycling Volumes". I find that I need to run 'update volume' to 
=> make sure that Recycle is set to 'yes', and then run the 'purge jobs 
=> volume' command to mark the volume as purged. "OK", I think to myself, "no 
=> problem". I run 'update volume', and I make sure the volume is set to 
=> recycle. I then run 'purge'. I'm asked to choose to purge files, jobs, or 
=> volume. Reading the line 'purge jobs volume' in the docs, I assume that 
=> the proper choice here is 'jobs', so I choose that... and it appears that 
=> it is now deleting all of the Jobs for the default client:

Yep.

=> 
=> You have the following choices:
=>   1: files
=>   2: jobs
=>   3: volume
=> Choose item to purge (1-3): 2
=> Automatically selected Client: srv01-fd
=> Begin purging jobs from Client "srv01-fd"
=> 
=> ... and 'mytop' tells me that the current running query is "DELETE from 
=> File where JobID=28". JobID 28 just happens to be the first successful 
=> large (18 million files, 300+GB) backup/archive I made.

Ouch.

=> 
=> This... is a problem. There's no failsafe. Yep, there's a "This command 
=> can be DANGEROUS!!!" warning, but no way to stop the action once it's 
=> started.
=> 
=> The main question here is "are all of my jobs going to disappear from the 
=> db?" - if this is the case, question #2 is "am I going to end up running 
=> bscan to restore my data?"

Yes. Maybe.

Do you run a backup catalog job?
if not...then RTFM for "bscan"


Were there any successful backups between the last catalog backup and the 
"oops"?
If not, then think about dropping and restoring the backup catalog,
then overwrite the volume manually (ie., "dd if=/dev/zero of=/dev/tape")
and purge the volume from the database

If so...then RTFM for "bscan", but it may not be that bad...you can
still drop & restore the catalog, then run bscan to recover data from
the missing jobs--if they are significantly smaller than the 300GB
    job from srv01, this may be an advantage

Do you keep the Bacula sql database dumps on-disk or remove them when the
backup catalog job is complete?

If the sql dump file is on disk...then it's all easy.

If not, then restore the dump file...


Note: these are rough ideas, off the top of my head. If Kern, Arno, or anyone 
else has any suggestions, please listen to them!

Good luck,

Mark

=> 
=> -- D


Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu







[Bacula-users] suggestions for improving question & enhancement submissions (Was: Re: Unhelpful help (was Re: Bacula and HP MSL2024autochanger))

2007-03-19 Thread mark . bergman


In the message dated: Fri, 16 Mar 2007 20:53:32 -,
The pithy ruminations from "Rob MacGregor" were:
=> On 3/16/07, Darien Hager <[EMAIL PROTECTED]> wrote:
=> > I wonder if a small improvement would be to basically send them an e-
=> > mail as part of the signup process which has some helpful links,
=> > small manual TOC, etc.

I think that a MAJOR improvement would be a form on the website that prompts 
the user for all the "required" information to submit a question to the list 
(or a bug report):

bacula version
fd platform
sd platform
client platform[s]
specific error messages
debug level used
problem description
attached config files


There's so much information that the process of asking--and more importantly, 
answering--questions could be tremendously improved if the questions were 
posted in a standardized format. This would make them easier for human beings 
to scan and respond to, and easier to index for future searches.

Along these lines, I'd also strongly suggest a form for submitting enhancement 
requests. Kern has a strong preference for having the request submitted in a 
particular format...it would improve the quality of the requests, reduce the 
number of times that they are re-submitted because of "formatting errors", and 
make it easier on Kern (and the other developers) if the data was in a 
standardized format.

Mark

=> 
=> I've seen that done on another list, including direct pointers to the
=> section of the FAQ that details how to report problems.
=> 
=> I think maybe one person has actually provided even the majority of
=> the requested information right off (and nobody all of it).  Even
=> people who harp on about having read the FAQ provide little of the
=> requested information.  The most common statement is "I know I should
=> upgrade, but...".
=> 
=> So, in my experience it may help, but I really doubt it.  The sad fact
=> of life is that most people either don't read any of it (probably
=> because they're in too much of a rush to get their post in so they can
=> get help) or don't understand what they read (either language barriers
=> or a lack of a common knowledge space).
=> 
=> -- 
=>  Please keep list traffic on the list.
=> 
=> Rob MacGregor
=>   Whoever fights monsters should see to it that in the process he
=> doesn't become a monster.  Friedrich Nietzsche
=> 
=> -
=> Take Surveys. Earn Cash. Influence the Future of IT
=> Join SourceForge.net's Techsay panel and you'll get the chance to share your
=> opinions on IT & business topics through brief surveys-and earn cash
=> http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
=> ___
=> Bacula-users mailing list
=> Bacula-users@lists.sourceforge.net
=> https://lists.sourceforge.net/lists/listinfo/bacula-users
=> 








Re: [Bacula-users] Strange "ERROR" on successful backup

2007-03-29 Thread mark . bergman


In the message dated: Thu, 29 Mar 2007 11:09:39 BST,
The pithy ruminations from Alan Brown were:
=> On Wed, 28 Mar 2007, Erich Prinz wrote:
=> 
=> > Others have experienced issues relating to antivirus scan engines
=> > interfering with successful backups. Perhaps it is interfering with
=> > the communications between the daemons (different machines from what
=> > I gather.)
=> >
=> > The others have tested by disabling AV then running a test job.
=> 
=> It would be nice if there was a way of telling AV software not to inspect 
=> files being opened by Bacula.

Well...there is a Windows API for opening files specifically for backups. Some 
virus-protection software (Symantec [Norton] Anti-Virus) allows you to exclude 
all files opened for backup purposes, regardless of the application opening the 
file.

The Bacula documentation states that Bacula backs up Windows systems using the
Windows API calls, so virus protection software should be able to easily exclude
Bacula.

http://www.bacula.org/dev-manual/Windows_Version_Bacula.html

Mark

=> 
=> AB
=> 






[Bacula-users] specifying client-fd:/path/to/exclude in FileSet (dynamic exclude lists)?

2007-03-29 Thread mark . bergman

Is it possible to specify a client name as well as a path for exclusion lists in
a FileSet?

We use NIS in our environment. There's a category of auto-mounted directories,
used for incremental calculations, that never need to be backed up. Those
directories are defined in an automount map served via NIS.

A script is defined within the FileSet resource that dynamically generates a
list of directories to exclude, based on the paths in the automount map.

The problem is that the paths in the exclude list don't include a client name,
leading to the possibility that the same path will exist on different servers,
where one should be backed up.


For example:

  server + path automount map   mounted as  backup?
  = =   ==  ===
  server1:/san1/JohnDoe auto.scratch/scratch/JohnDoeno
  server2:/san1/JohnDoe auto.home   /home/JohnDoe   yes

In this case, if I generate the exclusion list based on the contents of the
automount map "auto.scratch" (which never needs to be backed up), it will
contain "/san1/JohnDoe". This means that the important files in server2:/san1/
JohnDoe will also be excluded.


How is the 
 File = "|"
mechanism implemented in the FileSet resource? Is there any way for the external
program to determine which client backup is causing the call to the program? For
example, if Bacula set and exported environment variables with details about the
job (the job name, client name, level, etc.), then the external program could
test that the server listed in the automount map matches the current client and
no other machine.

Is the dynamic exclude list built for each backup job, or just once when the 
bacula-fd daemon starts?
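Not an answer to the environment-variable question, but one hedged workaround sketch: filter the automount map inside the exclude script itself, emitting only entries that point at a given host, so an identical path on another server is not excluded. Everything here is illustrative; it assumes the script can learn which client the excludes are for (e.g. by running on that client) and that map lines look like "key server:/path".

```shell
# Hypothetical dynamic_excludes helper: given automount-map lines of the
# form "key server:/path" on stdin, print only the paths served by the
# named host.
filter_map_for_host() {
    awk -v h="$1" '$2 ~ "^" h ":" { sub(/^[^:]*:/, "", $2); print $2 }'
}

# demo with two map entries; only server1's path survives
printf 'JohnDoe server1:/san1/JohnDoe\nJaneRoe server2:/san1/JaneRoe\n' \
    | filter_map_for_host server1
# prints "/san1/JohnDoe"
```

In practice the stdin would come from something like `ypcat -k auto.scratch`.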


Environment:
bacula 1.38.11 (soon to move to 2.x.x)
bacula-fd under Linux 
mysql5

= snippet from bacula-dir.conf 
FileSet
{
  Name = "Full Set"
  Ignore FileSet Changes = yes # The default is "no", meaning that a change in
                               # the fileset spec will trigger a full backup
                               # for the next backup. However, if the change
                               # to the fileset is designed to exclude more
                               # files (as it would here, since the default
                               # stance is to back up all local files except
                               # those specifically excluded), then running a
                               # Full would defeat the purpose of changing the
                               # FileSet spec.

  Include
  {
Options
{
  fstype = reiserfs
  fstype = ext2
  fstype = xfs
  fstype = ufs
  onefs = no
  signature = MD5

  Exclude = yes
wildFile = "/usr/local/bacula/var/working/*spool"
wildFile = "/usr/local/bacula/var/working/*sql"
wildFile = "*/var/lib/mysql/*/*.MYI"
wildFile = "/var/lib/mysql/*/*.MYD"
wildDir = "/var/cache/*"
wildDir = "/var/run/*"
wildDir = "/var/spool/clientmqueue/*"
wildDir = "/var/spool/cups/tmp/*"
wildDir = "/lost+found/*"
wildDir = "/dev/pts/*"
wildDir = "/dev/shm/*"
wildDir = "/proc/*"
wildDir = "/tmp/*"
wildDir = "/var/lock/*"
wildDir = "*/var/spool/bacula/*"
wildDir = "bacula-spool/*"
wildDir = "/var/tmp/*"
 }
File = /
  }


  # Exclude Stuff
  Exclude
  {
File = /var/log/wtmp
File = /var/log/wtmpx
File = /var/log/utmp
File = /var/log/lastlog
File = /var/adm/wtmp
File = /var/adm/wtmpx
File = /var/adm/utmp
File = /var/adm/lastlog
File = core
File = /.fsck
File = /.journal
File = "|/usr/local/bacula/bin/dynamic_excludes"
  }
}
===





Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu





[Bacula-users] feature request: report future purge/prune/recycle actions

2007-04-09 Thread mark . bergman
Item 1:   enable bacula to report on purge/prune/recycle actions in the future
  Origin: Mark Bergman <[EMAIL PROTECTED]>
  Date:   Mon Apr  9 11:26:31 EDT 2007
  Status:

  What:   Enable bacula to produce a report on the purge/prune/recycle actions
that would take place at a specified date in the future.

  Why:There are frequent questions about the purge/prune/recycle algorithms
and retention periods. It can be very difficult to predict the
interaction and resource needs of complex retention policies,
and very time-consuming to wait weeks or months to see the
policies in effect.

It would be very helpful to be able to specify a date in the future
and receive a report of all the purge/prune/recycle actions that
would take place up to that date. Of course, this would not take
into account any tapes that fill up over that period.

This would also be a tremendous aid in resource planning (ie., do we
need to order more blank tapes, or will a large number of tapes be
recycled soon? which tapes will be recycled soon, so that they can 
be put into the autochanger in advance? if the retention period is
increased, will there be any tapes left in the scratch pool in 6 
months?)
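Absent such a feature, a crude approximation can be scripted outside Bacula. A sketch under stated assumptions (the "volume last-written retention-days" records would in practice come from a catalog query; GNU date is assumed; all names are illustrative):

```shell
# Sketch of the requested report: print the volumes whose retention
# period would have expired by a chosen target date.
prunable_by() {  # $1 = target date; reads "vol written days" on stdin
    target=$(date -d "$1" +%s)
    while read vol written days; do
        expiry=$(( $(date -d "$written" +%s) + days * 86400 ))
        if [ "$expiry" -le "$target" ]; then echo "$vol"; fi
    done
}

prunable_by 2007-06-01 <<'EOF'
Full-0001 2006-10-01 180
Incr-0002 2007-05-20 30
EOF
# prints "Full-0001"
```

This ignores everything the real feature would have to model (tapes filling, job scheduling), but it shows the shape of the report being requested.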



----
Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu






[Bacula-users] feature request: aliasing directory paths

2007-04-09 Thread mark . bergman
Item 1:   allow bacula to alias one directory path to another
  Origin: Mark Bergman <[EMAIL PROTECTED]>
  Date:   Mon Apr  9 11:26:31 EDT 2007
  Status: 

  What: Allow bacula to alias one directory path for another. 

  Why: Snapshots (or other means of copying a directory tree) may appear under
a mount point, but all references to the data (for future restores)
should point to the original location. An "alias" within bacula would
enable this path substitution.

  Notes: Many operating systems and storage devices (SAN or NAS) allow a
snapshot to be made of a filesystem, often for backup purposes.

Since bacula does file-level backups, the snapshot needs to be mounted
as a filesystem. However, this means that the backup will be associated
with the mount point, not with the original data. 

For example, if a snapshot of server1:/active/db/directory
is mounted under server1:/snapshot, bacula would do the backup
of /snapshot.

The requested alias feature would allow users to:

exclude /active/db/directory
include /snapshot/active/db/directory
alias /snapshot/active/db/directory to /active/db/directory

this means that the backup catalog (and--most importantly, future
restores) would reference /active/db/directory when searching for
files. Since the underlying storage system (SAN, NAS, LVM, etc.) handles
the snapshot feature and ensures that the data is consistent, the
backup is truly of the data in /active/db/directory.
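For illustration, the exclude/include/alias combination above might be written as below. This is purely hypothetical syntax taken from this request; the "Alias" directive does not exist in the releases this request targets:

```
FileSet {
  Name = "db-via-snapshot"
  Include {
    Options { signature = MD5 }
    File = /snapshot/active/db/directory
  }
  # hypothetical directive from this request -- record files found under
  # the snapshot mount point as if they lived at the original location
  Alias = "/snapshot/active/db/directory => /active/db/directory"
}
```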


Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu






Re: [Bacula-users] Speed problem with Powervault 124T

2007-04-24 Thread mark . bergman
In the message dated: Tue, 24 Apr 2007 22:57:34 +0200,
The pithy ruminations from Sysadmin Worldsoft on 
 were:
Hello John,
>
John Drescher a écrit :
>
> You did not give any details of your systems. I assume you have a
> gigabit network between all the involved systems. What database are
> you using? Did you properly set up the indexes? Are you using
> spooling?
>
>
I backup a directory mounted on the same server which the library is
connected.

That could make for good performance.

>
[Directory on NAS Powervault 220S<> Raid Controller] <-> Server <->
[SCSI Adaptec 39160 <> Tape Library Powervault 124T]

Hmmm

What RAID level are you using?

Are you spooling data?

Roughly how many files/directories are involved? What OS is involved? The
performance for many OSs (depending on the filesystem in use) is dramatically
worse for very large directories. I've seen applications that write 10+
files per directory, and that caused a simple "ls" to take about an hour.

Is the bacula database on the same storage device as the data that you are
backing up (ie., is there contention for the disks and on the SCSI bus while
doing backups)?

Is the server doing any other significant IO while doing backups?

Are there multiple backups going on at once?

What kind of speed do you get writing raw data to the tape drive completely
outside of any use of bacula? Try something like:

shut down bacula

insert a blank tape

time tar --totals -c -f /dev/your_tape_device /path/to/your/storage
(assuming GNUtar)

time dd if=/dev/urandom of=/dev/your_tape_device bs=1024 count=102400


>
I use mysql database, i create the database with the script provided by
bacula.
>
Regards.
>



Re: [Bacula-users] Speed problem with Powervault 124T

2007-04-25 Thread mark . bergman


In the message dated: Tue, 24 Apr 2007 22:57:34 +0200,
The pithy ruminations from Sysadmin Worldsoft were:
=> Hello John,
=> 
=> John Drescher a écrit :
=> 
=> > You did not give any details of your systems. I assume you have a
=> > gigabit network between all the involved systems. What database are
=> > you using? Did you properly set up the indexes? Are you using
=> > spooling?
=> > 
=> 
=> I backup a directory mounted on the same server which the library is 
=> connected.

That could make for good performance.

=> 
=> [Directory on NAS Powervault 220S<> Raid Controller] <-> Server <-> 
=> [SCSI Adaptec 39160 <> Tape Library Powervault 124T]

Hmmm

What RAID level are you using?

Are you spooling data?

Roughly how many files/directories are involved? What OS is involved? The 
performance for many OSs (depending on the filesystem in use) is dramatically 
worse for very large directories. I've seen applications that write 10+ 
files per directory, and that caused a simple "ls" to take about an hour.

Is the bacula database on the same storage device as the data that you are 
backing up (ie., is there contention for the disks and on the SCSI bus while 
doing backups)?

Is the server doing any other significant IO while doing backups?

Are there multiple backups going on at once?

What kind of speed do you get writing raw data to the tape drive completely 
outside of any use of bacula? Try:
shut down bacula

insert a blank tape 

time tar --totals -c -f /dev/your_tape_device /path/to/your/storage 
(assuming GNUtar)

time dd if=/dev/urandom of=/dev/your_tape_device bs=1024 count=102400


=> 
=> I use mysql database, i create the database with the script provided by 
=> bacula.
=> 
=> Regards.
=> 




Re: [Bacula-users] cannot open SCSI device '/dev/sg2' - Permission denied

2007-05-08 Thread mark . bergman


In the message dated: Tue, 08 May 2007 16:54:56 PDT,
The pithy ruminations from Mike Toscano on 
<[Bacula-users] cannot open SCSI device '/dev/sg2' - Permission denied> were:
=> Hi. When I try to run backups, I get an error like this:
=> 08-May 11:22 bmm-s1-sd: Alert: cannot open SCSI device '/dev/sg2' -
=> Permission denied
=> 08-May 11:22 bmm-s1-dir: Client1.2007-05-08_11.19.52 Error: Bacula
=> 1.36.3 (22Apr05):
=> 
=> What I've tried:
=> Restarting all bacula services.
=> Combing through my config files.
=> Google -- I couldn't find anyone with the same problem.
=> Creating new volumes for tapes.
=> Staring blankly at the screen.
=> Cursing.
=> Sobbing.
=> 
=> Services seem to be running fine, media is mounted and labeled in
=> bconsole.
=> 
=> Everything was running beautifully for a couple months now. We had to
=> shut the backup server down because of work on our building and after
=> bringing the server back up and changing tapes, I have not been able
=> to do any back-ups successfully. :(   
=> 
=> Can anyone give me a shove in the right direction on what I've got
=> wrong here?
=> 

Sure. The /dev/sg2 device is almost certainly owned by root, and the user 
"bacula" doesn't have permission to open the device.

I'd suggest putting something into your system startup scripts (possibly
/etc/rc.local, or the script that starts bacula) to change the ownership or 
group of the device to something where bacula has permissions.

For example, on my server:

crw-rw  1 root disk 21, 10 Mar 29 19:55 /dev/sg10

where bacula is in group "disk".
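On Linux systems running udev, the same fix can be made persistent with a rule instead of an rc.local chown. A sketch (the rule file name is arbitrary, and the group matches the example above):

```conf
# /etc/udev/rules.d/60-sg-tape.rules  -- file name is an assumption
# Make every SCSI generic device group-accessible to "disk",
# so a bacula user in group "disk" can open the changer device.
KERNEL=="sg*", GROUP="disk", MODE="0660"
```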

Also, in some OS's, the scsi devices are not fixed--if the OS discovers them in 
a different order (not uncommon in a SAN environment), the device that controls 
the tape drive may have a different number at each boot.

I've written a script to walk through the /dev/sg* devices, run an mtx command 
to determine if that device is the tape changer, and create a symlink from
/dev/sgWHATEVER to /dev/changer. That way, all my bacula configs (and other
admin tools and documentation) can always refer to /dev/changer.
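A sketch of that detection loop (assumes mtx is installed and that its inquiry output identifies changers with the string "Medium Changer"; the /dev/changer symlink name follows the description above):

```shell
#!/bin/sh
# Return the first sg device whose inquiry data identifies a medium changer.
# The "Medium Changer" match is based on typical `mtx inquiry` output and
# may need adjusting for your hardware.
find_changer() {
    for dev in "$@"; do
        if mtx -f "$dev" inquiry 2>/dev/null | grep -q "Medium Changer"; then
            echo "$dev"
            return 0
        fi
    done
    return 1
}

# Typical use at boot (e.g. from rc.local):
#   changer=$(find_changer /dev/sg*) && ln -sf "$changer" /dev/changer
```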

Mark

=> Thank you!
=> Mike
=> 




[Bacula-users] Feature: Reduce Backup Size on RAID

2007-06-04 Thread Mark Best
I'm sorry if this question is easily apparent in the Bacula
documentation but I simply can not find a straight answer. 

My boss wants to move away from our current backup system; so I
recommended Bacula. 

He loves the power and cost savings we would receive from Bacula but
will not convert based on one caveat. 

 

I'm not sure what term is used to describe the feature, but basically the
backup server only saves one copy of each file. Using MD5, it checks whether
a file is different; if it's not, it considers it to be the same file and
doesn't back up multiple copies of it.

For example, most *nixes have the /bin/ls binary. Assuming it's the same
'ls' that another system has, the backup server keeps only one copy of the
binary but two entries in the database, one for each PC.

 

This feature becomes very helpful as we have, say, 300 workstations all
running WinXP with ~2GB of 'system' files each. That could result in
'savings' of ~598GB of hard drive space, not to mention bandwidth on the
network.

(Please note: we back up everything to a hard-disk RAID, not tape, and
have bandwidth constraints.)
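The feature being described is usually called single-instance storage, or deduplication. The core idea can be illustrated outside of any particular backup tool as a content-addressed store keyed by checksum (a hypothetical sketch; function and path names are made up):

```shell
#!/bin/sh
# Content-addressed store sketch: identical file contents are kept once
# under $pool, while each client's catalog records only the returned hash.
# Assumes GNU md5sum ("<hash>  <name>" output format).
store_file() {
    src=$1
    pool=$2
    hash=$(md5sum "$src" | awk '{print $1}')
    # Copy only if this content has never been seen before.
    [ -e "$pool/$hash" ] || cp "$src" "$pool/$hash"
    echo "$hash"
}
```

Two workstations backing up an identical /bin/ls would both get the same hash back, so the store grows by one copy while the catalog gains two entries.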

 

Can Bacula perform the above feature? 

What PROs and CONs should I make sure he knows about if it can?



[Bacula-users] Testing bacula by using multiple bacula-dir for each bacula-fd (Was: Re: SVN warning)

2007-06-19 Thread mark . bergman


In the message dated: Tue, 19 Jun 2007 09:06:03 +0200,
The pithy ruminations from Kern Sibbald on 
<[Bacula-users] SVN warning> were:
=> Hello,
=> 

[SNIP!]

=> 
=> The upside is that once I finish "tuning" the code, the SD should make much 
=> more efficient re-use of drives than it previously did, and the proper 
=> structures now exist to easily implement drive switching during a job.
=> 

Very appealing.

=> Bottom line, I encourage as much testing as possible, because at this point, 
=> that is what is needed, and I have thoroughly tested the code with lots of 
=> regression testing, but be aware that the SD may be less stable than 
=> previously (unfortunately the regression tests don't cover *everything*).


I've got a working, stable, production installation of bacula, which I cannot 
break (too badly) in testing. However, I've also got access to a second 
tape autochanger, attached to a different server than our production backup 
machine. This will eventually become our new backup server.

In order to both configure our new bacula installation, and test new releases 
of bacula, can I use a single instance of the bacula-fd (v. 1.38.11) on each 
client, and the new bacula-sd and bacula-dir in parallel with the existing 
production bacula-sd and bacula-dir? Will that be a meaningful test 
environment for the new release, or will the version mismatch override any
intrinsic issues in the bacula-2.x code, rendering bug reports meaningless?

Aside from configuring the clients to accept connections from an alternative 
bacula-dir, and ensuring that the schedules don't conflict, do you have any 
suggestions for setting up this kind of environment (ie., each bacula-fd client 
will connect with two different bacula-dir/bacula-sd servers)?

Thanks,

Mark
=> 
=> Best regards,
=> 
=> Kern
=> 




Re: [Bacula-users] Autoloader: Replace tape in magazine after backups complete?

2006-09-07 Thread mark . bergman


In the message dated: Thu, 07 Sep 2006 16:55:15 EDT,
The pithy ruminations from "Jeremy Koppel" on 
<[Bacula-users] Autoloader: Replace tape in magazine after backups complete?> were:

=> 
=> Well, we just had an autoloader go bad on us.  Quantum sent us a new

Ouch.

=> chassis under warranty, but there was a tape in the drive when the unit
=> tanked out on us.  There's no way to get at it manually, and no user
=> serviceable parts.  They tell me that the only way to get it out is by
=> disassembling the drive, which they'll do, but the process destroys the
=> tape.

Oh, that's bad. I've got the same thing with a long-out-of-warranty AIT2 
changer.

=> 
=>  
=> 
=> So, I'm thinking it would be best to have Bacula replace whatever tape
=> it's using back in the autoloader when the last job of the night runs.
=> Anybody know how I'd configure that?

Sure. Assuming that your last backup job is to backup the Bacula catalog
database, and that there's a RunAfter directive that executes a script
like $BACULA/bin/delete_catalog_backup, I add to that script something 
like:


/bin/echo -e "umount\n0\nq\n" | $BACULA/bin/bconsole -c $BACULA/etc/bconsole.conf


This will send the umount command to the Bacula bconsole program. Substitute 
the correct path for your Bacula installation, and repeat the "umount" command 
for each of the tape drives within the autochanger (specifying the drive 
number).
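For an autochanger with more than one drive, those repeated "umount" commands can be generated in a small loop. A sketch that assumes bconsole prompts for the drive number after each "umount", exactly as the single-drive example above answers with "0":

```shell
#!/bin/sh
# Emit bconsole input that unmounts drives 0..n-1, then quits.
# Assumes each "umount" is answered with a drive number at bconsole's
# prompt, matching the single-drive pattern above.
unmount_all_drives() {
    n=$1
    d=0
    cmds=""
    while [ "$d" -lt "$n" ]; do
        cmds="${cmds}umount\n${d}\n"
        d=$((d + 1))
    done
    printf '%b' "${cmds}q\n"
}

# Usage (hypothetical paths, as above):
#   unmount_all_drives 2 | $BACULA/bin/bconsole -c $BACULA/etc/bconsole.conf
```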

Mark

=> 
=>  
=> 
=> Thanks!
=> 
=>  
=> 
=> --Jeremy Koppel
=> 
=> 
=> 
=> 
[Gack! Horrible MS-Word HTML deleted]




[Bacula-users] fatal error--Bacula wants tape to be in the "other" drive in autochanger

2006-09-12 Thread mark . bergman
I'm running into a frequent situation where Bacula wants a given volume to be 
in one drive of our autochanger, and it doesn't seem to find the volume when 
it's already in the other drive.

This is with Bacula 1.38.9, under Linux (FC1), with a 23-slot, 2 drive 
autochanger and mysql5.

Most backups work fine, but this is a persistent problem.



Here's an example...there are two tapes (volumes 13 and 39) loaded and in use. 

Volume 39 is in the pool "Incremental", and is in drive 0.
Volume 13 is in the pool "Full", and is in drive 1.

I manually begin a new backup (type "Full"). Bacula correctly wants to use 
volume 13, however it insists on using that volume in drive 0 (which is in use 
by other jobs). This results in a fatal error.

-  bconsole session output 
---

Run Backup job
JobName:  athens-full
FileSet:  Full Set
Level:Full
Client:   athens-fd
Storage:  pv132t
Pool: Full
When: 2006-09-12 12:11:26
Priority: 10
OK to run? (yes/mod/no): yes
Job started. JobId=1050
12-Sep 12:11 parthenon-dir: Start Backup JobId 1050, 
Job=athens-full.2006-09-12_12.11.28
12-Sep 12:11 parthenon-sd: athens-full.2006-09-12_12.11.28 Fatal error: 
acquire.c:263 Wanted Volume "13", but device "Drive-0" (/dev/tape0) is busy 
writing on "39" .
12-Sep 12:02 athens-fd: athens-full.2006-09-12_12.11.28 Fatal error: job.c:1617 
Bad response to Append Data command. Wanted 3000 OK data
, got 3903 Error append data

12-Sep 12:11 parthenon-dir: athens-full.2006-09-12_12.11.28 Error: Bacula 
1.38.9 (02May06): 12-Sep-2006 12:11:30
  JobId:  1050
  Job:athens-full.2006-09-12_12.11.28
  Backup Level:   Full
  Client: "athens-fd" mips-sgi-irix6.5,irix,6.5
  FileSet:"Full Set" 2006-07-27 14:15:10
  Pool:   "Full"
  Storage:"pv132t"
  Scheduled time: 12-Sep-2006 12:11:26
  Start time: 12-Sep-2006 12:11:30
  End time:   12-Sep-2006 12:11:30
  Elapsed time:   0 secs
  Priority:   10
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  Volume name(s): 
  Volume Session Id:  53
  Volume Session Time:1157752704
  Last Volume Bytes:  292,937,748,142 (292.9 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  Error
  SD termination status:  Error
  Termination:*** Backup Error ***


stat stor
Automatically selected Storage: pv132t
Connecting to Storage daemon pv132t at parthenon:9103

parthenon-sd Version: 1.38.9 (02 May 2006) i686-pc-linux-gnu redhat (Yarrow)
Daemon started 08-Sep-06 17:58, 51 Jobs run since started.

Running Jobs:
Writing: Full Backup job braid2-full JobId=1039 Volume="13"
pool="Full" device=""Drive-1" (/dev/tape1)"
Files=3,077,247 Bytes=216,468,267,436 Bytes/sec=4,547,939
FDReadSeqNo=30,080,818 in_msg=21084980 out_msg=5 fd=14
Writing: Full Backup job athena1-inc JobId=1045 Volume="39"
pool="Incremental" device=""Drive-0" (/dev/tape0)"
Files=798,055 Bytes=170,634,464,916 Bytes/sec=3,662,705
FDReadSeqNo=9,287,136 in_msg=7011280 out_msg=5 fd=28

-----


I can supply copies of my config files if needed. Please let me know if you've 
got any suggestions.

Thanks,

Mark




Re: [Bacula-users] fatal error--Bacula wants tape to be in the "other" drive in autochanger

2006-09-13 Thread mark . bergman


In the message dated: Wed, 13 Sep 2006 12:33:59 +0200,
The pithy ruminations from Kern Sibbald were:
=> On Wednesday 13 September 2006 11:30, Alan Brown wrote:
=> > On Tue, 12 Sep 2006, Kern Sibbald wrote:
=> > 
=> > > On Tuesday 12 September 2006 18:21, [EMAIL PROTECTED] wrote:
=> > >> I'm running into a frequent situation where Bacula wants a given volume 
=> to
=> > > be
=> > >> in one drive of our autochanger, and it doesn't seem to find the volume 
=> when
=> > >> it's already in the other drive.
=> > >>
=> > >> This is with Bacula 1.38.9, under Linux (FC1), with a 23-slot, 2 drive
=> > >> autochanger and mysql5.
=> > >>
=> > >> Most backups work fine, but this is a persistent problem.
=> > >
=> > > Two drive autochangers did not work correctly until Bacula version 
=> 1.38.11,
=> > > and there have been no such problems posted against 1.38.11.
=> > 
=> > 
=> > Kern: This is exactly the problem I posted last week and I am running 
=> 1.38.11
=> 
=> I don't think it is the same problem, but that is not important.  What is 
=> important is he is not using 1.38.11, and there are known problems with the 

What's important to you--as the developer--may not be as important to me, as an
end-user (who's willing to help improve Bacula as much as possible). From my
perspective, I'm much less concerned about whether there are different issues in
1.38.9 and 1.38.11 than I am about whether the symptom (ie., backups failing) is
still present. It sounds like various users are reporting that the same 
symptoms appear in 1.38.11 and 1.39.20, even if you see that they have 
different causes.


=> management of Volume names on prior versions, so the problem is no longer 
=> relevant to me -- I cannot support old versions other than the most recent 
=> one.  

I think I understand that...and I wasn't actually asking for "support" on an 
older version. In fact, if upgrading to 1.38.11 actually fixes the problem, 
than that's a terrific solution.

However, our one Bacula installation is also our production backup 
system--which is
probably the case for many people. I'm quite reluctant to upgrade to another
version if it doesn't actually solve the problem...and there seems to be a bit
of dispute about whether 1.38.11 really does solve the problem...


So, getting back to my original question:

Is there a definitive solution (and "upgrade to version X.Y.Z" is a 
fine answer)
to the problem where backups fail because Bacula wants to use a volume 
in one tape drive when the volume is already loaded in another drive?

I'm happy to supply other information (debugging output, config files, etc.) as 
needed.

=> 
=> > 
=> > I know why it's occuring - people are running update slots against a 
=> > particular tape drive instead of the autochanger
=> > 
=> > And I know why they do that - running update slots against the changer 
=> > only ever unloads drive 0, no matter which drive is actually specified.

Right. There's no way (that I've found, with 1.38.9) to run "update slots" 
without it explicitly referencing tape drive 0.


=> > 
=> > 
=> > I know you don't regard it as a bug that when run this way, all the tapes 
=> > in the changer end up associated with an individual drive, but I do.
=> 
=> I'm not planning to change the current behavior, but I can make sure that 
the 
=> document is explicit enough on this point.
=> 


I'm a bit confused here... If volumes become associated with a particular drive 
as the result of running "update slots", then it sounds like there's no way 
around the fundamental problem, regardless of whether I upgrade to 1.38.11.

=> > 
=> > Additionally it's definitely a bug that update slots ignores any drive 
=> > specification.
=> > 
=> > IE:
=> > 
=> > *update slots

[SNIP!]


=> > 3307 Issuing autochanger "unload slot 47, drive 0" command.
=> > This is not drive 1
=> 
=> I wasn't aware of this problem with the drive number, and I don't remember 
=> anyone mentioning it before.  Thanks for pointing it out. The big question 
is 

I'll take a closer look at the output from "update slots" in whatever version 
I'm running, and report the results.

=> whether or not this problem exists in 1.39.22 or not, because it is unlikely 
=> that I will be making any further patches for 1.38.11.  
=> 

For my clarification...do you consider the 1.39.x series to be production 
quality?

Kern --


Re: [Bacula-users] Proper Autochanger configuration with different drives in same changer

2006-09-19 Thread mark . bergman


In the message dated: Tue, 19 Sep 2006 13:12:48 BST,
The pithy ruminations from Alan Brown were:

=> 
=> On Mon, 18 Sep 2006, Pietari Hyvärinen wrote:
=> 
=> > Hi!
=> > Our Autochcanger ( dell136T) is connected to four tape drives.
=> > Two of the are older LTO-2 and the rest are LTO-3 capable drives.
=> > How I define bacula-dir to understand that there are two of each
=> > drives in same autochanger?
=> 
=> You have the bacula-sd definition right... BUT...
=> 
=> Bacula's current state means you can only use LTO3 tapes in the LTO3
=> drives and LTO2 tapes in the LTO2 drives.

Wow.

=> 
=> Bacula has no way of knowing that you can use LTO2 tapes in the LTO3
=> drives. I've been requesting that functionality for a while, as we have
=> always planned to install LTO3/LTO4 drives in our robot when hardware
=> upgrade time happens.

Thank you very much for bringing this up and giving an explanation.

We're about to be getting an LTO3 drive, and I was planning to continue to use 
our stock of LTO2 tapes for incrementals, and add some LTO3 media for full 
backups. This limitation of bacula may cause serious changes to that plan.


=> 
=> 
=> The main stumbling block is really in bacula-dir
=> 
=> # Definition of LTO-2 tape storage autochanger
=> Storage {
=>    Name = Autochanger
=>    Autochanger = yes
=>    Address = XX # N.B. Use a fully qualified name here
=>    SDPort = 9103
=>    Password = "supercalafragilisticexpialidocious"
=>    Device = Autochanger   # must be same as Device in Storage daemon
=>    Media Type = LTO-2     # must be same as MediaType in Storage daemon
=>    Maximum Concurrent Jobs = 15
=> }
=> 
=> As you can see, the mediatype is locked into the changer entry and
=> currently the only way to define LTO3 jobs is to create another changer,
=> which will probably cause database issues with "update slots", etc.


Hmmm... What if you define all the media types as LTO3...does bacula check the 
media type when loading a volume, or just the volume name?

If bacula loads an LTO2 tape into what it thinks is an LTO3 drive, will the 
backup proceed normally, except that the data will reach EOT earlier than 
expected?


Kern -- 
I will be able to use the LTO3 autochanger, and a new server (Linux, 
RHEL4, x86_em64t, mysql5) that will eventually host Bacula, for beta
testing new versions, providing an additional platform to resolve this
issue, etc. Please contact me off the list if there are specific things
you'd like run there.


Thanks,

Mark
 
=> 
=> 
=> > We are using bacula version 1.38.11-3.
=> >

[SNIP!]






[Bacula-users] bacula 1.38.11 wants tape to be in the "other" drive in autochanger

2006-09-20 Thread mark . bergman
I'm running into a situation where Bacula wants a given volume to be 
in one drive of our autochanger, and it doesn't seem to find the volume when 
it's already in the other drive. I saw similar situations under Bacula 1.38.9, 
but I've since upgraded to Bacula 1.38.11 (for the -dir and -sd, clients are 
still at 
1.38.9), under Linux (FC1), with a 23-slot, 2 drive autochanger and mysql5.

Most recently, I changed many tapes in the autochanger, restarted bacula, and 
backups have been working fine (with many tape changes) for about a week. 
During this period, there were times when each drive was in use individually, 
and when both drives were in use. Since restarting bacula, there have been no 
manual tape changes, no manual tape movements (via the autochanger or using 
mtx), and I have not run "update slots" or mounted/umounted volumes within 
bacula. All backups for the last week have been initiated by bacula 
automatically, and there have been no restores in this time period.



Now, one job is hung because bacula wants tape "55" in 
drive 1 when it's presently in drive 0:
20-Sep 06:43 parthenon-sd: Please mount Volume "55" on Storage 
Device "Drive-1"
(/dev/tape1) for Job athena1-inc.2006-09-19_23.11.02


Other jobs failed because bacula wanted to load tape 13 into drive 0, when 
that was already occupied.

Please see output from bconsole below, and note that bacula claims that it's 
writing to both tapes 52 and 55 in the same volume!


I am happy to supply config files or additional debugging information.

Thanks,

Mark

-
19-Sep 23:09 parthenon-dir: Start Backup JobId 1148, 
Job=sbia-full.2006-09-19_23.09.01
19-Sep 23:09 parthenon-sd: sbia-full.2006-09-19_23.09.01 Fatal error: 
acquire.c:263 Wanted Volume "13", but device "Drive-0" (/dev/tape0) is busy 
writing on "55" .
19-Sep 23:09 sbia-fd: sbia-full.2006-09-19_23.09.01 Fatal error: job.c:1616 Bad 
response to Append Data command. Wanted 3000 OK data
, got 3903 Error append data

19-Sep 23:09 parthenon-dir: sbia-full.2006-09-19_23.09.01 Error: Bacula 1.38.11 
(28Jun06): 19-Sep-2006 23:09:02
  JobId:  1148
  Job:sbia-full.2006-09-19_23.09.01
  Backup Level:   Full
  Client: "sbia-fd" i686-pc-linux-gnu,redhat,7.2
  FileSet:"Full Set" 2006-07-27 14:15:10
  Pool:   "Full"
  Storage:"pv132t"
  Scheduled time: 19-Sep-2006 23:09:00
  Start time: 19-Sep-2006 23:09:02
  End time:   19-Sep-2006 23:09:02
  Elapsed time:   0 secs
  Priority:   10
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  Volume name(s): 
  Volume Session Id:  51
  Volume Session Time:1158351490
  Last Volume Bytes:  325,491,826,996 (325.4 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  Error
  SD termination status:  Error
  Termination:*** Backup Error ***

19-Sep 23:09 parthenon-sd: cbic-full.2006-09-19_23.09.00 Fatal error: 
acquire.c:263 Wanted Volume "13", but device "Drive-1" (/dev/tape1) is busy 
writing on "52" .
19-Sep 23:07 cbic-fd: cbic-full.2006-09-19_23.09.00 Fatal error: job.c:1617 Bad 
response to Append Data command. Wanted 3000 OK data
, got 3903 Error append data

19-Sep 23:09 parthenon-dir: cbic-full.2006-09-19_23.09.00 Error: Bacula 1.38.11 
(28Jun06): 19-Sep-2006 23:09:03
  JobId:  1147
  Job:cbic-full.2006-09-19_23.09.00
  Backup Level:   Full
  Client: "cbic-fd" sparc-sun-solaris2.8,solaris,5.8
  FileSet:"Full Set" 2006-07-27 14:15:10
  Pool:   "Full"
  Storage:"pv132t"
  Scheduled time: 19-Sep-2006 23:09:00
  Start time: 19-Sep-2006 23:09:02
  End time:   19-Sep-2006 23:09:03
  Elapsed time:   1 sec
  Priority:   10
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  Volume name(s): 
  Volume Session Id:  50
  Volume Session Time:1158351490
  Last Volume Bytes:  325,491,826,996 (325.4 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  Error
  SD termination status:  Error
  Termination:*** Backup Error ***


athena1-fd:  Disallowed filesystem. Will not descend into /sbia/home
athena1-fd:  Disal

[Bacula-users] bacula configure fails on Unable to find Gnome 2 installation

2006-09-21 Thread Mark Maciolek

hi,

Trying this on Redhat Enterprise 3, I have attached the config.log

src/gnome2-console does exist in the bacula directory.

Mark
--
-
Mark Maciolek   Network Administrator
[EMAIL PROTECTED]   Morse Hall Room 338AB
University of New Hampshire862-3050
-
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by configure, which was
generated by GNU Autoconf 2.59.  Invocation command line was

  $ ./configure --with-mysql --enable-gnome

## - ##
## Platform. ##
## - ##

hostname = scatterbrain.sr.unh.edu
uname -m = i686
uname -r = 2.4.21-37.ELsmp
uname -s = Linux
uname -v = #1 SMP Wed Sep 7 13:28:55 EDT 2005

/usr/bin/uname -p = unknown
/bin/uname -X = unknown

/bin/arch  = i686
/usr/bin/arch -k   = unknown
/usr/convex/getsysinfo = unknown
hostinfo   = unknown
/bin/machine   = unknown
/usr/bin/oslevel   = unknown
/bin/universe  = unknown

PATH: /usr/kerberos/sbin
PATH: /usr/kerberos/bin
PATH: /net/home/rcc/mlm/bin
PATH: /sbin
PATH: /usr/bin/X11
PATH: /usr/local/bin
PATH: /usr/local/app/ssh/bin
PATH: /usr/freeware
PATH: /usr/bsd
PATH: /bin
PATH: /etc
PATH: /usr/bin
PATH: /usr/etc
PATH: /usr/sbin


## --- ##
## Core tests. ##
## --- ##

configure:1438: checking for true
configure:1456: found /bin/true
configure:1468: result: /bin/true
configure:1483: checking for false
configure:1501: found /bin/false
configure:1513: result: /bin/false
configure:1580: checking for gcc
configure:1596: found /usr/bin/gcc
configure:1606: result: gcc
configure:1850: checking for C compiler version
configure:1853: gcc --version >&5
gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-53)
Copyright (C) 2002 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

configure:1856: $? = 0
configure:1858: gcc -v >&5
Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/3.2.3/specs
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man 
--infodir=/usr/share/info --enable-shared --enable-threads=posix 
--disable-checking --with-system-zlib --enable-__cxa_atexit 
--host=i386-redhat-linux
Thread model: posix
gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-53)
configure:1861: $? = 0
configure:1863: gcc -V >&5
gcc: argument to `-V' is missing
configure:1866: $? = 1
configure:1889: checking for C compiler default output file name
configure:1892: gcc conftest.c  >&5
configure:1895: $? = 0
configure:1941: result: a.out
configure:1946: checking whether the C compiler works
configure:1952: ./a.out
configure:1955: $? = 0
configure:1972: result: yes
configure:1979: checking whether we are cross compiling
configure:1981: result: no
configure:1984: checking for suffix of executables
configure:1986: gcc -o conftest conftest.c  >&5
configure:1989: $? = 0
configure:2014: result: 
configure:2020: checking for suffix of object files
configure:2041: gcc -c   conftest.c >&5
configure:2044: $? = 0
configure:2066: result: o
configure:2070: checking whether we are using the GNU C compiler
configure:2094: gcc -c   conftest.c >&5
configure:2100: $? = 0
configure:2104: test -z 
 || test ! -s conftest.err
configure:2107: $? = 0
configure:2110: test -s conftest.o
configure:2113: $? = 0
configure:2126: result: yes
configure:2132: checking whether gcc accepts -g
configure:2153: gcc -c -g  conftest.c >&5
configure:2159: $? = 0
configure:2163: test -z 
 || test ! -s conftest.err
configure:2166: $? = 0
configure:2169: test -s conftest.o
configure:2172: $? = 0
configure:2183: result: yes
configure:2200: checking for gcc option to accept ANSI C
configure:2270: gcc  -c -g -O2  conftest.c >&5
configure:2276: $? = 0
configure:2280: test -z 
 || test ! -s conftest.err
configure:2283: $? = 0
configure:2286: test -s conftest.o
configure:2289: $? = 0
configure:2307: result: none needed
configure:2325: gcc -c -g -O2  conftest.c >&5
conftest.c:2: syntax error before "me"
configure:2331: $? = 1
configure: failed program was:
| #ifndef __cplusplus
|   choke me
| #endif
configure:2516: checking for g++
configure:2532: found /usr/bin/g++
configure:2542: result: g++
configure:2558: checking for C++ compiler version
configure:2561: g++ --version >&5
g++ (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-53)
Copyright (C) 2002 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

configure:2564: $? = 0
configure:2566: g++ -v >&5
Reading specs from /usr/l

Re: [Bacula-users] bacula 1.38.11 wants tape to be in the "other" drive in autochanger

2006-09-21 Thread mark . bergman


In the message dated: Thu, 21 Sep 2006 12:34:47 BST,
The pithy ruminations from Alan Brown on 
 were:
=> On Wed, 20 Sep 2006 [EMAIL PROTECTED] wrote:
=> 
=> > I'm running into a situation where Bacula wants a given volume to be
=> > in one drive of our autochanger, and it doesn't seem to find the volume 
when
=> > it's already in the other drive. I saw similar situations under Bacula 
1.38.9,
=> > but I've since upgraded to Bacula 1.38.11 (for the -dir and -sd, clients 
are still at
=> > 1.38.9), under Linux (FC1), with a 23-slot, 2 drive autochanger and mysql5.
=> 
=> If you use bconsole - query  - 15: List Volumes Bacula thinks are in changer
=> 
=> What do you see in the "storage" column? All the entries should be for the 
=> autochanger and not the individual drives.

Yes, that's exactly what I see. There's only one storage device defined in my 
bacula-dir.conf, and that's the autochanger. All the volumes belong to that 
device.

=> 
=> 
=> The fastest fix in the meantime is to unload the tape, then rerun the 
=> bconsole "mount" command.

Yes...

Thanks,

Mark

=> 
=> AB
=> 





The information contained in this e-mail message is intended only for the 
personal and confidential use of the recipient(s) named above. If the reader of 
this message is not the intended recipient or an agent responsible for 
delivering it to the intended recipient, you are hereby notified that you have 
received this document in error and that any review, dissemination, 
distribution, or copying of this message is strictly prohibited. If you have 
received this communication in error, please notify us immediately by e-mail, 
and delete the original message.

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys -- and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] confused about Full vs. Incremental backups & pools

2006-09-25 Thread mark . bergman
[Background]
Our configuration has two pools: Full and Incremental. Incremental and
differential backups are run nightly, writing to the "Incremental" pool. Full
backups are done monthly, writing to the "Full" pool.

Incremental backups are retained for 2 weeks, and full backups are retained for
6 months.

The Full and Incremental pools will use different media (LTO3 and LTO2,
respectively), and may have different physical storage policies
(ie., offsite) when the volumes are not in the changer.
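
For concreteness, pools with those retention periods might be defined in
bacula-dir.conf along these lines (a sketch; only the retention values come
from the description above, the pool names and other directives are typical
defaults, not the poster's actual config):

```
Pool {
  Name = Full
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 6 months   # full backups kept 6 months
}
Pool {
  Name = Incremental
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 14 days    # incrementals kept 2 weeks
}
```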


[Problem]
Here's the problem...as incremental backups expire and are purged, bacula will
often promote the next incremental to a "full" backup, since it correctly
determines that there are no full backups for a particular client in the
"Incremental" pool. Writing a full backup can be disruptive to both the client
and backup server, as some backups are over 2TB, with clients on a slow network.
I want to avoid unscheduled full backups as a result of promoted incrementals,
and I don't want to be doing full backups every 2 weeks to satisfy the retention
period of the "Incremental" pool.

Is there any way to avoid this behavior?


[Goal]
I want a backup configuration where full backups (whether run as scheduled, or
promoted from an incremental) are always written to the "Full" pool.
Specifically, I'd like to have Bacula behave in this way:

[1] all incrementals/differentials should be based on the latest full
backup as found in the Full pool

[2] if an "incremental" backup must be promoted to a "full" backup (ie.,
there's no full backup in the Full pool), that data should be
written to the Full pool

Is this possible?
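
(For what it's worth: Bacula releases newer than the 1.38 series used here add
level-specific pool overrides in the Job resource, which address exactly this
goal. A sketch, assuming a version that supports these directives; the job and
client names are made up:)

```
Job {
  Name = "example-backup"       # hypothetical job name
  Client = example-fd           # hypothetical client
  JobDefs = "DefaultIncJob"
  Pool = Incremental
  Full Backup Pool = Full       # any full -- scheduled or promoted -- goes
                                # to the Full pool
  Incremental Backup Pool = Incremental
}
```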



[Environment]
bacula 1.38.11
Linux (FC1)
mysql5 (innodb)
23 slot, 2 drive autochanger (soon to be 2xLTO3, with a mix of LTO2
and LTO3 media--the LTO2 devoted to the Incremental pool, the
LTO3 for the Full pool only)

Please see below for an edited copy of my bacula-dir.conf.

Thanks,

Mark


---
Director {# define myself
  Name = parthenon-dir
  DIRport = 9101# where we listen for UA connections
  QueryFile = "/usr/local/bacula/bin/query.sql"
  WorkingDirectory = "/usr/local/bacula/var/working"
  PidDirectory = "/usr/local/bacula/var/run"
  Maximum Concurrent Jobs = 65
  Password = "foobar" # Console password
  Messages = Daemon
}

JobDefs {
  Name = "DefaultIncJob"
  Type = Backup
  Level = Incremental
  FileSet = "Full Set"
  Messages = Standard
  Priority = 10
  Storage = pv132t
  Prefer Mounted Volumes = no   # Try to use both tape drives at once
  Maximum Concurrent Jobs = 1   # so that only one instance of each job is 
running
  Pool = Incremental
}
JobDefs {
  Name = "DefaultFullJob"
  Type = Backup
  Level = Full
  FileSet = "Full Set"
  Messages = Standard
  Priority = 10
  Storage = pv132t
  Prefer Mounted Volumes = no   # Try to use both tape drives at once
  Maximum Concurrent Jobs = 1   # so that only one instance of each job is 
running
  Pool = Full
}

# Backup the catalog database (after the nightly save)
Job {
  Name = "BackupCatalog"
  JobDefs = "DefaultIncJob" # No need to keep the catalog backup on full backup 
volumes
  Client = parthenon-fd
  Level = Full
  FileSet="Catalog"
  Schedule = "AfterBackup"
  # This creates an ASCII copy of the catalog
  RunBeforeJob = "/usr/local/bacula/bin/make_catalog_backup bacula bacula 
foobar"
  # This deletes the copy of the catalog
  # RunAfterJob  = "/usr/local/bacula/bin/delete_catalog_backup"
  RunAfterJob  = "/usr/local/bacula/bin/run_after_catalog_backup"
  Write Bootstrap = "/usr/local/bacula/var/working/BackupCatalog.bsr"
  Priority = 11   # run after main backup
}

Job {
  Name = "parthenon-inc"
  Client = parthenon-fd
  JobDefs = "DefaultIncJob"
  Priority = 10
  Write Bootstrap = "/usr/local/bacula/var/working/parthenon.bsr"
  Schedule = "Incremental-Sun"
}
Job {
  Name = "parthenon-full"
  Client = parthenon-fd
  JobDefs = "DefaultFullJob"
  Priority = 10
  Write Bootstrap = "/usr/local/bacula/var/working/parthenon.bsr"
  Schedule = "Full-Sun"
}

Job {
  Name = "delphi-inc"
  Client = delphi-fd
  JobDefs = "DefaultIncJob"
  Write Bootstrap = "/usr/local/bacula/var/working/delphi.bsr"
  SpoolData = yes
  Schedule = "Incremental-Mon"
}
Job {
  Name = "delphi-full"
  Client = delphi-fd
  JobDefs = "DefaultFullJob"
  Write Bootstrap = "/usr/local/bacula/

Re: [Bacula-users] 2 full backups in a row

2006-10-27 Thread mark . bergman


In the message dated: Fri, 27 Oct 2006 11:18:50 +0200,
The pithy ruminations from Arno Lehmann on 
 were:
=> Hi,
=> 
=> On 10/26/2006 3:47 PM, Raphael Perrin wrote:
=> > 
=> > Hi
=> > 
=> > Here is my situation.
=> > 
=> > Once a month I run a full backup on a "FULL" pool.
=> > Then the monday after I run an Incremental backup on an "INCREMENTAL" pool 
and
=> > of course it first run a FULL backup which take more than one night. This
=> > situation is not good.

Yes.

=> 
=> True.
=> Unless you give some more details I can only guess: You've got two jobs 
=> set up, one for full backup, one for incremental backups.
=> 
=> > I use a different pool for the FULL backups to be able to take them away 
from
=> > the company.

I'm in the same situation--there's a pool defined for incremental backups, and 
a pool for full backups (and a "scratch" pool).

I understand why a backup would be promoted from an incremental to a full, if 
no prior "full" exists, but I firmly believe that the backup should then use the 
"full" pool...

=> > 
=> > How can I avoid to run 2 FULL backups in a row ??
=> 
=> Use only one job.
=> If you need a copy of that job, try bcopy or duplicate the media. Or 
=> wait for job copying support in Bacula :-)

I was afraid that was the answer.

This is particularly a problem when backing up some clients takes 2~3 days. 
It's really bad to have an incremental promoted to a "full" backup, and then 
the scheduled full backup runs simultaneously...this kills the performance of 
both the backup server and client.

Thanks,

Mark

=> 
=> > Thanks for your advice
=> > Raphael
=> 
=> Arno
=> 
=> -- 
=> IT-Service Lehmann[EMAIL PROTECTED]
=> Arno Lehmann  http://www.its-lehmann.de
=> 



Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu






[Bacula-users] querying database for filenames by checksum?

2006-10-27 Thread mark . bergman

I've got a sudden need to determine whether there's a copy of a particular file
anywhere on our systems, even if it's been renamed. I really don't want to do
terrible things with shell scripts and "find" and "md5sum" and so forth across
~9TB of data (on ~15 servers).



It was absolutely terrific to be able to query the Bacula database to search 
for the file by name and get a quick answer. Is there any similar way to query 
the database for a single filename (a fully qualified host:/path/to/file), and
have the query return all entries where the checksum matches the specified 
file, regardless of their name or location?

I'd be very grateful if anyone could contribute a snippet of SQL...
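
(One possible shape for such a query, assuming the stock Bacula MySQL catalog
schema: File.MD5 holds the base64-encoded signature, and it is only populated
when the FileSet computes signatures. The path and file name below are
placeholders; untested sketch, not a verified answer:)

```sql
SELECT Client.Name AS client, Path.Path, Filename.Name, Job.StartTime
FROM File
  JOIN Path     ON Path.PathId = File.PathId
  JOIN Filename ON Filename.FilenameId = File.FilenameId
  JOIN Job      ON Job.JobId = File.JobId
  JOIN Client   ON Client.ClientId = Job.ClientId
WHERE File.MD5 <> ''
  AND File.MD5 = (
    SELECT f.MD5
    FROM File f
      JOIN Path p     ON p.PathId = f.PathId
      JOIN Filename n ON n.FilenameId = f.FilenameId
    WHERE p.Path = '/some/path/'   -- placeholder path
      AND n.Name = 'somefile'      -- placeholder name
      AND f.MD5 <> ''
    LIMIT 1
  );
```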

Thanks,

Mark







[Bacula-users] job priorities enhancement request

2006-11-13 Thread mark . bergman
[Enhancement Request]
I'd like to see the priority algorithm adjusted as follows:

If bacula is configured to run jobs concurrently, then all jobs with
a priority number less than the current job (ie., scheduled jobs that 
are more important than the current job) will be allowed to run
concurrently with the current job. The lower numbered
jobs would be put at the front of the queue of the jobs that have the
current priority number. If there are several priorities involved,
then the same algorithm would apply, with lower priority numbered jobs
being effectively raised to the priority of the running job, but ordered
in the queue by their real priority.


Obviously, this would be subject to resource constraints (ie, you can't work 
with two different volumes simultaneously in a single device) and to 
the date/timestamp specified for job scheduling.

This change would be a tremendous improvement in managing multi-drive storage 
devices. This method of scheduling makes bacula more efficient, allowing higher 
importance (lower numerical priority) jobs to make use of physical resources as 
soon as they become available.
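
The proposed rule can be sketched in a few lines of Python (an illustration of
the selection logic only, not Bacula code; function and variable names are
made up, and lower priority numbers are more important):

```python
def select_startable(running_priorities, queue, free_slots):
    """Pick queued jobs allowed to start under the proposed rule.

    running_priorities: priorities of currently running jobs.
    queue: list of (job_name, priority); lower number = more important.
    A queued job may start while jobs run at priority P only if its own
    priority number is <= P, i.e. it is at least as important. Candidates
    are considered in order of their real priority.
    """
    if free_slots <= 0:
        return []
    # The "current" priority is the least important job now running.
    current = max(running_priorities) if running_priorities else float("inf")
    startable = []
    for name, prio in sorted(queue, key=lambda job: job[1]):
        if prio <= current and len(startable) < free_slots:
            startable.append(name)
    return startable
```

Under this rule, a priority-6 restore queued while priority-10 backups run
would start as soon as a drive frees up, instead of waiting for every
priority-10 job to finish.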


[Example]
Recently, I needed to restore some user files. At the time when I scheduled the
job, our tape changer was in use, with backups (priority 10) writing to volumes
in both physical drives.

The restore jobs were scheduled with a lower priority (6).

I was surprised that the restores did not begin when all the backups but one
were complete, and when one tape drive was idle. Unfortunately, the backup job
that was still active takes over 3 days to complete (~1.5TB over a 100Mbs
network). This meant that the restore scheduled on Friday afternoon didn't take
place until Monday afternoon, even though there was a vacant tape device the
whole time.

Essentially, the priority 10 job running in drive 0 blocked a priority 6 job
from using the vacant drive 1.

Checking the bacula documentation, I see that this is the specified
behavior--that when a job of a given priority is running, jobs of a lower
priority won't run until the existing job is finished.


[Environment]
Bacula 1.38.11
Storage Device: PowerVault 132T (2-drive LTO2 autochanger)





On a side note, as our department increasingly depends on bacula, I'm glad to
see so much discussion on the bacula mailing list, and that steps are being
taken to ensure the growth and long-term viability of the project. 







[Bacula-users] Error when restoring with multiple catalogs

2006-11-17 Thread Mark Hazen
Greetings fellow Baculites:

We're reconfiguring our bacula services. We back up several machines to 
disk, and this time around to limit the size of the backups of a couple 
of hosts by splitting into multiple catalogs to give each of these 
servers their own databases (one, for example, is a mail server that 
averages about 45GB spread over 550,000 files).

We also have machines which have smaller backups that have been grouped 
into catalogs all their own, such as one for our desktops. We also have 
a separate catalog for the catalog backups.

Each client (or group) has its own device, storage, pool and volumes. 
I've been naming things in a format such as 'storage.name' and 
'pool.name' to help reduce confusion and make it easier to spot config 
mistakes.

Backups are working, but I'm having trouble restoring files. It appears 
that bacula is attempting to restore files from the catalog I created to 
back up catalogs on our new config (which isn't being used yet, actually).

I've verified my steps in the console a dozen times, to no avail. I do a 
"use" to change the catalog to catalogs.desktop, type "restore", choose 
"select the most recent backup for a client", pick the client, mark 
files, then modify the job to look like this:

  JobName:RestoreFiles
  Bootstrap:  /var/bacula/dir.bigdawg.5.restore.bsr
  Where:  C:/restore/
  Replace:always
  FileSet:fileset.desktop.xpsp2
  Client: caughran
  Storage:storage.desktop
  When:   2006-11-17 15:58:28
  Catalog:catalog.desktop
  Priority:   10

But still, I am receiving an error message saying the following:

=-BEGIN-=
17-Nov 15:55 dir.bigdawg: Start Restore Job RestoreFiles.2006-11-17_15.55.21
*
17-Nov 15:55 storage.bigdawg: RestoreFiles.2006-11-17_15.55.21 Fatal 
error: acquire.c:109 Read open device "device.catalogs" 
(/home3/bacula/pools/catalogs) Volume "desktop.01" failed: ERR=dev.c:450 
Could not open: /home3/bacula/pools/catalogs/desktop.01, ERR=No such 
file or directory
=-END-=

Now it should be noted that this isn't the path that bacula should be 
looking for volumes. The volumes for "catalog.catalogs" (the catalog for 
backups of the catalogs) are under /home3/bacula/pools/catalogs, and 
desktops are under /home3/bacula/pools/desktop. We don't have the path 
listed above anywhere in our configs at all.

Here's the relevant config sections (because of the size of our configs, 
we split out the groups into their own config files and use @includes, 
I've just put them together here for your sanity, and left out the 
nonrelevant sections). The client in this case is a WinXP machine (VSS 
enabled).

[[bacula-dir.conf]]

Client {
   Name = ourclient
   Address = the.client.fqdn
   Catalog = catalog.desktop
   Password = ""
}

Job {
   Name = job.ourclient
   Client = ourclient
   JobDefs = jobdef.desktop.xpsp2
}

JobDefs {
   Name = jobdef.desktop.xpsp2
   Type = Backup
   Level = Full
   FileSet = fileset.desktop.xpsp2
   Schedule = schedule.desktop
   Storage = storage.desktop
   Messages = standard
   Pool = pool.desktop
   Priority = 10
   Enabled = yes
}

Storage {
   Name = storage.desktop
   Address = our.server.fqdn
   SDPort = 9103
   Password = "xxx"
   Device = device.desktop
   Media Type = File
}

Pool {
   Name =  pool.desktop
   Pool Type = Backup
   Recycle = yes
   AutoPrune = yes
   Volume Retention = 13 days
   Accept Any Volume = yes
   Maximum Volumes = 14
   Volume Use Duration = 23 hours# yes I know this could be bad, but
 # this is a disk-only server config
}

   # {{ fileset not included for reasons of space }}

[[bacula-sd.conf]]

Device {
   Name = device.desktop
   Media Type = File
   Archive Device = /home3/bacula/pools/desktop
   LabelMedia = yes
   Random Access = yes
   AutomaticMount = yes
   RemovableMedia = no
   AlwaysOpen = no
}


When I list the volumes and pools (and anything, really), everything 
looks the way I would expect it to. Could someone kindly take a look at 
the above and let me know if I'm missing something here?

Thanks in advance (and thank you, Kern, for Bacula. We're thankful for 
it, and wouldn't use a different solution if someone paid us to).

-mh.
-- 
Mark Hazen
Systems Support
The University of Georgia



Re: [Bacula-users] Error when restoring with multiple catalogs

2006-11-17 Thread Mark Hazen

My apologies, I forgot to finish my thought. That first paragraph should 
have read:

We're reconfiguring our bacula services. We back up several machines to
disk, and this time we wanted to limit the size of the backup 
catalogs of a couple of hosts by splitting into multiple catalogs (one 
server, for example, is a mail server that averages about 45GB spread 
over 550,000 files).

Apologies for any confusion.
-mh.



Re: [Bacula-users] Fatal error - attempt to load empty slot in autochanger

2006-12-01 Thread mark . bergman


In the message dated: Fri, 01 Dec 2006 17:31:55 GMT,
The pithy ruminations from Alan Brown on 
were:
=> On Fri, 1 Dec 2006, Kern Sibbald wrote:
=> 
=> > It appears to me that you have run into some sort of race condition where 
two
=> > threads want to use the same Volume and they were both given access.

I run into this condition* about 2~3 times per month. 

* Possibly from a different cause, but the results are the same.

I'm running 1.38.11 under Linux, with a 23-slot, 2 drive autochanger.

=> > Normally that is no problem.  However, one thread  wanted the particular
=> > Volume in drive 0, but it was loaded into drive 1 so it decided to unload 
it
=> > from drive 1 and then loaded it into drive 0, while the second thread went 
on
=> > thinking that the Volume could be used in drive 1 not realizing that in
=> > between time, it was loaded in drive 0.
=> 
=> Someting similar has happened here today, except that one job is now 
=> hanging waiting for the tape that's in the other drive.

Yes, that's pretty frequent here too.

=> 
=> > I'll look at the code to see if there is some way we can avoid this kind of
=> > problem.  Probably the best solution is to make the first thread simply 
start
=> > using the Volume in drive 1 rather than transferring it to drive 0.

That would be terrific.


In the next few weeks I will have access to a second auto-changer with a 
similar configuration. Before that unit is put into production use, I can set 
up test installs of Bacula if you would like me to run field tests to debug the 
autochanger code.

=> 
=> This would be good.
=> 
=> Removing an apparent bias towards using Drive0 would be good too, as it 
=> causes uneven wear on the drives.

Good point.


=> 
=> AB
=> 








[Bacula-users] "stat stor", "stat clients" report wrong pool for volumes

2006-12-07 Thread mark . bergman

I've noticed that bacula (1.38.11) reports that the same media is in different 
pools at the same time, with "stat stor" and "stat clients" giving the wrong 
information.

For example, I've included output from "stat clients" (below, edited). Note the
pool names in the section "Running Jobs", where Volume 35 is correctly
identified as being in the "Full" pool. Then note that in the "Device Status"
section, that volume is incorrectly identified as being in pool "Incremental".



stat clients
Automatically selected Storage: pv132t
Connecting to Storage daemon pv132t at parthenon:9103

parthenon-sd Version: 1.38.11 (28 June 2006) i686-pc-linux-gnu redhat (Yarrow)
Daemon started 06-Dec-06 18:25, 13 Jobs run since started.

Running Jobs:
Writing: Full Backup job rodos-full JobId=2399 Volume="35"
pool="Full" device=""Drive-0" (/dev/tape0)"
Files=116,037 Bytes=3,071,923,207 Bytes/sec=3,317,411
FDReadSeqNo=957,219 in_msg=649467 out_msg=5 fd=26
Writing: Full Backup job smyrna-full JobId=2400 Volume="35"
pool="Full" device=""Drive-0" (/dev/tape0)"
Files=87,591 Bytes=4,771,369,752 Bytes/sec=5,158,237
FDReadSeqNo=765,792 in_msg=531018 out_msg=5 fd=29
Writing: Incremental Backup job parthenon-inc JobId=2415 Volume="27"
pool="Incremental" device=""Drive-1" (/dev/tape1)"
Files=129 Bytes=4,262,705,479 Bytes/sec=7,792,880
FDReadSeqNo=65,910 in_msg=65618 out_msg=5 fd=11



Device status:
Autochanger "pv132t" with devices:
   "Drive-0" (/dev/tape0)
   "Drive-1" (/dev/tape1)
Device "Drive-0" (/dev/tape0) is mounted with Volume="35" Pool="Incremental"
Slot 5 is loaded in drive 0.
Total Bytes=415,927,129,106 Blocks=6,447,286 Bytes/block=64,511
Positioned at File=418 Block=0
Device "Drive-1" (/dev/tape1) is mounted with Volume="27" Pool="Incremental"
Slot 10 is loaded in drive 1.
Total Bytes=172,406,241,415 Blocks=2,672,469 Bytes/block=64,511
Positioned at File=172 Block=8,816

--

The "stat stor" command also reports the wrong information, while query number 
15 (reporting what volumes Bacula thinks are in the autochanger) correctly 
identified volume 000035 as being in the Full pool.

I have seen this behavior on many volumes. It's confusing to human beings, but 
apparently Bacula doesn't use the "stat stor" results to determine what pool a 
volume is in, so it does select the correct media for writing full vs. 
incremental backups.




Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu



The information contained in this e-mail message is intended only for the 
personal and confidential use of the recipient(s) named above. If the reader of 
this message is not the intended recipient or an agent responsible for 
delivering it to the intended recipient, you are hereby notified that you have 
received this document in error and that any review, dissemination, 
distribution, or copying of this message is strictly prohibited. If you have 
received this communication in error, please notify us immediately by e-mail, 
and delete the original message.

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] avoid promotion of incremental backups to full, based on client name?

2006-12-14 Thread mark . bergman

Based on recent discussions on the list, I'm experimenting with cleaning up the 
bacula-dir.conf, and defining one job for each client instead of defining 
"Full" and "Incremental" jobs with different names and hard-coded levels.

For example, I've changed:
Job {
Name = "snickers-full"
Client = snickers-fd
JobDefs = "DefaultFullJob"
Write Bootstrap = "/usr/local/bacula/var/working/snickers.bsr"
Schedule = "Full-Thu"
}
Job {
Name = "snickers-inc"
Client = snickers-fd
JobDefs = "DefaultIncJob"
Write Bootstrap = "/usr/local/bacula/var/working/snickers.bsr"
Schedule = "Inc-Thu"
}

to
Job {
Name = "snickers"
Client = snickers-fd
JobDefs = "DefaultJob"
Write Bootstrap = "/usr/local/bacula/var/working/snickers.bsr"
Schedule = "Thu"
}

Now, the "run" directives in the Schedule determine whether the specific job
will be a Full or Incremental/Differential backup.
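
A Schedule along these lines (a sketch -- the name and times here are invented,
not copied from my config) would be:

Schedule {
  Name = "Thu"
  Run = Level=Full 1st thu at 23:05
  Run = Level=Incremental 2nd-5th thu at 23:05
}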

So far, this looks good.

However, when I run an incremental backup, I get the message:

No prior Full backup Job record found.
No prior or suitable Full backup found. Doing FULL backup.

I understand the message, but I don't want to do a full backup. The client and
FileSet are unchanged, so a full backup of "snickers" does exist (under the job 
name "snickers-full").

Is there any way to instruct Bacula to base its idea of whether a Full backup
exists on the combination of the client name and the fileset, not on the job
name? Essentially, can the new "snickers" job be an alias for both
"snickers-full" and "snickers-inc"?


Also, if I do go ahead with the change to the job names, will the existing
job (and file and volume) records be purged according to their existing
retention periods, even if those jobs are no longer being run?

Thanks,

Mark






Re: [Bacula-users] "resume" failed job

2007-01-08 Thread mark . bergman


In the message dated: Mon, 08 Jan 2007 18:19:37 +0100,
The pithy ruminations from Kern Sibbald on 
 were:
=> On Monday 08 January 2007 14:22, Aaron Knister wrote:
=> > Thanks for the sarcasm. NOT. I come here for support, not to be  
=> > ridiculed. And i'll have you know that the restore from that backup  
=> > set did work.
=> 
=> Yes, I was sarcastic, but life is (at least I am) like that.  
=> 
=> I certainly didn't mean to ridicule you, but rather to warn you that IMO, you

Well, the original poster isn't the only one who thought your response was very 
strong.

I've had similar failures--due to client problems--in the midst of multi-TB
backups, and was extremely interested to hear about what might have been a
solution to resume the backup without starting from the beginning.

=> were doing something that is *very* unlikely to work. Perhaps I was wrong -- 
=> I guess you fell into the 1% uncertainty that I had.  I have my doubts, 
=> anyway, good luck.

I appreciate the warning, and probably won't try the same technique of mucking 
with the backup status value in the database. I think that this is a very 
interesting thread, in terms of making bacula more robust. I can see many 
applications for the idea of resuming an interrupted backup (the failure that 
the the original poster described, desktop backups where users may shut off 
machines, support for mobile users over unreliable network links, etc.).

=> 
=> My advice to other users, remains the same: If a Job fails, the File records 
=> will most likely not have been inserted, and in such a case, marking the job 
=> as successfully terminated will most likely result in restore failure (or 
=> screw up of some sort) later down the road because those File records are 
=> critical for most restore operations.

Hmmm I don't know anything about the internal structure of bacula, or much
about databases, but it seems to me that this is a serious weakness. Would it be
possible for bacula to function more like a journaling filesystem in terms of
keeping consistency?

Would it be possible to use the existing algorithms in bacula to insert the File
records into a temporary db table--and also write File records to the temporary
table in anticipation that the write to storage will succeed, and then [move/
copy/insert] those records into the real table only when the write to the
storage media has been acknowledged?

Would this scheme make it possible to:
	read the real table corresponding to the media
	read the media
	read the temporary table
	reconcile the differences (ie., any filesets written to media, with
		entries in the temporary table, but lacking records in the
		permanent table, would have the File records updated from the
		temporary table)
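
Purely as illustration -- the temporary table name and column list here are
invented for the sketch, not Bacula's actual schema -- that final
reconciliation step might boil down to something like:

INSERT INTO File (JobId, PathId, FilenameId, LStat, MD5)
SELECT t.JobId, t.PathId, t.FilenameId, t.LStat, t.MD5
  FROM TempFile t
 WHERE t.JobId = 1234          -- the interrupted job
   AND NOT EXISTS (SELECT 1 FROM File f
                    WHERE f.JobId = t.JobId
                      AND f.PathId = t.PathId
                      AND f.FilenameId = t.FilenameId);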

Mark "exposing my db ignorance daily" Bergman



=> 
=> > 
=> > 
=> > On Jan 8, 2007, at 4:29 AM, Kern Sibbald wrote:
=> > 
=> > > On Sunday 07 January 2007 23:03, Aaron Knister wrote:
=> > >> I solved this problem myself. I'm not sure how elegant the solution
=> > >> is, however.
=> > >>
=> > >> Using myphpadmin I changed the "JobStatus" field in the respective
=> > >> jobid's mysql entry from "f" to "T". I then re-ran the job and it
=> > >> picked up more or less where it left off.
=> > >
=> > > Yes, well congratulations.  I give you 99% probability of having  
=> > > created a set
=> > > of backups that cannot be restored.
=> > >
=> > > I *strongly* recommend that other users don't try manually  
=> > > modifying the DB
=> > > unless you understand *all* the little details of how job records  
=> > > are put
=> > > into the DB.
=> > >
=> > >>
=> > >> -Aaron
=> > >>
=> > >> On Jan 5, 2007, at 10:29 PM, Aaron Knister wrote:
=> > >>
=> > >>> Hi,
=> > >>> I recently had a backup job fail mid way. It was backing up 5
=> > >>> terabytes of data, and had written 3tb off to tape. The job stopped
=> > >>> because there were no more writable volumes in the particular volume
=> > >>> pool to which the job was assigned. I cleared up a volume however  
=> > >>> the
=> > >>> job did not resume and after a while errored out. I would like to
=> > >>> know if i can salvage the 3 terabytes that was already written to
=> > >>> tape and just continue from that point.
=> > >>>
=> > >>> Many thanks.
=> > >>>
=> > >>> -Aaron
=> > >>>


Mark

Re: [Bacula-users] FD running as non-root

2007-01-10 Thread mark . bergman


In the message dated: Wed, 10 Jan 2007 10:39:22 GMT,
The pithy ruminations from Martin Simmons on 
 were:
=> >>>>> On Tue, 09 Jan 2007 16:52:41 -0500, Dan Langille said:
=> > 
=> > This issue came up on IRC yesterday.  The statement in question is at 
=> > http://www.bacula.org/rel-manual/Bacula_Security_Issues.html :
=> > 
=> > "The Clients (bacula-fd) must run as root to be able to access all 
=> > the system files."
=> > 
=> > Someone wanted to run FD as non-root. I replied that would be fine 
=> > provided the UID/GID has permission to access all the files you want 
=> > to backup.  I propose to replace the quoted sentence with:
=> > 
=> > "The Clients (bacula-fd) must run as whatever GID/UID is necessary to 
=> > access whatever files you wish to backup. In addition, if you wish to 
=> > restore over existing files, bacula-fd will require sufficient 
=> > permission to do that.  In most cases, this means root."
=> > 
=> > Comments?
=> 
=> Restore will also be limited for new files, always setting the owner to the
=> user that is running bacula-fd and similarly for the group.

Hmmm I'm strongly in favor of privilege separation, and I like the idea of
running the fd as a non-root user (perhaps group "disk"?). Anyway, I wonder if
this will introduce problems if the user who can read files cannot also
create special attributes (ACLs, Linux immutable files, Solaris "door" files,
device special files, etc.).

There may be a lot of corner cases (very OS specific) that will 
require testing if the FD is run as a non-root user.

Mark
=> 
=> __Martin






Re: [Bacula-users] GUI interface

2007-01-16 Thread mark . bergman


In the message dated: Tue, 16 Jan 2007 19:35:14 +0100,
The pithy ruminations from Christopher Rasch-Olsen Raa on 
 were:
=> Tirsdag 16 januar 2007 19:32, skrev Peter Buschman:
=> > Bacula Admin Tool (B.A.T)

There's always:

Bacula ADMIN TOol (Nouveau)

or:
badminton

Yes, it's too silly, and too easy to spell incorrectly.

=> >
=> > # bat
=> 
=> Ok, that's just cute. :p

Yep.

Mark

=> 
=> > At 17:37 16.1.2007, Mike wrote:
=> > >On Tue, 16 Jan 2007, Kern Sibbald might have said:
=> > > > Hello,
=> > > >
=> > > > Quite awhile ago, I wrote the email that is copied at the end of
=> > >
=> > > this email.
=> > >
=> > > > Since then, my plans have not changed a lot, but have evolved a bit.
=> > > > I am now ready to begin the GUI project.
=> > > >
=> > > > One of the first and most difficult problems is what should it be
=> > > > called? bconsole is already used,  some of the ideas I have come up
=> > > > with are:
=> > > >
=> > > >  bcon
=> > > >  beacon
=> > > >  badmin
=> > > >  ...
=> > >
=> > >bgui?
=> > >
=> 
=> --
=> Christopher
=> 





[Bacula-users] correction to query.sql #3 (v1.38.11)

2007-01-17 Thread mark . bergman

I noticed an error in the $BACULA/bin/query.sql query #3, which is supposed to 
list the last 20 full backups for a client. As written, in my 1.38.11 release 
of bacula (soon to be upgraded), the query was selecting the last 20 distinct 
records, including the JobMedia.StartFile. I found that there were more than 20 
distinct records for each full backup, thus the output of the query was only 
showing data on the most recent backup. For all I know, this has already been 
corrected in the latest bacula release.

Here's the updated query, which now works correctly for me:

##
:List last 20 Full Backups for a Client
*Enter Client name:
SELECT DISTINCT Job.JobId,Client.Name AS Client,StartTime,JobFiles,JobBytes,
  VolumeName
 FROM Client,Job,JobMedia,Media
 WHERE Client.Name='%1'
 AND Client.ClientId=Job.ClientId
 AND Level='F' AND JobStatus='T'
 AND JobMedia.JobId=Job.JobId AND JobMedia.MediaId=Media.MediaId
 ORDER BY Job.StartTime DESC LIMIT 20;
###

The diff being:
###
--- query.sql   2007-01-17 12:01:25.0 -0500
+++ query.sql-fixed 2007-01-17 12:01:36.0 -0500
@@ -29,7 +29,7 @@
 :List last 20 Full Backups for a Client
 *Enter Client name:
 SELECT DISTINCT Job.JobId,Client.Name AS Client,StartTime,JobFiles,JobBytes,
-  JobMedia.StartFile as VolFile,VolumeName
+  VolumeName
  FROM Client,Job,JobMedia,Media
  WHERE Client.Name='%1'
  AND Client.ClientId=Job.ClientId
####

Mark "exposing my ignorance of SQL daily" Bergman




Re: [Bacula-users] correction to query.sql #3 (v1.38.11)

2007-01-17 Thread mark . bergman


In the message dated: Wed, 17 Jan 2007 18:51:58 GMT,
The pithy ruminations from Alan Brown on 
 were:
=> On Wed, 17 Jan 2007 [EMAIL PROTECTED] wrote:
=> 
=> >
=> > I noticed an error in the $BACULA/bin/query.sql query #3, which is 
supposed to
=> > list the last 20 full backups for a client.
=> 
=> I'd noticed this too, but hadn't gotten round to trying to fix it.
=> 
=> Your fix works for me, but shows up a lack of Job.Name in the output.

Hmmm... I'm not sure about the benefit of having the Job.Name...

=> 
=> >For all I know, this has already been
=> > corrected in the latest bacula release.
=> 
=> It hasn't.

OK.

=> 
=> > The diff being:
=> > ###
=> > --- query.sql   2007-01-17 12:01:25.0 -0500
=> > +++ query.sql-fixed 2007-01-17 12:01:36.0 -0500
=> > @@ -29,7 +29,7 @@
=> > :List last 20 Full Backups for a Client
=> > *Enter Client name:
=> > SELECT DISTINCT Job.JobId,Client.Name AS 
Client,StartTime,JobFiles,JobBytes,
=> > -  JobMedia.StartFile as VolFile,VolumeName
=> > +  VolumeName
=> >  FROM Client,Job,JobMedia,Media
=> >  WHERE Client.Name='%1'
=> >  AND Client.ClientId=Job.ClientId
=> > 
=> 
=> I'd suggest this as well
=> 
=> - SELECT DISTINCT Job.JobId,Client.Name AS 
Client,StartTime,JobFiles,JobBytes,
=> + SELECT DISTINCT Job.JobId,Job.Name,StartTime,JobFiles,JobBytes,

Yes, I can see how having the Job.Name would be useful (especially for the 
bacula-dir server, as the catalog backup is considered a "Full" backup).

=> 
=> IMO There's not much point showing the client name as it's just been 
=> explicitly specified and you can't script "query" commands.

Huh? I script query commands all the time--in fact, as part of the post-catalog 
backup, I've got shell and perl scripts that run about 5 queries nightly, 
reporting on things like what tapes need to be changed, what tapes are 
available to load, etc. I could certainly see scripting reports that query the 
most recent backups for a long list of servers, such as:

for client in alpha-fd beta-fd gamma-fd
do
	printf "query\n3\n%s\nquit\n" "$client" | \
		bconsole -c /etc/bconsole.conf | grep "^|"
done

in which case, I would want the client name on each line of the output.

Mark

=>  
=> YMMV
=> 
=> 
=> 






[Bacula-users] feature request: dynamic job priorities (bias for restores)

2007-01-22 Thread mark . bergman
Item  1:  the numerical priority of restore jobs should be dynamically set to 
make them happen sooner

  Date:   22 Jan, 2007 
  Origin: Mark Bergman 
  Status: 

  What:  
If bacula is configured to run jobs concurrently, then all jobs with
a priority number less than the current job (ie., scheduled jobs that 
are more important than the current job) will be allowed to run
concurrently with the current job. The lower numbered
jobs would be put at the front of the queue of the jobs that have the
current priority number. If there are several priorities involved,
then the same algorithm would apply, with lower priority numbered jobs
being effectively raised to the priority of the running job, but ordered
in the queue by their real priority.

  Why:    The reason for the existence of bacula (and any backup software) is 
to restore data when needed. Currently, bacula's method of scheduling
prevents a restore job of greater importance (lower numerical
priority) from running at the same time as other jobs of less
importance (higher priority), even when resources (a tape drive) are
free.

An example scenario is that there are a number of backups running,
including a backup of really_slow_client (using tape drive 0 in
the autochanger). All the backups are at priority 10. I attempt to
run a restore (using a tape that's already in the autochanger),
and give the restore job priority 6. Unfortunately, the restore
will not begin until all backups are complete, even when there
are idle tape drives within the autochanger. As far as I know,
even if I manually set the priority of the restore job to "10"
(matching the running backups), the restore would be executed
after the backups that are already in the queue.



  Notes:
Obviously, this would be subject to resource constraints (ie, you can't
work with two different volumes simultaneously in a single device) and
to the date/timestamp specified for job scheduling.

This change would be a tremendous improvement in managing multi-drive
storage devices. This method of scheduling would make bacula more
efficient, allowing higher importance (lower numerical priority) jobs
to make use of physical resources as soon as they become available.

In some ways, this request is really a hack--the alternative (and
arguably better) method would be to change the way bacula uses
resources, so that multiple jobs of different types & priorities
can use different physical tape drives within an autochanger at once,
but I suspect that's a much more difficult goal.





[Bacula-users] feature request: fixed naming/numbering of sql queries

2007-01-24 Thread mark . bergman

Item  1:  enable persistent naming/number of SQL queries

  Date:   24 Jan, 2007 
  Origin: Mark Bergman 
  Status: 

  What:  
Change the parsing of the query.sql file and the query command so that
queries are named/numbered by a fixed value, not their order in the
file.


  Why:   
One of the real strengths of bacula is the ability to query the
database, and the fact that complex queries can be saved and
referenced from a file is very powerful. However, the choice
of query (both for interactive use, and by scripting input
to the bconsole command) is completely dependent on the order
within the query.sql file. The descriptive labels are helpful for
interactive use, but users become used to calling a particular
query "by number", or may use scripts to execute queries. This
presents a problem if the number or order of queries in the file
changes.

If the query.sql file used the numeric tags as a real value (rather
than a comment), then users could have a higher confidence that they
are executing the intended query, and that their local changes wouldn't
conflict with future bacula upgrades.

For scripting, it's very important that the intended query is
what's actually executed. The current method of parsing the
query.sql file discourages scripting because the addition or
deletion of queries within the file will require corresponding
changes to scripts. It may not be obvious to users that deleting
query "17" in the query.sql file will require changing all
references to higher numbered queries. Similarly, when new
bacula distributions change the number of "official" queries,
user-developed queries cannot simply be appended to the file
without also changing any references to those queries in scripts
or procedural documentation, etc.

In addition, using fixed numbers for queries would encourage more
user-initiated development of queries, by supporting conventions
such as:

queries numbered 1-50 are supported/developed/distributed by
with official bacula releases

queries numbered 100-200 are community contributed, and are
related to media management

queries numbered 201-300 are community contributed, and are
related to checksums, finding duplicated files across
different backups, etc.

queries numbered 301-400 are community contributed, and are
related to backup statistics (average file size, size per
client per backup level, time for all clients by backup level,
storage capacity by media type, etc.)

queries numbered 500-999 are locally created

  Notes:
Alternatively, queries could be called by keyword (tag), rather
than by number.



[Bacula-users] help w. sql query to find directories?

2007-01-24 Thread mark . bergman

Hello,

Could anyone help me out with writing an SQL query to find specified
directories? I'm trying to do something similar to query "#1" (List up to 20
places where a File is saved regardless of the directory), but for directories.
I'm trying to find any backups where the last directory component has a
particular name.

For example, I'm trying to write a query that would tell me that the directory 
named "fred" exists in backups of the following:

server1-fd:/export/home/fred
server2-fd:/public/research/ProjectFoo/collaborators/fred
Fred_workstation-fd:/Documents and Settings/fred
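
One possible starting point -- an untested sketch, assuming Bacula stores each
directory in Path.Path with a trailing slash, so a pattern match on '%/fred/'
picks out the last component -- might be:

SELECT DISTINCT Client.Name AS Client, Path.Path
  FROM Client, Job, File, Path
 WHERE Path.Path LIKE '%/fred/'
   AND File.PathId = Path.PathId
   AND File.JobId = Job.JobId
   AND Job.ClientId = Client.ClientId;

But I'm not sure that's the most efficient way to do it.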

Thanks,

Mark





Re: [Bacula-users] Large network bacula tips?

2005-10-17 Thread Mark Bober

I get about 20 MB/s from my fastest storage, so that's 1200 MB/min, or 1.2 
GB/min. You're pulling 300 MB/min, or 5 MB/s.

So you might be a touch slow. I have a maximum of 4 jobs running on any given 
storage device, however, and usually during fulls on the weekend I've got no 
more than 4 jobs running at any given time anyway.

Here's my setup:

bacula-dir: Sun v20z (dual Opt, 4g ram) running CentOS 4.1 (RHEL clone).

Tape Storage: Either Quantum SDLT/160 Autoloader or Overland LXB SDLT/110 
changer for large jobs, hanging off an U320 MPT SCSI PCI-X controller. 

My spool device is a set of random SCSI disks, mostly old 50 giggers, in a 
striped software raid. About 400G. They're on a PCI 33mhz controller, a Symbios 
something-or-other. Nothing special.

All gigabit to major servers.

All in all I've got about 80 clients, Windows, Solaris, and Linux. A few OSF/1 
also. I've run about 3500 jobs, and have totalled about 17 TB over the past... 
2 1/2 months I've been production with Bacula.

(I secretly hope that wins me some sort of "Biggest Bacula" award)

Suggestions:

1) Solaris storage-d was *very slow*. It's Solaris's fault. Try a linux 
storage-d, see what happens. My Linux clients always outpace everything else, 
even given the same hardware. Go ahead and shoot for 1.37.40 as well. 

2) It's Virtuozzo, also. I've got a set of VMWare ESX servers, same hardware as 
the director. They go about 5 MB/s to disk, with GZIP compression on, when I'd 
expect 20 MB/s from a plain Linux install without GZIP. Not much can be done 
about that, really. (This is backing up the VMWare host itself--the Linux 
underpinnings--not the virtual machines.) If those 65 virtual machines all 
have load on each server, I'm amazed they're that fast at all.

If you're backing up to disk, drop GZIP once and see how it goes. If you're 
going straight to tape, you're pretty much at the limit then. That's a lot of 
virtualization.
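For anyone wanting to try that experiment, dropping software compression is just a matter of removing (or commenting out) the compression option in the FileSet. A minimal sketch, with directive names per the standard FileSet Options resource and an illustrative path:

```
Include {
  Options {
    signature = MD5
    # compression = GZIP    # commented out to test raw throughput
  }
  File = /etc
}
```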

Mark


On Thu, Oct 13, 2005 at 03:56:57PM -0700, Lyle Vogtmann wrote:
> Hello fellow Bacula users!
> 
> I've only been lurking on this list for a little while, please excuse
> me if this topic has been covered previously.
> 
> I've got what I would consider a large network of machines each
> hosting many virtual private servers with Virtuozzo. 
> http://www.swsoft.com/en/products/virtuozzo/  (19 servers, each
> hosting an average of 65 virtual environments, average 160GB data per
> server.   Total data to back up ~ 3TB.  Generous estimate to allow for
> growth.)
> 
> I've been tasked with replacing an aging Amanda install that has been
> backing them up to disk daily.
> 
> I've done some testing already with a couple of the servers, and have
> recently started a backup of all systems.  Ran into a small problem
> with the catalog where the File table grew to 4GB and claimed to be
> full, easily fixed by switching from MyISAM to the InnoDB engine.  It
> got me thinking though, are there any other "gotchas" or caveats
> anyone else has overcome in backing up such a large quantity of data?
> 
> We have a gigabit Ethernet network over which the backups are run, but
> it still seems to take an inordinate amount of time to complete a full
> backup.  Currently filling a two gigabyte volume every 6 minutes on
> average.  At that rate, it will take 6 days to finish a full backup?! 
> Maybe I'm doing the math wrong (I already know I haven't taken
> compression into account), but I think I'm missing something.
> 
> Comments and suggestions welcome!  Thanks for such a great project! 
> (It's backing up my home network of 3 Macs handily!)
> 
> Oh yeah:
> Director is running on a FreeBSD 5.4 box, all other clients are Linux.
>  Bacula version 1.36.3 compiled from source (ports tree on director).
> 
> Thanks in advance,
> Lyle Vogtmann
> 
> 


---
This SF.Net email is sponsored by:
Power Architecture Resource Center: Free content, downloads, discussions,
and more. http://solutions.newsforge.com/ibmarch.tmpl
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Large network bacula tips?

2005-10-20 Thread Mark Bober

MySQL 4.1.something (whatever comes with RHEL/Centos)

I'm at 13 GB, and I'll admit I've never run dbcheck, actually.

Mark

On Tue, Oct 18, 2005 at 09:41:53AM +0200, Juan Luis Frances wrote:
> Hi,
> 
> PgSQL or MySQL? :-)
> 
> Do you have any problems using "dbcheck"?
> 
> My db (mysql) size is 19,93GB and "dbcheck"(1.36.3) dies when the memory free 
> is ended (RAM 2GB and SWAP  2GB).
> 
> El Lunes 17 Octubre 2005 20:05, Mark Bober escribió:
> 
> > All in all I've got about 80 clients, Windows, Solaris, and Linux. A few
> > OSF/1 also. I've ran about 3500 jobs, and have totalled about 17 TB over
> > the past... 2 1/2 months I've been production with Bacula.
> >
> > (I secretly hope that wins me some sort of "Biggest Bacula" award)
> 
> 




Re: [Bacula-users] Large network bacula tips?

2005-10-20 Thread Mark Bober
On Mon, Oct 17, 2005 at 12:08:36PM -0700, Lyle Vogtmann wrote:
> OK, wasn't sure how stable that release was, but since I'm still in
> testing mode, it doesn't really matter.  I'll give it a go.

I've been happy with it; I moved up mainly for Windows VSS.

> > 2) It's Virtuozzo, also. I've got a set of VMWare ESX servers, same 
> > hardware as the director. They
> 
> None of the clients are vps's themselves.  Just hosting vps's, so I
> guess I could have left that out of my original message.  They are the
> main cause of the large amount of data.

Right. I'm not backing up my VMWare VPS either, just the actual hosting 
machine, with that speed. I'd have to try one of the VMs to see what its speed 
is.

Mark






Re: [Bacula-users] DLT 40/80GB never reaches 40GB

2005-10-26 Thread Mark Bober

I've got a SDLT/320 changer I use for some fulls, and a SDLT/220 changer I use 
for weeklies and other fulls.

As I expire tapes from our previous backup system, I'll usually want to make 
sure I've gone through the tapes I've put in the 320 and write EOF to each 
tape, so that the drive 'reformats' it to the full 320. (If the 220 had written 
it before, the drive will leave it at 220.)

Assuming Bacula has released the drive:

#!/bin/sh
# Cycle every slot through the drive: load each tape, rewind, write an EOF
# mark at BOT, and rewind again, so the drive re-initializes the tape at
# its full density the next time it is used.

DRV=overland

for i in `seq 1 8`
do
    echo "Loading $i"
    mtx -f /changer/$DRV unload
    mtx -f /changer/$DRV load $i
    echo "Clearing $i"
    mt -f /tapes/$DRV rewind
    mt -f /tapes/$DRV weof
    mt -f /tapes/$DRV rewind
done

and (IIRC) next time it gets loaded, the drive will treat it as its default 
compression level. This will also kill the Bacula label on it, so plan on 
relabeling.

While I'll admit I've never paid that much attention to it, both Solaris and 
Linux seem to let the tape set its default, highest density.

Since I have the Media Type defined as SDLT for the 320 and SDLT0 for the 220, 
they'll never mix in Bacula. If I drop the 220 drive in the future, all those 
tapes will have to be "weof"'d as they get recycled.
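That Media Type separation amounts to two Device resources in bacula-sd.conf along these lines (a sketch only; names, device paths, and changer settings are illustrative):

```
Device {
  Name = sdlt320-drive
  Media Type = SDLT        # tapes written by the 320
  Archive Device = /dev/nst0
  Autochanger = yes
}
Device {
  Name = sdlt220-drive
  Media Type = SDLT0       # tapes written by the 220; never mixed with the above
  Archive Device = /dev/nst1
  Autochanger = yes
}
```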

Mark

On Wed, Oct 26, 2005 at 11:52:19AM +0200, Kern Sibbald wrote:
> On Wednesday 26 October 2005 10:36, Arno Lehmann wrote:
> > Hello,
> >
> > On 26.10.2005 09:53, Kern Sibbald wrote:
> > > On Tuesday 25 October 2005 22:31, Ryan Novosielski wrote:
> > >>One other thing to note -- sometimes the tape drive will improperly
> > >>determine the tape size for whatever reason. I have had experience with
> > >>a DLT7000 drive that, for whatever reason, initialized a tape to 15GB.
> > >>After that, I had to manually set the drive to 35GB and re-initialize
> > >>it. After that, no problem.
> > >
> > > Interesting. I have never heard of a tape drive that can set the tape
> > > size to various values.
> > >
> > > Can you explain in detail how you manually set the drive to 35GB and
> > > re-initialized it?
> >
> > I'm not Ryan, but anyway...
> >
> > Basically, a tape, when initialized, has some sort of "carrier" data
> > written to it which a drive recognizes. My DLT drive, which is rather
> > normal in that respect, allows setting this by two means:
> > Either load a tape, (must be a BOT), use mt to set the density, and
> > write to the tape. Don't try to read anything before that.
> > Or use the front panel of the changer to select a density (here, for DLT
> > IV tapes in a DLT-4000 drive, 10, 15, 20 GB uncompressed and the
> > corresponding compressed values, optimistically called 20, 30, 40 GB.
> 
> Yes, I *have* heard of setting the density, but didn't think of that.  
> Normally, on most systems, the density is set by default to the highest value 
> for that drive ...
> 
> This might be worth mentioning in the manual.
> 
> >
> > After that, the tape is newly initialized like when I load a blank tape,
> > which you recognize by the time it takes until the first byte is
> > actually written - less than a minute with a initialized tape, up to
> > three minutes with initialization.
> >
> > Arno
> >
> > >>Just stuff to look out for.
> > >>
> > >>Anwar Ahmad wrote:
> > >>>Hi Phil,
> > >>>
> > >>>No worries, I've checked that both are DLT2 drives. Specifically they
> > >>>are HP SureStore Autoloaders. Both were bough at the same time from HP
> > >>>directly. They were also configured identically. We initially had a
> > >>>bunch of DLT tapes around (10 or so) from our old HP server which had
> > >>>an inbuilt DLT drive. This was DLT1.
> > >>>
> > >>>When we purchased the 2 Autoloaders we wanted to reuse some of the old
> > >>>tapes rather than junk it since we're using bacula as network backup.
> > >>>We also purchased well over 30 DLT 40/80 tapes.
> > >>>
> > >>>Interestingly, I've relabeled a 40/80 tape that was originally in the
> > >>>pool that backed up from pool1 (the one with the problem) and put it
> > >>>into pool2 (working normally) and it works correctly. I thought about
> > >>>the physical tape drive causing the problem but I've nowhere to check?
> > >>>Is there any way to check using software?
> > >>>
> > >>>I'm not certain how this actually works out. Since both autoloader are
> > >&

[Bacula-users] Compile on XP Instructions. (64 bit offer)

2005-10-26 Thread Mark Bober


As an addendum to this; I've access to Win64 machines, and any version of 
Visual Studio necessary.

I'd offer some time in running a compile (I'm totally oblivious about V Studio, 
however) but looking through the README.vc8 on VS 2005, it doesn't quite look 
like all the support libraries are available for Win64 anyway...?

(not that anything's available for Win64 from anywhere else in the world, but 
hey)

Mark


On Wed, Oct 26, 2005 at 11:32:45AM -0400, Dave Sutherland wrote:
> In the past I have been able to compile the wx-console on my windows XP 
> system using nothing but a cygwin rvt window.  Now that I look at the new 
> instructions for compiling on an XP system things are all over the place.  
> Very confusing. 
>  
> Would someone please revisit the README.WIN32 and create a simple step by 
> step, in order - top, down (no jumping around and looking things up) 
> compiling instructions?  perhaps even create seperate files for vc6, vc7 and 
> whatever else.
>  
> Very respectfully - 
> Dave
>  
>  
> 




[Bacula-users] Hey Kern....

2005-11-29 Thread Mark Hazen

Kern-

Knowing full and well the weight that publically supporting a project 
with the size and scope of Bacula puts on your shoulders, I'd like to 
say thank you, thank you, thank you. You do an admirable job, and yes, 
while Bacula may not be as easy to get into as some of the commercial 
backup systems, it's been far more flexible, and just as resilient and 
dependable for us here.


No reply is necessary, I know you've got a lot of fish to fry, but I 
wanted to say "thanks". I think it's important to know how much we all 
do appreciate your hard work.


The same goes for those of you Bacula gurus who take the time to answer 
the community questions. I spent a month reading documentation and 
testing before implementing our Bacula environment here which alleviated 
a great deal of my disorientation by itself, but I also know I was 
graciously given the time to do things right by my employer, which isn't 
a luxury everyone has access to when rolling out backup services.


Cheers, and thanks.
-mh.
--
Mark Hazen
Systems Support Specialist
The University of Georgia




Re: [Bacula-users] odd problem with btape/auto (v1.38.5)

2006-01-26 Thread Mark Bober


Hey, thanks for the reply on this Arno. I was just about to mail out with this 
- oddly, the problem didn't start with my ATL L200's until 1.38.3.

Mark


On Mon, Jan 23, 2006 at 11:18:11PM +0100, Arno Lehmann wrote:
> Hello,
> 
> On 1/23/2006 10:32 PM, Steve Loughran wrote:
> >Hi all
> >
> >First off, I have bacula (v1.38) up and running at work with a Overland 
> >LTO3 unit (zero problems!), so I thought it might be worth getting it 
> >set up at home as well (hell, my MP3s are far more important than work 
> >data :)
> 
> Sure... (how many, and do you want to share? ;-)
> 
> Just joking...)
> 
> >... and I run into a problem at the first hurdle. I am trying to run 
> >"btape" to check the autochanger, and its failing (yes, even with the 
> >"mt $device offline" line enabled in mtx-changer)
> ...
> >3303 Issuing autochanger "load 1 0" command.
> >btape: btape.c:1176 Changer=/etc/bacula/mtx-changer /dev/sg0 load 1 
> >/dev/nst0 0
> >3303 Autochanger "load 1 0" status is OK.
> >btape: dev.c:277 open dev: tape=2 dev_name="ADIC-DLT7000" (/dev/nst0) vol= 
> >mode=OPEN_READ_WRITE
> >btape: dev.c:323 open dev: device is tape
> >btape: autochanger.c:249 Locking changer ADIC-Library
> >23-Jan 20:56 btape: 3301 Issuing autochanger "loaded drive 0" command.
> >btape: autochanger.c:220 run_prog: /etc/bacula/mtx-changer /dev/sg0 loaded 
> >1 /dev/nst0 0 stat=0 result=1
> >
> >23-Jan 20:56 btape: 3302 Autochanger "loaded drive 0", result is Slot 1.
> >btape: autochanger.c:258 Unlocking changer ADIC-Library
> >btape: dev.c:338 Try open "ADIC-DLT7000" (/dev/nst0) mode=OPEN_READ_WRITE 
> >nonblocking=2048
> >btape: dev.c:369 openmode=2 OPEN_READ_WRITE
> >btape: dev.c:382 open dev: tape 3 opened
> >btape: btape.c:338 open device "ADIC-DLT7000" (/dev/nst0): OK
> >btape: dev.c:621 rewind res=0 fd=3 "ADIC-DLT7000" (/dev/nst0)
> >btape: dev.c:277 open dev: tape=2 dev_name="ADIC-DLT7000" (/dev/nst0) vol= 
> >mode=OPEN_READ_WRITE
> >btape: dev.c:323 open dev: device is tape
> >btape: autochanger.c:249 Locking changer ADIC-Library
> >23-Jan 20:56 btape: 3301 Issuing autochanger "loaded drive 0" command.
> >btape: autochanger.c:220 run_prog: /etc/bacula/mtx-changer /dev/sg0 loaded 
> >1 /dev/nst0 0 stat=0 result=1
> >
> >23-Jan 20:56 btape: 3302 Autochanger "loaded drive 0", result is Slot 1.
> >btape: autochanger.c:258 Unlocking changer ADIC-Library
> >btape: dev.c:338 Try open "ADIC-DLT7000" (/dev/nst0) mode=OPEN_READ_WRITE 
> >nonblocking=2048
> >btape: dev.c:369 openmode=2 OPEN_READ_WRITE
> >btape: dev.c:382 open dev: tape 3 opened
> >btape: btape.c:1198 Bad status from rewind. ERR=dev.c:672 Rewind error on 
> >"ADIC-DLT7000" (/dev/nst0). ERR=Input/output error.
> >
> >
> >The test failed, probably because you need to put
> >a longer sleep time in the mtx-script in the load) case.
> >Adding a 30 second sleep and trying again ...
> >3301 Issuing autochanger "loaded" command.
> >btape: btape.c:1133 run_prog: /etc/bacula/mtx-changer /dev/sg0 loaded 1 
> >/dev/nst0 0 stat=0 result="1
> >"
> >Slot 1 loaded. I am going to unload it.
> >btape: btape.c:1147 Results from loaded query=1
> >
> 
> Well, that might easily be solved... I guess the timeout after the wait 
> is not long enough. I'm using a DLT4000 drive here, and sometimes, with 
> tapes initialized by higher capacity drives, I see load times of several 
> minutes.
> 
> My workaround was not to set the sleep timeout to a very high value, but 
> rather to enable the function wait_for_drive in the mtx-changer script. 
> Works great, usually.
> 
> To investigate if that is really your problem, you can do the following:
> 
> Stop the SD or unmount the tape drive.
> From a shell, use mtx to load a tape.
> as soon as mtx returns, use tapeinfo to query the tape drives state. 
> Usually, you will _not_ get a ready and BOT state.
> 
> That's the result of the independence of loader and drive in the ADIC 
> FastStor devices - mtx returns as soon as the loader has done its work, 
> but then the drive only started. And DLT loading can take quite long.
> 
> Arno
> 
> -- 
> IT-Service Lehmann[EMAIL PROTECTED]
> Arno Lehmann  http://www.its-lehmann.de
> 
> 

[Bacula-users] aliasing servers?

2007-07-23 Thread mark . bergman

Is there any way to "alias" a server within bacula? Specifically, I just renamed
a server without changing its IP. I can easily update the bacula
configurations on the -fd, -sd, and -dir, but I would like it if bacula could be
told that there are two names for this machine, and that a new (full) backup is
not necessary.


This capability would be particularly important in the future as
machines with multiple TB of storage are renamed or re-IP'ed.
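Lacking a real alias feature, one workaround, if you're comfortable editing the catalog directly, is to rename the client record so existing jobs stay associated with it, then change the Client Name in the -dir/-fd configs to match. Treat this as an untested sketch against a MySQL catalog; the names are placeholders, and back up the catalog first:

```
-- 'oldname-fd' / 'newname-fd' are placeholders
UPDATE Client SET Name = 'newname-fd' WHERE Name = 'oldname-fd';
```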



Environment:
bacula 1.38.11
Linux (FC1 with 2.4.24 on bacula-sd, bacula-dir. SUSE 10 with  2.6.13
on the client)
Mysql 5.0.22 



Thanks,

Mark





Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu




Re: [Bacula-users] New Problem with the Dell PV 132T

2007-07-26 Thread mark . bergman


In the message dated: Thu, 26 Jul 2007 10:50:34 PDT,
The pithy ruminations from Mike Vasquez on 
<[Bacula-users] New Problem with the Dell PV 132T> were:
=> 
=> I have gotten my OS recognizing the Dell PV 132T.  Now when I run the command

Good.

=> mtx -f /dev/sg2 status I get the following:
=>  Storage Changer /dev/sg2:1 Drives, 23 Slots ( 0 Import/Export )
=>   Data Transfer Element 0:Full (Unknown Storage Element
=> Loaded):VolumeTag = 

[SNIP!]

=>   Storage Element 23:Full :VolumeTag=

Looks good. Why no barcodes? They really, really do make life easier.

=> 
=> Now when I try to run the command mt -f /dev/nst0 status, I get the

Why are you doing that? Is this just for testing (outside bacula), or for some 
other reason?

In general, bacula (and other backup solutions) assume that they have exclusive 
access to the hardware devices, and get very cranky if you do things from the 
OS level.

=> following:
=>   /dev/nst0:  Device or resource busy
=> Even when I do sg_map, It shows that nst0 is busy.
=> 
=> What is causing /dev/nst0  to be busy.  I don't have anything going on with
=> that device.  How can I make it not busy and be able to access the device
=> using mt?

Is bacula running? If so, it's got access to the tape device, which is making 
it busy.

Try:
lsof /dev/nst0
to see any open file handles pointing to /dev/nst0.

Mark

=> 
=> TIA
=> 
=> Mike


Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu





Re: [Bacula-users] Setting up a Dell PV 132T

2007-07-26 Thread mark . bergman
vices created"
exit 1
fi

for dev in $sgmapADDL
do
sg_reset $dev
done


# Now, re-run this script
$0
fi

===

=> results:
=> 
=> mtx: Request Sense: Long Report=yes
=> mtx: Request Sense: Valid Residual=no
=> mtx: Request Sense: Error Code=0 (Unknown?!)
=> mtx: Request Sense: Sense Key=No Sense
=> mtx: Request Sense: FileMark=no
=> mtx: Request Sense: EOM=no
=> mtx: Request Sense: ILI=no
=> mtx: Request Sense: Additional Sense Code = 00
=> mtx: Request Sense: Additional Sense Qualifier = 00
=> mtx: Request Sense: BPV=no
=> mtx: Request Sense: Error in CDB=no
=> mtx: Request Sense: SKSV=no
=> INQUIRY Command Failed
=> 
=> Would anyone know the cause of this error?  I have the device set at the
=> factory default settings except I have turned off the scanner, since I don't
=> have any barcodes.
=> 
=> TIA
=> Mike
=> 


Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu





[Bacula-users] exclusion question

2007-08-06 Thread Mark Nienberg
I recently upgraded from 1.38 to the current stable.  I didn't do it earlier 
because, 
well, everything worked so well.  Now I'm trying to come up to speed with the 
changes 
and new features.

I'm curious if a feature to prevent descending into a directory by having an 
indicator file with some specific name in the directory was ever implemented.  
I 
vaguely remember it being on the wish list at one time.  It would allow me to 
greatly
simplify my FileSet resources.

If not, is it possible to simulate it with an option something like this:

Include {
 Options {
 Exclude = yes
 }
 File = "\\|program.to.run.on.client"
 }

where the "program.to.run.on.client" would search for a particular file name and 
create a list of directories where it is present.
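Later Bacula releases added exactly this indicator-file behaviour as a FileSet directive. A sketch, assuming a version new enough to support it (check your release's FileSet documentation before relying on it; file name and path are illustrative):

```
Include {
  Options {
    signature = MD5
  }
  Exclude Dir Containing = .nobackup   # skip any directory holding this file
  File = /home
}
```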


Re: [Bacula-users] exclusion question

2007-08-06 Thread Mark Nienberg
Mark Nienberg wrote:

> If not, is it possible to simulate it with an option something like this:
> 
> Include {
>  Options {
>  Exclude = yes
>  }
>  File = "\\|program.to.run.on.client"
> }
> 
> where the "program.to.run.on.client" would search for a particular file name 
> and 
> create a list of directories where it is present.

A closer reading of the docs makes me think that the "program.to.run.on.client" 
would have to be run from a cron job or maybe a "Client Run Before Job", and 
then the FileSet would pull in the resulting list with the read-from-a-file 
("\\<") form of the File directive.


[Bacula-users] questions re. concept of database pruning (Was: Re: old recovery)

2007-08-23 Thread mark . bergman


I've been following this thread with some interest, but with little to
contribute directly. However, the thread does bring up some questions that I've 
had about the concept of pruning the bacula database.

It seems to me that the database should only be pruned when files are added
(ie., when a backup succeeds), not merely because a volume was accessed for
reading.

In other realms, such as the algorithm that bacula employs to decide which
volume to use, bacula tries very hard to keep data on the backup media (and in
the catalog) for as long as possible. The idea that accessing a volume (ie.,
doing a restore) can prune records seems to be inconsistent with the philosophy
of keeping the data whenever possible.

In our environment, it's pretty common that a request to restore files from
directory "X" is then followed by another request to restore files from
directory "Y", when the user realizes that they didn't get all the files they 
needed. If the first restore caused the database records for a multi-TB
backup that spans 3 or 4 tapes to be pruned, then the second restore will be
extremely slow and painful.

It seems that the disk space resource to store a large database catalog is much
"cheaper" than the time resource for a system administrator to bscan tapes or
the time resource for an end user to wait for a restore.

What are the settings required to configure bacula so that the catalog retention
period is the same as the data retention? In other words, as long as the data
exists on the backup media, the database catalog records exist as well?
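For concreteness, matching catalog retention to data retention comes down to the Client and Pool retention directives: set all three periods to the same value, with the volume side governing how long the data itself lives. A sketch with illustrative names and values:

```
Client {
  Name = some-client-fd        # illustrative
  File Retention = 1 year      # keep per-file records as long as the data
  Job Retention = 1 year
  AutoPrune = yes
}
Pool {
  Name = Default
  Volume Retention = 1 year    # how long the data itself is kept
  AutoPrune = yes
  Recycle = yes
}
```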

Obviously, the database size may expand a great deal. I know very, very little
about databases, but would there be significant performance or database
stability advantages to moving the old records into distinct tables? In other
words, instead of pruning database records (while the data still exists on the
backup media), move those records to a separate table or separate database? This
would keep the "hot" database of the most current records at a smaller size,
while letting the database of older records grow. That distinction could give
system administrators & DBAs the choice of what physical disks are used to store
each database, etc.

Does anyone know the practical database size limits for MySQL and Postgres?
Would typical bacula installations (if there is such a thing) be in danger of
reaching those limits if database records were retained as long as the data
itself?


Thanks,

Mark


Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu





Re: [Bacula-users] "status dir" and scheduled jobs

2007-08-28 Thread mark . bergman


In the message dated: Tue, 28 Aug 2007 09:49:35 +0200,
The pithy ruminations from Arno Lehmann on 
 were:
=> Hi,
=> 
=> 28.08.2007 02:35,, Charles Sprickman wrote::
=> > Hi all,
=> > 
=> > I'm starting to work on a script to alert me to what tapes will be needed 
=> > for upcoming runs.  According to the manual (I think), this info should be 
=> > shown in the "status dir" output.
=> > 

[SNIP!]

=> 
=> I don't think you screwed up anything... the problem is that Bacula 
=> does not look ahead in time during a 'sta dir' command. Most of the 
=> time, the "*unknown*" volume display is caused by having no volumes 
=> available at the time of the display, but later, volumes might get 
=> recycled.
=> 
=> By the way: Given that Bacula works with an implicit "Accept Any 
=> Volume" setting anyway, the exact volume to be used is only partly 
=> interesting. Having the pool which will be used displayed would, in 
=> many cases, be more informative.

I agree.

Here's my solution to the question:


I run a script (after the nightly catalog backup) to produce a report of:

what tapes are in the changer that are in need of replacement

what tapes are available for reuse

what backups are scheduled for the next day

estimates of the size of the upcoming backups,
how many volumes will be required, and how many
volumes are available in the changer in the 
specified pool and the Scratch pool


I've attached the "dailyreport" script, which depends on the 
"bacula_pool_report" script (also attached). The "dailyreport" script also 
depends on two SQL queries. These could be embedded in the script, but since I 
call the queries manually as well, they are in my "query.sql" file. In my case, 
they happen to be numbered queries 18 & 24 (and those numbers are hard-coded 
into the "dailyreport" script). The queries are:


# 18
:List Volumes in the changer that are in need of replacement
SELECT VolumeName AS VolName, Storage.Name AS Storage, Slot,
       Pool.Name AS Pool, MediaType, VolStatus AS Status, VolErrors AS Errors
  FROM Media, Pool, Storage
  WHERE Media.PoolId=Pool.PoolId
    AND Slot>0 AND InChanger=1
    AND Media.StorageId=Storage.StorageId
    AND ( (VolErrors>0) OR (VolStatus='Error') OR
          (VolStatus='Disabled') OR (VolStatus='Full') )
  ORDER BY Slot ASC, VolumeName;
###
# 24
:Find next volumes to load
SELECT VolumeName AS VolName, Pool.Name AS Pool, MediaType AS Type,
       VolStatus AS Status, InChanger
  FROM Media, Pool, Storage
  WHERE Media.PoolId=Pool.PoolId
    AND InChanger=0
    AND Media.StorageId=Storage.StorageId
    AND ( VolStatus IN ('Purged', 'Recycle') OR Pool.Name='Scratch' )
  ORDER BY VolumeName;
##



=> 
=> Arno
=> 
=> > Thanks,
=> > 
=> > Charles


dailyreport
Description: dailyreport


bacula_pool_report
Description: bacula_pool_report

Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu


[Bacula-users] Stuck jobs and Max Run Time

2007-09-18 Thread Mark Hazen
Hi folks-

We use bacula to backup a dozen servers and about twice that many 
workstations here. The servers are all running 2.2.3; the backup 
server hosts both the director and the storage daemon, running on RHEL4. I'm 
using RPMs built from the bacula SRPM, without the GUI tools (which 
won't currently compile as noted in the SRPM, but that doesn't affect us 
here).

Last night one of our workstations started its backup, and just sat 
there. This morning (11 hours later) I could contact the client from 
bconsole, it stated that it was running the job, but the file/byte 
counts were at zero.

The last thing the server side log has listed is the completion of the 
previous job. It should be noted that the client's FD was version 1.38, 
but I am (perhaps mistakenly) under the impression that this should not 
be an issue, unless I were trying to use some of the 2.x only features 
(which I wasn't).

I was a little concerned that a job was 'stuck' for so long with no 
progress, but I can understand why the server didn't consider it 'dead'; 
it was still responding cheerfully, stating that it had a job in 
progress, which never progressed. Chalk it up to perhaps a flaky XP 
client machine in need of a restart.

Upon cancelling the job however, the pending jobs were stuck with the 
infamous "waiting on max storage jobs" notice:

Running Jobs:
JobId Level   Name   Status
==
  104 Increme  job.xxx.backup.2007-09-17_19.05.15 has been canceled
  105 Increme  job.yyy.2007-09-17_19.05.16 is waiting on max Storage jobs
  106 Increme  job.zzz.2007-09-17_19.05.17 is waiting on max Storage jobs
  ... and so on.

Sometimes in the past, explicitly requesting the storage daemon to 
remount its devices has caused cancelled jobs 'stuck' in this manner to 
release, but not this time. In this case, I received contradictory 
messages from bacula:

*unmount
The defined Storage resources are:
  1: storage.servers
  2: storage.desktops
  3: storage.rescue
Select Storage resource (1-3): 1
3901 Device "device.servers" (/bacula/pools/server) is already unmounted.
*mount
The defined Storage resources are:
  1: storage.servers
  2: storage.desktops
  3: storage.rescue
Select Storage resource (1-3): 1
3906 File device "device.servers" (/bacula/pools/server) is always mounted.
*q

I'm not sure if this qualifies as an issue, but it was a bit of a 
head-scratcher for me. Restarting the daemons cleared the problem, but it 
also dropped all of the uncompleted jobs, which I wish it hadn't.

So, I'm adding "Max Run Time" entries to the desktop backup 
configuration, in the JobDef block for desktops, but the question 
remains: does this stop the job at the client level or at the server 
level? I'm thinking that stopping it at the client level won't help (as 
far as I can see) with zombie clients, so I just wanted to make sure 
this would indeed resolve our issues when a client goes loopy.
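For reference, this is roughly what I mean (a sketch only; the resource name and the six-hour value are made up for illustration):

```conf
JobDefs {
  Name = "DesktopDefaults"
  # Illustrative: cancel any desktop job still running after 6 hours
  # (21600 seconds), so a hung client can't hold things up overnight.
  Max Run Time = 21600
}
```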

Thanks,
-mh.
-- 
Mark Hazen
Systems Support Specialist
The University of Georgia

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Fedora/CentOS RPMs for 2.2.4 published

2007-09-20 Thread Mark Nienberg
Felix Schwarz wrote:

> No Fedora 5:
> I did not build RPMs for Fedora 5 because the Fedora Project eol'd this
> version of Fedora at the end of June [1]. If you are still using Fedora 5
> (or even older versions) IMHO you should switch either to new versions of
> Fedora or use CentOS if you need longer support cycles. However, if enough
> users demand FC5 packages, I'm willing to build them at least for 2.2.4.

Sourceforge shows that your version 2.0.3 for fedora 5 had:

312 downloads for the client,
221 downloads for mysql,
130 downloads for postgre, and
140 downloads for sqlite
for a total of 803 (combining i386 and x86_64).

I don't think it is at all unusual to have a dedicated backup server on a 
protected network running an EOL version of fedora.  Also, the official 
rpms still support fedora 4 (which I still use, I'm embarrassed to admit). 
In any event, thanks for your contributions.


Mark




Re: [Bacula-users] Fedora/CentOS RPMs for 2.2.4 published

2007-09-24 Thread mark . bergman


In the message dated: Sun, 23 Sep 2007 12:15:43 +0200,
The pithy ruminations from Felix Schwarz were:
=> Mark,
=> 
=> Mark Nienberg wrote:
=> > Sourceforge shows that your version 2.0.3 for fedora 5 had:
=> >
=> > 312 downloads for the client,
=> > 221 downloads for mysql,
=> > 130 downloads for postgre, and
=> > 140 downloads for sqlite
=> > for a total of 803 (combining i386 and x86_64).
=> 
=> But these RPMs were built in April 2007. FC5 was officially supported
=> until the end of June. I think (hope) that most users did upgrade when
=> noticing the EOL. Anyway, I would like to see more FC5 users to speak up
=> before spending the time to build rpms for an out-dated Fedora version.

I'm running FC5 on a server that I don't intend to update soon.

Similarly, I'm running everything from FC1 to CentOS5 to Irix 6.5.22. Most of 
these machines will not get updated, largely because there is no pressing need 
to do so, and because the upgrade would cause considerable disruption.

=> 
=>  > I don't think it is at all unusual to have a dedicated backup server on a
=>  > protected network running an EOL version of fedora.

I agree completely.

=> 
=> While I think that this situation is not unusual, I really have mixed
=> feelings when thinking about this: The backup server gets ALL your
=> important data and can access everything (unless you use client
=> encryption). This would be a nice target for an attacker. Therefore I
=> really recommend using secure operating systems. In my experience it is
=> quite easy migrating a host which does only backups to CentOS!

It might be technically easy to do that migration, but there are often other, 
non-technical, considerations that take precedence.

My suggestion, which seems to be the current situation already, would be for 
the bacula developers (thanks, Kern) and packagers to continue their 
successful effort of making sure that bacula compiles on older systems, but 
there's little need to provide RPMs for very old systems. FYI, I still 
wouldn't consider FC5 to be "very old" in this context, though I wouldn't 
bother providing binaries for FC4 and older.

Thanks,

Mark

=> 
=> fs


Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu





Re: [Bacula-users] Backup to disk AND tape

2007-09-24 Thread mark . bergman


In the message dated: Mon, 24 Sep 2007 11:04:00 EDT,
The pithy ruminations from Bob Hetzel were:
=> The reality is that if you really need reliable data for 10 years you're 
=> probably stuck with technology like paper, optical media (choose wisely 

Paper? True, it's about the best archival medium that we've got... however, 
the information density is too low to be practical for most data formats. 
I've yet to see a usable way of rapidly and reliably converting arbitrary 
data to--and from--machine-readable symbols on paper. I guess that the 
Data Matrix encoding from Siemens has sufficient density (theoretically 
storing about 37GB per US Letter page, but practically many orders of 
magnitude smaller).

However, I'd argue that the issues you raise with reading tapes--having the
hardware and software to access the data, rather than deterioration of the media
itself--would also be a major problem for any high density storage printed on
paper. I think that microfilm would be a better answer, with a very simple
binary-data-to-text encoding (UUENCODE, for example).


=> as many of these formats are gone too), or online hard drive space that 
=> you'll be continually checking and carrying along with each upgrade (and 
=> backing up with all your regular fulls).  All have their own major 
=> drawbacks.  The benefit to online hard drive space is that new data 
=> needs grow so fast that in many cases it's not that much more expensive 
=> to keep the old stuff around--for instance... 20 years ago 100GB of data 
=> was not available in one storage system.  10 years ago it was a lot but 
=> quite pricey.  Now it's about the smallest hard drive you can buy new.

While that does reflect the change in drive space per unit or per cost over 
time, that's not in and of itself an advantage of hard drives. If I purchase a 
drive today and put data on it with the intent of keeping that data for 10 
years, I've still got to be able to access the drive that I just purchased, 
even if 5 years from now new drives are cheaper and have higher densities. The 
same issues with bus connections, drivers, etc. still apply. We're already 
seeing this with machines that don't support PATA drives, and with pulling data 
off older SCSI drives.

=> 
=> The odds that you can locate a working tape drive of any current (2007 
=> hardware) type and adapters to plug it into 10 years from now (2017 
=> hardware) aren't good the way things are moving.  Not only will you have 

It depends on what level of technology you are considering.

=> to worry about hardware--is PCI still going to go the way of the ISA bus 
=> by then?--but drivers for old adapters on the OS after the next OS are 
=> quite possibly going to be a problem--there's tons of useless adapters 
=> out there now where manufacturers went out of business before updating 
=> their NT 4.0 drivers to work with XP, Server 2000, and Server 2003, so 
=> even if you save both the tape drive and the adapter who's to say the 
=> adapter will have a spot to plug it in and a driver you can load.

Absolutely, if you're dealing with low-end (small office/consumer) hardware. 
For this discussion, I'd immediately rule out any device that has its own 
bus adapter and drivers.

I'm very, very confident that a 4Gb/s fibre-attached LTO3 or LTO4 or AIT5 drive
purchased today will be usable on a SAN in 10 years. Of course, that might be a
64Gb/s SAN, and I might not be able to purchase any more LTO4 media, and I may
need to keep the drive that I purchase in 5 years in order to read the tapes I
write today, and my current drive might not be able to write to my 10-year-old
LTO4 media...but I believe that I'd be able to read the data. For example, AIT
manufacturers have a very firm commitment to generational compatibility. AIT5
(released in 2006), for example, is read/write compatible with media back to
AIT3 (released in 2001). I believe that the standard says that AIT"n" will be RW
compatible with AIT"n-1", and read compatible with AIT"n-2". 


=> 
=> With regard to 30 years I can almost guarantee problems with just about 
=> any electronic removable media.  While it's true that you can probably 


Yep.


[SNIP!]


=> 
=> In summary... backup software is extremely important for disaster 
=> recovery but should not be considered for long term (5+ years, possibly 
=> even less depending on what you need it for) storage needs in my humble 
=> opinion.
=> 

[SNIP!]

I agree, in that backup is not entirely the same thing as "archive".

Mark



Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.

Re: [Bacula-users] Deduplication?

2007-10-02 Thread mark . bergman
ree, as you can probably tell. Their software is an 
alternative. It's not at all a direct competitor to Bacula, and lacks some 
features that I see as very important in my current environment.

=> 
=> Sorry for the rant. It's just that I'd never taken a look through their 
=> web site before. I've frequently seen comments on linuxquestions.org 

Then why offer the rant, with such a cursory examination of the product?

=> offering BackupPC as a solution to people who ask about backing up. I 
=> just hadn't paid much attention one way or the other. I'm sure there's a 
=> place for them, but it's not in the enterprise for the foreseeable future.

Absolutely. BackupPC is not an "enterprise" product, as that's become defined. 
That doesn't mean that it lacks useful features, ideas, and is not a better 
choice than Bacula for some situations.

That said, I wouldn't consider using it here at my current work, just as I 
probably wouldn't recommend Bacula to someone looking to backup two or three
home machines to a small disk array.

=> 
=> 
=> ---
=> 
=> Chris Hoogendyk
=> 
=> -
=>O__   Systems Administrator
=>   c/ /'_ --- Biology & Geology Departments
=>  (*) \(*) -- 140 Morrill Science Center
=> ~~ - University of Massachusetts, Amherst 
=> 
=> <[EMAIL PROTECTED]>
=> 
=> --- 
=> 
=> Erdös 4
=> 



Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu





[Bacula-users] bconsole problem with modify restore job

2007-10-03 Thread Mark Nienberg
[EMAIL PROTECTED] ~]$ rpm -q bacula-mysql
bacula-mysql-2.2.4-1

In bconsole I cannot change the "Replace" option in a restore job.
See the following  partial bconsole session:


8 files selected to be restored.

Defined Clients:
  1: gecko-fd
  2: gingham-fd
  3: buckeye-fd
  4: tesla-fd
Select the Client (1-4): 1

Run Restore job
JobName: RestoreFiles
Bootstrap:   /var/bacula/buckeye-dir.restore.1.bsr
Where:   /
Replace: always
FileSet: gecko Files
Backup Client:   gecko-fd
Restore Client:  gecko-fd
Storage: File
When:2007-10-03 11:33:07
Catalog: MyCatalog
Priority:10
OK to run? (yes/mod/no): mod
Parameters to modify:
  1: Level
  2: Storage
  3: Job
  4: FileSet
  5: Restore Client
  6: When
  7: Priority
  8: Bootstrap
  9: Where
 10: File Relocation
 11: Replace
 12: JobId
Select parameter to modify (1-12): 11
Replace:
  1: always
  2: ifnewer
  3: ifolder
  4: never
Select replace option (1-4): 4

Run Restore job
JobName: RestoreFiles
Bootstrap:   /var/bacula/buckeye-dir.restore.1.bsr
Where:   /
Replace: always<<<<<<<< that should be "never"
FileSet: gecko Files
Backup Client:   gecko-fd
Restore Client:  gecko-fd
Storage: File
When:2007-10-03 11:33:07
Catalog: MyCatalog
Priority:10

I'm sure this used to work in prior versions (at least in the 1.series).
Mark


-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] bacula-sd 2.2.4 goes kaboom! (segfault on despooling data)

2007-10-04 Thread mark . bergman

I'm testing bacula 2.2.4 and the bacula-sd daemon repeatedly exits with a 
segmentation fault. Bacula 1.38.11 works reliably on the same machine with 
the same hardware and configuration files.

Environment:
Fedora Core 1
Kernel 2.4.26
gcc 3.3.2
MySQL 5.0.22
Dell PV 132T autochanger

Build configuration script:
./configure \
 --prefix=/usr/local/bacula-2.2.4  \
 --disable-nls \
 --disable-ipv6\
 --enable-batch-insert \
 [EMAIL PROTECTED]\
 [EMAIL PROTECTED] \
 --with-db-name=bacula2\
 --with-mysql  \
 --mandir=/usr/local/bacula-2.2.4/man  \
 --with-pid-dir=/usr/local/bacula-2.2.4/var/run\
 --with-subsys-dir=/usr/local/bacula-2.2.4/var/subsys  \
 --with-dir-user=bacula\
 --with-dir-group=bacula   \
 --with-sd-user=bacula \
 --with-sd-group=bacula\
 --with-fd-user=root   \
 --with-fd-group=root && make


I'm using the same configuration files for the director, sd, and fd that work 
with the 1.38.11 installation (after removing the "Accept Any Volume" 
directive, changing the paths for 2.2.4, and adding the directive 
"RecyclePool = Scratch").

The software compiles without error. There were no errors from "btape test" or 
"btape autochanger".

The bacula-sd daemon crashes repeatedly whether or not the variable 
LD_ASSUME_KERNEL was set to "2.4.19" before compiling bacula.

The bacula-sd daemon is running as root (while I sort out an issue that's not
present in 1.38.11 with permissions on the tape device). The bacula-dir normally
runs as user "bacula". 

Even if I modify the bacula-dir options to run it as root, no traceback file is
generated. The message (received via email) from Bacula is:

Using host libthread_db library "/lib/libthread_db.so.1".
0x401728d1 in ?? ()
/usr/local/bacula-2.2.4/etc/btraceback.gdb:1: Error in sourced command 
file:
Cannot access memory at address 0x80a2e34

The fault seems to occur right after the SD begins despooling data. I've got 4 
log files from running the SD with debugging on (set to 200 or higher), and the 
error always happens after the first instance of despooling data. In each case, 
the log file shows "stored.c:582 In terminate_stored() sig=11".

I've attached an excerpt from the SD debugging output.


Thanks,

Mark



Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu



[Attachment: bacula-2.2.4_SD_debugging (SD debug output excerpt)]


[Bacula-users] Storage director segfaults, bacula 2.2.4 useless

2007-10-08 Thread mark . bergman

I haven't seen any response to the query that I posted last week. Please let me 
know if there's any more information that I can provide, or any suggested 
changes in my configuration that will resolve the segmentation faults in the 
storage daemon.

I'd appreciate any help with this issue, as it's preventing me from upgrading 
my current bacula installation. 

Thanks,

Mark


In the message dated: Thu, 04 Oct 2007 11:11:47 EDT,
The pithy ruminations from [EMAIL PROTECTED] on 
<[Bacula-users] bacula-sd 2.2.4 goes kaboom! (segfault on despooling data)> were:
=> This is a multipart MIME message.
=> 
=> --==_Exmh_1191510464_44420
=> Content-Type: text/plain; charset=us-ascii
=> 
=> 
=> I'm testing bacula 2.2.4 and the bacula-sd daemon repeatedly exits with a 
=> segmentation fault. Bacula 1.38.11 works reliably on the same machine with 
the 
=> same hardware and configuration 
=> files.
=> 
=> Environment:
=>  Fedora Core 1
=>  Kernel 2.4.26
=>  gcc 3.3.2
=>  MySQL 5.0.22
=>  Dell PV 132T autochanger
=>  
=> Build configuration script:
=>  ./configure \
=>   --prefix=/usr/local/bacula-2.2.4  \
=>   --disable-nls \
=>   --disable-ipv6\
=>   --enable-batch-insert \
=>   [EMAIL PROTECTED]\
=>   [EMAIL PROTECTED] \
=>   --with-db-name=bacula2\
=>   --with-mysql  \
=>   --mandir=/usr/local/bacula-2.2.4/man  \
=>   --with-pid-dir=/usr/local/bacula-2.2.4/var/run\
=>   --with-subsys-dir=/usr/local/bacula-2.2.4/var/subsys  \
=>   --with-dir-user=bacula\
=>   --with-dir-group=bacula   \
=>   --with-sd-user=bacula \
=>   --with-sd-group=bacula\
=>   --with-fd-user=root   \
=>   --with-fd-group=root && make
=> 
=> 
=> I'm using the same configuration files for the director, sd, and fd that 
work 
=> with the 1.38.11 installation (after removing the "Accept Any Volume" 
directive, changing
=> the paths for 2.2.4, and adding the directive "RecyclePool = Scratch").
=> 
=> The software compiles without error. There were no errors from "btape test" 
or 
=> "btape autochanger".
=> 
=> The bacula-sd daemon crashes repeatedly whether or not the variable 
=> LD_ASSUME_KERNEL was set to "2.4.19" before compiling bacula.
=> 
=> The bacula-sd daemon is running as root (while I sort out an issue that's not
=> present in 1.38.11 with permissions on the tape device). The bacula-dir 
normally
=> runs as user "bacula". 
=> 
=> Even if I modify the bacula-dir options to run it as root, no traceback file 
is
=> generated. The message (received via email) from Bacula is:
=>  
=>  Using host libthread_db library "/lib/libthread_db.so.1".
=>  0x401728d1 in ?? ()
=>  /usr/local/bacula-2.2.4/etc/btraceback.gdb:1: Error in sourced command 
file:
=>  Cannot access memory at address 0x80a2e34
=> 
=> The fault seems to occur right after the SD begins despooling data. I've got 
4 
=> log files from running the SD with debugging on (set to 200 or higher), and 
the 
=> error always happens after the first instance of despooling data. In each 
case, 
=> the log file shows "stored.c:582 In terminate_stored() sig=11".
=> 
=> I've attached an excerpt from the SD debugging output.
=> 
=> 
=> Thanks,
=> 
=> Mark
=> 
=> 
=> 
=> Mark Bergman  [EMAIL PROTECTED]
=> System Administrator
=> Section of Biomedical Image Analysis 215-662-7310
=> Department of Radiology,   University of Pennsylvania
=> 
=> 
http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu
=> 
=> 

[Attachment: bacula-2.2.4_SD_debugging (base64-encoded SD debug log, omitted)]

[Bacula-users] Speed issues with a DLT tape drive

2007-10-10 Thread Mark Maas
Dear list,

I have a speed question about my DLT tape drive. First some tech:

The controller:
description: SCSI storage controller
product: AIC-7892A U160/m
vendor: Adaptec
physical id: 9
bus info: [EMAIL PROTECTED]:09.0
logical name: scsi0
version: 02
width: 64 bits
clock: 66MHz
capabilities: scsi bus_master cap_list scsi-host
configuration: driver=aic7xxx latency=64 maxlatency=25 mingnt=40
resources: iomemory:ff6fe000-ff6fefff irq:177

The Tape Drive:
Vendor: COMPAQModel: DLT4000   Rev: D887
Type:   Sequential-Access  ANSI SCSI revision: 02
target0:0:6: Beginning Domain Validation
target0:0:6: FAST-10 SCSI 10.0 MB/s ST (100 ns, offset 15)
target0:0:6: Domain Validation skipping write tests
target0:0:6: Ending Domain Validation


But the general speed with which Bacula saves to tape is Bytes/sec=2,091,958. 
I could be daft here, but that's about 1.995 MB per second, while the drive 
is supposed to be able to write at 10 MB per second.(?)

The data being backed up is on a software raid1, and reading is currently 
around 60 MB/s, which should be plenty for the tape drive.
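Just to sanity-check the arithmetic (a quick sketch; Bacula reports the rate in raw bytes per second, converted here to binary megabytes):

```python
BYTES_PER_MIB = 1024 * 1024  # binary megabyte

def mib_per_sec(bytes_per_sec: float) -> float:
    """Convert a raw bytes-per-second rate to MiB/s."""
    return bytes_per_sec / BYTES_PER_MIB

# The rate reported above:
print(f"{mib_per_sec(2_091_958):.4f}")  # about 1.9950 MiB/s
```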

The Storage Data section:
Device {
  Name = DLT-IV
  Archive Device = /dev/nst0
  Media Type = Tape
  LabelMedia = yes
  Random Access = no
  AutomaticMount = yes
  RemovableMedia = yes
  Offline On Unmount = yes
  Maximum Open Wait = 172800
  Volume Poll Interval = 60
  Close on Poll= Yes
  AlwaysOpen = yes
}


What am I doing wrong? What other config settings are there?

Thanks,
Mark



Re: [Bacula-users] Speed issues with a DLT tape drive

2007-10-10 Thread Mark Maas

- "John Drescher" <[EMAIL PROTECTED]> wrote:
> > The Tape Drive:
> > Vendor: COMPAQModel: DLT4000   Rev:
> D887
> > Type:   Sequential-Access  ANSI SCSI
> revision: 02
> > target0:0:6: Beginning Domain Validation
> > target0:0:6: FAST-10 SCSI 10.0 MB/s ST (100 ns,
> offset 15)
> > target0:0:6: Domain Validation skipping write tests
> > target0:0:6: Ending Domain Validation
> >
> >
> > But the general speed with which Bacula saves to disk is:
> Bytes/sec=2,091,958. I could be daft here, but that's about 1.9950MB
> per second. While the drive is supposed to be able to write with
> 10MB's per second.(?)
> >
> 10MB/s is the interface speed not the speed the drive writes to tape.
> With my DLT 8000,  gigabit network, and fast dual processor servers I
> get between 3 to 4MB/s with on local backups. Remember the DLT8000
> writes twice the bits in the same tape area as the DLT4000 so to make
> a long story short the numbers you get look normal.

Hi John and thank you.

Good, that's fine then... phew. The Debian server is working as expected, then.

Thanks!
Mark



Re: [Bacula-users] Fwd: Speed issues with a DLT tape drive

2007-10-10 Thread Mark Maas

- "John Drescher" <[EMAIL PROTECTED]> wrote:

> BTW, If you have software compression on, turn it off. It will not
> give you any more space on your tape (as the drive has hardware
> compression) and it will only slow the job down.

Thanks! I've done that per the documentation, but thank you very much for the 
suggestion, and thinking with me.



Re: [Bacula-users] [Bacula-devel] bacula-sd 2.2.4 goes kaboom! (segfault on despooling data)

2007-10-11 Thread mark . bergman
ad 32769 (LWP 18537)):
#0  0x4017075a in poll () from /lib/i686/libc.so.6
#1  0x4002dd1a in __pthread_manager () from /lib/i686/libpthread.so.0
#2  0x4002dfea in __pthread_manager_event () from /lib/i686/libpthread.so.0
#3  0x401797fa in clone () from /lib/i686/libc.so.6

Thread 1 (Thread 16384 (LWP 18507)):
#0  0x401728d1 in select () from /lib/i686/libc.so.6
---Type <return> to continue, or q <return> to quit---
#1  0x0009 in ?? ()
#2  0x401d3d20 in buffer () from /lib/i686/libc.so.6
#0  0x08137148 in ?? ()
(gdb) 
(gdb) quit
The program is running.  Exit anyway? (y or n) y
[EMAIL PROTECTED] bacula-2.2.5]$ exit
exit

Script done on Thu 11 Oct 2007 05:31:46 PM EDT


Thanks,

Mark

=> 
=> Regards,
=> 
=> Kern
=> 
=> On Thursday 04 October 2007 17:11, [EMAIL PROTECTED] wrote:
=> > I'm testing bacula 2.2.4 and the bacula-sd daemon repeatedly exits with a
=> > segmentation fault. Bacula 1.38.11 works reliably on the same machine with
=> > the same hardware and configuration
=> > files.
=> >
=> > Environment:
=> >Fedora Core 1
=> >Kernel 2.4.26
=> >gcc 3.3.2
=> >MySQL 5.0.22
=> >Dell PV 132T autochanger
=> >
=> > Build configuration script:
=> >./configure \
=> > --prefix=/usr/local/bacula-2.2.4  \
=> > --disable-nls \
=> > --disable-ipv6\
=> > --enable-batch-insert \
=> > [EMAIL PROTECTED]\
=> > [EMAIL PROTECTED] \
=> > --with-db-name=bacula2\
=> > --with-mysql  \
=> > --mandir=/usr/local/bacula-2.2.4/man  \
=> > --with-pid-dir=/usr/local/bacula-2.2.4/var/run\
=> > --with-subsys-dir=/usr/local/bacula-2.2.4/var/subsys  \
=> > --with-dir-user=bacula\
=> > --with-dir-group=bacula   \
=> > --with-sd-user=bacula \
=> > --with-sd-group=bacula\
=> > --with-fd-user=root   \
=> > --with-fd-group=root && make
=> >
=> >
=> > I'm using the same configuration files for the director, sd, and fd that
=> > work with the 1.38.11 installation (after removing the "Accept Any Volume"
=> > directive, changing the paths for 2.2.4, and adding the directive
=> > "RecyclePool = Scratch").
=> >
=> > The software compiles without error. There were no errors from "btape test"
=> > or "btape autochanger".
=> >
=> > The bacula-sd daemon crashes repeatedly whether or not the variable
=> > LD_ASSUME_KERNEL was set to "2.4.19" before compiling bacula.
=> >
=> > The bacula-sd daemon is running as root (while I sort out an issue that's
=> > not present in 1.38.11 with permissions on the tape device). The bacula-dir
=> > normally runs as user "bacula".
=> >
=> > Even if I modify the bacula-dir options to run it as root, no traceback
=> > file is generated. The message (received via email) from Bacula is:
=> >
=> >Using host libthread_db library "/lib/libthread_db.so.1".
=> >    0x401728d1 in ?? ()
=> >/usr/local/bacula-2.2.4/etc/btraceback.gdb:1: Error in sourced command
=> > file: Cannot access memory at address 0x80a2e34
=> >
=> > The fault seems to occur right after the SD begins despooling data. I've
=> > got 4 log files from running the SD with debugging on (set to 200 or
=> > higher), and the error always happens after the first instance of
=> > despooling data. In each case, the log file shows "stored.c:582 In
=> > terminate_stored() sig=11".
=> >
=> > I've attached an excerpt from the SD debugging output.
=> >
=> >
=> > Thanks,
=> >
=> > Mark
=> >
=> >
=> > 
=> > Mark Bergman  [EMAIL PROTECTED]
=> > System Administrator
=> > Section of Biomedical Image Analysis 215-662-7310
=> > Department of Radiology,   University of Pennsylvania
=> >
=> > http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upen
=> >n.edu
=> 






[Bacula-users] bug 969 not fixed

2007-10-18 Thread Mark Nienberg
bug 969 shows as fixed in 2.2.5 but it is still broken for me as shown below.

[EMAIL PROTECTED] ~]$ rpm -q bacula-mysql
bacula-mysql-2.2.5-1

Run Restore job
JobName: RestoreFiles
Bootstrap:   /var/bacula/buckeye-dir.restore.2.bsr
Where:   /
Replace: always
FileSet: gecko Files
Backup Client:   gecko-fd
Restore Client:  gecko-fd
Storage: VXA3tape
When:2007-10-18 15:02:54
Catalog: MyCatalog
Priority:10
OK to run? (yes/mod/no): mod
Parameters to modify:
  1: Level
  2: Storage
  3: Job
  4: FileSet
  5: Restore Client
  6: When
  7: Priority
  8: Bootstrap
  9: Where
 10: File Relocation
 11: Replace
 12: JobId
Select parameter to modify (1-12): 11
Replace:
  1: always
  2: ifnewer
  3: ifolder
  4: never
Select replace option (1-4): 4
Run Restore job
JobName: RestoreFiles
Bootstrap:   /var/bacula/buckeye-dir.restore.2.bsr
Where:   /
Replace: always   <<<<<<<<<< should be never
FileSet: gecko Files
Backup Client:   gecko-fd
Restore Client:  gecko-fd
Storage: VXA3tape
When:2007-10-18 15:02:54
Catalog: MyCatalog
Priority:10

Mark




Re: [Bacula-users] Problem backing up catalog

2007-10-24 Thread mark . bergman


In the message dated: Wed, 24 Oct 2007 14:39:00 BST,
The pithy ruminations from Simon Barrett were:
=> On Tuesday 23 October 2007 14:52:21 Mateus Interciso wrote:
=> > On Tue, 23 Oct 2007 14:44:15 +0100, Chris Howells wrote:
=> > > Mateus Interciso wrote:

[SNIP!]

=> 
=> 
=> On this matter; adding the password to the RunBeforeJob line causes my 
=> database password to appear on the status emails:
=> 
=> 24-Oct 13:09 fs01-dir: BeforeJob: run command "/etc/bacula/make_catalog_backup
=> bacula bacula MyPasswordHere"
=> 
=> Status emails are sent in clear text across our network.  Is there a 
=> recommended solution to include sensitive variables in the config files 
=> without exposing them like this?  

Sure. Here's one easy solution:

In $BACULA/bacula-dir.conf, have the catalog backup job call a wrapper
script instead of calling make_catalog_backup directly, as in:

=== bacula-dir.conf snippet ===
# Backup the catalog database (after the nightly save)
Job {
  Name = "BackupCatalog"
  Type = Backup
  Level = Full
  Messages = Standard
  Priority = 10
  Storage = pv132t
  Prefer Mounted Volumes = yes
  Maximum Concurrent Jobs = 1  
  Pool = Incremental
  Incremental Backup Pool = Incremental
  SpoolData = yes
  Client = parthenon-fd
  FileSet="Catalog"
  Schedule = "AfterBackup"
  RunBeforeJob = "/usr/local/bacula/bin/make_catalog_backup.wrapper"
  RunAfterJob  = "/usr/local/bacula/bin/run_after_catalog_backup"
  Write Bootstrap = "/usr/local/bacula/var/working/BackupCatalog.bsr"
  Priority = 11   # run after main backup
}
===

The wrapper script is something like:

=== make_catalog_backup.wrapper ===
#! /bin/sh
exec /usr/local/bacula/bin/make_catalog_backup bacula bacula $PASSWORD
===


This will keep mail from bacula from including the database password. The
advantage of this method is that it doesn't change make_catalog_backup, so that
future bacula upgrades will be transparent.
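As written, the wrapper expects $PASSWORD to already be set. One way to supply
it without putting it back into bacula-dir.conf is to read it from a root-only
file inside the wrapper; a sketch (the helper name and file path are my
assumptions, not part of the original post):

```shell
#!/bin/sh
# Sketch of a wrapper variant: keep the catalog password in a root-only
# file (chmod 600) and read it at runtime, so it appears neither in
# bacula-dir.conf nor in job-status emails. The path is an assumption.
read_pw() {
    # Print the first line of the password file given as $1.
    head -n 1 "$1"
}
# In the real wrapper you would then run something like:
#   PASSWORD=$(read_pw /usr/local/bacula/etc/.catalog_pw)
#   exec /usr/local/bacula/bin/make_catalog_backup bacula bacula "$PASSWORD"
```

Note that the password still ends up on make_catalog_backup's command line, so
the "ps" caveat below still applies.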

The good news is that mysql is security-conscious enough to overwrite the
command line parameter for the password, so a "ps" display doesn't show the
password as part of the mysql command.

Unfortunately, make_catalog_backup is not that smart, and a "ps" (or grepping
through /proc) will show the password on the command-line. If the backup server
is a single user machine that you consider secure, this may not represent too
much of a risk.

On the other hand, if you want to eliminate this problem completely, skip 
the wrapper script and modify make_catalog_backup so that it uses hard-coded 
values from within the script instead of command-line parameters for the 
dbname, the dbuser, and the password.

=> 
=> Regards,
=> 
=> Simon Barrett
=> 


Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu




[Bacula-users] sql query to list volumes per pool

2007-10-29 Thread mark . bergman
I believe that someone was asking for an SQL query to report on the assignment
of pools to volumes. Perhaps this will help...it lists volumes in each pool,
after prompting for the pool name. Suggestions for improving the SQL are welcome.



=== query.sql snippet ===
# 21
:List Volumes per Pool
*Enter Pool Name:
SELECT VolumeName AS VolName,MediaID AS ID,Pool.Name AS Pool,
   MediaType AS Type,VolStatus AS Status,VolErrors AS Errs,
   Media.LastWritten AS LastWritten,
   FROM_UNIXTIME(
  UNIX_TIMESTAMP(Media.LastWritten)
+ (Media.VolRetention)
   ) AS Expires
  FROM Media,Pool,Storage
  WHERE Media.PoolId=Pool.PoolId
  AND Media.StorageId=Storage.StorageId
  AND Pool.Name='%1'
  ORDER BY VolumeName;
===



Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu





Re: [Bacula-users] Mini project

2007-11-05 Thread mark . bergman


In the message dated: Sun, 04 Nov 2007 18:09:51 +0100,
The pithy ruminations from Kern Sibbald on 
<[Bacula-users] Mini project> were:
=> Hello,
=> 
=> Although I am working on a rather large project that I would like to explain a
=> bit later when I have made some real progress (probably after the first of 
=> the year), I am thinking about doing a little mini-project to add a feature 
=> that has always bothered me, and that is the fact that Bacula can at times 
=> when there are failures or bottlenecks have a lot of Duplicate jobs running.


Great idea!
  
=> So, I'm thinking about adding the following two directives, which may not 
=> provide all the functionality we will ever need, but would go a long way:
=> 
=> These apply only to backup jobs.
=> 
=> 1.  Allow Duplicate Jobs  = Yes | No | Higher   (Yes)
=> 
=> 2. Duplicate Job Interval = <time-interval>  (0)
=> 
=> The defaults are in parenthesis and would produce the same behavior as today.
=> 
=> If Allow Duplicate Jobs is set to No, then any job starting while a job of the
=> same name is running will be canceled.

Will that also apply to pending jobs? For example, if one of our large full 
backups (2+TB) is running, a few days of incrementals for other clients may be 
scheduled and pending, but not actually running. I'd be happy seeing the 
automatic cancellation of duplicates applied to pending jobs--even if no job of 
that name is running yet.

=> 
=> If Allow Duplicate Jobs is set to Higher, then any job starting with the same
=> or lower level will be canceled, but any job with a Higher level will start.
 
=> The Levels are from High to Low:  Full, Differential, Incremental

Is it possible to reword this? The description introduces several points of
possible confusion:

1. "level" sounds a lot like "priority"

2. it's inconsistent that "higher" levels take precedence over "lower"
levels, but that "lower" priorities take precedence over
"higher" (numeric) priorities

3. a fixed choice of "Higher" may not always be appropriate in
different environments. 
For example, if I've got a pending Full backup and a pending
Incremental, I might want the Incremental to take precedence,
since it will be relatively quick, and then the Full will be
automatically rescheduled (since it wasn't run) after the
Incremental completes.


What about having the syntax be:

Allow Duplicate Jobs = Yes | No | Precedence (Yes)

and adding yet-another-option:

Duplicate Precedence List = comma separated list of backup levels, in
user-defined priority order from left to right, as in
"Full, Incremental, Differential" (default = Null)

=> 
=> Finally, if you have Duplicate Job Interval set to a non-zero value, any job
=> of the same name which starts <interval> after a previous job of the
=> same name would run, any one that starts within <interval> would be
=> subject to the above rules.  Another way of looking at it is that the Allow
=> Duplicate Jobs directive will only apply after <interval> of when the
=> previous job finished (i.e. it is the minimum interval between jobs).

That would be very helpful.

Thanks,

Mark

=> 
=> Comments?
=> 
=> Best regards,
=> 
=> Kern
=> 
=> PS: I think the default for Allow Duplicate Jobs should be Higher, but that 
=> would change current behavior ...
=> 


Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu







[Bacula-users] bad LastWritten dates in catalog

2007-11-10 Thread Mark Nienberg
In a pool of 4 volumes I discovered that two of them had what seem to be
incorrect dates listed in the LastWritten field of the catalog.  Needless to
say, this messed up my tape rotation.  One volume had the date 2007-10-18 when
it was actually last used on 2007-09-29.  The other was closer, off by only 3
days.

Now I know this seems impossible and you are thinking that some sort of
operator error has occurred.  The only thing I can think of is that I may have
restored from these tapes on those later dates, but of course that should not
have affected the LastWritten date.  I'll be keeping a close watch on them for
a while, but I mention it here in case anyone has other ideas.

This is bacula 2.2.5.

P.S.  I simply manually purged a tape to get back on the correct rotation.

Thanks,
Mark




Re: [Bacula-users] bad LastWritten dates in catalog

2007-11-10 Thread Mark Nienberg
Allan Black wrote:
> Mark Nienberg wrote:
>> Now I know this seems impossible and you are thinking that some sort of
>> operator error has occurred. The only thing I can think of is that I may
>> have restored from these tapes on those later dates, but of course that
>> should not have affected the LastWritten date.
> 
> No, it's not impossible, it wasn't operator error, and of course a restore 
> shouldn't
> have affected the LastWritten field, but it did. :-)
> 
> This is Issue ID 982, which was fixed by Eric Bollengier recently. There's a 
> patch on
> the SourceForge download section for it.

Thanks Allan and Arno,
Good to know I'm not losing my mind.
Mark




[Bacula-users] RHEL/CentOS and mtx

2007-11-28 Thread Mark Nienberg
I'm getting ready to upgrade my bacula server to better hardware and replace my 
single tape drive with an autoloader.  I saw this comment in the documentation:

"Note, we have feedback from some users that there are certain incompatibilities
between the Linux kernel and mtx. For example between kernel 2.6.18-8.1.8.el5 of
CentOS and RedHat and version 1.3.10 and 1.3.11 of mtx. This was fixed by
upgrading to a version 2.6.22 kernel."

Does anyone know if this issue has been resolved in the recent RHEL5.1 release?
I was planning on using RHEL/CentOS on the new server but I don't want to use
non-stock kernels.  If it is still a problem I could use a fedora distro
instead.

Thanks,
Mark




Re: [Bacula-users] is time difference causing problems with auto labeling

2007-11-28 Thread Mark Nienberg
Jason Joines wrote:

> 
>  Anyone seen this before?  Is it just a time synchronization issue 
> causing the error?

I doubt it has anything to do with the time. Let's see the pool definition for
the default pool in your bacula-dir.conf file.

Mark




Re: [Bacula-users] is time difference causing problems with auto labeling

2007-11-28 Thread Mark Nienberg
Jason Joines wrote:

>  Anyone seen this before?  Is it just a time synchronization issue 
> causing the error?

Actually, it looks like your autolabel mechanism is not incrementing properly.
I think bacula is trying to operate multiple times on the same volume name.

Mark




Re: [Bacula-users] All the problems with Bacula

2007-11-29 Thread mark . bergman


In the message dated: Wed, 28 Nov 2007 23:13:18 EST,
The pithy ruminations from "John Stoffel" on 
 were:
=> 

[SNIP!]

=> Shon> here, I wouldn't have come as far as I have. For instance, the
=> Shon> full abilities of the commands aren't well documented. It
=> Shon> required searching the bacula-users posts to discover that I
=> Shon> could do "label slots=1-12 pool=scratch barcodes".
=> 
=> Hear hear!  I want to agree with this statement completely.  The

I agree!

=> bconsole command and its builtin help is a total mish-mash and not
=> very user friendly at all.

I'd go beyond that...it's not just the builtin help. Bacula is a very complex
piece of software, with many different ways to accomplish the same tasks. I'd
say that bacula's main weakness is that these methods are often poorly
documented and somewhat inconsistent.


=> 
=> I've got a plan to sit down and sketch out more consistent and clear
=> set of commands for manipulating bacula from the command line.  Anyone
=> else like the 'bcli' name?  
=> 


No! No! This will only exacerbate the problem (if one defines the problem as
"multiple tools/methods, with varying documentation and different levels of
support for each tool"). There is a huge investment in bconsole--it's a
fundamental interface to bacula, with a great deal of documentation (of varying
quality), and many scripts & procedures are built on bconsole. Introducing a
competitor will not improve bconsole directly, will not replace bconsole, and
will create yet-another-way of doing the same tasks.


I'd be hugely in favor of improving bconsole in the following ways:

1. consistency
standardize the "API" within bconsole so that every subcommand
can take arguments the same way.

2. documentation
improve both the off-line documentation and on-line help

3. command-line interface
Many people (myself included) rely on complex scripts to
execute bconsole commands. It's cumbersome and "fragile" to
write scripts that blindly respond to prompts and menu
structures of an interactive program. The scripts are 
difficult to debug and maintain as the program they are calling
changes over time.

For example, I've got a script that includes:
/bin/echo -e "update slots\n\nquit\n" | bconsole -c ./bconsole.conf
note the "\n\n" sequence. This is vital, as the second carriage
return is a response to a prompt issued within bconsole.

Once the internal bconsole commands are standardized, I think
it would be a tremendous help to allow them to be called
directly from the command line, rather than introducing 
another program. For example, the previous "update slots"
command would be written as:

bacula -c ./bconsole.conf update slots
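Until something like that exists, a here-document is a somewhat less fragile
way to script bconsole than `echo -e`, since each response line (including the
blank line that answers bconsole's prompt) is explicit. A sketch; the wrapper
function name is mine, and the bconsole invocation is from the script quoted
earlier:

```shell
#!/bin/sh
# Feed bconsole a fixed sequence of responses via a here-document.
# run_bconsole takes the command to drive as its arguments, so a
# stand-in such as `cat` can be substituted to inspect the input stream.
run_bconsole() {
    "$@" <<'EOF'
update slots

quit
EOF
}
# Real use might look like: run_bconsole bconsole -c ./bconsole.conf
```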

[SNIP!]

Thanks,

Mark

=> 
=> John
=> 


Mark Bergman  [EMAIL PROTECTED]
System Administrator
Section of Biomedical Image Analysis 215-662-7310
Department of Radiology,   University of Pennsylvania

http://pgpkeys.pca.dfn.de:11371/pks/lookup?search=mark.bergman%40.uphs.upenn.edu





Re: [Bacula-users] is time difference causing problems with auto labeling

2007-11-29 Thread Mark Nienberg
Jason Joines wrote:
> Mark Nienberg wrote:
>> Jason Joines wrote:
>>
>>>  Anyone seen this before?  Is it just a time synchronization issue 
>>> causing the error?
>> Actually, it looks like your autolabel mechanism is not incrementing 
>> properly. I 
>> think bacula is trying to operate multiple times on the same volume name.
>>
>> Mark
> 
> 
> 
>  Here's my default pool definition:
> Pool
> {
>Name = Default
>Pool Type = Backup
>Recycle = yes
>AutoPrune = yes
>Volume Retention = 365 days
>Maximum Volume Jobs = 1
>Maximum Volume Bytes = 5368709120
>LabelFormat = "$JobName-$namecount"
> }
> 
>  Failure to increment may've been part of the problem.  I didn't 
> think so at the time because I hadn't yet had a job over 5 GB.  Then I 
> realized I had set "Maximum Volume Bytes = 5242880" or 5 MB instead of 5 GB.
> 
>  I had started another thread about incrementing the counter.  It is 
> defined like this:
> Counter
> {
> Name = namecount
> Minimum = 100
> }
> I don't know if it will increment like that or not.  I tried:
> LabelFormat = "$JobName-$namecount+" but got an error about an invalid 
> character in my variable name.  How do you use the increment operator?

I don't know that.  But I would first get it working with something like

LabelFormat = "Test-"

After that works you will know your problem was related to labels and 
you can then start messing with various labeling schemes.
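That simplified test could look like this in the Pool resource (a sketch based
on the pool definition quoted above; note the Maximum Volume Bytes value, since
5242880 is 5 MB, not 5 GB, and, if I recall the docs correctly, Bacula appends
a unique number when LabelFormat contains no variables):

```
Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 365 days
  Maximum Volume Jobs = 1
  Maximum Volume Bytes = 5368709120   # 5 GB (5242880 would be 5 MB)
  LabelFormat = "Test-"               # Bacula appends a unique number
}
```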

Mark




[Bacula-users] Windows FD to Solaris SD Problem

2005-04-27 Thread Mark Bober

Hey there. I'm getting Bacula set up for our organization, and I'm having the 
strangest speed problem between a Windows FD and a Solaris SD.

Here's what I've got:

Solaris 5.8, Ultra 60, GigE, is the Director and a SD, FD.

Win 2k3 Srv, Dual Xeon, GigE, an FD.

Linux Centos 4, Single P4, 100M, SD and FD.

It's a GigE network from point to point.

Here are speed results, in K/sec:

To LinuxSD::
Windows: 6,700
Solaris: 14,000

To SolarisSD::
Linux: 4,500
Windows: 500

See which of these things is not like the other? Two other Windows boxes (a 
duplicate of the one listed, and a VMWare session) get the same results. 
There's no network bottleneck betwixt the Sun and Win boxes; the Linux box 
would be affected as well. This is on 1.36.2, although the Linux box is 1.36.3 
since I just loaded it today. (if that's the problem I'll eat a DLT tape) No 
hardware errors, network errors, neither box is loaded, etc, etc.

Ideas? Suggestions? (aside from running the SD/dir on the Linux box :)) Anyone 
seen this before? I'm kinda boggled.

Thanks!

Mark





[Bacula-users] Wildcard Include files?

2005-05-03 Thread Mark Bober

Does the director config '@' (file include) directive accept any sort of wildcard,
a la Apache?

Like:

@/usr/local/etc/conf/*.conf

It's not accepting that syntax; I'm not seeing anything about wildcards in the 
section about included files in the docs.
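For what it's worth, since '@' takes a single file name, one workaround is to
concatenate the fragments into a single generated file and include that instead.
A sketch, not a Bacula feature; the helper name is mine and the directory is the
poster's example:

```shell
#!/bin/sh
# Build one include file from *.conf fragments, so bacula-dir.conf can
# pull it in with a plain include, e.g. @/usr/local/etc/conf/all.generated
build_includes() {
    # $1 = directory holding the fragments, $2 = generated output file
    cat "$1"/*.conf > "$2"
}
# Example: build_includes /usr/local/etc/conf /usr/local/etc/conf/all.generated
# (re-run before reloading the director whenever a fragment changes)
```

Giving the generated file a non-.conf suffix, as above, keeps it from matching
the glob on the next run.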

Thanks!

Mark



