[Bacula-users] bacula lto-2 drive performance?

2007-07-10 Thread Bob Hetzel
Hi all,

I'm currently implementing a new bacula server running on a new dual 
core/dual cpu Dell PowerEdge 2900 server into a Dell Powervault 136T 
with a LTO-2 drive (IBM Ultrium-td2 model).  The tape drive and robot 
are on a dedicated Adaptec 29160 card.  The OS is OpenSuse 10.2

I think I should be able to get around 100 gigabytes/hour backup data 
rate with this hardware, but it looks like I'm getting about half that. 
For those of you running bacula on LTO-2 hardware, what kind of backup 
rates do you get?

Also, if you've got substantially better than 50 GB/hr on LTO-2 
hardware, what scsi card are you using?

Any other suggestions?

Bob

-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula lto-2 drive performance?

2007-07-10 Thread Bob Hetzel

It was suggested that perhaps my 50 GB/hr data rate was due to the 
client maxing out around that rate.  I believe I have verified this. 
Previously I had been used to seeing much higher rates on another backup 
system, but I was only backing up servers using high performance SCSI 
disks, RAIDed together.

I installed the bacula client on one of those servers, where I saw a 
backup rate of about 77 GB/hr (21375.8 KB/s as reported by bacula).

I now see that this is approximately the same speed I was able to get 
with Veritas Backup Exec on that particular collection of files.  The
files in question are videos, about 100MB each, 82 GB total, and 
probably reasonably well compressed.
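As a sanity check on that figure, Bacula's KB/s number converts to GB/hr like this (Bacula reports decimal units, so 1 GB = 1e6 KB):

```shell
# Convert Bacula's reported rate (KB/s) to GB/hr.
# 21375.8 KB/s is the rate from the job output above.
awk -v r=21375.8 'BEGIN { printf "%.1f GB/hr\n", r * 3600 / 1e6 }'
```

which prints 77.0 GB/hr, matching the figure above.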





[Bacula-users] bacula tape handling with a library where tape is removed

2007-07-23 Thread Bob Hetzel
Greetings,

I've been working on setting up bacula with a tape library.  I now have 
a situation where a tape got removed from the library.  Bacula's jobs 
are all hung up waiting for that tape now.

How do I tell bacula to look for the "current tape" but move on to 
another tape automatically when the one it wants isn't in the library? 
I've got plenty of available labelled volumes listed in the pool, but 
for some reason it doesn't want to just grab one.
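For reference, the usual first step after tapes are physically added or removed is to resync the catalog's InChanger flags from bconsole; a sketch (the storage name here is hypothetical):

```
*update slots storage=Ultrium-2
*list volumes pool=Default
```

If the missing tape still gets requested after that, changing that volume's status with the update volume command is another (untested here) option.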

Bob



Re: [Bacula-users] bacula tape handling with a library where tape is removed

2007-07-23 Thread Bob Hetzel
That didn't help.  Bacula and the library know the tape isn't present, 
so bacula just asks again for the same tape.

   Bob

Jeffrey Lang wrote:
> After removing/changing tapes in the library, make sure to do an "update 
> slots" so the new tape information is loaded into bacula.
> 



Re: [Bacula-users] bacula tape handling with a library where tape is removed

2007-07-23 Thread Bob Hetzel


Brian Debelius wrote:
 > Try unmounting the drive, and then mounting it with one of the tapes you
 > think should work.  Then issue a 'status dir' at the console.
 >

That didn't work either.  I can't seem to get bacula to mount a tape 
into the drive so long as this mount request is pending.

Here's what happened... (there's no tape in the drive right now)

*unmount
Automatically selected Storage: Ultrium-2
3901 Device "IBMLTO2-1" (/dev/nst0) is already unmounted.
*mount storage=Ultrium-2 slot=11
3001 OK mount. Device="IBMLTO2-1" (/dev/nst0)
*messages
23-Jul 17:31 gyrus-sd: Invalid slot=0 defined, cannot autoload Volume.
23-Jul 17:31 gyrus-sd: Please mount Volume "LT0105L2" on Storage Device 
"IBMLTO2-1" (/dev/nst0) for Job Client1.2007-07-23_12.05.00



[Bacula-users] bacula windows 2.2.0 uninstall crash and installer config issue?

2007-08-17 Thread Bob Hetzel

I'm trying to create internal documentation so others can install the 
bacula-fd on various windows computers around here.  I just installed 
bacula 2.2.0.  Since it didn't give me most of the configuration screen 
prompts, I decided to back up my config file and then uninstall it.  The 
uninstaller seems to be stalling with the progress bar about 1/6 of the 
way across.  I was able to terminate the au_.exe process and continue 
using my computer fine, but is anybody else having a problem with the 
winbacula 2.2.0 uninstaller crashing?

The uninstaller seemed to work fine in the 2.0.3 version.  Additionally, 
even when I pick the Custom install type I don't have the option to 
install bacula without documentation.  That option was also available in 
the 2.0.3 version.

Bob



Re: [Bacula-users] Database odbc or one more driver.

2007-08-21 Thread Bob Hetzel

If bacula supported other DB engines, bacula could be more easily 
"evaluated", but the mysql setup is pretty darn simple.  I applaud the 
bacula developers for adding extra documentation on mysql into the 
bacula docs rather than just a simple link to a google search or 
www.mysql.org.  If I didn't have the extra help I probably would have 
been delayed substantially in implementing bacula at my site.

IMHO, it's somewhat of a design flaw to set up your backup system 
depending upon the enterprise db box.  If the enterprise db server goes 
down you need to get that up and running, then restore the bacula 
catalog, then you can begin restoring everything else.  IMHO, the bacula 
database should be tiny and self-contained.  Once you make bacula depend 
on two systems you start to increase the complexity of the system in a 
way that will make disaster recovery efforts take a lot longer.

I agree that regular non-backup apps should generally be able to work 
with more than one db engine, but I find this requirement a completely 
different matter for a backup system.

Bob

> I have to agree with Joao (apologies for spelling 
> problems due to Latin alphabet).  The choice of 
> something as trivial as a catalog database can be 
> a complete show-stopper for many applications.  I 
> have seen this when deploying a solution that met 
> all customer requirements, but which was SQL 
> Server based, and was ultimately rejected for no 
> other reason than that it was not 
> Oracle.  Likewise, other solutions, that happen 
> to support Oracle, automatically get a free pass 
> because they support the in-house database even 
> though they have passed none of the acceptance 
> tests required of other solutions.
> 
> Consider that no enterprise backup framework 
> officially supports multiple RDBMS vendors and 
> Bacula's support of multiple databases 
> (abstracted through dedicated interface code) becomes especially attractive.*
> 
> I will not go so far as to say that Bacula needs 
> support for additional databases but that, given 
> the availability of coders and testers, it can 
> easily be ported to most RDBMS's on the planet. 
> Support for more databases is ultimately a 
> positive thing but it does incur additional development and testing effort.
> 
> --PLB



[Bacula-users] autochanger mount/unmount problem

2007-09-05 Thread Bob Hetzel
I've been having this trouble for a while.  It was just posted on the 
list that adding autochanger=yes in the right spot in the bacula-dir.conf 
file solved one unmounting/mounting problem I had...  but I'm still left 
with this one.  Anybody have any ideas what's still wrong?

Bob

*unmount Dell-PV136T
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Connecting to Storage daemon Dell-PV136T at gyrus:9103 ...
3307 Issuing autochanger "unload slot 18, drive 0" command.
3995 Bad autochanger "unload slot 18, drive 0": ERR=Child exited with code 1
Results=Unloading drive 0 into Storage Element 18...mtx: Request Sense: 
Long Report=yes
mtx: Request Sense: Valid Residual=no
mtx: Request Sense: Error Code=70 (Current)
mtx: Request Sense: Sense Key=Illegal Request
mtx: Request Sense: FileMark=no
mtx: Request Sense: EOM=no
mtx: Request Sense: ILI=no
mtx: Request Sense: Additional Sense Code = 53
mtx: Request Sense: Additional Sense Qualifier = 01
mtx: Request Sense: BPV=no
mtx: R3002 Device "IBMLTO2-1" (/dev/nst0) unmounted.
*mount
Automatically selected Storage: Dell-PV136T
Enter autochanger slot: 18
3301 Issuing autochanger "loaded? drive 0" command.
3302 Autochanger "loaded? drive 0", result is Slot 18.
3301 Issuing autochanger "loaded? drive 0" command.
3302 Autochanger "loaded? drive 0", result is Slot 18.
3001 Mounted Volume: LTO219L2
3001 Device "IBMLTO2-1" (/dev/nst0) is mounted with Volume "LTO219L2"



Re: [Bacula-users] Backup to disk AND tape

2007-09-24 Thread Bob Hetzel
The reality is that if you really need reliable data for 10 years you're 
probably stuck with technology like paper, optical media (choose wisely 
as many of these formats are gone too), or online hard drive space that 
you'll be continually checking and carrying along with each upgrade (and 
backing up with all your regular fulls).  All have their own major 
drawbacks.  The benefit to online hard drive space is that new data 
needs grow so fast that in many cases it's not that much more expensive 
to keep the old stuff around--for instance... 20 years ago 100GB of data 
was not available in one storage system.  10 years ago it was a lot but 
quite pricey.  Now it's about the smallest hard drive you can buy new.

The odds that you can locate a working tape drive of any current (2007 
hardware) type, and adapters to plug it into, 10 years from now (2017 
hardware) aren't good the way things are moving.  Not only will you have 
to worry about hardware--is PCI still going to go the way of the ISA bus 
by then?--but drivers for old adapters on the OS after the next OS are 
quite possibly going to be a problem.  There are tons of useless adapters 
out there now whose manufacturers went out of business before updating 
their NT 4.0 drivers to work with XP, 2000, and Server 2003.  So even if 
you save both the tape drive and the adapter, who's to say the adapter 
will have a spot to plug into and a driver you can load?

With regard to 30 years, I can almost guarantee problems with just about 
any electronic removable media.  While it's true that you can probably 
find a 9-track mainframe-style tape reader to read 30 year old data 
tapes on many current computer systems, the market does not seem to be 
maintaining that trend for current storage--things are moving just too 
fast.  The 9-track longevity was driven by IBM's mainframe dominance over 
the last 30 years, but corporations have been migrating off IBM mainframe 
hardware right and left in favor of hardware from companies that may or 
may not still be in existence 10 years from now.

In summary... backup software is extremely important for disaster 
recovery but, in my humble opinion, should not be relied on for long-term 
(5+ years, possibly even less depending on what you need it for) storage 
needs.

> Message: 22 Date: Mon, 24 Sep 2007 08:06:29 -0400 From: "John Drescher" 
> <[EMAIL PROTECTED]> 
> 
>> > but cheap hard drives keep your data safe only for 3 - 4 years for
>> > sure (maybe longer), and some tapes (DLT, LTO) are specified to hold your
>> > data for 15 - 30 years (if the tape is not constantly in use, so for
>> > archiving purposes).
> 
> On top of that I have several other reasons why tape is better for
> backups. We have 10TB of data online (linux software raid 5 and 6)
> which represents between 1/2 and 2/3 of our data but we do not in any
> way consider this as a backup. What happens if the file system
> corrupts (I have seen this happen) and 1/2 of your data is lost? Hard
> drives use power and require extras (servers/cages) that make the cost
> of them a lot more than the price of a single drive. And they do not
> scale anywhere near as well as tape. And you have to replace them
> every 3 to 5 years or fear that you will lose your data. To avoid
> some of these problems you could store the drives on a shelf (in a
> temp / humidity controlled environment), however there is a big risk
> here that the drive will not spin when you install it 10 years down
> the line making the data on the disk very expensive to recover.
> 
> John




[Bacula-users] autochanger mount/unmount config without blocking

2007-10-04 Thread Bob Hetzel

When I unmount a volume using the bconsole unmount command, bacula 
blocks access to the tape drive until I mount another volume.  Is there 
a way to config it so the drive isn't blocked when a tape is unmounted 
from within bconsole?  If I unmount the tape drive I'd like it to be 
able to grab and mount the proper tape automatically when the next job 
starts, unless I deliberately want to block it from executing jobs. 
Items that sound sort of helpful in the manual are some combination of 
AutomaticMount, AlwaysOpen, Close On Poll, Volume Poll Interval, and 
Offline On Unmount.

However, I'd like it to be able to get the load status of the drive, 
and if a tape is loaded it shouldn't need to rewind/fast-forward on each 
job after a pause.  Has anybody worked this kind of configuration out already?

Otherwise autochanger and backup stuff seems to be working well.

Here's what's in my bacula-sd.conf:
Autochanger {
   Name = Dell-PV136T
   Device = IBMLTO2-1
   Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
   Changer Device = /dev/sg2
}
Device {
   Name = IBMLTO2-1
   Drive Index = 0
   Media Type = LTO-2
   Archive Device = /dev/nst0
   AutomaticMount = yes
   AlwaysOpen = yes
   Offline On Unmount = yes
   RemovableMedia = yes
   RandomAccess = no
   AutoChanger = yes
   SpoolDirectory = /home/baculaspool
   Maximum Spool Size = 200gb
   Alert Command = "sh -c 'smartctl -H -l error %c'"
}

And here's what's in my bacula-dir.conf relative to the changer

Storage {
Name = Dell-PV136T
Autochanger = yes
Address = gyrus
SDPort = 9103
Device = Dell-PV136T
Media Type = LTO-2
Password = ""
}
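One directive in the bacula-sd.conf Device resource above that may interact with this: with Offline On Unmount = yes, the SD issues an offline (eject) to the drive on unmount, forcing a full unload/reload cycle before the next job.  A sketch of the combination the manual generally suggests for autochangers (verify against the docs for your version; this is a guess, not a tested fix):

```
Device {
  Name = IBMLTO2-1
  ...
  AutomaticMount = yes      # mount when the changer loads a volume
  AlwaysOpen = yes          # keep the device open between jobs
  Offline On Unmount = no   # don't eject; let the autochanger manage loads
}
```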




[Bacula-users] jobs that can't be canceled?

2007-10-17 Thread Bob Hetzel
Greetings all,

Near the end of my day's run of backup jobs I noticed 3 still 
scheduled... one was a job that apparently isn't working right, and two 
others were waiting on higher priority jobs.

I canceled the problem job (named dms48) and the one I changed priority 
on (wnacs06), and the 3rd job wound up still waiting.  I was then left 
with the following...

*status storage
Automatically selected Storage: Dell-PV136T
Connecting to Storage daemon Dell-PV136T at gyrus:9103

gyrus-sd Version: 2.2.5 (09 October 2007) i686-pc-linux-gnu suse 10.2
Daemon started 16-Oct-07 11:30, 70 Jobs run since started.
  Heap: heap=1,044,480 smbytes=624,104 max_bytes=826,119 bufs=142 
max_bufs=203
Sizes: boffset_t=8 size_t=4 int32_t=4 int64_t=8

Running Jobs:
Writing: Incremental Backup job dms48 JobId=988 Volume="LTO235L2"
 pool="Default" device="IBMLTO2-1" (/dev/nst0)
 spooling=1 despooling=0 despool_wait=0
 Files=0 Bytes=0 Bytes/sec=0
 FDReadSeqNo=7 in_msg=6 out_msg=5 fd=14


Jobs waiting to reserve a drive:


Terminated Jobs:
  JobId  LevelFiles  Bytes   Status   FinishedName
===



Device status:
Autochanger "Dell-PV136T" with devices:
"IBMLTO2-1" (/dev/nst0)
Device "FileStorage" (/home/baculastorage) is not open.
Device "IBMLTO2-1" (/dev/nst0) is mounted with:
 Volume:  LTO235L2
 Pool:Default
 Media type:  LTO-2
 Slot 26 is loaded in drive 0.
 Total Bytes=212,202,482,688 Blocks=3,289,348 Bytes/block=64,512
 Positioned at File=222 Block=8,328


In Use Volume status:
LTO235L2 on device "IBMLTO2-1" (/dev/nst0)
 Reader=0 writers=1 reserved=0


Data spooling: 1 active jobs, 0 bytes; 63 total jobs, 46,425,349,172 max 
bytes/job.
Attr spooling: 1 active jobs, 0 bytes; 63 total jobs, 60,886,386 max bytes.

*status director
gyrus-dir Version: 2.2.5 (09 October 2007) i686-pc-linux-gnu suse 10.2
Daemon started 16-Oct-07 11:30, 87 Jobs run since started.
  Heap: heap=864,256 smbytes=357,966 max_bytes=566,007 bufs=2,378 
max_bufs=3,910

Scheduled Jobs:
Level  Type Pri  Scheduled  Name   Volume
===



Running Jobs:
  JobId Level   Name   Status
==
988 Increme  dms48.2007-10-17_11.35.06 has been canceled
   1015 Increme  wnacs06.2007-10-17_11.35.33 has been canceled
   1016 FullBackupCatalog.2007-10-17_13.10.37 is waiting execution


*release
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Automatically selected Storage: Dell-PV136T
Connecting to Storage daemon Dell-PV136T at gyrus:9103 ...
3937 Device "IBMLTO2-1" (/dev/nst0) is busy with 1 writer(s).
*estimate job=dms48
Using Catalog "MyCatalog"
Connecting to Client dms48 at dms48.case.edu:9102
Failed to connect to Client.
*

Looking at the spool directory... I find the following file...
-rw-r----- 1 root bacula 0 2007-10-17 11:43 
gyrus-sd.data.988.dms48.2007-10-17_11.35.06.IBMLTO2-1.spool

which is the job that I tried to cancel.

I was able to stop and start the bacula services and continue, so it's 
clear I don't have a hardware failure at least...

Any thoughts?  I've been able to cancel other jobs just fine in the past, 
but I just upgraded to bacula version 2.2.5.

Bob



[Bacula-users] message queue bug?

2007-10-25 Thread Bob Hetzel
Greetings all,

*status director
gyrus-dir Version: 2.2.5 (09 October 2007) i686-pc-linux-gnu suse 10.2
Daemon started 24-Oct-07 09:42, 74 Jobs run since started.
  Heap: heap=868,352 smbytes=432,591 max_bytes=590,898 bufs=4,146 
max_bufs=4,596
 rest edited out for brevity...

I've noticed this problem, which seems benign but may be evidence of a 
larger one... anybody care to dive into the code to figure out 
what's going on?
These commands were typed immediately one right after another... no jobs 
were running.

===
*messages
You have no messages.
*estimate job=jxh14-gx280
Using Catalog "MyCatalog"
Connecting to Client jxh14-gx280 at jxh14-gx280.case.edu:9102
Failed to connect to Client.
You have messages.
*messages
25-Oct 10:37 gyrus-dir JobId 0: Fatal error: Error sending Hello to File 
daemon at "gdm1.case.edu:9102". ERR=Broken pipe




Re: [Bacula-users] monitor tape shoe-shining

2007-10-31 Thread Bob Hetzel

> 
> I am also interested in this.
> 
> I have a pretty big full backup (3.2TB) for which I need around 12 LTO-2 tapes
> (with hw compression on). I use spooling with a max size of 215GB, and
> typically spooling (of max size) lasts 2h50min, while despooling lasts
> 2h40min. Is it normal that spooling lasts longer than the actual writing to
> the tapes? 
> 
> I would drop the spooling part, but I'm worried about tape shoe-shining.
> (If only bacula could despool and start spooling the next chunk of the
> same job at the same time.)
> 
> Sandi


Spooling with concurrency may help quite a bit.  Without concurrency, 
spooling will slow things down, because LTO-2 tape drives are generally 
faster than non-raided hard drives.  So not only is writing to your 
spool drive slower than writing directly to tape, reading it back also 
slows the backup down.  If you're worried about shoe-shining, you 
probably don't have to worry much about that with LTO-2 (and newer) 
drives.  So long as the data stream isn't extremely jerky (i.e. speeding 
up and slowing down a lot), the drive should regulate the speed of the 
tape movement to keep pace with the data stream.  Some LTO-1 drives had 
this speed-matching feature but not all; I believe all LTO-2 ones do, though.
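As a rough check on Sandi's numbers above (assuming a full 215 GB spool chunk each cycle and decimal units, 1 GB = 1000 MB):

```shell
# Spool vs. despool throughput: 215 GB chunk,
# 2h50min (170 min) to spool, 2h40min (160 min) to despool.
awk 'BEGIN {
  printf "spool:   %.1f MB/s\n", 215000 / (170 * 60)
  printf "despool: %.1f MB/s\n", 215000 / (160 * 60)
}'
```

That works out to about 21.1 MB/s spooling vs. 22.4 MB/s despooling--both near a single disk's sequential rate, consistent with the spool disk, not the tape drive, being the limit.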



[Bacula-users] windows client crawls to halt

2007-12-11 Thread Bob Hetzel
Greetings,

I have about 50 windows clients being backed up from my OpenSuse 10.2 
linux based bacula server now.  I've been having trouble with a small 
number of them where they start backing up ok but then much of the time 
the backup goes on forever.  We turned on debugging on the client in 
question with a level of 100 and these are the last few lines of the file...

mxg86: ../compat/compat.cpp:1172 readdir_r(c11ff0, { d_name="NEW Yr 2 
Stud eval 03-04.doc", d_reclen=28, d_off=231881
mxg86: ../compat/compat.cpp:194 Enter wchar_win32_path
mxg86: ../compat/compat.cpp:378 Leave wchar_win32_path=\
mxg86: ../compat/compat.cpp:1172 readdir_r(c11ff0, { 
d_name="NEW%20TEACHING%20SERIES%20FOR%20FACULTY%20updated.docsid=ZLi4TOR9U74&mbox=INBOX&charset=escaped_unicode&uid=5901&number=2&filename=NEW%20TEACHING%20SERIES%20FOR%20FACULTY%20updated.url",
 
d_reclen=184, d_off=231909
mxg86: ../compat/compat.cpp:194 Enter wchar_win32_path
mxg86: ../compat/compat.cpp:378 Leave wchar_win32_path=\
mxg86: ../compat/compat.cpp:1172 readdir_r(c11ff0, { 
d_name="NEWCURRICULUM _06-07_42007.xls", d_reclen=30, d_off=232093
mxg86: ../compat/compat.cpp:194 Enter wchar_win32_path
mxg86: ../compat/compat.cpp:378 Leave wchar_win32_path=\

1) Can anybody comment on the line that's different and why (I'm 
speaking of the one that includes the &mbox=INBOX part)?  I searched the 
entire debug file and didn't find any others like that.
2) Also, any idea whether that has anything to do with the problem?

Typically when this happens, the client will log an error in the 
application event log.  The bacula fd client doesn't terminate, so the 
bacula server thinks it's still connected, but it doesn't appear to ever 
complete the backup even several hours later (it should be completing in 
under 2 hrs).

This was a full backup, so it's not just stalled scanning through the 
file system.  It was most likely upgraded from an Incremental backup, 
however, if that matters.  The server was spooling data and I have 
concurrency set to 3.  As a result, this slows things down overall--once 
this one gets stuck I'm really only getting two concurrent backups 
rather than 3.  I also have to restart the bacula server daemons to kill 
it, as even after using the bconsole cancel command it does not terminate 
over the course of more than an hour.




[Bacula-users] re-run restore job by jobid?

2009-02-19 Thread Bob Hetzel

Is there an easy way in bacula to re-run a job such as a restore by jobid?

Every now and then I run into a situation where I start a restore but it 
fails due to something like an application on the destination machine 
needing to be shut down.  I'd like a way to just go into bconsole and tell 
it 'run jobid=12345' so I wouldn't have to re-select all the files again.

Does this already exist?

   Bob



Re: [Bacula-users] Some restores slow, some restores fast

2009-03-02 Thread Bob Hetzel
Is it possible that the network driver on your boot device is just out of 
date (i.e. buggy) or that the firmware (hard drives, controllers, network 
adapter, etc) on the server with the problem just needs updating?

While we're on the topic... how did you get the windows bare metal restore 
to work?  I was able to get a complete filesystem restore to run but 
couldn't make the server boot into that restored windows after that.

> From: Quibble 
> To: bacula-users@lists.sourceforge.net
> 
> Hi,
> 
> I have been doing bare metal restores, using bacula, of NT boxes, 2000
> boxes, and 2003 boxes with some success.  At this point, one of the
> roadblocks is the speed of restores.  Some restores reach 24 MB/s , while
> others only reach around 1 MB/s.  Using the backups from one particular box
> (it is a 2003 webserver) that restores around .7 to 1.5 MB/s , we have been
> running some tests to see if we can speed up restores.  So far, we have had
> no luck.  
> 
> One of the things we have tried is to restore to completely unused storage
> (to rule out I/O issues)
> Another thing we have tried is splitting the backups into 4 different
> filesets (they run about 20 gigs each in size).  
> 
> Does anyone have any suggestions for speeding up our restores?  
> -- View this message in context: 
> http://www.nabble.com/Some-restores-slow%2C-some-restores-fast-tp22183059p22183059.html
>  Sent from the Bacula - Users mailing list archive at Nabble.com. 





Re: [Bacula-users] Bacula limits

2009-03-02 Thread Bob Hetzel


I think I might have one of the bigger installations doing both servers and 
desktops... although judging from the wiki stats page referenced in this 
thread I'm clearly not the biggest in any category.

The Catalog, SD, and Dir are all on same box.

System Description: Centos 5.2 x86_64, 2xIntel Xeon 5160 3.0Ghz core2 
cpu's, 8GB RAM, MySQL 5.0.45
Bacula version: 2.4.4
Bacula Catalog size on disk (du -sh): 4.9GB

Autochanger: two drive LTO-2, model PowerVault 136T--everything to tape but 
with data and attribute spooling turned on to keep from shoe-shining the 
tape drives.
Data retention period is 90 days for everything.
I do desktop fulls on the 1st of the month, incs for the rest of the month. 
  Server backups are about every 3 weeks, with incs run Mon-Fri.
The breakdown is approximately 35 server jobs, 200 desktop jobs.

Script output...

ClientCount: 246
FileCount: 29793013
FileRetentionAvg: 5356097
FilenameCount: 2809683
FilesPerJobAvg: 3328
JobRetentionAvg: 15552000
PathCount: 617586
TotalBytes: 4567117473319
TotalFiles: 9907476
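For scale, the TotalBytes figure above works out to:

```shell
# TotalBytes from the script output, in decimal TB and binary TiB.
awk 'BEGIN {
  b = 4567117473319
  printf "%.2f TB\n",  b / 1e12
  printf "%.2f TiB\n", b / (1024 ^ 4)
}'
```

i.e. about 4.57 TB (4.15 TiB) of backed-up data under retention across the 246 clients.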






Re: [Bacula-users] Suggestions for selecting Bacula version

2009-03-12 Thread Bob Hetzel

1) If you use a version that's not the latest, the bacula folks will most 
likely just tell you to upgrade to the latest version, so you might as well 
start there now and try to stay current.

2) For open source software where development is still active, I'd highly 
recommend learning to configure/compile from source.  You don't need to 
know how to program to do this, and it's reasonably well written up in the 
Bacula docs at www.bacula.org.  Documentation of the decisions the 
packagers make when compiling is generally sparse or non-existent.

3) If you find a bug and happen to be the first one to report it, you'll be 
able to test the fix much more easily than if you have to wait for the next 
release followed by the next package release.  (Kern and others have been 
really good in my experience about fixing bugs whenever they can get a full 
explanation plus output showing what's happening.)  Generally fixes are put 
out as patches of source code only (new "release" versions of bacula seem 
to be quarterly or so lately).

4) In addition, through configuring/compiling practice you'll probably 
obtain a much better understanding of how Linux/bacula/etc work.  I don't 
compile very much at all on my systems, but bacula is definitely one to 
compile here where I work.  Since you're running a Linux variant, and 
bacula's active development takes place on Linux, you won't likely hit a 
compiling problem that's not easy to solve.

In my case I just put my compile options in a script so I don't have to 
figure them out again every time; installing a new bacula version or a 
patch fix takes only about 5 minutes.
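For what it's worth, a minimal sketch of such a wrapper script (the install prefix and configure options here are examples, not my actual ones; substitute your own):

```shell
#!/bin/sh
# build-bacula.sh - keep the configure options in one place so an
# upgrade is just: sh build-bacula.sh && make && make install
# PREFIX and OPTS below are illustrative; substitute your own choices.
PREFIX=/opt/bacula
OPTS="--prefix=$PREFIX --with-mysql --enable-smartalloc"
# Print the command here; in the real script you would run it instead.
echo "./configure $OPTS"
```

Then each new source tarball gets the exact same build, with no need to remember what the last one used.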

 Bob

Previously, Kevin Keane said...

 > There is no 100% cut-and-dried answer here, but a couple thoughts:
 >
 > - Regardless of what you do, I would not use a version older than, say,
 > 2.2. Preferably use 2.4.
 >
 > - You may want to stay with the version you have been testing. After all,
 >  your test results may not apply to other versions.
 >
 > - If you can, use the version that comes with your distribution, in this
 > case with Ubuntu 8.04. It makes updating and maintenance a lot easier.

 > - If the Ubuntu version is too old, see if you can find a newer version
 > already compiled for Ubuntu 8.04 somewhere else.
 >
 > Only as a last resort recompile yourself.
 >
 > Reynier Pérez Mira wrote:
 >
 >
 >
 > > > Hi everyone:
 > > > I've been testing Bacula for more than six months. Right now my boss
 > > > is asking me for a suitable version for production. Which version
 > > > would you recommend? I use Ubuntu Server and I was thinking of using
 > > > the version in the repository for the LTS (Ubuntu 8.04) release.
 > > > Would that be fine?
 > > > Thanks and cheers in advance
 > >





Re: [Bacula-users] Suggestions for selecting Bacula version

2009-03-13 Thread Bob Hetzel
Regarding the bacula manual...
It's not very Linux specific and is really one of the better manuals I've 
used for an open source project.  Far better than many commercial manuals 
imho.  It's long because bacula has a lot of features--many of which a new 
implementation won't use but may be desired after getting things going.

Reynier Pérez Mira wrote:
> On Thu, 2009-03-12 at 11:39 -0400, Bob Hetzel wrote:
> 
> First of all, let me thank everyone who took a minute or more to answer
> my questions. Some other guys and I are starting a data center, and an
> automatic backup tool is needed here. Because I used Bacula before, I
> suggested it and everyone agreed with my decision, but they told me that
> I will be the person who supports Bacula and that I need to be able to
> solve any problem. That is why this was my first question.
> 
> Now, as I said before, I have a production server connected to a SAN
> with at least 24 TB, so the size of the backups is not a problem, or at
> least not now; maybe in the future we will need to grow. This server has
> Ubuntu 8.04.2 LTS installed. It's virgin because I want to dedicate this
> server only to Bacula. The version in the Ubuntu repositories for the
> 8.04 release is too old IMHO. For Intrepid Ibex (8.10), the latest
> Bacula is version 2.4.2. So I think, as you suggested, it will be best
> to compile from sources and apply patches if I need to.
> 
>> 1) If you use the version that's not the latest, the bacula folks will
>> most
>> likely just tell you to upgrade to the latest version, so you might as
>> well
>> start there now and try to stay current.
> 
> I agree with all of that. Maybe I could take release 2.4.4 as stable,
> compile and configure it, start from there, and then when a future
> version arrives evaluate whether I need to upgrade or can just leave the
> system intact.
>> 2) On open source software that's where development is still "active"
>> I'd
>> highly recommend learning to configure/compile from source.  You don't
>> need
>> to know how to program to do this and it's reasonably well written up
>> in
>> the Bacula docs at www.bacula.org.  Documentation of decisions that go
>> into
>> compiling by the packagers is generally sparse or non-existent.
> 
> I agree with this too, but I'm one of those people who think: "if
> things are working well, why touch them and break the configuration?".
> I also think the Bacula team should create a basic manual for those who
> don't know anything about Linux; that's not my case, but it's my
> opinion. If one exists and I'm wrong, just let me know where to find it.
> 
> One more time, thanks to everyone, and expect more questions from me ;)
> Cheers



Re: [Bacula-users] How to display specific jobids in bconsole

2009-03-26 Thread Bob Hetzel


> From: John Drescher 
> On Wed, Mar 25, 2009 at 9:09 AM, Quibble  wrote:
>> >
>> > Hi,
>> >
>> > Is there a way to show the jobids of a specific client in bconsole without
>> > showing every single jobid available?  We back up a ton of servers and it is
>> > helpful to be able to specify the specific jobids of one client at a time
>> > rather than every single one.
>> >
> Use the query command.
> 
> John


Related to that... when I do a "restore client='clientname'", does anybody 
else think that if I specify "1: List last 20 Jobs run" it should only show 
me jobs for that particular client?

Perhaps this is more properly filed as a feature request than a bug, but 
before submitting it I wonder what others think.  If that were changed, 
would anything break?

Alternatively, I see that option "10: Find the JobIds for a backup for a 
client before a specified time" lists only the most recent Full and its 
following incrementals (with regard to the date entered).  Is there some 
menu-driven way to see all the usable backup jobs for a particular client?  
If not, should there be?

   Bob



Re: [Bacula-users] Severe Performance Issues with high volume of files

2009-04-01 Thread Bob Hetzel
Here are my points/suggestions on your server setup (I'll leave the bacula 
SQL code observations to the bacula developers)...

In order of what I would try:

1) Put mysql-temp and the bacula tables on separate storage, and keep that 
separate from bacula's spool area if possible (i.e. SpoolDirectory in the 
Device section of bacula-sd.conf).  I use raid-0 for the spool volume.
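A sketch of where that setting lives in bacula-sd.conf (the device name and paths here are invented, not from the poster's setup):

```
Device {
  Name = "LTO2-Drive"                # example name
  ...
  SpoolDirectory = /raid0/spool      # hypothetical raid-0 mount
  Maximum Spool Size = 100G
}
```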

2) On unix, a 64-bit OS behaves much like a 32-bit one until you get 
beyond 4 GB of RAM, so you may want to consider adding some to take it to 
8 GB or more.  You can also use 'top' and other commands to monitor how much 
RAM it's using and adjust the tuning parameters accordingly.
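On Linux, a quick snapshot of where the RAM is going comes straight from the kernel's own counters; if MemFree is low but Cached is large, the "missing" memory is just disk cache, which is exactly where extra RAM helps a database server:

```shell
# Show total, free, cache, and swap figures from /proc/meminfo.
grep -E '^(MemTotal|MemFree|Cached|SwapTotal|SwapFree):' /proc/meminfo
```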

3) You may want to tune mysql parameters some but I'm no mysql expert.  If 
you add ram, some of the mysql parameters will need to be adjusted to take 
advantage of it.  Even if you don't change mysql parameters at all, the 
extra memory will go into disk cache which may help a lot.

4) That server is dual-processor capable, so you may want to consider adding 
a 2nd CPU.  Have you monitored the load average values for the server?  That 
may give you some indication of whether you're lacking CPU power or hitting 
some other bottleneck like memory, disk, or just plain I/O.  There are other 
utilities to help with this too.

I spool everything and then go to tape without any disk/file pools, so your 
situation is quite different from mine even though we're both running 
bacula on a PowerEdge 2900 with 64-bit linux.  Have you considered 
upgrading to LTO-2 or better?  If you ever have to get all 5 TB off tape at 
LTO-1 speeds you're probably going to be under a lot of stress.

 Bob

> Hi !
> 
> TL;DR
> When backing up / migrating clients with a high volume of files the
> performance drops to a very low level. This can be traced back to the
> Database inserts.
> 
> Our server:
> Dell PE2900
> 4-Core Xeon 2GHz, 4GB Ram
> Red Hat Enterprise Linux 5.3 64-Bit
> 2.6.18-128.1.1.el5 #1 SMP Mon Jan 26 13:58:24 EST 2009 x86_64 x86_64
> x86_64 GNU/Linux
> PERC 5/i   - 200GB SAS Raid-5 (System + Database)
> PERC 4e/DC - 2x 350GB Raid-5 Disk Enclosure (MySQL-Temp & customer Data)
> LSI1020- 4x LTO-1 Streamer
> QLA200 - 5 TB Fibre-Channel SAN with SATA Drives (Bacula Storage)
> 
> Bacula 2.4.4 - batch insert enabled
> MySQL 5.0.45




Re: [Bacula-users] Use new client with old server?

2009-04-01 Thread Bob Hetzel
> Date: Wed, 1 Apr 2009 13:44:47 -0400
> From: John Drescher 
 >
> 2009/4/1 Arch Willingham :
>> > Our server is still at 2.4.2 (Ubuntu). Does it matter which version of
>> > Windows clients are used? How about the beta clients
>> > winbacula-2.5.42-b2.exe?
>> >
>> >
>> >
>> > I just tried upgrading one of the Windows clients to
>> > winbacula-2.5.42-b2.exe and it gives an error saying "01-Apr 13:25
>> > ubuntumachine-dir JobId 0: Fatal error: File daemon at "192.168.1.222:9102"
>> > rejected Hello command"
>> >
> 
> Have you set up the new bacula executable as an exception in the
> windows firewall?
> 
> John


I got that behavior too.  It appears that, at least for the windows client 
(and I presume the protocol is the same in all the others), the 2.5.x client 
will not work with a 2.4.x server.  No biggie... I'll be upgrading my server 
soon enough anyway.  Unfortunately, the day I picked to upgrade the server I 
spent entirely too long of my downtime window tinkering with this before I 
reached this conclusion.





Re: [Bacula-users] fileset question

2009-05-15 Thread Bob Hetzel
1) Regex is more powerful (and therefore slower) than what you need for 
what you're trying to do.  WildFile is probably what you want instead.
2) You can't do both an include and an exclude of the same directory.
3) Start with something really basic.  Take out all the exclusions so 
you're just left with the File line, then put all the exclusions in one 
Options block.  It looks to me like you only want to include each user's 
Desktop, My Documents, and Thunderbird folders.  Have you looked at how big 
it would be to just include everything under Documents and Settings, with 
possible temp-directory exceptions?

That would be quite a lot simpler... like so...

FileSet {
  Name = "XP_WS4"
  Enable VSS = yes
  Include {
    Options {
      Signature = MD5
      compression = GZIP
      ignore case = yes
      exclude = yes
      WildFile = "*.lnk"
      WildFile = "*.mp3"
      WildFile = "*.wma"
      WildDir = "*/temp"
      WildDir = "*/temporary*"
    }
    File = "C:/Documents and Settings"
  }
}

Also, are you sure you'd want to exclude .lnk files?

Also, after setting this up, you can log the list of files it gets with the 
command
@output /tmp/file-listing.txt
estimate listing job=
@output

I typically look over those lists for stuff to exclude, but mainly I sort 
them by size to find big stuff that wouldn't be useful to back up, such as 
installers for acrobat reader, etc.
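As one possible way to do that sort, assuming the listing uses ls -l style lines with the size in the 5th whitespace field (adjust the -k field number if your bacula formats it differently; the sample lines below are invented):

```shell
# Build a tiny sample listing, then sort it by the size field, largest first.
listing=$(mktemp)
cat > "$listing" <<'EOF'
-rwx------   1 joe  users  104857600  2009-05-01  C:/videos/big.avi
-rwx------   1 joe  users       2048  2009-05-01  C:/notes.txt
EOF
# -k5,5nr: sort numerically and in reverse on field 5 (the size) only.
sort -k5,5nr "$listing" | head -20
biggest=$(sort -k5,5nr "$listing" | head -1)
```

The top of that output is usually a short list of exclusion candidates.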

Bob

> From: Jeff Dickens 
> Subject: Re: [Bacula-users] fileset question
> To: bacula-users 
> Message-ID: <4a0c8c03.6050...@m2.seamanpaper.com>
> Content-Type: text/plain; charset="iso-8859-1"
> 
> Maybe if I had the answers to those questions I would understand why 
> this fileset is totally non-functional.  It backs up just the top level 
> directories under "Documents and Settings".
> 
> FileSet {
> Name = "XP_WS4"
> Include {
> Options {
> exclude = yes
> RegEx = ".*\.lnk"
> RegEx = ".*\.mp3"
> RegEx = ".*\.wma"
> }
> Options {
> signature = MD5
> compression=GZIP
> WildDir  = "C:/Documents and 
> Settings/*/Application Data/Thunderbird"
> WildDir  = "C:/Documents and 
> Settings/*/Application Data/Thunderbird/*"
> RegExDir = "C:/Documents and Settings/[^/]+$"
> }
> Options {
> exclude = yes
> Wild = "C:/Documents and Settings/*"
> }
> File = "C:/Documents and Settings"
> }
> }
> 
> 
> Jeff Dickens wrote:
>> > Here it is:
>> >
>> > FileSet {
>> > Name = "XP_WS3"
>> > Include {
>> > Options {
>> > exclude = yes
>> > RegEx = ".*\.lnk"
>> > RegEx = ".*\.mp3"
>> > RegEx = ".*\.wma"
>> > }
>> > Options {
>> > signature = MD5
>> > compression=GZIP
>> > WildDir  = "C:/Documents and Settings/*/My 
>> > Documents"
>> > Wild = "C:/Documents and Settings/*/My 
>> > Documents/*"
>> > WildDir  = "C:/Documents and Settings/*/Desktop"
>> > Wild = "C:/Documents and Settings/*/Desktop/*"
>> > RegExDir = "C:/Documents and Settings/[^/]+$"
>> > }
>> > Options {
>> > exclude = yes
>> > Wild = "C:/Documents and Settings/*"
>> > }
>> > File = "C:/Documents and Settings"
>> > }
>> > }
>> >
>> > Thanks to those that responded to Robin Bonin's thread about a year 
>> > ago, in particular Martin Simmons.
>> >
>> > I have two follow-up questions:
>> >
>> > What exactly is the RegExDir directive doing?  I understand what the 
>> > regex is doing, but how does that help here, since all the WildDirs 
>> > are getting "ored" together.. what does it match that the others don't?
>> >
>> > Secondly, does anyone else agree that it would be desirable to have 
>> > the fileset look more like this:
>> >
>> > Include {
>> >Rational_Filespec: = "C:/Documents and Settings/*/My Documents"
>> >Rational_Filespec: = "C:/Documents and Settings/*/Desktop"
>> > }
>> >
>> > instead of the dog's breakfast?
>> >
>> >
>> >
>> > Jeff Dickens wrote:
>>> >> Is there any way to accomplish what I'm trying to do here ?
>>> >>
>>> >> FileSet {
>>> >> Name = XP_WS2
>>> >> Enable VSS = yes
>>> >> Include {
>>> >>  Options {
>>> >>   signature = MD5
>>> >>   compression = GZIP
>>> >>   IgnoreCase = yes
>>> >>   Wild = "C:/Documents and Settings/*/Desktop/*"
>>> >>   Wi

Re: [Bacula-users] Can't relabel a tape

2009-05-28 Thread Bob Hetzel
Previously John Drescher  said,

> On Thu, May 28, 2009 at 5:52 AM, C.DriK  wrote:
>> > Hello,
>> >
>> > Thank you for your reply.
>> > My bacula configuration is not perfect, and sometimes I have some problem 
>> > (especially with the autochanger, it not change the tape automaticaly).
>> > Once a problem occurs, the "mt-f ..." no longer works and I have "device 
>> > busy" in the shell.
>> > I do not understand why it did that.
> 
> Use the unmount command in bacula first or stop the bacula-sd
> 
> John
> 

Looks like we got off topic on this thread but anyway...

That might resolve the problem once you've got it but this might help more 
in preventing the problem:

In the mtx-changer script, tweak this section:

case $cmd in
  unload)
    debug "Doing mtx -f $ctl unload $slot $drive"
    #
    # enable the following line if you need to eject the cartridge
    ${MT} -f $device offline
    sleep 5
    ${MTX} -f $ctl unload $slot $drive
    ;;

  load)
    debug "Doing mtx -f $ctl load $slot $drive"
    ${MTX} -f $ctl load $slot $drive
    rtn=$?
#
Also note, at least some versions of bacula's "make install" scripting 
overwrote the mtx-changer file so you'd wind up losing your mods when you 
upgrade if you're not careful.
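One defensive habit is to keep a ".local" copy of the edited script and restore it after each upgrade. A sketch (illustrated here with a temp file; in practice SCRIPT would be wherever your install puts mtx-changer, e.g. under your bacula scripts directory):

```shell
# Simulate: back up the patched script, let an install clobber it, restore.
SCRIPT=$(mktemp)
echo "# my patched mtx-changer" > "$SCRIPT"
cp -p "$SCRIPT" "$SCRIPT.local"     # one-time backup of the edited copy
: > "$SCRIPT"                       # stand-in for make install overwriting it
# After an upgrade: restore the local copy if the installer replaced the file.
cmp -s "$SCRIPT" "$SCRIPT.local" || cp -p "$SCRIPT.local" "$SCRIPT"
```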



[Bacula-users] The number of files mismatch! Marking volume in Error in Catalog

2009-06-02 Thread Bob Hetzel

Greetings,

I've been seeing an issue whereby a volume gets marked in error 
periodically.  The last items logged about that volume are typically like this:

02-Jun 11:53 gyrus-sd JobId 83311: Volume "LTO224L2" previously written, 
moving to end of data.
02-Jun 11:53 gyrus-sd JobId 83311: Error: Bacula cannot write on tape 
Volume "LTO224L2" because:
The number of files mismatch! Volume=46 Catalog=45
02-Jun 11:53 gyrus-sd JobId 83311: Marking Volume "LTO224L2" in Error in 
Catalog.

I don't think I have any SCSI errors, but instead the problem seems to be 
related to bacula not properly keeping track of the volume files in some 
rare case.

This time the problem happened not too long after the volume got recycled 
and so I noted one thing about how the tape was used... a backup started on 
another volume and then spanned onto it.  Could that be a source of these 
problems?

Here's the pertinent part of the bacula log file--debugging not turned on 
right now but I'm hoping enough got logged to help.  If not I'll have to 
turn debugging back on but what level would be good for determining the 
source of that error?

http://casemed.case.edu/admin_computing/bacula/bacula-2009-06-01.log.txt

Bob



Re: [Bacula-users] mystifying restore problem w/ bacula-fd for, windows 3.0.1 .. invisible directories created

2009-07-02 Thread Bob Hetzel

I doubt that it's a corrupted file system; those are merely hidden 
directories.  If you go into windows explorer and change the settings so it 
shows you Hidden, System, Operating System, etc. files, you should now be 
able to see those dirs from windows explorer.  I don't think that will 
affect the command-prompt behavior you've found, but that behavior is not 
unusual when directories are marked System or Hidden.

   Bob


> Apparently I need to do much more comprehensive restore testing, because 
> when I need, it my bacula installation has fallen down around me.  This 
> is just the worst of several problems I will be reporting.
> 
> First, I ran a restore.  The original client is a now-dead XP box, the 
> target client is a new XP box:
> 
> The log indicates success.
> 
> 02-Jul 09:04 packrat-dir JobId 33124: Start Restore Job 
> RestoreFiles.2009-07-02_09.04.49_02
> 02-Jul 09:04 packrat-dir JobId 33124: Using Device "VS160"
> 02-Jul 09:05 crow-sd JobId 33124: Ready to read from volume "tape-pool1-0008" 
> on device "VS160" (/dev/nst0).
> 02-Jul 09:05 crow-sd JobId 33124: Forward spacing Volume "tape-pool1-0008" to 
> file:block 82:2807.
> 02-Jul 09:17 packrat-dir JobId 33124: Bacula packrat-dir 3.0.1 (30Apr09): 
> 02-Jul-2009 09:17:40
>   Build OS:   i686-redhat-linux-gnu redhat 
>   JobId:  33124
>   Job:RestoreFiles.2009-07-02_09.04.49_02
>   Restore Client: joe2-fd
>   Start time: 02-Jul-2009 09:04:51
>   End time:   02-Jul-2009 09:17:40
>   Files Expected: 1,688
>   Files Restored: 1,688
>   Bytes Restored: 5,338,764,394
>   Rate:   6942.5 KB/s
>   FD Errors:  0
>   FD termination status:  OK
>   SD termination status:  OK
>   Termination:Restore OK
> 
> 02-Jul 09:17 packrat-dir JobId 33124: Begin pruning Jobs.
> 02-Jul 09:17 packrat-dir JobId 33124: No Jobs found to prune.
> 02-Jul 09:17 packrat-dir JobId 33124: Begin pruning Files.
> 02-Jul 09:17 packrat-dir JobId 33124: No Files found to prune.
> 
> It reported success.  I watched bacula-fd with procmon (from 
> sysinternals.com) because I simply could not believe the results I was 
> getting.  I had specified where=/tmpr2, and I watched it restore the 1,688 
> files with procmon.  Now look what I see:
> 
> (The folder C:\tmpr was from a different restore performed earlier)
> 
> 
> C:\>cd \tmpr2
> 
> C:\tmpr2>dir
>  Volume in drive C has no label.
>  Volume Serial Number is CC21-996C
> 
>  Directory of C:\tmpr2
> 
> File Not Found
> 
> C:\tmpr2>cd
> C:\tmpr2
> 
> C:\tmpr2>dir
>  Volume in drive C has no label.
>  Volume Serial Number is CC21-996C
> 
>  Directory of C:\tmpr2
> 
> File Not Found
> 
> C:\tmpr2>cd \
> 
> C:\>dir
>  Volume in drive C has no label.
>  Volume Serial Number is CC21-996C
> 
>  Directory of C:\
> 
> 07/01/2009  11:45 PM 1,024 .rnd
> 07/01/2009  05:40 PM 0 AUTOEXEC.BAT
> 07/01/2009  05:40 PM 0 CONFIG.SYS
> 07/01/2009  05:44 PM  Documents and Settings
> 07/02/2009  08:29 AM  Program Files
> 07/02/2009  09:18 AM  SysinternalsSuite
> 07/02/2009  08:18 AM  tmpr
> 07/02/2009  08:27 AM  WINDOWS
>3 File(s)  1,024 bytes
>5 Dir(s)   6,451,523,584 bytes free
> 
> C:\>dir tmpr
>  Volume in drive C has no label.
>  Volume Serial Number is CC21-996C
> 
>  Directory of C:\tmpr
> 
> 07/02/2009  08:18 AM  .
> 07/02/2009  08:18 AM  ..
> 07/02/2009  08:18 AM  C
>0 File(s)  0 bytes
>3 Dir(s)   6,451,523,584 bytes free
> 
> C:\>dir tmpr2
>  Volume in drive C has no label.
>  Volume Serial Number is CC21-996C
> 
>  Directory of C:\tmpr2
> 
> File Not Found
> 
> C:\>cd tmpr2
> 
> C:\tmpr2>cd \
> 
> C:\>cd tmpxxx
> The system cannot find the path specified.
> 
> C:\>cd \tmpr2\C
> 
> C:\tmpr2\C>dir
>  Volume in drive C has no label.
>  Volume Serial Number is CC21-996C
> 
>  Directory of C:\tmpr2\C
> 
> File Not Found
> 
> C:\tmpr2\C>cd "Documents and Settings"
> 
> C:\tmpr2\C\Documents and Settings>dir
>  Volume in drive C has no label.
>  Volume Serial Number is CC21-996C
> 
>  Directory of C:\tmpr2\C\Documents and Settings
> 
> File Not Found
> 
> C:\tmpr2\C\Documents and Settings>
> 
> ...
> 
> C:\tmpr2\C\Documents and Settings\Joe>dir
>  Volume in drive C has no label.
>  Volume Serial Number is CC21-996C
> 
>  Directory of C:\tmpr2\C\Documents and Settings\Joe
> 
> File Not Found
> 
> C:\tmpr2\C\Documents and Settings\Joe>dir "My Documents" | more
>  Volume in drive C has no label.
>  Volume Serial Number

[Bacula-users] Bacula spool on SSD -- solid state drive performance testing?

2009-07-22 Thread Bob Hetzel

Has anybody tinkered around with spooling backups on an SSD (aka solid 
state drive) or a raid-0 pair of them for higher performance?

It would seem that the latency introduced by thrashing the hard drive with 
several concurrent readers and writers would be lessened on flash RAM, but 
I'm seeing all kinds of info about performance degradation over time and 
unexpectedly low throughput from these devices, so I'm wondering if anybody 
has given it a shot yet.



Re: [Bacula-users] Bacula spool on SSD -- solid state drive performance testing?

2009-07-22 Thread Bob Hetzel


John Drescher wrote:
> On Wed, Jul 22, 2009 at 3:30 PM, Bob Hetzel wrote:
>> Has anybody tinkered around with spooling backups on an SSD (aka solid
>> state drive) or a raid-0 pair of them for higher performance?
>>
> 
> I spool to a 4 drive sata raid 0 but since I only have a single
> gigabit nic connection the file system performance is not the limiting
> factor.
> 
> John

I'm trying to set up an LTO-3/LTO-4 setup, and modern fast (15k rpm)
conventional hard drives seem to be > $500 each.

For LTO-2 and below this is far cheaper since the tape drive speeds are so
much lower, but I'd like to keep the tape drive writing at > 60 MB/sec.  In
addition I'd like to keep concurrency up so one slow backup doesn't prolong
the others as badly.




Re: [Bacula-users] Bacula spool on SSD -- solid state drive performance testing?

2009-07-22 Thread Bob Hetzel
John Drescher wrote:
> On Wed, Jul 22, 2009 at 4:49 PM, Bob Hetzel wrote:
>>
>> John Drescher wrote:
>>> On Wed, Jul 22, 2009 at 3:30 PM, Bob Hetzel wrote:
>>>> Has anybody tinkered around with spooling backups on an SSD (aka solid
>>>> state drive) or a raid-0 pair of them for higher performance?
>>>>
>>> I spool to a 4 drive sata raid 0 but since I only have a single
>>> gigabit nic connection the file system performance is not the limiting
>>> factor.
>>>
>>> John
>> I'm trying to set up an LTO-3/LTO-4 setup and modern fast (15k rpm)
>> conventional hard drives are > $500 each though it seems.
>>
>> For LTO-2 and below this is far cheaper since the tape drive speeds are so
>> much lower but I'd like to keep the tape drive writing at > 60 MB/sec.  In
>> addition I'd like to keep concurrency up so one slow backup doesn't drag the
>> prolong the others as badly.
>>
> How about 2 to 4 150 or 300GB velociraptors in raid 0.
> 
> http://www.wdc.com/en/products/products.asp?driveid=459
> 
> The 300GB models are around $200 USA. Much faster than a 7200RPM sata
> drive especially when it comes to seeks.
> 
> For SSD it will be very expensive to do that unless you have a small
> spool area.

In theory, the latency from random IO should be much closer to zero on a 
flash drive than on a thrashing hard drive, so I was hoping I might need 
only one or two 64GB or 128GB flash drives to provide a decent spool size, 
perhaps not even raid-ed.

In addition, SSD/flash drives should be silent and heat up the room less 
(although that latter effect will be small--10 watts vs 2 watts per drive).



Re: [Bacula-users] status storage freezes at "Used Volume Status"

2009-08-20 Thread Bob Hetzel
Ralf,

My suggestion is to try upgrading your server to 3.0.2.  You won't need to 
upgrade all your FD's.  Since I went through that, the things that used to 
hang bacula in my environment (at exactly the same place you describe) are 
fixed.

Bob

> From: Ralf Gross 
> 
> Hi,
> 
> I'm stuck at the 'status storage' output, or better at a mount command.
> 
> I started a Verify job and had to mount a volume into a drive (ULTRIUM-TD4-D3)
> because the drive was umounted at that time. Nothing exciting I thought
> 
> mtx-changer script log:
> 20090820-08:42:37 Parms: /dev/Neo4100 unload 117 /dev/ULTRIUM-TD4-D3 2
> 20090820-08:42:37 Doing mtx -f /dev/Neo4100 unload 117 2
> 
> After that nothing happend. In fact slot 117 wasn't loaded at all at this 
> time.
> After a while I tried to load the needed volume directly with mtx-changer,
> which worked - but this didn't help to get bacula back to work with this 
> drive.
> 
> Any ideas what to do to next? Restarting the SD is no option, there are 
> backups
> running at the moment
> 
> 
> Select Storage resource (1-17): 3
> Connecting to Storage daemon Neo4100 at xx.60.9.193:9103
> 
> VU0EA003-sd Version: 2.4.4 (28 December 2008) x86_64-pc-linux-gnu debian 4.0
> Daemon started 10-Aug-09 21:31, 24 Jobs run since started.
>  Heap: heap=3,657,728 smbytes=2,849,563 max_bytes=6,494,656 bufs=308 
> max_bufs=23,267
> Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8
> 
> Running Jobs:
> Writing: Full Backup job VU0EF005-MPC-Volume1.2009-08-19_09 JobId=14798 
> Volume="A00065L4"
> pool="MPC-Full" device="ULTRIUM-TD4-D1" (/dev/ULTRIUM-TD4-D1)
> spooling=0 despooling=1 despool_wait=0
> Files=35,613 Bytes=4,336,711,446,399 Bytes/sec=51,173,051
> FDReadSeqNo=33,392,901 in_msg=33286663 out_msg=5 fd=11
> Writing: Full Backup job VU0EF005-INV-Volume6.2009-08-12_11 JobId=14614 
> Volume="A00185L4"
> pool="INV-Full" device="ULTRIUM-TD4-D2" (/dev/ULTRIUM-TD4-D2)
> spooling=0 despooling=1 despool_wait=0
> Files=2,464 Bytes=1,631,634,446,948 Bytes/sec=43,752,934
> FDReadSeqNo=12,468,677 in_msg=12461599 out_msg=5 fd=7
> Writing: Verify Volume to Catalog Verify job 
> VerifyVU0EF005-INV-Volume5.2009-08-20_08 JobId=14826 Volume=""
> pool="INV-Full" device="ULTRIUM-TD4-D3" (/dev/ULTRIUM-TD4-D3)
> spooling=0 despooling=0 despool_wait=0
> Files=0 Bytes=0 Bytes/sec=0
> FDSocket closed
> 
> 
> Jobs waiting to reserve a drive:
> 
> 
> Terminated Jobs:
>  JobId  LevelFiles  Bytes   Status   FinishedName 
> ===
>  14610  Full 10,6927.133 T  OK   14-Aug-09 15:42 
> VU0EF005-INV-Volume2.2009-08-12_11
>  146820 0   Error14-Aug-09 16:38 
> VerifyVU0EF005-MPC-Volume1.2009-08-14_13
>  146830 0   Error14-Aug-09 18:47 
> VerifyVU0EF005-MPC-Volume1.2009-08-14_16
>  146810 0   OK   14-Aug-09 21:52 
> VerifyVU0EF005-MPC-Volume2.2009-08-14_13
>  14611  Full 10,9627.013 T  OK   16-Aug-09 12:22 
> VU0EF005-INV-Volume3.2009-08-12_11
>  147120 0   OK   16-Aug-09 17:28 
> VerifyVU0EF005-INV-Volume2.2009-08-15_14
>  147330 0   OK   17-Aug-09 17:14 
> VerifyVU0EF005-INV-Volume3.2009-08-16_13
>  14612  Full 10,2496.524 T  OK   18-Aug-09 04:43 
> VU0EF005-INV-Volume4.2009-08-12_11
>  147730 0   OK   19-Aug-09 08:31 
> VerifyVU0EF005-INV-Volume4.2009-08-18_10
>  14613  Full 13,1296.248 T  OK   19-Aug-09 22:39 
> VU0EF005-INV-Volume5.2009-08-12_11
> 
> 
> Device status:
> Autochanger "Neo4100" with devices:
>"ULTRIUM-TD4-D1" (/dev/ULTRIUM-TD4-D1)
>"ULTRIUM-TD4-D2" (/dev/ULTRIUM-TD4-D2)
>"ULTRIUM-TD4-D3" (/dev/ULTRIUM-TD4-D3)
> Device "ULTRIUM-TD4-D1" (/dev/ULTRIUM-TD4-D1) is mounted with:
> Volume:  A00065L4
> Pool:MPC-Full
> Media type:  LTO4
> Slot 113 is loaded in drive 0.
> Total Bytes=456,449,328,128 Blocks=1,741,216 Bytes/block=262,144
> Positioned at File=85 Block=501
> Device "ULTRIUM-TD4-D2" (/dev/ULTRIUM-TD4-D2) is mounted with:
> Volume:  A00185L4
> Pool:INV-Full
> Media type:  LTO4
> Slot 86 is loaded in drive 1.
> Total Bytes=292,608,017,408 Blocks=1,116,211 Bytes/block=262,144
> Positioned at File=54 Block=10,345
> Device "ULTRIUM-TD4-D3" (/dev/ULTRIUM-TD4-D3) is not open.
> Device is BLOCKED. User unmounted.
> Drive 2 is not loaded.
> 
> 
> Used Volume status:
> 
> 
> 
> 
> 
> TIA, Ralf



Re: [Bacula-users] Howto make bacula unmount a disk after backup?

2009-08-25 Thread Bob Hetzel
Adam,

See more info on what "Always Open" does at 
http://www.bacula.org/en/dev-manual/Storage_Daemon_Configuratio.html

I'm thinking setting it to "no" would be more appropriate for what you're 
trying to do.

However, you may find it more convenient to back up to one place and 
create some other job to copy the data to removable USB media when you're 
ready to unmount it.  If you unmount it from a schedule and don't actually 
unplug the drive, you may not be able to write anything more to it until 
it's unplugged/replugged.
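Based on Adam's Device resource, a sketch of that change (directive names are his; note also that I believe Mount Point should be the mount directory rather than the /dev path, though that may be a separate issue):

```
Device {
  Name = USBBackup1
  Media Type = ExtUSB
  Device Type = File
  LabelMedia = yes
  Random Access = yes
  Requires Mount = yes
  Removable Media = yes
  AlwaysOpen = no                    # release the device between jobs
  Mount Point = "/mnt/usb-backup-1"  # assumed mount directory
  Mount Command = "/bin/mount %m"
  Unmount Command = "/bin/umount %m"
  Archive Device = "/mnt/usb-backup-1/backup"
  Maximum File Size = 10485760
}
```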

Bob

> From: Adam 
> Subject: [Bacula-users] Howto make bacula unmount a disk after backup?
> To: Bacula 
> Message-ID:
>   
> Content-Type: text/plain; charset="iso-8859-1"
> 
> I have looked at bug report 830 on the bug reports list and it would appear
> that it has not been fixed. I am using version 3.0.2 of Bacula with the
> following device configuration. The device is not unmounted after the job is
> executed.
> 
> Device {
>   Name = USBBackup1
>   Media Type = ExtUSB
>   Device Type = File
>   LabelMedia = yes
>   Random Access = yes
>   Requires Mount = yes
>   Removable Media = yes
>   AlwaysOpen = yes
>   Mount Point = "/dev/disk/by-label/usb-backup-1"
>   Mount Command = "/bin/mount %m"
>   Unmount Command = "/bin/umount %m"
>   Archive Device  = "/mnt/usb-backup-1/backup"
>   Maximum File Size = 10485760
> }
> 
> Regards,
> 
> Adam





Re: [Bacula-users] 2 issues that driving me nuts

2009-10-05 Thread Bob Hetzel

> hey fellas,
> 
> have been using bacula for over 4 years now, great neat powerful!
> 
> i come across two issues with bacula, that i think can be taken care of 
> on bacula level, which are not implemented yet (or i am not aware of them):
> 
> 1. using a tape library, sometimes issue arise when bacula have loaded a 
> tape from the slot, then, some idiot (generally help desk people) put a 
> new tape into the same slot, where tape that in the drive used to be. 
> This causing a problem, as bacula insisting on unloading the tape into 
> the slot where it took it. Since it is already full, it fails and all 
> jobs fail as a result. There should be a way for bacula to check if slot 
> is taken, get the next available slot to unload tape.

I don't know for sure what happened in your instance, but in mine I never 
open the main door and only use the I/O slots.  In that case the bug is 
not really in bacula but in the autochanger.  The best solution is a 
procedural one: just 'release' each drive before using your autochanger's 
"import" function.
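
For example (a sketch; the storage name is an assumption, and older
bconsole versions prompt for the drive rather than accepting drive= on
the command line):

```
* release storage=Dell-PV136T drive=0
* release storage=Dell-PV136T drive=1
```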

> 2. having over 6 pools with 20-30dlt tapes, sometimes issue arise (there 
> is no way to keep up with proper schedule as delivery people always 
> fail): there are tapes (volumes) with status append in the pool 
> available and they are in the tape library, however bacula seem to purge 
> tape that has expiration date in the past, and insist on using that tape 
> first, if that tape not in the library, bacula will wait for the tape it 
> wants. I think bacula should look at the AL tapes that available in pool 
> for current job, and if it is not in the library at the moment, move to 
> the next available tape.
> 
> does any one else ran into similar situation with large bacula 
> implementation ?
> 
> Thanks to all
> Kiryl
> 

I believe I had this issue a long time ago.  If I remember correctly, it is 
because bacula ignores the 'InChanger' field unless you tell it you're 
using a changer in both the bacula-dir.conf and bacula-sd.conf files.  So 
for me, the appropriate section of my bacula-dir.conf file is...
Storage {
  Name = Dell-PV136T
  Autochanger = yes
  Address = gyrus
  SDPort = 9103
  Device = Dell-PV136T
  Media Type = LTO-2
  Password = "snip-password-here"
  Maximum Concurrent Jobs = 12
}
Note the "Autochanger = yes" line above.
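
The storage daemon has to agree that the device is in a changer.  A
hedged sketch of the matching bacula-sd.conf side (the drive device
names, the /dev/sg3 changer path, and the script location are
assumptions for illustration):

```
Autochanger {
  Name = Dell-PV136T
  Device = Drive-0, Drive-1         # the tape Device resources
  Changer Device = /dev/sg3         # the robot's SCSI generic device
  Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
}
```

Each Device resource listed there would also carry "Autochanger = yes"
so the SD knows to consult the changer for volume locations.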

Bob





Re: [Bacula-users] autochanger and barcode

2009-10-16 Thread Bob Hetzel
Nicola,

If you're using barcodes you can greatly simplify what you're doing. 
Instead of using one pool per day, just use one pool for all backups and 
one scratch pool.  Place all the tapes in the scratch pool.

Forget about trying to keep straight which tape is for which day and let 
bacula track that for you.  When you need to do a restore, let bacula tell 
you which volumes that data is on.

Set volume retention, job retention, file retention, and volume use 
duration if you find that you don't fill up a tape as quickly as you 
would like.  Then, if you want to rotate tapes in and out of the changer, 
periodically remove tapes marked Full or Used.
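
A hedged sketch of where those settings live (names and durations are
placeholders, not recommendations; note that job and file retention are
Client resource directives, while volume retention and use duration
belong to the Pool):

```
# bacula-dir.conf -- Pool side
Pool {
  Name = Backup
  Pool Type = Backup
  Recycle = yes                  # reuse volumes once they expire
  AutoPrune = yes                # prune expired volumes automatically
  Volume Retention = 30 days
  Volume Use Duration = 5 days   # stop appending 5 days after first write
  RecyclePool = Scratch          # send recycled tapes back to Scratch
}

# bacula-dir.conf -- Client side
Client {
  Name = someclient-fd
  Address = someclient.example.com
  Password = "not-the-real-password"
  File Retention = 30 days
  Job Retention = 30 days
}
```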

To label them, run "label barcodes slots=first-last", substituting the 
slot numbers of the first and last slots holding the tapes to be labelled.
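
For example, if the new tapes sit in slots 1 through 10 (the storage and
pool names are assumptions; older bconsole versions may still prompt for
the pool):

```
* label barcodes slots=1-10 storage=Dell-PV136T pool=Scratch
```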

Bob

> From: Nicola Quargentan 
> Subject: [Bacula-users] autochanger and barcode
> To: bacula-users@lists.sourceforge.net
> Message-ID: <4ad79c49.3080...@quargentan.com>
> Content-Type: text/plain; charset=ISO-8859-15; format=flowed
> 
> hi,
> 
> I'm very newbie and I want to use a DELL TL2000 autochanger.
> I made 7 pool in config file, one for each day of week:
> Monday, Tuesday, etc.
> 
> I want to use one volume for day and label it same as barcode. When I 
> tried "label barcode" I must to put all tape in only one pool:
> 
> *label barcode
> The defined Storage resources are:
>   1: File
>   2: autochanger
> Select Storage resource (1-2): 2
> Connecting to Storage daemon autochanger at backup.quargentan.loc:9103 ...
> Connecting to Storage daemon autochanger at backup.quargentan.loc:9103 ...
> 3306 Issuing autochanger "slots" command.
> Device "tape-burp-LTO4" has 23 slots.
> Connecting to Storage daemon autochanger at backup.quargentan.loc:9103 ...
> 3306 Issuing autochanger "list" command.
> The following Volumes will be labeled:
> Slot  Volume
> ==
> 1  GDV100L4
> 5  GDV101L4
> 6  GDV102L4
>12  GDV107L4
>13  GDV105L4
>14  GDV104L4
>15  GDV103L4
>16  GDV106L4
> Do you want to label these Volumes? (yes|no): yes
> Defined Pools:
>   1: Scratch
>   2: FullTest
>   3: DiffTest
>   4: IncrTest
>   5: Default
>   6: Monday
>   7: Tuesday
>   8: Wednesday
>   9: Thursday
>  10: Friday
>  11: Saturday
>  12: Sunday
> Select the Pool (1-12): .
> 
> So I cannot put the volume GDV100L4 in Monday, GDV101L4 in Tuesday etc.
> I tried also to put all volumes in Scratch pool, but bacula return a lot 
> of messages:
> Cannot label Volume because it is already labeled.
> 
> 1) Is there a clue to add one volume labeled by barcode to a single pool?
> 2) How I can relabel the volumes (I found a bad clue on the web: erase 
> every tapes manually  :(  ).
> 
> Thanks, Nicola.





[Bacula-users] bconsole shortcut command parameters: specifically add command

2009-11-11 Thread Bob Hetzel

Greetings,

Is there anywhere that the shortcut parameters you can use with bconsole 
are documented so as to avoid the submenus?

For the moment, the task that's frustrating me is that I'd like to "add" in 
a bunch of new volumes and I can't seem to figure out the right parameters 
to the add command so as to specify the label on the command line.

Has anybody figured out how to skip past this prompt?

"Enter number of Volumes to create. 0=>fixed name. Max=1000: "

Bob



[Bacula-users] bacula feature request: better client firewall and timeout handling

2008-01-29 Thread Bob Hetzel
Item  45?:  Improve how bacula handles TCP/IP connections when they drop 
(or are simply never answered) unexpectedly, both at the beginning and in 
the middle of a backup.
   Date:   January 29, 2008
   Origin: Bob Hetzel [EMAIL PROTECTED]
   Status: New

   What:   Part A: Currently, when a bacula client has a firewall that 
isn't set up to allow the server to connect in (or the client is simply 
down), it takes bacula a long time to figure that out.  Part B: When a 
client goes away in the middle of a backup (e.g. shut down, or taken off 
the network because it's a laptop on the move), bacula takes a long time 
to figure that out and terminate the backup.

   Why:    Workarounds have been suggested, such as creating a "before 
job" that pings the client.  That is only a partial resolution to part A 
and no resolution to part B, because it tests general connectivity to the 
client rather than bacula-specific connectivity (a response to a ping is 
not the same as a response to a connection on port 9102).  In addition, 
it doesn't resolve the problems created when backing up a laptop or other 
computer that gets shut off or moved in the middle of a backup.  Also, 
many IT shops have policies to turn off ping responses because of the 
security risks they can create, for instance under Microsoft Windows.

   Notes:  It seems rather obvious to me that implementing part A is 
substantially easier than part B, but the two seem related and could 
share code.  Part A would be really nice even without part B, though. 
This would also allow the code to give a better message on the server, 
which currently just suggests a password problem or that bacula isn't 
running.  Having a way to check whether the director connected to the 
bacula client at all would allow a more specific error message.  I also 
understand that this might require a change in the FD code, making older 
clients incompatible with newer bacula server daemons, so perhaps a way 
to turn it on or off would be helpful too.
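
Until something like this exists, the usual ping workaround can at least
be tightened into probing the FD port itself.  A hedged sketch of such a
"before job" helper (the hostname, the 5-second timeout, and the
availability of nc are all assumptions about the local environment):

```shell
#!/bin/sh
# Probe a client's bacula-fd port (9102) instead of pinging it, so a
# host that answers ping but has the FD blocked or stopped still fails
# fast instead of hanging the job.
probe_fd() {
    # $1 = client hostname; -z = connect-test only; -w 5 = 5 s timeout
    nc -z -w 5 "$1" 9102
}

# A RunBeforeJob wrapper would exit non-zero to abort the job early:
#   probe_fd laptop01.example.com || exit 1
```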



[Bacula-users] possible bug in bacula-fd on windows

2008-02-11 Thread Bob Hetzel

I'm trying to back up a drive on a windows server which also contains a
web directory.  Client Bacula-fd and server daemons are all version 2.2.8.

Bacula's server components are all running on OpenSuse 10.2

Any help anybody can provide would be greatly appreciated.  To me this
looks like a bacula bug, as I was able to use Windows backup to back
up that directory to a local .bkpf file on the client without problems
(although I didn't try the entire file system with that program).  The 
job stats on the 1st attempt were...
   SD Files Written:   146,048
   SD Bytes Written:   23,997,511,551 (23.99 GB)

Then on the 2nd attempt they were...

   SD Files Written:   146,054
   SD Bytes Written:   24,053,535,735 (24.05 GB)

As it's a heavily used web server with lots of people adding files, the
fact that these numbers changed didn't surprise me, but I thought I'd
include them here in case they help.  The backup ran as a full backup,
as I've not yet managed to back up this file system at all.

Here's the end of the trace file, which ended the same way both times... 
if this isn't enough, kindly let me know which part of the file would be 
helpful, as the whole file is far too large to post to this list at 150 MB.

casemed: ../compat/compat.cpp:1092-0
opendir(d:/Casemed_web/webapps/eCurriculumStudents/eCurriculumUserFiles/BLOCK
2 Materials 2007-08/W1 Building Blocks of Life/CANCER BIOLOGY/Dr.
Bokar/CASES)

spec=\\?\d:\Casemed_web\webapps\eCurriculumStudents\eCurriculumUserFiles\BLOCK
2 Materials 2007-08\W1 Building Blocks of Life\CANCER BIOLOGY\Dr.
Bokar\CASES\*,
FindFirstFile returns 2434656
casemed: ../compat/compat.cpp:1099-0FirstFile=.
casemed: ../compat/compat.cpp:1172-0 readdir_r(ffb9d0, { d_name=".",
d_reclen=1, d_off=0
casemed: ../compat/compat.cpp:1172-0 readdir_r(ffb9d0, { d_name="..",
d_reclen=2, d_off=1
casemed: ../compat/compat.cpp:1172-0 readdir_r(ffb9d0, { d_name="CASE
IQ#1 W1 Chronic Myelogenous Leukemia (CML)", d_reclen=47, d_off=3
casemed: ../compat/compat.cpp:194-0 Enter wchar_win32_path
casemed: ../compat/compat.cpp:378-0 Leave wchar_win32_path=\
casemed: ../compat/compat.cpp:1054-0 Opendir
path=d:/Casemed_web/webapps/eCurriculumStudents/eCurriculumUserFiles/BLOCK
2 Materials 2007-08/W1 Building Blocks of Life/CANCER BIOLOGY/Dr.
Bokar/CASES/CASE IQ#1 W1 Chronic Myelogenous Leukemia (CML)
casemed: ../compat/compat.cpp:107-0 Enter convert_unix_to_win32_path
casemed: ../compat/compat.cpp:158-0
path=\\?\d:\Casemed_web\webapps\eCurriculumStudents\eCurriculumUserFiles\BLOCK
2 Materials 2007-08\W1 Building Blocks of Life\CANCER BIOLOGY\Dr.
Bokar\CASES\CASE IQ#1 W1 Chronic Myelogenous Leukemia (CML)
casemed: ../compat/compat.cpp:167-0 Leave cvt_u_to_win32_path
path=\\?\d:\Casemed_web\webapps\eCurriculumStudents\eCurriculumUserFiles\BLOCK
2 Materials 2007-08\W1 Building Blocks of Life\CANCER BIOLOGY\Dr.
Bokar\CASES\CASE IQ#1 W1 Chronic Myelogenous Leukemia (CML)
casemed: ../compat/compat.cpp:1062-0 win32
path=\\?\d:\Casemed_web\webapps\eCurriculumStudents\eCurriculumUserFiles\BLOCK
2 Materials 2007-08\W1 Building Blocks of Life\CANCER BIOLOGY\Dr.
Bokar\CASES\CASE IQ#1 W1 Chronic Myelogenous Leukemia (CML)
casemed: ../compat/compat.cpp:194-0 Enter wchar_win32_path
casemed: ../compat/compat.cpp:208-0 Leave wchar_win32_path no change
casemed: ../compat/compat.cpp:1092-0
opendir(d:/Casemed_web/webapps/eCurriculumStudents/eCurriculumUserFiles/BLOCK
2 Materials 2007-08/W1 Building Blocks of Life/CANCER BIOLOGY/Dr.
Bokar/CASES/CASE IQ#1 W1 Chronic Myelogenous Leukemia (CML))

spec=\\?\d:\Casemed_web\webapps\eCurriculumStudents\eCurriculumUserFiles\BLOCK
2 Materials 2007-08\W1 Building Blocks of Life\CANCER BIOLOGY\Dr.
Bokar\CASES\CASE IQ#1 W1 Chronic Myelogenous Leukemia (CML)\*,
FindFirstFile returns 2430784
casemed: ../compat/compat.cpp:1099-0FirstFile=.
casemed: ../compat/compat.cpp:1172-0 readdir_r(ff8e20, { d_name=".",
d_reclen=1, d_off=0
casemed: ../compat/compat.cpp:1172-0 readdir_r(ff8e20, { d_name="..",
d_reclen=2, d_off=1
casemed: ../compat/compat.cpp:1172-0 readdir_r(ff8e20, { d_name="Cancer
Medicine.url", d_reclen=19, d_off=3
casemed: ../compat/compat.cpp:194-0 Enter wchar_win32_path
casemed: ../compat/compat.cpp:378-0 Leave wchar_win32_path=\
casemed: ../../lib/crypto.c:600-0 crypto_digest_new jcr=f7e7f0
casemed: ../compat/compat.cpp:107-0 Enter convert_unix_to_win32_path
casemed: ../compat/compat.cpp:158-0
path=\\?\d:\Casemed_web\webapps\eCurriculumStudents\eCurriculumUserFiles\BLOCK
2 Materials 2007-08\W1 Building Blocks of Life\CANCER BIOLOGY\Dr.
Bokar\CASES\CASE IQ#1 W1 Chronic Myelogenous Leukemia (CML)\Cancer
Medicine.url
casemed: ../compat/compat.cpp:167-0 Leave cvt_u_to_win32_path
path=\\?\d:\Casemed_web\webapps\eCurriculumStudents\eCurriculumUserFiles\BLOCK
2 Materials 2007-08\W1 Building Blocks of Life\CANCER BIOLOGY\Dr.
Bokar\CASES\CASE IQ#1 W1 Chronic Myelogenous Leukemia (CML)\Cancer
Medicine.url
casemed: ../compat/compat.cpp:107-0 Enter conve

Re: [Bacula-users] possible bug in bacula-fd on windows

2008-02-12 Thread Bob Hetzel
Previously, Arno Lehmann said...

> Is this really the end of the trace file?
> 
> It looks a bit... funny because the expected job finishing stuff is 
> not there, and it also doesn't report a serious problem.
> 
> Can you verify that the FD still runs at this time, and see if it's 
> still got the network connections from the DIR and to the SD open?
> 
> If all this is the case, I can only recommend looking at what the SD 
> is doing at this time...
> 
> Hope this helps you forward,
> 
> Arno

The bacula-fd abends (crashes) at that point, which is why it isn't 
logging the output you'd expect to see for a completed successful backup.



Re: [Bacula-users] possible bug in bacula-fd on windows

2008-02-12 Thread Bob Hetzel
Previously, Drew Bentley <[EMAIL PROTECTED]> said...

> So, heavily used server and if I read your email correctly, you're
> saying each time this runs it's reported as a Full backup? Or is it
> detected as an incremental? Do you get a status OK when these report
> to be finished? Any other details?
> 
> My assumption is, this is not a bug but how you have this setup, the
> amount of files you are backing up, files being opened and used, 
added
> while the backup is taking place, etc. Perhaps with this amount of
> data to backup and the server being busy with many files changing or
> being added, you might want to look into taking some type of snapshot
> to grab the backups from instead of running or grabbing the backups
> while the server and or files are in some type of use.
> 
> -Drew
> 

Yes, there probably are some files changing.  I doubt that's what is
giving bacula a problem.  I also tried using VSS, but the backups still
died.  There has never been a successful backup of this filesystem, so
it runs as a full whether I specify it as such or not.  Thanks for
trying to help, but the backup failed, hence my question about whether
I've found a bug.

The server is moderately loaded (5% to 15% CPU busy as measured by
Windows Task Manager).  File changes are typically a file being
overwritten with a newer version.  Other than that, most of the server
access is IIS opening files for reading, so I don't think it's a locking
problem.  Either way, it probably wouldn't fail on the same file twice,
an hour apart.  I should note that the file it fails on is a web link
(.url) file under 1 KB in length.



Re: [Bacula-users] possible bug in bacula-fd on windows

2008-02-19 Thread Bob Hetzel

Previously, Arno Lehmann said...

> 12.02.2008 22:48, Bob Hetzel wrote:
>> > Previously, Arno Lehmann said...
>> > 
>>> >> Is this really the end of the trace file?
>>> >>
>>> >> It looks a bit... funny because the expected job finishing stuff is 
>>> >> not there, and it also doesn't report a serious problem.
>>> >>
>>> >> Can you verify that the FD still runs at this time, and see if it's 
>>> >> still got the network connections from the DIR and to the SD open?
>>> >>
>>> >> If all this is the case, I can only recommend looking at what the SD 
>>> >> is doing at this time...
>>> >>
>>> >> Hope this helps you forward,
>>> >>
>>> >> Arno
>> > 
>> > The bacula-fd abends at that point, so that's why it isn't fully logging 
>> > the stuff you'd expect to see for a completed successful backup.
> 
> Oh, "anend" was worth looking up in the dictionary  :-) 
> 
> Ok, the FD crashes, in other words.
> 
> This looks like it's worth a bug report. It will probably help if you 
> can capture a backtrace at this point, or a dump file. Don't ask me 
> how to do this on windows...
> 
> Also, try to create the problem with a minimal setup, i.e. a very 
> simple job backing up only one file. If that runs, add more 
> directories to it. If it crashes in a certain directory, try to see 
> which files cause this and so on.
> 
> That's what I would do, not having debugiing experience under windows.
> 
> Arno
> 
> -- Arno Lehmann IT-Service Lehmann www.its-lehmann.de 

OK, I think I've figured out how to take a crash dump of a service using 
the Microsoft Debug Diagnostic tool... I don't know whether it will be 
helpful, but it generated a diagnostic information file (a 69 KB .mht, 
or compressed HTML, file) and a dump file (a 23.8 MB .dmp file).  If 
not, the problem is very repeatable, so I can re-run it differently 
given some guidance on what to do.  Is anybody interested in helping 
with this Windows-specific bacula FD client issue?

Bob



[Bacula-users] Update slots causes bacula console to prompt for drive, etc?

2008-02-26 Thread Bob Hetzel
I just noticed this oddity.  Perhaps it's how I've got bacula 
configured, but it otherwise seems to work properly.  I've got an 
autochanger with 72 slots and two tape drives.  I just added some new 
tapes in and did the following...

*update slots
Automatically selected Storage: Dell-PV136T
Enter autochanger drive[0]: 0
Connecting to Storage daemon Dell-PV136T at gyrus:9103 ...
3306 Issuing autochanger "slots" command.
Device "Dell-PV136T" has 0 slots.
No slots in changer to scan.
*update slots
Automatically selected Storage: Dell-PV136T
Enter autochanger drive[0]: 1
Connecting to Storage daemon Dell-PV136T at gyrus:9103 ...
3306 Issuing autochanger "slots" command.
Device "Dell-PV136T" has 72 slots.
Connecting to Storage daemon Dell-PV136T at gyrus:9103 ...
3306 Issuing autochanger "list" command.
Catalog record for Volume "LTO200L2" updated to reference slot 1.
Catalog record for Volume "LTO208L2" updated to reference slot 2.
Catalog record for Volume "LTO201L2" updated to reference slot 3.
... [rest of volume listing removed for brevity]
Volume "LTO243L2" not found in catalog. Slot=53 InChanger set to zero.


So I'm wondering whether it's an odd config I've got (or should have), 
or something else, when the following happen...

1) bacula asks the user for a drive when you issue the update slots command.
2) bacula needs an empty drive to query the changer for tape volumes 
(even though, since I'm using barcodes, it gets that info without using 
the drive).
3) the bconsole command "list volumes" doesn't show the new volume even 
after bacula prints a message noting that it's not in the catalog.
4) Is there an auto-label command or config setting so bacula can just 
detect new barcoded volumes, label them, and add them to the catalog?
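
For reference, the manual sequence for picking up a new barcoded tape
can be sketched as follows (the storage name, drive number, pool, and
slot are assumptions, and exact behaviour varies by bacula version):

```
* update slots storage=Dell-PV136T drive=1
* label barcodes storage=Dell-PV136T pool=Scratch slots=53
* list volumes
```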



Re: [Bacula-users] Volumes marked with Error

2008-02-28 Thread Bob Hetzel

First a quick note... LTO drives of the 2nd generation and newer should 
not shoeshine unless your CPU is really far too slow for everything else 
going on with the computer.

If the room air temperature is 85 degrees or less, it's unlikely you've 
got a heat problem in a tape autoloader cabinet (which has good airflow 
by design), imho.  More likely you're filling up space somewhere, have a 
SCSI issue, need a firmware update for your library or drive, need to 
update to a more recent bacula server version (because of various 
spooling and multi-drive autochanger bugs), or have a config problem.

You should be able to put a thermometer near the drive to find out the 
exact air temperature around it.

Also, have you considered spooling everything?  This may slow down 
backups from fast servers, but the gain from doing more than one thing 
at a time may balance that out, especially on incrementals.
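
A hedged sketch of what spooling everything might involve (the
directory, size, and resource names are assumptions; check the
directives against your bacula version's documentation):

```
# bacula-dir.conf: enable spooling per job (or in a JobDefs used by all)
Job {
  Name = "SomeClient-Backup"
  Spool Data = yes
  # plus the usual Client, FileSet, Schedule, Storage and Pool directives
}

# bacula-sd.conf: give the tape device a spool area on fast disk
Device {
  Name = Drive-0
  Spool Directory = /var/bacula/spool
  Maximum Spool Size = 50g
  # plus the usual Archive Device, Media Type, etc.
}
```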

> Date: Wed, 27 Feb 2008 09:31:40 +
> From: Bob Cregan <[EMAIL PROTECTED]>
> Subject: Re: [Bacula-users] Volumes marked with Error
> To: bacula-users@lists.sourceforge.net
> Message-ID: <[EMAIL PROTECTED]>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> 
> Hi,
>I am running the following
> 
> Director, Storage deamon 2.2.6. The library is an Overland Arcvault 24 
> (LTO3)
> Most clients (about 30 ~ 8TB in total)  are 2.2.4
> 
> Backup are running through fine, but occasionally a volume is marked 
> with status Error when it is nowhere near full. The backups using the 
> volume at the time give no errors. Restores from the backups using the 
> volume are also OK.
> 
> I am losing a lot of tape capacity from this however sometime the tape 
> is marked as Error  when only  a  few Gig has been used.
> 
> There is one thing that I thought may have a bearing. I am using a 
> mixture of spooled and non spooled backups. Some of the backups are on 
> old hardware, consist of lots of very small files (several million  ~1Kb 
> html files ) and the resulting very poor throughput can result in a big 
> shoeshining problem - I generally spool these. Other ones are large 
> database files with good throughput so I run these direct to tape. The 
> spool files can be as big as 80Gb.
> 
> The director allows four concurrent jobs, I have two tape drives one of 
> which is generally Incrementals only the other Full backups only.
> 
> Can there be a problem if a spool file is despooling and a "direct to 
> tape" job also need to write to the tape? Should I spool all my jobs?
> 
> Any advice would be very welcome.
> 
> Thanks
> 
> Bob
> 




Re: [Bacula-users] Volumes marked with Error

2008-02-28 Thread Bob Hetzel

I stand corrected... It seems the data rate matching in LTO-3 drives 
isn't as flexible as I'd have thought.  One interesting thing to note in 
the doc you mentioned: if you think you may be shoe-shining, try backing 
up onto an LTO-2 tape, as this runs the drive at native LTO-2 speed 
rather than at the much higher throughput of LTO-3.

Bob
[EMAIL PROTECTED] wrote:
> 
> In the message dated: Thu, 28 Feb 2008 11:21:31 EST,
> The pithy ruminations from Bob Hetzel on 
>  were:
> => 
> => First a quick note... drives LTO 2nd generation and newer should not 
> => shoeshine unless you're really running way to slow a cpu for what else 
> => is going on with the computer.
> 
> Huh? That's exactly the opposite of everything I've heard from drive 
> manufacturers, from Curtis Preston, etc.
> 
> The faster the drive, the more difficult it is to supply it with enough data 
> to 
> prevent shoe-shining. In general, the limiting factor isn't the CPU of the 
> backup server, but the disk drives that are supplying data and the network 
> connection[s] end-to-end between the data source and the tape drive.
> 
> For example, an LTO-3 drive supports a native (uncompressed) throughput of 
> 80MB/
> sec. This means that, for "average" data that the drive will be compressing, 
> the
> server must supply data at the rate of 120~160MB/sec to keep the drive running
> at full speed. This is not trivial. Many tape drives are capable of variable
> write speeds, deliberately slowing down when the incoming data rate is
> insufficient, in an attempt to avoid shoe-shining. AFAIK, the minimum data 
> rate
> that LTO3 drives must receive in order to avoid shoe-shining is ~35MB/s. This
> can be difficult to achieve in a real-world environment, such as reading data
> from a server in active use, sending that data over a network (even GigE) 
> that's
> used for other tasks, and competing with other streams of client data being 
> sent
> the same backup media server (ie., storage director).
> 
> My simple rule of thumb:
>   If the data source is a single spindle (ie. not a RAID device), then
>   you will not be able to feed an LTO3 drive fast enough to prevent
>   shoe-shining. With a single disk, even an LTO2 drive may shoe shine,
>   depending on the overall system and the compressibility of the data.
> 
> 
>   http://www.open-mag.com/features/Vol_117/LTO3/LTO3.htm
>   
> http://www.datastor.co.nz/Datastor/Promotions.nsf/4a91ca5e06d20e15cc256ebe0002290e/d954d1c5e5e6df09cc25723b00740956/$FILE/When%20to%20Choose%20LTO3%20Tape%20Drives.pdf
> 
> => 
> => If the room air temp is less than 85 degrees or so it's unlikely in a 
> => tape autoloader cabinet (i.e. good airflow by design) that you've got a 
> 
> Yep.
> 
>   [SNIP!]
> => 
> => Also, have you considered spooling everything?  I know this may slow 
> => down your backups on fast servers but the increase gained in doing more 
> => than one thing at a time may balance this, especially on incrementals.
> 
> Absolutely. You may be able to avoid shoe-shining by spooling data from 
> individual servers onto a fast RAID device on the storage director. This will 
> probably produce a small decrease in the backup time for individual servers, 
> and a significant decrease in the aggregate backup duration for multiple 
> concurrent clients.
> 
> => 
> => > Date: Wed, 27 Feb 2008 09:31:40 +
> => > From: Bob Cregan <[EMAIL PROTECTED]>
> => > Subject: Re: [Bacula-users] Volumes marked with Error
> => > To: bacula-users@lists.sourceforge.net
> => > Message-ID: <[EMAIL PROTECTED]>
> => > Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> => > 
> => > Hi,
> => >I am running the following
> => > 
> => > Director, Storage deamon 2.2.6. The library is an Overland Arcvault 24 
> => > (LTO3)
> => > Most clients (about 30 ~ 8TB in total)  are 2.2.4
> => > 
> 
>   [SNIP!]
> 
> => > 
> => > There is one thing that I thought may have a bearing. I am using a 
> => > mixture of spooled and non spooled backups. Some of the backups are on 
> => > old hardware, consist of lots of very small files (several million  ~1Kb 
> => > html files ) and the resulting very poor throughput can result in a big 
> => > shoeshining problem - I generally spool these. Other ones are large 
> => > database files with good throughput so I run these direct to tape. The 
> => > spool files can be as big as 80Gb.
> => > 
> 
> I have a very similar arran

Re: [Bacula-users] OfflineOnUnmount strange behaviour

2008-02-28 Thread Bob Hetzel
From: Tilman Schmidt <[EMAIL PROTECTED]>

>> >OfflineOnUnmount = yes; # when unmounted, eject tape
>> > [...] has two unexpected side effects.
>> > a) If I label a new tape it is ejected after the operation, followed by
>> > a message from Bacula that it couldn't mount the new tape.
>> > b) I cannot relabel a tape while a job is waiting for a tape; when the
>> > the relabel command is sent to the storage daemon, the tape is ejected
>> > and then a message appears "relabel operation failed".
> 
> And another one:
> 
> c) The tape is ejected when the storage daemon shuts down, eg. when the
> server is rebooted.
> 
>> > Can that be avoided?
> 
> Other than by avoiding that option altogether, that is - which is what
> I am doing now.
> 
> Thx
> T.

I concur with your experiences...

Alternatively, I've found that I needed to modify the unload section of 
my mtx-changer script: uncomment the line that takes the device offline, 
and add a "sleep 5" before the line that unloads the tape.

I've also found that running "make install" overwrites those changes by 
replacing my mtx-changer script with the one that comes with the bacula 
source, so I don't do that anymore... I just copy the binaries myself.
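
For reference, the relevant lines of the unload section end up looking
roughly like this (variable names follow the stock mtx-changer script;
treat it as a paraphrase rather than a patch):

```
# in the "unload" case of mtx-changer:
mt -f $device offline     # take the drive offline first (line uncommented)
sleep 5                   # give the drive time to finish ejecting
${MTX} -f $ctl unload $slot $drive
```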



Re: [Bacula-users] Update slots causes bacula console to prompt for drive, etc?

2008-02-29 Thread Bob Hetzel
> Date: Wed, 27 Feb 2008 10:17:05 -0500
> From: [EMAIL PROTECTED]
  >
> In the message dated: Tue, 26 Feb 2008 19:12:38 EST,
> The pithy ruminations from Bob Hetzel on 
> <[Bacula-users] Update slots causes bacula console to prompt for drive, etc?> 
> w
> ere:
> => I just noticed this oddity.  Perhaps it's how I've got bacula 
> 
> And which version of bacula would that be?

2.2.8 for both director and sd

> 
> => configured, but it otherwise seems to work properly.  I've got an 
> => autochanger with 72 slots and two tape drives.  I just added some new 
> => tapes in and did the following...
> => 
> => *update slots
> => Automatically selected Storage: Dell-PV136T
> => Enter autochanger drive[0]: 0
> => Connecting to Storage daemon Dell-PV136T at gyrus:9103 ...
> => 3306 Issuing autochanger "slots" command.
> => Device "Dell-PV136T" has 0 slots.
> => No slots in changer to scan.
>^^^
> 
> I often see this if I do something like add/remove/move tapes outside of 
> bacula, using mtx or the front-panel controls of the tape drive. The only 
> reliable way to address the tape drive after this is to restart bacula.
> 

Yup.  I did in fact take the library offline to add tapes.  Has anybody 
found a method of rotating tapes in and out of the library that doesn't 
involve shutting bacula down?  This is kind of a big deal, since most 
backup setups rotate tapes in and out regularly.
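The sequence I'd expect to need (untested here, and assuming the 
storage and drive names from earlier in this thread; on some versions 
the unmount/mount commands prompt for the drive instead of taking 
drive=) is something like:

```shell
# Stage the console commands in files, then feed them to bconsole by
# hand.  Nothing here talks to a live director, so the commented-out
# bconsole invocations are the only parts that need real hardware.
cat > /tmp/swap-tapes.bcons <<'EOF'
unmount storage=Dell-PV136T drive=0
EOF
# bconsole < /tmp/swap-tapes.bcons    # then swap tapes at the library

cat > /tmp/rescan.bcons <<'EOF'
update slots storage=Dell-PV136T
mount storage=Dell-PV136T drive=0
EOF
# bconsole < /tmp/rescan.bcons
```

i.e. unmount first, swap the tapes, then "update slots" so the catalog 
matches the changer again before remounting.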

> => *update slots
> => Automatically selected Storage: Dell-PV136T
> 
> I've got a PV132T.
> 
> => Enter autochanger drive[0]: 1
> 
> Does bacula in fact honor the request to use drive 1? In my case (still 
> running 
> 1.38.11), bacula uses drive 0 regardless of the specification. I believe this 
> has been fixed in later versions.
> 

Yes that does seem to work in my 2.2.8

> 
>   [SNIP!]
> 
> => So I'm wondering if it's an odd config I've got (or should have) or 
> => whatnot when the following happen...
> => 
> => 1) bacula asks the user for a drive when you issue the update slots 
> command.
> 
> Yes, that's normal (depending on the bacula version).
> 
> => 2) bacula needs an empty drive to query the changer for tape volumes 
> => (even though since I'm using barcodes it gets that info without using 
> => the drive).
> 
> Yes, that's normal (depending on the bacula version).
> 
> => 3) the bconsole command "list volumes" doesn't show this new volume even 
> => after it throws up a message noting that it's not in the catalog.
> 
> It sounds like you're not using the configuration option to automatically add 
> new & unknown volumes to the catalog.
> 

I can't seem to find that option.

> => 4) Is there an auto label command or config setting so I can just have 
> => it auto detect new barcoded volumes and then label them and put them in 
> => the catalog?
> 
> Yes.

Can anybody add detail on that?
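For what it's worth, the closest thing I know of is the console's 
"label barcodes" command, which asks the changer for its barcode list 
and labels every volume not yet in the catalog.  A sketch (the storage 
and pool names are the ones used in this thread, and feeding the file 
to bconsole obviously needs a live director):

```shell
# Stage the command in a file; "label barcodes" labels all barcoded
# volumes that are not yet in the catalog.
cat > /tmp/label-new.bcons <<'EOF'
label barcodes storage=Dell-PV136T pool=Scratch
EOF
# bconsole < /tmp/label-new.bcons
```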



[Bacula-users] multi-drive autochanger problem

2008-10-30 Thread Bob Hetzel

Greetings,

I've got a PowerVault 136T autochanger.  I was running well with two 
drives until recently, but since adding a 3rd drive I'm seeing one 
problem more often.  Bacula will load 3 tapes, but then sometimes it 
picks a drive to run a specific backup on while wanting a tape that's 
loaded in a different drive, and it just sits there waiting for a 
mount request.  Other times it does the unmount/mount properly 
(although ideally, I'd think, it shouldn't switch drives at all if 
the tape it wants is already mounted in a drive that isn't busy).  I 
went from seeing this every month or so to seeing it twice in the 
last week.

Here's more info on the settings I'm using...
all 3 drives in one pool
prefer mounted volumes=no
Maximum Concurrent Jobs=12
spool data=yes

Is there anybody else who's started gathering/reporting debug logs on this?









Re: [Bacula-users] multi-drive autochanger problem

2008-10-30 Thread Bob Hetzel


John Drescher wrote:
> On Thu, Oct 30, 2008 at 3:34 PM, Bob Hetzel <[EMAIL PROTECTED]> wrote:
>> Greetings,
>>
>> I've got a PowerVault 136T autochanger.  I was running well with two
>> drives up until recently but I've now added a 3rd drive and seem to be
>> having 1 problem more often now.  The problem I'm seeing is that it'll
>> load 3 tapes but then sometimes pick a drive it wants to run a specific
>> backup on but it'll want a tape that's loaded into a different drive,
>> then just sit there waiting for a mount request.  I've noticed other
>> times when it does the unmount/mount properly (although ideally it
>> shouldn't switch drives at all I'd think if a tape it wants is mounted
>> already in any drive and that drive isn't busy).  I went from seeing
>> this every month or so to now seeing this twice in the last week.
>>
>> Here's more info on the settings I'm using...
>> all 3 drives in one pool
>> prefer mounted volumes=no
>> Maximum Concurrent Jobs=12
>> spool data=yes
>>
>> Is there anybody else who's started gathering/reporting debug logs on this?
>>
> 
> Are you using bacula-2.4.2 or better?
> 
> John

Oops, I meant to add that I'm using 2.4.3 actually.



[Bacula-users] file dir computer restarts resulting in mismatched file count error on tape

2008-11-05 Thread Bob Hetzel

I think I may have found a bug or perhaps at least a design limitation 
in Bacula.  I'm running version 2.4.3.  I use spooling and I back up to 
tape.  I seem to get periodic errors logged about a file count mismatch 
resulting in a tape getting marked as status "Error".

I think I was able to determine the cause of one of these... the backup 
client running Windows Vista was restarted in the middle of the backup. 
  The next time that tape is used, the file count mismatch is noted and 
the tape is marked in error.  Additionally, bacula apparently filled up 
the tape before the client was restarted, so I don't know for sure if 
that had something to do with the problem.

This raises some questions...

1) When a client goes away in the middle of a backup, bacula should 
handle that properly, but it appears to skip part of what it would 
normally do when a backup completes successfully.

2) In theory, if it can't do a whole backup the files that it does get 
onto the tape should be recoverable too but I've not checked if it 
handles it such that they are.

3) When a tape file count mismatch is found, can't it just correct the 
mismatch, send an e-mail and move on, w/o marking the tape as status 
Error when the tape is actually fine?






[Bacula-users] searching the bacula catalog for a filename with wildcards?

2008-11-07 Thread Bob Hetzel

Folks,

I've got a user who thinks a file was deleted but she isn't sure of the 
exact name or where it was in the file system.  Is there a command to 
search for a file with wildcards which could be anywhere in the catalog 
(where both the date it was backed up and the directory it lived in are 
not known)?

Barring that, what might be the syntax of a MySQL SELECT statement to do it?
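A query along these lines should work (table and column names are per 
the bacula 2.x catalog schema as I understand it, and '%thesis%' is 
just a placeholder pattern):

```shell
# Stage the catalog query in a file, then run it against the bacula
# database.  Joins File to Filename, Path, and Job so the result
# shows when and where each matching file was backed up.
cat > /tmp/find-file.sql <<'EOF'
SELECT Job.JobId, Job.StartTime, Path.Path, Filename.Name
  FROM File
  JOIN Filename ON File.FilenameId = Filename.FilenameId
  JOIN Path     ON File.PathId     = Path.PathId
  JOIN Job      ON File.JobId      = Job.JobId
 WHERE Filename.Name LIKE '%thesis%';
EOF
# mysql -u bacula -p bacula < /tmp/find-file.sql
```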

   Bob



[Bacula-users] client list for restores sort order

2008-11-11 Thread Bob Hetzel

I've currently got over 150 backup clients installed with bacula so when 
I want to do a restore and I have it list the clients by name the list 
is rather unwieldy.  I'm thinking the list is ordered by when they were 
added?

If it's doing a database query to generate this client list, where can 
the sort order be changed?  Ideally I'd like to change it to order by 
client name.
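As a stopgap, the catalog itself can produce the sorted list (again 
assuming the bacula 2.x schema, where clients live in the Client 
table):

```shell
# Stage a query that lists all backup clients ordered by name,
# then run it against the catalog database.
cat > /tmp/clients-by-name.sql <<'EOF'
SELECT ClientId, Name FROM Client ORDER BY Name;
EOF
# mysql -u bacula -p bacula < /tmp/clients-by-name.sql
```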

Bob



Re: [Bacula-users] Antw.: Re: question about schedules and, retentions

2008-11-11 Thread Bob Hetzel

Arno Lehmann <[EMAIL PROTECTED]>  wrote:

> Carlo Maesen wrote:
>> > I did read the bacula manual but, I have some questions about schedules.
>> > I creat the following schedule:
>> > Schedule {
>> >   Name = aca-cycle
>> >   Run = Level=Incremental Pool=aca mon-thu at 22:00
>> >   Run = Level=Full Pool=aca 1st-4th sat at 22:00
>> > }
>> > 
>> > I backup one client according this schedule, but each different run has 
>> > also a different file and job retention. (Incr = 4 weeks, Full = 1 year)
>> > Do I have to create 2 different clients and jobs, one for the incemental 
>> > backup and one for the full ?
>> > Because the file and job retenion is defined in the client-directive.
> 
> If you actually need the job-specific retention times you are in 
> trouble...
> 
> An incremental can only be based on the latest full backup for the 
> same job, and a job is defined by the unique combination of client and 
> fileset.
> 
> The better approach is to use distinct pools for full, differential, 
> and incremental backups, where each pool has its own retention settings.
> 
> When a job is purged from a pool volume, the accompanying file and job 
> data is also removed.
> 
> Typically, you'll keep the full backup longest, so in essence, the job 
> and file retentions apply to full backups only, if they are longer 
> than the retention times of the partial backup pools.
> 
> This, typically, is exactly what is needed - complete control when 
> restoring from recent backups, and less control but also less database 
> use for the long-term storage.
> 
> Arno

If I could chime in here... different retention times for incrementals 
and fulls sound reasonable on their face, but IMHO this is likely to 
bite you eventually... what you're doing with this technique is purging 
the data that changes the most, more often.  Sometimes that's helpful, 
but it sacrifices flexibility when a file changes a lot and you don't 
know when somebody messed it up, or you don't discover the damage until 
after you've expired that differential/incremental.  Also, the space 
required to keep all those incrementals is likely much less than the 
space required to keep the fulls, so you might as well keep both.

Your mileage may vary... I'm sure there's some reason to use different 
retention times like perhaps auditors, but I've heard they really want 
stuff saved on special WORM media now anyhow.

Bob




[Bacula-users] bacula hang waiting for storage

2008-11-26 Thread Bob Hetzel

I've got bacula currently in a hung state with the following 
interesting info.  Running a "status storage" produces the following...


Automatically selected Storage: Dell-PV136T
Connecting to Storage daemon Dell-PV136T at gyrus:9103

gyrus-sd Version: 2.4.3 (10 October 2008) i686-pc-linux-gnu suse 10.2
Daemon started 25-Nov-08 19:20, 59 Jobs run since started.
  Heap: heap=3,756,032 smbytes=3,519,564 max_bytes=3,684,397 bufs=555 
max_bufs=557
Sizes: boffset_t=8 size_t=4 int32_t=4 int64_t=8

Running Jobs:
Writing: Incremental Backup job axh93-gx270 JobId=45634 Volume="LTO261L2"
 pool="Default" device="IBMLTO2-3" (/dev/nst2)
 spooling=0 despooling=0 despool_wait=1
 Files=78 Bytes=21,123,239 Bytes/sec=2,337
 FDReadSeqNo=970 in_msg=750 out_msg=9 fd=20
Writing: Incremental Backup job bxn4-gx280 JobId=45641 Volume="LTO261L2"
 pool="Default" device="IBMLTO2-3" (/dev/nst2)
 spooling=0 despooling=0 despool_wait=1
 Files=155 Bytes=2,925,138,595 Bytes/sec=323,648
 FDReadSeqNo=45,916 in_msg=45480 out_msg=9 fd=35
Writing: Incremental Backup job cdking JobId=45646 Volume="LTO261L2"
 pool="Default" device="IBMLTO2-3" (/dev/nst2)
 spooling=0 despooling=0 despool_wait=1
 Files=88 Bytes=11,846,912 Bytes/sec=1,310
 FDReadSeqNo=920 in_msg=672 out_msg=9 fd=23
Writing: Incremental Backup job ceg3-d810 JobId=45648 Volume="LTO253L2"
 pool="Default" device="IBMLTO2-2" (/dev/nst1)
 spooling=0 despooling=1 despool_wait=0
 Files=35 Bytes=1,391,695,993 Bytes/sec=176,588
 FDReadSeqNo=21,542 in_msg=21439 out_msg=9 fd=36
Writing: Incremental Backup job clifford3 JobId=45651 Volume="LTO261L2"
 pool="Default" device="IBMLTO2-3" (/dev/nst2)
 spooling=0 despooling=0 despool_wait=0
 Files=0 Bytes=0 Bytes/sec=0
 FDReadSeqNo=6 in_msg=6 out_msg=4 fd=32
Writing: Incremental Backup job cxj57-gx270 JobId=45657 Volume="LTO261L2"
 pool="Default" device="IBMLTO2-3" (/dev/nst2)
 spooling=0 despooling=0 despool_wait=0
 Files=0 Bytes=0 Bytes/sec=0
 FDReadSeqNo=6 in_msg=6 out_msg=4 fd=33
Writing: Incremental Backup job dxa2-d630 JobId=45665 Volume="LTO261L2"
 pool="Default" device="IBMLTO2-3" (/dev/nst2)
 spooling=0 despooling=0 despool_wait=0
 Files=0 Bytes=0 Bytes/sec=0
 FDReadSeqNo=6 in_msg=6 out_msg=4 fd=17
Writing: Incremental Backup job educationdean JobId=45667 Volume=""
 pool="Default" device="IBMLTO2-1" (/dev/nst0)
 spooling=0 despooling=0 despool_wait=0
 Files=0 Bytes=0 Bytes/sec=0
 FDSocket closed


Jobs waiting to reserve a drive:
3605 JobId=45667 wants free drive but device "IBMLTO2-1" (/dev/nst0) 
is busy.

[terminated jobs info snipped out]
Device status:
Autochanger "Dell-PV136T" with devices:
"IBMLTO2-1" (/dev/nst0)
"IBMLTO2-2" (/dev/nst1)
"IBMLTO2-3" (/dev/nst2)
Device "IBMLTO2-1" (/dev/nst0) is mounted with:
 Volume:  LTO342L2
 Pool:Default
 Media type:  LTO-2
 Slot 32 is loaded in drive 0.
 Total Bytes=11,991,168,000 Blocks=185,874 Bytes/block=64,512
 Positioned at File=14 Block=0
Device "IBMLTO2-2" (/dev/nst1) is mounted with:
 Volume:  LTO253L2
 Pool:Default
 Media type:  LTO-2
 Slot 48 is loaded in drive 1.
 Total Bytes=2,193,408 Blocks=33 Bytes/block=66,466
 Positioned at File=1 Block=0
Device "IBMLTO2-3" (/dev/nst2) is not open.
 Device is being initialized.
 Drive 2 status unknown.


Used Volume status:
[nothing further and the bconsole program hangs here]

Note that the last Writing line has no volume listed.  The odd thing is 
that there actually is a tape in IBMLTO2-1.  There's no tape in drive 
IBMLTO2-3.  The pool apparently needs another appendable volume and 
there are several available in the Scratch pool but bacula is stuck.

I tried to mount a volume into the empty drive and got back the following...
*mount slot=61 drive=2
Automatically selected Storage: Dell-PV136T
3001 Device "IBMLTO2-3" (/dev/nst2) is doing acquire.

Does anybody have any idea what to do to further troubleshoot this?  I 
have had some other instances of bacula getting hung up and so I have 
already previously applied the 2.4.3-orphaned-jobs.patch

Bob



Re: [Bacula-users] bacula hang waiting for storage

2008-11-29 Thread Bob Hetzel


> From: Arno Lehmann <[EMAIL PROTECTED]> 
 > Date: Thu, 27 Nov 2008 08:14:45 +0100
> Hi, 
 >
> 26.11.2008 21:22, Bob Hetzel wrote:
>> > I've got bacula currently in a hung state with the following interesting 
>> > info.  When I run a status storage produces the following...
> 
> Is your Bacula still stuck? If so, and you have gdb installed, and a 
> Bacula with debug symbols, now might be a good time to see what it's 
> doing...
> 

I'll put the traceback at the end

>> > 
>> > Used Volume status:
>> > [nothing further and the bconsole program hangs here]
> 
> That alone would be a bug, I guess...
[snip]
> Sounds like it's worth a bug report - especially if you can re-create 
> the problem. I cc'ed this to Eric, who - I believe - has been working 
> on this sort of problems recently.
> 
> Arno
> 
It didn't crash so I tried to force the traceback per the instructions 
at http://www.bacula.org/en/dev-manual/What_Do_When_Bacula.html

The first line of the output implies that I did something wrong but 
anyway...

/usr/sbin: No such file or directory.
Using host libthread_db library "/lib/libthread_db.so.1".
[Thread debugging using libthread_db enabled]
[New Thread -1214433584 (LWP 5411)]
[New Thread -1416819824 (LWP 9229)]
[New Thread -1425212528 (LWP 8819)]
[New Thread -1374856304 (LWP 8816)]
[New Thread -1341285488 (LWP 8670)]
[New Thread -1400034416 (LWP 8553)]
[New Thread -1366463600 (LWP 8402)]
[New Thread -1442006128 (LWP 8160)]
[New Thread -1408427120 (LWP 8157)]
[New Thread -1383249008 (LWP 8154)]
[New Thread -1358070896 (LWP 8151)]
[New Thread -1324434544 (LWP 8017)]
[New Thread -1316041840 (LWP 8016)]
[New Thread -1307649136 (LWP 8015)]
[New Thread -1282483312 (LWP 8014)]
[New Thread -1274090608 (LWP 8013)]
[New Thread -1265697904 (LWP 8012)]
[New Thread -1257292912 (LWP 8007)]
[New Thread -1248900208 (LWP 8004)]
[New Thread -1240507504 (LWP 8003)]
[New Thread -1299256432 (LWP 8002)]
[New Thread -1332892784 (LWP 8001)]
[New Thread -1433613424 (LWP 8000)]
[New Thread -1231897712 (LWP 7921)]
[New Thread -1223505008 (LWP 5414)]
[New Thread -1215112304 (LWP 5413)]
0xb7f6a410 in __kernel_vsyscall ()
$1 = "gyrus-dir", '\0' 
$2 = 0x80f8e00 "bacula-dir"
$3 = 0x80f90b0 "/usr/sbin/"
$4 = 0x80f91e0 "MySQL"
$5 = 0x80ed18b "2.4.3 (10 October 2008)"
$6 = 0x80ed1a3 "i686-pc-linux-gnu"
$7 = 0x80ed1b5 "suse"
$8 = 0x80ed1ba "10.2"
#0  0xb7f6a410 in __kernel_vsyscall ()
#1  0xb7c73876 in __nanosleep_nocancel () from /lib/libpthread.so.0
#2  0x080a827b in bmicrosleep (sec=60, usec=0) at bsys.c:71
#3  0x08071b28 in wait_for_next_job (one_shot_job_to_run=0x0) at 
scheduler.c:130
#4  0x0804de85 in main (argc=0, argv=0xbfe00164) at dird.c:288

Thread 26 (Thread -1215112304 (LWP 5413)):
#0  0xb7f6a410 in __kernel_vsyscall ()
#1  0xb7adba41 in ___newselect_nocancel () from /lib/libc.so.6
#2  0x080a99b9 in bnet_thread_server (addrs=0x80f9860, max_clients=20, 
client_wq=0x80f64e0, handle_client_request=0x808d536 
)
 at bnet_server.c:161
#3  0x0808d52e in connect_thread (arg=0x80f9860) at ua_server.c:84
#4  0xb7c6c112 in start_thread () from /lib/libpthread.so.0
#5  0xb7ae22ee in clone () from /lib/libc.so.6

Thread 25 (Thread -1223505008 (LWP 5414)):
#0  0xb7f6a410 in __kernel_vsyscall ()
#1  0xb7c707dc in pthread_cond_timedwait@@GLIBC_2.3.2 () from 
/lib/libpthread.so.0
#2  0x080ccf27 in watchdog_thread (arg=0x0) at watchdog.c:307
#3  0xb7c6c112 in start_thread () from /lib/libpthread.so.0
#4  0xb7ae22ee in clone () from /lib/libc.so.6

Thread 24 (Thread -1231897712 (LWP 7921)):
#0  0xb7f6a410 in __kernel_vsyscall ()
#1  0xb7c7302b in __read_nocancel () from /lib/libpthread.so.0
#2  0x080a9300 in read_nbytes (bsock=0x8213df0, ptr=0xb692b1b4 
"\333\v\016\b\350\261\222\266\217#", nbytes=4) at bnet.c:82
#3  0x080aba1c in BSOCK::recv (this=0x8213df0) at bsock.c:381
#4  0x080a9021 in bnet_recv (bsock=0x8213df0) at bnet.c:187
#5  0x0808ef27 in do_storage_status (ua=0x8173d48, store=0x81d9f00) at 
ua_status.c:325
#6  0x0808f6ae in status_cmd (ua=0x8173d48, cmd=0x8160778 "status 
storage") at ua_status.c:134
#7  0x08076dd0 in do_a_command (ua=0x8173d48, cmd=0x8160778 "status 
storage") at ua_cmds.c:180
#8  0x0808d647 in handle_UA_client_request (arg=0x81b77e8) at 
ua_server.c:147
#9  0x080cd952 in workq_server (arg=0x80f64e0) at workq.c:357
#10 0xb7c6c112 in start_thread () from /lib/libpthread.so.0
#11 0xb7ae22ee in clone () from /lib/libc.so.6

Thread 23 (Thread -1433613424 (LWP 8000)):
#0  0xb7f6a410 in __kernel_vsyscall ()
#1  0xb7c7302b in __read_nocancel () from /lib/libpthread.so.0
#2  0x080a9300 in read_nbytes (bsock=0x820e898, ptr=0xaa8cc024 "", 
nbytes=4) at bnet.c:82
#3  0x080aba1c in BSOCK::recv (this=0x820e898) at bsock.c:381
#4  0x0805de8

Re: [Bacula-users] bacula hang waiting for storage

2008-12-03 Thread Bob Hetzel

Previously, From: Arno Lehmann <[EMAIL PROTECTED]> said...

>> Thread 26 (Thread -1215112304 (LWP 5413)):
>> > #0  0xb7f6a410 in __kernel_vsyscall ()
>> > #1  0xb7adba41 in ___newselect_nocancel () from /lib/libc.so.6
>> > #2  0x080a99b9 in bnet_thread_server (addrs=0x80f9860, max_clients=20, 
>> > client_wq=0x80f64e0, handle_client_request=0x808d536 
>> > )
>> >  at bnet_server.c:161
> 
> The above line looks like it might be related to the problem... in 
> general, there's one thread per job running (plus the parent threads), 
> and the variable max_clients might indicate the number of currently 
> active thread servers is exhausted or something...
> 
[snip]

> Ok... there are quite a number of threads that could be console 
> connections. There is a hard limit of the active console connections - 
> it seems possible that you ran into that limit.
> 
> Have you checked how many console connections are currently open?
> 
> IIRC, if you SIGTERM a console, it does not necessarily die... so 
> there could be console processes laying around somewhere, keeping 
> their connections open.
> 
> If you find those and 'kill -9' them, do your new console connections 
> work?
> 
> Arno
> 

I generally operate with no more than two console connections.  I think 
I may have had a hung console connection before doing that traceback 
which I would have ctrl-c'd to get out of.  Of course at this point I 
don't still have it stalled, and I restarted the server a couple of 
times since then for other reasons.  If it happens again I'll do the 
traceback and also do a "ps -ef" too.  I think in this case I only had 
the -sd, -dir, and -fd running though so was there something else you 
meant?  Also, after I ctrl-c'd the connections I was able to run a new 
console connection and do certain things, but it would hang in the same 
spot if I did a "status storage" or a mount request.

In addition, I don't know if this has any bearing but here are the 
concurrency values I was operating under...
In bacula-sd.conf
Maximum Concurrent Jobs = 20
3 drives with spooling turned on.
In bacula-dir.conf
in the Director section:
Maximum Concurrent Jobs = 12
In the Jobs sections,
Maximum Concurrent Jobs = 12
In the Storage section
Maximum Concurrent Jobs = 12

There are 3 drives in my autochanger, spooling is turned on.  I've 
temporarily set bacula back to using only two drives since things were 
running more smoothly before I added the 3rd one.


Thanks,

   Bob



[Bacula-users] bacula stuck inserting attributes

2008-12-04 Thread Bob Hetzel

Hi all,

I'm using bacula 2.4.3 and I'm thinking I might have created a problem 
by terminating dbcheck with ctrl-c.  I had read about it taking a long 
time to complete if you haven't run it in a while and don't have the 
right temporary indexes created.  Anyway... I didn't restart the 
database server after terminating it, but then I started the bacula 
daemons and now the running jobs have all been stuck with status "Dir 
inserting Attributes" for 14 hrs or so.  These jobs normally only take a 
few minutes.

I started googling and came up with this command to see what bacula is 
doing.

# mysqladmin -u root processlist
+-----+--------+-----------+--------+---------+-------+----------------------+------------------------------------------------------+
| Id  | User   | Host      | db     | Command | Time  | State                | Info                                                 |
+-----+--------+-----------+--------+---------+-------+----------------------+------------------------------------------------------+
| 517 | bacula | localhost | bacula | Query   | 68125 | Copying to tmp table | SELECT DISTINCT Path.PathId,File.PathId FROM Path LEFT OUTER JOIN File ON (Path.PathId=File.PathId) |
| 519 | bacula | localhost | bacula | Sleep   | 270   |                      |                                                      |
| 520 | bacula | localhost | bacula | Query   | 50855 | Locked               | LOCK TABLES Path write, batch write, Path as p write |
| 521 | bacula | localhost | bacula | Query   | 50803 | Locked               | LOCK TABLES Path write, batch write, Path as p write |
| 522 | bacula | localhost | bacula | Query   | 50802 | Locked               | LOCK TABLES Path write, batch write, Path as p write |
| 523 | bacula | localhost | bacula | Query   | 50799 | Locked               | LOCK TABLES Path write, batch write, Path as p write |
| 524 | bacula | localhost | bacula | Query   | 50797 | Locked               | LOCK TABLES Path write, batch write, Path as p write |
| 525 | bacula | localhost | bacula | Query   | 50782 | Locked               | LOCK TABLES Path write, batch write, Path as p write |
| 526 | bacula | localhost | bacula | Query   | 50735 | Locked               | LOCK TABLES Path write, batch write, Path as p write |
| 527 | bacula | localhost | bacula | Query   | 50689 | Locked               | LOCK TABLES Path write, batch write, Path as p write |
| 528 | bacula | localhost | bacula | Query   | 50641 | Locked               | LOCK TABLES Path write, batch write, Path as p write |
| 529 | bacula | localhost | bacula | Query   | 50518 | Locked               | LOCK TABLES Path write, batch write, Path as p write |
| 530 | bacula | localhost | bacula | Query   | 50242 | Locked               | LOCK TABLES Path write, batch write, Path as p write |
| 531 | bacula | localhost | bacula | Query   | 50060 | Locked               | LOCK TABLES Path write, batch write, Path as p write |
| 536 | root   | localhost |        | Query   | 0     |                      | show processlist                                     |
+-----+--------+-----------+--------+---------+-------+----------------------+------------------------------------------------------+


Can anybody tell me what my next step should be to let those backups 
finish normally?

 Bob



Re: [Bacula-users] bacula stuck inserting attributes

2008-12-04 Thread Bob Hetzel

Nevermind, all... the answer was to run "mysql" and then kill the 
offending process, which was apparently left over from the dbcheck 
command.  In my case the output showed it was process 517, so I did...

mysql> kill 517 ;

That allowed bacula to continue normally.

But methinks that's a bug for dbcheck not to abort stuff properly when 
you ctrl-c it.  Clearly not something I would deem a high priority bug 
though.

Bob
Previously, I wrote:
> 
> Hi all,
> 
> I'm using bacula 2.4.3 and I'm thinking I might have created a problem 
> by terminating dbcheck with ctrl-c.  I had read about it taking a long 
> time to complete if you haven't run it in a while and don't have the 
> right temporary indexes created.  Anyway... I didn't restart the 
> database server after terminating it, but then I started the bacula 
> daemons and now the running jobs have all been stuck with status "Dir 
> inserting Attributes" for 14 hrs or so.  These jobs normally only take a 
> few minutes.
> 
> I started googling and came up with this command to see what bacula is 
> doing.
> 
> # mysqladmin -u root processlist
> [processlist output snipped -- same as in my original message above]
> 
> Can anybody tell me what my next step should be to let those backups 
> finish normally?
> 
> Bob
> 



Re: [Bacula-users] bacula hang waiting for storage

2008-12-04 Thread Bob Hetzel

> From: Julien Cigar <[EMAIL PROTECTED]> - 2008-12-04 16:14
>  
> On Thu, 2008-12-04 at 15:44 +, Alan Brown wrote:
>> On Wed, 3 Dec 2008, Julien Cigar wrote:
>> 
>> > > Which model and revision?
>> >
>> > I posted the full output on the FreeBSD ML some times ago :
>> > http://lists.freebsd.org/pipermail/freebsd-scsi/2008-November/003706.html
>> 
>> Is that Ultra or LVD?
>> 
>> Ultra 160/320 single-ended scsi have maximum cable lengths in the region
>> of 60-90cm.
>> 
>> LVD is several metres.
>> 
> 
> I think it's an Ultra, there are only two connectors on it (plus one to
> connect to the SCSI card). Actually it's configured as this :
> SCSI CARD == TAPE DRIVE == TERMINATOR
> 
> Just to be sure that everything is compatible (as I'm not a SCSI
> expert), the tape drive has the following interface : 
> - Embedded SCSI interface (Ultra160LVD, Single-ended or Low Voltage
> differential)
> 
> but the SCSI cards I tested are not U160, the last one was one from
> QLogic :
> - QLA1020/104x Fast-Wide-SCSI "Fast!SCSI IQ" Host Adapter'
> 
> and at boot it's detected as :
> 
> sa0 at isp0 bus 0 target 5 lun 0
> sa0:  Removable Sequential Access SCSI-2 device 
> sa0: 40.000MB/s transfers (20.000MHz, offset 8, 16bit)
> 
> I hope everything is compatible (U160 vs SCSI 2 ?)
> 
> Thanks for your help,
> Julien
>  

After noting that my cable length is quite a bit longer than 90 cm, I 
checked around... Just a minor nit-pick: Ultra160's maximum cable length 
is 12 m, and it is LVD.

http://en.wikipedia.org/wiki/SCSI#cite_ref-length_7-0

But anyway, one shouldn't try to drive an LVD peripheral with a 
single-ended SCSI controller, as Alan pointed out in another post.



Re: [Bacula-users] Speed and integration issues

2008-12-05 Thread Bob Hetzel


> Date: Fri, 5 Dec 2008 04:45:56 -0500
> From: David Lee Lambert <[EMAIL PROTECTED]>

> I'm trying to use Bacula to do daily backups of data stored in iSCSI LUNs on 
> a  
> NetApp filer, using NetApp snapshots to ensure consistency.  The hosts to be 
> backed up have dual Gigabit Ethernet connections to the NetApp.  The backup 
> host consists of:

One thing you should make sure of is that your snapshot is read-only and 
that you're not updating the access time of the files when you back 
them up.
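On the access-time point, Bacula's FileSet Options resource can be told not to touch atimes when reading files. A minimal sketch, assuming a Linux client and a hypothetical mount point for the snapshot (name and path are illustrative, not from the original post):

```conf
# Hedged sketch of a FileSet that avoids updating access times on the
# backed-up files; the FileSet name and path below are assumptions.
FileSet {
  Name = "snapshot-fs"
  Include {
    Options {
      noatime = yes      # open files with O_NOATIME where the OS supports it
      signature = MD5
    }
    File = /mnt/netapp-snapshot
  }
}
```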

> 
> - a desktop-class (32-bit, 2.4GHz) machine with a single local SATA drive

That has enough CPU power for bacula at LTO-2 speeds but the single 
(slow) SATA drive may be causing your bacula database to crawl.

> - an Overland Storage autochanger with room for 12 LTO-4 tapes

LTO-4 needs data streamed very fast.  When you are running a backup, 
look at the output of the "top" command: what's using most of the CPU? 
Is it bacula or the Postgres database?  Keep in mind that moving large 
amounts of data rapidly across the bus of your backup server takes CPU 
power, and since the server has to do more than just move the data 
being backed up, you may need a faster backup server.
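One common way to keep an LTO-4 drive streaming is Bacula's data spooling: jobs despool to tape in one fast sequential burst instead of many slow interleaved trickles. A hedged sketch, with job name, paths, and sizes as assumptions only:

```conf
# Sketch only: enable data spooling so the tape drive sees sequential
# despool runs rather than slow interleaved client streams.
# The name, directory, and spool size below are illustrative values.
Job {
  Name = "netapp-backup"
  Spool Data = yes
}

Device {
  Name = "LTO4-0"
  Spool Directory = /var/bacula/spool
  Maximum Spool Size = 200G   # must fit on fast local disk
}
```

Note that spooling only helps if the spool disk is faster than the aggregate client throughput; on a single slow SATA drive it could make things worse.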

> - a built-in Fast Ethernet adapter (3com 3c509) and an add-in Gigabit 
> Ethernet 
> adapter (Linksys rev 10)
> - running Ubuntu G server and kernel 2.6.22; Bacula is storing its catalog in 
> a local Postgres database
> 
> One issue we've struggled with is speed.  With the GB adapter, reading files 
> from a snapshot via iSCSI, we were consistently getting less than 2MByte/sec, 
> sometimes as low as 300kbyte/sec.  Yesterday we switched to the 100Mbit 
> adapter,  and were sometimes able to almost max it out during a full backup 
> (network usage of 10 to 11 MByte/sec on the Fast Ethernet adapter),  but it 
> also slowed down sometimes: it took 25 minutes to back up a 22GB LUN with 7GB 
> of files,  and it took 25 minutes to back up a 6GB LUN with 1.1GB of files 
> (yes, almost exactly the same amount of total time).

The speed of the backup will depend on the size of the files, so you 
will likely see widely varying speeds across a collection of different 
servers.  Web directories and mail trees (at least the ones where every 
message is a single file) are probably going to be the slowest.  They'll 
be slow because of the overhead of starting and stopping reads on the 
NetApp, as well as the catalog and other operations on your backup 
server.  This is true for any backup system that does file backups 
rather than image backups (i.e. where you can easily restore one file 
as opposed to a whole volume of files).

The 100 Mbit adapter showing improved performance suggests you have a 
bad gigabit card somewhere, a problem on your network, or, if you're 
lucky, just a network card driver or firmware that needs to be updated.

> I recently did dd to a raw tape and got a speed of at least 17MByte/sec.  The 
> local drive seems to have a write speed of about 7Mbyte/sec,  so pooling to 
> local disk is not an option.  On our faster servers with dual server-class 
> Gigabit Ethernet adapters,  I can get burst read speeds of 40 to 70 
> Mbyte/sec.

We had a problem whereby some low-end desktop-class gigabit switches (8 
or 10 ports, I can't remember which) would perform really badly if you 
plugged a single 100 Mbit device into them, even on the other ports. 
Your mileage may vary on that, though.  If you can go all-gigabit you'll 
probably not need to do anything really complicated like a parallel 
backup network.

> We'd also like our tape-rotation policy, for at least some of our tapes, to 
> mirror as closely as possible what we do for our existing servers with local 
> tape drives:  daily tape rotation in a two-week cycle,  with tapes written at 
> night and taken off-site for one week starting the day after they're written. 
>  
> That gives us an 18-hour window in which to write the tapes, and we should be 
> able to fill an 800-GB tape in 17 hours 46 minutes ( 800e8 / 1.25e7 / 3600 = 
> 17.77 ) at Fast Ethernet speed.  We probably have less data than that to back 
> up;  in fact, if we keep our other current tape drives and don't back 
> up /usr/portage or similar directories anywhere, we probably have less than 
> 400GB.  Therefore,  I think we should do a full backup each day; perhaps even 
> a full backup of the first snapshot and incremental backups for later 
> snapshots that same day.  Is that reasonable?  
> 
> Is it possible to initiate an incremental backup that would store all changes 
> against the contents of a certain medium?  (Say tape 5 is in the drive today 
> and has a 380GB full backup and 6 20-GB incremental backups going back 3 
> months.  File /foo/bar/xxx changed monday and tuesday, so the newest copy is 
> on the tuesday tape;  but write a copy to the friday tape as well.)

This was too confusing for me to follow.  If you want more than one copy 
of a file that changes often you c

[Bacula-users] bacula sometimes gets stuck when volume wanted is already in a different drive

2008-12-23 Thread Bob Hetzel

I'm running bacula v2.4.3 on OpenSuse 10.2 with a two-tape-drive 
autoloader, and I've been having this problem every few days for a while 
now: bacula wants a tape that's already loaded, but in a different 
drive, and it's not able to unload the tape and move it, or better yet 
just use it where it's already mounted on the other drive.
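One Job-level directive sometimes suggested for multi-drive autochanger contention is Prefer Mounted Volumes. This is a sketch only; whether it actually avoids the stuck state varies between 2.4.x releases, and the job name below is just one from this report:

```conf
# Hedged sketch: Prefer Mounted Volumes controls whether a job waits
# for the drive holding its volume or grabs any free drive.  The
# default is yes; behaviour with two drives should be tested, not
# assumed, on the specific 2.4.x release in use.
Job {
  Name = "arc2-gx280"
  Prefer Mounted Volumes = yes
}
```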

23-Dec 03:02 gyrus-sd JobId 50576: Please mount Volume "LTO289L2" or 
label a new one for:
 Job:  arc2-gx280.2008-12-22_19.30.24
 Storage:  "IBMLTO2-1" (/dev/nst0)
 Pool: Default
 Media type:   LTO-2

Here's the info on the tape it wants from a list volumes command...

| MediaId | VolumeName | VolStatus | Enabled | VolBytes        | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
|     107 | LTO289L2   | Append    |       1 | 119,531,768,832 |      125 |    7,776,000 |       1 |   50 |         1 | LTO-2     | 2008-12-22 20:04:24 |

And here's the output from a status storage command

*status storage
Automatically selected Storage: Dell-PV136T
Connecting to Storage daemon Dell-PV136T at gyrus:9103

gyrus-sd Version: 2.4.3 (10 October 2008) i686-pc-linux-gnu suse 10.2
Daemon started 19-Dec-08 17:02, 255 Jobs run since started.
  Heap: heap=4,100,096 smbytes=3,422,070 max_bytes=3,759,755 bufs=358 
max_bufs=535
Sizes: boffset_t=8 size_t=4 int32_t=4 int64_t=8

Running Jobs:
Writing: Incremental Backup job aab9 JobId=50574 Volume="LTO289L2"
 pool="Default" device="IBMLTO2-1" (/dev/nst0)
 spooling=0 despooling=0 despool_wait=0
 Files=0 Bytes=0 Bytes/sec=0
 FDReadSeqNo=6 in_msg=6 out_msg=4 fd=18
Writing: Incremental Backup job dmp9-gx620 JobId=50579 Volume="LTO289L2"
 pool="Default" device="IBMLTO2-1" (/dev/nst0)
 spooling=0 despooling=0 despool_wait=0
 Files=0 Bytes=0 Bytes/sec=0
 FDReadSeqNo=6 in_msg=6 out_msg=4 fd=17
Writing: Incremental Backup job arc2-gx280 JobId=50576 Volume="LTO289L2"
 pool="Default" device="IBMLTO2-1" (/dev/nst0)
 spooling=0 despooling=0 despool_wait=0
 Files=0 Bytes=0 Bytes/sec=0
 FDReadSeqNo=6 in_msg=6 out_msg=4 fd=13
Writing: Incremental Backup job mxp53-gx280 JobId=50586 Volume="LTO289L2"
 pool="Default" device="IBMLTO2-1" (/dev/nst0)
 spooling=0 despooling=0 despool_wait=0
 Files=0 Bytes=0 Bytes/sec=0
 FDReadSeqNo=6 in_msg=6 out_msg=4 fd=15
Writing: Incremental Backup job tgf5 JobId=50588 Volume="LTO289L2"
 pool="Default" device="IBMLTO2-1" (/dev/nst0)
 spooling=0 despooling=0 despool_wait=0
 Files=0 Bytes=0 Bytes/sec=0
 FDReadSeqNo=6 in_msg=6 out_msg=4 fd=14


Jobs waiting to reserve a drive:


Terminated Jobs:
  JobId  LevelFiles  Bytes   Status   FinishedName
===
  50483  Incr 944.917 G  OK   22-Dec-08 12:59 jas88-gx280
  50558  Incr  0 0   Cancel   22-Dec-08 13:10 smm62-gx270
  50562  Incr  0 0   Other22-Dec-08 13:12 tdb3-745
  50573  Full 893.284 G  OK   22-Dec-08 13:17 BackupCatalog
  50530  Full  0 0   Cancel   22-Dec-08 13:19 nab-gx280
  50577  Incr176500.6 M  OK   22-Dec-08 20:02 beh
  50578  Incr118960.7 M  OK   22-Dec-08 20:03 dms29-gx270
  50581  Incr2651.460 G  OK   22-Dec-08 20:04 gradresed
  50585  Incr10819.15 M  OK   22-Dec-08 20:04 megpop
  50587  Incr  0 0   Cancel   22-Dec-08 20:32 sns12-gx280


Device status:
Autochanger "Dell-PV136T" with devices:
"IBMLTO2-1" (/dev/nst0)
"IBMLTO2-2" (/dev/nst1)
Device "IBMLTO2-1" (/dev/nst0) is not open.
 Device is BLOCKED waiting for mount of volume "LTO289L2",
Pool:Default
Media type:  LTO-2
 Drive 0 status unknown.
Device "IBMLTO2-2" (/dev/nst1) is mounted with:
 Volume:  LTO289L2
 Pool:Default
 Media type:  LTO-2
 Slot 50 is loaded in drive 1.
 Total Bytes=119,531,768,832 Blocks=1,852,860 Bytes/block=64,512
 Positioned at File=125 Block=0


Used Volume status:
LTO210L2 on device "IBMLTO2-2" (/dev/nst1)
 Reader=0 writers=0 devres=0 volinuse=0
LTO289L2 on device "IBMLTO2-1" (/dev/nst0)
 Reader=0 writers=0 devres=5 volinuse=1


Data spooling: 0 active jobs, 0 bytes; 238 total jobs, 86,653,540,305 
max bytes/job.
Attr spooling: 0 active jobs, 0 bytes; 238 total jobs, 15,082,824 max bytes.


In this case I was able to release the tape from drive LTO2-2 and issue 
a mount for that tape onto the desired drive and things continued.

Before doing this, however, in case it's helpful I forced a traceback:

gyrus:/var/bacula #  ps -ef |grep bacula
root  5006 1  1 Dec19 ?01:01:03 /usr/sbin/bacula-sd -u 
root -g bacula -v -c /etc/bacula/bacula-sd.conf
root  5020 1  0 Dec19 ?00:00:12 /usr/sbin/bacula-fd -v 
-c /etc/bacula/bacula-fd.conf
bacula5029 1  0 Dec19 ? 

[Bacula-users] bacula hang issue. was: bacula sometimes gets stuck when volume wanted is already in a different drive

2009-01-06 Thread Bob Hetzel

Hi all,

I've had a problem for a while whereby bacula hangs waiting for storage.

Here's the message I posted to the bacula-users list previously.
http://marc.info/?l=bacula-users&m=123004380923706&w=2

Yesterday I upgraded to 2.4.4 and I think I've still got that problem--bacula 
still stops processing backups, but the message output is different.  Here's 
the 
last messages from the console this time.

05-Jan 18:28 mxg86: VSS Writer (BackupComplete): "WMI Writer", State: 0x1 
(VSS_WS_STABLE)
05-Jan 18:28 mxg86: VSS Writer (BackupComplete): "MSDEWriter", State: 0x1 
(VSS_WS_STABLE)
05-Jan 18:28 mxg86: VSS Writer (BackupComplete): "Microsoft Writer (Bootable 
State)", State: 0x1 (VSS_WS_STABLE)
05-Jan 18:28 mxg86: VSS Writer (BackupComplete): "Microsoft Writer (Service 
State)", State: 0x1 (VSS_WS_STABLE)
05-Jan 18:28 gyrus-sd JobId 53263: Job write elapsed time = 01:25:13, Transfer 
rate = 1.977 M bytes/second
05-Jan 18:28 gyrus-sd JobId 53263: Committing spooled data to Volume 
"LTO295L2". 
Despooling 10,120,678,805 bytes ...
05-Jan 19:30 gyrus-dir JobId 53310: Prior failed job found in catalog. 
Upgrading 
to Full.
05-Jan 19:30 gyrus-dir JobId 53311: Prior failed job found in catalog. 
Upgrading 
to Full.
05-Jan 19:30 gyrus-dir JobId 53313: Prior failed job found in catalog. 
Upgrading 
to Full.
05-Jan 19:30 gyrus-dir JobId 53323: Prior failed job found in catalog. 
Upgrading 
to Full.
06-Jan 08:55 gyrus-dir JobId 53280: Fatal error: Network error with FD during 
Backup: ERR=Connection timed out
06-Jan 08:55 gyrus-sd JobId 53280: Job regcomm-gx280.2009-01-05_11.53.02.12 
marked to be canceled.
06-Jan 08:55 gyrus-dir JobId 53280: Fatal error: No Job status returned from FD.
06-Jan 08:55 gyrus-dir JobId 53280: Error: Bacula gyrus-dir 2.4.4 (28Dec08): 
06-Jan-2009 08:55:07
   Build OS:   i686-pc-linux-gnu suse 10.2
   JobId:  53280
   Job:regcomm-gx280.2009-01-05_11.53.02.12
   Backup Level:   Incremental, since=2009-01-04 12:00:41
   Client: "regcomm-gx280" 2.4.0 (04Jun08) 
Linux,Cross-compile,Win32
   FileSet:"cd-drive-dirs" 2007-11-05 19:00:00
   Pool:   "Default" (From Job resource)
   Storage:"Dell-PV136T" (From Job resource)
   Scheduled time: 05-Jan-2009 11:53:02
   Start time: 05-Jan-2009 17:13:48
   End time:   06-Jan-2009 08:55:07
   Elapsed time:   15 hours 41 mins 19 secs
   Priority:   15
   FD Files Written:   0
   SD Files Written:   0
   FD Bytes Written:   0 (0 B)
   SD Bytes Written:   0 (0 B)
   Rate:   0.0 KB/s
   Software Compression:   None
   VSS:no
   Storage Encryption: no
   Volume name(s):
   Volume Session Id:  92
   Volume Session Time:1231165805
   Last Volume Bytes:  281,632,167,936 (281.6 GB)
   Non-fatal FD errors:0
   SD Errors:  0
   FD termination status:  Error
   SD termination status:  Error
   Termination:*** Backup Error ***


I then ran a status director and saw that jobs have been stalled for more than 
12 hours.
Upon running a status storage, the bconsole program became unresponsive.
*status storage
Automatically selected Storage: Dell-PV136T
Connecting to Storage daemon Dell-PV136T at gyrus:9103

gyrus-sd Version: 2.4.4 (28 December 2008) i686-pc-linux-gnu suse 10.2
Daemon started 05-Jan-09 09:30, 79 Jobs run since started.
  Heap: heap=3,608,576 smbytes=3,262,825 max_bytes=3,328,738 bufs=567 
max_bufs=569
Sizes: boffset_t=8 size_t=4 int32_t=4 int64_t=8

Running Jobs:
Writing: Full Backup job krr6-d830.2009-01-05_11 JobId=53240 Volume="LTO298L2"
 pool="Default" device="IBMLTO2-1" (/dev/nst0)
 spooling=0 despooling=0 despool_wait=0
 Files=22,666 Bytes=16,473,267,441 Bytes/sec=259,119
 FDReadSeqNo=437,933 in_msg=373469 out_msg=9 fd=29
Writing: Incremental Backup job lcc3-o755.2009-01-05_11 JobId=53243 
Volume="LTO298L2"
 pool="Default" device="IBMLTO2-1" (/dev/nst0)
 spooling=0 despooling=0 despool_wait=0
 Files=0 Bytes=0 Bytes/sec=0
 FDReadSeqNo=6 in_msg=6 out_msg=4 fd=49
Writing: Full Backup job lsk2.2009-01-05_11 JobId=53248 Volume="LTO298L2"
 pool="Default" device="IBMLTO2-1" (/dev/nst0)
 spooling=0 despooling=0 despool_wait=1
 Files=51,118 Bytes=6,367,622,723 Bytes/sec=107,850
 FDReadSeqNo=528,490 in_msg=378838 out_msg=9 fd=50
Writing: Full Backup job mes179-d630.2009-01-05_11 JobId=53254 Volume="LTO298L2"
 pool="Default" device="IBMLTO2-1" (/dev/nst0)
 spooling=0 despooling=0 despool_wait=0
 Files=0 Bytes=0 Bytes/sec=0
 FDReadSeqNo=6 in_msg=6 out_msg=5 fd=16
Writing: Full Backup job mje42-gx280.2009-01-05_11 JobId=53256 Volume="LTO298L2"
 pool="Default" device="IBMLTO2-1" (/dev/nst0)
 spooling=0 despooling=0 despool_wait=1
 Files=26,703 Bytes=4,115,465,652 Bytes/sec=69,705
 FDReadSeqNo=285,444 in_msg=209770 out_msg

Re: [Bacula-users] [Bacula-devel] bacula hang issue. was: bacula sometimes gets stuck when volume wanted is already in a different drive

2009-01-06 Thread Bob Hetzel
ad 15 (Thread -1313989744 (LWP 26043)):
#0  0xb7fd3410 in __kernel_vsyscall ()
#1  0xb7e1ec4e in __lll_mutex_lock_wait () from /lib/libpthread.so.0
#2  0xb7e1aa3c in _L_mutex_lock_88 () from /lib/libpthread.so.0



> 
> On Tuesday 06 January 2009 15:59:13 Bob Hetzel wrote:
>> Hi all,
>>
>> I've had a problem for a while whereby bacula hangs waiting for storage.
>>
>> Here's the message I posted to the bacula-users list previously.
>> http://marc.info/?l=bacula-users&m=123004380923706&w=2
>>
>> Yesterday I upgraded to 2.4.4 and I think I've still got that
>> problem--bacula still stops processing backups, but the message output is
>> different.  Here's the last messages from the console this time.
>>
>> 05-Jan 18:28 mxg86: VSS Writer (BackupComplete): "WMI Writer", State: 0x1
>> (VSS_WS_STABLE)
>> 05-Jan 18:28 mxg86: VSS Writer (BackupComplete): "MSDEWriter", State: 0x1
>> (VSS_WS_STABLE)
>> 05-Jan 18:28 mxg86: VSS Writer (BackupComplete): "Microsoft Writer
>> (Bootable State)", State: 0x1 (VSS_WS_STABLE)
>> 05-Jan 18:28 mxg86: VSS Writer (BackupComplete): "Microsoft Writer (Service
>> State)", State: 0x1 (VSS_WS_STABLE)
>> 05-Jan 18:28 gyrus-sd JobId 53263: Job write elapsed time = 01:25:13,
>> Transfer rate = 1.977 M bytes/second
>> 05-Jan 18:28 gyrus-sd JobId 53263: Committing spooled data to Volume
>> "LTO295L2". Despooling 10,120,678,805 bytes ...
>> 05-Jan 19:30 gyrus-dir JobId 53310: Prior failed job found in catalog.
>> Upgrading to Full.
>> 05-Jan 19:30 gyrus-dir JobId 53311: Prior failed job found in catalog.
>> Upgrading to Full.
>> 05-Jan 19:30 gyrus-dir JobId 53313: Prior failed job found in catalog.
>> Upgrading to Full.
>> 05-Jan 19:30 gyrus-dir JobId 53323: Prior failed job found in catalog.
>> Upgrading to Full.
>> 06-Jan 08:55 gyrus-dir JobId 53280: Fatal error: Network error with FD
>> during Backup: ERR=Connection timed out
>> 06-Jan 08:55 gyrus-sd JobId 53280: Job regcomm-gx280.2009-01-05_11.53.02.12
>> marked to be canceled.
>> 06-Jan 08:55 gyrus-dir JobId 53280: Fatal error: No Job status returned
>> from FD. 06-Jan 08:55 gyrus-dir JobId 53280: Error: Bacula gyrus-dir 2.4.4
>> (28Dec08): 06-Jan-2009 08:55:07
>>Build OS:   i686-pc-linux-gnu suse 10.2
>>JobId:  53280
>>Job:regcomm-gx280.2009-01-05_11.53.02.12
>>Backup Level:   Incremental, since=2009-01-04 12:00:41
>>Client: "regcomm-gx280" 2.4.0 (04Jun08)
>> Linux,Cross-compile,Win32 FileSet:"cd-drive-dirs"
>> 2007-11-05 19:00:00
>>Pool:   "Default" (From Job resource)
>>Storage:"Dell-PV136T" (From Job resource)
>>Scheduled time: 05-Jan-2009 11:53:02
>>Start time: 05-Jan-2009 17:13:48
>>End time:   06-Jan-2009 08:55:07
>>Elapsed time:   15 hours 41 mins 19 secs
>>Priority:   15
>>FD Files Written:   0
>>SD Files Written:   0
>>FD Bytes Written:   0 (0 B)
>>SD Bytes Written:   0 (0 B)
>>Rate:   0.0 KB/s
>>Software Compression:   None
>>VSS:no
>>Storage Encryption: no
>>Volume name(s):
>>Volume Session Id:  92
>>Volume Session Time:1231165805
>>Last Volume Bytes:  281,632,167,936 (281.6 GB)
>>Non-fatal FD errors:0
>>SD Errors:  0
>>FD termination status:  Error
>>SD termination status:  Error
>>Termination:*** Backup Error ***
>>
>>
>> I then ran a status director and saw that jobs have been stalled for more
>> than 12 hours.
>> Upon running a status storage, the bconsole program became unresponsive.
>> *status storage
>> Automatically selected Storage: Dell-PV136T
>> Connecting to Storage daemon Dell-PV136T at gyrus:9103
>>
>> gyrus-sd Version: 2.4.4 (28 December 2008) i686-pc-linux-gnu suse 10.2
>> Daemon started 05-Jan-09 09:30, 79 Jobs run since started.
>>   Heap: heap=3,608,576 smbytes=3,262,825 max_bytes=3,328,738 bufs=567
>> max_bufs=569 Sizes: boffset_t=8 size_t=4 int32_t=4 int64_t=8
>>
>> Running Jobs:
>> Writing: Full Backup job krr6-d830.2009-01-05_11 JobId=53240
>> Volume="LTO298L2" pool="Default" device="IBMLTO2-1" (/dev/nst0)
>>  spooling=0 despooling=0 despool_wait=0
>>  

[Bacula-users] restore oddities: files restored more than files expected and job pruning

2009-01-13 Thread Bob Hetzel

Here's the specifics of my bacula installation
gyrus-dir Version: 2.4.3 (10 October 2008) i686-pc-linux-gnu suse 10.2
Daemon started 08-Jan-09 19:31, 799 Jobs run since started.
  Heap: heap=1,871,872 smbytes=632,680 max_bytes=2,934,571 bufs=5,050 
max_bufs=10,223

I recently ran a restore job and below is some of the output I got.

1) How can it restore more files than expected
2) Why is it pruning jobs on restore?  I thought it only pruned jobs on 
backup?  Couldn't this be a problem for me if I have to restore something 
that's right near the expiration date?
3) I was not restoring any files directly in "vhf" but instead was 
restoring files in

C:/Documents and Settings/vhf/Application Data/Power On

but the error message doesn't say anything about that directory.  It looks 
like I got what I needed from the restore but I would not have thought that 
windows would "lock" an entire directory.  Is that possible or did this 
message leave off something?  Or was bacula trying to re-set the 
permissions of all the folders above the dir it was restoring?  I presume 
that one of the folders being restored had permissions inherited from above.

Anyway, here's the job output...

13-Jan 10:55 gyrus-sd JobId 54681: Reposition from (file:block) 77:12285 to 
78:0
13-Jan 10:55 jc-gx620: RestoreFiles.2009-01-13_10.51.49 Error: 
../../findlib/create_file.c:384 Could not open C:/Documents and 
Settings/vhf/: ERR=The process cannot access the file because it is being 
used by another process.

13-Jan 10:56 gyrus-sd JobId 54681: End of file 79 on device "IBMLTO2-1" 
(/dev/nst0), Volume "LTO295L2"
13-Jan 10:56 gyrus-sd JobId 54681: Alert: smartctl version 5.38 
[i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen
13-Jan 10:56 gyrus-sd JobId 54681: Alert: Home page is 
http://smartmontools.sourceforge.net/
13-Jan 10:56 gyrus-sd JobId 54681: Alert:
13-Jan 10:56 gyrus-sd JobId 54681: Alert: TapeAlert: OK
13-Jan 10:56 gyrus-dir JobId 54681: Error: Bacula gyrus-dir 2.4.3 
(10Oct08): 13-Jan-2009 10:56:14
   Build OS:   i686-pc-linux-gnu suse 10.2
   JobId:  54681
   Job:RestoreFiles.2009-01-13_10.51.49
   Restore Client: jc-gx620
   Start time: 13-Jan-2009 10:51:53
   End time:   13-Jan-2009 10:56:14
   Files Expected: 297
   Files Restored: 300
   Bytes Restored: 78,384,602
   Rate:   300.3 KB/s
   FD Errors:  1
   FD termination status:  Error
   SD termination status:  OK
   Termination:*** Restore Error ***

13-Jan 10:56 gyrus-dir JobId 54681: Begin pruning Jobs.
13-Jan 10:56 gyrus-dir JobId 54681: No Jobs found to prune.
13-Jan 10:56 gyrus-dir JobId 54681: Begin pruning Files.
13-Jan 10:56 gyrus-dir JobId 54681: Pruned Files from 1 Jobs for client 
jc-gx620 from catalog.
13-Jan 10:56 gyrus-dir JobId 54681: End auto prune.


--
This SF.net email is sponsored by:
SourcForge Community
SourceForge wants to tell your story.
http://p.sf.net/sfu/sf-spreadtheword
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Limitations displaying large file sizes in bconsole when browsing for a restore?

2009-11-17 Thread Bob Hetzel

Are there any odd limitations in bconsole's file browsing (in both the 
'restore' and 'estimate listing' commands) with regard to large files, 
such as VMware disk files?

I've just stumbled onto this oddity...

# ls -l pituitary-flat.vmdk
-rw-r- 1 root root 21474836480 Nov 17 15:27 pituitary-flat.vmdk

So that file size is over 21GB

When I go into bconsole restore and browse that directory I get...

doing an estimate listing (chopping out only the file I'm talking about) I 
get...

-rw-r-   1 root root2147483648 2009-11-17 15:21:37 
/fs1/pituitary/pituitary-flat.vmdk


doing a restore then browsing to that file inside bconsole I also get

-rw-r-   1 root root2147483648  2009-11-17 10:24:44 
/fs1/pituitary/pituitary-flat.vmdk

I don't think it's a sparse file, and bacula seems to be backing it up 
and restoring it properly as a 21 GB file; it's just that it displays as 
exactly 1/10 the size.  Perhaps it's dropping a trailing zero when a 
file is bigger than 10 GB?  (The displayed value, 2,147,483,648, also 
happens to be exactly 2^31, so some 32-bit limitation in the display 
code seems plausible.)


--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] [Bacula-devel] RFC: backing up hundreds of TB

2009-11-28 Thread Bob Hetzel


Ralf Gross wrote:
> Arno Lehmann schrieb:
>> 27.11.2009 13:23, Ralf Gross wrote:
>>> [crosspost to -users and -devel list]
>>>
>>> Hi,
>>>
>>> we are happily using bacula since a few years and already backing up
>>> some dozens of TB (large video files) to tape.
>>>
>>> In the next 2-3 years the amount of data will be growing to 300+ TB.

I guess my first question is how fast that ramp-up will be.  LTO-5 is 
set to hit the market next year, with one vendor already doing a 
pre-order deal where they sell you LTO-4 (I assume for way more than 
it's worth) and then upgrade you to LTO-5 when the drives become 
available.  I presume that's a really bad headache in the making, but I 
could see many data centers upgrading to LTO-5 by early 2011.

Could you make do until that time with copies going to multiple home-brewed 
raid arrays in different buildings so you can copy the data to disk and 
then back it up to tape later?


>>> We are looking for some very pricy solutions for the primary storage
>>> at the moment (NetApp etc). But we (I) are also looking if it is
>>> possible to go on with the way we store the data right now. Which is
>>> just some large raid arrays and backup2tape.
>> Good luck... while I agree that SAN/NAS appliances tend to look 
>> expensive, they've got their advantages when your space has to grow to 
>> really big sizes. Managing only one setup, when some physical disk 
>> arrays work together is one of these advantages.
> 
> I fully agree. But this comes with a price that is 5-10 time higher
> than a setup with simple RAID arrays and a large changer. In the end
> I'll present 2 or 3 concepts and others will decide how valuable the
> data is.
> 

I recently saw this article http://blogs.zdnet.com/BTL/?p=23765

If you need high performance that would be a lousy solution but if you just 
need high capacity, you could probably get awesome performance for those 
"occasional" reads with more RAM.  What I mean is that this might be a 
great place to put a 2nd copy of the data but you wouldn't be able to back 
it up entirely to tape in a reasonable backup window.  However if you only 
need to read a single 5-10 GB file off it every couple hours, the 
performance should be really good.

With the price difference between something like this and a SAN, it 
seems like it might be worth it for some big shops to try 3-or-more-way 
redundancy, as you'd still come out way ahead.  Of course if you were trying 
to do video editing right off it or an enormous database with a lot of 
reads/searches/writes you'd probably be pulling your hair out a lot. Other 
than that the only downside I see is that you have to take it down to swap 
out a dead drive.  Proper cooling and modern drives should give this thing 
decent reliability, but you'd want additional layers of redundancy to allow 
for offline swapping of parts, software/firmware upgrades, etc.





Re: [Bacula-users] Monthly rotation strategies

2009-12-03 Thread Bob Hetzel

As I've said before... do NOT rely on all your backups for an
entire month being on one tape.  If the tape breaks on the last
backup of the month, the net result is that you'll have no backup
at all.  I currently have Volume Use Duration set to 5 in my
environment and I feel like that's even a little risky but I haven't
fully ramped everything up yet.  It may seem like a waste to buy
many LTO-4 tapes for each month but you'll wonder how you could
worry about such a small cost ($40 / tape or so) if you wind up
getting stung by a problem like that.  The cost per gigabyte goes up a
lot if you consider that you're only putting 200 GB vs 800 GB on some
tapes but whenever thinking about backup strategies, don't forget to
think about the cost of lost data as you weigh relative factors.
And this number will also probably be small compared to the price of
the changer.

How big is your changer?  Is it big enough to always just keep a
group of tapes in plus a few scratches?  If that's the case, then
my suggestion is just pull out tapes marked Full or Used and bring
back new scratch tapes as frequently as you can manage.  After getting
your settings right you can do it almost blindly whereby you just
bring back the oldest scratch tapes (i.e. use the scratch pool).

Volume Use Duration will apply to each tape not each "tape group".
I like to use just one pool for all my backups of a certain "class
of service" and let bacula handle all the management.

Don't bother with one pool per month.  You'll find that something will 
happen (holiday, network error, whatever) and bacula will get out of sync 
eventually anyway.
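The single-pool approach above can be sketched roughly as follows; the pool name matches the "Default" pool seen elsewhere in these threads, but the retention and use-duration values are illustrative assumptions, not tested recommendations:

```conf
# Hedged sketch of one pool with a capped Volume Use Duration, so no
# single tape accumulates a whole month of backups.  Values below are
# assumptions to be tuned per site.
Pool {
  Name = "Default"
  Pool Type = Backup
  Recycle = yes
  Volume Retention = 2 months
  Volume Use Duration = 5 days   # stop appending after 5 days of use
}
```

Once a volume passes its use duration it is marked Used, which makes the "pull Full/Used tapes, feed back scratch tapes" routine described above nearly mechanical.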

  > Normally I would set up with weekly tape group rotations and Daily pools
> so that for a work week, I would have maybe 3 tapes for each day and
> more tapes for the full backup on Fridays. I would also have things like
> 
>   Volume Retention = 17d
>   Volume Use Duration = 2d
>   Maximum Volume Jobs = 4
> 
> to allow for tapes not inserted on time (Volume Retention slightly less
> than 3 weeks, etc.)
> 
> Now I am considering a monthly group of tapes with maybe 3 groups of
> rotating tapes and that causes me all sorts of confusion with the ideal
> setup. I have an LTO-4 tape w/ changer so I am dealing with groups of
> tapes for each rotation.
> 
> My thought is that the first backup for each cycle would be a full
> backup. Then the rest of the month would be an incremental backup.
> 
> I think I could probably use maybe 61 or 62 days for 'Volume Retention',
> maybe 29 days for 'Volume Use Duration' and some very high number for
> Maximum Volume Jobs so that each workday's incremental could just keep
> getting written to the same tape until it fills.
> 
> I think my questions are:
> 
> - If I target the first day of the month to start a new rotation for a
> set of tapes, this date may come over a weekend or a holiday but I
> gather I can set the 'Maximum delays' to an appropriate value but is
> there a better way to handle this?
> 
> - How can I ensure that the previous months tapes status is changed to
> 'used' so that bacula doesn't ask for a tape from the previous month?
> 
> - Do I have to segregate the 'Pools' for each monthly set of tapes? Is
> there any reason to segregate the 'Pools' for those that are Full or
> Incremental backups when I want to keep the entire month together as a
> group for swapping in/out of the changer?



> - Does this seem correct...
> Schedule {
>   Name = "MonthlyCycle"
>   Run = Level=Differential Pool=Backup 2-31 at 9:30pm
>   Run = Level=Full Pool=Backup 1 at 9:30pm
> }
> 
> Craig
> 
> 
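
For what it's worth, the monthly scheme sketched above might translate into a Pool resource roughly like this (all names and values are illustrative, not tested against a real changer):

```
# Hypothetical Pool for a ~monthly tape group; adjust values to taste.
Pool {
  Name = "MonthlyCycle"          # made-up name
  Pool Type = Backup
  Volume Retention = 62 days     # a bit over two months, as discussed above
  Volume Use Duration = 29 days
  Maximum Volume Jobs = 999      # effectively unlimited; let each tape fill
  Recycle = yes
  AutoPrune = yes
}
```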



--
Join us December 9, 2009 for the Red Hat Virtual Experience,
a free event focused on virtualization and cloud computing. 
Attend in-depth sessions from your desk. Your couch. Anywhere.
http://p.sf.net/sfu/redhat-sfdev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Multi Volume Archives

2009-12-14 Thread Bob Hetzel
1) Bacula is indeed able to seek within a tape.  If you're having trouble 
with this functionality you need to look at the storage daemon options at

http://www.bacula.org/3.0.x-manuals/en/install/install/Storage_Daemon_Configuratio.html#SECTION0083

Fast Forward Space File
Forward Space Record
Maximum File Size

The first two should probably be left at their default values but the 
Maximum File Size value may need to be increased if you're using LTO-4 
hardware.
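
As a sketch, a larger file size is set in the storage daemon's Device resource. The device name, path, and the 5G figure below are assumptions to adapt, not recommendations:

```
# Hypothetical SD Device resource -- name, path, and sizes are assumptions.
Device {
  Name = "LTO4-Drive"            # assumed drive name
  Media Type = LTO-4
  Archive Device = /dev/nst0
  Fast Forward Space File = yes  # default; usually best left alone
  Forward Space Record = yes     # default; usually best left alone
  Maximum File Size = 5G         # default is 1G; fewer file marks on fast drives
}
```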

2) If you pick a file that's on one of the other 9 tapes in your example it 
should be fine.  The big problem will be if you need files on the bad 
tape; in that case you won't be able to get them.  So here's an example... 
Day one, a full backup winds up using 9 full tapes and part of a 10th tape. 
Day two, that 10th tape is appended to, but then breaks.  In this case, 
you'll probably be able to restore anything except the files on that last 
tape.  I believe Bacula restores files in the order they were written, 
so any restore that involves tapes 9 and 10 will get all the tape 9 
data just fine, but then you'll need to cancel it because you won't be 
able to supply tape 10.  However, the tape 9 files will already have been 
written, and canceling the restore does not delete them.

Bob


> Date: Sun, 13 Dec 2009 11:30:29 +1300
> From: Richard Scobie 

> If a tape is damaged in a multi volume archive, is it possible to 
> recover all the data from the rest of the set?
> 
> I understand Bacula is not currently able to seek within a tape in order 
> to recover a file - the tape must be read sequentially.
> 
> In a multi volume archive, does it need to read through all 10 LTO4 
> tapes in order to recover files from the 10th tape?
> 
> Regards,
> 
> Richard




Re: [Bacula-users] Multi Volume Archives

2009-12-14 Thread Bob Hetzel

> On Mon, 2009-12-14 at 15:39 -0500, Bob Hetzel wrote:
>> 1) Bacula is indeed able to seek within a tape.  If you're having trouble 
>> with this functionality you need to look at the storage daemon options at
>> 
>> http://www.bacula.org/3.0.x-manuals/en/install/install/Storage_Daemon_Configuratio.html#SECTION0083
>> Fast Forward Space File
>> Forward Space Record
>> Maximum File Size
>> 
>> The first two should probably be left at their default values but the 
>> Maximum File Size value may need to be increased if you're using LTO-4 
>> hardware.
>> 
> 
> that's interesting... I did not change them on the recent LTO-4 that I
> just set up but btape testing did not reveal any problems. Is btape
> testing sufficient to point out any necessary changes here?
> 
> Craig
> 

It's not that there's a problem with using the default values; it's just 
that some of them aren't optimized for cutting-edge hardware.  Read the 
descriptions of the values and it's spelled out pretty well.  In addition, 
if you google "Maximum File Size LTO-4 bacula" you'll turn up this 
explanation too:

http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg36340.html

Also, you probably should experiment with minimum and maximum block size. 
I would definitely recommend doing some repeatable tests with these 
tuning parameters, as the optimum values may depend heavily on your 
director's CPU and other hardware.
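
As a sketch of that kind of tuning (the 256 KB figure is an assumption to benchmark, not a recommendation; note that tapes already written with one block size generally can't be read back with different settings):

```
# Hypothetical block-size tuning in the SD Device resource.
Device {
  Name = "LTO4-Drive"            # assumed name
  Archive Device = /dev/nst0
  Minimum Block Size = 262144    # min == max gives fixed 256 KB blocks
  Maximum Block Size = 262144    # re-run "btape test" after any change
}
```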

Bob





Re: [Bacula-users] bacula 3.0.3 maintain ctime on restored files?

2010-01-15 Thread Bob Hetzel

> In the case here it's rather a large issue (granted, it's due to other 
> problems like hardware or software failures that require restores of data, 
> and is NOT something that I want to continue, but we are living in 
> an imperfect world). To give you an idea, the dataset size is about 
> 30-40TiB, with restores anywhere from 2TiB to a full restore. Backing 
> that back up again not only takes a LOT of tapes, which is costly, it 
> also takes a LOT of time (days/weeks) during which nothing else can be run.
> 
> What I have been doing, which is just painful, is a full restore, then 
> a full backup right after before continuing, so a restore actually takes 
> about 2x the time of a full (for a full set this is about 
> 9-10 days on LTO4).   This has happened about 3 times so far in the past 
> 2 months.

Have you considered breaking up such a big dataset into pieces that each 
take up perhaps 5 tapes or less?  I know it can be a pain to manage, 
but it seems like that would go a long way toward solving this.  The other 
benefit is that if a tape breaks during the restore (or is just discovered 
to be unreadable) you can still get 100% of the other datasets restored. 
IMHO, any time you realize you can't back up your fileset in one day or 
even one weekend, that's a good opportunity to take a step back and 
re-think things.  You might even need to schedule the fulls so they don't 
all occur at the same time.

I realize that your periodic audits of your filesets, to make sure 
you're not missing something, become a lot more complicated, but a painful 
audit once per quarter seems a lot better than what you've described 
above.






Re: [Bacula-users] [Bacula-devel] bug in autolabel code in 5.0.0?

2010-02-02 Thread Bob Hetzel


John Drescher wrote:
> On Tue, Feb 2, 2010 at 11:16 AM, Bob Hetzel  wrote:
>>
>> John Drescher wrote:
>>> On Tue, Feb 2, 2010 at 10:58 AM, Bob Hetzel  wrote:
>>>> Greetings,
>>>>
>>>> I've just upgraded to 5.0.0 and have run into this odd issue.  Bacula
>>>> seems
>>>> to be stuck waiting for a labeled volume, even after it attempts to label
>>>> a
>>>> new volume.  The bconsole messages seem to reflect this loop...
>>>>
>>>> *mess
>>>> 02-Feb 10:32 claustrum-sd JobId 3583: Error: block.c:1012 Read error on
>>>> fd=9 at file:blk 0:0 on device "IBMLTO4-0" (/dev/nst0). ERR=Device or
>>>> resource busy.
>>>> 02-Feb 10:32 claustrum-sd JobId 3583: Labeled new Volume "LTO429L4" on
>>>> device "IBMLTO4-0" (/dev/nst0).
>>>> 02-Feb 10:33 claustrum-sd JobId 3583: Error: block.c:1012 Read error on
>>>> fd=9 at file:blk 0:0 on device "IBMLTO4-0" (/dev/nst0). ERR=Device or
>>>> resource busy.
>>>> 02-Feb 10:33 claustrum-sd JobId 3583: Labeled new Volume "LTO429L4" on
>>>> device "IBMLTO4-0" (/dev/nst0).
>>>> 02-Feb 10:33 claustrum-sd JobId 3583: Error: block.c:1012 Read error on
>>>> fd=9 at file:blk 0:0 on device "IBMLTO4-0" (/dev/nst0). ERR=Device or
>>>> resource busy.
>>>> 02-Feb 10:33 claustrum-sd JobId 3583: Labeled new Volume "LTO429L4" on
>>>> device "IBMLTO4-0" (/dev/nst0).
>>>>
>>>> The tape in question, LTO429L4 is new and never used before.  Here's the
>>>> list volumes output for it
>>>>
>>>> |  48 | LTO429L4   | Append|   1 | 0 |
>>>>  0
>>>> |   10,368,000 |   1 |   39 | 1 | LTO   | -00-00
>>>> 00:00:00 |
>>>> +-
>>>>
>>>> *status storage
>>>> Automatically selected Storage: Dell-ML6000
>>>> Connecting to Storage daemon Dell-ML6000 at claustrum:9103
>>>>
>>>> claustrum-sd Version: 5.0.0 (26 January 2010) x86_64-unknown-linux-gnu
>>>> ubuntu 9.10
>>>> Daemon started 02-Feb-10 09:36, 0 Jobs run since started.
>>>>  Heap: heap=2,592,768 smbytes=2,274,219 max_bytes=2,306,226 bufs=168
>>>> max_bufs=174
>>>> Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8
>>>>
>>>> Running Jobs:
>>>> Writing: Incremental Backup job pituitary JobId=3583 Volume="LTO429L4"
>>>>pool="Westwing" device="IBMLTO4-0" (/dev/nst0)
>>>>spooling=0 despooling=0 despool_wait=0
>>>>Files=0 Bytes=0 Bytes/sec=0
>>>>FDReadSeqNo=6 in_msg=6 out_msg=4 fd=5
>>>> Writing: Incremental Backup job cortex JobId=3584 Volume="LTO428L4"
>>>>pool="Westwing" device="IBMLTO4-1" (/dev/nst1)
>>>>spooling=0 despooling=0 despool_wait=0
>>>>Files=0 Bytes=0 Bytes/sec=0
>>>>FDReadSeqNo=6 in_msg=6 out_msg=4 fd=7
>>>> 
>>>>
>>>> Jobs waiting to reserve a drive:
>>>> 
>>>>
>>>> Terminated Jobs:
>>>> 
>>>> 
>>>>
>>>> Device status:
>>>> Autochanger "Dell-ML6000" with devices:
>>>>   "IBMLTO4-0" (/dev/nst0)
>>>>   "IBMLTO4-1" (/dev/nst1)
>>>> Device "IBMLTO4-0" (/dev/nst0) open but no Bacula volume is currently
>>>> mounted.
>>>>Device is being initialized.
>>>>Slot 39 is loaded in drive 0.
>>>>Total Bytes Read=0 Blocks Read=0 Bytes/block=0
>>>>Positioned at File=0 Block=0
>>>> Device "IBMLTO4-1" (/dev/nst1) open but no Bacula volume is currently
>>>> mounted.
>>>>Device is being initialized.
>>>>Slot 38 is loaded in drive 1.
>>>>Total Bytes Read=0 Blocks Read=0 Bytes/block=0
>>>>Positioned at File=0 Block=0
>>>> 
>>>>
>>>> Used Volume status:

I've turned the debug level up to 100, told bacula to mount slot=39 into 
drive 0 and got the following in the SD's trace file:

claustrum-sd: dircmd.c:213-0 Message channel init completed.
claustrum-sd: bnet.c:669-0 who=client host=127.0.1.1 port=36643
claustrum-sd: cram-md5.c:73-0 send: auth cram-md5 
<819442589.1265155...@claustrum-sd> ssl=0
claustrum-sd: c

Re: [Bacula-users] [Bacula-devel] bug in autolabel code in 5.0.0?

2010-02-03 Thread Bob Hetzel

Has anybody gotten 5.0.0 to label tapes properly?  I upgraded to 5.0.0 a 
couple days ago and backups worked that first night until they ran out of 
appendable (i.e. already partially written) media.  Since that point, I've 
been completely stuck.

I just ran the btape "auto" test on a lark and that passed, although 
strangely bacula couldn't read the volume label I created inside there.  I 
then deleted that volume and labeled it from within the director and got 
the following...

*delete volume=LTO429L4

This command will delete volume LTO429L4
and all Jobs saved on that volume from the Catalog
Are you sure you want to delete Volume "LTO429L4"? (yes/no): yes
*mess
You have no messages.
*label barcodes slots=1
Automatically selected Storage: Dell-ML6000
Enter autochanger drive[0]:
Connecting to Storage daemon Dell-ML6000 at claustrum:9103 ...
3306 Issuing autochanger "slots" command.
Device "Dell-ML6000" has 46 slots.
Connecting to Storage daemon Dell-ML6000 at claustrum:9103 ...
3306 Issuing autochanger "list" command.
The following Volumes will be labeled:
Slot  Volume
==
1  LTO429L4
Do you want to label these Volumes? (yes|no): yes
Defined Pools:
  1: Scratch
  2: Westwing
Select the Pool (1-2): 1
Connecting to Storage daemon Dell-ML6000 at claustrum:9103 ...
Sending label command for Volume "LTO429L4" Slot 1 ...
block.c:1012 Read error on fd=3 at file:blk 0:0 on device "IBMLTO4-0" 
(/dev/nst0). ERR=Device or resource busy.
3000 OK label. VolBytes=131072 DVD=0 Volume="LTO429L4" Device="IBMLTO4-0" 
(/dev/nst0)
Catalog record for Volume "LTO429L4", Slot 1  successfully created.
*release drive=0
Automatically selected Storage: Dell-ML6000
3307 Issuing autochanger "unload slot 1, drive 0" command.
3022 Device "IBMLTO4-0" (/dev/nst0) released.
*mount
Automatically selected Storage: Dell-ML6000
Enter autochanger drive[0]:
Enter autochanger slot:
3301 Issuing autochanger "loaded? drive 0" command.
3302 Autochanger "loaded? drive 0", result: nothing loaded.
3901 Unable to open device "IBMLTO4-0" (/dev/nst0): ERR=dev.c:491 Unable to 
open device "IBMLTO4-0" (/dev/nst0): ERR=No medium found

*mount slot=1
Automatically selected Storage: Dell-ML6000
Enter autochanger drive[0]:
3301 Issuing autochanger "loaded? drive 0" command.
3302 Autochanger "loaded? drive 0", result: nothing loaded.
3304 Issuing autochanger "load slot 1, drive 0" command.
3305 Autochanger "load slot 1, drive 0", status is OK.
block.c:1012 Read error on fd=5 at file:blk 0:0 on device "IBMLTO4-0" 
(/dev/nst0). ERR=Device or resource busy.
3902 Cannot mount Volume on Storage Device "IBMLTO4-0" (/dev/nst0) because:
Requested Volume "" on "IBMLTO4-0" (/dev/nst0) is not a Bacula labeled 
Volume, because: ERR=block.c:1012 Read error on fd=5 at file:blk 0:0 on 
device "IBMLTO4-0" (/dev/nst0). ERR=Device or resource busy.
3905 Device "IBMLTO4-0" (/dev/nst0) open but no Bacula volume is mounted.
If this is not a blank tape, try unmounting and remounting the Volume.
*






Re: [Bacula-users] [Bacula-devel] bug in autolabel code in 5.0.0?

2010-02-04 Thread Bob Hetzel


Richard Scobie wrote:
> Bob Hetzel wrote:
>>
>> Has anybody gotten 5.0.0 to label tapes properly?  I upgraded to 5.0.0 a
>> couple days ago and backups worked that first night until they ran out of
>> appendable (i.e. already partially written) media.  Since that point, 
>> I've
>> been completely stuck.
> 
> Is bacula the only thing you changed?
> 
> Are restores from tape working OK or do you still see the ERR=Device or 
> resource busy?
> 
> Regards,
> 
> Richard

Your question made me take a step back... I've been in kind of a conundrum 
lately.  Over the last few months sometimes the problems I've had running 
bacula have cropped up after installing OS updates and the thought occurred 
to me that some of my issues could be due to the dynamically linked 
libraries being upgraded w/o bacula being recompiled.  So when I thought I 
was ready to build and install bacula 5.0.0 I figured that would be a good 
time to install the OS updates.  The first one that popped up was the 
Ubuntu 9.04 to 9.10 upgrade.  So I went through that, then built 
and installed Bacula 5.0.0.  Then the problems started.

So now I've tried re-compiling Bacula 3.0.3 and installing that and it is 
having the same problems so the issue must be related to the Ubuntu upgrade.

My next thought was to try using "tar" to read and write a tape.  That 
works fine:  I've been able to back up an entire directory and restore 
files from it.  I've also backed up a single file and restored it, 
comparing it to the original file, and no differences were found.

Upgrading to Ubuntu 9.10 brought along a ton of upgraded programs and 
libraries so I've no idea where to begin.

Has anybody got any other ideas?

 Bob



Re: [Bacula-users] [Bacula-devel] bug in autolabel code in 5.0.0?

2010-02-04 Thread Bob Hetzel

Richard Scobie wrote:
> Bob Hetzel wrote:
> 
>> My next thought was to try using "tar" to read and write a tape.  That
>> works fine:  I've been able to back up an entire directory and restore
>> files from it.  I've also backed up a single file and restored it,
>> comparing it to the original file, and no differences were found.
> 
> This will not be exercising the autochanger so will probably be fine. 
> What I was wondering is if you are able to restore a bacula backup 
> through the autochanger.
> 
>> Upgrading to Ubuntu 9.10 brought along a ton of upgraded programs and
>> libraries so I've no idea where to begin.
>>
>> Has anybody got any other ideas?
> 
> Have a look at this thread:
> 
> http://marc.info/?l=bacula-users&m=126281654919968&w=2
> 
> I have just set up a system using an HP library and was seeing this 
> error and this thread is what I found.
> 
> Possibly the kernel has been updated on your system and broken things. I 
> had to compile the latest stable to get it working.
> 
> Regards,
> 
> Richard

Richard,

The auto test passed, but I now cannot mount any tape from within bacula, 
either completely new (i.e. unlabeled) or filled completely with bacula 
backups.  It always thinks the tape is not labeled, giving errors like these:

block.c:1012 Read error on fd=5 at file:blk 0:0 on device "IBMLTO4-0" 
(/dev/nst0). ERR=Device or resource busy.
3902 Cannot mount Volume on Storage Device "IBMLTO4-0" (/dev/nst0) because:
Requested Volume "" on "IBMLTO4-0" (/dev/nst0) is not a Bacula labeled 
Volume, because: ERR=block.c:1012 Read error on fd=5 at file:blk 0:0 on 
device "IBMLTO4-0" (/dev/nst0). ERR=Device or resource busy.
3905 Device "IBMLTO4-0" (/dev/nst0) open but no Bacula volume is mounted.
If this is not a blank tape, try unmounting and remounting the Volume.



Here's the kernel version that Ubuntu 9.10 installed:

# uname -a
Linux claustrum 2.6.31-17-server #54-Ubuntu SMP Thu Dec 10 18:06:56 UTC 
2009 x86_64 GNU/Linux

Perhaps some part of the tape handling is broken in that kernel?  Anybody 
else running bacula on Ubuntu 9.10?



Re: [Bacula-users] Full Backup After Previously Successful Full with Ignore FileSet Changes Enabled

2010-02-11 Thread Bob Hetzel

> "Graham Sparks"  kirjoitti viestiss? 
> news:snt109-w401dea1ed7c90a15a91f1b81...@phx.gbl...
>   > Hello,
>   > 
>   > I'm a fairly new Bacula user (all daemons running on same 
> machine-Ubuntu804 and a FD on Windows XP Home client). I've set up a Full 
> backup of a drive on the client that ran on Saturday and have an incremental 
> backup of the same fileset done on Monday. Having noticed that the file size 
> was large for the two day's worth of data, I excluded the Windows swap file 
> from the fileset.
>   > 
>   > Today's incremental however wouldn't run. Bacula insisted on running a 
> new full backup.
>   > 
> 
> What did you do here? Did you let it run, or did you cancel it? I assume you 
> cancelled it... see below.
> 
>   > I'm aware that this is because I have changed the fileset, but read about 
> an option (Ignore FileSet Changes = yes) that is supposed to ignore that fact 
> and continue to perform incrementals. After adding this and reloading the 
> configuration, Bacula still won't perform an incremental backup.
>   > 
>   > Is there a reason why it still refuses to run an incremental backup (I 
> deleted the JobID for the failed promoted Full backup with the delete JobIB 
> command)?
>   > 
>   > Try restarting the director and see if that helps.
>   > 
>   > Reload should be enough, but recently I noticed that 3.0.3 didn't 
> recognize fileset option changes reliably after reload.
>   > 
>   > --
>   > TiN
> 
>   I performed a restart (and a separate stop/start) but it's the same.
> 
>   I've tested it with a smaller job and it seems to be the case that the 
> IgnoreFileSetChanges option only takes effect if present in the FileSet 
> definition when the original Full backup runs (adding it in afterwards 
> doesn't make a difference).
> 
>   Many thanks for the reply though!
> 
> 
> One thing that still came into my mind: did you cancel the forced-full backup 
> that had started after changing the fileset, when you didn't want it to run 
> as full backup again? If so, the reason probably is that the previous full 
> backup didn't complete successfully (because it was canceled). Then the 
> behaviour isn't only because of the fileset change anymore, but because of 
> the unsuccessful previous full backup, which requires the next one to be 
> forced to be a full one, whether there were fileset changes or not. For more 
> information about this, see the explanation under the "Level" directive of 
> the "Job Resource" in the documentation.
> 
> Btw, when asking this kind of questions in the future, pls. tell which 
> version of Bacula you have. I guess you have got it from some Ubuntu repo, 
> and maybe it wasn't the latest released Bacula. Then the Real Gurus (not me) 
> here might immediately be able to say "oh yes, that was a bug that was fixed 
> in x.y"
> 

Here's my understanding of why it didn't work as you expected... when you 
add the Ignore FileSet Changes parameter, the end result is that you change 
the fileset.  The way Bacula detects fileset changes is by simply hashing 
the fileset definition.  Perhaps it would be nice if Bacula excluded the 
Ignore FileSet Changes directive from that hash, but I've learned to work 
around this by simply adding that value to every fileset, or adding it when 
I'm ready to do a full backup anyway.
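
In other words, the directive has to be present before the Full that later incrementals hang off of. A sketch (names and paths below are made up):

```
# Hypothetical FileSet with the directive in place from the first Full.
FileSet {
  Name = "WinClient-Data"        # made-up name
  Ignore FileSet Changes = yes   # later edits won't force an upgrade to Full
  Include {
    Options { signature = MD5 }
    File = "C:/Data"
  }
  Exclude {
    File = "C:/pagefile.sys"
  }
}
```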

Bob




Re: [Bacula-users] Re store OK but no files are restored

2010-02-12 Thread Bob Hetzel

> I am running Bacula 3.0.2 on Ubuntu server Apache2 MySQL
> 
> The backups seem to be working OK so I thought I had better test a restore!
> This also seemed to run OK, but there were no files restored. I tried
> different "where" locations but still no joy.
> The log snip below shows all is well too. Any ideas please?
> 
> 12-Feb 08:37 mistral-dir JobId 836: Start Restore Job
> RestoreFiles.2010-02-12_08.37.15_35
> 12-Feb 08:37 mistral-dir JobId 836: Using Device "usb-drive-a"
> 12-Feb 08:37 mistral-sd JobId 836: Ready to read from volume
> "hdda-full-0038" on device "usb-drive-a" (/mnt/bupa/bup).
> 12-Feb 08:37 mistral-sd JobId 836: Forward spacing Volume "hdda-full-0038"
> to file:block 0:36195.
> 12-Feb 08:37 mistral-dir JobId 836: Bacula mistral-dir 3.0.2 (18Jul09):
> 12-Feb-2010 08:37:55
>   Build OS:   i686-pc-linux-gnu ubuntu 9.04
>   JobId:  836
>   Job:RestoreFiles.2010-02-12_08.37.15_35
>   Restore Client: glyn-laptop-fd
>   Start time: 12-Feb-2010 08:37:17
>   End time:   12-Feb-2010 08:37:55
>   Files Expected: 2
>   Files Restored: 2
>   Bytes Restored: 325,729
>   Rate:   8.6 KB/s
>   FD Errors:  0
>   FD termination status:  OK
>   SD termination status:  OK
>   Termination:Restore OK
> 
> Cheers
> Glyn
> -- 

According to http://bugs.bacula.org/view.php?id=1403 this problem started 
around version 2.4.4 and was fixed in version 3.0.3

Bacula's Windows FD was marking folders as hidden on restores.  The files 
are there; you just can't see the folder containing them.  If you open up a 
command prompt you'll be able to cd into them, but only if you know what 
they're called.
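
From that command prompt you can also clear the hidden attribute so the restored folders show up again (the restore path below is a made-up example):

```
rem Clear hidden (and system) attributes recursively under the restore root
attrib -h -s "C:\restore\*" /S /D
```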





Re: [Bacula-users] VSS Windows Backups

2010-02-16 Thread Bob Hetzel
> 
> Hello,
> 
> 15.02.2010 20:15, Joseph L. Casale wrote:
>>> >> For such a full backup, you need some secondary windows installation 
>>> >> for the restore itself - for Server 2003 and XP, BartPE is a convenient 
>>> >> way to get such a thing. For newer Windows versions, you probably best 
>>> >> use Microsoft's PE system (I haven't actually built such a beast yet).
>> > 
>> > I cant say that wont work, but I would strongly recommend not to approach
>> > this that way, I would put money on an installation plagued with issues.
> 
> I tend to disagree - but I admit you seem to know what you're talking 
> about  :-) 
> 
> Anyway, my scenario in more detail - I'd be happy to see any hidden 
> pitfalls!
> Use a secondary windows /typically PE-based) to boot. Create the 
> partitions you originally had on the system in question (I'm aware of 
> Server 2k8's service partition...)
> Assign drive letters as before, and format as before.
> Start FD, and restore a complete backup to it's original location.
> Make sure you've got the partitions activated, boot loader in place, etc.
> Reboot the restored system.
> 
> Both theory and my experience tell me that you'll end up with a 
> complete windows, happily running where it was backed up.
> For sanity's sake, you next apply the system state backup you - 
> hopefully - captured during your regular backups, following 
> Microsoft's procedures.
> 
> After three reboots, you should have your system in a consistent, 
> mostly up-to-date state.
> 
> Be aware that some applications - typically everything based on 
> databases - may require additional steps, for example to replay 
> transaction logs written and backed up after the last regular back up.
> 
>> > Reinstall windows, reinstall apps with appropriate methods (like exchange
>> > cant just be re-installed new, setup needs switch's),
> 
> That alone can be a problem - only identifying everything you had 
> before requires a full-blown configuration and deployment management 
> system, in my experience  :-( 
> 
>> > then add in only
>> > applicable data.
> 
> This is even worse (though getting better at least with Microsoft's 
> applications) as it's really hard to determine what is "applicable data".
> 
>> > You *will* break all sorts of things pulling the rug
>> > out from under complicated applications like AD/Exchange etc...
> 
> True, but with the combination of VSS and system state backup / 
> restore plus the things you (should) know about managing AD you get to 
> an up-to-date, restored, system quite quickly - much faster than 
> reinstalling tons of applications, updates, patches, service packs, 
> bug fixes and the like one by one.
> 
>>> >> I would expect problems when, for whatever reasons, you need to 
>>> >> restore for example IIS (meta)data only, as I'm pretty sure that doing 
>>> >> this in a running windows will not result in a merge of the data in 
>>> >> the live system and the restored data, but only in an error (more 
>>> >> likely) or loss of the current data by overwriting with backed up 
>>> >> files (less likely).
>> > 
>> > Yup, Metabase is involved, AFAIK it best done from a system state or using
>> > the provided scripts which you can script with a runbeforejob and let 
>> > Bacula
>> > snag it after. See msdn and technet, you'll see all that's involved in that
>> > ugly one.
>> > 
>> > http://technet.microsoft.com/en-us/library/cc783795%28WS.10%29.aspx
> 
> Actually, the stuff Microsoft has in its libraries is quite complete 
> and provides a good way to spend lots of time for windows admins 
> thinking about backup and recovery  ;-) 
> 
> Not really being a windows admin myself and needing that stuff makes 
> me spend even more time with it...
> 
> Thanks for your insight!
> 
> Arno
> 
> -- Arno Lehmann IT-Service Lehmann Sandstr. 6, 49080 Osnabrück 
> www.its-lehmann.de 

Last year I tried some experimentation with bare-metal restore of a Windows 
boot volume using bacula and BartPE, and I never did get it to work 
properly.  I believe there are at least two pitfalls, probably more:

1) How to make it bootable?  You can restore all the important files but 
getting it to boot is another matter.

2) I couldn't get far enough for this to be an issue, but I believe bacula's 
handling of "Junction Points" (it gripes about them but doesn't back them 
up) will break many things too.  Can anybody shed light on whether these 
are auto-created by the OS if they're missing?

Has anybody actually documented fully the steps to get a Windows Server 
2003 bare-metal bart-pe restore working like this?

Regarding the IIS metabase, if you go into the IIS Manager app, then right 
click on Properties for the local computer, then tick the setting to 
"Enable Direct Metabase Edit" you should be able to just back up the 
metabase folder as regular files.  If you stop IIS then restore the files 
MBSchema.xml and MetaBase.xml as regular files you should be back to where 
you were with the IIS config at least.  All the web content, CGI 
applications, and dll's are another matter, of course.
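
As a sketch, the two files could be picked up with an ordinary fileset entry (the paths assume a default %windir%; verify them on your own system):

```
# Hypothetical FileSet for the IIS 6 metabase files, once
# "Enable Direct Metabase Edit" is turned on.
FileSet {
  Name = "IIS-Metabase"          # made-up name
  Include {
    Options { signature = MD5 }
    File = "C:/WINDOWS/system32/inetsrv/MetaBase.xml"
    File = "C:/WINDOWS/system32/inetsrv/MBSchema.xml"
  }
}
```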

Re: [Bacula-users] VSS Windows Backups

2010-02-17 Thread Bob Hetzel

> From: Arno Lehmann 
> Subject: Re: 
> To: bacula-users@lists.sourceforge.net
> Message-ID: <4b7bc766.4040...@its-lehmann.de>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> 
> Hi,
> 
> 16.02.2010 16:48, Bob Hetzel wrote:
> 
>> > Last year I tried some experimentation with bare-metal restore using 
>> > bacula 
>> > and bart-pe of a Windows boot volume and I never did get it to work 
>> > properly.  I believe there are least two pitfalls, probably more:
>> > 
>> > 1) How to make it bootable?  You can restore all the important files but 
>> > getting it to boot is another matter.
> 
> This is about Windows 2k3 -
> using diskpart I never had a problem getting the system bootable.
> The simple case - one partition only - is rather straightforward:
> run diskpart on the recovery system, select the (only) disk to work 
> with, "clean", "create partition primary", "active", "assign letter=c".
> Quit diskpart. Format the disk with NTFS.
> Restore
> Reboot
> handle all the other things to be considered - typically, boot into 
> Directory Service Restore Mode or what that's called, apply the system 
> state backup you hopefully have.
> You might need more reboots and more steps in between, depending on 
> the applications you need to handle. I don't know about IIS, but SQL 
> server, for example, typically also needs manual restores of data 
> backups and log replays.

Are there any files which specifically should NOT be restored... perhaps I 
overwrote a boot file that was created by diskpart?

> 
>> > 2) I couldn't get far enough for this to be an issue but I believe 
>> > bacula's 
>> > handling of "Junction Points"--it gripes but doesn't back them up, will 
>> > break many things too.  Can anybody shed light on whether these will be 
>> > auto-created by the OS if they're missing?
> 
> No idea... yet.
> 
>> > Has anybody actually documented fully the steps to get a Windows Server 
>> > 2003 bare-metal bart-pe restore working like this?
> 
> I'm working on it right now...

I'm sure I won't be the only one very indebted to you for that.

> 
>> > Regarding the IIS metabase, if you go into the IIS Manager app, then right 
>> > click on Properties for the local computer, then tick the setting to 
>> > "Enable Direct Metabase Edit" you should be able to just back up the 
>> > metabase folder as regular files.  If you stop IIS then restore the files 
>> > MBSchema.xml and MetaBase.xml as regular files you should be back to where 
>> > you were with the IIS config at least.  All the web content, CGI 
>> > applications, and DLLs are another matter, of course.
> 
> The latter would - hopefully - be handled by the normal backup and 
> system state backup. The Metabase... well, I don't even know what 
> that's good for, but seeing that you can force that to exist as 
> regular files is already good!
> 

The Metabase is Windows-speak for the IIS config.  Sadly, I believe it's 
not included by default as part of the system state.  Ditto for the keys 
needed for it.

http://support.microsoft.com/kb/269586
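With Direct Metabase Edit enabled, a Bacula FileSet picking up the two
metabase files might look like this (a sketch; the inetsrv path is the
usual Windows Server 2003 default, but verify it on your own server):

```conf
FileSet {
  Name = "IIS-Metabase"
  Include {
    Options {
      signature = MD5
    }
    # Bacula uses forward slashes even for Windows paths.
    File = "C:/WINDOWS/system32/inetsrv/MetaBase.xml"
    File = "C:/WINDOWS/system32/inetsrv/MBSchema.xml"
  }
}
```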

> Cheers,
> 
> Arno
> 
> -- Arno Lehmann IT-Service Lehmann Sandstr. 6, 49080 Osnabrück 
> www.its-lehmann.de 



--
SOLARIS 10 is the OS for Data Centers - provides features such as DTrace,
Predictive Self Healing and Award Winning ZFS. Get Solaris 10 NOW
http://p.sf.net/sfu/solaris-dev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Windows XP File Daemon - Service Stops With Unknown Error

2010-03-09 Thread Bob Hetzel
Four things that have given me grief due to the Windows FD stopping or being 
unresponsive (but still mysteriously running)...
a) buggy network driver--in one case I was able to update the driver to fix it.
b) bad network switch or network port
c) infection with a virus/malware that was using up the "max tcp 
connections".  Check the event log for that error.
d) a regex bug in the fileset, but that was fixed around bacula 
version 2.5 or so.  Along the way I learned that regex was slower to 
process than WildDir and WildFile, so I stopped using it for Windows 
filesets anyway.
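For reference, a Windows FileSet using WildDir/WildFile in place of regex
might look like this (a sketch; the excluded paths are illustrative, not
from any particular setup):

```conf
FileSet {
  Name = "WindowsWorkstation"
  Include {
    Options {
      signature = MD5
      IgnoreCase = yes
    }
    Options {
      # Matching entries in an Exclude=yes Options block are skipped.
      WildDir = "C:/*/Temporary Internet Files"
      WildFile = "*.tmp"
      IgnoreCase = yes
      Exclude = yes
    }
    File = "C:/"
  }
}
```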

In all cases these issues caused the Windows FD to stop during a backup. 
I hope this helps, but if not you may have to turn on debugging in the FD 
and cross your fingers that the failure happens again and logs what went wrong.

Bob

> From: Drew Bentley 
>> > I can't seem to figure out what's going on because I've never
>> > encountered this issue when backing up a Windows XP client with the
>> > latest version of Bacula.
>> >
>> > I have no problems backing up the client but the file daemon
>> > unexpectedly dies randomly, usually before a backup is even scheduled.
>> >
>> > The error I'm seeing in the event log goes something like this:
>> >
>> > Bacula Error: /home/kern/bacula/k/bacula/src/lib/bsock.c:518
>> > Read error from Director daemon::9101:
>> > ERR=Unknown error
>> >
>> > Above that it complains about local computer and registry info or
>> > message DLL files to display messages from a remote computer, use
>> > /AUXSOURCE=flag to retrieve this description, see Help and Support for
>> > more details.
>> >
>> > Anyone encounter this? Is this machine missing something? I'm not a
>> > Windows guy, more of a Unix guy. Everything seems to work okay when I
>> > restart the FD service and backups run without issues. I have backups
>> > kick off at like 7pm and it always seems to die before 3 or 4pm. No
>> > one really remotes into this machine, it's a Quickbooks machine and
>> > only shares out the Intuit QB stuff to the HR persons machine.
>> >
>> > Any other info I can provide if necessary. I couldn't seem to locate
>> > this type of issue in the mailing list archives.
>> >
>> > -Drew
>> >
> No ideas from anyone?  Now it seems to die without any reports in the
> event viewer or logs.
>
> -Drew
>



Re: [Bacula-users] Long-term backups - was: Hard disk as backup media

2010-03-26 Thread Bob Hetzel


>> >
>> > What is the best strategy and storage media for long-term backups, say
>> > to 10 or 20 years (if any)? I ask because I do have an old DLT tape
>> > drive and some tapes, unusable, because its SCSI controller is no longer
>> > among us. It is not 10 years old and is already a problem.
>> >
>> > --
>> > Marcio Merlone
>> >
>> >
> Thinking long term, I believe the future is the cloud. Pay for 'hired'
> space and let others take care of migrating data to new technologies
> as they become available
>
> D.

And what will you do when that company goes bankrupt and shuts down?  Your 
best case scenario is that you will get 90 days' notice.  Have any of these 
businesses gone away with no notice at all yet?  I'd be wary of any company 
that says it can store unlimited data for less than the price of a 2 TB 
hard drive.  There are no economies of scale in high-end storage itself, 
only in its administration--when the amount of data stored gets very 
large, the cost of the staff maintaining it becomes far less significant 
than the cost of the storage array plus the rent/air 
conditioning/electricity/redundant location/etc. of the facility housing it.

If you really need to keep the data, you have to work through all the 
issues: exactly how long you need it for, how rapidly you need to be able 
to get it, and how often.  If you don't care enough about long-term 
accessibility of the data to pay for 100% likelihood you'll be able to get 
it whenever you need it, you're probably just paying for something you'll 
never use.



Re: [Bacula-users] Recycle Pool and autoloaders

2010-03-31 Thread Bob Hetzel
You may want to upgrade bacula to 5.0.1.  One change I've noticed is that 
prior versions didn't purge/recycle tapes not found in the autochanger, 
and as a consequence didn't add them back into the Scratch pool.


> Hi,
> Yes, that's exactly the feature I need.  I'll put it in as a feature
> request, and I think I'll do some manual purges maybe monthly for anything
> over its retention period.
>
> Thank you.
> Dermot.
>
>
> On 30 March 2010 16:03, John Drescher  wrote:
>>> > > I must be missing something here. Every day we take out the tape that 
>>> > > was
>>> > > used the night before, and put it in a safe (in case the comms rooms is
>> > ever
>>> > > inaccessible).  We then look for a tape from the scratch pool to put 
>>> > > into
>>> > > the empty slot.  This is fine, except that the scratch pool is not being
>>> > > replenished.
>> >
>> > I know. My requirements are different. No safe at the moment..
>> >
>>> > > You say you pick the replacement tapes from the oldest or new volumes.
>> >  Do
>>> > > you have to manually purge the oldest tapes before you can put them in
>> > the
>>> > > drive?  Do you use retention periods at all, or just cycle them 
>>> > > yourself?
>>> > > I'm curious if your method could solve my problem.
>> >
>> > Yes I use retention periods but only 1 pool recycles. The rest are
>> > archive that are run from manual jobs. Anyways in this situation I try
>> > to make sure that the pool that recycles always has old tapes in the
>> > changer so that automatic recycling will work. I know your situation
>> > is more complicated.
>> >
>> > You may want to submit a feature request. This should not be difficult
>> > to implement. Basically, implement an option to disable the 'only purge
>> > volumes when needed' concept.
>> >
>> > John
>> >




Re: [Bacula-users] Bacula volume selection -- InChanger

2010-04-07 Thread Bob Hetzel
> From: Tom Eastman 
>
> Hey guys,
>
> I'm still having trouble controlling which volume bacula chooses to
> write to when it has plenty of volumes set to 'append'.  I *need* bacula
> to only select a volume to append to that has InChanger=1.  I kinda
> thought that it would do this automatically, at least from what I've
> read, but it doesn't seem to be doing so.
>
> I've just upgraded to 5.0.1.
>
> How do I force bacula to only select a tape from the list of tapes that
> are InChanger=1?
>
> Thanks!
>
>   Tom
>

Tom,

In your bacula-dir.conf file, go to the Storage section.  Do you have a 
line that says "Autochanger = yes"?  If not, add it.

Bob
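The relevant bacula-dir.conf fragment might look like this (a sketch; the
Name, Address, Password, Device, and Media Type values are placeholders
for your own configuration):

```conf
Storage {
  Name = LTO-Changer
  Address = backupserver.example.com    # assumption: your SD host
  SDPort = 9103
  Password = "storage-password"
  Device = "Autochanger"
  Media Type = LTO-2
  Autochanger = yes   # without this, the director ignores InChanger
}
```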



Re: [Bacula-users] Multiple drives in changer

2010-04-07 Thread Bob Hetzel
> Date: Tue, 6 Apr 2010 08:52:24 -0600
> From: Robert LeBlanc 
>
> On Tue, Apr 6, 2010 at 6:13 AM, Matija Nalis
> 
>> > wrote:
>> > On Fri, Apr 02, 2010 at 10:36:59AM -0600, Robert LeBlanc wrote:
>>> > > On Fri, Apr 2, 2010 at 2:44 AM, Matija Nalis 
>>> > > 
>>> > >
 > > > I think you need to set
 > > > Prefer Mounted Volumes = no
>>> > >
>>> > > I guess this is where we need clarification about what is an available
>>> > > drive. I took this to mean a drive that has no tape is more available,
>> > and
>>> > > then a drive that does already have a tape mounted would be next in
>>> > > availability.
>> >
>> > Hm, it looks to me that any drive which is not doing R/W operation
>> > (no matter if there is a tape in drive or not) is counted as available.
>> > I could be wrong on that, though.
>> >
>> > Anyway, the safest way to know is to test it and let the others know
>> > how it goes :)
>> >
> From my observations of a few tests, this indeed seems to be the case. If
> the drive is not being R/W to/from, it is considered available.
>
>
>>> > > It seems that as long as no job is writing to that tape, then
>>> > > the drive is available. I do want this setting to yes and not no,
>> > however, I
>>> > > would like to minimize tape changes, but take advantage of the multiple
>>> > > drives.
>> >
>> > From what I see in practice, "Prefer Mounted Volumes = yes" would
>> > make sure there is only one drive in each pool that does the writing.
>> >
>> > For example, I have pool of 4 drives and I start 10 jobs at the same
>> > time, all using the same pool. I have an concurrency of >10 and
>> > spooling enabled, so all the jobs run at once and start spooling to
>> > disk -- but when they need to despool, one drive will grab a free
>> > tape from Scratch, and all the jobs will wait for their turn to
>> > write to one tape in one drive, leaving 3 drives idle all the time.
>> > Only when that tape is full, another one is loaded, and the process
>> > repeats.
>> >
>> > I think the same happens when I disable spooling, but then the 4 jobs all
>> > interleave writes -- but still all of them will write on one tape in
>> > one drive only.
>> >
>> > If you set "Prefer Mounted Volumes = no", then all 4 drives get
>> > loaded with 4 fresh tapes (or just use them if right tapes are
>> > already in right drives -- I guess, I have autochanger) and each
>> > tape gets written to at the same time, maximizing drive (and thus,
>> > the tape) usage.
>> >
>> > But "no" setting can (or at least could in the past) lead to
>> > deadlocks sometimes (if you have autochanger), when no new jobs will
>> > get serviced because drive A will wait for tape 2 that is currently
>> > in drive B, and at the same time drive B will wait for tape 1 which
>> > is currently in drive A. Then the manual intervention (umount/mount)
>> > is needed (which is a big problem for us as we have lots of jobs/tapes).
>> >
>> > The (recommended) alternative is to go semi-manual way -- dedicate
>> > special pool for each drive, and go with "Prefer Mounted Volumes =
>> > yes" Then one can (and indeed, must) specify manually which jobs will
>> > go in which pools (and hence, in which drives) and can optimize it
>> > for maximum parallelism without deadlocks -- but it requires more
>> > planning and is problematic if your backups are more dynamic and hard
>> > to predict, and you have to redesign when you add/upgrade/remove
>> > drives, and your pools might become somewhat harder to manage.
>> >
> This is exactly my experience, and my goal is not to use multiple drives in
> the same pool at the same time, it's to use drives for different pools at
> the same time one drive per pool. We are looking to bring up a lot more
> storage in the future and will probably adopt the mentality of multiple
> daily, weekly, monthly pools and split them up based on the number of drives
> we want to run concurrently. I think that is the best way to go with Bacula
> for what we want to do.
>
> Thanks,
>
> Robert LeBlanc
> Life Sciences & Undergraduate Education Computer Support
> Brigham Young University


Your other option is to try out "Maximum Concurrent Jobs" in the device 
section of your storage daemon's config.  That's working well for me.  One 
word of caution, though: since the way it allocates jobs to drives is not 
the same as with Prefer Mounted Volumes, you should carefully consider all 
the different concurrency parameters and how they relate to each other.

Here's an example of how I first tried it (not very good)...

dir.conf:
Director: Maximum concurrent jobs = 20
Jobdefs: Maximum concurrent jobs = 10

sd.conf:
Storage: Maximum Concurrent jobs = 20
Device1: Maximum concurrent jobs = 2
Device2: Maximum concurrent jobs = 2

With Prefer Mounted Volumes, Bacula alternated the drive assignments 
round-robin as it started each job.  This new directive does the opposite: 
it assigns jobs to the first drive until it hits that drive's maximum, 
then assigns the rest to the next drive.
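Given that fill-first behavior, a sketch of settings that keep both drives
busy might look like this (the numbers are illustrative assumptions to
tune, not a tested recommendation):

```conf
# bacula-dir.conf
Director {
  Maximum Concurrent Jobs = 20      # director-wide ceiling
}

# bacula-sd.conf
Storage {
  Maximum Concurrent Jobs = 20      # must cover both devices combined
}
Device {
  Name = Drive-1
  Maximum Concurrent Jobs = 10      # fills up first
  # ... other device settings ...
}
Device {
  Name = Drive-2
  Maximum Concurrent Jobs = 10      # takes the overflow
  # ... other device settings ...
}
```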

Re: [Bacula-users] Bacula timeouts on usb-drive

2010-04-22 Thread Bob Hetzel


> Hi, I'm using a really old Bacula, and it looks like I won't have any success 
> in getting permission from my boss to upgrade to the current version.
>
> I've got one server that is backing up two clients (all Linux). The server 
> stores the data on an external USB drive that is mounted by autofs. This works 
> fine except for one little issue.
> When the USB drive is not in use it gets unmounted due to the -timeout option 
> in /etc/auto.master. When the scheduled backup is initialized it takes a few 
> seconds for the USB drive to spin up and get mounted.
> When this occurs Bacula simply seems to time out, notifying me that there is 
> no volume mounted, and that therefore it failed. If I start a backup manually 
> in the same situation the result is the same.
> If I manually start a backup directly after the failed backup, it all works 
> fine.
>
> Does anybody have a suggestion on what to do?
>
> /Mike

You could try a RunBeforeJob command that includes something like

/bin/ls /auto-mounted-fs >/dev/null
/bin/sleep 60

You may not actually need the sleep statement, so try it without it too.

Alternatively, could you run an actual mount command to do it?

Bob
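As a more robust sketch, the RunBeforeJob script could poll until the
automount actually succeeds instead of sleeping a fixed time (the path,
retry count, and delay here are assumptions to adjust):

```shell
#!/bin/sh
# wait_for_mount PATH [TRIES] [DELAY]
# Listing the path triggers autofs; keep retrying until the listing works.
wait_for_mount() {
    path="$1"; tries="${2:-12}"; delay="${3:-5}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        if ls "$path" > /dev/null 2>&1; then
            return 0        # directory is mounted and readable
        fi
        sleep "$delay"
        i=$((i + 1))
    done
    return 1                # give up; let the job fail with a clear cause
}

# Example RunBeforeJob usage:
wait_for_mount /tmp 3 1 && echo "mount ready"
```

Returning nonzero makes the job fail early with an obvious cause instead
of a confusing "no volume mounted" error.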



Re: [Bacula-users] Automatic Eject tape once job finishes

2010-05-17 Thread Bob Hetzel

> From: taisgeal 
> Subject: [Bacula-users]  Automatic Eject tape once job finishes
> To: bacula-users@lists.sourceforge.net
> Message-ID: <1273850935.m2f.334...@www.backupcentral.com>
>
>
> Hello,
>
> I have read and reread dozens of threads on this board and at this stage am 
> at my wit's end. I would really appreciate any help that anyone could give me.
>
> My setup is as follows :
>
> bacula 5.0.1 running on a Debian Lenny server (server1)
> an MSL 6000 LTO autochanger with 30 slots and one drive on a separate Debian 
> Lenny server (server2)
> various Linux clients
>
> Everything works and overall, I am very impressed with Bacula. It is very 
> fast, very stable and has not let me down. It performs better and is much 
> more flexible than a number of commercial products that I have used in the 
> past.
>
> However, for the life of me, I cannot see how to move a tape from the MSL 
> 6000 drive back to the slot from which it was taken, once the backup job is 
> finished. This is not a problem for me during the week. However, I change my 
> tapes every Friday and I have to stop the bacula-sd daemon on server2 and 
> then manually move the tape using the front panel on the MSL 6000. After 
> that, I start up the bacula-sd daemon again on server2 and I am back in 
> business. I never have to go anywhere near server1.
>
> I am going on holidays soon and need to change this, such that all tapes are 
> back in their slots, so that my holiday cover doesn't have to log into 
> server2 at all.
>
> I have tried the "Unmount on offline" directive, but that just seems to lock 
> up the drive itself.
>
> Any help would be greatly appreciated.
>
> Thanks.
>

The "release" command is probably the manual command you want.  If you run 
it at the end of every day, you'll also have less likelihood of something 
like a power failure, during the time you're not doing any backups or 
restores, messing up a tape or tape drive.

At the end of every day's backups I have a job to back up the catalog.

In that job it runs "make_catalog_backup" as a RunBeforeJob.  As a 
RunAfterJob it runs a command to copy the catalog file and configs to 
another server.  Inside that script I added

echo release drive=2 |/opt/bacula/bin/bconsole
echo release drive=1 |/opt/bacula/bin/bconsole
echo release drive=0 |/opt/bacula/bin/bconsole





Re: [Bacula-users] Basic question on Pruning and Volume Retention

2010-05-24 Thread Bob Hetzel


> Message: 1
> Date: Thu, 20 May 2010 10:34:26 +0200
> From: Marcus M?lb?sch 
> Subject:
> To: bacula-users@lists.sourceforge.net
> Message-ID: <4bf4f412.40...@as-infodienste.de>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
>> >I have to admit I am a bit confused now with two places to define
>> > AutoPrune.
>> >
>> > The manual tells nothing about which resource takes precedence; or
>> > at least I cannot find it.
> One cigarette later it is clear now:
>
> The AutoPrune directive in the Client Resource prunes Job and File
> records; while the AutoPrune directive in the Pool Resource prunes
> volume records.
>
> Pruning Volume records also prunes file and job records.
>
> So to keep the archived backups in the catalog I need to disable
> AutoPruning of File and Job Records, disable AutoPruning in the Pool
> resource for the Pool containing the Volumes with the Archives, but
> enable AutoPruning for the Volumes containing the Monthly, Weekly, Daily
> Pools.
>
> I think I finally got it right :-)
>
> Marcus
>

I believe pruning a volume does NOT prune file and job records.  I believe 
when you try to prune a volume, bacula checks for unpruned job records and 
if it finds any it does not prune the volume.  I think it does not check 
for whether there are still Files in the db associated with that job, 
however.  For archives, you may find it easier to just set a meaningful 
retention period for the job and a short retention period for the files and 
then bscan them back in if needed.  But that's a pain.  I would question 
the usefulness of almost any backup package for "archiving" data.

Client hard drive space is cheap and your space requirements for new data 
are probably growing exponentially (as opposed to linearly) so you may be 
able to just keep it live wherever it lives--setting it Read-Only or 
something to avoid accidental deletion or modifications.




Re: [Bacula-users] low despooling speeds when parallelism is increased

2010-06-02 Thread Bob Hetzel
What CPUs do you have in your 2950?  I'm backing up to two LTO-2 drives on 
my PowerEdge 2900 server (dual Xeon 5160 CPUs) and it seems to be pretty 
maxed out with two tape drives.  On my R710 I only get 70 MB/sec despool 
rates to LTO-4 drives.

If you can't get higher results when using a SAN I'd think that the 
bottleneck is the server CPU.  If you're not CPU bound, then increasing the 
read cache wherever you're holding the spool should help.

Bob

> From: Athanasios Douitsis 
>
> Hi everyone,
>
> Our setup consists of a Dell 2950 server (PERC6i) w/ FreeBSD7 and two HP
> Ultrium LTO4 drives installed in a twin Quantum autochanger enclosure.
> Our bacula version is 5.0.0 (which is the current FreeBSD port version).
>
> Here is our problem:
>
> When running a single job, our setup is able to consistently surpass
> 70 Mbytes/sec (or even 80) on despooling, which should be reasonable.
> Unfortunately, when running several jobs on both drives (for
> example 6+6 parallel jobs) our despooling speeds drop to about
> 20 Mbytes/sec or even less. The run times of the jobs that finish last
> naturally ramp up, especially for the very last one. Our hypothesis is that
> the spooling area cannot handle simultaneous reading (from jobs that are
> still transferring) and writing (from the currently despooling job) too
> well, hence the performance loss.
>
> So far we were using a common spool area for both drives on these two
> test setups:
>
> 1)A spool area on a Clarion CX4 Fibre Channel array (4Gbps) w/ 2x10Krpm
> disks on a raid0 configuration.
> 2)A 2xSCSI320 raid0 striped configuration in the server itself (via the
> PERC6i controller).
>
> Both setups yielded similarly poor results.
>
> Our thoughts/questions:
>
> -Should we use a separate spool area for each drive?
> -Anyone else that has had problems with the despooling speed being too
> low? What are your proposed solutions, if any?
>
> I realize this is not strictly a bacula question, however the matter
> should be of interest for any bacula admin out there. I understand that
> below about 40 Mbytes/sec the drive constantly starts and stops, a
> process which is detrimental to its expected lifetime (and the tape's
> as well).
>
>




Re: [Bacula-users] bare metal windows server 2003 restore

2010-06-14 Thread Bob Hetzel
I've never been able to get a bare-metal restore to work when starting the 
restore from a Live CD.  I last tried it over a year ago; people responded 
a while later saying they had gotten it to work that way and would update 
a web page with that info, but that appears never to have happened.  It 
would be far simpler than messing with the Windows ASR disks.  I was able 
to get the files restored but couldn't make the system fully bootable into 
Windows.  If anybody has done some recent bare-metal restoring for 
Windows, please update the wiki page or put up another page somewhere else 
and link to it.

Also, there have been conflicting messages posted here about whether 
bacula restores Windows junction points in the current version, or whether 
it still just complains about not wanting to back them up (it still 
complains, but I don't know for sure whether that means it won't traverse 
them, or whether it backs up the junction point as a special file system 
item at all).

Off the topic of bacula for a second, what did you mean by "PS MS SQL 
Server v5 is involved here"?  Did you mean MS SQL Server 2005 or MySQL 5?

MySQL does not support VSS.  MS SQL Server does (but only in the 2005 and 
later versions, I presume), but if your DB is doing a lot of writes I 
wouldn't rely on it.  For a fully 100% safe restore of a database engine 
(or anything else where writes made after the backup must not be lost) you 
really need to shut the DB engine down, take the backup, then work on the 
restore.  Otherwise you risk having people think stuff got updated when 
those updates are about to get lost.  For regular backups it's not 
necessary to shut the DB down, but if you know you're going to wipe the 
drive after the backup you really want to eliminate all writes that can't 
be lost before taking the final backup.

 Bob

> From: Bruno Friedmann 
> Subject: Re: [Bacula-users] bare metal windows server 2003 restore
> To: bacula-users@lists.sourceforge.net
> Message-ID: <4c125566.4030...@ioda-net.ch>
> Content-Type: text/plain; charset=UTF-8
>
> Hi Gavin
>
> How would you restore a VSS snapshot without having VSS (in your Linux live 
> CD)? That's the real question.
> That's the real question.
>
> So yes, your steps are naive (in my opinion). What is described in the wiki 
> are the right steps.
>
> Otherwise, if you don't change your hardware and just want to rearrange 
> some partitioning, with storage space available
> somewhere (network or USB) you could do it directly offline with a live CD 
> (systemrescuecd) and ntfsclone:
> save all your data, adjust the partitioning, save the MBR (though you don't 
> need to change it),
> and restore with ntfsclone.
>
> You're done ... and yes, no need for bacula (but I would certainly have a 
> full backup of the system).
> Don't forget to generate the system state snapshot as mentioned in the wiki, 
> if you are not doing it already ...
>
> Do a chkdsk /F and a reboot before cloning just to be sure the NTFS 
> filesystems are in good shape.
>
>
> On 06/11/2010 03:57 PM, Gavin McCullagh wrote:
>> > Hi,
>> >
>> > we have a windows server 2003 server here and realised that its disk setup
>> > is in such a bad way that we want to reinstall it.  Never having done one,
>> > we thought it would be nice to try a bare metal restore of the machine from
>> > the backups (to spare disks).  Both c:\ and d:\ drives are entirely backed
>> > up by Bacula using VSS.
>> >
>> > I was expecting to:
>> >
>> > 1. Put a linux live cd in the server and boot it.
>> > 2. Partition the disk(s) appropriately.  Format them appropriately (NTFS).
>> > 3. Start a bacula-fd in linux.
>> > 4. Tell the bacula-dir to restore that server entirely through the running
>> >bacula-fd (probably need to do c:\ and d:\ separately).
>> > 5. Restore the MBR somehow (windows recovery cd maybe?)
>> > 6. Cross my fingers and reboot.
>> >
>> > However, when I looked at the wiki, I found this article which seems a
>> > little more complex.
>> >
>> >   http://wiki.bacula.org/doku.php?id=windows_bare_metal_recovery:ntbackup
>> >
>> > Are my steps [1-5] extremely naive?  Would that not work?  Do I have to go
>> > the way the wiki page says?  I thought I recalled someone suggesting that
>> > [1-6] should work.
>> >
>> > Many thanks in advance for any info,
>> >
>> > Gavin
>> >
>> > PS MS SQL Server v5 is involved here.  Should having VSS mean that's okay
>> > to just restore directly?  We do have database backups if need be, but it
>> > would be nice if that wasn't needed.
>> >
>> >
>
> -- Bruno Friedmann




[Bacula-users] Testing bacula configs generates orphaned buffer message: bacula 5.0.3

2010-09-09 Thread Bob Hetzel

Folks,

I might have a typo somewhere in my bacula config but I can't find it.
The reason I suspect something odd is that I got this error:
*estimate job=bbj2-o755
Using Catalog "MyCatalog"
Connecting to Client bbj2-o755 at bbj2-o755.case.edu:9102
Error sending include list.
You have messages.
*mess
09-Sep 10:20 gyrus-dir JobId 0: Error: getmsg.c:190 Malformed message: 
Invalid FileSet command: valid FileSet command: valid FileSet command: 
valid FileSet command: valid FileSet command: valid FileSet command: valid 
FileSet command: valid FileSet command: valid FileSet command: valid 
FileSet command: valid FileSet command: valid FileSet command: valid 
FileSet command: valid FileSet command: valid FileSet command: valid 
FileSet command: valid FileSet command: valid FileSet command: valid 
FileSet command: valid FileSet command: valid FileSet command: valid 
FileSet command: val
09-Sep 10:20 gyrus-dir JobId 0: Fatal error: Socket error on Include 
command: ERR=No data available

When I went to check the config file I got this:

#/opt/bacula/bin/bacula-dir -t /opt/bacula/etc/bacula-dir.conf
bacula-dir: smartall.c:403 Orphaned buffer: bacula-dir 18 bytes at 1fccf918 
from parse_conf.c:415

I tried running at a higher debug level but that didn't help me find 
anything wrong either.

My config is quite large since I'm backing up more than 150 clients, which 
makes manually going through it line by line to check for stray quotes and 
other typos difficult, to say the least.

Anybody have any other ideas for how to parse the files looking for obvious 
issues like quoting, etc?
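One rough first pass is to flag any config line with an odd number of
double quotes, which catches the most common stray-quote typos (a sketch;
it doesn't understand escaped quotes or comments, so expect some noise):

```shell
#!/bin/sh
# check_quotes FILE... -- print any line whose double-quote count is odd.
check_quotes() {
    awk '{
        n = gsub(/"/, "\"")           # count the quotes on this line
        if (n % 2 != 0)
            printf "%s:%d: unbalanced quotes: %s\n", FILENAME, FNR, $0
    }' "$@"
}

# Example: the second line is missing its closing quote.
printf 'Name = "ok"\nAddress = "broken\n' > /tmp/quote-check-demo.conf
check_quotes /tmp/quote-check-demo.conf
# prints: /tmp/quote-check-demo.conf:2: unbalanced quotes: Address = "broken
```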

Bob



[Bacula-users] Suggestion: order by VolumeName instead of MediaId

2010-09-09 Thread Bob Hetzel

I have a suggestion that others might also want, but I figured it would be 
good to mention it here first to see if anybody can think of a reason not 
to change the sort order of the 'list volumes' command.

Currently when we do a 'list volumes' in bconsole, the records come back 
ordered by MediaId.  For those of us running bacula with barcodes, the 
MediaId field is almost never used.  Could the sort order be changed so 
that it sorts by VolumeName instead?

I suspect there must have been some reason to sort by MediaId but I can't 
think of it (however that certainly isn't meant to imply that it doesn't 
exist).

It may or may not be worth adding a configure parameter or .conf setting 
so the end user can choose... barring both of those, does anybody have a 
concern about just changing the 'list volumes' query to order the volumes 
by name instead of media id?

Bob



[Bacula-users] Bacula failures reporting or sql query help...

2010-09-23 Thread Bob Hetzel

Greetings all,

We're using bacula to back up around 150 Windows desktops here.  On any 
given day a substantial percentage of people are not around, so we always 
have a bunch of failures--too many to chase after every person to find out 
why each one failed.  Has anybody written a report that shows something 
like the names of jobs that haven't ended in a success in X number of days?

I'm trying to create the right query to get this out of the database, but 
the problem with what I have so far is that clients which have never had a 
good backup don't show up in my result set, so I'm asking here if somebody 
else has already gone down this path and figured out the solution.
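A sketch of such a query against the standard Bacula catalog schema
(PostgreSQL syntax and column names assumed; JobStatus 'T' means the job
terminated normally). The LEFT JOIN is what keeps clients that have never
had a good backup in the result set:

```sql
SELECT c.Name,
       MAX(j.EndTime) AS LastGoodBackup
FROM   Client c
LEFT JOIN Job j
       ON  j.ClientId  = c.ClientId
       AND j.Type      = 'B'      -- backup jobs only
       AND j.JobStatus = 'T'      -- terminated OK
GROUP BY c.Name
HAVING MAX(j.EndTime) IS NULL                          -- never succeeded
    OR MAX(j.EndTime) < NOW() - INTERVAL '7 days';     -- stale
```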

Bob



Re: [Bacula-users] No medium found.. How to force it?

2010-10-06 Thread Bob Hetzel

>> > Isn't it supposed to pull media from scratch pool?
>
> I believe it will pull from the scratch pool only if there are no
> acceptable volumes in the pool. CNH895 was an acceptable volume even
> though it was not in the changer so it picked that over grabbing one
> from the scratch.
>
> There may be some configuration setting to prevent bacula from wanting
> a volume that is not in the changer but I do not know.  Someone else
> will have to help with that.
>
> John
>

In your Storage section of your bacula-dir.conf file, if you are using an 
autochanger you need to specify 'Autochanger = yes' so bacula will check 
the InChanger field when it works with media.
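For reference, a minimal sketch of the relevant Storage resource in bacula-dir.conf (names, address, and password here are placeholders, not values from this thread):

```conf
Storage {
  Name        = Autochanger-1            # placeholder name
  Address     = backupserver.example.com
  SDPort      = 9103
  Password    = "storage-daemon-password"
  Device      = Autochanger-1            # must match the Autochanger resource in bacula-sd.conf
  Media Type  = LTO2
  Autochanger = yes                      # tells the director to honor the InChanger flag
}
```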

Bob




Re: [Bacula-users] Introduction and Help Needed

2010-10-25 Thread Bob Hetzel



> From: Brian Blater 
> Subject: [Bacula-users] Introduction and Help Needed
> To: bacula-users@lists.sourceforge.net
> Message-ID:
>   
> Content-Type: text/plain; charset=ISO-8859-1
>
> Ok, I'm new to Bacula but I'm in no way new to IT or backups. I've
> been around the "IT block" many times in the 20+ years I've been doing
> this.
>
> Anyways, here is a little back ground. The company I work for sets up
> many international project offices with from 5 to 100+ users. I've
> been asked to come up with a solution that will work in our smaller
> project offices of 5 to 10 people, but provide basically the same
> features of our larger offices, just with a reduced cost. Currently
> the company has always used some sort of Windows offering, but when
> 80% of the cost of a server is just Windows and it's licenses, that
> just isn't going to work any more.
>
> So, I'm working on a Linux server that will basically do all the core
> functions of our Windows offering but for quite a bit less $$$. At
> this point I've got most everything working, but need to come up with
> a backup strategy. Now, I've used Symantec's offerings for many years,
> so that is where my knowledge is. With this being Linux I didn't feel
> tar and/or rsync would work to the same level as Backup Exec etc
> would, so in my investigating I turned to Bacula.
>
> After a very steep learning curve (Google and I have become very close
> friends during this week), I've got my Linux server backing up with
> Bacula to an external USB drive. That seems to be working fine in its
> basic setup (although there are several more "tweaks" I would like to
> do, but that will come later.) So, now I turn to backing up a Windows
> client. I've got the client/fd defined, have my schedule setup, my
> volume/sd is setup. Now, I go about configuring the file set. I'm
> trying to use wilddir and wildfile definitions to not back up a lot of
> the "extra" windows crap. However, I keep getting the error - No drive
> letters found for generating VSS snapshots and Could not stat
> "'"C:/Documents": ERR=The system cannot find the path specified. I've
> scoured Google and it has helped me get this far, but I just don't see
> what I'm doing wrong here (been a long week, so I must be tired and
> just not seeing it.)
>
> Here is the file set for this backup:
>
> FileSet {
>   Name = "Default WinXP File Set"
>   Enable VSS = yes
>   Include {
> Options {
>   wilddir = "C:/Documents and Settings/*/Cookies"
>   wilddir = "C:/Documents and Settings/*/Recent"
>   wilddir = "C:/Documents and Settings/*/Local Settings/Temp"
>   wilddir = "C:/Documents and Settings/*/Local Settings/History"
>   wilddir = "C:/Documents and Settings/*/My Documents/My Music"
>   wilddir = "C:/Documents and Settings/*/My Documents/My Videos"
>   wilddir = "*temporary internet files*"
>   wildfile = "*pagefile.sys"
>   wildfile = "*.log"
>   exclude = yes
> }
> Options {
>   signature = MD5
>   Compression = GZIP9
>   ignore case = yes
> }
> File = '"C:/Documents and Settings"'
>   }
>   Exclude {
> File = "c:/temp"
>   }
> }
>
> Since I'm connecting to the director and everything else seems to be
> working correctly, I haven't included the other conf files, but I can if
> it would help.
>
> Anyways, if someone could point me in the right direction I would be
> grateful. For now I feel it an accomplishment to have the server at
> least backed up.
>
> Thanks,
> Brian
>

Something odd there: you're including "c:/documents and settings" but then 
excluding c:/temp--since c:/temp isn't under the included path, that 
exclude accomplishes nothing.  My approach is a little different--tell it 
to back up c:/ and then exclude the dirs and files I don't want.

So here's what I have...  works for XP, Vista, Win7:

FileSet {
   Name = "c-drive-dirs"
   EnableVSS=yes
   Ignore FileSet Changes = yes
   Include {
 Options {
  @/opt/bacula/etc/bacula-filesets.inc
 }
 File = "C:/"
 Exclude Dir Containing = "excludemefrombackup"
   }
}
The reason for the bacula-filesets.inc include file is so we can apply the 
same options whether we're backing up C: on some computers, C: and D: or 
C: and F: on others, and so on.

Anyway, bacula-filesets.inc contains the following (please excuse the 
word/line wrapping that e-mail puts in):

   signature = MD5
   noatime = yes
   ignore case = yes
   Exclude = yes
   WildDir = "*/DO_NOT_BACKUP"
   #self explanatory, added 1/8/2009 beh

   # exclude any directory with cache in the name
   WildDir = "*/Cache"
   WildDir = "*/MSOCache"

   WildDir = "*/Windows Defender"
   WildDir = "*/Temporary Internet Files"
   WildDir = "*/Temp"
   WildDir = "*/tmp"

   WildDir = "*/restored/d"
   WildDir = "*/restored/c"

   WildDir = "*/ATI Technologies"

   WildDir = "*/wmdownloads"
   WildDir = "*/My Music"
   WildDir = "*/iTunes"
   WildDir = "*/Cookies"
   WildDir = "*/Program Fi*/Microsoft Games"

   WildFile =

Re: [Bacula-users] Bacula problems in windows 7 64 bit

2010-10-29 Thread Bob Hetzel
Zak,

I'm running bacula-fd (the 5.0.3 x64 version) fine on a bunch of Win7 x64 
computers here w/o problems.  However, the Windows installers (both 32-bit 
and 64-bit) produce, under many circumstances, broken config files.  If 
you look at the bacula-fd.conf file, make sure there aren't any remaining 
sections that say something like Name = @monitor_n...@.  To fix this you'd 
have to fill in all the @variables with actual values and then restart the 
bacula-fd service.

Bob

> From: List Man 
> Subject: [Bacula-users] Bacula problems in windows 7 64 bit
> To: bacula-users@lists.sourceforge.net
> Message-ID: <8568605.10676.1288293089051.javamail.r...@shangana>
> Content-Type: text/plain; charset=utf-8
>
> I have been unable to get bacula 5.0.3 64 bit to run on a windows 7 desktop.  
> I got the following error:  A system error has occurred.  System error 1067 
> has occurred.   The process terminated unexpectedly.  I saw the same problem 
> on the internet, but there is no fix for it so I decided to use the 32 bit 
> version.  This version starts fine, but the VSS does not work properly.  
> According to my research, the 32 bit version does not work on a 64 bit 
> computer.  Does anyone have any clue?
>
>
>
> I am using bacula dir and sd version 5.0.0  on server.  The client is using 
> version 5.0.3.
>
>
> TIA,
>
>
>
> Zak
>




Re: [Bacula-users] Can you tell which are active clients from Bacula's database?

2010-11-05 Thread Bob Hetzel


I've created a report that e-mails us when a system has repeated 
failures--the interval we chose was 10 days but it could easily be changed 
to suit your needs.  I'm sure a single, more complex query could be 
written, but I simplified the task and split it into two queries...


First I query out stuff that's never had a successful backup, then I do a 
separate query looking at what I attempted to back up that day... Attached 
is the perl program...  Hopefully this little perl script will be helpful 
to you or anybody else.


I can see that this won't exactly solve your concern of eliminating 
records that aren't "current" from the result set.  However, once the 
no-longer-active clients' backups have fully expired and you run the 
bacula 'dbcheck' tool, they will be purged from the client list in the db 
and so should drop out of the report.


   Bob


From: Matthew Seaman 

Hi there,

We have a variable population of client machines being backed up
by bacula.  What I'd like to do is build a query for the bacula DB
that will detect eg. if any clients haven't had a full backup within the
last week.  (Yes -- I know there are configuration options to
automatically promote incrementals etc. to fulls in that situation:
we're using them.)  We'll then hook this up to our Nagios so the Ops
team gets alerted.

So I have come up with this query:

SELECT clientid, name, max(endtime) FROM job
WHERE level = 'F' AND type = 'B' AND jobstatus = 'T'
GROUP BY clientid, name
HAVING max(endtime) < now() - interval '7 day'
ORDER BY name

(We're using Postgresql)

This does pretty much what I want, except that the output includes job
records from clients that have been decommissioned and removed from
bacula-dir.conf.  Now, for the life of me, I can't see anything in the
DB that indicates whether a client backup job is active or not.  Is it
just me being blind or am I going to have to parse that out of the
bacula config files?

Cheers,

Matthew

-- Matthew Seaman Systems Administrator E msea...@squiz.co.uk



#!/usr/bin/perl -w
# Usage: To get today's report, run with no parameters
#        To get a report from yesterday's backups, execute this with the
#        parameter 'yesterday' (w/o quotes)
#
# Original author statement: 
## only works with mysql currently, should work with postgres
## (c) 2007 Falk Stern (falk.st...@akquinet.de) 
## akquinet System Integration GmbH - http://www.akquinet.de/
## published under GPLv2
## 
## November 5, 2010: Greatly adapted in 2009 and 2010 by Bob Hetzel
## b...@case.edu to produce
## listing of failed backups from the last 10 days.
# 


use DBI;
use DBD::mysql;
use Data::Dumper;
use Date::Calc qw(Today Add_Delta_Days Day_of_Week_to_Text Day_of_Week 
Month_to_Text);
#use strict;


my $user = "bacula";
my $password = "";
my $database = "bacula";
my $dbhost = "localhost";
my $dsn = "DBI:mysql:database=$database;host=$dbhost;port=3306";
my $dbh = DBI->connect($dsn, $user, $password);
my $sth;

my $reportday = defined $ARGV[0] ? $ARGV[0] : '';  # avoid an undef warning under -w when run with no args

my (undef,undef,undef,$mday,$mon,$year,$wday,undef,undef) = localtime;

($year,$mon,$mday) = Today();
if ($reportday eq 'yesterday') {
  ($year,$mon,$mday) = Add_Delta_Days($year,$mon,$mday,-1);
  $wday=Day_of_Week($year,$mon,$mday);
}
#$year = $year - 1900;

#print "Year = $year \n";
#print "Month = $mon \n";
#print "Day of month = $mday \n";
#print "Weekday = $wday \n";

#print "nowish = $nowish\n";
my @days = ('Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 
'Saturday');
my @months = ('January', 'February', 'March', 'April', 'May', 'June', 'July', 
'August', 'September', 'October', 'November', 'December');
 

#$year += 1900;
#$mon++;
$mon = sprintf("%02d",$mon);
#$mon = 11;
#print "MDAY = $mday \n";
$mday = sprintf("%02d",$mday);
#print "MDAY = $mday \n";
#$mday = 20;

my %jobstates = (
"A" => "canceled by admin",
"B" => "blocked",
"C" => "created, but not running",
"c" => "waiting for client resource",
"D" => "verify differences",
"d" => "waiting for maximum jobs",
"E" => "term. in error",
"e" => "term. with non-fatal error",
"f" => "term. with fatal error",
"F" => "waiting on File Daemon",
"j" => "waiting for

Re: [Bacula-users] Tuning for large (millions of files) backups?

2010-11-12 Thread Bob Hetzel


> From: Gavin McCullagh 
> Subject: Re: [Bacula-users] Tuning for large (millions of files)
>   backups?
> To: bacula-users@lists.sourceforge.net
> Message-ID: <2010144733.gz20...@gcd.ie>
> Content-Type: text/plain; charset=us-ascii
>
> On Mon, 08 Nov 2010, Gavin McCullagh wrote:
>
>> > We seem to have the correct indexes on the file table.  I've run optimize 
>> > table
>> > and it still takes 14 minutes to build the tree on one of our bigger 
>> > clients.
>> > We have 51 million entries in the file table.
> I thought I should give some more concrete information:
>
> I don't suppose this is news to anyone but here's the mysql slow query log to
> correspond:
>
> # Time: 10 14:24:49
> # u...@host: bacula[bacula] @ localhost []
> # Query_time: 1139.657646  Lock_time: 0.000471 Rows_sent: 4263403  
> Rows_examined: 50351037
> SET timestamp=1289485489;
> SELECT Path.Path, Filename.Name, Temp.FileIndex, Temp.JobId, LStat, MD5 FROM 
> ( SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS PathId, 
> File.FilenameId AS FilenameId, LStat, MD5 FROM Job, File, ( SELECT 
> MAX(JobTDate) AS JobTDate, PathId, FilenameId FROM ( SELECT JobTDate, PathId, 
> FilenameId FROM File JOIN Job USING (JobId) WHERE File.JobId IN 
> (9944,9950,9973,9996) UNION ALL SELECT JobTDate, PathId, FilenameId FROM 
> BaseFiles JOIN File USING (FileId) JOIN Job  ON(BaseJobId = Job.JobId) 
> WHERE BaseFiles.JobId IN (9944,9950,9973,9996) ) AS tmp GROUP BY PathId, 
> FilenameId ) AS T1 WHERE (Job.JobId IN ( SELECT DISTINCT BaseJobId FROM 
> BaseFiles WHERE JobId IN (9944,9950,9973,9996)) OR Job.JobId IN 
> (9944,9950,9973,9996)) AND T1.JobTDate = Job.JobTDate AND Job.JobId = 
> File.JobId AND T1.PathId = File.PathId AND T1.FilenameId = File.FilenameId ) 
> AS Temp JOIN Filename ON (Filename.FilenameId = Temp.FilenameId) JOIN Path ON 
> (Path.PathId = Temp.PathId) WHERE FileIndex > 0 ORDER BY Temp.JobId, FileIndex ASC;
>
>
> I've spent some time with the mysqltuner.pl script but to no avail thus far.
> There's 6GB RAM so it suggests a key buffer size of >4GB which I've set at
> 4.1GB.
>
> This is an Ubuntu Linux server running MySQL v5.1.41.  The mysql data is on an
> MD software RAID 1 array on 7200rpm SATA disks.  The tables are MyISAM (which 
> I
> had understood to be quicker than innodb in low concurrency situations?).  The
> tuner script is suggesting I should disable innodb as we're not using it which
> I will do though I wouldn't guess that will make a massive difference.
>
> There are no fragmented tables currently.
>
> Gavin
>

I'm starting to think the issue might be linked to certain kernels or 
linux distros.  I have two bacula servers here.  One system is a year and 
a half old (12 GB RAM), with a File table holding approx 40 million 
records.  That system has had the slowness issue (building the directory 
tree on restores took about an hour) running first Ubuntu 9.04 or 9.10 and 
now RedHat 6 beta.  The kernel currently is 2.6.32-44.1.el6.x86_64.  I 
haven't tried downgrading; instead I tweaked the source code to use the 
old 3.0.3 query and recompiled--I don't use Base jobs or Accurate backups, 
so that's safe for me.

The other system is 4 yrs or so old, with less memory (8GB), slower cpus, 
slower hard drives, etc., and in fairness only 35 million File records. 
This one builds the directory tree in approx 10 seconds, but is running 
Centos 5.5.  The kernel currently is at 2.6.18-194.11.3.el5.

I'm still convinced that this one slow MySQL query could be changed to 
allow MySQL to better optimize it.  I started with the same my.cnf file 
settings and then tried tweaking them because the newer computer has more 
ram but that didn't help.

Is anybody up to the task of rewriting that query?



Re: [Bacula-users] Slow LTO4 write speed

2010-11-18 Thread Bob Hetzel
1) Make sure all the firmware is up to date, specifically on the 1068E 
card and the tape drives.  While you're at it, make sure the Adaptec card 
has up-to-date firmware too.

2) You might want to try this other setting too...
Maximum File Size = 3GB

3) Looking at your output, you only got a bit over 500GB.  Are you testing 
with an LTO-3 tape? (max uncompressed size of those is only 400GB, whereas 
LTO-4 tapes fit 800GB uncompressed so you should have gotten much more on 
the tape before it thought it hit the end)  If so, my understanding is that 
the tape drive will operate backward compatibly, including lowering the max 
read and write speeds to the previous generation's.

4) Also, writing to the tape is very CPU intensive.  Is the system  busy 
doing something else?


Just a few shots in the dark there, hopefully one of them helps.

In addition, until you get everything working, you should probably not mess 
with the default network buffer size.  The manual has this to say about 
that setting:
Maximum Network Buffer Size = bytes
where bytes specifies the initial network buffer size to use with the File 
daemon. This size will be adjusted down if it is too large until it is 
accepted by the OS. Please use care in setting this value since if it is 
too large, it will be trimmed by 512 bytes until the OS is happy, which may 
require a large number of system calls. The default value is 32,768 bytes.
The default size was chosen to be relatively large but not too big in the 
case that you are transmitting data over Internet. It is clear that on a 
high speed local network, you can increase this number and improve 
performance. For example, some users have found that if you use a value of 
65,536 bytes they get five to ten times the throughput. Larger values for 
most users don't seem to improve performance. If you are interested in 
improving your backup speeds, this is definitely a place to experiment. You 
will probably also want to make the corresponding change in each of your 
File daemons conf files.
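If you do experiment with it later, the directive gets set on both sides of the connection; my understanding (worth double-checking against the manual for your version) is that it belongs in the SD's Device resource and in each client's FileDaemon resource, e.g.:

```conf
# bacula-sd.conf -- Device resource, abbreviated; other directives unchanged
Device {
  Name = LTO4-0
  ...
  Maximum Network Buffer Size = 65536
}

# bacula-fd.conf on each client -- keep the value in sync with the SD
FileDaemon {
  Name = client-fd
  ...
  Maximum Network Buffer Size = 65536
}
```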


Bob


>
> Hi!
>
> We're seeing strange behaviour with Bacula 5.0.3 on Debian/Squeeze and
> Kernels 2.6.36 and 2.6.32 (for a while, we've had CentOS 5 for testing,
> but that didn't change a thing).
>
> See the 'btape fill' results below, the write speed doesn't get above
> 50MB/s - the tape drive is an ULTRIUM-HH4 LTO4 drive in a Tandberg
> StorageLoader. Tar is able to write with >120MB/s to the tapes.
>
> I tried different block sizes (64K to 2M; the tape drive claims to
> support up to 16M, but Linux doesn't let me), the relevant parts of
> bacula-sd.conf are below. I'm quit out of ideas how I could speed up the
> tape writes for bacula?
>
> The system btw is a single Xeon E5420, 8GB Ram. The Storage Library/Tape
> drives are SAS-connected via a LSI Logig MPTSAS 1068E and storage is a
> 24-disk Raid50 via Adaptec 52445.
>
>
> 16-Nov 17:02 btape JobId 0: 3304 Issuing autochanger "load slot 1, drive 0" 
> command.
> 16-Nov 17:03 btape JobId 0: 3305 Autochanger "load slot 1, drive 0", status 
> is OK.
> Wrote Volume label for volume "TestVolume1".
> Wrote Start of Session label.
> 17:03:23 Begin writing Bacula records to first tape ...
> Wrote block=5000, file,blk=11,239 VolBytes=10,483,662,848 rate=47.01 MB/s
> Wrote block=1, file,blk=22,3 VolBytes=20,969,422,848 rate=40.87 MB/s
> Wrote block=15000, file,blk=32,243 VolBytes=31,455,182,848 rate=42.68 MB/s
> Wrote block=2, file,blk=43,7 VolBytes=41,940,942,848 rate=45.93 MB/s
> Wrote block=25000, file,blk=53,247 VolBytes=52,426,702,848 rate=46.27 MB/s
> Wrote block=3, file,blk=64,11 VolBytes=62,912,462,848 rate=47.87 MB/s
> 17:25:41 Flush block, write EOF
> Wrote block=35000, file,blk=74,251 VolBytes=73,398,222,848 rate=48.03 MB/s
> Wrote block=4, file,blk=85,15 VolBytes=83,883,982,848 rate=49.02 MB/s
> Wrote block=45000, file,blk=95,255 VolBytes=94,369,742,848 rate=49.15 MB/s
> Wrote block=5, file,blk=106,19 VolBytes=104,855,502,848 rate=49.74 MB/s
> Wrote block=55000, file,blk=116,259 VolBytes=115,341,262,848 rate=49.82 MB/s
> Wrote block=6, file,blk=127,23 VolBytes=125,827,022,848 rate=50.13 MB/s
> 17:46:06 Flush block, write EOF
> Wrote block=65000, file,blk=137,263 VolBytes=136,312,782,848 rate=50.29 MB/s
> Wrote block=7, file,blk=148,27 VolBytes=146,798,542,848 rate=50.29 MB/s
> Wrote block=75000, file,blk=158,267 VolBytes=157,284,302,848 rate=50.52 MB/s
> Wrote block=8, file,blk=169,31 VolBytes=167,770,062,848 rate=50.50 MB/s
> Wrote block=85000, file,blk=179,271 VolBytes=178,255,822,848 rate=50.81 MB/s
> Wrote block=9, file,blk=190,35 VolBytes=188,741,582,848 rate=49.16 MB/s
> 18:08:33 Flush block, write EOF
> Wrote block=95000, file,blk=200,275 VolBytes=199,227,342,848 rate=49.50 MB/s
> Wrote block=10, file,blk=211,39 VolBytes=209,713,102,848 rate=49.43 MB/s
> Wrote block=105000, file,blk=221,279 VolBytes=220,198,862,848 rate=49.79 MB/s
> Wrote block=11, file,blk=232,43 VolBytes=230,684,622,848 rate=4

Re: [Bacula-users] bscan, file retention, and pruning

2010-11-18 Thread Bob Hetzel
What you've hit on is something I've noted too... I'm thinking it would be 
a nice tweak/enhancement to bacula if the pruning function were disabled 
on restore jobs.  Another case that could trigger it is simply restoring 
from your oldest backup.

I've no idea how simple this change might be, though.  It seems rather 
counter intuitive for bacula to try to prune something at the end of a 
restore job (successful or failed) so it may be a bigger project than 
adding a simple if statement...  Has anybody dug into that part of the code?

Bob

> From: Craig Miskell 
> Subject: [Bacula-users] bscan, file retention, and pruning
> To: bacula-users 
> Message-ID: <4ce45109.4010...@opus.co.nz>
> Content-Type: text/plain; charset=ISO-8859-1
>
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hi,
>   So I have just seen a case where an old tape with a job that had it's 
> file
> records pruned by the File Retention was bscan'd to get the records back into
> the database.
>
> The operator then tried to run a restore, but had managed to leave the tape
> drive in an inconsistent state (unmounted, with the tape in it, so mtx had a
> hernia), and the Restore job failed.  That's unfortunate, but it happens, and
> isn't the real problem.  When the job failed and finished, the File Retention
> period kicked in, and the bscan'd records were purged.
>
> This is somewhat annoying, and means we have to bscan again (4 hours+).  In 
> the
> general case of a bscan and a single successful restore, it's pretty much ok.
> But in case of a failure of the restore, or if we find we have to do more than
> one restore (the user decides they need more files after the first batch), 
> this
> is a real pain.
>
> The somewhat crude approach is to raise File Retention on the client to a big
> enough period to cover back to when the tape was written, while going through
> the bscan/restore process, and setting it back to normal afterwards.
>
> Is there a better way?  I'm thinking of something like marking the job as
> not-pruneable after the bscan and while doing restores, but I'm open to any
> suggestions.
>
> Thanks,
>
> - --
> Craig Miskell
> Senior Systems Administrator
> Opus International Consultants
> Phone: +64 4 471 7209
> I think we agree, the past is over
> - -George W Bush
>




Re: [Bacula-users] bscan, file retention, and pruning

2010-11-19 Thread Bob Hetzel


On 11/18/2010 11:00 PM, Dan Langille wrote:
> On 11/18/2010 4:20 PM, Bob Hetzel wrote:
>>> From: Craig Miskell
>>> Subject: [Bacula-users] bscan, file retention, and pruning
>>> To: bacula-users
>>> Message-ID:<4ce45109.4010...@opus.co.nz>
>>> Content-Type: text/plain; charset=ISO-8859-1
>>>
>>> -BEGIN PGP SIGNED MESSAGE-
>>> Hash: SHA1
>>>
>>> Hi,
>>> So I have just seen a case where an old tape with a job that had it's file
>>> records pruned by the File Retention was bscan'd to get the records back
>>> into
>>> the database.
>>>
>>> The operator then tried to run a restore, but had managed to leave the tape
>>> drive in an inconsistent state (unmounted, with the tape in it, so mtx
>>> had a
>>> hernia), and the Restore job failed. That's unfortunate, but it happens,
>>> and
>>> isn't the real problem. When the job failed and finished, the File
>>> Retention
>>> period kicked in, and the bscan'd records were purged.
>>>
>>> This is somewhat annoying, and means we have to bscan again (4 hours+).
>>> In the
>>> general case of a bscan and a single successful restore, it's pretty
>>> much ok.
>>> But in case of a failure of the restore, or if we find we have to do
>>> more than
>>> one restore (the user decides they need more files after the first
>>> batch), this
>>> is a real pain.
>>>
>>> The somewhat crude approach is to raise File Retention on the client to
>>> a big
>>> enough period to cover back to when the tape was written, while going
>>> through
>>> the bscan/restore process, and setting it back to normal afterwards.
>>>
>>> Is there a better way? I'm thinking of something like marking the job as
>>> not-pruneable after the bscan and while doing restores, but I'm open to any
>>> suggestions.
>>>
>>> Thanks,
>
>> What you've hit on is something I've noted too... I'm thinking it would be
>> a nice tweak/enhancement to bacula if the pruning function was disabled on
>> restore jobs. Another case that could trigger it might be just restoring
>> from your oldest backup.
>>
>> I've no idea how simple this change might be, though. It seems rather
>> counter intuitive for bacula to try to prune something at the end of a
>> restore job (successful or failed) so it may be a bigger project than
>> adding a simple if statement... Has anybody dug into that part of the code?
>
> Do not set auto prune on.
>
> Instead, use an Admin job to do your pruning for you.
>

Interesting idea... Do you have a good prune script?
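For anyone else reading along, the Admin-job approach might be sketched roughly like this in bacula-dir.conf (untested; the names, schedule, and the exact prune command are assumptions to adapt, and some required Job directives may be omitted):

```conf
Job {
  Name     = "NightlyPrune"
  Type     = Admin                 # runs no backup; just executes the script
  Client   = some-client-fd        # Admin jobs still need the usual directives
  FileSet  = "Dummy"
  Storage  = Autochanger-1
  Pool     = Default
  Messages = Standard
  Schedule = "AfterNightlyBackups" # schedule it after the backup window
  RunScript {
    RunsWhen = Before
    Console  = "prune files client=some-client-fd yes"
  }
}
```

With AutoPrune disabled on the clients, pruning then happens only when this job fires, so a failed restore can't trigger it.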



Re: [Bacula-users] Copy Job performance issues

2010-12-07 Thread Bob Hetzel


This comes up periodically on the list.  Check this thread for more 
settings you'll need to tweak to get more write speed:

http://marc.info/?t=12899980386&r=1&w=2

> Subject: [Bacula-users] Copy Job performance issues
> To: bacula-users 
> Message-ID: 
> Content-Type: text/plain; charset=us-ascii
>
> Dear bacula users,
>
> I have a problem concerning the speed of copy jobs. My setup is:
> - bacula server is OpenSuSE 11.3, version 5.0.3 using postgres, 2 Xeons (4 
> cores) and 8 GB memory.
> - Attached is an iSCSI-RAID containing File devices
> - Copy Jobs run to a Quantum Scalar 50 Tapelibrary with 2 LTO4 tape drives. 
> The library and the tapes are connected via eSATA. According to wikipedia, 
> LTO4 should transfer 120MB/s without compression. Compression is enabled.
>
> My backups go to the iSCSI-RAID. I run CopyJObs every day to copy all 
> uncopied jobs to tape.
> My problem is, that my copy jobs run with max. 45 Mbyte/s
>
> I checked the following:
> - btape shows a maximum speed of 80MByte/sec.
> - I can read about 150 Mbyte/s from the RAID device
> - Copying data from RAID to tape using dd I get rates about 80 MByte/s
> - While copying the cpu load is about 20-50% (bacula-sd)
>
> So it seems as if bacula-sd itself slows the copy jobs down, but I cannot 
> imagine, why. Maybe it is a configuration issue, but reading the manuals 
> didn't help.
>
> Configuration of the tape device in bacula-sd.conf:
>
> Autochanger {
>   Name = "Scalar50-2"
>   Device = "LTO4-0"
>   Device = "LTO4-1"
>   Changer Device = /dev/tape/by-id/scsi-3500e09e0bb562001
>   Changer Command = "/usr/lib64/bacula/mtx-changer %c %o %S %a %d"
> }
>
> Device {
>   Name = "LTO4-0"
>   Device Type = Tape;
>   Media Type = LTO4
>   Archive Device = /dev/tape/by-id/scsi-3500110a00104374e-nst
>   AutomaticMount = yes;
>   AlwaysOpen = yes;
>   RemovableMedia = yes;
>   RandomAccess = no;
>   Autochanger = yes
>   Drive Index = 0
>   Spool Directory = /var/spool/bacula
> }
>
> Device {
>   Name = "LTO4-1"
>   Device Type = Tape;
>   Media Type = LTO4
>   Archive Device = /dev/tape/by-id/scsi-3500110a0012be1ee-nst
>   AutomaticMount = yes;
>   AlwaysOpen = yes;
>   RemovableMedia = yes;
>   RandomAccess = no;
>   Autochanger = yes
>   Drive Index = 1
>   Spool Directory = /var/spool/bacula
> }
>
> Any ideas?
>
> --
> Kind regards
> Christoph
> _
> Christoph Litauer
> Uni Koblenz, Computing Centre, Office A 022
> Postfach 201602, 56016 Koblenz
> Fon: +49 261 287-1311, Fax: -100 1311
>
>




Re: [Bacula-users] Storage error: "The sizes do not match!"

2010-12-15 Thread Bob Hetzel
This issue was mostly eliminated by a contributor's code change in 5.0.3 
(I can't remember whose, but it has been really helpful to me at least).

You'll notice the two numbers differ by exactly one, so somebody built a 
patch to have it just fix the inconsistency and continue appending to the 
volume w/o marking the volume in error.

Bob

> From: dmbo 
> Subject: [Bacula-users]  Storage error: "The sizes do not match!"
> To: bacula-users@lists.sourceforge.net
> Message-ID: <1291857583.m2f.348...@www.backupcentral.com>
>
> Hi,
>
> I am using bacula for a couple of days. It works perfect, except one thing: 
> from time to time I get this message:
> 09-Dec 10:05 sl0358066-sd JobId 335: Error: 
> Bacula cannot write on disk Volume "nas101l4" because: The sizes do not 
> match! Volume=12487974757 Catalog=11999231858
> Then I should use "label" bconsole command that creates a new file.
>
> How to solve this problem?
>
> Yes, I googled it, no result so far. There are lots of topics about it but 
> without solution.
>




Re: [Bacula-users] How change the MedId for an exist Volume

2010-12-16 Thread Bob Hetzel


> From: Martin Simmons 

>> > On Wed, 15 Dec 2010 08:19:02 -0200, Rodrigo N de Castro Fernandes 
>> > said:
>> >
>> > I would like to know how to change the MediaId of an existing Volume,
>> > just for backup Media/Volume sort organization.
>> >
>> > Can somebody help me? Is it possible? Is it recommended?
> It isn't recommended, but you could do it with some SQL commands if you are
> very careful.
>
> Purge the volume first, otherwise you will need to update the jobmedia table
> as well.  You may need to adjust the SQL sequence that controls the automatic
> numbering of new volumes as well, if your new MediaId is higher.  There may be
> other problems that I haven't thought about...
>
> __Martin

I'm not 100% sure what the original poster's goal was, but in my case the 
goal was to change how bacula sorts when you do a 'list volumes'.  By 
default it sorts by MediaId, which is rather unhelpful to those of us who 
use actual bar-coded tapes.  The MediaId is generated when the volume is 
first inserted into the catalog, so if the autochanger picks up the tapes in 
the wrong order you're left with that odd sorting.  I tried updating the 
tables directly for a while, but eventually I decided it was too big a pain 
to wait for a tape to become Scratch (I presume you meant Prune, not Purge, 
by the way).

My solution was to tweak the query bacula uses so that it sorts on the 
VolumeName field instead.  Unfortunately I'm on vacation and can't dig into 
the code to see which file it was in, but all you have to do is change the 
line where the query ends in something like 'order by MediaId' to 'order by 
VolumeName' instead.  I figured that would be a nice enhancement to the 
bacula code, but I didn't know whether some people actually prefer the 
MediaId order, perhaps because they're mixing LTO3 and LTO4 tapes or 
something, so I didn't bother generating a patch to submit.
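In the meantime, a non-invasive way to get the same listing (a sketch, 
assuming the stock catalog schema and direct access via bconsole's 
'sqlquery' prompt or the database shell) is to run the sorted query yourself:

```sql
-- List volumes sorted by barcode-style name rather than by MediaId.
-- Column names follow the stock Bacula Media table.
SELECT MediaId, VolumeName, VolumeStatus, VolumeBytes, LastWritten
  FROM Media
 ORDER BY VolumeName;
```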

Bob



Re: [Bacula-users] concurrent jobs + autochanger

2011-01-26 Thread Bob Hetzel
> Hello,
> I'm completely lost regarding the Maximum Concurrent Jobs directive on
> multiple places.
>
> First, my setup:
> -Autochanger, about 70 slots
> -2 LTO4 Drives inside autochanger
> -2 backup Pools defined, and a Scratch pool. pool1 used by Job1 for unix
> clients,  pool2 used by Job2 and Job3 for windows clients.
>
> What I want is for those two LTO drives to write the jobs concurrently,
> without specifying an exact job-device mapping. Right now I cannot do this;
> all jobs are written to the 1st drive sequentially.
> All Jobs use the "Autochanger" device, not the drives themselves.
>
> I have "Maximum Concurrent Jobs = 2" in the Director {} in bacula-dir
> I have "Maximum Concurrent Jobs = 20" in the Storage {} in bacula-sd
>
> Should I use the "Maximum Concurrent Jobs = 2" on the Autochanger or on
> the LTO drives?
> Should I use the "Maximum Concurrent Jobs = 1" on each LTO Device{} ?
>
> Thank you
> -Spiros

Spiros,

There's been a new directive added in recent versions.
In the bacula-sd.conf file, you can now add
Maximum Concurrent Jobs = 

I use 6, but your mileage may vary.  Here's the section of the manual where 
it's described:

http://www.bacula.org/5.0.x-manuals/en/main/main/New_Features_in_5_0_0.html#SECTION0051

Here's how I've found it to work: if you set that number to 2 and you have 
more than 2 jobs running, the first two will be assigned to the 1st device, 
and so on.  There's more to it than that, so you'll want to watch how it 
behaves.  Remember that you have to restart the storage daemon whenever you 
change its config file--you can't just reload it.

Here's a snippet of my bacula-sd file that shows it in the Device section.

Autochanger {
   Name = Dell-PV136T
   Device = IBMLTO2-1, IBMLTO2-2
   Changer Command = "/opt/bacula/scripts/mtx-changer %c %o %S %a %d"
   Changer Device = /dev/sg4
}
Device {
   Name = IBMLTO2-1
   Drive Index = 0
   Media Type = LTO-2
   Archive Device = /dev/nst0
   Changer Device = /dev/sg4
   Maximum Concurrent Jobs = 6
[rest of section deleted for brevity]
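For completeness, the director side caps concurrency too: the effective 
limit is the smallest value along the whole chain.  A sketch of the matching 
bacula-dir.conf pieces (resource names here are illustrative, not from my 
real config) might look like:

```conf
# Sketch only: effective concurrency is the minimum of the limits in the
# Director, Job, Client, and Storage resources (bacula-dir.conf) and the
# Device resources (bacula-sd.conf).
Director {
  Name = backup-dir                # illustrative
  Maximum Concurrent Jobs = 20
  # ...
}
Storage {
  Name = Dell-PV136T               # must match the SD's autochanger
  Maximum Concurrent Jobs = 12     # e.g. 2 drives x 6 jobs per device
  # ...
}
```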


