Hi All,
I performed a manual Full backup on 2/6/13.
However, when I recently attempted to do a restore (via "Select backup
for a client before a specified time" with "2013-02-23 00:00:00" as the
argument), I was presented with the following:
+---+---+---+
% /scratch
/dev/sda1 61T 42T 19T 70% /data
Any thoughts?
Mike
On 10/22/2012 03:37 PM, Jérôme Blion wrote:
> Hello,
>
> Are you sure there is no loop?
> Typically, it can happen with onefs=no
>
> HTH.
> Jérôme Blion.
>
>
> On 22/10/2012 22:33, Mike S
On 10/22/2012 01:15 PM, John Drescher wrote:
>> I currently have a machine with ~3 GB of data.
>>
>> However, ~230 GB is being backed up by Bacula.
>>
>> I performed a "bconsole -> estimate client=blah listing", and it doesn't
>> look like any files beyond what I specified in the fileset are being
Hi All,
I currently have a machine with ~3 GB of data.
However, ~230 GB is being backed up by Bacula.
I performed a "bconsole -> estimate client=blah listing", and it doesn't
look like any files beyond what I specified in the fileset are being
backed up.
I even set sparse=yes in the fileset op
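In case it helps others hitting the same symptom, a FileSet with sparse handling and filesystem confinement might look like this (a sketch only; names and paths are illustrative, not from this message):

```
FileSet {
  Name = "blah-fs"          # illustrative name
  Include {
    Options {
      sparse = yes          # skip holes in sparse files
      onefs = yes           # stay on one filesystem; avoids loops via mounts
      signature = MD5
    }
    File = /data            # illustrative path
  }
}
```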
, Josh Fisher wrote:
On 6/13/2011 2:15 AM, Mike Seda wrote:
I forgot to mention that during my debugging, I did have "Heartbeat
Interval" set to 10 on the Client, Storage, and Director resources.
The same error still occurred... Very odd.
I have encountered similar situations wi
timeout of
the state on a forwarding device. Dropping spool sizes is only
increasing the frequency of communication across that path. You will
likely see this problem solved completely by setting a short duration
keepalive in your bacula configs.
-Blake
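For reference, the short keepalive Blake describes is set with the Heartbeat Interval directive in each daemon's config. A minimal sketch (resource names are placeholders, not from this thread):

```
# bacula-dir.conf
Director {
  Name = backup-dir              # placeholder
  Heartbeat Interval = 60       # seconds; keeps firewall/NAT state alive
}
# bacula-sd.conf
Storage {
  Name = backup-sd               # placeholder
  Heartbeat Interval = 60
}
# bacula-fd.conf
FileDaemon {
  Name = client-fd               # placeholder
  Heartbeat Interval = 60
}
```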
On Fri, Jun 10, 2011 at 20:48,
I just encountered a similar error in RHEL 6 using 5.0.3 (on the server
and client) with Data Spooling enabled:
10-Jun 02:06 srv084 JobId 43: Error: bsock.c:393 Write error sending
65536 bytes to Storage daemon:srv010.nowhere.us:9103: ERR=Broken pipe
10-Jun 02:06 srv084 JobId 43: Fatal error: bac
All,
I'm still doing some testing with Bacula in my new environment. After
one week of backups, Bacula is storing approximately 25,000,000 files
(10 TB of data). Our other 5 TBs of data is not in Bacula yet, but will
be soon. Our 15 TB of total data will also grow by 50% each year.
Postgres see
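The stated 50%-per-year growth can be sketched numerically (Python; the 15 TB starting point and growth rate are taken from the message above, the three-year horizon is arbitrary):

```python
# Quick sketch of the stated 50%-per-year data growth, starting from 15 TB.
total_tb = 15.0
growth = 1.5  # 50% annual growth

projection = [round(total_tb * growth ** year, 2) for year in range(1, 4)]
print(projection)  # projected TB after years 1, 2, 3
```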
Doh. I mean to say "If I'm *not* pointing to removable media"...
On 06/02/2011 02:59 PM, Mike Seda wrote:
> Hi All,
> I'm currently tweaking a Bacula D2D setup, and am wondering if I
> should be writing to a disk-based Autochanger versus directly to a
> disk-
Hi All,
I'm currently tweaking a Bacula D2D setup, and am wondering if I should
be writing to a disk-based Autochanger versus directly to a disk-based
Device. If I'm pointing to removable media, I should just write directly
to one or more Devices (w/o an Autochanger), right?
Mike
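For comparison, writing directly to a disk-based Device (no Autochanger) needs only a plain Device resource in the SD; a sketch with illustrative names and paths:

```
# bacula-sd.conf -- plain disk-based Device (sketch)
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /backup/disk-volumes   # directory holding volume files
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
```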
On 05/20/2011 10:48 AM, Mike Seda wrote:
> Hi All,
> I'm currently setting up a disk-based storage pool in Bacula and am
> wondering what I should set "Maximum Volume Bytes" to. I was thinking of
> setting it to "100G", but am just wondering if this is sane.
>
Hi All,
I'm currently setting up a disk-based storage pool in Bacula and am
wondering what I should set "Maximum Volume Bytes" to. I was thinking of
setting it to "100G", but am just wondering if this is sane.
FYI, the total data of our clients is 15 TB, but we are told that this
data should at
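For what it's worth, 100G volumes against 15 TB works out to roughly 150 volumes, which is a manageable count. A Pool along those lines might look like this (a sketch; names and retention are illustrative):

```
Pool {
  Name = FilePool                  # illustrative
  Pool Type = Backup
  Maximum Volume Bytes = 100G      # ~150 volumes for 15 TB of data
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 365 days      # illustrative
}
```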
=fe35ad03e0bff911ab8c7fcea9684bba83e3e9b9
I'm not sure why this commit never made it into bacula-client-5.0.3 from
FreeBSD Ports though.
Mike
On 05/18/2011 11:16 AM, Mike Seda wrote:
> Hi Martin,
> It turns out that I do have a jobhisto table after all:
>
> bacula=> SE
locationlog
jobhisto
pathhierarchy
unsavedfiles
basefiles
jobmedia
job
client
counters
version
cdimages
device
status
pathvisibility
(24 rows)
bacula=>
Mike
On 05/18/2011 10:46 AM, Mike Seda wrote:
> Hi Martin,
> It looks like make_bacula_tables succeeded. There
BLE
CREATE INDEX
psql::320: NOTICE: CREATE TABLE / PRIMARY KEY will create
implicit index "unsavedfiles_pkey" for table "unsavedfiles"
CREATE TABLE
psql::327: NOTICE: CREATE TABLE / PRIMARY KEY will create
implicit index "cdimages_pkey" for table "cdimages"
CREATE
Hi All,
I'm currently attempting to stand up a Bacula Director on FreeBSD 8.2.
I installed the following packages from FreeBSD Ports:
bacula-client-5.0.3
bacula-server-5.0.3
postgresql-client-8.3.14,1
postgresql-server-8.3.14
Everything has gone pretty well so far, but I just ran into the error b
Hi All,
I'm currently attempting to architect a Bacula solution for an
environment of 100+ clients with 15+ TB of total data (5,000,000+ files).
During my research on the above solution, I read that it's possible to
run multiple storage (and director) daemons. This was very interesting
to me, b
All,
I'm currently experiencing the hung bconsole issue, as well.
No modifications were made to my system recently.
After a couple of days, bconsole just doesn't respond anymore.
I just get the following...
[ms...@backup0 ~]$ sudo /usr/sbin/bconsole -d 200
Password:
Connecting to Director local
felix,
i too have noticed a severe performance degradation due to high load
when i upgraded the backups server from 2.0.1 to 2.2.7. both sets of
rpms came from the rpms-contrib-fschwarz repository on sourceforge.net.
were your rpms based on an --enable-batch-insert option to ./configure ?
also,
hi arno,
please see inline responses:
Arno Lehmann wrote:
> Hi,
>
> 13.01.2008 00:06, Mike Seda wrote:
>
>> hi arno,
>> i have matched up my config to the example that you gave. so now i have:
>> FD: 20
>> SD: 20
>> DIR/Jobs: 1
>> DIR/Director:
someone
was going to propose that. :-P
best,
mike
Alan Brown wrote:
> On Thu, 10 Jan 2008, Mike Seda wrote:
>
>> hi arno,
>> i forgot to mention that i have:
>> Maximum Concurrent Jobs = 2
>>
>> should i increase that number?
>
> Yes - and read the fine manua
hi arno,
i have matched up my config to the example that you gave. so now i have:
FD: 20
SD: 20
DIR/Jobs: 1
DIR/Director: 20
DIR/Storage1(Autochanger): 4
...but, jobs still seem to run in serial... any thoughts?
thx,
mike
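The numbers above map onto Maximum Concurrent Jobs directives roughly like this (a sketch; note that the per-Job value of 1 by itself caps each Job resource to one running instance, which matches the serial behavior described):

```
# bacula-fd.conf
FileDaemon { Maximum Concurrent Jobs = 20 }
# bacula-sd.conf
Storage    { Maximum Concurrent Jobs = 20 }
# bacula-dir.conf
Director   { Maximum Concurrent Jobs = 20 }
Job        { Maximum Concurrent Jobs = 1 }  # one instance of each Job at a time
Storage    { Maximum Concurrent Jobs = 4 }  # the autochanger Storage resource
```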
Arno Lehmann wrote:
> Hi,
>
> 10.01.2008 20:34, Mike S
her wrote:
>
>> On Jan 8, 2008 11:21 AM, Mike Seda <[EMAIL PROTECTED]> wrote:
>>
>>> all,
>>> i am frustrated beyond belief.
>>>
>>> i recently spent $8K for a second tape drive for my library.
>>>
>>> all i wan
Michael Short wrote:
> I meant for this to hit the list:
>
> On Jan 8, 2008 3:22 PM, Michael Short <[EMAIL PROTECTED]> wrote:
>
>> Mike,
>>
>> I recommend that you upgrade your FD agents, I have had some trouble
>> with the volumes produced by 2.0FD->2.2SD/DIR
>>
yikes... i will upgrade th
Arno Lehmann wrote:
> Hi,
>
> 08.01.2008 17:34, John Drescher wrote:
>
>> On Jan 8, 2008 11:21 AM, Mike Seda <[EMAIL PROTECTED]> wrote:
>>
>>> all,
>>> i am frustrated beyond belief.
>>>
>>> i recently spent $8K for a seco
John Drescher wrote:
> On Jan 8, 2008 11:21 AM, Mike Seda <[EMAIL PROTECTED]> wrote:
>
>> all,
>> i am frustrated beyond belief.
>>
>> i recently spent $8K for a second tape drive for my library.
>>
>> all i want is for two bacula backup jobs (on
all,
i am frustrated beyond belief.
i recently spent $8K for a second tape drive for my library.
all i want is for two bacula backup jobs (one for each tape drive) to
run concurrently. i upgraded the backup server to 2.2.7 (to take
advantage of improved code for handling multiple drives). the c
all,
i wish to upgrade from version 2.0.1 to 2.2.6. the purpose of the
upgrade is so that bacula will better handle my library which has two
tape drives.
can i do the upgrade just on my bacula server and not the clients? and,
is there a mysql db upgrade step involved? if so, what is involved.
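On the catalog question: the 2.0 to 2.2 jump did change the database schema, so the bundled upgrade script has to be run on the server. A sketch (the script path varies by package; stop the director first):

```
# Path is package-dependent; often /etc/bacula or /usr/libexec/bacula
/etc/bacula/update_mysql_tables
```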
All,
I will soon purchase a second (identical) tape drive for my autochanger,
which will be used primarily for cloning jobs.
After reading the documentation, it seems that the following (with
site-specific modifications of job-name and media-type) is all that
needs to be added to each Job (or p
Hi All,
My bacula database (with MyISAM tables) is currently 5.3 GB in size
after only 10 months of use.
Last weekend my File table filled up, which was easily fixed by doing
the following as recommended at
http://www.bacula.org/dev-manual/Catalog_Maintenance.html#SECTION00244
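For anyone else hitting the MyISAM limit, the fix that page recommends amounts to raising the File table's effective row/size limit, along these lines (a sketch; run against the bacula database):

```sql
-- Raise MyISAM's effective row/size limit on the File table
ALTER TABLE File MAX_ROWS=281474976710656;
```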
Hi All,
I wish to save space on my tapes by changing backup levels in my
"Monthly" Schedule resource.
My current "Monthly" Schedule resource is:
Schedule {
Name = "Monthly"
Run = Level=Full Pool=Monthly 1st fri at 23:05
Run = Level=Full Pool=Weekly 2nd-5th fri at 23:05
Run = Level=Differe
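A space-saving variant of the resource above might drop the 2nd-5th Fridays to Differential, something like this (a sketch; pool names are copied from the resource above except "Daily", which is illustrative):

```
Schedule {
  Name = "Monthly"
  Run = Level=Full Pool=Monthly 1st fri at 23:05
  Run = Level=Differential Pool=Weekly 2nd-5th fri at 23:05
  Run = Level=Incremental Pool=Daily mon-thu at 23:05
}
```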
All,
I am interested in purchasing another autochanger.
I am under the impression that the only way to conveniently (without the
annoyance of manual shuffling) use two autochangers is to have different
MediaTypes assigned to them. Is that correct?
Thx,
Mike
Arno Lehmann wrote:
> Hi,
>
> On 12
herefore bacula)?
>
> I have found the following:
> http://article.gmane.org/gmane.comp.bacula.user/30699/match=fiber+channel
> So apparently Mike Seda got an FC connection working
> and:
> http://article.gmane.org/gmane.comp.bacula.user/38666/match=fiber+channel
> So Arno ind
again.
The tapes can be recovered by backing them up and reformatting them (to
get rid of the MTEOM marker).
Basically, my tapes that I thought were bad are actually recoverable.
Cheers,
Mike
Mike Seda wrote:
> I should mention that all of my other tapes are working flawlessly with
> my c
Hi Arno,
I activated the wait_for_drive function in /etc/bacula/mtx-changer, and
the warnings went away!
Yippeee!! You rule!! 8-)
Cheers,
Mike
Hi,
18.09.2007 20:30, Mike Seda wrote:
> Hi All,
> I just conducted a successful restore job via bconsole, but received
>
Hi All,
I just conducted a successful restore job via bconsole, but received
some weird warning messages. Suspiciously, these warnings started to
appear after a library (robotics and drive) firmware upgrade.
The warning that I receive is:
Warning: acquire.c:200 Read open device "Drive-1" (/dev/n
acula-sd.conf -i FILE0002 -o MSR133L3 -v -w
/var/bacula/spool VG1-LV0 Drive-1
This seemed to work fine... Bacula seemed to like this... No more
errors... 8-)
Martin Simmons wrote:
>>>>>> On Wed, 02 May 2007 12:01:08 -0400, Mike Seda said:
>>>>>>
:48 bcopy: Marking Volume "MSR133L3" in Error in Catalog.
> 01-May 22:49 bcopy: Invalid slot=0 defined, cannot autoload Volume.
> Mount Volume "" on device "Drive-1" (/dev/nst0) and press return when ready:
>
> Any thoughts?
>
>
> Martin Simmo
"Drive-1" (/dev/nst0) and press return when ready:
Any thoughts?
Martin Simmons wrote:
>>>>>> On Mon, 30 Apr 2007 17:54:48 -0400, Mike Seda said:
>>>>>>
>> Hi All,
>> I successfully bcopied a tape to disk. Then, during a subse
our question, but I'm not 100% certain what it was
>> to begin with. :)
>>
>
> Right. The only way to make the MediaIds contiguous would be to hack the
> counter back down to its previous value in the catalog.
>
> __Martin
>
>
>
>> Mike Seda wrote:
>>
this since I would prefer my MediaIds to be contiguous.
Martin Simmons wrote:
>>>>>> On Sun, 29 Apr 2007 21:22:20 -0400, Mike Seda said:
>>>>>>
>> All,
>> I want to duplicate a tape volume to another tape volume. Since I only
>>
All,
I want to duplicate a tape volume to another tape volume. Since I only
have one tape drive, I must copy the tape volume to a disk volume and
then copy the disk volume to a duplicate tape.
Question # 1
If I ever need to bscan in the duplicate tape, I want the volume name to
be that of the o
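For reference, each leg of such a copy is done with bcopy; a sketch of both legs, following the invocation pattern shown elsewhere in this thread (volume and device names are illustrative):

```
# Tape volume -> disk volume
bcopy -c /etc/bacula/bacula-sd.conf -v -w /var/bacula/spool \
      -i TAPE0001 -o DISKVOL1 Drive-1 FileStorage
# Disk volume -> duplicate tape
bcopy -c /etc/bacula/bacula-sd.conf -v -w /var/bacula/spool \
      -i DISKVOL1 -o TAPE0002 FileStorage Drive-1
```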
All,
Is it possible to delete a single job in bacula 2.0.1?
Thx,
Mike
onto another? Probably migration, huh?
Kern Sibbald wrote:
> On Saturday 31 March 2007 20:38, Mike Seda wrote:
>
>> I have a similar MTEOM error with one of my tapes. My drive and other
>> tapes are totally fine. I can even restore from this tape, but just
>> cannot app
I have a similar MTEOM error with one of my tapes. My drive and other
tapes are totally fine. I can even restore from this tape, but just
cannot append to it. The exact error is below:
18-Mar 02:17 uwharrie-sd: Volume "MSR122L3" previously written, moving
to end of data.
18-Mar 02:35 uwharrie-s
Hi All,
Does anyone know if the bacula-client FC6 x86_64 rpms work with RHEL5?
Best,
Mike
Felix Schwarz wrote:
> Arnaud Mombrial wrote:
>
>> Does anyone knows if there would be (or is there already ??) an fc6 package
>> for bacula-client ?
>>
>
> Sorry for the delay, I was *extremely* bus
Hi Sven,
I wouldn't nfs mount anything on my backup server. I would just backup
the NFS share on the server-side, i.e. run bacula-fd on the NFS server
and add the NFS shared dir to the NFS server's FileSet.
Btw, last night my clients got the following Rates in MB/s:
4, 7, 14, 21, 10
These rates var
Mike
Kern Sibbald wrote:
> On Thursday 15 March 2007 16:17, Mike Seda wrote:
>
>> Ok.. I have found all references to MediaId... The following sql
>> statements were executed:
>> select MediaId, JobId from JobMedia;
>> select MediaId from Media;
>> select
Right. After the double migration, I verified the job with a small
restore... Everything seemed fine...
Kern Sibbald wrote:
> On Thursday 15 March 2007 20:44, Mike Seda wrote:
>
>> Fyi, my double migration was successful... First I ran a job called T2D
>> and then ra
19:25, Mike Seda wrote:
>
>> Hi Kern,
>> My proposed method may be feasible, but do you recommend a more elegant
>> solution to accomplish a migration job between two different tape pools
>> based on the storage resources at my disposal? I just want to make sure
from Job;
select PoolId from Media;
select PoolId,JobId from Job;
I will search through the output of these commands and see if I can
bring the database back to a sane state... Am I on the right track?
Thx,
M
Mike Seda wrote:
> Hi All,
> Kern is right... I should have never changed those Me
Hi All,
Kern is right... I should have never changed those MediaIds... I
actually remember changing a few other things such as renaming the
Default pool to Weekly and resetting the auto_increment value on Media
and Pool tables. I also remember changing something about PoolId. In
hind-sight, I d
Is the following legal syntax for a migration job?
Selection Type = SQLQuery
Selection Pattern = "
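For comparison, a complete SQLQuery selection might look like this (a sketch; the query and job name are illustrative, not the ones from the truncated message above):

```
Job {
  Name = "migrate-old-jobs"       # illustrative
  Type = Migrate
  Selection Type = SQLQuery
  Selection Pattern = "SELECT DISTINCT JobId FROM Job WHERE JobTDate < 1172000000"
}
```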
>> =R
>>
>> Jason King wrote:
>>
>>
>>> Not 100% sure, but archived volumes probably stay archived until their
>>> retention period is up, then they would go into recycle which would
>>> then allow bacula to use them. Read-Only is s
I have the same question:
"Could someone tell me if there is a difference in the way that Bacula
handles volumes with the status "Archive" and the status "Read-Only"?"
Did this ever get answered?
Neal Gamradt wrote:
> All,
>
> Could someone tell me if there is a difference in the way that Bacu
Kern Sibbald wrote:
> On Friday 09 March 2007 02:33, Mike Seda wrote:
>
>> Hi All,
>> I too have switched off a few machines lately, and wish to save their
>> last complete backup to an Archive pool. Basically, I want to migrate a
>> few jobs from my &quo
Hi All,
I too have switched off a few machines lately, and wish to save their
last complete backup to an Archive pool. Basically, I want to migrate a
few jobs from my "Weekly" pool to my "Archive" pool. Fortunately, I too
have one last full backup in the system, but since I only have one tape
d
planning to do it on repaired), but didn't
> actually get it out the door.
>
> FYI, though, your versions are screwed up. U2 was 06/06, U3 is 11/06.
> Not sure which one you mean, though, I would use U3 if I were you.
>
> Mike Seda wrote:
>
>> Hi All,
>> Has ther
Hi All,
Has there been any progress regarding the creation of a Solaris 10
(sparc) rescue cd? I am very interested in testing for you if there is
any code available. I have a SunFire V440 (sparc) box reserved for
testing. My plan is to test a bare metal restore of my production
SunFire V440 (So
point
>
> -Aaron
>
> Mike Seda wrote:
>> Aaron Knister wrote:
>>> During the job run an "iostat -k 2" and tell me what the iowait
>>> percentage is.
>> i've been using top, and the elevated load does positively correlate
>> to iowait
>
yeah... i thought so, but the docs recommend using md5 sigs so i left it
in the config
> What is the average speed of your jobs (in terms of MB/s) during this
> high load?
30 MB/s
>
> -Aaron
>
> Mike Seda wrote:
>> Wow... On my server running bacula-sd and bacula-dir,
Wow... On my server running bacula-sd and bacula-dir, the load average
can get as high as 9.2 when doing full backups of certain machines...
Any thoughts as to why?
Server specs:
Dell PE 2650 (2 x 3.06 GHz Xeon 32-bit single-core)
PX502 LTO-3 FC AutoChanger
RHEL 4 AS
bacula 2.0.1 i386 (via rpm)
d
Hi All,
I have a bacula client with about 283 GB used disk space, yet bacula
seemed to backup around 1.398 TB, and would have backed up more unless I
canceled the job. The machine's specs are listed below:
2-way Opteron 846
RHEL 4 WS
Bacula 2.0.1 x86_64 client (via rpm)
Any thoughts?
Best,
Mi
Hi All,
I received "ERR=Input/output error" upon executing "label barcodes".
However, "update slots" was successful. Any ideas? Verbose bconsole
output provided below:
Connecting to Storage daemon Tape at uwharrie:9103 ...
Sending label command for Volume "MSR100L3" Slot 1 ...
3301 Issuing autoc
/800 GB
Cheers,
Mike
James Ray wrote:
> Mike Seda wrote:
>
>> Hi All,
>> I noticed from the following threads that the Quantum PX502 Library
>> (LTO-3 SCSI) seems to work with bacula:
>> http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg13744.h
Hi All,
I wish to install bacula 2.0.0 on my el4 system. I just have a
question... What is the difference between the "rpms" and
"rpms-contrib-fschwarz" links at
http://sourceforge.net/project/showfiles.php?group_id=50727 ? Is there a
reason why these links are separated? Basically, which is th
Hi All,
I plan to install bacula next week. I am just curious as to what
"Maximum Spool Size" should be set to if my goal is to backup a total of
3 TB over gigE. I am still going for a D2T solution, but I just wanted
to see if you guys think that leveraging 100 to 300 GB of unused disk
for spoo
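Spool limits are set per Device in the SD; a sketch using the 300 GB figure above (the device name and path are illustrative, and a real Device needs its Media Type and Archive Device directives as well):

```
Device {
  Name = Drive-1                      # illustrative
  Spool Directory = /var/bacula/spool
  Maximum Spool Size = 300G           # cap drawn from the figure above
}
```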
Hi All,
Does bacula work for Solaris 10 clients with ZFS filesystems?
Best,
Mike
Hi All,
I noticed from the following threads that the Quantum PX502 Library
(LTO-3 SCSI) seems to work with bacula:
http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg13744.html
https://mail.dvs1.informatik.tu-darmstadt.de/lurker/message/20051214.230617.b5f41ed5.en.html
But, ha