Bill Moran said:
> In response to "Ralf Gross" <[EMAIL PROTECTED]>:
>
>> the / disk of my bacula (1.38.5) server crashed Friday night. The
>> postgres
>> db is on a separate disk. I restored the filesystem from a second backup
>> system. Since I only
Bill Moran said:
> In response to "Ralf Gross" <[EMAIL PROTECTED]>:
>
>> Does this part of the manual make sense at all?
>>
>> http://www.bacula.org/rel-manual/Catalog_Maintenance.html#SECTION000237000
>>
>> Compacting Your PostgreS
Hi,
I have a problem with a job that finished with an error message. Job 108
started yesterday morning and ended today at 3:55.
Bacula 1.38.5, debian stable, postgresql 7.4.7-6sarge
Running Jobs:
Console connected at 31-Jul-06 09:43
JobId Level Name Status
==
Hi!
Kern Sibbald said:
> Well, not having the *full* output from the job makes it a bit hard to
> diagnose the problem. Normally, there should be additional output with
> the
> first error that was printed (affected_rows = 0).
>
> I have seen this kind of problem when a job is pruned while the jo
Hi,
does anyone have experience with these LTO-3 libraries?
Overland ARCvault 24 Library
http://www.overlandstorage.com/german/products/arcvault24.html
NEC T40A Tape Library
http://www.sw.nec.co.jp/products/storage/english/ll040f/index.html
The Overland is pretty new but I guess that both will work
Hi,
we will extend our main fileserver with an external RAID array with
~4TB of disk space. Additionally we'll purchase a 24 slot Lib with
one LTO3 drive. The backup and the file server will be connected by
GigabitEthernet.
I've a spare DL380G2 server with 1 GHz, 1.5 GB of memory and some
intern
George R.Kasica said:
> In terms of scheduling items... any chance to have bacula show more
> than just the next jobs due to be run in the next 24 hours when you do
> a stat dir?
>
> It'd be nice to know further in advance, for purposes of loading tapes, if
> there isn't someone on site for, say, over a week
Ralf Gross said:
> we will extend our main fileserver with an external RAID array with
> ~4TB of disk space. Additionally we'll purchase a 24 slot Lib with
> one LTO3 drive. The backup and the file server will be connected by
> GigabitEthernet.
>
> I've a spare DL3
Hi,
lately I've seen that verify jobs that have differences just don't finish.
bacula 2.4.4-b1, psql
*st dir
[...]
Running Jobs:
JobId Level Name Status
==
9602 VolumeT VerifyVU0EF005-Absicherung
Ralf Gross said:
> lately I've seen that verify jobs that have differences just don't
> finish.
>
> bacula 2.4.4-b1, psql
>
> *st dir
>
> [...]
>
> Running Jobs:
> JobId Level Name Status
> =
Arno Lehmann said:
> 18.02.2009 09:36, Ralf Gross wrote:
>> Ralf Gross said:
>>> lately I've seen that verify jobs that have differences just don't
>>> finish.
>>>
>>> bacula 2.4.4-b1, psql
>>>
>>> *st dir
>>>
albre...@googlemail.com schrieb:
> Hello,
>
> I have a problem with the VolumeToCatalog Verify of one of my backup
> jobs. I always get the error message that some files are in the
> Catalog but not on the volume. However this is not true - I can
> successfully restore files which are being report
albre...@googlemail.com schrieb:
> 2009/3/9, Ralf Gross :
> > albre...@googlemail.com schrieb:
> >> Bacula always seems to think that the end of volume is reached:
> >> -
> >> 09-Mar 16:11 SD.xen JobId 3: End of Volume at file 7 on devic
Alex Bremer schrieb:
> 2009/3/10, Ralf Gross :
> > Hm, no idea what the problem is. But file 7 is not a real backed up
> > file, it's some kind of mark that bacula writes to the volume.
> >
> > Can you check the volume with bscan?
>
> Yes, seems to work - I t
Alex Bremer schrieb:
> 2009/3/11, Ralf Gross :
> > So the volume only has 7 volume files (markers/chunks). You could add
> > -d 100 or 200 to the daemon options in the bacula-fd start script on
> > the client where the verify is running and redirect the output to a
> >
Alex Bremer schrieb:
> 2009/3/12, Ralf Gross :
> > The verify is done in the fd, so I would add the debug option there
> > too. And also to the sd, because if bacula complains about a file that
> > is missing on the volume, you might find the answer there.
>
> Yes, I
John Drescher schrieb:
> On Sat, Mar 14, 2009 at 12:56 PM, cpreston
> wrote:
> >
> > I'm resending this original poster's question because the
> > connection between bacula-users and the forum was down for a time.
> > There will be three more like this.
> >
> There is a forum now?
http://www.bac
Doug Forster schrieb:
> I have gone into the database and can see that the database is empty for the
> job in question. I think that there is an issue with the insertion of over a
> million entries all at once that is giving bacula a hard time. I have found
> a supporting post here:
> http://www.ba
Paul Hanson schrieb:
> Currently we have an IBM TS3200 working very well over fibre channel, which
> has two Ultrium 4 tape units. If I set concurrency to two (2) then both
> tape units can work fine. However, if only one tape unit is in operation
> and two jobs start for the SAME tape pool, then the
David Murphy schrieb:
> I want to wipe all data from Bacula. I purged and recreated the db and
> removed all the files in /var/lib/bacula and even emptied its log file, but
> status client= still shows all my old jobs. How do I remove them? I
> think they are causing issues with trying to rest
Mike Ruskai schrieb:
> When a tape volume is recycled, the contents are lost. Is the same true
> for a disc volume? Is the whole file truncated, or does it just start
> from the beginning, and only destroy the contents it actually overwrites?
bacula always recycles the whole volume, no matter
Hurtlin Blaise schrieb:
> Hi,
>
> I've been using Bacula for some months now. A very good product!
>
> Unfortunately, a tape used for daily backup died. How can I replace it?
> I want to label a new tape with the same name as the old, defective one. Is
> this possible? Or do I have to delete the old one from Ba
John Lockard schrieb:
> I can't see a way in 2.4.x, but maybe it's present in the
> 3.0.x code... I would like to compress my Incremental backups,
> but not my Differential backups or Full backups.
>
> I keep my incremental backups on disk. They never transition
> to Tape. My Differentials run
Kern Sibbald schrieb:
>
> This is to inform you that we have uploaded the Bacula version 3.0.0 source
> tar files and the Win32/64 installer files to the Bacula Source Forge
> download location.
Thanks for your (and all contributors') work on bacula!
Ralf
--
Hi,
I'm having a hard time finding the reason why my verify jobs are
failing with bacula 2.4.4.
This week a backup of ~10 TB of data finished without errors. The backup
job uses one of three LTO-4 drives in an autochanger.
Now the VolumeToCatalog verify job fails each time. I tried two
different dr
Stephen Thompson schrieb:
> Is there any built-in/simple way to determine how far along a job is?
> Some kind of progress meter against a job size estimate?
>
> Even knowing how much has been put to tape at a given point would be
> nice. We have jobs that take more than 24 hours to run. :S
>
>
Ralf Gross schrieb:
>
> I'm having a hard time finding the reason why my verify jobs are
> failing with bacula 2.4.4.
>
> This week a backup of ~10 TB of data finished without errors. The backup
> job uses one of three LTO-4 drives in an autochanger.
>
> Now the VolumeTo
Item 1: Extend the verify code to make it possible to verify
older jobs, not only the last one that has finished
Date: 10 April 2009
Origin: Ralf Gross (Ralf-Lists ralfgross.de)
Status: not implemented or documented
What: At the moment a VolumeToCatalog job compares
Steven Palm schrieb:
>
> On Apr 27, 2009, at 2:00 PM, Arno Lehmann wrote:
> > I'd suggest to just back up that pool with a very plain setup.
> > Accurate backups will be good, probably.
>
> My only question is, given its extensive use of hardlinks to
> minimize the pool size, if there anyth
Hi,
there is a regular discussion on the backuppc mailing
elmarr...@systemcompetence.de schrieb:
>
> Filename.FilenameId,batch.LStat, batch.MD5 FROM batch JOIN Path ON
> (batch.Path = Path.Path) JOIN Filename ON (batch.Name = Filename.Name):
> ERR=disk I/O error
have you checked your system log files or dmesg? Is the partition
still mounted rw or was
Hi,
is it possible to configure a backup job to _not_ store information
about the backed up files/paths etc in the database?
I recently tried to backup my BackupPC pool with bacula. BackupPC
makes excessive use of hardlinks, I ended up with 50 GB of data in my
postgres database only for this sing
Ralf Gross schrieb:
>
> is it possible to configure a backup job to _not_ store information
> about the backed up files/paths etc in the database?
>
> I recently tried to backup my BackupPC pool with bacula. BackupPC
> makes excessive use of hardlinks, I ended up with 5
Holikar, Sachin (ext) schrieb:
> Hello,
>
> In Bacula, we are using LTO-3 tapes defined in a volume. One strange thing we
> notice is that the tapes are marked as Full at different Volume Bytes written.
> Please see below: for one tape, it shows around 800 GB of bytes written
> and marked as Ful
ginzzer schrieb:
> I have an extra disk; I formatted it as ext3 (no partition), and I can mount
> it and do I/O on it. I set up a device like
>
>
> Device {
> Name = FileStorage
> Media Type = File
> Archive Device = /dev/sdg
> LabelMedia = yes; # lets Bacula label unlabeled media
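For a file-type device, the Archive Device normally points at a mounted directory rather than a raw block device; a minimal sketch of such a resource, assuming the disk were mounted at a hypothetical /backup directory:
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /backup            # directory on the mounted ext3 disk (hypothetical mount point)
  LabelMedia = yes                    # lets Bacula label unlabeled media
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}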
John Doe schrieb:
>
> Hi,
>
> each day I get "Verify Fatal Error" on my 4 scheduled verify jobs...
> But when I run them manually, they seem to work fine...
> By example:
>
> 08-Jul 05:00 backup-dir JobId 739: Start Verify JobId=739 Level=Incremental
> Job=filer_Verify.2009-07-08_05.00.55
>
Hi,
in the past there was some discussion about what transfer rates can be reached
with bacula and modern LTO-4 drives.
I've spent some time tuning my setup.
After changing the setup I can reach up to 130 MB/s with spooling from a 2 disk
RAID0 and a few changes in my bacula-sd.conf. IIRC chai
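A minimal bacula-sd.conf sketch of the kind of tuning meant here, with data spooling enabled; the directive names are standard, but the values and paths below are assumptions, not the exact settings of this setup:
Device {
  Name = LTO4-Drive                   # hypothetical device name
  Media Type = LTO4
  Archive Device = /dev/nst0
  Maximum File Size = 5G              # fewer file marks while streaming
  Maximum Block Size = 262144         # larger blocks reduce per-block overhead
  Spool Directory = /spool/bacula     # fast local RAID used as spool area
  Maximum Spool Size = 200G
}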
Hi,
I'm looking for some info about the usage of tape volumes (date of first usage,
data written...).
*llist media=A00162L4
mediaid: 539
volumename: A00162L4
slot: 34
poolid: 49
mediatype: LTO4
firstwritten: 2009-07-10 15:36:16
lastwrit
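The fields shown by llist media come from the catalog's Media table, so the same usage information can be pulled for all volumes with a simple query (via psql or bconsole's sqlquery):
SELECT VolumeName, FirstWritten, LastWritten, VolJobs, VolMounts, VolBytes
FROM Media
ORDER BY FirstWritten;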
Eduardo Sieber schrieb:
>
> I have a bacula version backup-dir Version: 2.4.4 (28 December 2008)
> i486-pc-linux-gnu debian 5.0, installed on an Ubuntu 9.1 server. I have 2 tape
> drives attached to this server (a DLT 40/80 GB and a Sony SDX470V 40/102 GB).
>
> I've run btape on both tapes and everyt
Bob Hetzel schrieb:
>
> Has anybody tinkered around with spooling backups on an SSD (aka solid
> state drive) or a raid-0 pair of them for higher performance?
>
> It would seem that the issue of latency introduced by thrashing the hard
> drive with several concurrent readers and writers would b
Meyer, Mark schrieb:
> I'm interested in which parameters you're using, and for which scheduler.
> Did you test using synthetic benchmarks or with the production
> configuration?
I use these parameters for the cfq scheduler (I got the info about that
on the LKML).
echo 10 > /sys/block/sda/queue/iosched/sl
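For reference, the cfq tunables live under /sys/block/<device>/queue/iosched/; a small shell sketch to confirm the active scheduler and dump the current values (it only reads, so the output is simply whatever your kernel is set to):
cat /sys/block/sda/queue/scheduler            # the active scheduler is shown in [brackets]
for t in /sys/block/sda/queue/iosched/*; do
    echo "$(basename $t) = $(cat $t)"         # print each cfq tunable and its current value
done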
rorycl schrieb:
>
> I'm going to cross-post this text on the Amanda and Bacula lists.
> Apologies in advance if you see this twice.
>
> Our company is about to provide centralised backups for several pools of
> backup data of between 1 and 15TB in size. Each pool changes daily but
> backups to ta
Alan Brown schrieb:
> On Thu, 13 Aug 2009, Ralf Gross wrote:
>
> > I just had a bunch of 40 LTO-4 tapes that had problems during backups
> > (or worse - only during the verify job afterwards). All from the same
> > production date. So my trust in tapes is not that good
Hi,
I'm stuck at the 'status storage' output, or rather at a mount command.
I started a Verify job and had to mount a volume into a drive (ULTRIUM-TD4-D3)
because the drive was unmounted at that time. Nothing exciting, I thought.
mtx-changer script log:
20090820-08:42:37 Parms: /dev/Neo4100 unlo
Hi,
Bob Hetzel schrieb:
> My suggestion is to try upgrading your server to 3.0.2. You won't need to
> upgrade all your FD's. Since I went through that, the things that used to
> hang bacula in my environment (at exactly the same place you describe) are
> fixed.
3.0.2 is not yet available in
Sven Hartge schrieb:
> Ralf Gross wrote:
> > Bob Hetzel schrieb:
>
> >> My suggestion is to try upgrading your server to 3.0.2. You won't need to
> >> upgrade all your FD's. Since I went through that, the things that used to
> >> hang bacula
bdeschanes schrieb:
>
> I have NetWorker driving a Quantum Superloader. About 3 months ago
> it started doing exactly the same thing. As long as jobs are running
> it's fine, but once a job has been complete for even a couple
> minutes it takes a bus-reset to make it work again. The only common
>
Hi,
does anyone have a custom SQL query to get all files on a volume?
Something like
12: List Files for a selected Job
but for volumes. I tried a combination of query 12 and
14: List Jobs stored for a given Volume name
But my query was a bit suboptimal... :/
Out of memory: kill process 32013 (bacu
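A hedged sketch of such a query for PostgreSQL, tying File records to a volume through JobMedia; the volume name is just an example, and a LIMIT is advisable given the out-of-memory kill shown above:
SELECT DISTINCT Path.Path || Filename.Name AS fullname
FROM Media
JOIN JobMedia ON JobMedia.MediaId    = Media.MediaId
JOIN File     ON File.JobId          = JobMedia.JobId
JOIN Filename ON Filename.FilenameId = File.FilenameId
JOIN Path     ON Path.PathId         = File.PathId
WHERE Media.VolumeName = 'A00162L4'   -- example volume name
LIMIT 10000;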
Martina Mohrstein schrieb:
> So my question is: how can I prevent the scheduling of a job when the
> same job is already running?
Maybe the new Duplicate Job Control feature in 3.0.x helps to prevent
this?
http://www.bacula.org/en/dev-manual/New_Features.html#515
- Allow Duplicate Jobs
- Allow Hi
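A minimal sketch of how that would look in a 3.0.x Job resource (the job name is hypothetical, and the other required directives are left out):
Job {
  Name = "Nightly-Backup"             # hypothetical job name
  # Type, Client, FileSet, Schedule, Storage, Pool, Messages as usual
  Allow Duplicate Jobs = no           # don't start a second instance of the same job
}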
Silver Salonen schrieb:
> On Sunday 30 August 2009 13:58:44 Ralf Gross wrote:
> > Martina Mohrstein schrieb:
> > > So my question is: how can I prevent the scheduling of a job when the
> > > same job is already running?
> >
> > Maybe the new Duplicate Job Co
Christian Rohmann schrieb:
>
> Here are the devices listed (lsscsi):
>
> --- cut ---
> [1:0:0:0]  tape     QUANTUM  ULTRIUM 4   2170  /dev/st0
> [1:0:0:1]  mediumx  DELL     PV-124T     0063  /dev/sch0
> --- cut ---
Can you post the 'lsscsi -g' output and try the sg device lsscsi
Daniel Bareiro schrieb:
>
> When updating an installation of Debian GNU/Linux testing, I see that
> the installed version of bacula-fd was updated along with it, and is
> now 3.0.2-3+b1 from the Debian testing repositories.
>
> The problem with this is that I am observing that, for some reason, the
> B
Hi,
bacula 3.0.2 (updated 3 weeks ago). Can anyone explain why bacula is not using
a volume from the INV-MPC-Differential pool?
status dir:
[...]
Running Jobs:
Console connected at 10-Sep-09 12:06
Console connected at 10-Sep-09 12:19
JobId Level Name Status
John Drescher schrieb:
> On Thu, Sep 10, 2009 at 6:37 AM, Ralf Gross wrote:
> >
> > *list media pool=INV-MPC-Differential
> > +-++---+-++--+--+-+--+---+---+---
Willians Vivanco schrieb:
> Hi, I need to restore data from server A's backup onto server B's
> filesystem... Any suggestions?
err, where's the problem? What is not working?
bconsole -> restore -> -> Restore Client:
Ralf
--
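A hedged one-line sketch of that restore in bconsole, with hypothetical client names; the restoreclient keyword selects the machine the restored files get written to:
* restore client=serverA-fd restoreclient=serverB-fd select current all done
(then review and confirm the generated restore job)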
Ralf Gross schrieb:
> John Drescher schrieb:
> > On Thu, Sep 10, 2009 at 6:37 AM, Ralf Gross wrote:
> > >
> > > *list medi
Pedro Bordin Hoffmann schrieb:
> Hello!
> I'm doing a backup that requires 2 or 3 tapes, so my question is:
> the label of the first tape is monday, and when I put in a second tape, should I
> name it something like monday-2? Or something else? Or monday again?
> When I run this backup a second time, sho
Bernardus Belang (Berni) schrieb:
>
> I just want to know whether or not an IBM or HP LTO-4 tape drive connected
> to RedHat Enterprise Linux 5 will work with Bacula?
> Thank you for your attention.
As long as your OS supports the drive and the changer, bacula will work
fine.
So first test
Gabriel - IP Guys schrieb:
>
>
> > -Original Message-
> > From: John Drescher [mailto:dresche...@gmail.com]
> > Sent: 22 October 2009 18:13
> > To: Gabriel - IP Guys; bacula-users
> > Subject: Re: [Bacula-users] First Backup Completed - but still
> > confusion, full backup only 23M?
> >
Arno Lehmann schrieb:
>
> 30.10.2009 07:24, Leslie Rhorer wrote:
>
> > 2. Span multiple target drives
>
> Sure.
I'm not sure if he might have thought of a single backup job spanning
multiple drives.
This wouldn't be possible AFAIK.
Ralf
---
James Harper schrieb:
> > Arno Lehmann schrieb:
> > >
> > > 30.10.2009 07:24, Leslie Rhorer wrote:
> > >
> > > > 2. Span multiple target drives
> > >
> > > Sure.
> >
> > I'm not sure if he might have thought of a single backup job spanning
> > multiple drives.
> >
> > This wouldn't be possible AFA
Arno Lehmann schrieb:
> Hi,
>
> 30.10.2009 11:56, James Harper wrote:
> >
> >> -Original Message-----
> >> From: Ralf Gross [mailto:r...@stz-softwaretechnik.de] On Behalf Of Ralf
> > Gross
> >> Sent: Friday, 30 October 2009 2
Hi,
bacula 3.0.2, psql, debian etch
Every now and then I receive error mails about missing files from verify jobs
where I can't find the problem.
The backup job:
31-Okt 03:34 VUMEM004-dir JobId 16728: Bacula VUMEM004-dir 3.0.2 (18Jul09):
31-Okt-2009 03:34:20
Build OS: x86_64-p
Martin Simmons schrieb:
> >>>>> On Tue, 3 Nov 2009 09:51:17 +0100, Ralf Gross said:
> >
> > bacula 3.0.2, psql, debian etch
> >
> > Every now and then I receive error mails about missing files from verify
> > jobs
> > where I can't fi
Martin Simmons schrieb:
> >>>>> On Tue, 10 Nov 2009 10:54:26 +0100, Ralf Gross said:
> >
> > Martin Simmons schrieb:
> > > >>>>> On Tue, 3 Nov 2009 09:51:17 +0100, Ralf Gross said:
> > > >
> > > > bacula 3.0.2, psq
Michael Galloway schrieb:
> Out of curiosity, I'm wondering what other folks are getting for backup rates
> to LTO4. I seem to be getting these sorts of rates:
>
> local disk backups (3ware raid6 9650SE sata disks/xfs filesystem):
> Elapsed time: 8 hours 22 mins
> Priority:
Hi,
where can I see the scheduled start time of a job that was started
with bconsole -> run? I started several jobs this way, all with a
start time in the future. After the run command finished, I got a
bconsole message for each job "waiting for x seconds". But with
status dir I now only see
Drew Bentley schrieb:
> So I set up a new backup server with Bacula 2.2.7 EL3 RPM. I was
> running some tests to make sure everything checked out ok with my
> configs, a few clients. Once my testing was complete, I wanted to
> start clean, remove all the old jobs, terminated jobs, etc.
>
> I figure
Drew Bentley schrieb:
> On Jan 2, 2008 2:21 PM, Ralf Gross <[EMAIL PROTECTED]> wrote:
> > Drew Bentley schrieb:
> >
> > > So I set up a new backup server with Bacula 2.2.7 EL3 RPM. I was
> > > running some tests to make sure everything checked out ok with m
Drew Bentley schrieb:
> > > > >
> > > > > Terminated Jobs:
> > > > > JobId  Level  Files  Bytes  Status  Finished         Name
> > > > >
> > > > >     1  Full       0      0  Error   02-Jan-08 13:09
> > > > > mai
Hi,
I've a problem with some verify jobs. My normal backup/verify jobs are
running fine. For my archive backups I created a extra psql db - I
don't know if this makes a difference.
# Catalog
Catalog {
Name = MyCatalog
dbname = bacula; user = bacula; password = verysecret
}
# Archive Catalog
Ralf Gross schrieb:
> 03-Jan 10:08 VU0EM005-sd JobId 39: Fatal error: read.c:139 Error
> sending to File daemon. ERR=Die Wartezeit für die Verbindung ist
> abgelaufen (German for "Connection timed out")
> 03-Jan 10:08 VU0EM005-sd JobId 39: Error: bsock.c:306 Write error
> sending 65536 bytes to client:10.60.1.252
Hi,
debian etch, amd64, postgres 8.1, bacula 2.2.7
I'm a bit confused by rate numbers I get for some backup jobs. In the example
below, I get ~24 MB/s if I calculate the backup rate myself (943 GB in
10h 57m). The rate in bacula's jobs output is just 894 KB/s.
Is this rate value something diff
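A quick sanity check of the numbers with bc (assuming 1 GB = 1024 MB and 1 MB = 1024 KB):
$ echo 'scale=1; 943 * 1024 / (10*3600 + 57*60)' | bc          # MB/s for 943 GB in 10h 57m
24.4
$ echo 'scale=1; 894 * (10*3600 + 57*60) / 1024 / 1024' | bc   # GB that 894 KB/s would move in the same time
33.6
So ~24 MB/s matches the job size and elapsed time, while 894 KB/s over the same period would amount to only ~34 GB.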
Ralf Gross schrieb:
>
> I'm a bit confused by rate numbers I get for some backup jobs. In the example
> below, I get ~24 MB/s if I calculate the backup rate myself (943 GB in
> 10h 57m). The rate in bacula's jobs output is just 894 KB/s.
>
> Is this rate value some
Item 1: enable/disable compression depending on storage device (disk/tape)
Origin: Ralf Gross [EMAIL PROTECTED]
Date: 2008-01-11
Status: Initial Request
What: Add a new option to the storage resource of the director. Depending
on this option, compression will be enabled
Nick - schrieb:
> Next I run mtx -f /dev/sg0 status
>
> Storage Changer /dev/sg0:2 Drives, 23 Slots ( 1 Import/Export )
> Data Transfer Element 0:Empty
> Data Transfer Element 1:Empty
> Storage Element 1:Full :VolumeTag=13
> [...]
> mtx -f /dev/sg0 inquir
Timo Neuvonen schrieb:
> Below are a few clips from my recent backup logs (2.2.7 on CentOS 5.1). This
> has never been a big issue for me, so I haven't bothered with the rate
> values. But could someone advise me what causes the huge difference between
> the transfer rate reported by storage d
Tilman Schmidt schrieb:
> >>
> >> I saw that there will be Baculasystems in Paris next week. Who knows who
> >> founded this company?
> >
> > I think that you can get information here
> > http://www.mail-archive.com/search?q=professional%20support&[EMAIL
> > PROTECTED]&start=0
>
> Did you a
Marc Schiffbauer schrieb:
>
> I just uploaded updated bacula packages to the packman package
> repository (Version 2.2.8)
> [...]
Thanks for the good work!
Ralf
Hydro Meteor schrieb:
>
> For those interested in running Bacula 2.2.8 on Mac OS X 10.5.1 (Leopard)
> including Leopard Server, I can confirm, with a simple Bacula backup and
> restore test, that Bacula does not capture or restore file system resources
> that have Access Control List (ACL) metadat
Bruno Friedmann schrieb:
> Noah Dain wrote:
> > On Feb 11, 2008 11:13 AM, Chun Kit Hui <[EMAIL PROTECTED]> wrote:
> >> Dear Bruno,
> >>
> >> You mean that xfs will save ACLs and extended data together with the file
> >> instead of meta-data? And without special configuration, Bacula will backup
> >
Bruno Friedmann schrieb:
> Ralf Gross wrote:
> > Bruno Friedmann schrieb:
> >> Noah Dain wrote:
> >>> On Feb 11, 2008 11:13 AM, Chun Kit Hui <[EMAIL PROTECTED]> wrote:
> >>>> Dear Bruno,
> >>>>
> >>>> You mean that x
Hi,
at the moment I only back up to tape (LTO-3/4), but soon I'll start to
do incremental and differential backups to disk. The gzip compression
can only be enabled on a per-job basis, thus it will be enabled for
full backups to tape too.
The manual states that 'it is not generally a good idea to
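For reference, software compression is enabled in the Options block of the FileSet a job uses, so it applies to every run of that job regardless of where the data ends up; a minimal sketch with hypothetical names and paths:
FileSet {
  Name = "Disk-FileSet"               # hypothetical FileSet name
  Include {
    Options {
      signature = MD5
      compression = GZIP              # software compression for all jobs using this FileSet
    }
    File = /data                      # hypothetical path
  }
}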
John Drescher schrieb:
> > So with LTO it seems there is no problem using gzip and the drive's
> > hardware compression.
> >
> > Any thoughts on this?
> >
> With software compression on an LTO device your backups will take much
> longer unless your client can somehow compress at the rate your tape
Dan Langille schrieb:
> There was a recent discussion of hardware versus software compression.
> In general, I recommend hardware compression, unless your software can
> keep up with your tape drive.
>
> Is all hardware compression compatible?
>
> Given a tape containing compressed data, can it
Justin Francesconi schrieb:
> Well, it turns out there was an issue with the bacula-dir.conf as stated.
> It works fine and btape can mostly control the drive, except that every time
> I try to run a backup job to the tape, I get the following error:
> Cannot find any appendable volumes.
> Please us
Tilman Schmidt schrieb:
> On two of my Bacula installations, entering the command "show nextvol"
> in bconsole frequently crashes the director. Symptoms:
> - bconsole exits (shell prompt appears)
> - bacula-dir process exits (vanishes)
> - after restarting bacula-dir and re-entering bconsole, I rec
[EMAIL PROTECTED] schrieb:
> I have a FileVolume that is full. How do I purge about 50% of the files in
> this volume?
>
> This is why I need to reduce the size of the file.
> The FileVolume is in a partition that is now full.
You can't partially free space in a volume. You have to purge the
vol
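A minimal bconsole sketch of freeing a whole file volume so it can be reused; the volume and pool names are hypothetical:
* list volumes pool=FilePool
* purge volume=FileVol-0007
The purge drops all job and file records for that volume from the catalog; with Recycle = yes in the Pool, the volume is then rewritten from the beginning on its next use.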
Carlos André schrieb:
> I've got a little problem here at work :)...
>
> Backup policy: we use an IBM LTO-3 to make full backups on Fridays
> and an HP DAT72 to make incremental backups from Monday to Thursday.
>
> I'm trying to implement Bacula (2.2.8) using the same policy (LTO 3 on Fridays for
> full bac
Carlos André schrieb:
> I've got a little problem here at work :)...
>
> Backup policy: we use an IBM LTO-3 to make full backups on Fridays
> and an HP DAT72 to make incremental backups from Monday to Thursday.
>
> I'm trying to implement Bacula (2.2.8) using the same policy (LTO 3 on Fridays for
> full bac
Hi,
during Volume2Catalog verify jobs I sometimes get a "Packet size too
big..." error message and the job fails. The error usually doesn't
appear during a verify rerun of the same job (debian etch, bacula
2.2.7).
02-Jan 17:55 VU0EM005-sd JobId 34: End of file 249 on device "LTO3"
(/dev/ULTRIUM-
Riho Lodi schrieb:
>
> Can I clean old backup jobs (files) out of a volume file to make the volume
> file smaller?
No, you can only recycle the whole volume.
Ralf
Hi,
last night I was hit by an mtx/drive problem.
20081007-00:02:22 Doing mtx -f /dev/Neo4100 load 96 2
20081007-00:02:22 Device /dev/ULTRIUM-TD4-D3 - not ready, retrying...
20081007-00:02:23 Device /dev/ULTRIUM-TD4-D3 - not ready, retrying...
[...]
20081007-00:07:35 Parms: /dev/Neo4100 loaded 96
Arno Lehmann schrieb:
> Perhaps a hardware-related problem? Have you had a look into the
> system's log files?
Didn't find anything related in the logs.
> > Then the sd died:
>
> Now that's bad... even in case of a serious problem the SD shouldn't die.
>
> > 07-Okt 00:19 VU0EA003-sd: ABORTI
Stefan Lubitz schrieb:
> [...]
> 08-Oct 22:06 BACKUP3-sd JobId 10: Job write elapsed time = 06:26:12,
> Transfer rate = 70.36 M bytes/second
The job is writing at 70.36 MB/s to the spool device, which seems OK
to me.
> [...]
> 09-Oct 04:53 BACKUP3-sd JobId 10: Despooling elapsed time = 0
Chris Howells schrieb:
> I am running bacula 2.4.3 with an IBM TS3100 autochanger. The setup has
> been running fine for a year, but we've recently added a third tape to
> each pool, and since then bacula happily writes to the first two tapes,
> but then refuses to go any further, even though the
Doytchin Spiridonov schrieb:
>
> the no.1 item from the list here:
>
> http://www.bacula.org/en/?page=projects
>
>
> Item: 1 Accurate restoration of renamed/deleted files
That's the most important new feature I'm looking forward to. I
recently was hit by a filesystem corruption and had to
Doytchin Spiridonov schrieb:
> By the way I am curious what percentage of users using Bacula are
> aware of your experience. It is mentioned in the docs but I am sure
> a lot of people miss that info.
>
> We also run a find script - this helps to remove the revived deleted
> files after restore...
Kjetil Torgrim Homme schrieb:
> [...]
> any comments? I might write a patch to do this, at least the first
> two directives, so let me hear it if you think it should be done
> differently :-)
I like the proposed changes to the SpoolData and Compression directives.
I posted a feature request to the li
pedro moreno schrieb:
>Hi people.
>
> I want to know if this is possible: I have 1 tape device where I
> save my Full backups; my Differential backups go to my hard disk.
> When I run the Full backups, compression is disabled in order to use
> the device's HW compression, but my differenti