Julien Cigar wrote:
> ...
> (sa0:isp0:0:5:0): REWIND. CDB: 1 0 0 0 0 0
> (sa0:isp0:0:5:0): CAM Status: SCSI Status Error
> (sa0:isp0:0:5:0): SCSI Status: Check Condition
> (sa0:isp0:0:5:0): MEDIUM ERROR asc:52,0
> (sa0:isp0:0:5:0): Cartridge fault
> (sa0:isp0:0:5:0): Retrying Command (per Sense D
Sébastien Delneste wrote:
>
> I'm administering a heterogeneous hosting platform and we have all
> kinds of servers which we back up to 2 different storages using one
> director.
>
> One storage is running great but the second one is giving me problems
> when I try to re-use volumes (wh
Julien Cigar wrote:
Are you sure you need these additional options?
> Hardware End of Medium = no
> Backward Space Record = no
> Backward Space File= no
> Fast Forward Space File = no
> BSF at EOM = yes
> Two EOF = yes
I remember that there has been some discussion on the lis
T. Horsnell wrote:
> >> I want to know if this is possible: I have 1 tape device where I
> >>save my Full backups; my Differential backups go to my hard disk.
> >>When I run the Full backups, my compression is disabled in order to use
> >>my device's HW compression, but my differential backups h
pedro moreno wrote:
>Hi to all.
>
>I'm in the same situation as Hermant, this is a small feature that
> could help a lot of people like us. Normally
>everybody will enable HW compression over SW in a tape device.
>
>Ralf, I will request this again; hope in the future t
Sébastien Delneste wrote:
> I notice something else :
> The VolFiles is always 0 ... is this normal ?
I guess not.
Anything additional in the database logs?
Ralf
Kevin Keane wrote:
> I'm troubleshooting my backup strategy again.
>
> For some reason, today all my jobs ran a full backup instead of the
> scheduled incremental. The messages say "Backup Level: Full (upgraded
> from Incremental)". In this particular example below, the most recent
> full bac
Hi,
I have a 3 drive autochanger (bacula 2.2.8). Right now I configure
each job to use one of the 3 drives, not the changer device.
Storage {
Name = Neo4100
Address =
SDPort = 9103
Password = "wiped"
Device = Neo4100
Media Type = LTO4
Autochanger = yes
Heartbeat Interval =
Hi,
I hate to reply to my own mail, but does nobody have an idea about
what I'm trying to do? ;)
If not, I'll submit a feature request, because I think it is more
important to limit the number of concurrent jobs on the drives than on
the changer.
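What I have in mind would look roughly like this in bacula-dir.conf (a sketch only; the names, address and password are placeholders, and honoring the per-drive limit is exactly what the feature request asks for):

```
# bacula-dir.conf (sketch) - one Storage resource per physical drive
Storage {
  Name = Neo4100-Drive-1
  Address = sd.example.org
  SDPort = 9103
  Password = "wiped"
  Device = Drive-1                 # one drive of the Neo4100 changer
  Media Type = LTO4
  Autochanger = yes
  Maximum Concurrent Jobs = 1      # the limit I'd like enforced per drive
}
```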
Ralf Gross wrote:
>
> Hi,
Alex Chekholko wrote:
> Doesn't your config do exactly what you want?
>
> > > Storage {
> > > Name = Neo4100
> > > Address =
> > > SDPort = 9103
> > > Password = "wiped"
> > > Device = Neo4100
> > > Media Type = LTO4
> > > Autochanger = yes
> > > Heartbeat Interval = 5min
>
Alex Chekholko wrote:
> On Tue, 9 Dec 2008 21:02:02 +0100
> Ralf Gross <[EMAIL PROTECTED]> wrote:
>
> >
> > > # grep Maximum /etc/bacula/bacula-*.conf
> > > /etc/bacula/bacula-dir.conf: Maximum Concurrent Jobs = 3
> > > /etc/bacula/bacula-d
Alex Chekholko wrote:
> On Wed, 10 Dec 2008 18:22:55 +0100
> Ralf Gross wrote:
>
> > Alex Chekholko wrote:
> > > On Tue, 9 Dec 2008 21:02:02 +0100
> > > Ralf Gross wrote:
> > >
> > > >
> > > > > # grep Maximum /etc
Item 1: "Maximum Concurrent Jobs" for drives when used with changer device
Origin: Ralf Gross ralf-lists ralfgross.de
Date: 2008-12-12
Status: Initial Request
What: respect the "Maximum Concurrent Jobs" directive in the _drives_
Storage sectio
Hi,
what is the proper way to rename a client? Just changing the name in
the config file does not work. I guess some sql is involved but I
can't find anything about this in the docs or the wiki.
I found this thread with a reply from Arno:
http://thread.gmane.org/gmane.comp.sysutils.backup.bacul
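For the record, what I had in mind was something along these lines. This is my own assumption, not from the docs, so take a catalog dump first; the client names are examples:

```sql
-- rename the client row in the bacula catalog (PostgreSQL or MySQL)
UPDATE Client SET Name = 'newclient-fd' WHERE Name = 'oldclient-fd';
```

Afterwards the Name in the director's Client resource and in the client's bacula-fd.conf would have to be changed to match.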
Tilman Schmidt wrote:
> ...
> PS: This list could really use a better archive. Searching the SF mail
> archive is no fun at all, it takes ages to respond, and the message
> display is next to unreadable.
gmane works fine for me
http://news.gmane.org/search.php?match=bacula
Ralf
--
Kevin Keane wrote:
> > ...
> > Finally, mail allows me to keep the whole history of this list on my
> > site (disk space is cheap, and Bacula can back up my mail storage). I
> > see no way to easily access a larger forum locally.
> >
> I think you are making some rather large assumptions her
Sergio Belkin wrote:
> 2008/12/17 Marc Schiffbauer :
> > * Sergio Belkin wrote on 17.12.08 at 13:53:
> >> How Can I define a remote storage in bacula-sd.conf ? namely a tape
> >> device that is in another host that bacula director.
> >
> > I think you need to run the bacula-sd on that remot
GATOUILLAT Pierre-Damien wrote:
> I made two pools and my schedule is :
>
> Schedule {
> Name = "NightlySave"
> Run = Level=Full Pool=Weekly 1st-5th sat at 01:05
> Run = Level=Incremental Pool=DailyFile tue-fri at 00:05
> }
>
This looks good and should work.
> But in
Bob Hetzel wrote:
>
> I'm running v 2.4.3 of bacula on OpenSuse 10.2 with a two tape-drive
> autoloader and I've been having this problem every few days for a
> while
> now: bacula wants a tape that's already loaded but in a different
> drive
> and it's not able to just unload the tape and mo
Sergio Belkin wrote:
>
> Can be a tape is in more than a pool?
I don't think it's possible. How should different volume retention
times or other pool options be handled?
Ralf
--
Jonathan Larsen wrote:
>
> I am having an issue with my pools not recycling correctly. Right now i have
> jobs that were last written going back as far as 2008-09-07 that have yet to
> purge and go into recycling automatically. Below is one of my Pools and
> it's configuration. All my pools a
Ferdinando Pasqualetti wrote:
> I think you should use /dev/random, not /dev/zero unless hardware
> compression is disabled in order to have more realistic figures.
This wouldn't be a good idea; /dev/random and /dev/urandom are just too
slow at generating random data. To test the native speed of
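For what it's worth, a quick way to see how fast the two sources can even produce data, independent of any tape drive, is to time them against /dev/null (the tape device in the commented part is just an example):

```shell
# How fast can each source generate data at all? (no tape involved)
dd if=/dev/zero    of=/dev/null bs=1M count=1024
dd if=/dev/urandom of=/dev/null bs=1M count=64    # far slower on most kernels

# For a real drive test with incompressible data, generate a file once
# and stream that to the drive (path and device are examples):
# dd if=/dev/urandom of=/tmp/random.bin bs=1M count=1024
# dd if=/tmp/random.bin of=/dev/nst0 bs=1M
```

That way the random-number generator's speed doesn't distort the drive measurement.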
T. Horsnell wrote:
> Ralf Gross wrote:
> > Ferdinando Pasqualetti wrote:
> >> I think you should use /dev/random, not /dev/zero unless hardware
> >> compression is disabled in order to have more realistic figures.
> >
> > This wouldn't be a good id
Craig White wrote:
> On Wed, 2009-01-07 at 17:57 +0100, Ralf Gross wrote:
> > T. Horsnell wrote:
> > > Ralf Gross wrote:
> > > > Ferdinando Pasqualetti wrote:
> > > >> I think you should use /dev/random, not /dev/zero unless hardware
> >
Sergio Belkin wrote:
> The output test was:
>
> dd if=/dev/zero of=/dev/nst0 bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 7.80926 seconds, 134 MB/s
This looks ok.
> Below is output of tapeinfo -f /dev/sg0
>
> Product Type: Disk Drive
But sg0
Ferdinando Pasqualetti wrote:
>
> about one month ago I asked the list if someone else was experiencing some
> scheduled jobs not executed, without any error message using bacula-mysql
> 2.4.3-1. I did not get any feedback, so, on the release of bacula 2.4.4 I
> upgraded to it. The problem p
Ferdinando Pasqualetti wrote:
> >
> > Have you searched the messages logfile for anything related to these
> > jobs?
> Thanks for your interest,
> In the /var/bacula/log file there is no record in this period of time,
> only an empty line:
>
> 08-gen 18:47 bacula-dir JobId 31282: Begin pruning
Ferdinando Pasqualetti wrote:
> > Hm, but from the list jobs output of you first mail there have been some
> > more jobs running between 18:47 and 22:20.
> >
> >
> > | 31,282 | hserv2t-job | 2009-01-08 18:05:57 | B| F |
> 296,650|33,888,670,611 | T |
> > | 31,283 | dom
Sergio Belkin wrote:
> Job Report:
>
> 08-Jan 16:40 DirectorandStorageServerThatHasDat72-dir-dir JobId 1975:
> Bacula DirectorandStorageServerThatHasDat72-dir-dir 2.4.2 (26Jul08):
> 08-Jan-2009 16:40:04
> Build OS: i686-redhat-linux-gnu redhat
> JobId: 1975
>
Erik P. Olsen wrote:
> I have set up an exclude for File = /home/erik/.gvfs but it is nevertheless
> not
> excluded. Is it a bug or can't hidden directories be excluded? I keep getting
> following warnings:
>
> 14-Jan 23:38 epohost-fd JobId 1081: Could not stat /home/erik/.gvfs:
> ERR=Perm
Hi,
I have a problem with a new LTO-4 tape. Only 411 GB were written on it, then
bacula detected a write error.
JobId 9250: Error: block.c:568 Write error at 411:14724 on device
"ULTRIUM-TD4-D2"
(/dev/ULTRIUM-TD4-D2). ERR=Eingabe-/Ausgabefehler (i.e. input/output error).
JobId 9250: Error: Re-read of last
Ralf Gross wrote:
> Hi,
>
> I have a problem with a new LTO-4 tape. Only 411 GB were written on it, then
> bacula detected a write error.
>
> JobId 9250: Error: block.c:568 Write error at 411:14724 on device
> "ULTRIUM-TD4-D2"
> (/dev/ULTRIUM-TD
Hello,
> The problem described in the email below is probably an important data loss
> problem due (most likely) to an I/O error, but more importantly due to a
> misconfigured tape drive. From the information I see below, it appears to me
> that you have lost significant data. This is probabl
Kern Sibbald wrote:
> On Thursday 05 February 2009 12:48:18 Ralf Gross wrote:
> > Hello,
> >
> > > The problem described in the email below is probably an important data
> > > loss problem due (most likely) to an I/O error, but more importantly due
> > >
Victor Sterpu wrote:
> Backing up a mail server I realized that the backup is incomplete.
> I use bacula 2.4.4.
> Bacula-fd runs as root.
> My FileSet is like this:
> FileSet {
> Name = "mail"
> Include {
> Options {
> signature = MD5
> compression = GZIP
>
Kern Sibbald wrote:
>
> The problem described in the email below is probably an important data loss
> problem due (most likely) to an I/O error, but more importantly due to a
> misconfigured tape drive. From the information I see below, it appears to me
> that you have lost significant data.
Kern Sibbald wrote:
> On Saturday 07 February 2009 11:33:58 Ralf Gross wrote:
> > Kern Sibbald wrote:
> > > The problem described in the email below is probably an important data
> > > loss problem due (most likely) to an I/O error, but more importantly due
> >
Hi,
I purged some volumes of a failed backup and manually changed the state to
recycle. Now I see this in the bacula log:
11-Feb 08:19 VU0EA003-sd JobId 9460: Recycled volume "A00045L4" on device
"ULTRIUM-TD4-D2" (/dev/ULTRIUM-TD4-D2), all previous data lost.
11-Feb 08:19 VUMEM004-dir JobId 946
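For reference, the purge and manual status change correspond roughly to these bconsole commands (the volume name is taken from the log above; checking afterwards with "list volumes" is a good idea):

```
*purge volume=A00045L4
*update volume=A00045L4 volstatus=Recycle
*list volumes
```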
Martin Simmons wrote:
> >>>>> On Wed, 11 Nov 2009 14:56:33 +0100, Ralf Gross said:
> >
> > Martin Simmons wrote:
> > > >>>>> On Tue, 10 Nov 2009 10:54:26 +0100, Ralf Gross said:
> > > >
> > > > Martin Simmo
Moby wrote:
> It is my understanding that one must use the same job name for full and
> incremental backups, otherwise Bacula is not able to perform an
> incremental backup.
> I have a need to send full backups of machines to one disk and
> incremental backups to another disk. If I have to use t
[crosspost to -users and -devel list]
Hi,
we have been happily using bacula for a few years and are already backing up
some dozens of TB (large video files) to tape.
In the next 2-3 years the amount of data will be growing to 300+ TB.
We are looking for some very pricy solutions for the primary storage
Kevin Keane wrote:
> Just a thought... If I understand you correctly, the files never
> change once they are created? In that case, your best bet might be
> to use a copy-based scheme for backup.
Yes, the files won't change. They are mostly raw camera data. They
will be read again, but not chan
Alan Brown wrote:
> On Fri, 27 Nov 2009, Ralf Gross wrote:
>
> > Most files will be written once and maybe never been accessed again. But
> > the data need to be online and there is a requirement for backup and the
> > ability to restore deleted files (retention time
Arno Lehmann wrote:
> 27.11.2009 13:23, Ralf Gross wrote:
> > [crosspost to -users and -devel list]
> >
> > Hi,
> >
> > we have been happily using bacula for a few years and are already backing up
> > some dozens of TB (large video files) to tape.
> >
>
Kern Sibbald wrote:
> > [...]
> > Anyone else here with the same problem? Anyone (maybe Kern or Eric)
> > here that can tell if one of the upcoming new bacula features (dedup?)
> > could help to solve the problem with the massive amount of tapes
> > needed and the growing time windows and bandwid
thebuzz wrote:
>
> I'm having a big problem with an old Windows 2000 ASP server that I try to
> back up via Bacula
>
> FD on Windows server is 3.0.2 and server is 3.0.2
>
> I get a 3 GB backup every time - but when I do a bconsole command
> list files jobid=17
> it shows
>
> *list files job
Tom Epperly wrote:
> I don't understand why bacula seems to automatically create volumes for
> some pools, but forces me to use the "label" command for others. I've
> tried increasing the "Maximum Volumes" setting, but it doesn't seem to
> help.
Have you set 'LabelMedia = yes' in the Device co
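i.e. something like this, with a Label Format in the Pool so volume names can be generated automatically (a sketch; the names and path are examples, not your config):

```
# bacula-sd.conf
Device {
  Name = FileStorage
  Archive Device = /backup/bacula    # example path
  Media Type = File
  LabelMedia = yes                   # let bacula label blank volumes itself
  Random Access = yes
  Automatic Mount = yes
}

# bacula-dir.conf
Pool {
  Name = Default
  Pool Type = Backup
  Label Format = "vol-"              # volume names are generated from this prefix
  Maximum Volumes = 100
}
```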
Tom Epperly wrote:
> Ralf Gross wrote:
>> Tom Epperly wrote:
>>
>>> I don't understand why bacula seems to automatically create volumes
>>> for some pools, but forces me to use the "label" command for others.
>>> I've
Gabriel - IP Guys wrote:
>
> I want to purge all failed jobs from bacula, before I synchronised my
> disk which I keep the backups on.
>
> If I purge a job, will that job just be removed from the index, or will
> the data actually be removed from the volumes?
> [...]
the job will just be remov
Martin Reissner wrote:
> Hello,
>
> once again I have a problem with running concurrent jobs on my bacula
> setup. I'm using Version: 3.0.3 (18 October 2009) on DIR and SD and all
> data goes to a single SD where multiple Device Resources are configured
> (RAID-6 Harddisk Storage). Running concu
Steve Costaras wrote:
>
> I've been diving into Bacula the past 2-3 weeks to come up with a backup
> system here for some small server count but very large data store sizes
> (30+TiB per server).
>
> In the course of my testing I have noticed something and want to know if
> it's by design (
Tino Schwarze wrote:
> On Tue, Jan 05, 2010 at 04:55:51PM -0500, John Drescher wrote:
>
> > >> I'm not seeing anywhere close to 60M/s ( < 30 ). I think I just fixed
> > >> that. I increased the block size to 1M, and that seemed to really
> > >> increase the throughput, in the test I just did.
Thomas Mueller wrote:
>
> >> > I tried to do that years ago but I believe this made all tapes that
> >> > were already written to unreadable (and I now have 80) so I gave this
> >> > up. With my 5+ year old dual processor Opteron 248 server I get
> >> > 25MB/s to 45MB/s despools (which measures
Thomas Mueller wrote:
>
> >> > With
> >> >
> >> > Maximum File Size = 5G
> >> > Maximum Block Size = 262144
> >> > Maximum Network Buffer Size = 262144
> >> >
> >> > I get up to 150 MB/s while despooling to LTO-4 drives. Maximum File
> >> > Size gave me some extra MB/s, I think it's as import
jk04 wrote:
>
> Debian Server: bacula-server-2.4.4-1
> Fedora Client: bacula-client-3.0.3-5
>
> When I check the client's status from the server the following message
> is displayed:
>
> Fatal error: File daemon at "192.168.0.55:9102" rejected Hello command
>
> I believe this is because the
Thomas Wakefield wrote:
> Take a directory, dump it to tape, and it will live forever (roughly
> 5-10 years) on tape. And the copy on disk will be deleted. But if
> needed, we could pull the copy back from tape. We could possibly
> write 2 copies to tape for redundancy.
>
> I already use bacu
Uwe Schuerkamp wrote:
> Hi folks,
>
> I just set up the first 3.0.3 bacula server (compiled from source on
> CentOS 5.4) in our environment and was wondering whether 2.x clients
> can still talk to a 3.x server version? I cannot test this right now
> without going through major changes because t
Steve Costaras wrote:
>
>
> Some history:
>
> On 01/14/2010 16:24, Dan Langille wrote:
> > Steve Costaras wrote:
> >> On 01/14/2010 15:59, Dan Langille wrote:
> >>> Steve Costaras wrote:
> I see the mtimeonly flag in the fileset options but there are many
> caveats about using it as
CoolAtt NNA wrote:
>
> Hi All...
>
> I want to back up a client with 2 different filesets, each with a different
> file retention period.
> Please help me with how I should proceed.
>
> I have the following in bacula-dir.conf :
>
> Client {
> Name = mypc
> Address = 10.0.0.45
> FDPort = 9102
> Ca
Tommy wrote:
> New to bacula
>
> Ubuntu9.10 (Karmic..)
> bacula 2.4.4
> mysql
>
> Director runs on machine dell.xxx.xxx as dell-dir
> dell-fd and dell-sd test backups run fine
> Client (5.0.x) runs on thinky.xxx.xxx as thinky-fd
You can't use 5.0 bacula-fd with < 5.0 bacula-dir.
Ralf
-
Hi,
bacula 3.0.3 SD+ DIR, 2.4.4 FD, Debian Lenny, psql 8.4
The backup job 19429 was running for nearly two days and then failed while
changing the LTO3 tape. The job failed two times now. No messages in syslog.
The message "ERR=Datei oder Verzeichnis nicht gefunden" means "ERR=file or
directory
follow up
Cacti shows that swap started growing this morning and reached its
maximum when the job failed...
Ralf Gross wrote:
> Hi,
>
> bacula 3.0.3 SD+ DIR, 2.4.4 FD, Debian Lenny, psql 8.4
>
> The backup job 19429 was running for nearly two days and then failed
Ralf Gross wrote:
> follow up
>
> Cacti shows that swap started growing this morning and reached its
> maximum when the job failed...
Restarted the job yesterday evening. Now after 24h the SD seems to eat up
memory again.
Tasks: 172 total, 1 running, 171 sleeping,
Ralf Gross wrote:
> Ralf Gross wrote:
> > follow up
> >
> > Cacti shows that swap started growing this morning and reached its
> > maximum when the job failed...
>
> Restarted the job yesterday evening. Now after 24h the SD seems to eat up
> me
Glynd wrote:
>
> I am running Bacula 3.0.2 on Ubuntu server Apache2 MySQL
>
> The backups seem to be working OK so I thought I had better test a restore!
> This also seemed to run OK, but there were no files restored. I tried
> different "where" locations but still no joy.
> The log snip below
Glynd wrote:
>
> The "where" argument I left at default /bacula-restores and that is where I
> looked.
Can you post all the options you chose for the restore (cut & paste)?
Ralf
--
glynd wrote:
> 1 file selected to be restored.
>
> Run Restore job
> JobName: RestoreFiles
> Bootstrap: /var/lib/bacula/bin/working/mistral-dir.restore.9.bsr
> Where: /bacula-restores
> Replace: always
> FileSet: Sugar Set
> Backup Client: glyn-laptop-fd
Glynd wrote:
>
> Here is the ls -l on the ubuntu server where the bacula dir runs
>
> r...@mistral:/home/cirrus/mailarchiva_dist# ls -l /bacula-restores/
> total 0
>
> There is also the same directory on the Windows 7 glyn-laptop. and when I
> look in there the directory structure is there, bu
Arno Lehmann wrote:
> > A better way in my opinion is to used a spool sized ring buffer in
> > memory rather then a disk based spool. The consumer would only start
> > after the producer had put a large set amount in it and continued until
> > drained the buffer.
>
> Sure... those approaches
Bostjan Müller wrote:
>
> I have been testing our storage system and after the tests I deleted the
> databases and created them anew. After the new startup the Pools came out
> blank - there are no volumes in them. I ran the label barcodes command and
> it found all the tapes, but it did not add
Arno Lehmann wrote:
> Hello,
>
> 23.02.2010 14:05, jch2os wrote:
> > So I'm trying to restore a file. Here is what I'm doing.
> ...
> > You are now entering file selection mode where you add (mark) and
> > remove (unmark) files to be restored. No files are initially added, unless
> > you used t
Howdy!
I'm still thinking if it would be possible to use bacula for backing
up xxx TB of data, instead of a more expensive solution with LAN-less
backups and snapshots.
The problem is the time window and bandwidth.
If there were something like an incremental forever feature in
bacula the problem c
Gavin McCullagh wrote:
> Hi,
>
> On Fri, 26 Feb 2010, Ralf Gross wrote:
>
> > I'm still thinking if it would be possible to use bacula for backing
> > up xxx TB of data, instead of a more expensive solution with LAN-less
> > backups and snapshots.
>
Brock Palen wrote:
> Is there a way to verify a volumes internal checksums against its data?
>
> Even better is there a way to have the catalog compare its checksums
> against the files in volumes? I know about the verify job but that
> appears to be only for the most recent job run. I wo
Bellucci Srl wrote:
> "13-Apr 12:19 bckam101-dir JobId 46775: Fatal error: Unable to authenticate
> with File daemon at "bckam102:9102". Possible causes: Passwords or names not
> the same or
>
> Maximum Concurrent Jobs exceeded on the FD or FD networking messed up
> (restart dae
Jerry Lowry wrote:
> Martin, I am trying to restore the files to the file system on the
> bacula server. The client 'swift-fd' definitely does NOT have room on
> the disk to restore all the pdfs. That is why my restore is configured
> with -> "where= /backup0/bacula-restores".
>
> No,
>
Spencer Marshall wrote:
>
> Has anyone else experienced problems with "Packet size too big" in
> 5.0.1? I was running a Level=VolumeToCatalog at the time. I found
> bug an old bug http://bugs.bacula.org/view.php?id=1061 which
> appeared to have the same problems. I removed the "Heartbeat
> In
Phil Stracchino wrote:
> > The /usr slice is about 4.8G, and shouldn't be changing-- at least
> > not at the tune of 51M every night.
>
>
> "Shouldn't" is a powerful word. You might want to test the theory by
> doing something like this:
>
> find /usr /opt/bacula/bin -mtime -1 -ls
>
> to lis
Uwe Schuerkamp wrote:
> Hi folks,
>
> looks like lucid comes with bacula v5 which seemingly is unable to
> talk to our server running bacula 2.4.x:
>
> bacula-fd -c /etc/bacula/bacula-fd.conf -d 99 -f
> bacula-fd: filed_conf.c:452-0 Inserting director res: lotus-mon
> lotus-fd: filed.c:275-0
skipunk wrote:
>
> Hoping someone could help me out. My department recently purchased
> a Dell PowerVault TL2000 autochanger connected via SAS5.
>
> We are upgrading from a spectralogic AIT4 system.
>
> The sad part is the LTO-4 drive is running much slower than the AIT
> system. I'm avg aro
skipunk wrote:
>
> I am aware that more NICs do not increase throughput. Basically a
> backup server has to be in place tonight and I really don't want to
> start from scratch, and now I'm reaching for anything that will resolve
> the issue within the next few hours.
1. test the lto drive with tar/
Athanasios Douitsis wrote:
>
> Hi everyone,
>
> Our setup consists of a Dell 2950 server (PERC6i) w/ FreeBSD7 and two HP
> Ultrium LTO4 drives installed in a twin Quantum autochanger enclosure.
> Our bacula version is 5.0.0 (which is the current FreeBSD port version).
>
> Here is our problem
Athanasios Douitsis wrote:
> Tuning it down to 3+3 (from the original 6+6) seems to alleviate but not
> solve the problem completely. I totally agree with your suggestion; from
> this point onwards it seems to be a case of finding a spooling setup
> with enough I/O oomph.
>
> One question howeve
Daniel Bareiro wrote:
> ...
> Now that I see, this has nothing to do with the tape drive:
>
> -
> backup:~# smartctl -i /dev/sda
> smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce
> Allen
> Home page is h
Alan Brown wrote:
> On Sat, 12 Jun 2010, Daniel Bareiro wrote:
>
> > This card uses the module 'cciss' which has been loaded by the kernel:
>
> AFAIK cciss only supports disks.
Hm, no, it should work with tapes.
http://www.kernel.org/doc/Documentation/blockdev/cciss.txt
- quote -
SCSI sequent
Lamp Zy wrote:
>
> I'm using bacula-5.0.2.
>
> Is there a way in bconsole to see which jobs used a particular tape?
>
> I can run "list jobmedia", save the output to a file and then grep for
> the media name but it's a lot of steps and it shows only the jobid.
bconsole -> query
13: List Job
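Alternatively, a direct catalog query returns the job names as well (written against the usual catalog schema; the volume name is just an example):

```sql
-- all jobs that wrote to a given volume
SELECT DISTINCT Job.JobId, Job.Name, Job.StartTime
  FROM Job
  JOIN JobMedia ON JobMedia.JobId = Job.JobId
  JOIN Media    ON Media.MediaId  = JobMedia.MediaId
 WHERE Media.VolumeName = 'A00045L4';
```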
Hi,
I'm still using bacula 3.0.3 on debian lenny (amd64). In the next
weeks I wanted to move on to 5.0.x. But due to the freeze of the next
debian stable release there will be no 5.0.3 in debian for some time.
5.0.3 seems to have a lot of important bug fixes, so I'd like to skip
5.0.2 which is ava
Rory Campbell-Lange wrote:
> On 14/09/10, Ralf Gross (ralf-li...@ralfgross.de) wrote:
> > Hi,
> >
> > I'm still using bacula 3.0.3 on debian lenny (amd64). In the next
> > weeks I wanted to move on to 5.0.x. But due to the freeze of the next
> > debian st
Sven Hartge wrote:
> On 14.09.2010 18:02, Rory Campbell-Lange wrote:
> > On 14/09/10, Ralf Gross (ralf-li...@ralfgross.de) wrote:
>
> >> Thanks, I know this version from backports, and if there is no other
> >> deb available, I'll give it a try.
> >&
Thomas Mueller wrote:
> On Tue, 14 Sep 2010 16:41:04 +0200, Ralf Gross wrote:
>
> > Hi,
> >
> > I'm still using bacula 3.0.3 on debian lenny (amd64). In the next weeks
> > I wanted to move on to 5.0.x. But due to the freeze of the next debian
> >
Heitor Faria wrote:
>
> For several months, I haven't been able to compile the Bacula disaster
> recovery CD-ROM on Debian 5.
> The make command returns several dependency and missing-path errors (e.g.
> networking).
> Does anyone know what is happening?
I'm not sure if the Rescue CD is still supported / maintained
Hi,
I just updated from 3.0.3 to 5.0.3. I know that there have been problems with
the update_postgresql_tables script. Here are my indexes:
bacula=# select * from pg_indexes where tablename='file';
 schemaname | tablename | indexname | tablespace | indexdef
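For comparison, the composite index on the file table that the 5.0.x PostgreSQL table-creation script sets up looks like this, as far as I can tell (verify the name against your own make_postgresql_tables before creating anything):

```sql
-- expected composite index on the file table (PostgreSQL catalog)
CREATE INDEX file_jpfid_idx ON file (jobid, pathid, filenameid);
```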
Rory Campbell-Lange wrote:
> On 04/10/10, Ralf Gross (ralf-li...@ralfgross.de) wrote:
>
> All of the indexes are below; you seem to have the correct ones for the file
> table.
>
> The Debian problems with 5.0.3 were/are related to the upgrade trying to
> create
> an i
John Drescher wrote:
> On Mon, Oct 4, 2010 at 3:01 PM, Ralf Gross wrote:
> > Rory Campbell-Lange wrote:
> >> On 04/10/10, Ralf Gross (ralf-li...@ralfgross.de) wrote:
> >>
> >> All of the indexes are below; you seem to have the correct ones for the
&
Hi,
after updating to 5.0.3 I checked the state of some accurate backups. I'm not
sure I fully understand why bacula is backing up some files.
I'm interested in the job VUMEM008-psql-dumps.
Terminated Jobs:
JobId LevelFiles Bytes Status FinishedName
Hi,
it seems that I messed something up during the upgrade from 3.0.3 to
5.0.3 (Debian Lenny, AMD64, psql 8.4).
There is one job that repeatedly ends with error.
ERR=sql_get.c:433 No volumes found for JobId=
I'm not sure what the real problem is. Volume vumem008-inc-0663 is there and
was wr
Ralf Gross wrote:
>
> it seems that I messed something up during the upgrade from 3.0.3 to
> 5.0.3 (Debian Lenny, AMD64, psql 8.4).
>
> There is one job that repeatedly ends with error.
>
> ERR=sql_get.c:433 No volumes found for JobId=
I purged the 2 volume,
Ralf Gross wrote:
> Hi,
>
> after updating to 5.0.3 I checked the state of some accurate backups. I'm
> not
> sure I fully understand why bacula is backing up some files.
>
> I'm interested in the job VUMEM008-psql-dumps.
>
>
> Terminated J
Martin Simmons wrote:
> >>>>> On Tue, 5 Oct 2010 10:13:59 +0200, Ralf Gross said:
> >
> > All the information about the job tells me that 3 files and a directory were
> > backed up. But the size of the backup (4,478,194,813 bytes) does not fit
> >
Ralf Gross wrote:
> > job with the checksum errors:
> >
> > 05-Okt 04:05 VUMEM004-sd JobId 26039: Volume "vumem008-inc-0655" previously
> > written, moving to end of data.
> > 05-Okt 04:05 VUMEM004-sd JobId 26039: Ready to append to end of Volume
> &