On Mon, Jun 29, 2009 at 7:31 PM, Doug Forster wrote:
> I don't remember but I think bacula will traverse mount points so . . .
> "mount --bind olddir newdir"
> Would likely work. I do know though that there at least was an issue with
> differing file system types, i.e. nfs/gfs mounts etc.
i will try
On Mon, Jun 29, 2009 at 7:15 PM, Dirk Bartley wrote:
> Filesets can have exclude and include lists that are pulled in at
> runtime from files. The run before script could create these lists??
the fileset is defined server-side. how can the server include stuff
that is created on the client?
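(one mechanism that seems to fit, untested sketch, names and paths made up: bacula's Include can take a File entry whose list is read on the *client* at run time when the path is prefixed with "\\<", so a run-before script on the client can generate the list just before the job starts:)

```
FileSet {
  Name = "raid-tars"            # placeholder name
  Include {
    Options {
      signature = MD5
    }
    # "\\<" tells the FD to read this list file on the client at
    # backup time, so a ClientRunBeforeJob script can write it first
    File = "\\</var/lib/bacula/tar-list.txt"
  }
}
```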
hi!
we have servers that operate with several independent raid arrays that
contain (lots of) data to backup.
currently we create tar files of the data-to-be-backed-up on each raid
and would like bacula to pick it up from there.
we have a script that does the tar handling that is run as a
ClientRunBeforeJob script.
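a sketch of how such a job could be wired up (all names and paths here are placeholders, not our real config):

```
Job {
  Name = "raid1-tars"                 # placeholder
  Type = Backup
  Client = raidbox-fd                 # placeholder
  FileSet = "tar-files"               # placeholder
  Schedule = "nightly"
  Storage = File
  Pool = Default
  # runs on the client before the FD starts sending data;
  # a non-zero exit status cancels the job by default
  ClientRunBeforeJob = "/usr/local/sbin/make-tars.sh"
}
```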
hi!
reading the documentation i understand that you should have several
volumes for concurrent backups, on different devices/directories. (i
work on disk for now.)
However some people here on the list seem to be doing well with
concurrent backups to only one volume. is that actually true or am i
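for context, both setups do work with disk storage: with a single File device the concurrent jobs get interleaved into one volume (restores then have to skip over the other jobs' blocks), while separate devices give each concurrent job its own volume. a sketch of the multi-device variant in bacula-sd.conf (names and paths are placeholders):

```
Storage {
  Name = backup-sd
  Maximum Concurrent Jobs = 4
  # ... WorkingDirectory, Pid Directory, etc.
}

Device {
  Name = FileStorage1
  Media Type = File1
  Archive Device = /backup/disk1     # placeholder path
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
}

Device {
  Name = FileStorage2
  Media Type = File2
  Archive Device = /backup/disk2
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
}
```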
On Wed, Jun 17, 2009 at 1:09 AM, Andreas Schuldei wrote:
> perhaps the retention times are to blame? how should the file retention,
> job retention and volume retention times relate to each other?
> all of those retention times are long since expired for the earlier files.
On Wed, Jun 17, 2009 at 1:28 AM, John Drescher wrote:
> 2009/6/16 Andreas Schuldei:
> > On Wed, Jun 17, 2009 at 12:48 AM, francisco javier funes nieto wrote:
> >> Maybe this can help you ..
On Wed, Jun 17, 2009 at 12:48 AM, francisco javier funes nieto <esen...@gmail.com> wrote:
> Maybe this can help you ..
>
>
> http://www.bacula.org/en/dev-manual/Automatic_Volume_Recycling.html#SECTION00118
>
should that even work if we don't use tape but back up to hard disk?
hi!
i don't want to wait until my filesystem is filled up on my storage
cluster and want to start purging and recycling volumes now.
i want to recycle all volumes that have the status "purged". i am not
afraid to enter the database (postgresql here) and run sql queries.
what query should i run?
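not an authoritative answer, but the inspection part is safe: the catalog keeps volume state in the media table, so something like this (postgresql, lower-case names) lists the candidates:

```sql
-- list volumes bacula has already marked purged
SELECT mediaid, volumename, volstatus, lastwritten
  FROM media
 WHERE volstatus = 'Purged'
 ORDER BY lastwritten;
```

for the actual recycling or deleting it seems safer to go through bconsole (`delete volume=... yes`, then remove the volume file from disk) than to DELETE rows by hand, since jobmedia records reference media rows.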
On Thu, Apr 16, 2009 at 10:33 AM, Julien Cigar wrote:
> Do you have the same problem as this one :
> http://www.nabble.com/file-count-mismatch-tt19508099.html ?
no, mine is different: the byte count for the restored data was 0 for
me, and i checked the restore location and it was not created and
On Thu, Apr 16, 2009 at 10:03 AM, Graham Keeling wrote:
> On Wed, Apr 15, 2009 at 09:34:30PM +0200, Andreas Schuldei wrote:
>> tonight i ran my very first concurrent backup and the backup time went
>> down nicely. yay.
>>
>> when trying to restore something
On Wed, Apr 15, 2009 at 9:51 PM, John Drescher wrote:
> On Wed, Apr 15, 2009 at 3:34 PM, Andreas Schuldei
> wrote:
>> Hi!
>>
>> tonight i ran my very first concurrent backup and the backup time went
>> down nicely. yay.
>>
>> when trying to re
Hi!
tonight i ran my very first concurrent backup and the backup time went
down nicely. yay.
when trying to restore something from the backup i got this:
==
15-Apr 15:00 lettuce.spotify.net-dir JobId 17536: Start Restore Job
RestoreFiles.2009-04-15_15.00.51
15-Apr 15:00 lettuce.spotify.
On Sun, Apr 12, 2009 at 2:19 AM, Andreas Schuldei
wrote:
> that is a solution for now since we backup to disk. i did read
> http://www.bacula.org/en/rel-manual/Basic_Volume_Management.html and
> understood nothing, though. the text is not very well written.
one great and easy way would
On Sun, Apr 12, 2009 at 3:37 AM, John Drescher wrote:
> On Sat, Apr 11, 2009 at 8:19 PM, Andreas Schuldei
> Concurrency will work great. I have been using it in the way I
> described for 5 years with bacula. I mean with a small spool size and
> several concurrent jobs. At one point
On Sun, Apr 12, 2009 at 2:29 AM, Dan Langille wrote:
> Andreas Schuldei wrote:
>> On Sat, Apr 11, 2009 at 11:48 PM, John Drescher wrote:
>>> I would just enable concurrency. Use a small spool file (less than 10
>>> GB) and let several machines run their backups simultaneously
and then concurrency won't work anymore,
will it? at that point we would need to come back to that "ahead of
time" backup anyway, right?
On Sat, Apr 11, 2009 at 11:48 PM, John Drescher wrote:
> On Sat, Apr 11, 2009 at 5:29 PM, Andreas Schuldei
> wrote:
>> hi!
>>
hi!
Currently our bacula system cycles through our servers and for each it
initiates the respective backup shell script, waits for its
completion, transfers the data and continues on to the next box.
This cycle is rather predictable and repetitive. On some servers, where
time consuming finds and t
On Mon, Mar 23, 2009 at 12:39 PM, Tilman Schmidt
wrote:
> http://wiki.bacula.org/doku.php?id=faq#why_does_dbcheck_take_forever_to_run
thanks, that helped a lot.
for postgresql the column and table names were lower case, though.
---
hi!
does dbcheck even check if backups referenced in the database are
still there on the hard disk? i am backing up to disk currently and
would like to make sure no orphaned catalog entries remain after some
files were deleted the other day.
/andreas
--
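re the dbcheck question: as far as i can tell dbcheck only looks for inconsistencies *inside* the catalog; it never stats the volume files on disk. a quick cross-check is easy to script, though, e.g. (sketch, assuming file-based volumes all live flat in one directory and you feed it the names from bconsole's `list volumes`):

```python
from pathlib import Path

def catalog_orphans(catalog_volumes, storage_dir):
    """Return volume names that are in the catalog but missing on disk.

    catalog_volumes: iterable of volume names, e.g. pasted from
    bconsole's `list volumes` or `SELECT volumename FROM media`.
    storage_dir: the Archive Device directory of the File storage.
    """
    on_disk = {p.name for p in Path(storage_dir).iterdir() if p.is_file()}
    return sorted(set(catalog_volumes) - on_disk)
```

anything this returns is a volume whose data is gone even though the catalog still lists it.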
On Mon, Mar 16, 2009 at 5:47 PM, Kevin Keane wrote:
> I don't know enough about the internals of bacula, but my gut feeling is
> that there is some kind of database corruption. You may want to run dbcheck.
Now i have been running dbcheck for ~18h. is there a way to pull the
brakes or see how far it
On Mon, Mar 16, 2009 at 4:47 PM, Kevin Keane wrote:
> Andreas Schuldei wrote:
>> On Mon, Mar 16, 2009 at 3:04 PM, John Drescher wrote:
>>
>>> On Mon, Mar 16, 2009 at 4:36 AM, Andreas Schuldei
>>> wrote:
>>>> when using a new pool (file storage, is
On Mon, Mar 16, 2009 at 3:04 PM, John Drescher wrote:
> On Mon, Mar 16, 2009 at 4:36 AM, Andreas Schuldei
> wrote:
>> Hi!
>>
>> when using a new pool (file storage, is that the same?) for the first
>> time i get this error:
>>
>> Fatal error: catreq.c:4
Hi!
when using a new pool (file storage, is that the same?) for the first
time i get this error:
Fatal error: catreq.c:487 Attribute create error. Pool record not
found in Catalog.
do i have to initiate the pool? how do i do that?
/andreas
--
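re the "Pool record not found in Catalog" question: the director reads the Pool resource from its config, but the catalog record for it is separate. bconsole can create or refresh it (the pool name is whatever your Pool resource is called, "raid-pool" here is a placeholder):

```
# in bconsole: create the catalog record from the Pool resource
*create pool=raid-pool

# or, after editing an existing Pool resource, push the changes
# from the resource into the catalog record
*update pool=raid-pool
```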
hi!
i am looking for a script that would re-sync my volumes and the
catalog as it is stored in the database (postgresql 8.1 in my case).
The volumes are full, so i try to get rid of some bloated, unnecessary backups.
after some manual probing and fiddling with the database, and some
auto-pruning