Great, I played around a bit more and eventually got it working by
downloading precompiled ports for mysql-server/mysql-client and
then completing a make install for the bacula-server port.
I downloaded a precompiled package and installed it... now I receive:
bacula-dir: sql_update.c:266-0 NumVols=0
bacula-dir: sql_create.c:341-0 In create mediatype
bacula-dir: sql_create.c:344-0 selectmediatype: SELECT
MediaTypeId,MediaType FROM MediaType WHERE MediaType='File'
bacula-dir: mysql.c:237-
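The query in the trace can be replayed by hand against the catalog to see whether the row exists (table and column names are taken from the trace itself; everything else here is a suggestion, not a diagnosis):

```sql
-- Run inside the bacula catalog database, e.g. from the mysql client:
SELECT MediaTypeId, MediaType FROM MediaType WHERE MediaType='File';
-- An empty result just means the director will try to INSERT the row;
-- a failure at that point usually indicates missing grants or an
-- unpopulated schema (see the make_mysql_tables and
-- grant_mysql_privileges scripts shipped with bacula).
```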
Hi list,
I'm new to bacula and tried it on my Linux box, where it works quite well.
Now I wanted to back up my Win7 machine with bacula to the Linux one, but
it fails at installing bacula on my Win7 64-bit Professional :(
OK, so I didn't have bacula before; I downloaded the 32- and 64-bit versions
because of
Alan Brown wrote:
> On 14/06/10 20:39, richard wrote:
>> For info:
>>
>> I do a weekly full backup of 11.5TB (1.5M files) on a very similarly
>> specced machine. There is no data spooling , (although attribute
>> spooling is used), - backup is directly off a local RAID5.
>>
>> bacula 5.0.2 linux x8
So, I'm getting around to getting my VirtualFull backups running. I'm
already doing D2D2T, so the "Next Pool" directives for my three pools (Daily,
Weekly, Monthly) each point to their respective tape pools. I've run a
VirtualFull and it took the first pool it found (my Disk Daily pool) and now
it's
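For anyone comparing notes, a D2D2T disk pool of this sort is typically wired up along these lines (the resource names below are hypothetical):

```
Pool {
  Name = Daily                # disk pool the regular backups write to
  Pool Type = Backup
  Storage = DiskStorage       # hypothetical disk Storage resource
  Next Pool = Daily-Tape      # where a VirtualFull (or migration) writes
}
```

A VirtualFull consolidates from the pool of the job being run and writes to that pool's Next Pool, so each disk pool needs its own Next Pool if the tape pools are to stay separate.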
On 14/06/10 20:39, richard wrote:
> For info:
>
> I do a weekly full backup of 11.5TB (1.5M files) on a very similarly
> specced machine. There is no data spooling , (although attribute
> spooling is used), - backup is directly off a local RAID5.
>
> bacula 5.0.2 linux x86_64.
>
How much ram/swa
Bob Hetzel wrote:
>I've never been able to get the bare-metal restore to work doing a restore
>starting from a Live CD. I last tried it over a year ago and people
>responded a while later saying they got it to work that way and they would
>update a web page with said info but it appears never to h
On Mon, Jun 14, 2010 at 3:39 PM, richard wrote:
> For info:
>
> I do a weekly full backup of 11.5TB (1.5M files) on a very similarly
> specced machine. There is no data spooling , (although attribute
> spooling is used), - backup is directly off a local RAID5.
>
I think the only way to figure thi
For info:
I do a weekly full backup of 11.5TB (1.5M files) on a very similarly
specced machine. There is no data spooling (although attribute spooling
is used); backup is directly off a local RAID5.
bacula 5.0.2 linux x86_64.
Regards,
Richard
Hi,
On Mon, 14 Jun 2010, Bob Hetzel wrote:
> I've never been able to get the bare-metal restore to work doing a restore
> starting from a Live CD. I last tried it over a year ago and people
> responded a while later saying they got it to work that way and they would
> update a web page with s
>
> Honestly, if you have that large of a backup set and the individual
> files can be in the terabyte range, you might want to reconsider whether
> spooling is the correct approach. Remember, the primary purposes of
> spooling are to allow a fast dump to disk of small backup sets followed
> by
Hi,
On Sat, 12 Jun 2010, James Harper wrote:
> You really need a windows live CD (eg bartpe) or else you won't get all
> your NTFS ACL's and other stuff restored properly. Also, certain
> versions of mkntfs are broken wrt making a partition bootable.
That's a real shame. Knoppix et al are so mu
I am trying to run a Python script for the NewVolume event on bacula
5.0.1 on Ubuntu 10.04 with Python 2.6.5.
The installation actually works without Python scripting, using the
LabelFormat directive in bacula-dir.conf, but for the test I have
removed the LabelFormat from the pool definitio
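For comparison, the LabelFormat path that was removed usually looks like this in the Pool resource (a sketch; the pool name is an assumption):

```
Pool {
  Name = Default
  Pool Type = Backup
  # When no scripted NewVolume event supplies a name, the director can
  # auto-label volumes itself; it appends a sequence number to this prefix.
  LabelFormat = "Vol-"
}
```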
I've never been able to get the bare-metal restore to work doing a restore
starting from a Live CD. I last tried it over a year ago and people
responded a while later saying they got it to work that way and they would
update a web page with said info but it appears never to have happened.
That
Apologies: I should have added that this was a full backup, so the
slowness is not due to searching for newer files.
--
ThinkGeek and WIRED's GeekDad team up for the Ultimate
GeekDad Father's Day Giveaway. ONE MASSIVE PRI
On 06/14/10 13:23, Alan Brown wrote:
> On 14/06/10 17:06, ekke85 wrote:
>> I have 710 files in that directory/nfs share/fileset. There is only
>> one or two files that is over 1TB, I might try to backup one of those
>> files on its own and see what it does.
>>
>>> how many are there already in th
On Mon, Jun 14, 2010 at 12:47 PM, Joseph Spenner wrote:
> I have a file backup solution with 21 files. After the 21st, it will start
> over at 1. But what would happen if bacula doesn't find file_01? If it
> finds it, it will write over the top and wipe out what was previously stored.
> But
On 14/06/10 17:06, ekke85 wrote:
> I have 710 files in that directory/nfs share/fileset. There is only
> one or two files that is over 1TB, I might try to backup one of those
> files on its own and see what it does.
>
>> how many are there already in the database?
>>
>>
> I am no expert wit
On Mon, Jun 14, 2010 at 12:17 PM, Keith Edmunds wrote:
> We're seeing very slow spooling from a locally-attached disk:
>
> 12-Jun 10:31 rodney-sd JobId 179: User specified spool size reached.
> 12-Jun 10:44 rodney-sd JobId 179: Spooling data again ...
> 12-Jun 15:11 rodney-sd JobId 179: User speci
We're seeing very slow spooling from a locally-attached disk:
12-Jun 10:31 rodney-sd JobId 179: User specified spool size reached.
12-Jun 10:44 rodney-sd JobId 179: Spooling data again ...
12-Jun 15:11 rodney-sd JobId 179: User specified spool size reached.
12-Jun 15:23 rodney-sd JobId 179: Spooli
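For a rough sanity check, the gap between "Spooling data again" and the next "spool size reached" gives the effective spool write rate. A minimal sketch, assuming a 300 GiB spool size (the log does not state the size; substitute your actual Maximum Spool Size):

```python
from datetime import datetime

# Back-of-envelope spool rate from the job log timestamps. The spool size
# is NOT stated in the log; 300 GiB is a placeholder assumption.
SPOOL_BYTES = 300 * 1024**3

t_resume = datetime(2010, 6, 12, 10, 44)  # "Spooling data again ..."
t_full = datetime(2010, 6, 12, 15, 11)    # next "spool size reached"

elapsed = (t_full - t_resume).total_seconds()
rate_mb_s = SPOOL_BYTES / elapsed / 1024**2
print(f"filling the spool took {elapsed / 3600:.2f} h at {rate_mb_s:.1f} MiB/s")
```

Anything far below the disk's raw sequential rate points at contention, e.g. the spool area sharing spindles with the catalog database or other jobs.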
I have a file backup solution with 21 files. After the 21st, it will start
over at 1. But what would happen if bacula doesn't find file_01? If it finds
it, it will write over the top and wipe out what was previously stored. But if
it's not present at all, will bacula return an error? Or wil
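Whether a missing file_01 is an error depends mostly on the pool configuration; a disk pool that cycles through a fixed set of volumes is usually set up roughly like this (all values illustrative):

```
Pool {
  Name = FilePool
  Pool Type = Backup
  Maximum Volumes = 21        # cap the set at 21 volumes
  Recycle = yes               # reuse purged volumes instead of creating new ones
  AutoPrune = yes
  Volume Retention = 20 days
  LabelFormat = "file_"       # director appends a sequence number
}
```

If the catalog still lists a volume whose file has been deleted on disk, the storage daemon cannot open it and the job will report a mount error rather than silently recreating the file.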
On Mon, Jun 14, 2010 at 12:06 PM, ekke85 wrote:
>
>
>>
>> How many?
>>
>
> I have 710 files in that directory/nfs share/fileset. There is only one or
> two files that is over 1TB, I might try to backup one of those files on its
> own and see what it does.
>
>>
>> how many are there already in th
>
> How many?
>
I have 710 files in that directory/nfs share/fileset. There are only one or two
files that are over 1TB; I might try to back up one of those files on its own and
see what it does.
>
> how many are there already in the database?
>
I am no expert with the DB side, but I have 1
Hello,
I have several nearly identical CentOS servers, so I decided to use BaseJob
to save some space.
Unfortunately I have encountered a strange problem: some of the files that
should be backed up at Full level in fact aren't.
For testing purposes I configured 2 clients - first "BaseClient" and s
ekke85 wrote:
> There is a lot of files that i need to backup.
How many?
how many are there already in the database?
> The other problem that I think might cause it, is that some of the files is
> 1.1TB on its own.
Does anyone on the list know what bacula tries to do if a file being
backed
>> AFAIK cciss only supports disks.
>
> Hm, no it should work with tapes.
>
> http://www.kernel.org/doc/Documentation/blockdev/cciss.txt
>
I find this part interesting:
Additionally, note that the driver will not engage the SCSI core at init
time. The driver must be directed to dynamically engag
Alan Brown wrote:
> On Sat, 12 Jun 2010, Daniel Bareiro wrote:
>
> > This card uses the module 'cciss' which has been loaded by the kernel:
>
> AFAIK cciss only supports disks.
Hm, no it should work with tapes.
http://www.kernel.org/doc/Documentation/blockdev/cciss.txt
- quote -
SCSI sequent
Running through the Brief Tutorial about Bacula.
Some of the output I get is quite different from the tutorial.
*status client
Automatically selected Client: vega-fd
Connecting to Client vega-fd at vega:9102
vega-fd Version: 5.0.2 (28 April 2010) i686-pc-linux-gnu ubuntu 8.10
Daemon started 14-Jun-
On Sat, 12 Jun 2010, Daniel Bareiro wrote:
> This card uses the module 'cciss' which has been loaded by the kernel:
AFAIK cciss only supports disks.
>
> How many files are in the 11Tb and why can't you break it into smaller
> chunks?
>
> Depending on the size of your mysql database, you could quite easily run
> out of ram+swap with a large dataset.
>
> "top" is a useful tool - especially its "M" function.
>
There are a lot of files that
On Sat, 12 Jun 2010, James Harper wrote:
> You really need a windows live CD (eg bartpe) or else you won't get all
> your NTFS ACL's and other stuff restored properly.
Reminder to all: NTFS ACL and other filesystem semantics (but not
structure) are based on VMS, not Unix. This is a direct result
On Fri, 11 Jun 2010, ekke85 wrote:
> Thanks for the fast replies and sorry for my slow response. It seems my
> spooling was not working right, but I have it working now. I am not sure
> what uses the memory up, but here is my configs:
How many files are in the 11Tb and why can't you break it into
Hi,
Resolved this.
I was seeing the working directory under bacula/working rather than having
/bacula/bin/working.
I created this directory and it worked.
I still have a directory called bacula/working with lots of data in it; don't
know why this is... but it's working.
Thanks for the pointers.
I have the problem that sometimes a VirtualFull fails with the following
message. I have about 20 similar job definitions,
and sometimes they work, and sometimes they fail. Is it possible to get
a logfile or something to debug why the
storage daemon didn't accept the command at this time? If I run the j
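One way to get the storage daemon to explain itself is a bconsole trace (the storage resource name below is a placeholder):

```
*setdebug level=150 trace=1 storage=YourSD
*status storage=YourSD
```

With trace=1 the debug output goes to a .trace file in the daemon's working directory instead of the console; turn it back off afterwards with `setdebug level=0 trace=0 storage=YourSD`.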
> On Fri, 11 Jun 2010 10:30:52 +0200, Niek Linnenbank said:
>
> Hi Martin,
>
> We are running bacula 5.0.1-1ubuntu1 on Ubuntu Server 10.04. Tsi-vms01-fd is
> a dedicated machine which is used as our fileserver, and has the following
> ethernet card:
>
> Ethernet controller: Broadcom Corporat
Hello world,
Our problem at the moment is this: we have configured monthly full,
weekly differential and daily incremental backups. Unfortunately,
the differential and incremental backups seem to lose all information
about the higher-level jobs.
Symptoms: the differential level backup, one week a
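For reference, a monthly-full / weekly-differential / daily-incremental cycle is conventionally expressed with a single Schedule resource like the following (times are illustrative):

```
Schedule {
  Name = "MonthlyCycle"
  Run = Level=Full 1st sun at 23:05
  Run = Level=Differential 2nd-5th sun at 23:05
  Run = Level=Incremental mon-sat at 23:05
}
```

When differentials and incrementals "lose" their reference to the full, the usual suspects are a changed FileSet (which forces an upgrade to Full) or retention periods shorter than the monthly cycle, causing the catalog to prune the full backup the lower levels depend on.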