This is very interesting, and I'd like to learn more.
So, let me see if I understand this properly:
1. You get an S3 account
2. You install s3fs (from GitHub)
3. There is a config option in s3fs to use a cache, and somehow you
specify the EBS volume as the cache
4. Bacula is then pointed at the s3fs mount to store its volumes
Hey there Heitor!
Nice to hear from ya bud!! I actually did get this to work. The key that
led to success was adding the 'use_cache' option. I basically assigned a
10GB EBS volume just to use as a cache. It was just an experiment, and I
planned on getting rid of the 10GB EBS and getting a much smaller one.
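For anyone following along, the use_cache setup described above can be sketched roughly as follows. The bucket name, device name, mount points, and credentials path here are placeholders, not the actual values from this setup:

```shell
# Format and mount the EBS volume that will serve as the s3fs cache
# (assumes the volume is attached as /dev/xvdf -- adjust for your instance):
mkfs -t ext4 /dev/xvdf
mkdir -p /mnt/s3fs-cache
mount /dev/xvdf /mnt/s3fs-cache

# Mount the bucket with s3fs, pointing use_cache at the EBS-backed dir.
# ~/.passwd-s3fs contains ACCESS_KEY_ID:SECRET_ACCESS_KEY (chmod 600).
mkdir -p /mnt/bacula-volumes
s3fs my-bacula-bucket /mnt/bacula-volumes \
    -o passwd_file=${HOME}/.passwd-s3fs \
    -o use_cache=/mnt/s3fs-cache \
    -o allow_other
```

With use_cache set, s3fs stages object data on the local (EBS) disk instead of buffering everything in memory, which is what makes large Bacula volume files workable over the mount.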
On 6/10/2015 3:50 PM, Tim Dunphy wrote:
> Hey guys,
>
> I was really excited when I upgraded my bacula server to CentOS 7 and
> installed the latest version of s3fs, because I found that I was able
> to mount an s3 bucket to my local file system with the right user id
> and permissions to use with bacula.
>> Also I'm wondering if there are any optimizations or changes that anyone
>> may know about that I can make to S3FS to allow this to work?
Just in time: you can always try to download your volumes using the AWS web
interface.
>> Thanks,
>> Tim
> Cheers,
>