Hi,

On 08/23/2012 06:56 PM, Greg Burd wrote:
Hello all.

Apologies if any of the questions I ask are overly obvious; I'm just diving into the Java 
code and trying to find my way around.  My goal is to build a layer that allows objects 
destined for secondary storage to reside in an S3-compatible service - specifically, Riak 
Cloud Storage (which we call "Riak CS").  Later on I'd like to plumb in a way 
to allow Riak CS to provide the S3 service itself to users of CloudStack deployments.


IIRC CloudStack already supports Swift from OpenStack, which is also S3 compatible?

I've never seen it in action, but shouldn't this work already?

Wido

So it seems that to do this I'll need to implement a 
`cloud.bridge.io.s3.S3ServiceBucketAdapter` (or something with a similar name) 
which implements the `S3BucketAdapter` API, correct?  As far as I can tell, 
that class will essentially just call out using the S3 API to a specified S3 
server (in my case, a running Riak CS cluster somewhere).  I assume the Amazon S3 
client API is already available somewhere in the code for this, correct?
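For concreteness, here's roughly the shape I have in mind.  This is purely a sketch: the 
real `S3BucketAdapter` interface and its method signatures in the bridge code may look 
quite different, and I'm assuming the AWS SDK for Java (or an equivalent S3 client) is 
available on the classpath.

import java.io.InputStream;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;
import com.amazonaws.services.s3.model.ObjectMetadata;

/**
 * Hypothetical adapter that forwards bucket/object operations to a Riak CS
 * endpoint.  The real S3BucketAdapter interface would dictate the actual
 * method names and signatures.
 */
public class RiakCSBucketAdapter {

    private final AmazonS3Client s3;

    public RiakCSBucketAdapter(String endpoint, String accessKey, String secretKey) {
        s3 = new AmazonS3Client(new BasicAWSCredentials(accessKey, secretKey));
        // Point the client at the Riak CS cluster instead of amazonaws.com,
        // e.g. "http://riak-cs.internal:8080".
        s3.setEndpoint(endpoint);
        // Riak CS expects path-style bucket addressing rather than
        // virtual-hosted-style (bucket.hostname) addressing.
        S3ClientOptions opts = new S3ClientOptions();
        opts.setPathStyleAccess(true);
        s3.setS3ClientOptions(opts);
    }

    public void createBucket(String bucket) {
        s3.createBucket(bucket);
    }

    public void deleteBucket(String bucket) {
        s3.deleteBucket(bucket);
    }

    public void saveObject(InputStream data, long length, String bucket, String key) {
        ObjectMetadata meta = new ObjectMetadata();
        meta.setContentLength(length);
        s3.putObject(bucket, key, data, meta);
    }

    public InputStream loadObject(String bucket, String key) {
        return s3.getObject(bucket, key).getObjectContent();
    }
}

The only interesting part is that the client is pointed at a Riak CS endpoint with 
path-style addressing; everything else is plain S3 calls.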

Then I'm guessing I'll need to also implement or change something related to 
auth in `cloud.bridge.auth.s3`, correct?  We have an authentication system 
built into Riak CS now; somehow these two will need to merge into one unified 
auth system.
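
To make the auth question concrete: as I understand it, S3-style request authentication 
comes down to recomputing an HMAC-SHA1 over the request's canonical string-to-sign with 
the caller's secret key and comparing it to the signature the client sent.  Something 
like the sketch below (purely illustrative; the class and method names are made up, and 
I'm assuming commons-codec is around for Base64).  The real integration question is 
whether the bridge looks that secret key up in its own credential store or asks Riak CS 
for it.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

import org.apache.commons.codec.binary.Base64;

/**
 * Illustrative only: verify an S3 REST request signature by recomputing
 * HMAC-SHA1 over the canonical string-to-sign with the secret key that
 * belongs to the access key the client presented.
 */
public final class S3SignatureCheck {

    public static boolean verify(String stringToSign, String secretKey, String claimedSignature)
            throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA1"));
        byte[] digest = hmac.doFinal(stringToSign.getBytes("UTF-8"));
        // The signature the client sends is the Base64-encoded digest.
        return Base64.encodeBase64String(digest).equals(claimedSignature);
    }
}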

Finally, I'll need to integrate with the build tools/scripting so that 
deployment is automated.  Something like `ant deploy-riak-cs`, I'm guessing.

So, I have a few questions:

1. Am I on the right track?
2. Has anyone already started work on building an S3 secondary storage backend 
integration?
3. What's the best way to integrate auth?
4. What parts of this should be designed to be reusable to provide S3 service 
itself at a later date?
5. What needs to be done to provide the expected level of integration into the 
build/test tools?

Anything else?  Thanks for any and all help.

best,

@gregburd, Basho Technologies | http://basho.com | @basho

