Re: Riak CS: write ACL without delete permission

2014-03-28 Thread Jochen Delabie
Hi Reid,

You're right; the way I do this with S3 is with a custom policy where
only get and put are allowed:

{
  "Statement": [
{
  "Sid": "Stmt1356692141310",
  "Action": [
"s3:AbortMultipartUpload",
"s3:GetBucketAcl",
"s3:GetBucketLocation",
"s3:GetBucketLogging",
"s3:GetBucketNotification",
"s3:GetBucketPolicy",
"s3:GetBucketRequestPayment",
"s3:GetBucketVersioning",
"s3:GetBucketWebsite",
"s3:GetLifecycleConfiguration",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectTorrent",
"s3:GetObjectVersion",
"s3:GetObjectVersionAcl",
"s3:GetObjectVersionTorrent",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:ListBucketVersions",
"s3:ListMultipartUploadParts",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutObjectVersionAcl"
  ],
  "Effect": "Allow",
  "Resource": [
"arn:aws:s3:::*"
  ]
}
  ]
}
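For anyone scripting this rather than pasting JSON into a console, a trimmed-down version of the policy above can be generated in Python. This is only a sketch: the `read_write_no_delete_policy` helper is an invented name, the commented boto 2 apply step is illustrative (boto was the usual Python S3 client at the time), and Riak CS supports only a subset of S3 bucket-policy features, so the full action list may not be honored there.

```python
import json

def read_write_no_delete_policy(bucket_arn="arn:aws:s3:::*"):
    """Build a get/put-only bucket policy (note: no s3:DeleteObject),
    a trimmed-down version of the policy quoted above."""
    actions = [
        "s3:AbortMultipartUpload",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListMultipartUploadParts",
        "s3:PutObject",
        "s3:PutObjectAcl",
    ]
    return {
        "Statement": [{
            "Sid": "ReadWriteNoDelete",
            "Effect": "Allow",
            "Action": actions,
            "Resource": [bucket_arn],
        }]
    }

policy_json = json.dumps(read_write_no_delete_policy(), indent=2)
# Applying it with boto 2 would look something like (untested sketch,
# hypothetical bucket name):
#   conn = boto.connect_s3(aws_access_key_id=..., aws_secret_access_key=...)
#   conn.get_bucket("my-bucket").set_policy(policy_json)
```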



On Fri, Mar 28, 2014 at 12:48 AM, Reid Draper  wrote:

> Hi Jochen,
>
> I'm not aware of any ACL in S3 that supports this. The WRITE ACL will
> grant 'create, overwrite and delete' of objects [1]
>
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/ACLOverview.html
>
> Reid
>
> On Mar 25, 2014, at 7:13 AM, Jochen Delabie 
> wrote:
>
> Hi,
>
> Is it possible to assign an ACL to a bucket where a client can
> write/upload an object but not delete an object?
>
> So basically a WRITE permission without the possibility to delete.
>
> Thanks,
> Jochen Delabie
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Lowlevel Access to RiakCS objects?

2014-03-28 Thread Tom Santero
Hi Martin,

The short answer is no, Riak CS does not expose lower level access to the
blocks that are replicated to Riak.

That said, I'm curious: is anything preventing you from segmenting your
videos and writing a playlist + the segments to Riak CS? This would allow
you to seed Varnish while at the same time reducing the cost of a cache
miss to something reasonable for users.
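That "playlist + segments" layout can be sketched in a few lines of Python. A hedged illustration only: the `segment` helper and the key-naming scheme are invented here, and the actual PUTs to Riak CS (via any S3 client) are left as comments.

```python
import io

SEGMENT_SIZE = 1024 * 1024  # 1 MB, the block size Riak CS itself uses internally

def segment(stream, base_key, segment_size=SEGMENT_SIZE):
    """Split a byte stream into fixed-size segments, returning
    (key, bytes) pairs and a plain-text playlist listing the keys."""
    parts = []
    index = 0
    while True:
        chunk = stream.read(segment_size)
        if not chunk:
            break
        parts.append(("%s/seg-%05d" % (base_key, index), chunk))
        index += 1
    playlist = "\n".join(key for key, _ in parts)
    return parts, playlist

# 40 bytes with a 16-byte segment size -> three segments of 16 + 16 + 8 bytes
parts, playlist = segment(io.BytesIO(b"demo data " * 4), "clip", segment_size=16)
# Each (key, data) pair would then be PUT to Riak CS as its own object,
# and the playlist stored under e.g. "clip/playlist".
```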

Regards,
Tom

On Thu, Mar 27, 2014 at 3:05 PM, Martin Alpers  wrote:

> Hi all,
>
> is there a canonical way to access RiakCS objects on a lower level? If I
> remember correctly, RiakCS basically splits larger objects into chunks
> of one megabyte each, and map-reduces them back together on retrieval.
> I would like to read those chunks for caching purposes.
>
> For those interested in why I would want that:
> A Riak/RiakCS cluster is the heart of our yet-to-be-implemented video
> delivery cluster. A video management system will enable registered users to
> upload their videos, and the public can watch them.
> In order to reduce intra-cluster traffic, we intend to cache the videos,
> preferably in RAM.
> We do not have any numbers on how often users would skip parts of a
> video and generate range requests. If that case is really common, we would
> prefer to serve those from cache as well; at least with Varnish and Squid,
> some users would otherwise experience unacceptably long delays.
> We looked for a cache that could pass through any request for a URL on
> which caching is in progress, and serve from cache afterwards.
>
> The problem with both Varnish and Squid (and I suppose most caches,
> because this behaviour seems reasonable in most cases) boils down to
> treating a cache fill in progress as a cache hit.
> My colleague started to write his own caching proxy in NodeJS, but using
> asynchronous callbacks to check whether a file exists, and to create it if it
> does not, strikes me as somewhat courageous for production.
>
> Now while we cannot risk letting some users wait for hundreds of megabytes
> to be cached before delivery begins, and while we want at least to be
> prepared to face many more range requests than the average "wget was
> interrupted" case, it occurred to me that a few megabytes are not an issue
> at multi-fast-ethernet speed.
> So if we can split our files into objects small enough, we could code a
> proxy that translates a range request into one or more normal requests for
> those chunks, cuts a certain offset off the first chunk if the range of
> the original request began off a chunk boundary, and concatenates
> those chunks in the correct order for delivery.
> The cache would then never have to be bypassed, and the whole headache of
> telling a complete hit from one "in progress" would be gone.
>
> Since RiakCS has already split our files into small pieces, and somehow
> tracks them, could we perhaps piggyback on that?
>
> And by the way, I just came across the memory backend. I assume it is
> distributed like the persistent ones, so it will not help me reduce
> internal traffic, right?
>
> Any input is highly appreciated.
>
> Best Regards
> Martin
>
> --
> Greetings, Martin Alpers
>
> martin-alp...@web.de; Public Key ID: 10216CFB
> Tel: +49 431/90885691; Mobile: +49 176/66185173
> JID: martin.alp...@jabber.org
> FYI: http://apps.opendatacity.de/stasi-vs-nsa/
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>


Two instances on the same server

2014-03-28 Thread Massimiliano Ciancio
Hello list,
I'm trying to start two different instances of Riak on the same server
in order to have a test instance. I replicated the /riak/rel/riak
dir into two different dirs (prod and test) and changed the node name and
ports.
To start/stop a node I use the full path "//riak/rel/prod/bin/riak
start" and "//riak/rel/test/bin/riak start".
But starting/stopping a node doesn't work: the two instances of Riak
get confused. For example:
- all stopped
- start prod instance -> ok
- start test instance -> the node is already running
The same problem occurs when stopping: the two instances are confused.

Is there some name/port I haven't changed?

Here are my changes:
PROD:
{pb, [ {"0.0.0.0", 8087 } ]}
  {http, [ {"0.0.0.0", 8098 } ]},
  {https, [{ "127.0.0.1", 8069 }]},
  {handoff_port, 8099 },
 -name r...@192.168.1.xxx

TEST:
{pb, [ {"0.0.0.0", 18087 } ]}
  {http, [ {"0.0.0.0", 18098 } ]},
  {https, [{ "127.0.0.1", 18069 }]},
  {handoff_port, 18099 },
  -name t...@192.168.1.xxx

Is there more to change?
Is there a better strategy for testing on the same server?

Thanks in advance
Massimiliano



Re: Riak CS: write ACL without delete permission

2014-03-28 Thread Reid Draper
For those following along at home: ACLs and bucket policies are distinct
access-control mechanisms in S3. Riak CS has only limited support for bucket
policies at the moment.

Reid

On Mar 28, 2014, at 5:04 AM, Jochen Delabie  wrote:

> Hi Reid,
> 
> You're right; the way I do this with S3 is with a custom policy where
> only get and put are allowed:
> 
> {
>   "Statement": [
> {
>   "Sid": "Stmt1356692141310",
>   "Action": [
> "s3:AbortMultipartUpload",
> "s3:GetBucketAcl",
> "s3:GetBucketLocation",
> "s3:GetBucketLogging",
> "s3:GetBucketNotification",
> "s3:GetBucketPolicy",
> "s3:GetBucketRequestPayment",
> "s3:GetBucketVersioning",
> "s3:GetBucketWebsite",
> "s3:GetLifecycleConfiguration",
> "s3:GetObject",
> "s3:GetObjectAcl",
> "s3:GetObjectTorrent",
> "s3:GetObjectVersion",
> "s3:GetObjectVersionAcl",
> "s3:GetObjectVersionTorrent",
> "s3:ListBucket",
> "s3:ListBucketMultipartUploads",
> "s3:ListBucketVersions",
> "s3:ListMultipartUploadParts",
> "s3:PutObject",
> "s3:PutObjectAcl",
> "s3:PutObjectVersionAcl"
>   ],
>   "Effect": "Allow",
>   "Resource": [
> "arn:aws:s3:::*"
>   ]
> }
>   ]
> }
> 
> 
> On Fri, Mar 28, 2014 at 12:48 AM, Reid Draper  wrote:
> Hi Jochen,
> 
> I'm not aware of any ACL in S3 that supports this. The WRITE ACL will grant 
> 'create, overwrite and delete' of objects [1]
> 
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/ACLOverview.html
> 
> Reid
> 
> On Mar 25, 2014, at 7:13 AM, Jochen Delabie  wrote:
> 
>> Hi,
>> 
>> Is it possible to assign an ACL to a bucket where a client can write/upload 
>> an object but not delete an object?
>> 
>> So basically a WRITE permission without the possibility to delete.
>> 
>> Thanks,
>> Jochen Delabie
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 



Re: Lowlevel Access to RiakCS objects?

2014-03-28 Thread Martin Alpers
Thanks for your answer, Tom.

I wanted to avoid the overhead of one segmenting layer whose objects are
segmented by yet another layer, but come to think of it, since Riak can store
chunks up to 1 MB in size, why not use Riak directly?
I also wanted to avoid the additional complexity of code that basically
mimics RiakCS.

But I see nothing to prevent me from doing all that segmentation stuff. Having 
slept a night over it, I think it is the way to go.
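For what it's worth, the range-to-chunk translation described in the original mail is small enough to show concretely. A minimal, hypothetical sketch (assuming fixed 1 MB segments and inclusive byte ranges as in HTTP Range headers; `range_to_chunks` is an invented name, not a Riak or Riak CS API):

```python
CHUNK_SIZE = 1024 * 1024  # 1 MB, the segment size discussed in this thread

def range_to_chunks(start, end, chunk_size=CHUNK_SIZE):
    """Translate an inclusive byte range [start, end] (HTTP Range
    semantics) into per-chunk sub-reads: (chunk_index, offset, length)."""
    reads = []
    pos = start
    while pos <= end:
        index = pos // chunk_size   # which chunk this byte falls in
        offset = pos % chunk_size   # where inside that chunk to start reading
        length = min(chunk_size - offset, end - pos + 1)
        reads.append((index, offset, length))
        pos += length
    return reads

# A range that starts mid-chunk and crosses a chunk boundary:
# bytes 1048000-1049000 -> 576 bytes from chunk 0, then 425 bytes from chunk 1
```

The proxy would then issue one normal GET per chunk key, slice the first chunk at `offset`, and concatenate the pieces in order.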

So thanks for the feedback, and for taking the time to read my question in the
first place.

Best regards,
Martin

> Hi Martin,
> 
> The short answer is no, Riak CS does not expose lower level access to the
> blocks that are replicated to Riak.
> 
> That said, I'm curious: is anything preventing you from segmenting your
> videos and writing a playlist + the segments to Riak CS? This would allow
> you to seed Varnish while at the same time reducing the cost of a cache
> miss to something reasonable for users.
> 
> Regards,
> Tom
> 
> On Thu, Mar 27, 2014 at 3:05 PM, Martin Alpers  wrote:
> 
> > Hi all,
> >
> > is there a canonical way to access RiakCS objects on a lower level? If I
> > remember correctly, RiakCS basically splits larger objects into chunks
> > of one megabyte each, and map-reduces them back together on retrieval.
> > I would like to read those chunks for caching purposes.
> >
> > For those interested in why I would want that:
> > A Riak/RiakCS cluster is the heart of our yet-to-be-implemented video
> > delivery cluster. A video management system will enable registered users to
> > upload their videos, and the public can watch them.
> > In order to reduce intra-cluster traffic, we intend to cache the videos,
> > preferably in RAM.
> > We do not have any numbers on how often users would skip parts of a
> > video and generate range requests. If that case is really common, we would
> > prefer to serve those from cache as well; at least with Varnish and Squid,
> > some users would otherwise experience unacceptably long delays.
> > We looked for a cache that could pass through any request for a URL on
> > which caching is in progress, and serve from cache afterwards.
> >
> > The problem with both Varnish and Squid (and I suppose most caches,
> > because this behaviour seems reasonable in most cases) boils down to
> > treating a cache fill in progress as a cache hit.
> > My colleague started to write his own caching proxy in NodeJS, but using
> > asynchronous callbacks to check whether a file exists, and to create it if it
> > does not, strikes me as somewhat courageous for production.
> >
> > Now while we cannot risk letting some users wait for hundreds of megabytes
> > to be cached before delivery begins, and while we want at least to be
> > prepared to face many more range requests than the average "wget was
> > interrupted" case, it occurred to me that a few megabytes are not an issue
> > at multi-fast-ethernet speed.
> > So if we can split our files into objects small enough, we could code a
> > proxy that translates a range request into one or more normal requests for
> > those chunks, cuts a certain offset off the first chunk if the range of
> > the original request began off a chunk boundary, and concatenates
> > those chunks in the correct order for delivery.
> > The cache would then never have to be bypassed, and the whole headache of
> > telling a complete hit from one "in progress" would be gone.
> >
> > Since RiakCS has already split our files into small pieces, and somehow
> > tracks them, could we perhaps piggyback on that?
> >
> > And by the way, I just came across the memory backend. I assume it is
> > distributed like the persistent ones, so it will not help me reduce
> > internal traffic, right?
> >
> > Any input is highly appreciated.
> >
> > Best Regards
> > Martin
> >
> > --
> > Greetings, Martin Alpers
> >
> > martin-alp...@web.de; Public Key ID: 10216CFB
> > Tel: +49 431/90885691; Mobile: +49 176/66185173
> > JID: martin.alp...@jabber.org
> > FYI: http://apps.opendatacity.de/stasi-vs-nsa/
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
> >

-- 
Greetings, Martin Alpers

martin-alp...@web.de; Public Key ID: 10216CFB
Tel: +49 431/90885691; Mobile: +49 176/66185173
JID: martin.alp...@jabber.org
FYI: http://apps.opendatacity.de/stasi-vs-nsa/




Re: Two instances on the same server

2014-03-28 Thread Massimiliano Ciancio
I'm really sorry!
It's fine: it wasn't Riak confusing instances. It was me!! I exchanged
the config files :((
Sorry
Massimiliano

2014-03-28 18:17 GMT+01:00 Massimiliano Ciancio :
> Hello list,
> I'm trying to start two different instances of Riak on the same server
> in order to have a test instance. I replicated the /riak/rel/riak
> dir into two different dirs (prod and test) and changed the node name and
> ports.
> To start/stop a node I use the full path "//riak/rel/prod/bin/riak
> start" and "//riak/rel/test/bin/riak start".
> But starting/stopping a node doesn't work: the two instances of Riak
> get confused. For example:
> - all stopped
> - start prod instance -> ok
> - start test instance -> the node is already running
> The same problem occurs when stopping: the two instances are confused.
>
> Is there some name/port I haven't changed?
>
> Here are my changes:
> PROD:
> {pb, [ {"0.0.0.0", 8087 } ]}
>   {http, [ {"0.0.0.0", 8098 } ]},
>   {https, [{ "127.0.0.1", 8069 }]},
>   {handoff_port, 8099 },
>  -name r...@192.168.1.xxx
>
> TEST:
> {pb, [ {"0.0.0.0", 18087 } ]}
>   {http, [ {"0.0.0.0", 18098 } ]},
>   {https, [{ "127.0.0.1", 18069 }]},
>   {handoff_port, 18099 },
>   -name t...@192.168.1.xxx
>
> Is there more to change?
> Is there a better strategy for testing on the same server?
>
> Thanks in advance
> Massimiliano



Re: Two instances on the same server

2014-03-28 Thread Kelly McLaughlin
Try using the devrel Makefile target. It builds a set of releases under dev/ 
that are able to run on the same machine. 

Kelly
On March 28, 2014 at 11:19:09 AM, Massimiliano Ciancio 
(massimili...@ciancio.net) wrote:

Hello list,  
I'm trying to start two different instances of Riak on the same server
in order to have a test instance. I replicated the /riak/rel/riak
dir into two different dirs (prod and test) and changed the node name and
ports.
To start/stop a node I use the full path "//riak/rel/prod/bin/riak
start" and "//riak/rel/test/bin/riak start".
But starting/stopping a node doesn't work: the two instances of Riak
get confused. For example:
- all stopped
- start prod instance -> ok
- start test instance -> the node is already running
The same problem occurs when stopping: the two instances are confused.

Is there some name/port I haven't changed?

Here are my changes:
PROD:  
{pb, [ {"0.0.0.0", 8087 } ]}  
{http, [ {"0.0.0.0", 8098 } ]},  
{https, [{ "127.0.0.1", 8069 }]},  
{handoff_port, 8099 },  
-name r...@192.168.1.xxx  

TEST:  
{pb, [ {"0.0.0.0", 18087 } ]}  
{http, [ {"0.0.0.0", 18098 } ]},  
{https, [{ "127.0.0.1", 18069 }]},  
{handoff_port, 18099 },  
-name t...@192.168.1.xxx  

Is there more to change?
Is there a better strategy for testing on the same server?

Thanks in advance  
Massimiliano  

___  
riak-users mailing list  
riak-users@lists.basho.com  
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com  


Re: Two instances on the same server

2014-03-28 Thread Brian Sparrow
You might also check out riak-manage [1], a tool for quickly setting up and
managing clusters. I use it on my Mac Pro to manage local clusters.

-Brian 

[1] https://github.com/basho/riak-manage
-- 
Brian Sparrow
Developer Advocate
Basho Technologies

Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Friday, March 28, 2014 at 2:43 PM, Kelly McLaughlin wrote:

> Try using the devrel Makefile target. It builds a set of releases under dev/ 
> that are able to run on the same machine. 
> 
> Kelly
> 
> On March 28, 2014 at 11:19:09 AM, Massimiliano Ciancio 
> (massimili...@ciancio.net (mailto:massimili...@ciancio.net)) wrote:
> 
> > Hello list, 
> > I'm trying to start two different instances of Riak on the same server
> > in order to have a test instance. I replicated the /riak/rel/riak
> > dir into two different dirs (prod and test) and changed the node name and
> > ports.
> > To start/stop a node I use the full path "//riak/rel/prod/bin/riak
> > start" and "//riak/rel/test/bin/riak start".
> > But starting/stopping a node doesn't work: the two instances of Riak
> > get confused. For example:
> > - all stopped
> > - start prod instance -> ok
> > - start test instance -> the node is already running
> > The same problem occurs when stopping: the two instances are confused.
> >
> > Is there some name/port I haven't changed?
> >
> > Here are my changes:
> > PROD: 
> > {pb, [ {"0.0.0.0", 8087 } ]} 
> > {http, [ {"0.0.0.0", 8098 } ]}, 
> > {https, [{ "127.0.0.1", 8069 }]}, 
> > {handoff_port, 8099 }, 
> > -name r...@192.168.1.xxx (mailto:r...@192.168.1.xxx) 
> > 
> > TEST: 
> > {pb, [ {"0.0.0.0", 18087 } ]} 
> > {http, [ {"0.0.0.0", 18098 } ]}, 
> > {https, [{ "127.0.0.1", 18069 }]}, 
> > {handoff_port, 18099 }, 
> > -name t...@192.168.1.xxx (mailto:t...@192.168.1.xxx) 
> > 
> > Is there more to change?
> > Is there a better strategy for testing on the same server?
> > 
> > Thanks in advance 
> > Massimiliano 
> > 
> > ___ 
> > riak-users mailing list 
> > riak-users@lists.basho.com (mailto:riak-users@lists.basho.com) 
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com 
> ___
> riak-users mailing list
> riak-users@lists.basho.com (mailto:riak-users@lists.basho.com)
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 




Re: Lowlevel Access to RiakCS objects?

2014-03-28 Thread Brian Akins
I know of at least one case that uses Riak CS for live HLS video at scale,
which is somewhat similar to your use case, so this is not uncharted territory.
While it's technically possible to design and implement something more
efficient than CS, that effort may be better spent in other areas of your
application. JMO.

Sent from my iPhone

> On Mar 28, 2014, at 10:53 AM, Martin Alpers  wrote:
> 
> Thanks for your answer, Tom.
> 
> I wanted to avoid the overhead of one segmenting layer whose objects are
> segmented by yet another layer, but come to think of it, since Riak can store
> chunks up to 1 MB in size, why not use Riak directly?
> I also wanted to avoid the additional complexity of code that basically
> mimics RiakCS.
> 
> But I see nothing to prevent me from doing all that segmentation stuff. 
> Having slept a night over it, I think it is the way to go.
> 
> So thanks for the feedback, and for taking the time to read my question in
> the first place.
> 
> Best regards,
> Martin
> 
>> Hi Martin,
>> 
>> The short answer is no, Riak CS does not expose lower level access to the
>> blocks that are replicated to Riak.
>> 
>> That said, I'm curious: is anything preventing you from segmenting your
>> videos and writing a playlist + the segments to Riak CS? This would allow
>> you to seed Varnish while at the same time reducing the cost of a cache
>> miss to something reasonable for users.
>> 
>> Regards,
>> Tom
>> 
>>> On Thu, Mar 27, 2014 at 3:05 PM, Martin Alpers  wrote:
>>> 
>>> Hi all,
>>> 
>>> is there a canonical way to access RiakCS objects on a lower level? If I
>>> remember correctly, RiakCS basically splits larger objects into chunks
>>> of one megabyte each, and map-reduces them back together on retrieval.
>>> I would like to read those chunks for caching purposes.
>>>
>>> For those interested in why I would want that:
>>> A Riak/RiakCS cluster is the heart of our yet-to-be-implemented video
>>> delivery cluster. A video management system will enable registered users to
>>> upload their videos, and the public can watch them.
>>> In order to reduce intra-cluster traffic, we intend to cache the videos,
>>> preferably in RAM.
>>> We do not have any numbers on how often users would skip parts of a
>>> video and generate range requests. If that case is really common, we would
>>> prefer to serve those from cache as well; at least with Varnish and Squid,
>>> some users would otherwise experience unacceptably long delays.
>>> We looked for a cache that could pass through any request for a URL on
>>> which caching is in progress, and serve from cache afterwards.
>>>
>>> The problem with both Varnish and Squid (and I suppose most caches,
>>> because this behaviour seems reasonable in most cases) boils down to
>>> treating a cache fill in progress as a cache hit.
>>> My colleague started to write his own caching proxy in NodeJS, but using
>>> asynchronous callbacks to check whether a file exists, and to create it if it
>>> does not, strikes me as somewhat courageous for production.
>>>
>>> Now while we cannot risk letting some users wait for hundreds of megabytes
>>> to be cached before delivery begins, and while we want at least to be
>>> prepared to face many more range requests than the average "wget was
>>> interrupted" case, it occurred to me that a few megabytes are not an issue
>>> at multi-fast-ethernet speed.
>>> So if we can split our files into objects small enough, we could code a
>>> proxy that translates a range request into one or more normal requests for
>>> those chunks, cuts a certain offset off the first chunk if the range of
>>> the original request began off a chunk boundary, and concatenates
>>> those chunks in the correct order for delivery.
>>> The cache would then never have to be bypassed, and the whole headache of
>>> telling a complete hit from one "in progress" would be gone.
>>>
>>> Since RiakCS has already split our files into small pieces, and somehow
>>> tracks them, could we perhaps piggyback on that?
>>>
>>> And by the way, I just came across the memory backend. I assume it is
>>> distributed like the persistent ones, so it will not help me reduce
>>> internal traffic, right?
>>> 
>>> Any input is highly appreciated.
>>> 
>>> Best Regards
>>> Martin
>>> 
>>> --
>>> Greetings, Martin Alpers
>>> 
>>> martin-alp...@web.de; Public Key ID: 10216CFB
>>> Tel: +49 431/90885691; Mobile: +49 176/66185173
>>> JID: martin.alp...@jabber.org
>>> FYI: http://apps.opendatacity.de/stasi-vs-nsa/
>>> 
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> -- 
> Greetings, Martin Alpers
> 
> martin-alp...@web.de; Public Key ID: 10216CFB
> Tel: +49 431/90885691; Mobile: +49 176/66185173
> JID: martin.alp...@jabber.org
> FYI: http://apps.opendatacity.de/stasi-vs-nsa/