Hey Ryan,
Here is the error from the sasl log. It looks like some sort of
encoding error. Any thoughts on how to fix this? I am storing the
data as BERT encoded binary and I set the content-type as
application/octet-stream.
Thanks for your help!
Andrew
=ERROR REPORT==== 9-Jun-2011::21:37:05 ===
Andrew,
Maybe you could elaborate on the error? I tested this against master
(commit below) just now with success.
2b1a474f836d962fa035f48c05452e22fc6c2193 Change dependency to allow for
R14B03 as well as R14B02
-Ryan
On Wed, Jun 22, 2011 at 7:03 PM, Andrew Berman wrote:
> Hello,
>
> I'm hav
Hi Greg,
Two questions:
1. Was this count normally correct _before_ upgrading to 14.2?
2. Have you performed a direct delete (e.g. via curl) of any keys under your
_rsid_ bucket?
-Ryan
On Mon, Jun 13, 2011 at 4:22 PM, Greg Pascale wrote:
> Hi,
>
> we recently upgraded to riak-search 0.14.2 a
Afternoon, Evening, Morning to All -
For today's Recap: Python code, Riak and Basho at WindyCityDB, new
wiki content and more.
Enjoy -
Mark
Community Manager
Basho Technologies
wiki.basho.com
twitter.com/pharkmillups
--
Riak Recap for June 20 - 21
=
re: puppet module - I just stumbled onto this simple puppet config
from Mårten Gustafson while digging through the irc logs for today's
recap:
https://gist.github.com/1038441
(I know some of you have already seen it but I figured I should attach
it to the thread nonetheless.)
Mark
On Wed, Jun 2
Hello,
I'm having issues link walking using the Map Reduce link function. I am
using HEAD from Git, so it's possible that's the issue, but here is what is
happening.
I've got two buckets, user and user_email where user_email contains a link
to the user.
When I run this:
{
"inputs": [
If any of this ever officially makes it to a Bitbucket/GitHub repo,
let's make sure it gets added to the "Recipes" section on the
wiki. :)
http://wiki.basho.com/Community-Developed-Libraries-and-Projects.html#Recipes%2C-Cookbooks%2C-and-Configurations
Thanks,
Mark
On Tue, Jun 21, 2011 at 1:
Ryan,
Thanks for the info. Yea, it turned out Riak wasn't exactly what I was
looking for in my use case (although I did build a prototype on it). Ended
up going with Redis (probably could have used memcache or membase instead)
and implemented a partial (not all commands supported) slave in Erlang
And here's the link I forgot to include:
http://wiki.basho.com/Vector-Clocks.html
Mathias Meyer
Developer Advocate, Basho Technologies
On Wednesday, 22 June 2011 at 17:18, Mathias Meyer wrote:
> Manuel,
>
> what you're seeing is not specific to links, it's generally how concurrent
> u
Manuel,
what you're seeing is not specific to links, it's generally how concurrent
updates to a single object are handled in Riak, links are no exception. If you
want to handle that properly you need to enable the allow_mult property on the
bucket in question.
Now, whenever two clients update
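Enabling the property the reply mentions is a one-line change to the bucket's properties. A sketch of the JSON payload involved, assuming the standard HTTP properties endpoint and a made-up bucket name:

```python
import json

# Sketch only: the bucket-properties payload you would PUT to enable
# allow_mult, so concurrent updates produce siblings instead of one
# write silently overwriting the other. Bucket name is an assumption.
props_payload = json.dumps({"props": {"allow_mult": True}})

# e.g. PUT http://localhost:8098/riak/mybucket
#      Content-Type: application/json
print(props_payload)
```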
As Nico suggests below, although it is not an elegant solution, I would be
satisfied with having the deletes fail unless all primary nodes are up.
On 6/16/2011 12:09 PM, Nico Meyer wrote:
> If all primary nodes are up, the problem is nonexistent. You could
> certainly implement any number
Jordon,
I used the ETS backend with success while working @ AOL. In my case I was
using it to cache ~100K short-lived (hours), largish objects.
I'm confused because you said you don't care if you lose data, but one of
Riak's main strengths is to do everything possible to avoid data loss.
Perha
Ryan,
I don't have any recommendations on how to compose your keys; just a note
that if you're using Bitcask, keys are stored in memory, so you may need to
watch out for key length and the number of objects.
What I really wanted to tell you is that secondary indices should arrive in
some form i
On 22 Jun 2011, at 13:46, Jon Brisbin wrote:
>
>
> Well, you have 2 choices
>
> 1) Abstract away the fact that your client is async, so you can implement
> RiakResponse etc as wrappers around a Future, first call to a method calls
> get on the Future
> 2) You can require your users to think
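Option 1 above (hiding the async nature by wrapping a Future, with the first accessor call blocking) could be sketched like this. `RiakResponse` is just the name used in the discussion, not a real class in any client library:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the "wrap a Future" option: the caller sees a synchronous
# response object; the first accessor call blocks on the Future and
# later calls return the cached result.
class RiakResponse:
    def __init__(self, future):
        self._future = future
        self._result = None

    def value(self):
        if self._result is None:
            self._result = self._future.result()  # blocks on first call
        return self._result

executor = ThreadPoolExecutor(max_workers=1)
# stand-in for an async fetch against the store
resp = RiakResponse(executor.submit(lambda: b"stored-object"))
print(resp.value())  # blocks until the background fetch completes
executor.shutdown()
```

Option 2 would instead expose the Future (or a callback hook) directly and require callers to think async themselves.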
----- Original Message -----
> Well, you have 2 choices
> 1) Abstract away the fact that your client is async, so you can
> implement RiakResponse etc as wrappers around a Future, first call
> to a method calls get on the Future
> 2) You can require your users to think async and code callbacks
>
Thanks for your answers. It's now clear to me how to configure Ripple.
Regarding the other two questions, it looks like I should take a closer look
at the Risky framework, as it seems to offer more control over how the
mapping is done.
Regards.
Yeah, agreed, Grizzly is a very nice API.
Real pity Oracle completely screwed up java.net, as it pretty much orphaned a
ton of good support mailing-list and forum posts in the process.
From what I understand, there are quite a few services using Java for this
sort of thing at the moment.
These guys
On 22 Jun 2011, at 12:32, Jon Brisbin wrote:
> I should have thought of that before. It would have saved me some time. :)
> There's several things in there I can use to help (like RiakResponse,
> IRiakObject, etc...).
>
> The only thing I would say is that I was wanting to return Futures for
I've looked at Netty too, but I thought Grizzly was a little more mature and I
liked the abstractions of Grizzly better.
I'm really looking into something like this for a proof of concept I'm doing
that might involve Riak in a very large-scale, massively-concurrent system that
handles large tr
I should have thought of that before. It would have saved me some time. :)
There's several things in there I can use to help (like RiakResponse,
IRiakObject, etc...).
The only thing I would say is that I was wanting to return Futures for
everything, rather than the responses themselves (and/or
Hello,
I probably have some misunderstanding of what's the best way to update links in
Riak, but I am wondering if you know about this hazard when updating links:
client A: getLinks(X)
client B: getLinks(X)
client B: updateLinks(X)
client A: updateLinks(X)
where the update made by B is cleaned by
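That interleaving is the classic lost-update hazard. A toy sketch, using a plain dict as a stand-in for a last-write-wins store (this illustrates the hazard, it is not Riak code):

```python
# Toy last-write-wins store demonstrating the interleaving above:
# both clients read, then both write, and the earlier write is lost.
store = {"X": {"links": ["orig"]}}

def get_links(key):
    return list(store[key]["links"])

def update_links(key, links):
    store[key] = {"links": links}  # blind overwrite, no version check

a_view = get_links("X")            # client A reads
b_view = get_links("X")            # client B reads
update_links("X", b_view + ["b"])  # client B writes its change
update_links("X", a_view + ["a"])  # client A writes from a stale read

print(store["X"]["links"])  # ['orig', 'a'] -- B's link "b" is gone
```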
I have been interested in doing the same thing for a while.
Instead of Grizzly I started messing around with Netty
https://github.com/netty/ which is very similar.
I am not sure what either of you have found, but in my experience the NIO
stuff only really comes into its own with large transfers o
Les,
maybe it's worth looking into Beetle [1], which is an HA messaging solution
built on RabbitMQ and Redis. It supports multiple brokers and message
de-duplication using Redis. It's written in Ruby, but should give you some
inspiration either way on how something like this could be achieved.
Hey Jon,
Any chance you want to implement the RawClient interface
(https://github.com/basho/riak-java-client/blob/master/src/main/java/com/basho/riak/client/raw/RawClient.java)
from the basho riak-java-client library? That way your client can be swapped
straight into the basho lib?
If not, let