Instead of using 2i, you could do the following when saving:
POST
http://{IP}:8098/riak/users/rohman
{"email":"roh...@mahalostudio.com","otherdata":""}
POST
http://{IP}:8098/riak/emails/roh...@mahalostudio.com
{"owner":"rohman"}
So checking if an email address exists is only a GET:
http:/
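The two-bucket pattern above can be sketched in a few lines. This is a minimal illustration only: a plain dict stands in for Riak's key/value store (no live cluster assumed), the email address is a made-up placeholder, and the bucket/key layout follows the URLs in the post (`users/<username>` and `emails/<email>`).

```python
# Dict standing in for Riak: (bucket, key) -> stored JSON-ish value.
store = {}

def save_user(username, email):
    # Write the user object keyed by username...
    store[("users", username)] = {"email": email, "otherdata": ""}
    # ...and a small pointer object keyed by the email address.
    store[("emails", email)] = {"owner": username}

def email_exists(email):
    # The uniqueness check is now a single GET against the emails bucket.
    return ("emails", email) in store

save_user("rohman", "roh@example.com")
print(email_exists("roh@example.com"))    # True
print(email_exists("other@example.com"))  # False
```

The trade-off is that the two writes are not atomic, which is exactly the concern raised later in this thread.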
I appreciate you sharing your design. But I can never understand why people go
to such great lengths to add transactions to an eventually-consistent db.
You could possibly solve the same problem by using memcached or Redis. Example:
1. Look up the user in Riak. If that lookup succeeds, abort.
2.
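The "reserve first" idea can be sketched with an atomic set-if-absent, in the style of Redis's `SET ... NX`. A dict stands in for Redis here so the example runs without a server; with redis-py the reservation would be roughly `r.set(key, owner, nx=True)`. The names and email below are placeholders, not from the thread.

```python
reservations = {}

def reserve(email, owner):
    """Atomically claim an email address; return False if it is taken."""
    # dict.setdefault inserts only when the key is absent and returns the
    # stored value either way, mimicking set-if-absent semantics.
    return reservations.setdefault(email, owner) == owner

print(reserve("roh@example.com", "rohman"))    # True  (claimed)
print(reserve("roh@example.com", "intruder"))  # False (already taken)
```

Only after the reservation succeeds would you go on to write the user object into Riak, so two concurrent signups can never both claim the same address.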
Hmm, but if the username (or email) must be unique then I think this problem
may be more than just indexing. There's also an id reservation issue.
At least in my case, I did not think 2i would work for me, because I needed
one of the indexes (email) to be unique and reservable on a first-come basis
Ok, then 2i will work fine for what you're doing if you're going to keep adding
fields like that. You can just use the binary type for the 2i keys (the _bin suffix).
http://basho.com/blog/technical/2011/09/14/Secondary-Indexes-in-Riak/
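For reference, binary 2i fields are attached over Riak's HTTP interface as `x-riak-index-<name>_bin` headers on the PUT. The sketch below only builds the request pieces, no live cluster is assumed, and the email/username values are placeholders.

```python
def index_headers(**fields):
    """Turn keyword fields into Riak 2i binary-index header names."""
    return {f"x-riak-index-{name}_bin": value for name, value in fields.items()}

headers = index_headers(email="roh@example.com", username="rohman")
print(headers["x-riak-index-email_bin"])  # roh@example.com

# An exact-match 2i query is then a GET against the index endpoint,
# e.g. /buckets/users/index/email_bin/roh@example.com (Riak 1.0+ URL form).
```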
You first said you were concerned about the query performance due to th
Hi Nate,
There are only 2 secondary keys for now (in addition to the primary key), but
this number will grow to 5 or more pretty soon.
I think when you say "insert each separately", you mean create 2 duplicate
objects, one keyed by username and one keyed by email. Or do you mean create
one
On Nov 7, 2011, at 5:45 PM, Greg Pascale wrote:
> Hi,
>
> I'm thinking about using 2i for a certain piece of my system, but I'm worried
> that the document-based partitioning may make it suboptimal.
>
> The issue is that the secondary fields I want to query over (email and
> username) are unique
Hi,
I'm thinking about using 2i for a certain piece of my system, but I'm worried
that the document-based partitioning may make it suboptimal.
The issue is that the secondary fields I want to query over (email and
username) are unique, so each will only ever map to one value. Since 2i queries
Hi,
Thanks for the reply.
There were a lot of useful details there, but the thing I need to reply to, I
think, is the whole question of whether the map step is combining multiple
images, or processing a single image.
The number of calibrations (in the sense I used that term) is always small.
I'
On Nov 7, 2011, at 1:23 PM, andrew cooke wrote:
> Apologies if this is a dumb idea, or I am asking in the wrong place. I'm
> muddling around trying to understand various bits of technology while piecing
> together a possible project. So feel free to tell me I'm wrong :o)
>
> I am considering how
Great project, Andrew. It's not a dumb idea, sounds pretty awesome actually. I
just don't think Riak will get you there.
As I see it, the basic outline looks something like:
fetch one image > fetch another image > mutate > write output
I just don't see how Riak's implementation of map reduce al
Hi,
Apologies if this is a dumb idea, or I am asking in the wrong place. I'm
muddling around trying to understand various bits of technology while piecing
together a possible project. So feel free to tell me I'm wrong :o)
I am considering how best to design a system that processes data from
te
Hi Spike,
After decoding your message, it looks like your query is simply 'AND'. The
parser barfs at this, as it expects terms on both sides of the AND, and this
causes the listener for that particular connection to go down. No
other connections should be affected. Since the query is parsed on the
In the release notes, we mention 256, 512, or 1024 as reasonable
maximum ring sizes depending on the performance of your underlying
hardware. If you have the hardware to spare, you could try setting up
a duplicate cluster and then upgrade it to 1.0 and see how things work
out. If not, you may want
Thanks for the explanation Sean.
--
View this message in context:
http://riak-users.197444.n3.nabble.com/Riak-1-0-0-Key-Issue-tp3487595p3487907.html
Sent from the Riak Users mailing list archive at Nabble.com.
Each of those JSON objects returned is a response from a single virtual
node (vnode) that has completed the key-listing operation. Streaming the
results simply means that Riak doesn't buffer and concatenate the results
for you, in the hopes that your application might be able to do something
useful
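Concretely, a streamed key listing arrives as several JSON documents concatenated together, one per responding vnode, and the client stitches them up. A minimal sketch, using `json.JSONDecoder.raw_decode` to walk the concatenated documents; the sample payload here is made up:

```python
import json

def iter_stream_docs(payload):
    """Yield each JSON document from a concatenated stream of them."""
    decoder = json.JSONDecoder()
    idx = 0
    while idx < len(payload):
        doc, end = decoder.raw_decode(payload, idx)
        yield doc
        idx = end
        # Skip any whitespace between documents.
        while idx < len(payload) and payload[idx].isspace():
            idx += 1

sample = '{"keys":["a","b"]}{"keys":[]}{"keys":["c"]}'
keys = [k for doc in iter_stream_docs(sample) for k in doc.get("keys", [])]
print(keys)  # ['a', 'b', 'c']
```

This is also why the first document in the curl example below contains the bucket props: the props object and each vnode's key chunk arrive as separate documents in the same stream.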
Hello,
My Riak servers are misbehaving when users enter invalid queries. The
gen_server erlang process for the PB transport dies but the overall OS level
process is still alive. I am exclusively using PB to access Riak, so everything
grinds to a halt. I plan to work around this by writing a Ria
Lets say I already have a system with a ring size of 1024 in 0.14.2,
should I wait to upgrade until this is sorted out? And how long will
that be? Where is this in terms of Basho's priorities?
You say stay under 1024, so I assume that means the max size you
recommend would be 512? Does this al
x@x:~$ curl 127.0.0.1:8098/riak/string?keys=stream
{"props":{"allow_mult":false,"basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"dw":"quorum","last_write_wins":false,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,
Thanks for the detailed response, Joe. We're at a 256-partition ring size,
which seems a little high, but within your guidance.
Jim
On 11/6/11 8:10 PM, "Joseph Blomstedt" wrote:
>> RE: large ring size warning in the release notes, is the performance
>> degradation linear below 256? That is, until