Re: CRDT objects and 2i?
Hi Paul,

CRDTs are stored as normal objects in Riak, although in a format that allows Riak to resolve conflicts automatically, which means the normal restrictions on object size apply. As secondary indexes do not have to be related to the data in any way, Riak would not be able to determine how secondary indexes should be resolved, so secondary indexes are not supported for CRDTs. If you need to query CRDTs, I would instead recommend enabling Riak Search 2.0, as this supports querying CRDTs through Solr.

Best regards,

Christian

On Wed, Mar 26, 2014 at 6:39 AM, Paul Walk wrote:
> I'm still a little unclear on the relationship between CRDTs, K/V objects
> and buckets. I understand that bucket_types add a third level of
> name-spacing, so I expected that as long as I used a bucket_type
> consistently I would be able to use CRDT objects with 2i indexes. However,
> using the Ruby client, I find that while this successfully returns my
> previously saved CRDT map called 'my_map_object_key':
>
> map = Riak::Crdt::Map.new(my_bucket, 'my_map_object_key', 'my_map_bucket_type')
>
> its key is not recognised in the same bucket, using the same bucket_type.
> So this:
>
> puts my_bucket.get_index('$key', '0'..'', {:bucket_type => my_map_bucket_type}).size
>
> returns '0'.
>
> I suspect I am misunderstanding something fundamental - can anyone advise?
>
> Thanks,
>
> Paul
>
> ---
> Paul Walk
> http://www.paulwalk.net
> ---

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
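[Editor's note: Christian's point that Riak can resolve CRDT conflicts automatically comes from CRDT merge functions being deterministic. A toy Ruby sketch of a grow-only counter illustrates the idea; this is an illustration of the CRDT concept only, not Riak's implementation, and the class and method names are made up.]

```ruby
# Toy G-Counter: one entry per actor; merging sibling replicas takes the
# per-actor maximum, so any two replicas converge deterministically.
class GCounter
  attr_reader :counts

  def initialize(counts = {})
    @counts = counts
  end

  def increment(actor, n = 1)
    @counts[actor] = (@counts[actor] || 0) + n
    self
  end

  def value
    @counts.values.sum
  end

  # Merge is commutative, associative, and idempotent, which is why the
  # database can resolve conflicts without asking the application.
  def merge(other)
    merged = @counts.merge(other.counts) { |_actor, a, b| [a, b].max }
    GCounter.new(merged)
  end
end

a = GCounter.new.increment(:node1, 3)
b = GCounter.new.increment(:node1, 3).increment(:node2, 2)
puts a.merge(b).value  # => 5
```

A secondary index, by contrast, is arbitrary application data with no merge rule, which is why 2i cannot be resolved the same way.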
Re: CRDT objects and 2i?
Thanks Christian - that's nice and clear.

Paul

On 26 Mar 2014, at 08:15, Christian Dahlqvist wrote:
> Hi Paul,
>
> CRDTs are stored as normal objects in Riak, although in a format that allows
> Riak to resolve conflicts automatically, which means the normal restrictions
> on object size apply. [...]
>
> If you need to query CRDTs, I would instead recommend enabling Riak Search 2.0,
> as this supports querying CRDTs through Solr.

---
Paul Walk
http://www.paulwalk.net
---
Riak Search API: Returning matching documents
Are there any plans to introduce a Riak Search API that will allow a search to return matching documents, rather than references to matching documents? It seems rather cumbersome to have to make at least two round trips, or more, to satisfy a query.

I suppose this would also require Riak KV to implement a bulk fetch API, as a result set may contain a large number of results.

The closest approximation to this functionality at the moment is to perform a search using MapReduce. This has the advantage of allowing you to stream the results, but it incurs the overhead of setting up the MR job, and has an implicit R=1 that can't be tuned.

Elias Levy
Re: Riak Search API: Returning matching documents
The newest Riak Search 2 (Yokozuna) allows you to create custom Solr schemas. One of the field options is to store the values you wish to index. Then, when you perform a search request, you can retrieve any of the stored values with a field list (fl) parameter.

Eric

On Mar 26, 2014 9:37 AM, "Elias Levy" wrote:
> Are there any plans to introduce a Riak Search API that will allow a
> search to return matching documents, rather than references to matching
> documents?
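[Editor's note: Eric's stored-field approach is declared in the custom Solr schema. A sketch of what such a fragment might look like follows; the field names are illustrative, not Basho's defaults.]

```xml
<!-- Illustrative fragment of a custom Solr schema for Riak Search 2.
     stored="true" is what lets a search response carry the value itself
     rather than just the matching bucket/key reference. -->
<field name="name_s"  type="string" indexed="true" stored="true"/>
<field name="email_s" type="string" indexed="true" stored="false"/>
```

A query with something like `q=name_s:Alice&fl=name_s` against the index's search endpoint would then return the stored `name_s` value alongside each hit, with no second round trip to KV.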
Re: Riak Search API: Returning matching documents
On Wed, Mar 26, 2014 at 9:44 AM, Eric Redmond wrote:
> The newest Riak Search 2 (yokozuna) allows you to create custom solr
> schemas. One of the field options is to save the values you wish to index.

Certainly, but that makes the KV storage redundant. If that is really the suggested solution, then there may as well be a null storage KV backend.

Of course, a null KV backend is not sufficient, given that writes to Yokozuna happen asynchronously and that AAE is used to ensure KV and Yokozuna are in sync. You'd also need to make writes to Yokozuna synchronous with the client request and disable AAE, as there would be nothing to sync with.
Re: link walking in the Riak 2.0 Ruby client?
On 25 Mar 2014, at 11:05, Paul Walk wrote:
> I presume Riak::Links are being retained - it's just the link-walking
> functionality which is being deprecated?

Link-walking itself has been removed from the 2.0 ruby-client. We still store and retrieve the metadata, and you can access it, but we don't walk the links for you.

Bryce Kerley
br...@basho.com
Re: Riak Search API: Returning matching documents
That is correct. Storing values in Solr makes the values redundant. That's because Solr and Riak are different systems with different storage strategies. Since we only return the Solr results, the only way we could access the KV values would be to make a separate call, which is not much different from your client making that same call.

As for separating Riak Search from KV entirely, this is a possibility we've looked into, but it won't be ready for 2.0. I'm sorry to say that, for the time being, the only option for your request is to store values in both places.

Eric

On Mar 26, 2014 10:05 AM, "Elias Levy" wrote:
> Certainly, but that makes the KV storage redundant. If that is really the
> suggested solution, then there may as well be a null storage KV backend.
Re: Riak Search API: Returning matching documents
On Wed, Mar 26, 2014 at 10:36 AM, Eric Redmond wrote:
> As for separating Riak Search from kv entirely, this is a possibility
> we've looked into, but it won't be ready for 2.0.

Thanks for the response, Eric. I understand the current limitations. My question was forward-looking.

Riak is an amazing piece of technology that provides great availability. Ops loves Riak. Alas, in my opinion, its weakness has always been ease of use for developers. When it was just the KV store, the complexities of eventual consistency were placed squarely on the developer's shoulders and queryability was very limited.

2i helped somewhat, and the new CRDT data types improve things tremendously, as does Yokozuna. But there are still gaps. No bulk loading. No bulk fetching.

Riak has always felt like a collection of components, rather than an integrated system. KV is unaware of Bitcask expirations. Search doesn't return matched documents.

MongoDB's cluster and storage layers may be a disgrace, but the one thing they got right is the expressive API. It's one reason why developers love Mongo even as it is hated by Ops.

I'd love to see this ease of use within Riak, so I can actually get our developers to use it more.
Re: Riak Search API: Returning matching documents
Agree with a lot of your points, Elias. But I've found that as a solo developer pushing product in my organization, and I would venture to say there are others like mine, Riak's ops proposition trumps some of these developer issues. Not having to hire ops personnel to babysit a Riak app is a big win for organizations that barely have money to hire a dev.

If you are a developer that pushes product, you can deal with round-trip issues, multi-fetch issues, etc. - aka Riak's lack of developer sugar. You mentioned it earlier, but a search > MR is exactly how I've done multi-fetch in Riak 1.x and, it seems, will continue to do in Riak 2.x. Of course, solutions are specific to your application. A search > user-land multi-fetch wrapper function is trivial to implement. Actually, I don't know why Basho doesn't ship just such a wrapper in Erlang that would take an array of bucket/key pairs and push out an array of responses. But either way, it's not really a show-stopper.

Ya, sugar is nice but, as you know, eventually you crash.

-Alexander Sicular

@siculars

On Mar 26, 2014, at 2:10 PM, Elias Levy wrote:
> Riak is an amazing piece of technology that provides great availability. Ops
> loves Riak. Alas, in my opinion, its weakness has always been ease of use
> for developers. [...]
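[Editor's note: the user-land multi-fetch wrapper Alexander describes - take an array of bucket/key pairs, push out an array of responses - can be sketched in a few lines of Ruby. This is a sketch only: `multi_fetch` and `FakeClient` are made-up names, and the fake in-memory client stands in for a real Riak client (which would be `bucket.get(key)` in the riak gem) so the example runs without a cluster.]

```ruby
# Fetch many bucket/key pairs concurrently, returning responses in the
# same order as the input pairs. `client` is any object responding to
# #get(bucket, key).
def multi_fetch(client, pairs, concurrency: 4)
  queue = Queue.new
  pairs.each_with_index { |pair, i| queue << [pair, i] }
  results = Array.new(pairs.size)

  workers = Array.new(concurrency) do
    Thread.new do
      loop do
        # Non-blocking pop raises ThreadError once the queue is drained.
        (bucket, key), i = begin
          queue.pop(true)
        rescue ThreadError
          break
        end
        results[i] = client.get(bucket, key)
      end
    end
  end
  workers.each(&:join)
  results
end

# In-memory stand-in for a Riak client, keyed by [bucket, key].
FakeClient = Struct.new(:data) do
  def get(bucket, key)
    data[[bucket, key]]
  end
end

client = FakeClient.new({ %w[users a] => "Alice", %w[users b] => "Bob" })
puts multi_fetch(client, [%w[users a], %w[users b]]).inspect
```

Each worker writes to a distinct slot in `results`, so no extra locking is needed; missing keys simply come back as `nil`.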
Re: Riak Search API: Returning matching documents
Just want to add: driver (client) dev is listening! Adding this to our clients is a fairly easy thing, and I'll add it to our todo list.

- Roach

On Wed, Mar 26, 2014 at 4:03 PM, Alexander Sicular wrote:
> A search > user-land multi-fetch wrapper function is trivial to implement.
> Actually, I don't know why Basho doesn't ship just such a wrapper in Erlang
> that would take an array of bucket/key pairs and push out an array of
> responses.