Re: Riak CS - Unable to create/view bucket details using dragon disk
Hi Shino, I tried multiple times with the access key and secret key configured in riak-cs.conf, but I'm unable to create or view buckets using s3cmd. Could you let me know if there is any documentation for this? My aim is to create buckets and upload objects to Riak buckets from an HTTP interface similar to AWS S3.

--
View this message in context: http://riak-users.197444.n3.nabble.com/Riak-CS-Unable-to-create-view-bucket-details-using-dragon-disk-tp4033494p4033579.html
Sent from the Riak Users mailing list archive at Nabble.com.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Deleted keys come back
>Can you tell us more about that? How do siblings affect the results of search in your case?

When I checked last year, both the search results and the stats counts included sibling objects, which makes the results complicated to handle. For example:

$ curl -sS http://:/types/ssdms_test/buckets/unit_test_bucket1/keys/bkey1_1
Siblings:
3HIxQ2pcu0YMjekbtIuixb
3JonReTfLC3GvG534WS21i
63t3g19KaCLu7JFLMxHxJJ

$ curl -sS 'http://:/search/query/unit_test_bucket1_index?wt=json&q=key1_s%3aval1_1&rows=50&stats=true&stats.field=key3_i' | jq .
{
  "responseHeader": {
    "status": 0,
    "QTime": 7,
    "params": {
      "q": "key1_s:val1_1",
      "shards": "192.168.1.235:8093/internal_solr/unit_test_bucket1_index",
      "stats": "true",
      "192.168.1.235:8093": "_yz_pn:63 OR _yz_pn:61 OR _yz_pn:59 OR _yz_pn:57 OR _yz_pn:55 OR _yz_pn:53 OR _yz_pn:51 OR _yz_pn:49 OR _yz_pn:47 OR _yz_pn:45 OR _yz_pn:43 OR _yz_pn:41 OR _yz_pn:39 OR _yz_pn:37 OR _yz_pn:35 OR _yz_pn:33 OR _yz_pn:31 OR _yz_pn:29 OR _yz_pn:27 OR _yz_pn:25 OR _yz_pn:23 OR _yz_pn:21 OR _yz_pn:19 OR _yz_pn:17 OR _yz_pn:15 OR _yz_pn:13 OR _yz_pn:11 OR _yz_pn:9 OR _yz_pn:7 OR _yz_pn:5 OR _yz_pn:3 OR _yz_pn:1",
      "rows": "50",
      "wt": "json",
      "stats.field": "key3_i"
    }
  },
  "response": {
    "numFound": 3,
    "start": 0,
    "maxScore": 4.149883,
    "docs": [
      {
        "key3_i": 1,
        "key2_s": "val_2",
        "key1_s": "val1_1",
        "_yz_id": "1*ssdms_test*unit_test_bucket1*bkey1_1*61*3HIxQ2pcu0YMjekbtIuixb",
        "_yz_rk": "bkey1_1",
        "_yz_rt": "ssdms_test",
        "_yz_rb": "unit_test_bucket1"
      },
      {
        "key3_i": 1,
        "key2_s": "val_2",
        "key1_s": "val1_1",
        "_yz_id": "1*ssdms_test*unit_test_bucket1*bkey1_1*61*3JonReTfLC3GvG534WS21i",
        "_yz_rk": "bkey1_1",
        "_yz_rt": "ssdms_test",
        "_yz_rb": "unit_test_bucket1"
      },
      {
        "key3_i": 1,
        "key2_s": "val_2",
        "key1_s": "val1_1",
        "_yz_id": "1*ssdms_test*unit_test_bucket1*bkey1_1*61*63t3g19KaCLu7JFLMxHxJJ",
        "_yz_rk": "bkey1_1",
        "_yz_rt": "ssdms_test",
        "_yz_rb": "unit_test_bucket1"
      }
    ]
  },
  "stats": {
    "stats_fields": {
      "key3_i": {
        "min": 1,
        "max": 1,
        "count": 3,
        "missing": 0,
        "sum": 3,
        "sumOfSquares": 3,
        "mean": 1,
        "stddev": 0,
        "facets": {}
      }
    }
  }
}

--
View this message in context: http://riak-users.197444.n3.nabble.com/Deleted-keys-come-back-tp4033536p4033580.html
Sent from the Riak Users mailing list archive at Nabble.com.
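Each sibling shows up as its own Solr document (note the three `_yz_id` values above differ only in the trailing vtag), so one client-side workaround is to collapse hits that share the same `_yz_rt`/`_yz_rb`/`_yz_rk` and recompute any aggregates over the collapsed set. A minimal sketch, assuming the result docs have been parsed into dicts with the fields shown in the response above (the helper name is mine):

```python
# Collapse Yokozuna search hits that refer to the same Riak object.
# Siblings of one object appear as separate Solr docs that share
# _yz_rt (bucket type), _yz_rb (bucket) and _yz_rk (key), and differ
# only in the vtag suffix of _yz_id.

def dedupe_hits(docs):
    """Keep one doc per (type, bucket, key) triple; first one wins."""
    seen = set()
    unique = []
    for doc in docs:
        ident = (doc["_yz_rt"], doc["_yz_rb"], doc["_yz_rk"])
        if ident not in seen:
            seen.add(ident)
            unique.append(doc)
    return unique

# Three hits as in the search response above: one key, three siblings.
hits = [
    {"_yz_rt": "ssdms_test", "_yz_rb": "unit_test_bucket1",
     "_yz_rk": "bkey1_1", "key3_i": 1},
    {"_yz_rt": "ssdms_test", "_yz_rb": "unit_test_bucket1",
     "_yz_rk": "bkey1_1", "key3_i": 1},
    {"_yz_rt": "ssdms_test", "_yz_rb": "unit_test_bucket1",
     "_yz_rk": "bkey1_1", "key3_i": 1},
]

unique = dedupe_hits(hits)
print(len(unique))                       # 1 object instead of 3 hits
print(sum(d["key3_i"] for d in unique))  # aggregate over unique docs only
```

Note this only fixes the client's view: Solr-side aggregates (`stats.field`, `numFound`) still count every sibling, so any stats you care about would need to be recomputed client-side over the deduplicated docs, as above.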
Re: Riak CS - Unable to create/view bucket details using dragon disk
I have been using s3curl.pl with success to do the dirty work with Riak CS. Here’s a snippet from my docker setup files. I use the same methodology in production.

# CREATE CS BUCKET AND APPLY ACL
##
bin/s3curl.pl \
    --debug \
    --id ${RIAK_ADMIN_KEY} \
    --key ${RIAK_ADMIN_SECRET} \
    --acl private \
    -- -s -v -x social.dev.pryvy.com:50201 \
    -X PUT http://social-media.cs.pryvy.com/ > /dev/null 2>&1
echo "created social-media bucket"

eval "bin/s3curl.pl --debug --id ${RIAK_ADMIN_KEY} --key ${RIAK_ADMIN_SECRET} -- -s -v -x social.dev.pryvy.com:50201 -H 'x-amz-grant-full-control: id=\"${RIAK_ADMIN_ID}\"' -H 'x-amz-grant-write: id=\"${RIAK_SOCIAL_ID}\"' -X PUT http://social-media.cs.pryvy.com/?acl" > /dev/null 2>&1
echo "applied social bucket policy"

Also, when applying the ACL, the command has to be evaluated with eval, as shown above, if you are using bash as your shell.

HTH,
Shawn

On 10/13/15, 1:24 AM, "riak-users on behalf of G" wrote:

>Hi Shino,
>
>I tried multiple times with the access key and secret key configured in
>riak-cs.conf. But I'm unable to create or view buckets using s3cmd.
>
>Could you let me know if there is any documentation for this?
>
>My aim is to create buckets, upload objects to riak buckets from an HTTP
>interface similar to AWS S3.
>
>--
>View this message in context:
>http://riak-users.197444.n3.nabble.com/Riak-CS-Unable-to-create-view-bucket-details-using-dragon-disk-tp4033494p4033579.html
>Sent from the Riak Users mailing list archive at Nabble.com.
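For anyone curious what s3curl.pl is doing under the hood: it builds an AWS signature-version-2 `Authorization` header, which is a base64-encoded HMAC-SHA1 over a canonical "string to sign". A rough sketch of that calculation (this is not s3curl's actual code; the field layout follows the S3 v2 signing scheme, and the secret key below is AWS's well-known documentation example, not a real credential):

```python
import base64
import hashlib
import hmac

def s3_v2_signature(secret_key, verb, content_md5, content_type, date,
                    canonicalized_amz_headers, canonicalized_resource):
    """Compute an AWS signature-v2 string for an S3-style request.

    canonicalized_amz_headers must be "" or lines each ending in "\n";
    subresources like ?acl are part of the canonicalized resource.
    """
    string_to_sign = "\n".join([verb, content_md5, content_type, date]) \
        + "\n" + canonicalized_amz_headers + canonicalized_resource
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Example: sign a bucket-ACL PUT like the one above (placeholder values).
sig = s3_v2_signature(
    secret_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    verb="PUT",
    content_md5="",
    content_type="",
    date="Tue, 13 Oct 2015 08:24:00 +0000",
    canonicalized_amz_headers="",
    canonicalized_resource="/social-media/?acl",
)
print("Authorization: AWS <access-key>:" + sig)
```

The `?acl` subresource must appear in the canonicalized resource, which is part of why quoting the full command for eval matters when the request is assembled in bash.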
Re: Riak CS - Unable to create/view bucket details using dragon disk
The Riak CS Fast Track would answer your question.

On Oct 13, 2015 21:30, "Shawn Debnath" wrote:
> I have been using s3curl.pl with success to do the dirty work with Riak
> CS. Here’s a snippet from my docker setup files. I use the same
> methodology in production.
> [...]

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Riak does not have primary partitions running?
What is the right solution if I have a partition that is not running in my cluster when I execute the riak-admin transfers command?
Re: Riak-KV and Spring
Jagan,

We don't have updated support in Spring Data for the latest Riak releases. The Java client is very capable on its own (but somewhat verbose if you're used to using SD repos). I recently wrote an example app that uses Boot and the Riak Java client to ingest IoT data. It's located here: https://github.com/jbrisbin/tfl-ingest

jb

On Sun, Oct 11, 2015 at 10:19 PM Jagan Mangalampalli wrote:
> My company is a Spring shop. We are trying to implement Riak, but I see
> that the Spring and Riak integration is way behind.
> When trying to build a Spring Boot app, I found that
> spring-data-riak-1.0.0.M3.jar is not compatible with Spring Boot 1.2.
> Any suggestions on implementing Spring Boot + Riak?
> -Jagan
Re: Java Riak client can't handle a Riak node failure?
Alexander, thanks for that reminder. Yes, n_val = 2 would suit us better. I'll look into getting that tested.

Regards,
Vanessa

On Thu, Oct 8, 2015 at 1:04 PM, Alexander Sicular wrote:
> Greetings and salutations, Vanessa.
>
> I am obliged to point out that running an n_val of 3 (or more) is highly
> detrimental in a cluster smaller than 5 nodes, and it is not recommended.
> Let's talk about why that is the case for a moment.
>
> The default level of abstraction in Riak is the virtual node, or vnode.
> A vnode represents a segment of the ring (ring_size, configurable in
> powers of 2; default 64, min 8, max 1024). The "ring" is the number line
> 0 - 2^160, which represents the output space of the SHA hash.
>
> Riak achieves high availability through replication. Riak replicates data
> to vnodes. A hypothetical replica set might be, for example, [vnode 1,
> vnode 10, vnode 20]. Note, I said vnodes, not physical nodes. And therein
> lies the concern. Considering a default ring size of 64 and a default
> replica count of 3, the minimum recommended production deployment is 5
> nodes, because only then is every replica set guaranteed to have each
> vnode on a distinct physical node. With fewer nodes, some fraction of
> replica sets will have two of their copies allocated to the same physical
> machine.
>
> You can see where I'm going with this. Firstly, performance will be
> negatively impacted when writing more than one copy of data to the same
> physical hardware, aka disk. But more importantly, you are negating
> Riak's high availability mechanic. If you lost any given physical node,
> you would lose access to two copies of the set of data which had two
> replicas on that node.
> Riak is designed to withstand the loss of any two physical nodes while
> maintaining access to 100% of your corpus, assuming you are running the
> default settings and have deployed 5 nodes.
>
> Here is the rule of thumb that I recommend (me personally, not Basho) to
> folks looking to deploy clusters with less than 5 nodes:
>
> 1,2 nodes: n_val 1
> 3,4 nodes: n_val 2
> 5+ nodes: n_val 3
>
> In summary, please consider reconfiguring your production deployment.
>
> Sincerely,
> Alexander
>
> @siculars
> http://siculars.posthaven.com
>
> Sent from my iRotaryPhone
>
> On Oct 7, 2015, at 19:56, Vanessa Williams <
> vanessa.willi...@thoughtwire.ca> wrote:
>
> Hi Dmitri, what would be the benefit of r=2, exactly? It isn't necessary
> to trigger read-repair, is it? If it's important I'd rather try it sooner
> than later...
>
> Regards,
> Vanessa
>
> On Wed, Oct 7, 2015 at 4:02 PM, Dmitri Zagidulin wrote:
>
>> Glad you sorted it out!
>>
>> (I do want to encourage you to bump your R setting to at least 2,
>> though. Run some tests -- I think you'll find that the difference in
>> speed will not be noticeable, but you do get a lot more data resilience
>> with 2.)
>>
>> On Wed, Oct 7, 2015 at 6:24 PM, Vanessa Williams <
>> vanessa.willi...@thoughtwire.ca> wrote:
>>
>>> Hi Dmitri, well...we solved our problem to our satisfaction but it
>>> turned out to be something unexpected.
>>>
>>> The keys were two properties mentioned in a blog post on "configuring
>>> Riak’s oft-subtle behavioral characteristics":
>>> http://basho.com/posts/technical/riaks-config-behaviors-part-4/
>>>
>>> notfound_ok = false
>>> basic_quorum = true
>>>
>>> The 2nd one just makes things a little faster, but the first one is
>>> the one whose default value of true was killing us.
>>>
>>> With r=1 and notfound_ok=true (default), if the first node to respond
>>> didn't find the requested key, the authoritative answer was "this key
>>> is not found". Not what we were expecting at all.
>>> >>> With the changed settings, it will wait for a quorum of responses and >>> only if *no one* finds the key will "not found" be returned. Perfect. >>> (Without this setting it would wait for all responses, not ideal.) >>> >>> Now there is only one snag, which is that if the Riak node the client >>> connects to goes down, there will be no communication and we have a >>> problem. This is easily solvable with a load-balancer, though for >>> complicated reasons we actually don't need to do that right now. It's just >>> acceptable for us temporarily. Later, we'll get the load-balancer working >>> and even that won't be a problem. >>> >>> I *think* we're ok now. Thanks for your help! >>> >>> Regards, >>> Vanessa >>> >>> >>> >>> On Wed, Oct 7, 2015 at 9:33 AM, Dmitri Zagidulin >>> wrote: >>> Yeah, definitely find out what the sysadmin's experience was, with the load balancer. It could have just been a wrong configuration or something. And yes, that's the documentation page I recommend - http://docs.basho.com/riak/latest/ops/
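Alexander's point above about replica sets doubling up on a physical host can be seen with a toy model: assign vnodes to physical nodes round-robin and check each preference list (n_val consecutive vnodes clockwise). This is not Riak's actual claim algorithm (which also targets a spacing constraint, target_n_val), so the counts below are illustrative only, but it shows the wrap-around collision that makes small clusters risky:

```python
# Toy model: a ring of 64 vnodes claimed round-robin by N physical nodes.
# A preference list is n_val consecutive vnodes clockwise; count the
# preflists that place two replicas on the same physical node.

RING_SIZE = 64
N_VAL = 3

def colliding_preflists(num_nodes):
    owners = [i % num_nodes for i in range(RING_SIZE)]  # round-robin claim
    bad = 0
    for start in range(RING_SIZE):
        preflist = [owners[(start + j) % RING_SIZE] for j in range(N_VAL)]
        if len(set(preflist)) < N_VAL:  # some node owns 2+ of the replicas
            bad += 1
    return bad

for n in (3, 5):
    print(n, "nodes:", colliding_preflists(n), "colliding preflists")
```

In this toy model the collisions come from the ring wrapping around (64 is not a multiple of 3, so the last preflists reuse the first owners); Riak's real claim has additional constraints, which is where the 5-node recommendation for n_val 3 comes from.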
Re: Java Riak client can't handle a Riak node failure?
Hi Dmitri, your point about r=2 is noted. I'll probably go with that. The thing I have to decide is how to reconcile duplicates. For the time being I can tolerate some stale data/inconsistency (caused by r=1). But for the future I would prefer not to risk that. Thanks to everyone for their help with the gory details. I'll be upgrading from Riak 1.4.10 to Riak 2.1.2 (or whatever the latest is) shortly and I'll take all of these points into consideration at that time. Best regards, Vanessa Vanessa Williams ThoughtWire Corporation http://www.thoughtwire.com On Thu, Oct 8, 2015 at 8:45 AM, Dmitri Zagidulin wrote: > Hi Vanessa, > > The thing to keep in mind about read repair is -- it happens > asynchronously on every GET, but /after/ the results are returned to the > client. > > So, when you issue a GET with r=1, the coordinating node only waits for 1 > of the replicas before responding to the client with a success, and only > afterwards triggers read-repair. > > It's true that with notfound_ok=false, it'll wait for the first > non-missing replica before responding. But if you edit or update your > objects at all, an R=1 still gives you a risk of stale values being > returned. > > For example, say you write an object with value A. And let's say your 3 > replicas now look like this: > > replica 1: A, replica 2: A, replica 3: notfound/missing > > A read with an R=1 and notfound_ok=false is just fine, here. (Chances are, > the notfound replica will arrive first, but the notfound_ok setting will > force the coordinator to wait for the first non-empty value, A, and return > it to the client. And then trigger read-repair). > > But what happens if you edit that same object, and give it a new value, > B? So, now, there's a chance that your replicas will look like this: > > replica 1: A, replica 2: B, replica 3: B. 
> > So now if you do a read with an R=1, there's a chance that replica 1, with > the old value of A, will arrive first, and that's the response that will be > returned to the client. > > Whereas, using R=2 eliminates that risk -- well, at least decreases it. > You still have the issue of -- how does Riak decide whether A or B is the > correct value? Are you using causal context/vclocks correctly? (That is, > reading the object before you update, to get the correct causal context?) > Or are you relying on timestamps? (This is an ok strategy, provided that > the edits are sufficiently far apart in time, and you don't have many > concurrent edits, AND you're ok with the small risk of occasionally the > timestamp being wrong). You can use the following strategies to prevent > stale values, in increasing order of security/preference: > > 1) Use timestamps (and not pass in vector clocks/causal context). This is > ok if you're not editing objects, or you're ok with a bit of risk of stale > values. > > 2) Use causal context correctly (which means, read-before-you-write -- in > fact, the Update operation in the java client does this for you, I think). > And if Riak can't determine which version is correct, it will fall back on > timestamps. > > 3) Turn on siblings, for that bucket or bucket type. That way, Riak will > still try to use causal context to decide the right value. But if it can't > decide, it will store BOTH values, and give them back to you on the next > read, so that your application can decide which is the correct one. > > > > > > > > On Thu, Oct 8, 2015 at 1:56 AM, Vanessa Williams < > vanessa.willi...@thoughtwire.ca> wrote: > >> Hi Dmitri, what would be the benefit of r=2, exactly? It isn't necessary >> to trigger read-repair, is it? If it's important I'd rather try it sooner >> than later... >> >> Regards, >> Vanessa >> >> >> >> On Wed, Oct 7, 2015 at 4:02 PM, Dmitri Zagidulin >> wrote: >> >>> Glad you sorted it out! 