On Thu, Feb 25, 2016 at 9:40 PM, Cosmin Marginean <cos.margin...@gmail.com> wrote:
> On 25 Feb 2016, at 19:26, Cosmin Marginean <cosmargin...@gmail.com> wrote:
>
> Hi,
>
> I couldn’t find this anywhere in the docs: is there a mechanism in Riak to
> fetch only one Register (or a specific entry) from a map?
>
> Right now, a map is just a Riak object.
>
> I guess with Riak Search indexing though, you could get something similar,
> maybe?
>
On 25 Feb 2016, at 19:26, Cosmin Marginean wrote:
Hi,
I couldn’t find this anywhere in the docs: is there a mechanism in Riak to
fetch only one Register (or a specific entry) from a map?
We have a use case where we need to get only the value for a single key in the
map, rather than fetching the entire map.
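As the reply above suggests, the whole map comes back and the register is read
client-side. A minimal sketch of that, assuming the 2.x Riak Java client
data-type API; the bucket type "maps", the key, and the field name are
placeholders, and the import paths are from memory, so check them against your
client version:

    import com.basho.riak.client.api.RiakClient;
    import com.basho.riak.client.api.commands.datatypes.FetchMap;
    import com.basho.riak.client.core.query.Location;
    import com.basho.riak.client.core.query.Namespace;
    import com.basho.riak.client.core.query.crdt.types.RiakMap;
    import com.basho.riak.client.core.query.crdt.types.RiakRegister;

    public class FetchOneRegister {
        public static void main(String[] args) throws Exception {
            RiakClient client = RiakClient.newClient("127.0.0.1");
            // "maps" is assumed to be a bucket type created with datatype = map
            Location loc = new Location(new Namespace("maps", "users"), "user:123");

            // The whole map comes back over the wire; the single register is
            // then picked out on the client side.
            FetchMap.Response response = client.execute(new FetchMap.Builder(loc).build());
            RiakMap map = response.getDatatype();
            RiakRegister email = map.getRegister("email");
            if (email != null) {
                System.out.println(email.getValue()); // BinaryValue
            }
            client.shutdown();
        }
    }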
… a custom Converter (com.basho.riak.client.api.convert.Converter).
>
> I think this is the right way to add custom serialization.
>
> Regards,
> Vitaly
>
> On Mon, Feb 22, 2016 at 11:29 AM, Cosmin Marginean <cos.margin...@gmail.com> wrote:
>
Hi
I presume that the Riak Java client is using Jackson for JSON-to-POJO mapping
and vice versa.
Is there a way to easily inject a custom object mapper there? Or at least
to get a reference to it in order to add custom serializers?
Thank you
Cosmin
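A minimal sketch of the Converter approach mentioned above: a Converter that
owns its own Jackson ObjectMapper, so custom serializers can be registered on
it directly. The Converter method shapes and the ConverterFactory registration
call are written from memory and may differ slightly between client versions;
SomeEntity is just a placeholder domain type.

    import java.io.IOException;

    import com.basho.riak.client.api.convert.ConversionException;
    import com.basho.riak.client.api.convert.Converter;
    import com.basho.riak.client.api.convert.ConverterFactory;
    import com.basho.riak.client.core.util.BinaryValue;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class SomeEntityConverter extends Converter<SomeEntity> {

        // A dedicated mapper; custom serializers/modules get registered here,
        // e.g. MAPPER.registerModule(new MyModule());
        private static final ObjectMapper MAPPER = new ObjectMapper();

        public SomeEntityConverter() {
            super(SomeEntity.class);
        }

        @Override
        public SomeEntity toDomain(BinaryValue value, String contentType) {
            try {
                return MAPPER.readValue(value.getValue(), SomeEntity.class);
            } catch (IOException e) {
                throw new ConversionException(e);
            }
        }

        @Override
        public ContentAndType fromDomain(SomeEntity entity) {
            try {
                byte[] json = MAPPER.writeValueAsBytes(entity);
                return new ContentAndType(BinaryValue.create(json), "application/json");
            } catch (IOException e) {
                throw new ConversionException(e);
            }
        }
    }

    // Registered once at startup, so StoreValue/FetchValue use it for SomeEntity:
    // ConverterFactory.getInstance().registerConverterForClass(SomeEntity.class,
    //         new SomeEntityConverter());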
Both AWS engineers and Basho people will most likely ask for throughput
figures, etc., before recommending solutions.
That being said, there are a few things to consider. For example, most AWS
engineers will suggest using all the availability zones when deploying a
distributed storage solution like Riak.
Hi Ilya
Read “quorum” is unusual and a bit against the idea of a distributed DB, but I
believe what you might find useful is enabling strong consistency, which you
can then choose to apply at a bucket (type) level:
* http://docs.basho.com/riak/latest/dev/advanced/strong-consistency/
One of the reas
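For reference, enabling strong consistency end to end looks roughly like the
following, assuming stock Riak 2.x tooling; the bucket type name "sc" is only a
placeholder.

    ## riak.conf, on every node (requires a restart)
    strong_consistency = on

    ## then create and activate a consistent bucket type
    riak-admin bucket-type create sc '{"props":{"consistent":true}}'
    riak-admin bucket-type activate sc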
uture.
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Tue, Jun 2, 2015 at 5:05 AM, Cosmin Marginean <cosmin...@gmail.com> wrote:
I’m simulating a failure with a concurrent update on a strongly consistent
bucket/entry.
This fails, as it should, for the second update; however, the error
message/code is not entirely useful:
com.basho.riak.client.core.netty.RiakResponseException: failed
We’ve investigated this and the error
> we'll get the ball rolling from there.
>
> Thanks,
> Alex
>
>
> On Mon, Apr 27, 2015 at 7:04 PM, Cosmin Marginean <cosmin...@gmail.com> wrote:
One quick question on the Riak Java Client.
Brian Roach seems to have been the only active contributor to it. Recently he
mentioned that he’s leaving Basho, though, so I was wondering whether he’ll
keep maintaining it going forward.
Since the Riak Java client is now a fundamental part of our ec
> RiakFuture<?, ?> future = riakClient.executeAsync(updateOp);
>
> // await() blocks until the async operation completes
> future.await();
> if (future.isSuccess()) {
>     ...
> } else {
>     ...
> }
>
> Thanks,
> - Roach
>
> On Tue, Feb 3, 2015 at 3:39 PM, Cosmin Marginean <cosmin...@gmail.com> wrote:
I have an edge case where consistency is favoured over availability, so I’m
using a "consistent": true bucket type for a very specific operation.
While testing my setup, I ended up faking a failure by deliberately using an
incorrect vClock.
Using StoreValue, the (second) write fails
s.html
>
> - Roach
>
> On Tue, Jan 27, 2015 at 4:29 AM, Cosmin Marginean <cosmin...@gmail.com> wrote:
I am implementing a custom way to handle Riak Links using the Java client.
Looking at the samples available
(https://github.com/basho/riak-java-client/wiki/Using-links, which is outdated),
it seems that it’s not entirely straightforward to use RiakLinks with POJOs and
automatic conversion. More imp
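For the non-POJO route, attaching a link to a raw RiakObject looks roughly like
this. A minimal sketch assuming the 2.x client, with bucket/key/tag names as
placeholders; the RiakLinks/addLink calls are from memory, so verify them
against your client version.

    import com.basho.riak.client.api.RiakClient;
    import com.basho.riak.client.api.commands.kv.StoreValue;
    import com.basho.riak.client.core.query.Location;
    import com.basho.riak.client.core.query.Namespace;
    import com.basho.riak.client.core.query.RiakObject;
    import com.basho.riak.client.core.query.links.RiakLink;
    import com.basho.riak.client.core.util.BinaryValue;

    public class StoreWithLink {
        public static void main(String[] args) throws Exception {
            RiakClient client = RiakClient.newClient("127.0.0.1");

            RiakObject obj = new RiakObject()
                    .setContentType("text/plain")
                    .setValue(BinaryValue.create("hello"));
            // Attach a link pointing at bucket "people", key "bob", riaktag "friend"
            obj.getLinks().addLink(new RiakLink("people", "bob", "friend"));

            Location loc = new Location(new Namespace("people"), "alice");
            client.execute(new StoreValue.Builder(obj).withLocation(loc).build());
            client.shutdown();
        }
    }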
Hi guys, any ideas on this thing below? Am I missing something around how links
should work when using bucket types?
Thank you
Cos
On Wednesday, 21 January 2015 at 20:42, Cosmin Marginean wrote:
> Using Riak 2.0.2 on CentOS and trying to create some Links attached to an
> object using the Links header.
Hi Santi,
I’m presuming you’re running Linux, so this might be the result of Riak binding
to 127.0.0.1 by default. You might want to adjust that in /etc/riak/riak.conf.
Check the listener.protobuf.* and listener.http.* settings. More on this topic
here: http://docs.basho.com/riak/latest/ops/building/b
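For reference, the relevant riak.conf entries look roughly like this; 0.0.0.0
binds all interfaces, so pick whatever address suits your network, and restart
Riak afterwards.

    ## /etc/riak/riak.conf
    listener.http.internal = 0.0.0.0:8098
    listener.protobuf.internal = 0.0.0.0:8087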
PS: Upgraded to 2.0.4, but the issue still reproduces.
Cos
On Wednesday, 21 January 2015 at 20:42, Cosmin Marginean wrote:
Using Riak 2.0.2 on CentOS and trying to create some Links attached to an
object using the Links header.
We are using bucket types, and this behaviour doesn’t seem to reproduce when
using plain buckets.
Doing the following works as expected:
curl -H "Content-Type:text/plain" -X POST
"http://19
Nothing on the roadmap, no.
>
> Thanks,
> - Roach
>
>
>
> On Tue, Jan 20, 2015 at 10:24 AM, Cosmin Marginean <cosmin...@gmail.com> wrote:
(Apologies if this is a recurring topic, but I haven’t read a clear statement
yet in relation to this)
Using Riak, I sometimes feel that link walking might be a cornerstone for
certain data modelling techniques. The Riak documentation, though, clearly
states that this is not feasible while also
> public static class UpdateEntity extends UpdateValue.Update<SomeEntity>
> {
>     private final SomeEntity entity;
>
>     public UpdateEntity(SomeEntity e)
>     {
>         this.entity = e;
>     }
>
>     @Override
>     public SomeEntity apply(SomeEntity original)
>     {
>         // clobber update: ignore the fetched value, return the new entity
>         return entity;
>     }
> }
>
> ...
> .withUpdate(new UpdateEntity(entity))
> ...
>
>
>
> I've tested b
I’m doing a fairly “by the book” clobber update (store and fetch below work
fine) on an entity using the Java client. I’m seeing an error that happens at
type-inference time within the Riak Java client. I’m pasting below the exact
test that I’m using to generate this, as well as the stacktrace.
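For context, the “by the book” clobber update described here looks roughly like
the following with the 2.x client. SomeEntity and UpdateEntity are the types
from the quoted snippet above (this sketch assumes SomeEntity has a no-arg
constructor); the bucket and key names are placeholders.

    import com.basho.riak.client.api.RiakClient;
    import com.basho.riak.client.api.commands.kv.UpdateValue;
    import com.basho.riak.client.core.query.Location;
    import com.basho.riak.client.core.query.Namespace;

    public class ClobberUpdateExample {
        public static void main(String[] args) throws Exception {
            RiakClient client = RiakClient.newClient("127.0.0.1");
            Location loc = new Location(new Namespace("entities"), "entity-1");

            SomeEntity entity = new SomeEntity();
            // UpdateEntity is the UpdateValue.Update<SomeEntity> from the quoted
            // snippet: it ignores whatever was fetched and returns the new value.
            UpdateValue update = new UpdateValue.Builder(loc)
                    .withUpdate(new UpdateEntity(entity))
                    .build();
            client.execute(update);
            client.shutdown();
        }
    }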
--verbose /var/lib/riak/anti_entropy/*/*
> 4. Turn Riak back on
>
> This will preserve your buckets and bucket types in the cluster
> metadata. You can also automate that, but it's a little more
> complicated.
>
> On Sun, Jan 4, 2015 at 4:25 AM, Cosmin Marginean <cosmin...@gmail.com> wrote:
Hi,
I’m running Riak 2.0.2 on CentOS 6.5, installed from an RPM
(http://s3.amazonaws.com/downloads.basho.com/riak/2.0/2.0.2/rhel/6/riak-2.0.2-1.el6.x86_64.rpm)
I’m interested in wiping the entire storage (in order to re-run tests, etc.). I
have a script that traverses buckets/keys, but I’m aware
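For illustration, the “stop, clear the backend data, restart” approach
described in the reply above might look roughly like this on a stock RPM
install with the default bitcask backend. The paths are assumptions based on
the default platform data dir, so double-check them before deleting anything.

    riak stop
    rm -rf /var/lib/riak/bitcask/*
    rm -rf /var/lib/riak/anti_entropy/*
    ## /var/lib/riak/ring is left alone, which preserves buckets and bucket types
    riak start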