Bryan* (misspelled your name), need a morning coffee :(
On 04/09/2019 10:29, Guido Medina wrote:
Thanks a lot Brian, I have deleted that directory and the node now
starts; let's see how it behaves.
Guido.
On 04/09/2019 10:19, Bryan Hunt wrote:
Easiest solution is to just delete the corrupted partition directory;
it'll be rebuilt from the other replicas. The affected partition:
844930634081928249586505293914101120738586001408
On 4 Sep 2019, at 10:01, Guido Medina wrote:
Hi all,
We had a power cut which caused one of the nodes to corrupt one of the
LevelDB files; after this, that node doesn't even want to start. Here is
the error we are seeing:
2019-09-04 08:46:41.584 [error] <0.2329.0>@riak_kv_vnode:init:527
Failed to start riak_kv_eleveldb_backend backend for
…before handing off all data. How can I resolve this?"
https://docs.basho.com/riak/kv/2.2.3/developing/faq/
This may be helpful as well -
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-November/038435.html
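For reference, the repair route from those links boils down to running
eleveldb's repair against the corrupt partition directory while the node is
stopped. A minimal sketch, assuming a default package install layout and the
partition number from the reply above:

riak stop
# start an Erlang shell with Riak's bundled eleveldb on the code path
/usr/lib/riak/erts-*/bin/erl -pa /usr/lib/riak/lib/eleveldb-*/ebin
1> eleveldb:repair("/var/lib/riak/leveldb/844930634081928249586505293914101120738586001408", []).
2> q().
riak start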
On Tue, Feb 19, 2019 at 3:22 AM Guido Medina wrote:
Hi,
Can someone please point me to the guide explaining how to change the
wants and choose claim functions for Riak's ring distribution?
We would like to set these permanently and trigger a cluster
redistribution. I just can't find that documentation anymore; I was able
to find this…
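For anyone searching later: these are riak_core settings, historically set in
app.config (or advanced.config on 2.x). A minimal sketch, assuming the claim
functions keep their riak_core_claim names; after changing it, a rolling
restart plus the usual riak-admin cluster plan / riak-admin cluster commit
triggers the redistribution:

{riak_core, [
    {wants_claim_fun, {riak_core_claim, wants_claim_v3}},
    {choose_claim_fun, {riak_core_claim, choose_claim_v3}}
]}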
Sent: 01 February 2019 19:22
To: Guido Medina
Cc: riak-users@lists.basho.com
Subject: Re: [ANN] Riak 2.9.0 - Release Candidate 1 Available
Replication would be the optimum solution - in theory
Hi all,
Nice work on the upcoming 2.9.0 release, I have a quick question:
Will it be possible to switch from the eleveldb to the new leveled
backend and Tictac AAE for an existing cluster?
In case it is not possible, we are thinking of using the new replication
and moving to a brand new cluster.
command from the wrong instructions before.
Guido.
On 01/06/18 09:37, Russell Brown wrote:
I don’t see a call to `riak_search_vnode:repair` in those docs
Do you still run legacy riak search (i.e. not yokozuna/solr)?
On 1 Jun 2018, at 09:35, Guido Medina wrote:
Sorry, not repairing a single partition but all partitions per node:
https://docs.basho.com/riak/kv/2.2.3/using/repair-recovery/repairs/#repairing-all-partitions-on-a-node
On 01/06/18 09:34, Guido Medina wrote:
Hi Russell,
I was repairing each node as specified in this guide:
https://docs.basho.com/riak/kv/2.2.3/using/repair-recovery/repairs/#repairing-all-partitions-on-a-node
Can you tell me what command you ran? It looks to me from the output below that
you’re connected to a node and typing commands in the console.
Is this some snippet that you attach and run?
Cheers
Russell
On 1 Jun 2018, at 09:07, Guido Medina wrote:
Hi all,
We started the partitions repair a couple of weeks ago, so far so good
(3 nodes out of 7 done), then we started getting this error:
(r...@node4.domain.com)3> [riak_search_vnode:repair(P) || P <-
Partitions].
** exception error: undefined function riak_search_vnode:repair/1
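riak_search_vnode went away with legacy Riak Search; for KV data, the per-node
repair from the Basho guide uses riak_kv_vnode:repair/1 instead. A sketch of
the equivalent console session (inside riak attach):

{ok, Ring} = riak_core_ring_manager:get_my_ring().
Partitions = [P || {P, Node} <- riak_core_ring:all_owners(Ring), Node =:= node()].
[riak_kv_vnode:repair(P) || P <- Partitions].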
https://docs.basho.com/riak/kv/2.1.1/developing/app-guide/replication-properties/#the-implications-of-notfound-ok
From what you describe, it sounds like only a single copy (out of the original
three), somehow remain present in your cluster.
Best Regards,
Bryan Hunt
On 17 May 2018, at 15:42, Guido Medina wrote:
Hi all,
After some big rebalance of our cluster some keys are not found anymore
unless we set R = 3, we had N = 3 and R = W = 2
Is there any sort of repair that would correct such situation for Riak
2.2.3, this is really driving us nuts.
Any help will be truly appreciated.
Kind regards,
Guido.
So my other question is if this still holds true for the current Riak
Java client 2.1.1?
On 22/06/17 09:49, Guido Medina wrote:
Hi,
I see now there is support for 2i which we needed in order to migrate to
2.x; there was another issue with the old client which forced us to
modify it. The issue was related to the following, let me give an
example:
public class POJO {
    @RiakKey
    public String getKey() {
        // body truncated in the archive
    }
}
…force a rebuild of your hash trees. They rebuild automatically
anyway; this is just making them rebuild faster.
So it would recover .. eventually.
-Fred
On Jun 1, 2017, at 7:13 AM, Guido Medina wrote:
*Correction:* I replied to an old e-mail instead of creating a new one
and forgot to change the subject.
Hi all,
My impression of Riak has always been that it would recover from
anything, but yesterday the worst happened: there was a power
outage, so all the servers in the Riak cluster went down. Once they were
back, we have been having these constant errors, so my questions are:
* How can I recover…
Hi Travis,
I have done similar things using the Java client, but I will assume you
have access to change certain settings in the C client. Assuming you
have RW = 2 and N = 3, your client is returning to you once 2 writes are
made, but an asynchronous write is still pending which will eventually…
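A sketch of the fix on the client side, using the Java client's 1.4.x API for
illustration (bucket/key names hypothetical): raising W to N means the store
call returns only after all replicas have acknowledged, so no write is left
pending:

import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.RiakFactory;
import com.basho.riak.client.bucket.Bucket;

IRiakClient client = RiakFactory.pbcClient("127.0.0.1", 8087);
Bucket bucket = client.fetchBucket("readings").execute();
// W=3 with N=3: no asynchronous write is still pending when this returns
bucket.store("meter-42", "{\"reading\":123}").w(3).execute();
client.shutdown();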
Hi,
Are there any plans on releasing a Riak Java client with Netty-4.1.x?
The reasoning for this is that some projects like Vert.x 3.3.0 for
example are already on Netty-4.1.x, and AFAIK Netty 4.1.x isn't just a
drop-in replacement for 4.0.x.
Would it make sense to support another Riak Java
Disregard please, our patch file uses FQDN and it was pointing to the
wrong file (vm.args)
Guido.
On 13/11/14 14:43, Guido Medina wrote:
Why is Riak assuming the node is called "nonode@nohost"?
I didn't have to set that before, and somehow my vm.args ended up with
the proper node name; in this case it should be
"r...@x1.clonmel.temetra.com"
Regards,
Guido.
On 13/11/14 14:34, Guido Medina wrote:
onode@nohost,[{'r...@x1.clonmel.temetra.com',['r...@x2.clonmel.temetra.com','r...@x3.clonmel.temetra.com']},{'r...@x2.clonmel.temetra.com',['r...@x5.clonmel.temetra.com','r...@x6.clonmel.temetra.com']},{'r...@x3.clonmel.temetra.com
Hi,
Failed miserably trying to upgrade Riak 1.4.10 to 2.0.2 after creating
all the patch files to use the new riak.conf. I noticed that we don't
need custom parameters any longer, so the only thing we changed was the
backend to "leveldb".
Is the LevelDB max open files parameter needed any longer…
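For the archives: in the 2.0 riak.conf the backend choice is a one-liner, and
the old LevelDB max-open-files tuning was folded into leveldb's total-memory
accounting. A sketch, assuming the 2.0.x key names:

storage_backend = leveldb
leveldb.maximum_memory.percent = 70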
You need to learn how to use Jackson, there is no other way around it;
like you were told, you need a default constructor, or an annotated
constructor to instruct Jackson how to instantiate your class, like the
following example:
public class TestClass {
    @JsonCreator
    public TestClass(@JsonProperty("name") String name) {
        // assign to your fields here
    }
}
Hi,
Is it possible to run the Riak Java client 1.4.x against Riak 2.x? At
the moment we would have to do a major refactor in our application to
support Riak Java client v2.x so I'm wondering if it is possible to
first migrate to Riak v2.x before we start our refactor.
Best regards,
Guido.
Hi,
Is there a way to estimate the size of a Riak Key using Riak Java client
1.4.x, say from an instance of IRiakObject?
An approximation would be OK; I can probably get the JSON string and
check its length, but then there are 4 sizes to take into account, right?
1. The JSON string length.
Alex De la Rosa wrote:
Hi Guido,
This could be a solution; although I would try to do it in a
homogeneous system where only one NoSQL DB would be around, if possible :)
Thanks!
Alex
On Fri, Aug 29, 2014 at 5:03 PM, Guido Medina wrote:
Maybe what you are looking for is a combination of both: say, your KV
data in Riak with background processes able to build
the necessary search graphs in Neo4J, in such a way that your data is secure
in a Riak cluster and searchable on several Neo4J servers.
That's just an idea…
Hi,
We have had a problem twice now on Ubuntu 12.04 with mdadm arrays (all
nodes RAID 1 with 2x2TB disks) where once a disk fails, 2i queries don't
return the expected result; the only solution we have been able to apply
is replacing the affected node.
After doing an AAE clean up, repaired p…
Hi,
Does anyone know and can recommend a good - with price/value in mind -
data center in Australia for hosting a Riak cluster? Our main data
center is in Germany (http://www.hetzner.de), which is great, stable,
fast and cheap, but we haven't had any luck finding something similar, and
it has to be…
Hi Simon,
There are some (maybe related) LevelDB fixes in 1.4.9 and 1.4.10; I
don't think there is any harm for you in doing a rolling upgrade since
nothing major changed, just bug fixes. Here is the release notes' link
for reference:
https://github.com/basho/riak/blob/1.4/RELEASE-NOTES.md
period of time). But I just want to make sure I understand what you're
saying there.
Warm regards,
Bryce
On 04/24/2014 01:13 AM, Guido Medina wrote:
Hi Bryce,
If each session ID is unique, even with multiple writers it is unlikely
for you to be writing to the same key at the same time from two
different writers; that being the case, you could store each event as a
JSON object into a JSON array, and when your array reaches a threshold,
say…
Hi,
Based on the documentation and tricks I have seen to fix/repair stuff in
Riak, I would suggest the following approach:
riak-admin [cluster] <action> (if "cluster" is specified, run the
action on every node)
I know most of the commands are designed like that, but adding to
them for speci…
What would be a high ring size that would degrade performance for v3:
128+? 256+?
I should have asked using the original response but I deleted it by accident.
Guido.
On 10/04/14 10:30, Guido Medina wrote:
Hi,
What's the latest non-standard version of this function? v3 right? If
Basho
is a good first step before
waiting for transfers.
Thanks!
--
Luke Bakken
CSE
lbak...@basho.com
On Wed, Apr 9, 2014 at 7:54 AM, Guido Medina wrote:
What do you mean by "wait for…"
Hi,
What's the latest non-standard version of this function? v3 right? If
Basho adds more versions to this, is this somewhere documented?
For our nodes the standard choose/wants claim functions were doing a weird
distribution, so the numbers even out a bit better (just a bit better) by
using v3,
On Wed, Apr 9, 2014 at 6:34 AM, Guido Medina wrote:
Hi,
If nodes are already upgraded to 1.4.8 (and they went all the way from
1.4.0 to 1.4.8 including AAE buggy versions)
Will the following command (as root) on Ubuntu Servers 12.04:
riak stop; rm -Rf /var/lib/riak/anti_entropy/*; riak start
executed on each node be enough to rebuild the AAE hash trees?
Glad it helped, we have been using them for years; I would also
recommend using Ubuntu Server 12.04 LTS over there.
Regards,
Guido.
On 03/03/14 11:50, Massimiliano Ciancio wrote:
2014-02-21 15:44 GMT+01:00 Guido Medina :
http://www.hetzner.de/en/
Really powerful servers, 500+ Mbps inter-server communication.
Guido.
On 21/02/14 14:34, Massimiliano Ciancio wrote:
Hi all,
is there someone who have suggestions for a good, not so expensive,
European provider where to get 5 servers to install Riak?
Thanks
MC
The short answer is no, there is nothing that can fulfil your requirements.
We developed something similar for PostgreSQL, we call it Hybrid DAOs:
basically each POJO is annotated as a Riak KV entity and also as an
EclipseLink JPA entity. I can only give you partial code and some hints.
You need a standar…
Russell Brown wrote:
On 30 Jan 2014, at 10:58, Guido Medina wrote:
Hi,
Now I'm curious too, according to
http://docs.basho.com/riak/latest/ops/advanced/configs/configuration-files/
the default value for the Erlang property last_write_wins is false; now, if
95% of the buckets/keys have no siblings (or conflict resolution), does
that mean that for such buckets last_write_wins…
Guido.
On 29/01/14 11:44, Russell Brown wrote:
Oh damn, wait. You said 1.4.*. There might, therefore, be siblings; do a counter
increment before the copy to ensure siblings are resolved (if you can), or use
RiakEE MDC.
On 29 Jan 2014, at 11:27, Guido Medina wrote:
Hi,
We are using Riak Java client 1.4.x and we want to copy all counters
from cluster A to cluster B (all counters will be stored in a single to
very few buckets); if I list the keys using the special 2i bucket index and
then treat each key as an IRiakObject, will that be enough to copy
counters, or…
Hi,
I'm trying to get different distributions on the ring; I used the following
Erlang code before, which for Riak 1.4.7 is not re-triggering a
re-calculation:
* What is the latest algorithm version? v3? And is there a list of the
last 2 or 3 versions? Sometimes, depending on the keys o…
Basho Technologies
@davidjrusek
On January 24, 2014 at 8:41:07 AM, Guido Medina
(guido.med...@temetra.com) wrote:
Hi,
Is there any small doc that could explain its usage a little bit?
From the Java perspective it would be nice if it pointed out its
c…
Hi,
What's a good value for the transfer limit when re-arranging, adding, or
removing nodes?
Is there a generic rule of thumb based on physical nodes, processors, etc.?
Once the transfer is completed, is it good practice to set it back to its
default value, or should the calculated (guessed?) transfer l…
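For reference, the limit can be inspected and changed at runtime, cluster-wide
or per node (node name hypothetical):

riak-admin transfer-limit                    # show current limits
riak-admin transfer-limit 4                  # 4 concurrent transfers on all nodes
riak-admin transfer-limit riak@10.0.0.1 2    # set it for a single node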
Hi,
Is there any small doc that could explain its usage a little bit?
From the Java perspective it would be nice if it pointed out its
counterpart methods to AtomicInteger, like:
* How to create it? Does incrementing a counter just create it
with zero as the initial value and then incre…
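Part of this is answerable from the 1.4 HTTP counters API: incrementing a
counter that doesn't exist implicitly creates it from zero. A sketch (bucket
and key names hypothetical; the bucket needs allow_mult=true):

# increment by 1 (creates the counter if missing)
curl -X POST http://localhost:8098/buckets/stats/counters/page_views -d 1
# read the current value
curl http://localhost:8098/buckets/stats/counters/page_views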
At the Maven Central repos, yes it has:
https://repository.sonatype.org/index.html#nexus-search;quick~riak-pb
HTH,
Guido.
On 21/01/14 14:22, Jon Brisbin wrote:
Have the Riak Java Protobuf artifacts been updated to take advantage
of Riak 2.0 features yet?
I'd like to work some more on getting the
Hi,
We have this configuration in our vm.args for all of our 8-core servers:
## Erlang scheduler tuning:
https://github.com/basho/leveldb/wiki/Riak-tuning-1
*+S 4:4*
But at /var/log/riak/console.log we have the following warning, should
we ignore it?
2014-01-13 22:15:35.246 [warning] <0.
Hi Shimon,
Did you try streaming the 2i bucket index and then doing your job on a
per-key basis? I sent you a code snippet the other day.
It should work fine regardless of how many keys you have in your bucket;
it is equivalent to the section:
http://docs.basho.com/riak/latest/dev/using/2i/ - Lo…
I meant "Hi Shimon", sorry for the wrong spelling.
Guido.
On 09/12/13 12:12, Guido Medina wrote:
Hi Simon,
We use HAProxy for that matter: set up HAProxy to listen at localhost:8087
and then point your Riak Java client to it. This is a sample HAProxy
config: https://gist.github.com/gburd/1507077
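In case the gist moves, the shape of such a config, abridged to the PBC
section (addresses hypothetical):

listen riak_pb
    bind 127.0.0.1:8087
    mode tcp
    balance roundrobin
    option tcpka
    server riak1 10.0.0.1:8087 check
    server riak2 10.0.0.2:8087 check
    server riak3 10.0.0.3:8087 check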
HTH,
Guido.
On 09/12/13 12:05, Shimon Benattar wrote:
Hi Riak users,
we are using the Riak
Assuming your backend is LevelDB, have you tried streaming the special
bucket 2i index? We stream millions of keys using specific 2i indexes;
if you didn't create any 2i index on your bucket, you can still query
that special index, and per key you can fetch it one by one while
iterating over you…
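Over HTTP the same trick looks like this (bucket name hypothetical; every key
in the bucket matches the special $bucket index, so the match value after it
is effectively ignored):

curl "http://localhost:8098/buckets/my_bucket/index/\$bucket/_?stream=true"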
Hi Michael,
I'm quite sure annotating the Riak key getter method was recently enabled in
the client, but I'm not 100% sure of its usage yet (I haven't used it); in the
meantime, for testing, you can annotate the property with @RiakKey inside
your POJO. That should do it; then just treat that property as any other.
…it shouldn't stop Tomcat
shutdown anymore. This is what I learned from the source code. I called the
method in a Servlet listener and never had issues after that. Before that I
had similar behavior to what you have.
Thank you,
Konstantin.
On Nov 5, 2013 5:31 AM, "Guido Medina" wrote:
scheduled tasks.
Guido.
On 05/11/13 13:31, Guido Medina wrote:
That's done already, I'm looking at the source now; not sure if the
following timer needs to be cancelled when the Riak client shutdown method
is called:
public abstract class RiakStreamClient<T> implements Iterable<T> {
    static Timer TIMER = new Timer("riak-stream-timeout-thread", true);
    ...
}
Guido.
On 05/11/13 13:29, Konstantin Kalin wrote:
You need to call shutdown method of Riak client when you are stopping
your application.
Thank you,
Konstantin.
On Nov 5, 2013, at 5:06, Guido Medina wrote:
Sorry, I meant "stopping Tomcat from shutting down properly"...I must
have been thinking of some FPS night game.
On 05/11/13 13:04, Guido Medina wrote:
Hi,
We are tracing some threads at our webapp which are stopping Tomcat from
shooting down properly; one of them seems to be related to the Riak Java
client. Here is the repeating stack trace once all services have been
stopped properly:
*Thread Name:* riak-stream-timeout-thread
*State:* in Object.wait()
Your tests are not close to what you are going to have in production
IMHO; here are a few recommendations:
1. Build a cluster with at least 5 nodes with N=3 and R=W=2 (you can
update your bucket properties via PBC with Java).
2. Use PBC instead of HTTP.
3. If you are only importing data, call…
…expect a higher ratio given that ZFS will use compression over the entire
volume, not 'just' the data in the DB.
That said, there is a lot more to ZFS than compression and CRC ;) like
snapshots, cloning, ARC ^^
On 03 Oct 2013, at 9:56, Guido Medina wrote:
Guido.
And for ZFS? I wouldn't recommend it; after Riak 1.4, snappy LevelDB
compression does a nice job, so why take the risk of yet another
not-so-enterprise-ready compression algorithm?
I could be wrong though,
Guido.
On 03/10/13 12:11, Guido Medina wrote:
I have heard some SAN horror stories too; Riak nodes are so cheap
that I don't see the point in even having any mirror on the node. Here are
my points:
1. Erlang interprocess communication brings some network usage; why add yet
another network usage by replicating the data? If the whole idea of…
Hi,
Is there a way to quickly check if a key is present without fetching it
using the Riak Java client? It would be nice to have one for quick
checks without fetching the key:
interface Bucket {
    public boolean isKeyPresent(String key);
}
Of course, that would…
Morning,
Is there a way to determine which nodes a key belongs to? I'm guessing
that the hash of a key is computed using the bucket name and key
combined. I'm having some issues with some writes and would like to see
if there is a pattern; knowing which nodes are involved will help me a lot.
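Yes - the hash is computed over {Bucket, Key}. A sketch of looking up the
owning nodes from an attached console (riak attach; bucket/key hypothetical):

DocIdx = riak_core_util:chash_key({<<"my_bucket">>, <<"my_key">>}).
%% N=3 primary partitions and the nodes that own them, for riak_kv
riak_core_apl:get_primary_apl(DocIdx, 3, riak_kv).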
Hi,
I'm trying to tune our Riak cluster using
http://docs.basho.com/riak/latest/ops/advanced/backends/leveldb/#Parameter-Planning
but I'm still lost on how to use the calculation results, here are my
questions:
1. Does this calculator
https://github.com/basho/basho_docs/raw/master/source
on that - are you seeing anything
in the Riak logs?
- Roach
On Wed, Sep 25, 2013 at 12:11 PM, Guido Medina wrote:
Like this: withConnectionTimeoutMillis(5000).build();
Guido.
On 25/09/13 18:08, Brian Roach wrote:
Guido -
When you say "the client is configured to time out" do you mean
withConnectionTimeoutMillis()?
- Roach
On Wed, Sep 25, 2013 at 5:54 AM, Guido Medina wrote:
Hi,
Streaming 2i indexes is not timing out, even though the client is
configured to time out; this coincidentally is causing the writes to fail
(or is it the opposite?). Is there anything elemental that could "lock" (I
know the locking concept in Erlang is out of the equation, so LevelDB?)
somethi…
Jared,
Is it possible to elaborate more on the "meet me in the middle"
settings/scenarios? Let me explain: let's say the quorum is configured
with low values, say R=W=1 and N=3; doesn't that add more work to the AAE
background process? Could there be ways to sacrifice some client
performance with l…
Hi,
Is it possible to have Riak Control running on an HTTP port on localhost,
assuming security is provided by SSH tunnels?
If so, what needs to be done in app.config? I enabled Riak Control
but it is redirecting me to HTTPS.
Thanks,
Guido.
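For the archives, the knobs involved live in app.config's riak_core and
riak_control sections; a sketch, hedged since I haven't verified that
auth none alone stops the HTTPS redirect:

{riak_core, [
    {http, [{"127.0.0.1", 8098}]}
    %% no {https, ...} entry
]},
{riak_control, [
    {enabled, true},
    {auth, none}   %% rely on the SSH tunnel instead of SSL + userlist
]}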
…protected and there is a different
transport for sensitive information.
Regards,
Guido.
On 18/09/13 16:15, Christopher Meiklejohn wrote:
On Wednesday, September 18, 2013 at 10:12 AM, Guido Medina wrote:
Hi,
Is it possible to have Riak Control running on an HTTP port on localhost,
assuming security…
…'fragmentation'. This condense and re-Put
operation would be the tricky part, and would need to use vector clocks
and ensure there are 0 siblings when finished. But it should be
possible? It seems like this is an uber-simplified form of a CRDT data
structure?
Thanks,
Alex
On Thu,
Alex,
RabbitMQ is a good high performer, developed in Erlang, and scales
just as Riak does.
The old saying: the right tool for the right job. I like how fast Riak
is at fetching/storing key values in a distributed environment; I don't
like Riak for queues, and that's because it wasn't designed for t…
Create pseudo getters for the 2i indexes; valid return types are
String, Long (and Integer) and a Set of any of the mentioned. The benefit
of this is the fact that your 2i indexes are not actual properties; they
are meant to be a computation of something. Example:
public class Postcode {
…
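A hedged completion of that truncated example (field and index names
invented): the getter below is a computation, not a stored property, and the
client writes its return value as the binary 2i index postcode_bin:

import com.basho.riak.client.convert.RiakIndex;
import org.codehaus.jackson.annotate.JsonIgnore;

public class Postcode {
    private String outwardCode;
    private String inwardCode;

    // Pseudo getter: computed on the fly;
    // @JsonIgnore keeps Jackson from serializing it into the stored value
    @JsonIgnore
    @RiakIndex(name = "postcode")
    public String getPostcodeIndex() {
        return outwardCode + " " + inwardCode;
    }
}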
Hi,
I have to say it is nice; we started using it today and it seems to
leave a very low CPU and memory footprint at both the cluster and the
application using the client. Now I have a couple of questions:
1. This one is probably part of Riak 1.4.x but it won't hurt to ask: will
reduce identity (to c…
…physical cores with
no hyper-threading, so total threads is also 8; would that still be "+S
4:4", "+S 8:8" or "+S 8:0"?
Thanks,
Guido.
On 14/08/13 15:41, Matthew Von-Maszewski wrote:
"threads=8" is the key phrase … +S 4:4
On Aug 14, 2013, at 10:04 AM, Guido Medina wrote:
For the following information should it be +S 4:4 or +S 4:8?
root@somehost# lshw -C processor
*-cpu
description: CPU
product: Intel(R) Core(TM) i7 CPU 930 @ 2.80GHz
vendor: Intel Corp.
physical id: 4
bus info: cpu@0
version: Intel(R) Core(TM) i
ype "riak-admin
reformat-indexes" and tail -f /var/log/riak/console.log which should
be done really fast if there isn't anything to fix.
4. Do 1 to 3 per node.
5. Do 1 and 2 but for for Riak 1.4.1.
HTH,
Guido.
On 13/08/13 13:50, Guido Medina wrote:
Same here, except that Riak 1.3.2 did that for me automatically. As
Jeremiah mentioned, you should go first to 1.3.2 on all nodes; per node,
the first time Riak starts it will take some time upgrading the 2i
indexes' storage format. If you see any weirdness then execute
"riak-admin reformat-indexes"…
Hi Brian,
New thread for this, sorry for the hijacking.
Yes, store-without-fetch should indeed be used without mutation or conflict
resolution. Originally we had mutations and siblings, but our
application ended up creating too many siblings and made Riak fail
miserably, so we disabled the sibli…
…the resolved object be
the one passed? I'm doing some tests, and if I do store a mutation
returning the body without fetching, I get a new mutated object and not
the one I passed + mutation. So I'm wondering if that was the original
intention.
Thanks,
On 11/08/13 18:49, Guido Medina wrote:
Hi Brian,
I probably asked a similar question before. Let's say you have an
in-memory cache and a single writer (I know, not the best distributed
design); if you do the following, take into account that we use
mutations but we have no siblings enabled…
…if you haven't passed in a Resolver then the DefaultResolver is used,
which ... isn't really a "resolver" - it simply passes through an
object if there's only one, or throws an exception if there are multiple
(siblings) present.
Thanks,
- Roach
On Sun, Aug 11, 2013
Hi Matt,
Like Sean said, you should have a mutator if you are dealing with
conflict resolution in domain objects; a good side effect of using a
mutator is that the Riak Java client will fetch-modify-write, so your
conflict resolver will be called once(?). If you don't use mutators, you
get the eff…
As a 2nd thought, you could have a key per player in the player's bucket
and a key with the collection of units per player in the unit's bucket.
Guido.
On 07/08/13 15:52, Guido Medina wrote:
What's the size of each unit, JSON-wise? If it is too small, you could
have the player's units inside a single key as a collection; that way,
when you fetch a player, your key will contain the units and you could
play around with mutations/locking of such player's key. Also, it
will leverage y…
Hi Massimiliano,
I think your design is very thorough. I wouldn't worry about the
cardinality of such an index but rather its per-index size (how many keys
will a single 2i index return?); in that case, think of 2i as yet another
relational DB (LevelDB). You should test it with many keys and check its…
Yes, it is thread safe; you can treat them as singleton instances per
bucket. The following order is the general usage pattern:
* Fetch the bucket.
* Optional: if it exists, verify it has your application values (N value, etc.)
* If it doesn't exist, create it with your settings.
* Cache it a…
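A sketch of that pattern with the 1.4.x client (bucket name and N value
hypothetical):

import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.RiakFactory;
import com.basho.riak.client.bucket.Bucket;

IRiakClient client = RiakFactory.pbcClient("127.0.0.1", 8087);
// fetch once; verify/create; then cache the Bucket instance as a singleton
Bucket bucket = client.fetchBucket("players").execute();
if (bucket.getNVal() == null || bucket.getNVal() != 3) {
    bucket = client.createBucket("players").nVal(3).execute();
}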
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop
On Sat, Jul 27, 2013 at 11:44 AM, Guido Medina wrote:
Are you saying that you can join two 2i indexes? Let's say you
have a 2i named "date"…
---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop
On Sat, Jul 27, 2013 at 11:16 AM, Guido Medina wrote:
Rohman,
I think the reason for this is that the cluster would have to do the
whole intersection in memory; 2i only provides queries for one single
index and then returns that result to the client, streaming or not.
An intersection will indeed require a MapReduce job to get hold of both
lists…
Guido -
Right now, no.
We've been having some internal discussions around that topic and
whether it's really a "client library" operation or not.
How are you using stats? Is it for a monitoring app or ... ?
Thanks,
Brian Roach
On Thu, Jul 25, 2013 at 4:25 AM, Guido Medina wrote: