'fragmentation'. This condense-and-re-put
operation would be the tricky part; it would need to use vector clocks
and ensure there are zero siblings when finished. But it should be
possible? It seems like this is an uber-simplified form of a CRDT data
structure?
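Something like this is the shape I have in mind, a minimal sketch assuming
the 1.4.x Java client's ConflictResolver interface and a made-up set-like
value type:

    import com.basho.riak.client.cap.ConflictResolver;
    import java.util.Collection;
    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical value type: a grow-only set of entries.
    class EntrySet {
        final Set<String> entries = new HashSet<String>();
    }

    // Union all siblings into one value; re-storing the merged value
    // with the fetched vector clock is what leaves zero siblings behind.
    class CondensingResolver implements ConflictResolver<EntrySet> {
        public EntrySet resolve(Collection<EntrySet> siblings) {
            EntrySet merged = new EntrySet();
            for (EntrySet sibling : siblings) {
                merged.entries.addAll(sibling.entries);
            }
            return merged;
        }
    }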
Thanks,
Alex
On Thu,
ected and there is a different
transport for sensitive information.
Regards,
Guido.
On 18/09/13 16:15, Christopher Meiklejohn wrote:
On Wednesday, September 18, 2013 at 10:12 AM, Guido Medina wrote:
Hi,
Is it possible to have Riak Control running on an HTTP port on localhost?
Assuming security
Hi,
Is it possible to have Riak Control running on an HTTP port on localhost?
Assuming security is provided by SSH tunnels.
If so, what needs to be done in app.config? I enabled Riak Control
but it is redirecting me to HTTPS.
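For reference, the riak_control section I have looks something like this
(trimmed, from memory; if I understand correctly it is the userlist auth
that forces the redirect to HTTPS, so {auth, none} plus the plain http
listener would be the combination to test):

    %% app.config (1.4.x style)
    {riak_core, [
        %% plain-HTTP listener, bound to localhost only
        {http, [{"127.0.0.1", 8098}]}
    ]},
    {riak_control, [
        {enabled, true},
        %% 'userlist' auth forces SSL; 'none' should keep it on HTTP
        {auth, none}
    ]}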
Thanks,
Guido.
Jared,
Is it possible to elaborate more on the "meet me in the middle"
settings/scenarios? Let me explain: say the quorum is configured
with low values, R=W=1 and N=3; doesn't that add more work to the AAE
background process? Could there be ways to sacrifice some client
performance with l
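For concreteness, this is the kind of low-quorum configuration I mean,
sketched against the 1.4.x Java client (host, bucket and key names made up):

    import com.basho.riak.client.IRiakClient;
    import com.basho.riak.client.IRiakObject;
    import com.basho.riak.client.RiakFactory;
    import com.basho.riak.client.bucket.Bucket;

    public class LowQuorum {
        public static void main(String[] args) throws Exception {
            IRiakClient client = RiakFactory.pbcClient("127.0.0.1", 8087);
            Bucket bucket = client.fetchBucket("readings").execute();
            // W=1: acknowledged after one replica write; the other two
            // replicas are caught up by hinted handoff / AAE / read repair.
            bucket.store("meter-42", "{\"v\":1}").w(1).execute();
            // R=1: first replica to answer wins, so stale reads are possible.
            IRiakObject fetched = bucket.fetch("meter-42").r(1).execute();
            client.shutdown();
        }
    }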
Hi,
Streaming 2i indexes is not timing out, even though the client is
configured to time out; coincidentally this is causing the writes to fail
(or is it the opposite?). Is there anything elemental that could "lock" (I
know the locking concept in Erlang is out of the equation, so LevelDB?)
somethi
utMillis()?
- Roach
On Wed, Sep 25, 2013 at 5:54 AM, Guido Medina wrote:
Hi,
Streaming 2i indexes is not timing out, even though the client is configured
to time out; coincidentally this is causing the writes to fail (or is it the
opposite?). Is there anything elemental that could "lock"
on that - are you seeing anything
in the Riak logs?
- Roach
On Wed, Sep 25, 2013 at 12:11 PM, Guido Medina wrote:
Like this: withConnectionTimeoutMillis(5000).build();
Guido.
On 25/09/13 18:08, Brian Roach wrote:
Guido -
When you say "the client is configured to time out" do y
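(For anyone who lands on this thread later: the connection timeout only
bounds opening the socket; the operation itself is bounded by the request
timeout, roughly like this, assuming the 1.4.x PB client config:)

    import com.basho.riak.client.IRiakClient;
    import com.basho.riak.client.RiakFactory;
    import com.basho.riak.client.raw.pbc.PBClientConfig;

    public class ClientTimeouts {
        public static void main(String[] args) throws Exception {
            PBClientConfig config = new PBClientConfig.Builder()
                    .withHost("127.0.0.1")
                    .withPort(8087)
                    // Only bounds establishing the TCP connection.
                    .withConnectionTimeoutMillis(5000)
                    // Bounds each operation, e.g. a long 2i streaming read.
                    .withRequestTimeoutMillis(60000)
                    .build();
            IRiakClient client = RiakFactory.newClient(config);
            client.shutdown();
        }
    }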
Hi,
I'm trying to tune our Riak cluster using
http://docs.basho.com/riak/latest/ops/advanced/backends/leveldb/#Parameter-Planning
but I'm still lost on how to use the calculation results; here are my
questions:
1. Does this calculator
https://github.com/basho/basho_docs/raw/master/source
Morning,
Is there a way to determine which nodes a key belongs to? I'm guessing
that the hash of a key is computed using the bucket name and key
combined. I'm having some issues with some writes and would like to see
if there is a pattern; knowing which nodes are involved will help me a lot.
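From an attached console (riak attach) something along these lines should
print the preference list for a key; this is a sketch from memory of the
riak_core API, so treat the exact calls as unverified:

    %% Hash the {Bucket, Key} pair the way the ring does, then ask
    %% which N primary vnodes (node names included) own that index.
    DocIdx = riak_core_util:chash_key({<<"mybucket">>, <<"mykey">>}),
    riak_core_apl:get_primary_apl(DocIdx, 3, riak_kv).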
Hi,
Is there a way to quickly check if a key is present without fetching it
using the Riak Java client? It would be nice to have one for quick
checks without fetching the key:
interface Bucket {
    public boolean isKeyPresent(String key);
}
Of course, that wo
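Until something like that exists, the obvious (if wasteful) wrapper over
the 1.4.x API is a fetch-and-null-check; the value still travels over the
wire, so it only suits occasional checks:

    import com.basho.riak.client.bucket.Bucket;

    public final class BucketUtil {
        // True if the key resolves to an object; fetches the full value,
        // so this trades bandwidth for API simplicity.
        public static boolean isKeyPresent(Bucket bucket, String key)
                throws Exception {
            return bucket.fetch(key).execute() != null;
        }
    }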
I have heard some SAN horror stories too. Riak nodes are so cheap
that I don't see the point in even having any mirror on the node; here
are my points:
1. Erlang interprocess communication brings some network usage; why add yet
another network usage by replicating the data? If the whole idea of
As for ZFS? I wouldn't recommend it; since Riak 1.4, LevelDB's Snappy
compression does a nice job, so why take the risk of yet another
not-so-enterprise-ready compression algorithm?
I could be wrong though,
Guido.
On 03/10/13 12:11, Guido Medina wrote:
I have heard some SAN horror s
Guido.
On 03/10/13 12:11, Guido Medina wrote:
I have heard some SAN horror stories too. Riak nodes are so
cheap that I don't see the point in even having any mirror on the
node; here are my points:
1. Erlang interprocess communication brings some network usage
t a higher ratio given that ZFS will use compression over the entire
volume, not ‘just’ the data in the DB.
That said, there is a lot more to ZFS than compression and CRC ;) like
snapshots, cloning, ARC ^^
On 03 Oct 2013, at 9:56, Guido Medina wrote:
Your tests are not close to what you are going to have in production
IMHO; here are a few recommendations:
1. Build a cluster with at least 5 nodes with N=3 and R=W=2 (You can
update your bucket properties via PBC with Java)
2. Use PBC instead of HTTP.
3. If you are only importing data call
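Point 1 would look roughly like this with the 1.4.x client (a sketch; if
your Riak version doesn't accept quorum properties over PB, set r/w per
request instead):

    import com.basho.riak.client.IRiakClient;
    import com.basho.riak.client.RiakFactory;
    import com.basho.riak.client.bucket.Bucket;

    public class BucketDefaults {
        public static void main(String[] args) throws Exception {
            IRiakClient client = RiakFactory.pbcClient("127.0.0.1", 8087);
            Bucket bucket = client.fetchBucket("mybucket").execute();
            // Persist N=3 and R=W=2 as the bucket's default properties.
            client.updateBucket(bucket).nVal(3).r(2).w(2).execute();
            client.shutdown();
        }
    }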
Hi,
We are tracing some threads at our webapp which are stopping Tomcat from
shooting down properly; one of them seems to be related to the Riak Java
client. Here is the repeating stack trace once all services have been
stopped properly:
*Thread Name:* riak-stream-timeout-thread
*State:* in Ob
Sorry, I meant "stopping Tomcat from shutting down properly"...I must
have been thinking of some FPS night game.
On 05/11/13 13:04, Guido Medina wrote:
Hi,
We are tracing some threads at our webapp which are stopping Tomcat
from shooting down properly; one of them seems to be re
t-thread", true);
...
...
}
Guido.
On 05/11/13 13:29, Konstantin Kalin wrote:
You need to call the Riak client's shutdown method when you are stopping
your application.
Thank you,
Konstantin.
On Nov 5, 2013, at 5:06, Guido Medina wrote:
Sorry,
scheduled tasks.
Guido.
On 05/11/13 13:31, Guido Medina wrote:
That's done already; I'm looking at the source now. I'm not sure if the
following timer needs to be cancelled when the Riak client's shutdown method
is called:
public abstract class RiakStreamClient implements Iterable {
st
it shouldn't stop Tomcat
shutdown anymore. This is what I learned from the source code. I called the
method in a Servlet listener and never had issues after that. Before that I
had behavior similar to yours.
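A minimal version of that listener (a sketch; how the client instance is
wired up is left to your app):

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import com.basho.riak.client.IRiakClient;

    public class RiakShutdownListener implements ServletContextListener {
        private IRiakClient client;

        public void contextInitialized(ServletContextEvent sce) {
            // client = RiakFactory.pbcClient(...); created at startup.
        }

        public void contextDestroyed(ServletContextEvent sce) {
            // Stops the client's internal threads (including the
            // riak-stream-timeout timer) so Tomcat can unload cleanly.
            if (client != null) {
                client.shutdown();
            }
        }
    }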
Thank you,
Konstantin.
On Nov 5, 2013 5:31 AM, "Guido Medina" wrote:
Hi Michael,
I'm quite sure annotating the Riak key getter was recently enabled in the
client, but I'm not 100% sure of its usage yet (I haven't used it). In the
meantime, for testing, you can annotate the property with @RiakKey inside
your POJO; that should do. Then just treat that property as any other
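That is, something along these lines (1.4.x annotation; the POJO fields
are made up):

    import com.basho.riak.client.convert.RiakKey;

    public class Reading {
        @RiakKey
        public String meterId; // used as the Riak key on store/fetch

        public long timestamp;
        public double value;
    }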
Assuming your backend is LevelDB, have you tried streaming the special
bucket 2i index? We stream millions of keys using specific 2i indexes;
even if you didn't create any 2i index on your bucket, you can still fetch
that special index, and per key you can fetch the object one by one while
iterating over you
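The snippet looks roughly like this (1.4.x API from memory, bucket name
made up; later 1.4.x clients also have an executeStreaming() variant so
the key list never has to fit in memory):

    import com.basho.riak.client.IRiakClient;
    import com.basho.riak.client.RiakFactory;
    import com.basho.riak.client.bucket.Bucket;
    import com.basho.riak.client.query.indexes.BinIndex;

    public class ListKeys {
        public static void main(String[] args) throws Exception {
            IRiakClient client = RiakFactory.pbcClient("127.0.0.1", 8087);
            Bucket bucket = client.fetchBucket("mybucket").execute();
            // $bucket is the special index mapping every key to its own
            // bucket name (LevelDB backend), so this lists all keys.
            for (String key : bucket.fetchIndex(BinIndex.named("$bucket"))
                                    .withValue("mybucket")
                                    .execute()) {
                // fetch and process one object per key here
            }
            client.shutdown();
        }
    }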
Hi Simon,
We use HAProxy for that; set up HAProxy to listen at localhost:8087
and point your Riak Java client at it. This is a sample HAProxy
config: https://gist.github.com/gburd/1507077
HTH,
Guido.
On 09/12/13 12:05, Shimon Benattar wrote:
Hi Riak users,
we are using the Riak
I meant "Hi Shimon", sorry for the wrong spelling.
Guido.
On 09/12/13 12:12, Guido Medina wrote:
Hi Simon,
We use HAProxy for that; set up HAProxy to listen at localhost:8087
and point your Riak Java client at it. This is a sample HAProxy
config: https://gist.github
Hi Shimon,
Did you try streaming the 2i bucket index and then doing your job on a
per-key basis? I sent you a code snippet the other day.
It should work fine regardless of how many keys you have in your bucket;
it is equivalent to this section:
http://docs.basho.com/riak/latest/dev/using/2i/ - Lo
Hi,
We have this configuration in our vm.args for all of our 8-core servers:
## Erlang scheduler tuning:
https://github.com/basho/leveldb/wiki/Riak-tuning-1
+S 4:4
But in /var/log/riak/console.log we have the following warning; should
we ignore it?
2014-01-13 22:15:35.246 [warning] <0.
At the Maven Central repos it has, yes:
https://repository.sonatype.org/index.html#nexus-search;quick~riak-pb
HTH,
Guido.
On 21/01/14 14:22, Jon Brisbin wrote:
Have the Riak Java Protobuf artifacts been updated to take advantage
of Riak 2.0 features yet?
I'd like to work some more on getting the
Hi,
Is there any small doc that could explain its usage a little bit?
From the Java perspective it would be nice if it pointed out its counterpart
methods to AtomicInteger, like:
* How do you create it? Does incrementing a counter just create it
with zero as the initial value and then incre
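From what I remember of the 1.4.x client, the AtomicLong-style mapping is
roughly the following (method names from memory, so double-check against
your client version):

    import com.basho.riak.client.IRiakClient;
    import com.basho.riak.client.RiakFactory;
    import com.basho.riak.client.bucket.Bucket;

    public class CounterDemo {
        public static void main(String[] args) throws Exception {
            IRiakClient client = RiakFactory.pbcClient("127.0.0.1", 8087);
            Bucket bucket = client.fetchBucket("counters").execute();
            // No explicit create: the first increment behaves as if the
            // counter started at zero (roughly addAndGet semantics).
            Long value = bucket.counter("page-hits")
                               .increment(1)
                               .returnValue(true)
                               .execute();
            System.out.println("page-hits = " + value);
            client.shutdown();
        }
    }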
Hi,
What's a good value for the transfer limit when rearranging (adding/removing)
nodes?
Or is there a generic rule of thumb based on physical nodes, processors, etc.?
Once the transfer is completed, is it good practice to set it back to its
default value, or should the calculated (guessed?) transfer l
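For reference, the knob is adjustable at runtime (the default concurrency
is 2, if I remember right):

    # raise handoff concurrency cluster-wide while rebalancing
    riak-admin transfer-limit 8
    # or on one node only
    riak-admin transfer-limit riak@node1.example.com 8
    # put it back afterwards
    riak-admin transfer-limit 2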
Basho Technologies
@davidjrusek
On January 24, 2014 at 8:41:07 AM, Guido Medina wrote:
Hi,
Is there any small doc that could explain its usage a little bit?
From the Java perspective it would be nice if it pointed out its
c
Hi,
I'm trying to get a different distribution on the ring; I used the following
Erlang code before, which on Riak 1.4.7 is not re-triggering a
recalculation:
* What is the latest algorithm version? v3? And is there a list of the
last 2 or 3 versions? Sometimes, depending on the keys o
Hi,
We are using Riak Java client 1.4.x and we want to copy all counters
from cluster A to cluster B (all counters will be stored in a single to
very few buckets). If I list the keys using the special 2i bucket index and
then treat each key as an IRiakObject, will that be enough to copy the
counters, or
.
Guido.
On 29/01/14 11:44, Russell Brown wrote:
Oh damn, wait. You said 1.4.*. There might therefore be siblings; do a counter
increment before the copy to ensure siblings are resolved (if you can), or use
RiakEE MDC.
On 29 Jan 2014, at 11:27, Guido Medina wrote:
Hi,
We are using Riak Java
Hi,
Now I'm curious too. According to
http://docs.basho.com/riak/latest/ops/advanced/configs/configuration-files/
the default value for the Erlang property last_write_wins is false. Now, if
95% of the buckets/keys have no siblings (or conflict resolution), does
that mean that for such buckets las
sell Brown wrote:
On 30 Jan 2014, at 10:58, Guido Medina wrote:
Hi,
Now I'm curious too. According to
http://docs.basho.com/riak/latest/ops/advanced/configs/configuration-files/
the default value for the Erlang property last_write_wins is false, n
The short answer is no, there is nothing that can fulfil your requirements.
We developed something similar for PostgreSQL; we call it Hybrid DAOs:
basically each POJO is annotated as a Riak KV object and also as an EclipseLink
JPA entity. I can only give you partial code and some hints.
You need a standar
http://www.hetzner.de/en/
Really powerful servers, 500+ Mbps inter-server communication.
Guido.
On 21/02/14 14:34, Massimiliano Ciancio wrote:
Hi all,
is there someone who has suggestions for a good, not so expensive,
European provider where to get 5 servers to install Riak?
Thanks
MC
Glad it helped; we have been using them for years. I would also
recommend using Ubuntu Server 12.04 LTS over there.
Regards,
Guido.
On 03/03/14 11:50, Massimiliano Ciancio wrote:
2014-02-21 15:44 GMT+01:00 Guido Medina :
http://www.hetzner.de/en/
Really powerful servers, 500+ Mbps inter
Hi,
If nodes are already upgraded to 1.4.8 (and they went all the way from
1.4.0 to 1.4.8, including the AAE-buggy versions),
will the following command (as root) on Ubuntu Server 12.04:
riak stop; rm -Rf /var/lib/riak/anti_entropy/*; riak start
executed on each node be enough to rebuild AAE h
o.com>
On Wed, Apr 9, 2014 at 6:34 AM, Guido Medina wrote:
Hi,
If nodes are already upgraded to 1.4.8 (and they went all the way
from 1.4.0 to 1.4.8, including the AAE-buggy versions),
will the following command (as root) on Ubuntu Serve
Hi,
What's the latest non-standard version of this function? v3, right? If
Basho adds more versions to this, is it documented somewhere?
For our nodes the standard choose/wants claim functions were doing a weird
distribution, so the numbers even out a bit better (just a bit better) by
using v3,
is a good first step before
waiting for transfers.
Thanks!
--
Luke Bakken
CSE
lbak...@basho.com
On Wed, Apr 9, 2014 at 7:54 AM, Guido Medina wrote:
What do you mean by "wait for
What would be a high ring size that would degrade performance for v3:
128+? 256+?
I should have asked by replying to the original response, but I deleted it by accident.
Guido.
On 10/04/14 10:30, Guido Medina wrote:
Hi,
What's the latest non-standard version of this function? v3, right? If
Basho
Hi,
Based on the documentation and the tricks I have seen to fix/repair stuff in
Riak, I would suggest the following approach:
riak-admin [cluster] <action> (if cluster is specified then run the
action at every node)
I know most of the commands are designed like that, but adding to
them for speci
Hi Bryce,
If each session ID is unique, even with multiple writers it is unlikely that
you will be writing to the same key at the same time from two different
writers. That being the case, you could store each event as a JSON
object in a JSON array, and when your array reaches a threshold, say
for
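In code, the append-until-threshold idea is just the following (a Jackson
sketch; the threshold and field names are made up):

    import com.fasterxml.jackson.databind.ObjectMapper;
    import com.fasterxml.jackson.databind.node.ArrayNode;
    import com.fasterxml.jackson.databind.node.ObjectNode;

    public class SessionEvents {
        private static final int THRESHOLD = 100; // roll over after 100 events

        private final ObjectMapper mapper = new ObjectMapper();
        private final ArrayNode events = mapper.createArrayNode();

        // Append one event; returns true when the array should be flushed
        // to Riak under a fresh key (e.g. sessionId plus a chunk number).
        public boolean append(String type, long timestamp) {
            ObjectNode event = events.addObject();
            event.put("type", type);
            event.put("at", timestamp);
            return events.size() >= THRESHOLD;
        }

        public String toJson() throws Exception {
            return mapper.writeValueAsString(events);
        }
    }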
period of time). But I just want to make sure I understand what you're
saying there.
Warm regards,
Bryce
On 04/24/2014 01:13 AM, Guido Medina wrote:
Hi Bryce,
If each session ID is unique, even with multiple writers it is unlikely
that you will be writing to the same key at the same time fro
Hi Simon,
There are some (maybe related) LevelDB fixes in 1.4.9 and 1.4.10. I
don't think there is any harm in doing a rolling upgrade, since
nothing major changed, just bug fixes; here is the release notes link
for reference:
https://github.com/basho/riak/blob/1.4/RELEASE-NOTES.md
Hi,
Does anyone know of, and can recommend, a good (with price/value in mind)
data center in Australia for hosting a Riak cluster? Our main data
center is in Germany -http://www.hetzner.de- which is great, stable,
fast and cheap, but we haven't had any luck finding something similar, and
it has to be
Hi,
We have had a problem twice now on Ubuntu 12.04 with mdadm arrays (all
nodes RAID 1 with 2x2TB disks) where, once a disk fails, 2i queries don't
return the expected result; the only solution we have been able to apply
is replacing the affected node.
After doing an AAE clean up, repaired p
Maybe what you are looking for is a combination of both: say, your KV
data in Riak, with a set of background processes able to build
the necessary search graphs in Neo4j. That way your data is safe
in a Riak cluster and searchable on several Neo4j servers.
That's just an idea wh
Alex De la rosa wrote:
Hi Guido,
This could be a solution, although I would try to do it in a
homogeneous system where only one NoSQL DB would be around if possible :)
Thanks!
Alex
On Fri, Aug 29, 2014 at 5:03 PM, Guido Medina wrote:
Maybe what y
Hi,
Is there a way to estimate the size of a Riak key using Riak Java client
1.4.x, say from an instance of IRiakObject?
An approximation would be OK. I can probably get the JSON string and
check its length, but then there are 4 sizes to take into account, right?
1. The JSON string length.
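With the 1.4.x client a rough estimate can be assembled from the object
itself; a sketch that ignores protocol overhead:

    import java.util.Map;
    import com.basho.riak.client.IRiakObject;

    public final class SizeEstimate {
        // Rough size: the value bytes dominate; bucket, key and user
        // metadata add comparatively small overhead.
        public static int approximateSize(IRiakObject obj) {
            byte[] value = obj.getValue();
            int size = (value == null) ? 0 : value.length; // raw value bytes
            size += obj.getBucket().length();              // bucket name
            size += obj.getKey().length();                 // key name
            for (Map.Entry<String, String> meta : obj.getMeta().entrySet()) {
                size += meta.getKey().length() + meta.getValue().length();
            }
            return size;
        }
    }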
Hi,
Is it possible to run the Riak Java client 1.4.x against Riak 2.x? At
the moment we would have to do a major refactor in our application to
support Riak Java client v2.x, so I'm wondering if it is possible to
first migrate to Riak v2.x before we start our refactor.
Best regards,
Guido.
You need to learn how to use Jackson; there is no other way around it.
Like you were told, you need a default constructor, or an annotated
constructor, to instruct Jackson how to instantiate your class, as in the
following example:
public class TestClass {
@JsonCreator
public TestClass(
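A complete version of that example, for the record (newer Jackson package
names shown; the 1.4 Riak client bundles the org.codehaus.jackson
equivalents):

    import com.fasterxml.jackson.annotation.JsonCreator;
    import com.fasterxml.jackson.annotation.JsonProperty;

    public class TestClass {
        private final String name;

        // Tells Jackson which constructor to use and how to map JSON
        // fields onto its parameters, so no default constructor is needed.
        @JsonCreator
        public TestClass(@JsonProperty("name") String name) {
            this.name = name;
        }

        public String getName() {
            return name;
        }
    }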
Hi,
Failed miserably trying to upgrade Riak 1.4.10 to 2.0.2. After creating
all the patch files to use the new riak.conf, I noticed that we don't
need custom parameters any longer, so the only thing we changed was the
backend to be "leveldb".
Is the LevelDB max open files parameter needed any lo
onode@nohost,[{'r...@x1.clonmel.temetra.com',['r...@x2.clonmel.temetra.com','r...@x3.clonmel.temetra.com']},{'r...@x2.clonmel.temetra.com',['r...@x5.clonmel.temetra.com','r...@x6.clonmel.temetra.com']},{'r...@x3.clonmel.temetra.com
Why is Riak assuming the node is called "nonode@nohost"?
I didn't have to set that before, and somehow my vm.args ended up with the
proper node name; in this case it should be
"r...@x1.clonmel.temetra.com"
Regards,
Guido.
On 13/11/14 14:34, Guido Medina wr
Please disregard; our patch file uses the FQDN and it was pointing to the
wrong file (vm.args).
Guido.
On 13/11/14 14:43, Guido Medina wrote:
Why is Riak assuming the node is called "nonode@nohost"?
I didn't have to set that before, and somehow my vm.args ended up with the
prope
Hi,
Are there any plans on releasing a Riak Java client with Netty 4.1.x?
The reasoning for this is that some projects, like Vert.x 3.3.0 for
example, are already on Netty 4.1.x, and AFAIK Netty 4.1.x isn't just a
drop-in replacement for 4.0.x.
Would it make sense to support another Riak Java
Hi Travis,
I have done similar things using the Java client, but I will assume you
have access to change certain settings in the C client. Assuming you
have R=W=2 and N=3, your client returns to you once 2 writes are
made, but an asynchronous write is still pending, which will eventually
Hi all,
My impression of Riak has always been that it would recover from
anything, but yesterday we had the worst happen: there was a power
outage, so all the servers in a Riak cluster went down. Once they were
back we have been having these constant errors, so my questions are:
* How can I recover
*Correction:* I replied to an old e-mail instead of creating a new one
and forgot to change the subject.
Hi all,
My impression of Riak has always been that it would recover from
anything, but yesterday we had the worst happen: there was a power
outage, so all the servers in a Riak cluster wen
force a rebuild of your hash trees. They rebuild automatically
anyway; this is just making them rebuild faster.
So it would recover... eventually.
-Fred
On Jun 1, 2017, at 7:13 AM, Guido Medina wrote:
*Correction:* I replied to an old e-mail instead of c
Hi,
I see now there is support for 2i, which we needed in order to migrate to
2.x. There was another issue with the old client which forced us to
modify the client; the issue was related to the following, let me give
an example:
public class POJO {
@RiakKey
public String getKey() {
false;
}
}
return true;
}
So my other question is whether this still holds true for the current Riak
Java client 2.1.1?
On 22/06/17 09:49, Guido Medina wrote:
Hi,
I see now there is support for 2i, which we needed in order to migrate
to 2.x. There was another issue with
Hi all,
After some big rebalance of our cluster, some keys are not found anymore
unless we set R = 3; we had N = 3 and R = W = 2.
Is there any sort of repair that would correct such a situation for Riak
2.2.3? This is really driving us nuts.
Any help will be truly appreciated.
Kind regards,
Gu
/2.1.1/developing/app-guide/replication-properties/#the-implications-of-notfound-ok
From what you describe, it sounds like only a single copy (out of the original
three) somehow remains present in your cluster.
Best Regards,
Bryan Hunt
On 17 May 2018, at 15:42, Guido Medina wrote:
Hi all,
Aft
-ok
From what you describe, it sounds like only a single copy (out of the original
three) somehow remains present in your cluster.
Best Regards,
Bryan Hunt
On 17 May 2018, at 15:42, Guido Medina wrote:
Hi all,
After some big rebalance of our cluster some keys are not found anymore unless
we se
Hi all,
We started the partition repairs a couple of weeks ago; so far so good
(3 nodes out of 7 done). Then we started getting this error:
(r...@node4.domain.com)3> [riak_search_vnode:repair(P) || P <-
Partitions].
** exception error: undefined function riak_search_vnode:repair/1
The first
.
Can you tell me what command you ran? It looks to me from the output below that
you're connected to a node and typing commands into the console.
Is this some snippet that you attach and run?
Cheers
Russell
On 1 Jun 2018, at 09:07, Guido Medina wrote:
Hi all,
We started the partitions repair a
Sorry, not repairing a single partition but all partitions per node:
https://docs.basho.com/riak/kv/2.2.3/using/repair-recovery/repairs/#repairing-all-partitions-on-a-node
On 01/06/18 09:34, Guido Medina wrote:
Hi Russell,
I was repairing each node as specified in this guide
https
:35, Guido Medina wrote:
Sorry, not repairing a single partition but all partitions per node:
https://docs.basho.com/riak/kv/2.2.3/using/repair-recovery/repairs/#repairing-all-partitions-on-a-node
On 01/06/18 09:34, Guido Medina wrote:
Hi Russell,
I was repairing each node as specified in this
command from the wrong instructions before.
Guido.
On 01/06/18 09:37, Russell Brown wrote:
I don’t see a call to `riak_search_vnode:repair` in those docs
Do you still run legacy riak search (i.e. not yokozuna/solr)?
On 1 Jun 2018, at 09:35, Guido Medina wrote:
Sorry, not repairing a single p
Hi all,
Nice work on the upcoming 2.9.0 release. I have a quick question:
will it be possible to switch from the eleveldb backend to the new leveled
backend and Tictac AAE for an existing cluster?
In case it is not possible, we are thinking of using the new replication
to move to a brand new cluster.
Sent: 01 February 2019 19:22
To: Guido Medina
Cc: riak-users@lists.basho.com
Subject: Re: [ANN] Riak 2.9.0 - Release Candidate 1 Available
Replication would be the optimum solution - in theory
Hi,
Can someone please point me to the guide explaining how to change the
wants and choose claim functions for the Riak ring % distribution?
We would like to set these permanently and trigger a cluster
redistribution. I just can't find that documentation anymore; I was able
to find this
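For the record, the settings I mean are the riak_core claim functions, set
permanently in app.config/advanced.config; something like this, from memory:

    {riak_core, [
        %% both functions should point at the same claim version
        {wants_claim_fun,  {riak_core_claim, wants_claim_v3}},
        {choose_claim_fun, {riak_core_claim, choose_claim_v3}}
    ]}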
ore handing off
all data. How can I resolve this?"
https://docs.basho.com/riak/kv/2.2.3/developing/faq/
This may be helpful as well -
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-November/038435.html
On Tue, Feb 19, 2019 at 3:22 AM Guido Medina wrote:
Hi,
Can someone pl
Hi all,
We had a power cut which caused one of the nodes to corrupt one of the
LevelDB files; after this, that node doesn't even want to start. Here is
the error we are seeing:
2019-09-04 08:46:41.584 [error] <0.2329.0>@riak_kv_vnode:init:527
Failed to start riak_kv_eleveldb_backend backend for
ll be
844930634081928249586505293914101120738586001408
On 4 Sep 2019, at 10:01, Guido Medina wrote:
Hi all,
We had a power cut which caused one of the nodes to corrupt one of
the LevelDB files; after this, that node doesn't even want to start.
Here is the error we are seeing:
2019-09-04
Bryan* (misspelled your name), need a morning coffee :(
On 04/09/2019 10:29, Guido Medina wrote:
Thanks a lot Brian, I have deleted that directory and now the node
started; let's see how it behaves.
Guido.
On 04/09/2019 10:19, Bryan Hunt wrote:
Easiest solution is to just delete the