It might also make a lot of sense to roll your own secondary indices. That
is, have a CRDT set hold the primary keys of the rows which meet the 2i
condition. That way, you can query the CRDT set and ensure some level of
consistency. There are further tricks to be played here if you're interested.
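To make that concrete, here's a rough sketch of the query side with the
Erlang client (riakc); the bucket type, bucket, and key names are made up,
and it assumes a bucket type created with datatype = set:

    %% Sketch only: the index key holds the primary keys matching the
    %% 2i condition.
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    {ok, Set} = riakc_pb_socket:fetch_type(Pid, {<<"sets">>, <<"indexes">>},
                                           <<"status_active">>),
    PrimaryKeys = riakc_set:value(Set),
    %% Fetch the rows the index points at.
    Objs = [riakc_pb_socket:get(Pid, <<"rows">>, K) || K <- PrimaryKeys].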
I'm
I would discourage running Riak in Docker. If you use Docker in bridge
mode, then it becomes fairly difficult to deal with networking across
machines. If you run it in host mode, you run into issues with epmd in
the host network namespace. There are some workarounds to this, like
using third party
Two suggestions:
1. Use Riak-EE, and have two rings. When you do an update, copy over one
ring to the other side after you do a "cold reboot"
2. Use the Riak Mesos Framework. Mesos is like K8s, but it has stateful
storage primitives. (Link: https://github.com/basho-labs/riak-mesos)
On Mon, Jun 6,
What do you mean it's not returning? It's returning stale data? Or
it's erroring?
On Tue, May 24, 2016 at 7:34 AM, Vikram Lalit wrote:
> Hi - I'd appreciate if someone can opine on the below behavior of Riak that
> I am observing... is that expected, or something wrong in my set-up /
> understand
I never got around to it, but it should be pretty easy to glue Consul
to Stanchion to HAProxy. As far as gluing HAProxy to Consul -- that
should be pretty easy: https://github.com/hashicorp/consul-template.
And Stanchion with Consul lock:
https://www.consul.io/docs/commands/lock.html. It shouldn't
How did you install OTP18? When dealing with Erlang, and multiple
installs of it, I might suggest using kerl
(https://github.com/kerl/kerl). It's an excellent tool for dealing
with the problem.
On Sat, May 21, 2016 at 12:01 PM, Robert Latko wrote:
> Hi all,
>
> Quick question:
>
> I have an inst
Is the plan to keep using riak_dt_vclock? If so, I might contribute
some optimizations for large numbers of actor entries (1000s).
On Thu, Apr 28, 2016 at 12:55 AM, Russell Brown wrote:
> Hi,
> Riak DT[1] is in need of some love. I know that some of you on this list
> (Sargun, are you here? Hein
We're using riak_dt in anger in our product. We're already using it
with rebar3 and Erlang 18.3 through some super messy patches.
I would love to see a register that takes the logical clock and a
timestamp for resolution, rather than just a straight-up timestamp. My
biggest ask though is delta-CRD
ally, my target is so simple, I just need to be able to put some
> key-value by
> executing the multi-paxos.
>
> On Tue, Mar 8, 2016 at 11:39 PM, Sargun Dhillon wrote:
>>
>> If you want to learn to use riak_ensemble the library, the
>> documentation that Joe put toget
If you want to learn to use riak_ensemble the library, the
documentation that Joe (and others) put together is here:
https://github.com/basho/riak_ensemble/blob/develop/doc/Readme.md
There's real-world usage of the code here:
- https://github.com/basho/riak_kv/blob/develop/src/riak_kv_ensembles.erl
Basical
Can you tell us a little bit about the application? It might be easier to
use Riak KV rather than riak_ensemble directly.
Sent from my iPhone
On Mar 8, 2016, at 06:54, Agung Laksono wrote:
Hi Basho developer,
I've seen the video that you guys presented about riak_ensemble. I am
interested to im
They're effectively equal
Sent from my iPhone
On Mar 6, 2016, at 17:32, Robert Latko wrote:
Hi all,
Found the problem/solution.
With the first cluster, I set it up with a ring size of 64. Prior to
loading data, I stopped the node(s), changed the ring size to 256, then
restarted.
Therefore, i
So, if you pass back the vclock while pr=pw=quorum and sloppy_quorum=false,
you should get RYOW (read-your-own-writes) consistency.
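As a rough sketch with the Erlang client (the bucket and key names are made
up), that combination looks something like:

    %% With primary quorums on both sides and sloppy quorum disabled,
    %% a read after this write should see it.
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    Opts = [{sloppy_quorum, false}],
    {ok, Obj0} = riakc_pb_socket:get(Pid, <<"b">>, <<"k">>,
                                     [{pr, quorum} | Opts]),
    Obj1 = riakc_obj:update_value(Obj0, <<"v2">>),  %% carries the vclock
    ok = riakc_pb_socket:put(Pid, Obj1, [{pw, quorum} | Opts]),
    {ok, Obj2} = riakc_pb_socket:get(Pid, <<"b">>, <<"k">>,
                                     [{pr, quorum} | Opts]).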
On Thu, Mar 3, 2016 at 6:25 PM, Christopher Mancini wrote:
> If you don't need strong consistency for all Riak requests, just certain
> ones, then explore the use of R and N vals that ca
Given the tunables that Riak has, I would say that its reliability in
terms of consistency (as in ACID) is unparalleled. It offers a
variety of consistency options, from eventual consistency, to
strong eventual consistency, to strong consistency*.
In terms of fault-tolerance, it is interestin
It's possible that during the strongly consistent join, there was some
leader instability. Do you have any logs of the event? Can you recreate the
event? Also, the recommendation for SC is to either turn off tree
verification, or run with 7 nodes and n=5.
On Thu, Oct 8, 2015 at 2:23 AM, Ali Rı
if there is a failure, and the leader is re-elected, the epoch of the ensemble
changes, so you may have to refetch the object to get a new causal context
in order to perform another write. If this occurs, you should be able to
heal by doing another read, modify, write cycle.
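A minimal sketch of that read/modify/write retry with the Erlang client
(the function and names are made up; sibling handling is ignored):

    %% On a failed consistent write, re-fetch to pick up the new causal
    %% context and retry the update.
    rmw(Pid, Bucket, Key, Fun) ->
        {ok, Obj0} = riakc_pb_socket:get(Pid, Bucket, Key),
        Obj1 = riakc_obj:update_value(Obj0, Fun(riakc_obj:get_value(Obj0))),
        case riakc_pb_socket:put(Pid, Obj1) of
            ok -> ok;
            {error, _Reason} -> rmw(Pid, Bucket, Key, Fun)  %% naive retry
        end.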
On Wed, Sep 23, 2015 at 3:
So, the way it should work is pretty simple:
Run the command a la: ./basho_bench -N nodeA@10.0.1.123 -C
basho_bench_cookie
(It's key that the IP address be the external IP, and not the internal IP
of the box, a la loopback)
In addition, nodeB must have the same version of Erlang installed as Nod
Are you using bitcask, or LevelDB? What version of Riak are you using?
Bitcask will lazily merge files in the background in order to reclaim
space. This is pretty aggressive, and it should show up pretty quickly. On
the other hand, LevelDB deletes files at compaction time. Compactions
aren't quite
Put a period at the end.
On Wed, Aug 12, 2015 at 8:52 PM, Toby Corkindale wrote:
> I'm trying to set up another cluster, and I'm hitting problems with Riak
> complaining that ** System running to use fully qualified hostnames **
> ** Hostname db04 is illegal **
>
> However, as far as I can see, t
It doesn't actually sound like you need strong consistency at all.
Strong consistency can be set at a bucket-type level, and will get you
what you want, but it may be too heavy-handed.
It sounds like the token should be written to Riak and you should be
getting a durable ack from Riak before ever g
You could map your keys to a given bucket, and that bucket to a given
backend using multi_backend. There is some cost to having lots of backends
(memory overhead, FDs, etc...). When you want to do a mass drop, you could
down the node, and delete that given backend, and bring it up. Caveat: AAE,
MDC
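A rough advanced.config sketch of what I mean (the backend names and paths
are made up); you'd then point the droppable bucket's backend property at
the named backend:

    %% Fragment to merge into the riak_kv section of advanced.config.
    {riak_kv, [
        {storage_backend, riak_kv_multi_backend},
        {multi_backend_default, <<"bitcask_default">>},
        {multi_backend, [
            {<<"bitcask_default">>, riak_kv_bitcask_backend,
                [{data_root, "/var/lib/riak/bitcask_default"}]},
            {<<"droppable">>, riak_kv_bitcask_backend,
                [{data_root, "/var/lib/riak/bitcask_droppable"}]}
        ]}
    ]}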
When you start basho_bench, you must start Erlang in distributed mode,
which means you must set a node name, and a cookie.
So, it would look something like the following: "./basho_bench -N
foo@127.0.0.1 -C basho_bench examples/riakc_pb_distributed.config"
Ensure that in examples/riakc_pb_distribu
I advise Ubuntu 14.04 with a Utopic kernel. Userland is pretty
trustworthy on 14.04, and newer kernels rarely make things worse.
On Thu, Mar 5, 2015 at 8:51 AM, Luke Bakken wrote:
> Hello,
>
> Use an operating system that is officially supported:
> http://docs.basho.com/riak/latest/downloads/
>
>
Are you using LevelDB? If so, it'd be impossible, and even unfair for
me to summarize all the work that's been done by MVM on LevelDB as
part of 1.4, and 2.0: https://github.com/basho/leveldb/wiki
On Tue, Feb 10, 2015 at 6:15 PM, siva Ram wrote:
> In terms of performance improvement improvemen
I really don't recommend doing this. Really, the cluster shouldn't be
changing often enough that the ensemble status is changing all that
often. You can do this by exploiting internal APIs that are accessible
over distributed Erlang in riak_ensemble. Though, connecting to the
cluster over distribu
ading the old ones after this process is completed?
>
> Thanks a lot for the other tips, you've been very helpful!
>
> Best regards,
> Edgar
>
> On 24 January 2015 at 21:09, Sargun Dhillon wrote:
>>
>> Several things:
>> 1) If you have data at rest that doesn
Several things:
1) If you have data at rest that doesn't change, make sure you have
AAE, and that it's run before your cluster is manipulated. Given that
you're running at 85% space, I would be a little worried about turning it
on, because you might run out of disk space. You can also pretty
reasonably put th
So, you need to set add_paths as a runtime option, once you compile
the Erlang program into beam files (perhaps with a .app) so that Riak
can load your code. Have you already compiled your program with the
Erlang compiler that comes with your version of Riak? Which version of
Riak are you using? In
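For reference, a sketch of where add_paths would go in advanced.config
(the directory is just an example):

    %% Fragment to merge into the riak_kv section of advanced.config:
    %% point Riak's code path at your compiled beams.
    {riak_kv, [
        {add_paths, ["/opt/myapp/ebin"]}
    ]}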
Assuming your storage engine is bitcask:
1. Turn off all your Riak nodes
2. rm -rf --verbose /var/lib/riak/bitcask/*/*
3. rm -rf --verbose /var/lib/riak/anti_entropy/*/*
4. Turn Riak back on
This will preserve your buckets and bucket types in cluster
metadata. You can also automate that, but it
ning for only one key, or is common for
>> more?
>>
>> What is the CPU utilisation in the cluster when you're experience these
>> timeouts?
>>
>> Can you spot anything peculiar in your server's $ dmesg outputs? Any I/O
>> errors there?
>>
>&g
Several things:
1) I recommend you have a 5-node cluster:
http://basho.com/why-your-riak-cluster-should-have-at-least-five-nodes/
2) What version of Riak are you using?
3) What backend(s) are you using?
4) What's the size of your keyspace?
5) Are you actively rewriting keys, or writing keys to the
Can you post the files in the log directory on a github gist, and run
the command "./dev1/bin/riak console"? In addition, run Riak as a
non-root user, with the max files limit bumped up as high as you can
set it.
On Wed, Dec 24, 2014 at 10:47 PM, Ildar Alishev wrote:
> Hello
>
>
> I have a proble
are disabled
> (allow_multi = false).
>
> Regards.
>
> On Mon, Dec 22, 2014 at 9:17 PM, Sargun Dhillon wrote:
>> What versions of Riak are you using? And are these CRDT sets?
>>
>> Sent from my iPhone
>>
>>> On Dec 22, 2014, at 16:04, Claudio Cesar
What versions of Riak are you using? And are these CRDT sets?
Sent from my iPhone
> On Dec 22, 2014, at 16:04, Claudio Cesar Sanchez Tejeda
> wrote:
>
> I'm a sysadmin and I'm managing 5 clusters of Riak:
>
> - two of them are LXC containers on the same physical machine (3 nodes
> per cluster)
> -
So, from my understanding, one of your servers was being replaced, so
you did a leave from the cluster, and it failed to commit, and then
another node failed, resulting in 3/4ish or 3/5ish of the ring being
up?
Did you down the failed node, or remove it from the cluster? What's
the current status o
How did you install R17? kerl? If so, it silently doesn't install
crypto if you don't have libssl-dev installed, see here:
https://github.com/evax/kerl/issues/13
On Wed, Nov 5, 2014 at 12:28 AM, Abd El-Fattah Mahran
wrote:
>
> Hi,
>
> I downloaded .deb package with R17 and it is working with no e
Are you rewriting keys? What client library are you using to upload them?
Just as an aside, you should probably use Riak-CS; vanilla Riak isn't
really meant to handle object storage.
On Tue, Oct 21, 2014 at 1:47 AM, Simon Rodriguez wrote:
> Hi all,
>
> I've read several threads about this questi
Distributed deletion and garbage collection is a really hard problem.
Riak has a couple different ways to do it talked about here:
http://docs.basho.com/riak/latest/ops/advanced/deletion/
The default mechanism tombstones the key, and then waits 3 seconds
after the write has reached stable state (
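That window is the delete_mode setting; a sketch of tuning it in
advanced.config (3000 ms is the default, as far as I recall):

    %% Fragment to merge into the riak_kv section of advanced.config.
    %% delete_mode controls how long the tombstone is kept before reaping:
    %% `keep` retains tombstones forever, `immediate` reaps right away,
    %% an integer is a delay in milliseconds.
    {riak_kv, [
        {delete_mode, 3000}
    ]}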
When you do read / modify / writes, are you also planning on sending
the relevant read through one node only? In that case, your update
latency might suffer if the egress queues of your designated node get
backed up on writes, waiting for a very low cost read query.
You're more likely to get awkwa
So, I've been doing some testing lately. Riak sounds like it'll meet
your use cases with some caveats. LevelDB on SSDs will handle data that
changes often fairly well, as long as your disks have enough throughput, and
your CPUs are large enough to handle the compaction. If you're in GCE, I
recommend using persisten
So, from my understanding, there are two types of data that you have.
Family 1:
-Constantly changing
-Representable by CRDTs
-Can handle eventual consistency
Family 2:
-Rarely changing
-Need immediate consistency, and linearizability
With both key families, you care about tail latency.
Questions
So, I don't have a ton of experience with Riaknostic, but taking a
casual glance at the source code, it appears that Riaknostic caches
some node-local data about the ring (see:
https://github.com/basho/riaknostic/blob/2.0.0/src/riaknostic_node.erl#L192-L208).
You should be able to unset this by att
So, if I'm interpreting your message correctly, you're executing the
following steps, in order?
1. Key delete, X -> success
2. Key list -> yields a set, which contains X
3. Key fetch, X -> 404 / missing
I'd first ask why you're doing key listing. That's an anti-pattern.
Y
Although I'm not a Basho engineer, I work for a partner. I've
deployed Riak, and Riak-CS in production services. I promise you don't
have to stay in your corner if you want to talk about deploying, and
using Riak. You'll find me at the conference. There tends to be some
split of industry types, an
If you upgrade to 2.0, then there is a ring-resize feature built right in.
That might be a better approach to take.
On Tue, Sep 2, 2014 at 10:07 AM, Mark Rechler wrote:
> Hello All,
>
> What would be the best tool for moving data from one riak cluster to
> another?
>
> To give some context, we
I second John's opinions. Generally, I would have one key which
is the secondary index, being an observe-remove OR-set (or a relevant
type for your application, be it a register, g-set, or a plain old
OR-set) pointing back to the keys. Unfortunately, this mechanism
can become quite unwieldy in
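As a sketch of the write path (bucket type, bucket, and argument names are
made up), the index update with the Erlang client would look something like:

    %% After writing the primary row, add its key to the OR-set that
    %% serves as the index.
    update_index(Pid, IndexKey, PrimaryKey) ->
        riakc_pb_socket:modify_type(Pid,
            fun(S) -> riakc_set:add_element(PrimaryKey, S) end,
            {<<"sets">>, <<"indexes">>}, IndexKey, [create]).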
he
> bucket name.
> 3. Coming from RDBMS background, we are used to seeing incremental Ids.
> K-sort is not a hard requirement but nice to have.
>
> On 8/20/14, 11:03 AM, "Sargun Dhillon" wrote:
>
>>I have questions for your question.
>>
>>1. What are
I have questions for your question.
1. What are you using your keys for? Do they get passed around in to
clients in Javascript? This is important because Javascript only
reliably implements IEEE 754 floating point, which is limited to 53
bits of precision.
2. What backend are you using? In Bitcas
Is the requirement for having AAE enabled now removed for strong consistency?
On Mon, Jul 28, 2014 at 4:55 PM, Joseph Blomstedt wrote:
> This means the consistency sub-system is not enabled/active. You can
> verify this with the output of `riak-admin ensemble-status`.
>
> To enable strong consist
You really should have some level of IP filtering to prevent people
from connecting directly to your BEAM / epmd instances, but even if
they do have the ability to make a TCP/IP connection, they have to
know the distributed Erlang cookie in order to connect. More on this:
http://www.erlang.org/doc/r
it for AAE exchange to occur, as on a real
cluster, this might take a while.
On Wed, May 28, 2014 at 3:24 AM, Sargun Dhillon wrote:
> So, I noticed that if I don't have anti-entropy on, and I enable
> strongly consistent Riak, it doesn't work. Specifically, what happens
> is that
So, I noticed that if I don't have anti-entropy on, and I enable
strongly consistent Riak, it doesn't work. Specifically, what happens
is that riak_kv_ensembles sets up the ensembles, but the
riak_ensemble_peer never gets past the all_sync state. It appears that
this is because the riak_kv_ensemble_
How much deeper does your tree go? What's the average number of
children a node has? What is your query pattern (fetch a parent, and
all of its children?)?
On Fri, Apr 11, 2014 at 10:13 AM, Sapre, Meghna A
wrote:
> Hi all,
>
> Most of my data is in parent-child format (1:n).
>
> For read/write
If you can afford the disk space and potential latency overheads, you
can make your bitcask merges less frequent. You can find a window when
your databases are going to be under low usage, and have each node use a
separate window for bitcask merges, which (A) will level latency
across your cluster,
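If you're still on app.config-style settings, a sketch of such a window
(the hours are examples; stagger them per node, and double-check the key
name against your Riak version):

    %% Fragment to merge into the bitcask section of advanced.config:
    %% only merge between 01:00 and 05:00 on this node.
    {bitcask, [
        {merge_window, {1, 5}}
    ]}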
er?
> https://github.com/basho/riak_kv/issues
>
>
> On Mon, Jan 20, 2014 at 3:08 PM, Sargun Dhillon wrote:
>>
>> So, I don't know how many people are aware of if, but Riak supports
>> custom hashing (partitioning) functions. It's exposed as a bucket
>>
Not to fork the thread too far from the topic being discussed, but is
there any possibility of opening up the API used for multidatacenter
replication? Specifically, the fullsync API? I imagine the code inside
riak_repl can also be used for an external node to connect and get a
full dump of a node'
So, I don't know how many people are aware of it, but Riak supports
custom hashing (partitioning) functions. It's exposed as a bucket
property (chash_keyfun), in which you can deploy your own code to hash
keys to ensure data locality to specific vnodes. This can come in
handy when doing custom mapr
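For illustration only (the module and placement semantics are made up), a
keyfun that places keys by a prefix so related keys share a preference list:

    -module(my_chash).
    -export([prefix_keyfun/1]).

    %% Hash on the part of the key before the first $/ so "user1/a" and
    %% "user1/b" map to the same vnodes.
    prefix_keyfun({Bucket, Key}) ->
        [Prefix | _] = binary:split(Key, <<"/">>),
        chash:key_of({Bucket, Prefix}).

You'd then point the bucket's chash_keyfun property at
{my_chash, prefix_keyfun}.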