Hi Daniel,
One possible configuration would be:
+ front the Riak cluster with HAProxy (or a hardware load balancer)
+ when the Riak server boots up, block the Riak API ports (using iptables)
+ also at boot, spawn a riak-admin wait-for-service riak_kv process [0]
+ once the riak-admin command exits, drop the iptables rules so the load
balancer starts routing traffic to the node again (see the sketch below)
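A minimal boot-script sketch, assuming the stock API ports (8087 for
protocol buffers, 8098 for HTTP) and root privileges; adjust the ports and
node name to match your app.config and vm.args:

#!/bin/bash
# 1. Block the client-facing ports so the load balancer marks this node down
iptables -A INPUT -p tcp --dport 8087 -j REJECT
iptables -A INPUT -p tcp --dport 8098 -j REJECT

# 2. Start riak and block until the riak_kv service is fully up
riak start
riak-admin wait-for-service riak_kv riak@$(hostname -f)

# 3. Re-open the API ports; the load balancer health checks pass again
iptables -D INPUT -p tcp --dport 8087 -j REJECT
iptables -D INPUT -p tcp --dport 8098 -j REJECT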
Hi Daniel,
Secondary index queries need at least 1/n_val of the primary partitions to
be available before they can run successfully; Riak will return
{error,insufficient_vnodes_available} while the required primary partitions
are still coming up.
I would suggest defensive programming (retrying the 2i query after a short
delay).
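For instance, a retry sketch against the HTTP 2i endpoint (the bucket,
index, value, and the five-attempt/two-second back-off below are all
illustrative):

#!/bin/bash
# Retry a 2i query until the required primary partitions are available
for attempt in 1 2 3 4 5; do
    reply=$(curl -s http://127.0.0.1:8098/buckets/users/index/email_bin/foo@example.com)
    # stop retrying once the insufficient_vnodes error is gone
    if ! echo "$reply" | grep -q insufficient_vnodes; then
        echo "$reply"
        break
    fi
    echo "primary partitions not ready, retry $attempt..." >&2
    sleep 2
done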
Hi Daniel,
"A Little Riak Book" covers the logic behind partition allocation in an
overly simplified way.
Riak will distribute partitions to vnodes in a pseudo-random fashion,
resulting in allocations like you described. These allocations are less
optimal when the number of riak nodes are small,
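You can inspect the resulting allocation yourself; both commands ship with
riak-admin:

riak-admin member-status    # percentage of the ring claimed by each node
riak-admin ring-status      # ring membership and pending ownership changes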
Hi Massimiliano,
As a first step I would recommend setting pb_backlog to 64 or 128 in your
app.config
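For illustration, in Riak 1.4+ the knob lives in the riak_api section of
app.config (older releases kept it under riak_kv):

{riak_api, [
    %% accept up to 128 pending protocol buffers connections
    {pb_backlog, 128}
]},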
How are you distributing the load from your python clients to the Riak
cluster? Is every python client connecting directly to one Riak node, or do
you have a pool of Riak servers configured in each client?
Hi Daniel,
target_n_val can be changed at any time and will take effect at the first
iteration of the claim algorithm [0] (which usually runs whenever you
add/remove nodes from the cluster via the riak-admin command).
On default settings, Riak is able to replicate data to distinct nodes in
all clusters of five or more nodes.
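If you do need to change it, target_n_val sits in the riak_core section of
app.config; the 4 below is the default, shown only for illustration:

{riak_core, [
    {target_n_val, 4}
]},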
Hi Daniil,
You should mark the claimant node as down.
Run the following command on another node:
riak-admin down riak@
Regards,
Ciprian
On Wed, Apr 23, 2014 at 12:13 PM, Daniil Churikov wrote:
> Hello riak users.
>
> We have a riak-1.3.2 cluster with 3 nodes. One of this node was physicall
Hi Leonid,
Which Riak version are you running?
Have you committed* the cluster plan after issuing the cluster force-remove
commands?
What is the output of $ riak-admin transfer-limit, run from one of your
riak nodes?
*Do not run this command yet if you have not done it already.
Please run a r
handler:handle_event:85 monitor long_gc <0.713.0>
> [{initial_call,{riak_core_vnode,init,1}},{almost_current_function,{gen_fsm,loop,7}},{message_queue_len,0}]
> [{timeout,126},{old_heap_block_size,0},{heap_block_size,1597},{mbuf_size,0},{stack_size,38},{old_heap_size,0},{heap_size,658}]
Hi Andrew,
Looks like a streaming search operation was timing out, unable to generate
more results:
https://github.com/basho/riak_search/blob/develop/src/riak_search_op_utils.erl#L174
This could happen if another node involved in this operation became
unavailable (due to network segmentation, a
Hi Simon,
Quick answer: the more ports open, the merrier; at least 6.
As these are Erlang specific settings, I recommend having a look at the
official answer [0].
[0] http://www.erlang.org/faq/how_do_i.html#idp27500560
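If your firewall needs a fixed range, you can pin the Erlang distribution
ports in vm.args; the 6000-7999 range below is an example, not a
recommendation:

## restrict the port range used for Erlang distribution
-kernel inet_dist_listen_min 6000
-kernel inet_dist_listen_max 7999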
Regards,
Ciprian
On Fri, Aug 15, 2014 at 3:15 PM, Simon Hartley <
simon
Hi Marcel,
What is the configured ring size for this cluster?
You can slow down the transfers by running $ riak-admin transfer-limit 1 on
one of your riak nodes. iowait should decrease as well once transfer-limit
is lowered, unless one of your disks is failing or about to fail.
Regards,
Ciprian
Hi Guido,
Yes, you can run the 1.4.x java client against riak 2.0 as long as you
don't activate the newer features like security and bucket types.
Regards,
Ciprian
On Mon, Nov 10, 2014 at 2:58 PM, Guido Medina
wrote:
> Hi,
>
> Is it possible to run the Riak Java client 1.4.x against Riak 2.x?
Hi Jason,
Are these random timeouts happening for only one key, or are they common
across many keys?
What is the CPU utilisation in the cluster when you're experiencing these
timeouts?
Can you spot anything peculiar in your server's $ dmesg outputs? Any I/O
errors there?
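For example (the grep patterns below are just a starting point):

dmesg | grep -iE 'i/o error|segfault|oom'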
Regards,
Ciprian
On Mon, Dec 29, 2014
Hi Ildar,
Please have a look at the configuration files: /etc/riak/app.config and
/etc/riak/vm.args
By default Riak binds to localhost, but you can change that using the
following snippet:
export riakIP=$(ifconfig eth0 | grep 'inet addr' | cut -d: -f2 | cut -d' '
-f1)
sudo sed -i "s/127.0.0.1/$riakIP/g" /etc/riak/app.config
Hi Ildar,
We have a web GUI for riak called Rekon [0].
While not in active development, it's a good starting point to browse your
riak data.
Please note that Rekon should NOT be used on a production cluster!
[0] https://github.com/basho/rekon
Regards,
Ciprian
On Thu, Jan 15, 2015 at 11:07 AM
Hi Simon,
Please find below some pointers regarding AAE concepts [0] and management
[1]
[0] http://docs.basho.com/riak/1.4.12/theory/concepts/aae/
[1] http://docs.basho.com/riak/1.4.12/ops/advanced/aae/
Regards,
Ciprian
On Thu, Jan 22, 2015 at 1:21 PM, Simon Hartley <
simon.hart...@williamhill
Hi Simon,
Looking at this problem from another angle, a ring size of 128 is too large
for just 3 servers with 4 GB RAM each. For instance, when dimensioning a
cluster with the LevelDB backend we recommend that our customers observe
the calculations in this spreadsheet [0].
Filling in the above spreadsheet with your numbers will make this concrete.
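As a rough back-of-the-envelope (the per-vnode figures below are
assumptions for illustration, not measurements): 128 partitions over 3
nodes means ceil(128/3) = 43 vnodes per node. If each LevelDB vnode holds
an 8 MB block cache plus 15-20 MB of open-file and working overhead, that
alone consumes about 1 GB of each server's 4 GB before the Erlang VM, the
AAE trees, and the OS page cache get their share.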
> I can rebuild the cluster with ring size 16 if necessary, but can you
> explain why the current larger ring size produces the sudden memory spike
> and subsequent crash?
>
> Thanks,
> Simon.
> ... of the remainder of the vnode
> system (i.e. everything but the storage component)?
>
> Thanks,
> Simon.
>
> *From:* Ciprian Manea [mailto:cipr...@basho.com]
> *Sent:* 12 February 2015 12:56
> *To:* Simon Hartley
> ... the same problems?
>
> Thanks,
> Simon.
>
> *From:* Ciprian Manea [mailto:cipr...@basho.com]
> *Sent:* 13 February 2015 08:17
> *To:* Simon Hartley
> *Cc:* riak-users@lists.basho.com
> *Subject:* Re: Simple 3 node test cluster eating all m
Hi Daniel,
Have you investigated your server's dmesg output? Segfaults can also be
triggered by memory corruption, so please check that first.
Regards,
Ciprian
On Tue, Feb 17, 2015 at 1:00 PM, Daniel Iwan wrote:
> We are experiencing crash of beam.smp on one of nodes in 3-node cluster
> (ring
>
Hi,
Please change the following configuration in /etc/riak-cs/riak-cs.conf:
listener = 10.0.2.10:8080
riak_host = 10.0.2.10:8087
stanchion_host = 10.0.2.10:8085
to:
listener = 0.0.0.0:8080
riak_host = 10.0.2.10:8087
stanchion_host = 10.0.2.10:8085
This quick fix will have Riak CS listen on all interfaces.
Hi Mohamad,
It's possible that some of your nodes are down or restarting, which
triggers the "partition not running" message in `riak-admin transfers` output.
Please ensure that all riak nodes are running.
Regards,
Ciprian
On Tue, Oct 13, 2015 at 5:48 PM, Mohamad Taufiq
wrote:
> What is the right