ame">>,<<"A1">>}}
{{<<"testbucket">>,{<<"employee_designation">>,<<"Analyst">>}}
{{<<"testbucket">>,{<<"employee_designation">>,<<"Technical">>}}
Apologies, clicked send in the middle of an incomplete thought. It should
have read:
Backing up the LevelDB data files while the node is stopped would remove the
necessity of using the LevelDB repair process upon restoring to make the
vnode self-consistent.
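For reference, the repair step that a cold backup lets you skip is normally
run per vnode from an attached console. A rough sketch only (the data path
and partition listing below are illustrative, not taken from any particular
cluster):

  %% From riak attach: repair each restored LevelDB vnode directory so the
  %% backend is self-consistent before the node rejoins the cluster.
  DataRoot = "/var/lib/riak/leveldb",
  {ok, Partitions} = file:list_dir(DataRoot),
  [eleveldb:repair(filename:join(DataRoot, P), []) || P <- Partitions].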
From: Joe Caswell
Date: Thursday
er/009861.html
In either restore case, having a backup of the merge_index data files is not
helpful, so there does not appear to be any point in backing them up.
Joe Caswell
From: Sean McKibben
Date: Tuesday, January 21, 2014 1:04 PM
To: Elias Levy
Cc: "riak-users@lists.basho.com"
S
Justin,
The binary in the log entry below equates to:
{<<"collector-collect-twitter">>,<<"data_followers">>,<<32897-byte string>>}
Hope this helps.
Joe
From: Justin Long
Date: Sunday, November 24, 2013 5:17 PM
To: Joe Caswell
/Repairing-Search-Indexes/
However, you would first need to modify your extractor so that it does not
produce search keys larger than 32k, or the corruption issues will recur.
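As a rough illustration only (the exact extract callback shape depends on
which extractor you have installed, so treat the wrapper below as an
assumption), the idea is simply to cap each extracted field value below the
32k limit before it reaches merge_index:

  %% Illustrative helper: cap extracted field values below the 32k key limit
  %% before returning them from your extractor.
  -define(MAX_FIELD_BYTES, 32000).

  truncate_fields(Fields) ->
      [{Name, truncate(Value)} || {Name, Value} <- Fields].

  truncate(Value) when is_binary(Value), byte_size(Value) > ?MAX_FIELD_BYTES ->
      binary:part(Value, 0, ?MAX_FIELD_BYTES);
  truncate(Value) ->
      Value.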
Joe Caswell
From: Richard Shaw
Date: Sunday, November 24, 2013 4:25 PM
To: Justin Long
Cc: riak-users
Subject: Re: Runaway "F
de
started. After the first node, all the rest can be handled with cluster
replace.
Joe Caswell
From: Brady Wetherington
Date: Monday, October 21, 2013 12:26 PM
To: Dave Brady
Cc: Riak Users Mailing List
Subject: Re: 1.4.2: 'riak-admin reip' no longer works?
Huh. I thought we were su
Dilip
What is meant by 'not working'?
Those values specify where the backends store their data files, and should
not be changed once users/objects have been stored.
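For context, these are the settings in question in app.config (the paths
shown are just the common packaged defaults; yours may differ):

  %% app.config excerpt: per-backend data directories.
  {bitcask, [
      {data_root, "/var/lib/riak/bitcask"}
  ]},
  {eleveldb, [
      {data_root, "/var/lib/riak/leveldb"}
  ]}

Pointing either of these somewhere new after data has been written simply
makes Riak start over against an empty directory, which is why previously
stored objects appear to be gone.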
Joe
From: dilip kumar
Reply-To: dilip kumar
Date: Monday, August 12, 2013 1:15 PM
To: "riak-users@lists.basho.com"
Subject
There are two options for that situation, when riak-admin cluster plan gives
you the "Not all replicas will be on distinct nodes" warning.
1. You can riak-admin cluster clear, which will wipe out the plan, and then
riak-admin cluster plan again. You will get the same effect of
redistributing the vnodes
Elias,
Just for the sake of argument, if you use
index(bucket_name,"$key",'0'..'z')
do you get the same result?
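If it helps, one way to run that check from the Erlang client is a
count-only MapReduce over the special $key index. This is only a sketch:
the {index, ...} input form, host/port, and bucket name below are
assumptions for illustration.

  %% Count the objects in a bucket by ranging over the special $key index.
  {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
  Inputs = {index, <<"bucket_name">>, <<"$key">>, <<"0">>, <<"z">>},
  Query  = [{reduce, {modfun, riak_kv_mapreduce, reduce_count_inputs},
             none, true}],
  {ok, [{_Phase, [Count]}]} = riakc_pb_socket:mapred(Pid, Inputs, Query),
  Count.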
Joe
From: Elias Levy
Date: Friday, June 21, 2013 1:57 AM
To: "riak-users@lists.basho.com"
Subject: Mismatched object counts
I've just inserted some data into a six node Ri
The cpu_sup module is not a Basho/Riak module. The specific error you noted
is discussed on the erlang-questions list here:
http://erlang.org/pipermail/erlang-questions/2008-May/034891.html
It seems to be related to locales. Perhaps setting the mentioned
environment variables when starting Riak w
When you set a custom property on a bucket, the custom settings as well as
the current defaults are stored and gossiped among the nodes.
When Riak checks a property for a bucket, it checks for these previously
stored settings first, then falls back on the default props.
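A quick way to see that from an attached console is to set one custom
property and then read the bucket back; the bucket and property here are
just for illustration, and the exact merge behaviour varies a little between
versions:

  %% From riak attach: storing one custom property also captures the
  %% defaults current at that moment, and reads consult the stored copy
  %% before falling back to default_bucket_props.
  riak_core_bucket:set_bucket(<<"testbucket">>, [{n_val, 5}]),
  riak_core_bucket:get_bucket(<<"testbucket">>).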
When you change default_buck
>From: Pavel Kirienko
>Date: Monday, May 6, 2013 6:29 AM
>To:
>Subject: Inconsistent cluster membership
>Here are the questions:
>1. Why is member-status output on different nodes inconsistent, and what to do
>about it?
Some occurrence has caused the nodes to no longer agree on the ring. If you
er, we do have dynamic ring sizing on the
product roadmap, but there is no release date set for that feature.
Joe Caswell
From: Tom Zeng
Date: Sunday, April 21, 2013 9:38 PM
To:
Subject: the optimal value of the ring_creation_size
Hi,
I am wondering what's the best value for ring_cre
is allowed to merge by setting the merge_window on
the node that is being overloaded
(http://docs.basho.com/riak/latest/tutorials/choosing-a-backend/Bitcask/#Configuring-Bitcask)
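For example, restricting merges to an overnight window would look something
like this in that node's app.config (the 1-to-5 window is only an example):

  %% app.config excerpt: only allow Bitcask merges between 01:00 and 05:00
  %% on this node; merge_window also accepts the atoms always | never.
  {bitcask, [
      {merge_window, {1, 5}}
  ]}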
Joe Caswell
From: Yuri Lukyanov
Date: Thursday, April 18, 2013 5:07 AM
To: "riak-users@lists.basho.com"
We have seen this a couple of times. A common cause is that the riak user
needs write access to $PIPE_DIR in order to start properly. PIPE_DIR is
usually /tmp/riak/; check your riak script to make sure. If the directory
doesn't exist, it will be created with Unix permissions 755. Verify that the
riak user