Those are all riak-admin commands (e.g. riak-admin down) to run on a still-connected node, pointing at the node you need to change.
Note that these instructions are for renaming one node at a time, not every node
on a ring at once.
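For reference, the rename procedure being discussed can be sketched roughly as below. This is a sketch only: node names, the ring-directory path, and exact commands are hypothetical and version-dependent (Riak 1.2-era clustering commands); check the linked docs issue before running anything.

```shell
## On the node being renamed (old name riak@10.0.0.1, new name riak@10.0.0.2):
riak stop
# edit vm.args: change -name to riak@10.0.0.2
rm -rf /var/lib/riak/ring/*        # the data/ring directory (path varies by install)
riak start                         # node comes back up as a fresh one-node cluster

## On the renamed node, rejoin the cluster (riak@10.0.0.9 is any existing member):
riak-admin cluster join riak@10.0.0.9

## On a node still in the cluster:
riak-admin down riak@10.0.0.1
riak-admin cluster force-replace riak@10.0.0.1 riak@10.0.0.2
riak-admin cluster plan
riak-admin cluster commit
```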
Eric
On Dec 28, 2012, at 6:55 PM, Eric Redmond wrote:
> It's not
Thank you for the tips. This definitely gives me something to go on. Some
questions though.
One, what is the path to the data/ring directory that is to be removed? Two, I
don't understand what step 4 is trying to accomplish. Mark it down? riak-admin
down? Finally, I also need some elaboration on
It's not documented well, but hopefully these more specific steps will help.
https://github.com/basho/basho_docs/issues/28
Eric
On Dec 28, 2012, at 12:57 PM, rkevinbur...@charter.net wrote:
> I am trying to get a cluster going and I have successfully renamed all of the
> listening addresses an
0.0.0.0 will bind to all IP addresses. You generally don't want to do that.
@siculars
http://siculars.posterous.com
Sent from my iRotaryPhone
On Dec 28, 2012, at 17:09, rkevinbur...@charter.net wrote:
> The instructions at
> http://docs.basho.com/riak/1.2.0/cookbooks/Basic-Cluster-Setup/ ind
The instructions at
http://docs.basho.com/riak/1.2.0/cookbooks/Basic-Cluster-Setup/ indicate
editing app.config and vm.args with the IP address of the machine. What
is wrong with 0.0.0.0?
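For comparison, a minimal sketch of what those docs have in mind, with a hypothetical address in place of 0.0.0.0 (in 1.2-era configs the PB settings may live under riak_kv or riak_api depending on packaging, so verify against your own app.config):

```erlang
%% app.config -- bind HTTP and protocol buffers to the machine's
%% routable address rather than 0.0.0.0 (address is hypothetical)
{riak_core, [{http, [{"192.168.1.10", 8098}]}]},
{riak_kv,   [{pb_ip, "192.168.1.10"}, {pb_port, 8087}]}

%% vm.args -- the Erlang node name must also use a reachable address
-name riak@192.168.1.10
```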
___
riak-users mailing list
riak-users@lists.basho.com
http://
I am trying to get a cluster going and I have successfully renamed all
of the listening addresses and the names of the riak nodes but it
appears that none of the nodes will start. I get the following error in
console.log. Any idea what this means?
2012-12-28 14:41:29.912 [info] <0.7.0> Applic
I don't believe allow_mult is enabled. It shouldn't be at least!
On Dec 28, 2012, at 1:23 PM, Brian Roach wrote:
> On Fri, Dec 28, 2012 at 12:34 PM, Dietrich Featherston
> wrote:
>> Primarily stores but I did see one case of socket timeouts simply building a
>> new connection pool using the rj
On Fri, Dec 28, 2012 at 12:34 PM, Dietrich Featherston
wrote:
> Primarily stores but I did see one case of socket timeouts simply building a
> new connection pool using the rjc.
This should be simply a result of attempting to bring up another
instance of the client when the node can't accept mor
On Dec 28, 2012, at 11:57 AM, Brian Roach wrote:
> On Fri, Dec 28, 2012 at 11:37 AM, Dietrich Featherston
> wrote:
>>
>> All socket operations. It looks as though those that open a new socket are
>> especially
>> impacted. We are running 1.2.1 with the leveldb backend. Same 9 node SSD
>> cl
On Fri, Dec 28, 2012 at 11:37 AM, Dietrich Featherston
wrote:
>
> All socket operations. It looks as though those that open a new socket are
> especially
> impacted. We are running 1.2.1 with the leveldb backend. Same 9 node SSD
> cluster info I
> have posted to the list before but don't have ac
On Dec 28, 2012, at 11:28 AM, Brian Roach wrote:
> Dietrich -
>
> I haven't seen this in testing or have had anyone report this; could I
> get some more info?
>
> What operations are timing out like this? Is that the complete message
> in the riak error.log?
That is the complete message in t
On Dec 28, 2012, at 11:28 AM, Brian Roach wrote:
> Dietrich -
>
> I haven't seen this in testing or have had anyone report this; could I
> get some more info?
>
> What operations are timing out like this? Is that the complete message
> in the riak error.log? Which version of Riak are you runn
Dietrich -
I haven't seen this in testing, nor has anyone reported it; could I
get some more info?
What operations are timing out like this? Is that the complete message
in the riak error.log? Which version of Riak are you running?
Do you see this simply dropping in the 1.0.6 client to your e
Hey Joshua,
Do you know all your keys, or are they predictable? If so, you can read
them in batches and write them back with the new indicies, which will not
put the strain of list_keys on your cluster.
That said, just wanted to point out that Option #2 necessitates Option #1,
indexing of object
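The batch read-and-rewrite approach described above can be sketched as follows. The dict-based `store`, the `reindex` helper, and the index-building callback are all stand-ins invented for illustration; a real implementation would issue GET/PUT calls through a Riak client instead.

```python
# Sketch: re-index objects by reading known (or predictable) keys in
# batches and writing them back with new indexes, avoiding list_keys.

def batches(items, size):
    """Yield successive chunks of `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def reindex(store, keys, make_indexes, batch_size=100):
    """Read each batch of known keys and write the objects back
    with freshly computed secondary indexes."""
    for chunk in batches(keys, batch_size):
        for key in chunk:
            obj = store[key]                    # GET
            obj["indexes"] = make_indexes(obj)  # recompute 2i entries
            store[key] = obj                    # PUT back

# Hypothetical in-memory store standing in for a Riak bucket:
store = {f"k{i}": {"value": i} for i in range(5)}
reindex(store, list(store), lambda o: {"value_int": o["value"]})
```

Batching keeps at most `batch_size` objects in flight, so the cluster sees a steady trickle of gets and puts rather than a full-bucket key listing.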
No results, Alexander, because of the ":" in the hour. Removing them it works,
but I can't change the date format.
I don't get the "noop tokenizer"... this is my schema file, already updated
to make the date fields strings.
%% Schema for 'logs'
{
  schema,
  [
    {version, "1.1"},
    {n
Change your schema.erl for that bucket to index those date fields as strings. I
believe it's the "noop" tokenizer.
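A field entry along these lines is probably what's meant. This is a sketch of a legacy riak_search schema field, using one of the field names from the example log object; verify the analyzer module and factory name against the riak_search source for your version before relying on it:

```erlang
%% Treat the date field as an opaque string via the no-op analyzer,
%% so the ":" characters in the time are not tokenized away.
{field, [
    {name, "started_at"},
    {analyzer_factory, {erlang, text_analyzers, noop_analyzer_factory}}
]}
```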
@siculars
http://siculars.posterous.com
Sent from my iRotaryPhone
On Dec 28, 2012, at 9:35, Daniel Gerep wrote:
> Hi all,
>
> I'm saving logs like this:
>
> {
> "transaction_
Hi all,
I'm saving logs like this:
{
  "transaction_type": "company",
  "app_name": "app.posxml",
  "started_at": "2012/12/28 12:23:19",
  "finished_at": "2012/12/28 12:23:26",
  "serial_number": "941-823-764",
  "terminal_id": "474",
  "framework_version": "3.53",
  "status": "timeout",
  "sent": "31305042202020080001
Let me clarify a bit:
1) There is only one vclock in a response, but at one time prior to
you requesting the key, the vclock of individual replicas were
divergent, which results in the siblings.
2) The ETag is related to the "vtag" but is not exactly the same.
Reading the riak_kv source gives us t
Riak doesn't have atomic updates. if_not_modified does not give you any
guarantees. The best way to handle simultaneous updates is to engineer your
scheme so that only one client makes concurrent updates and, in case of
conflict, any sibling will be good for you. Another option is to try to use
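The race being described can be illustrated with a toy model. This is not Riak code: the `Replica` class and its per-replica vclock counter are invented to show why a condition checked locally on each coordinator gives no cluster-wide guarantee.

```python
# Toy model: each replica checks if_not_modified against its own local
# vclock, so two concurrent conditional puts routed through different
# coordinators can both pass the check before either write propagates,
# leaving siblings instead of a single atomic winner.

class Replica:
    def __init__(self):
        self.vclock = 0
        self.values = ["base"]

    def put_if_not_modified(self, expected_vclock, value):
        if self.vclock != expected_vclock:
            return False          # condition failed on this replica
        self.values = [value]     # accept the write locally
        self.vclock += 1
        return True

a, b = Replica(), Replica()
# Both clients read vclock 0, then write concurrently via different coordinators.
ok1 = a.put_if_not_modified(0, "client-1")   # checked only on replica a
ok2 = b.put_if_not_modified(0, "client-2")   # checked only on replica b
# Both conditional puts succeed; a later read merges the values as siblings.
siblings = sorted(set(a.values + b.values))
```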
Hello,
Say I have N=3, R=2 and W=2, and two clients are simultaneously trying to
update the same object with if_not_modified=true. Is there a possible
scenario where both clients can succeed? If not and if at most one client
succeeds then setting if_not_modified=true would be a way to atomically l
On 28/12/2012, at 10:24 AM, Tom Lanyon wrote:
> We're doing some maintenance on a Riak 1.2.1 cluster; it had two nodes in the
> cluster (50% ring each), and I used 'cluster join' to add a third node
> (should now be ~33% ring pending each).
>
> Whilst it was transferring the ~33% of partitions