That's odd. It should still be firing.
Are you seeing any increase in the postcommit_fail stats? It may spew a
lot of logging at you, but you could enable debug logging to see whether
the hook is being fired:
lager:set_loglevel(lager_file_backend, "console.log", debug).
To reset back to info:
lager:set_loglevel(lager_file_backend, "console.log", info).
I'm running a 4-"node" cluster on one machine, riak-1.2.0. The
configuration is very close to the default development environment setup,
except I've turned on riak search in app.config for each node and added the
indexing pre-commit hook and a schema (I've tested it on individual
documents and it
Hi Ingo,
Sorry for the holdup here.
Riak shouldn't be throwing this error if all your R and W values are
set to "1". Are you running Riak 1.2?
Mark
On Mon, Sep 17, 2012 at 3:10 AM, Ingo Rockel
wrote:
> Anyone?
>
> Am 30.08.2012 18:34, schrieb Ingo Rockel:
>
>> Hi List,
>>
>> I'm trying to set
Many moons ago (circa 0.14.0) when you did a curl -X DELETE to a riak
bucket with a post commit hook it would be invoked and you could use the
X-Riak-Deleted tag to process the file. I have just such a post commit
hook running on a 0.14.0 build.
We recently looked at upgrading to 1.2, but disc
On Sep 17, 2012, at 1:00 PM, Kresten Krab Thorup wrote:
> It looks like your m/r request is missing a Content-Type header (probably
> should be application/json). Perhaps it is a new requirement/validator in
> 1.2, perhaps the old client library is not passing it along.
Kresten is correct. R
It looks like your m/r request is missing a Content-Type header (probably
should be application/json). Perhaps it is a new requirement/validator in 1.2,
perhaps the old client library is not passing it along.
Kresten
Trifork
On 17/09/2012, at 18.25, "Colin Alston"
wrote:
> Hi
>
> I'm havin
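A minimal sketch of the fix Kresten describes, assuming the stock HTTP interface on port 8098; the bucket name and map function below are illustrative, and the request is only built, not sent:

```python
import json
import urllib.request

# Sketch only: a MapReduce job POSTed to /mapred must declare
# Content-Type: application/json, which Riak 1.2 validates.
# "mybucket" is an invented bucket name; Riak.mapValuesJson is a
# built-in JavaScript map function.
job = {
    "inputs": "mybucket",
    "query": [
        {"map": {"language": "javascript",
                 "name": "Riak.mapValuesJson",
                 "keep": True}}
    ],
}

req = urllib.request.Request(
    "http://127.0.0.1:8098/mapred",
    data=json.dumps(job).encode("utf-8"),
    headers={"Content-Type": "application/json"},  # the header 1.2 checks for
)
print(req.get_header("Content-type"))
```

If a client library omits that header, adding it at the transport layer like this is usually all that's needed.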
Hi Praveen,
There are a few things that could be contributing to the 404s.
The most likely issue has to do with your "n" and "r" values. With an
"r" of 1, and a laggy node in your cluster, you could have a situation
that resembled a netsplit. (At the moment, Riak does better with
downed nodes th
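A toy model of that situation (not Riak's actual read path; replica contents and ordering here are invented) shows how r=1 surfaces not-found while a replica lags:

```python
# Toy model, not Riak's implementation: with n_val=2 and r=1, the read
# returns as soon as one replica answers. If the replica that answers
# first is lagging and hasn't seen the write, the client gets not_found
# even though the other replica holds the value.
def read(replica_answers, r):
    consulted = replica_answers[:r]          # first r replicas to respond
    found = [v for v in consulted if v is not None]
    return found[0] if found else "not_found"

in_sync = ["value", "value"]
lagging_first = [None, "value"]              # lagging node responds first

print(read(in_sync, r=1))        # value
print(read(lagging_first, r=1))  # not_found
print(read(lagging_first, r=2))  # value: a higher r masks the lag
```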
Sxin,
Your issue is a known one when building from source on a system with no
access to Github. I documented the issue and fix on our Wiki here
http://wiki.basho.com/Installing-Riak-from-Source.html#Installation-on-Closed-Networks
The simple summary is that you will need to distribute one other
Hi
I'm having serious issues trying to upgrade Riak from 1.1.2 to 1.2.0_1
on Ubuntu.
To upgrade I stop a cluster node, purge the old package and install
1.2.0_1 but MapReduce fails on 1.2.
Exception: Error running MapReduce operation. Status: 500 :
500 Internal Server Error
Internal Server Error
T
Thank you for the reply,
adding back the other nodes did (after a while) fix the problem,
even though the ring stated it was complete some 10 minutes before the
actual query worked again.
2012/9/17 Reid Draper
> Paul,
>
> It looks like you're running a 3-node cluster. If two of the nodes fail,
> you
On Sep 17, 2012, at 10:34 AM, Kresten Krab Thorup wrote:
> As I understand Paul's situation, the 3 nodes are up and running again.
> Should that not be enough to avoid the "insufficient vnodes" error?
Ah, yes. I've misunderstood. Serves me right for responding before I've had my
morning coff
As I understand Paul's situation, the 3 nodes are up and running again. Should
that not be enough to avoid the "insufficient vnodes" error?
After a node goes down, I can see that it could be handled better by
running "riak repair" on the involved partitions, but 2i should be able to
run again when su
Paul,
It looks like you're running a 3-node cluster. If two of the nodes fail,
you'll likely not be able to run `coverage` queries like 2i and list-keys.
If you need to be able to sustain losing 2 nodes and still successfully run
2i, I'd suggest running at least a 5-node cluster.
Reid
On Sep
The errors are now more frequent: up to 1 failed request in 10.
This is breaking everything.
On Mon, Sep 17, 2012 at 3:04 PM, Praveen Baratam
wrote:
> Here are some more details about the cluster.
>
> {ring_creation_size, 1024},
>
> {default_bucket_props, [
> {n_val, 2},
> {r, 1},
> {w
Hi,
Could anyone provide a basic working example using a map/reduce
combining "starts_with" and "tokenize" so that I can see if my syntax is
wrong or if something else brings me the famous
"{"error":"map_reduce_error"}"
still can't figure out how to make it work :
https://gist.github.com/36
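A hedged sketch of how the two key filters chain together in a MapReduce job's inputs; the bucket name, token position, and implied key layout (e.g. keys like "2012-09-17-foo") are invented for illustration:

```python
import json

# "tokenize" and "starts_with" are Riak key filters, chained in order
# inside the "key_filters" list of the MapReduce inputs. "logs", the
# separator, and the token index are illustrative assumptions.
job = {
    "inputs": {
        "bucket": "logs",
        "key_filters": [
            ["tokenize", "-", 4],     # split the key on "-" and keep token 4
            ["starts_with", "f"],     # pass keys whose token starts with "f"
        ],
    },
    "query": [{"map": {"language": "javascript",
                       "name": "Riak.mapValuesJson"}}],
}

print(json.dumps(job["inputs"]["key_filters"]))
```

The whole job would be POSTed to /mapred with Content-Type: application/json; a filter in the wrong shape (e.g. a bare string instead of a list) is one common source of map_reduce_error.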
Anyone?
Am 30.08.2012 18:34, schrieb Ingo Rockel:
Hi List,
I'm trying to set the n-val to 1 for my single-node test server but do
always fail with the following error:
Specified w/dw/pw values invalid for bucket n value of 1
This is my bucket configuration:
{"props":{"allow_mult":false,"basi
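A toy sketch (not Riak's code) of the validation behind that error, assuming symbolic quora resolve the documented way ("one" = 1, "quorum" = n/2 + 1, "all" = n):

```python
# Toy sketch, not Riak's implementation: a bucket's w/dw/pw (and r/pr)
# cannot require more replicas than n_val provides. Symbolic values
# resolve first: "one" -> 1, "quorum" -> n // 2 + 1, "all" -> n.
def resolve(q, n):
    return {"one": 1, "quorum": n // 2 + 1, "all": n}.get(q, q)

def props_valid(n_val, **quora):
    return all(resolve(v, n_val) <= n_val for v in quora.values())

# symbolic defaults still work when n_val drops to 1 (quorum of 1 is 1)
print(props_valid(1, w="quorum", dw="quorum", pw=0))   # True
# explicit numbers carried over from n_val=3 do not
print(props_valid(1, w=2, dw=2, pw=0))                 # False
```

This is why setting explicit numeric w/dw/pw alongside n_val=1 gets rejected, while leaving them symbolic does not.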
recently 2 of the vm's running riak crashed. (probably not due to riak)
When i now run "curl $riak/buckets/$2/keys?keys=true" i get the following
error message:
500 Internal Server Error
Internal Server Error
The server encountered an error while processing this request:
{error,{error,{badmatch,{erro
Here are some more details about the cluster.
{ring_creation_size, 1024},
{default_bucket_props, [
{n_val, 2},
{r, 1},
{w, 1},
{allow_mult, false},
{last_write_wins, false},
{precommit, []},
{postcommit, []},
{chash_keyfun, {riak_core_util, chash_std_keyfun}},
{linkfun, {modfun, riak_kv_