Re: What ever happened to delete support for post-commit hooks?

2012-09-17 Thread Jon Meredith
That's odd. It should still be firing. Are you seeing any increase in the postcommit_fail stats? It may spew a lot of logging at you, but you could enable debug logging to see if it is being fired or not. lager:set_loglevel(lager_file_backend, "console.log", debug). To reset back to info lag
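
A minimal sketch of the two calls from an attached console (riak attach), assuming the default lager file backend and log name mentioned above; the reset call is assumed to be the same call with the info level:

    %% raise the console log to debug to watch the hook fire
    lager:set_loglevel(lager_file_backend, "console.log", debug).
    %% assumed reset: drop back to the default info level when done
    lager:set_loglevel(lager_file_backend, "console.log", info).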

Running dev setup based on "The Riak Fast Track", nodes crashing during re-add to index existing documents.

2012-09-17 Thread Ted Cooper
I'm running a 4-"node" cluster on one machine, riak-1.2.0. The configuration is very close to the default development environment setup, except I've turned on riak search in app.config for each node and added the indexing pre-commit hook and a schema (I've tested it on individual documents and it
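
For anyone reproducing this, a hedged sketch of enabling Search indexing on a bucket over HTTP; setting the search property to true installs the riak_search_kv_hook pre-commit hook. The bucket name "docs" and port 8098 are placeholders (the dev nodes in the Fast Track setup listen on their own ports):

    curl -X PUT -H "Content-Type: application/json" \
         -d '{"props":{"search":true}}' \
         http://127.0.0.1:8098/buckets/docs/props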

Re: Specified w/dw/pw values invalid for bucket n value of 1

2012-09-17 Thread Mark Phillips
Hi Ingo, Sorry for the holdup here. Riak shouldn't be throwing this error if all your R and W values are set to "1". Are you running Riak 1.2? Mark On Mon, Sep 17, 2012 at 3:10 AM, Ingo Rockel wrote: > Anyone? > > On 30.08.2012 18:34, Ingo Rockel wrote: > >> Hi List, >> >> I'm trying to set
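
A quick way to confirm what the bucket actually has for n_val and the quorum settings is to read the props back over HTTP (sketch; the bucket name "test" and port 8098 are placeholders):

    curl http://127.0.0.1:8098/buckets/test/props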

What ever happened to delete support for post-commit hooks?

2012-09-17 Thread David Goehrig
Many moons ago (circa 0.14.0), when you did a curl -X DELETE to a riak bucket with a post-commit hook, it would be invoked and you could use the X-Riak-Deleted tag to process the file. I have just such a post-commit hook running on a 0.14.0 build. We recently looked at upgrading to 1.2, but disc
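
For context, a rough Erlang sketch of the kind of hook being described: a post-commit function that looks for the X-Riak-Deleted flag in the object metadata and reacts to the tombstone. handle_delete/2 is a hypothetical callback, not part of Riak:

    %% A post-commit hook receives the riak_object that was just written.
    postcommit(Object) ->
        Bucket = riak_object:bucket(Object),
        Key    = riak_object:key(Object),
        Meta   = riak_object:get_metadata(Object),
        case dict:find(<<"X-Riak-Deleted">>, Meta) of
            {ok, _} -> handle_delete(Bucket, Key);  %% this write was a delete tombstone
            error   -> ok                           %% ordinary put, nothing to do
        end.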

Re: Upgrading from 1.1.2 to 1.2.0

2012-09-17 Thread Reid Draper
On Sep 17, 2012, at 1:00 PM, Kresten Krab Thorup wrote: > It looks like your m/r request is missing a Content-Type header (probably > should be application/json). Perhaps it is a new requirement/validator in > 1.2, perhaps the old client library is not passing it along. Kresten is correct. R
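
As an illustration of the fix, a curl sketch of a MapReduce request that sends the Content-Type header explicitly; the bucket name is a placeholder and Riak.mapValuesJson is one of the built-in JavaScript map functions:

    curl -X POST -H "Content-Type: application/json" \
         http://127.0.0.1:8098/mapred \
         -d '{"inputs":"mybucket",
              "query":[{"map":{"language":"javascript","name":"Riak.mapValuesJson"}}]}'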

Re: Upgrading from 1.1.2 to 1.2.0

2012-09-17 Thread Kresten Krab Thorup
It looks like your m/r request is missing a Content-Type header (probably should be application/json). Perhaps it is a new requirement/validator in 1.2, perhaps the old client library is not passing it along. Kresten Trifork On 17/09/2012, at 18.25, "Colin Alston" wrote: > Hi > > I'm havin

Re: 404 Error in Live Cluster

2012-09-17 Thread Mark Phillips
Hi Praveen, There are a few things that could be contributing to the 404s. The most likely issue has to do with your "n" and "r" values. With an "r" of 1, and a laggy node in your cluster, you could have a situation that resembled a netsplit. (At the moment, Riak does better with downed nodes th
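
One way to test whether the read quorum is the culprit: repeat a failing fetch with r raised on the request itself (sketch; bucket, key, and port are placeholders, and n_val is 2 as in Praveen's config):

    # fetch with the bucket default (r=1)
    curl -v http://127.0.0.1:8098/buckets/mybucket/keys/somekey
    # same fetch, but require both replicas to respond
    curl -v "http://127.0.0.1:8098/buckets/mybucket/keys/somekey?r=2"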

Re: Error--Cloning into 'leveldb'...ERROR: Command ['get-deps'] failed!make: *** [deps] Error 1

2012-09-17 Thread Jared Morrow
Sxin, Your issue is a known one when building from source on a system with no access to Github. I documented the issue and fix on our Wiki here http://wiki.basho.com/Installing-Riak-from-Source.html#Installation-on-Closed-Networks The simple summary is that you will need to distribute one other
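
The workaround is roughly: fetch the dependencies on a machine that can reach GitHub and carry them over; a hedged sketch (paths and archive name are illustrative, the wiki page above has the authoritative steps):

    # on a machine with GitHub access, from the riak source directory
    make deps
    tar czf riak-deps.tar.gz deps
    # copy riak-deps.tar.gz into the same source tree on the closed network, then
    tar xzf riak-deps.tar.gz
    make rel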

Upgrading from 1.1.2 to 1.2.0

2012-09-17 Thread Colin Alston
Hi, I'm having serious issues trying to upgrade Riak from 1.1.2 to 1.2.0_1 on Ubuntu. To upgrade I stop a cluster node, purge the old package, and install 1.2.0_1, but MapReduce fails on 1.2. Exception: Error running MapReduce operation. Status: 500 Internal Server Error. T
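
For reference, the upgrade procedure described above as a shell sketch (the package filename is an assumption; note that purging, as opposed to upgrading in place, also removes the old configuration files):

    riak stop
    dpkg --purge riak                 # removes the 1.1.2 package and its conffiles
    dpkg -i riak_1.2.0-1_amd64.deb    # assumed package filename
    riak start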

Re: {error,insufficient_vnodes_available}

2012-09-17 Thread Paul Scheltema
Thank you for the reply, adding back the other nodes did (after a while) fix the problem, even though the ring stated it was complete some 10 minutes before the actual query worked again. 2012/9/17 Reid Draper > Paul, > > It looks like you're running a 3-node cluster. If two of the nodes fail, > you

Re: {error,insufficient_vnodes_available}

2012-09-17 Thread Reid Draper
On Sep 17, 2012, at 10:34 AM, Kresten Krab Thorup wrote: > As I understand Paul's situation, the 3 nodes are up and running again. > Should that not be enough to avoid the "insufficient vnodes" error? Ah, yes. I've misunderstood. Serves me right for responding before I've had my morning coff

Re: {error,insufficient_vnodes_available}

2012-09-17 Thread Kresten Krab Thorup
As I understand Paul's situation, the 3 nodes are up and running again. Should that not be enough to avoid the "insufficient vnodes" error? After a node goes down, I can see that it could make matters better to run "riak repair" on the involved partitions, but 2i should be able to run again when su
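
Before retrying the coverage query, it may help to confirm that every member is back and the ring has settled; a sketch using the riak-admin commands available in 1.2:

    riak-admin member-status   # all nodes should be listed as valid
    riak-admin ring-status     # should report the ring as ready with no pending changes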

Re: {error,insufficient_vnodes_available}

2012-09-17 Thread Reid Draper
Paul, It looks like you're running a 3-node cluster. If two of the nodes fail, you'll likely not be able to run `coverage` queries like 2i and list-keys. If you need to be able to sustain losing 2 nodes and still successfully run 2i, I'd suggest running at least a 5 node cluster. Reid On Sep
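
For completeness, an example of the kind of coverage query that fails with insufficient_vnodes_available when too few vnodes are reachable: a 2i lookup over HTTP (bucket, index name, and value are placeholders):

    curl "http://127.0.0.1:8098/buckets/mybucket/index/email_bin/foo@example.com"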

Re: 404 Error in Live Cluster

2012-09-17 Thread Praveen Baratam
The errors are now more frequent: up to 1 failed request in 10. This is breaking everything. On Mon, Sep 17, 2012 at 3:04 PM, Praveen Baratam wrote: > Here are some more details about the cluster. > > {ring_creation_size, 1024}, > > {default_bucket_props, [ > {n_val, 2}, > {r, 1}, > {w

Re: M/R query with riak-defined keys

2012-09-17 Thread Antoine
Hi, Could anyone provide a basic working example of a map/reduce combining "starts_with" and "tokenize", so that I can see whether my syntax is wrong or something else brings me the famous "{"error":"map_reduce_error"}"? I still can't figure out how to make it work: https://gist.github.com/36
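
Without seeing the gist, here is a hedged sketch of a MapReduce request that chains the tokenize and starts_with key filters (bucket, delimiter, token position, and prefix are placeholders):

    curl -X POST -H "Content-Type: application/json" http://127.0.0.1:8098/mapred \
         -d '{"inputs":{"bucket":"mybucket",
                        "key_filters":[["tokenize","-",1],
                                       ["starts_with","user"]]},
              "query":[{"map":{"language":"javascript","name":"Riak.mapValuesJson"}}]}'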

Re: Specified w/dw/pw values invalid for bucket n value of 1

2012-09-17 Thread Ingo Rockel
Anyone? On 30.08.2012 18:34, Ingo Rockel wrote: Hi List, I'm trying to set the n_val to 1 for my single-node test server but always fail with the following error: Specified w/dw/pw values invalid for bucket n value of 1. This is my bucket configuration: {"props":{"allow_mult":false,"basi
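
A likely fix, as a hedged sketch: when dropping n_val to 1, lower the write/durable/primary quorum values along with it so none of them exceeds n (bucket name and port are placeholders):

    curl -X PUT -H "Content-Type: application/json" \
         -d '{"props":{"n_val":1,"r":1,"w":1,"dw":1,"pw":1,"pr":1,"rw":1}}' \
         http://127.0.0.1:8098/buckets/mybucket/props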

{error,insufficient_vnodes_available}

2012-09-17 Thread Paul Scheltema
Recently 2 of the VMs running riak crashed (probably not due to riak). When I now run "curl $riak/buckets/$2/keys?keys=true" I get the following error message: 500 Internal Server Error. The server encountered an error while processing this request: {error,{error,{badmatch,{erro

Re: 404 Error in Live Cluster

2012-09-17 Thread Praveen Baratam
Here are some more details about the cluster. {ring_creation_size, 1024}, {default_bucket_props, [ {n_val, 2}, {r, 1}, {w, 1}, {allow_mult, false}, {last_write_wins, false}, {precommit, []}, {postcommit, []}, {chash_keyfun, {riak_core_util, chash_std_keyfun}}, {linkfun, {modfun, riak_kv_