Is there any way to analyze Riak objects to make sure there aren't duplicates
or near-duplicates?
Also, I want to switch to riak_kv_eleveldb_backend for secondary indexes; I was
wondering if there are any disadvantages to this backend?
Thx
Awesome, thanks for the quick turnaround Brian! We'll test it out and let you
know how it goes.
Cheers,
Will
From: riak-users-boun...@lists.basho.com [riak-users-boun...@lists.basho.com]
on behalf of Brian Roach [ro...@basho.com]
Sent: Thursday, March
Hey there,
I'm playing with a two-node Riak 1.1.1 cluster, but can't figure out how to
make Riak happy with only one node (the other one having failed).
My use case is the following:
- HTML pages get stored in the Riak cluster (it's used as a page cache)
- I'm using secondary indexes to tag those docum
Cedric,
The problem you are seeing is that the list of vnodes does not spread perfectly
across physical nodes. So with N=2 on only a 2-node cluster, you have no
assurance that the two copies of your data will end up on two vnodes owned
by different physical nodes. So if you have data X
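The failure mode above can be illustrated with a toy model (this is a sketch, not Riak's actual claim algorithm; the ring layout, `preference_list` helper, and partition count are all hypothetical): when adjacent partitions on the ring are owned by the same physical node, a key whose preference list starts there gets both replicas on one machine.

```python
import hashlib

# Toy model of a Riak ring: 8 partitions claimed by 2 physical nodes.
# A perfectly alternating claim would be A,B,A,B,... but after joins and
# leaves the claim can contain adjacent partitions with the same owner.
ring = ["A", "B", "A", "A", "B", "B", "A", "B"]  # hypothetical ownership

def preference_list(key, n_val=2):
    """Hash the key onto the ring, then take the next n_val partitions."""
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    start = h % len(ring)
    return [ring[(start + i) % len(ring)] for i in range(n_val)]

# Keys that land where two adjacent partitions share an owner end up with
# both copies on one physical node -- losing that node loses the data.
for key in ["page-1", "page-2", "page-3", "page-4"]:
    owners = preference_list(key)
    print(key, owners, "safe" if len(set(owners)) == 2 else "BOTH ON ONE NODE")
```

With more physical nodes than N, the claim algorithm can keep adjacent partitions on distinct machines, which is why adding nodes (rather than raising N) is the usual fix.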
Hi
Thanks for your reply. I was expecting something along those lines.
My production setup doesn't justify more than two physical nodes for this
caching feature.
So I guess my options with Riak are:
1. keep N=2 but add two more physical nodes
2. keep two physical nodes, but set N=3 and hope that
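For option 2, N is a per-bucket property, so it can be changed over HTTP without touching app.config. A sketch of that config change (the bucket name "page_cache" and the host/port are illustrative; adjust for your node):

```shell
# Hypothetical example: raise n_val on the cache bucket via Riak's HTTP API.
curl -X PUT http://127.0.0.1:8098/buckets/page_cache/props \
  -H "Content-Type: application/json" \
  -d '{"props":{"n_val":3}}'
```

Note that changing n_val on a bucket that already holds data does not retroactively re-replicate existing objects, so this is safest on a fresh bucket.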
On Fri, Mar 30, 2012 at 6:17 PM, idmartin wrote:
> Is there any way to analyze Riak objects to make sure there aren't duplicates
> or near-duplicates?
An M/R job? An identity map, followed by a reduce that does the
comparison? Yeah, it won't be super-efficient, but then a comparison
like that never is.
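The identity-map-plus-comparing-reduce idea can be sketched in plain Python (a real job would run inside Riak as a JavaScript or Erlang MapReduce; the function names here are hypothetical, and this version only catches exact duplicates by content digest, not near-duplicates, which would need something like simhash):

```python
import hashlib
from collections import defaultdict

def identity_map(objects):
    """Identity map phase: emit (key, value) pairs unchanged."""
    return [(key, value) for key, value in objects]

def reduce_find_duplicates(pairs):
    """Reduce phase: bucket keys by a digest of their value.

    Any bucket holding more than one key is a set of exact duplicates.
    """
    buckets = defaultdict(list)
    for key, value in pairs:
        digest = hashlib.sha1(value.encode()).hexdigest()
        buckets[digest].append(key)
    return [keys for keys in buckets.values() if len(keys) > 1]

objects = [("a", "hello"), ("b", "world"), ("c", "hello")]
print(reduce_find_duplicates(identity_map(objects)))  # → [['a', 'c']]
```

Hashing in the map phase instead (emitting digests rather than full values) would keep the data shipped to the reduce phase small, which matters when the objects are whole HTML pages.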
Hey riak-users,
As part of our commitment to improving our client libraries, we have just
released riak-python-client, version 1.4.0, to PyPI (
http://pypi.python.org/pypi/riak/1.4.0 ) and tagged it on GitHub (
http://git.io/-i8KvQ ). This version includes some enhancements and
bugfixes from Basho
Hi list,
I spent some time looking at this issue today. I suspect it is due to all
the JavaScript VMs inside Riak being busy with many parallel MapReduce
jobs.
If you're seeing this issue please check for this message in the
console.log file
16:40:50.037 [notice] JS call failed: All VMs are b
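If that is the cause, one mitigation in Riak 1.x is to enlarge the JavaScript VM pools in the riak_kv section of app.config. A sketch of that change (the counts below are illustrative, not recommendations; tune to your MapReduce concurrency and restart the node afterwards):

```erlang
%% app.config -- raise the JS VM pools if "All VMs are busy" appears
%% under parallel MapReduce load.
{riak_kv, [
    {map_js_vm_count, 24},     %% VMs available to map phases
    {reduce_js_vm_count, 18},  %% VMs available to reduce phases
    {hook_js_vm_count, 2}      %% VMs available to pre/post-commit hooks
]}
```

Each VM costs memory, so raising these numbers trades RAM for MapReduce throughput; moving hot phases from JavaScript to Erlang avoids the VM pool entirely.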