Hi,
I've been reading up on Riak Search and am very pleased to see the new
index repair command in 1.2. However, it seems that in order to use it you
must be able to somehow detect when there are inconsistencies in your
search indexes across vnodes. While it's likely the inconsistencies would
be s
You may also want to have a look at this post by Aphyr. There are a LOT of
caveats when trying to do this sort of thing.
http://aphyr.com/posts/254-burn-the-library
-Andrew
On Fri, Mar 22, 2013 at 9:02 PM, Sean Cribbs wrote:
> Datomic does something similar -- except that instead of updating
Hi Rahul,
That error message is misleading. In general it can be treated as a 500
error. My guess is that you do not have stanchion running. Stanchion
provides a serialization layer for creating users and buckets. See
http://docs.basho.com/riakcs/latest/tutorials/fast-track/Building-a-Local-Test-E
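For reference, the Stanchion endpoint is configured in the Riak CS app.config. A minimal sketch (setting names from Riak CS 1.x; the host/port values are assumptions for a local test setup):

```erlang
%% Riak CS app.config fragment (illustrative): point CS at Stanchion.
%% Every user/bucket creation is serialized through this endpoint.
{riak_cs, [
    {stanchion_ip, "127.0.0.1"},
    {stanchion_port, 8085}
]}
```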
44 AM, Rahul Bongirwar <
bongirwar.rahul...@gmail.com> wrote:
> Hi,
>
> All processes are running on my node (riak, riak-cs, stanchion) but I'm
> still getting the same error.
>
> I tried this on at least 3 different nodes but am facing the same problem.
>
> Thanks,
> Rahul
>
>
> creating a first user after all
> configuration is done, so I'm not able to create the admin user either.
>
> Thanks,
> Rahul
>
>
>
> On Thu, Jun 20, 2013 at 5:25 PM, Andrew Stone wrote:
>
>> Hi Rahul, I'm bringing this back on list in case anyone else has this
>> issue.
Hi Andre,
The blocks are going to be spread across some subset of those 100 servers.
Since Riak CS stores the chunks inside Riak, they are hashed based on
their primary key. Currently there is no way to co-locate chunks in Riak CS.
You can read more about how Riak manages storage here:
http://docs.basho.c
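To see why the chunks end up spread out, here is a toy sketch of hash-based placement, assuming SHA-1 over the bucket/key pair. The helper names (`block_key`, `partition_for`) and the key format are invented for illustration; Riak's real ring logic is more involved.

```python
# Toy illustration: each block gets its own key, and Riak hashes the
# bucket/key pair onto the ring, so blocks land on many partitions.
import hashlib

RING_SIZE = 64  # number of partitions (vnodes) in this toy ring

def block_key(file_uuid, block_num):
    # Riak CS derives per-block keys from the file's UUID plus a sequence
    # number; this exact format is an assumption, not CS's real encoding.
    return f"{file_uuid}:{block_num}"

def partition_for(bucket, key):
    # Riak hashes bucket/key with SHA-1 onto a 160-bit ring; the modulo
    # here stands in for the real partition lookup.
    h = hashlib.sha1(f"{bucket}/{key}".encode()).hexdigest()
    return int(h, 16) % RING_SIZE

parts = {partition_for("blocks", block_key("file-uuid", n)) for n in range(100)}
print(len(parts))  # many distinct partitions -> chunks are spread out
```

Because placement is purely hash-driven, adjacent blocks of one file have no locality, which is why co-location isn't possible.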
Hi Thomas,
Yes, you can change the n_val in the default bucket properties of the Riak
cluster. That's how CS determines how many replicas to store. However,
please keep in mind that reducing the replicas can reduce your availability.
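For illustration, the default bucket properties live in the riak_core section of app.config; a sketch (the n_val shown is an example, not a recommendation):

```erlang
%% app.config fragment (illustrative): default replica count for buckets.
{riak_core, [
    {default_bucket_props, [{n_val, 3}]}
]}
```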
-Andrew
On Wed, Aug 21, 2013 at 5:27 AM, Thomas Dunham wrote:
Hi Toby,
Can you try raising pb_backlog to 128 in your Riak app.config on each
node? It's likely those disconnect errors are left over from the stampede
of connections from the CS connection pool on startup. For one reason or
another the resets don't come through and the hanging disconnected s
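For reference, a sketch of that change (the section holding pb_backlog has moved between Riak versions; riak_api is assumed here):

```erlang
%% app.config fragment (illustrative): deepen the PB listen backlog so a
%% burst of connections from the CS pool at startup isn't refused.
{riak_api, [
    {pb_backlog, 128}
]}
```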
Good to hear. :)
On Wed, Sep 18, 2013 at 11:52 PM, Toby Corkindale <
toby.corkind...@strategicdata.com.au> wrote:
> Hi Andrew,
> Thanks for that -- so far things are looking stable after making that
> change.
>
> -T
>
>
> On 19/09/13 13:27, Andrew Stone wrote:
>
./configure --prefix=/home/dave && make && make install
On Fri, Oct 11, 2013 at 3:10 PM, Dave King wrote:
> That would be great if installing erlang didn't require sudo...
>
> Dave
>
>
>
> On Fri, Oct 11, 2013 at 11:50 AM, Jon Meredith wrote:
>
>> You should be able to build from source and
Hi Georgio,
There are many possible ways to do something like this. Riak CS in
particular chunks large files into immutable data blocks, and has manifests
pointing to those blocks to track versions of files. Manifests and blocks
are each stored in their own riak object. There are some tricks aroun
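As a toy sketch of the blocks-plus-manifest idea: a large file is split into fixed-size immutable blocks, each stored under its own key, with a manifest listing the block keys. All names and the manifest shape here are invented; Riak CS's real manifests carry much more metadata.

```python
# Toy block/manifest layout: immutable 1 MB blocks plus a manifest that
# points at them. A new version of a file gets a fresh manifest (and
# fresh blocks); old blocks are never mutated.
import uuid

BLOCK_SIZE = 1024 * 1024  # Riak CS uses 1 MB blocks by default

def store_file(kv, name, data):
    file_id = str(uuid.uuid4())
    block_keys = []
    for i in range(0, len(data), BLOCK_SIZE):
        key = f"{file_id}:{i // BLOCK_SIZE}"
        kv[key] = data[i:i + BLOCK_SIZE]   # immutable block
        block_keys.append(key)
    kv[f"manifest:{name}"] = {"id": file_id, "blocks": block_keys}
    return file_id

def read_file(kv, name):
    manifest = kv[f"manifest:{name}"]
    return b"".join(kv[k] for k in manifest["blocks"])

kv = {}
store_file(kv, "demo.bin", b"x" * (3 * BLOCK_SIZE + 100))
assert read_file(kv, "demo.bin") == b"x" * (3 * BLOCK_SIZE + 100)
```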
Think of an object with thousands of siblings. That's an object that has one
copy of the data for each sibling. That object could be on the order of
100s of megabytes. Every time the object is read off disk and returned to the
client, 100 MB is being transferred. Furthermore, leveldb must rewrite the
enti
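The arithmetic behind that figure, with assumed numbers (sibling count and per-copy size are illustrative, not measurements):

```python
# Back-of-the-envelope math for the sibling blow-up: every sibling
# carries a full copy of the value, so the object read off disk (and
# shipped to the client) grows linearly with sibling count.
sibling_count = 2000        # "thousands of siblings"
value_size = 100 * 1024     # 100 KB per copy (assumed)

object_size = sibling_count * value_size
print(object_size / (1024 ** 2))  # ~195 MB moved on every single read
```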
Oops. Didn't reply to the list. Sorry for the dupe Matt.
Hi Matt,
My guess is that this has to do with a fairly recent change to the cluster
join mechanism.
Try attaching to the erlang shell with "riak attach" and running the
following command:
riak_ensemble_manager:enable().
In the future a r
Hi Toby,
We've seen this scenario before. It occurs because Riak CS stores bucket
information in 2 places on disk:
1) Inside the user record (for bucket permissions)
2) Inside a global list of buckets, since each bucket must be unique
What has happened most likely is that the bucket is no lon
Hi Charles,
AFAIK we haven't ever tested Riak CS with the MapR connector. However, if
MapR works with S3, you should just have to change the IP to point to a
load balancer in front of your local Riak CS cluster. I'm unaware of how to
change that setting in MapR though. It seems like a question for
Thanks for all the work Seth!
On Sat, Oct 18, 2014 at 11:42 AM, Seth Thomas wrote:
> An update for the Riak CS Chef cookbook has been released, bringing
> support for Riak CS 1.5.1 along with updates to dependencies and some bug
> fixes.
>
> You can grab it from github [1] or the Chef Supermarket
Hi Jonathan,
Sorry for the late reply. It looks like riak_ensemble still thinks that
those old nodes are part of the cluster. Did you remove them with
'riak-admin cluster leave' ? If so they should have been removed from the
root ensemble also, and the machines shouldn't have actually left the
clu
you're spreading your cluster across North America I would
> suggest you reconsider. A Riak cluster is meant to be deployed in one data
> center, more specifically in one LAN. Connecting Riak nodes over a WAN
> introduces network latencies. Riak's approach to multi datacenter
Hi Stefan,
You need to configure Riak to listen on the right interface. You are trying
to hit it from 127.0.0.1, which is only available from the local machine.
If you set web_ip to 0.0.0.0 in app.config for riak_core it will listen on
all interfaces. Then you can try to hit it with a curl remotel
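A sketch of that app.config change (interface and port values illustrative; this applied to the riak_core section in Riak releases of that era):

```erlang
%% app.config fragment (illustrative): bind the HTTP interface to all
%% interfaces instead of loopback only.
{riak_core, [
    {web_ip, "0.0.0.0"},
    {web_port, 8098}
]}
```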
like nginx.
>
> -alexander
>
> On 2010-12-01, Andrew Stone wrote:
> > Hi Stefan,
> >
> > You need to configure Riak to listen on the right interface. You are
> trying
> > to hit it from 127.0.0.1 which is only available from the local machine.
> >
> >