Pawel,
I think you've wandered into a real disconnect between how we communicate Riak
replication, and how it actually works.
Although we say that N="nodes to replicate to", in reality N="vnodes
replicated to": every attempt is made to place them on different nodes, but
there is no guarantee. This is wh
There's no general best practice for keeping denormalized data in sync, beyond
the obvious case, which is to update all values through whatever client you use
to update one. If your number of keys is few, this is not going to be a hard
hit on your updates. If you have an unbounded number of key
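As a sketch of that "update every copy through one code path" approach (a plain dict stands in for the Riak client here, and all key names are made up):

```python
# Sketch: keep denormalized copies in sync by fanning out every write.
# A plain dict stands in for a Riak client; key names are illustrative.
def put_profile(store, user_id, profile, denormalized_keys):
    store["users/" + user_id] = profile       # canonical copy
    for key in denormalized_keys:             # every duplicate gets the update
        store[key] = profile

store = {}
put_profile(store, "42", {"name": "Pawel"}, ["feed/42", "search/42"])
```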
Post-commit has no javascript option, only pre-commit.
> -f
>
> From: Eric Redmond [mailto:eredm...@basho.com]
> Sent: Thursday, November 29, 2012 11:17 AM
> To: Felix Terkhorn
> Cc: riak-users@lists.basho.com
> Subject: Re: Best practice -- duplicating and syncing objects
>
You're pinging the protocol buffer port 8091. HTTP is on another port, check
your app.config setting.
Eric
On Dec 12, 2012, at 6:23 AM, Pablo Vieytes wrote:
> Hi,
> I'm new with Riak. I'm trying to use riak-erlang-client but I have some
> problems.
>
> I can connect with the browser to local
No, since you can't guarantee that your nodes' clocks are perfectly in sync.
Last write wins is only suitable if you don't care all that much about
causality, and just want easy resolution.
On Dec 14, 2012, at 12:40 PM, "Vergara, Jeaneth Aguilar"
wrote:
> If we have no siblings (last write wi
Sadly it looks like you're right. There's a lot of focus on the upcoming
release, but we should certainly run through those PRs soon.
Thanks for the heads up.
Eric
On Dec 16, 2012, at 1:15 PM, Elias Levy wrote:
> No changes in a year and pending pulls up to a year old.
>
> Elias Levy
It's not documented well, but hopefully these more specific steps will help.
https://github.com/basho/basho_docs/issues/28
Eric
On Dec 28, 2012, at 12:57 PM, rkevinbur...@charter.net wrote:
> I am trying to get a cluster going and I have successfully renamed all of the
> listening addresses an
Those are all riak-admin commands (e.g. riak-admin down) to run on a
still-connected node, pointing to the node you need to change.
Note that these instructions are for renaming a node at a time, not every node
on a ring at once.
Eric
On Dec 28, 2012, at 6:55 PM, Eric Redmond wrote:
> It
Hurray! We love doc contributions!
On Feb 8, 2013, at 10:38 AM, Mark Phillips wrote:
> Hi Deepak,
>
> On Fri, Feb 8, 2013 at 3:15 AM, Deepak Balasubramanyam
> wrote:
>> Hi folks,
>>
>> After trying a couple of configurations, I'd recommend using a VPC on EC2. I
>> decided to share my experien
You are trying to start a new Erlang node, but another node with the same name
is already running.
Make sure no other Riak beam processes are running.
On Mar 20, 2013, at 2:41 PM, Kevin Burton wrote:
> Now I cannot get it going again. I seem to always get the error:
>
> 16:38:10.032 [info] Protocol: "in
Correction, perhaps the node is not running, or the vm.args `-name` does not
match.
On Mar 20, 2013, at 4:00 PM, Eric Redmond wrote:
> You are trying to start a new Erlang node, but another node with the same
> name is already running.
>
> Make sure no other riak beam processes.
One practical matter to also keep in mind is the number of nodes you must scale
to and by.
The recommended minimum nodes when running Riak in production is five servers.
Once you start with that minimum, scaling is a matter of adding single nodes as
you need them, six to seven to eight and so o
On Apr 10, 2013, at 2:26 PM, Tom Zeng wrote:
> Hi list,
>
> We have a production installation with only 3 nodes and running on 1.2.1.
> I'd appreciate to get some facts to convince IT to increase the number of
> nodes to 7 and upgrade to 1.3. I heard people from Basho mentioned ideally 7
st
> Riak DC meetup. not as the minimal but for better performance, when I was
> chatting with a couple of Basho devs about performance benchmarking, and
> about Riak is quite a bit slower on single node against Mongo.
>
>
> On Wed, Apr 10, 2013 at 5:56 PM, Eric Redmond wrote
This commonly happens if a node name or IP (vm.args -name) has changed without
notifying the ring. Either use the riak-admin cluster replace command, or
delete the ring directory and rejoin the cluster.
Note this isn't the only reason for missing capabilities, but a very common one.
Rather than geohashing, you might consider Yokozuna. It has built-in geo
searches.
Eric
On May 19, 2013, at 11:47 AM, Ryan Chazen wrote:
> Hi
>
> I'm looking at Riak as a solution for handling this type of data:
>
> A map is divided into sectors, and on the map there are many objects with
>
t respond to
this announcement with issues, I may miss it.
Thanks to many people who have helped, and continue to help keep me honest,
especially:
John Daily
Bob Ippolito
Ricardo Tomé Gonçalves
Macneil Shonle
And everyone else who submitted an issue, patch, or constructive comments.
Thanks,
Eric R
You first wrote `subject_id_int`, then searched by `subject_id_txt`.
Eric
On Jun 3, 2013, at 6:55 AM, Kartik Thakore wrote:
> Hello,
>
> So I have some data here:
> http://aimed.cc:8098/riak/nca_nca_audiometric_data_docs/1MxuOAxpyZ4NCyVW4QqxPeDbjTr
>
> http://aimed.cc:8098/riak/rekon/go#/buck
Alexander,
The simplest answer is that we never recommend running Riak on one node. The
recommended minimum is 5, but you could possibly get away with 3 (the default
repl value).
There is a blog post about this from last year, explaining why:
http://basho.com/why-your-riak-cluster-should-have-a
You need to delete the data/ring directory, then try again.
You need to make the changes *before* starting riak, not after. If you make any
changes to the system, the configurations that the ring first builds will no
longer be true.
Eric
On Jun 13, 2013, at 6:32 AM, Sanjeev Neelarapu
wrote:
The short answer is: not without writing some code.
The longer answer is:
To search PDFs in RiakSearch you must write your own extractor[1], which
requires writing some Erlang, and use some PDF extraction library. Or if you
already have the text handy, manually index the data yourself using upd
That's correct. The XML extractor nests by element name, separating elements by
an underscore.
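That naming scheme can be illustrated with a small sketch (this mimics the described behavior in Python; it is not Yokozuna's actual extractor code):

```python
# Sketch of the XML extractor's field naming: nested element names
# joined with underscores. Mimics the described scheme only.
import xml.etree.ElementTree as ET

def flatten(elem, prefix=""):
    name = prefix + elem.tag
    fields = {}
    for child in elem:
        fields.update(flatten(child, name + "_"))
    if len(elem) == 0 and elem.text:          # leaf element with text
        fields[name] = elem.text
    return fields

doc = ET.fromstring("<person><name>Bob</name><age>34</age></person>")
print(flatten(doc))   # {'person_name': 'Bob', 'person_age': '34'}
```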
Eric
On Jul 17, 2013, at 12:46 PM, Dave Martorana wrote:
> Hi,
>
> I realize I may be way off-base, but I noticed the following slide in Ryan’s
> recent Ricon East talk on Yokozuna:
>
> http://cl.
Dave,
Your initial line was correct. Yokozuna is not yet compatible with 1.4.
Eric
On Jul 15, 2013, at 1:00 PM, Dave Martorana wrote:
> Hi everyone. First post, if I leave anything out just let me know.
>
> I have been using vagrant in testing Yokozuna with 1.3.0 (the official 0.7.0
> “relea
Currently, Yokozuna has some slight differences from stock Riak. Don't follow
the public docs instructions. Follow these instead:
https://github.com/basho/yokozuna/blob/master/docs/INSTALL.md
Yokozuna requires R15B02.
Eric
On Jul 26, 2013, at 1:52 AM, Erik Andersen wrote:
> Hi!
>
> I want
Yokozuna returns keys and optionally matching fields. If you use highlighting,
you can get a portion of the value, but no, it still returns keys, just like
normal Solr (because it is normal solr).
YZ does support pagination in the way that you can skip and limit, but it
doesn't have a cursor or
It's hard to answer that, because both configurations are below Riak's recommended specifications. You should have at least 5 servers, and they should have at least 4GB RAM each. The minimums are for different reasons. The minimum RAM is clear: keeping things like keys resident reduces the chances o
The answer is in the output. You need to set your ulimit higher. Try
something like 4.
Eric
On Aug 21, 2013 9:30 AM, "Abdul Hamed" wrote:
> hello ,
> i am Trying to install Riak CS on Ubuntu 12.04 using this link
> http://docs.basho.com/riakcs/latest/tutorials/quick-start-riak-cs/
Fwiw, that link is old, and should have redirected. The new perf tuning doc
is here.
http://docs.basho.com/riak/latest/ops/tuning/linux/
Eric
On Sep 15, 2013 10:56 AM, "Evan Vigil-McClanahan"
wrote:
> Riak is no longer built or tested on 32 bit machines, so that could
> potentially be a problem
On Oct 6, 2013 4:35 PM, "Alex Rice" wrote:
>
> Hi all, unable to find these in the Riak docs
>
> 1) Can someone point me at directions for cleanup of orphaned Links
> after an object is Deleted from Riak. Post-commit hooks or something?
> I really hope I dont have to learn Erlang just to clean up
For building json you should also check out a tool like mochijson2.
On Oct 17, 2013 6:51 AM, "Daniil Churikov" wrote:
> {ok, Worker} = riakc_pb_socket:start_link("my_riak_node_1", 8087),
> Obj = riakc_obj:new(<<"my_bucket">>, <<"my_key">>,
> <<"{\"key\":\"\val\"}">>,
> <<"application/json">>),
>
Apologies that it's unclear, and I'll update the docs to correct this.
http://docs.basho.com/riak/latest/ops/advanced/install-custom-code/
When you install custom code, you must install that code on every node.
Eric
On Oct 17, 2013, at 9:17 AM, Tristan Foureur wrote:
> Hi,
>
> My question is
How many nodes are you running? You should aim for around 8-16 vnodes per
server (must be a power of 2). So if you're running 5 nodes, you should be fine
with 4GB since it'll be approx 12 vnodes per. If you're only running on 1
server, you'll be running 64 vnodes on that single server (which is
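The arithmetic behind that advice is a quick check (64 is the default ring size):

```python
# vnodes per physical node = ring_size / node_count
ring_size = 64                      # default ring size
print(ring_size / 5)   # 12.8 -> roughly 12 vnodes per node on 5 nodes
print(ring_size / 1)   # 64.0 -> all vnodes land on a single server
```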
That's pretty much it, with a small correction: there's a 1-1 mapping between a
Solr shard and Riak node (not vnode). Yokozuna attaches a partition number to
each index document which specifies its Riak object's vnode. Then at query
time, partitions are specified to filter out replicated values
The new sets aren't sorted, they're just proper sets.
Riak isn't fundamentally changing from a key value store, the values of the new
data types are just convergent. So you still retrieve the whole value at a
time, no partial values (eg. subsets).
Eric
On Nov 15, 2013, at 6:39 PM, bernie wro
Kartik,
Your confusion is understandable, since our docs are not up to date yet (and
they won't be until closer to the final 2.0 release date). Because of this, the
docs you're reading correspond to the old version of RiakSearch, and not the
new Search (codenamed yokozuna). Thus, you cannot rel
That is not a valid solr query. You need to search by field:value. Try:
http://192.168.1.10:8098/solr/logs/select?q=*:*
Eric
On Nov 27, 2013, at 7:23 AM, Kartik Thakore wrote:
> Cool. I did the data activate and emptied out the bucket and set the props
> and created a different index. Still n
sanity check I'd also be curious to see the
> results of the *:* query.
>
>
> On Wed, Nov 27, 2013 at 11:15 AM, Eric Redmond wrote:
> That is not a valid solr query. You need to search by field:value. Try:
>
> http://192.168.1.10:8098/solr/logs/select?q=*:*
>
>
Anton,
Depending on how soon you plan to be in production, this sounds like a good
usecase for yokozuna (the new Riak Search) coming in 2.0 (sometime in Q1). It
has builtin support for handling semistructured data like JSON or XML, nested
even, and will allow you to query by multiple fields at
By default, w=2, so only two of the default three nodes need to be successfully
written for a write to be a success. That final write could potentially take
some time to replicate, especially if you're loading a lot of data at once. You
could try setting w=3 on initial bulk upload to ensure valu
On Jan 20, 2014, at 12:35 PM, Elias Levy wrote:
> On Mon, Jan 20, 2014 at 12:14 PM, Russell Brown wrote:
> Longer answer: Riak gave users the option of client or _vnode_ ids in version
> vectors. By default Riak uses vnode ids. Riak erred on the side of caution,
> and would create false concu
For version 1.4 counters, riak_kv_pncounter. For 2.0 CRDT counters,
riak_dt_pncounter.
Eric
On Jan 23, 2014, at 3:44 PM, Bryce Verdier wrote:
> In 1.4 there was just the simple function riak_kv_counters:value. In 2.0 I
> found the riak_kv_crdt module, which has a value function in it. But I'm
Rob,
The one second wait is because yokozuna is the glue code (putting it very, very
simply) between a Riak cluster and distributed Solr instances. When you write
an object to Riak, yokozuna asynchronously fires off an update to the Solr
service. Solr is, by default, configured to soft commit w
ike automatically generated with
> PID?
> Anyways that error looks more like .erlang_cookie file not being written
> correctly (empty). Would it happen due to node name change?
>
> D
>
>
> On 27 January 2014 15:27, Eric Redmond wrote:
> You started Riak under one nam
Actually people use Riak as a distributed cache all the time. In fact, many
customers use it exclusively as a cache system. Not all backends write to disk.
Riak supports a main memory backend[1], complete with size limits and TTL.
Eric
[1]: http://docs.basho.com/riak/latest/ops/advanced/backend
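For reference, a sketch of what the memory backend settings look like in the riak_kv section of app.config (values are illustrative; check the linked docs for your version):

```erlang
%% app.config sketch: memory backend with a size cap and TTL.
%% Values are illustrative only.
{riak_kv, [
    {storage_backend, riak_kv_memory_backend},
    {memory_backend, [
        {max_memory, 128},   %% cap, in megabytes per vnode
        {ttl, 86400}         %% expire entries after one day (seconds)
    ]}
]}
```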
> system. Node.js workers simulate that in-memory cache, php applications write
> and read from them and when something is dirty, it's persisted to riak...
>
> Best regards
>
>
>
>
> On 30 January 2014 22:26, Eric Redmond wrote:
> Actually people use Riak as a
Is that the value you have in Riak? Because that's invalid JSON. Keys/values are
separated with a colon (:), not a hash rocket (=>).
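The difference is easy to demonstrate (Python's json module here, payload invented for illustration):

```python
import json

# Colon separator: valid JSON.
print(json.loads('{"event": "login"}'))   # {'event': 'login'}

# Hash rocket separator: rejected by any JSON parser.
try:
    json.loads('{"event" => "login"}')
except ValueError as err:
    print("rejected:", err)
```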
Eric
On Feb 3, 2014, at 9:06 AM, Srdjan Pejic wrote:
> Hello all,
>
> I'm storing a log of events in Riak. The key is an UUID plus number of
> seconds since a vi
Jeremy,
The new documentation for creating schemas are still in progress, and docs for
creating a custom schema are still a few weeks out.
curl -XPUT http://localhost:8098/search/schema -H'content-type:application/xml'
--data @my_schema_file.xml
my_schema_file.xml should contain the custom sol
Sorry, the path is: /search/schema/YOUR_SCHEMA_NAME
On Feb 4, 2014, at 1:12 PM, Jeremy Pierre wrote:
> Hi all,
>
> I have a single vagrant node (ubuntu 12.04) running 2.0-pre11 for some
> prototyping and need the new search awesomeness. Creating and using an index
> with the default schema w
Sorry, clicked send too fast.
After creating the index, you then add the bucket property, pointing to the
search_index:
curl -XPUT "$RIAK_HOST/search/index/famous"
curl -XPUT "$RIAK_HOST/buckets/cats/props" -H'content-type:application/json'
-d'{"props":{"search_index":"famous"}}'
http://docs.basho.com/riak/2.0.0pre11/dev/using/search/#Simple-Setup
Eric
On Feb 5, 2014, at 10:08 AM, Kartik Thakore wrote:
> What is the url to install search bucket indexes hooks on pre11?
On Feb 15, 2014, at 5:25 AM, EmiNarcissus wrote:
> Hi,
>
> I’ve been using riak as backend storage engine since it starts support search
> syntax, and it really helps me a lot during my development. But still there
> are quite a lot of questions bothers me a lot and cannot find a answer
> a
Abhishek,
You're conflating two completely different versions of Riak Search. If you
downloaded Riak pre11, you need to reference the pre11 documentation:
http://docs.basho.com/riak/2.0.0pre11/dev/using/search/
search-cmd is unusable for 2.0 Search. You no longer manually manage indexes in
new
Searching that version list is not generally useful if you just want the
version of the Riak node you're running. Try running "riak version".
As for "Riak Search 2.0", the project that makes it work is called "yokozuna".
Eric
On Mar 20, 2014, at 5:24 PM, "Sapre, Meghna A" wrote:
> Thanks, se
stall separately?
>
> Thanks,
> Meghna
>
>
> From: Eric Redmond [mailto:eredm...@basho.com]
> Sent: Friday, March 21, 2014 8:53 AM
> To: Sapre, Meghna A
> Cc: Michael Dillon; riak-users@lists.basho.com
> Subject: Re: Riak Search 2.0
>
> Searching that versi
Yes
On Mar 21, 2014 10:24 PM, "Buri Arslon" wrote:
> is riakc_pb_socket:create_search_schema/3/4 erlang way of adding custom
> schemas to Riak?
>
>
> On Fri, Mar 21, 2014 at 11:05 PM, Buri Arslon wrote:
>
>> Thanks, Luke!
>>
>>
>> On Fri, Mar 21, 2014 at 6:04 PM, Luke Bakken wrote:
>>
>>> Hi Bu
> From: Alexander Sicular [mailto:sicul...@gmail.com]
> Sent: Friday, March 21, 2014 11:47 PM
> To: Sapre, Meghna A
> Cc: Eric Redmond; riak-users@lists.basho.com
> Subject: Re: Riak Search 2.0
>
> You need to install jsonpp, https://github.com/jmhodges/jsonpp. You could
>
The newest Riak Search 2 (yokozuna) allows you to create custom solr
schemas. One of the field options is to save the values you wish to index.
Then when you perform a search request, you can retrieve any of the stored
values with a field list (fl) property
Eric
On Mar 26, 2014 9:37 AM, "Elias Lev
ar 26, 2014 10:05 AM, "Elias Levy" wrote:
> On Wed, Mar 26, 2014 at 9:44 AM, Eric Redmond wrote:
>
>> The newest Riak Search 2 (yokozuna) allows you to create custom solr
>> schemas. One of the field options is to save the values you wish to index.
>> Then when
This was changed back a few weeks ago: allow_mult is back to false
for buckets without a type (default), but is true for buckets with a type.
Sorry for the back and forth, but we decided it would be better to keep it
as false so as to not break existing users. However, we strongly encourag
The new Solr support is a new feature for Riak 2.0.0. You don't need to install
solr separately, as it's included.
Download: http://docs.basho.com/riak/2.0.0beta1/downloads/
Using Search: http://docs.basho.com/riak/2.0.0beta1/dev/using/search/
Eric
On May 7, 2014, at 10:04 AM, Brisa Anahí Jimé
You're very perceptive! That tag is not a typo, but nothing more has been
published, yet. If I were a betting man, I'd say something is bound to happen
at some date in the future.
Eric
On Jul 14, 2014, at 12:46 PM, Dave Martorana wrote:
> Hi!
>
> I've been kind of holding off development fo
In short, not really. Solr is at best near realtime (NRT). By default, it takes
Solr one second to soft commit the value to its index. This timing can be
lowered, but note that the lower the number, the more it impacts overall system
performance. One second is the general top speed, but feel fre
This is a documentation error. As of today, use versions 1.4.5.
Eric Redmond, Engineer @ Basho
On Sun, Aug 03, 2014 at 10:59 AM, Anton Khalikov <an...@khalikov.ru> wrote:Hello Riak users and Basho guys
I've found an interesting page today:
http://docs.basho.com/riakcs/latest/cookbo
Sorry, stanchion 1.5.0 for CS 1.5.0, else stanchion 1.4.3 for CS 1.4.5
Eric Redmond, Engineer @ Basho
On Sun, Aug 03, 2014 at 11:09 AM, Eric Redmond <eredm...@basho.com> wrote:This is a documentation error. As of today, use versions 1.4.5.
Eric Redmond, Engineer @ Basho
On Sun, Aug 03, 2014
om interface to Solr that accepts arbitrary binary values. In the mean time, to use yokozuna, you'll have to encode your keys to utf8.
Eric Redmond, Engineer @ Basho
On Sun, Aug 10, 2014 at 4:01 PM, David James <davidcja...@gmail.com> wrote:I'm using UUIDs for keys in Riak -- conver
You're expected to base 64 encode it. UUID is simply the kind of value it expects, like a date or an integer.
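The space saving is easy to verify: a UUID is 16 raw bytes, which base64 encodes to 22 characters once padding is dropped, versus 36 for the hyphenated hex form (Python, purely for illustration):

```python
import base64
import uuid

u = uuid.uuid4()
hex_form = str(u)                                         # hyphenated hex
b64_form = base64.urlsafe_b64encode(u.bytes).rstrip(b"=")

print(len(hex_form))   # 36
print(len(b64_form))   # 22
```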
Eric Redmond, Engineer @ Basho
On Sun, Aug 10, 2014 at 5:03 PM, David James <davidcja...@gmail.com> wrote:Thanks for the quick responses.Eric: I don't understand. Why do
although I expected compatibility issues
> with something. Base 64 encoding the binary value is a nice compromise for
> me, and takes 22 characters (if you drop the padding) instead of the usual 36
> for the hyphenated hex format.
>
> It would still require re encoding all the key
If Solr is stumbling over bad data, your node's solr.log should be filled up.
If Yokozuna is stumbling over bad data that it's trying to send Solr in a loop,
the console.log should be full. If yokozuna is going ahead and indexing bad
values (such as unparsable json), it will go ahead and index a
Glad that works for you. Saves me from having to suggest you use 2.0 :)
Eric
On Aug 10, 2014, at 4:25 PM, tele wrote:
> Hi,
>
> I have a simple solr document like this:
>
>
>
>
> Oslo
> Bob
> 34
>
>
>
> As "id" i use an email addr a
Note that the search-cmd is for search pre 2.0, which does not use solr. If you're planning on using the new Solr based search, you'll need to run Riak 2.0, and write an import script as Dmitri pointed out.
Eric Redmond, Engineer @ Basho
On Thu, Aug 14, 2014 at 7:38 AM, Dmitri Zagiduli
Alex,
looking through your previous emails, it looked like you created a bucket type
named "likes". If that's the case, you'd swap the function params:
bucket = client.bucket_type('likes').bucket('counter_bucket')
Hope that helps,
Eric
On Aug 17, 2014, at 10:33 AM, Alex De la rosa wrote:
>
name of the bucket you
> created... so it looks as it should work exactly the contrary as how it has
> to be done (><).
>
> However, is finally fixed and I can finally use counters! thank you so much.
>
> Thanks!
> Alex
>
>
> On Sun, Aug 17, 2014 at 8:04 PM,
Also note that you don't run "enable_search" for 2.0.
On Aug 18, 2014, at 11:50 AM, Sean Cribbs wrote:
> Hi Alex,
>
> Bare counters become the "counter" field in the Solr index. For counts
> greater than 3 you might query with "counter:[3 TO *]".
>
> Hope that helps!
>
> On Mon, Aug 18, 2014
Alex,
Don't call enable_search(). That enables *old* Riak Search (it sets the
property "search":true). To revert that setting, bucket.set_property("search",
False)
On Aug 18, 2014, at 10:55 AM, Alex De la rosa wrote:
> Hi Luke,
>
> As an alternative version and following the Python Client
> I see! Understood, could you provide a little full example on how it should
> work? Because I think I also tried without it and failed.
>
> Luke told me to try using the GIT version one see if is a bug that was
> already fixed there.
>
> Thanks,
> Alex
>
> On Monday,
That's a Solr query, which you can find in the Solr documentation. But my
initial thought would be:
bucket.search("counter:*", sort="counter desc", rows=5)
Eric
On Aug 18, 2014, at 12:06 PM, Alex De la rosa wrote:
> Hi Sean,
>
> Thank you for the "counter" field trick :)
>
> What id you don
any testings that I wanted to
> start fresh... so I kinda translated the documentation, but that is
> irrelevant to the case.
>
> Thanks,
> Alex
>
>
> On Mon, Aug 18, 2014 at 10:59 PM, Eric Redmond wrote:
> Your steps seemed to have named the index "famoso".
.7/dist-packages/riak/client/transport.py",
> line 126, in _with_retries
> return fn(transport)
> File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py",
> line 182, in thunk
> return fn(self, transport, *args, **kwargs)
> File "/usr/local
Alex,
I was mistaken about the bucket search. The documentation is wrong, and the API
is weird for backward compatibility reasons. You should be able to search by
index name this way.
client.fulltext_search(index, query, **params)
We'll update the docs to match.
Eric
On Aug 18, 2014, at 2
You don't pass in a query as a url encoded string, but rather a set of
parameters. So you'd call something like:
search_results = riak_client.fulltext_search(self.Result_Index,
'build.type:CI', group='on', **{'group.field': 'build.version'})
Eric
On Aug 19, 2014, at 1:08 PM, Sapre, Meghna A wrote
Creating a search schema is non-blocking, meaning that it will return ok as it
continues to process and propagate the schema on the backend. Attempting to
bind an index to a custom schema generally fails because 1) the schema failed to
create (you can check out the solr.log or error.log for details
Yang,
The current Riak clients don't support the full range of Solr queries. In order
to remain backward compatible with the old search protocol buffers, they only
support a subset of search options. For new search options, such as facets or
geolocation, you'll have to make HTTP queries.
Query
On Sep 16, 2014, at 9:37 PM, anandm wrote:
> I started checking out Riak as an alternative for one my projects. We are
> currently using SolrCloud 4.8 (36 Shards + 3 Replica each) - and all stored
> fields (64 fields per doc + each doc at about 6kb on average)
> I want to get this thing migrated
s are bad.
> for example if riak listen on http, but referer is https - 403 thrown,
> same host http - works fine.
>
>
> This why it is not reproduce if open link in browser: no referer was in
> original request.
>
>
>
> On Fri, Sep 26, 2014 at 8:53 PM, Eric Redmond
Currently, if you change the schema after you've imported data, you have to
rebuild your index again. Riak won't detect that you've changed schemas, unless
you invalidate your AAE tree (as described in the upgrade doc) and let AAE
repair the indexes with the new schema data.
Eric
On Oct 9, 20
Look into your Riak log/solr.log, but it sounds like Solr isn't running. Ensure you have
Java installed, that the port isn't busy, and that you have given it enough RAM
for the JVM to properly function.
Eric
On Oct 11, 2014, at 3:47 PM, Soulrice wrote:
> Hello. My Riach Klooster after starting throws th
Have you added/removed nodes from the cluster, or did you start populating data
before the cluster was done being built? You may have run into a known handoff
bug that we're currently working on.
Eric
On Oct 22, 2014, at 11:32 AM, Alexander Popov wrote:
> Also if run query with specific id,
s several hangs while transferring between nodes, and I was restart
> they several times.
>
> Should re-save problematic records helps with this issue?
>
> On Wed, Oct 22, 2014 at 10:42 PM, Eric Redmond wrote:
> Have you added/removed nodes from the cluster, or did you st
Alexander,
I'm sorry to say, currently, the way to update a schema is to update the
schema, and update all of the objects.
This is actually a feature we're working on (aiming for 2.1), but it doesn't
exist right now. What will happen with this change is that AAE will be able to
repair indexes
On Nov 11, 2014, at 3:55 AM, Jason Ryan wrote:
> Hi all,
>
> I have some quick (hopefully!) questions around Riak search.
>
> 1. I'm getting multiple documents returned in the search results for a query
> - which I assume is based on the n_val of my bucket as I tried to change the
> search i
There is a 1 second soft-commit delay while Solr adds the value to the index in
memory. No matter what you do, there will always be a delay (all indexers have
this issue, including SolrCloud and ElasticSearch).
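In stock Solr that delay corresponds to the soft-commit interval in solrconfig.xml; a sketch (1000 ms matches the one-second default described above):

```xml
<!-- solrconfig.xml sketch: soft commit every second (milliseconds) -->
<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>
```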
Eric
On Nov 12, 2014, at 6:03 AM, Alexander Popov wrote:
> I put data to some buc
Oleksiy,
Indexes have some overhead of their own, but they also have a reasonable limit
on doc count (hundreds of millions per node). Answering your question requires a
bit more knowledge of your use-case. One index can be more efficient, as long
as you're not creating hundreds of thousands of b
>
> Trustev Ltd, 2100 Cork Airport Business Park, Cork, Ireland.
>
> On 12 November 2014 17:53, Eric Redmond wrote:
>
> On Nov 11, 2014, at 3:55 AM, Jason Ryan wrote:
>
>> Hi all,
>>
>> I have some quick (hopefully!) questions around Riak search.
>>
Geoff, comments inline.
On Nov 13, 2014, at 3:13 PM, Geoff Garbers wrote:
> Hi all.
>
> I've been looking around for a bit with some sort of guidelines as to how
> best to structure search indexes within Riak 2.0 - and have yet to come up
> with anything that satisfies my questions.
>
> I ca
Go ahead and submit an issue. We'll take a look at it, but it could be months
before a fix ends up in a release. I'd take Ryan's suggestion for now and
replace spaces with another char like underscore (_).
Eric
On Nov 21, 2014, at 11:20 PM, Oleksiy Krivoshey wrote:
> Bucket backend is elevel
on_j_ryan
web: www.trustev.com
Trustev Ltd, 2100 Cork Airport Business Park, Cork, Ireland.
On 24 November 2014 at 13:55, Eric Redmond wrote:
Your response header should contain shard and many filter values. What URL are
you using to search with? Does it begin with /solr or /solr_internal? What
Sorry, worst typo ever.
"Solr can't manage...60k objects" should be "Solr can manage...60k objects"
Eric
On Nov 22, 2014, at 6:13 AM, Eric Redmond wrote:
> Geoff, comments inline.
>
> On Nov 13, 2014, at 3:13 PM, Geoff Garbers wrote:
>
>> Hi all
Automatic updating of indexes due to schema changes doesn't exist yet. It'll be
added soon:
https://github.com/basho/yokozuna/pull/427
Eric
On Nov 25, 2014, at 8:21 AM, Yang Zhenguo wrote:
> I have an existed schema and I want to add one column in it, such as
>
>
>
> What's the steps to
Yes, that is one of the options.
Eric
On Nov 25, 2014, at 8:40 AM, Yang Zhenguo wrote:
> Hi Eric,
>
> Any suggestion for my requirement? create a new search index?
>
> Regards,
> Zhenguo
>
>
> 2014-11-26 0:24 GMT+08:00 Eric Redmond :
> Automatic updating of
>> Geoffrey Garbers
>> Senior Developer
>>
>>
>>
>> Cell: +27 (0)766 476 920
>> Skype: geoff.garbers
>> ge...@totalsend.com
>> www.totalsend.com
>>
>> +1 347-431-0494
>> +44 (0)203 519 1082
>> +61 (0)3 9