Hey Steve,

Your data will remain in Riak, but Search will then use AAE to rebuild the data 
on the Solr side. This only fixes Solr's core data issues. If you're 
worried about an interruption on the Solr side (there would be one while the 
data reindexes), you could instead wait until things are shored up in our next 
release/patch release, which should help solve these issues.
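
For reference, the workaround from my earlier mail can be sketched roughly like 
this. The data root (/var/lib/riak) and index name (my_index) are placeholders 
for illustration only; adjust them to your install, do one node at a time, and 
expect AAE to reindex afterwards:

```shell
# Placeholders: point these at your actual Riak data dir and search index.
RIAK_DATA="${RIAK_DATA:-/var/lib/riak}"
INDEX="my_index"
CORE_DIR="$RIAK_DATA/yz/$INDEX"

# Step 1: remove core.properties so Solr recreates the core on the next
# core-creation attempt (see the yz_index.erl link in the thread).
rm -f "$CORE_DIR/core.properties"

# Step 2, only if step 1 doesn't help: move the whole index directory aside.
# It gets recreated, and AAE then resyncs the data.
if [ -d "$CORE_DIR" ]; then
  mv "$CORE_DIR" "$CORE_DIR.bak.$(date +%s)"
fi
```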

As I mentioned, we’re working on a fix for this with an internal patch that 
we’re testing at this moment.

Thanks.

Zeeshan Lakhani
programmer | 
software engineer at @basho | 
org. member/founder of @papers_we_love | paperswelove.org
twitter => @zeeshanlakhani

> On Mar 5, 2015, at 12:13 PM, Steve Garon <steve.ga...@gmail.com> wrote:
> 
> Does deleting core.properties or moving search-root/index delete any of my 
> index data? Will that cause any interruption to SOLR? Because I'm having 
> problems on a production cluster and I really don't want to have to back up 
> 300 million keys again or cause any interruptions to the system ...
> 
> 
> Steve
> 
> On 5 March 2015 at 10:11, Zeeshan Lakhani <zlakh...@basho.com> wrote:
> Hey Steve,
> 
> We’re currently tracking this issue, 
> https://github.com/basho/yokozuna/issues/442, and are working on testing 
> out a patch internally. I will update you as soon as we get clarity there.
> 
> As a possible workaround, I would attempt to delete the `core.properties` 
> file located in the search root directory's index path on each node 
> (./data/yz/<<index_name>>). The core should then be recreated on the next 
> creation attempt on the Solr side; see 
> https://github.com/basho/yokozuna/blob/92ca14cc35b46c8e7ac86cad6d92547e68e8d917/src/yz_index.erl#L171.
> 
> If that doesn’t work, you can then delete or `mv` that search-root/index 
> directory out of the way; it will then get recreated and AAE will 
> resync the data.
> 
> Thanks.
> 
> Zeeshan Lakhani
> programmer | 
> software engineer at @basho | 
> org. member/founder of @papers_we_love | paperswelove.org
> twitter => @zeeshanlakhani
> 
>> On Mar 5, 2015, at 9:02 AM, Steve Garon <steve.ga...@gmail.com> wrote:
>> 
>> In solr.log, the kind of exceptions that I'm getting right now is the 
>> following:
>> 
>> 1. IO Error while trying to get the size of the 
>> Directory:java.io.FileNotFoundException: SOME_RANDOM_FILE (.doc, .pos, .fnm, 
>> .si, .nv, .gen extensions)
>> 2. SolrException.java:120 null:org.apache.solr.common.SolrException: Core 
>> with name 'BUCKET NAME' already exists. 
>> 3. SolrException.java:120 null:org.eclipse.jetty.io.EofException
>> 4. Server refused connection 
>> 5. IOException occured when talking to server
>> 
>> Steve
>> 
>> On 5 March 2015 at 08:50, Steve Garon <steve.ga...@gmail.com> wrote:
>> Yes, I do have yz_events,handle_info in the crash log. Tons of them, 
>> actually, and each of them has a big stack trace attached.
>> 
>> It would be hard for me to provide you with logs. If you have specific 
>> questions you want answers to I'd be happy to help though.
>> 
>> 
>> Steve
>> 
>> On 4 March 2015 at 14:20, Zeeshan Lakhani <zlakh...@basho.com> wrote:
>> Hey Steve,
>> 
>> Sorry to see you’re having new issues.
>> 
>> We’ll have the fix for “space in the key” out soon; it’s currently under 
>> review. And, I know that this issue is unrelated.
>> 
>> I have a few different routes/thoughts for you, but are you seeing anything 
>> related to `yz_events,handle_info` in your crash logs? Also, can you 
>> gist/pastebin me your Solr logs? I’d like to see if this correlates with 
>> something we’re currently looking at. 
>> 
>> Thanks. 
>> 
>> Zeeshan Lakhani
>> programmer | 
>> software engineer at @basho | 
>> org. member/founder of @papers_we_love | paperswelove.org
>> twitter => @zeeshanlakhani
>> 
>>> On Mar 4, 2015, at 10:39 AM, Steve Garon <steve.ga...@gmail.com> wrote:
>>> 
>>> Hey all, 
>>> 
>>> We were having the "space in the key" bug in our cluster, so we went through 
>>> the whole dataset, backing it up to JSON files and removing the spaces in 
>>> the keys. Then we trashed our whole cluster and restarted from scratch, 
>>> reimporting all the data. Everything worked like a charm for two weeks, 
>>> but this weekend (not sure what happened) AAE died again. 
>>> 
>>> I have two issues now: 
>>> 1. AAE is trying to recreate an index that already exists and crashes with 
>>> an "Already exists" error ... I get this in my error log every 15 seconds.
>>> 2. AAE crashes while iterating through entropy data with a request timeout 
>>> error every hour, followed by tons of "failed to index object" errors with 
>>> request timeout as well. The stack trace looks like this:
>>> 
>>> [error] emulator Error in process <025931.7166> on node 'riak@IP' with exit 
>>> value: {function_clause,[{yz_entropy,iterate_entropy_data,[<<11 
>>> bytes>>,[{continuation,<<159 
>>> bytes>>},{limit,100},{partition,12}],#Fun<yz_index_hashtree.5.46188917,{error,{error,req_timeout}}],[{file,"src/yz_entropy.erl"},{line,44}]},{yz_index_hashtree,'-fold_keys/2-lc$^0/1-0-',3,[{file...
>>>  (TRUNCATED)
>>> 
>>> [error] <0.1371.0>@yz_kv:index:215 failed to index object 
>>> {{<<"TYPE">>,<<"BUCKET">>},<<"KEY">>} with error {"Failed to index 
>>> docs",{error,req_timeout}} because ... (REPEATED MULTIPLE TIMES FOR 
>>> DIFFERENT KEYS)
>>> 
>>> I tried clearing the yz anti-entropy tree and reinitialising the 
>>> yz_entropy_mgr with no luck. Anything I can do to fix this? 
>>> 
>>> Oh, FYI, I can no longer insert data with spaces in the key, because we are 
>>> using a wrapper on top of Riak that prevents us from doing so; my 
>>> issues are definitely not related to that.
>>> 
>>> Here are some config changes that may be useful for context.
>>> We added this to ibrowse.conf:
>>> {dest, "localhost", 8093, 100, 1000, []}.
>>> 
>>> Jetty is set with minthread 80, acceptors 80, and is using the NIO 
>>> connector.
>>> 
>>> All our solr buckets have filterCache disabled with softcommits set to 10s 
>>> instead of 1.
>>> 
>>> Our riak.conf has background_manager turned on with AAE and handoff using 
>>> it.
>>> 
>>> Thanks,
>>> 
>>> Steve
>>> _______________________________________________
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> 
>> 
>> 
> 
> 
