Re: Yokozuna indexes too slow

2015-10-01 Thread Jason Voegele
Hello Ilyas, I’ve got a few questions for you to help diagnose the performance 
problem.

* What Riak version are you running?

* Are you using the default solrconfig on the index, or a custom configuration?

* You say that there is a 200x slowdown when Yokozuna indexing is enabled. Do 
you have any specific numbers? A graph would be helpful if you can generate one.

* When you say that Yokozuna takes up 100% of one core, have you been able to 
narrow that down to a specific process within Yokozuna or Riak? Using “riak-admin 
top” might help here (see the sketch after these questions).

* Can you provide your JVM configuration (heap settings, etc.)?
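
In case it helps, here is a rough sketch of pulling that information together on 
one node. The riak.conf key and file paths are assumptions based on a stock 
Riak 2.x package install, so adjust for your environment:

# Riak version
riak version

# Per-process activity inside the Erlang VM; watch for busy yz_* processes
riak-admin top

# JVM options currently handed to the Solr process (key name assumed for 2.x)
grep 'search.solr.jvm_options' /etc/riak/riak.conf

# What the Solr JVM is actually running with
ps -ef | grep '[s]olr'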


> On Oct 1, 2015, at 2:18 AM, ilyas  wrote:
> 
> 
> Hello to the Riak community! 
> I have a cluster of 5 nodes (hardware servers with SSDs, 8 CPU cores, 16GB of 
> RAM, most of it free). When I store data to the database (up to 1,000,000 
> keys) without indexes, everything happens very quickly. If I create a Yokozuna 
> index and do the same, the store rate decreases greatly (about 200 times). 
> The yokozuna process keeps one CPU core at 100% the whole time; the other 
> cores are not loaded. Keys are short strings (16 characters), values are JSON 
> strings (about 200 characters long). 
> How can I find the cause of this poor performance and fix it?
> 
> creation of bucket:
> 
> #!/bin/bash
> 
> RIAK_HOST="127.0.0.1:8098"
> 
> # define the search schema
> curl -XPUT $RIAK_HOST/search/schema/dbsearch -H 
> 'Content-Type:application/xml' --data-binary @dbSchema.xml
> 
> # create a search index that uses the schema
> curl -XPUT $RIAK_HOST/search/index/user_idx_bin -H 'Content-Type: 
> application/json' -d '{"schema":"dbsearch"}'
> 
> # create a bucket type for the leveldb backend
> sudo /usr/sbin/riak-admin bucket-type create leveldb 
> '{"props":{"backend":"leveldb_mult"}}'
> sudo /usr/sbin/riak-admin bucket-type activate leveldb
> 
> #create buckets with known backends and indexes
> curl -XPUT $RIAK_HOST/types/leveldb/buckets/SomeBucket/props -H 
> 'Content-Type: application/json' -d 
> '{"props":{"search_index":"user_idx_bin"}}'
> 
> index schema (the XML was mangled in the mail archive; element names were 
> stripped and only some attributes survive):
> 
> - a fields block declaring roughly nine field entries, all multiValued="false", 
>   the first of them also required="true"
> - a types block containing a fieldType with positionIncrementGap="0" and a 
>   multiValued="true" solr.StrField
> - uniqueKey: _yz_id


Re: Yokozuna indexes too slow

2015-10-01 Thread Jason Voegele
Hello Ilyas,

According to this Stack Overflow post, that error message might be a symptom of 
low memory conditions:

http://stackoverflow.com/questions/21180596/solr-error-when-doing-full-import-25-rows-org-apache-solr-common-solrexcepti

Could you try bumping up the -Xmx value on your JVM and see if it helps?
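
For reference, on Riak 2.x the Solr JVM flags come from riak.conf, so raising 
the heap would look something like the lines below (the key name assumes a 
stock 2.x config, and 2g is only an example value; size it to fit your 16GB 
hosts and keep whatever other flags you already have):

# riak.conf -- raise -Xmx, leaving your other options in place
search.solr.jvm_options = -d64 -Xms1g -Xmx2g

# restart the node so the Solr JVM picks up the new options
riak restart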

> On Oct 1, 2015, at 11:37 AM, ilyas  wrote:
> 
> org.apache.solr.common.SolrException;null:org.eclipse.jetty.io.EofException



Re: How to control max RAM size of a Solr process?

2015-11-24 Thread Jason Voegele
> On Nov 24, 2015, at 4:03 AM, mtakahashi-ivi  
> wrote:
> 
> If I use Riak search, Riak launches a Solr process.
> After I put half a million objects, the Solr process allocates more RAM than 
> I set in search.solr.jvm_options.
> For example, even if I set "-Xmx 512", the Solr process allocates 2GB of memory.
> My questions are:
> * How do I control the max RAM size of the Solr process?
> * What is the off-heap memory used for?

Hi Masanori,

Can you please provide the full and exact value you have for the 
search.solr.jvm_options property? Better yet, share your full Riak config file 
if you can.

As for off-heap memory, I don’t believe Solr utilizes off-heap memory in any 
way. Do you have any indication otherwise?
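
One thing that may help narrow it down is to compare the heap ceiling the JVM 
was actually started with against the resident set size of the Solr process. A 
rough sketch (the grep patterns assume a stock install with a single Java 
process running Solr):

# Flags the Solr JVM was actually launched with -- look for -Xms/-Xmx
ps -ef | grep '[s]olr' | tr ' ' '\n' | grep -E '^-Xm[sx]'

# Resident set size (KB) of the Java process, for comparison
ps -C java -o rss=,args= | grep solr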

-- 
Jason Voegele
"Most of us, when all is said and done, like what we like and make up reasons 
for it afterwards."
-- Soren F. Petersen


Re: Solr indexes are dropped after recovering a node

2015-12-23 Thread Jason Voegele
> On Dec 23, 2015, at 12:54 PM, István  wrote:
> 
> Hi,
> 
> I had to move the nodes of a Riak cluster to new ones. Everything is fine 
> with the data, we have been following the recovery procedures here:
> 
> http://docs.basho.com/riak/latest/ops/running/backups/#Restoring-a-Node 
> <http://docs.basho.com/riak/latest/ops/running/backups/#Restoring-a-Node>
> 
> After moving all of the nodes I found that all of the Solr indexes 
> are gone.

Hi Istvan,

It looks like you are restoring an entire cluster, not just a single node 
within a cluster. If so, the relevant recovery procedures are documented on 
this page:

http://docs.basho.com/riak/latest/ops/running/recovery/failure-recovery/#Cluster-Recovery-From-Backups
 
<http://docs.basho.com/riak/latest/ops/running/recovery/failure-recovery/#Cluster-Recovery-From-Backups>

Can you try following the full cluster recovery procedure and see if that 
solves the problem?
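
Once the restore is done it may also be worth confirming that the search 
indexes and their bucket associations came back. A quick check over HTTP could 
look like this (endpoints assume the Riak 2.x HTTP API; substitute your own 
host, bucket type, and bucket):

RIAK_HOST="127.0.0.1:8098"

# List the search indexes the cluster currently knows about
curl "$RIAK_HOST/search/index"

# Confirm a bucket still points at its index (look for "search_index" in the props)
curl "$RIAK_HOST/types/mytype/buckets/mybucket/props"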

-- 
Jason Voegele
Manly's Maxim:
Logic is a systematic method of coming to the wrong conclusion
with confidence.


Re: Riak YZ/Solr creating invalid segments

2016-01-11 Thread Jason Voegele
> On Jan 11, 2016, at 2:06 AM, Josh Yudaken  wrote:
> […]
> Unfortunately the standard Lucene CheckIndex is unable to recover this
> error, but there is a patch available at:
> https://issues.apache.org/jira/browse/LUCENE-6762
> 
> After modifying the patch to run on Lucene 4.7 [any plans on
> upgrading?] we managed to bring our nodes back up and they seem to be
> functioning fine.
> 
> Have you seen these issues anywhere else? Any advice on how to try to
> solve them besides continually running the updated CheckIndex script
> after each failure?

Hi Josh,

Can you please specify the version of Riak you are running?

We had a fix for a similar (but not identical) issue here:
https://github.com/basho/yokozuna/blob/3e749512d2df07c81def3c9c592615fd8d2d1234/docs/RELEASE_NOTES.md#bugsmisc
 
<https://github.com/basho/yokozuna/blob/3e749512d2df07c81def3c9c592615fd8d2d1234/docs/RELEASE_NOTES.md#bugsmisc>

We are planning on upgrading to Solr 4.10 in our next release, but the patch 
that you linked to does not appear to have been applied to any Lucene version 
that ships with Solr 4.10 or Solr 5. I’ve created an issue to track that patch:
https://github.com/basho/yokozuna/issues/607 
<https://github.com/basho/yokozuna/issues/607>
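
(For anyone else who lands here: invoking the stock Lucene CheckIndex against a 
Yokozuna core looks roughly like the sketch below. The jar location and index 
path are assumptions for a default package install, and -fix drops any segment 
it cannot read, so back the core up first.)

# Paths are assumptions -- adjust to where your install keeps them
LUCENE_JAR="/path/to/lucene-core-4.7.2.jar"           # ships inside the Solr webapp
INDEX_DIR="/var/lib/riak/yz/YOUR_INDEX/data/index"    # default yz data dir assumed

# Inspect only (read-only)
java -cp "$LUCENE_JAR" org.apache.lucene.index.CheckIndex "$INDEX_DIR"

# Attempt repair: removes unreadable segments, losing the documents in them
java -cp "$LUCENE_JAR" org.apache.lucene.index.CheckIndex "$INDEX_DIR" -fix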

-- 
Jason Voegele
audiophile, n:
Someone who listens to the equipment instead of the music.


Re: Solr Error Handling

2016-02-26 Thread Jason Voegele
> On Feb 26, 2016, at 10:40 AM, Colin Walker  wrote:
> Due to bad planning on my part, Solr is having trouble indexing some of the 
> fields I am sending to it, specifically, I ended up with some string fields 
> in a numerical field. Is there a way to retrieve the records from Riak that 
> have thrown errors in Solr?

Hi Colin,

Can you tell us what version of Riak you are running? In recent versions of 
Riak you can get this information by expiring the AAE trees for Yokozuna and 
then noting the objects that are flagged as not being indexable. See 
http://docs.basho.com/riak/latest/ops/advanced/aae/#AAE-and-Riak-Search 
<http://docs.basho.com/riak/latest/ops/advanced/aae/#AAE-and-Riak-Search> for 
some background info on AAE and Yokozuna, if needed.
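
If it comes to that, expiring the Yokozuna AAE trees from the console would 
look roughly like this (the module and function are an assumption based on 
recent Yokozuna releases, so please verify against the version you are running):

# Attach to the running node; detach with Ctrl-D or Ctrl-C Ctrl-C, never q().
riak attach

# At the Erlang prompt (assumed Yokozuna API):
#   yz_entropy_mgr:expire_trees().
# AAE then re-compares KV with Solr and flags objects that fail to index.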

Another possible option is to see if the “_yz_err” field that is 
automatically created on certain error conditions might hold the information 
you need. See http://docs.basho.com/riak/latest/dev/advanced/search-schema/ 
<http://docs.basho.com/riak/latest/dev/advanced/search-schema/> for info on 
“_yz_err”.
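
As a rough illustration, once objects have been flagged you could pull them 
back with a query on that field over HTTP (the index name is a placeholder, 
and the exact value stored in _yz_err may differ by version):

# Query for documents flagged with an indexing error
curl "http://127.0.0.1:8098/search/query/YOUR_INDEX?wt=json&q=_yz_err:1"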

-- 
Jason Voegele
When my brain begins to reel from my literary labors, I make an occasional
cheese dip.
-- Ignatius Reilly



Re: Spaces in the search string

2016-09-06 Thread Jason Voegele
Hi Sean,

Have you tried escaping the space in your query?

http://stackoverflow.com/questions/10023133/solr-wildcard-query-with-whitespace
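
For example, something along these lines (illustrative only; escaping keeps the 
wildcard term as a single token, though whether it matches also depends on how 
body_s was analyzed). Over the HTTP interface the backslash is URL-encoded as 
%5C and the space as %20:

# Index name taken from the log below; host and port are placeholders
curl "http://127.0.0.1:8098/search/query/crm_db.campaign_index?wt=json&q=body_s:*test%5C%20str*"

# The equivalent query binary for riakc_pb_socket:search/3 would be
#   <<"body_s:*test\\ str*">>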


> On Sep 5, 2016, at 6:24 PM, sean mcevoy  wrote:
> 
> Hi List,
> 
> We have a solr index where we store something like:
> <<"{\"key_s\":\"ID\",\"body_s\":\"some test string\"}">>}],
> 
> Then we try to do a riakc_pb_socket:search with the pattern:
> <<"body_s:*test str*">>
> 
> The request will fail with an error message telling us to check the logs and 
> in there we find:
> 
> 2016-09-05 13:37:29.271 [error] <0.12067.10>@yz_pb_search:maybe_process:107 
> {solr_error,{400,"http://localhost:10014/internal_solr/crm_db.campaign_index/select
>  
> ",<<"{\"error\":{\"msg\":\"no
>  field name specified in query and no default specified via 'df' 
> param\",\"code\":400}}\n">>}} 
> [{yz_solr,search,3,[{file,"src/yz_solr.erl"},{line,284}]},{yz_pb_search,maybe_process,3,[{file,"src/yz_pb_search.erl"},{line,78}]},{riak_api_pb_server,process_message,4,[{file,"src/riak_api_pb_server.erl"},{line,388}]},{riak_api_pb_server,connected,2,[{file,"src/riak_api_pb_server.erl"},{line,226}]},{riak_api_pb_server,decode_buffer,2,[{file,"src/riak_api_pb_server.erl"},{line,364}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]
> 
> 
> Through experiment I've figured out that it doesn't like the space as it 
> seems to think the part of the search string after that space is a new key to 
> search for. Which seems fair enough.
> 
> Anyone know of a work-around? Or am I formatting my request incorrectly?
> 
> Thanks in advance.
> //Sean.
> 


Re: Secondary indexes or Riak search ?

2017-02-02 Thread Jason Voegele
Hi Alex,

There is some info on this page that can help you decide:

http://docs.basho.com/riak/kv/2.2.0/developing/usage/secondary-indexes/

See the sections titled "When to Use Secondary Indexes" and "When Not to Use 
Secondary Indexes".

Sent from my iPad

> On Feb 2, 2017, at 4:43 AM, Alex Feng  wrote:
> 
> Hello Riak-users,
> 
> I am currently using Riak search to do some queries. Since my queries are 
> very simple, they could be fulfilled by secondary indexes as well. 
> So my question is: which one has better performance and less overhead, 
> assuming both can fulfill the query requirement?
> 
> Many thanks in advance.
> 
> Br,
> Alex