Re: CPU usage 100% during search
On Wed, 4 Jan 2017 at 06:55, Rajnish kamboj wrote:
> You might be right: we are running our system under a load test, firing
> 200 requests with JMeter (loop forever). This may be the cause of the
> 100% CPU usage on a 4-vCPU machine.
> In this case, should we manually restrict our Lucene search threads in
> our application, or is there some built-in Lucene mechanism to restrict
> the threads that search the Lucene index?

Well, you could, but that would not make sense: 100% CPU usage is really the best you can get. Why would you want to make things worse artificially? That said, you might still want to restrict the number of threads that have access to the Lucene index (typically by using a fixed thread pool) to a bit more than the number of CPUs you have. This still allows 100% CPU usage while reducing context switching and letting Lucene keep fewer thread-local objects, which means potentially better throughput and memory usage.
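For what it's worth, here is a minimal sketch of the fixed-thread-pool idea described above. The index path, field name ("body"), and example terms are made-up placeholders; the rest is standard Lucene plus java.util.concurrent.

import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

public class BoundedSearchExample {
    public static void main(String[] args) throws Exception {
        // Size the pool slightly above the number of available CPUs.
        int poolSize = Runtime.getRuntime().availableProcessors() + 2;
        ExecutorService searchPool = Executors.newFixedThreadPool(poolSize);

        DirectoryReader reader =
            DirectoryReader.open(FSDirectory.open(Paths.get("/path/to/index")));
        IndexSearcher searcher = new IndexSearcher(reader); // shared; IndexSearcher is thread-safe

        // Funnel all incoming search requests through the bounded pool.
        List<Future<TopDocs>> results = new ArrayList<>();
        for (String term : new String[] {"lucene", "solr", "search"}) {
            results.add(searchPool.submit(
                () -> searcher.search(new TermQuery(new Term("body", term)), 10)));
        }
        for (Future<TopDocs> f : results) {
            System.out.println("hits: " + f.get().totalHits);
        }

        searchPool.shutdown();
        reader.close();
    }
}

Extra requests beyond the pool size simply queue up instead of spawning more threads, so the CPUs stay saturated without the context-switching overhead.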
term frequency in solr
Please help me with this. I have this code, which returns term frequencies from the techproducts example:

import java.util.List;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.client.solrj.response.TermsResponse;

public class test4 {
    public static void main(String[] args) throws Exception {
        String urlString = "http://localhost:8983/solr/techproducts";
        SolrClient solr = new HttpSolrClient.Builder(urlString).build();

        SolrQuery query = new SolrQuery();
        query.setTerms(true);
        query.addTermsField("name");

        SolrRequest req = new QueryRequest(query);
        QueryResponse rsp = req.process(solr);
        System.out.println(rsp);
        System.out.println("numFound: " + rsp.getResults().getNumFound());

        TermsResponse termResp = rsp.getTermsResponse();
        List terms = termResp.getTerms("name");
        System.out.print("size=" + terms.size());
    }
}

The result is 0 records and I don't know why. This is what I get:

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
{responseHeader={status=0,QTime=0,params={terms=true,terms.fl=name,wt=javabin,version=2}},response={numFound=0,start=0,docs=[]}}
numFound: 0
Exception in thread "main" java.lang.NullPointerException
    at solr_test.solr.test4.main(test4.java:29)
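One possible variation, not confirmed in this thread: a plain QueryRequest is sent to /select, while the techproducts configset ships a dedicated /terms request handler for the TermsComponent, so the terms section can come back empty (and getTermsResponse() null) when the request goes to the default handler. A sketch that routes the same request to /terms, assuming a running techproducts example with documents indexed, could look like this:

import java.util.List;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.client.solrj.response.TermsResponse;

public class TermsExample {
    public static void main(String[] args) throws Exception {
        SolrClient solr =
            new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build();

        SolrQuery query = new SolrQuery();
        query.setRequestHandler("/terms");   // route to the terms handler instead of /select
        query.setTerms(true);
        query.addTermsField("name");

        QueryResponse rsp = solr.query(query);
        TermsResponse termsResp = rsp.getTermsResponse();
        if (termsResp == null) {
            System.out.println("no terms section in the response");
            return;
        }
        List<TermsResponse.Term> terms = termsResp.getTerms("name");
        for (TermsResponse.Term t : terms) {
            System.out.println(t.getTerm() + " -> " + t.getFrequency());
        }
        solr.close();
    }
}

Guarding against a null TermsResponse also avoids the NullPointerException shown in the output above.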
Re: Indexing architecture
Hi,

Any better architecture ideas for my use case mentioned below?

Regards,
Suriya

On Wed, 28 Dec 2016 at 11:27 PM, suriya prakash wrote:
> Hi,
>
> I have 100 thousand indexes in a Hadoop grid, because 90% of my indexes
> will be inactive and I can distribute the active ones based on load.
> Scoring will work better per index, but I am not worried about that now.
>
> What optimisations do I need to scale better?
>
> I commit on every update now. Should I instead keep the active index
> writers open and commit periodically, with a WAL to cover failures?
>
> Update calls will happen frequently (80% of the load). I will read the
> stored fields and update the existing document with the new values. I
> don't compress stored fields now, because reads would have to uncompress
> a whole block of data. Should I reconsider compression?
>
> Scale: hundreds of indexes will be active at a time on a single machine
> (16 GB RAM).
>
> Should I switch to a shard-based architecture? I see some benefits
> there: more batching will happen, and multiple threads will not load the
> system. What other benefits can we get?
>
> Please share your ideas or any links for a multi-user environment.
>
> Regards,
> Suriya
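For the "keep the writer open and commit periodically" question above, here is a minimal sketch of what that can look like with plain Lucene. The index path, field names, and the one-minute commit interval are illustrative assumptions, and the application-level write-ahead log used for crash recovery is omitted.

import java.nio.file.Paths;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.FSDirectory;

public class PeriodicCommitWriter {
    public static void main(String[] args) throws Exception {
        IndexWriterConfig cfg = new IndexWriterConfig(new StandardAnalyzer());
        IndexWriter writer =
            new IndexWriter(FSDirectory.open(Paths.get("/path/to/index")), cfg);

        // Commit once a minute instead of after every update; anything not yet
        // committed must be replayable from the application-level WAL.
        ScheduledExecutorService committer = Executors.newSingleThreadScheduledExecutor();
        committer.scheduleAtFixedRate(() -> {
            try {
                writer.commit();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 1, 1, TimeUnit.MINUTES);

        // Updates reuse the long-lived writer; updateDocument replaces the
        // document whose "id" term matches.
        Document doc = new Document();
        doc.add(new StringField("id", "doc-1", Field.Store.YES));
        doc.add(new TextField("body", "updated content", Field.Store.YES));
        writer.updateDocument(new Term("id", "doc-1"), doc);

        // On shutdown: stop the scheduler first, then close the writer (close commits).
        committer.shutdown();
        writer.close();
    }
}

The trade-off is that a crash between commits loses the uncommitted updates unless they can be replayed from the WAL, which is exactly the failure case the question raises.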