Your problem is term frequency. If you do not want term frequency to be
considered in the ranking, try omitting it.
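For example (a sketch only; the field name here is hypothetical), term
frequency can be switched off per field in schema.xml:

  <field name="title" type="text" indexed="true" stored="true"
         omitTermFreqAndPositions="true"/>

Note that this also omits positions, so phrase queries will stop working
on that field.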
-
Grijesh
Hi, there is nothing in the log, and the optimize finishes successfully:

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">17</int>
  </lst>
</response>

I run optimize through the browser by entering the URL

http://localhost:8080/myindex/update?optimize=true

or

http://localhost:8080/myindex/update?stream.body=<optimize/>

Thanks.
On Mon, Dec 27, 2010 at 7:12 AM, Li Li wrote:
> maybe you can consult the log files; they may show you something.
maybe you can consult the log files; they may show you something.
btw how do you post your command?
do you use curl 'http://localhost:8983/solr/update?optimize=true' ?
or do you post an XML file?
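For reference, posting the command as an XML body would look something like
this (a sketch, assuming the default update handler):

  curl 'http://localhost:8983/solr/update' -H 'Content-type: text/xml' \
       --data-binary '<optimize/>'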
2010/12/27 Rok Rejc :
> On Mon, Dec 27, 2010 at 3:26 AM, Li Li wrote:
>
>> see maxMergeDocs(maxMergeSize) in solrconfig.xml. if the segment's
>> document count is larger than this value, it will not be merged.
On Mon, Dec 27, 2010 at 3:26 AM, Li Li wrote:
> see maxMergeDocs(maxMergeSize) in solrconfig.xml. if the segment's
> document count is larger than this value, it will not be merged.
>
I see that in my solrconfig.xml, but it is commented out and marked as
deprecated. I have uncommented this setting (
hi all:
I use Solr to index my documents, and I put my text in a CDATA
section. However, Solr always throws an exception complaining about
the XML file processing.
It seems that I can still index the documents successfully (actually, I'm
not sure, because there are quite a lot of documents!)
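For reference, a minimal update message with the text wrapped in CDATA might
look like this (field names are hypothetical). One common cause of parse
errors here is the sequence "]]>" appearing inside the text, which is not
allowed in a CDATA section:

  <add>
    <doc>
      <field name="id">1</field>
      <field name="text"><![CDATA[raw text with < & > left unescaped]]></field>
    </doc>
  </add>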
Is the optimize finished? By default, the optimize command goes in and
the HTTP request returns. You have to add attributes to the <optimize/>
command.
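Something like this (a sketch; waitFlush and waitSearcher control whether
the call blocks until the flush is done and the new searcher is ready):

  curl 'http://localhost:8080/myindex/update' -H 'Content-type: text/xml' \
       --data-binary '<optimize waitFlush="true" waitSearcher="true"/>'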
On Sun, Dec 26, 2010 at 9:23 AM, Rok Rejc wrote:
> Hi all,
>
> I have created an index, committed the data and after that I had run the
> optimize with default parameters:
500 rows can be a lot of rows. A filter query is a normal query the first
time it is run, and is cached thereafter. If you run a sequence of
different time ranges, each new range will be slow. So if you just issue a
query per time range and let the query and filter query caches do their
work, the queries might end up faster.
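For example (a sketch; the timestamp field name is hypothetical, and the
spaces around TO must be URL-encoded in a real request). Each distinct fq
value gets its own filterCache entry, so repeating a range is cheap but
every new range pays the full query cost once:

  http://localhost:8983/solr/select?q=*:*&rows=500&fq=timestamp:[2010-12-01T00:00:00Z TO 2010-12-31T23:59:59Z]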
On Sat,
see maxMergeDocs(maxMergeSize) in solrconfig.xml. if the segment's
document count is larger than this value, it will not be merged.
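In the old-style solrconfig.xml that is the setting shown below (a sketch;
the value is the usual "unlimited" default):

  <indexDefaults>
    <maxMergeDocs>2147483647</maxMergeDocs>
  </indexDefaults>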
2010/12/27 Rok Rejc :
> Hi all,
>
> I have created an index, committed the data and after that I had run the
> optimize with default parameters:
>
> http://localhost:8080/myindex/update?stream.body=<optimize/>
> I did a heap dump + heap histogram before killing the JVM today, and the
> only really suspicious thing was the top line in the histogram:
>
>   class [B, 81883 instances, 3,974,092,842 bytes
>
> Most of the instances (actually all of around a hundred of them I
> checked with jhat) look almost the same in term
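([B is the JVM type descriptor for byte[].) For anyone reproducing this,
the histogram and dump come from the standard JDK tools, e.g.:

  jmap -histo:live <pid> | head -20
  jmap -dump:live,format=b,file=heap.bin <pid>
  jhat heap.bin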
Hi all,
I have created an index, committed the data and after that I had run the
optimize with default parameters:
http://localhost:8080/myindex/update?stream.body=<optimize/>
I was surprised that after the optimizing finished there were 21 segments
in the index:
reader :
SolrIndexReader{this=724a2dd4,r
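If the goal is a specific segment count, the optimize command also accepts a
maxSegments attribute (assuming Solr 1.4 or later):

  curl 'http://localhost:8080/myindex/update' -H 'Content-type: text/xml' \
       --data-binary '<optimize maxSegments="1"/>'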