I might be mistaken here, but why exactly wouldn't you use the facet
approach? I don't know exactly how to do this in core Lucene, but
with Solr it works very well, also for multi-valued fields. You could
just say "give me the 100 most frequent terms in field X" for each field
you're interested in.
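The facet approach described above could be sketched as a plain Solr request; the host, core path, and field name below are hypothetical placeholders for a default Solr setup:

```
http://localhost:8983/solr/select?q=*:*&rows=0&facet=true&facet.field=X&facet.limit=100
```

With `rows=0` no documents are returned, only the facet counts: the 100 most frequent terms in field `X` across the matching documents.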
the index used with my Solr instance has format -9
(Lucene 2.9).
So that's not the issue, I guess... any more ideas?! ;)
On 20.04.2011 10:17, Erik Fäßler wrote:
Thank you very much for your answers :-) First of all, I just noticed
that I unintentionally sent the question to the Lucene list while
page.
That should show you exactly what is being searched. You might also want
to look at the analysis page for your field and see how your query
is tokenized.
But, like I said, this looks like it should work. If you can post the results of
adding &debugQuery=on and your actual definition for &q
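A request with debug output enabled might look like this (hypothetical host, core path, and query; assuming a default Solr URL layout):

```
http://localhost:8983/solr/select?q=title:hello&debugQuery=on
```

The debug section of the response shows the parsed query and per-document scoring explanations, which is usually enough to see how the query was actually interpreted.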
Hello there,
my issue qualifies as a newbie question, I guess, but I'm really a bit
confused. I have an index which was not created by Solr. Perhaps
that's already the point, although I fail to see why it should cause
my problem.
I use the admin interface to check which result
Hi Simon,
thanks for your answer. My comments below:
so you mean you would want to do that analysis on the client side and
only shoot the already tokenized values to the server?
What exactly is too slow? Can you provide more info what the problem is?
After all, I think you should ask on the sol
Hi there,
I'd like to serialize some Lucene Documents I've built before. My goal
is to send the documents over an HTTP connection to a Solr server, which
should then add them to its index.
I thought this would work, as the Document class implements Serializable,
as do the Fields. Unfortunately,
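Java serialization of Document objects is unlikely to help here, since Solr's HTTP interface does not accept serialized Java objects; the usual route is to post documents in Solr's XML update format (or to use the SolrJ client). Below is a minimal stdlib-only sketch of building such an XML message; the field names and the update URL mentioned in the comment are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: render document fields as the <add><doc>...</doc></add> XML
// message that Solr's /update handler accepts, instead of trying to
// Java-serialize Lucene Document objects.
public class SolrXmlBuilder {

    // Escape the five XML special characters so field values stay well-formed.
    static String escape(String s) {
        return s.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace("\"", "&quot;")
                .replace("'", "&apos;");
    }

    // Render one document's fields as a Solr add message.
    public static String toAddXml(Map<String, String> fields) {
        StringBuilder sb = new StringBuilder("<add><doc>");
        for (Map.Entry<String, String> e : fields.entrySet()) {
            sb.append("<field name=\"").append(escape(e.getKey())).append("\">")
              .append(escape(e.getValue()))
              .append("</field>");
        }
        sb.append("</doc></add>");
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> doc = new LinkedHashMap<>();
        doc.put("id", "1");
        doc.put("title", "Hello Solr");
        System.out.println(toAddXml(doc));
        // To index it, POST this body with Content-Type: text/xml to
        // e.g. http://localhost:8983/solr/update (hypothetical URL), for
        // instance via java.net.HttpURLConnection, then send a <commit/>.
    }
}
```

After posting the add message, a separate `<commit/>` message makes the new documents visible to searches.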