Hi all,
I am planning to use Lucene (not in a cluster) for indexing and querying a
good volume of data. The use case is 10-20 documents per second (each with
roughly 15-20 fields) while querying in parallel. Below is the approach I am
planning to take; can anyone please let me know from their past experience if
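A common pattern for this kind of workload (steady indexing with concurrent queries) is Lucene's near-real-time search: one shared IndexWriter plus a SearcherManager that hands refreshed searchers to the query threads. A minimal sketch, assuming Lucene 6.x on the classpath; the class and field names are illustrative, and the Directory choice is left to the caller:

```java
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.SearcherFactory;
import org.apache.lucene.search.SearcherManager;
import org.apache.lucene.store.Directory;

// One IndexWriter shared by the indexing thread(s); a SearcherManager hands
// out (and recycles) near-real-time IndexSearchers for the query threads.
public class NrtIndex {
    private final IndexWriter writer;
    private final SearcherManager manager;

    public NrtIndex(Directory dir) throws IOException {
        writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
        manager = new SearcherManager(writer, true, new SearcherFactory());
    }

    // Called from the indexing side; no commit is needed per document.
    public void index(Document doc) throws IOException {
        writer.addDocument(doc);
    }

    // Called from the query side. maybeRefresh() is cheap when nothing has
    // changed; in production it is usually driven by a background timer
    // rather than being on the per-query path.
    public int count(Query q) throws IOException {
        manager.maybeRefresh();
        IndexSearcher searcher = manager.acquire();
        try {
            return searcher.count(q);
        } finally {
            manager.release(searcher);
        }
    }
}
```

At 10-20 documents/second a single writer is far below Lucene's limits; a periodic writer.commit() (e.g. every few seconds) bounds the amount of unflushed work without putting a commit on the per-document path.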
Peyman: I'll contact you off-list to try and address your specific
problem.
As a general reminder for all users: if you need help with the mailing
list, step #1 should be to email the automated help system via
java-user-help@lucene (identified in the Mailing-List and List-Help
MIME mail headers
Hi,
I am not sure who to report this to, but I have tried many times now to
unsubscribe from the Lucene lists (including java-user@lucene.apache.org),
without success. I have sent an unsubscribe email to all of the list servers
on this page, with no bounces:
https://lucene.apache.org/core/discussion
Let me explain better: I created my own class, MyStoredField.java, for
saving a value.
When reading it back, I used "instanceof" to check whether it is a normal
stored field or not.
But in the debugger, the class of the field I read from the document is
StoredField, not MyStoredField. Is there no way to do this?
2016-09-07 16:
I have a doubt.
I created a class for storing a special value.
When I store the document, saving this field works fine.
But when I read the document back, the field is found, yet it is a different
class (StoredField instead of MyStoredField).
Is this expected, or did I misread the other examples?
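For what it's worth, this is expected: Lucene's stored-fields format persists only the field's name, type, and value, and on retrieval the reader rebuilds plain StoredField objects, so a custom subclass can never come back from a stored document. A stdlib-only sketch of why the subclass identity is lost (Holder and MyHolder are hypothetical stand-ins for StoredField and MyStoredField):

```java
import java.nio.charset.StandardCharsets;

// Stands in for StoredField: a generic value holder.
class Holder {
    final String value;
    Holder(String value) { this.value = value; }
}

// Stands in for MyStoredField: a custom subclass.
class MyHolder extends Holder {
    MyHolder(String value) { super(value); }
}

public class RoundTrip {
    // "Store": persist only the value, as a stored-fields codec would.
    static byte[] store(Holder f) {
        return f.value.getBytes(StandardCharsets.UTF_8);
    }

    // "Read": the reader knows only the generic type, so it can only
    // reconstruct a Holder, never the original subclass.
    static Holder read(byte[] bytes) {
        return new Holder(new String(bytes, StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        Holder back = read(store(new MyHolder("special value")));
        System.out.println(back instanceof MyHolder); // false: subclass identity is gone
        System.out.println(back.value);               // the value itself survives
    }
}
```

So the usual approach is to keep the special behavior outside the field class (e.g. encode/decode the special value yourself around a plain StoredField) rather than subclassing it.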
Hi,
TermVectors perhaps?
Ahmet
On Tuesday, September 6, 2016 4:21 PM, szzoli wrote:
Hi All,
How can I list all the terms from a document? I also need the count of each
term per document.
I use Lucene 6.2. I found some solutions for older versions, but these didn't
work with 6.2.
Thank you in advance.
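In case it helps the thread: if the field was indexed with term vectors enabled, one way to list a document's terms with their per-document counts in 6.2 is via IndexReader.getTermVector. A sketch, assuming Lucene 6.2 on the classpath; the class name and the "field" parameter are illustrative:

```java
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

public class TermCounts {
    // Lists every term of one document's field with its in-document count.
    // Works only if the field was indexed with term vectors enabled
    // (FieldType.setStoreTermVectors(true)).
    public static Map<String, Long> termCounts(IndexReader reader, int docId, String field)
            throws IOException {
        Map<String, Long> counts = new LinkedHashMap<>();
        Terms vector = reader.getTermVector(docId, field);
        if (vector == null) {
            return counts; // no term vector stored for this doc/field
        }
        TermsEnum termsEnum = vector.iterator();
        for (BytesRef term = termsEnum.next(); term != null; term = termsEnum.next()) {
            // Within a single-document term vector, totalTermFreq() is the
            // term's frequency in that one document.
            counts.put(term.utf8ToString(), termsEnum.totalTermFreq());
        }
        return counts;
    }
}
```

If term vectors were not stored, the alternatives are re-analyzing the stored text or walking the inverted index per term, both of which are considerably more expensive per document.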
Thank you.
Can you please share a code snippet for dealing with these characters?
I tried but couldn't get it to work.
On Tue, Sep 6, 2016 at 10:59 PM, Iker Huerga wrote:
> Here is the thing: you are probably using the StandardAnalyzer, so those
> special characters are going to be removed at indexing time.
>
>
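To make that concrete, here is a small comparison sketch, assuming Lucene 6.x (core plus analyzers-common) on the classpath: StandardAnalyzer splits on and drops characters such as '+' and '&' (and lowercases), while WhitespaceAnalyzer only splits on whitespace and keeps them. The sample strings and the "f" field name are illustrative:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class AnalyzerCompare {
    // Runs one analyzer over a string and collects the emitted tokens.
    public static List<String> tokens(Analyzer analyzer, String text) throws IOException {
        List<String> out = new ArrayList<>();
        try (TokenStream ts = analyzer.tokenStream("f", text)) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                out.add(term.toString());
            }
            ts.end();
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        String text = "C++ AT&T";
        // StandardAnalyzer splits on '+'/'&' and lowercases, so the original
        // "C++" and "AT&T" are not searchable verbatim.
        System.out.println(tokens(new StandardAnalyzer(), text));
        // WhitespaceAnalyzer only splits on whitespace, keeping them intact.
        System.out.println(tokens(new WhitespaceAnalyzer(), text)); // [C++, AT&T]
    }
}
```

The usual fix is to pick (or build) an analyzer that matches how you want those characters treated, and to use the same analyzer at index and query time.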