Re: Store a query in a database for later use

2012-05-18 Thread Ahmet Arslan
> 2. toString() doesn't always generate a query that the
> QueryParser can parse.

I remember a similar discussion; I think the XML Query Parser is a better fit for
this use case.

http://www.lucidimagination.com/blog/2009/02/22/exploring-query-parsers/
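
For what it's worth, a rough sketch of re-parsing a stored XML query with the contrib
xml-query-parser module (untested; the field name and XML snippet below are made-up
examples, not anything from the earlier thread):

import java.io.ByteArrayInputStream;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;
import org.apache.lucene.xmlparser.CoreParser;

public class StoredXmlQueryExample {
    public static void main(String[] args) throws Exception {
        // The XML form of the query is what you would store in the database.
        String xml = "<TermQuery fieldName=\"contents\">lucene</TermQuery>";

        CoreParser parser = new CoreParser("contents",
                new StandardAnalyzer(Version.LUCENE_36));
        Query q = parser.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        System.out.println(q); // re-parsed Query, no toString() round-trip involved
    }
}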


-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Better Way of calculating Cosine Similarity between documents

2012-05-18 Thread Kasun Perera
Hi all

I’m indexing a collection of documents with Lucene, specifying TermVector at
indexing time. Then I retrieve the terms and their term frequencies by
reading the index and calculate a TF-IDF score vector for each document.
Using these TF-IDF vectors, I calculate the pairwise cosine similarity between
documents using the equation here:
http://en.wikipedia.org/wiki/Cosine_similarity.

This is my problem:

Say I have two identical documents “A” and “B” in this collection (A and B
have more than 200 sentences).

If I calculate the pairwise cosine similarity between A and B, it gives a
cosine value of 1, which is perfectly OK.

But if I remove a single sentence from doc “B”, the cosine similarity between
the two documents drops to around 0.85. The documents are still almost
identical, but the cosine value does not reflect that. I understand the
problem is with the equation I’m using.

Is there a better way / a better equation for calculating cosine
similarity between documents?
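
For reference, this is essentially the computation I mean (a minimal sketch, assuming
the tf-idf weights have already been collected into term -> weight maps):

import java.util.Map;

// Sketch only: cosine similarity between two tf-idf vectors held as term -> weight maps.
public static double cosine(Map<String, Double> a, Map<String, Double> b) {
    double dot = 0.0, normA = 0.0, normB = 0.0;
    for (Map.Entry<String, Double> e : a.entrySet()) {
        Double w = b.get(e.getKey());
        if (w != null) {
            dot += e.getValue() * w;              // numerator: only terms present in both docs
        }
        normA += e.getValue() * e.getValue();     // |A|^2 over all of A's terms
    }
    for (double w : b.values()) {
        normB += w * w;                           // |B|^2 over all of B's terms
    }
    return (normA == 0.0 || normB == 0.0)
            ? 0.0 : dot / (Math.sqrt(normA) * Math.sqrt(normB));
}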

-- 
Regards

Kasun Perera


Re: Better Way of calculating Cosine Similarity between documents

2012-05-18 Thread Akos Tajti
Thanks!





On Fri, May 18, 2012 at 11:19 AM, Kasun Perera  wrote:

> Hi all
>
> I’m indexing collection of documents using Lucene specifying TermVector at
> the indexing time. Then I retrieve terms and their term frequencies by
> reading the index and calculate TF-IDF scores vector for each document.
> Then using TF-IDF vectors, I calculate pairwise cosine similarity between
> documents using the equation here
> http://en.wikipedia.org/wiki/Cosine_similarity.
>
> This is my problem
>
> Say I have two identical documents “A” and “B” in this collection (A and B
> have more than 200 sentences).
>
> If I calculate pairwise cosine similarity between A and B it gives me
> cosine value=1 which is perfectly OK.
>
> But If I remove a single sentence from Doc “B”, it gives me cosine
> similarity value around 0.85 between these two documents. The documents are
> almost similar but cosine values are not. I understand the problem is with
> the equation that I’m using.
>
> Is there better way/ better equation that I can use for calculating cosine
> similarity between documents?
>
> --
> Regards
>
> Kasun Perera
>


RE: NullPointerException using IndexReader.termDocs when there are no matches

2012-05-18 Thread Edward W. Rouse
Thanks, I missed that. And the API doc fails to mention it, though it is
pretty standard for a next() method.
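
For the archive, a guarded version of the original snippet (just a sketch, using the
same helpers as before):

IndexReader reader = this.getReader(index);
int d = -1;
TermDocs td = reader.termDocs(this.createIdTerm(id));
if (td != null) {
    // next() returns false when no document matches the term;
    // doc() and freq() are only valid after next() has returned true
    if (td.next()) {
        d = td.doc();
    }
    td.close();
}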

> -Original Message-
> From: Michael McCandless [mailto:luc...@mikemccandless.com]
> Sent: Thursday, May 17, 2012 6:20 PM
> To: java-user@lucene.apache.org
> Subject: Re: NullPointerException using IndexReader.termDocs when there
> are no matches
> 
> I think you need to pay attention to what td.next() returned; I
> suspect in your case it returned false which means you cannot use any
> of its APIs (.doc(), .freq(), etc.) after that.
> 
> Mike McCandless
> 
> http://blog.mikemccandless.com
> 
> On Thu, May 17, 2012 at 5:52 PM, Edward W. Rouse
>  wrote:
> > Lucene 3.6, java 1.6 I get the following:
> >
> > java.lang.NullPointerException at
> >
> org.apache.lucene.index.DirectoryReader$MultiTermDocs.doc(DirectoryRead
> er.ja
> > va:1179)
> >
> > when running this code:
> >
> > IndexReader reader = this.getReader(index);
> > int d = -1;
> > TermDocs td = reader.termDocs(this.createIdTerm(id));
> > if(td != null)
> > {
> >  td.next();
> >  d = td.doc();
> > }
> >
> > private Term createIdTerm(String id)
> > {
> >  return new Term(Constants.DEFAULT_ID_FIELD, id);
> > }
> >
> > At the time this code runs I would expect that td would be null since
> there
> > are no documents in the index that match the term, but that is not
> the case.
> > Instead I get the NPE when trying td.doc(). I can wrap the code in a
> > try/catch for that line, but I think there must be a better way to
> determine
> > if I got any matches for a Term.
> >
> > Edward W. Rouse
> > Comsquared System, Inc.
> > 770-734-5301
> >
> >
> >
> >
> > -
> > To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: java-user-h...@lucene.apache.org
> >
> 
> -
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org



-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Performance of storing data in Lucene vs other (No)SQL Databases

2012-05-18 Thread Konstantyn Smirnov
Hi all,

apologies, if this question was already asked before.

If I need to store a lot of data (say, millions of documents), what would
perform better in terms of reads, writes, scalability, etc.: Lucene with
stored fields (Field.Store.YES), or a NoSQL DB like Mongo or Couch?

Does it make sense to index and store the data separately?

TIA

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Performance-of-storing-data-in-Lucene-vs-other-No-SQL-Databases-tp3984704.html
Sent from the Lucene - Java Users mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: Performance of storing data in Lucene vs other (No)SQL Databases

2012-05-18 Thread Glen Newton
Storing content in large indexes can significantly add to index time.

The model of indexing fields only in Lucene, storing just a key, and keeping
the content in some other container (DBMS, NoSQL, etc.) with the key as the
lookup is almost a necessity for this use case, unless you have a completely
static index (create once and never add to it).
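
Very roughly, that pattern looks like this (a sketch; the field names are made up):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

// Sketch: index the searchable text, but store only the external key.
void addRecord(IndexWriter writer, String externalKey, String contentText) throws Exception {
    Document doc = new Document();
    doc.add(new Field("id", externalKey,
            Field.Store.YES, Field.Index.NOT_ANALYZED_NO_NORMS));
    doc.add(new Field("body", contentText,
            Field.Store.NO, Field.Index.ANALYZED));
    writer.addDocument(doc);
    // At query time: String key = searcher.doc(scoreDoc.doc).get("id");
    // then fetch the full record from the external store (DBMS, NoSQL, ...) by key.
}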

Thanks,
Glen

On Fri, May 18, 2012 at 10:44 AM, Konstantyn Smirnov
 wrote:
> Hi all,
>
> apologies, if this question was already asked before.
>
> If I need to store a lot of data (say, millions of documents), what would
> perform better (in terms of reads/writes/scalability etc.): Lucene with
> stored fields (Field.Store.YES) or another NoSql DB like Mongo or Couch?
>
> Does it make sense to index and store the data separately?
>
> TIA
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Performance-of-storing-data-in-Lucene-vs-other-No-SQL-Databases-tp3984704.html
> Sent from the Lucene - Java Users mailing list archive at Nabble.com.
>
> -
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org
>



-- 
-
http://zzzoot.blogspot.com/
-

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: Better Way of calculating Cosine Similarity between documents

2012-05-18 Thread nemeskey . david

Hi,

can you provide a minimal example (max. 5 sentences)? 1 -> 0.85 seems a rather
big decrease in score to me, so unless you removed the longest sentence with
the rarest words in the collection, I smell a bug, e.g. you forgot to remove
the sentence's contribution from the denominator as well. It would also be a
good idea to compute the similarity without IDF weighting, to see if you
experience a similar effect.
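
If it helps with that check, the raw counts can be dumped per document roughly like
this (a sketch; it assumes the field was indexed with term vectors, and the field name
is a placeholder):

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.TermFreqVector;

// Sketch: print the raw term frequencies of one document, with no IDF involved.
void dumpTermFreqs(IndexReader reader, int docId, String field) throws Exception {
    TermFreqVector tv = reader.getTermFreqVector(docId, field);
    if (tv == null) {
        System.out.println("no term vector stored for doc " + docId);
        return;
    }
    String[] terms = tv.getTerms();
    int[] freqs = tv.getTermFrequencies();
    for (int i = 0; i < terms.length; i++) {
        System.out.println(terms[i] + "\t" + freqs[i]);
    }
}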


Regards,
David Nemeskey

Quoting Kasun Perera :


Hi all

I’m indexing collection of documents using Lucene specifying TermVector at
the indexing time. Then I retrieve terms and their term frequencies by
reading the index and calculate TF-IDF scores vector for each document.
Then using TF-IDF vectors, I calculate pairwise cosine similarity between
documents using the equation here
http://en.wikipedia.org/wiki/Cosine_similarity.

This is my problem

Say I have two identical documents “A” and “B” in this collection (A and B
have more than 200 sentences).

If I calculate pairwise cosine similarity between A and B it gives me
cosine value=1 which is perfectly OK.

But If I remove a single sentence from Doc “B”, it gives me cosine
similarity value around 0.85 between these two documents. The documents are
almost similar but cosine values are not. I understand the problem is with
the equation that I’m using.

Is there better way/ better equation that I can use for calculating cosine
similarity between documents?

--
Regards

Kasun Perera






This message was sent using IMP, the Internet Messaging Program.


-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: NullPointerException using IndexReader.termDocs when there are no matches

2012-05-18 Thread Michael McCandless
OK I committed an improvement to the 3.6.x javadocs (in case we do a
3.6.1).  Thanks!

Mike McCandless

http://blog.mikemccandless.com

On Fri, May 18, 2012 at 9:37 AM, Edward W. Rouse  wrote:
> Thanks, I missed that. And the API doc fails to mention it, though it is
> pretty standard for a next() method.
>
>> -Original Message-
>> From: Michael McCandless [mailto:luc...@mikemccandless.com]
>> Sent: Thursday, May 17, 2012 6:20 PM
>> To: java-user@lucene.apache.org
>> Subject: Re: NullPointerException using IndexReader.termDocs when there
>> are no matches
>>
>> I think you need to pay attention to what td.next() returned; I
>> suspect in your case it returned false which means you cannot use any
>> of its APIs (.doc(), .freq(), etc.) after that.
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> On Thu, May 17, 2012 at 5:52 PM, Edward W. Rouse
>>  wrote:
>> > Lucene 3.6, java 1.6 I get the following:
>> >
>> > java.lang.NullPointerException at
>> >
>> org.apache.lucene.index.DirectoryReader$MultiTermDocs.doc(DirectoryRead
>> er.ja
>> > va:1179)
>> >
>> > when running this code:
>> >
>> > IndexReader reader = this.getReader(index);
>> > int d = -1;
>> > TermDocs td = reader.termDocs(this.createIdTerm(id));
>> > if(td != null)
>> > {
>> >  td.next();
>> >  d = td.doc();
>> > }
>> >
>> > private Term createIdTerm(String id)
>> > {
>> >  return new Term(Constants.DEFAULT_ID_FIELD, id);
>> > }
>> >
>> > At the time this code runs I would expect that td would be null since
>> there
>> > are no documents in the index that match the term, but that is not
>> the case.
>> > Instead I get the NPE when trying td.doc(). I can wrap the code in a
>> > try/catch for that line, but I think there must be a better way to
>> determine
>> > if I got any matches for a Term.
>> >
>> > Edward W. Rouse
>> > Comsquared System, Inc.
>> > 770-734-5301
>> >
>> >
>> >
>> >
>> > -
>> > To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
>> > For additional commands, e-mail: java-user-h...@lucene.apache.org
>> >
>>
>> -
>> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: java-user-h...@lucene.apache.org
>
>
>
> -
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: old fashioned....."Too many open files"!

2012-05-18 Thread Michel Blase
This is the code in charge of managing the Lucene index. Thanks for your
help!



package luz.aurora.lucene;

import java.io.File;
import java.io.IOException;
import java.util.*;
import luz.aurora.search.ExtendedQueryParser;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.*;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.search.highlight.SimpleHTMLFormatter;
import org.apache.lucene.search.highlight.SimpleSpanFragmenter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;


public class LuceneManager {

private HashMap<Integer, String> IndexesPaths;
private HashMap<Integer, IndexWriter> Writers;

private int CurrentOpenIndex_ID;
private String CurrentOpenIndex_TablePrefix;

public LuceneManager(int CurrentOpenIndex_ID, String CurrentOpenIndex_TablePrefix,
        HashMap<Integer, String> IndexesPaths) throws Exception {
this.CurrentOpenIndex_ID = CurrentOpenIndex_ID;
this.IndexesPaths = IndexesPaths;
this.Writers = new HashMap<Integer, IndexWriter>();
this.CurrentOpenIndex_TablePrefix = CurrentOpenIndex_TablePrefix;

SetUpWriters();
}

private void SetUpWriters() throws Exception {
Set set = IndexesPaths.entrySet();
Iterator i = set.iterator();

while(i.hasNext()){
Map.Entry index = (Map.Entry)i.next();
int id = (Integer)index.getKey();
String path = (String)index.getValue();

File app = new File(path);
Directory dir = FSDirectory.open(app);
IndexWriterConfig config = new
IndexWriterConfig(LuceneVersion.CurrentVersion,new
StandardAnalyzer(LuceneVersion.CurrentVersion));

//config.setMaxBufferedDocs(50);
config.setRAMBufferSizeMB(400);
TieredMergePolicy mp =
(TieredMergePolicy)config.getMergePolicy();
mp.setUseCompoundFile(true);
config.setMergePolicy(mp);

/*
LogMergePolicy lmp = (LogMergePolicy)config.getMergePolicy();
lmp.setUseCompoundFile(true);
lmp.setMaxMergeDocs(1);
config.setMergePolicy(lmp);
*/

Writers.put(id, new IndexWriter(dir,config));
}
}

public void AddDocument(int IndexId,Document doc,Analyzer analyzer)
throws CorruptIndexException, IOException {
IndexWriter im = Writers.get(IndexId);
im.addDocument(doc, analyzer);
}

public void AddDocument(Document doc,Analyzer analyzer) throws
CorruptIndexException, IOException {
IndexWriter im = Writers.get(this.CurrentOpenIndex_ID);
im.addDocument(doc, analyzer);
}

public void DeleteDoc(int IndexId,int SegmentIdFromDb) throws
CorruptIndexException, IOException {
IndexWriter im = Writers.get(IndexId);
Term term = new Term("SegmentID",Integer.toString(SegmentIdFromDb));
im.deleteDocuments(term);
}

public void DeleteDocuments(String query) throws ParseException,
CorruptIndexException, IOException {

ExtendedQueryParser parser = new
ExtendedQueryParser(LuceneVersion.CurrentVersion,"ID",new
StandardAnalyzer(LuceneVersion.CurrentVersion));
Query q = parser.parse(query);

Set set = Writers.entrySet();
Iterator i = set.iterator();

while(i.hasNext()){
Map.Entry app = (Map.Entry)i.next();
IndexWriter im = (IndexWriter)app.getValue();
im.deleteDocuments(q);
}
}

private IndexSearcher getSearcher() throws CorruptIndexException,
IOException {
int NumberOfIndexes = Writers.size();

ArrayList<IndexReader> readers = new ArrayList<IndexReader>();
IndexReader[] readerList = new IndexReader[NumberOfIndexes];

Set set = Writers.entrySet();
Iterator i = set.iterator();
while(i.hasNext()){
Map.Entry index = (Map.Entry)i.next();
IndexWriter iw = (IndexWriter)index.getValue();
readers.add(IndexReader.open(iw, true));
}

MultiReader mr = new MultiReader(readers.toArray(readerList));
return new IndexSearcher(mr);
}

public void close() throws CorruptIndexException, IOException {
Set set = Writers.entrySet();
Iterator i = set.iterator();
while(i.hasNext()){
Map.Entry index = (Map.Entry)i.next();
IndexWriter iw = (IndexWriter)index.getValue();
iw.close();
}
}

public void commit() throws CorruptIndexException, IOException,
Exception {
Set set = Writers.entrySet();
Iterator i = set.iterator();
while(i.hasNext()){
Map.Entry index = (Map.Entry)i.next();
  

Re: old fashioned....."Too many open files"!

2012-05-18 Thread Ian Lea
You may need to cut it down to something simpler, but I can't see any
reader.close() calls.


--
Ian.


On Fri, May 18, 2012 at 5:47 PM, Michel Blase  wrote:
> This is the code in charge of managing the Lucene index. Thanks for your
> help!
>
>
>
> package luz.aurora.lucene;
>
> import java.io.File;
> import java.io.IOException;
> import java.util.*;
> import luz.aurora.search.ExtendedQueryParser;
> import org.apache.lucene.analysis.Analyzer;
> import org.apache.lucene.analysis.standard.StandardAnalyzer;
> import org.apache.lucene.document.Document;
> import org.apache.lucene.index.*;
> import org.apache.lucene.queryParser.ParseException;
> import org.apache.lucene.search.IndexSearcher;
> import org.apache.lucene.search.Query;
> import org.apache.lucene.search.TopDocs;
> import org.apache.lucene.search.highlight.Highlighter;
> import org.apache.lucene.search.highlight.QueryScorer;
> import org.apache.lucene.search.highlight.SimpleHTMLFormatter;
> import org.apache.lucene.search.highlight.SimpleSpanFragmenter;
> import org.apache.lucene.store.Directory;
> import org.apache.lucene.store.FSDirectory;
>
>
> public class LuceneManager {
>
>    private HashMap IndexesPaths;
>    private HashMap Writers;
>
>    private int CurrentOpenIndex_ID;
>    private String CurrentOpenIndex_TablePrefix;
>
>    public  LuceneManager(int CurrentOpenIndex_ID,String
> CurrentOpenIndex_TablePrefix, HashMap IndexesPaths) throws
> Exception {
>        this.CurrentOpenIndex_ID = CurrentOpenIndex_ID;
>        this.IndexesPaths = IndexesPaths;
>        this.Writers = new HashMap();
>        this.CurrentOpenIndex_TablePrefix = CurrentOpenIndex_TablePrefix;
>
>        SetUpWriters();
>    }
>
>    private void SetUpWriters() throws Exception {
>        Set set = IndexesPaths.entrySet();
>        Iterator i = set.iterator();
>
>        while(i.hasNext()){
>            Map.Entry index = (Map.Entry)i.next();
>            int id = (Integer)index.getKey();
>            String path = (String)index.getValue();
>
>            File app = new File(path);
>            Directory dir = FSDirectory.open(app);
>            IndexWriterConfig config = new
> IndexWriterConfig(LuceneVersion.CurrentVersion,new
> StandardAnalyzer(LuceneVersion.CurrentVersion));
>
>            //config.setMaxBufferedDocs(50);
>            config.setRAMBufferSizeMB(400);
>            TieredMergePolicy mp =
> (TieredMergePolicy)config.getMergePolicy();
>            mp.setUseCompoundFile(true);
>            config.setMergePolicy(mp);
>
>            /*
>            LogMergePolicy lmp = (LogMergePolicy)config.getMergePolicy();
>            lmp.setUseCompoundFile(true);
>            lmp.setMaxMergeDocs(1);
>            config.setMergePolicy(lmp);
>            */
>
>            Writers.put(id, new IndexWriter(dir,config));
>        }
>    }
>
>    public void AddDocument(int IndexId,Document doc,Analyzer analyzer)
> throws CorruptIndexException, IOException {
>        IndexWriter im = Writers.get(IndexId);
>        im.addDocument(doc, analyzer);
>    }
>
>    public void AddDocument(Document doc,Analyzer analyzer) throws
> CorruptIndexException, IOException {
>        IndexWriter im = Writers.get(this.CurrentOpenIndex_ID);
>        im.addDocument(doc, analyzer);
>    }
>
>    public void DeleteDoc(int IndexId,int SegmentIdFromDb) throws
> CorruptIndexException, IOException {
>        IndexWriter im = Writers.get(IndexId);
>        Term term = new Term("SegmentID",Integer.toString(SegmentIdFromDb));
>        im.deleteDocuments(term);
>    }
>
>    public void DeleteDocuments(String query) throws ParseException,
> CorruptIndexException, IOException {
>
>        ExtendedQueryParser parser = new
> ExtendedQueryParser(LuceneVersion.CurrentVersion,"ID",new
> StandardAnalyzer(LuceneVersion.CurrentVersion));
> Query q = parser.parse(query);
>
>        Set set = Writers.entrySet();
>        Iterator i = set.iterator();
>
>        while(i.hasNext()){
>            Map.Entry app = (Map.Entry)i.next();
>            IndexWriter im = (IndexWriter)app.getValue();
>            im.deleteDocuments(q);
>        }
>    }
>
>    private IndexSearcher getSearcher() throws CorruptIndexException,
> IOException {
>        int NumberOfIndexes = Writers.size();
>
>        ArrayList readers = new ArrayList();
>        IndexReader[] readerList = new IndexReader[NumberOfIndexes];
>
>        Set set = Writers.entrySet();
>        Iterator i = set.iterator();
>        while(i.hasNext()){
>            Map.Entry index = (Map.Entry)i.next();
>            IndexWriter iw = (IndexWriter)index.getValue();
>            readers.add(IndexReader.open(iw, true));
>        }
>
>        MultiReader mr = new MultiReader(readers.toArray(readerList));
>        return new IndexSearcher(mr);
>    }
>
>    public void close() throws CorruptIndexException, IOException {
>        Set set = Writers.entrySet();
>        Iterator i = set.iterator();
>        while(i.hasNext()){
>            Map.Entry index = (Map.Entry)i.next();
>   

Re: old fashioned....."Too many open files"!

2012-05-18 Thread Michel Blase
Thanks Ian,

the point is that I keep the readers open to share them across searches. Is
this wrong?


On Fri, May 18, 2012 at 9:58 AM, Ian Lea  wrote:

> You may need to cut it down to something simpler, but I can't see any
> reader.close() calls.
>
>
> --
> Ian.
>
>
> On Fri, May 18, 2012 at 5:47 PM, Michel Blase  wrote:
> > This is the code in charge of managing the Lucene index. Thanks for your
> > help!
> >
> >
> >
> > package luz.aurora.lucene;
> >
> > import java.io.File;
> > import java.io.IOException;
> > import java.util.*;
> > import luz.aurora.search.ExtendedQueryParser;
> > import org.apache.lucene.analysis.Analyzer;
> > import org.apache.lucene.analysis.standard.StandardAnalyzer;
> > import org.apache.lucene.document.Document;
> > import org.apache.lucene.index.*;
> > import org.apache.lucene.queryParser.ParseException;
> > import org.apache.lucene.search.IndexSearcher;
> > import org.apache.lucene.search.Query;
> > import org.apache.lucene.search.TopDocs;
> > import org.apache.lucene.search.highlight.Highlighter;
> > import org.apache.lucene.search.highlight.QueryScorer;
> > import org.apache.lucene.search.highlight.SimpleHTMLFormatter;
> > import org.apache.lucene.search.highlight.SimpleSpanFragmenter;
> > import org.apache.lucene.store.Directory;
> > import org.apache.lucene.store.FSDirectory;
> >
> >
> > public class LuceneManager {
> >
> >private HashMap IndexesPaths;
> >private HashMap Writers;
> >
> >private int CurrentOpenIndex_ID;
> >private String CurrentOpenIndex_TablePrefix;
> >
> >public  LuceneManager(int CurrentOpenIndex_ID,String
> > CurrentOpenIndex_TablePrefix, HashMap IndexesPaths)
> throws
> > Exception {
> >this.CurrentOpenIndex_ID = CurrentOpenIndex_ID;
> >this.IndexesPaths = IndexesPaths;
> >this.Writers = new HashMap();
> >this.CurrentOpenIndex_TablePrefix = CurrentOpenIndex_TablePrefix;
> >
> >SetUpWriters();
> >}
> >
> >private void SetUpWriters() throws Exception {
> >Set set = IndexesPaths.entrySet();
> >Iterator i = set.iterator();
> >
> >while(i.hasNext()){
> >Map.Entry index = (Map.Entry)i.next();
> >int id = (Integer)index.getKey();
> >String path = (String)index.getValue();
> >
> >File app = new File(path);
> >Directory dir = FSDirectory.open(app);
> >IndexWriterConfig config = new
> > IndexWriterConfig(LuceneVersion.CurrentVersion,new
> > StandardAnalyzer(LuceneVersion.CurrentVersion));
> >
> >//config.setMaxBufferedDocs(50);
> >config.setRAMBufferSizeMB(400);
> >TieredMergePolicy mp =
> > (TieredMergePolicy)config.getMergePolicy();
> >mp.setUseCompoundFile(true);
> >config.setMergePolicy(mp);
> >
> >/*
> >LogMergePolicy lmp = (LogMergePolicy)config.getMergePolicy();
> >lmp.setUseCompoundFile(true);
> >lmp.setMaxMergeDocs(1);
> >config.setMergePolicy(lmp);
> >*/
> >
> >Writers.put(id, new IndexWriter(dir,config));
> >}
> >}
> >
> >public void AddDocument(int IndexId,Document doc,Analyzer analyzer)
> > throws CorruptIndexException, IOException {
> >IndexWriter im = Writers.get(IndexId);
> >im.addDocument(doc, analyzer);
> >}
> >
> >public void AddDocument(Document doc,Analyzer analyzer) throws
> > CorruptIndexException, IOException {
> >IndexWriter im = Writers.get(this.CurrentOpenIndex_ID);
> >im.addDocument(doc, analyzer);
> >}
> >
> >public void DeleteDoc(int IndexId,int SegmentIdFromDb) throws
> > CorruptIndexException, IOException {
> >IndexWriter im = Writers.get(IndexId);
> >Term term = new
> Term("SegmentID",Integer.toString(SegmentIdFromDb));
> >im.deleteDocuments(term);
> >}
> >
> >public void DeleteDocuments(String query) throws ParseException,
> > CorruptIndexException, IOException {
> >
> >ExtendedQueryParser parser = new
> > ExtendedQueryParser(LuceneVersion.CurrentVersion,"ID",new
> > StandardAnalyzer(LuceneVersion.CurrentVersion));
> > Query q = parser.parse(query);
> >
> >Set set = Writers.entrySet();
> >Iterator i = set.iterator();
> >
> >while(i.hasNext()){
> >Map.Entry app = (Map.Entry)i.next();
> >IndexWriter im = (IndexWriter)app.getValue();
> >im.deleteDocuments(q);
> >}
> >}
> >
> >private IndexSearcher getSearcher() throws CorruptIndexException,
> > IOException {
> >int NumberOfIndexes = Writers.size();
> >
> >ArrayList readers = new ArrayList();
> >IndexReader[] readerList = new IndexReader[NumberOfIndexes];
> >
> >Set set = Writers.entrySet();
> >Iterator i = set.iterator();
> >while(i.hasNext()){
> >Map.Entry index = (Map.Entry)i.next();
> >IndexWriter iw = (IndexWriter)index.getVa

Re: old fashioned....."Too many open files"!

2012-05-18 Thread Chris Hostetter

: the point is that I keep the readers open to share them across search. Is
: this wrong?

your goal is fine, but where in your code do you think you are doing that?

I don't see any readers ever being shared. You open new ones (which are
never closed) in every call to getSearcher():

: > >while(i.hasNext()){
: > >Map.Entry index = (Map.Entry)i.next();
: > >IndexWriter iw = (IndexWriter)index.getValue();
: > >readers.add(IndexReader.open(iw, true));
: > >}
: > >
: > >MultiReader mr = new MultiReader(readers.toArray(readerList));
: > >return new IndexSearcher(mr);
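
One way to plug that leak (a rough sketch, untested): release the readers each
search opened, e.g.

// query and numberOfResults are placeholders for whatever Search() already has
IndexSearcher is = getSearcher();
try {
    TopDocs res = is.search(query, numberOfResults);
    // ... use res ...
} finally {
    // closing the MultiReader should also close the sub-readers it wraps
    is.getIndexReader().close();
    is.close();
}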


-Hoss

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: old fashioned....."Too many open files"!

2012-05-18 Thread Michel Blase
also... my problem is indexing!

Preparation:

private void SetUpWriters() throws Exception {
Set set = IndexesPaths.entrySet();
Iterator i = set.iterator();

while(i.hasNext()){
Map.Entry index = (Map.Entry)i.next();
int id = (Integer)index.getKey();
String path = (String)index.getValue();

File app = new File(path);
Directory dir = FSDirectory.open(app);
IndexWriterConfig config = new
IndexWriterConfig(LuceneVersion.CurrentVersion,new
StandardAnalyzer(LuceneVersion.CurrentVersion));

//config.setMaxBufferedDocs(50);
config.setRAMBufferSizeMB(400);
TieredMergePolicy mp =
(TieredMergePolicy)config.getMergePolicy();
mp.setUseCompoundFile(true);
config.setMergePolicy(mp);

/*
LogMergePolicy lmp = (LogMergePolicy)config.getMergePolicy();
lmp.setUseCompoundFile(true);
lmp.setMaxMergeDocs(1);
config.setMergePolicy(lmp);
*/

Writers.put(id, new IndexWriter(dir,config));
}
}


adding document:

public void AddDocument(Document doc,Analyzer analyzer) throws
CorruptIndexException, IOException {
IndexWriter im = Writers.get(this.CurrentOpenIndex_ID);
im.addDocument(doc, analyzer);
}


there's not much more I'm doing!


RE: old fashioned....."Too many open files"!

2012-05-18 Thread Edward W. Rouse
Have you tried adding im.commit() after adding a document? Could be all of
the uncommitted documents are leaving files open.

> -Original Message-
> From: Michel Blase [mailto:mblas...@gmail.com]
> Sent: Friday, May 18, 2012 1:24 PM
> To: java-user@lucene.apache.org
> Subject: Re: old fashioned."Too many open files"!
> 
> also.my problem is indexing!
> 
> Preparation:
> 
> private void SetUpWriters() throws Exception {
> Set set = IndexesPaths.entrySet();
> Iterator i = set.iterator();
> 
> while(i.hasNext()){
> Map.Entry index = (Map.Entry)i.next();
> int id = (Integer)index.getKey();
> String path = (String)index.getValue();
> 
> File app = new File(path);
> Directory dir = FSDirectory.open(app);
> IndexWriterConfig config = new
> IndexWriterConfig(LuceneVersion.CurrentVersion,new
> StandardAnalyzer(LuceneVersion.CurrentVersion));
> 
> //config.setMaxBufferedDocs(50);
> config.setRAMBufferSizeMB(400);
> TieredMergePolicy mp =
> (TieredMergePolicy)config.getMergePolicy();
> mp.setUseCompoundFile(true);
> config.setMergePolicy(mp);
> 
> /*
> LogMergePolicy lmp =
> (LogMergePolicy)config.getMergePolicy();
> lmp.setUseCompoundFile(true);
> lmp.setMaxMergeDocs(1);
> config.setMergePolicy(lmp);
> */
> 
> Writers.put(id, new IndexWriter(dir,config));
> }
> }
> 
> 
> adding document:
> 
> public void AddDocument(Document doc,Analyzer analyzer) throws
> CorruptIndexException, IOException {
> IndexWriter im = Writers.get(this.CurrentOpenIndex_ID);
> im.addDocument(doc, analyzer);
> }
> 
> 
> there's not much more I'm doing!


-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: old fashioned....."Too many open files"!

2012-05-18 Thread Michel Blase
but committing after each insert would be really expensive and unnecessary, no?

On Fri, May 18, 2012 at 10:31 AM, Edward W. Rouse wrote:

> Have you tried adding im.commit() after adding a document? Could be all of
> the uncommitted documents are leaving files open.
>
> > -Original Message-
> > From: Michel Blase [mailto:mblas...@gmail.com]
> > Sent: Friday, May 18, 2012 1:24 PM
> > To: java-user@lucene.apache.org
> > Subject: Re: old fashioned."Too many open files"!
> >
> > also.my problem is indexing!
> >
> > Preparation:
> >
> > private void SetUpWriters() throws Exception {
> > Set set = IndexesPaths.entrySet();
> > Iterator i = set.iterator();
> >
> > while(i.hasNext()){
> > Map.Entry index = (Map.Entry)i.next();
> > int id = (Integer)index.getKey();
> > String path = (String)index.getValue();
> >
> > File app = new File(path);
> > Directory dir = FSDirectory.open(app);
> > IndexWriterConfig config = new
> > IndexWriterConfig(LuceneVersion.CurrentVersion,new
> > StandardAnalyzer(LuceneVersion.CurrentVersion));
> >
> > //config.setMaxBufferedDocs(50);
> > config.setRAMBufferSizeMB(400);
> > TieredMergePolicy mp =
> > (TieredMergePolicy)config.getMergePolicy();
> > mp.setUseCompoundFile(true);
> > config.setMergePolicy(mp);
> >
> > /*
> > LogMergePolicy lmp =
> > (LogMergePolicy)config.getMergePolicy();
> > lmp.setUseCompoundFile(true);
> > lmp.setMaxMergeDocs(1);
> > config.setMergePolicy(lmp);
> > */
> >
> > Writers.put(id, new IndexWriter(dir,config));
> > }
> > }
> >
> >
> > adding document:
> >
> > public void AddDocument(Document doc,Analyzer analyzer) throws
> > CorruptIndexException, IOException {
> > IndexWriter im = Writers.get(this.CurrentOpenIndex_ID);
> > im.addDocument(doc, analyzer);
> > }
> >
> >
> > there's not much more I'm doing!
>
>
> -
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org
>
>


RE: old fashioned....."Too many open files"!

2012-05-18 Thread Edward W. Rouse
I don't know; I do it as a matter of course. But if it fixes the problem,
then at least you know why you are getting the error, and you can work on a
scheme (using counters, maybe) to do regular commits after every 10/20/100
documents.

But you can't fix it until you know why it happens and this would confirm or
eliminate one possible cause.
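
Something along these lines, for example (a sketch only; the interval is arbitrary):

// Sketch: commit every N added documents instead of after every insert.
private int uncommitted = 0;
private static final int COMMIT_INTERVAL = 100; // pick something sensible

public void AddDocument(Document doc, Analyzer analyzer)
        throws CorruptIndexException, IOException {
    IndexWriter im = Writers.get(this.CurrentOpenIndex_ID);
    im.addDocument(doc, analyzer);
    if (++uncommitted >= COMMIT_INTERVAL) {
        im.commit();
        uncommitted = 0;
    }
}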

> -Original Message-
> From: Michel Blase [mailto:mblas...@gmail.com]
> Sent: Friday, May 18, 2012 1:49 PM
> To: java-user@lucene.apache.org
> Subject: Re: old fashioned."Too many open files"!
> 
> but commit after each insert should be really expensive and
> unnecessary! no?
> 
> On Fri, May 18, 2012 at 10:31 AM, Edward W. Rouse
> wrote:
> 
> > Have you tried adding im.commit() after adding a document? Could be
> all of
> > the uncommitted documents are leaving files open.
> >
> > > -Original Message-
> > > From: Michel Blase [mailto:mblas...@gmail.com]
> > > Sent: Friday, May 18, 2012 1:24 PM
> > > To: java-user@lucene.apache.org
> > > Subject: Re: old fashioned."Too many open files"!
> > >
> > > also.my problem is indexing!
> > >
> > > Preparation:
> > >
> > > private void SetUpWriters() throws Exception {
> > > Set set = IndexesPaths.entrySet();
> > > Iterator i = set.iterator();
> > >
> > > while(i.hasNext()){
> > > Map.Entry index = (Map.Entry)i.next();
> > > int id = (Integer)index.getKey();
> > > String path = (String)index.getValue();
> > >
> > > File app = new File(path);
> > > Directory dir = FSDirectory.open(app);
> > > IndexWriterConfig config = new
> > > IndexWriterConfig(LuceneVersion.CurrentVersion,new
> > > StandardAnalyzer(LuceneVersion.CurrentVersion));
> > >
> > > //config.setMaxBufferedDocs(50);
> > > config.setRAMBufferSizeMB(400);
> > > TieredMergePolicy mp =
> > > (TieredMergePolicy)config.getMergePolicy();
> > > mp.setUseCompoundFile(true);
> > > config.setMergePolicy(mp);
> > >
> > > /*
> > > LogMergePolicy lmp =
> > > (LogMergePolicy)config.getMergePolicy();
> > > lmp.setUseCompoundFile(true);
> > > lmp.setMaxMergeDocs(1);
> > > config.setMergePolicy(lmp);
> > > */
> > >
> > > Writers.put(id, new IndexWriter(dir,config));
> > > }
> > > }
> > >
> > >
> > > adding document:
> > >
> > > public void AddDocument(Document doc,Analyzer analyzer) throws
> > > CorruptIndexException, IOException {
> > > IndexWriter im = Writers.get(this.CurrentOpenIndex_ID);
> > > im.addDocument(doc, analyzer);
> > > }
> > >
> > >
> > > there's not much more I'm doing!
> >
> >
> > -
> > To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: java-user-h...@lucene.apache.org
> >
> >


-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Re: old fashioned....."Too many open files"!

2012-05-18 Thread Michel Blase
Ian was right! I didn't notice that before each insert the code was
performing a search!

but I'm not sure how to solve the problem! This is how I changed the code:
after each search I'm closing the IndexSearcher... but still I get too
many open files!


 private IndexSearcher getSearcher() throws CorruptIndexException,
IOException {
int NumberOfIndexes = Writers.size();

ArrayList readers = new ArrayList();
IndexReader[] readerList = new IndexReader[NumberOfIndexes];

Set set = Writers.entrySet();
Iterator i = set.iterator();
while(i.hasNext()){
Map.Entry index = (Map.Entry)i.next();
IndexWriter iw = (IndexWriter)index.getValue();
readers.add(IndexReader.open(iw, true));
}

MultiReader mr = new MultiReader(readers.toArray(readerList));
return new IndexSearcher(mr);
}

public TopDocs Search(String q,Analyzer analyzer,int NumberOfResults)
throws Exception {
ExtendedQueryParser parser = new
ExtendedQueryParser(LuceneVersion.CurrentVersion,"ID",analyzer);
Query query = parser.parse(q);

IndexSearcher is = getSearcher();
TopDocs res = is.search(query, NumberOfResults);
is.close();

return res;
}




On Fri, May 18, 2012 at 11:04 AM, Edward W. Rouse wrote:

> I don't know. I do it as a matter of course. But if it fixes the problem,
> then at least you know why you are getting the error and can work on a
> scheme (using counters maybe), to do regular commits after every 10/20/100
> documents.
>
> But you can't fix it until you know why it happens and this would confirm
> or
> eliminate one possible cause.
>
> > -Original Message-
> > From: Michel Blase [mailto:mblas...@gmail.com]
> > Sent: Friday, May 18, 2012 1:49 PM
> > To: java-user@lucene.apache.org
> > Subject: Re: old fashioned."Too many open files"!
> >
> > but commit after each insert should be really expensive and
> > unnecessary! no?
> >
> > On Fri, May 18, 2012 at 10:31 AM, Edward W. Rouse
> > wrote:
> >
> > > Have you tried adding im.commit() after adding a document? Could be
> > all of
> > > the uncommitted documents are leaving files open.
> > >
> > > > -Original Message-
> > > > From: Michel Blase [mailto:mblas...@gmail.com]
> > > > Sent: Friday, May 18, 2012 1:24 PM
> > > > To: java-user@lucene.apache.org
> > > > Subject: Re: old fashioned."Too many open files"!
> > > >
> > > > also.my problem is indexing!
> > > >
> > > > Preparation:
> > > >
> > > > private void SetUpWriters() throws Exception {
> > > > Set set = IndexesPaths.entrySet();
> > > > Iterator i = set.iterator();
> > > >
> > > > while(i.hasNext()){
> > > > Map.Entry index = (Map.Entry)i.next();
> > > > int id = (Integer)index.getKey();
> > > > String path = (String)index.getValue();
> > > >
> > > > File app = new File(path);
> > > > Directory dir = FSDirectory.open(app);
> > > > IndexWriterConfig config = new
> > > > IndexWriterConfig(LuceneVersion.CurrentVersion,new
> > > > StandardAnalyzer(LuceneVersion.CurrentVersion));
> > > >
> > > > //config.setMaxBufferedDocs(50);
> > > > config.setRAMBufferSizeMB(400);
> > > > TieredMergePolicy mp =
> > > > (TieredMergePolicy)config.getMergePolicy();
> > > > mp.setUseCompoundFile(true);
> > > > config.setMergePolicy(mp);
> > > >
> > > > /*
> > > > LogMergePolicy lmp =
> > > > (LogMergePolicy)config.getMergePolicy();
> > > > lmp.setUseCompoundFile(true);
> > > > lmp.setMaxMergeDocs(1);
> > > > config.setMergePolicy(lmp);
> > > > */
> > > >
> > > > Writers.put(id, new IndexWriter(dir,config));
> > > > }
> > > > }
> > > >
> > > >
> > > > adding document:
> > > >
> > > > public void AddDocument(Document doc,Analyzer analyzer) throws
> > > > CorruptIndexException, IOException {
> > > > IndexWriter im = Writers.get(this.CurrentOpenIndex_ID);
> > > > im.addDocument(doc, analyzer);
> > > > }
> > > >
> > > >
> > > > there's not much more I'm doing!
> > >
> > >
> > > -
> > > To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> > > For additional commands, e-mail: java-user-h...@lucene.apache.org
> > >
> > >
>
>
> -
> To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
> For additional commands, e-mail: java-user-h...@lucene.apache.org
>
>


Re: old fashioned....."Too many open files"!

2012-05-18 Thread Michel Blase
IndexSearcher Lucene 3.6 API:

public void close()
   throws IOException


Note that the underlying IndexReader is not closed, if IndexSearcher was
constructed with IndexSearcher(IndexReader r). If the IndexReader was
supplied implicitly by specifying a directory, then the IndexReader is
closed.


just added:

private void CloseIndexSearcher(IndexSearcher is) throws IOException {
IndexReader[] rl = is.getSubReaders();
for(IndexReader r : rl) {
r.close();
}
is.close();
}

and everything seems fine now!

Sorry for wasting your time! Hope that my stupidity will help someone else!
Michel

On Fri, May 18, 2012 at 11:30 AM, Michel Blase  wrote:

> Ian was right! I didn't notice that before each insert the code was
> performing a search!
>
> but I'm not sure how to solve the problem! This is how I changed the code,
> after each search I'm closing the IndexSearcherbut stillI get too
> many open files!
>
>
>  private IndexSearcher getSearcher() throws CorruptIndexException,
> IOException {
> int NumberOfIndexes = Writers.size();
>
> ArrayList readers = new ArrayList();
> IndexReader[] readerList = new IndexReader[NumberOfIndexes];
>
> Set set = Writers.entrySet();
> Iterator i = set.iterator();
> while(i.hasNext()){
> Map.Entry index = (Map.Entry)i.next();
> IndexWriter iw = (IndexWriter)index.getValue();
> readers.add(IndexReader.open(iw, true));
> }
>
> MultiReader mr = new MultiReader(readers.toArray(readerList));
> return new IndexSearcher(mr);
> }
>
> public TopDocs Search(String q,Analyzer analyzer,int NumberOfResults)
> throws Exception {
> ExtendedQueryParser parser = new
> ExtendedQueryParser(LuceneVersion.CurrentVersion,"ID",analyzer);
> Query query = parser.parse(q);
>
> IndexSearcher is = getSearcher();
> TopDocs res = is.search(query, NumberOfResults);
> is.close();
>
> return res;
> }
>
>
>
>
> On Fri, May 18, 2012 at 11:04 AM, Edward W. Rouse 
> wrote:
>
>> I don't know. I do it as a matter of course. But if it fixes the problem,
>> then at least you know why you are getting the error and can work on a
>> scheme (using counters maybe), to do regular commits after every 10/20/100
>> documents.
>>
>> But you can't fix it until you know why it happens and this would confirm
>> or
>> eliminate one possible cause.
>>
>> > -Original Message-
>> > From: Michel Blase [mailto:mblas...@gmail.com]
>> > Sent: Friday, May 18, 2012 1:49 PM
>> > To: java-user@lucene.apache.org
>> > Subject: Re: old fashioned."Too many open files"!
>> >
>> > but commit after each insert should be really expensive and
>> > unnecessary! no?
>> >
>> > On Fri, May 18, 2012 at 10:31 AM, Edward W. Rouse
>> > wrote:
>> >
>> > > Have you tried adding im.commit() after adding a document? Could be
>> > all of
>> > > the uncommitted documents are leaving files open.
>> > >
>> > > > -Original Message-
>> > > > From: Michel Blase [mailto:mblas...@gmail.com]
>> > > > Sent: Friday, May 18, 2012 1:24 PM
>> > > > To: java-user@lucene.apache.org
>> > > > Subject: Re: old fashioned."Too many open files"!
>> > > >
>> > > > also.my problem is indexing!
>> > > >
>> > > > Preparation:
>> > > >
>> > > > private void SetUpWriters() throws Exception {
>> > > > Set set = IndexesPaths.entrySet();
>> > > > Iterator i = set.iterator();
>> > > >
>> > > > while(i.hasNext()){
>> > > > Map.Entry index = (Map.Entry)i.next();
>> > > > int id = (Integer)index.getKey();
>> > > > String path = (String)index.getValue();
>> > > >
>> > > > File app = new File(path);
>> > > > Directory dir = FSDirectory.open(app);
>> > > > IndexWriterConfig config = new
>> > > > IndexWriterConfig(LuceneVersion.CurrentVersion,new
>> > > > StandardAnalyzer(LuceneVersion.CurrentVersion));
>> > > >
>> > > > //config.setMaxBufferedDocs(50);
>> > > > config.setRAMBufferSizeMB(400);
>> > > > TieredMergePolicy mp =
>> > > > (TieredMergePolicy)config.getMergePolicy();
>> > > > mp.setUseCompoundFile(true);
>> > > > config.setMergePolicy(mp);
>> > > >
>> > > > /*
>> > > > LogMergePolicy lmp =
>> > > > (LogMergePolicy)config.getMergePolicy();
>> > > > lmp.setUseCompoundFile(true);
>> > > > lmp.setMaxMergeDocs(1);
>> > > > config.setMergePolicy(lmp);
>> > > > */
>> > > >
>> > > > Writers.put(id, new IndexWriter(dir,config));
>> > > > }
>> > > > }
>> > > >
>> > > >
>> > > > adding document:
>> > > >
>> > > > public void AddDocument(Document doc,Analyzer analyzer) throws
>> > > > CorruptIndexException, IOException {
>> > > > 

Unable to run LookupBenchmarkTest

2012-05-18 Thread Sudarshan Gaikaiwari
I am trying to run the LookupBenchmarkTest using the following command

ant -v test -Dtestcase=LookupBenchmarkTest
-Dtests.seed=24BC5D3301BB6D9 -Dargs="-Dfile.encoding=UTF-8"

I see the following error
---

   [junit4] Default encoding: UTF-8
   [junit4] Suite: org.apache.lucene.search.suggest.LookupBenchmarkTest
   [junit4]> (@BeforeClass output)
   [junit4]   2> NOTE: test params are: codec=Lucene40,
sim=RandomSimilarityProvider(queryNorm=true,coord=false): {},
locale=sv, timezone=Pacific/Tongatapu
   [junit4]   2> NOTE: Linux 2.6.32-41-generic amd64/Sun Microsystems
Inc. 1.6.0_26 (64-bit)/cpus=2,threads=1,free=101664256,total=121372672
   [junit4]   2> NOTE: All tests run in this JVM: [LookupBenchmarkTest]
   [junit4]   2> NOTE: reproduce with: ant test
-Dtestcase=LookupBenchmarkTest -Dtests.seed=71CBF75133B374B2
-Dtests.locale=sv -Dtests.timezone=Pacific/Tongatapu
-Dargs="-Dfile.encoding=UTF-8"
   [junit4]   2>
   [junit4] ERROR   0.00s | LookupBenchmarkTest (suite)
   [junit4]> Throwable #1: java.lang.AssertionError: disable
assertions before running benchmarks!

---

If I comment out the Assertions in the common-build.xml file


   


I get the following error.
-
   [junit4] Default encoding: UTF-8
   [junit4] Suite: org.apache.lucene.search.suggest.LookupBenchmarkTest
   [junit4]> (@BeforeClass output)
   [junit4]   2> NOTE: test params are: codec=Lucene40, sim=null,
locale=null, timezone=(null)
   [junit4]   2> NOTE: Linux 2.6.32-41-generic amd64/Sun Microsystems
Inc. 1.6.0_26 (64-bit)/cpus=2,threads=1,free=102887424,total=121372672
   [junit4]   2> NOTE: All tests run in this JVM: [LookupBenchmarkTest]
   [junit4]   2> NOTE: reproduce with: ant test
-Dtestcase=LookupBenchmarkTest -Dtests.seed=88E90A184025623E
-Dargs="-Dfile.encoding=UTF-8"
   [junit4]   2>
   [junit4] ERROR   0.00s | LookupBenchmarkTest (suite)
   [junit4]> Throwable #1: java.lang.Exception: Test class
requires assertions, enable assertions globally (-ea) or for
Solr/Lucene subpackages only.
   [junit4]>at
org.apache.lucene.util.TestRuleAssertionsRequired.validate(TestRuleAssertionsRequired.java:45)
   [junit4]>at
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:35)



Please let me know how I should run this test.

-- 
Sudarshan Gaikaiwari
www.sudarshan.org
sudars...@acm.org

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org



Boosting numerical field

2012-05-18 Thread Meeraj Kunnumpurath
Hi,

Is there any way, in a query, to boost the relevance of a hit based on the
value of a numerical field in the index, i.e. the higher the value of the
field, the more relevant the hit?
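
One approach that might fit is Lucene 3.x's function queries (a rough, unverified
sketch; it assumes the org.apache.lucene.search.function package and a single-token,
non-analyzed numeric field; the field name and text query below are placeholders):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.function.CustomScoreQuery;
import org.apache.lucene.search.function.FieldScoreQuery;

// Sketch: boost hits by the value of a numeric field ("popularity" is a placeholder).
Query boostedQuery() {
    Query textQuery = new TermQuery(new Term("body", "lucene")); // placeholder text query
    FieldScoreQuery fieldScore = new FieldScoreQuery("popularity", FieldScoreQuery.Type.FLOAT);
    return new CustomScoreQuery(textQuery, fieldScore); // default: text score * field value
}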

Kind regards
Meeraj

-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org