Hi
I am currently building an application whereby there is a remote index server
(yes it probably does sound like Solr :)) and users use my API to send
documents to the indexing server for indexing. The two methods primarily used are
add and commit. So the user can send requests for documents to
Hi
Apologies for re-sending this email but I was just wondering if anyone might
be able to advise on the below. I'm not sure if I've provided enough info.
Again any help would be appreciated.
Amin
Sent from my iPhone
On 1 Aug 2010, at 20:00, Amin Mohammed-Coleman wrote:
>
s wrote:
> Can you post the full exception? And also the log output from
> IndexWriter.setInfoStream.
>
> Mike
>
> On Tue, Aug 3, 2010 at 5:28 PM, Amin Mohammed-Coleman
> wrote:
>> Hi
>>
>> Apologies for re sending this email but I was just wondering if
Hi
I have a list of batch tasks that need to be executed. Each batch contains
1000 documents and basically I use a RAMDirectory based index writer, and at
the end of adding 1000 documents to memory I perform the following:
ramWriter.commit();
indexWriter.addIndexesNoOptimize(ramW
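A hypothetical completion of the batching pattern being described, against the Lucene 2.x-era API (the analyzer, the batch collection, and the on-disk indexWriter are assumed from context, not taken from the post):

```java
// Sketch only: buffer a batch in a RAMDirectory, then merge it into the
// main on-disk index. Names not shown in the post are assumptions.
RAMDirectory ramDir = new RAMDirectory();
IndexWriter ramWriter = new IndexWriter(ramDir, analyzer, true,
        IndexWriter.MaxFieldLength.UNLIMITED);
for (Document doc : batch) {        // up to 1000 documents per batch
    ramWriter.addDocument(doc);
}
ramWriter.commit();                 // flush the in-memory segment
ramWriter.close();
indexWriter.addIndexesNoOptimize(new Directory[] { ramDir });
```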
indexwriter OR commit the changes
> before you can see your changes, see IndexWriter.close/commit
>
> Best
> Erick
>
>
>
> On Thu, Aug 26, 2010 at 10:42 AM, Amin Mohammed-Coleman
> wrote:
>
>> Hi
>>
>>
>> I have a list of batch tasks that need t
Hi All
I was wondering whether I can use TermRangeQuery for my use case. I have a
collection of ids (represented as XDF-123) and I would like to do a search for
all the ids (might be in the range of 1) and for each matching id I want to
get the corresponding data that is stored in the inde
es a Collator but note the
> performance warning in the javadocs.
>
>
> --
> Ian.
>
>
> On Fri, Nov 26, 2010 at 2:18 PM, Amin Mohammed-Coleman
> wrote:
>> Hi All
>>
>> I was wondering whether I can use TermRangeQuery for my use case. I have a
>> collection of ids (r
as you expect.
>>>
>>> I'm not clear what you mean by XDF-123 but if you've got
>>>
>>> AAA-123
>>> AAA-124
>>> ...
>>> ABC-123
>>> ABC-234
>>> etc.
>>>
>>> then you'll be fine. If
, Ian Lea wrote:
>>
>>> What sort of ranges are you trying to use? Maybe you could store a
>>> separate field, just for these queries, with some normalized form of
>>> the ids, with all numbers padded out to the same length etc.
>>>
>>> --
>>> Ian
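Ian's suggestion above (a separate field with the numeric part padded to a fixed width) can be sketched with a small plain-Java helper; the class name and the width of 7 are illustrative assumptions. Padding makes lexicographic term order agree with numeric order, which is what a term range query needs:

```java
// Pads the numeric part of ids like "XDF-123" to a fixed width so that
// lexicographic ordering of the indexed terms matches numeric ordering,
// e.g. "XDF-123" -> "XDF-0000123" with width 7.
public class IdNormalizer {
    public static String normalize(String id, int width) {
        int dash = id.indexOf('-');
        String prefix = id.substring(0, dash + 1); // e.g. "XDF-"
        String number = id.substring(dash + 1);    // e.g. "123"
        StringBuilder sb = new StringBuilder(prefix);
        for (int i = number.length(); i < width; i++) {
            sb.append('0');
        }
        return sb.append(number).toString();
    }
}
```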
Hi
Apologies up front if this question has been asked before.
I have a document which contains a field that stores an untokenized value such
as TEST_TYPE. The analyser used is StandardAnalyzer and I pass the same
analyzer into the query. I perform the following query: fieldName:TEST_*,
howe
Hi
I have the following situation:
Document document = new Document();
String body ="This is a body of document";
Field field = new Field("body", body, Field.Store.YES,
Field.Index.ANALYZED);
document.add(field);
String id ="1
Begin forwarded message:
From: Amin Mohammed-Coleman
Date: 26 December 2008 20:19:02 GMT
To: java-user@lucene.apache.org
Subject: Field Not Present In Document
Hi
I have the following situation:
Document document = new Document();
String body ="This is a body of doc
= indexReader.document(1)
I would fire up Luke (http://www.getopt.org/luke) against your index
and see what is inside of it.
On Dec 26, 2008, at 3:19 PM, Amin Mohammed-Coleman wrote:
Hi
I have the following situation:
Document document = new Document();
String body ="This is a bo
fine. I get the following when I print the document:
Documentrtf document that will be indexed.
Amin Mohammed-Coleman> stored/
uncompressed,indexed stored/
uncompressed,indexed stored/
uncompressed,indexed stored/
uncompressed,indexed>
The problem is when I use the following to search I get
On Thu, Jan 1, 2009 at 11:30 PM, A
-Coleman
I am using the StandardAnalyzer, therefore I wouldn't expect the words
document, indexed, Amin Mohammed-Coleman to be removed.
I have referenced the Lucene In Action book and I can't see what I may
be doing wrong. I would be happy to provide a testcase should it be
required.
bodycontent
This is more what I expected although "Amin Mohammed-Coleman" hasn't
been stored in the index. Should I not be using
indexWriter.optimize() ?
I tried using the search function in luke and got the following results:
body:test ---> returns result
b
I would suggest something like this:
try
{
    while (...) // this could be your looping through a data reader, for example
    {
        indexWriter.addDocument(document);
    }
}
finally
{
    commitAndOptimise();
}
HTH
Shashi
- Original Message
From: Amin Mohammed-Coleman
To: java-
To: java-user@lucene.apache.org
Sent: Saturday, January 3, 2009 4:02:52 AM
Subject: Re: Search Problem
Hi ag
n put them up
somewhere for download.
On Jan 3, 2009, at 1:07 PM, Amin Mohammed-Coleman wrote:
Hi again
Sorry I didn't include the WorkItem class! Here is the final test
case. Apologies!
On 3 Jan 2009, at 14:02, Grant Ingersoll wrote:
You shouldn't need to call close and opti
t() throws
Exception {
JavaBuiltInRTFHandler builtInRTFHandler = new
JavaBuiltInRTFHandler();
Document document = builtInRTFHandler.getDocument(rtfFile);
assertNotNull(document);
String value = document.get(FieldNameEnum.BODY.getDescription());
assertNotNull(value);
assertNotSame("", value);
assertTrue(value.contains("Amin Mohammed-Coleman"));
assertTrue(value.contains("This is a test rtf document that will
be indexed."));
String path = document.get(
Hi
Test case passing now. Thanks for your help. I kind of thought it was
probably something I was doing wrong!
Cheers
Amin
On 4 Jan 2009, at 16:59, Grant Ingersoll wrote:
On Jan 4, 2009, at 2:49 AM, Amin Mohammed-Coleman wrote:
Hi Grant
Thank you for looking at the test case. I
Hi
I have a class that uses the MultiSearcher in order to perform searches
using several other searchers. Here is a snippet of the class:
MultiSearcher multiSearcher = null;
try {
multiSearcher = new MultiSearcher(searchers.toArray(new
IndexSearcher[] {}));
QueryParser
I've been working on integrating hibernate search and Gigaspaces XAP.
It's been raised as an OpenSpaces project and is awaiting approval.
The aim is to place indexes on the space and use gigaspaces middleware
support for clustering, replication and other services.
Sent from my iPhone
On 15 Jan
Hi
I have recently worked on developing an application which allows you
to upload a file (which is indexed so you can search later). I have
numerous tests to show that you can index and search documents (in
some instances within the same test), however when I perform the
operation in the
llis() + " ms");
return summaryList.toArray(new Summary[] {});
}
Do I need to do this explicitly?
Cheers
Amin
On 19 Jan 2009, at 20:48, Greg Shackles wrote:
After you make the commit to the index, are you reloading the index
in the
searchers?
- Greg
On M
need
to tell
those IndexSearchers to re-open the indexes because they have
changed since
they were first opened. That should solve your problem.
- Greg
On Mon, Jan 19, 2009 at 4:45 PM, Amin Mohammed-Coleman >wrote:
I make a call to my search class which looks like this:
public Sum
roblem.
- Greg
On Mon, Jan 19, 2009 at 4:45 PM, Amin Mohammed-Coleman >wrote:
I make a call to my search class which looks like this:
public Summary[] search(SearchRequest searchRequest) {
List summaryList = new ArrayList();
StopWatch stopWatch = new StopWatc
summaryList.toArray(new Summary[] {});
}
The searchers are configured in Spring, which looks like this (the surrounding
bean tags were stripped by the mail rendering; reconstructed from the attributes):
<bean class="org.apache.lucene.search.IndexSearcher" scope="prototype" lazy-init="true">
    <constructor-arg ref="rtfDirectory" />
</bean>
I set the depend
r();
> if (!oldIndexReader.isCurrent()) {
> IndexReader newIndexReader = oldIndexReader.reOpen();
> oldIndexReader.close();
> indexSearcher.close();
> IndexSearcher indexSearch = new IndexSearcher(newIndexReader);
> }
>
> Regards
> Ganesh
>
> - Original
a reopen() method in the IndexReader class. You can use that.
-Original Message-
From: Amin Mohammed-Coleman [mailto:ami...@gmail.com]
Sent: Tuesday, January 20, 2009 5:02 AM
To: java-user@lucene.apache.org
Subject: Re: Indexing and Searching Web Application
Am I supposed to close the oldIndexReade
IndexReader reader = ...
...
IndexReader newReader = reader.reopen();
if (newReader != reader) {
    // reader was reopened
    reader.close(); // old reader is closed
}
reader = newReader;
Regards
Ganesh
- Original Message ----- From: "Amin Mohammed-Coleman" >
To:
Cc:
Sent: Wednesday, January 21, 2009 1:07 AM
Subject: Re:
= new IndexSearcher(reader);
indexSearchers.add(indexSearch);
}
First search works ok, subsequent searches result in:
org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed
Cheers
On Wed, Jan 21, 2009 at 1:47 PM, Amin Mohammed-Coleman wrote:
> Hi
> Will give tha
the names in your code
snippet rather confusing.
--
Ian.
On Wed, Jan 21, 2009 at 6:59 PM, Amin Mohammed-Coleman > wrote:
Hi
I did the following according to java docs:
for (IndexSearcher indexSearcher: searchers) {
IndexReader reader = indexSearcher.getIndexReader();
IndexRea
e the already closed exception when you try to use them.
--
Ian.
On Wed, Jan 21, 2009 at 8:24 PM, Amin Mohammed-Coleman > wrote:
Hi,
That is what I am doing with the line:
indexSearchers.add(indexSearch);
indexSearchers is an ArrayList that is constructed before the for
loop:
List inde
adds the newly opened searcher to the end of your array.
The
original (closed) one is still there.
indexSearchers.add(indexSearch);
}
[EOE] So if you use searchers anywhere from here on, it's got closed
readers in it if you closed any of them.
Best
Erick
On Wed, Jan 21, 2009 at 4:19 PM, Amin Mohammed-Coleman >wrote:
Hi
I am trying to get an understa
Hi
I'm probably going to get shot down for asking this simple question.
Although I think I understand the basic concept of Field I feel there is
something that I am missing and I was wondering if someone might help to
clarify.
You can store a field value in an index using Field.Store.YES or if th
enized content.
>
> Regards
> Ganesh
>
> - Original Message - From: "Amin Mohammed-Coleman" <
> ami...@gmail.com>
> To:
> Sent: Thursday, February 05, 2009 2:00 PM
> Subject: Field.Store.YES Question
>
>
>
> Hi
>>
>> I'm
Hi
I am looking at building a faceted search using Lucene. I know that Solr
comes with this built in, however I would like to try this by myself
(something to add to my CV!). I have been looking around and I found that
you can use the IndexReader and use TermVectors. This looks ok but I'm not
su
Hi
Sorry to re send this email but I was wondering if I could get some
advice on this.
Cheers
Amin
On 16 Feb 2009, at 20:37, Amin Mohammed-Coleman
wrote:
Hi
I am looking at building a faceted search using Lucene. I know that
Solr comes with this built in, however I would like to
Hi
Thanks just what I needed!
Cheers
Amin
On 22 Feb 2009, at 16:11, Marcelo Ochoa wrote:
Hi Amin:
Please take a look at this blog post:
http://sujitpal.blogspot.com/2007/04/lucene-search-within-search-with.html
Best regards, Marcelo.
On Sun, Feb 22, 2009 at 1:18 PM, Amin Mohammed-Coleman
ope that made sense...!
On Mon, Feb 23, 2009 at 7:20 AM, Amin Mohammed-Coleman wrote:
> Hi
>
> Thanks just what I needed!
>
> Cheers
> Amin
>
>
> On 22 Feb 2009, at 16:11, Marcelo Ochoa wrote:
>
> Hi Amin:
>> Please take a look at this blog post:
>> ht
The reason for the indexreader.reopen is because I have a webapp which
enables users to upload files and then search for the documents. If I don't
reopen I'm concerned that the facet hit counter won't be updated.
On Tue, Feb 24, 2009 at 8:32 PM, Amin Mohammed-Coleman wrote:
>
all the IndexReader#close() method. If nothing is pointing at
> the readers they should be garbage collected. Also, you might
> want to warm up your new IndexSearcher before you switch to it, meaning run
> a few queries on it before you swap the old one out.
>
> M
>
>
>
> O
dexSearcher before you switch to it, meaning
>> run
>> a few queries on it before you swap the old one out.
>>
>> M
>>
>>
>>
>> On Tue, Feb 24, 2009 at 12:48 PM, Amin Mohammed-Coleman > >wrote:
>>
>> The reason for the indexreader.re
release(currentSearcher);
>currentSearcher = newSearcher;
> }
> }
>
> /*
> #A Current IndexSearcher
> #B Create initial searcher
> #C Implement in subclass to warm new searcher
> #D Call this to reopen searcher if index changed
> #E Returns current searcher
dexreader is
not up to date. When this is set to true the indexsearchers are refreshed.
I would be grateful on your thoughts.
On Thu, Feb 26, 2009 at 1:35 PM, Amin Mohammed-Coleman wrote:
> Hi
>
> Thanks for your help. I will modify my facet search and my other code to
> u
Forgot to mention that the previous code that i sent was related to facet
search. This is a general search method I have implemented (they can
probably be combined...).
On Thu, Feb 26, 2009 at 8:21 PM, Amin Mohammed-Coleman wrote:
> Hi
> I have modified my search code. Here is the fol
so
>creating unnecessary garbage; instead, they should be created once
>& reused.
>
> You should consider simply using Solr -- it handles all this logic for
> you and has been well debugged with time...
>
> Mike
>
> Amin Mohammed-Coleman wrote:
>
> The reaso
Thanks. I will rewrite... in between giving my baby her feed and playing with
the other child, and my wife who wants me to do several other things!
On Sun, Mar 1, 2009 at 1:20 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:
>
> Amin Mohammed-Coleman wrote:
>
> Hi
();
assert newReader != currentSearcher.getIndexReader();
IndexSearcher newSearcher = new IndexSearcher(newReader);
warm(newSearcher);
swapSearcher(newSearcher);
}
}
should the above be synchronised?
On Sun, Mar 1, 2009 at 1:25 PM, Amin Mohammed-Coleman wrote:
> thanks. i w
>
> It's best to call this method from a single BG "warming" thread, in which
> case it would not need its own synchronization.
>
> But, to be safe, I'll add internal synchronization to it. You can't simply
> put synchronized in front of the method, since
Sorry, I added
release(multiSearcher);
instead of multiSearcher.close();
On Sun, Mar 1, 2009 at 2:17 PM, Amin Mohammed-Coleman wrote:
> Hi
> I've now done the following:
>
> public Summary[] search(final SearchRequest searchRequest)
> throwsSearchExecutionExceptio
o other thread is reopening
> #E Finish reopen and notify other threads
> #F Reopen searcher if there are changes
> #G Check index version and reopen, warm, swap if needed
> #H Returns current searcher
> #I Release searcher
> #J Swaps currentSearcher to new searcher
>
d call maybeReopen(), and then call get() and gather each
> IndexSearcher instance into a new array. Then, make a new
> MultiSearcher (opposite of what I said before): while that creates a
> small amount of garbage, it'll keep your code simpler (good
> tradeoff).
>
> Mike
>
your help!
On Sun, Mar 1, 2009 at 4:18 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:
>
> This is not quite right -- you should only create SearcherManager once
> (per Direcotry) at startup/app load, not with every search request.
>
> And I don't see releas
> SearcherManager (from your Directory instances). You don't need any
> searchers during initialize.
>
> Is DocumentSearcherManager the same as SearcherManager (just renamed)?
>
> The release method is wrong -- you're calling .get() and then
> immediately release
>> LOGGER.debug("Initialising multi searcher ");
>>
>> documentSearcherManagers = new
>> DocumentSearcherManager[directories.size()];
>>
>> for (int i = 0; i < directories.size() ;i++) {
>>
>> Directory directory = directories.get(i);
" +
>> stopWatch.getTotalTimeMillis() + " ms");
>>
>> return summaryList.toArray(new Summary[] {});
>>
>> }
>>
>>
>>
>> I hope this makes sense...thanks again!
>>
>>
>> Cheers
>>
>> Amin
>>
> get() before maybeReopen() should simply let you search based on the
> searcher before reopening.
>
> If you just do get() and don't call maybeReopen() does it work?
>
>
> Mike
>
> Amin Mohammed-Coleman wrote:
>
> I noticed that if i do the get() before the maybe
ut the numDocs() of each IndexReader you get from the SearcherManager?
>
> Something is wrong and it's best to explain it...
>
>
> Mike
>
> Amin Mohammed-Coleman wrote:
>
> Nope. If I remove the maybeReopen the search doesn't work. It only works
>> when I c
esn't make sense. I may be missing something here.
Cheers
Amin
On 2 Mar 2009, at 15:48, Amin Mohammed-Coleman wrote:
I'm seeing some interesting behaviour: when I do get() first followed
by maybeReopen then there are no documents in the directory
(the directory that I am interested in).
that search will not see the
> newly opened readers, but the next search will.
>
> I'm just thinking that since you see no results with get() alone, debug
> that case first. Then put back the maybeReopen().
>
> Can you post your full code at this point?
>
>
> Mike
>
>
call
> maybeReopen() in get, unless at the time you first create SearcherManager
> the Directories each have an empty index in them.
>
> Mike
>
> Amin Mohammed-Coleman wrote:
>
> Hi
>> Here is the code that I am using, I've modified the get() method to
>> inc
Hi
I am currently indexing documents (pdf, ms word, etc) that are uploaded,
these documents can be searched and what the search returns to the user are
summaries of the documents. Currently the summaries are extracted when
indexing the file (summary constructed by taking the first 10 lines of the
ael McCandless wrote:
>
>
>> You should look at contrib/highlighter, which does exactly this.
>>
>> Mike
>>
>> Amin Mohammed-Coleman wrote:
>>
>> Hi
>>> I am currently indexing documents (pdf, ms word, etc) that are uploaded,
>>> these
turday, March 07, 2009 12:46 PM
> > To: java-user@lucene.apache.org
> > Subject: Re: Lucene Highlighting and Dynamic Summaries
> >
> > It depends :)
> >
> > It's a trade-off. If storing is not prohibitive, I recommend that as
> > it makes life easier
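The contrib Highlighter usage being discussed in this thread might be sketched as follows against the Lucene 2.4-era API; the field name "body", the analyzer, reader, and doc variables are assumptions, not taken from the posts:

```java
// Sketch: highlight a stored field against the user's query.
Query rewritten = query.rewrite(reader);   // rewrite before highlighting, as above
Highlighter highlighter = new Highlighter(new QueryScorer(rewritten));
String stored = doc.get("body");           // works only if the field was stored
TokenStream tokens = analyzer.tokenStream("body", new StringReader(stored));
String fragment = highlighter.getBestFragment(tokens, stored);
```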
Thanks! The final piece that I needed to do for the project!
Cheers
Amin
On Sat, Mar 7, 2009 at 12:21 PM, Uwe Schindler wrote:
> > Cool. I will use compression and store in index. Is there anything
> > special
> > I need to do for decompressing the text? I presume I can just do
> > doc.get("cont
Hi
Got it working! Thanks again for your help!
Amin
On Sat, Mar 7, 2009 at 12:25 PM, Amin Mohammed-Coleman wrote:
> Thanks! The final piece that I needed to do for the project!
> Cheers
>
> Amin
>
> On Sat, Mar 7, 2009 at 12:21 PM, Uwe Schindler wrote:
>
>> >
but no highlighted summary. However
if I search using "aspectj" I get the same document with highlighted
summary.
Just to mention, I do rewrite the original query before performing the
highlighting.
I'm not sure what i'm missing here. Any help would be appreciated.
ng this so I can code around this if it is normal.
Apologies again for re-sending this mail
Cheers
Amin
Sent from my iPhone
On 9 Mar 2009, at 07:50, Amin Mohammed-Coleman wrote:
Hi
I am seeing some strange behaviour with the highlighter and I'm
wondering if anyone else is experie
n this.
>
>
>
> Amin Mohammed-Coleman wrote:
>
>> Hi
>>
>> Apologies for re-sending this mail. Just wondering if anyone has
>> experienced the below. I'm not sure if this could happen due to the nature
>> of the document. It does seem strange one term search returns summar
sometimes for other files.
>
>
> Cheers
> Amin
>
>
> On Wed, Mar 11, 2009 at 6:11 PM, markharw00d
> wrote:
>
> If you can supply a Junit test that recreates the problem I think we can
> start to make progress on this.
>
>
>
> Amin Mohammed-Coleman wrote:
JIRA raised:
https://issues.apache.org/jira/browse/LUCENE-1559
Thanks
On Thu, Mar 12, 2009 at 11:29 AM, Amin Mohammed-Coleman wrote:
> Hi
>
> Did both attachments not come through?
>
> Cheers
> Amin
>
>
> On Thu, Mar 12, 2009 at 9:52 AM, mark harwood wrote:
>
le on JIRA. Currently on a
cramped train!
Cheers
On 11 Mar 2009, at 18:11, markharw00d wrote:
If you can supply a Junit test that recreates the problem I think we
can start to make progress on this.
Amin Mohammed-Coleman wrote:
Hi
Apologies for re-sending this mail. Just wondering if a
JIRA updated. Includes new testcase which shows highlighter not working as
expected.
On Thu, Mar 12, 2009 at 5:56 PM, Amin Mohammed-Coleman wrote:
> Hi
>
> I have found that it is not an issue with POI. I extracted text using POI
> differently and the term is extracted properly.
I did the following:
highlighter.setMaxDocCharsToAnalyze(Integer.MAX_VALUE);
which works.
On Thu, Mar 12, 2009 at 6:41 PM, Amin Mohammed-Coleman wrote:
> JIRA updated. Includes new testcase which shows highlighter not working as
> expected.
>
>
> On Thu, Mar 12, 2009 a
ighlighted terms, etc.).
Maybe we should do the same for highlighter?
Mike
Amin Mohammed-Coleman wrote:
I did the following:
highlighter.setMaxDocCharsToAnalyze(Integer.MAX_VALUE);
which works.
On Thu, Mar 12, 2009 at 6:41 PM, Amin Mohammed-Coleman >wrote:
JIRA updated. Includes new t
Sweet! When will this highlighter be available? Can I use this now?
Cheers!
On Fri, Mar 13, 2009 at 10:10 AM, Michael McCandless <
luc...@mikemccandless.com> wrote:
>
> Amin Mohammed-Coleman wrote:
>
> I think that would be good.
>>
>
> I'll open an issue.
ow by pulling the patch attached to the issue & testing it
> yourself. If you do so, please report back! This is how Lucene improves.
>
> I'm hoping we can include it in 2.9...
>
> Mike
>
>
> On Mar 13, 2009, at 6:35 AM, Amin Mohammed-Coleman wrote:
>
> Sw
Ok. I tried to apply the patch(es) and completely messed it up (user
error). Is there a full example of the highlighter available that I
can apply and test?
Cheers
Amin
On Fri, Mar 13, 2009 at 12:09 PM, Amin Mohammed-Coleman wrote:
> Absolutely! I have received considerable help f
Hi
I'm looking at trying to implement pagination for my search project.
I've been googling for a solution. So far no luck. I've seen
implementations of HitCollector which looks promising, however my
search method has to completely change.
For example I'm currently using the following:
Why don't you create a Lucene document that represents a Person and then
index the fields name, age, phone number, etc. Search on the name and then
get the corresponding phone number from the search.
Cheers
Amin
On Sun, Mar 15, 2009 at 10:56 AM, Seid Mohammed wrote:
> I want to Index Person_Nam
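The advice above might look like this against the Lucene 2.x-era API; the field names, values, and the writer/searcher variables are illustrative assumptions:

```java
// Sketch: one Document per person; search on the name, read the stored
// phone number back from the matching document.
Document doc = new Document();
doc.add(new Field("name", "Seid Mohammed", Field.Store.YES, Field.Index.ANALYZED));
doc.add(new Field("phone", "0911-000000", Field.Store.YES, Field.Index.NOT_ANALYZED));
writer.addDocument(doc);

// Later, at search time:
TopDocs hits = searcher.search(new TermQuery(new Term("name", "seid")), null, 10);
for (ScoreDoc sd : hits.scoreDocs) {
    String phone = searcher.doc(sd.doc).get("phone");
}
```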
/search/Filter.html
> >
> filter,
> int n,
> Sort
> <
> http://lucene.apache.org/java/2_4_1/api/org/apache/lucene/search/Sort.html
> >
> sort)
>throws IOException
> <http://java.sun.com/j2se/1
Mar 15, 2009 at 8:49 AM, Seid Mohammed
> wrote:
> >
> >> that is exactly my question
> >> how can I do that?
> >>
> >> thanks a lot
> >> Seid M
> >>
> >> On 3/15/09, Amin Mohammed-Coleman wrote:
> >> > Why don't you c
d something off
> on Sunday that I don't really understand well enough
>
> Sorry 'bout that
> Erick
>
> On Sun, Mar 15, 2009 at 9:15 AM, Amin Mohammed-Coleman >wrote:
>
> > HI Erick
> > Thanks for your reply, glad to see I'm not the only person
>
ery,filter,pageHitCollector)
I intend to use Comparators to do the sorting and use Collections.sort().
I would be grateful for any feedback on whether this is a good approach.
Cheers
Amin
On Mon, Mar 16, 2009 at 8:03 AM, Amin Mohammed-Coleman wrote:
> Hi Erick
>
> I
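The sort-then-slice approach described above can be sketched in plain Java (the class and method names are illustrative, not from the thread):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Collect all hits, sort with a Comparator, then slice out one page.
public class Pager {
    public static <T> List<T> page(List<T> hits, Comparator<T> order,
                                   int pageNumber, int pageSize) {
        List<T> sorted = new ArrayList<T>(hits);
        Collections.sort(sorted, order);
        int from = pageNumber * pageSize;
        if (from >= sorted.size()) {
            return Collections.emptyList();    // past the last page
        }
        return sorted.subList(from, Math.min(from + pageSize, sorted.size()));
    }
}
```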
Hi
I've implemented the solution using the PageHitCounter from the link and I
have noticed that in certain instances I get a 0 score for queries like
"document OR aspectj".
Has anyone else experienced this?
Cheers
Amin
On Mon, Mar 16, 2009 at 8:07 PM, Amin Mohammed-Coleman wrot
Hi
Please ignore the problem I raised. User error !
Sorry
Amin
On 19 Mar 2009, at 09:41, Amin Mohammed-Coleman
wrote:
Hi
I've implemented the solution using the PageHitCounter from the link
and I have noticed that in certain instances I get a 0 score for
queries like "d
Hi
If I choose to subclass the default similarity, do I need to apply the same
subclassed Similarity to IndexReader, IndexWriter and IndexSearcher?
I am interested in doing the below:
Similarity sim = new DefaultSimilarity() {
public float lengthNorm(String field, int numTerms) {
if(field
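A hypothetical completion of the snippet above (the field name and return value are illustrative assumptions); for consistent scoring, the same Similarity generally needs to be set on both the IndexWriter (norms are written at index time) and the Searcher (used at query time):

```java
// Sketch: disable length normalization for one field (names are assumptions).
Similarity sim = new DefaultSimilarity() {
    public float lengthNorm(String field, int numTerms) {
        if (field.equals("body")) {
            return 1.0f;                       // ignore document length for "body"
        }
        return super.lengthNorm(field, numTerms);
    }
};
indexWriter.setSimilarity(sim);   // affects norms written at index time
indexSearcher.setSimilarity(sim); // affects scoring at search time
```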
e.
Cheers
Amin
On Fri, Mar 20, 2009 at 4:20 PM, Amin Mohammed-Coleman wrote:
> Hi
>
> If I choose to subclass the default similarity, do I need to apply the
> same subclassed Similarity to IndexReader, IndexWriter and IndexSearcher?
>
> I am interested in doing the bel
Hi
How do you expose pagination without a customized hit collector? The
MultiSearcher does not expose a method taking a hit collector and sort.
Maybe this is not an issue for people ...
Cheers
Amin
On 20 Mar 2009, at 17:25, "Uwe Schindler" wrote:
Why not use a MultiSearcher an all single
s worked.
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
-Original Message-
From: Amin Mohammed-Coleman [mailto:ami...@gmail.com]
Sent: Friday, March 20, 2009 6:43 PM
To: java-user@lucene.apache.org
Cc: ;
Subject: Re: Performance t
Hi
I was wondering if something like LingPipe or GATE (for text extraction)
might be an idea? I've started looking at it and I'm just thinking it may
be applicable (I may be wrong).
Cheers
Amin
On Wed, Mar 25, 2009 at 4:18 PM, Grant Ingersoll wrote:
> Hi MFM,
>
> This comes down to a preprocess
Hi
I was going to suggest looking at hibernate search. It comes with
event listeners that modify your indexes when the persistent entity
changes. It uses Lucene under the hood, so if you need to access Lucene
then you can.
Indexing can be done sync or async and the documentation shows how to