Here are two package-private issues I've run into. I could find
workarounds for both of them easily.
o.a.l.search.FieldDocSortedHitQueue
o.a.l.search.HitQueue
I think the package-private methods of those two classes should be made public.
- Cheolgoo Kang
On Tue, Feb 24, 2009 at 9:05 PM, Mi
Is there any reason the constructor of TopFieldDocs has no modifier declaration?
- Cheolgoo Kang
-
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org
How about using NumberTools and range queries/filters?
http://lucene.apache.org/java/2_3_2/api/core/org/apache/lucene/document/NumberTools.html
- Cheolgoo Kang
2008/8/12 장용석 <[EMAIL PROTECTED]>:
> hi.
>
> I am searching for a Lucene API or function for a query like "FIELD > 1
. . . .
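The idea behind NumberTools can be sketched in plain Java: pad numbers to a fixed width so that lexicographic term order matches numeric order, which is what makes them usable in term-based range queries. NumberPadding below is a hypothetical helper for illustration, not the actual NumberTools API (which encodes in a different radix).

```java
// Sketch of the padding trick behind Lucene's NumberTools:
// fixed-width encoding makes string order agree with numeric order.
public class NumberPadding {
    // Max digit count of a non-negative long (Long.MAX_VALUE has 19 digits).
    private static final int WIDTH = 19;

    // Hypothetical helper, not the real NumberTools.longToString().
    public static String longToString(long value) {
        if (value < 0) {
            throw new IllegalArgumentException("negative values would need an offset");
        }
        return String.format("%0" + WIDTH + "d", value);
    }

    public static void main(String[] args) {
        // Lexicographic comparison now preserves numeric order.
        System.out.println(longToString(5).compareTo(longToString(42)) < 0);
    }
}
```

With plain decimal strings, "42" would sort before "5"; after padding, every encoded value has the same length, so a term-based range scan sees them in numeric order.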
- Original Message
From: Cheolgoo Kang <[EMAIL PROTECTED]>
To: java-user@lucene.apache.org
Sent: Friday, April 6, 2007 10:40:52 AM
Subject: IndexModifier's docCount is inconsistent
When we use IndexModifier's docCount() method, it calls its
underlying IndexReader's numDocs() or IndexWriter's docCount() method.
The problem is that IndexReader.numDocs() accounts for deleted
documents, while IndexWriter.docCount() ignores them.
So, I've made some modifications in IndexWriter.
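A toy model (not the actual Lucene classes) of why the two counts diverge, assuming the reader-side count subtracts deletions while the writer-side count only tracks additions:

```java
// Toy illustration of the docCount()/numDocs() mismatch described above.
// CountModel is a made-up class, not part of Lucene.
public class CountModel {
    private int added = 0;
    private int deleted = 0;

    public void addDocument() { added++; }
    public void deleteDocument() { deleted++; }

    // IndexWriter-style count: ignores deletions.
    public int docCount() { return added; }

    // IndexReader-style count: excludes deleted documents.
    public int numDocs() { return added - deleted; }

    public static void main(String[] args) {
        CountModel m = new CountModel();
        m.addDocument();
        m.addDocument();
        m.addDocument();
        m.deleteDocument();
        // The two counts now disagree: 3 vs 2.
        System.out.println(m.docCount() + " vs " + m.numDocs());
    }
}
```

So after any deletion, a caller that delegates to one method or the other gets different answers, which is exactly the inconsistency IndexModifier exposes.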
Keywords.setKeyword(String) could stack all the keywords set by the
digester. So, the setKeyword(String) method should be written like
below, using java.util.List:
public static class KeyWords
{
    private String lineNum;
    private List kw = new LinkedList();

    public void setKeyword(String keyword) {
        kw.add(keyword);
    }
}
On 3/17/07, Lokeya <[EMAIL PROTECTED]> wrote:
Hi,
I am trying to index the content of XML files, which are basically
metadata collected from a website that has a huge collection of documents.
This metadata XML has control characters, which cause errors while trying to
parse using the DOM
Check this out.
http://mail-archives.apache.org/mod_mbox/lucene-java-user/200512.mbox/[EMAIL PROTECTED]
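A common workaround for this kind of problem is to strip the characters that XML 1.0 forbids before handing the text to the DOM parser. A minimal sketch (XmlSanitizer is a made-up helper, not a Lucene or JDK class):

```java
// Drop characters that are illegal in XML 1.0 documents.
// Valid chars: #x9, #xA, #xD, #x20-#xD7FF, #xE000-#xFFFD (ignoring
// supplementary planes for simplicity in this sketch).
public class XmlSanitizer {
    public static String stripInvalid(String in) {
        StringBuilder out = new StringBuilder(in.length());
        for (int i = 0; i < in.length(); i++) {
            char c = in.charAt(i);
            boolean valid = c == 0x9 || c == 0xA || c == 0xD
                    || (c >= 0x20 && c <= 0xD7FF)
                    || (c >= 0xE000 && c <= 0xFFFD);
            if (valid) {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // NUL is removed; tab (allowed) is kept.
        System.out.println(stripInvalid("a\u0000b\tc"));
    }
}
```

Run the metadata through a filter like this before parsing, and the DOM parser no longer chokes on the control characters.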
On 6/1/06, Monsur Hossain <[EMAIL PROTECTED]> wrote:
When Lucene first issues a query, it caches a hash of sort values (one
value per document, plus a bit more if you are sorting on strings
e FieldCaches get
> populated and warmed up.
>
> Otis
>
> --- Cheolgoo Kang <[EMAIL PROTECTED]> wrote:
>
> > Hi,
> >
> > I'm running an index on FSDirectory with 0.4M documents, each with
> > 7 fields.
> >
> > When I open an IndexReader an
Hi,
I'm running an index on FSDirectory with 0.4M documents, each with 7 fields.
When I open an IndexReader and an IndexSearcher, the average search
time with hits of 0.2M items (yeah, a very common word) is about
150~250 msec, which is pretty good. But the first time just after
opening IndexRe
Hi,
You could first save those search keywords entered by users into some kind
of storage, like a database system or even a dedicated Lucene
index. So it's a database and web issue, not a Lucene one.
And, as you know, Lucene does not provide this functionality out of the box.
Good luck!
On 12/8/0
Hi,
On 11/11/05, Grant Ingersoll <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Was wondering if someone could help me out with a few things in Korean
> as related to Lucene:
> 1. Which Analyzer do you recommend? From the list, I see that some
> have had success with the StandardAnalyzer. Are there any c
Thanks Bialecki,
I'm trying to test your program, thanks a lot!
And also, could you give me the papers you've cited as [1] and [2]? I've
googled (the entire web and Google Scholar) for them but got nothing.
On 11/8/05, Andrzej Bialecki <[EMAIL PROTECTED]> wrote:
> KwonNam Son wrote:
>
> >First of all, I re
> Sent: Tuesday, November 08, 2005 4:44 PM
> Subject: Re: korean and lucene
>
>
> > Hello Cheolgoo,
> >
> > I will test the patch.
> >
> >
> > Thanks,
> >
> > Youngho
> >
> > - Original Message -
> > From: "Cheolgoo
On 11/8/05, Cheolgoo Kang <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I've created a new JIRA issue about Korean analysis: StandardAnalyzer
> splits one word into several tokens, each with one
> character. Because Korean is not a phonogram, one character in Korean
So
> > > > > > import org.apache.lucene.search.IndexSearcher;
> > > > > > import org.apache.lucene.search.Hits;
> > > > > > import org.apache.lucene.search.Query;
> > > > > > import org.apache.lucene.queryParser.QueryParser;
> > > > > > import
Hello,
Thanks for your announcement in lucenebook.com and java-user list!
But our translator Moonho Lee's name is misspelled :) 'ha'
should be corrected to 'ho'.
Thanks again!
On 11/8/05, Otis Gospodnetic <[EMAIL PROTECTED]> wrote:
> Hello,
>
> If there are any Koreans (or others dy
StandardAnalyzer's JavaCC-based StandardTokenizer.jj cannot read the
Korean part of the Unicode character blocks.
You should either 1) use CJKAnalyzer or 2) add the Korean character
block (0xAC00~0xD7AF) to the CJK token definition in the
StandardTokenizer.jj file.
Hope it helps.
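For illustration, here is a minimal standalone check for the Hangul Syllables range cited above (HangulCheck is a hypothetical name, not part of Lucene; the grammar change to StandardTokenizer.jj itself would express the same range in JavaCC syntax):

```java
// Check whether a character falls in the Hangul Syllables block
// (0xAC00~0xD7AF) mentioned in the thread -- the range that would be
// added to the CJK token definition.
public class HangulCheck {
    public static boolean isHangulSyllable(char c) {
        return c >= 0xAC00 && c <= 0xD7AF;
    }

    public static void main(String[] args) {
        // U+D55C is the Korean syllable "han"; 'a' is plain Latin.
        System.out.println(isHangulSyllable('\uD55C'));
        System.out.println(isHangulSyllable('a'));
    }
}
```

A tokenizer that treats this range like the other CJK ranges will keep Korean syllables inside CJK tokens instead of discarding them.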
On 10/4/05, John Wang <[EMAIL PROT
>
> > On 9/9/05, Otis Gospodnetic <[EMAIL PROTECTED]> wrote:
> > > Hello Cheolgoo,
> > >
> > > I always pronounce the "plu" part as "plu" in the word "plus", and
> > > "cene" as the word "seen". Somet
Because RAMDirectory is not serializable, it's hard to send an index
to a remote computer. I think this is kind of tricky, but it should work.
1. Create a fresh new IndexWriter (let's name it toTransfer) with a
temporary FSDirectory, /usr/tmp/some/directory for example.
2. Invoke toTransfer.addIndex
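Once the index sits in the temporary FSDirectory, the remaining step is shipping its files to the remote machine. A generic sketch of that part in plain java.nio, with no Lucene dependency (DirPackager is a hypothetical helper, and "segments.gen" below is just an example file name):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.Map;
import java.util.TreeMap;

public class DirPackager {
    // Pack every regular file in a directory (e.g. the temporary index
    // directory from step 1) into a map of name -> bytes that can be
    // serialized and sent to the remote machine.
    public static Map<String, byte[]> pack(Path dir) throws IOException {
        Map<String, byte[]> files = new TreeMap<String, byte[]>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
            for (Path p : ds) {
                if (Files.isRegularFile(p)) {
                    files.put(p.getFileName().toString(), Files.readAllBytes(p));
                }
            }
        }
        return files;
    }

    // Round-trip check: write a file into a temp directory, pack it,
    // and confirm the packed bytes match what was written.
    public static boolean demo() {
        try {
            Path dir = Files.createTempDirectory("index-transfer");
            Files.write(dir.resolve("segments.gen"), new byte[]{1, 2, 3});
            Map<String, byte[]> packed = pack(dir);
            return Arrays.equals(packed.get("segments.gen"), new byte[]{1, 2, 3});
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

On the receiving side you would write the bytes back out into a directory and open it with FSDirectory there.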
How do you pronounce Plucene, a Perl port of Lucene?
I think we can pronounce it as [p-lucene] or [plucene].
--
Cheolgoo