On 8/10/07, Askar Zaidi wrote:
Hey Guys,
I am trying to do something similar: make the content searchable as
soon as it is added to the website. The way it can work in my scenario
is that I create the index for every new user account created.
Then, whenever a new document is
Take a look at the ngram classes (probably in contrib, don't remember
for sure right now).
Patrick
Did you try the IndexSearcher.doc(int i, FieldSelector fieldSelector) method?
It could be faster because Lucene doesn't have to "prepare" the whole document.
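Something like this (untested sketch against the Lucene 2.x API; the field
names are just examples):

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.MapFieldSelector;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.IndexSearcher;

    // Load only the "id" and "title" stored fields for each hit,
    // leaving the bigger stored fields on disk.
    void printHits(IndexSearcher searcher, Hits hits) throws Exception {
        MapFieldSelector selector =
            new MapFieldSelector(new String[] { "id", "title" });
        for (int i = 0; i < hits.length(); i++) {
            Document doc = searcher.doc(hits.id(i), selector);
            System.out.println(doc.get("id") + " - " + doc.get("title"));
        }
    }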
Patrick
On Sat, May 10, 2008 at 9:35 AM, Stephane Nicoll
<[EMAIL PROTECTED]> wrote:
> From the FAQ:
>
> "Don't iterate over more hits than nee
Hi,
I've looked around (mailing lists, jira) and I can't seem to find
information about how to generate maven artifacts, especially for
contrib.
I mean, I can get lucene from the maven repo, and I know I have to
build the contrib for myself.
But I kind of hoped I would be able to deploy contrib l
Add a field to your document.
document.add(new Field("id", idString));
Or something like that. (Don't have the doc handy right now).
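If it helps, here is roughly what it looks like with the Lucene 2.x
constructor that takes the Store/Index flags (untested; the field names are
just examples):

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;

    void addWithId(IndexWriter writer, String idString, String body) throws Exception {
        Document doc = new Document();
        // stored and untokenized, so the id comes back exactly as you put it in
        doc.add(new Field("id", idString, Field.Store.YES, Field.Index.UN_TOKENIZED));
        doc.add(new Field("contents", body, Field.Store.NO, Field.Index.TOKENIZED));
        writer.addDocument(doc);
    }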
Hope this helps.
Patrick
On Feb 9, 2008 7:38 AM, Gauri Shankar <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I would like to get the control over the docId field from my
Of course, it depends on the kind of query you are doing, but (I did
find the query parser in the meantime)
MultiFieldQueryParser mfqp = new MultiFieldQueryParser(useFields,
analyzer, boosts);
where analyzer can be a PerFieldAnalyzer
followed by
Query query = mfqp.parse(queryString);
would do the trick.
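Put together, something like this (untested; the field names and boost
values are just examples, and the analyzer can be a PerFieldAnalyzerWrapper
instead of StandardAnalyzer):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queryParser.MultiFieldQueryParser;
    import org.apache.lucene.search.Query;

    Query buildQuery(String queryString) throws Exception {
        String[] useFields = { "title", "content" };
        Map boosts = new HashMap();
        boosts.put("title", new Float(2.0f));   // title matches count more
        boosts.put("content", new Float(1.0f));
        MultiFieldQueryParser mfqp =
            new MultiFieldQueryParser(useFields, new StandardAnalyzer(), boosts);
        return mfqp.parse(queryString);
    }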
Hi,
I don't know the size of your dataset, but couldn't you index into 2
fields with a PerFieldAnalyzer, tokenizing with StandardAnalyzer for one
field and WhitespaceAnalyzer for the other?
Then use a multi-field query (there is a query parser for that, I just
don't remember the name right now).
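For the indexing side, something along these lines (untested; the field
names "std" and "ws" and the index path are just examples):

    import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
    import org.apache.lucene.analysis.WhitespaceAnalyzer;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;

    void indexText(String text) throws Exception {
        // StandardAnalyzer by default, WhitespaceAnalyzer for the "ws" field
        PerFieldAnalyzerWrapper analyzer =
            new PerFieldAnalyzerWrapper(new StandardAnalyzer());
        analyzer.addAnalyzer("ws", new WhitespaceAnalyzer());

        // "true" creates a brand new index just for this example; use the
        // non-creating constructor to append to an existing one
        IndexWriter writer = new IndexWriter("/path/to/index", analyzer, true);
        Document doc = new Document();
        // the same text, indexed twice and analyzed differently per field
        doc.add(new Field("std", text, Field.Store.NO, Field.Index.TOKENIZED));
        doc.add(new Field("ws", text, Field.Store.NO, Field.Index.TOKENIZED));
        writer.addDocument(doc);
        writer.close();
    }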
Patrick
On 10/1/07,
Hi,
Did you use the same snowball analyzer to create your index? Remember
that you usually need to use the same analyzer for indexing and
searching.
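For example, with the Snowball analyzer from contrib (untested; "contents"
is just an example field name):

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.snowball.SnowballAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.Query;

    void indexAndSearch(String userInput) throws Exception {
        Analyzer analyzer = new SnowballAnalyzer("English");

        // indexing: give the writer the analyzer
        IndexWriter writer = new IndexWriter("/path/to/index", analyzer, true);
        // ... writer.addDocument(...) calls go here ...
        writer.close();

        // searching: parse the user's query with the very same analyzer
        QueryParser parser = new QueryParser("contents", analyzer);
        Query query = parser.parse(userInput);
        // ... run query with an IndexSearcher ...
    }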
Patrick
On 9/21/07, Galactikuh <[EMAIL PROTECTED]> wrote:
>
> Hi,
> After doing some more analysis with Luke, it does seem like it is
> somethi
Hi,
At first, we thought we would use a "dual" approach, a Lucene index
and an RDBMS for storage.
While prototyping, for simplicity's sake, we used the Lucene index as
storage, thinking we could easily replace it later. So far, speed is
satisfying enough that we are going to keep data there until re
Hi,
Answers in the text.
> For each search request (it's a webapp) I currently create
> a new IndexSearcher, new Filter and new Sort, call
> searcher.search(query, filter, sorter) and later searcher.close().
>
> The literature says that it is desirable to cache the IndexSearcher,
> but there's no
Hi,
There is a Lucene-eXist trigger that allows you to do just that. Take
a look at patch
http://sourceforge.net/tracker/index.php?func=detail&aid=1654205&group_id=17691&atid=317691
Then, from eXist, you can search either with XQuery or Lucene syntax.
Patrick
Thomas wrote:
My intention is to
use it!!
>
> M.
> On June 8, 2007, at 14:57, Patrick Turcotte wrote:
>
>> Hi,
>>
>> What we did was this:
>>
>> 1) When your application starts, it scans the index for term values and
>> stores them in a map or something.
>>
>> 2) When you r
ching anything.
> Is this the only way to do it? It doesn't seem quite efficient,
> especially when you just typed in the first character.
>
> I guess the "good" way is to go through the terms, and return as soon
> as, for example, 10 terms are found.
>
> I am
Hi,
Please suggest what the query string should be for a phrase search.
Did you take a look at:
http://lucene.apache.org/java/docs/queryparsersyntax.html ?
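In short, you quote the phrase in the query string, e.g. (untested;
"contents" is just an example field name):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.Query;

    Query phraseQuery() throws Exception {
        QueryParser parser = new QueryParser("contents", new StandardAnalyzer());
        // the double quotes make it a phrase query
        return parser.parse("\"jakarta apache\"");
    }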
Patrick
Hi!
I am new to Lucene and I am trying to customise the query parser to default
to wildcard searches.
For example, if the user types in "fenc", it should find "fence" and
"fencing" and "fences" and "fenced".
Looks like stemming to me! Maybe you should consider using a stemming
analyzer instead.
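A very small stemming analyzer could look like this (untested sketch using
the Porter stemmer; remember to use it for both indexing and searching):

    import java.io.Reader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.LowerCaseTokenizer;
    import org.apache.lucene.analysis.PorterStemFilter;
    import org.apache.lucene.analysis.TokenStream;

    class StemmingAnalyzer extends Analyzer {
        public TokenStream tokenStream(String fieldName, Reader reader) {
            // "fence", "fences", "fenced" and "fencing" all reduce to the same stem
            return new PorterStemFilter(new LowerCaseTokenizer(reader));
        }
    }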
Hi,
From my own experience, yes, you can.
You just have to reopen your index with a new IndexWriter (making sure you
don't use a constructor that will recreate it; from the javadoc, stay clear
of those with a boolean argument). Then call optimize() and then close() on
your writer, and it will be optimized.
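Roughly like this (untested; assumes a Lucene version that has the
constructors without the create flag, and the analyzer does not matter for
optimizing, StandardAnalyzer is just a placeholder):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;

    void optimizeIndex(String indexPath) throws Exception {
        // no boolean "create" argument, so the existing index is opened, not wiped
        IndexWriter writer = new IndexWriter(indexPath, new StandardAnalyzer());
        writer.optimize();
        writer.close();
    }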
Hi Renaud,
Maybe you should take a look at the Morphalou project (
http://actarus.atilf.fr/lexiques/morphalou/); it is a database of lemmas
and their forms in French.
You could extract the data and create a synonym index or something.
Don't hesitate to contact me off list (and in French if needed) for
Hi,
How about this:
1) You copy the files that make your index in a new folder
2) You update your index in that new folder (forcing if necessary, old locks
will not be valid)
3) When update is completed, close your readers, and open them on the new
index.
4) Copy the fresh index files to the pre
On 12/14/06, Erik Hatcher <[EMAIL PROTECTED]> wrote:
On Dec 13, 2006, at 1:51 PM, Patrick Turcotte wrote:
> I would suggest you take a look at exist-db (http://exist-db.org/).
I really doubt eXist can handle 10M XML files. Last time I tried it,
it choked on 20k of them.
It
I would suggest you take a look at exist-db (http://exist-db.org/).
A database for XML documents that support XQuery.
We are using both products here (lucene and exist-db), and for what you are
looking for, exist-db seems better.
Our documents are far more complex than yours (about 500 differen
Hi,
How do we use a specific query on multiple fields?
For example:
I have to run the query "jakarta tomcat" (the string I give in my
textbox has double quotes because I need the phrase 'jakarta tomcat'
kept together)
on multiple fields like "content", "title", "examples".
Take a look at org.a
Hi,
Did you take a look at ISOLatin1AccentFilter?
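Used, for example, like this, so the same normalization happens at index
and search time (untested sketch; assumes a Lucene version where
ISOLatin1AccentFilter is available):

    import java.io.Reader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.ISOLatin1AccentFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;

    class AccentFoldingAnalyzer extends Analyzer {
        private final Analyzer delegate = new StandardAnalyzer();

        public TokenStream tokenStream(String fieldName, Reader reader) {
            // fold accented characters on top of whatever the delegate produces
            return new ISOLatin1AccentFilter(delegate.tokenStream(fieldName, reader));
        }
    }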
Patrick
On 11/6/06, hans meiser <[EMAIL PROTECTED]> wrote:
Hi,
Lucene indexes documents from 3 different countries here
(English, German and French). I want to normalize some
characters like umlauts. ä -> ae
I did it in the following way
It will make the mailing list easier to read (I am using Gmail and I do
not have client-side filters).
That is not true.
You can have labels, and if you look at the top of the page, right beside
the "Search the Web" button, there is a "create filter" link.
Patrick
Should both results be returned in both cases?
If so, take a look at the ISOLatin1AccentFilter class; it will remove those
accents for indexing and searching if needed.
Patrick
On 10/31/06, Valerio Schiavoni <[EMAIL PROTECTED]> wrote:
Hello,
I use Lucene to index documents in Italian. Many terms en
he one referred here:
http://sourceforge.net/mailarchive/message.php?msg_id=11811387
(which I had to extract from the sources of the project mentioned in the
post)
or were you referring to something else?
thanks a lot!
On 10/31/06, Patrick Turcotte <[EMAIL PROTECTED]> wrote:
>
> Shou
I don't remember the syntax right now, but how about giving a boost to
certain fields, either while indexing or while searching?
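From memory, it is something like this (untested; the field names and boost
values are just examples):

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;

    Document buildDoc(String titleText, String bodyText) {
        Document doc = new Document();
        Field title = new Field("title", titleText,
                                Field.Store.YES, Field.Index.TOKENIZED);
        title.setBoost(2.0f);   // index-time boost: title matches score higher
        doc.add(title);
        doc.add(new Field("content", bodyText,
                          Field.Store.NO, Field.Index.TOKENIZED));
        return doc;
    }

At search time you can do the same kind of thing with the query syntax,
e.g. title:cancer^3 OR content:cancer.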
Patrick
On 10/30/06, Amit Soni <[EMAIL PROTECTED]> wrote:
Hi Erick,
Thanks for the reply.
Actually the priorities mean when I search, for example, for cancer, then
I
Hi,
I'm trying to come up with the best design for a problem.
I want to search texts for expressions that shouldn't be found in them.
My bad expressions list is quite stable. But the texts that I want to scan
change often.
Design I
Index my texts, and then loop on my expressions list to see i
Thanks Mark!
I have to mention Benoit Mercier here who worked with me so we could
understand how to expand a term and use TOKEN_MGR_DECLS.
Patrick
On 10/13/06, Mark Miller <[EMAIL PROTECTED]> wrote:
Great work Patrick. I was unfamiliar with the use of TOKEN_MGR_DECLS.
Looks
like a powerful f
Submitted to Jira with key LUCENE-682
Patrick
Grant Ingersoll wrote:
Hi Patrick,
Thanks for the work. Create a bug in JIRA and upload a patch (see svn
diff). See the Wiki for information on how to contribute.
Thanks,
Grant
rocessing with matchedToken.image to get to the
matched string, set matchedToken.kind accordingly.
// USES fields set by the QueryParser to decide on behavior
}
Hope this answers your question.
Patrick
thanks,
-Mark
On 10/13/06, Patrick Turcotte <[EMAIL PROTECTED]> wrote:
Hello!
to those who can decide to
integrate it? Where? In what format? Etc.
Thanks,
Patrick Turcotte
Hi,
I was thinking of something along those lines.
Last week, I was able to take time to understand the JavaCC syntax and
possibilities.
I have some cleaning up, testing and documentation to do, but
basically, I
was able to expand the AND / OR / NOT patterns at r
I've started to look into this (and the whole javacc syntax) I'll keep
you posted on my results.
Patrick
Erik Hatcher wrote:
Currently AND/OR/NOT are hardcoded into the .jj file. A patch to make
this configurable would be welcome!
Erik
On Oct 3, 2006, at 11:15 AM, Patric
ser to do it?
We know we could always modify QueryParser.jj to add them to the list,
but we'd rather not have to recompile/re-jar each time there is a
new version of Lucene.
Thanks
--
Patrick Turcotte