Doron Cohen wrote:
> Hi Antony, you cannot instruct the query parser to do that. Note that an
> application can add both tokenized and un_tokenized data under the same
> field name.
Thanks, I suspected as much. I've changed it to make the field tokenized.
Has anyone dealt with the problem of constructing sub-queries given a
multi-word query?
Here is an example to illustrate what I mean:
user queries for -> A B C D
Right now I rewrite that query to "A B C D" A B C D to give phrase
matches a higher weight.
What might happen though, is that the us
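In case a concrete sketch helps, this is roughly what that rewrite looks like with the programmatic API (Lucene 2.0-era; the field name "body" and the boost value are assumptions, not from the original mail):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.*;

public class PhraseBoost {
    // Combine a boosted exact-phrase clause with the individual terms,
    // so phrase matches score higher but plain term matches still hit.
    static Query build(String field, String[] words) {
        BooleanQuery query = new BooleanQuery();
        PhraseQuery phrase = new PhraseQuery();
        for (int i = 0; i < words.length; i++) {
            phrase.add(new Term(field, words[i]));
        }
        phrase.setBoost(5.0f); // assumed weight; tune to taste
        query.add(phrase, BooleanClause.Occur.SHOULD);
        for (int i = 0; i < words.length; i++) {
            query.add(new TermQuery(new Term(field, words[i])),
                    BooleanClause.Occur.SHOULD);
        }
        return query;
    }
}

Calling build("body", new String[] {"A", "B", "C", "D"}) yields the same
"A B C D" A B C D shape as the rewrite described above.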
Hi,
Sorry, Doron, if the code in my last mail was confusing, and thanks for
the reply. The code in my last mail was not exactly the version that
was causing the problem; this one is.
The Lucene version is 1.2.
Waiting for a suggestion.
Code:
public void indexFile(File inde
Hi Antony, you cannot instruct the query parser to do that. Note that an
application can add both tokenized and un_tokenized data under the same
field name; it is application logic to know that a certain query is
not to be tokenized. In this case you could create your query with:
query = new TermQuery(new Term("attname", "IqTstAdminGuide2.pdf"));
On 10/16/06, EDMOND KEMOKAI <[EMAIL PROTECTED]> wrote:
> Can somebody please clarify the intended behaviour of
> IndexReader.deleteDocuments()?
It deletes documents containing the term. The API docs are correct,
the demo docs are incorrect if they say otherwise.
-Yonik
http://incubator.apache.org/solr Solr, the open-source Lucene search server
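For anyone finding this in the archives, a minimal sketch of deleting by
term (Lucene 2.0-era API; the path, field name, and value are placeholders):

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;

public class DeleteByTerm {
    public static void main(String[] args) throws Exception {
        // Removes every document whose indexed "pk" term equals "42";
        // the field must actually be indexed for the term to match.
        IndexReader reader = IndexReader.open("/path/to/index");
        int deleted = reader.deleteDocuments(new Term("pk", "42"));
        System.out.println("deleted " + deleted + " document(s)");
        reader.close(); // deletions are flushed when the reader closes
    }
}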
Can somebody please clarify the intended behaviour of
IndexReader.deleteDocuments()? Between the various documentations and
implementations, it seems this function is broken. The API doc says it should
delete docs containing the provided term, but instead it deletes all
documents not containing the given term.
On 10/12/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Does the Sort function create some kind of internal cache?
Yes, it's called the FieldCache; entries are keyed by a weak
reference to the index reader. As long as there is a
reference to the index reader (even after close()), the cached
field data stays reachable and won't be freed.
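A small sketch of the kind of search that populates that cache (the "date"
field and the STRING sort type are assumptions):

import org.apache.lucene.search.*;

public class SortedSearch {
    // The first search sorted on "date" builds the FieldCache entry for
    // this searcher's reader; later sorted searches reuse that entry.
    static Hits searchSorted(Searcher searcher, Query query) throws Exception {
        Sort sort = new Sort(new SortField("date", SortField.STRING));
        return searcher.search(query, sort);
    }
}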
Hi,
I have a field "attname" that is indexed with Field.Store.YES,
Field.Index.UN_TOKENIZED. I have a document with the attname of
"IqTstAdminGuide2.pdf".
QueryParser parser = new QueryParser("body", new StandardAnalyzer());
Query query = parser.parse("attname:IqTstAdminGuide2.pdf");
fails
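One way to see why it fails (a sketch): dump what StandardAnalyzer does to
the text. An UN_TOKENIZED field is indexed as a single literal term, so any
change the analyzer makes to the query text (lowercasing, splitting) yields
a term that cannot match it.

import java.io.StringReader;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

public class DumpTokens {
    public static void main(String[] args) throws Exception {
        // Print the tokens StandardAnalyzer produces for the query text;
        // compare them against the literal term stored in the index.
        TokenStream ts = new StandardAnalyzer().tokenStream(
                "attname", new StringReader("IqTstAdminGuide2.pdf"));
        for (Token t = ts.next(); t != null; t = ts.next()) {
            System.out.println(t.termText());
        }
    }
}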
Thanks for the reply Otis.
I looked at the CHANGES.txt file and saw quite a bit of changes. For my port
from Java to C#, I can't rely on the trunk code as it is (to my knowledge)
changed on a monthly if not weekly basis. What I need is an official
release that I can use as the porting point.
If anyone is using the new lazy field loading feature from the Lucene
trunk, you should turn it off or upgrade to the next nightly build
(lucene-2006-10-16) or later.
Bug details here:
http://issues.apache.org/jira/browse/LUCENE-683
-Yonik
http://incubator.apache.org/solr Solr, the open-source Lucene search server
All: Thanks for the ideas and suggestions.
Bill: As Otis pointed out, Lucene already comes with a couple
of stemmers (I'm using Lucene 2.0). Besides PorterStemFilter,
you can also take a look at the SnowballAnalyzer and SnowballFilter
classes, which support more than just English. The integration
is p
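A minimal sketch of that integration, assuming the contrib/snowball jar is
on the classpath (the index path is a placeholder):

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.snowball.SnowballAnalyzer;
import org.apache.lucene.index.IndexWriter;

public class SnowballIndexing {
    public static void main(String[] args) throws Exception {
        // The constructor takes the stemmer name, e.g. "English" or
        // "German"; use the same analyzer at index and query time so
        // the stems line up.
        Analyzer analyzer = new SnowballAnalyzer("English");
        IndexWriter writer = new IndexWriter("/path/to/index", analyzer, true);
        // ... add documents ...
        writer.close();
    }
}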
In a way that certainly needs more testing (haven't had the time), but here
is the gist:
I modified SpanNotQuery to allow a certain number of span crossings,
making it something of a WithinSpanQuery. So instead of just being able to
say find "something" and "something else" and don't let it
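For reference, the stock building blocks look like this (a sketch; the
field, terms, slop, and the boundary marker token are all made up for
illustration; the described modification would relax SpanNotQuery to
tolerate N boundary crossings instead of zero):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.*;

public class SpanWithinSketch {
    // Match "something" near "else", but reject any span that crosses
    // an indexed sentence-boundary marker token.
    static SpanQuery build() {
        SpanQuery a = new SpanTermQuery(new Term("body", "something"));
        SpanQuery b = new SpanTermQuery(new Term("body", "else"));
        SpanQuery near = new SpanNearQuery(new SpanQuery[] {a, b}, 10, false);
        SpanQuery boundary = new SpanTermQuery(new Term("body", "$sentence$"));
        return new SpanNotQuery(near, boundary);
    }
}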
Ismail,
I was having the same type of problem (using v2) until I changed
my index so the ID field went from Field.Index.TOKENIZED to
Field.Index.UN_TOKENIZED. Can you try that, or create a secondary field
set up that way with your pk id in it?
Chris
"Ismail Siddiqui" <[EMA
Mark,
you wrote:
> > On another note...http://famestalker.com
> >
...
>
> http://famestalker.com/devwiki/
Could you explain how "Paragraph/Sentence Proximity Searching"
is implemented in Qsol?
Regards,
Paul Elschot
I am trying to write an Ejb3Directory. It seems to work for index writing,
but not for searching: I get an EOF exception. I assume this means that
either my OutputStream or InputStream is doing something wrong. It fails
because the CSInputStream has a length of zero when it reads the .fnm section
Thanks, it worked.
On 10/15/06, Doron Cohen <[EMAIL PROTECTED]> wrote:
> now pk is primary key which i am storing but not indexing it..
> doc.add(new Field("pk", message.getId().toString(),Field.Store.YES,
> Field.Index.NO));
You would need to index it for this to work.
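Concretely, that means indexing the pk as a single untokenized term (a
sketch of the changed line, keeping Store.YES from the original):

// indexed as one exact term so deleteDocuments(new Term("pk", ...))
// can find it; Field.Index.NO keeps the value out of the index entirely
doc.add(new Field("pk", message.getId().toString(), Field.Store.YES,
        Field.Index.UN_TOKENIZED));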
I would very much like to see the .NET port kept in line with Lucene Java.
This would give index compatibility and the same features as
Java Lucene provides.
George - Cheers for the continuous effort to keep lucene.net in line with
Lucene
Regards,
Prabhu
On 10/14/06, Otis Gospodnetic <[EMAIL PROTECTED]> wrote: