> for (int i = 0; i < reader.maxDoc(); ++i) {
>     if (liveDocs != null && !liveDocs.get(i)) {
>         continue;
>     }
>
> to replace
>
> TermDocs termDocs = reader.termDocs(null);
> while (termDocs.next()) {
> 2013/7/8 Ian Lea
This is what termDocs(null) did internally in Lucene 3. So this code "looks"
inefficient, but that is all you can do, and it is no different from earlier
Lucene versions.
Uwe
There's a fair chunk of info on TermDocs and friends in the migration
guide. http://lucene.apache.org/core/4_3_1/MIGRATE.html
Does that cover your question?
--
Ian.
On Mon, Jul 8, 2013 at 12:32 PM, Yonghui Zhao wrote:
Hi,
What's the proper replacement of "TermDocs termDocs = reader.termDocs(null);"
in Lucene 4.x?
It seems reader.termDocsEnum(term) can't take null as an input parameter.
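Following Uwe's reply above, a minimal sketch of the Lucene 4.x equivalent, visiting every non-deleted document (the helper name visitAllLiveDocs is illustrative, not from the thread):

```java
// Sketch: Lucene 4.x replacement for reader.termDocs(null),
// i.e. iterating every live (non-deleted) document.
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiFields;
import org.apache.lucene.util.Bits;

class LiveDocsWalker {
    void visitAllLiveDocs(IndexReader reader) throws IOException {
        Bits liveDocs = MultiFields.getLiveDocs(reader);
        for (int i = 0; i < reader.maxDoc(); ++i) {
            if (liveDocs != null && !liveDocs.get(i)) {
                continue; // skip deleted documents
            }
            // process document i, e.g. reader.document(i)
        }
    }
}
```

MultiFields.getLiveDocs returns null when the index has no deletions, hence the null check before testing each document.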
Shay,
would you mind opening a jira issue for that?
simon
On Fri, Sep 24, 2010 at 2:53 AM, Shay Banon wrote:
Hi,
A user got this very strange exception, and I managed to get the index
that it happens on. Basically, iterating over the TermDocs causes an AIOOBE
exception. I easily reproduced it using the FieldCache which does exactly
that (the field in question is indexed as numeric). Here is the code; it
populates a BitSet like so:
for ( int x = 0; x < fields.length; x++ ) {
    for ( int y = 0; y < values.length; y++ ) {
        TermDocs termDocs = reader.termDocs( new Term( fields[x], values[y] ) );
        try {
            while ( termDocs.next() ) {
                int doc = termDocs.doc();
                bits.set( doc );
            }
        }
        finally {
            termDocs.close();
        }
    }
}
I notice that
calling clone() on the "real" IndexInputs and so for NIOFSDirectory,
FSDirectory and RAMDirectory at least, when a clone's close() is called,
that's a no-op.
I think there are many places in Lucene where we don't close the
TermDocs/TermPositions so I think you're OK not calling them
until/unless this situation changes in Lucene.
Probably we should either remove close() entirely (because it sure
looks like it's supposed to be called), o
Greetings all,
I currently have a FieldExistsFilter which returns all documents that
contain a particular field. I'm in the process of converting my custom
filters to be DocIdSet based rather than BitSet based. This filter, however,
requires the use of a TermDocs object to iterate over term
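A hedged sketch of that conversion against the 2.9-era Filter API (the class layout and names are illustrative; the thread does not include the actual filter):

```java
// Sketch: DocIdSet-based filter matching every document that has
// at least one term in the given field (Lucene 2.9/3.x API).
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.lucene.index.TermEnum;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
import org.apache.lucene.util.OpenBitSet;

public class FieldExistsFilter extends Filter {
    private final String field;

    public FieldExistsFilter(String field) {
        this.field = field;
    }

    @Override
    public DocIdSet getDocIdSet(IndexReader reader) throws IOException {
        OpenBitSet bits = new OpenBitSet(reader.maxDoc());
        TermEnum terms = reader.terms(new Term(field, ""));
        TermDocs termDocs = reader.termDocs();
        try {
            do {
                Term t = terms.term();
                if (t == null || !t.field().equals(field)) {
                    break; // ran past the terms of this field
                }
                termDocs.seek(terms);
                while (termDocs.next()) {
                    bits.set(termDocs.doc());
                }
            } while (terms.next());
        } finally {
            terms.close();
            termDocs.close();
        }
        return bits; // OpenBitSet extends DocIdSet in this API generation
    }
}
```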
der to populate values the first
time a given field is loaded. It's also what segment merging does.
In 2.9, we've switched searching to proceed segment by segment,
instead of using the MultiSegmentReader API to get TermEnum/TermDocs
(this was LUCENE-1483). This gives a good speedu
Thought I would report a performance increase noticed in migrating from
2.3.2 to 2.4.0.
Performing an iterated loop using termDocs & termEnums like below is
about 30% faster.
The example test set I'm running has about 70K documents to go through
and process (on a dual processor window
It seems like you are trying to use the TermDocs iterator to load the
term freq for that particular document (doc)?
It doesn't work that way -- instead, it simply iterates over all
documents that this term occurred in. (Ie it will replace the doc in
the int[] that you passed in,
Hello:
I have a problem with the TermDocs#read operation.
The following code has an incorrect result:
.
int termFreq = 0;
.
TermDocs termDocs = indexReader.termDocs(new Term(((Field) field).name(), termCons));
int[] freqs = new int[]{0
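One likely culprit, assuming the arrays really are length 1 as above: TermDocs.read(int[], int[]) fills at most docs.length entries per call and returns how many it wrote, so it must be called in a loop. The contract can be simulated without an index (FakePostings is a stand-in for illustration, not a Lucene class):

```java
// Self-contained simulation of the TermDocs.read(int[], int[]) contract:
// read() bulk-fills the arrays, returns the entry count, and callers
// must loop until it returns 0.
class FakePostings {
    private final int[] docIds;
    private final int[] termFreqs;
    private int pos = 0;

    FakePostings(int[] docIds, int[] termFreqs) {
        this.docIds = docIds;
        this.termFreqs = termFreqs;
    }

    // Mimics TermDocs.read: fills at most docs.length entries per call.
    int read(int[] docs, int[] freqs) {
        int n = Math.min(docs.length, docIds.length - pos);
        for (int i = 0; i < n; i++) {
            docs[i] = docIds[pos + i];
            freqs[i] = termFreqs[pos + i];
        }
        pos += n;
        return n;
    }
}

public class ReadLoopDemo {
    public static void main(String[] args) {
        FakePostings postings = new FakePostings(
                new int[]{2, 5, 9}, new int[]{1, 4, 2});
        int[] docs = new int[1];  // a length-1 buffer, as in the question
        int[] freqs = new int[1];
        int totalFreq = 0;
        int count;
        // A single read() with a length-1 buffer sees only one posting;
        // looping drains all of them.
        while ((count = postings.read(docs, freqs)) > 0) {
            for (int i = 0; i < count; i++) {
                totalFreq += freqs[i];
            }
        }
        System.out.println(totalFreq); // 1 + 4 + 2 = 7
    }
}
```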
:
: > termDocs = reader.termDocs(term);
: >while(termDocs.next()){
: >int index = termDocs.doc();
: >if(reader.document(index).get("id").equals(id)){
: >re
> From: Erick Erickson <[EMAIL PROTECTED]>
> To: java-user@lucene.apache.org
> Sent: Tuesday, June 24, 2008 9:26:03 AM
> Subject: Re: uniqueWords, and termDocs
>
> Isn't asking for unique words (actually tokens) equivalent to enumerating
> all the terms in a fie
at 6:03 PM, Cam Bazz <[EMAIL PROTECTED]> wrote:
Hello,
I need to be able to select a random word out of all the words in my index.
how can I do this through termDocs()?
Also, I need to get a list of unique words as well. Is there a way to ask
this to lucene?
Best Regards,
-C.B.
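One way to answer both questions, sketched against the pre-4.0 TermEnum API (the method name is illustrative, and this buffers every unique term of one field in memory, which may not scale for very large indexes):

```java
// Sketch: collect all unique terms of a field, then pick one at random.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermEnum;

class RandomTermPicker {
    String randomTerm(IndexReader reader, String field) throws IOException {
        List<String> words = new ArrayList<String>();
        // terms(t) positions the enum at the first term >= t
        TermEnum terms = reader.terms(new Term(field, ""));
        try {
            do {
                Term t = terms.term();
                if (t == null || !t.field().equals(field)) {
                    break; // ran past this field's terms
                }
                words.add(t.text());
            } while (terms.next());
        } finally {
            terms.close();
        }
        if (words.isEmpty()) {
            return null;
        }
        return words.get(new Random().nextInt(words.size()));
    }
}
```

The words list is also the answer to the second question: it is exactly the set of unique tokens indexed for that field.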
On 20 Jun 2008 at 18.12, Vinicius Carvalho wrote:
Hello there! I'm trying to query for a specific document in an
efficient way.
Hi Vinicius,
termDocs = reader.termDocs(term);
while (termDocs.next()) {
    int index = termDocs.doc();
    if
it seems that it's a bit of overhead; using a reader.termDocs(term) would be
faster.
Here's a piece of code:
private void deleteFromIndex(String id) {
    Term term = new Term("id", id);
    IndexReader reader = readerManager.getIndexReader();
    TermDocs termDocs = null;
    try {
        termDocs = reader.termDocs(term);
        while (termDocs.next()) {
            i
On 11/5/07, Mike Streeton <[EMAIL PROTECTED]> wrote:
> Can TermDocs be reused i.e. can you do.
>
> TermDocs docs = reader.termDocs();
> docs.seek(term1);
> int i = 0;
> while (docs.next()) {
> i++;
> }
> docs.seek(term2);
> int j = 0;
> while (docs
Hi!
Imagine an index holding documents in different languages and country.
Language+country is what I call a context and I build and hold a QueryFilter
for each context.
When performing a fuzzy search, FilteredTermEnum doesn't care about any
contexts at all (well, how should it :). It builds a
Can TermDocs be reused, i.e. can you do:
TermDocs docs = reader.termDocs();
docs.seek(term1);
int i = 0;
while (docs.next()) {
i++;
}
docs.seek(term2);
int j = 0;
while (docs.next()) {
j++;
}
Reuse does seem to work but I get ArrayIndexOutOfBoundsExceptions from
BitVector if I
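If reusing one TermDocs via seek() proves fragile, a sketch of the safer pattern, obtaining a fresh iterator per term (pre-4.0 API; the method name is illustrative):

```java
// Sketch: count the documents containing a term with a fresh,
// properly closed TermDocs per term instead of seek() reuse.
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;

class TermDocCounter {
    int countDocs(IndexReader reader, Term term) throws IOException {
        TermDocs docs = reader.termDocs(term);
        try {
            int n = 0;
            while (docs.next()) {
                n++;
            }
            return n;
        } finally {
            docs.close();
        }
    }
}
```

For a plain count, IndexReader.docFreq(term) is cheaper, though it does not account for deleted documents.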
Ir is IndexReader.
termIdent is Term.
int freq = ir.docFreq(termIdent);
if (freq > 1) {
    TermDocs termDocs = ir.termDocs(termIdent);
    int[] docsArr = new int[freq];
    int[] freqArr = new int[freq];
    int number = termDocs.read(docsArr, freqArr);
    System.out.println(number);