Simplest solution is to wrap your findFeatures.reader in a
SlowMultiReaderWrapper (as the exception suggests).
The more performant solution is to change your code to visit the
sequential sub-readers of findFeatures.reader directly. But if
performance isn't important here, just do the simple solution.
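For concreteness, the two routes above might look roughly like the sketch below. This assumes the trunk (4.0-dev) reader API of that period; `ReaderVisit` and `processSegment` are hypothetical names, and `findFeatures.reader` is just whatever `IndexReader` your code holds:

```java
import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.SlowMultiReaderWrapper;

public class ReaderVisit {

    // Simple route: wrap the composite reader so that APIs which demand
    // an atomic (single-segment) reader accept it. Convenient but slower.
    public static IndexReader wrapSlow(IndexReader reader) {
        return new SlowMultiReaderWrapper(reader);
    }

    // Faster route: visit each sequential sub-reader (segment) directly.
    public static void visitSegments(IndexReader reader) throws IOException {
        IndexReader[] subs = reader.getSequentialSubReaders();
        if (subs == null) {
            processSegment(reader); // reader is already atomic
        } else {
            for (IndexReader sub : subs) {
                visitSegments(sub); // sub-readers can themselves be composite
            }
        }
    }

    private static void processSegment(IndexReader segmentReader) throws IOException {
        // per-segment work goes here
    }
}
```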
I have changed the code according to MIGRATE.txt,
but now I am getting an error at:

public long getCorpCount(Vector clauses)
{
    long count = 0;
    try {
        SpanQuery[] clause = new SpanQuery[clauses.size()];
        clause = clauses.toArray(clause);
        // SpanNear
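For reference, under the pre-trunk (3.x) API a corpus count over span clauses could be sketched as below; this is the old shape that the migration breaks. The SpanNearQuery wiring (slop, in-order flag) is a guess, since the snippet above cuts off at // SpanNear:

```java
import java.io.IOException;
import java.util.Vector;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.Spans;

public class CorpCount {

    // Counts every match of the span clauses across the whole index.
    // Uses the 3.x getSpans(IndexReader) signature; trunk replaced it
    // with a per-segment variant, which is why the call site errors there.
    public static long getCorpCount(IndexReader reader, Vector<SpanQuery> clauses)
            throws IOException {
        long count = 0;
        SpanQuery[] clause = clauses.toArray(new SpanQuery[clauses.size()]);
        SpanNearQuery near = new SpanNearQuery(clause, 1, true); // slop=1, in-order: guesses
        Spans spans = near.getSpans(reader);
        while (spans.next()) {
            count++;
        }
        return count;
    }
}
```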
Hi,
If I understand correctly what you are trying to do as far as getting corpusTF,
you might want to look at the implementation of the "-t" flag in
org.apache.lucene.misc/HighFreqTerms.java in contrib.
Take a look at the getTotalTermFreq method in trunk.
http://svn.apache.org/viewvc/lucene
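A hedged sketch of what a getTotalTermFreq-style lookup amounts to on trunk's flex API; the exact TermsEnum method names shifted between builds around then, so treat the signatures as approximate, and `CorpusTF` is a hypothetical name:

```java
import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiFields;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

public class CorpusTF {

    // Total number of occurrences of the term across all documents
    // (what the thread calls corpusTF); 0 if the term or field is absent.
    public static long corpusTF(IndexReader reader, String field, String text)
            throws IOException {
        Terms terms = MultiFields.getTerms(reader, field);
        if (terms == null) {
            return 0;
        }
        TermsEnum te = terms.iterator();
        if (te.seek(new BytesRef(text)) == TermsEnum.SeekStatus.FOUND) {
            return te.totalTermFreq(); // may be -1 if the codec omits it
        }
        return 0;
    }
}
```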
MIGRATE.txt is here:
https://svn.apache.org/repos/asf/lucene/dev/trunk/lucene/MIGRATE.txt
DocsEnum doesn't have a "getSpans()", so you mean you're hitting a
compilation error?
Maybe step back a bit and describe what you're trying to do...?
Mike
http://blog.mikemccandless.com
Where can I find MIGRATE.txt?
On Wed, Mar 23, 2011 at 3:07 AM, nitin hardeniya wrote:
Hey,

No, null doesn't work. I have tried:

tds = MultiFields.getTermDocsEnum(reader, null, "content", term);

This line no longer shows an error, but now it shows an error at
getSpans().
Thanks
Nitin
On Wed, Mar 23, 2011 at 1:30 AM, Michael McCandless-2 [via Lucene] <
ml-node+2716623-544966764-77...@n3.nabble.com>
Try looking at MIGRATE.txt?
Passing null for the skipDocs should be fine. Likely you need to use
MultiFields.getTermDocsEnum, but that entails a performance hit (vs
going segment by segment yourself).
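The convenient composite-reader route might look like the sketch below (counting docs that contain a term, as one example use of the enum); note that the postings enum itself has no getSpans(), so span iteration has to come from a SpanQuery instead. Signatures assume 2011 trunk, and `DocCount` is a hypothetical name:

```java
import java.io.IOException;

import org.apache.lucene.index.DocsEnum;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiFields;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.BytesRef;

public class DocCount {

    // Counts documents containing the term via the composite-reader
    // convenience API. Passing null for skipDocs means no docs are
    // skipped, which is fine when the index has no deletions.
    public static long countDocs(IndexReader reader, String field, BytesRef term)
            throws IOException {
        DocsEnum docs = MultiFields.getTermDocsEnum(reader, null, field, term);
        if (docs == null) {
            return 0; // term or field absent
        }
        long n = 0;
        while (docs.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
            n++;
        }
        return n;
    }
}
```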
Mike
http://blog.mikemccandless.com
On Tue, Mar 22, 2011 at 1:56 PM, nitinhardeniya
wrote: