hey

No, null doesn't work.

I have tried:

    tds = MultiFields.getTermDocsEnum(reader, null, "content", term);

This is not showing an error, but now it shows an error at getSpans().
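
For reference, this is the shape of the loop I am trying to end up with. It is
only a minimal sketch (the CorpusTF class and corpusTF method names are mine, and
I am assuming the term bytes can be built with new BytesRef(text)); the
nextDoc()-then-freq() ordering follows the DocsEnum contract:

    import java.io.IOException;

    import org.apache.lucene.index.DocsEnum;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.MultiFields;
    import org.apache.lucene.util.BytesRef;

    public class CorpusTF {
        /** Sums the per-document frequencies of text in the "content" field. */
        public static long corpusTF(IndexReader reader, String text) throws IOException {
            BytesRef term = new BytesRef(text);  // assumption: BytesRef built straight from the String
            // null skipDocs = don't skip any documents
            DocsEnum tds = MultiFields.getTermDocsEnum(reader, null, "content", term);
            long freq = 0;
            if (tds != null) {  // null when no document contains the term
                int docID;
                // position on a document with nextDoc() before reading freq()
                while ((docID = tds.nextDoc()) != DocsEnum.NO_MORE_DOCS) {
                    freq += tds.freq();
                }
            }
            return freq;
        }
    }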

Thanks
Nitin

On Wed, Mar 23, 2011 at 1:30 AM, Michael McCandless-2 [via Lucene] <
ml-node+2716623-544966764-77...@n3.nabble.com> wrote:

> Try looking at MIGRATE.txt?
>
> Passing null for the skipDocs should be fine.  Likely you need to use
> MultiFields.getTermDocsEnum, but that entails a performance hit (vs
> going segment by segment yourself).
>
> Mike
>
> http://blog.mikemccandless.com
>
> On Tue, Mar 22, 2011 at 1:56 PM, nitinhardeniya <[hidden email]> wrote:
>
> > Hi,
> >
> > I have code that works fine with Lucene 3.2, where I used TermDocs to
> > find the corpusTF. Here is the code:
> >
> >
> > public void calculateCorpusTF(IndexReader reader) throws IOException {
> >     // TODO Auto-generated method stub
> >     Iterator<String> it = word.iterator();
> >     Iterator<wordProp> iwp = word_prop.iterator();
> >     wordProp wp;
> >     Term ta = null;
> >     TermDocs tds;
> >     // DocsEnum tds;
> >     String text;
> >     tfDoc tfcoll;
> >     long freq = 0;
> >     OpenBitSet skipDocs = null;
> >     skipDocs = new OpenBitSet(0);
> >     // System.out.println("Length: " + skipDocs.length());
> >     try {
> >         while (it.hasNext()) {
> >             text = it.next();
> >             wp = iwp.next();
> >
> >             System.out.println("Word is " + text);
> >             ta = new Term("content", text);
> >             // BytesRef term = new BytesRef(text.toCharArray(), 0, text.length());
> >
> >             tfcoll = new tfDoc();
> >             freq = 0;
> >
> >             tds = reader.termDocs(ta);
> >             // tds = reader.terms("content");
> >             if (tds != null) {
> >                 while (tds.next()) {
> >                     freq += tds.freq();
> >                     // System.out.print(text + "  " + freq);
> >                 }
> >             }
> >
> >             // New Code -->
> >             // tds = reader.termDocsEnum(skipDocs, "content", term);
> >             // if (tds != null) {
> >             //     while (true) {
> >             //         freq += tds.freq();
> >             //         final int docID = tds.nextDoc();
> >             //         if (docID == DocsEnum.NO_MORE_DOCS) {
> >             //             break;
> >             //         }
> >             //     }
> >             // }
> >             // New code ends <--
> >
> >             tfcoll.tfA = freq;
> >             System.out.print(text + "  " + freq);
> >             if (tfcoll.totalTF() == 0) {
> >                 // System.out.println(" " + tfcoll.tfA + " " + tfcoll.tfD + " " + tfcoll.tfC);
> >                 System.out.println("Text " + text + " Freq " + freq);
> >             }
> >
> >             wp.tfColl = tfcoll;
> >         }
> >     }
> >     catch (Exception e) {
> >         // TODO: handle exception
> >         e.printStackTrace();
> >     }
> > }
> >
> > But now I have to use TermDocsEnum, because I am using the Lucene 4.0 dev
> > branch, which does not have the TermDocs API. I was trying to change my
> > code; please refer to the new code [commented out] and tell me how to use
> > this method properly. An example would be great:
> >
> >     tds = reader.termDocsEnum(skipDocs, "content", term);
> >
> > I have tried passing null for skipDocs because I don't want to skip
> > anything, but it throws an error.
> >
> > Please help.
> >
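
For the segment-by-segment route Mike mentions above, I think the loop would look
roughly like the sketch below. Treat every method name here as an assumption to be
checked against the current 4.0-dev javadocs and MIGRATE.txt (the class name
PerSegmentCorpusTF is mine); I have not verified this compiles against my checkout:

    import java.io.IOException;

    import org.apache.lucene.index.DocsEnum;
    import org.apache.lucene.index.Fields;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.Terms;
    import org.apache.lucene.index.TermsEnum;
    import org.apache.lucene.util.BytesRef;

    public class PerSegmentCorpusTF {
        /** Sums corpus TF for text in "content", visiting each segment reader directly. */
        public static long corpusTF(IndexReader reader, String text) throws IOException {
            BytesRef termBytes = new BytesRef(text);
            long freq = 0;
            // assumption: getSequentialSubReaders() exposes the per-segment readers,
            // and returns null when the reader is already a single segment
            IndexReader[] subs = reader.getSequentialSubReaders();
            if (subs == null) {
                subs = new IndexReader[] { reader };
            }
            for (IndexReader sub : subs) {
                Fields fields = sub.fields();          // flex API: this segment's fields
                if (fields == null) continue;
                Terms terms = fields.terms("content");
                if (terms == null) continue;
                TermsEnum te = terms.iterator();
                // assumption: seek(BytesRef) reports FOUND when the exact term exists
                if (te.seek(termBytes) == TermsEnum.SeekStatus.FOUND) {
                    DocsEnum de = te.docs(null, null); // null skipDocs: don't skip anything
                    int doc;
                    while ((doc = de.nextDoc()) != DocsEnum.NO_MORE_DOCS) {
                        freq += de.freq();
                    }
                }
            }
            return freq;
        }
    }

That would avoid the MultiFields merge overhead, at the cost of walking the
sub-readers myself.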



-- 
Nitin Kumar Hardeniya

M.Tech Computational Linguistics
IIIT Hyderabad

SAVE PAPER - THINK BEFORE YOU PRINT


