> of every document that exists in a.
> Then use addIndexes with an AtomicReader which overrides getLiveDocs to
> return the modified live docs.
> Same as option 1, but you don't actually do the delete operation, which is
> more costly than just unsetting a bit.
>
> Shai
>
>
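A minimal sketch of the live-docs override described above, assuming the
Lucene 4.x API (the wrapper class and its names are illustrative, not from
the thread):

    import org.apache.lucene.index.AtomicReader;
    import org.apache.lucene.index.FilterAtomicReader;
    import org.apache.lucene.util.Bits;
    import org.apache.lucene.util.FixedBitSet;

    // Wraps a reader so that addIndexes() sees our bitset as the live docs;
    // documents whose bit is cleared are silently dropped during the add.
    class LiveDocsOverridingReader extends FilterAtomicReader {
      private final FixedBitSet liveDocs;

      LiveDocsOverridingReader(AtomicReader in, FixedBitSet liveDocs) {
        super(in);
        this.liveDocs = liveDocs;
      }

      @Override
      public Bits getLiveDocs() {
        return liveDocs;
      }

      @Override
      public int numDocs() {
        // keep the reader's doc count consistent with the overridden bits
        return liveDocs.cardinality();
      }
    }

Passing such wrapped readers to IndexWriter.addIndexes(IndexReader...) then
behaves like option 1, but without paying for real delete operations.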
> t, just make sure to use
> FacetFields.addFacets() on it, so its facets are re-indexed too.
>
> Shai
>
>
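For the re-indexing step, a sketch under the 4.x facet API as I understand
it (the helper method and its parameters are my own illustration; the
facet-adding method appears as addFields() in the 4.x FacetFields javadocs):

    import java.io.IOException;
    import java.util.Arrays;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.facet.index.FacetFields;
    import org.apache.lucene.facet.taxonomy.CategoryPath;
    import org.apache.lucene.facet.taxonomy.TaxonomyWriter;
    import org.apache.lucene.index.IndexWriter;

    // Re-adds the facet fields to a document before re-indexing it, so its
    // category ordinals are rewritten against the current taxonomy.
    void reindexDocument(IndexWriter writer, TaxonomyWriter taxoWriter,
                         Document doc, CategoryPath... categories)
        throws IOException {
      FacetFields facetFields = new FacetFields(taxoWriter);
      facetFields.addFields(doc, Arrays.asList(categories));
      writer.addDocument(doc);
    }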
> On Wed, Jul 3, 2013 at 8:52 PM, Peng Gao wrote:
>
> > Shai,
> > Thanks.
> >
> > I went with option #3 since the temp indexes are actually created in
> - Call IW.addIndexes() with an OrdinalMappingAtomicReader. Look at its
> javadocs for example code.
>
> Let me know if that works for you.
>
> Shai
>
>
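The OrdinalMappingAtomicReader javadocs describe roughly this pattern; here
is a sketch (Lucene 4.x; directory and variable names are illustrative, and
package locations may differ slightly between 4.x releases):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter;
    import org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.MemoryOrdinalMap;
    import org.apache.lucene.facet.util.OrdinalMappingAtomicReader;
    import org.apache.lucene.index.AtomicReaderContext;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;

    // Merges one source index plus its taxonomy into a destination pair,
    // rewriting facet ordinals through the map produced while the
    // taxonomies are merged.
    void addFacetIndex(Directory srcIndexDir, Directory srcTaxoDir,
                       IndexWriter destWriter,
                       DirectoryTaxonomyWriter destTaxoWriter)
        throws IOException {
      // 1. Merge the taxonomy, capturing the old->new ordinal map.
      MemoryOrdinalMap map = new MemoryOrdinalMap();
      destTaxoWriter.addTaxonomy(srcTaxoDir, map);
      int[] ordinalMap = map.getMap();

      // 2. Wrap each source segment so its ordinals are rewritten on add.
      DirectoryReader reader = DirectoryReader.open(srcIndexDir);
      try {
        List<IndexReader> wrapped = new ArrayList<IndexReader>();
        for (AtomicReaderContext ctx : reader.leaves()) {
          wrapped.add(new OrdinalMappingAtomicReader(ctx.reader(), ordinalMap));
        }
        destWriter.addIndexes(wrapped.toArray(new IndexReader[wrapped.size()]));
      } finally {
        reader.close();
      }
    }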
>
> On Wed, Jul 3, 2013 at 6:14 PM, Peng Gao wrote:
>
> > Hi Shai,
> > Thanks for the reply.
>
> The ordinals would not match, and you may also hit such exceptions, since
> one index may have bigger ordinals than what the taxonomy reader knows
> about.
>
> Can you share a little bit about your scenario and why you need to use a
> MultiReader?
>
> Shai
>
>
>
> On T
How do I accumulate counts over a MultiReader (two IndexReaders)?
The following code causes an IOException:
ArrayList<FacetRequest> facetRequests = new ArrayList<FacetRequest>();
for (String groupField : groupFields)
  facetRequests.add(new CountFacetRequest(new CategoryPath(groupField, '/'), 1));
Face
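For reference, the standard 4.x counting path looks roughly like this (a
sketch; signatures match recent 4.x releases as I recall, and it assumes
both indexes behind the MultiReader were built against one shared taxonomy,
otherwise the ordinal mismatch Shai describes above applies):

    import java.io.IOException;
    import java.util.List;
    import org.apache.lucene.facet.params.FacetSearchParams;
    import org.apache.lucene.facet.search.FacetRequest;
    import org.apache.lucene.facet.search.FacetResult;
    import org.apache.lucene.facet.search.FacetsCollector;
    import org.apache.lucene.facet.taxonomy.TaxonomyReader;
    import org.apache.lucene.index.MultiReader;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.MatchAllDocsQuery;

    // Runs the facet requests over the MultiReader and returns the counts.
    List<FacetResult> accumulate(MultiReader multi, TaxonomyReader taxo,
                                 List<FacetRequest> facetRequests)
        throws IOException {
      IndexSearcher searcher = new IndexSearcher(multi);
      FacetsCollector fc =
          FacetsCollector.create(new FacetSearchParams(facetRequests), multi, taxo);
      searcher.search(new MatchAllDocsQuery(), fc);
      return fc.getFacetResults();
    }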
> n/package-summary.html>
>
> Steve
>
> On Feb 28, 2013, at 1:28 PM, Peng Gao wrote:
>
> > Hi,
> >
> > I have a Lucene 2.9.x app that uses
> > org.apache.lucene.analysis.snowball.SnowballAnalyzer for index
> > generation,
> >
>
Hi,
I have a Lucene 2.9.x app that uses
org.apache.lucene.analysis.snowball.SnowballAnalyzer for
index generation,
analyzer = new SnowballAnalyzer("English", StopAnalyzer.ENGLISH_STOP_WORDS);
and I want to upgrade it to 4.1.
SnowballAnalyzer is deprecated in 4.1. The doc simply states
"Depr