Hi

In our application, documents are periodically deleted from the index. Although the deleted documents correctly no longer appear in search results, our users are complaining that their hard drives are filling up, because the index keeps growing in size even though those entries should have been removed. I cannot understand why this is the case, since I call both writer.expungeDeletes() and writer.commit(). Any ideas?

Here is my delete function:

public void deleteDocs(Term[] terms) throws MessageSearchException {
    logger.debug("delete docs");
    try {
        indexLock.lock();
        openIndex();
        try {
            writer.deleteDocuments(terms);
        } catch (Exception e) {
            throw new MessageSearchException("failed to delete doc from index: " + e.getMessage(), e, logger);
        } finally {
            try {
                // merge away deleted docs (false = don't wait for the merges) and commit the change
                writer.expungeDeletes(false);
                writer.commit();
            } catch (Exception io) {
                throw new MessageSearchException("failed to expunge docs from index: " + io.getMessage(), io, logger);
            }
        }
    } catch (Throwable t) {
        logger.error("failed to delete docs from index: " + t.getMessage(), t);
    } finally {
        indexLock.unlock();
    }
}
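
For reference, a typical call in our code looks roughly like this (the field name and values below are only illustrative):

import org.apache.lucene.index.Term;

// one Term per document to remove, keyed on our unique id field
Term[] terms = new Term[] {
    new Term("uid", "message-1001"),
    new Term("uid", "message-1002")
};
deleteDocs(terms);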

As a side note, we are moving from Lucene 2.9 to 3.0. What is the 3.0 equivalent of:

QueryParser queryParser = new QueryParser("body", analyzer);
and of "Field.Index.TOKENIZED"? If I change these, will my index remain compatible?


Thanks

Jamie

