You should not use setTokenized(true) for your id field: that splits the value into tokens according to your analyzer, so the index never contains the single exact term that deleteDocuments(new Term("id", ...)) is trying to match.
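Something like this sketch (assuming an IndexWriter named iw and a doc.id, as in your code below) indexes the id as one untokenized term and then deletes by that exact term:

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.Term;

// StringField is indexed but NOT tokenized, so the whole id is one term:
Document iDoc = new Document();
iDoc.add(new StringField("id", doc.id.toString(), Field.Store.YES));
iw.addDocument(iDoc);

// Later, this removes only the document(s) whose "id" term equals doc.id:
iw.deleteDocuments(new Term("id", doc.id.toString()));
iw.commit();

Your other (tokenized, term-vector) fields can keep whatever FieldType you like; it's only the field you delete by that needs to stay a single exact term.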
Mike McCandless
http://blog.mikemccandless.com

On Tue, Jun 23, 2015 at 7:40 AM, Behnam Khoshsafar <b.khoshsa...@hamgam.ir> wrote:
> I'm using Lucene 5.1.0 to index documents and search them. I have a lot of
> documents, over 1,000,000, which are stored in a database. When I start running
> the project for the first time, I use Lucene to index these documents. Now I
> want to delete one document from the database and from the index. I also assign an
> id to each document. I am using the following command to delete, but it
> deletes the whole index:
>
> iw.deleteDocuments(new Term("id", doc.id));
>
> I also tried deleting with a Query, but that deletes the whole index as well.
>
> I add documents to the index as follows:
>
> iDoc = new org.apache.lucene.document.Document();
> FieldType fieldType = new FieldType();
> fieldType.setIndexOptions(
>     IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS);
> fieldType.setTokenized(true);
> fieldType.setStored(true);
> fieldType.setOmitNorms(true);
> fieldType.setStoreTermVectors(true);
> fieldType.setStoreTermVectorOffsets(true);
> fieldType.setStoreTermVectorPayloads(true);
> fieldType.setStoreTermVectorPositions(true);
> iDoc.add(new Field("id", doc.id.toString(), fieldType));
> iw.addDocument(iDoc);