Working with Lucene 3.5, I'd like to attach a payload to the terms of a specific field at indexing time. To do that, I use the following code to produce a token stream with the StandardAnalyzer for the field I want to place the payload on.
    import java.io.IOException;
    import java.io.Reader;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.analysis.tokenattributes.PayloadAttribute;
    import org.apache.lucene.index.Payload;

    public static TokenStream tokenStream(final String fieldName, Reader reader,
            Analyzer analyzer, final String item) {
        final TokenStream ts = analyzer.tokenStream(fieldName, reader);
        TokenStream res = new TokenStream() {
            CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
            PayloadAttribute payAtt = addAttribute(PayloadAttribute.class);

            @Override
            public boolean incrementToken() throws IOException {
                // For every token the wrapped stream produces, append the
                // marker text and attach the payload.
                boolean hasNext = ts.incrementToken();
                if (hasNext) {
                    termAtt.append("test");
                    payAtt.setPayload(new Payload(item.getBytes()));
                }
                return hasNext;
            }
        };
        return res;
    }

When I retrieve the token stream in my code and iterate over it to make sure all the items were added, I can see that both the term and the payload are present in the stream. Then I call reset() on the stream and pass it to the field for indexing. However, when I print the document out and inspect the index in Luke, the field I'm trying to attach the payload to has no terms or payloads associated with it, even though I specified both the term and payload attributes in the token stream code.

I'm quite confused about how these pieces are supposed to fit together, and all the examples I could find ended with people writing their own custom analyzers. Is that the path I need to be walking down instead of trying to pass in a custom token stream?

Thanks!
Stephen
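EDIT: For context, here is roughly how I verify and then index the stream. This is a simplified sketch, not my exact code: the writer/analyzer setup is omitted, and the field name "body", the payload string, and variables like text and writer are placeholders.

    // Build the wrapped stream using the method above.
    TokenStream stream = tokenStream("body", new StringReader(text),
            new StandardAnalyzer(Version.LUCENE_35), "payloadValue");

    // Debug pass: both the term and the payload show up here.
    CharTermAttribute term = stream.getAttribute(CharTermAttribute.class);
    PayloadAttribute pay = stream.getAttribute(PayloadAttribute.class);
    while (stream.incrementToken()) {
        System.out.println(term + " -> " + new String(pay.getPayload().getData()));
    }

    // Reset the stream and hand it to the field for indexing.
    stream.reset();
    Document doc = new Document();
    doc.add(new Field("body", stream));
    writer.addDocument(doc);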