We recently moved from Lucene 2.2 to 3.6 and have noticed that tokens are not indexed the same way in the two versions.
Although we are open to reindexing the data that was originally indexed with 2.2, I would like to know whether there is a way to avoid reindexing. I am using the IndexUpgrader tool to upgrade the index, but it does not help with the tokenization problem. Any pointers?