What *analyzer* are you using for your queries? Have Luke explain your queries and I suspect you'll see your problem. If you're using StandardAnalyzer, the problem is probably that 'case study' gets parsed into (lrt:case lrt:study). Note that the implied operator is OR unless you've changed it. But these terms are compared against your indexed tokens:

application
assessment
broadcast
case study
course
demonstration
drill and practice
educational game

each of which is a single token, because you indexed the field UN_TOKENIZED. No indexed value matches "case" (the closest is "case study", but it doesn't match: "case" != "case study"), and similarly for "study".
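One way out is to skip the query parser entirely for this field. Below is a minimal sketch (my illustration, not code from either mail; it assumes the Lucene 2.x API implied by Field.Index.UN_TOKENIZED, and the index path "index" is hypothetical). A TermQuery is never run through an analyzer, so the blank survives and the term is compared verbatim against the stored value:

import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;

public class LrtTermQueryDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical path to the index in question.
        IndexSearcher searcher = new IndexSearcher("index");

        // TermQuery text bypasses all analysis, so "case study" is
        // matched exactly against the single UN_TOKENIZED token.
        // Remember to lowercase here, since the field was indexed
        // with toLowerCase().
        TermQuery query = new TermQuery(new Term("lrt", "case study"));

        TopDocs hits = searcher.search(query, null, 10);
        System.out.println("matches: " + hits.totalHits);

        searcher.close();
    }
}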
If this is completely off base, perhaps you can provide some additional detail.

Best
Erick

On Tue, Sep 30, 2008 at 9:24 AM, David Massart <[EMAIL PROTECTED]> wrote:

> Hi all,
>
> Here is a new newbie question.
>
> I'm adding documents to a Lucene index that contain a field "lrt" created
> as follows:
>
>     doc.add( new Field( "lrt", learningResourceType.toLowerCase(),
>                         Field.Store.NO, Field.Index.UN_TOKENIZED ) );
>
> where the possible values for learningResourceType are:
>
>     application
>     assessment
>     broadcast
>     case study
>     course
>     demonstration
>     drill and practice
>     educational game
>
> In this index, searching for a string that doesn't contain a blank
> character works fine, for example:
>
>     lrt: application
>
> but searching for a string with a blank does not return any results,
> for example:
>
>     lrt: case study
>
> Using lukeall, I can successfully query my index by escaping the blank
> with a backslash, for example:
>
>     lrt: case\ study
>
> But it doesn't work when I try to do the same thing in Java:
>
>     String query = "lrt: case\\ study";
>
> doesn't return any result.
>
> I've also tried QueryParser.escape( "case study" ), but it doesn't seem
> to affect blank characters.
>
> Thanks for your help.
>
> David
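On the QueryParser.escape() question in the quoted mail: escape() only backslash-protects the parser's special characters (+, -, :, and so on), and whitespace is not one of them, so it leaves "case study" untouched. Even a manually escaped blank only gets both words to the analyzer together, where StandardAnalyzer splits them again; presumably Luke was set to an analyzer that keeps the string whole, which would explain why the escaped form worked there. If the query string has to go through QueryParser (say, because it comes from user input), one common fix is to give the parser a KeywordAnalyzer for just this field and quote the phrase. A sketch, again against the Lucene 2.x API (PerFieldAnalyzerWrapper and KeywordAnalyzer are standard Lucene classes; the demo class name is made up):

import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;

public class LrtQueryParserDemo {
    public static void main(String[] args) throws Exception {
        // Keep StandardAnalyzer for ordinary fields, but leave "lrt"
        // untouched by analysis.
        PerFieldAnalyzerWrapper analyzer =
                new PerFieldAnalyzerWrapper(new StandardAnalyzer());
        analyzer.addAnalyzer("lrt", new KeywordAnalyzer());

        QueryParser parser = new QueryParser("lrt", analyzer);

        // The quotes matter: without them the parser splits on the
        // blank before any analyzer runs. KeywordAnalyzer then emits
        // the quoted string as a single token.
        Query query = parser.parse("lrt:\"case study\"");

        System.out.println(query); // should print: lrt:case study
    }
}

Note that KeywordAnalyzer does not lowercase, so because the field was indexed with toLowerCase(), the query text must already be lowercase (as "case study" is here).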