I suspect it is because QueryParser uses space characters to separate
clauses in a query string, while you want the space to be part of the content of
your "name" field. Try escaping the space character.
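For example, a minimal sketch in plain Java (no Lucene required; the helper name is made up): backslash-escape the spaces in the value before handing it to QueryParser. Note that Lucene's own QueryParser.escape() escapes syntax characters but not spaces, so a small helper like this is one way to do it.

```java
// Hypothetical helper: escape spaces so QueryParser keeps the value as a
// single term instead of splitting it into separate clauses.
public class QuerySpaceEscape {
    static String escapeQuerySpaces(String value) {
        return value.replace(" ", "\\ ");
    }

    public static void main(String[] args) {
        // Produces: name:John\ Smith  (one term, not two clauses)
        System.out.println("name:" + escapeQuerySpaces("John Smith"));
    }
}
```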
Cheers
Mark
On 9 Feb 2010, at 07:26, Rohit Banga wrote:
> Hello
>
> I have a field that stores names of people, indexed with NOT_ANALYZED.
QueryParser uses the given Analyzer when constructing the query, so it will
never hit a NOT_ANALYZED term. In general, it is a bad idea to use QueryParser
on fields that are not analyzed. There are two possibilities to solve the
problem:
- Instantiate the query to match the not-analyzed (but i
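That first option might look like the following sketch against the Lucene 3.0 API (not a complete program; `name` mirrors the variable in the indexing snippet from the original question, and the literal quotes match how the value was indexed):

```java
// Sketch only: build the Term directly so no analyzer ever touches the value.
Query q = new TermQuery(new Term("name", "\"" + name + "\""));
```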
Hello
I have a field that stores names of people. I have used the NOT_ANALYZED
parameter to index the names.
This is what happens during indexing:
doc.add(new Field("name", "\"" + name + "\"", Field.Store.YES,
Field.Index.NOT_ANALYZED));
When I search it, I create a query parser using stan
Hi,
Just wanted to announce the release of a new open source project called
ElasticSearch (http://www.elasticsearch.com/). It's an open source (Apache
2), distributed search engine built on top of Lucene. ElasticSearch has many
features; you can find them here:
http://www.elasticsearc
On Mon, Feb 8, 2010 at 9:33 AM, Chris Lu wrote:
> Since you already have an RMI interface, maybe you can search several
> nodes in parallel, collect the data, pick the top ones, and send back results via
> RMI.
>
One thing to be careful about here, which you might already be aware of:
Query (and subcla
Since you already have an RMI interface, maybe you can search several
nodes in parallel, collect the data, pick the top ones, and send back results
via RMI.
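The "collect the data, pick top ones" step can be sketched in plain Java (no Lucene types; `Hit` and `mergeTopN` are made-up names for illustration): keep a bounded min-heap over all nodes' hits, then sort descending by score.

```java
import java.util.*;

public class TopNMerge {
    // Minimal stand-in for a per-node search hit (hypothetical type).
    static final class Hit {
        final int doc; final float score;
        Hit(int doc, float score) { this.doc = doc; this.score = score; }
    }

    // Merge the per-node result lists, keeping only the n best-scoring hits.
    static List<Hit> mergeTopN(List<List<Hit>> perNode, int n) {
        // Min-heap ordered by score: the root is the worst hit kept so far.
        PriorityQueue<Hit> heap =
            new PriorityQueue<Hit>(Comparator.comparingDouble((Hit h) -> h.score));
        for (List<Hit> nodeHits : perNode) {
            for (Hit h : nodeHits) {
                heap.offer(h);
                if (heap.size() > n) heap.poll(); // evict current worst
            }
        }
        List<Hit> top = new ArrayList<>(heap);
        top.sort((a, b) -> Float.compare(b.score, a.score)); // best first
        return top;
    }

    public static void main(String[] args) {
        List<List<Hit>> nodes = Arrays.asList(
            Arrays.asList(new Hit(1, 0.9f), new Hit(2, 0.3f)),
            Arrays.asList(new Hit(3, 0.8f), new Hit(4, 0.5f)));
        for (Hit h : mergeTopN(nodes, 3))
            System.out.println(h.doc + " " + h.score);
    }
}
```

In a real setup each inner list would come back over RMI; only the (doc id, score) pairs need to travel, as the caveat below about Query serialization suggests keeping the transferred objects simple.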
--
Chris Lu
-
Instant Scalable Full-Text Search On Any Database/Application
site: http://www.dbsight.net
demo: http://sea
Hmmm... I think that means you're using the default data mode
(ordered), which should properly preserve writes if the OS or machine
crashes.
And actually I was wrong before -- even if the mount had
data=writeback, since you are "only" kill -9ing the process (not
crashing the machine), the data mou
> Any thoughts on scaling / clustering? Whether i need to use Hadoop / Carrot
> etc...
>
Carrot2 does search results clustering (by content), while what you probably
need is server/index clustering. See the other responses in this thread for
suggestions.
S.
Solr has more powerful scalability features than Lucene; maybe you can try that.
On Mon, Feb 8, 2010 at 6:14 PM, Ganesh wrote:
> Our indexes are growing, and the sorted cache is taking a huge amount of RAM.
> We want to add multiple nodes and scale out the search.
>
> Currently my application supports RMI
http://katta.sourceforge.net/ sounds well worth a look.
--
Ian.
On Mon, Feb 8, 2010 at 10:14 AM, Ganesh wrote:
> Our indexes are growing, and the sorted cache is taking a huge amount of RAM. We
> want to add multiple nodes and scale out the search.
>
> Currently my application supports an RMI inte
Here is what I get with mount -l:
/dev/mapper/lvm--raid-lvm0 on /data3 type ext3 (rw) []
Is there any other way to get more details on the mount options?
On Mon, Feb 8, 2010 at 10:57 AM, Michael McCandless <
luc...@mikemccandless.com> wrote:
> Thanks for sharing...
>
> Software RAID should be pe
Our indexes are growing, and the sorted cache is taking a huge amount of RAM. We
want to add multiple nodes and scale out the search.
Currently my application supports an RMI interface and returns application-specific
result set objects as hits. I could host multiple search instances in
different
Use IndexWriter.getReader to get a near real-time reader, after making
changes...
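A sketch of that flow against the Lucene 2.9/3.0 API (assumes an existing Directory `dir` and Document `doc`; analyzer choice is a placeholder):

```java
IndexWriter writer = new IndexWriter(dir,
    new StandardAnalyzer(Version.LUCENE_30),
    IndexWriter.MaxFieldLength.UNLIMITED);
writer.addDocument(doc);
// Near-real-time reader: sees the buffered changes without a full commit.
IndexReader reader = writer.getReader();
IndexSearcher searcher = new IndexSearcher(reader);
```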
Mike
On Mon, Feb 8, 2010 at 3:45 AM, NanoE wrote:
>
> Hello,
>
> I am writing a small library search application and want to know: what are the
> best practices in Lucene 3.0.0 for near-real-time index updates?
>
> Thanks Nano
>
Thanks for sharing...
Software RAID should be perfectly fine for Lucene, in general, unless
the mount is configured to ignore fsync (I think the "data=writeback"
mount option for ext3 does so on Linux).
Can you check the mount options on your RAID filesystem?
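For instance (the device and mount point here are only examples; adjust for your system):

```shell
# Show the active mount options for the filesystem:
grep ' /data3 ' /proc/mounts
# Or, for ext2/3/4, the defaults baked into the superblock:
tune2fs -l /dev/mapper/lvm--raid-lvm0 | grep -i 'mount options'
```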
Mike
On Mon, Feb 8, 2010 at 2:09 AM
Hello,
I am writing a small library search application and want to know: what are the
best practices in Lucene 3.0.0 for near-real-time index updates?
Thanks Nano
Hi,
I have a Lucene document which has a field that appears repeatedly in the
document. I use doc.getFieldables(fieldName) to get the field values; when
the number of fields becomes huge, getting the field values takes up a
lot of memory. Is there some other way that I could get the field val