Which is the more efficient way to get faster searches?
1. A single large index, or
2. Multiple small indexes (opened with SearcherManager, MultiReaders)?
When creating the IndexSearcher(multiReader), how do we quickly get the list
of IndexReaders? Should we use DirectoryReader.open(wr
h the version > that number; if you find
any, add them to the index, and do one more commit to bring the index up to
date.
This is probably beyond the scope of your original query, however.
On Fri, Jun 20, 2014 at 10:46 PM, Umashanker, Srividhya <
srividhya.umashan...@hp.com> wrote:
e data that hasn't been committed. In
> other words, what difference does it make whether you lost 1 index record
> or 1M, if you can't determine which records were lost and need to reindex
> everything from the start anyway, to ensure consistency between SOR and
> Lucene?
>
Then reindex
> only the data from the DB from that point onward (meaning only uncommitted
> data is lost and needs to be recovered, but you can figure out exactly
> where that point is).
>
>
>
> On Fri, Jun 20, 2014 at 8:02 PM, Umashanker, Srividhya <
> srividhy
y work tasks in the
> queue. Then add to that queue from your writer threads...
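One way to read the queue suggestion above is: writer threads only enqueue work, and a single consumer drains the queue and does the actual Lucene writes and commits. A minimal plain-Java sketch (the `IndexQueue` class, the poison-pill string, and the `processed` counter are illustrative stand-ins, not Lucene APIs; the increment is where `IndexWriter.addDocument` would go):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class IndexQueue {
    private static final String STOP = "__STOP__"; // poison pill
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final Thread consumer;
    private volatile long processed = 0;

    public IndexQueue() {
        // Single consumer thread: the only place Lucene writes would happen.
        consumer = new Thread(() -> {
            try {
                while (true) {
                    String doc = queue.take();
                    if (doc.equals(STOP)) return;
                    processed++; // stand-in for writer.addDocument(...)
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
    }

    // Called from any number of writer threads; never blocks on Lucene itself.
    public void submit(String doc) { queue.add(doc); }

    // Drains remaining work, stops the consumer, returns how many docs ran.
    public long shutdown() throws InterruptedException {
        queue.add(STOP);
        consumer.join();
        return processed;
    }
}
```

The design choice here is that producers never touch the IndexWriter directly, so commit ordering is trivially serialized by the single consumer.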
>
>
> On Fri, Jun 20, 2014 at 8:47 AM, Umashanker, Srividhya <
> srividhya.umashan...@hp.com> wrote:
>
>> Lucene Experts -
>>
>> Recently we upgraded to Lucene 4. We want to m
Lucene Experts -
Recently we upgraded to Lucene 4. We want to make use of the concurrent flushing
feature of Lucene.
Indexing for us includes certain DB operations and writing to Lucene, ended by a
commit. There may be multiple concurrent calls to the Indexer to publish
single or multiple records.
So far,
Are there any performance test suites available in the Lucene codebase that we
could reuse to benchmark our Lucene infrastructure?
We are mainly looking at multithreaded indexing tests.
-Vidhya
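Lucene does ship a benchmark module (the `lucene/benchmark` contrib, driven by `.alg` algorithm files), which may cover this. For a quick in-house smoke test of multithreaded indexing throughput, a plain-Java harness along these lines can work (a sketch; `indexDoc` is a hypothetical stand-in for the real `IndexWriter.addDocument` call):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class IndexThroughput {
    // Counts documents "indexed"; stand-in for real IndexWriter work.
    static final AtomicLong indexed = new AtomicLong();

    static void indexDoc(String json) {
        indexed.incrementAndGet(); // replace with writer.addDocument(...)
    }

    // Runs docsPerThread docs on each of `threads` workers, returns elapsed ms.
    public static long run(int threads, int docsPerThread) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(threads);
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> {
                for (int i = 0; i < docsPerThread; i++) indexDoc("{}");
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

Varying the thread count while holding total document count fixed gives a first read on whether concurrent flushing is actually helping on your hardware.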
Umashanker, Srividhya <
srividhya.umashan...@hp.com>:
> Hi Group -
>
> Is there anyone who has tried or researched on manual sharding and
> replication with Lucene?
>
> We are also evaluating ES, but trying to see if we can enhance our
> existing framework to do manua
Hi Group -
Has anyone tried or researched manual sharding and replication with Lucene?
We are also evaluating ES, but we are trying to see if we can enhance our
existing framework to do manual sharding and replication.
When I looked for details, I found the MultiPassIndexSplitter - t
Mike,
More info -
Windows on average takes 192 ms for 1 thread to index 100 JSON documents.
Linux on average takes 711 ms for 1 thread to index 100 JSON documents
(same set of data).
We have set the heap size to 124 MB in both cases, and both run on JDK 7.
Windows runs on: 2 CPU,
Group -
We have an Indexing and Searching Service (using Lucene 4.0) implemented over
REST as part of our framework, which all the related modules use to publish
data and make it available to the UI.
Moreover, every REST call that our service receives has a proxy timeout limit
of 20
>>> What are you intending to do?
[VIDHYA] A field with the following values should be sorted in "natural order":
the Name field has Bay 1, Bay10, Bay 11, bay 2, Bay 3 and
should be sorted as Bay 1, bay 2, Bay 3, Bay10, Bay 11.
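That ordering can be produced by a chunking comparator that compares digit runs numerically and everything else case-insensitively. A plain-Java sketch (not a Lucene class; in Lucene 4 this logic would live inside a custom FieldComparatorSource):

```java
import java.math.BigInteger;
import java.util.Comparator;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Splits each string into digit and non-digit chunks, so "Bay 2" < "Bay10".
public class NaturalOrderComparator implements Comparator<String> {
    private static final Pattern CHUNK = Pattern.compile("\\d+|\\D+");

    @Override
    public int compare(String a, String b) {
        Matcher ma = CHUNK.matcher(a);
        Matcher mb = CHUNK.matcher(b);
        while (ma.find() && mb.find()) {
            String ca = ma.group().trim();
            String cb = mb.group().trim();
            boolean na = !ca.isEmpty() && Character.isDigit(ca.charAt(0));
            boolean nb = !cb.isEmpty() && Character.isDigit(cb.charAt(0));
            int cmp = (na && nb)
                    ? new BigInteger(ca).compareTo(new BigInteger(cb)) // numeric
                    : ca.compareToIgnoreCase(cb);                      // textual
            if (cmp != 0) return cmp;
        }
        // All shared chunks tie: the shorter string sorts first.
        return Integer.compare(a.length(), b.length());
    }
}
```

Trimming the text chunks makes "Bay 1" and "Bay10" compare on "Bay" vs "Bay" first, so the presence or absence of a space before the number does not disturb the order.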
-----Original Message-----
From: Umashank
ommon/org/apache/lucene/collation/CollationKeyAnalyzer.html
to index the field and then you can do a simple native sort on this field
(SortField.STRING).
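The idea behind this suggestion (index a collation key once, then sort it as a plain string) can be illustrated with `java.text.Collator` alone. A sketch in plain Java; `CollationDemo` is not a Lucene class, but the key-precomputation trick is the same one CollationKeyAnalyzer applies at index time:

```java
import java.text.Collator;
import java.util.Arrays;
import java.util.Comparator;
import java.util.Locale;

public class CollationDemo {
    public static String[] sortByCollationKey(String[] terms) {
        Collator collator = Collator.getInstance(Locale.ENGLISH);
        collator.setStrength(Collator.PRIMARY); // ignore case and accents
        // Sort by the binary collation key, not by raw String order.
        return Arrays.stream(terms)
                .sorted(Comparator.comparing(collator::getCollationKey))
                .toArray(String[]::new);
    }
}
```

Because the keys are plain byte sequences, comparing them at search time is as cheap as SortField.STRING on an ordinary term.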
Uwe
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
<http://www.thetaphi.de/> http://www.thetaphi.de
eMail: u...@thetaphi
Group -
We are looking at sorting Lucene docs by a field in alphanumeric order, as we
expect field values to contain alphanumeric characters.
Attached is the AlphaNumericFieldComparatorSource and below is the snippet of
its usage.
final SortField sortField_id = new SortField(FieldName._id.name()
Hi -
We are using Lucene 4.5 and want to support date comparisons with < > <= >= !=.
Right now, we parse the string and create a RangeQuery. Is there a built-in way
to do date-based comparisons, i.e., "dob <= 2013-10-23T14:47:54.776Z"?
-Vidhya
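There is no comparison-operator syntax built into the classic QueryParser; in Lucene 4.x the usual approach is to index the date as a long (epoch millis) and translate each operator into a NumericRangeQuery.newLongRange. A plain-Java sketch of that translation (the `DateRange` class and `fromComparison` helper are hypothetical, for illustration only):

```java
import java.time.Instant;

public class DateRange {
    public final long lowerMs; // inclusive lower bound
    public final long upperMs; // inclusive upper bound

    DateRange(long lo, long hi) { lowerMs = lo; upperMs = hi; }

    // Maps an operator plus ISO-8601 timestamp onto the inclusive [lower, upper]
    // endpoints you would feed to NumericRangeQuery.newLongRange.
    public static DateRange fromComparison(String op, String isoTimestamp) {
        long t = Instant.parse(isoTimestamp).toEpochMilli();
        switch (op) {
            case "<":  return new DateRange(Long.MIN_VALUE, t - 1);
            case "<=": return new DateRange(Long.MIN_VALUE, t);
            case ">":  return new DateRange(t + 1, Long.MAX_VALUE);
            case ">=": return new DateRange(t, Long.MAX_VALUE);
            case "==": return new DateRange(t, t);
            default:   throw new IllegalArgumentException("unsupported op: " + op);
        }
    }
}
```

The one operator that does not map to a single range is !=; that would be a BooleanQuery with a MUST_NOT clause on the exact-value range.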