to download the relevant files.
> >
> > I'd like to share what I've got for 1 and 3, based on S3 and DynamoDB,
> but
> > I'd like to do it with interfaces that lend themselves to other
> > implementations for blob and metadata storage.
> >
https://lucene.apache.org/core/7_6_0/memory/org/apache/lucene/index/memory/MemoryIndex.html
Anton
On Mon, Dec 17, 2018 at 8:06 AM Valentin Popov
wrote:
> Hello.
>
> I need to implement a feature that answers the question: does a Document
> match a Query?
>
> Right now, I’ve implemented it this way
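The MemoryIndex linked above is the standard tool for this: index the single document in memory and run the query against it. A minimal sketch (requires the lucene-memory module; the field name and text are hypothetical):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class MatchCheck {
    /** Returns true iff the single in-memory document matches the query. */
    static boolean matches(Query query, String field, String text) {
        MemoryIndex index = new MemoryIndex();
        index.addField(field, text, new StandardAnalyzer());
        // search() returns a relevance score; 0.0f means "no match"
        return index.search(query) > 0.0f;
    }

    public static void main(String[] args) {
        Query q = new TermQuery(new Term("body", "fox"));
        System.out.println(matches(q, "body", "the quick brown fox"));
    }
}
```

MemoryIndex is built for exactly this one-document-many-queries pattern and avoids the cost of a full Directory-backed index per check.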
Which version of Lucene are you using?
On Thu, Nov 12, 2015 at 11:39 AM, Valentin Popov
wrote:
> Hello everyone.
>
> We have ~10 indexes for 500M documents. Each document has an «archive
> date» and a «to» address, and one of our tasks is to calculate statistics
> of «to» for the last year. Right now we are using
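The (truncated) question above is essentially a group-by count over a date-filtered doc set. One way to do it without loading stored fields is a custom collector over doc values. A sketch against a recent Lucene API (8+); the field names are assumptions ("archive_date" indexed as a LongPoint, "to" as a SortedDocValuesField):

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.SortedDocValues;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreMode;
import org.apache.lucene.search.SimpleCollector;

public class ToFieldStats {
    /** Counts each distinct "to" value among docs with archive_date >= cutoff. */
    static Map<String, Long> countTo(IndexSearcher searcher, long cutoffMillis) throws IOException {
        Map<String, Long> counts = new HashMap<>();
        Query lastYear = LongPoint.newRangeQuery("archive_date", cutoffMillis, Long.MAX_VALUE);
        searcher.search(lastYear, new SimpleCollector() {
            private SortedDocValues toValues;

            @Override
            protected void doSetNextReader(LeafReaderContext ctx) throws IOException {
                // per-segment doc values for the "to" field
                toValues = DocValues.getSorted(ctx.reader(), "to");
            }

            @Override
            public void collect(int doc) throws IOException {
                if (toValues.advanceExact(doc)) {
                    String to = toValues.lookupOrd(toValues.ordValue()).utf8ToString();
                    counts.merge(to, 1L, Long::sum);
                }
            }

            @Override
            public ScoreMode scoreMode() {
                return ScoreMode.COMPLETE_NO_SCORES; // counting only, no scoring
            }
        });
        return counts;
    }
}
```

Lucene's faceting module can do the same job with less code if a taxonomy or doc-values facet field is available.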
Hello,
I'm experimenting with Lucene 5.2.1 and I see something I cannot find an
easy explanation for in the api docs.
Depending on whether I pick BEST_COMPRESSION or BEST_SPEED mode for the
StoredFieldsFormat, almost all files become smaller in BEST_COMPRESSION
mode. I expected only the .fdt files to be smaller
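For reference, the mode is selected by installing a codec on the IndexWriterConfig; in the 5.x line this looks roughly like the following (class names match Lucene 5.2, adjust for other versions):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.codecs.lucene50.Lucene50Codec;
import org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat;
import org.apache.lucene.index.IndexWriterConfig;

public class CodecConfig {
    /** Builds a writer config with the given stored-fields compression mode. */
    static IndexWriterConfig withCompression(Lucene50StoredFieldsFormat.Mode mode) {
        IndexWriterConfig cfg = new IndexWriterConfig(new StandardAnalyzer());
        // BEST_COMPRESSION trades CPU at index/retrieval time for smaller stored fields
        cfg.setCodec(new Lucene50Codec(mode));
        return cfg;
    }
}
```

Note the mode only takes effect for newly written segments; existing segments keep the format they were written with until they are merged.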
Are you sure you are not holding open readers somewhere?
On Mon, Aug 31, 2015 at 7:46 PM, Marcio Napoli
wrote:
> Hey! :)
>
> It seems IndexWriter is not closing the descriptors of the removed files,
> see the log below.
>
> Thanks,
> Napoli
>
> [root@server01 log]# ls -l /proc/59491/fd | grep i
You can always throw an exception in the collector to stop the collection
process.
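One caveat worth knowing: a CollectionTerminatedException thrown from collect() only makes IndexSearcher skip the *current segment* and move on to the next one. To abort the whole search, throw your own unchecked exception and catch it around search(), which is the same pattern TimeLimitingCollector uses. A hedged sketch (the collector and exception names are mine):

```java
import java.io.IOException;
import java.util.Arrays;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreMode;
import org.apache.lucene.search.SimpleCollector;

public class FirstNDocs {
    /** Unchecked marker exception used to abort collection early. */
    static final class StopCollecting extends RuntimeException {}

    /** Collects at most `limit` global doc ids, then aborts the whole search. */
    static int[] collectFirst(IndexSearcher searcher, Query query, int limit) throws IOException {
        int[] docs = new int[limit];
        int[] n = {0};
        try {
            searcher.search(query, new SimpleCollector() {
                private int docBase;

                @Override
                protected void doSetNextReader(LeafReaderContext ctx) {
                    docBase = ctx.docBase; // offset of this segment's doc ids
                }

                @Override
                public void collect(int doc) {
                    docs[n[0]++] = docBase + doc;
                    if (n[0] >= limit) {
                        throw new StopCollecting(); // aborts the search loop entirely
                    }
                }

                @Override
                public ScoreMode scoreMode() {
                    return ScoreMode.COMPLETE_NO_SCORES;
                }
            });
        } catch (StopCollecting expected) {
            // early termination is the intended outcome, not an error
        }
        return Arrays.copyOf(docs, n[0]);
    }
}
```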
Anton
On Tue, Jul 28, 2015 at 4:26 AM, Muhammad Ismail
wrote:
> Can we skip matching Lucene documents by using a custom collector or some
> other way? For example, I want to bring back all documents created by user
> xxx on speci
Reindexing. If I want to add new fields or change existing fields in the
index, I need to go through all documents of the index.
On Wed, Jun 3, 2015 at 4:46 PM, Robert Muir wrote:
> On Wed, Jun 3, 2015 at 4:00 PM, Anton Zenkov
> wrote:
>
> >
> > for (int i = 0; i <
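The truncated loop above is presumably iterating up to maxDoc(). A reindexing pass over stored fields might look like this sketch (assumes every field you need was stored, so the rebuilt documents are complete; API names per Lucene 8.x):

```java
import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.MultiBits;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.Bits;

public class Reindex {
    /** Copies every live document from oldDir into a fresh index in newDir. */
    static void copyAll(Directory oldDir, Directory newDir) throws IOException {
        try (IndexReader reader = DirectoryReader.open(oldDir);
             IndexWriter writer = new IndexWriter(newDir,
                     new IndexWriterConfig(new StandardAnalyzer()))) {
            Bits liveDocs = MultiBits.getLiveDocs(reader); // null when no deletions
            for (int i = 0; i < reader.maxDoc(); i++) {
                if (liveDocs != null && !liveDocs.get(i)) {
                    continue; // skip deleted docs
                }
                Document doc = reader.document(i);
                // add or rewrite fields here before re-adding
                writer.addDocument(doc);
            }
        }
    }
}
```

Caveat: stored fields round-trip values but not their original index-time options (analyzers, doc values, etc.), so in practice each field usually needs to be rebuilt explicitly rather than copied verbatim.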
Hello,
I ran into a problem while trying to make a utility which loads all
documents in the index one by one. Loading was super slow, orders of
magnitude slower than it is supposed to be. After some debugging and
looking at the code I figured that the culprit was the index compression
which I set