On Sep 24, 2007, at 9:54 AM, Bogdan Ghidireac wrote:

On 9/24/07, Doron Cohen <[EMAIL PROTECTED]> wrote:

For an already-optimized index, calling optimize() is a no-op.

You may try this: after opening the writer and setting compound=false, add a dummy (even empty) document to the index, call optimize(), and finally, if you like, remove the dummy document.

Note that calling optimize() can itself take a long time on a large index. In any case, try this first on a trial index, and make a backup of the existing index, just in case.
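
In code, the trick might look roughly like this, against the Lucene 2.x API of the time (the index path and the "__dummy__" marker field are placeholders; an entirely empty document is enough for the conversion itself, but giving the dummy a keyword field is what lets you delete it again afterwards):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.FSDirectory;

public class CompoundToStandard {
  public static void main(String[] args) throws Exception {
    FSDirectory dir = FSDirectory.getDirectory("/path/to/index");

    // Open the existing index (create == false) and switch off compound
    // files so segments written from now on use the multi-file format.
    IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), false);
    writer.setUseCompoundFile(false);

    // Add a marker document so optimize() is not a no-op.
    Document dummy = new Document();
    dummy.add(new Field("__dummy__", "1", Field.Store.NO, Field.Index.UN_TOKENIZED));
    writer.addDocument(dummy);

    // Rewrites the whole index into a single, non-compound segment.
    writer.optimize();
    writer.close();

    // Optionally delete the marker document again.
    IndexReader reader = IndexReader.open(dir);
    reader.deleteDocuments(new Term("__dummy__", "1"));
    reader.close();
  }
}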


Thanks, that worked smoothly.


Why do you want to transform the index from compound to non-compound form?


My index is quite large (14GB), mostly because I use stored fields. The project requirements call for a response time below 500ms, and I cannot meet that at the moment (the tp99.9 is 700ms).

What I am planning to do is use the standard (non-compound) format so I can move all index files except .fdt and .fdx to a memory disk. After conversion, the .fdt and .fdx files together come to around 11.5GB.

I don't think there is an index reader that supports two locations (one for the index data and another for the stored data), but I will modify an existing one and check the results.
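
One way to avoid modifying a reader is to do the split one level lower, at the Directory. Later Lucene releases added org.apache.lucene.store.FileSwitchDirectory, which routes files between two directories by file extension; against a recent API the idea looks like the sketch below (the two paths are placeholders):

import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.FileSwitchDirectory;

public class SplitStoredFields {
  public static Directory open() throws Exception {
    // Extensions routed to the first (primary) directory: the stored-field
    // data (.fdt) and its index (.fdx) stay on regular disk.
    Set<String> storedFieldExts = new HashSet<String>();
    storedFieldExts.add("fdt");
    storedFieldExts.add("fdx");

    Directory onDisk = FSDirectory.open(Paths.get("/data/index-stored"));
    Directory inRam  = FSDirectory.open(Paths.get("/mnt/ramdisk/index-rest"));

    // Everything not in the extension set goes to the second directory,
    // here a tmpfs/ramdisk mount holding the much smaller remaining files.
    return new FileSwitchDirectory(storedFieldExts, onDisk, inRam, true);
  }
}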

I believe this is the use case for ParallelReader (although I have never used it), but it comes with caveats. Search the archives and check out the javadocs for more info.
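
For completeness, basic ParallelReader usage is sketched below (2.x-era API, placeholder paths). The main caveat is that every sub-index must contain exactly the same documents in exactly the same order, so that doc numbers line up across the sub-readers.

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.ParallelReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.FSDirectory;

public class ParallelExample {
  public static void main(String[] args) throws Exception {
    // Two indexes built from the same documents in the same order:
    // one holding only the indexed fields, one holding only the stored fields.
    ParallelReader parallel = new ParallelReader();
    parallel.add(IndexReader.open(FSDirectory.getDirectory("/mnt/ramdisk/inverted")));
    parallel.add(IndexReader.open(FSDirectory.getDirectory("/data/stored")));

    // Searches see the union of fields from both sub-readers.
    IndexSearcher searcher = new IndexSearcher(parallel);
    // ... run queries, then close the searcher and the readers ...
    searcher.close();
    parallel.close();
  }
}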

