If you can live with the loss of 385395 documents, running with -fix
is an option. I'd create a new index. I'd also worry about why the
existing index got messed up in the first place.
I've no idea about running fsck on EC2 file systems. General file
system commands hanging for 10 secs doesn't sound good, though.
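For what it's worth, the usual approach is to unmount the volume and run a read-only fsck pass first. A rough sketch (the device name and mount point here are placeholders, not taken from the thread):

```shell
# Stop anything writing to the volume (elasticsearch included), then unmount it.
sudo umount /dev/xvdf

# Read-only check first: -n answers "no" to all repair prompts,
# so this only reports problems without touching the disk.
sudo fsck -n /dev/xvdf

# If it reports errors and you have backups, run a repair pass.
sudo fsck /dev/xvdf

# Remount when done.
sudo mount /dev/xvdf /mnt/data
```

If ls/less are hanging with no cpu or memory pressure, it may also be worth checking dmesg and the EBS volume status in the AWS console before trusting fsck output.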
Thanks, this was really helpful for understanding what's going on.
I got these warnings for 2 of my indexes:
WARNING: 29 broken segments (containing 385395 documents) detected
WARNING: would write new segments file, and 385395 documents would be lost,
if -fix were specified
No, you can't delete those files, and you can't regenerate just those files;
all the various segment files are necessary and intertwined...
Consider using the CheckIndex facility, see:
http://solr.pl/en/2011/01/17/checkindex-for-the-rescue/
Note: the CheckIndex class is contained in the lucene core jar.
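CheckIndex can be invoked straight from the command line against the index directory. A rough sketch (jar filename and index path are placeholders for whatever your install uses):

```shell
# Dry run: reports broken segments and how many documents would be lost.
# With elasticsearch the index directories typically live under its data path.
java -cp lucene-core.jar org.apache.lucene.index.CheckIndex \
    /path/to/index/directory

# Only after backing up the index: -fix drops the broken segments
# (and their documents) and writes a new segments file.
java -cp lucene-core.jar org.apache.lucene.index.CheckIndex \
    /path/to/index/directory -fix
```

Make sure nothing (no elasticsearch node) has the index open while CheckIndex runs, and always take a copy of the index directory before using -fix, since the document loss is permanent.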
This is on a local file system on an Amazon EC2 host. The file system was fine
until a week ago when the outage happened and there were probably some
system glitches. I have seen this issue since then. Sometimes regular
commands like less or ls hang for many seconds even though there is no
cpu/memory pressure.
Is this on a local or remote file system? Is the file system itself
OK? Is something else messing with your lucene index at the same
time?
--
Ian.
On Sun, Jul 8, 2012 at 8:58 PM, T Vinod Gupta wrote:
> Hi,
> My log files are showing the below exceptions almost at twice a minute
> frequency.
Hi,
My log files are showing the below exceptions almost at twice a minute
frequency. What is causing it and how can I fix it? I am not using Lucene
directly but am using elasticsearch (version 0.18.7), but since the
stack trace is all Lucene, I am sending it to this mailing list.
Also, my qu