Thanks Uwe for the reply. We are indexing data in a cluster with many mount points, so it is possible that one of them had an issue or was slow when this check first ran; but now, when I execute "mount", all the mount points respond.

I was wondering: is there any configuration to skip this SSD check?
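Looking at the stack trace below, the check only seems to run when ConcurrentMergeScheduler has to auto-detect its defaults in initDynamicDefaults(), so one workaround we are considering is to set explicit merge and thread counts so that code path is never taken. A minimal sketch, assuming the Lucene 5.4 API; the counts and the helper name are placeholders:

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.index.ConcurrentMergeScheduler;
    import org.apache.lucene.index.IndexWriterConfig;

    static IndexWriterConfig configWithExplicitMergeDefaults(Analyzer analyzer) {
        ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
        // With explicit limits, CMS should have no reason to call
        // initDynamicDefaults() -> IOUtils.spins(), which is where the
        // mount-point listing happens (see the trace below).
        cms.setMaxMergesAndThreads(6, 3); // placeholder counts, tune for the hardware
        IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
        iwc.setMergeScheduler(cms);
        return iwc;
    }

Would that be a supported way to avoid the detection?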

Tamer

On 12/07/2017 14:15, Uwe Schindler wrote:

Hi,

to figure out if your system is using an SSD drive for the index directory, the merge scheduler has to get the underlying mount point of the index directory. As there is no direct lookup for that, it needs to list all mount points in the system with a Java 7 FS function. And that seems to hang for some reason. Could it be that you have a mount (like NFS or CIFS) that no longer responds?
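For illustration, the enumeration is essentially the following plain JDK NIO.2 loop (not Lucene-specific); every step of the returned iterator may have to stat a file store, which is where a dead mount makes it hang:

    import java.nio.file.FileStore;
    import java.nio.file.FileSystems;

    // List all mount points, similar to what IOUtils.getFileStore() needs to do.
    // If one file store (e.g. a stale NFS mount) no longer answers, the
    // iterator's hasNext() blocks; this is exactly the frame shown in the
    // jvisualvm trace.
    for (FileStore store : FileSystems.getDefault().getFileStores()) {
        System.out.println(store.name() + " -> " + store.type());
    }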

Just list them all with “cat /proc/mounts” or the “mount” command and check if any of them is stuck or no longer responding.

Uwe

-----

Uwe Schindler

Achterdiek 19, D-28357 Bremen

http://www.thetaphi.de

eMail: u...@thetaphi.de

*From:* Tamer Gur [mailto:t...@ebi.ac.uk]

*Sent:* Wednesday, July 12, 2017 12:29 PM

*To:* java-user@lucene.apache.org

*Subject:* stuck indexing process

Hi all,

we are having an issue in our indexing pipeline: from time to time our indexing process gets stuck. The following text & picture are from jvisualvm, and it seems the process is waiting in the sun.nio.fs.UnixFileSystem$FileStoreIterator.hasNext() method all the time. We are using Lucene 5.4.1 and Java 1.8.0_65-b17.

What could be the reason for this?

Many Thanks

Tamer

text version

" org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.<init>()","100.0","73509067","73509067","3" " org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.<init>()","100.0","73509067","73509067","3" " org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.addCategory()","100.0","73509067","73509067","3" " org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.internalAddCategory()","100.0","73509067","73509067","3" " org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.addCategoryDocument()","100.0","73509067","73509067","3" " org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.getTaxoArrays()","100.0","73509067","73509067","3" " org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter.initReaderManager()","100.0","73509067","73509067","3" " org.apache.lucene.index.ReaderManager.<init>()","100.0","73509067","73509067","3" " org.apache.lucene.index.DirectoryReader.open()","100.0","73509067","73509067","3" " org.apache.lucene.index.IndexWriter.getReader()","100.0","73509067","73509067","3" " org.apache.lucene.index.IndexWriter.maybeMerge()","100.0","73509067","73509067","3" " org.apache.lucene.index.ConcurrentMergeScheduler.merge()","100.0","73509067","73509067","3" " org.apache.lucene.index.ConcurrentMergeScheduler.initDynamicDefaults()","100.0","73509067","73509067","3" " org.apache.lucene.util.IOUtils.spins()","100.0","73509067","73509067","3" " org.apache.lucene.util.IOUtils.spins()","100.0","73509067","73509067","3" " org.apache.lucene.util.IOUtils.spinsLinux()","100.0","73509067","73509067","3" " org.apache.lucene.util.IOUtils.getFileStore()","100.0","73509067","73509067","3" " sun.nio.fs.UnixFileSystem$FileStoreIterator.hasNext()","100.0","73509067","73509067","3"

image version (jvisualvm screenshot of the same call stack; not preserved here)