Hi,

 

Yes, NIOFSDirectory would work. Please don’t use SimpleFSDirectory unless really needed. 
The problem with both implementations is a large slowdown when using DocValues 
(e.g., for sorting). Standard index queries are also slower due to additional 
buffering and copying, but the effect is not as large.
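For reference, a minimal sketch of forcing NIOFSDirectory instead of letting Lucene pick MMapDirectory (this assumes Lucene 5.x on the classpath; "the_path" is a placeholder path, as elsewhere in this thread):

```java
import java.nio.file.Paths;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.NIOFSDirectory;

public class OpenWithNIOFS {
    public static void main(String[] args) throws Exception {
        // Explicitly construct NIOFSDirectory instead of calling
        // FSDirectory.open(), which would pick MMapDirectory on a
        // 64-bit JVM and can hit the "Map failed" error below.
        try (IndexReader reader = DirectoryReader.open(
                new NIOFSDirectory(Paths.get("the_path")))) {
            System.out.println("numDocs: " + reader.numDocs());
        }
    }
}
```

Note this is a workaround, not a recommendation: as described above, expect a noticeable slowdown, especially with DocValues.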

 

> Unfortunately I am using a company server and the system admin refuses to 
> change those settings

 

As I said in the famous blog post [1]: “But there are some paranoid system 
administrators around, that want to control everything (with lack of 
understanding).” :-)

 

If you want to use Lucene in an optimal and performant way – especially for 
large indexes – you have to talk to them. It might be a good idea to send them 
the famous MMapDirectory blog post [1], because this is mostly a 
misunderstanding: there is really no reason to limit “virtual memory” usage. 
You have to explain to them that Lucene uses the index file like a swap file, 
for optimal performance and lower memory usage. In addition, 
Lucene/Solr/Elasticsearch installations should always run “alone” on the 
hardware / virtual machine, not together with other software.

 

Uwe

 

[1] http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html 

 

-----

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Ziqi Zhang [mailto:ziqi.zh...@sheffield.ac.uk] 
Sent: Saturday, October 03, 2015 1:04 PM
To: Uwe Schindler
Subject: Re: java.io.IOException: Map failed

 

Thanks Uwe

Unfortunately I am using a company server and the system admin refuses to 
change those settings. For now my only option is to explicitly use either 
SimpleFSDirectory or NIOFSDirectory. But at least it is working!




On 01/10/2015 20:53, Uwe Schindler wrote:

Hi,

You must ask the system administrator to raise those limits, or use sudo or get 
root yourself if it's your own machine. Those settings cannot be changed as a 
normal user because they affect the whole system. In general, those settings 
don't survive reboots, so it's better to modify the corresponding config files 
in /etc so they are applied on system startup. How to do that depends on your 
Linux distribution, so we cannot give specific help on this.
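To make this concrete, here is a sketch of what the persistent, admin-side configuration might look like (the file locations and the "solr" user name are assumptions; exact paths vary by distribution):

```shell
# /etc/security/limits.conf -- per-user resource limits, applied at login.
# Allow unlimited virtual memory ('as' = address space) for the user
# running Lucene/Solr ("solr" is a placeholder):
solr  soft  as  unlimited
solr  hard  as  unlimited

# /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/) -- kernel
# parameters applied at boot; raise the per-process mmap count:
vm.max_map_count=262144

# Apply the sysctl change immediately, without a reboot (as root):
sysctl -p
```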

I would also recommend reviewing my blog post, as linked in the exception 
message!

Uwe

On 1 October 2015 at 21:25:30 MESZ, Ziqi Zhang <ziqi.zh...@sheffield.ac.uk> wrote:

Hi,
 
I have a problem which I think is the same as that described here:
 
http://stackoverflow.com/questions/8892143/error-when-opening-a-lucene-index-map-failed
 
However the solution does not apply in this case so I am providing more 
details and asking again.
 
The index is created using Solr 5.3
 
The line of code causing the exception is:



 
     IndexReader indexReader = DirectoryReader.open(FSDirectory.open(Paths.get("the_path")));
 
 
The exception stacktrace is:



 
     Exception in thread "main" java.io.IOException: Map failed: MMapIndexInput(path="/mnt/fastdata/ac1zz/JATE/solr-5.3.0/server/solr/jate/data_aclrd/index/_5t.tvd")

     [this may be caused by lack of enough unfragmented virtual address space or too restrictive virtual memory limits enforced by the operating system, preventing us to map a chunk of 434505698 bytes. Please review 'ulimit -v', 'ulimit -m' (both should return 'unlimited'), and 'sysctl vm.max_map_count'. More information: http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]

     at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:907)
     at org.apache.lucene.store.MMapDirectory.map(MMapDirectory.java:265)
     at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:239)
     at org.apache.lucene.codecs.compressing.CompressingTermVectorsReader.<init>(CompressingTermVectorsReader.java:144)
     at org.apache.lucene.codecs.compressing.CompressingTermVectorsFormat.vectorsReader(CompressingTermVectorsFormat.java:91)
     at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:120)
     at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
     at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:58)
     at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:50)
     at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:731)
     at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:50)
     at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63)
     at uk.ac.shef.dcs.jate.app.AppATTF.extract(AppATTF.java:39)
     at uk.ac.shef.dcs.jate.app.AppATTF.main(AppATTF.java:33)
 
 
The suggested solutions in the exception message do not work in this case 
because I am running the application on a server and I do not have permissions 
to change those settings.
 
Namely,
-----------
     ulimit -v unlimited
 
prints: "-bash: ulimit: virtual memory: cannot modify limit: Operation 
not permitted"
 
and
-----
     sysctl -w vm.max_map_count=10000000
 
gives:"error: permission denied on key 'vm.max_map_count'"
 
 
Is there any other way I can solve this?
 
Thanks
 
 
 



 
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org
 


-- 
Ziqi Zhang
Research Associate
Department of Computer Science
University of Sheffield
