Hi all,
We are indexing documents with Apache Lucene using several parallel
indexing pipelines (Java processes) writing to an NFS-mounted directory.
All of them follow the same code and workflow. Most of the pipelines succeed
without any issue, but a few indexing pipelines remain idle and
in RUN
Hi Mike,
Thank you very much for your response.
I would be really grateful if you could point me to some information
(maybe with examples) about the new near-real-time replication.
Thanks,
Alex
2016-07-04 12:57 GMT+03:00 Michael McCandless :
> NFS is dangerous
NFS is dangerous if different nodes may take turns writing to the shared
index.
Locking sometimes doesn't work correctly, client-side metadata caching
(e.g. of directory entries) can cause problems, and NFS doesn't support the
"delete on final close" semantics that Lucene relies on.
I need to organize a cluster for my stateless application based on Lucene
5.2.1. Right now I'm looking for a solution to share the Lucene index
via NFS or rsync between different Lucene nodes.
Is it a good idea to use NFS for this purpose, and if so, will it be possible
to read/write from different nodes to the same shared index?
Hi,
In general, storing an index on NFS mounts is a really bad idea, because Lucene
commits don't work correctly with NFS (this has been an issue since the very
beginning and is not fixable). If you use NFS, you need to use
SimpleFSLockFactory for locking (because NativeFSLockFactory does not
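[A minimal sketch of the SimpleFSLockFactory advice above - not from the thread; it assumes the Lucene 5.x API and a hypothetical /mnt/nfs/index mount point. In 3.x/4.x the directory is opened with a java.io.File and a LockFactory instance instead.]

    import java.nio.file.Paths;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.store.SimpleFSLockFactory;

    public class NfsDirectoryExample {
        public static void main(String[] args) throws Exception {
            // SimpleFSLockFactory uses plain lock files instead of native OS
            // locks, which are unreliable on many NFS setups. Per the
            // javadocs, lock files can be left behind if the JVM dies.
            Directory dir = FSDirectory.open(Paths.get("/mnt/nfs/index"),
                    SimpleFSLockFactory.INSTANCE);
            System.out.println("Opened " + dir);
            dir.close();
        }
    }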
Hi,
I'm using Lucene 2.9.3 on a 64-bit machine. Many times we observe that
the system gets into thrashing mode during merges.
We are experimenting with using MMapDirectory. Our index is stored on NFS/CIFS
mounted file shares.
My question, is this MMapDirectory useful in
Use SimpleFSLockFactory. See the javadocs about locks being left
behind on abnormal JVM termination.
There was a thread on this list a while ago about some pros and cons
of using Lucene on NFS. 2-Oct-2012 in fact.
http://mail-archives.apache.org/mod_mbox/lucene-java-user/201210.mbox/thread
is because we are using NFS.
Has anyone gotten Lucene to work on NFS? What steps did you take?
Thanks
Bowden
> (a) Accessing index files over NFS from a "single" physical process on a
> single computer is safe and can be made to work.
To add: This means writing only. Reading is fine from as many threads as you
like - and using MMapDirectory for best performance. The problem with
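[A minimal sketch of the reading side of that one-writer/many-readers pattern - not from the thread; it assumes Lucene 5.x and a hypothetical /mnt/nfs/index path.]

    import java.nio.file.Paths;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.store.MMapDirectory;

    public class ReadOnlySearcherExample {
        public static void main(String[] args) throws Exception {
            // Memory-map the index for reading; one IndexSearcher can safely
            // be shared by any number of concurrent search threads.
            MMapDirectory dir = new MMapDirectory(Paths.get("/mnt/nfs/index"));
            DirectoryReader reader = DirectoryReader.open(dir);
            IndexSearcher searcher = new IndexSearcher(reader);
            System.out.println("maxDoc=" + reader.maxDoc());
            // ... run queries against `searcher` from multiple threads ...
            reader.close();
            dir.close();
        }
    }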
OK, so it sounds like I'm hearing that
(a) Accessing index files over NFS from a "single" physical process on
a single computer is safe and can be made to work.
(b) Accessing index files over NFS from "multiple" processes/machines might
be problematic
(c) In all cases,
We've been in production on Lucene over NFS for about 4 years now. Though
we've had performance issues related to NFS (similar to those mentioned on
this thread), we've only seen some reliability issues. Index writing I/O
timeout exceptions are the primary issue. We'v
OK, that saves you from concurrency issues, but in my experience it is just
much slower than the local file system, so NFS can still be used, but with
some tradeoff on performance.
My 2 cents,
Tommaso
2012/10/2 Jong Kim
> The setup is I have a home-grown server process that has exclusive access
Uwe,
Thanks for the detailed information. Are you aware of an existing
implementation of the IndexDeletionPolicy interface that is "known" to work
reliably with NFS?
/Jong
On Tue, Oct 2, 2012 at 9:01 AM, Uwe Schindler wrote:
> There are no real issues with NFS regarding safety of
The setup is I have a home-grown server process that has exclusive access
to the index files. All reads and writes are done through this server. No
other process is reading the same index files whether it's local or over
NFS.
/Jong
On Tue, Oct 2, 2012 at 8:56 AM, Ian Lea wrote:
> I ag
My Lucene index is accessed by multiple threads in a single process.
/Jong
On Tue, Oct 2, 2012 at 8:45 AM, Paul Libbrecht wrote:
> I doubt NFS is an unreliable file-system.
> Lucene uses normal random access to files and this has no reason to be
> unreliable unless bad things such a
There are no real issues with NFS regarding safety of the data. The problem
with NFS is the following (maybe it is fixed in NFS4, I have no idea):
Lucene deletes index files while they are in use, which is perfectly fine for
local file systems (because the inode is still alive, although it is no
I agree that reliability/corruption is not an issue.
I would also put it that performance is likely to suffer, but that's
not certain. A fast disk mounted over NFS can be quicker than a slow
local disk. And how much do you care about performance? Maybe it
would be fast enough over NFS to
I doubt NFS is an unreliable file-system.
Lucene uses normal random access to files and this has no reason to be
unreliable unless bad things such as network drops happen (in which case you'd
get direct failures or timeouts rather than corruption). I've seen fairly
large infrastruct
Thank you all for reply.
So it sounds like it is a known fact that performance would suffer
rather significantly when the index files are accessed over NFS. But how
about reliability and robustness (which seem even more important)? Isn't
there any increased possibility for intermi
My experience in the Lucene 1.x days was a slowdown factor of at least four
when writing to NFS and about two when reading from there. I'd discourage this
as much as possible!
(rsync is way more your friend for transporting, and replication à la Solr
should also be considered)
paul
Le 2 oct. 2
You'll certainly need to factor in the performance of NFS versus local disks.
My experience is that smallish low activity indexes work just fine on
NFS, but large high activity indexes are not so good, particularly if
you have a lot of modifications to the index.
You may want to install a c
How tolerant is your project of decreased search and indexing performance?
You could probably write a simple test that compares search and write
speeds of local and NFS-mounted indexes and make the decision based on the
results.
On Mon, Oct 1, 2012 at 3:06 PM, Jong Kim wrote:
>
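[A sketch of such a comparison test - not from the thread; Lucene 5.x API, hypothetical paths. Point one path at local disk and one at the NFS mount and compare the reported times.]

    import java.nio.file.Path;
    import java.nio.file.Paths;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.FSDirectory;

    public class IndexSpeedTest {
        static long timeIndexing(Path path, int numDocs) throws Exception {
            long start = System.nanoTime();
            try (FSDirectory dir = FSDirectory.open(path);
                 IndexWriter writer = new IndexWriter(dir,
                         new IndexWriterConfig(new StandardAnalyzer()))) {
                for (int i = 0; i < numDocs; i++) {
                    Document doc = new Document();
                    doc.add(new TextField("body", "sample text " + i,
                            Field.Store.NO));
                    writer.addDocument(doc);
                }
                writer.commit();
            }
            return (System.nanoTime() - start) / 1000000;
        }

        public static void main(String[] args) throws Exception {
            // Same document stream, two targets: only the directory differs.
            System.out.println("local: "
                    + timeIndexing(Paths.get("/tmp/idx"), 100000) + " ms");
            System.out.println("nfs:   "
                    + timeIndexing(Paths.get("/mnt/nfs/idx"), 100000) + " ms");
        }
    }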
Hi,
According to Lucene in Action (Second Edition), section 2.11.2,
"Accessing an index over a remote file system", there are
issues related to accessing a Lucene index across remote file systems,
including NFS.
I'm particularly interested in NFS compatibility, a
The suggestion was that your single indexing job should update a local
copy of the index and copy that to NFS for searching by other nodes.
That should work.
As for updating, you could index new reports into a new lucene index
and then merge that into the existing index
(IndexWriter.addIndexes
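[A sketch of the suggested approach - not from the thread; Lucene 5.x, hypothetical paths. Build the index for the new reports locally, then fold it into the shared index with IndexWriter.addIndexes.]

    import java.nio.file.Paths;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class MergeNewReports {
        public static void main(String[] args) throws Exception {
            // Small index built locally for the newly generated reports.
            Directory newIndex =
                    FSDirectory.open(Paths.get("/tmp/new-reports-idx"));
            // Existing shared index, e.g. on the NFS mount used for search.
            Directory mainIndex =
                    FSDirectory.open(Paths.get("/mnt/nfs/index"));
            try (IndexWriter writer = new IndexWriter(mainIndex,
                    new IndexWriterConfig(new StandardAnalyzer()))) {
                writer.addIndexes(newIndex);  // merges the new segments in
                writer.commit();
            }
            newIndex.close();
            mainIndex.close();
        }
    }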
the clustered environment?
The given NFS mount works fine for saving the reports, but as soon as I
start the indexing the app freezes. It creates a lock file, and I can't see
what's happening inside as I can't see my console output either.
And I have one more question: how do you update the index? As my
scheduler
You don't say what version of lucene you are using, but in recent
versions you may need to use SimpleFSLockFactory rather than the
default, NativeFSLockFactory. See the javadocs. Lucene in general
does work on NFS but there can be problems, particularly with
concurrent access from mul
Dear all,
I have a problem using Lucene on NFS. A scheduler runs periodically,
generating reports in PDF format, and saves them to a file server. The
drive of the file server is mounted on the scheduler server (NFS).
After generating the reports, the scheduler finally indexes the names of
the reports and
NFS, Stale File Handle Problem and my thoughts
You only have to create the deletion policy (merging uses it).
Mike
On Wed, Jan 20, 2010 at 11:27 AM, Sertic Mirko, Bedag
wrote:
> Ok, so does the merging go thru the IndexDeletionPolicy, or do I have to deal
> with the MergePolicy t
Regards
> Mirko
>
> -----Original Message-----
> From: Michael McCandless [mailto:luc...@mikemccandless.com]
> Sent: Wednesday, 20 January 2010 17:12
> To: java-user@lucene.apache.org
> Subject: Re: NFS, Stale File Handle Problem and my thoughts
>
> Yes, normal merging will cause
To: java-user@lucene.apache.org
Subject: Re: NFS, Stale File Handle Problem and my thoughts
Yes, normal merging will cause this problem as well.
Generally you should always use IndexReader.reopen -- it gives much
better reopen speed, less resources used, less GC, etc.
Mike
On Wed, Jan 20, 2010 at 10:49 AM
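[The reopen pattern Mike describes, as a sketch - not from the thread; Lucene 3.x-era API with a hypothetical index path. In 4.x and later the equivalent is DirectoryReader.openIfChanged.]

    import java.io.File;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class ReopenExample {
        public static void main(String[] args) throws Exception {
            Directory dir = FSDirectory.open(new File("/mnt/nfs/index"));
            IndexReader reader = IndexReader.open(dir);
            // Periodic refresh: reopen() shares unchanged segments with the
            // old reader, so it is much cheaper than opening from scratch.
            IndexReader newReader = reader.reopen();
            if (newReader != reader) {
                reader.close();   // release the old segments once swapped
                reader = newReader;
            }
            // ... search with `reader` ...
            reader.close();
            dir.close();
        }
    }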
Mike
Thank you so much for your feedback!
Will the new IndexDeletionPolicy also be considered when segments are merged?
Does merging also affect the NFS problem?
Should I use IndexReader.reopen() or just create a new IndexReader?
Thanks in advance
Mirko
-----Original Message-----
From
eded files are taken from the
>> >> IndexDeletionPolicy, and deleted at 12:30. At this point the files to be
>> >> deleted should no longer be required by any IndexReader and can be
>> safely
>> >> deleted.
>> >>
>> >> So the IndexDeletionPolicy should be
> >> Machine B has read only access.
> >>
> >> Would this be a valid setup? The only limitation is there is only ONE
> >> IndexWriter box, and multiple IndexReader boxes. Based on our requirements,
> >> this should fit very well. I really want to avoid s
this should fit very well. I really want to avoid some kind of index
>> replication between the boxes...
>>
>> Regards
>> Mirko
>>
>>
>>
>> -----Original Message-----
>> From: Michael McCandless [mailto:luc...@mikemccandless.com]
>> Sent
Right, it's only machine A that needs the deletion policy. All
read-only machines just reopen on their schedule (or you can use some
communication means, as Shai describes, to get lower-latency reopens
after the writer commits).
Also realize that doing searching over NFS does not usually give
ween the boxes...
>
> Regards
> Mirko
>
>
>
> -----Original Message-----
> From: Michael McCandless [mailto:luc...@mikemccandless.com]
> Sent: Wednesday, 20 January 2010 14:45
> To: java-user@lucene.apache.org
> Subject: Re: NFS, Stale File Handle Problem and
-----Original Message-----
From: Michael McCandless [mailto:luc...@mikemccandless.com]
Sent: Wednesday, 20 January 2010 14:45
To: java-user@lucene.apache.org
Subject: Re: NFS, Stale File Handle Problem and my thoughts
Right, you just need to make a custom IndexDeletionPolicy. NFS makes
no effort to protect deletion of still-open files.
A simple approach is one that only deletes a commit if it's more than
XXX minutes/hours old, such that XXX is set higher than the frequency
that IndexReaders are guarante
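[A sketch of that policy - not from the thread; Lucene 3.x-era API, with the expiration window as a constructor parameter, and assuming every reader is guaranteed to reopen more often than that window.]

    import java.io.IOException;
    import java.util.List;
    import org.apache.lucene.index.IndexCommit;
    import org.apache.lucene.index.IndexDeletionPolicy;

    public class ExpirationTimeDeletionPolicy implements IndexDeletionPolicy {
        private final long maxAgeMs;

        public ExpirationTimeDeletionPolicy(long maxAgeMs) {
            this.maxAgeMs = maxAgeMs;
        }

        public void onInit(List<? extends IndexCommit> commits)
                throws IOException {
            onCommit(commits);
        }

        public void onCommit(List<? extends IndexCommit> commits)
                throws IOException {
            long now = System.currentTimeMillis();
            // Commits are ordered oldest to newest; always keep the newest,
            // and delete older commits only once they pass the window.
            for (int i = 0; i < commits.size() - 1; i++) {
                IndexCommit commit = commits.get(i);
                if (now - commit.getTimestamp() > maxAgeMs) {
                    commit.delete();
                }
            }
        }
    }

The writer would pass an instance of this to its IndexWriter constructor; note the caveat raised later in the thread about file timestamps varying across machines.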
Hi all,
We are using Lucene 2.4.1 on Debian Linux with 2 boxes. The index is
stored on a common NFS share. Every box has a single IndexReader
instance, and one box has an IndexWriter instance, adding new documents
or deleting existing documents at a given point in time. After adding or
I don't have any specific experience with iSCSI, so what follows is
speculation...
I think iSCSI, which just routes SCSI commands over TCP/IP to some
remote storage device, is at a much lower level than NFS. IE, to the
computer this remote device acts like a local device, and therefore
yo
So I've read a lot about nightmares with Lucene over shared indices using NFS,
and was curious if anyone has any experience running Lucene over iSCSI?
Specifically, whether the same sort of lock failure issues occur as with NFS.
I'm specifically looking into multiple machines mounte
>What is the best way to handle this sort of situation? My inclination is to
> build a new Search Server (with fast HDDs and lots of memory for Tomcat)
> and leave the indexer on the old server connected via NFS.
- Our current development is on similar lines. Almost no deletes, but
onl
you'd want the index
hosted on the same machine doing searching. If you do it this way,
it's possible you won't need a custom IndexDeletionPolicy, because the
searchers will hold open files via the local filesystem which should
properly protect them when an NFS client (the indexer machin
Hi everyone,
There has been a lot of discussion regarding Lucene+NFS pitfalls. I'm
not sure how to proceed with a more distributed operation.
I'm trying to take the indexing load off of our search server. I can do
this either by building a new server which hosts the Indexer and the
I
Hi Everyone,
We are planning to distribute searches on the index and have a single indexing
node. We want to mount the index on NFS so that it can be shared by the indexer
and searcher nodes. To optimize several of our search workflows, we are caching
the IndexSearcher and refreshing it every
Hi,
We have a customer using lucene on an NFS directory, which contains
~10Gb of .nfs files. These files are the means by which NFS
implements delete-on-close semantics (that is, if the index writer
commits a delete of a file that is still held open by an index
reader, the file is
Sent: Thursday, April 03, 2008 8:20 PM
To: java-user@lucene.apache.org
Subject: Lucene 2.3.0 and NFS
Hi,
We are currently using Lucene 2.0 for full-text
searches within our enterprise application, which can
be deployed in a clustered environment. We generate
a Lucene index for data stored inside a relational
database.
As Lucene 2.0 did not have solid NFS support, and as we
wanted Lucene-based searches
:44 PM
Subject: Re: Why is lucene so slow indexing in nfs file system ?
Thanks for your suggestions.
I'm sorry, I didn't know - I would like to know what you mean by "SAN"
and "FC".
Another thing: I have visited the Lucene home page and there is no
release
If possible you should also test the soon-to-be-released version 2.3,
which has a number of speedups to indexing.
Also try the steps here:
http://wiki.apache.org/lucene-java/ImproveIndexingSpeed
You should also try an A/B test: A) writing your index to the NFS
directory and then B) to
evil." It's true.
So the very *first* measurement I'd take is to get rid of the in-RAM
stuff and just write the index to local disk. I suspect you'll be *far*
better off doing this then just copying your index to the nfs mount.
Best
Erick
On Jan 10, 2008 10:05 AM, Ariel <[
In a distributed environment the application has to make heavy use of
the network, and there is no other way to access the documents in a
remote repository than over the NFS file system.
One thing I must clarify: I index the documents in memory, I use
RAMDirectory to do that, then
Ariel,
I believe PDFBox is not the fastest thing and was built more to handle all
possible PDFs than for speed (just my impression - Ben, PDFBox's author might
still be on this list and might comment). Pulling data from NFS to index seems
like a bad idea. I hope at least the indice
Ariel wrote:
The problem I have is that my application spends a lot of time indexing all
the documents: the delay to index 10 GB of PDF documents is about 2 days (to
convert PDF to text I am using PDFBox), which is of course a lot of time;
other applications based on Lucene, for instance IBM Omni
There's also Nutch. However, 10GB isn't that big... Perhaps you can
index where the docs/index lives, then just make the index available
via NFS? Or, better yet, use rsync to replicate it like Solr does.
-Grant
On Jan 9, 2008, at 10:49 AM, Steven A Rowe wrote:
Hi Ariel,
On
Hi Ariel,
On 01/09/2008 at 8:50 AM, Ariel wrote:
> Do you know of other distributed applications that
> use Lucene to index big amounts of documents?
Apache Solr is an open source enterprise search server based on the Lucene Java
search library, with XML/HTTP and JSON APIs, hit high
do any of the indexing. No new Documents, don't
add any fields, etc. This will just time the PDF parsing.
(I'd run this for a set number of documents rather than the
whole 10G). This'll tell you whether the issue is indexing or
PDFBox.
2> Perhaps try the above with local files rather
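[A sketch of step 1 - not from the thread; PDFBox 1.x package names and a hypothetical reports directory. It parses a fixed sample of PDFs and times it, without creating any Lucene documents.]

    import java.io.File;
    import org.apache.pdfbox.pdmodel.PDDocument;
    import org.apache.pdfbox.util.PDFTextStripper;

    public class PdfParseTimer {
        public static void main(String[] args) throws Exception {
            File[] files = new File("/data/reports").listFiles();
            if (files == null) return;
            PDFTextStripper stripper = new PDFTextStripper();
            long start = System.currentTimeMillis();
            int parsed = 0;
            for (File f : files) {
                if (!f.getName().endsWith(".pdf")) continue;
                PDDocument doc = PDDocument.load(f);
                try {
                    stripper.getText(doc);  // extract text, index nothing
                } finally {
                    doc.close();
                }
                if (++parsed == 500) break;  // fixed sample, not all 10 GB
            }
            System.out.println("Parsed " + parsed + " PDFs in "
                    + (System.currentTimeMillis() - start) + " ms");
        }
    }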
Hi:
I have seen the post in
http://www.mail-archive.com/[EMAIL PROTECTED]/msg12700.html and
I am implementing a similar application in a distributed environment, a
cluster of only 5 nodes. The operating system I use is Linux (CentOS),
so I am using the NFS file system too, to access the home
"Patrick Kimber" <[EMAIL PROTECTED]> wrote:
> I cannot send you the source code without speaking to my manager
> first. I guess he would want me to change the code before sending it
> to you. You could have the log files now, but I expect you want to
> wait until the test application is ready t
"pkimber" <[EMAIL PROTECTED]> wrote:
> We are still getting various issues on our Lucene indexes running on
> an NFS share. It has taken me some time to find some useful
> information to report to the mailing list.
Bummer!
Can you zip up your test application that sh
Hi
We are still getting various issues on our Lucene indexes running on an NFS
share. It has taken me some time to find some useful information to report
to the mailing list.
I have created a test application which is running on two Linux servers.
The Lucene index is on an NFS share. After
Hi Michael
Just to let you know, I am on holiday for one week so will not be able
to send a progress report until I return.
I have deployed the new code to a test site so I will be informed if
the users notice any issues.
Thanks for your help
Patrick
On 04/07/07, Michael McCandless <[EMAIL P
"Patrick Kimber" <[EMAIL PROTECTED]> wrote:
> Yes, there are many lines in the logs saying:
> hit FileNotFoundException when loading commit "segment_X"; skipping
> this commit point
> ...so it looks like the new code is working perfectly.
Super!
> I am sorry to be vague... but how do I check wh
Hi Michael
Yes, there are many lines in the logs saying:
hit FileNotFoundException when loading commit "segment_X"; skipping
this commit point
...so it looks like the new code is working perfectly.
I am sorry to be vague... but how do I check which segments file is
opened when a new writer is cr
"Patrick Kimber" <[EMAIL PROTECTED]> wrote:
> I have been running the test for over an hour without any problem.
> The index writer log file is getting rather large so I cannot leave
> the test running overnight. I will run the test again tomorrow
> morning and let you know how it goes.
Ahhh, th
"Patrick Kimber" <[EMAIL PROTECTED]> wrote:
> I am using the NativeFSLockFactory. I was hoping this would have
> stopped these errors.
I believe this is not a locking issue and NativeFSLockFactory should
be working correctly over NFS.
> Here is the whole of the stack trace:
>
> Caused by: java.io.FileNotFoundException:
> /mnt/nfstest/repository/lucene/lucene-ic
I think you should get the "NFS, Lock obtain timed out" exception (that you
mentioned in the subject line), instead of "java.io.FileNotFoundException".
Because if one server is holding the lock on the directory, then the other
server will wait until the default lock timeout and will thro
er" <[EMAIL PROTECTED]>
07/03/2007 03:47 PM
Please respond to
java-user@lucene.apache.org
To
java-user@lucene.apache.org
cc
Subject
Re: Lucene 2.2, NFS, Lock obtain timed out
Hi
I have added more logging to my test application. I have two servers
writing to a shared Lucene ind
t
Re: Lucene 2.2, NFS, Lock obtain timed out
Hi
I have added more logging to my test application. I have two servers
writing to a shared Lucene index on an NFS partition...
Here is the logging from one server...
[10:49:18] [DEBUG] LuceneIndexAccessor closing cached writer
[
Hi
I have added more logging to my test application. I have two servers
writing to a shared Lucene index on an NFS partition...
Here is the logging from one server...
[10:49:18] [DEBUG] LuceneIndexAccessor closing cached writer
[10:49:18] [DEBUG] ExpirationTimeDeletionPolicy onCommit() delete
A wrong deletion policy or even a buggy deletion policy (if
indeed file.lastModified() varies by too much across machines) can't
cause this (I think). At worst, the wrong deletion policy should
cause other already-open readers to hit "Stale NFS handle"
IOExceptions during
On 6/29/07, Doron Cohen <[EMAIL PROTECTED]> wrote:
> Note that some Solr users have reported a similar issue.
> https://issues.apache.org/jira/browse/SOLR-240
Seems the scenario there is without using native locks? -
"i get the stacktrace below ... with useNativeLocks turned off"
Yes... but th
em due to the unavailability of "delete on
last close" semantics over NFS. If a certain node in the cluster has not
released a writer (due to not being used to write to the index) in a
long time, another node could trigger the deletion of the files that a
Reader from t
Mark Miller wrote:
> You might try just using one of the nodes as
> the writer. In Michael's comments, he always seems
> to mention the pattern of one writer, many
> readers, on NFS. In this case you could use
> no LockFactory and perhaps gain a little speed there.
One thing I would
lock file not being removed?
Yes.
> - Is it safe to ignore this exception (probably not)?
No, let's fix it... /;->
> - Why would the segments file be missing? Could this
> be connected to the NFS issues in some way?
I would think so.
: Perhaps I'm missing something, but I thought NativeFSLock was not suitable
: for NFS? ... or is this what "lockd" provides? (my NFS knowledge is
: very out of date)
D'oh!
I just read the docs for NativeFSLockFactory and noticed the "For example,
for NFS servers
: We are sharing a Lucene index in a Linux cluster over an NFS share. We have
: multiple servers reading and writing to the index.
:
: I am getting regular lock exceptions e.g.
: Lock obtain timed out:
:
NativeFSLock@/mnt/nfstest/repository/lucene/lock/lucene-2d3d31fa7f19eabb73d692df44087d81-n