Hi
If I can guarantee that only one JVM will ever update an index (not merely
one at a time, but truly just one JVM), can I disable locks, or is disabling
them really intended only for read-only media? If I do disable locks, will I
see any performance improvement?
Thanks
Shai
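(For reference, when exactly one JVM ever writes, locking can be turned off at the Directory level. A minimal sketch against the Lucene 2.4 store API; the index path is hypothetical:)

```java
import java.io.File;

import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.NoLockFactory;

public class NoLockExample {
    public static void main(String[] args) throws Exception {
        // Open the directory with locking disabled entirely.
        // Only safe when exactly one JVM (and one IndexWriter) ever writes.
        FSDirectory dir = FSDirectory.getDirectory(
                new File("/path/to/index"),          // hypothetical path
                NoLockFactory.getNoLockFactory());
        // ... use dir with an IndexWriter as usual ...
        dir.close();
    }
}
```

The write lock only guards against two concurrent writers, so any performance gain from removing it is likely limited to a little work at writer open/close rather than a real indexing speedup.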
Right, so after looking again at what was happening in SegmentInfos, I
noticed I was saving to the Datastore on IndexOutput.flush but not on close.
Persisting the file on close solved this particular problem.
Sorry about that.
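(In case it helps anyone writing a custom Directory: Lucene may close a file without a final flush(), so close() itself has to leave the file durable. A rough sketch of the shape of the fix against the 2.4 IndexOutput API; persistToDatastore is a hypothetical hook, and seek() is left unsupported to keep the sketch short:)

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.lucene.store.IndexOutput;

/** Buffers the whole file in memory and persists it on close(). */
public abstract class DatastoreIndexOutput extends IndexOutput {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    public void writeByte(byte b) throws IOException {
        buffer.write(b);
    }

    public void writeBytes(byte[] b, int offset, int length) throws IOException {
        buffer.write(b, offset, length);
    }

    public void flush() throws IOException {
        // Intentionally a no-op here: nothing is durable until close().
    }

    public void close() throws IOException {
        // The crucial part: persist the finished file when Lucene closes it.
        persistToDatastore(buffer.toByteArray());
    }

    public long getFilePointer() {
        return buffer.size();
    }

    public void seek(long pos) throws IOException {
        throw new IOException("seek not supported in this sketch");
    }

    public long length() {
        return buffer.size();
    }

    /** Hypothetical hook: write the completed file to the backing store. */
    protected abstract void persistToDatastore(byte[] contents) throws IOException;
}
```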
On Sat, Aug 15, 2009 at 1:03 AM, Bryan Swift wrote:
> I'm attempting to
I'm attempting to create a Directory implementation (lucene-core 2.4.1) to
sit on top of Google's App Engine Datastore (written in Scala). In the
process of doing this I found something odd for which I'm hoping there is a
relatively simple solution.
When instantiating a new IndexWriter with my Dire
Alas, I don't see it failing with the optimize left in. Which exact
rev of 2.9 are you testing with? Which OS/filesystem/JRE?
I realize this is just a test so what follows may not apply to your
"real" usage of SnapshotDeletionPolicy...:
Since you're closing the writer before taking the backup,
public static class IndexBackup {
    private SnapshotDeletionPolicy snapshotDeletionPolicy;
    private File backupFolder;
    private IndexCommit indexCommit;

    public IndexBackup(final SnapshotDeletionPolicy snapshotDeletionPolicy,
            final File backupFolder) {
        this.snapshotDeletionPolicy = snapshotDeletionPolicy;
        this.backupFolder = backupFolder;
    }
Not as small as I would like, but it shows the problem.
If you remove the statement

    // Remove this and the backup works fine
    optimize(policy);

the backup works wonderfully.
(More code in the next e-mail)
Lucas
public static void main(final String[] args) throws
CorruptIndexException, IOException
I think you should also delete files that don't exist anymore in the index,
from the backup?
Shai
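(The pruning step can be sketched in plain Java: after copying the snapshot's files, delete anything in the backup folder that is no longer part of the index. The class and method names here are illustrative; the set of current file names would come from IndexCommit.getFileNames():)

```java
import java.io.File;
import java.util.Set;

public class BackupPruner {

    /**
     * Delete files in backupFolder whose names are not in currentFiles.
     * Returns the number of files actually deleted.
     */
    public static int prune(File backupFolder, Set<String> currentFiles) {
        int deleted = 0;
        File[] existing = backupFolder.listFiles();
        if (existing == null) {
            return 0;  // folder missing or not a directory
        }
        for (File f : existing) {
            if (f.isFile() && !currentFiles.contains(f.getName())) {
                if (f.delete()) {
                    deleted++;
                }
            }
        }
        return deleted;
    }
}
```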
On Fri, Aug 14, 2009 at 10:02 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:
> Could you boil this down to a small standalone program showing the problem?
>
> Optimizing in between backu
Could you boil this down to a small standalone program showing the problem?
Optimizing in between backups should be completely fine.
Mike
On Fri, Aug 14, 2009 at 2:47 PM, Lucas Nazário dos
Santos wrote:
> Hi,
>
> I'm using the SnapshotDeletionPolicy class to backup my index. I basically
> call t
Hi,
I'm using the SnapshotDeletionPolicy class to back up my index. I basically
call the snapshot() method of the SnapshotDeletionPolicy class at some
point, get the list of files that changed, copy them to the backup folder, and
finish by calling the release() method.
The problem arises when, in
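(The loop described above looks roughly like this against the 2.4 API; getFileNames() returns a raw Collection in that release, and the copy helper is just plain java.io:)

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Collection;
import java.util.Iterator;

import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.SnapshotDeletionPolicy;

public class SnapshotBackup {

    /** Hold a snapshot while copying its files, then release it. */
    public static void backup(SnapshotDeletionPolicy policy,
                              File indexDir, File backupDir) throws IOException {
        IndexCommit commit = policy.snapshot();  // pin the current commit point
        try {
            Collection files = commit.getFileNames();  // raw Collection in 2.4
            for (Iterator it = files.iterator(); it.hasNext();) {
                String name = (String) it.next();
                copy(new File(indexDir, name), new File(backupDir, name));
            }
        } finally {
            policy.release();  // let the deletion policy reclaim old files again
        }
    }

    private static void copy(File src, File dst) throws IOException {
        FileInputStream in = new FileInputStream(src);
        FileOutputStream out = new FileOutputStream(dst);
        try {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
        } finally {
            in.close();
            out.close();
        }
    }
}
```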
Okay, let me see if I can create a standalone test that reproduces this issue
and get back to you.
Regards,
Rishi
Michael McCandless-2 wrote:
>
> But, if you can break out just the Lucene indexing part into a
> standalone test, that shows the exception, that can help us isolate
> it.
>
> At this poi
Hello,
Using PatternAnalyzer with a whitespace pattern solved the problem as well.
There's lowercase support too (see
http://lucene.apache.org/java/2_2_0/api/org/apache/lucene/index/memory/PatternAnalyzer.html)
Many thanks.
Regards,
Ueli Kistler
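(For reference, the configuration that worked here looks roughly like this against the 2.2 contrib-memory API; passing null simply omits the stop-word set:)

```java
import org.apache.lucene.index.memory.PatternAnalyzer;

public class WhitespaceLowercase {
    public static PatternAnalyzer create() {
        // Split on whitespace only, lowercase each token, no stop words.
        return new PatternAnalyzer(PatternAnalyzer.WHITESPACE_PATTERN, true, null);
    }
}
```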
-----Original Message-----
From: AHM
But, if you can break out just the Lucene indexing part into a
standalone test, that shows the exception, that can help us isolate
it.
At this point it seems likely the issue is something OS/filesystem
specific, because at one time (during optimize), we see a "file not
found" exception, yet at ano
This is a big project with lots of source files. An excerpt of the main
indexer program is:
    FileIndexer indexer = new FileIndexer(props);
    Analyzer defAnalyzer = new StandardAnalyzer();
    FileIndexer.analyzer = defAnalyzer;
    IndexWr
Can you post the full sources for the test that hits the exception on
your system?
Mike
On Fri, Aug 14, 2009 at 7:36 AM, rishisinghal wrote:
>
> It is reproducible on my system. If the number of files is small, I am
> not able to see the crash.
>
>>>can you turn on infoStream on IndexWriter and p
It is reproducible on my system. If the number of files is small, I am not
able to see the crash.
>>can you turn on infoStream on IndexWriter and post the output?
I am not clear on this. If you are talking about
writer.setInfoStream(System.out);
then I have already posted that output while raising the issue
If it's easily reproducible, can you please provide a test that reproduces
it, even if on just your FS? And also, can you turn on infoStream on
IndexWriter and post the output?
If CheckIndex succeeds, then I'd compare the output w/ and w/o CheckIndex.
Shai
On Fri, Aug 14, 2009 at 12:17 AM, rishi
> Noticed that in Luke... is there any existing analyzer around that
> supports case-insensitive search and recognizes "RZ/G/17" as one token?
As far as I know there is no built-in analyzer that uses a whitespace
tokenizer and a lowercase filter together. But it is easy to combine a tokenizer and token
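(A minimal sketch of such an analyzer against the 2.x analysis API; whitespace tokenization keeps "RZ/G/17" as a single token, and the filter lowercases it for case-insensitive search:)

```java
import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

/** Splits on whitespace only and lowercases every token. */
public class LowercaseWhitespaceAnalyzer extends Analyzer {
    public TokenStream tokenStream(String fieldName, Reader reader) {
        return new LowerCaseFilter(new WhitespaceTokenizer(reader));
    }
}
```

The same analyzer has to be used at both index and query time for the terms to line up.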
Let's say we have 4 users: U1, U2, U3 and U4. Each user has a title and a set
of documents created by him/her. Using this info, we can come up with a term
vector (an interest vector) which would contain a set of top terms (those
that appeared in his/her docs) along with their frequencies. So conceptually, we get
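(The counting itself needs no Lucene machinery; a plain-Java sketch with a naive whitespace tokenizer, just to make the idea concrete:)

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class InterestVector {

    /** Count term frequencies across a user's documents and keep the top-k terms. */
    public static List<Map.Entry<String, Integer>> topTerms(List<String> docs, int k) {
        Map<String, Integer> freq = new HashMap<String, Integer>();
        for (String doc : docs) {
            // Naive tokenization: lowercase, then split on whitespace.
            for (String term : doc.toLowerCase().split("\\s+")) {
                if (term.isEmpty()) {
                    continue;
                }
                Integer c = freq.get(term);
                freq.put(term, c == null ? 1 : c + 1);
            }
        }
        List<Map.Entry<String, Integer>> entries =
                new ArrayList<Map.Entry<String, Integer>>(freq.entrySet());
        Collections.sort(entries, new Comparator<Map.Entry<String, Integer>>() {
            public int compare(Map.Entry<String, Integer> a, Map.Entry<String, Integer> b) {
                return b.getValue() - a.getValue();  // highest frequency first
            }
        });
        return entries.subList(0, Math.min(k, entries.size()));
    }
}
```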
Noticed that in Luke... is there any existing analyzer around that supports
case-insensitive search and recognizes "RZ/G/17" as one token?
-----Original Message-----
From: AHMET ARSLAN [mailto:iori...@yahoo.com]
Sent: Thursday, 13 August 2009 22:52
To: java-user@lucene.apache.org
B