I'm curious (:-)) about what you mean by *adjusted*? Also, I'm not sure
I have the nomenclature right here, but isn't indexing functionally
separate from merging segments? (You *index* to a segment which may,
or may not, later be *merged* with other segments, no?)
On Tue, Jan 5, 2010 at 11:28 PM, Chris Lu wrote:
Just curious, will it be adjusted during indexing when merging segments?
Thanks!
--
Chris Lu
-
Instant Scalable Full-Text Search On Any Database/Application
site: http://www.dbsight.net
demo: http://search.dbsight.com
Lucene Database Search in 3 minutes:
http://wiki.dbsi
Thanks!
On Tue, Jan 5, 2010 at 1:00 PM, Michael McCandless wrote:
> Making that switch is fine.
>
> The change will not be retroactive, i.e., all previously indexed docs
> with Store.YES will continue to store their fields. But new docs
> won't store their fields if you specify Store.NO.
>
> I don
Making that switch is fine.
The change will not be retroactive, i.e., all previously indexed docs
with Store.YES will continue to store their fields. But new docs
won't store their fields if you specify Store.NO.
I don't think this (what happens when certain schema changes happen
mid-indexing) is
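(For reference, a minimal sketch of what that switch might look like against the Lucene 3.0 API; the index path and field name here are hypothetical. Documents added before the change keep their stored values; new documents indexed with Store.NO simply stop storing theirs.)

    import java.io.File;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class SwitchToStoreNo {
        public static void main(String[] args) throws Exception {
            // Open the existing index; previously added docs keep their stored fields.
            Directory dir = FSDirectory.open(new File("/path/to/index")); // hypothetical path
            IndexWriter writer = new IndexWriter(dir,
                    new StandardAnalyzer(Version.LUCENE_30),
                    IndexWriter.MaxFieldLength.UNLIMITED);

            // New documents: the field stays searchable but is no longer stored.
            Document doc = new Document();
            doc.add(new Field("body",                 // hypothetical field name
                              "some new document text",
                              Field.Store.NO,         // was Field.Store.YES before the change
                              Field.Index.ANALYZED));
            writer.addDocument(doc);
            writer.close();
        }
    }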
Hi,
A review of the requirements of the project I'm working on has led us
to conclude that going forward we don't need Lucene to store certain
field values--just index them. Owing to the large size of the data, we
can't really afford to reindex everything. (Going forward, we plan to
treat these fields
Erick Erickson wrote:
One way to handle this is to run some searches and check to see
if the number of matched documents is what you expect.
Yes, it works fine at the moment, but I'm trying to write JUnit tests to
protect against changes in the future; is there nothing simple I can code
Paul
-
One way to handle this is to run some searches and check to see
if the number of matched documents is what you expect.
You could also use some of the TermDocs to inspect your
index to see if things are what you expect.
You could knock yourself out and look at things like
TermPositionVector and Te
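(A minimal sketch of such a JUnit check against Lucene 3.0, building a small RAMDirectory fixture and asserting on the hit count; the field name and test data are borrowed from elsewhere in the thread, everything else is hypothetical.)

    import static org.junit.Assert.assertEquals;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.RAMDirectory;
    import org.apache.lucene.util.Version;
    import org.junit.Before;
    import org.junit.Test;

    public class IndexSanityTest {

        private Directory directory;

        @Before
        public void buildIndex() throws Exception {
            directory = new RAMDirectory();
            IndexWriter writer = new IndexWriter(directory,
                    new StandardAnalyzer(Version.LUCENE_30),
                    IndexWriter.MaxFieldLength.UNLIMITED);
            Document doc = new Document();
            // Indexed but not stored: we can still verify it by searching.
            doc.add(new Field("artist", "Farming Incident",
                              Field.Store.NO, Field.Index.ANALYZED));
            writer.addDocument(doc);
            writer.close();
        }

        @Test
        public void knownTermMatchesExpectedNumberOfDocs() throws Exception {
            IndexSearcher searcher = new IndexSearcher(directory, true); // read-only
            // StandardAnalyzer lowercases at index time, so search the lowercased term.
            TopDocs hits = searcher.search(new TermQuery(new Term("artist", "farming")), 10);
            assertEquals(1, hits.totalHits);
            searcher.close();
        }
    }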
Would IndexReader#termDocs() help? You get all docs containing a
specific term - that way you could iterate in reverse order though.
simon
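(A rough sketch of walking TermDocs on Lucene 3.0; the Directory and Term passed in would be whatever the test is checking.)

    import java.io.IOException;

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.index.TermDocs;
    import org.apache.lucene.store.Directory;

    public class TermDocsInspector {
        /** Prints every document (and its term frequency) containing the given term. */
        public static void dumpDocsForTerm(Directory directory, Term term) throws IOException {
            IndexReader reader = IndexReader.open(directory, true); // read-only
            TermDocs termDocs = reader.termDocs(term);
            try {
                while (termDocs.next()) {
                    System.out.println("doc=" + termDocs.doc() + " freq=" + termDocs.freq());
                }
            } finally {
                termDocs.close();
                reader.close();
            }
        }
    }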
On Tue, Jan 5, 2010 at 5:08 PM, Paul Taylor wrote:
>
> In my JUnit test code, I check the index has been created okay by checking
> the value of various field
In my JUnit test code, I check that the index has been created okay by
checking the value of various fields that have been indexed (and stored),
i.e. assertEquals("Farming Incident",
doc.getField(ArtistIndexField.ARTIST.getName()).stringValue());
But if I'm only indexing the field, but not storing
Ben Armstrong wrote:
I am trying to get Solr 1.4.0 to work on OpenVMS V8.3 Alpha with Java
1.5.0-6.p1.
...
If Lucene considered the segment number to end at a final period
instead of scanning to the end of the string, then I could get past
this error.
I looked at the other possible JAVA$
Hi Andre,
You are using StandardAnalyzer for indexing, but you search with an
un-analyzed string "Lucene" (q.add(new Term("title","Lucene"));).
If you pass this string to the query parser, your query string will be
analyzed (most likely resulting in a lowercased string). The
analyzed query will the
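(A hedged sketch of the difference on Lucene 3.0, with a hypothetical "title" field and terms: the PhraseQuery is built from the lowercased tokens StandardAnalyzer actually indexed, while the QueryParser version lowercases the phrase for you.)

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.PhraseQuery;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.util.Version;

    public class PhraseQueryExample {
        public static void main(String[] args) throws Exception {
            // PhraseQuery terms must match the tokens the analyzer wrote to the index.
            // StandardAnalyzer lowercases, so add lowercased terms:
            PhraseQuery phrase = new PhraseQuery();
            phrase.add(new Term("title", "lucene"));   // not "Lucene"
            phrase.add(new Term("title", "action"));   // hypothetical adjacent term

            // QueryParser "works" because it runs the query text through the analyzer first:
            QueryParser parser = new QueryParser(Version.LUCENE_30, "title",
                    new StandardAnalyzer(Version.LUCENE_30));
            Query parsed = parser.parse("\"Lucene Action\"");
            System.out.println(phrase + " vs " + parsed);  // both end up with lowercased terms
        }
    }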
Hi,
I need to search for a phrase containing a particular sequence of terms, so I
am using Java Lucene 3.0, more specifically the PhraseQuery.
I'm using the code below, but it does not work (PhraseQuery). It only works
when I use the QueryParser:
Is there some problem, or how can I use the PhraseQuery in
I am trying to get Solr 1.4.0 to work on OpenVMS V8.3 Alpha with Java
1.5.0-6.p1.
I see at least one other user has attempted to make Lucene work on
OpenVMS before, but ran into problems which appear to remain unresolved:
http://www.lucidimagination.com/search/document/8f4a752f43f34c6a/indexe
sqzaman wrote:
>
>
>
> Philip Puffinburger wrote:
>>
>> That depends on what you are trying to do.
>>
>> You could create the StandardAnalyzer and pass in your own stop word set,
>> but that would use that stop word set for all your analyzed fields.
>>
>> There is a PerFieldAnalyzer
So currently in my index I index and store a number of small fields. I
need both so I can search on the fields and then use the stored versions
to generate the output document (which is either an XML or JSON
representation), because I read that stored and indexed fields are dealt
with completely separately.
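(As a minimal sketch, using a field name borrowed from earlier in the thread: a field that is both indexed and stored is searchable via its analyzed tokens, while the stored value is the verbatim string you get back for building the XML/JSON output.)

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;

    public class IndexedAndStoredField {
        /** Builds a document whose field is both searchable and retrievable verbatim. */
        public static Document build() {
            Document doc = new Document();
            // Indexed (analyzed tokens go to the inverted index, used for searching) AND
            // stored (the original string is kept and returned by stringValue() at search time).
            doc.add(new Field("artist", "Farming Incident",
                              Field.Store.YES, Field.Index.ANALYZED));
            return doc;
        }
    }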
Philip Puffinburger wrote:
>
> That depends on what you are trying to do.
>
> You could create the StandardAnalyzer and pass in your own stop word set,
> but that would use that stop word set for all your analyzed fields.
>
> There is a PerFieldAnalyzerWrapper (I think that is the name
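(A small sketch of that combination on Lucene 3.0, with a hypothetical "title" field and stop word list: PerFieldAnalyzerWrapper supplies a default analyzer plus a per-field StandardAnalyzer built with the custom stop words.)

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.util.Version;

    public class AnalyzerSetup {
        public static Analyzer build() {
            // Custom stop word set for one field only (words are hypothetical).
            Set<String> stopWords = new HashSet<String>(Arrays.asList("the", "a", "of"));

            // Default analyzer for all other fields...
            PerFieldAnalyzerWrapper wrapper =
                    new PerFieldAnalyzerWrapper(new StandardAnalyzer(Version.LUCENE_30));
            // ...and a StandardAnalyzer with the custom stop words just for "title":
            wrapper.addAnalyzer("title", new StandardAnalyzer(Version.LUCENE_30, stopWords));
            return wrapper;
        }
    }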