> You need to pass lowercase terms to
> TermQuery.
>
> You can still use an analyzer where appropriate e.g. to
> parse a string
> into a Query that you add to a BooleanQuery.
>
>
> --
> Ian.
>
>
> On Tue, Jan 13, 2009 at 1:43 AM, Rajesh parab
> wrote:
> >
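Ian's advice above can be sketched as follows (a sketch against a Lucene 2.x-era API; the field names `status` and `body` are made up for illustration):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class QuerySketch {
    public static Query build(String userInput) throws Exception {
        BooleanQuery bq = new BooleanQuery();
        // TermQuery bypasses analysis entirely: if StandardAnalyzer
        // lowercased the field at index time, lowercase the term yourself.
        bq.add(new TermQuery(new Term("status", "Active".toLowerCase())),
               BooleanClause.Occur.MUST);
        // QueryParser runs the analyzer for you, so the user's raw
        // string is analyzed the same way the indexed text was.
        QueryParser parser = new QueryParser("body", new StandardAnalyzer());
        bq.add(parser.parse(userInput), BooleanClause.Occur.MUST);
        return bq;
    }
}
```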
Hi,
For proper results during searches, the recommendation is to use the same analyzer
for indexing and querying. We can achieve this by passing the same analyzer,
which was used for indexing, to QueryParser to construct Lucene query and use
this query while searching the index.
The question is -
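The recommendation above amounts to a round trip like this (a minimal sketch assuming a Lucene 2.x-era API; directory and field names are illustrative):

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.RAMDirectory;

public class SameAnalyzerSketch {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new StandardAnalyzer();
        RAMDirectory dir = new RAMDirectory();

        // Index with the analyzer...
        IndexWriter writer = new IndexWriter(dir, analyzer, true);
        Document doc = new Document();
        doc.add(new Field("content", "Quarterly Sales Report",
                          Field.Store.YES, Field.Index.TOKENIZED));
        writer.addDocument(doc);
        writer.close();

        // ...and reuse the same analyzer when parsing the query string,
        // so "Sales" is normalized the same way the indexed text was.
        QueryParser parser = new QueryParser("content", analyzer);
        IndexSearcher searcher = new IndexSearcher(dir);
        Hits hits = searcher.search(parser.parse("sales"));
        System.out.println(hits.length());
    }
}
```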
> Correct, Rajesh. ParallelReader has its
> uses, but I guess your case is not one of them,
> unless we are all missing some key aspect of PR or a
> trick to make it work in your case.
>
> Otis
>
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr -
> Nutch
>
> ---
Thanks Yonik.
So, if rebuilding the second index is not an option
due to the large number of documents, then ParallelReader
will not work :-(
And I believe there is no way other than
ParallelReader to search across multiple indexes that
contain related data. Is there any other alternative?
I think, Multi
xt to nil. We are all
> volunteers with day
> > jobs.
> >
> > Have you bothered to search the dev and user
> mailing list for
> > information on the class in question? I would
> look for threads from
> > Doug or Chuck Williams.
> >
Hi Guys,
Any comments on this?
I was looking into the Lucene archive and came across this
thread that asks the same question.
http://www.gossamer-threads.com/lists/lucene/java-user/50477?search_string=parallelreader;#50477
Any pointers will be helpful.
Regards,
Rajesh
--- Rajesh parab <[EM
Hi All,
Any suggestions/comments on my questions in this
thread will be really helpful.
We are planning to use Lucene indexes throughout the
application and exploring possibilities of partitioning
data between multiple indexes.
Regards,
Rajesh
--- Rajesh parab <[EMAIL PROTECTED]> wrote:
Hi,
This is from javadoc of ParallelReader:
==
An IndexReader which reads multiple, parallel indexes.
Each index added must have the same number of
documents, but typically each contains different
fields. Each document contains the union of the
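The javadoc contract quoted above looks like this in code (a sketch against the Lucene 2.x API; `staticDir` and `dynamicDir` are assumed to be `Directory` instances holding the two parallel indexes):

```java
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.ParallelReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.Directory;

public class ParallelSketch {
    // Both indexes must hold the same documents in the same order, so
    // document N in staticDir and document N in dynamicDir are treated
    // as one logical document carrying the union of their fields.
    public static IndexSearcher open(Directory staticDir, Directory dynamicDir)
            throws Exception {
        ParallelReader pr = new ParallelReader();
        pr.add(IndexReader.open(staticDir));
        pr.add(IndexReader.open(dynamicDir));
        return new IndexSearcher(pr);
    }
}
```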
--- Rajesh parab <[EMAIL PROTECTED]> wrote:
> Hi Mathieu,
>
> I can definitely store the foreign key inside the
> dynamic index. However if I understand correctly,
> for
> ParallelReader to work properly, doc ids for all
> documents in both primary and secondary (dynamic
the two indexes and keep doc ids in
sync so that we can use ParallelReader?
Regards,
Rajesh
--- Karl Wettin <[EMAIL PROTECTED]> wrote:
> Rajesh parab skrev:
> > How do we specify the primary key or doc id so
> that
> > newly added document will use the same doc id. Do
Hi Mathieu,
I can definitely store the foreign key inside the
dynamic index. However if I understand correctly, for
ParallelReader to work properly, doc ids for all
documents in both primary and secondary (dynamic)
index should be in same order.
How can we achieve it if there are frequent changes
production environments? Will this fix
get rolled into latest Lucene release?
Regards,
Rajesh
--- Karl Wettin <[EMAIL PROTECTED]> wrote:
> Rajesh parab skrev:
>
> > https://issues.apache.org/jira/browse/LUCENE-879
> > As per the hack you mentioned inside
How much data do you have? I have a hard time
understanding the relationship between your objects and
what sort of normalized data you add to the documents.
If you are lucky it is just a single field or a few fields
that need to be updated, and you can manage to keep it
in RAM and rebuild the whole thing
?
Regards,
Rajesh
--- Rajesh parab <[EMAIL PROTECTED]> wrote:
> Thanks for details Karl.
>
> I was looking for something like it. However, I have
> a
> question around the warning mentioned in javadoc of
> parallelReader.
>
> It says -
> It is up to you to make s
aceted search.
I am looking for a way to use single query to run
across two indexes (static and dynamic index) and the
search query will have fields from both these indexes.
Rajesh
--- Mathieu Lecarme <[EMAIL PROTECTED]> wrote:
>
> On 11 Apr 2008 at 19:29, Rajesh parab wrote:
> Warning: It is up to you to make sure all indexes
> are created and
> modified the same way. For example, if you add
> documents to one index,
> you need to add the same documents in the same order
> to the other
> indexes. Failure to do so will result in undefined
?
Regards,
Rajesh
--- Mathieu Lecarme <[EMAIL PROTECTED]> wrote:
> Have a look at Compass 2.0M3
> http://www.kimchy.org/searchable-cascading-mapping/
>
> Your multiple indexes will be nice for massive writes.
> In a classical
> read/write ratio, Compass will be much easier.
>
Hi,
We are using Lucene 2.0 to index data stored inside
relational database. Like any relational database, our
database has quite a few one-to-one and one-to-many
relationships. For example, lets say an Object A has
one-to-many relationship with Object X and Object Y.
As we need to de-normalize r
Hi All,
Has anyone used rsync or similar utilities on Windows
OS to replicate Lucene index across multiple machines?
Any pointers on it will be very useful.
Regards,
Rajesh
--- Rajesh parab <[EMAIL PROTECTED]> wrote:
> Hi Michael,
>
> Thanks a lot for your suggestions.
>
they will serve the purpose. Has anyone used rsync
on Windows?
Regards,
Rajesh
--- Michael McCandless <[EMAIL PROTECTED]>
wrote:
>
> Rajesh parab wrote:
> > Hi,
> >
> > We are currently using Lucene 2.0 for full-text
> > searches within our enterprise applica
Hi,
We are currently using Lucene 2.0 for full-text
searches within our enterprise application, which can
be deployed in clustered environment. We generate
Lucene index for data stored inside relational
database.
As Lucene 2.0 did not have solid NFS support and as we
wanted Lucene-based searches
One more alternative, though I am not sure if anyone
is using it.
Apache Compass has added a plug-in to allow storing
Lucene index files inside the database. This should
work in clustered environment as all nodes will share
the same database instance.
I am not sure of the impact it will have on perf
Hi Everyone,
I understand that QueryParser allows searches using
double quote characters.
I was wondering if the double quote will also work
with TermQuery.
I am not using QueryParser in my application and am
constructing queries (TermQuery, RangeQuery,
BooleanQuery, etc.) explicitly. But, it looks
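For what it's worth, the double quotes are QueryParser syntax, not something TermQuery understands; the hand-built equivalent of a quoted phrase is PhraseQuery (a sketch; the `body` field name is illustrative):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PhraseQuery;

public class PhraseSketch {
    public static PhraseQuery quickFox() {
        // QueryParser turns "quick fox" (in double quotes) into a
        // PhraseQuery like this one. A TermQuery built with a quoted
        // string would just search for the literal quote characters
        // as part of the term, which is almost never what you want.
        PhraseQuery pq = new PhraseQuery();
        pq.add(new Term("body", "quick"));
        pq.add(new Term("body", "fox"));
        return pq;
    }
}
```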
Hi,
I have a question on index generation. What if the index generation fails for
some reason, maybe disk full, or any other reason? Does it make the index
corrupt? I mean, can we still use the index created so far, or do we need to
re-generate the entire index?
Secondly, what are possible scenar
McCandless <[EMAIL PROTECTED]>
To: java-user@lucene.apache.org
Sent: Tuesday, November 14, 2006 10:09:49 AM
Subject: Re: Transaction support in Lucene
Rajesh parab wrote:
> Does anyone know if there is any plan in adding transaction support in Lucene?
I don't know of specific pl
Hi,
Does anyone know if there is any plan in adding transaction support in Lucene?
Regards,
Rajesh
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
> lets you declare fields to be indexed (and how) with annotations.
>
> - Mark
>
> Rajesh parab wrote:
> > Hi,
> >
> > As I understand, Lucene has a flat structure where you can define multiple
> > fields inside the document. There is no relationship between a
a cool little Lucene add-on that
lets you declare fields to be indexed (and how) with annotations.
- Mark
Rajesh parab wrote:
> Hi,
>
> As I understand, Lucene has a flat structure where you can define multiple
> fields inside the document. There is no relationship between any field
Hi,
As I understand, Lucene has a flat structure where you can define multiple
fields inside the document. There is no relationship between any field.
I would like to enable index-based search for some of the components inside a
relational database. For example, let's say "Folder" Object. The Folde
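A flat Document can still carry denormalized relational data; Lucene allows the same field name to be added more than once, which covers one-to-many values (a sketch; the field names are invented for a hypothetical Folder object):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class FolderDocSketch {
    public static Document folderDoc(String id, String name, String[] tags) {
        Document doc = new Document();
        // exact-match keys go in untokenized; free text gets tokenized
        doc.add(new Field("id", id, Field.Store.YES, Field.Index.UN_TOKENIZED));
        doc.add(new Field("name", name, Field.Store.YES, Field.Index.TOKENIZED));
        // one-to-many side: just add the field once per related value
        for (int i = 0; i < tags.length; i++) {
            doc.add(new Field("tag", tags[i], Field.Store.YES,
                              Field.Index.UN_TOKENIZED));
        }
        return doc;
    }
}
```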
Hi,
Lets consider the following object structure.
X
|
- Y
|
- Z
The objects Y and Z do not have an existence of their own. They are owned by
object X.
How do we effectively search such object
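One common workaround for owned objects like Y and Z (a sketch, not something from this thread): index each owned object as its own document, tagged with the id of the X that owns it, so a search over child fields can be resolved back to the owner:

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class OwnedObjectSketch {
    // Y and Z become documents of their own, each carrying the id of
    // the X that owns them; "find every X whose Y matches ..." becomes
    // a search over Y documents followed by a lookup of ownerId.
    public static Document childDoc(String type, String ownerId, String text) {
        Document doc = new Document();
        doc.add(new Field("type", type,
                          Field.Store.YES, Field.Index.UN_TOKENIZED));
        doc.add(new Field("ownerId", ownerId,
                          Field.Store.YES, Field.Index.UN_TOKENIZED));
        doc.add(new Field("text", text,
                          Field.Store.YES, Field.Index.TOKENIZED));
        return doc;
    }
}
```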