Check out these articles on this topic. Hope it helps.
http://www.findbestopensource.com/article-detail/lucene-solr-as-nosql-db
http://www.lucidimagination.com/blog/2010/04/30/nosql-lucene-and-solr/
In a nutshell, it is fine to use Lucene as a NoSQL store, but it is better to
also have your data in some persistent store.
Hi
If the data is not stored, then it cannot be retrieved in its original form.
Using IndexReader as you listed, you can retrieve the list of terms
present in the doc. Those terms have been analyzed, so you may not get the exact original data.
Regards
Aditya
www.findbestopensource.com
On Fri, Jul 27, 2012 at 1:3
Do you have any code example for searching for words within a paragraph or a line?
On Thu, Jul 26, 2012 at 3:34 PM, neerajshah84 wrote:
> Can you send me any example for searching for words within a paragraph or a line?
>
> On Wed, Jul 25, 2012 at 2:33 PM, Ian Lea [via Lucene] <
> ml-node+s472066n399717...@n3.nabble.com> wrote:
>
Hi Robert,
Thanks for your help. This cleared up all of the things I was having trouble
understanding about offsets and positions in term vectors.
Mike
-----Original Message-----
From: Robert Muir [mailto:rcm...@gmail.com]
Sent: Friday, July 20, 2012 5:59 PM
To: java-user@lucene.apache.org
Subje
Thanks for the reply, Abdul.
I was exploring the API and I think we can retrieve all those words with
a brute-force approach (a rough sketch follows the steps below):
1) Get all the terms using indexReader.terms()
2) Process the term only if it belongs to the target field.
3) Get all the docs using indexReader.termDocs(term);
4) S
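A rough sketch of those steps, assuming Lucene 3.5; the index path and the target field name "contents" are hypothetical, and the original ordering of terms within a document is lost without positions:

// Rough sketch (assumes Lucene 3.5; index path and field name "contents" are hypothetical).
// Walks every term of one field and records, per document, the analyzed terms it contains.
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.lucene.index.TermEnum;
import org.apache.lucene.store.FSDirectory;
import java.io.File;
import java.util.*;

public class ReconstructUnstoredField {
  public static void main(String[] args) throws Exception {
    IndexReader reader = IndexReader.open(FSDirectory.open(new File("/path/to/index")));
    String field = "contents";
    Map<Integer, List<String>> termsPerDoc = new HashMap<Integer, List<String>>();
    TermEnum terms = reader.terms(new Term(field, ""));      // 1) enumerate terms
    try {
      do {
        Term t = terms.term();
        if (t == null || !t.field().equals(field)) break;    // 2) only the target field
        TermDocs docs = reader.termDocs(t);                  // 3) docs containing the term
        while (docs.next()) {
          List<String> list = termsPerDoc.get(docs.doc());
          if (list == null) {
            list = new ArrayList<String>();
            termsPerDoc.put(docs.doc(), list);
          }
          list.add(t.text());                                // 4) collect per doc
        }
        docs.close();
      } while (terms.next());
    } finally {
      terms.close();
      reader.close();
    }
    System.out.println(termsPerDoc.get(0));   // analyzed terms of doc 0, original order lost
  }
}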
No, it's not possible to get back data that was not stored.
On Jul 26, 2012 10:27 PM, "Phanindra R [via Lucene]"
> Hi,
> I've an index to analyze (manually). Unfortunately, I cannot rebuild
> the index. Some of the fields are 'unstored'. I was wondering whether
> there's any way to get the ter
On Thu, Jul 26, 2012 at 12:16 PM, Johannes Neubarth wrote:
> For stopwords that are at the end of the tokenStream (e.g. "them"), the
> positionIncrement is not updated - after leaving the while-loop,
> skippedTokens is 0. My workaround is to append a unique number to every
> input text, so that e
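For reference, a minimal sketch of the kind of consume loop discussed above, assuming Lucene 3.6 and StandardAnalyzer's default stop set; as noted, position increments skipped after the last emitted token (trailing stopwords) are not visible inside the loop in 3.x:

// Minimal sketch (assumes Lucene 3.6). Consumes a token stream and accumulates positions.
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.util.Version;
import java.io.StringReader;

public class PositionDump {
  public static void main(String[] args) throws Exception {
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_36);   // removes stopwords
    TokenStream ts = analyzer.tokenStream("f", new StringReader("a test of the"));
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    PositionIncrementAttribute posIncr = ts.addAttribute(PositionIncrementAttribute.class);
    ts.reset();
    int position = -1;
    while (ts.incrementToken()) {
      position += posIncr.getPositionIncrement();   // > 1 means stopwords were skipped before this token
      System.out.println(position + ": " + term);   // prints "1: test"
    }
    ts.end();    // the trailing stopwords "of the" leave no trace this loop can see in 3.x
    ts.close();
  }
}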
If I want to set up a database that is totally flat with no joins, is there any
reason not to use Lucene? The things I would be curious about are insert
performance and whether there are any queries that either don't work in
Lucene or perform better in MySQL/Postgres.
-
Hi,
I've an index to analyze (manually). Unfortunately, I cannot rebuild
the index. Some of the fields are 'unstored'. I was wondering whether
there's any way to get the terms from an unstored field for each doc.
Positional information is not necessary. Lucene version is 3.5.
The reason I am tr
Hi.
Faceted search has existed since 3.5 and will exist in 4.0 too!
Shai
On Jul 26, 2012 7:21 PM, "Subramanian, Ranjith" <
ranjith.subraman...@capgemini.com> wrote:
> Hi Team,
>
> I would like to know if Lucene 4.0 will support faceted search.
>
> Thanks in advance.
>
Hi Team,
I would like to know if Lucene 4.0 will support faceted search.
Thanks in advance.
Best regards,
Ranjith...
Ranjith Ratna Kumar S / Capgemini India /
Bangalore
Warner Bros. | Techni
Hello,
I want to align the output of two different analysis pipelines, but I
don't know how.
We are using Lucene for text analysis. First, every input text is
normalized using StandardTokenizer, StandardFilter and LowerCaseFilter.
This yields a list of tokens (list1). Second, the same input text is
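A minimal sketch of the first pipeline described above, assuming Lucene 3.6 and collecting the tokens into what the message calls list1:

// Minimal sketch (assumes Lucene 3.6) of the first pipeline:
// StandardTokenizer -> StandardFilter -> LowerCaseFilter, collected into a list.
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class PipelineTokens {
  static List<String> tokenize(String text) throws Exception {
    TokenStream ts = new StandardTokenizer(Version.LUCENE_36, new StringReader(text));
    ts = new StandardFilter(Version.LUCENE_36, ts);
    ts = new LowerCaseFilter(Version.LUCENE_36, ts);
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    List<String> tokens = new ArrayList<String>();   // this is "list1"
    ts.reset();
    while (ts.incrementToken()) {
      tokens.add(term.toString());
    }
    ts.end();
    ts.close();
    return tokens;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(tokenize("The Quick Brown Fox"));   // [the, quick, brown, fox]
  }
}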
> fear2dark tight3free is one single
> query and I'm using the query parser. If I
> pass
> "fear dark"~2 "tight free"~3 then I will not
> get results in which dark
> and tight are near each other.
So you want dark and tight to be near each other. The SurroundQueryParser
supports nested proximity queries.
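A minimal sketch using the surround query parser from the Lucene 3.x contrib modules; the field name "contents" and the distances are illustrative:

// Minimal sketch (assumes the Lucene 3.x contrib surround parser; field "contents" is hypothetical).
// Surround syntax: nN(...) = unordered within n positions, nW(...) = ordered.
// The nested query asks for (fear near dark) near (tight near free).
import org.apache.lucene.queryParser.surround.parser.QueryParser;
import org.apache.lucene.queryParser.surround.query.BasicQueryFactory;
import org.apache.lucene.queryParser.surround.query.SrndQuery;
import org.apache.lucene.search.Query;

public class SurroundExample {
  public static void main(String[] args) throws Exception {
    SrndQuery srnd = QueryParser.parse("5N(2N(fear, dark), 3N(tight, free))");
    Query query = srnd.makeLuceneQueryField("contents", new BasicQueryFactory(1024));
    System.out.println(query);   // rewrites to nested SpanNearQuery clauses
  }
}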
On Thu, Jul 26, 2012 at 4:10 AM, yamo93 wrote:
> A possible workaround would be to call this constructor
> ElisionFilter(Version matchVersion, TokenStream input, Set articles).
>
That's the way; just supply the list you want.
> But I don't understand why "d" and "c" are not present in the default
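A minimal sketch of that workaround, assuming Lucene 3.6; the extra articles added to the set are just an illustration:

// Minimal sketch (assumes Lucene 3.6). ElisionFilter with a custom article set
// that also strips d' and c' (e.g. "d'une", "c'est").
import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.fr.ElisionFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;
import java.io.StringReader;
import java.util.Arrays;

public class ElisionExample {
  public static void main(String[] args) throws Exception {
    CharArraySet articles = new CharArraySet(Version.LUCENE_36,
        Arrays.asList("l", "m", "t", "qu", "n", "s", "j", "d", "c"), true);
    TokenStream ts = new StandardTokenizer(Version.LUCENE_36, new StringReader("c'est d'une belle chose"));
    ts = new ElisionFilter(Version.LUCENE_36, ts, articles);   // constructor cited above
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
      System.out.println(term.toString());   // est, une, belle, chose
    }
    ts.end();
    ts.close();
  }
}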
Hi Trejkaz,
I am using the StandardAnalyzer for my indexing. Can you provide an
example of how I can use a BooleanQuery to add both of the fields to the
same query and still have the documents found?
I would also be really grateful if you could provide an example of parsing the text
as well, which is p
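A minimal sketch of combining two fields in one BooleanQuery, assuming Lucene 3.x, StandardAnalyzer, and hypothetical field names "title" and "body":

// Minimal sketch (assumes Lucene 3.x; field names "title" and "body" are hypothetical).
// Each field gets its own analyzed sub-query; SHOULD means a match in either field suffices.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class TwoFieldQuery {
  public static void main(String[] args) throws Exception {
    StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_36);
    String userText = "dark fear";

    Query onTitle = new QueryParser(Version.LUCENE_36, "title", analyzer).parse(userText);
    Query onBody  = new QueryParser(Version.LUCENE_36, "body", analyzer).parse(userText);

    BooleanQuery combined = new BooleanQuery();
    combined.add(onTitle, BooleanClause.Occur.SHOULD);
    combined.add(onBody, BooleanClause.Occur.SHOULD);

    System.out.println(combined);   // e.g. (title:dark title:fear) (body:dark body:fear)
  }
}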
Hi,
Sorry, I forgot the most important part: I use Lucene 3.6.
Here is my code: tokenStream = new ElisionFilter(Version.LUCENE_36,
tokenStream);
I looked at the source code of ElisionFilter, and DEFAULT_ARTICLES
doesn't contain "d" or "c", which would be needed to handle terms like "d'une" or
"c'est".
fear2dark tight3free is one single query and I'm using the query parser. If I
pass
"fear dark"~2 "tight free"~3 then I will not get results in which dark
and tight are near each other. And also,
"fear dark"~2 "tight free"~3 will give me two different results, so how
will I be able to take the i
Can you send me any example for searching for words within a paragraph or a line?
On Wed, Jul 25, 2012 at 2:33 PM, Ian Lea [via Lucene] <
ml-node+s472066n399717...@n3.nabble.com> wrote:
> Look into spans and line, or sentence, delimiters and tokens, and
> position increment gaps. Google will help you. You
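A minimal sketch of that spans approach, assuming Lucene 3.6: index each line as a separate value of the same field with a large position increment gap, then use a SpanNearQuery whose slop is smaller than the gap so matches cannot cross line boundaries. The field name "body" and the gap size are illustrative.

// Minimal sketch (assumes Lucene 3.6; field name "body" and the gap of 1000 are illustrative).
// Each line is indexed as a separate value of the same field; the position increment gap
// keeps a SpanNearQuery with a small slop from matching across lines.
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;
import java.io.Reader;

public class LineScopedSearch {
  public static void main(String[] args) throws Exception {
    final Analyzer delegate = new StandardAnalyzer(Version.LUCENE_36);
    Analyzer analyzer = new Analyzer() {
      @Override
      public TokenStream tokenStream(String fieldName, Reader reader) {
        return delegate.tokenStream(fieldName, reader);
      }
      @Override
      public int getPositionIncrementGap(String fieldName) {
        return 1000;   // large gap between successive "body" values (one value per line)
      }
    };

    RAMDirectory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(Version.LUCENE_36, analyzer));
    Document doc = new Document();
    doc.add(new Field("body", "fear of the dark", Field.Store.NO, Field.Index.ANALYZED));
    doc.add(new Field("body", "tight but free", Field.Store.NO, Field.Index.ANALYZED));
    writer.addDocument(doc);
    writer.close();

    // Matches only when both terms fall within 5 positions, i.e. inside the same line.
    SpanQuery query = new SpanNearQuery(new SpanQuery[] {
        new SpanTermQuery(new Term("body", "fear")),
        new SpanTermQuery(new Term("body", "dark"))
    }, 5, false);

    IndexReader reader = IndexReader.open(dir);
    IndexSearcher searcher = new IndexSearcher(reader);
    System.out.println(searcher.search(query, 10).totalHits);   // 1
    searcher.close();
    reader.close();
  }
}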