Would someone please document that the PathHierarchyTokenizer has a
matching ReversePHT which is created by the PHT Factory if you say
reverse="true"? This took some doing to discover: I wanted to add a
factory for ReversePHT and discovered it was not needed. (My wiki
privileges are somehow broken.)
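For reference, the reverse variant is selected with an attribute on the factory
in schema.xml; a minimal sketch (the field type name and delimiter are only
examples):

    <fieldType name="descendent_path" class="solr.TextField">
      <analyzer>
        <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" reverse="true"/>
      </analyzer>
    </fieldType>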
Hi, everyone
Two Solr server nodes point to the same data directory (same index). Do
the two Solr instances work independently?
I found something strange: one node (node0) can do a complex search (for
example: q:"disease"&sort=dateCreated), but the other (node1), using the same
search re
On 8 March 2012 11:05, Abhishek tiwari wrote:
> Gora,
> we do not have the related search ...
> like you have mentioned ... *will a search on an Establishment
> also require results from Movie, such as what movies are showing
> at the establishment*
>
> Establishment does not require Movie results
Gora,
we do not have the related search ...
like you have mentioned ... *will a search on an Establishment
also require results from Movie, such as what movies are showing
at the establishment*
Establishment does not require Movie results .. each entity has its own
separate search..
On Thu, Mar 8,
When indexing a Solr document by sending XML files via HTTP POST, you can
set it by adding the boost attribute to the doc element; see
http://wiki.apache.org/solr/UpdateXmlMessages#Optional_attributes_on_.22doc.22
If you plan to index using the Java APIs (SolrJ, see
http://wiki.apache.org/solr/Solrj) you can
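A minimal sketch of that XML form (the field names are only examples):

    <add>
      <doc boost="2.5">
        <field name="id">doc1</field>
        <field name="title">some title</field>
      </doc>
    </add>

With SolrJ the equivalent is SolrInputDocument.setDocumentBoost(2.5f).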
On 8 March 2012 10:40, Abhishek tiwari wrote:
> my page has a layout in the following manner:
> *All tab*: will contain all entities (Establishment/Event/Movie)
> Establishment tab: will contain Establishment search results
> Event tab: will contain Event search results
> Movie tab: will contain Movie
my page has a layout in the following manner:
*All tab*: will contain all entities (Establishment/Event/Movie)
Establishment tab: will contain Establishment search results
Event tab: will contain Event search results
Movie tab: will contain Movie search results
Please suggest how I should design my schema.
You should create multiple cores when each core is an independent search. If
you have three separate search pages, you may want three separate cores.
wunder
Search Guy, Chegg.com
On Mar 7, 2012, at 8:48 PM, Abhishek tiwari wrote:
> please suggest when one should create multiple cores?
>
please suggest when one should create multiple cores?
On Thu, Mar 8, 2012 at 12:12 AM, Walter Underwood wrote:
> Solr is not relational, so you will probably need to take a fresh look at
> your data.
>
> Here is one method.
>
> 1. Sketch your search results page.
> 2. Each result is a docum
Here is a performance question for you...
I want to be able to return results < 160 km from Denver, CO. We have
run some performance numbers and we know that
doing bbox is MUCH faster than geofilt.
However we want to order the queries and run bbox AND then run geofilt
on those results, OR we can
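One way to sketch that ordering, assuming the cache/cost local params on fq
(available since Solr 3.4) and a hypothetical location field named loc (the
point is only approximately Denver):

    q=*:*
    fq={!bbox sfield=loc pt=39.74,-104.99 d=160}
    fq={!geofilt sfield=loc pt=39.74,-104.99 d=160 cache=false cost=100}

The intent is that the cheap bbox filter is cached and applied as usual, while
the uncached geofilt is pushed later in the filtering order by its higher cost;
whether it runs as a true post filter depends on the Solr version.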
Just go here: https://issues.apache.org/jira/browse/SOLR
On Mar 7, 2012, at 11:57 AM, Ranjan Bagchi wrote:
> Hi --
>
> Hi, totally appreciate the guidance you've been giving me. And yes, my use
> case is having a sharded index where pieces can go in and out of service.
>
> How do I file a JIRA
Hi Phil -
The default update chain now includes the distributed update processor by
default, and if you are in SolrCloud mode it will be active.
What you probably want to do is define your own update chain (see the wiki).
Then you can set that update chain as the default for your JSON update handler
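A minimal solrconfig.xml sketch of that wiring, with a hypothetical chain name
"mychain" (the distributed processor is kept in the chain so SolrCloud mode
still works):

    <updateRequestProcessorChain name="mychain">
      <processor class="solr.LogUpdateProcessorFactory"/>
      <processor class="solr.DistributedUpdateProcessorFactory"/>
      <processor class="solr.RunUpdateProcessorFactory"/>
    </updateRequestProcessorChain>

    <requestHandler name="/update/json" class="solr.JsonUpdateRequestHandler">
      <lst name="defaults">
        <str name="update.chain">mychain</str>
      </lst>
    </requestHandler>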
: where and when is the next Eurocon scheduled?
: I read something about Denmark and autumn 2012 (I don't know where *g*).
I do not know where, but sometime in the fall is probably the correct time
frame. I believe the details will be announced at Lucene Revolution...
http://lucenerevol
New to Solr (3.5.0). I have one simple question.
How do I extend the configuration to perform suggestions on more than
one field?
I'm using the following solrconfig.xml (taken from the online Wiki
documentation).
suggest
org.apache.solr.spelling.suggest.Suggester
org.apache.solr.spelling
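One common approach, sketched below, is to copy the fields you want suggestions
from into a single field and point the suggester at that field; the field and
type names here are only examples:

    In schema.xml:
      <field name="suggest_text" type="textSpell" indexed="true" stored="false" multiValued="true"/>
      <copyField source="title" dest="suggest_text"/>
      <copyField source="author" dest="suggest_text"/>

    In solrconfig.xml, inside the suggest spellchecker block:
      <str name="field">suggest_text</str>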
I answered my own question after some digging in the code. The caches are
structured as maps. In the cases I looked at, auto-warming ignores the values
in the map. Instead, it uses the map's keys (usually a query) to perform and
cache a search result in the new searcher.
On Mar 7, 2012, at
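For reference, the number of cache keys replayed into the new searcher is
controlled per cache by autowarmCount in solrconfig.xml; a minimal sketch:

    <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="128"/>
    <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="128"/>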
As I understand it, auto-warming is the process of populating a new searcher's
caches from cached objects in the old searcher's caches. Let's say that a new
searcher is created to service a new index that came from replication. Because
the new searcher is operating on a new index, how is it po
: I don't have all my facet values used by my documents, but I would like to
: index these facet values even if they returned 0 documents.
field faceting builds the list of constraints based on the indexed values
found in the field -- if you don't index it, it doesn't know about it.
if you w
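One workaround, assuming the full list of values is known from the database, is
to request each known value explicitly with facet.query, which returns a count
even when it is 0; the field name and values here are hypothetical:

    facet=true
    facet.query=color:red
    facet.query=color:green
    facet.query=color:blue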
As an insanity check, you might want to take the query that it is executing for
delta updates and run it manually through a SQL tool, or do an explain plan or
something. It almost sounds like there could be a silly error in the query
you're using and it's doing a Cartesian join or something like
Hi,
thank you for the help. I've tried:
dataimport?command=full-import&clean=false&optimize=false
and this takes only 19 minutes; the first run with optimize=true takes
about 3 hours... the Tomcat logs don't show any errors,
and 19 minutes is too long too, isn't it?
Thanks,
Ramo
-Ur
Hey guys,
Great stuff! Thanks a lot for replying.
To be honest, from the beginning I have already felt pretty inclined to
work with the trunk.
Of course, I also have to convince people (at work) that doing so is safe,
and test, and test again..
Thank you very much for your replies, they just mad
unsubscribe
> Unless you have warming happening, there should
> only be a single searcher open at any given time.
> So it seems to me that maxWarmingSearchers
> should give you what you need.
What I'm seeing is that if a query takes a very long time to run, and runs
across the duration of multiple commits (I
Thanks :)
We often disagree on many low-level details but thanks for the
confirmation: I felt this was long overdue to say. We take
releases very seriously, but that doesn't mean you should immediately
discard the possibility of using a snapshot release.
In fact you can even manage your own le
Solr is not relational, so you will probably need to take a fresh look at your
data.
Here is one method.
1. Sketch your search results page.
2. Each result is a document in Solr.
3. Each displayed item is a stored field in Solr.
4. Each searched item is an indexed field in Solr.
It may help to
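A minimal schema.xml sketch of that mapping (field names and types are only
examples); stored fields carry what you display, indexed fields carry what you
search:

    <field name="id" type="string" indexed="true" stored="true" required="true"/>
    <!-- displayed on the results page -->
    <field name="title" type="string" indexed="false" stored="true"/>
    <field name="thumbnail_url" type="string" indexed="false" stored="true"/>
    <!-- searched -->
    <field name="title_search" type="text_general" indexed="true" stored="false"/>
    <copyField source="title" dest="title_search"/>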
Hi,
We have a large index and would like to shard by a particular field value, in
our case surname. This way we can scale out to multiple machines, yet as most
queries filter on surname we can use some application logic to hit just the one
core to get the results we need.
Furthermore as we ant
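A sketch of what that application logic might send, with hypothetical per-range
cores; a routed query goes straight to the core that owns the surname, and a
query that must span everything lists all cores in the shards param:

    http://host1:8983/solr/names_a_m/select?q=surname:smith
    http://host1:8983/solr/names_a_m/select?q=*:*&shards=host1:8983/solr/names_a_m,host2:8983/solr/names_n_z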
I have been here on Lucene as a user since the project started, even before
Solr came to life, many many years. And I was always using the trunk
version for pretty big customers, and *never* experienced any serious
problems. The worst thing that can happen is to notice a bug somewhere,
and if you have some rea
As a rule of thumb, many will say not to go to production with a pre-release baseline. So until
Solr4 goes "final" and "stable", it's best not to assume too much about it.
Second suggestion is to properly stage new technologies in your product such
that they go through their own validation. And
On 2/28/2012 8:16 AM, Shawn Heisey wrote:
Due to the End of Life announcement for Java6, I am going to need to
upgrade to Java 7 in the very near future. I'm running Solr 3.5.0
modified with a couple of JIRA patches.
https://blogs.oracle.com/henrik/entry/updated_java_6_eol_date
I saw the ann
Hi --
Hi, totally appreciate the guidance you've been giving me. And yes, my use
case is having a sharded index where pieces can go in and out of service.
How do I file a JIRA ticket? Happy to do it.
Thanks,
Ranjan
--
Ranjan Bagchi
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Wed, Mar 7, 2012 at 11:47 AM, Dirceu Vieira wrote:
> Hi All,
>
> Has anybody started using Solr 4.0 in production environments? Is it stable
> enough?
> I'm planning to create a proof of concept using solr 4.0, we have some
> projects that will gain a lot with features such as near real time se
I finally got it to work by upgrading the transformer to use Saxon. I
will give you the details soon; it can be useful and is a nice feature for
producing a nice RSS feed.
Wenca,
I have an app with requirements similar to yours. We have maybe 40 caches that
need to be built, then when they're done (and if they all succeed), the main
indexing runs. For this I wrote some quick-n-squirrelly code that executes a
configurable # of cache-building handlers at a time.
On Wed, Jan 25, 2012 at 12:55 PM, Nalini Kartha wrote:
>
> Is there any reason why Solr doesn't support using multiple spellcheckers
> for a query? Is it because of performance overhead?
>
That's not the case really, see https://issues.apache.org/jira/browse/SOLR-2926
I think the issue is that th
Nalini,
You're at least the second person to mention a need to override "mm" in
conjunction with "maxCollationTries". I opened
https://issues.apache.org/jira/browse/SOLR-3211 to see about getting this
addressed. (not sure if it will be done soon though). The only workaround I
can think of i
Hi,
I'm researching options for handling a better geospatial solution. I'm
currently using Solr 3.5 for a read-only "database", and the
point/radius searches work great. But I'd like to start doing point in
polygon searches as well. I've skimmed through some of the geospatial
jira issues, and read
>
> > Would this also be affected if one of the fields that
> > contains that term is
> > defined as solr.StrField, whereas
> > most of the other fields are defined
> > as solr.TextField?
>
> It could be. String fields are not analyzed. For example, a single whitespace
> can prevent a match. Cards a
--- On Wed, 3/7/12, Donald Organ wrote:
> From: Donald Organ
> Subject: Re: Need some quick help diagnosing query
> To: solr-user@lucene.apache.org
> Date: Wednesday, March 7, 2012, 4:59 PM
> >
> > Simply, does your collection contain a doc having all
> > these three terms?
> > Try different mm v
>
> Simply, does your collection contain a doc having all three of these terms?
> Try different mm values.
>
>
> http://wiki.apache.org/solr/DisMaxQParserPlugin#mm_.28Minimum_.27Should.27_Match.29
>
Would this also be affected if one of the fields that contains that term is
defined as solr.StrField
--- On Wed, 3/7/12, Donald Organ wrote:
> From: Donald Organ
> Subject: Need some quick help diagnosing query
> To: "solr-user"
> Date: Wednesday, March 7, 2012, 4:43 PM
> Right now I am doing the following:
>
> qf=name^1.75 codeTXT^1.75 cat_search^1.5
> description^0.8 brand^5.0
> cat_s
Right now I am doing the following:
qf=name^1.75 codeTXT^1.75 cat_search^1.5 description^0.8 brand^5.0
cat_search^0.8
fl=code,score
defType=dismax
q=whitney brothers carts
If I change it to the following then I get results:
qf=name^1.75 codeTXT^1.75 cat_search^1.5 descript
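One way to see why the first form matches nothing, following the earlier
suggestion to try different mm values, is to add debugQuery and loosen mm; a
sketch against the same request:

    q=whitney brothers carts
    defType=dismax
    qf=name^1.75 codeTXT^1.75 cat_search^1.5 description^0.8 brand^5.0
    mm=1
    debugQuery=on

The parsedquery and explain sections of the debug output show which qf fields
each term actually matched.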
--- On Wed, 3/7/12, Gian Marco Tagliani wrote:
> From: Gian Marco Tagliani
> Subject: docBoost with "fq" search
> To: solr-user@lucene.apache.org
> Date: Wednesday, March 7, 2012, 3:11 PM
> Hi All,
> I'm seeing strange behavior with my Solr (version 3.4).
>
> For searching I'm using the "q" a
Well, I'd upgrade to a newer Solr ...
But seriously, first there is an expected temporary spike during
replication; you can expect the size on disk to occasionally grow
up to double during replication, that's just how replication is
designed to work...
But if the files are *staying*, then that is, in
Hi All,
I'm seeing strange behavior with my Solr (version 3.4).
For searching I'm using the "q" and the "fq" params.
At index-time I'm adding a docBoost to each document.
When I perform a search with both "q" and "fq" params everything works.
For the search with "q=*:*" and something in the "fq"
MaxPermSize probably isn't what you want, try -Xmx1G or similar.
If that doesn't work, you need to post a lot more information about
your setup, it might help to review:
http://wiki.apache.org/solr/UsingMailingLists
Best
Erick
2012/3/7 C.Yunqin <345804...@qq.com>:
> Daniel,
> thanks very much:)
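For completeness, a sketch of setting the heap with the stock Jetty example
start.jar (the size is only an example); MaxPermSize only sizes the permanent
generation, not the main heap:

    java -Xmx1g -jar start.jar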
Well, I'm ManifoldCF ignorant, so I'll have to defer on this one
Best
Erick
On Tue, Mar 6, 2012 at 12:24 PM, Anupam Bhattacharya
wrote:
> Thanks Erick, for the prompt response,
>
> Both the suggestions will be useful for a one time indexing activity. Since
> DIH will be a one-time process of i
Unless you have warming happening, there should
only be a single searcher open at any given time.
So it seems to me that maxWarmingSearchers
should give you what you need.
And you can pretty easily ensure this by making your
poll interval (assuming master/slave) longer
than your warmup time.
Best
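The relevant solrconfig.xml knobs, as a minimal sketch (the master URL and the
interval are only examples):

    <maxWarmingSearchers>2</maxWarmingSearchers>

    <requestHandler name="/replication" class="solr.ReplicationHandler">
      <lst name="slave">
        <str name="masterUrl">http://master:8983/solr/replication</str>
        <!-- poll less often than warmup takes -->
        <str name="pollInterval">00:05:00</str>
      </lst>
    </requestHandler>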
On 7 March 2012 17:59, aditya jatnika martin wrote:
> Dear Developer,
>
> I have a problem with Solr: every time I add a document, the result message
> is always "'500' Status: Internal server error",
[...]
Have you looked in the Solr logs for further details on the
exception?
Regards,
Gora
Hi,
I would like to know how to index any document other than XML in Solr.
Any comments would be appreciated !!!
Thanks,
Rohan
I would say do both. If you have the capacity, create a core for each and
one that combines them, and do some tests.
There are pros and cons to both approaches. If you ever need joins in RDBMS
terms then you probably want one index.
If not then one index might still be a lot easier.
The only real reaso
> I've indexed my 2 million documents with DIH on Solr. It
> uses a simple
> select without joins that fetches the distinct titles,
> plus
> ids, descriptions, and URLs. The first time I indexed this,
> it took about 1
> hour. Every 1-2 days I get new entries which I want to
> inde
Hi everyone,
My question is a little weird but I need to have all my facet values in the
Solr index:
I have a database with all possible values of my facets for my Solr
documents.
I don't have all my facet values used by my documents, but I would like to
index these facet values even if they ret
Hi,
I'm using one master and one slave in Solr. When I replicate from master
to slave, the data is replicated properly and the changes show up
correctly in the Solr UI. But the index size on the slave is
double that of the master, for example:
If,
Master I
Hi,
I've indexed my 2 million documents with DIH on Solr. It uses a simple
select without joins that fetches the distinct titles, plus ids,
descriptions, and URLs. The first time I indexed this, it took about 1
hour. Every 1-2 days I get new entries which I want to index. I'm d
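For the incremental part, DIH's delta import is the usual route; a sketch of
the data-config.xml entity, assuming a hypothetical last_modified column and
an id primary key:

    <entity name="item" pk="id"
            query="SELECT id, title, description, url FROM item"
            deltaQuery="SELECT id FROM item WHERE last_modified &gt; '${dih.last_index_time}'"
            deltaImportQuery="SELECT id, title, description, url FROM item WHERE id = '${dih.delta.id}'"/>

It is then triggered with command=delta-import instead of full-import.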
Hello,
It seems you have some app which triggers these DIH requests. Can't you add
a precondition in that app? Before running the second DIH, check the status
of the first one to see whether it is RUNNING or IDLE.
Regards
2012/3/7 Wenca
> Hi,
>
> I have 2 DataImportHandlers configured. The first one prepares da
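A sketch of that precondition, assuming the two handlers are registered at
/dataimport and /dataimport2 (the names are only examples):

    GET http://localhost:8983/solr/dataimport?command=status
        -> only proceed if the response contains <str name="status">idle</str>
    GET http://localhost:8983/solr/dataimport2?command=full-import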
Hi,
I have 2 DataImportHandlers configured. The first one prepares data into a
Berkeley DB backed cache (SOLR-2382, SOLR-2613) and the second one then
indexes documents, reading subentity data from the cache.
I need some way to prevent the second handler from running if the first one
is currently running t