Hi Eva,
You just need a script that:
* calls the master with http://<master_host>:<port>/solr/replication?command=backup
* copies the backup off of the master and stores it somewhere
* removes that backup from the master if you don't have enough disk for it there
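Something like this minimal shell sketch (host, port, and paths are placeholders; the backup command writes a snapshot.<timestamp> directory into the master's data dir):

    #!/bin/sh
    MASTER=master-host:8983
    # 1. trigger a backup on the master
    curl -s "http://$MASTER/solr/replication?command=backup"
    sleep 60   # give the snapshot time to finish
    # 2. copy the newest snapshot off the master
    scp -r "master-host:/var/solr/data/snapshot.*" /backups/
    # 3. remove it from the master if disk space is tight there
    ssh master-host 'rm -rf /var/solr/data/snapshot.*'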
Otis
--
SOLR Performance Monitoring - http://sematext.com/spm/index.html
Hi,
Does StandardTokenizerFactory remove your numbers?
Go to the Analysis page in Solr Admin, enter your query with numbers, and
see what happens.
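You can also check from the command line; the default example solrconfig registers solr.FieldAnalysisRequestHandler at /analysis/field (the field type name and input below are just examples):

    curl "http://localhost:8983/solr/analysis/field?analysis.fieldtype=text&analysis.fieldvalue=model+9000+rev+2"

If the numbers survive tokenization they will show up as terms in the response.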
Otis
--
SOLR Performance Monitoring - http://sematext.com/spm/index.html
Search Analytics - http://sematext.com/search-analytics/index.html
On Thu
Right, he has talked about this in various ways. But the key is to take the
user-item matrix in full and generate a new data model for recommendation.
These approaches shove that data model into the search index; it is a batch
process.
LucidWorks does this for search clicks.
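As an illustration of the pattern (field names here are invented): the batch job writes precomputed recommendations into each item document, and serving them becomes a plain lookup:

    # index an item together with its precomputed recommendations
    curl "http://localhost:8983/solr/update?commit=true" -H "Content-Type: text/xml" \
      --data-binary '<add><doc>
        <field name="id">item42</field>
        <field name="recommended_ids">item7</field>
        <field name="recommended_ids">item13</field>
      </doc></add>'

    # recommendations for item42 are then a simple query
    curl "http://localhost:8983/solr/select?q=id:item42&fl=recommended_ids"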
- Original Message
You don't need the transformers.
I think the paths should match what is in the XML file:
forEach="/add"
And the paths need to use the [@name='...'] syntax for name="fname" and
name="number". I think this is it, but you should make sure:
xpath="/add/doc/field[@name='fname']"
xpath="/add/doc/field[@name='number']"
Hi Pravin,
Those unigrams... how are you using them? What are the queries like?
I wonder if it's the (probably) massive number of terms in your index
that's the problem.
When queries are in flight and your CPU is 100% busy, do a few thread dumps
(kill -3 PID) and look where the threads are. That should show you where
the time is going.
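A quick way to grab a few dumps (jstack ships with the JDK; kill -3 writes the dump to the JVM's stdout log instead):

    for i in 1 2 3; do
      jstack $SOLR_PID > threads.$i.txt   # or: kill -3 $SOLR_PID
      sleep 10
    done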
On the other hand, people have successfully built recommendation engines on
top of Lucene or Solr before, and I think Ted Dunning just mentioned this
over on the Mahout ML a few weeks ago. Have a look at
http://search-lucene.com/m/dbxtb1ykRkM though I think I recall a separate
recent email where this came up as well.
sagarzond - you are trying to embed a recommendation system into search.
Recommendations are inherently a matrix problem, whereas Solr and other search
engines are one-dimensional databases. What you have is a sparse user-product
matrix. This book has a good explanation of recommender systems:
Mahout in Action
Hi Felipe,
omitTermFreqAndPositions may help, but you may also want to implement
a custom similarity class that neutralizes idf. See
http://search-lucene.com/?q=custom+similarity
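A minimal sketch of such a class for Lucene/Solr 3.x (the class name is made up; wire it in with <similarity class="com.example.NoIdfSimilarity"/> in schema.xml):

    package com.example;

    import org.apache.lucene.search.DefaultSimilarity;

    public class NoIdfSimilarity extends DefaultSimilarity {
      // Return a constant so rare terms are not scored higher than common ones.
      @Override
      public float idf(int docFreq, int numDocs) {
        return 1.0f;
      }
    }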
Otis
--
SOLR Performance Monitoring - http://sematext.com/spm/index.html
Search Analytics - http://sematext.com/search-analytics/index.html
Hi,
I added authentication in Jetty and it works fine. However, it's strange
that a URL pattern like "/admin/cores*" does not work, while "/admin/*"
works correctly.
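(For what it's worth, the servlet spec only allows exact paths, prefix patterns ending in "/*", and extension patterns like "*.jsp" in url-pattern, which would explain why "/admin/cores*" never matches. A web.xml sketch of the working prefix form, with placeholder role and realm names:)

    <security-constraint>
      <web-resource-collection>
        <web-resource-name>Solr admin</web-resource-name>
        <url-pattern>/admin/*</url-pattern>
      </web-resource-collection>
      <auth-constraint>
        <role-name>solr-admin</role-name>
      </auth-constraint>
    </security-constraint>
    <login-config>
      <auth-method>BASIC</auth-method>
      <realm-name>Solr Realm</realm-name>
    </login-config>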
Regards.
On 17 November 2012 01:10, Marcin Rzewucki wrote:
> Hi,
>
> Yes, I'm trying to add authentication to Jetty (for solr4), acco
Not sure if there is an automated way, but you could do it by computing a
hash of various/all fields at index time and later using that to compare
before updating. And you can hide this in an UpdateRequestProcessor. Could
be a generally useful feature, so consider contributing.
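A rough sketch of such a processor (the class name, field names, and hash-lookup helper are all assumptions, not an existing Solr API):

    import java.io.IOException;
    import org.apache.commons.codec.digest.DigestUtils;
    import org.apache.solr.common.SolrInputDocument;
    import org.apache.solr.update.AddUpdateCommand;
    import org.apache.solr.update.processor.UpdateRequestProcessor;

    public class SkipUnchangedProcessor extends UpdateRequestProcessor {
      public SkipUnchangedProcessor(UpdateRequestProcessor next) {
        super(next);
      }

      @Override
      public void processAdd(AddUpdateCommand cmd) throws IOException {
        SolrInputDocument doc = cmd.getSolrInputDocument();
        // Hash the fields you care about (just one here for brevity).
        String hash = DigestUtils.md5Hex(String.valueOf(doc.getFieldValue("content")));
        doc.setField("content_hash", hash);
        // Only pass the document down the chain if its hash changed.
        if (!hash.equals(lookupStoredHash(doc.getFieldValue("id")))) {
          super.processAdd(cmd);
        }
      }

      // Hypothetical helper: fetch the stored hash for this id from the index.
      private String lookupStoredHash(Object id) {
        return null; // left as a sketch
      }
    }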
Otis
--
SOLR Performance Monitoring - http://sematext.com/spm/index.html
I would create a hash of the document content and store that in SOLR along with
any document info you wish to store. When a document is presented for indexing,
hash it and compare it to the hash of the stored document; index if they are
different and skip if they are not.
François
On Nov 24,
Hi all,
I'm trying to configure our Solr 3.4 deployment to have multiple
spellcheckers based on 2 different fields, one in English and one in
Spanish. In solrconfig.xml, the SpellCheckComponent requires a
queryAnalyzerFieldType, and each of these fields is based on a different
field type for language-specific analysis.
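For reference, a solrconfig.xml sketch of two dictionaries in one SpellCheckComponent (field and type names are placeholders; note that queryAnalyzerFieldType takes a single type, which is exactly the rub with two languages):

    <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
      <str name="queryAnalyzerFieldType">text_en</str>
      <lst name="spellchecker">
        <str name="name">english</str>
        <str name="field">spell_en</str>
        <str name="spellcheckIndexDir">./spellchecker_en</str>
      </lst>
      <lst name="spellchecker">
        <str name="name">spanish</str>
        <str name="field">spell_es</str>
        <str name="spellcheckIndexDir">./spellchecker_es</str>
      </lst>
    </searchComponent>

The dictionary is then picked per request with spellcheck.dictionary=english or spellcheck.dictionary=spanish.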
Hello.
You can query by *:* with start=0, rows=1, fl=contentid, sorting by
contentid. The biggest/smallest value for that field comes from the first
(and only) document returned.
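For example (the core URL is a placeholder):

    # maximum contentid
    curl "http://localhost:8983/solr/select?q=*:*&rows=1&fl=contentid&sort=contentid+desc"
    # minimum contentid
    curl "http://localhost:8983/solr/select?q=*:*&rows=1&fl=contentid&sort=contentid+asc"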
Regards,
- Luis Cappa.
On 24/11/2012 14:45, "Jack Krupansky" wrote:
> The "stats" component will give you the
The "stats" component will give you the minimum and maximum values for
fields, among other statistics (e.g., number of documents where a field is
missing).
Just add &stats=true&stats.field=contentid to your query. The counts will be
in the "stats" section of the query response.
-- Jack Krupansky
I have indexed an XML file in Solr, which looks like:
ABC
282307
121422
MNO
272307
188422
This file has around 10 documents. What is the way to get the maximum value
of the field "contentid" without parsing the whole file?