Hi,
We have numerous collections, each with numerous shards, spread across
numerous machines. We just discovered that all documents have a field
with a wrong value, and besides that we would like to add a new field to
all documents.
* The field with the wrong value is a long, DocValued, Indexed and
Same here, in a multi-core Master/Slave setup.
11:17:30.476 [snapPuller-8-thread-1] INFO o.a.s.h.SnapPuller - Master's
generation: 87
11:17:30.476 [snapPuller-8-thread-1] INFO o.a.s.h.SnapPuller - Slave's
generation: 3
11:17:30.476 [snapPuller-8-thread-1] INFO o.a.s.h.SnapPuller - Starting
replication process
Solr is trying to load "com/uwyn/jhighlight/renderer/XhtmlRendererFactory",
but that is not a class that ships with or is used by Solr. I think you have
some custom plugin (a highlighter, perhaps?) that uses that class, and the
classpath is not set up correctly.
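If that's the case, a <lib> directive in solrconfig.xml pointing at the
plugin's jar files usually fixes it. A sketch (the directory path is a
placeholder for wherever your plugin jars actually live):

<lib dir="/path/to/plugin/lib" regex=".*\.jar" />

Solr loads every jar matching the regex from that directory into the core's
classloader at startup.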
On Wed, Jul 23, 2014 at 2:20 AM, Ameya
Do you mean something different from docId:[100 TO 200] ?
Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
On Wed, Jul 23, 2014 at 11:49 AM, M
I guess you can use these two params in your query:
rows=100&start=100
which will give you 100 documents after the 100th document.
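For example (hedged; the core name and query are placeholders), a full
request returning matches 101-200:

http://localhost:8983/solr/collection1/select?q=*:*&start=100&rows=100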
On Wed, Jul 23, 2014 at 10:19 AM, Mukundaraman Valakumaresan <
muk...@8kmiles.com> wrote:
> Hi,
>
> Is it possible to execute queries using doc Id as a query parameter?
Hi,
Is it possible to execute queries using doc Id as a query parameter?
For example, query docs whose doc Id is between 100 and 200.
Thanks & Regards
Mukund
On 7/22/2014 5:00 PM, Robin Woods wrote:
> I think I found the issue!
>
> I actually forgot to mention a very important step that I did, which is
> CORE SWAP;
> otherwise, it's not replicating the full index.
>
> When we do CORE SWAP, doesn't it do the same checks of copying only deltas?
Yes, it
I think I found the issue!
I actually forgot to mention a very important step that I did, which is
CORE SWAP;
otherwise, it's not replicating the full index.
When we do CORE SWAP, doesn't it do the same checks of copying only deltas?
Or possibly use the synonym filter at query or index time for common
misspellings or misunderstandings about the spelling. That would be
automatic, without the user needing to add the explicit fuzzy query
operator.
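For instance (a sketch; the file name and analyzer placement are just the
common defaults), a one-way mapping in synonyms.txt:

lacuma => lucuma

applied by a filter in the field type's analyzer in schema.xml:

<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true"/>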
-- Jack Krupansky
-Original Message-
From: Anshum Gupta
Sent: Tuesda
Hi Warren,
Check out the section about fuzzy search here:
https://cwiki.apache.org/confluence/display/solr/The+Standard+Query+Parser
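For example (the field name is a placeholder), a fuzzy query with a maximum
edit distance of 2:

q=name_field:Lacuma~2

"Lacuma" is one character substitution away from "Lucuma", so it falls well
within that distance.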
On Tue, Jul 22, 2014 at 1:29 PM, Warren Bell
wrote:
> What field type or filters do I use to get something like the word
> “Lacuma” to return results with “Lucuma” in it?
Hi,
I am running into the error below while indexing a file in Solr.
Can you please help me fix this?
ERROR - 2014-07-22 16:40:32.126; org.apache.solr.common.SolrException;
null:java.lang.RuntimeException: java.lang.NoClassDefFoundError:
com/uwyn/jhighlight/renderer/XhtmlRendererFactory
at
org.apache
What field type or filters do I use to get something like the word “Lacuma” to
return results with “Lucuma” in it? The word “Lucuma” has been indexed in a
field with field type text_en_splitting that came with the original Solr
examples.
Thanks,
Warren
Hello!
Yes, just edit your Jetty configuration file and add -Xmx and -Xms
parameters. For example, the file you may be looking for is
/etc/default/jetty.
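A sketch (the variable name varies with how Jetty was packaged, and the
sizes are examples only, not recommendations):

JAVA_OPTIONS="-Xms512m -Xmx2g $JAVA_OPTIONS"

Restart Jetty afterwards so the new heap limits take effect.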
--
Regards,
Rafał Kuć
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
> So can I overcome this exception by increasing the heap size somewhere?
So can I overcome this exception by increasing the heap size somewhere?
Thanks,
Ameya
On Tue, Jul 22, 2014 at 2:00 PM, Shawn Heisey wrote:
> On 7/22/2014 11:37 AM, Ameya Aware wrote:
> > I am running into a Java heap space issue. Please see the log below.
>
> All we have here is an out of memory exception.
On 7/22/2014 11:37 AM, Ameya Aware wrote:
> I am running into a Java heap space issue. Please see the log below.
All we have here is an out of memory exception. It is impossible to
know *why* you are out of memory from the exception alone. With enough
investigation, we could determine the area of code where
Hi,
I am running into a Java heap space issue. Please see the log below.
ERROR - 2014-07-22 11:38:59.370; org.apache.solr.common.SolrException;
null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
at
org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:790
Hi Gopal,
I just started a repository on github
(https://github.com/tballison/tallison-lucene-addons) to host a standalone
version of LUCENE-5205 (with other patches to come). SOLR-5410 is next (Solr
wrapper of the SpanQueryParser), and then I'll try to add LUCENE-5317
(concordance) and LUCEN
public static DocSet mapChildDocsToParentOnly(DocSet childDocSet) {
    DocSet mappedParentDocSet = new BitDocSet();
    DocIterator childIterator = childDocSet.iterator();
    while (childIterator.hasNext()) {
        int childDoc = childIterator.nextDoc();
        // reconstructed from here (the message is truncated); uses the mapping built below
        int parentDoc = childToParentDocMapping[childDoc];
        mappedParentDocSet.add(parentDoc);
    }
    return mappedParentDocSet;
}
Query parentFilterQuery = new TermQuery(new Term("document_type", "parent"));
int[] childToParentDocMapping = new int[searcher.maxDoc()];
DocSet allParentDocSet = searcher.getDocSet(parentFilterQuery);
DocIterator iter = allParentDocSet.iterator();
int prevParent = -1; // children are indexed just before their parent doc
while (iter.hasNext()) {
    int parent = iter.nextDoc();
    for (int doc = prevParent + 1; doc <= parent; doc++)
        childToParentDocMapping[doc] = parent;
    prevParent = parent;
}
I am copy-pasting the file extensions from the text document into the
source code, not from the source code. My typing mistake.
So by using the SimplePostTool I can define the application type and handling
of specific documents (such as Word, PowerPoint, XML, PNG, etc.). I have
defined these and they are handled based on their type. In my file system,
however, I have a large number of files that can be read as plain text.
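A hedged example invocation of the Solr 4.x post.jar with such options (the
extension list and path are illustrative):

java -Dauto=yes -Drecursive=yes -Dfiletypes=txt,log,cfg -jar post.jar /path/to/files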
On 7/22/2014 6:14 AM, Michael Ryan wrote:
> I mean re-adding all of the documents in my index. The DocValues wiki page
> says that this is necessary, but I wanted to know if there was a way around
> it.
If your index meets the strict criteria for Atomic Updates, you could
"update" all the docume
Exactly. Thanks a lot Jack. +1 for "Your best bet is to get that RDBMS data
moved to Cassandra or DSE ASAP."
On Tue, Jul 22, 2014 at 5:15 PM, Jack Krupansky
wrote:
> I don't think the Solr Data Import Handler has a Cassandra plugin (entity
> processor) yet, so the most straightforward approach
I mean re-adding all of the documents in my index. The DocValues wiki page says
that this is necessary, but I wanted to know if there was a way around it.
-Michael
-Original Message-
From: Mikhail Khludnev [mailto:mkhlud...@griddynamics.com]
Sent: Tuesday, July 22, 2014 2:14 AM
To: solr
I don't think the Solr Data Import Handler has a Cassandra plugin (entity
processor) yet, so the most straightforward approach is to write a Java app
that reads from Cassandra, then reads the corresponding RDBMS data, combines
the data, and then uses SolrJ to add documents to Solr.
Your best bet is to get that RDBMS data moved to Cassandra or DSE ASAP.
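A minimal sketch of that shape of app, assuming hypothetical keyspace, table,
and column names, the DataStax Java driver, plain JDBC, and SolrJ 4.x; error
handling and batching are omitted:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CassandraToSolr {
    public static void main(String[] args) throws Exception {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session cassandra = cluster.connect("my_keyspace");
        Connection rdbms = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "pass");
        HttpSolrServer solr =
                new HttpSolrServer("http://localhost:8983/solr/collection1");
        PreparedStatement lookup =
                rdbms.prepareStatement("SELECT title FROM items WHERE id = ?");
        // walk the Cassandra table, enrich each row from the RDBMS, index to Solr
        for (Row row : cassandra.execute("SELECT id, body FROM my_table")) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", row.getString("id"));
            doc.addField("body", row.getString("body"));
            lookup.setString(1, row.getString("id"));
            ResultSet rs = lookup.executeQuery();
            if (rs.next()) {
                doc.addField("title", rs.getString("title"));
            }
            rs.close();
            solr.add(doc);
        }
        solr.commit();
        solr.shutdown();
        rdbms.close();
        cluster.close();
    }
}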
Deleted documents remain in the Lucene index until an "optimize" or segment
merge operation removes them. As a result they are still counted in document
frequency. An update is a combination of a delete and an add of a fresh
document.
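If the inflated document frequency is causing scoring problems, the deletes
can be expunged explicitly (hedged; host and core name are placeholders):

curl 'http://localhost:8983/solr/collection1/update?commit=true&expungeDeletes=true'

or, more heavy-handed, a full optimize via update?optimize=true.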
-- Jack Krupansky
-Original Message-
From: Johann
I faced the same issue some time back; the root cause is docs getting deleted
and created again without the index getting optimized. Here is the discussion:
http://www.signaldump.org/solr/qpod/22731/docfreq-coming-to-be-more-than-1-for-unique-id-field
On Tue, Jul 22, 2014 at 4:56 PM, Johannes Siegert <
johannes.
Hi.
My Solr index (version 4.7.2) has an id field:
...
id
The index is updated once per hour.
I use the following query to retrieve some documents:
"q=id:2^2 id:1^1"
I would expect document(2) to always come before document(1). But after many
index updates document(1)
Thanks, Umesh
> You can get the parent bitset by running the parent doc type query on
> the Solr index searcher.
> Then the child bitset by running the child doc type query. Then use these
> together to create an int[] where int[i] = parent of i.
>
Can you kindly add an example? I am not quite sure h
Hello,
I am using Solr 4.2.1 and have the following use case:
I need to find results inside a bbox OR, if there are none, the first result
outside the bbox within a 1000 km distance. I was wondering what the best
way to proceed is.
I was considering doing a geofilt search from the center of my bounding box
an
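A hedged sketch of the two-pass approach that use case suggests (field name,
point, and box are placeholders): filter on the box first, e.g. with a range
query on a LatLonType field,

q=*:*&fq=location:[44,-94 TO 46,-93]

and only if numFound is 0, re-run with a 1000 km geofilt around the box
center, sorted by distance, keeping the closest hit:

q=*:*&fq={!geofilt}&sfield=location&pt=45,-93.5&d=1000&sort=geodist() asc&rows=1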