Hi @Atita,
We are using the latest version (Solr 7.1.0).
As the metrics are exposed as MBeans via JMX, you could use the
Prometheus JMX exporter to scrape those metric values and expose them.
You could use it to monitor caches, response times, and the number of errors in
all the handlers you have.
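As a hedged sketch (the port and pattern below are assumptions, not values from this thread), a minimal jmx_exporter config might look like:

```yaml
# Assumed JMX port; set hostPort to whatever
# -Dcom.sun.management.jmxremote.port your Solr instance actually uses.
hostPort: localhost:18983
rules:
  # Expose every MBean attribute; narrow the pattern to specific
  # cache/handler MBeans once you know which metrics you need.
  - pattern: ".*"
```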
Hi,
The whole index has 100M but when I add the criteria, it will filter the
data to maybe 10k as a max number of rows.
The facet isn't working when the total number of records in the index is
100M but it was working at 5M.
I have social media & RSS data in the index and I am trying to get the wo
Shawn,
There won't be a Java 10, we'll get Java 18.3 instead. After 9 it is a guess
when CMS and friends are gone.
Regards,
Markus
-Original message-
> From:Shawn Heisey
> Sent: Tuesday 7th November 2017 0:24
> To: solr-user@lucene.apache.org
> Subject: Re: Java 9
>
> On 11/6/2017
Oh, blimey, have Oracle gone with Ubuntu-style numbering now? :)
On 7 November 2017 at 08:27, Markus Jelsma
wrote:
> Shawn,
>
> There won't be a Java 10, we'll get Java 18.3 instead. After 9 it is a
> guess when CMS and friends are gone.
>
> Regards,
> Markus
>
>
>
> -Original message-
>
Dr Krell
Item 11): It is best to get the solrconfig.xml provided with the new version of
Solr, and change it to suit your needs. Do not try to work from the old
version's solrconfig.xml.
I did not have time to read the other items.
Look in solr.log, and compare the successful query with the un
Hi everybody,
I am trying the Cassandra-Solr integration. I configured the Solr files
dataconfig.xml, solrconfig.xml and managed-schema, but Solr does not connect to
Cassandra, and I get a snakeyaml error, which is:
Exception in thread "Thread-18" java.lang.NoClassDefFoundError:
org/yaml/snakeyaml/Yaml
Yes I am referring to the dataimport tab in the admin UI and issue
SOLR-10035. My previous setup w/ 6.3 did not show this error. I then
upgraded to 7.1.0 and the error shows. I upgraded (downgraded) to versions
6.5.0 and 6.6.2 and I do not see the error. Version 7.0.1 also shows the
error for me.
Zheng,
> Usually, the number of records returned is more than what is shown in the
> ngroup. For example, I may get an ngroup of 22, but there are 25 records
> being returned.
Do the 25 records being returned have duplicates? Grouping is subject
to co-location of data of the same group values in sa
On 11/7/2017 6:49 AM, richardg wrote:
> vs on the master that shows the error.
>
> 2017-11-07 13:29:14.131 INFO (qtp1839206329-36) [
> x:solr_aggregate_production] o.a.s.c.S.Request [solr_aggregate_production]
> webapp=/solr path=/admin/mbeans
> params={cat=QUERYHANDLER&wt=json&_=1510061366718}
Caused by: java.lang.ClassNotFoundException: org.yaml.snakeyaml.Yaml
You haven't included anything that tells Solr where that file is. You've
included
but that specifically loads the jar file. Try a regex pattern assuming
snakeyaml.Yaml is co-located with cassandra-jdbc-driver-0.6.4.jar
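A sketch of that suggestion for solrconfig.xml (the directory path here is hypothetical; point it at wherever your driver jars actually live):

```xml
<!-- load every jar in the driver directory, so snakeyaml and other
     transitive dependencies are picked up alongside the JDBC driver -->
<lib dir="/path/to/cassandra/driver/libs" regex=".*\.jar" />
```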
Best
As an update, I have confirmed that it doesn't seem to have anything to do
with child documents, or standard deletes, just deleteByQuery. If I do a
deleteByQuery on any collection while also adding/updating in separate
threads I am experiencing this blocking behavior on the non-leader replica.
Has
bq: 10k as a max number of rows.
This doesn't matter. In order to facet on the word count, Solr has to
be prepared to facet on all possible docs. For all Solr knows, a
_single_ document may contain every word so the size of the structure
that contains the counters has to be prepared for N buckets,
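The point about bucket sizing can be sketched with a toy counter (this is an illustration of the idea, not Solr's actual faceting code):

```python
# Toy sketch: faceting allocates a counter per unique term in the
# whole index, then increments only the counters for matching docs.
def facet_counts(terms_per_doc, matching_docs):
    # the counter structure covers the entire vocabulary...
    vocabulary = {t for terms in terms_per_doc.values() for t in terms}
    counters = {term: 0 for term in vocabulary}
    # ...even though only the filtered docs bump any counters
    for doc_id in matching_docs:
        for term in terms_per_doc[doc_id]:
            counters[term] += 1
    return counters

index = {1: ["apple", "beta"], 2: ["beta", "gamma"], 3: ["delta"]}
counts = facet_counts(index, matching_docs=[1])
# counts holds a bucket for all 4 vocabulary terms, though only doc 1 matched
```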
I don't think there's any way to do that within Solr. If you're using
Linux, the Logical Volume Manager can be used to create a single volume
from multiple devices (RAID), from which you can create partitions/file
systems as required. There may be equivalent Windows functionality - I
can't say.
Well, consider what happens here.
Solr gets a DBQ that includes document 132 and 10,000,000 other docs
Solr gets an add for document 132
The DBQ takes time to execute. If Solr processed the requests in
parallel, would 132 be in the index after the delete was over? It would
depend on when the DB
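That ordering dependency can be sketched in a few lines (a toy model, not Solr's update log):

```python
# Toy model: whether doc 132 survives depends on whether its add
# lands before or after the delete-by-query executes.
def apply_ops(index, ops):
    for op, arg in ops:
        if op == "add":
            index.add(arg)
        elif op == "dbq":
            # delete every doc matching the query predicate
            index = {d for d in index if not arg(d)}
    return index

matches = lambda doc_id: doc_id >= 100   # a query matching 132 and many others

# DBQ executes first, then the add: doc 132 is present afterwards
after_dbq_then_add = apply_ops({132, 500}, [("dbq", matches), ("add", 132)])
# add executes first, then the DBQ: doc 132 is gone
after_add_then_dbq = apply_ops({132, 500}, [("add", 132), ("dbq", matches)])
```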
There's been discussion on the Solr JIRA list about allowing multiple
"roots" for cores although I can't find it right now.
Meanwhile, what people do is specify dataDir. It's a bit clumsy, since
we can't really do this at a collection level; it needs to be done with
ADDREPLICA individually.
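For reference, the Collections API accepts per-replica core properties via a property.* prefix, so dataDir can be set as each replica is added; a sketch (the collection, shard, and path below are made up):

```
http://localhost:8983/solr/admin/collections?action=ADDREPLICA
    &collection=mycollection
    &shard=shard1
    &property.dataDir=/mnt/bigdisk/mycollection_shard1_replica2
```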
Best,
E
Hi Team,
I heard news that App Studio for Solr and Elasticsearch is being released by
Lucidworks in November. In the mail it was mentioned to drop a mail to the
solr-user DL if we would like to get a preview release.
Could you please share the preview release with me?
Thanks,
Sushil
Maybe not a relevant fact on this, but: "addAndDelete" is triggered by
"reordering of DBQs"; that means there are unexecuted DBQs present in the
updateLog and an add operation is also received. Solr makes sure the DBQs are
executed first and then the add operation is executed.
Amrit Sarkar
Search Engi
If I am understanding you correctly, you think it is caused by the DBQ
deleting a document while a document with that same ID is being updated by
another thread? I'm not sure that is what is happening here, as we only
delete docs if they no longer exist in the DB, so nothing should be
adding/updati
OK, although this was talked about as possibly coming in Solr 6.x, I guess it was
hearsay. From what I can tell after rereading everything I can find on the
subject, as of now the child docs are only retrievable as a one-level hierarchy
when using the ChildDocTransformerFactory
_
Hi,
I am running Solrcloud version: 6.6.1
I have been trying to use graphite to report solr metrics and seem to get
the below error while doing so in the solr logs:
> java.lang.NullPointerException
> at
> com.codahale.metrics.graphite.PickledGraphite.pickleMetrics(PickledGraphite.java:313)
On 11/5/2017 12:20 PM, Chris Troullis wrote:
> The issue I am seeing is when some
> threads are adding/updating documents while other threads are issuing
> deletes (using deleteByQuery), solr seems to get into a state of extreme
> blocking on the replica
The deleteByQuery operation cannot coexist
I believe this is https://issues.apache.org/jira/browse/SOLR-11413,
which has a fix already slated for Solr 7.2.
On Tue, Nov 7, 2017 at 10:44 AM, sudershan madhavan
wrote:
> Hi,
> I am running Solrcloud version: 6.6.1
> I have been trying to use graphite to report solr metrics and seem to get
> t
: 1) When looking for Tübingen in the title, I am expecting the 3092484
Just to be clear -- I'm reading that as an 8 character word, where the 2nd
character is U+00FC and the other characters are plain ascii: T_bingen
Also to be clear: I'm attempting to reproduce the steps you describe using
bq: you think it is caused by the DBQ deleting a document while a
document with that same ID
No. I'm saying that DBQ has no idea _if_ that would be the case so
can't carry out the operations in parallel because it _might_ be the
case.
Shawn:
IIUC, here's the problem. For deleteById, I can guaran
Thank you, Cassandra. It does seem like a thread-unsafe operation issue. But
what confuses me is that the error occurs every time, and only when I have
multiple metrics groups configured. Also, the exception is a null pointer on
the linked list instead of an already-connected exception.
Regards
Sudershan Mad
I'm trying to enable phrase suggestion in my application by using
*AnalyzingInfixLookupFactory *and *DocumentDictionaryFactory*. Following is
what my configuration looks like:
<str name="name">mySuggester</str>
<str name="lookupImpl">AnalyzingInfixLookupFactory</str>
<str name="indexPath">suggester_infix_dir</str>
<str name="dictionaryImpl">DocumentDictionaryFactory</str>
<str name="field">title</str>
I'm afraid that method doesn't work either. I am still perplexed as to how to
install Solr 7 on Ubuntu 17 on my local environment.
Dane Michael Terrell
On Tuesday, October 24, 2017 9:44 AM, Shawn Heisey
wrote:
On 10/23/2017 9:11 PM, Dane Terrell wrote:
> Hi I'm new to apache solr. I'm
On 11/7/2017 11:51 AM, Dane Terrell wrote:
> I'm afraid that method doesn't work either. I am still perplexed as to how to
> install Solr 7 on Ubuntu 17 on my local environment.
How about we start over. The previous info shows that you have the Solr
download in /tmp. I will assume that the file
Is "id" the actual uniqueKey in your schema? If you indexed the same
document twice, the second one should overwrite the first one, so
getting two docs back with the same ID is strange.
Best,
Erick
On Tue, Nov 7, 2017 at 10:43 AM, ruby wrote:
> I'm trying to enable phrase suggestion in my application by u
@Erick, I see, thanks for the clarification.
@Shawn, Good idea for the workaround! I will try that and see if it
resolves the issue.
Thanks,
Chris
On Tue, Nov 7, 2017 at 1:09 PM, Erick Erickson
wrote:
> bq: you think it is caused by the DBQ deleting a document while a
> document with that sam
Hi
I am developing my own custom filter in Solr 5.4.1.
I have created a jar with a filter class that extends the TokenizerFactory
class.
When I loaded it into the Solr config and added my filter to managed-schema, I
found the following error:
org.apache.solr.common.SolrException: Could not load conf for core
Yes, id is a unique field.
I found following issue in Jira:
https://issues.apache.org/jira/browse/LUCENE-6336
It says affected versions are 4.10.3, 5.0. I'm using Solr 6.1 and seeing
this issue.
You can recreate it by indexing those documents I shared and querying.
Yes, id is a unique field in my schema.
I found following Jira issue:
https://issues.apache.org/jira/browse/LUCENE-6336
It looks related to me. It does not mention that it was fixed. Is it fixed
in Solr 6.1? I'm using Solr 6.1
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068
Looks to me like you're compiling against the jars from one version of
Solr and executing against another.
/root/solr-5.2.1/server/solr/#/conf/managed-schema
yet you claim to be using 5.4.1
On Tue, Nov 7, 2017 at 12:00 PM, kumar gaurav wrote:
> Hi
>
> I am developing my own custom filter in
Hi,
I am working on a PoC of a front-end web app to provide an interface for end
users to search and filter data on Solr indexes.
I have been trying Streaming Expressions for about a week and I am fairly keen
on using them to search and filter indexes on the Solr side. But I am not sure
whether this is the right ap
Hi,
I now have a quite different requirement: I need to set the routing key hash for a
document so that it is sent to a particular shard according to its range.
I have a SolrCloud configuration with 4 shards & 4 replicas, with the below shard ranges.
shard1: 8000-bfff
shard2: c000-
shard3: 0-3fff
you can chain two [subquery] transformers, but really it's better to receive
them flat and sort children and grandchildren across levels in post-processing.
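For illustration, chained [subquery] transformers could take a shape like this (the field and pseudo-field names here are hypothetical; adjust to your schema):

```
q=type:parent
fl=*,kids:[subquery]
kids.q={!terms f=parent_id v=$row.id}
kids.fl=*,grandkids:[subquery]
grandkids.q={!terms f=kid_id v=$row.id}
```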
On Tue, Nov 7, 2017 at 4:05 AM, Petersen, Robert (Contr) <
robert.peters...@ftr.com> wrote:
> OK no faceting, no filtering, I just want the hierarchy t
Kojo,
Not sure what you mean by making two requests to get documents. A
"search" streaming expression can be passed an "fq" parameter to filter
the results, and a rollup on top of that will fetch you the desired results. This
may not be mentioned in the official docs:
Sample streaming expression:
expr=r
Hi Erick
I am very happy to see your reply.
I mistakenly wrote 5.4.1 in the last mail; I am developing the plugin in
solr-5.2.1.
I am compiling the jars and executing against the same version, i.e. 5.2.1, yet I
am getting the following error:
Caused by: org.apache.solr.common.SolrException: Plugin init fa
Can someone confirm whether this needs to be reported as a bug? The exception
in the patch seems to be different from that of SOLR-11413. Also,
the issue is not sporadic but occurs every time the Graphite Reporter is
invoked for multiple metrics.
Regards
Sudershan Madhavan
On Tue, Nov 7, 20
Ketan,
If you know the defined indexing architecture, isn't it better to use the
"implicit" router, writing the logic on your own end?
If the document is of "Org1", send the document with the extra param
"_route_=shard1", and likewise.
Snippet from official doc:
https://lucene.apache.org/solr/guide/6_6/shard
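For the routing question upthread: with the compositeId router, a document lands on whichever shard's hash range contains its signed 32-bit routing hash. A toy lookup, with the ranges written as hypothetical values rather than the exact ones from this cluster:

```python
# Hypothetical signed 32-bit hash ranges for a 4-shard collection
# (unsigned hex ranges shown in comments, as Solr displays them).
SHARD_RANGES = {
    "shard1": (-0x80000000, -0x40000001),  # 80000000 - bfffffff
    "shard2": (-0x40000000, -0x00000001),  # c0000000 - ffffffff
    "shard3": (0x00000000, 0x3FFFFFFF),    # 0        - 3fffffff
    "shard4": (0x40000000, 0x7FFFFFFF),    # 40000000 - 7fffffff
}

def shard_for_hash(h):
    """Return the shard whose range contains the signed 32-bit hash h."""
    for shard, (lo, hi) in SHARD_RANGES.items():
        if lo <= h <= hi:
            return shard
    raise ValueError("hash outside the 32-bit range")
```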