Hi Dominique,
You don't provide information about the number of documents. Anyway, all
> your cache sizes, and especially the initial sizes, are big. Caches are
> stored in the JVM heap.
Document count is 101893353.
About cache size, more is not always better. Did you run some performance
> benchmarks in order
Hi Shawn,
>
>
> Do you have the actual OutOfMemoryError exception? Can we see that?
> There are several resources other than heap memory that will result in
> OOME if they are exhausted. It's important to be investigating the
> correct resource.
*Exception:*
Aug, 04 2021 15:38:36 org.apache.sol
Hello everyone,
We are currently using Solr 8.7 facets search to render a server generated
heatmap.
Our issue is the following:
Solr often returns this kind of exception:
"java.lang.IllegalArgumentException: Too many cells (743 x 261) for level 5
shape
Rect(minX=-18.610839843750004,maxX=13.
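That error arises when the requested heatmap grid exceeds Solr's cell limit (100,000 cells by default, adjustable via facet.heatmap.maxCells). A quick check of the numbers in the message, as a sketch:

```python
# The exception reports a 743 x 261 grid at level 5. Solr's heatmap
# faceting rejects grids larger than its cell limit (100,000 by default).
cols, rows = 743, 261
cells = cols * rows
print(cells)            # 193923
print(cells > 100_000)  # True
```

Common mitigations are lowering facet.heatmap.gridLevel, raising facet.heatmap.distErrPct, or shrinking the query geometry so fewer cells are needed.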
Hello,
I'm looking for the best way to go forward for a production-ready Solr
deployment
I've noticed there are two prevalent options:
1. Solr Operator
2. Bitnami helm chart
Any existing resources I can use to make a decision?
Thanks
Hello, my name is Egor.
Is there any way to translate Solr's GUI into Russian?
Looking forward to hearing from you
Nothing built in today. The Solr GUI is written in AngularJS, and while it
references support for i18n, no one has taken that step.
https://docs.angularjs.org/guide/i18n#how-does-angularjs-support-i18n-l10n-
This type of feature is why we’d love to see someone step up and move the GUI
to mor
Hello everyone,
We are using Solr 8.5 and facing the following issue.
We want to use the number of clicks on a document/page as a feature in an LTR model.
We are using in-place updates to set/upload the number of clicks for each
document. When I try to extract features from Solr, all the documents/r
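For reference, declaring such a feature in the LTR feature store is typically done with a FieldValueFeature (the class is named later in this thread); a minimal sketch, assuming a docValues field, with illustrative names:

```json
[
  {
    "name": "clicks",
    "class": "org.apache.solr.ltr.feature.FieldValueFeature",
    "params": { "field": "clicks" }
  }
]
```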
Can't Chrome translate the page, or is there too much JavaScript?
> On Aug 10, 2021, at 8:44 AM, Eric Pugh
> wrote:
>
> Nothing built in today. The Solr GUI is written in AngularJS, and while it
> references support for i18n, no one has taken that step.
> https://docs.angularjs.org/guide/i1
I’m confused. You don’t store it, nor index it?
> On Aug 10, 2021, at 9:06 AM, Nishanth Nayakanti
> wrote:
>
> Hello everyone,
>
> We are using Solr 8.5 and facing the following issue.
>
> We want to use number of clicks of a documents/page as a feature in LTR
> model. We are using in-place
automated translations are rough at best
On Tue, 10 Aug 2021 at 06:16, Dave wrote:
> Can't Chrome translate the page, or is there too much JavaScript?
>
> > On Aug 10, 2021, at 8:44 AM, Eric Pugh
> wrote:
> >
> > Nothing built in today. The Solr GUI is written in AngularJS, and
> while it refe
Gotcha, I don't know Russian so I didn't know that. The admin UI has very
limited English to translate; there aren't paragraphs. Unless you want the
results translated, it's pretty straightforward, I would have thought.
> On Aug 10, 2021, at 9:19 AM, Stephen Boesch wrote:
>
> automat
Machine translation often does a better job with paragraphs because the
words are in context. Single word labels such as "Logging" are what gets it
confused because it can't always figure out if that's about log files or
felling trees.
On Tue, 10 Aug 2021 at 15:27, Dave wrote:
> Gotcha, I don’t
Hi Dave,
Thanks for responding
I think that config setting is a prerequisite for in-place updates as defined
here:
https://solr.apache.org/guide/6_6/updating-parts-of-documents.html#UpdatingPartsofDocuments-Example.1
Here is an excerpt from the Solr documentation:
An atomic update operation is performed usin
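For context, the documented prerequisites for in-place updates are that the field be single-valued, non-indexed, non-stored, with docValues. A minimal schema sketch (the field name is illustrative):

```xml
<!-- Sketch (managed-schema): a field eligible for in-place updates.
     It must be single-valued, non-indexed, non-stored, with docValues. -->
<field name="clicks" type="plong" indexed="false" stored="false"
       docValues="true"/>
```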
On Tue, Aug 10, 2021 at 04:08:04PM +0200, Thomas Corthals wrote:
> Machine translation often does a better job with paragraphs because the
> words are in context. Single word labels such as "Logging" are what gets it
> confused because it can't always figure out if that's about log files or
> felli
Hello,
We started receiving this message. How can we solve this problem?
2021-08-09 11:22:49,871 qtp399573350-29619 ERROR Appender solr-async is
unable to write primary appenders. queue is full
2021-08-09 11:22:49,889 qtp399573350-29619 ERROR Appender solr-async is
unable to write primary appenders. qu
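This message comes from Log4j2's async appender dropping events when its queue fills faster than the downstream appenders can drain it. One common mitigation, as a hedged sketch (the appender names are illustrative, not Solr's exact defaults), is to enlarge the buffer or let callers block instead of dropping:

```xml
<!-- Sketch (log4j2.xml): a larger buffer, or blocking="true",
     avoids dropping events when the file appender falls behind. -->
<Async name="solr-async" bufferSize="8192" blocking="true">
  <AppenderRef ref="RollingFile"/>
</Async>
```

It is also worth checking whether slow disk I/O or excessive log volume is what is keeping the primary appenders behind.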
On 2021-08-10 8:18 AM, Stephen Boesch wrote:
automated translations are rough at best
As a native Russian speaker I promise you that non-automated ones are
just as bad at best.
Dima
Hey Tomer,
I'm not aware of any resources that compare the two, however the Solr
Operator is the official project-supported way of running Solr on
Kubernetes.
You can find more information on the project at:
https://solr.apache.org/operator
Note, the next version of the Solr Operator (v0.4.0) wil
Pretty sure the issue is caused by cache sizes at new-searcher warmup time.
Dominique
On Tue, Aug 10, 2021 at 09:07, Satya Nand wrote:
> Hi Dominique,
>
> You don't provide information about the number of documents. Anyway, all
>> your cache sizes, and especially the initial sizes, are big. Caches are st
Looking at the code of the FieldValueFeature in the LTR integration, it
seems to be fully compatible with docValues-only fields:
org/apache/solr/ltr/feature/FieldValueFeature.java:128
It would require some additional investigation to understand why you get
only zeroes (have you tried with a diff
Hi, I agree with Gora,
it could be anything; it requires some further investigation for sure. I
recommend going for a consultancy service unless you are able to provide
more details here.
Regards
--
Alessandro Benedetti
Apache Lucene/Solr Committer
Director, R&D Software Engin
On 8/10/2021 1:06 AM, Satya Nand wrote:
Document count is 101893353.
The OOME exception confirms that we are dealing with heap memory. That
means we won't have to look into the other resource types that can cause
OOME.
With that document count, each filterCache entry is 12736670 bytes, plu
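The per-entry figure follows directly from the document count: each filterCache entry is a bitset with one bit per document, rounded up to whole bytes. A quick sketch of the arithmetic:

```python
import math

max_doc = 101_893_353                  # document count from this thread
entry_bytes = math.ceil(max_doc / 8)   # one bit per document, in bytes
print(entry_bytes)                     # 12736670
```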
I found some helpful information while testing TRAs:
For our use-case I am hesitant to set up an autoDeleteAge (unless it can be
modified - still need to test). So I wondered about a little more manual
delete management approach.
I confirmed that I cannot simply delete a collection that is regis
Hi Dominique,
Thanks, but I still have one point of confusion. Please help me with it.
Pretty sure the issue is caused by cache sizes at new-searcher warmup time.
We use a leader-follower architecture with a replication interval of 3 hours.
This means every 3 hours we get a commit and the searcher warms u
Hi Shawn,
Thanks for explaining it so well. We will work on reducing the filterCache
size and autowarm count.
Though I have one question.
If your configured 4000 entry filterCache were to actually fill up, it
> would require nearly 51 billion bytes, and that's just for the one core
> with 101
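Given that a full 4000-entry cache would need roughly 51 billion bytes at this document count, a much smaller cache with a modest autowarm count is the usual remedy. An illustrative sketch (the sizes here are assumptions to tune against benchmarks, not recommendations):

```xml
<!-- Sketch (solrconfig.xml): a smaller filterCache with modest autowarm. -->
<filterCache class="solr.CaffeineCache"
             size="512"
             initialSize="512"
             autowarmCount="32"/>
```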
If I were you, I would stick to the 128 GB machine, and then look at
other parameters to tune...
Deepak
"The greatness of a nation can be judged by the way its animals are treated
- Mahatma Gandhi"
+91 73500 12833
deic...@gmail.com
Facebook: https://www.facebook.com/deicool
LinkedIn: www.l