Hi @Atita,

We are using the latest version (Solr 7.1.0).
As the metrics are exposed as MBeans via JMX, you could use the Prometheus
JMX exporter to read those metrics and expose them to Prometheus. You could
use it to monitor caches, response times, and the number of errors in all
the handlers you have defined.

To configure JMX in a Solr instance, follow this link:
https://lucene.apache.org/solr/guide/6_6/using-jmx-with-solr.html

This page explains some of the JMX metrics that Solr exposes:
https://lucene.apache.org/solr/guide/6_6/performance-statistics-reference.html

Basically, the JMX exporter is an embedded Jetty server that reads the
values exposed via JMX (on localhost or on a remote instance), parses those
values, and exposes them in a format that Prometheus can scrape.
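For example, here is a minimal sketch of a config for the exporter's
standalone HTTP server mode. The MBean pattern below is only illustrative
(the exact object names vary between Solr versions, so inspect them with
jconsole first), and the ports are placeholders:

config.yaml:

    hostPort: localhost:18983      # the JMX port you enabled on the Solr JVM
    lowercaseOutputName: true
    whitelistObjectNames:
      - "solr:*"                   # only read Solr MBeans
    rules:
      # Illustrative rule: map cache MBean attributes to solr_cache_<attr>,
      # with the core ($1) and cache name ($2) as labels. Adjust the pattern
      # to whatever jconsole shows for your version.
      - pattern: 'solr<dom1=core, dom2=(.+), category=CACHE, scope=searcher, name=(.+)><>(.+)'
        name: solr_cache_$3
        labels:
          core: "$1"
          cache: "$2"
      # Fallback: expose everything else with default naming
      - pattern: ".*"

Start it with something like

    java -jar jmx_prometheus_httpserver-<version>-jar-with-dependencies.jar 9404 config.yaml

(check the jmx_exporter README for the exact artifact name) and the metrics
will appear on http://localhost:9404/metrics.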
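On the Prometheus side you then only need a scrape job pointing at the
exporter; the host names and the 9404 port are again placeholders for
wherever you run it:

prometheus.yml:

    scrape_configs:
      - job_name: 'solr'
        scrape_interval: 15s
        static_configs:
          - targets: ['solr01:9404', 'solr02:9404']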
Best regards,
Daniel

On Tue, 7 Nov 2017 at 2:43, Atita Arora <atitaar...@gmail.com> wrote:

> Hi @Daniel,
>
> What version of Solr are you using?
> We gave Prometheus + Jolokia + InfluxDB + Grafana a try, and that came
> out well.
> With Solr 6.6 the metrics are exposed through the /metrics API, but how
> do we go about it for the earlier versions? Please guide.
> Specifically the cache monitoring.
>
> Thanks in advance,
> Atita
>
> On Mon, Nov 6, 2017 at 2:19 PM, Daniel Ortega
> <danielortegauf...@gmail.com> wrote:
>
> > Hi Robert,
> >
> > We use the following stack:
> >
> > - Prometheus to scrape metrics (https://prometheus.io/)
> > - Prometheus node exporter to export "machine metrics" (disk, network
> >   usage, etc.) (https://github.com/prometheus/node_exporter)
> > - Prometheus JMX exporter to export "Solr metrics" (cache usage, QPS,
> >   response times...) (https://github.com/prometheus/jmx_exporter)
> > - Grafana to visualize all the data scraped by Prometheus
> >   (https://grafana.com/)
> >
> > Best regards
> > Daniel Ortega
> >
> > 2017-11-06 20:13 GMT+01:00 Petersen, Robert (Contr)
> > <robert.peters...@ftr.com>:
> >
> > > PS I knew sematext would be required to chime in here! 😊
> > >
> > > Is there a non-expiring dev version I could experiment with? I think
> > > I did sign up for a trial years ago from a different company... I was
> > > actually wondering about hooking it up to my personal AWS-based Solr
> > > Cloud instance.
> > >
> > > Thanks
> > > Robi
> > >
> > > ________________________________
> > > From: Emir Arnautović <emir.arnauto...@sematext.com>
> > > Sent: Thursday, November 2, 2017 2:05:10 PM
> > > To: solr-user@lucene.apache.org
> > > Subject: Re: Anyone have any comments on current solr monitoring favorites?
> > >
> > > Hi Robi,
> > > Did you try Sematext’s SPM? It provides host, JVM and Solr metrics
> > > and more. We use it for monitoring our Solr instances and for
> > > consulting.
> > >
> > > Disclaimer - see signature :)
> > >
> > > Emir
> > > --
> > > Monitoring - Log Management - Alerting - Anomaly Detection
> > > Solr & Elasticsearch Consulting Support Training - http://sematext.com/
> > >
> > > > On 2 Nov 2017, at 19:35, Walter Underwood <wun...@wunderwood.org> wrote:
> > > >
> > > > We use New Relic for JVM, CPU, and disk monitoring.
> > > >
> > > > I tried the built-in metrics support in 6.4, but it just didn’t do
> > > > what we want. We want rates and percentiles for each request
> > > > handler. That gives us the 95th percentile for textbooks suggest or
> > > > for the homework search results page, etc. The Solr metrics didn’t
> > > > do that. The Jetty metrics didn’t do that.
> > > >
> > > > We built a dedicated servlet filter that goes in front of the Solr
> > > > webapp and reports metrics. It has some special hacks to handle
> > > > some weird behavior in SolrJ. A request to the “/srp” handler is
> > > > sent as “/select?qt=/srp”, so we normalize that.
> > > >
> > > > The metrics start with the cluster name, the hostname, and the
> > > > collection. The rest is generated like this:
> > > >
> > > > URL: GET /solr/textbooks/select?q=foo&qt=/auto
> > > > Metric: textbooks.GET./auto
> > > >
> > > > URL: GET /solr/textbooks/select?q=foo
> > > > Metric: textbooks.GET./select
> > > >
> > > > URL: GET /solr/questions/auto
> > > > Metric: questions.GET./auto
> > > >
> > > > So a full metric for the cluster “solr-cloud” and the host
> > > > “search01” would look like
> > > > “solr-cloud.search01.solr.textbooks.GET./auto.m1_rate”.
> > > >
> > > > We send all that to InfluxDB. We’ve configured a template so that
> > > > each part of the metric name is mapped to a field, so we can write
> > > > efficient queries in InfluxQL.
> > > >
> > > > Metrics are graphed in Grafana. We have dashboards that mix
> > > > CloudWatch (for the load balancer) and InfluxDB.
> > > >
> > > > I’m still working out the kinks in some of the more complicated
> > > > queries, but the data is all there. I also want to expand the
> > > > servlet filter to report HTTP response codes.
> > > >
> > > > wunder
> > > > Walter Underwood
> > > > wun...@wunderwood.org
> > > > http://observer.wunderwood.org/ (my blog)
> > > >
> > > > > On Nov 2, 2017, at 9:30 AM, Petersen, Robert (Contr)
> > > > > <robert.peters...@ftr.com> wrote:
> > > > >
> > > > > OK I'm probably going to open a can of worms here... lol
> > > > >
> > > > > In the old old days I used PSI Probe to monitor Solr running on
> > > > > Tomcat, which worked OK on a machine-by-machine basis.
> > > > >
> > > > > Later I had a Grafana dashboard on top of Graphite monitoring,
> > > > > which was really nice looking but kind of complicated to set up.
> > > > >
> > > > > Even later I successfully just dropped in a New Relic Java agent,
> > > > > which had Solr monitors and a dashboard right out of the box, but
> > > > > it costs money for the full tamale.
> > > > >
> > > > > For basic JVM health and Solr QPS and time percentiles, does
> > > > > anyone have any favorites or other alternative suggestions?
> > > > >
> > > > > Thanks in advance!
> > > > >
> > > > > Robi
> > > > >
> > > > > ________________________________
> > > > >
> > > > > This communication is confidential. Frontier only sends and
> > > > > receives email on the basis of the terms set out at
> > > > > http://www.frontier.com/email_disclaimer.