Hi!
I'm considering using the following configuration for my producers and
consumers:
client.dns.lookup="use_all_dns_ips"
so that I only have a single DNS entry to manage for the list of brokers.
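Concretely, a minimal client config sketch, assuming a single DNS name
(kafka.example.com is a placeholder) with an A record for every broker:

    bootstrap.servers=kafka.example.com:9092
    client.dns.lookup=use_all_dns_ips

With use_all_dns_ips the client attempts a connection to every IP the name
resolves to, rather than only the first address returned.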
I also have another cluster connected via MirrorMaker 2, which serves as a
failover.
My question is,
I would reiterate Prometheus:
its node_exporter also gives you OS metrics, the JMX
integration tells you how the Java servers are doing, and there are modules
for topic depth.
All of it goes into Prometheus and is visualised nicely as a single pane of
glass via Grafana.
and well, if you're doing codi
How about using the JMX -> Prometheus integration?
Then you can visualise using Grafana and you can use the Prometheus alert /
notification framework also.
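For example, each broker can be started with the Prometheus JMX exporter
Java agent; the jar path, port and config file below are placeholders:

    KAFKA_OPTS="-javaagent:/opt/jmx_prometheus_javaagent.jar=7071:/opt/kafka-jmx.yml" \
      bin/kafka-server-start.sh config/server.properties

Prometheus then scrapes port 7071 on each broker, and Grafana reads from
Prometheus.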
G
On Thu, Mar 19, 2020 at 8:55 AM 张祥 wrote:
> Hi,
>
> I want to know what the best practice to collect Kafka JMX metrics is. I
> haven't found
I stand corrected...
Sent from my iPhone
George Leonard
__
george...@gmail.com
+27 82 655 2466
> On 21 Feb 2020, at 07:22, Liam Clarke wrote:
>
> Hi Sunil,
>
> Looks like Metricbeat has a Jolokia module that will capture JMX-exposed
> metrics
I'm going to say no...
Metricbeat exposes OS metrics. You're talking about a JVM here exposing values
via JMX.
Have you looked at capturing all of this with Prometheus and dashboarding it
with Grafana, as a real-time dashboard?
Sent from my iPhone
George Leonard
oyments/replicator/replicator-quickstart.html#configure-and-run-replicator
>
> -- Peter
>
> > On Feb 19, 2020, at 7:41 PM, George wrote:
> >
> > Hi all.
> >
> > is it possible, for testing purposes to replicate topic A from Cluster 1
> to
> > topic
Hi all.
Is it possible, for testing purposes, to replicate topic A from cluster 1 to
topic B on cluster 1 (the same cluster)?
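For concreteness, a config sketch of what I have in mind, assuming Confluent
Replicator (per the quickstart linked above), with source and destination
pointing at the same cluster and topic.rename.format avoiding the name clash:

    topic.whitelist=topicA
    topic.rename.format=${topic}.replica
    src.kafka.bootstrap.servers=localhost:9092
    dest.kafka.bootstrap.servers=localhost:9092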
G
--
You have the obligation to inform one honestly of the risk, and as a person
you are committed to educate yourself to the total risk in any activity!
Once informed & total
s you from a CPU/memory
utilisation.
The 90MB/s, how did you calculate that?
G
On Wed, Feb 19, 2020 at 7:29 AM Gowtham S wrote:
> Thanks, George and Alexandre for the reply.
>
> George, to answer your question
>
> > How did you get to 90MB/s - this is our expected through
Asking as I also need to do some of this math.
How did you get to 90MB/s, the RAM size and the CPU count?
You're missing some dedicated SSDs for the ZooKeeper nodes.
I assume you're doing licensing for 6 brokers...
I'd suggest maybe rather going with 5 x 1TB drives; you're provisioning 24TB
for a 23.4TB sizing, cutting it v
Hmm, the math here does not look right:
> > Retention Period = Retention period + (log.retention.check.interval.ms +
> > log.segment.delete.delay.ms) / 1000
> > = 86400 + (300000 + 60000)/1000
> > = 86400 + 360
> > = 86760 seconds
Hi Anjith
Might this be what you're referring to:
https://docs.confluent.io/current/installation/versions-interoperability.html
G
On Tue, Feb 18, 2020 at 11:54 AM Robin Moffatt wrote:
> Hi,
>
> Can you be a bit more specific about what you're looking for?
>
> thanks, Robin.
>
>
> --
>
> Robin Mo
Ignore my comment,
I just saw the KSQL button.
G
On Fri, Jan 17, 2020 at 1:43 PM George wrote:
> Hi Bruno
>
> From the description this looks exactly like what's needed. I will have to
> see how to do this as I haven't touched Java in 20 years; someone did
> mention to me this is also pos
AM Bruno Cadonna wrote:
> Hi George,
>
> Could the following tutorial help you?
>
> https://kafka-tutorials.confluent.io/transform-a-stream-of-events/kstreams.html
>
> Best,
> Bruno
>
> On Fri, Jan 17, 2020 at 7:48 AM George wrote:
> >
> > I have a topic with
I have a topic with messages as per:
{'file_name' : filename,
'line_number' : src_line_number,
'section' : vTag,
'line_data': line_data
}
I want to unpack line_data into multiple columns based on position.
Can I do this via stream processing? The output will go onto a new
topic/stream.
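A rough Kafka Streams sketch of what I mean; the topic names and the
fixed-width column positions are made up, and I'm assuming the value has
already been reduced to the raw line_data string (in practice you'd first
pull it out of the JSON envelope):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import java.util.Properties;

    public class UnpackLineData {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "unpack-line-data");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> lines = builder.stream("source-topic");
            lines.mapValues(lineData -> {
                // cut the record into columns at made-up fixed positions
                String colA = lineData.substring(0, 10).trim();
                String colB = lineData.substring(10, 25).trim();
                return colA + "," + colB; // or assemble a new JSON object here
            }).to("unpacked-topic");

            new KafkaStreams(builder.build(), props).start();
        }
    }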
> Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
>
>
> On Thu, 16 Jan 2020 at 03:42, George wrote:
>
> > Hi all
> >
> > Is it possible to deploy just the connector/worker stack onto, say, a web
> > server or a JBoss server?
>
Hi all
Is it possible to deploy just the connector/worker stack onto, say, a web
server or a JBoss server?
I'm guessing the connector is then configured in standalone mode?
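Something like this is what I'm picturing, assuming the stock standalone
scripts (file names and paths are placeholders):

    bin/connect-standalone.sh config/connect-standalone.properties \
        my-connector.properties

where connect-standalone.properties points at the brokers via
bootstrap.servers, and my-connector.properties holds the connector's own
settings.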
G
--
You have the obligation to inform one honestly of the risk, and as a person
you are committed to educate yourself t
>
> --
>
> Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
>
>
> On Wed, 15 Jan 2020 at 12:48, George wrote:
>
> > Hi Tom
> >
> > will do. for now I have 4 specific file types I need to ingest.
> >
> > 1. reading apache web serv
nector for HTTP log files though
G
On Wed, Jan 15, 2020 at 12:32 PM Tom Bentley wrote:
> Hi George,
>
> Since you mentioned CDC specifically you might want to check out Debezium (
> https://debezium.io/) which operates as a connector of the sort Robin
> referred to and
:
> https://rmoff.dev/crunch19-zero-to-hero-kafka-connect
>
>
> --
>
> Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
>
>
> On Wed, 15 Jan 2020 at 03:43, George wrote:
>
> > Hi all.
> >
> > Please advise, a real noob here still,
Hi all.
Please advise, a real noob here, still unpacking how the stack
works...
If I have a MySQL server, or a web server, or a 2-node JBoss cluster,
and I want to use the MySQL connector to connect to the MySQL DB to pull
data using CDC... do I then need to install the Kafka stack on the DB s
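For reference, a Debezium-style source connector config sketch (hostname,
credentials and names below are all placeholders); the connector runs inside
a Connect worker and reaches MySQL over the network, so nothing Kafka-related
is installed on the DB server itself:

    {
      "name": "mysql-cdc-source",
      "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mysql.example.com",
        "database.port": "3306",
        "database.user": "cdc_user",
        "database.password": "********",
        "database.server.id": "1",
        "database.server.name": "mysqlsrv"
      }
    }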
Why not rather use Prometheus, with the node_exporter deployed on all involved
hosts (producers, consumers, brokers, etc.)?
You can also integrate Prometheus with Kafka via JMX to expose its
Kafka metrics,
and display it all via a Grafana dashboard.
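A minimal prometheus.yml sketch of that layout; the hostnames are
placeholders, 9100 is node_exporter's usual port, and 7071 assumes a JMX
exporter agent on the brokers:

    scrape_configs:
      - job_name: node
        static_configs:
          - targets: ['broker1:9100', 'broker2:9100']
      - job_name: kafka-jmx
        static_configs:
          - targets: ['broker1:7071', 'broker2:7071']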
G
On Fri, Jan 10, 2020 at 6:21 AM Peter Bukowinski wrote:
Hi all
Is it possible to subscribe to a topic based on a specified key, without
specifying the hosting partition?
I know I can simply subscribe to the topic and filter the data as it is
received, looking for the relevant key, but I'd prefer to have this filtering
done on the broker, reducing the consumer workload.
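The broker can't filter by key, but if the producer used the default
partitioner you can at least narrow consumption to the one partition the key
hashes to. A sketch; the topic, key and bootstrap address are placeholders,
and murmur2 mirrors the default partitioner's hashing:

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.utils.Utils;
    import java.nio.charset.StandardCharsets;
    import java.util.Collections;
    import java.util.Properties;

    public class KeyPartitionConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            int numPartitions = consumer.partitionsFor("my-topic").size();
            // same hash the default partitioner applies to the serialized key
            byte[] keyBytes = "my-key".getBytes(StandardCharsets.UTF_8);
            int partition = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
            consumer.assign(Collections.singletonList(
                new TopicPartition("my-topic", partition)));
            // poll() now returns records only from that partition;
            // records still need a client-side key check
        }
    }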
ding to a topic partition even
> if there is no active consumer consuming from it.
>
> I hope this helps.
> --Vahid
>
>
>
>
> From: Jerry George
> To: users@kafka.apache.org
> Date: 05/26/2017 06:55 AM
> Subject:Trouble with querying offsets when usin
Sat, May 27, 2017 at 8:18 AM, Abhimanyu Nagrath <
abhimanyunagr...@gmail.com> wrote:
> Hi Jerry,
>
> I am also facing the same issue. Did you find a solution?
>
> Regards,
> Abhimanyu
>
> On Fri, May 26, 2017 at 7:24 PM, Jerry George wrote:
>
> > Hi
> &
Hi
I had a question about the new consumer APIs.
I am having trouble retrieving the offsets once the consumers are
*disconnected* when using the new consumer v2 API. The following is what I am
trying to do:
*bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server kafka:9092
--group group --describe*
I am using Kafka for Event Sourcing and I am interested in implementing Sagas
(or in general long-running distributed flows) using Kafka.
I did some research but I could not find anything on the topic.
There is plenty of information on Sagas but I feel an implementation using
Kafka might i
licate log lines) is observed when there is an
outburst of logs from the producers. The same is working fine when
throttling is disabled.
Could someone please help me overcome this log duplication issue?
Regards,
Joseph George K
What is the best way to connect to a broker with the 0.8.2.1 release? I tried
using the KafkaClient.poll() method, but it appears that it hasn't been
implemented yet.
Any suggestions would be greatly appreciated!
Cheers,
~george
8)
at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:304)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
... 7 more
-Chris George