Re: Two doubts about the use of kafka

2021-07-14 Thread Shilin Wu
1. Use something like zk-1:2181,zk-2:2181/kafka-root (the chroot path appears once, after the last host:port pair). 2. You may check out the JMX monitoring stack using Prometheus and Grafana here. Wu Shilin, Solution Architect
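For reference, the chroot suggestion above would look like this in the broker's server.properties (host names and the /kafka01 path are placeholders; note the chroot is appended once, after the last host:port pair, not after every host):

```properties
# zookeeper.connect takes a comma-separated list of host:port pairs,
# with an optional chroot path appended once at the end.
zookeeper.connect=zk-1:2181,zk-2:2181,zk-3:2181/kafka01
```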

Two doubts about the use of kafka

2021-07-13 Thread ????
1. When I started Kafka, I saw that the root directory registered in ZooKeeper is "/", but my ZooKeeper has other applications in use, so I want to change it to "/kafka01". How should I configure this, and where is it documented? 2. Our machine fleet is relatively large, so we need to monitor the changes of Kafka

Re: Doubts

2019-07-17 Thread Omar Al-Safi
ii) If you want to move data in-house that lives in a DB (let's say a relational DB like MySQL, etc.), I strongly advise you to look at https://debezium.io/ which is a CDC (Change Data Capture) Kafka Connect plugin that records change events from your DB and propagates them directly to Kafka, if you have a Ka
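A sketch of what registering a Debezium MySQL connector with Kafka Connect might look like (field names per the Debezium 1.x tutorial; every value here is a placeholder, not a recommendation):

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "table.include.list": "inventory.customers",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}
```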

Doubts

2019-07-16 Thread Harry k
Hi, i) To upgrade to the latest version of Kafka from a cluster to the cloud, how would you upgrade? Are there any documents? ii) How would you move the data from the cluster (we use a SQL database) into the cloud? iii) Is the same connectivity available in the cloud for the same systems, i.e. which servers talk to which? Thanks

Re: Doubts in Kafka

2019-07-16 Thread Jonathan Santilli
Hello Aruna, if the duplication you are referring to is duplication of the events/records that arrive and are consumed to/from Kafka, exactly-once semantics and transactions are what you are looking for. Kafka has supported exactly-once since version 0.11 (IIRC), which means that events

Re: Doubts in Kafka

2019-07-15 Thread aruna ramachandran
I want to process a single message at a time to avoid duplication. On Mon, Jul 15, 2019 at 9:45 PM Pere Urbón Bayes wrote: > The good question, Aruna, is why would you like to do that? > > -- Pere > > Missatge de aruna ramachandran del dia dl., 15 de > jul. 2019 a les 14:17: > > > Is there a w

Re: Doubts in Kafka

2019-07-15 Thread Pere Urbón Bayes
The good question, Aruna, is why would you like to do that? -- Pere Missatge de aruna ramachandran del dia dl., 15 de jul. 2019 a les 14:17: > Is there a way to get the Consumer to only read one message at a time and > commit the offset after processing a single message. > -- Pere Urbon-Baye

Re: Doubts in Kafka

2019-07-15 Thread Suman B N
Explore and set max.poll.records. On Mon, Jul 15, 2019 at 5:47 PM aruna ramachandran wrote: > Is there a way to get the Consumer to only read one message at a time and > commit the offset after processing a single message. > -- *Suman* *OlaCabs*
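The advice above translates into two standard consumer settings: cap each poll at one record, and disable auto-commit so the offset is committed only after the record has been processed (e.g. with commitSync()). A minimal consumer-properties sketch:

```properties
# Return at most one record per poll() call.
max.poll.records=1
# Commit manually after processing each record instead of on a timer.
enable.auto.commit=false
```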

Doubts in Kafka

2019-07-15 Thread aruna ramachandran
Is there a way to get the Consumer to only read one message at a time and commit the offset after processing a single message.

Re: Re: Re: Doubts in Kafka

2019-01-14 Thread Eric Azama
:54 AM Sven Ludwig wrote: > One more question: > > Is there a way to ask Kafka which ProducerRecord.key is mapped to which > TopicPartition (for debugging purposes etc.)? > > > > Gesendet: Montag, 14. Januar 2019 um 13:49 Uhr > Von: "Sven Ludwig" > An: users@kaf

Aw: Re: Re: Doubts in Kafka

2019-01-14 Thread Sven Ludwig
One more question: Is there a way to ask Kafka which ProducerRecord.key is mapped to which TopicPartition (for debugging purposes etc.)? Gesendet: Montag, 14. Januar 2019 um 13:49 Uhr Von: "Sven Ludwig" An: users@kafka.apache.org Betreff: Aw: Re: Re: Doubts in Kafka Hi, >> .
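There is no broker API to ask for the key-to-partition mapping, but for the default partitioner it is a pure function of the key bytes and the partition count, so it can be computed client-side for debugging. Kafka's default partitioner uses murmur2 with the sign bit masked, modulo the partition count; the sketch below mimics that shape with a stdlib hash, so its partition numbers will not match a real broker's, and `key_to_partition` is a hypothetical helper:

```python
import hashlib

def key_to_partition(key: bytes, num_partitions: int) -> int:
    """Deterministic key -> partition mapping in the shape of Kafka's
    default partitioner (hash the key bytes, mask the sign bit, mod by
    the partition count). Uses MD5 as a stand-in for Kafka's murmur2,
    so the values differ from what a real broker computes."""
    h = int.from_bytes(hashlib.md5(key).digest()[:4], "big")
    return (h & 0x7FFFFFFF) % num_partitions

# The same key always maps to the same partition.
p1 = key_to_partition(b"sensor-42", 12)
p2 = key_to_partition(b"sensor-42", 12)
assert p1 == p2 and 0 <= p1 < 12
```

To get the broker-accurate answer, the same computation would have to use Kafka's own murmur2 implementation (or the client library's partitioner directly).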

Aw: Re: Re: Doubts in Kafka

2019-01-14 Thread Sven Ludwig
Uhr Von: "Peter Levart" An: users@kafka.apache.org, "Sven Ludwig" Betreff: Re: Aw: Re: Doubts in Kafka On 1/10/19 2:26 PM, Sven Ludwig wrote: > Okay, but > > what if one also needs to preserve the order of messages coming from a > particular device? > > With

Re: Aw: Re: Doubts in Kafka

2019-01-11 Thread Pulkit Manchanda
> records. You usually retain them for enough time so you don't lose them > before processing them + some safety time... > > Regards, Peter > > > > > Sven > > > > > > Gesendet: Donnerstag, 10. Januar 2019 um 08:35 Uhr > > Von: "Pete

Re: Aw: Re: Doubts in Kafka

2019-01-11 Thread Peter Levart
processed records. You usually retain them for enough time so you don't lose them before processing them + some safety time... Regards, Peter Sven Gesendet: Donnerstag, 10. Januar 2019 um 08:35 Uhr Von: "Peter Levart" An: users@kafka.apache.org, "aruna ramachandran"

Aw: Re: Doubts in Kafka

2019-01-10 Thread Sven Ludwig
na ramachandran" Betreff: Re: Doubts in Kafka Hi Aruna, On 1/10/19 8:19 AM, aruna ramachandran wrote: > I am using keyed partitions with 1000 partitions, so I need to create 1000 > consumers because consumers groups and re balancing concepts is not worked > in the case of manually as

Re: Doubts in Kafka

2019-01-09 Thread Peter Levart
Hi Aruna, On 1/10/19 8:19 AM, aruna ramachandran wrote: I am using keyed partitioning with 1000 partitions, so I need to create 1000 consumers, because consumer groups and rebalancing do not work in the case of manually assigned consumers. Is there any workaround for this problem?

Doubts in Kafka

2019-01-09 Thread aruna ramachandran
I am using keyed partitioning with 1000 partitions, so I need to create 1000 consumers, because consumer groups and rebalancing do not work in the case of manually assigned consumers. Is there any workaround for this problem?

Kafka doubts

2019-01-09 Thread aruna ramachandran
Hi, I don't know the device count; new devices may be added to the system. How can I initially configure the partitions by the key (device ID)? The device count may increase up to 1 million; how does Kafka scale based on that need?

Re: Doubts in Kafka

2019-01-08 Thread aruna ramachandran
Thanks for the solution. Please help me try out keyed partitioning with 1000 partitions, and also give suggestions for trying Kafka with Node.js. On Tue, 8 Jan 2019, 9:53 pm Todd Palino OK, in that case you’ll want to do something like use the sensor ID as the > key of the message. This will ass

Re: Doubts in Kafka

2019-01-08 Thread Jan Filipiak
On 08.01.2019 17:11, aruna ramachandran wrote: > I need to process a single sensor's messages serially (the order of messages > should not be changed); at the same time I have to process 1 sensors' > messages in parallel. Please help me to configure the topics and partitions. > If you want to process e

Re: Doubts in Kafka

2019-01-08 Thread Pulkit Manchanda
Yes, as Todd said, you have to use some ID as the key to partition. The rebalancing will be an overhead, and if you increase the partitions later you will lose the order. You can go through https://anirudhbhatnagar.com/2016/08/22/achieving-order-guarnetee-in-kafka-with-partitioning/ for more unders

Re: Doubts in Kafka

2019-01-08 Thread Todd Palino
OK, in that case you’ll want to do something like use the sensor ID as the key of the message. This will assure that every message for that sensor ID ends up in the same partition (which will assure strict ordering of messages for that sensor ID). Then you can create a number of partitions to get
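The property Todd describes can be simulated without a broker: route each record to a partition by hashing its key, and every message for a given sensor lands in the same partition, so per-sensor order survives even though partitions are processed in parallel. All names below are illustrative, and a stdlib CRC stands in for Kafka's murmur2-based default partitioner:

```python
from collections import defaultdict
from zlib import crc32

NUM_PARTITIONS = 4

def partition_for(key: str) -> int:
    # Stand-in for Kafka's default partitioner (which uses murmur2).
    return crc32(key.encode()) % NUM_PARTITIONS

partitions = defaultdict(list)  # partition id -> records in arrival order
for seq in range(5):
    for sensor in ("sensor-a", "sensor-b", "sensor-c"):
        partitions[partition_for(sensor)].append((sensor, seq))

# Within any single partition, each sensor's records stay in send order.
for records in partitions.values():
    per_key = defaultdict(list)
    for sensor, seq in records:
        per_key[sensor].append(seq)
    assert all(seqs == sorted(seqs) for seqs in per_key.values())
```

Parallelism then comes from consuming the partitions concurrently, one consumer (or thread) per partition, without breaking per-key order.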

Re: Doubts in Kafka

2019-01-08 Thread aruna ramachandran
I need to process a single sensor's messages serially (the order of messages should not be changed); at the same time I have to process 1 sensors' messages in parallel. Please help me configure the topics and partitions. On Tue, Jan 8, 2019 at 9:19 PM Todd Palino wrote: > I think you’ll need to expa

Re: Doubts in Kafka

2019-01-08 Thread Todd Palino
I think you’ll need to expand a little more here and explain what you mean by processing them in parallel. Nearly by definition, parallelization and strict ordering are mutually exclusive concepts. -Todd On Tue, Jan 8, 2019 at 10:40 AM aruna ramachandran wrote: > I need to process the 1 sen

Doubts in Kafka

2019-01-08 Thread aruna ramachandran
I need to process the 1 sensor messages in parallel, but each sensor's messages should be in order. If I create 1 partition it doesn't give high throughput, and order is guaranteed only inside a partition. How can I parallelize messages without changing the order? Please help me find a solution.

Re: Doubts about multiple instance in kafka

2018-02-22 Thread naresh Goud
Hi Pravin, you're correct. You can run the application multiple times so the instances are started in multiple JVMs (run 1: java YourClass, which runs in one JVM; run 2: java YourClass, which runs in another JVM), or else you can run the application on multiple machines, i.e. multiple appli

Doubts about multiple instance in kafka

2018-02-21 Thread pravin kumar
I have the Confluent Kafka documentation, but I can't understand the following line: "It is important to understand that Kafka Streams is not a resource manager, but a library that “runs” anywhere its stream processing application runs. Multiple instances of the application are executed either on the s

Re: Doubts in KStreams

2018-02-21 Thread Bill Bejeck
Hi Pravin, 1. Fault tolerance means that state stores are backed by changelog topics that store the contents of the state store. For example, in a worst-case scenario where your machine crashes, destroying all your local state, on starting your Kafka Streams application back up the state stores would
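The changelog idea can be sketched in a few lines: every state-store update is also appended to a log, and after a crash the store is rebuilt by replaying that log. This is an illustration of the concept only, not Kafka Streams' actual implementation (which uses compacted Kafka topics and RocksDB):

```python
changelog = []   # stands in for the changelog topic backing the store
store = {}       # local state store

def put(key, value):
    # Every write goes to both the local store and the changelog.
    store[key] = value
    changelog.append((key, value))

put("sensor-a", 1)
put("sensor-b", 2)
put("sensor-a", 3)

store.clear()    # simulate losing all local state in a crash

# Restore: replay the changelog from the beginning; the last write
# per key wins, recovering the pre-crash contents.
restored = {}
for key, value in changelog:
    restored[key] = value
assert restored == {"sensor-a": 3, "sensor-b": 2}
```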

Doubts in KStreams

2018-02-21 Thread pravin kumar
I have studied Kafka Streams, but have not clearly understood: 1. Can someone explain fault tolerance? 2. I have topicA and topicB with 4 partitions, so it created four tasks. I have created it in a single JVM, but I need to know how it works in multiple JVMs, and if one JVM goes down, how another JVM take

Doubts regarding KafkaProducer implemetation

2017-03-13 Thread Madhukar Bharti
Hi, We have three brokers in a cluster with replication factor 3. We are using Kafka 0.10.0.1. We see some failures with metadata timeout exceptions while producing. We have configured retries=3 and max in-flight requests=1. After comparing with the old Scala producer code, we found that in the new Produc
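The producer settings described in the message would look like this in producer properties (retries and in-flight values taken from the message; max.block.ms is shown as the knob that bounds how long send() waits for metadata, with a placeholder value):

```properties
retries=3
# Keep ordering intact when retries happen.
max.in.flight.requests.per.connection=1
# Upper bound on how long send() may block waiting for metadata
# (or for buffer space) before failing with a timeout.
max.block.ms=60000
```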

Kafka basic doubts

2016-02-22 Thread Pariksheet Barapatre
Hi All, Greetings..!!! This is my first email to Kafka Community. I have just started exploring Kafka on CDH5.5 cluster which ships with Kafka 0.8.2.1. I am able to run sample programs for producer as well as consumer (both high level and low level). Now I am trying to load messages from Kafka

Re: Doubts Kafka

2015-02-08 Thread Gwen Shapira
We keep messages for the amount of time > > > > specified in *log.retention* parameters. If the disk is filled within > > > > minutes, either set log.retention.minutes very low (at risk of losing > > > data > > > > if consumers need restart), or make sure you
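The retention knobs mentioned above, as they would appear in server.properties (values are placeholders; if several time settings are present, log.retention.ms takes precedence over .minutes, which takes precedence over .hours):

```properties
# Time-based retention: delete segments older than this.
log.retention.minutes=30
# Size-based retention per partition; whichever limit is hit first applies.
log.retention.bytes=1073741824
```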

Re: Doubts Kafka

2015-02-08 Thread Christopher Piggott
es for the amount of time > > > specified in *log.retention* parameters. If the disk is filled within > > > minutes, either set log.retention.minutes very low (at risk of losing > > data > > > if consumers need restart), or make sure your disk capacity matches the > > &

Re: Doubts Kafka

2015-02-08 Thread Gwen Shapira
risk of losing > data > > if consumers need restart), or make sure your disk capacity matches the > > rates in which producers send data. > > > > Gwen > > > > > > On Sat, Feb 7, 2015 at 3:01 AM, Eduardo Costa Alfaia < > > e.costaalf...@unibs

Re: Doubts Kafka

2015-02-08 Thread Christopher Piggott
nsumers need restart), or make sure your disk capacity matches the >> rates in which producers send data. >> >> Gwen >> >> >> On Sat, Feb 7, 2015 at 3:01 AM, Eduardo Costa Alfaia < >> e.costaalf...@unibs.it >> > wrote: >> >> >

Re: Doubts Kafka

2015-02-08 Thread Christopher Piggott
3:01 AM, Eduardo Costa Alfaia < > e.costaalf...@unibs.it > > wrote: > > > Hi Guys, > > > > I have some doubts about the Kafka, the first is Why sometimes the > > applications prefer to connect to zookeeper instead brokers? Connecting > to > > zo

Re: Doubts Kafka

2015-02-08 Thread Gwen Shapira
og.retention.minutes very low (at risk of losing data if consumers need restart), or make sure your disk capacity matches the rates in which producers send data. Gwen On Sat, Feb 7, 2015 at 3:01 AM, Eduardo Costa Alfaia wrote: > Hi Guys, > > I have some doubts about the Kafka, t

Doubts Kafka

2015-02-08 Thread Eduardo Costa Alfaia
Hi Guys, I have some doubts about Kafka. The first is: why do applications sometimes prefer to connect to ZooKeeper instead of the brokers? Connecting to ZooKeeper could create overhead, because we are inserting another element between producer and consumer. Another question is about the

Re: Some doubts regarding kafka config parameters

2014-07-21 Thread Jun Rao
Those are good questions. See my answers inlined below. Thanks, Jun On Fri, Jul 18, 2014 at 1:33 PM, shweta khare wrote: > hi, > > I have the following doubts regarding some kafka config parameters: > > For example if I have a Throughput topic with replication factor 1

Some doubts regarding kafka config parameters

2014-07-18 Thread shweta khare
hi, I have the following doubts regarding some Kafka config parameters: For example, if I have a Throughput topic with replication factor 1 and a single partition 0, then I will see the following files under /tmp/kafka-logs/Throughput_0: .index .log