Re: Kafka long running job consumer config best practices and what to do to avoid stuck consumer

2020-05-11 Thread Ali Nazemian
blocked by the group coordinator for up to "max.poll.interval.ms"? Please explain that. There's no universal recipe for "long-running jobs", there's just particular issues you might be encountering and suggested solution

Re: Kafka long running job consumer config best practices and what to do to avoid stuck consumer

2020-05-08 Thread Ali Nazemian
as saying "when our consumer gets stuck, Kafka's automatic partition reassignment kicks in and that's problematic for us." Hence I suggested not using the automatic partition assignment, which per my interpretation would address your issue. Chris
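For readers following the thread: a minimal sketch of what switching off automatic partition assignment can look like with the Java client, using assign() instead of subscribe(). The broker address, topic name, and partition numbers are illustrative assumptions, not details from the thread.

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ManualAssignmentSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative broker
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // assign() bypasses the group coordinator's automatic partition assignment.
            consumer.assign(Arrays.asList(
                    new TopicPartition("jobs", 0),   // hypothetical topic/partitions
                    new TopicPartition("jobs", 1)));
            while (true) {
                for (ConsumerRecord<byte[], byte[]> record : consumer.poll(Duration.ofSeconds(1))) {
                    // Long processing here cannot cause these partitions to be revoked.
                }
            }
        }
    }
}

With manual assignment the consumer never joins the group-managed rebalance protocol, so a slow record handler cannot trigger the reassignment Chris refers to; the trade-off is that spreading partitions across instances has to be managed by the application.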

Re: Kafka long running job consumer config best practices and what to do to avoid stuck consumer

2020-05-08 Thread Ali Nazemian
https://kafka.apache.org/24/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html Chris On Thu, May 7, 2020 at 12:37 AM Ali Nazemian wrote: To help understand my case in more detail, the error I can see constantly is

Re: Kafka long running job consumer config best practices and what to do to avoid stuck consumer

2020-05-07 Thread Ali Nazemian
failed, removing it from the group Thanks, Ali On Thu, May 7, 2020 at 2:38 PM Ali Nazemian wrote: Hi, With the emergence of Apache Kafka for event-driven architecture, one thing that has become important is how to tune the Apache Kafka consumer to manage long-running jobs.

Kafka long running job consumer config best practices and what to do to avoid stuck consumer

2020-05-06 Thread Ali Nazemian
Hi, With the emergence of Apache Kafka for event-driven architecture, one thing that has become important is how to tune the Apache Kafka consumer to manage long-running jobs. The main issue arises when we set a relatively large value for "max.poll.interval.ms". Setting this value will, of course,
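A minimal sketch of the kind of tuning this question is about, using the standard Java consumer. The property names are real consumer configs; the values, topic, and group names are illustrative assumptions rather than recommendations from this thread.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LongJobConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "long-job-workers");        // illustrative
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1");          // one job per poll
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000"); // 10 min; example value only
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("jobs"));       // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    runJob(record.value()); // must finish within max.poll.interval.ms
                }
                if (!records.isEmpty()) {
                    consumer.commitSync();  // commit only after the work is done
                }
            }
        }
    }

    private static void runJob(String payload) {
        // placeholder for the long-running work
    }
}

The idea is to keep each poll() small (max.poll.records) and give the processing of one batch enough headroom (max.poll.interval.ms) that the consumer is not considered failed and removed from the group mid-job.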

Re: Kafka partitioning and auto-scaling in k8s

2019-02-21 Thread Ali Nazemian
ong them. When load spikes, more consumers will join the group and partitions will be reassigned across the larger pool. -- Peter (from phone) On Feb 21, 2019, at 10:12 PM, Ali Nazemian wrote: Hi All, I was wondering how an
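A minimal sketch of the group-based scaling Peter describes: every instance (for example, every pod in a k8s deployment) runs the same consumer code with the same group.id, and Kafka redistributes partitions as instances join or leave. The broker, group, and topic names are placeholder assumptions.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ScalableWorkerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative
        props.put("group.id", "order-workers");           // same group.id in every instance
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders")); // hypothetical topic
            while (true) {
                // When another instance starts or stops, the group rebalances and this
                // consumer's share of partitions changes automatically.
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    // process record
                }
            }
        }
    }
}

Note that the partition count still caps useful parallelism: once there are as many instances as partitions, additional instances sit idle.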

Kafka partitioning and auto-scaling in k8s

2019-02-21 Thread Ali Nazemian
Hi All, I was wondering how an application can be auto-scalable if only a single instance can read from a single Kafka partition and two instances cannot read from the same partition at the same time within the same consumer group. Suppose there is an application that has 10 instances running o

Re: Kafka disk recommendation for production cluster

2018-07-12 Thread Ali Nazemian
AID 10 has been the choice. Also, the replication you are mentioning is the s/w replication, nothing to do with the RAID 0 setup. On 11 July 2018 at 23:59, Ali Nazemian wrote: Thanks. As this proposal is not available for the version of Kafka that we

Re: Kafka disk recommendation for production cluster

2018-07-11 Thread Ali Nazemian
https://cwiki.apache.org/confluence/display/KAFKA/KIP-112%3A+Handle+disk+failure+for+JBOD Should give you some ideas. On 11 July 2018 at 14:31, Ali Nazemian wrote: Hi All, I was wondering what the disk recommendation is for a Kafka cluster? Is

Kafka disk recommendation for production cluster

2018-07-11 Thread Ali Nazemian
Hi All, I was wondering what the disk recommendation is for a Kafka cluster? Is it acceptable to use RAID 0 in the case that replication is 3? We are running on cloud infrastructure and disk failure is addressed at another level, so the chance of a single disk failure would be very low. Besides, our

Re: Kafka disk recommendation for cloud

2018-07-10 Thread Ali Nazemian
ll writes from many publishers. What does your workload look like? Kind Regards, -Dan -----Original Message----- From: Ali Nazemian Sent: Wednesday, July 4, 2018 6:58 AM To: users@kafka.apache.org Subject: Kafka disk recommendation for cloud Hi

Kafka disk recommendation for cloud

2018-07-04 Thread Ali Nazemian
Hi All, I was wondering what the recommendations are for disk type when hosting Kafka in a cloud environment? As far as I know, most of the best practices suggest using spinning disks for Kafka because the Kafka architecture relies on sequential reads/writes. Hence, the increase in Kafka per

Re: Best practice for publishing byte messages to Kafka

2018-01-10 Thread Ali Nazemian
is sent as a byte array, so the default byte array serializer is as "efficient" as it gets, as it's just sending your byte array through as the message... there's no serialization happening. -Thunder On Tue, Jan 9, 2018 at 8:17 PM Ali Nazemian wrote:
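A minimal sketch of that with the Java producer — ByteArraySerializer ships with the client and hands the bytes to Kafka as-is. The broker address and topic name are placeholder assumptions.

import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class ByteMessageProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());

        byte[] payload = "raw bytes".getBytes(StandardCharsets.UTF_8); // any byte[] works

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // The byte array is written to the topic unchanged; no extra serialization step.
            producer.send(new ProducerRecord<>("byte-topic", payload)); // hypothetical topic
            producer.flush();
        }
    }
}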

Re: Best practice for publishing byte messages to Kafka

2018-01-09 Thread Ali Nazemian
On Jan 9, 2018, at 8:12 AM, Ali Nazemian wrote: Hi All, I was wondering whether there is any best practice/recommendation for publishing byte messages to Kafka. Is there any specific Serializer that is recommended for this matter? Cheers, Ali -- A.Nazemian

Best practice for publishing byte messages to Kafka

2018-01-09 Thread Ali Nazemian
Hi All, I was wondering whether there is any best practice/recommendation for publishing byte messages to Kafka. Is there any specific Serializer that is recommended for this matter? Cheers, Ali

Re: Kafka 0.10.0.2 reset offset in the new-consumer mode

2017-11-23 Thread Ali Nazemian
kafka-storm spout, but you could try using the kafka-consumer-groups.sh CLI to reset the offset. It has a --reset-offsets option. On Thu, Nov 23, 2017 at 7:02 PM, Ali Nazemian wrote:
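One caveat: the --reset-offsets option was only added to the tooling in a later release than 0.10.0.2 (0.11, if memory serves), so on an installation that cannot be upgraded a programmatic reset may be the fallback. Below is a minimal sketch with the Java client, assuming offsets are stored in Kafka (new-consumer mode) and all consumers in the group are stopped first; the broker, group, topic, and partitions are placeholder assumptions, so verify against the 0.10.0.x javadoc before relying on it.

import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class OffsetResetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // illustrative
        props.put("group.id", "storm-spout-group");         // hypothetical group to reset
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = Arrays.asList(
                    new TopicPartition("my-topic", 0),       // hypothetical topic/partitions
                    new TopicPartition("my-topic", 1));
            consumer.assign(partitions);
            consumer.seekToBeginning(partitions);            // or seek(tp, someOffset)
            for (TopicPartition tp : partitions) {
                consumer.position(tp);                       // resolves the lazy seek
            }
            consumer.commitSync();                           // stores the new positions for the group
        }
    }
}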

Re: Kafka 0.10.0.2 reset offset in the new-consumer mode

2017-11-23 Thread Ali Nazemian
On Thu, Nov 23, 2017 at 7:02 PM, Ali Nazemian wrote: Hi All, I am using Kafka 0.10.0.2 and I am not able to upgrade my Kafka version. I have a situation where, after removing a Kafka topic, I am getting the following error in Kaf

Kafka 0.10.0.2 reset offset in the new-consumer mode

2017-11-23 Thread Ali Nazemian
Hi All, I am using Kafka 0.10.0.2 and I am not able to upgrade my Kafka version. I have a situation where, after removing a Kafka topic, I am getting the following error in the Kafka-Storm Spout client because the offset hasn't been reset properly. I was wondering how I can reset the offset in the new-con

Kafka spout throughput dropped after upgrading to Storm Kafka Client

2017-06-10 Thread Ali Nazemian
Hi all, We have updated the Kafka-Storm client for the Kafka Spout from 1.0.3 to 1.1 recently. Before this upgrade, we were able to use our application without any issue, but after the upgrade, our performance dropped significantly wherever we use more than a single partition for our Kafka topic! If

Re: Kafka best practice on bare metal hardware

2017-04-13 Thread Ali Nazemian
d SLA? Regards, Ali On Thu, Apr 13, 2017 at 10:57 AM, Marcos Juarez wrote: Ali, I don't know of proper benchmarks out there, but I've done some work in this area, when trying to determine what hardware to get for particular use cases. My answers are in-line:

Kafka best practice on bare metal hardware

2017-04-10 Thread Ali Nazemian
Hi all, I was wondering if there is any benchmark or recommendation for using physical HW vs. virtual for the Kafka brokers. I am trying to calculate the HW requirements for a Kafka cluster with a hard SLA. My questions are as follows. - What is the effect of OS disk caching for a Kafka-Broke

Missing offset value for a Consumer Group

2017-04-10 Thread Ali Nazemian
Hi all, I have an issue with Kafka 0.10.1. I can produce a message to a specific Kafka topic and I am able to see the message using "kafka-console-consumer". However, when I try to consume that message, it is as if there is no message inside the topic. I suspect maybe there is an issue l
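One way to narrow this down is to compare the group's committed offset with the topic's end offset; if they match, the group has already read (or been positioned past) the message and a new poll() will return nothing. Below is a minimal sketch with the Java client (committed() and endOffsets() are available in 0.10.1); the broker, group, topic, and partition are placeholder assumptions.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetCheckSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative
        props.put("group.id", "my-group");                // hypothetical group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        TopicPartition tp = new TopicPartition("my-topic", 0); // hypothetical topic
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            long end = consumer.endOffsets(Collections.singletonList(tp)).get(tp);
            OffsetAndMetadata committed = consumer.committed(tp); // null if nothing committed
            System.out.println("end offset = " + end + ", committed = "
                    + (committed == null ? "none" : committed.offset()));
        }
    }
}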