> > blocked by the group coordinator for up to
> > "max.poll.interval.ms"? Please explain that.
> >
> > There's no universal recipe for "long-running jobs"; there are just
> > particular issues you might be encountering and suggested solutions.
> I read your problem as saying "when our consumer gets stuck, Kafka's
> automatic partition reassignment kicks in and that's problematic for us."
> Hence I suggested not using the automatic partition assignment, which per
> my interpretation would address your issue.
>
> Chris
>
>
> https://kafka.apache.org/24/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html
>
> Chris
>
>
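For reference, a minimal sketch of the manual-assignment approach described
above, using KafkaConsumer.assign() instead of subscribe(). Topic name,
partition numbers, group id, and broker address are placeholders, not from
this thread. With assign() there is no group membership, so there is no
coordinator-driven reassignment to kick a slow consumer out:

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

public class ManualAssignConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "jobs-app"); // used only for offset commits here
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // Manual assignment: this instance owns these partitions until it
            // decides otherwise; no automatic rebalance will take them away.
            consumer.assign(Arrays.asList(
                    new TopicPartition("jobs", 0),  // placeholder topic
                    new TopicPartition("jobs", 1)));
            while (true) {
                consumer.poll(Duration.ofSeconds(1)); // process records here
            }
        }
    }
}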
> On Thu, May 7, 2020 at 12:37 AM Ali Nazemian
> wrote:
>
> > To help understand my case in more detail, the error I can see
> > constantly is "... failed, removing it from the group".
Thanks,
Ali
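That broker-side log line usually means the consumer spent longer than
"max.poll.interval.ms" between poll() calls. A minimal sketch of the two
settings typically tuned for it; broker address, group id, and the exact
values below are illustrative placeholders, not from this thread:

import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.util.Properties;

public class TunedConsumer {
    public static KafkaConsumer<byte[], byte[]> create() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "long-running-jobs");       // placeholder
        // Give each poll-loop iteration up to 30 minutes before the group
        // coordinator considers this member failed (value illustrative).
        props.put("max.poll.interval.ms", "1800000");
        // Hand over fewer records per poll() so each batch finishes sooner.
        props.put("max.poll.records", "10");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        return new KafkaConsumer<>(props);
    }
}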
On Thu, May 7, 2020 at 2:38 PM Ali Nazemian wrote:
> Hi,
>
> > With the rise of Apache Kafka for event-driven architecture, one
> > thing that has become important is how to tune the Apache Kafka
> > consumer to manage long-running jobs.
Hi,
With the rise of Apache Kafka for event-driven architecture, one thing
that has become important is how to tune the Apache Kafka consumer to
manage long-running jobs. The main issue arises when we set a relatively
large value for "max.poll.interval.ms". Setting this value will, of course,
let slow jobs finish, but it also means a genuinely stuck consumer can
block the group for up to that interval before a rebalance happens.
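One pattern for long-running jobs, offered as a sketch rather than as this
thread's answer: run the work off the polling thread and pause the assigned
partitions while it is in flight, so poll() keeps being called and
"max.poll.interval.ms" can stay modest. Topic, group, and broker names are
placeholders:

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PauseResumeConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "long-running-jobs");       // placeholder
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        ExecutorService worker = Executors.newSingleThreadExecutor();
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("jobs")); // placeholder
            Future<?> inFlight = null;
            while (true) {
                // Polling continues even while paused, so the group never
                // sees this member exceed max.poll.interval.ms.
                ConsumerRecords<byte[], byte[]> records =
                        consumer.poll(Duration.ofSeconds(1));
                if (!records.isEmpty()) {
                    consumer.pause(consumer.assignment()); // stop fetching more
                    inFlight = worker.submit(() -> doLongRunningWork(records));
                }
                if (inFlight != null && inFlight.isDone()) {
                    consumer.commitSync();                 // commit after the work is done
                    consumer.resume(consumer.assignment());
                    inFlight = null;
                }
            }
        }
    }

    static void doLongRunningWork(ConsumerRecords<byte[], byte[]> records) {
        // placeholder for the long-running job
    }
}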
> ... among them. When load spikes, more consumers will
> join the group and partitions will be reassigned across the larger pool.
>
> -- Peter (from phone)
>
> > On Feb 21, 2019, at 10:12 PM, Ali Nazemian
> wrote:
> >
> > Hi All,
> >
> > I was wondering how an
Hi All,
I was wondering how an application can be auto-scalable if only a single
instance can read from a single Kafka partition, and two instances in the
same consumer group cannot read from the same partition at the same time.
Suppose there is an application that has 10 instances running o...
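Since partition count caps consumer-group parallelism, the usual approach is
to create the topic with more partitions than the instances you run today,
leaving headroom to scale. A hedged sketch with the Java AdminClient; topic
name, counts, and broker address are illustrative:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import java.util.Collections;
import java.util.Properties;

public class CreateTopicWithHeadroom {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 30 partitions let the consumer group scale up to 30 instances.
            NewTopic topic = new NewTopic("events", 30, (short) 3);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}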
> RAID 10 has been the choice. Also, the replication you are mentioning is
> s/w replication and has nothing to do with the RAID 0 setup.
>
>
>
> On 11 July 2018 at 23:59, Ali Nazemian wrote:
>
> > Thanks. As this proposal is not available for the version of Kafka that
> > we ...
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-112%3A+Handle+disk+failure+for+JBOD
>
> Should give you some ideas.
>
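For context, JBOD in Kafka just means listing one log directory per physical
disk; once KIP-112 landed (Kafka 1.0), a single failed disk no longer takes
the whole broker down. A sketch of the server.properties entry, with
placeholder paths:

# one log directory per physical disk (paths are placeholders)
log.dirs=/data/disk1/kafka-logs,/data/disk2/kafka-logs,/data/disk3/kafka-logs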
> On 11 July 2018 at 14:31, Ali Nazemian wrote:
>
> > Hi All,
> >
> > I was wondering what the disk recommendation is for Kafka cluster? Is
Hi All,
I was wondering what the disk recommendation is for a Kafka cluster. Is it
acceptable to use RAID 0 when the replication factor is 3? We are running
on cloud infrastructure where disk failure is addressed at another level,
so the chance of a single disk failure is very low. Besides, our
> ... writes from
> many publishers. What does your workload look like?
>
> Kind Regards,
> -Dan
>
> -----Original Message-----
> From: Ali Nazemian
> Sent: Wednesday, July 4, 2018 6:58 AM
> To: users@kafka.apache.org
> Subject: Kafka disk recommendation for cloud
>
> Hi
Hi All,
I was wondering what the recommendations are for the disk type for hosting
Kafka in a cloud environment. As far as I know, most best practices
suggest using spinning disks for Kafka, since its architecture relies on
sequential writes and reads. Hence, the increase in Kafka per...
> Everything is sent as a byte array, so the default byte array serializer is as
> "efficient" as it gets, as it's just sending your byte array through as the
> message... there's no serialization happening.
> -Thunder
>
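For completeness, a minimal sketch of a producer wired up with the built-in
ByteArraySerializer that Thunder mentions; topic name and broker address are
placeholders:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class BytePublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            byte[] payload = "raw bytes".getBytes(StandardCharsets.UTF_8);
            // ByteArraySerializer passes the array through unchanged; there is
            // no encoding step on the client side.
            producer.send(new ProducerRecord<>("my-topic", payload));
        }
    }
}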
> On Tue, Jan 9, 2018 at 8:17 PM Ali Nazemian wrote:
On Jan 9, 2018, at 8:12 AM, Ali Nazemian wrote:
> >
> > Hi All,
> >
> > I was wondering whether there is any best practice/recommendation for
> > publishing byte messages to Kafka. Is there any specific Serializer
> > that is recommended for this matter?
> >
> > Cheers,
> > Ali
>
>
--
A.Nazemian
Hi All,
I was wondering whether there is any best practice/recommendation for
publishing byte messages to Kafka. Is there any specific Serializer that is
recommended for this matter?
Cheers,
Ali
> > > ... kafka-storm spout, but you could try using
> > > the kafka-consumer-groups.sh CLI to reset the offset. It has
> > > a --reset-offsets option.
> > >
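A hedged example of that invocation (group, topic, and broker address are
placeholders; the --reset-offsets option ships with the Kafka 0.11+ tools):

kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-earliest --execute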
> On Thu, Nov 23, 2017 at 7:02 PM, Ali Nazemian
> wrote:
>
> > Hi All,
> >
> > I am using Kafka 0.10.0.2 and I am not able to upgrade my Kafka
> > version. I have a situation that after removing a Kafka topic, I am
> > getting the following error in Kaf...
Hi All,
I am using Kafka 0.10.0.2 and I am not able to upgrade my Kafka version. I
have a situation that after removing a Kafka topic, I am getting the
following error in the Kafka-Storm Spout client because the offset hasn't
been reset properly. I was wondering how I can reset the offset in the
new-con...
Hi all,
We have updated the Kafka-Storm client for Kafka Spout from 1.0.3 to 1.1
recently. Before this upgrade, we were able to use our application without
any issue, but after the upgrade our performance dropped significantly
whenever we use more than a single partition for our Kafka topic! If ...
... a hard SLA?
Regards,
Ali
On Thu, Apr 13, 2017 at 10:57 AM, Marcos Juarez wrote:
> Ali,
>
> I don't know of proper benchmarks out there, but I've done some work in
> this area, when trying to determine what hardware to get for particular use
> cases. My answers are in-line:
Hi all,
I was wondering if there is any benchmark or recommendation for physical
vs. virtual hardware for Kafka brokers. I am trying to calculate the
hardware requirements for a Kafka cluster with a hard SLA. My questions
are as follows.
- What is the effect of OS disk caching for a Kafka broker ...
Hi all,
I have an issue with Kafka 0.10.1. I can produce a message to a specific
Kafka topic and I am able to see the message using
"kafka-console-consumer". However, when I try to consume that message, it
is as if there is no message in the topic. I suspect maybe there is an
issue l...
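One guess, not a confirmed diagnosis: kafka-console-consumer with
--from-beginning reads from the start of the log, while an application
consumer in its own group resumes from its committed offsets, or from
"latest" if it has none, and so can appear to see an empty topic. A minimal
check using a fresh group (topic name and broker address are placeholders):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.util.Collections;
import java.util.Properties;

public class DebugConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "debug-" + System.currentTimeMillis()); // fresh group, no committed offsets
        props.put("auto.offset.reset", "earliest"); // behave like --from-beginning
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder
            ConsumerRecords<String, String> records = consumer.poll(5000); // 0.10-era poll(long)
            for (ConsumerRecord<String, String> r : records)
                System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
        }
    }
}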