Congratulations Mickael!
On Fri, Nov 8, 2019 at 6:41 AM Vahid Hashemian
wrote:
> Congrats Mickael,
>
> Well deserved!
>
> --Vahid
>
> On Thu, Nov 7, 2019 at 9:10 PM Maulin Vasavada
> wrote:
>
> > Congratulations Mickael!
> >
> > On Thu, Nov 7, 2019 at 8:27 PM Manikumar
> > wrote:
> >
> > > Congrats Mickael!
yes, thanks that helped.
That is why I prefer to do it via the command line, so that we can get the
exact current status.
When doing it via Kafka Manager we just don't know what the current status is.
- Thanks
On Fri, Nov 8, 2019 at 11:48 AM SenthilKumar K
wrote:
> I faced a similar issue when reassigning partitions to newly added brokers.
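For reference, the exact status of an in-flight reassignment can be checked from the command line with the built-in tool (a sketch for Kafka 2.x; the ZooKeeper address and the reassignment.json file name are assumptions, the JSON being the file originally passed with --execute):

```shell
# Verify, per partition, whether the reassignment has completed,
# is still in progress, or failed.
# zk1:2181 and reassignment.json are placeholders, not from the thread.
kafka-reassign-partitions.sh --zookeeper zk1:2181 \
  --reassignment-json-file reassignment.json \
  --verify
```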
Hello All,
I have a streaming job running in production which is processing over 2
billion events per day and it does some heavy processing on each event. We
have been facing some challenges in managing Flink in production, like
scaling in and out, restarting the job with a savepoint, etc. Flink provi
Hi,
Don’t get me wrong, I just want to understand what's going on.
So how do I figure out how many partitions are required? Trial and error?
And as far as I understand, if I have null as the key for a record, the record is
stored in all partitions.
Is it then not also processed by each consumer,
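For reference: with a null key, the default partitioner assigns each record to one partition in round-robin fashion; records are not copied to every partition. A rough way to see how records are distributed (a sketch; "my-topic" and localhost:9092 are assumptions):

```shell
# Print the latest offset of every partition; with null keys the
# offsets should be roughly equal across partitions (round-robin).
kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 --topic my-topic --time -1
```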
Congrats Mickael,
Well deserved!
--Vahid
On Thu, Nov 7, 2019 at 9:10 PM Maulin Vasavada
wrote:
> Congratulations Mickael!
>
> On Thu, Nov 7, 2019 at 8:27 PM Manikumar
> wrote:
>
> > Congrats Mickael!
> >
> > On Fri, Nov 8, 2019 at 9:05 AM Dong Lin wrote:
> >
> > > Congratulations Mickael!
>
I faced a similar issue when reassigning partitions to newly added brokers.
Out of 400 partitions, 380 were successfully reassigned & the remaining 20
partitions stuck for more than 3 hours.
I logged into the ZK server and cleaned the path with rmr
/kafka.primary/admin/reassign_partitions. Please make a n
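The cleanup described above can be done from the ZooKeeper CLI (a sketch; zk1:2181 is an assumption, and the znode path includes the /kafka.primary chroot from the thread):

```shell
# Note the znode's contents first (get) so the pending assignment can
# be reconstructed if needed, then remove the marker znode.
# On ZooKeeper 3.5+ the delete command is "deleteall" instead of "rmr".
zookeeper-shell.sh zk1:2181 get /kafka.primary/admin/reassign_partitions
zookeeper-shell.sh zk1:2181 rmr /kafka.primary/admin/reassign_partitions
```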
Hello All,
We ran a Kafka partition reassignment using Kafka Manager. It seems it is
stuck somewhere. How can I check which assignment is currently
running, and how can I stop it? It's been more than 12 hours and some of the
partitions are under-replicated.
Really appreciate your help.
Thanks
Congratulations Mickael!
On Thu, Nov 7, 2019 at 8:27 PM Manikumar wrote:
> Congrats Mickael!
>
> On Fri, Nov 8, 2019 at 9:05 AM Dong Lin wrote:
>
> > Congratulations Mickael!
> >
> > On Thu, Nov 7, 2019 at 1:38 PM Jun Rao wrote:
> >
> > > Hi, Everyone,
> > >
> > > The PMC of Apache Kafka is pl
Congrats Mickael!
On Fri, Nov 8, 2019 at 9:05 AM Dong Lin wrote:
> Congratulations Mickael!
>
> On Thu, Nov 7, 2019 at 1:38 PM Jun Rao wrote:
>
> > Hi, Everyone,
> >
> > The PMC of Apache Kafka is pleased to announce a new Kafka committer
> > Mickael
> > Maison.
> >
> > Mickael has been contrib
Congratulations Mickael!
On Thu, Nov 7, 2019 at 1:38 PM Jun Rao wrote:
> Hi, Everyone,
>
> The PMC of Apache Kafka is pleased to announce a new Kafka committer
> Mickael
> Maison.
>
> Mickael has been contributing to Kafka since 2016. He proposed and
> implemented multiple KIPs. He has also been
Congratulations Mickael! Well deserved!
On Thu, Nov 7, 2019 at 1:38 PM Jun Rao wrote:
>
> Hi, Everyone,
>
> The PMC of Apache Kafka is pleased to announce a new Kafka committer Mickael
> Maison.
>
> Mickael has been contributing to Kafka since 2016. He proposed and
> implemented multiple KIPs. He
Matthias J. Sax wrote:
Congrats Mickael!
-Matthias
Welcome Mickael!
regards
Congrats Mickael!
-Matthias
On 11/7/19 1:53 PM, Bill Bejeck wrote:
> Congratulations Mickael! Well deserved!
>
> -Bill
>
> On Thu, Nov 7, 2019 at 4:38 PM Jun Rao wrote:
>
>> Hi, Everyone,
>>
>> The PMC of Apache Kafka is pleased to announce a new Kafka committer
>> Mickael
>> Maison.
>>
>> M
Hi again,
On Thu, 7 Nov 2019 at 23:40, Oliver Eckle wrote:
> Hi,
>
> slow consumers - that could be the case. But why is that an issue? I mean
> I try to use kafka exactly for that and the ability to recover.
> So e.g if there is some burst scenario where a lot of data arrives and has
> to be pr
Congrats Mickael!
Guozhang
On Thu, Nov 7, 2019 at 1:53 PM Bill Bejeck wrote:
> Congratulations Mickael! Well deserved!
>
> -Bill
>
> On Thu, Nov 7, 2019 at 4:38 PM Jun Rao wrote:
>
> > Hi, Everyone,
> >
> > The PMC of Apache Kafka is pleased to announce a new Kafka committer
> > Mickael
> > Ma
Hi,
Slow consumers - that could be the case. But why is that an issue? I mean, I try
to use Kafka exactly for that and the ability to recover.
So e.g. if there is some burst scenario where a lot of data arrives and has to
be processed, a "slow consumer" will be the default case.
What I could under
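For burst scenarios like the one described, a few consumer settings are commonly tuned so that a slow consumer is not kicked out of the group mid-batch (a sketch; the topic and group names and the values are illustrative assumptions, not recommendations from the thread):

```shell
# Fewer records per poll() and a longer poll interval give a slow
# consumer more time to process each batch before a rebalance is
# triggered by an expired max.poll.interval.ms.
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic my-topic --group my-group \
  --consumer-property max.poll.records=100 \
  --consumer-property max.poll.interval.ms=600000
```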
Hi,
> On 7 Nov 2019, at 09:18, SenthilKumar K wrote:
>
> Hello Experts , We are observing issues in Partition(s) when the Kafka
> broker is down & the Partition Leader Broker ID set to -1.
>
> Kafka Version 2.2.0
> Total No Of Brokers: 24
> Total No Of Partitions: 48
> Replication Factor: 2
Congratulations, Mickael!
cheers,
Colin
On Thu, Nov 7, 2019, at 13:53, Bill Bejeck wrote:
> Congratulations Mickael! Well deserved!
>
> -Bill
>
> On Thu, Nov 7, 2019 at 4:38 PM Jun Rao wrote:
>
> > Hi, Everyone,
> >
> > The PMC of Apache Kafka is pleased to announce a new Kafka committer
> >
Hi,
> On 7 Nov 2019, at 22:39, Oliver Eckle wrote:
>
> Have a consumer group with one consumer for the topic .. by misunderstanding
> I have two partitions on the topic ..
> Due to having no key set for the record - I think having several consumers
> making no sense, or am I wrong.
>
I am no
I have a consumer group with one consumer for the topic .. by misunderstanding I
have two partitions on the topic ..
Due to having no key set for the record, I think having several consumers
makes no sense, or am I wrong?
Is there any possibility to work around that?
Cause for example on laggi
Consuming not fast/frequently enough is one of the most common reasons for
it. Have you checked how fast/how many messages you're churning out vs. how
many consumers you have in the group to handle the workload?
Also, what is your partition setup for the consumer groups?
Regards,
On Thu, 7 Nov 2019
unsubscribe
On Wed, Aug 21, 2019 at 2:23 AM sampath kumar wrote:
> Hi,
>
> Using Broker 5.3.0, new consumers(Consumers managed by brokers). Brokers
> are deployed in a Kubernetes environment
>
> Number of brokers : 3, Number of 3 Zookeeper setup
>
> One of the Topic "inventory.request" we have 3
Using kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe
--group my-app ..
I put the output within the logs .. also it's pretty obvious, cause no data will
flow anymore
Regards
-----Original Message-----
From: M. Manna
Sent: Thursday, 7 November 2019 22:10
To: u
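The describe output referenced above has one row per partition, with CURRENT-OFFSET, LOG-END-OFFSET and LAG columns (the group name and broker address come from the message; the column interpretation is general):

```shell
# Per partition, LAG = LOG-END-OFFSET - CURRENT-OFFSET; a steadily
# growing LAG means the consumer cannot keep up with the producers,
# and an empty CONSUMER-ID column means no member is assigned.
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group my-app
```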
Congratulations Mickael! Well deserved!
-Bill
On Thu, Nov 7, 2019 at 4:38 PM Jun Rao wrote:
> Hi, Everyone,
>
> The PMC of Apache Kafka is pleased to announce a new Kafka committer
> Mickael
> Maison.
>
> Mickael has been contributing to Kafka since 2016. He proposed and
> implemented multiple
Hi, Everyone,
The PMC of Apache Kafka is pleased to announce a new Kafka committer Mickael
Maison.
Mickael has been contributing to Kafka since 2016. He proposed and
implemented multiple KIPs. He has also been promoting Kafka through blogs
and public talks.
Congratulations, Mickael!
Thanks,
Have you checked your Kafka consumer group status? How did you determine
that your consumers are lagging?
Thanks,
On Thu, 7 Nov 2019 at 20:55, Oliver Eckle wrote:
> Hi there,
>
>
>
> have pretty strange behaviour questioned here already:
> https://stackoverflow.com/q/58650416/7776688
>
>
>
>
Hi there,
I have some pretty strange behaviour, questioned here already:
https://stackoverflow.com/q/58650416/7776688
As you could see from the logs: https://pastebin.com/yrSytSHD at a specific
point the client is stopping to receive records.
I have a strong suspicion that it relates to performance
Hello Experts, We are observing issues in Partition(s) when the Kafka
broker is down & the Partition Leader Broker ID is set to -1.
Kafka Version 2.2.0
Total No Of Brokers: 24
Total No Of Partitions: 48
Replication Factor: 2
Min In sync Replicas: 1
Partition
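A quick way to list the affected partitions in this situation, where leader -1 means the partition currently has no leader at all (a sketch; localhost:2181 is an assumption):

```shell
# Partitions whose in-sync replica set is smaller than the replica set:
kafka-topics.sh --zookeeper localhost:2181 --describe \
  --under-replicated-partitions
# Partitions with no live leader (reported with leader -1):
kafka-topics.sh --zookeeper localhost:2181 --describe \
  --unavailable-partitions
```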