Hi All,
I have a cluster of 3 nodes. Everything was good when we started. Then we deleted
a topic (including the folders on the Kafka brokers and in ZooKeeper), restarted
the brokers, and created the topics again. Now I see the error below on 2 of the
leaders; it keeps appearing every other second in server.log.
I have 1 partition
observing the Count measure, which gives the number of events that have
been marked. You can try monitoring the 1/5/15 minute averages.
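A minimal sketch of reading those attributes over JMX, assuming remote JMX is
enabled on the broker; the host, port, and class name are placeholders, and the
MBean used here (kafka.server:type=ReplicaManager,name=IsrShrinksPerSec) is one
of the Kafka meters that exposes Count alongside the 1/5/15 minute rates:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class IsrMeterProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; the broker must be started with JMX enabled (JMX_PORT).
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName meter =
                    new ObjectName("kafka.server:type=ReplicaManager,name=IsrShrinksPerSec");
            // Count is cumulative; the *MinuteRate attributes are the 1/5/15 minute moving averages.
            System.out.println("Count             = " + mbsc.getAttribute(meter, "Count"));
            System.out.println("OneMinuteRate     = " + mbsc.getAttribute(meter, "OneMinuteRate"));
            System.out.println("FiveMinuteRate    = " + mbsc.getAttribute(meter, "FiveMinuteRate"));
            System.out.println("FifteenMinuteRate = " + mbsc.getAttribute(meter, "FifteenMinuteRate"));
        }
    }
}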
On Wed, Jun 20, 2018 at 12:33 AM Arunkumar
wrote:
> Hi All,
> I am seeing IsrShrinksPerSec & IsrExpandsPerSec increase (>0) when one
> or more brokers go down
Hi All,
I am seeing IsrShrinksPerSec & IsrExpandsPerSec increase (>0) when one or
more brokers go down and come back up into the cluster. My understanding after
reading most of the documents is that the values should go back to 0 once the
servers catch up. But in our production environment
Hi All
I am facing a problem with ISR metrics. We have a production cluster of 3 ZooKeeper
nodes and 3 brokers. We have implemented custom metrics code (and we see the same
values in JVisualVM as well). When we initially started the brokers we did not have
any issue and it worked fine. But when we restarted a broker
find example configs in their repo, which is pretty good and you
> > also have already done Grafana dashboards (https://grafana.com/dashboards):
> > https://github.com/prometheus/jmx_exporter/tree/master/example_configs
> >
> >
> > On Thu, Apr 26, 2018
Hi All
I am working on setting up monitoring and alerting for our production cluster.
As of now we have a cluster of 3 ZooKeeper nodes and 3 Kafka brokers, which will
expand later.
We are planning for the basic (important) metrics on which we need to alert.
We are in the process of developing alerting s
On Wednesday, February 28, 2018, 3:04:06 PM CST, adrien ruffie
wrote:
Hi Arunkumar,
Have you taken a look yet at whether your MBeans are exposed by ZooKeeper, using
JVisualVM? As in my screen in
Dear Folks
We have plans to implement Kafka and ZooKeeper metrics using the Java JMX API. We
were able to successfully implement metrics collection using the MBeans exposed
for Kafka. But when we try to do the same for ZooKeeper I do not find much API
support like we have for Kafka. Can someone help if you
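For what it is worth, ZooKeeper publishes its server MBeans under the
org.apache.ZooKeeperService JMX domain, so the same javax.management calls used
for Kafka work there too. A rough sketch that simply lists whatever is registered
under that domain, assuming remote JMX is enabled on the ZooKeeper server; the
host, port, and class name are placeholders:

import java.util.Set;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ZkMBeanDump {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; ZooKeeper must be started with remote JMX enabled.
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://zookeeperhost:9998/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // All ZooKeeper server beans live under this domain.
            Set<ObjectName> names =
                    mbsc.queryNames(new ObjectName("org.apache.ZooKeeperService:*"), null);
            for (ObjectName name : names) {
                System.out.println(name);
                for (MBeanAttributeInfo attr : mbsc.getMBeanInfo(name).getAttributes()) {
                    if (!attr.isReadable()) {
                        continue;
                    }
                    try {
                        System.out.println("  " + attr.getName() + " = "
                                + mbsc.getAttribute(name, attr.getName()));
                    } catch (Exception e) {
                        System.out.println("  " + attr.getName() + " (unreadable: " + e + ")");
                    }
                }
            }
        }
    }
}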
Hi Jeff
The number of partitions depends on the number of consumers in that particular
consumer group (each partition is consumed by at most one consumer in a group),
so you may have to create your partitions based on that.
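If it helps, one possible approach is to create the topic with at least as many
partitions as the largest consumer group you expect, since consumers beyond the
partition count sit idle. A rough sketch with the Java AdminClient; the broker
address, topic name, and counts below are made-up values:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicForGroup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder

        // Size partitions for the largest consumer group we expect to run;
        // a group with more consumers than partitions leaves consumers idle.
        int maxConsumersInGroup = 6;
        short replicationFactor = 3;

        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic("example-topic", maxConsumersInGroup, replicationFactor);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}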
Thanks
Arunkumar Pichaimuthu, PMP
On Monday, November 13, 2017, 5:25:35 PM CST, Jeff Widman
wrote:
We're considering an architecture t
dependency.
Ismael
On Thu, Nov 9, 2017 at 10:17 PM, Arunkumar
wrote:
> Hi All
> We have a requirement to migrate log4J 1.x to log4j 2 for our kafka
> brokers using log4j bridge utility. According to Apache Docs the code must
> not call DOMConfigurator or PropertyConfigurator class
Hi There
We are also trying to do the same, and we are trying to override
PlainLoginModule as well. When I add a jar it is not identified and loaded.
If there are any examples we can follow, that would be useful. Any help
is highly appreciated.
Thanks in advance
Arunkumar Pichaimuthu, PMP
On Thu, Nov 9, 2017 at 2:17 PM, Arunkumar
wrote:
> Hi All
> We have a requirement to migrate log4J 1.x to log4j 2 for our kafka
> brokers using log4j bridge utility. According to Apache Docs the code must
> not call DOMConfigurator or PropertyConfigurator class, But when I dig into
Hi All
We have a requirement to migrate from log4j 1.x to log4j 2 for our Kafka brokers
using the log4j bridge utility. According to the Apache docs the code must not call
the DOMConfigurator or PropertyConfigurator classes, but when I dig into the code I
see that the Tools package and other packages have used PropertyConfigurator.
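In case a concrete illustration helps: with the log4j-1.2-api bridge jar swapped
in for log4j 1.x, code that only obtains loggers keeps working, but programmatic
configuration has to go; configuration is picked up from a log4j2.xml or
log4j2.properties file instead. A hypothetical sketch (class name made up):

// Sketch of bridge-friendly code: loggers come from the log4j 2 API and
// configuration comes from log4j2.xml on the classpath, with no calls to
// org.apache.log4j.PropertyConfigurator or DOMConfigurator anywhere.
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class BridgeFriendlyLogging {
    private static final Logger LOG = LogManager.getLogger(BridgeFriendlyLogging.class);

    public static void main(String[] args) {
        LOG.info("configured from log4j2.xml, not from PropertyConfigurator.configure(...)");
    }
}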
Thanks in advance.
Thanks
Arunkumar Pichaimuthu, PMP
Request GROUP_COORDINATOR
failed on brokers List(broker1:9094 (id: -3 rack: null), broker2:9094 (id: -1
rack: null), broker3:9094 (id: -2 rack: null))
Thanks
Arunkumar Pichaimuthu, PMP
Thank you Vahid
I appreciate your time.
Arunkumar Pichaimuthu, PMP
On Fri, 6/16/17, Vahid S Hashemian wrote:
Subject: Re: UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL
To: users@kafka.apache.org
Date: Friday, June 16, 2017, 6:30 PM
Hi
for the same.
Thanks
Arunkumar Pichaimuthu, PMP
On Fri, 6/16/17, Arunkumar wrote:
Subject: Re: UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL
To: users@kafka.apache.org
Date: Friday, June 16, 2017, 4:15 PM
Hi Vahid
I am working on the
I appreciate your time.
Thanks
Arunkumar Pichaimuthu, PMP
On Fri, 6/16/17, Vahid S Hashemian wrote:
Subject: Re: UNKNOWN_TOPIC_OR_PARTITION with SASL_PLAINTEXT ACL
To: users@kafka.apache.org
Date: Friday, June 16, 2017, 1:56 PM
Hi Arunkumar,
Were
not available.
I googled to figure out the issue, and many say that it may be because of the
port, but I am not convinced. Any help is highly appreciated.
Thanks
Arunkumar Pichaimuthu, PMP
On Thu, 6/15/17, Vahid S Hashemian wrote:
Subject: Re
--group
test-consumer-group --add --allow-host hostname:9097 --allow-principal User:arun
--authorizer-properties zookeeper.connect=zookeeperhost:2182
Thanks
Arunkumar Pichaimuthu, PMP
On Thu, 6/15/17, Arunkumar wrote:
Subject: Re
"
password="Arun123";
};
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="arun"
password="Arun123";
};
Thanks
Arunkumar Pichaimuthu, PMP
On Thu, 6/15/17, Vahid S Hashemian wrote:
Arunkumar Pichaimuthu, PMP
extend the
PlainLoginModule class and write our own implementation. Any insight is highly
appreciated. Thanks in advance.
Thanks
Arunkumar Pichaimuthu, PMP
passcode. We would like to authenticate it against our enterprise LDAP server
for authentication. If there is no LDAP support available, we are planning to
customize the code. Any insight on this is highly appreciated.
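Not an authoritative answer, but if you do end up customizing, the LDAP part
itself is just a bind attempt via JNDI; where that check plugs in (a custom
login module or the broker-side SASL/PLAIN password verification) depends on
the Kafka version, so treat this only as a sketch of the credential check. The
LDAP URL and DN layout are placeholders:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.InitialDirContext;

public final class LdapCredentialCheck {
    /** Returns true if the username/password pair can bind to the directory. */
    public static boolean authenticate(String username, char[] password) {
        Hashtable<String, Object> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldaps://ldap.example.com:636");      // placeholder
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL,
                "uid=" + username + ",ou=people,dc=example,dc=com");        // placeholder DN layout
        env.put(Context.SECURITY_CREDENTIALS, new String(password));
        try {
            new InitialDirContext(env).close();  // a successful bind means the credentials are valid
            return true;
        } catch (NamingException e) {
            return false;                        // bad credentials or directory unreachable
        }
    }
}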
Thanks in advance
Arunkumar Pichaimuthu, PMP
Hi There
I would like to subscribe to this mailing list and know more about kafka.
Please add me to the list. Thanks in advance
Thanks
Arunkumar Pichaimuthu, PMP
round / point
us to the exact issue so that we can custom patch it / provide a patch
ourselves.
Thanks
Arun
From: Arunkumar Srambikkal (asrambik)
Sent: Wednesday, March 04, 2015 5:27 PM
To: users@kafka.apache.org
Subject: JSON parsing causing rebalance to fail
Hi,
When I start a new consume
Hi,
When I start a new consumer, it throws a Rebalance exception.
However I hit it only on some machines where the runtime libraries are
different.
The stack trace given below is what I encounter - is this a known issue?
I saw this Jira, but it's not resolved, so I thought to confirm -
https://issues.
message, so the broker will
have duplicate messages, and that's also why we say Kafka guarantees at least
once.
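For illustration only, a rough 0.8.x producer configuration where that can
happen: if the ack is lost after the broker has already appended the message,
message.send.max.retries triggers a re-send and the broker ends up with a
duplicate. The broker address and topic are placeholders:

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class AtLeastOnceProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092");               // placeholder
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("producer.type", "sync");
        props.put("request.required.acks", "1");
        // If the ack is lost after the broker appended the message, each retry
        // re-sends it, which is why delivery is at-least-once, not exactly-once.
        props.put("message.send.max.retries", "3");
        props.put("retry.backoff.ms", "100");

        Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
        producer.send(new KeyedMessage<>("test-topic", "key", "value"));
        producer.close();
    }
}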
-Jiangjie (Becket) Qin
On 3/3/15, 4:01 AM, "Arunkumar Srambikkal (asrambik)"
wrote:
>Hi,
>
>I'm running some tests with the Kafka embedded broker and I see cases
Hi,
I'm running some tests with the Kafka embedded broker, and I see cases where the
producer gets a FailedToSendMessageException but in reality the message is
transferred and the consumer gets it.
Is this expected / a known issue?
Thanks
Arun
My producer config =
props.put("producer.type"
If I may use the same thread to discuss the exact same issue:
assuming one can store the offset in an external location (redis/db etc), along
with the rest of the state that a program requires, wouldn't it be possible to
manage things such that you use the High Level API with auto commit t
details to get this right, the lookup table has to survive
failures. But yes this is exactly what we would like to add:
https://cwiki.apache.org/confluence/display/KAFKA/Idempotent+Producer
-Jay
On Tue, Feb 17, 2015 at 12:44 AM, Arunkumar Srambikkal (asrambik) <
asram...@cisco.com> wrote:
Hi,
I guess the message-production duplicate scenario in Kafka is when a producer
commits the data but does not get an ack (the broker or network fails AFTER the
commit) and retries.
I got to thinking that the retry could be caught by the broker, which could then
identify the previous message with a unique message id
message via
MessageAndMetadata.partition() and MessageAndMetadata.offset().
For your scenario you can turn off auto commit (auto.commit.enable=false) and then
commit by yourself after finishing message consumption.
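A rough sketch of that with the 0.8.x high-level consumer (the ZooKeeper address,
group, and topic are placeholders): auto commit is disabled, partition and offset
are read from MessageAndMetadata, and commitOffsets() is called only after the
message has been processed:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zookeeperhost:2181");  // placeholder
        props.put("group.id", "test-consumer-group");
        props.put("auto.commit.enable", "false");              // commit only after processing

        ConsumerConnector consumer =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                consumer.createMessageStreams(Collections.singletonMap("test-topic", 1));

        ConsumerIterator<byte[], byte[]> it = streams.get("test-topic").get(0).iterator();
        while (it.hasNext()) {
            MessageAndMetadata<byte[], byte[]> record = it.next();
            System.out.printf("partition=%d offset=%d payload=%s%n",
                    record.partition(), record.offset(), new String(record.message()));
            consumer.commitOffsets();  // checkpoint in ZooKeeper after the message is handled
        }
    }
}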
On Mon, Feb 16, 2015 at 1:40 PM, Arunkumar Srambikkal (asrambik) <
asram...@cisco.
Hi,
Is there a way to get the current partition number and current offset, when
using the *high level consumer* in 0.8.2?
I went through the previous messages, and in the previous version I think there
is none.
The reason we want to do this is that I plan to have a consumer without the