Thanks for pointing this out. There was a broker instance of version
0.10.1.0 running.
On May 23, 2017 11:34 AM, "Ewen Cheslack-Postava" wrote:
> Version 2 of UpdateMetadataRequest does not exist in version 0.9.0.1. This
> suggests that you have a broker with a newer version of Kafka running
> a
Version 2 of UpdateMetadataRequest does not exist in version 0.9.0.1. This
suggests that you have a broker with a newer version of Kafka running
against the same ZooKeeper cluster. Do you have any other versions running? Or is
it possible this is a shared ZK cluster and you're not using a namespace
within
On 22 May 2017 at 16:09, Guozhang Wang wrote:
> For
> that issue I'd suspect that there is a network issue, or maybe the network
> is just saturated already and the heartbeat request / response were not
> exchanged in time between the consumer and the broker, or the sockets being
> dropped becaus
Hi Rajini
Thanks for the input. I think I may have made a mistake in granting Create
access to kafka-cluster. I did the following; please correct me if this is not right:
[root@kafka1 KAFKA]# bin/kafka-acls.sh --authorizer-properties
zookeeper.connect=kafka1.example.com:2181 --add --allow-principal
User:CN
Hi Kafka users,
Recently I've been following
https://kafka.apache.org/documentation/#security_ssl to configure SSL
connections between Kafka and Filebeat (using Sarama library).
Basically the doc works perfectly for what is tested against - SSL between
Kafka and Kafka-console-producer/consumer. H
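For reference, the broker-side certificate setup from that doc boils down to a few keytool/openssl steps. The commands below are only a sketch: the file names, alias, CN values, and passwords are placeholders, and Filebeat (via Sarama, i.e. Go's TLS stack) will need the CA certificate in PEM form rather than a JKS keystore.

```shell
# Sketch of the keystore setup from the security_ssl doc.
# All file names, aliases, CNs and passwords are placeholders.

# 1. Generate a key pair for the broker.
keytool -keystore server.keystore.jks -alias localhost -validity 365 \
  -genkey -keyalg RSA -storepass changeit -keypass changeit \
  -dname "CN=kafka1.example.com"

# 2. Create your own CA (keep ca-cert: Filebeat/Sarama consume it as PEM).
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 \
  -subj "/CN=test-ca" -passout pass:changeit

# 3. Export a CSR for the broker and sign it with the CA.
keytool -keystore server.keystore.jks -alias localhost -certreq \
  -file cert-file -storepass changeit
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file \
  -out cert-signed -days 365 -CAcreateserial -passin pass:changeit

# 4. Import the CA cert and the signed broker cert back into the keystore.
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert \
  -storepass changeit -noprompt
keytool -keystore server.keystore.jks -alias localhost -import \
  -file cert-signed -storepass changeit -noprompt
```

On the Filebeat side, point the TLS configuration at ca-cert (PEM) rather than at the JKS files, since Go clients cannot read JKS keystores.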
Hi Manhendra,
Sorry for the late reply.
Just to clarify my previous reply was only for your question about:
"
There is also another issue where a particular broker is marked as dead for
a group id and Streams process never recovers from this exception.
"
And I thought your attached logs are ass
Rajini
I tried to add permission for the Kafka broker to write. Now I get this error.
Am I missing anything else?
[2017-05-22 11:11:15,065] WARN Error while fetching metadata with
correlation id 1 : {kafka-testtopic=TOPIC_AUTHORIZATION_FAILED}
(org.apache.kafka.clients.NetworkClient)
[2017-05-22 11:
Forgot to mention - this is on 0.9. We can't upgrade to 0.10 yet, as we haven't
upgraded our brokers.
-----Original Message-----
From: Simon Cooper [mailto:simon.coo...@featurespace.co.uk]
Sent: 22 May 2017 16:05
To: users@kafka.apache.org
Subject: Getting the consumer to operate deterministical
Hi,
I'm having significant problems getting the Kafka consumer to operate
deterministically with small message counts and sizes (this is for local
testing).
I'm controlling the offset manually, and using manual partition/topic
assignment. I've set auto commit off, and set fetch.min.bytes to 1.
If you are using auto-creation of topics, you also need to grant Create
access to kafka-cluster.
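For the auto-create case, the grant would look something like the following sketch. The principal name and ZooKeeper address are placeholders; note that the resource is the cluster itself (`--cluster`), whose resource name is the literal `kafka-cluster`.

```shell
# Hypothetical example: allow a client principal to create topics when
# topic auto-creation is enabled. Principal and ZK address are placeholders.
bin/kafka-acls.sh --authorizer-properties \
  zookeeper.connect=kafka1.example.com:2181 \
  --add --allow-principal "User:CN=client.example.com" \
  --operation Create \
  --cluster
```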
On Mon, May 22, 2017 at 9:51 AM, Raghav wrote:
> Hi Rajini
>
> I tried again with IP addresses this time, and I get the following error
> log for the given ACLS. Is there something wrong in the way I am
Hi Rajini
I tried again with IP addresses this time, and I get the following error
log for the given ACLs. Is there something wrong in the way I am giving
the user name?
*List of ACL*
[root@kafka-dev1 KAFKA]# bin/kafka-acls --authorizer-properties
zookeeper.connect=localhost:2181 --add --allow-prin
Raghav,
I don't believe we do reverse DNS lookup for matching ACL hosts. Have you
tried defining ACLs with host IP address?
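An ACL pinned to a client IP would look roughly like this sketch (the principal, IP address, and topic name are placeholders taken loosely from the thread):

```shell
# Hypothetical example: grant Read on a topic, restricted to one client IP.
# Principal, host IP and topic name are placeholders.
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal "User:CN=client.example.com" \
  --allow-host 10.0.0.5 \
  --operation Read \
  --topic kafka-testtopic
```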
On Mon, May 22, 2017 at 9:19 AM, Raghav wrote:
> Hi
>
> I enabled the DEBUG logs on Kafka authorizer, and I see the following logs
> for the given ACLs. Am I missing somet
Hi
I enabled the DEBUG logs on Kafka authorizer, and I see the following logs
for the given ACLs. Am I missing something in my config here? Any help is
greatly appreciated. Thanks.
*List of ACL*
[root@kafka1 KAFKA]# bin/kafka-acls.sh --authorizer-properties
zookeeper.connect=localhost:2181 --l
Raghav,
*My guess about the problem is that I was generating a CSR (certificate
signing request), which is different from actually extracting the
certificate. Please correct me if I am wrong.*
Yes, that is correct. Use "keytool -exportcert" to extract the certificate.
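A minimal sketch of that export (alias, keystore name, and password are placeholders; `-rfc` writes the certificate in PEM form):

```shell
# Export the broker's certificate (not a CSR) from the keystore.
# Alias, keystore file and password are placeholders.
keytool -exportcert -alias localhost -keystore server.keystore.jks \
  -storepass changeit -rfc -file broker-cert.pem
```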
*To actually address our problem
@Kant I was going through the offset-related configurations before setting
offsets.retention.minutes, so I came across this configuration and thought to
ask whether it should also be tuned.
Regards,
Abhimanyu
On Mon, May 22, 2017 at 2:24 PM, kant kodali wrote:
> @Abhimanyu Why do you
Hi Jun,
Do you mean by using the callback mechanism? Since I am new to Kafka, would
you mind directing me on how to do it, if it's not to be done using a callback?
Fathima.
@Abhimanyu Why do you think you need to set that? Did you try setting
offsets.retention.minutes
= 1440 * 30 and still seeing duplicates?
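For reference, `offsets.retention.minutes` is a broker-side property; a sketch of applying the 30-day figure from this thread (path and value are illustrative):

```shell
# Sketch: raise offset retention to 30 days on the broker.
# 1440 minutes/day * 30 days = 43200. Appends to the broker config file.
echo "offsets.retention.minutes=43200" >> config/server.properties
```

The broker must be restarted for the change to take effect.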
On Mon, May 22, 2017 at 12:37 AM, Abhimanyu Nagrath <
abhimanyunagr...@gmail.com> wrote:
> Hi Girish ,
>
> Do I need to tune this configuration offsets.retentio
Hi Girish,
Do I need to tune the configuration offsets.retention.check.interval.ms
as well? Please let me know if I need to tune any other configuration.
Regards,
Abhimanyu
On Sun, May 21, 2017 at 8:01 PM, Girish Aher wrote:
> Yup, exactly as Kant said.
> Also make sure that the retention of
Hi,
I am getting the below exception while starting kafka broker 0.9.0.1:
kafka.common.KafkaException: Version 2 is invalid for
UpdateMetadataRequest. Valid versions are 0 or 1.
at
kafka.api.UpdateMetadataRequest$.readFrom(UpdateMetadataRequest.scala:58)
at kafka.api.RequestKeys$$anonfun$7