can MirrorMaker sync a topic's partitions and a consumer group's offsets?
hi, all
i want to upgrade my kafka cluster from 0.8.2.2 to 0.10.0. i followed the
rules on kafka.apache.org and some errors happened.
i don't want to stop my cluster, so i made these changes in
server.properties:
change `port=9092` to `listeners=PLAINTEXT://:9092`
add `inter.broker.protocol.version`
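For reference, the rolling-upgrade settings from the upgrade notes boil down to something like this (a sketch assuming the cluster starts on 0.8.2.2, not the full procedure):

```properties
# server.properties during a rolling upgrade from 0.8.2.2 to 0.10.0
listeners=PLAINTEXT://:9092
# keep brokers speaking the old protocol until every broker runs 0.10.0
inter.broker.protocol.version=0.8.2
# keep the old on-disk message format until clients are upgraded too
log.message.format.version=0.8.2
```

once all brokers are on 0.10.0 you bump `inter.broker.protocol.version` (and later `log.message.format.version`) and do one more rolling restart.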
> Thanks
>
> Tom Crayford
> Heroku Kafka
>
> On Tue, May 31, 2016 at 6:19 AM, Fredo Lee
> wrote:
>
> > thanks for your reply.
> >
> > yes, there is more than one controller. the msg of "soft failure" is
> > reported by the old controller
> your active brokers - if it's zero, then that's an indication of this bug.
>
> Thanks
>
> Tom
>
> On Tue, May 31, 2016 at 10:11 AM, Fredo Lee
> wrote:
>
i find that the new broker with the old broker id always fetches messages from
itself because it believes it is the leader of some partitions.
2016-05-31 15:56 GMT+08:00 Fredo Lee :
we have a kafka cluster and one of the brokers is down because its disk was
damaged, so we reused the same broker id on a new server machine.
when starting kafka on the new machine, there are lots of error msgs: "[2016-05-31
10:30:49,792] ERROR [ReplicaFetcherThread-0-1013], Error for partition
[consup-25,20] t
you can use kafka's java (JMX) metrics to monitor events related to kafka health.
as to the kafka listen port, just check it with a plain TCP connect.
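the port check can be a tiny sketch like this (`broker_port_open` is just an illustrative helper name, not a kafka API):

```python
import socket

def broker_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Plain TCP connect; True means something is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

note this only proves the port is open, not that the broker is healthy; for health, watch JMX beans such as `kafka.controller:type=KafkaController,name=ActiveControllerCount` and `OfflinePartitionsCount`.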
2016-05-30 13:04 GMT+08:00 Joe San :
> Is there any such API on the consumer or the producer that I can use to
> check for the underlying connection to the kafka brokers from my pro
- was there any offline partition?
> - was there more than one active controller?
>
> CMIIW
>
>
> On Mon, May 30, 2016 at 2:41 PM, Fredo Lee
> wrote:
>
> > my server.log >>>>>>>>>>>>>>>>
> >
> > lots of errors
[Replica Manager on Broker 1008]: Error
when processing fetch request for partition [consup-03,35] offset
13848954 from consumer with correlation id 0. Possible cause: Request for
offset 13848954 but we only have log segments in the range 12356946 to
13847167. (kafka.server.ReplicaManager)
) for a request sent to broker id:1018,host:
consup-kafka10.com,port:9092 (state.change.logger)
2016-05-28 15:31 GMT+08:00 Muqtafi Akhmad :
> hello Fredo,
>
> Can you elaborate on the 'soft' failure?
>
>
>
> On Sat, May 28, 2016 at 1:53 PM, Fredo Lee
> wrote:
>
some of my consumers always get an out-of-range exception, and i can see that
the new leader, which was a follower before, truncated its log file.
2016-05-30 15:05 GMT+08:00 Fredo Lee :
my kafka cluster has twenty kafka brokers. my producer sets
request.required.acks = -1, my brokers set min.insync.replicas=2, and
unclean leader election is enabled.
i think under this configuration, when the leader of a topic is
changed, there should not be any partition with a truncated log for the f
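note that unclean leader election is exactly what permits the truncation: an out-of-sync replica can be elected leader, and the old leader then truncates its log to match, even with acks=-1. a sketch of the durability-oriented combination (broker side; the acks setting lives in the producer config):

```properties
# broker: require 2 in-sync replicas before a produce with acks=-1 succeeds
min.insync.replicas=2
# broker: never elect an out-of-sync replica as leader,
# otherwise committed messages can be lost and followers truncated
unclean.leader.election.enable=false
```

with unclean election disabled the partition stays offline until an in-sync replica comes back, trading availability for durability.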
we have a kafka cluster with 19 nodes. every week we suffer a soft failure
in this cluster. how can we resolve this problem?
i found the same situation on stackoverflow:
http://stackoverflow.com/questions/35857130/kafka-broker-removed-from-zookeeper-while-leader-election-occurs-error
but there is no explanation there.
2016-05-18 12:02 GMT+08:00 Fredo Lee :
hi all
we have 20 kafka brokers in one cluster. and this morning, we could not find
one kafka broker when using "kafka-topics.sh" to get topic information.
we found lots of errors: "[2016-05-18 09:59:59,998] ERROR
[ReplicaFetcherThread-0-1005], Error for partition [33,58] to
broker 1005:class k
how do i configure kafka security with plaintext && ACLs? i just want to deny
some ips.
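on an unauthenticated PLAINTEXT listener every client maps to the principal User:ANONYMOUS, but ACLs can still match on the client host, so per-IP deny rules are possible. a minimal broker-side sketch for the 0.9/0.10-era authorizer (the deny rules themselves are then added with the kafka-acls.sh tool, e.g. its `--deny-host` option; check the flag names against your version's security docs):

```properties
# server.properties -- enable the built-in ACL authorizer
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# keep everyone allowed unless an ACL says otherwise,
# then add Deny ACLs for the specific ips you want to block
allow.everyone.if.no.acl.found=true
```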
This problem is caused by the consumer offset lagging behind the producer offset:
the messages were already deleted, and then i used the lagged offset to fetch
messages. But those messages have been deleted.
Can this be resolved by using `auto.offset.reset=largest`?
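yes, `auto.offset.reset` controls what the consumer does when the broker answers OffsetOutOfRange. the resulting clamping behaviour can be sketched like this (`resolve_fetch_offset` is a hypothetical helper, not a kafka API):

```python
def resolve_fetch_offset(requested: int, log_start: int, log_end: int,
                         auto_offset_reset: str = "largest") -> int:
    """If the requested offset falls outside the retained range
    [log_start, log_end], jump to one end as auto.offset.reset would."""
    if log_start <= requested <= log_end:
        return requested
    return log_end if auto_offset_reset == "largest" else log_start
```

note the trade-off: `largest` avoids the error but skips everything between the stale offset and the log end, while `smallest` replays from the oldest retained message.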
2015-12-24 16:32 GMT+08:00 Fredo Lee :
When using kafka-0.8.2.0, some questions happened to me.
I created one topic called `test` with 60 partitions, replication-factor 2,
and set log.retention.hours to 24. Then i sent some messages to `test`.
some days later, i created a consumer for this topic, but i got `out of
range` (i store my of
Are you running any non-java client, especially a consumer?
>
> Thanks,
>
> Jun
>
> On Wed, Nov 25, 2015 at 6:38 PM, Fredo Lee
> wrote:
>
> > this is my config file, the original file with some changes made by me.
> >
> > broker.id=1
> > listeners=PLAINTEXT://
Can you reproduce this issue easily?
>
> Jun
>
> On Tue, Nov 24, 2015 at 10:52 PM, Fredo Lee
> wrote:
>
> > The content below is the report for kafka
> >
> > when i try to fetch the coordinator broker, i get error code 6 forever.
> >
> >
> >
> > [2015-11-
with four kafka nodes, i get these errors;
with one node, it works well.
2015-11-25 14:52 GMT+08:00 Fredo Lee :
The content below is the report for kafka
when i try to fetch the coordinator broker, i get error code 6 forever.
[2015-11-25 14:48:28,638] ERROR [KafkaApi-1] error when handling request
Name: FetchRequest; Version: 1; CorrelationId: 643; ClientId:
ReplicaFetcherThread-0-4; ReplicaId:
1; MaxWait: 500 ms; Min
this situation occurred when i killed some nodes in the kafka cluster.
because the error code is zero, i cannot find out the real
reason.
-- Forwarded message --
From: Fredo Lee
Date: 2015-11-23 20:45 GMT+08:00
Subject: cannot decode consumer metadata response
To
2015-11-23 20:45 GMT+08:00 Fredo Lee :
i have four kafka nodes with broker ids: 1,2,3,4
my kafka version is 0.8.2.2
i wrote a consumer client according to the kafka protocol
i killed one kafka node and wanted to re-fetch the coordinator broker.
i got this binary string:
0,0,0,0,0,0,0,1,0,4,98,108,99,115,0,0,0,1,0,0,0,7,0,0,0,0,0,0,0,
0,0,0,0,0,0,
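for decoding by hand: a 0.8.2 ConsumerMetadataResponse body is correlation_id (int32), error_code (int16), coordinator_id (int32), coordinator_host (int16-length string), coordinator_port (int32), all big-endian. a sketch of a decoder (assumes the leading 4-byte size prefix is already stripped; `decode_consumer_metadata_response` is an illustrative helper, not a library function):

```python
import struct

def decode_consumer_metadata_response(body: bytes) -> dict:
    """Decode a 0.8.2 ConsumerMetadataResponse body (big-endian):
    correlation_id int32, error_code int16, coordinator_id int32,
    coordinator_host int16-length string, coordinator_port int32."""
    corr_id, err, coord_id, host_len = struct.unpack_from(">ihih", body, 0)
    off = struct.calcsize(">ihih")  # 12 bytes of fixed-size fields
    host = body[off:off + host_len].decode("utf-8")
    (port,) = struct.unpack_from(">i", body, off + host_len)
    return {"correlation_id": corr_id, "error_code": err,
            "coordinator_id": coord_id, "host": host, "port": port}
```

decoding your byte dump this way should show immediately whether the "6" you see is the error_code field or just part of another field.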