Leader is a replica
On Tue, 18 Sep 2018 at 22:52 jorg.heym...@gmail.com
wrote:
> Hi,
>
> Testing out some kafka consistency guarantees I have following basic
> producer config:
>
> ProducerConfig.ACKS_CONFIG=all
> ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG=true
> ProducerConfig.MAX_IN_FLIGHT_REQUE
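The producer settings above can be sketched as plain Properties. This is a hedged sketch, not the poster's exact config: the string keys correspond to the ProducerConfig constants named in the excerpt, and the max-in-flight value of 1 is an assumption since the original line is truncated.

```java
import java.util.Properties;

// Sketch of the consistency-oriented producer settings discussed above.
// String keys mirror the ProducerConfig constants; the max.in.flight value
// is an assumption (the original config line is cut off).
public class ConsistentProducerProps {
    public static Properties build() {
        Properties props = new Properties();
        props.put("acks", "all");                                // ProducerConfig.ACKS_CONFIG
        props.put("enable.idempotence", "true");                 // ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG
        props.put("max.in.flight.requests.per.connection", "1"); // assumed from the truncated line
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}
```

These properties would be passed to a KafkaProducer constructor along with serializers and a bootstrap address.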
Hi team,
I use Kafka 0.10.0.0 and recently encountered an issue where the hard disk
mounted on one of the nodes experienced performance degradation that caused
the node to become unstable. But the controller didn't remove the node from the
ISR of partitions for which the node is the leader. I wonder if anyway t
sort of weird
> and is not self-explanatory. Wondering whether it has anything to do with
> the fact that number of consumers is too large? In our example, we have
> around 100 consumer connections per broker.
> >
> > Regards,
> > Jeff
> >
> > On 12/5/17, 10:14 AM,
Hi,
any pointer will be highly appreciated
On Thu, 30 Nov 2017 at 14:56 tao xiao wrote:
> Hi There,
>
>
>
> We are running into a weird situation when using Mirrormaker to replicate
> messages between Kafka clusters across datacenter and reach you for help in
> case you
Hi There,
We are running into a weird situation when using Mirrormaker to replicate
messages between Kafka clusters across datacenter and reach you for help in
case you also encountered this kind of problem before or have some insights
in this kind of issue.
Here is the scenario. We have setu
Do you have unclean leader election turned on? If killing 100 is the only
way to reproduce the problem, it is possible with unclean leader election
turned on that leadership was transferred to an out-of-ISR follower which may
not have the latest high watermark
On Sat, Oct 7, 2017 at 3:51 AM Dmitriy Vs
you need to use org.apache.kafka.common.serialization.StringSerializer as
your ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG
On Wed, 5 Jul 2017 at 19:18 罗 辉 wrote:
> hi guys:
>
> I got an exception which i searched searchhadoop.com and the archive as
> well and got no matches, here it is:
>
> l
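The fix suggested at the top of this message can be sketched as follows; this is a minimal illustration assuming a plain new-producer setup, with the broker address as a placeholder. The string keys are the values behind the ProducerConfig constants.

```java
import java.util.Properties;

// Minimal sketch of the suggested fix: use StringSerializer for the value
// (and, typically, the key) serializer. Broker address is a placeholder.
public class SerializerProps {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("value.serializer"));
    }
}
```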
Hi team,
As per my experimentation, mirror maker doesn't compress messages sent
to the target broker if it is not configured to do so, even if the messages in the
source broker are compressed. I understand the current implementation of
mirror maker has no visibility into what compression codec the source
me
Hi team,
I know that Kafka deletes offsets for a consumer group after a period of
time (configured by offsets.retention.minutes) if the consumer group is
inactive for this amount of time. I want to understand the definition of
"inactive". I came across this post[1] and it suggests that no offset
co
You may run into this bug https://issues.apache.org/jira/browse/KAFKA-4614
On Thu, 12 Jan 2017 at 23:38 Stephen Powis wrote:
> Per my email to the list in Sept, when I reviewed GC logs then, I didn't
> see anything out of the ordinary. (
>
> http://mail-archives.apache.org/mod_mbox/kafka-users/2
The default partition assignor is the range assignor, which assigns partitions on a
per-topic basis. If your topics have only one partition, they will all be
assigned to the same consumer. You can change the assignor to
org.apache.kafka.clients.consumer.RoundRobinAssignor
On Thu, 12 Jan 2017 at 22:33 Tobias Adams
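The assignor change suggested above can be sketched as consumer properties; this is an illustration, with the broker address and group id as placeholder assumptions, using the string key behind ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG.

```java
import java.util.Properties;

// Sketch of switching the consumer from the default range assignor to the
// round-robin assignor. Broker address and group id are placeholders.
public class RoundRobinAssignorProps {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        props.put("partition.assignment.strategy",
                  "org.apache.kafka.clients.consumer.RoundRobinAssignor");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("partition.assignment.strategy"));
    }
}
```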
I doubt the LB solution will work for Kafka. A client needs to connect to the
leader of a partition to produce/consume messages. If we put a LB in front
of all brokers, meaning all brokers share the same LB, how does the LB
figure out the leader?
On Mon, Nov 21, 2016 at 10:26 PM Martin Gainty wrot
message to this topic to trigger mirror. Thanks!
Best Regards
Johnny
-Original Message-
From: tao xiao [mailto:xiaotao...@gmail.com]
Sent: 24 October 2016 17:10
To: users@kafka.apache.org
Subject: Re: Mirror multi-embedded consumer's configuration
You need to set auto.offset.
You need to set auto.offset.reset=smallest to mirror data from the beginning
On Mon, 24 Oct 2016 at 17:07 ZHU Hua B wrote:
> Hi,
>
>
> Thanks for your info!
>
>
> Before I launch mirror maker first time, there is a topic include some
> messages, which have been produced and consumed on source Kafka
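The setting discussed above can be sketched as an old-consumer configuration for mirror maker. This is an illustrative sketch; the ZooKeeper address and group id are placeholder assumptions.

```java
import java.util.Properties;

// Sketch of a mirror maker consumer config using the old consumer: with
// auto.offset.reset=smallest, a group with no committed offsets starts
// mirroring from the earliest available message instead of the latest.
public class MirrorConsumerProps {
    public static Properties build() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "source-zk:2181"); // placeholder
        props.put("group.id", "mirror-maker-group");      // placeholder
        props.put("auto.offset.reset", "smallest");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("auto.offset.reset"));
    }
}
```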
Shouldn't the default offset retention minutes be 1440?
offsets.retention.minutes=1440
On Sat, 13 Aug 2016 at 03:05 Guozhang Wang wrote:
> Hi Yuanjia,
>
> New consumer's group registry information is stored on the Kafka brokers,
> not ZK any more, and the brokers use heartbeats to detect if the
Hi team,
I was seeing an issue where a mirror maker attempted to commit an offset
for a partition that was ahead of the log end offset, and in the meantime the
leader of the partition was being restarted. In theory only committed
messages can be consumed by consumer which means the messages received b
Hi,
As per the description in KIP-32, the timestamp of a Kafka message is
unchanged when mirrored from one cluster to another if CreateTime is used. But when I
tested with mirror maker in Kafka 0.10 this doesn't seem to be the case. The
timestamp of the same message is different in source and target. I checked
the l
I noticed the same issue too with 0.9.
On Wed, 25 May 2016 at 09:49 Andrew Otto wrote:
> “We use the default log retention of 7 *days*" :)*
>
> On Wed, May 25, 2016 at 12:34 PM, Andrew Otto wrote:
>
> > Hiya,
> >
> > We’ve recently upgraded to 0.9. In 0.8, when we restarted a broker, data
> >
I am pretty sure kafka-consumer-groups.sh uses tools-log4j.properties
On Tue, 24 May 2016 at 17:59 allen chan
wrote:
> Maybe i am doing this wrong
>
> [ac...@ekk001.scl ~]$ cat
> /opt/kafka/kafka_2.11-0.10.0.0/config/log4j.properties
> ..
> log4j.rootLogger=DEBUG, kafkaAppender
> ..
>
>
> See no extra
put the
> ConsumerRebalanceListener.
> I look forward for your results.
> Florin
>
> On Tue, May 10, 2016 at 9:51 PM, tao xiao wrote:
>
> > Hi team,
> >
> > I want to know what would happen if the consumer group rebalance takes
> long
> > time like l
Hi team,
I want to know what would happen if the consumer group rebalance takes long
time like longer than the session timeout?
For example I have two consumers A and B using the same group id. For some
reasons during rebalance consumer A takes long time to finish
onPartitionsRevoked what would h
this fix. How can I get patch for this on 0.8.2.1?
>
> Sent from my iPhone
>
> > On May 6, 2016, at 12:06 AM, tao xiao wrote:
> >
> > It was marked as a duplicate. This is the
> > https://issues.apache.org/jira/browse/KAFKA-2657 that KAFKA-3112
> duplicate
It was marked as a duplicate. This is the issue,
https://issues.apache.org/jira/browse/KAFKA-2657, that KAFKA-3112 duplicates.
On Thu, 5 May 2016 at 22:13 Raj Tanneru wrote:
>
> Hi All,
> Does anyone know if KAFKA-3112 is merged to 0.9.0.1? Is there a place to
> check which version has this fix? Jir
You can use the built-in mirror maker to mirror data from one Kafka to the
other. http://kafka.apache.org/documentation.html#basic_ops_mirror_maker
On Thu, 5 May 2016 at 10:47 Dean Arnold wrote:
> I'm developing a Streams plugin for Kafka 0.10, to be run in a dev sandbox,
> but pull data from a
The time is controlled by metadata.max.age.ms
On Wed, Apr 27, 2016 at 1:19 AM Luciano Afranllie
wrote:
> Hi
>
> I am doing some tests to understand how kafka behaves when adding
> partitions to a topic while producing and consuming.
>
> My test is like this
>
> I launch 3 brokers
> I create a to
My understanding is that the offsets topic is a compacted topic, meaning records
should never be deleted but compacted. Is this true? If this is the case, what
does offsets.retention.minutes really mean here?
On Tue, 12 Apr 2016 at 20:15 Sean Morris (semorris)
wrote:
> I have been seeing the same thing and now I u
Hi team,
We have a cluster of 40 nodes in production. We observed that one of the
nodes ran into a situation where it constantly shrank and expanded the ISR for
more than 10 hours and was unable to recover until the broker was bounced.
Here are the snippet of server.log from the broker and controller log
You can look at this metric
buffer-available-bytes The total amount of buffer memory that is not being
used (either unallocated or in the free list).
kafka.producer:type=producer-metrics,client-id=([-.\w]+)
On Sat, 9 Apr 2016 at 00:53 Paolo Patierno wrote:
> Hi all,
> is there a way to know at r
the amount of data returned from each call
> to poll() so that you can better predict processing time.
>
> -Jason
>
>
> On Mon, Mar 14, 2016 at 7:04 AM, tao xiao wrote:
>
> > Hi team,
> >
> > I have about 10 consumers using the same consumer group connecting to
&
Hi team,
I have about 10 consumers using the same consumer group connecting to
Kafka. Occasionally I can see UNKNOWN_MEMBER_ID assigned to some of the
consumers. I want to understand under what situation this would happen.
I use Kafka version 0.9.0.1
You need to change group.max.session.timeout.ms in the broker to be larger than
what you have in the consumer.
On Fri, 11 Mar 2016 at 00:24 Michael Freeman wrote:
> Hi,
> I'm trying to set the following on a 0.9.0.1 consumer.
>
> session.timeout.ms=12
> request.timeout.ms=144000
>
> I get the be
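The constraint described above can be illustrated with a small check: the broker rejects a join request whose session timeout falls outside its configured bounds. The numeric defaults here (6s minimum, 30s maximum) are assumptions about 0.9.x broker defaults, not values from the thread.

```java
// Illustrates why the consumer in the thread is rejected: session.timeout.ms
// must lie within the broker's [group.min.session.timeout.ms,
// group.max.session.timeout.ms] range. Default bounds below are assumptions.
public class SessionTimeoutCheck {
    static boolean accepted(long sessionTimeoutMs, long brokerMinMs, long brokerMaxMs) {
        return sessionTimeoutMs >= brokerMinMs && sessionTimeoutMs <= brokerMaxMs;
    }

    public static void main(String[] args) {
        long brokerMin = 6_000;   // group.min.session.timeout.ms (assumed default)
        long brokerMax = 30_000;  // group.max.session.timeout.ms (assumed default)
        System.out.println(accepted(120_000, brokerMin, brokerMax)); // rejected: above max
        System.out.println(accepted(120_000, brokerMin, 300_000));   // accepted after raising max
    }
}
```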
Hi team,
I found that GetOffsetShell doesn't work with SASL enabled Kafka. I believe
this is due to the old producer being used in GetOffsetShell. I want to know if
there is any alternative that provides the same information with secure Kafka.
Kafka version 0.9.0.1
Exception
% bin/kafka-run-class.sh kafka.t
ec
> > check_command check_kafka_uncleanleader!1!10
> > check_interval 15
> > retry_interval 5
> > }
> > define service {
> > hostgroup_name KafkaBroker
> > use generic-service
> > service_description Kafka Under Replicated Partitions
> >
Jens Rantil wrote:
> Hi,
>
> I assume you first want to ask yourself what liveness you would like to
> check for. I guess the most realistic check is to put a "ping" message on
> the broker and make sure that you can consume it.
>
> Cheers,
> Jens
>
> On F
Hi team,
What is the best way to verify a specific Kafka node functions properly?
Telnetting the port is one approach but I don't think it tells me
whether or not the broker can still receive/send traffic. I am thinking of
asking the broker for metadata using consumer.partitionsFor. If it ca
One additional note on the new consumer: a new topic will not be picked
up by the new consumer immediately. It takes as long as metadata.max.age.ms to
refresh metadata and pick up topics that match the pattern at the time of the
check.
On Thu, 25 Feb 2016 at 08:48 Jason Gustafson wrote:
> Hey Luke,
>
The default value is false.
https://github.com/apache/kafka/blob/d5b43b19bb06e9cdc606312c8bcf87ed267daf44/clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java#L232
On Tue, 23 Feb 2016 at 21:14 Franco Giacosa wrote:
> Hi Guys,
>
> I was going over the producer kafka config
m a
> newbie of zk and kafka.
>
> On Fri, Feb 19, 2016 at 4:11 PM, tao xiao wrote:
>
> > I'd assume the offset is managed in zk. then kafka-run-class.sh
> > kafka.admin.ConsumerGroupCommand --list --zookeeper localhost:2181 should
> > output the consumer group that
> I wrote the consumer by myself in golang using Shopify's sarama lib updated
> of master branch.
>
> On Fri, Feb 19, 2016 at 4:03 PM, tao xiao wrote:
>
> > which version of consumer do you use?
> >
> >
> > On Fri, 19 Feb 2016 at 15:26 Amoxicillin wro
which version of consumer do you use?
On Fri, 19 Feb 2016 at 15:26 Amoxicillin wrote:
> I tried as you suggested, but still no output of any group info.
>
> On Fri, Feb 19, 2016 at 2:45 PM, tao xiao wrote:
>
> > That is what I mean by alive. If you use the new consumer connecting
:
> How to confirm the consumer groups are alive? I have one consumer in the
> group running at the same time, and could get messages correctly.
>
> On Fri, Feb 19, 2016 at 1:40 PM, tao xiao wrote:
>
> > when using ConsumerGroupCommand you need to make sure your consumer
>
when using ConsumerGroupCommand you need to make sure your consumer groups
are alive. It only queries offsets for consumer groups that are currently
connected to the brokers
On Fri, 19 Feb 2016 at 13:35 Amoxicillin wrote:
> Hi,
>
> I use kafka.tools.ConsumerOffsetChecker to view the consumer offse
You can follow the instructions here
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-6.ReassignPartitionsTool
On Mon, 15 Feb 2016 at 03:32 Nikhil Bhaware
wrote:
> Hi,
> I have 6 node kafka cluster on which i created topic with 3
> replication-factor and 1000
I think you can try with
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/utils/AppInfoParser.java
On Fri, 5 Feb 2016 at 07:01 wrote:
> Is there a way to detect the broker version (even at a high level 0.8 vs
> 0.9) using the kafka-clients Java library?
Hi team,
I want to understand the meaning of request.timeout.ms that is used in the
producer. As per the doc this property is used to expire records that have
been waiting for response from server for more than request.timeout.ms
which also means the records have been sitting in InFlightRequests fo
No. With receive.buffer.bytes set to 64K I am unable to reproduce the
error. BTW I set the message size to 10K when testing with the 64K buffer size.
On Thu, 28 Jan 2016 at 01:35 Jason Gustafson wrote:
> Hey Tao,
>
> If you increase "receive.buffer.bytes" to 64K, can you still reproduce the
> pro
The new consumer only supports storing offsets in Kafka
On Wed, 27 Jan 2016 at 05:26 wrote:
> Does the new KafkaConsumer support storing offsets in Zookeeper or only in
> Kafka? By looking at the source code I could not find any support for
> Zookeeper, but wanted to confirm this.
>
> --
> Best re
I managed to reproduce this issue on my mac with receive.buffer.bytes
set to the new consumer default value. My JVM is HotSpot 64-bit 1.7.0_60
and mac 10.10.5
On Wed, 27 Jan 2016 at 02:54 Krzysztof Ciesielski <
krzysztof.ciesiel...@softwaremill.pl> wrote:
> Hi Jason,
>
> Lowering "receive.buffer.
Thank you.
On Fri, 22 Jan 2016 at 08:39 Guozhang Wang wrote:
> Done.
>
> On Thu, Jan 21, 2016 at 12:38 AM, tao xiao wrote:
>
> > Hi Guozhang,
> >
> > Thanks for that.
> >
> > Can you please grant kevinth the write access too? He is my colleague
2016 at 7:56 PM, Connie Yang
> wrote:
>
> > @Ismael, what's the status of the SASL/PLAIN PR,
> > https://github.com/apache/kafka/pull/341?
> >
> >
> >
> > On Tue, Jan 19, 2016 at 6:25 PM, tao xiao wrote:
> >
> > > The PR provides a new SAS
Hi team,
In the Kafka protocol wiki it states that version 0 is the only supported
version in all APIs. I want to know if this is still true, and if not,
which APIs now use version 1?
It is possible that you closed the producer before the messages accumulated
in the batch had been sent out.
You can modify your producer as below to make it a synchronous call and test again.
producer.send(new ProducerRecord<>("test",
0, Integer.toString(i), Integer.toString(i))).get();
On Wed, 20 Jan 2016 a
e of the changes are in
> the following PR:
>
> https://github.com/apache/kafka/pull/341
>
> As per the discussion in the PR above, a KIP is also required.
>
> Ismael
>
> On Wed, Jan 20, 2016 at 1:48 AM, tao xiao wrote:
>
> > Hi Ismael,
> >
> > BTW l
Hi Ismael,
BTW looks like I don't have the permission to add a KIP in Kafka space. Can
you please grant me the permission?
On Wed, 20 Jan 2016 at 09:40 tao xiao wrote:
> Hi Ismael,
>
> Thank you for your reply. I am happy to have a writeup on this.
>
> Can you think of an
vement+Proposals
>
> Ismael
>
> On Mon, Jan 18, 2016 at 3:15 AM, tao xiao wrote:
>
> > Hi Kafka team,
> >
> > I want to know if I can plug-in my own security protocol to Kafka to
> > implement project specific authentication mechanism. The current
> supported
Hi Kafka team,
I want to know if I can plug-in my own security protocol to Kafka to
implement project specific authentication mechanism. The current supported
authentication protocols, SASL/GSSAPI and SSL, are not supported in my
company and we have own security protocol to do authentication.
Is
Hi,
I found that the latest mirror maker with new consumer enabled was unable
to commit offsets in time when mirroring a topic with very infrequent
messages.
I have a topic with a few of messages produced every half hour. I setup
mirror maker to mirror this topic with default config. I observed th
Mirror maker's whitelist uses a Java regex pattern:
https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html. In
your case the whitelist should be ".*"
On Fri, 8 Jan 2016 at 20:54 Stephen Powis wrote:
> Hey!
>
> I'm having a little trouble with mirror maker, if I supply --whitelist "*"
> it will not
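The behavior described above can be demonstrated directly with java.util.regex: a bare "*" is not a valid pattern (a dangling quantifier), while ".*" matches any topic name.

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

// Shows why --whitelist "*" fails while ".*" works: mirror maker compiles
// the whitelist as a java.util.regex.Pattern.
public class WhitelistRegex {
    public static void main(String[] args) {
        // ".*" is a valid regex matching every topic name.
        System.out.println(Pattern.compile(".*").matcher("my-topic").matches()); // true

        // A bare "*" is a dangling quantifier and fails to compile.
        try {
            Pattern.compile("*");
            System.out.println("compiled");
        } catch (PatternSyntaxException e) {
            System.out.println("invalid pattern: " + e.getDescription());
        }
    }
}
```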
by design. Any tool we write will be up against the same
> restriction. We might be able to think of a way to bypass it, but that
> sounds dangerous.
>
> Out of curiosity, what's the advantage in your use case to setting offsets
> out-of-band? I would probably consider option
You can bump the log level to warn for a particular class
log4j.logger.kafka.network.Processor=WARN
On Tue, 5 Jan 2016 at 08:33 Dillian Murphey wrote:
> Constant spam of this INFO on my log.
>
> [2016-01-05 00:31:15,887] INFO Closing socket connection to /10.9.255.67.
> (kafka.network.Processor
& Elasticsearch Support
> > Sematext <http://sematext.com/> | Contact
> > <http://sematext.com/about/contact.html>
> >
> > On Mon, Jan 4, 2016 at 12:18 PM, tao xiao wrote:
> >
> > > Hi team,
> > >
> > > I have a
tact
> <http://sematext.com/about/contact.html>
>
> On Mon, Jan 4, 2016 at 12:18 PM, tao xiao wrote:
>
> > Hi team,
> >
> > I have a scenario where I want to write new offset for a list of topics
> on
> > demand. The list of topics is unknown until ru
Hi team,
I have a scenario where I want to write new offset for a list of topics on
demand. The list of topics is unknown until runtime and the interval
between each commit is undetermined. what would be the best way to do so?
One way I can think of is to create a new consumer and call
commitSync
created https://issues.apache.org/jira/browse/KAFKA-2993. I will submit a
patch for this
On Wed, 16 Dec 2015 at 00:53 Ismael Juma wrote:
> Yes, please.
>
> Ismael
> On 15 Dec 2015 11:52, "tao xiao" wrote:
>
> > Hi team,
> >
> > I found that the
Hi team,
I found that the producer metric compression-rate-avg always returns 0 even
with compression.type set to snappy. I drilled down the code and discovered
that the position of bytebuffer in
org.apache.kafka.common.record.Compressor is reset to 0 by
RecordAccumulator.drain() before calling me
nitial overhead before data can be fetched. For
> example,
> > > the
> > > > group has to be joined and topic metadata has to be fetched. Do you
> see
> > > > unexpected empty fetches beyond the first 10 polls?
> > > >
> > > > Thanks,
>
Jason Gustafson wrote:
>
> > There is some initial overhead before data can be fetched. For example,
> the
> > group has to be joined and topic metadata has to be fetched. Do you see
> > unexpected empty fetches beyond the first 10 polls?
> >
> > Thanks,
> &
; org.apache.kafka.tools.ProducerPerformance class to see if it makes a
> > difference?
> >
> > On Tue, Dec 1, 2015 at 11:35 AM tao xiao wrote:
> >
> > > Gerard,
> > >
> > > In your case I think you can set fetch.min.bytes=1 so that the server
&
lling I saw little difference. I did
> see for example that when producing only 1 message a second, still it
> sometimes wait to get three messages. So I also would like to know if there
> is a faster way.
>
> On Tue, Dec 1, 2015 at 10:35 AM tao xiao wrote:
>
> > Hi team,
>
Hi team,
I am using the new consumer with broker version 0.9.0. I notice that
poll(time) occasionally returns 0 messages even though I have enough
messages in the broker. The rate of returning 0 messages is quite high, like 4
out of 5 polls returning 0 messages. It doesn't help to increase the poll
timeout
Hi team,
In 0.9.0.0 a new tool, kafka-consumer-groups.sh, is introduced to show offset
and lag for a particular consumer group, preferred over the old
kafka-consumer-offset-checker.sh. However I found that the new tool
only works for consumer groups that are currently active but not for
consumer gro
The new consumer only works with the broker in trunk now which will soon
become 0.9
On Sun, 1 Nov 2015 at 20:54 Alexey Pirogov wrote:
> Hi, I'm facing an issue with New Consumer. I'm getting
> GroupCoordinatorNotAvailableException during "poll" method:
> 12:32:50,942 ERROR ConsumerCoordinator:
Hi,
I am starting a new project that requires heavy use of the Kafka consumer. I
did a quick look at the new Kafka consumer and found it provides some of
the features we definitely need. But as it is annotated as unstable, is it
safe to rely on it, or will it still be evolving dramatically in coming
relea
Thanks James
On Thu, 15 Oct 2015 at 12:16 James Cheng wrote:
>
> > On Oct 15, 2015, at 11:29 AM, tao xiao wrote:
> >
> > Hi team,
> >
> > Does new consumer client (the one in trunk) work with 0.8.2.x broker? I
> am
> > planning to use the new con
Hi team,
Does new consumer client (the one in trunk) work with 0.8.2.x broker? I am
planning to use the new consumer in our development but don't want to
upgrade the broker to the latest. is it possible to do that?
0.8.2.1 already supports Kafka offset storage. You can
set offsets.storage=kafka in the consumer properties and the high level API is
able to pick it up and commit offsets to Kafka.
Here is the code reference where kafka offset logic kicks in
https://github.com/apache/kafka/blob/0.8.2/core/src/main/scala/k
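The setup described above can be sketched as high-level consumer properties. This is an illustrative sketch for 0.8.2.1; the ZooKeeper address and group id are placeholders, and whether you want dual.commit.enabled (committing to both Kafka and ZooKeeper during migration) depends on your setup.

```java
import java.util.Properties;

// Sketch of 0.8.2.1 high-level consumer properties with Kafka-based offset
// storage. dual.commit.enabled=false commits to Kafka only; set it to true
// to also commit to ZooKeeper while migrating.
public class OffsetStorageProps {
    public static Properties build() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        props.put("offsets.storage", "kafka");
        props.put("dual.commit.enabled", "false");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("offsets.storage"));
    }
}
```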
Joris,
You can check out Burrow, https://github.com/linkedin/Burrow, which gives you
monitoring and alerting capability for offsets stored in Kafka
On Wed, 16 Sep 2015 at 23:05 Joris Peeters
wrote:
> I intend to write some bespoke monitoring for our internal kafka system.
> Amongst others, it should gi
tion mechanism in the High level consumer.
>
>
>
> On Thu, Sep 10, 2015 at 8:43 AM, tao xiao wrote:
>
> > You can create message streams using regex that includes all topics. The
> > beauty of regex is that any new topic created will be automatically
> > consumed as
You can create message streams using regex that includes all topics. The
beauty of regex is that any new topic created will be automatically
consumed as long as the name of the topic matches the regex
You can check the method createMessageStreamsByFilter in the high level API
On Thu, Sep 10, 2015 at 11:0
Here is a good doc to describe how to choose the right number of partitions
http://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/
On Fri, Sep 4, 2015 at 10:08 PM, Jörg Wagner wrote:
> Hello!
>
> Regarding the recommended amount of partitions I am a bit co
In the trunk code mirror maker provides the ability to filter out messages
on demand by supplying a message handler.
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/MirrorMaker.scala#L443
On Wed, 26 Aug 2015 at 11:35 Binh Nguyen Van wrote:
> Hi all,
>
> Is there any t
Hi team,
What is the best way to cancel an in progress partition reassignment job? I
know it saves the json in /admin/reassign_partitions in zk. Is it ok to
delete the znode?
Hi,
I have a requirement that needs to call partition reassignment inside Java
code. The current implementation of ReassignPartitionsCommand expects
a json file to be passed in. Other than generating a json file and saving it
somewhere in my code, what are the other options that I can invoke the co
It may be possibly caused by this issue
https://issues.apache.org/jira/browse/KAFKA-2281. Can you apply the patch
and try again?
On Thu, 6 Aug 2015 at 04:03 Liju John wrote:
> Just to add more info -
>
> Our message size = 1.5MB , so effectively there was no batching as our
> batch size is 200
> as the leader. This means that if you do a partition reassignment and
> change the replica list from [1, 2] to [2, 1], nothing happens at first.
> But upon the next preferred replica election, broker 2 will be selected as
> the leader.
>
> -Todd
>
>
> On Wed, Aug 5, 20
eader
On Wed, 5 Aug 2015 at 18:32 Jilin Xie wrote:
> Check the --replica-assignment parameter of the kafka-topics.sh.
> It does what you need.
> And there should also be similar configs in the api if you wanna do so by
> coding.
>
> On Wed, Aug 5, 2015 at 6:18 PM, tao xia
Hi team,
Is it possible to specify a leader broker for each topic partition when
doing partition reassignment?
For example I have following json. Is the first broker in the replicas list
by default the leader of the partition e.g. broker 3 is the leader of topic
test5 and broker 2 is the leader o
Correct me if I'm wrong. If compaction is used, offset + 1 no longer indicates
the next valid offset. For the compacted section the offsets do not increase
sequentially. I think you need to check the offset of the last
processed record to figure out what the next offset will be
On Wed, 29 Jul 2015 at
James,
You can reference the Confluent Schema Registry implementation:
http://docs.confluent.io/1.0/schema-registry/docs/index.html
It does a similar thing to what you described: a REST front end that serves
data from a compacted topic, and HA is also provided in the solution.
On Tue, 21 Jul 2015 at
The OOME issue may be caused
by org.apache.kafka.clients.producer.internals.ErrorLoggingCallback holding
unnecessary byte[] value. Can you apply the patch in below JIRA and try
again?
https://issues.apache.org/jira/browse/KAFKA-2281
On Wed, 15 Jul 2015 at 06:42 francesco vigotti
wrote:
> I've
org.apache.kafka.clients.tools.ProducerPerformance resides in
kafka-clients-0.8.2.1.jar.
You need to make sure the jar exists in $KAFKA_HOME/libs/. I use
kafka_2.10-0.8.2.1
too and here is the output
% bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance
USAGE: java org.apach
und. We need to check
> why this happens exactly and get to the root cause. What do you think?
> Getting to the root cause of this might be really useful.
>
> Thanks,
>
> Mayuresh
>
> On Sun, Jul 12, 2015 at 8:45 PM, tao xiao wrote:
>
> > Restart the consumers
I'm not sure if throwing an exception to the user on this exception is good
> handling or not. What are users supposed to do in that case other than
> retry?
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On 7/12/15, 7:16 PM, "tao xiao" wrote:
>
> >We saw the error again
Hi team,
The trunk code of mirror maker currently uses the old consumer API. Is there any
plan to use the new Java consumer API in mirror maker?
We saw the error again in our cluster. Anyone has the same issue before?
On Fri, 10 Jul 2015 at 13:26 tao xiao wrote:
> Bump the thread. Any help would be appreciated.
>
> On Wed, 8 Jul 2015 at 20:09 tao xiao wrote:
>
>> Additional info
>> Kafka version: 0.8.2.1
>
Bump the thread. Any help would be appreciated.
On Wed, 8 Jul 2015 at 20:09 tao xiao wrote:
> Additional info
> Kafka version: 0.8.2.1
> zookeeper: 3.4.6
>
> On Wed, 8 Jul 2015 at 20:07 tao xiao wrote:
>
>> Hi team,
>>
>> I have 10 high level consumers conn
You can follow the instruction listed here
http://kafka.apache.org/contact.html
In short send an email to users-unsubscr...@kafka.apache.org
On Thu, 9 Jul 2015 at 18:52 Monika Garg wrote:
> Still I am not able to unsubscribe and still getting mails.
>
>
> Please help me out as It is filling my
Additional info
Kafka version: 0.8.2.1
zookeeper: 3.4.6
On Wed, 8 Jul 2015 at 20:07 tao xiao wrote:
> Hi team,
>
> I have 10 high level consumers connecting to Kafka and one of them kept
> complaining "conflicted ephemeral node" for about 8 hours. The log was
> fi
Hi team,
I have 10 high level consumers connecting to Kafka and one of them kept
complaining "conflicted ephemeral node" for about 8 hours. The log was
filled with below exception
[2015-07-07 14:03:51,615] INFO conflict in
/consumers/group/ids/test-1435856975563-9a9fdc6c data:
{"version":1,"subsc
That is so cool. Thank you
On Sun, 28 Jun 2015 at 04:29 Guozhang Wang wrote:
> Tao, I have added you to the contributor list of Kafka so you can assign
> tickets to yourself now.
>
> I will review the patch soon.
>
> Guozhang
>
> On Thu, Jun 25, 2015 at 2:54 AM, tao