I think this exception is logged by the consumer when there is a leadership
change for a partition (which might be caused by one of the brokers going
down or losing its session with zookeeper). Typically the consumer will
recover after the cluster finds a new leader and the consumer updates its
metadata.
Carl,
I doubt whether the change you proposed will have an at-least-once guarantee.
consumedOffset
is the offset after the message that is being returned from
iterator.next(). For example, if the message returned is A with offset 1,
then consumedOffset will be set to 2 in currentTopicInfo. While the consum
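For reference, here is a minimal sketch of the commit-after-processing
pattern that preserves at-least-once delivery. It is written against the new
Java consumer rather than the high-level consumer discussed above, and the
broker address, group id, and topic name are placeholders:

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class AtLeastOnceLoop {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "my-group");                // placeholder
            props.put("enable.auto.commit", "false");         // commit manually
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer =
                     new KafkaConsumer<>(props)) {
                consumer.subscribe(Arrays.asList("my-topic")); // placeholder
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records) {
                        process(record); // do the work first...
                    }
                    // ...then commit, so a crash can only replay, never skip.
                    consumer.commitSync();
                }
            }
        }

        static void process(ConsumerRecord<String, String> record) {
            System.out.println(record.offset() + ": " + record.value());
        }
    }

The point mirrors the concern above: the committed offset is the offset
after the last message handed out, so committing before processing finishes
can skip a message on failure, while committing afterwards can only
re-deliver it.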
Hello
Is there a good reference for best practices on running Java consumers?
I'm thinking a FAQ format.
- How should we run them? We are currently running them in Tomcat on
Ubuntu; are there other approaches using services? Maybe the service
wrapper http://wrapper.tanukisoftware.com/d
OK, let me make it a little clearer.
Datacenter A has 3 nodes, each acting as a broker, publishing messages to
one of the nodes that has zookeeper running.
Datacenter B has the same setup.
Now, I am trying to publish messages from one of the nodes in A to the ZK in
A and make one of the nodes
Hi Srividhya,
I'm a little confused about your setup. You have both clusters pointed to
the same zookeeper, right? You don't appear to be using the zookeeper
chroot option, so I think they would just form a single cluster.
-Jason
On Mon, Jun 22, 2015 at 3:50 PM, Srividhya Anantharamakrishnan <
s
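For reference, a sketch of the chroot approach Jason mentions: giving each
cluster its own path suffix on the shared zookeeper keeps them as separate
Kafka clusters (hostnames and paths below are made up for illustration):

    # server.properties on the datacenter A brokers
    zookeeper.connect=zkhost:2181/kafka-dcA

    # server.properties on the datacenter B brokers
    zookeeper.connect=zkhost:2181/kafka-dcB

Without a chroot suffix, both sets of brokers register under the same
/brokers path and form one cluster.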
The replicas do not have to decompress/recompress so I don't think
that would contribute to this.
There may be some corner cases such as:
- Multiple unclean leadership elections in sequence
- Changing the compression codec for a topic on the fly - different
brokers may see this config change at
Hi,
I have the following Kafka setup: two 3-node clusters A and B, where each
node is acting as a broker and is connected to one ZK running on one node in
cluster A.
I was able to publish messages from cluster A and could consume from
both the A and B clusters.
However, I am suddenly ru
Gah. I traced it down to an IOException that wasn't being handled (since
BlockingChannel doesn't declare that it throws one) and was swallowed in
all the log output from the test servers.
Thanks for the help!
On Sun, Jun 21, 2015 at 4:37 PM Jiangjie Qin
wrote:
> Hmm, it might be a little bit d
Noob question here. I want to have a single consumer for each partition
that consumes only the messages that have been written locally. In other
words, I want the consumer to access the local disk and not pull
anything across the network. Possible?
How can I discover which partitions are local?
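One way to answer the discovery half of this, sketched against the new Java
consumer's partitionsFor() call (broker address and topic name are
placeholders, and matching hostnames this way is naive if the broker's
advertised host differs from the machine's own hostname):

    import java.net.InetAddress;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.PartitionInfo;

    public class LocalPartitions {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            String localHost = InetAddress.getLocalHost().getHostName();
            try (KafkaConsumer<String, String> consumer =
                     new KafkaConsumer<>(props)) {
                // partitionsFor() reports each partition's current leader.
                for (PartitionInfo info : consumer.partitionsFor("my-topic")) {
                    if (info.leader() != null
                            && localHost.equals(info.leader().host())) {
                        System.out.println("leader is local for partition "
                            + info.partition());
                    }
                }
            }
        }
    }

Note that even when the leader is co-located, the consumer still fetches
from the broker over a TCP socket rather than reading the log segments on
disk, so "local" here only means the traffic stays on the machine.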
Thanks Adam for a very detailed explanation.
Regards,
Kris
On Fri, Jun 19, 2015 at 7:23 AM, Adam Shannon
wrote:
> Basically it boils down to the fact that distributed computers and their
> networking are not reliable. [0] So, in order to ensure that messages do
> in fact get across there are cas
Hi Shayne,
Each consumer has one partition to consume from, and duplicate messages are
being received from different consumers.
Thanks,
Kris
On Fri, Jun 19, 2015 at 7:41 AM, Shayne S wrote:
> Duplicate messages might be due to network issues, but it is worthwhile to
> dig deeper.
>
> It sounds
When I add partitions to a topic after creation, is a restart of the
producers required?
I am using Java producers and the messages are keyed, so when the total
number of partitions changes, do we need to restart the producers, or does
the producer pick up the current number of partitions on each send call?
As per org.apache.kafka.common.u
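For what it's worth, here is a sketch of the keyed-partitioning arithmetic
in the new Java producer's default partitioner as of the 0.8.2-era code
(treat the details as an assumption and check the source for your version):

    import org.apache.kafka.common.utils.Utils;

    public class KeyedPartitionSketch {
        // Default keyed path: hash the serialized key, then mod by the
        // partition count taken from the producer's current metadata.
        static int partitionFor(byte[] keyBytes, int numPartitions) {
            return Utils.abs(Utils.murmur2(keyBytes)) % numPartitions;
        }

        public static void main(String[] args) {
            // The same key can map to a different partition once the
            // partition count changes.
            byte[] key = "order-42".getBytes();
            System.out.println(partitionFor(key, 8));  // before adding partitions
            System.out.println(partitionFor(key, 12)); // after adding partitions
        }
    }

Because the partition count comes from metadata the producer refreshes on
its own, no restart should be needed when partitions are added; the catch is
that existing keys can start hashing to different partitions, so per-key
ordering across the change is not preserved.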
I assume that you are considering the data loss to be the difference in
size between the two directories? This is generally not a good guideline,
as the batching and compression will be different between the two replicas.
-Todd
On Mon, Jun 22, 2015 at 7:26 AM, Nirmal ram
wrote:
> Hi,
>
> I not
Hi,
Please, please throw an UnsupportedOperationException from unimplemented methods
instead of just returning null! I'm talking about the poll() method in
KafkaConsumer.
I just spent two hours trying to figure out why I always got null from it...
Best regards,
Tomas G.
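For illustration, the requested pattern would look something like this
sketch (not the actual KafkaConsumer source):

    import org.apache.kafka.clients.consumer.ConsumerRecords;

    public class StubConsumer<K, V> {
        // Fail loudly from a not-yet-implemented method instead of
        // silently returning null.
        public ConsumerRecords<K, V> poll(long timeout) {
            throw new UnsupportedOperationException(
                "poll() is not implemented yet");
        }
    }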
Hi,
I noticed data loss while storing to the Kafka logs.
Generally, the leader hands the request to the followers; is there data loss
in that process?
topic 'jun8' with 2 replicas and 8 partitions
Broker 1: [user@ jun8-6]$ ls -ltr
total 7337500
-rw-rw-r-- 1 user user 1073741311 Jun 22 12:45 000
Actually, there is one more case:
Case 3: skipping messages / cleaning up topics
It might be less applicable in production, but it comes up quite often in load testing:
sometimes you find issues with either events that are already pushed into
Kafka, or the app that processes them - and you know that you
Thanks, Raja, Guozhang, for your response!
Raja - the slides are great, very helpful information - would be good to have
them included in Kafka's wiki pages too.
Guozhang,
here are two use cases where I found having a cmd tool very useful:
Case 1: failed events re-processing
While processi
ah, ok.
--config has to be given in front of each config parameter separately
this worked:
--config min.insync.replicas=2 --config unclean.leader.election.enable=false
So that leaves only the default value issue,
i.e. if a config is not given, the value from server.properties should be used.
On Mon, Jun 22, 2015 a
Hi Again!
Unlike min.insync.replicas, unclean.leader.election.enable isn't set to
false even if it's given as 'false' in the create-topic command.
here is the command used to create the topic:
$./kafka-topics.sh --create --topic bbb --zookeeper localhost
--replication-factor 3 --partitions 3 --config mi
OK, thanks. I agree that the current code is better if you get lots of
rebalancing, and that you can do your own thing for stronger guarantees.
For the new consumer, it looks like it should be possible to use multiple
threads, as long as partition order is preserved in the processing, right?
So, one can bu
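One way to build that, sketched against the new consumer API (broker
address, group id, topic name, and timeouts are placeholders): hand each
partition's records to its own single-threaded executor, so records within a
partition stay ordered while different partitions proceed in parallel.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class PartitionOrderedWorkers {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "ordered-workers");         // placeholder
            props.put("enable.auto.commit", "false");         // see note below
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            Map<TopicPartition, ExecutorService> workers = new HashMap<>();
            try (KafkaConsumer<String, String> consumer =
                     new KafkaConsumer<>(props)) {
                consumer.subscribe(Arrays.asList("my-topic")); // placeholder
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records) {
                        TopicPartition tp = new TopicPartition(
                            record.topic(), record.partition());
                        // One single-threaded executor per partition keeps
                        // per-partition order; partitions run in parallel.
                        workers.computeIfAbsent(tp,
                                p -> Executors.newSingleThreadExecutor())
                               .submit(() -> process(record));
                    }
                }
            }
        }

        static void process(ConsumerRecord<String, String> record) {
            System.out.println(record.partition() + "/" + record.offset()
                + ": " + record.value());
        }
    }

Offset management is the tricky part in this shape: it is only safe to
commit an offset once everything before it in that partition has actually
been processed, which is why auto-commit is disabled above and why the
rebalancing caveats mentioned earlier still apply.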