Hi,
You might want to take a look at the "request.timeout.ms" setting for
the consumer (see
http://kafka.apache.org/documentation.html#newconsumerconfigs). The default
timeout is about 5 minutes. For the producer this timeout is around 30
seconds, which possibly explains why it works for you.
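For reference, here is a minimal sketch (not from the original thread; the
broker address, group id, and 30s value are just placeholders) of overriding
that timeout on the consumer:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
props.put(ConsumerConfig.GROUP_ID_CONFIG, "timeout-demo");            // placeholder
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
    "org.apache.kafka.common.serialization.StringDeserializer");
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
    "org.apache.kafka.common.serialization.StringDeserializer");
// Fail faster than the ~5 minute default when a broker is unreachable;
// note this must stay larger than session.timeout.ms and fetch.max.wait.ms.
props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, "30000");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);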
Most likely something went wrong creating the keystores, causing the SSL
handshake to fail. It's important to have a valid chain, from the
certificate in the truststore, through any intermediates, to the
keystore.
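For what it's worth, a minimal sketch of the client-side SSL settings that go
with such a keystore/truststore pair (all paths and passwords below are
placeholders, not from the original thread):

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SslConfigs;

Properties props = new Properties();
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
// Truststore: must contain the CA that signed the broker certificate.
props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/client.truststore.jks");
props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");
// Keystore: the client certificate plus any intermediates, forming a full chain.
props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/path/to/client.keystore.jks");
props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "changeit");
props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "changeit");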
On Fri, Dec 16, 2016, 00:32 Raghu B wrote:
Thanks Derar & Kiran, your suggestions are very useful.
Any solution?
------------------ Original Message ------------------
From: "Xiaoyuan Chen" <253441...@qq.com>
Date: 2016-12-09 (Fri) 10:15
To: "users"
Subject: The connection between Kafka and ZooKeeper is often closed by
ZooKeeper, leading to NotLeaderForPartitionException: This server is not the
leader for that topic-partition.
Thanks Derar & Kiran, your suggestions are very useful.
I enabled Log4J debug mode and found that my client is trying to connect to
the Kafka server as User:ANONYMOUS, which is really strange.
I added a new super.users entry with the name User:ANONYMOUS, and then I was
able to send and receive the messages.
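As an aside, one common cause of a client showing up as User:ANONYMOUS is
that it never actually authenticates. A sketch of the era-appropriate (0.10.x)
SASL wiring on the client side, where the JAAS path and PLAIN mechanism are
assumptions for illustration:

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;

// Without these settings the client connects unauthenticated, and the
// broker's authorizer sees the principal User:ANONYMOUS.
System.setProperty("java.security.auth.login.config",
    "/path/to/kafka_client_jaas.conf"); // placeholder path
Properties props = new Properties();
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
props.put("sasl.mechanism", "PLAIN"); // assumption: PLAIN mechanism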
Hi all,
I apologize for flooding the list with questions lately. I guess I’m having a
rough week.
I thought my app was finally running fine after Damian’s help on Monday, but it
turns out that it hasn’t been (successfully) consuming 2 of the topics it
should be (out of 11 total).
I’ve been tr
Hello Kafka users, developers and client-developers,
This is the second, and hopefully the last candidate for the release of
Apache Kafka 0.10.1.1 before the break. This is a bug fix release and it
includes fixes and improvements from 30 JIRAs. See the release notes for
more details:
http://home.
What are the retention settings for these (-changelog and
-replication) topics? I'm wondering whether the relentless rebalancing issues
I'm facing have anything to do with consumers that lag too far behind.
If I delete all the topics associated with a KStream project and restart it
there are
For #2, definitely use a compacted topic. Compaction will remove old
messages and keep the last update for each key. To use this feature you
will need to publish messages as key/value pairs. Apache Kafka 0.10.1 has
some important fixes that make compacted topics more reliable when scaling to
large nu
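To make that concrete, a small sketch of publishing keyed updates (the topic
name, key, and values are invented for illustration; the topic itself would be
created with cleanup.policy=compact):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // placeholder
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
KafkaProducer<String, String> producer = new KafkaProducer<>(props);
// Both records share the key "person-42"; after compaction only the
// latest value for that key is retained.
producer.send(new ProducerRecord<>("person-updates", "person-42", "{\"city\":\"NYC\"}"));
producer.send(new ProducerRecord<>("person-updates", "person-42", "{\"city\":\"SF\"}"));
producer.close();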
Also see https://github.com/confluentinc/kafka-rest-node for an example
JavaScript wrapper on the Confluent REST Proxy.
You definitely do not have to use Kafka Connect to pub/sub to Kafka via REST.
-hans
> On Dec 15, 2016, at 11:17 AM, Stevo Slavić wrote:
>
> https://github.com/confluentinc
https://github.com/confluentinc/kafka-rest
On Thu, Dec 15, 2016 at 8:12 PM, Gautham Kumaran <
gautham.kumara...@gmail.com> wrote:
> Hi Everyone,
>
> I'm trying to learn Kafka. So this might be basic :)
>
> I've come across the following API call to a Kafka topic directly, but I
> was under the impression that a connector API is needed to access a topic
> through the REST API.
Hi Everyone,
I'm trying to learn Kafka. So this might be basic :)
I've come across the following API call to a Kafka topic directly, but I
was under the impression that a connector API is needed to access a topic
through the REST API.
Index.html
// Call the Kafka Rest Api
$.ajax({
url:'/item-r
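For completeness, here is a rough sketch of the same kind of direct produce
call against the Confluent REST Proxy, written in Java for consistency with
the other snippets in this thread (host, port, topic name, and payload are all
assumptions, as is the v1 embedded-JSON content type):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// POST one JSON record straight to a topic via the REST Proxy (default port 8082).
URL url = new URL("http://localhost:8082/topics/item-reads"); // placeholder topic
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setRequestProperty("Content-Type", "application/vnd.kafka.json.v1+json");
conn.setDoOutput(true);
String body = "{\"records\":[{\"value\":{\"item\":\"abc\",\"count\":1}}]}";
try (OutputStream os = conn.getOutputStream()) {
    os.write(body.getBytes(StandardCharsets.UTF_8));
}
System.out.println("HTTP " + conn.getResponseCode());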
Thanks, Kenny, for confirming. By message updates I mean that for the same
document/message there will be updates coming in (e.g., person details
may change). As you mentioned, using the proper key should make that happen,
so good on that.
On Thu, Dec 15, 2016 at 1:16 PM, Kenny Gorman wrote:
>
> On Dec 15, 2016, at 11:33, Damian Guy wrote:
>
> Technically you can, but not without writing some code. If you want to use
> consumer groups then you would need to write a custom PartitionAssignor and
> configure it in your Consumer Config, like so:
> consumerProps.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
> YourPartitionAssignor.class.getName());
A couple of thoughts:
- If you plan on fetching old messages in a non-contiguous manner then this may
not be the best design. For instance, “give me messages from Mondays for the
last 3 quarters” is better served with a database. But if you want to say “give
me messages from the last month until
Sorry, I do not have any info on the backup and recovery plan at this point in
time. Please consider both cases (no backup AND backup).
On Thu, Dec 15, 2016 at 1:06 PM, Tauzell, Dave wrote:
> What is the plan for backup and recovery of the kafka data?
>
> -Dave
>
> -----Original Message-----
> From
What is the plan for backup and recovery of the kafka data?
-Dave
-----Original Message-----
From: Susheel Kumar [mailto:susheel2...@gmail.com]
Sent: Thursday, December 15, 2016 12:00 PM
To: users@kafka.apache.org
Subject: Kafka as a database/repository question
Hello Folks,
I am going thru an
Hello Folks,
I am going through an existing design where Kafka is planned to be utilised in
the following manner:
1. Messages will be pushed to Kafka by producers.
2. There will be updates to existing messages on an ongoing basis. The
expectation is that all the updates are consolidated in Kafka and the
I am also stuck with the same problem using the Kafka client with Java.
On Dec 15, 2016 8:09 AM, "Costache, Vlad" wrote:
> Hello,
>
> We are trying to build a consumer for Kafka (standalone client code and
> Camel-integrated), and we have ended up at a blocking point.
>
> Can you please give us advice, or any other ideas?
Hi Avi,
Technically you can, but not without writing some code. If you want to use
consumer groups then you would need to write a custom PartitionAssignor and
configure it in your Consumer Config, like so:
consumerProps.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
YourPartitionAssignor.class.getName());
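For reference, a sketch of that wiring using the built-in RoundRobinAssignor
as a stand-in for the custom class (a real custom assignor would implement the
PartitionAssignor interface and be named the same way):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.RoundRobinAssignor;

Properties consumerProps = new Properties();
// Swap in your own assignor's class name here once it is written.
consumerProps.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
    RoundRobinAssignor.class.getName());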
I’m trying to debug something about a Kafka Streams app, and it would be
helpful to me to start up a new instance of the app that’s only consuming from
a subset of the topics from which this app consumes. I’m hesitating though
because I don’t know if the consumer group scheme will support this s
Hello,
We are trying to build a consumer for Kafka (standalone client code and
Camel-integrated), and we have ended up at a blocking point.
Can you please give us advice, or any other ideas?
Our problem:
- We create a Kafka consumer that connects to a wrong server (wrong
ip/port), and the consum
Hi,
Here are some of my experiences with MirrorMaker, but I'm also eager to read
what others do:
1. The main issue for me is rebalancing. If you have several instances of MM
under the same group, when one of them dies, loses network connectivity, or
you just need to add new partitions to the whitelist
Attached is a debug log showing this exception.
Question: is it typical to have so many disconnections from brokers?
This log also includes the exception "Log end offset should not change
while restoring".
Attachment: errors.log.gz
Hi,
Good Afternoon.
We are implementing Kafka MirrorMaker to replicate data from a production
Kafka cluster to a DR Kafka cluster. I'm trying to understand answers to the
following queries:
What are the known bottlenecks / issues one needs to be aware of from a
MirrorMaker perspective?
Also, are the
I have just noticed that I am using a user which is not configured in the
Kafka server JAAS config file.
On Thu, Dec 15, 2016 at 6:38 PM, kiran kumar wrote:
> Hi Raghu,
>
> I am also facing the same issue but with the SASL_PLAINTEXT protocol.
>
> After enabling debugging I see that authentication is being completed.
Hi Raghu,
I am also facing the same issue, but with the SASL_PLAINTEXT protocol.
After enabling debugging I see that authentication is being completed. I
don't see any debug logs being generated for the authorization part (I might
be missing something).
You can also set the log level to debug in prop
Update: the app ran well for several hours, until I tried to update it. I
copied a new build up to one machine (of five) and then we went back to
near-endless rebalance. After about an hour I ended up killing the other
four instances and watching the first (new) one. It took 90 minutes before
it s
Folks, any explanation for this? Or any link that can help me with that?
On Tue, Dec 13, 2016 at 1:00 PM, Sachin Mittal wrote:
> Hi,
> I have some trouble interpreting the result of GetOffsetShell command.
>
> Say if I run
> bin\windows\kafka-run-class.bat kafka.tools.GetOffsetShell --broker-list
>
Read this blog post. It's about Kafka Streams but you can apply
bin/kafka-streams-application-reset.sh to regular client applications, too.
https://www.confluent.io/blog/data-reprocessing-with-kafka-streams-resetting-a-streams-application/
-Matthias
On 12/14/16 10:25 PM, Jagat Singh wrote:
>
I agree. We have already gotten multiple requests to add an API for specifying
topic parameters for internal topics... I am pretty sure we will add it
if time permits -- feel free to contribute this new feature!
About changing the value of until: that does not work, as the changelog
topic configuration woul