I am now able to consistently reproduce this issue with a dummy project.
1. Set "max.poll.interval.ms" to a low value
2. Have the pipeline take longer than the interval above
3. Profit
This happens every single time and never recovers.
I simulated the delay by adding a breakpoint in my IDE on a s
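The repro above hinges on a single consumer setting; a minimal sketch of the relevant configuration (the property name is the standard Kafka consumer config, the value is illustrative):

```properties
# Deliberately low: if poll() is not called again within this interval,
# the consumer is considered failed and the group rebalances.
max.poll.interval.ms=5000
```

Any pipeline step (or IDE breakpoint) that holds the thread longer than this interval then triggers the rebalance described above.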
Thanks Matthias, I didn't consider the situation you mentioned. You're
right.
One or more input topics can be added to a single source node:
builder.addSource("source-1", "topic-1", "topic-2");
Or they can be added to different source nodes:
builder.addSource("source-1", "topic-1");
builder.addSource("sou
Well, you can also read multiple topics as a single KStream.
> builder.stream("topic-1", "topic-2")
Of course, both topics must contain data with the same key and value types.
For this pattern, there is only one source node.
There is no 1-to-1 relationship between input topics and source nodes,
and th
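Putting the two wiring styles side by side — a sketch against the 0.10.x Streams API (TopologyBuilder for the Processor API, KStreamBuilder for the DSL), assuming the kafka-streams dependency is on the classpath:

```java
// Processor API: both topics feed ONE source node ...
TopologyBuilder one = new TopologyBuilder();
one.addSource("source-1", "topic-1", "topic-2");

// ... or each topic gets its OWN source node.
TopologyBuilder two = new TopologyBuilder();
two.addSource("source-1", "topic-1");
two.addSource("source-2", "topic-2");

// DSL: reading multiple topics as a single KStream also creates
// just one source node; both topics must share key/value types.
KStreamBuilder dsl = new KStreamBuilder();
KStream<String, String> merged = dsl.stream("topic-1", "topic-2");
```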
A Kafka Streams topology can define one or many SourceNodes.
The pictures in the official documentation <
http://kafka.apache.org/0102/documentation/streams#streams_architecture_tasks
>
only draw one source node in some places:
1. Stream Partitions and Tasks
2. Threading Model
3. Local StateStore
And the topolo
Hi,
I have a 3 node Kafka Broker cluster.
I have created a topic, and the leader for the topic is broker 1 (1001). Then
that broker died.
But when I look at the information in ZooKeeper for the topic, I see the leader
is still set to broker 1 (1001) and the ISR is set to 1001. Is this a bug in
Kafka, as no
Thanks, that fixed the issue!
On Thu, Jun 8, 2017 at 11:58 AM, Mostafa Zarifyar
wrote:
> maybe this will help.
>
> https://community.hortonworks.com/articles/24599/kafka-mirrormaker.html
>
> -M
>
> On Thu, Jun 8, 2017 at 11:42 AM, karan alang
> wrote:
>
> > Hi All,
> >
> > I'm trying to trans
We currently do not have a KIP for it yet.
On Wed, Jun 7, 2017 at 3:21 AM, Frank Lyaruu wrote:
> I tried to use a TimestampExtractor that uses our timestamps from the
> messages, and use a 'map' operation on the KTable to set it to current, to
> have a precise point where I discard our original
Hi All,
Has anybody tried to parse Kafka logs using Logstash?
If yes, can you please share the patterns used to parse them.
Thanks in advance.
Lots of large messages will slow down throughput. From the client side you
might want to have a client for large messages and one for the others so that
they each have their own queue.
-Dave
-Original Message-
From: Ghosh, Achintya (Contractor) [mailto:achintya_gh...@comcast.com]
Sent:
Hi there,
We observed that if our payload size is larger we see a "Failed to send; nested
exception is org.apache.kafka.common.errors.RecordTooLargeException" exception,
so we changed the setting from 1 MB to 5 MB on both the Producer and Consumer ends.
Server.properties:
message.max.bytes=5242880
replic
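For a 5 MB limit, the maximum usually has to be raised in more than one place; a hedged checklist of the standard configs (5242880 = 5 * 1024 * 1024, values illustrative):

```properties
# Broker (server.properties): largest message the broker will accept
message.max.bytes=5242880
# Should be >= message.max.bytes, or followers cannot replicate large messages
replica.fetch.max.bytes=5242880

# Producer: largest request size the producer will attempt to send
max.request.size=5242880

# Consumer: largest amount of data fetched per partition
max.partition.fetch.bytes=5242880
```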
maybe this will help.
https://community.hortonworks.com/articles/24599/kafka-mirrormaker.html
-M
On Thu, Jun 8, 2017 at 11:42 AM, karan alang wrote:
> Hi All,
>
> I'm trying to transfer data between kafka clusters using Kafka MirrorMaker
> & running into issues.
>
> I've created a consumer.con
Hi All,
I'm trying to transfer data between kafka clusters using Kafka MirrorMaker
& running into issues.
I've created a consumer.config & producer.config files & using the command
shown below.
The error indicates - requirement failed: Missing required property
'zookeeper.connect'
--
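The old (ZooKeeper-based) consumer used by MirrorMaker requires zookeeper.connect in the consumer config; a minimal sketch (hostnames are placeholders):

```properties
# consumer.config for MirrorMaker with the old consumer
zookeeper.connect=source-zk:2181
group.id=mirror-maker-group
```

With the new consumer, bootstrap.servers is used instead and zookeeper.connect is not required.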
Hi all,
I'm resending my earlier note hoping it would spark some conversation this
time around :)
Thanks.
--Vahid
From: "Vahid S Hashemian"
To: dev , "Kafka User"
Date: 05/30/2017 08:33 AM
Subject: KIP-163: Lower the Minimum Required ACL Permission of
OffsetFetch
Hi,
I
So you are saying that in order to debug ConnectStandalone from an IDE I have
to put the "json" module on the classpath in a run configuration? Or is the
way I'm following, attaching a remote debugger, the right one?
Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on W
I expect that this is because the Gradle file defines the JSON
converter module as "test". See
https://github.com/apache/kafka/blob/trunk/build.gradle#L1082
This likely ensures that the runtime source code will not
directly depend upon the JSON converter module. Any custom sof
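The scope change being discussed in this thread amounts to one line in build.gradle; a sketch (exact project paths follow the trunk build at the time):

```groovy
// As on trunk: JsonConverter is only on the test classpath,
// so running ConnectStandalone from the IDE cannot resolve it.
testCompile project(':connect:json')

// The workaround described in this thread: make it a compile
// dependency so it is available at runtime as well.
compile project(':connect:json')
```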
Yes, exactly ... I see that for the Connect "runtime" module, the "json" module
is a dependency with "test" scope. I'm able to avoid the exception if I change
the dependency to "compile" scope so that it is available both in tests and at
runtime.
Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Are you talking about importing the Apache Kafka project into IntelliJ? If
so, IntelliJ should have the JSON converter's source code in the IDE's
project. Is that correct? And you're still getting the ConfigException for
the "org.apache.kafka.connect.json.JsonConverter"?
On Thu, Jun 8, 2017 at 10:
Hi Randall,
after running the "./gradlew idea" command I imported the project into IntelliJ
thanks to the Gradle support.
I did no changes on that.
For now the way I'm debugging is just building the kafka binaries, then running
the connect example on the command line with debugging enabled
Hi, Paolo.
How are the Kafka Connect libraries loaded into your IntelliJ project? If
they are loaded as external libraries, where in the order of the external
libraries does the "org.apache.kafka:connector:connect-json" Maven module
appear? Or, is that module loaded as source?
Best regards,
Rand
Hello Experts,
I am trying to replicate data between an on-prem Kafka cluster (source) and
another Kafka cluster (target?) set up in a cloud provider environment.
The on-prem and cloud environments are connected via an IPSec VPN, and the
MirrorMaker tool on version 0.10.2.x is used.
The consumer configs are as fo
Hi,
I set up a Kafka cluster with 3 Kafka brokers in 3 separate VMs.
While testing fault tolerance among the 3 brokers, I see an issue with
the consumer displaying messages.
>> The command below displays the messages, along with content
stating “*Using
the ConsoleConsumer with old consumer
We wanted less frequent polling in Kafka Streams (mostly because we
noticed quite a lot of object creation when polling a queue with no new
messages on it), so we set polling to 10 seconds. On startup, when the
rebalance first happens, the onPartitionsAssigned of ConsumerRebalance
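In the 0.10.x Streams API, the poll frequency is controlled by the poll.ms StreamsConfig; a sketch of the setup described above (the value mirrors the 10-second interval mentioned):

```properties
# StreamsConfig: how long to block in consumer.poll() waiting
# for new records (default 100 ms)
poll.ms=10000
```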
Hello Kafka users, developers and client-developers,
This is the first candidate for release of Apache Kafka 0.11.0.0. It's
worth noting that there are a small number of unresolved issues (including
documentation and system tests) related to the new AdminClient and
Exactly-once functionality[1] th
Hi ,
Can Apache Kafka along with Storm be used to design a ticketing system?
By a ticketing system, I mean that there are millions of tasks stored in
Kafka queues, and there are processes/humans taking actions on the
tasks. There are some constraints, such as that the same task should not be assigned to
t
Hi,
I'm trying to run the ConnectStandalone application inside IntelliJ providing
the worker and console source properties files from the config dir but I
receive the following exception :
Exception in thread "main" org.apache.kafka.common.config.ConfigException:
Invalid value org.apache.kaf
What do these messages mean:
WARN kafka.network.Processor - Attempting to send response via channel
for which there is no open connection, connection id 2
Yes. To some extent. But the rebalancing is now taking a lot of time. There
are situations where we have to manually restart the Streams app because
rebalancing is kind of "stuck" for several minutes.
On 7 June 2017 at 06:28, Garrett Barton wrote:
> Mahendra,
>
> Did increasing those two proper