I'm experimenting with the Kafka Streams preview and understand that joins
can only happen between KStreams and/or KTables that are co-partitioned.
This is a reasonable limitation necessary to support large streams.
What if I have a small topic, though, that I'd like to be able to join
based on va…
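One pattern that fits a small topic is a broadcast join: materialize the whole small topic locally on every instance and join each stream record against it as a plain lookup, keyed by a value rather than the record key, so no co-partitioning is required. A minimal plain-Java sketch of the idea (this is the concept only, not the Streams API; topic contents and record types here are made up):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BroadcastJoinSketch {
    // The small topic, materialized in full on every instance.
    static final Map<String, String> userNames = new HashMap<>();

    // Join each stream record against the local copy of the table.
    // The lookup key is derived from the record's *value*, not its key,
    // which is exactly what co-partitioned joins cannot do.
    static List<String> joinStream(List<SimpleEntry<String, String>> events) {
        List<String> enriched = new ArrayList<>();
        for (SimpleEntry<String, String> event : events) {
            String lookupKey = event.getValue();
            String name = userNames.get(lookupKey);
            if (name != null) {
                enriched.add(event.getKey() + " -> " + name);
            }
        }
        return enriched;
    }

    public static void main(String[] args) {
        userNames.put("u1", "alice");
        userNames.put("u2", "bob");
        List<SimpleEntry<String, String>> events = List.of(
            new SimpleEntry<>("click-1", "u1"),
            new SimpleEntry<>("click-2", "u3"));  // no match: dropped, inner-join style
        System.out.println(joinStream(events));    // [click-1 -> alice]
    }
}
```

Because every instance holds the full table, this only stays practical while the topic really is small.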
The Kafka docs on Monitoring don't mention anything specific for Kafka
Connect. Are metrics for connectors limited to just the standard
consumer/producer metrics? Do I understand correctly that the Connect API
doesn't provide any hooks for custom connector-specific metrics? If not, is
that somethin…
…/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework
>
> Viktor
> On Tue, Sep 12, 2017 at 2:38 PM, Jeff Klukas wrote:
> The Kafka docs on Monitoring don't mention anything specific for Kafka
> Connect. Are metrics for connectors limited to just the standard
> consum…
From what I can tell, global state stores are managed separately from other
state stores and are accessed via different methods.
Do the proposed methods on TopologyTestDriver (such as getStateStore) cover
global stores? If not, can we add an interface for accessing and testing
global stores in th…
I have a KStream that I want to enrich with some values from a lookup
table. When a new key enters the KStream, there's likely to be a
corresponding entry arriving on the KTable at the same time, so we end up
with a race condition. If the KTable record arrives first, then its value
is available fo…
-- Forwarded message --
> From: Jeff Klukas
> To: users@kafka.apache.org
> Cc:
> Date: Wed, 30 Mar 2016 11:14:53 -0400
> Subject: KStream-KTable join with the KTable given a "head start"
> I have a KStream that I want to enrich with some values from…
I'm developing a Kafka Streams application and trying to keep up with the
evolution of the API as 0.10.0.0 release candidates come out. We've got a
test cluster running RC0, but it doesn't look like client or streams jars
for the RCs are available on maven central.
Are there any plans to upload ja…
The only hook I see for specifying a TimestampExtractor is in the
Properties that you pass when creating a KafkaStreams instance. Is it
possible to modify the timestamp while processing a stream, or does the
timestamp need to be extracted immediately upon entry into the topology?
I have a case whe…
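Whatever the answer on mid-topology changes, the common pattern is to extract the event time from the payload once, at ingest, with a wall-clock fallback when the payload carries no usable timestamp. A plain-Java sketch of that extractor logic (the `ts=<millis>;` payload format here is invented for illustration; this is not Kafka's TimestampExtractor interface itself):

```java
public class TimestampExtractSketch {
    // Pull an event-time out of a payload of the made-up form "ts=<millis>;...",
    // falling back to the supplied processing-time when the field is absent,
    // malformed, or negative.
    static long extractTimestamp(String payload, long wallClockMillis) {
        if (payload != null && payload.startsWith("ts=")) {
            int end = payload.indexOf(';');
            String field = end >= 0 ? payload.substring(3, end) : payload.substring(3);
            try {
                long ts = Long.parseLong(field);
                if (ts >= 0) {
                    return ts;
                }
            } catch (NumberFormatException ignored) {
                // fall through to the wall-clock fallback
            }
        }
        return wallClockMillis;
    }

    public static void main(String[] args) {
        System.out.println(extractTimestamp("ts=1500000000000;k=v", 42L)); // 1500000000000
        System.out.println(extractTimestamp("no-timestamp", 42L));         // 42
    }
}
```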
Is it true that the aggregation and reduction methods of KStream will emit
a new output message for each incoming message?
I have an application that's copying a Postgres replication stream to a
Kafka topic, and activity tends to be clustered, with many updates to a
given primary key happening in…
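The effect being asked about can be shown with a running reduce in plain Java: every input record updates the aggregate and yields a downstream update, so a burst of changes to one key fans out into many output records, even when only the last one matters. A sketch (concept only, not the Streams API):

```java
import java.util.ArrayList;
import java.util.List;

public class ReduceEmissionSketch {
    // Apply a running reduce (here: a sum) over the updates for one key and
    // record every intermediate result, mimicking an aggregation that emits
    // one downstream update per input record.
    static List<Integer> perRecordEmissions(List<Integer> updatesForOneKey) {
        List<Integer> emitted = new ArrayList<>();
        int agg = 0;
        for (int v : updatesForOneKey) {
            agg += v;
            emitted.add(agg);   // downstream sees every intermediate aggregate
        }
        return emitted;
    }

    public static void main(String[] args) {
        // Three clustered updates to one key produce three downstream records,
        // though only the final aggregate (6) may be of interest.
        System.out.println(perRecordEmissions(List.of(1, 2, 3))); // [1, 3, 6]
    }
}
```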
As I'm developing a Kafka Streams application, I ended up copying the
content of streams/src/test/java/org/apache/kafka/test/ into my project in
order to use the KStreamTestDriver and associated functionality in tests,
which is working really well.
Would the Kafka team be open to refactoring these…
I have a seemingly simple case where I want to join two KTables to produce
a new table with a different key, but I am getting NPEs. My understanding
is that to change the key of a KTable, I need to do a groupBy and a reduce.
What I believe is going on is that the inner join operation is emitting
n…
…a KStream in order to filter out the null join values?
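The workaround being asked about can be sketched in plain Java: drop the null join results before deriving the new key and reducing, so the grouping step never sees a null value. A minimal sketch (the key format and reduce function are made up; this is the concept, not the Streams API):

```java
import java.util.HashMap;
import java.util.Map;

public class FilterNullsSketch {
    // Re-key a joined table, skipping entries whose value is null (e.g.
    // tombstones or non-matching join results) so the reduce step never
    // dereferences a null.
    static Map<String, Integer> regroupNonNull(Map<String, Integer> joined) {
        Map<String, Integer> grouped = new HashMap<>();
        for (Map.Entry<String, Integer> e : joined.entrySet()) {
            if (e.getValue() == null) {
                continue;                              // filter out null join values
            }
            String newKey = e.getKey().split("-")[0];  // derive the new grouping key
            grouped.merge(newKey, e.getValue(), Integer::sum);
        }
        return grouped;
    }

    public static void main(String[] args) {
        Map<String, Integer> joined = new HashMap<>();
        joined.put("a-1", 2);
        joined.put("a-2", 3);
        joined.put("b-1", null);   // would otherwise blow up in the reduce
        System.out.println(regroupNonNull(joined)); // {a=5}
    }
}
```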
-- Forwarded message --
From: Jeff Klukas
To: users@kafka.apache.org
Cc: Guozhang Wang
Date: Wed, 8 Jun 2016 10:56:26 -0400
Subject: Handling of nulls in KTable groupBy
> I have a seemingly simple case where I want to join t…
Would the move to Java 8 be for all modules? I'd have some concern about
removing Java 7 compatibility for kafka-clients and for kafka streams
(though less so since it's still so new). I don't know how hard it will be
to transition a Scala 2.11 application to Scala 2.12. Are we comfortable
with the…
On Thu, Jun 16, 2016 at 5:20 PM, Ismael Juma wrote:
> On Thu, Jun 16, 2016 at 11:13 PM, Stephen Boesch
> wrote:
>
> > @Jeff Klukas What is the concern about scala 2.11 vs 2.12? 2.11 runs on
> > both java7 and java8
> >
>
> Scala 2.10.5 and 2.10.6 also support…
We've developed a Kafka client metrics reporter that registers Kafka
metrics with the Dropwizard Metrics framework:
https://github.com/SimpleFinance/kafka-dropwizard-reporter
I think this would be a useful contribution to the metrics section of
https://cwiki.apache.org/confluence/display/KAFKA/Eco…
I'm doing some local testing on my Mac to get a feel for Kafka Connect, and
I'm running into several issues.
First, when I untar the Kafka 0.10.0.1 source and run
`./bin/connect-distributed.sh config/connect-distributed.properties`, I get
a "Usage" message. By digging through scripts a bit, I foun…
I'm starting to look at upgrading to 0.10.1.1, but looks like the docs have
not been updated since 0.10.1.0.
Are there any plans to update the docs to explicitly discuss how to upgrade
from 0.10.1.0 -> 0.10.1.1, and 0.10.0.X -> 0.10.1.1?