Hello,
- all PGP signatures are good
- All md5sums, sha1sums, and sha512sums pass
2648 unit tests pass, 1 failure (ran twice)
ClientsMetricsTest.shouldAddCommitIdMetric Failed:
java.lang.AssertionError:
Unexpected method call
StreamsMetricsImpl.addClientLevelImmutableMetric("commit-id", "The vers
Hi expert,
I don't understand a kafka behavior and I'm here to ask for explanation.
My processing task is pretty simple and it's quite similar to a
change-log one.
The record value contains a key/value pair: if the new value is
different from the stored one, forward to the output topic a
> Why that? Just because there is explicit documentation?
Just that they target YARN.
Ryanne
On Thu, Nov 14, 2019, 1:59 AM Matthias J. Sax wrote:
> Why that? Just because there is explicit documentation?
>
>
> @Debraj: Kafka Streams can be deployed as a regular Java application.
> Hence, and t
Debugging into ConsoleConsumer.scala it eventually just calls this:
val convertedBytes = deserializer
  .map(_.deserialize(topic, nonNullBytes).toString.getBytes(StandardCharsets.UTF_8))
  .getOrElse(nonNullBytes)
See
https://github.com/apache/kafka/blob/33d06082117d971cdcddd4f01392006b543f3c
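For readers less used to Scala's Option chaining, the same fallback logic can be paraphrased in Java with `Optional`. This is a sketch, not the actual ConsoleConsumer code; the `BiFunction` here is my own stand-in for a configured `Deserializer`:

```java
import java.nio.charset.StandardCharsets;
import java.util.Optional;
import java.util.function.BiFunction;

public class ConsoleFormatSketch {

    // If a deserializer is configured, deserialize the bytes, take the
    // result's toString(), and re-encode it as UTF-8 bytes; otherwise
    // fall back to the raw bytes (mirrors map(...).getOrElse(...)).
    static byte[] convert(Optional<BiFunction<String, byte[], Object>> deserializer,
                          String topic, byte[] nonNullBytes) {
        return deserializer
                .map(d -> d.apply(topic, nonNullBytes).toString()
                           .getBytes(StandardCharsets.UTF_8))
                .orElse(nonNullBytes);
    }

    public static void main(String[] args) {
        byte[] raw = "hello".getBytes(StandardCharsets.UTF_8);
        // With a "deserializer" configured: deserialize, toString, re-encode.
        Optional<BiFunction<String, byte[], Object>> upper =
                Optional.of((topic, bytes) ->
                        new String(bytes, StandardCharsets.UTF_8).toUpperCase());
        System.out.println(new String(convert(upper, "t", raw), StandardCharsets.UTF_8)); // HELLO
        // Without one: the raw bytes pass through unchanged.
        System.out.println(new String(convert(Optional.empty(), "t", raw), StandardCharsets.UTF_8)); // hello
    }
}
```

The practical upshot is the one the debugging session shows: whatever the deserializer's `toString()` produces is what the console consumer prints.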
+1
On Fri, Nov 15, 2019 at 8:10 PM Robin Moffatt wrote:
> We can try, but you'll have to tell us what the problem is :)
> This is a good checklist for asking a good question:
> http://catb.org/~esr/faqs/smart-questions.html#beprecise
>
>
> --
>
> Robin Moffatt | Senior Developer Advocate | ro...
We can try, but you'll have to tell us what the problem is :)
This is a good checklist for asking a good question:
http://catb.org/~esr/faqs/smart-questions.html#beprecise
--
Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
On Thu, 14 Nov 2019 at 05:04, prashanth sri
w
Any chance that somebody can shed some light on this?
Thanks!
On Tue, 12 Nov 2019 at 17:40, Javier Holguera
wrote:
> Hi,
>
> Looking at the Kafka Connect code, it seems that the built-in support for
> DLQ queues only works for errors related to transformations and converters
> (headers, key, an
Hi Thilo,
You can influence the rate of updates of aggregations by configuring
the size of the record caches with `cache.max.bytes.buffering`.
Details can be found here:
https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#aggregating
https://kafka.apache.org/22/document
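A minimal sketch of setting that cache size as a Streams property. The config key strings are from the linked documentation; the values are illustrative examples, not recommendations:

```java
import java.util.Properties;

public class CacheConfigSketch {

    static Properties streamsProps() {
        Properties props = new Properties();
        // A larger record cache means aggregation updates are buffered and
        // deduplicated longer before being forwarded downstream.
        // 10 MB total across all threads (example value, tune for your workload).
        props.put("cache.max.bytes.buffering", 10L * 1024 * 1024);
        // The cache is also flushed on commit, so the commit interval
        // interacts with how often updates are emitted.
        props.put("commit.interval.ms", 30_000L);
        return props;
    }

    public static void main(String[] args) {
        System.out.println(streamsProps());
    }
}
```

Setting `cache.max.bytes.buffering` to 0 disables the cache entirely, in which case every single update to an aggregate is forwarded.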
Hello everyone,
we are using the Kafka Streams DSL (v2.2.1) to perform aggregations on a
topic using org.apache.kafka.streams.kstream.KGroupedTable#aggregate. ATM
we are seeing one update being published when the subtractor is being
called and another one when the adder is called.
I was under the
Definitely not null keys. They are time based UUIDs. Basically the test set I’m
running is a collection of articles stored in Cassandra and their key is the
uuid generated when inserted there.
Get the articles from bing api and its the same set that bing returns in both
cases (same number (67)
That is puzzling to me, too. Could it be that you have `null` keys for
the "new topic" you mentioned in your original email? For `null` keys,
the fallback would be round-robin.
Or you just got lucky and the keys you write get distributed evenly "by
chance" -- in general, if the data is not skewed,
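The fallback described above can be sketched as follows. This uses `String.hashCode` as a stand-in for Kafka's actual murmur2 hash, so the concrete partition numbers will differ from a real producer; only the shape of the logic is the point:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class PartitionSketch {

    static final AtomicInteger roundRobin = new AtomicInteger();

    // Keyed records: hash the key to a stable partition (stand-in hash).
    // Null keys: rotate round-robin across all partitions.
    static int partitionFor(String key, int numPartitions) {
        if (key == null) {
            return Math.floorMod(roundRobin.getAndIncrement(), numPartitions);
        }
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        // The same key always lands on the same partition...
        System.out.println(partitionFor("uuid-1", 4) == partitionFor("uuid-1", 4)); // true
        // ...while consecutive null-keyed records spread across partitions.
        System.out.println(partitionFor(null, 4));
        System.out.println(partitionFor(null, 4));
    }
}
```

With non-null, time-based UUID keys as in this thread, the hash path applies, so an even spread over partitions is the expected outcome when the key space is not skewed.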
Well it definitely gives me something to move ahead with.
I am however puzzled how I could observe a really even distribution over
the partitions when specifying `PARTITIONER_CLASS_CONFIG`, whereas when I
remove it the same set of test messages are written to only one partition.
Thanks
Mikkel
--
In Kafka Streams the producer config `PARTITIONER_CLASS_CONFIG` does not
take effect, because Kafka Streams computes and sets partition numbers
explicitly, and thus the producer never uses the partitioner to
compute a partition, but accepts whatever Kafka Streams specifies on
each `ProducerRecord
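The behavior described above can be sketched with a simplified model of the producer's send path. This is a paraphrase for illustration, not the real producer code:

```java
public class SendPathSketch {

    // Simplified decision the producer makes per record: an explicit
    // partition on the record wins; the configured partitioner is only
    // consulted when no partition was set.
    static int resolvePartition(Integer recordPartition, String key, int numPartitions) {
        if (recordPartition != null) {
            return recordPartition; // the path Kafka Streams relies on
        }
        // Stand-in for the configured partitioner (real producers hash the key).
        return Math.floorMod(key == null ? 0 : key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        // Streams sets the partition explicitly, so a custom partitioner
        // configured via PARTITIONER_CLASS_CONFIG is simply never invoked.
        System.out.println(resolvePartition(2, "some-key", 4)); // 2
    }
}
```

This is why setting `PARTITIONER_CLASS_CONFIG` on the Streams-managed producer appears to be silently ignored, while removing it changes nothing about how Streams itself routes records.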