I cannot completely follow what you want to achieve. However, the JavaDoc for through() does not seem correct to me: using through() will not create an extra internal changelog topic with the described naming schema, because the topic specified in through() is used for this purpose (there is no point in duplicating the data).
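For example, a minimal sketch against the 0.10.1 API as I recall it (topic and store names are made up, and "copy-topic" is a topic you would typically create yourself):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.kstream.KStreamBuilder;
    import org.apache.kafka.streams.kstream.KTable;

    KStreamBuilder builder = new KStreamBuilder();

    // base KTable; "input-topic" itself serves as its changelog
    KTable<String, Long> table =
        builder.table(Serdes.String(), Serdes.Long(), "input-topic", "input-store");

    // through() writes the table into "copy-topic" and reads it back as a
    // new KTable; "copy-topic" then acts as the changelog for that table,
    // so no additional "<appId>-...-changelog" topic is created for it
    KTable<String, Long> copy =
        table.through(Serdes.String(), Serdes.Long(), "copy-topic", "copy-store");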
If you have a KTable and apply a mapValues(), this will not write data to any topic. The derived KTable is kept in memory because it can easily be recreated from its base KTable. What is the part you feel is missing? Btw: the internally created changelog topics are only used for recovery in case of failure; Streams does not consume from those topics during normal operation. (For your aggregate-followed-by-mapValues use case, see the sketch after the quoted mail below.)

-Matthias

On 11/22/16 1:59 AM, Mikael Högqvist wrote:
> Hi,
>
> in the documentation for KTable#through, it is stated that a new changelog
> topic will be created for the table. It also states that calling through is
> equivalent to calling #to followed by KStreamBuilder#table.
>
> http://kafka.apache.org/0101/javadoc/org/apache/kafka/streams/kstream/KTable.html#through(org.apache.kafka.common.serialization.Serde,%20org.apache.kafka.common.serialization.Serde,%20java.lang.String,%20java.lang.String)
>
> In the docs for KStreamBuilder#table it is stated that no new changelog
> topic will be created since the underlying topic acts as the changelog.
> I've verified that this is the case.
>
> Is there another API method to materialize the results of a KTable
> including a changelog, i.e. such that Kafka Streams creates the topic and
> uses the naming schema for changelog topics? The use case I have in mind is
> aggregate followed by mapValues.
>
> Best,
> Mikael
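For the aggregate-followed-by-mapValues use case, a rough sketch against the 0.10.1 API (all topic/store names are hypothetical, and "formatted-counts" is a topic you would create yourself; it then plays the role of the changelog for the derived table):

    // hypothetical input: a stream of string events keyed by user id
    KTable<String, Long> counts =
        builder.stream(Serdes.String(), Serdes.String(), "events")
               .groupByKey(Serdes.String(), Serdes.String())
               .count("counts-store");
    // count() is backed by an internal "<appId>-counts-store-changelog"
    // topic, but that topic is only read on recovery

    // the mapValues() result is only kept in memory; to also get it into
    // a topic, pipe it through a user topic
    KTable<String, String> formatted =
        counts.mapValues(v -> "count=" + v)
              .through(Serdes.String(), Serdes.String(),
                       "formatted-counts", "formatted-counts-store");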