> > try {
> >     if (checkpoint == null) {
> >         checkpoint = new OffsetCheckpoint(new File(baseDir, CHECKPOINT_FILE_NAME));
> >     }
> >
> > FYI
> >
> > On Thu, Jan 4, 2018 at 11:48 AM, Kristopher Kane wrote:
KStream<String, GenericRecord> stream =
    builder.stream(Serdes.String(), genericAvroSerde, incomingTopic);

On Fri, Oct 20, 2017 at 12:50 AM, Kristopher Kane wrote:
> I fixated on using the key/value deserializer classes in the consumer
> properties. Overloading the consumer constructor is the way to enable
> custom deserializer instances.
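A minimal sketch of that overload, assuming the Confluent Avro deserializer is on the classpath (broker address, group id, registry URL, and topic below are illustrative):

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import io.confluent.kafka.serializers.KafkaAvroDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
props.put(ConsumerConfig.GROUP_ID_CONFIG, "avro-consumer");           // illustrative
props.put("schema.registry.url", "http://localhost:8081");            // illustrative

// Deserializer instances passed to the constructor take precedence over
// any key/value deserializer class names set in the properties; the
// consumer calls configure() on them with these same props.
KafkaConsumer<String, Object> consumer =
    new KafkaConsumer<>(props, new StringDeserializer(), new KafkaAvroDeserializer());
consumer.subscribe(Arrays.asList("incoming-topic")); // illustrative topic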
I just noticed /tmp/kafka-streams//0_[0,1]/{.checkpoint,.lock}
(there are two partitions on the incoming topic) being automatically
created during an integration test. My Streams app doesn't use a state
store and only contains mapValues and a .to termination operation.
Anyone know what this is for?
When a consumer throws an exception from commitSync, is it possible that
the commit internally succeeded for one or more partitions but failed for
the others, or is the commit all-or-none across partitions?
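Roughly this pattern, if a sketch helps frame the question (the consumer variable, topic, partition, and offset are made up):

import java.util.Collections;

import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Given some KafkaConsumer<K, V> consumer: is a thrown exception here a
// guarantee that none of the offsets in the map were committed, or could
// some partitions have been committed before the failure?
try {
    consumer.commitSync(Collections.singletonMap(
            new TopicPartition("incoming-topic", 0),  // illustrative
            new OffsetAndMetadata(42L)));             // illustrative offset
} catch (CommitFailedException e) {
    // If commitSync is all-or-none, nothing needs to be unwound here.
    e.printStackTrace();
}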
Thanks,
Kris
> Damian
> >
> > On Tue, 3 Oct 2017 at 09:00 Ted Yu wrote:
> >
> > > I did a quick search in the code base - there doesn't seem to be
> > > caching as you described.
> > >
> > > On Tue, Oct 3, 2017 at 6:36 AM, Kristopher Kane wrote:
Storm has the ability to distribute tuples to downstream processors based
on a key, such that equal keys are grouped and end up at the same
destination JVM. This is handy if you want consistent processing (say,
updating a global state store - only one thing should do that per key)
together with horizontal scalability.
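For comparison, a minimal Kafka Streams sketch of the same idea using the newer StreamsBuilder API (topic names are illustrative, and String default serdes are assumed):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> events = builder.stream("events"); // illustrative topic

// groupByKey repartitions so that every record with a given key is
// processed by the same task (and therefore the same JVM), analogous
// to Storm's fields grouping.
KTable<String, Long> countsPerKey = events.groupByKey().count();

countsPerKey.toStream()
        .to("counts", Produced.with(Serdes.String(), Serdes.Long())); // illustrative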
Not sure if I have missed it in the Streams developer docs, but is there a
mechanism to age data off the state store, similar to a key-based TTL? It
looks like RocksDB has TTL built in, so would I pass that via some store
configuration?
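The only store-level hook I can find is the RocksDB config setter; a sketch of one is below, though it hands you an org.rocksdb.Options, and TTL appears to live in RocksDB's separate TtlDB API rather than in Options, which is exactly what I'm unsure about (the class name here is made up):

import java.util.Map;

import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

// Hypothetical setter: Streams instantiates this and calls setConfig()
// once per RocksDB-backed store, passing that store's Options to tune.
public class MyRocksDBConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        // e.g. options.setWriteBufferSize(8 * 1024 * 1024L);
    }
}

// Registered via the streams config:
// props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, MyRocksDBConfig.class);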
Kris
If using a Byte SerDe and the Schema Registry in the consumer configs of a
Kafka Streams application, does it cache the Avro schemas by ID and version
after fetching from the registry once?
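For context, the client I would expect to sit under the serde is Confluent's CachedSchemaRegistryClient, which is where any per-ID caching would live; a sketch of wiring it up explicitly (URL and capacity are illustrative):

import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
import io.confluent.kafka.serializers.KafkaAvroDeserializer;

// CachedSchemaRegistryClient keeps an in-memory identity map of schemas,
// so a given schema ID should only be fetched over HTTP once per client
// instance - which is the behavior being asked about.
SchemaRegistryClient registry =
        new CachedSchemaRegistryClient("http://localhost:8081", 100); // illustrative
KafkaAvroDeserializer deserializer = new KafkaAvroDeserializer(registry);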
Thanks,
Kris