    .toStream()
    .map({ k, v ->
        new KeyValue<>(k.window().end(), v)
    })
    .to('output')

def config = new Properties()
config.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId)
config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, 'localhost:9092')
config.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, TimeUnit.SECONDS.toMillis(60))

KafkaStreams kafkaStreams = new KafkaStreams(builder.build(), config)
kafkaStreams.start()

I've tried adding ConsumerConfig.AUTO_OFFSET_RESET_CONFIG to the config, set to 'latest' and to 'earliest', but it didn't help.
Can you help me understand what I'm doing wrong?
Thank you.
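
For reference, a minimal Java sketch of how the reset policy can be set on the Streams configuration (the application id here is hypothetical). Note that auto.offset.reset is only honoured when the application id has no committed offsets yet; once offsets exist, Streams resumes from them, which might explain why switching between 'latest' and 'earliest' made no visible difference:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class ResetConfigSketch {
    public static void main(String[] args) {
        Properties config = new Properties();
        config.put(StreamsConfig.APPLICATION_ID_CONFIG, "reset-sketch");      // hypothetical app id
        config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Plain consumer setting; only used when the group has no committed offsets.
        config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // Equivalent form using the Streams consumer prefix ("consumer.auto.offset.reset").
        config.put(StreamsConfig.consumerPrefix(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG), "earliest");
    }
}

To re-read a topic from the beginning with an existing application id, the committed offsets have to be removed first (for example with the Streams application reset tool).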
--
Vincenzo D'Amore
> Maybe this blog post sheds some light:
> https://www.confluent.io/blog/watermarks-tables-event-time-dataflow-model/
>
>
> -Matthias
>
> On 1/25/19 9:31 AM, Vincenzo D'Amore wrote:
> > Hi all,
> >
> > I write here because I've been struggling with this for a couple of days
This project seems to reproduce the problem:
https://github.com/westec/ks-aggregate-debug

Given that I am using non-overlapping, gap-less windows in KStream, shouldn't the correct output contain NO duplicate messages between windows?
Any ideas why I'm seeing the duplicates?
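
If the extra records are intermediate updates rather than true duplicates (every input record updates the windowed aggregate, and each update can be forwarded downstream depending on caching and commit.interval.ms), one option since Kafka 2.1 is suppress(). A minimal Java sketch, assuming a hypothetical 'input' topic, 1-minute windows and a simple count() instead of the real aggregation:

import java.time.Duration;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.TimeWindows;

public class FinalResultsSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input")                                          // hypothetical input topic
               .groupByKey()
               .windowedBy(TimeWindows.of(Duration.ofMinutes(1))         // gap-less, non-overlapping
                                      .grace(Duration.ofSeconds(30)))    // how long to wait for late records
               .count()
               // Forward one final result per window instead of every intermediate update.
               .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
               .toStream()
               .map((k, v) -> new KeyValue<>(k.window().end(), v))
               .to("output");
        // builder.build() is then passed to new KafkaStreams(...) as in the snippet above.
    }
}

Without suppress(), seeing several records per window on the output topic is expected behaviour rather than a bug.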
--
Vincenzo D'Amore
ing consumer
> automatically for this case.
>
>
> -Matthias
>
> On 11/22/17 12:22 PM, cours.syst...@gmail.com wrote:
> > I am testing a KafkaConsumer. How can I modify it to process records in parallel?
> >
>
>
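
For what it's worth, a common pattern is to keep poll() on a single thread (the KafkaConsumer itself is not thread-safe) and hand the records to a worker pool, or simply to run one consumer per thread in the same group. A minimal Java sketch of the first approach, with a hypothetical topic and group id:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ParallelProcessingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "parallel-sketch");           // hypothetical group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        ExecutorService workers = Executors.newFixedThreadPool(4);
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("input"));             // hypothetical topic
            while (true) {
                // Poll on this thread only; the consumer must not be shared across threads.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                // Hand records to worker threads. Beware: with auto-commit enabled, offsets
                // can be committed before the workers have finished processing.
                records.forEach(record ->
                        workers.submit(() -> System.out.println("processing " + record.value())));
            }
        }
    }
}

The simpler alternative is to start several KafkaConsumer instances with the same group.id; Kafka then spreads the partitions across them, up to one consumer per partition.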
--
Vincenzo D'Amore
Hi all,
I want to set up a Kafka cluster in a production environment.
In recent years I've worked with Solr and, comparing Kafka with Solr, it would be wonderful if Kafka also had an administration console where you can see what's happening.
Looking around I've found this:
https://github.com/y
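
A fair amount of the basic "what's happening" information is also available programmatically through the AdminClient API. A minimal Java sketch (the broker address is hypothetical) that prints the cluster id, the brokers and the topic names:

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ClusterOverviewSketch {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Cluster id and the list of live brokers.
            System.out.println("cluster id: " + admin.describeCluster().clusterId().get());
            admin.describeCluster().nodes().get().forEach(node -> System.out.println("broker: " + node));
            // All topic names visible to this client.
            admin.listTopics().names().get().forEach(topic -> System.out.println("topic: " + topic));
        }
    }
}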
sure message at end of topic.
I'm curious to know your suggestion: whether checking for a special closure message at the end of the topic is enough, or whether there are already best practices for handling this type of scenario.
Best regards and again thanks in advance for your time,
Vincenzo
--
Vincenzo
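
On the end-of-topic question: a sentinel/closure message works when the producers are known and finite, but another common check is to compare the consumer's position with the current end offsets. A minimal Java sketch, with a hypothetical topic and group id (note that "end of topic" only means "caught up right now"; new data can still arrive later):

import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class EndOfTopicSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "eot-sketch");                // hypothetical group id
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("input"));             // hypothetical topic
            boolean caughtUp = false;
            while (!caughtUp) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> System.out.println(r.value()));

                Set<TopicPartition> assigned = consumer.assignment();
                if (assigned.isEmpty()) {
                    continue;                                                    // no partitions assigned yet
                }
                // Caught up when the position has reached the latest offset on every partition.
                Map<TopicPartition, Long> endOffsets = consumer.endOffsets(assigned);
                caughtUp = assigned.stream()
                                   .allMatch(tp -> consumer.position(tp) >= endOffsets.get(tp));
            }
        }
    }
}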
> 'forever'? Did you wait more than 5 minutes?
>
> On Fri, Dec 2, 2016 at 2:55 AM, Vincenzo D'Amore
> wrote:
>
> > Hi Kafka Gurus :)
> >
> > I'm creating a process between a few applications.
> >
> > The first application creates a producer and the
"key.deserializer": "org.apache.kafka.common.serialization.StringDeserializer",
"value.deserializer": "org.apache.kafka.common.serialization.StringDeserializer"
Usually there are 2 active groups (group_A and group_B).
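
To double-check which groups are actually active on the cluster (e.g. the group_A and group_B mentioned above), the AdminClient can list them; a minimal Java sketch with a hypothetical broker address:

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ListGroupsSketch {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Prints every consumer group id known to the cluster.
            admin.listConsumerGroups().all().get()
                 .forEach(group -> System.out.println(group.groupId()));
        }
    }
}

The same information is available from the command line with kafka-consumer-groups.sh --list.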
Best regards,
Vincenzo
--
Vincenzo D'Amore
email: v.dam...@gmail.com
skype: free.dev
mobile: +39 349 8513251