Congratulations, Rajini!
On Fri, 19 Jan 2018 at 07:11 Jun Rao wrote:
> Congratulations, Rajini!
>
> Jun
>
> On Wed, Jan 17, 2018 at 10:48 AM, Gwen Shapira wrote:
>
> > Dear Kafka Developers, Users and Fans,
> >
> > Rajini Sivaram became a committer in April 2017. Since then, she
> remained
>
Very interesting and useful.
Thanks
On 18 January 2018 at 21:58, Damian Guy wrote:
> This might be a good read for you:
> https://www.confluent.io/blog/put-several-event-types-kafka-topic/
>
> On Thu, 18 Jan 2018 at 20:57 Maria Pilar wrote:
>
> > Hi everyone,
> >
> > I'm working on the config
Hi everyone,
I would like to register my company, Amadeus (http://www.amadeus.com), in the
Powered By page (http://kafka.apache.org/powered-by).
Can someone confirm I can do it via this mailing list?
I can provide all required information in return (logo + a short company
description).
Many thanks
Our requirement is that if a Kafka Streams app is consuming a
partition, it should start its consumption from the latest offset of that
partition.
This seems doable using
streamsConfiguration.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest")
Now, let's say we use the above configuration.
In our architecture, we plan to run three JVM processes on one
machine (approx.), and each JVM process can host up to 15 Kafka Streams apps.
And if I am not wrong, each Kafka Streams app spawns one Java thread. So
this seems like an awkward architecture to have, with around 45 Kafka Streams
apps
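For what it's worth, a minimal, self-contained sketch of such a configuration (the application id and bootstrap servers below are placeholders, and the string keys are the literal values behind the corresponding StreamsConfig/ConsumerConfig constants):

```java
import java.util.Properties;

public class LatestOffsetStreamsConfig {
    // Minimal sketch of a Kafka Streams configuration that starts
    // consumption from the latest offset when no committed offset exists
    // for the group. The application id and bootstrap servers are
    // placeholders; string keys are the literal values of the
    // StreamsConfig/ConsumerConfig constants.
    public static Properties build() {
        Properties props = new Properties();
        props.put("application.id", "example-streams-app"); // placeholder
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("auto.offset.reset", "latest");           // ConsumerConfig.AUTO_OFFSET_RESET_CONFIG
        return props;
    }
}
```

Note that "auto.offset.reset" only applies when the group has no committed offsets, so this alone does not force a restart from latest on every deploy.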
I am an SA/SE on several Kafka clusters. I was wondering if there is a
training and certification program or track you could recommend?
I could not find one.
-Brian
--
Thanks,
Brian P Spallholtz
Hi,
We have a small Kafka cluster (3 broker nodes), and as our Kafka usage has
grown we are looking to add more brokers. In order for the new brokers to
take on the load of some of the existing topics, I assume we have to either
add partitions that are assigned to these new broker nodes or we have
We recently scaled up the number of brokers in our cluster. Instead of
adding partitions, we just reassigned the partitions to distribute them better
across all the brokers we now had. We did this for internal streams topics,
too, and things went pretty smoothly.
You can find documentation
Multiple answers:
- a KafkaStreams instance starts one *processing* thread by default (you
can configure more processing threads, too)
- internally, KafkaStreams uses two KafkaConsumers and one KafkaProducer
(if you turn on EOS, it uses even more KafkaProducers): a KafkaConsumer
starts a background
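To illustrate the first point: rather than running 45 single-threaded instances, one instance can run several processing threads via the "num.stream.threads" setting. A minimal sketch (application id and bootstrap servers are placeholders):

```java
import java.util.Properties;

public class MultiThreadedStreamsConfig {
    // Sketch: a KafkaStreams instance runs a single processing thread by
    // default; "num.stream.threads" (StreamsConfig.NUM_STREAM_THREADS_CONFIG)
    // raises that, letting one instance process several tasks in parallel
    // instead of starting many separate instances.
    public static Properties build() {
        Properties props = new Properties();
        props.put("application.id", "example-streams-app"); // placeholder
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("num.stream.threads", "4");               // 1 is the default
        return props;
    }
}
```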
That is not supported out of the box.
The "auto.offset.reset" configuration only triggers if there are no
committed offsets, and there is no KS config to change this behavior.
A possible workaround might be (but I am not sure I want to recommend
this) to increase the KafkaStreams commit interval via StreamsConfig
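For reference, the commit interval mentioned above is the "commit.interval.ms" Streams setting; a minimal sketch with placeholder connection values:

```java
import java.util.Properties;

public class CommitIntervalStreamsConfig {
    // Sketch: "commit.interval.ms" (StreamsConfig.COMMIT_INTERVAL_MS_CONFIG)
    // controls how often Kafka Streams commits consumer offsets. A larger
    // interval means fewer committed offsets, so on restart the app falls
    // back further (or, with no commits at all, to auto.offset.reset).
    public static Properties build() {
        Properties props = new Properties();
        props.put("application.id", "example-streams-app"); // placeholder
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("commit.interval.ms", "60000");           // commit once per minute
        return props;
    }
}
```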
Hi Brian,
Maybe this will be of help:
https://www.confluent.io/certification/
Thanks,
Subhash
Sent from my iPhone
> On Jan 19, 2018, at 1:15 PM, brian spallholtz
> wrote:
>
> I am a SA/SE on several kafka clusters, I was wondering if there was a
> training and certification program or track
It is also used for rewinding consumer offsets.
On 19 January 2018 at 06:25, Matthias J. Sax wrote:
> The timestamp has many different purposes. As mentioned already, it is used
> to expire data via retention time. It's also used for stream processing
> via the Streams API. All processing is based on
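For the offset-rewind use case, the consumer API exposes offsetsForTimes (added in KIP-79) to map a timestamp to the earliest offset with an equal or greater timestamp. A rough sketch, assuming the kafka-clients dependency, a running broker, and a placeholder topic/partition the consumer is already assigned to:

```java
import java.util.HashMap;
import java.util.Map;
// Requires the kafka-clients dependency on the classpath.
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class RewindByTimestamp {
    // Sketch: rewind an already-assigned consumer to the offsets that
    // correspond to a given timestamp. Topic name and partition number
    // are placeholders.
    public static void rewind(KafkaConsumer<String, String> consumer, long timestampMs) {
        TopicPartition tp = new TopicPartition("example-topic", 0); // placeholder
        Map<TopicPartition, Long> query = new HashMap<>();
        query.put(tp, timestampMs);
        Map<TopicPartition, OffsetAndTimestamp> result = consumer.offsetsForTimes(query);
        OffsetAndTimestamp found = result.get(tp);
        if (found != null) {
            // The next poll() for this partition starts at the found offset.
            consumer.seek(tp, found.offset());
        }
    }
}
```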