Hello Boris,
I think the scenario you describe can still be achieved via a global state
store (if you are using the high-level DSL, that means via a GlobalKTable). All
partitions of the global state store's source topic are piped to all
instances, and all records are materialized into that store. Then
Oops. This was the wrong email thread. Please ignore what I wrote. Sorry
for the confusion.
-Matthias
On 11/14/17 9:46 AM, Boris Lublinsky wrote:
It's not a global state store.
I am using a custom state store.
Boris Lublinsky
FDP Architect
boris.lublin...@lightbend.com
https://www.lightbend.com/
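A minimal sketch of that setup (a processor that owns and updates its own, non-global state store), assuming the Kafka Streams 1.0.0 Processor API; topic and store names are hypothetical:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class CustomStoreApp {

    // A processor that owns and freely updates its local state store.
    static class CountingProcessor extends AbstractProcessor<String, String> {
        private KeyValueStore<String, Long> store;

        @Override
        @SuppressWarnings("unchecked")
        public void init(ProcessorContext context) {
            super.init(context);
            store = (KeyValueStore<String, Long>) context.getStateStore("my-store");
        }

        @Override
        public void process(String key, String value) {
            Long seen = store.get(key);
            store.put(key, seen == null ? 1L : seen + 1);   // updating a local store is fully supported
            context().forward(key, value);
        }
    }

    public static void main(String[] args) {
        Topology topology = new Topology();
        topology.addSource("source", "data-topic");
        topology.addProcessor("proc", CountingProcessor::new, "source");
        topology.addStateStore(
            Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore("my-store"), Serdes.String(), Serdes.Long()),
            "proc");
        topology.addSink("sink", "output-topic", "proc");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "custom-store-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        new KafkaStreams(topology, props).start();
    }
}

Each instance keeps its own copy of such a store, covering only the partitions that instance is assigned.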
On Nov 14, 2017, at 11:42 AM, Matthias J. Sax wrote:
Boris,
I just realized that you want to update the state from your processor
-- this is actually not supported by a global state (at least not directly).
Global state is populated from a topic at startup, and the global thread
should be the only thread that updates the state, even if it is
technically possible.
I was thinking about a controlled-stream use case, where one stream carries the data
to be processed, while the second one controls execution.
If I want to scale this, I want to run multiple instances. In this case I want
these instances to share the data topic, but the control topic should be delivered to
all instances.
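A minimal sketch of that pattern (names are hypothetical, 1.0.0 DSL assumed): the data topic is consumed as a regular, partitioned KStream, while the control topic is materialized on every instance as a GlobalKTable that the processing code only reads:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

public class ControlledStreamApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "controlled-stream-app"); // same id on every instance
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Each instance consumes only its share of the data topic's partitions ...
        KStream<String, String> data = builder.stream("data-topic");

        // ... but every instance materializes ALL partitions of the control topic.
        // The global table is read-only for the application; it changes only when
        // new records are produced to "control-topic".
        GlobalKTable<String, String> control = builder.globalTable("control-topic");

        KStream<String, String> gated = data.leftJoin(control,
            (dataKey, dataValue) -> dataKey,                 // which control entry applies to this record
            (dataValue, controlValue) -> "enabled".equals(controlValue) ? dataValue : null);

        gated.filter((key, value) -> value != null)          // drop records whose processing is switched off
             .to("output-topic");

        new KafkaStreams(builder.build(), props).start();
    }
}

Running the same application (same application.id) on several machines splits the data topic's partitions across the instances, while each instance still sees every control record.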
Boris,
What are your use case scenarios in which you'd prefer to set different
subscriber IDs for different streams?
Guozhang
On Mon, Nov 13, 2017 at 6:49 AM, Boris Lublinsky <
boris.lublin...@lightbend.com> wrote:
This seems like a very limiting implementation
Boris Lublinsky
FDP Architect
boris.lublin...@lightbend.com
https://www.lightbend.com/
On Nov 13, 2017, at 4:21 AM, Damian Guy wrote:
Hi,
The configurations apply to all streams consumed within the same streams
application. There is no way of overriding them per input stream.
Thanks,
Damian
On Mon, 13 Nov 2017 at 04:49 Boris Lublinsky
wrote:
I am writing a Kafka Streams implementation (1.0.0) for which I have 2 input
streams.
Is it possible to have different subscriber IDs for these 2 streams?
I see only one place where the subscriber's ID can be specified:
streamsConfiguration.put(StreamsConfig.CLIENT_ID_CONFIG,
ApplicationKafkaParamete
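For context, a minimal sketch of where those IDs live (names are hypothetical): both application.id and client.id are set once per application, so two input topics consumed by the same application share them:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class TwoInputStreamsApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Both settings are per application, not per input stream:
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");   // also used as the consumer group id
        props.put(StreamsConfig.CLIENT_ID_CONFIG, "my-streams-client");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Both input topics are consumed by the same application and therefore
        // under the same group/client id.
        builder.stream("input-topic-a").to("output-topic-a");
        builder.stream("input-topic-b").to("output-topic-b");

        new KafkaStreams(builder.build(), props).start();
    }
}

If the two streams really need independent consumer identities, the usual workaround is to split them into two separate Streams applications with different application.id values.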
Thank you very much for the reply.
I'll try to implement it.
Best regards
KIM
2017-03-14 17:07 GMT+09:00 Michael Noll :
Yes, of course. You can also re-use any existing JSON and/or YAML library
for helping you with that.
Also, in general, an application that uses the Kafka Streams API/library is
a normal, standard Java application -- you can of course also use any other
Java/Scala/... library for the application's
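A minimal sketch along those lines, assuming Jackson (jackson-databind plus the jackson-dataformat-yaml module), String serdes configured as the defaults, hypothetical topic names, and a newer StreamsBuilder API than the 0.10.x one current at the time:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLMapper;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class JsonToYamlTopology {
    private static final ObjectMapper JSON_MAPPER = new ObjectMapper();
    private static final YAMLMapper YAML_MAPPER = new YAMLMapper();

    static StreamsBuilder build() {
        StreamsBuilder builder = new StreamsBuilder();

        KStream<String, String> json = builder.stream("json-topic");
        KStream<String, String> yaml = json.mapValues(value -> {
            try {
                JsonNode tree = JSON_MAPPER.readTree(value);    // parse the JSON payload
                return YAML_MAPPER.writeValueAsString(tree);    // write the same tree out as YAML
            } catch (Exception e) {
                return null;                                    // or route bad records to a dead-letter topic
            }
        });
        yaml.filter((key, value) -> value != null)
            .to("yaml-topic");

        return builder;
    }
}

The Streams API only moves the records; the actual JSON-to-YAML conversion is ordinary Jackson code inside mapValues.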
Dear Michael Noll,
I have a question: is it possible to convert JSON format to YAML format
using Kafka Streams?
Best Regards
KIM
2017-03-10 11:36 GMT+09:00 BYEONG-GI KIM :
Thank you very much for the information!
2017-03-09 19:40 GMT+09:00 Michael Noll :
There's actually a demo application that demonstrates the simplest use case
for Kafka's Streams API: to read data from an input topic and then write
that data as-is to an output topic.
https://github.com/confluentinc/examples/blob/3.2.x/kafka-streams/src/test/java/io/confluent/examples/streams/Pa
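A pass-through topology of that kind can be sketched roughly as follows (hypothetical topic names; byte-array serdes so records are copied through unchanged; written against a newer StreamsBuilder API than the 0.10.x one current at the time):

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class PassThroughApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "pass-through-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.ByteArray().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.ByteArray().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic");   // copy records as-is

        new KafkaStreams(builder.build(), props).start();
    }
}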
Hello.
I'm a newcomer who has started learning one of the new Kafka features, aka
Kafka Streams.
As far as I know, the simplest usage of Kafka Streams is to do something
like parsing, which forwards incoming data from one topic to another topic
with a few changes.
So... Here is what I'd want to
Hi Ivan,
If I understand you correctly, the issue with the leftJoin is that your
stream contains records with key==null and thus those records get
dropped?
What about this:
streamBB = streamB.selectKey(..);
streamC = streamBB.leftJoin(tableA);
streamBNull = streamBB.filter((k,v) -> k == null);
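One way to complete that idea (a sketch only, assuming String keys and values and hypothetical topic names; the key-extraction logic is purely illustrative): records that get a non-null key are joined, records with a null key bypass the join, and the two branches are merged back together:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class EnrichmentTopology {
    static StreamsBuilder build() {
        StreamsBuilder builder = new StreamsBuilder();

        // tableA: latest event per key, aggregated from an event stream.
        KTable<String, String> tableA = builder.<String, String>stream("events-topic")
            .groupByKey()
            .reduce((oldValue, newValue) -> newValue);

        // Re-key streamB by the enrichment key; some records may end up with a null key.
        KStream<String, String> streamBB = builder.<String, String>stream("stream-b-topic")
            .selectKey((key, value) -> extractEnrichmentKey(value));

        // Only records with a non-null key take part in the join.
        KStream<String, String> joined = streamBB
            .filter((k, v) -> k != null)
            .leftJoin(tableA, (value, enrichment) ->
                enrichment == null ? value : value + "|" + enrichment);

        // Records with a null key bypass the join unchanged; merge both branches.
        KStream<String, String> notJoinable = streamBB.filter((k, v) -> k == null);
        joined.merge(notJoinable).to("stream-c-topic");

        return builder;
    }

    // Illustrative only: derive the join key from the value.
    private static String extractEnrichmentKey(String value) {
        return value != null && value.contains(":") ? value.split(":", 2)[0] : null;
    }
}

The merge() of the two branches is what keeps the null-key records from being dropped by the join.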
Hi Guys,
I am implementing a stream processor where I aggregate a stream of events
by their keys into a KTable tableA, and then I "enrich" another
streamB with the values of tableA.
So essentially I have this:
streamC = streamB
.selectKey(..)
.leftJoin(tableA);
This works great however i
Hi Mike,
I don't have much experience with Kafka Streams yet, but from a common-sense
point of view, maybe it would be easier to model the actual data processing as a
Kafka Streams job with output to another topic, which would then be consumed by
Kafka Connect sinks? I see one in discussion for HBase
Hello Mike,
What scenarios could cause your app to not be able to complete processing?
Are you referring to a runtime exception, or some other app errors (like
writing to an external data service that times out and cannot be
retried, etc.)?
Guozhang
On Mon, Mar 14, 2016 at 9:55 AM, Mike T
I was reading a bit about Kafka Streams and was wondering if it is
appropriate for my team's use. We ingest data using Kafka and Storm. Data
gets pulled by Storm and sent off to bolts that publish the data into HBase
and Solr. One of the things we need is something analogous to Storm's
ability to f