..., ApplicationKafkaParameters.DATA_GROUP);
And it does not seem like either the Topology or DSL APIs allow overriding it
during stream creation.
Thanks for the help.
Boris Lublinsky
FDP Architect
boris.lublin...@lightbend.com
https://www.lightbend.com/
This seems like a very limiting implementation.
> On Nov 13, 2017, at 4:21 AM, Damian Guy wrote:
>
> Hi,
>
> The configurations apply to all streams consumed within the same streams
> app
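For context: in Kafka Streams the consumer group id is always the application.id, shared by every internal consumer of the app, which is why it cannot be set per stream. A minimal sketch of the relevant configuration (the application id value and class name are illustrative):

```java
import java.util.Properties;

public class StreamsGroupIdSketch {
    // Kafka Streams derives the consumer group id from application.id;
    // there is no per-stream group.id setting.
    public static String effectiveGroupId() {
        Properties props = new Properties();
        props.put("application.id", "my-streams-app");    // required by Streams
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        // Any explicit "group.id" set here would be overwritten by Streams,
        // which is why the group cannot be controlled per stream.
        return props.getProperty("application.id");
    }

    public static void main(String[] args) {
        System.out.println(effectiveGroupId()); // prints my-streams-app
    }
}
```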
Objects.requireNonNull(storeName, "storeName cannot be null");
if (!isInitialized()) {
    return Collections.emptyList();
}

The isInitialized method

private boolean isInitialized() {
    return clusterMetadata != null && !clusterMetadata.topics().isEmpty();
}

checks for the cluster metadata, which is n...
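Since metadata queries like KafkaStreams#allMetadataForStore return an empty collection until the instance has received cluster metadata, callers typically poll with a retry. A minimal sketch, where a generic supplier stands in for the real metadata query (the helper name and parameters are illustrative):

```java
import java.util.Collections;
import java.util.List;
import java.util.function.Supplier;

public class MetadataRetry {
    // Polls the supplier until it returns a non-empty result or attempts
    // run out. In a real app the supplier would wrap the Streams metadata
    // query, which stays empty until cluster metadata is populated.
    public static <T> List<T> waitForMetadata(Supplier<List<T>> query,
                                              int attempts, long sleepMs)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            List<T> result = query.get();
            if (!result.isEmpty()) {
                return result;
            }
            Thread.sleep(sleepMs);
        }
        return Collections.emptyList();
    }

    public static void main(String[] args) throws InterruptedException {
        // Stand-in supplier: empty twice, then "ready".
        int[] calls = {0};
        List<String> r = waitForMetadata(() -> {
            calls[0]++;
            return calls[0] < 3 ? Collections.<String>emptyList()
                                : List.of("ready");
        }, 10, 1);
        System.out.println(r); // prints [ready]
    }
}
```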
It looks like for a custom state store implementation the only option is to
use the Topology API.
The problem is that in the case of the DSL, Kafka Streams does not provide any
option to create a StoreBuilder for a custom store.
Am I missing something?
> On Nov 13, 2017, at 12:24 PM, Boris Lublinsky
> wrote:
>
> It looks like for the custom state store implementation the only option is to
> use Topology APIs.
> The problem is that in the case of DSL, Kafka streams does not provide any
> option to create Store Build
Is there example code for this somewhere?
Also, does it have to be key/value?
In my case the store is just state, so no key is exposed.
It was working fine in 0.11.0, but now the semantics are very different.
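For a store that is "just state" with no exposed key, one option is a single-value store behind its own builder. The sketch below uses simplified stand-in interfaces in place of Kafka's StateStore/StoreBuilder (all names here are illustrative; real code would implement org.apache.kafka.streams.state.StoreBuilder and register it via StreamsBuilder#addStateStore):

```java
// Simplified stand-ins for Kafka's StateStore / StoreBuilder interfaces.
interface SimpleStateStore {
    String name();
}

interface SimpleStoreBuilder<S extends SimpleStateStore> {
    S build();
    String name();
}

// A store that holds a single value: no key is ever exposed.
class SingleValueStore<V> implements SimpleStateStore {
    private final String name;
    private V value;

    SingleValueStore(String name) { this.name = name; }

    public String name() { return name; }
    public void write(V v) { value = v; }
    public V read() { return value; }
}

class SingleValueStoreBuilder<V>
        implements SimpleStoreBuilder<SingleValueStore<V>> {
    private final String name;

    SingleValueStoreBuilder(String name) { this.name = name; }

    public SingleValueStore<V> build() { return new SingleValueStore<>(name); }
    public String name() { return name; }
}

public class CustomStoreSketch {
    public static void main(String[] args) {
        SingleValueStore<Long> store =
            new SingleValueStoreBuilder<Long>("model-state").build();
        store.write(42L);
        System.out.println(store.name() + "=" + store.read()); // model-state=42
    }
}
```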
...instances.
This means that I would like to control group IDs for streams individually.
> On Nov 13, 2017, at 2:47 PM, Guozhang Wang wrote:
>
> Boris,
>
> What's your use case scenarios that
It's not a global state.
I am using a custom state store.
> On Nov 14, 2017, at 11:42 AM, Matthias J. Sax wrote:
>
> Boris,
>
> I just realized, that you want to update the state from your proc
final val ZookeeperDataFolderName = "zookeeper_data"
}
Which starts fine.
Very simple message writer and reader.
When I run the message reader and writer against a real Kafka cluster, everything runs fine.
If I run them against an embedded server, the writer works fine: I can see messages in the log and can use the CLI tool
bin/k
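One common cause when a writer succeeds but a reader sees nothing against a short-lived embedded broker is the consumer's default auto.offset.reset=latest, which skips records produced before the consumer subscribed. A hedged sketch of consumer properties (class and group names are illustrative; the config keys are the standard Java consumer keys):

```java
import java.util.Properties;

public class EmbeddedReaderConfig {
    public static Properties consumerProps(String bootstrap) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        props.put("group.id", "embedded-test-reader");
        // With the default "latest", a consumer that starts after the writer
        // finished sees no messages on a freshly started embedded broker.
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(
            consumerProps("localhost:9092").getProperty("auto.offset.reset"));
    }
}
```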
Thanks Guozhang.
I do not think it's ever updated.
I waited for a while.
I did implement a workaround.
Can you also look at my embedded Kafka question?
> On Nov 16, 2017, at 10:29 PM, Guozhang Wang wrote:
This works fine; we (Lightbend) are using this approach all over the place.
> On Jul 11, 2018, at 8:53 AM, Pulkit Manchanda wrote:
>
> Hi All,
>
> I want to build a datapipeline with the f
Yes, reactive Kafka
> On Jul 11, 2018, at 9:46 AM, Pulkit Manchanda wrote:
>
> Thanks Boris.
> Are you using Alpakka for kafka -Akka integration?
>
> On Wed, Jul 11, 2018 at 9:56 A
It's actually quite simple; unfortunately you have to read and then write to the TSDB.
Enclosed is an example doing this and dumping to InfluxDB.
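The attached JMXCollector.scala is not reproduced here. As a rough illustration of the same idea, the sketch below reads one JMX MBean from the local platform MBean server and formats it as an InfluxDB line-protocol point (class and measurement names are illustrative; the original presumably connects to the broker's remote JMX endpoint instead):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

public class JmxToInflux {
    // Reads heap usage from the platform MBean server and formats it as an
    // InfluxDB line-protocol point: measurement,tags fields timestamp(ns).
    public static String heapPoint() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName memory = new ObjectName("java.lang:type=Memory");
        CompositeData heap =
            (CompositeData) server.getAttribute(memory, "HeapMemoryUsage");
        long used = (Long) heap.get("used");
        return "jvm_memory,area=heap used=" + used + "i "
            + System.currentTimeMillis() * 1_000_000L;
    }

    public static void main(String[] args) throws Exception {
        // The resulting line can be POSTed to InfluxDB's /write endpoint.
        System.out.println(heapPoint());
    }
}
```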
JMXCollector.scala
Description: Binary data
> On Aug 8, 2018, at 8:46 PM, ...com> wrote:
>
> Boris:
> BrokerWithJMX is referenced but I didn't find the class source after a brief search.
> FYI
>
> On Wed, Aug 8, 2018 at 7:10 PM Boris Lublinsky <boris.lublin...@lightbend.com> wrote:
> It's actually quite simple; unfortunately you have to read and then write to the TSDB. Enclosed is an example doing...