The execution within the IDE is most likely not loading the flink-conf.yaml
file to read the configuration. When run from the IDE you get a
LocalStreamEnvironment, which starts a LocalFlinkMiniCluster. The
LocalStreamEnvironment is created by
StreamExecutionEnvironment.createLocalEnvironment, which does not read flink-conf.yaml.
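One way around this is to pass the configuration programmatically when creating the local environment. A minimal sketch (the class name and slot count are illustrative, and a Flink dependency on the classpath is assumed):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LocalEnvWithSlots {
    public static void main(String[] args) throws Exception {
        // Build the configuration in code, since flink-conf.yaml
        // is not read when running from the IDE.
        Configuration conf = new Configuration();
        conf.setInteger("taskmanager.numberOfTaskSlots", 2);

        // Start the local mini cluster with this configuration
        // and a parallelism of 2.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(2, conf);

        // ... define the pipeline on env, then call env.execute(...)
    }
}
```

With this, the mini cluster started in the IDE has the requested number of task slots, matching what flink-conf.yaml would provide on a real cluster.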
UPDATE:
I'm trying to implement the version with one node and two task slots on my
laptop. I have also configured in flink-conf.yaml the key:
taskmanager.numberOfTaskSlots: 2
but when I execute my program in the IDE I get:
org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException:
Nice, thank you for the reply.
So if I call slotSharingGroup(groupname) on the last operator, like this:
DataStream stream = env
    .addSource(new FlinkKafkaConsumer010<>(TOPIC, new CustomDeserializer(),
        properties))
    .assignTimestampsAndWatermarks(new CustomTimestampExtractor())
    .map(...)
    .slotSharingGroup(groupname);
Hi,
For the first question, I think both approaches should work. You only have to
be careful with startNewChain(), because the behaviour can be somewhat
unexpected. What it does is specify that a new chain should be started with
the operator on which you call startNewChain(). For example, in:
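A hedged sketch of how startNewChain() affects chaining (the operators and data are illustrative, not from the original thread):

```java
// Without startNewChain(), all three operators below could be chained
// into a single task. Calling startNewChain() on the second map makes
// that map begin a new operator chain: the filter and the first map
// stay chained together, while the second map starts a fresh chain.
DataStream<String> result = env
    .fromElements("a", "b", "c")
    .filter(s -> !s.isEmpty())
    .map(String::toUpperCase)
    .map(s -> s + "!")
    .startNewChain();   // the map this is called on heads the new chain
```

So the operator you call startNewChain() on is the first operator of the new chain, not the last operator of the old one, which is the part people often find unexpected.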
Hi, first of all excuse me for the long post.
I have already read the documentation about parallelism and slots, and the
related API, but I still have some doubts about how to use them in practice.
My program is composed essentially of three operations:
- get data from a Kafka source
- perform a machine learning task