Is the map() spread out only if I do as you suggest:
messageStream.rebalance().map(..) ?
Best regards
Rob
> Original message ----
>From: Carst Tankink ctank...@bol.com
>Subject: Re: kafka consumer
(Accidentally sent this to Timo instead of to the list...)
Hi,
What Timo says is true, but in case you have a higher parallelism than the number
of partitions (because you want to make use of it in a future operator), you
could do a .rebalance() (see
https://ci.apache.org/projects/flink/flink-docs-r
/cluster_setup.html
for a cluster setup, but it depends on what cluster you have available). Flink
abstracts away the specifics of per-node communication in its API already.
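For illustration, a minimal sketch of the pattern being discussed; the Kafka connector version, topic name, and parallelism values are assumptions, not taken from this thread:

import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class RebalanceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(16); // higher than the number of partitions in the topic

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "example-group");           // placeholder

        // Suppose "my-topic" has only 4 partitions: then only 4 of the 16 source
        // subtasks actually receive records from Kafka.
        DataStream<String> messageStream = env.addSource(
            new FlinkKafkaConsumer09<>("my-topic", new SimpleStringSchema(), props));

        // Without rebalance(), the map() would be chained to the source and
        // effectively only process data on those same 4 subtasks. rebalance()
        // redistributes the records round-robin so all 16 map() subtasks share the work.
        messageStream
            .rebalance()
            .map(String::toUpperCase)
            .print();

        env.execute("rebalance before map example");
    }
}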
Hope that helps,
Carst
On 6/15/17, 08:19, "Carst Tankink" wrote:
Hi,
Let me try to explain this from another user’s perspective ☺
When you run your application, Flink will map your logical/application topology
onto a number of task slots (documented in more detail here:
https://ci.apache.org/projects/flink/flink-docs-release-1.3/internals/job_scheduling.html
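As a rough illustration of how per-operator parallelism feeds into the subtasks that get scheduled into task slots (the pipeline and numbers below are invented, not from the original job):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SlotMappingExample {
    public static void main(String[] args) throws Exception {
        // Each operator's parallelism decides how many subtasks Flink creates;
        // those subtasks are then scheduled into the TaskManagers' task slots
        // (with slot sharing, one slot can hold one subtask of every operator).
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(8); // job-wide default parallelism

        env.fromElements("a", "bb", "ccc")   // fromElements is a non-parallel source (parallelism 1)
           .rebalance()                      // spread records to all downstream subtasks
           .map(String::length)              // runs with the default parallelism of 8
           .print();

        env.execute("slot mapping example");
    }
}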
be made as part of FLINK-6464, which
includes a clean-up/refactoring of operator names.
On 12.06.2017 14:45, Carst Tankink wrote:
Hi,
We accidentally forgot to give some operators in our flink stream a
custom/unique name, and ran into the following exception in Graphite:
‘exceptions.IOError: [Errno 36] File name too long:
'///TriggerWindow_SlidingEventTimeWindows_60_-60__-FoldingStateDescriptor_serializer=org-apac
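For reference, a minimal sketch of naming operators explicitly so the reported metric identifier stays short; the pipeline below is a stand-in, not the original job:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.SlidingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class NamedOperatorsExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Tuple2<String, Long>> words = env.fromElements(
            Tuple2.of("a", 1L), Tuple2.of("b", 1L), Tuple2.of("a", 1L));

        // Without .name(), the metrics reporter sees the full auto-generated operator
        // description (TriggerWindow(SlidingEventTimeWindows(...), FoldingStateDescriptor(...)),
        // ...), which can easily exceed file-name length limits on the Graphite side.
        words.keyBy(0)
             .window(SlidingProcessingTimeWindows.of(Time.minutes(1), Time.seconds(10)))
             .sum(1)
             .name("sliding-word-count")  // short, unique operator name used in metrics
             .uid("sliding-word-count")   // stable id for savepoint mapping
             .print();

        env.execute("named operators example");
    }
}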
Hi all,
Based on the advice of Aljoscha in this thread, I’m trying to implement a
ProcessFunction that simulates the original sliding window (using Flink 1.2.1,
still).
My current setup is as follows for a window that is windowWidth wide and slides
every windowSlide:
- Keep a ListState<…>
- processElement(…)
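A rough sketch of that ListState-plus-timer approach, assuming a keyed stream with event-time timestamps and a plain sum in place of the original fold; types and names are invented and only illustrate the pattern:

import java.util.ArrayList;
import java.util.List;

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

// Emits, every windowSlide ms, the sum of values seen in the last windowWidth ms
// for the current key. Simplified: no allowed lateness, no per-window panes.
public class SlidingWindowEmulator extends ProcessFunction<Tuple2<String, Long>, Long> {

    private final long windowWidth;  // window length in ms
    private final long windowSlide;  // slide interval in ms

    private transient ListState<Tuple2<Long, Long>> buffer; // (event timestamp, value)

    public SlidingWindowEmulator(long windowWidth, long windowSlide) {
        this.windowWidth = windowWidth;
        this.windowSlide = windowSlide;
    }

    @Override
    public void open(Configuration parameters) {
        buffer = getRuntimeContext().getListState(
            new ListStateDescriptor<>(
                "buffered-elements",
                TypeInformation.of(new TypeHint<Tuple2<Long, Long>>() {})));
    }

    @Override
    public void processElement(Tuple2<String, Long> value, Context ctx, Collector<Long> out)
            throws Exception {
        long ts = ctx.timestamp(); // assumes event-time timestamps are assigned upstream
        buffer.add(Tuple2.of(ts, value.f1));
        // Fire at the end of the slide interval this element falls into.
        long nextSlideEnd = ts - (ts % windowSlide) + windowSlide;
        ctx.timerService().registerEventTimeTimer(nextSlideEnd);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Long> out)
            throws Exception {
        long sum = 0L;
        List<Tuple2<Long, Long>> stillRelevant = new ArrayList<>();
        Iterable<Tuple2<Long, Long>> buffered = buffer.get();
        if (buffered != null) {
            for (Tuple2<Long, Long> e : buffered) {
                if (e.f0 > timestamp - windowWidth) {
                    stillRelevant.add(e);   // may appear in later windows
                    if (e.f0 <= timestamp) {
                        sum += e.f1;        // inside the window that ends now
                    }
                }
            }
        }
        // Rewrite the state with only the elements that can still matter.
        buffer.clear();
        for (Tuple2<Long, Long> e : stillRelevant) {
            buffer.add(e);
        }
        out.collect(sum);
        // Keep sliding as long as there is buffered data.
        if (!stillRelevant.isEmpty()) {
            ctx.timerService().registerEventTimeTimer(timestamp + windowSlide);
        }
    }
}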
2.0) but did not solve this case, which fits the “way
too much RocksDB access” explanation better.
Thanks again,
Carst
From: Aljoscha Krettek
Date: Wednesday, May 24, 2017 at 16:13
To: Stefan Richter
Cc: Carst Tankink, "user@flink.apache.org"
Subject: Re: large sliding window perf
Hi,
We are seeing a similar behaviour for large sliding windows. Let me put some
details here and see if they match up enough with Chen’s:
Technical specs:
- Flink 1.2.1 on YARN
- RocksDB backend, on HDFS. I’ve set the backend to
PredefinedOptions.SPINNING_DISK_OPTIMIZED_HIGH_MEM
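For context, a minimal sketch of configuring a backend like that; the checkpoint path and interval are placeholders, not the values from this setup:

import org.apache.flink.contrib.streaming.state.PredefinedOptions;
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbBackendSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder checkpoint location; on YARN this would point at HDFS.
        RocksDBStateBackend backend = new RocksDBStateBackend("hdfs:///flink/checkpoints");

        // Predefined tuning profile for spinning disks that trades memory for fewer seeks.
        backend.setPredefinedOptions(PredefinedOptions.SPINNING_DISK_OPTIMIZED_HIGH_MEM);

        env.setStateBackend(backend);
        env.enableCheckpointing(60_000); // example interval: one checkpoint per minute

        // Trivial pipeline so the job actually runs; the real job goes here.
        env.fromElements(1, 2, 3).print();
        env.execute("rocksdb backend example");
    }
}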