The user-key and value encoding is controlled through serializer UDFs that can
be user-provided. Your assumption is right: RocksDB works like an ordered map,
and we concatenate the actual keys as (keygroup_id :: event-key :: user-key),
where keygroup_id is a shard ID that is functionally dependent on the element
key's hash and is used to group keys into key-groups.
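To make that layout concrete, here is a minimal sketch of the concatenation.
The class and method names (CompositeKeySketch, compositeKey) and the plain
DataOutputStream encoding are illustrative assumptions, not Flink's exact wire
format:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class CompositeKeySketch {

    /** Concatenates (keygroup_id :: event-key :: user-key) into one RocksDB key. */
    static byte[] compositeKey(int keyGroupId, byte[] eventKey, byte[] userKey)
            throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeShort(keyGroupId); // shard id derived from the element key's hash
        out.write(eventKey);        // serialized key of the stream element
        out.write(userKey);         // serialized user key of the MapState entry
        return bytes.toByteArray();
    }
}
```

Since RocksDB keeps keys sorted, a prefix scan over (keygroup_id :: event-key)
visits exactly the entries of one logical map, which is presumably what serves
MapState iteration.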
Thanks Stefan. The logical data model of Map<EventKey, Map<UserKey, UserValue>>
makes total sense. A related question: MapState supports iteration. What's the
encoding format at the RocksDB layer? Or rather, how can a user control the
user-key encoding? I assume the implementation uses a compound key format:
EventKey + UserKey.
Hi,
you can imagine the internals of keyed map state working like a
Map<EventKey, Map<UserKey, UserValue>>, but you only deal with the inner
Map<UserKey, UserValue> in your user code. Under the hood, Flink will always
present you the map that corresponds to the currently processed event's key. So
for each element, it will always swap in the inner map that belongs to that
element's key.
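As a sketch of what that scoping looks like from user code (the class name
PerKeyCounts and the state name "counts" are made up): one MapState field is
declared, and Flink transparently rebinds it to the inner map of whichever key
the current element has.

```java
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

/** Must run on a keyed stream, e.g. stream.keyBy(t -> t.f0).flatMap(new PerKeyCounts()). */
public class PerKeyCounts extends RichFlatMapFunction<Tuple2<String, String>, String> {

    private transient MapState<String, Long> counts;

    @Override
    public void open(Configuration parameters) {
        // Declared once; Flink scopes it to the currently processed key.
        counts = getRuntimeContext().getMapState(
                new MapStateDescriptor<>("counts", String.class, Long.class));
    }

    @Override
    public void flatMap(Tuple2<String, String> value, Collector<String> out) throws Exception {
        // 'counts' already refers to the inner map of this element's key.
        Long previous = counts.get(value.f1);
        long updated = (previous == null ? 0L : previous) + 1L;
        counts.put(value.f1, updated);
        out.collect(value.f0 + "/" + value.f1 + " -> " + updated);
    }
}
```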
To be clear, I like the direction Flink is going with state: Queryable State,
MapState, etc. MapState in particular is a great feature, and I am trying to
find more documentation and/or usage patterns for it before I dive into the
deep end of the code. As far as I can tell, the key in MapState […]
Hi,
you are right. There are some limitations on RichReduceFunctions in windows.
Maybe the new AggregateFunction (`window.aggregate()`) could solve your
problem: you can provide an accumulator, which is your custom state that you
can update for each record. I couldn't find a documentation page for it yet.
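To illustrate the suggestion (the class name AverageAggregate and the
(sum, count) accumulator layout are my own example, not from the thread), here
is a minimal AggregateFunction computing a running average; the accumulator
plays the role of the custom per-window state:

```java
import org.apache.flink.api.common.functions.AggregateFunction;

public class AverageAggregate implements AggregateFunction<Long, long[], Double> {

    @Override
    public long[] createAccumulator() {
        return new long[] {0L, 0L};           // {sum, count}
    }

    @Override
    public long[] add(Long value, long[] acc) {
        acc[0] += value;                      // update state for each record
        acc[1] += 1;
        return acc;
    }

    @Override
    public Double getResult(long[] acc) {
        return acc[1] == 0 ? 0.0 : (double) acc[0] / acc[1];
    }

    @Override
    public long[] merge(long[] a, long[] b) {
        a[0] += b[0];                         // combine accumulators of merged windows
        a[1] += b[1];
        return a;
    }
}
```

It would then be wired in with something like
stream.keyBy(...).window(...).aggregate(new AverageAggregate()).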
Hi, Flink newbie here.
I played with the API (built from GitHub master) and encountered some issues,
but I am not sure whether they are limitations or actually by design:
1. The DataStream reduce method does not take a RichReduceFunction. The code
compiles but throws a runtime exception when submitted (see the sketch below).
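For concreteness, a minimal sketch of the pattern behind this report, assuming
the windowed case the reply above refers to (the class name MaxReducer is
mine). Since RichReduceFunction implements ReduceFunction, the call compiles,
but Flink rejects rich functions when it translates a windowed reduce:

```java
import org.apache.flink.api.common.functions.RichReduceFunction;

// Compiles fine: RichReduceFunction implements ReduceFunction.
public class MaxReducer extends RichReduceFunction<Long> {
    @Override
    public Long reduce(Long a, Long b) {
        return Math.max(a, b);
    }
}

// But a windowed reduce refuses rich functions at job-translation time:
//   keyedStream
//       .timeWindow(Time.seconds(10))
//       .reduce(new MaxReducer());   // throws UnsupportedOperationException
```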