I have encountered a problem in Flink when trying to upgrade from Flink 1.9.1 to
Flink 1.11.3.

The problem is a combination of two components:

* Keys implemented as case classes in Scala where we override the equals and
  hashCode methods. The case class has additional fields which are not used in
  the keyBy (hashCode/equals) but can have different values for a specific key
  (the fields we care about). A simplified sketch follows after this list.
* Checkpointing with RocksDB
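
To illustrate, here is a simplified sketch of the kind of key we use (the names
and fields are made up, not our actual code):

// Hypothetical key: only `id` takes part in equality/hashing, while
// `metadata` may differ between records that belong to the same key.
case class SensorKey(id: String, metadata: String) {

  override def equals(other: Any): Boolean = other match {
    case that: SensorKey => id == that.id
    case _               => false
  }

  override def hashCode(): Int = id.hashCode
}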

In Flink 1.9.1 everything worked fine, but in Flink 1.11.3 we got one aggregation
per unique combination of all the case class fields, including the fields we did
not want to include in the keyBy and which we explicitly do not use in hashCode
and equals. It looks like hashCode is ignored in the keyBy in our case when we
use RocksDB for checkpoints.

We do not see this problem if we disable checkpointing or when we use the
FsStateBackend.
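
For completeness, this is roughly how we switch between the backends (the
checkpoint path and interval below are placeholders, not our real configuration):

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend
import org.apache.flink.runtime.state.filesystem.FsStateBackend

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.enableCheckpointing(60000L) // placeholder interval

// With this backend we see one aggregation per full case class instance:
env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true))

// With this backend (or with checkpointing disabled) the keys behave as expected:
// env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints"))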

I have seen this with "Incremental Window Aggregation with AggregateFunction"
[1], but a colleague of mine reported he had seen the same issue with
KeyedProcessFunction too.
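
A rough sketch of the pipeline shape where I see it, following the
AggregateFunction pattern from [1] (the event type, values and window size are
made up):

import org.apache.flink.api.common.functions.AggregateFunction
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

// Hypothetical event type; SensorKey is the case class sketched above.
case class Event(key: SensorKey, value: Long)

// Counts events per key. With RocksDB we observe one window result per
// distinct (id, metadata) pair instead of one per id.
class CountAggregate extends AggregateFunction[Event, Long, Long] {
  override def createAccumulator(): Long = 0L
  override def add(in: Event, acc: Long): Long = acc + 1
  override def getResult(acc: Long): Long = acc
  override def merge(a: Long, b: Long): Long = a + b
}

val env = StreamExecutionEnvironment.getExecutionEnvironment // configured as above
val events: DataStream[Event] = env.fromElements(
  Event(SensorKey("a", "m1"), 1L),
  Event(SensorKey("a", "m2"), 2L) // same logical key, different metadata
)

val counts = events
  .keyBy(_.key)
  .timeWindow(Time.minutes(1))
  .aggregate(new CountAggregate)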

We are using Scala version 2.11.12 and Java 8.

This looks like a bug to me. Is it a known issue or a new one?

Best regards,
/David Haglund

[1] Incremental Window Aggregation with AggregateFunction
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/stream/operators/windows.html#incremental-window-aggregation-with-aggregatefunction


David Haglund
Systems Engineer
Fleet Perception for Maintenance
NIRA Dynamics AB
Wallenbergs gata 4
58330 Linköping
Sweden
Mobile: +46 705 634 848
david.hagl...@niradynamics.se
www.niradynamics.se
Together for smarter safety
