Hi All,

Stephan pointed this out to me the other day, so here goes: as some of you 
might know, there are end-to-end tests in flink-end-to-end-tests that run a 
proper Flink cluster (on the local machine) and execute some tests. These 
catch bugs that only surface when using Flink as a user, because they 
exercise the whole system. We should add tests there that verify integration 
with other systems. For example, there are Docker Compose 
configurations for starting complete Hadoop clusters [1] or Mesos [2], and there 
are other files for starting ZooKeeper, Kafka, ... We could use these to spin up a 
testing cluster, run Flink on YARN and Mesos, and have a reproducible 
environment.
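
As a rough illustration, a minimal Docker Compose file for a ZooKeeper + Kafka 
test environment might look like the sketch below. This is only a sketch: the 
image names, tags, and environment variables are placeholders I'm assuming, not 
a tested configuration.

```yaml
version: '2'
services:
  zookeeper:
    image: zookeeper:3.4            # placeholder image/tag
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka       # placeholder image
    ports:
      - "9092:9092"
    environment:
      # Kafka brokers register themselves in ZooKeeper via the service name
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: localhost
    depends_on:
      - zookeeper
```

A test script could then bring the containers up with `docker-compose up -d`, 
run a Flink job against them, and tear everything down with `docker-compose 
down` afterwards.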

As a next step, we could perform the sort of tests we do for a release, as 
described in [3]: for example, the test where we run a job, kill some 
processes, and verify that Flink recovers correctly and that the HA setup works 
as intended.

What do you think?

By the way, I'm also mostly writing to see if anyone has some experience with 
Docker/Docker Compose and would be interested in getting started on this. I 
would do it myself, because having more automated tests would help me sleep 
better at night, but I'm currently too busy with other things. 😉

[1] 
https://github.com/big-data-europe/docker-hadoop/blob/master/docker-compose.yml
[2] https://github.com/bobrik/mesos-compose
[3]
