Spark Structured Streaming supports joining a streaming DataFrame directly with a batch DataFrame, so it is easy to implement an enrichment pipeline that joins a stream against a dimension table.
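For context, the stream-static join being described looks roughly like this in Spark Structured Streaming (a sketch only; the paths, schema, and the `user_id` join column are assumed for illustration):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder().appName("enrich").getOrCreate()

// Static dimension table, read once as a plain batch DataFrame
val dim = spark.read.parquet("/data/dim_table")

// Streaming fact source (schema must be given explicitly for file streams)
val eventSchema = StructType(Seq(
  StructField("user_id", StringType),
  StructField("ts", TimestampType)))
val events = spark.readStream.schema(eventSchema).json("/data/incoming")

// Stream-static join: each micro-batch is joined against the dimension table
val enriched = events.join(dim, Seq("user_id"))

enriched.writeStream.format("console").start()
```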
I checked the Flink documentation, and it seems this feature is tracked in a JIRA ticket that hasn't been resolved yet.
So how can I implement such an enrichment pipeline in Flink?
Hi all:
I tried to install the Hadoop-free flink-1.7.2 distribution on Azure with Hadoop 2.7.
And when I submit a Flink job to YARN, like this:
bin/flink run -m yarn-cluster -yn 2 ./examples/batch/WordCount.jar
this exception came out:

org.apache.flink.client.deployment.ClusterDeploymentException