Thank you guys for helping me understand!
To be precise, I was able to control the behavior in my research work with
your help.
Does anybody else think, however, that the behavior is not straightforward?
(At least one other person on StackOverflow misunderstood it the same way I
did.)
I'd like to ask the co
Thanks for your feedback! This is very valuable :)
Please share your experience (positive and negative) when doing more
complex stuff. And don't hesitate to ask if you have any questions.
-Matthias
On 11/21/2015 06:04 PM, Naveen Madhire wrote:
> FYI, I just saw this email chain and thought of sharing my experience.
Sorry for delaying this discussion a bit. I was busy fixing bugs in 0.10.0
;)
@Nick: Thank you for the pointer to Yetus. I definitely like the idea of
having a central project for all the Hadoop-related project tooling.
Do you know if Hadoop/HBase are also using a Maven plugin to fail a build on
FYI, I just saw this email chain and thought of sharing my experience. I
used the Flink Storm compatibility API a few days ago. A simple example
worked well; I will be testing a few more next week.
One thing to note is that I had to include all Scala dependencies in the
Storm topology, since FlinkLocalCluster.java
I would not set
> ExecutionEnvironment env =
> ExecutionEnvironment.createLocalEnvironment().setParallelism(1);
because this changes the default parallelism of *all* operators to one.
Instead, only set the parallelism of the **sink** to one (as described
here:
https://stackoverflow.com/questions/
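To make this concrete, here is a minimal sketch (the input data and the
output path are placeholders, not taken from the original question):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class SinkParallelismSketch {
  public static void main(String[] args) throws Exception {
    // Default parallelism stays untouched for all other operators.
    ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment();

    // Placeholder input; any DataSet<String> works here.
    DataSet<String> data = env.fromElements("a", "b", "c");

    // Only the sink runs with parallelism one, so a single file is
    // written instead of one file per parallel task.
    data.writeAsText("/output1.txt").setParallelism(1);

    env.execute();
  }
}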
Additionally, since having multiple files under /output1.txt is standard in
the Hadoop ecosystem, you can transparently read all of those files back
with env.readTextFile("/output1.txt").
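For example, a quick sketch (same placeholder path as above):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class ReadBackSketch {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    // "/output1.txt" is a directory with one part file per parallel
    // writer; readTextFile() picks up all part files transparently.
    DataSet<String> lines = env.readTextFile("/output1.txt");

    lines.print(); // print() also triggers execution of the program
  }
}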
You can also set the parallelism of individual operators (e.g., the file
writer) if you really need a single output.
On Fri, Nov 2
Robert Metzger created FLINK-3056:
-------------------------------------
Summary: Show bytes sent/received as MBs/GB and so on in web interface
Key: FLINK-3056
URL: https://issues.apache.org/jira/browse/FLINK-3056
Project: Flink