Hi
Flink and Spark take different approaches to computation, each of which has
its pros and cons.
Can anyone elaborate on the pros and cons of "operator-centric" versus
"intermediate-data-centric"? Any help would be much appreciated.
First of all, let me share my understanding:
Operator-centric: could have more
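To make the "operator-centric" side a bit more concrete, here is a minimal DataStream sketch in Java (the class name and sample data are made up for illustration; this is not code from the Flink repository). Each transformation becomes an operator in the job graph, and records are pipelined from operator to operator as they are produced, rather than being materialized as complete intermediate data sets between stages, which is closer to Spark's batch model:

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

// Hypothetical illustration class, not part of the Flink code base.
public class OperatorPipelineSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Every transformation below becomes an operator in the job graph.
        // Records flow through the operator chain as soon as they are produced
        // (pipelined execution); they are not first collected into a complete
        // intermediate data set between stages.
        env.fromElements("flink spark", "flink")
           .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
               @Override
               public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                   for (String word : line.split(" ")) {
                       out.collect(new Tuple2<>(word, 1));
                   }
               }
           })
           .keyBy(0)
           .sum(1)
           .print();

        env.execute("operator-centric pipeline sketch");
    }
}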
One improvement suggestion; please check whether it is valid.
To check whether the system is adequately reliable, testers usually
deliberately perform some delete operations.
Steps:
1. Go to "flink\build-target\log".
2. Delete the "flink-xx-jobmanager-linux-3lsu.log" file.
3. Run jobs so that log info keeps being written.
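As a rough sketch of how such a check could be scripted (the class name, directory, and log file name below are placeholders, not taken from a real installation), one could delete the JobManager log file while jobs are running and then verify whether new log output still appears:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical helper, not part of Flink; the directory and file name are
// placeholders for the real JobManager log file of the test setup.
public class LogDeletionCheck {

    public static void main(String[] args) throws IOException, InterruptedException {
        Path logDir = Paths.get("flink/build-target/log");            // placeholder path
        Path jobManagerLog = logDir.resolve("flink-jobmanager.log");  // placeholder file name

        // Step 2 of the proposal: delete the log file while the cluster keeps running.
        Files.deleteIfExists(jobManagerLog);

        // Step 3: jobs that produce log output should be running at this point.
        // Wait a while, then check whether the file exists again and has content,
        // i.e. whether logging survived the deletion.
        Thread.sleep(10_000);

        boolean recreated = Files.exists(jobManagerLog);
        long size = recreated ? Files.size(jobManagerLog) : 0L;
        System.out.println("JobManager log recreated: " + recreated + ", size: " + size + " bytes");
    }
}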
It is probably easier for them to help
you with problems if you use IntelliJ. Also, IntelliJ is easier to set up and
works better for Flink because of the mixed Java/Scala code.
Cheers,
Aljoscha
On Mon, 6 Jul 2015 at 03:39 Chenliang (Liang, DataSight)
<chenliang...@huawei.com> wrote:
Thanks.
apache-flink-from-source
For development, I recommend IntelliJ:
https://github.com/apache/flink/blob/master/docs/internals/ide_setup.md#intellij-idea
Greetings,
Stephan
On Fri, Jul 3, 2015 at 11:16 AM, Chenliang (Liang, DataSight)
<chenliang...@huawei.com> wrote:
Dear
In Windows 8 + VirtualBox, how can I build a Flink development environment?