Interest in Python seems to be on the rise, so this is a good discussion to
have :)
So far there seems to be agreement that Beam's approach towards Python and
other non-JVM language support (language SDK, portability layer, etc.) is
the right direction? Specification and execution are native Python an
Great to see this discussion seeded! The problems you face with the
Zeppelin integration are also affecting other downstream projects, like
Beam.
We just enabled the savepoint restore option in RemoteStreamEnvironment [1]
and that was more difficult than it should have been. The main issue is that
enviro
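For context, what the change in [1] enables looks roughly like the following. This is only a sketch against the Flink 1.7-era API and assumes the new constructor overload that accepts SavepointRestoreSettings; the host, port, jar path, and savepoint path are placeholders, not values from this thread:

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.jobgraph.SavepointRestoreSettings;
import org.apache.flink.streaming.api.environment.RemoteStreamEnvironment;

public class SavepointRestoreSketch {
    public static void main(String[] args) throws Exception {
        // Restore state from an existing savepoint; the path is hypothetical.
        // The second argument allows state that no longer maps to an operator.
        SavepointRestoreSettings restore =
                SavepointRestoreSettings.forPath("hdfs:///savepoints/sp-1", true);

        // The point of [1]: the savepoint settings can be handed to the
        // environment at construction time instead of being patched into
        // the client afterwards.
        RemoteStreamEnvironment env = new RemoteStreamEnvironment(
                "jobmanager-host", 6123,
                new Configuration(),
                new String[]{"/path/to/job.jar"},
                null,       // no extra classpath URLs
                restore);

        env.fromElements(1, 2, 3).print();
        env.execute("restored-job");
    }
}
```

The benefit is that tooling built on RemoteStreamEnvironment (Zeppelin, Beam runners) can trigger a savepoint restore without reaching into client internals.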
>>> I'm not so sure whether the user should be able to define where the job
runs (in your example Yarn). This is actually independent of the job
development and is something which is decided at deployment time.
Users don't need to specify the execution mode programmatically. They can also
pass the exec
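The point about deciding the target at deployment time can be sketched as follows. The class name is made up, but getExecutionEnvironment() and the `flink run -m yarn-cluster` CLI flag are the standard Flink 1.7-era mechanisms:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DeploymentAgnosticJob {
    public static void main(String[] args) throws Exception {
        // getExecutionEnvironment() resolves the context the job runs in:
        // a local environment when launched from the IDE, the cluster's
        // environment when the jar is submitted via the CLI, e.g.
        //   flink run -m yarn-cluster job.jar
        // The program itself never mentions YARN.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("a", "b", "c").print();
        env.execute("deployment-agnostic-job");
    }
}
```

This keeps job development independent of where the job is deployed, which is the separation argued for above.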
Till Rohrmann created FLINK-11208:
-
Summary: Decrease build/testing time of modules
Key: FLINK-11208
URL: https://issues.apache.org/jira/browse/FLINK-11208
Project: Flink
Issue Type: Improvem
+1
- checked signatures and checksums
- checked that no dependencies have changed between 1.7.0 and 1.7.1
- built Flink from source release with Hadoop 2.8.5 and Scala 2.12
- started standalone cluster with multiple TMs and executed streaming and
batch workloads on it
- checked that the web UI is w
+1
- checked signatures and checksums
- checked that no dependency changes have occurred between 1.6.2 and 1.6.3
- built Flink from source release with Hadoop 2.8.5
- executed all tests via `mvn verify -Dhadoop.version=2.8.5`
- started standalone cluster and tried out WindowJoin and WordCount (bat
+1
- checked signatures and checksums
- checked that no dependency changes have occurred between 1.5.5 and 1.5.6
- built Flink from source release with Hadoop 2.8.5
- ran all tests via `mvn verify`
- started a standalone cluster with multiple TaskManagers and ran batch and
streaming examples
- Veri
Dear community,
this is the weekly community update thread #51. Please post any news and
updates you want to share with the community to this thread.
# Flink Forward China is happening
This week the Flink community meets in Beijing for the first Flink Forward
China which takes place from the 20t
You are probably right that we have code duplication when it comes to the
creation of the ClusterClient. This should be reduced in the future.
I'm not so sure whether the user should be able to define where the job
runs (in your example Yarn). This is actually independent of the job
development an
Nico Kruber created FLINK-11207:
---
Summary: Update Apache commons-compress from 1.4.1 to 1.18
Key: FLINK-11207
URL: https://issues.apache.org/jira/browse/FLINK-11207
Project: Flink
Issue Type: B
+1
- manually checked the commit diff and could not spot any issues
- ran `mvn clean verify` locally with success
- ran a couple of e2e tests locally with success
Thanks,
Timo
On 18.12.18 at 11:28, Chesnay Schepler wrote:
FLINK-10874 and FLINK-10987 were fixed for 1.7.0.
I will remove FLINK-
IIRC this exception has always been there when running without hadoop.
On 19.12.2018 18:36, Aljoscha Krettek wrote:
+1
- signatures/hashes are ok
- manually checked the logs after running an example on a local cluster
There is this exception in the client log when running without Hadoop in the
ambition created FLINK-11206:
Summary: sql statement parser enhancement
Key: FLINK-11206
URL: https://issues.apache.org/jira/browse/FLINK-11206
Project: Flink
Issue Type: Improvement
Co