Github user steveloughran commented on the issue:
https://github.com/apache/flink/pull/5663
you don't need to shade it, just exclude it explicitly in your pom. It came
in with kinesis-video, and excluding it only stops that one feature from working.
[AWS 1488](https://github.com/aw
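The exclusion being described would look roughly like the snippet below in the consuming pom. The artifact names are illustrative assumptions, since the truncated comment does not name the exact dependency:

```xml
<!-- Sketch only: exclude the transitive artifact instead of shading it.
     Both artifactIds here are guesses, not taken from the PR. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-aws</artifactId>
  <version>${hadoop.version}</version>
  <exclusions>
    <exclusion>
      <!-- hypothetical: the dependency that kinesis-video dragged in -->
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-kinesisvideo</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```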
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/flink/pull/5521#discussion_r169604725
--- Diff: flink-core/src/main/java/org/apache/flink/api/common/io/FileInputFormat.java ---
@@ -819,6 +819,10 @@ public void open(FileInputSplit
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/flink/pull/5521#discussion_r169275364
--- Diff: flink-core/src/main/java/org/apache/flink/api/common/io/FileInputFormat.java ---
@@ -706,6 +700,9 @@ public void open(FileInputSplit
Github user steveloughran commented on the issue:
https://github.com/apache/flink/pull/5521
This is very inefficient against an object store, potentially adding a few
hundred millis and $0.01 per file. I would simply catch FileNotFoundExceptions
raised in the open() call and treat
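A minimal sketch of the pattern being recommended, written against Hadoop's FileSystem API for concreteness (the helper class and method names are made up): open the file directly and treat FileNotFoundException as the miss, rather than paying for a separate exists() probe, which against S3-like stores is an extra HEAD request per file:

```java
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Illustrative helper, not Flink code: open first, handle the miss after. */
public final class OpenWithoutProbe {

    /** Returns an open stream, or null if the file does not exist. */
    static FSDataInputStream openIfPresent(FileSystem fs, Path path) throws IOException {
        try {
            // A single open() call; no separate existence check beforehand.
            return fs.open(path);
        } catch (FileNotFoundException missing) {
            // A missing file is an expected case here, not a failure.
            return null;
        }
    }
}
```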
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/flink/pull/5521#discussion_r169132404
--- Diff: flink-core/src/main/java/org/apache/flink/api/common/io/FileInputFormat.java ---
@@ -691,6 +691,12 @@ public void open(FileInputSplit
Github user steveloughran commented on the issue:
https://github.com/apache/flink/pull/4926
creating YarnConfiguration & HdfsConfiguration through some dynamic
classloading is enough to force in these files & configs underneath your own
Configurations. You shouldn't be r
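The mechanism behind this: both classes register their default XML files in static initializers, so merely loading them is enough. A sketch, assuming the standard Hadoop class names (the wrapper class itself is illustrative):

```java
import org.apache.hadoop.conf.Configuration;

/** Illustrative wrapper; the two class names inside are the real Hadoop ones. */
public final class HadoopDefaultsLoader {

    static void forceInYarnAndHdfsDefaults(ClassLoader loader) {
        for (String name : new String[] {
                "org.apache.hadoop.yarn.conf.YarnConfiguration",
                "org.apache.hadoop.hdfs.HdfsConfiguration"}) {
            try {
                // Initializing the class runs its static block, which calls
                // Configuration.addDefaultResource("yarn-default.xml") etc.
                Class.forName(name, true, loader);
            } catch (ClassNotFoundException absent) {
                // YARN/HDFS jars not on the classpath: nothing to register.
            }
        }
    }

    public static void main(String[] args) {
        forceInYarnAndHdfsDefaults(HadoopDefaultsLoader.class.getClassLoader());
        // Configurations created from here on see the YARN/HDFS defaults too.
        Configuration conf = new Configuration();
        System.out.println(conf.get("yarn.resourcemanager.address"));
    }
}
```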
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/flink/pull/4926#discussion_r148328894
--- Diff: flink-yarn/src/main/java/org/apache/flink/yarn/YarnApplicationMasterRunner.java ---
@@ -265,7 +266,8 @@ protected int runApplicationMaster
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/flink/pull/4926#discussion_r148328550
--- Diff: flink-filesystems/flink-hadoop-fs/src/main/java/org/apache/flink/runtime/util/HadoopUtils.java ---
@@ -44,7 +44,9 @@
public
Github user steveloughran commented on the issue:
https://github.com/apache/flink/pull/4926
ah. everything lives in Configuration, and it has some historical structure
based on how things all got split up from one big hadoop-default;
1. You can register a new
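The truncated item plausibly continues "...default resource": Configuration.addDefaultResource() registers an extra XML file that every Configuration instance, existing and future, will load. A small sketch with a hypothetical resource name:

```java
import org.apache.hadoop.conf.Configuration;

/** Sketch; "my-component-default.xml" is a hypothetical classpath resource. */
public final class RegisterDefaultResource {
    public static void main(String[] args) {
        // Registers the resource globally for all Configuration instances.
        Configuration.addDefaultResource("my-component-default.xml");

        Configuration conf = new Configuration();
        // Keys defined in the registered resource are now visible.
        System.out.println(conf.get("my.component.option", "unset"));
    }
}
```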
Github user steveloughran commented on the issue:
https://github.com/apache/flink/pull/4818
1. I hope you pick up Hadoop 2.8.1 for this, as it's got a lot of the
optimisations
1. And equally importantly: a later SDK
1. Though not one of the more recent 1.11 SDKs, where su
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/flink/pull/4818#discussion_r144372829
--- Diff: flink-filesystems/flink-s3-fs-hadoop/src/main/java/org/apache/flink/fs/s3hadoop/S3FileSystemFactory.java ---
@@ -0,0 +1,145
Github user steveloughran commented on the issue:
https://github.com/apache/flink/pull/4397
We've long experimented with the best way to do this in Hadoop, and I think
we're converging on moving off any form of enum to some `hasFeature(String)`
predicate. Why? Lets you han
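What such a string-keyed probe looks like in practice; the interface and capability key below are illustrative, though Hadoop did later ship this idea as `StreamCapabilities.hasCapability(String)`:

```java
/** Hypothetical interface, named after the probe in the comment above. */
interface FeatureProbe {
    /** True iff this stream/filesystem supports the named capability. */
    boolean hasFeature(String capability);
}

final class FeatureProbeExample {
    // "fs.capability.hflush" is a made-up key for illustration.
    static void flushIfSupported(FeatureProbe stream) {
        if (stream.hasFeature("fs.capability.hflush")) {
            // Invoke the optional API only when it is advertised.
        }
        // Unknown keys simply return false, so new capabilities can ship
        // without callers recompiling against a grown enum.
    }
}
```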