Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3218
ok, let's close this PR as the issue is actually deeper than originally
thought and can only be fixed with a new heap state backend or by locking for
queryable state queries as well
---
Github user NicoK closed the pull request at:
https://github.com/apache/flink/pull/3218
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3512
[FLINK-6008] collection of BlobServer improvements
This PR improves the following things around the `BlobServer`/`BlobCache`:
* replaces config options in `config.md` with non-deprecated ones
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3348#discussion_r105742696
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/InputGateMetrics.java
---
@@ -0,0 +1,168
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3348#discussion_r105891279
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java ---
@@ -389,11 +389,20 @@ public Task(
++counter
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3348#discussion_r105891947
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/InputGateMetrics.java
---
@@ -0,0 +1,168
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3348#discussion_r105898097
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/taskmanager/Task.java ---
@@ -389,11 +389,20 @@ public Task(
++counter
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3348
I'm wondering: is it actually useful to be able to enable/disable detailed
metric stats via `taskmanager.net.detailed-metrics` or can we enable them
always since they do not incur any overhead u
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3551
[FLINK-6064][flip6] fix BlobServer connection in TaskExecutor
The hostname used for the `BlobServer` was set to the akka address which is
invalid for this use. Instead, this adds the hostname to
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3551#discussion_r106376150
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcService.java
---
@@ -143,9 +142,17 @@ public C checkedApply(Object obj) throws
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3348
@zentol can you have a look again?
---
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3348
ok, sorry, these slipped through...
please note however, that the not-null checks in #3610 become obsolete with
this PR
---
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/2778
[hotfix] fix duplicate "ms" time unit
as in "Restart with fixed delay (1 ms ms)." in the web interface under
"Max. number of execution retries"
(org.apache.flin
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/2805
Flink 5059
Only serialise events once in RecordWriter#broadcastEvent.
While adapting this, also clean up some related APIs which are now unused or
use similar patterns.
You can merge this
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/2806
[FLINK-5066] Prevent LocalInputChannel#getNextBuffer from de-serialising
all events when looking for EndOfPartitionEvent only
LocalInputChannel#getNextBuffer de-serialises all incoming events on the
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/2805#discussion_r87965585
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
---
@@ -586,7 +588,18 @@ private boolean
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/2805#discussion_r87968119
--- Diff:
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
---
@@ -586,7 +588,18 @@ private boolean
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2764
I tried with several versions of Eclipse and Scala IDE, even with the one
claimed to work. Unfortunately, I got none of them to work.
---
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/2829
Hotfix 2016 11 18
Prevent RecordWriter#flush() from clearing the serializer twice.
Also add some documentation to RecordWriter, RecordSerializer and
SpanningRecordSerializer.
You can merge this pull
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/2829#discussion_r88649470
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/serialization/SpanningRecordSerializer.java
---
@@ -151,6 +176,15 @@ private
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2829
I don't expect this to change any behaviour as clearing the serializer
twice does not actually hurt and only wastes some resources, so FLINK-4719
should not be affected at all
---
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/2890
[hotfix] properly encapsulate the original exception in JobClient
In the job client, an exception was re-thrown without including the
original exception. This commit adds the original exception.
You
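The fix above is the standard exception-chaining pattern in Java. A minimal sketch under assumed names (the class and message below are illustrative, not Flink's actual `JobClient` code):

```java
public class ExceptionChaining {

    // Hypothetical helper mirroring the fix: re-throw while attaching
    // the original exception as the cause so its stack trace survives.
    static void rethrowWithCause(Exception original) {
        throw new RuntimeException("Job submission failed", original);
    }

    public static void main(String[] args) {
        try {
            rethrowWithCause(new IllegalStateException("connection lost"));
        } catch (RuntimeException e) {
            // The original exception remains reachable via getCause()
            System.out.println(e.getCause().getMessage()); // prints "connection lost"
        }
    }
}
```

Without the second constructor argument, `getCause()` would return `null` and the root cause would be lost, which is exactly what the hotfix addresses.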
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/2891
[FLINK-5129] make the BlobServer use a distributed file system
Previously, the BlobServer held a local copy and in case high availability
(HA)
is set, it also copied jar files to a distributed
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2764
I'll look into writing Flink programs with Eclipse and update the
documentation if needed
---
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2891
Sorry for the hassle, found a regression and added a fix plus an
appropriate test for it. Should be fine now.
---
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/2911
[FLINK-5178] allow BLOB_STORAGE_DIRECTORY_KEY to point to a distributed
file system
Previously, this was restricted to a local file system path but now we can
allow it to be distributed, too
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2911
@uce can you have a look after processing #2891 (FLINK-5129)?
---
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/2805#discussion_r91306180
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/api/writer/RecordWriter.java
---
@@ -131,35 +132,30 @@ private void sendToTarget(T
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2829
I put a bit more emphasis on that fact in the new docs. I'd say, that's
enough and after reading the docs, the difference should be clear.
---
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2806
done, and yes, the code is no longer relevant for `LocalInputChannel`
but for `PartitionRequestQueue` instead
---
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2764
Developing Flink programs still works with Eclipse (tested with Eclipse
4.6.1 and Scala IDE 4.4.1 for Scala 2.11). Alongside testing the quickstarts, I
also updated them as promised and made a switch
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2764
I wasn't able to test the Scala SBT path though, so this may need some
additional love by someone with a working SBT environment
---
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/2764#discussion_r92608383
--- Diff: docs/quickstart/java_api_quickstart.md ---
@@ -46,39 +46,79 @@ Use one of the following commands to __create a
project__:
{% highlight bash
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/2764#discussion_r92608012
--- Diff: docs/quickstart/run_example_quickstart.md ---
@@ -90,23 +92,23 @@ use it in our program. Edit the `dependencies` section
so that it looks like thi
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/2764#discussion_r92611981
--- Diff: docs/quickstart/java_api_quickstart.md ---
@@ -46,39 +46,79 @@ Use one of the following commands to __create a
project__:
{% highlight bash
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/2764#discussion_r92619301
--- Diff: README.md ---
@@ -104,25 +104,11 @@ Check out our [Setting up
IntelliJ](https://github.com/apache/flink/blob/master/
### Eclipse Scala IDE
Github user NicoK commented on the pull request:
https://github.com/apache/flink/commit/79d7e3017efe7c96e449e6f339fd7184ef3d1ba2#commitcomment-20200802
In docs/Gemfile:
In docs/Gemfile on line 20:
was it necessary to increase this dependency?
---
Github user NicoK commented on the pull request:
https://github.com/apache/flink/commit/79d7e3017efe7c96e449e6f339fd7184ef3d1ba2#commitcomment-20200919
In docs/Gemfile:
In docs/Gemfile on line 23:
seems that `./build_docs -p` is broken, i.e. it does neither enable
auto
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3683
[FLINK-6270] Port several network config parameters to ConfigOption
This ports some memory and network buffers related config options to new
`ConfigOption` instances. These include
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3683#discussion_r110106044
--- Diff:
flink-core/src/main/java/org/apache/flink/configuration/TaskManagerOptions.java
---
@@ -39,10 +39,53 @@
key
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3683#discussion_r110108276
--- Diff:
flink-runtime/src/main/scala/org/apache/flink/runtime/minicluster/LocalFlinkMiniCluster.scala
---
@@ -350,23 +350,18 @@ class LocalFlinkMiniCluster
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3708
[FLINK-6292] fix transfer.sh upload by using https
It seems the upload via http is not supported anymore.
You can merge this pull request into a Git repository by running:
$ git pull https
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3713
[FLINK-6299] make all IT cases extend from TestLogger
This way, currently executed tests and their failures are properly logged.
You can merge this pull request into a Git repository by running
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3713
Strangely, this does not yet provide logging for all test cases. After also
applying #3708, `HBaseConnectorITCase`, for example does show up in the
`flink-fast-tests-a` standard output but neither in
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3721
[FLINK-4545] replace the network buffers parameter
(based on #3708 and #3713)
Instead, allow the configuration with the following three new (more
flexible) parameters
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3713
actually, the error was in the flink-hbase `pom.xml` - please let me
include these hotfixes here as well which fix the actual error and prevent
future errors.
---
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3348
yes, that's exactly why
---
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3734
+1 (seems to work nicely, as expected)
---
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3721#discussion_r111969982
--- Diff: docs/setup/config.md ---
@@ -602,26 +612,66 @@ You have to configure `jobmanager.archive.fs.dir` in
order to archive terminated
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3512
* I removed the exposed `BlobService` from the `LibraryCacheManager`
* Also, I developed a new cleanup story that removes blobs only if there
are no tasks referring to the job ID anymore. The
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3721#discussion_r112236022
--- Diff: flink-dist/src/main/flink-bin/bin/config.sh ---
@@ -398,3 +428,106 @@ readSlaves() {
useOffHeapMemory() {
[[ "`echo ${FLINK_TM_OF
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3721
For the two tests that failed on Travis CI: they were simply killed and a
"`Killed`" appeared in their logs, which is usually an indicator that memory ran
out and the kernel killed a process
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3721#discussion_r112239687
--- Diff:
flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/TaskManagerServicesTest.java
---
@@ -0,0 +1,272 @@
+/*
+ * Licensed to
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3721#discussion_r112242842
--- Diff:
flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/TaskManagerServicesTest.java
---
@@ -0,0 +1,272 @@
+/*
+ * Licensed to
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3721#discussion_r112246105
--- Diff:
flink-runtime/src/test/java/org/apache/flink/runtime/taskexecutor/TaskManagerServicesTest.java
---
@@ -0,0 +1,272 @@
+/*
+ * Licensed to
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3721#discussion_r112246505
--- Diff:
flink-core/src/main/java/org/apache/flink/api/common/io/DelimitedInputFormat.java
---
@@ -450,7 +450,7 @@ public FileBaseStatistics getStatistics
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3721#discussion_r112246946
--- Diff:
flink-dist/src/test/java/org/apache/flink/dist/TaskManagerHeapSizeCalculationJavaBashTest.java
---
@@ -0,0 +1,306 @@
+/*
+ * Licensed to
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3721#discussion_r112248789
--- Diff: flink-dist/src/main/flink-bin/bin/config.sh ---
@@ -398,3 +428,106 @@ readSlaves() {
useOffHeapMemory() {
[[ "`echo ${FLINK_TM_OF
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3742
[FLINK-6046] Add support for oversized messages during deployment
(builds upon #3512)
This adds offloading of large data from the `TaskDeploymentDescriptor` to
the `BlobServer`, i.e
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3742
ok, between my last tests and the rebase, something must have broken - I'll
re-evaluate (merge conflicts + travis)
---
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3512
Found a race between `BlobCache#deleteAll(JobID)` and
`BlobCache#getURL(BlobKey)` now that the former is actually being used - this
needs to be fixed first before merging:
`BlobCache
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3512
After investigating a bit further, I noticed that this problem is actually
a bit bigger:
Even in `FileSystemBlobStore`, there is no guarantee that a directory will
not be deleted concurrently
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/2764
[FLINK-5008] Update quickstart documentation
This PR updates the outdated quickstart guides regarding IDE setup and the
first example.
You can merge this pull request into a Git repository by
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3290#discussion_r100511182
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobManagerServices.java
---
@@ -116,12 +116,17 @@ public static JobManagerServices
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3293
[FLINK-5745] set an uncaught exception handler for netty threads
This adds a JVM-terminating handler that logs errors from uncaught
exceptions
and terminates the process so that critical
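A JVM-terminating uncaught-exception handler of the kind this PR describes can be sketched as follows. This is an assumption-laden illustration, not Flink's actual implementation; the class name, exit code, and log format are all hypothetical:

```java
// Illustrative sketch of a fatal-exit handler for threads (e.g. netty threads)
// whose uncaught exceptions would otherwise pass silently.
public class FatalExitExceptionHandler implements Thread.UncaughtExceptionHandler {

    static final int EXIT_CODE = -17; // hypothetical exit code, not Flink's

    // Separated out so the message can be built (and tested) without exiting.
    static String describe(Thread t, Throwable e) {
        return "FATAL: uncaught exception in thread " + t.getName() + ": " + e.getMessage();
    }

    @Override
    public void uncaughtException(Thread t, Throwable e) {
        // Log before terminating; after this method returns the thread is dead.
        System.err.println(describe(t, e));
        // halt() terminates immediately without running shutdown hooks.
        Runtime.getRuntime().halt(EXIT_CODE);
    }
}
```

It would be installed per thread via `thread.setUncaughtExceptionHandler(new FatalExitExceptionHandler())`, so that a critical failure in a worker thread takes the whole process down instead of leaving it in a half-broken state.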
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3293
would be a different LOG handler though - does it make sense to have two or
is it enough to have a single one in an outer class?
---
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3293
wouldn't it be `NettyServer$FatalExitExceptionHandler` vs.
`ExecutorThreadFactory$FatalExitExceptionHandler`?
---
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3293
I was actually looking through the code to find something like this but it
seems that every class does this locally for now. Global exit codes make sense
though - also for documentation
---
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3293
@uce I'll extract the inner class and use it here as well as soon as the
final #3290 is merged
---
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3298
[FLINK-5672] add special cases for a local setup in cluster start/stop
scripts
With this PR, if all slaves refer to `"localhost"` we run the daemons from
the script itself instead of using
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3299
[FLINK-5553] keep the original throwable in PartitionRequestClientHandler
This way, when checking for a previous error in any input channel, we can
throw a meaningful exception instead of the
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3298#discussion_r101008195
--- Diff: flink-dist/src/main/flink-bin/bin/stop-cluster.sh ---
@@ -25,14 +25,30 @@ bin=`cd "$bin"; pwd`
# Stop TaskManager instance(s)
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3298#discussion_r101008339
--- Diff: flink-dist/src/main/flink-bin/bin/stop-cluster.sh ---
@@ -25,14 +25,30 @@ bin=`cd "$bin"; pwd`
# Stop TaskManager instance(s)
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3293
@StephanEwen already did when #3290 got in
---
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3308
[FLINK-5796] fix some broken links in the docs
this probably also applies to the release-1.2 docs
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3309
[FLINK-5277] add unit tests for ResultPartition#add() in case of failures
This verifies that the given network buffer is recycled as expected and that
no notifiers are called upon failures to add
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3322
[FLINK-4813][flink-test-utils] make the hadoop-minikdc dependency optional
This removes the need to add the `maven-bundle-plugin` for most
projects using `flink-test-utils`.
Instead
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3331
[FLINK-5814] fix packaging flink-dist in unclean source directory
If `/build-target` already existed, running `mvn package` for
flink-dist would create a symbolic link inside `/build-target
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3322
sure, that makes sense
actually, I only had to add it to the flink-test-utils sub-project since
all the others already included the bundler :)
---
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3341
Thanks, this looks like a really nice addition and simplifies the code a
lot.
---
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3348
[FLINK-5090] [network] Add metrics for details about inbound/outbound
network queues
These metrics are optimised to go through the channels only once in order to
gather all metrics, i.e. min, max
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3348#discussion_r102206831
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/InputGateMetrics.java
---
@@ -0,0 +1,167
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3348#discussion_r102207017
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/ResultPartitionMetrics.java
---
@@ -0,0 +1,136
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3348#discussion_r102223763
--- Diff:
flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java ---
@@ -227,6 +227,14 @@
public static final String
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3348
Right, that was missing indeed. I also found some bugs and useful
extensions / inconsistencies that I fixed.
---
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3467
[FLINK-4545] preparations for removing the network buffers parameter
This PR includes some preparations for following PRs that ultimately lead
to removing the network buffer parameter that was hard
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3467
Hi @zhijiangW,
actually, the solution I am working on is to replace the network buffers
parameter by something like "max memory in percent" and "min MB to use". For
this to not
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3480
[FLINK-4545] use size-restricted LocalBufferPool instances for network
communication
Note: this PR is based on #3467 and PR 2 of 3 in a series to get rid of the
network buffer parameter
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3499
[FLINK-6005] fix some ArrayList initializations without initial size
This is just to give some ArrayList initializations an initial size value
to reduce tests overhead.
You can merge this pull
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3467#discussion_r105143643
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/LocalBufferPool.java
---
@@ -265,11 +281,15 @@ public String toString
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/3480
added the requested changes and successfully rebased on the newest master
due to conflicts
---
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2891
despite the tests completing successfully, I do still need to check a few
things:
- `BlobService#getURL()` may now return a URL for a distributed file
system, however:
- related code, e.g
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3056
[FLINK-3150] make YARN container invocation configurable
By using the `yarn.container-start-command-template` configuration
parameter, the Flink start command can be altered/extended. By default, it
Github user NicoK commented on a diff in the pull request:
https://github.com/apache/flink/pull/3056#discussion_r94798020
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/BootstrapTools.java
---
@@ -347,43 +351,88 @@ public static String
Github user NicoK closed the pull request at:
https://github.com/apache/flink/pull/2891
---
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2891
I need to adapt a few things and choose a different approach - I'll re-open
later
---
Github user NicoK closed the pull request at:
https://github.com/apache/flink/pull/2911
---
Github user NicoK commented on the issue:
https://github.com/apache/flink/pull/2911
I need to adapt a few things and choose a different approach - I'll re-open
later
---
GitHub user NicoK opened a pull request:
https://github.com/apache/flink/pull/3076
[FLINK-5129] make the BlobServer use a distributed file system
Make the BlobCache use the BlobServer's distributed file system in HA mode:
previously even in HA mode and if the cache has acce
Github user NicoK closed the pull request at:
https://github.com/apache/flink/pull/3076
---