Thanks for reporting and providing a fix, Till! We have also fixed two
issues with the new job manager web frontend and pushed them to the
release-0.10 branch. Please review these changes in the new release
candidate.
On Wed, Oct 28, 2015 at 6:55 PM, Till Rohrmann wrote:
> -1 from my side. I jus
This vote is cancelled in favor of a new RC.
On Fri, Oct 30, 2015 at 9:03 AM, Maximilian Michels wrote:
> Thanks for reporting and providing a fix, Till! We have also fixed two
> issues with the new job manager web frontend and pushed them to the
> release-0.10 branch. Please review these change
Please vote on releasing the following candidate as Apache Flink version
0.10.0:
The commit to be voted on:
2cd5a3c05ceec7bb9c5969c502c2d51b1ec00d0c
Branch:
release-0.10.0-rc3 (see
https://git1-us-west.apache.org/repos/asf/flink/?p=flink.git)
The release artifacts to be voted on can be found at:
Fabian Hueske created FLINK-2944:
Summary: Collect, expose and display operator-specific stats
Key: FLINK-2944
URL: https://issues.apache.org/jira/browse/FLINK-2944
Project: Flink
Issue Type:
For testing, please refer to this document:
https://docs.google.com/document/d/1OtiAwILpnIwCqPF1Sk_8EcXuJOVc4uYtlP4i8m2c9rg/edit
On Fri, Oct 30, 2015 at 9:05 AM, Maximilian Michels wrote:
> Please vote on releasing the following candidate as Apache Flink version
> 0.10.0:
>
> The commit to be v
I'm sorry, but I have to give a -1 for this RC.
Starting a Scala 2.11 build (hadoop2 and hadoop24) with
./bin/start-local.sh fails with a ClassNotFoundException:
java.lang.NoClassDefFoundError:
org/apache/flink/shaded/org/apache/curator/RetryPolicy
at
org.apache.flink.runtime.jobmanager.Jo
Hmpf. Just looked into this. In the Hadoop 2.X Scala 2.11 jar, Curator is
not shaded. Thus, it fails to load the shaded classes. After we fix this,
we will have to create a new RC.
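The diagnosis above amounts to a missing relocation: in the Hadoop 2.X Scala 2.11 jar, Curator classes were not moved under Flink's shaded namespace, so the runtime's references to the shaded package fail to resolve. A minimal sketch of the kind of maven-shade-plugin relocation involved (the exact coordinates and structure of Flink's pom are assumed, not quoted from it; the shaded package name matches the one in the stack trace above):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <!-- Rewrite Curator's package so it lives under Flink's shaded
           namespace; without this, code compiled against the shaded
           name throws NoClassDefFoundError at runtime. -->
      <relocation>
        <pattern>org.apache.curator</pattern>
        <shadedPattern>org.apache.flink.shaded.org.apache.curator</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

For such a fix, inspecting the produced jar (e.g. listing its entries and checking that Curator classes appear only under the shaded path) is a quick way to confirm the relocation took effect for every build profile.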
On Fri, Oct 30, 2015 at 11:57 AM, Fabian Hueske wrote:
> I'm sorry, but I have to give a -1 for this RC.
>
> Starti
Fabian Hueske created FLINK-2945:
Summary: Shutting down a local Flink instance on Windows doesn't
clean up blob cache
Key: FLINK-2945
URL: https://issues.apache.org/jira/browse/FLINK-2945
Project: Fl
This vote is cancelled in favor of a new RC.
On Fri, Oct 30, 2015 at 12:06 PM, Maximilian Michels wrote:
> Hmpf. Just looked into this. In the Hadoop 2.X Scala 2.11 jar, Curator is
> not shaded. Thus, it fails to load the shaded classes. After we fix this,
> we will have to create a new RC.
>
>
Please vote on releasing the following candidate as Apache Flink version
0.10.0:
The commit to be voted on:
6044b7f0366deec547022e4bc40c49e1b1c83f28
Branch:
release-0.10.0-rc4 (see
https://git1-us-west.apache.org/repos/asf/flink/?p=flink.git)
The release artifacts to be voted on can be found at:
We can continue testing now:
https://docs.google.com/document/d/1keGYj2zj_AOOKH1bC43Xc4MDz0eLhTErIoxevuRtcus/edit
On Fri, Oct 30, 2015 at 3:49 PM, Maximilian Michels wrote:
> Please vote on releasing the following candidate as Apache Flink version
> 0.10.0:
>
> The commit to be voted on:
> 6044b
The logging of the TaskManager stops 3 seconds before the JobManager
detects that the connection to the TaskManager has failed. If the clocks are
even remotely in sync and the TaskManager is still running, then we should
also see logging statements for the time after the connection has failed.
Therefore,
Hi Vasia,
I had a look at your new implementation and have a few ideas for
improvements.
1) Sending out the input iterator as you do in the last GroupReduce is
quite dangerous and does not give a benefit compared to collecting all
elements. Even though it is an iterator, it needs to be completely
I looked up if the Checkstyle plugin would also support tabs with a
fixed line length. Indeed, this is possible because a tab can be
mapped to a fixed number of spaces.
I've modified the default Google Style Checkstyle file. I changed the
indentation to tabs (mapped to 2 spaces) and increased the line length
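For reference, a minimal sketch of the Checkstyle configuration idea described above. Checkstyle's `tabWidth` property on the `Checker` module maps each tab to a fixed number of columns, which is what makes line-length checking work with tab indentation; the module layout follows Checkstyle's standard format, and the concrete values here are illustrative, not the actual modified file:

```xml
<module name="Checker">
  <!-- Count each tab as two columns when computing line length -->
  <property name="tabWidth" value="2"/>
  <module name="TreeWalker">
    <!-- Illustrative increased limit; the actual chosen value is not
         stated in this excerpt -->
    <module name="LineLength">
      <property name="max" value="120"/>
    </module>
  </module>
</module>
```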
Hi Fabian,
thanks so much for looking into this so quickly :-)
One update I have to make is that I tried running a few experiments with
this on a 6-node cluster. The current implementation gets stuck at
"Rebuilding Workset Properties" and never finishes a single iteration.
Running the plan of one
Timo Walther created FLINK-2946:
Summary: Add orderBy() to Table API
Key: FLINK-2946
URL: https://issues.apache.org/jira/browse/FLINK-2946
Project: Flink
Issue Type: New Feature
Comp
Chiwan Park created FLINK-2947:
Summary: Coloured Scala Shell
Key: FLINK-2947
URL: https://issues.apache.org/jira/browse/FLINK-2947
Project: Flink
Issue Type: Improvement
Components:
Do Le Quoc created FLINK-2948:
Summary: Cannot compile example code in SVM quickstart guide
Key: FLINK-2948
URL: https://issues.apache.org/jira/browse/FLINK-2948
Project: Flink
Issue Type: Bug
We can of course inject an optional ReduceFunction (or GroupReduce, or
combinable GroupReduce) to reduce the size of the work set.
I suggested removing the GroupReduce function because it only collected
all messages into a single record by emitting the input iterator, which is
quite dangerous. A
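The trade-off discussed above can be sketched outside of Flink's actual API: a pairwise, combinable reduce shrinks the work set incrementally and keeps each element small, whereas emitting the input iterator only defers materializing the whole group. This is a simplified stand-in for Flink's `ReduceFunction`, not its real interface, and the message values are made up for illustration:

```java
import java.util.Arrays;
import java.util.List;

public class CombineSketch {
    // Simplified stand-in for a combinable ReduceFunction<T>: merges two
    // values into one, so the runtime can apply it incrementally (and in
    // a combiner) instead of buffering the entire group.
    interface Reduce<T> {
        T reduce(T a, T b);
    }

    // Fold a group pairwise; only one accumulator is live at a time,
    // unlike collecting all elements into a single record.
    static <T> T reduceAll(List<T> values, Reduce<T> fn) {
        T acc = values.get(0);
        for (int i = 1; i < values.size(); i++) {
            acc = fn.reduce(acc, values.get(i));
        }
        return acc;
    }

    public static void main(String[] args) {
        // Hypothetical per-vertex messages being combined by summation.
        List<Long> messages = Arrays.asList(3L, 5L, 7L);
        long combined = reduceAll(messages, (a, b) -> a + b);
        System.out.println(combined); // 15
    }
}
```

Whether the reduce step pays off then depends on how much the messages actually shrink under combination, which is what the optional-injection idea above leaves to the user.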
Suneel Marthi created FLINK-2949:
Summary: Add method 'writeSequencefile' to DataSet
Key: FLINK-2949
URL: https://issues.apache.org/jira/browse/FLINK-2949
Project: Flink
Issue Type: Improvem
Chiwan Park created FLINK-2950:
Summary: Markdown presentation problem in SVM documentation
Key: FLINK-2950
URL: https://issues.apache.org/jira/browse/FLINK-2950
Project: Flink
Issue Type: Bug