\o/ \o/ \o/
Thank you Max!
On Nov 13, 2015 2:23 AM, "Nick Dimiduk" wrote:
> Woo hoo!
Woo hoo!
On Thu, Nov 12, 2015 at 3:01 PM, Maximilian Michels wrote:
> Thanks for voting! The vote passes.
Awesome news! Thanks a lot for driving the release, Max! =)
On Thu, Nov 12, 2015 at 3:59 PM, fhueske wrote:
> Thanks Max! :-)
>
>
> From: Maximilian Michels
> Sent: Friday, 13 November 2015 00:02
> To: dev@flink.apache.org
> Subject: [VOTE] [RESULT] Release Apache Flink 0.10.0 (release-0.10.
Thanks Max! :-)
From: Maximilian Michels
Sent: Friday, 13 November 2015 00:02
To: dev@flink.apache.org
Subject: [VOTE] [RESULT] Release Apache Flink 0.10.0 (release-0.10.0-rc8)
Thanks for voting! The vote passes.
The following votes have been cast:
+1 votes: 7
Stephan
Aljoscha
Robert
Max
Chiwan*
Henry
Fabian
* non-binding
-1 votes: none
I'll upload the release artifacts and release the Maven artifacts.
Once the changes are effective, the community may announce the
r
Hi everybody,
with 0.10.0 almost being released, I started writing release notes for the
Flink blog.
Please find the current draft here:
https://docs.google.com/document/d/1ULZAdxwneZAldhJ69tB3UEvjJQhS-ZASN5mdtumtJ48/edit?usp=sharing
Everybody has permissions to comment the draft. Please let me k
Ah, no problem.
Glad you could resolve your problem :-)
Thanks for reporting back.
Cheers, Fabian
2015-11-12 17:42 GMT+01:00 Kashmar, Ali :
> So the problem wasn’t in Flink after all. It turns out the data I was
> receiving at the socket was not complete. So I went back and looked at the
> way
So the problem wasn’t in Flink after all. It turns out the data I was
receiving at the socket was not complete. So I went back and looked at the
way I’m sending data to the socket and realized that the socket is closed
before sending all data. I just needed to flush the stream before closing
the so
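For anyone hitting the same symptom: the fix Ali describes (flushing the output stream before closing the socket) can be sketched in plain Java. The class and method names below are illustrative, and a ByteArrayOutputStream stands in for socket.getOutputStream() so the sketch runs without a network peer — it is not code from this thread.

```java
import java.io.BufferedWriter;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;

public class FlushBeforeClose {

    /**
     * Writes one record through a buffered writer and reports how many bytes
     * reached the underlying sink before vs. after flush(). With a real
     * socket, the sink would be socket.getOutputStream().
     */
    static int[] bytesBeforeAndAfterFlush(String record) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        BufferedWriter writer =
                new BufferedWriter(new OutputStreamWriter(sink), 8192);
        writer.write(record);
        int before = sink.size(); // still 0: the record sits in the buffer
        writer.flush();           // push buffered bytes down to the sink
        int after = sink.size();  // now the full record has arrived
        writer.close();
        return new int[] {before, after};
    }

    public static void main(String[] args) throws IOException {
        int[] counts = bytesBeforeAndAfterFlush("last record\n");
        System.out.println("before flush: " + counts[0]
                + " bytes, after flush: " + counts[1] + " bytes");
    }
}
```

If the socket is closed abruptly (or the process exits) before flush() runs, the bytes still sitting in the buffer are simply lost — which matches the incomplete data observed at the receiving end.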
Hilmi Yildirim created FLINK-3007:
Summary: Implement a parallel version of the Hidden Markov Model
Key: FLINK-3007
URL: https://issues.apache.org/jira/browse/FLINK-3007
Project: Flink
Issue
This seems to be an issue only occurring when using Java 8 lambdas, which is
still super annoying but may not be a release blocker.
Gyula Fóra wrote (on 12 Nov 2015, Thu, 15:38):
> I am not sure if this issue affects the release or maybe I am just doing
> something wrong: https://issue
> On 12 Nov 2015, at 15:38, Gyula Fóra wrote:
>
> I am not sure if this issue affects the release or maybe I am just doing
> something wrong: https://issues.apache.org/jira/browse/FLINK-3006
I would address it in a bug fix release.
– Ufuk
I am not sure if this issue affects the release or maybe I am just doing
something wrong: https://issues.apache.org/jira/browse/FLINK-3006
Fabian Hueske wrote (on 12 Nov 2015, Thu, 14:51):
> The failing tests on Windows should *NOT* block the release, IMO. ;-)
>
> 2015-11-12 14:48 GMT
Thanks, I opened the JIRA: https://issues.apache.org/jira/browse/FLINK-3006
This might affect the release as well.
Timo Walther wrote (on 12 Nov 2015, Thu, 14:56):
> This looks like a bug. Can you open an issue for that? I will look into
> it later.
>
> Regards,
> Timo
>
>
> On 12.11
Gyula Fora created FLINK-3006:
Summary: TypeExtractor fails on custom type
Key: FLINK-3006
URL: https://issues.apache.org/jira/browse/FLINK-3006
Project: Flink
Issue Type: Bug
Componen
This looks like a bug. Can you open an issue for that? I will look into
it later.
Regards,
Timo
On 12.11.2015 13:16, Gyula Fóra wrote:
Hey,
I get a weird error when I try to execute my job on the cluster. Locally
this works fine but running it from the command line fails during
typeextraction
The failing tests on Windows should *NOT* block the release, IMO. ;-)
2015-11-12 14:48 GMT+01:00 Fabian Hueske :
> +1
>
> I checked:
> 1) on Windows 10 with Cygwin
> - building from source without tests (mvn -DskipTests clean install) works
> - building from source with tests (mvn clean install)
+1
I checked:
1) on Windows 10 with Cygwin
- building from source without tests (mvn -DskipTests clean install) works
- building from source with tests (mvn clean install) fails: FLINK-2757
- start/stop scripts (start-local.sh, start-local-streaming.sh,
stop-local.sh) work
- submitting example job
Hi Robert,
Thank you for the reply! At the moment we just "play" with Neo4j and
Flink but the InputFormat shall be available in Flink eventually.
Concerning the license: I did not think of that, but yes, I can make it
available in Maven Central. I just need to find out how to do this.
I cre
Thanks Fabian :)
Fabian Hueske wrote (on 12 Nov 2015, Thu, 14:03):
> Hi Gyula,
>
> I just checked with jconsole that the memory allocation is correct.
> However, the log message is a bit misleading. In case of the streaming
> mode, the managed memory is lazily allocated and the logge
Hi Gyula,
I just checked with jconsole that the memory allocation is correct.
However, the log message is a bit misleading. In case of the streaming
mode, the managed memory is lazily allocated and the logged amount is an
upper bound.
Cheers, Fabian
2015-11-12 13:37 GMT+01:00 Gyula Fóra :
> Hey
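For reference, the ~10.6 GB out of 16 GB that Gyula observed roughly matches the default managed-memory fraction of 0.7 (the fraction is applied after network buffers and other overhead are subtracted, hence slightly less than 11.2 GB). A sketch of the relevant flink-conf.yaml entries — key names assumed from the 0.10-era configuration, so double-check against the docs for your version:

```yaml
# flink-conf.yaml -- sketch, assuming the 0.10-era configuration keys
taskmanager.heap.mb: 16384

# Fraction of the remaining heap handed to Flink's managed memory.
# 0.7 was the documented default at the time; lower it if streaming
# jobs should leave more heap to user code.
taskmanager.memory.fraction: 0.7
```

As Fabian notes above, in streaming mode this amount is only an upper bound that is allocated lazily, so the logged number does not mean the memory is actually taken up front.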
Hey guys,
Is it normal that when I start the cluster with start-cluster-streaming.sh
out of the 16gb tm memory 10.6 gb becomes flink managed? (I get pretty much
the same number when I use start-cluster.sh)
I thought that Flink would only use a very small fraction in streaming mode.
Cheers,
Gyula
Hey,
I get a weird error when I try to execute my job on the cluster. Locally
this works fine but running it from the command line fails during
typeextraction:
input1.union(input2, input3).map(Either::Left).returns(eventOrLongType);
This fails when trying to extract the output type from the map
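A plain-Java illustration (no Flink dependency) of why a type extractor can recover generic types from anonymous classes but typically not from Java 8 lambdas or method references: the classes generated for lambdas usually expose no parameterized supertype via reflection. The names below are illustrative only; in Flink the practical workaround is the explicit .returns(...) call already shown in the snippet above.

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.function.Function;

public class LambdaTypeErasure {

    /**
     * Returns the parameterized interface signature visible via reflection,
     * or null if the class exposes none (as lambda classes typically don't).
     */
    static String genericSignature(Function<?, ?> f) {
        for (Type t : f.getClass().getGenericInterfaces()) {
            if (t instanceof ParameterizedType) {
                return t.toString();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Lambda: the generated class implements the raw Function interface,
        // so the Integer/String type arguments are not recoverable here.
        Function<Integer, String> lambda = i -> "v" + i;

        // Anonymous class: the compiler records the parameterized supertype,
        // so reflection can see Function<Integer, String>.
        Function<Integer, String> anon = new Function<Integer, String>() {
            @Override public String apply(Integer i) { return "v" + i; }
        };

        System.out.println("lambda:    " + genericSignature(lambda));
        System.out.println("anonymous: " + genericSignature(anon));
    }
}
```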
Sorry for the delay.
So the plan of this work is to add a neo4j connector into Flink, right?
While looking at the pom files of neo4j, I found that it is GPLv3-licensed,
and Apache projects cannot depend on or link with GPL code [1].
So we cannot make the module part of the Flink source.
However, its ac
+1 for the proposed changes. But why not always create a snapshot on
shutdown? Does that break any assumptions in the checkpointing
interval? I see that if the user has checkpointing disabled, we can
just create a fake snapshot.
On Thu, Nov 12, 2015 at 9:56 AM, Gyula Fóra wrote:
> Yes, I agree wi
IMHO it’s not possible to have streaming/batch specific ExecutionConfig since
the user functions share a common interface, i.e.
getRuntimeContext().getExecutionConfig() simply returns the same type for both.
What could be done is to migrate batch/streaming specific stuff to the
ExecutionEnviron
+1 for separating concerns by having a StreamExecutionConfig and a
BatchExecutionConfig with inheritance from ExecutionConfig for general
options. Not sure about the pre-flight and runtime options. I think
they are ok in one config.
On Wed, Nov 11, 2015 at 1:24 PM, Robert Metzger wrote:
> I think
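The split discussed in this thread can be sketched as plain classes. These are hypothetical names taken from the proposal, not actual Flink API: general pre-flight options stay in a base ExecutionConfig, each API subclasses it with its own knobs, and a covariant getter could then hand user functions the specific type — addressing Max's point about the shared interface.

```java
// Sketch of the proposed layout -- hypothetical classes, not Flink code.

/** General options shared by batch and streaming. */
class ExecutionConfig {
    private int parallelism = 1;
    int getParallelism() { return parallelism; }
    void setParallelism(int p) { parallelism = p; }
}

/** Streaming-only options live in the streaming subclass. */
class StreamExecutionConfig extends ExecutionConfig {
    private long checkpointIntervalMs = -1; // -1 = checkpointing disabled
    long getCheckpointInterval() { return checkpointIntervalMs; }
    void setCheckpointInterval(long ms) { checkpointIntervalMs = ms; }
}

/** Batch-leaning options live in the batch subclass. */
class BatchExecutionConfig extends ExecutionConfig {
    private boolean objectReuse = false;
    boolean isObjectReuse() { return objectReuse; }
    void enableObjectReuse() { objectReuse = true; }
}

public class ExecutionConfigSketch {
    public static void main(String[] args) {
        StreamExecutionConfig conf = new StreamExecutionConfig();
        conf.setParallelism(4);           // inherited general option
        conf.setCheckpointInterval(5000); // streaming-specific option
        System.out.println("parallelism=" + conf.getParallelism()
                + ", checkpointInterval=" + conf.getCheckpointInterval());
    }
}
```

Whether pre-flight and runtime options should also be separated is left open here, matching the thread's undecided state.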
Hi,
you can do it using the register* methods on StreamExecutionEnvironment. So,
for example:
// set up the execution environment
final StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
env.registerType(InputType.class);
env.registerType(MicroModel.class);
I
Yes, I agree with you.
Once we have the graceful shutdown we can make this happen fairly simply
with the mechanism you described :)
Gyula
Stephan Ewen wrote (on 11 Nov 2015, Wed, 15:43):
> I think you are touching on something important here.
>
> There is a discussion/PullRequest a