+1, also makes sense to me, Vasia.
+1 for one example project but let's also create a staging examples
project. Otherwise things might get mixed up.
On Wed, Jul 29, 2015 at 2:28 PM, Andra Lungu wrote:
> Makes perfect sense, Stephan, as long as there is a separate folder for
> each (stating the o
py4j looks really nice and the communication works in both directions. There
is also another Python-to-Java communication library called javabridge. I
think it is a pity we chose to implement a proprietary protocol for the
network communication of the Python API. This could have been abstracted
more nice
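For context, here is a minimal sketch of the Java side of a py4j setup; the
class and method names are purely illustrative and not Flink code:

    import py4j.GatewayServer;

    // Hypothetical entry point that a Python process could call through
    // py4j's JavaGateway; everything here is illustrative.
    public class EchoEntryPoint {

        public String echo(String message) {
            return "Java received: " + message;
        }

        public static void main(String[] args) {
            // Start a gateway on py4j's default port (25333) so a Python
            // client can obtain a proxy to this EchoEntryPoint instance.
            GatewayServer server = new GatewayServer(new EchoEntryPoint());
            server.start();
            System.out.println("Gateway server started");
        }
    }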
Hi Slim,
Off-heap memory has been postponed because it's not a pressing feature but
rather a nice-to-have. I know that Stephan has continued to work on off-heap
memory. I think we can get it in sometime this year.
Best,
Max
On Fri, Jul 31, 2015 at 11:57 AM, Slim Baltagi wrote:
> Hi
>
> I remem
This could potentially break external software but +1 for renaming it now.
On Mon, Aug 3, 2015 at 11:54 AM, Stephan Ewen wrote:
> Sounds fair to rename them...
>
> On Mon, Aug 3, 2015 at 11:46 AM, Matthias J. Sax <
> mj...@informatik.hu-berlin.de> wrote:
>
> > Hi,
> >
> > I think that the log fi
Hi,
The following commits have been added to the release-0.9 branch since the
0.9.0 release:
c7e8684 [FLINK-2229] Add equals() and hashCode() to ObjectArrayTypeInfo
451eb82 [FLINK-2280] GenericTypeComparator.compare() respects ascending flag
acd4317 [FLINK-2353] Respect JobConfigurable interface
Hi Matthias,
Is that the correct build URL? I can't spot any failing Gelly tests. The
build appears to be stuck in the YARNSessionFIFOITCase.
Cheers,
Max
On Sun, Aug 9, 2015 at 3:37 PM, Matthias J. Sax <
mj...@informatik.hu-berlin.de> wrote:
> Hi,
>
> I got a new failing test in this build (fli
wen wrote:
>
> > Good idea!
> >
> > On Mon, Jul 27, 2015 at 10:31 AM, Maximilian Michels
> > wrote:
> >
> >> We could open a pull request against the linguist repository which holds
> >> the exclusion rules for the graph. It already has rules to
I think this is a decision to be made by the people involved in the Gelly
library. I'm not very familiar with graph processing libraries. Thus, it is
hard for me to assess the value of this contribution.
However, you outlined pretty well that for highly skewed graphs your
technique results in a muc
I second Ufuk and Chesnay. Please provide us with a benchmark. I have a
hard time believing that your implementation, along with the overhead that
comes with it, will improve the streaming performance.
Please, feel free to prove us wrong :)
On Wed, Aug 12, 2015 at 11:48 AM, Chesnay Schepler <
chesna
+1 for your initiative, Henry.
We should hold JIRA descriptions to a very high standard. That helps people
understand issues correctly and makes it much easier for newcomers to
join the development.
On Mon, Aug 17, 2015 at 12:26 AM, Fabian Hueske wrote:
> +1 for what Henry said.
>
> Recentl
Hi Sachin,
Thanks for reporting. Which of the test cases failed in the MapTaskTest and
the MatchTaskTest?
Best,
Max
On Tue, Aug 18, 2015 at 5:35 PM, Sachin Goel
wrote:
> There appears to be some issue in DriverTestBase. I have observed two
> failures recently, once in MatchTaskTest and MapTask
Welcome and all the best, Chesnay!
On Sat, Aug 22, 2015 at 8:08 PM, Vasiliki Kalavri wrote:
> Congrats and welcome Chesnay!
>
> On 21 August 2015 at 12:19, Stephan Ewen wrote:
>
> > Welcome!
> >
> > On Fri, Aug 21, 2015 at 10:42 AM, Fabian Hueske
> wrote:
> >
> > > Welcome on board Chesnay!
>
+1 for labeling the JIRAs with "test-stability".
On Sat, Aug 22, 2015 at 8:21 PM, Márton Balassi
wrote:
> +1 for Vasia's suggestion
> On Aug 22, 2015 8:07 PM, "Vasiliki Kalavri"
> wrote:
>
> > I just came across 2 more :/
> > I'm also in favor of tracking these with JIRA. How about "test-stabil
Hi Matthias,
Thanks for reporting. The label test-stability exists now.
Cheers,
Max
On Sun, Aug 23, 2015 at 12:32 PM, Matthias J. Sax <
mj...@informatik.hu-berlin.de> wrote:
> Hi,
>
> because there is (not yet) a label for failing tests, I just report it
> over the mailing list again. I also op
Nice, Kostas. Do you think we can upload it to the Material page?
http://flink.apache.org/material.html
Cheers,
Max
On Sun, Aug 23, 2015 at 3:37 PM, Chiwan Park wrote:
> Thank you for sharing!
>
> Regards,
> Chiwan Park
>
> > On Aug 23, 2015, at 10:36 PM, Kostas Tzoumas
> wrote:
> >
> > Hi fol
Thanks @Kostas. I've added the color scheme to
http://flink.apache.org/material.html
Cheers,
Max
On Mon, Aug 24, 2015 at 5:15 PM, Kostas Tzoumas wrote:
> sure, can you do that?
>
> On Mon, Aug 24, 2015 at 12:25 PM, Maximilian Michels
> wrote:
>
>> Nice, Kostas. Do
Very nice read!
On Wed, Aug 26, 2015 at 7:22 AM, Henry Saputra wrote:
> Awesome +1
>
> - Henry
>
> On Tue, Aug 25, 2015 at 6:39 AM, Ufuk Celebi wrote:
>> Blog post is live:
>> http://flink.apache.org/news/2015/08/24/introducing-flink-gelly.html
>>
>> Feel free to spread the word. :)
>>
>> On Tue
A bugfix release should not be forked from the current master. It is
very hard to assess whether we are breaking the API because there are
many small fixes going in almost daily. However, I can see applying a
subset of carefully selected commits from the master branch as an
option. Only those commits
> reworks changes go together so tightly that you can get none or both.
>
> Not having the fixes voids the purpose of the bugfix release. Having both
> means it is hard to guarantee all changes are non-breaking.
>
> On Wed, Aug 26, 2015 at 11:08 AM, Maximilian Michels wrote:
>
We will have a proper minor release and a preview of 0.10. All in
all, a good compromise.
+1
On Wed, Aug 26, 2015 at 2:57 PM, Chiwan Park wrote:
> Robert's suggestion looks good. +1
>
> Sent from my iPhone
>
>> On Aug 26, 2015, at 9:55 PM, Aljoscha Krettek wrote:
>>
>> +1 seems to be a viable so
+1
- Verified OpenPGP signatures
- Verified MD5 and SHA checksums
- Executed Java and Scala quickstart examples
- Ran tests on a cluster with Hadoop 2.4.2
On Mon, Aug 31, 2015 at 11:01 AM, Till Rohrmann wrote:
> +1
>
> - Tested against Hadoop 2.7 / Scala 2.10
> - Tested local-cluster and cluster
Well done, Matthias! Waiting for more exciting stuff to come. :)
Cheers,
Max
On Wed, Sep 2, 2015 at 2:29 PM, Stephan Ewen wrote:
> Welcome!
>
> On Wed, Sep 2, 2015 at 2:08 PM, Till Rohrmann
> wrote:
>
> > Congratulations Matthias! Welcome on board :-)
> >
> > On Wed, Sep 2, 2015 at 2:01 PM, Ro
Hi Matthias,
I'm totally with you on this issue. However, enforcing a strict
version is not a trivial thing. For some people, it might be difficult
to install a specific Jekyll version because of the dependencies on
libraries and Ruby versions that come with it.
> On my system, version 2.2.0 is i
> What I also did in the past was to have two commits, one with the changes and
> one with the content update.
+1 We should always do this to keep the history readable.
On Thu, Sep 3, 2015 at 10:50 AM, Ufuk Celebi wrote:
>
>> On 03 Sep 2015, at 09:56, Maximilian Michels
Hi Behrouz,
Thanks for starting the discussion. If I understand your question
correctly, you are asking whether converting the Flink examples to make
use of the ParameterTool would break the training or other external
material?
We could make the changes such that the examples will accept the same
par
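For illustration, a minimal sketch of what a converted example could look
like; the parameter names and defaults are only placeholders:

    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.utils.ParameterTool;

    public class ParameterToolExample {
        public static void main(String[] args) throws Exception {
            // Parse arguments of the form --input <path> --output <path>.
            ParameterTool params = ParameterTool.fromArgs(args);

            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            // Make the parameters visible in the web interface and at runtime.
            env.getConfig().setGlobalJobParameters(params);

            DataSet<String> text = params.has("input")
                    ? env.readTextFile(params.get("input"))
                    : env.fromElements("to be or not to be");

            text.writeAsText(params.get("output", "/tmp/parameter-tool-out"));
            env.execute("ParameterTool example");
        }
    }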
l doesn't seem to support
> positional arguments :) but we could fix that." should we create a separate
> ticket or should it also be part of FLINK-2021 ?
>
> BR,
> Behrouz
>
>
> On Fri, Sep 4, 2015 at 10:55 AM, Maximilian Michels wrote:
>
>> Hi Behrouz,
+1 for releasing a milestone release soon to encourage people to try
out the new features.
There is this bug: https://issues.apache.org/jira/browse/FLINK-2632
which affects the Web Client's error and results display for jobs.
Would be nice to fix it but IMHO it is not critical for the milestone
re
> > > Max
>> > >
>> > > On Fri, Sep 4, 2015 at 11:17 AM, Behrouz Derakhshan
>> > > wrote:
>> > > > Hi Max,
>> > > >
>> > > > What you said makes sense, for "ParameterTool doesn't seem to support
>
The junction plugin could not create the link "build-target" to the
build directory in flink-dist. Maybe this is a permission problem. You
could have turned on the Maven debug mode to see the underlying
exception.
On Thu, Sep 10, 2015 at 1:37 PM, Matthias J. Sax wrote:
> I could resolve this by m
e has its own set of examples.
> And all of them has to be changed.
> Is that OK?
>
> @Ufuk: I agree, I create a ticket for adding Javadocs.
>
> BR,
> Behrouz
>
>
> On Wed, Sep 9, 2015 at 3:53 PM, Maximilian Michels wrote:
>
>> It would be nice to support
ran into the same problem specified here:
> https://issues.apache.org/jira/browse/FLINK-1601 , and current logs does
> not specify what the underlying issue is, it just says "Runner thread died
> before the test was finished. Return value = 1" .
>
> I think it is a good idea
Thanks for fixing! For future reference, please open a JIRA.
On Fri, Sep 25, 2015 at 11:38 AM, Matthias J. Sax wrote:
> Do we need a Jira for the WebClient fix? Or can I just commit it?
>
> If anybody whats to review, please find it here:
> https://github.com/mjsax/flink/tree/hotfixWebClient
>
> I
Hi Hanan,
Could you, by any chance, run your program on a local cluster with
your dependencies in the lib folder? You can use "./bin/start-local.sh" and
try submitting your program to localhost. That would help us find out
whether it is a YARN issue.
Thanks,
Max
On Fri, Sep 25, 2015 at 8:39
Hi Fabian,
This is a very important topic. Thanks for starting the discussion.
1) JIRA discussion
Absolutely. No new feature should be introduced without a discussion.
Frankly, the problem I see is that discussions sometimes only start
once the pull request has been opened. However, this can be o
Hi Kostas,
I think it makes sense to cancel the proposed 0.10-milestone release.
We are not far away from completing all essential features of the 0.10
release. After we manage to complete those, we can test and release
0.10.
The 0.10 release will be a major step towards the 1.0 release and,
ther
+1 for the new Maven project structure
+1 for removing the flink-testing-utils module
+1 for moving flink-language-binding to flink-python
On Thu, Oct 1, 2015 at 6:27 PM, Aljoscha Krettek wrote:
> +1 For pulling out and the restructure. Enough good arguments have been
> brought forward and I agre
+1 Matthias, let's limit the overhead this has for the module maintainers.
On Fri, Oct 2, 2015 at 12:17 AM, Matthias J. Sax wrote:
> I will commit something to flink-storm-compatibility tomorrow that
> contains some internal package restructuring. I think, renaming the
> three modules in this com
You made very sensible choices for improving and finalizing the
Streaming API. The documentation is much clearer now. By the way, here
is the pull request: https://github.com/apache/flink/pull/1208
On Fri, Oct 2, 2015 at 3:02 PM, Stephan Ewen wrote:
> I added two comments to the pull request that
+1 Good idea. I think we can save quite some CPU cycles by not copying
records.
> That is basically the behavior of the batch API, and there has so far never
> been an issue with that (people running into the trap of overwritten
> mutable elements).
As far as I know, this is only the case for chai
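For reference, a minimal sketch of the existing object-reuse switch in the
ExecutionConfig; whether and how it applies to chained streaming operators is
exactly the open question above:

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class ObjectReuseExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // Opt-in switch: with object reuse enabled, Flink may pass the same
            // record instance along a chain instead of copying it.
            env.getConfig().enableObjectReuse();

            env.fromElements(1, 2, 3)
                    .map(new MapFunction<Integer, Integer>() {
                        @Override
                        public Integer map(Integer value) {
                            return value * 2;
                        }
                    })
                    .print();

            env.execute("Object reuse example");
        }
    }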
Hi Flinksters,
After a lot of development effort in the past months, it is about time
to move towards the next major release. We decided to move towards
0.10 instead of a milestone release. This release will probably be the
last release before 1.0.
For 0.10 we most notably have the new Streami
7d403aee703aaf33a68a839d2
>
> Greg
>
> On Mon, Oct 5, 2015 at 10:20 AM, Maximilian Michels
> wrote:
>
> > Hi Flinksters,
> >
> > After a lot of development effort in the past months, it is about time
> > to move towards the next major release. We decided to
> > > > > FLINK-2561, so maybe it's covered as is.
> > > > >
> > > > > If we go for Gelly graduation, I can take care of FLINK-2786
> "Remove
> > > > > Spargel from source code and update documentation in favor of
>
Hi Matthias,
Thanks for bringing up this idea. Actually, it has been discussed a
couple of times on the mailing list whether we should have a central
place for third-party extensions/contributions/libraries. This could
either be something package-based or, like you proposed, another
repository.
A
page where we gather links/short
>>> descriptions of all these contributions
>>> and leave the maintenance and dependency management to the tool/library
>>> creators?
>>> This way we will at least have these contributions in one place and link to
>>> them s
IMHO we can do that. There should be a disclaimer that the third-party
software is not officially supported.
On Thu, Oct 8, 2015 at 2:25 PM, Matthias J. Sax wrote:
> Should we add a new page at Flink project web page?
>
> On 10/08/2015 12:56 PM, Maximilian Michels wrote:
>> +1 for
>> > >>> @Chiwan, sure. Will do that. Thanks for pointing it out :-)
>> > >>>
>> > >>> 2015-09-28 18:00 GMT+02:00 Chiwan Park > > >:
>> > >>>
>> > >>>> @Fabian, Could you cover FLINK-2712 in
In terms of usability, you might have a point Matthias. In terms of
JIRA workflow, I'd say that it makes more sense to tag and filter. An
issue with subtasks should be resolvable. Test instability, on the
other hand, requires continuous effort and is therefore not really
applicable to the concept of
age Name
>>
>> Available for Flink 0.8.x and 0.9.x
>>
>> Short description
>>
>> Please let us know if we missed listing your package. Be aware that we
>> might remove listed packages without notice.
>
> Can you please give me some input, what proj
sure what we should add and was hoping for input from the
>> community.
>>
>> I am aware of the following projects we might want to add:
>>
>> - Zeppelin
>> - SAMOA
>> - Mahout
>> - Cascading (dataartisan repo)
>> - BigPetStore
Hi Santosh,
Did you include the resource file in the JAR that you submit using the
web interface? Does it work using the command-line interface?
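In case it helps, here is a minimal sketch of how a bundled resource is
usually loaded from the submitted jar; the resource path is just an example:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    public class ResourceLoadingExample {
        public static void main(String[] args) throws IOException {
            // Load "config/settings.properties" from the classpath, i.e. from
            // inside the submitted jar; the path is purely illustrative.
            try (InputStream in = ResourceLoadingExample.class.getClassLoader()
                    .getResourceAsStream("config/settings.properties")) {
                if (in == null) {
                    throw new IOException("Resource not found on the classpath");
                }
                BufferedReader reader = new BufferedReader(
                        new InputStreamReader(in, StandardCharsets.UTF_8));
                System.out.println(reader.readLine());
            }
        }
    }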
Best regards,
Max
On Wed, Oct 7, 2015 at 3:31 PM, santosh_rajaguru wrote:
> Hi all,
>
> I am experiencing some problem while writing to the jena-hbase
How do you load the resource? Could you supply the code section?
On Fri, Oct 9, 2015 at 3:53 PM, santosh_rajaguru wrote:
> Yes i have included the files in the jar. It throws the same error while
> executing from command prompt
>
>
>
> --
> View this message in context:
> http://apache-flink-mai
+1 Let's collect these in the Wiki for now. At some point, we might
want to have a dedicated page on the Flink homepage.
On Mon, Oct 19, 2015 at 3:31 PM, Timo Walther wrote:
> Ah ok, sorry. I think linking to the wiki is also ok.
>
>
> On 19.10.2015 15:18, Fabian Hueske wrote:
>>
>> @Timo: The
I'm a little less excited about this. You might not be aware, but for
a large portion of the source code, we already follow the Google style
guide. The main changes will be tabs->spaces and the 80/100-character
line limit.
Out of curiosity, I ran the official Google Style Checkstyle
configuration to
Looks like either a Surefire bug or corrupt memory. Haven't seen this before.
On Tue, Oct 20, 2015 at 6:18 PM, Matthias J. Sax wrote:
> I never saw something like this before... Travis hick up? Can be
> ignored? Or severe issues?
>
> https://travis-ci.org/mjsax/flink/jobs/86431071
>
> -Matthias
>
versions, G1 is still causing core dumps once
> in a while...
>
> Stephan
>
>
> On Tue, Oct 20, 2015 at 6:23 PM, Maximilian Michels wrote:
>
>> Looks like either a Surefire bug or corrupt memory. Haven't seen this
>> before.
>>
>> On Tue, Oct
Hi Greg,
It would be very interesting to profile the job master to see what it
spends most of its time on. Did you run your experiments with
0.9.X or the 0.10-SNAPSHOT? It would be interesting to know whether there is a
regression.
Best,
Max
On Wed, Oct 21, 2015 at 10:08 AM, Till Rohrmann wrote:
>
Dear community,
Over the past months, we have been working very hard to push towards 0.10. I
would like to propose the first release candidate.
===
Please vote on releasing the following candidate as Apache Flink version
0.10.0:
The commit to be voted on:
b697064b71b97e
n the documents to test the release candidate:
https://docs.google.com/document/d/1TWCFj55xTyJjGYe8x9YEqmICgSvcexDPlbgP4CnLpLY/edit?usp=sharing
On Wed, Oct 21, 2015 at 7:10 PM, Maximilian Michels wrote:
> Dear community,
>
> The past months we have been working very hard to push towards 0
> > > >
> > > > > I have a couple of questions:
> > > > > - what about the blocker issue (according to the wiki) FLINK-2747?
> > > > > - weren't we going to get rid of staging altogether?
> > > > >
> > > > &
> -V.
>
> On 22 October 2015 at 11:20, Stephan Ewen wrote:
>
>> I am onto FLINK-2800 and FLINK-2888
>>
>> I would not disable YARN detached mode, it is used quite a bit by streaming
>> users and makes perfect sense for streaming jobs, which are always one-shot
in memory. Should improve that in the future...
>>
>> On Thu, Oct 22, 2015 at 11:31 AM, Maximilian Michels
>> wrote:
>>
>> > @Stephan: That's right, the detached mode is very useful for streaming
>> > programs. Let's see if we can merge Sac
e (like 4).
>> >>>>
>> >>>> Let’s keep the discussion going a little longer. I think it has
>> >> proceeded
>> >>>> in a very reasonable manner so far. Thanks for this!
>> >>>>
>> >>>> – Ufuk
>> >>
+1 (binding)
On Wed, Aug 10, 2016 at 9:54 AM, Robert Metzger wrote:
> +1 to release 1.1.1
>
> I've checked the files in the staging repository and reproduced one of the
> issues reported on user@. With 1.1.1, the issue is gone.
>
> The exception with 1.1.0:
> Exception in thread "main"
> org.apac
+1 for Scala 2.11, and it would be nice to update to Apache Flink 1.1.1
ASAP. After all, Homebrew users like to stay on the bleeding edge :)
On Wed, Aug 10, 2016 at 12:00 AM, Wright, Eron wrote:
> Will update the homebrew package to Flink 1.1.1 + Hadoop 2.7 + Scala 2.11.
>
>> On Aug 9, 2016, at 5:48
Hi Robert,
We had this discussion before when I suggested using an external
repository to manage connectors. Since then, I have come to the
conclusion that the overhead of maintaining two source repositories
along with maintaining code and integration, documentation, and CI, is
not worth the effor
Hi Aljoscha,
I'm not very deep into the state backend implementation. However, I
think a breaking change is unavoidable with the new key groups. The
only way that we achieve backwards-compatibility is to include a
translator from the old state format to the new one. As you already
mentioned, this
Hi Sunny,
We are just getting started with Jenkins and may have to fine-tune the
CI setup a bit. Apart from a few unreliable Flink tests, Travis has
its own issues because resources tend to be heavily limited there.
Feel free to contact one of the shepherds for the component:
https://cwiki.apache
Actually that is a good suggestion. I know from other Apache projects
that they only mirror the initial description of the pull request but
not the discussion. I agree with you that it's very hard to have a
meaningful discussion in JIRA if it is interleaved with GitHub
comments.
Cheers,
Max
On Wed,
Sure, I've filed a JIRA: https://issues.apache.org/jira/browse/INFRA-12456
On Thu, Aug 18, 2016 at 10:57 AM, Stephan Ewen wrote:
> @max - can you contact infra about that?
>
> On Thu, Aug 18, 2016 at 10:25 AM, Maximilian Michels wrote:
>
>> Actually that is a good sugges
Very nice work Ufuk!
On Fri, Aug 19, 2016 at 12:07 PM, Till Rohrmann wrote:
> I second Aljoscha :-)
>
> On Fri, Aug 19, 2016 at 11:53 AM, Aljoscha Krettek
> wrote:
>
>> I checked it out and I liked it. :-)
>>
>> On Thu, 18 Aug 2016 at 19:40 Ufuk Celebi wrote:
>>
>> > Initial PR for the layout:
Hi Pavel!
Thanks for looking into code coverage! Now that Infra has enabled access
to Coveralls, could you open a Flink issue to track the next steps
for displaying coverage data?
Cheers,
Max
On Mon, Aug 22, 2016 at 11:38 AM, Till Rohrmann wrote:
> Thanks a lot for your help with that Pavel :-)
>
> O
+1 for a 1.1.2 release soon. I think we should have the fixes in by
the beginning of next week. Otherwise, further fixes could also make
it into 1.1.3.
On Wed, Aug 24, 2016 at 12:32 PM, Gyula Fóra wrote:
> Hi,
>
> I agree that there has been some critical issues discovered and it would be
> to
gnee there, but not authorized for
> this kind of action atm)
>
> 2016-08-24 13:32 GMT+03:00 Maximilian Michels :
>
>> Hi Pavel!
>>
>> Thanks for looking into code coverage! Now that Infra enabled access
>> to coveralls, could you open a Flink issue to address the n
Thanks for reporting Niels. We'll look into it ASAP.
On Mon, Aug 29, 2016 at 10:31 AM, Niels Basjes wrote:
> Hi,
>
> Last week I brought down one of our Yarn nodes because of this problem:
> https://issues.apache.org/jira/browse/FLINK-4485
>
> The Yarn node no longer accepted any Flink/Yarn jobs
Thanks for reporting Niels. We'll look into it ASAP.
On Mon, Aug 29, 2016 at 11:33 AM, Maximilian Michels
wrote:
> Thanks for forwarding.
>
> On Mon, Aug 29, 2016 at 11:25 AM, Till Rohrmann
> wrote:
>> For the attention of the YARN shepherd.
all posts to JIRA
1) In the JIRA main comments
2) In the Work Log
I think it would be a nice setup to have the GitHub PR description and
comments directly in the JIRA comments. Diff comments should go in the
Work Log.
On Fri, Aug 19, 2016 at 2:56 PM, Maximilian Michels wrote:
> Sure, I'
This limitation doesn't exist anymore in the latest master. Jobs may
be monitored for an infinite amount of time now. Note that it wouldn't
cancel the job if the submission timeout had been reached before the
job completed.
On Mon, Aug 29, 2016 at 6:06 PM, Till Rohrmann wrote:
> If I'm not mistaken
till lead to notifications on the mailing list?
>>
>> On Mon, Aug 29, 2016 at 11:52 AM, Maximilian Michels
>> wrote:
>>
>> > From what I understand so far, the message mirroring can be adjusted
>> > in the follow parts:
>> >
>> > 1) GitHub
+1 (binding)
Tested Flink 1.1.2 Scala 2.11 Hadoop2
- Ran ./flink run ../examples/streaming/Iteration.jar with
- ./start-local.sh
- ./start-cluster.sh
- ./yarn-session.sh -n 2
- ./yarn-session.sh -n 2 -d
- Test resuming and stopping of yarn session
- ./yarn-session.sh -yid
- CTRL-C (
Found a minor bug in detached job submissions, but I wouldn't cancel
the release for it: https://issues.apache.org/jira/browse/FLINK-4540
On Wed, Aug 31, 2016 at 2:37 PM, Maximilian Michels wrote:
> +1 (binding)
>
> Tested Flink 1.1.2 Scala 2.11 Hadoop2
>
> - Ran ./f
Hi Ivan,
I don't have any experience with these sites, but I wouldn't mind trying
out one of them. As far as I understand, they help visualize code
coverage and perform static analysis to find code problems. As for the
code coverage, we would have to build coverage checks into our build
system oursel
Hi Alexey,
You don't have to set the streaming mode. The Flink Runner will
automatically choose to use streaming mode when it discovers
UnboundedSources like Kafka. I'm wondering why that didn't work in
your case. I just ran your example and it chose streaming mode and
didn't return an error durin
If there are no objections, I will contact Infra to change the GitHub
JIRA notifications as follows:
Jira comments section
- initial PR description
- comments of the main GitHub thread
Jira Work Log
- all diff comments
On Mon, Aug 29, 2016 at 6:58 PM, Maximilian Michels wrote:
>>
This should mostly concern users of the Storm compatibility layer:
We just received a pull request [1] for updating the Storm
compatibility layer to support Storm versions >= 1.0.0. This is a
major change because all Storm imports have changed their namespace
due to package renaming.
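To make the impact concrete, a minimal sketch of what the rename means for
user code; this is illustrative and not taken from the pull request:

    // Before Storm 1.0.0, the classes lived under the backtype.storm namespace:
    // import backtype.storm.topology.TopologyBuilder;

    // From Storm 1.0.0 on, the same classes live under org.apache.storm:
    import org.apache.storm.topology.TopologyBuilder;

    public class StormNamespaceExample {
        public static void main(String[] args) {
            // The API itself is unchanged; only the package prefix moved.
            TopologyBuilder builder = new TopologyBuilder();
            System.out.println(builder.getClass().getName());
        }
    }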
Hi Vijay,
The test fails when a NodeReport with used resources set to null is
retrieved. The test assumes that a TaskManager is always exclusively
running in one Yarn NodeManager, which doesn't have to be true, as one
NodeManager can host multiple containers. The test only seems to
reliably fail whe
.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
> at
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
> at
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
> Regards, Vijay
>
> On Monday, Septem
Should be resolved now.
On Tue, Sep 6, 2016 at 11:21 AM, Maximilian Michels wrote:
> The testing code for Yarn is very fragile. Also, I'm puzzled why the
> code to test the VCores setting is in the TaskManagerFailure test.
> Running some more tests to fix the issue.
>
> On T
Welcome Hasan!
I also studied at FU Berlin. I would suggest working on some of the
issues you encounter as a data engineer working with Flink. Even
starting with something trivial like a one-liner or updating
documentation would be great.
Cheers,
Max
On Fri, Sep 9, 2016 at 1:40 PM, Hasan Gürcan
- Henry
>
> On Fri, Sep 2, 2016 at 8:42 AM, Till Rohrmann wrote:
>
>> +1
>>
>> On Fri, Sep 2, 2016 at 4:20 PM, Fabian Hueske wrote:
>>
>> > +1
>> >
>> > Thanks Max!
>> >
>> > 2016-09-02 15:20 GMT+02:00 Stephan
Hello Jinkui Shi,
Due to the nature of most of the Yarn tests, we need them to be in a
separate module. More concretely, these tests have a dependency on
'flink-dist' because they need to deploy the Flink fat jar to the Yarn
test cluster. The fat jar also contains the 'flink-yarn' code. Thus,
'fl
What are the use cases where you actually need to delete a timer? How
about we only let users delete timers which they created themselves?
I'm guessing most of these use cases will be obsolete with the new
Trigger DSL because the trigger logic can be expressed more easily. So
+1 for removing the del
Hi Ufuk,
`read(buf)` does not always fill the whole buffer, whereas
`readFully(buf)` guarantees that `buf.length` bytes are read, right? The
method is clearly documented, but such mistakes can slip in, just like a
forgotten null pointer check.
If we want to prevent mistakes like this, we can replace `read(buf
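For illustration, a minimal sketch of the difference using plain JDK classes
(not the Flink code in question):

    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.IOException;

    public class ReadFullyExample {
        public static void main(String[] args) throws IOException {
            byte[] data = new byte[]{1, 2, 3, 4, 5};
            byte[] buf = new byte[data.length];

            // read(buf) may return after fewer than buf.length bytes; the caller
            // must loop until the buffer is actually full.
            try (DataInputStream in =
                    new DataInputStream(new ByteArrayInputStream(data))) {
                int pos = 0;
                while (pos < buf.length) {
                    int n = in.read(buf, pos, buf.length - pos);
                    if (n == -1) {
                        throw new IOException("Unexpected end of stream");
                    }
                    pos += n;
                }
            }

            // readFully(buf) does the looping internally and only returns once
            // buf.length bytes have been read (or throws an EOFException).
            try (DataInputStream in =
                    new DataInputStream(new ByteArrayInputStream(data))) {
                in.readFully(buf);
            }
            System.out.println("Both approaches filled the buffer completely.");
        }
    }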
Hi Liwei,
I created this a while ago which is pretty much what you want as well:
https://issues.apache.org/jira/browse/FLINK-3276
Cheers,
Max
On Tue, Sep 27, 2016 at 11:09 AM, Liwei Lin wrote:
> Thanks Stephan for the prompt response!
>
> Glad to know it's targeted for Flink 2.0. Is there any J
Hi Eron,
Great to see so much progress on the Mesos implementation! Thank you
for sharing the code with us.
I'm not entirely sure whether we should actually wait for the completion of
FLIP-6. We might complete the Mesos support for the 1.2.0 release and
port its code to the new RPC abstraction that comes
I'll merge https://github.com/apache/flink/pull/2548 for the release.
It's cosmetic, but it avoids an NPE in case the user jar doesn't contain
Flink jobs.
On Wed, Oct 5, 2016 at 12:59 PM, Kostas Kloudas
wrote:
> Hi Ufuk,
>
> Thanks for being the release manager.
>
> There is already an open PR unde
Kostas' PR https://github.com/apache/flink/pull/2593 is merged. I think
we're good to go.
On Wed, Oct 5, 2016 at 3:44 PM, Maximilian Michels wrote:
> I'll merge https://github.com/apache/flink/pull/2548 for the release.
> It's cosmetic but it avoids a NPE in case the use
For a new Mesos framework implementation, it seems reasonable to go
with Mesos 1.0 and not support legacy versions from day 1. I think
most users of Mesos are looking forward to the Mesos 1.0 release.
Still, we probably should check the migration plan of some potential
users.
On Tue, Oct 4, 2016
-1 overall (see below)
+1 for:
- scanned commit history for dubious changes
- ran "mvn clean install -Dhadoop.version=2.6.0 -Pinclude-yarn-tests"
successfully
- started cluster via "./bin/start-cluster.sh"
- ran batch and streaming examples via web interface and CLI
- used web interface for monit
r the RocksDB state backend. That has the
> additional advantage that savepoints from that mode will most likely be
> compatible with Flink 1.2, which savepoints from the "semi async" mode will
> almost certainly not be compatible.
>
> Greetings,
> Stephan
>
> On Fri
+1 (binding)
- scanned commit history for changes
- ran "mvn clean install -Dhadoop.version=2.6.0 -Pinclude-yarn-tests"
successfully
- started cluster via "./bin/start-cluster.sh"
- ran batch and streaming examples via web interface and CLI
- used web interface for monitoring
- ran example job wit