Thanks! I like the idea of renaming it. I'm fine with shepherd and I
also like Vasia's suggestion "champion".
I would like to add "Distributed checkpoints" as a separate component
to track the development of checkpoints and savepoints.
On Wed, Jun 1, 2016 at 10:59 AM, Aljoscha Krettek wrote:
> Btw, i
Yes, it's expected, but you are certainly not the first one to be
confused by this behaviour.
The reasoning behind the current behaviour is that we don't want users
accidentally removing jobs, which seems worse than requiring users to
cancel manually. We thought about adding a flag to the start scripts.
>> >>>
>> >>> On Tue, May 31, 2016 at 10:59 AM, Chiwan Park
>> >> wrote:
>> >>>> I think that the tests fail because of sharing ExecutionEnvironment
>> >> between test cases. I’m not sure why it is a problem, but it is only
>
On Thu, Jun 2, 2016 at 1:26 PM, Maximilian Michels wrote:
> I thought this had been fixed by Chiwan in the meantime. Could you
Chiwan fixed the ML issues IMO. You can pick any of the recent builds
from https://travis-ci.org/apache/flink/builds
For example:
https://s3.amazonaws.com/archive.travi
Via YARN it's possible to set dynamically, but for standalone clusters
unfortunately not at the moment.
On Fri, Jun 3, 2016 at 3:46 PM, Vinay Patil wrote:
> Hi,
>
> I am unable to pass VM arguments to my jar, this is the way I am running it:
>
> *bin/flink run test.jar config.yaml*
>
> I cannot a
On Mon, Jun 6, 2016 at 8:35 AM, Vasiliki Kalavri
wrote:
> column "count"? Can you try renaming it to "myCount" or something else? I
For String expressions, it's also possible to escape it via "...as
`count`...", but I'm not sure how this translates to the DSL
expressions. Any ideas, Fabian or Alj
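As a minimal sketch of the String-expression escaping, assuming an
existing Table called "table" with a "word" column (the concrete
groupBy/select expressions here are made up for illustration, not taken
from a real program):

    // sketch: expose the aggregate under the reserved name "count"
    // by escaping it with backticks in the String expression
    Table result = table
        .groupBy("word")
        .select("word, word.count as `count`");

Whether the same escaping carries over to the expression DSL is exactly
the open question above.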
Hey Ozan! For cancel and submit, yes:
- cancel: /jobs/:jobid/cancel
- submit: /jars/upload and /jars/:jarid/run
You can look into WebRuntimeMonitor class for more details about the
submission. Cancellation should be straight forward.
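For illustration, a self-contained Java sketch of calling the cancel
endpoint (the host, port and HTTP method are my assumptions; please
double-check them against the handlers registered in WebRuntimeMonitor):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class CancelJobSketch {
        public static void main(String[] args) throws Exception {
            String jobId = args[0]; // ID of the job to cancel
            URL url = new URL("http://localhost:8081/jobs/" + jobId + "/cancel");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET"); // assumption: cancel is served via GET
            System.out.println("Cancel returned HTTP " + conn.getResponseCode());
            conn.disconnect();
        }
    }

Submission works analogously: a multipart upload against /jars/upload
followed by a request against /jars/:jarid/run (again, check the handlers
for the exact parameters).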
Restart is currently not supported via the REST API and I'm no
Hey Julius,
You have to wait or find someone to review it. The PR looks
interesting. The big question, though, is who is going to maintain
it if we merge it.
An alternative would be to keep it in your GitHub repository and
link to it from the project page.
– Ufuk
On Mon, Jun 13, 2016 at 1:05
I would like to do it if that's OK with you Robert. I would follow
your suggestion and wait a few days until the following important
fixes are in:
- Savepoint headers and proper disposal (FLINK-4067 and
https://github.com/apache/flink/pull/2083)
- Metrics (https://github.com/apache/flink/pull/2146)
On Fri, Jul 1, 2016 at 6:50 PM, Aljoscha Krettek wrote:
> Hmm, this sounds like we should also have a proper LICENSE/NOTICE for our
> binary releases.
True... to quote the linked ASF page: "As far as LICENSE and NOTICE
are concerned, only bundled bits matter."
Hey David,
could this be related:
http://stackoverflow.com/questions/1124788/java-unresolved-compilation-problem?
– Ufuk
On Mon, Jul 4, 2016 at 9:22 AM, David Herzog wrote:
> Dear Support,
>
> I make small print outs in: org.apache.flink.runtime.jobmanager.Jobmanager
> to better understand how it
The data exchange mode has been introduced recently as a replacement
for the pipeline breaker logic, which was buggy. I'm not too familiar
with the optimizer, but I would expect everything that goes back to
the DataExchangeMode to be correct. The rest should be an artifact of
the old pipeline breaker
>>>>> (though, admittedly, some sort-based enhancements are yet to be worked
>>>>> on).
>>>>> This PR looks to be ripe.
>>>>>
>>>>> Also, as we tidy up a few things with Gelly and documentation, what is the
>>>>> schedule for
I would like that! +1
On Mon, Jul 4, 2016 at 4:59 PM, Aljoscha Krettek wrote:
> Hi,
> If we have some high-profile users that are worthwhile putting there and that
> are OK with us putting up their logos then this would be great.
>
> Cheers,
> Aljoscha
>
> On Mon, 4 Jul 2016 at 16:58 Stephan Ewen
>> > users that want to use the RocksDB backend or FsStateBackend on Amazon
>> EMR
>> > with S3.
>> >
>> > There is already an open PR that I'm hoping to get in this week.
>> >
>> > On Mon, 4 Jul 2016 at 13:48 Ufuk Celebi wrote:
There is also this:
https://flink.apache.org/contribute-code.html#snapshots-nightly-builds
The Hadoop 2 version is built for Hadoop 2.3. Depending on what you
are trying to do, this may or may not be a problem.
On Tue, Jul 5, 2016 at 12:26 PM, Vinay Patil wrote:
> Yes, I had already done that yest
ained in flink-dist. Given that this reasoning is sound,
> we can keep the LICENSE and NOTICE file as it is modulo the changes we
> introduced between Flink 1.0 and 1.1.
>
> On Sat, Jul 2, 2016 at 6:07 PM, Ufuk Celebi wrote:
>
>> On Fri, Jul 1, 2016 at 6:50 PM, Aljoscha Krettek
On Wed, Jul 6, 2016 at 3:19 PM, Aljoscha Krettek wrote:
> In the future, it might be good to have discussions directly on the ML and
> then change the document accordingly. This way everyone can follow the
> discussion on the ML. I also feel that Google Doc comments often don't give
> enough space f
Hey Aljoscha,
thanks for this proposal. I somehow missed it last week. I like the
idea very much and agree with your assessment about the problems with
the Google Doc approach.
Regarding the process: I'm also in favour of adopting it from Kafka. I
would not expect any problems with this, but w
Dear community,
===
FLINK 1.1.0 RC0
===
I've now created a "preview RC" for the upcoming 1.1.0 release,
including everything in master up to cb7824
(https://github.com/uce/flink/tree/release-1.1.0-rc0).
There are still some blocking issues and important pull requests to b
che.org/jira/browse/FLINK-4154
>> >
>> >
>> >
>> > On Tue, Jul 5, 2016 at 3:56 PM, Greg Hogan wrote:
>> >
>> > > Hi Ufuk,
>> > >
>> > > The old sort-based combine is still the default. The user calls
>> > > .setComb
Hey Alan,
as Marton said your contribution is more than welcome. :-)
The discussion around moving some contributions outside of the main
repository never came to a final conclusion. Therefore, we
currently have most of the connectors inside the main Flink repo. As
long as there is no concrete
>> >>>>> The wiki contains the current state of the proposal, while the
>> >>>>> discussion is covered over the dev-mailing list. IMHO, this makes a
>> lot
>> >>>>> of sense, as people tend to follow the mailing list but not wiki
>> >>>
Thanks for this very first proposal! Both the proposed functionality
and the way you explained it are super nice. :-)
I think that this has been long overdue in Flink. :-) Having worked on
both the ExecutionGraph and IntermediateResults before, I agree that
these are the relevant components for th
+1
I really like the re-organization you just did [1]! :-) Thanks!
Is there a way to reflect this in the left-side navigation as well by
having all User/Contributors/... child pages grouped together?
[1] https://cwiki.apache.org/confluence/display/FLINK/Apache+Flink+Home
On Mon, Jul 11, 2016 a
Hey devs,
we currently tag commits with the JIRA issue and component(s), like:
[FLINK-3943] [table] Add support for EXCEPT operator
I was wondering whether it makes sense to write down a set of common
commit tags for new contributors.
The set of commit tags is quite unregulated right now and I
I very much like this proposal. This is long overdue. Our
documentation never "broke up" with the old batch focus. That's where
the current structure comes from and why people often don't find what
they are looking for. We were trying to treat streaming and batch as
equals. We never were "brave" en
>> https://issues.apache.org/jira/browse/FLINK-4166 <
>> https://issues.apache.org/jira/browse/FLINK-4166> for this version,
>> because it can easily become a blocker when running multiple Flink
>> applications on a cluster with HA (see related issue for one example)
>> need to look up the appropriate tag before writing a commit message.
>> If we would use an automated system to evaluate commits, I would agree to
>> fix this.
>>
>>
>>
>>
>> On Fri, Jul 15, 2016 at 10:57 AM, Ufuk Celebi wrote:
>>
>> >
I can see how mirroring only issue creations can be too noisy for people
who only check the dev list once in a while. If others feel like Theo, I
don't have an issue with not mirroring issue creation to the dev list.
As a data point, in the past 30 days we have created 141 issues, which
should have been mirr
Hey Vishnu,
thanks for trying out the PR. :-) Would be great to move future
questions to the PR.
How are you starting your cluster? My guess is that you are running
the cluster in local mode, which does not start up the network
components. Is that the case?
– Ufuk
On Thu, Jul 21, 2016 at 1:56
lowed lateness
@Aljoscha, Kostas: do you have an idea when these will be addressed?
On Mon, Jul 18, 2016 at 11:05 AM, Aljoscha Krettek wrote:
> Not yet, but Kostas is investigating.
>
> On Fri, 15 Jul 2016 at 18:21 Ufuk Celebi wrote:
>
>> Most actually have a pendi
require query
>> > for Tuple Stream in CassandraSink)
>> >
>> > For https://issues.apache.org/jira/browse/FLINK-4239 (Set Default
>> > Allowed Lateness to Zero and Make Triggers Non-Purging) I just opened a
>> PR:
>> > https://github.com/apache/
the window slowness fix and the multiple metrics
> reporters change.
>
> On Tue, 26 Jul 2016 at 14:47 Maximilian Michels wrote:
>
>> Yes, I'm done with the investigation for
>> https://github.com/apache/flink/pull/2257. Merging before 6 pm CET.
>>
>> On Tue, Ju
Everything has been merged and I've now created the release-1.1 branch
from commit 12bf7c1.
I will create the RC1 asap and start the vote soon. Thanks to everyone
who was involved in fixing and reporting issues! :-)
On Tue, Jul 26, 2016 at 4:32 PM, Ufuk Celebi wrote:
> Very happy to h
Dear Flink community,
Please vote on releasing the following candidate as Apache Flink version 1.1.0.
I've CC'd user@flink.apache.org as users are encouraged to help
testing Flink 1.1.0 for their specific use cases. Please feel free to
report issues and successful tests on dev@flink.apache.org.
ls.createLeaderRetrievalService(LeaderRetrievalUtils.java:70)
> at
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderRetrievalTest.testTimeoutOfFindConnectingAddress(ZooKeeperLeaderRetrievalTest.java:187)
>
> I'll continue testing other parts and other Hadoop versions.
>
> On Wed, 27 Jul 20
On Sun, Jul 31, 2016 at 8:07 PM, Neelesh Salian wrote:
> I am Neelesh Salian; I recently joined the Flink community and I wanted to
> take this opportunity to formally introduce myself.
Thanks and welcome! :-)
Which Maven version are you using?
On Mon, Aug 1, 2016 at 5:56 PM, Aljoscha Krettek wrote:
> I tried it again now. I did:
>
> rm -r .m2/repository
> mvn clean verify -Dhadoop.version=2.6.0
>
> failed again. Also with versions 2.6.1 and 2.6.3.
>
> On Mon, 1 Aug 2016 at 08:23 Maximilian Michels wr
Dear community,
I would like to vote +1, but during testing I've noted that we should
have reverted FLINK-4154 (correction of murmur hash) for this release.
We had a wrong murmur hash implementation for 1.0, which was fixed for
1.1. We reverted that fix, because we thought that it broke savepoint
e streaming as the common case and make special
>> > > sections for batch.
>> > >
>> > > We can still have a few streaming-only sections (end to end exactly
>> once)
>> > > and a few batch-only sections (optimizer).
>> > >
>> >
new
>> RC as well.
>>
>> We certainly need to redo:
>> - signature validation
>> - Build & integration tests (that should catch any potential error caused
>> by a change of hash function)
>>
>> That is pretty lightweight, should be good within a da
This vote has been cancelled in favour of RC2.
On Tue, Aug 2, 2016 at 1:51 PM, Stephan Ewen wrote:
> @Ufuk - I agree, this looks quite dubious.
>
> Need to resolve that before proceeding with the release...
>
>
> On Tue, Aug 2, 2016 at 1:45 PM, Ufuk Celebi wrote:
>
>>
Dear Flink community,
Please vote on releasing the following candidate as Apache Flink version 1.1.0.
The commit to be voted on:
45f7825 (http://git-wip-us.apache.org/repos/asf/flink/commit/45f7825)
Branch:
release-1.1.0-rc2
(https://git1-us-west.apache.org/repos/asf/flink/repo?p=flink.git;a=sho
cala 2.11 (Maven 3.0.5)
>> >
>> > Tested Local standalone installation, logs, out, all good
>> >
>> > Tested different memory allocation schemes (heap/offheap)
>> (preallocate/lazy
>> > allocate)
>> >
>> > Web UI works as expected
I think this separation reflects the way that Flink is currently used
anyway. I would be in favor of it as well.
- What about the ongoing efforts (I think by Gyula) to combine both the
batch and stream processing APIs? I assume that this would only affect the
performance and wouldn't pose a funda
On 17 Feb 2015, at 09:40, Stephan Ewen wrote:
> Hi everyone!
>
> We have been time and time again struck by the problem that Hadoop bundles
> many dependencies in certain versions, that conflict either with versions
> of the dependencies we use, or with versions that users use.
>
> The most pr
Hey Flinksters and IntelliJers, ;-)
the tests resources directory of each Maven module contains a
log4j-test.properties files, which gets picked via the classpath by JUnit
tests, but not Scalatest. Instead Scalatest picks up log4j.properties, but
JUnit doesn't.
It works when I specify the file
On 25 Feb 2015, at 16:35, Till Rohrmann wrote:
> The reason for this behaviour is the following:
>
> The log4j-test.properties is not a standard log4j properties file. It is
> only used if it is explicitly given to the executing JVM by
> -Dlog4j.configuration. The parent pom defines for the sur
Nice, Max! :)
On 04 Mar 2015, at 15:36, Stephan Ewen wrote:
> Great, thanks Max!
>
> Concerning (1), in the snapshot master, we can have stubs, IMHO.
I agree :)
Thanks for reporting. Even if it runs on your Ubuntu box, it might still be
a problem. It's actually nice to hear that it is reproducible.
Can you do the following after it stalls:
jps
And then a jstack for each process with a name like
"surefirebooter425130371299859".
Then we can see, whic
On 08 Mar 2015, at 15:05, Stephan Ewen wrote:
> Different parts of the code currently use different utilities to validate
> the arguments.
>
> - Some parts use Guava (checkNotNull, checkArgument)
> - Other parts use Validate from Apache commons-lang(3).
>
> How about we use one consistently,
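For reference, this is roughly what the two styles look like side by side
(a sketch with made-up parameter names; Guava's Preconditions and
commons-lang3's Validate are the utilities in question):

    import com.google.common.base.Preconditions;
    import org.apache.commons.lang3.Validate;

    public class ArgumentCheckStyles {

        // Guava style
        static void setParallelismGuava(Integer parallelism) {
            Preconditions.checkNotNull(parallelism, "Parallelism must not be null.");
            Preconditions.checkArgument(parallelism > 0, "Parallelism must be positive.");
        }

        // Apache commons-lang3 style
        static void setParallelismCommons(Integer parallelism) {
            Validate.notNull(parallelism, "Parallelism must not be null.");
            Validate.isTrue(parallelism > 0, "Parallelism must be positive.");
        }
    }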
Hey Stephan,
On 08 Mar 2015, at 23:17, Stephan Ewen wrote:
> Hi everyone!
>
> I would like to start an open discussion about some issue with the
> heterogeneity of the Flink code base.
Thanks for bringing this up. I agree with your position. The related discussion
about using Guava vs. Valida
Hey all,
I currently find it a little bit frustrating to navigate between different task
manager operations like cancel or submit task. Some of these operations are
directly done in the event loop (e.g. cancelling), whereas others forward the
message to a method (e.g. submitting).
For me, navigati
Hey Gyula,
Syncing between the two sometimes takes time. :( I don't think that
anything is broken. Let's wait a little longer.
– Ufuk
On Wednesday, March 11, 2015, Gyula Fóra wrote:
> Hey,
>
> I pushed some commits yesterday evening and it seems like the git repos
> somehow became inconsistent
On 10 Mar 2015, at 22:02, Vasiliki Kalavri wrote:
> Hi all,
>
> I would like your opinion on whether we should deprecate the Spargel API in
> 0.9.
>
> Gelly doesn't depend on Spargel, it actually contains it -- we have copied
> the relevant classes over. I think it would be a good idea to depr
+1 I think it's a good idea to remove it and finish the deprecation. ;)
Thanks for looking into it Fabian.
– Ufuk
On 10 Mar 2015, at 20:42, Henry Saputra wrote:
> Thanks guys,
>
> I have filed FLINK-1681 [1] to track this issue.
>
> Maybe Fabian would like to take stab at this?
>
> [1] http
On Tue, Mar 10, 2015 at 11:20 AM, Robert Metzger
wrote:
> I think
> it is time to evaluate whether we are confident that the master is stable.
>
In the course of finishing up #471 [1] I ran 20 Travis builds overnight,
of which 7 failed.
The (unexpected) failing test cases:
- ExternalSortITCas
On Thu, Mar 12, 2015 at 10:11 AM, Robert Metzger
wrote:
> So you're saying regarding the release you don't feel very confident that
> we manage to fork off release-0.9 next week?
>
Yes. At the moment I would be uncomfortable with forking off.
Regarding the failing tests: I thought that so
On Thursday, March 12, 2015, Till Rohrmann wrote:
> Have you run the 20 builds with the new shading code? With new shading the
> TaskManagerFailsITCase should no longer fail. If it still does, then we
> have to look into it again.
No, rebased on Monday before shading. Let me rebase and rerun to
On Saturday, March 14, 2015, Aljoscha Krettek wrote:
> I'm in favor of strict coding styles. And I like the google style.
+1 I would like that. We essentially all agree that we want more
homogeneity and I think strict rules are the only way to go. Since this is
a very subjective matter it makes
There was an issue for this:
https://issues.apache.org/jira/browse/FLINK-1634
Can we close it then?
On Sat, Mar 14, 2015 at 9:16 PM, Dulaj Viduranga
wrote:
> Hey Stephan,
> Great to know you could fix the issue. Thank you for the update.
> Best regards.
>
> > On Mar 14, 2015, at 9:19 PM, Stephan
On Fri, Mar 13, 2015 at 6:08 PM, Maximilian Michels wrote:
>
> Thanks for starting the discussion. We should definitely not keep
> flink-expressions.
>
> I'm in favor of DataTable for the DataSet abstraction equivalent. For
> consistency, the package name should then be flink-table. At first
> si
+1 I like the proposed structure.
The only thing I was wondering about is whether to rename "core" => "batch".
On Tue, Mar 17, 2015 at 11:37 AM, Márton Balassi
wrote:
> +1 for the proposed structure.
>
> I have no explicit preference for having batch and streaming scala together
> or separated. T
ava API into the same module means that we'll have
> more mixed Java/Scala projects, right? I just want to check if everyone is
> aware of it considering our latest experiences with these kind of modules.
>
> On Tue, Mar 17, 2015 at 2:21 PM, Ufuk Celebi wrote:
>
> > +1
On 19 Mar 2015, at 09:43, Stephan Ewen wrote:
> I like this proposal very much. We should do that as much as possible.
Same here. Makes it also easier to track progress.
(I think this should go hand in hand with better design descriptions in the
corresponding JIRAs.)
Thanks. I will have a look later :-)
+1 for the Wiki. I think the low overhead not only makes it easier for
newcomers to contribute, but for committers as well. :-)
On 20 Mar 2015, at 12:46, Kostas Tzoumas wrote:
> I added a document for data exchange between tasks:
> https://cwiki.apache.
On 23 Mar 2015, at 10:44, Stephan Ewen wrote:
> Hi everyone!
>
> With the streaming stuff getting heavier exposure, I think it needs a few
> more tests. With so many changes, untested features are running a high risk
> of being "patched away" by accident.
>
> For the runtime and batch API part,
sults.
I am actually very happy that we moved this to the Wiki... it is so much easier
to fix minor things now. :-)
On 20 Mar 2015, at 12:48, Ufuk Celebi wrote:
> Thanks. I will have a look later :-)
>
> +1 for the Wiki. I think the low overhead makle
>
> On 20 Mar 2015, at 1
Let's see what Travis replies to Robert, but in general I agree with Max.
Travis helped a lot to discover certain race conditions in the last weeks... I
would like to not ditch it completely, as Max suggested.
On 24 Mar 2015, at 16:03, Maximilian Michels wrote:
> I would also like to continue u
I saw a similar issue yesterday as well:
The following test gets stuck: TaskManagerFailsITCase should handle hard
failing task manager
Apparently, the task manager never registers. Can someone confirm this from
the stack trace? Has anyone else run into this as well?
$ jps
23800 surefirebooter6764
+Table, DataTable
---
How are votes counted? When voting for the name of the project, we didn't vote
for one name, but gave a preference ordering.
In this case, I am for Table or DataTable, but what happens if I vote for Table
and then there is a tie between DataTable and Relation? Will Table
On 26 Mar 2015, at 11:01, Robert Metzger wrote:
> Two weeks have passed since we've discussed the 0.9 release the last time.
>
> The ApacheCon is in 18 days from now.
> If we want, we can also release a "0.9.0-beta" release that contains known
> bugs, but allows our users to try out the new fea
On Thursday, March 26, 2015, Robert Metzger wrote:
> I'm fine with milestone.
> But I would really like to call it "milestone" instead of "M1" .. because I
> actually never thought about that weird version name of Jetty ... I fear
> that our users would also be confused by this.
Same here.
On Friday, March 27, 2015, Maximilian Michels wrote:
> +1 for 0.9.0-milestone-1
>
+1
On Fri, Mar 27, 2015 at 7:41 PM, Henry Saputra
wrote:
> Developers that solve the problem by fixing the issue should change
> the status to "Resolved" and the person who created the issue could
> change the status to "Closed" to verify.
>
Yes. JIRA itself says the following (there is a small tex
On a high level we call intermediate data produced by programs "intermediate
results". For example in a WordCount map-reduce program the map function
produces an intermediate result, which consists of (word, 1) pairs and the
reduce function consumes this intermediate result. Kostas has recently
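To make the terminology concrete, here is a minimal DataSet WordCount
sketch (the code is illustrative, not from the original mail): the flatMap
produces the intermediate result of (word, 1) pairs, and the grouped
aggregation consumes it.

    import org.apache.flink.api.common.functions.FlatMapFunction;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.util.Collector;

    public class WordCountSketch {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            DataSet<String> lines = env.fromElements("to be or not to be");

            // Producer: emits the intermediate result, one (word, 1) pair per word.
            DataSet<Tuple2<String, Integer>> wordsWithOne = lines
                .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                    @Override
                    public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                        for (String word : line.toLowerCase().split("\\W+")) {
                            out.collect(new Tuple2<>(word, 1));
                        }
                    }
                });

            // Consumer: reads the intermediate result and sums the counts per word.
            wordsWithOne.groupBy(0).sum(1).print();
        }
    }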
Hey Henry,
1) There is no extra message; this is piggy-backed on the finished
state transition (see Execution#markAsFinished). There it is essentially
the same mechanism.
2) It's part of my plan for this week to add documentation for exactly
this flow of RPC messages related to the runtime
Little side projects ftw. Very nice :-)
Can you give some pointers on how this works internally? Is it making use of
anything generic from the Python API pull request?
On Wednesday, April 1, 2015, Márton Balassi
wrote:
> Woot!
>
> On Wed, Apr 1, 2015 at 9:01 AM, Aljoscha Krettek >
> wrote:
>
> >
up ticket to update Kostas' awesome wiki page [2].
>
> - Henry
>
> [1]
> http://ci.apache.org/projects/flink/flink-docs-master/internal_job_scheduling.html
> [2]
> https://cwiki.apache.org/confluence/display/FLINK/Data+exchange+between+tasks
>
> On Tue, Mar 31, 2015 a
Hey all,
I think our documentation has grown to a point where we need to think about
how to make it more accessible.
I would like to add a custom Google search limited to the docs *excluding*
the API docs (otherwise the results are very noisy).
Mocks:
http://tinypic.com/view.php?pic=2qtlg1k&s=8
OK, I've added the change as PRs. Nothing fancy. It would still be nice if someone
checked it out locally and made sure that the search results refer to the
correct doc version.
https://github.com/apache/flink/pull/563
https://github.com/apache/flink/pull/564
– Ufuk
On 02 Apr 2015, at 12:08, Maximilian Michels wrote:
> Works really nicely.
>
> Two things:
> - Formatting issues: http://i.imgur.com/AUy53Oj.png
I'll see what can be done about this. But as you see from the commit, it's
essentially just a JavaScript include.
> - I don't like the placement
Please vote on releasing the following candidate as Apache Flink version
0.9.0-milestone-1.
We've decided to create a release outside the regular three-month release
schedule for the ApacheCon announcement and to give our users a convenient
way of trying out our great new features.
--
tput paths that contain windows drive
> > > letters (FLINK-1848).
> > > Given that this issue can be worked around by using paths without drive
> > > letters, the release is just a milestone release, and Windows is not
> our
> > > primary target platform, I wo
to be set correctly.
>
>
> On Thu, Apr 9, 2015 at 5:06 PM, Kostas Tzoumas wrote:
>
>> +1
>>
>> Ran tests on a debian machine.
>> Ran examples on a 4-node cluster via the YARN client.
>>
>>
>>
>> On Thu, Apr 9, 2015 at 12:15 PM, Ufu
Hey all,
I am not very proficient with Scala and have some questions regarding the
Scala Table API:
The logical queries in the Java API are all String-based, e.g.
table.groupBy("word")
In the Scala API, this works as well, but what is additionally possible is this:
expr.groupBy('word)
For comparisi
On 15 Apr 2015, at 14:11, Aljoscha Krettek wrote:
> This is just a personal annoyance and I think we are too advanced to
> change this now, but here goes: Could we rename the classes in the
> API, so that for example MapOperator becomes MapDataSet or MapSet, and
> the actual operators in the comm
On 15 Apr 2015, at 15:01, Stephan Ewen wrote:
> I think we can rename the base operators.
>
> Renaming the subclass of DataSet would be extremely api breaking. I think
> that is not worth it.
Oh, that's right. We return MapOperator for DataSet operations. Stephan's point
makes sense.
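To illustrate Stephan's point (a sketch; class names as in the Java
DataSet API of that time, the data is made up): the concrete operator
returned by map() is itself a DataSet, so code compiled against the
concrete return type would break if the subclasses were renamed.

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.operators.MapOperator;

    public class OperatorReturnTypes {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            DataSet<String> words = env.fromElements("to", "be", "or", "not");

            // map(...) returns the concrete MapOperator, which extends DataSet<Integer>.
            MapOperator<String, Integer> lengths =
                words.map(new MapFunction<String, Integer>() {
                    @Override
                    public Integer map(String word) {
                        return word.length();
                    }
                });

            // Most user code only relies on the DataSet supertype.
            DataSet<Integer> lengthsAsDataSet = lengths;
            lengthsAsDataSet.print();
        }
    }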
Hey all,
I've been asking myself: how can we make it as easy as possible for future
users who run into problems to find existing answers?
As an example take this answer from Stephan [1]. This is super valuable
feedback and a very specific question. Ideally, a new user who runs into
the same probl
l maintain user@ list
> as the official channel for communication for the project
>
> - Henry
>
> On Sat, Apr 18, 2015 at 2:16 AM, Ufuk Celebi > wrote:
> > Hey all,
> >
> > I've been asking myself: how can we make it as easy as possible for
> future
On 28 Apr 2015, at 12:31, Stephan Ewen wrote:
> +1 for the breaking change
>
> Let's not do this any more than necessary, but this is a good case...
+1
Stephan and I came up with the following document about how to handle failures
of tasks and how to make sure we properly attribute the failure to the correct
root cause and suppress follow-up failures. The document defines the behaviour
that should be followed for different kinds of task failure
On 28 Apr 2015, at 13:49, Maximilian Michels wrote:
> Hi Robert,
>
> Thanks for investigating the Travis build issues. I'm very much in favor
> of dropping Java 6. It's deprecated. All major Linux distributions are
> shipping at least Java 7. It's a rare use case that requires a lot of
> effor
I agree with Stephan's points. Thanks for reporting and let's investigate
this further.
To keep in mind: I think VisualVM is using hprof for CPU sampling, which
has some known issues (
http://www.brendangregg.com/blog/2014-06-09/java-cpu-sampling-using-hprof.html).
For one thing, it's profiling Ja
Hey all,
I reworked the project website over the last couple of days and would like to share
the preview:
http://uce.github.io/flink-web/
I would like to get this in asap. We can push incremental updates at any time,
but I think this version is a big improvement over the current status quo. If I
g
On 11 May 2015, at 17:42, 程浩 wrote:
> Hi, I am trying to set up the development env with Scala IDE or IntelliJ,
> however it seems both links
> https://github.com/apache/flink/blob/master/docs/internal_setup_eclipse.md
> https://github.com/apache/flink/blob/master/docs/internal_setup_intellij.md
>
>>>>>>>
>>>>>>> Hi Ufuk,
>>>>>>>>
>>>>>>>> I really like the idea of redesigning the start page. But in my
>>>>>>>> opinion your page design looks more like a documentation webpage
>>>>>>>>
On 14 May 2015, at 12:39, Vasiliki Kalavri wrote:
> Hey Ufuk,
>
> the logo still looks too big for the menu and so does the stack image now :S
> See attached image.. This is Chrome again. Not sure how you could fix it
> though.
Just had a small debug session with Vasia. The problem is fixed (
The website is now online.
For all future feedback, either file a JIRA or start a new thread on the ML.
On 15 May 2015, at 14:34, Maximilian Michels wrote:
> +1 great work
>
> I don't like that the website design is now pretty much like the
> documentation design. This is confusing if you're o