my local env.
>
> - Henry
>
> On Wednesday, October 28, 2015, Maximilian Michels wrote:
>
>> The issue with our Java Docs has been resolved. The link works again.
>>
>> On Tue, Oct 27, 2015 at 3:57 PM, Henry Saputra wrote:
>> > Ah thanks Max, sending
>> It would be nice if pure Java users would not see any Scala versioning (on
>> flink-core, flink-java, later also flink-streaming-java). I guess for any
>> runtime-related parts (including flink-client and currently all streaming
>> projects), we need the Scala versions...
fhue...@gmail.com) wrote:
>
> That would mean to have "flink-java_2.10" and "flink-java_2.11" artifacts
> (and others that depend on flink-java and have no other Scala dependency)
> in the 0.10.0 release and only "flink-java" in the next 1.0 release.
>
>
There is a bug in the Scala DataSet API: FLINK-2953
>>>
>>> Should be easy to solve. I will provide a fix soon.
>>>
>>> 2015-10-30 15:51 GMT+01:00 Maximilian Michels :
>>>
>>>> We can continue testing now:
>>>>
>>>> https:
:apacheds-kerberos-codec:jar:2.0.0-M15:compile
>> [INFO] | | \- org.codehaus.jackson:jackson-xc:jar:1.8.3:compile
>> [INFO] | | \- org.xerial.snappy:snappy-java:jar:1.0.5:compile
>> [INFO] | | | +- javax.xml.stream:stax-api:jar:1.0-2:compile
>>
>> In other
Please vote on releasing the following candidate as Apache Flink version
0.10.0:
The commit to be voted on:
347c2b00d4c32da118460a8da0a417f752f9177e
Branch:
release-0.10.0-rc5 (see
https://git1-us-west.apache.org/repos/asf/flink/?p=flink.git)
The release artifacts to be voted on can be found at:
Here is the testing document:
https://docs.google.com/document/d/1BYtis1Xg2Z9UpeV5UuH2kB6DnisnneOB19L1YBIw4Zs/edit
On Tue, Nov 3, 2015 at 12:24 PM, Maximilian Michels wrote:
> Please vote on releasing the following candidate as Apache Flink version
> 0.10.0:
>
> The commit to
See FLINK-2964 for more details.
>
>
> On Tue, Nov 3, 2015 at 12:43 PM, Maximilian Michels
> wrote:
>
> > Here is the testing document:
> >
> >
> https://docs.google.com/document/d/1BYtis1Xg2Z9UpeV5UuH2kB6DnisnneOB19L1YBIw4Zs/edit
> >
> > On Tue, Nov 3,
This vote is cancelled in favor of a new RC.
On Wed, Nov 4, 2015 at 10:59 AM, Maximilian Michels wrote:
> That's a big release blocker. Thanks for the fix, Till! I'm really glad we
> managed to fix this subtle bug.
>
> On Wed, Nov 4, 2015 at 2:19 AM, Till Rohrmann
>
On Wed, Nov 4, 2015 at 11:00 AM, Maximilian Michels wrote:
>
> > This vote is cancelled in favor of a new RC.
> >
> > On Wed, Nov 4, 2015 at 10:59 AM, Maximilian Michels
> > wrote:
> >
> > > That's a big release blocker. Thanks for the fix, Till! I'm really glad we managed to fix this subtle bug.
Thanks Gyula. Here is the issue:
https://issues.apache.org/jira/browse/FLINK-2965
On Wed, Nov 4, 2015 at 12:00 PM, Gyula Fóra wrote:
> done
>
> Till Rohrmann ezt írta (időpont: 2015. nov. 4., Sze,
> 11:19):
>
>> Could you please open or update the corresponding JIRA issue if existing.
>>
>> On W
Hi Gyula,
Trying to reproduce this error now. I'm assuming this is 0.10-SNAPSHOT?
Cheers,
Max
On Wed, Nov 4, 2015 at 1:49 PM, Gyula Fóra wrote:
> Hey,
>
> Running the following simple application gives me an error:
>
> //just counting by key, the
> streamOfIntegers.keyBy(x -> x).timeWindow(Time
It's a bug. It also occurs in the Java API. Perhaps we can find a fix
for the release..
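As a point of reference for the windowed count being discussed, the intended semantics of counting per key in tumbling time windows can be modeled in plain Java, independent of the Flink API (all names below are illustrative, not Flink classes):

```java
import java.util.HashMap;
import java.util.Map;

class WindowCountSketch {

    // Plain-Java model of "count per key per tumbling window"; NOT the Flink
    // API. Timestamps are supplied explicitly instead of coming from a clock.
    static Map<String, Long> countPerKey(long[] timestamps, int[] keys,
                                         long windowMillis) {
        Map<String, Long> counts = new HashMap<>();
        for (int i = 0; i < keys.length; i++) {
            // Tumbling windows: each timestamp falls into exactly one window.
            long windowStart = timestamps[i] - (timestamps[i] % windowMillis);
            counts.merge(keys[i] + "@" + windowStart, 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Two elements land in window [0, 5000), one in [5000, 10000).
        System.out.println(countPerKey(new long[]{0L, 100L, 5_500L},
                                       new int[]{1, 1, 1}, 5_000L));
    }
}
```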
On Wed, Nov 4, 2015 at 2:40 PM, Maximilian Michels wrote:
> Hi Gyula,
>
> Trying to reproduce this error now. I'm assuming this is 0.10-SNAPSHOT?
>
> Cheers,
> Max
>
> On
dded.
>
> Cheers,
>
> Till
>
>
> On Wed, Nov 4, 2015 at 2:54 PM, Gyula Fóra wrote:
>
>> This was java 8, snapshot 1.0 :)
>>
>> Maximilian Michels ezt írta (időpont: 2015. nov. 4., Sze,
>> 14:47):
>>
>> > It's a bug. It also occurs
Please vote on releasing the following candidate as Apache Flink version
0.10.0:
The commit to be voted on:
b2e9bf1ed1e3b60d8012b217166145b936835ea7
Branch:
release-0.10.0-rc6 (see
https://git1-us-west.apache.org/repos/asf/flink/?p=flink.git)
The release artifacts to be voted on can be found at:
Here's the current testing document:
https://docs.google.com/document/d/1PP9ar_Astl9TZ7rX2kkGXxxuM0gPr51QFvriNVVGK3s/edit
On Thu, Nov 5, 2015 at 10:28 AM, Maximilian Michels wrote:
> Please vote on releasing the following candidate as Apache Flink version
> 0.10.0:
>
> The commit to be voted on:
+1 The web client needs to go.
I don't really see how how a submission page on the job manager would make
security worse. At the moment, anyone who sends an Akka message to the job
manager, can execute arbitrary code..
On Thu, Nov 5, 2015 at 11:16 AM, Flavio Pompermaier
wrote:
> +1 to remove the web client.
> > >
> > > -- Sachin Goel
> > > Computer Science, IIT Delhi
> > > m. +91-9871457685
> > >
> > > On Thu, Nov 5, 2015 at 4:12 PM, Maximilian Michels
> > wrote:
> > >
> > >> +1 The web client needs to go.
Hi Martin,
You're right. The documentation needs to be updated. I have already filed a
JIRA for that: https://issues.apache.org/jira/browse/FLINK-2938
It's actually called getKeyValueState(...) now.
Cheers,
Max
On Thu, Nov 5, 2015 at 4:41 PM, Martin Neumann wrote:
> Hej,
>
> I'm working with
ting an update in the state docs now...
> >
> > On Thu, Nov 5, 2015 at 5:58 PM, Maximilian Michels
> wrote:
> >
> >> Hi Martin,
> >>
> >> You're right. The documentation needs to be updated. I have already
> filed
> >> a
> >>
n order to distinguish both
> > >> clearly.
> > >>
> > >> -Matthias
> > >>
> > >> On 11/05/2015 12:04 PM, Sachin Goel wrote:
> > >> > Disabling the access for secure installation: Pretty easy. Writing
> it
> > >
Hi Do,
Thanks for the script. I'm sure it will be helpful to people who want
to setup their own cluster. Some people use a tool for performance
testing called Yoka which also takes care of setting up a Flink and
Hadoop cluster. For example, the Flink part is available here:
https://github.com/mxm/
>> - run fault-tolerant job with Kafka while randomly killing TMs and JM
>> - check that java/scala quickstarts work (also with IntelliJ)
>> - run an example against a running cluster with RemoteEnvironment
>> - run the manual tests in flink-te
>> Ran examples on a 5-node cluster with Hadoop 2.7.1.
>>
>> +1
>>
>> Cheers,
>> -Vasia.
>>
>> On 9 November 2015 at 13:53, Maximilian Michels wrote:
>>
>> > +1
>> >
>> > - Checked the source files for binaries
>> > - Ran mvn c
Please note that this vote has a slightly shorter voting period of 48
hours. The previous RC was cancelled due to licensing issues which have
been resolved in this release candidate. Since the community has already
done extensive testing of the previous release candidates, I'm assuming 48
hours will suffice to vote on this release.
>> >>>> much
>> >>>> harder.
>> >>>> Max provided some numbers for applying the Google code style on the
>> >>> current
>> >>>> code base: The style checker found 28k violations (possibly multiple
>> >> ones
>> >
using the streaming API and Scala 2.11; ran locally
>
> Why I want to cancel the release:
>
> While trying to run the batch wordcount on the streaming API on a cluster,
> the client was failing due to a classloading issue:
> https://issues.apache.org/jira/browse/FLINK-2992
>
>
Please note that this vote has a slightly shorter voting period of 48
hours. Only very small changes have been made since the last release
candidate. Since the community has already done extensive testing of the
previous release candidates, I'm assuming 48 hours will suffice to vote on
this release
ChaosMonkeyITCase
> > - Checked the LICENSE and NOTICE files
> > - Tested streaming program with window implementation with custom session
> > timeout example
> >
> >
> > On Tue, Nov 10, 2015 at 9:41 PM, Maximilian Michels
> wrote:
> >
> >> Please
+1 for separating concerns by having a StreamExecutionConfig and a
BatchExecutionConfig with inheritance from ExecutionConfig for general
options. Not sure about the pre-flight and runtime options. I think
they are ok in one config.
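A minimal sketch of the hierarchy proposed here, assuming the class names from this thread; the fields are invented purely for illustration and this is not actual Flink code:

```java
// General options shared by batch and streaming (fields illustrative).
class ExecutionConfig {
    private int parallelism = 1;
    public ExecutionConfig setParallelism(int p) { parallelism = p; return this; }
    public int getParallelism() { return parallelism; }
}

// Streaming-only options live in the streaming subclass.
class StreamExecutionConfig extends ExecutionConfig {
    private long checkpointIntervalMillis = -1;  // -1 = checkpointing disabled
    public StreamExecutionConfig setCheckpointInterval(long ms) {
        checkpointIntervalMillis = ms; return this;
    }
    public long getCheckpointInterval() { return checkpointIntervalMillis; }
}

// Batch-only options live in the batch subclass.
class BatchExecutionConfig extends ExecutionConfig {
    private boolean objectReuse;
    public BatchExecutionConfig enableObjectReuse() { objectReuse = true; return this; }
    public boolean isObjectReuseEnabled() { return objectReuse; }
}
```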
On Wed, Nov 11, 2015 at 1:24 PM, Robert Metzger wrote:
> I think
+1 for the proposed changes. But why not always create a snapshot on
shutdown? Does that break any assumptions in the checkpointing
interval? I see that if the user has checkpointing disabled, we can
just create a fake snapshot.
On Thu, Nov 12, 2015 at 9:56 AM, Gyula Fóra wrote:
> Yes, I agree wi
Thanks for voting! The vote passes.
The following votes have been cast:
+1 votes: 7
Stephan
Aljoscha
Robert
Max
Chiwan*
Henry
Fabian
* non-binding
-1 votes: none
I'll upload the release artifacts and release the Maven artifacts.
Once the changes are effective, the community may announce the
release.
Thanks Fabian for drafting the release announcement. The release
artifacts have already been synced and I've updated the website for
the 0.10.0 release.
Seems like we are set up to publish the release announcement later today.
On Fri, Nov 13, 2015 at 1:41 PM, Ufuk Celebi wrote:
> https://cwiki.a
> Why did you change the dop from 4 to 1 in WordCountTopology? We should
> test in parallel fashion...
>
> * Too many reformatting changes ;) You touched many classes without any
> actual code changes.
>
>
>
>
>
>
> Forwarded Message
> Subject: Re: Storm C
Hi Robert.
Good suggestion. Generally, it would be nice to have complete code
examples available in the documentation. Even better, a way to only
show excerpts of the complete example with the option of copying the
complete working example.
For instance:
public class Example {
    public static void main(String[] args) { /* ... */ }
}
+1 for a 0.10.1 release pretty soon.
I merged FLINK-2989 (job cancel button doesn't work on YARN).
On Thu, Nov 19, 2015 at 10:10 AM, Till Rohrmann wrote:
> If they cover things which are also wrongly documented in 0.10, then they
> should be merged to 0.10-release as well.
>
> On Thu, Nov 19, 20
otely. Still, we can save some code by reusing Storm's
TopologyBuilder.
I'll open a pull request with the changes. This also includes some
more examples and features (e.g. multiple inputs per Bolt).
On Mon, Nov 16, 2015 at 4:33 PM, Maximilian Michels wrote:
> You are right in sa
> >> (the members we need to access are private). Your idea to get access via
> >> reflection is good!
> >>
> >> Btw: can you also have a look here:
> >> https://github.com/apache/flink/pull/1387
> >> I would like to merge this ASAP but need some feedback.
>
Thanks for being the release manager. I promise the release script works
like a charm :)
On Sun, Nov 22, 2015 at 12:30 PM, Robert Metzger
wrote:
> It seems that we have merged all critical fixes into the release-0.10
> branch.
> Since nobody else stepped up as a release manager, I'll do it again
I would rather not block the minor release on this issue. We don't
know if we have a valid fix for it. Let's get out the minor release
first and have another one when we have the fix.
On Tue, Nov 24, 2015 at 11:34 AM, Gyula Fóra wrote:
> Hi,
> Regarding my previous comment for the Kafka/Zookeeper
Hi André, hi Martin,
This looks very much like a bug. Martin, I would be happy if you
opened a JIRA issue.
Thanks,
Max
On Sun, Nov 22, 2015 at 12:27 PM, Martin Junghanns
wrote:
> Hi,
>
> What he meant was MultipleProgramsTestBase, not FlinkTestBase.
>
> I debugged this a bit.
>
> The NPE is thr
ence,
>> > especially when refactor tools are involved. I once looked into a doc
>> tool
>> > for automatically extracting snippets from source code, but that turned
>> > into a rat-hole and didn't pursue it further. Maybe tooling has improved
>> > sinc
Hi Martin,
Great. Thanks for the fix!
Cheers,
Max
On Tue, Nov 24, 2015 at 7:40 PM, Martin Junghanns
wrote:
> Hi Max,
>
> fixed in https://github.com/apache/flink/pull/1396
>
> Best,
> Martin
>
>
> On 24.11.2015 13:46, Maximilian Michels wrote:
>>
>> Hi
Great. We released that one fast. Thanks Robert.
On Fri, Nov 27, 2015 at 3:27 PM, Robert Metzger wrote:
> The Flink PMC is pleased to announce the availability of Flink 0.10.1.
>
> The official release announcement:
> http://flink.apache.org/news/2015/11/27/release-0.10.1.html
> Release binaries:
Thanks for getting us started on annotating the API. The list
looks good so far. I have the feeling it could even be extended a bit.
Just curious, how did you choose which classes you annotate? Did you
go through all the classes in flink-core, flink-java, and
flink-clients Maven projects?
What
dn’t add the parsers in org.apache.flink.types.parser
>
>
>
>
> On Mon, Nov 30, 2015 at 10:19 AM, Maximilian Michels wrote:
>
>> Thanks for getting us started on annotating the API. The list
>> looks good so far. I have the feeling it could even be extended a bit.
Hi Matthias,
Thank you for the blog post. You had already shared a first draft with
me. This one looks even better!
I've made some minor comments. +1 to merge if these are addressed.
Cheers,
Max
On Wed, Dec 9, 2015 at 1:20 PM, Matthias J. Sax wrote:
> Just updated the draft (thanks to Till and
Hi squirrels,
By this time, we have numerous connectors which let you insert data
into Flink or output data from Flink.
On the streaming side we have
- RollingSink
- Flume
- Kafka
- Nifi
- RabbitMQ
- Twitter
On the batch side we have
- Avro
- Hadoop compatibility
- HBase
- HCatalog
- JDBC
Ma
rs and flink
>> and have very good checks that ensure that we don’t inadvertently break
>> things.
>>
>> > On 10 Dec 2015, at 15:45, Fabian Hueske wrote:
>> >
>> > Sounds like a good idea to me.
>> >
>> > +1
>> >
>> > F
On 12/09/2015 08:12 PM, Vasiliki Kalavri wrote:
>>>> Thanks Matthias! This is a very nice blog post and reads easily.
>>>>
>>>> On 9 December 2015 at 19:21, Ufuk Celebi wrote:
>>>>
>>>>> Great post! Thanks!
>>>>>
> If this approach is the way to go for Flink connectors,
> could we do the same for Flink ML libraries?
>
>
> - Henry
>
> On Fri, Dec 11, 2015 at 1:33 AM, Maximilian Michels wrote:
>> We should have release branches which are in sync with the release
>> branches in the main r
Hi Aljoscha,
Thanks for the informative technical description.
> - function state: this is the state that you get when a user function
> implements the Checkpointed interface. it is not partitioned
> - operator state: This is the state that a StreamOperator can snapshot, it
> is similar to th
> I'm against the name "plugins" because everything (documentation, code,
> code comments, ...) is called "connectors" and it would be a pretty
> breaking change. I also think that "connector" describes much better what
> the whole thing is about.
>
>
>
> On
more efficiently.
> >> Think
> >>> of incremental checkpoints, for example, these are easy to do if you
> know
> >>> that state is a list to which stuff is only appended.
> >>>> On 14 Dec 2015, at 10:52, Stephan Ewen wrote:
> >>>>
>
>> programs
>> get a lot of extension points in Flink ;)
>>
>> I've opened a pull request with my current suggestion:
>> https://github.com/apache/flink/pull/1426
>>
>>
>> On Tue, Dec 1, 2015 at 2:13 PM, Maximilian Michels wrote:
>>
>>> Than
Hi Aljoscha,
I'm in favor of option 2: Keep the setStreamTimeCharacteristic to set
the default time behavior. Then add a method to the operators to set a
custom time behavior.
The problem is illustrated by SlidingTimeWindows:
@Override
public Trigger<Object, TimeWindow> getDefaultTrigger(StreamExecutionEnvironment env) {
    return EventTimeTrigger.create();
}
Hi Ali,
Could you please also post the Hadoop version output of the task
manager log files? It looks like the task managers are running a
different Hadoop version.
Thanks,
Max
On Tue, Dec 22, 2015 at 4:28 PM, Kashmar, Ali wrote:
> Hi Robert,
>
> I found the version in the job manager log file:
Flink and Spark are open source projects which both have similar
problem domains. In some parts, their methodologies are similar, e.g.
because they build on Hadoop, use the Akka library, or implement
machine learning algorithms. In other parts, they are very different,
e.g. pipelined (Flink) vs batch (Spark) execution.
files:
>
> 11:25:04,100 WARN org.apache.hadoop.util.NativeCodeLoader
> - Unable to load native-hadoop library for your platform... using
> builtin-java classes where applicable
>
> Do you think it has anything to do with it?
>
> Thanks,
> Ali
>
> On 201
+1 for a protected master.
+1 for creating release tags under rel/.
On Thu, Jan 14, 2016 at 10:07 AM, Chiwan Park wrote:
> +1 for protecting all branches including master.
>
>> On Jan 14, 2016, at 1:20 AM, Aljoscha Krettek wrote:
>>
>> +1 on protecting the master
>>> On 13 Jan 2016, at 14:46, Má
Hi Nick,
That was an oversight when the release was created. As Stephan
mentioned, we have a policy that the corresponding final release
branch is read-only. Creating the tag is just a formality but of
course important. I've pushed a 'release-0.10.1' release tag. The
corresponding hash is 2e9b231.
Please keep the history, just delete everything in a commit. Also, it
would be helpful to update the GitHub project description with a
deprecation warning. Not all people scroll to the readme.
On Fri, Jan 15, 2016 at 7:26 PM, Vasiliki Kalavri
wrote:
> Do we want to keep the history or shall I for
Pleased to have you with us Chengxiang!
Cheers,
Max
On Tue, Jan 19, 2016 at 11:13 AM, Chiwan Park wrote:
> Congrats! Welcome Chengxiang Li!
>
>> On Jan 19, 2016, at 7:13 PM, Vasiliki Kalavri
>> wrote:
>>
>> Congratulations! Welcome Chengxiang Li!
>>
>> On 19 January 2016 at 11:02, Fabian Hueske wrote:
Hi Gordon,
You may use "topic" and "offset" for whatever you like. Note that this
is just an interface. If it does not work for your Kinesis adapter,
you may create a new interface. For existing usage of the
KeyedDeserializationSchema, please have a look at the
FlinkKafkaConsumer.
Cheers,
Max
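To illustrate the point about the interface, here is a sketch of a KeyedDeserializationSchema-style contract. The interface and method names below are illustrative stand-ins modeled on the description above, not copied from the Flink sources:

```java
import java.nio.charset.StandardCharsets;

// Stand-in for a keyed deserialization schema: the payload is delivered
// together with record metadata. A Kinesis adapter could reinterpret
// `topic` as a stream/shard name and `offset` as a sequence number, or
// define its own interface instead.
interface RecordDeserializer<T> {
    T deserialize(byte[] messageKey, byte[] message,
                  String topic, int partition, long offset);
}

class StringWithOriginDeserializer implements RecordDeserializer<String> {
    @Override
    public String deserialize(byte[] messageKey, byte[] message,
                              String topic, int partition, long offset) {
        // Prefix the decoded payload with its origin for debugging.
        return topic + "/" + partition + "@" + offset + ": "
                + new String(message, StandardCharsets.UTF_8);
    }
}
```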
On
I've filed an issue at infra to protect the master:
https://issues.apache.org/jira/browse/INFRA-11088
On Fri, Jan 15, 2016 at 3:40 PM, Maximilian Michels wrote:
> +1 for a protected master.
> +1 for creating release tags under rel/.
>
> On Thu, Jan 14, 2016 at 10:07 AM, Chiwan Park wrote:
Hueske wrote:
>>
>> > Ah OK. Sorry, I misunderstood your intention.
>> >
>> > 2015-11-02 14:07 GMT+01:00 Maximilian Michels :
>> >
>> > > > That would mean to have "flink-java_2.10" and "flink-java_2.11"
"flink-streaming-java"
grow from 3.1 MB (without shading) to 59.7 MB (with shading). That
seems a bit too large for users although it would get rid of the Scala
suffix without refactoring.
On Thu, Jan 21, 2016 at 11:33 PM, Ufuk Celebi wrote:
>
>> On 21 Jan 2016, at 17:51, Maxi
ions, e.g. combining
flink-streaming-java (instead of flink-streaming-java_2.11) with
flink-runtime_2.11. Once 1.0.0 is out, the old artifacts won't be a
problem anymore.
Any objections?
On Fri, Jan 22, 2016 at 6:30 PM, Maximilian Michels wrote:
> +1 for a big notice once we merge this.
>
Dear users and developers,
We have merged changes [1] that will affect how you build Flink
programs with the latest snapshot version of Flink and with future
releases. Maven artifacts which depend on Scala are now suffixed with
the Scala major version, e.g. "2.10" or "2.11".
While some of the Mav
Hi Ufuk,
+1 If the fixes are straightforward such that we don't need to test
extensively, I'm all for it! Releasing doesn't take much time then.
On Mon, Feb 1, 2016 at 9:57 PM, Nick Dimiduk wrote:
> +1 for a 0.10.2 maintenance release.
>
> On Monday, February 1, 2016, Ufuk Celebi wrote:
>
>> He
There is currently only a FlumeSink. The FlumeSource is a dummy file
(copied from the AvroSource) and needs to be removed.
On Wed, Feb 3, 2016 at 3:11 PM, Matthias J. Sax wrote:
> FlumeSink is there. FlumeSource and FlumeTopology is all put in
> comments... Not sure about it.
>
> There is no JIR
missing documentation?
>
> On 02/03/2016 03:18 PM, Maximilian Michels wrote:
> > There is currently only a FlumeSink. The FlumeSource is a dummy file
> > (copied from the AvroSource) and needs to be removed.
> >
> > On Wed, Feb 3, 2016 at 3:11 PM, Matthias J. Sax
> wrote:
>
Thanks for the RC. The documentation version is a really minor issue
which most people won't even notice because they use the online
documentation. I wouldn't create a new RC just for fixing that.
On Fri, Feb 5, 2016 at 12:45 PM, Ufuk Celebi wrote:
>
>> On 05 Feb 2016, at 12:17, Fabian Hueske wrote:
Hi Robert,
Thanks a lot for all the work of going through the classes. At first
sight, the classes look quite well chosen.
One question concerning the @Public, @Experimental, and @Internal annotations:
@Public may only be used for classes or interfaces. @Experimental or
@Internal are used for ma
Hi Stefano,
1) Please open a pull request. If the String depends on the locale,
this looks like a bug.
2) "mvn clean install" is the way to go for the complete check. If you
only want to run certain tests, selecting and running them from
IntelliJ works pretty well. In addition, it is nice to push
>> >> you
>> >> find something that does not look correct, please start a discussion
>> >> about.
>> >> The annotations can still be changed before the 1.0 release.
>> >>
>> >> I agree with Max that annotating more classes wi
> Is there a way from the command line to do what can be done with IntelliJ?
> I mean build only what's needed and run a specific test? I've tried `mvn
> test` but it skips the build phase and I don't know how to perform an
> incremental build on Maven.
>
> Thank you again!
>
> On Mon,
Bravo! Thank you Ufuk for managing the release!
On Fri, Feb 12, 2016 at 2:02 PM, Fabian Hueske wrote:
> Thanks Ufuk!
>
> 2016-02-12 12:57 GMT+01:00 Ufuk Celebi :
>
>> The Flink PMC is pleased to announce the availability of Flink 0.10.2.
>>
>> On behalf of the Flink PMC, I would like to thank eve
Hi Deepak,
The job manager doesn't have to know about task managers. They will
simply register at the job manager using the provided configuration.
In HA mode, they will lookup the currently leading job manager first
and then connect to it. The job manager can then assign work.
Cheers,
Max
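As context for the leader lookup mentioned above: in HA mode the task managers discover the leading job manager via ZooKeeper, configured roughly like this in the 0.10/1.0 era (the key names changed in later Flink versions, so treat this as a sketch):

```yaml
# flink-conf.yaml (0.10/1.0-era key names; hosts are placeholders)
recovery.mode: zookeeper
recovery.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
recovery.zookeeper.storageDir: hdfs:///flink/recovery
```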
On Tu
Welcome to the Flink community, Josep!
While reading the documentation, if anything catches your eye, feel
free to correct or make an addition via a pull request. That's a great
way to contribute.
Cheers,
Max
On Wed, Feb 17, 2016 at 8:29 PM, Jamie Grier wrote:
> Welcome, Josep!
>
> On Wed, Feb
nk. The language/tone probably needs a bit of
> refinement.
>
> best regards
> martin
>
> [1] https://github.com/blog/2111-issue-and-pull-request-templates
>
> Till Rohrmann schrieb am Do., 15. Okt. 2015 um
> 11:58 Uhr:
>
>> Thanks for leading the effort F
> Best, Fabian
>
> 2016-02-19 9:35 GMT+01:00 Martin Liesenberg :
>
>> Cool, if no one objects, I'll create a JIRA ticket and open a corresponding
>> PR during the weekend.
>>
>> Best regards
>> Martin
>>
>> On Thu, 18 Feb 2016, 17:36 Maximil
Hi Martin,
Thanks for the proposal. This is a great idea and will help new contributors.
How about having three sections and fewer check boxes? I think checking
all those boxes will get annoying for regular contributors.
[ ] Pull Request
- JIRA issue associated
- Pull request only addresses
Hi Greg,
I agree that we should encourage people to use the "fix version" field
more carefully. I think we need to agree on how we use "fix version".
How about going through the existing "fix version" tagged issues
instead of just removing the tag? I do think that the tagged issues
represent overa
Thanks for driving the release and for the new RC. I'll check it out!
On Tue, Mar 1, 2016 at 10:01 AM, Robert Metzger wrote:
> Here is the doc:
> https://docs.google.com/document/d/1hoQ5k4WQteNj2OoPwpQPD4ZVHrCwM1pTlUVww8ld7oY/edit?usp=sharing
>
> On Tue, Mar 1, 2016 at 9:49 AM, Vasiliki Kalavri
Hi Janardhan,
I just fixed that on the master and the release-1.0 branch because you
mentioned this on the user mailing list.
Thanks,
Max
On Fri, Mar 11, 2016 at 10:32 AM, Aljoscha Krettek wrote:
> Hi,
> could you please open a Jira issue for that.
>
> Cheers,
> Aljoscha
>> On 11 Mar 2016, at 0
Hi Deepak,
We'll look more into this problem this week. Until now we considered it a
configuration issue if the bind address was not externally reachable.
However, one might not always have the possibility to change this network
configuration.
Looking further, it is actually possible to let the b
We're currently discussing this here:
https://issues.apache.org/jira/browse/FLINK-2821
Best,
Max
On Mon, Mar 14, 2016 at 4:49 PM, Deepak Jha wrote:
> Hi Maximilian,
> Thanks for your response. I will wait for the update.
>
> On Monday, March 14, 2016, Maximilian Michels wrote:
>
+1
On Wed, Mar 23, 2016 at 12:19 PM, Till Rohrmann wrote:
> +1
>
> On Wed, Mar 23, 2016 at 11:24 AM, Stephan Ewen wrote:
>
>> Yes, there is also the Rich Scala Window Functions, and the tests that used
>> to address wrong JAR directories.
>>
>> On Wed, Mar 23, 2016 at 11:15 AM, Ufuk Celebi wrote:
Hi Stefano,
Sounds great. Please go ahead! Note that Flink already provides the
proposed feature for per-job Yarn clusters. However, it is a valuable
addition to realize this feature for the Yarn session.
The only blocker that I can think of is probably this PR which changes
a lot of the Yarn cla
Hi Eron,
Thank you for your feedback! Indeed, we have seen in the past, that
Hadoop's Delegation Tokens are not meant to renewed over a long
period. Plus, they have a number of subtle bugs in older versions that
sometimes prevent renewal.
What you suggest, sounds like a good approach to me. It wo
Hi Stefano,
Thanks for pointing out this bug. Your analysis is correct. The per-job
cluster does not ship the /lib directory by default. Would you like to open
an issue/PR? We should let the ship_path default to the /lib directory.
The mechanism with the environment variables is the same. They us
Yeah! I'm a little late to the party but exciting stuff! :)
On Fri, Mar 18, 2016 at 3:15 PM, Vasiliki Kalavri wrote:
> Hi all,
>
> tableOnCalcite has been merged to master :)
>
> Cheers,
> -Vasia.
>
> On 17 March 2016 at 11:11, Fabian Hueske wrote:
>
> > Thanks for the initiative Vasia!
> > I w
Hi Ozan,
You probably want to look at a custom Trigger implementation. Please see
the different triggers in
org/apache/flink/streaming/api/windowing/triggers/. You can write your own
event-based trigger. Best thing would be to extend the EventTimeTrigger
with your logic.
Then you can use windowed
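The suggestion above (extend the event-time trigger with custom logic) can be modeled in plain Java. This mimics the FIRE/CONTINUE shape of a windowing trigger but is not the actual Flink Trigger API; all names are illustrative:

```java
// Result of offering an element to a trigger, mirroring the usual
// fire-or-continue decision a windowing trigger makes.
enum TriggerResult { CONTINUE, FIRE }

// Event-time behavior (fire when the watermark passes the window end)
// extended with a custom early-firing condition based on element count.
class CountOrEndOfWindowTrigger {
    private final long maxCount;
    private long seen = 0;

    CountOrEndOfWindowTrigger(long maxCount) { this.maxCount = maxCount; }

    TriggerResult onElement(long watermark, long windowEnd) {
        seen++;
        if (seen >= maxCount || watermark >= windowEnd) {
            seen = 0;                 // reset state after firing
            return TriggerResult.FIRE;
        }
        return TriggerResult.CONTINUE;
    }
}
```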
Hi Matthias,
Thanks for spotting the test failure. It's actually a bug in the code
and not a test problem. Fixing it.
Cheers,
Max
On Fri, Apr 1, 2016 at 9:33 AM, Ufuk Celebi wrote:
> Hey Matthias,
>
> the test has been only recently added with the resource management
> refactoring. It's probabl
Fixed with the resolution of https://issues.apache.org/jira/browse/FLINK-3689.
On Fri, Apr 1, 2016 at 12:40 PM, Maximilian Michels wrote:
> Hi Matthias,
>
> Thanks for spotting the test failure. It's actually a bug in the code
> and not a test problem. Fixing it.
>
> C
Made a few suggestions. Reads well, Till!
On Mon, Apr 4, 2016 at 10:10 AM, Ufuk Celebi wrote:
> Same here.
>
> +1 to publish
>
> On Mon, Apr 4, 2016 at 10:04 AM, Aljoscha Krettek wrote:
>> Hi,
>> I like it. Very dense and very focused on the example but I think it should
>> be good for the Flink
The version of force-shading had to be a snapshot version, otherwise
the nightly deployment of snapshots wouldn't work. However, since we
have 1.0.0 already in place, we could revert the version to 1.0.0
again and skip the deployment from now on. I don't think it makes much
of a difference but it w
Hi Ufuk,
Thanks for updating the page. The "latest documentation" points to the
page itself and not the documentation. I've fixed that and added the
slides from Big Data Warsaw.
Cheers,
Max
On Mon, Apr 4, 2016 at 12:09 PM, Ufuk Celebi wrote:
> @Paris: Just added it. Thanks for the pointer. Grea
the release. Just a
minor improvement.
On Mon, Apr 4, 2016 at 12:16 PM, Robert Metzger wrote:
> We can probably move the force shading module into a (deactivated) build
> profile and set the dependency to 1.0.0, so that it just looks like a
> regular dependency.
>
> On Mon, Apr 4, 20