Gyula Fora created FLINK-4155:
Summary: Get Kafka producer partition info in open method instead
of constructor
Key: FLINK-4155
URL: https://issues.apache.org/jira/browse/FLINK-4155
Project: Flink
It's just because Fabian said that it's better not to mix Java and Scala
(as you can see in the comments of that PR).
On 5 Jul 2016 18:53, "Aljoscha Krettek" wrote:
> I think it's not strictly required that all code be in Scala. There is
> already some Java code in there so we shouldn't force people to write Scala
> code if they make a valuable contribution in Java.
That's good news. :-) Thanks for looking into it Till and Stephan.
On Tue, Jul 5, 2016 at 5:37 PM, Till Rohrmann wrote:
> I talked to Stephan and he pointed out that the flink-dist binary file,
> which is Flink's fat jar, is not part of the official Flink release. We do
> offer to download this file as part of a zip file from the Flink website.
I think it's not strictly required that all code be in Scala. There is
already some Java code in there so we shouldn't force people to write Scala
code if they make a valuable contribution in Java.
On Tue, 5 Jul 2016 at 17:33 Flavio Pompermaier wrote:
> Hi to all,
> if Flink 1.1 will officially introduce the Table API, do you think someone
I talked to Stephan and he pointed out that the flink-dist binary file,
which is Flink's fat jar, is not part of the official Flink release. We do
offer to download this file as part of a zip file from the Flink website.
However, this is only for convenience. In contrast to that, other binary
files
Hi all,
if Flink 1.1 will officially introduce the Table API, do you think someone
could take care of rewriting in Scala the necessary Java code of my PR
about reading CSV as Rows instead of tuples [1]?
For our use cases, and IMHO for many new users approaching Flink, that will
definitely be useful
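For context, the PR's goal (reading CSV records as untyped, arbitrary-arity Rows instead of fixed-arity Tuples) can be sketched in plain Java. The `Row` class below is a hypothetical stand-in for illustration, not Flink's actual type:

```java
import java.util.Arrays;

public class CsvRowSketch {
    // A minimal Row: an untyped, arbitrary-arity record, unlike fixed-arity Tuple classes.
    static final class Row {
        final Object[] fields;
        Row(Object... fields) { this.fields = fields; }
        Object getField(int i) { return fields[i]; }
        int getArity() { return fields.length; }
        @Override public String toString() { return Arrays.toString(fields); }
    }

    // Parse one CSV line into a Row; every field stays a String here for simplicity.
    static Row parseLine(String line) {
        return new Row((Object[]) line.split(","));
    }

    public static void main(String[] args) {
        Row row = parseLine("1,alice,2016-07-05");
        System.out.println(row.getArity());  // 3
        System.out.println(row.getField(1)); // alice
    }
}
```

The point of the Row shape is that one parser can handle CSV files of any width, whereas Tuple-based reading fixes the arity (and the types) at compile time.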
Btw, another blocking issue, IMHO:
https://issues.apache.org/jira/browse/FLINK-4149
I'm working on a fix.
On Tue, 5 Jul 2016 at 17:08 Till Rohrmann wrote:
> I found another critical issue [1]. The murmur hash correction introduced
> between Flink 1.0 and 1.1 breaks the backwards compatibility with respect
> to savepoints.
I found another critical issue [1]. The murmur hash correction introduced
between Flink 1.0 and 1.1 breaks the backwards compatibility with respect
to savepoints. I think we have to fix this for the release.
@Ufuk, I'm not sure whether I find time this week to work on FLINK-4150. I
could make it a
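To illustrate why a hash correction is savepoint-breaking: keyed state is assigned to parallel subtasks by hashing the key, so changing the hash function silently reroutes keys. The sketch below is illustrative only; the murmur3-style finalizer and the particular "bug" (a signed instead of a logical shift) are assumptions for the demo, not Flink's actual code:

```java
public class MurmurSketch {
    // A murmur3-style finalizer (avalanche step), as might be used for key partitioning.
    static int fixedHash(int h) {
        h ^= h >>> 16;
        h *= 0x85ebca6b;
        h ^= h >>> 13;
        h *= 0xc2b2ae35;
        h ^= h >>> 16;
        return h;
    }

    // Hypothetical buggy variant: one arithmetic (signed) shift instead of a logical one,
    // so sign bits smear into the mix for negative intermediate values.
    static int buggyHash(int h) {
        h ^= h >>> 16;
        h *= 0x85ebca6b;
        h ^= h >> 13; // the assumed bug
        h *= 0xc2b2ae35;
        h ^= h >>> 16;
        return h;
    }

    // Map a hash to one of `parallelism` channels, as a partitioner would.
    static int channel(int hash, int parallelism) {
        return (hash & 0x7fffffff) % parallelism;
    }

    public static void main(String[] args) {
        int parallelism = 4, moved = 0;
        for (int key = 0; key < 1000; key++) {
            if (channel(buggyHash(key), parallelism) != channel(fixedHash(key), parallelism)) {
                moved++;
            }
        }
        // Keys whose channel changed would find their restored state on the wrong subtask.
        System.out.println(moved + " of 1000 keys changed channels");
    }
}
```

A savepoint taken under one hash and restored under the other leaves every "moved" key's state stranded on a subtask that no longer receives that key, which is why the correction has to be handled explicitly for the release.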
Till Rohrmann created FLINK-4154:
Summary: Correction of murmur hash breaks backwards compatibility
Key: FLINK-4154
URL: https://issues.apache.org/jira/browse/FLINK-4154
Project: Flink
Issue
Hi Ufuk,
The old sort-based combine is still the default. The user calls
.setCombineHint(CombineHint) to make a selection (I think this was
originally overloaded on DataSet and it looks like the PR 1517 documentation
update does not reflect the new usage).
I'd be glad to merge this in but I didn't
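To make the distinction concrete, here is a Flink-free sketch of what the two combine strategies do. The selection itself happens on the Flink side via `.setCombineHint(CombineHint.HASH)` as described above; the code below is an illustration under that framing, not Flink's implementation:

```java
import java.util.*;

public class CombineSketch {
    // Sort-based combine (the old default): sort by key, then fold adjacent runs.
    static Map<String, Integer> sortCombine(List<Map.Entry<String, Integer>> records) {
        List<Map.Entry<String, Integer>> sorted = new ArrayList<>(records);
        sorted.sort(Map.Entry.comparingByKey());
        Map<String, Integer> out = new LinkedHashMap<>();
        String curKey = null;
        int curSum = 0;
        for (Map.Entry<String, Integer> r : sorted) {
            if (!r.getKey().equals(curKey)) {
                if (curKey != null) out.put(curKey, curSum);
                curKey = r.getKey();
                curSum = 0;
            }
            curSum += r.getValue();
        }
        if (curKey != null) out.put(curKey, curSum);
        return out;
    }

    // Hash-based combine (CombineHint.HASH): aggregate directly into a hash table,
    // skipping the sort at the cost of holding one entry per distinct key in memory.
    static Map<String, Integer> hashCombine(List<Map.Entry<String, Integer>> records) {
        Map<String, Integer> out = new HashMap<>();
        for (Map.Entry<String, Integer> r : records) {
            out.merge(r.getKey(), r.getValue(), Integer::sum);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> records = Arrays.asList(
                Map.entry("b", 2), Map.entry("a", 1), Map.entry("a", 3), Map.entry("b", 4));
        System.out.println(sortCombine(records)); // {a=4, b=6}
        System.out.println(sortCombine(records).equals(hashCombine(records))); // true
    }
}
```

Both strategies produce the same aggregates; the hint only trades sorting work against hash-table memory, which is why it is safe to expose as a per-operator choice.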
Hi,
The rebalance actually distributes it to all the task managers, and now
all TMs are getting utilized. You were right, I am seeing two
boxes (tasks) now.
I have one question regarding the task slots:
For the source the parallelism is set to 56; now when we see on the UI and
click on source
This is what I was looking for.
Thank you Ufuk
Regards,
Vinay Patil
On Tue, Jul 5, 2016 at 5:39 PM, Ufuk Celebi wrote:
> There is also this:
> https://flink.apache.org/contribute-code.html#snapshots-nightly-builds
>
> The Hadoop 2 version is built for Hadoop 2.3. Depending on what you
> are trying to do, this might be a problem or not.
Aljoscha Krettek created FLINK-4153:
Summary: ExecutionGraphRestartTest Fails
Key: FLINK-4153
URL: https://issues.apache.org/jira/browse/FLINK-4153
Project: Flink
Issue Type: Bug
Affec
Robert Metzger created FLINK-4152:
Summary: TaskManager registration exponential backoff doesn't work
Key: FLINK-4152
URL: https://issues.apache.org/jira/browse/FLINK-4152
Project: Flink
Iss
There is also this:
https://flink.apache.org/contribute-code.html#snapshots-nightly-builds
The Hadoop 2 version is built for Hadoop 2.3. Depending on what you
are trying to do, this might be a problem or not.
On Tue, Jul 5, 2016 at 12:26 PM, Vinay Patil wrote:
> Yes, I had already done that yesterday
With lots of discussion about branding issues with Apache projects, e.g.
Apache Spark, I would recommend that we move cautiously.
We need to make sure companies listed on the homepage do not represent
contributions to or control over the Apache Flink project.
We got some concerns about our shepherd initiative
Good discussion, and I think we could bring this to the dev list instead.
One reminder: we should reduce cross-posting to the private@ and dev@ lists
to avoid accidental exposure to internal PMC business.
- Henry
On Monday, July 4, 2016, Stephan Ewen wrote:
> Hi all!
>
> I was wondering if we want to pu
Yes, I had already done that yesterday but got a dependency error while
doing it (since it was not able to download one jar from Nexus), so I
thought there might be another way.
Anyway, I will try to do that.
Thanks
Regards,
Vinay Patil
On Tue, Jul 5, 2016 at 3:11 PM, Aljoscha Krettek
wrote:
You would have to manually build a binary distribution from source to
run it on a cluster. This is the relevant section of the docs:
https://ci.apache.org/projects/flink/flink-docs-master/setup/building.html
What it boils down to, though, is that you have to check out the Flink
source and run "mv
Correct, so it means I cannot use it for running on a cluster?
In my code I have updated my dependency to 1.1-SNAPSHOT, so I wanted to
test it on the cluster with version 1.1.
Regards,
Vinay Patil
On Tue, Jul 5, 2016 at 2:56 PM, Aljoscha Krettek
wrote:
> Flink 1.1-SNAPSHOT is not a released version, this is the name of the
> latest master builds of what will eventually be released as Flink 1.1.
Flink 1.1-SNAPSHOT is not a released version, this is the name of the
latest master builds of what will eventually be released as Flink 1.1.
On Mon, 4 Jul 2016 at 18:08 Vinay Patil wrote:
> Hi,
>
> Can you please tell me how to download Flink 1.1-SNAPSHOT for running the job
> on the cluster, on the f
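For anyone following along: to compile against the snapshot, the usual approach is to add the Apache snapshots repository to the Maven build. The module name and Scala suffix below are assumptions matching the 1.1-SNAPSHOT line of that era, so check them against your setup:

```xml
<!-- Apache snapshots repository; needed because 1.1-SNAPSHOT is not on Maven Central. -->
<repositories>
  <repository>
    <id>apache.snapshots</id>
    <url>https://repository.apache.org/content/repositories/snapshots/</url>
    <snapshots><enabled>true</enabled></snapshots>
  </repository>
</repositories>

<dependencies>
  <!-- Assumed artifact; adjust the module and Scala suffix to your project. -->
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.10</artifactId>
    <version>1.1-SNAPSHOT</version>
  </dependency>
</dependencies>
```

Note that this only covers compiling the job; running it still requires a cluster on a matching 1.1-SNAPSHOT build, which is what the build-from-source advice in this thread addresses.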
Robert Metzger created FLINK-4151:
Summary: Address Travis CI build time: We are exceeding the 2
hours limit
Key: FLINK-4151
URL: https://issues.apache.org/jira/browse/FLINK-4151
Project: Flink
Great that we are all on the same page :-) Thanks for pointing out the
two issues, Aljoscha and Till. I agree with you, and I've updated them
to blockers ;-)
The FsStateBackend looks like it will be done soon. @Till: do you have
time to look into FLINK-4150 this week? I can also do it after I've
add
Stefan Richter found the following problem with HA:
https://issues.apache.org/jira/browse/FLINK-4150
I think we should fix it for the 1.1 release.
On Mon, Jul 4, 2016 at 9:05 PM, Robert Metzger wrote:
> +1 to do a RC0 this week, but the master-forking with RC1. I would like to
> reduce the time
+1, I like the idea :-)
On Tue, Jul 5, 2016 at 3:48 AM, Jark Wu wrote:
> It’s a great idea! I would be happy if I can help with something.
>
> In addition, maybe we can move the full “Powered By” wiki page to the
> website to reduce external links.
>
> - Jark Wu
>
> > On 4 July 2016, at 11:15 PM, Stephan Ewen