Ufuk Celebi created FLINK-4328:
-----------------------------------
Summary: Harden JobManagerHA*ITCase
Key: FLINK-4328
URL: https://issues.apache.org/jira/browse/FLINK-4328
Project: Flink
Issue Type: Improvement
Compon
Hi,
thanks for sharing the design doc; these are valuable ideas.
We might have to revisit the specifics once the re-sharding/key-group
changes are in Flink and once you actually want to start working on this.
Cheers,
Aljoscha
On Sat, 6 Aug 2016 at 07:32 Chen Qin wrote:
> Aljoscha
>
> Sorry abo
Hi,
I have done something similar in the past for storing state in sharded
MySQL databases. We used this for a while for state-size scaling reasons
but later switched to RocksDB, so this state backend has been removed
from Flink to cut maintenance costs.
You can find the initia
Aljoscha Krettek created FLINK-4329:
-----------------------------------
Summary: Streaming File Source Must Correctly Handle
Timestamps/Watermarks
Key: FLINK-4329
URL: https://issues.apache.org/jira/browse/FLINK-4329
Project: Flink
Robert Metzger created FLINK-4330:
-----------------------------------
Summary: Consider removing min()/minBy()/max()/maxBy()/sum()
utility methods from the DataStream API
Key: FLINK-4330
URL: https://issues.apache.org/jira/browse/FLINK-4330
Hi all!
We have a problem in the *DataStream API* around Windows for *CoGroup* and
*Join*.
These operations currently do not allow setting a parallelism, which is a
serious limitation.
To fix it properly, we need to change the return types of the coGroup() and
join() operations, which *breaks th
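The return-type problem above can be sketched without Flink. The mini-API below is purely illustrative (these are hypothetical stand-ins, not Flink's actual classes): the terminal apply() is declared to return the base stream type, so callers get no handle on which to set a parallelism, and widening the declared return type would break the existing public API.

```java
// Hypothetical mini-API illustrating the return-type problem; NOT Flink code.
class DataStream<T> {
    // Base type: deliberately has no setParallelism().
}

class SingleOutputStreamOperator<T> extends DataStream<T> {
    int parallelism = 1;

    SingleOutputStreamOperator<T> setParallelism(int p) {
        parallelism = p;
        return this;
    }
}

class CoGroupedStreams<A, B> {
    // Today's shape: the declared return type is the base DataStream, so
    // result.setParallelism(2) does not compile, even though the runtime
    // object would support it.
    <R> DataStream<R> apply() {
        return new SingleOutputStreamOperator<>();
    }
    // The clean fix -- declaring SingleOutputStreamOperator<R> here --
    // changes the method signature and therefore breaks the public API.
}

public class CoGroupParallelismSketch {
    public static void main(String[] args) {
        DataStream<String> result =
                new CoGroupedStreams<Integer, Integer>().apply();
        // result.setParallelism(2);  // does not compile on the base type
        // The "casting option" work-around: downcast to the wider type.
        ((SingleOutputStreamOperator<String>) result).setParallelism(2);
        System.out.println("parallelism set via cast");
    }
}
```

The cast works because the runtime object already is the wider type; the API merely hides it behind the narrower declared return type.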
Pushpendra Jaiswal created FLINK-4331:
-----------------------------------
Summary: Flink is not able to serialize scala classes / Task Not
Serializable
Key: FLINK-4331
URL: https://issues.apache.org/jira/browse/FLINK-4331
Project:
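A "Task Not Serializable" failure of the kind reported above typically arises when a user function captures a non-serializable object from its enclosing scope. A minimal, Flink-free JDK sketch of the failure mode (all names hypothetical; trySerialize mimics what happens when a user function is shipped to the cluster):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ClosureSerializationDemo {

    // A user type that forgets to implement Serializable.
    static class Config {
        final String url = "jdbc:mysql://example/db";
    }

    // Java-serializes the given object, as happens when a user function
    // is shipped over the wire.
    static String trySerialize(Object task) {
        try (ByteArrayOutputStream bytes = new ByteArrayOutputStream();
             ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(task);
            return "ok";
        } catch (NotSerializableException e) {
            return "not serializable: " + e.getMessage();
        } catch (IOException e) {
            return "io error: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        Config config = new Config();
        // The lambda itself is Serializable, but it captures 'config',
        // which is not -- so serialization fails at ship time.
        Runnable task =
                (Runnable & Serializable) () -> System.out.println(config.url);
        System.out.println(trySerialize(task));
    }
}
```

The usual fixes are to make the captured type Serializable, mark the field transient and re-create it in an open()-style lifecycle method, or capture only the serializable pieces (here, the URL string) instead of the whole object.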
Thank you for bringing this discussion to the mailing list.
I agree with Chesnay's comment on GitHub that we should consider the
"casting option" as well. I don't think we should break the API if there is
an (admittedly ugly) work-around for those who run into the problem.
If we put the work-around
The Flink PMC is pleased to announce the availability of Flink 1.1.0.
On behalf of the PMC, I would like to thank everybody who contributed
to the release.
The release announcement:
http://flink.apache.org/news/2016/08/08/release-1.1.0.html
Release binaries:
http://apache.openmirror.de/flink/fli
yoo-hoo finally announced 🎉
Thanks for managing the release Ufuk!
On 8 August 2016 at 18:36, Ufuk Celebi wrote:
> The Flink PMC is pleased to announce the availability of Flink 1.1.0.
>
> On behalf of the PMC, I would like to thank everybody who contributed
> to the release.
>
> The release anno
Stephan Ewen created FLINK-4332:
-----------------------------------
Summary: Savepoint Serializer mixed read()/readFully()
Key: FLINK-4332
URL: https://issues.apache.org/jira/browse/FLINK-4332
Project: Flink
Issue Type: Bug
Stephan Ewen created FLINK-4333:
-----------------------------------
Summary: Name mixup in Savepoint versions
Key: FLINK-4333
URL: https://issues.apache.org/jira/browse/FLINK-4333
Project: Flink
Issue Type: Bug
Compon
Great work indeed, and big thanks, Ufuk!
On Mon, Aug 8, 2016 at 6:55 PM, Vasiliki Kalavri
wrote:
> yoo-hoo finally announced 🎉
> Thanks for managing the release Ufuk!
>
> On 8 August 2016 at 18:36, Ufuk Celebi wrote:
>
> > The Flink PMC is pleased to announce the availability of Flink 1.1.0.
>
Aljoscha,
Sure thing, will do after the key/group feature is in place, when we have the bandwidth :)
Gyula,
That's where we started; many terms are carried over (logical timestamp,
compaction, lazy restore). We have to use Cassandra, which offers less in
transactions and consistency to gain availability and cross
Great work, all. Great thanks to Ufuk as RE :)
On Monday, August 8, 2016, Stephan Ewen wrote:
> Great work indeed, and big thanks, Ufuk!
>
> On Mon, Aug 8, 2016 at 6:55 PM, Vasiliki Kalavri <
> vasilikikala...@gmail.com >
> wrote:
>
> > yoo-hoo finally announced 🎉
> > Thanks for managing the rele
Shannon Carey created FLINK-4334:
Summary: Shaded Hadoop1 jar not fully excluded in Quickstart
Key: FLINK-4334
URL: https://issues.apache.org/jira/browse/FLINK-4334
Project: Flink
Issue Type:
Hello,
With the release of 1.1, I’m happy to update the apache-flink homebrew package
accordingly. Quick question: any objection to updating the package to use
Hadoop 2.7 and Scala 2.11? At the moment, the package is hardcoded to
Hadoop 2.6 and Scala 2.10.
Note there’
Zhenzhong Xu created FLINK-4335:
-----------------------------------
Summary: Add jar id, and job parameters information to job status
rest call
Key: FLINK-4335
URL: https://issues.apache.org/jira/browse/FLINK-4335
Project: Flink
Zhenzhong Xu created FLINK-4336:
-----------------------------------
Summary: Expose ability to take a savepoint from job manager rest
api
Key: FLINK-4336
URL: https://issues.apache.org/jira/browse/FLINK-4336
Project: Flink
Iss
Hi Stephan,
I did some research on blocking intermediate results. It turns out that
neither PipelinedSubpartition (see line 178) nor blocking intermediate
results (see SpillableSubpartition, line 189) can be read multiple times.
Moreover, blocking intermediate results are currently not supported
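The "can only be read once" behavior described above can be mirrored in a small stand-alone sketch (the class below is illustrative only, not Flink's actual subpartition code): the subpartition hands out a single read view and rejects any further consumer, so the data cannot be consumed a second time without being re-produced.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Illustrative stand-in for a result subpartition that, like the
// subpartitions discussed above, supports only a single consumer.
class OneShotSubpartition {
    private final List<String> buffers = new ArrayList<>();
    private boolean viewCreated = false;

    void add(String buffer) {
        buffers.add(buffer);
    }

    // The first call succeeds; any further call fails, so the buffered
    // data cannot be read a second time.
    Iterator<String> createReadView() {
        if (viewCreated) {
            throw new IllegalStateException("subpartition already consumed");
        }
        viewCreated = true;
        return buffers.iterator();
    }
}

public class OneShotSubpartitionDemo {
    public static void main(String[] args) {
        OneShotSubpartition part = new OneShotSubpartition();
        part.add("record-1");
        part.add("record-2");

        Iterator<String> view = part.createReadView();
        while (view.hasNext()) {
            System.out.println(view.next());
        }
        try {
            part.createReadView(); // second consumer: rejected
        } catch (IllegalStateException e) {
            System.out.println("second read failed: " + e.getMessage());
        }
    }
}
```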