I think what you're suggesting is to load a large file into Kafka, which would
replicate it and make it available to all nodes. However, that is not what I
want.
What I want is to run a specific transformation step on a specific
TaskManager.
Any ideas?
--
View this message in context:
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Use-specific-Task-Manager-for-heavy-computations-tp13747p13771.html
Sent from the Apache Flink Mailing List archive. mailing list archive at
Nabble.com.
I need to load a PFA (Portable Format for Analytics) document that can be
around 30 GB and later process it with Hadrian, which is the Java
implementation of PFA (https://github.com/opendatagroup/hadrian).
I would like to execute this transformation step inside a specific worker
of the cluster (since I d
Mariano Gonzalez created FLINK-4629:
---
Summary: Kafka v 0.10 Support
Key: FLINK-4629
URL: https://issues.apache.org/jira/browse/FLINK-4629
Project: Flink
Issue Type: Wish
I couldn't find any repo or documentation about when Flink will start
supporting Kafka v 0.10.
Is there any document you can point me to where I can see Flink's
roadmap?
Thanks in advance
Mariano Gonzalez