Hi
Could someone share the 1.4 release timeline? I intend to use the Kafka 0.11
connector.
Thanks
Moiz
Thanks Stefan. I found the issue in my application. Everything is working
as expected now.
Once again thanks for the help and advice.
On Fri, Oct 20, 2017 at 4:51 AM, vipul singh wrote:
> Thanks Stefan for the answers. The serialization is happening during the
> creation of snapshot state. I hav
Dear All,
My name is Han. I'm very interested in your Flink system, and I'm
learning it.
I'm writing to your group about a question of mine. I tried to use the
Table API & SQL and register a TableSource using KafkaJsonTableSource, and
I have to say it works v
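For context, a minimal sketch of registering a Kafka JSON TableSource and
querying it. This assumes a Flink 1.3-style API (the exact class names and
constructor arguments differ between versions), and the topic name, field
names, and broker address below are placeholders:

import java.util.Properties;

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.Kafka010JsonTableSource;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class KafkaTableSourceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.setProperty("group.id", "example-group");           // placeholder group id

        // Schema of the JSON records; "word" and "cnt" are made-up field names.
        TypeInformation<Row> schema = new RowTypeInfo(
                new TypeInformation<?>[]{BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.LONG_TYPE_INFO},
                new String[]{"word", "cnt"});

        // Register the topic as a table and run a simple SQL query over it.
        Kafka010JsonTableSource source =
                new Kafka010JsonTableSource("example-topic", props, schema);
        tableEnv.registerTableSource("KafkaTable", source);

        Table result = tableEnv.sql("SELECT word, cnt FROM KafkaTable WHERE cnt > 0");
        // ... convert the Table back to a DataStream and call env.execute() as usual
    }
}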
Hi,
The document you are looking at is pretty old; you can check the newest
version here:
https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/batch/dataset_transformations.html
Regarding your question, you can use combineGroup.
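A rough sketch of how that can look on a DataSet (the word-count style
input and field names below are made up for illustration):

import org.apache.flink.api.common.functions.GroupCombineFunction;
import org.apache.flink.api.common.functions.GroupReduceFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class CombineGroupExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Made-up input: (word, count) pairs.
        DataSet<Tuple2<String, Integer>> words = env.fromElements(
                Tuple2.of("a", 1), Tuple2.of("b", 1), Tuple2.of("a", 1));

        // combineGroup pre-aggregates locally on each mapper before the shuffle
        // (like a combiner); the final reduceGroup merges the partial sums.
        DataSet<Tuple2<String, Integer>> result = words
                .groupBy(0)
                .combineGroup(new Sum())
                .groupBy(0)
                .reduceGroup(new Sum());

        result.print();
    }

    // The same function can serve as both the local combiner and the final reducer.
    public static class Sum
            implements GroupCombineFunction<Tuple2<String, Integer>, Tuple2<String, Integer>>,
                       GroupReduceFunction<Tuple2<String, Integer>, Tuple2<String, Integer>> {

        @Override
        public void combine(Iterable<Tuple2<String, Integer>> values,
                            Collector<Tuple2<String, Integer>> out) throws Exception {
            reduce(values, out);
        }

        @Override
        public void reduce(Iterable<Tuple2<String, Integer>> values,
                           Collector<Tuple2<String, Integer>> out) throws Exception {
            String key = null;
            int sum = 0;
            for (Tuple2<String, Integer> v : values) {
                key = v.f0;
                sum += v.f1;
            }
            out.collect(Tuple2.of(key, sum));
        }
    }
}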
Best,
Kurt
On Mon, Oct 23, 2017 at 5:22 AM, Le Xu wr
Hello!
I'm new to Flink and I'm wondering if there is an explicit local combiner
for each mapper that I can use to perform a local reduce on each mapper. I
looked at
https://ci.apache.org/projects/flink/flink-docs-release-0.8/dataset_transformations.html
but couldn't find anything that matches.
T
Hi,
I see that version 1.3 added the ResultPartitionMetrics with issue
https://issues.apache.org/jira/browse/FLINK-5090,
but I am not sure what the difference is between totalQueueLen and
inputQueueLength in
https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/metrics.html#network
I have Flink 1.2.1 running on Docker, with Task Managers distributed across
different VMs as part of a Docker Swarm.
I understand Dynamic Scaling is not yet available in Flink. Therefore, if I
wanted to increase the number of containers running Flink's task manager
(scale up), I would need to crea
Hi,
Yes, all nodes have the same /etc/hbase/conf/hbase-site.xml that contains
the correct settings for HBase to find ZooKeeper.
That is why adding that file as an additional resource to the
configuration works.
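For clarity, a minimal sketch of what adding the file as an additional
resource looks like with the standard Hadoop/HBase configuration API
(nothing here is Flink-specific):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class HBaseConfExample {
    public static void main(String[] args) {
        // Default HBase configuration, which picks up hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();
        // Explicitly add the cluster's hbase-site.xml so the ZooKeeper settings
        // are found even when the file is not on the classpath.
        conf.addResource(new Path("/etc/hbase/conf/hbase-site.xml"));
        System.out.println(conf.get("hbase.zookeeper.quorum"));
    }
}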
I have created a very simple project that reproduces the problem on my
setup:
https:
I have a folder where new files arrive on a schedule. Why is my Flink
readFile not reading the new files? I have used both *PROCESS_ONCE* and
*PROCESS_CONTINUOUSLY*. When I use *PROCESS_CONTINUOUSLY* it reads the same
file but the execution does not terminate, whereas with PROCESS_ONCE it
terminates in the IDE.
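For reference, a minimal sketch of the kind of setup in question (the
directory path and scan interval below are placeholders):

import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.FileProcessingMode;

public class ReadFileExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        String dir = "/data/incoming"; // placeholder: directory where new files arrive

        // PROCESS_CONTINUOUSLY re-scans the directory at the given interval (ms),
        // so the job never terminates; when a file is modified, its whole content
        // is re-read. PROCESS_ONCE scans the directory once and then finishes,
        // which is why that mode terminates in the IDE.
        DataStream<String> lines = env.readFile(
                new TextInputFormat(new Path(dir)),
                dir,
                FileProcessingMode.PROCESS_CONTINUOUSLY,
                10_000L);

        lines.print();
        env.execute("continuous file read");
    }
}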