Bump.
On Thu, Oct 25, 2018 at 9:11 AM Shailesh Jain
wrote:
> Hi Dawid,
>
> I've upgraded to flink 1.6.1 and rebased my changes against the tag 1.6.1,
> the only commit on top of 1.6 is this:
> https://github.com/jainshailesh/flink/commit/797e3c4af5b28263fd98fb79daaba97cabf3392c
>
> I ran two sep
Hi Henry,
You can specify a specific Hadoop version to build against:
> mvn clean install -DskipTests -Dhadoop.version=2.6.1
More details here[1].
Best, Hequn
[1]
https://ci.apache.org/projects/flink/flink-docs-master/flinkDev/building.html#hadoop-versions
On Tue, Oct 30, 2018 at 10:02 AM vi
Hi vino,
Our current production environment runs on YARN, and we do not want to add
the operations and maintenance cost of standalone mode.
Is there another way to make better use of the YARN cluster's resources, for
example by spreading tasks across containers on different nodes?
Thanks, Marvin.
Hi Henry,
You just need to change the "hadoop.version" property in the parent pom file.
Thanks, vino.
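For example, a minimal sketch of that property in the root pom.xml of a 1.6.x checkout (the value 2.6.1 below is just the version from Hequn's earlier reply, not a recommendation):

  <properties>
    <!-- set this to the Hadoop version actually deployed in your cluster -->
    <hadoop.version>2.6.1</hadoop.version>
  </properties>

Passing -Dhadoop.version=... on the mvn command line, as Hequn showed, overrides the same property without editing the pom.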
徐涛 wrote on Monday, October 29, 2018 at 11:23 PM:
> Hi Vino,
> Because I build the project with Maven, maybe I cannot use the jars
> downloaded directly from the web.
> If building with Maven, how can I make the Hadoop version match the Hadoop
> version that is actually used?
Is there a way to make a checkpoint/savepoint after the batch job has
finished and then run the job in a streaming mode with state that has been
initialized in batch mode?
Or, more generally speaking, what are the battle-tested solutions to the "job
initialization" problem, especially when there are te
I am testing Flink in a Kubernetes cluster and am finding that a job gets
caught in a recovery loop. Logs show that the issue is that a checkpoint
cannot be found although checkpoints are being taken per the Flink web UI. Any
advice on how to resolve this is most appreciated.
Note on below: I
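Not sure whether this applies here, but a frequent cause of "checkpoint not found" in Kubernetes is that checkpoint data is written to storage local to a pod and therefore disappears (or is invisible to the JobManager) after a restart. A minimal sketch of pointing checkpoints at shared, durable storage, using Flink 1.6 APIs; the S3 URI is a placeholder assumption:

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DurableCheckpoints {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint data goes to storage every pod and the JobManager can reach.
        env.setStateBackend(new FsStateBackend("s3://my-bucket/flink/checkpoints")); // placeholder URI
        env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);

        // Keep the latest checkpoint around on cancellation so the job can resume
        // from it after a pod restart.
        env.getCheckpointConfig().enableExternalizedCheckpoints(
                ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);

        // ... job topology ...
        env.socketTextStream("localhost", 9999).print();

        env.execute("durable-checkpoints");
    }
}

If high availability is enabled, the HA storage directory (e.g. high-availability.storageDir in flink-conf.yaml) also has to point at storage all pods can reach.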
Hi Vino,
Because I build the project with Maven, maybe I cannot use the jars
downloaded directly from the web.
If building with Maven, how can I make the Hadoop version match the Hadoop
version that is actually used?
Thanks a lot!!
Best
Henry
> On October 26, 2018 at 10:02 AM, vino yang wrote:
>
Hi,
all supported connectors and formats for the SQL Client with YAML can be
found in the connect section [1]. However, the JDBC sink is not
available for the SQL Client so far. It still needs to be ported, see [2].
However, if you want to use it, you could implement your own table
factory t
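In case it helps to get started, below is a rough, untested sketch of such a factory, assuming Flink 1.6's StreamTableSinkFactory and the JDBCAppendTableSink from flink-jdbc. The connector.* property keys and the fixed parameter types are illustrative assumptions only; a real factory would also have to declare the schema.* keys the SQL Client passes along.

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.io.jdbc.JDBCAppendTableSink;
import org.apache.flink.table.factories.StreamTableSinkFactory;
import org.apache.flink.table.sinks.StreamTableSink;
import org.apache.flink.types.Row;

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MyJdbcSinkFactory implements StreamTableSinkFactory<Row> {

    @Override
    public Map<String, String> requiredContext() {
        // Matched against the "connector" section of the YAML table definition.
        Map<String, String> context = new HashMap<>();
        context.put("connector.type", "my-jdbc"); // hypothetical identifier
        return context;
    }

    @Override
    public List<String> supportedProperties() {
        // Property keys this factory accepts (names are assumptions for this sketch).
        return Arrays.asList("connector.driver", "connector.url", "connector.query");
    }

    @Override
    public StreamTableSink<Row> createStreamTableSink(Map<String, String> properties) {
        // Delegate to the existing JDBC sink from flink-jdbc.
        return JDBCAppendTableSink.builder()
                .setDrivername(properties.get("connector.driver"))
                .setDBUrl(properties.get("connector.url"))
                .setQuery(properties.get("connector.query"))
                .setParameterTypes(Types.INT, Types.STRING) // fixed types, for the sketch only
                .build();
    }
}

The factory also has to be announced via the Java service loader (a META-INF/services/org.apache.flink.table.factories.TableFactory file containing the class name) and the jar placed on the SQL Client's classpath.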
Flink team,
I am exploring the Flink SQL Client and trying to configure a JDBC sink in
YAML.
I have only found some sample YAML configuration in the documentation:
https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/table/sqlClient.html
Where can I find the entire definitions for that YAML conf
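For what it's worth, the general shape of a table definition in the SQL Client environment file looks roughly like the example below (adapted from the connect/SQL Client pages for 1.6; treat the exact keys as approximate, and note that JDBC is not among the supported connector types yet):

tables:
  - name: MyTableSource
    type: source
    update-mode: append
    connector:
      type: filesystem
      path: "/path/to/input.csv"
    format:
      type: csv
      fields:
        - name: user_id
          type: INT
        - name: user_name
          type: VARCHAR
    schema:
      - name: user_id
        type: INT
      - name: user_name
        type: VARCHAR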
Awesome! Thanks a lot to you Chesnay for being our release manager and to
the community for making this release happen.
Cheers,
Till
On Mon, Oct 29, 2018 at 8:37 AM Chesnay Schepler wrote:
> The Apache Flink community is very happy to announce the release of
> Apache Flink 1.6.2, which is the second bugfix release for the Apache Flink 1.6 series.
Great news. Thanks a lot to you Chesnay for being our release manager and
the community for making this release possible.
Cheers,
Till
On Mon, Oct 29, 2018 at 8:36 AM Chesnay Schepler wrote:
> The Apache Flink community is very happy to announce the release of
> Apache Flink 1.5.5, which is the fifth bugfix release for the Apache Flink 1.5 series.
The Apache Flink community is very happy to announce the release of
Apache Flink 1.6.2, which is the second bugfix release for the Apache
Flink 1.6 series.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.
The Apache Flink community is very happy to announce the release of
Apache Flink 1.5.5, which is the fifth bugfix release for the Apache
Flink 1.5 series.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
streaming applications.
Hi Marvin,
YARN is a resource management and scheduling framework.
When you run Flink on YARN, Flink hands the scheduling of its containers over
to YARN.
That is exactly why YARN is used.
If you want to control the start and stop of the TaskManagers, then I recommend
you use standalone mode and set