Hi,
I was reading through the Flink docs, and I have come to the understanding
that each application will have its own instance of JobManager and
TaskManagers, and so every application will have to have an initial configuration
for defining the application topology to be drawn in the flink clus
Dear community,
Happy to share this week's community update with the release of Flink
1.8.2, more work in the area of dynamic resource management, three
proposals in the SQL space, and a bit more.
Flink Development
==
* [releases] *Flink 1.8.2* has been released. [1]
* [resource mana
Hi guys,
I have a Flink job running in standalone mode with a parallelism of >1
that produces data to a Kafka sink. My topic is replicated with a
replication factor of 2. Now suppose one of the Kafka brokers goes down;
will my streaming job fail? Is there a way in which I can continue
proc
I do not think you will have any problem with one broker going down, as long
as you have provided enough brokers in the bootstrap server list.
Thanks,
Shakir
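As a sketch of what Shakir suggests, the producer properties handed to Flink's Kafka sink can list several brokers, so the client can still bootstrap when one of them is down. The hostnames and exact property values below are hypothetical, not from the thread:

```java
import java.util.Properties;

public class KafkaSinkConfig {

    /** Builds producer properties that tolerate a single broker failure. */
    public static Properties producerProps() {
        Properties props = new Properties();
        // List more than one broker: bootstrapping succeeds as long as
        // at least one of these hosts is reachable (hostnames are made up).
        props.setProperty("bootstrap.servers",
                "broker1:9092,broker2:9092,broker3:9092");
        // With a replication factor of 2, waiting for all in-sync replicas
        // lets writes survive the loss of one broker.
        props.setProperty("acks", "all");
        // Retry transient failures (e.g. a leader election after a broker
        // goes down) instead of failing the job immediately.
        props.setProperty("retries", "3");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("bootstrap.servers"));
    }
}
```

These same `Properties` would typically be passed to the Flink Kafka producer's constructor; whether the job survives the failure also depends on the topic's `min.insync.replicas` setting on the broker side.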
From: Vishwas Siravara
Date: Sunday, September 15, 2019 at 2:52 PM
To: user
Subject: [EXTERNAL] High availability flink job
Hi guys,
I have a flink
Thanks for bringing this up, Stephan.
I am +1 on dropping support for Kafka 0.8. It is a pretty old version, and I
don't think there are many users on that version now.
However, for Kafka 0.9, I think there are still quite a few users on that
version. It might be better to keep it a little longer.
Hi Debasish,
From the information in the corresponding JIRA[1], 1.9.1 is not a fix
version for the issue you referred to. Technically, Flink back-ports notable
fixes to the branch release-1.9 and starts the release of 1.9.1 from that branch.
Visually, it looks like
... - ... - PR#9565 - ... - master
Thanks for the reply Oytun (and sorry for the late response, somehow just
noticed).
Requirement received; it's an interesting one. Let's see whether this could
draw any attention from the committers (smile).
Best Regards,
Yu
On Fri, 6 Sep 2019 at 22:14, Oytun Tez wrote:
> Hi Yu,
>
> Excuse my late re
Hi everyone!
I found that every time I start a Flink-on-YARN application, the client ships
the flink-uber jar and other dependencies to HDFS and starts the AppMaster. Are there any
approaches to locate the flink-uber jar and other library jars on HDFS so that only
the configuration file is shipped? Therefore the y
Hi Shengnan,
I think you mean to avoid uploading the flink-dist jars on every submission.
I have created a JIRA[1] to use the YARN public cache to speed up the launch
duration of the JM and TM. After this feature is merged, you could submit a Flink
job like below.
./bin/flink run -d -m yarn-cluster -p 20 -
Actually, there is a discussion on
https://issues.apache.org/jira/browse/FLINK-12501 regarding backporting
PR#9565 to the 1.9 branch. It would help us a lot, since we are using Avro and
Scala and are stuck on this issue.
Regards.
On Mon, Sep 16, 2019 at 8:05 AM Zili Chen wrote:
> Hi Debasish,
>
> From th