Hey Yang,
Thank you for the fast response.
I get your point, but assuming 3 JobManagers are up, if the leader fails,
one of the other 2 should become the new leader, no?
If the cluster fails, the new leader should handle that.
Another scenario could be that the JobManager stops (gets killed b
For native K8s integration, the Flink ResourceManager will delete the
JobManager K8s deployment as well as the HA data once the job has reached a
globally terminal state.
However, it is indeed a problem for standalone mode since the JobManager
will be restarted again even if the job has finished. I think
Given that you are running multiple JobManagers, it does not matter for the
"already exists" exception during leader election.
BTW, I think running multiple JobManagers does not bring much advantage
when deploying Flink on Kubernetes, because a new JobManager will be
started immediately once the
Hey Hjw,
Under the current Flink architecture (i.e., task states are stored locally
and periodically uploaded to remote durable storage during checkpointing),
there is no way other than scaling out your application to solve the
problem. This is equivalent to making the state size in each ta
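The effect of scaling out can be sketched with simple arithmetic (plain Java, illustrative numbers only; nothing here is Flink API, and it assumes keyed state distributes evenly across tasks):

```java
// Illustrative only: if keyed state distributes roughly evenly across
// parallel tasks, per-task state shrinks in proportion to parallelism.
public class StateScalingSketch {
    static long perTaskStateMiB(long totalStateMiB, int parallelism) {
        // Assumes an even key distribution; key skew makes this optimistic.
        return totalStateMiB / parallelism;
    }

    public static void main(String[] args) {
        long total = 102_400; // e.g. 100 GiB of total job state, in MiB
        System.out.println(perTaskStateMiB(total, 4));  // 25600 MiB per task
        System.out.println(perTaskStateMiB(total, 16)); // 6400 MiB per task
    }
}
```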
Hey Gil,
I'm referring to when a pod exits on its own, not when being deleted.
Deployments only support the "Always" restart policy [1].
In my understanding, the JM only cleans up HA data when it is shut down [2],
after which the process will exit, which leads to the problem with the k8s
Deployment rest
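For context, the Deployment restriction being discussed is that a Deployment's pod template only accepts restartPolicy: Always, so a JobManager pod that exits on its own is always restarted. A minimal illustrative manifest (names and image are assumptions, not from the thread):

```yaml
# Illustrative only: a Deployment's pod template permits nothing but
# restartPolicy: Always, so an exiting JM process is always restarted.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-jobmanager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flink-jobmanager
  template:
    metadata:
      labels:
        app: flink-jobmanager
    spec:
      restartPolicy: Always   # the only value a Deployment allows
      containers:
        - name: jobmanager
          image: flink:1.15   # version is an assumption
          args: ["jobmanager"]
```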
Hello Austin,
I'm not aware of any limitation of Deployments not letting a pod exit
(correctly or incorrectly). What do you mean by that exactly? Would it be
possible for you to point to the piece of documentation that makes you
think that?
A pod, if correctly set up, will exit when receiving i
The way that Flink handles session windows is that every new event is
initially assigned to its own session window, and then overlapping sessions
are merged. I imagine this is why you are seeing so many calls
to createAccumulator.
This implementation choice is deeply embedded in the code; I don't
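That per-event-window-then-merge behavior can be sketched outside Flink in a few lines (plain Java; this is a simplified stand-in, not Flink's actual MergingWindowSet logic):

```java
import java.util.*;

// Plain-Java sketch (not Flink's MergingWindowSet): every event is first
// assigned its own [ts, ts + gap) window, then overlapping windows merge.
public class SessionMergeSketch {
    static List<long[]> assignAndMerge(long[] timestamps, long gap) {
        List<long[]> windows = new ArrayList<>();
        for (long ts : timestamps) {
            // One fresh window per event; in Flink this is roughly where a
            // new accumulator would be created for the aggregate function.
            windows.add(new long[]{ts, ts + gap});
        }
        windows.sort(Comparator.comparingLong(w -> w[0]));
        List<long[]> merged = new ArrayList<>();
        for (long[] w : windows) {
            long[] last = merged.isEmpty() ? null : merged.get(merged.size() - 1);
            if (last != null && last[1] >= w[0]) {
                last[1] = Math.max(last[1], w[1]); // overlap: extend session
            } else {
                merged.add(w);
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        // Events at t = 1, 2, 10 with gap 5 merge into two sessions:
        // [1, 7) and [10, 15).
        for (long[] w : assignAndMerge(new long[]{1, 2, 10}, 5)) {
            System.out.println(w[0] + ".." + w[1]);
        }
    }
}
```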
Cool, thanks! How does it clean up the HA data, if the cluster is never
able to shut down (due to the k8s Deployment restriction)?
Best,
Austin
On Mon, Sep 5, 2022 at 6:51 PM Gyula Fóra wrote:
> Hi!
>
> The operator supports both Flink native and standalone deployment modes
> and in both cases
Thanks a lot for your answers, this is reassuring!
Cheers
On Wed, Sep 7, 2022 at 1:12 PM, Chesnay Schepler
wrote:
> Just to squash concerns, we will make sure this license change will not
> affect Flink users in any way.
>
> On 07/09/2022 11:14, Robin Cassan via user wrote:
> > Hi all!
> > It
The class contains a single test method, which runs a single job (the job is
quite complex), then waits for the job to be running, and after that waits
for data to be populated in the output topic; this doesn't happen within 5
minutes (the test timeout). Tried under a debugger, set a breakpoint in the Kafka record deser
Are you running into this in the IDE, or when submitting the job to a
Flink cluster?
If it is the first, then you're probably affected by the Scala-free
Flink efforts. Either add an explicit dependency on
flink-streaming-scala or migrate to Flink tuples.
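For reference, the explicit dependency could look like this in Maven (the version and Scala suffix below are assumptions; match them to your Flink distribution):

```xml
<!-- Illustrative: pick the version and Scala suffix matching your setup. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-scala_2.12</artifactId>
    <version>1.15.2</version>
    <scope>provided</scope>
</dependency>
```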
On 07/09/2022 14:17, Lars Skjærven wr
The test that got slow: how many test cases does it actually contain
/ how many jobs does it actually run?
Are these tests using the table/sql API?
On 07/09/2022 14:15, Alexey Trenikhun wrote:
We are also observing extreme slow down (5+ minutes vs 15 seconds) in
1 of 2 integration tests. Bo
Hello,
When upgrading from 1.14 to 1.15 we bumped into a type issue when
attempting to sink to Cassandra (Scala 2.12.13). This was working nicely in
1.14. Any tip is highly appreciated.
Using a MapFunction() to generate the stream of tuples:
CassandraSink
.addSink(
mystream.map(new ToTupleM
We are also observing extreme slow down (5+ minutes vs 15 seconds) in 1 of 2
integration tests. Both tests use Kafka. The slow test uses
org.apache.flink.runtime.minicluster.TestingMiniCluster; this test tests a
complete job, which consumes and produces Kafka messages. The unaffected test
extends
Just to squash concerns, we will make sure this license change will not
affect Flink users in any way.
On 07/09/2022 11:14, Robin Cassan via user wrote:
Hi all!
It seems Akka has announced a licensing change
https://www.lightbend.com/blog/why-we-are-changing-the-license-for-akka
If I understa
Hi,
I would recommend checking the release notes of 1.14 [1] and 1.15 [2]. If
your Flink jobs use Flink features that had big changes in these two
releases, it would be better to upgrade step by step without skipping
1.14.x.
In general, depending on how complicated your jobs are, it
There is some more discussion going on in the related PR [1]. Based on the
current state of the discussion, Akka 2.6.20 will be the last version under
the Apache 2.0 license. But, I guess, we'll have to see where this discussion
is heading considering that it's kind of fresh.
[1] https://github.com/ak
We'll have to look into it.
The license would apply to usages of Flink.
That said, I'm not sure if we'd even be allowed to use Akka under that
license since it puts significant restrictions on the use of the software.
If that is the case, then it's either use a fork created by another
party or
Hi all!
It seems Akka has announced a licensing change
https://www.lightbend.com/blog/why-we-are-changing-the-license-for-akka
If I understand correctly, this could end up increasing costs a lot for
companies using Flink in production. Do you know if the Flink developers
have any initial reaction a
In addition to the state compatibility mentioned above, the interfaces
provided by Flink are stable if they carry the @Public annotation [1]
[1]
https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/annotation/Public.java
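For illustration only, a stability marker of this kind is just a plain Java annotation; a simplified stand-in (not Flink's actual definition, which is at the link above) looks like:

```java
import java.lang.annotation.*;

// Simplified stand-in for a stability marker like Flink's @Public;
// see the linked source file for the real definition.
@Documented
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@interface Public {}

@Public // marks the type as part of the stable, public API surface
public class StableApiExample {
    public static void main(String[] args) {
        // Tooling can check the marker at runtime or build time.
        System.out.println(StableApiExample.class.isAnnotationPresent(Public.class));
    }
}
```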
Best,
Congxian
Hangxiang Yu wrote on Wed, Sep 7, 2022