Using a webhook is really a good direction to support some unreleased Flink
native K8s features. We are doing the same thing internally.
Best,
Yang
Bohinski, Kevin wrote on Mon, Jun 29, 2020 at 3:09 AM:
> Hi Yang,
>
> Awesome, looking forward to 1.11!
>
> In the meantime, we are using a mutating webhook.
>
> Since changing off-heap removes memory from '.task.heap.size', is there a
> ratio that I should follow for better performance?
I don't think so. It could really be specific to your workload. Some
workloads may need more heap memory while others may need more off-heap.
Also, my guess (since I a…
In Flink 1.10, there's a huge change in the memory management compared to
previous versions. This could be related to your observations, because with
the same configurations, it is possible that there's less JVM heap space
(with more off-heap memory). Please take a look at this migration guide [1].
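For reference, here is a minimal sketch (not taken from the thread) of the Flink 1.10
task executor memory options being discussed, set programmatically; the same keys can
live in flink-conf.yaml, and the sizes are purely illustrative, not recommendations.
```
import org.apache.flink.configuration.{Configuration, MemorySize, TaskManagerOptions}

val conf = new Configuration()
// taskmanager.memory.flink.size: the total budget that heap and off-heap are carved out of.
conf.set(TaskManagerOptions.TOTAL_FLINK_MEMORY, MemorySize.parse("4g"))
// taskmanager.memory.task.heap.size / taskmanager.memory.task.off-heap.size
conf.set(TaskManagerOptions.TASK_HEAP_MEMORY, MemorySize.parse("2g"))
conf.set(TaskManagerOptions.TASK_OFF_HEAP_MEMORY, MemorySize.parse("512m"))
```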
Hi Yang,
Awesome, looking forward to 1.11!
In the meantime, we are using a mutating webhook, in case anyone else is facing
this...
Best,
kevin
From: Yang Wang
Date: Saturday, June 27, 2020 at 11:23 PM
To: "Bohinski, Kevin"
Cc: "user@flink.apache.org"
Subject: [EXTERNAL] Re: Native K8S IAM R…
On Sun 28 Jun 2020 at 01:34, Stephen Connolly <stephen.alan.conno...@gmail.com>
wrote:
>
> On Thu 25 Jun 2020 at 12:48, ivo.kn...@t-online.de wrote:
>
>> What's up guys,
>>
>> I'm trying to run an Apache Flink application with the GraalVM Native
>> Image, but I get the following error: (…
Hi Xintong,
Thank you for the quick response.
Doing 1) without increasing 'task.off-heap.size' does not change the issue,
but increasing the off-heap alone does.
What should the off-heap size be? Since changing off-heap removes memory
from '.task.heap.size', is there a ratio that I should follow for better
performance?
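For context, here is a rough, made-up arithmetic sketch of the 1.10 budgeting (numbers
roughly follow the defaults and are not a recommendation): with a fixed total Flink
memory, the task heap is derived as whatever is left over, so growing task off-heap
shrinks the heap by the same amount unless the total is raised as well.
```
// Made-up numbers, roughly the 1.10 defaults (framework 128m + 128m,
// network fraction 0.1, managed fraction 0.4) for a 4 GB total Flink memory.
val totalFlinkMb = 4096
val frameworkMb  = 128 + 128
val networkMb    = 410   // ~0.1 * total
val managedMb    = 1638  // ~0.4 * total

// Task heap is whatever remains once everything else is accounted for.
def taskHeapMb(taskOffHeapMb: Int): Int =
  totalFlinkMb - frameworkMb - networkMb - managedMb - taskOffHeapMb

taskHeapMb(0)    // ~1792 MB of task heap
taskHeapMb(512)  // ~1280 MB -- the extra off-heap came straight out of the heap
```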
Thanks for the suggestions!
> I recently tried 1.10 and see this error frequently, and I don't have the
> same issue when running with 1.9.1.
I did downgrade to Flink 1.9 and there's certainly no change in the
occurrences of the heartbeat timeout.
- Probably the most straightforward way is to t…
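In case it helps anyone searching the archive later, these are the two knobs usually
involved when heartbeat timeouts are being tuned; the values below are illustrative
only, not a suggestion from this thread.
```
import org.apache.flink.configuration.Configuration

val conf = new Configuration()
// Defaults in 1.10 are heartbeat.interval=10000 and heartbeat.timeout=50000 (ms);
// the larger timeout here is purely for illustration.
conf.setLong("heartbeat.interval", 10000L)
conf.setLong("heartbeat.timeout", 120000L)
```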
Hello,
I am using Flink as the query engine for running SQL queries on both batch
and streaming data. I use the Blink planner in batch and streaming mode
respectively for the two cases.
In my current setup, I execute the batch queries synchronously via the
StreamTableEnvironment::execute method. The…
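The message is cut off here, but a minimal sketch of the setup it describes (Blink
planner in batch mode and in streaming mode, plus the execute call mentioned above)
could look like this on 1.10; the job name and wiring are assumptions, not taken from
the thread.
```
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}
import org.apache.flink.table.api.scala.StreamTableEnvironment

// Blink planner in batch mode: a plain TableEnvironment, no DataStream interop.
val batchSettings = EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build()
val batchTableEnv = TableEnvironment.create(batchSettings)

// Blink planner in streaming mode: a StreamTableEnvironment on top of a DataStream env.
val env = StreamExecutionEnvironment.getExecutionEnvironment
val streamSettings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
val streamTableEnv = StreamTableEnvironment.create(env, streamSettings)

// After registering sources/sinks and queries, trigger execution synchronously:
// streamTableEnv.execute("streaming job")  // the StreamTableEnvironment::execute mentioned above
```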
Hi all,
I am trying to do something like this:
```
tEnv
  .sqlQuery("SELECT rawEvent.id, collect(rawEvent.name) FROM rawEvent GROUP BY rawEvent.id")
  .toRetractStream[(Long, java.util.Map[String, java.lang.Integer])]
```
An exception is thrown when I run the above code with the default planner
settings.
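For anyone trying to reproduce this, a self-contained version of the snippet might look
like the sketch below; the rawEvent schema (id BIGINT, name STRING), the in-memory
source, and the choice of the Blink planner are all assumptions for illustration, not
taken from the original message.
```
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.EnvironmentSettings
import org.apache.flink.table.api.scala._
import org.apache.flink.types.Row

val env = StreamExecutionEnvironment.getExecutionEnvironment
val settings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
val tEnv = StreamTableEnvironment.create(env, settings)

// Hypothetical in-memory stream standing in for the real rawEvent table.
val rawEvents: DataStream[(Long, String)] = env.fromElements((1L, "a"), (1L, "b"), (2L, "c"))
tEnv.createTemporaryView("rawEvent", rawEvents, 'id, 'name)

// COLLECT(name) yields a MULTISET (element -> multiplicity). Converting to Row keeps this
// sketch simple; the original snippet instead asked for
// (Long, java.util.Map[String, java.lang.Integer]), which is the conversion that
// reportedly fails with the default planner.
tEnv
  .sqlQuery("SELECT rawEvent.id, collect(rawEvent.name) FROM rawEvent GROUP BY rawEvent.id")
  .toRetractStream[Row]
  .print()

env.execute("collect example")
```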
Hi All,
In a Flink job I have a pipeline that consumes data from one Kafka topic and
stores it in an Elasticsearch cluster.
Without restarting the job, can we add another Kafka cluster and another
Elasticsearch sink to the job? That means I would supply the new Kafka
cluster and Elasticsearch…
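For readers skimming the archive, here is a rough sketch of the kind of job described:
two independent Kafka-to-Elasticsearch chains wired into one job. Every name (hosts,
topics, indices), the helper function, and the choice of the elasticsearch7 connector
are assumptions; this only illustrates how the graph is built at submission time, not a
way to attach a new sink to an already-running job.
```
import java.util.Properties
import org.apache.flink.api.common.functions.RuntimeContext
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.elasticsearch.{ElasticsearchSinkFunction, RequestIndexer}
import org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
import org.apache.http.HttpHost
import org.elasticsearch.client.Requests
import scala.collection.JavaConverters._

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Hypothetical helper: one Kafka source feeding one Elasticsearch index.
def kafkaToEs(bootstrap: String, topic: String, esHost: String, index: String): Unit = {
  val props = new Properties()
  props.setProperty("bootstrap.servers", bootstrap)
  props.setProperty("group.id", s"$topic-to-$index")

  val stream = env.addSource(new FlinkKafkaConsumer[String](topic, new SimpleStringSchema(), props))

  val sink = new ElasticsearchSink.Builder[String](
    List(new HttpHost(esHost, 9200, "http")).asJava,
    new ElasticsearchSinkFunction[String] {
      override def process(element: String, ctx: RuntimeContext, indexer: RequestIndexer): Unit =
        indexer.add(Requests.indexRequest().index(index).source(Map("raw" -> element).asJava))
    })
  stream.addSink(sink.build())
}

kafkaToEs("kafka-a:9092", "topic-a", "es-a", "index-a")   // the existing chain
kafkaToEs("kafka-b:9092", "topic-b", "es-b", "index-b")   // the additional cluster/sink

env.execute("kafka-to-elasticsearch")
```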