Hi Team,
How can we configure multiple task managers and multiple jobs in the
same deployment file using the Flink operator?
*Deployment.yaml*
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: basic-example
spec:
  image: flink:1.17
  flinkVersion: v1_17
  flinkConfiguration:
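With the flink-kubernetes-operator, one common pattern for running several jobs against shared task managers is a session-mode FlinkDeployment plus one FlinkSessionJob per job. A minimal sketch of that pattern follows; the names, jarURI, replica counts, and resource sizes are illustrative assumptions, not taken from the original message:

```yaml
# Session cluster: no job in the spec, so multiple jobs can attach to it.
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: session-cluster          # assumed name
spec:
  image: flink:1.17
  flinkVersion: v1_17
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "4"
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    replicas: 3                  # multiple task manager pods
    resource:
      memory: "2048m"
      cpu: 1
---
# One FlinkSessionJob per job; each references the session cluster above.
apiVersion: flink.apache.org/v1beta1
kind: FlinkSessionJob
metadata:
  name: job-a                    # assumed name
spec:
  deploymentName: session-cluster
  job:
    jarURI: https://example.com/job-a.jar   # placeholder location
    parallelism: 4
    upgradeMode: stateless
```

A second job would be another FlinkSessionJob with its own jarURI, pointing at the same deploymentName.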
restore is supported between 1.9.x and
1.11.x versions.
Please provide inputs on how to resolve this issue.
Regards,
Shravan
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
very convincing.
Regards,
M S Shravan
Chesnay Schepler wrote
> If you use 1.10.0 or above the framesize for which it failed is part of
> the exception message, see FLINK-14618.
>
> If you are using older version, then I'm afraid there is no way to tell.
>
> On 9/18/2020
release you are using."
We found out the default size from the configuration but we are unable to
identify the size for which it fails. Could you help out on this?
Awaiting a response.
Regards,
Shravan
Chesnay Schepler wrote
>> how can we know the expected size for which it is failing?
The message does not indicate that. Does the operator state
have any impact on the expected Akka frame size? What is the impact of
increasing it?
Awaiting a response.
Regards,
Shravan
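For context on the knob being discussed: the frame size limit is set via `akka.framesize` in flink-conf.yaml, whose default in Flink 1.x is 10485760b (10 MiB). The increased value below is purely an illustrative assumption; it should be sized against the failing message, and larger frames cost additional memory per RPC:

```yaml
# flink-conf.yaml: maximum size of messages exchanged between the
# JobManager and TaskManagers over Akka RPC.
# Default is 10485760b (10 MiB); 20 MiB below is an example value only.
akka.framesize: 20971520b
```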
jackson-databind in table APIs
Looking forward to a response.
Thanks,
Shravan
Our understanding is that when a job is stopped with a savepoint, all task
managers persist their state as part of the savepoint. If a Task Manager
receives a shutdown signal while the savepoint is being taken, does it
complete the savepoint before shutting down?
[Ans] Why is the task manager shut down suddenly? Are you saying
The Job Manager and Task Managers run as separate pods within the K8s cluster
in our setup. As a job cluster is not used, job jars are not part of the Job
Manager Docker image. The job is submitted from a different Flink client pod.
Flink is configured with the RocksDB state backend. The Docker images are created
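As a sketch of the setup described above (the savepoint directory and job id are assumptions, not taken from the original message), the default savepoint target can be set in flink-conf.yaml, and stop-with-savepoint is then triggered from the Flink client pod:

```yaml
# flink-conf.yaml on the JobManager/TaskManager pods
state.backend: rocksdb                          # RocksDB, as in the setup above
state.savepoints.dir: s3://my-bucket/savepoints # assumed location
# From the client pod, `flink stop` drains the pipeline and completes a
# savepoint before the job's tasks shut down:
#   flink stop --savepointPath s3://my-bucket/savepoints <jobId>
```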
> Yarn nodes. Therefore, if you wanted to access it directly instead of
> through the Yarn web proxy, you'd have to find what machine and port it is
> running on.
>
> -Shannon
>
> From: Shravan R
> Date: Thursday, May 11, 2017 at 12:43 PM
> To:
> Subject: Jo
- Shravan