I am unable to restore a 1.9 savepoint into a 1.11 runtime, for the very
interesting reason that the Savepoint class was renamed and repackaged between
those two releases. Apparently a Kryo serializer has that class registered in
the 1.9 runtime. I can't think of a good reason for that class…
Well, usually the plugins should be properly isolated, but Flink 1.9 is quite
old, so there is a chance the plugin classloader was not fully isolated.
But I also have a hard time concluding anything from such a small stacktrace.
Do you need aws-java-sdk-core because of Kinesis?
On Fri, Jul 30, 2021 at…
Hi Arvid,
Yes, we do have AWSCredentialsProvider in our user JAR. It’s coming from
aws-java-sdk-core. Must we exclude that, then?
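For example, would marking it provided be the right approach? Just a sketch of
what I have in mind, assuming Maven:

    <!-- Sketch: keep aws-java-sdk-core out of the fat user jar -->
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-core</artifactId>
      <scope>provided</scope>
    </dependency>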
// ah
From: Arvid Heise
Sent: Friday, July 30, 2021 11:26 AM
To: Ingo Bürk
Cc: user
Subject: Re: Unable to use custom AWS credentials provider - 1.9.2
Can you double-check if you have an AWSCredentialsProvider in your user jar
or in your flink/lib/? Same for S3AUtils?
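For example, something like this (a quick way to check; the jar name is a
placeholder):

    # Does the user jar bundle the class?
    jar tf your-job.jar | grep AWSCredentialsProvider
    # Does anything under flink/lib/ bundle it too?
    for f in flink/lib/*.jar; do echo "== $f"; jar tf "$f" | grep AWSCredentialsProvider; done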
On Fri, Jul 30, 2021 at 9:50 AM Ingo Bürk wrote:
> Hi Andreas,
>
> Such an exception can occur if the class in question (your provider) and
> the one being checked (AWSCredentialsProvider) were loaded from
> different class loaders…
Hello Yangze, thanks for responding.
I'm attempting to perform this programmatically on YARN, so looking at a log
just won't do :) What's the appropriate way to get an instance of a
ClusterClient? Do you know of any examples I can look at?
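For context, the closest I've pieced together is something like this (an
untested sketch, assuming the Flink 1.10+ deployment API; the application id
is a placeholder):

    import org.apache.flink.client.deployment.ClusterClientFactory;
    import org.apache.flink.client.deployment.ClusterClientServiceLoader;
    import org.apache.flink.client.deployment.ClusterDescriptor;
    import org.apache.flink.client.deployment.DefaultClusterClientServiceLoader;
    import org.apache.flink.client.program.ClusterClient;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.configuration.DeploymentOptions;
    import org.apache.flink.configuration.GlobalConfiguration;
    import org.apache.flink.yarn.configuration.YarnConfigOptions;
    import org.apache.hadoop.yarn.api.records.ApplicationId;

    // Inside some method that throws Exception.
    // Point the client at an existing YARN session.
    Configuration config = GlobalConfiguration.loadConfiguration();
    config.set(DeploymentOptions.TARGET, "yarn-session");
    config.set(YarnConfigOptions.APPLICATION_ID, "application_1234_0001");  // placeholder

    ClusterClientServiceLoader loader = new DefaultClusterClientServiceLoader();
    ClusterClientFactory<ApplicationId> factory = loader.getClusterClientFactory(config);
    try (ClusterDescriptor<ApplicationId> descriptor = factory.createClusterDescriptor(config);
         ClusterClient<ApplicationId> client =
             descriptor.retrieve(factory.getClusterId(config)).getClusterClient()) {
        client.listJobs().get();  // e.g. enumerate the running jobs
    }

Is that roughly the intended usage?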
// ah
-----Original Message-----
From: Yangze Guo
Sent: …
I am using RocksDB as the state backend. My pipeline's checkpoint size is
barely ~100 KB.
I will add GC and heap-dump config and will let you know of any findings.
Right now I suspect there is a memory leak either in the flink-cdc code
or in the Iceberg sink: https://iceberg.apache.org/flink/#over
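The config I'm planning to add is roughly this, in flink-conf.yaml (a sketch;
these are standard JVM 8 flags and the paths are placeholders):

    env.java.opts: -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps -Xloggc:/tmp/gc.log -XX:+PrintGCDetails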
Hello, I have been trying to use StreamingFileSink to write Parquet files into Azure Blob Storage. I am getting the following error. I did see in the ticket https://issues.apache.org/jira/browse/FLINK-17444 that support for StreamingFileSink is not yet provided.
[Attachment: code.java]
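For anyone reading along, the kind of pipeline in question looks roughly like
this (a sketch only; MyRecord and the wasbs:// path are placeholders, not
taken from the attachment):

    import org.apache.flink.core.fs.Path;
    import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

    // Bulk-format StreamingFileSink writing Parquet via reflection-based Avro.
    DataStream<MyRecord> stream = ...;  // upstream pipeline elided
    StreamingFileSink<MyRecord> sink = StreamingFileSink
        .forBulkFormat(
            new Path("wasbs://container@account.blob.core.windows.net/out"),
            ParquetAvroWriters.forReflectRecord(MyRecord.class))
        .build();
    stream.addSink(sink);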
Hi Dan,
sorry for the mix-up. I think the idleness definition [1] is orthogonal to
the source interface being used. The new source interface just makes it more
obvious to the user that they can override the watermark strategy.
I'd still recommend having a look at the new Kafka source, though. One
interesting…
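For illustration, overriding the watermark strategy (with idleness) on the new
Kafka source could look roughly like this (a sketch, assuming Flink 1.12+;
broker, topic, and timeout values are placeholders):

    import java.time.Duration;
    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("broker:9092")
        .setTopics("events")
        .setDeserializer(KafkaRecordDeserializationSchema.valueOnly(new SimpleStringSchema()))
        .build();

    env.fromSource(
        source,
        WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofSeconds(5))
            .withIdleness(Duration.ofMinutes(1)),  // mark partitions idle after 1 min
        "kafka-source");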
Hi:
When I use Over Window Aggregation with a window size of 1 hour, I find the
processing speed decreases over time. How can I tune the over window?
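The query is roughly of this shape (simplified; the table and column names
here are placeholders, not my real schema, and row_time is the event-time
attribute):

    SELECT
      user_id,
      SUM(amount) OVER (
        PARTITION BY user_id
        ORDER BY row_time
        RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW
      ) AS amount_last_hour
    FROM orders;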
Best regards
Hui Wang
Thanks Chesnay. Will try and report back.
On Fri, Jul 30, 2021, 10:19 Chesnay Schepler wrote:
> Of course it is finding the file, you are actively pointing it towards it.
> The BashJavaUtils are supposed to use the log4j configuration file *that
> is bundled in the BashJavaUtils.jar,* which you…
Hi Andreas,
Such an exception can occur if the class in question (your provider) and
the one being checked (AWSCredentialsProvider) were loaded from
different class loaders.
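One quick way to confirm would be to log both classloaders at runtime, e.g.
(a sketch; myProvider stands in for your provider instance):

    // Hypothetical diagnostic: print which classloader loaded each side.
    // If the two differ, an instanceof/cast check fails even though the
    // fully-qualified class names match.
    System.out.println("provider loaded by:  " + myProvider.getClass().getClassLoader());
    System.out.println("interface loaded by: " + com.amazonaws.auth.AWSCredentialsProvider.class.getClassLoader());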
Any chance you can try once with 1.10+ to see if it would work? It does
look like a Flink issue to me, but I'm not sure this…
Of course it is finding the file, you are actively pointing it towards it.
The BashJavaUtils are supposed to use the log4j configuration file /that
is bundled in the BashJavaUtils.jar,/ which you are now interfering
with. That's also why it doesn't require all of lib/ to be on the
classpath; th…