Hi,
We noticed it first when running out of space on the S3 PV. We worked out
that the subdirectory is the job ID, so I don't know how that is meant to be
cleaned up when the job is recreated with a new job ID. It might be that we
need to work something out outside of Flink.
How about porting the `TestFileSystemCatalogFactory` back to 1.19 and
rebuilding this catalog jar?
--
Best!
Xuyang
On 2025-01-03 06:23:25, "Vinay Agarwal" wrote:
Thanks again for your answer. I have to use Flink version 1.20.0 because
`TestFileSystemCatalogFactory` doesn't exist in prior versions.
Unfortunately, I am not able to run 1.20.0 due to the following error.
(Version 1.19.1 works just fine.)
```
14:19:47.094 [main] INFO org.apache.flink.runtime.r
```
Flink handles its parallelism independently from the number of partitions
in the topic(s) being read. The parallelism comes from whatever is set in
the cluster configuration, without any concern for the source's native
parallelism. If there are fewer Kafka partitions than the Flink
parallelism, the extra subtasks are assigned no partitions and sit idle.
Let's suppose we have two topics, one with 20 partitions and another with 1
partition. If we set parallelism.default to 10, I understand that it will
create 10 subtasks in total for each source. In the case of the topic with
20 partitions, it will work correctly, but for the topic with only one
partition, only one subtask will actually receive data.
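To make the scenario above concrete, here is a small sketch that models partition-to-subtask assignment with simple round-robin. Note this is an illustration only, not Flink's exact `KafkaSourceEnumerator` algorithm (which also offsets the assignment by a topic-dependent start index); the topic names and the `assign` helper are made up for this example:

```python
# Toy model of how Kafka partitions might be spread across source subtasks.
# Round-robin assignment is an assumption for illustration; the point is
# that parallelism greater than the partition count leaves subtasks idle.

def assign(num_partitions: int, parallelism: int) -> dict[int, list[int]]:
    """Map each subtask index to the list of partitions it would read."""
    subtasks: dict[int, list[int]] = {i: [] for i in range(parallelism)}
    for p in range(num_partitions):
        subtasks[p % parallelism].append(p)
    return subtasks

for topic, partitions in [("topic-20p", 20), ("topic-1p", 1)]:
    mapping = assign(partitions, parallelism=10)
    idle = [s for s, ps in mapping.items() if not ps]
    print(f"{topic}: {partitions} partition(s) over 10 subtasks "
          f"-> {len(idle)} idle subtask(s)")
```

With parallelism 10, the 20-partition topic keeps every subtask busy with two partitions each, while the 1-partition topic leaves nine subtasks with nothing to read.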