Dear Apache Flink Community,
I hope this message finds you well. We are currently exploring the option
of using Amazon S3 as the checkpoint storage for our Apache Flink
deployment. As part of this effort, we understand that AWS S3 access must
be configured properly and that checkpoints must be written to an s3:// path.
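To make the question concrete, this is roughly how we intend to point
checkpoints at S3 from the job code (only a sketch on our side; the bucket
name and the interval are placeholders, and we assume an S3 filesystem plugin
such as flink-s3-fs-hadoop or flink-s3-fs-presto is available under plugins/):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class S3CheckpointSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every 60 seconds and keep the checkpoint data in S3.
        // "my-bucket" is a placeholder bucket name.
        env.enableCheckpointing(60_000);
        env.getCheckpointConfig()
                .setCheckpointStorage("s3://my-bucket/flink-checkpoints");

        // Trivial pipeline so the job has something to run and checkpoint.
        env.fromSequence(1, 1_000_000).print();

        env.execute("s3-checkpoint-sketch");
    }
}

Our open questions are mainly about how the S3 access itself should be
configured for this to work.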
Hi Community,
Any suggestions on these queries?
Regards
Anuj
On Mon, Dec 23, 2024 at 8:10 AM Anuj Jain wrote:
> Hi,
> Please suggest:
> 1. Will Flink 2.0 have experimental or full support for Java 17? If it is
> experimental, any idea in which 2.x release full Java 17 support is planned?
The Java compatibility documentation suggests Flink 2.0 should use Java 17:
https://nightlies.apache.org/flink/flink-docs-release-2.0-preview1/docs/deployment/java_compatibility/#java-17
2. What will be the end-of-support timeline for the Flink 1.20 LTS release?
Regards
Anuj
On Wed, Dec 18, 2024 at 9:04 AM Anuj Jain wrote:
> Hi,
>
> The 2.0 release timeline plan:
> https://cwiki.apache.org/confluence/display/FLINK/2.0+Release#id-2.0Release-TimelinePlan
>
>
> --
> Best!
> Xuyang
>
>
> At 2024-12-17 15:42:54, "Anuj Jain" wrote:
>
> Hi,
> Will Flink 2.0 support OpenJDK Java 17?
> And is there any plan for adding Java 17 support in the Flink 1.x series?
Hi,
Will Flink 2.0 support OpenJDK Java 17?
And is there any plan for adding Java 17 support in the Flink 1.x series?
All I could see in the documentation is that Java 17 support is currently
experimental.
Thanks in advance for any help !!
Regards
Anuj
> Hi Anuj,
>
> I recalled another ticket on this topic, which had some things to test. I
> don't know if that resolved the issue, can you verify it? See
> https://issues.apache.org/jira/browse/FLINK-31095
>
> Best regards,
>
> Martijn
>
> On Tue, May 23, 2023 at 7:04 AM Anuj Jain wrote:
Hello,
Please provide some pointers on this issue.
Thanks !!
Regards
Anuj
On Fri, May 19, 2023 at 1:34 PM Anuj Jain wrote:
> Hi Community,
> Looking forward to some advice on the problem.
>
> I also found this similar Jira, but not sure if a fix has been done for
> the Hadoop S3a integration: <https://issues.apache.org/jira/browse/FLINK-23487>
Is there any other way to integrate the Flink source/sink with AWS IAM from
EKS?
Regards
Anuj
On Thu, May 18, 2023 at 12:41 PM Anuj Jain wrote:
> Hi,
> I have a flink job running on EKS, reading and writing data records to S3
> buckets.
> I am trying to set up access credentials via IAM ...
Am I using the correct credential provider for IAM integration? I am not sure
if Hadoop S3a supports it; see
https://issues.apache.org/jira/browse/HADOOP-18154
Please advise if I am doing anything wrong in setting up credentials via
IAM.
Regards
Anuj Jain
>> If you use Access Keys, the credentials must be stored in flink-conf.yaml. The
>> recommended method for setting up credentials is by using IAM, not via
>> Access Keys. See
>> https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/filesystems/s3/#configure-access-credentials
>> for more details.
credentials-from-hashicorp-vault>
> for
> more detailed instructions.
> Besides, it should be possible to override the Configuration object in your
> job code. Are you using Application mode to run the job?
>
> Best regards,
> Biao Geng
>
> On Mon, May 8, 2023, Anuj Jain wrote:
... I think Flink creates the connection pool at startup, even before the job
is started.
Thanks and Regards
Anuj Jain
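For the record, this is the kind of override I experimented with in the job's
main() (purely a sketch and an assumption on my side; given the note above
about the connection pool being created at startup, the same key most likely
has to go into flink-conf.yaml so the cluster picks it up before user code
runs; the provider class is the web-identity one discussed in HADOOP-18154):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.plugin.PluginUtils;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class S3aIamCredentialsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Point the Hadoop S3A filesystem at the web-identity token that EKS
        // IRSA mounts into the pod (AWS_WEB_IDENTITY_TOKEN_FILE / AWS_ROLE_ARN).
        conf.setString(
                "fs.s3a.aws.credentials.provider",
                "com.amazonaws.auth.WebIdentityTokenCredentialsProvider");

        // Re-initialize Flink's file systems with the extra option, loading
        // filesystem plugins (e.g. flink-s3-fs-hadoop) from the plugins/ folder.
        FileSystem.initialize(conf, PluginUtils.createPluginManagerFromRootFolder(conf));

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);

        // ... build the sources/sinks that read from and write to s3a:// paths here ...
        env.fromSequence(1, 10).print();
        env.execute("s3a-iam-credentials-sketch");
    }
}

If the provider does take effect, the job should be able to assume the IRSA
role without keeping any static access keys in flink-conf.yaml.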
>
>
> Hi Community,
>
>
> I am trying to use flink-parquet for reading and writing parquet files
> from the Flink filesystem connectors.
>
> In File source, I would be decoding parquet files and converting them to
> avro records and similarly in file sink I would be encoding avro records to
> parquet files.
Hi Community,
I am trying to use flink-parquet for reading and writing parquet files from
the Flink filesystem connectors.
In File source, I would be decoding parquet files and converting them to
avro records and similarly in file sink I would be encoding avro records to
parquet files.
For col ...
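Concretely, the wiring I have in mind looks roughly like this (only a sketch;
the GenericRecord schema, the field names, and the local paths are
placeholders of mine):

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.avro.AvroParquetReaders;
import org.apache.flink.formats.parquet.avro.AvroParquetWriters;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ParquetAvroSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Example Avro schema; in the real job this would describe the
        // columns of the parquet files.
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Rec\",\"fields\":["
                        + "{\"name\":\"id\",\"type\":\"long\"},"
                        + "{\"name\":\"name\",\"type\":\"string\"}]}");

        // File source: decode parquet files into Avro GenericRecords.
        FileSource<GenericRecord> source = FileSource
                .forRecordStreamFormat(
                        AvroParquetReaders.forGenericRecord(schema),
                        new Path("file:///tmp/parquet-in"))
                .build();

        // File sink: encode Avro GenericRecords back into parquet files.
        FileSink<GenericRecord> sink = FileSink
                .forBulkFormat(
                        new Path("file:///tmp/parquet-out"),
                        AvroParquetWriters.forGenericRecord(schema))
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "parquet-in")
                .sinkTo(sink);
        env.execute("parquet-avro-sketch");
    }
}

With this shape, the parquet-to-avro decoding and avro-to-parquet encoding are
handled by the format classes themselves.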
>
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-196%3A+Source+API+stability+guarantees
>
> Best,
> Yangze Guo
>
> On Wed, May 3, 2023 at 12:08 PM Anuj Jain wrote:
> >
> > Hi Community,
> > I saw some flink classes annotated with
> >
Hi Community,
I saw some flink classes annotated with
@Experimental
@PublicEvolving
@Internal
What do these annotations mean? Can I use these classes in production?
How would the class APIs evolve in the future? Can they break backward
compatibility, in terms of API declaration or implementation, in minor
releases?
Hi Community,
Does Flink File Sink support compression of output files, to reduce the
file size?
I think the File source supports reading compressed formats like gzip, bzip2,
etc.; is there any way to write the sink output files in a compressed format?
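For example, something along these lines is what I was imagining (a sketch
only; the Gzip codec choice, the output path, and the use of the
flink-compress module's CompressWriters are my assumptions):

import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.compress.CompressWriters;
import org.apache.flink.formats.compress.extractor.DefaultExtractor;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CompressedFileSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Bulk-format sink: each String record is passed through Hadoop's
        // Gzip codec, so the rolled part files are written compressed.
        // Needs flink-compress plus a Hadoop dependency for the codec classes.
        FileSink<String> sink = FileSink
                .forBulkFormat(
                        new Path("file:///tmp/compressed-out"),
                        CompressWriters.forExtractor(new DefaultExtractor<String>())
                                .withHadoopCompression("Gzip"))
                .build();

        env.fromElements("record-1", "record-2", "record-3").sinkTo(sink);
        env.execute("compressed-file-sink-sketch");
    }
}

If there is a more built-in way to achieve this with the File Sink, please let
me know.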
Any help is appreciated.
Regards
Anuj