1.20.0 - Error writing java.time.Instant to Parquet

2025-05-15 Thread Levan Huyen
Hi all, I need to write a DataStream with a java.time.Instant field to Parquet files in S3. I couldn't find any straightforward way to do that, so I changed that POJO class to an Avro SpecificRecord (I followed this example https://github.com/aws-samples/amazon-managed-service-for-apache-flink-exampl
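For context on why the Avro route works: Avro's timestamp-millis logical type represents an Instant as a long count of epoch milliseconds, and a generated SpecificRecord performs roughly this conversion under the hood. A minimal plain-Java sketch of that round trip (no Flink or Avro dependencies; the class name is hypothetical):

```java
import java.time.Instant;

// Sketch: what Avro's timestamp-millis logical type stores for an Instant.
public class InstantMillis {
    static long toTimestampMillis(Instant instant) {
        return instant.toEpochMilli();
    }

    static Instant fromTimestampMillis(long epochMillis) {
        return Instant.ofEpochMilli(epochMillis);
    }

    public static void main(String[] args) {
        Instant t = Instant.parse("2025-05-15T12:34:56.789Z");
        // Round-trips losslessly at millisecond precision.
        System.out.println(fromTimestampMillis(toTimestampMillis(t)).equals(t)); // prints "true"
    }
}
```

Note that sub-millisecond precision is lost with timestamp-millis; Avro also offers timestamp-micros if that matters for your data.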

RE: Flink task manager PODs autoscaling - K8s installation

2025-05-15 Thread Kamal Mittal via user
Thanks for the explanation. Just to understand: why doesn't K8s HPA work well with Flink? Are there any limitations? And instead of HPA, could the Kubernetes Event-Driven Autoscaler (KEDA) be used? From: Zhanghao Chen Sent: 14 May 2025 06:47 To: user@flink.apache.org; Kamal Mittal Subject: Re: Flink task manager
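One reason plain HPA fits Flink poorly is that a Flink job cannot use extra TaskManagers without a rescale (stop with savepoint, restart at new parallelism), so simply adding pods does nothing by itself. The Flink Kubernetes Operator ships its own autoscaler that drives that rescale loop. A hedged sketch of enabling it on a FlinkDeployment (option keys are from the operator's autoscaler; the values are illustrative, not recommendations):

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: my-job
spec:
  flinkVersion: v1_20
  flinkConfiguration:
    job.autoscaler.enabled: "true"
    # how long to wait after a rescale before evaluating again
    job.autoscaler.stabilization.interval: "1m"
    # metric aggregation window used for scaling decisions
    job.autoscaler.metrics.window: "5m"
```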

Unsubscribe

2025-05-15 Thread Phil Stavridis
Unsubscribe

Re: Apache Flink Serialization Question

2025-05-15 Thread Мосин Николай
For List I just set up a TypeInfoFactory like: public class SomeDataDTOTypeInfoFactory extends TypeInfoFactory<SomeDataDTO> {    @Override    public TypeInformation<SomeDataDTO> createTypeInfo(Type t, Map<String, TypeInformation<?>> genericParameters) {        return Types.POJO(SomeDataDTO.class, new HashMap<String, TypeInformation<?>>() {            {

Graceful stopping of Flink on K8s

2025-05-15 Thread Nikola Milutinovic
Hello all. We are running Flink 1.20 on a Kubernetes cluster. We deploy using the Flink K8s Operator. I was wondering: when Kubernetes decides to kill a running Flink cluster, does it use some regular graceful method, or does it just kill the pod? Just for reference, Docker has a way to specify a
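For reference, Kubernetes itself always performs a two-step stop: it sends SIGTERM to the container's PID 1, waits `terminationGracePeriodSeconds` (30s by default), then sends SIGKILL. If your TaskManagers need more time to drain, you can lengthen that window through the operator's pod template. A hedged sketch of the relevant FlinkDeployment fragment (field names from the operator's CRD; the value is illustrative):

```yaml
spec:
  podTemplate:
    spec:
      # give Flink processes up to 2 minutes to shut down after SIGTERM
      terminationGracePeriodSeconds: 120
```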

Re: Apache Flink Serialization Question

2025-05-15 Thread Richard Cheung
Thanks for all the replies! I’ve decided to just update my UUID field to a String for POJO compliance. However, I’m getting the same log issues for List and Set saying that fields will be processed as GenericType. I want everything to be fully POJO compatible so I can have schema evolution for the
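Following the TypeInfoFactory approach mentioned earlier in this thread, collection fields can be pinned to explicit type information so they stop falling back to GenericType. A hedged sketch against Flink's `org.apache.flink.api.common.typeinfo` API (the POJO and its field names are hypothetical; requires Flink on the classpath):

```java
import java.lang.reflect.Type;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.flink.api.common.typeinfo.TypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;

// @TypeInfo tells Flink to use the factory instead of reflective extraction.
@TypeInfo(MyEvent.Factory.class)
public class MyEvent {
    public String id;
    public List<String> tags; // would otherwise be treated as GenericType

    public static class Factory extends TypeInfoFactory<MyEvent> {
        @Override
        public TypeInformation<MyEvent> createTypeInfo(
                Type t, Map<String, TypeInformation<?>> genericParameters) {
            Map<String, TypeInformation<?>> fields = new HashMap<>();
            fields.put("id", Types.STRING);
            fields.put("tags", Types.LIST(Types.STRING)); // explicit element type
            return Types.POJO(MyEvent.class, fields);
        }
    }
}
```

Note that `Types.LIST` gives you Flink's ListSerializer rather than the POJO serializer, so check its state-evolution behavior against your requirements before relying on it.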

Re: Apache Flink Serialization Question

2025-05-15 Thread Mosin Nick
GitHub already contains a serializer for UUID: https://github.com/gAmUssA/datalorean/blob/main/src/main/java/com/example/datadelorean/serialization/UUIDSerializer.java Works well for me   To: Zhanghao Chen (zhanghao.c...@outlook.com), Richard Cheung (rcheungsi...@gmail.com), user@f

RE: Apache Flink Serialization Question

2025-05-15 Thread Schwalbe Matthias
Hi Richard, Same problem, 12 Flink versions later, I created my own TypeInformation/Serializer/Snapshot for UUID (Scala in that case), along: class UUIDTypeInformation extends TypeInformation[UUID] … class UUIDSerializer extends TupleSerializerBase[UUID]( … class UUIDSerializerSnapshot(serializ
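Whatever the surrounding TypeInformation plumbing looks like, a UUID serializer ultimately reduces to reading and writing two 64-bit halves (16 fixed bytes). A minimal plain-Java sketch of that core, with a hypothetical class name and no Flink dependencies:

```java
import java.util.UUID;

// Sketch: the payload a fixed-length UUID serializer writes and reads.
public class UuidBits {
    static long[] write(UUID u) {
        // a UUID is exactly two longs: most- and least-significant bits
        return new long[] { u.getMostSignificantBits(), u.getLeastSignificantBits() };
    }

    static UUID read(long[] bits) {
        return new UUID(bits[0], bits[1]);
    }
}
```

Because the encoding is fixed-length, such a serializer can also report a constant serialized size, which Flink can exploit for keys.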

Re: Keyed watermarks: A fine-grained watermark generation for Apache Flink

2025-05-15 Thread Мосин Николай
Yes, metrics from IoT is my case now. In addition to unsynchronized clocks, I also have devices that can buffer data while offline and resend it later when they come back online, and that data must also be processed in the common pipeline. But right now it gets marked as 'late' and dropped. I have some workarounds, bu
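Until something like keyed watermarks is available, a common workaround for buffered-and-resent device data is to stop dropping late records and route them to a side output for separate handling. A hedged sketch against Flink's DataStream API (the event types, aggregate function, and lateness values are hypothetical; `readings` is assumed to already have event-time timestamps and watermarks assigned):

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.OutputTag;

// anonymous subclass so the generic type survives erasure
OutputTag<SensorReading> lateTag = new OutputTag<SensorReading>("late-readings") {};

SingleOutputStreamOperator<SensorAgg> result = readings
    .keyBy(r -> r.deviceId)
    .window(TumblingEventTimeWindows.of(Time.minutes(5)))
    .allowedLateness(Time.minutes(30))   // keep updating windows for moderately late data
    .sideOutputLateData(lateTag)         // capture anything later than that instead of dropping it
    .aggregate(new MyAgg());

// very late (buffered/resent) data can be reprocessed on its own path
DataStream<SensorReading> late = result.getSideOutput(lateTag);
```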

Kafka Sink Timestamp Behavior Change From 1.14.0 to 1.20.1

2025-05-15 Thread guoliubi...@foxmail.com
Hi, I have some jobs running under Flink 1.14.0. For security reasons, we have to update Flink to 1.20.1. The problem is that when I sink records to Kafka, in 1.14.0 the timestamp in Kafka is the log append time, but in 1.20.1 the timestamp in Kafka is the event time. I have checked the sou
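If the job uses the newer KafkaSink, the timestamp written to Kafka is whatever the ProducerRecord built by your KafkaRecordSerializationSchema carries. One hedged way to get broker-assigned time back is to emit records with a null timestamp and set the topic's `message.timestamp.type=LogAppendTime`; a sketch against the 1.20 Kafka connector (topic name and value handling are illustrative; verify against your connector version):

```java
import java.nio.charset.StandardCharsets;

import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NoTimestampSchema implements KafkaRecordSerializationSchema<String> {
    @Override
    public ProducerRecord<byte[], byte[]> serialize(
            String element, KafkaSinkContext context, Long timestamp) {
        // (topic, partition, timestamp, key, value): a null timestamp defers
        // timestamping to the producer/broker instead of the Flink record time
        return new ProducerRecord<>("my-topic", null, null, null,
                element.getBytes(StandardCharsets.UTF_8));
    }
}
```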

Re: Keyed watermarks: A fine-grained watermark generation for Apache Flink

2025-05-15 Thread Zhanghao Chen
Thanks for the insightful sharing! Best, Zhanghao Chen From: Lasse Nedergaard Sent: Thursday, May 15, 2025 13:10 To: Zhanghao Chen Cc: mosin...@yandex.ru ; user@flink.apache.org Subject: Re: Keyed watermarks: A fine-grained watermark generation for Apache Flin