Hi all,
I need to write a DataStream with a java.util.Instant field to
Parquet files in S3. I couldn't find any straightforward way to do that, so
I changed that POJO class to Avro SpecificRecord (I followed this example
https://github.com/aws-samples/amazon-managed-service-for-apache-flink-exampl
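For anyone finding this later, a minimal sketch of the resulting sink, assuming the Avro-generated SpecificRecord class is named SensorReading (a hypothetical name) and the flink-parquet and flink-avro modules are on the classpath. Bulk formats like this roll files on checkpoint, so checkpointing must be enabled:

import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.avro.AvroParquetWriters;
import org.apache.flink.streaming.api.datastream.DataStream;

public final class ParquetSinkSketch {
    // Builds a FileSink that writes Avro SpecificRecords as Parquet to S3.
    static FileSink<SensorReading> buildSink() {
        return FileSink
                .forBulkFormat(
                        new Path("s3://my-bucket/readings/"), // hypothetical bucket/path
                        AvroParquetWriters.forSpecificRecord(SensorReading.class))
                .build();
    }

    static void attach(DataStream<SensorReading> readings) {
        readings.sinkTo(buildSink()); // files are finalized on each checkpoint
    }
}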
Thanks for the explanation.
Just to understand: why doesn't K8s HPA work well with Flink? Are there any limitations?
Also, could the Kubernetes Event-Driven Autoscaler (KEDA) be used instead of HPA?
From: Zhanghao Chen
Sent: 14 May 2025 06:47
To: user@flink.apache.org; Kamal Mittal
Subject: Re: Flink task manager
Unsubscribe
For List I just set up a TypeInfoFactory like:

public class SomeDataDTOTypeInfoFactory extends TypeInfoFactory<SomeDataDTO> {
    @Override
    public TypeInformation<SomeDataDTO> createTypeInfo(
            Type t, Map<String, TypeInformation<?>> genericParameters) {
        return Types.POJO(SomeDataDTO.class, new HashMap<String, TypeInformation<?>>() { {
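A complete version of such a factory might look like the sketch below; the field name "values" and element type String are assumptions, since the message above is truncated:

import java.lang.reflect.Type;
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.api.common.typeinfo.TypeInfoFactory;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;

public class SomeDataDTOTypeInfoFactory extends TypeInfoFactory<SomeDataDTO> {
    @Override
    public TypeInformation<SomeDataDTO> createTypeInfo(
            Type t, Map<String, TypeInformation<?>> genericParameters) {
        Map<String, TypeInformation<?>> fields = new HashMap<>();
        // Give Flink explicit type information for the List field so it is
        // not handled as a GenericType (Kryo fallback).
        fields.put("values", Types.LIST(Types.STRING));
        return Types.POJO(SomeDataDTO.class, fields);
    }
}

The factory is wired up by annotating the POJO itself with @TypeInfo(SomeDataDTOTypeInfoFactory.class).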
Hello all.
We are running Flink 1.20 on a Kubernetes cluster. We deploy using the Flink K8s
Operator.
I was wondering: when Kubernetes decides to kill a running Flink cluster, does it
use some regular graceful method, or does it just kill the pod?
Just for reference, Docker has a way to specify a
Thanks for all the replies!
I’ve decided to just change my UUID field to a String for POJO compliance.
However, I’m getting the same log messages for List and Set fields,
saying they will be processed as GenericType. I want everything to
be fully POJO-compatible so I can have schema evolution for the
GitHub already contains a serializer for UUID: https://github.com/gAmUssA/datalorean/blob/main/src/main/java/com/example/datadelorean/serialization/UUIDSerializer.java
Works well for me.
To: Zhanghao Chen (zhanghao.c...@outlook.com), Richard Cheung (rcheungsi...@gmail.com), user@f
Hi Richard,
Same problem, 12 Flink versions later,
I created my own TypeInformation/Serializer/Snapshot for UUID (Scala in that
case), along:
class UUIDTypeInformation extends TypeInformation[UUID]
…
class UUIDSerializer extends TupleSerializerBase[UUID](
…
class UUIDSerializerSnapshot(serializ
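For comparison, a minimal Java sketch of the same idea (the bodies of the Scala classes above are elided, so this is not the poster's exact code): a stateless serializer that writes a UUID as its two long halves, plus the snapshot class Flink needs for state compatibility. Class names are mine:

import java.io.IOException;
import java.util.UUID;

import org.apache.flink.api.common.typeutils.SimpleTypeSerializerSnapshot;
import org.apache.flink.api.common.typeutils.TypeSerializerSnapshot;
import org.apache.flink.api.common.typeutils.base.TypeSerializerSingleton;
import org.apache.flink.core.memory.DataInputView;
import org.apache.flink.core.memory.DataOutputView;

public final class UuidSerializer extends TypeSerializerSingleton<UUID> {
    public static final UuidSerializer INSTANCE = new UuidSerializer();

    @Override public boolean isImmutableType() { return true; }
    @Override public UUID createInstance() { return new UUID(0L, 0L); }
    @Override public UUID copy(UUID from) { return from; }            // UUID is immutable
    @Override public UUID copy(UUID from, UUID reuse) { return from; }
    @Override public int getLength() { return 16; }                   // fixed size: two longs

    @Override
    public void serialize(UUID value, DataOutputView target) throws IOException {
        target.writeLong(value.getMostSignificantBits());
        target.writeLong(value.getLeastSignificantBits());
    }

    @Override
    public UUID deserialize(DataInputView source) throws IOException {
        return new UUID(source.readLong(), source.readLong());
    }

    @Override
    public UUID deserialize(UUID reuse, DataInputView source) throws IOException {
        return deserialize(source);
    }

    @Override
    public void copy(DataInputView source, DataOutputView target) throws IOException {
        target.writeLong(source.readLong());
        target.writeLong(source.readLong());
    }

    @Override
    public TypeSerializerSnapshot<UUID> snapshotConfiguration() {
        return new UuidSerializerSnapshot();
    }

    // Snapshot for a stateless singleton serializer: nothing to persist.
    public static final class UuidSerializerSnapshot
            extends SimpleTypeSerializerSnapshot<UUID> {
        public UuidSerializerSnapshot() { super(() -> INSTANCE); }
    }
}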
Yes, metrics from IoT is my case now. In addition to unsynchronized clocks, I also have devices that can buffer data while offline and resend it later once they come back online, and that data must also be processed in the common pipeline. But right now it gets marked as 'late' and dropped. I do some workarounds, bu
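One standard mitigation (a sketch, not the poster's actual workaround): capture late readings on a side output instead of letting the window operator drop them, and give the windows some extra allowed lateness. The Tuple2 layout (deviceId, value) and all names are illustrative:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.OutputTag;

public final class LateDataSketch {
    // readings: (deviceId, value) with event-time timestamps/watermarks assigned
    static SingleOutputStreamOperator<Tuple2<String, Double>> withLateHandling(
            DataStream<Tuple2<String, Double>> readings) {

        final OutputTag<Tuple2<String, Double>> lateTag =
                new OutputTag<Tuple2<String, Double>>("late-readings") {};

        SingleOutputStreamOperator<Tuple2<String, Double>> summed = readings
                .keyBy(r -> r.f0)
                .window(TumblingEventTimeWindows.of(Time.minutes(1)))
                .allowedLateness(Time.minutes(10))  // keep window state around longer
                .sideOutputLateData(lateTag)        // capture instead of dropping
                .sum(1);

        // Data arriving after the allowed lateness shows up here and can be
        // routed to its own sink or reprocessed, rather than silently vanishing.
        DataStream<Tuple2<String, Double>> late = summed.getSideOutput(lateTag);
        late.print(); // placeholder: use a real sink in practice

        return summed;
    }
}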
Hi,
I have some jobs running under Flink 1.14.0. For security reasons, we have to
upgrade Flink to 1.20.1.
The problem is: when I sink records to Kafka, in 1.14.0 the timestamp in Kafka
is the log-append time, but in 1.20.1 the timestamp in Kafka is the event
time.
I have checked the sou
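If the broker-assigned timestamp is what you want back, one option (a hedged sketch, not a confirmed explanation of the behavior change) is a custom KafkaRecordSerializationSchema for the KafkaSink that emits records with a null timestamp, so the broker assigns its own; configuring the topic with message.timestamp.type=LogAppendTime is the broker-side alternative. The topic name and String payload below are assumptions:

import java.nio.charset.StandardCharsets;

import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NoTimestampSerializationSchema
        implements KafkaRecordSerializationSchema<String> {

    @Override
    public ProducerRecord<byte[], byte[]> serialize(
            String element, KafkaSinkContext context, Long timestamp) {
        // Null timestamp: the broker fills in its own clock (or LogAppendTime,
        // if the topic is configured that way) instead of the record's
        // event/stream timestamp.
        return new ProducerRecord<>(
                "my-topic",                                  // hypothetical topic
                null,                                        // let Kafka pick the partition
                null,                                        // no producer timestamp
                null,                                        // no key
                element.getBytes(StandardCharsets.UTF_8));
    }
}

It would be wired up with KafkaSink.<String>builder().setRecordSerializer(new NoTimestampSerializationSchema()).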
Thanks for the insightful sharing!
Best,
Zhanghao Chen
From: Lasse Nedergaard
Sent: Thursday, May 15, 2025 13:10
To: Zhanghao Chen
Cc: mosin...@yandex.ru ; user@flink.apache.org
Subject: Re: Keyed watermarks: A fine-grained watermark generation for Apache
Flin