Hi.
I was wondering how others handle the situation where your Flink job has to read
or write a JSON data structure mapped to a POJO defined in an external
library, and these external POJOs contain data types, like UUID, generic lists,
etc., that you need to annotate with type information in o…
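One workaround, when the external POJO class itself cannot be annotated, is to
register a serializer for the problematic field types at job setup. Below is a
minimal sketch for UUID, assuming Flink 1.x with its bundled Kryo 2.x; the
serializer class and the registration are illustrative, not from the original
post.

import java.util.UUID;

import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Illustrative serializer for a field type you cannot annotate.
public class UuidKryoSerializer extends Serializer<UUID> {
    @Override
    public void write(Kryo kryo, Output output, UUID uuid) {
        output.writeLong(uuid.getMostSignificantBits());
        output.writeLong(uuid.getLeastSignificantBits());
    }

    @Override
    public UUID read(Kryo kryo, Input input, Class<UUID> type) {
        return new UUID(input.readLong(), input.readLong());
    }
}

// Registration at job setup:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().registerTypeWithKryoSerializer(UUID.class, UuidKryoSerializer.class);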
Hello,
Is there an example of a FileSource that continuously monitors NFS
directories, looking for files that match a pattern specified at runtime?
I searched the documentation but could not find one.
Can the Flink FileSource monitor NFS directories without any issues?
I was using a custo…
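For the continuous-monitoring part, FileSource has a monitorContinuously(Duration)
builder method. A minimal sketch, assuming a text-line format and a placeholder
NFS mount path (runtime pattern filtering is not shown):

import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class NfsMonitorJob {
    public static void main(String[] args) throws Exception {
        // /mnt/nfs/input is a placeholder for the NFS mount point.
        FileSource<String> source =
                FileSource.forRecordStreamFormat(new TextLineInputFormat(), new Path("/mnt/nfs/input"))
                        // re-scan the directory every 30 seconds for new files
                        .monitorContinuously(Duration.ofSeconds(30))
                        .build();

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "nfs-file-source").print();
        env.execute("nfs-monitor");
    }
}

From Flink's point of view an NFS mount is just a local path, so whether
monitoring behaves reliably depends mostly on the mount's metadata consistency
rather than on Flink itself.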
I was able to fix it with the modifications below. It turns out Flink can
encode GenericRecords to Avro when the schema is provided (returns
chained). Kryo is then no longer needed.
@@ -22,14 +22,12 @@
env.fromSource(source,
WatermarkStrategy.noWatermarks(),
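The change described presumably amounts to passing Avro type information when
creating the stream, so Flink serializes GenericRecord with its Avro serializer
instead of falling back to Kryo. A minimal sketch, with env and source as in
the hunk above and assuming the writer schema is available as a JSON string:

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.formats.avro.typeutils.GenericRecordAvroTypeInfo;
import org.apache.flink.streaming.api.datastream.DataStream;

// schemaJson: the Avro writer schema as a JSON string (assumed available here)
Schema schema = new Schema.Parser().parse(schemaJson);

// Supplying GenericRecordAvroTypeInfo makes Flink use the Avro serializer
// for GenericRecord rather than Kryo.
DataStream<GenericRecord> stream =
        env.fromSource(source,
                WatermarkStrategy.noWatermarks(),
                "avro-source",
                new GenericRecordAvroTypeInfo(schema));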
Hi All,
What is the recommended way to support dynamic configuration with the
Kubernetes Flink Operator, given that flink-conf is mounted as a ConfigMap
and is not modifiable at application startup? For example, I want to retrieve
s3.access-key and s3.secret-key from an external system for our Ceph cluster.
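One possible approach, sketched below under the assumption that the keys are
needed by the job itself: fetch the secrets in main() and pass them into the
environment's configuration. fetchSecret is a hypothetical client for the
external secret store, and settings the cluster reads at startup (for example
by filesystem plugins) may still have to come from the operator-managed
flink-conf.

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DynamicConfigJob {
    public static void main(String[] args) throws Exception {
        // fetchSecret(...) is a hypothetical call to the external secret store.
        Configuration conf = new Configuration();
        conf.setString("s3.access-key", fetchSecret("ceph/access-key"));
        conf.setString("s3.secret-key", fetchSecret("ceph/secret-key"));

        // Applied when the environment is created; cluster-level settings
        // read before the job starts are not affected by this.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
        // ... build and execute the job ...
    }

    private static String fetchSecret(String key) {
        // placeholder: call Vault / Ceph / your secret manager here
        throw new UnsupportedOperationException("wire up your secret store");
    }
}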
Hi,
We are planning to use Flink 1.19.1 with the Kubernetes operator, and I wanted
to check whether 1.19.1 is Java 17 compatible. The documentation says that
version 1.18 added experimental support for it, but nothing concrete is said
about whether Java 17 is fully supported.
Thanks & Regards,
Sachin S
Hi,
This simplified code fails with the stack trace below. If I change the
timestamp logicalType to a regular long in the Avro schema, it works
fine. The failure is also independent of the sink (it also threw an NPE
with a Kafka sink).
Is it necessary to implement a custom Kryo serializer for built-in Avro
logi…
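For reference, the kind of field being described: a long carrying the
timestamp-millis logical type. A minimal sketch with hypothetical record and
field names, not the poster's actual schema:

import org.apache.avro.Schema;

// A record with one timestamp-millis field; replacing the logicalType
// wrapper with a plain "long" is the change that avoids the NPE above.
String schemaJson =
        "{\"type\":\"record\",\"name\":\"Event\",\"fields\":["
        + "{\"name\":\"ts\",\"type\":{\"type\":\"long\",\"logicalType\":\"timestamp-millis\"}}]}";
Schema schema = new Schema.Parser().parse(schemaJson);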