Hi Jingsong,
It's hard to provide an option for this, given that we also want to decouple Hive from the Flink planner.
If we still need this fallback behavior, HiveParser will still depend on the
`ParserImpl` provided by flink-table-planner.
But we will try our best to minimize the impact on users.
Hello Team,
I would like to bring attention to a potential bug regarding Kubernetes HA
in Flink 1.15.
In our implementation, we use the shell `trap` command in our entrypoint script
to perform cleanup tasks based on the exit code of the JobManager. However,
we have observed an issue where, when using
Hi Jannik,
By default, Kafka client applications automatically register new schemas
[1]. You should be able to influence that by using properties, e.g. setting:
'properties.auto.register.schemas' = 'false'
'properties.use.latest.version' = 'true'
Best regards,
Martijn
[1]
https://docs.confluen
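To illustrate where those options would go, here is a minimal sketch of a Kafka-backed table using the avro-confluent format, with Martijn's two properties included verbatim. The table name, schema, topic, broker, and registry URL are placeholders (not from the original thread), and exact option names can differ between Flink versions:

CREATE TABLE orders (
  order_id STRING,
  amount DOUBLE
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'broker:9092',
  'format' = 'avro-confluent',
  'avro-confluent.url' = 'https://my-registry.example.com',
  -- the two options suggested above:
  'properties.auto.register.schemas' = 'false',
  'properties.use.latest.version' = 'true'
);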
Hello Jannik,
Some things to consider (I had a similar problem a couple of years ago):
* The schemaRegistryClient actually caches schema IDs, so it will hit the
schema registry only once,
* The schema registered in the schema registry needs to be byte-equal,
otherwise the schema registry considers it a different schema
Hi community,
It would be of great help if anyone could help us find the root cause of this
issue. Please refer to the mail thread below for more details.
We look forward to hearing from you.
Thanks,
Rakesh
From: "Chinthakrindi, Rakesh"
Date: Wednesday, 24 May 2023 at 5:15 PM
To: "user@flink.apache.org"
Cc: "GU
Hello,
I'm trying to use the avro-confluent-registry format with the Confluent Cloud
Schema Registry in our company.
Our schemas are managed via Terraform and global write access is denied for all
Kafka clients in our environments (or at least in production).
Therefore, when using the avro-confluent-registry format
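The excerpt is cut off here, but the setup described presumably resembles the sketch below. The table name, schema, topic, endpoints, and API key pair are invented placeholders, with the Schema Registry reachable only for reads:

CREATE TABLE events (
  event_id STRING,
  payload STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'events',
  'properties.bootstrap.servers' = 'pkc-xxxxx.confluent.cloud:9092',
  'format' = 'avro-confluent',
  'avro-confluent.url' = 'https://psrc-xxxxx.confluent.cloud',
  -- Confluent Cloud requires basic auth against the Schema Registry;
  -- this API key pair is a placeholder
  'avro-confluent.basic-auth.credentials-source' = 'USER_INFO',
  'avro-confluent.basic-auth.user-info' = '<sr-api-key>:<sr-api-secret>'
);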
Why can Flink 1.14 read an Iceberg table in batch mode but not in streaming mode?
Thanks in advance
Kobe24
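Not a definitive answer, but one thing to check: the Iceberg Flink connector scans a bounded snapshot by default, and streaming reads have to be enabled explicitly, e.g. via table hints. A sketch, assuming a table named `sample` and an Iceberg runtime built for Flink 1.14:

-- allow SQL hints to pass job options to the Iceberg source
SET 'table.dynamic-table-options.enabled' = 'true';
SET 'execution.runtime-mode' = 'streaming';

-- read the current snapshot, then keep consuming incremental data
SELECT * FROM sample /*+ OPTIONS('streaming'='true', 'monitor-interval'='30s') */;

If streaming mode still fails with this enabled, the exact error message and the Iceberg runtime version would be the next things to check.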