Hi Flink users,
I am a big fan of the Table API, and we use it extensively for ad-hoc
queries over petabytes of data. In our system, nobody writes table-creation
DDL by hand; we derive table schemas dynamically from the schema registry
(Avro data in Kafka) and create temporary tables in the session.
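Concretely, the registration looks roughly like the sketch below (Java,
assuming Flink 1.14+ with flink-avro and the Confluent schema-registry
client on the classpath; the topic name, subject, and addresses are made
up for illustration):

import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;

import org.apache.flink.formats.avro.typeutils.AvroSchemaConverter;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.TableDescriptor;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.types.DataType;

public class RegistryBackedTable {
    public static void main(String[] args) throws Exception {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Fetch the latest Avro schema for the topic's value subject.
        SchemaRegistryClient registry =
                new CachedSchemaRegistryClient("http://registry:8081", 100);
        String avroSchema = registry
                .getLatestSchemaMetadata("user-events-value").getSchema();

        // Convert the Avro schema to a Flink row type and register a
        // temporary table from it; the table lives only for this session.
        DataType rowType = AvroSchemaConverter.convertToDataType(avroSchema);
        tEnv.createTemporaryTable("user_events",
                TableDescriptor.forConnector("kafka")
                        .schema(Schema.newBuilder()
                                .fromRowDataType(rowType).build())
                        .format("avro-confluent")
                        .option("topic", "user-events")
                        .option("properties.bootstrap.servers", "broker:9092")
                        .option("avro-confluent.url", "http://registry:8081")
                        .build());
    }
}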
Hi Timo and Dawid,
Thank you for the detailed answer; it looks like we need to reconsider our
whole job-submission flow.
What is the best way to compare the new job graph with the old one? Can we
use the Flink plan visualizer to verify that the new job graph shares the
table source, given that, as you mention, this is not guaranteed?
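For reference, this is how we have been inspecting plans so far; a minimal
sketch, reusing the hypothetical user_events table from the earlier
message:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ExplainQuery {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // ... register the temporary Kafka tables as in the earlier sketch ...

        // Prints the AST, the optimized logical plan, and the physical
        // execution plan; comparing the physical plans of two submissions
        // shows whether they reference one shared source or two.
        System.out.println(tEnv.explainSql("SELECT user_id FROM user_events"));
    }
}

As far as I can tell, the JSON that the web visualizer accepts can also be
produced with StreamExecutionEnvironment#getExecutionPlan() when going
through the DataStream bridge.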
Best regards,
Hi Till,
Thank you for your comment. I am looking forward to hearing from Timo and
Dawid as well.
Best regards,
Hi Kostas,
Yes, that would satisfy my use case, as the platform is always
forward-looking: any ad-hoc query runs on the latest data.
From your comment, I understand that even session mode does not optimize
our readers. I wish Flink could support arbitrary job submission and graph
generation.
Hi Kostas,
Thank you for your response.
Does what you are saying also hold for session mode? If I submit my jobs to
an existing Flink session, will they be able to share the sources?
We do register our Kafka tables in the `GenericInMemoryCatalog`, and the
documentation says: `The GenericInMemoryCatalog is an in-memory
implementation of a catalog. All objects will be available only for the
lifetime of the session.`
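My current understanding, which I would like to confirm, is that sources
are only shared when the statements are bundled into a single job, for
example via a StatementSet; a minimal sketch, with hypothetical table and
sink names:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;

public class SharedSourceJob {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // ... register user_events plus sink_a and sink_b as temporary tables ...

        // Both INSERTs are optimized together and submitted as one job, so
        // the planner can let them share the same Kafka source. Queries
        // submitted as separate jobs do not get this treatment.
        StatementSet set = tEnv.createStatementSet();
        set.addInsertSql("INSERT INTO sink_a SELECT user_id FROM user_events");
        set.addInsertSql("INSERT INTO sink_b SELECT event_type FROM user_events");
        set.execute();
    }
}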
Hi all,
I would like to consult with you regarding deployment strategies.
We have 250+ Kafka topics, and we want platform users to submit SQL queries
over them that run indefinitely. We have a query parser that extracts topic
names from a user's query, and the application then locally creates a Kafka
table for each referenced topic, along the lines of the sketch below.
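A minimal sketch of that per-query flow (Java; the topic names, columns,
and addresses are illustrative, and in our real flow the column list is
generated from the registry schema as described elsewhere in this thread):

import java.util.Arrays;
import java.util.List;

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class AdHocSqlPlatform {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Topic names the query parser extracted from the user's SQL.
        List<String> topics = Arrays.asList("orders", "payments");

        // One temporary table per referenced topic (option keys as of
        // Flink 1.13+; the column list here is a placeholder).
        for (String topic : topics) {
            tEnv.executeSql(
                    "CREATE TEMPORARY TABLE `" + topic + "` (\n"
                    + "  id BIGINT,\n"
                    + "  payload STRING\n"
                    + ") WITH (\n"
                    + "  'connector' = 'kafka',\n"
                    + "  'topic' = '" + topic + "',\n"
                    + "  'properties.bootstrap.servers' = 'broker:9092',\n"
                    + "  'format' = 'avro-confluent',\n"
                    + "  'avro-confluent.url' = 'http://registry:8081'\n"
                    + ")");
        }

        // The user's query then runs against the registered tables until
        // it is cancelled.
        tEnv.executeSql(
                "SELECT o.id, p.payload FROM `orders` o "
                + "JOIN `payments` p ON o.id = p.id").print();
    }
}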