Hi Anil,

I am glad the new (and soon to be only) source API is getting attention. In the execution model, assigning splits to the readers is the responsibility of the enumerator, and it seems that your enumerator never assigns any splits, so the readers have nothing to read. Check the Kafka source enumerator for reference [1]; a rough sketch of the pattern follows below.

1- https://github.com/apache/flink-connector-kafka/blob/main/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/enumerator/KafkaSourceEnumerator.java#L286
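To make this concrete, here is a minimal, untested sketch of an enumerator that hands out one split per table. The TableSplit and TableSplitEnumerator names are placeholders of mine, not taken from your repo, so adapt the split and checkpoint types to your own implementation.

import java.io.IOException;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

import org.apache.flink.api.connector.source.SourceSplit;
import org.apache.flink.api.connector.source.SplitEnumerator;
import org.apache.flink.api.connector.source.SplitEnumeratorContext;

/** Hypothetical split: one split per table (illustrative names only). */
class TableSplit implements SourceSplit {
    private final String tableName;

    TableSplit(String tableName) {
        this.tableName = tableName;
    }

    String tableName() {
        return tableName;
    }

    @Override
    public String splitId() {
        return tableName;
    }
}

/**
 * Minimal push-style enumerator: hand a registered reader a split as soon as it
 * shows up, and signal "no more splits" once the queue is drained.
 */
class TableSplitEnumerator implements SplitEnumerator<TableSplit, List<TableSplit>> {

    private final SplitEnumeratorContext<TableSplit> context;
    private final Queue<TableSplit> unassigned;

    TableSplitEnumerator(SplitEnumeratorContext<TableSplit> context, List<TableSplit> splits) {
        this.context = context;
        this.unassigned = new ArrayDeque<>(splits);
    }

    @Override
    public void start() {
        // Nothing to discover lazily here; the splits were handed to the constructor.
    }

    @Override
    public void addReader(int subtaskId) {
        // This is the part that is missing: without an assignSplit(s) call the
        // readers sit idle forever, which looks exactly like "splits are not scheduled".
        assignOneSplit(subtaskId);
    }

    @Override
    public void handleSplitRequest(int subtaskId, String requesterHostname) {
        // Also serve pull-style requests, in case the reader asks for more work.
        assignOneSplit(subtaskId);
    }

    private void assignOneSplit(int subtaskId) {
        TableSplit split = unassigned.poll();
        if (split != null) {
            context.assignSplit(split, subtaskId);
        } else {
            // Bounded source: let the reader finish instead of waiting forever.
            context.signalNoMoreSplits(subtaskId);
        }
    }

    @Override
    public void addSplitsBack(List<TableSplit> splits, int subtaskId) {
        // Splits come back if a reader fails before checkpointing them; re-queue them.
        unassigned.addAll(splits);
    }

    @Override
    public List<TableSplit> snapshotState(long checkpointId) {
        return new ArrayList<>(unassigned);
    }

    @Override
    public void close() throws IOException {
        // No external resources in this sketch.
    }
}

The key point is the context.assignSplit(...) call: if the enumerator never calls assignSplit/assignSplits (from start(), addReader(), handleSplitRequest() or a discovery callback), the readers stay idle and no data is read, which matches what you are seeing.

Best Regards
Ahmed Hamdy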
On Sun, 6 Oct 2024 at 06:41, Anil Dasari <adas...@guidewire.com> wrote:
> Hello,
> I have implemented a custom source that reads tables in parallel, with
> each split corresponding to a table, and the custom source implementation
> can be found here -
> https://github.com/adasari/mastering-flink/blob/main/app/src/main/java/org/example/paralleljdbc/DatabaseSource.java
>
> However, it seems the source splits are not being scheduled and data is
> not being read from the tables. Can someone help me identify the issue in
> the implementation?
>
> Thanks