You are right that split is still not supported. Does it make sense for you to split the stream using a filter function? There is some overhead compared to the built-in stream.split, as you need to provide a filter function for each sub-stream, and so a record will be evaluated multiple times.
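To illustrate the idea, here is a minimal plain-Python sketch of the filter-based split (the `tenant` field, tenant names, and topic naming are assumptions for illustration, not from your thread; the PyFlink equivalent is shown in the comments):

```python
# Plain-Python stand-in for splitting a stream with one filter per sub-stream.
# The record shape ({"tenant": ..., "value": ...}) is hypothetical.

records = [
    {"tenant": "a", "value": 1},
    {"tenant": "b", "value": 2},
    {"tenant": "a", "value": 3},
]

def make_tenant_filter(tenant):
    # One predicate per sub-stream; every record is evaluated once per
    # tenant, which is the overhead mentioned above.
    return lambda record: record["tenant"] == tenant

tenants = ["a", "b"]
sub_streams = {t: list(filter(make_tenant_filter(t), records)) for t in tenants}

# In PyFlink this would be, roughly, one filter + sink per tenant:
#   for tenant in tenants:
#       stream.filter(make_tenant_filter(tenant)) \
#             .add_sink(kafka_sink_for(tenant))  # hypothetical sink factory
```

Each sub-stream then gets its own Kafka sink writing to that tenant's topic.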
> On Jun 24, 2021, at 3:08 AM, Curt Buechter <tricksho...@gmail.com> wrote:
>
> Hi,
> New PyFlink user here. Loving it so far. The first major problem I've run into is that I cannot create a Kafka Producer with dynamic topics. I see that this has been available for quite some time in Java with Keyed Serialization using the getTargetTopic method. Another way to do this in Java may be with stream.split(), and adding a different sink for the split streams. Stream splitting is also not available in PyFlink.
> Am I missing anything? Has anyone implemented this before in PyFlink, or know of a way to make it happen?
> The use case here is that I'm using a CDC Debezium connector to populate kafka topics from a multi-tenant database, and I'm trying to use PyFlink to split the records into a different topic for each tenant.
>
> Thanks