Hi, yes you could attach a custom processor that writes to another Kafka cluster. The problem is going to be guaranteeing at-least-once delivery without impacting throughput. To guarantee at-least-once you would need to do a blocking send on every call to process, i.e., producer.send(..).get(). This is going to have an impact on throughput, but I can't currently think of another way of doing it (with the current framework) that will guarantee at-least-once delivery.
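Something along these lines (an untested sketch against the 0.10.x Processor API; the sink bootstrap servers, topic name, and class name are just placeholders):

import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

// Forwards every record it sees to a topic on a *different* cluster.
public class CrossClusterForwarder implements Processor<String, String> {

    private KafkaProducer<String, String> producer;

    @Override
    public void init(final ProcessorContext context) {
        final Properties props = new Properties();
        // Bootstrap servers of the sink cluster, not the one the
        // streams app consumes from.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "sink-cluster:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  StringSerializer.class.getName());
        producer = new KafkaProducer<>(props);
    }

    @Override
    public void process(final String key, final String value) {
        try {
            // Blocking send: wait for the ack from the sink cluster before
            // returning, so the source offsets can't be committed ahead of
            // the write. This is what gives you at-least-once, at the cost
            // of throughput.
            producer.send(new ProducerRecord<>("sink-topic", key, value)).get();
        } catch (InterruptedException | ExecutionException e) {
            // Failing the task forces a restart and a re-read from the
            // last committed offset.
            throw new RuntimeException("send to sink cluster failed", e);
        }
    }

    @Override
    public void punctuate(final long timestamp) {
        // no-op
    }

    @Override
    public void close() {
        producer.close();
    }
}

You'd wire it in with TopologyBuilder.addProcessor(..) as a child of your source node. Note that duplicates are still possible on failure (the send may succeed but the offset commit not happen before a crash), which is exactly the at-least-once semantics described above.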
On Thu, 2 Feb 2017 at 17:26 Roger Vandusen <roger.vandu...@ticketmaster.com>
wrote:

> Thanks for the quick reply Damian.
>
> So the work-around would be to configure our source topologies with a
> processor component that would use another app component as a stand-alone
> KafkaProducer, let's say an injected Spring bean, configured to the other
> (sink) cluster, and then publish sink topic messages through this producer
> to the sink cluster?
>
> Sound like a solution? Have a better suggestion or any warnings about this
> approach?
>
> -Roger
>
>
> On 2/2/17, 10:10 AM, "Damian Guy" <damian....@gmail.com> wrote:
>
> Hi Roger,
>
> This is not currently supported and won't be available in 0.10.2.0.
> This has been discussed, but it doesn't look like there is a JIRA for it
> yet.
>
> Thanks,
> Damian
>
> On Thu, 2 Feb 2017 at 16:51 Roger Vandusen <
> roger.vandu...@ticketmaster.com> wrote:
>
> > We would like to source topics from one cluster and sink them to a
> > different cluster from the same topology.
> >
> > If this is not currently supported then is there a KIP/JIRA to track
> > work to support this in the future? 0.10.2.0?
> >
> > -Roger