Hi Jan,

Forgive my ignorance, but I am slightly confused here.

*"You should be able to get everything running on Windows anyhow"* - The Confluent quickstart for the JDBC source/sink connector requires Schema Registry to be up and running, and currently Confluent does not provide Windows batch scripts for any of the following:

schema-registry-start
schema-registry-stop
schema-registry-run-class

Without the above scripts, the JDBC source/sink connectors will not work on any platform. This means manually converting the above scripts, since the guide says "This example assumes you are running Kafka and Schema Registry locally on the default ports". If you are saying that it works without Schema Registry, could you kindly point me to that portion of the documentation? I tried to find gists but wasn't successful.

*"Nothing except the broker is really extensively using OS support for operating"* - My question was about Confluent's support for all the scripts/configuration on Windows, not its operation/functions on Windows. I know it works on Windows, it's just not as fully tested as on Linux.

Thanks,

On 16 September 2017 at 22:07, Jan Filipiak <jan.filip...@trivago.com> wrote:

> Hi,
>
> It entirely depends on how you want to serialize. You should be able to get
> everything running on Windows anyhow. Nothing except the broker is really
> extensively using OS support for operating.
>
> To answer your initial question: you would simply start multiple sinks and
> give each sink a different connect string. That should do what you want
> instantly.
>
> Best,
> Jan
>
>
> On 16.09.2017 22:51, M. Manna wrote:
>
>> Yes I have. I do need to build and run Schema Registry as a prerequisite,
>> isn't that correct? Because the quickstart seems to use Avro - and without
>> Avro you need your own implementation of transformers/serdes etc.
>>
>> I am only asking since my deployment platform is Windows Server 2012 - and
>> the Confluent package is meant to be run on Linux. I guess there is a lot
>> of manual conversion I need to do here?
>>
>> On 16 September 2017 at 21:43, Ted Yu <yuzhih...@gmail.com> wrote:
>>
>>> Have you looked at https://github.com/confluentinc/kafka-connect-jdbc ?
>>>
>>> On Sat, Sep 16, 2017 at 1:39 PM, M. Manna <manme...@gmail.com> wrote:
>>>
>>>> Sure. But none of these are available via open-source Kafka (they
>>>> require manual coding), correct? Only Confluent seems to provide an
>>>> off-the-shelf connector, but Confluent isn't compatible with Windows
>>>> (yet) - also correct?
>>>>
>>>> On 13 September 2017 at 18:11, Sreejith S <srssreej...@gmail.com> wrote:
>>>>
>>>>> This is possible. Once you have the records in your put method, it is
>>>>> up to your logic how you redirect them to multiple JDBC connections
>>>>> for insertion.
>>>>>
>>>>> In my use case I have implemented many-to-many sources and sinks.
>>>>>
>>>>> Regards,
>>>>> Srijith
>>>>>
>>>>> On 13-Sep-2017 10:14 pm, "M. Manna" <manme...@gmail.com> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> I need a little help/suggestion if possible. Does anyone know if it is
>>>>> possible in Kafka to develop a connector that can sink to multiple
>>>>> JDBC URLs for the same topic (i.e. table)?
>>>>>
>>>>> The examples I can see on Confluent talk about one JDBC URL (a
>>>>> one-to-one sink). Would it be possible to achieve a one-to-many?
>>>>>
>>>>> What I am trying to do is the following:
>>>>>
>>>>> 1) Write to a topic
>>>>> 2) Sink it to multiple DBs (they will all have the same table).
>>>>>
>>>>> Is this a doable/correct approach for the Connect API?
>>>>>
>>>>> Kindest Regards,
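
For reference, on the missing Windows scripts: the Linux schema-registry-start script is only a thin wrapper that launches the Schema Registry main class with a properties file, so a hand-rolled batch file can stand in for it. A rough, untested sketch follows - CONFLUENT_HOME, the jar directory, and the properties path are assumptions about the Confluent package layout, so adjust them to your install:

    @echo off
    rem Rough Windows stand-in for schema-registry-start (untested sketch).
    rem Assumes CONFLUENT_HOME points at the unpacked Confluent package and
    rem that the Schema Registry jars sit under share\java\schema-registry.
    set CLASSPATH=%CONFLUENT_HOME%\share\java\schema-registry\*

    java -cp "%CLASSPATH%" ^
        io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain ^
        "%CONFLUENT_HOME%\etc\schema-registry\schema-registry.properties"

Stopping it is then just a matter of killing that java process, which is essentially all schema-registry-stop does on Linux.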
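To make Jan's "multiple sinks, different connect string" suggestion concrete: with the Confluent JDBC sink connector you run one connector instance per target database, each from its own properties file, differing only in the connector name and connection.url. A minimal sketch - the topic name, URLs, and credentials are placeholders:

    # jdbc-sink-db1.properties
    name=jdbc-sink-db1
    connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
    tasks.max=1
    topics=my-topic
    connection.url=jdbc:postgresql://db1:5432/mydb?user=kafka&password=secret
    auto.create=true

    # jdbc-sink-db2.properties (identical apart from name and URL)
    name=jdbc-sink-db2
    connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
    tasks.max=1
    topics=my-topic
    connection.url=jdbc:postgresql://db2:5432/mydb?user=kafka&password=secret
    auto.create=true

Both files can be handed to a single standalone worker, e.g. connect-standalone worker.properties jdbc-sink-db1.properties jdbc-sink-db2.properties. Each connector tracks its own offsets, so the two databases are loaded independently.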
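And for the custom-connector route Sreejith describes: a SinkTask receives batches of records in put(), and nothing stops the task from writing each record to several JDBC connections. A bare-bones illustrative sketch in Java - the "jdbc.urls" property, table name, and SQL are made up for the example, and real code would need batching, retries, and a per-database failure policy:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;
    import java.util.Map;

    import org.apache.kafka.connect.errors.ConnectException;
    import org.apache.kafka.connect.sink.SinkRecord;
    import org.apache.kafka.connect.sink.SinkTask;

    public class MultiJdbcSinkTask extends SinkTask {

        private final List<Connection> connections = new ArrayList<>();

        @Override
        public void start(Map<String, String> props) {
            // "jdbc.urls" is a hypothetical comma-separated config property.
            for (String url : props.get("jdbc.urls").split(",")) {
                try {
                    connections.add(DriverManager.getConnection(url.trim()));
                } catch (SQLException e) {
                    throw new ConnectException("Could not connect to " + url, e);
                }
            }
        }

        @Override
        public void put(Collection<SinkRecord> records) {
            // Write every record to every target database.
            for (SinkRecord record : records) {
                for (Connection conn : connections) {
                    try (PreparedStatement stmt = conn.prepareStatement(
                            "INSERT INTO my_table (v) VALUES (?)")) {
                        stmt.setString(1, String.valueOf(record.value()));
                        stmt.executeUpdate();
                    } catch (SQLException e) {
                        throw new ConnectException("Insert failed", e);
                    }
                }
            }
        }

        @Override
        public void stop() {
            for (Connection conn : connections) {
                try { conn.close(); } catch (SQLException ignored) { }
            }
        }

        @Override
        public String version() {
            return "0.1";
        }
    }

Note this gives at-least-once delivery per database at best, so the duplicate handling and what to do when only one of the targets fails are where the real work is.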
*"You should be able to get everything running on Windows anyhow"* - Confluent quickstart for JDBC Connector/Sink Connector requires Schema Registry up and running - currently Confluent does not provide any windows batch script list for the following: schema-registry-start schema-registry-stop schema-registry-run-class Without the above scripts, JDBC source/sink connectors will not work any platform. This means manual conversion of the above scripts since the the guide says "This example assumes you are running Kafka and Schema Registry locally on the default ports". If you are saying that it works without could you kindly point me to that portion of documentation? I tried to find gists but wasn't successful. *"Nothing expect the broker is really extensively using OS support for operating" - *My question was about Confluent's support for all the scripts/configuration for Windows OS - not it's operations/functions on Windows. i know it works on Windows, just not fully tested as Linux. Thanks, On 16 September 2017 at 22:07, Jan Filipiak <jan.filip...@trivago.com> wrote: > Hi, > > entirely depends on how you want to serialize. You should be able to get > everything running on Windows anyhow. Nothing expect the broker is really > extensively using OS support for operating. > > To answer your initial question: You would simply start multiple sinks and > give each sink a different connect string. That should do what you want > instantly > > Best Jan > > > On 16.09.2017 22:51, M. Manna wrote: > >> Yes I have, I do need to build and run Schema Registry as a pre-requisite >> isn't that correct? because the QuickStart seems to start AVRO - without >> AVRO you need your own implementation of transformer/serdes etc. >> >> I am only asking since my deployment platform is Windows Server 2012 - and >> Confluent pkg is meant to be run on Linux. I guess there is a lot of >> manual >> conversion I need to do here? >> >> On 16 September 2017 at 21:43, Ted Yu <yuzhih...@gmail.com> wrote: >> >> Have you looked at https://github.com/confluentinc/kafka-connect-jdbc ? >>> >>> On Sat, Sep 16, 2017 at 1:39 PM, M. Manna <manme...@gmail.com> wrote: >>> >>> Sure. But all these are not available via Kafka open source (requires >>>> manual coding), correct? Only Confluence seems to provide some >>>> off-the-shelf connector but Confluent isn't compatible on Windows (yet), >>>> also correct? >>>> >>>> >>>> >>>> On 13 September 2017 at 18:11, Sreejith S <srssreej...@gmail.com> >>>> wrote: >>>> >>>> This is possible. Once you have records in your put method, its up your >>>>> logic how you are redirecting it to multiple jdbc connections for >>>>> insertion. >>>>> >>>>> In my use case i have implemented many to many sources and sinks. >>>>> >>>>> Regards, >>>>> Srijith >>>>> >>>>> On 13-Sep-2017 10:14 pm, "M. Manna" <manme...@gmail.com> wrote: >>>>> >>>>> Hi, >>>>> >>>>> I need a little help/suggestion if possible. Does anyone know if it's >>>>> possible in Kafka to develop a connector that can sink for multiple >>>>> >>>> JDBC >>> >>>> urls for the same topic (i.e. table) ? >>>>> >>>>> The examples I can see on Confluent talks about one JDBC url >>>>> >>>> (one-to-one >>> >>>> sink). Would it be possible to achieve a one-to-many ? >>>>> >>>>> What I am trying to do is the following: >>>>> >>>>> 1) Write to a topic >>>>> 2) Sink it to multiple DBs (they all will have the same table). >>>>> >>>>> Is this doable/correct way for Connect API? >>>>> >>>>> Kindest Regards, >>>>> >>>>> >