You're right that today you need to distribute the jars manually -- we don't have a built-in distribution mechanism; we just depend on what's on the classpath. Once you've got the jars installed, you'll need to do a rolling bounce of the workers with updated classpaths to make the jars accessible.
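Concretely, the bounce looks something like this on each worker -- just a sketch, assuming the jar has already been copied out to every host, that the workers are launched with connect-distributed.sh, and that your start script passes an exported CLASSPATH through to the JVM (the paths and config file below are placeholders for your own layout):

  # after stopping the worker currently running on this host,
  # relaunch it with the new connector jar on the classpath
  export CLASSPATH=/opt/connectors/shiny-new-connector.jar
  $KAFKA_HOME/bin/connect-distributed.sh $KAFKA_HOME/config/connect-distributed.properties

If you bounce one worker at a time, the group rebalances as each worker rejoins, so the connectors already running keep going on the other workers while you restart.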
We know that installing plugins dynamically would be ideal and we're thinking about how best to implement it, but that functionality is not available yet.

-Ewen

On Sat, Jun 25, 2016 at 12:33 PM, Dean Arnold <renodino...@gmail.com> wrote:

> I'm looking for a comprehensive example of deploying a new connector
> plugin into an existing Kafka cluster. Is there any standard solution for
> distributing a connector jar across nodes and then starting the connector,
> or is it a manual copy process (e.g., via pdcp) followed by a call to the
> Connect REST API?
>
> At present, my understanding of the procedure is:
>
> Copy the connector to all Kafka nodes:
>
> pdcp -w kafka-node[0-8] shiny-new-connector.jar $KAFKA_HOME/lib
> # copy other resources as needed
>
> Use REST to start the connector:
>
> curl -H "Content-Type: application/json" -H "Accept: application/json" \
>   -X POST http://kafka-node0:8083/connectors \
>   --data-binary @shiny-new-connector.json
>
> Presumably, connect-distributed has already been run to spawn the worker
> processes. How does the Kafka Connect CLASSPATH get updated to include the
> connector jar when using the REST interface? Or do all the workers have to
> be stopped/restarted to update the CLASSPATH?

--
Thanks,
Ewen