This is an automated email from the ASF dual-hosted git repository.
acosentino pushed a commit to branch master
in repository
https://gitbox.apache.org/repos/asf/camel-kafka-connector-examples.git
The following commit(s) were added to refs/heads/master by this push:
new 6e20c6b Slack Sink Example: Added openshift instruction
6e20c6b is described below
commit 6e20c6b06106a421cbd2214129d041ef4f29eb31
Author: Andrea Cosentino <[email protected]>
AuthorDate: Wed Oct 14 18:32:48 2020 +0200
Slack Sink Example: Added openshift instruction
---
slack/slack-sink/README.adoc | 193 +++++++++++++++++++++
slack/slack-sink/config/openshift/slack-sink.yaml | 16 ++
.../config/openshift/slack-webhook.properties | 1 +
3 files changed, 210 insertions(+)
diff --git a/slack/slack-sink/README.adoc b/slack/slack-sink/README.adoc
index afabcca..2c0dbe4 100644
--- a/slack/slack-sink/README.adoc
+++ b/slack/slack-sink/README.adoc
@@ -79,3 +79,196 @@ In another terminal, using kafkacat, you should be able to send messages
% Auto-selecting Producer mode (use -P or -C to override)
```
+## OpenShift
+
+### What is needed
+
+- A Slack App
+- A Slack channel
+- An OpenShift instance
+
+### Running Kafka using Strimzi Operator
+
+First we install the Strimzi operator and use it to deploy the Kafka broker and Kafka Connect into our OpenShift project.
+We need to create security objects as part of the installation, so it is necessary to switch to the admin user.
+If you use Minishift, you can do it with the following command:
+
+[source,bash,options="nowrap"]
+----
+oc login -u system:admin
+----
+
+We will use OpenShift project `myproject`.
+If it doesn't exist yet, you can create it using the following command:
+
+[source,bash,options="nowrap"]
+----
+oc new-project myproject
+----
+
+If the project already exists, you can switch to it with:
+
+[source,bash,options="nowrap"]
+----
+oc project myproject
+----
+
+We can now install the Strimzi operator into this project:
+
+[source,bash,options="nowrap",subs="attributes"]
+----
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.19.0/strimzi-cluster-operator-0.19.0.yaml
+----
+
+Next we will deploy a Kafka broker cluster and a Kafka Connect cluster, and then create a Kafka Connect image with the Camel Kafka connectors installed:
+
+[source,bash,options="nowrap",subs="attributes"]
+----
+# Deploy a single node Kafka broker
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/kafka/kafka-persistent-single.yaml
+
+# Deploy a single instance of Kafka Connect with no plug-in installed
+oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/connect/kafka-connect-s2i-single-node-kafka.yaml
+----
+
+Optionally, enable the ability to instantiate Kafka connectors through a dedicated custom resource:
+[source,bash,options="nowrap"]
+----
+oc annotate kafkaconnects2is my-connect-cluster strimzi.io/use-connector-resources=true
+----
+
+### Add Camel Kafka connector binaries
+
+Strimzi uses `Source2Image` builds to allow users to add their own connectors to the existing Strimzi Docker images.
+We now need to build the connectors and add them to the image.
+If you have built the whole project (`mvn clean package`), decompress the connectors you need into a folder (e.g. `my-connectors/`) so that each one sits in its own subfolder.
+Alternatively, you can download the latest officially released and packaged connectors from Maven:
+
+```
+> cd my-connectors/
+> wget https://repo1.maven.org/maven2/org/apache/camel/kafkaconnector/camel-slack-kafka-connector/0.5.0/camel-slack-kafka-connector-0.5.0-package.zip
+> unzip camel-slack-kafka-connector-0.5.0-package.zip
+```
+
+Now we can start the build:
+
+[source,bash,options="nowrap"]
+----
+oc start-build my-connect-cluster-connect --from-dir=./my-connectors/ --follow
+----
+
+We should now wait for the rollout of the new image to finish and the replica set with the new connector to become ready.
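To follow the rollout, one option is to watch the Connect pods until the rebuilt image is running (a sketch; the label matches the cluster name used in the commands above):

```shell
# Watch the Kafka Connect pods; press Ctrl+C once the new pod is Running
oc get pods -l strimzi.io/name=my-connect-cluster-connect -w
```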
+Once it is done, we can check that the connectors are available in our Kafka Connect cluster.
+Strimzi is running Kafka Connect in a distributed mode.
+
+To check the available connector plugins, you can run the following command:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connector-plugins
+----
+
+You should see something like this:
+
+[source,json,options="nowrap"]
+----
+[{"class":"org.apache.camel.kafkaconnector.CamelSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.CamelSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.slack.CamelSlackSinkConnector","type":"sink","version":"0.5.0"},{"class":"org.apache.camel.kafkaconnector.slack.CamelSlackSourceConnector","type":"source","version":"0.5.0"},{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink","v
[...]
+----
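To confirm the Slack connector classes are present without scanning the raw JSON, you can filter the plugin list; a sketch that assumes `python3` is available on the machine where you run `oc`:

```shell
# Fetch the plugin list from the Connect REST API and keep only Slack entries
oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connector-plugins \
  | python3 -c 'import sys, json
for p in json.load(sys.stdin):
    if "slack" in p["class"].lower():
        print(p["class"], p["type"], p["version"])'
```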
+
+### Set the webhook URL as a secret (optional)
+
+You can also provide the webhook URL as a secret. You'll need to edit the file config/openshift/slack-webhook.properties with the correct webhook URL and then execute the following command:
+
+[source,bash,options="nowrap"]
+----
+oc create secret generic slack-webhook --from-file=config/openshift/slack-webhook.properties
+----
+
+Now we need to edit the KafkaConnectS2I custom resource to reference the secret. For example:
+
+[source,yaml,options="nowrap"]
+----
+spec:
+ # ...
+ config:
+ config.providers: file
+    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
+ #...
+ externalConfiguration:
+ volumes:
+ - name: slack-webhook
+ secret:
+ secretName: slack-webhook
+----
+
+In this way the secret `slack-webhook` will be mounted as a volume at the path /opt/kafka/external-configuration/slack-webhook/.
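Once the resource is updated and the Connect pod has restarted, you can optionally verify that the properties file is mounted (a sketch, reusing the pod-lookup pattern from the earlier commands):

```shell
# List the externally mounted configuration inside the running Connect pod
oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- ls /opt/kafka/external-configuration/slack-webhook/
```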
+
+### Create connector instance
+
+Now we can create an instance of the Slack sink connector:
+
+[source,bash,options="nowrap"]
+----
+oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -X POST \
+ -H "Accept:application/json" \
+ -H "Content-Type:application/json" \
+ http://my-connect-cluster-connect-api:8083/connectors -d @- <<'EOF'
+{
+ "name": "slack-sink-connector",
+ "config": {
+    "connector.class": "org.apache.camel.kafkaconnector.slack.CamelSlackSinkConnector",
+ "tasks.max": "1",
+ "key.converter": "org.apache.kafka.connect.storage.StringConverter",
+ "value.converter": "org.apache.kafka.connect.storage.StringConverter",
+ "topics": "slack-topic",
+ "camel.sink.path.channel": "general",
+ "camel.component.slack.webhookUrl": "<webhook_url>"
+ }
+}
+EOF
+----
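After creating the connector, you can check that its task is running through the Kafka Connect REST API `status` endpoint (same pod-lookup pattern as above):

```shell
# Query the status of the newly created connector instance
oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connectors/slack-sink-connector/status
```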
+
+Alternatively, if you have enabled `use-connector-resources`, you can create the connector instance by creating a specific custom resource:
+
+[source,bash,options="nowrap"]
+----
+oc apply -f - << EOF
+apiVersion: kafka.strimzi.io/v1alpha1
+kind: KafkaConnector
+metadata:
+ name: slack-sink-connector
+ namespace: myproject
+ labels:
+ strimzi.io/cluster: my-connect-cluster
+spec:
+ class: org.apache.camel.kafkaconnector.slack.CamelSlackSinkConnector
+ tasksMax: 1
+ config:
+ key.converter: org.apache.kafka.connect.storage.StringConverter
+ value.converter: org.apache.kafka.connect.storage.StringConverter
+ topics: slack-topic
+ camel.sink.path.channel: general
+ camel.component.slack.webhookUrl: webhook_url
+EOF
+----
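When the connector is managed through the custom resource, you can also inspect it directly with `oc` (the resource name and namespace match the metadata above):

```shell
# Show the KafkaConnector resource, including any status reported by the operator
oc get kafkaconnector slack-sink-connector -n myproject -o yaml
```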
+
+If you followed the optional step for the webhook secret, you can run the following command:
+
+[source,bash,options="nowrap"]
+----
+oc apply -f config/openshift/slack-sink.yaml
+----
+
+Check for messages in your channel.
+
+In another terminal, you can send messages to the topic using the Kafka console producer:
+
+```
+oc exec -i -c kafka my-cluster-kafka-0 -- bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic slack-topic
+Hello from Apache Camel
+```
+
+
diff --git a/slack/slack-sink/config/openshift/slack-sink.yaml b/slack/slack-sink/config/openshift/slack-sink.yaml
new file mode 100644
index 0000000..2150f69
--- /dev/null
+++ b/slack/slack-sink/config/openshift/slack-sink.yaml
@@ -0,0 +1,16 @@
+apiVersion: kafka.strimzi.io/v1alpha1
+kind: KafkaConnector
+metadata:
+ name: slack-sink-connector
+ namespace: myproject
+ labels:
+ strimzi.io/cluster: my-connect-cluster
+spec:
+ class: org.apache.camel.kafkaconnector.slack.CamelSlackSinkConnector
+ tasksMax: 1
+ config:
+ key.converter: org.apache.kafka.connect.storage.StringConverter
+ value.converter: org.apache.kafka.connect.storage.StringConverter
+ topics: slack-topic
+ camel.sink.path.channel: general
+    camel.component.slack.webhookUrl: ${file:/opt/kafka/external-configuration/slack-webhook/slack-webhook.properties:webhook}
diff --git a/slack/slack-sink/config/openshift/slack-webhook.properties b/slack/slack-sink/config/openshift/slack-webhook.properties
new file mode 100644
index 0000000..81ca1d7
--- /dev/null
+++ b/slack/slack-sink/config/openshift/slack-webhook.properties
@@ -0,0 +1 @@
+webhook=xxx