knaufk commented on a change in pull request #18725:
URL: https://github.com/apache/flink/pull/18725#discussion_r807701936



##########
File path: docs/content/docs/deployment/resource-providers/standalone/docker.md
##########
@@ -247,68 +246,175 @@ You can see that certain tags include the version of Hadoop, e.g. (e.g. `-hadoop
 Beginning with Flink 1.5, image tags that omit the Hadoop version correspond to Hadoop-free releases of Flink
 that do not include a bundled Hadoop distribution.
 
+## Flink with Docker Compose
 
-### Passing configuration via environment variables
+[Docker Compose](https://docs.docker.com/compose/) is a way to run a group of Docker containers locally.
+The next sections show example configuration files for running Flink.
 
-When you run Flink image, you can also change its configuration options by setting the environment variable `FLINK_PROPERTIES`:
+### General
 
-```sh
-$ FLINK_PROPERTIES="jobmanager.rpc.address: host
-taskmanager.numberOfTaskSlots: 3
-blob.server.port: 6124
-"
-$ docker run --env FLINK_PROPERTIES=${FLINK_PROPERTIES} flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} <jobmanager|standalone-job|taskmanager>
-```
+* Create the `docker-compose.yaml` file. See the examples in the sections below:
+    * [Application Mode](#app-cluster-yml)
+    * [Session Mode](#session-cluster-yml)
+    * [Session Mode with SQL Client](#session-cluster-sql-yaml)
 
-The [`jobmanager.rpc.address`]({{< ref "docs/deployment/config" >}}#jobmanager-rpc-address) option must be configured, others are optional to set.
+* Launch a cluster in the foreground (use `-d` for background)
 
-The environment variable `FLINK_PROPERTIES` should contain a list of Flink cluster configuration options separated by new line,
-the same way as in the `flink-conf.yaml`. `FLINK_PROPERTIES` takes precedence over configurations in `flink-conf.yaml`.
+    ```sh
+    $ docker-compose up
+    ```
 
-### Provide custom configuration
+* Scale the cluster up or down to `N` TaskManagers
 
-The configuration files (`flink-conf.yaml`, logging, hosts etc) are located in the `/opt/flink/conf` directory in the Flink image.
-To provide a custom location for the Flink configuration files, you can
+    ```sh
+    $ docker-compose scale taskmanager=<N>
+    ```
 
-* **either mount a volume** with the custom configuration files to this path `/opt/flink/conf` when you run the Flink image:
+* Access the JobManager container
 
     ```sh
-    $ docker run \
-        --mount type=bind,src=/host/path/to/custom/conf,target=/opt/flink/conf \
-        flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} <jobmanager|standalone-job|taskmanager>
+    $ docker exec -it $(docker ps --filter name=jobmanager --format={{.ID}}) /bin/sh
     ```
 
-* or add them to your **custom Flink image**, build and run it:
-
+* Kill the cluster
 
-    ```dockerfile
-    FROM flink
-    ADD /host/path/to/flink-conf.yaml /opt/flink/conf/flink-conf.yaml
-    ADD /host/path/to/log4j.properties /opt/flink/conf/log4j.properties
+    ```sh
+    $ docker-compose kill
     ```
 
-{{< hint info >}}
-The mounted volume must contain all necessary configuration files.
-The `flink-conf.yaml` file must have write permission so that the Docker entry point script can modify it in certain cases.
-{{< /hint >}}
+* Access the web UI
 
-### Using filesystem plugins
+  When the cluster is running, you can visit the web UI at [http://localhost:8081](http://localhost:8081).
 
-As described in the [plugins]({{< ref "docs/deployment/filesystems/plugins" >}}) documentation page: In order to use plugins they must be
-copied to the correct location in the Flink installation in the Docker container for them to work.
+### Application Mode
 
-If you want to enable plugins provided with Flink (in the `opt/` directory of the Flink distribution), you can pass the environment variable `ENABLE_BUILT_IN_PLUGINS` when you run the Flink image.
-The `ENABLE_BUILT_IN_PLUGINS` should contain a list of plugin jar file names separated by `;`. A valid plugin name is for example `flink-s3-fs-hadoop-{{< version >}}.jar`
+In Application Mode you start a Flink cluster that is dedicated to running only the Flink jobs bundled with the image.
+Hence, you need to build a dedicated Flink image per application.
+Please check [here](#application-mode-on-docker) for the details.
+See also [how to specify the JobManager arguments](#jobmanager-additional-command-line-arguments) in the `command` for the `jobmanager` service.
 
-```sh
-    $ docker run \
-        --env ENABLE_BUILT_IN_PLUGINS=flink-plugin1.jar;flink-plugin2.jar \
-        flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} <jobmanager|standalone-job|taskmanager>
+<a id="app-cluster-yml">`docker-compose.yml`</a> for *Application Mode*.

Review comment:
       I think that is only an anchor.

##########
File path: docs/content/docs/deployment/resource-providers/standalone/docker.md
##########
@@ -247,68 +246,175 @@ You can see that certain tags include the version of Hadoop, e.g. (e.g. `-hadoop
 Beginning with Flink 1.5, image tags that omit the Hadoop version correspond to Hadoop-free releases of Flink
 that do not include a bundled Hadoop distribution.
 
+## Flink with Docker Compose
 
-### Passing configuration via environment variables
+[Docker Compose](https://docs.docker.com/compose/) is a way to run a group of Docker containers locally.
+The next sections show example configuration files for running Flink.
 
-When you run Flink image, you can also change its configuration options by setting the environment variable `FLINK_PROPERTIES`:
+### General
 
-```sh
-$ FLINK_PROPERTIES="jobmanager.rpc.address: host
-taskmanager.numberOfTaskSlots: 3
-blob.server.port: 6124
-"
-$ docker run --env FLINK_PROPERTIES=${FLINK_PROPERTIES} flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} <jobmanager|standalone-job|taskmanager>
-```
+* Create the `docker-compose.yaml` file. See the examples in the sections below:
+    * [Application Mode](#app-cluster-yml)
+    * [Session Mode](#session-cluster-yml)
+    * [Session Mode with SQL Client](#session-cluster-sql-yaml)
 
-The [`jobmanager.rpc.address`]({{< ref "docs/deployment/config" >}}#jobmanager-rpc-address) option must be configured, others are optional to set.
+* Launch a cluster in the foreground (use `-d` for background)
 
-The environment variable `FLINK_PROPERTIES` should contain a list of Flink cluster configuration options separated by new line,
-the same way as in the `flink-conf.yaml`. `FLINK_PROPERTIES` takes precedence over configurations in `flink-conf.yaml`.
+    ```sh
+    $ docker-compose up
+    ```
 
-### Provide custom configuration
+* Scale the cluster up or down to `N` TaskManagers
 
-The configuration files (`flink-conf.yaml`, logging, hosts etc) are located in the `/opt/flink/conf` directory in the Flink image.
-To provide a custom location for the Flink configuration files, you can
+    ```sh
+    $ docker-compose scale taskmanager=<N>
+    ```
 
-* **either mount a volume** with the custom configuration files to this path `/opt/flink/conf` when you run the Flink image:
+* Access the JobManager container
 
     ```sh
-    $ docker run \
-        --mount type=bind,src=/host/path/to/custom/conf,target=/opt/flink/conf \
-        flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} <jobmanager|standalone-job|taskmanager>
+    $ docker exec -it $(docker ps --filter name=jobmanager --format={{.ID}}) /bin/sh
     ```
 
-* or add them to your **custom Flink image**, build and run it:
-
+* Kill the cluster
 
-    ```dockerfile
-    FROM flink
-    ADD /host/path/to/flink-conf.yaml /opt/flink/conf/flink-conf.yaml
-    ADD /host/path/to/log4j.properties /opt/flink/conf/log4j.properties
+    ```sh
+    $ docker-compose kill
     ```
 
-{{< hint info >}}
-The mounted volume must contain all necessary configuration files.
-The `flink-conf.yaml` file must have write permission so that the Docker entry point script can modify it in certain cases.
-{{< /hint >}}
+* Access the web UI
 
-### Using filesystem plugins
+  When the cluster is running, you can visit the web UI at [http://localhost:8081](http://localhost:8081).
 
-As described in the [plugins]({{< ref "docs/deployment/filesystems/plugins" >}}) documentation page: In order to use plugins they must be
-copied to the correct location in the Flink installation in the Docker container for them to work.
+### Application Mode
 
-If you want to enable plugins provided with Flink (in the `opt/` directory of the Flink distribution), you can pass the environment variable `ENABLE_BUILT_IN_PLUGINS` when you run the Flink image.
-The `ENABLE_BUILT_IN_PLUGINS` should contain a list of plugin jar file names separated by `;`. A valid plugin name is for example `flink-s3-fs-hadoop-{{< version >}}.jar`
+In Application Mode you start a Flink cluster that is dedicated to running only the Flink jobs bundled with the image.
+Hence, you need to build a dedicated Flink image per application.
+Please check [here](#application-mode-on-docker) for the details.
+See also [how to specify the JobManager arguments](#jobmanager-additional-command-line-arguments) in the `command` for the `jobmanager` service.
 
-```sh
-    $ docker run \
-        --env ENABLE_BUILT_IN_PLUGINS=flink-plugin1.jar;flink-plugin2.jar \
-        flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} <jobmanager|standalone-job|taskmanager>
+<a id="app-cluster-yml">`docker-compose.yml`</a> for *Application Mode*.
+
+```yaml
+version: "2.2"
+services:
+  jobmanager:
+    image: flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
+    ports:
+      - "8081:8081"
+    command: standalone-job --job-classname com.job.ClassName [--job-id <job id>] [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] [job arguments]
+    volumes:
+      - /host/path/to/job/artifacts:/opt/flink/usrlib
+    environment:
+      - |
+        FLINK_PROPERTIES=
+        jobmanager.rpc.address: jobmanager
+        parallelism.default: 2
+
+  taskmanager:
+    image: flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
+    depends_on:
+      - jobmanager
+    command: taskmanager
+    scale: 1
+    volumes:
+      - /host/path/to/job/artifacts:/opt/flink/usrlib
+    environment:
+      - |
+        FLINK_PROPERTIES=
+        jobmanager.rpc.address: jobmanager
+        taskmanager.numberOfTaskSlots: 2
+        parallelism.default: 2
 ```
 
-There are also more [advanced ways](#advanced-customization) for customizing the Flink image.
+### Session Mode
+
+In Session Mode you use Docker Compose to spin up a long-running Flink cluster to which you can then submit jobs.
+
+<a id="session-cluster-yml">`docker-compose.yml`</a> for *Session Mode*:

Review comment:
       I think that is only an anchor.

##########
File path: docs/content/docs/deployment/resource-providers/standalone/docker.md
##########
@@ -247,68 +246,175 @@ You can see that certain tags include the version of Hadoop, e.g. (e.g. `-hadoop
 Beginning with Flink 1.5, image tags that omit the Hadoop version correspond to Hadoop-free releases of Flink
 that do not include a bundled Hadoop distribution.
 
+## Flink with Docker Compose
 
-### Passing configuration via environment variables
+[Docker Compose](https://docs.docker.com/compose/) is a way to run a group of Docker containers locally.
+The next sections show example configuration files for running Flink.
 
-When you run Flink image, you can also change its configuration options by setting the environment variable `FLINK_PROPERTIES`:
+### General
 
-```sh
-$ FLINK_PROPERTIES="jobmanager.rpc.address: host
-taskmanager.numberOfTaskSlots: 3
-blob.server.port: 6124
-"
-$ docker run --env FLINK_PROPERTIES=${FLINK_PROPERTIES} flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} <jobmanager|standalone-job|taskmanager>
-```
+* Create the `docker-compose.yaml` file. See the examples in the sections below:
+    * [Application Mode](#app-cluster-yml)
+    * [Session Mode](#session-cluster-yml)
+    * [Session Mode with SQL Client](#session-cluster-sql-yaml)
 
-The [`jobmanager.rpc.address`]({{< ref "docs/deployment/config" >}}#jobmanager-rpc-address) option must be configured, others are optional to set.
+* Launch a cluster in the foreground (use `-d` for background)
 
-The environment variable `FLINK_PROPERTIES` should contain a list of Flink cluster configuration options separated by new line,
-the same way as in the `flink-conf.yaml`. `FLINK_PROPERTIES` takes precedence over configurations in `flink-conf.yaml`.
+    ```sh
+    $ docker-compose up
+    ```
 
-### Provide custom configuration
+* Scale the cluster up or down to `N` TaskManagers
 
-The configuration files (`flink-conf.yaml`, logging, hosts etc) are located in the `/opt/flink/conf` directory in the Flink image.
-To provide a custom location for the Flink configuration files, you can
+    ```sh
+    $ docker-compose scale taskmanager=<N>
+    ```
 
-* **either mount a volume** with the custom configuration files to this path `/opt/flink/conf` when you run the Flink image:
+* Access the JobManager container
 
     ```sh
-    $ docker run \
-        --mount type=bind,src=/host/path/to/custom/conf,target=/opt/flink/conf \
-        flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} <jobmanager|standalone-job|taskmanager>
+    $ docker exec -it $(docker ps --filter name=jobmanager --format={{.ID}}) /bin/sh
     ```
 
-* or add them to your **custom Flink image**, build and run it:
-
+* Kill the cluster
 
-    ```dockerfile
-    FROM flink
-    ADD /host/path/to/flink-conf.yaml /opt/flink/conf/flink-conf.yaml
-    ADD /host/path/to/log4j.properties /opt/flink/conf/log4j.properties
+    ```sh
+    $ docker-compose kill
     ```
 
-{{< hint info >}}
-The mounted volume must contain all necessary configuration files.
-The `flink-conf.yaml` file must have write permission so that the Docker entry point script can modify it in certain cases.
-{{< /hint >}}
+* Access the web UI
 
-### Using filesystem plugins
+  When the cluster is running, you can visit the web UI at [http://localhost:8081](http://localhost:8081).
 
-As described in the [plugins]({{< ref "docs/deployment/filesystems/plugins" >}}) documentation page: In order to use plugins they must be
-copied to the correct location in the Flink installation in the Docker container for them to work.
+### Application Mode
 
-If you want to enable plugins provided with Flink (in the `opt/` directory of the Flink distribution), you can pass the environment variable `ENABLE_BUILT_IN_PLUGINS` when you run the Flink image.
-The `ENABLE_BUILT_IN_PLUGINS` should contain a list of plugin jar file names separated by `;`. A valid plugin name is for example `flink-s3-fs-hadoop-{{< version >}}.jar`
+In Application Mode you start a Flink cluster that is dedicated to running only the Flink jobs bundled with the image.
+Hence, you need to build a dedicated Flink image per application.
+Please check [here](#application-mode-on-docker) for the details.
+See also [how to specify the JobManager arguments](#jobmanager-additional-command-line-arguments) in the `command` for the `jobmanager` service.
 
-```sh
-    $ docker run \
-        --env ENABLE_BUILT_IN_PLUGINS=flink-plugin1.jar;flink-plugin2.jar \
-        flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} <jobmanager|standalone-job|taskmanager>
+<a id="app-cluster-yml">`docker-compose.yml`</a> for *Application Mode*.
+
+```yaml
+version: "2.2"
+services:
+  jobmanager:
+    image: flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
+    ports:
+      - "8081:8081"
+    command: standalone-job --job-classname com.job.ClassName [--job-id <job id>] [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] [job arguments]
+    volumes:
+      - /host/path/to/job/artifacts:/opt/flink/usrlib
+    environment:
+      - |
+        FLINK_PROPERTIES=
+        jobmanager.rpc.address: jobmanager
+        parallelism.default: 2
+
+  taskmanager:
+    image: flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
+    depends_on:
+      - jobmanager
+    command: taskmanager
+    scale: 1
+    volumes:
+      - /host/path/to/job/artifacts:/opt/flink/usrlib
+    environment:
+      - |
+        FLINK_PROPERTIES=
+        jobmanager.rpc.address: jobmanager
+        taskmanager.numberOfTaskSlots: 2
+        parallelism.default: 2
 ```
 
-There are also more [advanced ways](#advanced-customization) for customizing the Flink image.
+### Session Mode
+
+In Session Mode you use Docker Compose to spin up a long-running Flink cluster to which you can then submit jobs.
+
+<a id="session-cluster-yml">`docker-compose.yml`</a> for *Session Mode*:
+
+```yaml
+version: "2.2"
+services:
+  jobmanager:
+    image: flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
+    ports:
+      - "8081:8081"
+    command: jobmanager
+    environment:
+      - |
+        FLINK_PROPERTIES=
+        jobmanager.rpc.address: jobmanager
+
+  taskmanager:
+    image: flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
+    depends_on:
+      - jobmanager
+    command: taskmanager
+    scale: 1
+    environment:
+      - |
+        FLINK_PROPERTIES=
+        jobmanager.rpc.address: jobmanager
+        taskmanager.numberOfTaskSlots: 2
+```
+
+### Flink SQL Client with Session Cluster
 
-### Enabling Python
+In this example, you spin up a long-running session cluster and a Flink SQL CLI that uses this cluster to submit jobs.
+
+<a id="session-cluster-sql-yaml">`docker-compose.yml`</a> for Flink SQL Client 
with *Session Cluster*:

Review comment:
       I think that is only an anchor.

##########
File path: docs/content/docs/deployment/resource-providers/standalone/docker.md
##########
@@ -247,68 +246,175 @@ You can see that certain tags include the version of Hadoop, e.g. (e.g. `-hadoop
 Beginning with Flink 1.5, image tags that omit the Hadoop version correspond to Hadoop-free releases of Flink
 that do not include a bundled Hadoop distribution.
 
+## Flink with Docker Compose
 
-### Passing configuration via environment variables
+[Docker Compose](https://docs.docker.com/compose/) is a way to run a group of Docker containers locally.
+The next sections show example configuration files for running Flink.
 
-When you run Flink image, you can also change its configuration options by setting the environment variable `FLINK_PROPERTIES`:
+### General
 
-```sh
-$ FLINK_PROPERTIES="jobmanager.rpc.address: host
-taskmanager.numberOfTaskSlots: 3
-blob.server.port: 6124
-"
-$ docker run --env FLINK_PROPERTIES=${FLINK_PROPERTIES} flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} <jobmanager|standalone-job|taskmanager>
-```
+* Create the `docker-compose.yaml` file. See the examples in the sections below:
+    * [Application Mode](#app-cluster-yml)
+    * [Session Mode](#session-cluster-yml)
+    * [Session Mode with SQL Client](#session-cluster-sql-yaml)
 
-The [`jobmanager.rpc.address`]({{< ref "docs/deployment/config" >}}#jobmanager-rpc-address) option must be configured, others are optional to set.
+* Launch a cluster in the foreground (use `-d` for background)
 
-The environment variable `FLINK_PROPERTIES` should contain a list of Flink cluster configuration options separated by new line,
-the same way as in the `flink-conf.yaml`. `FLINK_PROPERTIES` takes precedence over configurations in `flink-conf.yaml`.
+    ```sh
+    $ docker-compose up
+    ```
 
-### Provide custom configuration
+* Scale the cluster up or down to `N` TaskManagers (but see the note on scaling after this list)
 
-The configuration files (`flink-conf.yaml`, logging, hosts etc) are located in the `/opt/flink/conf` directory in the Flink image.
-To provide a custom location for the Flink configuration files, you can
+    ```sh
+    $ docker-compose scale taskmanager=<N>
+    ```
 
-* **either mount a volume** with the custom configuration files to this path `/opt/flink/conf` when you run the Flink image:
+* Access the JobManager container
 
     ```sh
-    $ docker run \
-        --mount type=bind,src=/host/path/to/custom/conf,target=/opt/flink/conf \
-        flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} <jobmanager|standalone-job|taskmanager>
+    $ docker exec -it $(docker ps --filter name=jobmanager --format={{.ID}}) /bin/sh
     ```
 
-* or add them to your **custom Flink image**, build and run it:
-
+* Kill the cluster
 
-    ```dockerfile
-    FROM flink
-    ADD /host/path/to/flink-conf.yaml /opt/flink/conf/flink-conf.yaml
-    ADD /host/path/to/log4j.properties /opt/flink/conf/log4j.properties
+    ```sh
+    $ docker-compose kill
     ```
 
-{{< hint info >}}
-The mounted volume must contain all necessary configuration files.
-The `flink-conf.yaml` file must have write permission so that the Docker entry point script can modify it in certain cases.
-{{< /hint >}}
+* Access the web UI
 
-### Using filesystem plugins
+  When the cluster is running, you can visit the web UI at [http://localhost:8081](http://localhost:8081).
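+
+Note on scaling: newer versions of Docker Compose deprecate the dedicated `scale` command. As a sketch (assuming a reasonably recent `docker-compose`), the same effect can be achieved when launching the cluster:
+
+```sh
+$ docker-compose up --scale taskmanager=<N> -d
+```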
 
-As described in the [plugins]({{< ref "docs/deployment/filesystems/plugins" >}}) documentation page: In order to use plugins they must be
-copied to the correct location in the Flink installation in the Docker container for them to work.
+### Application Mode
 
-If you want to enable plugins provided with Flink (in the `opt/` directory of the Flink distribution), you can pass the environment variable `ENABLE_BUILT_IN_PLUGINS` when you run the Flink image.
-The `ENABLE_BUILT_IN_PLUGINS` should contain a list of plugin jar file names separated by `;`. A valid plugin name is for example `flink-s3-fs-hadoop-{{< version >}}.jar`
+In Application Mode you start a Flink cluster that is dedicated to running only the Flink jobs bundled with the image.
+Hence, you need to build a dedicated Flink image per application.
+Please check [here](#application-mode-on-docker) for the details.
+See also [how to specify the JobManager arguments](#jobmanager-additional-command-line-arguments) in the `command` for the `jobmanager` service.
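+
+As a minimal sketch of building such a dedicated image (the jar name `my-flink-job.jar` and the tag `my-flink-app` are hypothetical), the job artifacts are baked into `/opt/flink/usrlib` instead of being mounted:
+
+```sh
+# Write a Dockerfile that copies the job jar into the user-library directory
+$ cat > Dockerfile <<'EOF'
+FROM flink
+RUN mkdir -p /opt/flink/usrlib
+COPY my-flink-job.jar /opt/flink/usrlib/my-flink-job.jar
+EOF
+# Build the application-specific image
+$ docker build -t my-flink-app .
+```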
 
-```sh
-    $ docker run \
-        --env ENABLE_BUILT_IN_PLUGINS=flink-plugin1.jar;flink-plugin2.jar \
-        flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} <jobmanager|standalone-job|taskmanager>
+<a id="app-cluster-yml">`docker-compose.yml`</a> for *Application Mode*.
+
+```yaml
+version: "2.2"
+services:
+  jobmanager:
+    image: flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
+    ports:
+      - "8081:8081"
+    command: standalone-job --job-classname com.job.ClassName [--job-id <job id>] [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] [job arguments]
+    volumes:
+      - /host/path/to/job/artifacts:/opt/flink/usrlib
+    environment:
+      - |
+        FLINK_PROPERTIES=
+        jobmanager.rpc.address: jobmanager
+        parallelism.default: 2
+
+  taskmanager:
+    image: flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
+    depends_on:
+      - jobmanager
+    command: taskmanager
+    scale: 1
+    volumes:
+      - /host/path/to/job/artifacts:/opt/flink/usrlib
+    environment:
+      - |
+        FLINK_PROPERTIES=
+        jobmanager.rpc.address: jobmanager
+        taskmanager.numberOfTaskSlots: 2
+        parallelism.default: 2
 ```
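+
+For illustration, the value of `command` with the optional parts filled in might look as follows (the class name, job id, and job arguments are hypothetical placeholders):
+
+```sh
+standalone-job --job-classname com.example.WordCount --job-id 00000000000000000000000000000001 --input /opt/flink/usrlib/words.txt
+```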
 
-There are also more [advanced ways](#advanced-customization) for customizing the Flink image.
+### Session Mode
+
+In Session Mode you use Docker Compose to spin up a long-running Flink cluster to which you can then submit jobs.
+
+<a id="session-cluster-yml">`docker-compose.yml`</a> for *Session Mode*:
+
+```yaml
+version: "2.2"
+services:
+  jobmanager:
+    image: flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
+    ports:
+      - "8081:8081"
+    command: jobmanager
+    environment:
+      - |
+        FLINK_PROPERTIES=
+        jobmanager.rpc.address: jobmanager
+
+  taskmanager:
+    image: flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
+    depends_on:
+      - jobmanager
+    command: taskmanager
+    scale: 1
+    environment:
+      - |
+        FLINK_PROPERTIES=
+        jobmanager.rpc.address: jobmanager
+        taskmanager.numberOfTaskSlots: 2
+```
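+
+Once the session cluster is up, you can submit a job to it, for example from inside the JobManager container. A sketch, assuming the example jars that ship with the Flink image:
+
+```sh
+$ docker-compose exec jobmanager flink run /opt/flink/examples/streaming/TopSpeedWindowing.jar
+```
+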
+### Flink SQL Client with Session Cluster
 
-### Enabling Python
+In this example, you spin up a long-running session cluster and a Flink SQL CLI that uses this cluster to submit jobs.
+
+<a id="session-cluster-sql-yaml">`docker-compose.yml`</a> for Flink SQL Client 
with *Session Cluster*:
+
+```yaml
+version: "2.2"
+services:
+  jobmanager:
+    image: flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
+    ports:
+      - "8081:8081"
+    command: jobmanager
+    environment:
+      - |
+        FLINK_PROPERTIES=
+        jobmanager.rpc.address: jobmanager
+
+  taskmanager:
+    image: flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
+    depends_on:
+      - jobmanager
+    command: taskmanager
+    scale: 1
+    environment:
+      - |
+        FLINK_PROPERTIES=
+        jobmanager.rpc.address: jobmanager
+        taskmanager.numberOfTaskSlots: 2
+
+  sql-client:
+    image: flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}}
+    command: bin/sql-client.sh
+    depends_on:
+      - jobmanager
+    environment:
+      - |
+        FLINK_PROPERTIES=
+        jobmanager.rpc.address: jobmanager
+```
+* In order to start the SQL Client, run
+  ```sh
+  docker-compose run sql-client
+  ```
+  You can then start creating tables and querying them.
+
+* Note that all required dependencies (e.g. for connectors) need to be available both in the cluster and in the client; see the sketch after this list.
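+
+As a hedged sketch of one way to do that for the client side (the connector jar name and host path are hypothetical), a jar can be mounted into the SQL Client container at startup; the `jobmanager` and `taskmanager` services would need the same jar as well, e.g. via `volumes` entries in the `docker-compose.yml`:
+
+```sh
+$ docker-compose run -v "$(pwd)/flink-sql-connector-kafka.jar:/opt/flink/lib/flink-sql-connector-kafka.jar" sql-client
+```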

Review comment:
       Done.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

