Re: [ANNOUNCE] Apache Flink Stateful Functions 2.2.2 released
Great to hear! Thanks a lot to everyone who helped make this release possible.

Cheers,
Till

On Sat, Jan 2, 2021 at 3:37 AM Tzu-Li (Gordon) Tai wrote:
> The Apache Flink community released the second bugfix release of the
> Stateful Functions (StateFun) 2.2 series, version 2.2.2.
>
> *We strongly recommend all users to upgrade to this version.*
>
> *Please check out the release announcement:*
> https://flink.apache.org/news/2021/01/02/release-statefun-2.2.2.html
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Maven artifacts for Stateful Functions can be found at:
> https://search.maven.org/search?q=g:org.apache.flink%20statefun
>
> The Python SDK for Stateful Functions published to the PyPI index can be found at:
> https://pypi.org/project/apache-flink-statefun/
>
> Official Dockerfiles for building Stateful Functions Docker images can be found at:
> https://github.com/apache/flink-statefun-docker
>
> Alternatively, Ververica has volunteered to make Stateful Functions images
> available for the community via their public Docker Hub registry:
> https://hub.docker.com/r/ververica/flink-statefun
>
> The full release notes are available in Jira:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12349366
>
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
>
> Cheers,
> Gordon
Tumbling Time Window
Hello All,

First of all, Happy New Year, and thanks for the excellent community support!

I have a job that requires a 2-second tumbling time window per key: for each user we wait 2 seconds to collect enough data and then proceed with further processing. My question is whether I should use the regular DSL windowing or write a custom process function that does the windowing. I have heard that the DSL window has more overhead than a custom window function. What do you suggest, and can someone provide an example of a custom window function per key? Also, given that the window is very short (2 seconds), would there be more overhead from firing so many timers for each key?

Thanks!

Regards,
Navneeth
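For reference, a minimal sketch of the "custom process function" alternative being asked about: a KeyedProcessFunction that buffers events per key in state and registers a processing-time timer 2 seconds after the first event of a batch arrives. The Tuple2<String, Long> element type, the state names, and the exact batching semantics are illustrative assumptions, not details from the actual job.

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

import java.util.ArrayList;
import java.util.List;

// Buffers (userId, value) events per key and emits the buffered batch 2 seconds
// after the first event of the batch arrived (processing time).
public class TwoSecondBatcher
        extends KeyedProcessFunction<String, Tuple2<String, Long>, List<Tuple2<String, Long>>> {

    private transient ListState<Tuple2<String, Long>> buffer;
    private transient ValueState<Long> timerTs;

    @Override
    public void open(Configuration parameters) {
        buffer = getRuntimeContext().getListState(new ListStateDescriptor<>(
                "buffer", TypeInformation.of(new TypeHint<Tuple2<String, Long>>() {})));
        timerTs = getRuntimeContext().getState(new ValueStateDescriptor<>("timerTs", Long.class));
    }

    @Override
    public void processElement(Tuple2<String, Long> value, Context ctx,
                               Collector<List<Tuple2<String, Long>>> out) throws Exception {
        buffer.add(value);
        // Register only one timer per key and pending batch.
        if (timerTs.value() == null) {
            long fireAt = ctx.timerService().currentProcessingTime() + 2000L;
            ctx.timerService().registerProcessingTimeTimer(fireAt);
            timerTs.update(fireAt);
        }
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx,
                        Collector<List<Tuple2<String, Long>>> out) throws Exception {
        List<Tuple2<String, Long>> batch = new ArrayList<>();
        for (Tuple2<String, Long> e : buffer.get()) {
            batch.add(e);
        }
        if (!batch.isEmpty()) {
            out.collect(batch);
        }
        buffer.clear();
        timerTs.clear();
    }
}

It would be wired in as stream.keyBy(e -> e.f0).process(new TwoSecondBatcher()). Note that this is essentially what the built-in window operator already does internally (one timer per key and pending window), which is why the DSL version usually ends up just as efficient with less code.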
CICD
Hi All,

Currently we are using Flink in session cluster mode and we deploy the jobs manually, i.e. through the web UI. We use AWS ECS to run the Docker containers, with two service definitions: one for the JM and the other for the TM. How is everyone managing the CI/CD process? Is there a better way to run a job in job cluster mode and use Jenkins to perform CI/CD?

Any pointers on how this is being done would really help and would be greatly appreciated.

Thanks,
Navneeth
Re: CICD
Could you not use the JM web address to reach the REST API? You can start/stop/savepoint/restore and upload new jars via the REST API. While I did not run on ECS (I ran on EMR), I was able to use the REST API to do deployments.

On Sun, Jan 3, 2021 at 19:09 Navneeth Krishnan wrote:
> [...]
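To make the REST-API-driven deployment a bit more concrete, here is a minimal sketch of triggering a savepoint from a CI job, assuming the standard Flink 1.11/1.12 endpoint POST /jobs/<job-id>/savepoints. The JobManager address, job id, and target directory are placeholders the pipeline would fill in.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Rough sketch: trigger a savepoint for a running job through the JobManager REST API.
public class TriggerSavepoint {

    private static final String JM_ADDRESS = "http://jobmanager:8081"; // placeholder
    private static final String JOB_ID = "<job-id>";                   // placeholder

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Trigger a savepoint, optionally cancelling the job at the same time.
        String body = "{\"target-directory\": \"s3://my-bucket/savepoints\", \"cancel-job\": true}";
        HttpRequest trigger = HttpRequest.newBuilder()
                .uri(URI.create(JM_ADDRESS + "/jobs/" + JOB_ID + "/savepoints"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = client.send(trigger, HttpResponse.BodyHandlers.ofString());
        // The response contains a "request-id" that can be polled via
        // GET /jobs/<job-id>/savepoints/<request-id> until the savepoint completes.
        System.out.println(response.body());
    }
}

For the session-cluster setup described above, the same API also exposes POST /jars/upload and POST /jars/<jar-id>/run (which accepts a savepointPath parameter), so a Jenkins job can upload a new jar and resume it from the savepoint without touching the UI.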
Re: CICD
Thanks, Vikash, for the response. Yes, that's very much feasible, but we are planning to move to the job/application cluster model, where the artifacts are bundled inside the container. When there is a new container image, we would then have to do the following:

- Take a savepoint
- Upgrade the JM and TM container images and provide the savepoint path during startup

I would like to know if this is the standard way or if there are better options. We currently use Terraform to manage the infrastructure, and it would be very helpful if someone has already done this.

Thanks

On Sun, Jan 3, 2021 at 4:17 PM Vikash Dat wrote:
> [...]
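For the two-step workflow above, the missing piece between the steps is waiting for the savepoint to complete and capturing its location before the new images are started. A rough sketch, again assuming the standard async-operation endpoint GET /jobs/<job-id>/savepoints/<request-id>, with deliberately crude string matching instead of a JSON library; all names are placeholders supplied by the pipeline.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Polls the savepoint trigger started in the previous step until it completes,
// then prints the full status response (which includes the savepoint location).
public class AwaitSavepoint {

    private static final String JM_ADDRESS = "http://jobmanager:8081"; // placeholder
    private static final String JOB_ID = "<job-id>";                   // placeholder
    private static final String REQUEST_ID = "<request-id>";           // from the trigger response

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest status = HttpRequest.newBuilder()
                .uri(URI.create(JM_ADDRESS + "/jobs/" + JOB_ID + "/savepoints/" + REQUEST_ID))
                .GET()
                .build();

        while (true) {
            String body = client.send(status, HttpResponse.BodyHandlers.ofString()).body();
            if (body.contains("failure-cause")) {
                throw new IllegalStateException("Savepoint failed: " + body);
            }
            if (body.contains("\"COMPLETED\"")) {
                // The "operation" section of the response contains the savepoint location,
                // e.g. s3://my-bucket/savepoints/savepoint-xxxxxx.
                System.out.println(body);
                return;
            }
            Thread.sleep(1000); // still IN_PROGRESS; poll again
        }
    }
}

The captured location would then be fed into the ECS task definition (or a Terraform variable) so that the new JobManager container starts the job from that savepoint, e.g. via the standalone-job entrypoint's --fromSavepoint argument in application/job-cluster mode.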
Re: [ANNOUNCE] Apache Flink Stateful Functions 2.2.2 released
@Gordon Thanks a lot for the release and for being the release manager. And thanks to everyone who made this release possible!

Best,
Xingbo

On Sun, Jan 3, 2021 at 8:31 PM Till Rohrmann wrote:
> Great to hear! Thanks a lot to everyone who helped make this release
> possible.
>
> Cheers,
> Till
>
> On Sat, Jan 2, 2021 at 3:37 AM Tzu-Li (Gordon) Tai wrote:
> > [...]
Re: Tumbling Time Window
Hi Navneeth,

I think you may start with the window function; an example of a custom window function can be found in [1]. From the description it sounds like a standard tumbling window, and if you implemented it with a customized process function you would end up with functionality very similar to the current window operator.

Best,
Yun

[1] https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/stream/operators/windows.html#processwindowfunction

--
Sender: Navneeth Krishnan
Date: 2021/01/04 08:05:17
Recipient: user
Theme: Tumbling Time Window

[...]
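To make the suggested ProcessWindowFunction approach concrete for this case, here is a minimal sketch of the DSL version. The Tuple2<String, Long> (userId, value) input, the processing-time semantics, and the simple per-window count are illustrative assumptions, not details from the original job.

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class TumblingWindowExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical input: (userId, value) pairs; in a real job this would come from a source such as Kafka.
        DataStream<Tuple2<String, Long>> events =
                env.fromElements(Tuple2.of("user-1", 1L), Tuple2.of("user-1", 2L), Tuple2.of("user-2", 3L));

        events
            .keyBy(e -> e.f0)                                          // one window per user
            .window(TumblingProcessingTimeWindows.of(Time.seconds(2))) // 2-second tumbling window
            .process(new ProcessWindowFunction<Tuple2<String, Long>, String, String, TimeWindow>() {
                @Override
                public void process(String key, Context ctx, Iterable<Tuple2<String, Long>> elements,
                                    Collector<String> out) {
                    long count = 0;
                    for (Tuple2<String, Long> ignored : elements) {
                        count++;
                    }
                    out.collect(key + " -> " + count + " events in window ending at " + ctx.window().getEnd());
                }
            })
            .print();

        env.execute("2s tumbling window per key");
    }
}

Under the hood the window operator also registers one timer per key and window, so a 2-second window does not inherently fire more timers than a hand-rolled process-function variant would.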
Re: Facing issues on kafka while running a job that was built with 1.11.2-scala-2.11 version onto flink version 1.11.2-scala-2.12
Hi Narasimha,

Since the Kafka connector itself is implemented purely in Java, I would guess that with high probability it is not an issue with the Scala version. I think you may first want to check the Kafka cluster's status.

Best,
Yun

--
Sender: narasimha
Date: 2020/12/30 13:43:28
Recipient: user
Theme: Facing issues on kafka while running a job that was built with 1.11.2-scala-2.11 version onto flink version 1.11.2-scala-2.12

Hi,

We are facing issues on Kafka while running a job that was built with the 1.11.2-scala-2.11 version on a Flink 1.11.2-scala-2.12 cluster. The Kafka connector for 1.11.2-scala-2.11 is getting packaged with the job. The Kafka cluster was all good when writing to topics, but when something reads from the topics the cluster intermittently becomes unstable with unresponsive brokers.

Do these differences in Scala binary versions cause any issues for Kafka?

Flink version: 1.11.2-scala-2.12
Kafka version: 2.1.0

--
A.Narasimha Swamy