[jira] [Created] (FLINK-18169) ES6 sql sink should allow users to configure basic authentication parameters
haoyuwen created FLINK-18169: Summary: ES6 sql sink should allow users to configure basic authentication parameters Key: FLINK-18169 URL: https://issues.apache.org/jira/browse/FLINK-18169 Project: Flink Issue Type: Improvement Components: Connectors / ElasticSearch Affects Versions: 1.10.1, 1.10.0 Reporter: haoyuwen For an Elasticsearch HTTP API that requires basic authentication, the current version of the ES SQL connector cannot be configured with the corresponding username and password, and therefore cannot write to such a cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005)
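Until the SQL connector exposes username/password options, a possible workaround is to wire basic-auth credentials into the underlying REST client on the DataStream API. The snippet below is a minimal sketch, assuming the flink-connector-elasticsearch6 ElasticsearchSink.Builder and RestClientFactory hooks available in 1.10; the host, username, and password values are placeholders.
{code:java}
import java.util.Collections;
import java.util.List;

import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink;
import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.impl.client.BasicCredentialsProvider;

public final class BasicAuthEsSink {

    /** Builds an ES6 sink whose REST client authenticates with basic auth. */
    public static ElasticsearchSink<String> build(ElasticsearchSinkFunction<String> sinkFunction) {
        List<HttpHost> hosts = Collections.singletonList(new HttpHost("es-host", 9200, "http"));

        ElasticsearchSink.Builder<String> builder = new ElasticsearchSink.Builder<>(hosts, sinkFunction);

        // Attach the username/password to every request issued by the low-level REST client.
        builder.setRestClientFactory(restClientBuilder -> {
            BasicCredentialsProvider credentialsProvider = new BasicCredentialsProvider();
            credentialsProvider.setCredentials(
                    AuthScope.ANY, new UsernamePasswordCredentials("user", "password"));
            restClientBuilder.setHttpClientConfigCallback(
                    httpClientBuilder -> httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider));
        });

        return builder.build();
    }
}
{code}
The SQL connector would ultimately need equivalent username/password connector properties; the exact option names are for the ticket to decide.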
Re: [ANNOUNCE] New Apache Flink Committer - Xintong Song
Congratulations Best, Guowei On Sat, Jun 6, 2020 at 9:56 AM Matt Wang wrote: > Congratulations! > > > --- > Best, > Matt Wang > > > On 06/5/2020 22:34, Andrey Zagrebin wrote: > Welcome to committers and congrats, Xintong! > > Cheers, > Andrey > > On Fri, Jun 5, 2020 at 4:22 PM Till Rohrmann wrote: > > Congratulations! > > Cheers, > Till > > On Fri, Jun 5, 2020 at 10:00 AM Dawid Wysakowicz > wrote: > > Congratulations! > > Best, > > Dawid > > On 05/06/2020 09:10, tison wrote: > Congrats, Xintong! > > Best, > tison. > > > On Fri, Jun 5, 2020 at 3:00 PM Jark Wu wrote: > > Congratulations Xintong! > > Best, > Jark > > On Fri, 5 Jun 2020 at 14:32, Danny Chan wrote: > > Congratulations Xintong ! > > Best, > Danny Chan > On Jun 5, 2020 at 2:20 PM +0800, dev@flink.apache.org wrote: > Congratulations Xintong > > >
[jira] [Created] (FLINK-18170) E2E tests manually for PostgresCatalog
Leonard Xu created FLINK-18170: Summary: E2E tests manually for PostgresCatalog Key: FLINK-18170 URL: https://issues.apache.org/jira/browse/FLINK-18170 Project: Flink Issue Type: Sub-task Components: Connectors / JDBC Affects Versions: 1.11.0 Reporter: Leonard Xu Fix For: 1.11.0 PostgresCatalog was introduced in the current version but has no end-to-end tests yet. -- This message was sent by Atlassian Jira (v8.3.4#803005)
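For reference when adding those tests, a minimal sketch of exercising the catalog through the Table API, assuming the 1.11 JdbcCatalog constructor; the catalog name, default database, credentials, base URL, and table name below are placeholders.
{code:java}
import org.apache.flink.connector.jdbc.catalog.JdbcCatalog;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public final class PostgresCatalogSmokeTest {

    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // JdbcCatalog delegates to PostgresCatalog for Postgres base URLs.
        JdbcCatalog catalog = new JdbcCatalog(
                "mypg", "postgres", "username", "password", "jdbc:postgresql://localhost:5432");

        tEnv.registerCatalog("mypg", catalog);
        tEnv.useCatalog("mypg");

        // Read an existing Postgres table through the catalog and print the result.
        tEnv.executeSql("SELECT * FROM mytable").print();
    }
}
{code}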
[jira] [Created] (FLINK-18171) Do not take client side config options to cluster
Yang Wang created FLINK-18171: Summary: Do not take client side config options to cluster Key: FLINK-18171 URL: https://issues.apache.org/jira/browse/FLINK-18171 Project: Flink Issue Type: Improvement Components: Deployment / Kubernetes, Deployment / YARN Affects Versions: 1.11.0, 1.12.0 Reporter: Yang Wang Following the discussion in this PR [1], some client-side config options should not be shipped to the cluster. If they take effect there, they can cause issues (e.g. FLINK-18149). For the K8s deployment, we already explicitly remove {{KubernetesConfigOptions.KUBE_CONFIG_FILE}} and {{DeploymentOptionsInternal.CONF_DIR}}. For the YARN deployment, at least two options, {{DeploymentOptionsInternal.CONF_DIR}} and {{YarnConfigOptionsInternal.APPLICATION_LOG_CONFIG_FILE}}, could be removed on the client side. This would avoid unexpected configuration loading and potential secret issues, and would also keep the JobManager logs from confusing users by printing client-local paths, such as:
{code:java}
2020-06-05 14:38:38,656 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: $internal.deployment.config-dir, /home/danrtsey.wy/flink-1.11-SNAPSHOT/conf
2020-06-05 14:38:38,656 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: $internal.yarn.log-config-file, /home/danrtsey.wy/flink-1.11-SNAPSHOT/conf/log4j.properties
{code}
[1] https://github.com/apache/flink/pull/12501#pullrequestreview-425452351 -- This message was sent by Atlassian Jira (v8.3.4#803005)
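For illustration only, a minimal sketch of the kind of filtering the ticket proposes, assuming Configuration#removeConfig and the option constants named above; in the real code base the K8s and YARN deployments would each strip their own options rather than share one helper class like this.
{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.DeploymentOptionsInternal;
import org.apache.flink.kubernetes.configuration.KubernetesConfigOptions;
import org.apache.flink.yarn.configuration.YarnConfigOptionsInternal;

public final class ClientOnlyOptions {

    private ClientOnlyOptions() {}

    /**
     * Returns a copy of the client configuration with client-only options removed,
     * so that they are never shipped to the JobManager/TaskManager.
     */
    public static Configuration stripClientOnlyOptions(Configuration clientConfig) {
        Configuration clusterConfig = new Configuration(clientConfig);
        clusterConfig.removeConfig(DeploymentOptionsInternal.CONF_DIR);
        clusterConfig.removeConfig(KubernetesConfigOptions.KUBE_CONFIG_FILE);
        clusterConfig.removeConfig(YarnConfigOptionsInternal.APPLICATION_LOG_CONFIG_FILE);
        return clusterConfig;
    }
}
{code}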
Re: [ANNOUNCE] New Apache Flink Committer - Xintong Song
Congratulations and welcome! Thanks, Zhu Zhu On Mon, Jun 8, 2020 at 10:00 AM Guowei Ma wrote: > Congratulations > > Best, > Guowei > > > On Sat, Jun 6, 2020 at 9:56 AM Matt Wang wrote: > > > Congratulations! > > > > > > --- > > Best, > > Matt Wang > > > > > > On 06/5/2020 22:34, Andrey Zagrebin wrote: > > Welcome to committers and congrats, Xintong! > > > > Cheers, > > Andrey > > > > On Fri, Jun 5, 2020 at 4:22 PM Till Rohrmann > wrote: > > > > Congratulations! > > > > Cheers, > > Till > > > > On Fri, Jun 5, 2020 at 10:00 AM Dawid Wysakowicz > > > wrote: > > > > Congratulations! > > > > Best, > > > > Dawid > > > > On 05/06/2020 09:10, tison wrote: > > Congrats, Xintong! > > > > Best, > > tison. > > > > > > On Fri, Jun 5, 2020 at 3:00 PM Jark Wu wrote: > > > > Congratulations Xintong! > > > > Best, > > Jark > > > > On Fri, 5 Jun 2020 at 14:32, Danny Chan wrote: > > > > Congratulations Xintong ! > > > > Best, > > Danny Chan > > On Jun 5, 2020 at 2:20 PM +0800, dev@flink.apache.org wrote: > > Congratulations Xintong > > > > > > >
[RESULT] [VOTE] Apache Flink Stateful Functions 2.1.0, release candidate #1
The voting time has passed, and we now have enough votes. Thank you for testing and voting everyone! I'm happy to announce that we have unanimously approved this candidate as the 2.1.0 release for Apache Flink Stateful Functions. There are 7 approving votes, 3 of which are binding: * Igal Shilman * Congxian Qiu * Hequn Cheng (binding) * Tzu-Li (Gordon) Tai (binding) * Yu Li * Robert Metzger (binding) * Matt Wang There are no disapproving votes. The announcements for the release will happen in a separate thread once all released artifacts are available. Cheers, Gordon On Sat, Jun 6, 2020 at 8:34 PM Matt Wang wrote: > +1 (non-binding) > > > - signatures & hash, ok > - mvn clean install -Prun-e2e-tests on 1.8.0_77, ok > - source archives do not contain any binaries, ok > - version of POM files and Dockerfiles are correct, ok > > > --- > Best, > Matt Wang > > > On 06/5/2020 16:58, Robert Metzger wrote: > Thanks a lot for creating this release Gordon! > > +1 (binding) > > - maven staging repo looks fine (version tags, license files) > - source archive looks good (no binaries, no unexpected files, pom has > right version) > - quickly checked diff: > > https://github.com/apache/flink-statefun/compare/release-2.0.0-rc6...release-2.1.0-rc1 > > > On Fri, Jun 5, 2020 at 5:05 AM Congxian Qiu > wrote: > > @Tzu-Li (Gordon) Tai Thanks for the info. `mvn > clean > install -Prun-e2e-tests` works for me; I had verified the demo on a clean > source directory before. > > Best, > Congxian > > > On Thu, Jun 4, 2020 at 6:35 PM Tzu-Li (Gordon) Tai wrote: > > +1 (binding) > > Legal > > - Verified signatures and hashes of staged Maven artifacts, source > distribution and Python SDK distribution > - Checked NOTICE file of statefun-flink-distribution and > statefun-ridesharing-example-simulator > > Functional > > - Full build with end-to-end-tests, JDK 8: mvn clean install > -Prun-e2e-tests > - Manually verified state TTL for remote functions > - Manually verified checkpointing with failure recovery > - Manually verified savepointing + manual restore > - Generated quickstart project from archetype works > > > On Thu, Jun 4, 2020 at 3:10 PM Hequn Cheng wrote: > > +1 (binding) > > - Signatures and hash are correct. > - All artifacts to be released to Maven in the staging Nexus > repository. > - Verify that the source archives do not contain any binaries. > - Go through all commits from the last release. No license problem > spotted. > - Check end-to-end tests. All tests have passed on Travis (both for > JDK > 1.8 and 1.11). > > Best, > Hequn > > On Thu, Jun 4, 2020 at 12:50 PM Tzu-Li (Gordon) Tai < > tzuli...@apache.org > > wrote: > > Hi Hequn, > > Sorry, I mis-tagged the wrong commit. > Just fixed this, the tag [1] [2] should now be pointing to the > correct > commit that contains the updated version. > > Gordon > > [1] > > > > > > https://gitbox.apache.org/repos/asf?p=flink-statefun.git;a=tag;h=c08c9850147d818fc8fed877a01ff87021f3cf21 > [2] https://github.com/apache/flink-statefun/tree/release-2.1.0-rc1 > > On Thu, Jun 4, 2020 at 12:10 PM Hequn Cheng > wrote: > > It seems the release tag is not correct? The version in the poms > should > be 2.1.0 instead of 2.1-SNAPSHOT.
> > Best, > Hequn > > > On Thu, Jun 4, 2020 at 10:33 AM Congxian Qiu < > qcx978132...@gmail.com > > wrote: > > +1 (non-binding) > > maybe there is something that needs to be updated in > README.md (currently > the official docs link points to the master instead of 2.1) > > and I have another question: do we need to add the command used to > build > the > base image locally (which was on the README.md in release-2.0.0)? > > checked > - sha & gpg, ok > - mvn clean install -Prun-e2e-tests on 1.8.0_252, ok > - source archives do not contain any binaries > - mvn clean install -Papache-release, ok (this step needs a gpg > secret > key) > - check all pom files, dockerfiles, examples point to the same > version, > ok > - check README.md, nothing unexpected. > - but the official docs link points to the master instead of > 2.1 > - run greeter & ride-share demo, ok > > Best, > Congxian > > > On Mon, Jun 1, 2020 at 3:25 PM Tzu-Li (Gordon) Tai wrote: > > Hi everyone, > > Please review and vote on the *release candidate #1* for the > version > 2.1.0 > of > Apache Flink Stateful Functions, > as follows: > [ ] +1, Approve the release > [ ] -1, Do not approve the release (please provide specific > comments) > > ***Testing Guideline*** > > You can find here [1] a page in the project wiki with > instructions > for > testing. > To cast a vote, it is not necessary to perform all listed > checks, > but please mention which checks you have performed when voting. > > ***Release Overview*** > > As an overview, the release consists of the following: > a) Stateful Functions canonical source distribution, to be > deployed > to > the > release repository at dist.apache.org > b) Stateful Functions Python SDK distributions to be deployed > to > PyPI > c) Maven artifacts to be deployed to the Maven Central > Repository > > *
[jira] [Created] (FLINK-18172) Select bounded source does not work in SQL-CLI with streaming mode
Jingsong Lee created FLINK-18172: Summary: Select bounded source does not work in SQL-CLI with streaming mode Key: FLINK-18172 URL: https://issues.apache.org/jira/browse/FLINK-18172 Project: Flink Issue Type: Bug Reporter: Jingsong Lee
create table csv_t (i int, j int) with ('connector'='filesystem', 'path'='/tmp/1', 'format'='csv');
select * from csv_t;
* Cannot display records.
* Hangs forever, even though the job has finished.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
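To reproduce the same behaviour outside the SQL CLI, a minimal sketch assuming the 1.11 TableEnvironment.executeSql / TableResult.print API; the DDL mirrors the statement from the report, and flink-csv must be on the classpath.
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public final class BoundedSourceStreamingRepro {

    public static void main(String[] args) {
        // Streaming mode, as in the SQL CLI session from the report.
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        tEnv.executeSql(
                "CREATE TABLE csv_t (i INT, j INT) WITH ("
                        + "'connector'='filesystem', 'path'='/tmp/1', 'format'='csv')");

        // Expected: the rows in /tmp/1 are printed and the query terminates
        // once the bounded source finishes.
        tEnv.executeSql("SELECT * FROM csv_t").print();
    }
}
{code}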
[jira] [Created] (FLINK-18173) Bundle flink-csv,flink-json,flink-avro jars in lib
Jingsong Lee created FLINK-18173: Summary: Bundle flink-csv,flink-json,flink-avro jars in lib Key: FLINK-18173 URL: https://issues.apache.org/jira/browse/FLINK-18173 Project: Flink Issue Type: Bug Components: Build System, Table SQL / API Reporter: Jingsong Lee Fix For: 1.11.0 The biggest problem I see with the distributions is the variety of issues caused by users missing format dependencies. These three formats are very small, have no third-party dependencies, and are widely used by table users. Actually, we don't have any other built-in table formats now... In total 151K:
73K flink-avro-1.10.0.jar
36K flink-csv-1.10.0.jar
42K flink-json-1.10.0.jar
We can just bundle them in "flink/lib/". This does not solve all problems, and it is independent of the "fat" and "slim" question, but it would improve usability. -- This message was sent by Atlassian Jira (v8.3.4#803005)
Re: [DISCUSS] Releasing "fat" and "slim" Flink distributions
Hi, Thanks all for your feedback. I created a JIRA for bundling the format jars in lib. [1] FYI. [1] https://issues.apache.org/jira/browse/FLINK-18173 Best, Jingsong Lee On Fri, Jun 5, 2020 at 3:59 PM Rui Li wrote: > +1 to add lightweight formats into the lib > > On Fri, Jun 5, 2020 at 3:28 PM Leonard Xu wrote: > > > +1 for Jingsong’s proposal to put flink-csv, flink-json and flink-avro > > under the lib/ directory. > > I have heard many SQL users (mostly newbies) complain about the out-of-the-box > > experience on the mailing list. > > > > Best, > > Leonard Xu > > > > > > > On Jun 5, 2020 at 14:39, Benchao Li wrote: > > > > > > +1 to include them for sql-client by default; > > > +0 to put into lib and expose to all kinds of jobs, including > > DataStream. > > > > > > On Fri, Jun 5, 2020 at 2:31 PM Danny Chan wrote: > > > > > >> +1, at least, we should keep an out of the box SQL-CLI, it’s a very poor > > >> experience to add such required format jars for SQL users. > > >> > > >> Best, > > >> Danny Chan > > >> On Jun 5, 2020 at 11:14 AM +0800, Jingsong Li wrote: > > >>> Hi all, > > >>> > > >>> Considering that 1.11 will be released soon, what about my previous > > >>> proposal? Put flink-csv, flink-json and flink-avro under lib. > > >>> These three formats are very small and no third party dependence, and > > >> they > > >>> are widely used by table users. > > >>> > > >>> Best, > > >>> Jingsong Lee > > >>> > > >>> On Tue, May 12, 2020 at 4:19 PM Jingsong Li > > >> wrote: > > >>> > > Thanks for your discussion. > > > > Sorry to start discussing another thing: > > > > The biggest problem I see is the variety of problems caused by > users' > > >> lack > > of format dependency. > > As Aljoscha said, these three formats are very small and no third > > party > > dependence, and they are widely used by table users. > > Actually, we don't have any other built-in table formats now... In > > >> total > > 151K... > > > > 73K flink-avro-1.10.0.jar > > 36K flink-csv-1.10.0.jar > > 42K flink-json-1.10.0.jar > > > > So, Can we just put them into "lib/" or flink-table-uber? > > It not solve all problems and maybe it is independent of "fat" and > > >> "slim". > > But also improve usability. > > What do you think? Any objections? > > > > Best, > > Jingsong Lee > > > > On Mon, May 11, 2020 at 5:48 PM Chesnay Schepler < > ches...@apache.org> > > wrote: > > > > > One downside would be that we're shipping more stuff when running > on > > > YARN for example, since the entire plugins directory is shipped by > > >> default. > > > > > > On 17/04/2020 16:38, Stephan Ewen wrote: > > >> @Aljoscha I think that is an interesting line of thinking. the > > >> swift-fs > > > may > > >> be rarely enough used to move it to an optional download. > > >> > > >> I would still drop two more thoughts: > > >> > > >> (1) Now that we have plugins support, is there a reason to have a > > > metrics > > >> reporter or file system in /opt instead of /plugins? They don't > > >> spoil > > > the > > >> class path any more. > > >> > > >> (2) I can imagine there still being a desire to have a "minimal" > > >> docker > > >> file, for users that want to keep the container images as small as > > >> possible, to speed up deployment. It is fine if that would not be > > >> the > > >> default, though.
> > >> > > >> > > >> On Fri, Apr 17, 2020 at 12:16 PM Aljoscha Krettek < > > >> aljos...@apache.org> > > >> wrote: > > >> > > >>> I think having such tools and/or tailor-made distributions can > > >> be nice > > >>> but I also think the discussion is missing the main point: The > > >> initial > > >>> observation/motivation is that apparently a lot of users (Kurt > > >> and I > > >>> talked about this) on the chinese DingTalk support groups, and > > >> other > > >>> support channels have problems when first using the SQL client > > >> because > > >>> of these missing connectors/formats. For these, having > > >> additional tools > > >>> would not solve anything because they would also not take that > > >> extra > > >>> step. I think that even tiny friction should be avoided because > > >> the > > >>> annoyance from it accumulates of the (hopefully) many users that > > >> we > > > want > > >>> to have. > > >>> > > >>> Maybe we should take a step back from discussing the > > >> "fat"/"slim" idea > > >>> and instead think about the composition of the current dist. As > > >>> mentioned we have these jars in opt/: > > >>> > > >>> 17M flink-azure-fs-hadoop-1.10.0.jar > > >>> 52K flink-cep-scala_2.11-1.10.0.jar > > >>> 180K flink-cep_2.11-1.10.0.jar > > >>> 746K flink-gelly-scala_2.11-1.10.0.jar > > >>> 626K flink-gelly_2.11-1.10.0.jar > > >>> 512K flink-metrics-datadog-1.10.0.jar > > >>> 159K flink-metrics-grap
[jira] [Created] (FLINK-18174) EventTimeWindowCheckpointingITCase crashes with exit code 127
Robert Metzger created FLINK-18174: Summary: EventTimeWindowCheckpointingITCase crashes with exit code 127 Key: FLINK-18174 URL: https://issues.apache.org/jira/browse/FLINK-18174 Project: Flink Issue Type: Bug Components: API / DataStream, Tests Affects Versions: 1.12.0 Reporter: Robert Metzger https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2882&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=f508e270-48d6-5f1e-3138-42a17e0714f0
{code}
2020-06-07T21:27:27.5645995Z [INFO]
2020-06-07T21:27:27.5646433Z [INFO] BUILD FAILURE
2020-06-07T21:27:27.5646928Z [INFO]
2020-06-07T21:27:27.5647248Z [INFO] Total time: 13:56 min
2020-06-07T21:27:27.5647818Z [INFO] Finished at: 2020-06-07T21:27:27+00:00
2020-06-07T21:27:28.1548022Z [INFO] Final Memory: 143M/3643M
2020-06-07T21:27:28.1549222Z [INFO]
2020-06-07T21:27:28.1550001Z [WARNING] The requested profile "skip-webui-build" could not be activated because it does not exist.
2020-06-07T21:27:28.1633207Z [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (integration-tests) on project flink-tests: There are test failures.
2020-06-07T21:27:28.1634081Z [ERROR]
2020-06-07T21:27:28.1634607Z [ERROR] Please refer to /__w/2/s/flink-tests/target/surefire-reports for the individual test results.
2020-06-07T21:27:28.1635808Z [ERROR] Please refer to dump files (if any exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
2020-06-07T21:27:28.1636306Z [ERROR] ExecutionException The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
2020-06-07T21:27:28.1637602Z [ERROR] Command was /bin/sh -c cd /__w/2/s/flink-tests/target && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m -Dmvn.forkNumber=1 -XX:+UseG1GC -jar /__w/2/s/flink-tests/target/surefire/surefirebooter5508139442489684354.jar /__w/2/s/flink-tests/target/surefire 2020-06-07T21-13-41_745-jvmRun1 surefire2128152879875854938tmp surefire_200576917820794528868tmp
2020-06-07T21:27:28.1638423Z [ERROR] Error occurred in starting fork, check output in log
2020-06-07T21:27:28.1638766Z [ERROR] Process Exit Code: 127
2020-06-07T21:27:28.1638995Z [ERROR] Crashed tests:
2020-06-07T21:27:28.1639297Z [ERROR] org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase
2020-06-07T21:27:28.1640007Z [ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: ExecutionException The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
2020-06-07T21:27:28.1641432Z [ERROR] Command was /bin/sh -c cd /__w/2/s/flink-tests/target && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx2048m -Dmvn.forkNumber=1 -XX:+UseG1GC -jar /__w/2/s/flink-tests/target/surefire/surefirebooter5508139442489684354.jar /__w/2/s/flink-tests/target/surefire 2020-06-07T21-13-41_745-jvmRun1 surefire2128152879875854938tmp surefire_200576917820794528868tmp
2020-06-07T21:27:28.1645745Z [ERROR] Error occurred in starting fork, check output in log
2020-06-07T21:27:28.1646464Z [ERROR] Process Exit Code: 127
2020-06-07T21:27:28.1646902Z [ERROR] Crashed tests:
2020-06-07T21:27:28.1647394Z [ERROR] org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase
2020-06-07T21:27:28.1648133Z [ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510)
2020-06-07T21:27:28.1648856Z [ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:457)
2020-06-07T21:27:28.1649769Z [ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:298)
2020-06-07T21:27:28.1650587Z [ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246)
2020-06-07T21:27:28.1651376Z [ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
2020-06-07T21:27:28.1652213Z [ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
2020-06-07T21:27:28.1652986Z [ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
2020-06-07T21:27:28.1653705Z [ERROR] at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
2020-06-07T21:27:28.1654292Z [ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
2020-06-07T21:27:28.1655049Z [ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
2020-06-07T21:27:28.1655819Z [ERRO