[jira] [Created] (FLINK-24376) Operator name in OperatorCoordinator should not use chained name
Qingsheng Ren created FLINK-24376: Summary: Operator name in OperatorCoordinator should not use chained name Key: FLINK-24376 URL: https://issues.apache.org/jira/browse/FLINK-24376 Project: Flink Issue Type: Bug Components: Runtime / Task Affects Versions: 1.13.2, 1.12.5, 1.14.0, 1.14.1 Reporter: Qingsheng Ren Currently the operator name passed to {{CoordinatedOperatorFactory#getCoordinatorProvider}} is the chained operator name (e.g. Source -> Map) instead of the name of the coordinating operator itself, which might be misleading. -- This message was sent by Atlassian Jira (v8.3.4#803005)
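For illustration, a small self-contained sketch of what the reporter describes; the types below are simplified stand-ins, not Flink's real classes, and only the {{getCoordinatorProvider}} shape mirrors the {{CoordinatedOperatorFactory}} callback the ticket refers to:
{code:java}
// Simplified stand-ins for illustration only; not Flink's actual API.
public class OperatorNameSketch {

    interface CoordinatorProvider {
        String describe();
    }

    interface CoordinatedFactory {
        CoordinatorProvider getCoordinatorProvider(String operatorName, String operatorId);
    }

    public static void main(String[] args) {
        CoordinatedFactory factory = (operatorName, operatorId) ->
                // Today the provider is created with whatever name the factory receives.
                () -> "coordinator for '" + operatorName + "' (" + operatorId + ")";

        // Current behavior: the chained name reaches the provider ...
        System.out.println(factory.getCoordinatorProvider("Source -> Map", "op-1").describe());
        // ... while the ticket asks for the coordinating operator's own name.
        System.out.println(factory.getCoordinatorProvider("Source", "op-1").describe());
    }
}
{code}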
[jira] [Created] (FLINK-24377) TM resource may not be properly released after heartbeat timeout
Xintong Song created FLINK-24377: Summary: TM resource may not be properly released after heartbeat timeout Key: FLINK-24377 URL: https://issues.apache.org/jira/browse/FLINK-24377 Project: Flink Issue Type: Bug Components: Deployment / Kubernetes, Deployment / YARN, Runtime / Coordination Affects Versions: 1.13.2, 1.14.0 Reporter: Xintong Song Assignee: Xintong Song Fix For: 1.14.0, 1.13.3, 1.15.0 In native K8s and YARN deployment modes, the RM disconnects a TM when its heartbeat times out. However, it does not actively release that TM's pod / container. Releasing the pod / container relies on the TM terminating itself after failing to re-register with the RM. In some rare conditions, the TM process may not terminate and can hang around for a long time. In such cases, K8s / YARN sees the process as running and thus will not release the pod / container, and neither will Flink's resource manager. Consequently, the resource is leaked until the entire application is terminated. To fix this, we should make {{ActiveResourceManager}} actively release the resource to K8s / YARN after a TM heartbeat timeout. -- This message was sent by Atlassian Jira (v8.3.4#803005)
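A rough, self-contained sketch of the proposed behavior; all class and method names here are simplified stand-ins, not Flink's actual {{ActiveResourceManager}} internals. The idea is that on heartbeat timeout the RM both forgets the worker and asks the deployment-specific driver to delete the pod / container, instead of relying on the TM process exiting on its own:
{code:java}
import java.util.HashMap;
import java.util.Map;

// Simplified stand-ins for illustration only; not Flink's real classes.
public class HeartbeatTimeoutSketch {

    /** Deployment-specific hook, e.g. backed by the K8s or YARN client. */
    interface ResourceDriver {
        void releaseResource(String podOrContainer);
    }

    static class ActiveResourceManagerSketch {
        private final ResourceDriver driver;
        private final Map<String, String> registeredWorkers = new HashMap<>();

        ActiveResourceManagerSketch(ResourceDriver driver) {
            this.driver = driver;
        }

        void registerWorker(String workerId, String podOrContainer) {
            registeredWorkers.put(workerId, podOrContainer);
        }

        void onTaskManagerHeartbeatTimeout(String workerId) {
            // Previously: only disconnect and wait for the TM to terminate itself.
            String podOrContainer = registeredWorkers.remove(workerId);
            if (podOrContainer != null) {
                // Proposed fix: proactively ask K8s / YARN to delete the pod / container,
                // so a hanging TM process cannot leak the resource.
                driver.releaseResource(podOrContainer);
            }
        }
    }

    public static void main(String[] args) {
        ActiveResourceManagerSketch rm =
                new ActiveResourceManagerSketch(id -> System.out.println("releasing " + id));
        rm.registerWorker("tm-1", "pod/flink-taskmanager-1");
        rm.onTaskManagerHeartbeatTimeout("tm-1"); // prints "releasing pod/flink-taskmanager-1"
    }
}
{code}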
[jira] [Created] (FLINK-24378) Dim table join operator Push down
ranqiqiang created FLINK-24378: Summary: Dim table join operator Push down Key: FLINK-24378 URL: https://issues.apache.org/jira/browse/FLINK-24378 Project: Flink Issue Type: Improvement Components: Connectors / JDBC Reporter: ranqiqiang Hello: Suppose the binlog source table-A contains {id: 1, name: 'king'} and the MySQL dimension table-B contains {id: 1, type: 1}, {id: 1, type: 2}. I used this SQL: {code:java} select * from table-A a left join table-B .. as b on a.id = b.id and b.type = 2 {code} I want to know whether the filter "b.type = 2" could be pushed down; that would be helpful, because otherwise all the dimension data has to be fetched and then filtered. If the filters that follow the ON clause were pushed down as operators, it would be much more efficient. Do you think so? -- This message was sent by Atlassian Jira (v8.3.4#803005)
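For what it is worth, a small self-contained sketch of what pushing the ON-clause filter into the JDBC lookup would mean; the table and column names are taken from the example above, and the query strings are hypothetical, not actual Flink connector code:
{code:java}
// Illustration of the difference filter push-down would make for the lookup query.
public class LookupFilterPushDownSketch {

    public static void main(String[] args) {
        String lookupKeyCondition = "id = ?";

        // Without push-down: only the join key reaches the dimension table, so every
        // matching row (type = 1 and type = 2) is fetched and filtered afterwards.
        String withoutPushDown = "SELECT * FROM table_B WHERE " + lookupKeyCondition;

        // With push-down: the ON-clause filter "b.type = 2" becomes part of the lookup
        // query itself, so only the needed rows leave MySQL.
        String withPushDown = "SELECT * FROM table_B WHERE " + lookupKeyCondition + " AND type = 2";

        System.out.println(withoutPushDown);
        System.out.println(withPushDown);
    }
}
{code}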
Re: [VOTE] Release 1.14.0, release candidate #3
+1 (binding) - checked signatures and checksums - built from source - ran large scale jobs (parallelism = 1) on YARN and K8S. checked webUI and logs. - the website PR looks good Thanks, Zhu On Sun, Sep 26, 2021 at 2:45 PM Yangze Guo wrote: > +1 (non-binding) > - built from source > - executed example jobs with standalone / yarn / native kubernetes > deployments, nothing unexpected > - tested the fine-grained resource management, nothing unexpected > > Best, > Yangze Guo > > On Fri, Sep 24, 2021 at 5:45 PM Xintong Song > wrote: > > > > +1 (binding) > > > >- reviewed release blog post (the new iteration) [1] > >- verified checksums and signatures > >- built from source > >- checked NOTICE files w.r.t. all pom changes since 1.13 > >- executed example jobs > > - standalone cluster and native kubernetes application (custom > image) > > - w/ and w/o fine-grained resource requirements > > > > Thank you~ > > > > Xintong Song > > > > > > [1] https://github.com/apache/flink-web/pull/468 > > > > On Thu, Sep 23, 2021 at 12:13 PM Jeff Zhang wrote: > > > > > Thanks Dian, it works > > > > > > > > > On Thu, Sep 23, 2021 at 9:18 AM Dian Fu wrote: > > > > > > > Hi Jeff, > > > > > > > > The jars have been split into a separate package since 1.13, so > there are > > > > two packages now. > > > > > > > > You need to install `apache-flink-libraries` first: > > > > pip install apache-flink-libraries-1.14.0.tar.gz > > > > > > > > Note that this is only necessary when installing from source. You could > also > > > > refer to [1] for more details. > > > > > > > > Regards, > > > > Dian > > > > > > > > [1] > > > > > > > > https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/flinkdev/building/#build-pyflink > > > > > > > > > On Sep 22, 2021, at 10:46 PM, Jeff Zhang wrote: > > > > > > > > > > Can anyone successfully install pyflink ?
I failed to install it > and > > > get > > > > the following error: > > > > > > > > > > (python3.7) [hadoop@emr-header-1 zeppelin]$ pip install > > > > > apache_flink-1.14.0-cp37-cp37m-manylinux1_x86_64.whl > > > > > Processing ./apache_flink-1.14.0-cp37-cp37m-manylinux1_x86_64.whl > > > > > Collecting apache-beam==2.27.0 > > > > > Using cached > apache_beam-2.27.0-cp37-cp37m-manylinux2010_x86_64.whl > > > (9.0 > > > > > MB) > > > > > Collecting pandas<1.2.0,>=1.0 > > > > > Using cached pandas-1.1.5-cp37-cp37m-manylinux1_x86_64.whl (9.5 > MB) > > > > > Collecting requests>=2.26.0 > > > > > Using cached requests-2.26.0-py2.py3-none-any.whl (62 kB) > > > > > Collecting fastavro<0.24,>=0.21.4 > > > > > Using cached fastavro-0.23.6-cp37-cp37m-manylinux2010_x86_64.whl > (1.4 > > > > MB) > > > > > Collecting avro-python3!=1.9.2,<1.10.0,>=1.8.1 > > > > > Using cached avro_python3-1.9.2.1-py3-none-any.whl > > > > > ERROR: Could not find a version that satisfies the requirement > > > > > apache-flink-libraries<1.14.1,>=1.14.0 (from apache-flink) > > > > > ERROR: No matching distribution found for > > > > > apache-flink-libraries<1.14.1,>=1.14.0 > > > > > > > > > > On Wed, Sep 22, 2021 at 9:56 PM Dawid Wysakowicz wrote: > > > > > > > > >> Hi everyone, > > > > >> Please review and vote on the release candidate #3 for the version > > > > 1.14.0, > > > > >> as follows: > > > > >> [ ] +1, Approve the release > > > > >> [ ] -1, Do not approve the release (please provide specific > comments) > > > > >> > > > > >> The complete staging area is available for your review, which > > > includes: > > > > >> * JIRA release notes [1], > > > > >> * the official Apache source release and binary convenience > releases > > > to > > > > be > > > > >> deployed to dist.apache.org [2], which are signed with the key > with > > > > >> fingerprint 31D2DD10BFC15A2D [3], > > > > >> * all artifacts to be deployed to the Maven Central Repository > [4], > > > > >> * source code tag "release-1.14.0-rc3" [5], > > > > >> * website pull request listing the new release and adding > announcement > > > > >> blog post [6]. > > > > >> > > > > >> The vote will be open for at least 72 hours. It is adopted by > majority > > > > >> approval, with at least 3 PMC affirmative votes. > > > > >> > > > > >> Thanks, > > > > >> Xintong, Joe and Dawid > > > > >> > > > > >> [1] > > > > >> > > > > > > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12349614 > > > > >> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.14.0-rc3 > > > > >> [3] https://dist.apache.org/repos/dist/release/flink/KEYS > > > > >> [4] > > > > > https://repository.apache.org/content/repositories/orgapacheflink-1451 > > > > >> [5] > https://github.com/apache/flink/releases/tag/release-1.14.0-rc3 > > > > >> [6] https://github.com/apache/flink-web/pull/466 > > > > >> > > > > > > > > > > > > > > > -- > > > > > Best Regards > > > > > > > > > > Jeff Zhang > > > > > > > > > > > > > > -- > > > Best Regards > > > > > > Jeff Zhang > > > >
Re: Unsubscribe
Hi, To unsubscribe from the Flink dev mailing list, send an email to dev-unsubscr...@flink.apache.org To unsubscribe from the Flink user mailing list, send an email to user-unsubscr...@flink.apache.org To unsubscribe from the Flink user-zh mailing list, send an email to user-zh-unsubscr...@flink.apache.org For more information, please go to [1]. [1] https://flink.apache.org/community.html#mailing-lists Best, JING ZHANG maozhaolin wrote on Sun, Sep 26, 2021 at 3:28 PM: > Unsubscribe
[jira] [Created] (FLINK-24379) AWS Glue Schema Registry support doesn't work for table API
Brad Davis created FLINK-24379: Summary: AWS Glue Schema Registry support doesn't work for table API Key: FLINK-24379 URL: https://issues.apache.org/jira/browse/FLINK-24379 Project: Flink Issue Type: Improvement Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) Affects Versions: 1.14.0 Reporter: Brad Davis Unlike most (all?) of the other Avro formats, the AWS Glue Schema Registry version doesn't include a META-INF/services/org.apache.flink.table.factories.Factory resource or a class implementing org.apache.flink.table.factories.DeserializationFormatFactory and org.apache.flink.table.factories.SerializationFormatFactory, so the format cannot be discovered through the Table API's factory mechanism. -- This message was sent by Atlassian Jira (v8.3.4#803005)
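For context, a hedged sketch of the two missing pieces; the package, class name, identifier and elided bodies are hypothetical placeholders, and only the two factory interfaces are the ones the ticket names, so the real wiring for the Glue Schema Registry format would differ:
{code:java}
// 1) SPI registration file (content of
//    src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory):
//
//    com.example.glue.GlueAvroFormatFactory        <-- hypothetical class name
//
// 2) Skeleton of the factory class itself:
package com.example.glue;

import java.util.Collections;
import java.util.Set;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ReadableConfig;
import org.apache.flink.table.connector.format.DecodingFormat;
import org.apache.flink.table.connector.format.EncodingFormat;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.factories.DeserializationFormatFactory;
import org.apache.flink.table.factories.DynamicTableFactory;
import org.apache.flink.table.factories.SerializationFormatFactory;

public class GlueAvroFormatFactory
        implements DeserializationFormatFactory, SerializationFormatFactory {

    @Override
    public String factoryIdentifier() {
        return "avro-glue"; // hypothetical format identifier
    }

    @Override
    public Set<ConfigOption<?>> requiredOptions() {
        return Collections.emptySet(); // registry name, region, etc. would go here
    }

    @Override
    public Set<ConfigOption<?>> optionalOptions() {
        return Collections.emptySet();
    }

    @Override
    public DecodingFormat<DeserializationSchema<RowData>> createDecodingFormat(
            DynamicTableFactory.Context context, ReadableConfig formatOptions) {
        // Would wrap the existing Glue Schema Registry Avro deserialization schema; elided.
        throw new UnsupportedOperationException("sketch only");
    }

    @Override
    public EncodingFormat<SerializationSchema<RowData>> createEncodingFormat(
            DynamicTableFactory.Context context, ReadableConfig formatOptions) {
        // Would wrap the existing Glue Schema Registry Avro serialization schema; elided.
        throw new UnsupportedOperationException("sketch only");
    }
}
{code}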
[jira] [Created] (FLINK-24380) Flink should handle the state transition of the pod from Pending to Failed
Yangze Guo created FLINK-24380: Summary: Flink should handle the state transition of the pod from Pending to Failed Key: FLINK-24380 URL: https://issues.apache.org/jira/browse/FLINK-24380 Project: Flink Issue Type: Bug Components: Runtime / Coordination Affects Versions: 1.13.2, 1.14.0 Reporter: Yangze Guo Fix For: 1.14.0, 1.13.3, 1.15.0 In K8s, a pod's lifecycle has five phases: Pending, Running, Succeeded, Failed and Unknown. Currently, Flink does not handle the state transition of a pod from Pending to Failed. If a pod fails from Pending because of `OutOfCPU` or `OutOfMem`, it will never be released and Flink keeps waiting for it. To fix this issue, Flink should proactively terminate pods in the Failed phase. -- This message was sent by Atlassian Jira (v8.3.4#803005)
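A small self-contained sketch of the proposed handling; the pod phases follow the Kubernetes documentation, while the watcher callback and release hook are hypothetical stand-ins rather than Flink's actual Kubernetes driver code:
{code:java}
import java.util.function.Consumer;

// Simplified illustration only; not Flink's real pod watcher.
public class PodPhaseSketch {

    enum PodPhase { PENDING, RUNNING, SUCCEEDED, FAILED, UNKNOWN }

    static void onPodPhaseChanged(String podName, PodPhase phase, Consumer<String> releasePod) {
        switch (phase) {
            case FAILED:
                // Covers the Pending -> Failed transition as well (e.g. OutOfCPU / OutOfMem):
                // release the pod and request a replacement instead of waiting forever.
                releasePod.accept(podName);
                break;
            case SUCCEEDED:
                releasePod.accept(podName);
                break;
            default:
                // Pending / Running / Unknown: nothing to release yet.
                break;
        }
    }

    public static void main(String[] args) {
        onPodPhaseChanged("flink-taskmanager-1", PodPhase.FAILED,
                pod -> System.out.println("releasing " + pod));
    }
}
{code}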
[jira] [Created] (FLINK-24381) Hide password value when Flink SQL connector throws exception.
Ada Wong created FLINK-24381: Summary: Hide password value when Flink SQL connector throws exception. Key: FLINK-24381 URL: https://issues.apache.org/jira/browse/FLINK-24381 Project: Flink Issue Type: Improvement Components: Table SQL / Planner Affects Versions: 1.13.2 Reporter: Ada Wong The following is the error message. The password is 'bar' and it is displayed in plain text. Could we hide it as password='***' or password='', as the Apache Kafka source code does? {code:java} Missing required options are: hosts Unable to create a sink for writing table 'default_catalog.default_database.dws'. Table options are: 'connector'='elasticsearch7-x' 'index'='foo' 'password'='bar' at org.apache.flink.table.factories.FactoryUtil.createTableSink(FactoryUtil.java:208) at org.apache.flink.table.planner.delegation.PlannerBase.getTableSink(PlannerBase.scala:369) at org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:221) at org.apache.flink.table.planner.delegation.PlannerBase.$anonfun$translate$1(PlannerBase.scala:159 {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
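A minimal self-contained sketch of the kind of masking this asks for; the helper name and the list of sensitive keys are assumptions, not an existing Flink utility:
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration only; Flink does not necessarily expose a helper with this name.
public class MaskOptionsSketch {

    /** Replaces values of sensitive-looking keys before they end up in exception text. */
    static Map<String, String> maskSensitiveOptions(Map<String, String> options) {
        Map<String, String> masked = new LinkedHashMap<>();
        options.forEach((key, value) -> {
            String lower = key.toLowerCase();
            boolean sensitive = lower.contains("password") || lower.contains("secret");
            masked.put(key, sensitive ? "******" : value);
        });
        return masked;
    }

    public static void main(String[] args) {
        Map<String, String> options = new LinkedHashMap<>();
        options.put("connector", "elasticsearch7-x");
        options.put("index", "foo");
        options.put("password", "bar");
        // Prints {connector=elasticsearch7-x, index=foo, password=******}
        System.out.println(maskSensitiveOptions(options));
    }
}
{code}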
[jira] [Created] (FLINK-24382) RecordsOut metric for sinks is inaccurate
Fabian Paul created FLINK-24382: Summary: RecordsOut metric for sinks is inaccurate Key: FLINK-24382 URL: https://issues.apache.org/jira/browse/FLINK-24382 Project: Flink Issue Type: Improvement Components: Connectors / Common, Runtime / Metrics Affects Versions: 1.14.0, 1.15.0 Reporter: Fabian Paul Currently, the metric is computed at the operator level, and it is assumed that every record flowing into the sink also generates exactly one outgoing record. This assumption is often wrong because a sink can transform an incoming record into multiple outgoing records; the metric should therefore be reported by the sink implementation itself rather than inferred by the framework. -- This message was sent by Atlassian Jira (v8.3.4#803005)
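As an illustration of the proposed direction, a self-contained sketch where the writer itself counts the records it actually emits; the writer class, counter wiring and fan-out logic are hypothetical stand-ins, not Flink's sink or metric API:
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Simplified stand-in for a sink writer; not Flink's actual SinkWriter API.
public class SinkRecordsOutSketch {

    static class ExpandingSinkWriter {
        private final AtomicLong numRecordsOut = new AtomicLong();

        /** One incoming record may fan out into several requests to the external system. */
        List<String> write(String record) {
            List<String> requests = new ArrayList<>();
            for (String part : record.split(",")) {
                requests.add("PUT " + part.trim());
            }
            // Counting here (per request actually sent) is accurate; counting once per
            // incoming record at the operator level -- today's behavior -- would be off.
            numRecordsOut.addAndGet(requests.size());
            return requests;
        }

        long getNumRecordsOut() {
            return numRecordsOut.get();
        }
    }

    public static void main(String[] args) {
        ExpandingSinkWriter writer = new ExpandingSinkWriter();
        writer.write("a, b, c"); // 1 incoming record, 3 outgoing requests
        System.out.println(writer.getNumRecordsOut()); // prints 3, not 1
    }
}
{code}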