[jira] [Created] (FLINK-26382) Add Chinese documents for flink-training exercises

2022-02-27 Thread tonny (Jira)
tonny created FLINK-26382:
-

 Summary: Add Chinese documents for flink-training exercises
 Key: FLINK-26382
 URL: https://issues.apache.org/jira/browse/FLINK-26382
 Project: Flink
  Issue Type: New Feature
  Components: Documentation / Training / Exercises
Affects Versions: 1.14.3
Reporter: tonny
 Fix For: 1.14.3


Provide Chinese versions of all `README` and `DISCUSSION` documents, to accompany 
the Chinese documentation of Flink.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26383) Create model stream sink demo

2022-02-27 Thread weibo zhao (Jira)
weibo zhao created FLINK-26383:
--

 Summary: Create model stream sink demo
 Key: FLINK-26383
 URL: https://issues.apache.org/jira/browse/FLINK-26383
 Project: Flink
  Issue Type: New Feature
  Components: Library / Machine Learning
Reporter: weibo zhao






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26384) This exception indicates that the query uses an unsupported SQL feature.

2022-02-27 Thread Spongebob (Jira)
Spongebob created FLINK-26384:
-

 Summary: This exception indicates that the query uses an 
unsupported SQL feature.
 Key: FLINK-26384
 URL: https://issues.apache.org/jira/browse/FLINK-26384
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.14.3
Reporter: Spongebob


Got an "unsupported SQL feature" exception while generating an execution plan for a valid query.
{code:java}
// code placeholder
TableEnvironment tableEnvironment =
    TableEnvironment.create(EnvironmentSettings.newInstance().inBatchMode().build());

// HERE'S THE DDL AND DML SQL
CREATE TABLE IF NOT EXISTS TABLE_A (
  CIR_CAPT DECIMAL(20,4),
  FULL_MKT_TSCP DECIMAL(20,4),
  TSCP DECIMAL(20,4),
  FULL_MKT_CIR_CAPT DECIMAL(20,4),
  SCR_INCD DECIMAL(19,0),
  AST_TYPE DECIMAL(3,0),
  SCR_TYPE DECIMAL(3,0)
) WITH ('connector' = 'jdbc'...)

CREATE TABLE IF NOT EXISTS TABLE_B (
  AST_TYPE DECIMAL(4,0),
  CIR_CAPT DECIMAL(20,4),
  FULL_MKT_CIR_CAPT DECIMAL(20,4),
  FULL_MKT_TSCP DECIMAL(20,4),
  SCR_INCD DECIMAL(19,0),
  SCR_TYPE DECIMAL(8,0),
  TSCP DECIMAL(20,4),
  PRIMARY KEY (SCR_INCD) NOT ENFORCED
) WITH ('connector' = 'jdbc'...)

INSERT INTO TABLE_B
SELECT T_SOURCE.* FROM (
  SELECT
    CASE WHEN B.AST_TYPE IS NULL THEN 1 ELSE B.AST_TYPE END AS AST_TYPE,
    B.CIR_CAPT AS CIR_CAPT,
    B.FULL_MKT_CIR_CAPT AS FULL_MKT_CIR_CAPT,
    B.FULL_MKT_TSCP AS FULL_MKT_TSCP,
    B.SCR_INCD AS SCR_INCD,
    CASE WHEN B.SCR_TYPE IS NULL THEN 1 ELSE B.SCR_TYPE END AS SCR_TYPE,
    B.TSCP AS TSCP
  FROM TABLE_A B
) T_SOURCE
INNER JOIN (SELECT DISTINCT SCR_INCD FROM TABLE_B) T_SINK
  ON T_SOURCE.SCR_INCD = T_SINK.SCR_INCD

// EXCEPTION STACK TRACE
This exception indicates that the query uses an unsupported SQL feature.
Please check the documentation for the set of currently supported SQL features.
    at 
org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:76)
    at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:62)
    at 
scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
    at 
scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
    at scala.collection.Iterator.foreach(Iterator.scala:937)
    at scala.collection.Iterator.foreach$(Iterator.scala:937)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
    at scala.collection.IterableLike.foreach(IterableLike.scala:70)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156)
    at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154)
    at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
    at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:58)
    at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:85)
    at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:56)
    at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1(BatchCommonSubGraphBasedOptimizer.scala:44)
    at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1$adapted(BatchCommonSubGraphBasedOptimizer.scala:44)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:44)
    at 
org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
    at 
org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:300)
    at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:183)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1665)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:752)
    at 
org.apache.flink.table.api.internal.StatementSetImpl.execute(StatementSetImpl.java:124)
    at 
com.xctech.cone.etl.migrate.batch.runner.RunSingleTask.main(RunSingleTask.java:111)
Caused by: org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There are 
not enough rules to produce a node with desired properties: 
convention=BATCH_PHYSICAL, FlinkRelDistributionTraitDef=any, sort=[].
Missing conversions are FlinkLogicalTableSourceScan[convention: LOGICAL -> 
BATCH_PHY
{code}
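For reference, below is a minimal, self-contained sketch of the same submission path the 
stack trace points to (batch {{TableEnvironment}} plus {{StatementSet#execute}}). It uses the 
built-in {{datagen}}/{{blackhole}} connectors and a reduced column set as stand-ins for the 
JDBC tables above, so it is not the reporter's actual job:
{code:java}
// Illustrative sketch only: reproduces the submission path
// (batch TableEnvironment -> StatementSet#execute) with built-in
// datagen/blackhole connectors instead of the reporter's JDBC tables.
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;

public class StatementSetSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inBatchMode().build());

        // Stand-ins for TABLE_A / TABLE_B (columns and connector options simplified).
        tEnv.executeSql(
                "CREATE TABLE SRC (SCR_INCD DECIMAL(19,0), TSCP DECIMAL(20,4)) "
                        + "WITH ('connector' = 'datagen', 'number-of-rows' = '10')");
        tEnv.executeSql(
                "CREATE TABLE SNK (SCR_INCD DECIMAL(19,0), TSCP DECIMAL(20,4)) "
                        + "WITH ('connector' = 'blackhole')");

        // In the report, the CannotPlanException is thrown while this call
        // optimizes the INSERT ... SELECT with the join back onto TABLE_B.
        StatementSet statementSet = tEnv.createStatementSet();
        statementSet.addInsertSql("INSERT INTO SNK SELECT SCR_INCD, TSCP FROM SRC");
        statementSet.execute();
    }
}
{code}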

[jira] [Created] (FLINK-26385) Use new Kafka source and sink in Python API

2022-02-27 Thread Qingsheng Ren (Jira)
Qingsheng Ren created FLINK-26385:
-

 Summary: Use new Kafka source and sink in Python API
 Key: FLINK-26385
 URL: https://issues.apache.org/jira/browse/FLINK-26385
 Project: Flink
  Issue Type: Improvement
  Components: API / Python
Affects Versions: 1.16.0
Reporter: Qingsheng Ren


Currently the Python API still uses the legacy {{FlinkKafkaConsumer}} and 
{{FlinkKafkaProducer}} for Kafka connectors. As these two classes have been deprecated 
since 1.14, the new {{KafkaSource}} and {{KafkaSink}} should be used in the Python API instead. 
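For reference, a sketch of the Java-side builders of the new unified Kafka connector that the 
Python wrappers would presumably delegate to; the broker address, topic names, and group id 
below are placeholders, not values from this ticket:
{code:java}
// Sketch of the new KafkaSource / KafkaSink API (Java side); the PyFlink API
// proposed here would wrap these builders. Connection values are placeholders.
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class NewKafkaConnectorSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // New source API, replacing FlinkKafkaConsumer.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics("input-topic")
                .setGroupId("example-group")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        // New sink API, replacing FlinkKafkaProducer.
        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("output-topic")
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .build();

        DataStream<String> stream =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source");
        stream.sinkTo(sink);
        env.execute("Kafka source/sink sketch");
    }
}
{code}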



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26386) CassandraConnectorITCase.testCassandraTableSink failed on azure

2022-02-27 Thread Yun Gao (Jira)
Yun Gao created FLINK-26386:
---

 Summary: CassandraConnectorITCase.testCassandraTableSink failed on 
azure
 Key: FLINK-26386
 URL: https://issues.apache.org/jira/browse/FLINK-26386
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Cassandra
Affects Versions: 1.15.0
Reporter: Yun Gao


{code:java}
Feb 28 02:39:19 [ERROR] Tests run: 20, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 226.77 s <<< FAILURE! - in 
org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
Feb 28 02:39:19 [ERROR] 
org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.testCassandraTableSink
  Time elapsed: 52.49 s  <<< ERROR!
Feb 28 02:39:19 
com.datastax.driver.core.exceptions.InvalidConfigurationInQueryException: 
Keyspace flink doesn't exist
Feb 28 02:39:19 at 
com.datastax.driver.core.exceptions.InvalidConfigurationInQueryException.copy(InvalidConfigurationInQueryException.java:37)
Feb 28 02:39:19 at 
com.datastax.driver.core.exceptions.InvalidConfigurationInQueryException.copy(InvalidConfigurationInQueryException.java:27)
Feb 28 02:39:19 at 
com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
Feb 28 02:39:19 at 
com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
Feb 28 02:39:19 at 
com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
Feb 28 02:39:19 at 
com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
Feb 28 02:39:19 at 
org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.createTable(CassandraConnectorITCase.java:391)
Feb 28 02:39:19 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Feb 28 02:39:19 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Feb 28 02:39:19 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Feb 28 02:39:19 at java.lang.reflect.Method.invoke(Method.java:498)
Feb 28 02:39:19 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
Feb 28 02:39:19 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Feb 28 02:39:19 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
Feb 28 02:39:19 at 
org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
Feb 28 02:39:19 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
Feb 28 02:39:19 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Feb 28 02:39:19 at 
org.apache.flink.testutils.junit.RetryRule$RetryOnExceptionStatement.evaluate(RetryRule.java:192)
Feb 28 02:39:19 at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Feb 28 02:39:19 at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
Feb 28 02:39:19 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Feb 28 02:39:19 at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
Feb 28 02:39:19 at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
Feb 28 02:39:19 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
Feb 28 02:39:19 at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
Feb 28 02:39:19 at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Feb 28 02:39:19 at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Feb 28 02:39:19 at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Feb 28 02:39:19 at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Feb 28 02:39:19 at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
Feb 28 02:39:19 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Feb 28 02:39:19 at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Feb 28 02:39:19 at 
org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
Feb 28 02:39:19 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Feb 28 02:39:19 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
 {code}
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32272&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d&l=12193
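The failure happens in {{createTable}} because the {{flink}} keyspace is not visible to the 
session at that point. For context, a minimal sketch (not the actual test code) of the 
keyspace/table setup that has to succeed before the table DDL can run:
{code:java}
// Illustrative sketch of the setup the test depends on; not the actual
// CassandraConnectorITCase code. The contact point is a placeholder.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class KeyspaceSetupSketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                Session session = cluster.connect()) {
            // If this DDL has not taken effect on the node the session talks to,
            // a later CREATE TABLE flink.* fails with "Keyspace flink doesn't exist".
            session.execute(
                    "CREATE KEYSPACE IF NOT EXISTS flink WITH replication = "
                            + "{'class': 'SimpleStrategy', 'replication_factor': 1};");
            session.execute(
                    "CREATE TABLE IF NOT EXISTS flink.test (id text PRIMARY KEY, counter int);");
        }
    }
}
{code}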



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26387) KafkaSourceLegacyITCase.testBrokerFailure hang on azure

2022-02-27 Thread Yun Gao (Jira)
Yun Gao created FLINK-26387:
---

 Summary: KafkaSourceLegacyITCase.testBrokerFailure hang on azure
 Key: FLINK-26387
 URL: https://issues.apache.org/jira/browse/FLINK-26387
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.15.0
Reporter: Yun Gao


{code:java}
"main" #1 prio=5 os_prio=0 tid=0x7f489c00b000 nid=0x170e waiting on 
condition [0x7f48a64d2000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x81f14838> (a 
java.util.concurrent.CompletableFuture$Signaller)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
at 
java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
at 
java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
at org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:58)
at 
org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runBrokerFailureTest(KafkaConsumerTestBase.java:1509)
at 
org.apache.flink.connector.kafka.source.KafkaSourceLegacyITCase.testBrokerFailure(KafkaSourceLegacyITCase.java:94)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.apache.flink.testutils.junit.RetryRule$RetryOnFailureStatement.evaluate(RetryRule.java:135)
at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 {code}
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32272&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=39718
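The thread dump shows the main thread parked in an unbounded {{CompletableFuture.get()}} under 
{{TestUtils.tryExecute}}. As an illustration only (not the test code), the difference between an 
unbounded wait, which turns a stalled job into a hanging build, and a time-bounded wait, which 
surfaces the same stall as a reportable failure:
{code:java}
// Illustrative only: contrasts an unbounded wait with a bounded one.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedWaitSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the job-completion future the test waits on.
        CompletableFuture<Void> jobFinished = new CompletableFuture<>();

        // Unbounded wait (what the stack shows): if broker-failure recovery
        // stalls and the future never completes, the caller parks forever.
        // jobFinished.get();

        // Bounded wait: the same stall becomes a TimeoutException that the
        // CI run can report instead of hanging.
        try {
            jobFinished.get(30, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            System.err.println("Job did not finish within 30 seconds: " + e);
        }
    }
}
{code}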



--
This message was sent by Atlassian Jira
(v8.20.1#820001)