Re: [Discussion] Job generation / submission hooks & Atlas integration

2020-03-19 Thread jackylau
Hi:
  I think Flink's Atlas integration also needs to include catalog information, as the
spark-atlas-connector does:
https://github.com/hortonworks-spark/spark-atlas-connector
  When a user uses a catalog such as JdbcCatalog/HiveCatalog, the Flink Atlas integration
should sync this information to Atlas.
  But I cannot find any event interface in Flink to implement this the way the
spark-atlas-connector does. Does anyone know how to do it?
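
For reference, the closest existing hook is Flink's JobListener
(org.apache.flink.core.execution.JobListener, available since 1.10), registered via
StreamExecutionEnvironment#registerJobListener. A minimal sketch of an Atlas-style
listener; the reportToAtlas() helper is a hypothetical placeholder, not a real API:

{code:java}
import javax.annotation.Nullable;

import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.core.execution.JobClient;
import org.apache.flink.core.execution.JobListener;

public class AtlasJobListener implements JobListener {

    @Override
    public void onJobSubmitted(@Nullable JobClient jobClient, @Nullable Throwable throwable) {
        if (jobClient != null) {
            // report the newly submitted job to the external system
            reportToAtlas(jobClient.getJobID().toString());
        }
    }

    @Override
    public void onJobExecuted(
            @Nullable JobExecutionResult result, @Nullable Throwable throwable) {
        // e.g. mark the corresponding Atlas entity as finished
    }

    private void reportToAtlas(String jobId) {
        // placeholder for an actual Atlas client call
    }
}
{code}

It is registered with env.registerJobListener(new AtlasJobListener()). Catalog-level
events, however, have no equivalent hook yet, which is what this thread is about.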





Re: [Discussion] Job generation / submission hooks & Atlas integration

2020-03-27 Thread jackylau
Hi Márton Balassi:
   I would be very glad to take a look at it; where can I find it?
   And this is my issue, which you can see at
https://issues.apache.org/jira/browse/FLINK-16774





[jira] [Created] (FLINK-14053) blink planner dense_rank corner case bug

2019-09-11 Thread jackylau (Jira)
jackylau created FLINK-14053:


 Summary: blink planner dense_rank corner case bug
 Key: FLINK-14053
 URL: https://issues.apache.org/jira/browse/FLINK-14053
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.9.0
Reporter: jackylau
 Fix For: 1.10.0


SQL:

val rank =
 """
 |SELECT
 | gradeId,
 | classId,
 | stuId,
 | score,
 | dense_rank() OVER (PARTITION BY gradeId, classId ORDER BY score asc) as 
dense_rank_num
 |FROM student
 |
 """.stripMargin

Sample data:

row("grade2", "class2", "0006", 90),
row("grade1", "class2", "0007", 90),
row("grade1", "class1", "0001", 95),
row("grade1", "class1", "0002", 94),
row("grade1", "class1", "0003", 97),
row("grade1", "class1", "0004", 95),
row("grade1", "class1", "0005", 0)

dense_rank starts ranking from 0, but it should start from 1.
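
For the grade1/class1 partition (scores ascending: 0, 94, 95, 95, 97), the expected
result would be:

 stuId  score  dense_rank_num
 0005   0      1
 0002   94     2
 0001   95     3
 0004   95     3
 0003   97     4

while the actual ranking starts at 0.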

 





[jira] [Created] (FLINK-31902) cast expr to type with not null should throw exception like calcite

2023-04-23 Thread jackylau (Jira)
jackylau created FLINK-31902:


 Summary: cast expr to type with not null should throw exception 
like calcite
 Key: FLINK-31902
 URL: https://issues.apache.org/jira/browse/FLINK-31902
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.18.0
Reporter: jackylau








[jira] [Created] (FLINK-31904) fix several current flink nullable type handling issues

2023-04-24 Thread jackylau (Jira)
jackylau created FLINK-31904:


 Summary: fix several current flink nullable type handling issues
 Key: FLINK-31904
 URL: https://issues.apache.org/jira/browse/FLINK-31904
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau








[jira] [Created] (FLINK-31906) typeof should only return type exclude nullable

2023-04-24 Thread jackylau (Jira)
jackylau created FLINK-31906:


 Summary: typeof should only return type exclude nullable 
 Key: FLINK-31906
 URL: https://issues.apache.org/jira/browse/FLINK-31906
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.18.0
Reporter: jackylau








[jira] [Created] (FLINK-31908) cast expr to type with not null should not change nullable of expr

2023-04-24 Thread jackylau (Jira)
jackylau created FLINK-31908:


 Summary: cast expr to type with not null  should not change 
nullable of expr
 Key: FLINK-31908
 URL: https://issues.apache.org/jira/browse/FLINK-31908
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.18.0
Reporter: jackylau


{code:java}
Stream<TestSetSpec> getTestSetSpecs() {
    return Stream.of(
            TestSetSpec.forFunction(BuiltInFunctionDefinitions.CAST)
                    .onFieldsWithData(new Integer[] {1, 2}, 3)
                    .andDataTypes(DataTypes.ARRAY(INT()), INT())
                    .testSqlResult(
                            "CAST(f0 AS ARRAY<DOUBLE>)",
                            new Double[] {1.0d, 2.0d},
                            DataTypes.ARRAY(DOUBLE().notNull())));
} {code}
but the result type should be DataTypes.ARRAY(DOUBLE()); the root cause is a Calcite bug.





[jira] [Created] (FLINK-32363) calcite 1.21 supports type coercion but flink doesn't enable it in validation

2023-06-15 Thread jackylau (Jira)
jackylau created FLINK-32363:


 Summary: calcite 1.21 supports type coercion but flink doesn't
enable it in validation
 Key: FLINK-32363
 URL: https://issues.apache.org/jira/browse/FLINK-32363
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


1) Calcite 1.21 supports type coercion, enabled by default, while Flink disables it.

2) Spark/MySQL can run this query.

3) Although we can make it run by writing select count(distinct `if`(1>5, 'x',
cast(null as varchar)));

I think we should enable it, or offer a config option to enable it; see the sketch
after the snippets below.

 
{code:java}
Flink SQL> select count(distinct `if`(1>5, 'x', null));
[ERROR] Could not execute SQL statement. Reason:
org.apache.calcite.sql.validate.SqlValidatorException: Illegal use of 
'NULL'{code}
{code:java}
// it can run in spark
spark-sql (default)> select count(distinct `if`(1>5, 'x', null)); 
0
{code}
 
{code:java}
private def createSqlValidator(catalogReader: CalciteCatalogReader) = {
  val validator = new FlinkCalciteSqlValidator(
operatorTable,
catalogReader,
typeFactory,
SqlValidator.Config.DEFAULT
  .withIdentifierExpansion(true)
  .withDefaultNullCollation(FlinkPlannerImpl.defaultNullCollation)
  .withTypeCoercionEnabled(false)
  ) // Disable implicit type coercion for now.
  validator
} {code}
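
A sketch of what such a switch could look like, mirroring how other Flink options are
declared; the option name is an assumption, shown only to illustrate the proposal:

{code:java}
public static final ConfigOption<Boolean> TABLE_SQL_TYPE_COERCION_ENABLED =
        key("table.sql.type-coercion.enabled")
                .booleanType()
                .defaultValue(false)
                .withDescription(
                        "Whether to enable Calcite's implicit type coercion during SQL validation.");

// and in createSqlValidator:
// .withTypeCoercionEnabled(tableConfig.get(TABLE_SQL_TYPE_COERCION_ENABLED))
{code}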





[jira] [Created] (FLINK-16692) flink joblistener can register from config

2020-03-20 Thread jackylau (Jira)
jackylau created FLINK-16692:


 Summary: flink joblistener can register from config
 Key: FLINK-16692
 URL: https://issues.apache.org/jira/browse/FLINK-16692
 Project: Flink
  Issue Type: Improvement
  Components: API / Core
Affects Versions: 1.10.0
Reporter: jackylau
 Fix For: 1.11.0


We should do as Spark does, which can register listeners from configuration such as
"spark.extraListeners". It would be convenient for users who just want to set a hook;
a loading sketch follows.
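
A minimal sketch, assuming a hypothetical key "execution.job-listeners" that holds a
semicolon-separated list of JobListener class names:

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.core.execution.JobListener;

public class JobListenerLoader {

    public static List<JobListener> load(Configuration config)
            throws ReflectiveOperationException {
        List<JobListener> listeners = new ArrayList<>();
        // "execution.job-listeners" is an assumed key, not an existing option
        for (String className : config.getString("execution.job-listeners", "").split(";")) {
            if (!className.isEmpty()) {
                listeners.add(
                        (JobListener)
                                Class.forName(className).getDeclaredConstructor().newInstance());
            }
        }
        return listeners;
    }
}
{code}

Each loaded listener would then be passed to
StreamExecutionEnvironment#registerJobListener.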





[jira] [Created] (FLINK-16698) flink need catalog listener to do such as preCreate/PreDrop* afterCreate/AfterDrop* things

2020-03-20 Thread jackylau (Jira)
jackylau created FLINK-16698:


 Summary: flink need catalog listener to do such as
preCreate/PreDrop* afterCreate/AfterDrop* things
 Key: FLINK-16698
 URL: https://issues.apache.org/jira/browse/FLINK-16698
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.10.0
Reporter: jackylau
 Fix For: 1.11.0


In order to support other things such as Atlas or authentication, I think Flink needs a
catalog listener to do preCreate*/preDrop* and afterCreate*/afterDrop* things, just like
Spark/Hive do. A hypothetical sketch of such an interface follows.
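
A sketch of what such a listener could look like; this is not an existing Flink API,
and all names are assumptions (ObjectPath and CatalogBaseTable are existing
org.apache.flink.table.catalog classes):

{code:java}
import org.apache.flink.table.catalog.CatalogBaseTable;
import org.apache.flink.table.catalog.ObjectPath;

// hypothetical interface, for illustration only
public interface CatalogEventListener {

    void preCreateTable(ObjectPath path, CatalogBaseTable table);

    void afterCreateTable(ObjectPath path, CatalogBaseTable table);

    void preDropTable(ObjectPath path);

    void afterDropTable(ObjectPath path);
}
{code}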





[jira] [Created] (FLINK-16774) expose HBaseUpsertSinkFunction hTableName and schema for other system

2020-03-25 Thread jackylau (Jira)
jackylau created FLINK-16774:


 Summary: expose HBaseUpsertSinkFunction hTableName and schema for 
other system
 Key: FLINK-16774
 URL: https://issues.apache.org/jira/browse/FLINK-16774
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / HBase
Affects Versions: 1.10.0
Reporter: jackylau
 Fix For: 1.11.0


I want to expose the hTableName and schema of HBaseUpsertSinkFunction, e.g. via
getTableName and getTableSchema, for other systems such as Atlas; I think it is needed.
A sketch follows.
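
A sketch of the proposed accessors; the field names are assumed from the class:

{code:java}
// inside HBaseUpsertSinkFunction
public String getTableName() {
    return hTableName;
}

public HBaseTableSchema getTableSchema() {
    return schema;
}
{code}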





[jira] [Created] (FLINK-16775) expose FlinkKafkaConsumer/FlinkKafkaProducer Properties for other system

2020-03-25 Thread jackylau (Jira)
jackylau created FLINK-16775:


 Summary: expose FlinkKafkaConsumer/FlinkKafkaProducer Properties 
for other system
 Key: FLINK-16775
 URL: https://issues.apache.org/jira/browse/FLINK-16775
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka
Affects Versions: 1.10.0
Reporter: jackylau
 Fix For: 1.11.0


I want to expose the Properties of FlinkKafkaConsumer/FlinkKafkaProducer, e.g. via a
getProperties method, for other systems such as Atlas; I think it is needed.





[jira] [Created] (FLINK-16777) expose Pipeline in JobClient

2020-03-25 Thread jackylau (Jira)
jackylau created FLINK-16777:


 Summary: expose Pipeline in JobClient
 Key: FLINK-16777
 URL: https://issues.apache.org/jira/browse/FLINK-16777
 Project: Flink
  Issue Type: Improvement
  Components: Client / Job Submission
Affects Versions: 1.10.0
Reporter: jackylau
 Fix For: 1.11.0


We should also expose the Pipeline in JobClient so that Atlas can extract information from it.





[jira] [Created] (FLINK-17155) flink in memory catalog can not support hive udf

2020-04-15 Thread jackylau (Jira)
jackylau created FLINK-17155:


 Summary: flink in memory catalog can not support hive udf
 Key: FLINK-17155
 URL: https://issues.apache.org/jira/browse/FLINK-17155
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.10.0
Reporter: jackylau
 Fix For: 1.11.0


We found that we can use Hive UDFs only with the Hive catalog; it does not work with the
in-memory catalog.





[jira] [Created] (FLINK-17326) flink sql gateway should support persist meta information such as SessionContext in order to recover

2020-04-22 Thread jackylau (Jira)
jackylau created FLINK-17326:


 Summary: flink sql gateway should support  persist meta 
information such as SessionContext in order to recover
 Key: FLINK-17326
 URL: https://issues.apache.org/jira/browse/FLINK-17326
 Project: Flink
  Issue Type: Bug
  Components: Client / Job Submission
Affects Versions: 1.10.0
Reporter: jackylau
 Fix For: 1.11.0


The Flink SQL gateway should support persisting meta information such as the
SessionContext in order to recover.





[jira] [Created] (FLINK-17384) support read hbase conf dir from flink.conf just like hadoop_conf

2020-04-25 Thread jackylau (Jira)
jackylau created FLINK-17384:


 Summary: support read hbase conf dir from flink.conf just like 
hadoop_conf
 Key: FLINK-17384
 URL: https://issues.apache.org/jira/browse/FLINK-17384
 Project: Flink
  Issue Type: Bug
  Components: Connectors / HBase, Deployment / Scripts
Affects Versions: 1.10.0
Reporter: jackylau
 Fix For: 1.11.0


hi all:

when a user interacts with HBase via SQL, they currently have to do two things:
 # export HBASE_CONF_DIR
 # add the HBase libs to flink/lib (because the HBase connector doesn't ship a client
jar)

I think this needs to be optimized:

for 1) we should support reading the HBase conf dir from flink.conf, just like
hadoop_conf in config.sh

for 2) we should support HBASE_CLASSPATH in config.sh. In case of jar conflicts such as
guava, we should also provide a flink-hbase-shaded, as for hadoop.




[jira] [Created] (FLINK-14159) flink rocksdb StreamCompressionDecorator not right

2019-09-20 Thread jackylau (Jira)
jackylau created FLINK-14159:


 Summary: flink rocksdb StreamCompressionDecorator not right
 Key: FLINK-14159
 URL: https://issues.apache.org/jira/browse/FLINK-14159
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Affects Versions: 1.9.0
Reporter: jackylau
 Fix For: 1.10.0


I think the current Flink RocksDB StreamCompressionDecorator is not right: it calls
getCompressionDecorator(executionConfig), whose default value is false. That is to say,
the current snapshot compression is none. But I find that RocksDB uses
{{options.compression}} to specify the compression to use, and by default it is Snappy,
which you can see here:
https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide. I also used the RocksDB
tool sst_dump to confirm that it is indeed Snappy compression.

So I think it should return SnappyStreamCompressionDecorator.INSTANCE rather than
getCompressionDecorator(executionConfig); see the sketch below.
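
A sketch of the proposed change; the surrounding method is assumed, while
StreamCompressionDecorator and SnappyStreamCompressionDecorator are the existing
org.apache.flink.runtime.state classes:

{code:java}
// mirror RocksDB's default SST compression (Snappy) for keyed state snapshot
// streams instead of deferring to the false-by-default ExecutionConfig flag
StreamCompressionDecorator keyGroupCompressionDecorator =
        SnappyStreamCompressionDecorator.INSTANCE;
{code}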

Could I submit a PR?





[jira] [Created] (FLINK-14243) flink hiveudf need some check whether it is using cache

2019-09-27 Thread jackylau (Jira)
jackylau created FLINK-14243:


 Summary: flink hiveudf need some check whether it is using cache
 Key: FLINK-14243
 URL: https://issues.apache.org/jira/browse/FLINK-14243
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.9.0
Reporter: jackylau
 Fix For: 1.10.0


Flink 1.9 brings in the Hive connector, but there will be problems when the original
Hive UDF uses a cache. We know that Hive is process-level parallel, based on the JVM,
while Flink/Spark are task(thread)-level parallel. If Flink just calls such a Hive UDF,
thread-safety problems can occur when it uses a cache; an illustration follows.

So we may need to check the Hive UDF code, and if it is not thread-safe, set the Flink
parallelism to 1.
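
An illustration of the hazard, using a hypothetical UDF (not code from an actual
issue):

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.hive.ql.exec.UDF;

// A plain HashMap cache is safe under Hive's process-per-task model,
// but races under Flink's thread-level parallelism within one JVM.
public class CachingLookupUdf extends UDF {

    private static final Map<String, String> CACHE = new HashMap<>(); // not thread-safe

    public String evaluate(String key) {
        // concurrent computeIfAbsent on a plain HashMap can corrupt the map
        return CACHE.computeIfAbsent(key, CachingLookupUdf::expensiveLookup);
    }

    private static String expensiveLookup(String key) {
        return key.toUpperCase(); // stand-in for a real external lookup
    }
}
{code}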





[jira] [Created] (FLINK-15625) flink sql multiple statements syntactic validation support

2020-01-16 Thread jackylau (Jira)
jackylau created FLINK-15625:


 Summary: flink sql multiple statements syntactic validation support
 Key: FLINK-15625
 URL: https://issues.apache.org/jira/browse/FLINK-15625
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Legacy Planner
Reporter: jackylau
 Fix For: 1.10.0


We know that Flink supports multiple-statement syntactic validation via Calcite, which
validates SQL statements one by one: it does not validate the table names and other
objects introduced by previous statements, so we only learn about such SQL errors when
we submit the Flink application (see the illustration below).

I think this is eagerly needed by users; we hope the Flink community will support it.
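
An illustration with hypothetical statements: today, the typo in the second statement
is only reported at submission time, not during upfront validation of the script:

{code:sql}
CREATE VIEW order_totals AS SELECT id, amount FROM orders;
-- typo in the view name: should already fail when validating the script
SELECT SUM(amount) FROM order_totalss;
{code}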





[jira] [Created] (FLINK-19755) flink cep

2020-10-21 Thread jackylau (Jira)
jackylau created FLINK-19755:


 Summary: flink cep 
 Key: FLINK-19755
 URL: https://issues.apache.org/jira/browse/FLINK-19755
 Project: Flink
  Issue Type: Bug
  Components: Library / CEP
Affects Versions: 1.11.0
 Environment: I think a looping match occurs when the rows with prices 17 and 14
arrive, using AFTER MATCH SKIP TO LAST A
Reporter: jackylau
 Fix For: 1.12.0


{code:java}
 symbol   tax   price   rowtime
 ======   ===   =====   ===================
 XYZ      1     7       2018-09-17 10:00:01
 XYZ      2     9       2018-09-17 10:00:02
 XYZ      1     10      2018-09-17 10:00:03
 XYZ      2     5       2018-09-17 10:00:04
 XYZ      2     17      2018-09-17 10:00:05
 XYZ      2     14      2018-09-17 10:00:06


SELECT *
FROM Ticker
MATCH_RECOGNIZE(
PARTITION BY symbol
ORDER BY rowtime
MEASURES
SUM(A.price) AS sumPrice,
FIRST(rowtime) AS startTime,
LAST(rowtime) AS endTime
ONE ROW PER MATCH
[AFTER MATCH STRATEGY]
PATTERN (A+ C)
DEFINE
A AS SUM(A.price) < 30
)
 

{code}
{code:java}
AFTER MATCH SKIP TO LAST A

 symbol   sumPrice   startTime             endTime
 ======   ========   ===================   ===================
 XYZ      26         2018-09-17 10:00:01   2018-09-17 10:00:04
 XYZ      15         2018-09-17 10:00:03   2018-09-17 10:00:05
 XYZ      22         2018-09-17 10:00:04   2018-09-17 10:00:06
 XYZ      17         2018-09-17 10:00:05   2018-09-17 10:00:06

Again, the first result matched against rows #1, #2, #3, #4. Compared to the previous
strategy, the next match includes row #3 (mapped to A) again for the next matching.

Therefore, the second result matched against rows #3, #4, #5.

The third result matched against rows #4, #5, #6.

The last result matched against rows #5, #6.{code}





[jira] [Created] (FLINK-19765) flink SqlUseCatalog.getCatalogName is not unified with SqlCreateCatalog and SqlDropCatalog

2020-10-22 Thread jackylau (Jira)
jackylau created FLINK-19765:


 Summary: flink SqlUseCatalog.getCatalogName is not unified with 
SqlCreateCatalog and SqlDropCatalog
 Key: FLINK-19765
 URL: https://issues.apache.org/jira/browse/FLINK-19765
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.11.0
Reporter: jackylau
 Fix For: 1.12.0


While developing a Flink Ranger plugin at the operation level, I found that this method
is not unified.

Also, SqlToOperationConverter.convert needs a sensible ordering so that users can find
their way around the code.





[jira] [Created] (FLINK-19848) flink docs of Building Flink from Source bug

2020-10-28 Thread jackylau (Jira)
jackylau created FLINK-19848:


 Summary: flink docs of Building Flink from Source bug
 Key: FLINK-19848
 URL: https://issues.apache.org/jira/browse/FLINK-19848
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.11.0
Reporter: jackylau
 Fix For: 1.12.0


The docs say that to speed up the build you can skip tests, QA plugins, and JavaDocs:

{{mvn clean install -DskipTests -Dfast}}

{{mvn clean install -DskipTests -Dscala-2.12}}

But {{fast}} and {{scala-2.12}} are profiles, not properties, so they must be activated
with -P, e.g. {{mvn clean install -DskipTests -Pfast}}.





[jira] [Created] (FLINK-19915) wrong comments of cep test

2020-11-01 Thread jackylau (Jira)
jackylau created FLINK-19915:


 Summary: wrong comments of cep test
 Key: FLINK-19915
 URL: https://issues.apache.org/jira/browse/FLINK-19915
 Project: Flink
  Issue Type: Bug
  Components: Library / CEP
Affects Versions: 1.11.0
Reporter: jackylau
 Fix For: 1.12.0


{code:java}
@Test
public void testNFACompilerPatternEndsWithNotFollowedBy() {

    // adjust the rule
    expectedException.expect(MalformedPatternException.class);
    expectedException.expectMessage(
            "NotFollowedBy is not supported as a last part of a Pattern!");

    Pattern<Event, ?> invalidPattern =
            Pattern.<Event>begin("start").where(new TestFilter())
                    .followedBy("middle").where(new TestFilter())
                    .notFollowedBy("end").where(new TestFilter());

    // here we must have an exception because of the two "start" patterns with the same name.
    compile(invalidPattern, false);
} {code}

The comment "// here we must have an exception because of the two "start" patterns with
the same name." is not right: the expected exception is caused by the pattern ending
with notFollowedBy, not by duplicate pattern names.





[jira] [Created] (FLINK-19950) LookupJoin can not support view or subquery and so on

2020-11-03 Thread jackylau (Jira)
jackylau created FLINK-19950:


 Summary: LookupJoin can not support view or subquery and so on
 Key: FLINK-19950
 URL: https://issues.apache.org/jira/browse/FLINK-19950
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.11.0
Reporter: jackylau
 Fix For: 1.12.0


{code:java}
// code placeholder
val sql0 = "create view v1 AS SELECT * FROM user_table"

val sql = "SELECT T.id, T.len, T.content, D.name FROM src AS T JOIN v1 " +
  "for system_time as of T.proctime AS D ON T.id = D.id"

val sink = new TestingAppendSink
tEnv.executeSql(sql0)
tEnv.sqlQuery(sql).toAppendStream[Row].addSink(sink)
env.execute()
{code}
{code:java}
// code placeholder
private void convertTemporalTable(Blackboard bb, SqlCall call) {
  final SqlSnapshot snapshot = (SqlSnapshot) call;
  final RexNode period = bb.convertExpression(snapshot.getPeriod());

  // convert inner query, could be a table name or a derived table
  SqlNode expr = snapshot.getTableRef();
  convertFrom(bb, expr);
  final TableScan scan = (TableScan) bb.root;

  final RelNode snapshotRel = relBuilder.push(scan).snapshot(period).build();

  bb.setRoot(snapshotRel, false);
}

{code}
A ClassCastException is thrown at final TableScan scan = (TableScan) bb.root;, because
for a view or subquery bb.root is not a TableScan.

 





[jira] [Created] (FLINK-19952) flink SecurityOptions.class uses deprecated methods so many times

2020-11-03 Thread jackylau (Jira)
jackylau created FLINK-19952:


 Summary: flink SecurityOptions.class uses deprecated methods so many
times
 Key: FLINK-19952
 URL: https://issues.apache.org/jira/browse/FLINK-19952
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Affects Versions: 1.11.0
Reporter: jackylau
 Fix For: 1.12.0


We should use it this way:
{code:java}
public static final ConfigOption<Boolean> KERBEROS_LOGIN_USETICKETCACHE =
        key("security.kerberos.login.use-ticket-cache")
                .booleanType()
                .defaultValue(true)
                .withDescription("Indicates whether to read from your Kerberos ticket cache.");
{code}
instead of
{code:java}
public static final ConfigOption<Boolean> KERBEROS_LOGIN_USETICKETCACHE =
        key("security.kerberos.login.use-ticket-cache")
                .defaultValue(true)
                .withDescription("Indicates whether to read from your Kerberos ticket cache.");
{code}
 





[jira] [Created] (FLINK-20474) flink cep results of doc is not right

2020-12-03 Thread jackylau (Jira)
jackylau created FLINK-20474:


 Summary: flink cep results of doc is not right
 Key: FLINK-20474
 URL: https://issues.apache.org/jira/browse/FLINK-20474
 Project: Flink
  Issue Type: Bug
  Components: Library / CEP
Affects Versions: 1.12.0
Reporter: jackylau
 Fix For: 1.13.0


h4. Contiguity within looping patterns

You can apply the same contiguity condition as discussed in the previous 
[section|https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/libs/cep.html#combining-patterns]
 within a looping pattern. The contiguity will be applied between elements 
accepted into such a pattern. To illustrate the above with an example, a 
pattern sequence {{"a b+ c"}} ({{"a"}} followed by any(non-deterministic 
relaxed) sequence of one or more {{"b"}}’s followed by a {{"c"}}) with input 
{{"a", "b1", "d1", "b2", "d2", "b3" "c"}} will have the following results:
 # *Strict Contiguity*: {{{a b3 c}}} – the {{"d1"}} after {{"b1"}} causes 
{{"b1"}} to be discarded, the same happens for {{"b2"}} because of {{"d2"}}.

 # *Relaxed Contiguity*: {{{a b1 c}}}, {{{a b1 b2 c}}}, {{{a b1 b2 b3 c}}}, 
{{{a b2 c}}}, {{{a b2 b3 c}}}, {{{a b3 c}}} - {{"d"}}’s are ignored.

 # *Non-Deterministic Relaxed Contiguity*: {{{a b1 c}}}, {{{a b1 b2 c}}}, {{{a 
b1 b3 c}}}, {{{a b1 b2 b3 c}}}, {{{a b2 c}}}, {{{a b2 b3 c}}}, {{{a b3 c}}} - 
notice the {{{a b1 b3 c}}}, which is the result of relaxing contiguity between 
{{"b"}}’s.

 

 {{"a b+ c"}} ({{"a"}} followed by any(non-deterministic relaxed) sequence of 
one or more {{"b"}}’s followed by a {{"c"}}) is not correct at *followed by a 
{{"c". it is nexted by a "c"}}*





[jira] [Created] (FLINK-31006) job is not finished when using pipeline mode to run bounded source like kafka/pulsar

2023-02-09 Thread jackylau (Jira)
jackylau created FLINK-31006:


 Summary: job is not finished when using pipeline mode to run 
bounded source like kafka/pulsar
 Key: FLINK-31006
 URL: https://issues.apache.org/jira/browse/FLINK-31006
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Connectors / Pulsar
Affects Versions: 1.17.0
Reporter: jackylau
 Fix For: 1.17.0
 Attachments: image-2023-02-10-13-20-52-890.png, 
image-2023-02-10-13-23-38-430.png, image-2023-02-10-13-24-46-929.png

When I do failover testing (e.g. killing the JM/TM) while using pipeline mode to run a
bounded source like Kafka, I found the job does not finish even though every
partition's data has been consumed.

After digging into the code, I found this logic does not run when the JM recovers: the
partition infos are unchanged, so noMoreNewPartitionSplits is not set to true, and then
this code does not run:

 

!image-2023-02-10-13-23-38-430.png!

 

!image-2023-02-10-13-24-46-929.png!





[jira] [Created] (FLINK-31098) Add ARRAY_SIZE supported in SQL & Table API

2023-02-15 Thread jackylau (Jira)
jackylau created FLINK-31098:


 Summary: Add ARRAY_SIZE supported in SQL & Table API
 Key: FLINK-31098
 URL: https://issues.apache.org/jira/browse/FLINK-31098
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


Returns the size of an array.

Syntax:
array_size(array)

Arguments:
array: An ARRAY to be handled.

Returns:

An INTEGER. If the array is NULL, the result is NULL.
Examples:
{code:sql}
SELECT array_size(ARRAY[1, 2, 3, 2, 1]);
-- 5
SELECT array_size(ARRAY[1, NULL, 1]);
-- 3
{code}
See also
spark [https://spark.apache.org/docs/latest/api/sql/index.html#array_size]

snowflake https://docs.snowflake.com/en/sql-reference/functions/array_size





[jira] [Created] (FLINK-31102) Add ARRAY_REMOVE supported in SQL & Table API

2023-02-16 Thread jackylau (Jira)
jackylau created FLINK-31102:


 Summary: Add ARRAY_REMOVE supported in SQL & Table API
 Key: FLINK-31102
 URL: https://issues.apache.org/jira/browse/FLINK-31102
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


Removes all elements equal to the given element from the array.

Syntax:
array_remove(array, element)

Arguments:
array: the ARRAY to be handled.
element: the element to remove.

Returns:

An ARRAY. If the array is NULL, the result is NULL.
Examples:
{code:sql}
SELECT array_remove(array(1, 2, 3, null, 3), 3);
-- [1,2,null]
{code}
See also
spark https://spark.apache.org/docs/latest/api/sql/index.html#array_remove

presto https://prestodb.io/docs/current/functions/array.html

postgresql https://w3resource.com/PostgreSQL/postgresql_array_remove-function.php





[jira] [Created] (FLINK-31118) Add ARRAY_UNION supported in SQL & Table API

2023-02-17 Thread jackylau (Jira)
jackylau created FLINK-31118:


 Summary: Add ARRAY_UNION supported in SQL & Table API
 Key: FLINK-31118
 URL: https://issues.apache.org/jira/browse/FLINK-31118
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


Returns an array of the elements in the union of the two arrays, without duplicates.

Syntax:
array_union(array1, array2)

Arguments:
array1, array2: the ARRAYs to be handled.

Returns:

An ARRAY. If the input is NULL, the result is NULL.
Examples:
{code:sql}
SELECT array_union(array(1, 2, 3), array(1, 3, 5));
-- [1,2,3,5]
{code}
See also
spark https://spark.apache.org/docs/latest/api/sql/index.html#array_union

presto https://prestodb.io/docs/current/functions/array.html





[jira] [Created] (FLINK-31161) upgrade MojoHaus Versions Maven Plugin to 2.14.2

2023-02-20 Thread jackylau (Jira)
jackylau created FLINK-31161:


 Summary: upgrade MojoHaus Versions Maven Plugin to 2.14.2
 Key: FLINK-31161
 URL: https://issues.apache.org/jira/browse/FLINK-31161
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0
 Attachments: image-2023-02-21-12-17-20-195.png

When we use a multi-module project, the parent structure looks like this:
https://stackoverflow.com/questions/39449275/update-parent-version-in-a-maven-projects-module

When I use

mvn org.codehaus.mojo:versions-maven-plugin:2.8.1:update-parent -DparentVersion=[1.15.2.1] -DallowSnapshots

it cannot change the parent version.

!image-2023-02-21-12-17-20-195.png!

This is fixed by the skipResolution option added in 2.14.2:
https://www.mojohaus.org/versions/versions-maven-plugin/update-parent-mojo.html

https://github.com/mojohaus/versions/tree/2.14.2





[jira] [Created] (FLINK-31166) array_contains element type error

2023-02-21 Thread jackylau (Jira)
jackylau created FLINK-31166:


 Summary: array_contains element type error
 Key: FLINK-31166
 URL: https://issues.apache.org/jira/browse/FLINK-31166
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0
 Attachments: image-2023-02-21-18-37-45-202.png, 
image-2023-02-21-18-38-04-226.png, image-2023-02-21-18-41-19-385.png, 
image-2023-02-21-18-41-27-757.png

!image-2023-02-21-18-41-27-757.png!

 

!image-2023-02-21-18-41-19-385.png!





[jira] [Created] (FLINK-31199) Add MAP_KEYS supported in SQL & Table API

2023-02-23 Thread jackylau (Jira)
jackylau created FLINK-31199:


 Summary: Add MAP_KEYS supported in SQL & Table API
 Key: FLINK-31199
 URL: https://issues.apache.org/jira/browse/FLINK-31199
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


Returns an unordered array containing the keys of the map.

Syntax:
map_keys(map)

Arguments:
map: the MAP to be handled.

Returns:

An ARRAY. If the map is NULL, the result is NULL.
Examples:
{code:sql}
SELECT map_keys(map(1, 'a', 2, 'b'));
-- [1,2]
{code}
See also
spark https://spark.apache.org/docs/latest/api/sql/index.html#map_keys





[jira] [Created] (FLINK-31200) Add MAP_VALUES supported in SQL & Table API

2023-02-23 Thread jackylau (Jira)
jackylau created FLINK-31200:


 Summary: Add MAP_VALUES supported in SQL & Table API
 Key: FLINK-31200
 URL: https://issues.apache.org/jira/browse/FLINK-31200
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


Returns an unordered array containing the values of the map.

Syntax:
map_values(map)

Arguments:
map: the MAP to be handled.

Returns:

An ARRAY. If the map is NULL, the result is NULL.
Examples:
{code:sql}
SELECT map_values(map(1, 'a', 2, 'b'));
-- ["a","b"]
{code}
See also
spark https://spark.apache.org/docs/latest/api/sql/index.html#map_values





[jira] [Created] (FLINK-31207) supports high order function like other engine

2023-02-23 Thread jackylau (Jira)
jackylau created FLINK-31207:


 Summary: supports high order function like other engine
 Key: FLINK-31207
 URL: https://issues.apache.org/jira/browse/FLINK-31207
 Project: Flink
  Issue Type: Sub-task
Reporter: jackylau


spark https://spark.apache.org/docs/latest/api/sql/index.html#transform

After Calcite https://issues.apache.org/jira/browse/CALCITE-3679 supports higher-order
functions, we should support many higher-order functions like Spark/Presto; see the
example below.
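
For example, Spark's transform from the linked docs:

{code:sql}
SELECT transform(array(1, 2, 3), x -> x + 1);
-- [2, 3, 4]
{code}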





[jira] [Created] (FLINK-31237) Fix possible bug of array_distinct

2023-02-27 Thread jackylau (Jira)
jackylau created FLINK-31237:


 Summary: Fix possible bug of array_distinct
 Key: FLINK-31237
 URL: https://issues.apache.org/jira/browse/FLINK-31237
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


As discussed in https://github.com/apache/flink/pull/19623, we should use built-in
expressions/functions here, because SQL equality semantics differ from Java equals.





[jira] [Created] (FLINK-31279) fix times using number and interval type bug

2023-03-01 Thread jackylau (Jira)
jackylau created FLINK-31279:


 Summary: fix times using number and interval type bug 
 Key: FLINK-31279
 URL: https://issues.apache.org/jira/browse/FLINK-31279
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


{code:java}
// code placeholder
Flink SQL> select interval 3  day * 2;
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.planner.codegen.CodeGenException: Interval expression 
type expected. {code}





[jira] [Created] (FLINK-31336) interval type process has problem in table api and sql

2023-03-06 Thread jackylau (Jira)
jackylau created FLINK-31336:


 Summary: interval type process has problem in table api and sql
 Key: FLINK-31336
 URL: https://issues.apache.org/jira/browse/FLINK-31336
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


{code:java}
select typeof(interval '1' day);
-- INTERVAL SECOND(3) NOT NULL {code}
The reported type should be an INTERVAL DAY type, not INTERVAL SECOND(3) NOT NULL.





[jira] [Created] (FLINK-31377) BinaryArrayData getArray/getMap should Handle null correctly AssertionError: valueArraySize (-6) should >= 0

2023-03-09 Thread jackylau (Jira)
jackylau created FLINK-31377:


 Summary: BinaryArrayData getArray/getMap should Handle null 
correctly AssertionError: valueArraySize (-6) should >= 0 
 Key: FLINK-31377
 URL: https://issues.apache.org/jira/browse/FLINK-31377
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


{code:java}
// when the array contains a MAP element that is null, this call fails with
// "AssertionError: valueArraySize (-6) should >= 0"
Object getElementOrNull(ArrayData array, int pos);{code}





[jira] [Created] (FLINK-31381) UnsupportedOperationException: Unsupported type when convertTypeToSpec: MAP

2023-03-09 Thread jackylau (Jira)
jackylau created FLINK-31381:


 Summary: UnsupportedOperationException: Unsupported type when 
convertTypeToSpec: MAP
 Key: FLINK-31381
 URL: https://issues.apache.org/jira/browse/FLINK-31381
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


While fixing https://issues.apache.org/jira/browse/FLINK-31377 I found another bug: the
fix was not complete.
{code:java}
SELECT array_contains(ARRAY[CAST(null AS MAP<INT, INT>), MAP[1, 2]], MAP[1, 2]); {code}
{code:java}
Caused by: java.lang.UnsupportedOperationException: Unsupported type when convertTypeToSpec: MAP
    at org.apache.calcite.sql.type.SqlTypeUtil.convertTypeToSpec(SqlTypeUtil.java:1069)
    at org.apache.calcite.sql.type.SqlTypeUtil.convertTypeToSpec(SqlTypeUtil.java:1091)
    at org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.castTo(SqlValidatorUtils.java:82)
    at org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.adjustTypeForMultisetConstructor(SqlValidatorUtils.java:74)
    at org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.adjustTypeForArrayConstructor(SqlValidatorUtils.java:39)
    at org.apache.flink.table.planner.functions.sql.SqlArrayConstructor.inferReturnType(SqlArrayConstructor.java:44)
    at org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:504)
    at org.apache.calcite.sql.SqlOperator.deriveType(SqlOperator.java:605)
    at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:6218)
    at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:6203)
    at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:161)
    at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1861)
    at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1852)
    at org.apache.flink.table.planner.functions.inference.CallBindingCallContext$1.get(CallBindingCallContext.java:74)
    at org.apache.flink.table.planner.functions.inference.CallBindingCallContext$1.get(CallBindingCallContext.java:69)
    at org.apache.flink.table.types.inference.strategies.RootArgumentTypeStrategy.inferArgumentType(RootArgumentTypeStrategy.java:58)
    at org.apache.flink.table.types.inference.strategies.SequenceInputTypeStrategy.inferInputTypes(SequenceInputTypeStrategy.java:76)
    at org.apache.flink.table.planner.functions.inference.TypeInferenceOperandInference.inferOperandTypesOrError(TypeInferenceOperandInference.java:91)
    at org.apache.flink.table. {code}





[jira] [Created] (FLINK-31602) Add ARRAY_POSITION supported in SQL & Table API

2023-03-23 Thread jackylau (Jira)
jackylau created FLINK-31602:


 Summary: Add ARRAY_POSITION supported in SQL & Table API
 Key: FLINK-31602
 URL: https://issues.apache.org/jira/browse/FLINK-31602
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0








[jira] [Created] (FLINK-31621) Add ARRAY_REVERSE supported in SQL & Table API

2023-03-26 Thread jackylau (Jira)
jackylau created FLINK-31621:


 Summary: Add ARRAY_REVERSE supported in SQL & Table API
 Key: FLINK-31621
 URL: https://issues.apache.org/jira/browse/FLINK-31621
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0








[jira] [Created] (FLINK-31622) Add ARRAY_APPEND supported in SQL & Table API

2023-03-26 Thread jackylau (Jira)
jackylau created FLINK-31622:


 Summary: Add ARRAY_APPEND supported in SQL & Table API
 Key: FLINK-31622
 URL: https://issues.apache.org/jira/browse/FLINK-31622
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0








[jira] [Created] (FLINK-31664) Add ARRAY_INTERSECT supported in SQL & Table API

2023-03-29 Thread jackylau (Jira)
jackylau created FLINK-31664:


 Summary: Add ARRAY_INTERSECT supported in SQL & Table API
 Key: FLINK-31664
 URL: https://issues.apache.org/jira/browse/FLINK-31664
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0








[jira] [Created] (FLINK-31665) Add ARRAY_CONCAT supported in SQL & Table API

2023-03-29 Thread jackylau (Jira)
jackylau created FLINK-31665:


 Summary: Add ARRAY_CONCAT supported in SQL & Table API
 Key: FLINK-31665
 URL: https://issues.apache.org/jira/browse/FLINK-31665
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0








[jira] [Created] (FLINK-31666) Add ARRAY_OVERLAP supported in SQL & Table API

2023-03-29 Thread jackylau (Jira)
jackylau created FLINK-31666:


 Summary: Add ARRAY_OVERLAP supported in SQL & Table API
 Key: FLINK-31666
 URL: https://issues.apache.org/jira/browse/FLINK-31666
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0








[jira] [Created] (FLINK-31677) Add MAP_ENTRIES supported in SQL & Table API

2023-03-30 Thread jackylau (Jira)
jackylau created FLINK-31677:


 Summary: Add MAP_ENTRIES supported in SQL & Table API
 Key: FLINK-31677
 URL: https://issues.apache.org/jira/browse/FLINK-31677
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0








[jira] [Created] (FLINK-31682) map_from_arrays should take whether allow duplicate keys and null key into consideration

2023-03-31 Thread jackylau (Jira)
jackylau created FLINK-31682:


 Summary: map_from_arrays should take whether allow duplicate keys 
and null key into consideration
 Key: FLINK-31682
 URL: https://issues.apache.org/jira/browse/FLINK-31682
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


After researching Spark/Presto/MaxCompute's map_from_arrays/map_from_entries, they all
support duplicate keys and null keys, for the most part.





[jira] [Created] (FLINK-31691) Add MAP_FROM_ENTRIES supported in SQL & Table API

2023-04-02 Thread jackylau (Jira)
jackylau created FLINK-31691:


 Summary: Add MAP_FROM_ENTRIES supported in SQL & Table API
 Key: FLINK-31691
 URL: https://issues.apache.org/jira/browse/FLINK-31691
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0








[jira] [Created] (FLINK-31751) array return type SpecificTypeStrategies.ARRAY and ifThenElse return type is not correct

2023-04-07 Thread jackylau (Jira)
jackylau created FLINK-31751:


 Summary: array return type SpecificTypeStrategies.ARRAY and 
ifThenElse return type is not correct
 Key: FLINK-31751
 URL: https://issues.apache.org/jira/browse/FLINK-31751
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: jackylau
 Fix For: 1.18.0


Take the ARRAY return type as an example: the type strategy returns a {@link
DataTypes#ARRAY(DataType)} with element type equal to the type of the first argument,
which does not match Calcite's semantics.

For example, given element types where one is nullable and one is NOT NULL:
{code:java}
ARRAY<T> and ARRAY<T NOT NULL>
it should return ARRAY<T> instead of ARRAY<T NOT NULL>{code}





[jira] [Created] (FLINK-26678) make flink-console.sh script clean

2022-03-16 Thread jackylau (Jira)
jackylau created FLINK-26678:


 Summary: make flink-console.sh script clean
 Key: FLINK-26678
 URL: https://issues.apache.org/jira/browse/FLINK-26678
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / Scripts
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0


log_setting=("-Dlog.file=${log}" 
"-Dlog4j.configuration=file:${FLINK_CONF_DIR}/log4j-console.properties" 
"-Dlog4j.configurationFile=file:${FLINK_CONF_DIR}/log4j-console.properties" 
"-Dlogback.configurationFile=file:${FLINK_CONF_DIR}/logback-console.xml")

 

We can use a variable for the repeated log4j-console.properties file name.





[jira] [Created] (FLINK-26732) DefaultResourceAllocationStrategy need more log info to troubleshoot

2022-03-18 Thread jackylau (Jira)
jackylau created FLINK-26732:


 Summary: DefaultResourceAllocationStrategy need more log info to 
troubleshoot
 Key: FLINK-26732
 URL: https://issues.apache.org/jira/browse/FLINK-26732
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / Kubernetes
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0


{code:java}
2022-03-17 19:16:31,653 INFO  org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager [] - Received resource requirements from job 0ffd9c39e3c91bbebbf5db350a0e0fa4: [ResourceRequirement{resourceProfile=ResourceProfile{cpuCores=0.13, taskHeapMemory=512.000mb (536870912 bytes), taskOffHeapMemory=32.000mb (33554432 bytes), managedMemory=0 bytes, networkMemory=0 bytes}, numberOfRequiredSlots=1}]
2022-03-17 19:16:31,660 WARN  org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager [] - Could not fulfill resource requirements of job 0ffd9c39e3c91bbebbf5db350a0e0fa4. Free slots: 0
2022-03-17 19:16:31,662 WARN  org.apache.flink.runtime.jobmaster.slotpool.DeclarativeSlotPoolBridge [] - Could not acquire the minimum required resources, failing slot requests. Acquired: []. Current slot pool status: Registered TMs: 0, registered slots: 0 free slots: 0
2022-03-17 19:16:31,680 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Source: TableSourceScan(table=[[default_catalog, default_database, datas... -> Sink: Sink(table=[default_catalog.default_database.sink], fields=[a, b]) (1/1) (327b64873d9098a9997d629320a683d1) switched from SCHEDULED to FAILED on [unassigned resource].
org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Could not acquire the minimum required resources. {code}





[jira] [Created] (FLINK-27621) expose SafetyNetWrapperClassLoader from private to public and add addUrl method for flink client

2022-05-15 Thread jackylau (Jira)
jackylau created FLINK-27621:


 Summary: expose SafetyNetWrapperClassLoader from private to public 
and add addUrl method for flink client
 Key: FLINK-27621
 URL: https://issues.apache.org/jira/browse/FLINK-27621
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Task
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0


Expose SafetyNetWrapperClassLoader from private to public and add an addUrl method, for
Flink clients that need to dynamically add jars.





[jira] [Created] (FLINK-27630) maven-source-plugin for table planner values connector for debug

2022-05-15 Thread jackylau (Jira)
jackylau created FLINK-27630:


 Summary: maven-source-plugin for table planner values connector 
for debug
 Key: FLINK-27630
 URL: https://issues.apache.org/jira/browse/FLINK-27630
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0


{code:java}
just like kafka/pulsar

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-source-plugin</artifactId>
   <executions>
      <execution>
         <id>attach-test-sources</id>
         <goals>
            <goal>test-jar-no-fork</goal>
         </goals>
         <configuration>
            <skipSource>false</skipSource>
            <excludes>
               <exclude>**/factories/**</exclude>
               <exclude>META-INF/LICENSE</exclude>
               <exclude>META-INF/NOTICE</exclude>
            </excludes>
         </configuration>
      </execution>
   </executions>
</plugin>
 {code}





[jira] [Created] (FLINK-27899) deactivate the shade plugin doesn't take effect

2022-06-05 Thread jackylau (Jira)
jackylau created FLINK-27899:


 Summary: deactivate the shade plugin doesn't take effect
 Key: FLINK-27899
 URL: https://issues.apache.org/jira/browse/FLINK-27899
 Project: Flink
  Issue Type: Improvement
  Components: Quickstarts
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0
 Attachments: image-2022-06-06-14-35-00-438.png

{code:java}
We need to specify the execution id:

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-shade-plugin</artifactId>
   <executions>
      <execution>
         <!-- an explicit <id> matching the parent pom's shade execution is required here -->
      </execution>
   </executions>
</plugin>
 {code}
logs here:

 

!image-2022-06-06-14-35-00-438.png!





[jira] [Created] (FLINK-27992) cep StreamExecMatch need check the parallelism and maxParallelism of the two transformation in it

2022-06-09 Thread jackylau (Jira)
jackylau created FLINK-27992:


 Summary: cep StreamExecMatch need check the parallelism and 
maxParallelism of the two transformation in it
 Key: FLINK-27992
 URL: https://issues.apache.org/jira/browse/FLINK-27992
 Project: Flink
  Issue Type: Improvement
  Components: Library / CEP
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0


The StreamExecMatch node has two transformations (StreamRecordTimestampInserter ->
Match). The upstream edge of StreamExecMatch is a hash edge, so setting different
parallelism and maxParallelism on them causes problems:

the window operator computes key groups using the downstream node's max parallelism,
while the CEP operator uses its own max parallelism, and the two may not be equal.

For example:

window --(hash edge)--> StreamRecordTimestampInserter --(forward edge)--> Cep





[jira] [Created] (FLINK-26145) k8s docs jobmanager-pod-template artifacts-fetcher:latest image is not exist, we can use busybox to replace it

2022-02-15 Thread jackylau (Jira)
jackylau created FLINK-26145:


 Summary: k8s docs jobmanager-pod-template  
artifacts-fetcher:latest image is not exist, we can use busybox to replace it 
 Key: FLINK-26145
 URL: https://issues.apache.org/jira/browse/FLINK-26145
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.15.0
Reporter: jackylau
 Fix For: 1.15.0
 Attachments: image-2022-02-15-16-54-42-861.png

!image-2022-02-15-16-54-42-861.png!





[jira] [Created] (FLINK-21512) flink MemorySize.parseBytes bug if users set dynamic properties

2021-02-26 Thread jackylau (Jira)
jackylau created FLINK-21512:


 Summary: flink MemorySize.parseBytes bug if users set dynamic
properties
 Key: FLINK-21512
 URL: https://issues.apache.org/jira/browse/FLINK-21512
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Affects Versions: 1.12.0
Reporter: jackylau
 Fix For: 1.13.0


While developing Flink-on-Ray mode, I found that when a user sets dynamic properties,
BootstrapTools.getDynamicPropertiesAsString automatically adds " or ' quoting depending
on the OS.

This causes MemorySize.parseBytes to fail to parse the value and throw the exception

"text does not start with a number"

A small reproduction sketch follows.





[jira] [Created] (FLINK-21991) flink varchar which exposes to users it not unified

2021-03-26 Thread jackylau (Jira)
jackylau created FLINK-21991:


 Summary: flink varchar which exposes to users it not unified
 Key: FLINK-21991
 URL: https://issues.apache.org/jira/browse/FLINK-21991
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.13.0
Reporter: jackylau
 Fix For: 1.13.0



The SQL docs say for VARCHAR: "If no length is specified, n is equal to 1", which is
not right:

Calcite will transform the bare VARCHAR to STRING:
{code:java}
SqlDataTypeSpec type = regularColumn.getType();
boolean nullable = type.getNullable() == null ? true : type.getNullable();
RelDataType relType = type.deriveType(sqlValidator, nullable);
{code}






[jira] [Created] (FLINK-23697) flink standalone Isolation is poor, why not do as spark does

2021-08-09 Thread jackylau (Jira)
jackylau created FLINK-23697:


 Summary: flink standalone Isolation is poor, why not do as spark 
does
 Key: FLINK-23697
 URL: https://issues.apache.org/jira/browse/FLINK-23697
 Project: Flink
  Issue Type: Bug
Reporter: jackylau


Flink standalone isolation is poor; why not do as Spark does?
Spark abstracts the cluster manager, and its executor is like a Flink TaskManager.
A Spark (standalone) worker is a process, and executors run as child processes of the
worker.





[jira] [Created] (FLINK-24884) flink flame graph webui bug

2021-11-11 Thread jackylau (Jira)
jackylau created FLINK-24884:


 Summary: flink flame graph webui bug
 Key: FLINK-24884
 URL: https://issues.apache.org/jira/browse/FLINK-24884
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.13.3, 1.14.0
Reporter: jackylau
 Fix For: 1.15.0
 Attachments: image-2021-11-12-15-48-08-140.png

I cannot compile successfully when I port the flame graph feature to our lower version
of Flink, but it succeeds in the higher version of Flink:
 !image-2021-11-12-15-48-08-140.png!





[jira] [Created] (FLINK-24886) flink flame args bug, we may need to modify TimeUtils to support "m"

2021-11-12 Thread jackylau (Jira)
jackylau created FLINK-24886:


 Summary: flink flame args bug, we may need to modify TimeUtils to
support "m"
 Key: FLINK-24886
 URL: https://issues.apache.org/jira/browse/FLINK-24886
 Project: Flink
  Issue Type: Bug
  Components: Runtime / REST
Affects Versions: 1.13.3, 1.14.0
Reporter: jackylau
 Fix For: 1.15.0
 Attachments: image-2021-11-12-17-38-28-969.png

When I follow the configuration documentation and set rest.flamegraph.cleanup-interval
to "10 m", it fails:

{code:java}
public static final ConfigOption<Duration> FLAMEGRAPH_CLEANUP_INTERVAL =
        key("rest.flamegraph.cleanup-interval")
                .durationType()
                .defaultValue(Duration.ofMinutes(10))
                .withDescription(
                        "Time after which cached stats are cleaned up if not accessed. It can"
                                + " be specified using notation: \"100 s\", \"10 m\"."); {code}
 

 

!image-2021-11-12-17-38-28-969.png!





[jira] [Created] (FLINK-28507) supports set hadoop mount path in pod

2022-07-12 Thread jackylau (Jira)
jackylau created FLINK-28507:


 Summary: supports set hadoop mount path in pod
 Key: FLINK-28507
 URL: https://issues.apache.org/jira/browse/FLINK-28507
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / Kubernetes
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0


The Hadoop ConfigMap mount path is hard-coded to /opt/hadoop/conf in Flink's Kubernetes
integration, but we want to set a different path, because my company has already
standardized on /opt/flink/conf and that conf path is used in old code.





[jira] [Created] (FLINK-28583) make flink dist log4j dependency simple and clear

2022-07-18 Thread jackylau (Jira)
jackylau created FLINK-28583:


 Summary: make flink dist log4j dependency simple and clear
 Key: FLINK-28583
 URL: https://issues.apache.org/jira/browse/FLINK-28583
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / Scripts
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0


Flink doesn't shade log4j and wants to put it into flink/lib, using a shade exclusion
like this:
{code:java}
<excludes>
   <exclude>org.apache.logging.log4j:*</exclude>
</excludes> {code}
and adding this to put them into flink/lib:
{code:java}
<includes>
   <include>org.apache.logging.log4j:log4j-api</include>
   <include>org.apache.logging.log4j:log4j-core</include>
   <include>org.apache.logging.log4j:log4j-slf4j-impl</include>
   <include>org.apache.logging.log4j:log4j-1.2-api</include>
</includes> {code}
I suggest making the log4j dependencies provided.





[jira] [Created] (FLINK-28776) RowTimeMiniBatchAssginerOperator doesn't need separate chain with upstream WatermarkAssignerOperator

2022-08-02 Thread jackylau (Jira)
jackylau created FLINK-28776:


 Summary: RowTimeMiniBatchAssginerOperator doesn't need separate 
chain with upstream WatermarkAssignerOperator
 Key: FLINK-28776
 URL: https://issues.apache.org/jira/browse/FLINK-28776
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0








[jira] [Created] (FLINK-28780) function docs of dayofmonth is not correct

2022-08-02 Thread jackylau (Jira)
jackylau created FLINK-28780:


 Summary: function docs of dayofmonth is not correct
 Key: FLINK-28780
 URL: https://issues.apache.org/jira/browse/FLINK-28780
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0
 Attachments: image-2022-08-02-20-32-22-309.png

!image-2022-08-02-20-32-22-309.png!





[jira] [Created] (FLINK-28813) new stack builtinfunction validateClassForRuntime is not correct

2022-08-04 Thread jackylau (Jira)
jackylau created FLINK-28813:


 Summary: new stack builtinfunction validateClassForRuntime is not
correct
 Key: FLINK-28813
 URL: https://issues.apache.org/jira/browse/FLINK-28813
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0


{code:java}
org.apache.flink.table.api.ValidationException: Could not find an 
implementation method 'eval' in class 
'org.apache.flink.table.runtime.functions.scalar.ConvFunction' for function 
'CONV' that matches the following signature:
org.apache.flink.table.data.StringData 
eval(org.apache.flink.table.data.StringData, java.lang.Byte, java.lang.Integer) 
 at 
org.apache.flink.table.functions.UserDefinedFunctionHelper.validateClassForRuntime(UserDefinedFunctionHelper.java:319)
 {code}
{code:java}
// code placeholder
public class ConvFunction extends BuiltInScalarFunction {

public ConvFunction(SpecializedFunction.SpecializedContext context) {
super(BuiltInFunctionDefinitions.CONV, context);
}

public static StringData eval(StringData input, Integer fromBase, Integer 
toBase) {
if (input == null || fromBase == null || toBase == null) {
return null;
}
return StringData.fromString(BaseConversionUtils.conv(input.toBytes(), 
fromBase, toBase));
}

public static StringData eval(long input, Integer fromBase, Integer toBase) 
{
return eval(StringData.fromString(String.valueOf(input)), fromBase, 
toBase);
}
}


@Test
public void testRowScalarFunction1() throws Exception {
tEnv().executeSql(
"CREATE TABLE TestTable(s STRING) " + "WITH ('connector' = 
'COLLECTION')");

// the names of the function input and r differ
tEnv().executeSql("INSERT INTO TestTable select conv(3, cast(1 AS TINYINT), 
4)").await();

} {code}





[jira] [Created] (FLINK-28830) new stack udtf doesn't support atomic type

2022-08-05 Thread jackylau (Jira)
jackylau created FLINK-28830:


 Summary: new stack udtf doesn't support atomic type 
 Key: FLINK-28830
 URL: https://issues.apache.org/jira/browse/FLINK-28830
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0


{code:java}
public class GenerateSeriesFunction extends BuiltInTableFunction {

    private static final long serialVersionUID = 1L;

    public GenerateSeriesFunction(SpecializedContext specializedContext) {
        super(BuiltInFunctionDefinitions.GENERATE_SERIES, specializedContext);
    }

    public void eval(long start, long stop) {
        eval(start, stop, 1);
    }

    public void eval(long start, long stop, long step) {
        long s = start;
        while (s <= stop) {
            collect(s);
            s += step;
        }
    }
}


public static final BuiltInFunctionDefinition GENERATE_SERIES =
        BuiltInFunctionDefinition.newBuilder()
                .name("GENERATE_SERIES")
                .kind(TABLE)
                .inputTypeStrategy(
                        or(
                                sequence(
                                        logical(LogicalTypeFamily.NUMERIC),
                                        logical(LogicalTypeFamily.NUMERIC)),
                                sequence(
                                        logical(LogicalTypeFamily.NUMERIC),
                                        logical(LogicalTypeFamily.NUMERIC),
                                        logical(LogicalTypeFamily.NUMERIC))))
                .outputTypeStrategy(explicit(DataTypes.BIGINT()))
                .runtimeClass(
                        "org.apache.flink.table.runtime.functions.table.GenerateSeriesFunction")
                .build(); {code}
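
For comparison, a user-level table function can emit an atomic type when the output type is declared explicitly; a minimal sketch against the standard user-facing API (the class name is illustrative):
{code:java}
import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.annotation.FunctionHint;
import org.apache.flink.table.functions.TableFunction;

// Minimal user-level sketch: the atomic BIGINT output works here because it
// is declared via the function hint instead of being derived.
@FunctionHint(output = @DataTypeHint("BIGINT"))
public class SeriesFunction extends TableFunction<Long> {
    public void eval(long start, long stop, long step) {
        for (long s = start; s <= stop; s += step) {
            collect(s);
        }
    }
}
{code}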



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-28893) Add conv function support in SQL & Table API

2022-08-09 Thread jackylau (Jira)
jackylau created FLINK-28893:


 Summary: Add conv function support in SQL & Table API
 Key: FLINK-28893
 URL: https://issues.apache.org/jira/browse/FLINK-28893
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0


Converts a number from one numeric base system to another.

Syntax:
{code:java}
conv(number, fromBase, toBase) {code}
Arguments:
 * number: INTEGER_NUMERIC or CHARACTER_STRING family.
 * fromBase: INTEGER_NUMERIC family.
 * toBase: INTEGER_NUMERIC family.

Returns:

Converts numbers between different number bases. Returns a string 
representation of the number _{{N}}_, converted from base _{{from_base}}_ 
to base _{{to_base}}_. Returns {{NULL}} if any argument is {{NULL}}. 
The argument _{{N}}_ is interpreted as an integer, but may be specified as an 
integer or a string. The minimum base is {{2}} and the maximum base is 
{{36}}. If _{{from_base}}_ is a negative number, _{{N}}_ is regarded as a 
signed number. Otherwise, _{{N}}_ is treated as unsigned. 
[{{CONV()}}|https://dev.mysql.com/doc/refman/8.0/en/mathematical-functions.html#function_conv]
 works with 64-bit precision.

Examples:
{code:java}
> SELECT CONV(100, 10, -8);
 "144"
> SELECT CONV(1111, 2, 10);
 "15"
> SELECT CONV(100, 2, NULL);
 NULL {code}
See more:
 * mysql: 
https://dev.mysql.com/doc/refman/8.0/en/mathematical-functions.html#function_conv
 * mariadb: https://mariadb.com/kb/en/conv/
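
To make the conversion rule above concrete, a hedged sketch of the basic semantics (the method name is illustrative; MySQL's unsigned 64-bit wrap-around and negative-base sign handling are not modeled):
{code:java}
// Hedged sketch of the in-range conv semantics via java.lang.Long radix
// parsing; bases are reduced to their absolute value for simplicity.
public static String convSketch(String number, Integer fromBase, Integer toBase) {
    if (number == null || fromBase == null || toBase == null) {
        return null; // NULL if any argument is NULL
    }
    long value = Long.parseLong(number.trim(), Math.abs(fromBase));
    return Long.toString(value, Math.abs(toBase)).toUpperCase();
}
{code}
For example, {{convSketch("100", 10, -8)}} yields {{"144"}}, matching the first example above.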



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-28919) Add built-in generate_series function.

2022-08-11 Thread jackylau (Jira)
jackylau created FLINK-28919:


 Summary: Add built-in generate_series function.
 Key: FLINK-28919
 URL: https://issues.apache.org/jira/browse/FLINK-28919
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0


 

Syntax:
{code:java}
generate_series ( start numeric, stop numeric [, step numeric ] ) → setof 
numeric{code}
It doesn't support timestamps for now, just like the SQL Server version.

Returns:

When _{{step}}_ is positive, zero rows are returned if _{{start}}_ is greater 
than _{{stop}}_. Conversely, when _{{step}}_ is negative, zero rows are 
returned if _{{start}}_ is less than _{{stop}}_. Zero rows are also 
returned if any input is {{NULL}}. It is an error for _{{step}}_ to be 
zero. Some examples follow:

Examples:
{code:java}
SELECT * FROM generate_series(2,4);
 generate_series
-----------------
               2
               3
               4
(3 rows)

SELECT * FROM generate_series(5,1,-2);
 generate_series
-----------------
               5
               3
               1
(3 rows)

SELECT * FROM generate_series(4,3);
 generate_series
-----------------
(0 rows)

SELECT generate_series(1.1, 4, 1.3);
 generate_series
-----------------
             1.1
             2.4
             3.7{code}
See more:
 * pg: https://www.postgresql.org/docs/current/functions-srf.html
 * sql server: 
https://docs.microsoft.com/en-us/sql/t-sql/functions/generate-series-transact-sql?view=sql-server-ver16
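
To make the stepping rule concrete, a hedged Java sketch (names are illustrative, and this is not the eventual built-in implementation):
{code:java}
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of the generation rule: the direction follows the sign of
// step, NULL input yields zero rows, and a zero step is an error.
public static List<BigDecimal> generateSeriesSketch(
        BigDecimal start, BigDecimal stop, BigDecimal step) {
    List<BigDecimal> rows = new ArrayList<>();
    if (start == null || stop == null || step == null) {
        return rows; // zero rows for NULL input
    }
    if (step.signum() == 0) {
        throw new IllegalArgumentException("step size cannot equal zero");
    }
    for (BigDecimal s = start;
            step.signum() > 0 ? s.compareTo(stop) <= 0 : s.compareTo(stop) >= 0;
            s = s.add(step)) {
        rows.add(s);
    }
    return rows;
}
{code}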



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-28929) Add built-in datediff function.

2022-08-11 Thread jackylau (Jira)
jackylau created FLINK-28929:


 Summary: Add built-in datediff function.
 Key: FLINK-28929
 URL: https://issues.apache.org/jira/browse/FLINK-28929
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0


Syntax:
{code:java}
DATEDIFF(expr1,expr2){code}

Returns:

Returns _{{expr1}}_ − _{{expr2}}_ expressed as a value in days from one date to 
the other. _{{expr1}}_ and _{{expr2}}_ are date or date-and-time expressions. 
Only the date parts of the values are used in the calculation.

This function returns {{NULL}} if _{{expr1}}_ or _{{expr2}}_ is {{NULL}}.

Examples:
{code:java}
> SELECT DATEDIFF('2007-12-31 23:59:59','2007-12-30');
-> 1
> SELECT DATEDIFF('2010-11-30 23:59:59','2010-12-31');
-> -31{code}
See more:
 * mysql: 
https://dev.mysql.com/doc/refman/8.0/en/date-and-time-functions.html#function_datediff
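
A hedged sketch of these semantics with java.time (the method name is illustrative):
{code:java}
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;

// Hedged sketch: only the date parts count, and NULL propagates.
public static Long dateDiffSketch(LocalDateTime expr1, LocalDateTime expr2) {
    if (expr1 == null || expr2 == null) {
        return null;
    }
    // expr1 - expr2 in whole days, ignoring the time-of-day components
    return ChronoUnit.DAYS.between(expr2.toLocalDate(), expr1.toLocalDate());
}
{code}
For the two examples above, {{dateDiffSketch}} returns 1 and -31 respectively.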



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29010) flink planner test is not correct

2022-08-17 Thread jackylau (Jira)
jackylau created FLINK-29010:


 Summary: flink planner test is not correct
 Key: FLINK-29010
 URL: https://issues.apache.org/jira/browse/FLINK-29010
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0


{code:java}
private def prepareResult(seq: Seq[Row], isSorted: Boolean): Seq[String] = {
  // compares results via their string representations; when the result is not
  // expected to be sorted, the stringified rows are sorted lexicographically
  if (!isSorted) seq.map(_.toString).sortBy(s => s) else seq.map(_.toString)
} {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29012) flink function doc is not correct

2022-08-17 Thread jackylau (Jira)
jackylau created FLINK-29012:


 Summary: flink function doc is not correct
 Key: FLINK-29012
 URL: https://issues.apache.org/jira/browse/FLINK-29012
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0
 Attachments: image-2022-08-17-17-15-39-702.png

!image-2022-08-17-17-15-39-702.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29096) json_value: when the path contains a blank, the result is not right

2022-08-24 Thread jackylau (Jira)
jackylau created FLINK-29096:


 Summary: json_value: when the path contains a blank, the result is not 
right
 Key: FLINK-29096
 URL: https://issues.apache.org/jira/browse/FLINK-29096
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0


!https://aone.alipay.com/v2/api/workitem/adapter/file/url?fileIdentifier=workitem%2Falipay%2Fdefault%2F1661334308971image.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29150) TIMESTAMPDIFF microsecond unit for Table API

2022-08-30 Thread jackylau (Jira)
jackylau created FLINK-29150:


 Summary: TIMESTAMPDIFF microsecond unit for Table API
 Key: FLINK-29150
 URL: https://issues.apache.org/jira/browse/FLINK-29150
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.17.0
Reporter: jackylau
 Fix For: 1.17.0


Support the MICROSECOND unit like MySQL: https://www.geeksforgeeks.org/timestampdiff-function-in-mysql/
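
For reference, the MySQL behaviour requested here, shown as a hedged example (per MySQL's documented semantics, the result is expr2 − expr1 in the given unit):
{code:java}
-- MySQL semantics to mirror in the Table API
SELECT TIMESTAMPDIFF(MICROSECOND,
                     '2020-01-01 00:00:00',
                     '2020-01-01 00:00:01');
-- -> 1000000 {code}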



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29182) SumAggFunction redundant computations

2022-09-01 Thread jackylau (Jira)
jackylau created FLINK-29182:


 Summary: SumAggFunction redundant computations
 Key: FLINK-29182
 URL: https://issues.apache.org/jira/browse/FLINK-29182
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.16.0
Reporter: jackylau
 Fix For: 1.16.0


{code:java}
public Expression[] accumulateExpressions() {
    return new Expression[] {
        /* sum = */ ifThenElse(
                isNull(operand(0)),
                sum,
                ifThenElse(
                        isNull(operand(0)),
                        sum,
                        ifThenElse(isNull(sum), operand(0), adjustedPlus(sum, operand(0)))))
    };
} {code}
There is a redundant computation: isNull(operand(0)) is tested twice, and the inner test is always false by the time it is reached.
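
A hedged sketch of the deduplicated expression (assuming the inner duplicate null check can simply be dropped):
{code:java}
// Hedged sketch: one isNull(operand(0)) test suffices.
public Expression[] accumulateExpressions() {
    return new Expression[] {
        /* sum = */ ifThenElse(
                isNull(operand(0)),
                sum,
                ifThenElse(isNull(sum), operand(0), adjustedPlus(sum, operand(0))))
    };
}
{code}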



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29227) should package disruptor (com.lmax) into flink lib for the async logger when a connector uses it

2022-09-07 Thread jackylau (Jira)
jackylau created FLINK-29227:


 Summary: should package disruptor (com.lmax) into flink lib for the async 
logger when a connector uses it
 Key: FLINK-29227
 URL: https://issues.apache.org/jira/browse/FLINK-29227
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.17.0
Reporter: jackylau
 Fix For: 1.17.0


When I develop xxConnector, the dependency chain looks like this:

      xxConnector -> log4j2 -> AsyncLoggerConfig (jar: disruptor(com.lmax))

xxConnector is loaded by the user (child-first) classloader.

log4j2 sits in flink lib and is loaded by the app classloader, which makes 
AsyncLoggerConfig also be loaded by the app classloader, following the 
classloader delegation principle. The app classloader cannot see the disruptor 
jar bundled inside the connector, so the async logger cannot be initialized; 
packaging disruptor into flink lib avoids this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29479) support configuring whether the system env PYTHONPATH is used

2022-09-29 Thread jackylau (Jira)
jackylau created FLINK-29479:


 Summary: support configuring whether the system env PYTHONPATH is used
 Key: FLINK-29479
 URL: https://issues.apache.org/jira/browse/FLINK-29479
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.17.0
Reporter: jackylau
 Fix For: 1.17.0


A PYTHONPATH env variable may already exist in the system, e.g. in YARN/K8s 
images, and it can sometimes conflict with the user's Python dependencies. So I 
suggest adding a config option that controls whether the system env PYTHONPATH 
is used.
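
A hedged sketch of what the proposed switch could look like (the option key and default are hypothetical, not an existing Flink option):
{code:java}
// Hypothetical option for illustration only; key name and default are guesses.
public static final ConfigOption<Boolean> USE_SYSTEM_ENV_PYTHONPATH =
        ConfigOptions.key("python.use-system-env-pythonpath")
                .booleanType()
                .defaultValue(true)
                .withDescription(
                        "Whether the system PYTHONPATH environment variable is "
                                + "appended to the PYTHONPATH of the Python workers.");
{code}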



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29483) flink python udf arrow in thread mode bug

2022-09-30 Thread jackylau (Jira)
jackylau created FLINK-29483:


 Summary: flink python udf arrow in thread mode bug
 Key: FLINK-29483
 URL: https://issues.apache.org/jira/browse/FLINK-29483
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.15.2, 1.16.0
Reporter: jackylau
 Fix For: 1.16.0, 1.17.0, 1.15.3
 Attachments: image-2022-09-30-17-03-05-005.png

!image-2022-09-30-17-03-05-005.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29749) flink info command should support dynamic properties

2022-10-24 Thread jackylau (Jira)
jackylau created FLINK-29749:


 Summary: flink info command should support dynamic properties
 Key: FLINK-29749
 URL: https://issues.apache.org/jira/browse/FLINK-29749
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.17.0
Reporter: jackylau
 Fix For: 1.17.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29934) maven-assembly-plugin 2.4 makes the assembly plugin able to find the xml in the flink-clients module

2022-11-08 Thread jackylau (Jira)
jackylau created FLINK-29934:


 Summary: maven-assembly-plugin 2.4 makes the assembly plugin able to find 
the xml in the flink-clients module
 Key: FLINK-29934
 URL: https://issues.apache.org/jira/browse/FLINK-29934
 Project: Flink
  Issue Type: Bug
  Components: Client / Job Submission
Affects Versions: 1.17.0
Reporter: jackylau
 Fix For: 1.17.0
 Attachments: image-2022-11-08-20-28-00-814.png

!image-2022-11-08-20-28-00-814.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-18208) flink es connector has 2 spelling mistakes

2020-06-09 Thread jackylau (Jira)
jackylau created FLINK-18208:


 Summary: flink es connector has 2 spelling mistakes
 Key: FLINK-18208
 URL: https://issues.apache.org/jira/browse/FLINK-18208
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / ElasticSearch
Affects Versions: 1.10.0
Reporter: jackylau
 Fix For: 1.11.0


The flink es connector has spelling mistakes:

1. es connector 6 contains an Elasticsearch7RequestFactory; it should be 
Elasticsearch6RequestFactory.

2. the es connector 6 and 7 Elasticsearch6DynamicSinkFactory classes have 
inconsistently named methods (validate vs. validateOption).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18213) refactor kafka sql connector to use just one shaded jar, compatible with 0.10.0.2+

2020-06-09 Thread jackylau (Jira)
jackylau created FLINK-18213:


 Summary: refactor kafka sql connector to use just one shaded jar, 
compatible with 0.10.0.2+
 Key: FLINK-18213
 URL: https://issues.apache.org/jira/browse/FLINK-18213
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka
Affects Versions: 1.11.0
Reporter: jackylau
 Fix For: 1.12.0


Flink master (1.12-snapshot) currently supports 0.10/0.11/2.x, with three 
flink-sql-connector shaded jars.

As we all know, the Kafka client is compatible with brokers from 0.10.0.2 
onwards, so we can use the Kafka client 2.x to access broker servers running 
0.10/0.11/2.x.

So we can use just one kafka sql shaded jar.

For this, we should do 2 things:

1) refactor to 1 shaded jar

2) rename the flink-kafka-connector modules' classes that share the same 
qualified names, in case of conflicts such as NoSuchMethod or ClassNotFound 
errors



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18231) flink kafka connector has classes with the same qualified names, which will cause conflicts when a user has all the versions of flink-kafka-connector

2020-06-09 Thread jackylau (Jira)
jackylau created FLINK-18231:


 Summary: flink kafka connector has classes with the same qualified names, 
which will cause conflicts when a user has all the versions of 
flink-kafka-connector
 Key: FLINK-18231
 URL: https://issues.apache.org/jira/browse/FLINK-18231
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka
Affects Versions: 1.11.0
Reporter: jackylau
 Fix For: 1.12.0


 

There are 0.9/0.10/0.11/2.x, four versions of the kafka connector, in 1.11.

There are 0.10/0.11/2.x, three versions of the kafka connector, in 1.12-snapshot.

But the flink kafka connectors contain classes with the same qualified names, 
such as KafkaConsumerThread and Handover, which will cause conflicts when I 
have all the versions of flink-kafka-connector on the classpath.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18236) flink elasticsearch IT test ElasticsearchSinkTestBase.runElasticsearchSink* verification is not right

2020-06-10 Thread jackylau (Jira)
jackylau created FLINK-18236:


 Summary: flink elasticsearch IT test 
ElasticsearchSinkTestBase.runElasticsearchSink* verification is not right
 Key: FLINK-18236
 URL: https://issues.apache.org/jira/browse/FLINK-18236
 Project: Flink
  Issue Type: Bug
  Components: Connectors / ElasticSearch
Affects Versions: 1.10.0
Reporter: jackylau
 Fix For: 1.11.0, 1.12.0


We can see there are different tests:

runElasticsearchSinkTest

runElasticsearchSinkCborTest

runElasticsearchSinkSmileTest

runElasticSearchSinkTest

etc.

All of them use SourceSinkDataTestKit.verifyProducedSinkData(client, index) to 
ensure the correctness of results, but they all use the same index.

That is to say, if the second test's sink fails to send anything, verification 
still passes, because the data written by the first test already satisfies 
verifyProducedSinkData.
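
A hedged sketch of one possible remedy (the index derivation is hypothetical, not existing test code): give each test its own index, so leftover documents cannot satisfy verification.
{code:java}
import java.util.UUID;

// Hedged sketch: a unique index per test run (derivation is hypothetical), so
// documents left over from an earlier test cannot make verification pass.
String index = "es-sink-test-" + UUID.randomUUID();
// ... run the sink under test against this index ...
SourceSinkDataTestKit.verifyProducedSinkData(client, index);
{code}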

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18495) flink filter pushdown can not take effect in the blink planner

2020-07-06 Thread jackylau (Jira)
jackylau created FLINK-18495:


 Summary: flink filter pushdown can not take effect in the blink planner
 Key: FLINK-18495
 URL: https://issues.apache.org/jira/browse/FLINK-18495
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.10.0, 1.11.0
Reporter: jackylau
 Fix For: 1.12.0


When I use filter pushdown, I found that FilterPushDown only takes effect for 
orc/parquet/testFilter when using the *flink planner*.

The cause is what the applyPredicate(List predicates) call receives as 
predicates:

1) *PlannerExpression*: in the flink planner, PushFilterIntoTableSourceScanRule 
uses RexProgramExtractor, whose visitCall returns PlannerExpression.

2) *CallExpression*: in the blink planner, PushFilterIntoTableSourceScanRule 
uses RexNodeExtractor, whose visitCall returns ResolvedExpression, so code that 
matches on the PlannerExpression forms (And, Or, EqualTo, and so on) *can not 
match* it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)