[jira] [Created] (FLINK-24853) Report errors if jobs using iteration have runtime mode set to BATCH

2021-11-10 Thread Yun Gao (Jira)
Yun Gao created FLINK-24853:
---

 Summary: Report errors if jobs using iteration have runtime mode set to BATCH
 Key: FLINK-24853
 URL: https://issues.apache.org/jira/browse/FLINK-24853
 Project: Flink
  Issue Type: Sub-task
  Components: Library / Machine Learning
Affects Versions: 0.1.0
Reporter: Yun Gao
Assignee: Yun Gao






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-24854) StateHandleSerializationTest unit test error

2021-11-10 Thread zlzhang0122 (Jira)
zlzhang0122 created FLINK-24854:
---

 Summary: StateHandleSerializationTest unit test error
 Key: FLINK-24854
 URL: https://issues.apache.org/jira/browse/FLINK-24854
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.14.0
Reporter: zlzhang0122
 Fix For: 1.15.0


StateHandleSerializationTest.ensureStateHandlesHaveSerialVersionUID() will fail 
because RocksDBStateDownloaderTest has an anonymous subclass of 
StreamStateHandle, which is a subtype of StateObject. Since the class is 
anonymous, the assertFalse check fails and the unit test fails with it.
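For context, the failing check can be sketched as follows. The interface and helper below are illustrative stand-ins for Flink's actual StateObject/StreamStateHandle hierarchy, not its real code:

```java
// Sketch: anonymous classes cannot declare a stable serialVersionUID,
// which is why the serialization test rejects anonymous StateObject subtypes.
class AnonymousClassCheck {

    // Stand-in for Flink's StateObject hierarchy.
    interface StateObject extends java.io.Serializable {}

    // True when the object's class is anonymous, the condition the test asserts against.
    static boolean isAnonymousStateObject(StateObject obj) {
        return obj.getClass().isAnonymousClass();
    }

    public static void main(String[] args) {
        // An anonymous subclass, like the one created in RocksDBStateDownloaderTest:
        StateObject anonymous = new StateObject() {};
        System.out.println(isAnonymousStateObject(anonymous)); // prints true
    }
}
```

One possible fix along these lines is to replace the anonymous class in the test with a named static class that declares a serialVersionUID.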





[jira] [Created] (FLINK-24855) Source Coordinator Thread already exists. There should never be more than one thread driving the actions of a Source Coordinator.

2021-11-10 Thread WangMinChao (Jira)
WangMinChao created FLINK-24855:
---

 Summary: Source Coordinator Thread already exists. There should 
never be more than one thread driving the actions of a Source Coordinator.
 Key: FLINK-24855
 URL: https://issues.apache.org/jira/browse/FLINK-24855
 Project: Flink
  Issue Type: Bug
  Components: API / Core, Runtime / Coordination
Affects Versions: 1.13.3
 Environment: flink 1.13.3

flink-cdc 2.1
Reporter: WangMinChao


 

When I am synchronizing large tables, I have the following problem:

2021-11-09 20:33:04,222 INFO 
com.ververica.cdc.connectors.mysql.source.enumerator.MySqlSourceEnumerator [] - 
Assign split MySqlSnapshotSplit\{tableId=db.table, splitId='db.table:383', 
splitKeyType=[`id` BIGINT NOT NULL], splitStart=[9798290], splitEnd=[9823873], 
highWatermark=null} to subtask 1
2021-11-09 20:33:04,248 INFO 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Triggering 
checkpoint 101 (type=CHECKPOINT) @ 1636461183945 for job 
3cee105643cfee78b80cd0a41143b5c1.
2021-11-09 20:33:10,734 ERROR 
org.apache.flink.runtime.util.FatalExitExceptionHandler [] - FATAL: Thread 
'SourceCoordinator-Source: mysqlcdc-source -> Sink: kafka-sink' produced an 
uncaught exception. Stopping the process...
java.lang.Error: Source Coordinator Thread already exists. There should never 
be more than one thread driving the actions of a Source Coordinator. Existing 
Thread: Thread[SourceCoordinator-Source: mysqlcdc-source -> Sink: 
kafka-sink,5,main]
at 
org.apache.flink.runtime.source.coordinator.SourceCoordinatorProvider$CoordinatorExecutorThreadFactory.newThread(SourceCoordinatorProvider.java:119)
 [flink-dist_2.12-1.13.3.jar:1.13.3]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.(ThreadPoolExecutor.java:619)
 ~[?:1.8.0_191]
at 
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:932) 
~[?:1.8.0_191]
at 
java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1025)
 ~[?:1.8.0_191]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167) 
~[?:1.8.0_191]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[?:1.8.0_191]
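The trace shows the thread pool asking for a replacement worker (processWorkerExit, then addWorker, then newThread) after the first coordinator thread died, and the factory refusing because one thread was already created. The sketch below is a simplified version of such a single-thread guard; the class names are illustrative and this is not Flink's actual SourceCoordinatorProvider implementation:

```java
import java.util.concurrent.ThreadFactory;

// Sketch of a ThreadFactory that allows exactly one thread, similar in
// spirit to the guard that produced the Error above.
class SingleThreadFactoryDemo {

    static class SingleThreadFactory implements ThreadFactory {
        private Thread thread;

        @Override
        public synchronized Thread newThread(Runnable r) {
            if (thread != null) {
                // Raised when the executor asks for a replacement worker,
                // e.g. after the first coordinator thread has died.
                throw new Error("Source Coordinator Thread already exists.");
            }
            thread = new Thread(r, "SourceCoordinator-demo");
            return thread;
        }
    }

    public static void main(String[] args) {
        SingleThreadFactory factory = new SingleThreadFactory();
        factory.newThread(() -> {});
        try {
            factory.newThread(() -> {}); // second request reproduces the failure
        } catch (Error e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

This suggests the underlying cause is the first coordinator thread terminating (e.g. due to an earlier exception), after which the pool's attempt to respawn a worker hits the guard.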





Re: [VOTE] FLIP-187: Adaptive Batch Job Scheduler

2021-11-10 Thread Till Rohrmann
+1 (binding)

Cheers,
Till

On Wed, Nov 10, 2021 at 6:37 AM Zhu Zhu  wrote:

> +1 (binding)
>
> Thanks,
> Zhu
>
> Jingsong Li  于2021年11月9日周二 下午4:01写道:
>
> > +1 (non-binding)
> >
> > This greatly enhances the ease of use of batch jobs. (Parallelism
> > setting is really a challenge)
> >
> > (non-binding: I'm not familiar with runtime, just a general
> understanding)
> >
> > Best,
> > Jingsong
> >
> > On Tue, Nov 9, 2021 at 3:48 PM David Morávek  wrote:
> > >
> > > Thanks for the FLIP, this is going to be a great improvement to the
> batch
> > > execution.
> > >
> > > +1 (non-binding)
> > >
> > > Best,
> > > D.
> > >
> > > On Tue, Nov 9, 2021 at 1:05 AM Guowei Ma  wrote:
> > >
> > > > Thanks for your excellent FLIP!
> > > > +1 binding
> > > >
> > > > Lijie Wang 于2021年11月8日 周一下午2:53写道:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I would like to start the vote for FLIP-187[1], which proposes to
> > > > introduce
> > > > > a new scheduler to Flink: adaptive batch job scheduler. The new
> > scheduler
> > > > > can automatically decide parallelisms of job vertices for batch
> jobs,
> > > > > according to the size of data volume each vertex needs to process.
> > This
> > > > > FLIP was discussed in [2].
> > > > >
> > > > > The vote will last at least 72 hours (Nov 11th 12:00 GMT) unless
> > there is
> > > > > an objection or insufficient votes.
> > > > >
> > > > > [1]
> > > > >
> > > > >
> > > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-187%3A+Adaptive+Batch+Job+Scheduler
> > > > >
> > > > > [2]
> > > > >
> > > > >
> > > >
> >
> https://lists.apache.org/thread.html/r32bd8f521daf4446fb70366bfabc4603bf3f56831c04e65bee6709aa%40%3Cdev.flink.apache.org%3E
> > > > >
> > > > > Best,
> > > > >
> > > > > Lijie
> > > > >
> > > > --
> > > > Best,
> > > > Guowei
> > > >
> >
> >
> >
> > --
> > Best, Jingsong Lee
> >
>


[jira] [Created] (FLINK-24856) Upgrade SourceReaderTestBase to Use Junit 5

2021-11-10 Thread Yufei Zhang (Jira)
Yufei Zhang created FLINK-24856:
---

 Summary: Upgrade SourceReaderTestBase to Use Junit 5
 Key: FLINK-24856
 URL: https://issues.apache.org/jira/browse/FLINK-24856
 Project: Flink
  Issue Type: Technical Debt
  Components: Test Infrastructure, Tests
Reporter: Yufei Zhang
Assignee: Martijn Visser
 Fix For: 1.15.0


We should update to the latest version of JUnit 5, v5.8.1.





[jira] [Created] (FLINK-24857) Upgrade SourceReaderTestBase to Use Junit 5

2021-11-10 Thread Yufei Zhang (Jira)
Yufei Zhang created FLINK-24857:
---

 Summary: Upgrade SourceReaderTestBase to Use Junit 5
 Key: FLINK-24857
 URL: https://issues.apache.org/jira/browse/FLINK-24857
 Project: Flink
  Issue Type: Technical Debt
  Components: Test Infrastructure
Reporter: Yufei Zhang


Currently SourceReaderTestBase uses JUnit 4; it needs to be upgraded to JUnit 5 
so that new tests can use it.

It affects two subclasses.

 
 * org/apache/flink/connector/kafka/source/reader/KafkaSourceReaderTest.java
 * org/apache/flink/connector/base/source/reader/SourceReaderBaseTest.java

 

These two classes need to be fixed as well.





Re: [VOTE] FLIP-187: Adaptive Batch Job Scheduler

2021-11-10 Thread Zakelly Lan
+1 (non-binding)

It will be a great improvement.

Best,
Zakelly

On Wed, Nov 10, 2021 at 5:24 PM Till Rohrmann  wrote:

> +1 (binding)
>
> Cheers,
> Till
>
> On Wed, Nov 10, 2021 at 6:37 AM Zhu Zhu  wrote:
>
> > +1 (binding)
> >
> > Thanks,
> > Zhu
> >
> > Jingsong Li  于2021年11月9日周二 下午4:01写道:
> >
> > > +1 (non-binding)
> > >
> > > This greatly enhances the ease of use of batch jobs. (Parallelism
> > > setting is really a challenge)
> > >
> > > (non-binding: I'm not familiar with runtime, just a general
> > understanding)
> > >
> > > Best,
> > > Jingsong
> > >
> > > On Tue, Nov 9, 2021 at 3:48 PM David Morávek  wrote:
> > > >
> > > > Thanks for the FLIP, this is going to be a great improvement to the
> > batch
> > > > execution.
> > > >
> > > > +1 (non-binding)
> > > >
> > > > Best,
> > > > D.
> > > >
> > > > On Tue, Nov 9, 2021 at 1:05 AM Guowei Ma 
> wrote:
> > > >
> > > > > Thanks for your excellent FLIP!
> > > > > +1 binding
> > > > >
> > > > > Lijie Wang 于2021年11月8日 周一下午2:53写道:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I would like to start the vote for FLIP-187[1], which proposes to
> > > > > introduce
> > > > > > a new scheduler to Flink: adaptive batch job scheduler. The new
> > > scheduler
> > > > > > can automatically decide parallelisms of job vertices for batch
> > jobs,
> > > > > > according to the size of data volume each vertex needs to
> process.
> > > This
> > > > > > FLIP was discussed in [2].
> > > > > >
> > > > > > The vote will last at least 72 hours (Nov 11th 12:00 GMT) unless
> > > there is
> > > > > > an objection or insufficient votes.
> > > > > >
> > > > > > [1]
> > > > > >
> > > > > >
> > > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-187%3A+Adaptive+Batch+Job+Scheduler
> > > > > >
> > > > > > [2]
> > > > > >
> > > > > >
> > > > >
> > >
> >
> https://lists.apache.org/thread.html/r32bd8f521daf4446fb70366bfabc4603bf3f56831c04e65bee6709aa%40%3Cdev.flink.apache.org%3E
> > > > > >
> > > > > > Best,
> > > > > >
> > > > > > Lijie
> > > > > >
> > > > > --
> > > > > Best,
> > > > > Guowei
> > > > >
> > >
> > >
> > >
> > > --
> > > Best, Jingsong Lee
> > >
> >
>


[jira] [Created] (FLINK-24858) TypeSerializer version mismatch during eager restore

2021-11-10 Thread Fabian Paul (Jira)
Fabian Paul created FLINK-24858:
---

 Summary: TypeSerializer version mismatch during eager restore
 Key: FLINK-24858
 URL: https://issues.apache.org/jira/browse/FLINK-24858
 Project: Flink
  Issue Type: Bug
  Components: API / Type Serialization System
Affects Versions: 1.13.3, 1.14.0, 1.15.0
Reporter: Fabian Paul


Currently, some of our TypeSerializer snapshots assume information about the 
binary layout of the actual data rather than only holding information about the 
TypeSerializer.

Multiple users ran into this problem, e.g. 
https://lists.apache.org/thread/4q5q7wp0br96op6p7f695q2l8lz4wfzx
{quote}This manifests itself when state is restored eagerly (for example, 
operator state) but the user doesn't register the state in their 
initializeState/open, and then a checkpoint happens.
The result is that we will have elements serialized according to an old binary 
layout, but our serializer snapshot declares a new version which indicates that 
the elements are written with a new binary layout.
The next restore will fail.
{quote}
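The failure mode can be sketched with a toy serializer whose declared snapshot version, not the bytes themselves, decides how elements are read. All names and layouts below are illustrative, not Flink's TypeSerializerSnapshot API: old-layout bytes paired with a snapshot that already reports the new version break the next restore.

```java
import java.io.*;

// Illustration of the mismatch: the snapshot's declared version decides how
// elements are deserialized. If old-layout bytes are kept while the snapshot
// starts reporting a new version, the next restore misreads them.
class LayoutMismatchDemo {

    // v1 layout: a bare int. v2 layout (hypothetical): a length prefix, then the int.
    static byte[] writeV1(int value) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new DataOutputStream(bos).writeInt(value);
        return bos.toByteArray();
    }

    static int read(byte[] bytes, int snapshotVersion) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        if (snapshotVersion >= 2) {
            in.readInt(); // v2 expects a length prefix that v1 bytes do not have
        }
        return in.readInt(); // with v1 bytes and snapshotVersion=2, this fails or misreads
    }

    public static void main(String[] args) throws IOException {
        byte[] oldBytes = writeV1(7);
        System.out.println(read(oldBytes, 1)); // correct under the matching version
        try {
            read(oldBytes, 2); // old bytes, new declared version
        } catch (IOException e) {
            System.out.println("restore failed: " + e.getClass().getSimpleName());
        }
    }
}
```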





[jira] [Created] (FLINK-24859) Document new File formats

2021-11-10 Thread Etienne Chauchot (Jira)
Etienne Chauchot created FLINK-24859:


 Summary: Document new File formats
 Key: FLINK-24859
 URL: https://issues.apache.org/jira/browse/FLINK-24859
 Project: Flink
  Issue Type: Technical Debt
  Components: Documentation
Reporter: Etienne Chauchot


The project recently introduced new formats: _BulkFormat_ and _StreamFormat_ 
interfaces. 

There are already implementations of these formats (Hive, Parquet, ORC, and 
TextLine) that need to be documented.





[jira] [Created] (FLINK-24860) Fix the wrong position mappings in the Python UDTF

2021-11-10 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-24860:


 Summary: Fix the wrong position mappings in the Python UDTF
 Key: FLINK-24860
 URL: https://issues.apache.org/jira/browse/FLINK-24860
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.13.3, 1.12.5
Reporter: Huang Xingbo
Assignee: Huang Xingbo
 Fix For: 1.12.6, 1.13.4


The failed example:
{code:python}
@udtf(result_types=[DataTypes.STRING(), DataTypes.STRING()])
def StoTraceMqSourcePlugUDTF(s: str):
    import json
    try:
        data = json.loads(s)
    except Exception as e:
        return None
    source_code = "trace"
    try:
        shipment_no = data['shipMentNo']
    except Exception as e:
        return None
    yield source_code, shipment_no

class StoTraceFindNameUDTF(TableFunction):
    def eval(self, shipment_no):
        yield shipment_no, shipment_no

sto_trace_find_name = udtf(StoTraceFindNameUDTF(),
                           result_types=[DataTypes.STRING(), DataTypes.STRING()])

# self.env.set_parallelism(1)
self.t_env.create_temporary_system_function(
    "StoTraceMqSourcePlugUDTF", StoTraceMqSourcePlugUDTF)
self.t_env.create_temporary_system_function(
    "sto_trace_find_name", sto_trace_find_name)
source_table = self.t_env.from_elements(
    [('{"shipMentNo":"84210186879"}',)], ['biz_context'])
# self.t_env.execute_sql(source_table)
self.t_env.register_table("source_table", source_table)

t = self.t_env.sql_query(
    "SELECT biz_context, source_code, shipment_no FROM source_table "
    "LEFT JOIN LATERAL TABLE(StoTraceMqSourcePlugUDTF(biz_context)) as "
    "T(source_code, shipment_no) ON TRUE")
self.t_env.register_table("Table2", t)
t = self.t_env.sql_query(
    "SELECT source_code, shipment_no, shipment_name, shipment_type FROM "
    "Table2 LEFT JOIN LATERAL TABLE(sto_trace_find_name(shipment_no)) as "
    "T(shipment_name, shipment_type) ON TRUE")
print(t.to_pandas())
{code}
In the failed example, the input arguments of the second Python table function 
have the wrong position mapping.






[jira] [Created] (FLINK-24861) Flink MySQL lookup cache update for empty hit

2021-11-10 Thread Gaurav Miglani (Jira)
Gaurav Miglani created FLINK-24861:
--

 Summary: Flink MySQL lookup cache update for empty hit
 Key: FLINK-24861
 URL: https://issues.apache.org/jira/browse/FLINK-24861
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC
Affects Versions: 1.14.0
Reporter: Gaurav Miglani


Ideally, in case of a cache miss for a key, or when a null value is fetched for 
the key, the key shouldn't be cached.
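A minimal sketch of the proposed behavior: skip the cache write when the lookup returns nothing, so the key is re-queried on the next access. All names here are illustrative and do not reflect the JDBC connector's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: only cache non-null lookup results, so an empty hit
// is looked up again once the row exists in the database.
class LookupCacheDemo {
    private final Map<String, String> cache = new HashMap<>();

    String lookup(String key, Map<String, String> database) {
        String cached = cache.get(key);
        if (cached != null) {
            return cached;
        }
        String value = database.get(key);
        if (value != null) {        // cache only real hits
            cache.put(key, value);
        }
        return value;               // null results are not cached
    }

    public static void main(String[] args) {
        LookupCacheDemo demo = new LookupCacheDemo();
        Map<String, String> db = new HashMap<>();
        System.out.println(demo.lookup("k", db));  // miss: null, and not cached
        db.put("k", "v");                          // the row appears later
        System.out.println(demo.lookup("k", db));  // re-queried and now found
    }
}
```

If the null result were cached instead, the second lookup would keep returning null even after the row exists.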





[jira] [Created] (FLINK-24862) The user-defined Hive UDAF/UDTF cannot be used normally in Hive dialect

2021-11-10 Thread xiangqiao (Jira)
xiangqiao created FLINK-24862:
-

 Summary: The user-defined Hive UDAF/UDTF cannot be used normally in Hive dialect
 Key: FLINK-24862
 URL: https://issues.apache.org/jira/browse/FLINK-24862
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.14.0, 1.13.0
Reporter: xiangqiao
 Attachments: image-2021-11-10-20-55-11-988.png

Here are two questions:
1. First question: I added a unit test in HiveDialectITCase to reproduce it:
{code:java}
@Test
public void testTemporaryFunctionUDAF() throws Exception {
    // create temp function
    tableEnv.executeSql(
            String.format(
                    "create temporary function temp_count as '%s'",
                    GenericUDAFCount.class.getName()));
    String[] functions = tableEnv.listUserDefinedFunctions();
    assertArrayEquals(new String[] {"temp_count"}, functions);
    // call the function
    tableEnv.executeSql("create table src(x int)");
    tableEnv.executeSql("insert into src values (1),(-1)").await();
    assertEquals(
            "[+I[2]]",
            queryResult(tableEnv.sqlQuery("select temp_count(x) from src")).toString());
    // switch DB and the temp function can still be used
    tableEnv.executeSql("create database db1");
    tableEnv.useDatabase("db1");
    assertEquals(
            "[+I[2]]",
            queryResult(tableEnv.sqlQuery("select temp_count(x) from `default`.src"))
                    .toString());
    // drop the function
    tableEnv.executeSql("drop temporary function temp_count");
    functions = tableEnv.listUserDefinedFunctions();
    assertEquals(0, functions.length);
    tableEnv.executeSql("drop temporary function if exists foo");
}
{code}
!image-2021-11-10-20-55-11-988.png!





[jira] [Created] (FLINK-24863) Azure agents fail git checkout after agent update

2021-11-10 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-24863:


 Summary: Azure agents fail git checkout after agent update
 Key: FLINK-24863
 URL: https://issues.apache.org/jira/browse/FLINK-24863
 Project: Flink
  Issue Type: Technical Debt
  Components: Build System / CI
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler


After updating the Azure Agents on one of the CI machines, the git checkout is 
now failing due to a mismatch between the git version that we provide and the 
one Azure expects.

I have taken the affected machine offline for the time being while I continue 
to investigate.





[jira] [Created] (FLINK-24864) Release TaskManagerJobMetricGroup with the last slot rather than task

2021-11-10 Thread Roman Khachatryan (Jira)
Roman Khachatryan created FLINK-24864:
-

 Summary: Release TaskManagerJobMetricGroup with the last slot 
rather than task
 Key: FLINK-24864
 URL: https://issues.apache.org/jira/browse/FLINK-24864
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Metrics, Runtime / State Backends
Reporter: Roman Khachatryan
Assignee: Roman Khachatryan
 Fix For: 1.15.0


[https://docs.google.com/document/d/1k5WkWIYzs3n3GYQC76H9BLGxvN3wuq7qUHJuBPR9YX0/edit?usp=sharing]
 





[jira] [Created] (FLINK-24865) Support MATCH_RECOGNIZE in Batch mode

2021-11-10 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-24865:
--

 Summary: Support MATCH_RECOGNIZE in Batch mode
 Key: FLINK-24865
 URL: https://issues.apache.org/jira/browse/FLINK-24865
 Project: Flink
  Issue Type: Sub-task
  Components: Library / CEP
Reporter: Martijn Visser


Currently MATCH_RECOGNIZE only works in Streaming mode. It should also be 
supported in Batch mode.





[jira] [Created] (FLINK-24866) AZP crashed in Post-job: Cache Maven local repo

2021-11-10 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-24866:
-

 Summary: AZP crashed in Post-job: Cache Maven local repo
 Key: FLINK-24866
 URL: https://issues.apache.org/jira/browse/FLINK-24866
 Project: Flink
  Issue Type: Bug
  Components: Build System / Azure Pipelines
Affects Versions: 1.15.0
Reporter: Till Rohrmann
 Fix For: 1.15.0


An AZP build failed while running the "Post-job: Cache Maven local repo" step, 
exiting with code 2:

{code}
Resolved to: maven|Linux|kI+vc4kUoz33JEfRluJAo4vEVFz7aQdIKJJbq3fbuGw=
ApplicationInsightsTelemetrySender will correlate events with X-TFS-Session 
2919f31f-021b-468b-851e-f92f99f5681f
Getting a pipeline cache artifact with one of the following fingerprints:
Fingerprint: `maven|Linux|kI+vc4kUoz33JEfRluJAo4vEVFz7aQdIKJJbq3fbuGw=`
There is a cache miss.
tar: f202add2a23c497f93e0ceff83df8823_archive.tar: Wrote only 6144 of 10240 
bytes
tar: Error is not recoverable: exiting now
ApplicationInsightsTelemetrySender correlated 1 events with X-TFS-Session 
2919f31f-021b-468b-851e-f92f99f5681f
##[error]Process returned non-zero exit code: 2
{code}

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=26271&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=85df99a3-dd32-4a6c-8fa0-7c375f4cbc3a&l=212





[jira] [Created] (FLINK-24867) E2e tests take longer than the maximum 310 minutes on AZP

2021-11-10 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-24867:
-

 Summary: E2e tests take longer than the maximum 310 minutes on AZP
 Key: FLINK-24867
 URL: https://issues.apache.org/jira/browse/FLINK-24867
 Project: Flink
  Issue Type: Bug
  Components: Build System / Azure Pipelines, Tests
Affects Versions: 1.13.3
Reporter: Till Rohrmann


The e2e tests took longer than the maximum 310 minutes in one AZP run. This 
made the build step fail.

{code}
##[error]The job running on agent Azure Pipelines 9 ran longer than the maximum 
time of 310 minutes. For more information, see 
https://go.microsoft.com/fwlink/?linkid=2077134
Agent: Azure Pipelines 9
Started: Today at 09:25
Duration: 5h 10m 11s
{code}

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=26257&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee





Re: [DISCUSS] FLIP-188: Introduce Built-in Dynamic Table Storage

2021-11-10 Thread Eron Wright
Jingsong, regarding the LogStore abstraction, I understand that you want to
retain some flexibility as the implementation evolves.  It makes sense that
the abstract interfaces would be @Internal for now.  Would you kindly
ensure the minimal extensibility is in place, so that the Pulsar dev
community may hack on a prototype implementation?

I believe this is important for maintaining the perception that Flink
doesn't unduly favor Kafka.

-Eron

On Tue, Nov 9, 2021 at 6:53 PM Jingsong Li  wrote:

> Hi all,
>
> I have started the voting thread [1]. Please cast your vote there or
> ask additional questions here.
>
> [1] https://lists.apache.org/thread/v3fzx0p6n2jogn86sptzr30kr3yw37sq
>
> Best,
> Jingsong
>
> On Mon, Nov 1, 2021 at 5:41 PM Jingsong Li  wrote:
> >
> > Hi Till,
> >
> > Thanks for your suggestion.
> >
> > At present, we do not want users to use other storage implementations,
> > which will undoubtedly require us to propose interfaces and APIs with
> > compatibility guarantees, which would complicate our design. And some
> > designs are constantly changing; we will constantly adjust according
> > to the needs of end users.
> >
> > However, this does not prevent us from proposing some internal
> > interfaces, such as ManagedTableStorageProvider you said, which can
> > make our code more robust and testable. However, these interfaces will
> > not be public, which means that we have no compatibility burden.
> >
> > Best,
> > Jingsong
> >
> > On Mon, Nov 1, 2021 at 3:57 PM Till Rohrmann 
> wrote:
> > >
> > > Hi Kurt,
> > >
> > > Thanks a lot for the detailed explanation. I do see that implementing
> this
> > > feature outside of Flink will be a bigger effort because we probably
> have
> > > to think more about the exact interfaces and contracts. On the other
> hand,
> > > I can also imagine that users might want to use different storage
> > > implementations (e.g. Pulsar instead of Kafka for the changelog
> storage) at
> > > some point.
> > >
> > > Starting with a feature branch and keeping this question in mind is
> > > probably a good compromise. Getting this feature off the ground in
> order to
> > > evaluate it with users is likely more important than thinking of all
> > > possible storage implementations and how to arrange the code. In case
> we
> > > should split it, maybe we need something like a
> ManagedTableStorageProvider
> > > that encapsulates the logic where to store the managed tables.
> > >
> > > Looking forward to this feature and the improvements it will add to
> Flink's
> > > SQL usability :-)
> > >
> > > Cheers,
> > > Till
> > >
> > > On Mon, Nov 1, 2021 at 2:46 AM Kurt Young  wrote:
> > >
> > > > Hi Till,
> > > >
> > > > We have discussed the possibility of putting this FLIP into another
> > > > repository offline
> > > > with Stephan and Timo. This looks similar to another ongoing
> > > > effort which is trying
> > > > to put all connectors outside the Flink core repository.
> > > > to put all connectors outside the Flink core repository.
> > > >
> > > > From the motivation and scope of this FLIP, it's quite different from
> > > > current connectors in
> > > > some aspects. What we are trying to offer is some kind of built-in
> storage,
> > > > or we can call it
> > > > internal/managed tables, compared with current connectors, they kind
> of
> > > > express external
> > > > tables of Flink SQL. Functionality-wise, this managed table would
> have more
> > > > ability than
> > > > all these connectors, since we controlled the implementation of such
> > > > storage. Thus this table
> > > > storage will interact with lots of SQL components, like metadata
> handling,
> > > > optimization, execution,
> > > > etc.
> > > >
> > > > However we do see some potential benefits if we choose to put it
> outside
> > > > Flink:
> > > > - We may achieve more rapid development speed and maybe more frequent
> > > > release.
> > > > - Force us to think really clearly about the interfaces it should be,
> > > > because we don't have
> > > > the shortcut to modify core & connector codes all at the same time.
> > > >
> > > > But we also can't ignore the overhead:
> > > > - We almost need everything that is discussed in the splitting
> connectors
> > > > thread.
> > > > - We have to create lots more interfaces than TableSource/TableSink
> to make
> > > > it just work in the first
> > > > place, e.g. interfaces to express such tables should be managed by
> Flink,
> > > > interfaces to express the
> > > > physical capability of the storage then it can be bridged to SQL
> optimizer
> > > > and executor.
> > > > - If we create lots of interfaces with only one implementation, that
> sounds
> > > > overengineering to me.
> > > >
> > > > Combining the pros and cons above, what we are trying to do is
> firstly
> > > > implement it in a feature branch,
> > > > and also keep good engineering and design in mind. At some point we
> > > > re-evaluate the decision whether
> > > > to put it inside or outside the Flink core. What do you think?
> > > >
> > > > Best,
> > > > Kurt
> > > >

[jira] [Created] (FLINK-24868) Use custom serialization for storing checkpoint metadata in CompletedCheckpointStore

2021-11-10 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-24868:


 Summary: Use custom serialization for storing checkpoint metadata 
in CompletedCheckpointStore
 Key: FLINK-24868
 URL: https://issues.apache.org/jira/browse/FLINK-24868
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Coordination
Reporter: Dawid Wysakowicz


We are using Java serialization for storing {{CompletedCheckpoint}} in 
{{CompletedCheckpointStore}}. This makes maintaining backwards compatibility of 
stored entries hard, even between minor versions. Maintaining this kind of 
backwards compatibility is required for ever considering rolling upgrades.

In particular, we already have {{MetadataSerializer}} for storing checkpoint 
metadata in a backwards-compatible way.
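A common approach to the custom serialization proposed here is to write an explicit version number ahead of the payload and dispatch on it when reading, so old entries stay readable across releases. The sketch below only illustrates the scheme; the class and field names are made up and are not the proposed Flink interfaces.

```java
import java.io.*;

// Sketch of version-prefixed serialization: the reader dispatches on the
// stored version, so the format can evolve without breaking old entries.
class VersionedSerialization {
    static final int VERSION = 2;

    static byte[] write(long checkpointId) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(VERSION);       // explicit version, unlike Java serialization
        out.writeLong(checkpointId); // illustrative payload field
        return bos.toByteArray();
    }

    static long read(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        int version = in.readInt();
        switch (version) {
            case 1:
            case 2:
                return in.readLong(); // per-version read paths as the format evolves
            default:
                throw new IOException("Unknown version " + version);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(read(write(42L))); // round-trips through the format
    }
}
```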





Re: [ANNOUNCE] Documentation now available at nightlies.apache.org

2021-11-10 Thread Chesnay Schepler

A redirect from ci.apache.org to nightlies.apache.org has been set up.

On 05/11/2021 04:44, Yun Tang wrote:

Hi Chesnay,

It seems that the redirection has not been completed. As the old documentation hasn't 
been updated for days, shall we add a warning on top of the documentation, like the 
current warning "This documentation is for an out-of-date version of Apache Flink.", 
to tell people to go to the new location?

Best
Yun Tang

From: Chesnay Schepler 
Sent: Friday, September 10, 2021 13:32
To: dev@flink.apache.org ; Leonard Xu 
Subject: Re: [ANNOUNCE] Documentation now available at nightlies.apache.org

A redirection will be setup by infra at some point.

On 10/09/2021 05:23, Leonard Xu wrote:

Thanks Chesnay for the migration work.

Should we add a redirection for the old documentation site: 
https://ci.apache.org/flink/flink-docs-master/  to make
it redirect to the new one: 
https://nightlies.apache.org/flink/flink-docs-master/ ?

The bookmark in users’ browsers may still be the old one; I googled "flink 
documents", which also returned the old one.
And the old one won’t be updated and will be outdated soon.

Best,
Leonard


在 2021年9月6日,17:11,Chesnay Schepler  写道:

Website has been updated to point to nightlies.apache.org as well.

On 03/09/2021 08:03, Chesnay Schepler wrote:

The migration is pretty much complete and the documentation is now available at 
nightlies.apache.org .

Please click around a bit and check if anything is broken.

If no issues are reported by the end of today I will update the links on the 
website.

On 01/09/2021 10:11, Chesnay Schepler wrote:

We are in the final steps of migrating the documentation to the new buildbot 
setup.

Because of that the documentation currently available at ci.apache.org will NOT 
be updated until further notice because the old builders have been deactivated 
while we iron out kinks in the new ones.

I will keep you updated on the progress.







Re: [ANNOUNCE] Documentation now available at nightlies.apache.org

2021-11-10 Thread Xintong Song
That's perfect. Thanks for taking care of this, Chesnay.

Thank you~

Xintong Song



On Thu, Nov 11, 2021 at 6:58 AM Chesnay Schepler  wrote:

> A redirect from ci.apache.org to nightlies.apache.org has been set up.
>
> On 05/11/2021 04:44, Yun Tang wrote:
> > Hi Chesnay,
> >
> > It seems that the redirection has not been completed. As the old
> documentation hasn't been updated for days, shall we add a warning on top of
> the documentation, like the current warning of "This documentation is for an
> out-of-date version of Apache Flink." to tell people to go to the new location?
> >
> > Best
> > Yun Tang
> > 
> > From: Chesnay Schepler 
> > Sent: Friday, September 10, 2021 13:32
> > To: dev@flink.apache.org ; Leonard Xu <
> xbjt...@gmail.com>
> > Subject: Re: [ANNOUNCE] Documentation now available at
> nightlies.apache.org
> >
> > A redirection will be setup by infra at some point.
> >
> > On 10/09/2021 05:23, Leonard Xu wrote:
> >> Thanks Chesnay for the migration work.
> >>
> >> Should we add a redirection for the old documentation site:
> https://ci.apache.org/flink/flink-docs-master/  to make
> >> it redirect to the new one:
> https://nightlies.apache.org/flink/flink-docs-master/ ?
> >>
> >> The bookmark in users’ browser should still be the old one, I googled
> "flink documents" which also returned the old one.
> >> And the old one won’t be updated and would be outdated soon.
> >>
> >> Best,
> >> Leonard
> >>
> >>> 在 2021年9月6日,17:11,Chesnay Schepler  写道:
> >>>
> >>> Website has been updated to point to nightlies.apache.org as well.
> >>>
> >>> On 03/09/2021 08:03, Chesnay Schepler wrote:
>  The migration is pretty much complete and the documentation is now
> available at nightlies.apache.org .
> 
>  Please click around a bit and check if anything is broken.
> 
>  If no issues are reported by the end of today I will update the links
> on the website.
> 
>  On 01/09/2021 10:11, Chesnay Schepler wrote:
> > We are in the final steps of migrating the documentation to the new
> buildbot setup.
> >
> > Because of that the documentation currently available at
> ci.apache.org will NOT be updated until further notice because the old
> builders have been deactivated while we iron out kinks in the new ones.
> >
> > I will keep you updated on the progress.
> >
> >
>
>


[jira] [Created] (FLINK-24869) flink-core should be provided in flink-file-sink-common

2021-11-10 Thread Konstantin Gribov (Jira)
Konstantin Gribov created FLINK-24869:
-

 Summary: flink-core should be provided in flink-file-sink-common
 Key: FLINK-24869
 URL: https://issues.apache.org/jira/browse/FLINK-24869
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.14.0
Reporter: Konstantin Gribov


For example, {{flink-connector-files}} brings in {{flink-core}} with {{compile}} 
scope via {{flink-file-sink-common}}.








Re: [VOTE] FLIP-188 Introduce Built-in Dynamic Table Storage

2021-11-10 Thread Yufei Zhang
Hi, 

+1 (non-binding)

Very interesting design. I saw a lot of discussion on the generic interface 
design, good to know it will address extensibility.

Cheers,
Yufei


On 2021/11/10 02:51:55 Jingsong Li wrote:
> Hi everyone,
> 
> Thanks for all the feedback so far. Based on the discussion[1] we seem
> to have consensus, so I would like to start a vote on FLIP-188 for
> which the FLIP has now also been updated[2].
> 
> The vote will last for at least 72 hours (Nov 13th 3:00 GMT) unless
> there is an objection or insufficient votes.
> 
> [1] https://lists.apache.org/thread/tqyn1cro5ohl3c3fkjb1zvxbo03sofn7
> [2] 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-188%3A+Introduce+Built-in+Dynamic+Table+Storage
> 
> Best,
> Jingsong
> 


Re: [ANNOUNCE] Documentation now available at nightlies.apache.org

2021-11-10 Thread Leonard Xu
Nice!
Thanks Chesnay for the continuous effort.

> On Nov 11, 2021, at 06:58, Chesnay Schepler wrote:
> 
> A redirect from ci.apache.org to nightlies.apache.org has been set up.
> 
> On 05/11/2021 04:44, Yun Tang wrote:
>> Hi Chesnay,
>> 
>> It seems that the redirection has not been completed. As the old documentation 
>> hasn't been updated for days, shall we add a warning on top of the documentation, 
>> like the current warning "This documentation is for an out-of-date version of 
>> Apache Flink.", to tell people to go to the new location?
>> 
>> Best
>> Yun Tang
>> 
>> From: Chesnay Schepler 
>> Sent: Friday, September 10, 2021 13:32
>> To: dev@flink.apache.org ; Leonard Xu 
>> 
>> Subject: Re: [ANNOUNCE] Documentation now available at nightlies.apache.org
>> 
>> A redirection will be setup by infra at some point.
>> 
>> On 10/09/2021 05:23, Leonard Xu wrote:
>>> Thanks Chesnay for the migration work.
>>> 
>>> Should we add a redirection for the old documentation site: 
>>> https://ci.apache.org/flink/flink-docs-master/  to make
>>> it redirect to the new one: 
>>> https://nightlies.apache.org/flink/flink-docs-master/ ?
>>> 
>>> The bookmarks in users’ browsers likely still point to the old one; I googled 
>>> "flink documents", which also returned the old one.
>>> And the old one won’t be updated and will soon be outdated.
>>> 
>>> Best,
>>> Leonard
>>> 
 On Sep 6, 2021, at 17:11, Chesnay Schepler wrote:
 
 Website has been updated to point to nightlies.apache.org as well.
 
 On 03/09/2021 08:03, Chesnay Schepler wrote:
> The migration is pretty much complete and the documentation is now 
> available at nightlies.apache.org .
> 
> Please click around a bit and check if anything is broken.
> 
> If no issues are reported by the end of today I will update the links on 
> the website.
> 
> On 01/09/2021 10:11, Chesnay Schepler wrote:
>> We are in the final steps of migrating the documentation to the new 
>> buildbot setup.
>> 
>> Because of that the documentation currently available at ci.apache.org 
>> will NOT be updated until further notice because the old builders have 
>> been deactivated while we iron out kinks in the new ones.
>> 
>> I will keep you updated on the progress.
>> 
>> 
> 
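
For readers wondering what the redirect mentioned in this thread looks like mechanically, a server-side rule of this shape would do it — purely a sketch; the thread does not show ASF infra's actual configuration, so the paths and mechanism here are assumptions:

```nginx
# Hypothetical nginx rule: permanently redirect old ci.apache.org doc
# paths to the corresponding path on nightlies.apache.org.
location /flink/ {
    return 301 https://nightlies.apache.org$request_uri;
}
```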



Re: [DISCUSS] FLIP-188: Introduce Built-in Dynamic Table Storage

2021-11-10 Thread Jingsong Li
Hi Eron,

There is a POC LogStore abstraction: [1].

However, our current focus is not on an abstract log store, because it
is a very complex system. We can't pin down all requirements and
abstractions at the beginning, such as whether to use the log store as
the WAL of the file store. The file store and log store may also
collaborate more closely, so we can't yet put forward an interface
design with a compatibility commitment.

I believe that when the MVP comes out, it will be much clearer. Then
we will consider the extensibility of the log store.

On the other hand, I think we can also have some communication in the
implementation process, and try to use Pulsar as the log store too.

[1] 
https://github.com/JingsongLi/flink/blob/table_storage/flink-table/flink-table-storage/src/main/java/org/apache/flink/table/storage/logstore/LogStoreFactory.java
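
For readers who don't want to open the POC link, a minimal sketch of what such a pluggable log-store factory could look like — this is an illustration only, not the interface from the linked branch; all names and signatures here are assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class LogStoreSketch {
    // Hypothetical change-log storage abstraction; the real POC interface
    // in the linked branch may differ in names and signatures.
    interface LogStore extends AutoCloseable {
        void append(byte[] record);   // write one change-log record
        long endOffset();             // next offset to be written
    }

    // Each backing system (Kafka, Pulsar, ...) would provide its own factory.
    interface LogStoreFactory {
        LogStore create(String tableName, Map<String, String> options);
    }

    // A trivial in-memory implementation, useful only for tests.
    static final class InMemoryLogStoreFactory implements LogStoreFactory {
        @Override
        public LogStore create(String tableName, Map<String, String> options) {
            return new LogStore() {
                private final List<byte[]> records = new ArrayList<>();
                @Override public void append(byte[] record) { records.add(record); }
                @Override public long endOffset() { return records.size(); }
                @Override public void close() { records.clear(); }
            };
        }
    }

    public static void main(String[] args) throws Exception {
        try (LogStore store = new InMemoryLogStoreFactory().create("demo", Map.of())) {
            store.append(new byte[]{1});
            store.append(new byte[]{2});
            System.out.println(store.endOffset()); // prints 2
        }
    }
}
```

A Pulsar-backed prototype would implement the same factory interface, which is the kind of extensibility point Eron asks about below.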

Best,
Jingsong

On Thu, Nov 11, 2021 at 12:57 AM Eron Wright
 wrote:
>
> Jingsong, regarding the LogStore abstraction, I understand that you want to
> retain some flexibility as the implementation evolves.  It makes sense that
> the abstract interfaces would be @Internal for now.  Would you kindly
> ensure the minimal extensibility is in place, so that the Pulsar dev
> community may hack on a prototype implementation?
>
> I believe this is important for maintaining the perception that Flink
> doesn't unduly favor Kafka.
>
> -Eron
>
> On Tue, Nov 9, 2021 at 6:53 PM Jingsong Li  wrote:
>
> > Hi all,
> >
> > I have started the voting thread [1]. Please cast your vote there or
> > ask additional questions here.
> >
> > [1] https://lists.apache.org/thread/v3fzx0p6n2jogn86sptzr30kr3yw37sq
> >
> > Best,
> > Jingsong
> >
> > On Mon, Nov 1, 2021 at 5:41 PM Jingsong Li  wrote:
> > >
> > > Hi Till,
> > >
> > > Thanks for your suggestion.
> > >
> > > At present, we do not want users to plug in other storage implementations,
> > > since that would require us to propose interfaces and APIs with a
> > > compatibility guarantee, which would complicate our design. Also, some
> > > designs are still changing; we will keep adjusting according to the
> > > needs of end users.
> > >
> > > However, this does not prevent us from proposing some internal
> > > interfaces, such as ManagedTableStorageProvider you said, which can
> > > make our code more robust and testable. However, these interfaces will
> > > not be public, which means that we have no compatibility burden.
> > >
> > > Best,
> > > Jingsong
> > >
> > > On Mon, Nov 1, 2021 at 3:57 PM Till Rohrmann 
> > wrote:
> > > >
> > > > Hi Kurt,
> > > >
> > > > Thanks a lot for the detailed explanation. I do see that implementing this
> > > > feature outside of Flink will be a bigger effort because we probably have
> > > > to think more about the exact interfaces and contracts. On the other hand,
> > > > I can also imagine that users might want to use different storage
> > > > implementations (e.g. Pulsar instead of Kafka for the changelog storage) at
> > > > some point.
> > > >
> > > > Starting with a feature branch and keeping this question in mind is
> > > > probably a good compromise. Getting this feature off the ground in order to
> > > > evaluate it with users is likely more important than thinking of all
> > > > possible storage implementations and how to arrange the code. In case we
> > > > should split it, maybe we need something like a ManagedTableStorageProvider
> > > > that encapsulates the logic where to store the managed tables.
> > > >
> > > > Looking forward to this feature and the improvements it will add to Flink's
> > > > SQL usability :-)
> > > >
> > > > Cheers,
> > > > Till
> > > >
> > > > On Mon, Nov 1, 2021 at 2:46 AM Kurt Young  wrote:
> > > >
> > > > > Hi Till,
> > > > >
> > > > > We have discussed the possibility of putting this FLIP into another
> > > > > repository offline with Stephan and Timo. This looks similar to another
> > > > > ongoing effort which is trying to put all connectors outside the Flink
> > > > > core repository.
> > > > >
> > > > > From the motivation and scope of this FLIP, it's quite different from
> > > > > current connectors in some aspects. What we are trying to offer is some
> > > > > kind of built-in storage, or we could call them internal/managed tables;
> > > > > compared with them, current connectors express external tables of
> > > > > Flink SQL. Functionality-wise, this managed table would have more
> > > > > ability than all these connectors, since we control the implementation
> > > > > of such storage. Thus this table storage will interact with lots of SQL
> > > > > components, like metadata handling, optimization, execution, etc.
> > > > >
> > > > > However we do see some potential benefits if we choose to put it
> > > > > outside Flink:
> > > > > - We may achieve more rapid development speed and maybe more frequent
> > > > >   releases.
> > > > > - Force us to think really c

[jira] [Created] (FLINK-24870) Cannot cast "java.util.Date" to "java.time.Instant"

2021-11-10 Thread wangbaohua (Jira)
wangbaohua created FLINK-24870:
--

 Summary: Cannot cast "java.util.Date" to "java.time.Instant"
 Key: FLINK-24870
 URL: https://issues.apache.org/jira/browse/FLINK-24870
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.13.1
Reporter: wangbaohua


        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:582)
        at 
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.executeRestore(StreamTask.java:562)
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:647)
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:537)
        at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:759)
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.util.FlinkRuntimeException: 
org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
compiled. This is a bug. Please file an issue.
        at 
org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:76)
        at 
org.apache.flink.table.data.conversion.StructuredObjectConverter.open(StructuredObjectConverter.java:80)
        ... 11 more
Caused by: 
org.apache.flink.shaded.guava18.com.google.common.util.concurrent.UncheckedExecutionException:
 org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
compiled. This is a bug. Please file an issue.
        at 
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
        at 
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache.get(LocalCache.java:3937)
        at 
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4739)
        at 
org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:74)
        ... 12 more
Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
cannot be compiled. This is a bug. Please file an issue.
        at 
org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:89)
        at 
org.apache.flink.table.runtime.generated.CompileUtils.lambda$compile$1(CompileUtils.java:74)
        at 
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4742)
        at 
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527)
        at 
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2319)
        at 
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2282)
        at 
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2197)
        ... 15 more
Caused by: org.codehaus.commons.compiler.CompileException: Line 120, Column 
101: Cannot cast "java.util.Date" to "java.time.Instant"
        at 
org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:12211)
        at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5051)
        at org.codehaus.janino.UnitCompiler.access$8600(UnitCompiler.java:215)
        at org.codehaus.janino.UnitCompiler$16.visitCast(UnitCompiler.java:4418)
        at org.codehaus.janino.UnitCompiler$16.visitCast(UnitCompiler.java:4396)
        at org.codehaus.janino.Java$Cast.accept(Java.java:4898)
        at org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:4396)
        at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5057)
        at org.codehaus.janino.UnitCompiler.access$8100(UnitCompiler.java:215)
        at 
org.codehaus.janino.UnitCompiler$16$1.visitParenthesizedExpression(UnitCompiler.java:4409)
        at 
org.codehaus.janino.UnitCompiler$16$1.visitParenthesizedExpression(UnitCompiler.java:4400)
        at 
org.codehaus.janino.Java$ParenthesizedExpression.accept(Java.java:4924)
        at 
org.codehaus.janino.UnitCompiler$16.visitLvalue(UnitCompiler.java:4400)
        at 
org.codehaus.janino.UnitCompiler$16.visitLvalue(UnitCompiler.java:4396)
        at org.codehaus.janino.Java$Lvalue.accept(Java.java:4148)
        at org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:4396)
        at 
org.codehaus.janino.UnitCompiler.compileGetValue(UnitCompiler.java:5662)
        at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:5182)
        at org.codehaus.janino.UnitCompiler.access$9100(UnitCompiler.java:215)
        at 
org.codehaus.janino.UnitCompiler$16.visitMethodInvocation(UnitCompiler.java:4423)
        at 
org.codehaus.
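
For context, the conversion itself is straightforward in plain Java: java.util.Date cannot be cast to java.time.Instant, but Date#toInstant() (available since Java 8) performs it, which is presumably what the generated converter needs to emit instead of a cast. The snippet below is illustrative only, not the planner's actual fix:

```java
import java.time.Instant;
import java.util.Date;

public class DateToInstant {
    // Replaces the failing cast with the conversion method:
    // (Instant) date  -->  date.toInstant()
    static Instant convert(Date date) {
        return date.toInstant();
    }

    public static void main(String[] args) {
        Date d = new Date(0L);           // the epoch
        System.out.println(convert(d));  // 1970-01-01T00:00:00Z
    }
}
```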

[jira] [Created] (FLINK-24871) Flink SQL hive reports IndexOutOfBoundsException when using trim in where clause

2021-11-10 Thread Liu (Jira)
Liu created FLINK-24871:
---

 Summary: Flink SQL hive reports IndexOutOfBoundsException when 
using trim in where clause
 Key: FLINK-24871
 URL: https://issues.apache.org/jira/browse/FLINK-24871
 Project: Flink
  Issue Type: Improvement
Reporter: Liu


The problem can be reproduced as follows:

In the class HiveDialectITCase, define the test testTrimError:

 
{code:java}
@Test
public void testTrimError() {
tableEnv.executeSql("create table src (x int,y string)");
tableEnv.executeSql("select * from src where trim(y) != ''");
} {code}
Executing it will throw the following exception.

 
{panel}
java.lang.IndexOutOfBoundsException: index (2) must be less than size (1)

    at 
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:1345)
    at 
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:1327)
    at 
com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:43)
    at 
org.apache.calcite.rex.RexCallBinding.getOperandType(RexCallBinding.java:136)
    at 
org.apache.calcite.sql.type.OrdinalReturnTypeInference.inferReturnType(OrdinalReturnTypeInference.java:40)
    at 
org.apache.calcite.sql.type.SqlTypeTransformCascade.inferReturnType(SqlTypeTransformCascade.java:56)
    at 
org.apache.calcite.sql.type.SqlTypeTransformCascade.inferReturnType(SqlTypeTransformCascade.java:56)
    at org.apache.calcite.sql.SqlOperator.inferReturnType(SqlOperator.java:482)
    at org.apache.calcite.rex.RexBuilder.deriveReturnType(RexBuilder.java:283)
    at org.apache.calcite.rex.RexBuilder.makeCall(RexBuilder.java:257)
    at 
org.apache.flink.table.planner.delegation.hive.SqlFunctionConverter.visitCall(SqlFunctionConverter.java:107)
    at 
org.apache.flink.table.planner.delegation.hive.SqlFunctionConverter.visitCall(SqlFunctionConverter.java:56)
    at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
    at org.apache.calcite.rex.RexShuttle.visitList(RexShuttle.java:158)
    at 
org.apache.flink.table.planner.delegation.hive.SqlFunctionConverter.visitCall(SqlFunctionConverter.java:107)
    at 
org.apache.flink.table.planner.delegation.hive.SqlFunctionConverter.visitCall(SqlFunctionConverter.java:56)
    at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.genFilterRelNode(HiveParserCalcitePlanner.java:914)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.genFilterRelNode(HiveParserCalcitePlanner.java:1082)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.genFilterLogicalPlan(HiveParserCalcitePlanner.java:1099)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.genLogicalPlan(HiveParserCalcitePlanner.java:2736)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.logicalPlan(HiveParserCalcitePlanner.java:284)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParserCalcitePlanner.genLogicalPlan(HiveParserCalcitePlanner.java:272)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParser.analyzeSql(HiveParser.java:290)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParser.processCmd(HiveParser.java:238)
    at 
org.apache.flink.table.planner.delegation.hive.HiveParser.parse(HiveParser.java:208)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:735)
    at 
org.apache.flink.connectors.hive.HiveDialectITCase.testTrimError(HiveDialectITCase.java:366)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentR

Re: [ANNOUNCE] Documentation now available at nightlies.apache.org

2021-11-10 Thread David Morávek
Also big thanks to Gavin for setting the redirects up! ;)

Best,
D.

On Thu, Nov 11, 2021 at 2:34 AM Leonard Xu  wrote:

> Nice!
> Thanks Chesnay for the continuous effort.
>
> > On Nov 11, 2021, at 06:58, Chesnay Schepler wrote:
> >
> > A redirect from ci.apache.org to nightlies.apache.org has been set up.
> >
> > On 05/11/2021 04:44, Yun Tang wrote:
> >> Hi Chesnay,
> >>
> >> It seems that the redirection has not been completed. As the old
> >> documentation hasn't been updated for days, shall we add a warning on top of
> >> the documentation, like the current warning "This documentation is for an
> >> out-of-date version of Apache Flink.", to tell people to go to the new location?
> >>
> >> Best
> >> Yun Tang
> >> 
> >> From: Chesnay Schepler 
> >> Sent: Friday, September 10, 2021 13:32
> >> To: dev@flink.apache.org ; Leonard Xu <
> xbjt...@gmail.com>
> >> Subject: Re: [ANNOUNCE] Documentation now available at
> nightlies.apache.org
> >>
> >> A redirection will be setup by infra at some point.
> >>
> >> On 10/09/2021 05:23, Leonard Xu wrote:
> >>> Thanks Chesnay for the migration work.
> >>>
> >>> Should we add a redirection for the old documentation site:
> https://ci.apache.org/flink/flink-docs-master/  to make
> >>> it redirect to the new one:
> https://nightlies.apache.org/flink/flink-docs-master/ ?
> >>>
> >>> The bookmarks in users’ browsers likely still point to the old one; I googled
> >>> "flink documents", which also returned the old one.
> >>> And the old one won’t be updated and will soon be outdated.
> >>>
> >>> Best,
> >>> Leonard
> >>>
>  On Sep 6, 2021, at 17:11, Chesnay Schepler wrote:
> 
>  Website has been updated to point to nightlies.apache.org as well.
> 
>  On 03/09/2021 08:03, Chesnay Schepler wrote:
> > The migration is pretty much complete and the documentation is now
> available at nightlies.apache.org .
> >
> > Please click around a bit and check if anything is broken.
> >
> > If no issues are reported by the end of today I will update the
> links on the website.
> >
> > On 01/09/2021 10:11, Chesnay Schepler wrote:
> >> We are in the final steps of migrating the documentation to the new
> buildbot setup.
> >>
> >> Because of that the documentation currently available at
> ci.apache.org will NOT be updated until further notice because the old
> builders have been deactivated while we iron out kinks in the new ones.
> >>
> >> I will keep you updated on the progress.
> >>
> >>
> >
>
>