+1, it is really nice to have the N-Ary stream operator, which is meaningful in
some scenarios.
best,
Zhijiang
--
From:Jingsong Li
Send Time:2020 Jan. 10 (Fri.) 11:00
To:dev
Subject:Re: [VOTE] FLIP-92: Add N-Ary Stream Operator in Flink
Rui Li created FLINK-15547:
--
Summary: Support access to Hive avro table
Key: FLINK-15547
URL: https://issues.apache.org/jira/browse/FLINK-15547
Project: Flink
Issue Type: Task
Components:
non-binding +1
On Fri, Jan 10, 2020 at 9:11 AM Zhijiang
wrote:
> +1, it is really nice to have the N-Ary stream operator, which is
> meaningful in some scenarios.
>
> best,
> Zhijiang
>
>
> --
> From:Jingsong Li
> Send Time:2020 Jan
Hi Jingsong Lee
You are right that the connectors don't validate data types either at the moment.
We seem to lack a mechanism to validate properties [1], data types, etc. for
CREATE TABLE (a rough sketch of such a validation hook follows this message).
[1] https://issues.apache.org/jira/browse/FLINK-15509
*Best Regards,*
*Zhenghua Gao*
On Fri, Jan 10, 2020 at 2:59
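Below is a minimal, hypothetical sketch of the kind of CREATE TABLE validation
hook discussed in the message above. The CreateTableValidator interface is not
an existing Flink API; it only illustrates where checks on WITH (...) properties
and declared column types could plug in.

import java.util.Map;

// Hypothetical sketch only -- not an existing Flink interface.
public interface CreateTableValidator {

    /**
     * Checks a CREATE TABLE statement before the table is registered.
     *
     * @param properties  the WITH (...) properties from the DDL,
     *                    e.g. 'connector.type' -> 'kafka'
     * @param columnTypes column name -> declared SQL type string,
     *                    e.g. amount -> DECIMAL(10, 2)
     * @throws IllegalArgumentException if a property or declared type is unsupported
     */
    void validate(Map<String, String> properties, Map<String, String> columnTypes);
}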
wangsan created FLINK-15548:
---
Summary: Make KeyedCoProcessOperatorWithWatermarkDelay extends
KeyedCoProcessOperator instead of LegacyKeyedCoProcessOperator
Key: FLINK-15548
URL: https://issues.apache.org/jira/browse/FLINK-15548
caojian0613 created FLINK-15549:
---
Summary: integer overflow in
SpillingResettableMutableObjectIterator
Key: FLINK-15549
URL: https://issues.apache.org/jira/browse/FLINK-15549
Project: Flink
Issue Type:
Yun Tang created FLINK-15550:
Summary: testCancelTaskExceptionAfterTaskMarkedFailed failed on
azure
Key: FLINK-15550
URL: https://issues.apache.org/jira/browse/FLINK-15550
Project: Flink
Issue Type:
Leonard Xu created FLINK-15551:
--
Summary: Streaming File Sink s3 end-to-end test FAIL
Key: FLINK-15551
URL: https://issues.apache.org/jira/browse/FLINK-15551
Project: Flink
Issue Type: Bug
Terry Wang created FLINK-15552:
--
Summary: SQL Client can not correctly create kafka table using
--library to indicate a kafka connector directory
Key: FLINK-15552
URL: https://issues.apache.org/jira/browse/FLINK-15552
hailong wang created FLINK-15553:
Summary: Create table ddl support comment after computed column
Key: FLINK-15553
URL: https://issues.apache.org/jira/browse/FLINK-15553
Project: Flink
Issue Type:
Chesnay Schepler created FLINK-15554:
Summary: Bump jetty-util-ajax to 9.3.24
Key: FLINK-15554
URL: https://issues.apache.org/jira/browse/FLINK-15554
Project: Flink
Issue Type: Improvement
hailong wang created FLINK-15555:
Summary: Delete TABLE_OPTIMIZER_REUSE_SOURCE_ENABLED option for
subplaner reuse
Key: FLINK-15555
URL: https://issues.apache.org/jira/browse/FLINK-15555
Project: Flink
hailong wang created FLINK-15556:
Summary: Add a switch for PushProjectIntoTableSourceScanRule
Key: FLINK-15556
URL: https://issues.apache.org/jira/browse/FLINK-15556
Project: Flink
Issue Type:
Thanks a lot for starting this discussion, Patrick! I think it is a very
good idea to move Flink's Docker image more under the jurisdiction of the
Flink PMC and to make releasing new Docker images part of Flink's
release process (not saying that we cannot release new Docker images
independent of
Chris created FLINK-15557:
-
Summary: Cannot connect to Azure Event Hub/Kafka since Jan 5th
2020. Kafka version issue
Key: FLINK-15557
URL: https://issues.apache.org/jira/browse/FLINK-15557
Project: Flink
Thanks everyone for the prompt feedback. Please see my response below.
> In Postgres, TIME/TIMESTAMP WITH TIME ZONE has java.time.Instant
semantics, and should be mapped to Flink's TIME/TIMESTAMP WITH LOCAL TIME
ZONE
Zhenghua, you are right that pg's 'timestamp with timezone' should be
tr
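A hedged illustration of that mapping (a minimal sketch, not code from the
thread): Postgres TIMESTAMP WITH TIME ZONE identifies an instant on the UTC
time line, which in Flink's type system corresponds to TIMESTAMP WITH LOCAL
TIME ZONE bridged to java.time.Instant.

import java.time.Instant;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

public class PgTimestampTzMapping {
    public static void main(String[] args) {
        // Postgres timestamptz keeps microsecond precision, hence precision 6 on the Flink side.
        DataType flinkType =
                DataTypes.TIMESTAMP_WITH_LOCAL_TIME_ZONE(6).bridgedTo(Instant.class);
        // Prints the logical type, e.g. TIMESTAMP(6) WITH LOCAL TIME ZONE
        System.out.println(flinkType);
    }
}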
Hi Zhenghua,
For external systems with a schema, I think the schema information is
available most of the time and should be the single source of truth for
programmatically mapping column precision via Flink catalogs, to minimize
users' effort in redundantly re-creating schemas and to avoid any human error.
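A minimal sketch of that approach, assuming a hypothetical table "orders" with
a NUMERIC column "amount" and a placeholder JDBC connection string (plain JDBC
metadata here, not any particular Flink catalog implementation): precision and
scale come from the external system's own schema and are mapped to a Flink
type, so users never re-declare them by hand.

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

public class JdbcPrecisionLookup {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; any JDBC source exposing metadata works the same way.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/db", "user", "secret")) {
            DatabaseMetaData meta = conn.getMetaData();
            try (ResultSet rs = meta.getColumns(null, "public", "orders", "amount")) {
                if (rs.next()) {
                    int precision = rs.getInt("COLUMN_SIZE");
                    int scale = rs.getInt("DECIMAL_DIGITS");
                    // The external schema, not the user, decides the exact DECIMAL precision.
                    DataType flinkType = DataTypes.DECIMAL(precision, scale);
                    System.out.println(flinkType);
                }
            }
        }
    }
}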
vinoyang created FLINK-15558:
Summary: Bump Elasticsearch version from 7.3.2 to 7.5.1 for es7
connector
Key: FLINK-15558
URL: https://issues.apache.org/jira/browse/FLINK-15558
Project: Flink
Issue Type: