Xintong Song created FLINK-24069:
Summary:
IgnoreInFlightDataITCase.testIgnoreInFlightDataDuringRecovery hangs on azure
Key: FLINK-24069
URL: https://issues.apache.org/jira/browse/FLINK-24069
Project:
Yun Gao created FLINK-24068:
---
Summary: CheckpointBarrierHandler may skip the markAlignmentStart
for alignment-with-timeout checkpoint
Key: FLINK-24068
URL: https://issues.apache.org/jira/browse/FLINK-24068
Yun Gao created FLINK-24067:
---
Summary: CheckpointBarrierHandler may skip the markAlignmentStart
for alignment-with-timeout checkpoint
Key: FLINK-24067
URL: https://issues.apache.org/jira/browse/FLINK-24067
liuzhuo created FLINK-24066:
---
Summary: Provides a new stop entry for Kubernetes session mode
Key: FLINK-24066
URL: https://issues.apache.org/jira/browse/FLINK-24066
Project: Flink
Issue Type: Improvement
Yun Gao created FLINK-24065:
---
Summary: Upgrade the TwoPhaseCommitSink to support empty
transaction after finished
Key: FLINK-24065
URL: https://issues.apache.org/jira/browse/FLINK-24065
Project: Flink
Thomas Weise created FLINK-24064:
Summary: HybridSource recovery from savepoint fails
Key: FLINK-24064
URL: https://issues.apache.org/jira/browse/FLINK-24064
Project: Flink
Issue Type: Bug
Hi Jark and Jingsong,
Thanks for your reply! Since modifying the SQL type system needs a lot of
work, I agree that we should postpone this until we get more requests from
users.
For my own case, according to the domain knowledge, I think a precision of
38 would be enough (though the fields were d
Aitozi created FLINK-24063:
--
Summary: Reconsider the behavior of ClusterEntrypoint#startCluster
failure handler
Key: FLINK-24063
URL: https://issues.apache.org/jira/browse/FLINK-24063
Project: Flink
Dian Fu created FLINK-24062:
---
Summary: Exception encountered during timer serialization in
Python DataStream API
Key: FLINK-24062
URL: https://issues.apache.org/jira/browse/FLINK-24062
Project: Flink
Hi Xingcan,
As a workaround, can we convert large decimal to varchar?
If Flink SQL wants to support large decimals, we should investigate how
other big data systems and databases handle them. As Jark said, this needs a lot of work.
Best,
Jingsong Lee
On Tue, Aug 31, 2021 at 11:16 AM Jark Wu wrote:
>
> Hi Xingcan, Timo
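Jingsong's workaround above (carrying oversized decimals as VARCHAR) is lossless as long as the string is only parsed back where full precision is available. A minimal sketch in plain Python rather than Flink SQL, with an invented example value:

```python
from decimal import Decimal

# A value wider than DECIMAL(38) allows, e.g. from an Oracle NUMBER column
# (41 integer digits here; the literal is made up for illustration).
big = Decimal("12345678901234567890123456789012345678901.5")

# The workaround: carry the value as a string (VARCHAR) through the pipeline.
as_varchar = str(big)

# Parsing the string back recovers the exact value: the round trip is lossless.
assert Decimal(as_varchar) == big
print(as_varchar)
```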
Xintong Song created FLINK-24061:
Summary: RMQSourceITCase.testAckFailure fails on azure
Key: FLINK-24061
URL: https://issues.apache.org/jira/browse/FLINK-24061
Project: Flink
Issue Type: Bug
Hi Xingcan, Timo,
Yes, the flink-cdc-connector and the JDBC connector also don't support larger
precision or precision-less decimals.
However, we haven't received any user reports about this problem.
Maybe it is not very common to have a precision higher than 38, or no
precision at all.
I think it makes sense to support this
Aitozi created FLINK-24060:
--
Summary: Move ZooKeeperUtilTest to right place
Key: FLINK-24060
URL: https://issues.apache.org/jira/browse/FLINK-24060
Project: Flink
Issue Type: Technical Debt
Brian Zhou created FLINK-24059:
--
Summary: SourceReaderTestBase should allow NUM_SPLITS to be
overridden in implementation
Key: FLINK-24059
URL: https://issues.apache.org/jira/browse/FLINK-24059
Project:
Xintong Song created FLINK-24058:
Summary:
TaskSlotTableImplTest.testMarkSlotActiveDeactivatesSlotTimeout fails on azure
Key: FLINK-24058
URL: https://issues.apache.org/jira/browse/FLINK-24058
Project
James Kim created FLINK-24057:
-
Summary: Flink SQL client "Hadoop is not in the
classpath/dependencies" error even though Hadoop S3 file system plugin was added
Key: FLINK-24057
URL: https://issues.apache.org/jira/browse
Hi Timo,
Though it's an extreme case, I still think this is a hard blocker when
ingesting data from an RDBMS (or other systems supporting
large-precision numbers).
The tricky part is that users can declare numeric types without any
precision and scale restrictions in RDBMS (e.g., NUMBER in O
Aitozi created FLINK-24056:
--
Summary: Remove unused ZooKeeperUtilityFactory
Key: FLINK-24056
URL: https://issues.apache.org/jira/browse/FLINK-24056
Project: Flink
Issue Type: Bug
Component
Fabian Paul created FLINK-24055:
---
Summary: Deprecate FlinkKafkaConsumer
Key: FLINK-24055
URL: https://issues.apache.org/jira/browse/FLINK-24055
Project: Flink
Issue Type: Improvement
Timo Walther created FLINK-24054:
Summary: Let SinkUpsertMaterializer emit +U instead of only +I
Key: FLINK-24054
URL: https://issues.apache.org/jira/browse/FLINK-24054
Project: Flink
Issue T
刘方奇 created FLINK-24053:
---
Summary: stop with savepoint timeout
Key: FLINK-24053
URL: https://issues.apache.org/jira/browse/FLINK-24053
Project: Flink
Issue Type: Bug
Components: Runtime / Checkpointing
Moses created FLINK-24052:
-
Summary: Flink SQL reads S3 bucket data.
Key: FLINK-24052
URL: https://issues.apache.org/jira/browse/FLINK-24052
Project: Flink
Issue Type: Improvement
Component
Fabian Paul created FLINK-24051:
---
Summary: Make consumer.group-id optional for KafkaSource
Key: FLINK-24051
URL: https://issues.apache.org/jira/browse/FLINK-24051
Project: Flink
Issue Type: Improvement
Ingo Bürk created FLINK-24050:
-
Summary: Support primary keys on metadata columns
Key: FLINK-24050
URL: https://issues.apache.org/jira/browse/FLINK-24050
Project: Flink
Issue Type: Improvement
I think Flink 1.10.x used Travis. So I agree with Tison's proposal. +1 for
removing the "@flinkbot run travis" from the command documentation.
cc @Chesnay Schepler
Cheers,
Till
On Sun, Aug 29, 2021 at 4:48 AM tison wrote:
> Hi,
>
> I can still see "@flinkbot run travis" in flinkbot's toast bu
Hi Xingcan,
in theory there should be no hard blocker for supporting this. The
implementation should be flexible enough at most locations. We just
adopted 38 from the Blink code base which adopted it from Hive.
However, this could be a breaking change for existing pipelines and we
would need
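For context on the 38 limit Timo mentions: Hive (and systems that followed it, including Blink) stores DECIMAL values in at most 128 bits, and 38 is the largest number of decimal digits a signed 128-bit integer can always represent. A quick check of that arithmetic (my reading of where the constant comes from; the thread itself only says it was adopted from Hive):

```python
import math

# A signed 128-bit integer spans +/-(2^127 - 1). The number of full decimal
# digits it can always hold is floor(127 * log10(2)) = 38.
max_precision = math.floor(127 * math.log10(2))
print(max_precision)  # → 38
```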