moran created FLINK-22271:
-
Summary: FlinkSQL Read Hive (parquet file) field does not exist
Key: FLINK-22271
URL: https://issues.apache.org/jira/browse/FLINK-22271
Project: Flink
Issue Type: Bug
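For context on the kind of query this report describes, here is a minimal,
hypothetical Flink SQL sketch of reading a Hive-backed parquet table through
a Hive catalog; the catalog name, conf directory, and table names are
illustrative assumptions, not details taken from the ticket:

  -- hypothetical setup: register a Hive catalog and query a parquet-backed table
  CREATE CATALOG myhive WITH (
    'type' = 'hive',
    'hive-conf-dir' = '/opt/hive/conf'   -- assumed path
  );
  USE CATALOG myhive;
  -- the report concerns a field lookup failing on a read like this
  SELECT id, name FROM my_db.my_parquet_table;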
Guowei Ma created FLINK-22270:
-
Summary: Python test pipeline no output for 900 seconds
Key: FLINK-22270
URL: https://issues.apache.org/jira/browse/FLINK-22270
Project: Flink
Issue Type: Bug
Guowei Ma created FLINK-22269:
-
Summary:
JobMasterStopWithSavepointITCase.throwingExceptionOnCallbackWithRestartsShouldSimplyRestartInSuspend
failed.
Key: FLINK-22269
URL: https://issues.apache.org/jira/browse/FLINK-22269
Guowei Ma created FLINK-22268:
-
Summary:
JobMasterStopWithSavepointITCase.testRestartCheckpointCoordinatorIfStopWithSavepointFails
fail because of "Not all required tasks are currently running."
Key: FLINK-22268
URL: https://issues.apache.org/jira/browse/FLINK-22268
Carl created FLINK-22267:
Summary: Savepoint an application with an upsert-kafka source, then
restart the application from the savepoint; state is not recovered.
Key: FLINK-22267
URL: https://issues.apache.org/jira/browse/FLINK-22267
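A minimal sketch of the kind of upsert-kafka source this report refers to;
the table name, topic, and connector options are assumptions for
illustration, not taken from the ticket:

  -- hypothetical upsert-kafka source; restoring a savepoint of a job reading
  -- such a table is the scenario in which state was reportedly not recovered
  CREATE TABLE users (
    user_id   BIGINT,
    user_name STRING,
    PRIMARY KEY (user_id) NOT ENFORCED   -- upsert-kafka requires a primary key
  ) WITH (
    'connector' = 'upsert-kafka',
    'topic' = 'users',
    'properties.bootstrap.servers' = 'localhost:9092',
    'key.format' = 'json',
    'value.format' = 'json'
  );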
Guowei Ma created FLINK-22266:
-
Summary:
JobMasterStopWithSavepointITCase.throwingExceptionOnCallbackWithoutRestartsHelper
fail
Key: FLINK-22266
URL: https://issues.apache.org/jira/browse/FLINK-22266
Project: Flink
BoYi Zhang created FLINK-22265:
--
Summary: Abnormal document display
Key: FLINK-22265
URL: https://issues.apache.org/jira/browse/FLINK-22265
Project: Flink
Issue Type: Improvement
Compo
Hello!
If you have any questions, feel free to contact me at any time.
Looking forward to your reply. Best wishes!
2021-04-14
Fuyao Li created FLINK-22264:
Summary: Fix misleading statement about per-job mode support for
Kubernetes in Concept/Flink Architecture page
Key: FLINK-22264
URL: https://issues.apache.org/jira/browse/FLINK-22264
Hi, Flink dev
Could you share your thoughts about
https://issues.apache.org/jira/browse/FLINK-22164 ?
context:
We expose all Flink metrics to an external system for monitoring and
alerting. However, JobManager metrics only have one variable, which is
not enough to target one job when job is d
Thanks all for this discussion. Looks like there are lots of ideas and
folks that are eager to do things, so let's see how we can get this moving.
My take on this is the following:
There will probably not be one Hybrid source, but possibly multiple ones,
because of different strategies/requiremen
Hi all!
Generally, avoiding API changes in Bug fix versions is the right thing, in
my opinion.
But this case is a bit special, because we are changing something that
never worked properly in the first place.
So we are not breaking a "running thing" here, but making it usable.
So +1 from my side
Cool. Thanks!
Best
Lu
On Mon, Apr 12, 2021 at 11:27 PM Piotr Nowojski
wrote:
> Hi,
>
> Yes. Back-pressure from AsyncOperator should be correctly reported via
> isBackPressured, backPressuredMsPerSecond metrics and by extension in the
> WebUI from 1.13.
>
> Piotrek
>
> Mon, 12 Apr 2021 at 23:17 L
hehuiyuan created FLINK-22263:
-
Summary: Using the TIMESTAMPADD function with a partition value causes
problems when pushing the partition into the TableSource
Key: FLINK-22263
URL: https://issues.apache.org/jira/browse/FLINK-22263
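To make the scenario concrete, a hypothetical query of the shape the summary
describes: a partition column compared against a TIMESTAMPADD expression,
which the planner may try to push into the TableSource. Table and column
names below are illustrative assumptions:

  -- hypothetical partitioned table with a string partition column 'dt';
  -- the partition predicate uses TIMESTAMPADD, the case the report covers
  SELECT *
  FROM my_partitioned_table
  WHERE dt = CAST(TIMESTAMPADD(DAY, -1, CURRENT_DATE) AS STRING);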
Andrea Peruffo created FLINK-22262:
--
Summary: Flink on Kubernetes ConfigMaps are created without
OwnerReference
Key: FLINK-22262
URL: https://issues.apache.org/jira/browse/FLINK-22262
Project: Flink
Jark Wu created FLINK-22261:
---
Summary: Python StreamingModeDataStreamTests failed on Azure
Key: FLINK-22261
URL: https://issues.apache.org/jira/browse/FLINK-22261
Project: Flink
Issue Type: Bug
Ingo Bürk created FLINK-22260:
-
Summary: Source schema in CREATE TABLE LIKE statements is not
inferred correctly
Key: FLINK-22260
URL: https://issues.apache.org/jira/browse/FLINK-22260
Project: Flink
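For readers less familiar with the statement in question, a minimal
hypothetical CREATE TABLE ... LIKE example: the derived table's schema
should be inferred from the base table, which is the behaviour the report
says is not handled correctly. Names and connector options are assumptions:

  -- hypothetical base table
  CREATE TABLE base_events (
    id BIGINT,
    payload STRING,
    ts TIMESTAMP(3)
  ) WITH (
    'connector' = 'datagen'
  );

  -- derived table: schema is taken over from base_events, options are replaced
  CREATE TABLE sink_events
  WITH (
    'connector' = 'blackhole'
  )
  LIKE base_events (EXCLUDING OPTIONS);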
Dawid Wysakowicz created FLINK-22259:
Summary: UnalignedCheckpointITCase fails with "Value too large for
header, this indicates that the test is running too long"
Key: FLINK-22259
URL: https://issues.apache.org/jira/browse/FLINK-22259
@Robert We can work around the snapshot limit issue fairly easily; this
limit is imposed per version, so if we modify the version to include the
commit hash this limit does not apply. This should also make it easier
to work with from the Flink side because a commit hash is easier to
copy&paste t
Robert Metzger created FLINK-22258:
--
Summary: Adaptive Scheduler: Show history of rescales in the Web UI
Key: FLINK-22258
URL: https://issues.apache.org/jira/browse/FLINK-22258
Project: Flink
Fabian Paul created FLINK-22257:
---
Summary: Clarify Flink ConfigOptions Usage
Key: FLINK-22257
URL: https://issues.apache.org/jira/browse/FLINK-22257
Project: Flink
Issue Type: Improvement
Fabian Paul created FLINK-22256:
---
Summary: Persist checkpoint type information
Key: FLINK-22256
URL: https://issues.apache.org/jira/browse/FLINK-22256
Project: Flink
Issue Type: Improvement
Till Rohrmann created FLINK-22255:
-
Summary: AdaptiveScheduler improvements
Key: FLINK-22255
URL: https://issues.apache.org/jira/browse/FLINK-22255
Project: Flink
Issue Type: Improvement
Till Rohrmann created FLINK-22254:
-
Summary: Only trigger scale up if the resources have stabilized
Key: FLINK-22254
URL: https://issues.apache.org/jira/browse/FLINK-22254
Project: Flink
Issu
Thanks for creating this proposal Chesnay. I do understand the problem you
want to fix.
What I am wondering is why we don't release flink-shaded more often. Does
the release process cause too much overhead? If this is the case, then we
could look into what is causing the overhead and whether we ca
Hi Chenqin,
The current rationale behind assuming a leadership loss when seeing a
SUSPENDED connection is to assume the worst and to be on the safe side.
Yang Wang is correct. FLINK-10052 [1] has the goal to make the behaviour
configurable. Unfortunately, the community did not have enough time to
Dawid Wysakowicz created FLINK-22253:
Summary: Update backpressure monitoring documentation
Key: FLINK-22253
URL: https://issues.apache.org/jira/browse/FLINK-22253
Project: Flink
Issue Ty
Thanks a lot for your responses.
I didn't know that you can explicitly refer to the timestamped snapshots of
the artifacts. The limitation to the last 2 snapshots means that a push to
flink-shaded can break our main CI? This sounds very fragile to me, given
that the setup itself is probably a bit