Fuyao Li created FLINK-27483:
Summary: Support adding HTTP header for HTTP based Jar fetch
Key: FLINK-27483
URL: https://issues.apache.org/jira/browse/FLINK-27483
Project: Flink
Issue Type: Improvement
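As a rough illustration of the requested feature, the sketch below shows how custom HTTP headers (e.g. an auth token for a private artifact store) could be applied before fetching a jar over HTTP. This is a hypothetical sketch using plain `HttpURLConnection`, not Flink's actual jar-fetch code path; the class and method names (`JarFetchHeaders`, `applyHeaders`) are invented for illustration.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Map;

public class JarFetchHeaders {

    // Apply user-supplied headers (e.g. tokens for a private jar repository)
    // to the connection before the fetch is issued.
    public static void applyHeaders(HttpURLConnection conn, Map<String, String> headers) {
        headers.forEach(conn::setRequestProperty);
    }

    public static void main(String[] args) throws Exception {
        // openConnection() does not contact the server until connect() is called,
        // so headers can be inspected locally.
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/job.jar").openConnection();
        applyHeaders(conn, Map.of("X-Jar-Token", "abc123"));
        System.out.println(conn.getRequestProperty("X-Jar-Token"));
    }
}
```

Note that security-sensitive headers such as `Authorization` may not be readable back via `getRequestProperty` in some JDK implementations, which is why a custom header name is used in the demo.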
Hi,
>> Do you mean to ignore it while processing records, but keep using
`maxBuffersPerChannel` when calculating the availability of the output?
Yes, that is correct.
>> Would it be a big issue if we changed it to check if at least
"overdraft number of buffers are available", where "overdraft
The flink-kubernetes-operator project is only published
via apache/flink-kubernetes-operator on Docker Hub and GitHub Packages.
We do not see obvious advantages in using Docker Hub official images.
Best,
Yang
Xintong Song wrote on Thursday, April 28, 2022 at 19:27:
> I agree with you that doing QA for the imag
Aarsh Shah created FLINK-27482:
--
Summary: Dashboard not showing after installation
Key: FLINK-27482
URL: https://issues.apache.org/jira/browse/FLINK-27482
Project: Flink
Issue Type: Bug
Thanks Lijie and Zhu for creating the proposal.
I want to share some thoughts about Flink cluster operations.
In the production environment, SREs (Site Reliability Engineers)
already have many tools to detect unstable nodes, which can take
system logs/metrics into consideration.
The
Hi Piotr,
Thanks for the comment.
Just to clarify, I am not against the decorative interfaces, but I do think
we should use them with caution. The main argument for adding the methods
to the SourceReader is that these methods are effectively NON-OPTIONAL to
SourceReader impl, i.e. starting from t
Hi
Thanks to Martijn Visser and Piotrek for the feedback. I agree with
ignoring the legacy sources, since they would otherwise affect our design.
Users should use the new Source API as much as possible.
Hi Piotrek, we may still need to discuss whether the
overdraft/reserve/spare should use extra buffers or buffers
in (ex
Hi Steven,
Isn't this redundant to FLIP-182 and FLIP-217? Can't Iceberg just emit
all splits and let FLIP-182/FLIP-217 handle the watermark alignment and
block the splits that are too much into the future? I can see this being an
issue if the existence of too many blocked splits is occupying too
Hi All Contributors and Committers,
This is a first reminder email that travel
assistance applications for ApacheCon NA 2022 are now open!
We will be supporting ApacheCon North America in New Orleans, Louisiana,
on October 3rd through 6th, 2022.
TAC exists to help those that would like to attend
Hi,
Sorry for chipping in so late, but I was OoO for the last two weeks.
Regarding the interfaces, I would be actually against adding those methods
to the base interfaces for the reasons mentioned above. Clogging the base
interface for new users with tons of methods that they do not need, do not
u
Hi fanrui,
> Do you mean don't add the extra buffers? We just use (exclusive buffers *
> parallelism + floating buffers)? The LocalBufferPool will be available when
> (usedBuffers + overdraftBuffers <= exclusiveBuffers * parallelism + floatingBuffers)
> and all subpartitions don't reach the maxBuffersPerChannel
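The availability condition quoted above can be written down as a tiny predicate. This is a sketch of the formula as stated in the email, not Flink's actual `LocalBufferPool` implementation; the parameter names mirror the discussion.

```java
public class BufferAvailability {

    // Availability condition as quoted in the thread: the pool is available
    // while used plus overdraft buffers fit within the pool's total budget
    // (exclusive buffers per channel * parallelism, plus floating buffers).
    static boolean isAvailable(int usedBuffers, int overdraftBuffers,
                               int exclusiveBuffers, int parallelism, int floatingBuffers) {
        return usedBuffers + overdraftBuffers
                <= exclusiveBuffers * parallelism + floatingBuffers;
    }

    public static void main(String[] args) {
        // Example: 2 exclusive buffers per channel, parallelism 4, 8 floating buffers
        // gives a budget of 2 * 4 + 8 = 16 buffers.
        System.out.println(isAvailable(10, 4, 2, 4, 8)); // 14 <= 16 -> true
        System.out.println(isAvailable(15, 4, 2, 4, 8)); // 19 >  16 -> false
    }
}
```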
Monika Hristova created FLINK-27481:
---
Summary: Flink checkpoints are very slow after upgrading from
Flink 1.13.1 to Flink 1.14.3
Key: FLINK-27481
URL: https://issues.apache.org/jira/browse/FLINK-27481
Fabian Paul created FLINK-27480:
---
Summary: KafkaSources sharing the groupId might lead to
InstanceAlreadyExistException warning
Key: FLINK-27480
URL: https://issues.apache.org/jira/browse/FLINK-27480
Project: Flink
Hi Qingsheng, Leonard and Jark,
Thanks for your detailed feedback! However, I have questions about
some of your statements (maybe I didn't get something?).
> Caching actually breaks the semantic of "FOR SYSTEM_TIME AS OF proc_time"
I agree that the semantics of "FOR SYSTEM_TIME AS OF proc_time"
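The semantic concern can be made concrete with a minimal TTL cache: within the TTL window, a lookup returns the value as of the *store* time rather than the current processing time, so the join result can lag behind the external table. This is a hypothetical sketch (class name `TtlLookupCache` and the injected clock are invented for illustration), not the cache design under discussion.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.LongSupplier;

public class TtlLookupCache<K, V> {

    private static final class Entry<V> {
        final V value;
        final long storedAt;
        Entry(V value, long storedAt) { this.value = value; this.storedAt = storedAt; }
    }

    private final long ttlMillis;
    private final LongSupplier clock; // injected so tests can control time
    private final Map<K, Entry<V>> cache = new HashMap<>();

    public TtlLookupCache(long ttlMillis, LongSupplier clock) {
        this.ttlMillis = ttlMillis;
        this.clock = clock;
    }

    public void put(K key, V value) {
        cache.put(key, new Entry<>(value, clock.getAsLong()));
    }

    // Returns the cached value if still within the TTL; null means the caller
    // must query the external system. Note: within the TTL the join sees the
    // row as of the time it was cached, not as of the current proc_time.
    public V get(K key) {
        Entry<V> e = cache.get(key);
        if (e == null || clock.getAsLong() - e.storedAt > ttlMillis) {
            cache.remove(key);
            return null;
        }
        return e.value;
    }
}
```

A lookup that hits this cache at proc_time T returns whatever was stored at some earlier time T' >= T - ttl, which is exactly the deviation from strict "AS OF proc_time" semantics being debated.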
Mason Chen created FLINK-27479:
--
Summary: HybridSource refreshes availability future
Key: FLINK-27479
URL: https://issues.apache.org/jira/browse/FLINK-27479
Project: Flink
Issue Type: Improvement
Hi everyone,
Just wanted to chip in on the discussion of legacy sources: IMHO, we should
not focus too much on improving/adding capabilities for legacy sources. We
want to persuade and push users to use the new Source API. Yes, this means
that there's work required by the end users to port any cus