+1 (binding)
On Thu, Mar 2, 2023 at 1:30 PM Yu Li wrote:
>
> +1 (binding)
>
> Best Regards,
> Yu
>
>
> On Thu, 2 Mar 2023 at 09:53, Jark Wu wrote:
>
> > +1 (binding)
> >
> > Best,
> > Jark
> >
> > > On Mar 2, 2023, at 05:03, Gyula Fóra wrote:
> > >
> > > +1 (binding)
> > >
> > > Gyula
> > >
> > > On Wed, Ma
Hi everyone,
Weihua Hu [1] notified me of a section in Flink's Azure Pipeline
documentation [2] where it's suggested to create PRs against
flink-ci/mirror as a workaround if you don't have a private Azure
Pipeline account and want to run CI with your code changes. Even though
it's a viable solut
Hello Daniel! Quite a while ago, I started porting the Pub/Sub connector
(from an existing PR) to the new source API in the new
flink-connector-gcp-pubsub repository [PR2]. As Martijn mentioned, there
hasn't been a lot of attention on this connector; any community involvement
would be appreciated
+1 (binding)
- Checked the diff between 1.15.3 and 1.15.4-rc1: *OK* (
https://github.com/apache/flink/compare/release-1.15.3...release-1.15.4-rc1)
- AWS SDKv2 version has been bumped to 2.19.14 through FLINK-30633 and
all NOTICE files updated correctly
- Checked release notes: *OK*
- Checked
Matthias Pohl created FLINK-31297:
-
Summary:
FineGrainedSlotManagerTest.testTaskManagerRegistrationDeductPendingTaskManager
unstable when running it a single time
Key: FLINK-31297
URL: https://issues.apache.org/ji
Matthias Pohl created FLINK-31298:
-
Summary:
ConnectionUtilsTest.testFindConnectingAddressWhenGetLocalHostThrows swallows
IllegalArgumentException
Key: FLINK-31298
URL: https://issues.apache.org/jira/browse/FLINK
Hi
Thanks for the feedback from Jingsong and Benchao.
For @Jingsong
> If the user does not cast into a FlinkResultSet, will there be
serious consequences here (RowKind is ignored)?
I agree with you that it's indeed a big deal if users ignore the row kind
when they must know it. One idea that com
Maximilian Michels created FLINK-31299:
--
Summary: PendingRecords metric might not be available
Key: FLINK-31299
URL: https://issues.apache.org/jira/browse/FLINK-31299
Project: Flink
Issu
Hello Ryan,
Unfortunately there's not much shared logic between the two; the clients
have to look fundamentally different since the Pub/Sub Lite client exposes
partitions to the split level for repeatable reads.
I have no objection to this living in the same repo as the Pub/Sub
connector, if this
Sergey Nuyanzin created FLINK-31300:
---
Summary: TRY_CAST fails for constructed types
Key: FLINK-31300
URL: https://issues.apache.org/jira/browse/FLINK-31300
Project: Flink
Issue Type: Bug
lincoln lee created FLINK-31301:
---
Summary: Unsupported nested columns in column list of insert
statement
Key: FLINK-31301
URL: https://issues.apache.org/jira/browse/FLINK-31301
Project: Flink
yuzelin created FLINK-31302:
---
Summary: Split spark modules according to version
Key: FLINK-31302
URL: https://issues.apache.org/jira/browse/FLINK-31302
Project: Flink
Issue Type: Improvement
Márton Balassi created FLINK-31303:
--
Summary: k8s operator should gather job cpu and memory utilization
metrics
Key: FLINK-31303
URL: https://issues.apache.org/jira/browse/FLINK-31303
Project: Flink
Yordan Pavlov created FLINK-31304:
-
Summary: Very slow job start if topic has been used before
Key: FLINK-31304
URL: https://issues.apache.org/jira/browse/FLINK-31304
Project: Flink
Issue Typ
Hi everyone,
This vote thread is now closed. We have reached a consensus with 6
binding votes, 8 non-binding votes, and no vetoes. I will follow up and
apply the changes.
+1s
- Matthias
- Jing
- Weijie
- Weihua
- Yuxia
- Junrui
- Samrat
- Sergey
- Thomas (binding)
- Gyula (binding)
- Jark (binding
Mason Chen created FLINK-31305:
--
Summary: KafkaWriter doesn't wait for errors for in-flight records
before completing flush
Key: FLINK-31305
URL: https://issues.apache.org/jira/browse/FLINK-31305
Project
Hi,
Thanks Kui for driving this FLIP, and thanks all for the informative
discussion.
@Timo
Your suggestion about the naming convention is excellent. Thanks! I was
wondering why you suggested 'scan.idle-timeout' as an exception, instead of
'scan.watermark.idle-timeout'. I must be missing something here.
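For what it's worth, here is a minimal sketch of how the convention reads from
a user's point of view, assuming the options end up being set in the table's
WITH clause; the 'scan.watermark.idle-timeout' key is taken from this thread
and is illustrative of the proposal, not a confirmed final API.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class WatermarkOptionsExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // The 'scan.watermark.*' key below follows the naming convention
        // discussed in this thread; it is not a confirmed API.
        tEnv.executeSql(
                "CREATE TABLE orders ("
                        + "  order_id STRING,"
                        + "  order_time TIMESTAMP(3),"
                        + "  WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND"
                        + ") WITH ("
                        + "  'connector' = 'kafka',"
                        + "  'topic' = 'orders',"
                        + "  'properties.bootstrap.servers' = 'localhost:9092',"
                        + "  'format' = 'json',"
                        + "  'scan.watermark.idle-timeout' = '1 min'"
                        + ")");
    }
}

With the full 'scan.watermark.' prefix, the idle-timeout option groups naturally
with the other watermark-related options, which is why the exception for
'scan.idle-timeout' surprised me.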
Jiang Xin created FLINK-31306:
-
Summary: Add Servable for PipelineModel
Key: FLINK-31306
URL: https://issues.apache.org/jira/browse/FLINK-31306
Project: Flink
Issue Type: Improvement
Co
Hi Danny,
I'm sorry that I'm coming to this thread a little late. It seems that this
will be the last bugfix release of Flink 1.15? If so, I'd like to also
include https://issues.apache.org/jira/browse/FLINK-31272 in this release,
which fixes a serious issue in PyFlink.
Regards,
Dian
On Thu,
Wujunzhe created FLINK-31307:
Summary: RocksDB:java.lang.UnsatisfiedLinkError
Key: FLINK-31307
URL: https://issues.apache.org/jira/browse/FLINK-31307
Project: Flink
Issue Type: Bug
Affects Ve
tanjialiang created FLINK-31308:
---
Summary: JobManager's metaspace out-of-memory when submit a
flinksessionjobs
Key: FLINK-31308
URL: https://issues.apache.org/jira/browse/FLINK-31308
Project: Flink
Jingsong Lee created FLINK-31309:
Summary: Rollback DFS schema if hive sync fail in
HiveCatalog.createTable
Key: FLINK-31309
URL: https://issues.apache.org/jira/browse/FLINK-31309
Project: Flink
Jingsong Lee created FLINK-31310:
Summary: Force clear directory no matter what situation in
HiveCatalog.dropTable
Key: FLINK-31310
URL: https://issues.apache.org/jira/browse/FLINK-31310
Project: Flin
Hi everyone,
This FLIP[1] aims to help connectors avoid overwriting non-target columns
with null values when processing partial column updates. We propose adding
information about the target column list to DynamicTableSink#Context.
FLINK-18726[2] supports inserting statements with specified
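To make the proposal concrete, here is a minimal sketch of how a sink could
consult that information; the getTargetColumns() accessor and its
Optional<int[][]> shape are assumptions based on this discussion, not a
finalized API.

import java.util.Optional;
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.sink.DynamicTableSink;

// A sketch of a partial-update-capable sink. The getTargetColumns() call is
// the proposed (assumed) accessor from this FLIP; everything else is standard
// DynamicTableSink boilerplate.
public class PartialUpdateSinkSketch implements DynamicTableSink {

    @Override
    public ChangelogMode getChangelogMode(ChangelogMode requestedMode) {
        return requestedMode;
    }

    @Override
    public SinkRuntimeProvider getSinkRuntimeProvider(Context context) {
        // Hypothetical: indices of the columns named in the INSERT column list,
        // e.g. INSERT INTO t (a, c) ... -> [[0], [2]].
        Optional<int[][]> targetColumns = context.getTargetColumns();
        targetColumns.ifPresent(columns -> {
            // A connector could restrict writes to these columns instead of
            // overwriting the remaining columns with NULLs.
        });
        throw new UnsupportedOperationException("sketch only");
    }

    @Override
    public DynamicTableSink copy() {
        return new PartialUpdateSinkSketch();
    }

    @Override
    public String asSummaryString() {
        return "partial-update sink sketch";
    }
}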
Hi Samrat/Prabhu
My preliminary question here would be: though YARN is the platform for
Flink, does YARN also run over K8s? Is that the reason why you wanted
the autoscaling logic to be generic but inside the operator itself?
So if the case is that YARN is the resource manager, then the
Jingsong Lee created FLINK-31311:
Summary: Supports Bounded Watermark streaming read
Key: FLINK-31311
URL: https://issues.apache.org/jira/browse/FLINK-31311
Project: Flink
Issue Type: Improve
Jiang Xin created FLINK-31312:
-
Summary: EnableObjectReuse cause different behaviors
Key: FLINK-31312
URL: https://issues.apache.org/jira/browse/FLINK-31312
Project: Flink
Issue Type: Bug
Hi, Shammon,
I took a look at JDBC `ResultSet` and `Statement`. They are
complicated and have many interfaces. Some of the interfaces may not
be very suitable for streaming.
I think maybe we can just implement JDBC for batch/OLAP only. It is
hard to have an integration for JDBC and streaming...
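As a concrete picture of what batch/OLAP-only support means for users, below
is a minimal sketch that sticks to the standard java.sql interfaces; the JDBC
URL, gateway address, and table name are placeholders for illustration, not a
confirmed format.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class BatchQueryExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL pointing at a SQL gateway; the actual scheme and
        // port are whatever the driver ends up defining.
        try (Connection conn =
                        DriverManager.getConnection("jdbc:flink://localhost:8083");
                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery("SELECT word, cnt FROM word_count")) {
            // Plain pull-based iteration fits bounded (batch/OLAP) results well;
            // an unbounded streaming query would never exhaust this loop, which
            // is part of why streaming is hard to map onto plain JDBC.
            while (rs.next()) {
                System.out.println(rs.getString("word") + ": " + rs.getLong("cnt"));
            }
        }
    }
}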
Hi All,
The native implementation of the App mode and session mode does not have
any ReplicaSet. Instead, it just allows the JM to create TM pods on demand.
This is simple and easy in terms of resource creation, but for an
upgrade story, how is this managed? Leaving K8s to manage a replica
Hi, Matthias
Thanks for bringing this discussion.
When I wanted to trigger a CI pipeline, my first thought was to submit a PR
to the flink repo. But considering that the PR was not intended to be merged,
it might interfere with others. So I tried to find out how to run the CI
pipeline without a PR,
Hi all,
Thanks, all. There are more questions, and I will answer them one by one.
@Jark Thanks for your tips. For the first question, I will add more details
in the FLIP, and give a POC [1] so that people can see how I'm currently
implementing these features.
> IIRC, this is the first time we intro
Hi,
Thanks Jingsong. I think implementing JDBC for batch mode first sounds good.
This will simplify the implementation, and we can also leave out the row kind
for now. We will state this in the FLIP and docs; I will update the FLIP.
Best,
Shammon
On Fri, Mar 3, 2023 at 2:36 PM Jingsong Li wrote:
> Hi