Hi Xintong,
I'm also not in favour of option 2; I think that two systems will result in
an administrative burden and a less efficient workflow. I'm also not in
favour of option 3; I think that this will result in first-time
users/contributors not filing their first bug report, user question or
f
Jingsong Lee created FLINK-29735:
Summary: Introduce Metadata tables for table store
Key: FLINK-29735
URL: https://issues.apache.org/jira/browse/FLINK-29735
Project: Flink
Issue Type: New Feature
Jingsong Lee created FLINK-29736:
Summary: Abstract a table interface for both data and metadata
tables
Key: FLINK-29736
URL: https://issues.apache.org/jira/browse/FLINK-29736
Project: Flink
+1 (non-binding) for this candidate
* Built from the source code.
* Verified the signature and checksum
* Ran both streaming/batch jobs on yarn cluster
* The new speculative execution works as expected
Best,
Lijie
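For readers following along, the signature/checksum items in release-candidate checklists like the one above usually reduce to a couple of commands. A minimal, self-contained sketch, using a placeholder file since the real artifact names depend on the release:

```shell
# Stand-in artifact; the real checks run against the downloaded
# release files (the names here are placeholders, not actual artifacts).
echo "release artifact contents" > artifact.tgz

# Checksum verification: generate a SHA-512 sum and check it, mirroring
# the .sha512 files published alongside a release.
sha512sum artifact.tgz > artifact.tgz.sha512
sha512sum -c artifact.tgz.sha512

# Signature verification against a real release would look like:
#   gpg --verify flink-<version>-src.tgz.asc flink-<version>-src.tgz
# after importing the release manager's public key from the project KEYS file.
```

The `-c` flag makes `sha512sum` read the recorded digest and recompute it against the named file, failing with a nonzero exit code on mismatch.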
Yun Tang wrote on Saturday, October 22, 2022 at 15:20:
> +1 (non-binding)
>
>
> *
chenzihao created FLINK-29737:
-
Summary: Support DataGen on waveform function
Key: FLINK-29737
URL: https://issues.apache.org/jira/browse/FLINK-29737
Project: Flink
Issue Type: Improvement
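FLINK-29737 asks for waveform functions in the DataGen connector. As a rough illustration of what waveform test data means (this is plain Python, not the connector's actual API, and the function name is hypothetical):

```python
import math

def sine_wave(amplitude: float, period: float, n: int):
    """Yield n samples of a sine waveform, the kind of shaped test data
    a DataGen waveform function could emit instead of uniform noise."""
    for i in range(n):
        yield amplitude * math.sin(2 * math.pi * i / period)

# Eight samples of one full period: starts at 0, peaks at a quarter period.
samples = [round(s, 6) for s in sine_wave(amplitude=10.0, period=8.0, n=8)]
```

A square or sawtooth wave would follow the same shape, swapping the `sin` call for the corresponding piecewise formula.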
Hi Saurabh,
From the scope of implementation, I think stopping with a native savepoint is
very close to stopping with a checkpoint. The only different part is the fast
duplication over distributed file systems, which could be mitigated via
distributed file system shallow copy. Thus, I don't thi
I agree with you that option 1) would be the best for us. Let's keep hoping
for the best.
Option 4), as you said, comes at a price. At the moment, I don't have
thorough answers to your questions.
Just one quick response, I think there's a good chance that we can import
current Jira tickets into
Hi Saurabh and Yun Tang,
I tend to agree with Yun Tang. Exposure of stop-with-checkpoint would
complicate the system a bit too much for most users, with very little gain.
> 1. In producing a full snapshot, I see this is noted as follows in the
Flip
If you want to recover in CLAIM or NO_CL
Bill G created FLINK-29738:
--
Summary: Allow UDT codec registration for CassandraSinkBuilder
Key: FLINK-29738
URL: https://issues.apache.org/jira/browse/FLINK-29738
Project: Flink
Issue Type: New Feature
Hi all,
The vote has passed per the Flink Bylaws [1]
+1 votes:
- Márton Balassi (binding)
- Maximilian Michels (binding)
- Matthias Pohl (binding)
- David Anderson (binding)
- Jing Ge (non-binding)
- Alexander Fedulov (non-binding)
-1 votes:
- Kevin Lam (non-binding)
Best regards,
Martijn
[1]
Martijn Visser created FLINK-29739:
--
Summary: [FLIP-265] Deprecate and remove Scala API support
Key: FLINK-29739
URL: https://issues.apache.org/jira/browse/FLINK-29739
Project: Flink
Issue T
Martijn Visser created FLINK-29740:
--
Summary: Deprecate all customer-facing Scala APIs
Key: FLINK-29740
URL: https://issues.apache.org/jira/browse/FLINK-29740
Project: Flink
Issue Type: Sub-task
Martijn Visser created FLINK-29741:
--
Summary: Remove all Scala APIs
Key: FLINK-29741
URL: https://issues.apache.org/jira/browse/FLINK-29741
Project: Flink
Issue Type: Sub-task
Comp
yuzelin created FLINK-29742:
---
Summary: Support completing statement
Key: FLINK-29742
URL: https://issues.apache.org/jira/browse/FLINK-29742
Project: Flink
Issue Type: Sub-task
Reporter:
Jane Chan created FLINK-29743:
-
Summary: CatalogPropertiesUtil supports de/serializing column
comment
Key: FLINK-29743
URL: https://issues.apache.org/jira/browse/FLINK-29743
Project: Flink
Issue
Hi,
we should consider very carefully whether we should build something like
stop-with-checkpoint at all. Semantically and conceptually, checkpoints
should be more and more internally managed by Flink [1], and users should
use them very sparingly from the development perspective. Savepoint is the
righ
Hey Hangxiang,
Thanks for driving this issue. I've read through all the discussions and
suggestions in this thread, and here is my take:
1. I agree that the compatibility check should be done in the opposite
direction.
The current interface *causes some real issues* for users using their
own
+1 (non-binding)
- verify signatures and checksums
- no binaries found in source archive
- build from source code
- verify python wheel package contents
- pip install apache-flink-libraries and apache-flink wheel packages
- thread mode works as expected in Python DataStream API
- the Python DataSt
+1 (non-binding)
- verified signatures and hashsums
- built from source code succeeded
- checked all dependency artifacts are 1.16
- started a cluster, ran a wordcount job, the result is expected, no suspicious
log output
- started SQL Gateway, tested several rest APIs, the SQL query results are
+1 (non-binding)
- checked hashes and signatures
- built from sources
- started cluster, ran different simple jobs
- checked sql client
On Mon, Oct 24, 2022 at 3:14 PM Leonard Xu wrote:
> +1 (non-binding)
>
> - verified signatures and hashsums
> - built from source code succeeded
> - checked a
+1 (non-binding)
* Downloaded artifacts
* Verified checksums/GPG signatures
* Compared checkout with provided sources
* Verified pom file versions
* Went over NOTICE file/pom files changes without finding anything
suspicious
* Built Flink from sources
* Deployed standalone session cluster and ran
+1 (binding)
* Verified checksums/GPG signatures
* Built from source
* Tested with Kubernetes operator, including simple jobs, checkpointing etc.
* Metrics, logs look good.
Gyula
On Mon, Oct 24, 2022 at 4:54 PM Matthias Pohl
wrote:
> +1 (non-binding)
>
> * Downloaded artifacts
> * Verified che
Hi,
The plan was, and the ideal process is, to try externalized connector
development and release with the elastic connector first and make it stable
before starting the migration of other connectors. There are already many
connectors, like the Iceberg and AWS connectors, that are trying externalizin
Hi,
just pinging this thread in case someone missed it and has any opinion about
the discussed actions.
Best,
F
--- Original Message ---
On Tuesday, October 11th, 2022 at 23:29, Ferenc Csaky
wrote:
>
>
> Hi Martijn,
>
> Thank you for your comment. About HBase 2.x, correct, tha
Hi all,
@Etienne many thanks for the PR for the Cassandra Source. Hopefully we can
make this available after the 1.16 release.
With regards to the connector externalization comments, while the wiki for
the release plan for 1.17 is not available yet, the externalization of
connectors is definitely
Hi all,
There are many valid points raised in this discussion thread, but I think
we should not mix up different topics. From my perspective, there's two
things ongoing:
1. This thread is about the Flink community accepting the Iceberg
connector, with various maintainers from Iceberg volunteering
+1 (binding)
On Thu, Oct 20, 2022 at 12:37 AM wrote:
> Hi all,
>
> Thanks for all the feedback for FLIP 267[1]: Iceberg Connector in the
> discussion thread [2].
>
> I would like to start a vote thread for it. The vote will be open for
> at least 72 hours.
>
>
> [1] https://lists.apache.org/threa
+1 (non-binding)
On Mon, Oct 24, 2022 at 11:32 AM Martijn Visser
wrote:
> +1 (binding)
>
> On Thu, Oct 20, 2022 at 12:37 AM wrote:
>
> > Hi all,
> >
> > Thanks for all the feedback for FLIP 267[1]: Iceberg Connector in the
> > discussion thread [2].
> >
> > I would like to start a vote thread f
+1 (non-binding)
On Mon, Oct 24, 2022 at 8:46 PM Steven Wu wrote:
> +1 (non-binding)
>
> On Mon, Oct 24, 2022 at 11:32 AM Martijn Visser
> wrote:
>
> > +1 (binding)
> >
> > On Thu, Oct 20, 2022 at 12:37 AM wrote:
> >
> > > Hi all,
> > >
> > > Thanks for all the feedback for FLIP 267[1]: Iceber
I don't think we want to talk about the Flink community accepting the
Iceberg connector just yet. The goal of Abid's exploration is to see
what it would look like as an external connector. We'd need to decide
in the Iceberg community if that's something that we'd want to do long
term. If it were me
I agree that leaving everything as is would be the best option. I also tend
to lean towards option 4 as a fallback for the reasons already mentioned.
I'm still not a big fan of the Github issues. But that's probably only
because I'm used to the look-and-feel and the workflows of Jira. I see
certain
+1 (non-binding)
* Hashes and Signatures look good
* All required files on dist.apache.org
* Tag is present in Github
* Verified source archive does not contain any binary files
* Source archive builds using maven
* Deployed standalone session cluster and ran TopSpeedWindowing example in
streamin
I like the single repo with single version idea.
Pros:
- Better discoverability for connectors for AWS services means a better
experience for Flink users
- Natural placement of AWS-related utils (Credentials, SDK Retry strategy)
Caveats:
- As you mentioned, it is not desirable if we have to evol
Hi Danny,
I'm also leaning slightly towards the single AWS connector repo direction.
Bumps in the underlying AWS SDK would bump all of the connectors in any
case. And if a change occurs that is isolated to a single connector, then
those that do not use that connector can just skip the release.
C
Matyas Orhidi created FLINK-29744:
-
Summary: Throw DeploymentFailedException on ImagePullBackOff
Key: FLINK-29744
URL: https://issues.apache.org/jira/browse/FLINK-29744
Project: Flink
Issue T
+1 (non-binding)
On Mon, Oct 24, 2022 at 2:54 PM Shqiprim Bunjaku
wrote:
> +1 (non-binding)
>
> On Mon, Oct 24, 2022 at 8:46 PM Steven Wu wrote:
>
> > +1 (non-binding)
> >
> > On Mon, Oct 24, 2022 at 11:32 AM Martijn Visser <
> martijnvis...@apache.org>
> > wrote:
> >
> > > +1 (binding)
> > >
>
+1 (binding)
On Mon, Oct 24, 2022 at 5:14 PM Xinbin Huang wrote:
> +1 (non-binding)
>
> On Mon, Oct 24, 2022 at 2:54 PM Shqiprim Bunjaku <
> shqiprimbunj...@gmail.com>
> wrote:
>
> > +1 (non-binding)
> >
> > On Mon, Oct 24, 2022 at 8:46 PM Steven Wu wrote:
> >
> > > +1 (non-binding)
> > >
> > >
+1 (non-binding)
* Built from source
* Used the Flink SQL client to create catalogs/tables
* Used the Hive dialect to run some queries and insert statements
Best regards,
Yuxia
----- Original Message -----
From: "Teoh, Hong"
To: "dev"
Sent: Tuesday, October 25, 2022, 4:35:39 AM
Subject: Re: [VOTE] Release 1.16.0, release candidate
Shammon created FLINK-29745:
---
Summary: Split reader/writer factory for compaction in
MergeTreeTest
Key: FLINK-29745
URL: https://issues.apache.org/jira/browse/FLINK-29745
Project: Flink
Issue Type
Shammon created FLINK-29746:
---
Summary: Add workflow in github for micro benchmarks
Key: FLINK-29746
URL: https://issues.apache.org/jira/browse/FLINK-29746
Project: Flink
Issue Type: Sub-task
+1 (non-binding) for this candidate
* Built from the source code.
* Ran batch wordcount jobs with slow nodes of different source types on
the yarn cluster.
* The new source speculative execution works as expected, the result is
expected, no suspicious log output.
* Slow nodes are s
TBH, I have doubts about the “a single repository per connector” approach,
considering there are hundreds of connectors out there (Airbyte[1], Kafka[2]).
I don't think it is feasible for the community to maintain hundreds of
repositories. It makes sense to combine some connectors to reduce the
maintenance burden.
Junhan Yang created FLINK-29747:
---
Summary: [UI] Refactor runtime web from module-based to standalone
components
Key: FLINK-29747
URL: https://issues.apache.org/jira/browse/FLINK-29747
Project: Flink
Aitozi created FLINK-29748:
--
Summary: Expose the optimize phase in the connector context
Key: FLINK-29748
URL: https://issues.apache.org/jira/browse/FLINK-29748
Project: Flink
Issue Type: Improvement
+1 (non-binding)
* Built from source code.
* Launched a standalone cluster with examples for TopSpeedWindowing.
* The configuration of changelog state backend in web UI is shown as
expected.
* The files of changelog state backend are scattered into separate
directories by job id as expected.
BTW,
Hi, everyone.
Thanks for your suggestions!
Let me summarize the remaining questions in the thread and share my ideas
based on your suggestions:
1. Should we put the new opposite interface in TypeSerializer or
TypeSerializerSnapshot ?
Just as I replied to Dawid, I'd like to put it in TypeS
jackylau created FLINK-29749:
Summary: flink info command support dynamic properties
Key: FLINK-29749
URL: https://issues.apache.org/jira/browse/FLINK-29749
Project: Flink
Issue Type: Bug
Aff
Hi Ryan,
Thanks for your input.
I think the Flink Connector API is relatively stable now, compared to the
previous versions.
We have verified the latest Iceberg connector with the upcoming 1.16
release, and it works well.
I think API stability is something for the future and we should have some
w
-1 to wait a bit for the conclusion of the discussion thread.
On Tue, 25 Oct 2022 at 08:26, Maximilian Michels wrote:
> +1 (binding)
>
> On Mon, Oct 24, 2022 at 5:14 PM Xinbin Huang
> wrote:
>
> > +1 (non-binding)
> >
> > On Mon, Oct 24, 2022 at 2:54 PM Shqiprim Bunjaku <
> > shqiprimbunj...@gm
Mingliang Liu created FLINK-29750:
-
Summary: Improve PostgresCatalog#listTables() by reusing resources
Key: FLINK-29750
URL: https://issues.apache.org/jira/browse/FLINK-29750
Project: Flink
I
yuzelin created FLINK-29751:
---
Summary: Migrate SQL Client Local Mode to use sql gateway
Key: FLINK-29751
URL: https://issues.apache.org/jira/browse/FLINK-29751
Project: Flink
Issue Type: Sub-task
> BTW, the "Add New" button in "Submit New Job" tab can't work in my local
> standalone cluster, is this as expected?
I checked this case and it works well in my local env (macOS + Chrome), so it
should be an issue with your env.
Best,
Leonard Xu
>> +1 (non-binding) for this candidate
>>
>> * Built
+1 (non-binding)
* Built from source
* Verified signature and checksum
* Verified behavior/metrics/logs with internal stateful applications using
the Kafka source/sink connectors on K8s
Best,
Mason
On Mon, Oct 24, 2022 at 11:16 PM Leonard Xu wrote:
>
> > BTW, the "Add New" button in "Submit Ne