https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5#migration-guide
Kryo themselves state that v5 likely can't read v2 data.
However, both versions can be on the classpath without conflict, as v5
offers a versioned artifact that includes the version in the package name.
It probably would
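The coexistence described above can be sketched as a Maven fragment. This is a sketch only: the `kryo5` artifact coordinates and the version numbers below are assumptions based on the linked migration guide, not verified against Flink's actual build.

```xml
<!-- Sketch: both Kryo generations on one classpath. The kryo5 artifact
     relocates its classes under com.esotericsoftware.kryo.kryo5, so it does
     not clash with the classic artifact. Versions are placeholders. -->
<dependency>
  <groupId>com.esotericsoftware.kryo</groupId>
  <artifactId>kryo</artifactId>
  <version>2.24.0</version>
</dependency>
<dependency>
  <groupId>com.esotericsoftware.kryo</groupId>
  <artifactId>kryo5</artifactId>
  <version>5.5.0</version>
</dependency>
```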
Hi everyone,
Thanks for your participation.
@Gordon, I looked at the several questions you raised:
1. Should we use the firstDiscovery flag or two separate
OffsetsInitializers? Actually, I have considered the latter. If we follow my
initial idea, we can provide a default earliest OffsetsIniti
Matthias Pohl created FLINK-31678:
Summary: NonHAQueryableStateFsBackendITCase.testAggregatingState:
Query did not succeed
Key: FLINK-31678
URL: https://issues.apache.org/jira/browse/FLINK-31678
Proje
Hi Dong and Zhipeng,
Thanks for starting the discussion. Glad to see a new release of Flink ML.
Cheers!
On Fri, Mar 31, 2023 at 2:34 PM Zhipeng Zhang
wrote:
> Hi Dong,
>
> Thanks for starting the discussion. +1 for the Flink ML 2.1.0 release.
>
Hi Dong,
Thanks for starting the discussion. +1 for the Flink ML 2.1.0 release.
jackylau created FLINK-31677:
Summary: Add MAP_ENTRIES supported in SQL & Table API
Key: FLINK-31677
URL: https://issues.apache.org/jira/browse/FLINK-31677
Project: Flink
Issue Type: Improvement
Hi Feng,
Thanks for driving this FLIP. I have some questions about this FLIP:
1. I agree with Shammon's comment. The two methods `registerCatalog(String
catalogName,Catalog catalog)` and `registerCatalog(String catalogName,
Map properties)` may confuse users.
I think we should add a method `get
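A hypothetical sketch of the two overloads under discussion may make the confusion concrete. Names and signatures below are assumptions based on the quoted FLIP text; the real API lives in Flink's catalog management classes and may differ.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the two registerCatalog overloads discussed above.
// The stand-in Catalog interface and the registry field are illustration
// only; they are not Flink's actual types.
public class CatalogRegistration {
    interface Catalog {}  // stand-in for org.apache.flink.table.catalog.Catalog

    static final Map<String, Object> registry = new HashMap<>();

    // Overload 1: register an already-instantiated catalog object.
    static void registerCatalog(String catalogName, Catalog catalog) {
        registry.put(catalogName, catalog);
    }

    // Overload 2: register lazily from configuration properties; the catalog
    // would only be instantiated on first use via a factory.
    static void registerCatalog(String catalogName, Map<String, String> properties) {
        registry.put(catalogName, properties);
    }

    public static void main(String[] args) {
        registerCatalog("instantiated", new Catalog() {});
        registerCatalog("lazy", Map.of("type", "generic_in_memory"));
        System.out.println(registry.keySet());
    }
}
```

The confusion raised above is that both overloads look interchangeable to callers while behaving differently (eager vs. lazy instantiation).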
Hi Feng,
Thanks for driving this FLIP. The idea of catalog configuration is cool and
I think it will be very useful. I have some comments about the FLIP:
1. `CatalogStore` in the FLIP stores information of catalogs, what's the
relationship between `Map catalogs` and `CatalogStore
catalogStore`? How
Hi, Etienne.
Thanks, Etienne, for sharing this article. I really like it and learned a lot
from it.
I'd like to raise some questions about implementing a batch source. Devs are
welcome to share their insights about them.
The first question is how to generate splits:
As the article mentioned:
"Whenever poss
Martijn Visser created FLINK-31676:
Summary: Pulsar connector should not rely on Flink Shaded
Key: FLINK-31676
URL: https://issues.apache.org/jira/browse/FLINK-31676
Project: Flink
Issue Ty
Hi all,
How do Flink formats relate to or interact with Paimon (formerly
Flink-Table-Store)? If the Flink format interface is used there, then it
may be useful to consider Arrow along with other columnar formats.
Separately, from previous experience, I've seen the Arrow format be useful
as an ou
Apologies, I'm reopening the thread until we get confirmation that the test
run failure is not a problem with the connector.
On Thu, Mar 30, 2023 at 5:03 PM Danny Cranmer
wrote:
> This vote is now closed, I will announce the result in a separate thread.
>
> Thanks,
> Danny
>
> On Thu, Mar 30, 2023 at 5:02 PM
This vote is now closed, I will announce the result in a separate thread.
Thanks,
Danny
On Thu, Mar 30, 2023 at 5:02 PM Danny Cranmer
wrote:
> +1 (binding)
>
> - CI build rerun passed on the RC1 tag commit (on the second rerun)
> - Validated hashes
> - Verified signature
> - Verified that no bi
+1 (binding)
- CI build rerun passed on the RC1 tag commit (on the second rerun)
- Validated hashes
- Verified signature
- Verified that no binaries exist in the source archive
- Built the source with Maven
- Verified licenses
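For readers unfamiliar with the "validated hashes" step in the checklist above, a minimal sketch in Java: recompute the SHA-512 digest of the downloaded archive and compare it to the published checksum. The input and expected digest here are the well-known NIST "abc" test vector, standing in for a real release archive and its .sha512 file; in practice one would run the equivalent sha512sum/gpg commands.

```java
import java.security.MessageDigest;

// Sketch of checksum validation. The "archive" bytes and "published" digest
// below are the standard SHA-512 test vector for "abc", used as stand-ins.
public class VerifyHash {
    static String sha512Hex(byte[] data) {
        try {
            StringBuilder sb = new StringBuilder();
            for (byte b : MessageDigest.getInstance("SHA-512").digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] archive = "abc".getBytes();   // stand-in for the source tgz bytes
        String published =                   // value normally read from the .sha512 file
            "ddaf35a193617abacc417349ae20413112e6fa4e89a97ea20a9eeee64b55d39a"
          + "2192992a274fc1a836ba3c23a3feebbd454d4423643ce80e2a9ac94fa54ca49f";
        System.out.println(sha512Hex(archive).equals(published) ? "hash OK" : "hash MISMATCH");
        // prints "hash OK"
    }
}
```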
Thanks,
Danny
On Thu, Mar 30, 2023 at 3:32 PM Jiabao Sun
wrote:
> T
Hey,
> 1. The Flink community agrees that we upgrade Kryo to a later version,
which means breaking all checkpoint/savepoint compatibility and releasing a
Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API support
dropped. This is probably the quickest way, but would still mean tha
Hi all,
After creating the Cassandra source connector (thanks Chesnay for the
review!), I wrote a blog article about how to create a batch source with
the new Source framework [1]. It gives field feedback on how to
implement the different components.
I felt it could be useful to people inter
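For readers who haven't used the new Source framework yet, a heavily simplified sketch of the split-based model the article covers. The types below are abbreviated stand-ins, not Flink's real interfaces (those live in org.apache.flink.api.connector.source and carry more methods and generics).

```java
import java.util.List;

// Simplified stand-ins for the new Source framework components: an enumerator
// (JobManager side) discovers and assigns splits; readers (TaskManager side)
// consume them. Split names here are placeholders.
public class SourceSketch {
    // A split: a chunk of the external system a single reader can consume.
    record Split(String id) {}

    // Enumerator responsibility: discover all available splits.
    static List<Split> discoverSplits() {
        return List.of(new Split("partition-0"), new Split("partition-1"));
    }

    // Enumerator responsibility: assign a split to a reader subtask
    // (round-robin here, purely for illustration).
    static Split assignNext(List<Split> splits, int subtaskId) {
        return splits.get(subtaskId % splits.size());
    }

    public static void main(String[] args) {
        List<Split> splits = discoverSplits();
        System.out.println(splits.size());          // 2
        System.out.println(assignNext(splits, 0).id()); // partition-0
    }
}
```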
Antonio Vespoli created FLINK-31675:
Summary: Deadlock in AWS Connectors following content-length AWS
SDK exception
Key: FLINK-31675
URL: https://issues.apache.org/jira/browse/FLINK-31675
Project:
Thanks Danny,
The test case fails again with stack trace:
14:24:57,709 [Source: Sequence Source -> Map -> Map -> Sink: Writer (1/1)#94]
ERROR org.apache.flink.connector.mongodb.sink.writer.MongoWriter [] - Bulk
Write to MongoDB failed
com.mongodb.MongoBulkWriteException: Bulk write operation
Ryan Skraba created FLINK-31674:
Summary: [JUnit5 Migration] Module: flink-table-planner
(BatchAbstractTestBase)
Key: FLINK-31674
URL: https://issues.apache.org/jira/browse/FLINK-31674
Project: Flink
I'm happy to announce that we have unanimously approved this release.
There are 5 approving votes, 3 of which are binding:
* Robert (binding)
* Hong
* Samrat
* Martijn (binding)
* Danny (binding)
There are no disapproving votes.
Thanks everyone!
Danny,
This vote is now closed, I will announce the result in a separate thread.
Thanks,
Danny
On Thu, Mar 30, 2023 at 3:15 PM Danny Cranmer
wrote:
> +1 (binding)
>
> - Validated hashes
> - Verified signature
> - Verified that no binaries exist in the source archive
> - Built the source with Maven
> -
+1 (binding)
- Validated hashes
- Verified signature
- Verified that no binaries exist in the source archive
- Built the source with Maven
- Verified licenses
Thanks,
Danny
On Thu, Mar 30, 2023 at 12:37 PM Martijn Visser
wrote:
> +1 (binding)
>
> - Validated hashes
> - Verified signature
> - V
I have restarted the build [1].
Thanks,
Danny
[1]
https://github.com/apache/flink-connector-mongodb/actions/runs/4435527099
On Wed, Mar 29, 2023 at 4:33 PM Jiabao Sun
wrote:
> Thanks Chesnay,
>
> I noticed that problem before, the error message shows:
>
> Caused by: java.lang.RuntimeException:
Hi Jane,
Thanks for your detailed response.
You mentioned that there are 10k+ SQL jobs in your production
> environment, but only ~100 jobs' migration involves plan editing. Is 10k+
> the number of total jobs, or the number of jobs that use stateful
> computation and need state migration?
>
10k
I realize I'm not looking at all angles here, but the biggest hitch that I
see with including both serializers and a migration process is that it
requires Flink and its users to stay on JDK < 17 for at least one "bump"
version before moving up. At this time they can upgrade to the version of
Flink
Hi,
To be honest, I haven't seen that much demand for supporting the Arrow
format directly in Flink as a flink-format. I'm wondering if there's really
much benefit for the Flink project to add another file format, over
properly supporting the format that we already have in the project.
Best regar
Hi all,
I also saw a thread on this topic from Clayton Wohl [1],
which I'm including in this discussion thread to avoid that it gets lost.
From my perspective, there are two main ways to get to Java 17:
1. The Flink community agrees that we upgrade Kryo to a later version,
which mea
It is a good point that Flink could integrate Apache Arrow as a format.
Arrow can take advantage of SIMD-specific or vectorized optimizations,
which should be of great benefit to batch tasks.
However, as mentioned in the issue you listed, it may take a lot of work
and the community's consideration for i
Hi everyone,
I'm forwarding the following information from the ASF Travel Assistance
Committee (TAC):
---
Hi All,
The ASF Travel Assistance Committee is supporting taking up to six (6)
people
to attend Berlin Buzzwords [1] in June this year.
This includes Conference passes, and travel & accomm
+1 (binding)
- Validated hashes
- Verified signature
- Verified that no binaries exist in the source archive
- Built the source with Maven
- Verified licenses
- Verified web PRs
On Tue, Mar 28, 2023 at 1:53 PM Samrat Deb wrote:
> +1 (non-binding)
>
> - checked Source archive builds using maven
Hi all,
Thanks for stepping up as volunteers, much appreciated!
Best regards,
Martijn
On Thu, Mar 30, 2023 at 5:59 AM Leonard Xu wrote:
> Thanks Konstantin and Qingsheng for kicking off and pushing forward the
> discussion.
>
> Thanks Qingsheng, Jing, Konstantin, Sergey, Yun for volunteering
Benchao Li created FLINK-31673:
Summary: Add E2E tests for flink jdbc driver
Key: FLINK-31673
URL: https://issues.apache.org/jira/browse/FLINK-31673
Project: Flink
Issue Type: Sub-task
Chesnay Schepler created FLINK-31672:
Summary: Requirement validation does not take user-specified
maxParallelism into account
Key: FLINK-31672
URL: https://issues.apache.org/jira/browse/FLINK-31672
Hi, devs,
Thank you all for your inspirational responses, and sorry for the late
reply. Since more and more people are joining the discussion and may have
missed some of the previous replies, I'd like to summarize the unresolved
comments so far here, to make sure we're all on the same page and won
Yuxin Tan created FLINK-31671:
Summary: [Connectors/HBase] Update flink to 1.17.0
Key: FLINK-31671
URL: https://issues.apache.org/jira/browse/FLINK-31671
Project: Flink
Issue Type: Improvement
Weijie Guo created FLINK-31670:
Summary: ElasticSearch connector's documentation was not correctly
linked to the external repo
Key: FLINK-31670
URL: https://issues.apache.org/jira/browse/FLINK-31670
Project: Fli