Hi,
Please don't use the mailing list for this purpose.
Best regards,
Martijn
On Wed, Feb 21, 2024 at 4:08 PM sri hari kali charan Tummala
wrote:
>
> Hi Folks,
>
> I am currently seeking full-time positions in Flink Scala in India or the USA
> (non-consulting), specifically at the Principal
icles around these would
> help.
>
> Regards,
> Kartik
>
>
> On Mon, Feb 12, 2024, 10:24 AM Martijn Visser
> wrote:
>>
>> Sources don't need to support two-phase commits; that's something for
>> sinks. I think the example of exactly-once-proces
sk crash or job restarts, taking into account that regular checkpointing is
> also enabled, and restart and recovery should not lead to duplicates from the
> user-managed state vs the checkpointed state.
>
>
> Regards
> Kartik
>
>
> On Mon, Feb 12, 2024, 9:50 AM Martijn Visser
Hi Kartik,
I don't think there's much that the Flink community can do here to
help you. The Solace source and sink aren't owned by the Flink
project, and based on the source code they haven't been touched for
the last 7 years [1] and I'm actually not aware of anyone who uses
Solace at all.
Best regards,
Martijn
The Apache Flink community is very happy to announce the release of
Apache flink-connector-kafka v3.1.0. This release is compatible with
Apache Flink 1.17 and 1.18.
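For anyone adding the connector to a project, a hedged sketch of the Maven
coordinates (the exact version suffix should be verified on Maven Central,
since externalized connectors encode the supported Flink version in it):

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka</artifactId>
        <!-- illustrative: connector 3.1.0 built against Flink 1.18 -->
        <version>3.1.0-1.18</version>
    </dependency>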
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
str
Hi,
I would definitely expect a FLIP on this topic before moving to
implementation.
Best regards,
Martijn
On Fri, Feb 2, 2024 at 12:47 PM Xuyang wrote:
> Hi, Prabhjot.
>
> IIUC, the main reasons why the community has not previously considered
> supporting join hints only in batch mode are as
exists to help those who would like to attend Community
over Code events, but are unable to do so for financial reasons. I'm hoping
that we'll have a wide variety of Flink community members over there!
All the details and more information can be found in the message below.
Best regards
Hi Charlotta,
I've just pushed out a vote for RabbitMQ connector v3.0.2 which
includes support for Flink 1.18. See
https://lists.apache.org/thread/jmpmrnnwv6yw4ol1zjc5t0frz67jpnqr
Best regards,
Martijn
On Tue, Jan 9, 2024 at 1:08 PM Jiabao Sun wrote:
>
> Hi Charlotta,
>
> The latest news about
NoSuchMethodError: 'scala.collection.immutable.ArraySeq
> scala.runtime.ScalaRunTime$.wrapRefArray(java.lang.Object[])'
>
>
>
> So this issue seems weird and does look like Flink is using the Scala 2.12
> runtime even if the flink-scala packages are not installed. The question is
>
Hi Praveen,
There have been discussions around an LTS version [1] but no consensus
has yet been reached on that topic.
Best regards,
Martijn
[1] https://lists.apache.org/thread/qvw66of180t3425pnqf2mlx042zhlgnn
On Wed, Jan 10, 2024 at 12:08 PM Praveen Chandna via user
wrote:
>
> Hello
>
>
>
>
Hi Patrick,
You're on the right track: to use an arbitrary Scala version, you can't use
any of the Flink Scala APIs. Have you seen
the examples with Scala 3? [2] Do you have an example of your
code/setup?
Best regards,
Martijn
[1] https://flink.apache.org/2022/02/22/scala-free-
Hi Prasanna,
I think this is as expected. There is no support for monitoring
changes to existing files.
Best regards,
Martijn
On Fri, Jan 5, 2024 at 10:22 AM Prasanna kumar
wrote:
>
> Hi Flink Community,
>
>
> I hope this email finds you well. I am currently in the process of migrating
> my F
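For context, a minimal sketch of a continuously monitoring FileSource (path
and interval are illustrative); note that monitorContinuously only discovers
new files, matching the limitation described above:

    import java.time.Duration;
    import org.apache.flink.connector.file.src.FileSource;
    import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
    import org.apache.flink.core.fs.Path;

    public class ContinuousFileExample {
        public static FileSource<String> build() {
            return FileSource
                .forRecordStreamFormat(new TextLineInputFormat(), new Path("/data/input"))
                // Checks the directory for NEW files every 30 seconds;
                // modifications to already-discovered files are not picked up.
                .monitorContinuously(Duration.ofSeconds(30))
                .build();
        }
    }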
Hi all,
I want to get some insights on how many users are still using Hadoop 2
vs how many users are using Hadoop 3. Flink currently requires a
minimum version of Hadoop 2.10.2 for certain features, but also
extensively uses Hadoop 3 (like for the file system implementations)
Hadoop 2 has a large
Hi,
If there's nothing that pushes the watermark forward, then the window
won't be able to close. That's a common thing and expected for every
operator that relies on watermarks. You can also decide to configure
an idleness in order to push the watermark forward if needed.
Best regards,
Martijn
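As a minimal sketch of the idleness configuration mentioned above (the
element type and durations are illustrative):

    import java.time.Duration;
    import org.apache.flink.api.common.eventtime.WatermarkStrategy;

    public class IdlenessExample {
        public static WatermarkStrategy<String> strategy() {
            return WatermarkStrategy
                .<String>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                // After 1 minute without records, a split is marked idle and
                // no longer holds back the overall watermark.
                .withIdleness(Duration.ofMinutes(1));
        }
    }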
Hi Gordon,
Thanks for the release! I've pushed one hotfix [1], to make sure that
the Flink documentation shows the correct version number for the Flink
version it's compatible with.
Best regards,
Martijn
[1]
https://github.com/apache/flink-connector-kafka/commit/6c3d3d06689336f2fd37bfa5a3b17a5
Hi Gordon,
I'm wondering if this might be a difference between how Maven and
Gradle build their projects, since you've done your validations with
Maven, but Günter uses Gradle.
In the end, the quickest fix would be to backport FLINK-30400 to the
Flink Kafka 3.0 release branch.
Best regards,
Martijn
Ah, I actually misread checkpoints and savepoints, sorry. The purpose
of a checkpoint in principle is that Flink manages its lifecycle.
Which S3 interface are you using for the checkpoint storage?
On Tue, Nov 7, 2023 at 6:39 PM Martijn Visser wrote:
>
> Hi Yang,
>
> If you use the N
Hi Yang,
If you use the NO_CLAIM mode, Flink will not assume ownership of a
snapshot and leaves it up to the user to delete it. See the blog [1]
for more details.
Best regards,
Martijn
[1]
https://flink.apache.org/2022/05/06/improvements-to-flink-operations-snapshots-ownership-and-savepoint-f
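For reference, a sketch of restoring in NO_CLAIM mode from the CLI (assuming
Flink 1.15+, where restore modes were introduced; the savepoint path is
illustrative and the flag spelling should be checked against your version's
docs):

    # Flink does not assume ownership of the snapshot; deleting it
    # remains the user's responsibility.
    bin/flink run -s s3://bucket/savepoints/savepoint-abc123 \
        --restoreMode no_claim \
        myJob.jar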
Hi,
That's by design: you can't dynamically add and remove topics from an
existing Flink job that is being restarted from a snapshot. The
feature you're looking for is being planned as part of FLIP-246 [1]
Best regards,
Martijn
[1] https://cwiki.apache.org/confluence/pages/viewpage.action?pageI
Thank you all who have contributed!
On Thu, Oct 26, 2023 at 6:41 PM Feng Jin wrote:
> Thanks for the great work! Congratulations
>
>
> Best,
> Feng Jin
>
> On Fri, Oct 27, 2023 at 12:36 AM Leonard Xu wrote:
>
> > Congratulations, Well done!
> >
> > Best,
> > Leonard
> >
> > On Fri, Oct 27, 2023 at
om/apache/flink/blob/master/tools/ci/maven-utils.sh#L59
>
> Although the hostname we are referring here is hardcoded so it can be
> mitigated.
>
> Thanks and Regards,
> Ankur Singhal
>
> -Original Message-
> From: Martijn Visser
> Sent: Thursday, October
Hi Kirti Dhar,
There isn't really enough information to answer it: are you using
Flink in bounded mode, how have you created your job, what is
appearing in the logs etc.
Best regards,
Martijn
On Mon, Oct 16, 2023 at 7:01 AM Kirti Dhar Upadhyay K via user
wrote:
>
> Hi Community,
>
>
>
> Can so
Hi Patricia,
There's no guarantee of compatibility between different Flink minor
versions and it's not supported. If it works, that can be specific to
this use case and could break at any time. It's up to you to determine
if that is sufficient for you or not.
Best regards,
Martijn
On Mon, Oct 1
Hi Ankur,
Where do you see Flink using/bundling Curl?
Best regards,
Martijn
On Wed, Oct 11, 2023 at 9:08 AM Singhal, Ankur wrote:
>
> Hi Team,
>
>
>
> Do we have any plans to update flink to support Curl 8.4.0 with earlier
> versions having severe vulnerabilities?
>
>
>
> Thanks & Regards,
>
Hi Krzysztof,
The bundled Flink Kafka connector for 1.17 uses Kafka 3.2.3, see
https://github.com/apache/flink/blob/release-1.17/flink-connectors/flink-connector-kafka/pom.xml#L38
That's also the case for the externalized Flink Kafka connector v3.0,
see https://github.com/apache/flink-connector-ka
CVE-2023-41834: Apache Flink Stateful Functions allowed HTTP header
injection due to Improper Neutralization of CRLF Sequences
Severity: moderate
Vendor:
The Apache Software Foundation
Versions Affected:
Stateful Functions 3.1.0 to 3.2.0
Description:
Improper Neutralization of CRLF Sequences in
sion=12351276
We would like to thank all contributors of the Apache Flink community who
made this release possible!
Regards,
Martijn Visser
Hi Kamal,
The best starting point would be to look at how to write a custom source
connector. Have a look at
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/sources/
which also includes links to the various classes that you'll need. Please
let us know what else you've trie
Hi,
Please send an email to user-unsubscr...@flink.apache.org in order to be
removed from the User mailing list.
Best regards,
Martijn
On Wed, Jul 26, 2023 at 3:44 AM Lu Weizheng
wrote:
> Unsubscribe
>
Hi,
As documented [1] this option "enables uploading and starting jobs through
the Flink UI (true by default). Please note that even when this is
disabled, session clusters still accept jobs through REST requests (HTTP
calls). This flag only guards the feature to upload jobs in the UI."
It won't
_BROKER_LISTENER_NAME: 'PLAINTEXT'
> KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
> KAFKA_LOG_DIRS: '/tmp/kraft-controller-logs'
> # Replace CLUSTER_ID with a unique base64 UUID using "bin/kafka-storage.sh
> random-uuid"
> # See
> https:/
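For reference, the flink-conf.yaml entry for the option discussed above:

    # Disables JAR upload/submission through the web UI only; session
    # clusters still accept jobs submitted via REST calls.
    web.submit.enable: false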
Hi Mengxi Wang,
Which Flink version are you using?
Best regards,
Martijn
On Thu, Jul 13, 2023 at 3:21 PM Wang, Mengxi X via user <
user@flink.apache.org> wrote:
> Hi community,
>
>
>
> We got this kuerberos error with Hadoop as file system on ECS Fargate
> deployment.
>
>
>
> Caused by: org.ap
Hi Kamal,
It would require you to find a way to create a TCP connection on task
managers where you would only read the assigned part of the TCP connection.
Looking at the protocol itself, that most likely would be an issue. A TCP
connection would also be problematic in case of replays and checkpoi
Lock() method, right?
>
> Thankyou,
> Sanket
>
> On Fri, Jul 7, 2023 at 5:39 AM Martijn Visser
> wrote:
>
>> Hi Sanket,
>>
>> Have you read the release notes for Flink 1.11 at
>> https://nightlies.apache.org/flink/flink-docs-release-1.11/release-notes/flink-
n
> in the official release.
>
>
>
> *From:* Meissner, Dylan
> *Sent:* Friday, June 30, 2023 17:26
> *To:* Martijn Visser ; Schmeier, Jannik
>
> *Cc:* Schwalbe Matthias ;
> user@flink.apache.org
> *Subject:* Re: Using pre-registered schemas with avro-confluent-registry
>
Hi Sanket,
Have you read the release notes for Flink 1.11 at
https://nightlies.apache.org/flink/flink-docs-release-1.11/release-notes/flink-1.11.html#removal-of-deprecated-streamtaskgetcheckpointlock-flink-12484
?
Given that Flink 1.11 is a version that's no longer supported in the Flink
community
nd restart the application using the latest info as our starting
> point. I'd like to avoid this, though, because it would certainly create a
> bit of complexity.
>
> Thanks,
> Mike
>
> On Fri, Jul 7, 2023 at 10:58 AM Martijn Visser
> wrote:
>
>> Hi Michael,
>>
Hi Michael,
In the current Table API/SQL, there's no guarantee that a change to either
the query or the Flink version won't lead to state incompatibility. That's
also documented at
https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/ops/upgrading/#table-api--sql
Best regards,
Martijn
Hi Kamal,
There's no such limitation, so most likely this is related to the
implementation of your TCP source connector. Do keep in mind that just by
the nature of TCP, I doubt that you will have any guarantees when it comes
to this source. E.g. if you roll back to a savepoint of one day ago, how
Hi Mahmoud,
While it's not an answer to your questions, I do want to point out
that the DataSet API is deprecated and will be removed in a future
version of Flink. I would recommend moving to either the Table API or
the DataStream API.
Best regards,
Martijn
On Thu, Jun 22, 2023 at 6:14 PM Mahmoud
The Apache Flink community is very happy to announce the release of
Apache flink-connector-jdbc v3.1.1. This version is compatible with
Flink 1.16 and Flink 1.17.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data
strea
Hi Dani,
There are two things that I notice:
1. You're mixing different Flink versions (1.16 and 1.17): all Flink
artifacts should be from the same Flink version
2. S3 plugins need to be added to the plugins folder of Flink, because they
are loaded via the plugin mechanism. See
https://nightlies.
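A minimal sketch of the plugin setup (the jar version must match your Flink
distribution, and s3-fs-hadoop is just one of the available S3 plugins):

    mkdir -p $FLINK_HOME/plugins/s3-fs-hadoop
    cp $FLINK_HOME/opt/flink-s3-fs-hadoop-1.17.1.jar \
       $FLINK_HOME/plugins/s3-fs-hadoop/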
Hi Alexis,
There are a couple of recent Flink tickets on watermark alignment,
specifically https://issues.apache.org/jira/browse/FLINK-32414 and
https://issues.apache.org/jira/browse/FLINK-32420 - Could the later be also
applicable in your case?
Best regards,
Martijn
On Wed, Jun 28, 2023 at 11:
Hi Vladislav,
I think it might be worthwhile to upgrade to Flink 1.17, given the
improvements that have been made in Flink 1.16 and 1.17 on batch
processing. See for example the release notes of 1.17, with an entire
section on batch processing
https://flink.apache.org/2023/03/23/announcing-the-rel
Thanks for reaching out Stephen. I've also updated the Slack invite link at
https://flink.apache.org/community/#slack
Best regards, Martijn
On Thu, Jun 29, 2023 at 3:20 AM yuxia wrote:
> Hi, Stephen.
> Welcome to join Flink Slack channel. Here's my invitation link:
>
> https://join.slack.com/t/
like exactly-once verification)
>>2. Updating the flink-statefun-playground repo and manually running
>>all language examples there.
>>
>> If upgrading Flink versions was the only change in the release, I'd
>> probably say that this is sufficient.
hich
> are pretty straightforward. Perhaps he could weigh in on whether the
> combination of automated tests plus those smoke tests should be sufficient
> for testing with new Flink versions (I believe the answer is yes).
>
> -- Galen
>
>
>
> On Thu, Jun 8, 2023 at 8:01
to leverage Scylla Java Driver
> once the migration is done.
> ~
> Karthik
>
>
> On Mon, Jun 12, 2023 at 4:56 PM Martijn Visser
> wrote:
>
>> Hi,
>>
>> Why wouldn't you just use the Flink Kafka connector and the Flink
>> Cassandra connector for yo
case the
> source fails completely. (Something similar to
> "ActionRequestFailureHandler" for ElasticsearchSink)
>
> Many thanks in advance,
> Anirban
>
> On 09-06-2023 20:01, Martijn Visser wrote:
>
> Hi,
>
> This consumer should not be used. This only occurs in really old a
Hi,
Why wouldn't you just use the Flink Kafka connector and the Flink Cassandra
connector for your use case?
Best regards,
Martijn
On Mon, Jun 12, 2023 at 12:03 PM Karthik Deivasigamani
wrote:
> Hi,
>I have a use case where I need to read messages from a Kafka topic,
> parse it and write
Hi,
This consumer should not be used. This only occurs in really old and no
longer supported Flink versions. You should really upgrade to a newer
version of Flink and use the KafkaSource.
Best regards,
Martijn
On Fri, Jun 9, 2023 at 11:05 AM Anirban Dutta Gupta <
anir...@indicussoftware.com> wr
>
> I am currently using Stateful Functions in my application.
>
> I use Apache Flink for stream processing, and StateFun as a hand-off
> point for the rest of the application.
> It serves well as a bridge between a Flink Streaming job and
> micro-services.
>
> I would be dis
ing parquet encoder/decoder and during decoding, if any corrupt
> record comes, then an alarm needs to be raised and metrics maintained, visible in the
> Flink Metrics GUI.
>
>
>
> So can any custom metrics be created in Flink? Please give some reference
> to any such documentation.
>
>
Hi Kamal,
No, but it should be straightforward to create metrics or events for these
types of situations and integrate them with your own alerting solution.
Best regards,
Martijn
On Wed, Jun 7, 2023 at 8:25 AM Kamal Mittal via user
wrote:
> Hello Community,
>
>
>
> Is there any way Flink prov
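A minimal sketch of such a user-defined metric for this scenario (the class
name, metric name, and decode step are illustrative):

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.metrics.Counter;

    public class DecodeFunction extends RichMapFunction<byte[], String> {
        private transient Counter corruptRecords;

        @Override
        public void open(Configuration parameters) {
            // Registered metrics show up in the Flink web UI and in any
            // configured metric reporters.
            corruptRecords = getRuntimeContext().getMetricGroup()
                .counter("corruptRecords");
        }

        @Override
        public String map(byte[] value) {
            try {
                return decode(value); // hypothetical decoding step
            } catch (Exception e) {
                corruptRecords.inc();
                return null;
            }
        }

        private String decode(byte[] value) {
            return new String(value);
        }
    }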
Hey Ryan,
I've never encountered a use case for writing Protobuf-encoded files to a
filesystem.
Best regards,
Martijn
On Fri, May 26, 2023 at 6:39 PM Ryan Skraba via user
wrote:
> Hello all!
>
> I discovered while investigating FLINK-32008[1] that we can write to the
> filesystem connector wi
Hi,
This question is better suited for the Iceberg community, since they've
built the Flink-Iceberg integration.
Best regards,
Martijn
On Wed, May 31, 2023 at 9:48 AM 湘晗刚 <1016465...@qq.com> wrote:
> Flink 1.14 batch mode can read an Iceberg table but stream mode cannot. Why?
> Thanks in advance
>
Hi Jannik,
Can you share how you've set those properties, because I've been able to
use this without any problems.
Best regards,
Martijn
On Thu, Jun 1, 2023 at 2:43 PM Schmeier, Jannik
wrote:
> Hello Thias,
>
>
>
> thank you for your answer.
>
>
>
> We've tested registering an existing (byte
Hi Jannik,
By default, Kafka client applications automatically register new schemas
[1]. You should be able to influence that by using properties, e.g. setting:
'properties.auto.register.schemas' = 'false'
'properties.use.latest.version' = 'true'
Best regards,
Martijn
[1]
https://docs.confluen
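As a sketch of where those properties go in a SQL table definition (table
name, topic, and URLs are illustrative, and the schema-registry URL option
name varies by Flink version):

    CREATE TABLE payments (
      id STRING,
      amount DECIMAL(10, 2)
    ) WITH (
      'connector' = 'kafka',
      'topic' = 'payments',
      'properties.bootstrap.servers' = 'broker:9092',
      'format' = 'avro-confluent',
      'avro-confluent.url' = 'http://schema-registry:8081',
      -- the options discussed above
      'properties.auto.register.schemas' = 'false',
      'properties.use.latest.version' = 'true'
    );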
Same here as with Flink 1.16.2, thank you Weijie and those who helped with
testing!
On Fri, May 26, 2023 at 1:08 PM weijie guo
wrote:
>
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.17.1, which is the first bugfix release for the Apache Flink 1.17
> ser
Thank you Weijie and those who helped with testing!
On Fri, May 26, 2023 at 1:06 PM weijie guo
wrote:
> The Apache Flink community is very happy to announce the release of
> Apache Flink 1.16.2, which is the second bugfix release for the Apache
> Flink 1.16 series.
>
>
>
> Apache Flink® is an op
Hi Hatem,
Could it be that you don't have checkpointing enabled? Flink only commits
its offset when a checkpoint has been completed successfully, as explained
on
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/kafka/#consumer-offset-committing
Best regards,
Martijn
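A minimal sketch of enabling checkpointing (the interval is illustrative),
without which the Kafka source never commits its offsets:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointingExample {
        public static void main(String[] args) {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
            // Offsets are committed back to Kafka only on successful
            // checkpoints, so without this they never advance.
            env.enableCheckpointing(60_000L); // every 60 seconds
        }
    }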
Hi Anuj,
I recalled another ticket on this topic, which had some things to test. I
don't know if that resolved the issue, can you verify it? See
https://issues.apache.org/jira/browse/FLINK-31095
Best regards,
Martijn
On Tue, May 23, 2023 at 7:04 AM Anuj Jain wrote:
> Hello,
> Please provide s
Hi Amenreet Singh Sodhi,
Flink is compatible with JDK8 and JDK11, not with JDK17. You can find the
Jira issue that tracks compatibility at
https://issues.apache.org/jira/browse/FLINK-15736. The biggest problem is
the Kryo serializer that's currently being used. That doesn't work with
JDK17, but up
The Apache Flink community is very happy to announce the release of Apache
flink-connector-gcp-pubsub v3.0.1. This release is compatible with Flink
1.16.x and Flink 1.17.x
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate d
The Apache Flink community is very happy to announce the release of Apache
flink-connector-elasticsearch v1.0.1. This release is compatible with Flink
1.16.x and Flink 1.17.x
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurat
The Apache Flink community is very happy to announce the release of Apache
flink-connector-opensearch v1.0.1. This release is compatible with Flink
1.16.x and Flink 1.17.x
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate d
The Apache Flink community is very happy to announce the release of Apache
flink-connector-pulsar v4.0.0. This release is compatible with Flink 1.17.x
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applica
The Apache Flink community is very happy to announce the release of Apache
flink-shaded v17.0.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release is available for download at:
https:
The Apache Flink community is very happy to announce the release of Apache
flink-connector-rabbitmq v3.0.1. This release is compatible with Flink
1.16.x and Flink 1.17.x
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate dat
Hi Anuj,
You can't provide the values for S3 in job code, since the S3 filesystems
are loaded via plugins. Credentials must be stored in flink-conf.yaml. The
recommended method for setting up credentials is by using IAM, not via
Access Keys. See
https://nightlies.apache.org/flink/flink-docs-master
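If Access Keys are unavoidable, a sketch of the flink-conf.yaml entries
(values are placeholders; IAM remains the recommended approach):

    # Read by the S3 filesystem plugins at startup; cannot be set in job code.
    s3.access-key: YOUR_ACCESS_KEY
    s3.secret-key: YOUR_SECRET_KEY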
>> *To:* Chesnay Schepler
>> *Cc:* Piotr Nowojski ; Alexis Sarda-Espinosa <
>> sarda.espin...@gmail.com>; Martijn Visser ;
>> d...@flink.apache.org ; user
>> *Subject:* Re: [Discussion] - Release major Flink version to support JDK
>> 17 (LTS)
>>
>
Hi,
Have you followed the documentation at
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/kafka/#security
?
Best regards,
Martijn
On Fri, Apr 21, 2023 at 3:00 AM Shammon FY wrote:
> Hi Naga
>
> Could you provide detailed error information? I think it may be us
Hi Kirti Dhar,
1. The SourceReader downloads the file, which is assigned to it by the
SplitEnumerator
2. This depends on the format; a BulkFormat like Parquet or ORC can be read
in batches of records at a time.
3. The SplitEnumerator runs on the JobManager, not on a TaskManager. Have
you read som
Hi Michael,
I'm looping in Andrey since he has worked a lot on the Opensearch
connector. A contribution is very welcome in case this can be improved.
Best regards,
Martijn
On Tue, Apr 18, 2023 at 8:45 AM Michael Hempel Jørgensen
wrote:
> Hi,
>
> we need to use OAuth2 (Client Credentials Flo
Hi,
Only the S3 Presto and S3 Hadoop filesystem plugins don't rely on Hadoop
dependencies; all other filesystems do. See
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/configuration/advanced/#hadoop-dependencies
for how to make them available.
Best regards,
Martijn
On Mon, Apr 17
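A minimal sketch of the documented approach, assuming a Hadoop distribution
is installed on the machine:

    # Makes the Hadoop dependencies visible to Flink's classloader.
    export HADOOP_CLASSPATH=$(hadoop classpath)
    bin/flink run ...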
docs, it works with Scala latest version
> without any issue. Otherwise, Scala users will have issues if they won't
> use an extra Scala wrapper for Java API. If that Scala wrapper is not an
> official part of Flink project, then it will be unsafe to use Scala at all.
> Günter has me
Hi Prateek,
You will need to stop and restart your jobs with the new connector
configuration.
Best regards,
Martijn
On Thu, Apr 13, 2023 at 10:10 AM Prateek Kohli
wrote:
> Hi,
>
> I am using Flink Kafka connectors to communicate with Kafka broker over
> mutual TLS.
> Is there any way or recom
Hi Alexey,
> Taking into account my Scala experience for the last 8 years, I predict
these wrappers will eventually be abandoned, unless such a Scala library is
a part of some bigger community like ASF.
For the past couple of years, there have been no maintainers for Scala in
the Flink community.
Hi everyone,
I want to open a discussion on the status of the Statefun Project [1] in
Apache Flink. As you might have noticed, there hasn't been much development
over the past months in the Statefun repository [2]. There is currently a
lack of active contributors and committers who are able to hel
Hi Tian,
Thanks for flagging this. This is the first time that we've released a
Flink version with connectors externalized, and we're still discussing
the best way to release connectors for new versions in a simple way.
This is something that we're trying to get done asap.
Best regards,
Martijn
Hi all,
I also saw a thread from Clayton Wohl [1] on this topic,
which I'm including in this discussion thread so that it doesn't get lost.
From my perspective, there are two main ways to get to Java 17:
1. The Flink community agrees that we upgrade Kryo to a later version,
which mea
Hi everyone,
I'm forwarding the following information from the ASF Travel Assistance
Committee (TAC):
---
Hi All,
The ASF Travel Assistance Committee is supporting taking up to six (6)
people
to attend Berlin Buzzwords [1] In June this year.
This includes Conference passes, and travel & accomm
Hi Reem,
My thinking is that this might be related to the recently reported
https://issues.apache.org/jira/browse/FLINK-31632.
Best regards,
Martijn
On Wed, Mar 29, 2023 at 7:07 PM Reem Razak via user
wrote:
> Hey Martijn,
>
> The version is 1.16.0
>
> On Wed, Mar 29, 2023 at
Hi Reem,
What's the Flink version where you're encountering this issue?
Best regards,
Martijn
On Wed, Mar 29, 2023 at 5:18 PM Reem Razak via user
wrote:
> Hey there!
>
> We are seeing a second Flink pipeline encountering similar issues when
> configuring both `withWatermarkAlignment` and `wit
st and need to test everything, so is there any chance of
> running them with Flink 1.10.1 by doing any configuration changes that
> make jobs visible in the YARN Web UI?
>
> On Tue, 28 Mar 2023 at 19:59, Martijn Visser
> wrote:
>
>> Hi,
>>
>> You can't m
You could consider trying out the experimental version upgrade that was
introduced as part of FLIP-190: https://cwiki.apache.org/confluence/x/KZBnCw
On Tue, Mar 21, 2023 at 12:11 PM Ashish Khatkar via user <
user@flink.apache.org> wrote:
> Hi Shammon,
>
> Schema evolution works with avro type sta
Hi,
This is tracked under https://issues.apache.org/jira/browse/FLINK-31612 and
a fix has been merged and will be made available when the next patch
version after Flink 1.16.1 is released.
Best regards,
Martijn
On Sat, Mar 25, 2023 at 9:37 AM ChangZhuo Chen (陳昌倬)
wrote:
> On Sat, Mar 25,
Hi,
You can't mix and match different versions; they all need to be the same
version. Flink 1.9.3 is no longer supported by the Flink community; I would
recommend upgrading to a still-supported version (currently Flink
1.16 or Flink 1.17).
Best regards,
Martijn
On Tue, Mar 28, 2023 at
Hi Danny,
Thanks a lot for driving this release!
Best regards,
Martijn
On Wed, Mar 15, 2023 at 5:38 PM Danny Cranmer
wrote:
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.15.4, which is the fourth bugfix release for the Apache Flink 1.15
> series.
>
> A
Hi Penny,
When you complete step 1 and step 2, you have subscribed to
the User mailing list, and you can then post the email you want to send
by performing step 3. I can see why the email can be
confusing, though.
Best regards,
Martijn
On Sat, Mar 11, 2023 at
Hi Razin,
I believe this is a false positive; the CVE talks about "Wildfly version
7.2.0.GA, 7.2.3.GA and 7.2.5.CR2 are believed to be vulnerable" which I
believe are related to https://github.com/wildfly/wildfly-core/
However, the included Wildfly is wildfly-ssl, which I believe is
https://githu
t this is kind of a blocker for coming
> to a sane state before proceeding.
>
> Is there any generic guide for version upgrading?
>
>
>
>
> On Mon, Feb 6, 2023 at 11:38 AM Martijn Visser
> wrote:
>
>> Hi Milind Vaidya,
>>
>> I would highly recomme
Hi Frank,
Parquet always requires Hadoop. There is a Parquet ticket to make it
possible to read/write Parquet without depending on Hadoop, but that's
still open. So in order for Flink to be able to work with Hadoop, it
requires the necessary Hadoop dependencies as outlined in
https://nightlies.apa
Hi Frank,
There's currently no workaround for this as far as I know. I'm looping in
Timo who at one point wanted to work on
https://issues.apache.org/jira/browse/FLINK-29267 to mitigate this.
Best regards,
Martijn
On Mon, Feb 13, 2023 at 9:16 AM Frank Lyaruu wrote:
> Hi Flink community, I'm t
Moving the Dev mailing list to BCC and adding the User ML in this thread
On Wed, Feb 8, 2023 at 8:08 AM Amir Hossein Sharifzadeh <
amirsharifza...@gmail.com> wrote:
> Thanks. If you look at the code, I am defining/creating the table as:
>
> create_kafka_source_ddl = """
> CREATE TABLE pay
Hi all,
Is there anything that the Flink community could do to raise awareness?
Perhaps it would be interesting for the maintainers to write a short blog
post about it, which potentially could drive traffic?
Best regards,
Martijn
On Sun, Feb 5, 2023 at 4:39 PM Alexey Novakov via user <
user@fli
Hi Milind Vaidya,
I would highly recommend upgrading your Flink cluster and
applications. Flink 1.9 was released in August 2019 and is no longer
supported by the community. Newer Kafka versions are supported on
newer Flink versions.
Best regards,
Martijn
On Mon, Feb 6, 2023 at 8:19 PM Milin
the release process. Our goal is to constantly improve it.
Feedback on what could be improved or things that didn't
go so well is appreciated.
Best regards,
Martijn Visser
now,
> it could perhaps be an issue later on. Will a certain partition going
> idle result in state buildup?
>
> Thanks,
> Vishal
> On 25 Jan 2023 at 9:14 PM +0530, Martijn Visser ,
> wrote:
>
> Hi Vishal,
>
> Could idleness be an issue? I could see t
Hi Vishal,
Could idleness be an issue? I could see that if idleness occurs and the
Kafka source does not go into an idle state, more internal state (to
commit Kafka transactions) can build up over time and ultimately cause an
out-of-memory problem. See
https://nightlies.apache.org/flink/flink-d