The Apache Flink community is very happy to announce the release of Apache
flink-connector-kafka 3.2.0 for Flink 1.18 and 1.19.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release is
The Apache Flink community is very happy to announce the release of Apache
flink-connector-jdbc 3.2.0 for Flink 1.18 and 1.19.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release is a
Apologies, this was RC2, not RC1.
On Fri, Jun 7, 2024 at 11:12 AM Danny Cranmer
wrote:
> I'm happy to announce that we have unanimously approved this release.
>
> There are 7 approving votes, 3 of which are binding:
> * Ahmed Hamdy
> * Hang Ruan
> * Leonard Xu (binding)
I'm happy to announce that we have unanimously approved this release.
There are 7 approving votes, 3 of which are binding:
* Ahmed Hamdy
* Hang Ruan
* Leonard Xu (binding)
* Yuepeng Pan
* Zhongqiang Gong
* Rui Fan (binding)
* Weijie Guo (binding)
There was one -1 vote that was cancelled.
* Yuepen
The Apache Flink community is very happy to announce the release of Apache
flink-connector-cassandra 3.2.0 for Flink 1.18 and 1.19.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release
We would like to thank all contributors of the Apache Flink community who
made this release possible!
Regards,
Danny Cranmer
The Apache Flink community is very happy to announce the release of Apache
flink-connector-gcp-pubsub 3.1.0 for Flink 1.18 and 1.19.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The releas
We would like to thank all contributors of the Apache Flink community who
made this release possible!
Regards,
Danny Cranmer
The Apache Flink community is very happy to announce the release of Apache
flink-connector-opensearch 1.1.0. This release supports Apache Flink 1.17
and 1.18.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
We would like to thank all contributors of the Apache Flink community who
made this release possible!
Best Regards
Danny Cranmer
Hey all,
I believe this is because of FLINK-30400. Looking at the pom, I cannot see
any other dependencies that would cause a problem. To work around this, can
you try to remove that dependency from your build?
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>3.0.1-1.18</version>
</dependency>
Hey,
The FlinkKinesisProducer is deprecated in favour of the KinesisSink. The
new sink does not rely on KPL, so this would not be a problem here. Is
there a reason you are using the FlinkKinesisProducer instead of
KinesisSink?
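For reference, a minimal migration sketch using the KinesisStreamsSink
builder (the stream name, region and partition key logic below are
placeholders, not taken from your setup):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.aws.config.AWSConfigConstants;
import org.apache.flink.connector.kinesis.sink.KinesisStreamsSink;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KinesisSinkMigrationSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Client properties for the sink; the region is a placeholder.
        Properties sinkProperties = new Properties();
        sinkProperties.setProperty(AWSConfigConstants.AWS_REGION, "us-east-1");

        // Unlike FlinkKinesisProducer, this sink does not use the KPL.
        KinesisStreamsSink<String> kinesisSink =
                KinesisStreamsSink.<String>builder()
                        .setKinesisClientProperties(sinkProperties)
                        .setStreamName("my-output-stream")
                        .setSerializationSchema(new SimpleStringSchema())
                        .setPartitionKeyGenerator(element -> String.valueOf(element.hashCode()))
                        .build();

        env.fromElements("a", "b", "c").sinkTo(kinesisSink);
        env.execute("kinesis-sink-migration-sketch");
    }
}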
Thanks for the deep dive, generally speaking I agree it would be
p
The Apache Flink community is very happy to announce the release of Apache
flink-connector-mongodb 1.0.2.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release is available for download
The Apache Flink community is very happy to announce the release of Apache
flink-connector-cassandra 3.1.0.
This connector supports Flink 1.16 and 1.17.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
appli
The Apache Flink community is very happy to announce the release of Apache
flink-connector-jdbc 3.1.0. This connector supports Flink 1.16 and 1.17.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applicatio
Hello,
Kinesalite does not support EFO, so unfortunately you will need to hit the
real service for any end-to-end test.
Thanks,
Danny
On Tue, 25 Apr 2023, 20:10 Charles Tan, wrote:
> Hi all,
>
> I’ve tried a simple Flink application which uses FlinkKinesisConsumer. I
> noticed that when trying
The Apache Flink community is very happy to announce the release of Apache
flink-connector-aws 4.1.0 for Apache Flink 1.17.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release is avai
The Apache Flink community is very happy to announce the release of Apache
flink-connector-mongodb 1.0.1 for Apache Flink 1.16 and 1.17.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The re
The Apache Flink community is very happy to announce the release of Apache
flink-connector-aws v4.1.0
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release is available for download at:
The Apache Flink community is very happy to announce the release of Apache
flink-connector-mongodb v1.0.0
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release is available for download
The Apache Flink community is very happy to announce the release of Apache
Flink 1.15.4, which is the fourth bugfix release for the Apache Flink 1.15
series.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
/connectors/datastream/opensearch/
- Table API:
https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/table/opensearch/
We would like to thank all contributors of the Apache Flink community who
made this release possible!
Regards,
Danny Cranmer
Hello David,
There is a FLIP [1] to add native Glue Catalog support, and we already have
Glue Schema Registry format plugins [2][3]; however, these are DataStream
API only. Are you intending to use just the Glue schema features, or to
leverage other features? Would either of the things I mentioned
sion=12352538
We would like to thank all contributors of the Apache Flink community who
made this release possible!
Best Regards
Danny Cranmer
We would like to thank all contributors of the Apache Flink community who
made this release possible!
Regards,
Danny Cranmer
Hello,
By default the sink will not fail; the underlying connector has a
"failOnError" flag which defaults to false. Unfortunately this cannot be set
via the Table API in Flink 1.14, however from 1.15 it can be set with
'sink.fail-on-error' = 'true'.
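For example, a minimal sketch in 1.15 (the table definition and stream below
are placeholders; the relevant part is the 'sink.fail-on-error' option):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KinesisFailOnErrorSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Hypothetical sink table; 'sink.fail-on-error' = 'true' surfaces write errors
        // as job failures instead of them being swallowed.
        tEnv.executeSql(
                "CREATE TABLE kinesis_sink (\n"
                        + "  user_id STRING,\n"
                        + "  event_time TIMESTAMP(3)\n"
                        + ") WITH (\n"
                        + "  'connector' = 'kinesis',\n"
                        + "  'stream' = 'my-output-stream',\n"
                        + "  'aws.region' = 'us-east-1',\n"
                        + "  'format' = 'json',\n"
                        + "  'sink.fail-on-error' = 'true'\n"
                        + ")");
    }
}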
Thanks
On Wed, Nov 30, 2022 at 5:41 AM Dan Hill wrote:
>
Hey Matt,
Thanks for the feedback, I have updated the SinkIntoDynamoDb [1] sample to
avoid this in future. We have recently added support for @DynamoDbBean
annotated POJOs, which you might find interesting. This removes the need to
create a custom ElementConverter altogether; see SinkDynamoDbBean
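As a rough illustration, such a POJO only needs the AWS SDK v2
enhanced-client annotations (OrderEvent here is a hypothetical type, not
taken from the sample):

import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbBean;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbPartitionKey;

// With @DynamoDbBean the item mapping is derived from the getters/setters,
// so no hand-written ElementConverter is required.
@DynamoDbBean
public class OrderEvent {

    private String orderId;
    private long amountCents;

    @DynamoDbPartitionKey
    public String getOrderId() {
        return orderId;
    }

    public void setOrderId(String orderId) {
        this.orderId = orderId;
    }

    public long getAmountCents() {
        return amountCents;
    }

    public void setAmountCents(long amountCents) {
        this.amountCents = amountCents;
    }
}

The connector's bean-based element converter can then turn such POJOs into
DynamoDB write requests, as in the SinkDynamoDbBean sample.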
Hello Prasanna,
1) Of course we would always recommend you keep up to date. To receive
support and fixes from the Flink community you should try to stick to the
current/previous minor version, as per the policy. Releases for older
versions are rare and typically only performed under
exceptional ci
Hey, the Akka frame maximum size is 2GB, which is limited by the maximum
Java byte[] size. I am not sure why your config is being rejected. If your
Akka frames are getting large you can consider reducing
state.storage.fs.memory-threshold [1]. If you are using RocksDB with
incremental checkpoints, triggering co
Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351829
We would like to thank all contributors of the Apache Flink community who
made this release possible!
Regards,
Danny Cranmer
Hi Zain,
Glad you found the problem, good luck!
Thanks,
Danny Cranmer
On Fri, May 20, 2022 at 10:05 PM Zain Haider Nemati
wrote:
> Hi Danny,
> I looked into it in a bit more thorough detail, the bottleneck seems to be
> the transform function which is at 100% and causing back press
the timer,
and it does not look overly bursty. Seems to sit at around 3 records per 15
seconds, or 1 record every 5 seconds. This seems very low, is it expected?
Thanks,
Danny Cranmer
On Mon, May 16, 2022 at 10:57 PM Zain Haider Nemati
wrote:
> Hey Danny,
> Thanks for having a look at the
Hello Zain,
When you say "converting them to chunks of <= 1MB " does this mean you are
creating these chunks in a custom Flink operator, or you are relying on
the connector to do so? If you are generating your own chunks you can
potentially disable Aggregation at the sink.
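If the sink is the KPL-based FlinkKinesisProducer, a minimal sketch of
switching aggregation off would look like this (stream name and region are
placeholders; "AggregationEnabled" is a standard KPL configuration key):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;

public class DisableAggregationSketch {

    static FlinkKinesisProducer<String> buildProducer() {
        Properties producerConfig = new Properties();
        producerConfig.setProperty(AWSConfigConstants.AWS_REGION, "us-east-1");
        // Turn off KPL record aggregation when records are already pre-chunked upstream.
        producerConfig.setProperty("AggregationEnabled", "false");

        FlinkKinesisProducer<String> producer =
                new FlinkKinesisProducer<>(new SimpleStringSchema(), producerConfig);
        producer.setDefaultStream("my-output-stream");
        producer.setDefaultPartition("0");
        return producer;
    }
}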
Your throughput is incr
Hey Guoqin,
In order to achieve this you would need to either:
- Restart the job and resume from an old savepoint (taken before the events
you want to replay), assuming the state is still compatible with your
bugfix, or
- Restart the job without any state and seed the consumer with the start
position.
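A rough sketch of that second option with the FlinkKinesisConsumer (stream
name, region and timestamp are placeholders, not from this thread):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

public class ReplayFromTimestampSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties consumerConfig = new Properties();
        consumerConfig.setProperty(AWSConfigConstants.AWS_REGION, "us-east-1");
        // Start reading before the events that need to be replayed.
        consumerConfig.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "AT_TIMESTAMP");
        consumerConfig.setProperty(ConsumerConfigConstants.STREAM_INITIAL_TIMESTAMP,
                "2022-05-01T00:00:00.000-00:00");

        env.addSource(new FlinkKinesisConsumer<>("my-input-stream", new SimpleStringSchema(), consumerConfig))
                .print();

        env.execute("replay-from-timestamp-sketch");
    }
}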
Hello Vijay,
> Once i do that my flink consumer need to be restarted with changed
parallelism.
Why is this? The Flink consumer continuously scans for new shards, and will
auto scale up/down the number of shard consumer threads to
accommodate Kinesis resharding. Flink job/operator parallelism does
+Jeremy who can help answer this question.
Thanks,
On Wed, Feb 16, 2022 at 10:26 AM Puneet Duggal
wrote:
> Hi,
>
> Just wanted to ask the community various pros and cons of deploying flink
> using AWS Kinesis vs using K8s application mode. Currently we are deploying
> flink cluster in HA sessio
ntribution process.
>
>
>
> Thanks
>
> -Saravan
>
>
>
> *From: *Danny Cranmer
> *Date: *Wednesday, January 19, 2022 at 3:10 AM
> *To: *Gnanamoorthy, Saravanan
> *Cc: *user@flink.apache.org
> *Subject: *Re: Flink Kinesis connector - EFO connection error w
/org/apache/flink/streaming/connectors/kinesis/util/AwsV2Util.java#L113
Thanks,
Danny Cranmer.
On Tue, Jan 18, 2022 at 12:52 AM Gnanamoorthy, Saravanan <
saravanan.gnanamoor...@fmr.com> wrote:
> Hello,
>
> We are using Flink kinesis connector for processing the streaming data
>
Hey Tarun,
Your application looks ok and should work. I did notice this, however I
cannot imagine it is an issue, unless you are not setting the region
correctly:
- getKafkaConsumerProperties()
Make sure you are setting the correct region
(AWSConfigConstants.AWS_REGION) in the properties.
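In other words, something along these lines, assuming the Kinesis consumer
properties (the region value here is only a placeholder):

import java.util.Properties;

import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;

public class ConsumerPropertiesSketch {

    static Properties buildConsumerProperties() {
        Properties props = new Properties();
        // The region must match the region the stream actually lives in.
        props.setProperty(AWSConfigConstants.AWS_REGION, "us-east-1");
        return props;
    }
}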
convert to a number.
- shardId-
- shardId-0001
- shardId-0002
Thanks,
Danny Cranmer
On Mon, Jul 26, 2021 at 3:11 AM Caizhi Weng wrote:
> Hi!
>
> It's stated on the line just below that in the document.
>
> It is recommended to monitor the sh