> <https://github.com/apache/flink-cdc/pull/3633>
>
>
>
> On Oct 28, 2024, at 22:46, Anil Dasari wrote:
>
> Thanks for the information.
> I created https://issues.apache.org/jira/browse/FLINK-36605
> yesterday.
r a period of time, and upgrading to Debezium 2.x
> also requires a lot of adaptation work, so the expected upgrade time should
> be in Flink 2.3 or one or two versions after that.
>
>
> On Oct 23, 2024, at 13:40, Anil Dasari wrote:
>
> Hi,
>
> We are planning to explore Flink CDC for
?
Thanks
On Fri, Oct 25, 2024 at 2:35 AM Anil Dasari wrote:
> Hello all,
>
> Are there Flink patterns that support microbatching and ensure all data
> for a microbatch is written to a specific prefix in the destination with
> exactly-once delivery?
> I’ve explored both window a
> <https://nightlies.apache.org/flink/flink-docs-release-1.20/api/java/org/apache/flink/api/connector/sink2/TwoPhaseCommittingSink.html>
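The linked TwoPhaseCommittingSink splits the write into a pre-commit (staging) phase and a commit phase, which is what makes exactly-once delivery to a destination prefix possible. Below is a framework-agnostic sketch of that protocol only; the class and method names are illustrative and not the Flink API, and an in-memory map stands in for S3:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of a two-phase committing sink (NOT the Flink API).
// Phase 1 stages data under a temporary key per checkpoint; phase 2
// publishes it atomically. Until commit runs, readers never see the data.
public class TwoPhaseSketch {
    // A "committable": the staged location plus its final destination.
    record Committable(String stagedKey, String finalKey) {}

    // Stands in for the object store (e.g. S3).
    static final Map<String, String> store = new HashMap<>();

    // Phase 1: the writer stages data under a checkpoint-scoped temp key.
    static Committable write(long checkpointId, String prefix, String data) {
        String staged = "_staging/" + checkpointId + "/" + prefix;
        store.put(staged, data);
        return new Committable(staged, prefix + "/part-0");
    }

    // Phase 2: the committer publishes staged data (a rename/move),
    // run only after the checkpoint completes.
    static void commit(Committable c) {
        store.put(c.finalKey(), store.remove(c.stagedKey()));
    }

    public static void main(String[] args) {
        Committable c = write(42L, "category=A", "row1");
        // Before commit only the staged copy exists.
        System.out.println(store.containsKey("category=A/part-0")); // false
        commit(c);
        System.out.println(store.get("category=A/part-0")); // row1
    }
}
```

On failure before commit, the staged keys are simply discarded and the checkpoint replays, which is the core of the exactly-once guarantee.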
>
>
>
> On Oct 14, 2024, at 12:49, Anil Dasari wrote:
>
> Hello,
>
> I am looking to implement a Flink sink for the fol
Hi,
We are planning to explore Flink CDC for our CDC pipelines and have quickly
noticed that Flink CDC is still using DBZ 1.9.2.Final.
DBZ 2.0.0.Final is a major release that requires JDK 11, while the latest
version, DBZ 3.0.0.Final, requires JDK 17. Currently, Flink CDC 3.2.0 is
using Flink 1.1
time. Any feedback is appreciated.
Thanks.
On Mon, Oct 14, 2024 at 8:30 PM Yanquan Lv wrote:
> Sorry, I couldn't find any clear and detailed user guidance other than
> FLIP in the official documentation too.
>
>
> On Oct 15, 2024, at 01:39, Anil Dasari wrote:
>
> Hi Yanquan,
>
FLIPs or
> specific implementations.
>
> Anil Dasari wrote on Tue, Oct 15, 2024 at 00:55:
>
>> Got it, thanks.
>> Sink improvements span many FLIP Confluence pages, i.e. FLIP-143, 171, 177,
>> and 191. So, is there a sequence of steps or a flow chart to better
>> understand
APIs are almost
> identical.
>
> [1]
> <https://nightlies.apache.org/flink/flink-docs-master/api/java/org/apache/flink/api/connector/sink2/SupportsCommitter.html>
> <https://nightlies.apache.org/flink/flink-docs-release-1.20/api/java/org/apache/flink/api/connector/sink2/TwoPhaseCommittingSink.html>
>
>
>
> On Oct 14, 2024, at 12:49, Anil Dasari wrote:
>
> Hello,
>
> I am looking to implement a Flink sink for the following use case, where
>
Hello,
I am looking to implement a Flink sink for the following use case, where
the steps below are executed for each microbatch (using Spark terminology)
or trigger:
1. Group data by category and write it to S3 under its respective prefix.
2. Update category metrics in a manifest file stor
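The two per-microbatch steps above can be sketched in plain Java, independent of Flink; the helper names and the `s3://` prefix layout are illustrative assumptions, not the poster's actual code:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

// Illustrative sketch of one microbatch: group records by category into
// per-category prefixes, then derive the metrics a manifest would record.
public class MicrobatchSketch {
    record Rec(String category, String payload) {}

    // Step 1: bucket records under their category's destination prefix.
    static Map<String, List<String>> bucketByPrefix(List<Rec> batch, String root) {
        return batch.stream().collect(Collectors.groupingBy(
            r -> root + "/category=" + r.category(),
            Collectors.mapping(Rec::payload, Collectors.toList())));
    }

    // Step 2: manifest metrics, e.g. record count per category prefix.
    static Map<String, Integer> manifest(Map<String, List<String>> buckets) {
        Map<String, Integer> m = new TreeMap<>();
        buckets.forEach((prefix, rows) -> m.put(prefix, rows.size()));
        return m;
    }

    public static void main(String[] args) {
        List<Rec> batch = List.of(
            new Rec("a", "x"), new Rec("a", "y"), new Rec("b", "z"));
        Map<String, List<String>> buckets = bucketByPrefix(batch, "s3://bucket");
        System.out.println(manifest(buckets));
    }
}
```

In a real sink both steps would hang off the commit phase of a checkpoint, so the files and the manifest update become visible together.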
Hello Leonard, could you please send an invite to join the Slack community?
Thanks in advance.
Regards,
Anil
On Wed, Oct 9, 2024 at 8:29 PM Leonard Xu wrote:
> Welcome Ken, I’ve sent the invitation to your email.
>
>
> Best,
> Leonard
>
>
> On Oct 10, 2024, at 3:52 AM, Ken CHUAN YU wrote:
>
> Hi there
>
>
java>
> Best Regards
> Ahmed Hamdy
>
>
> On Sun, 6 Oct 2024 at 16:48, Anil Dasari wrote:
>
>> Hi Ahmed,
>> Thanks for the response.
>> This is the part that I find unclear in the documentation and FLIP-27.
>> The actual split assignment happens in the
>
> <https://github.com/apache/flink-connector-kafka/blob/main/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/source/enumerator/KafkaSourceEnumerator.java#L286>
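At its core, the enumerator's job referenced above is distributing discovered splits across reader subtasks. A minimal round-robin model of that step (illustrative only; the real KafkaSourceEnumerator assigns partitions through its SplitEnumeratorContext, not like this) might look like:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of FLIP-27 split assignment: splits are handed out
// round-robin to reader subtasks 0..parallelism-1.
public class AssignmentSketch {
    static Map<Integer, List<String>> assignRoundRobin(List<String> splits,
                                                       int parallelism) {
        Map<Integer, List<String>> assignment = new HashMap<>();
        for (int i = 0; i < splits.size(); i++) {
            assignment.computeIfAbsent(i % parallelism, k -> new ArrayList<>())
                      .add(splits.get(i));
        }
        return assignment;
    }

    public static void main(String[] args) {
        // Five "partitions" spread over two readers.
        System.out.println(assignRoundRobin(
            List.of("p0", "p1", "p2", "p3", "p4"), 2));
    }
}
```

The important point from FLIP-27 is that this decision lives on the JobManager side, in the enumerator, while readers only consume whatever splits they are assigned.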
> Best Regards
> Ahmed Hamdy
>
>
>
Hello,
I have implemented a custom source that reads tables in parallel, with each
split corresponding to a table. The custom source implementation can be
found here:
https://github.com/adasari/mastering-flink/blob/main/app/src/main/java/org/example/paralleljdbc/DatabaseSource.java
However, it see
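The one-split-per-table idea behind such a source can be modeled in a few lines; the names below are illustrative, not the actual DatabaseSource code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of "one split per table": the enumerator turns the
// table list into splits, each carrying the query one reader will run.
public class TableSplitSketch {
    record TableSplit(String splitId, String query) {}

    static List<TableSplit> discoverSplits(List<String> tables) {
        List<TableSplit> splits = new ArrayList<>();
        for (String table : tables) {
            splits.add(new TableSplit(table, "SELECT * FROM " + table));
        }
        return splits;
    }

    public static void main(String[] args) {
        // Two tables -> two splits, readable by two parallel subtasks.
        for (TableSplit s : discoverSplits(List.of("orders", "users"))) {
            System.out.println(s.splitId() + " -> " + s.query());
        }
    }
}
```

With this layout, parallelism is naturally capped by the number of tables, since each split is consumed by exactly one reader.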