-connector-kafka 3.1.0
What am I missing?
On Fri, 7 Mar 2025 at 8:13 AM, Taher Koitawala wrote:
> Hi Leonard,
> Yes, I did see Xianqian’s reply; however, I thought my email did not go
> through, as the community is often very active but I did not receive a
> response until Xian
on/sink/PaimonMetadataApplier.java
> > [3]
> https://nightlies.apache.org/flink/flink-cdc-docs-release-3.3/docs/core-concept/data-pipeline/
> > [4] https://github.com/apache/flink-cdc/pull/3445
> > [5] https://github.com/apache/flink-cdc/pull/3507
> >
> > Best Regards
Any help here?
On Sun, 2 Mar 2025 at 1:40 PM, Taher Koitawala wrote:
> Hi All,
> Curious question! Has anyone done a benchmark of native Flink
> vs. the Beam Flink runner?
>
> I want to ask whether there are differences in the following
> areas (consider S
Hi Devs,
Any response here?
On Tue, 11 Feb 2025 at 11:59 AM, Taher Koitawala wrote:
> Hi Devs,
> As a POC, we are trying to create a streaming pipeline from MSSQL CDC
> to Paimon:
>
> To do this we are doing:
> 1. MSSQL Server CDC operator
> 2. Transform
4. Chaining
I ask this to assess which approach is better for us, as we want to
choose an engine for our stream processing platform.
Regards,
Taher Koitawala
mentioned above!
Regards,
Taher Koitawala
Folks, any idea on this?
On Wed, 15 Jan 2025 at 7:08 AM, Taher Koitawala wrote:
> Adding the Flink community here just in case anyone has more info on this.
>
> On Tue, 14 Jan 2025 at 1:00 PM, Taher Koitawala
> wrote:
>
>> Hi All,
>> Been moving from Flink develo
Adding the Flink community here just in case anyone has more info on this.
On Tue, 14 Jan 2025 at 1:00 PM, Taher Koitawala wrote:
> Hi All,
> Having moved from Flink development to Beam development, I am
> using FileIO ParquetIO.
>
> Folks in Flink we could use strea
//mvnrepository.com/artifact/org.apache.iceberg/iceberg-flink-runtime-1.18
>
>
> Best,
> Feng
>
>
>
> On Thu, Sep 5, 2024 at 9:21 PM Taher Koitawala wrote:
>
> > Hi All,
> > I am using Flink 1.18.1 with Iceberg.
> >
> > I get the following er
rg.apache.flink.runtime.taskmanager.Task.doRun(Task.java:751)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
at java.base/java.lang.Thread.run(Thread.java:1570)
Regards,
Taher Koitawala
lost set of records only?
3. If, say, 200 records are to be sent and only 100 are sent before
Flink fails, on restart does it send only the remaining 100?
If AsyncIO is not the right operator, can you please tell me what we can
use instead to achieve all of this?
Regards,
Taher Koitawala
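To make the replay concern above concrete, here is a toy sketch in plain Python (not Flink itself, and not Flink internals): an at-least-once delivery loop where offsets are checkpointed periodically, and after a failure delivery resumes from the last checkpoint. Records sent after that checkpoint but before the crash get delivered twice unless the sink is idempotent. The function, checkpoint interval, and crash point are all illustrative assumptions.

```python
# Toy model of at-least-once delivery with checkpoint-based replay.
# NOT Flink code: a minimal simulation of the scenario in the question
# (200 records, failure after 100 are sent, restart from a checkpoint).

def run_with_replay(records, checkpoint_every, crash_after, sink):
    """Deliver records to sink; checkpoint the offset every
    `checkpoint_every` records; crash once after `crash_after` sends
    and resume from the last checkpointed offset."""
    committed = 0        # last durably checkpointed offset
    sent = 0
    crashed = False
    i = 0
    while i < len(records):
        sink.append(records[i])
        sent += 1
        i += 1
        if not crashed and sent == crash_after:
            crashed = True
            i = committed        # restart: replay from last checkpoint
            continue
        if i % checkpoint_every == 0:
            committed = i        # checkpoint the current offset
    return sink

records = list(range(200))
sink = run_with_replay(records, checkpoint_every=50, crash_after=100, sink=[])
# 100 records went out before the crash, but only 50 were checkpointed,
# so records 50..99 are delivered a second time: 250 appends in total.
```

So with plain replay the answer to "does it send only 100 more?" is no: everything after the last successful checkpoint is re-sent, which is why sinks are usually made idempotent or transactional.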
.java#L186
> > - RocksIncrementalSnapshotStrategy:
> >
> >
> https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/snapshot/RocksIncrementalSnapshotStrategy.java#L291
> >
> > Tah
rocks
files after checkpoints?
Please share the code reference where this is done in flink.
Regards,
Taher Koitawala
wrote:
> Hi Taher,
>
> Could you explain your use case a bit more, and what you expect Flink SQL
> to support?
> That could help us to better understand and plan the future roadmap.
>
> Best,
> Jark
>
> On Wed, 5 May 2021 at 19:42, Taher Koitawala wrote:
>
> &
appreciated.
Regards,
Taher Koitawala
On Wed, May 5, 2021 at 3:53 PM Jark Wu wrote:
> Hi Taher,
>
> Currently, Flink (SQL) CDC doesn't support automatic schema changes
> and doesn't support consuming schema change events in the source.
> But you can upgrade the schema manually,
modify column data
type query that hits the source RDBMS, how does Flink handle that schema
change, and what changes are supported? If someone can give a full example,
it would be very helpful.
Regards,
Taher Koitawala
As far as I know, Atlas entries can be created with a REST call. Can we not
create an abstracted Flink operator that makes the REST call on job
execution/submission?
Regards,
Taher Koitawala
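A minimal sketch of the idea above, using only the standard library: build an entity payload for the Atlas v2 REST endpoint (POST /api/atlas/v2/entity) and post it from a job-submission hook. The type name "flink_application" and the attribute set are assumptions for illustration; a real deployment would need a matching Atlas type definition and authentication.

```python
# Hypothetical sketch: register a submitted Flink job as an Atlas entity.
# The "flink_application" type and its attributes are ASSUMED for
# illustration; they are not a published Atlas type.
import json
import urllib.request

def build_entity(job_name, cluster):
    """Build an Atlas v2 entity payload describing a Flink job."""
    return {
        "entity": {
            "typeName": "flink_application",     # assumed custom type
            "attributes": {
                "name": job_name,
                "qualifiedName": f"{job_name}@{cluster}",
            },
        }
    }

def register(atlas_url, payload):
    """POST the entity to Atlas; would be called on job submission."""
    req = urllib.request.Request(
        f"{atlas_url}/api/atlas/v2/entity",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The wrapper could be invoked from whatever submission tooling wraps `flink run`, keeping the lineage call out of the job code itself.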
On Wed, Feb 5, 2020, 10:16 PM Flavio Pompermaier
wrote:
> Hi Gyula,
> thanks for taking c
AFAIK, Flink SQL is strong for production only if you know exactly what
queries you are running. If you open up Flink SQL to end users, then no:
Flink SQL is still not as mature, and not as rich in functionality, as
Spark SQL.
On Mon, Sep 16, 20
Hi Shilpa,
The easiest way to do this is to make the RocksDB state queryable,
then use the Flink queryable state client to access the state you have
created.
Regards
Taher Koitawala
On Tue, Jul 30, 2019, 4:58 PM Shilpa Deshpande wrote:
> Hello All,
>
> I am new to Apache Fli
Sounds smashing; I think the initial integration will help 60% or so of
Flink SQL users, and a lot of other use cases will emerge once we solve the
first one.
Thanks,
Taher Koitawala
On Fri 12 Oct, 2018, 10:13 AM Zhang, Xuefu, wrote:
> Hi Taher,
>
> Thank you for your input. I think you e
ing"
The way we use this is:
Using streaming_table as configuration select count(*) from processingtable
as streaming;
This way users can pass Flink SQL info easily and get rid of the Flink
SQL configuration file altogether. This is simple and easy to understand,
and I think most user
tream;
select count(*) from flink_mailing_list process as batch;
This way we could completely get rid of Flink SQL configuration files.
Thanks,
Taher Koitawala
Integrating
On Fri 12 Oct, 2018, 2:35 AM Zhang, Xuefu, wrote:
> Hi Rong,
>
> Thanks for your feedback. Some of my earlier comment