Hello, Dan
> On Feb 21, 2022, at 9:11 PM, Dan Serb wrote:
> 1. Have a processor that uses the Flink JDBC CDC Connector over the table that
> stores the information I need. (This is currently implemented and working.)
You mean you’ve implemented a Flink JDBC Connector? Maybe the Flink CDC
Connectors [1] would help you.
[1] https://github.com/ververica/flink-cdc-connectors
Hey Marco,
There’s unfortunately no perfect fit here, at least that I know of. A
Deployment will make it possible to upgrade the image, but does not support
container exits (e.g. if the Flink job completes, even successfully, K8s will
still restart the container). If you are only running long-lived …
Thanks a lot, Yufei and Wong.
I was able to get a version working by combining both the aspects mentioned in
each of your responses.
1. Trying the sample code base that Wong mentioned below resulted in no
response from the JobManager. I had to use the non-SQL connector jar in my
Python script (see the sketch below).
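A minimal version of that setup, following the PyFlink 1.15 Pulsar source example; the jar path, URLs, topic, and subscription name below are placeholders:

from pyflink.common import SimpleStringSchema, WatermarkStrategy
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors import (PulsarSource,
                                           PulsarDeserializationSchema,
                                           StartCursor, SubscriptionType)

env = StreamExecutionEnvironment.get_execution_environment()
# Load the plain (non-SQL) connector jar; the path is a placeholder.
env.add_jars("file:///path/to/flink-connector-pulsar-1.15-SNAPSHOT.jar")

source = PulsarSource.builder() \
    .set_service_url("pulsar://localhost:6650") \
    .set_admin_url("http://localhost:8080") \
    .set_start_cursor(StartCursor.earliest()) \
    .set_topics("my-topic") \
    .set_deserialization_schema(
        PulsarDeserializationSchema.flink_schema(SimpleStringSchema())) \
    .set_subscription_name("my-subscription") \
    .set_subscription_type(SubscriptionType.Exclusive) \
    .build()

env.from_source(source, WatermarkStrategy.for_monotonous_timestamps(),
                "pulsar source").print()
env.execute()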
Hello Flink community,
I am deploying a Flink application cluster using a Helm chart. The problem
is that the JobManager component type is a Kubernetes "Job", and with Helm I
can't do an upgrade of the chart to change the application image version,
because Helm is unable to upgrade the Docker image.
Thanks, Guowei and Francis, for your references.
On Monday, February 21, 2022, 01:05:58 AM EST, Guowei Ma
wrote:
Hi,
You can try Flink's CDC connector [1] to see if it meets your needs.
[1] https://github.com/ververica/flink-cdc-connectors
Best,
Guowei
On Mon, Feb 21, 2022 at 6:23 AM …
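As a rough illustration of that suggestion, here is a minimal PyFlink sketch that registers a MySQL table via the mysql-cdc connector; the table schema and connection settings are placeholders, and the mysql-cdc connector jar is assumed to be on the classpath:

from pyflink.table import EnvironmentSettings, TableEnvironment

# CDC sources are unbounded, so use streaming mode.
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Register a table backed by the MySQL binlog (placeholder schema/credentials).
t_env.execute_sql("""
    CREATE TABLE users (
        id INT,
        name STRING,
        PRIMARY KEY (id) NOT ENFORCED
    ) WITH (
        'connector' = 'mysql-cdc',
        'hostname' = 'localhost',
        'port' = '3306',
        'username' = 'flinkuser',
        'password' = 'flinkpw',
        'database-name' = 'mydb',
        'table-name' = 'users'
    )
""")

# Continuously print changes as they arrive.
t_env.execute_sql("SELECT * FROM users").print()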
Hello all,
I kind of need the community’s help with some ideas, as I’m quite new to
Flink and I feel I need a bit of guidance regarding an implementation I’m
working on.
What I need to do is have a way to store a MySQL table in Flink and expose
that data to other jobs, as …
On Mon, Feb 21, 2022 at 19:38, Luning Wong wrote:
> import logging
> import sys
>
> from pyflink.common import SimpleStringSchema, WatermarkStrategy
> from pyflink.datastream import StreamExecutionEnvironment
> from pyflink.datastream.connectors import (PulsarSource,
>     PulsarDeserializationSchema, SubscriptionType)
Hello all,
I have to perform a join between two large CSV sets that do not fit in RAM. I
process these two files in batch mode. I also need a side output to catch CSV
processing errors.
So my question is: what is the best way to do this kind of join operation? I
think I should use a ValueState …
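One possible shape for this, as a sketch only: run the DataStream API in batch mode, connect the two keyed streams, and buffer each side in ValueState inside a KeyedCoProcessFunction. All names and the one-row-per-key assumption here are illustrative:

from pyflink.common import Types
from pyflink.datastream import StreamExecutionEnvironment, RuntimeExecutionMode
from pyflink.datastream.functions import KeyedCoProcessFunction, RuntimeContext
from pyflink.datastream.state import ValueStateDescriptor

class BufferingJoin(KeyedCoProcessFunction):
    def open(self, ctx: RuntimeContext):
        # Hold one row per key per side until the matching row arrives.
        self.left = ctx.get_state(ValueStateDescriptor("left", Types.STRING()))
        self.right = ctx.get_state(ValueStateDescriptor("right", Types.STRING()))

    def process_element1(self, value, ctx):
        match = self.right.value()
        if match is not None:
            yield value[1] + "," + match
        else:
            self.left.update(value[1])

    def process_element2(self, value, ctx):
        match = self.left.value()
        if match is not None:
            yield match + "," + value[1]
        else:
            self.right.update(value[1])

env = StreamExecutionEnvironment.get_execution_environment()
env.set_runtime_mode(RuntimeExecutionMode.BATCH)

# Stand-ins for the two parsed CSV streams: (key, row) pairs.
left = env.from_collection([("k1", "a"), ("k2", "b")])
right = env.from_collection([("k1", "x")])

joined = left.connect(right) \
    .key_by(lambda t: t[0], lambda t: t[0]) \
    .process(BufferingJoin(), output_type=Types.STRING())
joined.print()
env.execute("valuestate-join-sketch")

For the CSV error records, the DataStream API's side outputs (OutputTag) are the usual tool, though whether PyFlink exposes them depends on the Flink version.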
Hi Ananth,
From the steps you described, you were using
`flink-sql-connector-pulsar-1.15-SNAPSHOT.jar`; however, to my knowledge the
Pulsar connector has not supported the Table API yet, so would you mind
considering the `flink-connector-pulsar-1.14.jar` (without sql,
though the classes …
Thanks Guowei.
A small correction to the telnet command result below: I had a typo in the
telnet command earlier (I did not separate the port from the host name). Issuing
the proper telnet command resolved the JobManager's host properly.
Regards,
Ananth
From: Guowei Ma
Date: Monday, 21 February 2022
Thanks, Ananth, for your clarification. But I am not an expert on Pulsar,
so I will cc the author of the connector to have a look. Would Yufei like to
give some insight?
Best,
Guowei
On Mon, Feb 21, 2022 at 2:10 PM Ananth Gundabattula <
agundabatt...@darwinium.com> wrote:
> Thanks for the response, Guowei …
Hi Ryan,
Thanks for bringing up this topic. Currently, your analysis is
correct, and reading Parquet files outside the Table API is rather
difficult. The community started an effort in Flink 1.15 to
restructure some of the formats to make them better applicable to the
DataStream and Table API. You …