Hi,
We have a job that reads from Kafka and writes to some endpoints. The
job does not have any shuffling steps. I implemented it with multiple
steps, and Flink chained those operators into one operator at submission
time. However, I see that all operators are doing checkpointing.
Is there any way to crea
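(A sketch in case it helps, based on my understanding of the runtime, so please verify: chaining is a scheduling/serialization optimization and does not merge operator state, so every operator in a chain still takes its own snapshot during a checkpoint. If the goal is to control the chaining itself, the DataStream API exposes explicit hooks; the class name below is made up for illustration.)

```java
// Sketch only: explicit chaining control in the Flink DataStream API.
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ChainingSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("a", "b", "c")
            .map(String::toUpperCase)   // chained with the source by default
            .startNewChain()            // start a new chain at this operator
            .map(String::trim)
            .disableChaining()          // never chain this operator
            .print();

        // env.disableOperatorChaining(); // or turn chaining off globally
        env.execute("chaining-sketch");
    }
}
```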
Hi melin li,
Could you please explain a bit more about the unified access check in Flink?
Best regards,
Yuxia
From: "melin li"
To: "User"
Sent: Wednesday, February 1, 2023, 2:39:15 PM
Subject: How to add permission validation? Flink reads and writes Hive tables.
Flink supports both SQL and jar types. H
Good Morning Sajjad,
I once had a similar problem. As you’ve found out, directly using
KeyedBroadcastProcessFunction is a little tricky.
What I ended up with instead is to use the rather new @PublicEvolving
MultipleInputStreamOperator.
It allows you to connect and process any (reasonable) num
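A minimal sketch of such an operator, assuming the Flink 1.15/1.16 class names (the API is @PublicEvolving, so please verify the signatures against your version before relying on them):

```java
// Hedged sketch of MultipleInputStreamOperator with two String inputs.
import java.util.Arrays;
import java.util.List;
import org.apache.flink.streaming.api.operators.AbstractInput;
import org.apache.flink.streaming.api.operators.AbstractStreamOperatorV2;
import org.apache.flink.streaming.api.operators.Input;
import org.apache.flink.streaming.api.operators.MultipleInputStreamOperator;
import org.apache.flink.streaming.api.operators.StreamOperatorParameters;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;

public class TwoInputSketch extends AbstractStreamOperatorV2<String>
        implements MultipleInputStreamOperator<String> {

    public TwoInputSketch(StreamOperatorParameters<String> parameters) {
        super(parameters, 2); // number of logical inputs
    }

    @Override
    public List<Input> getInputs() {
        return Arrays.asList(
            new AbstractInput<String, String>(this, 1) { // input ids are 1-based
                @Override
                public void processElement(StreamRecord<String> element) {
                    output.collect(element.replace("in1:" + element.getValue()));
                }
            },
            new AbstractInput<String, String>(this, 2) {
                @Override
                public void processElement(StreamRecord<String> element) {
                    output.collect(element.replace("in2:" + element.getValue()));
                }
            });
    }
}
```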
Flink supports both SQL and jar types. How can we implement a unified access
check in Flink? Spark supports extensions, but Flink lacks a comparable
extension mechanism.
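One workaround that can approximate this without an extensions hook is to register a delegating catalog that performs a permission check before forwarding each metadata call. The sketch below illustrates the pattern with simplified, made-up interfaces (TableCatalog, AccessChecker, AuthorizingCatalog are stand-ins, not Flink classes; Flink's real Catalog interface has many more methods to delegate):

```java
// Simplified illustration of the delegating-catalog pattern for access checks.
// All interfaces here are made-up stand-ins, not Flink's actual Catalog API.
import java.util.HashMap;
import java.util.Map;

interface TableCatalog {
    void createTable(String path, String ddl);
    String getTable(String path);
}

class InMemoryCatalog implements TableCatalog {
    private final Map<String, String> tables = new HashMap<>();
    public void createTable(String path, String ddl) { tables.put(path, ddl); }
    public String getTable(String path) { return tables.get(path); }
}

// Wrapper that checks permissions before delegating every call.
class AuthorizingCatalog implements TableCatalog {
    interface AccessChecker {
        boolean allowed(String user, String action, String path);
    }

    private final TableCatalog delegate;
    private final AccessChecker checker;
    private final String user;

    AuthorizingCatalog(TableCatalog delegate, AccessChecker checker, String user) {
        this.delegate = delegate;
        this.checker = checker;
        this.user = user;
    }

    public void createTable(String path, String ddl) {
        if (!checker.allowed(user, "CREATE", path)) {
            throw new SecurityException(user + " may not CREATE " + path);
        }
        delegate.createTable(path, ddl);
    }

    public String getTable(String path) {
        if (!checker.allowed(user, "SELECT", path)) {
            throw new SecurityException(user + " may not SELECT " + path);
        }
        return delegate.getTable(path);
    }
}

public class Main {
    public static void main(String[] args) {
        TableCatalog catalog = new AuthorizingCatalog(
            new InMemoryCatalog(),
            (user, action, path) -> "admin".equals(user), // toy policy
            "admin");
        catalog.createTable("db.t1", "CREATE TABLE t1 (id INT)");
        System.out.println(catalog.getTable("db.t1"));
    }
}
```

The same shape works for both SQL and jar jobs as long as they resolve tables through the wrapped catalog, which is why the check lands here rather than in the planner.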
Hi,
> about the question can I assume that ResolvedCatalogTable will be always a
> runtime type.
Sorry, I don't really understand your question. Why do you make that
assumption?
Best regards,
Yuxia
From: "Krzysztof Chmielewski"
To: "User"
Sent: Saturday, January 21, 2023, 3:13:12 AM
Subject: R
Hi.
Just FYI, I have seen that some catalogs still use the deprecated TableSchema
in the Flink Hive, Iceberg, etc. connectors. But it is in Flink's plan to drop
the deprecated table schema [1]. In the long term, using the new Schema API
seems the better choice.
If it's for the case of Catalog's createTable method,
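As a sketch of the direction (assuming the Flink 1.16 Table API; SchemaSketch is a made-up name): in Catalog#createTable the planner hands you a ResolvedCatalogTable, whose schema is available via getResolvedSchema(), and when constructing schemas yourself the newer builder replaces the deprecated TableSchema:

```java
// Hedged sketch: the newer org.apache.flink.table.api.Schema builder,
// instead of the deprecated TableSchema (verify against your Flink version).
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Schema;

public class SchemaSketch {
    public static Schema build() {
        return Schema.newBuilder()
            .column("id", DataTypes.BIGINT().notNull()) // PK columns must be NOT NULL
            .column("name", DataTypes.STRING())
            .primaryKey("id")
            .build();
    }
}
```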
When enabling exactly_once on my Kafka sink, I am seeing extremely long
initialization times (over 1 hour), especially after restoring from a
savepoint. In the logs I see the job constantly initializing thousands of
Kafka producers like this:
2023-01-31 14:39:58,150 INFO org.apache.kafka.clients.p
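For reference, a hedged sketch of the KafkaSink settings that usually matter for exactly-once recovery (broker, topic, prefix, and timeout below are placeholders; the behavior description is my understanding and worth verifying): with EXACTLY_ONCE the sink derives transactional ids from the configured prefix, and on restore it aborts possibly-lingering transactions, which can enumerate a large number of producers.

```java
// Sketch (Flink KafkaSink, 1.15+). All concrete values are placeholders.
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

public class ExactlyOnceSinkSketch {
    static KafkaSink<String> build() {
        return KafkaSink.<String>builder()
            .setBootstrapServers("broker:9092")
            .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
            // keep the prefix stable across restarts so old transactions
            // can be found and aborted instead of timing out
            .setTransactionalIdPrefix("my-job-v1")
            // should cover the max checkpoint duration and must not exceed
            // the broker's transaction.max.timeout.ms
            .setProperty("transaction.timeout.ms", "900000")
            .setRecordSerializer(
                KafkaRecordSerializationSchema.builder()
                    .setTopic("out-topic")
                    .setValueSerializationSchema(new SimpleStringSchema())
                    .build())
            .build();
    }
}
```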
The script looks good to me. Did you run the SDK harness? The EXTERNAL
environment needs the SDK harness to be run externally, see [1].
Generally, the best option is DOCKER, but that usually does not work in
k8s. For this, you might try the PROCESS environment and build your own
docker image for flink,
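A sketch of the pipeline options this usually amounts to (flag names from Beam's portability documentation; the boot path is an assumption about where you copied the SDK harness binary into your Flink image):

```
# Hedged sketch of Beam portable-runner options for a PROCESS environment.
--runner=FlinkRunner
--flink_master=<jobmanager-host>:8081
--environment_type=PROCESS
--environment_config={"command": "/opt/apache/beam/boot"}
```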
Great, thank you so much for your responses. It all makes sense now. :)
On Mon, Jan 30, 2023 at 10:41 PM Dian Fu wrote:
> >> What is the reason for including
> opt/python/{pyflink.zip,cloudpickle.zip,py4j.zip} in the base
> distribution then? Oh, a guess: to make it easier for TaskManagers to
Makes sense, thank you!
On Tue, Jan 31, 2023 at 10:48 AM Gyula Fóra wrote:
> Thanks @Anton Ippolitov
> At this stage I would highly recommend the native mode if you have the
> liberty to try that.
> I think that has better production characteristics and will work out of
> the box with the autos
Hi,
can you please also share the script itself? I'd say that the
problem is that the Flink jobmanager is not accessible through
localhost:8081, because it runs inside the minikube. You need to expose
it outside of the minikube via [1], or run the script from a pod inside
the minikube and a
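For example (the service name flink-jobmanager-rest is an assumption here; check `kubectl get svc` for the real one in your setup):

```shell
# Forward the JobManager REST port to the host running the script:
kubectl port-forward svc/flink-jobmanager-rest 8081:8081

# Or let minikube print a reachable URL for the service:
minikube service flink-jobmanager-rest --url
```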
Thanks @Anton Ippolitov
At this stage I would highly recommend the native mode if you have the
liberty to try that.
I think that has better production characteristics and will work out of the
box with the autoscaler. (the standalone mode won't)
Gyula
On Tue, Jan 31, 2023 at 10:41 AM Anton Ippoli
I am using the Standalone Mode indeed, should've mentioned it right away.
This fix looks exactly like what I need, thank you!!
On Tue, Jan 31, 2023 at 9:16 AM Gyula Fóra wrote:
> There is also a pending fix for the standalone + k8s HA case :
> https://github.com/apache/flink-kubernetes-operator/
Hi,
Is it possible for the arm64/v8 architecture images to be published for >1.16
flink-docker (apache/flink)?
I’m aware that the official docker flink image is now published in the arm64
arch, but that image doesn’t include a JDK, so it’d be super helpful to have
the apache/flink images publi
There is also a pending fix for the standalone + k8s HA case :
https://github.com/apache/flink-kubernetes-operator/pull/518
You could maybe try and review the fix :)
Gyula
On Tue, Jan 31, 2023 at 8:36 AM Yang Wang wrote:
> I assume you are using the standalone mode. Right?
>
> For the native K
Thanks Yanfei for driving the release!
Best,
Leonard
> On Jan 31, 2023, at 3:43 PM, Yun Tang wrote:
>
> Thanks Yuanfei for driving the frocksdb release!
>
> Best
> Yun Tang
> From: Yuan Mei
> Sent: Tuesday, January 31, 2023 15:09
> To: Jing Ge
> Cc: Yanfei Lei ; d...@flink.apache.org
>