Yes, you are right!
On Thu, Aug 3, 2023 at 1:35 PM Kamal Mittal
wrote:
> Hello Shammon,
>
>
>
> As it is said, one split enumerator per source means that multiple sub-tasks
> of that source (if parallelism > 1) will use the same split enumerator
> instance, right?
>
>
>
> Rgds,
>
> Kamal
>
>
>
> *From:
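To illustrate the answer above, here is a minimal sketch, using the file connector as an example source (the input path and parallelism are illustrative, not from the thread). Flink creates a single SplitEnumerator instance for the source on the JobManager, while each parallel sub-task runs its own SourceReader that is fed splits by that one enumerator.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EnumeratorParallelismExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // One source definition: at runtime Flink creates a single SplitEnumerator
        // for it on the JobManager, which discovers splits and assigns them to readers.
        FileSource<String> source = FileSource
                .forRecordStreamFormat(new TextLineInputFormat(), new Path("/tmp/input"))
                .build();

        // Parallelism 4 means four SourceReader sub-tasks on the TaskManagers,
        // all served by that one enumerator instance.
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "file-source")
                .setParallelism(4)
                .print();

        env.execute("enumerator-parallelism-example");
    }
}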
Hi Kirti,
Simply speaking, the sink needs to support `two-stage commit`: the sink can
`write` data as normal and only `commit` the data after the checkpoint
succeeds. This ensures that even if a failover occurs and data needs to be
replayed, the previously written data is not visible to the user. How
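For context, a minimal skeleton of that pattern using Flink's TwoPhaseCommitSinkFunction (the record and transaction types and the staging logic here are placeholders, not a concrete implementation from this thread):

import org.apache.flink.api.common.typeutils.base.StringSerializer;
import org.apache.flink.api.common.typeutils.base.VoidSerializer;
import org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction;

// Records are written into a staging area during processing and only published
// in commit(), which Flink calls after the covering checkpoint has completed.
public class StagingTwoPhaseCommitSink
        extends TwoPhaseCommitSinkFunction<String, String, Void> {

    public StagingTwoPhaseCommitSink() {
        super(StringSerializer.INSTANCE, VoidSerializer.INSTANCE);
    }

    @Override
    protected String beginTransaction() {
        // Open a new "transaction", e.g. a temporary file or staging directory.
        return "txn-" + System.currentTimeMillis();
    }

    @Override
    protected void invoke(String transaction, String value, Context context) {
        // Write the record into the staging area; it is not yet visible to readers.
    }

    @Override
    protected void preCommit(String transaction) {
        // Flush and close the staged data so it can be committed after the checkpoint.
    }

    @Override
    protected void commit(String transaction) {
        // Checkpoint completed: atomically publish the staged data (e.g. rename the file).
    }

    @Override
    protected void abort(String transaction) {
        // Failure before the checkpoint completed: discard the staged data.
    }
}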
Hello Shammon,
As it is said, one split enumerator per source means that multiple sub-tasks of
that source (if parallelism > 1) will use the same split enumerator instance, right?
Rgds,
Kamal
From: Shammon FY
Sent: 03 August 2023 10:54 AM
To: Kamal Mittal
Cc: user@flink.apache.org
Subject: Re: Flink
Hi Kamal,
I think it depends on your practical application. In general, the built-in
sources in Flink, such as Kafka or Hive, proactively fetch splits from the
external source rather than starting a service that has data pushed to it.
Returning to the issue of port conflicts, you may need to check if
Hi Team,
I am using Flink File Source in one of my use case.
I observed that, while reading a file, the source reader stores its position in
the checkpointed data.
In case the application crashes, it restores its position from the checkpointed
data once the application comes up, which may result in re-emitting
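A minimal sketch of that behavior (the input path and checkpoint interval are illustrative): with checkpointing enabled, the reader's position is part of the checkpoint state, so after a crash the job resumes from the last completed checkpoint and re-reads anything processed after it, which is where the re-emitting comes from; end-to-end exactly-once additionally needs a transactional sink.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FileSourceCheckpointExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // The file reader's offset is snapshotted with every checkpoint; on
        // restart the job resumes from the last completed checkpoint.
        env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);

        FileSource<String> source = FileSource
                .forRecordStreamFormat(new TextLineInputFormat(), new Path("/tmp/input"))
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "file-source")
                .print();

        env.execute("file-source-checkpoint-example");
    }
}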
Hello Shammon,
Please have a look at the below and share your views.
Rgds,
Kamal
From: Kamal Mittal via user
Sent: 02 August 2023 08:02 AM
To: Shammon FY ; user@flink.apache.org
Subject: RE: Flink netty connector for TCP source
Thanks Shammon.
Purpose of opening server socket in Split Enumerator was
Hello,
We have a client sending TCP traffic towards the Flink application, and to
support that, a server socket (configurable port) is opened which accepts
socket connections.
So to accept multiple client connections we have used a thread pool
(configurable size) which will execute in a fli
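For reference, a plain-Java sketch of the setup described above (the port, pool size, and handler logic are illustrative, not taken from this thread): a server socket on a configurable port, with a fixed-size thread pool handling each accepted client connection.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TcpIngestServer {
    public static void main(String[] args) throws Exception {
        int port = 9090;      // configurable port (placeholder value)
        int poolSize = 8;     // configurable thread pool size (placeholder value)
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);

        try (ServerSocket serverSocket = new ServerSocket(port)) {
            while (true) {
                Socket client = serverSocket.accept();  // blocks until a client connects
                pool.submit(() -> handle(client));      // each connection is handled on the pool
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                // Hand the received record to the downstream pipeline here.
                System.out.println("received: " + line);
            }
        } catch (Exception e) {
            // Connection dropped; the try-with-resources closes the socket.
        }
    }
}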
Hi Kamal,
It’s okay if you don’t mind the data order.
But it’s not very common to accept client sockets in Flink jobs, as the
socket server address is dynamic and requires service discovery.
Would you like to share more about the background?
Best,
Paul Lam
> On Aug 3, 2023, at 10:26, Kamal Mit
Hello Community,
Please share your views on the below mail.
Rgds,
Kamal
From: Kamal Mittal via user
Sent: 02 August 2023 08:19 AM
To: user@flink.apache.org
Subject: Flink operator task opens threads internally
Hello Community,
I have an operator pipeline like the one below; is it ok if the "source" task op
Hi,
I noticed that the newest documentation of the flink-operator points
to v1.6.0, yet when using the `helm repo add flink-operator-repo
https://downloads.apache.org/flink/flink-kubernetes-operator-1.6.0/`
command to install, it turns out that the given URL does not exist.
I suppose that 1.
Hi Tucker,
Could you describe more about your running job and how the trigger timer is
configured? It would also help if you could attach a FlameGraph showing the
CPU usage when the timer is triggered.
Best,
Xiangyu
Tucker Harvey via user wrote on Tue, Aug 1, 2023 at 05:51:
> Hello Flink community! My team
Under low workload it is INFO, but under heavy workload it causes system
crashes.
On Wed, Aug 2, 2023 at 7:18 AM liu ron wrote:
> Hi, Kenan
>
> I think you may be able to get help from the Kafka community. IMO, it is just
> an info-level log; does it have a real impact?
>
>
> Best,
> Ron
>
> Kenan Kıl
Hi haishui,
The enum type cannot be mapped to a Flink table type directly.
I think the easiest way is to convert the enum to a string type first:
DataStreamSource<Tuple2<String, String>> source = env.fromElements(
        new Tuple2<>("1", TestEnum.A.name()),
        new Tuple2<>("2", TestEnum.B.name())
);
Or add a map trans
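A slightly fuller, self-contained sketch of that suggestion (TestEnum here is a stand-in for the enum from the question), converting the enum to its string name before handing the stream to the Table API:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class EnumToTableExample {
    enum TestEnum { A, B }  // stand-in for the enum from the question

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Convert the enum to its String name so both fields are plain strings.
        DataStreamSource<Tuple2<String, String>> source = env.fromElements(
                new Tuple2<>("1", TestEnum.A.name()),
                new Tuple2<>("2", TestEnum.B.name()));

        // Plain string fields map cleanly to table columns (f0, f1 by default).
        Table table = tEnv.fromDataStream(source);
        table.execute().print();
    }
}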