Hi
First, good job and thank you.
I can't find the new version 1.12 on Docker Hub.
When will it be there?
nick
On Thu, Dec 10, 2020 at 14:17, Robert Metzger <rmetz...@apache.org> wrote:
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.12.0, whic…
Hello,
We noticed the following behavior:
If we enable Flink checkpoints, there is a delay between
the time we write a message to the Kafka topic and the time the Flink Kafka
connector consumes this message.
The delay is closely related to checkpointInterval and/or
minPauseBetweenCheckpoints…
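For reference, this is roughly how the checkpointing is configured on our side (a minimal sketch; the interval values are illustrative, not our production settings):

```java
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointConfigSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every 10 seconds with exactly-once guarantees.
        env.enableCheckpointing(10_000L, CheckpointingMode.EXACTLY_ONCE);

        // Minimum pause between the end of one checkpoint and the start of the next.
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(5_000L);
    }
}
```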
Hi
I am confused.
The delay is in the source, when reading the message, not in the sink.
nick
On Mon, Dec 21, 2020 at 18:12, Yun Gao <yungao...@aliyun.com> wrote:
> Hi Nick,
>
> Are you using EXACTLY_ONCE semantics? If so, the sink would use
> transactions, and only commit the transa…
> …the delay is on the write side,
> right? Does the delay on the read side still exist?
>
> Best,
> Yun
>
>
>
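To make the transactional-commit point concrete, here is a sketch of an EXACTLY_ONCE producer with the 1.12-era connector (broker, topic, and timeout values are illustrative assumptions):

```java
import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ExactlyOnceProducerSketch {
    public static FlinkKafkaProducer<String> build() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // illustrative
        // Transactions must be allowed to outlive the checkpoint interval.
        props.setProperty("transaction.timeout.ms", "900000");

        KafkaSerializationSchema<String> schema =
                (element, timestamp) -> new ProducerRecord<>(
                        "out-topic", element.getBytes(StandardCharsets.UTF_8));

        // With EXACTLY_ONCE, records are written inside a Kafka transaction
        // that is committed only when the enclosing checkpoint completes, so
        // read_committed consumers see them with checkpoint-interval latency.
        return new FlinkKafkaProducer<>(
                "out-topic", schema, props, FlinkKafkaProducer.Semantic.EXACTLY_ONCE);
    }
}
```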
> ------Original Mail --
> *Sender:*nick toker
> *Send Date:*Tue Dec 22 01:43:50 2020
> *Recipients:*Yun Gao
> *CC:*user
> *Subject:*…
Hi
Any idea?
Is it a bug?
Regards,
nick
On Wed, Dec 23, 2020 at 11:10, nick toker <nick.toker@gmail.com> wrote:
> Hello
>
> We noticed the following behavior:
> If we enable Flink checkpoints, there is a delay between
> the time we write a message to the Kafka…
On Thu, Dec 24, 2020 at 3:36, lec ssmi <shicheng31...@gmail.com> wrote:
> Checkpoints can be taken synchronously or asynchronously; the latter is
> the default.
> If you choose the synchronous way, it may cause this problem.
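As an illustration of that knob with 1.12-era APIs (a sketch; the checkpoint path is illustrative, and note that the RocksDB backend always snapshots asynchronously):

```java
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AsyncSnapshotSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // The second argument toggles asynchronous snapshots
        // (true = async, which is the default being referred to).
        env.setStateBackend(new FsStateBackend("file:///tmp/checkpoints", true));
        env.enableCheckpointing(10_000L);
    }
}
```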
>
> nick toker wrote on Wed, Dec 23, 2020 at 3:53 PM:
>
Hello,
We wrote a very simple streaming pipeline containing:
1. Kafka consumer
2. Process function
3. Kafka producer
The code of the process function is listed below:
private transient MapState<String, String> testMapState; // type parameters assumed; lost in the original formatting

@Override
public void processElement(Map<String, String> value, Context ctx,
        Collector<Map<String, String>> out) throws Exception {
    // ... (the method body was truncated in the original message)
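To make the shape of the job concrete, here is a minimal sketch of how the three stages could be wired together (topic names, the key selector, and the pass-through function are illustrative assumptions, not the real job):

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.util.Collector;

public class PipelineSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // illustrative

        env.addSource(new FlinkKafkaConsumer<>("in-topic", new SimpleStringSchema(), props))
                .keyBy(value -> value) // MapState requires a keyed stream
                .process(new KeyedProcessFunction<String, String, String>() {
                    @Override
                    public void processElement(String value, Context ctx, Collector<String> out) {
                        out.collect(value); // pass-through stand-in for the real logic
                    }
                })
                .addSink(new FlinkKafkaProducer<>("out-topic", new SimpleStringSchema(), props));

        env.execute("simple-pipeline");
    }
}
```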
Hello,
We are using RocksDB as the state backend.
At first we didn't enable the checkpointing mechanism.
We observed the following behaviour and we are wondering why:
When using RocksDB *without* checkpoints, the performance was
extremely bad.
And when we enabled checkpoints, the perform…
> …/apache/flink/contrib/streaming/state/RocksDBMapState.java#L241
> [3]
> https://github.com/apache/flink/blob/efd497410ced3386b955a92b731a8e758223045f/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBMapState.java#L254
>
> Best
> Yun Tang
> …a bit of a performance downside to
> the whole job.
>
> Could you give more details, e.g. the Flink version, the RocksDB
> configuration, and simple code that could reproduce this problem?
>
> Best
> Yun Tang
> --
> *From:* nick toker
> *Sent:*
Hi
I have a standalone cluster with 3 nodes and the RocksDB backend.
When one task manager fails (the process is killed),
it takes a very long time until the job is fully cancelled and a new job is
resubmitted.
I see that all slots on all nodes are cancelled except for the slots
of the dead…
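One setting that usually governs how quickly a dead TaskManager is detected is the heartbeat timeout (a sketch of flink-conf.yaml; the values shown are the stock defaults, not a recommendation):

```
# flink-conf.yaml (illustrative; these are the default values)
heartbeat.interval: 10000   # ms between heartbeats to each TaskManager
heartbeat.timeout: 50000    # ms of silence before a TaskManager is declared dead
```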
Hello
We replaced the deprecated Kafka producer with the Kafka sink,
and from time to time when we submit a job it gets stuck for 5 minutes in
INITIALIZING (on the sink operators).
We verified that the transaction prefix is unique.
This did not happen when we used the Kafka producer.
What can be the reason?
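For reference, the sink is declared roughly like this (a sketch; broker, topic, and prefix values are illustrative):

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

public class ExactlyOnceSinkSketch {
    public static KafkaSink<String> build() {
        return KafkaSink.<String>builder()
                .setBootstrapServers("broker:9092") // illustrative address
                .setRecordSerializer(
                        KafkaRecordSerializationSchema.builder()
                                .setTopic("out-topic") // illustrative topic
                                .setValueSerializationSchema(new SimpleStringSchema())
                                .build())
                .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
                // Must be unique per job: on restore the sink scans and aborts
                // lingering transactions under this prefix, which is a common
                // source of long INITIALIZING phases.
                .setTransactionalIdPrefix("our-unique-prefix")
                .build();
    }
}
```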
Shammon FY <zjur...@gmail.com> wrote:
> Hi nick,
>
> Is there any error log? That may help to analyze the root cause.
>
> On Sun, Jul 23, 2023 at 9:53 PM nick toker
> wrote:
>
>> hello
>>
>>
>> We replaced the deprecated Kafka producer with the Kaf…
> …-3b1f-4f6b-b9a0-6dacb4d5408b',
> dataBytes=291}}]}, operatorStateFromStream=StateObjectCollection{[]},
> keyedStateFromBackend=StateObjectCollection{[]}, keyedStateFromStream=
> StateObjectCollection{[]}, inputChannelState=StateObjectCollection{[]},
> resultSubpartitionState=StateObjectColle…
Hi
I am configured with exactly-once.
I see that the Flink producer sends duplicate messages (sometimes a few copies)
that are later consumed by another application, which should see each message
only once.
How can I avoid the duplications?
Regards,
nick
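If the downstream application reads with Kafka's default read_uncommitted isolation level, it will also see records from aborted or not-yet-committed transactions, which shows up as duplicates. A sketch of the consumer-side setting (property values are illustrative):

```java
import java.util.Properties;

public class DownstreamConsumerProps {
    public static Properties props() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // illustrative
        props.setProperty("group.id", "downstream-app");       // illustrative
        // Only read records from committed transactions, so retried or
        // aborted writes from the exactly-once sink are never seen twice.
        props.setProperty("isolation.level", "read_committed");
        return props;
    }
}
```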
Hi
When I add or remove an operator in the job graph, using a savepoint, I must
cancel the job to be able to run the new graph,
e.g. when adding or removing an operator (like a new sink target).
It was working in the past.
I am using Flink 1.17.1.
1. Is it a known bug? If so, when is it planned to be fixed?
2. Do I need to…
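In case it helps, the usual prerequisites for changing the graph against a savepoint are stable operator UIDs, plus allowing non-restored state on restore (a sketch; the UIDs and pipeline are illustrative):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UidSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Stable UIDs let savepoint state be matched to operators even when
        // other operators are added or removed around them.
        env.fromElements("a", "b", "c")
                .map(String::toUpperCase).uid("upper-map")
                .print().name("stdout-sink");

        env.execute("uid-sketch");
    }
}
```

When an operator (and its state) has been removed, the job normally has to be resubmitted with `flink run -s <savepointPath> --allowNonRestoredState …` so the leftover state is skipped instead of failing the restore.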
> --
> Best!
> Xuyang
>
>
> At 2023-12-03 21:49:23, "nick toker" wrote:
>
> Hi
>
> When I add or remove an operator in the job graph, using a savepoint, I must
> cancel the job to be able to run the new graph,
>
> e.g. when adding or removing an ope…
…the job to make it work.
How can I solve this issue?
nick
On Mon, Dec 4, 2023 at 10:27, Xuyang <xyzhong...@163.com> wrote:
> Hi,
> Can you attach the log with the exception from when the job failed?
>
> --
> Best!
> Xuyang
>
>
> At 2023-12-04 15:56:04, "nick toker" wrote:
> …state that does not belong to it.
>
>
> Best,
> Feng
>
>
> On Thu, Jan 25, 2024 at 1:19 AM nick toker
> wrote:
>
>> Hi
>>
>> I didn't find anything in the log,
>> but I found that it happened when I added a new sink operator,
>> and because I wor…
Hello,
We have noticed that when we add a *new Kafka sink* operator to the graph, *and
start from the last savepoint*, the operator is 100% busy for several
minutes, and *even half an hour to an hour*!
The problematic code seems to be the following for-loop in the
getTransactionalProducer() method:
*org.apach…
Hello,
We are encountering a connection issue with our Kafka sink when using AWS
MSK IAM authentication. While our Kafka source connects successfully, the
sink fails to establish a connection.
Here's how we're building the sink:
```java
// (truncated in the original; the generic type, a placeholder value, and the
// closing of the statement were added for readability)
KafkaSinkBuilder<String> builder = KafkaSink.<String>builder()
        .setBootstrapServers(bootstrapServers); // placeholder; actual value truncated
```
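For comparison, here is a sketch of the IAM-auth client properties the sink would typically need, mirroring what works on the source (these settings come from the aws-msk-iam-auth library; treat the exact values as assumptions to verify):

```java
import org.apache.flink.connector.kafka.sink.KafkaSinkBuilder;

public class MskIamSinkProps {
    // The sink does not inherit the source's client properties, so the IAM
    // auth settings must be set on the sink builder explicitly.
    public static <T> KafkaSinkBuilder<T> withIamAuth(KafkaSinkBuilder<T> builder) {
        return builder
                .setProperty("security.protocol", "SASL_SSL")
                .setProperty("sasl.mechanism", "AWS_MSK_IAM")
                .setProperty("sasl.jaas.config",
                        "software.amazon.msk.auth.iam.IAMLoginModule required;")
                .setProperty("sasl.client.callback.handler.class",
                        "software.amazon.msk.auth.iam.IAMClientCallbackHandler");
    }
}
```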