Flink SQL 1.15 job fails to commit consumed Kafka offsets

2024-08-09 Thread casel.chen
After our Kafka 2.3 cluster suffered a split-brain and restarted, the Flink SQL
1.15 job did recover its Kafka consumption under Flink's retry mechanism (it
writes to a downstream database, and the latest written data can be queried
there), but it stopped committing the offsets of the consumed Kafka messages,
which triggers message-backlog alerts. Is this a known issue in Flink 1.15? If
so, what is the issue number? Thanks!
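
For reference, the source table DDL is roughly as follows (connection details,
table/topic/group names are hypothetical placeholders):

    CREATE TABLE orders_src (                        -- hypothetical table name
      id STRING,
      ts TIMESTAMP(3)
    ) WITH (
      'connector' = 'kafka',
      'topic' = 'orders',                            -- hypothetical topic
      'properties.bootstrap.servers' = 'broker:9092',
      'properties.group.id' = 'flink-orders-job',    -- the group the lag alert watches
      'scan.startup.mode' = 'group-offsets',
      'format' = 'json'
    );

My understanding is that Flink commits the group offsets back to Kafka only
when a checkpoint completes (the committed offsets are informational, since
Flink restores positions from its own checkpointed state), so if checkpoints
stopped succeeding after the recovery, the committed offsets and any lag alert
based on them would stall even though data keeps flowing.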

Re: Can we share states across tasks/operators

2024-08-09 Thread Christian Lorenz via user
Hi Matthias,

I am facing a similar issue: I need state on 2 keyed input streams, but the
state can only be cleared by events arriving on a third input stream.
Do you perhaps have some examples of how to use the MultipleInputStreamOperator
you mentioned?

Thanks and kind regards,
Christian

From: Schwalbe Matthias 
Date: Wednesday, 7 August 2024 at 12:54
To: Sachin Mittal , user@flink.apache.org 

Subject: RE: Can we share states across tasks/operators

Hi Sachin,

Just as an idea: while you cannot easily share state across operators, you can
do so within the same operator:

  *   For two such input streams you could connect() the two streams into a
      ConnectedStreams and then process() them by means of a
      KeyedCoProcessFunction (a sketch follows below)
  *   For more than two input streams, implement some
      MultipleInputStreamOperator …
  *   In both cases you can yield multiple independent output streams (if need
      be) by means of multiple side outputs (see e.g.
      org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction.Context#output)
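
A minimal sketch of the first bullet (Event is a hypothetical POJO with a key
field f and a timestamp; both inputs are keyed by the same field, so
processElement1 and processElement2 read and write the same per-key state):

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction;
    import org.apache.flink.util.Collector;

    // Hypothetical record type; only the key field f matters for this sketch.
    class Event {
        public String f;
        public long timestamp;
    }

    public class SharedStateFunction
            extends KeyedCoProcessFunction<String, Event, Event, Event> {

        private transient ValueState<Long> lastSeen;

        @Override
        public void open(Configuration parameters) {
            lastSeen = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("lastSeen", Long.class));
        }

        @Override
        public void processElement1(Event in, Context ctx, Collector<Event> out)
                throws Exception {
            lastSeen.update(in.timestamp); // first stream writes the shared state
            out.collect(in);
        }

        @Override
        public void processElement2(Event in, Context ctx, Collector<Event> out)
                throws Exception {
            Long prev = lastSeen.value(); // second stream reads what stream 1 wrote
            if (prev != null) {
                lastSeen.clear(); // e.g. clear or update the shared state here
            }
            out.collect(in);
        }
    }

Wired up (stream names hypothetical):

    stream1.connect(stream2)
           .keyBy(e -> e.f, e -> e.f)
           .process(new SharedStateFunction());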

I do that all the time 😊

WDYT?

Sincere Flink greetings

Thias



From: Sachin Mittal 
Sent: Wednesday, August 7, 2024 12:37 PM
To: user@flink.apache.org
Subject: Can we share states across tasks/operators

Hi,
I have a stream which starts from a source and is keyed by a field f.
In the stream's process function I can emit the processed record downstream
and also update state based on the records received for the same key.

Now I have another stream which starts from another source, is of the same
type as the first stream, and is also keyed by the same field f.

In its process function I want to access the last state updated by the first
stream's process function for the same key, do some processing (updating the
state), and also send the record downstream.

Is there any way I can achieve this in Flink by connecting to the same state
store?

Is there any concept of global state in Flink, in case I cannot achieve this
using the keyed state associated with an operator's process function?

Is there any other way you can think of to achieve the same?

Thanks
Sachin




Flink jobs failed in Kerberos delegation token

2024-08-09 Thread dpp
Hello, I am currently using Flink 1.15.2 and have run into an issue with the
HDFS delegation token expiring after 7 days in a Kerberos setup.
I have seen the new delegation token framework
(https://issues.apache.org/jira/browse/FLINK-21232) and have merged the code
commits of Sub-Tasks 1-12 from that link into my Flink 1.15.2.
Now the delegation token is refreshed periodically. However, after 7 days, if
the JobManager or TaskManager needs to be restarted due to an exception, the
Yarn container used to start the JM/TM still uses the HDFS delegation token
that was generated when the job was first submitted, and it reports an error
similar to 'token (HDFS_DELEGATION_TOKEN token 31615466 for xx) can't be found
in cache'.
So the new delegation token framework did not take effect. I'm using Flink's
defaults, and the delegation tokens are not managed elsewhere.
Could anyone help me with this issue? Thank you very much.
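
For reference, the relevant Kerberos login settings in my flink-conf.yaml look
roughly like this (the keytab path and principal are placeholders); as far as
I understand, the token framework re-obtains fresh HDFS tokens from this
keytab login, which is not possible with only a ticket cache or pre-obtained
tokens once the original token's 7-day maximum lifetime has passed:

    security.kerberos.login.use-ticket-cache: false
    # placeholder path and principal:
    security.kerberos.login.keytab: /opt/flink/conf/flink-user.keytab
    security.kerberos.login.principal: flink-user@EXAMPLE.COM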
