Thanks both for your advice, I will give them a try!
From: Schwalbe Matthias
Sent: 10 October 2022 08:35
To: 仙路尽头谁为峰 ; Qing Lim
Cc: User
Subject: RE: Re:Question about Flink Broadcast State event ordering
Hi Qing again,
Another point to consider: broadcast streams are subject to watermarking like any other stream (a two-input operator's watermark is the minimum over all of its inputs, so a quiet broadcast side can hold back event time 😊 )
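On that point, a small sketch (Scala DataStream API; the stream contents and the timeout below are made up, not from this thread): since the downstream operator's event-time clock is the minimum watermark over its active inputs, a rarely-updated broadcast side can declare itself idle instead of silently stalling event time.

import java.time.Duration
import org.apache.flink.api.common.eventtime.{SerializableTimestampAssigner, WatermarkStrategy}
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Rarely-updated control stream: after one quiet minute it is marked
// idle and excluded from the downstream operator's min-watermark, so
// the data side alone drives event time.
val control: DataStream[String] = env
  .fromElements("rule-1", "rule-2")
  .assignTimestampsAndWatermarks(
    WatermarkStrategy
      .forMonotonousTimestamps[String]()
      .withTimestampAssigner(new SerializableTimestampAssigner[String] {
        override def extractTimestamp(e: String, ts: Long): Long =
          System.currentTimeMillis() // placeholder timestamping
      })
      .withIdleness(Duration.ofMinutes(1)))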
Best regards
Thias
From: 仙路尽头谁为峰
Sent: Wednesday, October 5, 2022 10:13 AM
To: Qing Lim
Cc: User
Subject: [SPAM] RE: Re:Question about Flink Broadcast State event ordering
Hi Qing:
The key point is that the broadcast side may have different partitions that interleave. If you can make sure the broadcast side has only one partition (for example, parallelism 1), then every downstream task will see the broadcast elements in the same order.
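For illustration, a sketch of that single-partition case (the environment and elements below are placeholders, not from this thread): Flink preserves record order within each producer-to-consumer channel, so pinning the broadcast side to parallelism 1 leaves a single producer, and every parallel task then receives the broadcast elements in the same order.

import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Pin the broadcast input to one partition: a single producer means one
// FIFO channel per downstream task, so there is nothing to interleave.
val control: DataStream[String] = env
  .fromElements("add:rule-1", "delete:rule-1") // non-parallel example source
  .map(x => x).setParallelism(1)               // last stage before the broadcast runs as a single task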
Oh, thank you for your explanation!
From: 仙路尽头谁为峰
Sent: 05 October 2022 09:13
To: Qing Lim
Cc: User
Subject: RE: Re:Question about Flink Broadcast State event ordering
Hi Qing:
The key point is that the broadcast side may have different partitions that interleave. If you can make sure the broadcast side has only one partition (for example, parallelism 1), then every downstream task will see the broadcast elements in the same order.
To: xljtswf2022
Cc: User
Subject: RE: Re:Question about Flink Broadcast State event ordering
Hi, thanks for answering my question.
Is there any way to make the order reflect the upstream? I wish to broadcast
messages that have deletion semantics, so ordering matters here.
I guess worst case I can
Sent: 05 October 2022 03:02
To: Qing Lim
Cc: User
Subject: Re:Question about Flink Broadcast State event ordering
Hi Qing:
> I think this is referring to the order between broadcasted elements and
> non-broadcasted elements, right?
No. The broadcast and non-broadcast streams are different streams; they will
usually be transferred over different TCP connections, and we cannot control
the order of elements across different connections.
Hi Flink user group,
I have a question around broadcast.
Reading the docs
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/broadcast_state/#important-considerations,
it says the following:
> Order of events in Broadcast State may differ across tasks: Although broadcasting the elements of a stream guarantees that all elements will (eventually) go to all downstream tasks, elements may arrive in a different order to each task. So the state updates for each incoming element MUST NOT depend on the ordering of the incoming events.
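To make that constraint concrete, here is a sketch of one order-insensitive scheme (the Rule/Event types and the version field are illustrative, not from this thread): version every broadcast update and keep deletions as tombstones, so a task converges to the same state no matter in which order the updates arrive.

import org.apache.flink.api.common.state.MapStateDescriptor
import org.apache.flink.api.scala.createTypeInformation
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction
import org.apache.flink.util.Collector

case class Rule(id: String, version: Long, deleted: Boolean, pattern: String)
case class Event(key: String, payload: String)

object Rules {
  val desc = new MapStateDescriptor[String, Rule](
    "rules", createTypeInformation[String], createTypeInformation[Rule])
}

class OrderInsensitive extends BroadcastProcessFunction[Event, Rule, String] {

  override def processBroadcastElement(
      rule: Rule,
      ctx: BroadcastProcessFunction[Event, Rule, String]#Context,
      out: Collector[String]): Unit = {
    val rules = ctx.getBroadcastState(Rules.desc)
    val current = rules.get(rule.id)
    // Last-writer-wins by version: any arrival order converges to the
    // same state. Deletions stay in state as tombstones, so a stale
    // update arriving late cannot resurrect a deleted rule.
    if (current == null || rule.version > current.version) {
      rules.put(rule.id, rule)
    }
  }

  override def processElement(
      event: Event,
      ctx: BroadcastProcessFunction[Event, Rule, String]#ReadOnlyContext,
      out: Collector[String]): Unit = {
    val rule = ctx.getBroadcastState(Rules.desc).get(event.key)
    if (rule != null && !rule.deleted) {
      out.collect(s"${event.key} matched ${rule.pattern}")
    }
  }
}

Wired up in the usual way: events.connect(rules.broadcast(Rules.desc)).process(new OrderInsensitive).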
-flink.html

> In the article:
> - You get additional features like side inputs and cross-language
> pipelines that are not supported natively in Flink but only supported when
> using Beam with Flink

Ultimately, a Beam pipeline will be translated into a Flink job. So does Beam's
side input translate into a Flink Broadcast stream?

If I look at org.apache.beam.runners.flink.FlinkStreamingTransformTranslators,
it looks like it converts the side input
val toId = y._1
val score = 1 - cosDistince.distance(x._2, y._2)
(toId, score)
}.toList.sortWith((x, y) => x._2 > y._2).take(20)
(fromId, docSims)
}
res.writeAsText(..)

----- Original Message -----
From: Stephan Ewen
To: user@flink.apache.org
Cc: 亘谷
Subject: Re: flink Broadcast
Date: 20
The program consists of two executions - one that only collect()s back to
the client, and one that executes the map function.
Are you running this as a "YARN single job" execution? In that case, there
may be an issue where this incorrectly tries to submit to a stopping YARN
cluster.
On Fri, Mar 24,
Hi,
Can you provide more logs to help us understand what's going on?
One note regarding your application: you are calling .collect() and then
sending the collection back to the cluster again with the map() call.
This is pretty inefficient and can potentially break your application (in
particular the RPC system
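For reference, a sketch of the collect()-free alternative using a DataSet broadcast variable (the SparseVector and cosDistince stand-ins below are made up so the sketch compiles on its own; the real ones live in the original post): withBroadcastSet ships the data set to each task through the runtime instead of dragging it through the client.

import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.api.scala._
import org.apache.flink.configuration.Configuration
import scala.collection.JavaConverters._

object BroadcastSetSketch {
  // Stand-ins so the sketch is self-contained; the thread does not show
  // the original SparseVector type or cosDistince helper in full.
  type SparseVector = Map[Int, Double]
  object cosDistince { // spelling kept from the original post
    def distance(a: SparseVector, b: SparseVector): Double = {
      val dot = a.map { case (i, v) => v * b.getOrElse(i, 0.0) }.sum
      val norm = math.sqrt(a.values.map(v => v * v).sum) *
        math.sqrt(b.values.map(v => v * v).sum)
      if (norm == 0) 1.0 else 1 - dot / norm
    }
  }

  class TopSims
      extends RichMapFunction[(String, SparseVector), (String, List[(String, Double)])] {
    private var toData: Seq[(String, SparseVector)] = _

    // The broadcast variable is handed to every task by the runtime,
    // instead of being collect()ed to the client and re-serialized
    // into the map closure.
    override def open(parameters: Configuration): Unit =
      toData = getRuntimeContext
        .getBroadcastVariable[(String, SparseVector)]("docs").asScala

    override def map(x: (String, SparseVector)): (String, List[(String, Double)]) = {
      val fromId = x._1
      val docSims = toData.map { y =>
        (y._1, 1 - cosDistince.distance(x._2, y._2))
      }.toList.sortWith((a, b) => a._2 > b._2).take(20)
      (fromId, docSims)
    }
  }

  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val data: DataSet[(String, SparseVector)] =
      env.fromElements(("doc1", Map(0 -> 1.0)), ("doc2", Map(0 -> 0.5, 1 -> 0.5)))
    val res = data.map(new TopSims).withBroadcastSet(data, "docs")
    res.writeAsText("/tmp/docSims")
    env.execute("broadcast-set-sketch")
  }
}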
Hi all,
I have 36000 documents, and every document is transformed into a vector; one
doc is one vector, and the dimensions are all the same, so I have a DataSet:
val data: DataSet[(String, SparseVector)] = // 36000 records
val toData = data.collect()
val docSims = data.map { x =>
  val fromId = x._1