I'm trying to set up a 3 node Flink cluster (version 1.9) on the following
machines:
Node 1 (Master) : 4 GB (3.8 GB) Core2 Duo 2.80GHz, Ubuntu 16.04 LTS
Node 2 (Slave) : 16 GB, Core i7-3.40GHz, Ubuntu 16.04 LTS
Node 3 (Slave) : 16 GB, Core i7-3.40GHz, Ubuntu 16.04 LTS
I have followed the instructions, but the TaskManager cannot connect. Its
log shows entries such as:
'/127.0.0.1': Invalid argument (connect failed)
2019-09-12 15:56:39,753 INFO org.apache.flink.runtime.net.ConnectionUtils
- Failed to connect from address
'/fe80:0:0:0:1e10:83f4:a33a:a208%enp5s0f1': Network is unreachable (connect failed)
(from flink-komal-taskexecutor-0-salman-hpc.log)
> could you check that every node can reach the other nodes? It looks a
> little bit as if the TaskManager cannot talk to the JobManager running on
> 150.82.218.218:6123.
>
> Cheers,
> Till
>
> On Thu, Sep 12, 2019 at 9:30 AM Komal Mariam
> wrote:
>
>> I managed
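For connectivity problems like the one above, the first things to look at in
flink-conf.yaml are the entries below. A sketch only: the addresses are
placeholders, and taskmanager.host is just one way to keep a worker from
binding to a loopback or link-local address, which is what the log excerpt
suggests is happening.

jobmanager.rpc.address: 150.82.218.218   # must be routable from every slave
jobmanager.rpc.port: 6123                # port the TaskManagers connect to
taskmanager.host: 192.0.2.11             # placeholder: this node's reachable address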
Hello all,
I'm trying to do a fairly simple task: find the maximum value
(Double) received so far in a stream. This is what I implemented:
POJO class:
public class Fish {
    public Integer fish_id;
    public Point coordinate; // position
    public Fish() {}
    // assumed completion of the truncated constructor:
    public Fish(Integer fish_id, double x, double y) {
        this.fish_id = fish_id;
        this.coordinate = new Point(x, y);
    }
}
> About your reduce function:
> You execute it by fish_id if I see it correctly. This will create one
> result per fish_id. I propose to first map all fish coordinates under a
> single key and then reduce by this single key.
>
> Am 03.10.2019 um 08:26 schrieb Komal Mariam :
>
Thank you for your help all. I understand now and made the changes.
Since I needed to return the entire object that contained the max value of X,
I used reduce instead of max.
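A minimal sketch of that approach (assuming the Fish POJO above, a Point type
with a public double x, and an existing DataStream<Fish> named fishStream;
only Fish itself comes from the thread):

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.datastream.DataStream;

// Map every record under one constant key, then reduce to keep the Fish
// whose x coordinate is the largest seen so far.
DataStream<Fish> maxSoFar = fishStream
    .keyBy(new KeySelector<Fish, Integer>() {
        @Override
        public Integer getKey(Fish f) { return 0; } // single key: one global group
    })
    .reduce((f1, f2) -> f1.coordinate.x >= f2.coordinate.x ? f1 : f2);

With a single key every record flows through one subtask; that serialization
is the price of a global maximum over an unkeyed stream.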
Hi everyone,
Suppose I have to compute a filter condition
Integer threshold = computeThreshold();
If I do:
temperatureStream.filter(new FilterFunction<Integer>() {
    @Override
    public boolean filter(Integer temperature) throws Exception {
        Integer threshold = computeThreshold();
        return temperature > threshold;
    }
});
> public boolean filter(Integer temperature) {
>     return temperature > threshold;
> }
> });
>
> final int threshold = computeThreshold();
> temperatureStream.filter(temperature -> temperature > threshold);
>
>
> On 08/10/2019 12:46, Komal Mariam wrote
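To make "compute it once" explicit in code, one option (a sketch, not from
the thread; computeThreshold() is a stand-in for whatever the real
computation is) is a RichFilterFunction that evaluates the threshold in
open(), once per parallel instance rather than once per record:

import org.apache.flink.api.common.functions.RichFilterFunction;
import org.apache.flink.configuration.Configuration;

public class ThresholdFilter extends RichFilterFunction<Integer> {
    private transient int threshold;

    @Override
    public void open(Configuration parameters) throws Exception {
        threshold = computeThreshold(); // evaluated once per parallel instance
    }

    @Override
    public boolean filter(Integer temperature) {
        return temperature > threshold;
    }

    private int computeThreshold() {
        return 30; // placeholder; the thread never showed the real computation
    }
}

// usage:
temperatureStream.filter(new ThresholdFilter());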
Hello,
I have a few questions regarding Flink's dashboard and monitoring tools.
I have a fixed number of records that I process through the DataStream
API on my standalone cluster and want to know how long it takes to process
them. My questions are:
1) How can I see the time taken in milli
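One programmatic option for the milliseconds question (an aside, not from
the thread): for a job that actually terminates, env.execute() returns a
JobExecutionResult whose getNetRuntime(...) reports the runtime; the web
dashboard shows the same per-job duration. A sketch, assuming env is the
job's StreamExecutionEnvironment:

import java.util.concurrent.TimeUnit;
import org.apache.flink.api.common.JobExecutionResult;

JobExecutionResult result = env.execute("my-job"); // blocks until the bounded run finishes
long millis = result.getNetRuntime(TimeUnit.MILLISECONDS);
System.out.println("Processed everything in " + millis + " ms");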
Hi all,
I'm trying to create a MapState<..., Set<List<...>>> for a
KeyedBroadcastProcessFunction but I'm not sure how to initialize its
MapStateDescriptor.
I have written it in two ways as given below and my IDE isn't showing an
error either way (I haven't tested at runtime yet).
I'd really appreciate if an
Dear all,
I want to clear some of my variables in KeyedBroadcastProcessFunction after
a certain time. I implemented the onTimer() function, but even though I am
using ProcessingTime
like so:
env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime), I am
getting null when ctx.timestamp() i
>> Consider snippet 2: the type inference in TypeInformation.of cannot
>> infer the nested information (it does not recover the nested List type).
>>
>> On Fri, Nov 1, 2019 at 11:34 AM Komal Mariam
>> wrote:
>>
>>> Hi all,
>>>
>>> I'm trying t
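The usual remedy for that erasure is to build the descriptor's
TypeInformation from an anonymous TypeHint, which preserves nested generics.
A sketch with invented type parameters (the real ones were lost above):

import java.util.List;
import java.util.Set;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;

// TypeHint subclasses keep their generic signature, so Set<List<Double>>
// survives, unlike TypeInformation.of(Set.class), which erases the nesting.
MapStateDescriptor<Integer, Set<List<Double>>> descriptor =
    new MapStateDescriptor<>(
        "nestedState",                                             // state name
        TypeInformation.of(new TypeHint<Integer>() {}),            // key type
        TypeInformation.of(new TypeHint<Set<List<Double>>>() {})); // value type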
> …`currentProcessingTime()`.
>
>
>
> [1]
> https://github.com/apache/flink/blob/9b43f13a50848382fbd634081b82509f464e62ca/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/co/BaseBroadcastProcessFunction.java#L50
>
> Best
>
> Yun Tang
>
>
>
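A sketch of the pattern being described, with all type parameters as String
and a made-up one-minute TTL, purely to keep it self-contained:

import org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class ClearingFunction
        extends KeyedBroadcastProcessFunction<String, String, String, String> {

    private static final long TTL_MS = 60_000; // made-up retention period

    @Override
    public void processElement(String value, ReadOnlyContext ctx, Collector<String> out)
            throws Exception {
        // Under ProcessingTime, ctx.timestamp() is null because records carry
        // no event-time timestamps; ask the timer service for the clock instead.
        long now = ctx.timerService().currentProcessingTime();
        ctx.timerService().registerProcessingTimeTimer(now + TTL_MS);
    }

    @Override
    public void processBroadcastElement(String value, Context ctx, Collector<String> out) {
        // update broadcast state here
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) {
        // 'timestamp' is the firing time of the processing-time timer;
        // clear the expired keyed state here.
    }
}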
Dear all,
Thank you for your help regarding my previous queries. Unfortunately, I'm
stuck with another one and would really appreciate your input.
I can't seem to produce any outputs in "flink-taskexecutor-0.out" from my
second job after submitting the first one in my 3-node Flink standalone
cluster.
> …TaskManagers' out file?
>
> Best,
> Vino
>
> Komal Mariam wrote on Fri, Nov 22, 2019 at 6:59 PM:
>
>> Dear all,
>>
>> Thank you for your help regarding my previous queries. Unfortunately, I'm
>> stuck with another one and would really appreciate your input.
>>
>>
Hello everyone,
I want to get some insights on the KeyBy (and Rebalance) operations as,
according to my understanding, they partition the data across the defined
parallelism and thus should make our pipeline faster.
I am reading a topic which contains 170,000,000 pre-stored records with 11
Kafka partitions
Anyone?
On Fri, 6 Dec 2019 at 19:07, Komal Mariam wrote:
> Hello everyone,
>
> I want to get some insights on the KeyBy (and Rebalance) operations as,
> according to my understanding, they partition the data across the defined
> parallelism and thus should make our pipeline f
nk-docs-release-1.9/concepts/runtime.html#tasks-and-operator-chains
>
> Komal Mariam wrote on Mon, Dec 9, 2019 at 9:11 AM:
>
>> Anyone?
>>
>> On Fri, 6 Dec 2019 at 19:07, Komal Mariam wrote:
>>
>>> Hello everyone,
>>>
>>> I want to get some insights
network-buffers
>
> On Mon, Dec 9, 2019 at 10:25 AM vino yang wrote:
>
>> Hi Komal,
>>
>> Actually, the main factor in choosing the partitioning type is your
>> business logic. If you want to do some aggregation logic based on a
>> group, you m
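To make the distinction concrete, a sketch (the Reading type, its fields,
and the DataStream<Reading> named readings are all invented):

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.datastream.DataStream;

// Invented record type.
public class Reading {
    public String sensorId;
    public double value;
}

// keyBy hash-partitions by key: all records with the same sensorId meet at
// the same subtask, which is what per-key aggregations (reduce, sum, ...) need.
DataStream<Reading> perSensorMax = readings
    .keyBy(new KeySelector<Reading, String>() {
        @Override
        public String getKey(Reading r) { return r.sensorId; }
    })
    .reduce((a, b) -> a.value >= b.value ? a : b);

// rebalance round-robins records across subtasks: pure load spreading,
// no grouping guarantee, useful when downstream work is key-independent.
DataStream<Reading> balanced = readings.rebalance();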