From: Kostas Tzoumas
To: "d...@flink.apache.org" ; user@flink.apache.org
Sent: Friday, November 18, 2016 7:28 AM
Subject: Flink survey by data Artisans
Hi everyone!
The Apache Flink community has evolved quickly over the past 2+ years, and
there are now many production Flink deployments
expected. Thanks again.
From: amir bahmanyari
To: Till Rohrmann
Cc: "user@flink.apache.org"
Sent: Thursday, November 10, 2016 9:35 AM
Subject: Re: Why did the Flink Cluster JM crash?
Thanks Till. I did all of that with one difference. I have only 1 topic with 64
partitions corr
neck in my config and object creation. I send data to 1 topic across a 2-node
Kafka cluster with 64 partitions, and KafkaIO in the Beam app is set to receive
from it. How can "more Kafka topics" translate to KafkaIO settings in the Beam
API? Thanks+regards, Amir-
From: Till Rohrmann
To:
t the test. I appreciate your response. Amir-
From: Till Rohrmann
To: amir bahmanyari
Cc: "user@flink.apache.org"
Sent: Wednesday, November 9, 2016 1:27 AM
Subject: Re: Why did the Flink Cluster JM crash?
Hi Amir,
I fear that 900 slots per task manager is a bit too many unle
mTask.java:56)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:224)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
    at java.lang.Thread.run(Thread.java:745)
From: Till Rohrmann
To: user@flink.apache.org; amir bahmanyari
Sent: Tuesday, November 8, 2016 2:11 PM
Subject: Re: Why did th
Oops! Sorry Till. I replicated it and I see exceptions in the JM logs. How can I
send the logs to you? Or what "interesting" part of them do you need so I can
copy/paste it here... Thanks
From: Till Rohrmann
To: user@flink.apache.org; amir bahmanyari
Sent: Tuesday, November 8, 20
Clean. No errors... no exceptions :-( Thanks Till.
From: Till Rohrmann
To: user@flink.apache.org; amir bahmanyari
Sent: Tuesday, November 8, 2016 2:11 PM
Subject: Re: Why did the Flink Cluster JM crash?
Hi Amir,
what do the JM logs say?
Cheers, Till
On Tue, Nov 8, 2016 at 9:33 PM
Hi colleagues, I started the cluster all fine. Started the Beam app running in
the Flink cluster fine. The dashboard showed all tasks being consumed and open for
business. I started sending data to the Beam app, and all of a sudden the
Flink JM crashed. Exceptions below. Thanks+regards, Amir
java.lang.Ru
Hi colleagues, Is there a link that describes Flink Metrics and provides an example
of how to utilize them, please? I really appreciate it... Cheers
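For illustration of what the question is after (not code from this thread; the
operator and metric names are made up): a minimal sketch of registering a
user-defined counter through Flink's metric system inside a RichMapFunction:

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

public class CountingMapper extends RichMapFunction<String, String> {
  private transient Counter recordsSeen;

  @Override
  public void open(Configuration parameters) {
    // Register a counter under this operator's metric group.
    recordsSeen = getRuntimeContext().getMetricGroup().counter("recordsSeen");
  }

  @Override
  public String map(String value) {
    recordsSeen.inc(); // incremented once per processed record
    return value;
  }
}

Metrics registered this way show up in the web dashboard and in any configured
metric reporters.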
From: Till Rohrmann
To: user@flink.apache.org
Cc: d...@flink.apache.org
Sent: Monday, October 17, 2016 12:52 AM
Subject: Re: Flink Metrics
Hi Govi
their data exchange
report? Thanks+regards, Amir-
From: Stephan Ewen
To: user@flink.apache.org; amir bahmanyari
Cc: Felix Dreissig
Sent: Monday, September 26, 2016 2:18 AM
Subject: Re: How can I prove
You do not need to create any JSON.
Just click on "Running Jobs"
here, although the data is actually being processed for
sure. Shouldn't they dynamically change as data is being processed?
Thanks+regards, Amir-
From: Stephan Ewen
To: user@flink.apache.org; amir bahmanyari
Cc: Felix Dreissig
Sent: Monday, September 26, 2016 2:18 AM
Subject: Re: How
Thanks Felix. Interesting. I tried to create the JSON according to the sample code
I found in the docs, but it didn't work. There is a way to get the same JSON from
the command line. Is there an example? Thanks+regards, Amir-
From: Felix Dreissig
To: amir bahmanyari
Cc: user@flink.apache.org
ce :-( So I incremented my
total slots to 448. The Kafka topic also has 448 partitions. Why am I having such
bad luck with this!!!??? LOL!! Thanks for your attention, Aljoscha.
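For illustration only, using the plain Flink API rather than the Beam pipeline
discussed in this thread, and reusing the 448 figure from the message above: a
minimal sketch of matching job parallelism to the Kafka partition count so every
partition can be read by its own parallel subtask:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ParallelismSetup {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // One parallel source subtask per Kafka partition: 448 slots for 448 partitions.
    env.setParallelism(448);

    // ... build sources and transformations here ...
    env.execute("partition-matched job"); // placeholder job name
  }
}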
From: amir bahmanyari
To: Aljoscha Krettek ; User
Sent: Thursday, September 22, 2016 10:10 AM
Subject: Re: H
again Aljoscha.
From: amir bahmanyari
To: Aljoscha Krettek ; User
Sent: Thursday, September 22, 2016 9:16 AM
Subject: Re: How can I prove
Thanks Aljoscha, that's why I am wondering about this. I don't see the send/receive
columns change at all, just 0's all the time. The only
From: Aljoscha Krettek
To: amir bahmanyari ; User
Sent: Thursday, September 22, 2016 5:01 AM
Subject: Re: How can I prove
Hi, depending on the data source you might not be able to stress CPU/MEM because
the source might be too slow. As long as you see the numbers increasing in the
That all nodes in a Flink cluster are involved simultaneously in processing the
data? Programmatically, graphically... I need to stress CPU, MEM and all
resources to their max. How can I guarantee this is happening in the Flink
cluster? Out of 4 nodes, this is the highest resource usage I see from
"to
ram. You also have to make sure to
> write to all partitions and not just to one.
>
> Cheers,
> Aljoscha
>
>> On Sun, 18 Sep 2016 at 21:50 amir bahmanyari wrote:
>> Hi Aljoscha,
>> Thanks for your kind response.
>> - We are really benchmarking
ould I alter that for better performance?
Thanks Aljoscha & have a great weekend. Amir-
From: Aljoscha Krettek
To: Amir Bahmanyari ; user
Sent: Sunday, September 18, 2016 1:48 AM
Subject: Re: Flink Cluster Load Distribution Question
This is not related to Flink, but in Beam you
ns while #slots=64 is the same.
It's still slow for a relatively large file though. Please advise if there is
something I can try to improve the cluster performance. Thanks+regards
From: Aljoscha Krettek
To: user@flink.apache.org; amir bahmanyari
Sent: Wednesday, September 14, 2016 1:48 AM
Subject
Hi Aljoscha, the JM logs are also attached. Seems like everything is ok,
assigned... to all nodes... Not sure why I don't get performance?
:-( Thanks+regards, Amir-
From: Aljoscha Krettek
To: user@flink.apache.org; amir bahmanyari
Sent: Wednesday, September 14, 2016 1:48 AM
Subject: Re: Fw
ve a wonderful day &
thanks for your attention. Amir-
From: Aljoscha Krettek
To: user@flink.apache.org; amir bahmanyari
Sent: Wednesday, September 14, 2016 1:48 AM
Subject: Re: Fw: Flink Cluster Load Distribution Question
Hi, this is a different job from the Kafka job that
Amir-
- Forwarded Message -
From: Robert Metzger
To: "d...@flink.apache.org" ; amir bahmanyari
Sent: Tuesday, September 13, 2016 1:15 AM
Subject: Re: Flink Cluster Load Distribution Question
Hi Amir,
I would recommend posting such questions to the user@flink mailing lis