Hi Ivan,
Just to add to the point about chaining: when splitting the map into two parts, objects
need to be copied from one operator to the chained operator. Since your
objects are very heavy, that copy can take quite long, especially if you don't
have a specific serializer configured but rely on Kryo.
You can avoid that copy by enabling object reuse on the execution config.
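Roughly like this (a sketch only, assuming the Scala DataStream API; MyEvent and
MyEventSerializer are placeholders, not types from your job):

import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
// skip the defensive copy Flink makes when handing records between chained operators
env.getConfig.enableObjectReuse()
// give the heavy type a dedicated Kryo serializer instead of the generic fallback
// (MyEventSerializer would extend com.esotericsoftware.kryo.Serializer[MyEvent])
env.getConfig.registerTypeWithKryoSerializer(classOf[MyEvent], classOf[MyEventSerializer])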
Generally there should be no difference.
Can you check whether the maps are running as a chain (as a single task)?
If they are running in a chain, then I would suspect that /something/
else is skewing your results.
If not, then the added network/serialization pressure would explain it.
I will a
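To make the chaining part concrete, a small sketch (assuming the Scala DataStream API;
parseJson and toAvro are placeholder functions):

// chained (the default): both maps run in one task and hand records over locally
stream.map(parseJson _).map(toAvro _)

// disabling chaining on the second map puts it in its own task,
// adding a serialization hand-over between the two maps
stream.map(parseJson _).map(toAvro _).disableChaining()

The web UI shows whether the two maps ended up in a single task.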
Hi,
We have a Flink job that reads data from an input stream, converts each
event from a JSON string to an Avro object, and finally writes to Parquet files using
StreamingFileSink with OnCheckpointRollingPolicy (5-minute checkpoint interval). It is
basically a stateless job. Initially, we used one map operator to convert the JSON st
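Roughly, the job looks like this (a simplified sketch, not our exact code; MyEvent stands
in for the Avro-generated class, toAvro for the conversion helper, jsonStream for the input):

import org.apache.flink.core.fs.Path
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.enableCheckpointing(5 * 60 * 1000) // bulk formats roll part files on every checkpoint

val sink: StreamingFileSink[MyEvent] = StreamingFileSink
  .forBulkFormat(new Path("s3://<bucket>/output"), ParquetAvroWriters.forSpecificRecord(classOf[MyEvent]))
  .build()

jsonStream                       // DataStream[String] of raw JSON events
  .map(json => toAvro(json))     // JSON string -> Avro MyEvent
  .addSink(sink)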
I think that's independent of the serializer registration.
What's important is registering the types at the execution environment.
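For example, something along these lines (a sketch, assuming the Scala API; Data1 is the
case class mentioned later in the thread, Data1KryoSerializer is a placeholder):

val env = StreamExecutionEnvironment.getExecutionEnvironment

// register the type up front instead of letting Flink discover it lazily
env.getConfig.registerKryoType(classOf[Data1])

// or bind a specific Kryo serializer to the type at the environment level
// (Data1KryoSerializer would extend com.esotericsoftware.kryo.Serializer[Data1])
env.addDefaultKryoSerializer(classOf[Data1], classOf[Data1KryoSerializer])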
On Fri, Feb 24, 2017 at 7:06 PM, Dmitry Golubets wrote:
Hi Robert,
The bottleneck operator works with state (many hash maps, basically)
and its algorithm is not parallelizable.
We took the approach of preloading all required data from external systems,
so that operators don't have to do any network communication while processing
a data record.
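As an illustration of the preloading idea (a sketch only, not our actual job code; the
types and loadReferenceData are placeholders):

import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration

case class Event(key: String, payload: String)
case class EnrichedEvent(event: Event, extra: String)

class EnrichingMapper extends RichMapFunction[Event, EnrichedEvent] {
  @transient private var reference: Map[String, String] = _

  override def open(parameters: Configuration): Unit = {
    // one-off fetch from the external system, done once per task instead of per record
    reference = loadReferenceData()
  }

  override def map(event: Event): EnrichedEvent =
    EnrichedEvent(event, reference.getOrElse(event.key, ""))

  private def loadReferenceData(): Map[String, String] = Map.empty // placeholder
}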
Hi Dmitry,
Cool! Looks like you've taken the right approach to analyze the performance
issues!
Often the deserialization of the input is already a performance killer :)
What is this one operator that is the bottleneck doing?
Does it have a lot of state? Is it CPU intensive, or talking to an exter
Hi Robert,
In the dev environment I load data from zipped CSV files on S3.
The data is parsed into case classes.
It's quite fast: I'm able to get ~80k/sec with only the source and a "dev/null"
sink.
Checkpointing is enabled with a 1-hour interval.
Yes, one of the operators is a bottleneck and it backpressures
Hi Dmitry,
sorry for the late response.
Where are you reading the data from?
Did you check if one operator is causing backpressure?
Are you using checkpointing?
Serialization is often the cause of slow processing. However, it's very
hard to diagnose other potential causes without any details on
Hi Daniel,
I've implemented a macro that generates MessagePack serializers in our
codebase.
The resulting code is basically a series of writes/reads, like hand-written
structured serialization.
E.g. given
case class Data1(str: String, subdata: Data2)
case class Data2(num: Int)
serialization code
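looks roughly like a flat series of writes and reads, e.g. (a sketch using msgpack-java's
MessagePacker/MessageUnpacker, not the exact macro output):

import org.msgpack.core.{MessagePacker, MessageUnpacker}

// write the fields in a fixed order, nested structures inlined
def write(p: MessagePacker, d: Data1): Unit = {
  p.packString(d.str)
  p.packInt(d.subdata.num)
}

// read them back in the same order
def read(u: MessageUnpacker): Data1 =
  Data1(u.unpackString(), Data2(u.unpackInt()))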
Hello Dmitry,
Could you please elaborate on your tuning of
environment.addDefaultKryoSerializer(..)?
I'm interested in knowing what you have done there to get a boost of about
50%.
A small or simple example would be very nice.
Thank you very much in advance.
Kind Regards,
Daniel Sa
One network setting is mentioned here:
https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/datastream_api.html#controlling-latency
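Concretely, that page covers the buffer timeout; roughly (a sketch, assuming the Scala API):

import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
// flush network buffers less eagerly to trade latency for throughput (value in milliseconds)
env.setBufferTimeout(100)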
From: Dmitry Golubets <dgolub...@gmail.com>
Date: Friday, February 17, 2017 at 6:43 AM
To: <user@flink.apache.org>
Subject:
Hi,
My streaming job cannot benefit much from parallelization, unfortunately.
So I'm looking for things I can tune in Flink to make it process a
sequential stream faster.
So far, in our current engine based on Akka Streams (non-distributed, of course), we
have 20k msg/sec.
Ported to Flink, I'm getting 14k so
...being occupied. Something I am doing is wrong..
3 Task Managers, 21 Task Slots, 20 Available Task Slots
Best regards,
Serhiy.
From: Robert Metzger [mailto:rmetz...@apache.org]
Sent: 13 May 2016 15:26
To: user@flink.apache.org
Subject: Re: Flink performance tuning
Hi,
Can you try running the job with
One issue may be that the selection of YARN containers is not HDFS-locality-aware
here.
Hence, Flink may read more splits remotely, whereas MR reads more splits
locally.
On Fri, May 13, 2016 at 3:25 PM, Robert Metzger wrote:
Hi,
Can you try running the job with 8 slots, 7 GB (maybe you need to go down
to 6 GB) and only three TaskManagers (-n 3)?
I'm suggesting this because you have many small JVMs running on your
machines. On such small machines you can probably get much more use out of
your available memory by run
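In yarn-session terms that would be roughly the following (the -tm value is approximate
and may need to go down as noted):

./bin/yarn-session.sh -n 3 -s 8 -jm 1024 -tm 7168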
Hey,
I have successfully integrated Flink into our very small test cluster (3
machines with 8 cores, 8 GB of memory and 2x1 TB disks). Basically, I
started the session to use YARN as the RM, and the data is being read from HDFS.
./yarn-session.sh -n 21 -s 1 -jm 1024 -tm 1024
My code is very simpl