Hi All,
We are running Flink in AWS and we are observing some strange behavior. We are
using Docker containers, EBS for storage and the RocksDB state backend. We
have a few map and value states with checkpointing every 30 seconds and
incremental checkpointing turned on. The issue we are noticing is the
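(For reference, a minimal sketch of that kind of setup using the Scala DataStream API; the checkpoint URI below is just a placeholder:)

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend
    import org.apache.flink.streaming.api.scala._

    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // checkpoint every 30 seconds, as described above
    env.enableCheckpointing(30000)

    // RocksDB state backend with incremental checkpoints enabled;
    // the checkpoint path is a placeholder
    env.setStateBackend(new RocksDBStateBackend("file:///mnt/ebs/checkpoints", true))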
Hi,
We have a streaming job that runs on Flink in Docker, and checkpointing
happens every 10 seconds. After several starts and cancellations we are
facing this issue with file handles.
The job reads data from Kafka, processes it and writes it back to Kafka, and
we are using the RocksDB state backend. F
Hi,
I have two streams reading from Kafka, one for data and the other for control.
The data stream is split by type and there are around six types. Each type
has its own processing logic and finally everything has to be merged to get
the collective state per device. I was thinking I could connect mul
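(A rough Scala sketch of that shape; type and field names such as Event, eventType and deviceId are made up, and only two of the six branches are shown:)

    import org.apache.flink.streaming.api.scala._

    // route each record to a named output based on its type
    val byType = dataStream.split((e: Event) => Seq(e.eventType))

    val typeA = byType.select("A").flatMap(new TypeAProcessor)
    val typeB = byType.select("B").flatMap(new TypeBProcessor)
    // ... one branch per type ...

    // merge the per-type results and key by device to build the collective state
    val merged = typeA.union(typeB).keyBy(_.deviceId)

    // the control stream can then be connected to the merged, keyed stream
    val withControl = merged
      .connect(controlStream.broadcast)
      .flatMap(new ControlAwareLogic)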
Hi All,
I see the below error after running my streaming job for a while and when
the load increases. After a while the task manager becomes completely dead
and the job keeps on restarting.
Also when I checked whether there is any back pressure in the UI, it kept on
saying sampling in progress and no r
org/elasticsearch/index/mapper/MapperParsingException) is usually caused
> by
> an Elasticsearch version conflict or a bad shading when creating the uber jar.
> Can you check if one of the two is causing the problem?
>
> On 25 Feb 2017 23:13, "Govindarajan Srinivasaraghavan" <
1.2.0'
compile group: 'org.apache.flink', name: 'flink-clients_2.10', version: '1.2.0'
compile group: 'org.apache.flink', name: 'flink-connector-elasticsearch2_2.10', version: '1.2.0'
On Sat, Feb 25, 2017 at 1:26 AM, Flavio Pomperma
Hi All,
I'm getting the below exception when I start my Flink job. I have verified
the Elasticsearch host and it seems to be working fine. I have also tried
including the below dependencies in my project but nothing works. Need some
help. Thanks.
compile group: 'org.apache.lucene', name: 'lucene-
Hi All,
I'm trying to run a streaming job with Flink version 1.2 and there are 3
task managers with 12 task slots. Irrespective of the parallelism that I
give, it always fails with the below error, and I found a JIRA link
corresponding to this issue. Can I know when this will be resolved, since
I'
Hi All,
It would be great if someone could help me with my questions. Appreciate all
the help.
Thanks.
> On Dec 23, 2016, at 12:11 PM, Govindarajan Srinivasaraghavan
> wrote:
>
> Hi,
>
> We have a computation heavy streaming flink job which will be processing
> around 1
Hi,
We have a computation-heavy streaming Flink job which will be processing
around 100 million messages at peak time and around 1 million messages in
non-peak time. We need the capability to dynamically scale so that the
computation operator can scale up and down during high or low workloads
resp
Hi All,
I have a use case for which I need some suggestions. It's a streaming
application with a Kafka source, followed by a groupBy/keyBy and some
calculations. The output of each calculation has to be a side input for the
next calculation and it also needs to be sent to a sink.
Right now I'm a
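(One way to express "each result feeds the next calculation and a sink" is to simply reuse the intermediate stream; a rough Scala sketch with made-up names:)

    import org.apache.flink.streaming.api.scala._

    val firstResult = source
      .keyBy(_.userId)
      .map(new FirstCalculation)

    // the same intermediate stream can be consumed twice:
    // once by a sink, once as the input of the next calculation
    firstResult.addSink(firstSink)

    val secondResult = firstResult
      .keyBy(_.userId)
      .map(new SecondCalculation)

    secondResult.addSink(secondSink)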
Hi,
I am currently using a Flink 1.2 snapshot and instrumenting my pipeline with
Flink metrics. One small suggestion I have: currently the Meter interface
only supports getRate(), which is always the one-minute rate.
It would be great if all the rates (1 min, 5 min & 15 min) were exposed to get
a bett
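(For context, a rough Scala sketch of how such a meter gets registered, assuming the flink-metrics-dropwizard wrapper; the class and metric names are made up:)

    import org.apache.flink.api.common.functions.RichMapFunction
    import org.apache.flink.configuration.Configuration
    import org.apache.flink.dropwizard.metrics.DropwizardMeterWrapper
    import org.apache.flink.metrics.Meter

    class CountingMap[T] extends RichMapFunction[T, T] {
      @transient private var throughput: Meter = _

      override def open(parameters: Configuration): Unit = {
        // the wrapped Dropwizard meter tracks 1/5/15-minute rates internally,
        // but Meter.getRate() only surfaces the one-minute rate
        throughput = getRuntimeContext.getMetricGroup
          .meter("throughput", new DropwizardMeterWrapper(new com.codahale.metrics.Meter()))
      }

      override def map(value: T): T = {
        throughput.markEvent()
        value
      }
    }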
Hi,
I have a few questions on how to model my use case in Flink. Please
advise. Thanks for the help.
- I'm currently running my Flink program on a 1.2 SNAPSHOT with a Kafka source
and I have checkpointing enabled. When I look at the consumer offsets in Kafka
they appear to be stagnant and there
the same state. For example:
>
> val dataStream = env.addSource(dataSource).keyBy("userId")
> val filterStream = env.addSource(filterSource).broadcast()
> val connectedStream = dataStream
>   .connect(filterStream)
>   .flatMap(yourFilterFunction)
>
>
> I hop
Hi,
My requirement is to stream millions of records a day, and it has a huge
dependency on external configuration parameters. For example, a user can go
and change the required settings at any time in the web application, and after
the change is made, the streaming has to happen with the new application
Hi,
I'm working with Apache Flink for data streaming and I have a few questions.
Any help is greatly appreciated. Thanks.
1) Are there any restrictions on creating tumbling windows? For example, if
I want to create a tumbling window per user id for 2 seconds, and let's say
I have more than 10 million
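(For reference, that kind of window as a Scala sketch; the field names and merge function are made up:)

    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.api.windowing.time.Time

    // a 2-second tumbling window per user id; the number of distinct users
    // mainly affects the amount of window state kept per interval
    val perUser = events
      .keyBy(_.userId)
      .timeWindow(Time.seconds(2))
      .reduce((a, b) => a.merge(b))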