Hi Flavio,
It’s most likely related to a problem with Maven.
I’m pretty sure this actually isn’t a problem anymore. Could you verify by
rebuilding Flink and seeing if the problem remains? Thanks a lot.
Best,
Gordon
On 16 June 2017 at 6:25:10 PM, Tzu-Li (Gordon) Tai (tzuli...@apache.org) wrote:
Thanks for the pointers, Aljoscha.
I was just wondering: since the custom partitioner will run in a separate
thread, is it possible that map -> custom partition -> flat map can take more
than 200 seconds even if parallelism is still 1?
--
View this message in context:
http://apache-flink-user-mailing-li
From my understanding, the benchmark was done using Structured Streaming,
which is still based on micro-batching.
There are no throughput numbers for the new "Continuous Processing"
model Spark wants to introduce, only some latency numbers. Also note
that the new "Continuous Processing" will not gi
databricks.com/blog/2017/06/06/simple-super-fast-streaming-engine-apache-spark.html
Should Flink users be worried about this huge difference?
End of microbatch with 65M benchmark.
Hello everybody! First of all, thanks for reading :D
I am currently working on my bachelor's final project, which is a
comparison between Spark Streaming and Flink. Now let's focus on the
problem:
- THE PROBLEM: the problem is that my program is writing to Kafka more
than once every window (
Hey Milad,
since you cannot look into the future to know which element comes next, you
have to "lag" one element behind. This requires building an operator that
creates 2-tuples from incoming elements containing (current-1, current), so
basically a single value state that emits the last and the current element in
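The pairing behaviour described above can be sketched independently of Flink. The plain-Java method below (all names are my own, not from the thread) shows the "lag one behind" logic that a single value state inside a Flink operator would implement: nothing is emitted for the first element, and every later element is paired with its predecessor.

```java
import java.util.ArrayList;
import java.util.List;

public class LagPairs {
    // Emits (previous, current) pairs. The first element has no predecessor,
    // so emission starts with the second input element -- the same
    // "lag one behind" behaviour a single value state would give.
    static List<int[]> pairWithPrevious(List<Integer> input) {
        List<int[]> out = new ArrayList<>();
        Integer previous = null;          // stands in for the operator's value state
        for (int current : input) {
            if (previous != null) {
                out.add(new int[]{previous, current});
            }
            previous = current;           // update "state" for the next element
        }
        return out;
    }

    public static void main(String[] args) {
        for (int[] p : pairWithPrevious(List.of(1, 2, 3))) {
            System.out.println(p[0] + "," + p[1]);
        }
    }
}
```

In an actual Flink job the `previous` variable would live in a `ValueState` so the pairing survives failures and works per key; the control flow is the same.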
Hi guys,
I want to sessionize this stream: 1,1,1,2,2,2,2,2,3,3,3,3,3,3,3,0,3,3,3,5,
... to these sessions:
1,1,1
2,2,2,2,2
3,3,3,3,3,3,3
0
3,3,3
5
I've written a CustomTrigger to detect when stream elements change from 1 to 2
(2 to 3, 3 to 0 and so on) and then fire the trigger. But this is not the
s
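For what it's worth, the grouping being asked for (start a new session whenever the value changes from its predecessor) can be sketched in plain Java, outside of Flink. All names here are my own; inside a Flink job this split point is exactly where a custom trigger would fire.

```java
import java.util.ArrayList;
import java.util.List;

public class Sessionize {
    // Splits the input into runs of consecutive equal values: a new session
    // starts whenever the current value differs from the previous one.
    static List<List<Integer>> splitOnChange(List<Integer> input) {
        List<List<Integer>> sessions = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        for (int v : input) {
            // value changed -> close the current session, open a new one
            if (!current.isEmpty() && current.get(current.size() - 1) != v) {
                sessions.add(current);
                current = new ArrayList<>();
            }
            current.add(v);
        }
        if (!current.isEmpty()) {
            sessions.add(current);   // flush the final session
        }
        return sessions;
    }
}
```

On the example stream 1,1,1,2,2,2,2,2,3,3,3,3,3,3,3,0,3,3,3,5 this yields the six sessions listed above, including the single-element sessions for 0 and 5.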
Thanks for your clarification, Ziyad! I will try it out.
On Sat, Jun 17, 2017 at 3:45 PM, Ziyad Muhammed wrote:
> Hi,
>
> To set the rocksdb state, you have two options:
>
> 1. Set the default state backend of the Flink cluster, using the below
> parameters in the flink-conf.yaml file
>
> state.backend: roc
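The quoted message is cut off by the archive at this point. To illustrate option 1, a minimal flink-conf.yaml fragment for the Flink 1.x keys might look like the following; the checkpoint directory URI is a placeholder of my own, not from the thread:

```yaml
# flink-conf.yaml -- cluster-wide default state backend (Flink 1.x)
state.backend: rocksdb
# where checkpoint data is written; this URI is a placeholder
state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
```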