Thank you very much. Got it.
On Tue, Nov 27, 2018 at 12:53 PM Fabian Hueske wrote:
Hi Avi,
I'd definitely go for approach #1.
Flink will hash partition the records across all nodes. This is basically
the same as a distributed key-value store sharding keys.
I would not try to fine-tune the partitioning. You should try to use as
many keys as possible to ensure an even distribution.
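The effect Fabian describes can be illustrated outside Flink. The sketch below is a simplified stand-in for hash partitioning (Flink's actual scheme assigns keys to key groups via murmur hashing, which is not reproduced here): with many distinct keys, a plain hash modulo the parallelism already spreads records roughly evenly.

```java
import java.util.HashMap;
import java.util.Map;

public class HashPartitionDemo {
    // Simplified stand-in for keyed partitioning. Flink really maps
    // keys to key groups with murmur hashing, but the distribution
    // behaviour with many keys is similar.
    static int partitionFor(String key, int parallelism) {
        // floorMod avoids a negative result for negative hash codes
        return Math.floorMod(key.hashCode(), parallelism);
    }

    public static void main(String[] args) {
        int parallelism = 4;
        Map<Integer, Integer> counts = new HashMap<>();
        // Many distinct keys -> counts per partition end up roughly even.
        for (int i = 0; i < 10_000; i++) {
            int p = partitionFor("car-" + i, parallelism);
            counts.merge(p, 1, Integer::sum);
        }
        System.out.println(counts);
    }
}
```

With few distinct keys the same scheme degrades: some partitions get nothing, which is why using as many keys as possible matters.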
In general, approach #1 is OK, but you may have to use a hash-based key
selector if you have heavy data skew.
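One common way to implement such a key selector under skew (an assumption here; the thread does not spell out the technique) is key salting: append a random suffix to a hot key so its records spread across several parallel subtasks, then merge the partial results per original key in a second step. A minimal sketch with hypothetical helper names:

```java
import java.util.concurrent.ThreadLocalRandom;

public class SaltedKeySelector {
    // Hypothetical helper: append a random salt in [0, fanout) so a
    // single hot key fans out over several subtasks. A downstream
    // aggregation must re-combine the partial results per original key.
    static String saltKey(String key, int fanout) {
        int salt = ThreadLocalRandom.current().nextInt(fanout);
        return key + "#" + salt;
    }

    // Recover the original key for the final merge step.
    static String unsaltKey(String saltedKey) {
        int idx = saltedKey.lastIndexOf('#');
        return idx < 0 ? saltedKey : saltedKey.substring(0, idx);
    }

    public static void main(String[] args) {
        String salted = saltKey("hot-key", 8);
        System.out.println(salted + " -> " + unsaltKey(salted));
    }
}
```

The trade-off is a two-stage aggregation (pre-aggregate per salted key, then combine per original key) in exchange for balanced load.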
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Thanks a lot! Got it :)
On Wed, Nov 21, 2018 at 11:40 PM Jamie Grier wrote:
Hi Avi,
The typical approach would be as you've described in #1. #2 is not
necessary -- #1 is already doing basically exactly that.
-Jamie
On Wed, Nov 21, 2018 at 3:36 AM Avi Levi wrote:
Hi,
I am very new to Flink, so please be gentle :)
*The challenge:*
I have a road sensor that should scan billions of cars per day. For starters,
I want to recognise whether each car that passes by is new or not. New cars
(never seen before by that sensor) will be placed on a different
topic on Kafka
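The core of approach #1, as the replies above describe it, is to key the stream by car ID and keep a per-key "seen" flag in keyed state. The Flink runtime itself is omitted in this sketch; a `HashSet` stands in for the per-key `ValueState<Boolean>` a real keyed operator would use:

```java
import java.util.HashSet;
import java.util.Set;

public class NewCarDetector {
    // Stand-in for Flink keyed state: in a real job, keyBy(carId) gives
    // each car its own ValueState<Boolean>; here one HashSet plays that
    // role for all keys.
    private final Set<String> seen = new HashSet<>();

    // Returns true the first time a car ID is observed. In the real
    // pipeline such records would be routed to the "new cars" Kafka
    // topic and the rest elsewhere.
    boolean isNew(String carId) {
        return seen.add(carId);
    }

    public static void main(String[] args) {
        NewCarDetector detector = new NewCarDetector();
        System.out.println(detector.isNew("ABC-123")); // first sighting
        System.out.println(detector.isNew("ABC-123")); // seen before
    }
}
```

Because Flink shards keyed state by the same hash partitioning discussed above, each parallel subtask only holds the car IDs for its own share of the key space, which is what makes billions of keys tractable.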