at some points… Somehow
breaking the “slot number” contract).
For the RAM cache, I believe that the heartbeat may also time out
because of a busy network.
Cheers,
Arnaud
From: Till Rohrmann
Sent: Thursday, July 22, 2021 11:33
To: LINZ, Arnaud
Cc: Gen Luo; Yang Wang; dev; user
Hello,
From a user perspective: we have some (rare) use cases where we use "coarse-grained" datasets, with large beans and tasks that perform lengthy operations (such as ML training). In those cases we had to increase the timeout to huge values (heartbeat.timeout: 50) so that our app is not killed.
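For reference, a hypothetical flink-conf.yaml fragment (the values here are illustrative assumptions, not taken from this thread) showing the two knobs involved:

```yaml
# Illustrative flink-conf.yaml fragment -- values are assumptions, not from
# the thread. heartbeat.timeout must comfortably exceed the longest blocking
# call in user code, and must stay larger than heartbeat.interval.
heartbeat.interval: 10000   # ms between heartbeat requests (Flink default)
heartbeat.timeout: 300000   # ms of silence before a TaskManager is declared dead
```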
Hello,
I think that's a good idea. I suppose that most corporate users run a vendor
distribution and already compile their own Flink version against the vendor repos anyway.
Arnaud
PS - FYI, for CDH6 (based on hadoop3), I've managed to build a 1.10 version by
modifying pom.xml files and using "hidden" Cloudera p
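A sketch of what such a vendor build can look like (the CDH version string is a placeholder, not from this thread; older Flink source trees exposed a `vendor-repos` Maven profile and a `hadoop.version` property for this purpose):

```sh
# Hypothetical build invocation -- the hadoop.version value is illustrative.
mvn clean install -DskipTests \
    -Pvendor-repos \
    -Dhadoop.version=3.0.0-cdh6.3.2
```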
Hello,
Have you shared it somewhere on the web already?
Best,
Arnaud
From: vino yang
Sent: Wednesday, December 4, 2019 11:55
To: Márton Balassi
Cc: Chesnay Schepler; Foster, Craig; u...@flink.apache.org; dev@flink.apache.org
Subject: Re: Building with Hadoop 3
Hi Marton,
Thanks for your expl
blem with the Maven
> artifacts of 1.1.0 :-( I've added a warning to the release note and
> will start an emergency vote for 1.1.1 which only updates the Maven
> artifacts.
>
> On Tue, Aug 9, 2016 at 9:45 AM, LINZ, Arnaud wrote:
>> Hello,
Hello,
I've switched to 1.1.0, but part of my code no longer works.
Despite having no Hadoop 1 jar in my dependencies (2.7.1 clients
& flink-hadoop-compatibility_2.10 1.1.0), I get a weird JobContext version
mismatch error that I was unable to understand.
Code is a hive
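The JobContext mismatch is the classic Hadoop 1 vs Hadoop 2 incompatibility: `org.apache.hadoop.mapreduce.JobContext` is a class in Hadoop 1 but an interface in Hadoop 2+, so a single stray jar on the classpath triggers a version-mismatch error. A small diagnostic (my sketch, not from the thread) prints which jar a class was actually loaded from:

```java
import java.security.CodeSource;

// Diagnostic sketch (not from the thread): report which jar a class was
// loaded from. Pointing it at org.apache.hadoop.mapreduce.JobContext shows
// whether a Hadoop 1 or Hadoop 2 artifact is shadowing the expected one.
public class WhichJar {
    static String locationOf(Class<?> c) {
        CodeSource src = c.getProtectionDomain().getCodeSource();
        // Bootstrap-classpath classes (e.g. java.lang.String) have no code source.
        return src == null ? "(bootstrap classpath)" : src.getLocation().toString();
    }

    public static void main(String[] args) throws ClassNotFoundException {
        // In a real debugging session, pass org.apache.hadoop.mapreduce.JobContext
        // as args[0]; without arguments we demonstrate on this class itself.
        Class<?> target = args.length > 0 ? Class.forName(args[0]) : WhichJar.class;
        System.out.println(target.getName() + " -> " + locationOf(target));
    }
}
```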