We had the same issue. First, get the HDP build of Hadoop you are running; the
version is part of the jar file name, e.g.
/usr/hdp/current/hadoop-client/hadoop-common-<version>.jar.
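As a quick check (the path below is the usual HDP client layout; adjust it for
your install):

ls /usr/hdp/current/hadoop-client/hadoop-common-*.jar
# prints something like hadoop-common-2.7.3.2.6.1.0-129.jar,
# so the Hadoop version to build against is 2.7.3.2.6.1.0-129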
Then rebuild Flink from source:

mvn clean install -DskipTests -Pvendor-repos -Dhadoop.version=<version>

for example: mvn clean install -DskipTests -Pvendor-repos -Dhadoop.version=2.7.3.2.6.1.0-129
(here 2.7.3 is the Apache Hadoop base version and 2.6.1.0-129 is the HDP build)

Copy build-target/ to the cluster and set it up there. Export HADOOP_CONF_DIR
and YARN_CONF_DIR according to your environment. You should then have no
problem starting the YARN session.
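For example, on a typical HDP install the Hadoop configs live under
/etc/hadoop/conf (adjust for your cluster; the task manager count below is
just an illustration):

export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_CONF_DIR=/etc/hadoop/conf
./bin/yarn-session.sh -n 2   # start a YARN session with 2 task managers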

On Wed, Aug 30, 2017 at 6:45 AM, Federico D'Ambrosio <fedex...@gmail.com>
wrote:

> Hi,
> What is your "hadoop version" output? I'm asking because you said your
> hadoop distribution is in /usr/hdp so it looks like you're using
> Hortonworks HDP, just like myself. So, this would be a third party
> distribution and you'd need to build Flink from source according to this:
> https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/building.html#vendor-specific-versions
>
> Federico D'Ambrosio
>
> On 30 Aug 2017, 13:33, "albert" <alb...@datacamp.com> wrote:
>
>> Hi Chesnay,
>>
>> Thanks for your reply. I did download the binaries matching my Hadoop
>> version (2.7), that's why I was wondering whether the issue had something
>> to do with the exact Hadoop version Flink is compiled against, or whether
>> something might be missing in my environment.
>>
>
