Hi Gary,

No, I am running a YARN session (which I start with: flink-yarn-session
--slots 4 --taskManagerMemory 16GB --jobManagerMemory 3GB --detached) and
submit jobs through the REST interface. Thank you for the tips - I will
probably shade it on my side. Is there an official location where the uber
jar's dependencies are documented that I can reference for future dependency
additions?
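
For reference, here is roughly the relocation I plan to try (a rough,
untested sketch using the maven-shade-plugin; com.mycompany.shaded is just a
placeholder prefix):

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.1.1</version>
    <executions>
      <execution>
        <phase>package</phase>
        <goals>
          <goal>shade</goal>
        </goals>
        <configuration>
          <relocations>
            <!-- Placeholder prefix: move my okio 2.2.2 classes out of the
                 okio package so they cannot clash with the okio 1.4.0 that
                 ships inside flink-shaded-hadoop2-uber -->
            <relocation>
              <pattern>okio</pattern>
              <shadedPattern>com.mycompany.shaded.okio</shadedPattern>
            </relocation>
            <!-- okhttp 3.13.1 lives in the okhttp3 package; relocated as
                 well so both libraries stay together -->
            <relocation>
              <pattern>okhttp3</pattern>
              <shadedPattern>com.mycompany.shaded.okhttp3</shadedPattern>
            </relocation>
          </relocations>
        </configuration>
      </execution>
    </executions>
  </plugin>

If I understand the class path issue correctly, the okio relocation is the
important part, since my 2.2.2 classes and the uber jar's 1.4.0 classes
otherwise share the same okio package.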

Best,
Austin

On Thu, Feb 28, 2019 at 7:13 AM Gary Yao <g...@ververica.com> wrote:

> Hi Austin,
>
> Are you running your job detached in a per-job cluster? In that case
> inverted class loading does not work. This is because we add the user jar
> to the system class path, and there is no dynamic class loading involved at
> the moment [1].
>
> You can try the YARN session mode, or – as Chesnay already suggested –
> shade the dependency on your side.
>
> Best,
> Gary
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.6/monitoring/debugging_classloading.html#overview-of-classloading-in-flink
>
>
> On Wed, Feb 27, 2019 at 8:57 PM Austin Cawley-Edwards <
> austin.caw...@gmail.com> wrote:
>
>> Thanks Gary,
>>
>> I will try to look into why the child-first strategy seems to have failed
>> for this dependency.
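>>
>> (For example, logging
>> okio.Buffer.class.getProtectionDomain().getCodeSource().getLocation()
>> from inside the job should show which jar that class is actually being
>> loaded from.)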
>>
>> Best,
>> Austin
>>
>> On Wed, Feb 27, 2019 at 12:25 PM Gary Yao <g...@ververica.com> wrote:
>>
>>> Hi,
>>>
>>> Actually Flink's inverted class loading feature was designed to mitigate
>>> problems with different versions of libraries that are not compatible with
>>> each other [1]. You may want to debug why it does not work for you.
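>>>
>>> (A first thing to check would be that classloader.resolve-order in
>>> flink-conf.yaml has not been switched away from its default of
>>> child-first.)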
>>>
>>> You can also try to use the Hadoop-free Flink distribution, and export the
>>> HADOOP_CLASSPATH variable [2].
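>>>
>>> (With the Hadoop-free distribution, you would typically point Flink at the
>>> Hadoop jars already on the cluster before starting it, for example:
>>>
>>>   export HADOOP_CLASSPATH=`hadoop classpath`
>>>
>>> on the machine from which you launch the YARN session.)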
>>>
>>> Best,
>>> Gary
>>>
>>> [1]
>>> https://ci.apache.org/projects/flink/flink-docs-release-1.6/monitoring/debugging_classloading.html#inverted-class-loading-and-classloader-resolution-order
>>> [2]
>>> https://ci.apache.org/projects/flink/flink-docs-release-1.6/ops/deployment/hadoop.html#configuring-flink-with-hadoop-classpaths
>>>
>>> On Wed, Feb 27, 2019 at 5:23 AM Austin Cawley-Edwards <
>>> austin.caw...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> I recently ran into version clashes with the okio and okhttp libraries when
>>>> trying to deploy a Flink 1.6.0 app to AWS EMR on Hadoop 2.8.4. After
>>>> investigating and talking to the okio team (see this issue:
>>>> https://github.com/square/okio/issues/559), I found that both okio and
>>>> okhttp exist in the Flink uber jar with versions 1.4.0 and 2.4.0,
>>>> respectively, whereas I'm including versions 2.2.2 and 3.13.1 in my shaded
>>>> jar. The okio team suggested that Flink should shade the uber jar to fix
>>>> the issue, but I'm wondering if there is something I can do on my end so
>>>> that both sets of versions can exist simultaneously.
>>>>
>>>> From the issue, here are the okio contents of the uber jar:
>>>>
>>>> *jar -tf flink-shaded-hadoop2-uber-1.6.0.jar | grep okio*
>>>>
>>>> META-INF/maven/com.squareup.okio/
>>>> META-INF/maven/com.squareup.okio/okio/
>>>> META-INF/maven/com.squareup.okio/okio/pom.properties
>>>> META-INF/maven/com.squareup.okio/okio/pom.xml
>>>> okio/
>>>> okio/AsyncTimeout$1.class
>>>> okio/AsyncTimeout$2.class
>>>> okio/AsyncTimeout$Watchdog.class
>>>> okio/AsyncTimeout.class
>>>> okio/Base64.class
>>>> okio/Buffer$1.class
>>>> okio/Buffer$2.class
>>>> okio/Buffer.class
>>>> okio/BufferedSink.class
>>>> okio/BufferedSource.class
>>>> okio/ByteString.class
>>>> okio/DeflaterSink.class
>>>> okio/ForwardingSink.class
>>>> okio/ForwardingSource.class
>>>> okio/ForwardingTimeout.class
>>>> okio/GzipSink.class
>>>> okio/GzipSource.class
>>>> okio/InflaterSource.class
>>>> okio/Okio$1.class
>>>> okio/Okio$2.class
>>>> okio/Okio$3.class
>>>> okio/Okio.class
>>>> okio/RealBufferedSink$1.class
>>>> okio/RealBufferedSink.class
>>>> okio/RealBufferedSource$1.class
>>>> okio/RealBufferedSource.class
>>>> okio/Segment.class
>>>> okio/SegmentPool.class
>>>> okio/SegmentedByteString.class
>>>> okio/Sink.class
>>>> okio/Source.class
>>>> okio/Timeout$1.class
>>>> okio/Timeout.class
>>>> okio/Util.class
>>>>
>>>> Thank you,
>>>> Austin Cawley-Edwards
>>>>
>>>
