Re: [DISCUSS] Adding support for Hadoop 3 and removing flink-shaded-hadoop

2020-05-03 Thread Konstantin Knauf
Hi Chesnay, Hi Robert,

I have a bit of a naive question. I assume the reason for introducing
flink-shaded-hadoop were dependency conflicts between Hadoop, Flink and/or
user code. When we drop it now, is it because

a) it was not worth it (value provided did not justify maintenance overhead
and issues introduced)
b) we don't think it is a problem anymore
c) priorities have shifted and it is *now* not worth it anymore
d) something else

Cheers,

Konstantin

On Sun, Apr 26, 2020 at 10:25 PM Stephan Ewen  wrote:

> Indeed, that would be the assumption, that Hadoop does not expose its
> transitive libraries on its public API surface.
>
> From vague memory, I think that's pretty much true so far. I only remember
> Kinesis and Calcite as counterexamples, which exposed Guava classes as part
> of the public API.
> But that is definitely the "weak spot" of this approach. Plus, as with all
> custom class loaders, the fact that the Thread Context Class Loader does
> not really work well any more.
>
> On Thu, Apr 23, 2020 at 11:50 AM Chesnay Schepler 
> wrote:
>
> > This would only work so long as all Hadoop APIs do not directly expose
> > any transitive non-hadoop dependency.
> > Otherwise the user code classloader might search for this transitive
> > dependency in lib instead of the hadoop classpath (and possibly not find
> > it).
> >
> > On 23/04/2020 11:34, Stephan Ewen wrote:
> > > True, connectors built on Hadoop make this a bit more complex. That is
> > also
> > > the reason why Hadoop is on the "parent first" patterns.
> > >
> > > Maybe this is a bit of a wild thought, but what would happen if we had a
> > > "first class" notion of a Hadoop classloader in the system, and the user
> > > code classloader would explicitly fall back to that one whenever a class
> > > whose name starts with "org.apache.hadoop" is not found? We could also
> > > generalize this by associating plugin loaders with class name prefixes.
> > >
> > > Then it would try to load from the user code jar, and if the class was not
> > > found, load it from the hadoop classpath.
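As a rough illustration of the prefix-fallback idea discussed above (illustrative Java only, not Flink code; the class and method names are assumptions):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PrefixFallbackClassLoader extends ClassLoader {

    // prefix -> loader that owns classes under that prefix, e.g. a loader
    // built from HADOOP_CLASSPATH registered for "org.apache.hadoop."
    private final Map<String, ClassLoader> fallbacks;

    public PrefixFallbackClassLoader(ClassLoader parent, Map<String, ClassLoader> fallbacks) {
        super(parent);
        this.fallbacks = new LinkedHashMap<>(fallbacks);
    }

    // The routing rule on its own, so it is easy to test: returns the
    // loader responsible for the given class name, or null if none matches.
    ClassLoader chooseFallback(String className) {
        for (Map.Entry<String, ClassLoader> e : fallbacks.entrySet()) {
            if (className.startsWith(e.getKey())) {
                return e.getValue();
            }
        }
        return null;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        // Only reached after the normal (user-code) lookup has failed.
        ClassLoader fallback = chooseFallback(name);
        if (fallback != null) {
            return fallback.loadClass(name);
        }
        throw new ClassNotFoundException(name);
    }
}
```

The weak spot mentioned in the thread remains: this only works if Hadoop's public API does not expose transitive non-Hadoop dependencies, since those would not match the registered prefix.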
> > >
> > > On Thu, Apr 23, 2020 at 10:56 AM Chesnay Schepler 
> > > wrote:
> > >
> > >> although, if you can load the HADOOP_CLASSPATH as a plugin, then you can
> > >> also load it in the user-code classloader.
> > >>
> > >> On 23/04/2020 10:50, Chesnay Schepler wrote:
> > >>> @Stephan I'm not aware of anyone having tried that; possibly since we
> > >>> have various connectors that require hadoop (hadoop-compat, hive,
> > >>> orc/parquet/hbase, hadoop inputformats). This would require
> connectors
> > >>> to be loaded as plugins (or having access to the plugin classloader)
> > >>> to be feasible.
> > >>>
> > >>> On 23/04/2020 09:59, Stephan Ewen wrote:
> >  Hi all!
> > 
> >  +1 for the simplification of dropping hadoop-shaded
> > 
> > 
> >  Have we ever investigated how much work it would be to load the
> >  HADOOP_CLASSPATH through the plugin loader? Then Hadoop's crazy
> >  dependency
> >  footprint would not spoil the main classpath.
> > 
> >  - HDFS might be very simple, because file systems are already
> >  Plugin aware
> >  - Yarn would need some extra work. In essence, we would need to
> >  discover
> >  executors also through plugins
> >  - Kerberos is the other remaining bit. We would need to switch
> >  security
> >  modules to ServiceLoaders (which we should do anyways) and also pull
> >  them
> >  from plugins.
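The ServiceLoader-based discovery mentioned for the security modules could be sketched roughly as follows (SecurityModule is a stand-in name for illustration, not Flink's actual interface):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// A security module interface discovered via the JDK ServiceLoader instead
// of hard-coded instantiation. "SecurityModule" is a stand-in name here.
public interface SecurityModule {
    void install();

    // Discovers all implementations registered via a provider-configuration
    // file under META-INF/services on the (plugin) classpath.
    static List<SecurityModule> discover() {
        List<SecurityModule> modules = new ArrayList<>();
        for (SecurityModule m : ServiceLoader.load(SecurityModule.class)) {
            modules.add(m);
        }
        return modules;
    }
}
```

The same `ServiceLoader.load(...)` call can be given a plugin classloader as a second argument, which is what would tie this to the plugin mechanism discussed above.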
> > 
> >  Best,
> >  Stephan
> > 
> > 
> > 
> >  On Thu, Apr 23, 2020 at 4:05 AM Xintong Song wrote:
> > 
> > > +1 for supporting Hadoop 3.
> > >
> > > I'm not familiar with the shading efforts, thus no comment on
> > > dropping the
> > > flink-shaded-hadoop.
> > >
> > >
> > > Correct me if I'm wrong. Although the default Hadoop version for
> > > compiling in Flink is currently 2.4.1, I don't think this means Flink
> > > should support only Hadoop 2.4+. So no matter which Hadoop version we use
> > > for compiling by default, we need to use reflection for any Hadoop
> > > features/APIs that are not supported in all versions anyway.
> > >
> > >
> > > There are already many such reflections in `YarnClusterDescriptor` and
> > > `YarnResourceManager`, and there might be more in the future. I'm wondering
> > > whether we should have a unified mechanism (an interface / abstract class
> > > or so) that handles all of these Hadoop API reflections in one place. Not
> > > necessarily in the scope of this discussion, though.
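A unified reflection mechanism along these lines could look roughly like this (hypothetical helper for illustration, not existing Flink code):

```java
import java.lang.reflect.Method;
import java.util.Optional;

// Hypothetical helper that centralizes version-dependent API lookups:
// resolve a method once and report "absent" instead of throwing when the
// running Hadoop version lacks the feature.
public final class HadoopReflection {

    private HadoopReflection() {}

    public static Optional<Method> findMethod(Class<?> clazz, String name, Class<?>... paramTypes) {
        try {
            return Optional.of(clazz.getMethod(name, paramTypes));
        } catch (NoSuchMethodException e) {
            // The feature is not available in this version.
            return Optional.empty();
        }
    }
}
```

Call sites could then branch on `Optional.isPresent()` instead of scattering try/catch blocks around every version-dependent Hadoop call.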
> > >
> > >
> > > Thank you~
> > >
> > > Xintong Song
> > >
> > >
> > >
> >  On Wed, Apr 22, 2020 at 8:32 PM Chesnay Schepler <ches...@apache.org> wrote:
> > >
> > >> 1) Likely not, as this again introduces a hard-dependency on
> >

[ANNOUNCE] Weekly Community Update 2020/18

2020-05-03 Thread Konstantin Knauf
Dear community,

happy to share - a brief - community update this week with an update on
Flink 1.10.1, our application to Google Season of Docs 2020, a discussion
to support Hadoop 3, a recap of Flink Forward Virtual 2020 and a bit more.

Flink Development
==

* [releases] Yu has published a RC #2 for Flink 1.10.1. No -1s so far. [1]

* [docs] Apache Flink's application to Google Season of Docs 2020 is about
to be finalized. Marta has opened a PR for the announcement, and Seth &
Aljoscha have volunteered as mentors. Apache Flink is pitching a project to
improve the documentation of the Table API & SQL. [2]

* [hadoop] Robert has started a discussion on adding support for Hadoop 3.
In particular, the thread discusses the question of whether Hadoop 3 should
be supported via flink-shaded-hadoop or not. [3]

* [configuration] Timo has started a discussion on how we represent
configuration hierarchies in properties (Flink configuration as well as
Connector properties), so that the resulting files would be valid
JSON/YAML. [4]

* [connectors] Leonard Xu proposes to refactor the package, module and class
names of the Flink JDBC connector to be consistent with other connectors.
Details in [5].

[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-Release-1-10-1-release-candidate-2-tp41019.html
[2]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/PROPOSAL-Google-Season-of-Docs-2020-tp40264.html
[3]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Adding-support-for-Hadoop-3-and-removing-flink-shaded-hadoop-tp40570p40601.html
[4]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Hierarchies-in-ConfigOption-tp40920.html
[5]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Refactor-flink-jdbc-connector-structure-tp40984.html

flink-packages.org
==

* Alibaba has published a preview version of its SpillableHeapStateBackend
on flink-packages.org. [6] This state backend is being contributed to Apache
Flink in FLINK-12692 [7]. The SpillableHeapStateBackend is a Java heap-based
state backend (like the filesystem state backend) that spills the coldest
state to disk before the heap is exhausted.

[6] https://flink-packages.org/packages/spillable-state-backend-for-flink
[7] https://issues.apache.org/jira/browse/FLINK-12692

Notable Bugs
==

I did not encounter anything particularly worth sharing.

Events, Blog Posts, Misc
===

* Fabian has published a recap of Flink Forward Virtual 2020 on the
Ververica blog. [8]

* All recordings of Flink Forward Virtual 2020 have been published on
Youtube. [9]

[8] https://www.ververica.com/blog/flink-forward-virtual-2020-recap
[9]
https://www.youtube.com/watch?v=NF0hXZfUyqE&list=PLDX4T_cnKjD0ngnBSU-bYGfgVv17MiwA7

Cheers,

Konstantin

-- 

Konstantin Knauf

https://twitter.com/snntrable

https://github.com/knaufk


Re: [VOTE] Release 1.10.1, release candidate #2

2020-05-03 Thread Robert Metzger
Thanks a lot for addressing the issues from the last release candidate and
creating this one!

+1 (binding)

- Started Flink on YARN on Google Cloud DataProc by setting HADOOP_CLASSPATH
- checked staging repo



On Sat, May 2, 2020 at 6:57 PM Thomas Weise  wrote:

> +1 (binding)
>
> Checked signatures and hashes.
>
> Run internal benchmark applications.
>
> I found a regression that was actually introduced with 1.10.0, hence not a
> blocker for this release:
>
> https://github.com/apache/flink/pull/11975
>
> Thanks,
> Thomas
>
>
> On Fri, May 1, 2020 at 5:37 AM Yu Li  wrote:
>
> > Hi everyone,
> >
> > Please review and vote on the release candidate #2 for version 1.10.1, as
> > follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release and binary convenience releases to
> be
> > deployed to dist.apache.org [2], which are signed with the key with
> > fingerprint D8D3D42E84C753CA5F170BDF93C07902771AB743 [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag "release-1.10.1-rc2" [5],
> > * website pull request listing the new release and adding announcement
> blog
> > post [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Yu
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346891
> > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.10.1-rc2/
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> > https://repository.apache.org/content/repositories/orgapacheflink-1363/
> > [5]
> >
> >
> https://github.com/apache/flink/commit/f92e8a9d60ef664acd66230da43c6f0a1cd87adc
> > [6] https://github.com/apache/flink-web/pull/330
> >
>


[jira] [Created] (FLINK-17497) Quickstarts Java nightly end-to-end test fails with "class file has wrong version 55.0, should be 52.0"

2020-05-03 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-17497:
--

 Summary: Quickstarts Java nightly end-to-end test fails with 
"class file has wrong version 55.0, should be 52.0"
 Key: FLINK-17497
 URL: https://issues.apache.org/jira/browse/FLINK-17497
 Project: Flink
  Issue Type: Bug
  Components: Build System / Azure Pipelines, Tests
Affects Versions: 1.11.0
Reporter: Robert Metzger
Assignee: Robert Metzger


CI: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=540&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179&t=931b3127-d6ee-5f94-e204-48d51cd1c334

{code}

[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
java.io.FileNotFoundException: flink-quickstart-java-0.1.jar (No such file or 
directory)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:230)
at java.util.zip.ZipFile.<init>(ZipFile.java:160)
at java.util.zip.ZipFile.<init>(ZipFile.java:131)
at sun.tools.jar.Main.list(Main.java:1115)
at sun.tools.jar.Main.run(Main.java:293)
at sun.tools.jar.Main.main(Main.java:1288)
Success: There are no flink core classes are contained in the jar.
Failure: Since Elasticsearch5SinkExample.class and other user classes are not 
included in the jar. 
[FAIL] Test script contains errors.
Checking for errors...
No errors in log files.
Checking for exceptions...
No exceptions in log files.
Checking for non-empty .out files...
grep: 
/home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT/log/*.out:
 No such file or directory
No non-empty .out files.

[FAIL] 'Quickstarts Java nightly end-to-end test' failed after 0 minutes and 6 
seconds! Test exited with exit code 1


{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17498) MapCancelingITCase.testMapCancelling fails with timeout

2020-05-03 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-17498:
--

 Summary: MapCancelingITCase.testMapCancelling fails with timeout
 Key: FLINK-17498
 URL: https://issues.apache.org/jira/browse/FLINK-17498
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream, Tests
Affects Versions: 1.11.0
Reporter: Robert Metzger


CI: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=540&view=logs&j=e824df38-aa5e-5531-f993-88388cf903b8&t=4df9e78d-8d99-5a27-509a-217c7c98d003

{code}

at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
at 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at 
org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Flink 1.9.2 why always checkpoint expired

2020-05-03 Thread Congxian Qiu
Hi

From the picture and the previous email: you use RocksDBStateBackend, all
the operators are chained together, and the checkpoint timeout is set to 2 min.

Do you have keyed state in your job (do you have `keyby` in your job)?

I'll share some experience on finding the reason for checkpoint timeout
problems:
1. Check whether the snapshot thread can acquire the checkpoint lock if you
run on a version < 1.10.
2. Check whether the main thread consumes too much CPU, so that the barrier
cannot be handled in time.
3. Enable debug logging to find out more information.


Best,
Congxian


qq <471237...@qq.com> wrote on Tue, Apr 28, 2020 at 9:20 AM:

>
>
> On Apr 27, 2020, at 12:40, Jiayi Liao wrote:
>
> Hi,
>
> The picture in your attachment is too vague to see any detail. Besides
> the overview, could you take a look at the details of a specific expired
> checkpoint in the history tab? From my experience, the expiration is usually
> because:
>
> 1. The data skew problem, which you can find out from checkpoints' details.
> 2. The processing is too slow (or the job is back-pressured) and the
> checkpoint timeout is set too short.
>
> Best Regards,
> Jiayi Liao
>
> On Mon, Apr 27, 2020 at 12:34 PM qq <471237...@qq.com> wrote:
>
>> Hi all,
>>
>> Why do my Flink checkpoints always expire? I use RocksDB for
>> checkpoints, and I can't get any useful messages about this. Could you
>> help me? Thanks very much.
>>
>>
>>
>> <pasted-image-1.tiff>
>>
>
>


[jira] [Created] (FLINK-17499) LazyTimerService used to register timers via State Processing API incorrectly mixes event time timers with processing time timers

2020-05-03 Thread Adam Laczynski (Jira)
Adam Laczynski created FLINK-17499:
--

 Summary: LazyTimerService used to register timers via State
Processing API incorrectly mixes event time timers with processing time timers
 Key: FLINK-17499
 URL: https://issues.apache.org/jira/browse/FLINK-17499
 Project: Flink
  Issue Type: Bug
  Components: API / State Processor
Affects Versions: 1.10.0
Reporter: Adam Laczynski


{code}
@Override
public void registerProcessingTimeTimer(long time) {
    ensureInitialized();
    // BUG: registers an *event-time* timer instead of a processing-time timer
    internalTimerService.registerEventTimeTimer(VoidNamespace.INSTANCE, time);
}
{code}

Same issue for both registerEventTimeTimer and registerProcessingTimeTimer.

https://github.com/apache/flink/blob/master/flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/output/operators/LazyTimerService.java#L62
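To make the mix-up concrete, here is a toy model of the routing (illustrative names only, not Flink's internal API); the correct behavior is that each registration method appends to its own queue:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the reported bug: the buggy code forwarded
// registerProcessingTimeTimer to the event-time side. The version below
// routes each call to its own queue, which is the expected behavior.
public class TimerRouting {
    final List<Long> eventTimeTimers = new ArrayList<>();
    final List<Long> processingTimeTimers = new ArrayList<>();

    void registerEventTimeTimer(long time) {
        eventTimeTimers.add(time);
    }

    void registerProcessingTimeTimer(long time) {
        processingTimeTimers.add(time);
    }
}
```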



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] flink-connector-rabbitmq api changes

2020-05-03 Thread seneg...@gmail.com
Hi,

Okay, so keep the current constructors as is and create new ones with more
granular parsing of the results. Sounds like a good plan.

How do we proceed from here?

Regards,
Karim Mansour

On Fri, May 1, 2020 at 5:03 PM Austin Cawley-Edwards <
austin.caw...@gmail.com> wrote:

> Hey,
>
> (Switching to my personal email)
>
> Correct me if I'm wrong, but I think Aljoscha is proposing keeping the
> public API as is, and adding some new constructors/ custom deserialization
> schemas as was done with Kafka. Here's what I was able to find on that
> feature:
>
> * https://issues.apache.org/jira/browse/FLINK-8354
> *
>
> https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/KafkaDeserializationSchema.java
> *
>
> https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka-0.11/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumer011.java#L100-L114
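For context, a RabbitMQ analogue of the Kafka-specific schema might look roughly like this (hypothetical interface; no such interface existed in Flink at the time of this thread, and plain types stand in for the RabbitMQ client types):

```java
// Hypothetical sketch of a connector-specific deserialization schema for
// RabbitMQ, mirroring the shape of KafkaDeserializationSchema. Names and
// parameters are illustrative assumptions, not an existing Flink API.
public interface RMQDeserializationSchema<T> {

    // Full access to the message body plus RabbitMQ metadata such as the
    // correlation ID, instead of only the raw bytes.
    T deserialize(byte[] body, String correlationId);

    // Hook for bounded consumption, mirroring the Kafka schema.
    default boolean isEndOfStream(T nextElement) {
        return false;
    }
}
```

A schema with access to the metadata is also where a dead-lettering decision (e.g. for messages without a correlation ID) could naturally live.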
>
> Best,
> Austin
>
> On Fri, May 1, 2020 at 6:19 AM seneg...@gmail.com 
> wrote:
>
> > Hello,
> >
> > So the proposal is to keep the current RMQSource constructors / public API
> > as is and create new ones that give more granular parsing?
> >
> > Regards,
> > Karim Mansour
> >
> > On Thu, Apr 30, 2020 at 5:23 PM Austin Cawley-Edwards <
> > aus...@fintechstudios.com> wrote:
> >
> > > Hey all + thanks Konstantin,
> > >
> > > As mentioned, we also ran into issues with the RMQ Source's inflexibility.
> > > I think Aljoscha's idea of supporting both would be a nice way to
> > > incorporate new changes without breaking the current API.
> > > We'd definitely benefit from the changes proposed here, but have another
> > > issue with the correlation ID. When a message gets into the queue without
> > > a correlation ID, the source errors and the job cannot recover, requiring
> > > (painful) manual intervention. It would be nice to be able to dead-letter
> > > these inputs from the source, but I don't think that's possible with the
> > > current source interface (I don't know too much about the source
> > > specifics). We might be able to work around this with a custom
> > > correlation ID extractor, as proposed by Karim.
> > >
> > > Also, if there are other tickets in the RMQ integrations that have gone
> > > unmaintained, I'm happy to chip in at maintaining them!
> > >
> > > Best,
> > > Austin
> > > 
> > > From: Konstantin Knauf 
> > > Sent: Thursday, April 30, 2020 6:14 AM
> > > To: dev 
> > > Cc: Austin Cawley-Edwards 
> > > Subject: Re: [DISCUSS] flink-connector-rabbitmq api changes
> > >
> > > Hi everyone,
> > >
> > > just looping in Austin, as he mentioned to me yesterday that they also
> > > ran into issues due to the inflexibility of the RMQSource.
> > >
> > > Cheers,
> > >
> > > Konstantin
> > >
> > > On Thu, Apr 30, 2020 at 11:23 AM seneg...@gmail.com <seneg...@gmail.com> wrote:
> > > Hello Guys,
> > >
> > > Thanks for all the responses. I want to stress that I didn't feel
> > > ignored; I just thought that I had forgotten an important step or
> > > something.
> > >
> > > Since I am a newbie I will follow whatever route you guys suggest :)
> > > and I agree that the RMQ connector still needs a lot of love, which I
> > > would be happy to contribute gradually.
> > >
> > > As for the code, I have it here in the PR:
> > > https://github.com/senegalo/flink/pull/1
> > > It's not that much of a change in terms of logic, but more about what is
> > > exposed.
> > >
> > > Let me know how you want me to proceed.
> > >
> > > Thanks again,
> > > Karim Mansour
> > >
> > > On Thu, Apr 30, 2020 at 10:40 AM Aljoscha Krettek wrote:
> > >
> > > > Hi,
> > > >
> > > > I think it's good to contribute the changes to Flink directly since
> we
> > > > already have the RMQ connector in the respository.
> > > >
> > > > I would propose something similar to the Kafka connector, which takes
> > > > both the generic DeserializationSchema and a
> KafkaDeserializationSchema
> > > > that is specific to Kafka and allows access to the ConsumerRecord and
> > > > therefore all the Kafka features. What do you think about that?
> > > >
> > > > Best,
> > > > Aljoscha
> > > >
> > > > On 30.04.20 10:26, Robert Metzger wrote:
> > > > > Hey Karim,
> > > > >
> > > > > I'm sorry that you had such a bad experience contributing to Flink,
> > > even
> > > > > though you are nicely following the rules.
> > > > >
> > > > > You mentioned that you've implemented the proposed change already.
> > > Could
> > > > > you share a link to a branch here so that we can take a look? I can
> > > > assess
> > > > > the API changes easier if I see them :)
> > > > >
> > > > > Thanks a lot!
> > > > >
> > > > >
> > > > > Best,
> > > > > Robert
> > > > >
> > > > > On Thu, Apr 30, 2020 at 8:09 AM Dawid Wysakowicz <
> > > dwysakow...@apache.org

[GitHub] [flink-web] hequn8128 commented on a change in pull request #330: Add Apache Flink release 1.10.1

2020-05-03 Thread GitBox


hequn8128 commented on a change in pull request #330:
URL: https://github.com/apache/flink-web/pull/330#discussion_r419198602



##
File path: _posts/2020-05-05-release-1.10.1.md
##
@@ -0,0 +1,370 @@
+---
+layout: post
+title:  "Apache Flink 1.10.1 Released"
+date:   2020-05-05 12:00:00
+categories: news
+authors:
+- liyu:
+  name: "Yu Li"
+  twitter: "LiyuApache"
+---
+
+The Apache Flink community released the first bugfix version of the Apache 
Flink 1.10 series.
+
+This release includes 143 fixes and minor improvements for Flink 1.10.0. The 
list below includes a detailed list of all fixes and improvements.

Review comment:
   143 => 152

##
File path: _config.yml
##
@@ -194,8 +194,8 @@ release_archive:
 flink:
   -
 version_short: "1.10"
-version_long: 1.10.0
-release_date: 2020-02-11
+version_long: 1.10.1

Review comment:
   Add 1.10.1 instead of replacing 1.10.0.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: "[VOTE] FLIP-108: edit the Public API"

2020-05-03 Thread Xintong Song
+1 (non-binding)

Thank you~

Xintong Song



On Sat, May 2, 2020 at 9:46 PM Becket Qin  wrote:

> +1. The API change sounds good to me.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Fri, May 1, 2020 at 10:25 PM Yu Li  wrote:
>
> > +1
> >
> > I could see there's a thorough discussion and the solution looks good.
> > Thanks for driving this Yangze.
> >
> > Best Regards,
> > Yu
> >
> >
> > On Fri, 1 May 2020 at 21:34, Till Rohrmann  wrote:
> >
> > > Thanks for updating the FLIP Yangze.
> > >
> > > +1 (binding)
> > >
> > > for the update.
> > >
> > > Cheers,
> > > Till
> > >
> > > On Thu, Apr 30, 2020 at 4:34 PM Yangze Guo  wrote:
> > >
> > > > Hi, there.
> > > >
> > > > The "FLIP-108: Add GPU support in Flink"[1] is now work in progress.
> > > > However, we have met problems regarding class loading and dependencies.
> > > > For more details, you could look at the discussion[2]. The discussion
> > > > thread has now converged, and the solution is to change
> > > > RuntimeContext#getExternalResourceInfos to return ExternalResourceInfo
> > > > and to add methods to the ExternalResourceInfo interface.
> > > >
> > > > Since the solution involves changes in the Public API. We'd like to
> > > > start a voting thread for it.
> > > >
> > > > The proposed change is:
> > > >
> > > > ```
> > > > public interface RuntimeContext {
> > > > /**
> > > >  * Get the specific external resource information by the
> > > resourceName.
> > > >  */
> > > > Set<ExternalResourceInfo> getExternalResourceInfos(String resourceName);
> > > > }
> > > > ```
> > > >
> > > > ```
> > > > public interface ExternalResourceInfo {
> > > >   String getProperty(String key);
> > > >   Collection<String> getKeys();
> > > > }
> > > > ```
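As a quick illustration of how the proposed interfaces could be consumed by user code (a toy model only; RuntimeContext itself is elided, and GpuInfo is a hypothetical implementation):

```java
import java.util.Collection;
import java.util.Map;

// Toy model of the interface proposed in this vote, with a minimal
// implementation showing how user code would read resource properties.
interface ExternalResourceInfo {
    String getProperty(String key);
    Collection<String> getKeys();
}

// Hypothetical GPU resource info backed by a simple property map.
class GpuInfo implements ExternalResourceInfo {
    private final Map<String, String> properties;

    GpuInfo(Map<String, String> properties) {
        this.properties = properties;
    }

    @Override
    public String getProperty(String key) {
        return properties.get(key);
    }

    @Override
    public Collection<String> getKeys() {
        return properties.keySet();
    }
}
```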
> > > >
> > > > The vote will be open for at least 72 hours. Unless there is an
> > > objection,
> > > > I will try to close it by May 4, 2020 14:00 UTC if we have received
> > > > sufficient votes.
> > > >
> > > > [1]
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-108%3A+Add+GPU+support+in+Flink
> > > > [2]
> > > >
> > >
> >
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-108-Problems-regarding-the-class-loader-and-dependency-td40893.html
> > > >
> > > > Best,
> > > > Yangze Guo
> > > >
> > >
> >
>