[ https://issues.apache.org/jira/browse/FLINK-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17344459#comment-17344459 ]

Guenter Hipler commented on FLINK-13414:
----------------------------------------

[~Zhen-hao], yes, Kafka provides "only" a wrapper around the Java API
([https://kafka.apache.org/28/documentation/streams/developer-guide/dsl-api.html#scala-dsl]).
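For context, a minimal sketch of the style that wrapper enables (assuming the kafka-streams-scala artifact from Kafka 2.8; the topic names are hypothetical):

```scala
// Kafka's Scala DSL wraps the Java Streams API; implicit Serdes
// and conversions remove most of the Java boilerplate.
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.serialization.Serdes._
import org.apache.kafka.streams.scala.StreamsBuilder

val builder = new StreamsBuilder

// Read, transform, and write back - Consumed/Produced are derived
// implicitly from the String Serdes in scope.
builder.stream[String, String]("input-topic")
  .mapValues(_.toUpperCase)
  .to("output-topic")

val topology = builder.build()
```

The equivalent Java code would need explicit `Consumed.with(...)` / `Produced.with(...)` arguments; the Scala wrapper supplies them via implicits, which is the kind of idiomatic layer discussed here.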
I've been looking into the Scala world for around two years (nearly the same 
amount of time as Flink / Kafka). Coming from the object-oriented mindset 
shaped by Java & co., I had quite a hard time understanding the functional way 
of programming used in the streaming world. Looking back at that time, I 
mostly saw reference sources implemented in Scala; even books preferred the 
concise and idiomatic Scala style. Now I notice a gradual shift from Scala to 
the Java style (and, in the case of Flink and Spark, to Python). Maybe that's 
because the majority of people are socialized like myself.
From my point of view this is really sad. Today I would argue that thinking 
with data is thinking with functions (and other concepts), and Scala is still 
the way to go.
Looking back at my last two years, it's even fruitful for people coming 
entirely from the conceptual (meta-data) world, because they are able to start 
without the solely object-oriented way of thinking. To be more concrete: I'm 
talking about the (scientific) library sector, where librarians often have a 
very good understanding of meta-data concepts, but there is still a gap 
between modelling data and implementing things (by now mostly done by more 
software-oriented people). I'm pretty sure this gap could be overcome, and 
from my point of view Scala 3 is a next step in this direction.

At the same time I see the difficulties for a project like Flink (with its 
roots in the scientific sector), which is now struggling to maintain its older 
Scala code because of dependencies that are hard to overcome (if I'm correct, 
one of the main reasons is that type serialization relies on macros, and the 
macro system changes completely in Scala 3?)
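To illustrate the dependency in question: Flink's Scala API derives serializers at compile time with a Scala 2 macro, roughly like this sketch (the case class is hypothetical):

```scala
// Flink's Scala API derives TypeInformation at compile time.
// createTypeInformation is a Scala 2 macro; the Scala 2 macro
// system does not exist in Scala 3, so this mechanism would
// need to be rewritten (e.g. on top of Scala 3's inline/derivation).
import org.apache.flink.api.scala._

case class Event(id: Long, name: String)

// Expanded by the macro at compile time into a concrete
// TypeInformation[Event] instance used for serialization.
val info: TypeInformation[Event] = createTypeInformation[Event]
```

This is why a Scala 3 upgrade is not a simple dependency bump: the serialization layer itself is tied to the old macro system.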

So I'm in favour of a well-defined Scala wrapper - quite idiomatic ones exist 
for other projects (see [https://github.com/sksamuel] for examples for 
Elasticsearch and Pulsar) - and I would like to support this process, but I'm 
probably still not experienced enough with the necessary Scala concepts. I 
think it needs one or two "lead people".

In our project we would be very happy to see Flink going in this direction, so 
we could combine the Kafka DSL (which we used in a current project: 
[https://gitlab.com/swissbib/swisscollections]) with Flink's functionality and 
potential (as we did in a specialized project last year: 
[https://gitlab.com/swissbib/slsp/series-transformation/volumes-series-enrichment-flink/-/blob/master/src/main/scala/org/swissbib/slsp/Job.scala]).

Günter (ex swissbib.ch project, university library Basel)

> Add support for Scala 2.13
> --------------------------
>
>                 Key: FLINK-13414
>                 URL: https://issues.apache.org/jira/browse/FLINK-13414
>             Project: Flink
>          Issue Type: New Feature
>          Components: API / Scala
>            Reporter: Chaoran Yu
>            Priority: Major
>

--
This message was sent by Atlassian Jira
(v8.3.4#803005)