Hi Gary,
I have used this technique before. I deleted the flink-avro jar from lib,
packed it into the application jar instead, and there were no problems.
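
In case it is useful to anyone else, the only cluster-side setting involved
is the class loading order in flink-conf.yaml. A minimal sketch (child-first
should already be the default, so this is just for completeness):

    classloader.resolve-order: child-first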

Best,
Nick

On Thu, May 14, 2020 at 6:11 AM Gary Yao <g...@apache.org> wrote:

>> It's because the Flink distribution of the cluster is 1.7.2. We use a
>> standalone cluster, so in the lib directory of Flink the artifact is
>> flink-core-1.7.2.jar. I need to pack flink-core-1.9.0.jar from the
>> application and use child-first class loading to use the newer version of
>> flink-core.
>>
>
> Do you have experience with this technique in production? In general I do
> not think this can work; a job pretending to run a newer version of Flink
> generally cannot communicate with an older JobManager, which normally does
> not even run user code.
>
> If you are stuck with Flink 1.8, maybe it is an option for you to backport
> FLINK-11693 to Flink 1.8 yourself and build a custom Kafka connector.
>
>
> On Tue, May 12, 2020 at 10:04 PM Nick Bendtner <buggi...@gmail.com> wrote:
> >
> > Hi Gary,
> > It's because the Flink distribution of the cluster is 1.7.2. We use a
> > standalone cluster, so in the lib directory of Flink the artifact is
> > flink-core-1.7.2.jar. I need to pack flink-core-1.9.0.jar from the
> > application and use child-first class loading to use the newer version of
> > flink-core. If I have it as provided scope, sure it will work in IntelliJ
> > but not outside of it.
> >
> > Best,
> > Nick
> >
> > On Tue, May 12, 2020 at 2:53 PM Gary Yao <g...@apache.org> wrote:
> >>
> >> Hi Nick,
> >>
> >> Can you explain why it is required to package flink-core into your
> >> application jar? Usually flink-core is a dependency with provided
> >> scope [1].
> >>
> >> Best,
> >> Gary
> >>
> >> [1]
> >> https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#dependency-scope
> >>
> >> On Tue, May 12, 2020 at 5:41 PM Nick Bendtner <buggi...@gmail.com> wrote:
> >> >
> >> > Hi Gary,
> >> > Thanks for the info. I am aware this feature is available from 1.9.0
> >> > onwards. Our cluster is still very old and has CI/CD challenges, and I
> >> > was hoping not to bloat up the application jar by packaging even
> >> > flink-core with it. If it is not possible to do this with an older
> >> > version without writing our own Kafka sink implementation, similar to
> >> > the Flink-provided version in 1.9.0, then I think we will pack
> >> > flink-core 1.9.0 with the application and follow the approach that you
> >> > suggested. Thanks again for getting back to me so quickly.
> >> >
> >> > Best,
> >> > Nick
> >> >
> >> > On Tue, May 12, 2020 at 3:37 AM Gary Yao <g...@apache.org> wrote:
> >> >>
> >> >> Hi Nick,
> >> >>
> >> >> Are you able to upgrade to Flink 1.9? Beginning with Flink 1.9 you can
> >> >> use KafkaSerializationSchema to produce a ProducerRecord [1][2].
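> >> >>
> >> >> A minimal sketch of such a schema, assuming Flink 1.9 and the universal
> >> >> Kafka connector (class name, header key, and value type are made up for
> >> >> illustration):
> >> >>
> >> >>     import java.nio.charset.StandardCharsets;
> >> >>
> >> >>     import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
> >> >>     import org.apache.kafka.clients.producer.ProducerRecord;
> >> >>
> >> >>     public class EventSerializationSchema implements KafkaSerializationSchema<String> {
> >> >>
> >> >>         private final String topic;
> >> >>
> >> >>         public EventSerializationSchema(String topic) {
> >> >>             this.topic = topic;
> >> >>         }
> >> >>
> >> >>         @Override
> >> >>         public ProducerRecord<byte[], byte[]> serialize(String element, Long timestamp) {
> >> >>             // Build the record explicitly so Kafka headers can be attached to it.
> >> >>             ProducerRecord<byte[], byte[]> record =
> >> >>                 new ProducerRecord<>(topic, element.getBytes(StandardCharsets.UTF_8));
> >> >>             record.headers().add("source", "my-app".getBytes(StandardCharsets.UTF_8));
> >> >>             return record;
> >> >>         }
> >> >>     }
> >> >>
> >> >> The schema can then be handed to the FlinkKafkaProducer constructor that
> >> >> takes a KafkaSerializationSchema together with a default topic, the
> >> >> producer properties, and a delivery semantic.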
> >> >>
> >> >> Best,
> >> >> Gary
> >> >>
> >> >> [1] https://issues.apache.org/jira/browse/FLINK-11693
> >> >> [2]
> >> >> https://ci.apache.org/projects/flink/flink-docs-release-1.9/api/java/org/apache/flink/streaming/connectors/kafka/KafkaSerializationSchema.html
> >> >>
> >> >> On Mon, May 11, 2020 at 10:59 PM Nick Bendtner <buggi...@gmail.com> wrote:
> >> >> >
> >> >> > Hi guys,
> >> >> > I use version 1.8.0 of flink-connector-kafka. Do you have any
> >> >> > recommendations on how to produce a ProducerRecord from a Kafka sink?
> >> >> > I am looking to add support for Kafka headers, which is why I am
> >> >> > thinking about ProducerRecord. If you have any thoughts, they are
> >> >> > highly appreciated.
> >> >> >
> >> >> > Best,
> >> >> > Nick.
>
