The hello world example works fine with Flink 1.14.x, but Flink SQL with Kafka is not working; I have tried several times in two different environments.



Sorry, I forgot to tell you that this is Flink SQL with Kafka. The job is:




%flink.ssql(parallelism = 1)

DROP TABLE IF EXISTS aa;

CREATE TABLE aa (
    `user_type` STRING
) WITH (
    'connector' = 'kafka',
    'topic' = 'topica',
    'properties.bootstrap.servers' = 'x.x.x.x:9092',
    'format' = 'json',
    'scan.startup.mode' = 'latest-offset',
    'scan.topic-partition-discovery.interval' = '10',
    'json.ignore-parse-errors' = 'true'
);

%flink.ssql
SELECT user_type FROM aa;







At 2022-05-10 13:51:31, "Jeff Zhang" <zjf...@gmail.com> wrote:

Zeppelin 0.10.1 supports Flink 1.14.x. Have you tried to run the hello world example, which doesn't require the Kafka connector?
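
A Kafka-free smoke test can be as small as the following sketch, which uses Flink's built-in datagen connector (hello_world and msg are made-up names, not necessarily the exact tutorial example):

%flink.ssql(parallelism = 1)
DROP TABLE IF EXISTS hello_world;
-- unbounded in-memory source, no external jars needed
CREATE TABLE hello_world (
    `msg` STRING
) WITH (
    'connector' = 'datagen',
    'rows-per-second' = '1'
);

%flink.ssql
SELECT msg FROM hello_world;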


On Tue, May 10, 2022 at 1:44 PM 李 <lifuqion...@126.com> wrote:





Hi:
    I found that Zeppelin 0.10.1 does not work with Flink 1.14.2 or 1.14.4 when using the Kafka connector, but it works with Flink 1.13.1.
    Does this mean Zeppelin does not work with Flink 1.14? Or has anybody else run into the same problem and solved it?
    Thank you.

The exception is:
Job failed during initialization of JobManager
org.apache.flink.runtime.client.JobInitializationException: Could not start the JobMaster.
    at org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.lambda$new$0(DefaultJobMasterServiceProcess.java:97)
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1609)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.CompletionException: java.lang.RuntimeException: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator Source: KafkaSource-default_catalog.default_database.log_source4 -> SinkConversionToTuple2 -> Sink: Zeppelin Flink Sql Stream Collect Sink 1a110240-3a3d-4b1f-ad0e-35e4aef5874c
    at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
    at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)
    ... 3 more
Caused by: java.lang.RuntimeException: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator Source: KafkaSource-default_catalog.default_database.log_source4 -> SinkConversionToTuple2 -> Sink: Zeppelin Flink Sql Stream Collect Sink 1a110240-3a3d-4b1f-ad0e-35e4aef5874c
    at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:316)
    at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:114)
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
    ... 3 more
Caused by: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator Source: KafkaSource-default_catalog.default_database.log_source4 -> SinkConversionToTuple2 -> Sink: Zeppelin Flink Sql Stream Collect Sink 1a110240-3a3d-4b1f-ad0e-35e4aef5874c
    at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:217)
    at org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.attachJobGraph(DefaultExecutionGraph.java:791)
    at org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder.buildGraph(DefaultExecutionGraphBuilder.java:196)
    at org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:107)
    at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:334)
    at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:190)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:130)
    at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:132)
    at org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:110)
    at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:346)
    at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:323)
    at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:106)
    at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:94)
    at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112)
    ... 4 more
Caused by: java.lang.ClassCastException: cannot assign instance of org.apache.flink.kafka.shaded.org.apache.kafka.clients.consumer.OffsetResetStrategy to field org.apache.flink.connector.kafka.source.enumerator.initializer.ReaderHandledOffsetsInitializer.offsetResetStrategy of type org.apache.kafka.clients.consumer.OffsetResetStrategy in instance of org.apache.flink.connector.kafka.source.enumerator.initializer.ReaderHandledOffsetsInitializer
    at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2301)
    at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1431)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2410)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2328)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2186)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1666)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2404)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2328)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2186)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1666)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2404)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2328)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2186)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1666)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:502)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:460)
    at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:617)
    at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:602)
    at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:589)
    at org.apache.flink.util.SerializedValue.deserializeValue(SerializedValue.java:67)
    at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.create(OperatorCoordinatorHolder.java:431)
    at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:211)
    ... 17 more
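
The final "Caused by" tries to assign the shaded class org.apache.flink.kafka.shaded.org.apache.kafka.clients.consumer.OffsetResetStrategy to a field typed with the unshaded org.apache.kafka.clients.consumer.OffsetResetStrategy, which suggests two different copies of the Kafka connector classes on the classpath (typically the shaded flink-sql-connector-kafka jar plus a separate flink-connector-kafka / kafka-clients jar). Purely as a sketch of how a single connector artifact might be pinned in Zeppelin; the %flink.conf paragraph, the flink.execution.packages property, and the artifact coordinates below are assumptions to verify against your own setup:

%flink.conf
flink.execution.packages org.apache.flink:flink-sql-connector-kafka_2.11:1.14.4

If an artifact is loaded this way, any duplicate Kafka connector jars under the Flink lib directory would also need to be removed.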

--

Best Regards

Jeff Zhang
