[ https://issues.apache.org/jira/browse/KAFKA-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883602#comment-16883602 ]

ASF GitHub Bot commented on KAFKA-8659:
---------------------------------------

bfncs commented on pull request #7080: KAFKA-8659: SetSchemaMetadata SMT fails 
on records with null value and schema
URL: https://github.com/apache/kafka/pull/7080
 
 
   
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> SetSchemaMetadata SMT fails on records with null value and schema
> -----------------------------------------------------------------
>
>                 Key: KAFKA-8659
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8659
>             Project: Kafka
>          Issue Type: Bug
>          Components: KafkaConnect
>            Reporter: Marc Löhe
>            Priority: Minor
>
> If you use the {{SetSchemaMetadata}} SMT with records for which the key or 
> value and the corresponding schema are {{null}} (i.e. tombstone records from 
> [Debezium|https://debezium.io/]), the transform will fail.
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
>     at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
>     at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
>     at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
>     at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
>     at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:229)
>     at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
>     at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.connect.errors.DataException: Schema required for [updating schema metadata]
>     at org.apache.kafka.connect.transforms.util.Requirements.requireSchema(Requirements.java:31)
>     at org.apache.kafka.connect.transforms.SetSchemaMetadata.apply(SetSchemaMetadata.java:67)
>     at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
>     at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
>     at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
>     ... 11 more
> {code}
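> 
> As a concrete illustration (not taken from the ticket), the following sketch shows how a tombstone record hits this path; the class name, topic, offsets, and schema name are hypothetical, and it assumes the SMT's {{schema.name}} config:
> {code:java}
> import java.util.Collections;
> 
> import org.apache.kafka.connect.source.SourceRecord;
> import org.apache.kafka.connect.transforms.SetSchemaMetadata;
> 
> public class TombstoneRepro {
>     public static void main(String[] args) {
>         // Value variant of the SMT, configured to rename the value schema.
>         SetSchemaMetadata<SourceRecord> smt = new SetSchemaMetadata.Value<>();
>         smt.configure(Collections.singletonMap("schema.name", "com.example.Renamed"));
> 
>         // Tombstone: null value and null value schema, as Debezium emits on delete.
>         SourceRecord tombstone = new SourceRecord(
>                 Collections.singletonMap("source", "demo"),  // source partition (hypothetical)
>                 Collections.singletonMap("offset", 0L),      // source offset (hypothetical)
>                 "demo-topic", null,                          // topic, partition
>                 null, "some-key",                            // key schema, key
>                 null, null);                                 // value schema, value
> 
>         // Throws DataException: "Schema required for [updating schema metadata]"
>         smt.apply(tombstone);
>     }
> }
> {code}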
>  
> I don't see any problem with passing those records through as-is instead of 
> failing, and I will shortly open a PR for this; a rough sketch of the idea 
> follows below.
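> 
> For illustration only, here is that pass-through idea as a generic guard (the actual change belongs inside {{SetSchemaMetadata.apply}}; the helper name is made up, and only the value side is shown, the Key variant would need the analogous key()/keySchema() check):
> {code:java}
> import org.apache.kafka.connect.connector.ConnectRecord;
> import org.apache.kafka.connect.transforms.Transformation;
> 
> // If both the value and its schema are null (a tombstone), skip the transform
> // and forward the record unchanged; otherwise apply it as usual.
> public final class TombstoneSafe {
>     private TombstoneSafe() {}
> 
>     public static <R extends ConnectRecord<R>> R applyOrPassThrough(Transformation<R> smt, R record) {
>         if (record.valueSchema() == null && record.value() == null) {
>             return record; // nothing to update on a tombstone
>         }
>         return smt.apply(record);
>     }
> }
> {code}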



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
