[ https://issues.apache.org/jira/browse/FLINK-20569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17298578#comment-17298578 ]

Dawid Wysakowicz commented on FLINK-20569:
------------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14362&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5

> testKafkaSourceSinkWithMetadata hangs
> -------------------------------------
>
>                 Key: FLINK-20569
>                 URL: https://issues.apache.org/jira/browse/FLINK-20569
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka, Table SQL / Ecosystem
>    Affects Versions: 1.12.0, 1.13.0
>            Reporter: Huang Xingbo
>            Priority: Major
>              Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10781&view=logs&j=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f&t=f266c805-9429-58ed-2f9e-482e7b82f58b]
> {code:java}
> 2020-12-10T23:10:46.7788275Z Test testKafkaSourceSinkWithMetadata[legacy = 
> false, format = 
> csv](org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase) is 
> running.
> 2020-12-10T23:10:46.7789360Z 
> --------------------------------------------------------------------------------
> 2020-12-10T23:10:46.7790602Z 23:10:46,776 [                main] INFO  
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl [] - 
> Creating topic metadata_topic_csv
> 2020-12-10T23:10:47.1145296Z 23:10:47,112 [                main] WARN  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Property 
> [transaction.timeout.ms] not specified. Setting it to 3600000 ms
> 2020-12-10T23:10:47.1683896Z 23:10:47,166 [Sink: 
> Sink(table=[default_catalog.default_database.kafka], fields=[physical_1, 
> physical_2, physical_3, headers, timestamp]) (1/1)#0] WARN  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Using 
> AT_LEAST_ONCE semantic, but checkpointing is not enabled. Switching to NONE 
> semantic.
> 2020-12-10T23:10:47.2087733Z 23:10:47,206 [Sink: 
> Sink(table=[default_catalog.default_database.kafka], fields=[physical_1, 
> physical_2, physical_3, headers, timestamp]) (1/1)#0] INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Starting 
> FlinkKafkaInternalProducer (1/1) to produce into default topic 
> metadata_topic_csv
> 2020-12-10T23:10:47.5157133Z 23:10:47,513 [Source: 
> TableSourceScan(table=[[default_catalog, default_database, kafka]], 
> fields=[physical_1, physical_2, physical_3, topic, partition, headers, 
> leader-epoch, timestamp, timestamp-type]) -> Calc(select=[physical_1, 
> physical_2, CAST(timestamp-type) AS timestamp-type, CAST(timestamp) AS 
> timestamp, leader-epoch, CAST(headers) AS headers, CAST(partition) AS 
> partition, CAST(topic) AS topic, physical_3]) -> SinkConversionToTuple2 -> 
> Sink: Select table sink (1/1)#0] INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - 
> Consumer subtask 0 has no restore state.
> 2020-12-10T23:10:47.5233388Z 23:10:47,521 [Source: 
> TableSourceScan(table=[[default_catalog, default_database, kafka]], 
> fields=[physical_1, physical_2, physical_3, topic, partition, headers, 
> leader-epoch, timestamp, timestamp-type]) -> Calc(select=[physical_1, 
> physical_2, CAST(timestamp-type) AS timestamp-type, CAST(timestamp) AS 
> timestamp, leader-epoch, CAST(headers) AS headers, CAST(partition) AS 
> partition, CAST(topic) AS topic, physical_3]) -> SinkConversionToTuple2 -> 
> Sink: Select table sink (1/1)#0] INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - 
> Consumer subtask 0 will start reading the following 1 partitions from the 
> earliest offsets: [KafkaTopicPartition{topic='metadata_topic_csv', 
> partition=0}]
> 2020-12-10T23:10:47.5387239Z 23:10:47,537 [Legacy Source Thread - Source: 
> TableSourceScan(table=[[default_catalog, default_database, kafka]], 
> fields=[physical_1, physical_2, physical_3, topic, partition, headers, 
> leader-epoch, timestamp, timestamp-type]) -> Calc(select=[physical_1, 
> physical_2, CAST(timestamp-type) AS timestamp-type, CAST(timestamp) AS 
> timestamp, leader-epoch, CAST(headers) AS headers, CAST(partition) AS 
> partition, CAST(topic) AS topic, physical_3]) -> SinkConversionToTuple2 -> 
> Sink: Select table sink (1/1)#0] INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - 
> Consumer subtask 0 creating fetcher with offsets 
> {KafkaTopicPartition{topic='metadata_topic_csv', partition=0}=-915623761775}.
> 2020-12-11T02:34:02.6860452Z ##[error]The operation was canceled.
> {code}
> This test started at 2020-12-10T23:10:46.7788275Z and had still not finished 
> by 2020-12-11T02:34:02.6860452Z, when the build was canceled.
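
A note on the last log line before the hang: the value {{-915623761775}} in "creating fetcher with offsets" is not a real Kafka offset but one of Flink's internal sentinel values (mirrored below from what I believe {{KafkaTopicPartitionStateSentinel}} defines as of 1.12; the exact constant names are an assumption), marking that the subtask should start from the earliest offset. A minimal sketch to decode it:

```java
// Sketch only: sentinel values assumed to mirror
// org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel.
public class SentinelCheck {
    static final long OFFSET_NOT_SET  = -915623761776L; // no committed/restored offset
    static final long EARLIEST_OFFSET = -915623761775L; // start from earliest
    static final long LATEST_OFFSET   = -915623761774L; // start from latest
    static final long GROUP_OFFSET    = -915623761773L; // start from group offsets

    // Maps a logged offset to a human-readable description.
    static String describe(long offset) {
        if (offset == OFFSET_NOT_SET)  return "OFFSET_NOT_SET";
        if (offset == EARLIEST_OFFSET) return "EARLIEST_OFFSET";
        if (offset == LATEST_OFFSET)   return "LATEST_OFFSET";
        if (offset == GROUP_OFFSET)    return "GROUP_OFFSET";
        return "concrete offset " + offset;
    }

    public static void main(String[] args) {
        // Offset copied verbatim from the fetcher log line above.
        System.out.println(describe(-915623761775L)); // prints "EARLIEST_OFFSET"
    }
}
```

So the fetcher was still at its intended starting position when the build was canceled, i.e. the hang occurs before (or while) the first records are fetched.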



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
