Env: Linux, Kafka 0.9.0.x, NiFi 0.7

We would like to use the new ConsumeKafka processor, but we found a critical limitation
compared to the old GetKafka processor.
The new ConsumeKafka does not write key Kafka attributes (e.g., kafka.key,
kafka.offset, kafka.partition, kafka.topic) into the FlowFile attributes.


Old GetKafka processor:

Standard FlowFile Attributes
Key: 'entryDate'
               Value: 'Sun Jul 17 15:17:00 CDT 2016'
Key: 'lineageStartDate'
               Value: 'Sun Jul 17 15:17:00 CDT 2016'
Key: 'fileSize'
               Value: '183'
FlowFile Attribute Map Content
Key: 'filename'
               Value: '19709945781167274'
Key: 'kafka.key'
               Value: '{"database":"test","table":"sc_job","pk.systemid":1}'
Key: 'kafka.offset'
               Value: '1184010261'
Key: 'kafka.partition'
               Value: '0'
Key: 'kafka.topic'
               Value: 'data'
Key: 'path'
               Value: './'
Key: 'uuid'
               Value: '244059bb-9ad9-4d74-b1fb-312eee72124a'
 
----------
New ConsumeKafka processor:
 
Standard FlowFile Attributes
Key: 'entryDate'
               Value: 'Sun Jul 17 15:18:41 CDT 2016'
Key: 'lineageStartDate'
               Value: 'Sun Jul 17 15:18:41 CDT 2016'
Key: 'fileSize'
               Value: '183'
FlowFile Attribute Map Content
Key: 'filename'
               Value: '19710046870478139'
Key: 'path'
               Value: './'
Key: 'uuid'
               Value: '349fbeb3-e342-4533-be4c-424793fa5c59'
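
For comparison, here is a minimal sketch (plain Python; the function name is hypothetical) of the attribute map GetKafka builds from each record's metadata, using the attribute names shown above. The Kafka 0.9 consumer exposes topic, partition, offset, and key for every record, so ConsumeKafka has the same information available and could populate these attributes the same way:

```python
def kafka_attributes(topic, partition, offset, key=None):
    """Build a FlowFile-style attribute map from Kafka record metadata.

    Hypothetical helper mirroring the kafka.* attribute names that the
    old GetKafka processor writes; all values are strings, as FlowFile
    attributes are string-valued.
    """
    attrs = {
        "kafka.topic": topic,
        "kafka.partition": str(partition),
        "kafka.offset": str(offset),
    }
    # kafka.key is only present when the record actually has a key
    if key is not None:
        attrs["kafka.key"] = key
    return attrs

print(kafka_attributes(
    "data", 0, 1184010261,
    '{"database":"test","table":"sc_job","pk.systemid":1}'))
```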

Thanks 
Sumo 
