We are evaluating Kafka for some of our use cases. As part of that effort I am 
running an experiment against a cluster we have set up, using the producer 
perf test tool supplied with the binaries.

Here’s the cluster info:

Runs in Kubernetes, with 4 CPUs, 32 GB RAM, and 100 GB of log space allocated 
to each node.
3 ZooKeeper nodes
5 Kafka nodes

Here is the topic description:

$ bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic testtopic3
Topic:testtopic3    PartitionCount:5    ReplicationFactor:3    Configs:min.insync.replicas=2,segment.bytes=1073741824,retention.ms=3600000,flush.messages=1,unclean.leader.election.enable=false
    Topic: testtopic3    Partition: 0    Leader: 1    Replicas: 1,4,2    Isr: 1,4,2
    Topic: testtopic3    Partition: 1    Leader: 4    Replicas: 4,2,3    Isr: 4,2,3
    Topic: testtopic3    Partition: 2    Leader: 2    Replicas: 2,3,5    Isr: 2,3,5
    Topic: testtopic3    Partition: 3    Leader: 3    Replicas: 3,5,1    Isr: 3,5,1
    Topic: testtopic3    Partition: 4    Leader: 5    Replicas: 5,1,4    Isr: 5,1,4
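
For reference, a topic with this configuration can be created with something 
along these lines (a sketch based on the describe output above; the bootstrap 
server address and the exact invocation are assumptions on my part):

$ bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --topic testtopic3 \
    --partitions 5 --replication-factor 3 \
    --config min.insync.replicas=2 --config segment.bytes=1073741824 \
    --config retention.ms=3600000 --config flush.messages=1 \
    --config unclean.leader.election.enable=false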



And here is the producer perf test command line and the result:

$ bin/kafka-producer-perf-test.sh --topic testtopic3 --num-records 1000000 \
    --throughput -1 --record-size 256 \
    --producer-props bootstrap.servers=kafka-a-0.ri:9092,kafka-b-0.ri:9092,kafka-c-0.ri:9092,kafka-d-0.ri:9092,kafka-e-0.ri:9092 \
    acks=all batch.size=1 max.block.ms=3600000 enable.idempotence=true \
    max.in.flight.requests.per.connection=1 retries=3 \
    --transaction-duration-ms 3600000
4100 records sent, 819.7 records/sec (0.20 MB/sec), 2572.0 ms avg latency, 4892.0 ms max latency.
4261 records sent, 852.0 records/sec (0.21 MB/sec), 7397.2 ms avg latency, 9873.0 ms max latency.
4216 records sent, 843.0 records/sec (0.21 MB/sec), 12383.7 ms avg latency, 14849.0 ms max latency.
4400 records sent, 879.8 records/sec (0.21 MB/sec), 17332.0 ms avg latency, 19784.0 ms max latency.
4354 records sent, 870.8 records/sec (0.21 MB/sec), 22349.4 ms avg latency, 24763.0 ms max latency.
4477 records sent, 895.4 records/sec (0.22 MB/sec), 27241.1 ms avg latency, 29728.0 ms max latency.
4366 records sent, 873.2 records/sec (0.21 MB/sec), 32218.3 ms avg latency, 34703.0 ms max latency.
4408 records sent, 881.6 records/sec (0.22 MB/sec), 37190.6 ms avg latency, 39672.0 ms max latency.
4159 records sent, 831.5 records/sec (0.20 MB/sec), 42135.0 ms avg latency, 44640.0 ms max latency.
4260 records sent, 852.0 records/sec (0.21 MB/sec), 47098.0 ms avg latency, 49624.0 ms max latency.
4360 records sent, 872.0 records/sec (0.21 MB/sec), 52137.1 ms avg latency, 54574.0 ms max latency.
4514 records sent, 902.8 records/sec (0.22 MB/sec), 57038.1 ms avg latency, 59554.0 ms max latency.
4273 records sent, 854.3 records/sec (0.21 MB/sec), 62001.8 ms avg latency, 64524.0 ms max latency.
4348 records sent, 869.6 records/sec (0.21 MB/sec), 67037.8 ms avg latency, 69494.0 ms max latency.
4039 records sent, 807.5 records/sec (0.20 MB/sec), 72009.8 ms avg latency, 74481.0 ms max latency.
4327 records sent, 865.2 records/sec (0.21 MB/sec), 76993.8 ms avg latency, 79457.0 ms max latency.
4307 records sent, 861.4 records/sec (0.21 MB/sec), 82011.9 ms avg latency, 84449.0 ms max latency.
4506 records sent, 901.0 records/sec (0.22 MB/sec), 86922.6 ms avg latency, 89434.0 ms max latency.
4343 records sent, 868.6 records/sec (0.21 MB/sec), 91918.8 ms avg latency, 94394.0 ms max latency.
org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
Exception in thread "main" org.apache.kafka.common.KafkaException: Cannot perform send because at least one previous transactional or idempotent request has failed with errors.
    at org.apache.kafka.clients.producer.internals.TransactionManager.failIfNotReadyForSend(TransactionManager.java:357)
    at org.apache.kafka.clients.producer.internals.TransactionManager.maybeAddPartitionToTransaction(TransactionManager.java:341)
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:915)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:856)
    at org.apache.kafka.tools.ProducerPerformance.main(ProducerPerformance.java:143)
Caused by: org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.
[2019-10-25 21:15:05,183] ERROR [Producer clientId=producer-1, transactionalId=performance-producer-default-transactional-id] Aborting producer batches due to fatal error (org.apache.kafka.clients.producer.internals.Sender)
org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.

It does not make any difference whether I add an explicit transactional id 
flag such as --transactional-id mytxn to the command line.
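
For completeness, that variant looks roughly like this (the same invocation as 
above, with only the --transactional-id flag added):

$ bin/kafka-producer-perf-test.sh --topic testtopic3 --num-records 1000000 \
    --throughput -1 --record-size 256 --transactional-id mytxn \
    --producer-props bootstrap.servers=kafka-a-0.ri:9092,kafka-b-0.ri:9092,kafka-c-0.ri:9092,kafka-d-0.ri:9092,kafka-e-0.ri:9092 \
    acks=all batch.size=1 max.block.ms=3600000 enable.idempotence=true \
    max.in.flight.requests.per.connection=1 retries=3 \
    --transaction-duration-ms 3600000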

Is there something I am missing here?

Sincerely,
Anindya Haldar
Oracle Responsys
