Re: LogCleaner is not removing Transaction Records

2019-05-10 Thread Guozhang Wang
For those interested in this thread, there's a ticket created for it and we believe it is a lurking bug and are trying to fix it before the 2.3 release: https://issues.apache.org/jira/browse/KAFKA-8335 Guozhang On Fri, May 10, 2019 at 10:39 AM Michael Jaschob wrote: > Weichu, > > while I don't

org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms after enabling SASL PLAINTEXT authentication

2019-05-10 Thread goutham krishna Teja
Hi All, I'm running into a timeout exception when I try to run a producer or consumer through Java or the console. *kafka server.properties* advertised.host.name=127.0.0.1 listeners=SASL_PLAINTEXT://127.0.0.1:9090 security.inter.broker.protocol=SASL_PLAINTEXT sasl.mechanism.inter.broker.protocol=PL
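A common cause of this metadata timeout is that the broker's listener requires SASL while the client is still configured for plain PLAINTEXT, so the handshake never completes. Below is a minimal sketch of the client-side settings that would match the server.properties quoted above; the username/password ("alice"/"alice-secret") and the use of the PLAIN mechanism are assumptions for illustration, not taken from the thread.

```java
import java.util.Properties;

public class SaslClientConfig {
    public static Properties producerProps() {
        Properties props = new Properties();
        // Must point at the SASL_PLAINTEXT listener from the broker config above
        props.put("bootstrap.servers", "127.0.0.1:9090");
        // Without these three settings the client performs no SASL handshake,
        // and metadata requests against a SASL-only listener time out.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"alice\" password=\"alice-secret\";");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("security.protocol"));
    }
}
```

The same three `security.protocol` / `sasl.mechanism` / `sasl.jaas.config` settings apply to the console tools via `--producer.config` or `--consumer.config`.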

Re: LogCleaner is not removing Transaction Records

2019-05-10 Thread Michael Jaschob
Weichu, while I don't have a solution, we are seeing the same thing in our world. I put out a query to the mailing list a week or two ago (no responses unfortunately): https://lists.apache.org/thread.html/04273f5cfe4f6c6ed9ab370399f208a5cd780576880650aae839de25@%3Cusers.kafka.apache.org%3E . We're

Re: Customers getting duplicate emails

2019-05-10 Thread Ryanne Dolan
Kafka only supports exactly-once and idempotency within the context of streams apps where records are consumed and produced within the same cluster. As soon as you touch the outside world in a non-idempotent way, e.g. by sending an email, these guarantees fall away. It is essentially impossible to
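Since exactly-once cannot extend to an external side effect like SMTP, the usual remedy is to make the side effect itself idempotent: deduplicate on a stable message id before sending. This is a minimal in-memory sketch of that idea (not code from the thread); in production the seen-id set would have to be durable, e.g. a database table keyed by message id, so it survives consumer restarts.

```java
import java.util.HashSet;
import java.util.Set;

public class EmailDeduper {
    // Sketch only: an in-memory set is lost on restart. A real deployment
    // would back this with durable storage keyed by the message id.
    private final Set<String> sent = new HashSet<>();

    /** Returns true if the email should be sent, false if it is a duplicate. */
    public boolean shouldSend(String messageId) {
        // Set.add returns false when the id was already present,
        // which is exactly the duplicate case we want to suppress.
        return sent.add(messageId);
    }

    public static void main(String[] args) {
        EmailDeduper deduper = new EmailDeduper();
        System.out.println(deduper.shouldSend("order-42")); // first delivery
        System.out.println(deduper.shouldSend("order-42")); // redelivery, suppressed
    }
}
```

The key design choice is that the id must come from the message itself (e.g. the Kafka record key), not from the consumer, so that a redelivered record maps to the same id.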

Re: Kafka transaction between 2 kafka clusters

2019-05-10 Thread Kamal Chandraprakash
MirrorMaker 2.0 stores the offsets of one cluster in another. So, you can read the offsets from the same cluster once this KIP is implemented. https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0#KIP-382:MirrorMaker2.0-RemoteClusterUtils On Fri, May 10, 2019 at 12:29 PM Em
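For context, KIP-382 drives MirrorMaker 2.0 from a single properties file describing the clusters and replication flows. A minimal sketch, with cluster aliases and hostnames invented for illustration (the exact property names follow the KIP and may change before release):

```
# mm2.properties (hypothetical hosts)
clusters = primary, backup
primary.bootstrap.servers = primary-host:9092
backup.bootstrap.servers = backup-host:9092

# replicate everything from primary into backup
primary->backup.enabled = true
primary->backup.topics = .*
```

Offsets checkpointed this way are what RemoteClusterUtils would translate for a consumer failing over to the backup cluster.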

Re: Kafka upgrade process details

2019-05-10 Thread Kamal Chandraprakash
Hi, In Kafka v2.1.0, the OffsetCommit Request/Response schema version is changed to v4 for the *__consumer_offsets* topic. If you upgrade Kafka to v2.1.0 or higher and change the inter.broker.protocol version to 2.1, then you cannot revert to older versions as they don't know how to read the
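This is why the documented rolling-upgrade procedure pins the protocol version until the new binaries are proven stable. A sketch of the relevant server.properties lines, assuming an upgrade from 2.0 to 2.1 (substitute your actual current version):

```
# Step 1: upgrade broker binaries to 2.1 but pin the old protocol,
# so a rollback to 2.0 binaries is still possible.
inter.broker.protocol.version=2.0

# Step 2: only after all brokers run 2.1 and are stable, bump the
# protocol in a second rolling restart. This step is irreversible.
# inter.broker.protocol.version=2.1
```

The point of no return is the protocol bump, not the binary upgrade.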

Re: Customers getting duplicate emails

2019-05-10 Thread Wade Chandler
> On May 10, 2019, at 8:26 AM, ASHOK MACHERLA wrote: > > Dear Team > > In our project, for SMS/Email purposes we are using a Kafka cluster and > Real-time Notification, which is our custom application. > > We are sending messages from Kafka to Real-time Notification, and then > SMTP Gateway

Customers getting duplicate emails

2019-05-10 Thread ASHOK MACHERLA
Dear Team In our project, for SMS/Email purposes we are using a Kafka cluster and Real-time Notification, which is our custom application. We are sending messages from Kafka to Real-time Notification, and then SMTP Gateway servers. Our problem is, sometimes customers are getting the same email for

Re: kafka server shutdown automatically

2019-05-10 Thread Stephen Powis
Looks like someone/something on your system sent it a SIGHUP signal: [2019-05-09 12:54:56,295] INFO Terminating process due to signal SIGHUP > (org.apache.kafka.common.utils.LoggingSignalHandler) > > On Fri, May 10, 2019 at 5:43 PM wrote: > Hi guys, > my kafka server dead without any error lo
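A SIGHUP like this often comes from the controlling terminal or SSH session closing while the broker was started in the foreground. Running the broker under a process supervisor avoids that entirely. A minimal systemd unit sketch, with installation paths assumed (adjust `/opt/kafka/...` to your layout):

```
# /etc/systemd/system/kafka.service (paths are hypothetical)
[Unit]
Description=Apache Kafka broker
After=network.target

[Service]
Type=simple
ExecStart=/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
# Restart the broker if it dies, but not on a clean stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

A quicker stopgap is launching with `nohup` so the broker detaches from the terminal and ignores the HUP delivered when the session ends.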

kafka server shutdown automatically

2019-05-10 Thread info
Hi guys, my Kafka server died without any error log. I see something in the server.log file like this: [2019-05-09 12:02:24,103] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [2019-05-09 12:12:24,103] INFO [GroupMe