Hey guys, I have been hunting through the archives and can't find anyone else who has run into this issue, so I'm going to toss it out there and hope someone can point me to a solution.
We are running two Kafka clusters on opposite coasts in AWS and need MirrorMaker to copy messages from one cluster to the other (a pretty straightforward MirrorMaker setup). The fly in the ointment shows up when we add encryption to our data. I know SSL support is a 0.9 feature, but we can't wait unless that's coming out this weekend...

So we have an IPsec tunnel for all site-to-site traffic. We confirmed that everything works when MirrorMaker is running on each Kafka broker on one coast: we can replicate data with all traffic going through the VPN tunnel. Unfortunately, we can't maintain enough throughput with all of that traffic traversing a single connection. So we configured IPsec transport tunnels (ESP with NAT-T) between all of our Kafka brokers, and confirmed that they can all talk to each other over the encrypted links via telnet; ping and so on work fine. But the MirrorMaker consumer process is no longer able to connect to the brokers on the remote coast.

When we disable IPsec and allow the traffic to traverse the open internet, MirrorMaker works fine; when we enable the IPsec transport tunnels, everything falls apart. Even the simple console consumer is unable to connect. The error we see (after several seconds) is:

    WARN Fetching topic metadata with correlation id 0 for topics [Set(test)] from broker [id:5,host:kafka-west-5,port:9092] failed (kafka.client.ClientUtils$)
    java.net.SocketTimeoutException

Disabling the IPsec encryption gets everything working again, so we are fairly certain this is the issue, but I don't see why. All other protocols seem to work fine with encryption enabled; we can even telnet to the Kafka broker ports...

Any information and direction would be greatly appreciated.
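
For reference, here is roughly how the mirroring side is set up (a minimal sketch of the 0.8.x MirrorMaker invocation; the hostnames, ZooKeeper addresses, group id, and whitelist below are placeholders rather than our exact values):

    # consumer.properties -- old ZooKeeper-based consumer, pointed at the remote (west) cluster
    zookeeper.connect=zk-west-1:2181,zk-west-2:2181,zk-west-3:2181
    group.id=mirrormaker-west-to-east

    # producer.properties -- pointed at the local brokers we mirror into
    metadata.broker.list=kafka-east-1:9092,kafka-east-2:9092

    # MirrorMaker invocation (run on each broker host on the consuming side)
    bin/kafka-run-class.sh kafka.tools.MirrorMaker \
        --consumer.config consumer.properties \
        --producer.config producer.properties \
        --whitelist=".*"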
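
The broker-to-broker encryption is plain IPsec transport mode with ESP and NAT-T. To give a concrete picture, each broker pair's policy is roughly equivalent to the following strongSwan-style ipsec.conf conn (addresses are placeholders, and we don't necessarily use strongSwan itself, so treat this as an illustration of the shape of the setup rather than our actual config):

    conn kafka-broker-pair
        type=transport          # transport mode, not a site-to-site tunnel
        keyexchange=ikev2
        authby=secret           # pre-shared key in ipsec.secrets
        left=10.0.1.5           # this broker's private address
        right=203.0.113.7       # remote broker's public address
        esp=aes128-sha1
        forceencaps=yes         # force UDP encapsulation (NAT-T)
        auto=start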
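
And the simple console consumer test that times out is essentially this (again with placeholder hostnames, using the old ZooKeeper-based console consumer from 0.8.x):

    bin/kafka-console-consumer.sh \
        --zookeeper zk-west-1:2181 \
        --topic test \
        --from-beginning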