Hi
On Fri, 8 Nov 2019 at 11:13, Akshay Das wrote:
> Hi Team,
>
> I'm trying to consume from a kafka cluster using java client, but the kafka
> server can only be accessed via jumphost/ssh tunnel. But even after
> creating ssh tunnel we are not able to read because once consumer fetches
> metadata
That is not what we need; we want the communication to go via the ssh tunnel.
On Fri, Sep 13, 2019 at 4:50 PM M. Manna wrote:
> why not try using internal vs external traffic
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-103%3A+Separation+of+Internal+and+External+traffic
>
>
> if you set
why not try using internal vs external traffic
https://cwiki.apache.org/confluence/display/KAFKA/KIP-103%3A+Separation+of+Internal+and+External+traffic
if you set EXTERNAL endpoints and map them to SSL - your clients should only
receive EXTERNAL endpoints for comms. Does this sound okay for you?
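For reference, a minimal sketch of that KIP-103-style listener split in server.properties - the hostnames and ports here are placeholders, not values from this thread:

```properties
# Hypothetical server.properties fragment: separate INTERNAL and EXTERNAL listeners
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
# Clients connecting on the EXTERNAL listener are only told about EXTERNAL endpoints
advertised.listeners=INTERNAL://broker1.internal:9092,EXTERNAL://broker1.example.com:9093
inter.broker.listener.name=INTERNAL
```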
We cannot use external endpoints for security reasons.
Is there an option to tell zookeeper/broker not to send broker host details in
the metadata to its clients?
On Thu, Sep 12, 2019 at 3:05 PM M. Manna wrote:
> Have you tried using EXTERNAL endpoints for your Kafka broker to separate
> TLS from
Hey,
I have done this before with this proxy:
https://github.com/grepplabs/kafka-proxy#connect-to-kafka-through-socks5-proxy-example
You can spin up a SOCKS proxy when you ssh to your jumphost (the -D argument,
if I'm not mistaken) and configure the proxy as described in the readme.
It is good for dev an
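A rough sketch of that setup, assuming the jumphost is reachable as jumphost.example.com and the broker advertises kafka-broker:9092 (both placeholders; the flag names are from the kafka-proxy README's SOCKS5 example):

```shell
# Open a SOCKS5 proxy on localhost:1080 through the jumphost
# (-D = dynamic port forwarding, -N = no remote command, -f = background)
ssh -D 1080 -N -f user@jumphost.example.com

# Run kafka-proxy locally: it listens on 127.0.0.1:32400, forwards to the
# real broker through the SOCKS5 proxy, and rewrites the broker addresses
# returned in metadata so the client never needs direct broker access
kafka-proxy server \
  --bootstrap-server-mapping "kafka-broker:9092,127.0.0.1:32400" \
  --forward-proxy socks5://localhost:1080
```

The Java consumer would then point bootstrap.servers at 127.0.0.1:32400 instead of the real broker.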
Have you tried using EXTERNAL endpoints for your Kafka broker to separate
TLS from internal traffic? Also, have you checked via zk admin whether the
broker metadata is exposing your TLS endpoints to clients?
On Thu, 12 Sep 2019 at 10:23, Akshay Das wrote:
> Hi Team,
>
> I'm trying to consume from