I fixed this problem by decoupling the two issues:
- since I am using an HTTP service for interactive queries, I need an
HTTP interface. For that I was using 0.0.0.0 and a port number. This is
still the same; the only change I made is to have a Mesos configuration
where Marathon pick
A server restart is required only if you are using the SASL/PLAIN mechanism.
For other mechanisms (Kerberos, SCRAM) a restart is not required.
https://issues.apache.org/jira/browse/KAFKA-4292 will help us write
custom handlers.
On Tue, Aug 1, 2017 at 4:26 AM, Alexei Levashov <
alexei.levas...@arrayent.c
It actually is possible to do so if you adapt the Connect Converter API to
streams. There are a couple of good reasons why we shouldn't require
everyone to just use the same schema:
1. Efficiency
Connect favors a little bit of inefficiency (translating byte[] ->
serialization runtime format -> Co
Thanks Sameer.
Please stay tuned as we work on back-porting it to 0.10.2.1.
Guozhang
On Fri, Jul 28, 2017 at 10:15 PM, Sameer Kumar
wrote:
> Hi Guozhang,
>
> I am using 0.10.2.1.
>
> -Sameer.
>
> On Sat, Jul 29, 2017 at 12:05 AM, Guozhang Wang
> wrote:
>
> > Sameer,
> >
> > This bug should be alre
From: Jay Allen
Sent: Friday, July 28, 2017 12:09 PM
To: users@kafka.apache.org
Subject: Socks proxy
Hey guys,
We're trying to use the Java Kafka client but it turns out it's not socks
proxy aware - the connection uses a SocketChannel that does not work with
prox
Hello,
Is there any dynamic approach to add a user to the cluster for clients
connecting to the running cluster?
What I mean by that - can I avoid bouncing a broker if I have to add a new
user with, say, SASL authentication?
When I add a new entry to kafka_server_jaas.conf it looks like it is
required t
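For context on why the bounce is needed with PLAIN: the user list for SASL/PLAIN lives in the broker's JAAS file, which is only read at broker startup, whereas SCRAM credentials are stored in ZooKeeper and can be added while the broker runs. A minimal sketch of such a JAAS entry (usernames and passwords below are purely illustrative):

```
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_alice="alice-secret";
};
```

Each `user_<name>="<password>"` line defines one client credential, so adding a user like alice means editing this file and restarting the broker; with SASL/SCRAM the equivalent credential is written to ZooKeeper instead, so no restart is needed.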
Thanks for your response. Is it 200% only for the OffsetCommitRequest, or
is it similar for all the requests?
On Mon, Jul 31, 2017 at 12:48 PM, Gaurav Abbi wrote:
> Hi Apurva,
> 1. The increase is about 200%.
> 2. There is no increase in throughput. However, this has caused in error
> rate and
Hi Apurva,
1. The increase is about 200%.
2. There is no increase in throughput. However, this has caused an increase
in the error rate and a decrease in the responses received per second.
One more thing to mention: we also upgraded to the 0.11.0.0 client libraries.
We are currently using the old Producer and Consumer
How much is the increase? Is there any increase in throughput?
On Mon, Jul 31, 2017 at 8:04 AM, Gaurav Abbi wrote:
> Hi All,
> We recently upgraded to Kafka 0.11.0.0 from 0.10.1.1.
> Since then we have been observing increased latencies especially
> OffsetCommit requests.
> Looking at the server
On Sun, Jul 30, 2017 at 10:21 PM, UMESH CHAUDHARY
wrote:
> Hi Ewen,
> Thanks for your comments.
>
> 1) Yes, there are some test and Java classes which refer to these configs,
> so I will include them as well in the "public interface" section of the
> KIP. What should be our approach to deal with the classes
Hi,
We recently enabled timestamp and security features in our production
clusters. We have 5 smaller clusters and 2 larger aggregation clusters
which mirror data from the 5 clusters.
The version of Kafka is 0.10.1.1.
For security we enabled the brokers to have both PLAINTEXT and
SASL_
It feels like the wrong use case for Kafka. It's not meant as something you
connect your end users to. Maybe MQTT would be a better fit as the serving
layer to end users, or just poll as you said.
2017-07-31 17:10 GMT+02:00 Thakrar, Jayesh :
> You may want to look at the Kafka REST API instead of ha
You may want to look at the Kafka REST API instead of having so many direct
client connections.
https://github.com/confluentinc/kafka-rest
On 7/31/17, 1:29 AM, "Dr. Sven Abels" wrote:
Hi guys,
does anyone have an idea about the possible limits of concurrent users?
-
Hi All,
We recently upgraded to Kafka 0.11.0.0 from 0.10.1.1.
Since then we have been observing increased latencies especially
OffsetCommit requests.
Looking at the server side metrics, it seems the culprit is the Follower
time.
We are using the following
inter.broker.protocol.version: 0.11.0.0
log.me
Hi,
I would like to discuss a problem I am facing while trying to use Kafka in
Docker container.
My topology is simple:
One Linux VM running a Docker machine.
- One container for Zookeeper
- One container for Kafka
I publish/subscribe to the broker from a client on my local machine, which
is in
Hi!
I have a problem with receiving and sending messages. My pipeline is:
- I get lines of words: "x xxx xx x"
- I want to split them into single words and send each as a single message.
When I did something like this:
There is an error that it cannot resolve .runWith with such a signatu
Hello!
Is it possible to send an array of Strings with a Kafka Producer object? I
want to take some messages from 'topic1' - lines of text - then split them
into single words and send them to another topic. I tried to use a foreach
loop over msg.toString.split("//+") but it didn't help me.
object KafkaConsumer
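The split-and-resend step described above can be sketched in plain Java. The splitting itself needs no Kafka dependency; the producer call is shown only as a comment, and the topic name "words-topic" is illustrative:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class WordSplitter {

    // Split a line like "x xxx xx x" into individual words,
    // ignoring leading/trailing/repeated whitespace.
    public static List<String> splitWords(String line) {
        return Arrays.stream(line.trim().split("\\s+"))
                     .filter(w -> !w.isEmpty())
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        for (String word : splitWords("x xxx xx x")) {
            System.out.println(word);
            // With kafka-clients on the classpath, each word would then be
            // forwarded individually, e.g.:
            // producer.send(new ProducerRecord<>("words-topic", word));
        }
    }
}
```

Note that the original attempt splits on the regex "//+" (literal slashes); splitting on whitespace requires "\\s+" as above, which is likely the source of the empty result.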
Hi,
My Kafka server is running on my Linux machine and approx 50 clients are
connected and working fine. The only issue is that the log file of some
clients keeps growing even if no message is published by the producer.
My consumer is initialized with two topics = "systemupdate" and
"cfba196cf
We should pass the necessary SSL configs using the --command-config
command-line option.
>>security.protocol=SSL
>>ssl.truststore.location=/var/private/ssl/client.truststore.jks
>>ssl.truststore.password=test1234
http://kafka.apache.org/documentation.html#security_configclients
On Mon, Jul 31, 2017 at
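Put together, the quoted settings would live in a small properties file (the path and password here are just the example values from above) that is then passed to the tool via --command-config:

```
# client-ssl.properties -- pass to the tool with: --command-config client-ssl.properties
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/client.truststore.jks
ssl.truststore.password=test1234
```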
Thank you for your help Vahid.
I use kafka_2.11-0.10.0.1 with ssl.
The kafka-consumer-groups.sh script fails with a Java heap space out of
memory error.
Am I doing something wrong?
#bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server
myserver:9092 --list
Error while executing consumer group com