Have you tried changing the configured JMX port? After all, it's possible
the conflict is between kafka and some other software running on the same
server.
On 28 June 2017 at 21:06, Eric Coan wrote:
> Hello,
>
>
> Unfortunately, Kafka does indeed start up and run for a little while before
> crashing
I believe so. You need to be careful that the mirror maker producer doesn't
reorder messages; in particular if retries > 0 then
max.in.flight.requests.per.connection must be 1. If
retries=0 then it doesn't matter what max.in.flight.requests.per.connection
is.
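To make that concrete, a mirror maker producer config that preserves ordering might look like the fragment below (the values are illustrative, not recommendations):

```
# Safe combination: retries are enabled, but with only one in-flight
# request per connection a failed-and-retried batch cannot leapfrog a
# later batch, so ordering within a partition is preserved.
retries=3
max.in.flight.requests.per.connection=1
```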
On 29 June 2017 at 05:52, Sunil Parm
I've updated the experimental code with a couple of ways of doing joins.
One following the fluent approach and one following the builder approach.
The 2 examples can be found here:
https://github.com/dguy/kafka/blob/dsl-experiment/streams/src/test/java/org/apache/kafka/streams/kstream/internals/KSt
Hi everyone,
I use kafka_2.11-0.10.1.1, and I'm trying to check that it works. I have
Zookeeper and Kafka on one host.
I'm calling console producer: "kafka-console-producer.sh --broker-list
10.0.0.19:9092 --topic test"
I expect the message in the consumer. The consumer is invoked as:
"kafka-console-consumer.sh -
Please share your server configuration. How are you advertising the
listeners?
On 29 Jun 2017 13:44, "Anton Mushin" wrote:
> Hi everyone,
> I use kafka_2.11-0.10.1.1, and I'm trying to check that it works. I have
> Zookeeper and Kafka on one host.
> I'm calling console producer: "kafka-console-produc
Thanks for your reply.
My server.properties:
broker.id=0
#delete.topic.enable=true
#advertised.listeners=PLAINTEXT://your.host.name:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafk
All the brokers write to server.log. The broker that happens to be the
controller will also write to the controller.log file.
-Dave
-Original Message-
From: karan alang [mailto:karan.al...@gmail.com]
Sent: Wednesday, June 28, 2017 6:04 PM
To: users@kafka.apache.org
Subject: Kafka logs
I have tried changing the port, to no avail. As you can see in the first
email, running `netstat -tulpn` produces output showing that nothing is
using port . Also, since it works 90% of the time, it doesn't seem likely
that other software would sometimes be running and sometimes not.
O
Uncomment the #advertised.listeners line with your address & port, create a
fresh topic, and retry your test. Let us know what happens.
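For example, assuming the broker is reachable at the address you used with the console producer (10.0.0.19 in your first message), the uncommented line would look like:

```
# Clients will connect using whatever address is advertised here,
# so it must be reachable from the client machines.
advertised.listeners=PLAINTEXT://10.0.0.19:9092
```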
KR,
On 29 Jun 2017 2:06 pm, "Anton Mushin" wrote:
> Thanks for your reply.
>
> My server.properties:
>
> broker.id=0
> #delete.topic.enable=true
> #advertised.
Hi Vincent,
What version of Kafka/Kafka Streams are you running, more specifically when
this error occurred?
Thanks,
Bill
On Wed, Jun 28, 2017 at 12:24 PM, Bill Bejeck wrote:
> Thanks for the info Vincent.
>
> -Bill
>
> On Wed, Jun 28, 2017 at 12:19 PM, Vincent Rischmann
> wrote:
>
>> I'm not
Hi,
I'm trying to use metrics.reporters (https://pastebin.com/185HAjq5) but
can't get any useful values. All I get is 0.0 or -infinity. What is
wrong? I'm referencing it as in this library:
https://github.com/SimpleFinance/kafka-dropwizard-reporter
metric.reporters=kafkaReporter.MetricsExtractor
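One thing worth double-checking: metric.reporters must be the fully-qualified name of a class that implements org.apache.kafka.common.metrics.MetricsReporter and is on the client's classpath (using your class name as-is here):

```
# Must be a fully-qualified class name, present on the classpath,
# implementing org.apache.kafka.common.metrics.MetricsReporter.
metric.reporters=kafkaReporter.MetricsExtractor
```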
Hi Team,
We have a Kafka cluster with multiple topics in it, shared by multiple
services (clients). Each service has multiple event sources from which it
pushes messages to the Kafka brokers. Once a service starts producing
messages at a high rate, it will affect other s
Request quotas were just added in 0.11. Does that help in your use case?
https://cwiki.apache.org/confluence/display/KAFKA/KIP-124+-+Request+rate+quotas
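If I remember the KIP correctly, the quota can be applied per client (or per user) with kafka-configs.sh; a sketch along these lines, where the entity name and percentage are placeholders:

```
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --add-config 'request_percentage=25' \
  --entity-type clients --entity-name noisy-service
```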
-hans
> On Jun 29, 2017, at 12:55 AM, sukumar.np wrote:
>
> Hi Team,
>
>
>
> We are having a Kafka cluster with multiple Topics in it and s
Hi,
we're using Kafka 0.10.1.1 and the streams app is using 0.10.2.1
On Thu, Jun 29, 2017, at 04:21 PM, Bill Bejeck wrote:
> Hi Vincent,
>
> What version of Kafka/Kafka Streams are you running, more specifically
> when
> this error occurred?
>
> Thanks,
> Bill
>
> On Wed, Jun 28, 2017 at 12:24
MirrorMaker acts as a consumer+producer. So it will consume from the source
topic and produce to the destination topic. That means that the destination
partition is chosen using the same technique as the normal producer:
* if the source record has a key, the key will be hashed and the hash will
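To make the key-hashing point concrete, here is a small self-contained sketch. It is illustrative only: Kafka's DefaultPartitioner actually uses murmur2 on the serialized key, while this sketch substitutes Java's Arrays.hashCode to show the idea.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class PartitionSketch {
    // Illustrative stand-in for Kafka's default partitioner: hash the
    // serialized key and map it onto the partition count. (Kafka uses
    // murmur2, not Arrays.hashCode.)
    static int partitionFor(byte[] keyBytes, int numPartitions) {
        int hash = Arrays.hashCode(keyBytes);
        return (hash & 0x7fffffff) % numPartitions; // force non-negative
    }

    public static void main(String[] args) {
        byte[] key = "order-42".getBytes(StandardCharsets.UTF_8);
        int p1 = partitionFor(key, 6);
        int p2 = partitionFor(key, 6);
        // The same key always maps to the same partition, which is why
        // MirrorMaker preserves per-key ordering for keyed records.
        System.out.println(p1 == p2);
    }
}
```

The practical consequence: keyed records land in a deterministic partition on the destination cluster (given the same partition count), while null-keyed records are spread round-robin.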
Guozhang,
"1) if the coming record's key is null, then when it flows into the join
processor inside the topology this record will be dropped as it cannot be
joined with any records from the other stream."
Can you please elaborate on the notion of key? By keys, do you mean Kafka
partition keys?
Fo