-- Forwarded message --
From: Elahe Bagheri
Date: Sat, Jul 1, 2017 at 4:16 PM
Subject: kafka installation
To: users@kafka.apache.org
Dear,
Recently I tried to install Kafka on Ubuntu Server 16.04. I installed JDK 8
and ZooKeeper successfully; there was no problem with installing k
When a KStream / KTable is created from a source topic, both of them have
records as key-value pairs, and the key is read from Kafka as the message
key.
What you showed in JSON seems to be only the value of the message, and hence
I'm asking what's the key of the message, which will be the key of the
s
Thank you, I got this problem solved with your advice.
From: tao xiao
Sent: July 5, 2017, 20:55:45
To: users@kafka.apache.org
Subject: Re: org.apache.kafka.common.KafkaException: Failed to construct kafka
producer
you need to use org.apache.kafka.common.serialization.StringSerializer as
your ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG
Hello Tom,
Currently only the client-side packages use o.a.k.common.metrics for
metrics reporting; broker-side metrics (including
LeaderElectionRateAndTimeMs) are still implemented on Yammer metrics
(com.yammer.metrics).
Guozhang
On Mon, Jul 3, 2017 at 4:20 AM, Tom Dearman wrote:
I think Damian's finding is correct regarding the consumer bug, and there
is a PR being worked on already:
https://github.com/apache/kafka/pull/3489/files
Guozhang
On Tue, Jul 4, 2017 at 10:04 AM, Debasish Ghosh
wrote:
> Thanks!
>
> On Tue, Jul 4, 2017 at 10:28 PM, Damian Guy wrote:
>
> > Yes
The literature suggests running MM on the target cluster when possible
(except when the transferred data must be encrypted).
I am wondering whether this is still the recommended approach when mirroring
from multiple clusters into a single cluster (i.e. multiple MM instances).
Is
Hi team,
What is the command to shut down a Kafka server gracefully instead of using 'kill
-9 PID'?
If we use bin/kafka-server-stop.sh it shows "No kafka server to stop", but the
service is actually running and I can see the PID using "ps -ef | grep kafka".
Thanks
Achintya
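For what it's worth, the usual answer is that the broker shuts down gracefully on SIGTERM, which is exactly what kafka-server-stop.sh sends; the script just greps the process list for the broker's main class (kafka.Kafka), so a very long or truncated command line can make it report "No kafka server to stop" even though the broker is up. A minimal sketch of doing the same thing by hand (the grep pattern mirrors the stock script; adjust to your setup):

```shell
# Find the broker PID: the JVM command line contains the main class kafka.Kafka
PID=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')
if [ -n "$PID" ]; then
  # SIGTERM triggers Kafka's shutdown hook (logs flushed, leadership migrated);
  # SIGKILL (-9) skips all of that and risks a long log recovery on restart.
  kill -s TERM "$PID"
else
  echo "no broker process found"
fi
```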
Hi Raghav,
Yes, you should be able to use AdminClient from 0.11.0. Take a look at the
Javadocs (
https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/admin/package-summary.html).
The integration tests may be useful too (
https://github.com/apache/kafka/blob/trunk/core/src/test/scala/inte
Quick update: this can be unblocked with consumer.wakeup(). So my current
workaround is to run this in a separate thread and cancel it after a
timeout.
On Fri, Jun 30, 2017 at 11:14 AM, Raghu Angadi wrote:
> Consumer blocks forever during initialization if the brokers are not
> reachable. 'requ
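The workaround described above might look like the following sketch (topic name, broker address, and the 30-second timeout are all illustrative). wakeup() is the one KafkaConsumer method documented as safe to call from another thread, and it makes a blocked poll() throw WakeupException:

```java
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class PollWithTimeout {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "unreachable:9092");
        props.put("group.id", "example");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singleton("some-topic"));

        // Watchdog thread: interrupt the blocked poll() after a deadline
        ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();
        watchdog.schedule(consumer::wakeup, 30, TimeUnit.SECONDS);
        try {
            consumer.poll(10_000); // may block indefinitely if brokers are down
        } catch (WakeupException e) {
            System.err.println("gave up waiting for brokers");
        } finally {
            watchdog.shutdownNow();
            consumer.close();
        }
    }
}
```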
Hi Everyone,
We just upgraded to 0.10.0, and we've repeatedly seen a situation where a
broker is up and appears to be fetching replica state from the leader
replicas, but the broker is not listed as part of the cluster. Any ideas as
to why this is happening? Anything I should grep for in the logs?
T
Hi,
Does anyone know how to batch fetch/commit the Kafka topic offsets using
the new Kafka 0.10 API?
When we were using Kafka 0.8.1, we used BlockingChannel to send
OffsetCommitRequest and OffsetFetchRequest to do it in batch from ZK.
However, in 0.10 everything seems to be built around a single consumer.
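With the new consumer API, a batch commit is a single commitSync(Map) call: all partitions in the map go out in one OffsetCommitRequest. A rough sketch (topic names and offsets are made up, and 'consumer' is assumed to be already configured); note that fetching committed offsets is still per-partition in 0.10:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class BatchCommitSketch {
    static void commitBatch(KafkaConsumer<?, ?> consumer) {
        Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
        offsets.put(new TopicPartition("topic-a", 0), new OffsetAndMetadata(42L));
        offsets.put(new TopicPartition("topic-a", 1), new OffsetAndMetadata(17L));
        // One network round trip commits every partition in the map
        consumer.commitSync(offsets);

        // Fetching is per-partition in 0.10; a bulk committed(Set) only
        // arrived in much later client versions
        OffsetAndMetadata committed =
            consumer.committed(new TopicPartition("topic-a", 0));
        System.out.println(committed);
    }
}
```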
Hi Rajini
Now that 0.11.0 is out, can we use the admin client? Is there some
example code for these?
Thanks.
On Wed, May 24, 2017 at 9:06 PM, Rajini Sivaram
wrote:
> Hi Raghav,
>
> Yes, you can create ACLs programmatically. Take a look at the use of
> AclCommand.main in https://github.com/a
Keep in mind Kafka brokers can use many file descriptors/handles. You may
need to increase the OS file descriptor limits.
http://kafka.apache.org/documentation/#os
"File descriptor limits: Kafka uses file descriptors for log segments and
open connections. If a broker hosts many partitions, consid
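Checking and raising the limit might look like this sketch (the paths, the `kafka` user name, and the 100000 value are illustrative; persistent limits are usually set in /etc/security/limits.conf or the broker's service unit, not in the shell):

```shell
# Show the current soft limit for open files in this shell
ulimit -n

# Raise it for the current session only (must not exceed the hard
# limit shown by `ulimit -Hn`):
# ulimit -n 100000

# For a persistent change, add lines like these to /etc/security/limits.conf
# for the user the broker runs as:
#   kafka  soft  nofile  100000
#   kafka  hard  nofile  100000
```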
This is quite vague.
What commands have you executed?
What do you refer to by open files? Is it the log partition or consumer
offsets?
On 5 Jul 2017 3:21 pm, "Satyavathi Anasuri"
wrote:
> Hi,
> I have created a topic with 500 partitions in a 3-node
> cluster with replication f
Thanks Vahid, I like the KIP.
One question - could we keep the current "--describe" behavior unchanged
and introduce "--only-xxx" options to filter down the full output as you
proposed ?
ciao,
Edo
--
Edoardo Comar
IBM Message Hub
IBM UK Ltd, Hu
Hi Vahid,
no we are not relying on parsing the current output.
I just thought that keeping the full output isn't necessarily that bad, as
it shows some sort of history of how a group was used.
ciao
Edo
--
Edoardo Comar
IBM Message Hub
IBM UK Ltd,
Hi,
I have created a topic with 500 partitions in a 3-node cluster
with replication factor 3. The Kafka version is 0.11. I executed the lsof command
and it lists more than 1 lakh (100,000) open files. Why are there so many open
files, and how can I reduce them?
reg's
Satya.
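A back-of-envelope estimate shows why the count is large, assuming defaults: each partition replica keeps at least one active log segment open, and each segment holds roughly three open files (.log, .index, and, since 0.10.1, .timeindex):

```shell
# Hypothetical numbers matching the question above
partitions=500; replication=3; brokers=3; files_per_segment=3
replicas_per_broker=$(( partitions * replication / brokers ))
min_log_files=$(( replicas_per_broker * files_per_segment ))
echo "replicas per broker: $replicas_per_broker"
echo "minimum open log files per broker: $min_log_files"
```

That is only a floor: partitions with more than one segment, network sockets, and memory-mapped index files all add to what lsof reports, which is why the observed count can be far higher.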
Hi,
I have to process a topic with a few thousand messages and a dozen partitions
from the very beginning. This topic is populated manually before
consumption. In this setup, a consumer consuming from several partitions at
the same time tends to consume its assigned partitions sequentially: first all
mess
you need to use org.apache.kafka.common.serialization.StringSerializer as
your ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG
On Wed, 5 Jul 2017 at 19:18 罗 辉 wrote:
> hi guys:
>
> I got an exception which i searched searchhadoop.com and the archive as
> well and got no matches, here it is:
>
> l
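The fix above can be sketched as follows; the string literals are the values of the ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG and ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG constants, and the broker address is illustrative:

```java
import java.util.Properties;

public class ProducerConfigFix {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Pointing these keys at a class that is not a Serializer (or that is
        // missing from the classpath) is a typical cause of
        // "KafkaException: Failed to construct kafka producer".
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        System.out.println(props.getProperty("value.serializer"));
        // new KafkaProducer<String, String>(props) would then construct cleanly
    }
}
```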
hi guys:
I got an exception which I searched for on searchhadoop.com and in the archive
as well, with no matches; here it is:
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more
info.
Exception in thread "main" org.apache.kafka.common.KafkaException: Failed to
construct kaf