Hello,
I've run into a weird situation with Kafka 0.8.1.1. I had an operating
cluster which I wanted to extend with new brokers. The sequence was as
follows:
1. I added the new brokers to the cluster and ensured that they appeared
under /brokers/ids.
2. Ran the reassign-partitions tool to redistribute the d
For Kafka 0.8.x[.x], refer to
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-6.ReassignPartitionsTool
.
2015-04-23 23:20 GMT+03:00 Corey Nolet :
> I have a cluster of 3 nodes and I've created a topic with some number of
> partitions and some number of replica
The fetch request size should not be less than the maximum message size.
Apparently, all your messages are larger than 1 byte, so when you set the
fetch size to 1, your consumer is unable to fetch anything.
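For reference, assuming the 0.8 high-level consumer is in use, the relevant setting is fetch.message.max.bytes, and it should be at least the broker's message.max.bytes. A minimal sketch (the ZK address and group id below are made up):
import java.util.Properties;
import kafka.consumer.ConsumerConfig;

public class FetchSizeSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zkhost:2181"); // made-up address
        props.put("group.id", "my-group");             // made-up group id
        // Must cover the largest message the broker may return
        // (broker-side message.max.bytes), otherwise the consumer
        // cannot make progress past an oversized message.
        props.put("fetch.message.max.bytes", "1048576");
        ConsumerConfig config = new ConsumerConfig(props); // validates the properties
        System.out.println("fetch size: " + props.getProperty("fetch.message.max.bytes"));
    }
}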
2015-02-03 10:53 GMT+03:00 Honda Wei ( 魏宏達 ) :
> Hi Kafka Team
>
> I write some simple program
This is a quote from Kafka documentation:
"The routing decision is influenced by the kafka.producer.Partitioner.
interface Partitioner {
  int partition(T key, int numPartitions);
}
The partition API uses the key and the number of available broker
partitions to return a partition id. This id is u
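For illustration, a custom partitioner for the 0.8.1 producer might look roughly like this; it is only a sketch, the hash-modulo logic is an arbitrary example, and the VerifiableProperties constructor is required because the producer instantiates the class named in partitioner.class reflectively:
import kafka.producer.Partitioner;
import kafka.utils.VerifiableProperties;

public class KeyHashPartitioner implements Partitioner {
    public KeyHashPartitioner(VerifiableProperties props) {
        // Required by the 0.8.1 producer; the properties are unused here.
    }

    @Override
    public int partition(Object key, int numPartitions) {
        // Mask the sign bit so the result is always a valid partition index.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }
}
The class would then be referenced via the "partitioner.class" producer property.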
Hello,
I wonder whether there are any known issues with running Kafka 0.8.1.1
against Oracle JDK 7: any unsupported JVM options in the startup scripts,
runtime issues, etc. I'm trying to understand how easy migrating Kafka from
JDK 6 to 7 would be.
Thanks,
Yury
I also noticed that some disks do not contain any partitions, which is even
stranger. Why does the broker not use them while other disks are
overloaded?
2014-12-25 12:07 GMT+03:00 Yury Ruchin :
> Hi,
>
> With Kafka 0.8.1.1, I'm continuously running into the issue with no disk
>
Hi,
With Kafka 0.8.1.1, I'm continuously running into the issue with no disk
space remaining. I'm observing that on the problematic brokers, partitions
are distributed very unevenly across several identical disks. For example,
given that the partitions are of similar size, I see 6 partitions
Hello,
I've come across a (seemingly) strange situation where my Kafka producer
resulted in a very uneven distribution across partitions. I found that I was
using a null key to produce messages, guided by the following clause in the
documentation: "If the key is null, then a random broker partition is picked." Howev
normally update such
> metadata to ZK during the period?
>
> Guozhang
>
> On Tue, Dec 2, 2014 at 7:38 AM, Yury Ruchin wrote:
>
> > Hello,
> >
> > In a multi-broker Kafka 0.8.1.1 setup, I had one broker crash. I
> > restarted it after some noticeable time
Hello,
In a multi-broker Kafka 0.8.1.1 setup, I had one broker crash. I
restarted it after some noticeable time, so it started catching up with the
leader very intensively. During the replication, I see that the disk load
on the ZK leader spikes abnormally, resulting in ZK performance
degradation. Wh
2949371/java-map-nio-nfs-issue-causing-a-vm-fault-a-fault-occurred-in-a-recent-uns
>
> Guozhang
>
> On Fri, Nov 14, 2014 at 5:38 AM, Yury Ruchin
> wrote:
>
> > Hello,
> >
> > I've run into an issue with Kafka 0.8.1.1 broker. The broker stopped
> > working a
Mark,
For non-Java clients, an option could be to expose JMX over a REST API,
using Jolokia as an adapter. This may be helpful:
http://stackoverflow.com/questions/5106847/access-jmx-agents-from-non-java-clients
Joel,
I'm not familiar with the Kafka build infrastructure, but e.g. Jenkins can
easily propaga
Hello,
I've run into an issue with Kafka 0.8.1.1 broker. The broker stopped
working after the disk it was writing to ran out of space. I freed up some
space and tried to restart the broker. It started some recovery procedure,
but after a short time in the logs I see the following strange error
Hello,
A while back, I saw a mention of making preferred replica election
automatic. It would be really nice to have cluster balance restored
automatically. Is this work on the roadmap? Any chance it will be included
in 0.8.2? 0.9?
Thanks,
Yury
A couple of points to keep in mind during the rolling update:
- "Controlled shutdown" should be used to bring brokers down (
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-1.ControlledShutdown),
so that brokers gracefully transfer leadership before actually go
icated will be lost)." for acks=1
>
> More info on that:
> http://aphyr.com/posts/293-call-me-maybe-kafka
>
> However, I'd be happy if someone with more Kafka experience confirmed my
> understanding of that issue.
>
>
> Kind regards,
> Michał Michalski,
> m
e gap in the
background once broker 1 started over?
2014-06-18 19:59 GMT+04:00 Neha Narkhede :
> You don't gain much by running #4 between broker bounces. Running it after
> the cluster is upgraded will be sufficient.
>
> Thanks,
> Neha
>
>
> On Wed, Jun 18, 2014 at 8
In ZK shell the following command:
ls /brokers/ids
will give you a list like this:
[0, 1, 2, 3, 4]
where the items are broker ids that you can then use to issue a "get" request to ZK:
get /brokers/ids/
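The same lookup can also be done programmatically with the ZkClient library that Kafka ships with; a rough sketch (the ZK address is made up, and the znodes contain the brokers' JSON registration data):
import java.util.List;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.serialize.BytesPushThroughSerializer;

public class ListBrokersSketch {
    public static void main(String[] args) {
        ZkClient zkClient = new ZkClient("zkhost:2181", 30000, 30000,
                new BytesPushThroughSerializer()); // made-up ZK address
        List<String> brokerIds = zkClient.getChildren("/brokers/ids");
        for (String id : brokerIds) {
            byte[] data = zkClient.readData("/brokers/ids/" + id);
            System.out.println(id + " -> " + new String(data));
        }
        zkClient.close();
    }
}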
2014-06-26 12:37 GMT+04:00 Balasubramanian Jayaraman <
balasubramanian.jayara...@autodesk.com>:
tool?
2014-06-25 2:04 GMT+04:00 Neha Narkhede :
> I would turn on DEBUG on the tool to see which url it reads and doesn't
> find the owners.
>
>
>
>
> On Tue, Jun 24, 2014 at 11:28 AM, Yury Ruchin
> wrote:
>
> > I've just double-checked. The URL
I've just double-checked. The URL is correct; it is the same one used by the
Kafka clients.
2014-06-24 22:21 GMT+04:00 Neha Narkhede :
> Is it possible that maybe the zookeeper url used for the
> VerifyConsumerRebalance tool is incorrect?
>
>
> On Tue, Jun 24, 2014 at 12:02 AM,
Hi,
I've run into the following problem. I try to read from a 50-partition
Kafka topic using the high-level consumer with 8 streams. I'm using an
8-thread pool, each thread handling one stream. After a short time, the
threads reading from the streams stop reading. The lag between the topic's
latest offset and the c
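A rough sketch of the setup described above, written against the 0.8 high-level consumer API (the addresses, group id, and topic name are made up):
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class StreamsPerThreadSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zkhost:2181"); // made-up address
        props.put("group.id", "my-group");             // made-up group id
        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, Integer> topicCount = new HashMap<String, Integer>();
        topicCount.put("my-topic", 8); // 8 streams over the 50-partition topic
        List<KafkaStream<byte[], byte[]>> streams =
                connector.createMessageStreams(topicCount).get("my-topic");

        ExecutorService pool = Executors.newFixedThreadPool(8); // one thread per stream
        for (final KafkaStream<byte[], byte[]> stream : streams) {
            pool.submit(new Runnable() {
                public void run() {
                    ConsumerIterator<byte[], byte[]> it = stream.iterator();
                    // hasNext() blocks until a message arrives
                    // (or consumer.timeout.ms expires, if set).
                    while (it.hasNext()) {
                        byte[] payload = it.next().message(); // handle the message here
                    }
                }
            });
        }
    }
}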
Hi folks,
In my project, we want to update our active Kafka 0.8 cluster to
Kafka 0.8.1.1 without downtime or losing any data. The process (after
reading http://kafka.apache.org/documentation.html#upgrade) looks to me
like this. For each broker in turn:
1. Bring the broker down.
2. Upd
Hi all,
I'm using Kafka 0.8 and I've run into an issue with ConsumerConnector.
Steps to reproduce:
1. Start a single-broker Kafka cluster with auto.create.topics.enable set to
"true".
2. Start a ConsumerConnector on a topic (which does not yet exist) with
auto.offset.reset set to "smallest".
3. Produce s
Looks like the Kafka classes are not on your classpath. You should either
assemble an uber-jar from your project (e.g. using the Maven Assembly plugin
with the jar-with-dependencies descriptor ref) or add the location of the
Kafka classes to your classpath.
2014-05-06 19:11 GMT+04:00 David Novogrodsky :
> All,
>
>
The kafkaServer-gc.log file contains GC logs produced by the JVM. The JVM GC
logging options are set in the kafka-run-class.sh script from the Kafka
distribution. GC logging is enabled by default. You should be able to
override the default behavior with the KAFKA_GC_LOG_OPTS variable (I've
never tried that myself, though)
Hi,
With Kafka 0.8, I send messages using the Producer in async mode. I wonder
what will happen if a message cannot be sent (e.g. all brokers go down).
In sync mode, error handling is straightforward: after
"message.send.max.retries" retries, the send() method will throw
FailedToSendMessageException. Howe
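For reference, the async-mode knobs involved look roughly like this with the 0.8 producer; this is only a sketch (the broker list is made up), and the comments reflect my understanding of the async path rather than anything authoritative:
import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.ProducerConfig;

public class AsyncProducerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092"); // made-up broker list
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("producer.type", "async");
        // Retries attempted by the background send thread before a batch is given up on.
        props.put("message.send.max.retries", "3");
        // Behavior when the in-memory queue is full:
        // -1 blocks the caller indefinitely, 0 drops the new message immediately.
        props.put("queue.enqueue.timeout.ms", "-1");
        props.put("queue.buffering.max.messages", "10000");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        // In async mode send() returns immediately; as far as I understand,
        // delivery failures surface only in the producer's log, not as
        // exceptions in the calling thread.
        producer.close();
    }
}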
Hi,
I'm using Kafka 0.8, which does not have a command to delete a topic. However,
I need this functionality, and I'm trying to adopt the approach from
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/DeleteTopicCommand.scala.
I see it simply deletes the topic node from ZK. My qu
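For illustration, what that command essentially boils down to can be sketched with the ZkClient library that Kafka 0.8 ships with; the address and topic name are made up, and the caveat in the comments is my understanding rather than an authoritative statement:
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.serialize.BytesPushThroughSerializer;

public class DeleteTopicZkSketch {
    public static void main(String[] args) {
        String topic = "topic-to-delete"; // made-up topic name
        ZkClient zkClient = new ZkClient("zkhost:2181", 30000, 30000,
                new BytesPushThroughSerializer()); // made-up ZK address
        // Recursively remove the topic's znode, which is essentially what
        // DeleteTopicCommand does. Note: this only touches ZK metadata;
        // the log segments on the brokers' disks are not removed by this step.
        zkClient.deleteRecursive("/brokers/topics/" + topic);
        zkClient.close();
    }
}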