Hi all,
I'm testing Kafka 0.8.2.0 using the ASYNC producer.
I have two processes, each process has 20 threads sending messages; the
test runs for 10 seconds.
Each message is a random UUID.
The result is as below:
I called "producer.send(producerRecord, callback)" 2,200,191 times;
the "callback"
If I want to send the messages synchronously, I can do as below:
future = producer.send(producerRecord, callback);
future.get();
but the throughput decreases dramatically.
Is there a way to send the messages in batches but synchronously?
Can you do:
producer.send(...)
...
producer.send(...)
producer.flush()
By the time flush() returns, all of your messages should have been sent.
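A minimal sketch of the pattern described above, using the new Java producer. The broker address and topic name are placeholders, and note this is an assumption on my part: KafkaProducer.flush() was added after 0.8.2.0, so this needs a newer producer client.

```java
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BatchedSyncSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Queue a batch of sends; none of these calls block on the broker.
            for (int i = 0; i < 1000; i++) {
                producer.send(new ProducerRecord<>("test-topic",
                        UUID.randomUUID().toString()));
            }
            // flush() blocks until every queued record has completed,
            // giving batch-level synchrony without per-record future.get().
            producer.flush();
        }
    }
}
```

This keeps the producer's internal batching intact, so throughput stays close to the pure async case while still giving you a synchronization point per batch.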
On 8 September 2015 at 11:50, jinxing wrote:
> if i wanna send the message syncronously i can do as below:
> future=producer.send(producerRecord, callb
Hi all,
I'm a user of Kafka (version kafka_2.10-0.8.2.0), but recently I hit a
problem that has been annoying me for some time.
I created a topic, call it A. This topic has 18 partitions, and I run
9 web services on 9 servers to consume it; each service consumes 2
partitions, as configured in a file.
Hi,
I have been trying (struggling) to pull metrics from the Kafka producer
using jmxtrans but have not been able to. I can pull metrics from the
Kafka servers (brokers) but not from producers and consumers. The following
is the command I am using to pull the metrics:
--> bin/kafka-run-c
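One common cause: the broker startup scripts export a JMX port, but a producer or consumer application's JVM does not expose JMX at all unless you add the flags yourself. A sketch of what that could look like — the port (9999), jar name, and main class are all illustrative assumptions, and the unauthenticated settings are for testing only:

```shell
# Expose JMX on the producer application's JVM so jmxtrans can connect.
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=9999 \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false"

# Launch the producer app (hypothetical jar/class) with those flags.
java $KAFKA_JMX_OPTS -cp my-producer.jar com.example.MyProducer
```

Once the producer JVM listens on that port, point jmxtrans at host:9999 the same way you already do for the brokers.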
Hi,
syslog-ng (https://syslog-ng.org/) is one of the most widely used open
source log collection tools, capable of filtering, classifying, and parsing
log data and forwarding it to a wide variety of destinations. In its most
recent release (3.7.1
https://github.com/balabit/syslog-ng/releases/tag/syslo
Jörg,
So, I will start with some assumptions I have which affect my suggestions
below. I assume that the details you list are per cluster, and you have
3 clusters, one in each DC. Each DC's cluster replicates its topic ONLY
to the other DCs (MirrorMaker configuration, otherwise you have circul
Hi King,
So, I think the issue could be which consumer you are using. Are you
using the simple consumer or the high-level consumer API? And which
version of Kafka are you using?
If you are using the simple consumer API, you can listen to a specific
partition, but you have to do the failover cod
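A minimal sketch of reading one specific partition with the 0.8 SimpleConsumer API. The broker host, topic name, and partition number are placeholders; leader discovery and the failover handling the reply alludes to are deliberately omitted here.

```java
import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;

public class PartitionReader {
    public static void main(String[] args) {
        // Connect directly to the broker that currently leads the partition.
        SimpleConsumer consumer = new SimpleConsumer(
                "broker1", 9092, 100000, 64 * 1024, "partitionReader");

        FetchRequest req = new FetchRequestBuilder()
                .clientId("partitionReader")
                .addFetch("my-topic", 0 /* partition */, 0L /* offset */, 100000)
                .build();

        FetchResponse resp = consumer.fetch(req);
        for (MessageAndOffset mo : resp.messageSet("my-topic", 0)) {
            System.out.println("offset " + mo.offset());
        }
        consumer.close();
    }
}
```

With the high-level consumer, by contrast, partition assignment is handled for you (including rebalancing on failure), which is why the choice of API matters for this question.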
Just a reminder that the deadline for the vote is today.
Thanks,
Jun
On Thu, Sep 3, 2015 at 9:22 AM, Jun Rao wrote:
> This is the first candidate for release of Apache Kafka 0.8.2.2. This only
> fixes two critical issues (KAFKA-2189 and KAFKA-2308) related to snappy in
> 0.8.2.1.
>
> Release Notes
+1 (non-binding)
Ran the build; works fine. All test cases passed.
On Thu, Sep 3, 2015 at 9:22 AM, Jun Rao wrote:
> This is the first candidate for release of Apache Kafka 0.8.2.2. This only
> fixes two critical issues (KAFKA-2189 and KAFKA-2308) related to snappy in
> 0.8.2.1.
>
> Release Notes
How large are your messages, compressed? 50k requests/sec could equate to as
little as 50 KB/sec of traffic per topic, or 50 GB/sec, or more. The size of
the messages is going to be pretty important when considering overall
throughput here. Additionally, what kind of network interfaces are you
using on
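The arithmetic behind those two endpoints is just rate times message size: 50k one-byte messages per second is ~50 KB/sec, while 50k one-megabyte messages per second is ~50 GB/sec. A quick sketch:

```java
public class ThroughputEstimate {
    // Aggregate bytes/sec produced at a given request rate and message size.
    static long bytesPerSec(long requestsPerSec, long msgSizeBytes) {
        return requestsPerSec * msgSizeBytes;
    }

    public static void main(String[] args) {
        long rate = 50_000L; // 50k requests/sec, as in the question
        System.out.println(bytesPerSec(rate, 1L));         // 1 B msgs  -> 50,000 B/s (~50 KB/s)
        System.out.println(bytesPerSec(rate, 1_000_000L)); // 1 MB msgs -> 50,000,000,000 B/s (~50 GB/s)
    }
}
```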
Hi,
I have a Kafka cluster with 3 brokers, and a topic with ~50 partitions
and a replication factor of 3.
When 2 brokers are down, I get the error below in the producer code:
15/09/09 00:56:15 WARN network.Selector: Error in I/O with brokerIP (IP
of the broker which is down)
java.net.ConnectExceptio
We have observed that some producer instances stopped sending traffic to
the brokers because the memory buffer was full. Those producers got stuck in
this state permanently. Because we couldn't figure out which broker was bad,
I did a rolling restart of all the brokers. After the bad broker got
bounc
Refer to this earlier thread on the mailing list:
http://qnalist.com/questions/6002514/new-producer-metadata-update-problem-on-2-node-cluster
https://issues.apache.org/jira/browse/KAFKA-1843
On Wed, Sep 9, 2015 at 7:37 AM, Shushant Arora
wrote:
> Hi
>
> I have a kafka cluster with 3 brokers. I have a topic with ~50 partitio