Hi
I have Kafka version 0.8.2 installed on my cluster.
I am able to create topics and write messages to them. But when I fetched the
latest offsets from the brokers using
kafka-run-class kafka.tools.GetOffsetShell --broker-list brokerAddr:9092
--topic topicname --time -1
I got offsets of only a few partitions and t
Hi
I have a Kafka cluster with 2 brokers and a replication factor of 2.
Now say for a partition P1 the leader broker B1 has offsets 1-10 and the ISR broker
B2 is behind the leader and has data for offsets 1-5 only. Now broker B1
goes down and Kafka elects B2 as leader for partition P1. Now a new write for
partitio
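A toy sketch of the scenario above (plain Java, not Kafka code): if B2 is elected leader while missing offsets 6-10, it assigns the next offset from its own log end, so the messages B1 alone held are discarded. Note that with acks=-1 a message is only committed once all ISR replicas have it, so losing 6-10 this way implies B2 had actually fallen behind the committed point, i.e. an unclean leader election.

```java
import java.util.Arrays;
import java.util.List;

// Toy simulation, not Kafka code: leader B1 holds offsets 1-10 while
// follower B2 only replicated 1-5. If B1 dies and B2 becomes leader,
// the next offset is assigned from B2's log end, so 6-10 are gone.
public class FailoverSketch {
    static int nextOffsetAfterFailover(List<Integer> newLeaderLog) {
        // The new leader continues from its own log end offset.
        return newLeaderLog.get(newLeaderLog.size() - 1) + 1;
    }

    public static void main(String[] args) {
        List<Integer> b2Log = Arrays.asList(1, 2, 3, 4, 5); // B2 was behind
        System.out.println("new leader's next offset: "
                + nextOffsetAfterFailover(b2Log));
    }
}
```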
Hi
I have a Kafka cluster with 3 brokers. I have a topic with ~50 partitions
and a replication factor of 3.
When 2 brokers are down, I am getting the below error in the producer code:
5/09/09 00:56:15 WARN network.Selector: Error in I/O with brokerIP (IP
of the broker which is down)
java.net.ConnectExceptio
Is it possible, using the low level consumer, to get Kafka messages based on
timestamp? Say I want to get all messages from the last 5 minutes, but I don't know
what the offsets of the partitions were 5 minutes back.
In the low level consumer, when I gave an epoch timestamp for whichTime, it failed:
requestInfo.put(topicAndPartitio
ead and SyncProducer all come from the old Scala producer, I
> thought you meant that producer not the new Java producer?
>
> Guozhang
>
> On Tue, Jun 30, 2015 at 9:09 AM, Shushant Arora wrote:
>
> > According to code of org.apache.kafka.clients.producer.KafkaPr
thread there is one SyncProducer for each destination broker. I think that
> blog may misunderstand the design a bit.
>
> Guozhang
>
> On Tue, Jun 30, 2015 at 1:45 AM, Shushant Arora wrote:
>
> > According to
> > https://engineering.gnip.com/kafka-async-p
number of ProducerSendThread will be always one, regardless of the
> number of destination brokers, or the number of partitions.
>
> Guozhang
>
> On Mon, Jun 29, 2015 at 9:38 AM, Shushant Arora wrote:
>
> > Hi
> >
> > Does kafka async producer creates thread(
Hi
Does the Kafka async producer create threads (ProducerSendThread) in producer
memory based on the number of partitions or brokers in the Kafka cluster to
which it will write?
If my cluster has 1000 partitions, will each producer have 1000 threads
running always?
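As the replies in the thread note, the old async producer uses a single ProducerSendThread regardless of partition or broker count. A hypothetical sketch of that threading model (plain Java, not Kafka source): one background thread drains one queue no matter how many destinations the messages target.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch (not Kafka source) of the old async producer's
// threading model: ONE background ProducerSendThread drains a single
// queue, regardless of how many partitions or brokers exist.
public class AsyncSenderSketch {
    private static final String SHUTDOWN = "__shutdown__";
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    final AtomicInteger drained = new AtomicInteger();
    private final Thread sendThread;

    AsyncSenderSketch() {
        sendThread = new Thread(() -> {
            try {
                while (true) {
                    String msg = queue.take();
                    if (msg.equals(SHUTDOWN)) break;
                    // the real send thread batches messages per broker here
                    drained.incrementAndGet();
                }
            } catch (InterruptedException ignored) { }
        }, "ProducerSendThread");
        sendThread.start();
    }

    void send(String msg) { queue.offer(msg); }

    void close() {
        queue.offer(SHUTDOWN);
        try { sendThread.join(); } catch (InterruptedException ignored) { }
    }

    public static void main(String[] args) {
        AsyncSenderSketch producer = new AsyncSenderSketch();
        // 1000 partitions' worth of messages still go through one thread
        for (int i = 0; i < 1000; i++) producer.send("msg-" + i);
        producer.close();
        System.out.println("drained by one thread: " + producer.drained.get());
    }
}
```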
Hi
Is ZooKeeper not fit for high write rates, since every write goes via the leader
node? Then how does Kafka maintain offsets using ZooKeeper update operations so
efficiently?
"If there are a lot of subscribers you may feel a squeeze
though."
ZooKeeper is also not strictly consistent, so then how is it guaranteed that a consumer
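Worth noting here: Kafka 0.8.2's high-level consumer can move offset commits off ZooKeeper entirely and onto Kafka's internal __consumer_offsets topic, precisely to avoid the ZooKeeper write load. A hedged consumer-config sketch (property names as in the 0.8.2 consumer configuration):

```properties
# Commit offsets to Kafka's internal __consumer_offsets topic
# instead of ZooKeeper, avoiding the ZK write bottleneck.
offsets.storage=kafka
# During migration, commit to both Kafka and ZooKeeper.
dual.commit.enabled=true
```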
When I add partitions to a topic after creation, is a restart of the producers
required?
I am using the Java producer and the messages are keyed, so when the total number
of partitions changes, do we need to restart producers, or does each send call pick up
the current number of partitions?
As per org.apache.kafka.common.u
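A toy illustration of why the partition count matters for keyed messages. This mirrors the hash-modulo idea of the default partitioning (the new Java producer actually uses murmur2 over the serialized key bytes; `partitionForKey` and `abs` here are hypothetical helpers, not Kafka APIs): the same key can land on a different partition once the partition count changes, so key-to-partition affinity is not preserved across a partition expansion.

```java
// Hypothetical sketch of hash-modulo partition selection, not the
// actual Kafka partitioner (which hashes serialized key bytes).
public class PartitionForKey {
    // Mask the sign bit so Integer.MIN_VALUE's hash doesn't stay
    // negative, in the spirit of Kafka's utility abs.
    static int abs(int n) {
        return n & 0x7fffffff;
    }

    static int partitionForKey(String key, int numPartitions) {
        return abs(key.hashCode()) % numPartitions;
    }

    public static void main(String[] args) {
        String key = "order-42"; // hypothetical message key
        // The chosen partition depends on the current partition count,
        // so adding partitions remaps existing keys.
        System.out.println("4 partitions -> " + partitionForKey(key, 4));
        System.out.println("5 partitions -> " + partitionForKey(key, 5));
    }
}
```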
3 AM, Sriharsha Chintalapani
wrote:
> Sushant,
> You are using the kafka-clients new consumer API. It looks like you want
> to use the high-level consumer API? If so you need to use the following kafka core
> lib as the dependency:
>
> <dependency>
>   <groupId>org.apache.kafka</groupId>
>   <artifactId>kafka_2.10</artifactId>
>   <version>0.8.2.1</version>
> </dependency>
>
Which is the latest jar to be used for the Kafka Java client?
As in:
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>0.8.2.1</version>
</dependency>
In class org.apache.kafka.clients.consumer.KafkaConsumer:
public Map<String, ConsumerRecords<K, V>> poll(long timeout) {
    // TODO Auto-generated method stub
    return null;
}
The poll method retur