Thanks a lot, Tim. This is the broker configuration:

----------
broker.id=1
port=9092
host.name=10.100.70.128
num.network.threads=4
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
auto.leader.rebalance.enable=true
auto.create.topics.enable=true
default.replication.factor=3

log.dirs=/tmp/kafka-logs-1
num.partitions=8

log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=168
log.segment.bytes=536870912
log.cleanup.interval.mins=1

zookeeper.connect=10.100.70.128:2181,10.100.70.28:2181,10.100.70.29:2181
zookeeper.connection.timeout.ms=1000000

-----------------------


We have actually been experimenting with request.required.acks in the
producer config: -1 causes long latency, while 1 is the setting under which
messages are lost. But I am not sure whether this is the reason we lose
the records.
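For reference, here is a minimal sketch of the new-producer pattern Tim
describes, blocking on the returned future so unacknowledged sends are not
silently dropped. The broker address is taken from the config above; the
topic name, serializers, and class name are placeholders, not from our
actual producer:

```java
import java.util.Properties;

// Sketch of the producer settings relevant to the lost-message question.
public class AcksDemo {
    static Properties buildProps() {
        Properties props = new Properties();
        // Broker from the config above; adjust to your cluster.
        props.put("bootstrap.servers", "10.100.70.128:9092");
        // acks=all (same as -1): leader waits for the full in-sync replica
        //   set. Most durable, but highest latency (what we saw with -1).
        // acks=1: only the leader acknowledges; records can be lost if the
        //   leader dies before followers replicate.
        // acks=0: fire-and-forget; no acknowledgement at all.
        props.put("acks", "all");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildProps();
        System.out.println("acks=" + props.getProperty("acks"));
        // With the new producer you would then do (requires a live broker):
        //   KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        //   Future<RecordMetadata> f =
        //       producer.send(new ProducerRecord<>("my-topic", "payload"));
        //   f.get();           // block until the broker acknowledges this record
        //   producer.close();  // flushes any sends still in flight
    }
}
```

Even with acks=-1, exiting before every returned future completes (or
before producer.close()) can make records appear "lost" from the client
side.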


thanks

AL

On Fri, Jan 2, 2015 at 9:59 AM, Timothy Chen <tnac...@gmail.com> wrote:

> What's your configured required.acks? Also, are you waiting for all of
> your messages to be acknowledged?
>
> The new producer returns futures back, but you still need to wait for
> the futures to complete.
>
> Tim
>
> On Fri, Jan 2, 2015 at 9:54 AM, Sa Li <sal...@gmail.com> wrote:
> > Hi, all
> >
> > We are sending messages from a producer: we sent 100,000 records, but
> > we see only 99,573 records for that topic. We confirmed this by
> > consuming the topic and checking the log size in the Kafka web console.
> >
> > Any ideas about the lost messages? What could be causing this?
> >
> > thanks
> >
> > --
> >
> > Alec Li
>



-- 

Alec Li