Is the server busy on I/O? What's log.flush.interval on the broker? For
better performance, you need to set it to a few hundred or more.
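For reference, the flush settings under discussion live in the broker's server.properties; a hedged sketch using the 0.7-era property names (the values here are purely illustrative, not recommendations):

```properties
# server.properties (0.7-era names; values illustrative)
# Flush a log to disk only after this many messages accumulate...
log.flush.interval=500
# ...or after this much time has passed, whichever comes first.
log.default.flush.interval.ms=1000
```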
Thanks,
Jun
On Wed, Nov 28, 2012 at 7:12 PM, Jamie Wang wrote:
> Hi,
>
> Thanks for all the replies. It helped me understand the system. I
> appreciate it.
Dmitri,
Could you reproduce this easily? Are you using a load balancer? Earlier,
another user had the same issue and eventually figured out that the problem
is in the network router.
Thanks,
Jun
On Wed, Nov 28, 2012 at 11:34 AM, Dmitri Priimak <
prii...@highwire.stanford.edu> wrote:
> Hi.
>
>
Hi,
Thanks for all the replies. It helped me understand the system. I appreciate
it.
I tried changing the producer properties to async and also set
queue.enqueueTimeout.ms=-1. But I still get the exception. I then changed the
producer queue size to 20K and 30K. My hope is by making the queu
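For reference, the 0.7-era async producer settings being discussed look roughly like this in producer.properties (property names from that era; the values are illustrative):

```properties
# producer.properties (0.7-era names; values illustrative)
producer.type=async
queue.size=20000
# -1 makes the producer block when the queue is full
# instead of throwing QueueFullException
queue.enqueueTimeout.ms=-1
batch.size=200
```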
If your transaction messages are all in a single data topic then you could
perhaps use a compressed message set for each transaction; that way you don't
need control messages, and the write would thus be atomic (message sets are
stored at a single physical offset and delivered the same way to consumers).
Not su
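The idea above can be sketched with a toy model (pure illustration, not Kafka code): several messages are packed into one compressed log entry, so the whole batch occupies a single offset and a consumer sees the entire transaction or none of it.

```python
import json
import zlib

# Toy model of a "compressed message set": pack a whole transaction into
# one log entry, so it sits at a single offset. (Illustration only.)
log = []

def produce_transaction(messages):
    log.append(zlib.compress(json.dumps(messages).encode()))  # one entry

def consume(offset):
    # Decompressing the entry yields the whole batch: all-or-nothing.
    return json.loads(zlib.decompress(log[offset]))

produce_transaction(["debit acct A", "credit acct B"])
assert consume(0) == ["debit acct A", "credit acct B"]
assert len(log) == 1   # the whole transaction sits at a single offset
```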
In 0.7, the number of partitions is a per-broker config, and in 0.8
the number of partitions is a per-topic config. In both cases, it does
not change whether or not you use a physical load balancer.
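Concretely, the difference looks roughly like this (the 0.7 property name is from that era; the 0.8 note is a hedged sketch of how the topic-level override works):

```properties
# 0.7: server.properties -- partitions per topic is a broker-wide setting
num.partitions=4

# 0.8: num.partitions is only the broker's default; each topic can
# override it when the topic is created via the admin tooling.
```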
Thanks,
Neha
On Wed, Nov 28, 2012 at 12:08 PM, Riju Kallivalappil
wrote:
> When using a
When using a physical LB on the producer side for Kafka, is it possible to
have more than one partition per topic in each broker?
Thanks,
Riju
All,
We're having an issue with Zookeeper, which has nothing to do with Kafka, but
my consumers don't appear to be attempting to connect with the two nodes that
are up. I specify my zk.connect as such: host1:2181,host2:2181,host3:2181.
Is this correct? Should this work? I didn't see anything
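For reference, as I understand the docs, the multi-host form is correct, and an optional chroot path may be appended once after the last host:

```properties
# The client tries the listed ZooKeeper hosts in turn
zk.connect=host1:2181,host2:2181,host3:2181
# with an optional chroot, appended once after the last host:
# zk.connect=host1:2181,host2:2181,host3:2181/kafka
```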
Hi.
In the kafka broker (version 0.7.0) log I see occasionally following error
message
FATAL Halting due to unrecoverable I/O error while handling producer request:
Unexpected end of
ZLIB input stream (kafka.server.KafkaRequestHandlers)
java.io.EOFException: Unexpected end of ZLIB input stream
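"Unexpected end of ZLIB input stream" means the broker hit a truncated compressed payload mid-decompression, for example from a partially written or torn request. A toy Python reproduction of the underlying failure mode (illustration only, not broker code):

```python
import zlib

payload = zlib.compress(b"a compressed message set")
truncated = payload[:-4]   # simulate a partial write / torn request

try:
    zlib.decompress(truncated)
except zlib.error as e:
    # Python's analogue of the broker's "unexpected end of ZLIB input"
    print("decompression failed:", e)
```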
The clientid is used to identify a particular client application. This
is used by the server side request logging to identify the client
sending a particular request. The clientid is also used to give
meaningful names to the mbeans for producer/consumer clients.
Also, there are 2 ways to send the
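For reference, the client id is just a config property on the client; a hedged 0.8-era sketch (the exact property spelling varied by version, e.g. clientid vs. client.id, and the value here is made up):

```properties
# producer.properties or consumer.properties (0.8-era sketch)
# Shows up in broker request logs and in the client's mbean names.
client.id=billing-pipeline-consumer
```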
It seems that Apache infra already moved the Kafka repo to the top level. The
following is the new svn location for the 0.8 branch.
https://svn.apache.org/repos/asf/kafka/branches/0.8
Thanks,
Jun
Hi,
It seems that the Kafka svn repository suddenly disappeared today. We
officially graduated from incubator last week, but haven't filed infra
tickets to move our repository yet. Is our repository automatically moved
to somewhere?
svn ls https://svn.apache.org/repos/asf/incubator/kafka
svn: URL 'ht
The offset now begins at 0 and increases sequentially for each partition.
The offset is identical across all replicas of that partition on different
brokers, but amongst different partitions the offsets are independent (as
before). The offset of a committed message is unique within that
topic/parti
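The offset semantics above can be sketched with a toy model (pure illustration, not Kafka code): each partition hands out its own sequential offsets, independent of every other partition.

```python
# Toy model of 0.8 offset semantics: offsets are sequential within a
# partition and independent across partitions. (Illustration only.)
class Partition:
    def __init__(self):
        self.log = []           # list of messages; index == offset

    def append(self, msg):
        offset = len(self.log)  # next sequential offset in this partition
        self.log.append(msg)
        return offset

topic = {0: Partition(), 1: Partition()}
assert topic[0].append("a") == 0
assert topic[0].append("b") == 1   # grows sequentially within partition 0
assert topic[1].append("x") == 0   # partition 1 starts at 0 independently
```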
To use SimpleConsumer, you need to send TopicMetadataRequest (available in
SimpleConsumer) to figure out the leader of each partition before making
the fetch requests.
In both 0.7 and 0.8, a fetch request fetches data starting at the provided
offset. In 0.8, the offset is a sequential and ever-growing
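The metadata-then-fetch flow above can be sketched as a toy model (pure illustration; the real 0.8 client sends a TopicMetadataRequest over the wire): ask for the topic's metadata, find the partition's leader, then fetch from that leader.

```python
# Toy model of the SimpleConsumer flow: metadata lookup, then fetch
# from the partition leader. (Illustration only, not Kafka code.)
metadata = {                 # (topic, partition) -> leader broker id
    ("events", 0): 1,
    ("events", 1): 2,
}
brokers = {1: ["m0", "m1"], 2: ["x0"]}   # broker id -> partition log

def fetch(topic, partition, offset):
    leader = metadata[(topic, partition)]   # step 1: find the leader
    return brokers[leader][offset:]         # step 2: fetch from the leader

assert fetch("events", 1, 0) == ["x0"]
```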
Hi Jun,
Sorry, neither the missing partition-0 leader nor all those WARN messages
have been reproducible. Tried several times this morning.
I'll be starting from a green-field cluster again this afternoon so I'll
keep an eye out for it happening again.
Thanks,
Chris
On Wed, Nov 28, 2012 at 12:08 PM, Jun
Chris,
Not sure what happened to the WARN logging that you saw. Is that easily
reproducible? As for log4j, you just need to change log4j.properties. You
can find out on the web how to configure a rolling log file.
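A minimal log4j.properties sketch of a rolling file appender (log4j 1.x syntax; the path and sizes here are illustrative):

```properties
log4j.rootLogger=INFO, R
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=/var/log/kafka/server.log
log4j.appender.R.MaxFileSize=100MB
log4j.appender.R.MaxBackupIndex=10
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%d %p %c - %m%n
```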
Thanks,
Jun
On Wed, Nov 28, 2012 at 5:10 AM, Chris Curtin wrote:
> Hi Jun,
>
> N
Hi,
First, thanks for 0.8.0! I'm really impressed with the redundancy
and the simplification of the producer and consumer models.
I've upgraded my consumers from 0.7.2 to 0.8.0 and have some questions.
I am using the Simple Consumer since I need to support replay of messages
at request from the clien
You can find the information at
http://incubator.apache.org/kafka/design.html
Look for consumer registration algorithm and consumer rebalancing algorithm.
Thanks,
Jun
On Wed, Nov 28, 2012 at 7:13 AM, S Ahmed wrote:
> Can someone go over how a consumer goes about reading from a broker?
>
> e
If you read from offset x last, what information can you get regarding how
many messages are left to process?
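Assuming 0.8's sequential offsets, the remaining-message count is simple arithmetic: if the last message you read had offset x and the partition's log end offset (the offset the next write will get) is log_end, the lag is log_end - x - 1. A quick sketch:

```python
# Remaining messages in a partition, assuming sequential 0.8 offsets.
# log_end_offset is the offset the next produced message will receive.
def remaining(last_read_offset, log_end_offset):
    return log_end_offset - last_read_offset - 1

assert remaining(4, 10) == 5   # offsets 5..9 are still unread
```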
On Wed, Nov 28, 2012 at 10:13 AM, S Ahmed wrote:
> Can someone go over how a consumer goes about reading from a broker?
>
> example:
>
> 1. connect to zookeeper, get information on the
Hi Jun,
No, all 9 brokers are up and when I look at the files in /opt/kafka-[]-logs
there is data for partition 0 of that topic on 3 different brokers.
After confirming this was still happening this morning, I bounced all the
brokers and on restart one of them took over primary on partition 0. No