Hi Manikumar,
No, we are not using compression on our topics. I will try out Todd Palino’s
suggestion regarding the offsets topic.
Thanks
Rakesh
On 27/04/2016 23:34, "Manikumar Reddy" wrote:
>Hi,
>
> Are you enabling log compaction on a topic with compressed messages?
> If yes, then that
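For anyone who wants to check this on their own topics, here is a hedged
kafka-python sketch (it assumes a recent kafka-python release that ships
KafkaAdminClient; the broker address and topic name are placeholders). It
just prints the topic configuration, where cleanup.policy and
compression.type would show whether compaction and compression are both in
play:

    from kafka.admin import KafkaAdminClient, ConfigResource, ConfigResourceType

    # Placeholder broker and topic names; requires a recent kafka-python.
    admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
    resource = ConfigResource(ConfigResourceType.TOPIC, "my-topic")
    # Look for cleanup.policy (compaction) and compression.type in the output.
    print(admin.describe_configs([resource]))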
Hi all,
I am using Kafka 0.9 with kafka-python v1.1.1. When I try to create a
SimpleConsumer, it throws the following error:
File "/usr/lib/python2.7/site-packages/kafka/consumer/simple.py", line
130, in __init__
auto_commit_every_t=auto_commit_every_t)
File "/usr/lib/python2.7/site-packages/kafka/con
Hello!
I have many topics that share the same group name.
Will a change on one topic (adding a new consumer, removing a consumer,
adding a new partition) trigger a rebalance of all the other topics that
remained unchanged?
To be clearer: is the rebalance at the group level regardless of the topics, or is
I see from your debug messages that it's the kafka-producer-network-thread that
is doing this, so you might be hitting the same issue I had in
https://www.mail-archive.com/users%40kafka.apache.org/msg18562.html.
I was creating a producer when my application started but if it didn't publish
a mess
Hi Jeff,
Thanks for bringing this up. I think it is a good idea to expose the unit
test utils as a separate jar; for example, here are the current jars we
have in Apache Kafka (* marks the ones that are NOT included in the release):
- kafka-tools.jar
- kafka-examples.jar *
- kaf
I set up a simple Kafka configuration, with one topic and one partition. I
have a Python producer to continuously publish messages to the Kafka server
and a Python consumer to receive messages from the server. Each message is
about 10K bytes, far smaller than socket.request.max.bytes=104857600. Wha
PS: The message dropping occurred intermittently, not all at the end. For
example, it is the 10th, 15th, and 18th messages that are missing. If it were
all at the end, it would be understandable because I'm not using flush() to
force transmitting.
Bo
On Thu, Apr 28, 2016 at 10:15 AM, Bo Xu wrote:
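For reference, a minimal kafka-python sketch (broker and topic names are
placeholders) of flushing before exit, which blocks until buffered messages
are actually transmitted:

    from kafka import KafkaProducer

    # Placeholder broker and topic names.
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    for i in range(20):
        producer.send("my-topic", ("message %d" % i).encode("utf-8"))
    producer.flush()   # block until all buffered records are sent (or fail)
    producer.close()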
I am having an issue where the FetchResponse returns a large value
MessageSetSize, but all of the bytes that follow after are null.
Any ideas what could be causing that?
I am getting a response without an error code, but like I said the remaining
bytes (600 or more) are set to zero.
Heath Ivie
Hi Andrew,
There are no such benchmarks conducted for this purpose yet, but we are
definitely interested in having a benchmark or contributing to existing
ones like the Yahoo! Streams benchmarks, and we are also happy to see and be
involved in those activities.
Guozhang
On Mon, Apr 25, 2016 at 1:5
Hi Ryan,
This is a great question. Kafka Streams currently has the same issue as
Samza: due to its key-based partitioning mechanism, increasing the number of
partitions may invalidate the state stores, since the keyed messages will be
re-routed to different partitions and hence different tasks.
For
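To illustrate the re-routing, here is a rough Python sketch of the shape of
key-based partitioning (Python's built-in hash stands in for Kafka's actual
murmur2 partitioner, and the key is a made-up example):

    # Not Kafka's real partitioner; just the shape of hash(key) % num_partitions.
    def partition_for(key, num_partitions):
        return hash(key) % num_partitions

    key = b"user-42"
    print(partition_for(key, 4))  # where the key lands with 4 partitions
    print(partition_for(key, 8))  # likely a different partition after adding partitions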
Can you try using 0.9.0.1 for both client and broker? I tried replicating
the issue with both on 0.9.0.1 and I could not. I also tried what Phil
suggested, but in my case I see only one metadata refresh. I have seen
issues like this only for a short time when the leader went down. Continuous
occurr
Hey All,
Was wondering what happens in the case of having a cluster where brokers'
server.properties files differ. That is, what happens if Broker1 has
default.replication.factor=3 and Broker2 has default.replication.factor=1.
If I issue a create topic command w/o specifying the replication level,
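One way to sidestep the question for explicitly created topics is to pin the
replication factor at creation time; here is a hedged sketch with
kafka-python's admin client (the broker address, topic name, and counts are
placeholders):

    from kafka.admin import KafkaAdminClient, NewTopic

    # Placeholder broker address, topic name, and counts.
    admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
    topic = NewTopic(name="my-topic", num_partitions=6, replication_factor=3)
    admin.create_topics([topic])  # replication is pinned regardless of broker defaults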
Hi Phil,
I tested my code and it is correct. I do see heartbeats getting missed
sometimes, causing a session timeout for the consumer, where the generation is
marked dead. I see that there are long time windows where there is no
heartbeat, whereas I do commit in between these time windows and there is no
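For illustration, a hedged kafka-python sketch of a poll loop that keeps
heartbeats flowing; with the 0.9-era consumer, heartbeats are only sent while
the client is actively polling, so long gaps between poll() calls can exceed
the session timeout. The topic, group, broker, and timeout values below are
placeholders:

    from kafka import KafkaConsumer

    # Placeholder broker, topic, group, and timeout values.
    consumer = KafkaConsumer(
        "my-topic",
        bootstrap_servers="localhost:9092",
        group_id="my-group",
        enable_auto_commit=False,
        session_timeout_ms=30000,
    )
    while True:
        records = consumer.poll(timeout_ms=1000)  # keep polling regularly
        for tp, msgs in records.items():
            for msg in msgs:
                pass  # process quickly, or hand off to another thread
        consumer.commit()  # manual commit between polls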
Hello Buvana,
Could you show me the command line you used to produce the text to Kafka as
input? Also which version of Kafka are you using for the broker?
Guozhang
On Wed, Apr 27, 2016 at 12:07 PM, Ramanan, Buvana (Nokia - US) <
buvana.rama...@nokia.com> wrote:
> Hello,
>
> I am trying to execu
Hello, Robert.
I upgraded to 0.9.0.1, and (after baking for a day and a half) confirm that the
issue is now resolved. KAFKA-2978 is likely the culprit.
Thanks,
- Alex
-Original Message-
From: Underwood, Robert [mailto:robert.underw...@inin.com]
Sent: Tuesday, April 26, 2016 2:51 PM
To
Ok, I think I found the cause of the problem. The default value
of max_in_flight_requests_per_connection is 5 in the Python producer. This
turns out to be too small for my environment and application. When this value is
reached and the producer tries several times and still fails, the message
is dropped. And
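For anyone hitting the same symptom, a hedged kafka-python sketch of raising
these producer settings (the broker, topic, and the particular numbers are
placeholders; whether these values are right depends on the environment):

    from kafka import KafkaProducer

    # Placeholder broker, topic, and tuning values.
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        retries=5,                                 # retry failed sends instead of dropping
        max_in_flight_requests_per_connection=20,  # raised from the default of 5
    )
    producer.send("my-topic", b"payload")
    producer.flush()  # block until buffered sends complete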
Hi Neha,
Thanks for sending out this Survey! Hopefully we got a lot of responses
from the community. Are there any results to share?
Thank you,
Grant
On Wed, Apr 6, 2016 at 4:58 PM, Neha Narkhede wrote:
> Folks,
>
> We'd like to hear from community members about how you are using Kafka
> today