Hi,
I see the broker configuration "max.message.bytes" defaulting to 1,000,000
and the producer configuration "max.request.size" defaulting to 1,048,576.
Why is the broker's default lower than the producer's? If that is the case,
then the producer can send a message that is bigger than what the broker
can receive.
Cou
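A minimal sketch of keeping the two limits consistent (values are
illustrative; note the broker-wide property is message.max.bytes, while
max.message.bytes is the per-topic override):

    # broker (server.properties)
    message.max.bytes=1000000
    replica.fetch.max.bytes=1048576   # should be >= the message size limit

    # producer
    max.request.size=1000000          # keep at or below the broker limit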
Hi, sorry if my understanding is incorrect.
I am integrating the Kafka producer with an application. When I try to shut
down all Kafka brokers (preparing for the prod env), I notice that the
'send' method blocks.
Is the new producer's metadata fetch not async?
Rendy
hi,
How can I send data from a log file to a Kafka server?
Can I use Kafka with Flume, or is there another way to do it?
Thanks
Kafka with Flume is one way (just use Flume's SpoolingDirectory source with
a Kafka Channel or Kafka Sink).
Also, Kafka itself ships a Log4J appender as part of the project; this will
work if the log is written with log4j.
Gwen
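For the Log4J route, a sketch of the appender configuration (class name as
in the 0.8.x appender; property names may vary by version, and the broker
list and topic here are placeholders):

    log4j.rootLogger=INFO, KAFKA
    log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
    log4j.appender.KAFKA.BrokerList=broker1:9092,broker2:9092
    log4j.appender.KAFKA.Topic=app-logs
    log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
    log4j.appender.KAFKA.layout.ConversionPattern=%d %p %c - %m%n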
On Tue, May 12, 2015 at 12:52 PM, ram kumar wrote:
Good day,
I'm wondering a bit about the effects of changing the log.retention
setting, be it in the configuration or on-the-fly via ZooKeeper. Will it
apply to already existing log segments, or only to new ones?
For example, we have a default of 12 hours; if I change the value to 24
hours in the
Hi,
I would like to start using Kafka. Can I start from 0.9, or is it better to
develop on 0.8.2.1 and then migrate to 0.9?
My plans are to be in production by September.
Will a 0.8.2.1 client (producer/consumer) be able to talk to 0.9 brokers?
Is there any public Maven artifact for Kafka 0.9?
That's right. Send() will first try to get the metadata of a topic, which
is a blocking operation.
On 5/12/15, 2:48 AM, "Rendy Bambang Junior" wrote:
Hi there,
I'm working on a script that fails Kafka v8.2 brokers out of the cluster,
mostly intended for dealing with long-term downtimes such as hardware
failures. The script generates a new partition assignment, moving any
replica on the failed host to other available hosts.
The problem I'm havin
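For context, a generated reassignment plan like the one described is the
JSON fed to kafka-reassign-partitions.sh; the topic name and broker ids
below are made up:

    {"version": 1,
     "partitions": [
       {"topic": "my-topic", "partition": 0, "replicas": [2, 3]},
       {"topic": "my-topic", "partition": 1, "replicas": [3, 4]}
     ]}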
The way it works, I suppose, is that the producer will do a metadata fetch
if the last fetched metadata is stale (the refresh interval has expired) or
if it is not able to send data to a particular broker in its current
metadata (this might happen in some cases, e.g. if the leader moves).
It cannot pr
We are basically using Kafka as a transport mechanism for multi-line log
files.
So, for this we are using single-partition topics (with a replica for good
measure), writing to a multi-broker cluster.
Our producer basically reads a file line by line (as it is being written
to) and publishes each li
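A minimal sketch of such a producer (not the poster's actual code; it uses
the 0.8.2 Java producer, the file path, topic, and broker list are
placeholders, and it reads to EOF rather than truly tailing):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class LogShipper {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
                 BufferedReader reader =
                     new BufferedReader(new FileReader("/var/log/app.log"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // single-partition topic, so broker order matches file order
                    producer.send(new ProducerRecord<>("app-logs", line));
                }
            }
        }
    }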
I'm using this version of Kafka:

  <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.9.2</artifactId>
    <version>0.8.1.1</version>
  </dependency>
I'm using kafka.server.KafkaServer in memory for some integration tests. I
start KafkaServer and use AdminUtils.createTopic(ZkClient, String, Integer,
Integer, Properties) to create a topic.
I then use the foll
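The topic-creation step presumably looks something like this (a sketch
against the 0.8.1 API; the ZooKeeper address and topic name are
placeholders, and note that AdminUtils expects ZooKeeper data written with
kafka.utils.ZKStringSerializer):

    import java.util.Properties;
    import kafka.admin.AdminUtils;
    import kafka.utils.ZKStringSerializer$;
    import org.I0Itec.zkclient.ZkClient;

    ZkClient zkClient = new ZkClient("localhost:2181", 10000, 10000,
            ZKStringSerializer$.MODULE$);
    AdminUtils.createTopic(zkClient, "test-topic",
            1 /* partitions */, 1 /* replication factor */, new Properties());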
I could not follow the reasoning behind blocking the send method when the
metadata is not up to date. Though I see that, as per the design, it
requires the metadata to batch the message into the appropriate
topic-partition queue. Also, if the metadata cannot be updated within the
specified interval, it thro
Hi Scott,
what producer client are you using?
Reordering is possible with async producers in the case of temporary broker
failures combined with request.required.acks != 0 and retries > 0.
Consider the case where a producer has 20 messages in flight to the broker;
out of those messages #
We are using the Java producer API (0.8.2.1, if I am not mistaken). We are
using a producer type of sync, though.
On Tue, May 12, 2015 at 3:50 PM Magnus Edenhill wrote:
I completely agree with Mohit; an application should not have to know or
care about producer implementation internals.
Given a message and its delivery constraints (produce retry count and
timeout), the producer should hide any temporary failures until the message
is successfully delivered, a permanen
Andrew,
The recompression logic didn't change in 0.8.2.1. The broker still takes
all messages in a single request, assigns offsets, and recompresses them
into a single compressed message.
Are you using MirrorMaker to copy data from the 0.8.1 cluster to the 0.8.2
cluster? If so, this may have to d
Hi Jun,
I figured it out this morning and opened
https://issues.apache.org/jira/browse/KAFKA-2189 --
it turned out to be a bug in versions 1.1.1.2 through 1.1.1.6 of
snappy-java that has recently
been fixed (I was very happy to see their new unit test named
"batchingOfWritesShouldNotAffectCompress
The max.request.size effectively caps the largest message the producer
will send, but its actual purpose, as the name implies, is to limit the
size of a request, which could potentially include many messages. This
keeps the producer from sending very large requests to the broker. The
limitatio
Hi, Andrew,
Thanks for finding this out. I marked KAFKA-2189 as a blocker for 0.8.3.
Could you share your experience with snappy 1.1.1.7 in the JIRA once you
have tried it out? If the results look good, we can upgrade the snappy
version in trunk.
Jun
On Tue, May 12, 2015 at 1:23 PM, Olson,Andrew
I am in the process of testing and migrating our prod Kafka from 0.8.1.1 to
0.8.2.1.
I wanted to do a quick check with the community to see if anyone has
observed any issues with writing/reading data to 0.8.2.1 Kafka broker(s)
using an 0.8.1.1 producer and consumer.
Any gotchas to watch for or any concerns?
Send() will only block if the metadata is *not available* for the topic.
It won't block if the metadata is merely stale; the metadata refresh is
async to send(). However, if you send a message to a topic for the first
time, send() will trigger a metadata refresh and block until it has
metadata for tha
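One way to keep that first-message block off the hot path (a sketch; the
Java producer's partitionsFor() forces the metadata fetch up front, and the
topic name is a placeholder):

    // at application startup, before any latency-sensitive send():
    producer.partitionsFor("my-topic"); // blocks once, warms the metadata cache
    // later send() calls to "my-topic" no longer block on metadata
    producer.send(new ProducerRecord<String, String>("my-topic", "payload"));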
Thank you for the clarification.
I think I agree with Mohit. Sometimes blocking on logging is not acceptable
by the nature of the application that uses Kafka.
Yes, it is not blocking while metadata is still available, but the
application will be blocked once the metadata has expired.
It might be handled by the application
Hi,
I'm wondering: when you call kafka.javaapi.Producer.send() with a list of
messages, and also have compression on (snappy in this case), how does it
decide how many messages to put together as one?
The reason I'm asking is that even though my messages are only 70kb
uncompressed, the broker comp
Thanks, I get the difference now. This assumes the request to be sent
contains more than one message, doesn't it?
Rendy
On May 13, 2015 3:55 AM, "Ewen Cheslack-Postava" wrote:
Hi,
For monitoring purposes, is there a way to find the partitions of a topic
that are assigned to the consumers in a group? We are using the high-level
consumer and the offsets are stored in Kafka.
I tried searching for methods in ZkUtils but could not find anything that
gives this information. Any poi
Perhaps you could try the ConsumerOffsetChecker. The "Owner" field might be
what you want.
Aditya
From: Bharath Srinivasan [bharath...@gmail.com]
Sent: Tuesday, May 12, 2015 7:29 PM
To: users@kafka.apache.org
Subject: Kafka 0.8.2.1 - Listing partitions
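For reference, the checker can be invoked like this (the ZooKeeper address
and group name are placeholders); the Owner column shows which consumer
holds each partition:

    bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
      --zookeeper localhost:2181 --group my-group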
Well, there is no separate tool available for importing and exporting
offsets from Kafka that would also provide this functionality. We are
working on it.
You can try the ConsumerOffsetChecker as Aditya mentioned.
Thanks,
Mayuresh
On Tue, May 12, 2015 at 8:11 PM, Aditya Auradkar <aaurad...@li
Well, the batch size is decided by the value set for the property
"batch.size":
"The producer will attempt to batch records together into fewer requests
whenever multiple records are being sent to the same partition. This helps
performance on both the client and the server. This configuration
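(Note this quote describes the new Java producer's batching; as Jun points
out below, the old kafka.javaapi.Producer batches differently. A sketch of
the relevant new-producer settings, with illustrative values:)

    Properties props = new Properties();
    props.put("batch.size", "16384"); // max bytes batched per partition
    props.put("linger.ms", "5");      // wait up to 5 ms to fill a batch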
As we need to do this programmatically, I tried stripping out the relevant
parts from ConsumerOffsetChecker. It worked.
Thanks for the suggestions.
On Tue, May 12, 2015 at 8:58 PM, Mayuresh Gharat wrote:
Oops. I originally sent this to the dev list but meant to send it here.
Hi,
>
> When using Samza 0.9.0, which uses the new Java producer client, with
> snappy enabled, I see messages getting corrupted on the client side. It
> never happens with the old producer, and it never happens with lz4, gzip,
If you are using the new Java producer, reordering could happen if
max.in.flight.requests.per.connection is set to > 1 and retries are
enabled - which are both default settings.
Can you set max.in.flight.requests.per.connection to 1 and see if this
solves the issue?
Jiangjie (Becket) Qin
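A sketch of the suggested settings (trading pipelining throughput for
strict ordering; values are illustrative):

    Properties props = new Properties();
    // only one in-flight request per broker, so retries cannot reorder
    props.put("max.in.flight.requests.per.connection", "1");
    props.put("retries", "3");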
The application will not block on each metadata refresh or when metadata
is expired.
The application will only be blocked when:
1. It sends the first message to a topic (only for that single message), or
2. The topic has been deleted from the broker, and thus the refreshed
metadata loses the topic info (which is pretty rar
Mayuresh, this is about the old producer, not the new Java producer.
Jamie,
In the old producer, if you use sync mode, the list of messages will be
sent as a batch. On the other hand, if you are using async mode, the
messages are just put into the queue and batched with other messages.
Notice
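A sketch of the sync-mode batching Jun describes, using the old producer
API (the topic, broker list, and messages are placeholders):

    import java.util.Arrays;
    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    Properties props = new Properties();
    props.put("metadata.broker.list", "broker1:9092");
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    props.put("producer.type", "sync"); // the list below goes out as one batch

    Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
    producer.send(Arrays.asList(
            new KeyedMessage<String, String>("my-topic", "line 1"),
            new KeyedMessage<String, String>("my-topic", "line 2")));
    producer.close();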
Hi,
Any updates on this issue? I keep seeing it happen over and over again.
On Thu, May 7, 2015 at 7:28 PM, tao xiao wrote:
> Hi team,
>
> I have a 12-node cluster that has 800 topics, each of which has only 1
> partition. I observed that one of the nodes keeps generating
> NotLead