Hi All,
What is the lightest-weight way to check whether the connection to Kafka is
up? Currently I'm having the producer attempt to send a message to a topic,
and then catching a TimeoutException and handling the logic inside the catch
block. This doesn't seem like the right way to do this, though.
I think of it as default.offset :)
Realistically though, changing the configuration name will probably
cause more confusion than leaving a bad name as is.
On Tue, Apr 14, 2015 at 11:10 AM, James Cheng wrote:
> "What to do when there is no initial offset in ZooKeeper or if an offset is
> out of range" ...
> I looked at the documents of Kafka and I see that there is no way a
> consumer instance can read specific messages from a partition.
>
With Kafka you can read messages from the beginning multiple times. Since
you say later that you do not have many messages per topic, you can iterate
over the messages from the beginning each time.
I am using the Sarama Go client (https://github.com/Shopify/sarama) with
Kafka 0.8.1 to send messages to the message queue once every 3 seconds, and
this task drives CPU consumption to 130% on a quad-CPU blade. The number
stays this high regardless of the number of partitions/consumers. According
to the results of ...
Hi Victor, two points:
- Based on the backtrace, you are using a very old version of Sarama. You
might have better luck using a more recent stable version.
- Are you setting `MaxBufferTime` in the configuration to 0 or a very small
value? If so, the loop will spin on that timer. Try making this value larger.
Hi Evan,
It's hardcoded:
MaxBufferTime = 1000
MaxBufferedBytes = 1024
How is it versioned? Should I just download the zip file from the project's
GitHub?
On Wed, Apr 15, 2015 at 12:48 PM, Evan Huus wrote:
> Hi Victor, two points:
>
> - Based on the backtrace, you are using a very old version of Sarama. You
> might have better luck using a more recent stable version.
On Wed, Apr 15, 2015 at 1:33 PM, Victor L wrote:
> Hi Evan,
> It's hardcoded:
> MaxBufferTime = 1000
>
Without any units this defaults to nanoseconds, which means your timer is
spinning every microsecond. You probably mean 1000 * time.Millisecond?
> MaxBufferedBytes = 1024
>
> How is it versioned? Should I just download the zip file from the project's
> GitHub?
I inherited this code; it was just set to 1000. What would be a reasonable
time: 1000 * time.Millisecond?
On Wed, Apr 15, 2015 at 1:41 PM, Evan Huus wrote:
> On Wed, Apr 15, 2015 at 1:33 PM, Victor L wrote:
>
> > Hi Evan,
> > It's hardcoded:
> > MaxBufferTime = 1000
> >
>
> Without any units this defaults to nanoseconds, which means your timer is
> spinning every microsecond. You probably mean 1000 * time.Millisecond?
On 04/15/15 09:31, Manoj Khangaonkar wrote:
>> I looked at the documents of Kafka and I see that there is no way a
>> consumer instance can read specific messages from a partition.
>>
>
> With Kafka you can read messages from the beginning multiple times. Since
> you say later that you do not have many messages per topic, you can iterate
> over the messages from the beginning each time.
The field is a time.Duration (https://golang.org/pkg/time/#Duration) and it
is the maximum duration to buffer messages before triggering a flush to the
broker. The default was 1 millisecond (this value does not even exist in
the most recent release). Setting it to higher values trades off latency
for throughput.
I see the following error in the consumer:
Unable to Receive Message: kafka server: The requested offset is outside the
range of offsets maintained by the server for the given topic/partition
On Wed, Apr 15, 2015 at 2:09 PM, Victor L wrote:
> I set it to 10ms and it did fix the CPU consumption problem, but I don't
> see any messages coming up in the consumer queue
I set it to 10ms and it did fix the CPU consumption problem, but I don't see
any messages coming up in the consumer queue.
On Wed, Apr 15, 2015 at 1:57 PM, Evan Huus wrote:
> The field is a time.Duration (https://golang.org/pkg/time/#Duration) and it
> is the maximum duration to buffer messages before triggering a flush to the
> broker.
Check the offset you're asking for, and check what offsets actually exist
on your cluster. It sounds like they don't line up.
On Wed, Apr 15, 2015 at 2:10 PM, Victor L wrote:
> I see the following error in consumer;
> Unable to Receive Message: kafka server: The requested offset is outside the
> range of offsets maintained by the server for the given topic/partition
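One way to act on that advice is to clamp the requested offset against the range the broker still retains before consuming. A small Go sketch; `oldest` and `newest` stand in for values you would fetch with an offset request, and the function name is illustrative, not part of any client API:

```go
package main

import "fmt"

// clampOffset returns the requested offset if it lies inside
// [oldest, newest]; otherwise it falls back to the oldest retained
// offset and reports that a reset happened.
func clampOffset(requested, oldest, newest int64) (int64, bool) {
	if requested >= oldest && requested <= newest {
		return requested, false
	}
	return oldest, true
}

func main() {
	// A request far past the newest retained offset gets reset.
	off, reset := clampOffset(200, 0, 123)
	fmt.Println(off, reset) // 0 true
}
```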
I meant you can read messages multiple times if you want to.
Yes, you would store the offset and request reading from that offset with
the Simple Consumer API to implement once-and-only-once delivery.
regards
On Wed, Apr 15, 2015 at 10:55 AM, Pete Wright wrote:
>
> On 04/15/15 09:31, Manoj Khangaonkar wrote:
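The store-the-offset pattern Manoj describes can be sketched without any Kafka client at all (a toy in-memory version; saveOffset/loadOffset are stand-ins for a real offset store such as ZooKeeper or a database):

```go
package main

import "fmt"

var stored int64 // stand-in for a durable offset store

func saveOffset(o int64) { stored = o }
func loadOffset() int64  { return stored }

// consume processes every message at or after the stored offset and
// commits the next offset only after processing succeeds, so a restart
// resumes exactly where the previous run stopped.
func consume(log []string) {
	for off := loadOffset(); off < int64(len(log)); off++ {
		fmt.Println("processing", log[off])
		saveOffset(off + 1)
	}
}

func main() {
	log := []string{"m0", "m1", "m2", "m3"}
	consume(log[:2]) // first run handles m0, m1
	consume(log)     // "restart" resumes at m2; nothing is reprocessed
}
```

Committing after processing gives at-least-once behavior if a crash lands between the two steps; committing before processing flips that to at-most-once.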
The offset in ZooKeeper is 123 but the consumer is asking for 67375... I
don't understand where the second value is coming from.
On Wed, Apr 15, 2015 at 2:17 PM, Evan Huus wrote:
> Check the offset you're asking for, and check what offsets actually exist
> on your cluster. It sounds like they don't line up
Hi team,
I discovered an issue: when a high-level consumer with the roundrobin
assignment strategy consumes a topic that hasn't been created on the broker,
an NPE is thrown during the partition rebalancing phase. I use Kafka
0.8.2.1.
Here are the steps to reproduce:
1. create a high level consumer
I have a 0.8.1 producer/consumer... I start the producer first, but when I
start the consumer I see the following:
Apr 15 21:26:29 node-1cc1de6d1224 docker[32360]:
INFO|10cf22c161e3|1|consumer|main.go:34|2015/04/15 21:26:29|Starting
ConsumerService
Apr 15 21:26:29 node-1cc1de6d1224 docker[32360]:
INFO|10cf2
Hello Tao,
Do you think the solution to KAFKA-2056 will resolve this issue? It will be
included in the 0.8.3 release.
Guozhang
On Wed, Apr 15, 2015 at 2:21 PM, tao xiao wrote:
> Hi team,
>
> I discovered an issue: when a high-level consumer with the roundrobin
> assignment strategy consumes a topic that hasn't been created on the broker,
> an NPE is thrown during the partition rebalancing phase.
Guozhang,
No, I don't think the patch for KAFKA-2056 would fix this problem. The NPE
is thrown at a line that runs before the fix executes. But I do notice that
the code in trunk does fix the issue by ensuring the size of the map
returned from ctx.consumersForTopic is > 0. So the code in trunk is not
affected.