You can use query_watermark_offsets() to get the high watermark of the topic
partition and use it as the maximum offset.
Regards,
Koushik
-Original Message-
From: Aurelien DROISSART
Sent: Thursday, August 29, 2019 5:32 AM
To: users@kafka.apache.org
Subject: librdkafka : seek() to offset out of range
Hello all,
we have a question about librdkafka
- Kafka version : 2.0.1
- librdkafka version : 0.11.4
We have an application that connects to Kafka using librdkafka.
It needs to consume a topic starting at a specific offset value stored in the
application.
It does so by calling rd_kafka_seek()
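Following the watermark suggestion above, one way to guard a stored offset before seeking is to clamp it into the partition's currently valid range. This is an illustrative sketch (the helper name `clamp_to_watermarks` is ours, not librdkafka's); with confluent-kafka the bounds would come from `Consumer.get_watermark_offsets()`, with the C API from `rd_kafka_query_watermark_offsets()`:

```python
def clamp_to_watermarks(stored_offset, low, high):
    """Clamp an application-stored offset into the broker's valid range.

    `low`/`high` are the partition watermarks; `high` is the offset one
    past the last message. Seeking outside [low, high] is what triggers
    the out-of-range behaviour the question describes.
    """
    if stored_offset < low:
        return low    # old data was deleted by retention; resume at the start
    if stored_offset > high:
        return high   # beyond the end; resume at the next produced message
    return stored_offset
```

The clamped value can then be passed to the seek call instead of the raw stored offset.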
Hi,
We are using the librdkafka library, version 0.11.0, and calling the ListOffsets
API with a timestamp against a Kafka 0.10.2 server installed on a Windows
machine.
This request returns an error code, 43 - INVALID_REQUEST.
We have other local installations of Kafka 0.10.2 (also on Windows)
and are
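For context, timestamp-based ListOffsets lookups require the broker's message format to be at least 0.10.0 (`log.message.format.version`); older formats carry no per-message timestamps and brokers reject the request (newer brokers report UNSUPPORTED_FOR_MESSAGE_FORMAT). A minimal sketch of the per-partition request such a lookup sends; the helper name is hypothetical, and with confluent-kafka the entries would be `TopicPartition` objects passed to `Consumer.offsets_for_times()`:

```python
def build_timestamp_lookup(topic, partitions, timestamp_ms):
    # One (topic, partition, timestamp) entry per queried partition; the
    # broker answers with the earliest offset whose timestamp is >= the
    # requested timestamp, for each partition.
    return [(topic, p, timestamp_ms) for p in partitions]
```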
Kafka itself. But we need to use a Python API to monitor the consumer
offsets. We need a Python API, like the kafka-consumer-groups.sh tool, that
can list all consumer groups and show their offsets one by one.
We plan to use confluent-kafka-python/librdkafka as our option. But we can
not find an API to list
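There is no single call for this in older clients; kafka-consumer-groups.sh-style output has to be assembled from committed offsets and watermarks. A hedged sketch of the lag computation in pure Python — with confluent-kafka the inputs would come from `Consumer.committed()` and `Consumer.get_watermark_offsets()`, and newer releases add AdminClient helpers for listing the groups themselves:

```python
def group_lag(committed, high_watermarks):
    """Compute per-partition consumer lag for one group.

    `committed` and `high_watermarks` are dicts keyed by
    (topic, partition). Mirrors the LAG column of
    kafka-consumer-groups.sh; a committed offset of None (nothing
    committed yet) is reported as unknown lag (None).
    """
    lag = {}
    for tp, high in high_watermarks.items():
        off = committed.get(tp)
        lag[tp] = None if off is None else max(high - off, 0)
    return lag
```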
processes, we are concerned that short poll loops will cause an
overconsumption of CPU capacity. We are hoping we might have missed some
configuration parameter, or that we have some issue with our environment
that we can find and solve.
We are using both the Java client and librdkafka and see similar CPU issues
in both clients.
We have looked at the recommendations from:
https://github.com/edenhill/lib
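One common cause of this CPU symptom is polling with a zero timeout. A generic sketch of the alternative (our own helper names; `poll` stands in for confluent-kafka's `Consumer.poll(timeout=...)`, which blocks inside librdkafka rather than spinning):

```python
def run_poll_loop(poll, handle, timeout_s=1.0, stop=lambda: False):
    # A blocking timeout makes the client wait inside poll() for new
    # messages; poll(0) turns this loop into a CPU-bound spin whenever
    # the topic is idle.
    while not stop():
        msg = poll(timeout_s)
        if msg is None:
            continue  # timeout expired with no message
        handle(msg)
```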
Hey Dave,
Yes, that's the general plan.
Regards,
Magnus
2016-09-29 19:33 GMT+02:00 Tauzell, Dave :
Does anybody know if the librdkafka releases are kept in step with kafka
releases?
-Dave
java.lang.Thread.run(Thread.java:744)

Thank you.
Regards,
Mazhar Shaikh

On Fri, Aug 19, 2016 at 8:05 PM, Jun Rao wrote:
> Mazhar,
> Let's first confirm if this is indeed a bug. As I mentioned earlier, it's
> po
> I just want to know, as when c
Hi Jun,
In my earlier runs, I had enabled the delivery report (with and without offset
report) facility provided by librdkafka.
The producer received a successful delivery report for all of the messages
sent, even though the messages were lost.
As you mentioned, the producer has nothing to do with this loss of messages.
Mazhar,
With acks=1, whether you lose messages or not is not deterministic. It
depends on when the broker receives/acks a message, when the follower
fetches the data, and when the broker fails. So, it's possible that you got
lucky in one version and unlucky in another.
Thanks,
Jun
On Thu, Aug 18,
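Jun's point can be made concrete on the producer side. A sketch of the settings (broker address is a placeholder) that trade latency for the no-loss-after-leader-failure guarantee he describes:

```python
# With acks=all the leader only acks once every in-sync replica has the
# message, so an acked message survives a leader failure; acks=1 acks as
# soon as the leader alone has it, which is the race Jun describes.
# Broker-side, min.insync.replicas > 1 is also needed for acks=all to
# guarantee more than one copy.
safe_producer_conf = {
    "bootstrap.servers": "broker1:9092",  # placeholder address
    "acks": "all",
}
```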
[topic1,86] -> List(5, 3), [topic1,72] -> List(5, 3), [topic1,79] -> List(5, 3),
[topic1,82] -> List(5, 3), [topic1,92] -> List(1, 0), [topic1,95] -> List(1, 0),
[topic1,69] -> List(1, 0), [topic1,93] -> List(1, 0), [topic1,70] -> List(1, 0)
[2016-08-17 13:09:50,295] DEBUG [Controller 2]: topics not in preferred
replica Map() (kafka.controller.KafkaController)
[2016-08-17 13:09:50,295] TRACE [Controller 2]: leader imbalance ratio for
broker 0 is 0.00 (kafka.controller.KafkaController)
[2016-08-17 13:10:43,383] DEBUG Sending MetadataRequest to
Brokers:ArrayBuffer(0, 5, 1, 2, 3, 4) for TopicAndPartitions:Set([topic1,67],
[topic1,95]) (kafka.controller.IsrChangeNotificationListener)
[2016-08-17 13:10:43,394] DEBUG [IsrChangeNotificationListener] Fired!!!
(kafka.controller.IsrChangeNotificationListener)
Are you using acks=1 or acks=all in the producer? Only the latter
guarantees acked messages won't be lost after leader failure.
Thanks,
Jun
On Wed, Aug 10, 2016 at 11:41 PM, Mazhar Shaikh
wrote:
> Hi Kafka Team,
>
> I'm using kafka (kafka_2.11-0.9.0.1) with librdka
leader imbalance ratio for
broker 4 is 0.00 (kafka.controller.KafkaController)
[2016-08-17 13:10:37,278] DEBUG [IsrChangeNotificationListener] Fired!!!
(kafka.controller.IsrChangeNotificationListener)
[2016-08-17 13:10:37,292] DEBUG Sending MetadataRequest to
Brokers:ArrayBuffer(0, 5, 1, 2, 3, 4) for
TopicAndPartitions
Hi Kafka Team,
I'm using Kafka (kafka_2.11-0.9.0.1) with the librdkafka (0.8.1) API for the
producer.
During a run of 2 hours, I noticed that the total number of messages acked by
the librdkafka delivery report is greater than the max offset of a partition
on the Kafka broker.
I'm running the Kafka broker w
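When chasing this kind of discrepancy it helps to tally acks and failures per partition in the delivery callback and compare the acked count against the broker's high watermark afterwards. A sketch with a simplified callback signature (the real confluent-kafka callback receives `(err, msg)` where `msg` carries the topic and partition; the bare `err/topic/partition` form here is for illustration only):

```python
from collections import defaultdict

class DeliveryCounter:
    """Tally delivery reports so the acked count can later be compared
    against the partition's high watermark on the broker."""

    def __init__(self):
        self.acked = defaultdict(int)
        self.failed = defaultdict(int)

    def on_delivery(self, err, topic, partition):
        # err is None on success; confluent-kafka would pass a KafkaError
        # and a Message object instead of the bare topic/partition.
        if err is None:
            self.acked[(topic, partition)] += 1
        else:
            self.failed[(topic, partition)] += 1
```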