You should use a ConsumerRebalanceListener to check whether a rebalance
happens. Seeing the same assigned partitions after each poll doesn't mean
there was no rebalance.
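For example, something like this (untested sketch; "consumer" is your
existing KafkaConsumer<String, String> and "my-topic" is a placeholder):

import java.util.Collection;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

consumer.subscribe(Collections.singletonList("my-topic"),
    new ConsumerRebalanceListener() {
        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            // Fires before a rebalance takes these partitions away.
            System.out.println("Rebalance: revoked " + partitions);
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            // Fires after a rebalance, even if the same partitions come back.
            System.out.println("Rebalance: assigned " + partitions);
        }
    });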
Steve
On Wed, Aug 1, 2018, 8:57 AM Manoj Khangaonkar
wrote:
> Hi,
>
> I am implementing flow control by
>
> (1) using the pause(
Is `delete.topic.enable` set to `true`? It's a broker configuration.
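If it is not set, add it to server.properties on each broker and restart
(since 1.0.0 the default is already true; older brokers default to false):

  # server.properties
  delete.topic.enable=true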
-Matthias
On 7/31/18 8:57 AM, Pulkit Manchanda wrote:
> Hi All,
>
> I want to create and delete Kafka topics at runtime in my application.
> I followed a few projects on GitHub, like
> https://github.com/simplesteph/kafka-0.11-examples/blob/master/src/main/scala/au/com/simplesteph/kafka/kafka0_11/demo/KafkaAdminClientDemo.scala
Hi,
I am implementing flow control by
(1) using the pause(partitions) method on the consumer to stop
consumer.poll from returning messages.
(2) using the resume(partitions) method on the consumer to let
consumer.poll return messages
This works well for a while. Several sets of pause-resume wor
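For reference, the calls I am using are roughly (simplified sketch;
"consumer" is the running KafkaConsumer):

  // When the downstream sink is backed up:
  consumer.pause(consumer.assignment());   // poll() keeps heartbeating
                                           // but returns no records
  // When it has caught up again:
  consumer.resume(consumer.paused());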
No. Transaction markers are not exposed.
Why would you need them? They are considered implementation details.
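If the goal is just to consume transactional data correctly, the consumer
handles the markers for you. A minimal sketch (bootstrap servers, group id,
and serdes below are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
// Only records from committed transactions are returned; aborted records
// and the commit/abort markers are filtered out by the client.
props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);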
-Matthias
On 7/27/18 5:58 AM, ma...@kafkatool.com wrote:
> Is there any way for a KafkaConsumer to view/get the transactional
> marker messages?
>
> --
> Best regards,
> Mark
Hello Francesco,
Streams' auto-created repartition topics get their num.partitions from the
number of tasks of the writing sub-topology, which in turn is determined by
the source topic's num.partitions. There are some proposals about changing
this coupling, but they are not implemented yet:
https://cwik
Not the best idea, but you can open another listener that is not SASL
enabled until the patch is released.
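Roughly, on each broker, something like this (hosts and ports are
placeholders; keep the plaintext listener on a trusted network only):

  listeners=SASL_SSL://0.0.0.0:9093,PLAINTEXT://0.0.0.0:9092
  advertised.listeners=SASL_SSL://broker1.example.com:9093,PLAINTEXT://broker1.example.com:9092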
On Tue, Jul 31, 2018 at 8:17 AM, Pierre Coquentin <
pierre.coquen...@gmail.com> wrote:
> thanks, too bad :(
>
> On Tue, Jul 31, 2018 at 4:44 PM Gabriele Paggi
> wrote:
>
> > Hi Pierre,
> >
Hi All,
I want to create and delete Kafka topics at runtime in my application.
I followed a few projects on GitHub, like
https://github.com/simplesteph/kafka-0.11-examples/blob/master/src/main/scala/au/com/simplesteph/kafka/kafka0_11/demo/KafkaAdminClientDemo.scala
But to no avail. The code runs
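Roughly, the kind of code I am trying looks like this (topic name,
partition count, and replication factor here are just placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicLifecycleDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Create a topic with 3 partitions and replication factor 1.
            admin.createTopics(Collections.singleton(
                new NewTopic("my-topic", 3, (short) 1))).all().get();
            // Delete it again (needs delete.topic.enable=true on the brokers).
            admin.deleteTopics(Collections.singleton("my-topic")).all().get();
        }
    }
}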
thanks, too bad :(
On Tue, Jul 31, 2018 at 4:44 PM Gabriele Paggi
wrote:
> Hi Pierre,
>
> Indeed, it doesn't support SASL_SSL.
> The issue is being worked on here:
> https://issues.apache.org/jira/browse/KAFKA-5235
>
> On Tue, Jul 31, 2018 at 2:19 PM Pierre Coquentin <
> pierre.coquen...@gmail.c
Hi Pierre,
Indeed, it doesn't support SASL_SSL.
The issue is being worked on here:
https://issues.apache.org/jira/browse/KAFKA-5235
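Until that is fixed, one workaround is a few lines with the Java consumer,
which does accept your SASL_SSL client settings (broker, topic, and
security values below are placeholders):

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9093");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
props.put("security.protocol", "SASL_SSL");  // plus your usual sasl.* / ssl.* settings

try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
    List<TopicPartition> partitions = new ArrayList<>();
    for (PartitionInfo p : consumer.partitionsFor("my-topic")) {
        partitions.add(new TopicPartition(p.topic(), p.partition()));
    }
    System.out.println("earliest: " + consumer.beginningOffsets(partitions));
    System.out.println("latest:   " + consumer.endOffsets(partitions));
}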
On Tue, Jul 31, 2018 at 2:19 PM Pierre Coquentin
wrote:
> Hi
>
> Thanks for the answer but GetOffsetShell doesn't work with SSL and SASL
> enabled. It doesn't seem
Hi,
I have a question about topic repartitioning in Kafka Streams using the
`through` function.
Let me explain the context briefly:
I have a single topic A with two partitions:
A:1:9
A:0:0
I am trying to create a repartitioned topic using the Kafka Streams API:
builder.stream("A").map<>((key, val) => KeyV
Hi
I'm new to Kafka and am looking to set up a 3-node cluster in our data
centers to handle local log traffic. Currently, our traffic is sent via
UDP so we can tolerate some message loss. What I can't tolerate is a
complete loss of the Kafka cluster because we would quickly fill up our UDP
buffe
Hi
Thanks for the answer, but GetOffsetShell doesn't work with SSL and SASL
enabled. It doesn't seem possible to provide a consumer.config file like we
do for kafka-console-consumer.sh.
On Tue, Jul 31, 2018 at 12:19 PM Gabriele Paggi
wrote:
> Hi Pierre,
>
> You can use kafka.tools.GetOffsetShell, w
Hi Pierre,
You can use kafka.tools.GetOffsetShell, which will return the earliest
offset for each partition in a given topic.
If you want to see the latest available ones, replace "--time -2" with
"--time -1".
gpaggi@kafkalog001:~$ kafka-run-class.sh kafka.tools.GetOffsetShell
--broker-list $(hostn
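The full invocation is roughly (broker and topic names are placeholders):

  kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list broker1:9092 \
    --topic my-topic --time -2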
Hi
How do we get the starting offset of a partition from a topic when we have
a retention policy configured?
I've checked the shell scripts kafka-topics.sh and kafka-console-consumer.sh
but found nothing so far. Do we have to implement our own solution to get
this information?
It could be nice in kafk