Hi Akshay,
In regard to your 3rd question (and indirectly to your 2nd question),
instead of having different consumer groups, why not just run multiple
consumers in the same group? That would ensure that each partition in the
topic is read by only one consumer in the group. You can even assign the
partitions manually if you want to.
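If it helps, here is a minimal sketch of manual assignment using the plain Java
consumer API (the topic name "documents", the partition number, and the group id
are placeholders I made up for illustration):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AssignedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // group.id is still used for committing offsets, even with assign()
        props.put("group.id", "doc-processors");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // assign() pins this consumer to one specific partition instead of
        // letting the group coordinator spread partitions across the group.
        consumer.assign(Collections.singletonList(new TopicPartition("documents", 0)));

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
            }
        }
    }
}

If you don't need that level of control, subscribe() with the same group.id in
every instance lets Kafka balance the partitions for you.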
Use case: we process documents from a variety of sources. We want to
process some of these sources in priority order, but we don't necessarily want
to finish all of the higher priority sources before moving to the lower priority
ones, because the volume of higher priority sources can be extremely high.
We hav
You can read more here:
https://docs.confluent.io/current/connect/connect-jdbc/docs/source_connector.html
Hope that helps.
Thanks,
Subhash
Sent from my iPhone
> On Jun 19, 2018, at 11:12 AM, Pranav Shah wrote:
>
> Hello,
>
> Is there a connector available for Kafka SQL Server CDC?
>
> Thanks
Hi Pranav,
Yes, there is a JDBC source connector that you can use to achieve CDC from SQL
Server into Kafka using Kafka Connect.
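As a rough sketch of what the connector config might look like (the connection
URL, table, column names, and topic prefix below are made-up placeholders, and
the exact properties you need will depend on your tables):

name=sqlserver-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:sqlserver://localhost:1433;databaseName=mydb;user=kafka;password=secret
table.whitelist=dbo.orders
mode=timestamp+incrementing
timestamp.column.name=last_modified
incrementing.column.name=id
topic.prefix=sqlserver-
poll.interval.ms=5000

You can drop that in a properties file for a standalone Connect worker, or send
the equivalent JSON to the Connect REST API for distributed mode.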
Thanks,
Subhash
Sent from my iPhone
duplicates.
Hope that helps.
Thanks,
Subhash
On Tue, Feb 13, 2018 at 10:44 AM, Xavier Noria wrote:
> On Tue, Feb 13, 2018 at 2:59 PM, Subhash Sriram
> wrote:
>
> Hey Xavier,
> >
> > Within a consumer group, you can only have as many consumers as you have
> > parti
Hey Xavier,
Within a consumer group, you can only have as many active consumers as you have
partitions in the topic. If you start more consumers than partitions within the
same group, the extra consumers will just sit idle.
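In code terms, you just start several copies of something like the sketch below
with the same group.id (the topic and group names here are made up), and Kafka
spreads the partitions across however many instances are running, up to the
partition count:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group"); // same group.id in every instance
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // With subscribe(), the group coordinator assigns partitions to instances.
        // If there are more instances than partitions, the extras get nothing.
        consumer.subscribe(Collections.singletonList("my-topic"));

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            records.forEach(r -> System.out.printf("partition=%d offset=%d%n",
                    r.partition(), r.offset()));
        }
    }
}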
Thanks,
Subhash
Sent from my iPhone
> On Feb 13, 2018, at 8:30 AM, Xavier Noria wrote:
>
> C
Hi Sunil,
Burrow might be of interest to you:
https://github.com/linkedin/Burrow
Hope that helps.
Thanks,
Subhash
Sent from my iPhone
> On Jan 29, 2018, at 7:40 PM, Sunil Parmar wrote:
>
> We're using 0.9 (CDH) and consumer offsets are stored within Kafka. What
> is the preferred way to g
Hi Brian,
Maybe this will be of help:
https://www.confluent.io/certification/
Thanks,
Subhash
Sent from my iPhone
> On Jan 19, 2018, at 1:15 PM, brian spallholtz
> wrote:
>
> I am an SA/SE on several Kafka clusters. I was wondering if there was a
> training and certification program or trac
Hi,
I am not an expert, but from looking at the ACL documentation, it seems you can't
control read authorization at the partition level, only at the topic level. If it
does turn out to be possible to control access at the partition level, maybe you
could have a dedicated partition for each customer ID?
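Purely as a sketch of that last idea (the topic name, the customer-to-partition
mapping, and the payload below are all invented for illustration), the producer
side could pin each customer's data to its own partition like this:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PerCustomerPartitionProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        String customerId = "customer-42";
        int partition = 3; // whichever partition is dedicated to this customer

        // ProducerRecord takes an explicit partition, so everything for this
        // customer lands in one place regardless of the key's default hashing.
        producer.send(new ProducerRecord<>("customer-events", partition, customerId, "some payload"));

        producer.flush();
        producer.close();
    }
}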
Thanks,
Subhash
Hi Irtiza,
Have you looked at jmxtrans? It has multiple output writers for the metrics, and
one of them is the KeyOutWriter, which just writes the values to disk.
https://github.com/jmxtrans/jmxtrans/wiki
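Just as a rough sketch (the MBean, attributes, port, and output path are
placeholders, and the exact JSON keys can vary between jmxtrans versions, so
please double-check against the wiki), a query that dumps a broker metric to a
file with the KeyOutWriter looks roughly like this:

{
  "servers": [
    {
      "host": "localhost",
      "port": "9999",
      "queries": [
        {
          "obj": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec",
          "attr": ["Count", "OneMinuteRate"],
          "outputWriters": [
            {
              "@class": "com.googlecode.jmxtrans.model.output.KeyOutWriter",
              "settings": { "outputFile": "/var/log/jmxtrans/kafka-metrics.log" }
            }
          ]
        }
      ]
    }
  ]
}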
Hope that helps!
Thanks,
Subhash
Sent from my iPhone
> On Dec 6, 2017, at 5:36 AM, Irtiza Ali wrote
Hi Jens,
Have you looked at Burrow?
https://github.com/linkedin/Burrow/blob/master/README.md
Thanks,
Subhash
Sent from my iPhone
> On Aug 12, 2017, at 8:55 AM, Jens Rantil wrote:
>
> Hi,
>
> I am one of the maintainers of prometheus-kafka-consumer-group-exporter[1],
> which exports consumer
You can test by publishing to a single-partition topic. When you
consume, you should see that it is all in order.
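For a quick sanity check, something like this minimal sketch (topic name made up,
and assumed to have been created with exactly one partition) should come back in
exactly the order it was sent:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderingTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // Within a single partition, Kafka preserves the order of appends,
        // so reading this topic back should yield msg-0 .. msg-99 in sequence.
        for (int i = 0; i < 100; i++) {
            producer.send(new ProducerRecord<>("ordering-test", "msg-" + i));
        }
        producer.close();
    }
}

Reading that back with the console consumer should show the messages in the same
sequence they were produced.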
I hope that helps.
Thanks,
Subhash
On Thu, Jun 22, 2017 at 4:37 PM, karan alang wrote:
> Hi Subhash,
>
> number of partitions - 3
>
> On Thu, Jun 22, 2017 at 12:37 PM, Subha
How many partitions are in your topic?
On Thu, Jun 22, 2017 at 3:33 PM, karan alang wrote:
> Hi All -
>
> version - kafka 0.10
> I'm publishing data into a Kafka topic using the command line,
> and reading the data using the Kafka console consumer
>
> *Publish command ->*
>
> $KAFKA_HOME/bin/kafka-verifia
output, but it should get fresh
> offsets (with `--describe` for example), since the old offsets were
> removed once it became inactive.
>
> --Vahid
>
>
>
>
> From: Subhash Sriram
> To: users@kafka.apache.org
> Date: 05/05/2017 02:38 PM
> Subject: Re
et and it
> will not be listed in the consumer group command output.
>
> But the consumer group in your case should be alive, since it did not
> become inactive.
>
> Did the command use to list the group in the output before?
>
> --Vahid
>
>
>
>
> From: Subhas
Hey everyone,
I am a little bit confused about how the kafka-consumer-groups.sh /
ConsumerGroupCommand tool works, and was hoping someone could shed some light on
this for me.
We are running Kafka 0.10.1.0, and using the new Consumer API with the
Confluent.Kafka C# library (v0.9.5), which uses librdkafka.
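For reference, the kind of command I am running looks roughly like this (the
group name is just a placeholder):

$KAFKA_HOME/bin/kafka-consumer-groups.sh --new-consumer \
  --bootstrap-server localhost:9092 \
  --describe \
  --group my-consumer-group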