I'm running Kafka 1.1.1 and ZooKeeper 3.4.6 in a cluster, both secured with
Kerberos. My app stack includes a module containing topic configurations,
and my continuous integration build autodeploys changes to topics with
kafka-topics.sh and kafka-configs.sh.
When I try to use a non-superuser princip
Hi All,
I am testing Kafka locally and I am able to produce and consume messages. But
after consuming a message from the topic I want to acknowledge it.
I am looking for a solution. Please reply if anyone has one.
Thanks & Regards
Rahul Singh
Please read KafkaConsumer javadoc - your answer is already there.
Thanks,
On Mon, 21 Jan 2019 at 13:13, Rahul Singh <
rahul.si...@smartsensesolutions.com> wrote:
> Hi All,
>
> I am testing kafka locally, I am able to produce and consume message. But,
> after consuming the message from topic I wa
I am using it in Node with the node-kafka module.
On Mon, Jan 21, 2019 at 6:45 PM M. Manna wrote:
> Please read KafkaConsumer javadoc - your answer is already there.
>
> Thanks,
>
> On Mon, 21 Jan 2019 at 13:13, Rahul Singh <
> rahul.si...@smartsensesolutions.com> wrote:
>
> > Hi All,
> >
> > I am test
Are you using kafka-node or node-rdkafka? In either case you should call
Consumer.commit(cb) or something similar to manually commit offsets (aka
acknowledge messages).
Alternatively, you can set a config parameter on the consumer to autoCommit.
https://github.com/SOHU-Co/kafka-node/blob/master/
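Not part of the original thread: a minimal sketch of the manual-commit pattern Hans describes, assuming the kafka-node Consumer API; the broker address and topic name are placeholders.

const kafka = require('kafka-node');

// Connect to a local broker (placeholder address) and disable auto-commit
// so offsets are only committed when we explicitly acknowledge them.
const client = new kafka.KafkaClient({ kafkaHost: '127.0.0.1:9092' });
const consumer = new kafka.Consumer(
  client,
  [{ topic: 'example-topic' }],
  { autoCommit: false }
);

consumer.on('message', (message) => {
  // ... process the message here ...

  // Manually commit (acknowledge) the offsets consumed so far.
  consumer.commit((err, data) => {
    if (err) console.error('commit failed', err);
  });
});

consumer.on('error', (err) => console.error(err));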
I am using node-kafka. I have used consumer.commit to commit offsets, but I
don't know why, when I restart the consumer, it consumes the already committed offsets again.
Thanks
On Mon, Jan 21, 2019, 10:24 PM Hans Jespersen wrote:
> Are you using kafka-node or node-rdkafka? In either case you should call
> Consumer.commit(cb
Show some code Rahul.
On Mon, Jan 21, 2019 at 11:02 AM Rahul Singh <
rahul.si...@smartsensesolutions.com> wrote:
> I am using node-kafka, I have used consumer.commit to commit offsets but
> don't know why when I restart the consumer it consume the committed
> offsets.
>
> Thanks
>
> On Mon, Jan 2
Hi Rajiv,
Did you ever find out what was causing this issue?
I notice something similar on my Kafka cluster, only that the 95th percentile
of the log flush time goes above 10 or 20 minutes, and then I start to see
under-replicated partitions.
Besides the difference in time, the scenario is quite the sam
Do you mean this node-kafka from 4 years ago
(https://github.com/sutoiku/node-kafka)?
If so, that's a very old client; it only supports Apache Kafka 0.8 and stores
offsets in ZooKeeper (which clients for Kafka 0.9 and above no longer do).
I recommend you use a more up-to-date Node.js Kafka client than this
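The thread does not name a replacement client, so purely as a hedged illustration (kafkajs is my assumption, not something recommended in this thread), manual offset commits with a newer client that stores offsets in Kafka rather than ZooKeeper look roughly like this:

const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'example-app', brokers: ['127.0.0.1:9092'] });
const consumer = kafka.consumer({ groupId: 'example-group' });

async function run() {
  await consumer.connect();
  await consumer.subscribe({ topic: 'example-topic' }); // placeholder topic

  await consumer.run({
    autoCommit: false, // acknowledge manually instead of auto-committing
    eachMessage: async ({ topic, partition, message }) => {
      // ... process message.value here ...

      // Commit the next offset so a restart resumes after this message.
      await consumer.commitOffsets([
        { topic, partition, offset: (Number(message.offset) + 1).toString() },
      ]);
    },
  });
}

run().catch(console.error);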
Hello!
I'm having trouble when deploying a new version of a service: during the
rebalancing step the topology doesn't match what the KafkaStreams library
assumes, and there's an NPE while creating tasks.
Background info:
I'm running a Spring Boot service which utilizes KafkaStreams, currently
sub
That is expected... It's not possible to change the subscription during
a rolling restart. You need to stop all instances and afterwards start
new instances with the new subscription.
I did not look into the details of your change, but you might also need
to reset your application before starting
Hi Hans,
Let me correct this: I am using the kafka-node client, not node-kafka.
On Tue, Jan 22, 2019 at 3:56 AM Hans Jespersen wrote:
> Do you mean this node-kafka from 4 years ago (
> https://github.com/sutoiku/node-kafka)?
>
> If so that’s a very very old client, only supports Apache Kafka 0.8 and
Hi Daniel,
This is my code. Hope it looks understandable, thanks :)
const kafka = require('kafka-node');
const ConsumerGroup = kafka.ConsumerGroup;

let options = {
  kafkaHost: '127.0.0.1:9092',
  groupId: 'DualTest',
  autoCommit: false,
  // autoCommitIntervalMs: 5000,
  protocol: [
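The message is cut off here in the archive. Purely as an illustration, and not Rahul's actual code beyond what is shown above, a completed version of this ConsumerGroup setup with a manual commit might look like the sketch below; the protocol value, topic name, and message handler are assumptions.

const kafka = require('kafka-node');
const ConsumerGroup = kafka.ConsumerGroup;

const options = {
  kafkaHost: '127.0.0.1:9092',
  groupId: 'DualTest',
  autoCommit: false,
  protocol: ['roundrobin'],   // assumed value; the original message is cut off here
};

const consumerGroup = new ConsumerGroup(options, ['example-topic']); // placeholder topic

consumerGroup.on('message', (message) => {
  // ... process the message ...

  // Commit after processing so the offset is acknowledged only once the work is done.
  consumerGroup.commit((err) => {
    if (err) console.error('commit failed', err);
  });
});

consumerGroup.on('error', (err) => console.error(err));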