Also, to let you all know, I get this error too on the server console:
log4j:ERROR Failed to rename
[D:\kafka_2.10-0.10.2.0-SNAPSHOT/logs/controller.log] to
[D:\kafka_2.10-0.10.2.0-SNAPSHOT/logs/controller.log.2016-12-15-11].
Looks like it's related. It seems some central process in the new version is failing
to
Hi,
Is there an easy way to reset the offsets of a consumer group back to 0 for a
given topic?
I am using the command below to check offsets:
bin/kafka-consumer-offset-checker.sh --zookeeper zookeeper-host.com:2181
--group consumer-group --topic nameoftopic --security SASL_PLAINTEXT
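For reference, one way to do this with the 0.10.x Java client (an untested
sketch; it assumes no other member of the group is running while it executes,
and that the broker address is a placeholder):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ResetToZero {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-host:9092");   // placeholder
        props.put("group.id", "consumer-group");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("nameoftopic"));
            consumer.poll(0);                                 // join the group and receive assignments
            consumer.seekToBeginning(consumer.assignment());  // rewind every assigned partition
            for (TopicPartition tp : consumer.assignment())
                consumer.position(tp);                        // force each lazy seek to resolve
            consumer.commitSync();                            // persist the rewound positions for the group
        }
    }
}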
Thanks,
J
Hi all,
I am running Kafka Connect on a 2-node cluster. I have 5 connectors running
with tasks.max=1 each, but all the tasks are running on the same node; work is
not distributed across the 2 nodes.
I am using a custom source connector.
Any help is appreciated
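For reference, tasks are only rebalanced across workers that join the same
Connect group, so it is worth comparing the two workers' distributed configs
(a sketch; the values are placeholders):

# connect-distributed.properties (sketch)
bootstrap.servers=broker-host:9092
group.id=connect-cluster             # must be identical on every worker
config.storage.topic=connect-configs # same three internal topics on every worker
offset.storage.topic=connect-offsets
status.storage.topic=connect-status

If one worker joined with a different group.id or different internal topics,
all tasks would stay on the other worker.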
Thanks
Manjunath
Hi,
I suggest including topic configs as part of the streams config
properties, like we do for producer and consumer configs.
The topic config supplied would be used for creating internal changelog
topics along with certain additional configs which are applied by default.
This way we don't have
Make sure that the principal ID is exactly what Kafka sees. In my experience,
guessing what the principal ID is by using keytool or openssl is not going to
help. The best approach is to add some logging to output the SSL client ID
in org.apache.kafka.common.network.SslTransportLayer.peerPrincipal().
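For example, something like this (a sketch, not the exact upstream method
body; it assumes the class's slf4j logger):

// Sketch: log the principal the broker actually resolves, before returning it.
public Principal peerPrincipal() throws SSLPeerUnverifiedException {
    Principal principal = sslEngine.getSession().getPeerPrincipal();
    log.info("SSL peer principal as seen by Kafka: {}", principal.getName());
    return principal;
}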
T
Hi Apurva,
When I try producing single messages to a similarly configured topic (30
partitions, 3x replication, acks=all) and enable full protocol debugging,
I am seeing single-digit-ms round-trip times for the request-response cycle
at the protocol level. But I see other overheads, which might be
Thanks, Shrikant, for your reply, but I did the consumer part as well. Moreover,
I am not facing this issue only with the consumer; I am getting these errors
with the producer as well as the consumer.
On Wed, Dec 14, 2016 at 3:53 PM, Shrikant Patel wrote:
> You need to execute kafka-acls.sh with --consumer to enabl
You need to execute kafka-acls.sh with --consumer to enable consumption from
Kafka.
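For example, something like this (untested; the principal is a placeholder,
the other names are taken from your earlier command):

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zookeeper-host.com:2181 \
  --add --allow-principal User:raghu \
  --consumer --topic nameoftopic --group consumer-group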
_
Shrikant Patel | 817.367.4302
Enterprise Architecture Team
PDX-NHIN
-----Original Message-----
From: Raghu B [mailto:raghu98...@gmail.com]
Sent: Wednesday, Dece
Hi All,
I am trying to enable ACLs in my Kafka cluster along with the SSL
protocol.
I tried each and every parameter but had no luck, so I need help
enabling SSL (without Kerberos). I am attaching all the configuration
details.
Kindly help me.
*I tested SSL without ACL, it w
Mikael,
by "growing out of bounds" we refer to the fact, that the changelog
encodes the keys as pair of . Thus, over time as
we create more and more window, storage requirement grows and grows and
will eventually hit a wall. How fast this happens, depends mainly on
your window advance time (and nu
Hi Jeff,
It depends on the size of the messages as well as settings like
`fetch.max.bytes` and `max.partition.fetch.bytes`. Assuming that the fetch
returned all messages for all 4 topic partitions, the consumer would return
500 messages from one partition, then 500 from another partition and so on
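For reference, the settings involved (a sketch; the values shown are the
0.10.1+ defaults):

Properties props = new Properties();
props.put("max.poll.records", "500");              // upper bound on records returned by a single poll()
props.put("fetch.max.bytes", "52428800");          // cap on the total bytes in one fetch response
props.put("max.partition.fetch.bytes", "1048576"); // cap on the bytes fetched per partition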
Hi Mathieu,
I think you are right, there is currently no mutual exclusion between
`task.commit()` and `task.poll()`. The solution you are thinking of with
maintaining the committed offset state yourself seems reasonable, though
inconvenient.
It probably makes sense to add a new parameterized `com
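For reference, a sketch of that workaround (only the SourceTask method names,
including the commitRecord(SourceRecord) hook, are from the Connect API; the
"position" offset key and acknowledgeUpstream() are illustrative):

import java.util.concurrent.atomic.AtomicLong;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

// Sketch: remember the offset of the last record the framework reports as
// produced, and only acknowledge up to that point in commit().
public abstract class TrackingSourceTask extends SourceTask {
    private final AtomicLong lastProduced = new AtomicLong(-1L);

    @Override
    public void commitRecord(SourceRecord record) {
        // Called by the framework after the record was written to Kafka.
        Object pos = record.sourceOffset().get("position");
        if (pos instanceof Long)
            lastProduced.set((Long) pos);
    }

    @Override
    public void commit() {
        // May run concurrently with poll(); acting only on lastProduced
        // avoids acknowledging records that are still in flight.
        long offset = lastProduced.get();
        if (offset >= 0)
            acknowledgeUpstream(offset);
    }

    // Illustrative hook into the upstream system the connector reads from.
    protected abstract void acknowledgeUpstream(long offset);
}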
Scenario:
Four topics, each with one partition holding 1000 messages. A single
consumer group is subscribed to all four topics. There are only two consumer
processes within the consumer group.
Using the default strategy, each individual consumer will be subscribed to
two topic_partitions.
Will a single call
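For reference, a sketch of that setup (fragment; topic names and the broker
address are placeholders). Running two copies of this in the same group and
comparing what each prints shows the actual assignment:

Properties props = new Properties();
props.put("bootstrap.servers", "broker-host:9092");
props.put("group.id", "four-topic-group");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(java.util.Arrays.asList("topic-a", "topic-b", "topic-c", "topic-d"));
consumer.poll(1000);                                      // join the group
System.out.println("Assigned: " + consumer.assignment()); // the topic_partitions this process owns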
Hi Matthias,
kind of :)
I'm interested in the retention mechanisms, and my use case is to keep old
windows around for a long time (up to a year or longer) and access them via
interactive queries. As I understand from the documentation, the retention
mechanism is used to keep changelogs from "grow
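For reference, a sketch of that access path using the interactive-queries API
from 0.10.1+ (the store name, the key/value types, and the running
KafkaStreams instance `streams` are assumed):

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyWindowStore;
import org.apache.kafka.streams.state.WindowStoreIterator;

// Sketch: fetch a year's worth of windows for one key from a windowed store.
ReadOnlyWindowStore<String, Long> store =
    streams.store("my-window-store", QueryableStoreTypes.<String, Long>windowStore());
long now = System.currentTimeMillis();
long oneYearAgo = now - 365L * 24 * 60 * 60 * 1000;
WindowStoreIterator<Long> iter = store.fetch("some-key", oneYearAgo, now);
while (iter.hasNext()) {
    KeyValue<Long, Long> entry = iter.next();  // entry.key is the window start timestamp
    System.out.println(entry.key + " -> " + entry.value);
}
iter.close();                                  // release the underlying store iterator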
Hi Apurva,
The first error vanished after I restarted all the brokers. I haven't seen
these errors recur, and my thought is that since we restarted the zookeeper
nodes, we might have put all the brokers in some sort of an iffy state.
The broker occasionally being hung has plagued us quite a bit. Our Kafka
Recently, we have seen our brokers crash with the errors below; any idea what
might be wrong here? The brokers have been running for a long time with the
same hosts/configs without this issue. Is this something to do with the new
version 0.10.0.1 (to which we upgraded recently), or could it be a h/w issue?
Regarding 1), you can see a NotLeaderForPartition exception if the leader
for the partition has moved to another host but the client's metadata has not
been updated yet. The messages should disappear once the metadata is
updated on all clients.
Leaders may move if brokers are bounced, or if they h
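For reference, the producer settings that govern how quickly a client rides
out such a move (a sketch; the values are illustrative, not recommendations):

Properties props = new Properties();
props.put("bootstrap.servers", "broker-host:9092");  // placeholder
props.put("retries", "5");                  // retry sends that fail with NotLeaderForPartition
props.put("retry.backoff.ms", "100");       // pause between retries while metadata refreshes
props.put("metadata.max.age.ms", "60000");  // refresh metadata periodically even without errors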
Which version of Kafka are you using? How did you run the tool exactly? Can
you share the command line?
On Tue, Dec 13, 2016 at 6:05 PM, Jeremy Hansen wrote:
> The new host has been in place for over a week. Lag is still high on
> partitions that exist on that new host. Should I attempt another
I would suggest trying a recent Java version first, judging by what I read about this one:
http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2014-December/011534.html
Cheers
Robert
--
Robert Schumann | Lead DevOps Engineer | mobile.de GmbH
T: + 49. 30. 8109. 7219
M: +49.151. 5504. 8246
F: +49. 30. 8109.
Understood. Makes sense.
For this, you should apply the Streams configs manually when creating those
topics. For the retention parameter, use the value you specify in the
corresponding .until() method.
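For example (a sketch assuming a KStream<String, String> named `stream`; the
window size, retention, and store name are placeholders):

import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

// Sketch: the retention.ms you set on a manually created changelog topic
// should match what the windowed aggregation declares via until().
KTable<Windowed<String>, Long> counts = stream
    .groupByKey()
    .count(TimeWindows.of(60 * 1000L)               // 1-minute tumbling windows
                      .until(24 * 60 * 60 * 1000L), // keep windows for 1 day
           "counts-store");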
-Matthias
On 12/14/16 10:08 AM, Sachin Mittal wrote:
> I was referring to internal change log top
Hi Gwilym,
What is the latency for synchronously producing to this cluster? Is it also
1000 to 2000ms?
Thanks,
Apurva
On Wed, Dec 14, 2016 at 2:17 AM, Gwilym Evans
wrote:
> Hi folks,
>
> New to the list and new to operating Kafka. I'm trying to find out what a
> reasonable turnaround time for
I would suggest creating a JIRA and describing in detail what was going on
in the cluster when this happened, and posting the associated broker /
state change / controller logs.
Thanks,
Apurva
On Wed, Dec 14, 2016 at 3:28 AM, Mazhar Shaikh
wrote:
> Hi All,
>
> I am using kafka_2.11-0.9.0.1 with
I was referring to the internal changelog topics. I had to create them manually
because in some cases the message size of these topics was greater than the
default used by Kafka Streams.
I think someone in this group recommended creating these topics manually. I
understand that it is better to hav
I am not sure I can follow.
However, in Kafka Streams, when using window aggregation, the windowed KTable
uses a key-value store internally -- it's only called a windowed store
because it encodes the key for the store as a pair of <record-key,
window-start-timestamp> and also applies a couple of other mechanisms with
regard to retention tim
I am wondering about "I create internal topic manually" -- which topics
do you refer to, exactly?
Kafka Streams creates all kinds of internal topics with auto-generated
names, so it would be quite tricky to create all of them manually
(especially because you need to know those names in advance).
IIRC,
In a turn of events: this morning I was about to throw in the proverbial
towel on Kafka. In a last-ditch effort, I killed all but one instance of my
app, put it back to a single thread (why offer the option if it's not
advised?), and deleted every last topic that had any relation to this app.
I res
I created a new topic from the shell.
I published some messages to it via a Java producer thread and consumed some
messages via a streams application.
Then I terminated the streams application using Ctrl-C.
Then I deleted the topic via the shell.
I got this exception:
[2016-12-14 22:24:43,529] ERROR [KafkaApi-0]
Hi team, I'm trying to bring up a new kafka-broker but it's failing with the
FATAL error detailed below.
Can you please help me fix this issue?
WARN SASL configuration failed: javax.security.auth.login.LoginException: No
key to store Will continue connection to Zookeeper server without SAS
Hi there,
Any idea why the log.retention attribute is not working? We set
log.retention.hours=6 in server.properties but old data is not getting
deleted; we see that Dec 9th data/log files are still there.
We are running this on production boxes, and if it does not delete the old
files, our stor
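For reference, the usual things to check (a sketch; the names are standard
broker/topic configs, the values are placeholders): a topic-level retention.ms
override shadows the broker default, and a segment is only deleted after it
has rolled.

# server.properties (broker defaults)
log.retention.hours=6
log.retention.check.interval.ms=300000   # how often the cleaner looks for expired segments
log.segment.bytes=1073741824             # a segment only becomes eligible once it has rolled

# check for a topic-level override shadowing the broker default
bin/kafka-topics.sh --zookeeper zookeeper-host.com:2181 --describe --topic mytopic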
Hi,
I'm wondering about the tradeoffs of implementing a tumbling window with
a long retention, e.g. one year. Is it better to use a normal key-value store
and aggregate the time buckets using a group-by instead of a window store?
Best,
Mikael
We do recommend one thread per instance of the app. However, it should also
work with multiple threads.
I can't debug the problem any further without the logs from the other apps.
We'd need to try and see if another instance still has task 1_3 open (I
suspect it does).
Thanks,
Damian
On Wed, 14
What should I do about this? One thread per app?
On Wed, Dec 14, 2016 at 4:11 AM, Damian Guy wrote:
> That is correct
>
> On Wed, 14 Dec 2016 at 12:09 Jon Yeargers
> wrote:
>
> > I have the app running on 5 machines. Is that what you mean?
> >
> > On Wed, Dec 14, 2016 at 1:38 AM, Damian Guy
>
That is correct
On Wed, 14 Dec 2016 at 12:09 Jon Yeargers wrote:
> I have the app running on 5 machines. Is that what you mean?
>
> On Wed, Dec 14, 2016 at 1:38 AM, Damian Guy wrote:
>
> > Hi Jon,
> >
> > Do you have more than one instance of the app running? The reason i ask
> is
> > because t
I have the app running on 5 machines. Is that what you mean?
On Wed, Dec 14, 2016 at 1:38 AM, Damian Guy wrote:
> Hi Jon,
>
> Do you have more than one instance of the app running? The reason i ask is
> because the task (task 1_3) that fails with the
> "java.lang.IllegalStateException" in this l
Hi All,
I am using kafka_2.11-0.9.0.1 with Java version "1.7.0_51".
On random days the kafka process stops (crashes) with a Java core dump as
below.
(gdb) bt
#0 0x7f33059f70d5 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x7f33059fa83b in abort () from /lib/x86_64-linux-gnu/libc.s
Hi folks,
New to the list and new to operating Kafka. I'm trying to find out what a
reasonable turnaround time for committing offsets is.
I'm running a 0.10.1.0 cluster of 17 brokers plus 3 dedicated zookeeper
nodes, though the cluster has been upgraded from its starting point at
0.10.0.0. The of
Hi Sachin,
> windowstore.changelog.additional.retention.ms
>
> How does this relate to the retention.ms param of the topic config?
> I create internal topics manually using, say, retention.ms=360.
> In the next release (post kafka_2.10-0.10.0.1), since we support delete of
> internal changelog topic as wel
Hi Jon,
Do you have more than one instance of the app running? The reason I ask is
that the task (task 1_3) that fails with the
"java.lang.IllegalStateException" in this log was previously running as a
standby task. This would mean the active task for this store would have
been running elsewhere