Manish Sharma wrote:
> One of my brokers (id 0) is continuously emitting such log entries and
> eating up CPU cycles.
>
> *[2015-11-07 12:49:50,677] INFO Partition [Wmt_Thursday_158,12] on broker
> 0: Shrinking ISR for partition [Wmt_Thursday_158,12] from 0,3 to 0
> (kafka.cluster.Partition)*
>
Hi Harsha,
I have used the fully qualified domain name. For security reasons, before
sending this mail I replaced our FQDN hostname with localhost.
Yes, I have tried kinit, and I am able to view the tickets using the klist
command as well.
Thanks,
Prabhu
On Wed, Dec 30, 2015 at 11:27 AM, Ha
I keep getting such warnings intermittently in my application. The
application connects to a Kafka server and pushes messages. None of my
messages have failed, however.
The application is a Spring application and it uses kafka-clients to
establish the connection and send messages to Kafka.
kafka-client
Thanks guys. The `seek` approach seems to be a solution, but it's more
cumbersome than in 0.8 because I have to plug some extra code into my
consumer abstractions rather than simply deleting a ZK node.
And one more question: where does Kafka 0.9 store the consumer-group
information? In fact I also tried to dele
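For context, a minimal sketch of the seek approach with the 0.9 Java client;
the broker address, topic, and group id below are made-up placeholders, not
details from this thread:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ResetOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group"); // hypothetical group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        TopicPartition tp = new TopicPartition("my-topic", 0); // hypothetical topic
        consumer.assign(Collections.singletonList(tp)); // manual assignment, no rebalance
        consumer.seekToBeginning(tp); // or consumer.seek(tp, someOffset)
        // seeks are lazy; position() forces the reset to take effect
        System.out.println("now at offset " + consumer.position(tp));
        consumer.close();
    }
}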
Dear Team,
We are using Kafka 0.8.2.1 with log.retention.hours=168, but the files of
__consumer_offsets are not getting deleted; because of this, a lot of disk
space is used.
Please help: how can the files of the offset storage topic be deleted after
the specified time?
Thanks and Regards,
Madhukar
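One note: __consumer_offsets is a compacted topic, so time-based retention
such as log.retention.hours does not apply to it; it is cleaned by the log
cleaner, which is disabled by default in 0.8.2.1. A sketch of the broker
settings involved, with illustrative values rather than recommendations:

# server.properties
# the log cleaner is off by default in 0.8.2.1 and is required
# to compact __consumer_offsets
log.cleaner.enable=true
# how long offsets of dead consumer groups are retained (default 1440)
offsets.retention.minutes=1440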
Can you add your JAAS file details? Your JAAS file might have
useTicketCache=true and storeKey=true as well. Example of a
KafkaServer JAAS file:
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
serviceName="kafka"
keyTab="/vagrant/keytabs/kafka1.key
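For comparison, a hedged sketch of a matching client-side entry; this is the
standard Krb5LoginModule JAAS form, and whether it fits your setup is an
assumption:

KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true // reuse the ticket obtained via kinit
serviceName="kafka";
};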
If you want to monitor offsets (ZK or Kafka based), try Quantifind's
Kafka Offset Monitor.
If you use Docker, it's as easy as:
docker run -p 8080:8080 -e ZK=zk_hostname:2181 \
    jpodeszwik/kafka-offset-monitor
and then opening a browser to dockerhost:8080.
If not in the Docker mood, use instructions he
Do you have access to the server logs? Any error is likely recorded there
with a stack trace. You might also check which server version you are
connecting to.
-Dana
On Dec 30, 2015 3:49 AM, "Birendra Kumar Singh" wrote:
> I keep getting such warnings intermittently in my application. The
> applic
Hi Marko,
Yes, we're currently using this on our production Kafka 0.8, but it does
not seem to work with the new consumer API in 0.9.
To answer my own question about deleting a consumer group with the new
consumer API: it seems that it's currently not possible
(there's no delete r
Hi there,
I'd like to announce that our open source library, Reactive Kafka (which
wraps Akka Streams for Java/Scala around Kafka consumers/producers), now
supports Kafka 0.9. More details:
https://softwaremill.com/reactive-kafka-09/
Hi,
I am running Kafka 0.9.0 locally.
I am seeing a particular situation in the following scenario.
(1) One producer inserts 500 records (approx. 300 bytes each) into one
topic, partition 0 (or 1, as you prefer).
(2) After the producer has finished inserting the 500 records, one consumer reads
in a loop from this
Looks like there is an open issue related to the same.
https://issues.apache.org/jira/browse/KAFKA-2078
@Dana
Which server logs do you want me to check, ZooKeeper or Kafka? I didn't
find any stack trace over there though.
It's only in my application logs that I see them. And it comes as a WARN
rat
Hi Han,
if it doesn't work, you should file an issue, since the readme explicitly
says that it works with:
1. zookeeper: the built-in high-level consumer (based on ZooKeeper)
2. kafka: the built-in offset management API (based on a Kafka internal topic)
3. Storm Kafka Spout (based on ZooKeeper by defau
A few thoughts from a non-expert:
connections are also processed asynchronously in the poll loop. If you are
not setting any timeout, you may be seeing a few initial iterations spent
on setting up the channel connections. Also, you probably need a few loop
iterations to get through an initial meta
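A hedged sketch of what that looks like with the 0.9 consumer, using a
non-zero poll timeout; the topic, group, and broker address are made-up
placeholders:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test-group"); // hypothetical group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic")); // hypothetical topic
        // poll(0) returns immediately and may yield nothing while the client
        // is still connecting and fetching metadata; a non-zero timeout lets
        // those early iterations make progress
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset=%d value=%s%n",
                        record.offset(), record.value());
        }
    }
}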
I was thinking Kafka logs, but KAFKA-2078 suggests it may be a deeper
issue. Sorry, I don't have any better suggestions / ideas right now than
you found in that JIRA ticket.
-Dana
On Wed, Dec 30, 2015 at 10:10 AM, Birendra Kumar Singh
wrote:
> Looks like there is an open issue related to the sam
Xavier,
The md5 checksum is generated by running "gpg --print-md MD5". Is there a
command that generates the output that you wanted?
Thanks,
Jun
On Tue, Dec 29, 2015 at 5:13 PM, Xavier Stevens wrote:
> The current md5 checksums of the release downloads all seem to be returning
> in an atypica
Hey Jun,
I was expecting that you just used md5sum (GNU version).
The nice part of using it is that when scripting a check it has a -c option:
md5sum -c kafka_2.11-0.9.0.0.tgz.md5
The difficult bit with what is currently there is that it has a whole
bunch of newlines and spacing in it. So I ha
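For illustration, the workflow being described might look like this,
assuming GNU coreutils (the file name is taken from this thread):

# produce a checksum file in md5sum's native format
md5sum kafka_2.11-0.9.0.0.tgz > kafka_2.11-0.9.0.0.tgz.md5
# verify; prints "kafka_2.11-0.9.0.0.tgz: OK" on success
md5sum -c kafka_2.11-0.9.0.0.tgz.md5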
Xavier,
We also generate sha1 and sha2. Do we have to use different tools to
generate those too?
Thanks,
Jun
On Wed, Dec 30, 2015 at 2:29 PM, Xavier Stevens wrote:
> Hey Jun,
>
> I was expecting that you just used md5sum (GNU version).
>
> The nice part of using it is that when scripting a ch
Jun,
I'm not saying what you're doing is wrong; it just wasn't what I expected.
It looks like all of Apache's release process pages are using GPG from what
I can tell, which is fine.
To answer your question about sha1 and sha2, though: the GNU coreutils are
in the form of *sum (examples: md5sum, sha1
Hello-
We have a use case where we're trying to create a topic, delete it, then
recreate it with the same topic name.
We are running into inconsistent results.
Creating the topic:
/opt/kafka/bin/kafka-topics.sh --create --partitions 3 --replication-factor 3
--topic test-01 --zookeeper zoo01:2181, zoo02:218
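For reference, a hedged sketch of the delete-and-recreate sequence; note
that deletion only takes effect if the brokers run with
delete.topic.enable=true, otherwise the topic is merely marked for deletion:

/opt/kafka/bin/kafka-topics.sh --delete --topic test-01 --zookeeper zoo01:2181
# confirm the topic is actually gone before recreating it
/opt/kafka/bin/kafka-topics.sh --list --zookeeper zoo01:2181
/opt/kafka/bin/kafka-topics.sh --create --partitions 3 --replication-factor 3 \
    --topic test-01 --zookeeper zoo01:2181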