approach where you delegate the token validation to the authorization
server.
[1] https://github.com/strimzi/strimzi-kafka-oauth
- marko
On 2020/04/08 04:37:09, Antony Alphonse wrote:
> Hi,
>
> I'm looking to implement authentication using the OAUTHBEARER mechanism in my
> Kafka cluster. My OAuth server will be Azure AD. If anyone has implemented
>
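(For illustration, a rough sketch of the client-side configuration for that
delegation approach, assuming the callback handler class and the oauth.*
option names from the strimzi-kafka-oauth README; the Azure AD token endpoint,
client id and secret below are placeholders.)

import java.util.Properties;

Properties props = new Properties();
props.put("bootstrap.servers", "broker:9093");          // placeholder
props.put("security.protocol", "SASL_SSL");
props.put("sasl.mechanism", "OAUTHBEARER");
// Strimzi's callback handler obtains and refreshes the token for you
props.put("sasl.login.callback.handler.class",
    "io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler");
props.put("sasl.jaas.config",
    "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required"
    + " oauth.token.endpoint.uri=\"https://login.microsoftonline.com/<tenant>/oauth2/v2.0/token\""  // placeholder
    + " oauth.client.id=\"my-client\""                   // placeholder
    + " oauth.client.secret=\"my-secret\";");            // placeholder
// Broker-side validation (JWKS or token introspection) is configured on the
// listener, as described in the strimzi-kafka-oauth documentation.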
This would be useful, for example, showing messages belonging to the
same transaction.
thanks
marko
----- Original Message -----
From: users@kafka.apache.org
Sent: Tue, 31 Jul 2018 14:40:20 -0700
Subject: Re: Viewing transactional markers in client
No
Is there any way for a KafkaConsumer to view/get the transactional
marker messages?
--
Best regards,
Marko
www.kafkatool.com
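(A sketch of the closest you can get from the application side, assuming a
recent client and a placeholder topic: read with read_committed isolation and
watch for gaps in the consumed offsets; the markers themselves occupy offsets
but are never returned to the application.)

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");   // placeholder
props.put("group.id", "tx-inspector");               // placeholder
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("isolation.level", "read_committed");      // only records from committed transactions

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Collections.singletonList("tx-topic"));  // placeholder, single partition assumed
    long expected = -1;
    for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(5))) {
        if (expected >= 0 && r.offset() > expected) {
            // the skipped offsets held transaction markers (or aborted records)
            System.out.println("gap before offset " + r.offset());
        }
        expected = r.offset() + 1;
    }
}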
>> set are successfully written to disk even in case of power failure,
>> provided disks didn't crash?
No, see my above reply.
-Dave
-----Original Message-----
From: JEVTIC, MARKO [mailto:marko.jev...@fisglobal.com]
Sent: Tuesday, May 30, 2017 8:05 AM
To: users@kafka.apache.org
Subject: cl
Hi all,
I wasn't able to find a firm statement in the documentation about what the
Kafka producer reply guarantees.
So, before going through the source code, I would like to ask a question:
If a Kafka client producer gets record metadata with a valid offset, do we
consider that the message is indeed fsynced to the disk
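(For what it's worth, a sketch of the producer side with placeholder broker and
topic names: a reply with a valid offset means the record was accepted
according to the acks setting, i.e. with acks=all by all in-sync replicas, not
that it was fsynced; flushing to disk is governed separately by the broker's
log.flush.* settings, and durability in Kafka comes primarily from replication.)

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");   // placeholder
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("acks", "all");   // wait for all in-sync replicas; still not an fsync guarantee

try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    RecordMetadata md = producer.send(new ProducerRecord<>("my-topic", "k", "v")).get();  // placeholder topic
    System.out.println("acked at offset " + md.offset());
}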
You can use something like this to get a comma-separated list of all files
in a folder:
ls -l | awk '{print $9}' ORS=','
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/>
Do you know in advance (when sending the first message) how many messages
that batch is going to have?
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/> | Contact
<http://sematext.com/about/contact.html>
date in-sync replica becomes
the leader.
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/> | Contact
<http://sematext.com/about/contact.html>
On Thu, Sep 29, 2016 at 7:30 PM, Ezra Stuetz
flexible enough for any type of use case? What do
you think cannot be achieved?
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/> | Contact
<http://sematext.com/about/contact.html>
On Thu, Sep
BTW regarding latency:
https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/> | Contact
o consumer lag; i.e. lag can fit
in the OS page cache so you're not even hitting disk when consuming)
measured in low 10s of ms.
No read replicas. You only read from the partition's leader; i.e. replicas are
used to achieve redundancy.
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Hi Karin,
regarding 5 (fsyncing to disk), take a look at the broker configuration
parameters whose names start with log.flush.
http://kafka.apache.org/documentation.html#brokerconfigs
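For example (illustrative values only; the defaults leave flushing to the OS
page cache and rely on replication for durability):

# flush a log after this many accumulated messages...
log.flush.interval.messages=10000
# ...or after this many milliseconds, whichever comes first
log.flush.interval.ms=1000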
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Hi Tom,
if you need a commercially proven lag monitoring solution (and all other
Kafka and ZK metrics) take a look at our SPM.
Hope you don't mind me plugging this one in :)
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Is there a way to get a list of all consumer groups and consumer group
offsets using either KafkaConsumer or KafkaProducer (or some other method)
in the new Java client?
Best regards,
Marko
www.kafkatool.com
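(For what it's worth, later versions of the Java client expose this through
the AdminClient rather than the consumer or producer; a rough sketch, assuming
a broker new enough to answer these requests and a placeholder bootstrap
address.)

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ConsumerGroupListing;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");   // placeholder

try (AdminClient admin = AdminClient.create(props)) {
    // list every consumer group known to the cluster...
    for (ConsumerGroupListing g : admin.listConsumerGroups().all().get()) {
        // ...and the committed offsets for each of them
        Map<TopicPartition, OffsetAndMetadata> offsets =
            admin.listConsumerGroupOffsets(g.groupId())
                 .partitionsToOffsetAndMetadata().get();
        System.out.println(g.groupId() + " -> " + offsets);
    }
}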
were to assign the second consumer to a different group, each
consumer would consume all messages (independently of one another).
BTW, Kafka is not broadcasting anything; your consumers are pulling
messages out of Kafka :)
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
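(In config terms the only thing that differs between the two cases is group.id;
a tiny sketch, where props1 and props2 are the Properties of two hypothetical
consumers and the group names are placeholders.)

// same group: the two consumers split the topic's partitions between them
props1.put("group.id", "shared-group");
props2.put("group.id", "shared-group");

// different groups: each consumer independently receives every message
props1.put("group.id", "group-a");
props2.put("group.id", "group-b");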
Instantly reminded me of the Streams API, where you can use Java 8 stream
semantics (filter being one of them) to do the first thing in Guozhang's
response (filter messages from one topic into another - I assume that's
what you were looking for).
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
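(A minimal sketch of that filter-into-another-topic idea with the Streams DSL;
topic names, application id and the predicate are placeholders.)

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

StreamsBuilder builder = new StreamsBuilder();
// copy only the records matching the predicate into another topic
builder.<String, String>stream("input-topic")
       .filter((key, value) -> value != null && value.contains("interesting"))
       .to("filtered-topic");

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "filter-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
new KafkaStreams(builder.build(), props).start();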
Ah, that makes sense. After adding the truststore to the server configs
things seem to work correctly, thanks!
marko
> Have you configured a truststore in server.properties? You don't need this
> when using security.inter.broker.protocol=PLAINTEXT and client-auth is
> disabled, b
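(For reference, a minimal sketch of the server.properties pieces involved when
brokers talk SSL among themselves; paths and passwords are placeholders. The
truststore is what lets the broker, acting as an SSL client for inter-broker
connections, verify its peers.)

listeners=SSL://:9094
security.inter.broker.protocol=SSL
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit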
], Error during controlled
shutdown, possibly because leader movement took longer than the configured
socket.timeout.ms: Connection to Node(0, debian, 9094) failed
(kafka.server.KafkaServer)
the relevant configs are
listeners=SSL://:9094
security.inter.broker.protocol=SSL
port=9094
Marko
> If y
I'm assuming that you created a topic with replication factor 3, while
having only a single broker.
Try with replication factor 1 or add additional brokers.
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/>
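(Something along these lines; the topic name is a placeholder, and on a
single-broker setup the replication factor cannot exceed 1.)

bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --topic my-topic --partitions 1 --replication-factor 1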
There is only one broker in this case. There are no errors (besides the
warning below) on either the broker or the client side. It just returns an
empty topic list if plaintext is not configured, even though the client is
using SSL in both cases.
marko
> Hi,
>
> That warning is harmless. P
by the
consumer in both cases, so I'm not sure why having the plaintext port would
affect the SSL behavior.
--
Best regards,
Marko
www.kafkatool.com
Thanks, this seems to do the trick.
Best regards,
Marko
www.kafkatool.com
> Hi
>
> For the Kafka consumer, there are seekToBeginning and seekToEnd methods
> that point to the beginning and end of the partition. You can use one of the
> methods to point the consumer to a ce
How does one get the first/last offset for a given partition using the new
KafkaConsumer/Producer? In the old SimpleConsumer you would just use the
getOffsetsBefore() method.
--
Best regards,
Marko
www.kafkatool.com
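(Roughly like this with the new consumer; topic and partition are placeholders.
seekToBeginning/seekToEnd plus position() give you the earliest offset and the
offset the next message will get; newer clients also have
beginningOffsets/endOffsets, which do the same without seeking.)

import java.util.Collections;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");   // placeholder
props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
    TopicPartition tp = new TopicPartition("my-topic", 0);   // placeholder
    List<TopicPartition> tps = Collections.singletonList(tp);
    consumer.assign(tps);

    consumer.seekToBeginning(tps);
    long earliest = consumer.position(tp);   // first available offset

    consumer.seekToEnd(tps);
    long next = consumer.position(tp);       // offset the next message would get

    System.out.println("earliest=" + earliest + ", last=" + (next - 1));
}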
Also sent to: ggol...@hortonworks.com
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/> | Contact
<http://sematext.com/about/contact.html>
On Fri, Apr 15, 2016 at 1:47 AM, Guruditta Golani wrote:
Automated reply:
thank you for attempting to subscribe to the Kafka mailing list.
To finish the subscription process, send an email to
users-subscr...@kafka.apache.org
:)
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
+lies+orphaned+offsets+
Has anything changed in 0.9?
Thanks
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/> | Contact
<http://sematext.com/about/contact.html>
-providers
Amazon Kinesis would also work.
Anything really that would "outsource" the initial effort until you're
ready to commit to Kafka.
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/>
ahead and use Kafka regardless of the load.
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/> | Contact
<http://sematext.com/about/contact.html>
On Mon, Mar 21, 2016 at 6:25 PM, Ben
These two issues track progress of Kafka consumer 0.9.
https://github.com/apache/spark/pull/10953
https://github.com/apache/spark/pull/11143
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/>
This sounds like a good approach, since it does not depend on JMX. There
doesn't seem to be an easy way to make these API calls using the new
Java client, though. Everything is kind of hidden behind the KafkaConsumer
and KafkaProducer classes.
Marko
www.kafkatool.com
> I think it is pos
Is there a way to detect the broker version (even at a high level 0.8 vs
0.9) using the kafka-clients Java library?
--
Best regards,
Marko
www.kafkatool.com
Does the new KafkaConsumer support storing offsets in Zookeeper or only in
Kafka? By looking at the source code I could not find any support for
Zookeeper, but wanted to confirm this.
--
Best regards,
Marko
www.kafkatool.com
was a stream when it was little :)
Marko Bonaći
On Mon, Jan 11, 2016 at 5:53 PM, England, Laura (Interfuse) <
laura.engl...@interfusecomms.com> wrote:
> Hello!
>
> HPE Matter<http://www.hpematter.com/>, the digital magazine from HPE
> where the brightest minds in busines
I think that the attempt to write a message to a non-existent topic creates
that topic (when auto.create.topics.enable is set to true).
If it's set to false you get back an error.
Have you tried that?
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Hello again Cosmin :)
I think this is because offsets are kept in a special __consumer_offsets
topic, as opposed to ZooKeeper previously.
Take a look here:
http://search-hadoop.com/m/uyzND1T1i3BNkRFM1&subj=Re+Kafka+0+8+2+1+how+to+read+from+__consumer_offsets+topic+
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Actually, why don't you use the same code as outlined here (that includes
timeout in props):
http://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Hi Cosmin,
do you have the default server configuration on these new nodes you're setting
up?
I'd check the consumer's socket.timeout.ms; maybe someone set it to 30 instead
of 30000 :)
Speaking from my own experience (I had the same symptom and this turned out
to be the cause).
Marko Bonaći
own between requests?
FINALLY: tell us more about your use case.
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/> | Contact
<http://sematext.com/about/contact.html>
On Mon, Jan 4, 2016
er by default)
I'm still on Kafka 0.8, so I can't shed any light on your issue.
Thx for the AdminClient info.
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/> | Contact
<http://sematext.com/about/contact.html>
instructions here:
https://github.com/quantifind/KafkaOffsetMonitor
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/> | Contact
<http://sematext.com/about/contact.html>
On Wed, Dec 30, 2015
I was referring to Dana Powers's answer in the link I posted (to use a
client API). You can find an example here:
http://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
We recently had the same question:
http://search-hadoop.com/m/uyzND1kM7q1gElhy1
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/> | Contact
<http://sematext.com/about/contact.html>
On T
Hmm, I guess you're right Tod :)
Just to confirm, you meant that, while you're changing the exported file, it
might happen that one of the segment files becomes eligible for cleanup by
retention, which would then make the imported offsets out of range?
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
bin/kafka-run-class.sh kafka.tools.ImportZkOffsets --input-file
/tmp/zk-offsets --zkconnect localhost:2181
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/> | Contact
<http://sematext.com/about/contact.html>
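(The export side looks similar, assuming the same old ZooKeeper-offsets
tooling; the group name is a placeholder.)

bin/kafka-run-class.sh kafka.tools.ExportZkOffsets --zkconnect localhost:2181 \
  --group my-group --output-file /tmp/zk-offsets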
BTW I use Spotify's image since it contains both ZK and Kafka, but I think
the latest version they built is 0.8.2.1, so you might have to build the
new image yourself if you need 0.9, but that's trivial to do.
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Hi,
there was a problem with JMX consumer lag in 0.8:
http://search-hadoop.com/m/uyzND14v72215XZpK&subj=Re+Consumer+lag+lies+orphaned+offsets+
Has anything changed now with 0.9?
Thanks
Did not know that quotas landed in 0.9. Very nice!
Being able to throttle clients that don't have real-time SLAs (in favor of
those who do) is a great addition.
Thanks for that Grant.
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
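(For example, per-client overrides can be set with kafka-configs.sh; the client
id and byte rates below are placeholders, and this assumes the 0.9-style quota
configuration.)

bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152' \
  --entity-type clients --entity-name slow-batch-client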
r your use case.
Perhaps you could check consumer offsets from your producer and then decide,
based on that information, whether to throttle the producer or not. Could get
complicated really fast, though.
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
AFAIK there is no such notion as a maximum length of a topic, i.e. the offset
has no limit except Long.MAX_VALUE I think, which should be enough for a
couple of lifetimes (about 9.2 * 10^18, i.e. roughly nine quintillion, or nine
million trillion).
What would be the purpose of that, besides being a nice foot-gun :)
Marko Bonaći
We're running producers, brokers and consumers on AWS.
Is it possible that the network is that flaky?
What's your experience?
Thanks,
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/>
ishing the same thing?
Or should I just forget about it and use the recommended approach from the
low-level consumer code example in the wiki (which I currently use as the
fallback)?
Thanks,
Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
consumers, including Apache Storm Kafka spout
consumers
- Show JSON and XML messages in a pretty-printed format
- Export messages from a topic to files on the local hard drive
Feedback and comments are welcome; you can find the tool at www.kafkatool.com
Cheers,
Marko