I've been messing around a bit with a one topic/one partition setup, but all
consumers receive all of the messages (each gets the full stream).
Do all the clients support this? I've currently got the option between C#, Java
and Python, more or less. (I expect the Java one to be most feature-complete)
Great, thanks - that does help. I'll kick off some partitions, then. :)
(I think I saw your video lectures on safaribooksonline! I should probably have
paid better attention..)
Joris Peeters
Software Developer
Research and Data Technology
T: +44 (0) 20 8576 5800
-----Original Message-----
"if this-sort-of message arrived, then we expect
that-sort-of message to be received within this time" etc).
I'm sure I can piece something together that does this, but perhaps it comes
out of the box. (Couldn't find it, though).
We're using the Java client and Kafka 8.2.1
I intend to write some bespoke monitoring for our internal kafka system.
Among other things, it should warn us if some topics are lagging behind too
much (i.e. production much faster than consumption).
What would currently be the neatest way to get a list of all topics,
partitions and consumer groups?
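Per partition, the lag itself is just the log-end offset minus the group's last committed offset. A toy sketch of that arithmetic, with hard-coded placeholder offsets standing in for real broker queries:

```java
import java.util.HashMap;
import java.util.Map;

public class LagCheck {
    // Lag for one partition: how far consumption trails production.
    static long lag(long logEndOffset, long committedOffset) {
        return Math.max(0, logEndOffset - committedOffset);
    }

    public static void main(String[] args) {
        // Hypothetical per-partition {logEndOffset, committedOffset} pairs;
        // in practice these would come from the brokers and the offsets topic.
        Map<Integer, long[]> offsets = new HashMap<>();
        offsets.put(0, new long[]{1500, 1200});
        offsets.put(1, new long[]{900, 900});

        long threshold = 250; // alert if a partition trails by more than this
        for (Map.Entry<Integer, long[]> e : offsets.entrySet()) {
            long l = lag(e.getValue()[0], e.getValue()[1]);
            if (l > threshold) {
                System.out.println("partition " + e.getKey() + " lagging by " + l);
            }
        }
        // prints: partition 0 lagging by 300
    }
}
```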
Joris,
You could check out Burrow (https://github.com/linkedin/Burrow), which gives
you monitoring and alerting capability for offsets stored in Kafka.
On Wed, 16 Sep 2015 at 23:05 Joris Peeters
wrote:
> I intend to write some bespoke monitoring for our internal kafka system.
I'm trying to set up a kafka consumer (in Java) that uses the new approach of
committing offsets (i.e. in the __consumer_offsets topic etc, rather than
through zookeeper).
Am I correct in believing that the current version (we're using Kafka 8.2.1)
does not expose this through the high level consumer?
consumer properties, and the high level API is able to pick
it up and commit offsets to Kafka.
Here is the code reference where kafka offset logic kicks in
https://github.com/apache/kafka/blob/0.8.2/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala#L308
On Tue, 22 Sep 2015 at 17:44
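(For what it's worth, in the 0.8.2 high level consumer this was driven by consumer properties; a sketch, assuming the 0.8.2-era offsets.storage and dual.commit.enabled settings:)

```properties
# Store offsets in Kafka (__consumer_offsets) rather than Zookeeper
offsets.storage=kafka
# During migration, also commit to Zookeeper so either path stays consistent
dual.commit.enabled=true
```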
If you're using the console consumer to check the offsets topic, remember that
you need this line in consumer.properties:
exclude.internal.topics=false
On Tue, Sep 22, 2015 at 6:05 AM Joris Peeters
wrote:
> Ah, nice! It does not look like it is working, though. For some reason
> the __consumer_offsets topic
I have a topic with three partitions, to which I send 5 messages (very rapidly
after one another) and they get partitioned well ([1,2,2] as it happens).
I'm launching three identical high level Java consumers (Kafka 8.2.1),
single-threaded, to consume those messages. Once a message is received, t
the first message since the default is
largest (in which case auto commit is a red herring.)
On Wed, Sep 23, 2015 at 3:17 AM Joris Peeters
wrote:
> I have a topic with three partitions, to which I send 5 messages (very
> rapidly after one another) and they get partitioned well ([1,2,2] as
0.9 versions, so it wouldn't be in the 0.8.1 or
0.8.2 versions. The new consumer also supports this feature.
-Ewen
On Thu, Jan 7, 2016 at 2:02 AM, Joris Peeters
wrote:
> We are using Kafka 8.2.1 (*), and have
> kafka.javaapi.consumer.ConsumerConnector connected to a single topic
>
I suppose the consumers would also need to all belong to the same consumer
group for your expectation to hold. If the three consumers belong to different
consumer groups, I'd expect each of them to receive all the messages,
regardless of the number of partitions.
So perhaps they are on different consumer groups.
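To have the consumers split the partitions between them instead, they would all share the same group.id in their consumer properties. A minimal sketch for the 0.8.2 high level consumer (values are placeholders):

```properties
# Consumers sharing a group.id divide the partitions between them
group.id=my-consumer-group
zookeeper.connect=localhost:2181
```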
Hello,
I am trying to come up with a good security approach for a Kafka project
inside our company. I see there's a variety of options available
(ACL/RBAC/certificates/...), and I'm hoping someone can suggest a few
possibilities.
Some points of interest,
- Kafka will be used both by end-users/res
Hello,
I am trying to set up Kafka with custom authentication & authorisation. The
authentication is PLAIN, i.e. user/pass.
On a single broker, this worked fine (i.e. users were able to authenticate
correctly), but in a multi-broker set-up I am struggling to get the
inter-broker communication to work.
Perhaps it is something that should be looked at by Kafka dev?
---------- Forwarded message ---------
From: Joris Peeters
Date: Wed, Oct 16, 2019 at 2:41 PM
Subject: custom authentication; inter-broker issues?
To:
Hello,
We use custom Kafka authentication and authorisation, in a manner very
similar to https://github.com/navikt/kafka-plain-saslserver-2-ad, i.e. by
providing an implementation of
org.apache.kafka.common.security.auth.AuthenticateCallbackHandler and
kafka.security.auth.Authorizer - for plain user/pass authentication.
Hello,
For auditing and tracking purposes, I'd like to be able to monitor user
consumer events like topic subscriptions etc. The idea is to have the
individual events, not some number/second aggregation.
We are using the confluent-docker kafka image, for 5.2.2 (with some bespoke
auth injected), w
your Kafka cluster that could be used for auditing purposes.
> Not necessary I must say (I would go for the solution above) but certainly
> possible.
>
> Thanks,
>
> -- Ricardo
> On 6/23/20 7:41 AM, Joris Peeters wrote:
>
> Hello,
>
> For auditing and tracking purposes
Do you know why your consumers are so slow? 12E6 msg/hour is ~3,333 msg/s,
which is not very high from a Kafka point of view. As you're doing database
inserts, I suspect that is where the bottleneck lies.
If, for example, you're doing a single-row insert in a SQL DB for every
message then this would i
> Joris:
> Great point.
> DB insert is a bottleneck (and hence we moved it to its own layer) - and we
> are batching, but wondering what is the best way to calculate the batch
> size.
>
> Thanks,
> Yana
>
> On Mon, Dec 21, 2020 at 1:39 AM Joris Peeters
> wrote:
>
There's an official Confluent version:
https://hub.docker.com/r/confluentinc/cp-kafka/
On Tue, Mar 16, 2021 at 2:24 PM Otar Dvalishvili
wrote:
> Greetings,
> Why are there no official Kafka Docker images?
>
There's a few unknown parameters here that might influence the answer,
though. Off the top of my head, at least:
- How much replication of the data is needed (for high availability), and
how many acks for the producer? (If fire-and-forget it can be faster; if it
needs to replicate and get acks from 3 brokers
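To make that trade-off concrete, here is a hedged sketch of producer settings for the two extremes (modern Java producer config names; exact names depend on the client version):

```properties
# Fire-and-forget: lowest latency, no delivery guarantee
# acks=0
# Wait for the full in-sync replica set to acknowledge each send
acks=all
```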
(I work in the finance industry myself.) What you're doing is a very useful
benchmark, but I'd surround it with the above caveats to avoid overpromising.
-J
On Thu, Jan 6, 2022 at 4:58 PM Marisa Queen
wrote:
> Hi Joris,
>
> I've spoken to him. His answers are below:
>
>
> M. Queen
>
>
> On Thu, Jan 6, 2022 at 2:14 PM Joris Peeters
> wrote:
>
> > I'd just follow the instructions in https://kafka.apache.org/quickstart
> to
> > set up Kafka and Zookeeper on a single node, by running the Java
> processes
> > directly. Or