Requesting permission to contribute to Kafka

2022-10-29 Thread Bart Van Bos
Hereby my details:

Wiki ID: bart van bos (bartvan...@gmail.com)
Jira ID: boeboe (bartvan...@gmail.com)

BR,

*Bart Van Bos*
*SW & ICT Engineering - AllBits BVBA*

Mobile: +32 485 630 628
E-mail: bartvan...@gmail.com
BTW: BE.0678.829.457
IBAN: BE23 9731 7830 1491
Address: Lobroeken 25, 3191 Hever


[jira] [Created] (KAFKA-14340) KIP-880: X509 SAN based SPIFFE URI ACL within mTLS Client Certificates

2022-10-29 Thread Bart Van Bos (Jira)
Bart Van Bos created KAFKA-14340:


 Summary: KIP-880: X509 SAN based SPIFFE URI ACL within mTLS Client 
Certificates 
 Key: KAFKA-14340
 URL: https://issues.apache.org/jira/browse/KAFKA-14340
 Project: Kafka
  Issue Type: Wish
  Components: security
Affects Versions: 3.3.1
Reporter: Bart Van Bos


Istio and other SPIFFE based systems use client certificates to provide 
workload ID. Kafka currently does support Client Cert based AuthN/Z and mapping 
to ACL, but only by inspecting the CN field within a Client Certificate.
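
For illustration, a SPIFFE workload ID is carried as a URI entry in the
certificate's SubjectAlternativeName (SAN) extension rather than in the subject
CN. A minimal, hypothetical sketch of reading it with plain JDK APIs (the class
and method names are mine, not taken from Kafka or the projects linked below):

import java.security.cert.CertificateParsingException;
import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.List;

public class SpiffeSanReader {
    // GeneralName type 6 = uniformResourceIdentifier in the SAN extension
    private static final int SAN_URI = 6;

    public static String firstSpiffeId(X509Certificate cert)
            throws CertificateParsingException {
        Collection<List<?>> sans = cert.getSubjectAlternativeNames();
        if (sans == null) return null;
        for (List<?> entry : sans) {
            if ((Integer) entry.get(0) == SAN_URI) {
                String uri = (String) entry.get(1);
                // e.g. spiffe://cluster.local/ns/default/sa/my-workload
                if (uri.startsWith("spiffe://")) return uri;
            }
        }
        return null;
    }
}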

There are several POC implementations out there that provide a bespoke 
_KafkaPrincipalBuilder_ for this purpose. Two examples include:


 * [https://github.com/traiana/kafka-spiffe-principal]
 * [https://github.com/boeboe/kafka-istio-principal-builder] (written by myself)

 

This KIP request is to include this functionality in Kafka itself so end-users 
don't need to load custom and non-vetted Java classes.

The main use case for me is that a lot of Istio customers express the wish to 
be able to leverage SPIFFE based IDs for their Kafka ACL authorization.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Requesting permission to contribute to Kafka

2022-10-29 Thread Chris Egerton
Hi Bart,

You should be good to go now.

Cheers,

Chris

On Sat, Oct 29, 2022 at 12:05 PM Bart Van Bos  wrote:

> Hereby my details:
>
> Wiki ID: bart van bos (bartvan...@gmail.com)
> Jira ID: boeboe (bartvan...@gmail.com)
>
> BR,
>
> *Bart Van Bos*
> *SW & ICT Engineering - AllBits BVBA*
>
> Mobile: +32 485 630 628
> E-mail: bartvan...@gmail.com
> BTW: BE.0678.829.457
> IBAN: BE23 9731 7830 1491
> Address: Lobroeken 25, 3191 Hever
>


[DISCUSS] KIP-880: X509 SAN based SPIFFE URI ACL within mTLS Client Certificates

2022-10-29 Thread Bart Van Bos
Hi all,

I wanted to check whether there is any interest and enthusiasm to adopt
this feature request:
https://issues.apache.org/jira/browse/KAFKA-14340

Istio and other *SPIFFE* based systems use X509 Client Certificates to
provide workload ID. Kafka currently does support Client Cert based AuthN/Z
and mapping to ACL, but only by inspecting the CN field within a Client
Certificate.

There are several POC implementations out there that provide a bespoke
*KafkaPrincipalBuilder* for this purpose. Two examples
include:


   - https://github.com/traiana/kafka-spiffe-principal
   - https://github.com/boeboe/kafka-istio-principal-builder (written by
     myself)

The gist is to introspect X509 based client certificates, look for a URI
based SPIFFE entry in the SAN extension, and return that as a principal
that can be used to write ACL rules.
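
To make the idea concrete, below is a rough, hypothetical sketch of what such
a builder could look like. It is my own illustration, not code taken from the
repositories above, and a production version would likely also need to
implement KafkaPrincipalSerde so principals can be forwarded between brokers.

import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.List;

import org.apache.kafka.common.security.auth.AuthenticationContext;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.auth.KafkaPrincipalBuilder;
import org.apache.kafka.common.security.auth.SslAuthenticationContext;

public class SpiffePrincipalBuilder implements KafkaPrincipalBuilder {

    @Override
    public KafkaPrincipal build(AuthenticationContext context) {
        if (context instanceof SslAuthenticationContext) {
            try {
                Certificate[] chain =
                        ((SslAuthenticationContext) context).session().getPeerCertificates();
                if (chain.length > 0 && chain[0] instanceof X509Certificate) {
                    String spiffeId = firstSpiffeUri((X509Certificate) chain[0]);
                    if (spiffeId != null) {
                        // The SPIFFE URI becomes the principal name used in ACLs,
                        // e.g. User:spiffe://cluster.local/ns/default/sa/my-workload
                        return new KafkaPrincipal(KafkaPrincipal.USER_TYPE, spiffeId);
                    }
                }
            } catch (Exception e) {
                // no usable client certificate; fall back to anonymous
            }
        }
        return KafkaPrincipal.ANONYMOUS;
    }

    // SAN GeneralName type 6 = URI; return the first spiffe:// entry, if any
    private static String firstSpiffeUri(X509Certificate cert) throws Exception {
        Collection<List<?>> sans = cert.getSubjectAlternativeNames();
        if (sans == null) return null;
        for (List<?> entry : sans) {
            if ((Integer) entry.get(0) == 6
                    && ((String) entry.get(1)).startsWith("spiffe://")) {
                return (String) entry.get(1);
            }
        }
        return null;
    }
}

The broker would then load such a class via the principal.builder.class
configuration, and ACL rules could be written directly against the SPIFFE URI
principal.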

This KIP request is to include this functionality in Kafka itself so
end-users don't need to load custom and non-vetted Java
classes/implementations.

The main use case for me is that a lot of Istio customers express the wish
to be able to leverage SPIFFE based IDs for their Kafka ACL authorization.
This eliminates the need for sidecars on the broker side or
custom *EnvoyFilters* and other less optimal implementations to integrate
Kafka into an Istio secured Kubernetes environment.

I believe this would make for a better integration between the Istio/SPIFFE
and Kafka ecosystems.

PS: I could use some advice on the provided implementation as well, because I
do not have experience committing or contributing code to Kafka.

Best regards,
*Bart Van Bos*
*SW & ICT Engineering - AllBits BVBA*

Mobile: +32 485 630 628
E-mail: bartvan...@gmail.com
BTW: BE.0678.829.457
IBAN: BE23 9731 7830 1491
Address: Lobroeken 25, 3191 Hever


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1326

2022-10-29 Thread Apache Jenkins Server
See 




Odd behaviour (bug?) - seek with "latest" offset reset strategy

2022-10-29 Thread Dan S
Hello,

I opened a PR to add slightly more detailed documentation to seek(), as I
had spent hours googling when I wanted to use it. After some great reviews
from @showuon, I added some integration tests, and I noticed very odd
behaviour:

https://github.com/apache/kafka/pull/12753/files#r1007760016

In these tests, we start with 10 messages going to a topic partition and
getting consumed. Then, we seek to an invalid offset. Based on
https://medium.com/lydtech-consulting/kafka-consumer-auto-offset-reset-d3962bad2665
the offset reset should kick in, and as new records show up, they should be
read.

What I have observed (the tests seem to pass locally) is:

After the above, if I poll, I get an empty list, which makes sense (we're
at the end, waiting for new messages)

If I seek to offset 17, and then add 10 more messages, the next message I
get is offset 17, which is not what the link says, but sort of makes sense,
because it's what I asked for.

If, however, I seek to 27 and add 10 messages, the next offset I get is 0,
which seems plain wrong. We're neither at the old end (offset 10), nor at
the new end (offset 20), nor where we asked to be (offset 27).
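
For reference, here is a rough, self-contained sketch of the sequence I mean,
assuming a single-partition topic named "test", a local broker, and
auto.offset.reset=latest; the names and offsets are illustrative and not
copied from the actual test in the PR:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SeekPastEndExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "seek-demo");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        TopicPartition tp = new TopicPartition("test", 0);
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(tp));

            // 10 records (offsets 0-9) were already produced and consumed here.
            consumer.poll(Duration.ofSeconds(1));

            // Seek past the current end offset (10).
            consumer.seek(tp, 27);

            // ... 10 more records (offsets 10-19) are produced externally ...

            // The surprising part: this poll returns records starting at offset 0.
            for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(5))) {
                System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
            }
        }
    }
}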

Am I doing something wrong/missing something, or is there a bug, and if so,
what should I do (file a Jira, add a fix to the PR, open a new PR)? What
is the desired behaviour? Is it to get the message at offset 11 in both cases?

Thanks,


Dan


add my jira id to contributors

2022-10-29 Thread Arun Raju
Can my Jira ID *arunbhargav* be added to the contributor list?