> From: Jarek Jarcec Cecho
> To: dev@kafka.apache.org
> Subject: Re: Two open issues on Kafka security
> Date: Thu, 2 Oct 2014 08:33:45 -0700
>
> Thanks for getting back Jay!
>
> For the interface - Looking at Sentry and other authorization libraries
> in the Hadoop…
Hey Michael,
Cool. Yeah I think in practice there isn't a huge difference: since
Kafka requests are just length-prefixed packets, the only difference is
the presence or absence of the header fields. Having them there will
make life simpler and more consistent for client implementations since
this wi…
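A minimal Java sketch of the length-prefixed framing being discussed; the class and method names are illustrative, not Kafka's actual code:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    // Illustrative helper: each packet is a 4-byte big-endian length
    // followed by that many payload bytes.
    public final class LengthPrefixedFraming {

        // Write one frame: the length field, then the payload.
        public static void writeFrame(DataOutputStream out, byte[] payload)
                throws IOException {
            out.writeInt(payload.length);
            out.write(payload);
            out.flush();
        }

        // Read one frame: read the length, then exactly that many bytes.
        public static byte[] readFrame(DataInputStream in) throws IOException {
            int size = in.readInt();
            byte[] payload = new byte[size];
            in.readFully(payload);
            return payload;
        }
    }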
Hi Jay,
Yup, in both SASL & (non-blocking) SSL the runtime libs provide an
“engine” abstraction that just takes in & produces buffers of bytes
containing the authentication messages. The application is responsible for
transmitting them… somehow. I was picturing a simple length-prefixed
packet.
Tha…
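A sketch of driving that client-side “engine” over length-prefixed packets, using the standard javax.security.sasl API and reusing the framing helper sketched earlier; the mechanism, service name, and host below are assumptions for the example:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import javax.security.sasl.Sasl;
    import javax.security.sasl.SaslClient;

    // Run the client half of a SASL exchange over framed packets.
    static void saslHandshake(DataInputStream in, DataOutputStream out)
            throws Exception {
        SaslClient client = Sasl.createSaslClient(
                new String[] {"GSSAPI"},   // candidate mechanisms
                null,                      // authorization id
                "kafka",                   // service protocol (illustrative)
                "broker1.example.com",     // server name (illustrative)
                null,                      // mechanism properties
                null);                     // callback handler, if needed

        // Some mechanisms send a first packet without seeing a challenge.
        if (client.hasInitialResponse())
            LengthPrefixedFraming.writeFrame(out,
                    client.evaluateChallenge(new byte[0]));

        // Exchange challenge/response packets until the engine is satisfied.
        while (!client.isComplete()) {
            byte[] challenge = LengthPrefixedFraming.readFrame(in);
            byte[] response = client.evaluateChallenge(challenge);
            if (response != null)
                LengthPrefixedFraming.writeFrame(out, response);
        }
    }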
Thanks for getting back Jay!
For the interface - Looking at Sentry and other authorization libraries in the
Hadoop ecosystem, it seems that “username” is primarily used to perform
authorization these days, and then IP for auditing. Hence I feel that
username+IP would be sufficient, at least for…
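If username+IP is the identity, the object the authentication layer hands to a pluggable authorizer could be as small as this; a hypothetical sketch, not an interface from the wiki:

    import java.net.InetAddress;

    // Hypothetical value object carrying the authenticated identity:
    // a principal name for authorization decisions, plus the client
    // address for auditing.
    public final class Session {
        public final String principal;          // e.g. "jarcec"
        public final InetAddress clientAddress; // kept for audit logs

        public Session(String principal, InetAddress clientAddress) {
            this.principal = principal;
            this.clientAddress = clientAddress;
        }
    }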
Here is the client side in ZK:
https://svn.apache.org/repos/asf/zookeeper/trunk/src/java/main/org/apache/zookeeper/client/ZooKeeperSaslClient.java
Note how they have a special Zookeeper request API that is used to
send the SASL bytes (e.g. see ZooKeeperSaslClient.sendSaslPacket).
This API follows…
Hey Michael,
WRT question 2, I think for SASL you do need the mechanism information,
but what I was talking about was the challenge/response byte[] that is
sent back and forth from the client to the server. My understanding is
that SASL gives you an api for the client and server to use to produce
t…
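The broker-side counterpart of the client loop sketched earlier, again with the standard javax.security.sasl API; the framing helpers, callback handler, and names are assumptions:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import javax.security.auth.callback.CallbackHandler;
    import javax.security.sasl.Sasl;
    import javax.security.sasl.SaslServer;

    // Broker side: feed client responses in, send challenges back, and
    // come out the other end knowing who the client is.
    static String saslAccept(DataInputStream in, DataOutputStream out,
                             CallbackHandler handler) throws Exception {
        SaslServer server = Sasl.createSaslServer(
                "GSSAPI",                  // mechanism the client picked
                "kafka",                   // service protocol (illustrative)
                "broker1.example.com",     // this broker's name (illustrative)
                null,                      // mechanism properties
                handler);                  // supplies credentials/secrets

        while (!server.isComplete()) {
            byte[] response = LengthPrefixedFraming.readFrame(in);
            byte[] challenge = server.evaluateResponse(response);
            if (challenge != null)
                LengthPrefixedFraming.writeFrame(out, challenge);
        }
        return server.getAuthorizationID();  // the authenticated identity
    }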
Hey Jarek,
I agree with the importance of separating authentication and
authorization. The question is what concept of identity is sufficient
to pass through to the authorization layer? Just a "user name"? Or
perhaps you also need the IP the request originated from? Whatever
these would be, it woul…
Regarding question #1, I’m not sure I follow you, Joe: you’re proposing (I
think) that the API take a byte[], but what will be in that array? A
serialized certificate if the client authenticated via SSL and the
principal name (perhaps normalized) if the client authenticated via
Kerberos?
Regarding…
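One hypothetical way to answer that question in code: normalize both cases into a single principal string, treating the byte[] as a DER-encoded certificate for SSL and as a UTF-8 principal name for Kerberos. The method name and dispatch are made up for the sketch:

    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509Certificate;

    static String principalFrom(byte[] identity, String authMethod)
            throws Exception {
        if ("SSL".equals(authMethod)) {
            // The bytes are assumed to be a DER-encoded X.509 certificate.
            X509Certificate cert = (X509Certificate) CertificateFactory
                    .getInstance("X.509")
                    .generateCertificate(new ByteArrayInputStream(identity));
            // e.g. "CN=jarcec,OU=eng,O=example"
            return cert.getSubjectX500Principal().getName();
        }
        // For Kerberos, assume a UTF-8 principal, e.g. "jarcec@EXAMPLE.COM",
        // which auth_to_local rules could later shorten to "jarcec".
        return new String(identity, StandardCharsets.UTF_8);
    }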
I’m following the security proposal wiki page [1] and this discussion, and I
would like to jump in with a few points if I might :) Let me start by saying
that I like the material and the discussion here, good work!
I was part of the team who originally designed and worked on Sentry and I
wanted t…
Hi Jonathan,
"Hadoop delegation tokens to enable MapReduce, Samza, or other frameworks
running in the Hadoop environment to access Kafka"
https://cwiki.apache.org/confluence/display/KAFKA/Security is on the list,
yup!
/***
Joe Stein
Founder, Principal Con…
This is not nearly as deep as the discussion so far, but I did want to
throw this idea out there to make sure we’ve thought about it.
The Kafka project should make sure that when deployed alongside a Hadoop
cluster from any major distributions that it can tie seamlessly into the
authentication and…
inline
On Tue, Sep 30, 2014 at 11:58 PM, Jay Kreps wrote:
> Hey Joe,
>
> For (1) what are you thinking for the PermissionManager api?
>
> The way I see it, the first question we have to answer is whether it
> is possible to make authentication and authorization independent. What
> I mean by that
Hey Gwen,
That makes sense.
I think this is one area where having pluggable authorization makes
the story a bit more complex since all the management of default
permissions or even how to ensure a user does or doesn't have a
permission is going to be specific to the authorization model a
particul…
Hey Joe,
For (1) what are you thinking for the PermissionManager api?
The way I see it, the first question we have to answer is whether it
is possible to make authentication and authorization independent. What
I mean by that is whether I can write an authorization library that
will work the same…
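The thread never shows the PermissionManager api itself, so here is one hypothetical shape for it, just to make the authentication/authorization split concrete; every name and parameter below is a guess:

    // Hypothetical pluggable authorizer: it sees only the identity the
    // authentication layer produced (principal + client IP), never how
    // that identity was established (SSL, Kerberos, ...). That separation
    // is what would let one authorization library work the same way
    // regardless of the authentication mechanism.
    public interface PermissionManager {
        // e.g. isAllowed("jarcec", "10.1.2.3", "READ", "topic:clicks")
        boolean isAllowed(String principal, String clientIp,
                          String operation, String resource);
    }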
<< we need to make it easy for secured clusters to pass audits (SOX, PCI
and friends)
I think this is the MVP for the security features for 0.9 as a guideline
for how we should be proceeding.
On Tue, Sep 30, 2014 at 7:25 PM, Gwen Shapira wrote:
> Re #2:
>
> I don't object to the "late authentic…
Re #2:
I don't object to the "late authentication" approach, but we need to
make it easy for secured clusters to pass audits (SOX, PCI and
friends).
So, we need to be able to configure a cluster as "secured" and, with
this config switch, drop the "nobody" user to zero privileges.
I liked the multi-port appro…
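A sketch of that default-deny switch on top of the hypothetical PermissionManager above; the "nobody" sentinel and the config-driven flag are assumptions for the example:

    // Hypothetical wrapper enforcing "secured cluster" semantics: when the
    // switch is on, an unauthenticated ("nobody") principal gets zero
    // privileges; everything else is delegated to the pluggable authorizer.
    public final class SecuredClusterGate {
        private final boolean secured;           // the "secured" config switch
        private final PermissionManager delegate;

        public SecuredClusterGate(boolean secured, PermissionManager delegate) {
            this.secured = secured;
            this.delegate = delegate;
        }

        public boolean authorize(String principal, String clientIp,
                                 String operation, String resource) {
            if (secured && (principal == null || "nobody".equals(principal)))
                return false;  // anonymous: zero privileges on a secured cluster
            return delegate.isAllowed(principal, clientIp, operation, resource);
        }
    }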
1) We need to support the most flexibility we can and make this transparent
to Kafka (to use Gwen's term). Any specific implementation is going to
make it not work with some solution, stopping people from using Kafka. That
is a reality because everyone just does it slightly differently enough. If…
Re #1:
Since auth_to_local is a Kerberos config, it's up to the admin to
decide how he likes the user names and set it up properly (or leave it
empty) and make sure the ACLs match. Simplified names may be needed if
the authorization system integrates with LDAP to get groups or
something fancy like…
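For reference, this is roughly what such a mapping can look like in krb5.conf; the realm and rule are invented for the example, and Hadoop-style auth_to_local settings use the same RULE syntax:

    [realms]
      EXAMPLE.COM = {
        # Strip the realm so jarcec@EXAMPLE.COM becomes plain "jarcec",
        # which is the name the ACLs would then have to match.
        auth_to_local = RULE:[1:$1@$0](.*@EXAMPLE\.COM)s/@.*//
        auth_to_local = DEFAULT
      }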