Hi, Mayuresh,

It seems to me that there are two common use cases for an authorizer: (1) use
the default SimpleAuthorizer and the kafka-acl tool to do authorization; (2) use
a customized authorizer and an external tool for authorization. Do you
think there is a use case for a customized authorizer and kafka-acl at the
same time? If not, it's better not to complicate the kafka-acl api.

Thanks,

Jun



On Mon, Feb 13, 2017 at 10:35 AM, Mayuresh Gharat <
gharatmayures...@gmail.com> wrote:

> Hi Jun,
>
> Thanks for the review and comments. Please find the replies inline :
>
> This is so that in the future, we can extend to types like group.
> ---> Yep, I did think the same. But since the SocketServer was always
> creating the User type, it wasn't actually used. If we go ahead with the
> changes in this KIP, we will give this power of creating different
> Principal types to the PrincipalBuilder (which users can define
> themselves). That way Kafka will not have to deal with handling this, so
> the Principal building and Authorization will be opaque to Kafka, which
> seems like the expected behavior.
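>
> For illustration, here is a rough sketch of what such a custom
> PrincipalBuilder could look like (ServicePrincipal and its fields are
> invented for this example and are not part of the KIP):
>
>     import java.security.Principal;
>     import java.util.Map;
>
>     import org.apache.kafka.common.KafkaException;
>     import org.apache.kafka.common.network.Authenticator;
>     import org.apache.kafka.common.network.TransportLayer;
>     import org.apache.kafka.common.security.auth.PrincipalBuilder;
>
>     // Hypothetical principal type that carries more than just a name.
>     class ServicePrincipal implements Principal {
>         private final String name;
>         private final String serviceGroup;
>
>         ServicePrincipal(String name, String serviceGroup) {
>             this.name = name;
>             this.serviceGroup = serviceGroup;
>         }
>
>         @Override
>         public String getName() {
>             return name;
>         }
>
>         public String serviceGroup() {
>             return serviceGroup;
>         }
>     }
>
>     public class ServicePrincipalBuilder implements PrincipalBuilder {
>
>         @Override
>         public void configure(Map<String, ?> configs) {
>             // Nothing to configure in this sketch.
>         }
>
>         @Override
>         public Principal buildPrincipal(TransportLayer transportLayer,
>                                         Authenticator authenticator) throws KafkaException {
>             try {
>                 // Start from the authenticated peer principal on the
>                 // connection and attach whatever extra fields the custom
>                 // Authorizer needs.
>                 Principal peer = transportLayer.peerPrincipal();
>                 return new ServicePrincipal(peer.getName(), "ALPHA-GAMMA");
>             } catch (Exception e) {
>                 throw new KafkaException("Failed to build principal", e);
>             }
>         }
>
>         @Override
>         public void close() throws KafkaException {
>             // Nothing to clean up.
>         }
>     }
>
> Kafka would just hand this Principal to the configured Authorizer without
> interpreting it, which is the "opaque" behavior described above.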
>
>
> Hmm, normally, the configurations you specify for plug-ins refer to those
> needed to construct the plug-in object. So, it's kind of weird to use that
> to call a method. For example, why can't principalBuilderService.rest.url
> be passed in through the configure() method, and the implementation can
> use that to build the principal? This way, there is only a single method
> to compute the principal in a consistent way in the broker and in the
> kafka-acl tool.
> ----> We can do that as well. But since the rest url is not related to the
> Principal, it seems out of place to me to pass it every time we have to
> create a Principal. I should replace "principalConfigs" with
> "principalProperties", since I was trying to differentiate the
> configs/properties that are used to create the PrincipalBuilder class from
> the ones used to create the Principal(s) themselves.
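>
> To make the two options concrete, here is a minimal sketch of the
> configure()-based approach, assuming the existing PrincipalBuilder
> interface (the config key and class name are illustrative only):
>
>     import java.security.Principal;
>     import java.util.Map;
>
>     import org.apache.kafka.common.KafkaException;
>     import org.apache.kafka.common.network.Authenticator;
>     import org.apache.kafka.common.network.TransportLayer;
>     import org.apache.kafka.common.security.auth.PrincipalBuilder;
>
>     public class RestPrincipalBuilder implements PrincipalBuilder {
>
>         private String restUrl;
>
>         @Override
>         public void configure(Map<String, ?> configs) {
>             // The service endpoint is a property of the plug-in itself, so
>             // it is supplied once here instead of on every
>             // buildPrincipal() call.
>             this.restUrl = (String) configs.get("principalBuilderService.rest.url");
>         }
>
>         @Override
>         public Principal buildPrincipal(TransportLayer transportLayer,
>                                         Authenticator authenticator) throws KafkaException {
>             try {
>                 Principal peer = transportLayer.peerPrincipal();
>                 // A real implementation would call restUrl with the peer's
>                 // name/cert details and build a richer principal from the
>                 // response; this stub just passes the peer principal
>                 // through.
>                 return peer;
>             } catch (Exception e) {
>                 throw new KafkaException("Failed to build principal via " + restUrl, e);
>             }
>         }
>
>         @Override
>         public void close() throws KafkaException {
>         }
>     }
>
> The question above is whether this plug-in level config is the right place
> for the rest url, or whether it should be passed per Principal as in the
> kafka-acls.sh example further down the thread.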
>
>
> For LinkedIn's use case, do you actually use the kafka-acl tool? My
> understanding is that LinkedIn does authorization through an external tool.
> ----> For LinkedIn's use case we don't actually use the kafka-acl tool
> right now. As per the discussion that we had on
> https://issues.apache.org/jira/browse/KAFKA-4454, we thought it would be
> good to make the kafka-acl tool changes so that it is flexible, and we
> might even be able to use it in the future.
>
> It seems it's simpler if kafka-acl doesn't need to understand the
> principal builder. The tool does authorization based on a string name,
> which is expected to match the principal name. So, I am wondering why the
> tool needs to know the principal builder.
> ----> If we don't make this change, I am not sure how clients/end users
> will be able to use this tool if they have their own Authorizer that does
> Authorization based on a Principal that has more information than just the
> name and type.
>
> What if we only make the following changes: pass the java principal in the
> session, and in SimpleAuthorizer, construct the KafkaPrincipal from the
> java principal name. Will that work for LinkedIn?
> ----> This can work for LinkedIn, but as explained above, it does not seem
> like a complete design from an open source point of view.
>
> Thanks,
>
> Mayuresh
>
>
> On Thu, Feb 9, 2017 at 11:29 AM, Jun Rao <j...@confluent.io> wrote:
>
> > Hi, Mayuresh,
> >
> > Thanks for the reply. A few more comments below.
> >
> > On Wed, Feb 8, 2017 at 9:14 PM, Mayuresh Gharat <
> > gharatmayures...@gmail.com>
> > wrote:
> >
> > > Hi Jun,
> > >
> > > Thanks for the review. Please find the responses inline.
> > >
> > > 1. It seems the problem that you are trying to address is that the
> > > java principal returned from KafkaChannel may have additional fields
> > > beyond the name that are needed during authorization. Have you
> > > considered a customized PrincipalBuilder that extracts all needed
> > > fields from the java principal and squeezes them as a json in the name
> > > of the returned principal? Then, the authorizer can just parse the json
> > > and extract the needed fields.
> > > ---> Yes, we had thought about this. We use a third party library that
> > > takes in the passed in cert and creates the Principal. This Principal
> > > is then used by the library to make the decision (ALLOW/DENY) when we
> > > call it in the Authorizer. It does not have an API to create the
> > > Principal from a String. Even if it did, we would still have to be
> > > aware of the internal details of the library, like the field values it
> > > creates from the certs, the defaults, and so on.
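> > >
> > > As a rough sketch of why that approach gets awkward, packing the extra
> > > fields into the name as json would look something like this (the field
> > > names are invented for illustration):
> > >
> > >     import java.security.Principal;
> > >
> > >     // The extra fields are hidden inside the name string...
> > >     public final class JsonNamePrincipal implements Principal {
> > >         private final String jsonName;
> > >
> > >         public JsonNamePrincipal(String name, String serviceGroup, String cluster) {
> > >             this.jsonName = String.format(
> > >                 "{\"name\":\"%s\",\"serviceGroup\":\"%s\",\"cluster\":\"%s\"}",
> > >                 name, serviceGroup, cluster);
> > >         }
> > >
> > >         @Override
> > >         public String getName() {
> > >             // ...and every custom Authorizer has to parse this json
> > >             // back out, which couples it to the internals of whatever
> > >             // library produced the fields.
> > >             return jsonName;
> > >         }
> > >     }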
> > >
> > > 2. Could you explain how the default authorizer works now? Currently,
> > > the code just compares the two principal objects. Are we converting the
> > > java principal to a KafkaPrincipal there?
> > > ---> The SimpleAclAuthorizer currently expects that the Principal it
> > > fetches from the Session object is an instance of KafkaPrincipal. It
> > > then compares it with the KafkaPrincipal extracted from the stored
> > > ACLs. In this case, we can construct the KafkaPrincipal object on the
> > > fly by using the name of the Principal as follows:
> > >
> > > val principal = session.principal
> > > val kafkaPrincipal = new KafkaPrincipal(KafkaPrincipal.USER_TYPE,
> > > principal.getName)
> > >
> > > I was also planning to get rid of the principalType field in
> > > KafkaPrincipal, as it is always set to "User" in the SocketServer
> > > currently. After this KIP, it will no longer be used in the
> > > SocketServer. But to maintain backwards compatibility of kafka-acls.sh,
> > > I preserved it.
> > >
> > >
> > >
> > This is so that in the future, we can extend to types like group.
> >
> >
> > > 3. Do we need to add the following method in PrincipalBuilder? The
> > > configs are already passed in through configure(), and an
> > > implementation can cache them and use them in buildPrincipal(). It's
> > > also not clear to me where we call the new and the old method, and
> > > whether both will be called or only one of them will be called.
> > > Principal buildPrincipal(Map<String, ?> principalConfigs);
> > > ---> My thought was that the configure() method will be used to build
> > > the PrincipalBuilder class object itself. This follows the same pattern
> > > as how the Authorizer gets configured. The buildPrincipal(Map<String, ?>
> > > principalConfigs) method will be used to build individual principals.
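> > >
> > > In other words, the proposal is roughly the following two-method
> > > interface (a sketch only, not the final signatures):
> > >
> > >     import java.security.Principal;
> > >     import java.util.Map;
> > >
> > >     import org.apache.kafka.common.Configurable;
> > >     import org.apache.kafka.common.KafkaException;
> > >     import org.apache.kafka.common.network.Authenticator;
> > >     import org.apache.kafka.common.network.TransportLayer;
> > >
> > >     public interface PrincipalBuilder extends Configurable {
> > >
> > >         // Existing method, called on the broker for every new
> > >         // connection.
> > >         Principal buildPrincipal(TransportLayer transportLayer,
> > >                                  Authenticator authenticator) throws KafkaException;
> > >
> > >         // Proposed method, called from command line tools such as
> > >         // kafka-acls.sh, where there is no connection and the inputs
> > >         // (e.g. name=bob type=USER_PRINCIPAL) arrive as key/value
> > >         // properties.
> > >         Principal buildPrincipal(Map<String, ?> principalConfigs);
> > >
> > >         void close() throws KafkaException;
> > >     }
> > >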
> > > Let me give an example with kafka-acls.sh:
> > >
> > >    - bin/kafka-acls.sh --principalBuilder
> > >      userDefinedPackage.kafka.security.PrincipalBuilder
> > >      --principalBuilder-properties principalBuilderService.rest.url=URL
> > >      --authorizer kafka.security.auth.SimpleAclAuthorizer
> > >      --authorizer-properties zookeeper.connect=localhost:2181 --add
> > >      --allow-principal name=bob type=USER_PRINCIPAL
> > >      --allow-principal name=ALPHA-GAMMA-SERVICE type=SERVICE_PRINCIPAL
> > >      --allow-hosts Host1,Host2 --operations Read,Write --topic Test-topic
> > >       1. userDefinedPackage.kafka.security.PrincipalBuilder is the
> > >          user-defined PrincipalBuilder class.
> > >       2. principalBuilderService.rest.url=URL can point to a remote
> > >          service that provides an HTTP endpoint which takes in a set of
> > >          parameters and returns the Principal.
> > >       3. name=bob type=USER_PRINCIPAL can be used by the
> > >          PrincipalBuilder to create a UserPrincipal with the name bob.
> > >       4. name=ALPHA-GAMMA-SERVICE type=SERVICE_PRINCIPAL can be used by
> > >          the PrincipalBuilder to create a ServicePrincipal with the name
> > >          ALPHA-GAMMA-SERVICE.
> > >    - This seems more flexible and intuitive to me from the end user's
> > >      perspective.
> > >
> > >
> > Hmm, normally, the configurations you specify for plug-ins refer to
> > those needed to construct the plug-in object. So, it's kind of weird to
> > use that to call a method. For example, why can't
> > principalBuilderService.rest.url be passed in through the configure()
> > method, and the implementation can use that to build the principal? This
> > way, there is only a single method to compute the principal in a
> > consistent way in the broker and in the kafka-acl tool.
> >
> > For LinkedIn's use case, do you actually use the kafka-acl tool? My
> > understanding is that LinkedIn does authorization through an external
> > tool. It seems it's simpler if kafka-acl doesn't need to understand the
> > principal builder. The tool does authorization based on a string name,
> > which is expected to match the principal name. So, I am wondering why
> > the tool needs to know the principal builder. What if we only make the
> > following changes: pass the java principal in the session, and in
> > SimpleAuthorizer, construct the KafkaPrincipal from the java principal
> > name. Will that work for LinkedIn?
> >
> >
> >
> > > Principal buildPrincipal(Map<String, ?> principalConfigs) will be
> > > called from the command line client kafka-acls.sh, while the other API
> > > will be called at runtime when Kafka receives a client request over the
> > > request channel.
> > >
> > > 4. The KIP has "If users use their custom PrincipalBuilder, they will
> > > have to implement their custom Authorizer, as the out-of-the-box
> > > Authorizer that Kafka provides uses KafkaPrincipal." This is not ideal
> > > for existing users. Could we avoid that?
> > > ---> Yes, this is possible to avoid if we do point 2.
> > >
> > >
> > > Thanks,
> > >
> > > Mayuresh
> > >
> > > On Wed, Feb 8, 2017 at 3:31 PM, Jun Rao <j...@confluent.io> wrote:
> > >
> > > > Hi, Mayuresh,
> > > >
> > > > Thanks for the KIP. A few comments below.
> > > >
> > > > 1. It seems the problem that you are trying to address is that the
> > > > java principal returned from KafkaChannel may have additional fields
> > > > beyond the name that are needed during authorization. Have you
> > > > considered a customized PrincipalBuilder that extracts all needed
> > > > fields from the java principal and squeezes them as a json in the
> > > > name of the returned principal? Then, the authorizer can just parse
> > > > the json and extract the needed fields.
> > > >
> > > > 2. Could you explain how the default authorizer works now? Currently,
> > > > the code just compares the two principal objects. Are we converting
> > > > the java principal to a KafkaPrincipal there?
> > > >
> > > > 3. Do we need to add the following method in PrincipalBuilder? The
> > > > configs are already passed in through configure(), and an
> > > > implementation can cache them and use them in buildPrincipal(). It's
> > > > also not clear to me where we call the new and the old method, and
> > > > whether both will be called or only one of them will be called.
> > > > Principal buildPrincipal(Map<String, ?> principalConfigs);
> > > >
> > > > 4. The KIP has "If users use their custom PrincipalBuilder, they will
> > > > have to implement their custom Authorizer, as the out-of-the-box
> > > > Authorizer that Kafka provides uses KafkaPrincipal." This is not
> > > > ideal for existing users. Could we avoid that?
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > >
> > > > On Fri, Feb 3, 2017 at 11:25 AM, Mayuresh Gharat <
> > > > gharatmayures...@gmail.com> wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > It seems that there is no further concern with KIP-111. At this
> > > > > point we would like to start the voting process. The KIP can be
> > > > > found at
> > > > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=67638388
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Mayuresh
> > > > >
> > > >
> > >
> > > Thanks,
> > >
> > > --
> > > -Regards,
> > > Mayuresh R. Gharat
> > > (862) 250-7125
> > >
> >
> > Thanks,
> >
> > Jun
> >
>
>
>
> --
> -Regards,
> Mayuresh R. Gharat
> (862) 250-7125
>
