Hi all,

First of all, apologies for digging up this year-old thread.

I believe that without further changes we will lose support for a couple of
important SCRAM management scenarios after the transition to ZooKeeper-less
Kafka.

One of the scenarios is migrating a cluster. Topics and their configuration
can be read and re-created in a new cluster, ACLs can be copied over as well,
and even messages can be mirrored. The SCRAM credentials could also be copied
from one ZooKeeper/chroot to another, but without ZooKeeper this will no
longer be possible, as there are no Admin client operations for reading and
setting the hashed/encrypted credentials.

Another scenario is a federated group of clusters in which clients use the
same set of credentials across all clusters. The hashed/encrypted credentials
can be pre-computed and then added to each cluster's ZooKeeper/chroot, or
simply copied over from the first cluster to the others. With access to
ZooKeeper this can be done without storing the actual password anywhere; only
the hashed/encrypted credentials are moved around. But because the Upsert
operation requires the actual password, this will no longer be possible.
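For illustration, the credentials in question are the per-user salt, StoredKey, ServerKey, and iteration count defined by RFC 5802. A rough sketch of how they could be pre-computed outside the cluster (the function name and defaults here are my own, not Kafka code):

```python
import base64
import hashlib
import hmac
import os
from typing import Optional

def scram_sha256_credentials(password: str, salt: Optional[bytes] = None,
                             iterations: int = 4096) -> dict:
    """Pre-compute SCRAM-SHA-256 credentials as defined by RFC 5802."""
    salt = salt if salt is not None else os.urandom(16)
    # SaltedPassword := Hi(password, salt, i), which is PBKDF2 with HMAC-SHA-256
    salted = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()  # StoredKey := H(ClientKey)
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    return {
        "salt": base64.b64encode(salt).decode(),
        "stored_key": base64.b64encode(stored_key).decode(),
        "server_key": base64.b64encode(server_key).decode(),
        "iterations": iterations,
    }
```

Only the output of this computation ever needs to leave the machine where the password is typed; that is the property the federation scenario relies on.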

I think we could maintain support for both of these scenarios if we expand the 
broker-side API slightly with support for these two operations:

- Fetching the encrypted credentials for a given SCRAM user
- Creating a SCRAM user with already encrypted credentials instead of with a 
password

Does this make sense? Should we have another KIP?

It seems that at least the first operation was originally part of this KIP,
but Rajini flagged it as a concern:

> With AdminClient, we have been more conservative because we are now giving
> access over the network. You cannot retrieve any sensitive broker configs,
> even in encrypted form. I think it would be better to follow the same model
> for SCRAM credentials. It is not easy to decode the encoded SCRAM
> credentials, but it is not impossible. In particular, it is prone to
> dictionary attacks. I think the only information we need to return from
> `listScramUsers` is the SCRAM mechanism that is supported for that user.

Certainly not impossible, but the mechanisms use a salt and a configurable
number of iterations, which I believe makes dictionary attacks impractical.
Besides, calls to broker APIs can be authenticated, which keeps access to the
encrypted credentials limited.
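To make that concrete, here is a small sketch (parameters are illustrative) of why the salt matters: the same password hashes to a different value under each user's salt, so no precomputed dictionary of hashes applies, and an attacker must pay the full iteration cost per candidate password per user:

```python
import hashlib
import os

password = b"hunter2"        # a candidate password from a dictionary
salt_alice = os.urandom(16)  # each user gets a fresh random salt
salt_bob = os.urandom(16)
iterations = 8192            # cost factor the attacker pays on every guess

h_alice = hashlib.pbkdf2_hmac("sha256", password, salt_alice, iterations)
h_bob = hashlib.pbkdf2_hmac("sha256", password, salt_bob, iterations)

# Same password, different salts: no shared precomputed table applies.
print(h_alice != h_bob)  # True
```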

What do you think?

Best,

--
Igor

On Tue, Jun 30, 2020, at 10:45 PM, Colin McCabe wrote:
> Hi Rajini,
> 
> OK.  Let's remove the encrypted credentials from ListScramUsersResponse 
> and the associated API.  I have updated the KIP-- take a look when you 
> get a chance.
> 
> best,
> Colin
> 
> 
> On Fri, May 15, 2020, at 06:54, Rajini Sivaram wrote:
>> Hi Colin,
>> 
>> We have used different approaches for kafka-configs using ZooKeeper and
>> using brokers until now. This is based on the fact that whatever you can
>> access using kafka-configs with ZooKeeper, you can also access directly
>> using ZooKeeper shell. For example, you can retrieve any config stored in
>> ZooKeeper including sensitive configs. They are encrypted, so you will need
>> the secret for decoding it, but you can see all encrypted values. Similarly
>> for SCRAM credentials, you can retrieve the encoded credentials. We allow
>> this because if you have physical access to ZK, you could have obtained it
>> from ZK anyway. Our recommendation is to use ZK for SCRAM only if ZK is
>> secure.
>> 
>> With AdminClient, we have been more conservative because we are now giving
>> access over the network. You cannot retrieve any sensitive broker configs,
>> even in encrypted form. I think it would be better to follow the same model
>> for SCRAM credentials. It is not easy to decode the encoded SCRAM
>> credentials, but it is not impossible. In particular, it is prone to
>> dictionary attacks. I think the only information we need to return from
>> `listScramUsers` is the SCRAM mechanism that is supported for that user.
>> 
>> Regards,
>> 
>> Rajini
>> 
>> 
>> On Fri, May 15, 2020 at 9:25 AM Tom Bentley <tbent...@redhat.com> wrote:
>> 
>>> Hi Colin,
>>> 
>>> The AdminClient should do the hashing, right?  I don't see any advantage to
>>>> doing it externally.
>>> 
>>> 
>>> I'm happy so long as the AdminClient interface doesn't require users to do
>>> the hashing themselves.
>>> 
>>> I do think we should support setting the salt explicitly, but really only
>>>> for testing purposes.  Normally, it should be randomized.
>>>> 
>>> 
>>>> 
>>>> I also wonder a little about consistency with the other APIs which have
>>>>> separate create/alter/delete methods. I imagine you considered exposing
>>>>> separate methods in the Java API,  implementing them using the same
>>> RPC,
>>>>> but can you share your rationale?
>>>> 
>>>> I wanted this to match up with the command-line API, which doesn't
>>>> distinguish between create and alter.
>>>> 
>>> 
>>> OK, makes sense.
>>> 
>>> Cheers,
>>> 
>>> Tom
>>> 
>> 
> 
