Hi Colin,

> Rather than breaking compatibility, we should simply add a new "incremental" 
> boolean to AlterConfigsOptions.  Callers can set this boolean to true when 
> they want the update to be incremental.  It should default to false so that 
> old code continues to work.

Agreed, let's do it this way.
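
Just so we're talking about the same thing, here is a rough sketch of how
I imagine the client-facing usage. The incremental() option is of course
the proposed addition (the name is up for discussion); everything else is
the existing AdminClient API:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigsOptions;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class IncrementalAlterConfigsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            // Only the entries we want to change are sent; with the proposed
            // incremental mode the broker keeps all other existing configs
            // instead of clearing them.
            Config update = new Config(Collections.singletonList(
                    new ConfigEntry("retention.ms", "86400000")));
            // incremental(true) is the proposed new option; it would default to
            // false so existing callers keep the current "replace everything"
            // behaviour.
            admin.alterConfigs(Collections.singletonMap(topic, update),
                    new AlterConfigsOptions().incremental(true))
                 .all().get();
        }
    }
}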

> Hmm.  I don't think AlterOperation is necessary.  If the user wants to delete 
> a configuration key named "foo", they can create a ConfigEntry with name = 
> "foo", value = null.

AlterConfigs' config value type is currently STRING, so the only option
is to use an empty string: changing the type to NULLABLE_STRING could
break client code that doesn't expect -1 as the string size. Rajini and
I had a conversation about this earlier in the discussion thread, let me
paste it here for context. At that point I was using the text
"<default>" for this functionality:
"4. We use "<default>" internally to store default quotas and other
defaults. But I don't think we should externalise that string. We use empty
string elsewhere for indicating default, we can do the same here. And we
use STRING rather than NULLABLE_STRING in describe configs etc. So we
should probably do the same for quotas."
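
So concretely, assuming we keep STRING on the wire and go with the
empty-string convention together with the incremental option above,
deleting the key "foo" from your example would look something like this
on the client side (reusing the admin and topic variables from the
sketch above):

// Assumption: with STRING on the wire, an empty value (rather than null)
// marks the entry for deletion / reset to default in incremental mode.
Config deleteFoo = new Config(Collections.singletonList(new ConfigEntry("foo", "")));
admin.alterConfigs(Collections.singletonMap(topic, deleteFoo),
        new AlterConfigsOptions().incremental(true)).all().get();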

> Yeah, this might be an excessive maintenance burden.  Maybe we should get rid 
> of the old zookeeper-based code, and just move towards having only a 
> KIP-248-based tool.  It's a breaking change, but it's clear to users that 
> it's occurring, and what the fix is (specifying --bootstrap-server instead of 
> --zookeeper).

Earlier Rajini raised the concern that direct zookeeper interaction is
required to add the SCRAM credentials used for validation when
inter-broker communication uses SCRAM: the credentials have to exist
before the brokers can start, and creating them is currently done by the
ConfigCommand. Therefore we can't completely get rid of the zookeeper
path yet either.
In my opinion though, on the longer term (and this is now a bit
off-topic), Kafka shouldn't use Zookeeper as a credentials store
directly, but should just provide an interface so that 3rd-party
authentication stores could be implemented. Then, similarly to the
authorizer, we could still ship Zookeeper as the default and have a
client that manages SCRAM credentials in ZK.
From this perspective I'd leave the command there, but put a warning
that the tool is deprecated and should only be used for setting up
SCRAM credentials.
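
Here's the kind of interface I have in mind - purely hypothetical, the
names are made up and it is not part of this KIP, just to illustrate the
direction:

import java.io.Closeable;

import org.apache.kafka.common.Configurable;
import org.apache.kafka.common.security.scram.ScramCredential;

// Hypothetical sketch: a pluggable credential store, analogous to the
// Authorizer interface. A ZooKeeper-backed implementation would be the
// shipped default, and a small client/tool would manage SCRAM credentials
// through it instead of talking to ZK directly.
public interface ScramCredentialStore extends Configurable, Closeable {

    // Return the stored credential for the given SCRAM mechanism and user,
    // or null if none exists.
    ScramCredential credential(String mechanism, String user);

    // Create or update a credential, e.g. when bootstrapping a cluster
    // before the brokers are started.
    void updateCredential(String mechanism, String user, ScramCredential credential);

    // Remove the credential for the given mechanism and user.
    void deleteCredential(String mechanism, String user);
}
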
What do you think?

Cheers,
Viktor


On Thu, May 3, 2018 at 7:47 PM, Colin McCabe <cmcc...@apache.org> wrote:
> On Thu, May 3, 2018, at 05:11, Viktor Somogyi wrote:
>> @Magnus, yes that is correct. Thanks for your feedback. Updated it with
>> this (which might be subject to change based on the conversation with
>> Colin): "The changes done will be incremental in version 1, opposed to the
>> atomic behavior in version 0. For instance in version 0 sending an update
>> for producer_byte_rate for userA would result in removing all previous data
>> and setting userA's config with producer_byte_rate. Now in version 1
>> opposed to version 0 it will add an extra config and keeps other existing
>> configs."
>
> Hi Viktor,
>
> AdminClient#alterConfigs is a public API which users have already written 
> code against.  If we silently change what it does to be incremental addition 
> rather than complete replacement of the existing configuration, we will break 
> all of that existing code.  If we do that, there is not even any way that 
> users can write code to support both broker versions.  AdminClient does not 
> expose any API that users can use to check broker version.  I think that 
> would be really bad for users.
>
> Rather than breaking compatibility, we should simply add a new "incremental" 
> boolean to AlterConfigsOptions.  Callers can set this boolean to true when 
> they want the update to be incremental.  It should default to false so that 
> old code continues to work.
>
>>
>> @Colin,
>> yes, I have/had a hard time finding a place for this operation. I think ADD
>> and DELETE should be on config level to allow complex use cases (if someone
>> builds their own tool based on the AdminClient), so users can add and
>> delete multiple configs in one request.
>
> Hmm.  I don't think AlterOperation is necessary.  If the user wants to delete 
> a configuration key named "foo", they can create a ConfigEntry with name = 
> "foo", value = null.
>
>> But also at the same time, SET is as you're suggesting really seems like a
>> flag that tells the AdminClient/AdminManager how they should behave.
>> However since the AdminClient matches protocol version with the broker via
>> the API_VERSIONS request, I think it would be enough to modify the
>> AdminManager that it should behave differently in case of an increased
>> protocol versions, if there is this extra flag set through
>> AlterConfigOptions (AdminClient sets the flag on the protocol, which will
>> be reflected after parsing in AdminManager). Also if we target this change
>> to 2.0 (June?), then we might not need the extra flag but make the behavior
>> break. What do you think?
>
> Right.  I think a flag in AlterConfigsRequest makes sense.  AdminClient can 
> set it based on a boolean field in AlterConfigsOptions.
>
>>
>> Keeping the --zookeeper option working is not infeasible of course - and as
>> per the community's feedback it may be the better option. Although one of
>> the goals is to put this new ConfigCommand to the tools module, which
>> doesn't have the dependency on the server code, it would be a bit harder.
>> Most likely I'd need to call into the Scala code with reflection, which
>> could be quite complicated.
>
> Yeah, this might be an excessive maintenance burden.  Maybe we should get rid 
> of the old zookeeper-based code, and just move towards having only a 
> KIP-248-based tool.  It's a breaking change, but it's clear to users that 
> it's occurring, and what the fix is (specifying --bootstrap-server instead of 
> --zookeeper).
>
> best,
> Colin
>
>>
>> Also rewrote the request semantics, hopefully it's more clear now.
>>
>> Let me know what do you think about this and thank you for your feedback.
>>
>> Cheers,
>> Viktor
>>
>> On Thu, Apr 26, 2018 at 7:06 PM, Colin McCabe <cmcc...@apache.org> wrote:
>>
>> > Hi Viktor,
>> >
>> > If I'm reading the KIP right, it looks like the new proposed verison of
>> > AlterConfigs sets an OperationType on a per-config basis:
>> >
>> >  > AlterConfigs Request (Version: 1) => [resources] validate_only
>> >  >   validate_only => BOOLEAN
>> >  >   resources => resource_type resource_name [configs]
>> >  >     resource_type => INT8
>> >  >     resource_name => STRING
>> >  >     configs => config_name config_value config_operation
>> >  >       config_name => STRING
>> >  >       config_value => STRING
>> >  >       config_operation => INT8 [NEW ADDITION]
>> >  >
>> >  > Request Semantics:
>> >  >
>> >  >      By default in the broker we parse an AlterConfigRequest version 0
>> >  > with Unknown operation and handle it with the currently existing
>> > behavior.
>> >  > Version 1 requests however must have the operation set to other than
>> >  > Unknown, otherwise an InvalidRequestException will be thrown.
>> >  >          Set operation also does Add if needed to be backward
>> > compatible
>> >  > with the existing ConfigCommand semantics.
>> >
>> > However, this seems like a configuration that should be global to the
>> > whole AlterConfigs request, right?  It doesn't make sense to have one
>> > configuration key use AlterOperation.Set and the other use
>> > AlterOperation.Add -- the Set one specifies that we should overwrite the
>> > whole node in ZK.
>> >
>> > Another consideration here is that we should do this in a compatible
>> > fashion in AdminClient.  Existing code that relies on the "set everything"
>> > behavior should not break.  The best way to do this is to add a boolean to
>> > ./clients/src/main/java/org/apache/kafka/clients/admin/AlterConfigsOptions.java
>> > , specifying whether we want to clear everything that hasn't been
>> > specified, or not.  This should default to true so that existing code can
>> > continue to work.... Unless we believe that the existing AlterConfigs
>> > behavior is so broken that it should be changed, even in a
>> > compatibility-breaking way.
>> >
>> > Similarly, for other tools, we managed to support both the zookeeper-based
>> > way and the new way in the same tool for a while.  This seems like
>> > something users would really want-- is it truly infeasible to do here?  The
>> > Java code could call into the Scala code as necessary when the zk flag was
>> > specified, right?
>> >
>> > best,
>> > Colin
>> >
>> >
>> > On Thu, Apr 26, 2018, at 01:47, Magnus Edenhill wrote:
>> > > Hi Viktor,
>> > >
>> > > after speaking to Rajini it seems like this KIP will allow clients to
>> > > perform incremental configuration updates with AlterConfigs, only
>> > providing
>> > > the settings
>> > > that it wants to change, as opposed to the current atomic behaviour where
>> > > all settings
>> > > need to be provided to avoid having them revert to their default values.
>> > >
>> > > If this is indeed the case, could you update the KIP to make this more
>> > > clear?
>> > > I.e., that using Version 1 of AlterConfigs has the incremental behaviour,
>> > > while
>> > > version 0 is atomic.
>> > >
>> > > Thanks,
>> > > Magnus
>> > >
>> > >
>> > >
>> > >
>> > > 2018-04-16 13:27 GMT+02:00 Viktor Somogyi <viktorsomo...@gmail.com>:
>> > >
>> > > > Hi Rajini,
>> > > >
>> > > > The current ConfigCommand would still be possible to use, therefore
>> > those
>> > > > who wish to set up SCRAM or initial quotas would be able to continue
>> > doing
>> > > > it through kafka-run-class.sh.
>> > > > In an ideal world I'd keep it in the current ConfigCommand command so
>> > we
>> > > > wouldn't mix the zookeeper and admin client configs. Perhaps I could
>> > create
>> > > > a kafka-configs-zookeeper.sh shell script for shortening the
>> > > > kafka-run-class command.
>> > > > What do you and others think?
>> > > >
>> > > > Thanks,
>> > > > Viktor
>> > > >
>> > > >
>> > > >
>> > > > On Mon, Apr 16, 2018 at 10:15 AM, Rajini Sivaram <
>> > rajinisiva...@gmail.com>
>> > > > wrote:
>> > > >
>> > > > > Hi Viktor,
>> > > > >
>> > > > > The KIP proposes to remove the ability to configure using ZooKeeper.
>> > This
>> > > > > means we will no longer have the ability to start up a cluster with
>> > SCRAM
>> > > > > credentials since we first need to create SCRAM credentials before
>> > > > brokers
>> > > > > can start if the broker uses SCRAM for inter-broker communication
>> > and we
>> > > > > need SCRAM credentials for the AdminClient before we can create new
>> > ones.
>> > > > > For quotas as well, we will no longer be able to configure quotas
>> > until
>> > > > at
>> > > > > least one broker has been started. Perhaps, we ought to retain the
>> > > > ability
>> > > > > to configure using ZooKeeper for these initialization scenarios and
>> > > > support
>> > > > > only AdminClient for dynamic updates?
>> > > > >
>> > > > > What do others think?
>> > > > >
>> > > > > Regards,
>> > > > >
>> > > > > Rajini
>> > > > >
>> > > > > On Sun, Apr 15, 2018 at 10:28 AM, Ted Yu <yuzhih...@gmail.com>
>> > wrote:
>> > > > >
>> > > > > > +1
>> > > > > > -------- Original message --------From: zhenya Sun <
>> > toke...@126.com>
>> > > > > > Date: 4/15/18  12:42 AM  (GMT-08:00) To: dev <dev@kafka.apache.org
>> > >
>> > > > Cc:
>> > > > > > dev <dev@kafka.apache.org> Subject: Re: [VOTE] #2 KIP-248: Create
>> > New
>> > > > > > ConfigCommand That Uses The New AdminClient
>> > > > > > non-binding +1
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > > from my iphone!
>> > > > > > On 04/15/2018 15:41, Attila Sasvári wrote:
>> > > > > > Thanks for updating the KIP.
>> > > > > >
>> > > > > > +1 (non-binding)
>> > > > > >
>> > > > > > Viktor Somogyi <viktorsomo...@gmail.com> ezt írta (időpont: 2018.
>> > ápr.
>> > > > > 9.,
>> > > > > > H 16:49):
>> > > > > >
>> > > > > > > Hi Magnus,
>> > > > > > >
>> > > > > > > Thanks for the heads up, added the endianness to the KIP. Here
>> > is the
>> > > > > > > current text:
>> > > > > > >
>> > > > > > > "Double
>> > > > > > > A new type needs to be added to transfer quota values. Since the
>> > > > > protocol
>> > > > > > > classes in Kafka already uses ByteBuffers it is logical to use
>> > their
>> > > > > > > functionality for serializing doubles. The serialization is
>> > > > basically a
>> > > > > > > representation of the specified floating-point value according
>> > to the
>> > > > > > IEEE
>> > > > > > > 754 floating-point "double format" bit layout. The ByteBuffer
>> > > > > serializer
>> > > > > > > writes eight bytes containing the given double value, in Big
>> > Endian
>> > > > > byte
>> > > > > > > order, into this buffer at the current position, and then
>> > increments
>> > > > > the
>> > > > > > > position by eight.
>> > > > > > >
>> > > > > > > The implementation will be defined in
>> > > > > > > org.apache.kafka.common.protocol.types with the other protocol
>> > types
>> > > > > > and it
>> > > > > > > will have no default value much like the other types available
>> > in the
>> > > > > > > protocol."
>> > > > > > >
>> > > > > > > Also, I haven't changed the protocol docs yet but will do so in
>> > my PR
>> > > > > for
>> > > > > > > this feature.
>> > > > > > >
>> > > > > > > Let me know if you'd still add something.
>> > > > > > >
>> > > > > > > Regards,
>> > > > > > > Viktor
>> > > > > > >
>> > > > > > >
>> > > > > > > On Mon, Apr 9, 2018 at 3:32 PM, Magnus Edenhill <
>> > mag...@edenhill.se>
>> > > > > > > wrote:
>> > > > > > >
>> > > > > > > > Hi Viktor,
>> > > > > > > >
>> > > > > > > > since serialization of floats isn't as straight forward as
>> > > > integers,
>> > > > > > > please
>> > > > > > > > specify the exact serialization format of DOUBLE in the
>> > protocol
>> > > > docs
>> > > > > > > > (e.g., IEEE 754),
>> > > > > > > > including endianness (big-endian please).
>> > > > > > > >
>> > > > > > > > This will help the non-java client ecosystem.
>> > > > > > > >
>> > > > > > > > Thanks,
>> > > > > > > > Magnus
>> > > > > > > >
>> > > > > > > >
>> > > > > > > > 2018-04-09 15:16 GMT+02:00 Viktor Somogyi <
>> > viktorsomo...@gmail.com
>> > > > >:
>> > > > > > > >
>> > > > > > > > > Hi Attila,
>> > > > > > > > >
>> > > > > > > > > 1. It uses ByteBuffers, which in turn will use
>> > > > > > Double.doubleToLongBits
>> > > > > > > to
>> > > > > > > > > convert the double value to a long and that long will be
>> > written
>> > > > in
>> > > > > > the
>> > > > > > > > > buffer. I'v updated the KIP with this.
>> > > > > > > > > 2. Good idea, modified it.
>> > > > > > > > > 3. During the discussion I remember we didn't really decide
>> > which
>> > > > > one
>> > > > > > > > would
>> > > > > > > > > be the better one but I agree that a wrapper class that makes
>> > > > sure
>> > > > > > the
>> > > > > > > > list
>> > > > > > > > > that is used as a key is immutable is a good idea and would
>> > ease
>> > > > > the
>> > > > > > > life
>> > > > > > > > > of people using the interface. Also more importantly would
>> > make
>> > > > > sure
>> > > > > > > that
>> > > > > > > > > we always use the same hashCode. I have created wrapper
>> > classes
>> > > > for
>> > > > > > the
>> > > > > > > > map
>> > > > > > > > > value as well but that was reverted to keep things
>> > consistent.
>> > > > > > Although
>> > > > > > > > for
>> > > > > > > > > the key I think we wouldn't break consistency. I updated the
>> > KIP.
>> > > > > > > > >
>> > > > > > > > > Thanks,
>> > > > > > > > > Viktor
>> > > > > > > > >
>> > > > > > > > >
>> > > > > > > > > On Tue, Apr 3, 2018 at 1:27 PM, Attila Sasvári <
>> > > > > asasv...@apache.org>
>> > > > > > > > > wrote:
>> > > > > > > > >
>> > > > > > > > > > Thanks for working on it Viktor.
>> > > > > > > > > >
>> > > > > > > > > > It looks good to me, but I have some questions:
>> > > > > > > > > > - I see a new type DOUBLE is used for quota_value , and it
>> > is
>> > > > not
>> > > > > > > > listed
>> > > > > > > > > > among the primitive types on the Kafka protocol guide. Can
>> > you
>> > > > > add
>> > > > > > > some
>> > > > > > > > > > more details?
>> > > > > > > > > > - I am not sure that using an environment (i.e.
>> > > > > > > > USE_OLD_COMMAND)variable
>> > > > > > > > > is
>> > > > > > > > > > the best way to control behaviour of kafka-config.sh . In
>> > other
>> > > > > > > scripts
>> > > > > > > > > > (e.g. console-consumer) an argument is passed (e.g.
>> > > > > > --new-consumer).
>> > > > > > > If
>> > > > > > > > > we
>> > > > > > > > > > still want to use it, then I would suggest something like
>> > > > > > > > > > USE_OLD_KAFKA_CONFIG_COMMAND. What do you think?
>> > > > > > > > > > - I have seen maps like Map<List<ConfigResource>,
>> > > > > > > > Collection<QuotaType>>.
>> > > > > > > > > > If List<ConfigResource> is the key type, you should make
>> > sure
>> > > > > that
>> > > > > > > this
>> > > > > > > > > > List is immutable. Have you considered to introduce a new
>> > > > wrapper
>> > > > > > > > class?
>> > > > > > > > > >
>> > > > > > > > > > Regards,
>> > > > > > > > > > - Attila
>> > > > > > > > > >
>> > > > > > > > > > On Thu, Mar 29, 2018 at 1:46 PM, zhenya Sun <
>> > toke...@126.com>
>> > > > > > wrote:
>> > > > > > > > > > >
>> > > > > > > > > > >
>> > > > > > > > > > >
>> > > > > > > > > > > +1 (non-binding)
>> > > > > > > > > > >
>> > > > > > > > > > >
>> > > > > > > > > > > | |
>> > > > > > > > > > > zhenya Sun
>> > > > > > > > > > > 邮箱:toke...@126.com
>> > > > > > > > > > > |
>> > > > > > > > > > >
>> > > > > > > > > > > 签名由 网易邮箱大师 定制
>> > > > > > > > > > >
>> > > > > > > > > > > On 03/29/2018 19:40, Sandor Murakozi wrote:
>> > > > > > > > > > > +1 (non-binding)
>> > > > > > > > > > >
>> > > > > > > > > > > Thanks for the KIP, Viktor
>> > > > > > > > > > >
>> > > > > > > > > > > On Wed, Mar 21, 2018 at 5:41 PM, Viktor Somogyi <
>> > > > > > > > > viktorsomo...@gmail.com
>> > > > > > > > > > >
>> > > > > > > > > > > wrote:
>> > > > > > > > > > >
>> > > > > > > > > > > > Hi Everyone,
>> > > > > > > > > > > >
>> > > > > > > > > > > > I've started a vote on KIP-248
>> > > > > > > > > > > > <https://cwiki.apache.org/conf
>> > luence/display/KAFKA/KIP-
>> > > > > > > > > > 248+-+Create+New+
>> > > > > > > > > > > > ConfigCommand+That+Uses+The+New+AdminClient#KIP-248-
>> > > > > > > > > > > > CreateNewConfigCommandThatUsesTheNewAdminClient-
>> > > > > > DescribeQuotas>
>> > > > > > > > > > > > a few weeks ago but at the time I got a couple more
>> > > > comments
>> > > > > > and
>> > > > > > > it
>> > > > > > > > > was
>> > > > > > > > > > > > very close to 1.1 feature freeze, people were occupied
>> > with
>> > > > > > that,
>> > > > > > > > so
>> > > > > > > > > I
>> > > > > > > > > > > > wanted to restart the vote on this.
>> > > > > > > > > > > >
>> > > > > > > > > > > >
>> > > > > > > > > > > > *Summary of the KIP*
>> > > > > > > > > > > > For those who don't have context I thought I'd
>> > summarize it
>> > > > > in
>> > > > > > a
>> > > > > > > > few
>> > > > > > > > > > > > sentence.
>> > > > > > > > > > > > *Problem & Motivation: *The basic problem that the KIP
>> > > > tries
>> > > > > to
>> > > > > > > > solve
>> > > > > > > > > > is
>> > > > > > > > > > > > that kafka-configs.sh (which in turn uses the
>> > ConfigCommand
>> > > > > > > class)
>> > > > > > > > > uses
>> > > > > > > > > > a
>> > > > > > > > > > > > direct zookeeper connection. This is not desirable as
>> > > > getting
>> > > > > > > > around
>> > > > > > > > > > the
>> > > > > > > > > > > > broker opens up security issues and prevents the tool
>> > from
>> > > > > > being
>> > > > > > > > used
>> > > > > > > > > > in
>> > > > > > > > > > > > deployments where only the brokers are exposed to
>> > clients.
>> > > > > > Also a
>> > > > > > > > > > somewhat
>> > > > > > > > > > > > smaller motivation is to rewrite the tool in java as
>> > part
>> > > > of
>> > > > > > the
>> > > > > > > > > tools
>> > > > > > > > > > > > component so we can get rid of requiring the core
>> > module on
>> > > > > the
>> > > > > > > > > > classpath
>> > > > > > > > > > > > for the kafka-configs tool.
>> > > > > > > > > > > > *Solution:*
>> > > > > > > > > > > > - I've designed new 2 protocols: DescribeQuotas and
>> > > > > > AlterQuotas.
>> > > > > > > > > > > > - Also redesigned the output format of the command line
>> > > > tool
>> > > > > so
>> > > > > > > it
>> > > > > > > > > > provides
>> > > > > > > > > > > > a nicer result.
>> > > > > > > > > > > > - kafka-configs.[sh/bat] will use a new java based
>> > > > > > ConfigCommand
>> > > > > > > > that
>> > > > > > > > > > is
>> > > > > > > > > > > > placed in tools.
>> > > > > > > > > > > >
>> > > > > > > > > > > >
>> > > > > > > > > > > > I'd be happy to receive any votes or feedback on this.
>> > > > > > > > > > > >
>> > > > > > > > > > > > Regards,
>> > > > > > > > > > > > Viktor
>> > > > > > > > > > > >
>> > > > > > > > > >
>> > > > > > > > >
>> > > > > > > >
>> > > > > > >
>> > > > > >
>> > > > >
>> > > >
>> >
