The original idea was to pass in the JSON string representing the UserScramCredentialsRecord directly, to keep this simple and require no parsing at all. Here is an example of the JSON object:

{"name":"alice","mechanism":1,"salt":"MWx2NHBkbnc0ZndxN25vdGN4bTB5eTFrN3E=","SaltedPassword":"mT0yyUUxnlJaC99HXgRTSYlbuqa4FSGtJCJfTMvjYCE=","iterations":8192}

Note that it isn't very friendly. The mechanism is an integer value of 1 or 2 rather than an enum such as SCRAM-SHA-256 or SCRAM-SHA-512, the salt and iterations are required, and there is no password, just a SaltedPassword which the customer would have to generate externally.
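As an aside on that last point: SCRAM's SaltedPassword is Hi(password, salt, iterations), which is essentially PBKDF2 with HMAC-SHA-256 (or HMAC-SHA-512). So with the JSON-only approach the customer would have to run something like the rough Java sketch below themselves before they could even write the record (the class name and the alice-secret password are just placeholders for illustration):

    import java.util.Base64;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    public class SaltedPasswordSketch {
        public static void main(String[] args) throws Exception {
            // Salt and iteration count taken from the example record above.
            byte[] salt = Base64.getDecoder().decode("MWx2NHBkbnc0ZndxN25vdGN4bTB5eTFrN3E=");
            int iterations = 8192;
            // SCRAM-SHA-256's Hi() is PBKDF2 with HMAC-SHA-256 and a 256-bit key.
            PBEKeySpec spec = new PBEKeySpec("alice-secret".toCharArray(), salt, iterations, 256);
            byte[] saltedPassword = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
            // Base64-encode the result so it can be pasted into the JSON as SaltedPassword.
            System.out.println(Base64.getEncoder().encodeToString(saltedPassword));
        }
    }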
Moving away from the above, we will have to parse and validate arguments and then generate the UserScramCredentialsRecord from them. The question is what that looks like. Should it be closer to what kafka-configs uses, or should it be our own made-up JSON format? Whichever we choose, one format should be sufficient, as it will only be used in this application. The requirements so far for argument parsing are:

- We want to specify the mechanism as a customer-friendly enum, SCRAM-SHA-256 or SCRAM-SHA-512.
- We want the salt and iterations to be optional, with defaults if not specified.
- We want the customer to be able to specify a password, from which the salted password is generated.
- We want to allow the customer to specify a salted password directly if they so choose.
- We want the user to specify a user for the credentials, or to specify default-entity.

This is on top of the arguments already needed by kafka-storage. We should also look forward to when we add additional record types that need to be parsed and stored for bootstrap.

What I am suggesting is that we have an --add-config argument that requires at least one key=value subargument indicating which record type to add. An example would be:

SCRAM-SHA-256=[iterations=8192,password=alice-secret]

This indicates that the record to add is a UserScramCredentialsRecord with mechanism 1 (SCRAM-SHA-256), and that there are some key/value pairs to add to the record. This is very much like kafka-configs. Note that this record is still incomplete: we also need to specify a user to apply it to, and that is where the entity-type users and entity-name alice subarguments come in. If during parsing of the arguments the record is incomplete, then kafka-storage will exit with a failure. (See the example invocation at the end of this message.)

--Proven

On Mon, Feb 13, 2023 at 4:54 PM José Armando García Sancio
<jsan...@confluent.io.invalid> wrote:

> Comments below.
>
> On Mon, Feb 13, 2023 at 11:44 AM Proven Provenzano
> <pprovenz...@confluent.io.invalid> wrote:
> >
> > Hi Jose
> >
> > I want to clarify that the file parsing that Argparse4j provides is just
> > a mechanism for taking command line args and putting them in a file. It
> > doesn't actually change what the command line args are for processing
> > the file. So I can add any kafka-storage command line arg into the file,
> > including say the storage UUID. I see that the file parsing will be
> > useful in the future as we add more record types to add for the
> > bootstrap process.
>
> Understood.
>
> > I'm against adding specific parsing for a list of configs vs. a separate
> > JSON file as it is adding more surface area that needs testing for a
> > feature that is used infrequently. One config method should be
> > sufficient for one or more SCRAM records that a customer wants to
> > bootstrap with.
>
> Does this mean that the storage tool won't parse and validate SCRAM
> configuration? How will the user know that their SCRAM configuration
> is correct? Do they need to start the cluster to discover if their
> SCRAM configuration is correct?
>
> Thanks,
> --
> -José
>
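P.S. To make the proposal above concrete, a full kafka-storage invocation under it might look something like the sketch below. The format subcommand and its --config / --cluster-id flags already exist today; the --add-config, --entity-type, and --entity-name arguments are the proposed additions, and their exact names are still up for discussion:

    bin/kafka-storage.sh format \
        --config config/kraft/server.properties \
        --cluster-id <cluster-uuid> \
        --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret]' \
        --entity-type users --entity-name alice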