+1 for using exit status in the command-line tools. The other day I wanted to 
modify a shell script to create a Kafka topic, using bin/kafka-topics.sh 
--create --topic ...

The tool's behaviour is not very conducive to automation:

- If the topic creation was successful, it prints out a message and exits with 
status 0.
- If the topic already exists, it prints out a message and exits with status 0.
- If the Kafka broker is down, it prints out an error message and exits with 
status 0.
- If ZooKeeper is down, it keeps retrying indefinitely.

In this example, an exit status to indicate what happened would be really 
helpful.
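Until that happens, the best workaround I've found is to derive a status from what the tool prints. A hypothetical sketch (the matched strings are assumptions about the tool's messages, not documented output):

```shell
# Derive a meaningful exit status from kafka-topics.sh output, since the
# tool itself always exits 0. The patterns below are guesses at the
# messages it prints and may need adjusting for your Kafka version.
classify_topic_output() {
  case "$1" in
    *"already exists"*) return 0 ;;  # idempotent create: treat as success
    *Error*|*Exception*) return 1 ;; # broker down, bad arguments, etc.
    *) return 0 ;;                   # otherwise assume success
  esac
}

# Usage sketch:
#   out=$(bin/kafka-topics.sh --create --topic my-topic ... 2>&1)
#   classify_topic_output "$out" || { echo "$out" >&2; exit 1; }
```

Fragile, of course: it breaks the moment the message wording changes, which is exactly why a real exit status would be better.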

Martin

On 10 Mar 2014, at 07:48, Michael G. Noll <mich...@michael-noll.com> wrote:
> Oh, and one more comment:
> 
> I haven't checked all the CLI tools of Kafka in that regard, but preferably 
> each tool would properly return zero exit codes on success and non-zero on 
> failure (and possibly distinct error exit codes).
> 
> That would simplify integration with tools like Puppet, Chef, Ansible, etc. 
> Also, it allows shell chaining of commands via && and || for manual 
> activities as well as scripting (e.g. to automate tasks during upgrades or 
> migration).
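[As a concrete illustration of Michael's point: `&&` and `||` act purely on exit statuses, so a tool that always exits 0 defeats them. A minimal sketch, with `true`/`false` standing in for a tool that succeeds/fails; the commented line shows how one would like to invoke the real tool:]

```shell
# '&&' runs the next command only if the previous one exited 0;
# '||' runs it only after a non-zero exit. One would like to write, e.g.:
#   bin/kafka-topics.sh --create --topic my-topic ... \
#     || { echo "topic creation failed" >&2; exit 1; }
# but that only helps if the tool reports failure via its exit status.
true  && echo "runs only after an exit status of 0"
false || echo "runs only after a non-zero exit status"
```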
> 
> If exit codes are already used consistently across the CLI tools, then please 
> ignore this message. :-)
> 
> --Michael
> 
> 
> 
>> On 08.03.2014, at 20:09, "Michael G. Noll" <mich...@michael-noll.com> wrote:
>> 
>> I just happened to come across this message.  As a mere Kafka user,
>> take my feedback with a grain of salt.
>> 
>>> On 03/05/2014 05:01 AM, Jay Kreps wrote:
>>> Personally I don't mind the current approach as it is discoverable and
>>> works with tab completion.
>> 
>> Typical shell features such as tab completion are indeed nice.
>> 
>> 
>>> I wouldn't be opposed to replacing kafka-run-class.sh with a generic kafka
>>> script that handles the java and logging options and maintaining a human
>>> friendly mapping for some of the class names so that e.g.
>>> ./kafka topics --list
>>> ./kafka console-producer --broker localhost:9092
>>> would work as a short cut for some fully qualified name:
>>> ./kafka kafka.producer.ConsoleProducer
>>> and
>>> ./kafka
>>> would print a list of known commands. We would probably need a way to
>>> customize memory settings for each command as we do now, though.
>> 
>> If you decide to go for a `kafka <subcommand> ...` approach, what about
>> at least splitting the admin commands (e.g. topic management and such)
>> from non-admin commands (e.g. starting console producers/consumers)?
>> 
>>   $ kafka admin topics --create ...
>>   $ kafka admin topics --list
>> 
>> (Admittedly, listing topics is a pretty safe command, but it should
>> still fall under the admin category IMHO.)
>> 
>> Such a distinction would also give some hints on how dangerous a
>> potential commandline could be (say, `kafka admin` commands are likely
>> to change the state of the cluster itself, whereas `kafka
>> console-producer` would "only" start to read data, which should have a
>> lesser impact if things go wrong).
>> 
>> What would also be nice is a "[-h|--help]" option (or a `kafka help
>> <command>` variant) that would describe each command.  But IIRC there
>> may be a discussion thread/JIRA ticket for that already.
>> 
>>> We would
>>> need some way to make this typo-resistant (e.g. if you type a command
>>> wrong you should get a reasonable error and not some big class-not-found
>>> stack trace).
>> 
>> I agree that such stack traces are irritating.  At 2 AM an Ops person
>> does not want to filter the relevant error message out of the
>> stack-trace noise.  (See the related thread on "Logging irrelevant
>> things" from Mar 05.)
>> 
>> 
>> All the above being said, I'm happy to hear you are discussing how to
>> improve the current CLI tools!
>> 
>> --Michael
>> 
>> 
>> 
>> 
