Hey Gwen,

We discussed this a bit when starting on the new clients.
We were super sloppy about this in initial Kafka development--single jar, no real differentiation between public and private APIs. The plan was something like the following:

1. Start to consider this with the new clients.
2. Do the public/private designation at the package level. The public packages are o.a.k.common, o.a.k.errors, o.a.k.producer, o.a.k.consumer, o.a.k.tools. This makes javadoc and things like that easier, and it makes it easy to see at a glance all the public classes.

It would be even better to enforce this in the build if that is possible (i.e. no class from a non-public package is leaked), but we haven't done this. This approach obviously wasn't possible in Hadoop since they didn't start with the clear delineation we had in the original Scala code.

Thoughts?

-Jay

On Tue, Dec 16, 2014 at 10:04 AM, Gwen Shapira <gshap...@cloudera.com> wrote:
>
> Hi,
>
> Kafka has public APIs in Java and Scala, intended for use by external
> developers.
> In addition, Kafka also exposes many public methods that are intended
> to use within Kafka but are not intended to be called by external
> developers.
> Also, some of the external APIs are less stable than others (the new
> producer for example).
>
> In Hadoop we have a similar situation, and to avoid misunderstandings
> or miscommunications on which APIs are external and which are stable,
> we use annotations to communicate this information.
> We find it very useful in preventing our customers from accidentally
> getting into trouble by using internal methods or unstable APIs.
>
> Here are the annotations Hadoop uses:
>
> https://hadoop.apache.org/docs/current/api/src-html/org/apache/hadoop/classification/InterfaceStability.html
>
> https://hadoop.apache.org/docs/current/api/src-html/org/apache/hadoop/classification/InterfaceAudience.html
>
> I'm wondering what others think about using something similar in Kafka.
>
> Gwen
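For reference, the annotation approach Gwen describes could look something like the following in Kafka. This is only a minimal sketch, loosely modeled on Hadoop's InterfaceAudience/InterfaceStability; the annotation names and the NewProducer class here are hypothetical, not actual Kafka code:

```java
import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Sketch of Hadoop-style audience/stability markers. All names here are
// hypothetical, loosely modeled on Hadoop's InterfaceAudience and
// InterfaceStability annotation classes.
public class ApiAnnotations {

    /** Intended for use by any external developer. */
    @Documented
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Public {}

    /** Intended only for use within Kafka itself. */
    @Documented
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Private {}

    /** May change incompatibly between releases (e.g. the new producer). */
    @Documented
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Unstable {}

    // Hypothetical example: a public but still-evolving API class.
    @Public
    @Unstable
    static class NewProducer {}

    public static void main(String[] args) {
        Class<?> c = NewProducer.class;
        System.out.println("public=" + c.isAnnotationPresent(Public.class)
                + " unstable=" + c.isAnnotationPresent(Unstable.class));
        // prints: public=true unstable=true
    }
}
```

With runtime retention, tooling (javadoc doclets, compatibility checkers) can read the markers back off the compiled classes, which is how Hadoop drives its docs and checks.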
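The build-time enforcement Jay mentions ("no class from a non-public package is leaked") could be sketched as a reflective check over public method signatures. This is a toy illustration under assumed names: the package prefixes and the sample classes (PublicFacade, InternalThing) are hypothetical, and a real check would also cover fields, supertypes, and generics:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal sketch of a leak check: walk the public methods of an API class
// and flag any referenced type that lives outside the designated public
// packages (or the JDK). Package list and sample classes are hypothetical.
public class LeakCheck {

    static final List<String> PUBLIC_PREFIXES = Arrays.asList(
            "org.apache.kafka.common", "org.apache.kafka.errors",
            "org.apache.kafka.producer", "org.apache.kafka.consumer",
            "org.apache.kafka.tools", "java.", "javax.");

    static boolean isPublicType(Class<?> t) {
        while (t.isArray())
            t = t.getComponentType(); // check an array by its element type
        if (t.isPrimitive())
            return true; // int, void, etc. are always fine
        for (String p : PUBLIC_PREFIXES)
            if (t.getName().startsWith(p))
                return true;
        return false;
    }

    /** Names of non-public types leaked through c's public method signatures. */
    static List<String> leakedTypes(Class<?> c) {
        List<String> leaks = new ArrayList<>();
        for (Method m : c.getMethods()) {
            List<Class<?>> referenced = new ArrayList<>();
            referenced.add(m.getReturnType());
            referenced.addAll(Arrays.asList(m.getParameterTypes()));
            for (Class<?> t : referenced)
                if (!isPublicType(t) && !leaks.contains(t.getName()))
                    leaks.add(t.getName());
        }
        return leaks;
    }

    // Hypothetical internal type that should never appear in a public signature.
    static class InternalThing {}

    // Hypothetical public API class that accidentally leaks InternalThing.
    public static class PublicFacade {
        public InternalThing internals() { return new InternalThing(); }
        public String name() { return "facade"; }
    }

    public static void main(String[] args) {
        // Prints the leaked type name(s) found in PublicFacade's signatures.
        System.out.println(leakedTypes(PublicFacade.class));
    }
}
```

Run against every class in the public packages as a build step, a non-empty result would fail the build, which is one way to get the enforcement the plan describes.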