The high-level design seems to indicate that all of the logic for when and
how to copy log segments to remote storage lives in the RLM class. The
default implementation is HDFS-specific, with additional implementations
left to the community. This seems like it would require anyone implementing
a new RLM to also re-implement the logic for when to ship data to remote
storage.

Would it not be better for the Remote Log Manager implementation to be
non-configurable, and instead have a pluggable interface for the remote
storage layer? That way the "when" logic is consistent across all
implementations and each plugin only decides the "how," similar to how the
Streams StateStores are managed.
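
To make that concrete, here is a rough sketch of the kind of interface I
have in mind. To be clear, this is purely illustrative: the interface name
and method signatures below are my own invention, not something proposed in
the KIP.

    import java.io.File;
    import java.io.IOException;
    import java.io.InputStream;
    import org.apache.kafka.common.TopicPartition;

    // Hypothetical plug-in point: the broker-side RLM stays fixed and only
    // the storage backend varies, much like Streams' StateStore SPI.
    public interface RemoteStorageBackend {

        // Copy a finished log segment (plus its offset index) to remote
        // storage; returns an opaque id the RLM can use for later lookups.
        String copySegment(TopicPartition tp, File segment, File offsetIndex)
                throws IOException;

        // Open a stream over a previously copied segment, starting at the
        // given byte position, so the broker can serve remote reads.
        InputStream readSegment(TopicPartition tp, String segmentId,
                                long startPosition) throws IOException;

        // Delete remote data once the RLM decides retention has expired it.
        void deleteSegment(TopicPartition tp, String segmentId)
                throws IOException;
    }

The RLM would then own all of the scheduling and retention decisions and
simply call into whichever backend (HDFS, S3, etc.) the broker is
configured with.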

On Mon, Feb 4, 2019 at 10:01 AM Harsha <ka...@harsha.io> wrote:

>  Hi All,
>          We are interested in adding tiered storage to Kafka. More details
> about motivation and design are in the KIP.  We are working towards an
> initial POC. Any feedback or questions on this KIP are welcome.
>
> Thanks,
> Harsha
>
