Hi folks,

When Bo (thanks for the time and contribution!) started the work on
https://github.com/apache/spark/pull/41036, he placed the Go client
directly in the Spark repository. In the meantime, I was approached by
other engineers who are willing to contribute to a Rust client for
Spark Connect.

Now one of the key questions is where these connectors should live and
how we can manage expectations most effectively.

At a high level, there are three approaches:

(1) "3rd party" (non-JVM / Python) clients should live in separate
repositories owned and governed by the Apache Spark community.

(2) All clients should live in the main Apache Spark repository in the
`connector/connect/client` directory.

(3) Non-native (i.e., non-Python / non-JVM) Spark Connect clients should
not be part of the Apache Spark repository or its governance rules.

Before we iron out exactly how we mark these clients as experimental and
how we align their release process etc. with Spark, my suggestion would be
to reach a consensus on this first question.

Personally, I'm fine with either (1) or (2), with a preference for (2).

Would love to get feedback from other members of the community!

Thanks
Martin
