We’ve overloaded the term ‘experimental’ to cover too many related but different 
ideas. We need additional, more specific terminology to disambiguate.

1. Retroactively labelling as ‘experimental’ features that were known to be 
unstable when released shouldn’t happen, and AFAIK it has only happened once, 
with MVs, where ‘experimental’ was just a euphemism for ‘broken’. Our practices 
are mature enough now, I like to think, that a situation like this would not 
arise again - the bar for releasing a complete, marketable feature is higher. 
So the ‘experimental’ label should not be applied retroactively to anything.

2. A feature released as production-ready might later be discovered to be 
deeply flawed. We need to temporarily mark such a feature as ‘broken’ or 
‘flawed’ - not experimental, and not even ‘unstable’. Make sure we emit a 
warning on its use everywhere and, if possible, make it opt-in in the next 
major, at the very least, to prevent new uses of it (a sketch of this 
gating-plus-warning pattern follows after this list). Announce on dev, add a 
note in NEWS.txt, etc. If the flaws are later addressed, remove the label. 
Removing the feature itself might not be possible, but it should be considered, 
with heavy advance telegraphing to the community.

3. There is probably room for genuine use of ‘experimental’ as a feature label: 
for opt-in features that we commit with the understanding that they might not 
make it at all. An unstable API is implied here, but a feature can also have an 
unstable API without being experimental - so ‘experimental’ does not equal 
‘api-unstable’. These should not be relied on by any production code; they 
would be heavily gated behind unambiguous configuration flags, disabled by 
default, and allowed to be removed or changed in any version, including a minor 
one.

4. New features without known flaws, intended to eventually be production-ready 
and marketable, that we may want to gain some real-world confidence with before 
we are happy to market them or make them the default. UCS, for example, which 
seems to be in heavy use in Astra and doesn’t have any known open issues 
(AFAIK). It’s not experimental, it’s not unstable, it’s not ‘alpha’ or ‘beta’; 
it just hasn’t been used widely enough to have earned a lot of confidence. I’m 
not sure what label even applies here. It’s just a regular feature that happens 
to be new - it doesn’t need a label, it just needs to see some widespread use 
before we can make it a default. No other limitation on its use.

5. Early-integrated, not yet fully completed features that are NOT experimental 
in nature. They are isolated and gated behind deep configuration flags. They 
have a CEP behind them, and we trust that they will eventually be completed, 
but for pragmatic reasons it made sense to commit them at an earlier stage. 
‘Preview’, ‘alpha’, and ‘beta’ are labels that could apply here, depending on 
the feature’s current readiness. API instability is implied. Once finished they 
just become a regular new feature - no flag needed, no heavy config gating 
needed.

I might be missing some scenarios here.
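
To make points 2 and 3 concrete, here is a minimal sketch of the 
gating-plus-warning pattern I have in mind. It is purely illustrative - the 
class, flag, and feature names are hypothetical, and this is not Cassandra’s 
actual configuration or guardrail code. The idea: disabled by default, enabled 
only by an unambiguous flag, and a warning emitted on every use even after the 
operator opts in.

    // Hypothetical sketch only - not Cassandra's real config or guardrail API.
    import java.util.logging.Logger;

    public final class FeatureGate
    {
        private static final Logger logger = Logger.getLogger(FeatureGate.class.getName());

        private final String name;
        private final boolean enabled; // would come from an unambiguous config flag, false by default

        public FeatureGate(String name, boolean enabled)
        {
            this.name = name;
            this.enabled = enabled;
        }

        // Called at every entry point of the gated feature.
        public void checkEnabled()
        {
            if (!enabled)
                throw new IllegalStateException(name + " is marked flawed/experimental and is disabled by default; "
                                                + "it must be explicitly enabled in the configuration.");

            // Even when the operator has opted in, warn on every use so it cannot be missed.
            logger.warning(name + " is enabled but marked flawed/experimental; it may be changed or removed "
                           + "in any release and should not be relied on in production.");
        }

        public static void main(String[] args)
        {
            FeatureGate defaultGate = new FeatureGate("Materialized views", false); // disabled by default
            try
            {
                defaultGate.checkEnabled();
            }
            catch (IllegalStateException e)
            {
                System.out.println("Rejected as expected: " + e.getMessage());
            }

            FeatureGate optedIn = new FeatureGate("Materialized views", true); // operator explicitly opted in
            optedIn.checkEnabled(); // proceeds, but logs a warning
        }
    }

The same shape could apply to the ‘preview’ features in point 5, with the gate 
simply removed once the feature is complete.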

> On 10 Dec 2024, at 09:12, Mick Semb Wever <m...@apache.org> wrote:
> 
> I see value in using a beta flag in addition to an experimental flag,
> and that such a beta flag should see a lot more use than experimental.
> 
> Java 17 definitely  falls in the beta category.  I/We definitely
> recommend its usage in production, but as has been said data is needed
> over trust and the community hasn't the resources to provide such data
> – we're just waiting for any user to give us the feedback "we're using
> it prod".  (My expectations were that we'd hear this by 5.0.3.)
> 
> Early integration is valuable sometimes, and anything marked
> experimental (once we have a beta flag in use) should be able to later
> become deprecated and removed.  So I agree with Dinesh's point, that
> also emphasises a high bar for merging – totally agree that we've seen
> a number of things merged that missed basic testing requirements.
> 
> A possibility with SAI is to mark it beta while also marking 2i as
> deprecated (and leaving SASI as marked).  This sends a clear signal
> (imho) that SAI is the recommended solution forward but also being
> honest about its maturity and QA.
> 
> 
> On Tue, 10 Dec 2024 at 09:42, Jon Haddad <j...@rustyrazorblade.com> wrote:
>> 
>> I am strongly against early integration, because we can't / don't remove 
>> things when we should.  MVs are the prime example here, as is the current 
>> iteration of Vector search.
>> 
>> Early integration works fine when it's internal software that you have 
>> control over, it doesn't work well for software that gets deployed and 
>> relied on outside your org.
>> 
>> 
>> 
>> On Mon, Dec 9, 2024 at 2:02 PM Dinesh Joshi <djo...@apache.org> wrote:
>>> 
>>> On Mon, Dec 9, 2024 at 12:26 PM Jon Haddad <j...@rustyrazorblade.com> wrote:
>>>> 
>>>> I hope I've made my point.  The bar for merging in new functionality 
>>>> should be higher.  Features should work with 1TB of data on 3 nodes, 
>>>> that's a low bar.  I've spent at least a thousand hours over the last 5 
>>>> years developing the tooling to do these tests, there's no reason to not 
>>>> do them, and when we know things are broken, we shouldn't ship them.
>>> 
>>> 
>>> I am a big fan of early integration. I agree that the bar for merging 
>>> should be high but at the same time we should lean more heavily on feature 
>>> flagging which is also a very common software industry practice. This would 
>>> allow an operator to enable features that are deemed risky for production 
>>> use. It creates a faster feedback loop and will reveal issues earlier in 
>>> the development cycle. It might actually avoid big patches but that is a 
>>> topic for a different thread.
>>> 
>>> Dinesh
