Hi, just wanting to check whether something (that we could avoid if it's bad) is a performance concern in any way: the proliferation of record schemata. I believe schemas are internally deduplicated, since they are hashable and likely to be reused, so the savings from deduplication are substantial. However, should we expect performance degradation (beyond having to read a bit more data to handle the additional schema content) if we have, say, millions of unique record schemata due to variations in schema content?
Cheers, Paul