terrymanu commented on issue #37159:
URL: https://github.com/apache/shardingsphere/issues/37159#issuecomment-3582498723
## Problem Understanding
- Location: `infra/util/src/main/java/org/apache/shardingsphere/infra/util/yaml/representer/ShardingSphereYamlRepresenter#representJavaBeanProperty` currently uses `.orElse(new DefaultYamlTupleProcessor().process(nodeTuple))`; the proposal is to switch to the lazy `orElseGet` (the current form is sketched after this list).
- Goal: avoid creating and executing the default processor when a `ShardingSphereYamlTupleProcessor` is already found.
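For context, the current eager form described above looks roughly like this (reconstructed from the quoted call, not verbatim source):

```java
// Current form: .orElse evaluates its argument eagerly, so the default
// processor is created and run even when a custom processor was found.
return TypedSPILoader.findService(ShardingSphereYamlTupleProcessor.class, property.getName())
        .map(processor -> processor.process(nodeTuple))
        .orElse(new DefaultYamlTupleProcessor().process(nodeTuple));
```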
## Root Cause
- Optional.orElse eagerly evaluates its argument, so the default processor is instantiated and invoked on every call, even when a custom processor is present. orElseGet only runs the supplier when the Optional is empty (per the Java docs: https://docs.oracle.com/javase/8/docs/api/java/util/Optional.html#orElseGet-java.util.function.Supplier-); see the minimal demo below.
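A minimal, self-contained demo of the evaluation difference (illustrative only, not ShardingSphere code; the class and method names are made up for the example):

```java
import java.util.Optional;

public final class OrElseVsOrElseGetDemo {
    
    // Stand-in for an "expensive" default computation.
    private static String buildDefault() {
        System.out.println("default built");
        return "default";
    }
    
    public static void main(final String[] args) {
        Optional<String> present = Optional.of("custom");
        // orElse: buildDefault() is evaluated even though a value is present.
        System.out.println(present.orElse(buildDefault()));
        // orElseGet: the supplier is never invoked when a value is present.
        System.out.println(present.orElseGet(OrElseVsOrElseGetDemo::buildDefault));
    }
}
```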
## Analysis
- Behavior: the default processor simply filters out null/empty nodes and is side-effect free, so eager execution does not change the output (a rough sketch of that kind of filtering follows this list).
- Performance: orElseGet only saves the small object creation and call for properties that resolve to a custom processor; when the default processor is used anyway, the cost is identical. YAML conversion is not a hotspot, so the benefit is negligible.
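As a rough illustration of why eager execution is harmless here: a processor of the kind described above only inspects the tuple and returns it (or null), with no side effects. The class below is a hypothetical sketch, not the actual DefaultYamlTupleProcessor; the process(NodeTuple) shape is inferred from the snippet in the conclusion.

```java
import org.yaml.snakeyaml.nodes.NodeTuple;
import org.yaml.snakeyaml.nodes.ScalarNode;

// Hypothetical sketch of a side-effect-free tuple processor: it drops tuples
// whose value node is missing or an empty scalar and passes everything else
// through unchanged. The real DefaultYamlTupleProcessor may differ in detail.
public final class NullOrEmptyFilteringTupleProcessor {
    
    public NodeTuple process(final NodeTuple nodeTuple) {
        if (null == nodeTuple || null == nodeTuple.getValueNode()) {
            return null;
        }
        if (nodeTuple.getValueNode() instanceof ScalarNode && ((ScalarNode) nodeTuple.getValueNode()).getValue().isEmpty()) {
            return null;
        }
        return nodeTuple;
    }
}
```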
## Conclusion
- This is a semantic/style micro-optimization, not a functional bug. Using
orElseGet aligns with lazy semantics; performance gain is minimal but
acceptable as a small improvement.
- Suggested change:
```java
return TypedSPILoader.findService(ShardingSphereYamlTupleProcessor.class, property.getName())
        .map(processor -> processor.process(nodeTuple))
        .orElseGet(() -> new DefaultYamlTupleProcessor().process(nodeTuple));
```
We’re open to this tweak. Community contributors are warmly invited to
submit a PR with the change and corresponding tests—thanks for helping improve
ShardingSphere!