On 23/11/2024 15:02, Eugene Sidelnyk wrote:
> If I remember correctly, the whole concept of "value" is fully
> described in the DDD book by Eric Evans. If that's the point of the
> RFC, I wonder if there's any point in not making such classes
> immutable by default, and keeping only one unique instance of a value
> object per given set of properties in memory, thereby eliminating
> cloning altogether and optimizing the memory usage.
As I mentioned on the "Records" thread, guaranteeing that every
combination of values will exist exactly once in memory could create
more overhead than it saves.
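To make that concrete, here's a rough userland sketch of what interning would look like (the class, the of() factory and the key format are purely illustrative, not from any RFC): every construction has to build a key and probe a shared map, and that bookkeeping is the overhead in question.

final class Point
{
    /** @var array<string, Point> */
    private static array $cache = [];

    private function __construct(
        public readonly int $x,
        public readonly int $y,
    ) {}

    // Hypothetical interning factory: one instance per (x, y) pair.
    public static function of(int $x, int $y): Point
    {
        $key = $x . ':' . $y;

        // When most combinations are distinct, this lookup always misses,
        // so the cache maintenance is paid on top of the allocation.
        return self::$cache[$key] ??= new Point($x, $y);
    }
}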
Certainly if you write this, sharing memory makes a lot of sense:
$arr = []; $i = 0;
while ( $i++ < 100 ) {
    $arr[] = new Point(0, 0);
}
But if you instead write this, every combination is distinct, so each
lookup misses and the object has to be allocated anyway; maintaining the
cache ends up more expensive than just allocating each
object/record/struct directly:
$arr = []; $i = 0;
while ( $i++ < 100 ) {
    $arr[] = new Point($i, $i);
}
If the guarantee is copy-on-write, caching could be a compile-time
optimisation; e.g. an OPcache pass might rewrite the first loop to the
equivalent of this:
$arr = []; $i = 0;
$__cachedPoint = new Point(0, 0);
while ( $i++ < 100 ) {
    $arr[] = $__cachedPoint;
}
The main thing that would prevent this optimisation is a custom
constructor, which might make the number of "new" calls observable.
Either the optimiser would have to detect a custom constructor and skip
the rewrite, or data classes / structs / records would have to prohibit
defining one.
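As a purely illustrative example (the static counter is my own, not from
any proposal), a side effect like the one below is what would make the
number of "new" calls observable, and therefore the rewrite unsafe:

final class Point
{
    public static int $constructed = 0;

    public function __construct(
        public readonly int $x,
        public readonly int $y,
    ) {
        // Observable side effect: counts how many times "new" actually ran.
        self::$constructed++;
    }
}

$arr = []; $i = 0;
while ( $i++ < 100 ) {
    $arr[] = new Point(0, 0);
}

echo Point::$constructed; // 100 without the rewrite; 1 if the loop reused a cached instance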
--
Rowan Tommins
[IMSoP]