On Wednesday, 25 February 2015 at 00:12:41 UTC, anonymous wrote:
If the whole function was @trusted, the compiler wouldn't catch other safety violations that are not related to malloc/free.

You don't need to. In C++ you use a separate "@trusted" data structure for capturing ownership, i.e. unique_ptr (which provides a type related to linear typing).
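To make the idea concrete, here is a minimal D sketch of such an ownership-capturing wrapper in the spirit of unique_ptr. The `Unique` name and its `make`/`get` methods are hypothetical, chosen for illustration: only the small @trusted surface touches malloc/free, and copying is disabled so ownership stays linear.

```d
import core.stdc.stdlib : malloc, free;

struct Unique(T)
{
    private T* ptr;

    @disable this(this); // no copying: ownership stays linear

    // hand-verified: we vouch that the fresh allocation is only
    // reachable through this Unique
    static Unique make(T value) @trusted
    {
        auto p = cast(T*) malloc(T.sizeof);
        assert(p !is null, "allocation failed");
        *p = value;
        return Unique(p);
    }

    ref T get() @trusted
    {
        return *ptr;
    }

    ~this() @trusted
    {
        if (ptr !is null)
        {
            free(ptr);
            ptr = null;
        }
    }
}
```

Client code can then stay fully @safe; it never sees the raw pointer, so the part that needs human verification is confined to this one struct.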

The downside is that @safe on that function then doesn't mean "compiler verified memory-safe" anymore. Instead it means "compiler assisted @trusted".

Not only on that function, but on the whole data structure? You no longer have a clear scope for which code is considered dangerous.


There's also the other way around: mark the function as @trusted and throw `() @safe { ... }()` covers over the non-problematic parts. This doesn't work when a template parameter affects the safety, though.
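A sketch of that inverted pattern, with a hypothetical function name: the hand-verified pointer work sits directly in the @trusted body, while the non-problematic logic is wrapped in an immediately-invoked @safe lambda so the compiler still checks it.

```d
import core.stdc.stdlib : malloc, free;

int checkedSum(int a, int b) @trusted
{
    // hand-verified unsafe part: raw allocation and pointer indexing
    auto p = cast(int*) malloc(2 * int.sizeof);
    assert(p !is null);
    scope(exit) free(p);
    p[0] = a;
    p[1] = b;

    // compiler-checked cover: nothing unsafe may appear inside
    return () @safe {
        return a + b;
    }();
}
```

The limitation mentioned above follows directly: if the function were a template and a parameter type's operations could themselves be @system, the unconditional @trusted on the outer function would silently vouch for them too.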

That sounds more attractive than the provided example, but the right thing to do is to establish proper encapsulation. That means you need a protection level stronger than "private", one that restricts "unsafe state" to a @trusted-vetted construct, as unique_ptr informally does in C++.
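A small D sketch of why "private" is too weak for this (the `RCArray` fields and the `sabotage` function are illustrative, not from any real library): private only guards against other modules, so @safe code in the same module can still break the invariants the @trusted methods rely on.

```d
struct RCArray(E)
{
    private E* data;       // invariant: live malloc'd block or null
    private size_t* count; // invariant: valid refcount while data is live
    // ... @trusted constructor/destructor maintain these invariants ...
}

// Compiles as @safe: `private` is no barrier within the module,
// so this silently breaks the invariant the @trusted code trusts.
void sabotage(E)(ref RCArray!E a) @safe
{
    a.count = null;
}
```

A hypothetical stronger protection level would reject `sabotage` at compile time, shrinking the human-verified region back to the vetted construct itself.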


But why are malloc and free not considered safe by default, then?

Well, because they aren't.

So that should change?


The goal is to have human verified, compiler recognized memory-safety, when E allows for it.

You can't:
* mark nothing @safe/@trusted, because malloc/free are not safe;
* mark the methods @trusted, because E may be unsafe.

@trusted malloc/free is a hack, but it allows the compiler to infer @safe iff E is safe.
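A sketch of that hack, with hypothetical wrapper names (`trustedAlloc`, `trustedFree`) and a simplified `RCArray`: only the thin malloc/free wrappers are @trusted, the template's own methods carry no explicit attribute, and D's attribute inference for templates then marks `RCArray!E` @safe exactly when operations on E are themselves safe.

```d
import core.stdc.stdlib : malloc, free;

// The only hand-verified code: thin @trusted wrappers over malloc/free.
private T[] trustedAlloc(T)(size_t n) @trusted
{
    return (cast(T*) malloc(n * T.sizeof))[0 .. n];
}

private void trustedFree(T)(T[] a) @trusted
{
    free(a.ptr);
}

struct RCArray(E)
{
    private E[] data;

    // No explicit attribute: safety is inferred, so RCArray!E is
    // @safe iff E's copy/assignment are safe.
    this(size_t n)
    {
        data = trustedAlloc!E(n);
        foreach (ref e; data)
            e = E.init; // simplified; real code would emplace
    }

    ~this()
    {
        trustedFree(data);
        data = null;
    }
}
```

With this arrangement, `RCArray!int` is usable from @safe code, while an E with @system operations makes the inferred attribute @system, which is exactly the "iff" behaviour described above.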

You mean outside RCArray, iff RCArray as a whole is manually verified? But that would surely mean that the @trusted region is the whole of RCArray, and neither the constructor nor malloc/free?

And that assumes strong typing, which D currently does not provide. Without strong typing it will be very difficult for the compiler to infer anything across compilation units.
