On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:
> The compiler of course can't require shared methods to be thread-safe,
> as it simply can't prove thread-safety in all cases. This is like
> shared/trusted: you are supposed to make sure that a function behaves
> as expected. The compiler will catch some easy-to-detect mistakes
> (like calling a non-shared method from a shared method <=> a system
> method from a safe method), but you could always use casts,
> pointers, ... to fool the compiler.
You could use the same argument to mark any method as @trusted. Yes, it's possible, but it's a very bad idea.

Though I do agree that there might be edge cases: in a single-core, single-threaded environment, should an interrupt function be marked as shared? Probably not, as no synchronization is required when calling the function. But if the interrupt accesses a variable and a normal function accesses that variable as well, the access needs to be 'volatile' (not cached into a register by the compiler; not closely related to this discussion) and atomic, as the interrupt might occur in the middle of a multi-part write. So the variable should be shared, although there's no multithreading (in the usual sense).
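A minimal sketch of that situation (the handler name and how it gets hooked up as an interrupt are hypothetical; that part is platform-specific), using the core.atomic intrinsics so the flag is neither cached in a register nor accessed in a torn way:

    import core.atomic : atomicLoad, atomicStore;

    shared bool dataReady;  // written by the interrupt, read by normal code

    // hypothetical interrupt handler; registering it with the vector
    // table is target-specific and not shown here
    extern(C) void onTimerInterrupt()
    {
        atomicStore(dataReady, true);
    }

    void mainLoop()
    {
        // the atomic load forces a fresh read on every iteration instead
        // of letting the compiler cache dataReady in a register; for data
        // wider than one machine word it also prevents torn reads/writes
        while (!atomicLoad(dataReady)) {}
        // ... handle the event ...
    }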
Of course. I just wanted to point out that Kagamin's postscript is a simplification I cannot agree with. As a best practice? Sure. As a "never do it"? No.
On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:
> On Mon, 13 Feb 2017 17:44:10 +0000, Moritz Maxeiner <mor...@ucworks.org> wrote:
>> you'd still need those memory barriers. Also note that the
>> synchronization in the above is not needed in terms of semantics.
>
> However, if you move your synchronized to cover the complete sub-code
> blocks, barriers are not necessary. Traditional mutex locking is
> basically a superset and is usually implemented using barriers, AFAIK.
> I guess your point is that we need to define whether shared methods
> guarantee some sort of sequential consistency?
My point in those paragraphs was that synchronization and memory barriers solve two different problems that can occur in non-sequential programming, which is why Kagamin's statement that "memory barriers are a bad idea because they don't defend from a race condition" makes no sense (to me). But yes, I do think that the definition should have more background/context and not just the current "the D FAQ states `shared` guarantees sequential consistency (not implemented)". Considering how many years that has been the state, I have personally concluded (for myself and how I deal with D) that sequential consistency is a non-goal of `shared`, but what's a person new to D supposed to think?
On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:
> struct Foo
> {
>     shared void doA() { lock { _tmp = "a"; } }
>     shared void doB() { lock { _tmp = "b"; } }
>     shared getA() { lock { return _tmp; } }
>     shared getB() { lock { return _tmp; } }
> }
>
> thread1:
> foo.doB();
>
> thread2:
> foo.doA();
> auto result = foo.getA(); // could return "b"
>
> I'm not sure how a compiler could prevent such 'logic' bugs.
It's not supposed to. Also, your example does not implement the same semantics as what I posted, and yes, in your example there's no need for memory barriers. In the example I posted, synchronization is not necessary but memory barriers are (and since synchronization is likely to have a significantly higher runtime cost than memory barriers, why would you want to use it, even if it were possible?).
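Since that earlier example isn't reproduced in this thread, here is a generic sketch of the kind of situation meant (the names are illustrative, not the original code): a single producer publishes data to a single consumer, so no mutual exclusion is needed, but release/acquire barriers are, to order the payload write before the flag write:

    import core.atomic : atomicLoad, atomicStore, MemoryOrder;

    shared int  payload;
    shared bool published;

    // producer thread: no lock required, only ordering
    void produce()
    {
        atomicStore!(MemoryOrder.raw)(payload, 42);
        // release barrier: the payload write cannot be reordered past this store
        atomicStore!(MemoryOrder.rel)(published, true);
    }

    // consumer thread
    void consume()
    {
        // acquire barrier: once the flag is seen, the payload write is visible too
        while (!atomicLoad!(MemoryOrder.acq)(published)) {}
        assert(atomicLoad!(MemoryOrder.raw)(payload) == 42);
    }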
On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:
> However, I think it should be considered a best practice to always
> make a shared function a self-contained entity, so that calling any
> other function in any order does not negatively affect the results.
> Though that might not always be possible.
Yes, that matches what I tried to express.
On Tuesday, 14 February 2017 at 10:52:37 UTC, Johannes Pfau wrote:
> On Mon, 13 Feb 2017 17:44:10 +0000, Moritz Maxeiner <mor...@ucworks.org> wrote:
>> My opinion on the matter of `shared` emitting memory barriers is that
>> either the spec and documentation[1] should be updated to reflect
>> that sequential consistency is a non-goal of `shared` (and if that is
>> decided, this should be accompanied by an example of how to add
>> memory barriers yourself), or it should be implemented. Though
>> leaving it in the current "not implemented, no comment / plan on
>> whether/when it will be implemented" state seems to have little
>> practical consequence - since no one seems to actually work on this
>> level in D - and I can thus understand why dealing with that is just
>> not a priority.
>
> I remember some discussions about this some years ago and IIRC the
> final decision was that the compiler will not magically insert any
> barriers for shared variables. Instead we have well-defined intrinsics
> in std.atomic dealing with this. Of course most of this stuff isn't
> implemented (no shared support in core.sync).
>
> -- Johannes
Good to know, thanks; I seem to have missed that final decision. If that was indeed the case, then that should be reflected in the documentation of `shared` (including the FAQ).
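For reference, those intrinsics live in core.atomic (presumably what "std.atomic" refers to above), and an example of "how to add memory barriers yourself" could look roughly like this - the same publication pattern as the sketch further up, but with explicit fences instead of release/acquire orderings (names illustrative):

    import core.atomic : atomicFence, atomicLoad, atomicStore, MemoryOrder;

    shared int  value;
    shared bool ready;

    void writer()
    {
        atomicStore!(MemoryOrder.raw)(value, 1);
        atomicFence();  // explicit full barrier: value is ordered before ready
        atomicStore!(MemoryOrder.raw)(ready, true);
    }

    void reader()
    {
        while (!atomicLoad!(MemoryOrder.raw)(ready)) {}
        atomicFence();  // pairs with the writer's fence
        assert(atomicLoad!(MemoryOrder.raw)(value) == 1);
    }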