Consider the following program made up of two separate files:

==> file1.c <==
extern int x;

int main() {
x = 5;
}

==> file2.c <==
int __thread x = 10;

This compiles, links, and runs on the IA64, but fails at link time on the
AMD64:

% gcc file2.c file1.c
/usr/bin/ld: x: TLS definition in /tmp/ccmdUAs3.o section .tdata mismatches
non-TLS reference in /tmp/ccuSmPAa.o
/tmp/ccuSmPAa.o: could not read symbols: Bad value
collect2: ld returned 1 exit status

However, if the initial extern declaration is changed to:
  extern __thread int x;
the program also compiles, links, and runs on the AMD64.

To further complicate matters, if the program is rewritten into a single
file as follows:

int __thread x;

int main() {
  extern int x;
  x = 5;
}

it fails at compile time with gcc 4.1:

fx.c: In function 'main':
fx.c:4: error: non-thread-local declaration of 'x' follows thread-local
declaration
fx.c:1: error: previous declaration of 'x' was here

even though this program would likely work fine on the IA64 and perhaps
some other architectures.

It seems that GCC enforces a policy that the __thread attribute must be
present on extern declarations whenever the underlying variable is defined
with the __thread attribute.

If we view the __thread attribute as something like assigning a variable
to a particular linkage section (which is what it does), then shouldn't
that assignment be transparent to programs referencing the variable via
an extern?

What are the technical reasons for the front-end enforcing this restriction,
when apparently some linkers will handle the TLS linkage fine?  If in fact
it is required that __thread be added to the extern, is the compiler simply
accommodating a limitation/bug in the linker?
