Re: Unwinding CFI gcc practice of assumed `same value' regs

2006-12-12 Thread Ulrich Drepper

Andrew Haley wrote:

Null-terminating the call stack is too well-established practice to be
changed now.


Which does not mean that the mistake should hold people back.  This is 
just one of the mistakes in the x86-64 ABI.  It was copied from x86 and 
it was wrong there already.




In practice, %ebp either points to a call frame -- not necessarily the
most recent one -- or is null.  I don't think that having an optional
frame pointer means you can use %ebp for anything random at all,


Of course it means that.



The right way to fix the ABI is to specify that %ebp mustn't be
[mis]used in this way, not to add a bunch more unwinder data.


Nope.  The right way is to specify things like backtraces with the 
adequate mechanism.  I fully support adding the DWARF 3 unwinder 
requirements.


--
➧ Ulrich Drepper ➧ Red Hat, Inc. ➧ 444 Castro St ➧ Mountain View, CA ❖


Re: Unwinding CFI gcc practice of assumed `same value' regs

2006-12-12 Thread Ulrich Drepper

Andrew Haley wrote:

Sure it does.  Not breaking things is an excellent reason, probably
one of the best reasons you can have.


Nothing breaks if the responsible tools are updated in unison.



Really?  Well, that's one interpretation.  I don't believe that,
though.  It's certainly an inconsistency in the specification, which
says that null-termination is supported, and this implies that you
can't put a zero in there.


Again, this is just because the "authors" of the ABI didn't think.  x86 
has the same problem.  %ebp is freely used and not just for non-NULL 
values.  Registers are scarce and I doubt you'll find any support for 
introducing a register class which says that the register can only hold 
non-zero values.




"All of these" might be the right way to go.  That is, keep
null-terminating the stack, strengthen the rules about what you might
do with %ebp, and extend debuginfo.


The thread setup and the startup code certainly do initialize the 
register with zero.  But this means nothing; the register can hold zero 
in all kinds of other places.


--
➧ Ulrich Drepper ➧ Red Hat, Inc. ➧ 444 Castro St ➧ Mountain View, CA ❖


Re: Call for compiler help/advice: atomic builtins for v3

2005-11-06 Thread Ulrich Drepper
Mark Mitchell wrote:
> Yes, GLIBC does that kind of thing, and we could do.  In the simplest
> form, we could have startup code that checks the CPU, and sets up a
> table of function pointers that application code could use.

That's not what glibc does and it is a horrible idea.  The indirect
jumps are costly, very much so.  The longer the pipeline the worse.

The best solution (for Linux) is to compile multiple versions of the DSO
and place them in the correct places so that the dynamic linker finds
them if the system has the right functionality.  Code generation issues
aside, this is really only needed for atomic ops and maybe vector
operations (XMM, Altivec, ...).  The number of configurations really
needed and important is small (as is the libstdc++ binary).

So, just make sure that an appropriate configure line can be given
(i.e., add --enable-xmm flags or so) and make it possible to compile
multiple libstdc++ without recompiling the whole gcc.  Packagers can
then compile packages with multiple libstdc++.
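For illustration, a rough shell sketch of the placement scheme being described.  The subdirectory name follows today's glibc-hwcaps convention and is purely an assumption here (the capability subdirectory names in use at the time were different, e.g. tls/ or sse2/):

```shell
# The dynamic linker searches capability subdirectories before the generic
# location, so multiple builds of the same DSO can be installed side by side
# and the best match for the running machine wins.  Paths are illustrative.
mkdir -p demo/lib/glibc-hwcaps/x86-64-v3
touch demo/lib/libstdc++.so.6                          # baseline build
touch demo/lib/glibc-hwcaps/x86-64-v3/libstdc++.so.6   # e.g. a build with tuned atomics
find demo -name 'libstdc++.so.6' | sort
```

The packager only installs the extra builds; no application or startup code has to change.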

Pushing some of the operations (atomic ops) to libgcc seems sensible, but
others (like vector ops) are libstdc++-specific, and therefore splitting
the functionality between libstdc++ and libgcc means requiring even more
versions, since then libgcc would also have to be available in multiple
versions.


And note that for Linux the atomic ops need to take arch-specific
extensions into account.  For ppc it'll likely mean using the vDSO.

-- 
➧ Ulrich Drepper ➧ Red Hat, Inc. ➧ 444 Castro St ➧ Mountain View, CA ❖



signature.asc
Description: OpenPGP digital signature


Re: Call for compiler help/advice: atomic builtins for v3

2005-11-07 Thread Ulrich Drepper
Richard Guenther wrote:
> Also, libgcc
> does _not_ know the machine - it only knows the -march it was compiled
> for.  Inlining and transparently handling different sub-architecture just
> does not play together well.

Yes, libgcc doesn't know this.  But libgcc can be installed in the
correct place, and the dynamic linker, which does in fact know what arch
is used, can make the decision.  It's really pretty easy for those
platforms with sufficient flexibility.  Use if-cascades or indirect
jumps for the others, if necessary.

-- 
➧ Ulrich Drepper ➧ Red Hat, Inc. ➧ 444 Castro St ➧ Mountain View, CA ❖





Re: Call for compiler help/advice: atomic builtins for v3

2005-11-07 Thread Ulrich Drepper
Daniel Jacobowitz wrote:
> The only real problem with this is that it mandates use of shared
> libgcc for the routines in question... always.  If they ever go into
> libgcc.a, we can't make sure we got the right copy.

This is what lies ahead of us anyway.  Solaris apparently has officially
denounced static linking for the system libraries completely, and you can
find many comments to the same effect from me, too.  We have to get away
from the limitations static linking imposes.

For those who really, really need static linking: use the least common
denominator.  It's those people who have to pay the price, not the
(sane) rest of us.

-- 
➧ Ulrich Drepper ➧ Red Hat, Inc. ➧ 444 Castro St ➧ Mountain View, CA ❖





Re: Call for compiler help/advice: atomic builtins for v3

2005-11-07 Thread Ulrich Drepper
Andrew Pinski wrote:
> Uli you keep forgetting about other OSes which don't use ELF (like Mac
> OS X), though for Mac OS X it is easier to support this as the way Mach-O
> handles fat binaries, you only really need one libgcc which contains the
> functions for all of the subprocessor types.

What is it you are complaining about then?

I don't care about any platform other than Linux.  My goal is to prevent
the solution from being suboptimal even though the means to support the
best solution exist and it requires almost no extra work.  If other
platforms can use the same mechanisms, good for them.  If not, it's up to
somebody who cares to come up with a solution.

-- 
➧ Ulrich Drepper ➧ Red Hat, Inc. ➧ 444 Castro St ➧ Mountain View, CA ❖





Re: Call for compiler help/advice: atomic builtins for v3

2005-11-07 Thread Ulrich Drepper
Andrew Pinski wrote:
> You might not care about anything except for GNU/Linux but GCC has to care
> to some point.

The important point is that this (and similar things like vector
instructions) is an issue which cannot be solved adequately with a
one-size-fits-all mechanism.  It requires platform-specific solutions
even if this means more maintenance effort.  Equalizing is not
acceptable since it punishes the more evolved platforms.

-- 
➧ Ulrich Drepper ➧ Red Hat, Inc. ➧ 444 Castro St ➧ Mountain View, CA ❖





x32 libraries

2015-11-17 Thread Ulrich Drepper
Is there a reason why libmpx and libgccjit aren't built for x32?  This
is in the case when building IA-32, x86-64, and x32 all together.  I
haven't tested any other way to build.  I suspect it's just an
oversight in the way configuration works since I cannot see a
technical reason.


Solaris vtv port breaks x32 build

2015-11-28 Thread Ulrich Drepper
After the Solaris vtv port was added, my build of the x86 gcc with x32
support fails.  The build is special since the kernel doesn't have x32
support and I cannot directly run x32 binaries (they are run in a kvm
kernel).  This used to work well by configuring the tree with
--build and --host etc.

The changes to libvtv/configure.ac forced the execution of some code
for all targets, specifically the AC_USE_SYSTEM_EXTENSIONS macro.
Once I take this (and the code depending on it) out the build is once
again fine.  Below is the change I used.  Can we revert the change
until the Solaris port is correctly done?

--- a/libvtv/configure.ac
+++ b/libvtv/configure.ac
@@ -27,7 +27,7 @@ target_alias=${target_alias-$host_alias}
 AC_SUBST(target_alias)
 GCC_LIBSTDCXX_RAW_CXX_FLAGS

-AC_USE_SYSTEM_EXTENSIONS
+# AC_USE_SYSTEM_EXTENSIONS

 # Use same top-level configure hooks in libgcc/libstdc++/libvtv.
 AC_MSG_CHECKING([for --enable-vtable-verify])
@@ -45,21 +45,21 @@ AC_MSG_RESULT($enable_vtable_verify)
 unset VTV_SUPPORTED
 AC_MSG_CHECKING([for host support for vtable verification])
 . ${srcdir}/configure.tgt
-case ${host} in
-  *-*-solaris2*)
-# libvtv requires init priority support, which depends on the linker
-# used on Solaris.
-AC_CACHE_CHECK(for init priority support, libvtv_cv_init_priority, [
-AC_COMPILE_IFELSE([AC_LANG_PROGRAM(,
-  [[void ip (void) __attribute__ ((constructor (1)));]])],
-  [libgcc_cv_init_priority=yes],[libgcc_cv_init_priority=no])])
-if test x$libvtv_cv_init_priority = xno; then
-  VTV_SUPPORTED=no
-fi
-# FIXME: Maybe check for dl_iterate_phdr, too?  Should be covered by
-# configure.tgt restricting to libvtv to Solaris 11+.
-;;
-esac
+# case ${host} in
+#   *-*-solaris2*)
+# # libvtv requires init priority support, which depends on the linker
+# # used on Solaris.
+# AC_CACHE_CHECK(for init priority support, libvtv_cv_init_priority, [
+# AC_COMPILE_IFELSE([AC_LANG_PROGRAM(,
+#   [[void ip (void) __attribute__ ((constructor (1)));]])],
+#   [libgcc_cv_init_priority=yes],[libgcc_cv_init_priority=no])])
+# if test x$libvtv_cv_init_priority = xno; then
+#   VTV_SUPPORTED=no
+# fi
+# # FIXME: Maybe check for dl_iterate_phdr, too?  Should be covered by
+# # configure.tgt restricting to libvtv to Solaris 11+.
+# ;;
+# esac
 AC_MSG_RESULT($VTV_SUPPORTED)

 # Decide if it's usable.



Re: Solaris vtv port breaks x32 build

2015-11-30 Thread Ulrich Drepper
On Mon, Nov 30, 2015 at 6:57 PM, Jeff Law  wrote:
> What part of this requires bits to run?  I see AC_COMPILE_IFELSE, but not
> anything which obviously requires running the resulting code.

Without AC_USE_SYSTEM_EXTENSIONS one gets:

configure.ac:111: warning: AC_COMPILE_IFELSE was called before
AC_USE_SYSTEM_EXTENSIONS
../../lib/autoconf/specific.m4:332: AC_GNU_SOURCE is expanded from...
configure.ac:111: the top level


Re: Solaris vtv port breaks x32 build

2015-11-30 Thread Ulrich Drepper
On Mon, Nov 30, 2015 at 9:14 PM, Jeff Law  wrote:
> Right, but isn't AC_COMPILE_IFELSE a compile test, not a run test?


The problem macro is _AC_COMPILER_EXEEXT_WORKS.  The message is at the end.

This macro *should* work for cross-compiling but somehow it doesn't.
In libvtv/configure $cross_compiling is not set appropriately.  I'm
configuring with the following, which definitely indicates that
cross-compiling is selected.

~/gnu/gcc/configure --prefix=/usr --enable-bootstrap --enable-shared
--enable-host-shared --enable-threads=posix --with-system-zlib
--enable-__cxa_atexit --disable-libunwind-exceptions
--enable-gnu-unique-object --enable-linker-build-id
--with-linker-hash-style=gnu --enable-plugin --enable-initfini-array
--with-tune=haswell --with-multilib-list=m32,m64,mx32
--build=x86_64-redhat-linux build_alias=x86_64-redhat-linux
--enable-offload-targets=nvptx-none
--with-cuda-driver-include=/usr/local/cuda/include
--with-cuda-driver-lib=/usr/local/cuda/lib64
--host=x86_64-redhat-linux host_alias=x86_64-redhat-linux
--enable-languages=c,c++,fortran,jit,lto --enable-libmpx
--enable-gnu-indirect-function --with-system-zlib --prefix=/usr
--mandir=/usr/share/man --infodir=/usr/share/info
--enable-vtable-verify

configure:2863: checking for C compiler default output file name
configure:2885:
/home/drepper/local/gcc-builds/test-20151130/./gcc/xgcc
-B/home/drepper/local/gcc-builds/test-20151130/./gcc/ -B/us
r/x86_64-redhat-linux/bin/ -B/usr/x86_64-redhat-linux/lib/ -isystem
/usr/x86_64-redhat-linux/include -isystem /usr/x86_64-redhat-li
nux/sys-include  -mx32 -g -O2   conftest.c  >&5
configure:2889: $? = 0
configure:2926: result: a.out
configure:2942: checking whether the C compiler works
configure:2951: ./a.out
/home/drepper/gnu/gcc/libvtv/configure: line 2953: ./a.out: cannot
execute binary file: Exec format error


Re: Solaris vtv port breaks x32 build

2015-12-01 Thread Ulrich Drepper
On Tue, Dec 1, 2015 at 2:39 AM, Matthias Klose  wrote:
> that might be another instance of
> https://gcc.gnu.org/ml/gcc-patches/2015-01/msg02064.html
> Does something like this help?

No, same problem as before.  This macro doesn't actually generate any
code in configure.


Re: Solaris vtv port breaks x32 build

2015-12-01 Thread Ulrich Drepper
On Tue, Dec 1, 2015 at 10:16 AM, Bernd Edlinger wrote:
> Your host_alias looks wrong: isn't it equal to your build_alias ?

Yes.  The goal is basically to build a native compiler but prevent it
from trying to run any binaries.  There is no fine-grained way to tell
the configuration mechanism to run x86-64 and x86 binaries, but not
x32 binaries.

The config mechanism I use worked nicely for quite a while until this
one change.  All the other libraries have no issue with the
configuration.  It's been a long time since I dug deeply into all this
autoconf magic.  It just seems that in libvtv the settings are applied
differently.  I do see this of course:

if test "x$host_alias" != x; then
  if test "x$build_alias" = x; then
cross_compiling=maybe
$as_echo "$as_me: WARNING: If you wanted to set the --build type,
don't use --host.
If a cross compiler is detected then cross compile mode will be used." >&2
  elif test "x$build_alias" != "x$host_alias"; then
cross_compiling=yes
  fi
fi

This will cause cross_compiling not to be set.  The question is
whether libvtv is really unique in its requirements or whether the
other configure scripts in the project are doing something more to
prevent this type of problem.


LTO and version scripts

2014-06-30 Thread Ulrich Drepper
Using LTO to create a DSO works fine (i.e., it performs the expected
optimizations) for symbols which are marked with visibility
attributes.  It does not work, though, when the symbol is not
restricted in its visibility in the source file but instead is
prevented from being exported from the DSO by a version script (ld
--version-script=FILE).

Is this known?  I only found reports of general problems related to
linker scripts, although version-script parameters do not cause any
other failures.
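For reference, a minimal version script of the kind passed with ld --version-script (the symbol name is made up): the `local: *` pattern hides everything not listed, which is exactly the visibility restriction LTO fails to exploit here.

```
/* Hypothetical version script (the FILE in --version-script=FILE).
   Only public_fn stays exported; everything else becomes local to
   the DSO -- without a visibility attribute in the source.  */
VERS_1.0 {
  global:
    public_fn;
  local:
    *;
};
```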


Re: LTO and version scripts

2014-08-05 Thread Ulrich Drepper
On Tue, Aug 5, 2014 at 12:57 AM, Alan Modra  wrote:
> What version linker?  In particular, do you have the fix for PR12975?

The Fedora 19 version.  I think it hasn't changed since then, which
means it is 2.23.88.0.1-13 (from the RPM version number).  No idea
whether that fix is included, and unfortunately I won't have time to
try before the weekend.


Re: LTO and version scripts

2014-08-10 Thread Ulrich Drepper
On Wed, Aug 6, 2014 at 11:07 PM, Alan Modra  wrote:
> Both Fedora 19 and 20 have the patch needed for this to work.  Hmm, I
> suppose the other thing necessary is a gcc that implements
> LDPT_GET_SYMBOLS_V2.  You may be lacking that.  Here's what I see with
> mainline gcc and ld.

It's been a while since I tried it and this was a larger project.  I
can confirm that with the current binutils on Fedora 19 it does work
as expected.  I'll keep an eye out for this.


Re: GCC 5 Status Report (2015-01-19), Trunk in Stage 4

2015-01-19 Thread Ulrich Drepper
On Mon, Jan 19, 2015 at 12:32 PM, Jonathan Wakely  wrote:
> I would like to commit these two patches which complete the C++11
> library implementation:

I would definitely be in favor.


> https://gcc.gnu.org/ml/gcc-patches/2015-01/msg01694.html

Just a nit: why wouldn't you check the value of the variable after
the assignment in the test case?


asm in inline function invalidating function attributes?

2011-10-15 Thread Ulrich Drepper
I think gcc should allow the programmer to tell it something about a
function return value even if the function is inlined and the compiler
can see all the code.  Consider the code below.

If NOINLINE is defined the compiler will call g once.  No need to do
anything after the h() call since the function is marked const.

If NOINLINE is not defined and the compiler sees the asm statement it
will expand the function body twice.  Don't worry about the content of
the asm, this is correct in the case I care about.  What I expect is
that gcc still respects that the function is marked const and performs
the same optimization as in the case when the function is not inlined.

Is there anything I'm missing about how to achieve this?  I don't
think so, in which case: do people agree that this should be changed?


extern int **g(void) __attribute__((const, nothrow));
#ifndef NOINLINE
extern inline int ** __attribute__((always_inline, const, nothrow,
gnu_inline, artificial)) g(void) {
  int **p;
  asm ("call g@plt" : "=a" (p));
  return p;
}
#endif

#define pr(c) ((*g())[c] & 0x80)

extern void h(void);

int
f(const char *s)
{
  int c = 0;
  for (int i = 0; i < 20; ++i)
c |= pr(s[i]);

  h();

  for (int i = 20; i < 40; ++i)
c |= pr(s[i]);

  return c;
}


Re: asm in inline function invalidating function attributes?

2011-10-16 Thread Ulrich Drepper
On Sun, Oct 16, 2011 at 06:31, Richard Guenther wrote:
> The question is now, of course why you need to emit calls
> from an asm() statement, something which isn't guaranteed
> to work anyway (IIRC).

It's not guaranteed to work in general.  The problem to solve is that
I know the function which is called is not clobbering any registers.
If I leave it with the normal function call gcc has to spill
registers.  If I can hide the function call the generated code can be
significantly better.

An alternative solution would be to have a function attribute which
allows me to specify which registers are clobbered.  Or at least an
attribute which says none are clobbered.


Re: asm in inline function invalidating function attributes?

2011-10-16 Thread Ulrich Drepper
On Sun, Oct 16, 2011 at 14:38, Jakub Jelinek  wrote:
> If this is about e.g.
> 2011-09-14  Ulrich Drepper  
>
>        * sysdeps/x86_64/fpu/bits/mathinline.h (__MATH_INLINE): Use
>        __extern_always_inline.
>        Define lrint{f,} and llrint{f,} for 64-bit and in some situations for
>        32-bit.

No, something else.  It's about the calls to __ctype_b_loc etc.  The
function just returns a pointer, nothing else.


Re: asm in inline function invalidating function attributes?

2011-10-17 Thread Ulrich Drepper
On Mon, Oct 17, 2011 at 02:57, Richard Guenther wrote:
> It would simply be an alternate ABI that makes all registers callee-saved?
> I suppose that would be not too hard to add.

That would be great.  There are quite a few interfaces which have a
trivial normal case and only in special situations you need something
more.  These interfaces could be translated into:

retval fct(sometype arg) { return cond ? somevalue : complexfct(); }

These would be candidates for this new ABI.  The compiler might have
to do minimal spilling to implement the 'cond' expression, but the new
ABI would be used unwisely if this is expensive.

Note that I would want registers used for passing parameters to also
be preserved (except, of course, when they are needed for the return
value).  This proved really useful in the kernel syscall interface
which behaves like that.


template class with default parameter in template parameter declaration

2011-11-08 Thread Ulrich Drepper
Complicated title, here's a bit of code:

#ifdef FIX
# define PARMS2 , class T5
#else
# define PARMS2
#endif


template<class T1, class T2 = int>
struct cl1 {
};

template<template<class T3 PARMS2> class T4 = cl1>
struct cl2 {
};

cl2<> var;

If compiled without FIX defined, this will fail with gcc 4.3 and later.
I haven't checked 4.2, but it works without the fix with gcc 4.1.  The
strange thing is that it also worked with ancient compilers before the
C++ frontend rewrite (gcc 3.2).  In short, when expecting a template
class in a template parameter list it is now not possible anymore to
skip the default parameters.  Since this is an actual use of the class
(in case the default is used) and the programmer declares to have no
interest in the type of the second template parameter, I wouldn't say
it is needed, but I haven't tracked down a statement in the standard.

Before changing too much code I want to make sure this new (and very
old) behavior is what is required by the standard and not a bug which
slipped in again.


-mavx option

2012-01-26 Thread Ulrich Drepper
I think gcc is missing an option since -mavx controls two different
things.  First, the generation of VEX-encoded instructions.  Second,
the use of ymm registers.  The latter is not always available when the
former is and using VEX-encoded instructions by themselves can have an
advantage.  Currently, when OSXAVE is not available (use xgetbv to
test etc etc) then I cannot use code which gcc generates with -mavx
even though I am only interested in the VEX encoding.

Could we have a -mvex option which is automatically implied in -mavx?


Re: (C++) mangling vector types

2009-11-12 Thread Ulrich Drepper

On 11/12/2009 07:24 AM, Jason Merrill wrote:

c) Use -fabi-version=2.1.


I'd favor this if you can emit aliases with the old names wherever this 
is possible and currently done.


--
➧ Ulrich Drepper ➧ Red Hat, Inc. ➧ 444 Castro St ➧ Mountain View, CA ❖


Re: (C++) mangling vector types

2009-11-12 Thread Ulrich Drepper

On 11/12/2009 08:35 AM, Jason Merrill wrote:

I'd favor this if you can emit aliases with the old names wherever this
is possible and currently done.


I suppose if we unconditionally use the new mangling and emit a weak
alias with the old mangling, old external references will resolve to
somthing, so code that only uses one vector size will continue to work;


Really?  How would you create code with the new compiler and older
libraries which only provide definitions for the old names?


If I'm not missing anything, then using a 2.1 ABI version makes sense
since it is upward compatible, apart from other ABI breakages.


--
➧ Ulrich Drepper ➧ Red Hat, Inc. ➧ 444 Castro St ➧ Mountain View, CA ❖


bug or idiosyncrasy?

2012-08-17 Thread Ulrich Drepper
Compiling the following code with D defined works.  Leave it out (and
remove the extra dimension, which has no influence on the data layout
etc.) and it fails to compile.  Is this correct?  Why wouldn't a simple
use of an array parameter be sufficient?



#ifdef D
#define XD [1]
#define XR [0]
#define XBB {
#define XBE }
#else
#define XD
#define XR
#define XBB
#define XBE
#endif

int aa XD[10] = { XBB 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 XBE };

int f(int arr XD[10])
{
  int s = 0;
  for (auto i : arr XR)
s += i;
  return s;
}

int main()
{
  return f(aa);
}


Re: RFC: Disallow undocumented IA32 TLS GD access

2007-08-23 Thread Ulrich Drepper

H.J. Lu wrote:
> Is there any objection?

No, this is correct and necessary.

--
➧ Ulrich Drepper ➧ Red Hat, Inc. ➧ 444 Castro St ➧ Mountain View, CA ❖


desired behavior or missing warning?

2020-09-17 Thread Ulrich Drepper via Gcc
I found myself with code similar to this:

struct base {
  virtual void cb() = 0;
};

struct deriv final : public base {
  void cb() final override { }
};


The question is about the second use of 'final'.  Because the entire
class is declared final, should the individual function's annotation be
flagged with a warning?  I personally think it should, because it might
distract from the final on the class itself.





-Wformat and u8""

2022-05-09 Thread Ulrich Drepper via Gcc
I have a C++20+ code base which forces the program to run under a UTF-8
locale and then uses u8"" strings internally.  This causes warnings with
-Wformat.

#include <stdio.h>

int main()
{
  printf((const char*) u8"test %d\n", 1);
  return 0;
}

Compile with
   g++ -std=gnu++20 -c -O -Wall t.cc

and you'll see:
t.cc: In function ‘int main()’:
t.cc:5:24: warning: format string is not an array of type ‘char’ [-Wformat=]
5 |   printf((const char*) u8"test %d\n", 1);
  |^

I would say it is not gcc's business to question my use of u8"" given that
I use a cast and the u8"" string can be parsed by the -Wformat handling.

Before filing a report I'd like to take the temperature and see whether
people agree with this.

Thanks.


Re: -Wformat and u8""

2022-05-09 Thread Ulrich Drepper via Gcc
On Mon, May 9, 2022 at 11:26 AM Florian Weimer  wrote:

> On the other hand, that cast is still quite ugly.


Yes, there aren't yet any I/O functions defined for char8_t and therefore
that's the best we can do right now.  I have all kinds of ugly macros to
hide these casts.


> All string-related
> functions in the C library currently need it.


Yes, but the cast isn't the issue.  Or more correctly: gcc disregarding the
cast for -Wformat is.

Anyway, I'm not concerned about the non-I/O functions.  This is all C++
code after all and there are functions for all the rest.


> Isn't this a problem with char8_t?
>

Well, yes, the problem is that gcc seems to just see the u8"" type
(char8_t) even though I tell it with the cast to regard it as const
char.  Again, I ensure that the encoding matches, and putting UTF-8 in
char strings is actually incorrect (in theory).


[RFC] database with API information

2022-09-06 Thread Ulrich Drepper via Gcc
I talked to Jonathan the other day about adding all the C++ library APIs to
the name hint file now that the size of the table is not really a concern
anymore.

Jonathan mentioned that he has to create and maintain a similar file for
the module support.  It needs to list all the exported interfaces and this
is mostly a superset of the entries in the hint table.

Instead of duplicating the information it should be kept in one place.
Neither file itself is a natural fit because the additional information
needed  (e.g., the standard version information for the name hint table) is
not needed in the other location.

Hence, let's use a simple database, a CSV file for simplicity, and generate
both files from this.  Easily done, I have an appropriate script and a CSV
file with the information of both Jonathan's current export file and the
current state of the name hint table.

The only detail that keeps me from submitting this right now is the way the
script is implemented.  This is just a natural fit for a Python script.
The default installation comes with a csv module and there are nice ways to
adjust and output boilerplate headers like those needed in those files.

It would be possible to create separate awk scripts instead of the
single Python script, but they would be rather ugly and harder to
maintain than the Python version.
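A rough sketch of the generator idea; the CSV columns and the two output formats below are assumptions for illustration, not the real files:

```python
import csv
import io

# Hypothetical CSV layout: symbol name, standard header, first standard
# version providing it.  The real column set would differ.
DATA = """\
name,header,std
vector,<vector>,98
any,<any>,17
"""

rows = list(csv.DictReader(io.StringIO(DATA)))

# One database, two generated artifacts: entries for the name-hint table
# (which needs the standard version information) ...
hints = ['{{"{name}", "{header}", cxx{std}}}'.format(**r) for r in rows]
# ... and a bare export list for the module-support file (which doesn't).
exports = [r["name"] for r in rows]

print(hints[0])
print(exports)
```

Boilerplate headers for each output file would be emitted the same way, from one small script.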

Of course the problem is: I don't think that there is yet any maintainer
tool written in Python (except some release engineering tools).  The
question is therefore: is it time to lift this restriction?  I cannot today
imagine any machine capable of serving a gcc developer which doesn't also
have a Python implementation.  As long as there is no dependency on exotic
modules I doubt that anything will break.


Opinions?


Re: [RFC] database with API information

2022-09-09 Thread Ulrich Drepper via Gcc
On Fri, Sep 9, 2022 at 5:26 PM Iain Sandoe  wrote:

> One small request, I realise that Python 2 is dead, but I regularly
> bootstrap GCC
> on older machines that only have Python 2 installations.  If possible (and
> it sounds
> plausible if the job is really quite simple) - it would be good to support
> those older
> machines without having to take a detour to find a way to build Python 3
> on them first.
>

Would this really be an issue?  Just as is the case for the gperf-generated
files, the repository would contain the generated files and gperf/python
would only be needed if someone changes those files or explicitly removes
them.


commit signing

2022-09-14 Thread Ulrich Drepper via Gcc
For my own projects I started /automatically/ signing all the git
commits.  This is so far not that important for my private projects,
but it is actually important for projects like gcc: it adds another
layer of security to the supply chain.

My shell prompt (as many other people's as well) shows the current git
branch but in addition also shows the validity of the signature if it
exists.  For this a file with the authorized keys needs to be provided.

I found it easiest to use SSH keys for signing.  One can create a new key
for each project.  If the desktop environment uses GPG daemon or something
like that one doesn't even realize the signing request, it's handled
automatically.

git allows setting up signature handling on a per-project basis, i.e.,
no decision made for one project will have any impact on other
projects.  For painless operation all that is needed is that the
authorized keys are published, but that's not a problem; they are
public keys after all.  They can be distributed in the source code
repository itself.

My question is: could/should this be done for gcc?  It's really easy to set
up:

- create new key:

  $ ssh-keygen -f ~/.ssh/id_ed25519_gcc -t ed25519

  (of course you can use other key types)

- configure your git repository.  This has to be done for each git tree,
the information is stored in the respective tree's .git/config file

  $ git config gpg.format ssh
  $ git config user.signingKey ~/.ssh/id_ed25519_gcc.pub
  $ git config commit.gpgsign true
  $ git config tag.gpgsign true

  If ssh-agent is not used then the user.signingKey must point to the
private key but this is hopefully not the case for anyone.  It's also
possible to add the entire key to the configuration, which doesn't
compromise security.

  It is possible to define global git configurations (by adding --global to
the command lines) but this means the settings are shared with all the
projects you work on.  This can work but doesn't have to.

collect all maintainers' keys in a public place.  There could be a
file 'maintainer-keys' in the gcc tree.  The file contains one line per
key: the public key preceded by the respective email address.  If this
is the case use

  $ git config gpg.ssh.allowedSignersFile maintainer-keys

  At least the original git seems to be happy with relative paths (i.e., if
the file is not at the toplevel an appropriate path can be added)

  Every maintainer then just has to remember to submit any newly created
key as a patch to the 'maintainer-keys' file.  That's it.
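For illustration, a made-up sample of what such a 'maintainer-keys' file could look like, in ssh's allowed-signers format (the addresses and key material below are fabricated):

```
# maintainer-keys: one entry per line in ssh allowed-signers format --
# principal (email address), optional options, key type, base64 key.
drepper@example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFakeKeyDataForIllustration1
maintainer@example.com namespaces="git" ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFakeKeyDataForIllustration2
```

The optional `namespaces="git"` restricts a key to git signatures, which may be desirable for project-specific keys.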

The key creation ideally is a one-time effort.  The git configuration is
for everyone using the gcc git tree a once-per-local-repository effort (and
can be scripted, the gcc repo could even contain a script for that).

After this setup everything should be automated.  Someone not interested in
the signature will see no change whatsoever.  Those who care can check it.
Note, that github also has support for this in their web UI.  CLI users can
use

  $ git config log.showSignature true

to have git display the signature state in appropriate places by default.

If and when signatures are universally used one could think about
further steps like restricting merges based on trust levels, adding
revocation lists, or even refusing pushes without a valid signature.
This would indeed mean a higher level of security.


Opinions?


Re: commit signing

2022-09-14 Thread Ulrich Drepper via Gcc
On Wed, Sep 14, 2022 at 1:31 PM Richard Biener wrote:

> How does this improve supply chain security if the signing happens
> automagically rather than manually at points somebody actually
> did extra verification?


It works only automatically if you have ssh-agent (and/or gpg-agent)
running.  I assume that's what developers do anyway because that's how
they push changes to sourceware.  If you don't have an agent you'll
have to provide the passphrase of the signing key at the time of the
commit.


What's the extra space requirement if every commit is signed?  I suspect
> the signatures themselves do not compress well.
>

The signatures are probably implemented as signed hashes of some sort,
so perhaps an additional SHA256 block plus infrastructure to determine
the key used, etc.  I doubt that this is really measurable with today's
disks, servers, and network connections.


Re: commit signing

2022-09-28 Thread Ulrich Drepper via Gcc
On Wed, Sep 14, 2022 at 2:07 PM Ulrich Drepper  wrote:

> On Wed, Sep 14, 2022 at 1:31 PM Richard Biener 
> wrote:
>
>> How does this improve supply chain security if the signing happens
>> automagically rather than manually at points somebody actually
>> did extra verification?
>
>
> It works only automatically if you have ssh-agent (and/or gpg-agent)
> running.  I assume that's what developers do anyway because that's how they
> like push changes to sourceware.  If you don't have an agent you'll have to
> provide the signature of the signing key at the time of the commit.
>


This was the last message I sent and no further questions or comments
arrived.

Shall I prepare a small patch with an initial version of the key files
(with my key), perhaps a patch to the setup script Jonathan mentioned, and
a few words to be added to a README or similar file (which?)?

Initially this could be optional; we could gather data on the pickup
and only after an initial period switch to making the signing
mandatory.