Re: -Wuninitialized issues

2005-11-03 Thread David Taylor
> From: Jeffrey A Law <[EMAIL PROTECTED]>
> Date: Wed, 02 Nov 2005 19:17:06 -0700
> 
> On Wed, 2005-11-02 at 20:44 -0500, Daniel Jacobowitz wrote:

> > People who use -Wall -Werror are _already_ pissed off about
> > -Wuninitialized.  It virtually guarantees that your build will fail on
> > a new release of GCC.

Putting on my user hat, I would have to disagree.

At EMC we build with '-Wall -Werror' and do not get pissed off by
-Wuninitialized.  Historically, -Wuninitialized does not generate
a lot of false positives in our code.

> Very true, but I don't think that's an excuse to generate even more
> false positives for them! :-)
> 
> Those groups of users also get upset anytime we add something to -Wall, but
> that's another discussion unto itself!

I get upset when a new warning is added to -Wall, it generates noise,
and there's no way to turn it off.  But, let's not have that
discussion today.

> jeff

Take user hat off; put developer hat back on.

David


Command line options and pragmas

2006-01-09 Thread David Taylor
For a variety of reasons, we would like to be able to specify
individual compilation switches *within* individual files.

When building we specify a large number of compilation options.  But,
some files need tweaks of one sort or another to the generic set of
options.

Most of the tweaks involve turning selected warnings off.  But, there
are others as well -- turning warnings on, changing optimization
levels, turning selected optimizations on or off, and so on.

We'd like to be able to specify the tweaks to the individual
compilation flags *within* the individual files.

One of the advantages we see is that it would simplify some of the
makefile logic -- rather than having logic (as we do now) to record
the individual compilation switches and comparing that against what
the options were the last time the file was successfully compiled, we
could just look at the time stamp of the file.

[The logic in question compensates for the lack of .KEEP_STATE support
in GNU make.]

Additionally, solutions such as .KEEP_STATE have potential problems
when doing nested distributed builds.

To change the individual compilation switches (as opposed to the
generic compilation switches), you could then just edit the
corresponding *.c file.

(If you change a generic compilation switch, then *EVERYTHING* needs
to be rebuilt and that is handled separately in our system (by having
a dependency of every *.o file upon the makefile fragment holding
such information).)

The options to be customized do NOT involve options handled by the gcc
driver, only ones handled by the compiler proper (i.e., cc1 and
friends).

Some questions come to mind:

. For starters, does this sound reasonable?  That is, if I implemented
this, and we contributed it back, would it be considered?  Or would it
likely be rejected out of hand?

. Next, if it would not be rejected on the "we don't want to have such
functionality" basis, then the question becomes one of what should the
interface look like?  Some possibilities include:

#pragma GCC command-line -Wprecision-mismatch

  unilaterally set -Wprecision-mismatch

or possibly:

#pragma GCC command-line push(-Wprecision-mismatch)

  set -Wprecision-mismatch, remember previous values

and

#pragma GCC command-line pop(-Wprecision-mismatch)

  reset -Wprecision-mismatch to its previous value

or maybe:

#pragma GCC warnings Wprecision-mismatch

(with some similar syntax for other, non-warning, options).

Or possibly some other syntax?
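For concreteness, here is how the push / pop form might look at the top
of a source file (a sketch only -- the pragma does not exist, and
-Wprecision-mismatch, the file name, and the function name are all
placeholders):

/* foo.c -- wants -Wprecision-mismatch in addition to the generic flags.  */
#pragma GCC command-line push(-Wprecision-mismatch)

#include "foo.h"

int
foo_convert (long value)
{
  return (int) value;  /* the sort of code the extra warning is aimed at */
}

#pragma GCC command-line pop(-Wprecision-mismatch)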

. Additionally, are there options you would emphatically *NOT* want to
ever be allowed to be set this way?

I would personally favor not supporting any options that take file
names -- from a security perspective.  And there should probably be a
command line option to enable / disable support for the pragma at all.

Comments?

David
--
David Taylor
[EMAIL PROTECTED]



Re: Command line options and pragmas

2006-01-09 Thread David Taylor
> Date: Mon, 9 Jan 2006 11:13:22 -0800
> From: Joe Buck <[EMAIL PROTECTED]>
> 
> On Mon, Jan 09, 2006 at 01:46:21PM -0500, David Taylor wrote:
> > For a variety of reasons, we would like to be able to specify
> > individual compilation switches *within* individual files.
> 
> You don't need a gcc modification to do that; you can arrange to get
> the compiler flags from a comment in the file.  A bit of scripting,
> and your build system can find these comments, and either use a default
> flag set or generate an error if the magic comment is not found.

We're considering that option.

One of the considerations is that some files are compiled 20+ times
(e.g., a file that is present in a lot of variations of the product
and contains conditional compilation), and the individual compilation
options sometimes vary depending upon which variation the file is
being compiled for -- so, we'd like to be able to use CPP.

Yes, we can run a sed or awk or perl script over the file, feed the
output to cpp, and then modify the compiler invocation line as
appropriate.  It's something we have considered doing.
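To make the comparison concrete: the magic comment approach would look
something like the first line below (marker name and flags invented for
illustration), while a pragma that survives preprocessing could vary
with the product variation, which is the property we are after
(PRODUCT_VARIANT_A and the pragma itself are likewise hypothetical):

/* LOCAL-CFLAGS: -Wno-format -fno-strict-aliasing */

#ifdef PRODUCT_VARIANT_A
#pragma GCC command-line -Wno-format
#endif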

But, less than 10% of the files have individual compilation options --
over 90% of the files use the generic set without any additions (and
the percentage using the generic set is likely to increase over time).
We would end up invoking the script more than a dozen times for files
without individual compilation switches for every time we invoked it
for a file with individual compilation switches.

So, I'd prefer to avoid the overhead of the script if I could.

David
--
David Taylor
[EMAIL PROTECTED]



Re: Command line options and pragmas

2006-01-09 Thread David Taylor
> Date: Mon, 9 Jan 2006 14:30:03 -0500
> From: DJ Delorie <[EMAIL PROTECTED]>

> > . Next, if it would not be rejected on the "we don't want to have such
> > functionality" basis, then the question becomes one of what should the
> > interface look like?  Some possibilities include:
> > 
> > #pragma GCC command-line -Wprecision-mismatch
> > 
> >   unilaterally set -Wprecision-mismatch
> 
> I was planning on proposing something like:
> 
>   #pragma GCC diagnostic [warning|error|ignore] -Wprecision-mismatch
> 
> The diagnostic machinery already has support for some of this, I had
> hoped to find time to make it fine-grained, allowing you to override
> the KIND of each warning, and thus override -Werror on a
> per-warning-type basis.
> 
> I had planned on forcing the user to place these pragmas before the
> first function definition, otherwise it becomes difficult to track
> when various warnings are in force.

The *BULK* of our files with individual compilation switches only have
switches for warnings or for profiling.  I'd be leery of any attempt
to change code generation (e.g., changing profiling options) in the
middle of a function.  Our intended use (if we implement it) would be
to use the pragma only prior to the start of code -- only comments,
whitespace, and preprocessor directives permitted prior to the pragma.

So, I'd be comfortable with such a restriction long term for code
generation and related options and short term for warnings.

Longer term, I'd like to be able to control warnings on a line by line
basis.  The ability to say "I've examined this expression / line /
block / whatever of code and I'm happy with it with-regard-to warning
XYZ, please be quiet" would be very valuable.
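To illustrate, the sort of thing I would eventually like to write is
roughly the following (a sketch combining DJ's proposed diagnostic
pragma with a hypothetical push / pop pair; the option name is still
the placeholder from earlier in the thread, and the statement is made
up):

#pragma GCC diagnostic push
#pragma GCC diagnostic ignore -Wprecision-mismatch
  total = (int) accumulated;  /* examined; the truncation here is intended */
#pragma GCC diagnostic pop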

Based on Gaby's comments, it sounds like fine-grained control would be
a much bigger project.

David
--
David Taylor
[EMAIL PROTECTED]



Re: gcc: -ftest-coverage and -auxbase

2019-06-17 Thread David Taylor
Sorry for the late reply.  Your message never arrived in my mailbox.
I suspect that corporate email has swallowed it for some stupid
reason.  I'm replying to a copy I found in the mailing list archives
at gcc dot gnu dot org.  Hopefully I didn't screw up the editing.

From: Richard Biener 
Date: Thu, 13 Jun 2019 10:22:54 +0200



On Wed, Jun 12, 2019 at 10:17 PM  wrote:
>
> When doing a build, we use a pipe between GCC and GAS.
> And because we wish to do some analysis of the assembly code,
> we do not use -pipe but instead do '-S -c -'.  And this has worked
> fine for many years.

Can you please show us complete command-lines here?  -S -c -
will only assemble (and require source from standard input
and produce output in -.s).

Actually, GCC recognizes '-c -' as meaning to write to stdout.  The *.c
file is given on the command line to GCC.  And the output of GAS is
specified with -o.

The compile & assemble part of the command line is approximately 2K
bytes in length.  Mostly it's pretty boring.  It's roughly:

/full/path/to/version/controlled/gcc \
-MMD -MF bin//some/dir/path.o.d \
more than a dozen '-iquote ' combos \
some -D switches \
some -imacros switches \
-pipe \
more than a dozen -f switches \
-Wall -Werror and about two dozen additional -W switches \
some -m switches \
-gdwarf-4 -g3 \
-S -o - some/dir/path.c \
|
/full/path/to/version/controlled/as \
-warn-fatal-warnings -64
bin//some/dir/path.o_

On success the *.o_ file will be renamed to *.o in the same directory.

A dozen products, each built on a different machine (whichever dozen
build cluster machines are currently the most lightly loaded).

Each sub-build is done by a GNU make with either a '-j 64' or '-j 128'.

Currently all the compiles write to the same GCNO file.  Not very
useful.  If -auxbase were not just passed to sub-processes but were
actually user settable, I believe that the problem would disappear...

Ignoring documentation (it's needed and important, but I haven't
thought about what to say as yet), I believe that this would be a
one-line change to common.opt and nothing more.

> I was recently looking into collecting some coverage information.
> To that end, I added --coverage to the invocation line.  And it slowed
> things down by more than an order of magnitude!
>
> Investigating, it appears that the problem is the writing of the GCNO
> files.
>
> We do our builds on a build cluster with a lot of parallelism.
> With the result that a dozen machines are each doing a bunch
> of writes to the file '-.gcno' in an NFS mounted directory.
>
> Rather than have a full, not incremental, build take 5-10 minutes,
> it takes 4 hours.  And rather than have each of several thousand
> compiles produce their own GCNO file, they all get overwritten...
>
> Grep'ing around, I found '-auxbase'.  If I correctly understand it,
> when compiling
>
> some/path/name.c
>
> into
>
> bin/some-product/some/path/name.o,
>
> I could simply say
>
> -auxbase $(@:%.o=%)
>
> The problem is that in common.opt, auxbase is marked RejectDriver.
>
> It looks like removing it would solve my problem.  Anyone have a reason
> why removing that would be a bad idea?  Or have a different solution?
>
> Thanks.
>
> David


GCC's instrumentation and the target environment

2019-11-01 Thread David Taylor
I wish to use GCC based instrumentation on an embedded target.  And I
am finding that GCC's libgcov.a is not well suited to my needs.

Ideally, all the application entry points and everything that knows
about the internals of the implementation would be in separate files
from everything that does i/o or otherwise uses 'system services'.

Right now GCC has libgcov-driver.c which includes both gcov-io.c and
libgcov-driver-system.c.

What I'd like is a stable API between the routines that 'collect' the
data and the routines that do the i/o.  With the i/o routines being
non-static and in a separate file from the others that is not
#include'd.

I want them to be replaceable by the application.  Depending upon
circumstances I can imagine the routines doing network i/o, disk i/o,
or using a serial port.

I want one version of libgcov.a for all three with three different
sets of i/o routines that I can build into the application.  If the
internals of instrumentation changes, I want to not have to change the
i/o routines or anything in the application.

If you think of it in disk driver terms, some of the routines in
libgcov.a provide a DDI -- an interface of routines that the
application can call.  For applications that exit, one of the
routines is called at program exit.  For long running applications,
there are routines in the DDI to dump and flush the accumulated
information.

And the i/o routines can be thought of as providing a DKI -- what the
library libgcov.a expects of the environment -- for example, fopen and
fwrite.
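To make the DDI half concrete, a long-running application might do
something like the following with the dump / reset entry points that
libgcov already exports (declared by hand here for the sketch; how the
counters then reach stable storage is exactly the DKI question):

/* Checkpoint coverage counters from a long-running program.  */
extern void __gcov_dump (void);   /* write out the accumulated counters */
extern void __gcov_reset (void);  /* zero them to start a new interval  */

void
coverage_checkpoint (void)
{
  __gcov_dump ();
  __gcov_reset ();
}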

There's also the inhibit_libc define.  If you don't have headers you
might have a hard time including  or some of the other header files,
but if the environment has a way of doing i/o or saving the results,
there is no real reason why it should not be possible to provide
instrumentation.

Comments?


Re: GCC's instrumentation and the target environment

2019-11-20 Thread David Taylor
Sorry for not responding sooner.

Thanks Martin.

Like Joel we have a third party solution to instrumentation.  Part of
my objection to the third party solution is freedom.  There are
customizations we would like, but not having source we're at the mercy
of the vendor both for whether it gets done and the timing.  Part of
the objection is the massive amount of changes I had to make to our
build system to integrate it and the resulting feeling of fragility.
They are reportedly addressing the latter in a future release.

By contrast, looking at GCC based instrumentation, the changes
required to our build system are very small and easy.

Part of my purpose in posting was the belief that this problem --
wanting to instrument embedded code -- is not uncommon and has likely
been solved already.  And the hope that one of the solvers would feel
that their existing solution was in a good enough shape to contribute
back.  Or that someone would point out that there is already an
existing {Free | Open} source solution that I am overlooking.

Since no one has mentioned an existing solution, here is a first draft
of a proposed solution to GCC instrumentation not playing well with
embedded...

NOTE: *NONE* of the following has been implemented as yet.  I would
ultimately like this to be something that, once implemented, would be
considered for becoming part of standard GCC.  So, if you see something
that would impede that goal, or that if changed would improve its
chances, please speak up.

Add a new configure option --{with|without}-libgcov-standalone-env

Default: without (i.e., hosted)

Question: should hosted libgcov be the default for all configuration
tuples?  Or should it only be the default when there are headers?
Or...?

When standalone, suppress -Dinhibit_libc when building the libgcov.a
files and add -DLIBGCOV_STANDALONE_ENV to the command line switches.

Then in libgcov-driver.c, libgcov-driver-system.c, gcov-io.c, replace

all calls of fopen      with calls of   __gcov_open_int
             fread                      __gcov_read_int
             fwrite                     __gcov_write_int
             fclose                     __gcov_close_int
             fseek                      __gcov_seek_int
             ftell                      __gcov_tell_int

             setbuf                     __gcov_setbuf_int

                 Probably belongs inside __gcov_open_int instead of as
                 a separate routine.

             getenv                     __gcov_getenv_int

             abort                      __gcov_abort_int

                 When the application is 'the kernel' or 'the system',
                 abort isn't really an option.

             fprintf                    __gcov_fprintf_int

                 This is called in two places -- gcov_error and
                 gcov_exit_open_gcda_file.  The latter hard codes the
                 stream as stderr; the former calls get_gcov_error_file
                 to get the stream (which defaults to stderr but can be
                 overridden via an environment variable).

                 I think that get_gcov_error_file should be renamed to
                 __gcov_get_error_file, be made non-static, and be
                 called by gcov_exit_open_gcda_file instead of hard
                 coding stderr.

                 For that matter, I feel that gcov_exit_open_gcda_file
                 should just call __gcov_error instead of doing its
                 own error reporting.  And __gcov_get_error_file and
                 __gcov_error should be replaceable routines, as
                 embedded systems might well have a different way of
                 reporting errors.

             vfprintf                   __gcov_vfprintf_int

                 If gcov_exit_open_gcda_file is altered to call
                 __gcov_error and __gcov_error becomes a replaceable
                 routine, then fprintf and vfprintf do not need to be
                 wrapped.

             malloc                     __gcov_malloc_int
             free                       __gcov_free_int

                 Embedded applications often do memory allocation
                 differently.
 
While I think that the above list is complete, I wouldn't be surprised
if I missed one or two.

Other than __gcov_open_int, there would be no conflict if the _int was
left off the end.  I put it on to emphasize that these routines were
not meant to be called by the user, but rather are provided by the
user.  Some other naming convention might be better.

There would be a new header file, included as appropriate.  If the
normal (hosted) build was in effect, then some new (potentially one
line static inline) functions would be defined for each of the above
functions that just call the expected standard libc function.  If the
embedded (standalone) build was in effect, then there would be extern
declarations for each of the above, but *NO* definition -- the
definition would be the responsibility of the application.
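A rough sketch of what that header might look like, showing just a few
of the wrappers from the table above (the file name, guard, and exact
prototypes are illustrative; nothing like this exists in GCC today):

/* libgcov-io-wrappers.h -- hypothetical, per the proposal above.  */
#ifndef LIBGCOV_IO_WRAPPERS_H
#define LIBGCOV_IO_WRAPPERS_H

#include <stddef.h>

#ifdef LIBGCOV_STANDALONE_ENV
/* Standalone: the application must supply the definitions.  */
extern void *__gcov_open_int (const char *name, const char *mode);
extern size_t __gcov_read_int (void *buf, size_t size, size_t n, void *f);
extern size_t __gcov_write_int (const void *buf, size_t size, size_t n, void *f);
extern int __gcov_close_int (void *f);
#else
/* Hosted: one-line wrappers around the usual libc routines.  */
#include <stdio.h>
static inline void *__gcov_open_int (const char *name, const char *mode)
{ return fopen (name, mode); }
static inline size_t __gcov_read_int (void *buf, size_t size, size_t n, void *f)
{ return fread (buf, size, n, (FILE *) f); }
static inline size_t __gcov_write_int (const void *buf, size_t size, size_t n, void *f)
{ return fwrite (buf, size, n, (FILE *) f); }
static inline int __gcov_close_int (void *f)
{ return fclose ((FILE *) f); }
#endif

#endif /* LIBGCOV_IO_WRAPPERS_H */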

Comments?

David

Re: changing the SPARC default

2006-03-17 Thread David Taylor
> Date: Thu, 16 Mar 2006 18:51:59 -0800 (PST)
> From: Alexey Starovoytov <[EMAIL PROTECTED]>
> 
> On Thu, 16 Mar 2006, Joel Sherrill wrote:

> It seems everybody agreed that solaris 10+ can be changed to -mcpu=v9 default.
> Great!
> What are the thoughts about Solaris 7,8,9 ?
> 
> They don't run on embedded sparcs and the legacy of sun4c, sun4m, sun4d
> systems can now be found mostly on ebay alongside 8bit ISA video card.

A datapoint: Solaris 7 runs on sun4c systems.  In fact, it is the
*LAST* Solaris released with support for the sun4c architecture.

> Really, solaris gcc users of sun4c rarity can always configure gcc with
> --with_cpu=v7 or v8. Most of the solaris gcc users are on ultra2 and 3
> and should see immediate benefit of better default.
> I guess it's worth to sacrifice the convenience of the default install
> for sun4c users, so the rest solaris gcc users can have better performance.
> 
> Alex.

David


stabs changes for 64 bit targets

2013-05-13 Thread David Taylor
There are problems when using current STABS debug format for 64 bit
targets.

A STABS entry is 12 bytes:

. e_strx (4 bytes)
. e_type (1 byte)
. e_other (1 byte)
. e_desc (2 bytes)
. e_value (4 bytes)

Unless you have an awful lot of debug information, 4 bytes for a
string table index is fine.  But, for 64 bit targets, 4 bytes for an
address is not so good.

Initially we were using the x86-64 small memory model.  But we now have
a need to place some data symbols at higher addresses.  Doing so and
compiling with debugging turned on causes linkage problems -- with STABS
relocations.

So, we are now looking at 16 byte STABS entries -- increasing the size
of the e_value field from 4 bytes to 8 bytes and leaving the other 4
fields alone.
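Expressed as C structures, the existing entry and the proposed one look
roughly like this (field names and sizes as listed above; the struct
names and the 16-byte form itself are only for illustration):

#include <stdint.h>

/* Existing 12-byte entry.  */
struct stab_entry
{
  uint32_t e_strx;   /* string table index */
  uint8_t  e_type;   /* stab type */
  uint8_t  e_other;
  uint16_t e_desc;
  uint32_t e_value;  /* 32-bit value / address */
};

/* Proposed 16-byte entry: only e_value changes, growing to 8 bytes.  */
struct stab_entry64
{
  uint32_t e_strx;
  uint8_t  e_type;
  uint8_t  e_other;
  uint16_t e_desc;
  uint64_t e_value;  /* 64-bit value / address */
};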

We want things to be backwards compatible and to have one tool chain
that is capable of producing / consuming both formats.

We also want the changes to be implemented in such a way that they would
be considered for inclusion in future versions of GCC, BINUTILS, and
GDB.

To that end, our thinking is that:

. GCC would still produce STABS entries as now; but, would also support
a command line switch to tell the assembler that the larger STABS were
wanted.

. the new section would be named .stab2 instead of .stab

  [I wasn't sure what to name it; I didn't want to call it .stab64 as I
   could envision someone someday needing to increase the size of the
   string table.  I don't really care what it is called provided it is
   not .stab (backwards compatibility).]

. if both .stab and .stab2 are present in an executable, they would
  share the same string table (unless this proves to be a problem, in
  which case the .stab2 string table would be named either .stab2str or
  .stabstr2)

. GAS, based on a command line switch would produce either 12 byte or 16
  byte STABS.

. OBJDUMP / LD / GDB would look at the section name to determine how to
  read the STABS.

I am told that the GCC and BINUTILS changes are done except for the
addition of new tests to the testsuite.  The GDB changes have been
started.

Does this sound like a reasonable approach?  Is the new section name
okay?  Or do people have suggestions for a better name?  Anything left
out that needs to be done for the changes to be considered for
inclusion?

EMC has paperwork on file for copyright assignment of past and future
changes to GCC, BINUTILS, and GDB.

David
--
David Taylor
dtay...@emc.com


Re: stabs changes for 64 bit targets

2013-05-14 Thread David Taylor
Jakub Jelinek  wrote:

> On Mon, May 13, 2013 at 10:45:46AM -0400, David Taylor wrote:
> > There are problems when using current STABS debug format for 64 bit
> > targets.
> 
> Why are you considering extending STABS at this point?
> STABS support might very well be dropped altogether from GCC 4.9 or the next
> release, people just should use DWARF[234] everywhere.

There are multiple reasons.  One of the big reasons is...

Prior to GCC 4.7, DWARF is too verbose compared to STABS.

In STABS, all strings go into the string table; identical strings get
put into the table just once.

In DWARF, prior to GCC 4.7, macro strings do not go into the string
table.  If 1000 files all include a given header file, each #define in
that header gets its own string in the debug information -- so the
string is present 1000 times.  GCC 4.7 (DWARF4) fixes this.

We have STABS extensions (posted years ago, but never merged) that
record macros in the STABS debug information -- at the -g3 level, just
like for DWARF.

[Posting updated MACRO extensions and trying to get them merged in is on
my plate as part of the internal upgrade from gdb 7.1 to gdb 7.6.  The
extensions predate 7.1.]

We are currently using GCC 4.5.4 for most things; 4.6.x for others.  I
don't know the details, but 4.7 was (is?) considered unacceptable, so
we're planning on skipping it and waiting for 4.8.1 or later.

There are other reasons besides the DWARF verboseness, but they are
solvable.  The verboseness (over 10x increase in the size of the elf
file) is a show stopper.  So, for now, we're sticking with STABS.

I would like the 16-byte STABS to be done in a manner that they would
be considered for inclusion.


Re: stabs changes for 64 bit targets

2013-05-14 Thread David Taylor
Steven Bosscher  wrote:

> On Tue, 14 May 2013 10:38:02 -0400, David Taylor wrote:
> > There are other reasons besides the DWARF verboseness, but they are
> > solvable.  The verboseness (over 10x increase in the size of the elf
> > file) is a show stopper.
> 
> People keep saying that here from time to time. You said it earlier
> this year, but despite trying quite hard I've never been able to
> reproduce such horrible reasons anywhere. I sent you some results of
> my tests, see http://gcc.gnu.org/ml/gcc/2013-01/msg00142.html and the
> whole discussion surrounding it, but you did not reply.

I remember replying to several messages on the subject from various
people.

As I recall, the problem is largely gone in 4.7 -- it was an earlier
version that had the big big problem.

Prior to 4.7, DWARF at level 3 verboseness (i.e., with macro emission
turned on) stores each macro separately rather than in a string table.
Non-macro strings are in a string table, but not macros.  With STABS, at
-g3 (store macros -- local extension, posted years ago, but never
accepted / merged) all the strings go into the string table with each
string appearing just once.

I learned after the discussion had died out that the bulk of the
increase was due to the macros.  If I recall correctly, without macros
the increase was more like 20-30%.  I might be remembering the
percentage wrong, but I do remember that DWARF was still bigger than
STABS.

Being able to expand macros in the debugger is important to some of our
users, so I'm not willing to turn that off.

Once 4.8.1 comes out I'm willing to revisit the question of possibly
switching to DWARF and trying to convince my boss and his boss that we
should switch.  But, not now.

> I really don't think further changes to the stabs support should be
> made at this point, other than a full re-write to mimic the DWARF
> model of representing the debug info internally somehow and only emit
> it at the end. That would make it compatible again with PCH and LTO,
> and would *finally* make the front ends not dependent on asm target
> hooks/macros. It's even been suggested to internally represent
> everything as DWARF and to write a dwarf-to-stabs debug emitter.
> 
> Ciao!
> Steven

From a format perspective, the 12 bytes -> 16 bytes change is fairly
minor.  When fetching / storing an entry, the meaning / interpretation
of the entry is the same except for the size of the address field -- 4
bytes vs 8 bytes.

I could argue that for the x86-64, for anything other than small memory
model, STABS should have had 8 bytes for the address field all along.
But, it's been 4 bytes for years and I would like to maintain backwards
compatibility...

So, I think that the cleanest backwards compatible solution is to put
the STABS with a bigger address field into a different section.

I would be agreeable to using the same section name as currently and
putting the information into the zeroth entry if we can achieve
consensus on what it should be.

On the other hand, if the consensus is that there should be no changes
to STABS, then when we finish our changes I won't bother posting the
changes.


stabs support in binutils, gcc, and gdb

2013-01-03 Thread David Taylor
What is the status of STABS support?

I know that there is considerably more activity around DWARF than STABS.
It appears that STABS is largely in maintenance mode.  Are there any
plans to deprecate STABS support?  If STABS enhancements were made and
posted would they be frowned upon?  Or would they be reviewed for
possible inclusion in a future release?

[We have copyright assignments in place for past and future changes to
BINUTILS, GCC, and GDB -- and it took almost 4 years from start to
finish -- I do not want to ever have to go through that again with the
company lawyers!  So, paperwork should not be an issue.]

I know that DWARF is more expressive than STABS.  And if it didn't cause
such an explosion in disk space usage, we would probably have switched
from STABS to DWARF years ago.

Switching to DWARF causes our build products directory (which contains
*NONE* of the intermediate files) to swell from 1.2 GB to 11.5 GB.
Ouch!  The DWARF ELF files are 8-12 times the size of the STABS ELF
files.

If the DWARF files were, say, a factor of 2 the size of the STABS files,
I could probably sell people on switching to DWARF; but, a factor of 8
to 12 is too much.

Thanks.

David


Re: stabs support in binutils, gcc, and gdb

2013-01-11 Thread David Taylor
Doug Evans  wrote:

> On Thu, Jan 3, 2013 at 9:52 AM, nick clifton  wrote:
> >> Switching to DWARF causes our build products directory (which contains
> >> *NONE* of the intermediate files) to swell from 1.2 GB to 11.5 GB.
> >> Ouch!  The DWARF ELF files are 8-12 times the size of the STABS ELF
> >> files.
> >>
> >> If the DWARF files were, say, a factor of 2 the size of the STABS files,
> >> I could probably sell people on switching to DWARF; but, a factor of 8
> >> to 12 is too much.
> >
> >
> > Have you tried using a DWARF compression tool like dwz ?
> >
> >   http://gcc.gnu.org/ml/gcc/2012-04/msg00686.html
> >
> > Or maybe the --compress-debug-sections option to objcopy ?
> 
> Yeah, that would be really useful data to have.
> 
> Plus, there's also -gdwarf-4 -fdebug-types-section.
> 
> So while plain dwarf may be 8-12x of stabs, progress has been made,
> and we shouldn't base decisions on incomplete analyses.
> 
> If we had data to refute (or substantiate) claims that dwarf was
> *still* X% larger than stabs and people were still avoiding dwarf
> because of it, that would be really useful.
> 

DWARF alone is more than 8-12 times larger than STABS alone.

For our product, the DWARF elf file is 8-12 times larger than the STABS
elf file.  But, part of the file is the text + data + symbol table +
various elf headers.  So, the debugging information swelled by a larger
factor.

Some numbers.  Picking d90a.elf because it is first alphabetically.

[As to what d90f.elf is -- that's unimportant; but, it's the kernel for
one of the boards in one of our hardware products.]

With STABS, it's 83,945,437 bytes.  If I strip it, it's 34,411,472
bytes.

SIZE reports that the text is 26,073,758 bytes and that the data is
8,259,394 bytes, for a total of 34,333,152.  So, the stripped size is
78,320 bytes larger than text+data.

From objdump:

 77 .stab 01f40700      0208deb8  2**2
  CONTENTS, READONLY, DEBUGGING
 78 .stabstr  00e0b6bc      03fce5b8  2**0
  CONTENTS, READONLY, DEBUGGING

So, the two STABS sections come to a total of 47,496,636 bytes.

(Stripped size 34,411,472) + (size of .stab & .stabstr) is 2,037,329
bytes shy of the unstripped size.  Presumably symbols.

DWARF 4 total file size 967,579,501 bytes.  Ouch!  Stripped 34,411,440
bytes.  Which is 32 bytes smaller than the stabs case.  Continuing...

Adding up the various debugging sections I get

931,076,638 bytes for the .debug* sections

52,977 for the .stab and .stabstr sections (not sure where they came
from -- maybe libgcc?  Origin is unimportant for the present
purposes.)

Ignoring the 52,977 stabs stuff, that's 931076638 / 47496636 ~=  19.6.

Using DWZ reduced the elf file size by approximately 1% when using dwarf
3 or dwarf 4.  With dwarf 2 the file is about 10% bigger and dwz reduces
it by about 10% -- i.e., to about the same file size as when using dwarf
[34].

Using objcopy --compress-debug-sections reduced the overall elf file
size to approximately 3.4 times that of the stabs file -- definitely
better than the 11.5 ratio when not using it.

Summarizing:

STABS:

total file size:    83,945,437
text+data:          34,333,152
debugging:          47,496,636
other:               2,115,649

DWARF:

total file size:   967,579,501
text+data:          34,333,120  (don't know why it is 32 bytes smaller)
DWARF debugging:   931,076,638
STABS debugging:        52,977
other:               2,116,766

file size ratio:  967,579,501 / 83,945,437 = 11.5
debug size ratio: 931,076,638 / 47,496,636 = 19.6

(It would actually be slightly worse if the remaining ~50K of STABS was
converted to DWARF.)

If I use objcopy --compress-debug-sections to compress the DWARF debug
info (but don't use it on the STABS debug info), then the file size
ratio is 3.4.

While 3.4 is certainly better than 11.5, unless I can come up with a
solution where the ratio is less than 2, I'm not currently planning on
trying to convince them to switch to DWARF.

David


Re: stabs support in binutils, gcc, and gdb

2013-01-14 Thread David Taylor
Andreas Schwab  wrote:

> David Taylor  writes:
> 
> > [As to what d90f.elf is -- that's unimportant; but, it's the kernel for
> > one of the boards in one of our hardware products.]
> 
> Is it an optimized or an unoptimized build?

Optimized, -O2.  According to find piped to wc, there are 2587 C files,
11 assembly files, 5 C++ files, and 2839 header files.

Across the files, there are 4.9M lines in C files, 870K lines in header
files, 9.7K lines in assembly, and 5.9K lines in C++.



modified x86 ABI

2007-10-22 Thread David Taylor
At EMC we have a version of GCC which targets the x86 with a non
standard ABI -- it produces code for 64 bit mode, but with types
having the 32 bit ABI sizes.  So, ints, longs, and pointers are 32
bits -- that is, it's ILP32 rather than LP64 -- but with the chip in
64 bit mode.

Actually, pointers are somewhat schizophrenic -- software 32 bits,
hardware 64 bits.
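In code terms, the type model is simply this (illustrative only; not
part of the changes themselves):

#include <assert.h>

int
main (void)
{
  /* ILP32 sizes, even though the processor runs in 64 bit mode.  */
  assert (sizeof (int) == 4);
  assert (sizeof (long) == 4);
  assert (sizeof (void *) == 4);  /* the 32-bit "software" pointer */
  return 0;
}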

Currently the changes are against 3.4.6 and are not yet
``productized''.

If this set of changes was cleaned up, finished, and made relative to
top of trunk rather than relative to 3.4.6, would people be interested
in them?

Put another way, should I bother to post them to gcc-patches (probably
3-6 months out) for possible inclusion into gcc?

Thanks.

Later,

David