> Okay for stage1?
Ok, assuming everyone agrees to those versions ;-)
> 2006-10-12 Kaveh R. Ghazi <[EMAIL PROTECTED]>
>
> * configure.in: Require GMP-4.1+ and MPFR-2.2+.
> * configure: Regenerate.
> Could this patch be applied now?
> http://gcc.gnu.org/ml/gcc/2006-07/msg00210.html
Assuming it's been bootstrapped with no regressions, and the legal
paperwork is in place, yes.
> Okay for mainline?
Ok. src too, please.
> 2006-11-06 Kaveh R. Ghazi <[EMAIL PROTECTED]>
>
> * configure.in: Robustify error message for missing GMP/MPFR.
>
> * configure: Regenerate.
The r8c/m16c family can shift by at most 16 bits at a time, and only 8
bits at a time with constant shifts. So, to do a variable-count shift
on a 32-bit value, it needs to emit a conditional, turning the
attached example into this:
i = 0xf;
if (j >= 16)
{
i >>= 8;
i >>= 8;
j -= 16;
}
I compared the generated code with an equivalent explicit test,
and discovered that gcc uses a separate rtx for the intermediate:
i = 0xf;
if (j >= 16)
{
int i2;
i2 = i >> 8;
i = i2 >> 8;
j -= 16;
}
This seems to avoid the combiner problem, because you don't have the
same
"Ed S. Peschko" <[EMAIL PROTECTED]> writes:
> And in any case, why should it be off-topic?
Regardless of how much it affects us, it's off-topic *by definition*
in *this forum*. This isn't the right place to discuss such topics
because that's the way we want it to be.
> This is because libiberty's API is all internal really and is always
> changing and never stable. It is not really a well defined library
> unlike say libgomp.
... although we do try to keep backward compatibility when possible, so
that newer libiberties work with older gcc/binutils/gdb/whatever.
Joe Buck <[EMAIL PROTECTED]> writes:
> The ordinary user who builds gcc from source does not need *any*
> version of automake, autoconf, etc., so any strict requirements that
> are imposed on these tools have an impact only on gcc developers.
I wish we could have similar requirements for GMP and MPFR, rather
than requiring the user to pre-install them on pretty much EVERY
computer.
> > I wish we could have similar requirements for GMP and MPFR, rather
> > than requiring the user to pre-install them on pretty much EVERY
> > computer.
>
> Do you mean that gcc should be distributed with GMP and MPFR libraries
> in the tarball? (There had been a discussion about including them
Paolo Bonzini <[EMAIL PROTECTED]> writes:
> > That idea got nixed, but I think it's time to revisit it. Paolo has
> > worked out the kinks in the configury and we should apply his patch and
> > import the gmp/mpfr sources, IMHO.
>
> Note that these two issues (my patch, which by the way was start
> DJ, as a build machinery maintainer, you are authorized to approve
> such a patch. Is anything holding you back?
You mean, besides politics?
The last time such a patch came through, we were in the middle of
discussing the various --with-* options. I wanted to let that settle
first.
> I suggest that you send this report to [EMAIL PROTECTED], the
> DJGPP port of GCC maintainers are much more likely to respond there.
He did that first.
Is your target a newlib target? If so, are you including --with-newlib?
> Why isn't --with-newlib the default for newlib targets?
--with-newlib *tells* us that it's a newlib target.
> Newlib targets are targets without their own native libc. I find it
> exceedingly hard to believe that AIX falls into this category.
Newlib supports some platforms that have their own native libc.
subreg_get_info() in rtlanal.c blindly assumes that any hard register
can hold any smaller-than-native mode:
nregs_ymode = hard_regno_nregs[xregno][ymode];
. . .
&& (GET_MODE_SIZE (ymode) % nregs_ymode) == 0)
However, there are registers in m32c that cannot hold a QImode value.
By this,
> So can you expand on what is actually going wrong?
At the moment, the problem is divide by zero - nregs_ymode is zero.
IIRC the problem before was that reload kept choosing $r2 or $r3 for
pseudos that were QImode. Since the m32c is already register starved,
this leads to unfixable situations.
> I believe we've generally assumed that all hard registers can be
> subreg'd. That said, HARD_REGNO_MODE_OK should keep QImode values
> out of those registers. And insn constraints should keep reload
> from using those registers for QImode insns. So can you expand on
> what is actually going wrong?
Followup - it seems that CANNOT_CHANGE_MODE_CLASS governs whether
these subregs are attempted. It's not clear from the documentation
that it does this.
Here's an example of bad assumptions. The current code calculates the
subreg location BEFORE checking to see if such a subreg is legal.
This patch moved the legality check before the location calculations.
With this patch, I can build gcc's libraries and newlib, but I haven't
run full regressions
On behalf of Red Hat I would like to publish patches to add support
for the Toshiba Media Processor (MeP) to GCC 3.4.
We don't expect this port to be accepted into the gcc source tree
as-is, as the 3.4 branch is closed to new ports, and this port needs
some core gcc changes. We don't yet have a p
Why do we use 256 instead of BIGGEST_ALIGNMENT in ix86_data_alignment?
This is causing all sorts of build problems for djgpp, as I'm getting
lots of warnings about too-big alignments, and with -Werror...
Index: i386.c
===
--- i386.c
> It is to improve performance of string functions on larger chunks of
> data. x86-64 specify this, for x86 it is optional. I don't think we
> should end up warning here - it is done only for static variables where
> the alignment can be higher than what BIGGEST_ALIGNMENT promise.
Higher than BIGGEST_ALIGNMENT?
> Yes, BIGGEST_ALIGNMENT is supposed to be the biggest alignment the
> compiler will ever use.
Will ever use, or can ever use? Based on the code, it looks like "can
ever use" - i.e. it's an edict to the compiler to not exceed that
value, thus varasm warns when you exceed it.
> So if ix86_data_a
> For ELF, this would be 256, good. For pecoff, that would be smaller.
Actually, I think pecoff supports larger alignments. DJGPP is plain
COFF (similar to i386-coff).
> I thought BIGGEST_ALIGNMENT was the largest alignment that the
> processor ever requires for any data type. Which is not the
> same thing, since this is alignment desired for performance.
Since varasm complains when alignment exceeds MAX_OFILE_ALIGNMENT, and
MAX_OFILE_ALIGNMENT defaults to BIGGEST_ALIGNMENT
> > I like the "min (256, MAX_OFILE_ALIGNMENT)" fix...
>
> So do I.
Ok to apply then? Tested via djgpp cross-compile and cross-host.
* config/i386/i386.c (ix86_data_alignment): Don't specify an
alignment bigger than the object file can handle.
Index: i386.c
===
> The assembler should produce a hard error if one asks it for an
> alignment that can't be satisfied.
The djgpp assembler quietly ignores such requests (not intentional);
the object file just stops at 2**4. I only noticed the bug because
gcc itself complains, and with -Werror (as djgpp and bi
> Yes, this is OK.
Thanks, applied.
> (to be very pedantic, we can assert that MAX_OFILE_ALIGNMENT>=256 on
> x86-64 targets, but well).
My head hurts at the thought of x86_64-aout.
> I fully agree with Richard's interpretation concerning
> BIGGEST_ALIGNMENT meaning - ie in special cases for pe
> > Something like this, then?
>
> Sure.
Committed, then.
> I found these problems until now in libc++, libiberty.
Could you be more specific about the libiberty ones?
> In libiberty I found the following points.
Do any of these cause real failures?
I'm working on readying the Toshiba Media Processor (mep-elf) port for
contribution to GCC 4.x, but we added some core changes needed to
support it. The changes are listed below; I'd like some feedback
about these before I go too far with them. Are these concepts
acceptable for inclusion in gcc?
> This and the register changes come close to multi-arch gcc.
Yup. The core has two modes, core and vliw, and the coprocessor(s)
each have their own units. The core manages the opcode processing,
but the coprocessor does the work.
> Historically we have not tried to support different architect
> Personally I'd love to see us go this way if it doesn't
> inconvenience us too much.
It would be useful to be able to optimize each function for, say, arm
or thumb mode based on -Os and/or some heuristics. As a long-term
goal, at least.
> This is what we do for the Cell also, we expect people to compile
> using two different compilers right now, but we are actually looking
> into doing an "one source" based compiling where some functions or
> loops are pushed off to the SPUs via annotations like the OpenMP
> ones.
It sounds like
The minmax_operator predicate also checks the operands of the operator
to be appropriate, but the minmax_operator predicate is used for both
integer and floating point operations. The predicate, as is, only
matches the integer operands.
Taking out the check on XEXP(...) in minmax_operator seems
> You just have to make sure that the predicate only accepts the types
> of operands the insn and constraints are prepared to handle. I would
> be a little bit skeptical of removing the gpr_or_int10_operand test,
> for example. But it would be reasonable to check something else for a
> floating
> > if (targetm.disallow_inlining_p (node->decl, decl))
> >return false;
> >
> > if (targetm.disallow_call_p (current_function_decl, function))
> >return error_mark_node;
>
> I don't see a real problem with this, but I would prefer to see
> "allow_XX" rather than "disallow_XX". It's
> Oh, I get it now. No, there is no reason for the duplication between
> minmax_operator and the insn itself. You should be able to remove the
> tests from minmax_operator. I wonder why they are there at all?
This, then?
2007-03-15 DJ Delorie <[EMAIL PROTECTED]>
> Ok.
Thanks! Applied.
> Do you mean where is the best place to call these functions?
Yup.
> Look at the calls to cgraph_mark_edge in ipa-inline.c
There is no such function. I couldn't find anything in ipa-inline
that (1) had access to both ends of the call edge, (2) was actually
called, and (3) was called before th
> Guys - what branch/tag are you looking at doing this on?
It's only a couple of lines of code, do we need a branch for that?
Or do you mean the COPmode changes, which are bigger? They still
might be manageable enough for trunk, if the timing is right.
> > Sorry, I meant cgraph_mark_inline. It looks like what you want to
> > me. But maybe I'm misreading it.
>
> And cgraph_check_inline_limits
The magic place was cgraph_mark_inline_edges.
Turns out, when I changed disallow_* to allow_*, I forgot to reverse
the sense of the target's implementation.
> This
> affects the generators, some MI files, etc. The types don't exist
> unless the target calls for them.
Here's the first pass at this portion of the patch, originally written
by Richard over three years ago. No regressions on i686-linux.
2007-03-26 Richard Sandiford <[EMAIL PROTECTED]>
> Earlier you sent out a patch preventing inlining. That suggests that
> you can not compile code to run on both the main processor and the
> coprocessor at the same time.
No, that's not how it works. We always support both the main
processor and the coprocessor at the same time, in the same
co
When cross compiling with a sysroot, you sometimes end up with nested backticks.
The case we're seeing it with is m32r-elf, where gcc_tooldir is defined thusly:
gcc_tooldir = $(libsubdir)/$(unlibsubdir)/`echo $(exec_prefix) | sed -e
's|^$(prefix)||' -e 's|/$(dollar)||' -e 's|^[^/]|/|' -e
's|/[
"Dave Korn" <[EMAIL PROTECTED]> writes:
> Doh. Yes, we'd need immediate evaluation *and* $(shell ...).
I think it's *or* not *and*. How about this? Seems to work for me.
Index: configure.ac
===
--- configure.ac(revision
> On Tue, Mar 27, 2007 at 03:01:04PM -0400, DJ Delorie wrote:
> > - CROSS_SYSTEM_HEADER_DIR='$(gcc_tooldir)/sys-include'
> > + CROSS_SYSTEM_HEADER_DIR='$(shell echo $(gcc_tooldir)/sys-include)'
>
> Don't you need more quotes than that?
I think
> I only meant:
>
> CROSS_SYSTEM_HEADER_DIR='$(shell echo "$(gcc_tooldir)/sys-include")'
I figured you meant that. Can you think of an example that would
benefit from this quoting?
Ok, I suppose, as long as the backticks still get expanded.
> Currently the complete ".rodata" section is copied from load address
> (ROM) to RAM, that is by treating it similar to ".data" section.
Right, the linker scripts know which chips have accessible flash and
which don't.
> We went through the discussion in the following link and realized
> that t
"H. J. Lu" <[EMAIL PROTECTED]> writes:
> Does anyone have suggestions to resolve this? Why not use structure
> of bitfields instead of int for target_flags?
Years ago I added an option to support multi-way options using the
switch table and default values. I'm not sure how that translated
into t
In dwarf2out.c : dwarf2out_frame_init we have this code:
#ifdef DWARF2_UNWIND_INFO
if (DWARF2_UNWIND_INFO)
initial_return_save (INCOMING_RETURN_ADDR_RTX);
#endif
However, gdb really needs that slot to unwind stack frames when
debugging, on targets that don't use dwarf2 unwinding (i.e. cygwin).
Those frame offsets are relative to $fp, not $sp. *Those* offsets are
the same for those functions. Your debugger needs to interpret the
DW_CFA_def_cfa_register codes.
> (insn 28 26 29 1 /mnt/disk2/src/gcc/gcc/libgcc2.c:464 (set (mem/i:HI
> (reg/f:HI 8 si [orig:30 D.1371 ] [30]) [5 +0 S2 A16])
> (subreg:HI (reg/v:DI 31 [ u ]) 0)) 1 {*movhi} (nil)
> (nil))
This is a tricky one. You need to split up the moves early enough to
let reload be flexible,
> (a) the numbers reported by the "time" command,
> (b) what sort of machine this is and how old,
Thinkpad 600 (RHL 9 i386, PII 266MHz) 192Mb (7 yrs old)
real    0m46.115s
user    0m40.080s
sys     0m3.930s
Dual Opteron 246 (FC3 x86_64, 3GHz) 2Gb (new)
real    0m4.344s
user    0m3.875s
sys
> > Dual Opteron 246 (FC3 x86_64, 3GHz) 2Gb (new)
>
> Lucky guy! ;)
Oops, I mean 2GHz :-P
(I have another new machine that's a P4 3GHz)
> probably the sanest thing is to go with the automake-like approach of
> one .d file per .c file, which then can be annotated without having to
> write logic to parse a big dependency file and update it in place.
The problem with .d files is that there's no good automatic way to
deal with header
> That script doesn't really parse the file at all, it just scans for
> #include lines, and it processes each header only once no matter how
> many files reference it. Which has got to be faster than what
> cpplib is doing.
Right, I figured you could run it once just to see how *much* faster
it
> Is there a good way of creating an assembler comments directly from RTL?
>
> I want to be able to add debugging/explanation strings to assembler
> listing (GAS). Unfortunately I want to do this from RTL prologue and
> epilogue (and thus avoid using TARGET_ASM_FUNCTION_EPILOGUE - where
> it woul
It seems to me that this kind of bug should have been noticed already,
so... am I missing something?
2005-03-21 DJ Delorie <[EMAIL PROTECTED]>
* optabs.c (expand_binop): Make sure the first subword's result
gets stored.
> Nick Clifton and I have been discussing the idea of keeping GCC and
> binutils' copy of dwarf.h in sync. I've just resolved all of the
> differences with the binutils version of the file. Perhaps DJ's
> merge script could keep gcc/gcc/dwarf.h in sync with
> src/include/elf/dwarf.h as it does f
> On Mon, 21 Mar 2005, DJ Delorie wrote:
> > 2005-03-21 DJ Delorie <[EMAIL PROTECTED]>
> >
> > * optabs.c (expand_binop): Make sure the first subword's result
> > gets stored.
>
> This is OK for mainline, provided that you bootstrap and r
> Would there be any objection to patches that convert function
> definitions in libiberty to use ISO C prototype style, instead of
> K&R style?
I would be in support of such a patch iff it converts all the
functions, not just the ones gcc happens to use.
> Just to make sure I understand. I was thinking of whatever was
> under $GCC/libiberty (and included). Are you thinking of something
> more?
No.
> A single patch is a huge stuff; I propose to break it into a series
> of patches. Is that OK with you?
I only want to avoid a situation where li
> I take it that all libiberty-using projects have taken the plunge,
> then? You vetoed this conversion awhile back because libiberty had
> to be done last.
At this point, I think libiberty *is* the last.
> What's your opinion on dropping C89 library routines from libiberty?
What would that buy us?
> Less to maintain is all I was hoping for. I think the configure
> scripts (both libiberty's and gcc's) could be simplified quite a bit
> if we assumed a C89 compliant runtime library, as could libiberty.h
> and system.h.
Well, gcc can make assumptions libiberty can't, and as far as
libiberty's
> Isn't that what newlib is for...?
"Should be" does not mean "is". I know libiberty has been used for
this purpose in the past (remember the demangler in libstdc++ times?)
so I wouldn't want to assume we aren't still doing it.
> I should be clear, though; I only want to make this assumption fo
How to convert this code? There is no single OPT_* that reflects when
the first warning is emitted.
if (params == 0 && (warn_format_nonliteral || warn_format_security))
warning (0, "format not a string literal and no format arguments");
else
warning (O
> To reflect the logical intent of these options while passing a
> unique OPT_* to each warning call, you'd need to add an option
> -Wformat-security-nonliteral for the warnings in the intersection of
> the two options;
At one point I proposed a system that let you say "this option infers
these o
> No company is going to spend money on fixing this until we adjust
> our (collective) attitude and take this seriously.
We could call ulimit() to force everyone to have less available RAM.
Connect it with one of the maintainer flags, like enable-checking or
something, so it doesn't penalize dist
> We already do that for when checking is enabled, well the GC heuristics
> are tuned such that it does not change which is why
> --enable-checking=release is always faster than without it.
Right, but it doesn't call ulimit(), so other sources of memory
leakage wouldn't be affected. I'm thinking
> so I assume setting hard ulimit to 128MB will just result in build
> process crashing instead of slowdown and swapping,
We would limit physical RAM, not virtual RAM; see RLIMIT_RSS in "man
setrlimit". The result would be slowing down and swapping, not
crashing.
> What I have problem understanding is the last sentence of this
> paragraph in the light of your claim that it will results in
> swapping especially when we consider developers' machines with
> 512MB/1GB RAM, i.e. machines where memory is not "tight".
Sigh, Linux works the same way. Processes c
> So you can say "mem=128m" or the like.
Yes, but that doesn't help when I want to test one application on a
system that's been otherwise up and running for months, and is busy
doing other things. The RSS limit is *supposed* to do just what we
want, but nobody seems to implement it correctly any
> (There's still a POSIX-ism in the generator, in that it tries to
> write to "/dev/null". On Windows systems, I bet this will often
> work, but create a real file with that name. It would be better,
> and avoid portability problems, to guard the calls to fwrite, etc.,
> with "if (file)" rather
> This is still not an answer to the question I originally asked - do you
> see any way IN C to write code which has the relevant property of the
> class above (that is, that the FOOmode constants are not accessible
> except to authorized code) and which does not rely on free conversion
> between
> This doesn't do what I want at all. The goal is to make the *symbolic
> enumeration constants* inaccessible to most code.
Oh.
enum {
THE_VAL_QUUX_ENUMS
} TheValQuux;
If not defined, you get a single enumerator, THE_VAL_QUUX_ENUMS. The
"authority" can define it as a comma-separated list of enumerators, so
it gets expanded.
>(2) When and if you switch to this:
>
> class machine_mode
> {
> enum value_t {
>VOIDmode, SImode, // ...
> } value;
>
> // accessors, whatever ...
> };
I think what Mark wants is to migrate to this:
class machine_mode
> No, the goal is to make the *values* inaccessible, not the names.
No, *I* want gcc to stop doing *&$@ like this:
stack_parm = gen_rtx_PLUS (Pmode, stack_parm, offset_rtx);
It should use GET_MODE(stack_parm) in case the target has multiple
pointer sizes.
And YES I have a port with multiple pointer sizes.
> Furthermore, that does not stop an enthusiastic programmer from
> feeding the interface functions with the wrong values
If you seed the first enum from DATESTAMP, and properly range check,
you can find these cases pretty quickly and abort.
TVQ_SEED = (DATESTAMP%10) * 1000,
TVQ_FOO1,
...
> The cases I've found in my conversion was when codes use plain
> "0" instead of VOIDmode or whatever machine_mode is appropriate.
> That use of plain 0 breaks compilation with a C++ compiler.
If the #include isn't portable enough, just hard code a 42. We'd need
suitable changes for insn-modes
> Might it be more desirable for the compiler's code to only refer to
> target "type" modes as opposed to "size" modes?
Not always, see my mail about Pmode. The problem isn't just how gcc
refers to machine words, but that gcc assumes their usage is context
independent or inflexible. For example
> Now we have e.g. XNEW* and all we need is a new -W* flag to catch
> things like using C++ keywords and it should be fairly automatic to
> keep incompatibilities out of the sources.
Why not this?
#ifndef __cplusplus
#pragma GCC poison class template new . . .
#endif
> where then the target may declare class machine_mode
> target_int_mode ("HI", 16),
This is where we disagree. The *target* shouldn't map types to modes.
The *MI* should map types to modes. The target just creates the modes
it supports and describes them. The MI looks them up by description.
> - ok, and how does it know that it needs a 32-bit unsigned scalar?
tm.h: #define INT_TYPE_SIZE 32
Combined with "unsigned int foo;" in the user's source file.
The MI doesn't need to know that this fits in a QImode.
> the world is it desirable to go in a big circle to identify
> which
> which is defined to correspond to some physical mode
Close. Defined to correspond to one or more physical modes.
> - Huh?, can you provide a single example of where a char type would
> be mapped by the target to two different target specified modes?
i386 can hold a char in %al (QImode) o
gcc.dg/compat/struct-layout-1_generate.c assumes sizeof(int) is 4.
This of course fails on any target where sizeof(int) is 2. They may
fail when sizeof(int) is 8 too, or at least they won't be testing the
full range of possibilities.
I've noticed that quite a few testcases make these types of
assumptions.
> struct-layout-1_generate.c is run on the host, not on the target.
> And for hosts AFAIK GCC requires 32-bit int.
But the structures it generates assume 32-bit ints:
T(0,enum E2 a:31;,B(0,a,e2_m1,e2_0))
You can't have a 31 bit enum on a 16 bit target. You get messages
like this:
> Some of the work being carried out and posted on the gcc-patches
> mailing list makes those projects seem insignificant in comparison.
There's a wide range of ability in gcc developers, so there's a wide
range of projects to work on. They all use the same *process* so
starting with "trivial"
Consider:
int __attribute__((section("foo"))) *var1;
int * __attribute__((section("foo"))) var2;
var2 is itself in section foo, and points to an int.
Isn't var1 a pointer to something in section foo, and not itself in
foo? GCC instead treats var1 like var2.
I couldn't figure out a suitable se
> "section" attributes are presently storage-class-like (similar to
> "static") and only work on declarations.
Ok, I see that we set the "apply to decl" bit for "section". I guess
the question is - why? Would it be more consistent to keep track of
where it is given, and complain if it is applie
> most code and GCC documentation uses the less clear do-what-I-mean
> positions instead.
Ok, that's kinda what I figured. Thanks!
> if (OPT_Wmissing_braces)
>warning (OPT_Wmissing_braces, "missing braces around
> initializer");
FYI OPT_Wmissing_braces is an enum constant; it will always be nonzero.
> [3]
> warning (OPT_Wmissing_braces, "missing braces around initializer");
That is what we decided to do.
> So, I assume this patch is wrong in this regard:
> http://gcc.gnu.org/ml/gcc-cvs/2005-06/msg00392.html
Yes, it's wrong in that way.
> [3] shows which option is used to enable/disable that diagnostic
> (assuming it is controlled by a particular switch). In either case
> the main diagnostic is always emitted.
No, [3] will also enable/disable the warning, as the OPT_* is used to
look up the variable, and the variable is checked
> The various exceptions of the form "if an attribute is applied to the type
> of a decl which can only apply to a decl, then apply it to the decl" are
> there because they represent forms used by existing code.
What about this scenario?
typedef int __attribute__((section("foo"))) FOOINT;
FOO
> You need to pass the option to warning() also for another reason: we want to
> be able to optionally print which flag can be used to disable each warning,
> so warning() has to be smarter than it used to be.
In addition, we've talked about the idea of having the diagnostic
machinery keep track
> I don't care if it's spelt warn_foo, OPT_Wfoo, warning_p(foo) or
> whatever, so long as it's spelt only one way. The 'warning
> (OPT_Wfoo, ...)' syntax helps only where there is no conditional
> before the warning -- how often does that occur? The way it
> currently is, one runs the risk of wr