[ANN] C++Now 2013 submission deadline extended to January 5th
Just a quick note that the proposals deadline for the C++Now 2013 conference has been extended to January 5th: http://cppnow.org/2013-call-for-submissions/

C++Now is the largest general C++ conference; that is, it is not specific to any particular library, framework, or compiler vendor. C++Now has three tracks, with presentations ranging from hands-on, practical tutorials to advanced C++ design and development techniques. Like last year, expect a large number of talks to focus on C++11, with this year bringing more practical, experience-based knowledge on using the new language features.

Giving a talk at C++Now is a great way to share with others something cool that you have learned or built. Plus, the registration fee is waived for one speaker of every standard presentation, while shorter sessions are prorated.
Re: Fw: [RE-SENDING]Re: MCSoC2013: to enhance embedded Linux for many-core system
On 18 December 2012 03:06, ETANI NORIKO wrote:
> Of course, we can use GCC on a host core, and we can use MPFR and GMP.
> However, as long as we use LD to link object files and create a binary file
> for a computing device core, we cannot use MPFR and GMP.
>
> Here, we would like to ask you as follows:
> 1) Can LD have a function to link MPFR and GMP like GCC?
> Or
> 2) MPFR and GMP are installed in GCC with GCC toolchain. Can MPFR and GMP be
> created as static libraries independent of GCC?

Yes, if you install them with --disable-shared then GCC will not depend on them at runtime. See http://gcc.gnu.org/wiki/InstallingGCC for an even easier solution using contrib/download_prerequisites.
Re: Adding Rounding Mode to Operations Opcodes in Gimple and RTL
On Fri, 14 Dec 2012, Michael Zolotukhin wrote:
> I found quite an old bug PR34768 and was thinking of doing what was
> suggested there.

Wrong bug number? 34678 probably.

> Particularly, I was wondering about adding new subcodes to gimple and rtl
> for describing operations with rounding.
>
> Currently, I think the problem could be tackled in the following way:
> In gimple we'll need to add a pass that would a) find regions with
> constant, compile-time known rounding mode, b) replace operations with
> subcodes like plus/minus/div/etc. with the corresponding operations
> with rounding (plus_ru, plus_rd etc.), c) remove fesetround calls if
> the region doesn't contain instructions that could depend on rounding
> mode. In RTL we'll need to support the instructions with rounding mode,
> and also we'll need to be able to somehow emit such instructions.
> Probably, we'll need a reverse transformation to insert fesetround calls
> around the regions with instructions with rounding - that could be done
> by mode_switching pass.
>
> If this approach looks reasonable to you, then there are more questions:
> 1) What is the best way to represent operations with rounding in Gimple
> and RTL? Should we add plus_round and add an attribute to describe
> rounding mode, or we should add opcodes for different rounding modes
> (plus_round_up, plus_round_down, etc.) - of course, that should be done
> for all opcodes that are affected by rounding, not only plus-opcode.
> 2) What's the best place to insert the new passes?
> Any other input is more than welcome on this.

Dealing with rounding modes sounds like a great (and hard) thing, much needed by some users: here we wrap all operations with asm("":"+m"(x)) currently ("+mx" for sse, for some reason "+g" didn't seem to impact optimization last time I checked), because -frounding-math doesn't really work (simplify_const_unary_operation does constant propagation on sqrt, some rtl pass notices common sub-expressions and merges them despite a change of rounding mode in between, etc).

One thing that seems to make sense but would be way too large a change is to actually represent operations roughly the way they work in C and on x86: + is a function that takes 3 arguments (the 3rd is a global (TLS) variable describing the rounding mode) and has 2 outputs (already an issue in itself): the sum, and a tmp that is just used for exception_flags |= tmp. In some cases, you would know the rounding mode, but in the usual case where you don't, you could still know that (with some flags) a*b+c could be optimized to fma(a,b,c) if and only if * and + have compatible rounding modes (i.e. the same, or one has "don't care"). Things like common sub-expression optimization would work just fine: external function calls or asm might change the global variable, and without them the global variable remains the same and since the operator+ is const... I don't know how modeling the rounding mode flag as a global variable would work with the mode_switching pass. Not a realistic suggestion though :-(

Take this message as encouragement to improve this in any way you can :-)

-- Marc Glisse
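For context, a small self-contained C example of the kind of user code under discussion: two divisions performed under different rounding modes, with the empty-asm barrier Marc mentions used to keep the compiler from folding or merging them. This is only an illustration of the workaround, not GCC-internal code, and it assumes a target whose fesetround supports the directed modes.

#include <fenv.h>
#include <stdio.h>

/* Compile with -frounding-math; the asm barriers are the workaround
   described above, since -frounding-math alone is not reliable.  */
int
main (void)
{
  volatile double x = 1.0, y = 3.0;
  double lo, hi;

  fesetround (FE_DOWNWARD);
  lo = x / y;
  /* Empty asm acts as an optimization barrier so the two divisions are
     not treated as a common subexpression across the mode change.  */
  __asm__ ("" : "+m" (lo));

  fesetround (FE_UPWARD);
  hi = x / y;
  __asm__ ("" : "+m" (hi));

  fesetround (FE_TONEAREST);
  printf ("[%.20g, %.20g]\n", lo, hi);
  return 0;
}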
Re: Adding Rounding Mode to Operations Opcodes in Gimple and RTL
On 12/14/2012 04:20 AM, Richard Biener wrote:
> Exposing known rounding modes as new operation codes may sound like
> a good idea (well, I went a similar way with trying to make operations with
> undefined overflow explicit ... but the fallout was quite large even though
> there is only one kind of undefined overflow and not many operation codes
> that are affected ... so the work stalled - see no-undefined-overflow branch).
> But don't under-estimate the fallout - both in wrong-code and
> missed-optimizations.

Yes, there will be problems adding new operation codes, but if you separate out the subcode somewhere, how can you be sure that the existing optimizations are looking at it and honoring it? It seems to me that's just as much a source of wrong-code as new operation codes.

> Not sure if we want to start allocating sub-spaces of codes to a group
> to allow flag-like composition (say, PLUS_EXPR gets 0x10 and the lower
> nibble specifies the rounding mode). It looks more appealing for the
> rounding mode case (more cases) than for the binary (un-)defined overflow
> case.

The largest problem here is that we're constrained on space:

  ENUM_BITFIELD(rtx_code) code: 16;
  unsigned int subcode : 16;

we can't afford to allocate an entire nibble to rounding.

We could allocate the codes in some sort of pattern that would make it easy to extract the rounding mode algorithmically. Something like

  (code - BASE) % 5

since there are 4 directed rounding modes plus "unknown" or "dynamic".

> You'd want to expose the rounding mode libc functions as builtins to be
> able to detect them. That's good anyway and can be done independently
> (they currently act as memory optimization barrier which avoids most of
> the issues with -frounding-math support).

Yep.

> Insertion of rounding mode changes has to be done after 2nd scheduling
> (and you probably want to have even 1st scheduling optimize the schedule
> for rounding mode changes ...). Machine-reorg is one natural place to do
> it (or where we currently insert vzeroupper).

Flogging the 387 fpcr or the SSE mxcsr is just complicated enough to require a free register, and thus it probably has to be done before register allocation. E.g. during the optimize-mode-switching pass where we currently handle 387 rounding modes coming from other builtins and casts.

r~
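To make the layout idea concrete, here is a minimal, self-contained C sketch of the "(code - BASE) % 5" scheme; the enumerators and helpers below are invented for illustration and are not existing GCC codes.

/* Hypothetical layout: each rounded operation occupies five consecutive
   codes, one per directed rounding mode plus "dynamic" (unknown at
   compile time), so the mode can be recovered arithmetically.  */
enum rounded_code
{
  RC_PLUS_NEAREST, RC_PLUS_UPWARD, RC_PLUS_DOWNWARD,
  RC_PLUS_TOWARDZERO, RC_PLUS_DYNAMIC,
  RC_MULT_NEAREST, RC_MULT_UPWARD, RC_MULT_DOWNWARD,
  RC_MULT_TOWARDZERO, RC_MULT_DYNAMIC
};

#define RC_BASE RC_PLUS_NEAREST

/* Extract the rounding mode: the "(code - BASE) % 5" trick.  */
static inline int
rounding_mode_of (enum rounded_code code)
{
  return ((int) code - (int) RC_BASE) % 5;
}

/* Recover the mode-independent base code of the operation.  */
static inline enum rounded_code
base_code_of (enum rounded_code code)
{
  return (enum rounded_code) ((int) code - rounding_mode_of (code));
}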
Re: Adding Rounding Mode to Operations Opcodes in Gimple and RTL
On Tue, Dec 18, 2012 at 4:34 PM, Richard Henderson wrote:
> On 12/14/2012 04:20 AM, Richard Biener wrote:
>> Exposing known rounding modes as new operation codes may sound like
>> a good idea (well, I went a similar way with trying to make operations with
>> undefined overflow explicit ... but the fallout was quite large even though
>> there is only one kind of undefined overflow and not many operation codes
>> that are affected ... so the work stalled - see no-undefined-overflow branch).
>> But don't under-estimate the fallout - both in wrong-code and
>> missed-optimizations.
>
> Yes, there will be problems adding new operation codes, but if you separate
> out the subcode somewhere, how can you be sure that the existing optimizations
> are looking at it and honoring it? It seems to me that's just as much a source
> of wrong-code as new operation codes.

Indeed. Which is why I settled on new operation codes for no-undefined-overflow.

>> Not sure if we want to start allocating sub-spaces of codes to a group
>> to allow flag-like composition (say, PLUS_EXPR gets 0x10 and the lower
>> nibble specifies the rounding mode). It looks more appealing for the
>> rounding mode case (more cases) than for the binary (un-)defined overflow
>> case.
>
> The largest problem here is that we're constrained on space:
>
>   ENUM_BITFIELD(rtx_code) code: 16;
>   unsigned int subcode : 16;
>
> we can't afford to allocate an entire nibble to rounding.
>
> We could allocate the codes in some sort of pattern that would make it
> easy to extract the rounding mode algorithmically. Something like
>
>   (code - BASE) % 5
>
> since there are 4 directed rounding modes plus "unknown" or "dynamic".

Or stick with what I've done on no-undefined-overflow:

#define PLUS_EXPR_P(code) (code == PLUS_EXPR || code == PLUS_NV_EXPR)
...
/* Returns an equivalent non-NV tree code for CODE.  */
static inline enum tree_code
strip_nv (enum tree_code code)
{
  switch (code)
    {
    case NEGATENV_EXPR:
      return NEGATE_EXPR;
    ...

For FP rounding you'd probably have similar stuff as we have for the qualifiers, and functions to attach/remove rounding modes from codes.

>> You'd want to expose the rounding mode libc functions as builtins to be
>> able to detect them. That's good anyway and can be done independently
>> (they currently act as memory optimization barrier which avoids most of
>> the issues with -frounding-math support).
>
> Yep.
>
>> Insertion of rounding mode changes has to be done after 2nd scheduling
>> (and you probably want to have even 1st scheduling optimize the schedule
>> for rounding mode changes ...). Machine-reorg is one natural place to do
>> it (or where we currently insert vzeroupper).
>
> Flogging the 387 fpcr or the SSE mxcsr is just complicated enough to require
> a free register, and thus it probably has to be done before register
> allocation. E.g. during the optimize-mode-switching pass where we currently
> handle 387 rounding modes coming from other builtins and casts.

Or initially just not do anything here - you have to treat all external calls conservatively anyway (and asms, unless we have a portable documented way of saying "this asm affects the FP status" ...). So initially just make sure that the statements that are supposed to be barriers for FP ops behave that way (note that such conservative treatment produces "undefined rounding mode" ops which are not combinable with even their own kind). Which comes back to the fact that you somehow need to model dependency on (possibly) rounding mode changing statements.
You can always abuse virtual operands for that (either the existing single one or by re-introducing the possibility of having multiple virtual operands per stmt). Of course most passes do not care about virtual operands on things that do not look like memory accesses. And this is just the GIMPLE side; you also need to handle fold (maybe a non-issue - but look at compound stmt foldings) and RTL.

Richard.

> r~
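And a correspondingly minimal sketch of the strip_nv-style alternative, with hypothetical PLUS_RU_EXPR/PLUS_RD_EXPR/... codes standing in for whatever rounded codes would actually be added; none of these exist in GCC today.

/* Hypothetical stand-ins for tree codes, for illustration only.  */
enum rm_tree_code
{
  PLUS_EXPR, PLUS_RU_EXPR, PLUS_RD_EXPR, PLUS_RZ_EXPR,
  MULT_EXPR, MULT_RU_EXPR, MULT_RD_EXPR, MULT_RZ_EXPR
};

/* Analogous to strip_nv on the no-undefined-overflow branch: return the
   equivalent code with the rounding-mode qualifier removed.  */
static inline enum rm_tree_code
strip_rounding (enum rm_tree_code code)
{
  switch (code)
    {
    case PLUS_RU_EXPR:
    case PLUS_RD_EXPR:
    case PLUS_RZ_EXPR:
      return PLUS_EXPR;
    case MULT_RU_EXPR:
    case MULT_RD_EXPR:
    case MULT_RZ_EXPR:
      return MULT_EXPR;
    default:
      return code;
    }
}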
Re: Deprecate i386 for GCC 4.8?
On 12/12/2012 11:07 PM, David Brown wrote:
> On 12/12/12 20:54, Robert Dewar wrote:
>> On 12/12/2012 2:52 PM, Steven Bosscher wrote:
>>> And as usual: If you use an almost 30 years old architecture, why would
>>> you need the latest-and-greatest compiler technology? Seriously...
>>
>> Well the embedded folk often end up with precisely this dichotomy :-)
>
> True enough.
>
>> But if no sign of 386 embedded chips, then reasonable to deprecate
>> I agree.
>
> I believe it has been a very long time since any manufacturers made a pure
> 386 chip. While I've never used x86 devices in any of my embedded systems,
> I believe there are two main classes of x86 embedded systems - those that
> use DOS (these still exist!), and those that aim to be a small PC with more
> modern x86 OS's. For the DOS systems, gcc does not matter, because it is
> not used -

It is used (http://www.delorie.com/djgpp/). However 386 is not really supported any more for DJGPP for rather long time. I do not have corresponding hardware to test on 386 already for a long time so I did not do any testing on 386 when I built recent GCC versions for DJGPP for DJ FTP server (last is gcc-4.7.2). As far as I remember read in mailing list 386 support no more work (at least C++ standard library). So I guess deprecating 386 could be not too large loss.

> compilers like OpenWatcom are far more common (ref. the FreeDOS website).
> And for people looking for "embedded PC's", the processor is always going
> to be a lot more modern than the 386 - otherwise they are not going to be
> able to run any current OS.

Andris
Re: Deprecate i386 for GCC 4.8?
On Tue, Dec 18, 2012 at 8:38 AM, Andris Pavenis wrote: > On 12/12/2012 11:07 PM, David Brown wrote: >> >> On 12/12/12 20:54, Robert Dewar wrote: >>> >>> On 12/12/2012 2:52 PM, Steven Bosscher wrote: >>> And as usual: If you use an almost 30 years old architecture, why would you need the latest-and-greatest compiler technology? Seriously... >>> >>> >>> Well the embedded folk often end up with precisely this dichotomy :-) >> >> >> True enough. >> >>> But if no sign of 386 embedded chips, then reasonable to deprecate >>> I agree. >> >> >> I believe it has been a very long time since any manufacturers made a pure >> 386 chip. While I've >> never used x86 devices in any of my embedded systems, I believe there are >> two main classes of x86 >> embedded systems - those that use DOS (these still exist!), and those that >> aim to be a small PC >> with more modern x86 OS's. For the DOS systems, gcc does not matter, >> because it is not used - > > > It is used (http://www.delorie.com/djgpp/). However 386 is not really > supported any more for DJGPP > for rather long time. I do not have corresponding hardware to test on 386 > already for a long time > so I did not do any testing on 386 when I built recent GCC versions for > DJGPP for DJ FTP server > (last is gcc-4.7.2). As far as I remember read in mailing list 386 support > no more work (at least > C++ standard library). So I guess deprecating 386 could be not too large > loss. Dosbox?
Re: Deprecate i386 for GCC 4.8?
The official DJGPP triplet is for i586, not i386. I don't mind djgpp-wise if we deprecate i386, as long as we keep i586. Anyone still using djgpp for i386 can dig out old versions from the archives :-)
Possible issue with integer promotion for << and >> in gcc.4.5.3
Cygwin gcc 4.5.3

I have run some tests to determine the gcc 4.5.3 integer promotion policies. The tests show that for 'char' input, "char << long" and "char >> long" promote to INT while other operations using a long promote to "long", and that "char << ulong" and "char >> ulong" promote to INT while other operations using ulong promote to "ulong". In similar fashion, "long << long" and "long >> long" promote to INT while other promotions are to long, and "long << ulong" and "long >> ulong" promote to long while other operations promote to ulong. The code used and the output are included below. The code is not production code. I think the code is correct and that my interpretation is also correct. I realize that gcc 4.5.3 is no longer supported, so if my guess is correct, do I have to switch to MinGW (for precompiled later compilers)?

output

func0(char 1)
!x 0 0x BOOL ~x-2 0xfffe INT +x 1 0x0001 INT -x-1 0x INT ++x3 0x0002 CHAR --x -1 0x CHAR x++2 0x0001 CHAR x--0 0x0001 CHAR

func1(char -1, bool true)
x + y0 0x INT x - y -2 0xfffe INT x * y -1 0x INT x / y -1 0x INT x % y1 0x0001 INT x << y -2 0xfffe INT x >> y -1 0x INT x & y1 0x0001 INT x | y -1 0x INT x ^ x -2 0xfffe INT

func1(char -1, char 1)
x + y0 0x INT x - y -2 0xfffe INT x * y -1 0x INT x / y -1 0x INT x % y1 0x0001 INT x << y -2 0xfffe INT x >> y -1 0x INT x & y1 0x0001 INT x | y -1 0x INT x ^ x -2 0xfffe INT

func1(char -1, uchar 1)
x + y0 0x INT x - y -2 0xfffe INT x * y -1 0x INT x / y -1 0x INT x % y1 0x0001 INT x << y -2 0xfffe INT x >> y -1 0x INT x & y1 0x0001 INT x | y -1 0x INT x ^ x -2 0xfffe INT

func1(char -1, long 1)
x + y0 0x LONG x - y -2 0xfffe LONG x * y -1 0x LONG x / y -1 0x LONG x % y1 0x0001 LONG x << y -2 0xfffe INT x >> y -1 0x INT x & y1 0x0001 LONG x | y -1 0x LONG x ^ x -2 0xfffe LONG

func1(char -1, ulong 1)
x + y0 0x ULONG x - y 4294967294 0xfffe ULONG x * y 4294967295 0x ULONG x / y 4294967295 0x ULONG x % y1 0x0001 ULONG x << y -2 0xfffe INT x >> y -1 0x INT x & y1 0x0001 ULONG x | y 4294967295 0x ULONG x ^ x 4294967294 0xfffe ULONG

func0(long 1)
!x 0 0x BOOL ~x-2 0xfffe LONG +x 1 0x0001 LONG -x-1 0x LONG ++x3 0x0003 LONG --x -1 0x LONG x++2 0x0001 LONG x--0 0x0001 LONG

func1(long -1, bool true)
x + y0 0x LONG x - y -2 0xfffe LONG x * y -1 0x LONG x / y -1 0x LONG x % y1 0x0001 LONG x << y -2 0xfffe LONG x >> y -1 0x LONG x & y1 0x0001 LONG x | y -1 0x LONG x ^ x -2 0xfffe LONG

func1(long -1, char 1)
x + y0 0x LONG x - y -2 0xfffe LONG x * y -1 0x LONG x / y -1 0x LONG x % y1 0x0001 LONG x << y -2 0xfffe LONG x >> y -1 0x LONG x & y1 0x0001 LONG x | y -1 0x LONG x ^ x -2 0xfffe LONG

func1(long -1, uchar 1)
x + y0 0x LONG x - y -2 0xfffe LONG x * y -1 0x LONG x / y -1 0x LONG x % y1 0x0001 LONG x << y -2 0xfffe LONG x >> y -1 0x LONG x & y1 0x0001 LONG x | y -1 0x LONG x ^ x -2 0xfffe LONG

func1(long -1, long 1)
x + y0 0x LONG x - y -2 0xfffe LONG x * y -1 0x LONG x / y -1 0x LONG x % y1 0x0001 LONG x << y -2 0xfffe LONG x >> y -1 0x LONG x & y1 0x0001 LONG x | y -1 0x LONG x ^ x -2 0xfffe LONG

func1(long -1, ulong 1)
x + y0 0x ULONG x - y 4294967294 0xf
Re: Adding Rounding Mode to Operations Opcodes in Gimple and RTL
On Fri, 14 Dec 2012, Michael Zolotukhin wrote:
> Currently, I think the problem could be tackled in the following way:
> In gimple we'll need to add a pass that would a) find regions with
> constant, compile-time known rounding mode, b) replace operations with
> subcodes like plus/minus/div/etc. with the corresponding operations
> with rounding (plus_ru, plus_rd etc.), c) remove fesetround calls if
> the region doesn't contain instructions that could depend on rounding
> mode.

I'd say constant rounding mode optimization is pretty much a corner case - yes, constant rounding modes for particular code are useful in practice, but the bigger problem is making -frounding-math work reliably - stopping optimizations that are invalid when the rounding mode might change dynamically (and any call to an external function might happen to call fesetround) and, similarly, optimizations that are invalid when exceptions might be tested (you can't optimize away ((void) (a + b)) for floating-point when exceptions might be tested, for example, as it might raise an exception flag - again, any external function might test exceptions, or they might be tested after return from the function containing that expression). Then there are probably bugs with libgcc functions not raising the right exceptions / handling rounding modes correctly, and lots of other issues of detail to address to get these things right (including a lot of testsuite work).

Although constant rounding modes are probably more often useful in practice than dynamic modes, processor support for them is much more limited (although I think IA64 may have support for multiple rounding direction registers and specifying which is used in an instruction, which is the sort of thing that would help for constant modes). And C99 / C11 don't have C bindings for constant rounding modes - proposed bindings can be found in WG14 N1664, the current draft of the first part of a five-part Technical Specification for C bindings to IEEE 754-2008.

As suggested above, GCC doesn't really have support for even the IEEE 754-1985 bindings in C99 / C11 Annex F - no support for the FENV_ACCESS pragma, and the -frounding-math -ftrapping-math options don't have all the desired effects. When I've thought about implementation approaches I've largely thought about them from the initial correctness standpoint - how to add thorough testcases for all the various cases that need to be covered, and disabling optimizations fairly crudely for -frounding-math / -ftrapping-math as needed, before later trying to optimize. There's the open question of whether the default set of options (which includes -ftrapping-math) would need to change to avoid default-options performance being unduly affected by making -ftrapping-math actually do everything it should for code testing exceptions.

Although the 754-2008 draft bindings include constant rounding directions, most of those bindings are new library functions and macros. I've thought a bit about what would be involved in implementing them properly in glibc (where you have the usual issues of everything needing implementing for five different floating-point types, and thorough testing for all those different types) - but given the size of such a project, have no immediate plans to work on it - there is a lot to be done on glibc libm as-is just to improve correctness for the existing functions (and a lot already done for glibc 2.16 and 2.17 in that regard).
(I'd guess each of (proper Annex F support in GCC; fixing the remaining correctness issues in glibc libm for return values and exceptions; and implementing the N1664 bindings) would likely be months of work.) -- Joseph S. Myers jos...@codesourcery.com
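As a concrete illustration of the exception-testing point above: in the following standalone program the result of the addition is unused, yet deleting it would change what fetestexcept reports, so an implementation honoring trapping-math semantics cannot remove it (this assumes a target that provides the IEEE exception flags in <fenv.h>).

#include <fenv.h>
#include <float.h>
#include <stdio.h>

int
main (void)
{
  volatile double a = DBL_MAX, b = DBL_MAX;

  feclearexcept (FE_ALL_EXCEPT);

  /* The result is unused, but the addition must still raise FE_OVERFLOW,
     so it cannot simply be deleted as dead code.  */
  (void) (a + b);

  if (fetestexcept (FE_OVERFLOW))
    puts ("overflow flag raised");
  else
    puts ("overflow flag NOT raised");
  return 0;
}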
Re: Deprecate i386 for GCC 4.8?
On Wed, 12 Dec 2012, Steven Bosscher wrote: > Linux support for i386 has been removed. Should we do the same for GCC? FWIW, glibc hasn't really supported i386 for several years (at least with the Linux kernel; I don't know about Hurd), since NPTL requires atomic operations that i386 doesn't have, so fails to link unless you use -march=i486 or later. -- Joseph S. Myers jos...@codesourcery.com
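Roughly the kind of code involved, as a hedged sketch (exactly how GCC expands the builtin varies by version and flags): a compare-and-swap based lock in the NPTL style, which needs cmpxchg and therefore at least -march=i486 to be expanded inline.

/* Sketch: a spinlock built on a compare-and-swap.  i386 has no cmpxchg
   instruction, so when targeting it the builtin cannot be expanded
   inline and ends up referring to an out-of-line __sync_* helper that
   is not provided, giving the kind of link failure described above.  */
static volatile int lock;

void
acquire_lock (void)
{
  while (!__sync_bool_compare_and_swap (&lock, 0, 1))
    ; /* spin until the lock is free */
}

void
release_lock (void)
{
  __sync_lock_release (&lock);
}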
Re: Deprecate i386 for GCC 4.8?
My primary concern centers on whether any 386 IP cores without an FPU, or space-hardened i386DX/SX or 486SX CPUs, are impacted. These could be used in new designs. This would also eliminate gcc from use on any older embedded x86 boards without an FPU. RTEMS still supports these but depends on gcc as the foundation. We can use qemu to automate testing gcc. We have been periodically posting results, but not in the past few months. I did a build sweep in the 4.8 devel cycle but ended up using most of my time to report PRs. I realize that for mainstream PCs, these are ancient and a good candidate for deprecation. --joel

"Joseph S. Myers" wrote:
>On Wed, 12 Dec 2012, Steven Bosscher wrote:
>
>> Linux support for i386 has been removed. Should we do the same for GCC?
>
>FWIW, glibc hasn't really supported i386 for several years (at least with
>the Linux kernel; I don't know about Hurd), since NPTL requires atomic
>operations that i386 doesn't have, so fails to link unless you use
>-march=i486 or later.
>
>--
>Joseph S. Myers
>jos...@codesourcery.com
www.amd64.org is down
Hi,

I am trying to submit my x32 extension to the x86-64 discussion mailing list, but my email was bounced back. Do we need a more reliable place for the x86-64 psABI?

H.J.

-- Forwarded message --
From: Mail Delivery Subsystem
Date: Fri, Dec 7, 2012 at 1:02 PM
Subject: Delivery Status Notification (Failure)
To: hjl.to...@gmail.com

Delivery to the following recipient failed permanently:

  disc...@x86-64.org

Technical details of permanent failure: The recipient server did not accept our requests to connect. Learn more at http://support.google.com/mail/bin/answer.py?answer=7720 [(10) mail.x86-64.org. [217.9.48.20]:25: Connection timed out]

- Original message -
Date: Tue, 4 Dec 2012 10:31:59 -0800
Subject: PING [discuss] [x86-64 psABI] RFC: Extend x86-64 psABI to support x32
From: "H.J. Lu"
To: Michael Matz
Cc: "H. Peter Anvin", disc...@x86-64.org, GNU C Library, GCC Development, GDB, x32-...@googlegroups.com, Binutils

On Thu, May 17, 2012 at 12:50 PM, H.J. Lu wrote:
> On Tue, May 15, 2012 at 9:07 AM, Michael Matz wrote:
>> Hi,
>>
>> On Mon, 14 May 2012, H.J. Lu wrote:
>>
>>> > As a minor nitpick, I have always used x32 with a lower case x. The
>>> > capital X32 looks odd to me.
>>>
>>> I used X32 together with LP64. I can use ILP32 instead of X32 when LP64
>>> is mentioned at the same time.
>>
>> I'd prefer that. x32 is a nice short-hand name for the whole thing, but
>> not descriptive, unlike LP64. So, yes, IMO it should be ILP32 in the ABI
>> document.
>
> Here is the updated change. Any comments?
>
> Thanks.

PING.

-- H.J.
Re: Possible issue with integer promotion for << and >> in gcc.4.5.3
On Tue, Dec 18, 2012 at 2:39 PM, Arthur Schwarz wrote: > > I have run some tests to determine the gcc 4.5.3 integer promotion policies. This message is not appropriate for the mailing list gcc@gcc.gnu.org, which is for the development of GCC itself. It would be appropriate on the mailing list gcc-h...@gcc.gnu.org. Please take any followups to gcc-h...@gcc.gnu.org. Thanks. The integer promotion policies are the ones specified in the language standard. > The > tests show that for 'char' input, "char << long" and "char >> long" promote to > INT while other operations using a long promote to" long", and that "char << > ulong" and "char >> ulong" promote to INT while other operations using ulong > promote to "ulong". In similar fashion, "long << long" and "long >> long" > promote to INT while other promotions are to long, and that "long << ulong" > and > "long >> ulong" promote to long while other operations promote to ulong. The > code used and the output are included below. The shift operators produce a value of the same type as the left operand, subject to the usual integer promotions. You say in the paragraph above that long << long promotes to INT, but that sounds unlikely, and I don't see any support for that in the program output that you showed. You suggest that there is a bug somewhere, but I don't see one. Ian
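A quick standalone check of the rule Ian describes (a sketch; the printed values depend on the target's type sizes, and right-shifting a negative value is implementation-defined, though GCC sign-extends):

#include <stdio.h>

int
main (void)
{
  char c = 1;
  long l = -2;
  unsigned long ul = 1;

  /* A shift has the type of its promoted LEFT operand only, so l >> ul
     is a (signed) long; '+' applies the usual arithmetic conversions to
     both operands, so l + ul is unsigned long.  */
  printf ("l >> ul = %ld (signed long)\n", l >> ul);   /* prints -1 */
  printf ("l + ul  = %lu (unsigned long)\n", l + ul);  /* prints a huge value */

  /* A char left operand is promoted to int, so c << ul has type int
     regardless of the right operand's type.  */
  printf ("sizeof (c << ul) = %zu, sizeof (int) = %zu\n",
          sizeof (c << ul), sizeof (int));
  return 0;
}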