Re: libgcc: soft float on mips-wrs-vxworks

2013-01-27 Thread Jan Smets
No reply from the maintainer. Can anyone please make this
configuration change on trunk, or should I open a bug report for this?

Thanks
- Jan


On Fri, Jan 4, 2013 at 5:13 PM, Rainer Orth  
wrote:
> Hi Jan,
>
>> I'm running a heavily modified version of VxWorks and we implement
>> floating point operations in software. I noticed that the default
>> libraries do not include FP/DP routines; I think that changed
>> somewhere after 4.6.
>>
>> libgcc/config.host  :
>>
>>  137 mips*-*-*)
>>  138 cpu_type=mips
>>  139 tmake_file=mips/t-mips
>>  140 ;;
>>
>> is overridden by:
>>
>>  263 *-*-vxworks*)
>>  264   tmake_file=t-vxworks
>>  265   ;;
>
> true, this is unlike the other ${host} cases in config.host.
>
>> So my ugly fix is:
>>
>> Index: config.host
>> ===
>> --- config.host (revision 194855)
>> +++ config.host (working copy)
>> @@ -778,6 +778,7 @@
>> extra_parts="$extra_parts crti.o crtn.o"
>> ;;
>>  mips-wrs-vxworks)
>> +   tmake_file="$tmake_file mips/t-mips mips/t-elf mips/t-mips16"
>> ;;
>>  mipstx39-*-elf* | mipstx39el-*-elf*)
>> tmake_file="$tmake_file mips/t-crtstuff mips/t-mips16"
>
> You should just append to $tmake_file for *-*-vxworks* instead of
> overriding it as every other OS (well, with the exception of *-*-vms*)
> does.  I can't see any bad side effect from this.
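> I.e. something like this (an untested sketch of that append-style
> change):
>
>  *-*-vxworks*)
>    tmake_file="$tmake_file t-vxworks"
>    ;;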
>
>> Can you have a look and see whether this is a configuration error?
>
> It's probably a configuration error.  Unfortunately, my calls for
> testing the libgcc changes went largely unheard, so it's upon the target
> maintainers now to fix the fallout on their platforms.
>
> Rainer
>
> --
> -
> Rainer Orth, Center for Biotechnology, Bielefeld University



-- 
Smets Jan
j...@smets.cx


vec.h vs. --enable-gather-detailed-mem-stats

2013-01-27 Thread Steven Bosscher
Hello Diego,

There still appears to be an issue with the vec.h changes: the
detailed memory stats are not very informative. The allocations are
attributed to lines in vec.h itself, without further detail:


t8000.log:vec.h:1268 ((null))         0: 0.0%         40        4: 0.0%
t8000.log:vec.h:1263 ((null))         0: 0.0%     327440       73: 0.0%
t8000.log:vec.h:1255 ((null))         0: 0.0%     720880     6579: 0.6%
t8000.log:vec.h:1260 ((null))    754680: 9.6%   12634104   729374:71.2%
t8000.log:vec.h:1264 ((null))   1089176:13.9%   32381804   151514:14.8%
t8000.log:vec.h:1215 ((null))   5997120:76.5%   38899664   136293:13.3%
t8000.log:vec.h:758 ((null))          0: 0.0%         0: 0.0%       40: 0.0%      0: 0.0%      1
t8000.log:vec.h:674 ((null))     787240: 0.2%      1872: 0.0%        0: 0.0%   1024: 0.1%   8076
t8000.log:vec.h:746 ((null))    7312448: 1.5%   3504304: 4.0%  3396880: 5.2%   7168: 0.5% 180222
t8000.log:vec.h:694 ((null))    8388800: 1.7%  12585720:14.4%  4194400: 6.4%   1080: 0.1%     69
t8000.log:vec.h:706 ((null))   22612928: 4.7%   1572664: 1.8%  4194304: 6.4%   1136: 0.1%     41
t8000.log:vec.h:652 ((null))   67125296:13.8%  33562640:38.4%        0: 0.0%     64: 0.0%      4


Is this a known issue?

Ciao!
Steven


Re: About new project

2013-01-27 Thread Gerald Pfeifer
On Sat, 26 Jan 2013, Hongtao Yu wrote:
> How can I set up a new project under GCC and make it open-sourced? 
> Thanks!

That depends on what you mean by "under GCC", I'd say.  If you
have improvements for GCC, submitting those as patches against
GCC will be best, cf. http://gcc.gnu.org/contribute.html .

If you want to work on an independent project, you can just go
ahead and use one of those services like github, SourceForge etc.

(Note that the GNU project talks about free software, cf.
https://www.gnu.org/philosophy/free-software-for-freedom.html )

Gerald


gcc-4.8-20130127 is now available

2013-01-27 Thread gccadmin
Snapshot gcc-4.8-20130127 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.8-20130127/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.8 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/trunk revision 195497

You'll find:

 gcc-4.8-20130127.tar.bz2 Complete GCC

  MD5=894cd5579e049b837d97e88ed1e2db45
  SHA1=6d0525e5366094d139996f9957a688b6191c1bb5

Diffs from 4.8-20130120 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.8
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Floating Point subnormal numbers under C99 with GCC 4.7

2013-01-27 Thread Argentinator Rincón Matemático
Hi, dear friends.

I am testing floating-point macros in C, under the C99 standard.
My compiler is GCC 4.6.1 (with 4.7.1 I get the same result).

I have two computers:
My system (1) is Windows XP SP2 32-bit, on an "Intel(R) Celeron(R) 420" @ 1.60
GHz.
My system (2) is Windows 7 Ultimate SP1 64-bit, on an "AMD Turion II X2
dual-core mobile M520 (2.3 GHz, 1MB L2 cache)".
(The result was the same on both systems.)

I am interested in testing subnormal numbers for the types float, double and 
long double.
I've tried the following line:

printf(" Float: %x\n Double: %x\n Long Double: %x\n",fpclassify(FLT_MIN / 4.F), 
fpclassify(DBL_MIN / 4.), fpclassify(LDBL_MIN / 4.L ));

I've compiled with the options -std=c99 and -pedantic (also without -pedantic).
Compilation goes well, however the program shows me this:

 Float: 400
 Double: 400
 Long Double: 4400

(0x400 == FP_NORMAL, 0x4400 == FP_SUBNORMAL)

I think the right result should be 0x4400 (FP_SUBNORMAL) in all cases.
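
For reference, a minimal complete program around that line looks like
this (using the standard headers for these macros):

#include <stdio.h>
#include <math.h>
#include <float.h>

int main (void)
{
  /* Each quotient is a subnormal value of its own type.  */
  printf (" Float: %x\n Double: %x\n Long Double: %x\n",
          fpclassify (FLT_MIN / 4.F),
          fpclassify (DBL_MIN / 4.),
          fpclassify (LDBL_MIN / 4.L));
  return 0;
}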

When I tested the sizes of the constants, I found they are of the right type.
For example, I have obtained:

sizeof(float) == 4
sizeof(double) == 8
sizeof(long double) == 12

Also:

sizeof(FLT_MIN / 4.F) == 4
sizeof(DBL_MIN / 4.) == 8
sizeof(LDBL_MIN / 4.L) == 12

This means that FLT_MIN / 4.F can only be a float, and so on.
Moreover, FLT_MIN / 4.F must be a subnormal float number.
However, it seems that the fpclassify() macro behaves as if every argument were
a long double number.

Just in case, I have recompiled the program with the constants written by hand:

printf(" Float: %x\n", fpclassify(0x1p-128F));

The result was the same.

Am I misunderstanding the C99 rules? Or does the fpclassify() macro have a bug
in GCC?
(in the same way, the isnormal() macro "returns" 1 for float and double, but 0 
for long double).
I quote the C99 standard paragraph that explains the behaviour of the
fpclassify macro:

First, an argument represented in a format wider than its semantic type is 
converted to its semantic type.
Then classification is based on the type of the argument.

Thanks.
Sincerely, yours.
Argentinator  

Re: question about section 10.12

2013-01-27 Thread Kenneth Zadeck
This looks good to me.  Does your patch also address the vec_concat
issue that Marc raised?

On 01/26/2013 09:59 PM, Hans-Peter Nilsson wrote:

From: Kenneth Zadeck 
Date: Sat, 26 Jan 2013 16:19:40 +0100
the definition of vec_duplicate in section 10.12 seems too restrictive.

I have seen examples where the "small vector" is really a scalar.  Should
the doc say "small vector or scalar"?

Yes.  This patch has been sitting in a tree of mine forever;
it's just that the tree is a read-only-access-tree and every
time I've re-discovered it, I've been distracted before moving
it to a write-access-tree and committing as obvious...  And
right now when being reminded, I don't have access to one for
another 36h.  Maybe it's a curse. :)  (N.B. if you prefer your
own wording I don't mind; just offering a supporting
observation.)

* doc/rtl.texi (vec_duplicate): Mention that a scalar
can be a valid operand.

Index: doc/rtl.texi
===
--- doc/rtl.texi(revision 195491)
+++ doc/rtl.texi(working copy)
@@ -2634,7 +2634,8 @@ the two inputs.
  
  @findex vec_duplicate

  @item (vec_duplicate:@var{m} @var{vec})
-This operation converts a small vector into a larger one by duplicating the
+This operation converts a scalar into a vector or a small vector
+into a larger one by duplicating the
  input values.  The output vector mode must have the same submodes as the
  input vector mode, and the number of output parts must be an integer multiple
  of the number of input parts.
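
For example, an RTL form with a scalar operand would look something
like this (illustrative only; the register number is arbitrary):

(vec_duplicate:V4SI (reg:SI 100))

which duplicates the SImode scalar in register 100 into all four
elements of a V4SI result.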

brgds, H-P




Re: About new project

2013-01-27 Thread Hongtao Yu

On 1/27/2013 5:04 PM, Gerald Pfeifer wrote:

On Sat, 26 Jan 2013, Hongtao Yu wrote:

How can I set up a new project under GCC and make it open-sourced?
Thanks!

That depends on what you mean by "under GCC", I'd say.  If you
have improvements for GCC, submitting those as patches against
GCC will be best, cf. http://gcc.gnu.org/contribute.html .

If you want to work on an independent project, you can just go
ahead and use one of those services like github, SourceForge etc.
Actually, we have designed and implemented a tentative demand-driven 
flow- and context-sensitive pointer analysis in GCC 4.7. This pointer 
analysis is used for pairwise data dependence checking for 
vectorization. Currently, it does not serve optimizations directly, 
although it may in the future. Which way do you think is best for 
releasing our code: opening a branch inside GCC, or releasing a plugin 
for GCC? Thanks!


Hongtao


(Note that the GNU project talks about free software, cf.
https://www.gnu.org/philosophy/free-software-for-freedom.html )

Gerald





Re: Floating Point subnormal numbers under C99 with GCC 4.7

2013-01-27 Thread Tim Prince

On 1/27/2013 6:02 PM, Argentinator Rincón Matemático wrote:

Hi, dear friends.

I am testing floating-point macros in C, under the C99 standard.
My compiler is GCC 4.6.1 (with 4.7.1 I get the same result).

I have two computers:
My system (1) is Windows XP SP2 32-bit, on an "Intel(R) Celeron(R) 420" @ 1.60
GHz.
My system (2) is Windows 7 Ultimate SP1 64-bit, on an "AMD Turion II X2
dual-core mobile M520 (2.3 GHz, 1MB L2 cache)".
(The result was the same on both systems.)

I am interested in testing subnormal numbers for the types float, double and 
long double.
I've tried the following line:

printf(" Float: %x\n Double: %x\n Long Double: %x\n",fpclassify(FLT_MIN / 4.F), 
fpclassify(DBL_MIN / 4.), fpclassify(LDBL_MIN / 4.L ));

I've compiled with the options -std=c99 and -pedantic (also without -pedantic).
Compilation goes well, however the program shows me this:

  Float: 400
  Double: 400
  Long Double: 4400

(0x400 == FP_NORMAL, 0x4400 == FP_SUBNORMAL)

I think the right result should be 0x4400 (FP_SUBNORMAL) in all cases.

When I tested the sizes of the constants, I found they are of the right type.
For example, I have obtained:

sizeof(float) == 4
sizeof(double) == 8
sizeof(long double) == 12

Also:

sizeof(FLT_MIN / 4.F) == 4
sizeof(DBL_MIN / 4.) == 8
sizeof(LDBL_MIN / 4.L) == 12

This means that FLT_MIN / 4.F can only be a float, and so on.
Moreover, FLT_MIN / 4.F must be a subnormal float number.
However, it seems that the fpclassify() macro behaves as if every argument were
a long double number.

Just in case, I have recompiled the program with the constants written by hand:

printf(" Float: %x\n", fpclassify(0x1p-128F));

The result was the same.

Am I misunderstanding the C99 rules? Or does the fpclassify() macro have a bug
in GCC?
(in the same way, the isnormal() macro "returns" 1 for float and double, but 0 
for long double).
I quote the C99 standard paragraph that explains the behaviour of the
fpclassify macro:

First, an argument represented in a format wider than its semantic type is 
converted to its semantic type.
Then classification is based on the type of the argument.

Thanks.
Sincerely, yours.
Argentinator

This looks more like a topic for gcc-help.
Even if you had quoted gcc -v it would not reveal conclusively where 
your <math.h> or fpclassify() came from, although it would give some 
important clues.  There are at least 3 different implementations of gcc 
for Windows (not counting 32- vs. 64-bit), although not all are commonly 
available for gcc-4.6 or 4.7.  The specific version of gcc would make 
less difference than which implementation it is.
I guess, from your finding that sizeof(long double) == 12, you are 
running a 32-bit compiler even on the 64-bit Windows.
The 32-bit gcc I have installed on my 64-bit Windows evaluates 
expressions in long double unless -mfpmath=sse is set (as one would 
normally do).  This may affect the results returned by fpclassify. 
64-bit gcc defaults to -mfpmath=sse.
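
For example, one could retest with SSE math forced (these are standard
GCC options; whether they change the fpclassify results above is
exactly the thing to check):

gcc -std=c99 -pedantic -msse2 -mfpmath=sse test.c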


--
Tim Prince



Re: hard typedef - proposal - I know it's not in the standard

2013-01-27 Thread Alec Teal
To sum up again (I've kept the quotes so it can be seen what's already 
been talked about): I propose something that is almost identical to a 
typedef as it exists now; all behaviour of this hard typedef is almost 
identical except:


1) the "hard" type is never implicitly 'cast' to anything else of the 
same actual type (a BookId may never become an int implicitly)
2) the "hard" type may be used in overloading (building on the 'may not 
be implicitly cast' a call to f(someBook) will never resolve to f(int), 
only to f(BookId)
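
A usage sketch of the proposal (the "hard" keyword and these semantics
are hypothetical -- no compiler accepts this today):

hard typedef int BookId;   /* proposed syntax, not valid C++ today */

void f(int);
void f(BookId);            /* (2): a distinct type for overload resolution */

BookId b = (BookId) 42;    /* explicit "yes, I mean this"; no run-time work */
int i = b;                 /* error under the proposal: no implicit 'cast' */
int j = (int) b;           /* accepted: explicit acknowledgement */

Here f(b) would resolve to f(BookId), never to f(int).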


Because it is to behave like a typedef, the C-style cast isn't actually a 
cast at all; it is a way of telling the compiler you intend to use the 
hard-typed variable as you have written. There is no actual conversion 
because, as a kind of typedef, the underlying data is by definition the 
same. If the compiler didn't complain when you tried to use a BookId as 
an int or vice versa, that would defeat the purpose. The C-style cast in 
"int x = (int) some_book;" is not a cast (I'm saying the same thing 
again, sorry); it just tells the compiler "Yes, I mean to write this"


Alec


On 24/01/13 19:45, Lawrence Crowl wrote:

On 1/24/13, Alec Teal  wrote:

Did anyone read?

I can sometimes take several days to get to reading an email,
particularly when travelling.


I hope you see how it is nothing like a strong typedef (as it's
called). To me it seems like the strong one is too similar to a
class to be worth adding, especially after reading that paper,
it seems like it would allow new-php-user like behaviour of
EVERYTHING IS A CLASS but with types, we've all been there. While
this could stop some errors ... I've discussed this already.

If you want your feature in mainline gcc, you will need to convince
the maintainers that the feature is valuable.  Likewise, if you want
your extension in the C++ language, you will need to convince the C++
standards committee.  Both tasks have essentially the same structure.

Clearly explain the programming problem you have.  You need to
explain why the current language is not really working.  Using
examples of real failures is helpful.  You should show that the
problem is significant to a significant number of programmers.
(They may not know they have a problem, so you will need to
explain it.)

The difficulty for your feature, IIUC, is that you are not proposing
something that keeps track of physical units, so many examples of
bad operations would not apply to your feature.  You need to make
the case that there is some aspect of programming that the feature
captures in code.

You need to examine a few alternative solutions.  Presumably,
your proposal is one of them.

Finally, you should discuss any interaction between your feature
and existing features.  In your case, you appear to be changing the
meaning of C casts.  What about C++ casts?  Do you need a new one,
or do you change the meaning of existing ones?

This list of tasks may seem like a lot of work, but it will likely be
significantly less than the implementation work.  More importantly,
it will help ensure that the feature has a market of users.


Alec
I am eager to see what you guys think, this is a 'feature' I've wanted for a
long time and you all seem approachable rather than the distant compiler
gods I expected.

I can also see why 'strong typedefs' were not done; it tries to do too much
with the type system and becomes very object-like.

Alec Teal  wrote:


On 23/01/13 23:07, Lawrence Crowl wrote:

On 1/23/13, Jonathan Wakely  wrote:

On 23 January 2013 09:15, Alec Teal wrote:

I was fearful of using the word attribute for fear of getting it wrong.
What is "this part" of the compiler called?

I think attributes are handled in the front end and transformed into
something in the compiler's "tree" data structures.

FWIW I've usually seen this feature referred to as "strong typedefs".
That brings to mind the "strong using" extension G++ had for
namespaces, which (prior to getting standardised as "inline
namespaces") used __attribute__((strong)) so that attribute already
exists in the C++ front end:
http://gcc.gnu.org/onlinedocs/gcc/Namespace-Association.html

Note that there is a proposal before the C++ standard committee on
just this topic.  Consider its semantics when implementing.

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3515.pdf


After reading that, it doesn't seem like the same thing; they are talking
about - essentially - creating "classes" to handle types with no runtime
overhead. They want to be certain the optimizer got rid of it, or to save
the optimizer a job. I'm not saying that's bad, I just think it is
separate. Typedefs are supposed to be an alias; yes, in light of that the
title of "type definition" seems a little misleading when put with that
paper, but nonetheless a typedef is an alias.

While reading the book "The design and evolution of C++" (I am not
saying this to throw around "look, from the 'founder' of C++'s mouth!",
I read it so I could learn why things ar

Re: hard typedef - proposal - I know it's not in the standard

2013-01-27 Thread James Dennett
On Sun, Jan 27, 2013 at 6:19 PM, Alec Teal  wrote:
> To sum up again (I've kept the quotes so it can be seen what's already been
> talked about): I propose something that is almost identical to a typedef as
> it exists now; all behaviour of this hard typedef is almost identical
> except:
>
> 1) the "hard" type is never implicitly 'cast' to anything else of the same
> actual type (a BookId may never become an int implicitly)

So that's like a "private" opaque alias in n3515.

> 2) the "hard" type may be used in overloading (building on the 'may not be
> implicitly cast' a call to f(someBook) will never resolve to f(int), only to
> f(BookId)

Also the same as a private opaque alias in n3515.

> Because it is to behave like a typedef, the C-style cast isn't actually a
> cast at all; it is a way of telling the compiler you intend to use the
> hard-typed variable as you have written. There is no actual conversion
> because, as a kind of typedef, the underlying data is by definition the
> same. If the compiler didn't complain when you tried to use a BookId as an
> int or vice versa, that would defeat the purpose. The C-style cast in "int
> x = (int) some_book;" is not a cast (I'm saying the same thing again,
> sorry); it just tells the compiler "Yes, I mean to write this"

That's a cast -- an explicit request in code for a type conversion.
The fact that it's a pure compile-time operation and a no-op at
runtime has no bearing on whether it "is a cast", just as we can
static_cast between enumerators and integers today with no run-time
cost.  One of the purposes of casts is to tell the compiler "Yes, I
mean to write this", and it's common for casts to be purely
compile-time operations.
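
A minimal illustration (standard C++11; both casts compile to nothing
at run time):

enum class Color : int { red = 1 };

int main()
{
  int n = static_cast<int>(Color::red);  /* explicit, purely compile-time */
  Color c = static_cast<Color>(n + 1);   /* likewise a no-op at run time */
  return n + static_cast<int>(c);
}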

n3515 is also explicit that the permitted conversions have no run-time cost.

Is there anything that you propose that a "private opaque alias" from
n3515 does not provide?

-- James


Re: Initial Stack Padding?

2013-01-27 Thread Ian Lance Taylor
On Sat, Jan 26, 2013 at 5:44 PM, Matt Davis  wrote:
> This question is similar to my last; however, I think it provides a
> bit more clarity as to how I am obtaining offsets from the frame
> pointer.  I have an RTL pass that is processed after expand passes.  I
> keep track of a list of stack allocated variables.  For each VAR_DECL,
> I obtain the DECL_RTL rtx.  Since these variables are local, the RTL
> expression reflects an offset from the stack frame pointer.  For
> instance, the variable 'matt':
>
> (mem/f/c:DI (plus:DI (reg/f:DI 20 frame)
> (const_int -8 [0xfff8])) [0 matt+0 S8 A64])
>
> I interpret this as being 8 bytes below the frame pointer while the
> function in which 'matt' has scope is executing.  Since 'matt' is a
> pointer, and the stack grows downward (x86), and this is a 64-bit
> machine, the contents of 'matt' end at the frame pointer and span 8
> bytes below the frame pointer to where the first byte of 'matt'
> begins.  This is fine in some cases, but suppose I were to rewrite the
> source and add a few more variables: it seems that there might be a
> few words of padding between the stack pointer and where the data for
> the first variable begins.  If I were to add a 4-byte integer to this
> function, 'matt' would still be declared in RTL as above, but instead
> of really being -8 it is actually -32.  Where do the 24 bytes of
> padding between the frame pointer and the last byte of 'matt' come
> from?   Further, how can I find this initial padding offset at compile
> time?  I originally thought that the offset in the rtx, as above,
> would reflect this stack realignment, but it appears not to.

The frame pointer in RTL is a notional frame pointer.  It need not
correspond to any actual hardware register.  In fact most processors
distinguish the soft frame pointer from the hard frame pointer, and
most permit eliminating the frame pointer entirely and just using the
stack pointer.

I'm not sure how to answer the rest of your paragraph because I'm not
sure which frame pointer you are talking about.  E.g., which one do
you mean when you mention -32?  If you are talking about x86_64 then I
guess you are seeing the fact that the stack must be aligned according
to -mpreferred-stack-boundary, which defaults to a 16 byte boundary.
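
A sketch of an experiment (illustrative only; exact offsets vary by
target, options and GCC version):

/* test.c */
long f (void)
{
  char *matt = 0;  /* an 8-byte pointer on x86_64 */
  int n = 0;       /* this extra local can push 'matt' further from the
                      frame pointer once the frame is rounded up to the
                      16-byte boundary */
  return (long) matt + n;
}

Compile with "gcc -S -O0 test.c" and inspect the frame-pointer-relative
offsets in the generated assembly.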

Ian


Re: hard typedef - proposal - I know it's not in the standard

2013-01-27 Thread Alec Teal

On 28/01/13 02:38, James Dennett wrote:
That's a cast -- an explicit request in code for a type conversion. 
The fact that it's a pure compile-time operation and a no-op at 
runtime has no bearing on whether it "is a cast", just as we can 
static_cast beween enumerators and integers today with no run-time 
cost. One of the purposes of casts is to tell the compiler "Yes, I 
mean to write this", and it's common for casts to be purely 
compile-time operations. n3515 is also explicit that the permitted 
conversions have no run-time cost. Is there anything that you propose 
that a "private opaque alias" from n3515 does not provide? -- James 
No, it's the other way around, n3515 provides stuff this doesn't - but 
by design. It's not an "either or".


(I've deleted like 3 different responses now).

It really isn't an "either or"; I am not saying "this over n3515". I 
would want both (I think that's the point of this discussion).


I would prefer a hard typedef for things like vector components (if I 
didn't decide to use a struct, where the components would be given by 
name), or ids; the compiler would stop me mixing types unless I meant 
to. The very nature of just putting the word "hard" before a typedef is 
something I find appealing, as its function is to stop me from doing 
silly things and to remind me of what something is from its definition 
(a BookId, for example). It'll also allow overloading: I find the idea 
of a function called "getName" returning the name of a book, author or 
whatever very appealing when passed a BookId, AuthorId or a whateverId.


The very nature of a typedef is an alias. If I alias something from an 
int, I have grown to expect to be able to add it and do other integer 
things with it; this is true of the hard typedef too. I don't want (I 
may have said this) to be able to define a type system so rigid that an 
IDE auto-complete could create a grand unified theory. I don't want to 
stop and think, when using this on a float type, "can I divide this?". 
Not all operations form a group (this is why I mentioned groups 
earlier); real numbers do not form a group under multiplication (and 
hence division, by the requirement of an inverse) because 0 is a real 
number. Despite this, we may still divide by zero. I do not want to have 
to use n3515 and be faced with this temptation. Having said that, I have 
yet to use it, so maybe it wouldn't be that big.


That is why I'd want both, but at least in my mind n3515 would be nearer 
to "if I really wanted it I could use classes" than the hard-typedef.


I may have said this too, but I'll say it again: typedefs are something 
that go on a line and define a useful alias. I doubt this is disputed; 
sticking the word "hard" before one and gaining this is something I find 
very appealing. Having to write:


using opaque-type = access-specifier underlying-class {
desired-member-trampoline-signature = default;
friend desired-nonmember-trampoline-signature = default;
};
(or something of that form)

while useful, is less appealing. I don't really care that I /could/ add 
BookIds; I think the hard typedef is more in line with how typedefs are 
actually used. But abstract algebra has taught me that you cannot 
rigidly define operations on things such that they'll always be defined 
or even useful. I also see operators more as a notation than as 
operations; I think this is what tempts people and lulls them into 
needlessly defining operators. Adding two BookIds doesn't have to mean 
the operation of addition on integers. I am going off topic; suffice to 
say perhaps I will think of a use for additive notation for a binary 
operation on BookIds, and it need not be the same as addition on 
integers. If I did define such a thing, though, doesn't this blur the 
line between a class and a hard-or-strong typedef?


Alec





Re: hard typedef - proposal - I know it's not in the standard

2013-01-27 Thread Alec Teal

I've thought of how to phrase it.
Yes, n3515 does allow more than the 'hard typedef'; they do (in part) do 
the same job, but the context where you'd use one and not the other is 
very clear. I like clean notations; I think that's a mathematician 
thing. As I am sure you know (or have touched on), mathematicians go to 
great lengths to save themselves ink. I can think of many examples of 
this: think of ways of writing integrals, especially ones along a 
parameterized curve in a vector field; it just became an integral with 
the symbol of the curve below it. 1 doesn't mean 1, it means the 
multiplicative identity element of a set, and likewise 0 but for 
additive notation. You get the idea (I don't want to bore you or go 
beyond), but think of the hard typedef as the integral symbol with a 
circle through it, showing integration over a closed curve: it's just a 
shorthand in the case where you are integrating over a closed curve, and 
the hard typedef is a shorthand in the case where you want to 'inherit' 
operations from the underlying type.


I hope this explains it better!

Alec

On 28/01/13 02:38, James Dennett wrote:
n3515 is also explicit that the permitted conversions have no run-time 
cost. Is there anything that you propose that a "private opaque alias" 
from n3515 does not provide? -- James