Re: const volatile behaviour change in GCC 7

2016-09-22 Thread Paul.Koning

> On Sep 22, 2016, at 6:17 AM, David Brown  wrote:
> 
> ...
> Your trouble is that your two pointers, cur and end, are pointing at
> different variables.  Comparing two pointers that are independent (i.e.,
> not pointing to parts of the same aggregate object) is undefined - the
> compiler can assume that these two external objects could be anywhere in
> memory, so there is no way (in pure C) for you to know or care how they
> are related.  Therefore it can assume that you will never reach "cur ==
> end".

Would making them intptr_t instead of pointers fix that?

paul



Re: const volatile behaviour change in GCC 7

2016-09-22 Thread Paul.Koning

> On Sep 22, 2016, at 11:16 AM, David Brown  wrote:
> 
> On 22/09/16 16:57, paul.kon...@dell.com wrote:
>> 
>>> On Sep 22, 2016, at 6:17 AM, David Brown  wrote:
>>> 
>>> ...
>>> Your trouble is that your two pointers, cur and end, are pointing at
>>> different variables.  Comparing two pointers that are independent (i.e.,
>>> not pointing to parts of the same aggregate object) is undefined - the
>>> compiler can assume that these two external objects could be anywhere in
>>> memory, so there is no way (in pure C) for you to know or care how they
>>> are related.  Therefore it can assume that you will never reach "cur ==
>>> end".
>> 
>> Would making them intptr_t instead of pointers fix that?
>> 
> 
> With care, yes.  But I think it still relies on gcc not being quite as
> smart as it could be.  This seems to generate working code, but the
> compiler could in theory still apply the same analysis:
> 
> void rtems_initialize_executive(void)
> {
>  uintptr_t cur = (uintptr_t) _Linker_set__Sysinit_begin;
>  uintptr_t end = (uintptr_t) _Linker_set__Sysinit_end;

I would not expect the compiler to apply pointer rules for code like this.  
(u)intptr_t is an integer type; it happens to be one whose width is chosen to 
match the width of pointers on the platform in question, but that doesn't 
change the fact that the type is an integer.  For example, it is perfectly 
valid for an intptr_t variable to contain values that could not possibly be 
pointers on a given platform.

paul


Re: const volatile behaviour change in GCC 7

2016-09-22 Thread Paul.Koning

> On Sep 22, 2016, at 11:31 AM, Richard Earnshaw (lists) 
>  wrote:
> 
>>> ...
>>> void rtems_initialize_executive(void)
>>> {
>>> uintptr_t cur = (uintptr_t) _Linker_set__Sysinit_begin;
>>> uintptr_t end = (uintptr_t) _Linker_set__Sysinit_end;
>> 
>> I would not expect the compiler to apply pointer rules for code like this.  
>> (u)intptr_t is an integer type; it happens to be one whose width is chosen 
>> to match the width of pointers on the platform in question, but that doesn't 
>> change the fact that the type is an integer.  For example, it is perfectly 
>> valid for an intptr_t variable to contain values that could not possibly be 
>> pointers on a given platform.
>> 
>>  paul
>> 
> 
> It sounds to me as these are the sort of optimizations that should be
> disabled when compiling with -ffreestanding.

Possibly so.  I dislike workarounds of that form; it seems safer to change the 
C code to rely on properties called out by the standard.  After all, if an 
optimization is permitted, it might at some future time be done, and counting 
on there being a way to turn that optimization off via a switch, or relying on 
that switch not changing across releases, isn't as safe.

paul



Re: Converting to LRA (calling all maintainers)

2016-09-25 Thread Paul.Koning

> On Sep 25, 2016, at 4:46 AM, Eric Botcazou  wrote:
> 
>> There is no hurry to kill old reload.  As you say, many targets will
>> not be converted soon.  But one day it will be removed.  Not in GCC 7,
>> not in GCC 8 almost certainly.  But one day.
> 
> Certainly not in GCC 8, the top priority is IMO the cc0 thing and you cannot 
> really do both at the same time for a given port.  Do we have a Wiki page for 
> the cc0 conversion?  If no, I can start one based on my fresh experience with 
> the Visium port.

That would be great.  I've been trying to understand the process, and while I 
have a few notes from assorted emails over the years, it certainly isn't clear 
in my mind.

paul



Re: style convention: /*foo_p=*/ to annotate bool arguments

2016-10-04 Thread Paul.Koning

> On Oct 3, 2016, at 7:48 PM, Martin Sebor  wrote:
> 
> In a recent review Jason and I discussed the style convention
> commonly followed in the C++ front end to annotate arguments
> in calls to functions taking bool parameters with a comment
> along the lines of
> 
>  foo (1, 2, /*bar_p=*/true);

I can't fathom why this makes any sense at all.  Bool is just another data 
type.  And on top of that, "true" is obviously a value of type bool.  I can't 
imagine any reason why calls should have funny comments in them that appear 
only for arguments of that particular type.

Now if you were to propose that all parameters regardless of types should be 
annotated like this, at least the idea would be consistent.  Also amazingly 
ugly.

It strikes me that this is an attempt to make up for the syntactic deficiencies 
of C/C++.  But comments are a poor tool for that, as Lint showed decades ago.

paul


Re: style convention: /*foo_p=*/ to annotate bool arguments

2016-10-04 Thread Paul.Koning

> On Oct 4, 2016, at 11:46 AM, Jonathan Wakely  wrote:
> 
> On 4 October 2016 at 16:41,   wrote:
>> 
>>> On Oct 3, 2016, at 7:48 PM, Martin Sebor  wrote:
>>> 
>>> In a recent review Jason and I discussed the style convention
>>> commonly followed in the C++ front end to annotate arguments
>>> in calls to functions taking bool parameters with a comment
>>> along the lines of
>>> 
>>> foo (1, 2, /*bar_p=*/true);
>> 
>> I can't fathom why this makes any sense at all.  Bool is just another data 
>> type.  And on top of that, "true" is obviously a value of type bool.  I 
>> can't imagine any reason why calls should have funny comments in them that 
>> appear only for arguments of that particular type.
> 
> You should get out more :-)
> 
> http://c2.com/cgi/wiki?UseEnumsNotBooleans
> 
> https://ariya.io/2011/08/hall-of-api-shame-boolean-trap
> 
> http://www.flipcode.com/archives/Replacing_Bool_Arguments_With_Enums.shtml


That's good stuff, but it doesn't justify putting funny comments on boolean 
arguments; it argues for avoiding booleans in the first place.  If you want to 
propose outlawing the use of the boolean type in the coding standard, these 
articles certainly serve as support for that.  If you want to argue for adding 
named arguments, as in Python, that works too.

paul


Re: style convention: /*foo_p=*/ to annotate bool arguments

2016-10-05 Thread Paul.Koning

> On Oct 5, 2016, at 12:12 PM, Jeff Law  wrote:
> 
> On 10/04/2016 03:08 PM, Jason Merrill wrote:
>> On Tue, Oct 4, 2016 at 4:29 PM, Zan Lynx  wrote:
>> ...
>> In GCC sources, I think users look at the function definition more
>> often than the declaration in the header, the latter of which
>> typically has neither comments nor parameter names.
> So true.  One could claim that our coding standards made a fundamental 
> mistake -- by having all the API documentation at the implementation rather 
> than at the declaration.  Sigh

Yes, though the mistake started in C and C++, which allow declarations with 
only types, not names.

paul


Re: Suboptimal bb ordering with -Os on arm

2016-11-10 Thread Paul.Koning

> On Nov 10, 2016, at 8:16 PM, Nicolai Stange  wrote:
> ...
> 
>> There is no way to ask for somewhat fast and somewhat small at the
>> same time, which seems to be what you want?
> 
> No, I want small, possibly at the cost of performance to the extent of
> what's sensible. What sensible actually is is what my question is about.
> 
> Example: A (hypothetical) code size saving of 0.001% at the cost
> of 100x slower code certainly isn't. But 0.1% at the cost of
> some additional 0.5us here and there -- no clue.
> 
> So, summarizing, I'm not asking whether I should use -O2 or -Os or
> whatever, but whether the current behaviour I'm seeing with -Os is
> intended/expected quantitatively.

If you care enough to ask this question, my answer is that you should compile 
your application a bunch of different ways -- -Os, -O2, -O3, as well as a 
variety of the fine-grained optimization flags -- and pick the result that best 
achieves what you're looking for.  

paul



Re: Adoption of C subset standards

2017-01-09 Thread Paul.Koning

> On Jan 9, 2017, at 1:28 PM, Richard Kenner  wrote:
> 
>> Regardless of that sort of issue, I think on previous occasions when the
>> topic of MISRA (or other coding standard) checking came up, there has
>> been a general opinion from the gcc developers that the compiler itself
>> is not the best place for this sort of checking - they recommend an
>> external tool, and don't want the main code base cluttered with such
>> specific warnings for the dozens of coding standards in common use.
> 
> Note that there's also a legal issue here: when one has to obtain a
> license from MISRA.

I suspect there are vast quantities of coding guidelines out there, some of 
which may make some sense while others may not.  I don't see a good reason why 
one particular club should have its suggestions embodied in GCC code.

But as for a license, it's hard to see why that might be.  You can't copyright 
rules (only a particular expression of same, and only to the extent that the 
"sweat of the brow" rule doesn't apply).  And it doesn't sound like patentable 
matter either.  That said, if some outfit thinks it can ask for licensing on 
matter of this kind, that in itself is in my mind sufficient to exclude them 
from any consideration.

paul


Re: Adoption of C subset standards

2017-01-09 Thread Paul.Koning

> On Jan 9, 2017, at 1:55 PM, Richard Kenner  wrote:
> 
>> But as for a license, it's hard to see why that might be.  You can't
>> copyright rules (only a particular expression of same, and only to
>> the extent that the "sweat of the brow" rule doesn't apply).  And it
>> doesn't sound like patentable matter either.  That said, if some
>> outfit thinks it can ask for licensing on matter of this kind, that
>> in itself is in my mind sufficient to exclude them from any
>> consideration.
> 
> The issue is trademark.  When you can use the term "MISRA" to describe
> something.  It's my understanding that you can't use the term to
> describe something that checks rules without paying the license fee
> for the trademark, but this is a complex issue and needs to be 
> doublechecked.

I suppose that would be true if you refer to MISRA in the messages.  If you 
don't, then you're not using the trademark.

But still, I'm back to my previous comment.  People who try to extract license 
fees for stuff like this should just be rejected.  It's bad enough we have ISO 
doing this; we should not put up with random others trying to do the same.

paul


Re: Adoption of C subset standards

2017-01-09 Thread Paul.Koning

> On Jan 9, 2017, at 4:08 PM, David Brown  wrote:
> ...
> I found a reference to this in MISRA's forums:
> 
> 
> 
> The post and reply are from 4 years ago, but I expect the situation is the 
> same now as then.  Basically, MISRA are quite happy for any tools to support 
> checking of the rules, no matter what the license of the tools, and there is 
> no certification or checking for conformance.  However, if you are going to 
> include the rule texts, you need that part of the checker to be under a 
> "commercial license agreement that contain[s] the expected restrictions on 
> reverse engineering or extracting of information from the software".  They say 
> it's fine for a pure open source checker to refer to the checks by MISRA rule 
> number, but not by rule text.  It looks like no license is needed from MISRA 
> to do this. 

I'd say that messages of the form "you violated rule number 42" are 
unacceptable, since they have no intelligible meaning.  And the MISRA reply you 
pointed to says specifically that GCC couldn't get a license to use the 
messages because it's under GPL.  (The "additional module" exception mentioned 
afterwards seems to be based on a misunderstanding of the GPL "derived work" 
machinery.)

It looks like MISRA should adjust its rules if it wants to support open source.

paul



Re: Release Signing Keys are Susceptible to Attack

2017-08-17 Thread Paul.Koning

> On Aug 17, 2017, at 4:39 AM, Richard Biener  
> wrote:
> 
> On Thu, Aug 17, 2017 at 4:23 AM, R0b0t1  wrote:
>> After downloading and verifying the releases on
>> ftp://ftp.gnu.org/gnu/, I found that the maintainers used 1024 bit DSA
>> keys with SHA1 content digests. 1024 bit keys are considered to be
>> susceptible to realistic attacks, and SHA1 has been considered broken
>> for some time.
>> 
>> http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-131Ar1.pdf, 
>> p17
>> https://shattered.io/
>> 
>> SHA1 is weak enough that a team of researchers was able to mount a
>> realistic attack at no great cost.

I agree that 1024 bit RSA or DSA keys are not a good idea.  Since DSA is fixed 
at 1024 bits, that means DSA is obsolete.  Fortunately RSA can use any key size 
(if you wait for it), and 2048 is a good choice at the moment.

As for SHA1, your statement misses some critical detail.  There are two basic 
attacks on hash functions:

1. Construct a pair of messages that have the same hash.
2. Given message X, construct message Y different from X that has the same hash.

What has been demonstrated is #1.  But that doesn't break signatures of 
existing data -- only #2 would.  #2 is much harder and has not been 
demonstrated.  It is true that #1 is a significant weakness and indicates SHA1 
is at risk, but there is no emergency relating to the use of SHA1 in digital 
signatures of data such as GCC kits.

> It looks like gpg2 uses SHA1 as digest algorithm by default.  I use
> a 2048bit RSA for signing, that should be ok, no?
> 
> I suggest to report the issue to gnupg upstream (I'm using 2.0.24
> with libgcrypt version 1.6.1).  It looks like the OpenPGP standard
> mandates SHA1 here and using --digest-algo is strongly advised
> against for interoperability reasons.

In spite of what I said above about SHA1, I would argue that warning is 
obsolete and the spec needs to be updated accordingly.  Current gpg clearly 
supports SHA-2 (as "sha256", "sha384" and "sha512") and it would make sense to 
use it.

If you want to be extra accommodating, you could publish signatures both with 
sha512 and with sha1, the latter not quite as strong but available for those 
who can't handle the newer algorithm.

paul


Re: Overwhelmed by GCC frustration

2017-08-17 Thread Paul.Koning

> On Aug 17, 2017, at 11:22 AM, Oleg Endo  wrote:
> 
> On Wed, 2017-08-16 at 19:04 -0500, Segher Boessenkool wrote:
>>  
>> LRA is easier to work with than old reload, and that makes it better
>> maintainable.
>> 
>> Making LRA handle everything reload did is work, and someone needs to
>> do it.
>> 
>> LRA probably needs a few more target hooks (a _few_) to guide its
>> decisions.
> 
> Like Georg-Johann mentioned before, LRA has been targeted mainly for
> mainstream ISAs.  And actually it's a pretty reasonable choice.  Again,
> I don't think that "one RA to rule them all" is a scalable approach.
>  But that's just my opinion.

I think G-J said "...  LRA focusses just comfortable, orthogonal targets" which 
is not quite the same thing.

I'm a bit curious about that, since x86 is hardly "comfortable orthogonal".  
But if LRA is targeted only at some of the ISA styles that are out in the 
world, which ones are they, and why the limitation?

One of GCC's great strengths is its support for many ISAs.  Not all to the same 
level of excellence, but there are many, and adding more is easy at least for 
an initial basic level of support.  When this is needed, GCC is the place to go.

I'd like to work on moving one of the remaining CC0 targets to the new way, but 
if the reality is that GCC is trying to be "mainstream only" then that may not 
be a good way for me to spend my time.

paul



Re: Overwhelmed by GCC frustration

2017-08-17 Thread Paul.Koning

> On Aug 17, 2017, at 1:36 PM, Bin.Cheng  wrote:
> 
> On Thu, Aug 17, 2017 at 6:22 PM,   wrote:
>> 
>> ...
>> One of GCC's great strengths is its support for many ISAs.  Not all to the 
>> same level of excellence, but there are many, and adding more is easy at 
>> least for an initial basic level of support.  When this is needed, GCC is 
>> the place to go.
>> 
>> I'd like to work on moving one of the remaining CC0 targets to the new way, 
>> but if the reality is that GCC is trying to be "mainstream only" then that 
>> may not be a good way for me to spend my time.
> HI,
> I can't believe GCC ever tries to be "mainstream only".  It's somehow
> like that because major part of requirements come from popular
> architectures, and large part of patches are developed/tested on
> popular architectures.  It's an unfortunate/natural result of a lack
> of developers for non-mainstream.  We can make it less "mainstream
> only" only if we have enough developers for less popular arch.  Your
> work will contribute to this and must be highly appreciated :)
> 
> Thanks,
> bin


Thanks for the encouragement.  I will keep tinkering with the pdp11 target to 
make it better.

paul


Re: Power 8 in-core crypto not working as expected

2017-09-07 Thread Paul.Koning

> On Sep 7, 2017, at 10:35 AM, Jeffrey Walton  wrote:
> 
> On Thu, Sep 7, 2017 at 4:38 AM, Segher Boessenkool
>  wrote:
>> Hi!
>> 
>> On Thu, Sep 07, 2017 at 12:37:33AM -0400, Jeffrey Walton wrote:
>>> I have implementation for AES on Power 8 using GCC's built-ins. Its
>>> available for inspection and download at
>>> https://github.com/noloader/AES-Power8. The problem is, it does not
>>> arrive at the correct results on GCC112 (ppc64-le) or GCC119 (AIX, big
>>> endian).
>> 
>> First see if you can get a *single* vcipher call to work as expected
>> (it is a single round of AES).  Refer to Power ISA 3.0B and FIPS 197.
> 
> Thanks Segher.
> 
> We are using the key and subkey schedule from FIPS 197, Appendix A. We
> are using it because the key schedule is fully specified.
> 
> We lack the known answers for a single round using a subkey like one
> specified in FIPS 197. IBM does not appear to provide them.

Known answers don't depend on hardware.  If there is a documented single round 
known answer, and the hardware primitive is a single round with a supplied 
subkey, then that answer should apply.

paul



Re: Byte swapping support

2017-09-12 Thread Paul.Koning

> On Sep 12, 2017, at 5:32 AM, Jürg Billeter  
> wrote:
> 
> Hi,
> 
> To support applications that assume big-endian memory layout on little-
> endian systems, I'm considering adding support for reversing the
> storage order to GCC. In contrast to the existing scalar storage order
> support for structs, the goal is to reverse the storage order for all
> memory operations to achieve maximum compatibility with the behavior on
> big-endian systems, as far as observable by the application.

I've done this in the past by C++ type magic.  As a general setting it doesn't 
make sense that I can see.  As an attribute applied to a particular data item, 
it does.  But I'm not sure why you'd put this in the compiler when programmers 
can do it easily enough by defining a "big endian int32" class, etc.

paul



Re: Byte swapping support

2017-09-13 Thread Paul.Koning

> On Sep 13, 2017, at 5:51 AM, Jürg Billeter  
> wrote:
> 
> On Tue, 2017-09-12 at 08:52 -0700, H.J. Lu wrote:
>> Can you use __attribute__ ((scalar_storage_order)) in GCC 7?
> 
> To support existing large code bases, the goal is to reverse storage
> order for all scalars, not just (selected) structs/unions. Also need to
> support taking the address of a scalar field, for example. C++ support
> will be required as well.

I wonder about that.  It's inefficient to do byte swapping on local data; it is 
only useful and needed on external data.  Data that goes to files for use by 
other-byte-order applications, or data that goes across a bus or network to 
consumers that have the other byte order.  A byte swapped local variable only 
consumes cycles and instruction space to no purpose.

My experience is that byte order marking of data is very useful, but it always 
is applied selectively just to those spots where it is needed.

paul


Re: Potential bug on Cortex-M due to used registers/interrupts.

2017-11-16 Thread Paul.Koning


> On Nov 16, 2017, at 11:54 AM, Vitalijus Jefišovas  wrote:
> 
> On Cortex-M MCUs, when an interrupt happens, the NVIC copies r0-r3 and a
> couple of other registers onto the psp stack, and then jumps to the interrupt
> routine; when it finishes, the NVIC restores these registers and jumps back
> to the user's function.
> What is happening under the hood is that the NVIC only stacks four
> registers: r0, r1, r2, r3.  The others, r4-r12, are the developer's
> responsibility.
> I was looking at assembly code generated by GCC and there are plenty of
> instructions using r4-r12 registers.
> 
> How does GCC handle scenario when execution is switched to unknown
> procedure, which changes all of these registers?

Seems obvious to me.  If basic interrupt handling saves only a few registers, 
the assumption clearly is that many small interrupt handlers will only use 
those registers, so this makes things faster.

But it also means that any interrupt handler that uses registers other than 
those that were saved before is itself responsible for saving and restoring 
them.  The only way the concept of an interrupt makes sense is if it is 
invisible to the interrupted code.  That means that any application-level state 
is preserved across an interrupt.  How and where that is done doesn't matter, 
that's an internal detail of a particular interrupt path.

The only exception I can think of is when you have one or two registers that 
are explicitly reserved only for use in exception handlers -- such as the K0/K1 
registers in MIPS.  But this is quite rare, I can't remember seeing it anywhere 
else.

So the answer is: GCC doesn't handle that case because it's not allowed to 
happen.

paul


Re: Request for data

2017-12-14 Thread Paul.Koning
The TZ project, which maintains the timezone database, would be a good place to 
find pointers.  They don't actually manage that information, but pointers to 
"shape files" that translate map coordinates into the timezone identifier are 
available.

paul

> On Dec 14, 2017, at 2:44 PM, Eric S. Raymond  wrote:
> 
> For a slightly higher-quality conversion, the attribution entries in
> the map file should have a third field: timezone.  IANA zones are
> acceptable.
> 
> This wouldn't change how commit times are stored internally, but the
> Git tools use it for display in local time.
> --
>   http://www.catb.org/~esr/";>Eric S. Raymond
> 
> My work is funded by the Internet Civil Engineering Institute: 
> https://icei.org
> Please visit their site and donate: the civilization you save might be your 
> own.
> 
> 


Re: Status of m32c target?

2018-01-22 Thread Paul.Koning


> On Jan 22, 2018, at 5:17 AM, Trevor Saunders  wrote:
> 
> On Mon, Jan 22, 2018 at 10:57:35AM +0100, Martin Jambor wrote:
>> Hi,
>> 
>> On Fri, Jan 19 2018, Sandra Loosemore wrote:
>>> On 01/19/2018 10:14 AM, Jeff Law wrote:
>>> 
 cc0 needs to die.  That doesn't mean that any particular target needs to
 be dropped -- it just  means that someone has to step forward to do the
 conversion.
>>> 
>>> Unifying two parallel threads:  might this be a good project for GSoC?
>> 
>> I have no idea how big the task would be but it seems it could be quite
>> difficult.  Therefore we would need an experienced and extraordinarily
>> willing to help mentor.
> 
> I'd be willing to help students where I can, but unfortunately I suspect
> this isn't a great project.  My memory of looking at what was involved
> is that you needed to learn a lot about how rtl worked to even
> understand what needed to be done much less actually change things and
> then test them.  That said maybe if we don't tell them its really hard
> they'll go and do it and prove me wrong.

Not only that, but it obviously requires understanding in detail how the 
particular target architecture works, for each target you want to rewrite.  
With good instruction set architecture documentation that's doable, though not 
necessarily easy; if good documentation does not exist, it becomes harder 
still.

paul