Re: 3 byte variables [feature request]
On Wed, Jun 05, 2013 at 01:49:28PM -0400, Ed Smith-Rowland wrote:
> On 06/05/2013 10:43 AM, kuldeep dhaka wrote:
> > Hello,
> >
> > While working on a program I faced a problem. I need only 20 bits,
> > and having a 32-bit variable only wastes more memory; 24 bits would
> > be much better. I need a uint24_t variable but GCC doesn't support
> > it, so I have to start with uint32_t. It would be very nice if GCC
> > supported such variables. I made a little program and found that
> > GCC (on i686) returned an error.
> >
> > http://embeddedgurus.com/stack-overflow/2009/06/three-byte-integers/
> >
> > --
> > Kuldeep Singh Dhaka
>
> Even if gcc had such a type I bet they would get aligned to 4 bytes
> on most any target.

Precisely this. And in fact GCC / C can support this, using bitfields.

  typedef struct { int bits : 24; } int24_t;
  typedef struct { unsigned bits : 24; } uint24_t;

This is as close as you can get, and these structs will be 4-byte
aligned.

Cheers,
Rob
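As a quick, self-contained sketch of the bitfield workaround above
(assuming a typical i686/x86-64 ABI; this is an illustration, not part
of the original exchange):

  /* 24-bit "integers" emulated with bitfields, as suggested above.
     The wrapper struct is still aligned and padded like a 32-bit int
     on most targets, so this buys 24-bit wrap-around semantics rather
     than smaller storage for a single variable. */
  #include <stdio.h>

  typedef struct { int bits : 24; } int24_t;
  typedef struct { unsigned bits : 24; } uint24_t;

  int main(void)
  {
      uint24_t u = { 0xFFFFFF };   /* largest 24-bit unsigned value */
      int24_t  s = { -1 };

      u.bits += 1;                 /* wraps to 0 within the 24-bit field */
      printf("sizeof(uint24_t) = %zu\n", sizeof(uint24_t));
      printf("u.bits = %u, s.bits = %d\n", (unsigned) u.bits, s.bits);
      return 0;
  }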
Re: programming language that does not inhibit further optimization by gcc
GCC does value analysis similar to what you mentioned. You'll find it under the -fdump-tree-vrp options. To provide extra information you can add range checks which GCC will pick up on. If you know a value is small, use a small integer type and gcc will pick up the range of values which can be assigned to it. What are the problems you're trying to solve? Is it a low memory system you're running on? If you're after performance, add restrict to your parameters and either use unions to get around aliasing or do what the Linux dev team do with -fno-strict-aliasing. Regarding threading - I think trying to use multiple threads without having to learn thread libraries is a bit of a gamble. Threading is difficult even in high level languages and you should have a good background before approaching. For struct packing, I suppose you could just order your entries largest-first which is one approach, but it's kinda like the 0-1 knapsack problem. On 15 October 2013 01:31, Albert Abramson wrote: > I have been looking everywhere online and talking to other coders at > every opportunity about this, but cannot find a complete answer. > Different languages have different obstacles to complete optimization. > Software developers often have to drop down into non-portable > Assembly because they can't get the performance or small size of > hand-optimized Assembly for their particular platform. > > The C language has the alias issue that limits the hoisting of loads. > Unless the programmer specifies that two arrays will never overlap > using the 'restrict' keyword, the compiler may not be able to handle > operations on arrays efficiently because of the unlikely event that > the arrays could overlap. Most/all languages also demand the > appearance of serialization of instructions and memory operations, as > well as extreme correctness in even the most unlikely circumstances, > even where the programmer may not need them. > > Is there a language out there (similar to Fortran or a dialect of C) > that doesn't inhibit the compiler from taking advantage of every > optimization possible? Is there some way to provide a C/C++ compiler > with extra information about variables and programs so that it can > maximize performance or minimize size? For example: > > int age = 21;//[0, 150) setting maximum limits, compiler could use byte > int > int outsideTemp = 20;//[-273, 80] > float ERA = 297; //[0, 1000, 3] [min, max, digits of > accuracy needed] > > Better yet, allow some easier way of spawning multiple threads without > have to learn all of the Boost libraries, OpenCL, or OpenGL. In other > words, is there yet a language that is designed only for performance > that places no limits on compiler optimizations? Is there a language > that allows the compiler to pack struct variables in tighter by > reorganizing those values, etc? > > If not, is it possible to put together some dialect of C/C++ that > replaces Assembly outright? > > -- > Max Abramson > “In the end, more than freedom, they wanted security. They wanted a > comfortable life, and they lost it all – security, comfort, and > freedom. When the Athenians finally wanted not to give to society but > for society to give to them, when the freedom they wished for most was > freedom from responsibility, then Athens ceased to be free and was > never free again.” --Sir Edward Gibbon
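To make two of the hints from the reply above concrete - restrict-qualified
parameters and ordering struct members largest-first - here is a small
illustrative sketch (the names and the quoted sizes are typical for x86-64,
not guaranteed):

  #include <stddef.h>

  /* The restrict qualifiers promise the compiler that dst and src never
     overlap, so it may hoist loads and vectorise the loop freely. */
  void scale(float *restrict dst, const float *restrict src,
             size_t n, float k)
  {
      size_t i;
      for (i = 0; i < n; i++)
          dst[i] = k * src[i];
  }

  /* Declaring members largest-first usually minimises padding. */
  struct as_written {      /* char, double, int: typically 24 bytes */
      char   tag;
      double value;
      int    count;
  };

  struct reordered {       /* double, int, char: typically 16 bytes */
      double value;
      int    count;
      char   tag;
  };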
Re: gnu software bugs - long double
Sat, Nov 02, 2013, Mischa Baars:
> On 11/02/2013 11:19 PM, Jonathan Wakely wrote:
> > On 2 November 2013 22:12, Mischa Baars wrote:
> >
> > And 1.1 is not representable as long double.
> >
> > If you are willing to stop being so arrogant for a few minutes and
> > learn something, try running this program and think about what the
> > result means:
> >
> >   #include <assert.h>
> >
> >   int main()
> >   {
> >       long double l = 1.1;
> >       long double ll = 10 * l;
> >       assert( ll == 11 );
> >   }
> >
> > If you think GCC gets this result wrong then please use a different
> > compiler and stop wasting everyone's time; this is off-topic for
> > this mailing list.
>
> There's no converting to any string in your example. You only
> convert source code strings into their corresponding doubles.
>
> What I'm trying to point out is that the output differs from the
> value entered in the source. The string 1.1 from the source is not
> correctly converted to its corresponding double, or the double is
> not correctly converted back into its corresponding string.

I challenge you to come up with a better way of representing floating
point numbers in an efficient manner _and_ supporting silly edge cases
like imperfectly representable numbers. Read up on IEEE floats,
particularly the link Jonathan posted earlier. But please stop wasting
everyone's time. Clang exhibits the same behaviour; it's not a
compiler issue.

Thank you
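To see the point being made here, the following small program (added
for illustration, not from the original exchange) prints the long
double that the source text 1.1 actually becomes; the exact digits and
the answer to the comparison depend on the target's long double format:

  #include <stdio.h>

  int main(void)
  {
      long double l = 1.1L;

      /* Print many more digits than the default format would show.
         On common targets this is close to, but not exactly, 1.1. */
      printf("%.25Lf\n", l);
      printf("10 * l == 11 ? %s\n", (10 * l == 11) ? "yes" : "no");
      return 0;
  }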
Re: Surprising Behavior Comparing Floats
On Sat, 11 Jan 2014, Nick wrote:
> I'm very surprised by the result in #6. #7 seems to be doing the same
> thing, except that it uses a local variable to hold the sum.

Sounds to me like it could be related to excess precision - check out
the -ffloat-store option. I don't see it on my machine either way, but
I'm on 4.7.2.

Rob
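For readers who have not run into this before, a rough sketch of the
kind of comparison that is sensitive to excess precision (an
illustrative guess at the situation, not the original #6/#7 code); on
x87 targets the outcome can change between -O0, -O2 and -ffloat-store:

  #include <stdio.h>

  int main(void)
  {
      float a = 0.1f, b = 0.2f;
      float sum = a + b;        /* rounded to float when stored */

      /* The right-hand side may be evaluated in wider registers
         (excess precision), in which case it need not compare equal
         to the value that was rounded and stored in 'sum'. */
      if (sum == a + b)
          puts("equal");
      else
          puts("not equal - excess precision at work");
      return 0;
  }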
Control Flow Graphs
Hi, Does anyone know if it is possible to view/access/print out the control flow graphs produced by GCC, either at compilation time, or after compilation has taken place? Thanks for your time. Rob
Re: Control Flow Graphs
Thanks very much. I am planning to use this output to help me add some
debugging features to GDB, involving temporal logic, but the
optimizations involve using a CFG. The dream for my project would be to
add an option to GCC to print the CFG data to a file, then parse it, do
some analysis on it, and pass it on to GDB to do some light formal
methods. I attach below my email to the GDB mailing list about the aims
of the project, for those who are interested and have not seen it
already. Any feedback you have is much appreciated.

---

Hi,

The principle works as follows: it is possible to create an automaton
from an LTL formula, with expressions for values of variables as the
transitions from one state to the next. Then the tricky part is
building an automaton which represents the program. But once you have
these, it is possible to see if the automata 'match', and if they do
then the property holds.

With regards to building the system automaton, at the very simplest you
could single-step the code, get values of variables at each step and
make an appropriate transition on the automaton. However, this is
obviously very inefficient, and improvements would most likely be made
by building a control flow graph of the program (in some way) and using
the nodes of that graph as the points at which to get values, or
something like that.

The advantage of including something like this in GDB is that once the
property that the programmer expected to hold becomes false, program
execution can stop and the programmer can use the standard GDB tools.
Well, that'd be the idea anyway.

The original idea was to do the same thing but for concurrent programs,
because there is research which says that using LTL formulas and the
automaton technique, you can say whether properties of concurrent
programs hold for all the possible interleavings. However, it was
decided that that was too complicated, so it was narrowed to
non-concurrent programs.

---

Thanks again.

Rob

On 16/10/06, Paul Yuan <[EMAIL PROTECTED]> wrote:
> Call dump_flow_info() defined in cfg.c to output the pred and succ of
> each block. You can use this output to construct the CFG. Maybe the
> software graphviz can help you to visualize the CFG.
>
> On 10/16/06, Rob Quill <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > Does anyone know if it is possible to view/access/print out the
> > control flow graphs produced by GCC, either at compilation time, or
> > after compilation has taken place?
> >
> > Thanks for your time.
> >
> > Rob
>
> --
> Paul Yuan
> www.yingbo.com
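As a minimal illustration of the dump option that comes up later in
this thread (-fdump-tree-cfg); the exact dump file name and format vary
between GCC releases:

  /* example.c -- a function with a few basic blocks.  Compiling with

         gcc -c -fdump-tree-cfg example.c

     asks GCC to write a text dump of the function's control flow graph
     (a file whose name ends in ".cfg", placed next to the output),
     listing each basic block with its predecessors and successors. */
  int classify(int x)
  {
      if (x < 0)
          return -1;
      if (x == 0)
          return 0;
      return 1;
  }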
Re: how to load a cfg from a file by tree-ssa
I haven't looked into this yet, but as I think I may need to be able to do something similar, is it possible to parse the cfg file that is given out, and build a C structure like that? Thanks Rob On 21/11/06, Paolo Bonzini <[EMAIL PROTECTED]> wrote: > but i don't know how to load that file in a structure. > I.e. i have the file .cfg builded by gcc with the option > -fdump-tree-cfg and i want to load this file in a C structure so i can > manipulate it. There is no such functionality in GCC. GCC can only build the CFG from source code, and can only do so to produce assembly language output. Paolo
Re: how to load a cfg from a file by tree-ssa
On 24/11/06, Paolo Bonzini <[EMAIL PROTECTED]> wrote:
> Rob Quill wrote:
> > I haven't looked into this yet, but as I think I may need to be able
> > to do something similar, is it possible to parse the cfg file that
> > is given out, and build a C structure like that?
>
> It seems to me that the answer had already been given in the message
> you fully quoted: we cannot help you.

I don't wish to labour the point, but what I am trying to understand is
whether it is impossible or just impractical. Could you explain why it
is that it can't be done? For example, if the file given out doesn't
maintain enough information about the graph, why is it not possible to
output enough information, in such a format that it can be read and
parsed into a C structure?

Sorry if this is tedious, or I have missed something obvious. Compilers
are a new subject for me, but a C representation of a control flow
graph would be greatly beneficial for some optimisations I am trying to
make to allow the use of light formal methods in GDB. See here for a
discussion: http://sourceware.org/ml/gdb/2006-10/msg00078.html

> And, by the way, I suggest not to top post: see
> http://en.wikipedia.org/wiki/TOFU for more info.

Apologies.

Thanks for your help.

Rob

> Paolo
>
> > > but i don't know how to load that file in a structure.
> > > I.e. i have the file .cfg builded by gcc with the option
> > > -fdump-tree-cfg and i want to load this file in a C structure so i
> > > can manipulate it.
> >
> > There is no such functionality in GCC. GCC can only build the CFG
> > from source code, and can only do so to produce assembly language
> > output.
Re: how to load a cfg from a file by tree-ssa
On 24/11/06, Paolo Bonzini <[EMAIL PROTECTED]> wrote:
> Rob Quill wrote:
> > On 24/11/06, Paolo Bonzini <[EMAIL PROTECTED]> wrote:
> > > Rob Quill wrote:
> > > > I haven't looked into this yet, but as I think I may need to be
> > > > able to do something similar, is it possible to parse the cfg
> > > > file that is given out, and build a C structure like that?
> > >
> > > It seems to me that the answer had already been given in the
> > > message you fully quoted: we cannot help you.
> >
> > I don't wish to labour the point, but what I am trying to understand
> > is whether it is impossible or just impractical.
>
> It is neither impossible, nor impractical. Rather, it is irrelevant as
> far as GCC is concerned: it is not within the scope of a compiler to
> read back the CFG dumps it prints out (they exist to aid debugging),
> and this is why GCC does not provide APIs to do so. All you have to do
> is understand the output format (either by looking at tree-cfg.c or by
> "reverse engineering" it yourself) and write a corresponding parser.
> The C structs you use may be based on those in GCC (see basic-block.h),
> but probably will not be, because those bring in many other details of
> the GCC intermediate representation.

Excellent. This is what I expected I would have to do all along. Thank
you for your help.

Rob

> Paolo
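For anyone attempting this, a purely illustrative sketch of the kind of
stand-alone structures such a parser might fill in (made-up names;
deliberately not the structs from GCC's basic-block.h):

  struct cfg_edge {
      int src;                  /* index of the source basic block */
      int dest;                 /* index of the destination basic block */
  };

  struct cfg_block {
      int index;                /* basic block number from the dump */
      int first_line;           /* source line of its first statement */
      int last_line;            /* source line of its last statement */
  };

  struct cfg {
      struct cfg_block *blocks;
      int               n_blocks;
      struct cfg_edge  *edges;
      int               n_edges;
  };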
DW_AT_start_scope
Hi,

I am considering trying to add DW_AT_start_scope attributes to the
debug info emitted by GCC, so that it can be used by GDB. I just wanted
to know what people think about this, and how difficult it is likely to
be.

Thanks for your time.

Rob
Variable scope debug info
Hi,

I wrote an email about this a while ago, but it was rather concise and
didn't explain the problem well.

My problem is this: when using GDB to debug the following bit of code:

  int i = 0;
  int j = 2;
  int k = 3;

if I set a breakpoint at the 3rd line, before int k = 3; has been
executed, and check whether k is in scope, I find that it is, when, of
course, it shouldn't be. I emailed the GDB mailing list about this and
was informed that the problem arises because GCC does not emit, and GDB
cannot handle, the DW_AT_start_scope attribute (DWARF 3 standard, page
61, item 11).

Normally this is not a problem, but in my particular case the checking
of variable values is automated, with decisions being made on the
values of those variables, so a variable being in scope and having an
incorrect value will result in an incorrect decision being made.

I have been warned that attempting to do this (from GDB's perspective)
is not likely to be easy, and will more than likely bring forth new
problems, but doing it would greatly help the correctness of what I am
trying to do, and unless anyone has any objections, I think there is no
reason not to make something meet the specification.

So, assuming no-one has any objections, my question is: how do I go
about trying to do it in GCC? What should I look at and where should I
look?

Thanks for your help.

Rob
Re: Variable scope debug info
On 05/04/07, Joe Buck <[EMAIL PROTECTED]> wrote:
> On Thu, Apr 05, 2007 at 02:37:06PM +0100, Rob Quill wrote:
> > My problem is this: when using GDB to debug the following bit of
> > code:
> >
> >   int i = 0;
> >   int j = 2;
> >   int k = 3;
> >
> > if I set a breakpoint at the 3rd line, before int k = 3; has been
> > executed, and check if k is in scope, I find that it is, when, of
> > course, it shouldn't be. I emailed the GDB mailing list about this
> > and was informed that the problem arises because GCC does not emit,
> > and GDB cannot handle, the DW_AT_start_scope attribute (DWARF 3
> > standard, page 61, item 11).
>
> If adding scope attributes every time more than one variable is
> declared adds to the already immense bulk of C++ debugging
> information, I'd prefer to live with the bug myself.

Out of interest, why? I haven't looked into this properly yet, so I
don't know what problems it might cause. Will having this much more
debug info significantly slow down GCC/GDB? Will it have other
problems?

Rob
Re: Variable scope debug info
So the general consensus is that it's not worth doing? Hypothetically,
if I did it and it didn't make much difference, would it be worth
submitting a patch? Or should I just give up before I start?

Rob

On 06/04/07, Daniel Jacobowitz <[EMAIL PROTECTED]> wrote:
> On Thu, Apr 05, 2007 at 09:46:18AM -0700, Joe Buck wrote:
> > Now, it might turn out that adding additional dwarf records for
> > every single declaration won't significantly increase the size
> > of the debug information. But it is a consideration.
>
> FWIW, I would expect that it would not make a significant difference.
>
> --
> Daniel Jacobowitz
> CodeSourcery
Re: Variable scope debug info
On 05/04/07, Brian Ellis <[EMAIL PROTECTED]> wrote:
> Now if there were actual function calls in the initialization, and no
> records were emitted, I would consider that to be a problem (haven't
> tested this at the moment though); however, static initializers like
> that could easily be skipped as a feature in the interest of saving
> space. Example:
>
>   int i = 0;
>   int j = 2;
>   int n = CalculateSomething( j, &i );
>   int k = 3;
>
> I would expect debug records for the initialization of 'n' to be
> emitted and a breakpoint to properly work on that line... all others I
> couldn't care less about.

I don't really understand, because the problem remains that if you
break before int n... and do print n, you get a value, whereas you
should get an error saying the variable is not in scope.

> Heck, if the scope for k (being a primitive type that is initialized
> with a compile-time constant value) were started before n, I wouldn't
> even consider that to be a bug... I would consider it a marvel of
> intelligence in the compiler.

How so?

Rob

----- Original Message -----
From: Joe Buck <[EMAIL PROTECTED]>
To: Rob Quill <[EMAIL PROTECTED]>
Cc: GCC Development
Sent: Thursday, April 5, 2007 12:32:04 PM
Subject: Re: Variable scope debug info

On Thu, Apr 05, 2007 at 02:37:06PM +0100, Rob Quill wrote:
> My problem is this: when using GDB to debug the following bit of code:
>
>   int i = 0;
>   int j = 2;
>   int k = 3;
>
> if I set a breakpoint at the 3rd line, before int k = 3; has been
> executed, and check if k is in scope, I find that it is, when, of
> course, it shouldn't be. I emailed the GDB mailing list about this and
> was informed that the problem arises because GCC does not emit, and
> GDB cannot handle, the DW_AT_start_scope attribute (DWARF 3 standard,
> page 61, item 11).

If adding scope attributes every time more than one variable is
declared adds to the already immense bulk of C++ debugging information,
I'd prefer to live with the bug myself.
Re: Variable scope debug info
On 06/04/07, Joe Buck <[EMAIL PROTECTED]> wrote:
> On Fri, Apr 06, 2007 at 11:38:50AM +0100, Rob Quill wrote:
> > So the general consensus is that it's not worth doing?
> > Hypothetically, if I did it and it didn't make much difference,
> > would it be worth submitting a patch? Or should I just give up
> > before I start?
>
> It might be worth doing. I think that, in addition to a patch, I'd
> like to see measurements (maybe just the size increase in
> libstdc++.{a,so}). If the cost is small, I will not object.
>
> Seeing what the debugging sessions look like before and after will
> help assure that we're getting it right, especially at the boundary:
> if we have
>
>   23   Foo foo = func();
>   24   int bar = 0;
>
> then if a breakpoint is set at line 23, foo should not be in scope,
> but at line 24, it should be.

OK, that sounds good to me. As you say, there is no way to know until
it is actually implemented how much effect it will have. However, I
won't be able to start on it until summer as I have to worry about
sitting my finals, so I'll let you know when I get started.

Thanks all for your help :)

Rob
Successful bootstrap of GCC 3.4.3 on i586-pc-interix3 (with one little problem)
bash-3.00$ ../gcc-3.4.3/config.guess i586-pc-interix3 bash-3.00$ gcc/xgcc -v Using built-in specs. Configured with: ../gcc-3.4.3/configure --verbose --disable-shared --with-stabs --enable-nls --with-gnu-as --with-gnu-ld --enable-targets=i586-pc-interix3 --enable-threads=posix --enable-languages=c,c++ --enable-checking : (reconfigured) ../gcc-3.4.3/configure --verbose --disable-shared --with-stabs --enable-nls --with-gnu-as --with-gnu-ld --enable-targets=i586-pc-interix3 --enable-threads=posix --enable-languages=c,c++,g77,ada,java,objc : (reconfigured) ../gcc-3.4.3/configure --verbose --disable-shared --with-stabs --enable-nls --with-gnu-as --with-gnu-ld --enable-targets=i586-pc-interix3 --enable-threads=posix --enable-languages=ada,c,c++,f77,f95,java,objc,obj-c++ : (reconfigured) ../gcc-3.4.3/configure --verbose --disable-shared --with-stabs --enable-nls --with-gnu-as --with-gnu-ld --enable-targets=i586-pc-interix3 --enable-threads=posix --enable-languages=ada,c,c++,f77,java,objc Thread model: posix gcc version 3.4.3 There was a problem in building libobjc, libtool wouldn't run due to having '#! sh' as its first line. Changing it to '#! /bin/sh' enabled the bootstrap to complete. System: Windows 2000 SP4, Services for Unix 3.5.
Re: dejagnu version update?
On 5/13/20 10:51 AM, Mike Stump wrote: > So, now that ubuntu 20.04 is out and RHEL 8 is out, and they both > contain 6, and SLES has 6 and since we've been sitting at 1.4.4 for > so long, anyone want to not update dejagnu to require 1.6? We do still find and fix bugs occasionally. :-) And 1.4.4 was released 16 years ago... Linaro has been using the most recent release for years, so I think it's a safe upgrade. - rob - --- https://www.senecass.com
Re: dejagnu version update?
On 5/14/20 8:08 AM, Rainer Orth wrote:
> > stops responding for whatever reason. I have come up with a solution
> > (that I'd be happy to upstream, except that DejaGNU maintenance
> > seems to have been dead for like a year now), which I have also
> > confirmed to be required with current DejaGNU Git master so it must
> > be a different one, and I would like to know how it might be related
> > to the bug you mention.

I feel I need to bring up the issue that DejaGnu is 30 years old, and
its two maintainers are either trying to pay bills, or trying to
retire, or both... This problem will affect more projects in the
future, not just DejaGnu. I'd love to see if anyone would like to
become a co-maintainer, preferably someone who will still be actively
working for a few decades. I think most people on these lists make
their income from working on the toolchain, but some of us are still
volunteers... and getting older every day...

There is a patch backlog neither of us has even looked at, sorry. I'm
willing to put some time into this, but I think you all realize the
time involved to adequately test it. I'm not sure I have enough disk
space. :-)

Personally, I tried to find funding to refactor DejaGnu in Python,
since Tcl is unmaintained too, but nobody was interested.

- rob -
---
https://www.senecass.com
Re: dejagnu version update?
On 5/14/20 10:08 AM, David Edelsohn wrote:
> Have you approached the Linux Foundation Core Infrastructure
> Initiative for funding for both DejaGNU maintenance (patch backlog)
> and refactoring DejaGNU in Python efforts?

Not that team; the folks I talked to thought I was crazy for wanting to
refactor it. :-) That's been the usual answer from everyone. I even
talked about this at GNU Tools Cauldron once.

Seriously though, it'd be great to analyze the current code base, write
an actual design document, clean up the internal APIs, and build
something we can use for another 30 years. I'd start by writing a solid
expect module for Python, and then embedding Tcl in Python as a
compatibility layer. I understand it'd be a huge project, which is why
I haven't done this as a volunteer.

I think there has been a bit of a "leave it alone so it stays stable"
attitude... Bugs in the test framework affect our ability to work on
the toolchain efficiently, but other than the handful of people here,
nobody cares. Forgetting the refactoring daydream, testing patches and
doing releases still needs to be done, so I think we need a long-term
solution and fresh energy.

- rob -
---
https://www.senecass.com
Re: dejagnu version update?
On 5/14/20 5:34 PM, Maciej W. Rozycki wrote: > And then current development appears ongoing, ferociously indeed, with the > last check in literally today (barring my time zone), as indicated here: > <https://core.tcl-lang.org/tcl/timeline>. It's obvious I haven't been paying attention, so much for my retirement... :-) That's awesome actually, and it does look active. > Other than that what would be the technical advantage from rewriting > DejaGnu in Python (/me asking as a Python ignoramus)? The main thought was more towards an actual design and clean APIs, and assuming Tcl/Expect was unmaintained, needed to use something that'll be around for a few more decades. Assuming Tcl/Expect are maintained forever, that's a bit of a moot point. Everybody has always complained about using Tcl in DejaGnu. Python is just much more commonly used in the current century. Right now working through patches is probably more important. :-) There's zero patches on the DejaGnu savannah site, so I'd ask anybody to submit them so I don't have to dig them out of email archives. - rob -
Re: dejagnu version update?
On 5/15/20 6:22 PM, Mike Stump wrote:
> Anyway, love to have software that can move code wholesale. Love to
> move the testsuite into a new language.

All it needs is funding. :-)

What GDB needs is expect, not Tcl. Most of the GDB testsuite is just
expect pattern matching from the shell. That's the entire reason I
chose Tcl: it already had expect support. Expect was necessary
functionality for GDB testing. For GCC & Binutils, Expect is only used
for remote testing support. As it's possible to embed Tcl in other
programs, the idea was to use an embedded Tcl interpreter when needed
during a transition period. It's mostly just the framework itself that
would need to be refactored into Python.

There is also a large amount of code in gcc/testsuite that should
probably be in core DejaGnu too. That would be a large component in
analyzing the existing code to write a true design doc. The best part
is that we now have large toolchain testsuites to use to test DejaGnu
changes. At one point we thought DejaGnu would be used for other
projects, but I think it's obvious that these days it's only used for
GNU toolchain testing.

I'm making progress on setting up a development environment to test
patches. I use my ABE tool to build toolchains; I had to fix some bugs
(and add PI support) first.

- rob -
---
https://www.senecass.com
Re: dejagnu version update?
On 5/16/20 5:45 PM, Maciej W. Rozycki wrote:
> Overall perhaps a patch management system might be good to have, to
> make chasing patches easier, such as patchwork, and we already use
> Git, so we

As an old GNU project, we're required to use what the FSF prefers,
which is on savannah: https://savannah.gnu.org/patch/?group=dejagnu.
Our bug tracker is there too. We've used that for a long time. Yes,
patches in email are harder to track.

> fresh patchwork? The patch traffic is surely much lower with DejaGnu
> than it is with glibc, and there would be no data to migrate (but we
> might want to feed a couple of months' back worth of mailing list
> traffic).

I'm now building up the infrastructure to properly test patches, but
it's not enough to do the next release. All I have these days is my
laptop and a PI B3+. I'd need access to more hardware, as some of the
patches affect cross testing, or get others to test the release
candidates. Many of the problems with cross testing are obscure timing
problems. It's amazing how sometimes a minor unrelated change changes
the timing and things break...

To do a release properly requires duplicating that level of
infrastructure for at least several targets and several toolchain
releases, built on more than one GNU/Linux distro. It'll take most of
the week to really get a good base setup with baseline test results,
but some of the patches, like the DejaGnu testsuite ones, will go in
first since they don't affect the toolchain.

Jacob already added 9 patches to our site. I'm still building cross
compilers since some of his patches affect cross testing. I did add ADA
to my builds, which isn't a normal build default, since I thought some
of the patches were for ADA.

- rob -
---
https://www.senecass.com
Re: dejagnu version update?
On 5/17/20 1:43 PM, Maciej W. Rozycki wrote: > patch service before. It doesn't appear linked to our mailing list either > and instead you need to go through the hoops of a web interface (and open > an account first) to submit a change. From what I remember, it was decided the GNU toolchain (minus DejaGnu) that the savannah infrastructure was insufficient. Some of the others here probably remember better. GNU projects are different when it's not part of the toolchain. My Gnash project uses savannah heavily. But you are right, it forces you into a webbie world, and email is mostly for notifications. > That's precisely what patchworks is for, which has been used to various > extent for the GNU C library, GCC and GDB already. All of which projects > are of similar vintage to DejaGnu and I reckon rather important for the > GNU project. Given that our main patch submission channel is e-mail, > what's the point in using a system that does not accept one? So maybe the discussion point is if we want to consider DejaGnu part of the toolchain, and use the same infrastructure as the other tools. I have no problem with that. DejaGnu doesn't change much, and there were only 2 of us working on it, so we just never thought about it. - rob -
Re: dejagnu version update?
I processed the patch backlog for DejaGnu, and have gone through the bug list. It'd be nice if somebody could try master with a more complex environment, etc... if I'm going to push out a release. For cross testing all I have is a PI and QEMU. - rob -
Re: dejagnu version update?
On 5/26/20 7:20 PM, Maciej W. Rozycki wrote: > I'll run some RISC-V remote GCC/GDB testing and compare results for > DejaGnu 1.6/1.6.1 vs trunk. It will take several days though, as it takes > many hours to go through these testsuite runs. That'd be great. I'd rather push out a stable release, than have to fix bugs right after it gets released. - rob -
Re: connecting a QEMU VM to dejagnu...
On 10/16/19 5:40 PM, Alan Lehotsky via DejaGnu wrote: > The one example I found via a web search seems to want to do > everything in the virtual machine - but I have to believe that’s > going to be insanely slow… Well, qemu is a virtual machine... Here's the ones I used for GNU toolchain cross testing: https://git.linaro.org/toolchain/abe.git/tree/config/boards. There's a few on there. If you're building cross compilers, just use ABE and it's all built in. - rob -
SSA GIMPLE
Hello,

I am looking for some more information on the SSA GIMPLE syntax and was
wondering if there is a BNF available? I am interested in the IR of GCC
and am just looking for some further documentation/explanation of some
of the syntax I am observing, such as:

  OBJ_TYPE_REF(D.103787_32;D.103784_29->4) (D.103784_29, value__23);
  ** save_filt.1022_12 = <<>>;
  save_eptr.1021_13 = <<>>;
  resx;
  iftmp.256_17 = (int (*__vtbl_ptr_type) (void) *) D.52956_16;
  D.53402_2 = &this_1->m_cur_val;
  __base_ctor (&D.53467);
  __comp_ctor (&nm, if_typename__8, &D.53467);
  __cxa_atexit (__tcf_0, 0B, &__dso_handle);
  __static_initialization_and_destruction_0 (1, 65535);

Does anyone know where I might find such information? Any help and/or
pointers in the direction of information would be most welcome. I tried
the GCC wiki but I couldn't find much on SSA GIMPLE/low GIMPLE.

Thanks and regards all!

Rob
Updating gnu.org/software Fortran Page
Hi. The gnu.org webmaster team are going to be updating all the pages in the gnu.org/software subdirectory, which includes - http://www.gnu.org/software/fortran This will involve changing the design of the page to match the new style of gnu.org, and updating the information about the GNU Fortran project if possible. Is there any information you would like adding to the project page or any other changes you would like making to its content? Thanks. - Rob Myers, Chief GNU Webmaster.
Re: Help for my Master thesis
On 3/29/2013 1:35 PM, Kiefmann Bernhard wrote:
> Dear Ladies and Gentlemen!
>
> My name is Bernhard Kiefmann and I'm writing my Master's thesis on the
> topic "the suitability of the GNU C compiler for use in safety-related
> areas". The first problem with this is that I have to check whether
> the compiler meets the requirements of the international standard IEC
> 61508:2010. Here I would like to ask you my questions, as follows:
>
> 1) What are the rules of the compiler development? Are there any UML
>    diagrams? Because they are a requirement of the standard.
> 2) Are there activities for functional verification?
> 3) What procedures and measures exist for
>    - design and programming guidelines
>    - dynamic analysis and testing
>    - functional testing and black-box testing
>    - failure analysis
>    - modeling
>    - performance tests
>    - semi-formal methods
>    - static analysis
>    - a modular approach
>
> If you have information for me here, it would help in assessing
> whether the compiler is suitable for use in safety-relevant areas.
>
> The second point of my work is concerned with the treatment of
> releases. Do you put any kind of evidence in your source code, and
> what does it look like? The evidence should be read and analyzed, and
> the investigation should demonstrate whether the changes in the
> release code have effects on the safety-relevant area.
>
> I would like to thank you in advance for your help. Standing by for
> any questions you may have in the meantime, I remain
>
> Yours sincerely
> Kiefmann Bernhard
> bernhard.kiefm...@stud.fh-campuswien.ac.at

For aerospace applications, RTCA/DO-178C, entitled "Software
Considerations in Airborne Systems and Equipment Certification",
governs all aspects of software safety. This document and its
supplements address all of the information that you're interested in.
DO-333, "Formal Methods Supplement to DO-178C and DO-278A", DO-332,
"Object-Oriented Technology and Related Techniques Supplement to
DO-178C and DO-278A", and DO-331, "Model-Based Development and
Verification Supplement to DO-178C and DO-278A", will also be relevant.
These are all available from http://www.rtca.org/ (not free, sorry).
The EU has identically worded documents via the EUROCAE organization
(http://www.eurocae.net/). For example, ED-12C is identical to DO-178C.

Rob.
Re: DejaGnu and toolchain testing
On 07/25/2013 06:21 PM, Joseph S. Myers wrote:
> I was interested to watch the video of the DejaGnu BOF at the
> Cauldron. A few issues with DejaGnu for toolchain testing that I've
> noted but I don't think were covered there include:

Thanks for the thoughtful comments, they're useful as I start
considering refactoring DejaGnu to keep it working for the next 22
years... There is a lot of crusty old code in DejaGnu, I admit it.
DejaGnu was never truly designed, it was just built and debugged while
we were using it, and it shows. I'm not sure if this discussion is
better on the GCC list or the DejaGnu list, but I would like to keep
this thread going. Of course, GCC developers are the main users of
DejaGnu anyway.

> * DejaGnu has a lot of hardcoded logic to try to find various files in
> a toolchain build directory. A lot of it is actually for very old
> toolchain versions (using GCC version 2 or older, for example). The
> first issue with this is that it doesn't belong in DejaGnu: the
> toolchain should be free to rearrange its build directories without
> needing changes to DejaGnu itself (which in practice means there's
> lots of such logic in the toolchain's own testsuites *as well*,
> duplicating the DejaGnu code to a greater or lesser extent). The
> second issue is that "make install"

DejaGnu is a testing framework, so it makes sense that much of the GCC
testing logic is in gcc/testsuite/{lib,config}. It was also a decision
at the time that having a testsuite override existing procs in the
DejaGnu core was a good idea. Now, many years later, I think I'd move
most of what GCC needs into the core, especially all the "dg*" style of
tests. At one time the thought was that DejaGnu was a general-purpose
test framework, but at this point in time it's really just used for
toolchain testing (although my Gnash project also uses it). So I think
tweaking the DejaGnu core to be mainly toolchain-testing oriented is
probably a good idea.

> * Similarly, DejaGnu has hardcoded prune_warnings - and again GCC adds
> lots of its own prunes; it's not clear hardcoding this in DejaGnu is a
> particularly good idea either.

The DejaGnu pruning is older than GCC's. :-)

> * Another piece of unfortunate hardcoding in DejaGnu is how
> remote-host testing uses "-o a.out" when running tools on the remote
> host - such a difference from how they are run on a local host results
> in lots of issues

This is historical, a.out being common at the time.

> * A key feature of QMTest that I like but I don't think got mentioned
> is that you can *statically enumerate the set of tests* without
> running them. That is, a testsuite has a well-defined set of tests,
> and that set does not depend on what the results of the tests are -
> whereas it's very easy and common for a DejaGnu test to have test
> names (the text after PASS: or FAIL:) depending on whether the test
> passed or failed, or how the test passed or failed (no doubt the
> testsuite authors had reasons for doing this, but it conflicts with
> any automatic comparison of results). The

One of my other ideas for DejaGnu 2.0 is improved test result output.
I'm currently importing all test results into a database (see the mysql
branch on savannah), and find text parsing painful and lacking in
fine-grained detail. The text field for PASS/FAIL is overloaded. Since
I want to improve the ability to analyze results, i.e. comparing what
happens with differing configure or command-line options, I think the
output format has to change. One thought is to only add new fields to
the --xml output, as that is database specific, and leave the current
text output unchanged.

> * People in the BOF seemed happy with expect. I think expect has
> caused quite a few problems for toolchain testing. In particular,
> there are or

I don't think it was that we were happy with expect, but at least for
GDB testing, nobody has any alternatives. I thought I mentioned that a
refactored DejaGnu would only use expect for GDB testing; everything
else wouldn't require it. That also means all the remote execution
procs would need to work without expect as well.

- rob -
One thought is to only add new fields into the --xml output, as that is database specific, and leave the current text output unchanged. > * People in the BOF seemed happy with expect. I think expect has caused > quite a few problems for toolchain testing. In particular, there are or I don't think it was that we were happy with expect, but at least for GDB testing, nobody has any alternatives. I thought I mentioned that a refactored DejaGnu would only use expect for GDB testing, everything else wouldn't require it. That also means all the remote execution procs would need to work without expect as well. - rob -
Re: DejaGnu and toolchain testing
On 07/26/2013 10:37 AM, Joseph S. Myers wrote: > Anything in the core needs to avoid obstructing toolchain changes. People > typically test with the installed DejaGnu from their OS, and the OS itself > may well be a few years old (e.g. Ubuntu 10.04), so it's undesirable for > an enhancement to the GCC testsuite to require a new version of DejaGnu. > This means clean extensibility, and avoiding DejaGnu hardcoding things > that are not stable public interfaces. DejaGnu is basically stagnant because most people consider the pain of any improvements too great to change anything. If I launch off on a DejaGnu 2.0, my thought is the existing release wouldn't go away. Many distributions ship multiple versions of some applications. Any changes to DejaGnu would likely live in a branch for a long time, but would be usable by anyone interested in better functionality. Yes, an actual design and defining public interfaces would be a good idea. Currently DejaGnu has many arbitrary APIs and settings, all created without a whole lot of thought other than working around or fixing a problem. I also realize that any major changes to DejaGnu will require corresponding changes in the testsuite support code. I'm completely aware of how much work this would be having written much of it... There would have to be backward compatibility maintained for a considerable time. > Right now, DejaGnu has lots of toolchain stuff in the core ... toolchain > stuff for building Cygnus trees 20 years ago rather than what's useful > now. It's not that much better if a DejaGnu version released in 2013 and > used for testing in 2017 has things in it that are good for testing 2013 > toolchains and get in the way for testing 2017 toolchains. I'd agree there is lots of crufty support for things like the old Cygnus trees that could be removed. Ideally I'd prefer to explore people's ideas on what would be useful for testing toolchains 5-10 years from now. Me, I want something not dependent on a dying and mostly unmaintained scripting language that nobody likes anyway (the current working idea is to use python). I also want to be able to compare test results in better ways than diffing huge text files. I'd like to compare multiple test runs as well in a reasonably detailed fashion. - rob -
Re: RFH: GPLv3
As a (non-developer) user, may I humbly submit a slightly different view: The change of license is an Event, which needs to be marked in concrete by a version number change. All future mainline development will be under the GPLv3. However, there are many people who (due to legal or commercial pressures, amongst others) are required to continue under GPLv2, and there doesn't seem to be a good pragmatic reason to actively penalise those people. I think that having one more GPLv2 release, and then all future releases under GPLv3, creates a discontinuity in the compiler between licenses which may be unhelpful because the first GPLv3 gcc will be technically different to the last GPLv2 gcc. This means that the decision about which to use will be a combination of license issues and technical issues. So, could there be a simultaneous release of gcc under GPLv2 and GPLv3, identical in all respects except for the license? The GPLv2 release will represent the best-quality compiler that the project can deliver, as a base for those who must continue to support it. The GPLv3 release will be the reference point for future development, and will be a known quantity in technical terms. The GPLv2 compiler could be gcc 4.2.1, and the GPLv3 compiler could be gcc 4.3.0. There is an issue that people have been hearing about the new functionalities that gcc 4.3 will have, but it shouldn't be too hard to "market" the concept that 4.3 is now a license change version, and 4.4 will be the compiler that 4.3 was going to be. Perhaps the simultaneous release could be done on July 31, which is iirc the FSF's deadline for GPLv2 releases. Extending the gcc 4.2.1 release date might allow some last-minute bug-fixes to make it in there. Compiler vendors etc will have an increased workload maintaining the separability of GPLv2 and GPLv3 code during the transition to the new license, and it would seem that the transition will take quite some time (years?), but I'm sure that they will develop procedures to make it manageable. Cheers, Rob Brown.
Re: RFH: GPLv3
>Krzysztof Halasa wrote: >> Michael Eager <[EMAIL PROTECTED]> writes: >> >>> Not until someone updates the txt. Which should happen quickly, >>> but if someone applies a GPLv3 patch to a previously GPLv2 branch, >>> the entire branch becomes GPLv3, whether the COPYING file was >>> updated or not. >> >> Come on, if the FSF (the copyright holder) distributes a program, >> and if the included licence says GPLv2+, then the licence is GPLv2+ >> and you'll have a really hard time trying to convince anyone that >> it's different. > >You asked if COPYING would be updated. The answer is not necessarily. >The COPYING text may say GPLv2+, but if there has been a GPLv3 patch >applied to the branch, then the entire branch is GPLv3. I struggle to believe this. Afaik a bunch of code is released under a license, and nothing has the power to magically change that license. If someone applies a GPLv3 patch to some GPLv2 code and releases the whole under the GPLv2, then that person has broken copyright law and the release is invalid (because the GPLv3 code has been released without a license), but the rest of the GPLv2 code is still GPLv2. Or have I missed something here? It sounds to me like the syntactic mischief Microsoft is playing when it calls the GPL "viral" (note, I'm not suggesting that you are making mischief, just that the implication is similar)! > >> BTW: the copyright holder is free to take a GPLv3 patch and >> release it under GPLv2 (and any other licence). > >FSF is the copyright holder. As of this time, they have said >that they are not willing to release patches under GPLv2 for >application to GPLv2 branches. Mark has a proposal which would >allow for that. > I think this misses a point: FSF is a copyright assignee, and I don't know how that relates to "holding", but the person who wrote the patch is free to dual-license without reference to the FSF. So as a completely fabricated example: say in 6 months Richard Kenner makes a patch to (GPLv3) mainline for a bug, and you want that patch to improve a GPLv2 product that you're maintaining for one of your customers. You are free to ask Richard to release that patch to you under GPLv2, and Richard is free to grant that request.
Re: RFH: GPLv3
Robert Dewar wrote: >One could of course just take a blanket view that everything >on the site is, as of a certain moment, licensed under GPLv3 >(note you don't have to change file headers to achieve this, >the file headers have no particular legal significance in >any case). According to http://www.gnu.org/licenses/gpl-howto.html, the file headers are precisely the place to make the license grant. > >That at least would be clean, and anyone concerned with >accepting GPLv3 stuff would simply know that as of this >date and time, they should no longer download ANYTHING >from the entire gnu.org site. > >That's actually not so terrible, you lose some users >temporarily, but at least there is no misunderstanding. There would be gross misunderstanding! Placing everything on gnu.org under GPLv3 does nothing to affect all of its mirrors. So if I download gcc-4.2.0.tar.bz2 from ftp.gnu.org then it's GPLv3, but if I download it from any of the mirrors then it's GPLv2. Surely the aim of the process should be to eliminate "gotchas" as much as possible. Everyone has the responsibility to verify that they have a license before using someone else's code. How could I, as the recipient of a file which says "GPLv2" etc at the top, know that it was downloaded from gnu.org and is actually really GPLv3?
Removing #include "*.c"
Hi, I have decided to try and remove occurrences of #include "*.c" as suggested here: http://gcc.gnu.org/projects/beginner.html However, it remarks that: "There may be places where someone is trying to simulate generic programming through the macro facility. Discuss what should be done with the maintainers of those files." I have started by trying to remove #include "decimal32.c" from /libdecnumber/bid/decimal32.c, but I believe this is a case of the above. In which case, is there a general way to go about removing such cases, and also why is it necessary to include a .c file at all? Thanks for your help. Rob Quill
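For readers unfamiliar with the pattern the beginner-projects page
warns about, here is a small made-up illustration of "generic
programming through the macro facility"; it does not reflect the actual
libdecnumber sources, it only shows why deleting the #include "*.c"
lines is not a mechanical change:

  /* sum_impl.c -- a shared "template" body; TYPE and NAME are defined
     by whichever file includes it. */
  TYPE NAME(sum) (const TYPE *v, int n)
  {
      TYPE s = 0;
      int i;
      for (i = 0; i < n; i++)
          s += v[i];
      return s;
  }

  /* sums.c -- stamps out two variants of the same body.  Removing the
     #include lines here means finding another way to generate both
     sum_f and sum_d. */
  #define CAT2(a, b) a##b
  #define CAT(a, b)  CAT2(a, b)

  #define TYPE float
  #define NAME(x) CAT(x, _f)      /* generates sum_f */
  #include "sum_impl.c"
  #undef TYPE
  #undef NAME

  #define TYPE double
  #define NAME(x) CAT(x, _d)      /* generates sum_d */
  #include "sum_impl.c"
  #undef TYPE
  #undef NAME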
Handling overloaded template functions with variadic parameters
Hi,

I am trying to fix this bug:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=33962

The problem seems to be that more_specialized_fn() doesn't know how to
decide which function is more specialised when both functions are
variadic. I have narrowed the problem down to more_specialized_fn()
being told there are two parameters to look at, but the tree chain for
the arguments only having one. I am not sure if this is because:

a) the number of parameters should not include the variadic (...)
   parameter, or
b) the tree chains should include something to show the variadic
   parameter.

Any advice you can offer is greatly appreciated.

Rob Quill
Help understanding overloaded templates
Hi,

I was wondering if anyone could help me make sense of the
more_specialized_fn() function in pt.c (line 13281). Specifically, I am
trying to understand what each of these are:

  tree decl1 = DECL_TEMPLATE_RESULT (pat1);
  tree targs1 = make_tree_vec (DECL_NTPARMS (pat1));
  tree tparms1 = DECL_INNERMOST_TEMPLATE_PARMS (pat1);
  tree args1 = TYPE_ARG_TYPES (TREE_TYPE (decl1));

and how the function is supposed to deal with variadic functions in
terms of these. That is to say, if a function is variadic, how is that
represented in these data structures?

Any help is much appreciated. Thanks.

Rob
plugin help: Inserting a function call in gimple code?
I'm experimenting with the gimple plugin infrastructure and I'm having
trouble instrumenting code in a way that is compatible with the
optimizer. Here's a simple example that is intended to insert the
function call "__memcheck_register_argv(argc, argv)" at the beginning
of main. The code runs during pass_plugin_gimple (which comes right
after pass_apply_inline in passes.c) and works great with -O0, but
causes the compiler to crash with -O1 or higher.

---

  tree argv_registrar_type;
  tree argv_registrar;
  tree argv_registrar_call;

  argv_registrar_type
    = build_function_type_list (void_type_node,
                                integer_type_node,
                                build_pointer_type (build_pointer_type (char_type_node)),
                                NULL_TREE);
  argv_registrar = build_fn_decl ("__memcheck_register_argv",
                                  argv_registrar_type);
  argv_registrar_call
    = build_call_expr (argv_registrar, 2,
                       DECL_ARGUMENTS (cfun->decl),
                       TREE_CHAIN (DECL_ARGUMENTS (cfun->decl)));
  bsi_insert_before (&iter, argv_registrar_call, BSI_SAME_STMT);

---

With -O1, I get the compiler failure

  test.c: In function 'main':
  test.c:2: error: expected an SSA_NAME object
  test.c:2: error: in statement
  __memcheck_register_argv (argc, argv);
  test.c:2: internal compiler error: verify_ssa failed
  Please submit a full bug report, with preprocessed source if
  appropriate. See <http://gcc.gnu.org/bugs.html> for instructions.

when attempting to compile the code

  int main(int argc, char **argv)
  {
      return 0;
  }

I've tried all kinds of combinations of gimplify_stmt, update_ssa,
etc., but I just can't figure out the "correct" way to insert a
function call into the gimple tree. Any help would be greatly
appreciated!

Best,
Rob
Re: plugin help: Inserting a function call in gimple code?
Martin Jambor wrote: Hi, On Wed, Jan 02, 2008 at 06:13:37PM -0500, Rob Johnson wrote: I'm experimenting with the gimple plugin infrastructure and I'm having trouble instrumenting code in a way that is compatible with the optimizer. Here's a simple example that is intended to insert the function call "__memcheck_register_argv(argc, argv)" at the beginning of main. The code runs during pass_plugin_gimple (which comes right after pass_apply_inline in passes.c) and works great with -O0, but causes the compiler to crash with -O1 or higher. - tree argv_registrar_type; tree argv_registrar; tree argv_registrar_call; argv_registrar_type = build_function_type_list (void_type_node, integer_type_node, build_pointer_type (build_pointer_type (char_type_node)), NULL_TREE); argv_registrar = build_fn_decl ("__memcheck_register_argv", argv_registrar_type); argv_registrar_call = build_call_expr (argv_registrar, 2, DECL_ARGUMENTS (cfun->decl), TREE_CHAIN (DECL_ARGUMENTS (cfun->decl))); DECL_ARGUMENTS is a tree chain of PARM_DECLs and in SSA GIMPLE, scalar operands (integers and pointer are both scalar, is_gimple_reg() predicate is there to identify variables that need to be converted to SSA) need to be SSA_NAMEs of declarations (PARM_DECLs and VAR_DECLs in particular). Therefore I suspect you need to create a different chain of respective SSA_NAMES and pass that to build_call_expr(). You can get the default SSA_NAME by calling gimple_default_def(). Yes! It seems it's slightly more complicated given that, with some optimization levels the code is sometimes turned into SSA and sometimes not, and that, depending on how the original program uses argv and argc, they may or may not already have default definitions. If the original program doesn't touch argv/argc, then we also have to update the set of referenced vars. Here's the code I wrote to construct the argc argument to the above call expression: tree argc; /* If we're doing SSA... */ if (cfun->gimple_df && cfun->gimple_df->default_defs) { argc = gimple_default_def (cfun, DECL_ARGUMENTS (cfun->decl)); if (!argc) { argc = DECL_ARGUMENTS (cfun->decl); mark_sym_for_renaming (argc); } add_referenced_var (DECL_ARGUMENTS (cfun->decl)); } else argc = DECL_ARGUMENTS (cfun->decl); The code for argv is similar. Does that look right? Once I get this plugin working, I will try to extract some useful building blocks for other plugin writers. Thanks for your help! Best, Rob
[gnu.org #456639] broken link in libstdc++ manual online
> [spoon.reloa...@gmail.com - Sun Jun 21 16:20:11 2009]: > > In the page of the libstdc++ manual about the API documentation: > http://gcc.gnu.org/onlinedocs/libstdc++/api.html > the link to "the latest collection" goes to a "404 Not Found" Hi. I'm forwarding you this email that was assigned to gnu webmasters for the attention of your web page guys. Thanks. - Rob Myers.
files in (at least) two archives in https://gcc.gnu.org/onlinedocs/gcc-14.2.0/ contain case-sensitive file names
Hello,

From https://gcc.gnu.org/onlinedocs/gcc-14.2.0/ I recently downloaded
gnat_rm-html.tar.gz and gnat_ugn-html.tar.gz. In both archives there
are filenames that differ only in case, e.g. index.html and Index.html.
This is fine for Unix/Linux users but not workable for MS Windows
users, where the filesystem is case-insensitive. Is it possible to
remedy this, or otherwise provide a warning about the non-portability
of these archives, or a workaround?

Best regards,
Rob Groen