RE: Possible range based 'for' bug

2015-06-22 Thread Paulo Matos


> -Original Message-
> From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf
> Of Julian Klappenbach
> Sent: 21 June 2015 16:56
> To: gcc@gcc.gnu.org
> Subject: Re: Possible range based 'for' bug
> 
> Version info:
> 
> Configured with:
> --prefix=/Applications/Xcode.app/Contents/Developer/usr
> --with-gxx-include-dir=/usr/include/c++/4.2.1
> Apple LLVM version 6.1.0 (clang-602.0.53) (based on LLVM 3.6.0svn)
> Target: x86_64-apple-darwin14.3.0
> Thread model: posix
> 

Is this what gcc --version returns on your mac?

Paulo Matos


Expectations for 0/0

2015-07-28 Thread Paulo Matos
Hi,

What are the expectations for the 0/0 division?
The test execute.exp=arith-rand.c generates two random integers, which may both 
be 0, and one of its testing blocks is:
  { signed int xx = x, yy = y, r1, r2;
    if ((unsigned int) xx << 1 == 0 && yy == -1)
      continue;
    r1 = xx / yy;
    r2 = xx % yy;
    if (ABS (r2) >= (unsigned int) ABS (yy)
        || (signed int) (r1 * yy + r2) != xx)
      abort ();
  }

Our routine returns : 
R1: 0x
R2: 0xf

Then it aborts because ABS (r2) >= (unsigned int) ABS (yy).
While I understand the results from our division routine might be peculiar, 
this division is also undefined.

The block skips the test for ((unsigned int) xx << 1 == 0 && yy == -1); should 
we also skip it when both operands are zero? If not, what do you expect 0/0 and 
0%0 to return?

Regards,

Paulo Matos




RE: Expectations for 0/0

2015-07-29 Thread Paulo Matos


> -Original Message-
> From: Andrew Haley [mailto:a...@redhat.com]
> Sent: 28 July 2015 18:38
> To: Paulo Matos; gcc@gcc.gnu.org
> Subject: Re: Expectations for 0/0
> 
> On 07/28/2015 04:40 PM, Paulo Matos wrote:
> > The block skips the test for ((unsigned int) xx << 1 == 0 && yy == -
> 1), should we skip it if they're both zero as well?
> 
> Yes.  It's undefined behaviour.  If we don't want to invoke nasal
> daemons we shouldn't do this.


Thanks. I will propose a patch to avoid this.

-- 
Paulo Matos


RE: Expectations for 0/0

2015-07-29 Thread Paulo Matos


> -Original Message-
> From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf
> Of Paulo Matos
> Sent: 29 July 2015 10:12
> To: Andrew Haley; gcc@gcc.gnu.org
> Subject: RE: Expectations for 0/0
> 
> 
> 
> > -Original Message-
> > From: Andrew Haley [mailto:a...@redhat.com]
> > Sent: 28 July 2015 18:38
> > To: Paulo Matos; gcc@gcc.gnu.org
> > Subject: Re: Expectations for 0/0
> >
> > On 07/28/2015 04:40 PM, Paulo Matos wrote:
> > > The block skips the test for ((unsigned int) xx << 1 == 0 && yy ==
> -
> > 1), should we skip it if they're both zero as well?
> >
> > Yes.  It's undefined behaviour.  If we don't want to invoke nasal
> > daemons we shouldn't do this.
> 
> 
> Thanks. I will propose a patch to avoid this.
> 

My mistake. The check is already in the test, but when I simplified it I ended 
up removing the check for 0 in the denominator.

Apologies.

-- 
Paulo Matos


Re: Repository for the conversion machinery

2015-08-27 Thread Paulo Matos



On 27/08/15 16:48, Eric S. Raymond wrote:

FX :

[context for the Fortran list: the svn repo is about to be converted into a git 
repo, which will be the official gcc repo from then on]

Hi Eric,

I realize that some of our Fortran maintainers (and committers) are not listed 
in the map file:

Fortran Janne Blomqvist 
Fortran Tobias Burnus   
Fortran Daniel Franke   
Fortran Daniel Kraft
Fortran Mikael Morin
Fortran Janus Weil  

Is that normal?


Do they have actual commits in the repository or were their commits shipped
as patches and merged by a core maintainer?

If the former, then I don't know why they're not in the map.  It contains
an entry for every distinct Unix username it could extract.  What usernames
should I expect these people to have?



I noticed I am not on the list (check commit r225509, user pmatos) either.

And thanks for your help on this transition.

--
Paulo Matos


svn timeouts

2015-08-27 Thread Paulo Matos

Hi,

Am I the only one regularly getting svn timeouts lately?
svn: E210002: Unable to connect to a repository at URL 
'svn://gcc.gnu.org/svn/gcc/trunk'

svn: E210002: Network connection closed unexpectedly

Is this because the repository is being overloaded with requests 
due to the recently started transition?


--
Paulo Matos


Re: Repository for the conversion machinery

2015-08-27 Thread Paulo Matos



On 27/08/15 16:56, Paulo Matos wrote:


I noticed I am not on the list (check commit r225509, user pmatos) either.

And thanks for your help on this transition.



r188804 | mkuvyrkov 

for example.

--
Paulo Matos


Re: Is test case with 700k lines of code a valid test case?

2016-03-19 Thread Paulo Matos


On 18/03/16 15:02, Jonathan Wakely wrote:
> 
> It's probably crashing because it's too large, so if you reduce it
> then it won't crash.
> 

I would be curious to see what the limit is, though, or whether it depends on
the machine he's running GCC on.

-- 
Paulo Matos





Re: Is test case with 700k lines of code a valid test case?

2016-03-19 Thread Paulo Matos


On 14/03/16 16:31, Andrey Tarasevich wrote:
> Hi,
> 
> I have a source file with 700k lines of code 99% of which are printf() 
> statements. Compiling this test case crashes GCC 5.3.0 with segmentation 
> fault. 
> Can such a test case be considered valid, or are source files of 35 MB too 
> much for a C compiler, making a crash acceptable? It crashes on Ubuntu 14.04 
> 64bit with 16 GB of RAM. 
> 
> Cheers,
> Andrey
> 

I would think it's useful but a reduced version would be great.
Can you reduce the test? If you need a hand, I can help. Contact me
directly and I will give it a try.

Cheers,
-- 
Paulo Matos


RE: jump_table_data and active_insn_p

2014-05-12 Thread Paulo Matos


> -Original Message-
> From: Steven Bosscher [mailto:stevenb@gmail.com]
> Sent: 05 May 2014 10:11
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: Re: jump_table_data and active_insn_p
> 
> On Mon, Mar 17, 2014 at 12:51 PM, Paulo Matos wrote:
> > Why is jump_table_data an active_insn?
> > int
> > active_insn_p (const_rtx insn)
> > {
> >   return (CALL_P (insn) || JUMP_P (insn)
> >           || JUMP_TABLE_DATA_P (insn) /* FIXME */
> >           || (NONJUMP_INSN_P (insn)
> >               && (! reload_completed
> >                   || (GET_CODE (PATTERN (insn)) != USE
> >                       && GET_CODE (PATTERN (insn)) != CLOBBER))));
> > }
> >
> > It is clear that someone [Steven Bosscher] thought it needs fixing
> but what's the problem with just removing it from the OR-expression?
> 
> Places using active_insn_p, next_active_insn, prev_active_insn, etc.,
> need to be audited to make sure it's safe to remove JUMP_TABLE_DATA
> from the OR-expression.
> 
> I've done most of that work, but it needs finishing and for that I
> need to find some time.
> See http://gcc.gnu.org/ml/gcc-patches/2013-11/msg03122.html
> 

Fair enough.

Thanks for the explanation.

Paulo Matos

> Ciao!
> Steven


RE: GCC driver to "Compile twice, score the assembly, choose the best"?

2014-05-15 Thread Paulo Matos


> -Original Message-
> From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf
> Of Ian Bolton
> Sent: 15 May 2014 12:47
> To: gcc@gcc.gnu.org
> Subject: GCC driver to "Compile twice, score the assembly, choose the
> best"?
> 
> Hi, fellow GCC developers!
> 
> I was wondering if the "gcc" driver could be made to invoke "cc1"
> twice, with different flags, and then just keep the better of the two
> .s files that comes out?
> 
> I'm sure this is not a new idea, but I'm not aware of anything being
> done in this area, so I've made this post to gather your views. :)
> 
> The kinds of flags I am thinking could be toggled are register
> allocation and instruction scheduling ones, since it's very hard to
> find one-size-fits-all there and we don't really want to have the user
> depend on knowing the right one.
> 
> Obviously, compilation time will go up, but the run-time benefits
> could be huge.
> 
> What are your thoughts?  What work in this area have I failed to dig
> up in my limited research?

This looks like something that should be done outside of cc1. Other people have 
thought about it, and what you suggest is exactly the example given in the 
OpenTuner (http://opentuner.org/) announcement paper:
http://dspace.mit.edu/handle/1721.1/81958

The difference is that there you don't compare .s files; instead, the metric is 
based on executing the program on a test bench.
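As a rough illustration of the "score the assembly" side of the idea (this is not OpenTuner, just a hypothetical driver sketch): compile with each flag set, score each .s output statically, and keep the winner. A plain instruction count stands in for a real cost model here:

```python
def score_asm(asm_text):
    """Crude static score: count instruction lines, skipping labels,
    directives, and comments.  A real driver would use a weighted cost
    model or measure run time instead."""
    score = 0
    for line in asm_text.splitlines():
        line = line.strip()
        if not line or line.endswith(':') or line.startswith(('.', '#')):
            continue
        score += 1
    return score

def pick_best(candidates):
    """candidates: list of (flags, asm_text); return the flags whose
    output scored lowest."""
    return min(candidates, key=lambda c: score_asm(c[1]))[0]

# Two hypothetical .s outputs for the same function under different flags.
asm_a = ".text\nf:\n\taddl %esi, %edi\n\tmovl %edi, %eax\n\tret\n"
asm_b = ".text\nf:\n\tleal (%rdi,%rsi), %eax\n\tret\n"

print(pick_best([("-O2 -fsched-a", asm_a), ("-O2 -fsched-b", asm_b)]))
# prints: -O2 -fsched-b
```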

HTH,

Paulo Matos


RE: Roadmap for 4.9.1, 4.10.0 and onwards?

2014-05-20 Thread Paulo Matos


> -Original Message-
> From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf
> Of Basile Starynkevitch
> Sent: 20 May 2014 16:29
> To: Bruce Adams
> Cc: gcc@gcc.gnu.org
> Subject: Re: Roadmap for 4.9.1, 4.10.0 and onwards?
> 
> On Tue, 2014-05-20 at 11:09 +0100, Bruce Adams wrote:
> > Hi,
> > I've been tracking the latest releases of gcc since 4.7 or so
> (variously interested in C++1y support, cilk and openmp).
> > One thing I've found hard to locate is information about planned
> inclusions for future releases.
> > As much relies on unpredictable community contributions I don't
> expect there to be a concrete or reliable plan.
> 
> > However, equally I'm sure the steering committee have some ideas
> over
> > what ought to be upcoming releases.
> 
> As a whole, the steering committee does not have any idea, because GCC
> development is based upon volunteer contributions.
>

I understand the argument, but I am not sure it's the way to go. Even if the 
project is based on volunteer contributions, it would be useful to have a 
tentative roadmap. This, I would think, would also help beginner volunteers 
know where to start if they wanted to contribute to the project. The roadmap 
could be a list of features (big or small) or bug fixes that we would like 
addressed for a particular version. Even if we don't want to call it a roadmap, 
it would still be useful to have a list of things that are being worked on, or 
are in the process of being merged into mainline and will therefore make it 
into the next major version.

That being said, I know it's hard to set some time apart to write this kind of 
thing, given that most of us would rather be hacking on GCC. From a newcomer's 
point of view, however, not having something like a roadmap makes it look like 
the project is heading nowhere.


Re: GCC version bikeshedding

2014-07-20 Thread Paulo Matos

On 20/07/14 17:59, Richard Biener wrote:

On July 20, 2014 5:55:06 PM GMT+01:00, Jakub Jelinek  wrote:

Hi!

So, what versioning scheme have we actually agreed on, before I change
it in
wwwdocs?  Is that
5.0.0 in ~ April 2015, 5.0.1 in ~ June-July 2015 and 5.1.0 in ~ April
2016,
or
5.0 in ~ April 2015, 5.1 in ~ June-July 2015 and 6.0 in ~ April 2016?
The only thing I understood was that we don't want 4.10, but for the
rest
various people expressed different preferences and then it was
presented as
agreement on 5.0, which applies to both of the above.

It is not a big deal either way of course.


I understood we agreed on 5.0 and further 5.1, 5.2 releases from the branch and 
6.0 a year later. With unspecified uses for the patch level number (so leave it 
at zero).



That's what I understood as well. Someone mentioned leaving the patch 
level number for the distros to use, which sounded like a good idea.


Paulo Matos


Richard.


Jakub









RE: GCC version bikeshedding

2014-07-21 Thread Paulo Matos


> -Original Message-
> From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf
> Of Andi Kleen
> Sent: 20 July 2014 22:29
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: Re: GCC version bikeshedding
> 
> Paulo Matos  writes:
> >
> > That's what I understood as well. Someone mentioned to leave the
> patch
> > level number to the distros to use which sounded like a good idea.
> 
> Sounds like a bad idea, as then there would be non unique gcc
> versions.
> redhat gcc 5.0.2 potentially being completely different from suse gcc
> 5.0.2
> 

I understand your concern, but I am not convinced it's bad. The main reason is 
that we wouldn't distribute GCC releases x.y.z with z != 0, so if you saw 5.0.3 
in the wild you could only conclude it was 5.0 with a few patches from some 
vendor. As I type this I am starting to realize how frustrating that might 
become. However, isn't it already the case that different distros ship 
different builds of gcc 4.9.1? The default gcc on my linode shows:
gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)

So I can't see why in the future you couldn't have:
gcc version 5.1 (Ubuntu/Linaro 5.1.3)

This only applies if the release managers want to assign the patch level number 
to the distros. I don't think there was a decision on this.

Paulo Matos


Extracting live registers

2018-11-06 Thread Paulo Matos
Hi,

I remember from a while ago that there's some option (or there was...)
that gets GCC to print some register allocation information together
with the assembler output.

I am interested in obtaining the live registers per basic block. I think
the option I had in mind did that, but I can't remember its name
anymore. Can someone point me to the option, or to another way to
extract this information?

Kind regards,
-- 
Paulo Matos


Re: Extracting live registers

2018-11-06 Thread Paulo Matos
Apologies, wrong mailing list. Should have sent this to gcc-help.

On 06/11/2018 21:35, Paulo Matos wrote:
> Hi,
> 
> I remember from a while ago that there's some option (or there was...)
> that gets GCC to print some register allocation information together
> with the assembler output.
> 
> I am interested in obtaining the live registers per basic block. I think
> the option I had in mind did that, but I can't remember its name
> anymore. Can someone point me to the option, or to another way to
> extract this information?
> 
> Kind regards,
> 

-- 
Paulo Matos


Re: Extracting live registers

2018-11-06 Thread Paulo Matos



On 07/11/2018 00:40, Segher Boessenkool wrote:
> Hi Paulo,
> 
> -fdump-rtl-alignments[-all] is the last dump with all that information I
> think.  This one also has all this info without -all it seems.  With -all
> it shows it interleaving the RTL dump as well, which may or may not be
> handy for you.
> 

Thanks, however it provides no correspondence to the set of asm
instructions in the basic block. After you mentioned
-fdump-rtl-alignments, I tried a few related flags and hit upon what I
thought would work: -dA and -dP, but unfortunately these don't output
live out information per basic block so it's not helpful for my
application. It would be great if -dA or -dP would show live out info as
well, but that doesn't seem to be the case at the moment.

-- 
Paulo Matos


Re: Extracting live registers

2018-11-07 Thread Paulo Matos



On 07/11/2018 20:27, Segher Boessenkool wrote:
> 
> Sure, it shows the register information at the edges of basic blocks
> only.  This is what you asked for btw ;-)
> 
> 

True, but I need a way to map that information to the assembly
instructions in the basic block. :) I think it's not impossible with all
that gcc provides, but there's certainly a fair amount of parsing of
these files, which is not ideal given their format might change under my
feet.

-- 
Paulo Matos


riscv64 dep. computation

2019-02-14 Thread Paulo Matos
Hello,

While working on a private port of riscv, I noticed that upstream shows
the same behaviour.

For the code:
#define TYPE unsigned short

struct foo_t
{
  TYPE a;
  TYPE b;
  TYPE c;
};

void
func (struct foo_t *x, struct foo_t *y)
{
  y->a = x->a;
  y->b = x->b;
  y->c = x->c;
}

If I compile this with -O2, sched1 groups all loads and all stores
together. That's perfect. However, if I change TYPE to unsigned char and
recompile, the stores and loads are interleaved.

Further investigation shows that for unsigned char there are extra
dependencies that block the scheduler from grouping stores and loads.

For example, there's a dependency between:
(insn 8 3 9 2 (set (mem:QI (reg/v/f:DI 76 [ yD.1533 ]) [0
y_6(D)->aD.1529+0 S1 A8])
(subreg/s/v:QI (reg:DI 72 [ _1 ]) 0)) "test.c":13:8 142
{*movqi_internal}
 (expr_list:REG_DEAD (reg:DI 72 [ _1 ])
(nil)))

and
(insn 11 10 12 2 (set (reg:DI 74 [ _3 ])
(zero_extend:DI (mem:QI (plus:DI (reg/v/f:DI 75 [ xD.1532 ])
(const_int 2 [0x2])) [0 x_5(D)->cD.1531+0 S1 A8])))
"test.c":15:11 89 {zero_extendqidi2}
 (expr_list:REG_DEAD (reg/v/f:DI 75 [ xD.1532 ])
(nil)))

which didn't exist in the `unsigned short' case.

I can't find where this dependency is coming from but also can't justify
it so it seems like a bug to me. Is there a reason for this to happen
that I might not be aware of?

While I am at it, debugging compute_block_dependencies in sched-rgn.c is
a massive pain. This calls sched_analyze which receives a struct
deps_desc that tracks the dependencies in the insn list. Is there a way
to pretty print this structure in gdb?

Kind regards,

-- 
Paulo Matos


Re: riscv64 dep. computation

2019-02-14 Thread Paulo Matos



On 14/02/2019 19:56, Jim Wilson wrote:
> On 2/14/19 3:13 AM, Paulo Matos wrote:
>> If I compile this with -O2, sched1 groups all loads and all stores
>> together. That's perfect. However, if I change TYPE to unsigned char and
>> recompile, the stores and loads are interleaved.
>>
>> Further investigation shows that for unsigned char there are extra
>> dependencies that block the scheduler from grouping stores and loads.
> 
> The ISO C standard says that anything can be casted to char *, and char
> * can be casted to anything.  Hence, a char * pointer aliases everything.
> 
> If you look at the alias set info in the MEMs, you can see that the char
> * references are in alias set 0, which means that they alias everything.
>  The short * references are in alias set 2 which means they only alias
> other stuff in alias set 2.  The difference here is that short * does
> not alias the structure pointers, but char * does.  I haven't tried
> debugging your example, but this is presumably where the difference
> comes from.
>

OK, that seems to make sense. Indeed if I use restrict on the argument
pointers, the compiler will sort itself out and group the loads and stores.

> Because x and y are pointer parameters, the compiler must assume that
> they might alias.  And because char * aliases everything, the char
> references alias them too.  If you change x and y to global variables,
> then they no longer alias each other, and the compiler will schedule all
> of the loads first, even for char.
> 

Are global variables not supposed to alias each other?
If I indeed do that, gcc still won't group loads and stores:
https://cx.rv8.io/g/rFjGLa

-- 
Paulo Matos


Re: riscv64 dep. computation

2019-02-15 Thread Paulo Matos



On 15/02/2019 19:15, Jim Wilson wrote:
> On Thu, Feb 14, 2019 at 11:33 PM Paulo Matos  wrote:
>> Are global variables not supposed to alias each other?
>> If I indeed do that, gcc still won't group loads and stores:
>> https://cx.rv8.io/g/rFjGLa
> 
> I meant something like
> struct foo_t x, y;
> and now they clearly don't alias.  As global pointers they may still alias.
> 

Ah ok, of course. Like that it makes sense they don't alias.

Thanks,

-- 
Paulo Matos


RISC-V sibcall optimization with save-restore

2019-03-20 Thread Paulo Matos
org/bugs/> for instructions.


Is there a way around this issue to allow the libcall to be emitted even
if sibcalls were enabled, or to emit a sibcall even if they had been
disabled? Since the problem stems from the fact that at
sibcall_ok_for_function_p I don't have enough information to know what to
do, is there a way to decide this later on?

Thanks,

-- 
Paulo Matos


GCC Buildbot

2017-09-20 Thread Paulo Matos
Hi all,

I am internally running buildbot for a few projects, including one for a
simple gcc setup for a private port. After some discussions at the last
couple of Cauldrons with David Edelsohn, who told me this might be
interesting for the community in general, I contacted Sergio DJ
with a few questions about his buildbot configuration for GDB. I then
stripped down his configuration and transformed it into one for GCC,
with a few private additions, and ported it to the most recent buildbot
version, nine (which is numerically 0.9.x).

To make a long story short: https://gcc-buildbot.linki.tools
With brief documentation in: https://linkitools.github.io/gcc-buildbot
and configuration in: https://github.com/LinkiTools/gcc-buildbot

Now, this is still pretty raw but it:
* Configures a fedora x86_64 for C, C++ and ObjectiveC (./configure
--disable-multilib)
* Does an incremental build
* Runs all tests
* Grabs the test results and stores them as properties
* Creates a tarball of the sum and log files from the testsuite
directory and uploads them
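For the curious, the builder described above boils down to a short master.cfg fragment. This is only a sketch against the buildbot nine plugin API as I understand it; the worker name, password, job count, and builder names are placeholders, not the real configuration:

```python
# master.cfg sketch -- placeholder credentials and names, not the real setup.
from buildbot.plugins import schedulers, steps, util, worker

c = BuildmasterConfig = {}
c['workers'] = [worker.Worker('fedora-x86_64', 'worker-password')]

f = util.BuildFactory()
# Incremental checkout of trunk, matching the incremental builds above.
f.addStep(steps.SVN(repourl='svn://gcc.gnu.org/svn/gcc/trunk',
                    mode='incremental'))
f.addStep(steps.Configure(command=['./configure', '--disable-multilib']))
f.addStep(steps.Compile(command=['make', '-j8']))
# Keep going on test failures so the .sum/.log files are still produced.
f.addStep(steps.ShellCommand(name='check', command=['make', '-k', 'check'],
                             haltOnFailure=False))

c['builders'] = [util.BuilderConfig(name='gcc-fedora-x86_64',
                                    workernames=['fedora-x86_64'],
                                    factory=f)]
c['schedulers'] = [schedulers.SingleBranchScheduler(
    name='trunk', change_filter=util.ChangeFilter(branch='trunk'),
    treeStableTimer=300, builderNames=['gcc-fedora-x86_64'])]
```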

This mail's intention is to gauge the interest in having a buildbot for
GCC. Buildbot is a generic Python framework for building test
infrastructure, so the possibilities are pretty much endless: all
workflows are programmed in Python, and with buildbot nine the web
interface is also modifiable, if required.

If this is something of interest, then we will need to understand what
is required, among those:

- which machines we can use as workers: we certainly need more worker
(previously known as slave) machines to test GCC in different
archs/configurations;
- what kind of build configurations do we need and what they should do:
for example, do we want to build gcc standalone against system (the one
installed in the worker) binutils, glibc, etc or do we want a builder to
bootstrap everything?
- initially I was doing fresh builds and uploading a tarball (~450 MB)
for download. This took way too long. I have moved to incremental builds
with no tarball generation, but if required we could still generate
tarballs for forced and/or nightly builds. Ideas?
- We are currently running the whole testsuite for each incremental
build (~40mins). If we want a faster turnaround time, we could run just
an important subset of tests. Suggestions?
- would we like to run anything on the compiler besides the gcc
testsuite? I know Honza does, or used to do, lots of firefox builds to
test LTO. Shall we build those, for example? I noticed there's a testing
subpage which contains a few other libraries, should we build these?
(https://gcc.gnu.org/testing/)
- Currently we have a force build which allows people to force a build
on the worker. This requires no authentication and can certainly be
abused. We can add some sort of authentication, like for example, only
allow users with a gcc.gnu.org email? For now, it's not a problem.
-  We are building gcc for C, C++, ObjC (Which is the default). Shall we
add more languages to the mix?
- the gdb buildbot has a feature I have disabled (the TRY scheduler)
which allows people to submit patches to the buildbot, buildbot patches
the current svn version, builds and tests that. Would we want something
like this?
- buildbot can notify people if the build fails or if there's a test
regression. Notification can be sent to IRC and email for example. What
would people prefer to have as the settings for notifications?
- an example of a successful build is:
https://gcc-buildbot.linki.tools/#/builders/1/builds/38
This build shows several Changes because between the start and finish of
a build there were several new commits. Properties show among other
things test results. Responsible users show the people who were involved
in the changes for the build.

I am sure there are lots of other questions and issues. Please let me
know if you find this interesting and what you would like to see
implemented.

Kind regards,

-- 
Paulo Matos


Re: GCC Buildbot

2017-09-20 Thread Paulo Matos


On 20/09/17 17:07, Jeff Law wrote:
> I'd strongly recommend using one of the existing infrastructures.  I
> know several folks (myself included) are using Jenkins/Hudson.  There's
> little to be gained building a completely new infrastructure to manage a
> buildbot.
> 

As David pointed out in another email, I should have referenced the
buildbot homepage:
http://buildbot.net/

This is a framework with batteries included to build the kind of things
we want to have for testing. I certainly don't want to start a Jenkins
vs Buildbot discussion.

Kind regards,

-- 
Paulo Matos


Re: GCC Buildbot

2017-09-21 Thread Paulo Matos


On 20/09/17 19:14, Joseph Myers wrote:
> On Wed, 20 Sep 2017, Paulo Matos wrote:
> 
>> - buildbot can notify people if the build fails or if there's a test
>> regression. Notification can be sent to IRC and email for example. What
>> would people prefer to have as the settings for notifications?
> 
> It's very useful if someone reviews regressions manually and files bugs / 
> notifies authors of patches that seem likely to have caused them / fixes 
> them if straightforward.  However, that can take a lot of time (especially 
> if you're building a large number of configurations, and ideally there 
> would be at least compilation testing for every target architecture 
> supported by GCC if enough machine resources are available).  (I do that 
> for my glibc bot, which does compilation-only testing for many 
> configurations, covering all supported glibc ABIs except Hurd - the 
> summaries of results go to libc-testresults, but the detailed logs that 
> show e.g. which test failed or failed to build aren't public; each build 
> cycle for the mainline bot produces about 3 GB of logs, which get deleted 
> after the following build cycle.)
> 

I totally agree: things will only improve if people get involved in
checking for regressions and keeping an eye on what's going on. The
framework can help a lot here by notifying the right people and the
mailing list when something breaks or regresses, but once the
notification is sent someone certainly needs to pick it up.

I believe that once the framework is there, if it is reliable and user
friendly and notifies people only when things go wrong instead of
forcing them to check the UI every day, it will encourage people to take
notice and do something about breakages.

At the moment, there are no issues with regards to logs sizes but we are
starting small with a single worker. Once we have more we'll have to
revisit this issue.

-- 
Paulo Matos


Re: GCC Buildbot

2017-09-21 Thread Paulo Matos


On 21/09/17 01:01, Segher Boessenkool wrote:
> Hi!
> 
> On Wed, Sep 20, 2017 at 05:01:55PM +0200, Paulo Matos wrote:
>> This mail's intention is to gauge the interest of having a buildbot for
>> GCC.
> 
> +1.  Or no, +100.
> 
>> - which machines we can use as workers: we certainly need more worker
>> (previously known as slave) machines to test GCC in different
>> archs/configurations;
> 
> I think this would use too much resources (essentially the full machines)
> for the GCC Compile Farm.  If you can dial it down so it only uses a
> small portion of the machines, we can set up slaves there, at least on
> the more unusual architectures.  But then it may become too slow to be
> useful.
> 

We can certainly decide what gets built on compile farm workers and what
doesn't. We can also decide what type of build we want: a full
bootstrap, all languages, etc. I still have to look at that. I'm not
sure how to access the compile farm or who has access to it.

>> - what kind of build configurations do we need and what they should do:
>> for example, do we want to build gcc standalone against system (the one
>> installed in the worker) binutils, glibc, etc or do we want a builder to
>> bootstrap everything?
> 
> Bootstrap is painfully slow, but it catches many more problems.
> 

Could possibly do that on a schedule instead.

>> -  We are building gcc for C, C++, ObjC (Which is the default). Shall we
>> add more languages to the mix?
> 
> I'd add Fortran, it tends to find problems (possibly because it has much
> bigger tests than most C/C++ tests are).  But all extra testing uses
> disproportionally more resources...  Find a sweet spot :-)  You probably
> should start with as little as possible, or perhaps do bigger configs
> every tenth build, or something like that.
> 

Sounds like a good idea.

>> - the gdb buildbot has a feature I have disabled (the TRY scheduler)
>> which allows people to submit patches to the buildbot, buildbot patches
>> the current svn version, builds and tests that. Would we want something
>> like this?
> 
> This is very useful, but should be mostly separate...  There are of course
> the security considerations, but also this really needs clean builds every
> time, and perhaps even bootstraps.
> 

There could be two types of try schedulers, one for full bootstraps and
one just for GCC. Security-wise, we could always containerize.

>> - buildbot can notify people if the build fails or if there's a test
>> regression. Notification can be sent to IRC and email for example. What
>> would people prefer to have as the settings for notifications?
> 
> Just try it!  IRC is most useful I think, at least for now.  But try
> whatever seems useful, if there are too many complaints you can always
> turn it off again ;-)
>
> Thank you for working on this.
>

Thanks for all the comments. I will add the initial notifications into
IRC and see how people react.

-- 
Paulo Matos


Re: GCC Buildbot

2017-09-21 Thread Paulo Matos


On 21/09/17 02:27, Joseph Myers wrote:
> On Wed, 20 Sep 2017, Segher Boessenkool wrote:
> 
>>> - buildbot can notify people if the build fails or if there's a test
>>> regression. Notification can be sent to IRC and email for example. What
>>> would people prefer to have as the settings for notifications?
>>
>> Just try it!  IRC is most useful I think, at least for now.  But try
>> whatever seems useful, if there are too many complaints you can always
>> turn it off again ;-)
> 
> We have the gcc-regression list for automatic regression notifications 
> (but as per my previous message, regression notifications are much more 
> useful if someone actually does the analysis, promptly, to identify the 
> cause and possibly a fix).
> 

Yes, the gcc-regression list. Will add a notifier to email the list.

> My glibc bots only detect testsuite regressions that change the exit 
> status of "make check" from successful to unsuccessful (and regressions 
> that break any part of the build, etc.).  That works for glibc, where the 
> vast bulk of configurations have clean compilation-only testsuite results.  
> It won't work for GCC - you need to detect changes in the results of 
> individual tests (or new tests which start out as FAIL and may not 
> strictly be regressions but should still be fixed).  Ideally the expected 
> baseline *would* be zero FAILs, but we're a long way off that.
> 

Yes, with GCC it is slightly more complex, but it should be possible to
calculate regressions even with a non-zero baseline of FAILs.
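One way to do this, sketched below: diff two DejaGnu .sum files and flag only tests that went from PASS to FAIL/UNRESOLVED, so pre-existing FAILs in the baseline don't drown the signal. This is a simplification; the real .sum format has more result states and can contain duplicate test names:

```python
def parse_sum(text):
    """Map test names in a DejaGnu .sum file (lines like
    'PASS: gcc.dg/foo.c') to their result codes."""
    results = {}
    for line in text.splitlines():
        for status in ('PASS', 'FAIL', 'XFAIL', 'XPASS', 'UNRESOLVED'):
            if line.startswith(status + ': '):
                results[line[len(status) + 2:]] = status
                break
    return results

def regressions(old_text, new_text):
    """Tests that went from PASS to FAIL/UNRESOLVED: true regressions,
    even when the baseline already contains unrelated FAILs."""
    old, new = parse_sum(old_text), parse_sum(new_text)
    return sorted(t for t, s in new.items()
                  if s in ('FAIL', 'UNRESOLVED') and old.get(t) == 'PASS')

baseline = "PASS: gcc.dg/a.c\nFAIL: gcc.dg/b.c\nPASS: gcc.dg/c.c\n"
current  = "FAIL: gcc.dg/a.c\nFAIL: gcc.dg/b.c\nPASS: gcc.dg/c.c\n"
print(regressions(baseline, current))  # prints: ['gcc.dg/a.c']
```

The pre-existing FAIL of gcc.dg/b.c is ignored; only the new failure of gcc.dg/a.c is reported.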

Thanks for your comments,

-- 
Paulo Matos


Re: GCC Buildbot

2017-09-21 Thread Paulo Matos


On 21/09/17 14:11, Mark Wielaard wrote:
> Hi,
> 
> First let me say I am also a fan of buildbot. I use it for a couple of
> projects and it is really flexible, low on resources, easy to add new
> builders/workers and easily extensible if you like python.
> 
> On Thu, 2017-09-21 at 07:18 +0200, Markus Trippelsdorf wrote:
>> And it has the basic problem of all automatic testing: that in the
>> long run everyone simply ignores it.
>> The same thing would happen with the proposed new buildbot. It would
>> use still more resources on the already overused machines without
>> producing useful results.
> 
> But this is a real concern and will happen if you are too eager testing
> all the things all the time. So I would recommend to start as small as
> possible. Pick a target that builds as fast as possible. Once you go
> over 30 minutes of build/test time it really is already too long. Both
> on machine resources and human attention span. And pick a subset of the
> testsuite that should be zero-FAIL. Only then will people really take
> notice when the build turns from green-to-red. Otherwise people will
> always have an excuse "well, those tests aren't really reliable, it
> could be something else". And then only once you have a stable
> small/fast builder that reliably warns committers that their commit
> broke something extend it to new targets/setups/tests as long as you
> can keep the false warnings as close to zero as possible.
> 

Thanks. This is an interesting idea; however, it might not be easy to
choose a subset of tests for each compiled language that PASS, run
quickly, and are representative. It would be interesting to hear from
some of the main developers which tests would be best to run.

-- 
Paulo Matos


Re: GCC Buildbot

2017-09-21 Thread Paulo Matos


On 21/09/17 16:41, Martin Sebor wrote:
> 
> The regression and the testresults lists are useful but not nearly
> as much as they could be.  For one, the presentation isn't user
> friendly (a matrix view would be much more informative).  But even
> beyond it, rather than using the pull model (people have to make
> an effort to search it for results of their changes or the changes
> of others to find regressions), the regression testing setup could
> be improved by adopting the push model and automatically emailing
> authors of changes that caused regressions (or opening bugs, or
> whatever else might get our attention).
> 

This is certainly one of the notifications that I think needs to be
implemented. If a patch breaks the build or testing, the responsible
parties need to be informed, i.e. committers, authors, and possibly the
list as well.

Thanks,

-- 
Paulo Matos


Re: GCC Buildbot

2017-09-21 Thread Paulo Matos
y are
> not false positive is very time consuming.
> 
> Having a buggy bisect framework can also lead to embarrassing
> situations, like when I blamed a C++ front-end patch for a regression
> in fortran ;-)
> 
> Most of the time, I consider it's more efficient for the project if I warn
> the author of the patch that introduced the regression than if I try to
> fix it myself. Except for the most trivial ones, it resulted several times
> in duplicated effort and waste of time. But of course, there are many
> more efficient gcc developers than me here :)
> 

I think that's the point. As soon as a regression or build failure is
noticed, the buildbot should notify the right people of what happened,
and those need to take notice and fix it or revert their patch. If
someone submits a patch, is notified that it breaks GCC, and does
nothing, then we have a bigger problem.

> Regarding the cpu power, maybe we could have free slots in
> some cloud? (travis? amazon?, )
> 

Any suggestions on how to get these free slots? :)

Thanks for all the great suggestions and tips on your email.
-- 
Paulo Matos


Re: GCC Buildbot

2017-09-22 Thread Paulo Matos


On 22/09/17 01:23, Joseph Myers wrote:
> On Thu, 21 Sep 2017, Paulo Matos wrote:
> 
>> Interesting suggestion. I haven't had the opportunity to look at the
>> compile farm. However, it could be interesting to have a mix of workers:
>> native compile farm ones and some x86_64 doing cross compilation and
>> testing.
> 
> Note that even without a simulator (but with target libc), you can test 
> just the compilation parts of the testsuite using a board file with a 
> dummy _load implementation.
> 

I was not aware of that. I will keep that in mind once I try to setup a
cross-compilation worker.

I assume you have done this before. Do you have any scripts for
cross-compiling you can share?

Thanks,

-- 
Paulo Matos


Re: GCC Buildbot

2017-09-25 Thread Paulo Matos


On 25/09/17 11:52, Martin Liška wrote:
> Hi Paulo.
> 
> Thank you for working on that! To be honest, I've been running local buildbot 
> on
> my desktop machine which does builds your buildbot instance can do (please 
> see:
> https://pasteboard.co/GLZ0vLMu.png):
> 

Hi Martin,

Thanks for sharing your builders. Looks like you've got a good setup going.

I have only done the very basics, since my interest was in
understanding whether people would find it useful. I didn't want to
waste time building something nobody has an interest in using.

It seems there is some interest, so I am gathering requirements in the
GitHub issues of the project. One very important feature is
visualization of results, so I am integrating support for gathering data
in InfluxDB for display with Grafana. I do not work full time on this,
so it's going slowly, but I should have a dashboard to show in the next
couple of weeks.

> - doing time to time (once a week) sanitizer builds: ASAN, UBSAN and run 
> test-suite
> - doing profiled bootstrap, LTO bootstrap (yes, it has been broken for quite 
> some time) and LTO profiled bootstrap
> - building project with --enable-gather-detailed-mem-stats
> - doing coverage --enable-coverage, running test-suite and uploading to a 
> location: https://gcc.opensuse.org/gcc-lcov/
> - similar for Doxygen: https://gcc.opensuse.org/gcc-doxygen/
> - periodic building of some projects: Inkscape, GIMP, linux-kernel, Firefox - 
> I do it with -O2, -O2+LTO, -O3, ...
>   Would be definitely fine, but it takes some care to maintain compatible 
> versions of a project and GCC compiler.
>   Plus handling of dependencies of external libraries can be irritating.
> - cross build for primary architectures
>
> That's list of what I have and can be inspiration for you. I can help if you 
> want and we can find a reasonable resources
> where this can be run.
>

Thanks. That's great. As you can see from #9 in
https://github.com/LinkiTools/gcc-buildbot/issues/9, most of these are
things I hope to be able to run on the Compile Farm unless, of course,
people host a worker on their own hardware. Regarding your offer of
resources: are you offering to merge your config, or hardware? Either
would be great; however, I expect your config would have to be ported to
Buildbot nine before merging.

> Apart from that, I fully agree with octoploid that 
> http://toolchain.lug-owl.de/buildbot/ is duplicated effort which is running
> on GCC compile farm machines and uses a shell scripts to utilize. I would 
> prefer to integrate it to Buildbot and utilize same
> GCC Farm machines for native builds.
> 

Octoploid? Is that a typo?
I discussed that at the Cauldron with David and was surprised to learn
that the buildbot you reference is actually not a Buildbot
implementation using the Python framework but handwritten software. So,
in that respect, it is not duplicated effort. It is duplicated effort
if, on the other hand, we try to test the same things. I will try to
understand how to merge efforts with that buildbot.

> Another inspiration (for builds) can come from what LLVM folks do:
> http://lab.llvm.org:8011/builders
> 

Thanks for the pointer. I tried at one point to read their
configuration, but found the one from GDB simpler and used it as the
basis for what I have. I will look at their builders nonetheless to
understand what they build and how long they take.

> Anyway, it's good starting point what you did and I'm looking forward to more 
> common use of the tool.
> Martin
> 

Thanks,
-- 
Paulo Matos


Re: GCC Buildbot

2017-09-25 Thread Paulo Matos


On 25/09/17 13:14, Jonathan Wakely wrote:
> On 25 September 2017 at 11:13, Paulo Matos wrote:
>>> Apart from that, I fully agree with octoploid that 
>>> http://toolchain.lug-owl.de/buildbot/ is duplicated effort which is running
>>> on GCC compile farm machines and uses a shell scripts to utilize. I would 
>>> prefer to integrate it to Buildbot and utilize same
>>> GCC Farm machines for native builds.
>>>
>>
>> Octoploid? Is that a typo?
> 
> No, it's Markus Trippelsdorf's username.
> 

Ah, thanks for the clarification.

-- 
Paulo Matos


Re: GCC Buildbot

2017-09-25 Thread Paulo Matos


On 25/09/17 13:36, Martin Liška wrote:
> 
> Would be great, what exactly do you want to visualize? For me, even having 
> green/red spots
> works fine in order to quickly identify what builds are wrong.
> 

There are several options, and mostly it depends on what everyone would
like to see, but I am thinking of a dashboard with green/red spots as
you mention, depending not on the mere existence of failures but on the
existence of a regression at a certain revision. A historical graph of
results and GCC build times might be interesting as well.

For benchmarks like Qt, blitz (as mentioned in the gcc testing page), we
can plot the build time of the benchmark and resulting size when
compiling for size.

Again, I expect that once there's something visible and people are keen
to use it, they'll ask for something specific. However, once the
infrastructure is in place, it shouldn't be too hard to add specific
visualizations.

> 
> Hopefully both. I'm attaching my config file (probably more for inspiration 
> that a real use).
> I'll ask my manager whether we can find a machine that can run more complex 
> tests. I'll inform you.
> 

Thanks for the configuration file. I will take a look. Will eagerly wait
for news on the hardware request.

> 
> Yes, duplication in way that it is (will be) same things. I'm adding author 
> of the tool,
> hopefully we can unify the effort (and resources of course).
> 

Great.

-- 
Paulo Matos


Re: GCC Buildbot

2017-09-26 Thread Paulo Matos


On 26/09/17 10:43, Martin Liška wrote:
> On 09/25/2017 02:49 PM, Paulo Matos wrote:
>> For benchmarks like Qt, blitz (as mentioned in the gcc testing page), we
>> can plot the build time of the benchmark and resulting size when
>> compiling for size.
>>
> 
> Please consider using LNT:
> http://llvm.org/docs/lnt/
> 
> Usage:
> http://lnt.llvm.org/
> 
> I've been investigating the tools and I know that ARM people use the tool:
> https://gcc.gnu.org/wiki/cauldron2017#ARM-perf
> 

Good suggestion. I was actually at the presentation. The reason I was
going with InfluxDB+Grafana is that I know the process and already use
it internally --- LNT was unknown to me, but you're right, it might be
better in the long term. I will look at the documentation.

Thanks.

-- 
Paulo Matos


GCC Buildbot Update - Definition of regression

2017-10-10 Thread Paulo Matos
Hi all,

It's almost 3 weeks since I last posted on GCC Buildbot. Here's an update:

* 3 x86_64 workers from CF are now installed;
* There's one scheduler for trunk doing fresh builds for every Daily bump;
* One scheduler doing incremental builds for each active branch;
* An IRC bot which is currently silent;

The next steps are:
* Enable LNT (I have installed this but have yet to connect to buildbot)
for tracking performance benchmarks over time -- it should come up as
http://gcc-lnt.linki.tools in the near future.
* Enable regression analysis --- This is fundamental. I understand that
without this the buildbot is pretty useless so it has highest priority.
However, I would like some agreement as to what in GCC should be
considered a regression. Each test in DejaGnu can have one of several
statuses: FAIL, PASS, UNSUPPORTED, UNTESTED, XPASS, KPASS, XFAIL, KFAIL,
UNRESOLVED

Since GCC doesn't have a 'clean bill' of test results, we need to
analyse the .sum files for the current run and compare them with those
of the last run of the same branch. My working definition is that if,
for any test, there is a transition like the following, then a
regression exists and the test run should be marked as a failure.

ANY -> no test              ; test disappears
ANY (except XPASS) -> XPASS ; test goes from any status other than XPASS to XPASS
ANY (except KPASS) -> KPASS ; test goes from any status other than KPASS to KPASS
new test -> FAIL            ; new test starts as FAIL
PASS -> ANY                 ; test moves away from PASS

This is a suggestion. I am keen to have corrections from people who use
this on a daily basis and/or have a better understanding of each status.
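
A minimal sketch of how transition rules like these could be encoded (the
function and its interface are my own illustration, not code from any
existing tool):

```python
# Classify an (old_status, new_status) transition using the proposed
# rules.  A status of None means the test was absent from that run.

def is_regression(old, new):
    """Return True if the transition counts as a regression."""
    if new is None:                       # ANY -> no test: test disappeared
        return old is not None
    if old is None:                       # a new test appearing...
        return new == "FAIL"              # ...is only flagged if it starts as FAIL
    if new == "XPASS" and old != "XPASS":
        return True                       # anything other than XPASS -> XPASS
    if new == "KPASS" and old != "KPASS":
        return True                       # anything other than KPASS -> KPASS
    if old == "PASS" and new != "PASS":
        return True                       # PASS -> any other status
    return False
```

Running this over the per-test pairs from two consecutive .sum files would
mark the run as failed as soon as any pair returns True.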

As soon as we reach a consensus, I will deploy this analysis and enable
IRC bot to notify on the #gcc channel the results of the tests.

-- 
Paulo Matos


Re: GCC Buildbot Update - Definition of regression

2017-10-10 Thread Paulo Matos


On 11/10/17 06:17, Markus Trippelsdorf wrote:
> On 2017.10.10 at 21:45 +0200, Paulo Matos wrote:
>> Hi all,
>>
>> It's almost 3 weeks since I last posted on GCC Buildbot. Here's an update:
>>
>> * 3 x86_64 workers from CF are now installed;
>> * There's one scheduler for trunk doing fresh builds for every Daily bump;
>> * One scheduler doing incremental builds for each active branch;
>> * An IRC bot which is currently silent;
> 
> Using -j8 for the bot on a 8/16 (core/thread) machine like gcc67 is not
> acceptable, because it will render it unusable for everybody else.

I was going to correct you on that given what I read in
https://gcc.gnu.org/wiki/CompileFarm#Usage

but it was my mistake. I assumed that for an N-thread machine, I could
use N/2 processes but the guide explicitly says N-core, not N-thread.
Therefore I should be using 4 processes for gcc67 (or 0 given what follows).

I will fix also the number of processes used by the other workers.

> Also gcc67 has a buggy Ryzen CPU that causes random gcc crashes. Not the
> best setup for a regression tester...
> 

Is that documented anywhere? I will remove this worker.

Thanks,

-- 
Paulo Matos


Re: GCC Buildbot Update - Definition of regression

2017-10-10 Thread Paulo Matos


On 10/10/17 23:25, Joseph Myers wrote:
> On Tue, 10 Oct 2017, Paulo Matos wrote:
> 
>> new test -> FAIL; New test starts as fail
> 
> No, that's not a regression, but you might want to treat it as one (in the 
> sense that it's a regression at the higher level of "testsuite run should 
> have no unexpected failures", even if the test in question would have 
> failed all along if added earlier and so the underlying compiler bug, if 
> any, is not a regression).  It should have human attention to classify it 
> and either fix the test or XFAIL it (with issue filed in Bugzilla if a 
> bug), but it's not a regression.  (Exception: where a test failing results 
> in its name changing, e.g. through adding "(internal compiler error)".)
> 

When someone adds a new test to the testsuite, isn't it supposed not to
FAIL? If it does FAIL, shouldn't this be considered a regression?

Now, the danger is that, since regressions are comparisons with the
previous run, something like this would happen:

run1:
...
FAIL: foo.c ; new test
...

run1 fails because new test entered as a FAIL

run2:
...
FAIL: foo.c
...

run2 succeeds because there are no changes.

For this reason, all of these issues need to be taken care of straight
away or they become part of the 'normal' state and no further failures
are reported... unless, of course, a more complex regression analysis is
implemented.

Also, when I say run1 fails or succeeds, that is just the term I use
for displaying red/green in the buildbot interface for a given build,
not necessarily what I expect the process to do.

> 
> My suggestion is:
> 
> PASS -> FAIL is an unambiguous regression.
> 
> Anything else -> FAIL and new FAILing tests aren't regressions at the 
> individual test level, but may be treated as such at the whole testsuite 
> level.
> 
> Any transition where the destination result is not FAIL is not a 
> regression.
> 
> ERRORs in the .sum or .log files should be watched out for as well, 
> however, as sometimes they may indicate broken Tcl syntax in the 
> testsuite, which may cause many tests not to be run.
> 
> Note that the test names that come after PASS:, FAIL: etc. aren't unique 
> between different .sum files, so you need to associate tests with a tuple 
> (.sum file, test name) (and even then, sometimes multiple tests in a .sum 
> file have the same name, but that's a testsuite bug).  If you're using 
> --target_board options that run tests for more than one multilib in the 
> same testsuite run, add the multilib to that tuple as well.
> 

Thanks for all the comments. Sounds sensible.
By not being unique, you mean between languages?
I assume that two gcc.sum from different builds will always refer to the
same test/configuration when referring to (for example):
PASS: gcc.c-torture/compile/20000105-1.c   -O1  (test for excess errors)

In this case, I assume that "gcc.c-torture/compile/20000105-1.c   -O1
(test for excess errors)" will always be referring to the same thing.
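
To illustrate the tuple keying Joseph describes, and to cope with
duplicate names inside one .sum file, a rough sketch (names are mine, not
from any existing tool) could count statuses per (sum file, test name)
pair:

```python
import re
from collections import Counter

# Result lines look like "PASS: gcc.dg/foo.c  -O1  (test for excess errors)"
RESULT_RE = re.compile(r"^(PASS|FAIL|XPASS|XFAIL|KPASS|KFAIL|UNRESOLVED|"
                       r"UNSUPPORTED|UNTESTED): (.*)$")

def index_sum(sum_file, lines):
    """Map (sum_file, test_name) -> Counter of statuses.

    Using a Counter copes with the same name occurring several times
    in one .sum file, which is a testsuite bug but does happen.
    """
    index = {}
    for line in lines:
        m = RESULT_RE.match(line.strip())
        if m:
            status, name = m.groups()
            index.setdefault((sum_file, name), Counter())[status] += 1
    return index
```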

-- 
Paulo Matos


Re: GCC Buildbot Update - Definition of regression

2017-10-11 Thread Paulo Matos


On 11/10/17 10:35, Christophe Lyon wrote:
> 
> FWIW, we consider regressions:
> * any->FAIL because we don't want such a regression at the whole testsuite 
> level
> * any->UNRESOLVED for the same reason
> * {PASS,UNSUPPORTED,UNTESTED,UNRESOLVED}-> XPASS
> * new XPASS
> * XFAIL disappears (may mean that a testcase was removed, worth a manual 
> check)
> * ERRORS
> 

That's certainly stricter than what was proposed by Joseph. I will run
a few tests on historical data to see what I get using both approaches.

> 
> 
>>> ERRORs in the .sum or .log files should be watched out for as well,
>>> however, as sometimes they may indicate broken Tcl syntax in the
>>> testsuite, which may cause many tests not to be run.
>>>
>>> Note that the test names that come after PASS:, FAIL: etc. aren't unique
>>> between different .sum files, so you need to associate tests with a tuple
>>> (.sum file, test name) (and even then, sometimes multiple tests in a .sum
>>> file have the same name, but that's a testsuite bug).  If you're using
>>> --target_board options that run tests for more than one multilib in the
>>> same testsuite run, add the multilib to that tuple as well.
>>>
>>
>> Thanks for all the comments. Sounds sensible.
>> By not being unique, you mean between languages?
> Yes, but not only as Joseph mentioned above.
> 
> You have the obvious example of c-c++-common/*san tests, which are
> common to gcc and g++.
> 
>> I assume that two gcc.sum from different builds will always refer to the
>> same test/configuration when referring to (for example):
>> PASS: gcc.c-torture/compile/20000105-1.c   -O1  (test for excess errors)
>>
>> In this case, I assume that "gcc.c-torture/compile/20000105-1.c   -O1
>> (test for excess errors)" will always be referring to the same thing.
>>
> In gcc.sum, I can see 4 occurrences of
> PASS: gcc.dg/Werror-13.c  (test for errors, line )
> 
> Actually, there are quite a few others like that
> 

That actually surprised me.

I also see:
PASS: gcc.dg/Werror-13.c  (test for errors, line )
PASS: gcc.dg/Werror-13.c  (test for errors, line )
PASS: gcc.dg/Werror-13.c  (test for errors, line )
PASS: gcc.dg/Werror-13.c  (test for errors, line )

among others like it. Looks like a line number is missing?

In any case, it feels like the code I have to track this down needs to
be improved.

-- 
Paulo Matos


Re: GCC Buildbot Update - Definition of regression

2017-10-11 Thread Paulo Matos


On 11/10/17 11:15, Christophe Lyon wrote:
> 
> You can have a look at
> https://git.linaro.org/toolchain/gcc-compare-results.git/
> where compare_tests is a patched version of the contrib/ script,
> it calls the main perl script (which is not the prettiest thing :-)
> 

Thanks, that's useful. I will take a look.

-- 
Paulo Matos


Re: GCC CI on Power

2017-11-07 Thread Paulo Matos


On 07/11/17 16:54, David Malcolm wrote:
> On Mon, 2017-11-06 at 18:46 -0200, Nathália Harumi wrote:
>> Hi,
>> My name is Nathália Harumi, I'm a student of The University of
>> Campinas.
>>
>> I'm working on validation projects on OpenPower lab (which is a
>> partnership
>> between IBM and The University of Campinas, in Brazil) and we'd like
>> to
>> find a way to contribute the GCC community.
>>
>> Now a days, we use a Buildbot platform for internal validation
>> projects on
>> Power with several ppc architectures and flavours as workers. We're
>> also
>> working with glibc community to improve it buildbot and to provide
>> workers
>> for builds on ppc.
>>
>> So, we'd like to know which platform you use for CI and how we can
>> contribute with it.
>>
>> Thank you,
>> Nathália.
> 
> Hi Nathália
> 
> I don't think there's anything "official" yet, but Paulo Matos [CCed]
> has been experimenting with running buildbot for gcc:
> 
>   https://gcc.gnu.org/ml/gcc/2017-10/msg00068.html
> 
> (albeit it with some significant unsolved problems, as I understand it)
> 
> Maybe you and Paulo could set things up so that the ppc workers you
> have could contribute to Paulo's buildbot instance?
> 

Dave, thanks for forwarding this to me.

Hello Nathalia,

Thanks for reaching out.
I am running an experimental buildbot for GCC (currently possibly down
as it's moving servers, but downtime should be short).

There are a few issues to fix, which I will be dealing with in the next
couple of weeks. A few examples of important issues to fix are:

* dealing with regressions on the gcc testsuite;
* trimming down the testsuite to get a fast run which takes no longer
than a coffee break;
* ensuring that notifications are processed properly.

Would you be able to share your configuration or solution to these issues?

Kind regards,

-- 
Paulo Matos


GCC Buildbot Update

2017-12-14 Thread Paulo Matos
ts and then try to get the gcc test results there as well.

*TODO:*

So on my todo list for the next iteration I have:

- analysis of out-of-memory issues in CF for Fast builders;
- analysis of aarch64 build failure;
- merge regression testing verification branch into master and deploy
into production;
  - this should then trigger the irc bot reporter for regressions;
- open up LNT for benchmarking and add benchmarking job for x64_64 using
csibe (as an initial proof of concept);

If you have any wishes, questions, want to offer some fast machines to
have workers on or if you know what's wrong with the Fast builder or the
aarch64 machines, please let me know.

I hope to send another update in about a month's time.

Kind regards,
-- 
Paulo Matos


Re: GCC Buildbot Update

2017-12-15 Thread Paulo Matos


On 14/12/17 12:39, David Malcolm wrote:
> 
> Looking at some of the red blobs in e.g. the grid view there seem to be
> a few failures in the initial "update gcc trunk repo" step of the form:
> 
> svn: Working copy '.' locked
> svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for
> details)
> 

Yes, that's a big annoyance and one reason I have thought about moving
to the git mirror; however, that would probably bring other issues, so I
am holding off. I need to add a reporter so that if it fails I am
notified by email and mobile phone.

This happens when there's a timeout from the server _during_ a
checkout/update (the svn repo unfortunately times out far too often). I
thought about doing an svn cleanup before each checkout but read that
it's not good practice. If you have any suggestions on this, please let
me know.
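
One possible mitigation, sketched below, is to run 'svn cleanup' only
after a failed update rather than unconditionally before every checkout
(the helper and the injectable runner are my own illustration, not
buildbot API):

```python
import subprocess

def svn_update(workdir, attempts=3, run=subprocess.call):
    """Retry 'svn update', running 'svn cleanup' only after a failure.

    'run' is injectable for testing and must return a process exit code.
    """
    for _ in range(attempts):
        if run(["svn", "update"], cwd=workdir) == 0:
            return True
        # A timeout mid-update can leave the working copy locked;
        # clean up before retrying instead of before every run.
        run(["svn", "cleanup"], cwd=workdir)
    return False
```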

> https://gcc-lnt.linki.tools/#/builders/3/builds/388/steps/0/logs/stdio
> 

Apologies, https://gcc-lnt.linki.tools is currently incorrectly
forwarding you to https://gcc-buildbot.linki.tools. I meant to have it
return an error until I open that up.

> Is there a bug-tracking location for the buildbot?
> Presumably:
>   https://github.com/LinkiTools/gcc-buildbot/issues
> ?
> 

That's correct.

> I actually found a serious bug in jamais-vu yesterday - it got confused
> by  multiple .sum lines for the same source line e.g. from multiple
> "dg-" directives that all specify a particular line).  For example,
> when testing one of my patches, of the 3 tests reporting as
>   "c-c++-common/pr83059.c  -std=c++11  (test for warnings, line 7)"
> one of the 3 PASS results became a FAIL.  jv correctly reported that
> new FAILs had occurred, but wouldn't identify them, and mistakenly
> reported that new PASSes has occurred also.
> 
> I've fixed that now; to do so I've done some refactoring and added a
> testsuite.
>

Perfect, thank you very much for this work.

> It looks like you're capturing the textual output from "jv compare" and
> using the exit code.  Would you prefer to import "jv" as a python
> module and use some kind of API?  Or a different output format?
> 

Well, I am using a fork of it which I converted to Python 3. Would you
be open to converting yours to Python 3? The reason I ask is that all
the other Python software I have, and the buildbot itself, use Python 3.

I would also prefer some JSON output format, but when I looked at it the
software was just printing to stdout, and I didn't want to spend too
much time implementing that, so parsing the output seemed easier.
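
The stdout-plus-exit-code approach can be wrapped in a few lines; this is
only a sketch, and the assumption that a non-zero exit status from "jv
compare" signals regressions is mine, not documented behaviour:

```python
import subprocess

def run_jv_compare(old_sum, new_sum, jv="jv"):
    """Run 'jv compare' and return (regressed, stdout_text)."""
    proc = subprocess.run([jv, "compare", old_sum, new_sum],
                          capture_output=True, text=True)
    # Assumption: a non-zero exit status means regressions were found.
    return proc.returncode != 0, proc.stdout
```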

> If you file pull request(s) for the changes you've made in your copy of
> jamais-vu, I can take at look at merging them.
>

Happy to do so...
Will merge your changes into my fork first then.

Kind regards,
-- 
Paulo Matos


Re: GCC Buildbot Update

2017-12-15 Thread Paulo Matos


On 14/12/17 21:32, Christophe Lyon wrote:
> Great, I thought the CF machines were reserved for developpers.
> Good news you could add builders on them.
> 

Oh. I have seen similar things happening on CF machines so I thought it
was not a problem. I have never specifically asked for permission.

>> pmatos@gcc115:~/gcc-8-20171203_BUILD$ as -march=armv8.1-a
>> Assembler messages:
>> Error: unknown architecture `armv8.1-a'
>>
>> Error: unrecognized option -march=armv8.1-a
>>
>> However, if I run the a compiler build manually with just:
>>
>> $ configure --disable-multilib
>> $ nice -n 19 make -j4 all
>>
>> This compiles just fine. So I am at the moment attempting to investigate
>> what might cause the difference between what buildbot does and what I do
>> through ssh.
>>
> I suspect you are hitting a bug introduced recently, and fixed by:
> https://gcc.gnu.org/ml/gcc-patches/2017-12/msg00434.html
> 

Wow, that's really useful. Thanks for letting me know.

-- 
Paulo Matos


Re: GCC Buildbot Update

2017-12-15 Thread Paulo Matos


On 15/12/17 08:42, Markus Trippelsdorf wrote:
> 
> I don't think this is good news at all. 
> 

As I pointed out in a reply to Chris, I haven't sought permission, but
I am pretty sure other projects run something similar on the CF
machines.

The downside is that if we can't use the CF, I have no extra machines to
run the buildbot on.

> Once a buildbot runs on a CF machine it immediately becomes impossible
> to do any meaningful measurement on that machine. That is mainly because
> of the random I/O (untar, rm -fr, etc.) of the bot. As a result variance
> goes to the roof and all measurements drown in noise.
> 
> So it would be good if there was a strict separation of machines used
> for bots and machines used by humans. In other words bots should only
> run on dedicated machines.
> 

I understand your concern, though. Do you know with whom this issue
could be raised? The FSF?

-- 
Paulo Matos


Re: GCC Buildbot Update

2017-12-15 Thread Paulo Matos


On 15/12/17 10:21, Christophe Lyon wrote:
> And the patch was committed last night (r255659), so maybe your builds now 
> work?
> 

Forgot to mention that. Yes, it built!
https://gcc-buildbot.linki.tools/#/builders/5

-- 
Paulo Matos


Re: GCC Buildbot Update

2017-12-16 Thread Paulo Matos


On 15/12/17 15:29, David Malcolm wrote:
> On Fri, 2017-12-15 at 10:16 +0100, Paulo Matos wrote:
>>
>> On 14/12/17 12:39, David Malcolm wrote:
> 
> [...]
> 
>>> It looks like you're capturing the textual output from "jv compare"
>>> and
>>> using the exit code.  Would you prefer to import "jv" as a python
>>> module and use some kind of API?  Or a different output format?
>>>
>>
>> Well, I am using a fork of it which I converted to Python3. Would you
>> be
>> open to convert yours to Python3? The reason I am doing this is
>> because
>> all other Python software I have and the buildbot use Python3.
> 
> Done.
> 
> I found and fixed some more bugs, also (introduced during my
> refactoring, sigh...)
> 

That's great. Thank you very much for this work.

>> I would also prefer to have some json format or something but when I
>> looked at it, the software was just printing to stdout and I didn't
>> want
>> to spend too much time implementing it, so I thought parsing the
>> output
>> was just easier.
> 
> I can add JSON output (or whatever), but I need to get back to gcc 8
> work, so if the stdout output is good enough for now, let's defer
> output changes.
> 

Agreed; for now I can use what I already have to read jv's output. I
think I can now delete my fork and just use upstream jv as a submodule.

-- 
Paulo Matos


Re: GCC Buildbot Update

2017-12-16 Thread Paulo Matos


On 15/12/17 18:05, Segher Boessenkool wrote:
> All the cfarm machines are shared resources.  Benchmarking on them will
> not work no matter what.  And being a shared resource means all users
> have to share and be mindful of others.
> 

Yes, we'll definitely need better machines for benchmarking. Something I
haven't thought of yet.

>> So it would be good if there was a strict separation of machines used
>> for bots and machines used by humans. In other words bots should only
>> run on dedicated machines.
> 
> The aarch64 builds should probably not use all of gcc113..gcc116.
>
> We do not have enough resources to dedicate machines to bots.
>

I have disabled gcc116.

Thanks,
-- 
Paulo Matos


Re: GCC Buildbot Update

2017-12-20 Thread Paulo Matos


On 15/12/17 10:21, Christophe Lyon wrote:
> On 15 December 2017 at 10:19, Paulo Matos  wrote:
>>
>>
>> On 14/12/17 21:32, Christophe Lyon wrote:
>>> Great, I thought the CF machines were reserved for developpers.
>>> Good news you could add builders on them.
>>>
>>
>> Oh. I have seen similar things happening on CF machines so I thought it
>> was not a problem. I have never specifically asked for permission.
>>
>>>> pmatos@gcc115:~/gcc-8-20171203_BUILD$ as -march=armv8.1-a
>>>> Assembler messages:
>>>> Error: unknown architecture `armv8.1-a'
>>>>
>>>> Error: unrecognized option -march=armv8.1-a
>>>>
>>>> However, if I run the a compiler build manually with just:
>>>>
>>>> $ configure --disable-multilib
>>>> $ nice -n 19 make -j4 all
>>>>
>>>> This compiles just fine. So I am at the moment attempting to investigate
>>>> what might cause the difference between what buildbot does and what I do
>>>> through ssh.
>>>>
>>> I suspect you are hitting a bug introduced recently, and fixed by:
>>> https://gcc.gnu.org/ml/gcc-patches/2017-12/msg00434.html
>>>
>>
>> Wow, that's really useful. Thanks for letting me know.
>>
> And the patch was committed last night (r255659), so maybe your builds now 
> work?
> 

On some machines, in incremental builds, I am still seeing this:
Assembler messages:
Error: unknown architectural extension `lse'
Error: unrecognized option -march=armv8-a+lse
make[4]: *** [load_1_1_.lo] Error 1
make[4]: *** Waiting for unfinished jobs

Looks related... the only strange thing is that this doesn't happen in
full builds.

-- 
Paulo Matos


Re: GCC Buildbot Update

2017-12-20 Thread Paulo Matos


On 20/12/17 10:51, Christophe Lyon wrote:
> 
> The recent fix changed the Makefile and configure script in libatomic.
> I guess that if your incremental builds does not run configure, it's
> still using old Makefiles, and old options.
> 
> 
You're right. I guess incremental builds should always call configure,
just in case.

Thanks,
-- 
Paulo Matos


Re: GCC Buildbot Update

2017-12-20 Thread Paulo Matos


On 20/12/17 12:48, James Greenhalgh wrote:
> On Wed, Dec 20, 2017 at 10:02:45AM +0000, Paulo Matos wrote:
>>
>>
>> On 20/12/17 10:51, Christophe Lyon wrote:
>>>
>>> The recent fix changed the Makefile and configure script in libatomic.
>>> I guess that if your incremental builds does not run configure, it's
>>> still using old Makefiles, and old options.
>>>
>>>
>> You're right. I guess incremental builds should always call configure,
>> just in case.
> 
> For my personal bisect scripts I try an incremental build, with a
> full rebuild as a fallback on failure.
> 
> That gives me the benefits of an incremental build most of the time (I
> don't have stats on how often) with an automated approach to keeping things
> going where there are issues.
> 
> Note that there are rare cases where depencies are missed in the toolchain
> and an incremental build will give you a toolchain with undefined
> behaviour, as one compilation unit takes a new definition of a
> struct/interface and the other sits on an outdated compile from the
> previous build.
> 
> I don't have a good way to detect these.
> 

That's definitely a shortcoming of incremental builds. Unfortunately we
cannot cope with full builds for each commit (even for incremental
builds we'll need an alternative soon). So I will implement the same
strategy of full build if incremental fails, I think.
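As a purely illustrative sketch of that strategy (the two build functions below are hypothetical stand-ins for the real buildbot steps, not actual buildbot code):

```python
# Sketch of "incremental first, full rebuild on failure".
# incremental_build() and full_build() are made-up stand-ins for the real
# steps (e.g. "make" in an existing tree vs. configure + make in a clean one).
def incremental_build():
    return False  # simulate an incremental build failure


def full_build():
    return True


def build():
    if incremental_build():
        return "incremental"
    # Incremental build failed: fall back to a clean full build.
    return "full" if full_build() else "failed"


print(build())  # -> full
```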

With regards to incremental builds with undefined behaviour, that probably
means that dependencies are incorrectly calculated. It would be great to
sort these out. If we could detect that there are issues with the
incremental build, we could then try to understand which dependencies were
not properly calculated. This is just a guess, however; implementing it
might take a while and would obviously need a lot more resources than we
have available now.

-- 
Paulo Matos


Re: jamais-vu can now ignore renumbering of source lines in dg output (Re: GCC Buildbot Update)

2018-01-24 Thread Paulo Matos


On 24/01/18 20:20, David Malcolm wrote:
> 
> I've added a new feature to jamais-vu (as of
> 77849e2809ca9a049d5683571e27ebe190977fa8): it can now ignore test
> results that merely changed line number.  
> 
> For example, if the old .sum file has a:
> 
>   PASS: g++.dg/diagnostic/param-type-mismatch.C  -std=gnu++11  (test for 
> errors, line 106)
> 
> and the new .sum file has a:
> 
>   PASS: g++.dg/diagnostic/param-type-mismatch.C  -std=gnu++11  (test for 
> errors, line 103)
> 
> and diffing the source trees reveals that line 106 became line 103, the
> change won't be reported by "jv compare".
> 
> It also does it for dg-{begin|end}-multiline-output.
> 
> It will report them if the outcome changed (e.g. from PASS to FAIL).
> 
> To do this filtering, jv needs access to the old and new source trees,
> so it can diff the pertinent source files, so "jv compare" has gained
> the optional arguments
>   --old-source-path=
> and
>   --new-source-path=
> See the example in the jv Makefile for more info.  If they're not
> present, it should work as before (without being able to do the above
> filtering).
> 
> Is this something that the buildbot can use?
> 

Hi David,

Thanks for the amazing improvements.
I will take a look at them on Monday. I have a lot of work at the moment,
so I decided to take 1/5 of my week (usually Monday) to work on the
buildbot. I will definitely get it integrated on Monday and hopefully have
something to report afterwards.

Thanks for keeping me up-to-date with these changes.

-- 
Paulo Matos


Re: jamais-vu can now ignore renumbering of source lines in dg output (Re: GCC Buildbot Update)

2018-01-29 Thread Paulo Matos


On 24/01/18 20:20, David Malcolm wrote:
> 
> I've added a new feature to jamais-vu (as of
> 77849e2809ca9a049d5683571e27ebe190977fa8): it can now ignore test
> results that merely changed line number.  
> 
> For example, if the old .sum file has a:
> 
>   PASS: g++.dg/diagnostic/param-type-mismatch.C  -std=gnu++11  (test for 
> errors, line 106)
> 
> and the new .sum file has a:
> 
>   PASS: g++.dg/diagnostic/param-type-mismatch.C  -std=gnu++11  (test for 
> errors, line 103)
> 
> and diffing the source trees reveals that line 106 became line 103, the
> change won't be reported by "jv compare".
> 
> It also does it for dg-{begin|end}-multiline-output.
> 
> It will report them if the outcome changed (e.g. from PASS to FAIL).
> 
> To do this filtering, jv needs access to the old and new source trees,
> so it can diff the pertinent source files, so "jv compare" has gained
> the optional arguments
>   --old-source-path=
> and
>   --new-source-path=
> See the example in the jv Makefile for more info.  If they're not
> present, it should work as before (without being able to do the above
> filtering).


Hi,

I am looking at this today and I noticed that keeping the source files for
all recent GCC revisions is costly in terms of time (if we wish to
compress them) and space (for storage). I was instead thinking that jv
could calculate the differences offline using pysvn and the old and new
revision numbers.

I have started implementing this in my port. Would you consider merging it?
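A minimal sketch of the line-mapping idea, using Python's difflib on two revisions' file contents (the file contents and helper name here are made up; how the revisions are fetched, via pysvn or otherwise, is orthogonal):

```python
import difflib

# Made-up old and new contents of a test file: one line was deleted, so
# every later line is renumbered.
old = ["int a;", "/* dg-error 1 */", "int main () {}"]
new = ["/* dg-error 1 */", "int main () {}"]


def map_line(old_lines, new_lines, old_lineno):
    """Map a 1-based line number in old_lines to its new location,
    or return None if the line itself changed."""
    sm = difflib.SequenceMatcher(a=old_lines, b=new_lines)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal" and i1 <= old_lineno - 1 < i2:
            return old_lineno - 1 - i1 + j1 + 1
    return None  # the line was modified or removed; report as usual


print(map_line(old, new, 2))  # -> 1 (the dg-error line merely moved)
```

A result that only moved (old line N, new line map_line(..., N)) can then be filtered out of the comparison, while a changed line still gets reported.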

-- 
Paulo Matos


Re: jamais-vu can now ignore renumbering of source lines in dg output (Re: GCC Buildbot Update)

2018-01-29 Thread Paulo Matos


On 29/01/18 15:19, David Malcolm wrote:
>>
>> Hi,
>>
>> I am looking at this today and I noticed that having the source file
>> for
>> all recent GCC revisions is costly in terms of time (if we wish to
>> compress them) and space (for storage). I was instead thinking that
>> jv
>> could calculate the differences offline using pysvn and the old and
>> new
>> revision numbers.
> 
> Note that access to the source files is optional - jv doesn't need
> them, it just helps for the particular situation described above.
> 

I understand but it would be great to have line number filtering.

>> I have started implementing this in my port. Would you consider
>> merging it?
> 
> Sounds reasonable - though bear in mind that gcc might be switching to
> git at some point.
> 

Yes, I know... but... if we wait for that to happen to implement
something... :)

> Send a pull request (I've turned on travis CI on the github repository,
> so pull requests now automatically get tested on a bunch of different
> Python 3 versions).
> 

Sure.

-- 
Paulo Matos


Re: Both GCC and GDB buildbot use gcc114

2018-02-27 Thread Paulo Matos
On 27/02/18 13:53, Yao Qi wrote:
> Hi Paulo,
> I noticed that GDB buildbot pending build queue on aarch64-linux
> becomes longer and longer,
> https://gdb-build.sergiodj.net/builders/Ubuntu-AArch64-m64
> as it takes longer to finish each build and test.
> 
> Looks that you deployed aarch64-linux buildslave for GCC buildbot
> on gcc114 as well, that is reason that GDB build and test is slowed
> down (GCC build and test is slowed down too).
> 
> We'd better avoid using the same machine for two buildbots.  Are there
> any easy way to move one buildslave to a different machine like gcc115
> or gcc116.  As far as I know, they are identical.
>

Apologies for the clash on resources. Using gcc115 and gcc116 only now.

Kind regards,

-- 
Paulo Matos


Re: Conditional clobbering

2010-02-23 Thread Paulo Matos
On 02/23/10 19:12, Joern Rennecke wrote:
> Quoting "Paulo J. Matos" :
>> I have a situation in writing a specific condition on an md file.
>> I have an insn with 2 alternatives and then I use which_alternative to
>> generate the assembler code but if which_alternative == 1 I am
>> clobbering a register. How can I tell gcc that if it matches 1, a
>> given register is clobbered?
> 
> You use a clobber with a matching constraint for alternative 1 and X for
> other alternatives.  Make sure the pattern is not named.

Thanks!
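For the record, a hedged sketch of the idiom described above, with made-up operation, unspec and pattern names: alternative 0 uses the "X" constraint so no scratch register is allocated, and only alternative 1 clobbers a real register. The pattern is unnamed ("*...") as Joern advises.

```
;; Hypothetical sketch, not from any real port.
(define_insn "*my_op"
  [(set (match_operand:SI 0 "register_operand" "=r,r")
        (unspec:SI [(match_operand:SI 1 "register_operand" "r,r")]
                   UNSPEC_MY_OP))
   ;; "X" matches anything for alternative 0 (nothing clobbered);
   ;; alternative 1 gets an earlyclobber scratch register.
   (clobber (match_scratch:SI 2 "=X,&r"))]
  ""
  "@
   my_op_a\t%0, %1
   my_op_b\t%0, %1, %2")
```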

-- 
PMatos


RE: Modeling predicate registers with more than one bit

2013-03-26 Thread Paulo Matos
Hi, sorry for the delay in replying, but I just returned from paternity leave.

> 
> Have you had a look at the SH backend?  SH cores have a "T Bit"
> register, which functions as carry bit, over/underflow, comparison
> result and branch condition register.  In the SH backend it's treated as
> a fixed SImode hard-reg (although BImode would suffice in this case, I
> guess).
> 

I have looked at sh but didn't fully understand how it worked. Your explanation 
made it clear.

> 
> The predicate is for matching various forms of T bit negation patterns.
> 
> Maybe you could try the same approach for your case.
> If your predicate register has multiple independent bit(fields), you
> could try defining separate hard-regs for every bit(field).
> 

It sounds like that could be what I want. I probably need not different
hard-regs but different pseudos (since I have different pseudo regs) at
different modes (since the register might be set differently depending on the
mode of the comparison).

That seems to be the way to go. 

Cheers,

Paulo Matos


RE: Modeling predicate registers with more than one bit

2013-03-26 Thread Paulo Matos
Hi, sorry for the delay in replying, but I just returned from paternity leave.

> -Original Message-
> From: Hans-Peter Nilsson [mailto:h...@bitrange.com]
> Sent: 05 March 2013 01:45
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: Re: Modeling predicate registers with more than one bit
> 
> Except for CCmodes being dependent on source-modes, I'd sneak
> peeks at PowerPC.
> 

What do you mean by source modes?

> > If not, is there any way to currently
> > (as of HEAD) model this in GCC?
> 
> IIUC, this sounds simply like having multiple separate
> condition-code registers, just with a size-dependent CCmodes
> twist; for each type of comparison where there'd be a separate
> CCmode variant, you also need separate CCmodes for each source
> mode M, all separated in cbranchM4 and cstoreM4.
> 


I am not sure CC_MODE can solve the problem, but I am not entirely experienced 
with using different CC_MODEs. The first thing that comes to mind is: how do 
you set the size of a CCmode?
A predicate register in our backend can be set as if it had different sizes. 
So, even though the register has 8 bits, it's possible to have just 1 bit set, 
2 bits set, 4 bits set or 8 bits set depending on whether a comparison is of 
mode BI, QI, SI or DI.

I might have to use proper registers like SH does (following Oleg suggestion).

Thanks,

Paulo Matos


Clarification of cloned function names during profiling

2013-03-28 Thread Paulo Matos
Hello,

I have been investigating gcc and gprof interaction. 
I have noticed something strange, even though I cannot reproduce an example.

In certain situations, GCC will produce functions called foo.isra.0 or 
foo.constprop.0. 
These function names are created by clone_function_name where suffix is isra or 
constprop.

On the other hand in gprof/corefile.c (function core_sym_class) of binutils, 
symbols that don't include a '.clone.' (which used to be generated by 4.5 at 
least) are discarded (from 2.23.2).
  for (name = sym->name; *name; ++name)
{
  if (*name == '$')
return 0;

  while (*name == '.')
{
  /* Allow both nested subprograms (which end with ".NNN", where N is
 a digit) and GCC cloned functions (which contain ".clone").
 Allow for multiple iterations of both - apparently GCC can clone
 clones and subprograms.  */
  int digit_seen = 0;
#define CLONE_NAME  ".clone."
#define CLONE_NAME_LEN  strlen (CLONE_NAME)
  
  if (strlen (name) > CLONE_NAME_LEN
  && strncmp (name, CLONE_NAME, CLONE_NAME_LEN) == 0)
name += CLONE_NAME_LEN - 1;

  for (name++; *name; name++)
if (digit_seen && *name == '.')
  break;
else if (ISDIGIT (*name))
  digit_seen = 1;
else
  return 0;
}
}


My question is: how does this work with recent GCCs and binutils? If I use 
-pg on gcc, will gcc stop outputting functions with isra, constprop, etc. 
suffixes and revert to clone suffixes, or will it just use .?

Cheers,

Paulo Matos




RE: Modeling predicate registers with more than one bit

2013-03-28 Thread Paulo Matos
> -Original Message-
> From: Hans-Peter Nilsson [mailto:h...@bitrange.com]
> Sent: 26 March 2013 17:43
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: RE: Modeling predicate registers with more than one bit
> >
> > What do you mean by source modes?
> 
> The SI and HI in subsi3 and subhi3.  IIRC you said your ISA set
> CC-bits differently depending on the size of the operand.
> 

That's true. And I am starting to think that CCMODE is exactly what I need.
Even though my predicate register is QImode (8 bits), only certain bits are set 
depending on source modes. Can I specify which bits are set using CCmode 
macros?

> > I am not sure CC_MODE can solve the problem but I am not
> > entirely experienced with using different CC_MODEs, the first
> > thing that comes to mind is, how do you set the size of a
> > CCmode?
> 
> Unfortunately undocumented, but UTSL, for example
> gcc/config/mips/mips-modes.def.
> 
> If any register can be set to a "CC-value" then you don't need
> to set any specific set of registers aside.
> 

Not any register, only the set of predicate registers we have. What's UTSL?

-- 
Paulo Matos


RE: Clarification of cloned function names during profiling

2013-03-28 Thread Paulo Matos

> -Original Message-
> From: Joe Seymour [mailto:jseym...@codesourcery.com]
> Sent: 28 March 2013 15:17
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: Re: Clarification of cloned function names during profiling
> 
> 
> I had a patch committed to trunk gprof that taught it to handle
> ".constprop" functions correctly:
> 
> http://sourceware.org/ml/binutils/2012-09/msg00260.html
> 
> I suppose a similar patch would be required for isra functions as
> well...

Yes, I noticed that patch in HEAD. I have a hard time understanding how we now 
need to add all these exceptions to binutils because gcc changed its behaviour. 
Maybe binutils should allow anything that looks like .. at a 
minimum.
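A hedged sketch of that relaxed rule, in Python rather than C for brevity (the regex and symbol names are illustrative, not binutils code): accept any symbol with one or more iterated suffixes of the shape "dot, word, dot, number".

```python
import re

# Hypothetical relaxed matcher: <name>(.<word>.<number>)+, instead of
# hard-coding each known suffix (".clone.", ".constprop.", ".isra.", ...).
CLONE_SUFFIX_RE = re.compile(r"^[^.$]+(\.[A-Za-z_]+\.\d+)+$")

for sym in ("foo.isra.0", "foo.constprop.0", "foo.clone.1",
            "foo.isra.0.constprop.3", "foo", "foo$local"):
    print(sym, bool(CLONE_SUFFIX_RE.match(sym)))
```

This would also accept suffixes GCC has not invented yet, which is the point of the suggestion.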

On the other hand this seems to imply that nobody actually uses gprof anymore...

I will try to get a patch submitted to binutils upstream. Thanks for your 
comment on this.

-- 
Paulo Matos


RE: Clarification of cloned function names during profiling

2013-03-28 Thread Paulo Matos
> -Original Message-
> From: Joe Seymour [mailto:jseym...@codesourcery.com]
> Sent: 28 March 2013 15:37
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: Re: Clarification of cloned function names during profiling
> 
> FWIW I fixed this for constprop because a customer reported it as an
> issue, so at least 1 person is still using it.

Make that 2. :) I will contact upstream.


RE: Modeling predicate registers with more than one bit

2013-03-28 Thread Paulo Matos
> -Original Message-
> From: Hans-Peter Nilsson [mailto:h...@bitrange.com]
> Sent: 26 March 2013 17:43
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: RE: Modeling predicate registers with more than one bit
> 
> Unfortunately undocumented, but UTSL, for example
> gcc/config/mips/mips-modes.def.
> 

Using the source now... :) Thanks.


RE: GCC Bugzilla database dump

2013-04-12 Thread Paulo Matos
My reply does not represent an official answer but I do know that this has come 
up before. 

The answer was that no, you cannot have a dump of the database for privacy 
reasons. You can, however, use a web spider to get information directly from 
the webpages as long as you don't DoS the server. :)

Cheers,

Paulo Matos


> -Original Message-
> From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of
> Shakthi Kannan
> Sent: 11 April 2013 17:41
> To: gcc@gcc.gnu.org
> Subject: GCC Bugzilla database dump
> 
> Hi,
> 
> I would like to know if it is possible to get a database dump of the
> GCC Bugzilla instance for analytics?
> 
>   http://gcc.gnu.org/bugzilla/
> 
> Please let me know.
> 
> Thanks!
> 
> SK
> 
> --
> Shakthi Kannan
> http://www.shakthimaan.com



vec<> inside GTYed struct

2013-04-19 Thread Paulo Matos
Hello,

Should I be concerned about using a vec<> inside a GTYed struct? Something like:
typedef loop_info * LOOP_INFO;

struct GTY(()) LOOP_INFO 
{
  ...
  vec infos;
};

is causing me some pain due to an invalid free() / delete / delete[] / realloc() 
(as reported by valgrind after a segfault).

Are there any rules of thumb as to what can go inside a GTYed struct? I read 
http://gcc.gnu.org/onlinedocs/gcc-4.8.0/gccint/Type-Information.html#Type-Information
but unfortunately it doesn't mention the use of vecs inside GTYed structs.

For sake of completion the backtrace looks like:
==30111== Invalid free() / delete / delete[] / realloc()
==30111==at 0x4A078F0: realloc (vg_replace_malloc.c:632)
==30111==by 0x1037CEC: xrealloc (xmalloc.c:179)
==30111==by 0x663C4E: void 
va_heap::reserve(vec*&, 
unsigned int, bool) (vec.h:300)
==30111==by 0x663B49: vec::reserve(unsigned int, bool) (vec.h:1468)
==30111==by 0xCEA04E: firepath_reorg_loops(_IO_FILE*) (firepath.c:8764)
==30111==by 0xCE6107: firepath_reorg() (firepath.c:6678)
==30111==by 0x9BE01D: rest_of_handle_machine_reorg() (reorg.c:3927)
==30111==by 0x94BC0A: execute_one_pass(opt_pass*) (passes.c:2379)
==30111==by 0x94BDFE: execute_pass_list(opt_pass*) (passes.c:2427)
==30111==by 0x94BE2F: execute_pass_list(opt_pass*) (passes.c:2428)
==30111==by 0x94BE2F: execute_pass_list(opt_pass*) (passes.c:2428)
==30111==by 0x68A254: expand_function(cgraph_node*) (cgraphunit.c:1640)

Cheers,

Paulo Matos




disable-nls breaks build

2013-04-30 Thread Paulo Matos
Hello,

I just cloned gcc because of an error I was seeing in my port. It seems to me 
that the problem is the --disable-nls option, but I haven't researched yet why 
this is happening.

If I configure with:
../gcc/configure --prefix=/home/pmatos/work/tmp/install-gcc/ 
--exec-prefix=/home/pmatos/work/tmp/install-gcc/x86_64-rhel5 --disable-nls 
--enable-checking --disable-shared   --enable-lto --enable-languages=c 
--enable-werror-always

and make.
After a while, I get:
../../gcc/gcc/langhooks.c: In function 'void 
lhd_print_error_function(diagnostic_context*, const char*, diagnostic_info*)':
../../gcc/gcc/langhooks.c:457:41: error: unknown conversion type character 'r' 
in format [-Werror=format]
../../gcc/gcc/langhooks.c:457:41: error: format '%d' expects argument of type 
'int', but argument 5 has type 'const char*' [-Werror=format]
../../gcc/gcc/langhooks.c:457:41: error: unknown conversion type character 'R' 
in format [-Werror=format]
../../gcc/gcc/langhooks.c:457:41: error: too many arguments for format 
[-Werror=format-extra-args]
../../gcc/gcc/langhooks.c:462:31: error: unknown conversion type character 'r' 
in format [-Werror=format]
../../gcc/gcc/langhooks.c:462:31: error: format '%d' expects argument of type 
'int', but argument 5 has type 'const char*' [-Werror=format]
../../gcc/gcc/langhooks.c:462:31: error: unknown conversion type character 'R' 
in format [-Werror=format]
../../gcc/gcc/langhooks.c:462:31: error: too many arguments for format 
[-Werror=format-extra-args]

I am unsure about how nls works and what's causing this since pp_printf seems 
to be fine with %r, %R. I guess the important detail is that the string is 
surrounded by _(...).

Any hints on this?

Paulo Matos




RE: disable-nls breaks build

2013-05-01 Thread Paulo Matos
After a couple of tests, this failure seems to have nothing to do with nls. 
Even if I remove all the flags it still fails.
I guess it can only be a case of tool versions:
Building with GCC 4.7.2, gmp 4.3.0, mpfr 2.4.1, mpc 0.8.1. Do you think any 
of these could be affecting the build?

Cheers,

Paulo Matos


> -Original Message-
> From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of Paulo
> Matos
> Sent: 30 April 2013 14:33
> To: gcc@gcc.gnu.org
> Subject: disable-nls breaks build
> 
> Hello,
> 
> I just cloned gcc because of an error I was seeing in my port. It seems to me
> that the problem is the --disable-nls option but I haven't research yet why
> this is happening.
> 
> If I configure with:
> ../gcc/configure --prefix=/home/pmatos/work/tmp/install-gcc/ --exec-
> prefix=/home/pmatos/work/tmp/install-gcc/x86_64-rhel5 --disable-nls --enable-
> checking --disable-shared   --enable-lto --enable-languages=c --enable-
> werror-always
> 
> and make.
> I get after awhile:
> ../../gcc/gcc/langhooks.c: In function 'void
> lhd_print_error_function(diagnostic_context*, const char*,
> diagnostic_info*)':
> ../../gcc/gcc/langhooks.c:457:41: error: unknown conversion type character
> 'r' in format [-Werror=format]
> ../../gcc/gcc/langhooks.c:457:41: error: format '%d' expects argument of type
> 'int', but argument 5 has type 'const char*' [-Werror=format]
> ../../gcc/gcc/langhooks.c:457:41: error: unknown conversion type character
> 'R' in format [-Werror=format]
> ../../gcc/gcc/langhooks.c:457:41: error: too many arguments for format [-
> Werror=format-extra-args]
> ../../gcc/gcc/langhooks.c:462:31: error: unknown conversion type character
> 'r' in format [-Werror=format]
> ../../gcc/gcc/langhooks.c:462:31: error: format '%d' expects argument of type
> 'int', but argument 5 has type 'const char*' [-Werror=format]
> ../../gcc/gcc/langhooks.c:462:31: error: unknown conversion type character
> 'R' in format [-Werror=format]
> ../../gcc/gcc/langhooks.c:462:31: error: too many arguments for format [-
> Werror=format-extra-args]
> 
> I am unsure about how nls works and what's causing this since pp_printf seems
> to be fine with %r, %R. I guess the important detail is that the string is
> surrounded by _(...).
> 
> Any hints on this?
> 
> Paulo Matos
> 



RE: disable-nls breaks build

2013-05-01 Thread Paulo Matos
Turns out that this is a warning thrown by GCC that ends up as an error due to 
--enable-werror-always.

Paulo Matos

> -Original Message-
> > From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of
> Paulo
> > Matos
> > Sent: 30 April 2013 14:33
> > To: gcc@gcc.gnu.org
> > Subject: disable-nls breaks build
> >
> > and make.
> > I get after awhile:
> > ../../gcc/gcc/langhooks.c: In function 'void
> > lhd_print_error_function(diagnostic_context*, const char*,
> > diagnostic_info*)':
> > ../../gcc/gcc/langhooks.c:457:41: error: unknown conversion type character
> > 'r' in format [-Werror=format]
> > ../../gcc/gcc/langhooks.c:457:41: error: format '%d' expects argument of
> type
> > 'int', but argument 5 has type 'const char*' [-Werror=format]
> > ../../gcc/gcc/langhooks.c:457:41: error: unknown conversion type character
> > 'R' in format [-Werror=format]
> > ../../gcc/gcc/langhooks.c:457:41: error: too many arguments for format [-
> > Werror=format-extra-args]
> > ../../gcc/gcc/langhooks.c:462:31: error: unknown conversion type character
> > 'r' in format [-Werror=format]
> > ../../gcc/gcc/langhooks.c:462:31: error: format '%d' expects argument of
> type
> > 'int', but argument 5 has type 'const char*' [-Werror=format]
> > ../../gcc/gcc/langhooks.c:462:31: error: unknown conversion type character
> > 'R' in format [-Werror=format]
> > ../../gcc/gcc/langhooks.c:462:31: error: too many arguments for format [-
> > Werror=format-extra-args]
> >
> > I am unsure about how nls works and what's causing this since pp_printf
> seems
> > to be fine with %r, %R. I guess the important detail is that the string is
> > surrounded by _(...).
> >
> > Any hints on this?
> >
> > Paulo Matos
> >



BImode and STORE_VALUE_FLAG

2013-05-03 Thread Paulo Matos
Hello,

It seems to me there's a bug in 
simplify_const_relational_operation:simplify-rtx.c.
If you set STORE_VALUE_FLAG to -1, if you get to 
simplify_const_relational_operation with code: NE, mode: BImode, op0: reg, op1: 
const_int 0, then you end up in line 4717 calling get_mode_bounds.

get_mode_bounds will unfortunately return min:0, max:-1 for BImode and GCC 
proceeds to compare val which is 0 using:
/* x != y is always true for y out of range.  */
  if (val < mmin || val > mmax)
return const_true_rtx;

This simplifies the comparison to const_true_rtx in the case STORE_FLAG_VALUE 
is -1. This seems flawed. 

Unless there's some background reason for this to happen, this seems like a bug. 
BImode is a two-value mode: 0 or STORE_FLAG_VALUE (according to 
trunc_int_to_mode); therefore there are really no bounds, and these comparisons 
in simplify_const_relational_operation should take special care when dealing with 
BImode. Also, having max < min is strange at best, and I can imagine it can 
result in pretty strange behaviour if a developer assumes max >= min, as usual.
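Outside GCC, the misfire is easy to reproduce with the values reported above (mmin = 0, mmax = -1, so max < min); this is a toy model, not GCC's actual code:

```python
# Bounds as get_mode_bounds returns them for BImode in this scenario.
mmin, mmax = 0, -1

# The value being compared against (val comes from "op1: const_int 0").
val = 0

# The "x != y is always true for y out of range" check from
# simplify_const_relational_operation: with max < min, even the perfectly
# valid value 0 is classified as out of range.
out_of_range = val < mmin or val > mmax
print(out_of_range)  # -> True, so x != 0 is wrongly folded to const_true_rtx
```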

I am interested in comments to this piece of code. I am happy to patch 
simplify_const_relational_operation if you agree with what I said.

Cheers,

Paulo Matos




RE: BImode and STORE_VALUE_FLAG

2013-05-07 Thread Paulo Matos
> -Original Message-
> From: Mikael Pettersson [mailto:mi...@it.uu.se]
> Sent: 04 May 2013 11:51
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: Re: BImode and STORE_VALUE_FLAG
> 
> 
> I can't comment on the code in question, but the backend for m68k may be
> affected
> since it defines STORE_FLAG_VALUE as -1.  Do you have a testcase that would
> cause
> wrong code, or a patch to cure the issue?  I'd be happy to do some testing on
> m68k-linux.
> 
> /Mikael

Mikael,

I have looked at m68k code and it seems that the predicate (cc reg) is FPmode.
I can't see a definition of FPmode anywhere (where is it?) but I assume it's 
not defined as a single bit. I think we are making the mistake of using BImode 
for this, and therefore STORE_FLAG_VALUE of -1 is invalid (or unsupported 
because it's meaningless).

So I guess the problem (which might not be a problem after all) can't be 
reproduced on m68k, and it's fine. I will keep researching this issue and will 
get back to you if I find anything interesting. In the meantime, where is 
FPmode defined in m68k?

Paulo Matos


RE: BImode and STORE_VALUE_FLAG

2013-05-07 Thread Paulo Matos

> -Original Message-
> From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of Paulo
> Matos
> Sent: 07 May 2013 14:19
> To: Mikael Pettersson
> Cc: gcc@gcc.gnu.org
> Subject: RE: BImode and STORE_VALUE_FLAG
>
> In the meantime, where is FPmode defined in m68k?
> 
>

Got it, FP is a mode iterator defined in m68k.md. :)


RE: BImode and STORE_VALUE_FLAG

2013-05-08 Thread Paulo Matos

> -Original Message-
> From: Mikael Pettersson [mailto:mi...@it.uu.se]
> Sent: 04 May 2013 11:51
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: Re: BImode and STORE_VALUE_FLAG
> 
> I can't comment on the code in question, but the backend for m68k may be
> affected
> since it defines STORE_FLAG_VALUE as -1.  Do you have a testcase that would
> cause
> wrong code, or a patch to cure the issue?  I'd be happy to do some testing on
> m68k-linux.
>

Mikael,

Still related to this issue, I think I found a bug that affects m68k due to the 
use of STORE_FLAG_VALUE != 1.

Try the following example (this is a trimmed down version of vector-compare-1.c 
from gcc testsuite):

int main (int argc, char *argv[]) {
int i, ires;
volatile int i0 = 2;
volatile int i1 = 2;

ires = (i0 >= i1);

if (ires != (i0 >= i1 ? -1 : 0)) {
  __builtin_printf ("%i != ((" "%i" " " ">=" " " "%i" " ? -1 : 0) ", 
(ires), (i0), (i1));
  return 1;
}

return 0;
}

I haven't tried to run it in m68k-linux since I don't have binutils-m68k 
installed but I assume it will print something like:
-1 != ((2 >= 2 ? -1 : 0)

and return exit code 1.

I did run m68k cc1 (gcc-4.7.3) and dumped logs and found the problem (which 
matches what I am seeing with my port).
We get to vrp1 with:
  D.1392_5 = i0.0D.1390_3 >= i1.1D.1391_4;
  iresD.1386_6 = (intD.1) D.1392_5;
  # VUSE <.MEMD.1405_18>
  i0.3D.1394_7 ={v} i0D.1387;
  # VUSE <.MEMD.1405_18>
  i1.4D.1395_8 ={v} i1D.1388;
  if (i0.3D.1394_7 >= i1.4D.1395_8)
goto ;
  else
goto ;
  # SUCC: 4 [50.0%]  (true,exec) 3 [50.0%]  (false,exec)

  # BLOCK 3 freq:5000
  # PRED: 2 [50.0%]  (false,exec)
  # SUCC: 4 [100.0%]  (fallthru,exec)

  # BLOCK 4 freq:1
  # PRED: 2 [50.0%]  (true,exec) 3 [100.0%]  (fallthru,exec)
  # iftmp.2D.1393_1 = PHI <-1(2), 0(3)>
  if (iftmp.2D.1393_1 != iresD.1386_6)
goto ;
  else
goto ;
  # SUCC: 5 [62.2%]  (true,exec) 6 [37.8%]  (false,exec)

The important bits are:
  D.1392_5 = i0.0D.1390_3 >= i1.1D.1391_4;
  iresD.1386_6 = (intD.1) D.1392_5;
...
  # iftmp.2D.1393_1 = PHI <-1(2), 0(3)>
  if (iftmp.2D.1393_1 != iresD.1386_6)
goto ;
  else
goto ;

vrp1 will then proceed to find the ranges for D.1392_5 = i0.0D.1390_3 >= 
i1.1D.1391_4;
Since this is a comparison set_value_range_to_truthvalue is called and returns 
the range [0, 1].
Then vrp1 simplifies the phi node to iftmp.2D.1393_1 = PHI < 0 > since -1 is 
not within the range.
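A toy model of this mis-simplification (not GCC code; the range and PHI arguments are taken from the dump above):

```python
# VRP's range for a comparison result (set_value_range_to_truthvalue).
truth_range = (0, 1)

# The PHI arguments from the dump: the target materializes "true" as -1
# because STORE_FLAG_VALUE == -1.
phi_args = [-1, 0]

# Pruning the PHI arguments against the [0, 1] range wrongly discards -1,
# collapsing the PHI to <0> and breaking the comparison with ires (-1).
feasible = [a for a in phi_args if truth_range[0] <= a <= truth_range[1]]
print(feasible)  # -> [0]
```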

From here on, a couple of simplifications break the remainder of the cgraph and 
generate incorrect code.

Can you reproduce this?

Cheers,

Paulo Matos


RE: BImode and STORE_VALUE_FLAG

2013-05-09 Thread Paulo Matos

> -Original Message-
> From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of
> Andreas Schwab
> Sent: 09 May 2013 09:52
> To: Paulo J. Matos
> Cc: gcc@gcc.gnu.org
> Subject: Re: BImode and STORE_VALUE_FLAG
> 
> "Paulo J. Matos"  writes:
> 
> > Further to this matter, can you explain the reasoning behind
> > vector-compare-1.c?
> 
> Vector comparisons are different.
> 

Right, my initial assumptions regarding vector comparisons were wrong and led 
me to create a scalar comparison test for my problem that was broken.

Thanks,

Paulo Matos


Mode precision and bytesize

2013-05-10 Thread Paulo Matos
Hello,

Can someone please clarify the difference between the precision of a mode and 
its bytesize?

Also, if you create a CCmode and ADJUST its bytesize to 1, then it's currently 
impossible to change its precision to 8.
You end up with a bytesize of 1 and a precision of 4*BITS_PER_UNIT, which 
doesn't sound right.

This is because precision is output in genmodes.c:emit_mode_precision, which will 
output 
tagged_printf ("%u*BITS_PER_UNIT", m->bytesize, m->name);
however, at this point in time m->bytesize is still the original size of 4 (for 
a CCmode), before the evaluation of ADJUST_BYTESIZE in -modes.def.

Is there a reason for it to be like this?

Cheers,
Paulo Matos




Pushing the limits on vector modes

2013-05-17 Thread Paulo Matos
Hello,

I am trying to model a predicate register mode that acts like a vector. We have 
a few predicate registers that are 8 bits in size, but they are set according 
to the mode of operation (not necessarily a comparison). Word size is 64.

Here's an example: for a scalar comparison leq p0, r0, 1, p0 will be set to 
0xff if r0 <= 1, but if I use SIMD operations the predicate register is not set 
as a single register.
For example, if I do tsteqw p0, r0, 0x   0001 (test equal on 32bits 
at a time), p0 will be 0b. I could use tsteqb p0, r0, 0x0100 0100 0100 
0100 (test equal on 8bits at a time), p0 will be 0b01010101.
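A hypothetical model of this predicate layout (one predicate bit per byte lane of the 64-bit word; the helper name is made up):

```python
# An element of width elem_bytes that compares true sets elem_bytes
# consecutive predicate bits; a scalar compare sets all 8 bits (0xff).
def predicate_mask(results, elem_bytes):
    mask = 0
    for i, lane_true in enumerate(results):
        if lane_true:
            mask |= ((1 << elem_bytes) - 1) << (i * elem_bytes)
    return mask


print(bin(predicate_mask([True], 8)))             # scalar compare -> 0b11111111
print(bin(predicate_mask([True, False], 4)))      # 32-bit lanes   -> 0b1111
print(bin(predicate_mask([True, False] * 4, 1)))  # 8-bit lanes    -> 0b1010101
```

The three calls correspond to the scalar leq, the tsteqw and the tsteqb cases respectively, which is exactly why a vector-of-BI mode feels like the natural representation.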

It seems to me that the best way to represent this is by using a vector mode of 
BIs.
So in modes.def:
FRACTIONAL_INT_MODE (B2, 2, 1); // Two bits
FRACTIONAL_INT_MODE (B4, 4, 1); // Four bits
// use QI for 8 bits and BI for 1 bit

VECTOR_MODE (INT, BI, 8);
VECTOR_MODE (INT, B2, 4);
VECTOR_MODE (INT, B4, 2);
VECTOR_MODE (INT, QI, 1); // which probably doesn't make sense and QImode can 
simply be used instead.

Then I could model tsteqw as:
(define_insn "*tsteqw"
  [(set (match_operand:V2B4 0 "predicate_register" "=p")
(eq:V2B4 (match_operand:V2SI 1 "register_operand")
 (match_operand:V2SI 2 "general_operand")))]
  ...)

Is this something reasonable or will GCC simply choke since I am pushing the 
limits of vector modes?

Paulo Matos




RE: Pushing the limits on vector modes

2013-05-17 Thread Paulo Matos
amylaar,

Do you recall how I can get those ARC branches? Where are those branches in the 
official GCC SVN?

Paulo Matos


> -Original Message-
> From: amyl...@spamcop.net [mailto:amyl...@spamcop.net]
> Sent: 17 May 2013 15:12
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: Re: Pushing the limits on vector modes
> 
> Quoting Paulo Matos :
> 
> > Hello,
> >
> > I am trying to model a predicate register mode that acts like a
> > vector. We have a few predicate registers that have 8 bits in size
> > but they are set accordingly to the mode of operation (not
> > necessarily a comparison). Word size is 64.
> 
> Yes need some surgery to the mode generator machinery.  I had the same
> problem with the mxp port, which you can still find in older ARC
> branches.



RE: Pushing the limits on vector modes

2013-05-17 Thread Paulo Matos
Found what seems to be the most recent ARC branch, arc-4_4-20090909-branch/.
http://gcc.gnu.org/viewvc/gcc/branches/arc-4_4-20090909-branch/

Paulo Matos


> -Original Message-
> From: amyl...@spamcop.net [mailto:amyl...@spamcop.net]
> Sent: 17 May 2013 15:12
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: Re: Pushing the limits on vector modes
> 
> Quoting Paulo Matos :
> 
> > Hello,
> >
> > I am trying to model a predicate register mode that acts like a
> > vector. We have a few predicate registers that have 8 bits in size
> > but they are set accordingly to the mode of operation (not
> > necessarily a comparison). Word size is 64.
> 
> Yes need some surgery to the mode generator machinery.  I had the same
> problem with the mxp port, which you can still find in older ARC
> branches.



Testing and branching with different modes

2013-06-06 Thread Paulo Matos
Hello,

My port seems to be slightly awkward in testing and branching compared to the
existing ports in mainline.
All the comparisons in my port set a QImode (8bits) register (predicate 
register), however branches branch on a single bit value of the predicate 
register.

So while I have:
tsteqb p0, r0, r1

which sets p0 to 0xff if r0 == r1. The branch then looks like
bl p0.<bit>, <label>
where, for example,
bl p0.0, L1

branches to L1 if bit 0 of p0 is set.

So I have cmp_qimode, etc but then only cbranchbi4.

My cbranchbi4 should look like this:
(define_insn "cbranchbi4"
  [(set (pc)
(if_then_else
 (match_operator 0 "easy_comparison"
 [(subreg:BI (match_operand:QI 1 "register" "c") 0)
  (match_operand:BI 2 "const0"   "")])
 (label_ref (match_operand 3 "" ""))
 (pc)))]
  ""...

or
(define_insn "cbranchbi4"
  [(set (pc)
(if_then_else
 (match_operator 0 "easy_comparison"
 [(and:BI (match_operand:QI 1 "register" "c")
  (const_int 1))
  (match_operand:BI 2 "const0"   "")])
 (label_ref (match_operand 3 "" ""))
 (pc)))]
  ""...


but neither of these alternatives seems to be supported, as both cause an ICE
in patch_jump_insn.
What's the best way to deal with this situation? Is there any port out there 
with similar issues?

Cheers,

Paulo Matos




objdump for gimple [lto]

2013-06-24 Thread Paulo Matos
Hello,

I see this item in http://gcc.gnu.org/wiki/LinkTimeOptimization :
7. Browsing/dumping tools for LTO files

Is there anything already out there, even if half-baked?
I am having trouble understanding a problem in LTO and I think the bug is in 
the writing of trees into the object file but can't find a way to know this 
unless I have something like an objdump for gimple.

If there's no tool out there to straightforwardly know this, what's the best 
approach to find out if there's a problem written in the LTO stream?

Cheers,

Paulo Matos




Delay scheduling due to possible future multiple issue in VLIW

2013-06-26 Thread Paulo Matos
Hello,

We have a port for a VLIW machine using gcc head 4.8 with a maximum issue of 2
per clock cycle (sometimes only 1 due to machine constraints).
We are seeing the following situation in sched2:

;;   --- forward dependences:  

;;   --- Region Dependences --- b 3 bb 0 
;;  insn  codebb   dep  prio  cost   reservation
;;    --   ---       ---
;;   38  1395 3 0 6 4   (p0+long_imm+ldst0+lock0),nothing*3 
: 44m 43 41 40 
;;   40   491 3 1 2 2   (p0+long_imm+ldst0+lock0),nothing   
: 44m 41 
;;   41   536 3 2 1 1   (p0+no_stl2)|(p1+no_dual)   : 44 
;;   43  1340 3 1 2 1   (p0+no_stl2)|(p1+no_dual)   : 44m 
;;   44  1440 3 4 1 1   (p0+long_imm)   : 

;;  dependencies resolved: insn 38
;;  tick updated: insn 38 into ready
;;  dependencies resolved: insn 41
;;  tick updated: insn 41 into ready
;;  Advanced a state.
;;  Ready list after queue_to_ready:41:4  38:2
;;  Ready list after ready_sort:41:4  38:2
;;  Ready list (t =   0):41:4  38:2
;;  Chosen insn : 38
;;0--> b  0: i  38r1=zxn([r0+`b'])
:(p0+long_imm+ldst0+lock0),nothing*3
;;  dependencies resolved: insn 43
;;  Ready-->Q: insn 43: queued for 4 cycles (change queue index).
;;  tick updated: insn 43 into queue with cost=4
;;  dependencies resolved: insn 40
;;  Ready-->Q: insn 40: queued for 4 cycles (change queue index).
;;  tick updated: insn 40 into queue with cost=4
;;  Ready-->Q: insn 41: queued for 1 cycles (resource conflict).
;;  Ready list (t =   0):  
;;  Advanced a state.
;;  Q-->Ready: insn 41: moving to ready without stalls
;;  Ready list after queue_to_ready:41:4
;;  Ready list after ready_sort:41:4
;;  Ready list (t =   1):41:4
;;  Chosen insn : 41
;;1--> b  0: i  41r0=r0+0x4   
:(p0+no_stl2)|(p1+no_dual)

So, it is scheduling first insn 38 followed by 41. 
The insn chain for bb3 before sched2 looks like:
(insn 38 36 40 3 (set (reg:DI 1 r1)
(zero_extend:DI (mem:SI (plus:SI (reg:SI 0 r0 [orig:119 ivtmp.13 ] 
[119])
(symbol_ref:SI ("b") [flags 0x80]  )) [2 MEM[symbol: b, index: ivtmp.13_7, offset: 0B]+0 S4 A32]))) pr3115b.c:13 
1395 {zero_extendsidi2}
 (nil))
(insn 40 38 41 3 (set (mem:SI (plus:SI (reg:SI 0 r0 [orig:119 ivtmp.13 ] [119])
(symbol_ref:SI ("a") [flags 0x80]  )) [2 MEM[symbol: a, index: ivtmp.13_7, offset: 0B]+0 S4 A32])
(reg:SI 1 r1 [orig:118 D.3048 ] [118])) pr3115b.c:13 491 {fp_movsi}
 (nil))
(insn 41 40 43 3 (set (reg:SI 0 r0 [orig:119 ivtmp.13 ] [119])
(plus:SI (reg:SI 0 r0 [orig:119 ivtmp.13 ] [119])
(const_int 4 [0x4]))) 536 {addsi3}
 (nil))
(insn 43 41 44 3 (set (reg:BI 64 p0 [122])
(ne:BI (reg:SI 1 r1 [orig:118 D.3048 ] [118])
(const_int 0 [0]))) pr3115b.c:13 1340 {cmp_simode}
 (expr_list:REG_DEAD (reg:SI 1 r1 [orig:118 D.3048 ] [118])
(nil)))
(jump_insn 44 43 55 3 (set (pc)
(if_then_else (ne (reg:BI 64 p0 [122])
(const_int 0 [0]))
(label_ref:SI 35)
(pc))) pr3115b.c:13 1440 {cbranchbi4}
 (expr_list:REG_DEAD (reg:BI 64 p0 [122])
(expr_list:REG_BR_PROB (const_int 9844 [0x2674])
(expr_list:REG_PRED_WIDTH (const_int 4 [0x4])
(nil


The problem with this is that GCC is scheduling insn 38, followed by 41, (a
patched) 40, 43 and 44.
However, if it had delayed 41, waited a clock cycle and issued 40, then it
would be able to issue 41 paired with 43 in the same clock cycle, and then
44.
So, instead of generating the following insn chain:
38
41
patched 40
43
44

it would generate
38
40
41 : 43
44

Is there a way to instruct the scheduler to wait on an instruction on a given 
clock cycle (even if that instruction is the only one on the ready list) 
because it's possible that it can be paired with a later instruction in the 
chain if issued simultaneously?

Cheers,

Paulo Matos




RE: Delay scheduling due to possible future multiple issue in VLIW

2013-06-27 Thread Paulo Matos
Let me add to my own post that the problem seems to be that the list scheduler
is greedy, in the sense that it will take an instruction from the ready list no
matter what, even when waiting and pairing it later with another instruction
might be more beneficial. The underlying assumption seems to be that 'issuing
instructions as soon as possible is better', which might be true for a
single-issue chip, but a VLIW with multiple issue has to contend with other
trade-offs.

Any thoughts on this?

Paulo Matos


> -Original Message-
> From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of Paulo
> Matos
> Sent: 26 June 2013 15:08
> To: gcc@gcc.gnu.org
> Subject: Delay scheduling due to possible future multiple issue in VLIW
> 
> Hello,
> 
> We have a port for a VLIW machine using gcc head 4.8 with an maximum issue of
> 2 per clock cycle (sometimes only 1 due to machine constraints).
> We are seeing the following situation in sched2:
> 
> ;;   --- forward dependences: 
> 
> ;;   --- Region Dependences --- b 3 bb 0
> ;;  insn  codebb   dep  prio  cost   reservation
> ;;    --   ---       ---
> ;;   38  1395 3 0 6 4
> (p0+long_imm+ldst0+lock0),nothing*3 : 44m 43 41 40
> ;;   40   491 3 1 2 2   (p0+long_imm+ldst0+lock0),nothing
> : 44m 41
> ;;   41   536 3 2 1 1   (p0+no_stl2)|(p1+no_dual)   : 44
> ;;   43  1340 3 1 2 1   (p0+no_stl2)|(p1+no_dual)   : 44m
> ;;   44  1440 3 4 1 1   (p0+long_imm)   :
> 
> ;;  dependencies resolved: insn 38
> ;;  tick updated: insn 38 into ready
> ;;  dependencies resolved: insn 41
> ;;  tick updated: insn 41 into ready
> ;;  Advanced a state.
> ;;  Ready list after queue_to_ready:41:4  38:2
> ;;  Ready list after ready_sort:41:4  38:2
> ;;  Ready list (t =   0):41:4  38:2
> ;;  Chosen insn : 38
> ;;0--> b  0: i  38r1=zxn([r0+`b'])
> :(p0+long_imm+ldst0+lock0),nothing*3
> ;;  dependencies resolved: insn 43
> ;;  Ready-->Q: insn 43: queued for 4 cycles (change queue index).
> ;;  tick updated: insn 43 into queue with cost=4
> ;;  dependencies resolved: insn 40
> ;;  Ready-->Q: insn 40: queued for 4 cycles (change queue index).
> ;;  tick updated: insn 40 into queue with cost=4
> ;;  Ready-->Q: insn 41: queued for 1 cycles (resource conflict).
> ;;  Ready list (t =   0):
> ;;  Advanced a state.
> ;;  Q-->Ready: insn 41: moving to ready without stalls
> ;;  Ready list after queue_to_ready:41:4
> ;;  Ready list after ready_sort:41:4
> ;;  Ready list (t =   1):41:4
> ;;  Chosen insn : 41
> ;;1--> b  0: i  41r0=r0+0x4
> :(p0+no_stl2)|(p1+no_dual)
> 
> So, it is scheduling first insn 38 followed by 41.
> The insn chain for bb3 before sched2 looks like:
> (insn 38 36 40 3 (set (reg:DI 1 r1)
> (zero_extend:DI (mem:SI (plus:SI (reg:SI 0 r0 [orig:119 ivtmp.13 ]
> [119])
> (symbol_ref:SI ("b") [flags 0x80]   0x2b9c011f75a0 b>)) [2 MEM[symbol: b, index: ivtmp.13_7, offset: 0B]+0 S4
> A32]))) pr3115b.c:13 1395 {zero_extendsidi2}
>  (nil))
> (insn 40 38 41 3 (set (mem:SI (plus:SI (reg:SI 0 r0 [orig:119 ivtmp.13 ]
> [119])
> (symbol_ref:SI ("a") [flags 0x80]   a>)) [2 MEM[symbol: a, index: ivtmp.13_7, offset: 0B]+0 S4 A32])
> (reg:SI 1 r1 [orig:118 D.3048 ] [118])) pr3115b.c:13 491 {fp_movsi}
>  (nil))
> (insn 41 40 43 3 (set (reg:SI 0 r0 [orig:119 ivtmp.13 ] [119])
> (plus:SI (reg:SI 0 r0 [orig:119 ivtmp.13 ] [119])
> (const_int 4 [0x4]))) 536 {addsi3}
>  (nil))
> (insn 43 41 44 3 (set (reg:BI 64 p0 [122])
> (ne:BI (reg:SI 1 r1 [orig:118 D.3048 ] [118])
> (const_int 0 [0]))) pr3115b.c:13 1340 {cmp_simode}
>  (expr_list:REG_DEAD (reg:SI 1 r1 [orig:118 D.3048 ] [118])
> (nil)))
> (jump_insn 44 43 55 3 (set (pc)
> (if_then_else (ne (reg:BI 64 p0 [122])
> (const_int 0 [0]))
> (label_ref:SI 35)
> (pc))) pr3115b.c:13 1440 {cbranchbi4}
>  (expr_list:REG_DEAD (reg:BI 64 p0 [122])
> (expr_list:REG_BR_PROB (const_int 9844 [0x2674])
> (expr_list:REG_PRED_WIDTH (const_int 4 [0x4])
> (nil
> 
> 
> The problem with this is that GCC is scheduling insn 38, followed by 41, (a
> patched) 40, 43 and 44.
> Howev

Infinite recursion due to builtin pattern detection

2013-06-27 Thread Paulo Matos
Hi,

I have noticed an interesting behaviour based on the memset builtin test on 
4.8.1. If you compile memset-lib.i with -O3 you get an infinite recursion.
$ cat memset-lib.i
# 1 
"/home/pmatos/work/fp_gcc-25x/gcc/testsuite/gcc.c-torture/execute/builtins/memset-lib.c"
# 1 ""
# 1 
"/home/pmatos/work/fp_gcc-25x/gcc/testsuite/gcc.c-torture/execute/builtins/memset-lib.c"
# 1 
"/home/pmatos/work/fp_gcc-25x/gcc/testsuite/gcc.c-torture/execute/builtins/lib/memset.c"
 1
extern void abort (void);
extern int inside_main;

__attribute__ ((__noinline__))
void *
memset (void *dst, int c, long unsigned int n)
{
  while (n-- != 0)
n[(char *) dst] = c;





  if (inside_main && n < 2)
abort ();


  return dst;
}
# 1 
"/home/pmatos/work/fp_gcc-25x/gcc/testsuite/gcc.c-torture/execute/builtins/memset-lib.c"
 2

$ ~/work/tmp/gcc_4.8.1-build/gcc/cc1 -O3 memset-lib.i -o-
.file   "memset-lib.i"
 memset
Analyzing compilation unit
Performing interprocedural optimizations
 <*free_lang_data>   <*free_inline_summary> 
 
Assembling functions:
 memset .text
.p2align 4,,15
.globl  memset
.type   memset, @function
memset:
.LFB0:
.cfi_startproc
testq   %rdx, %rdx
je  .L6
subq$8, %rsp
.cfi_def_cfa_offset 16
movsbl  %sil, %esi
callmemset
addq$8, %rsp
.cfi_def_cfa_offset 8
ret
.p2align 4,,10
.p2align 3
.L6:
movq%rdi, %rax
ret
.cfi_endproc
.LFE0:
.size   memset, .-memset
.ident  "GCC: (GNU) 4.8.1"
.section.note.GNU-stack,"",@progbits



So, it seems GCC is smart enough to detect the memset pattern, but once it
replaces the loop a recursion is created, because the function it is replacing
the loop in is memset itself.
Interestingly, GCC doesn't actually fail the test, probably due to interactions
with other files and functions, but this compilation unit does generate
incorrect output, so I think there might be a bug lurking here.
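
For illustration, here is a minimal sketch of the kind of byte-fill loop that
the loop-distribution pass (ldist, via -ftree-loop-distribute-patterns at -O3)
recognizes and turns into a memset call; when the surrounding function is
itself named memset, that replacement becomes the self-call visible in the
assembly above. The function name byte_fill is illustrative, not from the
testsuite; compiling a memset implementation with -ffreestanding or
-fno-tree-loop-distribute-patterns is the usual way to suppress the
transformation.

```c
#include <assert.h>
#include <stddef.h>

/* A byte-fill loop of the shape ldist recognizes as a memset.  If this
   loop lived inside a function named memset, replacing it with a call
   to the memset builtin would produce the infinite recursion above.
   Building with -fno-tree-loop-distribute-patterns (or -ffreestanding)
   keeps the loop as written.  */
void *
byte_fill (void *dst, int c, size_t n)
{
  while (n-- != 0)
    ((char *) dst)[n] = c;
  return dst;
}
```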

Another thing I cannot grasp is that the test for inside_main completely 
disappears. We do know n <= 2 after the loop but we have no information for 
inside_main so I would assume we still need to test inside_main to know if we 
should abort.

What do you think?

Paulo Matos




RE: Infinite recursion due to builtin pattern detection

2013-06-27 Thread Paulo Matos


> -Original Message-
> From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of Paulo
> Matos
> Sent: 27 June 2013 14:52
> To: gcc@gcc.gnu.org
> Subject: Infinite recursion due to builtin pattern detection
> 
> Another thing I cannot grasp is that the test for inside_main completely
> disappears. We do know n <= 2 after the loop but we have no information for
> inside_main so I would assume we still need to test inside_main to know if we
> should abort.
> 

Let me add that one possible reply to this last issue would be that GCC
detects the infinitely recursive call and removes everything after it, so that
the test involving inside_main disappears; but that's not the case. The test is
removed in vrp1, while the memset pattern is only replaced by a call to the
builtin later, in ldist.

Paulo Matos


RE: Infinite recursion due to builtin pattern detection

2013-06-27 Thread Paulo Matos

> -Original Message-
> From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of Paulo
> Matos
> Sent: 27 June 2013 15:02
> To: gcc@gcc.gnu.org
> Subject: RE: Infinite recursion due to builtin pattern detection
> 
> 
> Let me add that one of the replies to this last issue would be that GCC
> detects the infinite recursive call and removes everything else afterwards to
> the test involving internal_main disappears but that's not the case. The test
> is removed in vrp1 and the memset pattern is only replaced by a call to the
> builtin in ldist.
> 
> Paulo Matos

Chris Groessler just pointed out to me in private that I missed the fact that n
is unsigned, so it will wrap.
That explains why GCC removes the condition, but the main issue of the memset
recursion still stands.
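
The wrap-around can be seen directly: the failing loop test still performs the
decrement, so an unsigned counter always leaves the loop at its maximum value,
which is why VRP can prove `n < 2` false and delete the guarded abort. A
minimal sketch (the function name is illustrative):

```c
#include <limits.h>

/* The test `n-- != 0` decrements n even on the final, failing
   iteration, so after the loop an unsigned n has always wrapped
   around to ULONG_MAX.  Hence `n < 2` is provably false and VRP
   can delete any code guarded by it.  */
unsigned long
leftover_counter (unsigned long n)
{
  while (n-- != 0)
    ;                     /* the loop body is irrelevant to the wrap */
  return n;               /* always ULONG_MAX here */
}
```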

Paulo Matos


DONT_BREAK_DEPENDENCIES bitmask for scheduling

2013-07-01 Thread Paulo Matos
Hi,

Near the start of schedule_block, find_modifiable_mems is called if 
DONT_BREAK_DEPENDENCIES is not enabled for this scheduling pass. It seems on 
c6x backend currently uses this.
However, it's quite strange that this is not a requirement for all backends 
since find_modifiable_mems, moves all my dependencies in SD_LIST_HARD_BACK to 
SD_LIST_SPEC_BACK even though I don't have DO_SPECULATION enabled.

Since dependencies are accessed later on from try_ready (for example), I would
have thought it would always be good not to call find_modifiable_mems, given
that it seems to 'literally' break dependencies.

Is the behaviour of find_modifiable_mems a bug or somehow expected?

Cheers,

Paulo Matos




Inter register constraints

2013-07-05 Thread Paulo Matos
Hi,

I am convinced that what I am about to ask is not possible but I would like 
someone to confirm this.

Can I define in an insn a register constraint that depends on another register 
value?
So, for 
(set (match_operand:SI 0 "register_operand" "r")
 (match_operand:SI 1 "register_operand" "r"))

What I would like to represent is the fact that 0 and 1 are register pairs, so 
op0 is even and op1 is op0 + 1.
I noticed that avr defines register pair constraints but only for specific 
registers. If you have 64 registers that will give you 22 pairs. I could, of 
course, create all of these by hand by defining 23 classes and define a single 
constraint that matches these classes but I would like to know if there's 
another way. 
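
For what it's worth, a common alternative to enumerating pair classes is to
give the paired value a mode twice the register size and let the hard-register
hooks enforce even alignment. This is only a rough sketch for a hypothetical
port using the 4.8-era target macros; the exact predicate would depend on the
machine's register layout:

```c
/* Hypothetical port sketch: a value that needs more than one hard
   register may only start on an even register number, so the register
   allocator always assigns aligned even/odd pairs to double-word
   modes.  */
#define HARD_REGNO_MODE_OK(REGNO, MODE)                          \
  (HARD_REGNO_NREGS ((REGNO), (MODE)) == 1 || (((REGNO) & 1) == 0))
```

With this in place, a single operand of the wider mode stands for the pair, so
no per-pair constraint or class enumeration is needed.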

Cheers,

Paulo Matos




RE: Inter register constraints

2013-07-11 Thread Paulo Matos

> -Original Message-
> From: Georg-Johann Lay [mailto:a...@gjlay.de]
> Sent: 05 July 2013 18:03
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: Re: Inter register constraints
> 
> > have 64 registers that will give you 22 pairs. I could, of course,
> > create all of these by hand by defining 23 classes and define a
> > single constraint that matches these classes but I would like to know
> > if there's another way.
> 
> What are you trying to achieve?  In order so synthesize MOVW
> instructions after reload, see respective RTL peepholes.
> 

Thanks Johann, that seems to be pretty much what I want to do.
However, when in avr there's a HImode move of a register pair, only the first
register of each pair shows up in the RTL pattern.
So movw r0:r1, r2:r3

is a *movhi: (set (reg:HI r0) (reg:HI r2))

How does gcc know that r1 is going to be clobbered?

Is it because GET_MODE_SIZE (HImode) is twice the register size, so it is
assumed the following register is clobbered as well? (Or is there a hook that
needs to be set?)

Paulo Matos


RE: Delay scheduling due to possible future multiple issue in VLIW

2013-07-16 Thread Paulo Matos
Hello Maxim,

Thanks for your reply. I have in the meantime adjusted the list scheduler to
handle this situation, and it now performs better than before on my port.
However, I will give your suggestion of using sched-ebb a try given that it 
might outperform the current solution I have.

Regards,
Paulo Matos


> -Original Message-
> From: Maxim Kuvyrkov [mailto:ma...@kugelworks.com]
> Sent: 16 July 2013 05:02
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: Re: Delay scheduling due to possible future multiple issue in VLIW
> 
> Paulo,
> 
> GCC schedule is not particularly designed for VLIW architectures, but it
> handles them reasonably well.  For the example of your code both schedules
> take same time to execute:
> 
> 38: 0: r1 = e[r0]
> 40: 4: [r0] = r1
> 41: 5: r0 = r0+4
> 43: 5: p0 = r1!=0
> 44: 6: jump p0
> 
> and
> 
> 38: 0: r1 = e[r0]
> 41: 1: r0 = r0+4
> 40: 4: [r0] = r1
> 43: 5: p0 = r1!=0
> 44: 6: jump p0
> 
> [It is true that the first schedule takes less space due to fortunate VLIW
> packing.]
> 
> You are correct that GCC scheduler is greedy and that it tries to issue
> instructions as soon as possible (i.e., it is better to issue something on
> the cycle, than nothing at all), which is a sensible strategy.  For small
> basic block the greedy algorithm may cause artifacts like the one you
> describe.
> 
> You could try increasing size of regions on which scheduler operates by
> switching your port to use scheb-ebb scheduler, which was originally
> developed for ia64.
> 
> Regards,
> 
> --
> Maxim Kuvyrkov
> KugelWorks
> 
> 
> 
> On 27/06/2013, at 8:35 PM, Paulo Matos wrote:
> 
> > Let me add to my own post saying that it seems that the problem is that the
> list scheduler is greedy in the sense that it will take an instruction from
> the ready list no matter what when waiting and trying to pair it with later
> on with another instruction might be more beneficial. In a sense it seems
> that the idea is that 'issuing instructions as soon as possible is better'
> which might be true for a single issue chip but a VLIW with multiple issue
> has to contend with other problems.
> >
> > Any thoughts on this?
> >
> > Paulo Matos
> >
> >
> >> -Original Message-
> >> From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of
> Paulo
> >> Matos
> >> Sent: 26 June 2013 15:08
> >> To: gcc@gcc.gnu.org
> >> Subject: Delay scheduling due to possible future multiple issue in VLIW
> >>
> >> Hello,
> >>
> >> We have a port for a VLIW machine using gcc head 4.8 with an maximum issue
> of
> >> 2 per clock cycle (sometimes only 1 due to machine constraints).
> >> We are seeing the following situation in sched2:
> >>
> >> ;;   --- forward dependences: 
> >>
> >> ;;   --- Region Dependences --- b 3 bb 0
> >> ;;  insn  codebb   dep  prio  cost   reservation
> >> ;;    --   ---       ---
> >> ;;   38  1395 3 0 6 4
> >> (p0+long_imm+ldst0+lock0),nothing*3 : 44m 43 41 40
> >> ;;   40   491 3 1 2 2
> (p0+long_imm+ldst0+lock0),nothing
> >> : 44m 41
> >> ;;   41   536 3 2 1 1   (p0+no_stl2)|(p1+no_dual)   :
> 44
> >> ;;   43  1340 3 1 2 1   (p0+no_stl2)|(p1+no_dual)   :
> 44m
> >> ;;   44  1440 3 4 1 1   (p0+long_imm)   :
> >>
> >> ;;  dependencies resolved: insn 38
> >> ;;  tick updated: insn 38 into ready
> >> ;;  dependencies resolved: insn 41
> >> ;;  tick updated: insn 41 into ready
> >> ;;  Advanced a state.
> >> ;;  Ready list after queue_to_ready:41:4  38:2
> >> ;;  Ready list after ready_sort:41:4  38:2
> >> ;;  Ready list (t =   0):41:4  38:2
> >> ;;  Chosen insn : 38
> >> ;;0--> b  0: i  38r1=zxn([r0+`b'])
> >> :(p0+long_imm+ldst0+lock0),nothing*3
> >> ;;  dependencies resolved: insn 43
> >> ;;  Ready-->Q: insn 43: queued for 4 cycles (change queue
> index).
> >> ;;  tick updated: insn 43 into queue with cost=4
> >> ;;  dependencies resolved: insn 40
> >> ;;  Ready-->Q: insn 40: queued for 4 cycles (change queue
> index).
> >> ;;  tick updated: insn 40 into queue with cost=4
>

Prototypes for builtin functions

2013-08-29 Thread Paulo Matos
Hi,

I would like to hear how other architectures organize their builtin/intrinsic 
headers.

Until recently we had a header that would look like:

/* Types */
typedef char   V8B  __attribute__ ((vector_size (8)));
...

/* Prototypes */
extern V8B __vec_put_v8b (V8B B, char C, unsigned char D);
...

The problem with this approach (I found out) is that, after seeing the
prototype, GCC changes the location of the builtin's definition from
BUILTINS_LOCATION to the header file/line number, and calling DECL_IS_BUILTIN
on the __vec_put_v8b tree then returns 0. This blocks a few optimizations (I
noticed this when checking specifically why some functions were not being
unrolled properly).

So, I commented out the prototypes from the intrinsics header and left only the 
type definitions, however, tests on intrinsics fail because if I do:
V8B put_v8b_test (V8B a, char value, char index)
{
 V8B b = __vec_put_v8b (a, value, index);
 return b;
}

GCC complains with:
error: incompatible type for argument 1 of '__vec_put_v8b'
note: expected '__vector(8) signed char' but argument is of type 'V8B'

What's the correct way to create the intrinsics header?
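
One layout that sidesteps the relocated-declaration problem, sketched here with
a hypothetical builtin name (__builtin_myport_vec_put_v8b is invented for
illustration), is to keep only the type definitions in the header and map the
user-visible names onto the target builtins with macros, so no prototype ever
overrides the builtin's BUILTINS_LOCATION:

```c
/* Sketch only: the header supplies the vector types and macro
   wrappers, but no prototypes, so the builtin decls keep
   BUILTINS_LOCATION.  */
typedef char V8B __attribute__ ((vector_size (8)));

#define __vec_put_v8b(B, C, D) \
  ((V8B) __builtin_myport_vec_put_v8b ((B), (C), (D)))
```

This also avoids the "expected '__vector(8) signed char'" mismatch, since the
macro argument is converted through the cast rather than checked against a
user-written prototype.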

-- 
Paulo Matos



RE: Prototypes for builtin functions

2013-09-02 Thread Paulo Matos
> -Original Message-
> From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of Paulo
> Matos
> Sent: 29 August 2013 11:44
> To: gcc@gcc.gnu.org
> Subject: Prototypes for builtin functions
> 
> Hi,
> 
> I would like to hear how other architectures organize their builtin/intrinsic
> headers.
> 
> Until recently we had a header that would look like:
> 
> /* Types */
> typedef char V8B  __attribute__ ((vector_size (8)));
> ...
> 
> /* Prototypes */
> extern V8B __vec_put_v8b (V8B B, char C, unsigned char D);
> ...


Besides pinging this question for some sort of answer: isn't it true that if a
declaration is marked extern, then the tree location for the decl shouldn't be
changed?

-- 
Paulo Matos


Why DECL_BUILT_IN and DECL_IS_BUILTIN?

2013-09-03 Thread Paulo Matos
Hi,

Why do we have two macros in tree.h with seemingly the same semantics? 
DECL_BUILT_IN and DECL_IS_BUILTIN?

-- 

Paulo Matos




RE: Why DECL_BUILT_IN and DECL_IS_BUILTIN?

2013-09-03 Thread Paulo Matos
> -Original Message-
> From: Richard Biener [mailto:richard.guent...@gmail.com]
> Sent: 03 September 2013 11:19
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: Re: Why DECL_BUILT_IN and DECL_IS_BUILTIN?
> 
> On Tue, Sep 3, 2013 at 11:43 AM, Paulo Matos  wrote:
> > Hi,
> >
> > Why do we have two macros in tree.h with seemingly the same semantics?
> DECL_BUILT_IN and DECL_IS_BUILTIN?
> 
> The point is they are not the same.
> 

Thanks Richard. I could gather from inspecting GCC that they have different 
definitions and probably are doing different things but I didn't understand why 
they are different. Can you please elaborate on their differences?

Thanks,

Paulo Matos


RE: Why DECL_BUILT_IN and DECL_IS_BUILTIN?

2013-09-03 Thread Paulo Matos

> -Original Message-
> From: Richard Biener [mailto:richard.guent...@gmail.com]
> Sent: 03 September 2013 12:55
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: Re: Why DECL_BUILT_IN and DECL_IS_BUILTIN?
> 
> DECL_IS_BUILTIN is true if the decl was created by the frontend / backend
> rather than by user code (indicated by source location).  DECL_BUILT_IN
> is true if the decl represents a function of the standard library, a
> builtin that is
> recognized by optimization / expansion.  User declared prototypes of
> C library functions are not DECL_IS_BUILTIN but may be DECL_BUILT_IN.
> 
> Richard.
> 

Thanks for the explanation. 

Cheers.


RE: libgccjit.so: an embeddable JIT-compilation library based on GCC

2013-10-10 Thread Paulo Matos

> -Original Message-
> From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of David
> Malcolm
> Sent: 09 October 2013 22:32
> To: gcc@gcc.gnu.org
> Subject: libgccjit.so: an embeddable JIT-compilation library based on GCC
> 
> As some may have seen I posted a patch to gcc-patches that adds a way to
> embed GCC as a shared library, for Just-In-Time compilation, for use
> e.g. by bytecode interpreters:
>   http://gcc.gnu.org/ml/gcc-patches/2013-10/msg00228.html
> 
> I've gone ahead and created a git-only on the mirror as branch
> "dmalcolm/jit":
>  http://gcc.gnu.org/git/?p=gcc.git;a=shortlog;h=refs/heads/dmalcolm/jit
> and I've been committing patches there.
> 
> I plan to post some of the patches for review against trunk (am
> bootstrapping/regtesting them as I write).
> 
> An example of using it can be seen at:
>   https://github.com/davidmalcolm/jittest/blob/master/README.rst
> 
> Some questions for the GCC steering committee:
> 
> * is this JIT work a good thing?  (I think so, obviously, but can I go
>   ahead and e.g. add it to the wiki under "Current Projects"?)
>
> [snip]

I am not on the committee, but the JIT work is useful and awesome, and I am
pretty confident it will have quite a few users. Thank you so much for it.



Testing ICEs resulting from profile directed optimization

2013-10-10 Thread Paulo Matos
Hi,

I have found an ICE reported as (PR 58682) and I have a fix.
However the testcase involved:
* compiling 5 .i files with -fprofile-generate=
* running the executable
* compiling the same 5 .i files with -fprofile-use=, and only then getting the 
ICE.

Is there anything in the GCC testing framework that allows this kind of testing 
as opposed to the straightforward single test compilation?

Paulo Matos




RE: Testing ICEs resulting from profile directed optimization

2013-10-11 Thread Paulo Matos
> -Original Message-
> From: Jan Hubicka [mailto:hubi...@ucw.cz]
> Sent: 10 October 2013 17:24
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: Re: Testing ICEs resulting from profile directed optimization
> 
> > Hi,
> >
> > I have found an ICE reported as (PR 58682) and I have a fix.
> Cool :)
> 
> > However the testcase involved:
> > * compiling a 5 .i files with -fprofile-generate=
> > * running the executable
> > * compiling the same 5 .i files with -fprofile-use=, and only then getting
> the ICE.
> >
> > Is there anything in the GCC testing framework that allows this kind of
> testing as opposed to the straightforward single test compilation?
> 
> You probably only want to put those tests into gcc.dg/tree-prof directory.
> Those tests are run
> with FDO.  You will need to convert them to .c files.
> 

I have looked at the infrastructure and it seems it only supports single-file
tests.
I was not able to reduce the ICE to a single file. The reason is that I need
to compile the project, execute it, and compile it again.

I could in theory simply add the gcda file plus the single .i (transformed into
.c) that causes the ICE, but this would only work for the version of gcc that
creates the gcda, so it seems to be a no-go.

Any suggestions?

-- 
Paulo Matos


Invalid tree node causes segfault in diagnostic

2013-10-11 Thread Paulo Matos
Hello,

I have a testcase that, during parsing, generates an invalid tree. This invalid 
tree triggers tree_check_failed, which was expecting a string_cst.
tree_check_failed calls internal_error with tree_code_name[TREE_CODE (node)] 
without checking that TREE_CODE (node) is valid.

I attach a patch as a suggestion of a fix. I would like some comments.

This patch eliminates the segfault and generates the error:
../../../../../source/adsl-mech/components/common/PsdInfo.c:717:1: internal 
compiler error: tree check: expected string_cst, have (invalid code name) in 
get_named_section, at varasm.c:415

This is obviously another thing I still have to understand why it's happening.
The testcase is still too big (1Mb) and I haven't reproduced it with any 
upstream port so I am not reporting a bug yet.

For the patch attached:

2013-10-11  Paulo Matos  

* tree.c (tree_check_failed): Check that TREE_CODE is valid before 
passing it to tree_code_name.


Paulo Matos




tree_check_failed.patch
Description: tree_check_failed.patch


RE: Invalid tree node causes segfault in diagnostic

2013-10-11 Thread Paulo Matos
> -Original Message-
> From: Richard Biener [mailto:richard.guent...@gmail.com]
> Sent: 11 October 2013 13:47
> To: Paulo Matos
> Cc: gcc@gcc.gnu.org
> Subject: Re: Invalid tree node causes segfault in diagnostic
> 
> 
> Hmm.  We have several places accessing tree_code_name without checking.
> May I suggest to abstract accesses to it with a function call which can
> do the proper checking and return "" instead?
> 

Sounds good. I will prepare a patch with the change suggested for review.
While I am at it, can I patch backends as well? For example mep/mep.c has an 
occurrence
of tree_code_name[TREE_CODE (...

-- 
Paulo Matos


RE: Testing ICEs resulting from profile directed optimization

2013-10-14 Thread Paulo Matos

> -Original Message-
> From: Jan Hubicka [mailto:hubi...@ucw.cz]
> Sent: 11 October 2013 23:23
> To: Paulo Matos
> Cc: Jan Hubicka; gcc@gcc.gnu.org
> Subject: Re: Testing ICEs resulting from profile directed optimization
> 
> You can use dg-additional-sources for multi-file testcases.  I am not sure it
> is what you need,
> I will try to take a look tomorrow..
> 

Didn't know about dg-additional-sources. Let me give that a try.
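
For reference, a gcc.dg/tree-prof test using extra sources might look like the
sketch below. The file names and options are illustrative, not from the actual
PR; the tree-prof harness (profopt.exp) itself supplies the
-fprofile-generate / run / -fprofile-use cycle, so the test only declares its
sources and extra options:

```c
/* Illustrative only: header of a hypothetical multi-file test under
   gcc.dg/tree-prof.  profopt.exp compiles with -fprofile-generate,
   runs the executable, then recompiles with -fprofile-use.  */
/* { dg-additional-sources "pr58682-aux1.c pr58682-aux2.c" } */
/* { dg-options "-O2" } */
```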

-- 
Paulo Matos



RE: Testing ICEs resulting from profile directed optimization

2013-10-14 Thread Paulo Matos
> -Original Message-
> From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of Paulo
> Matos
> Sent: 14 October 2013 09:53
> To: Jan Hubicka
> Cc: gcc@gcc.gnu.org
> Subject: RE: Testing ICEs resulting from profile directed optimization
> 
> 
> Didn't know about dg-additional-sources. Let me give that a try.
>

It works with dg-additional-sources, but there are a few issues regarding its
possible application to gcc trunk, so I am moving the discussion to the thread
in gcc-patches where I initially submitted the patch with the fix.

Thanks,
-- 
Paulo Matos

