Re: [RFC] zip_vector: in-memory block compression of integer arrays

2022-08-17 Thread Richard Biener via Gcc
On Wed, Aug 17, 2022 at 8:35 AM Michael Clark via Gcc  wrote:
>
> Hi Folks,
>
> This is an edited version of a message posted on the LLVM Discourse.
>
> I want to share what I have been working on as I feel it may be of
> interest to the GCC compiler developers, specifically concerning alias
> analysis and optimizations for iteration of sparse block-based
> multi-arrays. I also have questions about optimization related to this
> implementation, specifically the observability of alias analysis
> pessimization and memory to register optimizations.
>
> I have been working on _zip_vector_. _zip_vector_ is a compressed
> variable length array that uses vectorized block codecs to compress and
> decompress integers using dense variable bit width deltas as well as
> compressing constant values and sequences. _zip_vector_ employs integer
> block codecs optimized for vector instruction sets using the Google
> Highway C++ library for portable SIMD/vector intrinsics.
>
> The high-level class supports 32-bit and 64-bit compressed integer arrays:
>
>   - `zip_vector<i32>`
>     - { 8, 16, 24 } bit signed and unsigned fixed-width values.
>     - { 8, 16, 24 } bit signed deltas with a per-block IV.
>     - constants and sequences using a per-block IV and delta.
>   - `zip_vector<i64>`
>     - { 8, 16, 24, 32, 48 } bit signed and unsigned fixed-width values.
>     - { 8, 16, 24, 32, 48 } bit signed deltas with a per-block IV.
>     - constants and sequences using a per-block IV and delta.
>
> Here is a link to the implementation:
>
> - https://github.com/metaparadigm/zvec/
>
> The README has a background on the delta encoding scheme. If you read
> the source, "zvec_codecs.h" contains the low-level vectorized block
> codecs while "zvec_block.h" contains a high-level interface to the block
> codecs using cpuid-based dynamic dispatch. The high-level sparse integer
> vector class leveraging the block codecs is in "zip_vector.h". It has
> been tested with GCC and LLVM on x86-64 using SSE3, AVX, and AVX-512.
>
> The principle of operation is to employ simplified block codecs that are
> dedicated solely to compressing fixed-width integers and are thus
> extremely fast, unlike typical compression algorithms: _on the order of
> 30-150 GiB/sec_ on a single core when running within the L1 cache on
> Skylake AVX-512. zip_vector achieves its performance by reducing global
> memory bandwidth: it fetches and stores compressed data to and from RAM,
> then uses extremely fast vector codecs to pack and unpack compressed
> blocks within the L1 cache. In this respect it is similar to texture
> compression codecs, but the specific use case is closer to storage for
> index arrays because the block codecs are lossless integer codecs. The
> performance is striking in that in-order read-only traversal can be
> faster than for a regular array, while the primary goal is footprint
> reduction.
>
> The design use case is an offsets array that might contain 64-bit values
> but usually contains smaller values. We wanted the convenience of simply
> using `zip_vector<i32>` or `zip_vector<i64>` while benefiting from the
> space advantages of storing data using 8, 16, 24, 32, and 48-bit deltas.
>
> Q. Why is it specifically of interest to GCC developers?
>
> I think the best way to answer this is with questions. How can we model
> a block-based iterator for a sparse array that is amenable to vectorization?
>
> There are aspects to the zip_vector iterator design that are *not done
> yet* concerning its current implementation. The iteration has two
> phases. There is an inter-block phase at the boundary of each block (the
> logic inside of `switch_page`) that scans and compresses the previously
> active block, updates the page index, and decompresses the next block.
> Then there is a _broad-phase_ for intra-block accesses, which is
> amenable to vectorization due to the use of fixed-size blocks.
>
> *Making 1D iteration as fast as 2D iteration*
>
> Firstly, there is a lot of analysis concerning optimization of the
> iterator that I would like to discuss. One issue is hoisting the
> inter-block boundary test out of the fast path, so that during block
> boundary traversal subsequent block endings are calculated in advance,
> and the broad phase then only requires a pointer increment and a
> comparison with addresses held in registers.
>
> The challenge is getting past compiler alias analysis. Alias analysis
> seems to prevent caching of the sum of the slab base address and active
> area offset in a register versus being demoted to memory accesses. These
> member variables hold the location of the slab and the offset to the
> uncompressed page which are both on the critical path. When these values
> are in memory, _it adds 4 or more cycles of latency for base address
> calculation on every access_. There is also the possibility to hoist and
> fold the active page check as we know we can make constructive proofs
> concerning changes to that value.

Re: [RFC] zip_vector: in-memory block compression of integer arrays

2022-08-17 Thread Michael Clark via Gcc

On 17/08/22 7:10 pm, Richard Biener wrote:


Q. Why is it specifically of interest to GCC developers?

I think the best way to answer this is with questions. How can we model
a block-based iterator for a sparse array that is amenable to vectorization?

There are aspects to the zip_vector iterator design that are *not done
yet* concerning its current implementation. The iteration has two
phases. There is an inter-block phase at the boundary of each block (the
logic inside of `switch_page`) that scans and compresses the previously
active block, updates the page index, and decompresses the next block.
Then there is a _broad-phase_ for intra-block accesses, which is
amenable to vectorization due to the use of fixed-size blocks.
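
The two phases above can be sketched schematically (illustrative only:
names, layout, and the page-switch body are stand-ins, not the actual
zip_vector implementation) as a cursor that walks an uncompressed page
buffer and only falls into page-switch logic at block boundaries:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Schematic two-phase traversal over a blocked array. PAGE_INTERVAL plays
// the role of zip_vector's fixed block size; switch_page() stands in for
// the real compress-previous / decompress-next logic.
constexpr size_t PAGE_INTERVAL = 4;

struct blocked_array {
    std::vector<int64_t> storage;   // stands in for the compressed slab
    std::vector<int64_t> page;      // uncompressed active page (L1-resident)
    size_t active_page = SIZE_MAX;

    explicit blocked_array(std::vector<int64_t> v)
        : storage(std::move(v)), page(PAGE_INTERVAL) {}

    // Inter-block phase: runs only once per PAGE_INTERVAL elements.
    void switch_page(size_t y) {
        // (real code would compress the old page and decompress block y)
        for (size_t j = 0; j < PAGE_INTERVAL; j++)
            page[j] = storage[y * PAGE_INTERVAL + j];
        active_page = y;
    }

    int64_t get(size_t i) {
        size_t y = i / PAGE_INTERVAL;
        if (y != active_page) switch_page(y); // slow path: block boundary
        return page[i % PAGE_INTERVAL];       // broad phase: vectorizable
    }
};

inline int64_t sum(blocked_array& a, size_t n) {
    int64_t x2 = 0;
    for (size_t i = 0; i < n; i++) x2 += a.get(i);
    return x2;
}
```

The point of the optimization discussion below is to get the per-element
path to compile down to a pointer increment and compare, rather than
re-deriving `y` and re-checking `active_page` on every access.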

*Making 1D iteration as fast as 2D iteration*

Firstly, there is a lot of analysis concerning optimization of the
iterator that I would like to discuss. One issue is hoisting the
inter-block boundary test out of the fast path, so that during block
boundary traversal subsequent block endings are calculated in advance,
and the broad phase then only requires a pointer increment and a
comparison with addresses held in registers.

The challenge is getting past compiler alias analysis. Alias analysis
seems to prevent caching of the sum of the slab base address and active
area offset in a register versus being demoted to memory accesses. These
member variables hold the location of the slab and the offset to the
uncompressed page which are both on the critical path. When these values
are in memory, _it adds 4 or more cycles of latency for base address
calculation on every access_. There is also the possibility to hoist and
fold the active page check as we know we can make constructive proofs
concerning changes to that value.

Benchmarks compare the performance of 1D and 2D style iterators. At
certain times the compiler would hoist the base and offset pointers from
member variable accesses into registers in the 1D version, making a
noticeable difference in performance. From the perspective of
single-threaded code, the only way the pointer to the active region can
change is inside `switch_page(size_t y)`.

The potential payoff is huge because one may be able to access data ~
0.9X - 3.5X faster than simply accessing integers in RAM when combining
the reduction in global memory bandwidth with auto-vectorization, but
the challenge is generating safe code for the simpler 1D iteration case
that is as efficient as explicit 2D iteration.

1D iteration:

  for (auto x : vec) x2 += x;

2D iteration:

  for (size_t i = 0; i < n; i += decltype(vec)::page_interval) {
      i64 *cur = &vec[i], *end = cur + decltype(vec)::page_interval;
      while (cur != end) x2 += *cur++;
  }

Note: In this example, I avoid having a different size loop tail but
that is also a consideration.
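
When n is not a multiple of the block size, the 2D form needs the usual
block-plus-remainder shape; a minimal sketch (the `get` accessor and names
are illustrative, not zip_vector's API):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Sketch of 2D iteration with a scalar tail for n not divisible by the
// block size. `get` stands in for element access through the container.
template <typename Get>
int64_t sum_blocked(Get get, size_t n, size_t interval) {
    int64_t x2 = 0;
    size_t i = 0, full = n - n % interval;
    for (; i < full; i += interval)     // full blocks: vectorizable body
        for (size_t j = 0; j < interval; j++) x2 += get(i + j);
    for (; i < n; i++) x2 += get(i);    // remainder: scalar tail
    return x2;
}
```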

I trialled several techniques using a simplified version of the
`zip_vector` class in which `switch_page` was substituted with simple
logic, so that it was possible to get the compiler to coalesce the slab
base pointer and active area offset into a single calculation upon page
crossings, and to hoist the active_page check (_y-parameter_) so that it
only occurs on block crossings. I found that when the `switch_page`
implementation became more complex, e.g. adding an extern call to
`malloc`, the compiler would resort to more conservatively fetching
through a pointer to a member variable for the base pointer and offset
calculation. See here:

https://github.com/metaparadigm/zvec/blob/756e583472028fcc36e94c0519926978094dbb00/src/zip_vector.h#L491-L496

So I got to the point where I thought it would help to get input from
compiler developers to figure out how to observe which internal
constraints are violated by `switch_page`, preventing the base pointer
and offset address calculation from being cached in registers.
`slab_data` and `active_area` are neither volatile nor atomic, so threads
should not expect their updates to be atomic or to go through memory.

I tried a large number of small experiments: e.g. collapsing `slab_data`
and `active_area` into one pointer at the end of `switch_page` so that
only one pointer needs to be accessed. Also, the `active_page` test does
not necessarily need to be in the broad phase. I attempted to manually
hoist these variables by modifying the iterators but found it necessary
to keep them where they were, to avoid introducing stateful invariants in
the iterators that could be invalidated by read accesses.
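
For reference, one standard shape for this class of workaround (a sketch
of the general technique, not zip_vector's code; the struct and names
mirror the member names discussed above but are purely illustrative) is to
copy the member values into block-scoped locals, so the compiler can prove
nothing in the loop body aliases them:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Illustrative layout only: slab base pointer plus active area offset,
// as discussed above.
struct view {
    int64_t* slab_data;
    size_t   active_area;
};

inline int64_t sum_page(const view& v, size_t count) {
    // Hoist the member loads into locals: the compiler no longer has to
    // re-load v.slab_data / v.active_area through memory on every
    // iteration in case a store in the loop aliased them.
    int64_t* base = v.slab_data + v.active_area;
    int64_t x2 = 0;
    for (size_t i = 0; i < count; i++) x2 += base[i];
    return x2;
}
```

In the real class this is exactly what is hard to keep valid, because the
locals become stale the moment `switch_page` runs.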

Stack/register-based coroutines could help due to the two distinct
states in the iterator.

I want to say that it is not as simple as one might think on the
surface. I tried several approaches to coalesce address calculations and
move them into the page switch logic, all leading to performance
fall-off, almost as if the compiler was carrying some pessimization that
forced touched member variables to be accessed via

Wanted: original ConceptGCC downloads / branch, concepts-lite branch

2022-08-17 Thread Aaron Gray via Gcc
Hi,

I am looking for the original ConceptGCC source code, the
https://www.generic-programming.org/software/ConceptGCC/download.html has
all broken links and the SVN is gone.

Is this available on GCC git or SVN ?

Also I am wondering if the original concepts-lite code is available too
anywhere please ?

Also any pointers to the documentation for the current implementation ?

Regards,

Aaron
-- 
Aaron Gray

Independent Open Source Software Engineer, Computer Language Researcher,
Information Theorist, and Computer Scientist.


Re: Wanted: original ConceptGCC downloads / branch, concepts-lite branch

2022-08-17 Thread Ben Boeckel via Gcc
On Wed, Aug 17, 2022 at 12:42:42 +0100, Aaron Gray via Gcc wrote:
> I am looking for the original ConceptGCC source code, the
> https://www.generic-programming.org/software/ConceptGCC/download.html has
> all broken links and the SVN is gone.
> 
> Is this available on GCC git or SVN ?

There is this repo that may be of interest:

https://github.com/asutton/gcc

No idea of the state of it though or how useful it is for C++20
Concepts.

--Ben


Re: Wanted: original ConceptGCC downloads / branch, concepts-lite branch

2022-08-17 Thread Aaron Gray via Gcc
On Wed, 17 Aug 2022 at 13:16, Ben Boeckel  wrote:
>
> On Wed, Aug 17, 2022 at 12:42:42 +0100, Aaron Gray via Gcc wrote:
> > I am looking for the original ConceptGCC source code, the
> > https://www.generic-programming.org/software/ConceptGCC/download.html has
> > all broken links and the SVN is gone.
> >
> > Is this available on GCC git or SVN ?
>
> There is this repo that may be of interest:
>
> https://github.com/asutton/gcc
>
> No idea of the state of it though or how useful it is for C++20
> Concepts.

Thanks, this is from 2018.

Aaron
-- 
Aaron Gray

Independent Open Source Software Engineer, Computer Language
Researcher, Information Theorist, and Computer Scientist.


Re: Wanted: original ConceptGCC downloads / branch, concepts-lite branch

2022-08-17 Thread Jonathan Wakely via Gcc
On Wed, 17 Aug 2022, 13:43 Aaron Gray via Gcc,  wrote:

> Hi,
>
> I am looking for the original ConceptGCC source code, the
> https://www.generic-programming.org/software/ConceptGCC/download.html has
> all broken links and the SVN is gone.
>
> Is this available on GCC git or SVN ?
>

I don't think so, but I can't check now. I would ask Doug Gregor where that
code can be found now.


> Also I am wondering if the original concepts-lite code is available too
> anywhere please ?
>

Define "original". Andrew Sutton's Concepts Lite implementation was merged
into GCC trunk, and evolved to also support C++20 concepts.

> Also any pointers to the documentation for the current implementation ?
>

Only comments in the code, I think.


Re: Wanted: original ConceptGCC downloads / branch, concepts-lite branch

2022-08-17 Thread Jonathan Wakely via Gcc
On Wed, 17 Aug 2022, 14:46 Jonathan Wakely,  wrote:

>
>
> On Wed, 17 Aug 2022, 13:43 Aaron Gray via Gcc,  wrote:
>
>> Hi,
>>
>> I am looking for the original ConceptGCC source code, the
>> https://www.generic-programming.org/software/ConceptGCC/download.html has
>> all broken links and the SVN is gone.
>>
>> Is this available on GCC git or SVN ?
>>
>
> I don't think so, but I can't check now. I would ask Doug Gregor where
> that code can be found now.
>
>
>> Also I am wondering if the original concepts-lite code is available too
>> anywhere please ?
>>
>
> Define "original". Andrew Sutton's Concepts Lite implementation was merged
> into GCC trunk, and evolved to also support C++20 concepts.
>

I think it was merged in August 2015 and first released in GCC 6.1


>
>
>> Also any pointers to the documentation for the current implementation ?
>>
>
> Only comments in the code, I think.
>
>
>


Re: Wanted: original ConceptGCC downloads / branch, concepts-lite branch

2022-08-17 Thread Aaron Gray via Gcc
>>> Also I am wondering if the original concepts-lite code is available too
>>> anywhere please ?
>>
>>
>> Define "original".

A post-ConceptGCC, GCC branch implementation?

>> Andrew Sutton's Concepts Lite implementation was merged into GCC trunk, and 
>> evolved to also support C++20 concepts.

Great, I thought so!

> I think it was merged in August 2015 and first released in GCC 6.1

I am just trying to get an overview of the implementation history, and
code bases.

Many thanks,

Aaron
-- 
Aaron Gray

Independent Open Source Software Engineer, Computer Language
Researcher, Information Theorist, and amateur Computer Scientist.


man-pages futex(2) example program using _Atomic

2022-08-17 Thread Alejandro Colomar via Gcc

Hi,

The man-pages example program for the futex(2) page uses 
atomic_compare_exchange_strong(), but it seems to use it incorrectly, 
according to clang-tidy(1) (see below).  I've never used _Atomic, and so 
I'm not confident in fixing the program.  Would you mind having a look 
at it and possibly sending a patch?


Thanks,

Alex

===

alx@asus5775:~/src/linux/man-pages/man-pages$ make 
tmp/src/man2/futex.2.d/futex.lint-c.clang-tidy.touch

LINT (clang-tidy)   tmp/src/man2/futex.2.d/futex.lint-c.clang-tidy.touch
Error while processing 
/home/alx/src/linux/man-pages/man-pages/tmp/src/man2/futex.2.d/futex.c.
/home/alx/src/linux/man-pages/man-pages/tmp/src/man2/futex.2.d/futex.c:60:13: 
error: address argument to atomic operation must be a pointer to _Atomic 
type ('uint32_t *' (aka 'unsigned int *') invalid) [clang-diagnostic-error]

if (atomic_compare_exchange_strong(futexp, &one, 0))
^
/usr/lib/llvm-14/lib/clang/14.0.6/include/stdatomic.h:135:67: note: 
expanded from macro 'atomic_compare_exchange_strong'
#define atomic_compare_exchange_strong(object, expected, desired) 
__c11_atomic_compare_exchange_strong(object, expected, desired, 
__ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)
  ^ 
   ~~
/home/alx/src/linux/man-pages/man-pages/tmp/src/man2/futex.2.d/futex.c:84:9: 
error: address argument to atomic operation must be a pointer to _Atomic 
type ('uint32_t *' (aka 'unsigned int *') invalid) [clang-diagnostic-error]

if (atomic_compare_exchange_strong(futexp, &zero, 1)) {
^
/usr/lib/llvm-14/lib/clang/14.0.6/include/stdatomic.h:135:67: note: 
expanded from macro 'atomic_compare_exchange_strong'
#define atomic_compare_exchange_strong(object, expected, desired) 
__c11_atomic_compare_exchange_strong(object, expected, desired, 
__ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)
  ^ 
   ~~
make: *** [lib/lint-c.mk:58: 
tmp/src/man2/futex.2.d/futex.lint-c.clang-tidy.touch] Error 1
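
For what it's worth, the diagnostic is saying that C11's
atomic_compare_exchange_strong() requires the pointed-to object itself to
be declared with atomic type, e.g. `_Atomic uint32_t *futexp` rather than
`uint32_t *futexp`. The same requirement, illustrated with C++'s
std::atomic analogue (a sketch only, not the man-pages program itself;
the acquire/release names are made up for the example):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Compare-and-swap requires the object itself to have atomic type.
// In C11 that means declaring the futex word `_Atomic uint32_t`; this
// sketch shows the equivalent pattern with C++'s std::atomic.
inline bool acquire(std::atomic<uint32_t>* futexp) {
    uint32_t one = 1;
    // Succeeds (returning true) only if *futexp was 1, setting it to 0.
    return futexp->compare_exchange_strong(one, 0);
}

inline bool release(std::atomic<uint32_t>* futexp) {
    uint32_t zero = 0;
    // Succeeds only if *futexp was 0, setting it to 1.
    return futexp->compare_exchange_strong(zero, 1);
}
```

The sequentially-consistent default ordering here matches the
__ATOMIC_SEQ_CST that the C11 macro expands to in the log above.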

--
Alejandro Colomar





Re: Wanted: original ConceptGCC downloads / branch, concepts-lite branch

2022-08-17 Thread Richard Earnshaw via Gcc

On 17/08/2022 12:42, Aaron Gray via Gcc wrote:

Hi,

I am looking for the original ConceptGCC source code, the
https://www.generic-programming.org/software/ConceptGCC/download.html has
all broken links and the SVN is gone.

Is this available on GCC git or SVN ?

Also I am wondering if the original concepts-lite code is available too
anywhere please ?

Also any pointers to the documentation for the current implementation ?

Regards,

Aaron


Notwithstanding what others have already said, the various concepts 
branches are still in the git repository, but aren't in the standard 
pull set.  You can use git fetch to explicitly pull them:


d743a72b52bcfaa1effd7fabe542c05a30609614  refs/dead/heads/c++-concepts
780065c813a72664bd46a354a2d26087464c74fc  refs/dead/heads/conceptgcc-branch
ce85971fd96e12d0d6675ecbc46c9a1884df766c  refs/dead/heads/cxx0x-concepts-branch
14d4dad929a01ff7179350f0251af752c3125d74  refs/deleted/r131428/heads/cxx0x-concepts-branch


I haven't looked as to which is most likely to be relevant.

R.

PS, note that these branches may not appear in some mirrors if they only 
mirror the default refs/heads set.


C++ Concepts: Working Paper n number ?

2022-08-17 Thread Aaron Gray via Gcc
Hi, Another query please.

I seem to have found the Concepts Technical Specification :-

https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3929.pdf

But I am looking for the N number of the C++ Concepts Working Paper?

Many thanks in advance,

Aaron
-- 
Aaron Gray

Independent Open Source Software Engineer, Computer Language Researcher,
Information Theorist, and Computer Scientist.


[RFC] Draft release roadmap for RVV v1.0 formal release

2022-08-17 Thread eop Chen
Hi all,

As mentioned in the first RFC letter, here we want to share our draft
roadmap for the formal release.

The main goal of this release is to stabilize the environment of the RVV C
intrinsics for users. So, as mentioned in the first RFC letter, we want to
consider the completeness of the current intrinsic APIs, do minimal fixes,
and leave the current implementation “as-is” for v1.0.

As always, we remain open-minded about new proposals, but given our aim of
a stable release, we suggest that anything that breaks backward
compatibility, or any proposal involving a big change to the current
version of the RVV C intrinsics, should not be considered for this
release; we can dwell on them more in the future.

We expect it to take about half a year to work our way to the release.
Alongside existing features and issues, we want to spare about 3~4 months
to gather and address comments from the open community before drafting a
release candidate. Then, with respect to the “Ratification Policy” of
RISC-V International, the release candidate will be held in public for 45
days for any last comments. Lastly, we will announce the document frozen
and release the formal v1.0.

To keep the release in the public eye, we plan to hold a monthly meeting so 
everyone can be aware of the
remaining features to be done. In the meeting we will also revisit some 
important details of the intrinsics to make
sure we clearly define them in the v1.0 documentation.

Taking other meeting schedules into account, we are using a similar time
slot to the PS-ABI meeting. The time may collide with sig-datacenter
meetings, but we assume that the participants there may not be too
interested in the topics we will cover. So maybe 9/5 would be a good date
to kick off the first meeting, to then be held on a Monday every 4 weeks?

Google calendar link

ICal
We also need to emphasize before the discussion that a proposal needs to
be backed by an implementation in either GCC or LLVM, and we must be able
to confirm that the proposal is **do-able** in both compilers.

Based on past participation in issues and pull requests, we have some
active developers on both GCC and LLVM. At a minimum we want to seek their
comments on, and acceptance of, the proposed features and decisions before
settling on what should be done.

———

Regarding current issues, let us first focus on the main features we hope
to land for the completeness of the RVV C intrinsics.

1. Adding a set of fixed-point rounding-mode intrinsics

Rationale: It is imaginable that common applications with limited
precision will leverage the set of RVV fixed-point instructions. Explicit
configuration of the rounding mode and saturation is currently not
possible, since the PS-ABI is planning to define them as call-clobbered.
Therefore we need to provide an additional set of fixed-point intrinsics
with the rounding mode explicitly specified in the function name, just
like the policy intrinsics added recently (#137).

@arcbbb (Shih-Po Hung) has raised such an issue in #144, which we think is
a good starting point for discussion. His proposal seeks to replace the
current intrinsics, but on second thought we prefer adding more intrinsics
rather than replacing the current ones, since replacement would break
backward compatibility. We are open to discussion and more comments in the
meetings and offline in the issue.

2. Adding reinterpret_cast between vector mask registers and integer-type
m1 registers

Rationale: Issues have been raised where users of the intrinsics want to
convert mask vector registers to m1 integer-type vector registers (#69).

We think this is a reasonable demand, but we also want to gather use cases
to back up the addition of this feature. An issue has been opened in #156
in the hope of collecting input from the community. Please consider
sharing your thoughts there regarding this issue.

———

Other miscellaneous issues we hope to resolve before v1.0 are the
following. We list our proposals below and seek more comments.

1. Redundancy in the “LMUL truncate and extension functions” (#115)

Brief: The issue proposed to remove t

Re: Reproducible builds - supporting relative paths in *-prefix-map

2022-08-17 Thread Mark Wielaard
Hi Richard,

On Mon, Aug 15, 2022 at 09:29:03PM +0100, Richard Purdie wrote:
> On Mon, 2022-08-15 at 21:55 +0200, Mark Wielaard wrote:
> > I might be misinterpreting the issue you are seeing.
> > 
> > But one problem with debuginfo/DWARF is that relative source paths
> > aren't clearly defined. If you move or install the executable or
> > (split) debug file out of the build directory a DWARF reader has no
> > way to know what the paths are relative to.
> > 
> > So for DWARF the paths always have to be absolute (they can still be
> > relative to the compilation dir (DW_AT_comp_dir), but at least that
> > has to be absolute (and the compiler should turn any relative path
> > into an absolute one or make sure they are relative to an absolute
> > compilation directory path).
> 
> It gets slightly more complicated as we build in a directory separate
> to the source where we can. Some source files are generated source
> files and placed in the build directory whilst many are in the source
> directory. DW_AT_comp_dir can be set to one or the other but it is the
> relative path between build and source which is problematic.

Could you give an example directory structure when that creates a
problem? Any debug paths generated should be absolute or relative to
an absolute path.  In particular the DW_AT_comp_dir is normally an
absolute path (getpwd). So normally just remapping some prefix of the
pwd (either srcdir and/or builddir) should work to keep the relative
paths correct when moving the (generated) sources under that new
absolute path. I am probably missing something about the directory
setup that makes that reasoning invalid.

> We split the debuginfo into a separate package. We also look at the
> sources it references and those go into a different separate package
> too. We support populating a remote debuginfod server with these or
> installing them onto the target.

Nice. How do you look up the referenced sources and where are they
then installed?

> > Using known absolute paths generated with debugedit or
> > -fdebug-prefix-map makes sure the paths used in the debuginfo/DWARF
> > are always the same independent from the current srcdir or builddir to
> > make them reproducible. And the user/tools don't have to guess what
> > the relative paths are relative to.
> 
> We have that working and set debug-prefix-map today. What is
> problematic is trying to recreate the relative paths on target between
> our source and build directories. Currently, most generated files in
> the build directory just don't get handled correctly on target. We'd
> like to fix that. There is currently no way to remap a relative path
> though, at least as far as I could determine.

So is the problem that when collecting the generated source files you
cannot map the debug-prefix-map back anymore? The relative paths are
relative to the original (absolute) prefix, but you only have the
remapped prefix?

Cheers,

Mark


Re: Reproducible builds - supporting relative paths in *-prefix-map

2022-08-17 Thread Richard Purdie via Gcc
Hi Mark,

Thanks for the reply!

On Wed, 2022-08-17 at 13:23 +0200, Mark Wielaard wrote:
> Hi Richard,
> 
> On Mon, Aug 15, 2022 at 09:29:03PM +0100, Richard Purdie wrote:
> > On Mon, 2022-08-15 at 21:55 +0200, Mark Wielaard wrote:
> > > I might be misinterpreting the issue you are seeing.
> > > 
> > > But one problem with debuginfo/DWARF is that relative source paths
> > > aren't clearly defined. If you move or install the executable or
> > > (split) debug file out of the build directory a DWARF reader has no
> > > way to know what the paths are relative to.
> > > 
> > > So for DWARF the paths always have to be absolute (they can still be
> > > relative to the compilation dir (DW_AT_comp_dir), but at least that
> > > has to be absolute (and the compiler should turn any relative path
> > > into an absolute one or make sure they are relative to an absolute
> > > compilation directory path).
> > 
> > It gets slightly more complicated as we build in a directory separate
> > to the source where we can. Some source files are generated source
> > files and placed in the build directory whilst many are in the source
> > directory. DW_AT_comp_dir can be set to one or the other but it is the
> > relative path between build and source which is problematic.
> 
> Could you give an example directory structure when that creates a
> problem? Any debug paths generated should be absolute or relative to
> an absolute path.  In particular the DW_AT_comp_dir is normally an
> absolute path (getpwd). So normally just remapping some prefix of the
> pwd (either srcdir and/or builddir) should work to keep the relative
> paths correct when moving the (generated) sources under that new
> absolute path. I am probably missing something about the directory
> setup that makes that reasoning invalid.

An example from my build machine would be the gcc source here:

/media/build1/poky/build/tmp/work-shared/gcc-12.1.0-r0/gcc-12.1.0/

and libgcc being compiled here:

/media/build1/poky/build/tmp/work/core2-64-poky-linux/libgcc/12.1.0-r0/gcc-12.1.0/build.x86_64-poky-linux.x86_64-poky-linux

so relatively that is:

../../../work/core2-64-poky-linux/libgcc/12.1.0-r0/gcc-12.1.0/build.x86_64-poky-linux.x86_64-poky-linux

On target we put the gcc source at /usr/src/debug/gcc/12.1.0/.

To make this work we'd have to put the actual gcc source three levels
under that, and replicate the "work/core2-64-poky-linux/libgcc/12.1.0-
r0/gcc-12.1.0/build.x86_64-poky-linux.x86_64-poky-linux" structure,
neither of which are things we really want to do.

It gets more complicated as we support the idea of "external source"
which is from an arbitrary location on the system but built within
tmp/work/xxx. In the general case we can't remap all possibilities.

> > We split the debuginfo into a separate package. We also look at the
> > sources it references and those go into a different separate package
> > too. We support populating a remote debuginfod server with these or
> > installing them onto the target.
> 
> Nice. How do you look up the referenced sources and where are they 
> then installed?

We use dwarfsrcfiles which it looks like you may have written! :)

https://git.yoctoproject.org/poky/tree/meta/recipes-devtools/dwarfsrcfiles/files/dwarfsrcfiles.c

Installation is configurable but usually in
/usr/src/debug///

> > > Using known absolute paths generated with debugedit or
> > > -fdebug-prefix-map makes sure the paths used in the debuginfo/DWARF
> > > are always the same independent from the current srcdir or builddir to
> > > make them reproducible. And the user/tools don't have to guess what
> > > the relative paths are relative to.
> > 
> > We have that working and set debug-prefix-map today. What is
> > problematic is trying to recreate the relative paths on target between
> > our source and build directories. Currently, most generated files in
> > the build directory just don't get handled correctly on target. We'd
> > like to fix that. There is currently no way to remap a relative path
> > though, at least as far as I could determine.
> 
> So is the problem that when collecting the generated source files you
> cannot map the debug-prefix-map back anymore?

Correct, if we build with relative paths to autoconf (which we
effectively have to so we can avoid lots of full path hardcoding), we
can't collect up and remap the generated files.

>  The relative paths are relative to the original (absolute) prefix, but 
> you only have the remapped prefix?

We can't recreate the relative paths on target (in the general external
source case) and we can't remap them with the code as it stands today.

Cheers,

Richard

Re: Wanted: original ConceptGCC downloads / branch, concepts-lite branch

2022-08-17 Thread Jonathan Wakely via Gcc
On Wed, 17 Aug 2022, 16:20 Richard Earnshaw via Gcc, 
wrote:

>
>
> On 17/08/2022 12:42, Aaron Gray via Gcc wrote:
> > Hi,
> >
> > I am looking for the original ConceptGCC source code, the
> > https://www.generic-programming.org/software/ConceptGCC/download.html
> has
> > all broken links and the SVN is gone.
> >
> > Is this available on GCC git or SVN ?
> >
> > Also I am wondering if the original concepts-lite code is available too
> > anywhere please ?
> >
> > Also any pointers to the documentation for the current implementation ?
> >
> > Regards,
> >
> > Aaron
>
> Not withstanding what others have already said, the various concepts
> branches are still in the git repository, but aren't in the standard
> pull set.  You can use git fetch to explicitly pull them:
>
> d743a72b52bcfaa1effd7fabe542c05a30609614
> refs/dead/heads/c++-concepts

This is "concepts lite".


> 780065c813a72664bd46a354a2d26087464c74fc refs/dead/heads/conceptgcc-branch

I think this was Doug Gregor's work, but I don't know how much of
ConceptGCC is present on the branch.


> ce85971fd96e12d0d6675ecbc46c9a1884df766c refs/dead/heads/cxx0x-concepts-branch
> 14d4dad929a01ff7179350f0251af752c3125d74 refs/deleted/r131428/heads/cxx0x-concepts-branch

This is described at https://gcc.gnu.org/git.html as:

"This branch contains the beginnings of a re-implementation of Concepts, a
likely future feature of C++, using some of the code from the prototype
implementation on conceptgcc-branch. It is not currently maintained."

I don't know who did this work, check the git history.


Re: C++ Concepts: Working Paper n number ?

2022-08-17 Thread Jonathan Wakely via Gcc
On Wed, 17 Aug 2022, 16:52 Aaron Gray via Gcc,  wrote:

> Hi, Another query please.
>
> I seem to have found the Concepts Technical Specification :-
>
> https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3929.pdf

That's an old draft, see https://en.cppreference.com/w/cpp/experimental for
the ISO TS number and the draft numbers.


>
> But am looking for the n number of the C++ Concepts Working Paper ?
>

Do you mean C++0x Concepts? N2042, and see N2618.