Hello Andrew,

Thank you for the detailed response! This gives me a much clearer picture of how things work.
Regarding the two possible approaches:

- I personally find *Option A (self-contained in-memory FS)* more interesting, and I'd like to work on it first.
- However, if *Option B (RPC-based host FS access)* is the preferred approach for GSoC, I’d be happy to work on that as well.

For now, I’ll begin setting up the toolchain and running simple OpenMP target kernels as suggested. Below the quoted mail I have also added a few rough sketches of how I currently picture things, to check that I’ve understood correctly.

Thanks again for your guidance!

Best regards,
Arijit Kumar Das.

On Mon, 10 Mar, 2025, 10:55 pm Andrew Stubbs, <a...@baylibre.com> wrote:

> On 10/03/2025 15:37, Arijit Kumar Das via Gcc wrote:
> > Hello GCC Community!
> >
> > I am Arijit Kumar Das, a second-year engineering undergraduate from
> > NIAMT Ranchi, India. While my major isn’t Computer Science, my
> > passion for system programming, embedded systems, and operating
> > systems has driven me toward low-level development. Programming has
> > always fascinated me—it’s like painting with logic, where each block
> > of code works in perfect synchronization.
> >
> > The project mentioned in the subject immediately caught my attention,
> > as I have been exploring the idea of a simple hobby OS for my
> > Raspberry Pi Zero. Implementing an in-memory filesystem would be an
> > exciting learning opportunity, closely aligning with my interests.
> >
> > I have carefully read the project description and understand that the
> > goal is to modify *newlib* and the *run tools* to redirect system
> > calls for file I/O operations to a virtual, volatile filesystem in
> > host memory, as the GPU lacks its own filesystem. Please correct me
> > if I’ve misunderstood any aspect.
>
> That was the first of two options suggested. The other option is to
> implement a pass-through RPC mechanism so that the runtime actually can
> access the real host file-system.
>
> Option A is more self-contained, but requires inventing a filesystem
> and ultimately will not help all the tests pass.
>
> Option B has more communication code, but doesn't require storing
> anything manually, and eventually should give full test coverage.
>
> A simple RPC mechanism already exists for the use of printf (actually
> "write") on GCN, but was not necessary on NVPTX (a "printf" text output
> API is provided by the driver). The idea is to use a shared memory ring
> buffer that the host "run" tool polls while the GPU kernel is running.
>
> > I have set up the GCC source tree and am currently browsing relevant
> > files in the *gcc/testsuite* directory. However, I am unsure *where
> > the run tools source files are located and how they interact with
> > newlib system calls.* Any guidance on this would be greatly
> > appreciated so I can get started as soon as possible!
>
> You'll want to install the toolchain following the instructions at
> https://gcc.gnu.org/wiki/Offloading and try running some simple OpenMP
> target kernels first. Newlib isn't part of the GCC repo, so if you
> can't find the files then that's probably why!
>
> The "run" tools are installed as part of the offload toolchain, albeit
> hidden under the "libexec" directory because they're really only used
> for testing. You can find the sources with the config/nvptx or
> config/gcn backend files.
>
> User code is usually written using OpenMP or OpenACC, in which case the
> libgomp target plugins serve the same function as the "run" tools.
> These too could use the file-system access, but it's not clear that
> there's a common use-case for that. The case should at least fail
> gracefully though (as they do now).
>
> Currently, system calls such as "open" simply return EACCES
> ("permission denied") so the stub implementations are fairly easy to
> understand (e.g. newlib/libc/sys/amdgcn/open.c). The task would be to
> insert new code there that actually does something. You do not need to
> modify the compiler itself.
>
> Hope that helps
>
> Andrew
>
> > Best regards,
> > Arijit Kumar Das.
> >
> > *GitHub:* https://github.com/ArijitKD
> > *LinkedIn:* https://linkedin.com/in/arijitkd
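
P.S. To check my understanding, here is how I currently imagine a stub
such as newlib/libc/sys/amdgcn/open.c looks. I have not read the actual
file yet, so please treat this as my assumption about the pattern, not
the real source:

/* My guess at the shape of a stub like newlib/libc/sys/amdgcn/open.c
   (an assumption for discussion, not the real file).  */

#include <errno.h>
#undef errno
extern int errno;

int
open (const char *pathname, int flags, int mode)
{
  (void) pathname;
  (void) flags;
  (void) mode;

  /* No filesystem on the GPU today, so just fail.  */
  errno = EACCES;
  return -1;

  /* Option A: replace the two lines above with a lookup in an
     in-memory file table.
     Option B: marshal pathname/flags/mode into the shared ring buffer
     and spin until the host "run" tool writes back the real fd.  */
}

If that is roughly right, then the work stays entirely inside newlib and
the run tools, which matches what you said about not needing to modify
the compiler itself.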
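
And here is a very rough, self-contained sketch of how I picture the
shared-memory ring buffer for Option B. Both sides are ordinary host
threads here just so it compiles and runs on its own; the slot layout,
names, and ops are all my own invention, not taken from the existing GCN
printf/write mechanism:

/* Hypothetical sketch of a host-polled RPC ring buffer.  The "device"
   is simulated by the main thread so the example is runnable on an
   ordinary Linux host.  */

#include <fcntl.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define RPC_SLOTS   8
#define RPC_PAYLOAD 256

enum rpc_op { RPC_NONE = 0, RPC_OPEN, RPC_WRITE, RPC_EXIT };

struct rpc_slot
{
  _Atomic int state;           /* 0 = free, 1 = request posted, 2 = reply ready */
  enum rpc_op op;              /* which syscall is being proxied */
  int32_t arg0;                /* flags for open, fd for write */
  int32_t result;              /* return value filled in by the host */
  char payload[RPC_PAYLOAD];   /* path name or data to write */
};

static struct rpc_slot ring[RPC_SLOTS];

/* Device side: post a request and spin until the host replies.  */
static int
rpc_call (enum rpc_op op, int32_t arg0, const char *data)
{
  static unsigned next;
  struct rpc_slot *s = &ring[next++ % RPC_SLOTS];

  while (atomic_load (&s->state) != 0)
    ;                          /* wait for the slot to be free */
  s->op = op;
  s->arg0 = arg0;
  strncpy (s->payload, data ? data : "", RPC_PAYLOAD - 1);
  atomic_store (&s->state, 1); /* publish the request */

  while (atomic_load (&s->state) != 2)
    ;                          /* wait for the host's reply */
  int r = s->result;
  atomic_store (&s->state, 0); /* release the slot */
  return r;
}

/* Host side: poll the ring and perform the real syscalls.  */
static void *
host_poll (void *unused)
{
  (void) unused;
  for (;;)
    for (int i = 0; i < RPC_SLOTS; i++)
      {
        struct rpc_slot *s = &ring[i];
        if (atomic_load (&s->state) != 1)
          continue;
        if (s->op == RPC_OPEN)
          s->result = open (s->payload, s->arg0, 0644);
        else if (s->op == RPC_WRITE)
          s->result = write (s->arg0, s->payload, strlen (s->payload));
        atomic_store (&s->state, 2);  /* reply ready */
        if (s->op == RPC_EXIT)
          return NULL;
      }
}

int
main (void)
{
  pthread_t host;
  pthread_create (&host, NULL, host_poll, NULL);

  /* Pretend to be the GPU kernel making a couple of proxied calls.  */
  int fd = rpc_call (RPC_OPEN, O_WRONLY | O_CREAT | O_TRUNC,
                     "/tmp/rpc-demo.txt");
  printf ("open() via RPC returned fd %d\n", fd);
  rpc_call (RPC_WRITE, fd, "hello from the pretend device\n");
  rpc_call (RPC_EXIT, 0, NULL);

  pthread_join (host, NULL);
  return 0;
}

I assume the real version would put the ring in host memory visible to
the GPU and use proper memory fences rather than plain spinning, like
the existing GCN "write" support presumably does, but hopefully this
captures the general shape you described.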
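
Finally, this is the kind of trivial OpenMP target kernel I plan to
start with once the offload toolchain from the wiki page is built (the
exact -foffload= target name will depend on my setup, so please treat
the compile line as approximate):

/* Minimal offload smoke test: copy a value back from the target region.
   Build (approximately): gcc -fopenmp -foffload=amdgcn-amdhsa test.c  */

#include <omp.h>
#include <stdio.h>

int
main (void)
{
  int x = -1;
  int on_device = 0;

  #pragma omp target map(from: x, on_device)
  {
    x = 42;
    on_device = !omp_is_initial_device ();
  }

  printf ("x = %d, ran on device: %s\n", x, on_device ? "yes" : "no");
  return 0;
}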