Re: Package for development of out-of-tree kernel modules written in Rust
On 26/02/25 19:47, Miguel Ojeda wrote:
> On Wed, Feb 26, 2025 at 7:24 PM NoisyCoil wrote:
>> It does indeed depend on the configuration, because at least some abstractions are enabled via the configuration and different metadata will be produced depending on whether they are or not (Miguel should confirm this, but I'm pretty sure about it). The first example that comes to my mind is the firmware abstractions, which are already in stable. So it can't go in linux-kbuild, and if it can't go in linux-headers either then it needs a new package.
>
> I was referring to the `.so`, not the `.rmeta`s, i.e. I thought Ben was referring to the last paragraph quoted. The `.rmeta`s definitely depend on the kernel config (and always will). The `.so` do not get the config passed right now, but we may end up passing it if we need it. So I would just assume they do.

Ah, I assumed he was talking about both.

Ben, do I understand correctly that the .so files shouldn't go in linux-headers because you want to use *linux-headers* for cross-compiling, meaning everything in linux-headers should be usable by the build arch instead of the target arch? If this is the case, then I would also ask Miguel whether the .rmeta files can be used from another architecture for cross-compiling. I think the .rmeta files can only be used with the same rustc that compiled the kernel crates? I don't know exactly which checks are performed on the compiler; are they only version checks?

> Cheers,
> Miguel
Re: Package for development of out-of-tree kernel modules written in Rust
On 26/02/25 13:40, Miguel Ojeda wrote:
> Ah, and by the way, there will likely be way more `.rmeta`s and `.so`s generated (and where they get placed will change) in the future, since the system will change, so please keep that in mind (e.g. perhaps try to avoid hardcoding details and/or overfitting on the current setup).

Yeah, I'd imagined this could happen. Currently d/rules just copies rust/{*.rmeta,*.so} into the destination directory. My guess was that in the future they could be placed in subdirectories of rust/, but on second thought I suppose that, as subsystems accept Rust, they could end up pretty much anywhere in the build directory? If that is the case, then I think hardcoding can hardly be avoided: one will just have to change the hardcoded paths as needed (not ideal, but oh well; also, absolutely not unheard of :-)). I say this mostly because of the *.so files: assuming that all *.rmeta files must be installed is probably a good enough assumption, but the same is not true for *.so files. On the other hand, one could maybe use something like `objdump -T whatever.so | grep rustc_proc_macro_decls` to discover whether a .so file is a Rust proc macro?
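To make that idea concrete, something along these lines is what I have in mind for d/rules instead of copying rust/*.so unconditionally. This is an untested sketch; $KBUILD_OUTPUT and $DESTDIR are just placeholders, and the symbol match simply follows the objdump/grep suggestion above:

    # Sketch only: install every .rmeta, but only those .so files that export a
    # proc-macro declarations symbol (the objdump/grep idea from above).
    # $KBUILD_OUTPUT and $DESTDIR are placeholders, not the names used in d/rules.
    find "$KBUILD_OUTPUT/rust" -name '*.rmeta' -exec cp -t "$DESTDIR" {} +
    for so in "$KBUILD_OUTPUT"/rust/*.so; do
            if objdump -T "$so" | grep -q rustc_proc_macro_decls; then
                    cp "$so" "$DESTDIR"
            fi
    done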
Re: Package for development of out-of-tree kernel modules written in Rust
On 26/02/25 14:22, Bastian Blank wrote:
>> case one can just add the Rust bits there. But I still think the Rust bits should be installed in /usr/lib [1] instead of /usr/src.
>
> There is no need to get picky, sorry.

No need to be sorry, this is the kind of feedback I am looking for. If you confirm you are fine with linux-headers-@abiname@@localversion@ installing binary files (including shared libraries) under /usr/src, then that's what I'll do, and there will be no need for new packages. I had assumed this shouldn't be done (and I personally frown upon it, but that is of course a matter of opinion). My assumption may well be wrong, though, and in the end my main interest is shipping these files in *some* kernel package; which one is not that important to me.
Re: Package for development of out-of-tree kernel modules written in Rust
Hi Ben,

On 26/02/25 19:11, Ben Hutchings wrote:
> This shouldn't go in a linux-headers package, because we aim to support cross-builds of modules. If it doesn't depend on the kernel configuration (aside from CONFIG_RUST being enabled) then it belongs in linux-kbuild.
>
> Ben.

It does indeed depend on the configuration, because at least some abstractions are enabled via the configuration and different metadata will be produced depending on whether they are or not (Miguel should confirm this, but I'm pretty sure about it). The first example that comes to my mind is the firmware abstractions, which are already in stable. So it can't go in linux-kbuild, and if it can't go in linux-headers either then it needs a new package.
Re: Package for development of out-of-tree kernel modules written in Rust
On 26/02/25 19:10, Miguel Ojeda wrote:
> On Wed, Feb 26, 2025 at 3:15 PM NoisyCoil wrote:
>> Yeah I'd imagined this could happen. Currently d/rules just copies rust/{*.rmeta,*.so} into the destination directory. My guess was that in the future they could be placed in subdirectories of rust/, but on a second thought I think as subsystems accept Rust they could be located kind of anywhere in the build directory? If this is the case, then I
>
> Unknown yet -- my current plan is to propose to generate them all in a single place (which may be different than their current place) for simplicity. But if people disagree, I may have to place them all over the tree.
>
>> think hardcoding can hardly be avoided. One will just have to change the hardcoded paths as needed (not ideal, but oh, well; also, absolutely not unheard of :-)). I say this mostly because of the *.so files: assuming that all *.rmeta files must be installed probably is a good enough assumption, but the same is not true for *.so files.
>
> If a single place is better for you, that is another argument that I can use to push for the thing I mentioned above, so I wouldn't mind to hear that! 🙂

My personal opinion (which tbf doesn't matter a lot, since I won't be the one dealing with this in the long run) is that, if there is no strong reason for placing them at specific but seemingly random places, picking a single place would be best. A few reasons I can think of:

1. it makes them more discoverable;
2. it relieves packagers from having to manually keep track of what must be installed, especially if whether they are generated or not depends on the config (I think this is not the case right now, but I don't see why it couldn't be in the future);
3. if they are scattered all over the tree, the package that ships them will be a large tree of directories each mostly containing a single Rust artifact, which is quite ugly imo.

>> On the other hand, one could maybe use something like `objdump -T whatever.so | grep rustc_proc_macro_decls` to discover if a .so file is a rust proc macro?
>
> In the case where we are everywhere, I guess I could output a file easily with the paths of all the needed dependencies, if that would help.

If they cannot be put in a single place then, yeah, properly documenting their existence will make life easier for packagers. A config-dependent file $file generated at build time that one can just `cat $file | cpio -pd $DIR` would be enough. Again, this is not so much for .rmeta files, which are specific enough to Rust that one can just pull them all in (without solving issue 3. above, though), but for other stuff like proc-macro shared libraries, which require deeper inspection than looking at file names.

> Thanks for the feedback!

Thank you for taking the time to answer!

> Cheers,
> Miguel
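If such a file ever materialises, consuming it from d/rules could be as simple as the cpio one-liner above. A rough sketch, with the manifest name purely hypothetical:

    # Hypothetical manifest-based install: rust/module-deps.list stands in for a
    # config-dependent file, generated by the kernel build, listing the
    # (build-tree-relative) paths of everything an out-of-tree Rust module needs.
    # The cpio -pd invocation is the one suggested in the message above.
    cd "$KBUILD_OUTPUT"
    mkdir -p "$DESTDIR"
    cat rust/module-deps.list | cpio -pd "$DESTDIR"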
Package for development of out-of-tree kernel modules written in Rust
Hi kernel team!

I know this is early to say the least, but I would like to ask your opinion on the naming of a package that could eventually end up in the official Debian kernel packaging.

As is the case for C, building out-of-tree kernel modules written in Rust [1] requires some extra files to be installed alongside the kernel. For C modules this is covered by linux-headers and its dependencies. For Rust, one needs to have installed the *.rmeta and *.so files which are generated in the rust/ directory during the build. The first of these contain metadata such as the version of rustc used for the build, information on exported types/items/etc., bindings and so on [2][3]. They do not contain compiled code, but they ultimately depend (among other things, such as the compiler) on the .config used to build them, so they are arch- and flavour-specific. The only *.so file generated at this time is a proc macro, which, to my understanding, is a regular shared library.

Needless to say, Rust must be enabled in the kernel for these files to be generated, meaning the Debian kernel cannot currently support this. However, there is at least one fork of the Debian kernel being built with Rust enabled at present, namely the Asahi kernel which we are maintaining in the Bananas Team [4] (and which needs Rust for the Apple AGX GPU driver), so I started experimenting with the idea of having a package to provide these files.

Following the scheme of having a metapackage plus an actual versioned package, I added two packages, linux-librust@source_suffix@@localversion@-dev [5] and linux-librust-@abiname@ [6]; the first has the second as a dependency, while the second contains the actual files and links needed to build out-of-tree Rust modules. I tested it with [1] and it worked (modulo a small change needed to build [1] outside of mainline). You can find a draft at [7].

I would like to include the new linux-librust-* packages in the next version of our fork (6.13), but first I would like to ask your opinion on the naming I chose, whether you have suggestions (beyond the naming too!) and so on. If (when? ;-)) Rust is enabled in the Debian kernel, it would be nice to have a package like this to go with linux-headers, so I thought I might as well start this conversation and see whether I can work on something that can eventually end up in src:linux.

One disclaimer: Miguel Ojeda (whom I cc'ed in case something interesting comes out of this discussion) told me he wouldn't recommend people to do out-of-tree Rust development, e.g. because the kernel crate (which lives in rust/kernel) may need to be rebuilt to enable or accommodate new abstractions. I mention this recommendation to clarify that general development is not what a linux-librust package aims to enable. The problem it tries to solve is "how do I build out-of-tree kernel modules written in Rust that work with the kernel installed by my distribution?" (this is why it's packaged from the same source!). If your distribution does not provide the abstractions you need, then your distribution's kernel is not good enough for your use case and you need to rebuild the kernel from source, in which case you don't need a linux-librust package (you already have the files you need). On the other hand, if your distribution does not provide those files, you will not be able to build such modules even if your distribution's kernel is good enough for you.

Cheers!
P.S.: Also cc'ing waldi personally, since he recently expressed interest in enabling Rust in the Debian kernel, and the Rust Team in case someone is interested.

[1] https://github.com/Rust-for-Linux/rust-out-of-tree-module
[2] https://rustc-dev-guide.rust-lang.org/backend/libs-and-metadata.html#rmeta
[3] https://rustc-dev-guide.rust-lang.org/backend/libs-and-metadata.html#metadata
[4] https://salsa.debian.org/bananas-team/wip/linux-asahi
[5] For us this would be linux-librust-asahi-dev, since we renamed our source as linux-asahi. For src:linux this would be linux-librust-dev. I've been consistent with metapackages such as linux-headers@source_suffix@@localversion@ and linux-image@source_suffix@@localversion@-dbg and kept @source_suffix@ in the package's name.
[6] For us this is linux-librust-$version-asahi, where "asahi" is the flavour name (src:linux-asahi contains an additional asahi flavour to enable the configs needed for Apple silicon and to make it clear to users that they have to install the -asahi packages instead of the -arm64 ones). For src:linux this would be linux-librust-$version-amd64, -arm64, -arm64-16k, etc.
[7] https://salsa.debian.org/bananas-team/wip/linux-asahi/-/commit/7a3873463600bc85781e4683a38d6e510c154937
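P.P.S.: In case it helps picture the use case, this is roughly what building the sample module from [1] against an installed kernel could look like once the files are in place. Untested sketch; it assumes the packaged .rmeta/.so artifacts end up where kbuild expects them under the headers tree, and LLVM=1 reflects the usual requirement that Rust-enabled kernels are built with LLVM:

    # Hedged example: standard external-module kbuild invocation against the
    # running kernel's build tree; nothing here is specific to the draft package.
    git clone https://github.com/Rust-for-Linux/rust-out-of-tree-module
    cd rust-out-of-tree-module
    make -C /lib/modules/"$(uname -r)"/build M="$PWD" LLVM=1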
Re: Package for development of out-of-tree kernel modules written in Rust
On 26/02/25 12:29, Bastian Blank wrote:
> I completely miss the reason why this can't be part of linux-headers and just be used the same way.

Yeah, I should have explained my reasoning in my previous email.

The metapackage is added so users can choose whether to install the Rust bits or not. If you think the user should not have this choice (I don't disagree), then the metapackage is not needed. I'm not opposed to dropping it.

As for the actual contents of the package, the Rust bits: they are binary files (and include an actual shared library), so my reasoning was that they should belong to a different package than linux-headers-@abiname@@localversion@ and be installed under /usr/lib instead of /usr/src. This is what's currently done for the kbuild files. So one alternative could be to forget about the metapackage, only add a new linux-librust-@abiname@@localversion@, and make linux-headers-@abiname@@localversion@ depend on it so that it's pulled in automatically with the headers (like kbuild is). Unless you are fine with linux-headers-@abiname@@localversion@ also installing binary files, in which case one can just add the Rust bits there. But I still think the Rust bits should be installed in /usr/lib [1] instead of /usr/src.

[1] In the draft I'm using /usr/lib/linux-librust-@abiname@@localversion@ as the installation path.
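For what it's worth, the layout I have in mind mirrors how the kbuild files are shipped: binaries under /usr/lib, reachable from the headers tree via a symlink. A rough sketch, with all paths and variable names illustrative only (not what the draft literally does):

    # Illustrative only: stage the Rust artifacts under /usr/lib and make them
    # reachable from the headers tree via a symlink, similar to the way the
    # kbuild files are handled.
    RUSTDIR=/usr/lib/linux-librust-@abiname@@localversion@
    HDRDIR=/usr/src/linux-headers-@abiname@@localversion@
    install -d "$DESTDIR$RUSTDIR" "$DESTDIR$HDRDIR"
    cp rust/*.rmeta rust/*.so "$DESTDIR$RUSTDIR"
    ln -s "$RUSTDIR" "$DESTDIR$HDRDIR/rust"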
Re: Salsa CI job 'missing-breaks' to be enabled by default starting March 1st
On 06/03/25 14:25, Lorenzo wrote:
> Hello Otto,
>
> [please keep me in CC, I'm not subscribed]
>
>> Salsa CI has had for many years the job 'missing-breaks' that complements piuparts by checking that the files a package introduces don't clash with files shipped by any other package in the distribution without having proper Breaks/Replaces in the `debian/control` file. This job works well, being quick to run and has had zero false positives in our experience.
>
> In salsa CI now I see:
>
> $ check_for_missing_breaks_replaces.py -o ${WORKING_DIR}/missing_breaks.xml --changes-file ${WORKING_DIR}/*.changes
> [ERROR] Missing Breaks/Replaces found
> [ERROR] runit-init conflicts with init-system-helpers files: {'/usr/share/man/man8/invoke-rc.d.8.gz', '/usr/sbin/service', '/usr/sbin/invoke-rc.d', '/usr/share/man/man8/service.8.gz'}
> Uploading artifacts for failed job
>
> This looks like a false positive; the files are in fact diverted. Does the test check for diversions?
>
>> ## Schedule
>> 1. March 1st: Enable this job by default, but in allow_failure mode, making Salsa CI yellow on packages that fail on this job
>> 2. March 31st: Remove the allow_failure mode, potentially making the Salsa CI red for packages that fail on this job
>
> Could you please consider delaying 2. until diversions are properly detected?

Another instance of diversions not being detected is in linux's pipeline [1][2]: linux-libc-dev and oss4-dev both install /usr/include/linux/soundcard.h, oss4-dev diverts it, and missing-breaks fails. If my understanding is correct, this will make all unstable/experimental (oss4-dev is in unstable only) src:linux pipelines break starting March 31st.

I agree that diversions should be detected.

[1] https://salsa.debian.org/kernel-team/linux/-/jobs/7205419
[2] https://salsa.debian.org/kernel-team/linux/-/jobs/7182906

> Best Regards,
> Lorenzo
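Querying diversion state itself is straightforward on an installed system; the harder part is that the CI job inspects packages rather than an installed system, where the diversion is only registered by the diverting package's maintainer scripts. Still, as a rough sketch of the kind of check involved (how the job would actually learn about diversions is up to its maintainers):

    # Sketch: ask dpkg whether the clashing path is diverted before flagging it.
    # dpkg-divert --listpackage prints the diverting package (or "LOCAL") and
    # prints nothing when the file is not diverted. Only meaningful on a system
    # where the diverting package is installed.
    path=/usr/include/linux/soundcard.h
    diverter=$(dpkg-divert --listpackage "$path")
    if [ -n "$diverter" ]; then
            echo "$path is diverted by $diverter; no Breaks/Replaces needed"
    else
            echo "$path clashes and is not diverted; flag missing Breaks/Replaces"
    fi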
Re: Salsa CI job 'missing-breaks' to be enabled by default starting March 1st
On 06/03/25 17:09, Bastian Blank wrote:
> Open a serious bug report against oss4-dev. No need to wait, it needs to go.

oss4-dev is fine (unless diversions of files in linux-libc-dev are forbidden): oss4-dev correctly diverts the header, so it need not Break or Conflict with linux-libc-dev. The issue here is that the new missing-breaks pipeline job has no clue that packages are correctly diverting files, and it flags as missing Breaks packages which, in fact, do not miss any Breaks because they aren't supposed to have them.
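To spell out why no Breaks/Replaces is needed: a package that takes over a single file from another package does so with dpkg-divert in its maintainer scripts. The conventional preinst/postrm pair looks roughly like this (the diverted-to filename below is made up for illustration, not necessarily what oss4-dev actually uses):

    # preinst (sketch): register the diversion before our copy is unpacked.
    dpkg-divert --package oss4-dev --add --rename \
            --divert /usr/include/linux/soundcard.h.oss4 \
            /usr/include/linux/soundcard.h

    # postrm (sketch): drop the diversion again when the package is removed.
    dpkg-divert --package oss4-dev --remove --rename \
            --divert /usr/include/linux/soundcard.h.oss4 \
            /usr/include/linux/soundcard.h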