Anyone working on more recent glib/gtk4 packages?
Hello, I wanted to package Fractal, which is a native GNOME client for Matrix chat. It requires newer versions of glib and gtk than are currently in Guix. I believe I’ve seen in IRC that some folks are working on getting GNOME 43/44 packages done, which probably needs the glib/gtk updates to happen. If there’s work in this direction, could someone point me to it? Thanks, — Ian
Guix CLI, thoughts and suggestions
Greetings,

As I’ve been learning Guix, one of the things I’ve found somewhat unpleasant is the lack of consistency within the guix CLI tool. It feels a bit Git-like, with not much consistency, commands that non-obviously perform more than one operation, related commands in different places in the tree, etc.

Just so you know where I’m coming from: I’ve found that complex CLI tooling benefits from organization and consistency. The Linux ip(8) command is a good example of this kind of organization: to add an IP address, you use `ip address add'. To show addresses, `ip address show', and to remove one, `ip address del'. When options are needed, they get added after the verb or branch in the verb tree; the final verb may take positional arguments as well as long (--option) or short (-o) options.

Some examples of where I think Guix could do better. This is an illustrative list, not an exhaustive one.

Inconsistent organization
=========================

Most package-related commands are under `guix package', but many are sibling commands. Examples are `guix size', `guix lint', `guix hash', etc.

Inconsistency between verbs and options
=======================================

Some verbs are bare-word positional arguments, and others are flags to related verbs. IMO, this is the biggest problem, and makes it very difficult to find all the things the CLI can do. `guix package' is a major offender in this area, as it mixes verbs and verb-specific options into the same level. For example, installing a package is `guix package -i foo' rather than `guix package install foo', removing is `guix package -r foo' rather than `guix package remove foo', and listing installed packages is `guix package -I' rather than `guix package installed' (or similar).

This means that users can express commands which *seem* like they should work, but do not. For example, `guix package -i emacs -r emacs-pgtk -I' represents a command to 1) install emacs, 2) remove emacs-pgtk, and 3) list installed packages (which would verify the previous two operations occurred). This is a valid command within the accepted organization of `guix package', and doesn’t cause an error, but doesn’t work: the install and remove steps are ignored.

A thing I’ve found throughout my career is that designing systems so it’s *impossible* to represent unsupported, nonsensical, or undefined things is an extremely valuable technique for avoiding errors and pitfalls. I think Guix could get a lot of mileage out of adopting something similar.

This causes a related problem of making it impossible to know which options are valid for which verbs. Will `guix package --cores=8 -r emacs' remove the package while using eight cores of my system? Will `guix system -s i686 switch-generation 5' switch me to a 32-bit version of generation 5? If verbs are organized better, and have their own options, this ambiguity vanishes.

More inconsistency
==================

Other parts of guix have the opposite problem: `guix system docker-image' probably ought to be an option to `guix system image' rather than a separate verb.

Inconsistency between similar commands
======================================

There are generations of both the system (for GuixSD) and the user profile; however, they work differently. For the system, there’s `guix system list-generations' and `guix system switch-generation', but for the user profile, you need `guix package --list-generations' and `guix package --switch-generation=PATTERN'.
Additionally, no help is available for either of the system commands: `guix system switch-generation --help' gives the same output as `guix system --help' -- no description of the supported ways of expressing a generation is available.

Flattened verbs
===============

Relatedly, the generation-related commands under `guix system' ought to be one level deeper: `guix system generation list', `guix system generation switch', etc.

Repeated options
================

Many commands (`guix package', `guix system', `guix build', `guix shell') take -L options, to add Guile source to their load path. This probably ought to be an option to guix itself, so you can do `guix -L~/src/my-channel build ...'.

Suggestions
===========

All commands should be organized into a tree of verbs. Verbs should have common aliases (`rm' for `remove', etc). Verbs should be selectable by specifying the minimum unambiguous prefix; for example, `guix sys gen sw' could refer to `guix system generation switch'. Options should be applicable at each level of the tree, e.g. `guix -L~/src/my-channel' would add that load path, which would be visible to any command. Requesting help is a verb: appending "help" to any level of the verb tree should show both the options applicable to that verb and its child verbs. `guix help' would show global options and all top-level verbs (package, system, generation, etc.); `guix package help' would show the options and child verbs of `guix package'.
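For concreteness, here is a rough sketch of how some existing operations could map onto such a verb tree. The commands in the left column are real today; the forms on the right are purely illustrative and do not exist:

    # today                                # hypothetical verb tree
    guix package -i foo                    guix package install foo
    guix package -r foo                    guix package remove foo
    guix package -I                        guix package installed
    guix package --list-generations        guix package generation list
    guix system list-generations           guix system generation list
    guix system switch-generation 5        guix system generation switch 5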
Re: Guix CLI, thoughts and suggestions
Hi Carlo,

Thank you for the thoughtful reply.

Carlo Zancanaro writes:

Hi Ian, Much of what you've written is fair, and I'm sure that Guix's commands could be better organised. I'm not really involved in Guix development, but I think there are two "inconsistencies" that you've mentioned which can be explained.

On Mon, Jan 15 2024, Ian Eure wrote:

Some examples of where I think Guix could do better. This is an illustrative list, not an exhaustive one. Inconsistent organization = Most package-related commands are under `guix package', but many are sibling commands. Examples are `guix size', `guix lint', `guix hash', etc.

I think the real inconsistency here is that `guix package' is poorly named. This command really operates on profiles, and performs operations (install, remove, list, etc.) on those profiles. Packages are given as arguments to this command. The other commands operate on, and show the properties of, packages. Similarly with `guix build'.

Yes, I agree the behavior makes a bit more sense from that viewpoint. However, it does have non-profile-related things in it, such as `--show' and `--search'. This is getting into another thing I’ve seen a bit of, which is overloaded commands -- ones that do multiple things that are unrelated or tangentially related. But, I didn’t have a good example, and my message was long enough already.

Inconsistency between verbs and options === ... For example, installing a package is `guix package -i foo' rather than `guix package install foo', removing is `guix package -r foo' rather than `guix package remove foo', and listing installed packages is `guix package -I' rather than `guix package installed' (or similar).

The specific example of `guix package' might be explained by considering it as a single transaction to update the profile. The command `guix package' really says "perform a transaction on the profile", and the options are the commands in the transaction. Since there can be multiple commands, and the command names look like package names, they are provided as options. This doesn't fully explain the behaviour. In particular the example you give:

This means that users can express commands which *seem* like they should work, but do not. For example `guix package -i emacs -r emacs-pgtk -I' represents a command to 1) install emacs 2) remove emacs-pgtk 3) list installed packages (which would verify the previous two operations occurred).

... seems reasonable to have working within the view of `guix package' as a transactional operation.

I agree that this would make sense, but my understanding is that `guix package' doesn’t work like that -- it only performs the final operation in the list. IMO, it should either do *everything* the commands specify, or print an error and take no action.

It's also worth noting that there are convenience shortcuts in `guix install' and `guix remove'. It seems like a lot of work to change, and backwards compatibility also is an issue. I see backwards compatibility as the main issue here. There was a lot of discussion preceding the inclusion of `guix shell', because of the prospect of breaking existing tutorials/documentation floating around on the internet. This is an even bigger concern for a more drastic reorganisation of the CLI.

I agree, I don’t think the situation can be improved without finding a solution to preserve BC. But, I didn’t think it was worth making detailed plans for any of this before gauging whether the problem was one broadly considered to be worth solving.

— Ian
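For reference, these are the concrete invocations being discussed, all of which exist today -- the option-style forms under `guix package' and the later convenience shortcuts:

    # Option-style forms under `guix package':
    guix package -i emacs          # install
    guix package -r emacs-pgtk     # remove
    guix package -I                # list installed packages

    # Convenience shortcuts:
    guix install emacs
    guix remove emacs-pgtk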
Re: QA is back, who wants to review patches?
Christopher Baines writes:

[[PGP Signed Part:Undecided]]

Hey! After substitute availability taking a bit of a dive recently, the bordeaux build farm has finally caught back up and QA is back submitting builds for packages changed by patches. QA also has a feature to allow easily tagging patches (issues) as having been reviewed and ready to merge (reviewed-looks-good). You can do this via sending an email and QA has a form ("Mark patches as reviewed") on the page for each issue to help you do this. I'd encourage anyone and everyone to review patches, there's no burden on you to spot every problem and you don't need any special knowledge. You just need to not be involved (so you can't review your own patches) and take a good look at the changes, mentioning any questions that you have or problems that you spot. If you think the changes look good to be merged, you can tag the issue accordingly. When issues are tagged as reviewed-looks-good, QA will display them in dark green at the top of the list of patches, so it's on those with commit access to prioritise looking at these issues and merging the patches if indeed they are ready. Let me know if you have any comments or questions!

Wanted to check things out, but it’s giving the same error message on every page:

    An error occurred
    Sorry about that!
    misc-error #fvector->list: expected vector, got ~S#f#f

Also, the certificate for issues.guix.gnu.org expired today. Is there a plan to improve the reliability of the Guix infrastructure? It seems like major things break with alarming regularity.

Thanks,

— Ian
Re: Guix System automated installation
Hi Giovanni, Giovanni Biscuolo writes: [[PGP Signed Part:Undecided]] Hello Ian, I'm a little late to this discussion, sorry. I'm adding guix-devel since it would be nice if some Guix developer have something to add on this matter, for this reason I'm leaving all previous messages intact Csepp writes: Ian Eure writes: Hello, On Debian, you can create a preseed file containing answers to all the questions you’re prompted for during installation, and build a new install image which includes it. When booted, this installer skips any steps which have been preconfigured, which allows for either fully automated installation, or partly automated (prompt for hostname and root password, but otherwise automatic). Does Guix have a way to do something like this? The declarative config is more or less the equivalent of the Debian preseed file, but I don’t see anything that lets you build an image that’ll install a configuration. When using the guided installation (info "(guix) Guided Graphical Installation"), right before the actual installation on target (guix system init...) you can edit the operating-system configuration file: isn't it something similar to what you are looking for? Please consider that a preseed file is very limited compared to a full-fledged operating-system declaration since the latter contains the declaration for *all* OS configuration, not just the installed packages. I appreciate where you’re coming from, I also like the one-file system configuration, but this is inaccurate. Guix’s operating-system doesn’t encompass the full scope of configuration necessary to install and run an OS; Debian’s preseed has significantly more functionality than just specifying the installed packages. Right now, Debian’s system allows you to do things which Guix does not. Preseed files contain values that get set in debconf, Debian’s system-wide configuration mechanism, so they can both configure the resulting system as well as the install process itself. This means you can use a preseed file to tell the installer to partition disks, set up LUKS-encrypted volumes (and specify one or more passwords for them), format those with filesystems, install the set of packages you want, and configure them -- though debconf’s package configuration is more limited, generally, than Guix provides[1]. With Debian, I can create a custom installer image with a preseed file, boot it, and without touching a single other thing, it’ll install and configure the target machine, and reboot into it. That boot-and-it-just-works experience is what I want from Guix. For things that can’t be declared in operating-system, like disk partitioning and filesystem layout, the installer performs those tasks imperatively, then generates a system config with those device files and/or UUIDs populated, then initializes the system. There’s no facility for specifying disk partitioning or *creating* filesystems in the system config -- it can only be pointed at ones which have been created already. guix system image is maybe closer, but it doesn’t automate everything that the installer does. But the installer can be used as a Scheme library, at least in theory. The way I would approach the problem is by creating a Shepherd service that runs at boot from the live booted ISO. I would really Love So Much™ to avoid writing imperative bash scripts and just write Scheme code to be able to do a "full automatic" Guix System install, using a workflow like this one: 1. 
guix system prepare --include preseed.scm disk-layout.scm /mnt where disk-layout.scm is a declarative gexp used to partition, format and mount all needed filesystems the resulting config.scm would be an operating-system declaration with included the contents of preseed.scm (packages and services declarations) 2. guix system init config.scm /mnt (already working now) ...unfortunately I'm (still?!?) not able to contribute such code :-( I don’t think there’s any need for a preseed.scm file, and I’m not sure what would be in that, but I think this is close to the right track. Either operating-system should be extended to support things like disk partitioning, and effect those changes at reconfigure time (with suitable safeguards to avoid wrecking existing installs), or the operating-system config could get embedded in another struct which contains that, similar to the (image ...) config for `guix system image'. I think there are some interesting possibilities here: you could change your partition layout and have Guix resize them / create new ones for you. — Ian [1]: A workaround for this is to create packages which configure the system how you want, then include them on the installer image / list them in the packages to be installed. Not ideal, but you can.
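To make the current limitation concrete: today an operating-system can only *reference* filesystems that already exist, for example by UUID or label, using the standard file-system fields. A minimal sketch (the UUID is a placeholder):

    ;; An operating-system can point at an existing filesystem, but
    ;; nothing here partitions a disk or creates the filesystem itself.
    (file-systems
     (cons (file-system
             (mount-point "/")
             (device (uuid "4dab5feb-d176-45de-b287-9b0a6e4c01cb"))  ; placeholder
             (type "ext4"))
           %base-file-systems))

Nothing in such a declaration creates the partition or runs mkfs; that is exactly the gap the disk-layout/prepare idea above is trying to fill.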
Re: Proposal to turn off AOT in clojure-build-system
Hello,

I’ve been following along with this discussion, as well as a discussion on Clojureverse, and thought it might be helpful to pull together some threads and design decisions around Clojure’s behavior.

Clojure is designed to ship libraries as source artifacts, not bytecode ("pretty much all other Clojure libraries ... are all source code by design[1]."; "Clojure is ... a source-first language[2]"), and the view of the community is that shipping AOT artifacts "is an anti-pattern[1]." Clojure library JARs are more akin to source tarballs than binaries.

The original design and intent of Clojure’s AOT compiler is to compile "just a few things... for the interop case" or "Everything... For the 'Application delivery', 'Syntax check', and 'reflection warnings' cases[3]."

Clojure’s compiler is transitive and "does not support separate compilation"[3], meaning when a namespace is compiled, anything it uses is compiled and emitted with it. This is the crux of why mixing AOT and non-AOT code is troublesome: it causes dependency diamonds, where the AOT’d library contains a duplicate, older version of code used elsewhere in the project.

The Clojure reference on compiling[4] gives some reasons you might want to AOT: "To deliver your application without source," "To speed up application startup," "To generate named classes for use by Java," "To create an application that does not need runtime bytecode generation and custom classloaders." Note that there’s no mention of compiling libraries for any reason; only applications. When AOT is used "for the interop case," it’s typical to AOT only those namespaces[5], not the entire library.

Shipping AOT-compiled Clojure libraries has caused real and very weird and hard-to-debug problems in the past:

https://clojure.atlassian.net/browse/CLJ-1886?focusedCommentId=15290
https://github.com/clj-commons/byte-streams/issues/68
https://clojure.atlassian.net/browse/CLJ-1741

Clojure doesn’t have guarantees around ABI stability[6][7]. To date, most ABI changes have been additive, but there are no guarantees that the ABI will be compatible from any one version of Clojure to any other. The understanding of the Clojure community is that the design of the current compiler can’t offer a stable ABI[8] at all. Because nobody in the Clojure community AOTs intermediate (that is, library) code, this hasn’t been a problem and is unlikely to change. "Clojure tries very hard to provide source compatibility but not bytecode compatibility across versions[9]."

Correctly handling the ABI concerns — which Guix currently does not do — would result in a combinatorial explosion of Clojure packages should multiple versions of Clojure ever be available in Guix at the same time. For example, if someone wanted to package Clojure 1.12.0-alpha9, you’d need to duplicate every package taking Clojure as an input so they use the correct version. While ABI breakage has been rare thus far, it seems likely that it’ll occur at some point; perhaps if Clojure reaches version 2.0.0. If Guix disables AOT for Clojure libraries, we have source compatibility, and the AOT/ABI problems are moot.

Clojure’s compiler is non-deterministic[10]: the same compiler will produce different bytecode for the same input across multiple runs. I’m not sure if this is a problem for Guix at this point in time, but it seems out of line with Guix expectations for compilation generally.
Opinions follow:

If we’re taking votes, mine is to *not* AOT Clojure libraries, both for the technical reasons laid out above, and also for the social reason of not violating the principle of least surprise. I understand that Guix and Clojure have very different approaches, and some balance must be struck. However, the lack of ABI guarantees, the compiler’s behavior, the promise of source compatibility, and matching the expectation of the audience these tools are meant for all convince me that disabling AOT is the right course here. AOT’ing Clojure applications (which means, more or less, "the Clojure tooling") is desirable, and should be maintained.

— Ian

[1]: https://clojureverse.org/t/should-linux-distributions-ship-clojure-byte-compiled-aot-or-not/10595/8
[2]: https://clojureverse.org/t/should-linux-distributions-ship-clojure-byte-compiled-aot-or-not/10595/30
[3]: https://clojure.org/reference/compilation
[4]: https://archive.clojure.org/design-wiki/display/design/Transitive%2BAOT%2BCompilation.html
[5]: https://clojure.org/guides/deps_and_cli#aot_compilation
[6]: https://clojureverse.org/t/should-linux-distributions-ship-clojure-byte-compiled-aot-or-not/10595/30
[7]: https://gist.github.com/hiredman/c5710ad9247c6da12a99ff6c26dd442e
[8]: https://clojureverse.org/t/should-linux-distributions-ship-clojure-byte-compiled-aot-or-not/10595/4
[9]: https://clojureverse.org/t/should-linux-distributions-ship-clojure-byte-compiled-aot-or-not/10595/18
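To make the "AOT only the interop namespaces" pattern concrete, here is a minimal Clojure sketch (the names are illustrative): only the namespace that exposes a named class to Java callers gets compiled, and the rest of the library ships as source.

    ;; src/my/lib/interop.clj -- the only namespace that needs AOT,
    ;; because it exposes a named class for Java consumers.
    (ns my.lib.interop
      (:gen-class
       :name my.lib.Interop
       :methods [^:static [greet [String] String]]))

    (defn -greet [who]
      (str "Hello, " who))

    ;; Build step, run from a REPL or script with a "classes" directory
    ;; present and on the classpath: compile just this namespace.
    (compile 'my.lib.interop)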
Concerns/questions around Software Heritage Archive
Hi Guixy people, I’d never heard of SWH before I started hacking on Guix last fall, and it struck me as rather a good idea. However, I’ve seen some things lately which have soured me on them. They appear to be using the archive to build LLMs: https://www.softwareheritage.org/2024/02/28/responsible-ai-with-starcoder2/ I was also distressed to see how poorly they treated a developer who wished to update their name: https://cohost.org/arborelia/post/4968198-the-software-heritag https://cohost.org/arborelia/post/5052044-the-software-heritag GPL’d software I’ve created has been packaged for Guix, which I assume means it’s been included in SWH. While I’m dealing with their (IMO: unethical) opt-out process, I likely also need to stop new copies from being uploaded again in the future. Is there a way to indicate, in a Guix package, that it should *never* be included in SWH? Is there a way to tell Guix to never download source from SWH? I want absolutely nothing to do with them. Thanks, — Ian
Re: Concerns/questions around Software Heritage Archive
Christopher Baines writes:

[[PGP Signed Part:Undecided]]

Ian Eure writes:

Hi Guixy people, I’d never heard of SWH before I started hacking on Guix last fall, and it struck me as rather a good idea. However, I’ve seen some things lately which have soured me on them. They appear to be using the archive to build LLMs: https://www.softwareheritage.org/2024/02/28/responsible-ai-with-starcoder2/ I was also distressed to see how poorly they treated a developer who wished to update their name: https://cohost.org/arborelia/post/4968198-the-software-heritag https://cohost.org/arborelia/post/5052044-the-software-heritag GPL’d software I’ve created has been packaged for Guix, which I assume means it’s been included in SWH. While I’m dealing with their (IMO: unethical) opt-out process, I likely also need to stop new copies from being uploaded again in the future. Is there a way to indicate, in a Guix package, that it should *never* be included in SWH?

Not currently, and I don't really see the point in such a mechanism. If you really never want them to store your code, then you need to license it accordingly (and not make it free software).

I don’t want my code in SWH *because* it’s free. A primary use of LLMs is laundering freely licensed software into proprietary, commercial projects through "AI" code completion and generation. Any Free software in an LLM training set can and will be used in violation of its license, without a clear path for the author to seek recourse. I deleted my code off Github and abandoned it completely for this exact reason, and am deeply irked to be going through this nonsense again.

A more salient question may be: Is there a process within Guix (either the program or the organization) which uploads source to SWH? Or does it rely on SWH independently? If the latter, my problem is likely solved by blocking SWH at my network edge and opting out of their archive (or trying to) and the downstream training models they’ve already put it in. If the former, the only control I currently have to protect my license is removing packages from Guix which contain it. I don’t want that outcome.

Noting also that the path here seems to be SWH->huggingface->bigcode training set, and the opt-out process for the training set appears to be a complete sham. To opt-out, you must create a Github Issue; only one opt-out has *ever* been processed, and there are 200+ sitting there, many with no response for nearly a year[1]. I want no part of any of this.

Is there a way to tell Guix to never download source from SWH?

Also no, and it's probably best to do this at the network level on your systems/network if you want this to be the case.

I’ll investigate this, though I’d prefer if there was a way to configure source mirrors in the Guix daemon. Skipping back to this though:

I was also distressed to see how poorly they treated a developer who wished to update their name: https://cohost.org/arborelia/post/4968198-the-software-heritag https://cohost.org/arborelia/post/5052044-the-software-heritag

This is probably worth thinking about as Guix is in a similar situation regarding publishing source code, and people potentially wanting to change historical source code both in things Guix packages and Guix itself. Like Software Heritage, there's cryptographical implications for rewriting the Git history and modifying source tarballs or nars that contain source code.
We have 17TiB of compressed source code and built software stored for bordeaux.guix.gnu.org now and we should probably work out how to handle people asking for things to be removed or changed (for any and all reasons). It's probably worth working out our position on this in advance of someone asking. Yes, I agree that Guix needs a better solution for this. Thanks, — Ian [1]: https://github.com/bigcode-project/opt-out-v2/issues
Re: Concerns/questions around Software Heritage Archive
MSavoritias writes: On 3/17/24 11:39, Lars-Dominik Braun wrote: Hey, I have heard folks in the Guix maintenance sphere claim that we never rewrite git history in Guix, as a matter of policy. I believe we should revisit that policy (is it actually written anywhere?) with an eye towards possible exceptions, and develop a mechanism for securely maintaining continuity of Guix installations after history has been rewritten so that we maintain this as a technical possibility in the future, even if we should choose to use it sparingly. the fallout of rewriting Guix’ git history would be devastating. It would break every single Guix installation, because a) `guix pull` authenticates commits and we might lose our trust anchor if we rewrite history earlier than the introduction of this feature, b) `guix pull` outright rejects changes to the commit history to prevent downgrade attacks. Additionally it would break every single existing usage of the time machine and thereby completely defeat the goal of providing reproducible software environments since the commit hash is used to identify the point in time to jump to. I doubt developing “mechanisms” – whatever they look like – would be worth the effort. Our contributors matter, but so do our users. Never ever rewriting our git history is a tradeoff we should make for our users. Lars Thats a good point. in the sense that its a tradeoff here and I absolutely agree. But let me add some food for thought here: 1. Were the social aspects considered when the system came into place? 2. Is it more important for the system to stay as is than to welcome new contributors? 3. You mention "its a tradeoff we should make for our users". How many trans people where involved in that decision and how much did their opinion matter in this? I am saying this because giving power to people(what is called users) is not only handling them code or make sure everything is free software. Its also the hard part of making sure the voices of people that can not code is heard and is participating and taking in mind. Just want to say that I appreciate and agree with your thoughtful words. I’d also note that name changes aren’t a concern limited to trans people, and framing this as "we have to upend everything Because Transgender" is both wrong and feels pretty bad to me. Anyone can change their name at any time for any reason, or no reason at all, and may wish to update historical references to their previous names. Having a mechanism to support this is, in my view, a matter of basic decency and respect for all humans. Thanks, — Ian
Re: Concerns/questions around Software Heritage Archive
MSavoritias writes:

On 3/17/24 13:53, paul wrote:

Hi all , thank you MSavoritias for bringing up points that many of us share. It's clearly a tradeoff what to do about the past. For the future, as Christopher already stated, we need a serious solution that we can uphold as a free software project that does not alienate users or contributors. My opinion is that names are just wrong to be included, not only because of deadnames, but in general having a database with a column first_name and a column second_name is something only a 35 yrs old white cis boy could have thought was a good idea to model the spectrum of names humans use all over the world: https://web.archive.org/web/20240317114846/https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-believe-about-names/ If we'd really need to identify contributors, and obviously Guix doesn't, we could use an UUID/machine readable identifier which can then be mapped to a displayed name. I believe git can already be configured to do so. giacomo

The uuid sounds like a very interesting solution indeed. I wonder how easy it could be to add it to git.

This also seems like interesting territory to explore. The concerns raised around rewriting history have valid points; I think it’s impractical to rewrite history any time a change needs to happen, as that would be an ongoing source of disruption. But rewriting history *once*, to switch to a more general mechanism, seems like a reasonable trade to me.

This also presents an opportunity: we could combine this with a default branch switch from master to main. A news entry left as the final commit in master could inform people of whatever steps may be needed to update (if that can’t be automated), and the main branch would contain the rewritten history. It’s certainly not a perfect solution, but it seems pragmatic.

— Ian
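For what it’s worth, git already ships a limited mechanism along these lines: a .mailmap file at the repository root maps historical author/committer strings to a current display name and address without rewriting any commits, and tools like git shortlog and git blame honour it (git log can, via --use-mailmap). A sketch of the syntax; the names and addresses are placeholders:

    # .mailmap: current identity on the left, historical identity on the right
    Current Name <current@example.org> Old Name <old@example.org>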
Re: Concerns/questions around Software Heritage Archive
Simon Tournier writes:

Hi, On sam., 16 mars 2024 at 08:52, Ian Eure wrote:

They appear to be using the archive to build LLMs: https://www.softwareheritage.org/2024/02/28/responsible-ai-with-starcoder2/

About LLM, Software Heritage made a clear statement: https://www.softwareheritage.org/2023/10/19/swh-statement-on-llm-for-code Quoting:

We feel that the question is no longer whether LLMs for code should be built. They are already being built, independently of what we do, and there is no turning back. The real question is how they should be built and whom they should benefit.

Principles:

1. Knowledge derived from the Software Heritage archive must be given back to humanity, rather than monopolized for private gain. The resulting machine learning models must be made available under a suitable open license, together with the documentation and toolings needed to use them.

2. The initial training data extracted from the Software Heritage archive must be fully and precisely identified by, for example, publishing the corresponding SWHID identifiers (note that, in the context of Software Heritage, public availability of the initial training data is a given: anyone can obtain it from the archive). This will enable use cases such as: studying biases (fairness), verifying if a code of interest was present in the training data (transparency), and providing appropriate attribution when generated code bears resemblance to training data (credit), among others.

3. Mechanisms should be established, where possible, for authors to exclude their archived code from the training inputs before model training begins.

I hope it clarifies your concerns to some extent.

It doesn’t clarify them, but it does illustrate them. HuggingFace and the StarCoder2 model are in violation of principle 2. By their own admission, they are including code without clear licensing[1]:

The main difference between the Stack v2 and the Stack v1 is that we include both permissively licensed and unlicensed files.

HuggingFace’s StarChat2 Playground[2] also violates this principle, as it outputs code without any license or provenance information; I know, because I tried it. While their own terms of use for StarCoder2 state:

Any use of all or part of the code gathered in The Stack v2 must abide by the terms of the original licenses...

...their own playground makes this impossible. HuggingFace is also in violation of the third principle, because they haven’t established a functioning opt-out model[3]. Opting out requires using non-free software; requests have been sitting for nearly a year with no action or response; and out of every request submitted, only a single one has *ever* been honored.

They appear to be violating free software licenses on a large scale. They are in violation of SWH’s own positions.

Moreover, you wrote: « I want absolutely nothing to do with them. » Maybe there is a misunderstanding on your side about what “free software” and GPL means because once “free software”, you cannot prevent people to use “your” free software for any purposes you dislike. If you want to bound the use cases of the software you create, you need to explicitly specify that in the license. And if you do, your software will not be considered as “free software”. That’s the double sword of “free software”. :-)

I am crystal clear on the meaning of free software. I wish to remove it from these models *in order to* keep it free.
Thanks, — Ian [1]: https://arxiv.org/html/2402.19173v1 [2]: https://huggingface.co/spaces/HuggingFaceH4/starchat2-playground [3]: https://huggingface.co/datasets/bigcode/the-stack-v2 [4]: https://github.com/bigcode-project/opt-out-v2/issues
Re: Concerns/questions around Software Heritage Archive
Simon Tournier writes:

Hi, On lun., 18 mars 2024 at 12:38, Ian Eure wrote:

They appear to be violating free software licenses on a large scale. They are in violation of SWH’s own positions. [...] [1]: https://arxiv.org/html/2402.19173v1 [2]: https://huggingface.co/spaces/HuggingFaceH4/starchat2-playground [3]: https://huggingface.co/datasets/bigcode/the-stack-v2 [4]: https://github.com/bigcode-project/opt-out-v2/issues

Please note that Software Heritage folks are not co-author of all that; or I misread. Do not take me wrong, this is not an attempt to escape but a query for waiting the feedback of SWH.

Shit rolls downhill. It’s the least surprising thing in the world to find that an "AI" company is violating licenses, because the entire technology is based on infringement at a massive scale. SWH’s partnership with, and promotion of, both the company and its license-violating model, in violation of their *own stated principles*, raises very legitimate questions.

There are multiple overlapping concerns here: personal, organizational, legal, ethical, and technical. From a personal, legal standpoint, HuggingFace is almost certainly in violation of my code’s licenses. I will, therefore, work to remove my code from their models. From a personal, ethical standpoint, I believe that SWH has proven themselves untrustworthy by enabling *and promoting* this infringement in violation of their own stated policies, and will work to remove my code from their archive. Personally, I cannot extend them the benefit of the doubt on this. They blew it.

From an organizational ethical standpoint, Guix is IMO on the right track by waiting on SWH (and perhaps pressuring them to fix things). From an organizational, technical perspective, I would like to see concrete measures to support my (and hundreds of others’) personal, ethical desires to exclude software from SWH, and by extension, HuggingFace’s models.

As Ludo said, SWH folks are, by the way, also long time Free Software activists.

In my view, this is not to their credit. I’d expect people familiar with Free Software to be *more* sensitive to licensing concerns, thus less likely to partner with a company likely to violate them.

PS: Thanks for the detailed explanations. I will provide my reading later, after some concerns will be separated, eventually.

You’re very welcome.

Thanks,

— Ian
Fallout from recent nss-certs changes
Some recent nss-certs changes have a negative side effect which needs to be fixed.

A patch of mine was pushed recently (commit 0920693381d9f6b7923e69fe00be5de8621ddb6f), which adds nss-certs 3.98 to (gnu packages certs), under the nss-certs-3.98 variable. Then, commit fdfd7667c66cf9ce746330f39bcd366e124460e1 was pushed, which adds nss-certs to %base-packages-networking. This references the nss-certs variable, which is version 3.88.1.

If an operating-system’s packages includes `(specification->package "nss-certs")', this causes breakage, because that form selects version 3.98, but %base-packages includes 3.88.1, which causes an error on the next `guix system reconfigure' due to conflicting package versions in the profile. Prior to commit 65e8472a4b6fc6f66871ba0dad518b7d4c63595e, the graphical installer would ask users if they wanted to install nss-certs, and put this form into the operating-system’s packages, so there are likely many users affected -- it bit me, and I’ve seen a couple in IRC as well.

I think the options to fix this are:

1. Removing (specification->package "nss-certs") from one’s operating-system.
2. Grafting nss-certs 3.98 onto nss-certs 3.88.1.
3. Replacing nss-certs 3.88.1 with 3.98.

The most expedient option is 1, as it can be applied by users -- but there’s probably not a good way to communicate that this needs to happen. There was some talk in IRC about grafting nss/nss-certs, but it looks like this didn’t happen. An upgrade is the best path, but would probably need to happen in core-updates, since this rebuilds a large number of packages.

Thoughts on this?

Thanks,

— Ian
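For anyone trying to spot this in their own configuration, a minimal sketch of the shape that now fails, using standard operating-system syntax with unrelated fields elided:

    (use-modules (gnu))

    (operating-system
      ;; ... host-name, bootloader, file-systems, etc. elided ...
      (packages
       (append (list (specification->package "nss-certs"))  ; resolves to 3.98
               %base-packages)))                            ; now pulls in nss-certs 3.88.1

Option 1 above amounts to dropping the explicit (specification->package "nss-certs") entry and letting %base-packages provide the package.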
Re: Concerns/questions around Software Heritage Archive
Hello,

I’m following up on this discussion since it’s been a month and I haven’t heard any updates. Summarizing the situation:

- SHF has an opaque, difficult, and undocumented process for handling name changes. I’d like to stress again that this is *not* strictly a transgender issue (though it likely affects them more, or in worse/different ways) -- it is a human respect issue. Many, many more cisgender people change their name than transgender people.

- SHF gave their archive to HuggingFace, an "AI" company which is generating derived works with no attribution or provenance, in ways which violate both the licenses of the projects used to train their model, and the SHF principles for LLMs.

- HuggingFace wasn’t respecting requests to opt out of their model.

On the first point, it sounds like SHF has made concrete progress to improve[1], which is very good to hear. If SHF continues on this course, I think the concern is resolved.

On the third point, HuggingFace has begun honoring opt-out requests, but is still very far behind. Also, they don’t remove code from the older versions of their model -- it remains there forever. This is progress, but still, not great.

On the second point, I have not seen any public statements indicating that either SHF or HuggingFace even acknowledges the problem. SHF’s most recent newsletter[2], published in April 2024 (after these concerns came to light), continues to tout that StarCoder2 is "the first AI model aligned with our principles," which appears to be false. StarCoder2 includes both licensed and unlicensed code, and HuggingFace’s own StarChat2 playground produces works derivative of this code, with no attribution or licensing information. There is also no statement or position on the SHF news blog. Nor has HuggingFace either fixed their tools, or made a statement. This is still very much a live concern.

I have a few questions:

- Has Guix reached out to SHF to express these concerns / get a response?
- Whether a public or private response, what would Guix consider to be an acceptable response? An unacceptable response?
- How long is Guix willing to wait for a response?

Thanks,

— Ian

[1]: https://cohost.org/arborelia/post/5273879-they-are-fixing-some
[2]: https://www.softwareheritage.org/wp-content/uploads/2024/04/Software-Heritage-2024-Vision-Milestones-Newsletter.pdf

Ian Eure writes:

Hi Guixy people, I’d never heard of SWH before I started hacking on Guix last fall, and it struck me as rather a good idea. However, I’ve seen some things lately which have soured me on them. They appear to be using the archive to build LLMs: https://www.softwareheritage.org/2024/02/28/responsible-ai-with-starcoder2/ I was also distressed to see how poorly they treated a developer who wished to update their name: https://cohost.org/arborelia/post/4968198-the-software-heritag https://cohost.org/arborelia/post/5052044-the-software-heritag GPL’d software I’ve created has been packaged for Guix, which I assume means it’s been included in SWH. While I’m dealing with their (IMO: unethical) opt-out process, I likely also need to stop new copies from being uploaded again in the future. Is there a way to indicate, in a Guix package, that it should *never* be included in SWH? Is there a way to tell Guix to never download source from SWH? I want absolutely nothing to do with them. Thanks, — Ian
Re: Fallout from recent nss-certs changes
The change is mentioned in the channel news, but it says nothing about needing to remove that part of the config. On April 21, 2024 1:32:38 AM PDT, "pelzflorian (Florian Pelz)" wrote: >Hello Ian. My understanding of the nss-certs etc/news.scm item had been >that we should remove (specification->package "nss-certs"), which became >unnecessary and clutters config.scm. From what you write, this was >actually not intended, but it is still not a bug IMHO. > >(I’m not involved with the change, though.) > >Regards, >Florian Thanks, — Ian
Re: Fallout from recent nss-certs changes
No, this is not a bug. specification->package always returns the latest version of a package and has no way of knowing what variable(s) that package object is bound to. On April 21, 2024 8:02:50 AM PDT, Felix Lechner wrote: >Hi, > >On Sat, Apr 20 2024, Ian Eure wrote: > >> If an operating-system’s packages includes `(specification->package >> "nss-certs")', this causes breakage, because that form selects version >> 3.98, but %base-packages includes 3.88.1, which causes an error on the >> next `guix system reconfigure' due to conflicting package versions in >> the profile. > >Why does the unversioned stringy selector (specification->package >"nss-certs") resolve to a version different from the unversioned >variable nss-certs? Is that a bug? > >Kind regards >Felix > >P.S. I hoped to use the word "reified" but did not know how it fit in. Thanks, — Ian
Re: bug#67512: [PATCH v7 0/3] Add LibreWolf
Clément Lassieur writes: On Fri, Apr 12 2024, Andrew Tropin via Guix-patches via wrote: On 2024-04-06 08:04, Ian Eure wrote: Moves nss update to nss-3.98 / nss-certs-3.98 to avoid rebuilding thousands of packages. Rebases. Ian Eure (3): gnu: Add nss-3.98. gnu: Add nss-certs-3.98. gnu: Add librewolf. gnu/packages/certs.scm | 16 + gnu/packages/librewolf.scm | 621 + gnu/packages/nss.scm | 45 +++ 3 files changed, 682 insertions(+) create mode 100644 gnu/packages/librewolf.scm base-commit: ade6845da6cec99f3bca46faac9b2bad6877817e Hi Ian, tested those patches, didn't notice any issues. Added pipewire to LD_LIBRARY_PATH to make screensharing on wayland to work. Added librewolf.scm to gnu/local.mk. Pushed as https://git.savannah.gnu.org/cgit/guix.git/commit/?id=3dc26b4eae Thank you very much for you work! Thank you Andrew for reviewing. Now that this is pushed, is there anyone maintaining this "librewolf" package? This is serious work, with security updates quite often. Hi Clement, I’m planning to continue sending patches for updates and the like. Getting a working updater is close to the top of my list. Right now the package is subject to CVE-2024-3852 (high) CVE-2024-3853 (high) CVE-2024-3854 (high) CVE-2024-3855 (high) CVE-2024-3856 (high) CVE-2024-3857 (high) CVE-2024-3858 (high) CVE-2024-3859 (moderate) CVE-2024-3860 (moderate) CVE-2024-3861 (moderate) CVE-2024-3862 (moderate) CVE-2024-3302 (low) CVE-2024-3864 (high) CVE-2024-3865 (high) The version in Guix is the latest available. I’ll send in a patch when the next release happens; I’m waiting on upstream for that. Thanks, — Ian
Re: bug#67512: [PATCH v7 0/3] Add LibreWolf
Ian Eure writes: Clément Lassieur writes: On Fri, Apr 12 2024, Andrew Tropin via Guix-patches via wrote: On 2024-04-06 08:04, Ian Eure wrote: Moves nss update to nss-3.98 / nss-certs-3.98 to avoid rebuilding thousands of packages. Rebases. Ian Eure (3): gnu: Add nss-3.98. gnu: Add nss-certs-3.98. gnu: Add librewolf. gnu/packages/certs.scm | 16 + gnu/packages/librewolf.scm | 621 + gnu/packages/nss.scm | 45 +++ 3 files changed, 682 insertions(+) create mode 100644 gnu/packages/librewolf.scm base-commit: ade6845da6cec99f3bca46faac9b2bad6877817e Hi Ian, tested those patches, didn't notice any issues. Added pipewire to LD_LIBRARY_PATH to make screensharing on wayland to work. Added librewolf.scm to gnu/local.mk. Pushed as https://git.savannah.gnu.org/cgit/guix.git/commit/?id=3dc26b4eae Thank you very much for you work! Thank you Andrew for reviewing. Now that this is pushed, is there anyone maintaining this "librewolf" package? This is serious work, with security updates quite often. Hi Clement, I’m planning to continue sending patches for updates and the like. Getting a working updater is close to the top of my list. Right now the package is subject to CVE-2024-3852 (high) CVE-2024-3853 (high) CVE-2024-3854 (high) CVE-2024-3855 (high) CVE-2024-3856 (high) CVE-2024-3857 (high) CVE-2024-3858 (high) CVE-2024-3859 (moderate) CVE-2024-3860 (moderate) CVE-2024-3861 (moderate) CVE-2024-3862 (moderate) CVE-2024-3302 (low) CVE-2024-3864 (high) CVE-2024-3865 (high) The version in Guix is the latest available. I’ll send in a patch when the next release happens; I’m waiting on upstream for that. Okay, I see that I’m incorrect about this -- LibreWolf is moving onto Codeberg, but I was looking at their GitLab project, which doesn’t have the recent releases. I’ll get this updated. Thanks, — Ian
Re: Concerns/questions around Software Heritage Archive
Hello Guixers,

It’s been another week with no response or movement on this. I’m disappointed that this situation seems to be getting treated so lightly. Adhering to the terms of software licenses is fundamental to the operation of the free software ecosystem; there is no software freedom without it. It’s surprising that a pretty clear-cut situation of creating derivative works of free software in violation of their licenses would be shrugged off so easily.

Whatever the Guix organization’s position is, I’m reaching my personal limit, and need to see some kind of positive movement on this[1]. If Guix is going to continue to facilitate license violations, I will have no choice but to remove my software from it in order to defend its licenses.

— Ian

[1]: Personally, I would be satisfied with a per-package setting which disables scheduling source for archiving by SWH. Seeing this, or a commitment to build this within a reasonable timeframe, would allay my concerns.

Ian Eure writes:

Hello, I’m following up on this discussion since it’s been a month and I haven’t heard any updates. Summarizing the situation: - SHF has an opaque, difficult, and undocumented process for handling name changes. I’d like to stress again that this is *not* strictly a transgender issue (though it likely affects them more, or in worse/different ways) -- it is a human respect issue. Many, many more cisgender people change their name than transgender people. - SHF gave their archive to HuggingFace, an "AI" company which is generating derived works with no attribution or provenance, in ways which violate both the licenses of the projects used to train their model, and the SHF principles for LLMs. - HuggingFace wasn’t respecting requests to opt out of their model. On the first point, it sounds like SHF has made concrete progress to improve[1], which is very good to hear. If SHF continues on this course, I think the concern is resolved. On the third point, HuggingFace has begun honoring opt-out requests, but is still very far behind. Also, they don’t remove code from the older versions of their model -- it remains there forever. This is progress, but still, not great. On the second point, I have not seen any public statements indicating that either SHF or HuggingFace even acknowledges the problem. SHF’s most recent newsletter[2], published in April 2024 (after these concerns came to light), continues to tout that StarCoder2 is "the first AI model aligned with our principles," which appears to be false. StarCoder2 includes both licensed and unlicensed code, and HuggingFace’s own StarChat2 playground produces works derivative of this code, with no attribution or licensing information. There is also no statement or position on the SHF news blog. Nor has HuggingFace either fixed their tools, or made a statement. This is still very much a live concern. I have a few questions: - Has Guix reached out to SHF to express these concerns / get a response? - Whether a public or private response, what would Guix consider to be an acceptable response? An unacceptable response? - How long is Guix willing to wait for a response? Thanks, — Ian

[1]: https://cohost.org/arborelia/post/5273879-they-are-fixing-some
[2]: https://www.softwareheritage.org/wp-content/uploads/2024/04/Software-Heritage-2024-Vision-Milestones-Newsletter.pdf

Ian Eure writes:

Hi Guixy people, I’d never heard of SWH before I started hacking on Guix last fall, and it struck me as rather a good idea. However, I’ve seen some things lately which have soured me on them.
They appear to be using the archive to build LLMs: https://www.softwareheritage.org/2024/02/28/responsible-ai-with-starcoder2/ I was also distressed to see how poorly they treated a developer who wished to update their name: https://cohost.org/arborelia/post/4968198-the-software-heritag https://cohost.org/arborelia/post/5052044-the-software-heritag GPL’d software I’ve created has been packaged for Guix, which I assume means it’s been included in SWH. While I’m dealing with their (IMO: unethical) opt-out process, I likely also need to stop new copies from being uploaded again in the future. Is there a way to indicate, in a Guix package, that it should *never* be included in SWH? Is there a way to tell Guix to never download source from SWH? I want absolutely nothing to do with them. Thanks, — Ian
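Regarding the per-package setting mentioned in the footnote above, one rough shape it could take is an entry in the package’s properties field, which packages already carry. The property name below is hypothetical and nothing in Guix currently reads it; this is only a sketch of the idea:

    (package
      (inherit some-package)  ; placeholder for the package being annotated
      (properties
       ;; Hypothetical marker that a future Guix/SWH integration could
       ;; honour by skipping archival requests for this package's source.
       '((swh-archival? . #f))))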
Did something with format-patch or send-email break?
I’m not sure of the precise mechanism employed, but I believe that in the past, if I ran `git format-patch' and `git send-email', it would send an email to the right place. This is implied by the manual, which doesn’t mention a patch submission email address, except for an issue number address when sending a patch series. This doesn’t seem to work anymore; the output of `git format-patch' has no To: header populated, so `git send-email' asks me where:

    meson!ieure:~/Projects/guix/staging$ guix describe
    Generation 13   Jun 08 2024 17:39:39    (current)
      atomized 6bb138d
        repository URL: https://codeberg.org/ieure/atomized-guix.git
        branch: main
        commit: 6bb138db5b7f56f399c9cb2e0b45fecaa8cd0182
      guix bc8a41f
        repository URL: https://git.savannah.gnu.org/git/guix.git
        branch: master
        commit: bc8a41f4a8d9f1f0525d7bc97c67ed3c8aea3111
    meson!ieure:~/Projects/guix/staging$ git log --oneline HEAD^..HEAD
    bc8a41f4a8 (HEAD -> master, upstream/master) gnu: mes: Update to 0.26.1.
    meson!ieure:~/Projects/guix/staging$ echo "changes" >> gnu/packages/python-xyz.scm && git ci -am "changes"
    [master 88e3f97240] changes
     1 file changed, 2 insertions(+)
    meson!ieure:~/Projects/guix/staging$ git format-patch -1
    0001-changes.patch
    meson!ieure:~/Projects/guix/staging$ git send-email 0001-changes.patch
    0001-changes.patch
    To whom should the emails be sent (if anyone)? C-c C-c
    meson!ieure:~/Projects/guix/staging$

Reproduced on Guix System, Guix on Debian (which the above output is from), and someone in #guix also reproduced it. Is it broken? Am I missing one of the numerous intricate fiddly bits of setup to make email patch flow work? The manual’s "The Perfect Setup" section only mentions Geiser and yasnippet setup, which isn’t relevant to this.

Thanks,

— Ian
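In case it helps anyone hitting the same thing while this is being investigated, a workaround is to pass or configure the destination for git send-email yourself. This assumes the usual guix-patches@gnu.org address for new submissions; adjust if your workflow differs:

    # One-off, for a single series:
    git send-email --to=guix-patches@gnu.org 0001-changes.patch

    # Or set it once per checkout so send-email defaults to it:
    git config sendemail.to guix-patches@gnu.org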
Reducing "You found a bug" reports
There’s a steady number of bug reports generated by the "You found a bug" message which happens during `guix pull's. The overwhelming majority of these reports are caused by networking problems or the Guix infrastructure being unreliable or overloaded. Many of these were submitted during the recent guix.gnu.org downtime. Some of these that I see: 55066 62023 62830 61520 58309 ...I’m sure there are many more. Is there some way for this code to be smarter about when it prints the "report a bug" message, so it doesn’t tell users to report bugs when none exist? Is there a way for it to notice that the problem is related to networking, and tell the users to try again in a little while? Is it worth removing the "report a bug" message entirely? It doesn’t feel great to tell users to report a bug for things that aren’t bugs. They’re either closed, or never followed up on; it’s a poor experience on both ends. Thanks, — Ian
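On the implementation question, here is a rough Guile sketch of the kind of discrimination I mean. This is hypothetical helper code, not how Guix’s error handling is actually structured, and the set of error keys treated as "network trouble" is only a guess:

    (define (call-with-network-hint thunk)
      ;; Run THUNK; if it fails in a way that looks like a transient
      ;; network problem, suggest retrying later instead of printing
      ;; the "You found a bug" message.
      (catch #t
        thunk
        (lambda (key . args)
          (if (memq key '(getaddrinfo-error gnutls-error system-error))
              (format (current-error-port)
                      "Transient network problem (~a); please try again later.~%"
                      key)
              (format (current-error-port)
                      "You found a bug: ~a ~s~%" key args)))))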
Re: Next Steps For the Software Heritage Problem
Hi MSavoritias,

Thank you for the email. I’m going to lay out this situation as clearly as I can, in the hope that others will better understand, and hopefully treat it with the seriousness it deserves.

1. Guix requests SWH to archive some source code. This is fine.

2. SWH archives the code. This is also fine.

3. SWH gives all their source to an AI company, HuggingFace. This is questionable. While fine in theory, the company they gave it to, HuggingFace, violates both the licenses of the code they’re given, and SWH’s own policy on LLMs. Instead of terminating the partnership, SWH has continued to tout it as "responsible AI" in the face of these violations[1]. This makes me doubt whether they’re acting in good faith.

4. HuggingFace trains a LLM out of all the code they’re given and redistributes it. This is *not* fine. The LLM is a derivative work of the source code it’s trained on, which violates the licenses of many projects in its training set -- it’s akin to compiling a gigantic .so file built from the SWH dataset.

5. HuggingFace uses its StarCoder2 LLM to generate source code. This is *also* not fine. This output is also a derivative work of the inputs, and it’s redistributed with no license or attribution whatsoever. HuggingFace purports to include attribution in their model, however, their own tools make no use of it and emit code with no attribution. You can observe this behavior yourself: https://huggingface.co/spaces/HuggingFaceH4/starchat2-playground

I understand Guix’s participation is several degrees removed from where the core of the problem lies. However, the partnership with SWH is indirectly enabling massive violations of the licenses of the software it packages. Guix should stop doing that.

Thanks,

— Ian

[1]: https://www.softwareheritage.org/2024/02/28/responsible-ai-with-starcoder2/

MSavoritias writes:

Hello, Context: As you may already know there have discussions around Software Heritage and the LLM model they are collaborating with for a bit now. The model itself was announced at https://www.softwareheritage.org/2023/10/19/swh-statement-on-llm-for-code/ As I have started writing some packages I became interested in how I might actually stop my code from ever reaching Software Heritage or at the very least said LLM model. Every single package in guix is added there automatically. I sent an email on Friday and I got an answer back that such consent mechanism hasn't been implemented and I was shown the legal terms. instead what I am supposed to do is: After guix has my code, my code will be automatically in Software Heritage and the LLM model. So I am supposed to opt out seperately with both of them to ensure that my code wont be used for future versions. This of course means that my code will stay forever in Software Heritage and the LLM model (or some version of it at least). The reasoning that was given was that code harvesting happens anyway and we give an opt-out. I am guessing its opt-out and not opt-in because they would have less code but this is speculation of course :)

This is against our desire to make it a welcoming space and also against the spirit of our CoC. Specifically because authors do not know this happens when they submit packages to Guix. So it is all done without consent.

Next Steps: So what can we do as a Guix community from here? Communication/Writing wise:

1. Add a clear disclaimer/requirment that any new package that is added in Guix, the person has to give consent or get consent from the person that the package is written in.
This needs to be added in the docs and in the email procedures.

2. Make a blog post of our stance towards Software Heritage and the code harvesting they are doing. This post will write in environmental and ethical grounds why Guix is against this and mention specifically Software Heritage. This is done to separate and mention that we do not like what is happening in case anyone comes asking, and hopefully give public pressure to Software Heritage.

3. Exclude all Software Heritage merch, stands, talks, people in official capacity, logos, or anything else that participates in social events of guix and write it in some rules we have. also write in channel rules that Software Heritage is offtopic same way Non-Free Software is offtopic.

4. There doesn't seem to be any movement on the side of Guix towards:
- Accountability in an official capacity of SH for the terrible handling of the trans name incident and a plan to make it easier in the future.
- The LLM problem that was mentioned in this email.

So with that said I urge anybody who has been in contact with them in an official Guix capacity to come forward, otherwise I can volunteer to be that. Idk if we have a community outreach thing I need to be in also for that. (we should if not)

The above make two assumptions: 1. That the Guix community is against LLM/"AI". Which for envi
Re: Next Steps For the Software Heritage Problem
Hi Greg, Please read my earlier reply in this thread[1]. HuggingFace is demonstrably violating the licenses of the Free Software used to train its StarCoder2 LLM. Software Heritage is continuing to partner with HuggingFace in spite of these violations. Guix is continuing to partner with SWH in spite of their continued support of these violations. Guix is indirectly enabling the violation of the license for the Free Software it packages. Guix has the power to stop doing that. What is your specific rationale for continuing to enable these clear license violations? Thanks, — Ian [1]: https://lists.gnu.org/archive/html/guix-devel/2024-06/msg00195.html Greg Hogan writes: On Tue, Jun 18, 2024 at 12:33 PM MSavoritias wrote: Ah it seems I wasn't clear enough. I meant write something like: By packaging a software project for Guix you are exposing said software to a code harvesting project (also known as LLMs or "AI") run by Software Heritage and/or their partners. Make sure you have gotten fully informed consent and that the author of this package fully understands what the implications are. Something like that. To make it clear that the package that is about to be added to Guix is going to be harvested for the LLM models Software Heritage decided to share the code with. Hope this is more clear. Free software licenses do not require bespoke consent to "to run the program, to study and change the program in source code form, to redistribute exact copies, and to distribute modified versions" (and "Being free to do these things means (among other things) that you do not have to ask or pay for permission to do so."). Your fear mongering against free software runs afoul of Guix project guidelines ("In addition, the GNU distribution follow [sic] the free software distribution guidelines. Among other things, these guidelines reject non-free firmware, recommendations of non-free software, and discuss ways to deal with trademarks and patents."). If you feel that LLMs/AI are violating the terms of a license, then feel free to pursue that through the legal system (potentially very profitable given the monetary penalties for violations of copyright). Otherwise, we should be celebrating the users and use of free software. I'm old enough to remember "Only wimps use tape backup: _real_ men just upload their important stuff on ftp, and let the rest of the world mirror it ;)" [https://lkml.iu.edu/hypermail/linux/kernel/9607.2/0292.html].
Re: Next Steps For the Software Heritage Problem
Guix sends archive requests to SWH. SWH gives that source code to HuggingFace. HuggingFace demonstrably violates the licenses. Guix could stop sending archive requests to SWH. This wouldn’t *stop* the bad things from happening, but it would *stop condoning* them. The same as how Guix not allowing non-free software doesn’t stop people from running it, but doesn’t condone it. Please read my replies in this thread, and the earlier "Concerns/questions around Software Heritage Archive" one. I have outlined the situation, repeatedly, with references. Thanks, — Ian Andy Tai writes: What is the role of GNU Guix in this? If Guix is mainly a referral mechanism, like web page links to the actual contents, isn't the real problem not Guix but the use of free software to train LLMs -- software which can be obtained directly via other mechanisms anyway, even if Guix is not in the loop?
Rust-team branch status?
Hi Guixers, I want to update the Librewolf package, but it now depends on Rust >= 1.76, which is newer than what's in master. I see the rust-team branch has versions up to 1.77 — is there a timeline for merging that, or a TODO list of things that need to be done to merge it? I'm not sure if I can help there, but would rather direct efforts towards getting rust updated than patching Librewolf to build with older versions.
Re: Rust-team branch status?
Hi Efraim, Efraim Flashner writes: [[PGP Signed Part:Undecided]] On Thu, Jun 20, 2024 at 05:10:11PM -0700, Ian Eure wrote: Hi Guixers, I want to update the Librewolf package, but it now depends on Rust >= 1.76, which is newer than what's in master. I see the rust-team branch has versions up to 1.77 — is there a timeline for merging that, or a TODO list of things that need to be done to merge it? I'm not sure if I can help there, but would rather direct efforts towards getting rust updated than patching Librewolf to build with older versions. I managed to burn myself out on rust stuff a few months ago and I'm finally coming back to the rust-team branch. There are still hundreds of patches sent for the branch which I had hoped to catch up on, but I'm fairly certain that the branch is in a good state for merging even now. Currently it has rust-1.77.1. There is a newer 1.77.2 available, and the newest version is 1.79. After merging the current branch I hope to be able to move the version of rust on the rust-team branch to whatever the latest version is. I’m very sorry to hear that you’re feeling burnt out. Would it be reasonable to merge the newer Rust versions, without changing the default from 1.75? That would unblock things needing them, without the risk of breaking packages which haven’t been updated. This might not work for other packages, but Guix seems to keep nearly every version of Rust around for bootstrapping the new ones, so I think this would work. Thanks, — Ian
Re: Rust-team branch status?
Efraim Flashner writes: [[PGP Signed Part:Undecided]] On Wed, Jun 26, 2024 at 10:46:56AM +0300, Efraim Flashner wrote: On Tue, Jun 25, 2024 at 08:48:12AM -0700, Ian Eure wrote: [...] I'll see about backporting(?) the newer rust versions from the rust-team branch to the master branch. That way they are available for things like librewolf even if they aren't used for the actual rust packages yet. It shouldn't be too hard and I can make sure it doesn't cause problems on the rust-team branch, even though it has to wait a bit until its turn to merge. I've pushed through rust-1.79 to master and I've built them on x86_64. My fast aarch64 build machine is currently offline so I can't test there and builds are ongoing on riscv64. The packages are public but hidden, so they can be pulled into a package definition if required (as rust-1.79) but can't be installed with a simple 'guix package -i rust'. Wonderful, thank you very much for the quick turnaround! I know next to nothing about Rust, but if there’s something that would help the rust-team branch, please let me know. Thanks, — Ian
Proposal: nss updates
The nss package updates frequently, around once a month. It's also very low in the package graph, so a ton of stuff depends on it. The most recent update was a graft for security fixes, so we didn't have to rebuild everything, but the new Librewolf version once again requires an nss update. I'm considering options to balance update frequency vs. huge rebuilds. Mozilla has strong compatibility guarantees for nss, so the risk of packages breaking is very small. This is purely about the cost in CPU time to build and bandwidth to transfer packages. Mozilla provides an ESR channel for nss, but Guix doesn't use it — we went from 3.88.1 to 3.99, skipping 3.91, which is the current ESR. I think the default nss in Guix should be the ESR, but we should also have a package for the latest nss, for stuff which needs it. This would let us update to the most recent nss without rebuilding so much, and only eat that cost when there’s a new ESR -- this happens approximately once a year. Concretely: The current nss package should stay how it is. When the next ESR happens, it should update to that (ungrafting nss at the same time), and track ESR releases only from that point forward. I don’t think it would make sense to downgrade the current 3.99 package to the 3.91 ESR, so this will be a little funky until that release happens. The latest version of nss should be added as a second package, named "nss-latest", bound to `nss-latest'. It should track updates as frequently as needed. While I’d prefer having the packages named "nss-esr" and "nss", I think the ESR should get the more prominent "nss" name, which should make it easy for developers to do the right thing -- if a bunch of packages depend on nss-latest, we’re back to the initial problem. Code comments documenting this would also be added. We might also want to adopt this approach for nspr. I’m not sure about nss-certs; I think that should probably track the nss ESR, and I don’t think there’s a compelling need for a package tracking the rapid release channel. I do want to improve this package by having it reuse the origin of nss instead of duplicating it. Does all this seem reasonable to everyone? If so, I can start sending patches. Thanks, — Ian
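To make the proposal concrete, here is a rough sketch of what the second package could look like, assuming it simply inherits from the existing nss package and overrides the name, version, and source. The version, hash, and URL layout shown are illustrative placeholders only, not a tested definition:

(define-public nss-latest
  (package
    (inherit nss)
    (name "nss-latest")
    (version "3.XXX")                  ;placeholder rapid-release version
    (source
     (origin
       (inherit (package-source nss))
       ;; Assumed to follow Mozilla's usual release layout.
       (uri (let ((version-with-underscores
                   (string-join (string-split version #\.) "_")))
              (string-append
               "https://ftp.mozilla.org/pub/security/nss/releases/"
               "NSS_" version-with-underscores "_RTM/src/"
               "nss-" version ".tar.gz")))
       ;; Placeholder hash; substitute the real one for the release.
       (sha256
        (base32 "0000000000000000000000000000000000000000000000000000"))))))

The point of inheriting from nss is that build phases, inputs, and comments documenting the ESR policy live in one place, and the rapid-release package stays a small delta on top of it.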
Re: Next Steps For the Software Heritage Problem
Hi Ludo, Ludovic Courtès writes: Ian Eure skribis: Guix sends archive requests to SWH. SWH gives that source code to HuggingFace. HuggingFace demonstrably violates the licenses. Which licenses? As has been said previously, and you can verify for yourself, it does not ingest code under copyleft licenses. While this is what their paper claims[1], it doesn’t appear to be true, since I can see my own GPL’d code in the training set. I’ve since moved nearly all of my code off GitHub, but if you visit their "Am I in The Stack?" page[2] and enter my old username ("ieure"), you will see pretty much every repository I ever hosted there, including both unlicensed and GPL’d code. Some examples are hyperspace-el, nssh-el, tl1-mode, etc. While there aren’t LICENSE files in those repos, the file headers of all clearly indicate that they’re GPL’d. Unfortunately, there is no way to check for the presence of code in the training set except by GitHub username. What I don’t know for certain is whether these are in the training set because they came from SWH, or because HuggingFace obtained them through other means. Given that all the links for my GitHub username on that "Am I in The Stack" link back to SWH, it seems very likely that it came from them. Thanks, — Ian [1]: https://arxiv.org/pdf/2402.19173 "We also exclude copyleft-licensed code..." [2]: https://huggingface.co/spaces/bigcode/in-the-stack
Re: Proposal: nss updates
Hi Felix, Felix Lechner writes: Hi Ian, On Thu, Jun 27 2024, Ian Eure wrote: The nss package updates frequently, around once a month. [...] I'm considering options to balance update frequency vs. huge rebuilds. Your plan sounds reasonable but my opinion is inconsequential. Instead, I'd like to point out that you are not alone: Wouldn't you like it if two days a month were set aside to allow uploads that trigger large rebuilds? The approach would pool intensive uploads in the time domain rather than how we do it now in space, namely branches. I think this is probably a good idea, though the implementation might be difficult to manage. I’m not sure where the patches would go if not some short-lived branch, so we’d still likely have the space complexity. A one-month timebox for large changes, where they merge or get backed out at the end, seems like it could be a reasonable way to break down some of the long-running branches. One thing we should be considerate of is users with limited bandwidth. Even if Guix has the compute to build and bandwidth to serve, not all of its users will. Thanks, — Ian
Re: Proposal: nss updates
Hi Maxim, Maxim Cournoyer writes: Hi Ian, Ian Eure writes: [...] Concretely: The current nss package should stay how it is. When the next ESR happens, it should update to that (ungrafting nss at the same time), and track ESR releases only from that point forward. I don’t think it would make sense to downgrade the current 3.99 package to the 3.91 ESR, so this will be a little funky until that release happens. The latest version of nss should be added as a second package, named "nss-latest", bound to `nss-latest'. It should track updates as frequently as needed. Conventionally in Guix that'd be called nss-next. Oh boy, naming things! :) I’m aware of the -next convention, but I’m not sure it makes sense here. It’s used primarily for newer versions of packages in the same release channel -- ex. the default Python in Guix is python@3.10, but there’s python-next@3.12. At some point, python will get promoted to 3.12.x -- the "next" name is a reflection of this; it’s intended to be the next python package in Guix. Upstream has a single release channel, with a sliding support window over those releases. The nss/nspr situation is somewhat different, as there are two upstream channels ("rapid release" and "extended support release"). ESRs are a subset of rapid releases, and the majority of rapid releases never enter the ESR channel. Naming a rapid release nss-next when it will almost never be promoted to the nss package feels wrong to me. While I’d prefer having the packages named "nss-esr" and "nss", I think the ESR should get the more prominent "nss" name, which should make it easy for developers to do the right thing -- if a bunch of packages depend on nss-latest, we’re back to the initial problem. Code comments documenting this would also be added. We might also want to adopt this approach for nspr. I’m not sure about nss-certs; I think that should probably track the nss ESR, and I don’t think there’s a compelling need for a package tracking the rapid release channel. I do want to improve this package by having it reuse the origin of nss instead of duplicating it. Does all this seem reasonable to everyone? If so, I can start sending patches. This all sounds reasonable to me. Thank you for thinking about it and proposing to improve the situation. Thank you very much for your thoughts. I opened 71832 which implements the first part of my nss proposal and updates Librewolf, so if you have strong feelings about -next vs. -latest, that might be the best place to raise them. Thanks, — Ian
Re: Proposal: nss updates
Felix Lechner writes: Hi Ian, On Mon, Jul 01 2024, Ian Eure wrote: if you have strong feelings about -next vs. -latest How about nss-rapid? It provides the clue about what was packaged to someone who knows libnss. I like it. I’ll update the package descriptions to make this clear as well. Thanks, — Ian
Re: Proposal: nss updates
Maxim Cournoyer writes: Hi, Ian Eure writes: Felix Lechner writes: Hi Ian, On Mon, Jul 01 2024, Ian Eure wrote: if you have strong feelings about -next vs. -latest How about nss-rapid? It provides the clue about what was packaged to someone who knows libnss. I like it. I’ll update the package descriptions to make this clear as well. Thanks for the explanations regarding the ESR and rapid release channels of distribution for NSS. I don't feel strongly about it, but the '-latest' suffix is a bit easier to grok for someone not acquainted with libnss. I don’t have a strong preference either way, but lean towards calling it -rapid, as it matches the upstream terminology. The package descriptions can disambiguate this, ex. adding "(ESR)" or similar to nss. The recent 3.101.1 NSS release is an ESR, per the release notes[1]. What’s the process for getting that update into Guix? Since it’ll cause many rebuilds, it needs to go into a branch first. core-updates seems like a reasonable place for it -- do I just send a patch and use prose to indicate that it should land in core-updates instead of master? Or, if I perform the work on the core-updates branch, will the patches indicate that when emailed? Thanks, — Ian [1]: https://firefox-source-docs.mozilla.org/security/nss/releases/nss_3_101_1.html
Impossible to remove all offload machines
Ran into this issue last week. If you: - Configure some offload build machines in your operating-system configuration. - Reconfigure your system. - Remove all offload build machines. - Reconfigure your system again. ...then various guix operations will still try to connect to offload machines, even if you reboot the affected client. This is caused by a bug in the `guix-activation' procedure: ;; ... and /etc/guix/machines.scm. #$(if (null? (guix-configuration-build-machines config)) #~#f (guix-machines-files-installation #~(list #$@(guix-configuration-build-machines config)))) If there are no build machines defined in the configuration, no operation is performed (#f is returned), which leaves the previous generation’s /etc/guix/machines.scm in place. The same issue appears to affect channels: ;; ... and /etc/guix/channels.scm... #$(and channels (install-channels-file channels)) I’d be happy to take a stab at fixing this, but I’m not certain which direction to go, or how much to refactor to get there. Should the channels/machines files be removed (ignoring errors if they don’t exist)? Should empty files be installed? Should that happen inline in `guix-activation', or in another procedure? Should the filenames be extracted to %variables to avoid duplicating them between the two places they’ll be used? If someone would like to provide answers, I would contribute a patch. Thanks, — Ian
Request for assistance maintaining LibreWolf
Hi folks, Last year, I spent several months getting the LibreWolf web browser packaged, reviewed, and contributed to Guix. I’m happy to have done so, and glad that it’s proved useful to others. One of the concerns raised as I was going through that process was responsibility for ongoing maintenance. I offered to take that on, and have followed through, continuing to contribute patches which improve the package and update it as new upstream releases occur -- which is very frequently. Unfortunately, much of this work is wasted, as the patches remain mired in the review backlog. The package is now three major versions out of date and suffers from numerous CVEs. The initial patch to update the version to 127.x was submitted on June 29th; updated to 128.x on July 17th; and I’ll be sending the patch updating it to 129.x later today, after I’ve finished building and testing it. I’m stuck in an impossible situation. I can’t apply for committer access until I have more accepted contributions, but can’t build up those contributions unless my patches are reviewed. It’s frustrating and demoralizing. Are there, perhaps, one or two committers who’d be willing to work more closely with me on LibreWolf on an ongoing basis? I’m not asking for help doing the work of maintaining the browser itself, which I remain committed to doing. I’m purely looking to consistently get timely feedback and review, because the normal process for contributions cannot reliably provide it. A second, and smaller, question is: is there a mechanism to direct others’ contributions to LibreWolf to me for review, without subscribing to every patch sent to Guix? I have seen some patches, and participated, but I have to go look for those, and it’d be more convenient if they were directed to me in the first place. Thanks, — Ian
Re: Request for assistance maintaining LibreWolf
Hi Sergio, Sergio Pastor Pérez writes: Hello Ian. I cannot help you since I don't have commit access. But I want to thank you for your hard work, I'm currently using your package. Thank you for the kind words, they truly mean a lot to me. Whatever the state of Guix proper, you can always find the current version of LibreWolf in my personal channel[1], though I don’t have a public substitute server, so long build times will await you if you choose this route. We should try to come up with a solution that alleviates the burden on the maintainers. Given how often this issue arises, what if we try, as a collective, to suggest new mechanisms that would improve the situation? If I recall correctly, someone suggested having a development branch in which, once the QA passes, the patches get automatically merged. I know some people raised concerns about the slowness of the QA system for this to be an effective solution, and there is also the issue of ordering the patch application. If the previous solution is ruled out, I would like to know the opinion of the Guix community on a voting system. I'm imagining a system where we reuse the mailing infrastructure we have, where each accepted mail in the guix-devel mailing list has 1 vote for a given patch; that way we avoid multiple votes from the same entity and would allow people without commit access, but active on Guix development, to participate. So, we could set up a threshold where if a patch gets 10 votes from non-committers the merge would be done; preferably automated, but if it's not possible, committers would know what is ready to be merged without effort and what the community wants. I’m not sure this would be effective, because the QA service is unreliable. I regularly see patches which simply don’t get picked up by it, including many of my own. At other times, it lags very far behind. I don’t think it’s reliable enough to be in the critical path for anything. Guix is supposed to be a rolling-release distro, so it feels strange to have a development branch which moves even faster. Thanks, — Ian [1]: https://codeberg.org/ieure/atomized-guix
Re: Request for assistance maintaining LibreWolf
The latest patch series has been sent (bug #71832). It fixes 14 CVEs, in addition to the 16 fixed in v5. I’ve backed out various improvements and bugfixes I wanted to include, and this does nothing other than the bare minimum to update the package. If anyone would like to step up and review the changes, I’d greatly appreciate it. Thanks, — Ian
Static hosting of substitutes
Hi folks, I’d like to provide substitutes for packages in my personal channel. The ideal setup for this would be for a machine on my internal net to perform the builds, then upload the results to another system on the open internet. That could be a machine running a web server pointed at a directory where the NARs get uploaded, or an S3-like object store, or something like that -- dirt simple, just shifting bytes off disk and out a socket. It seems that nothing like this exists, all the public substitute servers appear to use `guix publish'. That’s not an option for me, since it requires significantly more disk and compute than I have on any public-facing system, and I can’t justify the cost of bigger machines. What would it take to make a system like this work? Thanks, — Ian
Re: Request for assistance maintaining LibreWolf
It's not, IMO, because while it's very easy to set up a channel, it's very difficult to publish substitutes for it. I don't think collisions are any more likely, but perhaps you know of cases I haven't encountered. The larger risk is divergence of package definitions, so version X of a package in Bob's channel works very differently than version X+1 in Alice's. I'd greatly prefer to do the maintenance in Guix, as it'd be much simpler for everyone. — Ian On August 17, 2024 5:11:44 PM PDT, Andy Tai wrote: >I wonder how scalable this approach is, if many "package maintainers" >each have their own channel for the packages they are maintaining, and >made available this way. I would guess to use this approach the Guix >users have to do "guix package -u --allow-collision" > >> Date: Sat, 17 Aug 2024 12:43:11 -0700 >> From: Ian Eure >> Whatever the state of Guix proper, you can always find the current >> version of LibreWolf in my personal channel[1], though I don’t >> have a public substitute server, so long build times will await >> you if you choose this route. >
Re: Request for assistance maintaining LibreWolf
Suhail Singh writes: Ian Eure writes: The initial patch to update the version to 127.x was submitted on June 29th; updated to 128.x on July 17th; and I’ll be sending the patch updating it to 129.x later today, after I’ve finished building and testing it. Thank you for your continued commitment to this despite the lack of timely review. I appreciate your kind words; thank you. I’m stuck in an impossible situation. I can’t apply for committer access until I have more accepted contributions, but can’t build those contributions unless my patches are reviewed. It’s frustrating and demoralizing. I can empathize. I decided to take a step back from posting contributions earlier this year for similar reasons. I am hopeful this can improve in the (near) future. I’m feeling very similarly, and have been biasing to maintaining my own channel lately. A second, and smaller question, is: is there a mechanism to direct others’ contributions to LibreWolf to me for review, without subscribing to every patch sent to Guix? I have seen some patches, and participated, but I have to go look for those, and it’d be more convenient if they were directed to me in the first place. I believe the usual way of doing something like this is via teams (see ./etc/teams.scm ). I’m not sure whether/how well this mechanism works for non-committers. Thanks, — Ian
Re: Request for assistance maintaining LibreWolf
Hi Christopher, Christopher Baines writes: [[PGP Signed Part:Undecided]] Sergio Pastor Pérez writes: [...] We've had for many months a feature in QA [1] where people can mark patches as being reviewed and looking like they're ready to be merged, which is personally what I hope will mitigate this feeling of "I cannot help you since I don't have commit access", because you can help, you can review the patches and if you think they're ready to merge, you can record that, and this does help highlight patches that are ready to merge. Yes, I’ve used it before. Unfortunately, it doesn’t appear to be making a material difference, as the size of the backlog continues to grow[1]. Progress on this problem would result in the backlog decreasing. It doesn’t matter how many reviewers say it looks good -- a committer is required to actually push the changes. The macro problem of the review process being broken has existed for years, and there doesn’t seem to be consensus on the cause, much less a solution. Waiting for that fix is unreasonable, but if a committer was willing to collaborate with me, the worst effects could be mitigated. This is similar to how the Linux kernel works -- the "trusted deputy" approach. It’d also provide a path for contributors to grow into committers. Guix seems committed to using an email-based workflow, so I think it makes a lot of sense to look at how Linux does it. It’s the most successful project in the world to use email-based development. Thanks, — Ian [1]: https://debbugs.gnu.org/rrd/guix-patches.html
Re: Request for assistance maintaining LibreWolf
Christopher Baines writes: [[PGP Signed Part:Undecided]] Ian Eure writes: We've had for many months a feature in QA [1] where people can mark patches as being reviewed and looking like they're ready to be merged, which is personally what I hope will mitigate this feeling of "I cannot help you since I don't have commit access", because you can help, you can review the patches and if you think they're ready to merge, you can record that, and this does help highlight patches that are ready to merge. Yes, I’ve used it before. Unfortunately, it doesn’t appear to be making a material difference, as the size of the backlog continues to grow[1]. Progress on this problem would result in the backlog decreasing. It doesn’t matter how many reviewers say it looks good -- a committer is required to actually push the changes. I think it's unfair to say it's not making a difference, I really rely on it at least. I also think measuring the backlog and using that as the success metric is unwise, what we really want is an increase in throughput. Throughput of patch review is useless without considering the rate of new issues opened. It doesn’t matter how much review throughput increases if the new issue rate increases faster. What the graphs show is that the backlog has a trend of years-long growth -- that only happens when the open rate exceeds the close rate. The problem will continue to grow as long as that remains the case. Thanks, — Ian
Re: Static hosting of substitutes
Christopher Baines writes: [[PGP Signed Part:Undecided]] Ian Eure writes: I’d like to provide substitutes for packages in my personal channel. The ideal setup for this would be for a machine on my internal net to perform the builds, then upload the results to another system on the open internet. That could be a machine running a web server pointed at a directory where the NARs get uploaded, or an S3-like object store, or something like that -- dirt simple, just shifting bytes off disk and out a socket. It seems that nothing like this exists, all the public substitute servers appear to use `guix publish'. That’s not an option for me, since it requires significantly more disk and compute than I have on any public-facing system, and I can’t justify the cost of bigger machines. What would it take to make a system like this work? I've run a few substitute servers like this, the required code is actually quite simple and the build coordinator includes the necessary bits in the form of some included hooks [1]. 1: build-success-publish-hook and build-success-s3-publish-hook in https://git.savannah.gnu.org/cgit/guix/build-coordinator.git/tree/guix-build-coordinator/hooks.scm bordeaux.guix.gnu.org used to use the build-success-publish-hook to populate a directory with the narinfo and nar files, and NGinx simply served this directory, although now it uses the nar-herder to manage the nars (it still doesn't use guix publish). Thank you for the pointer. Does this integrate with Cuirass at all? I have a box running cuirass and guix publish which I’m using for my internal builds & substitutes -- ideally, I’d like NARs to get uploaded to a public host when Cuirass completes a build. Thanks, — Ian
Re: Request for assistance maintaining LibreWolf
Hi Ludo’, Ludovic Courtès writes: Hi Tomas, Ian, and all, Tomas Volf <~@wolfsden.cz> skribis: Ian Eure writes: I believe the usual way of doing something like this is via teams (see ./etc/teams.scm ). I’m not sure whether/how well this mechanism works for non-committers. I believe it should. AFAIK pretty much all it does is to automatically add the team members onto the CC list when running `git send-email'. Yes, it works whether or not one has commit rights, and I agree that it could be helpful here. Sounds good -- I’ll send in a patch to add myself. I’ve run into a couple cases where patches got merged while I’ve had other patches open, and getting notified will help coordinate which stuff goes when. At the same time it is not really meant as a general notification system, so usefulness for you depends on whether some committer will merge the commit adding a librewolf team (with you in it). Ian, what about teaming up with other Firefox derivative maintainers? I’m thinking notably of André and Clément who’ve worked on Tor Browser and Mullvad Browser, Mark H Weaver who’s been maintaining IceCat, and perhaps Jonathan who’s been taking care of IceDove (Cc’d)? Of course, each of these packages is different but they’re in the same area so it probably makes sense to share reviewing efforts here. This makes a lot of sense to me, and I think it would solve my immediate problem. Would it make sense to set up a browser-team mailing list and an etc/teams.scm entry which notifies it of patches/bug reports sent on any of the browser packages? PS: I too have been a happy LibreWolf user for some time now, so I join others in thanking you for the great work! I really appreciate hearing this, thank you for saying so. Thanks, — Ian
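For reference, an entry along these lines in etc/teams.scm is roughly the shape of what's being discussed. This is only a sketch from memory of the existing define-team forms; the field names and the scope file list are illustrative and would need checking against the real file:

(define-team browser
  (team 'browser
        #:name "Web browsers team"
        #:description
        "Firefox derivatives (IceCat, LibreWolf, Tor and Mullvad Browser)
and related packages."
        ;; Illustrative scope; the actual file names need verifying.
        #:scope (list "gnu/packages/gnuzilla.scm"
                      "gnu/packages/librewolf.scm"
                      "gnu/packages/tor-browsers.scm")))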
Re: Request for assistance maintaining LibreWolf
Ludovic Courtès writes: Hi, Ian Eure skribis: This makes a lot of sense to me, and I think it would solve my immediate problem. Would it make sense to set up a browser-team mailing list and etc/team.scm which notifies that of patches/bug reports sent on any of the browser packages? I would suggest not creating a separate mailing list per team. Anyone sending patches in the team’s scope (the #:scope argument in ‘teams.scm’) automatically Cc’s team members, so you don’t miss things relevant to you. How does one request the creation of a browser-team mailing list? Thanks, — Ian
Re: Request for assistance maintaining LibreWolf
Hi André, André Batista writes: > At the same time it is not really meant as a general > notification > system, so usefulness for you depends on whether some > committer will > merge the commit adding a librewolf team (with you in it). Ian, what about teaming up with other Firefox derivative maintainers? I’m thinking notably of André and Clément who’ve worked on Tor Browser and Mullvad Browser, Mark H Weaver who’s been maintaining IceCat, and perhaps Jonathan who’s been taking care of IceDove (Cc’d)? Of course, each of these packages is different but they’re in the same area so it probably makes sense to share reviewing efforts here. I was reticent on this mainly because (i) I have never actually used LibreWolf and I don't have a clear picture of it besides it being a "Firefox + Arkenfox - Mozilla Branding" [1](?); (ii) I expect that most of these security (aka urgent) patches will land on the same day on a regular basis for all 4 browsers and, given Mullvad and TorBrowser sources will be late in the game, I'll probably not be able to do such a timely (yet again, urgent) review, which could add to Ian's frustration instead of relieving it; and (iii) AFAIUI, LibreWolf moves at a faster pace, which adds to my concern of not being able to keep up in the long run. That being said, given those patches have remained unreviewed for weeks in a row, I guess I can at least help improve the current situation and give committers some more confidence that a given patch will not cause all hell to break loose when committed, and so I'm willing to help with these reviews. However, I cannot promise to maintain it if/when Ian's lead happens to go missing. This sounds reasonable to me, and I would greatly appreciate any assistance that could be provided. My hope is that with some closer working relationships with existing Guix folks and more contributions, I can apply for commit access and maintain LibreWolf autonomously. Perhaps next year. One question in that regard: is there any difference between reviewing through QA's web interface and sending mail commands to debbugs' control? Is either of them preferable? I'd rather use the mail interface if that's enough. There’s no major difference, whichever you prefer. I think email is more likely to keep threads grouped. Cheers. 1. No disrespect meant to the project or its users, just my own current cluelessness exposed. I'll read the docs on it to understand it better though. None taken, and you’re on the right track. LibreWolf ships several improvements combined into one package, including hardening the default preferences and disabling the numerous anti-features that ship with Firefox (full-page ads on update, Pocket, telemetry, DRM, etc.). They also don’t have the onerous trademark and logo requirements, which lets distributions ship with the upstream branding. The combination of better defaults, relaxed branding requirements, and closely tracking upstream makes it a very compelling choice, IMO. I’ve been daily driving it for several years. Thanks, — Ian
Re: LibreWolf 130.0-1 notes
Hi André, André Batista writes: Hi Ian, sex 06 set 2024 às 08:29:40 (1725622180), i...@retrospec.tv enviou: 130.0-1 is out, but there’s an issue around DNS-over-HTTPS preferences changing[1] in this version. Since this has a negative impact on users, requiring them to reset their preferences to correct it, I’m skipping this one and will package the next release. I separately have an issue open with them around the Firefox 130 AI Chatbot integration. This uses non-free "AI" services which have been trained on stolen GPL code, and hopefully will get stripped out of LW. It’s an experimental opt-in feature at the moment, but shouldn’t be present at all. Once there’s a fix for #1975, I’ll work on packaging it, disabling the AI nonsense in the Guix package definition, if necessary. Thanks, — Ian [1]: https://codeberg.org/librewolf/issues/issues/1975 In my opinion we should keep these discussions open and transparent for the whole Guix community, unless there are some serious concerns in sharing it with the internet at large or if they are truly personal exchanges of no interest to other Guix folks. Since I believe your above message _is_ of interest to Guix (guixen could have a different understanding on this heated debate and could have a distinct judgement on the desirability of skipping this release), there is no personal or private information in your message, and since we were instructed to keep team discussions on guix-devel[1], I'm taking the liberty to CC the list here. As for the content of your message, I think it's your call here, but IMO the DoH bug is annoying but not a showstopper, unless of course they are about to release a fixed version (which seems to be the case). 130.0-3 is out with this fix, and I’ve begun work to get it packaged, and to excise the GenAI sidebar. There are discussions happening around what to do with the sidebar[1]. My hope is that it’s removed upstream. Thanks, — Ian [1]: https://codeberg.org/librewolf/issues/issues/1919
Re: bug#72686: Impossible to remove all offload machines
Hi Maxim, Maxim Cournoyer writes: Hi Ian, Ian Eure writes: [...] Interesting! [...] I guess the simplest would be to attempt to remove the files when there are no offload machines or channels, in this already existing activation procedure. Extracting the file names to %variables sounds preferable, yes, if there's a logical place to store them that is easily shared. As I was putting together a patch for this, I realized there’s a problem: if a user is *manually* managing either /etc/guix/machines.scm or channels.scm, these files would be deleted, which likely isn’t what they want. The current code lets users choose to manage these files manually or declaratively, and there’s no way to know if the files on disk are the result of a previous system generation or a user’s creation. Since the channel management is a relatively new feature, I suspect there are quite a few folks with manually-managed channels that this would negatively impact. I know there was some disruption just moving to declarative management of channels (but I can’t find the thread/s at the moment). I don’t see an elegant technical solution to this. I think the best option is probably to say that those files should *always* be managed through operating-system, and put a fat warning in the channel news to update your config if they’re still handled manually. The only other option I can see would be to keep the existing filenames for user configuration, and declaratively manage different files -- like declarative-channels.scm. This comes with its own set of problems, like needing to update the Guix daemon to read and combine multiple files; and the inability to know whether a given `channels.scm' is declaratively- or manually-managed means a bumpy upgrade path (ex. should a preexisting channels.scm file be left as-is, or renamed to the new name?). I’m inclined to go with the fat-warning option, but am also thinking this likely needs some guix-devel discussion. What do you think? Thanks, — Ian
Re: bug#72686: Impossible to remove all offload machines
Hi Maxim, Ian Eure writes: [...] I’m inclined to go with the fat-warning option, but am also thinking this likely needs some guix-devel discussion. What do you think?
Disregard this, I continued thinking after sending the email (as one does) and realized that any managed file will be a link into the store -- so if the system is reconfigured with no build-machines or channels *and* the corresponding file is a store link, it should be removed; otherwise, it should remain untouched. I can work with this. Thanks, — Ian
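A minimal sketch of that check, under the assumption that it's enough to test whether the file is a symlink pointing into /gnu/store before deleting it. The procedure names are illustrative, not the final patch, and the real code would presumably use the store-prefix helpers rather than a hard-coded path:

(define (store-symlink? file)
  ;; #t when FILE exists, is a symlink, and points into the store --
  ;; i.e. it was installed by a previous system generation.
  (let ((st (false-if-exception (lstat file))))
    (and st
         (eq? 'symlink (stat:type st))
         (string-prefix? "/gnu/store/" (readlink file)))))

(define (maybe-remove-managed-file file)
  ;; Delete FILE only when it looks generation-managed; leave files the
  ;; user wrote by hand untouched.
  (when (store-symlink? file)
    (delete-file file)))

;; e.g., during activation, when no build machines are declared:
;; (maybe-remove-managed-file "/etc/guix/machines.scm")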
Re: Seemingly unintentional near-world rebuild
Fixing broken cc to Andreas. Ian Eure writes: When my Cuirass pulled commit 5794926bed6fad4598bb565fb7f49be4205b11a1 this morning, it started rebuilding every package in my channel. This includes a package with zero inputs other than what cmake-build-system needs[1]. ci.guix has been evaluating this commit for 90 minutes[2] at the time of this writing, but hasn’t started any builds (it’s also 504ing around half the times I try to load it). I think it’s going to do a world rebuild, or very close to it. Was this expected? Should anything be done about it? Thanks, — Ian [1]: https://codeberg.org/ieure/atomized-guix/src/branch/main/atomized/packages/floppy-disk.scm#L54-L77 [2]: https://ci.guix.gnu.org/eval/1733664
Re: python-dbus-python changes triggered many rebuilds
Would it make sense to sort package inputs when computing derivations to prevent this sort of unintentional change? I don't think the input order is important for the build, so this seems like it could be relatively simple to implement & avoid this recurring. On November 1, 2024 5:20:51 PM PDT, Vagrant Cascadian wrote: >A large rebuild was triggered by: > >commit a9abf9a7b30f6801e122cae759df87b44c458773 >Author: Sharlatan Hellseher >Date: Fri Nov 1 21:10:04 2024 + > >gnu: python-dbus-python: Fix indentation. > >* gnu/packages/python-xyz.scm (python-dbus-python): Fix indentation, >adjust order of fields, sort inputs alphabetically. > >Change-Id: I895518f041bd2cfc9c2f94774a9d1db47b26ffc3 > >Guix refresh claims this would trigger 3987 builds on x86_64-linux, and >ci is cranking away at over 13000 builds across several architectures: > > https://ci.guix.gnu.org/eval/1772855 > >Anyone able to cancel that evaluation? > > >I pushed a commit reverting the ordering changes, which I think appears >to not trigger the rebuild: > >commit ea11d3608566174c4bae70faa4f9d0c67748d2db >Author: Vagrant Cascadian >Date: Fri Nov 1 16:55:02 2024 -0700 > >gnu: python-dbus-python: Revert ordering change on native-inputs. > >A large number of rebuilds (3987 according to guix refresh) was triggered > by: > > a9abf9a7b30f6801e122cae759df87b44c458773 gnu: python-dbus-python: Fix > indentation. > >Reverting the ordering changes does not trigger any rebuilds. > >* gnu/packages/python-xyz.scm (python-dbus-python): Unsort native-inputs. > > >Hopefully that was the right thing to do! > > >live well, > vagrant
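To illustrate the idea only -- this is not how (guix packages) is structured internally, just a toy showing that a normalization pass (here, sorting inputs by label) would make the result insensitive to cosmetic reordering in the source file:

(use-modules (srfi srfi-1))

;; Toy data standing in for a package's input list; the labels are the
;; only part that matters for this illustration.
(define inputs
  '(("python-six" six)
    ("pkg-config" pkg-config)
    ("glib" glib)))

(define (normalize-inputs inputs)
  ;; Sort by label so a purely cosmetic reordering in the package
  ;; definition yields the same list, and hence the same derivation.
  (sort inputs (lambda (a b) (string<? (first a) (first b)))))

(normalize-inputs inputs)
;; => (("glib" glib) ("pkg-config" pkg-config) ("python-six" six))

Whether such a normalization is actually safe is a separate question, since input order can affect search paths in some builds.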
Re: Magic Wormhole Package Weirdness/Potential Security Issues?
Hi Juliana, I’ve observed some similar weirdness in the past when I’ve updated versions. I believe what’s happening is that Guix uses the hash to look up the file in a content-addressed store (either the local store or SWH), and is lacking verification that the retrieved object is the expected one. — Ian Juliana Sims writes: Hey folks, I tried to update magic-wormhole today and things went super smoothly. All I had to do was change the version number. I didn't even have to change the source hash. If that strikes you as odd, good! It should! To cover all my bases, I pk'd the hash produced by `pypi-uri` and used `guix download` to try to fetch the same file and check its hash, only to find that `guix download` couldn't find anything at that URL or its fallbacks. To test if things were being exceptionally weird, I switched to pulling and building from git, and the build failed, expectedly, probably because one of the dependencies (magic-wormhole-transit-relay) was not the right version, which was what I had initially expected to happen. Does anyone know what might be going on here? Given the intended secure nature of this program, I'm concerned there may be something malicious happening somewhere along the way. I would love an explanation that quiets that concern. You can look at the current magic-wormhole package source and play around with it yourself to see what I'm talking about. Best, Juli PS I was trying to update all three packages in magic-wormhole.scm, but the transit relay in particular requires later versions of twisted and autobahn than the other two, which is minorly annoying. I know twisted can't be updated without rebuilding a bunch of stuff, so I don't plan to pursue this further for the time being.
Re: bug#72686: Impossible to remove all offload machines
Tomas Volf <~@wolfsden.cz> writes: [[PGP Signed Part:Undecided]] Hello, Ian Eure writes: Disregard this, I continued thinking after sending the email (as one does) and realized that any managed file will be a link into the store -- so if the system is reconfigured with no build-machines or channels *and* the corresponding file is a store link, it should be removed; otherwise, it should remain untouched. I can work with this. Will this correctly handle cases where a user is managing the file using, for example, extra-special-file? No, it wouldn’t. I wonder whether the fat-warning approach would not be better. I think I agree. — Ian
Re: Status of ZFS support on Guix
Hi Kaelyn, Morgan, Kaelyn writes: On Tuesday, October 1st, 2024 at 1:23 PM, Morgan Arnold wrote: I'd love to know where any opposition may be at as well. At this point I have a private channel which actually replaces much of the bootloader and initrd functionality (in part to support ZFS in the initrd using https://issues.guix.gnu.org/55231). In the past year, I actually took advantage of having basically replicated much of the initrd functionality in my channel to create a simple bootloader based on the Linux kernel (with the EFI stubloader) and a custom initrd that uses kexec to boot the actual system. It still needs a lot of polish, but has been good enough that combined with a few other small hacks and workarounds, I have several systems now booting with ZFS roots (some unencrypted, some using native encryption). I have done little to upstream most of it, or even to share what I've done, because of the seeming resistance to ZFS. I don’t think there’s resistance to ZFS. I do think there are some legitimate open questions around licensing[1], but the main issue seems to be that the contributor of #45692 chose to express their frustration with the slow pace of Guix patch review[2] in counterproductive and borderline abusive ways[3][4]. Personally, I’d very much like to see improved support for ZFS in Guix. I have one machine with a cobbled-together Guix ZFS setup, but proper support is a blocker for moving my primary ZFS-using system off Debian. Thanks, — Ian [1]: https://issues.guix.gnu.org/45692#75 [2]: Which I *extremely* sympathize with. [3]: https://issues.guix.gnu.org/45692#72 [4]: https://issues.guix.gnu.org/45692#78
Re: Replace Icedove for Thunderbird
Hi Felix, Sergio, Felix Lechner via "Development of GNU Guix and the GNU System distribution." writes: Hi Sergio, On Sun, Nov 03 2024, Sergio Pastor Pérez wrote: The Debian wiki says that the trademark issues with Thunderbird have been resolved: As far as I know, the Mozilla Foundation simply accepted the Debian builds as official. I am not sure the same is true for GNU Guix. I don’t know about Thunderbird, but the maintainer of the Firefox packages in Debian is a Mozilla employee, which is likely why the trademark policy doesn’t apply there. — Ian
Re: Welcome New Committer Ekaitz!
Hi Ekaitz, On Tue, Nov 26, 2024, at 12:25 PM, Ekaitz Zarraga wrote: > On 2024-11-26 13:11, Efraim Flashner wrote: >> I'd like to welcome Ekaitz as our newest committer to Guix! >> >> Ekaitz, why don't you introduce yourself again, since it's been a while >> > > > Yes! > Congratulations! -- Ian
Re: guix package -i kicad will fait with #error "Unsupported CPU architecture" on my Talos II
Hi Tobias, On Sat, Nov 30, 2024, at 8:18 PM, Tobias Alexandra Platen wrote: > When I try to install Kicad, I'll get the following output: > > The following derivations will be built: > /gnu/store/5rinv8djwjz0bdami6nr6cm3zj382fsb-libcxi-1.0.1- > 0.5b6f8b5.drv > /gnu/store/x26zsx3fw74vhfc35i79fansmlmhl0cc-libfabric-1.22.0.drv > /gnu/store/nfj2qvhyxvfc7x5fkdlyg2z1wnpqw9cz-openmpi-4.1.6.drv > /gnu/store/l1rbmwj9m4kgkm1qada665mm0m6g10w1-libngspice-43.drv > /gnu/store/y88c48hzcm78ch39wjfa3dpdgw9n6m7r-webkitgtk-with-libsoup2- > 2.44.1.drv > /gnu/store/qv0zpz8h2dvhbmxz6smsbmc763rdkdxm-wxwidgets-3.2.5.drv > /gnu/store/llydl3baxardzgf9xmdbgjikny68vaar-python-wxpython-4.2.0.drv > /gnu/store/b0s2jgimknw6035kcqvpb6sdxkhakz7b-kicad-7.0.11.drv > > After a few minutes the build fails with: > > building /gnu/store/5rinv8djwjz0bdami6nr6cm3zj382fsb-libcxi-1.0.1- > 0.5b6f8b5.drv... > \ 'build' phasebuilder for > `/gnu/store/5rinv8djwjz0bdami6nr6cm3zj382fsb-libcxi-1.0.1- > 0.5b6f8b5.drv' failed with exit code 1 > build of /gnu/store/5rinv8djwjz0bdami6nr6cm3zj382fsb-libcxi-1.0.1- > 0.5b6f8b5.drv failed > View build log at > '/var/log/guix/drvs/5r/inv8djwjz0bdami6nr6cm3zj382fsb-libcxi-1.0.1- > 0.5b6f8b5.drv.gz'. > > CC utils/read_lat.o > > #error "Unsupported CPU architecture" > > I also found a blogpost which mentions libcxi, what does libcxi do? > https://hpc.guix.info/blog/2024/11/targeting-the-crayhpe-slingshot-interconnect/ > The package description provides a pretty good summary: Interface to the Cassini/Slingshot high-speed interconnect Libcxi provides applications with a low-level interface to the Cray/HPE Cassini high-speed NIC (network interface controller), also known as Slingshot. > I guess that libcxi is optional for Kicad, so I could hack guix to > build Kicad without libcxi. > I think this is worth a bug report, and maybe you're also up to sending some patches? I think it's slightly more complicated than just the kicad package, though, since libcxi isn't a direct dependency: $ guix graph --path kicad libcxi kicad@7.0.11 libngspice@43 openmpi@4.1.6 libfabric@1.22.0 libcxi@1.0.1-0.5b6f8b5 So, I think what needs to happen is that libcxi needs to have POWER9 removed from its supported-systems field; and libfabric needs to conditionally include libcxi in its inputs based on architecture. I think the best way to do that is by checking for the build system in (package-supported-systems libcxi), which will avoid hardcoding duplicate arch tests in two packages. -- Ian
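A rough sketch of the libfabric half of that, assuming the usual thunked-inputs behavior and the existing libcxi and libfabric package variables. This is untested, and the supported-systems list that libcxi itself would declare needs checking against what Cassini hardware actually runs on:

(use-modules (guix packages)
             (guix utils))                ;%current-system

;; Sketch only: drop libcxi from libfabric's inputs on systems that
;; libcxi does not declare support for (e.g. powerpc64le-linux), keyed
;; off libcxi's own supported-systems field so the test lives in one
;; place.
(define libfabric-with-conditional-cxi
  (package
    (inherit libfabric)
    (inputs
     (if (member (%current-system) (package-supported-systems libcxi))
         (package-inputs libfabric)       ;keep libcxi where it works
         (modify-inputs (package-inputs libfabric)
           (delete "libcxi"))))))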
Re: bug#72686: Impossible to remove all offload machines
Hi Maxim, Maxim Cournoyer writes: Hi Ian, Ian Eure writes: [...] The only other option I can see would be to keep the existing filenames for user configuration, and declaratively manage different files -- like declarative-channels.scm. This comes with its own set of problems, like needing to update the Guix daemon to read and combine multiple files; and the inability to know whether a given `channels.scm' is declaratively- or manually-managed means a bumpy upgrade path (ex. should this preexisting channels.scm file be left as-is, or renamed to the new name?) I'd think that would be a great option to pursue, although it needs more work and more thought. Perhaps it could work along these lines (brainstorming). I like the idea to leave the original, potentially manually written file in place and complement it with a declarative counterpart. The same would also have benefited /etc/guix/acl, which suffers from the same ambiguity. Apologies for the silence, life stuff has been eating most of my free time, but I have a bit of bandwidth to spend on this problem again. I took a swing at this, it wasn’t as difficult as I expected. While this approach gives a smooth upgrade path for those who’ve configured channels in a stateful way and are switching to declarative configuration, it’s possibly bumpy for those already using a declarative config. If a machine with declarative channels is reconfigured, the channels will be duplicated from /etc/guix/channels.scm to /etc/guix/channels-declarative.scm. Using `delete-duplicates' on the merged channels should avoid major problems, but I think it still needs a loud entry in news and manual action (deleting /etc/guix/channels.scm) to upgrade. Given that both approaches will require manual action, I’m a bit inclined to go with the simpler approach and take over the existing file. That said, I think the failure mode of the simpler approach (stomping on channels a user may have configured) is undeniably worse than potentially duplicating channels or continuing to pull in old ones unexpectedly. Do either of you have a strong opinion or more information which would help guide this decision? The root issue at work behind all these problems is that activation code only sees the desired target config, rather than the current and target configs. Comparing the current and target configs would allow the code to more precisely compute the needed change to move from one state to the next. I think that could be a good change to make, though it’s obviously going to be much more involved, and IMO will require discussion outside the scope of this specific bug. I have a draft patch series I hope to send up soon, but need to get Guix System up in a VM to test first. It does separate declarative channels into their own config, but doesn’t do the same for build machines. While I think many fewer users configure build machines than channels, it’s probably a good idea to use the same approach for both channels and machines. — Ian
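The merge step described above could look roughly like this -- a sketch of the idea only, not the draft patch; the declarative file name follows the one mentioned in the thread:

    ;; Sketch: combine the manually-managed channels.scm with its declarative
    ;; counterpart, dropping channels that appear in both.
    (use-modules (guix channels)
                 (srfi srfi-1))

    (define (channels-from file)
      ;; Return the channel list declared in FILE, or '() if it is absent.
      (if (file-exists? file)
          (primitive-load file)
          '()))

    (define (effective-channels)
      (delete-duplicates
       (append (channels-from "/etc/guix/channels.scm")
               (channels-from "/etc/guix/channels-declarative.scm"))
       (lambda (a b)
         (eq? (channel-name a) (channel-name b)))))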
Re: Why does `system reconfigure` need to `pull`?
Hi 45mg, 45mg <45mg.wri...@gmail.com> writes: Updating channel 'guix' from Git repository at 'https://git.savannah.gnu.org/git/guix.git'... It then says it's fetching and indexing objects, authenticating N new commits, etc. As far as I can tell, this stage is equivalent to `guix pull`. It sometimes repeats this several times during the course of the `reconfigure`. I didn't think much of this at first, although I thought it was odd that the manual did not mention a 'pull' stage [1]. I’ve also noticed this, and wondered why it always pulls first. Even if you have an internet connection, the pull makes the reconfigure slower than it could be. Now, I really want to believe that there's a way around this; that I haven't read the docs enough, and there's some option or command to reconfigure my system without pulling new commits. But I can't seem to find any such thing. (`guix time-machine` has the same problem [3].) Is there really no way to reconfigure my system without an internet connection? I dug around in the --help output, but I didn’t see anything that looked like it’d skip the pull. It’d definitely be nice to have an --offline or --no-pull option when reconfiguring. The simplest path forward for you is to replace your Guix channels with local clones of them, so `guix pull' uses your filesystem instead of the network. You can then `git pull' in your clone to get new commits when you like, and `guix pull' after that to update Guix to use them. I’ve heard of Guix getting used in places without much connectivity before, though I don’t know how it was accomplished. Maybe someone can chime in with their workflow. -- Ian
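For reference, the local-clone setup amounts to a channels.scm along these lines (paths are illustrative):

    ;; ~/.config/guix/channels.scm -- point the 'guix channel at a local
    ;; clone so `guix pull' reads from the filesystem instead of the network.
    (list (channel
           (name 'guix)
           (url "file:///home/user/src/guix")   ;path is an example
           (branch "master")))

Running `git pull' in /home/user/src/guix fetches new commits when connectivity is available; `guix pull' afterwards updates Guix from the local copy.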
Re: [GCD] Migrating repositories, issues, and patches to Codeberg
Hi Ludo’, I support this overall direction. Implementation details may require additional consensus, though I’m not sure they’ll rise to the level of a GCD; I would like to see some discussion of problems as they crop up. Forgejo is purpose-built for developing software; email is not. This difference shows up clearly in areas of tooling and consistency. Because email isn’t purpose-built for this, it must be adapted through the development of tooling to make it fit for purpose. This causes a lot of accidental complexity, rough integrations, and maintenance burden. I have personally encountered a large number of problems with the workflow and tooling which simply do not exist in forge-style tooling: I want to preface this by saying, I’m not criticizing individuals or their work or approach here. The work done on Guix infrastructure and process was valuable and useful, but IMO Guix has simply outgrown parts of it. But I think problems need to be acknowledged, so these are some of my observations about them. - As of today, February 8th, the most recent bug shown on issues.guix.gnu.org is from February 2nd. It’s been running at least a week behind for many weeks. Since it’s harder for people to see patches and bugs, it’s harder to get them reviewed and worked on. - The QA status on issues.guix.gnu.org is not very useful. By far the most common thing to see here is "QA: Unknown," with no indication of why it’s unknown or when it may become known[1]. Sometimes this is infrastructure failures; other times, QA is overloaded. Both present the same way. The important information this should provide is, in large part, absent. In turn, that makes it much harder for contributors -- particularly non-committers -- to ascertain if a given patch is problematic or not. - Sending a patch series is well-known to be harder than it should be. - Much of the infrastructure has suffered from frequent outages (once a week or more) which disrupt the user and contributor experience, and the situation hasn’t improved in at least a year. - The large number of tools needed to participate in Guix development is a real barrier to doing so. - Using email means that contributors often send patches "the wrong way" (ex. as attachments), which the tooling then rejects. Forges don’t suffer from this, because there’s only one way to contribute, which is the same for everyone. - The depth of patch review is inconsistent depending on the reviewer, which I believe is due to lacking a consistent process for doing so. Forge-style CI would improve this: it could report whether a package passes `guix lint', whether it triggers a large number of rebuilds, etc. The consistent application of these standards will, I believe, both ease burden on committers (you don’t have to remember to check these things) and raise the consistency of these policies getting applied. I am sensitive to those who are unhappy about using a web interface. I live in Emacs, I use EXWM, I’m writing this email in mu4e. I would like to see better Emacs tooling for interacting with Forgejo, so the heavier web interface can be avoided. I believe that in this instance, and others, the problems of making Forgejo work cleanly and consistently are significantly more tractable than making the existing email patch flow and tooling work cleanly and consistently. Therefore, to reiterate, I support this. -- Ian [1]: Here’s a patch I pulled at random, opened a week ago, which is stuck in "yet to process revision": https://qa.guix.gnu.org/issue/75991
Re: [GCD] Migrating repositories, issues, and patches to Codeberg
Hi all, Vagrant Cascadian writes: When working with salsa.debian.org (a gitlab instance) there is a way to fetch all the merge requests for a given git repository, so it just becomes part of my normal workflow and to some extent works offline. If codeberg had a similar feature, that would be great! Adding this line to the remote in .git/config will make PRs visible locally: fetch = +refs/pull/*/head:refs/remotes/origin/pr/* Alternately, you can add a separate remote just for PRs: [remote "pulls"] url = g...@codeberg.org:org/repo.git fetch = +refs/pull/*/head:refs/remotes/pulls/pr/* The latter setup can speed some workflows, since you can choose to fetch PRs or not. These work for any Forgejo or Gitea instance, and I believe they work the same on GitHub/GitLab also. -- Ian
Re: [GCD] Migrating repositories, issues, and patches to Codeberg
Hi Liliana, Liliana Marie Prikler writes: Hi Guix, That way, pushes to `master` will be limited to changes that have already been validated and built. Perhaps we should rename 'master' to something else too. Food for thought for a future GCD :) I support this, and think it’s worth including in the Codeberg proposal. Both changes will be disruptive, and I strongly suspect it’s easier to do both at once. -- Ian
Re: [GCD] Migrating repositories, issues, and patches to Codeberg
Hi Efraim, Efraim Flashner writes: On Sat, Feb 08, 2025 at 12:50:22PM -0800, Ian Eure wrote: Hi all, Vagrant Cascadian writes: > When working with salsa.debian.org (a gitlab instance) there > is a way to > fetch all the merge requests for a given git repository, so > it just > becomes part of my normal workflow and to some extent works > offline. If > codeberg had a similar feature, that would be great! > Adding this line to the remote in .git/config will make PRs visible locally: fetch = +refs/pull/*/head:refs/remotes/origin/pr/* Alternately, you can add a separate remote just for PRs: [remote "pulls"] url = g...@codeberg.org:org/repo.git fetch = +refs/pull/*/head:refs/remotes/pulls/pr/* The latter setup can speed some workflows, since you can choose to fetch PRs or not. These work for any Forgejo or Gitea instance, and I believe they work the same on GitHub/GitLab also. Is there a way to add this to my global gitconfig so that it happens automatically for all remotes from a specific forge? I don’t believe so. There may be a way to add it for all cloned repos, but I doubt it can be done automatically based on a remote pattern. Probably the best bet is to write a script which uses `git-config' to set it up, and either run that after cloning, or create a second `clone-pulls' which wraps `git-clone' and the script to configure the PR remote. -- Ian
Re: [GCD] Migrating repositories, issues, and patches to Codeberg
Hi 45mg, 45mg <45mg.wri...@gmail.com> writes: There are many possible consequences of this. One is that GCDs like this one would be seen by fewer people, so there would be less useful discussion and feedback. And in general, people outside of a small circle of old contributors already subscribed to the lists will not participate in, or even be aware of, core community discussions. So, I do feel that this is necessary to discuss, as one of the likely drawbacks (or opportunities!) presented by this GCD. I’ve had good success in professional settings with a PR workflow for this kind of thing. If you have a repo with the GCDs, new ones can be added with a PR to the repo, and discussion can take place by leaving comments on it. It was also beneficial to have the discussion and output of prior discussions shared, as context about current systems was easy to find. These were all internal engineering departments with <100 programmers, so I’m not sure how well it’d scale to a project the size of Guix. I’ve also noticed that when fully bought into forge systems, PRs often become the hammer that solves every problem, and this could arguably be a stretch -- certainly I have seen very inadvisable things managed through PRs. But in this case, I think it provided some value and may be worth considering in the future. -- Ian
Re: packages.guix.gnu.org down
Hi Christopher, On Sun, Dec 15, 2024, at 6:10 PM, Christopher Baines wrote: > "Ian Eure" writes: > >> Hi Guixers, >> >> Looks like https://packages.guix.gnu.org/ has been returning 504 errors for >> a few weeks. >> >> Any idea when it'll be back? > > Looks like it's working currently. > Indeed it is. I double-checked before mailing, and was consistently getting 504s. > I noticed some issues with data.guix.gnu.org (which > packages.guix.gnu.org uses) and cleaned things up today, which could > have fixed things. > Must have been! Thank you for whatever it was you fixed. :) -- Ian
Re: bug#74715: Request for merging "python-team" branch
Hi Lars-Dominik, On Sun, Dec 15, 2024, at 6:28 AM, Lars-Dominik Braun wrote: > Hi Ian, > >> Since this merge landed, the builds for several Python packages in my >> personal channel broke. Any package using pyproject-build-system for a >> Python project using setuptools seems to be affected. > > as Sharlatan Hellseher wrote in https://issues.guix.gnu.org/issue/74715#4, > you need to add python-setuptools and python-wheel to your > setuptools-based packages. The default python toolchain used by > pyproject-build-system (python-sans-pip-wrapper from > gnu/packages/python.scm) does not include these packages any more, > since they are technically not required and declaring them as *real* > inputs allows using different versions of these packages more easily > for packages, which require them. Plus there are quite a few packages, > which build using different build systems nowadays. > Thanks, this worked for me. I skimmed the related bug, but missed this comment. I think the docs for pyproject-build-system are likely the best place for this, as they already mention some of the setuptools/pyproject interaction. I sent a patch (#74899) with some draft language, let me know what you think. > The python importer should probably be updated to read pyproject.toml > and parse the [build-system] table (there is a toml parser in Guix now, > so this should be easy). > Would it be helpful to open a bug about this? Thanks, -- Ian
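For anyone else hitting this, the change being described boils down to something like the following sketch, with most fields elided:

    ;; Sketch: setuptools-based packages built with pyproject-build-system
    ;; now have to declare the build backend themselves.
    (define-public python-manhole
      (package
        ;; ...name, version, source, build-system, etc. as before...
        (native-inputs (list python-setuptools python-wheel))))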
packages.guix.gnu.org down
Hi Guixers, Looks like https://packages.guix.gnu.org/ has been returning 504 errors for a few weeks. Any idea when it'll be back? Thanks, -- Ian
Re: bug#74715: Request for merging "python-team" branch
Hi all, Since this merge landed, the builds for several Python packages in my personal channel broke. Any package using pyproject-build-system for a Python project using setuptools seems to be affected. This python-manhole package[1] is an example. It's about as simple as they can get, with no inputs or custom build steps. It's failing with: starting phase `build' Using 'setuptools.build_meta' to build wheels, auto-detected '#f', override '#f'. Prepending '[]' to sys.path, auto-detected '#f', override '#f'. Traceback (most recent call last): File "", line 6, in File "/gnu/store/jjcka1g6sk2cvwx8nm4fdwpdq3vll0v0-python-3.10.7/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1050, in _gcd_import File "", line 1027, in _find_and_load File "", line 992, in _find_and_load_unlocked File "", line 241, in _call_with_frames_removed File "", line 1050, in _gcd_import File "", line 1027, in _find_and_load File "", line 1004, in _find_and_load_unlocked ModuleNotFoundError: No module named 'setuptools' error: in phase 'build': uncaught exception: %exception #<&invoke-error program: "python" arguments: ("-c" "import sys, importlib, json\nbackend_path = json.loads (sys.argv[1]) or []\nbackend_path.extend (sys.path)\nsys.path = backend_path\nconfig_settings = json.loads (sys.argv[4])\nbuilder = importlib.import_module(sys.argv[2])\nbuilder.build_wheel(sys.argv[3], config_settings=config_settings)" "[]" "setuptools.build_meta" "dist" "{}") exit-status: 1 term-signal: #f stop-signal: #f> phase `build' failed after 0.0 seconds command "python" "-c" "import sys, importlib, json\nbackend_path = json.loads (sys.argv[1]) or []\nbackend_path.extend (sys.path)\nsys.path = backend_path\nconfig_settings = json.loads (sys.argv[4])\nbuilder = importlib.import_module(sys.argv[2])\nbuilder.build_wheel(sys.argv[3], config_settings=config_settings)" "[]" "setuptools.build_meta" "dist" "{}" failed with status 1 build process 10 exited with status 256 Since it's complaining about setuptools, I thought I might need to add that to the native-inputs. Alas, that fails with a different error: starting phase `build' Using 'setuptools.build_meta' to build wheels, auto-detected '#f', override '#f'. Prepending '[]' to sys.path, auto-detected '#f', override '#f'. usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] or: -c --help [cmd1 cmd2 ...] or: -c --help-commands or: -c cmd --help error: invalid command 'bdist_wheel' Is there somewhere I can find out how to fix these packages for the updated pyproject-build-system? Should they be getting switched to python-build-system? Noting also that the PyPI module for `guix import' hardcodes the pyproject-build-system, so it will generate unbuildable definitions for any Python project which uses setuptools. Thanks, -- Ian [1]: https://codeberg.org/ieure/atomized-guix/src/branch/main/atomized/packages/python-xyz.scm#L22 On Fri, Dec 13, 2024, at 5:00 PM, jgart wrote: > Hi Sharlatan, > > Guix cheerleader here. Go for it! Merge ahoy! > 🦜🦆 > LGTM > >
Re: Add transmission-qt to the transmission package?
Hi Bodertz, On Sat, Dec 21, 2024, at 5:39 PM, Bodertz wrote: > The transmission package (the transmission:gui output specifically) has > transmission-gtk, but I personally prefer transmission-qt. By including > qtbase, qttools, and qtsvg, transmission-qt will also be built. > > I've made a simple package that inherits from transmission to do this: > > (define-public tranmission-qt > (package >(inherit transmission) >(name "transmission-qt") >(inputs (modify-inputs (package-inputs transmission) > (append qtbase qttools qtsvg) > > This works, except that transmission-qt.desktop is moved to the gui > output due to the inherited 'move-gui phase. I could figure out how to > remove that phase to fix that, of course. > Interesting, does this mean the existing "gui" output of transmission installs transmission-qt.desktop even though it doesn't include transmission-qt? If so, that seems like a bug to me. > But I'd prefer if transmission-qt were an option for the transmission > package directly, either under a new :qt output, or to have both > transmission-gtk and transmission-qt under the :gui output. Or just a > separate package, transmission-qt. > I *personally* lean towards separate packages, because I think packages are easier to find than outputs; and because it reduces the build footprint, due to needing fewer inputs. Right now, if you build transmission, it needs the gtk libraries, even if you don't build the gui output. So I think the best thing here is three packages: transmission (daemon only), transmission-gtk, and transmission-qt -- where the latter two are derived from the former. However, this will break existing users' setups, since AFAIK there's no way to replace a specific output of a package with a different package. So to avoid breakage, I think it'd be best to add a gui-qt output to the existing package -- assuming you're looking to contribute a patch to Guix for this. If this is for a personal channel or something, you can do whatever makes sense for you. -- Ian
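A rough illustration of the separate-package direction -- hypothetical and untested; it assumes the inherited #:phases argument is a G-expression and that the phase to drop is the 'move-gui phase mentioned above:

    (define-public transmission-qt
      (package
        (inherit transmission)
        (name "transmission-qt")
        (outputs '("out"))                  ;no separate "gui" output here
        (inputs (modify-inputs (package-inputs transmission)
                  (append qtbase qttools qtsvg)))
        (arguments
         (substitute-keyword-arguments (package-arguments transmission)
           ((#:phases phases)
            #~(modify-phases #$phases
                ;; The inherited 'move-gui phase shuffles the desktop file
                ;; into a "gui" output, which doesn't exist here.
                (delete 'move-gui)))))))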
Re: [GCD] Migrating repositories, issues, and patches to Codeberg
Hi Tomas, Tomas Volf <~@wolfsden.cz> writes: Sorry for second email, few more comments and questions. Ludovic Courtès writes: ## Choice of a Forge The software behind the forge has to be free software that is *plausibly* self-hosted on Guix System—this probably rules out GitLab Community Edition I am curious about this. GitLab Community Edition is under MIT (so ticks the free software checkbox). While I am not an expert, I *think* it is mix of golang and ruby code, so that seems feasible to self-host on top of Guix system? And GitLab would have the advantage that (Magit) Forge works with it. I’m not sure what the current state is, but when I was looking at setting up a self-hosted forge, GitLab was operationally very difficult, and I would say it arguably fails to clear the bar of "plausibly self-hostable." And, having used it as a day-to-day system for work, it’s very buggy and complex; and the public gitlab.com instance has frequent outages. And if you take issue with Codeberg’s ToS, I don’t think you’re going to like GitLab’s. https://about.gitlab.com/terms/ ## Issue Tracker Migration Path Importing all the issues and patches from Debbugs/mumi into Codeberg would be impractical: it would require the development of specific tools, would be a lossy process due to the fundamental mismatch between plain text email threads and Forgejo issues and pull requests, and would bring little in return. I understand the impracticality, but just want to make sure I understand the implications. Will this mean that I will have to go over all my patches and resend them to Codeberg one by one, or is it expected the patches already sent will still be processed (just new ones will not be accepted)? I’m also wondering about the mechanics of this. With the volume of patches Guix gets, any single cutover date will impact someone’s work in flight. If it’s possible to transition by: - Setting a date for new work to occur in codeberg. - Disabling the creation of new bugs in debbugs on that date. - Allowing work in progress which was started in debbugs to be completed in debbugs. ...that seems like a reasonable way to shift over. ## Workflow [..] Since Guix requires signed commits by people listed in `.guix-authorizations`, we will *not* be able to click the “Merge” button nor to enable auto-merge on build success. Out of curiosity, is it possible to disable the merge button? I am pretty sure it is just a matter of time until someone presses it by accident. Yes. The repo settings let you control which merge styles are offered, and unchecking everything other than "Manually merged" will disable the button. -- Ian
New committer
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 Hello Guixers, I’ve applied for, and received, my commit bit! The manual says to send a signed email to say so, so that’s what I’m doing. By way of introductions: I’ve been using Linux since around 1995, when I installed Slackware off floppy disks I made from images on a Walnut Creek CD-ROM. I was a Debian package maintainer for a while, and have been contributing to Guix for a bit over a year. Primarily the LibreWolf browser and Emacs related packages, but I try to upstream most stuff I find myself needing. I’ve been an Emacs user since 1998 or so, have authored many packages for it, spoken at EmacsConf twice, and contributed one or two things upstream. -- Ian -BEGIN PGP SIGNATURE- iQJFBAEBCAAvFiEEaYCpuVICqhHrHYkihJmsiPGnHPIFAmd4HVQRHGlhbkByZXRy b3NwZWMudHYACgkQhJmsiPGnHPLd8g/+N9gWGwAQ99+lhdV4uXb/JrjBdRWQyk28 F1G/A5IPKno4R8erA5/JNBe7GS+5BkziWkbdwFej+MPFvLV6smavhb1Z7KemyfxP SS0dapt1M6+rKDriqczaQyoKRu1FK5qPzBD74JGoyIv0bgXX8JdnX0qHesDeFlkD qMjX65LAW92rljXl4wzPVllcMj9QtB4nJQGL7LGHo7wy/6aO6kR7+PVQ5XMkgpYO 0IIEIcX4WW2jfc4bqxb6qYQrHrn8EDp5IfCgHJsA2OTeCDK/mu7U7ui7G39TczgE /fv1VSoyNMXGEG1h79LiyzmGTFb9zwKwUQ5ynf2FOSNtPD+OAiNORlM9TvGA1+Xo /v+6W/Z7F5epiESTilIr/WYR9u2IiXIh3cAtpu3OSBjZQc9TkM+3fE3Eajn7tG5s mQvPiNzpzqsejSH+oZCi7hED9OU8VT2lkErdJ0xu32T1ZC7VU9kXDA4rBT7RhLc5 uC/4GfpuK+qr4Oz/fmYrDWB+krFAcTLXJjhFkONEw/+/R2sb8kWexy1Sod0IDyv1 oZzX8Ok5fJHFBAPpQQE3PWXRUutFb6T/3P0PQc6KjPYMtxRNyH7cf60Vr1Dg1p4u XNHAdA1Cxbj64XCQ43zygWulRRHdn4x28vIDpbU9qpycEMwHpmEls4B3MzZg/1gd nW2OkjRGi9o= =tb8y -END PGP SIGNATURE-
Re: Welcoming Steve as a new committer!
Welcome! On February 12, 2025 7:20:02 AM PST, Maxim Cournoyer wrote: >Hello Guix! > >I'm pleased to announce that Steve has just been enabled as a committer >to the project, bringing the number of committers to 48. >Congratulations! Happy committing, Steve! And thank you for all your >contributions to the Guix project thus far! > >-- >Maxim >
Re: [GCD] Renaming `Master` Branch Into Another Term
Hi Jonathan, indieterminacy writes: Hello, I believe that the term `Master` for the root branch is an anachronism. I propose that it is renamed to `The Gulf of Mexico`. Unless people have a better suggestion? Please don’t waste our time with trolling nonsense. -- Ian
Re: Discussion with Codeberg volunteers
Hi Ludo’, Ludovic Courtès writes: Hello Guix, • Availability: From what they told us, they didn’t experience any serious downtime over the last year or so. (I did see someone online suggest otherwise so if you have experience, please share. FWIW, over 3+ months of Guix-Science, I’ve seen a couple of 1–2h downtime notifications from them and experienced slowness once, but that’s about it.) Codeberg hosts my personal Guix channel and substantially all my public projects, thus gets exercised regularly. Since my channel is hosted there, it must be available for `guix pull' to function. I haven’t had a single instance where a Codeberg outage prevented this from working. In contrast, Savannah has had a handful of outages over the same period. I’m in the US, in the Pacific time zone. I’ve seen two kinds of Codeberg outages: maintenance windows and DDoS attacks. The maintenance windows are always communicated in advance and I personally haven’t been impacted by them. DDoS attacks have had some small impact. DDoS attacks aren’t unique to Codeberg, and I don’t feel that a service should be judged negatively because some inconsiderate people decided to make others’ lives difficult. -- Ian
Re: [GCD] Migrating repositories, issues, and patches to Codeberg
Hi Andreas, Andreas Enge writes: Hello Ian, Am Sat, Feb 08, 2025 at 08:33:14AM -0800 schrieb Ian Eure: - The QA status on issues.guix.gnu.org is not very useful. By far the most common thing to see here is "QA: Unknown," with no indication of why it’s unknown or when it may become known[1]. Sometimes this is infrastructure failures; other times, QA is overloaded. Both present the same way. The important information this should provide is, in large part, absent. In turn, that makes it much harder for contributors -- particularly non-committers -- to ascertain if a given patch is problematic or not. my understanding is that moving to Codeberg would not automatically enable QA, but that we would still need to connect our own QA to the forge CI system. So this issue is essentially independent of where we host our sources - if anything, the need for additional development could slow things down (but maybe the end result would be a simpler system, I do not know). Yes, work would be needed to build the CI tooling. That could be connecting the existing QA site to Codeberg, or building something new. I’d advocate for the latter, since a pure CI job is lighter weight than QA (it doesn’t need its own frontend), and as you mention, hooking QA to Codeberg means the existing slowdowns of QA would continue to happen. The decision where to host the sources matters in that Forgejo provides explicit mechanisms for these kinds of things, which the current tooling lacks. For example, you can prevent merges unless the CI jobs pass, you can retry failing jobs, the jobs themselves exist in the repo and can be managed by contributors in the same way as the main code, etc. - The depth of patch review is inconsistent depending on the reviewer, which I believe is due to lacking a consistent process for doing so. Forge-style CI would improve this: it could report whether a package passes `guix lint', whether it triggers a large number of rebuilds, etc. The consistent application of these standards will, I believe, both ease burden on committers (you don’t have to remember to check these things) and raise the consistency of these policies getting applied. I think the main problem will remain the availability of reviewers, which again is independent of where the issues are hosted. But indeed, having a form where reviewers can check what they have done might make things easier. Availability of reviewers will always be a bottleneck, but that’s exactly why I believe improved tooling will help. Right now, a proper review before committing is labor-intensive: you have to pull master, apply the patch, run `guix lint', run `make', build the package, etc etc. Automating this process both makes it more accessible and lowers the burden of each review, which can[1] increase overall throughput. -- Ian [1]: I specifically don’t say it /will/, but I do believe it /can/.
Re: Understanding #:substitutable? and #55231
Hi Morgan, Morgan Arnold via "Development of GNU Guix and the GNU System distribution." writes: Hello, If the issue is simply that the patch has not been rebased against a new enough version of Guix to be merged, I am happy to do that rebasing. Additionally, please correct me if I have made any incorrect assertions above. It does seem that #55231 ended up in a place where there was consensus that it was acceptable, but didn’t get merged for some reason or other. I definitely could be wrong, but I suspect the issue is that when non-#:substitutable? packages are used in places other than package inputs, the downstream derivations don’t carry that information. I believe when used as a package input, non-#:substitutable? packages do, in fact, poison all downstream derivations. Happy to be corrected if I’m wrong here. I think it’s reasonable to merge this after it’s rebased on current master, and would be willing to do that unless Maxime or Ludo’ raise an objection. However, you re-sent a v1 patch to a bug where four versions have already been sent. If you’d be willing to resend as v5 (with `git format-patch -v5 -2'), I can get it pushed. Thanks, -- Ian
Re: Understanding #:substitutable? and #55231
Hi Maxime, Maxime Devos writes: On 9/02/2025 2:06, Ian Eure wrote: Hi Morgan, Morgan Arnold via "Development of GNU Guix and the GNU System distribution." writes: Hello, If the issue is simply that the patch has not been rebased against a new enough version of Guix to be merged, I am happy to do that rebasing. Additionally, please correct me if I have made any incorrect assertions above. No. See the stuff about #:substitutable?. The reason I didn't answer back then, is that I don't want to keep being a broken record. Could you help me understand the case where this becomes a problem? Is it: - If you have one machine with an operating-system which includes a non-#:substitutable? out-of-tree kernel module in its initrd, and - A second machine with an identical initrd configuration, and - The first machine is configured to serve substitutes, and - The second machine uses the first as a substitute server Then the non-#:substitutable? module would be distributed, violating its license? I’d also find it helpful to understand the line for specific acts and entities in play, on a matrix of: allowing violations, encouraging violations, or committing violations; and by individual Guix users, or by the Guix project itself. For example, I think the Guix project encouraging or committing a violation is unacceptable. I think this would help a great deal to make the bounds of the problem clear, which is needed to solve them. If 'make-linux-libre' in the presence of ZFS leads to #:substitutable? problems, that doesn't mean it's fine to ignore the law for #52231. It means you need to: Could you please help me understand how `make-linux-libre' is in scope? I don’t believe any in-tree kernel modules have the problematic license terms, so I think the issue is purely out-of-tree stuff, whether that’s ZFS, nVidia drivers, "endpoint protection" security systems, etc. Perhaps you meant `make-initrd'? More specifically, ZFS proponents (at least as a group, and when limited to those visible in Guix) tend to be rather incoherent in their positions, in the sense that they simultaneously do: (snip) I appreciate your perspective, however, I’m more interested in understanding the problems so they can be solved. Any help in that area would be greatly appreciated. It does seem that #55231 ended up in a place where there was consensus that it was acceptable, but didn’t get merged for some reason or other. I definitely could be wrong, but I suspect the issue is that when non-#:substitutable? packages are used in places other than package inputs, the downstream derivations don’t carry that information. I believe when used as a package input, non-#:substitutable? packages do, in fact, poison all downstream derivations. Happy to be corrected if I’m wrong here. Not quite - to my understanding, the downstream derivations _also_ don't carry that information when it's in package inputs (at least, last time I checked there didn't seem to be any mechanism to set #:substitutable? to #false when any of the inputs are unsubstitutable (whether non-bag(?) derivation inputs, implicit inputs, native-inputs, ...)). Ah, hmm. So these kinds of violations are implicitly prevented by Guix not shipping things in combinations which would violate the license terms? For packages, in typical situations the #:substitutable? #false of any 'native-inputs' of a package shouldn't impact the substitutability of the package. For 'inputs', it rather depends (e.g. static/dynamic, the particulars of the license, is it because of license reasons or something else). 
Since it somewhat depends on the situation, if you implement a thing like this, I would recommend making it a _default_ for #:substitutable?(*), that can be overridden by some method. That’s a good suggestion, thank you. I think it’s reasonable to merge this after it’s rebased on current master, and would be willing to do that unless Maxime or Ludo’ raise an objection. First you say you suspect the issue is that #:substitutable?-related behaviour isn't right yet, and immediately in the next paragraph you say it's reasonable to merge it. Given that the patches haven't been adjusted to solve this, this is rather incongruent. While I agree that the fundamental #:substitutable? mechanism of Guix could use improvement, I don’t believe these patches need to wait for that work, because: - This is a generic mechanism useful for any out-of-tree module regardless of license[1]. - They won’t cause the Guix project to commit a license violation. - They don’t encourage individuals to commit license violations. - While they could /allow/ individuals to commit violations, many things in Guix already do, because it’s infeasible to forbid. To the last point: - Right now, Guix allows a user to m
Re: none
Hi Z572, Z572 writes: Liliana Marie Prikler writes: ## Repository Update Path For a complete list of repositories associated with the Guix project, see GCD 002 ‘Migrating repositories, issues, and patches to Codeberg’. Most repositories can rename their main branch with no issue (see also Cost of Reverting below). For Guix itself, we would decide on a **flag day** 14 days after acceptance of this GCD at the earliest, and 30 days at the latest. On that day, the main development branch would become "main". A commit would reflect that by updating: 1. the `branch` field in `.guix-channel`; 2. the `branch` field of `%default-guix-channel` in `(guix channels)`; 3. any other reference to the "master" branch of the Guix repository that may appear in the repository (in particular the Manual Updates above). My main concern is whether it will affect guix time-machine. Will the master branch be removed at some point after the migration to the main branch, or will it just stay there? Good question. The proposal calls for keeping the master branch for a period of time following the switch, which is important to prevent Guix users who update infrequently from getting stuck. I think it should eventually get removed, but it’s hard to know when to do it. Personally, I think not less than one year feels like the right ballpark. I don’t think removing the master branch would break most uses of `guix time-machine', because this defaults to the system’s current channels, and the new branch will have the same commit IDs. It shouldn’t matter whether those commits happened before or after the rename. The one case that could present problems is if `guix time-machine -C channels.scm' is called after the master branch has been removed, and channels.scm has the old branch name. This feels like an edge case to me, but if there’s strong evidence or consensus that this is important to maintain, the solution would be to keep the master branch around longer. Please let me know if I’m missing something. Thanks, -- Ian
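Concretely, the edge case is a channels file that pins the branch by name, for instance (illustrative):

    ;; A channels.scm like this stops resolving once "master" is deleted;
    ;; pinning a commit instead would keep working across the rename.
    (list (channel
           (name 'guix)
           (url "https://git.savannah.gnu.org/git/guix.git")
           (branch "master")))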
Re: G-Golf in Guix - G-Golf pkg(s) name(s)
Hi David, David Pirotte writes: Hello Ricardo, Is this such a problem, for guix, to make an exception? As you all understood by now, this is a bit of a sensitive subject for me. Ultimately, this kind of decision is a judgement call on the part of the committers reviewing the patches. In this case, the consensus is clear that the Guix convention should be upheld. Practically speaking, your choices at this time are to accept the conventions, or submit a patch removing g-golf from Guix. Thanks, -- Ian
Re: G-Golf in Guix - G-Golf pkg(s) name(s)
Hi David, David Pirotte writes: In Guix program languages specified libraries are named with the language as prefix, eg: python-six, perl-dbix-simple, and guile-g-golf. Guix should not do this. ... Changing this convention would require a very large amount of work and disrupt things for every Guix user. I would encourage you to propose a GCD if you feel strongly that it should change. You can read about that process here: https://issues.guix.gnu.org/74736 ... I was once told Guix does this to copy the way python name his packages ... But normal distro do not apply this rather weird rule - just do not, ever ever, rename upstream projects, whether libs or apps. I’m not sure what the history of it is; I’d be interested to find out. It should be clear that for 'guile-g-golf' you would use g-golf as a guile library for the current 'guile'; and for 'guile2.2-g-golf', you would use g-golf as a guile library for 'guile-2.2'. It is very clear, w/t this 'guix only' rule, for any g-golf user (who are by definition developers), on any platform, any where in the world, that they would program their own lib/app using guile ... Any Linux distribution will have some drift between its package names and the software those packages contain, either out of necessity[1] or distribution conventions. Debian has a similar convention for both Python and Go libraries, which are commonly prefixed with "python-" and "golang-", respectively. I agree that the name `guile-g-golf' is a little cumbersome, but it could be worse; I note that there’s a "go-0xacab-org-leap-obfsvpn" package. Please reconsider Because of the large impact such a change would make, the process for reconsidering is to write a GCD which can be deliberated. In the absence of a formal change of convention and plan to implement it, the existing convention should be maintained. Thanks, -- Ian [1]: For example, Debian often appends versions to package names so multiple versions can be installed at the same time.
Large rebuilds, nss, and QA
Hi Guixers, A while back, I proposed splitting the nss package into two, one for the ESR, and one for the rapid release. Because LibreWolf tracks the Firefox Rapid Release channel, it needs a newer nss than what Guix has fairly frequently, around 2-3 times a year. However, the nss package is low in the graph, and upgrading it triggers ~15k rebuilds. Splitting this into an infrequently-updated package tracking ESR, which most things can use, and a separate package tracking the rapid release channel, which LibreWolf (and anything else needing a newer version) can use is IMO a reasonable way to balance browser security updates vs. frequent huge rebuilds. I added the nss-rapid package to unblock LibreWolf, with the intent that when the next nss ESR happened, the nss package would get updated (and ungrafted). That release happened a while back, and #73152 has patches to fully implement the split and update nss to the latest ESR. The standard Guix process for updating packages low in the graph is to push to a branch, let QA build substitutes, then merge[1]. Unfortunately, QA is extremely bogged down, which means this is effectively impossible, because by the time it builds the branch -- if it ever does -- so much has changed in master that the work likely needs a rebase, starting the whole process over again. What are the options for large updates like this? Thinking it through, there seem to be four paths, in rough order of best to worst: 1. Fix QA. I don’t know what’s wrong, or have a sense of what it would take to fix. A one-week freeze on commits to let it catch up? Moving to faster hardware so it can keep up with the pace of development? Fixing QA seems like the best option, but also the least clear and most difficult. 2. Push to a nss-updates branch and have Cuirass on ci.guix.gnu.org do the builds, then merge. I’m not sure what’s involved in doing this, likely someone would need to add a specification to Cuirass for it to work. 3. Push to core-packages-team. Since this is already building in CI and nss is arguably a core package, this makes sense to me. The downside is that it’s not clear how long it’ll be until that merges, and if we’re in another situation where LibreWolf has a security update that requires an NSS update, it will cause conflicts. It also looks like the core-packages-team branch has been broken since 23 Jan[2], which presents further difficulties. 4. Merge to master and break substitutes for a few days. Any other options I haven’t considered? Thanks, -- Ian [1]: See `(guix) Managing Patches and Branches'. [2]: https://ci.guix.gnu.org/jobset/core-packages-team
Re: [GCD] Rename “main” branch
Hi Liliana, Thank you for putting this together. I sponsor. -- Ian
Re: New committer
Hi Greg, Greg Hogan writes: We have a C++ team! Has the survey shown that project contributors are happier when members of a team and those patches reviewed more promptly? I find Guix to be a powerful tool with unbounded potential. Thank you to all who have contributed. My particular use has been creating shared build environments with a focus on C++ development. My hope is to use the teams/branches workflow to provide more and more timely updates to these packages. Welcome! -- Ian
Re: Debbugs changes on #guix
Hi Felix, Guixers, Felix Lechner via "Development of GNU Guix and the GNU System distribution." writes: Hi, I enabled an experimental barebones feature that broadcasts Debbugs changes on #guix. It's another technical feature that could improve the speed at which bugs are being closed. Please let me know what you think! Reiterating what I said in IRC: I think the bot messages have value, but the value is subjective. To regular Guix users, they have more value, and to those looking for help with problems, they have less. Because of this, I think the messages need to be opt-in. The simplest way to do that is to put them in a separate channel, which those who want to see the messages can join. I could see posting in #guix for uncommon, exceptional, and/or important cases, such as new bugs with grave severity being filed (or existing bugs upgraded to grave severity). Thanks, -- Ian
Emacs dependent package input question
Hi Guixers, I’ve seen a few patches for Emacs packages lately which have the form: (package emacs-whatever (name "emacs-whatever") (description "Emacs interface for Whatever") (inputs (list whatever)) (arguments (list #:phases (modify-phases %standard-phases (add-after 'unpack 'set-whatever-path (lambda* (#:key inputs #:allow-other-keys) (emacs-substitute-variables "whatever.el" ("emacs-whatever-program" (search-input-file inputs "/bin/whatever") ...and looking in emacs-xyz.scm, many packages do this. I understand why this pattern exists -- making the inferior an input means that installing the Emacs package Just Works -- but it also means increased disk consumption and somewhat less user flexibility. On the flexibility side, it means that if you install emacs-whatever and my-personal-whatever-fork, emacs-whatever will continue using the stock whatever, not your fork; making this work requires transforming emacs-whatever to replace the input. I think that because this behavior is so different from how most other operating systems work, it’s surprising, and the solution isn’t obvious. On disk space, it means that many packages come with inputs to serve many possible usecases, whether those are relevant or not. One fairly trivial example: emacs-emms has inputs for mpg321, mid3v2, ogg123, ogginfo, and opusinfo. My music library is 100% FLAC, so these programs are never used, but consume disk on my Guix install. And on the other hand, I often use EMMS to play videos, but mpv *isn’t* an input, so this usecase is broken out of the box. Another example is emacs-plantuml-mode, which has plantuml as an input, which has icedtea as an input -- meaning an install of the Emacs package comes with a whole Java runtime. For another example, emacs-emacsql is most often used to access a SQLite database, but supports MySQL and PostgreSQL as well. The Guix package has mariadb and postgresql in its inputs, so it can set their program paths. As a consequence, those packages (which I don’t use otherwise) are consuming 800mb: $ du -shc $(find /gnu/store -maxdepth 1 -type d -name \*-postgresql-\* -or -name \*mariadb\*) | tail -1 804Mtotal I’m not picking on any individual package or contributor here -- as I said, the pattern is widespread -- and I have multiple generations in the store, which increase those numbers. But this feels like an area that could be improved. I’m not sure how to do that, though. I can think of a few options: a) Leave it as is. Don’t love it, but if there’s consensus that this is the right way, then okay. b) Make packages that align better with specific usecases, ex. emacs-emacsql-sqlite, -mysql, etc. This feels fraught to me, and I don’t think it works if you get emacs-emacsql-sqlite as an input to some other emacs-* package, but want emacs-emacsql-mysql as a user. Perhaps metapackages which only exist to combine dependencies would make this a workable approach. c) Don’t set them at all, and use the same $PATH late binding as is typical of other Emacs setups. This would mean that installing Emacs packages wouldn’t Just Work, and users would have to install the inferiors (and know what packages those are in) themselves. d) Have some better policies around when to use inputs and when not to. This might be the most pragmatic approach, though it means inconsistency from package to package. e) Build a suggested package mechanism into Guix. This has been floated a couple times before. 
If it defaults to not installing suggested packages, but telling the user about them, this would be like option C, but making it easier to find which packages you might need; if it installs them by default, it would work like the current setup, but give users some kind of out to avoid the extra bandwidth/disk usage. I don’t know how this would work for declarative package setups -- how would I express to Guix Home that it should install emacs-emms with flac (for metaflac), but without mpg321, vorbis-tools, and mp3info? I’d love to hear others’ thoughts. Thanks, -- Ian
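For what it’s worth, the closest thing available today is manual input surgery on the package used in the Home profile -- a hypothetical sketch (input labels are guesses, and the package’s path-substituting phases may also need adjusting to match):

    ;; Sketch: an emacs-emms variant that keeps flac but drops the MP3/Vorbis
    ;; helpers, for use in a home-environment packages list.
    (define emacs-emms/minimal
      (package
        (inherit emacs-emms)
        (inputs (modify-inputs (package-inputs emacs-emms)
                  (delete "mpg321" "vorbis-tools" "mp3info")))))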
Heads up: LibreWolf 137.0-1 withdrawn due to a bug
I pushed a commit for LibreWolf 137.0-1 Thursday night, and pushed a revert yesterday afternoon. The upstream release was pulled, as it contains a bug. Upstream Firefox 137.0 changed the profile storage location from ~/.firefox to ~/.mozilla/firefox, which resulted in LibreWolf’s profile location changing to ~/.mozilla/librewolf, which means it can’t find your configuration. The upstream bug tracking this is https://codeberg.org/librewolf/issues/issues/2400 If you upgraded during this time, I recommend that you downgrade back to 136.0.4-1. Alternately, you can move or symlink ~/.librewolf to ~/.mozilla/librewolf, though you’ll need to take additional action once the root issue is fixed. Thanks, -- Ian
Re: Emacs dependent package input question
Hi Nicolas, Nicolas Graves writes: On 2025-04-05 09:38, Ian Eure wrote: Hi Ian, thanks for this discussion. This is my (d) (better policy proposition): a) Leave it as is. Don’t love it, but if there’s concensus that this is the right way, then okay. I'd argue sometimes it's indeed the better solution, especially when the core of the package needs the binary to work properly, not when it's an additional option or feature. I'll refer to that as "essential binary" in the rest. I appreciate this line of thinking, but suspect that whether a binary is essential or not depends on the user’s usecase, which is unknowable at package build time. For a user using EMMS to play MP3s, flac isn’t essential, and mpg321 is; for someone using it to play FLACs, the inverse is true. (This is the case where not having the binary is absurd because it renders the emacs package unusable). I think it’s important to distinguish between "unusable" and "not usable unless the user takes some action." e) Build a suggested package mechanism into Guix. This has been floated a couple times before. If it defaults to not installing suggested packages, but telling the user about them, this would be like option C, but making it easier to find which packages you might need; if it installs them by default, it would work like the current setup, but give users some kind of out to avoid the extra bandwidth/disk usage. I don’t know how this would work for declarative package setups -- how would I express to Guix Home that it should install emacs-emms with flac (for metaflac), but without mpg321, vorbis-tools, and mp3info? I agree this is nice, but maybe it's enough to do that in Emacs packages themselves (ensuring they suggest the missing binary when it's not found) rather than in Guix Home itself. I also like this idea; do you have an idea what an implementation for that would look like? Presumably upstream package authors wouldn’t want Guix-specific stuff in packages that need to work on any OS/distro. -- Ian
Re: Emacs dependent package input question
Liliana Marie Prikler writes: Am Samstag, dem 05.04.2025 um 09:38 -0700 schrieb Ian Eure: Hi Guixers, I’ve seen a few patches for Emacs packages lately which have the form: (package emacs-whatever (name "emacs-whatever") (description "Emacs interface for Whatever") (inputs (list whatever)) (arguments (list #:phases (modify-phases %standard-phases (add-after 'unpack 'set-whatever-path (lambda* (#:key inputs #:allow-other-keys) (emacs-substitute-variables "whatever.el" ("emacs-whatever-program" (search-input-file inputs "/bin/whatever") Side note: the ordering of fields here looks rather displeasing. Yes, this is a contrived example, and aggressively wrapped. ...and looking in emacs-xyz.scm, many packages do this. I understand why this pattern exists -- making the inferior an input means that installing the Emacs package Just Works -- but it also means increased disk consumption and somewhat less user flexibility. On the flexibility side, it means that if you install emacs-whatever and my-personal-whatever-fork, emacs-whatever will continue using the stock whatever, not your fork; making this work requires transforming emacs-whatever to replace the input. I think that because this behavior is so different from how most other operating systems work, it’s surprising, and the solution isn’t obvious. You obviously can still customize it to use a PATH lookup. You are wasting disk space if you do so. If you don’t use the inputs to the Emacs packages, you’re wasting disk (and bandwidth) whether you do that or not. Alternatively, input rewriting such as the --with-input command line flag can replace whatever with your-personal-whatever-fork and thus do exactly what you want. Yes, this definitely solves it at a direct package level, but leaves much to be desired for cases of transitive dependencies. For example, emacs-orgit-forge depends on emacs-forge, which depends on emacs-emacsql, which depends on mariadb and postgresql, which none of the other packages actually use. I know there’s been some work on recursively rewriting package trees for cases like this, but I don’t believe any of it has merged, and I believe this sort of thing is far out of reach for new/casual Guix users. […] I’m not sure how to do that, though. I can think of a few options: a) Leave it as is. Don’t love it, but if there’s consensus that this is the right way, then okay. b) Make packages that align better with specific usecases, ex. emacs-emacsql-sqlite, -mysql, etc. This feels fraught to me, and I don’t think it works if you get emacs-emacsql-sqlite as an input to some other emacs-* package, but want emacs-emacsql-mysql as a user. Perhaps metapackages which only exist to combine dependencies would make this a workable approach. c) Don’t set them at all, and use the same $PATH late binding as is typical of other Emacs setups. This would mean that installing Emacs packages wouldn’t Just Work, and users would have to install the inferiors (and know what packages those are in) themselves. d) Have some better policies around when to use inputs and when not to. This might be the most pragmatic approach, though it means inconsistency from package to package. e) Build a suggested package mechanism into Guix. This has been floated a couple times before. 
If it defaults to not installing suggested packages, but telling the user about them, this would be like option C, but making it easier to find which packages you might need; if it installs them by default, it would work like the current setup, but give users some kind of out to avoid the extra bandwidth/disk usage. I don’t know how this would work for declarative package setups -- how would I express to Guix Home that it should install emacs-emms with flac (for metaflac), but without mpg321, vorbis-tools, and mp3info? f) Implement parametrized packages 😉 I think in practice, transformations and hopefully some day parameters will be a preferable solution to most of our Emacs-related customization woes. Indeed. Thanks for your thoughts, -- Ian
Re: './pre-inst-env guix build xxx' can not find packages in other channels.
Hi Feng, Feng Shu writes: Hello, I use the below channel setting: (service home-channels-service-type (cons* (channel (name 'nonguix) (url "https://gitlab.com/nonguix/nonguix") ;; Enable signature verification: (introduction (make-channel-introduction "897c1a470da759236cc11798f4e0a5f7d4d59fbc" (openpgp-fingerprint "2A39 3FFF 68F4 EF7A 3D29 12AF 6F51 20A0 22FB B2D5" %default-channels)) but when I run ./pre-inst-env guix build unrar it can not find unrar, for unrar is a package in nonguix channel. The pre-inst-env runs a different Guix than your user (or system) Guix, so it won’t know about things like your channel configuration. Running `./pre-inst-env guix describe' will show you what I mean. my question is: how to let pre-inst-env work well with my channel setting, and no need to use -L in every command. The pre-inst-env is for developing Guix itself, and isn’t needed to build packages from other channels. A simple `guix build unrar' will build whatever version of unrar your current Guix knows about. If you want to edit a package in a third-party channel, `guix build -L. unrar' inside the nonguix source tree is the best way to do that. -- Ian
Patch application workflows
Hi folks, I’ve had issues getting a reliable patch review/push workflow going, and would like to hear what others are doing for this. Specific issues I’d like to solve: * `mumi am' sometimes claims not to find patches. This just happened to me: guix!env!ieure:~/projects/guix/staging$ mumi current 77009 77009 [PATCH] gnu: Add emacs-elfeed-tube. patch team-emacs unanswered opened 27 hours ago by Cayetano Santos guix!env!ieure:~/projects/guix/staging$ mumi am -- -s No patches found I retried it a few minutes later, and it worked. I would like something that works consistently /every/ time. * guix-patches repo sometimes lags. I have https://git.qa.guix.gnu.org/git/guix-patches as a remote on my Guix clone. Sometimes I can `git cherry-pick -s issue-', but other times, the patch hasn’t been processed yet. * `debbugs-gnu-apply-patch' loses metadata I like the idea of using debbugs to apply patches, since it requires less tooling. However, while it reliably applies the patch itself, it loses the commit message and committer information. Is there some way to make it work more like `mumi am'? * Git hooks prevent pushing from within `guix shell'. I apply patches & run builds in a `guix shell -m manifest.scm openssh --pure'. However, .git/hooks/pre-push hardcodes `guix', which isn’t available in this environment, meaning I have to constantly jump in/out of the guix shell, or keep a second shell open just to `git push'. While I can push from Magit, it runs `make' during its validation, and doesn’t run in the same environment. Depending on the patch, this can fail, which means I have to switch to the `guix shell', run `make', then push from Magit. I would like to have a single session that can both build patches and push commits. If anyone has workflow bits I can use that improve on any of these, I’d be grateful to hear about them. Thanks, -- Ian
Re: Deliberation period for GCD 003 "Rename the default branch" has technically started
Hi Liliana, Liliana Marie Prikler writes: Hi Guix, as the date for the GCD 003 was set to February 18th, the discussion period actually ended on Saturday already. I have incorporated some changes on Sunday to realign the proposal with GCD 002 (the Codeberg one), but barring any emergency changes there, GCD 003 is now to be considered final. As outlined in GCD 001, please respond to this mail with one of the following: - “I support”, meaning that you support the proposal; - “I accept”, meaning that you consent to the implementation of the proposal; - “I disapprove”, meaning that you oppose the implementation of the proposal. To count my own vote, I support the change of the default branch name to “main”. I support. Thanks, -- Ian
Help with strange Guile build failures
I have two patches I contributed which I need to make some additional changes to, but which I can’t make progress on because they fail to build. The failures are repeatable -- I can replicate them every time, across multiple computers -- but not /consistent/ -- the same code fails when built from a clean state, but works if I build the base commit of the branch, then the HEAD.

First bug -- #77653. Working state is in the wasm-toolchain branch at https://codeberg.org/ieure/guix.git Symptom: `make clean && make' fails to build with:

  ice-9/eval.scm:293:34: error: clang-runtime-16: unbound variable
  hint: Did you forget a `use-modules' form?

It’d be nice if this gave me a file/line for the problem, but the only added mention of this package is in gnu/packages/wasm.scm, where the wasm32-wasi-clang-runtime package inherits from it. The clang-runtime-16 package is in (gnu packages llvm), is public, and gnu/packages/wasm.scm uses (gnu packages llvm). If I load wasm.scm in the Guix REPL, it works fine. If I build the base commit of that branch (7ff20b9e94), then build the HEAD (dd2172a054), it builds fine. It only fails when I `make clean && make' on dd2172a054.

Second bug -- #77106. Working state is in the autofs-service-type branch at https://codeberg.org/ieure/guix.git Symptom: `make clean && make' fails to build with:

  error: failed to load 'gnu/build/linux-initrd.scm':
  ice-9/eval.scm:293:34: In procedure symbol->string:
  Wrong type argument in position 1 (expecting symbol): #:cpio

That file is unchanged in the autofs-service-type branch. The code doesn’t have `#:cpio' in it at all:

  #:use-module ((guix cpio) #:prefix cpio:)

As with the first bug, if I check out the base commit (15562902da), `make clean && make', then check out HEAD (0ae700d9a7) and `make', it builds. If I check out HEAD and `make clean && make', it fails. The only changes in the branch are to gnu/packages/nfs.scm; gnu/build/linux-initrd.scm isn’t modified at all. Am I doing something wrong or missing obvious problems? Is this a Guile bug? Thanks, -- Ian
Re: G-Golf in Guix - G-Golf pkg(s) name(s)
Hi David, David Pirotte writes: Hi Ian, Ultimately, this kind of decision is a judgement call on the part of the committers reviewing the patches. In this case, the consensus is clear that the Guix convention should be upheld. Sorry to hear that [1], but I do not authorize guix to pick a different name for its g-golf package - a name other than the upstream (gnu project) package (and project) name, that is. As noted before, the name of the project has not been changed, and the name used by its Guix package definition is part of the Guix project itself. It was not "renamed"; a suitable name was chosen when the Guix package was created by Guix contributors. Your purported authorization is irrelevant, as the name is within the Guix project, not g-golf; and even if Guix had renamed the project, such changes are explicitly permitted by g-golf’s LGPLv3 license. Since you seem to be more devoted to maintaining the g-golf brand than adhering to the spirit of the license, a closed-source approach may be a better fit for your project -- though users are free to fork the last Free Software version of g-golf and maintain it independently, as allowed by its current LGPLv3 license. Practically speaking, your choices at this time are to accept the conventions, or submit a patch removing g-golf from Guix. Yes, I wish g-golf to be removed from guix. I'll submit a patch: please allow me to send the patch to this list; I am not a guix-patches list member. You don’t need to be a member to send a patch. Changes need to go through the guix-patches list and debbugs. Please see `guix (Contributing)' in the Guix manual[1] for the process to follow. Since this is library code, other packages in Guix may depend on it; please take care to coordinate the impact of its removal. Thanks, -- Ian [1]: https://guix.gnu.org/manual/devel/en/html_node/Contributing.html
Re: diff-wiggle missing wiggle in Emacs
Hi Christopher, jgart, Christopher Howard writes: "jgart" writes: Hi guixers, Should we include wiggle as a dependency of Emacs since it is used by the diff-wiggle function from diff-mode.el? For my part — an enthusiastic Emacs user — I'd be a little hesitant to go down the path of adding explicit package dependencies for runtime requirements of obscure commands. I would hate to think that one day I wouldn't be able to install or build the Emacs guix package because something was broken in the wiggle guix package. But I'm open-minded to further arguments for this. I agree with you: Emacs has a ton of stuff bundled with it, and I think including such dependencies would make for an unpleasant experience for most users. One of the joys of Emacs is building a system tailored for you, and that includes installing other software in many cases. Note also that Debian’s packages take the same approach: they don’t depend on much. It would be nice, I suppose, if Guix had something like "recommended packages" in Debian. I agree that this would be nice. -- Ian
Re: make dist and related fun
Hi Vagrant, Vagrant Cascadian writes: On 2025-02-15, Vagrant Cascadian wrote: The generated tarball also appears to be missing a few files, some of which seem fine (e.g. .gitignore) but some of which actually cause problems (e.g. missing po4a.cfg, tests/*.scm, gnu/patches/*.patch), some of which probably should be added to dist_patch_DATA in gnu/local.mk or other relevant values: Only in ../guix-master/gnu/packages/patches: librewolf-neuter-locale-download.patch LibreWolf 135.0.1-1 was released today and I’m prepping patches for it; I can include this fix if nobody beats me to it. Can we glob so everything in gnu/packages/patches gets pulled in? It feels odd to maintain a separate list; presumably the patches wouldn’t be in there if something didn’t need them. Thanks, -- Ian
Re: make dist and related fun
Also, thank you for tackling this! -- Ian
Re: GCD005: Regular and efficient releases
Hi Rutherther, Rutherther writes: Ian Eure writes: I think it’s worth considering doing this the other way around: instead of freezing master, cut a release branch and cherry-pick fixes into it as needed. I don’t expect that development on non-release features will stop during the freeze, which means we’ll have a large backlog of work to merge once the freeze ends; this is a thing Guix has historically not been good at working through in a timely manner. A release branch would also support longer-term stable releases, if we wanted to do that. The downside is that it’s more work to cherry-pick fixes between branches, and there’s the potential for merge conflicts. I think that currently this isn't achievable very well. If release and master are diverging branches, and they will be if work is in both of them, then if users install the system from the release, then pull and reconfigure, their forward update check will fail, as it expects the new commit to be a descendant of the old one, and it won't be. That would cause confusion. Hmm, good point, I think you’re right. -- Ian
Re: Committers: create and share your Codeberg account
Hi Ludo’, all, Ludovic Courtès writes: Hello Guix! If you’re a committer, please consider creating an account on Codeberg. To avoid problems, I suggest you send your account name as a public reply to this message, in a signed message. My codeberg username is ieure. https://codeberg.org/ieure Thanks, -- Ian
Re: GCD005: Regular and efficient releases
Hi Steve, Steve George writes: 3. Rolling updates aren't suitable for all users. If a Guix release is a point-in-time snapshot of a rolling release, and the first `guix pull' after installation puts you back into the rolling release model, I don’t think more frequent releases address this need. Adding a slower-moving branch akin to Nix's stable could be an eventual goal as it would increase Guix's suitability for some users and use-cases [^2]. However, this GCD only sets out to implement regular releases, which is a substantial change that would be a big improvement for our users. A specific issue caused by irregular releases is that new users/installs face a significant first "guix pull". This provides a poor initial user experience, and in some cases may even deter users [^4]. Additionally, it requires the project to keep old substitutes on our servers. It also causes user confusion because the assumption is that the release is what they should be using. We get a good number of folks in #guix reading the 1.4.0 manual, which doesn’t accurately reflect the current state of things. This GCD proposes an annual release cycle, with releases **in May**. To move onto this cycle the first release would be a little later: aiming for **November 2025**, with a short cycle to release in May 2026. I think it’d be better to align the initial release with the schedule we want to keep, so if we plan for a November release, plan to release every November. ## Release artifacts Using the primary architecture tier and the package sets would involve creating the following release artifacts: - GNU Guix System ISO image - GNU Guix System QCOW2 image - GNU Guix installer I’m not sure what the difference between the installer and the System ISO image is; could you elaborate, please? Again in an effort to reduce developer toil, additional release artifacts could be created but would not be part of the formal release testing and errors would not block a release. The 1.4.0 release artifacts are[1]: - Installer image for i686 and x86_64. - QCow image for x86_64. - Binary for i686, x86_64, armhf, aarch64, and powerpc64le. - Source tarball. Are the binaries and source tarballs "additional release artifacts"? ### 4. Toolchain and transition freeze No major changes to toolchains (e.g. gcc-toolchain, rust-1.xx) or runtimes (e.g. java). There should be no changes that will cause major transitions. Debian defines a transition as one where a change in one package causes changes in another, the most common being a library. This isn't suitable for Guix since any change in an input causes a change in another package. Nonetheless, any change that alters a significant number of packages should be carefully considered and updates that cause other packages to break should be rejected. No alterations to the Guix daemon or modules are accepted after this point. Packages and services in the 'minimal' package set should not be altered. I think it’s worth considering doing this the other way around: instead of freezing master, cut a release branch and cherry-pick fixes into it as needed. I don’t expect that development on non-release features will stop during the freeze, which means we’ll have a large backlog of work to merge once the freeze ends; this is a thing Guix has historically not been good at working through in a timely manner. A release branch would also support longer-term stable releases, if we wanted to do that.
The downside is that it’s more work to cherry-pick fixes between branches, and there’s the potential for merge conflicts. ### 7. Updates freeze Major package updates are frozen on 'master' as the focus is on fixing any blocking packages. Security updates still go to 'master'. This could prove troublesome, as the current Guix approach is to update packages to the version containing the fix. Are you thinking that we’d maintain that, or adopt a Debian style of backporting fixes where possible? ### 8. Breaking changes to staging To avoid a period of time where teams can't commit breaking changes, these are sent to a new 'staging' branch, rather than directly to master. The master branch slows down from this week. This is going to be difficult to manage, especially for contributions from those new to Guix development. Thank you for starting the discussion! -- Ian