Re: How can we decrease the cognitive overhead for contributors?

2023-08-25 Thread Attila Lendvai
another +1 for the general sentiment of Katherine's message.


> I am all for it if it supplements the email based workflow (every
> time I need to do a modern style pull request type action, I am
> completely out of my depths and lost in the web interfaces...).


in my experience, learning the quirks of the web-based PR model, at least as a 
contributor, is much less effort than the constant friction of an email-based 
workflow, let alone the learning curve of the emacs-based tools.

i couldn't even find out which tools are used by those who are comfortable with 
the email based workflow. i looked around once, even in the manual, but maybe i 
should look again.

i'm pretty sure most maintainers have a setup where emailed patches can be 
applied to a new branch with a single press of a button; otherwise it'd be a 
hell of a time-waster.

one fundamental issue with the email-based workflow is that its underlying data 
model simply does not formally encode enough information to implement a slick 
workflow and frontend. e.g. with a PR-based model the obsolete versions of a PR 
are hidden until needed (rarely). the email-based model is just a flat list of 
messages that includes all the past mistakes and the by-now irrelevant 
versions.


> But someone would have to write and maintain them...


there are some that have already been written. here's an ad-hoc list of 
references:

#github #gitlab #alternative
https://codeberg.org/
https://notabug.org/
https://sourcehut.org/
https://sr.ht/projects
https://builds.sr.ht/
https://git.lepiller.eu/gitile
(codeberg.org runs gitea, and sr.ht is sourcehut)

-- 
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“The condition upon which God hath given liberty to man is eternal vigilance; 
which condition if he break, servitude is at once the consequence of his crime 
and the punishment of his guilt.”
— John Philpot Curran (1750–1817)




Relaxing the restrictions for store item names

2023-08-25 Thread Nathan Dehnel
What you could do is implement percent encoding:
https://en.wikipedia.org/wiki/Percent-encoding
-Allows you to store package titles in any language in an encoded form
-Allows the titles to be typed on latin keyboards
-Allows the packages to be accessed through URIs in the future without
causing problems
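For illustration, standard percent-encoding (RFC 3986) round-trips any UTF-8 name losslessly; a quick Python sketch (the package name is a made-up example):

```python
from urllib.parse import quote, unquote

# A hypothetical non-Latin package name (the same made-up example
# that appears elsewhere in this thread).
name = "имагинари-програм"

encoded = quote(name)   # '-' is unreserved, so it stays readable
decoded = unquote(encoded)

print(encoded[:6])      # the first letter 'и' becomes %D0%B8
assert decoded == name  # the encoding round-trips losslessly
```

Each two-byte UTF-8 character becomes six ASCII characters, which matters for the filename-length concerns discussed later in the thread.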



Re: How can we decrease the cognitive overhead for contributors?

2023-08-25 Thread Attila Lendvai
> Now you might say that this leads to less diversity in the team of
> committers and maintainers as you need a certain level of privilege to
> seriously entertain the idea of dedicating that much time and effort to
> a project and I agree, but I also think this is a bigger reality of
> volunteer work in general.


the ultimate goal is not just diversity, but high efficiency of the people who 
cooperate around Guix, which then translates into a better Guix.

if the "rituals" around Guix contribution were merely a steep initial learning 
curve, then one could argue that it's a kind of filter that helps with the 
signal to noise ratio. but i think it's also a constant hindrance, not just an 
initial learning curve.


> Just because it's brought up a lot of times doesn't mean it's a good
> idea. There is a lot of good things that can be done for our web-based
> front ends; improving the search results on issues.guix.gnu.org would
> be one of them. However, I have little hopes for a web based means to
> submit contributions. I think email should be a format that's
> understood by folks who know how to operate a web browser.


again, i would press the argument that it's not about being able to, but about 
how much effort/attention is wasted on administration (i.e. not on hacking). i 
often have the impression that submitting a small to mid-size patch takes 
effort comparable to making it.

-- 
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“Virtually no idea is too ridiculous to be accepted, even by very intelligent 
and highly educated people, if it provides a way for them to feel special and 
important.”
— Thomas Sowell (1930–)




Re: Relaxing the restrictions for store item names

2023-08-25 Thread Eidvilas Markevičius
On Fri, Aug 25, 2023 at 11:37 AM Nathan Dehnel  wrote:
>
> What you could do is implement percent encoding:
> https://en.wikipedia.org/wiki/Percent-encoding
> -Allows you to store package titles in any language in an encoded form
> -Allows the titles to be typed on latin keyboards
> -Allows the packages to be accessed through URIs in the future without
> causing problems

Now that's an idea. I hadn't really thought of that. Although it'd
probably be tricky to make all the tooling compatible, I think it
might be a good solution nonetheless.



Re: How can we decrease the cognitive overhead for contributors?

2023-08-25 Thread Andreas Enge
Hello,

just a quick reply with what I do personally as one irrelevant data point :)

On Fri, Aug 25, 2023 at 08:07:53AM +, Attila Lendvai wrote:
> i couldn't even find out which tools are used by those who are comfortable 
> with the email based workflow. i looked around once, even in the manual, but 
> maybe i should look again.

No tools at all, I would say, which indeed may be a bit inefficient...
Or: terminal, mutt, vim, git
A bit of web for browsing the manuals (of Guix and Guile)
and issues.guix.gnu.org
But then I type much faster than I click.

> i'm pretty sure most maintainers have a setup where the emailed patches can 
> be applied to a new branch with a single press of a button, otherwise it'd be 
> hell of a time-waster.

In mutt:
save the message to /tmp/x
git am /tmp/x
or something like this.

Or:
git clone https://git.guix-patches.cbaines.net/guix-patches/
git checkout issue-x
git format-patch ...
then, in the development checkout of Guix:
git am ...; make; ./pre-inst-env guix build

> one fundamental issue with the email based workflow is that its underlying 
> data model simply does not formally encode enough information to be able to 
> implement a slick workflow and frontend. e.g. with a PR based model the 
> obsolete versions of a PR is hidden until needed (rarely). the email based 
> model is just a flat list of messages that includes all the past mistakes, 
> and the by now irrelevant versions.

For this, I go to issues.guix.gnu.org to download the newest patches
when the message is not in my inbox.
Otherwise I do not get your point: I keep untreated messages with the latest
patch version in my Guix inbox, and file away the others in a separate mbox.
So things are not flat, but have two levels: "to be treated" or "done".

Nothing to be documented, really, and I do not know whether these are just
personal habits or whether others work similarly. These might be the ways
of an aging non-emacs hacker...

> https://sourcehut.org/

This comes up a lot in the discussion and looks like an interesting
solution. It would be nice to be able to accommodate diverse styles
of working on Guix beyond (but including) emacs and vim.

Andreas




Re: How can we decrease the cognitive overhead for contributors?

2023-08-25 Thread Attila Lendvai
> I feel like the advantages of a email-based workflow nowadays is more on
> the maintainer side of things (as managing large projects is easier


another thing worth pointing out here is that the harder it is to test a 
submitted patchset locally, the fewer non-committer reviews will happen.

and if all the review work rests on the shoulders of the committers, then 
there'll be long response times on submissions, or outright 
forgotten/ignored submissions (khm). especially if it's about some hardware or 
software that none of the committers care about, or could test locally (e.g. 
Trezor support: https://issues.guix.gnu.org/65037, which doesn't even build on 
master).

-- 
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“A programming language is low level when its programs require attention to the 
irrelevant.”
— Alan Perlis




Re: Why does Guix duplicate dependency versions from Cargo.toml?

2023-08-25 Thread Zhu Zihao

Jonas Møller  writes:

> Hi Guix! Why does cargo-build-system need #:cargo-inputs specified in the 
> package definition? This seems like a big
> mistake for a couple of reasons.

As the nice people on the mailing list explained, when building a
package, Guix builders are not allowed to connect to the network, so the
crates have to be prefetched.

And AFAIK, Maxime Devos is working on a new build system called
"Antioxidant", which can build Rust applications without cargo (yes, it
invokes rustc directly!). The new build system will cache the
intermediate rlib results of crates and share them between builds.

> 1 It is completely redundant, it should match what is in Cargo.toml. I know 
> `guix import crate` exists to automate
>  this process, but I don't understand the rationale for duplicating
>  this information.

Guix is not the only one doing this; Debian also packages Rust crates [1].

> 2 It is bad practice for Guix to override Cargo.lock if it exists, this means 
> that Guix is building a different binary to the
>  one the developers of the packaged Rust application are seeing on their end, 
> this is a much bigger problem.  

OK, IMO almost all software packaged by Linux distros is "different"
from upstream at the binary level. A notable example: software developers
may prefer to bundle third-party dependencies to ease building their
package, but distro maintainers want to ensure every piece of software
uses the libraries provided by the distro [2]. And there may be
distro-specific patches (Red Hat backports security fixes to old package
versions for RHEL).

>  This can and will cause spurious build failures, or bugs that are unknown to 
> the developers of the Rust programs that
>  Guix packages.

The Rust crates in Guix are used to package Rust applications. If users
have a problem with a Rust application on Guix (e.g. ripgrep, fd, xxd,
bat...), they should report it to Guix first, so it's the Guix
developers' responsibility to smooth out those differences.

If users want to develop a Rust crate/application, they can still use
the "cargo" command and fetch crates from the crates.io registry.

 

[1]: https://packages.debian.org/sid/librust-bytecount-dev
[2]: https://wiki.gentoo.org/wiki/Why_not_bundle_dependencies
-- 
Retrieve my PGP public key:

  gpg --recv-keys B3EBC086AB0EBC0F45E0B4D433DB374BCEE4D9DC

Zihao




Re: How can we decrease the cognitive overhead for contributors?

2023-08-25 Thread Attila Lendvai
> For this, I either go to issues.guix.gnu.org to download the newest patches,
> in case the message is not in my inbox.


some patchsets evolve a lot, and there are countless messages where obsolete 
patch versions are intermingled with non-obsolete discussion...


> Otherwise I do not get your point: I keep untreated messages with the latest
> patch version in my Guix inbox, and file away the others in a separate mbox.
> So things are not flat, but have two levels: "to be treated" or "done".


my point is that in a PR-based model/workflow things like this are done by a 
program. and each brain cycle you spend on maintaining the sanity of your local 
inbox is not spent on hacking, and the results of your effort are not even 
mirrored into the inboxes of the other contributors.

this seems like a small thing, but multiply it by every message, and every 
potential contributor and maintainer... and then consider its cumulative effect 
on the emergent order that we call the Guix community.

meta:

the reason i'm contributing to this discussion is not that i'm proposing to 
move to some specific other platform right now. it's rather to nudge the 
consensus away from the conclusion that the email-based workflow is good and 
worth sticking with.

once/if we get closer to that consensus, only then should the discussion move 
on to collecting our requirements and evaluating the free software solutions 
that are available today. which again could be organized much better in a wiki 
than in email threads, but that's yet another topic...

-- 
• attila lendvai
• PGP: 963F 5D5F 45C7 DFCD 0A39
--
“The only valid political system is one that can handle an imbecile in power 
without suffering from it.”
— Nassim Taleb (1960–)




SSSD, Kerberized NFSv4 and Bacula

2023-08-25 Thread Nathan Dehnel
I once tried setting up kerberized nfsv4 and ended up falling down an
endless rabbit hole and eventually gave up. Instead, I encrypted nfs
using wireguard.
https://alexdelorenzo.dev/linux/2020/01/28/nfs-over-wireguard.html
Very impressive post though!



Re: Relaxing the restrictions for store item names

2023-08-25 Thread Eidvilas Markevičius
Although now, just a few hours later, I'm having second thoughts on
this. When you really think about it, it's very unlikely that some
user would prefer typing something like

guix install 
%D0%B8%D0%BC%D0%B0%D0%B3%D0%B8%D0%BD%D0%B0%D1%80%D0%B8-%D0%BF%D1%80%D0%BE%D0%B3%D1%80%D0%B0%D0%BC

over

guix install имагинари-програм

even if they don't have the Russian (or whatever other language)
keyboard layout set up on their system, so just for accessibility
purposes, the solution wouldn't be all that great. It would also make
store names unnecessarily long (they're already long as is), and
there's a 255-char limit for filenames that we have to keep in mind as
well. Searching the store with standard utilities such as find and
grep would, as a consequence, also break... There are just too many
problems with this.

I believe what Julien proposed is the most reasonable solution:
unrestrict unicode characters in the store and (maybe) make it a
project policy to not put unicode characters inside package names
(however, personally I wouldn't be against that either).

Now ensuring that URIs don't break, especially for substitute
provision, should also be taken into consideration, but this can be
handled separately.




Re: Why does Guix duplicate dependency versions from Cargo.toml?

2023-08-25 Thread (
Zhu Zihao  writes:
> and AFIAK, Maxime Devos is working on new build system called
> "Antioxidant", which can build rust application without cargo (Yes,
> invoke rustc directly!), The new build system will cache the rlib
> intermediate result of crate and share between different builds. 

Sadly, I think that's been abandoned :(

  -- (



Re: How can we decrease the cognitive overhead for contributors?

2023-08-25 Thread Wilko Meyer


Hi Attila,

Attila Lendvai  writes:

> i couldn't even find out which tools are used by those who are
> comfortable with the email based workflow. i looked around once, even
> in the manual, but maybe i should look again.

I can only speak for myself here, but I tend to use magit[0] from inside
emacs for most of these things (sometimes git format-patch and git
send-email directly on my shell). In magit there's:

- magit-am-* to apply patches[1]
- the magit-patch-popup to create patches[2]

I've written a few elisp functions on top of that, to be able to
e.g. directly apply a patch from a mail I've received (I use mu4e[3] as
my mail client) more conveniently. My set-up is far from perfect and
quite simple, but more often than not it is perfectly adequate for most
of my contributions to mail-based projects.

More generally speaking, there's a pretty good tutorial[4] on
git-send-email written by the sourcehut folks, which also includes steps
on how to get git-send-email going on Guix. I usually refer to that when
asked how to get started with an email-based git workflow. (I'm by no
means an expert on using said workflow (so I'd also be interested in how
it looks for other people on this mailing list), but I do know enough to
occasionally use it conveniently enough for my use-cases.)

[0]: https://magit.vc
[1]: https://magit.vc/manual/magit/Maildir-Patches.html
[2]: https://magit.vc/manual/2.13.0/magit/Creating-and-Sending-Patches.html
[3]: https://djcbsoftware.nl/code/mu/mu4e.html
[4]: https://git-send-email.io/

Best Regards,

Wilko Meyer



Re: Relaxing the restrictions for store item names

2023-08-25 Thread Kaelyn
Hi,

A couple of small early-morning (for me) comments below... not for or against 
the idea of percent encoding, but as a little bit of food for thought while 
pondering how to handle Unicode in package names and/or store paths.

On Friday, August 25th, 2023 at 2:01 PM, Eidvilas Markevičius 
 wrote:

> Although now, just a few hours later, I'm having second thoughts on
> this. When you really think about it, it's very unlinkely that some
> user would prefer typing something like
> 
> guix install 
> %D0%B8%D0%BC%D0%B0%D0%B3%D0%B8%D0%BD%D0%B0%D1%80%D0%B8-%D0%BF%D1%80%D0%BE%D0%B3%D1%80%D0%B0%D0%BC
> 
> over
> 
> guix install имагинари-програм

I imagine that, for usability, the percent encoding (or other encoding or 
transliteration) of non-ASCII characters could be handled transparently, i.e. 
for "guix install имагинари-програм", guix would translate "имагинари-програм" 
to the encoded form for operations. And if the escape character (e.g. the "%" 
in percent encoding) isn't also a valid character for store or package names 
then the values can be handled transparently. For example, both "guix install 
git" and "guix install %67%69%74" and "guix install g%69t" would all install 
git.
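That transparency is straightforward to prototype; a minimal sketch in Python, assuming (as above) that "%" is not itself a valid store or package name character:

```python
from urllib.parse import unquote

def normalize_package_name(name: str) -> str:
    """Decode any percent-escapes in a user-supplied package name.

    Assumes '%' is not itself a valid character in store or package
    names, so decoding is unambiguous.
    """
    return unquote(name)

# All three spellings from the example above resolve to the same name:
for spelling in ("git", "%67%69%74", "g%69t"):
    print(normalize_package_name(spelling))  # prints "git" three times
```

Since decoding is idempotent for escape-free names, already-decoded input passes through unchanged.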

> even if they don't have the russian (or whatever other language)
> keyboard layout set up on their system, so just for accessability
> purposes, the solution wouldn't be all that great.

> It would also make
> store name unnecessarily long (they're already long as is), and
> there's a 255 char limit for filenames that we have to keep in mind as
> well. Searching the store using standard utilities such as find and
> grep would too, as a consequence,

I split out the quote above as a bit of reference. While I agree that we have 
to keep in mind the 255 char limit for filenames, with percent encoding causing 
a single byte in ASCII or UTF-8 to become ~3 bytes (with iirc most non-latin 
characters having multi-byte encodings in UTF-8) and the store hashes being a 
33 byte prefix (counting the dash), 255 chars is still quite a bit. 
Specifically, the extracted quote above--without the "> " prefixes and with 
line breaks treated as single characters--is exactly 255 characters. (I find a 
bit of readable text to be helpful for wrapping my brain around a value like 
"255 characters".)
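The same arithmetic can be checked mechanically; a small sketch (NAME_MAX and the 33-byte hash prefix are the figures from the paragraph above, and the name is the made-up example from earlier in the thread):

```python
from urllib.parse import quote

NAME_MAX = 255     # common filesystem limit on a single file name
HASH_PREFIX = 33   # store hash plus dash, per the message above

name = "имагинари-програм"
encoded = quote(name)
# 16 two-byte Cyrillic letters become 6 chars each, plus one '-':
assert len(encoded) == 16 * 6 + 1

# Room left for the encoded name-and-version portion after the hash:
budget = NAME_MAX - HASH_PREFIX
print(len(encoded), budget)
```

So a 17-character Cyrillic name already consumes 97 of the 222 remaining characters, which supports the "quite a bit, but not unlimited" reading.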

Cheers,
Kaelyn




Re: Relaxing the restrictions for store item names

2023-08-25 Thread Eidvilas Markevičius
Well, what I realized just now is that this sort of "transparency"
may not even have to be handled by guix at all. If we remember that
we're on a unix-based system, a user who really wants to install some
piece of software with a unicode name, but doesn't know how to type
the requisite characters, could always use an external program to
transliterate it into another alphabet (e.g., translit from the
perl-lingua-translit package):

guix install $(echo imaginari-program | translit -t "ISO 9" -r)




Re: Relaxing the restrictions for store item names

2023-08-25 Thread Saku Laesvuori
> > Although now, just a few hours later, I'm having second thoughts on
> > this. When you really think about it, it's very unlinkely that some
> > user would prefer typing something like
> > 
> > guix install 
> > %D0%B8%D0%BC%D0%B0%D0%B3%D0%B8%D0%BD%D0%B0%D1%80%D0%B8-%D0%BF%D1%80%D0%BE%D0%B3%D1%80%D0%B0%D0%BC
> > 
> > over
> > 
> > guix install имагинари-програм
> 
> I imagine that, for usability, the percent encoding (or other encoding
> or transliteration) of non-ASCII characters could be handled
> transparently, i.e. for "guix install имагинари-програм", guix would
> translate "имагинари-програм" to the encoded form for operations. And
> if the escape character (e.g. the "%" in percent encoding) isn't also
> a valid character for store or package names then the values can be
> handled transparently. For example, both "guix install git" and "guix
> install %67%69%74" and "guix install g%69t" would all install git.
>
> > [...]
>
> > It would also make
> > store name unnecessarily long (they're already long as is), and
> > there's a 255 char limit for filenames that we have to keep in mind as
> > well. Searching the store using standard utilities such as find and
> > grep would too, as a consequence,
> 
> I split out the quote above as a bit of reference. While I agree that
> we have to keep in mind the 255 char limit for filenames, with percent
> encoding causing a single byte in ASCII or UTF-8 to become ~3 bytes
> (with iirc most non-latin characters having multi-byte encodings in
> UTF-8) and the store hashes being a 33 byte prefix (counting the
> dash), 255 chars is still quite a bit. Specifically, the extracted
> quote above--without the "> " prefixes and with line breaks treated as
> single characters--is exactly 255 characters. (I find a bit of
> readable text to be helpful for wrapping my brain around a value like
> "255 characters".)
>
> > break... There's just too many problems with this.

The encoding could also be transparent in the other direction, so the
percent-encoded form would be usable on the command line (in addition to
the UTF-8 one, of course), but guix would translate it to UTF-8 for
operations. This would allow typing all package names with only ASCII
characters but still keep the store readable and grepable. There are
most likely simple utility programs that can decode percent encoding, so
the store is also grepable with only ASCII characters.
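A minimal sketch of that direction (assuming UTF-8 names in the store; the store paths and query are illustrative, not real entries):

```python
from urllib.parse import unquote

# Decode an ASCII-only, percent-encoded query into the UTF-8 form
# that (under this proposal) would appear in the store, then filter.
query = unquote("%D0%B8%D0%BC%D0%B0%D0%B3%D0%B8%D0%BD%D0%B0%D1%80%D0%B8")

store = [  # toy stand-ins for /gnu/store entries
    "/gnu/store/aaaa-имагинари-програм-1.0",
    "/gnu/store/bbbb-hello-2.12",
]
matches = [p for p in store if query in p]
print(matches)
```

The decoding step is the entire "utility program"; everything after it is plain substring matching, as with grep.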

There is no reason (that I can see) not to allow UTF-8 in store
paths, other than it being hard to type on a keyboard for a different
locale. But how often do people actually want to type store paths by
hand? I at least avoid it whenever possible by using $(guix build ...), 
$(herd configuration ...), $(realpath /var/guix/profiles/...) etc.
Even when recovering a broken system, the only store path you really need
to type is that of a working guix (and /var/guix/profiles/... probably
also works on a broken system).

> > even if they don't have the russian (or whatever other language)
> > keyboard layout set up on their system, so just for accessability
> > purposes, the solution wouldn't be all that great.

I agree. It is really annoying and hard to write percent encoding by
hand, so this doesn't really solve the issue of UTF-8 being hard to
write with an ASCII keyboard.

Maybe some sort of fuzzy character matching could be used in guix search
instead of percent encoding. That way people could find packages even if
they can't type the entire name, and then use the name from guix search
(by copy-pasting or shell piping) to install it (or do whatever
operation they want).
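As a rough illustration of the fuzzy-matching idea (the transliteration table and package list are made up for the example, and this is stdlib difflib rather than whatever guix search would actually use):

```python
import difflib

# Tiny illustrative transliteration table (not a full ISO 9 mapping).
TRANSLIT = {"и": "i", "м": "m", "а": "a", "г": "g", "н": "n",
            "р": "r", "п": "p", "о": "o"}

def to_ascii(name: str) -> str:
    return "".join(TRANSLIT.get(ch, ch) for ch in name)

packages = ["имагинари-програм", "hello", "git"]
by_ascii = {to_ascii(p): p for p in packages}

# An ASCII-only (and even slightly misspelled) query still matches:
hits = difflib.get_close_matches("imaginary-program", by_ascii,
                                 n=1, cutoff=0.6)
print([by_ascii[h] for h in hits])
```

The user never types a Cyrillic character, yet the search result carries the canonical Unicode name, ready for copy-pasting or piping into guix install.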




Re: SSSD, Kerberized NFSv4 and Bacula OFF TOPIC PRAISE

2023-08-25 Thread jbranso
August 24, 2023 3:57 PM, "Martin Baulig"  wrote:

> Hello,
> 
> About 2–3 months ago, I got an initial prototype of Bacula working on GNU 
> Guix. I had the Bacula
> Director, two separate Storage Daemons and the Baculum web interface running 
> in a GNU Guix VM on my
> Synology NAS.

I had to look it up... Apparently Bacula is a way to back up computers on a 
network. Sounds cool!
https://en.wikipedia.org/wiki/Bacula

> At some point, I would really love to upstream these changes, but it's quite 
> a complex
> configuration - and I also had to do quite a few refactorings and clean-ups 
> for this to pass my
> personal quality standards.
> 
> One issue I had to deal with is that Bacula heavily relies upon clear-text 
> passwords in its various
> configuration files. To communicate between its different components, it uses 
> TLS with Client
> Certificates in addition to passwords. So in addition to writing clear-text 
> passwords into various
> configuration files, the X509 private keys, DH parameters, etc. also need to 
> be installed into
> appropriate directories.
> 
> I came up with quite an elegant solution for this problem - and introduced 
> three new services and
> an extension.
> 
> * My "guix secrets" tool provides a command-line interface to maintain a 
> "secrets database"
> (/etc/guix/secrets.db) that's only accessible to root. It can contain simple 
> passwords, arbitrary
> text (like for instance X509 certificates in PEM format) and binary data.

I know guix has been wanting to figure out how to have services that need 
passwords in their configuration files. This sounds like it could work!

> * The problem with the standard activation service is that it runs early in 
> the boot process and
> all activation actions are run in a seemingly random way, there isn't a way 
> to provide any real
> dependencies. Any failures could possibly prevent the system from fully 
> booting up.
> 
> I created a new "activation-tree-service-type" - currently experimental and a 
> bit in a refactoring
> stage. It creates a separate one-shot Shepherd service for each activation 
> action, and you can
> declare dependencies between them.
> 
> Since it's using normal Shepherd services underneath the hood, you could for 
> instance depend on
> user-homes and the network being up, so you could SSH in and use GNU Emacs to 
> fix any issues.
> 
> And any arbitrary Shepherd service could also depend on some of these actions 
> - such as for
> instance the various Bacula services.
> 
> * Then I created "service-accounts-service-type" that extends the standard 
> account creation with
> the ability to also create home directories, run and PID directories and the 
> log-file. It's mostly
> used under the hood.
> 
> * Finally, "secrets-service-type" depends on all of the above to do its work.
> 
> It takes a template file - which is typically interned in the store - 
> containing special "tokens"
> that tell it which keys to look up from the secrets database.
> 
> It uses the above-mentioned service-accounts-service-type to specify where 
> the substituted configuration file should be installed, ensuring that the 
> directory has been set up with appropriate permissions.
> 
> And then it substitutes the special tokens from the template file with the 
> actual secrets. For
> instance "@password:foo@" would be substituted with a password entry called 
> "foo". For arbitrary
> text or binary data, the template would contain something like "@blob:data@" 
> - this will be
> substituted with the full path name of a file where the actual data will be 
> written to.
> 
> * * * *
> 
> All of the above has been mostly working in early August, just one problem 
> remained:
> 
> I do not want to store any of the actual data inside the VM, but rather use a 
> folder on the NAS
> itself. Even the PostgreSQL database lives on a NFS-mounted volume. The 
> problem is quite simply
> that Synology's Virtual Machine Manager software does not provide any way of 
> exporting or importing
> volumes. You cannot even move them between VMs. And I really don't want to 
> tie my data to the
> lifecycle of the VM.
> 
> Using traditional NFS (either version 2 or 3) worked perfectly fine, and
> since this is a very locked-down environment, encrypting the NFS traffic
> really isn't needed: an attacker who got access to either the NAS or the VM
> running inside it would already have all the data anyway.
> 
> However, I wanted to give it a try regardless and see whether I could get 
> SSSD working with GNU
> Guix.
> 
> And this is where the nightmares began!
> 
> Firstly, I had to make a few changes to GNU Guix itself, most of which I'd 
> like to upstream. The
> code is in my public GitLab repo, but it's a bit of a mess right now, and 
> I'll need at least a day
> or two to clean it up. But I also ran across a couple of questions and issues.
> 
> * GNU Guix is currently using nfs-utils 2.4.3, whereas 2.6.3 is currently the
> latest version

Re: How can we decrease the cognitive overhead for contributors?

2023-08-25 Thread Katherine Cox-Buday
On 8/23/23 11:27 AM, Felix Lechner via Development of GNU Guix and the 
GNU System distribution. wrote:



* Encourage upstream communities like "Guix 'R Us"


Every contributor should have their own channels for packages [1] and
for Guix. [2] Testing patches before they are submitted would vastly
improve the code quality in Guix.

Just fork my repos on Codeberg and use the 'prebuilt' branches. (Also,
please tell me when to advance them to more recent commits.) Here is
how you use them via Guix Home. [3]


I do exactly this! My channels can be found here: 
https://github.com/kat-co/guix-channels/tree/master






Re: How can we decrease the cognitive overhead for contributors?

2023-08-25 Thread Katherine Cox-Buday

On 8/23/23 12:03 PM, Andreas Enge wrote:

Hello,

Am Wed, Aug 23, 2023 at 10:27:31AM -0700 schrieb Felix Lechner via Development 
of GNU Guix and the GNU System distribution.:

  I can't ever seem to get the GNU style commit messages correct.

Neither can I. The style apparently helps with automated maintenance
of the changelog, but I do not understand why a changelog is useful
for a rolling release model.


personally, I find them super helpful to grep through commit messages
to find changes, like when a file was touched for the last time (now
I think that git wizards will have a better solution; but you get the
idea). Or when a package was added. Or updated to a specific version.


I have no love for Git's CLI (that's one of the reasons the email-based 
workflow grates on me; I use magit for everything), but I found it 
interesting that here you present a valid argument against having to 
learn Git in one way, and elsewhere people are making arguments for 
learning the git send-email command. I draw no conclusions from that, 
but it caught my eye!


FWIW, the git command you could use is:

git log --grep=foo -- the/file/path.scm
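To make that example concrete, here is a self-contained demonstration in a
throwaway repository (the package name "foo" and the commit message are
invented for illustration; Guix's real messages follow the GNU ChangeLog
convention):

```shell
set -e
# Build a throwaway repository so the demo does not need a Guix checkout.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.org
git config user.name demo
echo '(define-public foo ...)' > foo.scm
git add foo.scm
git commit -q -m 'gnu: foo: Update to 1.2.3.'

# Find the commit that updated the (hypothetical) package "foo";
# the matching commit is printed, e.g. "1a2b3c4 gnu: foo: Update to 1.2.3."
git log --oneline --grep='gnu: foo: Update' -- foo.scm
```

The `--grep` pattern and the trailing pathspec compose, so one invocation
answers both “when was this file last touched?” and “when was this package
updated, and to which version?”.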


* Contributing to Guix is not for you
* It's OK to make lots of mistakes


Definitely "no" to the first one, and "yes" to the second one!
I think that even when one only contributes from time to time, but
regularly, habits will form and mistakes disappear.


I've been contributing to Guix since 2018. I've definitely learned a lot 
about Guix idiosyncrasies, but until I wrote my script, I'd forget to do 
something every time I submitted a patch.


--
Katherine




Re: How can we decrease the cognitive overhead for contributors?

2023-08-25 Thread Katherine Cox-Buday

On 8/25/23 3:57 AM, Attila Lendvai wrote:


Otherwise I do not get your point: I keep untreated messages with the latest
patch version in my Guix inbox, and file away the others in a separate mbox.
So things are not flat, but have two levels: "to be treated" or "done".



my point is that in a PR based model/workflow things like this are done by a 
program. and each brain cycle you spend on maintaining the sanity of your local 
inbox is not spent on hacking, and the results of your effort are not even 
mirrored into the inboxes of the other contributors.


I was reflecting on what it is about the email-based workflow that I 
find difficult, and I think you've highlighted one thing:


With a PR based workflow, there is a program essentially figuring out 
the equivalent of the `git send-email` flags and doing the submission 
for me. I generally go to the site for a repo's main branch, get 
prompted about my recent branch, click a button, and it's submitted.


And you've also highlighted the core of my original message: it's 
frustrating to spend effort on the meta of changing code instead of 
changing code. There will always be ancillary effort, but it can be 
greatly reduced.



this seems like a small thing, but multiply it by every message and every 
potential contributor and maintainer... and then consider its cumulative effect 
on the emergent order that we call the Guix community.


A thousand times this.



meta:

the reason i'm contributing to this discussion is not that i'm proposing to 
move to some specific other platform right now. it's rather to nudge the 
consensus away from the conclusion that the email based workflow is good and is 
worth sticking with.

once/if we get closer to that consensus, only then should the discussion move 
on to collecting our requirements and evaluating the free software solutions 
that are available today. which again could be organized much better in a wiki 
than in email threads, but that's yet another topic...


I just want to call out that my original message was not strictly about 
the email based workflow. I want reduction of cognitive overhead to be 
an ongoing goal, whatever that comes to mean.


--
Katherine




Re: How can we decrease the cognitive overhead for contributors?

2023-08-25 Thread Katherine Cox-Buday

On 8/23/23 6:18 PM, Csepp wrote:


* Contributing to Guix is not for you

     I would be really sad if someone ever said this, but I guess it's a
     possibility. Maybe someone like me just can't be a contributor to Guix
     until I have the bandwidth to manage all the details. I would
     preemptively argue that diversity of thought and experiences usually
     leads to better things.


I really hope we can lower the barrier to entry without sacrificing code
quality precisely because of this.  Lots of important use cases that
Guix could serve are ignored because the people who need them are not
represented in our community and/or they can't contribute and no one is
able/willing to write code for them.


Yes, the goal has to be to lower the cognitive overhead while 
maintaining the level of quality.





* It's OK to make lots of mistakes

     The people who have reviewed my code have been generous both with
     their time and fixing my mistakes and then applying. Maybe this model
     is OK? I still feel guilty every time a reviewer has to correct an
     oversight I've made. I also want to become a committer, but I don't
     know how that would work if I'm regularly making mistakes. Obviously
     people would still be reviewing my commits, but presumably a committer
     should not regularly be making mistakes.


In a sense I agree with this, but if mistakes are this easy to make,
then I think something is wrong with the project, not with the
contributor.  Instead of making people learn tightrope walking, maybe we
should be building actual bridges.


Yes! For the same reason that focusing on accessibility helps everyone, 
focusing on making it easy to avoid mistakes helps everyone.



Guix actually fares pretty well in this regard compared to some other
projects I tried contributing to (*stares at Plan 9*), but there is
still a lot of knowledge that experienced developers take for granted
and don't actually document.  Writing new packages is mostly documented
well, but as soon as something breaks, you are thrown into the deep end,
having to dissect logs, bisect commit ranges, learn strace, gdb (which
still doesn't work well on Guix), learn how to compile packages with
debug info (and actually waste some more CPU time and IO on rebuilding a
package you already have), learn how to adapt docs from other distros,
etc, etc, etc.
I've been trying to document these at least for myself, but haven't yet
had time to put them together into something others could read.


A lot of the activities you mentioned fall into the "learn to be a 
developer" category, and I think it's a little too broad of a target, at 
least for what I was trying to point out.



By the way, that's another issue.  Using a TeX based document format for
the docs is, uuuh, maybe not the best idea.  Info is a pretty okayish
documentation reader, but it's a relatively big barrier to entry
compared to what you need to know to make a small edit to the Arch wiki.
This way mostly just experienced contributors write docs, not the
users who just want to document how they made some weird use case
possible.


Another great example. I don't write much documentation because of this.

--
Katherine



Re: How can we decrease the cognitive overhead for contributors?

2023-08-25 Thread Katherine Cox-Buday

On 8/24/23 6:10 PM, Ekaitz Zarraga wrote:


   Lots of important use cases that
Guix could serve are ignored because the people who need them are not
represented in our community and/or they can't contribute and no one is
able/willing to write code for them.


Yes, and even if you manage to write something yourself, many times you
get no answer to your patches because no one else is interested in what
you did. It's perfectly understandable, but also discouraging.

Example: I wanted to push Zig support in guix a while ago. Made a build
system and got no answer. 
The feeling here is that the code proposed is not good enough, but I don't
know how to improve it so I'm stuck. It would be great to feel comfortable
enough with the code to be sure that it can be merged, but I can't find
the resources to make it better. If it was clearer or if it was easier
both sides, maintainers and contributors, would be more effective and
happier.


Yes, a point I'm seeing echoed a lot here which I can't highlight enough is:

Any effort made to make it easier to contribute helps everyone, not just 
those having issues.


Your point about your patch going unanswered is indeed a good example. 
Here we sit with no Zig build system. It can't be proven, but maybe if 
contributing and reviewing were easier, we'd have more hands, and 
getting something like that merged would be easier.



* It's OK to make lots of mistakes

The people who have reviewed my code have been generous both with their
time and fixing my mistakes and then applying. Maybe this model is OK? I
still feel guilty every time a reviewer has to correct an oversight I've
made. I also want to become a committer, but I don't know how that would
work if I'm regularly making mistakes. Obviously people would still be
reviewing my commits, but presumably a committer should not regularly be
making mistakes.


Exactly my feeling. And I've been working with Guix for a while...


Same here. My first commit was in 2018.




Re: How can we decrease the cognitive overhead for contributors?

2023-08-25 Thread Katherine Cox-Buday
On 8/23/23 9:33 PM, Ahmed Khanzada via Development of GNU Guix and the 
GNU System distribution. wrote:

My wife and I are currently trying, so I hope to be a busy parent soon too!


Good luck to you!


The debate comes down to: the people contributing the most code already
have a very familiar workflow that they have automated (all probably within
Emacs). Why should they change their contribution model for those who
don't contribute much currently, and may never do so? (Not implying this
is you! Just recounting the debate).


I know it's not your argument, but do you see the circular logic?

This type of "reasoning" is used to great effect in society to maintain 
an "in group" and an "out group". E.g. "There aren't a lot of paralyzed 
people at the top of these stairs, so why should we build a ramp if 
there's no one to use it?"



But you're not asking that; you just want to lower the cognitive overhead.


Thank you for acknowledging my central point.


I can't do much about the brutal learning curve of Emacs, Guix, and GNU,
but I certainly can just package it so it's all ready to go with fancy
scripts for the most common workflows.


I don't mind the learning. I actually think I know everything that 
should be done, it's just very cumbersome and error-prone to do 
everything, every time.


--
Katherine





Re: Updates for Go

2023-08-25 Thread John Kehayias
Hi Katherine,

On Wed, Aug 23, 2023 at 10:12 AM, Katherine Cox-Buday wrote:

> On 8/22/23 8:24 AM, Felix Lechner via Development of GNU Guix and the
> GNU System distribution. wrote:
>> Hi Attila,
>>
>> On Tue, Aug 22, 2023 at 6:14 AM Attila Lendvai  wrote:
>>>
>>> currently the go build system in guix does not reuse build artifacts
>>
>> Can Golang reuse build artifacts?
>
> I don't think it's recommended right now. See discussion here:
>
> - 
> - 
>
> Summary:
>
> It sounds like due to the way the compiler can optimize code (e.g.
> inlining), `buildmode=shared` is not recommended, but in the future they
> are looking at allowing linking against a single shared library.

I've not been following this discussion in detail, but where do we currently 
stand? Is the proposed Go 1.21 patch basically ready? Should we create a branch 
and build job to start seeing how far we get in making 1.21 the default Go in 
Guix?

Like others, I have a few random Go packages (a bunch locally I really need to 
clean up too) and am not familiar with the language and our packaging much. 
Still, if I can help review/push some patches and get things moving, please let 
me know.

And thanks for all your work here, it is appreciated!

John




Re: How can we decrease the cognitive overhead for contributors?

2023-08-25 Thread Katherine Cox-Buday

On 8/24/23 12:33 AM, ( wrote:


* We could support a managed web-based workflow


The problem with this is that it would not be possible without changing
the git hosting entirely to something like Gitea.  I'm personally a fan
of the email-based workflow; what, specifically, is it that bothers you
about it?  If it's:


I can envision some kind of upstream branch that's blessed and merged 
daily. The web crowd commits to that, the email crowd commits to main.



- Setting it up: Yes, this is annoying.  Sadly, our mighty oligarchal
   masters have taken it upon themselves to make it as annoying as
   possible to use email from anywhere but their web or mobile clients.


This isn't it at all, but I agree with your comment. I'm fond of email, 
and it's distressing how centralized it's become.



- Sending the emails: This isn't that bad once you get used to it;
   sadly most Git clients (magit sadly included) don't support send-email
   well or at all.  But on the command line, all you need to do is:

   # for a single commit
   $ git send-email --to=guix-patc...@gnu.org -1 --base=master -a
   # for several commits
   $ git send-email --to=guix-patc...@gnu.org -$N_COMMITS --base=master --cover-letter -a

   Or, if sending an amended series:
   $ git send-email --to=$bug_...@debbugs.gnu.org -$N_COMMITS --base=master -a -v$VERSION


It's this. Having to:

1. Remember the flags and their values
2. Remember the email address (it might seem silly unless you have forms 
of dyslexia. Is it guix-patches? Or patches-guix? Wait, what was I doing?)

3. And then the whole deal with what to do with follow ups.

I feel like I know my way around git pretty well, but I struggle with 
how those concepts map onto sending emails.


I have only been able to surmount this by lifting these concepts through 
scripts into higher-order concepts with less cognitive overhead.
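As a purely illustrative sketch of such a script (the `guix_send` name and its
interface are my invention, not an existing tool), here is a wrapper that
remembers the flags and the addresses. It prints the `git send-email` command
instead of running it, so nothing is sent while you check it over; swap the
final `echo` for an actual invocation once it looks right.

```shell
# Sketch: one function that remembers the send-email flags for you.
guix_send() {
  n=$1       # number of commits to send
  v=$2       # optional reroll version (empty for a first submission)
  bug=$3     # optional debbugs issue number for follow-ups
  if [ -n "$bug" ]; then
    to="${bug}@debbugs.gnu.org"   # follow-up goes to the existing issue
  else
    to="guix-patches@gnu.org"     # a new series opens a new issue
  fi
  cmd="git send-email --to=$to -$n --base=master -a"
  # A cover letter only makes sense for a new multi-commit series.
  [ -z "$bug" ] && [ "$n" -gt 1 ] && cmd="$cmd --cover-letter"
  [ -n "$v" ] && cmd="$cmd -v$v"
  echo "$cmd"
}

guix_send 1          # first submission, single commit
guix_send 4 2 65432  # v2 of a 4-commit series on a hypothetical issue 65432
```

The point is not this exact interface but that the branching logic (new series
vs. follow-up, cover letter or not, reroll version) lives in one place instead
of in the contributor's head.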



- Switching between branches: The best way to handle this is with
   subtrees; see `git subtree --help`.


Interesting! I use worktrees, but maybe subtrees are easier? I'll have 
to read up on this. Thank you!



- Applying patches: This is a bit annoying.  Most email clients won't
   let you set up commands to pipe mailboxes to, unlike aerc.  Perhaps we
   could have a `mumi apply` command to fetch a patch series from debbugs
   and apply it to the checkout.


I wrote some elisp to apply patches from Gnus with a single key, but I guess 
my point is: not everyone can do that. How are we to expect more 
contributors if that, or something similar, is the barrier to entry?
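For what it's worth, the `mumi apply` idea mentioned above could look something
like the sketch below. Everything in it is hypothetical, especially the URL
pattern, which is not a documented debbugs or mumi API; the helper only prints
the pipeline it would run, so nothing is fetched by accident.

```shell
# Hypothetical helper: fetch a debbugs issue's patch series as an mbox and
# apply it with `git am`.  The URL pattern is an assumption for illustration.
apply_issue() {
  issue=${1:?usage: apply_issue ISSUE-NUMBER}
  echo "curl -sL https://issues.guix.gnu.org/issue/$issue/raw | git am -3"
}

apply_issue 12345
```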


--
Katherine





Re: Branch (and team?) for mesa updates

2023-08-25 Thread John Kehayias
Hi Maxim,

On Sun, Jul 30, 2023 at 09:50 PM, Maxim Cournoyer wrote:

> Hi John,
>
> John Kehayias  writes:
>
> [...]
>
>> I'll open a branch merge request issue later today as per new
>> procedure for QA. Though I believe that only builds 2 branches, which
>> is occupied at the moment. Or can someone set a separate build job
>> specifically for mesa-updates, especially if we think it is a good
>> idea to have this going forward?
>
> Do you already have admin access to Cuirass?  We can issue client certs
> for team members needing to create branches on it or restart builds
> there, etc.
>

I do not have access. The mesa-updates branch remains (and still an
active job I think, just nothing pushed since the merge). I plan on
making use of it as soon as 23.2 is out, along with a handful of
pending patches I've seen that will make sense here.

I haven't used Cuirass before but if a hand would be helpful I'm happy
to lend it (let me know if there is someone I should contact directly
or message me off list).

>>> Do we want a "Mesa team" or something a bit larger? Not sure what
>>> exactly, since "graphics" is perhaps too broad. Happy to help
>>> spearhead the Mesa front for Guix (the very package that got me first
>>> involved in the patching process).
>>>
>>
>> This is still a good question I think, of how we want to have a
>> team(s) to handle things like xorg, wayland, mesa, and related
>> packages. They are a bit all over the place in terms of scope and what
>> they touch. For now I'd like to go ahead with a regular mesa-updates
>> branch since that sees regular releases and is pretty self-contained
>> currently.
>
> It seems a 'desktop' team could make sense, covering some of the things
> listed here that makes sense / are already well separated in modules in
> Guix to avoid being added to two teams:
> .

The problem I'm thinking of for a "desktop" team is setting the
correct scope of package files to make use of e.g. auto cc-ing on
patch submissions. Though at least (gnu packages gl) looks pretty
reasonable to start for maybe a graphics team? Maybe with vulkan?

I'm still not sure but I should probably propose something concrete
with at least myself for gl since those patches generally will go to
the mesa-updates branch for convenient building.

Anyone else want in?

John




Re: How can we decrease the cognitive overhead for contributors?

2023-08-25 Thread Katherine Cox-Buday

On 8/24/23 12:53 PM, Simon Tournier wrote:

Hi,

At some point, I sympathize.

On Wed, 23 Aug 2023 at 10:25, Katherine Cox-Buday  
wrote:


   I don't use the email-based patch workflow day-to-day, so this is
another area where I spend a lot of time trying to make sure I'm doing
things correctly.


I agree that Debbugs is not handy at all for submitting patches.  Send
the cover letter, wait, refresh email, wait, refresh email, loop until
an issue number is assigned, and then send the series.


Yes, and imagine that between every step, it is likely that something 
pulls your attention away. Maybe you have ADHD, maybe a kiddo is asking 
you endless (but interesting) questions.


This is where the overhead begins to erode contributions. People can end 
up spending their limited resources managing the workflow instead of 
improving Guix.



Reduce friction for making contributions?  Yes, we all want that, I
guess.  The question is how?  What is the right balance between the
change of habits and the satisfaction of people used to these habits?


I think a good start is to eliminate toil for everyone. It doesn't 
matter how something is done if it doesn't need doing! Manually 
organizing/pruning imports is a good example.


The good news is that we are working with computers which, among other 
things, are purpose-built for automating away toil :)



Well, from my point of view, we are using the term “contribution” here
as if it were one homogeneous thing.  Instead, I think the term refers to
a range of gradually increasing complexity.  And the improvements or
tools maybe also need to be gradual along this range.

For example, a two-line patch trivially updating one package is not the
same contribution as several-to-many two-line patches plus some package
additions for updating all the R ecosystem.  And that’s not the same as
rewriting the Python build system or as packaging the latest version of
TensorFlow.

The cognitive overhead for these 3 contributions is not the same.
Therefore, the way to reduce the friction will not be the same, IMHO.


That's a good point, but even along that gradient, there's a baseline of 
toil which is not a function of the complexity of the code.


But also, even for complicated things, a tight loop of exploration is 
useful for getting things right. For some, as I alluded to above, it's 
even a critical component for even completing the task.



* We could support a managed web-based workflow


Here, I am very doubtful that it would help.

For instance, Debian is based on Gitlab since their switch from Alioth
to Salsa.  It would be interesting to know if this “new” web-based
workflow using Merge Request is increasing the number of submissions
and/or increasing the number of occasional contributors.


It's a good question.


Another example is Software Heritage (SWH).  Their web-based workflow is
a Gitlab instance [1].  As an occasional person who deals with the SWH
archive, I am never able to find my way around, and I just roam the
#swh-devel IRC channel asking for help.


I think there's utility in distinguishing between familiarity and 
eliminating toil. I think it was incorrect of me to suggest forming 
habits in my original message. I think it's better to focus on 
eliminating toil so that whatever workflow remains can more easily be 
habituated.



Another example: the channel guix-science [2], based on GitHub.  Well, I
can count the number of PRs on one hand. ;-) (And I do not speak about
other channels such as nonguix.)
  
Well, reading the item above about mistakes and the item below about
"Guix 'R Us", and maybe I am wrong, somehow I feel that one “cognitive
overhead” is the wish to submit a perfect patch.  Again, maybe I am
wrong, but somehow I feel that part of the issue is a lack of
self-confidence.  Do not take me wrong: I am speaking about patch
submission by occasional contributors, and not about the personality of
people I do not personally know. :-)

This “that’s not perfect so I postpone the submission” is something I
often hear from people attending Café Guix [3].  Hum, I do not know
what we could do differently to reduce this barrier.

The idea of the Mentors team goes in this direction, but it does not seem
to be working… :-(

1: https://gitlab.softwareheritage.org/explore
2: https://github.com/guix-science/guix-science
3: https://hpc.guix.info/events/2022/caf%C3%A9-guix/


Similarly, I think there's utility in distinguishing between learning 
and cognitive overhead. I can learn anything, but there are factors 
beyond my control that cause high levels of cognitive overhead to cause 
enough friction to stop me.


Mentorship is a great thing, and I'm sure I'd learn a lot from a mentor, 
but it wouldn't solve the problem I've raised unless the mentor somehow 
showed me how to eliminate the cognitive overhead, in which case: let's 
just do that for everyone.



For sure, I think that part of the solution is by finding the way to
collaborate.  Somehow, what would be your ex

Re: Help packaging ArrayFire

2023-08-25 Thread Adam Faiz
On 8/20/23 19:35, B. Wilson wrote:
> Hello Guix,
> 
> Knee deep in CMake hell here and would appreciate a helping hand. ArrayFire
> build is defeating me:
> 
> CMake Error at 
> /gnu/store/ygab8v4ci9iklaykapq52bfsshpvi8pw-cmake-minimal-3.24.2/share/cmake-3.24/Modules/ExternalProject.cmake:3269
>  (message):
>   error: could not find git for fetch of af_forge-populate
> Call Stack (most recent call first):
>   
> /gnu/store/ygab8v4ci9iklaykapq52bfsshpvi8pw-cmake-minimal-3.24.2/share/cmake-3.24/Modules/ExternalProject.cmake:4171
>  (_ep_add_update_command)
>   CMakeLists.txt:13 (ExternalProject_Add)
> 
> Apparently, some of the build dependencies get automatically cloned, but I'm
> unable to make heads or tails of how to work around this. The
> `af_forge-populate` makes it look like it's related to Forge, but "ArrayFire
> also requires Forge but this is a submodule and will be checkout during
> submodule initilization stage. AF_BUILD_FORGE cmake option has to be turned on
> to build graphics support," so I'm stumped.
> 
> I need this soon for a project and am willing to pay someone to take this 
> over.
> 
> Here are the official build instructions: 
> https://github.com/arrayfire/arrayfire/wiki/Build-Instructions-for-Linux
> 
> In fact, there's a 2016 thread where Dennis Mungai claims to have successfully
> gotten ArrayFire packaged on Guix: https://issues.guix.gnu.org/23055. However,
> that appears to have never resulted in a patch.
> 
> Thoughts?
> 
I'm willing to work on this, it's a very interesting challenge.



questionable advice about Geiser load path setting

2023-08-25 Thread Csepp
The docs contain this recommended Emacs setting:

@lisp
;; @r{Assuming the Guix checkout is in ~/src/guix.}
(with-eval-after-load 'geiser-guile
  (add-to-list 'geiser-guile-load-path "~/src/guix"))
@end lisp

I haven't been using it for a while because I remember it causing
trouble whenever I was working on other Guile projects.  I have been
running Emacs inside ./pre-inst-env instead, which seems to work just as
well, if not better.
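A minimal sketch of that alternative, assuming the checkout lives in
~/src/guix and has already been built (./bootstrap && ./configure && make).
The `pre_inst` helper name is invented; it only prints the command it would
run, so it can be tried without a checkout (drop the `echo` to execute for
real):

```shell
# Run any command under the checkout's ./pre-inst-env so it sees the
# checkout's Guix, instead of setting geiser-guile-load-path globally.
pre_inst() {
  checkout=${GUIX_CHECKOUT:-$HOME/src/guix}  # assumed checkout location
  echo "cd $checkout && ./pre-inst-env $*"
}

pre_inst emacs             # Emacs + Geiser then pick up the right load path
pre_inst guix build hello  # the same trick works for the guix CLI itself
```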

I'd like to amend the relevant docs, but I would welcome some info on why
it was originally written this way; maybe there are use cases I'm missing.