Re: guidelines for package names (namespaces?)
Hi Andy,

There are some guidelines for naming packages, as discussed in the manual: https://guix.gnu.org/en/manual/devel/en/html_node/Package-Naming.html. Ultimately, of course, the final say rests with the committers who do or do not accept a given patch in a given state.

As for namespaces, Guix packages being defined in Guile, and thus in Guile modules, provides namespacing in most contexts - when not operating at the command line, one need only use the correct module and reference a given package object directly.

That said, just last night I was considering the potential for name conflicts. If a given `guix package -i` command would resolve equally well to two different packages, for example, it seems the priority goes to whichever one comes first in some alphabetic comparison - or perhaps it's determined by the order of channels in the channel list; I didn't investigate much beyond finding out whether Guix offers any manual intervention. I mused aloud about the possibility of adding a switch to specify channels in such situations and was offered the following addition to the command: `-e '(@ (my channel packages) foo)'`. This allows one to specify Scheme objects precisely; in this case, that code resolves to the package `foo` in the module `(my channel packages)`. Of course, this is not necessarily obvious or approachable to a relatively new or casual user - I had to test the incantation above to understand it, and I would not have considered it without advice.

All of that to say, you raise some good questions. I hope these thoughts prove useful to you in your endeavors.

Best,
Juli
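P.S. In case the incantation above is as opaque to others as it was to me at first, here is roughly what it means in plain Guile terms (the `(my channel packages)` module and the `foo` package are of course hypothetical stand-ins):

```
;; `(@ MODULE NAME)' refers to the binding NAME exported by MODULE.
;; So, assuming a channel module (my channel packages) exporting a
;; package object named `foo', this expression...
(@ (my channel packages) foo)

;; ...evaluates to the same package object you would get by doing:
(use-modules (my channel packages))
foo
```

The full command then ends up looking something like `guix package -e '(@ (my channel packages) foo)'`, where `-e` tells Guix to install whatever package the given expression evaluates to.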
New England Guix Meetup Interest?
Hello everyone, Coming off the wonderful experience of meeting up with a bunch of fellow Guixers (Guixs? Guix? what is our demonym?) in person for the first time at Guix Days, I would very much like to have similar experiences more often. I know there are Paris and London meetups; would there be interest in a New England meetup? I am based in Boston and know a few other Guix users in the area, but meeting up with even more would be great! In terms of where such a meetup would happen, Boston would be optimal for me, but I can go anywhere the MBTA can take me. And I don't necessarily have to go in person if there is a large enough group for a meetup that I can't attend--having a hybrid in-person and remote meetup (using Jitsi Meet or Big Blue Button if someone offers an instance) would be a good idea in general, I think. If there's interest, I think it would also be fun to partake in something London and Paris have decided to do: review parties (my term, not theirs). As part of discussions started around the patch backlog (see for example the thread starting at https://lists.gnu.org/archive/html/guix-devel/2024-02/msg00027.html), some folks who regularly attend those meetups decided to start reviewing patches there. While they suggested a friendly competition of who can review and respond to the most patches, I personally have no competitive spirit but would love to cooperate towards this objective. Especially with committers blessing non-committers reviewing patches, it would be great to help calm the 700-pound gorilla in the Guix room that is our patch backlog. I look forward to hearing if anyone else is interested or has other suggestions! All the best, Juli
Re: Google Season of Docs 2024
Hi Simon, I would absolutely be interested in actually writing documentation. I appear to be rapidly specializing as a technical writer and have long wanted to help write better documentation for Guix (and Guile, for that matter...). Reimbursement would allow me to set aside the time necessary to actually do it. With the caveat that I may be doing other things at the same time (other grants or contracts or who knows; life is unpredictable), I would absolutely be interested in doing some actual writing. I don't have firm ideas yet about how to improve the docs, so I would be soliciting and accepting feedback from folks. - Juli (she/her)
Re: Patch review session tomorrow (Thursday 7th March)
Thanks so much for heading this up; it was great to see so many folks show up! I've started using reviewed-looks-good to mark patches so committers can check https://debbugs.gnu.org/cgi-bin/pkgreport.cgi?tag=reviewed-looks-good;users=guix for stuff that should need minimal review before merging :) Best, Juli
Distributed GNU Shepherd NLNet Grant
Dear comrades, As some of you already know, in December I submitted an application for an NLNet grant to fund porting our beloved Shepherd to Spritely Goblins [1]. This work would represent a radical evolution in the capabilities of not just Guix's system layer, but of GNU/Linux system layers in general; and would also be the biggest real-world test to date of the Goblins library and its capabilities (pun not intended). Materially, it would allow Shepherd dæmons running on different machines to securely communicate and interact with each other, going so far as to control one machine's dæmons from another machine. I am happy to announce that this grant application was approved! [2] While there remain some administrative tasks to complete before work can begin, I wanted to make the community aware of this upcoming effort and to invite you all to collaborate in this process. My hands may be the ones on the keyboard, but I want this to be a community project. I welcome questions and feedback about the project's goals and direction. You can learn more about object-capability security, the basis of Goblins, from Spritely's "The Heart of Spritely" whitepaper [3] as well as erights.org (which the whitepaper cites heavily). You can learn more about Goblins and this specific project at the links cited above. Thank you to everyone who supported the application process. Ludo, I wouldn't have the courage to attempt this if I didn't know I have your support. Also, this grew from your idea of integrating Goblins and the Shepherd in the first place. Christine, I couldn't do this at all if not for your and Spritely's work, and I wouldn't have applied for this grant without your encouragement. Thank you as well to everyone who's talked with me about this project, shared ideas and excitement, or just not gotten mad at me for emailing them questions out of the blue - I'm sure you know who you are. Knowing the community supports this work only increases my desire to do it. Last but most assuredly not least, thanks to NLNet for funding this project. Y'all are an incredible positive force in free software and thereby the world. Keep up the good work! I look forward to working together with all of you over the coming months! Solidarity, Juli [1] https://spritely.institute/goblins/ [2] https://nlnet.nl/project/DistributedShepherd/ [3] https://spritely.institute/static/papers/spritely-core.html
Re: System deployment commands hoarding all RAM
Hi Sergio,

Continuing on from our out-of-band conversation about this topic, the most likely cause of your RAM issue is that you use recursion extensively, but not in tail position. This means that Guile cannot optimize this recursion to re-use stack frames, and thus your memory usage is unbounded. I'll try to explain a bit more what that means in this email in the hopes that you'll be able to resolve it. Forgive me if I repeat things you already know; I want to make sure you have all the information you need to solve your problem.

You know what recursion is: calling a function from inside itself (or one of the functions it calls). You mentioned you've also heard of tail calls. I'll go ahead and describe tail calls a couple of ways just in case these different descriptions help something click for you. I know I needed to hear several different explanations before I really understood them. The common definition is that a tail call is a call made as the last thing a function does. I think of it as calling a function when there's no more work to do in the current function. Functions called in this way are said to be "in tail position." Tail recursion is simply making a recursive call in tail position. Here's a silly example defined first without tail recursion, then with:

```
(define (add-x n x)
  "Add @var{x} to @var{n} one at a time."
  (if (= x 0)
      n
      (+ 1 (add-x n (- x 1)))))

(define (add-x-tail n x)
  (if (= x 0)
      n
      (add-x-tail (+ n 1) (- x 1))))
```

An important note: while the recursive call is the last line of text in the tail-recursive function definition, this isn't necessarily required. That idea threw me off for quite some time. The following function is also tail recursive and produces the same output as the other two definitions:

```
(define (add-x-tail-2 n x)
  (if (not (= x 0))
      (add-x-tail-2 (+ n 1) (- x 1))
      n))
```

Let's return to the first definition for now. You may notice that the recursive call happens inside of a call to `+`. Because arguments have to be fully evaluated to values in order to be passed to functions, the `+` call cannot be fully evaluated without the result of the recursive call. We therefore have to keep around the outer `add-x` call so that we can fully evaluate `+` and return its value from `add-x`. Let's walk through expanding a quick example, skipping some intermediary evaluation to save space and using ellipses to replace parts of the function we don't care about anymore but have to keep around anyway:

```
(add-x 3 2) ->

;; replace add-x with its body, replacing variable names with values
(if (= 2 0) 3 (+ 1 (add-x 3 1))) ->

;; repeat the above steps, replacing the inner call to add-x with its body
;; and its variable names with their values
(... (+ 1 (if (= 1 0) 3 (+ 1 (add-x 3 0))))) ->

;; and again; we skip writing the second arm of the innermost `if' because
;; we don't evaluate it
(... (+ 1 (... (+ 1 (if (= 0 0) 3))))) ->

;; now that everything is fully expanded, we can begin evaluating in earnest
(... (+ 1 (... (+ 1 3)))) ->

(... (+ 1 4)) ->

5
```

Compare that to the second function definition. In this definition, the `+` and `-` calls both happen inside the call to `add-x-tail`. These are operating on known values and can be fully evaluated before the call to `add-x-tail`. That is, there is no information in the first `add-x-tail` call that is required to use the result of the second (or third, fourth, etc.) call to `add-x-tail`. This reveals another way to think of tail calls. Tail calls are calls for which all arguments are known and whose result is immediately returned by the calling function.
Here's where we introduce a new term: recursive tail call optimization. Tail calls and tail recursion just describe situations you find in code. In and of themselves, they only talk about what code does on a theoretical level. Tail call optimization is a way to take advantage of the theoretical aspects I've been discussing. I've been framing tail calls as calls which do not need any information from the function that calls them. Recursive tail call optimization allows the compiler to replace the arguments of the calling function with the new argument values and then simply re-evaluate that function in-place. If you're familiar with how computers operate at the level of machine code and the stack, recursive tail call optimization is usually implemented by moving the new arguments into the same argument registers as were used for the first call to a function, then jumping back to the beginning of the function's stack frame. Guile (and all Scheme implementations compliant with R5RS or later standards) does something very similar. We'll step through evaluation of our tail-recursive definition with this understanding in mind, as well as the previous rules for making the steps shorter:

```
(add-x-tail 3 2) ->

;; replace add-x-tail with its body and replace variable names with their values
(if (= 2 0) 3 (add-x-tail (+ 3 1) (- 2 1))) ->

;; both arguments are fully evaluated *before* the recursive call, so there is
;; nothing from this frame we need to keep around; the call is evaluated in-place
(add-x-tail 4 1) ->

(if (= 1 0) 4 (add-x-tail (+ 4 1) (- 1 1))) ->

(add-x-tail 5 0) ->

(if (= 0 0) 5) -> ;; second arm skipped, as before

5
```

Notice that, unlike the first walkthrough, the expression never grows: each step replaces the previous one rather than wrapping it. That is the re-use of stack frames I mentioned at the start, and it's why the tail-recursive definition runs in constant memory while the non-tail-recursive one does not.
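If you want to see the difference concretely, here is a quick thing to try at a Guile REPL with the toy definitions above (the count is arbitrary; anything large will do):

```
;; The tail-recursive version re-uses its frame, so even a huge count
;; completes in constant space:
(add-x-tail 0 10000000)   ; => 10000000

;; The non-tail-recursive version has to keep every pending `+' around,
;; so its memory usage grows with the count -- for large enough inputs,
;; that is exactly the kind of unbounded memory growth you're seeing:
;; (add-x 0 10000000)
```

Nothing magic is happening there; it's just the two evaluation patterns from the walkthroughs, taken to a size where the difference matters.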
Re: /run/setuid-programs via the Shepherd?
Hi Felix,

You ask a really interesting question. I think run0 is a great step in the right direction, and I would welcome the Shepherd gaining similar abilities. I also think run0 is a stopgap and we can do much better. Let me try to explain.

As I've announced previously, I'm working on porting the Shepherd to the Goblins object-capability security (ocap/ocaps) library, and very similar thoughts have cropped up for me as well while doing this work. Rather than merely imitate a better sudo, though, I think it would be compelling to leverage the system layer to create a security barrier that allows ocap security at the process level. This is beyond the scope of my current work, so what follows are just some ponderings and not representative of my work on the Shepherd. As a big caveat, these thoughts haven't been peer reviewed, as it were, by people more familiar with ocap security at scale. I point that out because, as far as ocaps are concerned, nothing I'm proposing is a new idea, and I may be missing pieces of the puzzle.

First, let me define a few terms. A capability is a reference -- in the code sense -- to an object. An object is similar to an actor in the actor model but with some restrictions. What's most important is that these objects encapsulate state and the ability to operate on that state, and they can only manipulate external state through capabilities. An object may receive a capability in one of three ways: it may be created with a capability; it may be granted a capability by an object that has a capability on it (A has a capability on B and C; A can grant B a capability on C); or it may create the capability itself. The last mechanism is only able to create capabilities on the object itself or on objects upon which it already has capabilities. In short, "if you don't have it, you can't use it." (As a critical corollary, if you *do* have it you *can* use it, so be careful about the capabilities you hand out.) The overall model is called "object-capability security" because its original name, "capability security," has been applied to several similar but distinct systems since it was first formulated, and the role of the object is the most important and defining feature of this specific model.

On to the actual idea. To summarize, in an ocap system, we invert the authority flow of sudo/run0. We can think of sudo as untrusted code claiming to act on behalf of a trustworthy user and thus being allowed to execute as trusted code. (It was built on top of Unix's original security model, from a time when you knew where every other person with access to the OS worked, so its weaknesses are understandable.) run0, as I understand it from the thread you linked, improves on sudo significantly by making untrusted code ask trusted code to act on its behalf to perform some delimited action. This is much better, but it still relies on the identity of some user who gave the code permission to act in this way. The ocap model is closer to run0 (run0 reminds me of an ocap pattern called a powerbox), but ocaps have a key difference. In an ocap system, rather than untrusted code asking trusted code to do some specific task, untrusted code is unable to do anything until trusted code gives it the *capability* (in both the colloquial and ocaps sense) to do so. That is, whereas with run0 and sudo, *untrusted* code tells *trusted* code *what* to do, with ocaps, *trusted* code tells *untrusted* code what it's *allowed* to do.

Before we can have meaningful ocap security, we must reduce or eliminate ambient authority.
This isn't very hard anymore, thanks in large part to systemd and changes it encouraged in the Linux kernel, like cgroups. run0 significantly reduces ambient authority -- yay! Guix has facilities towards this end as well -- the least authority wrapper comes up frequently.

The harder part is bootstrapping capability grants. If we endeavor to build ocap security on top of an access control list (ACL) system, we frequently need something like a powerbox at some point. But if the ACL system in question is Guix, and the powerbox in question is (inside) the Shepherd, we can go much further towards proper capability flows. We can take `guix system reconfigure' (or `guix home reconfigure' for user processes) as the root of our capability bootstrap process. For example, capabilities could be granted at object creation by passing them around in system configurations, which are then instantiated by the Guix build daemon at reconfigure/build time. At runtime, the Shepherd, which would receive capabilities at build time as well, could spawn processes in "dead worlds" with only the capabilities they need. Outside of Guix, Shepherd configuration files would be the root of these flows.

You may immediately notice this idea is rough. There is a circular dependency in that we need all relevant capabilities for `system reconfigure'/Shepherd configuration if we want to make it the root of these capability flows.
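For anyone who hasn't bumped into ocaps before, here is a tiny, purely illustrative Guile sketch of the "if you don't have it, you can't use it" rule, modelling capabilities as nothing more than procedure references -- this is not the Goblins API, just the shape of the idea:

```
;; A toy "store" object: the only ways to observe or change its state are
;; the two procedures it hands out at creation time.
(define (make-store)
  (let ((contents ""))
    (values (lambda () contents)                    ; read capability
            (lambda (new) (set! contents new)))))   ; write capability

(define-values (read-store write-store) (make-store))

;; A logger created with *only* the read capability: it can observe the
;; store but has no way to modify it, because it was never handed
;; `write-store' and cannot conjure it out of thin air.
(define (make-logger read-cap)
  (lambda ()
    (display "store currently holds: ")
    (display (read-cap))
    (newline)))

(define log-store (make-logger read-store))

(write-store "hello")   ; whoever holds the write capability may update it
(log-store)             ; the logger may only look
```

In Goblins the references are to live objects -- potentially on other machines -- rather than to local closures, but the flow of authority is the same: whatever spawns a process (or object) decides exactly what it is able to touch.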
Re: /run/setuid-programs via the Shepherd?
Hi Felix,

> ... we must hardcode some paths to /run/setuid-program/... as in this
> yet-to-be-accepted patch for OpenSMTPd. [1] ...
> [1] https://issues.guix.gnu.org/71613

Oh, this is quite a tricky issue... I'm opposed to packaging software in Guix in such a way as to rely on the conventions of a system installation -- or even to assume what software a user chooses for their particular system installation -- because that undermines the core principles of statelessness and user freedom. I don't know enough about this problem to offer good solutions, honestly. For the aforementioned reasons, I don't think the Shepherd is the place to solve it. Or rather, if the problem is solved there (no reason it can't be), there will still need to be accommodations for those who don't want to rely on Shepherd.

> P.S. Your mail headers included "Reply-To: 87plssoj2z@lease-up.com"; I
> took the liberty to copy you on this message.

Thanks! I don't know how email headers work XD I subscribe to the guix-devel digest, so when I want to respond to a particular message, I copy over the "to" and "cc" and "subject" manually, and then I go grab the message ID of the particular message I'm replying to and put that in Geary's "reply to" field in the hopes of not breaking threading. Does this not work properly? (feel free to reply out-of-band if there's more to be said)

-Juli
Re: An IRC bot called Peanuts
Hi Felix, Firstly, I feel this person could and should have expressed their frustration in a more polite way. I'm sorry someone has taken to using such fierce and critical language for something you've worked on. I hope that you're not too hurt by their rudeness. Secondly, I quite like peanuts. It's handy to get an idea of where a link will take me before being taken there (and potentially misunderstanding a link or wasting my time waiting on a page to load just to find out what an uninformative URL points to), and it's handy to have references to Guix issue and patch numbers automatically turned into links to those issues and patches. As a final note, for the rest of the folks reading guix-devel: please don't cuss at people about their work, nor use ableist insults to describe it (nor do those things *at people themselves*, though I trust y'all to know better than that already). Criticism is good and should be couched in polite, ideally positive terms (eg "I find peanuts annoying; could you reduce or eliminate its output?" would have been a better approach). All the best, Juli
Re: Next Steps For the Software Heritage Problem
Hey y'all, I've avoided weighing in on this topic because I'm of two minds about it. Still, when members of the community raise concerns, it's important to take those concerns seriously. We must be careful how we address them because the opinions and concerns of any community member are as legitimate as those of any other. This conversation has at times been contentious. People have not always used the most diplomatic language. And yet, there has been a thorough discussion of this topic. The conclusion appears to be that Guix cannot make changes in relation to SWH. It's clear there is no more room for productive conversation. I therefore echo Ludo's request to let this topic drop. I want to express my gratitude for a community where people are able to express their concerns and have them taken seriously, regardless of who they are. Let's not lose that. Let's not forget that, even when passions are high, we all want Guix to succeed and have a healthy community, and we all work to that end as best as we can with the information and resources available to us. Best, Juli
EU NGI funding cut engagement opportunity
Greetings comrades, As many of you are likely already aware, the European Union has recently made plans to cut funding to its Next Generation Internet (NGI) initiative, which funds hundreds of FOSS projects, including many related to Guix itself. The current plan is to shift these funds into artificial intelligence research. Petites Singularités has started an open letter to call on the EU to restore this funding. I would very much support Guix signing onto it. The document, and instructions for participating in this effort, can be found here: https://pad.public.cat/lettre-NCP-NGI For full disclosure, my work on the Shepherd is funded by NGI through NLnet. While my funding is secure, it would be a tragedy if the opportunity which I have been given to build something really cool and (I hope) revolutionary were denied to others. In solidarity, Juli
Goblins Shepherd Design Document
Hey y'all, After a turbulent few months, I have an exciting announcement about the Goblins Shepherd port -- the design document has reached (initial) completion! [1] The purpose of this document is to have a point of reference for thinking about the port as well as explaining it at a high but technical level to those who may be interested. It is both the expression and culmination of experimentation to ensure the ideas in it are sound and applicable. This preparatory work means the port itself should progress comparatively smoothly from this point onwards. All work will happen in the wip-goblinsify branch to be merged back into mainline in the least-disruptive way possible. I look forward to sharing progress with you all along the way, and please feel free to reach out with any questions, concerns, or feedback! Thanks, Juli [1] https://git.savannah.gnu.org/cgit/shepherd.git/tree/goblins-port-design-doc.org?h=wip-goblinsify
Magic Wormhole Package Weirdness/Potential Security Issues?
Hey folks, I tried to update magic-wormhole today and things went super smoothly. All I had to do was change the version number. I didn't even have to change the source hash. If that strikes you as odd, good! It should! To cover all my bases, I pk'd the URI produced by `pypi-uri` and used `guix download` to try to fetch the same file and check its hash, only to find that `guix download` couldn't find anything at that URL or its fallbacks. To test if things were being exceptionally weird, I switched to pulling and building from git, and the build failed, probably because one of the dependencies (magic-wormhole-transit-relay) was not the right version -- which is what I had initially expected to happen. Does anyone know what might be going on here? Given the intended secure nature of this program, I'm concerned there may be something malicious happening somewhere along the way. I would love an explanation that quiets that concern. You can look at the current magic-wormhole package source and play around with it yourself to see what I'm talking about. Best, Juli PS I was trying to update all three packages in magic-wormhole.scm, but the transit relay in particular requires later versions of twisted and autobahn than the other two, which is mildly annoying. I know twisted can't be updated without rebuilding a bunch of stuff, so I don't plan to pursue this further for the time being.
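PPS For anyone who wants to poke at this themselves, the check I describe above was essentially the following (the version string here is illustrative rather than the exact one I tried):

```
;; In `guix repl', print the URI pypi-uri computes for the release...
(use-modules (guix build-system python))
(pk 'magic-wormhole-uri (pypi-uri "magic-wormhole" "0.14.0"))
```

...and then hand that URI to `guix download`, which fetches the file and prints its hash so you can compare it against the hash declared in the package definition. In my case, `guix download` instead reported it couldn't find anything there at all.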
Thank you for the docs work!
Hey y'all, I've noticed over the last couple years that the documentation for Guix just keeps getting better and better. Thank you to everyone who has contributed and is contributing to this effort! -Juli
Re: Guix (and Guile's) promise, and how to (hopefully) get there
Hey y'all,

Ekaitz, thank you for opening this thread. RIP your inbox.

I think this thread demonstrates in itself one of our biggest issues. A few folks have mentioned it indirectly. I'll be direct. We can't stay on topic. So once again, Ekaitz, thank you for clarifying what this discussion is supposed to be about. In the context of consensus decision-making, this is part of what's called facilitation, and it's absolutely vital if we want to use a consensus decision-making model for governance. I think we absolutely can do that if we just use the tools others have built for making consensus work -- like the idea of facilitators. But I digress.

After that preface, I'm going to respond specifically to the points Ekaitz highlights as the topic of this thread.

> - Do we need independent funding so we can pay for our machines and maintenance?

I don't know the details of cost and source off the top of my head (they were recently summarized in another thread), but my instincts are screaming, "Yes!" I'll return to this later, but we should be paying people to do systems administration work because systems administration is boring and tiring after a while. Above all, no single individual should have to carry the weight of paying for our servers.

> - Is the Guix Foundation the way to do it?

Again, I don't know enough to say with certainty, though most likely. Because...

> - Does GNU, or the FSF, have some role on that?

...Guix should break with GNU and the FSF. More so the FSF, but the two are irrevocably intertwined in the public consciousness -- which is the primary reason Guix needs to break away. To avoid relitigating what has been litigated more than sufficiently already, the FSF made a bad political move that has destroyed its social capital and, as a side effect, its financial capital as well. Even if it can help us with funding, it shouldn't. (More on why in a bit.)

In terms of extant infrastructural support, from what I can tell, the FSF gives us hosting for a simple website, an ancient git forge, and mailing lists. While I can't speak to mailing lists, I can speak to websites and git forges. Given the incredible complexity of our existing CI and QA infrastructure, putting up some HTML and having a gitolite service running on a machine are comparatively little effort. I suspect the mailing list -- after migration -- would be the same, though I reiterate my ignorance here.

To forestall misunderstanding, I absolutely do *not* mean that Guix should compromise on free software. Guix's greatest strength is that it is an uncompromisingly idealistic and principled project. If we change anything about our stance on non-free software, it should be that we add a single sentence to the manual informing people about the well-known and well-supported channel providing non-free firmware, followed immediately by a disclaimer that we neither endorse nor support non-free software, and that's *all*. Official Guix channels should never knowingly ship non-free software, nor should we ourselves provide instructions on installing, configuring, or using non-free software itself -- we should just point people to the place that does.

Why, though, should we go through the effort of migrating our mailing lists, domains, etc. just because it won't add *that much* more work? This is a big and important question. The short answer is, the FSF is radioactive, and we're getting sick from it. Let me be frank. I promote the heck out of Guix.
I've shilled Guix to more people than I can count, from professional systems administrators at internationally acclaimed universities to hobbyist hackers in the most obscure corners of the internet, and everywhere in between, all of whom are incredibly capable, knowledgeable, passionate programmers, and some dozens of whom are free software hackers. The main turn-off people cite to me is our association with GNU. As a particularly poignant case study, in conversations with someone who contributed significantly to Guix on my recommendation and did not stay around, the primary complaint was not the email-based workflow (which was noted as unusual but not overwhelming), but that the GNU affiliation *makes them feel uncomfortable in our community*. They haven't told me of negative interactions with members of the Guix community; the GNU affiliation alone was enough. If we recognize that there is not enough growth in effort going into the project, we should address the primary reason we're not getting new people to bring more effort: GNU.

> - Can we improve anything relieving weight from the shoulders of some people instead of putting even more on them?

I think so. As I noted above, if we break with GNU, I am highly confident we will see an uptick in new contributors, at least some of whom can help there. In the longer term, we absolutely need to pay more people to do systems administration for the Guix project. If we start
Guix Days notes email thread
Hello,

For those not at Guix Days: We have split into groups discussing various topics. Each group is collecting notes on its discussion. I am starting this thread as a place for these notes, to be distributed as necessary. To kick things off, I've attached my notes on the discussion of "distributed substitutes", which we clarified referred to participatory/peer-to-peer substitutes. I tried to group things conceptually based on where conversation ended up, but "conclusions" per se are all under "Next Steps" and "Open Questions".

Thanks,
Juli

#+title: Participatory (p2p) Substitutes

* Angles
** Building
** Delivering

* Why
** substitute servers are slow
** resources
*** compute speed cost
*** storage
** resilience

These problems increase exponentially with users or packages or both.

* Problems

Source code is easier because we can have absolute knowledge of the hash of the source -- can cryptographically verify source. By contrast, cryptographic verification of a binary requires compilation. Need to trust the source of binary substitutes.

** Trust
Someone needs to supply the hash. Currently, this is the central Guix build farm.

** Content-addressed downloads
Need architecture for distributed (network topology) delivery. Can already content-address sources and binaries; just need a trusted hash. That is, same problem for sources and substitutes.

** Nar files
Potentially inefficient?

** Obligations on users
Users may be expected to contribute back bandwidth, potentially build time, to the network.

** Privacy
What if we have private info in ~/gnu/store~, eg because of Guix home managing dotfiles?

** Granularity
1. Different people have different security/privacy models.
2. People may want to use different transport mechanisms.

* Solutions

We seemed to quickly shift to envisioning an opt-in network of distributors, eg with a Guix system service. Above problems addressed below:

** Trust
1. a server/user you choose to trust gives you a hash; you can get this substitute from any server and hash it yourself
   - need to trust central server
     + can talk to operator
2. apply ~guix challenge~ somehow
3. distribute trust over multiple nodes, eg strongly trust a few nodes, weakly trust more, test hashes against each other
   - could incorporate this into existing substitute certification infra
   - existing research in eg Tor exit node trust
4. zero-knowledge proof
   - expensive
   - more variables = more expensive
   - thus, likely not feasible

Conversation is tending towards consensus-based trust (trusting a hash if a plurality of trusted nodes agree on it) combined with "watchdog" application of ~guix challenge~.

** Content-addressed downloads
1. bittorrent
   - definitely tackles bandwidth usage
   - tends towards "supernodes" which advertise lots of smaller nodes
     + could run this on Guix infra
2. IPFS
3. (bespoke) OCapN/Spritely
   - could facilitate granular control of access
   - Spritely envisions distributed storage over ERIS, which is encrypted and complicates this space
4. ~guix publish~

** Nar files

** Obligations on users
1. have ~guix publish~ already

** Privacy
1. do not advertise hashes, only respond to requests for specific hashes
   - there is an attack on this (Tahoe-LAFS encountered this?)
2. only advertise specific substitutes, eg what's in the core Guix channel
   - could be used to triangulate what software someone uses by watching what they request
     + already the case if monitoring requests to the central substitute server
     + could download and distribute software you don't use
3. may not solve all privacy issues, but must communicate privacy concerns to users (ie informed consent)

** Granularity
*** Privacy
1. opt-in to share specific nars or equivalent (see above)
*** Delivery
1. provide an abstract interface to a network

* Next steps

We already have content-addressed distribution.
1. more central substitute servers and mirrors around the world
2. abstract API for decentralized substitute delivery

* Open questions
1. trust mechanism
2. exact delivery mechanism
3. who does the work
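As a rough sketch of the consensus-based trust idea the conversation tended towards (purely illustrative: the procedure name and threshold are made up, and real hashes would come from the substitute/narinfo machinery):

```
;; Given the hashes reported for one store item by several trusted nodes,
;; accept the item only if at least THRESHOLD nodes agree on a single
;; hash; otherwise return #f and fall back to a `guix challenge'-style
;; "watchdog" investigation.
(use-modules (srfi srfi-1))

(define (consensus-hash reported-hashes threshold)
  (let loop ((candidates (delete-duplicates reported-hashes)))
    (cond ((null? candidates) #f)
          ((>= (count (lambda (h) (string=? h (car candidates)))
                      reported-hashes)
               threshold)
           (car candidates))
          (else (loop (cdr candidates))))))

;; Example: three nodes agree, one disagrees; with a threshold of 3 the
;; majority hash is accepted.
(consensus-hash '("aaa" "aaa" "bbb" "aaa") 3)   ; => "aaa"
```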