On Thu, Apr 2, 2015 at 5:23 PM, Benjamin Francis <[email protected]> wrote:
> I have some comments on Jonas' proposed new security model for B2G.
> Apologies to Jonas if this is a work in progress and not ready for
> discussion, but it's been on the wiki for a few days now and I think Tim
> linked to it in his blog post so I figured it was fair game ;)
>
> https://wiki.mozilla.org/FirefoxOS/New_security_model
Doh! I forgot to send this to the b2g list. It's definitely up for discussion.

> URLs
>
> The proposal says that "The format used for the packaging will be the one
> defined in the W3C packaging spec draft". In addition to a packaging
> format, that spec [1] proposes a different URL format than the !// system
> which is discussed here.
>
> In the W3C proposal the package is specified in a <link rel="package"
> href="..." scope="..."> link relation and is an alternative way to fetch a
> packaged version of a bunch of URLs within a defined URL scope in a single
> HTTP request. Before trying to separately GET any resources which fall
> within that scope, the user agent should first check inside the package to
> see if the resource is included.

I don't see how the W3C proposal would let us put the HTML file in the
package. It seems to be mainly geared towards putting resources like CSS
files, scripts and images into a package which is then used by a
freestanding HTML page. In particular, in order to load something from a
package, the package must first be declared using a <meta> tag in the HTML
file. But in order to see that <meta> tag we must first load the HTML file,
which must thus be loaded from outside the package.

As far as I can tell, the W3C proposal also makes URLs significantly more
complex. Each resource now has two URLs: one URL like
"https://website.com/RSSReader2000/picture.jpg", which is the URL that the
JS and the DOM see, and a second URL like
"https://website.com/RSSReader2000.pak#url=index.html", which is the one
that can actually be used to load something from the server.

This duality of URLs seems like a huge source of complexity. If I have a
URL, how do I know which type it is? How large are the changes that will be
needed in all the pieces of software that use URLs, in order to add logic
for tracking which type of URL is used where? It also seems like a lot of
logic overhead for developers.

In short, I think the W3C draft defines a good packaging format.
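To make the contrast concrete, here is a rough sketch of how a single
!//-style URL carries both pieces of information at once, so software only
ever deals with one URL per resource. (The helper function and its name are
my own illustration, not part of either proposal.)

```javascript
// Hypothetical helper: split a "!//"-style package URL into its two parts.
// With this scheme there is one canonical URL per resource; the package
// boundary is encoded in the URL itself rather than needing a second URL.
function splitPackageUrl(url) {
  const sep = url.indexOf('!//');
  if (sep === -1) {
    // A plain, unpackaged URL: fetch it from the server as usual.
    return { packageUrl: null, resourcePath: url };
  }
  return {
    packageUrl: url.slice(0, sep),     // what to fetch from the server
    resourcePath: url.slice(sep + 3),  // what to look up inside the package
  };
}

splitPackageUrl('https://website.com/RSSReader2000.pak!//picture.jpg');
// → { packageUrl: 'https://website.com/RSSReader2000.pak',
//     resourcePath: 'picture.jpg' }
```

Under the W3C draft, by contrast, the same lookup would need to map between
the two URL forms at every layer that touches URLs.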
But its URL handling seems pretty broken.

> Signing
>
> The Streamable Package Format is designed to be consistent with multipart
> media types. Should we therefore consider using the multipart/signed
> format from RFC 1847 [2] (e.g. [3]) rather than include separate signature
> files? Note that this would allow the HTTP headers of the resources to be
> part of the signed resource.
>
> Are you expecting individual parts (files) in the package to be signed
> (each part could be a multipart/signed), or for the package to be signed
> as a whole (the package would be a multipart/signed)? If it's the whole
> package, would that affect its streamability? Does the whole package have
> to be verified before any of the resources can be used?

I don't really have an opinion about RFC 1847 vs. what we use today. But a
requirement is that each resource is individually signed, since, as you
note, the package can't be streamed unless we can verify each part
separately. I'd prefer to leave the signing format up to the crypto team.

> Scope
>
> You point out that a signed app should run in its own process and its
> iframe should only be allowed to navigate to URLs which return signed
> resources. This sounds tricky, but it also sounds potentially related to
> the "navigation scope" we have discussed around web apps.
>
> Note that it might not be as simple as only allowing the browsing context
> to navigate to URLs of resources which came from the package. We may want
> to create Gaia apps which have dynamic URLs like
> http://contacts.firefox.com/contacts/123 which the Service Worker
> intercepts and generates a page for, but which were not one of the static
> resources included in the package.

Keep in mind that in order for a SW to be initiated, the navigated-to URL
needs to be in the scope of the SW. And I expect that for signed packages
the scope will be that of the package. Otherwise I don't see any big
problems.
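To illustrate the scope point, the routing decision a SW's fetch handler
would make for such a package could look roughly like this. (The scope
constant, resource list and function are illustrative, not from any spec.)

```javascript
// Hypothetical routing logic for the contacts example: static resources
// come from the signed package, while other URLs inside the package scope
// fall through to dynamically generated content.
const PACKAGE_SCOPE = 'https://contacts.firefox.com/contacts.pak!//';
const STATIC_RESOURCES = new Set(['index.html', 'style.css', 'app.js']);

function route(url) {
  if (!url.startsWith(PACKAGE_SCOPE)) {
    return { action: 'network' };        // outside the SW's scope entirely
  }
  const path = url.slice(PACKAGE_SCOPE.length);
  if (STATIC_RESOURCES.has(path)) {
    return { action: 'package', path };  // serve the signed resource
  }
  return { action: 'generate', path };   // e.g. "contact/123": SW builds it
}
```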
I think we can allow navigating to
"https://contacts.firefox.com/contacts.pak!//contact/123" even if no
"contact/123" resource exists in the package. The only issue is that such a
URL is device specific. I.e. the user couldn't share that URL with a friend
and have the friend load it. That's not package specific in any way, but is
always the case if you use a SW to generate URLs that don't exist and don't
have a server to generate the same URLs.

> Permissions
>
> If we want to grant permissions to apps which are not installed, I think
> we need to at least re-visit all the permissions which have the default
> granted permission as "Allow" [4].

Indeed.

> Currently our permissions system assumes an implicit level of trust from
> the user from the act of installing an app. Allowing a permission to be
> used simply by navigating a web page removes this implicit opt-in from the
> user and puts a lot more responsibility on code reviewers at Mozilla.

I don't agree with this. We never intended the act of installing as
something which the user should think of as making a security decision.
I.e. the user was never intended to think "is this safe" before clicking
the "yes" button. Which is why the install UI doesn't inform the user of
anything security related.

> Updates
>
> If the cache header of a particular resource has expired, could Gecko be
> smart enough to just download that one resource from the package
> (streaming the package until it comes across the resource it's looking
> for) rather than having to download the whole package or use some complex
> incremental diffing system?

It's important that we don't mix and match content from different versions
of a package. This isn't signing specific, but important for packages in
general.

> Origin
>
> Are signed resources from the package always considered cross-origin with
> other resources on the same server?
> Could a static signed JavaScript file at
> http://contacts.firefox.com/js/script.js do an XHR to a dynamic REST API
> URL at http://contacts.firefox.com/contacts/123?

Some of this stuff needs to be figured out still. But we definitely can't
allow unsigned *pages* on the same server to be considered same-origin with
signed pages, since that would mean that an unsigned page could reach into
a signed page and evaluate code there, thus gaining the permissions of the
signed page. But I would indeed like the signed content to be able to do
same-origin XHR requests to the server it was served from without having to
use CORS.

> Manifest
>
> Could these new Firefox Apps maybe use the W3C web app manifest format as
> a base, and use vendor prefixed proprietary properties where necessary?
> (e.g. moz_permissions). Seems like an opportunity for a fresh start.

I don't think this is a fresh start, since my intent is to convert all the
existing packaged content in our marketplace to this new package format.
But I do indeed think that we should support both W3C manifests and our own
manifest everywhere we support manifests.

/ Jonas

_______________________________________________
dev-b2g mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-b2g
