On Wed, May 8, 2024 at 10:45 AM Neil Madden <neil.e.mad...@gmail.com> wrote:
> > On 8 May 2024, at 17:52, Sam Goto <g...@google.com> wrote:
> >
> > On Wed, May 8, 2024 at 7:23 AM Neil Madden <neil.e.mad...@gmail.com> wrote:
> >
> >> Thanks for these slides and recording. This is a fascinating proposal. I have plenty of potential thoughts and comments to digest, but I guess the most fundamental is that this spec assumes that users and IdPs will be happy for their browser to be a trusted party involved in login flows.
> >
> > Yep, that is, indeed, the privacy and security threat model that we (FedCM specifically, Web Platform APIs in general) use: the user agent is a trusted party.
>
> I’m sure browser developers do of course view their own products as trustworthy, but not everyone does.

The architecture of the web is constructed in such a way that users can (and, in fact, do) change user agents if those agents stop representing them. The same (in terms of the economics and the privacy/security threat model) goes for your operating system and your hardware. From a security threat model perspective, the web also largely assumes that the user agent (including the OS and the hardware) is trusted by the user.

> Episodes like [1] do provoke some distrust. Especially in corporate environments where users are forced to use a particular user-agent (and may be subject to MITM proxies), this may not be a universally accepted threat model.
>
> [1]: https://www.theverge.com/2023/4/25/23697532/microsoft-edge-browser-url-leak-bing-privacy
>
> >> In particular, the call to the accounts endpoint assumes that the IdP is willing to provide PII about the user to the browser. That seems questionable.
> >
> > Aside from a privacy/security threat model perspective (meaning, the user agent already has visibility over every network request made available to the content area)
>
> Sure, but if I use the recommended auth code flow or encrypted ID tokens, then PII is not exposed to the browser.

From a privacy/security threat model perspective, again, if PII is rendered in the DOM, it is exposed to the browser rendering it. When an IdP renders a page with the user's personal information, that is exposed to the browser (in the same way that an HTTP request would be).

> And it’s not just the browser itself in the current proposal, as the token is exposed to javascript, of course, so the usual XSS risks.

Yeah, XSS is a risk (extensions, particularly, come to mind), but not one that is bigger than the status quo (e.g. an extension can intercept top-level redirects too). In fact, with a high-level API (such as FedCM), you can constrain the scope of the memory footprint in ways that low-level APIs (e.g. top-level redirects, iframes and pop-up windows) can't, so I expect FedCM to provide a much higher security bar than the alternatives.
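For concreteness, the RP side of the flow is roughly the sketch below (the configURL and clientId values are just placeholders); this is the sense in which the token is "exposed to JavaScript" before it ever reaches the RP's backend:

    async function signInWithIdP() {
      // Minimal sketch of the RP-side FedCM call; configURL and clientId are
      // placeholders for whatever the IdP and RP actually register.
      const credential = await navigator.credentials.get({
        identity: {
          providers: [
            {
              configURL: "https://idp.example/fedcm.json",
              clientId: "rp-client-id",
            },
          ],
        },
      });
      // The IdP-issued token is returned directly to page JavaScript and is
      // then typically POSTed to the RP's backend for verification.
      return credential.token;
    }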
> > , I think that, if you look through the lenses of the design of incentives, this is indeed something that we are still gathering validation on. So far, it seems to strike a good balance, but I think you are right in that this introduces an extra game-theoretical position that can be questioned.
>
> I guess a related question is whether browser vendors are intending for this to become the only game in town for cross-site authentication?

It is not clear; it is probably too soon to tell either way. Tracking on the web has a lot of moving parts, and not all of them have settled.

> If not then those with differing threat models can use other mechanisms. But if the plan is to eventually completely block all other federation protocols then it needs to work for all use cases.
>
> >> This endpoint also has no CSRF protection, so risks leaking PII more generally (eg to any origin that has been CORS-allowlisted).
> >
> > As far as CSRF goes, we send a Sec-Fetch-Dest HTTP request header, which is a forbidden request header <https://fetch.spec.whatwg.org/#forbidden-request-header> (meaning that it can't be polyfilled in userland).
> >
> > https://fedidcg.github.io/FedCM/#sec-fetch-dest-header
>
> Ok, that is good. But it feels like something that IdPs could easily forget to enforce. In general, being one missed security header check away from a PII data leak seems not a fun place to be for an IdP.

Yeah, agreed. I'd love to hear about other ways that we could make this endpoint more secure. (A rough sketch of the check we have in mind is at the bottom of this message.)

> >> As another general comment, I'd say that if you want this to be easy for RPs to apply to existing login flows then it needs to be something that is easy to configure/initiate via a reverse proxy. That would suggest HTTP header-based rather than a JS API in my opinion.
> >
> > Yep, that sounds reasonable to me. For the most part, we think of JS APIs and HTTP requests as largely isomorphic in the important parts (again, privacy/security wise), and we can expose either/both purely based on ergonomics (as you suggest), so yeah, if this makes it easier for developers, it is easy to make it happen, I think.
>
> — Neil
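As promised above, a rough sketch of the Sec-Fetch-Dest check on the accounts endpoint (Node.js, built-in http module only; the path, port and empty account list are all illustrative). The important property is that Sec-Fetch-Dest is a forbidden request header that content JavaScript can never set, and the browser sets it to "webidentity" for FedCM fetches:

    import { createServer } from "node:http";

    const server = createServer((req, res) => {
      if (req.url === "/fedcm/accounts") {
        // FedCM's fetch of the accounts endpoint carries
        // `Sec-Fetch-Dest: webidentity`; reject everything else so the
        // endpoint doesn't leak account PII to ordinary cross-site requests.
        if (req.headers["sec-fetch-dest"] !== "webidentity") {
          res.statusCode = 403;
          res.end();
          return;
        }
        // ...authenticate the IdP session cookie and look up the accounts...
        res.writeHead(200, { "content-type": "application/json" });
        res.end(JSON.stringify({ accounts: [] }));
        return;
      }
      res.statusCode = 404;
      res.end();
    });

    server.listen(8080);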
_______________________________________________
OAuth mailing list -- oauth@ietf.org
To unsubscribe send an email to oauth-le...@ietf.org