Hi all,

The use case I suggested to David is, I think, the easiest to reason about.
I am happy for human users to access my website with no auth. I'm happy for bots that I approve of (e.g. search engine crawlers) to access my website. Bots that I have not approved (AI scrapers, scalpers, etc.) will be subject to more intense scrutiny. The key factor here is that if I can already distinguish human from bot activity sufficiently well, then this lets me further distinguish between bots I approve of and bots I don't approve of or don't know.

The current situation is something like this: there is a bot that says "My IP range is any AWS IP, and I set my user-agent to 'Good Bot'". Other (potentially malicious) bots can impersonate that bot fairly easily. If a bot connects to my server from an AWS IP with a user-agent of 'Good Bot', I have to apply just as much scrutiny as I would to any other bot. Further, if I start getting a large DoS attack from those IPs with that user-agent, I might have to block all traffic with that signature, even though that hurts the good bot, whose traffic I want.

However, if the bot has a valid certificate that's in my trust chain (however I decide to configure that), then I can distinguish easily between the imposters and the real deal, and I can block the imposters without collateral damage.

If your client is a large-scale automated bot (as opposed to a browser) that always has the same certificate configured (as an example; obviously other setups can exist), then it should expect to always send this flag. A malicious bot could send this flag, but wouldn't be able to complete the handshake with an approved certificate. It could send an unknown certificate, but the server wouldn't necessarily accept it. Bots that can dynamically select a certificate should probably not set this flag IMO, as there are other mechanisms available if client auth is necessary, including PHA or EAs. This flag is simply a hint that we are on the happy path.
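To make the contrast concrete, here is a minimal Python sketch of the server-side decision described above. All names are illustrative, not from any draft: the CIDR is a made-up stand-in for a published cloud range, and "Good Bot" stands in for an approved crawler's user-agent string.

```python
import ipaddress

# Illustrative values only: a made-up cloud CIDR and a stand-in
# user-agent string for the approved bot.
CLOUD_RANGE = ipaddress.ip_network("52.0.0.0/11")
GOOD_BOT_UA = "Good Bot"

def looks_like_good_bot(ip: str, user_agent: str) -> bool:
    # The weak signal: anyone who can rent an address in the published
    # range can set this user-agent, so the real bot and an impostor
    # are indistinguishable here.
    return (user_agent == GOOD_BOT_UA
            and ipaddress.ip_address(ip) in CLOUD_RANGE)

def scrutiny(ip: str, user_agent: str, cert_trusted) -> str:
    # cert_trusted: True if the client cert chained to one of our
    # approved roots, False if a presented cert failed validation,
    # None if no cert was sent at all.
    if cert_trusted is True:
        return "skip bot checks"   # provably the bot we approved
    if cert_trusted is False:
        return "block"             # impersonator of an approved identity
    # No cert: the spoofable IP + user-agent signal earns no reduced
    # scrutiny, for the genuine bot and impostors alike.
    return "full bot checks"
```

Both the genuine bot and an impostor at an address like `52.1.2.3` pass `looks_like_good_bot`, but only the one holding a certificate in the server's trust chain reaches the "skip bot checks" branch; that is what allows blocking impostors without collateral damage to the approved bot.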
@Watson So I think UpA works well for its intended use case, namely authenticating a single browser-based session, but when we tried to implement it for a bots use case it rapidly became difficult, because of the tight binding required between the TLS and HTTP layers, which many languages and libraries intentionally isolate from one another to enable things like connection pooling.

I am very keen to see the UpA work move forward in parallel to this, although if someone smarter than me can get UpA working in this case too then I'd be happy to focus there. However, a side goal of this work is to help get the TLS Flags work over the line, so even if we can make UpA work here I think there is some value in this work stream.

Regards,

Jonathan

On Mon, 23 Oct 2023 at 22:22, Watson Ladd <watsonbl...@gmail.com> wrote:

> On Mon, Oct 23, 2023 at 9:52 AM Jonathan Hoyland
> <jonathan.hoyl...@gmail.com> wrote:
> <snip>
> >> I'm not following how this identifies web crawlers, unless perhaps
> >> we're using the term to mean different things? I would expect web
> >> crawlers to typically not do much with client certificates, and to
> >> typically want to index the web in the same way that humans with web
> >> browsers see it.
> >
> > So typically web crawlers don't use mTLS, they rely on things like
> > publishing their IP range for auth, which isn't a great signal.
> >
> > If we could reliably identify e.g. GoogleBot then we could skip some of
> > our bot detection systems for definitely-allowed traffic, allowing us to
> > be stricter with unknown traffic, or of course, explicitly-disallowed
> > traffic.
> >
> > There is an argument that one could show bots different results to
> > humans, but given that this is explicitly aimed at bots that we approve,
> > I don't see the incentive. Maybe improved TTFB or something?
>
> Would unprompted auth in the HTTP working group be adaptable to this?
>
> Sincerely,
> Watson
>
> --
> Astra mortemque praestare gradatim
_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls