On Mon, Oct 23, 2023 at 9:52 AM Jonathan Hoyland
<jonathan.hoyl...@gmail.com> wrote:
<snip>
>>
>> I'm not following how this identifies web crawlers, unless perhaps we're 
>> using the term to mean different things? I would expect web crawlers to 
>> typically not do much with client certificates, and to typically want to 
>> index the web in the same way that humans with web browsers see it.
>
>
> So typically web crawlers don't use mTLS; they rely on things like publishing 
> their IP ranges for authentication, which isn't a great signal.
>
> If we could reliably identify e.g. GoogleBot, then we could skip some of our 
> bot-detection systems for definitely-allowed traffic, allowing us to be 
> stricter with unknown traffic or, of course, with explicitly disallowed traffic.
>
> There is an argument that one could show bots different results from what 
> humans see, but given that this is explicitly aimed at bots that we approve, 
> I don't see the incentive. Maybe improved TTFB or something?

Could the unprompted auth work in the HTTP working group be adapted to this?

Sincerely,
Watson



-- 
Astra mortemque praestare gradatim

