On 14.03.2025 23:47, Nick Owens wrote:
> "fixed" it with robots.txt which the particular crawler ("claudebot")
> respected. robots.txt:

Especially "claude" is known to me to be very ignorant of robots.txt (in
a way it is funny that it downloads everything BUT robots.txt ...)
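for reference, a minimal robots.txt that asks Anthropic's crawler to stay
away (the "ClaudeBot" user-agent string is the one the crawler reports;
whether it is honored is, as noted, another matter):

```
User-agent: ClaudeBot
Disallow: /
```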

one can consider blocking it - the traffic is all AWS ranges.. but think
about whether your USERs are sitting "behind" AWS VMs (or their proxies ...).
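AWS publishes its address ranges at https://ip-ranges.amazonaws.com/ip-ranges.json, so range-based blocking can be sketched roughly like this (the JSON here is made-up sample data in the published format, and the test addresses are arbitrary, not real crawler IPs):

```python
import ipaddress
import json

# Sample excerpt in the format of AWS's published ip-ranges.json
# (made-up data; fetch the real file in practice).
sample = json.loads("""
{"prefixes": [
  {"ip_prefix": "3.0.0.0/15",    "region": "ap-southeast-1", "service": "AMAZON"},
  {"ip_prefix": "18.130.0.0/16", "region": "eu-west-2",      "service": "EC2"}
]}
""")

# Parse each published prefix into a network object once up front.
nets = [ipaddress.ip_network(p["ip_prefix"]) for p in sample["prefixes"]]

def is_aws(addr: str) -> bool:
    """Return True if addr falls inside any of the published AWS ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in nets)

print(is_aws("3.1.2.3"))      # inside 3.0.0.0/15 -> True
print(is_aws("203.0.113.9"))  # TEST-NET-3, not in the sample ranges -> False
```

This is exactly why blanket-blocking is risky: the check cannot distinguish a crawler from a legitimate user whose traffic egresses from an AWS VM or proxy.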

burn with fire..
--
pb
