Re: [TLS] Last Call: (Deprecating TLSv1.0 and TLSv1.1) to Best Current Practice

2020-11-27 Thread Keith Moore
While I agree that TLSv1.0 and TLSv1.1 should be avoided as much as 
possible, I believe this document fails to consider that there are old 
systems that are still in use that cannot be upgraded.   Strict 
implementation of the MUST NOT rules in this document can even prevent 
those systems from being upgraded at all, even when upgrades are 
available.   Strict implementation of the MUST NOT rules in this 
document can also make old embedded systems with built-in servers 
effectively unusable or require the operators of such systems to disable 
TLS entirely.


In general, it should not be assumed that old systems can be upgraded, 
or that old systems can feasibly be replaced with newer systems.   There 
are several reasons for that.


 * One is that operating system vendors sometimes stop supporting old
   hardware, and client or server software vendors stop supporting old
   operating systems.
 * Some platforms are certified for medical use with specific versions
   of operating systems for which OS or software upgrades would require
   recertification, and the manufacturers of such systems do not always
   recertify their platforms with the latest operating systems.
 * Some embedded systems do not have provision for firmware upgrades,
   and/or are operated on disconnected networks, so that upgrades are
   cumbersome (and may violate company security policies); such products
   sometimes get no firmware updates at all because there is no revenue
   stream to support them and the updates wouldn't be applied anyway.
   And yet, it's common for embedded systems to be configured, queried,
   or monitored using HTTP[S].
 * I have also worked on products for manufacturing environments for
   which upgrades were forbidden; any firmware upgrade would have
   required shutting down the assembly line for days and retesting the
   whole thing.
 * Finally, sometimes software or firmware "upgrades" take away
   functionality present in earlier versions, so that the "upgrade" may
   make that computer useless for its intended purpose.

In general, there are two kinds of problems caused by disabling TLS 
1.0/1.1 in implementations:


1. Old clients cannot talk to newer servers

Again, sometimes clients run on old machines that cannot be upgraded or 
replaced.   When servers refuse to support old TLS versions, an old 
client may not work at all.   It is not always feasible to 
download the same file from a different machine or different client program.


I have seen this happen when trying to upgrade some software on an old 
MacBook Pro.   The software I was trying to download could only be 
downloaded from Apple using Safari on a Mac. Apple's server would not 
use a version of TLS compatible with the old version of Safari I had, 
and there were no upgrades to that version of Safari.  I tried 
downloading the software from another (non-Apple) computer; the server 
would not let me do so.   I didn't have a more current Mac to use, 
didn't wish to buy one, and the pandemic made using someone else's Mac 
infeasible.


The best idea I came up with was to set up a web proxy that supported 
more recent versions of TLS, and configure Safari to communicate via 
that web proxy.   But I never found time to do that.


I'm not saying the RFC should be fixed for me, but rather, that I've 
personally experienced a situation that many other people undoubtedly 
have experienced and will experience after publication of this RFC.  
(Some servers are already following these recommendations.)


I have also worked with systems operated by a major petroleum producer 
(who will remain unnamed) who had (unsurprisingly) very elaborate 
security measures.   Their internal networks were inaccessible from 
outside systems except via multiple layers of remote desktop.  So any 
client software to be used had to be software already vetted and 
installed on their internal machines.   But presumably because of the 
difficulty of vetting new software, the only browser available was MSIE 
5 [don't remember which version of Windows].   (I know this because I 
had to update product software to use GIF files instead of PNG, add some 
JS polyfills, and avoid some HTML5 features, in order to be compatible 
with their browsers).   I cite this only as another example that one 
cannot reasonably expect all clients and servers to be current, or even 
nearly so.


In some of these cases (when the client cannot be upgraded) an 
appropriate remedy may be to install a web proxy to allow the old client 
to communicate with the server.   Of course this can still come with 
risks, including perhaps exposure of the network traffic between the 
client and the web proxy.
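
(As a concrete illustration of that remedy, here is a minimal sketch in
Python of a single-upstream relay: the legacy client connects to it on the
local network, and the relay re-originates the connection to the real server
over TLS 1.2 or later.  The host name, port, and threading model are
illustrative assumptions, not a recommendation for any particular deployment.)

    import socket
    import ssl
    import threading

    UPSTREAM_HOST = "example.org"       # placeholder: the modern TLS 1.2+ server
    UPSTREAM_PORT = 443
    LISTEN_ADDR = ("127.0.0.1", 8443)   # the legacy client is pointed here instead

    def pump(src, dst):
        # Copy bytes one way until EOF, then half-close the peer.
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
        finally:
            try:
                dst.shutdown(socket.SHUT_WR)
            except OSError:
                pass

    def handle(client_sock):
        ctx = ssl.create_default_context()            # verifies the upstream certificate
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse pre-1.2 on the upstream leg
        upstream = socket.create_connection((UPSTREAM_HOST, UPSTREAM_PORT))
        tls = ctx.wrap_socket(upstream, server_hostname=UPSTREAM_HOST)
        t = threading.Thread(target=pump, args=(tls, client_sock), daemon=True)
        t.start()
        pump(client_sock, tls)
        t.join()
        tls.close()
        client_sock.close()

    def main():
        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(LISTEN_ADDR)
        srv.listen(5)
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        main()

(Note that, exactly as the paragraph above says, the leg between the legacy
client and the relay is only as strong as whatever that client can do, so
such a relay belongs on a trusted local network segment.)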


In other cases, server operators might do well to consider whether, for 
their specific users, services, and content, TLS >= 1.2  is really an 
appropriate constraint to impose.   For example the ietf.org web server 
is currently supporting older versions of TLS in order to make IETF 
documen

Re: [TLS] [Last-Call] Last Call: (Deprecating TLSv1.0 and TLSv1.1) to Best Current Practice

2020-11-27 Thread Eric Rescorla
Keith,

Thanks for your note. Most of the general points you raise here were
discussed
when the TLS WG decided to move forward with this draft [0], though
perhaps some of that is not reflected in the text. Of course that
doesn't make these points invalid, so I'll try to summarize my
view of the rationale here.

Your primary objection seems to be that this has the effect of creating
interop problems for older implementations that are unable to
upgrade. This is of course true; however, I think there are a number of
factors that outweigh that.

First, while it is certainly problematic that people who have
un-upgraded endpoints will find they cannot connect to modern endpoints,
we have to ask what is best for the ecosystem as a whole, and IMO what
is best for that ecosystem is to upgrade to modern protocols. This
would be true in any case, but is doubly true in the case of COMSEC
protocols: What we want is to be able to guarantee a certain minimal
level of security if you're using TLS, but having weaker versions in
play makes that hard, and not just for the un-upgraded people because
we need to worry about downgrade attacks. While we have made efforts
to protect against downgrade, the number of separate interacting
versions makes this very difficult to analyze and ensure, and so the
fewer versions in play the easier this is. Asking everyone else to
bear these costs in terms of risk, management, complexity, etc. so
that a few people don't have to upgrade seems like the wrong tradeoff.
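
(For illustration of "fewer versions in play": most modern stacks let an
endpoint set a version floor with one configuration line.  A minimal sketch
using Python's standard ssl module; the host name is a placeholder.)

    import socket
    import ssl

    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse TLS 1.0/1.1 outright

    with socket.create_connection(("example.org", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
            print("negotiated:", tls.version())    # e.g. 'TLSv1.3'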

Second, it's not clear to me that we're doing the people who have
un-upgraded endpoints any favors by continuing to allow old versions
of TLS. As a practical matter any piece of software which is so old
that it does not support TLS 1.2 quite likely has a number of security
defects (whether in the TLS stack or elsewhere) that make it quite
hazardous to connect to any network which might have attackers on it,
which, as RFC 3552 reminds us, is any network. Obviously, people have
to set their own risk level, but that doesn't mean that we have to
endorse everything they want to do.

Finally, as is often said, we're not the protocol police, so we can't
make anyone turn off TLS < 1.2. However, we need to make the best
recommendation we can, and that recommendation is that people should
not use versions prior to TLS 1.2. If people choose not to comply,
that's of course their right. We were certainly aware at the time
this document was proposed that some people would take longer than
others to comply, but the purpose was to move the ecosystem in the
right direction, which is to say TLS >= 1.2. I believe that a MUST
is more effective than a SHOULD here.

-Ekr

P.S. A few specific notes about your technical points here:

> But some of those embedded devices do support TLS, even if it's old
> TLS (likely with self-signed certs... TLS really wasn't designed to
> work with embedded systems that don't have DNS names.)

This is not correct. TLS takes no position at all on how servers
are authenticated. It merely assumes that there is some way to
validate the certificate and outsources the details to application
bindings. For instance, you could have an IP address cert.
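
(A hedged sketch of that last point, again using Python's ssl module: a
client verifying a server whose certificate carries an iPAddress
subjectAltName rather than a DNS name.  The address, the CA bundle path, and
the assumption of a reasonably recent Python/OpenSSL that matches IP SANs
are all illustrative; none of this is taken from the draft.)

    import socket
    import ssl

    ctx = ssl.create_default_context(cafile="/path/to/device-ca.pem")  # trust anchor for the device cert
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2

    addr = "192.0.2.10"   # documentation address standing in for the embedded device
    with socket.create_connection((addr, 443)) as raw:
        # With recent Python/OpenSSL, server_hostname may be an IP address
        # string and is matched against iPAddress SAN entries in the cert.
        with ctx.wrap_socket(raw, server_hostname=addr) as tls:
            print(tls.getpeercert()["subjectAltName"])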


> For newer interactive clients I believe the appropriate action when
> talking to a server that doesn't support TLS >= 1.2 is to (a) warn
> the user, and (b) treat the connection as if it were insecure.  (so
> no "lock" icon, for example, and the usual warnings about submitting
> information over an insecure channel.)

I'm not sure what clients you're talking about, but for the clients
I am aware of, this would be somewhere between a broken experience
and an anti-pattern. For example, in Web clients, because the origin
includes the scheme, treating https:// URIs as http:// URIs will have
all sorts of negative side effects, such as making cookies unavailable
etc. For non-Web clients such as email and calendar, having any
kind of overridable warning increases the risk that people will
click through those warnings and expose their sensitive information
such as passwords, which is why many clients are moving away from
this kind of UI.



[0] The minutes are typically sketchy but you can see that people
were concerned about endpoints having trouble upgrading:
https://datatracker.ietf.org/meeting/102/materials/minutes-102-tls-11

On Fri, Nov 27, 2020 at 4:45 PM Keith Moore 
wrote:

> While I agree that TLSv1.0 and TLSv1.1 should be avoided as much as
> possible, I believe this document fails to consider that there are old
> systems that are still in use that cannot be upgraded.   Strict
> implementation of the MUST NOT rules in this document can even prevent
> those systems from being upgraded at all, even when upgrades are
> available.   Strict implementation of the MUST NOT rules in this document
> can also make old embedded systems with built-in servers effectively
> unusable or require the operators of such systems to disable TLS entirely.
>
> In general, it should not be assumed that old

Re: [TLS] [Last-Call] Last Call: (Deprecating TLSv1.0 and TLSv1.1) to Best Current Practice

2020-11-27 Thread Keith Moore

On 11/27/20 9:53 PM, Eric Rescorla wrote:


Keith,

Thanks for your note. Most of the general points you raise here were 
discussed

when the TLS WG decided to move forward with this draft [0], though
perhaps some of that is not reflected in the text. Of course that
doesn't make these points invalid, so I'll try to summarize my
view of the rationale here.

Your primary objection seems to be that this has the effect of creating
interop problems for older implementations that are unable to
upgrade. This is of course true; however, I think there are a number of
factors that outweigh that.

First, while it is certainly problematic that people who have
un-upgraded endpoints will find they cannot connect to modern endpoints,
we have to ask what is best for the ecosystem as a whole, and IMO what
is best for that ecosystem is to upgrade to modern protocols.


If you're going to try to state what's best for the ecosystem as a 
whole, you need to understand that "the ecosystem" (or at least, the set 
of hosts and protocols using TLS) is a lot more diverse than things that 
are connected to the Internet most of the time, well-supported, and 
easily upgraded.   The idea that everybody should be constantly upgraded 
has not been shown to be workable, and there are reasons to believe that 
it is not workable.


When you give advice that is unworkable because it is based on dubious 
assumptions, you might not only cause interoperability failures but also 
end up degrading security in very important cases.



This
would be true in any case, but is doubly true in the case of COMSEC
protocols: What we want is to be able to guarantee a certain minimal
level of security if you're using TLS, but having weaker versions in
play makes that hard, and not just for the un-upgraded people because
we need to worry about downgrade attacks.


This sort of sounds like a marketing argument.   Yes, in some sense we'd 
like for "TLS" to mean "you're secure, you don't have to think about it" 
but more realistically TLS 1.0, 1.1, 1.2, and 1.3 each provide different 
countermeasures against attack (and in some cases, different drawbacks, 
like TLS 1.3 + ESNI being blocked) and you probably do need to be aware 
of those differences.



While we have made efforts
to protect against downgrade, the number of separate interacting
versions makes this very difficult to analyze and ensure, and so the
fewer versions in play the easier this is. Asking everyone else to
bear these costs in terms of risk, management, complexity, etc. so
that a few people don't have to upgrade seems like the wrong tradeoff.


I don't think it's a matter of "asking everyone else to bear these 
costs".  I think TLS < 1.2 should be disabled in the vast majority of 
clients, and in many (probably not all) public facing servers, but some 
users and operators will need to have workarounds to deal with 
implementations for which near-term upgrades (say, for the next 5-10 
years) are infeasible.


Second, it's not clear to me that we're doing the people who have
un-upgraded endpoints any favors by continuing to allow old versions
of TLS. As a practical matter any piece of software which is so old
that it does not support TLS 1.2 quite likely has a number of security
defects (whether in the TLS stack or elsewhere) that make it quite
hazardous to connect to any network which might have attackers on it,
which, as RFC 3552 reminds us, is any network. Obviously, people have
to set their own risk level, but that doesn't mean that we have to
endorse everything they want to do.


Yes, but you might actually increase the vulnerability by insisting that 
they not use the only TLS versions that are available to them.


There are a lot of Bad Ideas floating around about what makes things 
secure, and a lot of Bad Security Policy that derives from those Bad 
Ideas.   But practically speaking, you can't change those Bad Ideas and 
Bad Policies overnight without likely making them much worse.   People 
have to actually understand what they're doing first, and that takes 
time.  And there are a lot of things that have to get fixed besides just 
the TLS versions to make some of these environments more secure.   Lots 
of people having to make security-related decisions simply haven't 
managed to deal with the complexity of the tradeoffs, so it's really 
common to hear handwaving arguments of the form "the only threat we need 
to consider is X, and Y will deal with that threat".   And of course 
that's not anywhere nearly true, and Y is woefully insufficient. None of 
which is a surprise to you, I'm sure.


Personally I often ask myself "what does it take to get devices in the 
field regularly upgraded?"   And it's not a simple answer. First thing 
that's needed is an ongoing revenue stream to provide those upgrades in 
the first place, and there's a huge amount of inertia/mindshare to be 
overcome just for that.   Second thing is you have to keep device vendor 
corporate overlords from repurposing that money

[TLS] Genart last call review of draft-ietf-tls-ticketrequests-06

2020-11-27 Thread Dale Worley via Datatracker
Reviewer: Dale Worley
Review result: Ready

I am the assigned Gen-ART reviewer for this draft. The General Area
Review Team (Gen-ART) reviews all IETF documents being processed
by the IESG for the IETF Chair.  Please treat these comments just
like any other last call comments.

For more information, please see the FAQ at

.

Document:  draft-ietf-tls-ticketrequests-06
Reviewer:  Dale R. Worley
Review Date:  2020-11-27
IETF LC End Date:  2020-12-03
IESG Telechat date:  Not known

Summary:

This draft is ready for publication as a Standards Track RFC.

Editorial comments:

2.  Use Cases

   *  Parallel HTTP connections: To minimize ticket reuse while still
  improving performance, it may be useful to use multiple, distinct
  tickets when opening parallel connections.

To the naive reader, the ordering of the phrases doesn't seem to match
the logical ordering of the concepts.  Perhaps

   *  Parallel HTTP connections: To improve performance, a client
  may open parallel connections.  To avoid ticket reuse, the client
  may use multiple, distinct tickets on each connection.

--

   *  Decline resumption: Clients can indicate they have no intention of
  resuming connections by sending a ticket request with count of
  zero.

"have no intention" seems to me to suggest a decision that will not
change.  Since the future cannot be guaranteed, perhaps better wording
is "do not intend to resume", suggesting a current state that might
possibly change in the future.

   new_session_count  The number of tickets desired by the client when
  the server chooses to negotiate a new connection.

   resumption_count  The number of tickets desired by the client when
  the server is willing to resume using a ticket presented in this
  ClientHello.

If I understand the processing which is suggested correctly, when the
client sends a ClientHello, the server can choose to either negotiate
a new connection, or (if a ticket is present in the ClientHello) the
server can choose to resume the previous connection represented by the
ticket.  These two parameters provide the requested ticket count for
the two situations.
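
(A small sketch of that reading of the draft, in Python; this is the
selection logic only, not the wire format, and the function name and the
server-side cap are assumptions for illustration.)

    def tickets_to_send(resumed: bool, new_session_count: int,
                        resumption_count: int, server_limit: int = 4) -> int:
        # The server honors resumption_count when it resumes from a presented
        # ticket and new_session_count when it negotiates a fresh session,
        # capped by its own policy limit.
        requested = resumption_count if resumed else new_session_count
        return min(requested, server_limit)

    # A client that does not intend to resume can ask for zero on both paths:
    assert tickets_to_send(False, new_session_count=0, resumption_count=0) == 0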

Assuming the above is correct, I would recommend changing the wording
slightly, as "when" suggests a fact which is true over an extended
period of time, whereas the provided counts are applicable in just this
one instance:

   new_session_count  The number of tickets desired by the client if
  the server chooses to negotiate a new connection.

   resumption_count  The number of tickets desired by the client if
  the server chooses to resume (using the ticket presented in this
  ClientHello).

(Change "the" to "a" in the last sentence if the ClientHello can
present more than one ticket among which the server can choose.)

[END]



___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Last-Call] Last Call: (Deprecating TLSv1.0 and TLSv1.1) to Best Current Practice

2020-11-27 Thread Eric Rescorla
Hi Keith,

Thanks for your note. I think it's clear we see things differently,
and I don't think it's that useful to get into an extended back and
forth on those points. Accordingly I've done a fair bit of trimming to
focus on the points where I think you may have misunderstood me
(perhaps due to unclear writing on my part).

On Fri, Nov 27, 2020 at 7:39 PM Keith Moore 
wrote:
> On 11/27/20 9:53 PM, Eric Rescorla wrote:
> > This
> > would be true in any case, but is doubly true in the case of COMSEC
> > protocols: What we want is to be able to guarantee a certain minimal
> > level of security if you're using TLS, but having weaker versions in
> > play makes that hard, and not just for the un-upgraded people because
> > we need to worry about downgrade attacks.
>
> This sort of sounds like a marketing argument.  Yes, in some sense we'd
> like for "TLS" to mean "you're secure, you don't have to think about it"
> but more realistically TLS 1.0, 1.1, 1.2, and 1.3 each provide different
> countermeasures against attack (and in some cases, different drawbacks,
> like TLS 1.3 + ESNI being blocked) and you probably do need to be aware
> of those differences.

Well, I can't speak to how it sounds to you, but it's not a marketing
argument but rather a security one. This is easiest to understand in
the context of the Web, where you have a reference that contains one
bit: http versus https, and all https content is treated the same, no
matter which version of TLS it uses. In that context, having all the
supported versions meet some minimum set of security properties is
quite helpful. It's true that TLS 1.0, 1.1, 1.2, and 1.3 have different
properties, which is precisely why it's desirable to disable versions
below 1.2 so that the properties are more consistent.


> > > But some of those embedded devices do support TLS, even if it's old
> > > TLS (likely with self-signed certs... TLS really wasn't designed to
> > > work with embedded systems that don't have DNS names.)
> >
> > This is not correct. TLS takes no position at all on how servers
> > are authenticated. It merely assumes that there is some way to
> > validate the certificate and outsources the details to application
> > bindings. For instance, you could have an IP address cert.
>
> That's technically correct of course, but I think you know what I
> mean. Without a reliable way of knowing that the server's certificate
> is signed by a trusted party, the connection is vulnerable to an MITM
> attack. And the only widely implemented reliable way of doing that
> is to use well-known and widely trusted CAs.

Yes, and those certificates can contain IP addresses. Not all
public CAs issue them, but some do.


> > I'm not sure what clients you're talking about, but for the clients
> > I am aware of, this would be somewhere between a broken experience
> > and an anti-pattern. For example, in Web clients, because the origin
> > includes the scheme, treating https:// URIs as http:// URIs will have
> > all sorts of negative side effects, such as making cookies unavailable
> > etc. For non-Web clients such as email and calendar, having any
> > kind of overridable warning increases the risk that people will
> > click through those warnings and expose their sensitive information
> > such as passwords, which is why many clients are moving away from
> > this kind of UI.
> UI design is a tricky art, and I agree that some users might see (or
> type) https:// in a field and assume that the connection is secure.

In the Web context this is not primarily a UI issue; web client
security mostly does not rely on the user looking at the URL (and in
fact many clients, especially mobile ones, conceal the URL). Rather,
they automatically enforce partitioning between insecure (http) and
secure (https) contexts, and therefore having a context which is
neither secure nor insecure creates real challenges. Let me give you
two examples:

* Browsers block active "mixed content": JavaScript from http origins
  loaded into an https origin. In the scenario you posit, where we
  treat https from TLS 1.1 as "insecure", if the target server for
  some reason gets configured as TLS 1.1, the client would have to
  block it, creating breakage.

* Cookies can be set to be secure only. Here again, if you have
  a situation in which some of your servers support TLS 1.2
  and others TLS 1.1, then you can get breakage where cookies
  are not sent.


> But I think it's possible for UI designs to be more informative and less
> likely to be misunderstood, if the designers understand why it's
> important. I also think that IETF is on thin ice if we think we're
> in a better position than UI designers to decide what effectively
> informs users and allows them to make effective choices, across all
> devices and use cases.

I'm not suggesting that the IETF design UI.

We're getting pretty far into the weeds here, but what I can tell you is
that the general trend in this area -- especially in browsers but also
in some

Re: [TLS] Last Call: (Deprecating TLSv1.0 and TLSv1.1) to Best Current Practice

2020-11-27 Thread Gary Gapinski

  
  
Looking at https://tools.ietf.org/html/draft-ietf-tls-oldversions-deprecate-09 §2:


  §2 ¶5 has «TLS 1.3, specified in TLSv1.3 [RFC8446]…».
  §2 ¶4 has «TLSv1.2, specified in RFC5246 [RFC5246]…»
  §2 ¶3 has «TLS 1.1, specified in [RFC4346]…»

Were these variant (specified in plaintext+[link], specified in
  link+[link], specified in [link]) citation forms deliberate?

TLS 1.2 was given a "v" for version; the others were not.
§2 ¶1 cites RFC 7457 twice with hyperlinks.

The document references in square brackets link directly to the
  documents; elsewhere in the document, many square-bracketed
  document references are intra-document links to §10, though RFC
  references seem mostly to be direct (i.e., not intra-document).
  Perhaps all square-bracketed links should be intra-document links
  to §10? RFC 7322 seems to adopt the same seemingly arbitrary (some
  links are direct; some intra-document) hyperlinking without any
  related etiquette guidance.

Regards,
Gary
  


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Last-Call] Last Call: (Deprecating TLSv1.0 and TLSv1.1) to Best Current Practice

2020-11-27 Thread Keith Moore

On 11/27/20 11:30 PM, Eric Rescorla wrote:



Well, I can't speak to how it sounds to you, but it's not a marketing
argument but rather a security one. This is easiest to understand in
the context of the Web, where you have a reference that contains one
bit: http versus https, and all https content is treated the same, no
matter which version of TLS it uses. In that context, having all the
supported versions meet some minimum set of security properties is
quite helpful. It's true that TLS 1.0, 1.1, 1.2, and 1.3 have different
properties, which is precisely why it's desirable to disable versions
below 1.2 so that the properties are more consistent.


Sure, it would be great if users and operators never had to worry about 
old TLS versions.  But in practice, they do, for reasons already 
mentioned.  It's simply not feasible to upgrade or discard everything 
using old versions of TLS in the next few years, and a lot of those 
hosts and devices and programs will continue to need to be used for a 
variety of valid reasons.    Pretending otherwise for the sake of an 
unrealistically simple statement of security policy seems unhelpful.




> That's technically correct of course, but I think you know what I
> mean. Without a reliable way of knowing that the server's certificate
> is signed by a trusted party, the connection is vulnerable to an MITM
> attack. And the only widely implemented reliable way of doing that
> is to use well-known and widely trusted CAs.

Yes, and those certificates can contain IP addresses. Not all
public CAs issue them, but some do.


I'm aware of that.  But what really is the point of a cert (especially 
one issued by a public CA) that has an RFC 1918 address as its subject?   
Not that it matters that much because the vast majority of sites using 
embedded systems aren't going to bother with them.  Most of those 
systems probably don't support cert installation by customers anyway.





> > I'm not sure what clients you're talking about, but for the clients
> > I am aware of, this would be somewhere between a broken experience
> > and an anti-pattern. For example, in Web clients, because the origin
> > includes the scheme, treating https:// URIs as http:// URIs will have
> > all sorts of negative side effects, such as making cookies unavailable
> > etc. For non-Web clients such as email and calendar, having any
> > kind of overridable warning increases the risk that people will
> > click through those warnings and expose their sensitive information
> > such as passwords, which is why many clients are moving away from
> > this kind of UI.
> UI design is a tricky art, and I agree that some users might see (or
> type) https:// in a field and assume that the connection is secure.

In the Web context this is not primarily a UI issue; web client
security mostly does not rely on the user looking at the URL (and in
fact many clients, especially mobile ones, conceal the URL). Rather,
they automatically enforce partitioning between insecure (http) and
secure (https) contexts, and therefore having a context which is
neither secure nor insecure creates real challenges. Let me give you
two examples:


To clarify, my suggestion was that https with TLS < 1.2 be treated as 
insecure, not as neither secure nor insecure or any kind of "in between".


(and yes, I was aware of that kind of partitioning)

> But I think it's possible for UI designs to be more informative and less
> likely to be misunderstood, if the designers understand why it's
> important. I also think that IETF is on thin ice if we think we're
> in a better position than UI designers to decide what effectively
> informs users and allows them to make effective choices, across all
> devices and use cases.

I'm not suggesting that the IETF design UI.

We're getting pretty far into the weeds here, but what I can tell you is
that the general trend in this area -- especially in browsers but also
in some mail and calendar clients -- is to simply present an error and
to make overriding that error difficult if not impossible. This is
informed by a body of research [0] that indicates that users are too
willing to override these warnings even in dangerous settings.


Yes, I'm aware of that too.   You should probably be aware that one 
effect of this that I've seen affect actual products is to avoid 
supporting TLS in embedded systems.


Keith


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Last-Call] Last Call: (Deprecating TLSv1.0 and TLSv1.1) to Best Current Practice

2020-11-27 Thread Eric Rescorla
Well, I think our respective positions are clear, so I'll just limit myself
to one point.

On Fri, Nov 27, 2020 at 8:43 PM Keith Moore 
wrote:

> >
> >
> > > > I'm not sure what clients you're talking about, but for the clients
> > > > I am aware of, this would be somewhere between a broken experience
> > > > and an anti-pattern. For example, in Web clients, because the origin
> > > > includes the scheme, treating https:// URIs as http:// URIs will
> have
> > > > all sorts of negative side effects, such as making cookies
> unavailable
> > > > etc. For non-Web clients such as email and calendar, having any
> > > > kind of overridable warning increases the risk that people will
> > > > click through those warnings and expose their sensitive information
> > > > such as passwords, which is why many clients are moving away from
> > > > this kind of UI.
> > > UI design is a tricky art, and I agree that some users might see (or
> > > type) https:// in a field and assume that the connection is secure.
> >
> > In the Web context this is not primarily a UI issue; web client
> > security mostly does not rely on the user looking at the URL (and in
> > fact many clients, especially mobile ones, conceal the URL). Rather,
> > they automatically enforce partitioning between insecure (http) and
> > secure (https) contexts, and therefore having a context which is
> > neither secure nor insecure creates real challenges. Let me give you
> > two examples:
>
> To clarify, my suggestion was that https with TLS < 1.2 be treated as
> insecure, not as neither secure nor insecure or any kind of "in between".
>

Well, the problem is that it is secure from the perspective of the site author
but insecure from the perspective of the client. That's not going to end well
for the reasons I indicated above.

Regardless, this is not likely to happen on the Web: browsers are already
converging on simply disabling older versions, and I doubt they are going to
have any interest in the approach you propose.

-Ekr
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Last-Call] Last Call: (Deprecating TLSv1.0 and TLSv1.1) to Best Current Practice

2020-11-27 Thread Keith Moore

On 11/27/20 11:58 PM, Eric Rescorla wrote:


To clarify, my suggestion was that https with TLS < 1.2 be treated as
insecure, not as neither secure nor insecure or any kind of "in
between".


Well, the problem is that it is secure from the perspective of the 
site author but insecure from the perspective of the client. That's 
not going to end well for the reasons I indicated above.


Well, that is an interesting point that I missed earlier.   But I think 
the situation will be the same if any of the obvious workarounds is 
used, like a plugin or proxy.


Keith


___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls