For the monitoring part, I have never felt the need to monitor anything outside 
the endpoints of the connections. If I need to decrypt packets on the wire in 
order to troubleshoot, it's because my application is not providing enough 
information in its debug logs. And to consolidate similar logs generated on 
different servers, many of our customers (including several national banks) use 
our SIEM tool to collect all the information generated by their applications 
and query it in a centralized way. For me, TLS connections should always be 
opaque pipes: if I want to look at what's within, I have to look from one of 
the ends.

Regarding IPS/IDS appliances, maybe it's time to change the current mindset 
and accept that IPS services should not be the "big brother" they are today. I 
would go for "global" IPS/IDS appliances that work on traffic content for 
unencrypted connections and on traffic trends for encrypted ones. To protect 
each individual server, IPS/IDS agents installed on the machine itself could 
monitor and defend each specific service running on it. Those agents could run 
as server plugins or, if necessary, terminate TLS at the agent, analyze the 
cleartext, and pass it on to the server in cleartext. Client authentication can 
still be used in this case: for example, with Apache+Tomcat, the AJP protocol 
has for many years allowed the TLS-terminating frontend (Apache) to pass the 
TLS client credentials to the backend (Tomcat).
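
For illustration, a minimal sketch of such a setup (the port, paths and 
addresses are placeholders; it assumes mod_ssl and mod_proxy_ajp on the Apache 
side):

    # Apache httpd: terminate TLS, require a client certificate, proxy over AJP
    SSLEngine on
    SSLVerifyClient require
    SSLOptions +StdEnvVars +ExportCertData
    ProxyPass "/app" "ajp://127.0.0.1:8009/app"

    <!-- Tomcat server.xml: AJP connector that receives the proxied requests -->
    <Connector port="8009" protocol="AJP/1.3" address="127.0.0.1" />

With something along these lines, mod_proxy_ajp forwards the SSL attributes 
(including the client certificate) as AJP request attributes, and Tomcat 
exposes them to the web application through the usual 
javax.servlet.request.X509Certificate request attribute.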

If you still feel that you need TLS visibility, in my opinion the mechanisms 
already in place to export the necessary key material to the out-of-band 
scanners are enough. You talk about the need for out-of-band scanners to have 
the key available as soon as they start receiving packets, since they can't 
possibly cache so many packets for so many connections. In that case you can 
put the TLS handshake on hold until you are sure that the out-of-band scanner 
has received the key material, and only then let the handshake continue.

In fact, I suspect (I may be wrong, as I have not looked into it in detail 
yet) that when the SSL_CTX_set_keylog_callback mechanism included in OpenSSL is 
used, the handshake does not continue until the callback has returned (and if 
it does continue, or the callback is only invoked at the end of the handshake, 
I think modifying the callback to behave this way would be an improvement). If 
this callback includes the transfer of the key material to the out-of-band 
scanner, then by the time the callback returns and the handshake is allowed to 
continue, every out-of-band scanner has been provided with the key material and 
can decrypt the TLS data without having to queue any packets. If this data 
cannot be sent to the out-of-band scanners because they are down, the server 
has the option to automatically abort the connection or to let it continue 
without the visibility (your choice).
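
For what it's worth, a minimal sketch of that idea in C, assuming OpenSSL 
1.1.1 or later; the scanner address, the plain TCP transport and the helper 
names (enable_escrow, escrow_ok) are placeholders of mine, not an existing API:

    /* The keylog callback is invoked synchronously from the handshake code,
     * so the handshake does not progress until the callback returns. */
    #include <openssl/ssl.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    static int escrow_fd = -1;   /* connected socket to the out-of-band scanner */
    static int escrow_ok = 0;    /* cleared if the scanner could not be reached */

    /* OpenSSL calls this with one NSS-key-log-format line per exported secret,
     * e.g. "CLIENT_RANDOM <client_random> <master_secret>". */
    static void keylog_cb(const SSL *ssl, const char *line)
    {
        (void)ssl;
        if (escrow_fd < 0) { escrow_ok = 0; return; }
        size_t len = strlen(line);
        /* Blocking writes: the handshake resumes only after the key material
         * has been handed to the kernel (an application-level ACK from the
         * scanner could be awaited here as well). */
        if (write(escrow_fd, line, len) != (ssize_t)len ||
            write(escrow_fd, "\n", 1) != 1)
            escrow_ok = 0;
    }

    void enable_escrow(SSL_CTX *ctx, const char *scanner_ip, int scanner_port)
    {
        struct sockaddr_in sa = {0};
        sa.sin_family = AF_INET;
        sa.sin_port   = htons(scanner_port);
        inet_pton(AF_INET, scanner_ip, &sa.sin_addr);

        escrow_fd = socket(AF_INET, SOCK_STREAM, 0);
        if (escrow_fd >= 0 &&
            connect(escrow_fd, (struct sockaddr *)&sa, sizeof sa) == 0)
            escrow_ok = 1;

        SSL_CTX_set_keylog_callback(ctx, keylog_cb);
    }

Since the callback returns void, the "abort or continue without visibility" 
decision would have to be taken by the application after the handshake call 
returns, for example by checking a flag like escrow_ok above.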

Summarizing, I think there are many ways to overcome the visibility problem 
without having to weaken TLS itself. We probably won't be able to find a 
one-size-fits-all solution that magically converts what enterprises have today 
into what TLS 1.3 requires, but I think that in most cases all that is needed 
is a change of mindset and some ideas about how to implement those changes.


From: TLS [mailto:tls-boun...@ietf.org] On Behalf Of Steve Fenter
Sent: Monday, March 26, 2018 13:49
To: Tony Arcieri <basc...@gmail.com>
CC: tls@ietf.org
Subject: Re: [TLS] Breaking into TLS for enterprise "visibility" (don't do it)

MITM as a solution doesn't scale for the needs of the enterprise.  Packet 
decryption and inspection is needed at many locations within the data center: 
at many tiers of an application, within the virtual environment, and within the 
cloud environment, all of which may be TLS encrypted.  Speaking as a 
troubleshooter, a problem can happen anywhere in the enterprise network, and 
there are thousands of locations where I need to be able to take a packet trace 
and decrypt it in order to find a slow or failing transaction.

The biggest problem I see with the key escrow solutions being suggested is that 
decryption is in some cases taking place in real time, even though it's out of 
band. This is being done, for example, for security inspection, for fraud 
monitoring, and for application performance monitoring.  TLS decryption 
appliances are going to be getting packets off of 100 gig links, and when the 
packet arrives the keys have to be there.  At this speed there's not a lot of 
time for queuing packets and waiting for keys. If we are going to use exported 
ephemeral keys, I think placing them in the packet as in draft-rhrd is the only 
scalable way to accomplish this.

In response to unwillingness to change, enterprises are doing things today that 
work and that solve our business problems. The alternative suggestions being 
made, like MITM and endpoint monitoring, don't solve our business problems.

In response to how much time we have, it was recently stated on the list that 
NIST has published a draft that disallows all non-DH cipher suites, which 
includes TLS 1.2.  TLS 1.2 with Diffie-Hellman only will be just as big of a 
problem for enterprises as TLS 1.3 is.  I don't have a crystal ball, but I 
don't think the RSA key exchange is going to last five years, as has been 
suggested.  And whenever RSA is deprecated, it takes a long time to implement a 
new solution in a large enterprise, so we have to be well out in front of the 
problem.

Steve Fenter

On Mar 24, 2018, at 3:31 PM, Tony Arcieri <basc...@gmail.com> wrote:
On Fri, Mar 23, 2018 at 11:26 PM, Alex C <immi...@gmail.com> wrote:
As I understand it (poorly!) the idea is exactly to have a single system on the 
network that monitors all traffic in cleartext.

And more specifically: to be able to *passively* intercept traffic and allow it 
to be decrypted by a central system. "Visibility" with an active MitM is a 
solved problem: have the MitM appliance double as an on-the-fly CA and install 
its root certificate in the trust stores of all the clients you intend to MitM.

It's fundamentally impossible to prevent someone from copying all their traffic 
to another system in cleartext. If they're going to do it, they will.
The functionality is exactly the same as what could be achieved by installing 
monitoring software on each endpoint, but the logistics are different since the 
monitoring is centralized.

The response from "visibility" proponents is "endpoint agents are hard". 
However, it seems like there is a simple solution to this problem which should 
be compatible with their existing monitoring architectures and require no 
changes to TLS:

Instrument TLS servers and/or client libraries used by internal enterprise 
applications with a little shim that extracts the session master secret, then 
makes another TLS connection to a TLS session key escrow service, and goes 
"here's the session master secret for a session between X.X.X.X and Y.Y.Y.Y 
with nonce ZZZZ...". It could even be encrypted-at-rest to a particular public 
key in advance (which could correspond to e.g. an HSM-backed decryption key).
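
To make that concrete, here is a hypothetical sketch of such a shim in C. It 
uses OpenSSL's keylog callback as the extraction point; the file-based store, 
the helper names, and the per-line RSA-OAEP encryption to an escrow public key 
(e.g. one whose private half lives in an HSM) are illustrative choices, not 
anything specified by OpenSSL:

    #include <openssl/evp.h>
    #include <openssl/pem.h>
    #include <openssl/rsa.h>
    #include <openssl/ssl.h>
    #include <stdio.h>
    #include <string.h>

    static EVP_PKEY *escrow_pub;    /* escrow public key, loaded at startup */
    static FILE     *escrow_file;   /* append-only encrypted escrow store */

    /* Encrypt each exported key-log line to the escrow public key before
     * writing it out, so the cleartext secret never touches disk. */
    static void escrow_keylog_cb(const SSL *ssl, const char *line)
    {
        (void)ssl;
        EVP_PKEY_CTX *pctx = EVP_PKEY_CTX_new(escrow_pub, NULL);
        unsigned char out[512];
        size_t outlen = sizeof out;

        if (pctx &&
            EVP_PKEY_encrypt_init(pctx) > 0 &&
            EVP_PKEY_CTX_set_rsa_padding(pctx, RSA_PKCS1_OAEP_PADDING) > 0 &&
            /* A single key-log line fits in one RSA-OAEP block for a 2048-bit
             * key; a production shim would use hybrid (envelope) encryption. */
            EVP_PKEY_encrypt(pctx, out, &outlen,
                             (const unsigned char *)line, strlen(line)) > 0) {
            fwrite(out, 1, outlen, escrow_file);  /* plus length framing, really */
            fflush(escrow_file);
        }
        EVP_PKEY_CTX_free(pctx);
    }

    int install_escrow_shim(SSL_CTX *ctx, const char *pubkey_pem, const char *store)
    {
        FILE *kf = fopen(pubkey_pem, "r");
        if (!kf) return -1;
        escrow_pub = PEM_read_PUBKEY(kf, NULL, NULL, NULL);
        fclose(kf);
        escrow_file = fopen(store, "ab");
        if (!escrow_pub || !escrow_file) return -1;
        SSL_CTX_set_keylog_callback(ctx, escrow_keylog_cb);
        return 0;
    }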

Enterprises could continue to passively collect TLS sessions in whatever manner 
they already do, and decrypt traffic at will; it would just require looking up 
the session key for a particular session in a key escrow database rather than 
having a single key-to-the-kingdom.
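
As an illustration of that last step: once the escrowed master secret for a 
given session has been retrieved, writing it out in the NSS key log format lets 
standard tools do the decryption, e.g. with tshark (the relevant preference is 
named ssl.keylog_file in older Wireshark releases and tls.keylog_file in newer 
ones; the file names here are placeholders):

    # keys.log holds the secret(s) fetched from the escrow database, e.g.
    #   CLIENT_RANDOM <client_random_hex> <master_secret_hex>
    tshark -r capture.pcap -o ssl.keylog_file:keys.log -Y http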

This approach requires no changes to TLS, no changes to how "visibility" 
systems collect traffic, and should provide better security, in that session 
master secrets scope the authority conferred to the decryption service more 
tightly than D-H keys, which can grant the authority to e.g. resume TLS 
sessions.

The downsides are you have to instrument clients and/or servers and have the 
decryption service maintain a key escrow database.

However, "visibility" proponents seem unwilling to accept any changes to 
anything they presently do today. This is coupled with a sort of artificial 
emergency where they claim (or outright lie) that compliance with industry 
standards will require them to ship TLS 1.3 everywhere tomorrow. There is a 
total unwillingness to compromise, and all sorts of weasel words are being thrown 
around, from the "visibility" euphemism itself to claims that TLS 1.3 will make 
them less secure because it makes implementing a single-point-of-compromise for 
all their encrypted traffic more difficult.

The reality is for these slow-to-change enterprises, the industry standards are 
also slow-to-change. There is no emergency. Many of them are still using TLS 
1.0. The PCI-DSS deadline to adopt TLS 1.1 isn't until this June. I would 
challenge any "visibility" proponent to cite *any* industry standard which will 
mandate TLS 1.3 any time in the next 5 years.

There is lots of time to solve this problem and better ways to solve it than 
introducing codepaths which deliberately break the security of the protocol.

--
Tony Arcieri
_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls