The most succinct answer is this: the server and client
cryptographically verify each other only at the time of initial
negotiation.  When they later renegotiate within that session, neither
side ever verifies that it is still talking to the same party the
session was originally established between.

This allows a man-in-the-middle (MITM) to initiate a session with a
server, inject data, and then splice in a client that has unknowingly
connected to the MITM.  For the attack to work, additional attacks
(such as exploiting an untrustworthy IP network, IP rerouting, or DNS
spoofing) are necessary.  Unfortunately, network untrustworthiness is
more common than it used to be: many places offer free wifi, and
installing a proxy to perform this attack is only a little more
complex than child's play.
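
To make the splice concrete, here is a sketch (hypothetical byte
strings, not working attack code) of what the server's HTTP parser
would see once the two data streams are joined:

# The attacker's bytes travel in the pre-renegotiation session; the
# victim's request arrives in the renegotiated one; the server sees
# one unbroken, "authenticated" stream.
attacker_prefix = (b"GET /account/transfer?to=mallory HTTP/1.1\r\n"
                   b"Host: bank.example\r\n"
                   b"X-Swallow: ")   # hypothetical header, left unterminated

victim_request = (b"GET /index.html HTTP/1.1\r\n"
                  b"Host: bank.example\r\n"
                  b"Cookie: session=victim-secret\r\n"
                  b"\r\n")

# The victim's request line is absorbed into the attacker's open
# X-Swallow header; the victim's remaining headers (including the
# Cookie) then apply to the attacker's chosen request.
as_seen_by_server = attacker_prefix + victim_request
print(as_seen_by_server.decode("ascii"))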

The worst thing about this attack is that it gives neither the client
nor the server any means of detecting it.  The client will receive the
server's correct certificate, the same way it expects to.  The server
will receive either the client's correct certificate or no certificate
(as the client decides), the same way it expects to.  There is no way
to identify this attack at the TLS protocol level.

Applications can mitigate the effect of the attack in several ways,
though.  The most important example is webservers, which could (for
example) inspect any Content (defined as the portion of data separated
from the headers by the sequence "\r\n\r\n", starting immediately
after the last \n of the separator) sent from the client, and deny
anything that looks like the start of a new HTTP request arriving
after the headers have already been transmitted.  HTML defines no
mechanism by which a form POST begins its data with (^POST ) or
(^GET ), so any POST body that starts with those strings -- or any
other HTTP method string -- is not valid and should be answered with a
400 Bad Request.  Preferably that response carries a Location: header,
so that -- within the cover of the 'true' session between the 'true'
client and server -- the client is redirected to a page describing how
the error was detected, what it means, and what the client should do
(change to another network).  In the preferable implementation, the
redirect target would accept the POST data, send it to /dev/null, and
then print out that information for the client.
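
Something like this is what I have in mind -- a rough WSGI sketch in
Python (untested; the redirect path and the details are invented):

import io

# A POST body that begins with an HTTP method name cannot come from a
# legitimate HTML form submission, so treat it as injected.
HTTP_METHOD_PREFIXES = (b"GET ", b"POST ", b"HEAD ", b"PUT ",
                        b"DELETE ", b"OPTIONS ", b"TRACE ", b"CONNECT ")

def reject_injected_requests(app):
    def middleware(environ, start_response):
        if environ.get("REQUEST_METHOD") == "POST":
            body = environ["wsgi.input"].read()
            if body.startswith(HTTP_METHOD_PREFIXES):
                # Looks like the start of another HTTP request inside
                # the entity body: deny it, and point the client at a
                # (hypothetical) page explaining what happened.
                start_response("400 Bad Request",
                               [("Content-Type", "text/plain"),
                                ("Location", "/renegotiation-attack-info")])
                return [b"Request body begins with an HTTP method.\n"]
            environ["wsgi.input"] = io.BytesIO(body)  # hand the body back
        return app(environ, start_response)
    return middleware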

But I'm not an HTTP/HTML guru, and I have not evaluated the security
of this.  (Seriously, I didn't think of it until I started writing
this email.)  The reason for accepting the POST data and then voiding
it is to keep the semantics of the Location: redirect intact: when a
client POSTs to a location and receives a Location: header in
response, it is supposed to submit the data to that Location as well.



On Mon, Jan 11, 2010 at 5:59 AM, Steffen DETTMER
<steffen.dett...@ingenico.com> wrote:
> Hi all!
>
> I'm missing something about the renegotiation flaw and fail to
> understand why it is a flaw in TLS. I hope I'm missing just a small
> piece. Could anyone please enlighten me?
>
> * Kyle Hamilton wrote on Thu, Jan 07, 2010 at 16:22 -0800:
>> It is also, though, undeniably a flaw in the TLS specification
>> that's amplified by the clients of the libraries that implement
>> it -- because the clients don't understand the concept of
>> "security veil", the TLS implementations tend to provide a raw
>> stream of bytes (akin to a read()/write() pair) without the
>> application necessarily being aware of the change.
>
> Could it be considered that a mis-assumption about SSL/TLS
> capabilities caused this situation?

Nobody thought of this attack until late 2009, so everyone simply
assumed that the protocol was as secure as it had always been thought
to be (since 1995/1998/2001+).

> I think since TLS should be considered a layer, its payload
> should not make any assumptions about it (or vice versa). But the
> moment some application `looks at the TLS state' and tries to
> associate this information with some data in some buffer, I think
> it makes a mistake.

No, it doesn't.  The reason is inherent in authentication,
authorization, and accountability: data accepted from an
unauthenticated source MUST BE considered potentially hazardous as a
matter of course.  (It's rather telling that Microsoft changed the
meaning of unauthenticated connections to its RPC server in Windows NT
4.0 Service Pack 3.  Prior to NT4SP3, unauthenticated data was
automatically mapped into the only realm that existed that could hold
it: the Everyone group.  NT4SP3 created "Unauthenticated Users" and
provided a means for "Unauthenticated Users" to be excluded from
"Everyone" -- which essentially turned "Everyone" into "Authenticated
Users" without having to change the Everyone SID on all the objects in
the system.)

Any system that uses TLS is automatically attempting to impose some
form of security on the communication (be it 'security from the
sysadmin who runs the network, without any regard for whoever is at
the other end' or 'a bank imposing its policies on the connection so
that it doesn't have its arse handed to it by the cops').  This means
that it is considered to be "trusted", so to speak, for that purpose.
In order for a system to be trusted, it must reliably be able to
identify when the event happened, who asked for the event to happen,
what was asked to happen, and whether the request was rejected,
accepted-and-errored, or accepted-and-succeeded.  This means that even
if the TLS-speaking process is not necessarily part of the
host-computer's trusted computing base, it is still part of a
different trusted computing base.

This means that data which is accepted via an unauthenticated cover
cannot be later converted to an authenticated cover, *unless the state
where the data was accepted could provide adequate security for the
request, AND the state where the data was accepted is reliably
attested to by the now-authenticated entity sending the data*.

TLS (as of now, pre-Secure Renegotiation Indication RFC) satisfies the
first prong of that test, but not the second.
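
(One blunt way for an application to moot the second prong, where its
TLS stack allows it, is to refuse renegotiation entirely.  A minimal
sketch, assuming a Python ssl module new enough to expose the
OP_NO_RENEGOTIATION flag -- an assumption on my part, not something
every stack offers:)

import ssl

# Since the protocol cannot attest to the pre-renegotiation state,
# refuse mid-stream renegotiation outright.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")  # hypothetical file names
ctx.options |= ssl.OP_NO_RENEGOTIATION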

> When using HTTP over IPSec, I think no one ever had the idea to
> open or block URLs based on the currently used IPSec
> certificate...

I don't know the truth of what other people have thought, so I can
only hypothesize and theorize: it is possible to do, in several ways.
If the default route of the local-to-user IPsec peer goes to the
remote-to-user IPsec peer, the remote peer can place any set of
firewall policies in place based on the identity authenticated by the
IPsec certificate.  (It could also do it with shared-secret systems,
based on the secret used.  This becomes something like a 'group
password', though.)  Microsoft ISA Server has had the ability to
accept or deny requests based on the identity attempting them ever
since it came out, as has Squid, and all of the other commercial and
noncommercial proxy server software out there.

All that is required is 802.1X and passing that authentication to
other components.

> Am I wrong when I think that this level-mixing causes the
> trouble? If a user (by the browser's default configuration) first
> accepts some sillyCA or www.malicious.com, but then later no longer
> accepts it and expects the trust that was initially given to be
> revoked retrospectively, and finds this failing and unsafe
> (impossible), is this really a TLS weakness?

That is not a TLS weakness, and PKI theory is out-of-scope for this
discussion.  Caveat emptor -- if you mess with things you don't
understand, you're going to get bitten in ways you don't understand.

This is why Mozilla Firefox has a huge "YOU DO NOT WANT TO DO THIS"
page that shows up whenever a certificate issued by an unrecognized
authority is found.  The user is ultimately responsible for his or her
own choices.

> It seems it is, so what do I miss / where is my mistake in
> thinking?

TLS is authentication-agnostic.  There is no *requirement* that a
server authenticate itself; the only requirement is that the server
not ask the client for authentication if it hasn't authenticated
itself.  (This is in stark contrast to IPsec, where IKE requires the
client to offer its credentials before the server ever responds.)
There are two currently-defined cryptographic credential systems for
TLS, both based on certificates of some kind.  The first is the
standard X.509 certificate, which has been in place since Netscape put
Verisign's root key in Navigator back in 1995.  The second is the
OpenPGP key/certificate format.

> I also wondered a lot about the Extended Validation attack from
> last year; I had assumed that in `EV mode' a browser tab is
> completely isolated from all others, and also that no connectivity
> is possible other than with the locked EV parameters, but as it
> turned out this is not the case. Everything can change but the
> green indicator remains. Strange...

Well, partly that's because it's not possible to retrofit a different
type and brand of security onto a model which didn't have it as a
primary design goal.  As an example: Firefox didn't want to do EV, and
certainly didn't want to rewrite everything.  The NSS team implemented
EV, and Johnathan Nightingale managed to convince the security-UI team
to put in the green bar.  But the semantics of an EV certificate were
never well-defined, as they relate to non-EV certificates and to EV
certificates issued to other corporations.

(Hell, there's not even any defined algorithm for checking to see
whether two X.509 Subjects are the same -- the best I've been able to
come up with is "check to see that the Distinguished Name components
match, in the same order, and that the Authority matches.")
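
In code, that ad-hoc test amounts to something like the following
sketch (using the third-party Python `cryptography` package, which is
my assumption here, not anything the protocol prescribes):

from cryptography import x509  # third-party package; an assumption

def same_subject(a: x509.Certificate, b: x509.Certificate) -> bool:
    # The ad-hoc test above: the Distinguished Name components must
    # match, in the same order, and the Authority (issuer) must match.
    return (list(a.subject.rdns) == list(b.subject.rdns)
            and a.issuer == b.issuer)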

> Now I ask myself what happens if I connect via HTTPS, read the
> crypto information as displayed by my browser, and decide to
> accept it - but after a renegotiation different algorithms are
> used. As far as I understand, I would get absolutely no notice
> of that. I could find myself suddenly using a 40-bit export-grade
> or even a NULL cipher to a different peer (key) without
> any notice! If I understand correctly, even if I re-verify the
> contents of the browser's security information pane right before
> pressing a SUBMIT button, even then the data could be transferred
> with different parameters if a re-negotiation happens at the
> `right' time!

HTTP is stateless.  There is essentially a new connection built for
each request (the main caveats being persistent connections and 'HTTP
pipelining').  The Submit button could go to a completely different
server, with a completely different security setup.

This implies that a fresh negotiation *is* going to happen when you
hit Submit, unless ALL of the following four things are true: the
client keeps a session cache, the server keeps a session cache, the
Submit button goes to the same server (or to a server the client also
has a session in its cache for), and the Submit is triggered before
either the server or the client clears the session from its cache.
If ANY of those four things is not true, a full handshake occurs
rather than a resumption.
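
Those four conditions can be probed directly -- a rough sketch (a
reasonably modern Python ssl module assumed; the host is
hypothetical):

import socket
import ssl

ctx = ssl.create_default_context()

with ctx.wrap_socket(socket.create_connection(("example.com", 443)),
                     server_hostname="example.com") as first:
    cached = first.session           # the client-side cache entry

with ctx.wrap_socket(socket.create_connection(("example.com", 443)),
                     server_hostname="example.com",
                     session=cached) as second:
    # True only if the server also still had the session in *its*
    # cache; False whenever any of the four conditions above failed.
    print("resumed:", second.session_reused)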

This is why NULL ciphers are required to be off by default, and I
don't know of any browser vendor that still ships anything accepting
40- or 56-bit keys by default, either.  In Firefox, you can disable
the negotiation of lower-strength ciphers through about:config.
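
At the OpenSSL level, the same policy is a cipher-string tweak; a
sketch (via Python's ssl wrapper, purely illustrative):

import ssl

# Exclude NULL, export-grade, and low-strength ciphers at the context
# level -- the moral equivalent of the browser defaults above.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("DEFAULT:!aNULL:!eNULL:!EXPORT:!LOW")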

> If this is true, it means the information Firefox shows when
> clicking the lock icon does not tell anything about the data I
> will send; at most it can tell about the past, how the page was
> loaded, and even that not reliably, because maybe it changed for
> the last part of the page.

This is correct, as far as it goes.  There is an implicit contract
between the browser and the server: the server will not send HTML
scripting data (and if you don't think "show this form on the page,
and then submit what the user responds" is scripting, then realize
that javascript can change the behavior of *anything* in the page that
it has access to within the Document Object Model) which will cause
the browser to behave maliciously, and the browser will not interpret
the data in a malicious manner.  There's an implicit contract between
the user and the browser (and the computer the browser is running on):
the browser will not behave maliciously, and the user can rely on its
presentation.

Any one of those contracts can be violated without any side other than
the violating one knowing -- for example, Firefox extensions can
silently change the content of pages, and if the user installs a
malicious one...

> Where is my mistake in thinking?
>
> oki,
>
> Steffen

Please also see Dave Schwartz's excellent response.

-Kyle H