On Tue, Jan 19, 2010 at 6:19 AM, Steffen DETTMER
<steffen.dett...@ingenico.com> wrote:
> * Kyle Hamilton wrote on Thu, Jan 14, 2010 at 15:50 -0800:
>> On Wed, Jan 13, 2010 at 5:58 AM, Steffen DETTMER wrote:
>> >> There is currently no way for even an ideal TLS implementation
>> >> to detect this issue.
>> > [...]
>> >> Yes.  Please see SSL_CTX_set_info_callback(3ssl).
>> >
>> > hum, now I'm confused, I think your last two answers contradict
>> > each other...
>> > If an application can use e.g. SSL_CTX_set_info_callback to
>> > reliably avoid this, I have to read more on what the IETF is
>> > working on.  If there are webservers `trusting' peers without
>> > certificates (allowing pre-injection), what should stop people
>> > from ignoring whatever extension as well...
>>
>> What SSL_CTX_set_info_callback() does is tell you *when* a
>> renegotiation occurs.  It doesn't tell you what happened before.
>
> (assuming that a peer's identity should not change within a
> session - but as discussed later in this mail this could be
> wrong?):
> In the implementation of this callback, shouldn't the HTTP
> server on the first call store the peer identity (maybe the DN
> value of the certificate) and abort when in a second or
> subsequent call suddenly another value of DN is found within
> the same HTTP session?  Does the standard require to allow / to
> support changing a DN during one TLS session?
> (of course, most HTTPS services don't use client certificates,
> so it won't help in practice)
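Nothing stops an application from doing exactly that.  A rough,
untested sketch (pinning the DN via SSL app data is just one way to
do it, and the abort hook is a hypothetical application function --
an info callback can't usefully "fail" on its own):

    #include <openssl/ssl.h>
    #include <openssl/x509.h>
    #include <string.h>

    /* hypothetical hook: mark the connection so the I/O loop drops it */
    extern void abort_this_connection(const SSL *ssl);

    static void dn_pin_cb(const SSL *ssl, int where, int ret)
    {
        X509 *peer;
        char dn[256], *pinned;

        (void)ret;  /* unused */
        if (!(where & SSL_CB_HANDSHAKE_DONE))
            return;

        peer = SSL_get_peer_certificate(ssl);
        if (peer == NULL)
            return;                 /* anonymous peer: nothing to pin */
        X509_NAME_oneline(X509_get_subject_name(peer), dn, sizeof(dn));
        X509_free(peer);

        pinned = SSL_get_app_data((SSL *)ssl);
        if (pinned == NULL)         /* first handshake: remember the DN */
            SSL_set_app_data((SSL *)ssl, strdup(dn));
        else if (strcmp(pinned, dn) != 0)
            abort_this_connection(ssl);   /* DN changed mid-session */
    }

    /* registered once per context with
     *   SSL_CTX_set_info_callback(ctx, dn_pin_cb);
     * the strdup()'d DN must be freed when the SSL is torn down. */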
If there's a certificate DN or Issuer change (even from null to
something non-null), I agree that in many/most circumstances it's
reasonable to abort the connection.  It doesn't necessarily follow,
though, that because it's the reasonable thing in most circumstances,
it's the correct thing to do in all circumstances.

(Imagine an ATM with a TLS-encrypted serial line or other link back
to the central processing house.  It has its own certificate -- the
processing house doesn't want to accept connections that it can't
authenticate -- so it builds that TLS channel with mutual
authentication... but the ATM doesn't have any accounts, so its key
and certificate can't be used alone to compromise any accounts.
Then, imagine USB "ATM cards" that could be plugged in, with an
attendant "please enter your PIN" message on the screen of the ATM
that would unlock your private key, so that the ATM could then
initiate a renegotiation as you... and the server initiates
renegotiations every 6 seconds to ensure that you're still there.
Then, when you remove your USB stick, the machine reauthenticates as
itself.  This is not necessarily the best example of an architecture
for this, but the principle can work anywhere you need a trusted host
to perform a transaction.)

>> As I think I mentioned, nobody ever actually mapped out the precise
>> semantics of how the green bar is supposed to work.  That is EV's
>> biggest Achilles' heel... nobody knows what it means, the same way
>> nobody knew what the lock meant.
>
> I think most people take security in a very pragmatic way.  It
> should not cost additional effort, but the investable effort
> effectively limits the reachable security.
>
> OT:
> I personally would wish to be able to put a browser tab or
> better even a browser instance into some `secured' mode (for
> online banking HTTPS but not for myspace HTTPS).  In this mode,
> flash would not work, no plug-ins installed and I would be

Note: My bank's multi-factor authentication mechanism uses Flash, so
it would be necessary for each site to provide a manifest of what it
makes use of and what the maximum privileges each should have on the
user's system.

However, this is a very good idea.  I don't know of any rendering
engine or browser that would or could operate in this manner (other
than possibly Chrome), but at that point we're still dealing with
operating systems that can be compromised by viruses.

> warned when the DN of a certificate of a web page changes (now,
> I'm warned only if CN does not match DNS name, but I would like
> to be informed: "www.xyz.com now is DN=XYZ Ltd. UK but last
> time it was DN=XYZ Inc. US" or so).  Probably there are many

This already exists and is called "Petnames"; you might wish to look
for it on addons.mozilla.org.

> more nice security features that could be turned on.  This would
> not prevent the twitter attack but maybe could make online
> banking attacks more expensive.

In my case, my bank's multifactor authentication includes sending a
numeric code to my phone that I then enter into their Flash applet.
Online banking attacks are regrettably cheap and relatively easy at
this point... MITB is the watchword nowadays.

> With firefox, this is possible using different profiles with
> MOZ_NO_REMOTE, but this breaks other things (today, systems seem
> to rely on a single running browser instance).

Yeah.

> or something like `ssh -X safeu...@localhost firefox' :)
> Ohh, and this would catch passwords `system modal' just like
> ssh-add can do.
> It is too bad when half of the password reaches
> some online chat tool just because the session manager opens
> tabs that were open at the point of a previous crash, giving
> them focus for a short time...  I really really dislike firefox
> asking for the master password while continuing in the background
> with optional focus change...
> sorry, all of this is off-topic.

This is a bit off-topic, but in the grander scheme of user security
these are all valid questions.  Since that's a completely separate
topic (and is being taken up by organizations such as erights.org,
with its E programming language based on Java), all this list tends
to focus on is X.509, DER encoding and decoding, and SSL/TLS.  What
you do with it at either end is up to you -- that's why OpenSSL is
first and foremost a library.

> I think for many a `SSL secured' label on a webpage means that
> the running application (let's say an online banking web app) is
> secured.

That's what the intrusion-detection/resistance-certification services
are for.  All the lock icon means is that nobody can listen in
between you and the other end.

This doesn't mean you're wrong, though.  I think for most people it
does make them think that.

>> > I think this is a server (configuration) bug but not a TLS bug.
>> > How can someone assume it would be safe for preinjection when
>> > accepting anonymous connections?
>>
>> ...because they didn't realize that the prior session isn't
>> cryptographically bound to the new session; it's a complete wiping
>> of the slate.  It is certainly an application-design issue (defense
>> in depth is not just a buzzword), but it's also a TLS protocol
>> issue, as one of the guarantees that the protocol attempted to
>> provide was violated.
>
> Doesn't TLS require the use of client certificates to be able to
> guarantee that the client remains the same?
> If TLS is not using client certificates, doesn't this mean that
> anyone is accepted?

TLS requiring client certificates isn't necessarily to guarantee that
the client remains the same, rather only to guarantee that the
identity of whoever is submitting the requests is assured.

One of the reasons that the Twitter attack worked is because they
treated the first anonymous session as being under the same security
cloak as the second anonymous session.  Since the two weren't
cryptographically bound, that assumption was incorrect.  (This is the
same situation, by the way, as the anonymous-then-authenticated
variant of the attack, except that the second anonymous session is an
unauthenticated variant of the authenticated party.  You cannot trust
that two things coming from anonymous sources are coming from the
same anonymous source, can you?)

>> If there were a standard for a USB cryptoken, someone could write a
>> PKCS#11 wrapper around it for every platform that supports USB.
>
> cool, the certificate to carry with :)

Yeah.  The problem is that this interface would have to be designed
from the ground up to avoid the pitfalls of PKCS#11 (such as the
issue that once something with slots is accessed, that slot can never
be deinitialized, even if the backing hardware is subsequently
removed -- at least, the NSS team couldn't figure out a way to do
it).

> Maybe defining some serial command interface that also works
> via USB would be simple.

It shouldn't be too difficult.  My problem is that I don't know how
to design hardware. :)

> People would need to buy and manage USB crypto devices and
> keys.
> My bank offered me one, but the device had neither a
> secured keyboard nor a display, and thus I felt unable to
> trust it - a trojan on the PC could fool me.

I tend to think that a device should have a display and information
about what's going on ("A process is attempting to authenticate you
to the following site, do you wish to allow this or not?") through a
trusted path, not a path that could have keyloggers, trojans, BHOs,
etc.

> An issue here is that someone could argue that this setup is
> certified as a legal digital signature, which for me is reason
> enough not to accept it (I wouldn't like to get married just
> because a hacker trojaned my PIN or so :)).

...which is why I'm against the idea of dropping them in willy-nilly
and saying "it's the same as a paper signature in all respects".
Aside from the fact that it makes the ceremony of signing a document
different, there's been no way to introduce it to grandma (who's used
to signing her letters based on who she's sending them to -- with my
grandmother, sometimes it's "Ophy", sometimes it's "Ophelia") without
running the risk of accidentally binding her to something she didn't
want to be bound to.

> So I think it would be a bit harder.  Maybe having a cell phone
> or PDA with such a USB interface but not assumed high-security
> (e.g. in online banking, I - the user - can decide to require a
> PIN/TAN number additionally and to limit the maximum transaction
> amount etc).

I'm infuriated by one of my banks' unwillingness to allow
account/password data to be cached locally -- it's my system, my
fault if it's compromised, my liability if it's compromised.

> It should be cheap, so that I could afford to buy 10 or so, and
> each has to support multiple certificates which are easy to
> change, and I have to be allowed to exchange them with friends
> (all to protect my privacy).

I'm inclined to agree about the "protect privacy" part -- there's no
reason why those certificates couldn't be issued by a web forum (for
example), which typically only needs a username and password to
connect.

I'm thinking, though, that each user should be able to have one or
more "master" end-entity certificates, under which they sign proxy
certificates to delegate authority to act as whichever identity to
whatever device they're working with, and then use the device they're
working with to sign the final transaction.  This would reduce the
risk of the user's master private key being stolen, and would limit
the effectiveness of any device authentication to the time that the
proxy certificate is good for.

> Yes... maybe this is a flaw of HTTPS (if for authentication
> client certificates are needed but not supported, because only
> BasicAuth is supported).

It's not a flaw of TLS, and I would argue that it's not a flaw of
HTTP over TLS.  The flaw is in the interface that the users use to
control their browsers.  In many circumstances, certificates are
requested and then rejected because TLS was enabled to require client
authentication but then no authorization step occurred after the
authentication happened.

>> > Yes, but when it comes to webservers, anonymous clients are
>> > trusted...
>>
>> Yes.  The difference is that in IPsec, the client must announce its
>> identity first before the server gives it a second glance, while in
>> TLS the server must announce its identity before it can even ask
>> the client for its identity.  (This is an instance of
>> "policy-set-in-standard", and I am opposed to it.)
> (I don't understand "policy-set-in-standard")
> Is it a problem that the server must tell its identity before the
> client?

Someone walks up to my door and knocks.  What's more likely to
happen?

I open the door and say, without even looking at who's there, "Hi,
I'm Kyle Hamilton, I live at <this address>, and I am going to prove
it to you by handing you my driver's license."

or

I crack the door open and say, "Who's there?"

The way I react is dependent upon my policy -- and I have a policy of
never doing the first.  Why should my webserver have to do what I
won't?  This is what I mean by "policy-set-in-standard".

The way Netscape perceived SSL was as a mechanism to let end users
trust that they were dealing with a legitimate business that took
steps to prevent their credit card information from being stolen.
This is why the interface for generating client certificates is so
awful -- it was either a forethought that got dropped, or an
afterthought.  Businesses tend to be up-front about their identities
and credentials to do business, but individuals don't need to be and
shouldn't have to be.

>> > but TLS cannot be made responsible that it's difficult to obtain
>> > certificates (using the existing applications)...

It's not difficult to obtain certificates.  In fact, if you want to,
you can go to http://www.startssl.com/ and get user and webserver
certificates that are trusted by browsers from Mozilla, Apple, and
Microsoft.  For free.

What's difficult is obtaining certificates that don't have your legal
name in them.  There aren't any public certification authorities who
are willing to do that.  This is what I'm aiming to change.

> I found a description of the Socialist Millionaire's Protocol on
> the page, but it seems complicated.
> As I understand it, it verifies that both sides use the same value
> of a data element that has to be communicated off the channel (e.g.
> by cell phone SMS?), correct?
> (BTW, wouldn't it be sufficient to sha1(random || secret) and to
> send random and secret to each other's side?)

No, because you also have to authenticate the channel, which requires
utilizing data from the channel's security properties.
sha1(random . secret) would work to authenticate parties to each
other, but it wouldn't work to prove that nobody was listening in the
middle.  (This is the same problem that WPA and WPA2 suffer from,
because wpa_supplicant is supposed to use some derivation from the
master_secret, which normally cannot be exposed.)

> If something like this is applied on top of TLS but TLS itself
> allows the peer to change at any time (e.g. because renegotiation
> is possible without knowing anything secret from the previous
> negotiation), would this help?

It doesn't matter if it helps or hurts.  All that matters is that the
protocol allow for as many different usage scenarios as possible
without demanding that any particular policy be enforced on
participants in an instance of that protocol, and that
implementations of the protocol expose enough hooks (and document
what the hooks are, why they're there, and how to use them) for
application designers to be able to know what set of tools -- limited
by the policy that is required -- they have available to them to
build truly secure systems.

> (ok, now I got it, you wrote "is expected to be secure against
> this kind of attack, even in the case of single-side
> authentication.")

Yep.  It's undeniably a flaw in TLS, but it's also a flaw in HTTP and
how it uses it.
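(Going back to your sha1(random || secret) idea: here's a rough,
untested sketch of what "utilizing data from the channel's security
properties" could look like with OpenSSL, mixing the session's
Finished message into the hash, "tls-unique" style.  The function
name is mine; both ends have to agree on whose Finished to use --
SSL_get_peer_finished() fetches the remote one.)

    #include <openssl/ssl.h>
    #include <openssl/sha.h>
    #include <openssl/evp.h>        /* EVP_MAX_MD_SIZE */

    /* Both sides compute this and compare the result off-channel
     * (SMS, phone call).  Because the Finished message is mixed in,
     * a man in the middle terminating two separate TLS sessions
     * produces a different value on each side, which
     * sha1(random . secret) alone would not catch. */
    void channel_bound_digest(SSL *ssl,
                              const unsigned char *secret,
                              size_t secret_len,
                              unsigned char out[SHA_DIGEST_LENGTH])
    {
        unsigned char finished[EVP_MAX_MD_SIZE];
        size_t flen;
        SHA_CTX c;

        flen = SSL_get_finished(ssl, finished, sizeof(finished));
        SHA1_Init(&c);
        SHA1_Update(&c, finished, flen);      /* binds to this channel */
        SHA1_Update(&c, secret, secret_len);  /* the off-channel secret */
        SHA1_Final(out, &c);
    }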
>> Client certificates *can* be changed in the middle of a session,
>> and if both sides authenticate each other then there's no way for
>> the prefix-injection attack to succeed.
>
> but if the DN (or whatever) would not be checked by the server,
> then one twitter user (with a valid certificate) could fool
> another twitter user into publishing his HTTP request to the first
> user's page, right?
> (since it would not contain BasicAuth, it would probably not be an
> interesting attack, but maybe there are other applications)

If two users had certificates, and the cloak that the request came in
under was that of the second user, then twitter shouldn't do anything
with the first user's data and should drop it, expecting data from
and relying solely on the new cloak.

>> I'll note that your definition of "reasonable applications" is a
>> particularly sneaky and snarky way of attempting to enforce your
>> ideas on policy on others, who may or may not need the session
>> identity to stay the same during the lifetime of the session.
>
> ohh yes, I think you are right...
> Yes, I was assuming everyone would consider it natural that
> within one and the same session the peer identity remains
> the same.  But this isn't true?

I know of at least two instances where it was not appropriate to
assume that, and where building systems that enforced
one-identity-per-connection would increase the complexity and
potential for exploitable bugs.

Double-check any assumptions.  Especially if "everyone" or
"everybody" is involved.  Any assumption that is applied in general
to an entire class of unknown individuals is false, and typically
should not be acted upon.  (I would say "...and MUST NOT be acted
upon", but I refuse to assume that just because I can't see a reason
for it, a reason does not exist -- which is the attitude I wish more
protocol standards bodies took.)

>> I can think of at least 2 businesses which use processes that,
>> when translated directly into the PKI concept, would require an
>> initial certificate (the POS terminal operator's) and a
>> different certificate (the POS terminal operator's manager, who
>> might need to open the drawer or authorize a particular
>> transaction).
>
> (TLS might not be suited best for POS terminals, I think.  As far
> as I know, PKI is great when n:m trust is needed [but still,
> authorization privileges have to be managed].  For POS terminals
> I think often there is one operator per terminal.  I think it
> usually is not desired to be able to communicate securely with
> someone else, someone not known in advance [like a foreign
> terminal])

POS terminals have a few issues, and TLS is appropriate for them in
conjunction with a few other things.  They can submit a request for
approval of an exception (like a refund) to the back office, where a
manager can look at the details and authorize the request... but they
also need to be able to transmit when a manager took a key, switched
to manager mode, entered a userID and passcode to authorize the
pending transaction, and then switched the key back to 'normal
operation'.

I don't see any reason why the physical key should be needed for
those exceptions anymore -- instead, use a USB key that can be put
into another port and signed into, that has the authorization to
approve those exceptions.
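(Tying that back to the proxy-certificate delegation I described
earlier: OpenSSL can already verify RFC 3820 proxy certificates --
say, a short-lived proxy certificate on the manager's USB key, signed
by the manager's master certificate -- it just refuses them unless
explicitly asked.  Untested, but it should be about one flag:)

    #include <openssl/ssl.h>
    #include <openssl/x509_vfy.h>

    /* Proxy certificates are rejected by default; this flag makes
     * the verifier accept them in the peer's chain. */
    void allow_proxy_certs(SSL_CTX *ctx)
    {
        X509_STORE_set_flags(SSL_CTX_get_cert_store(ctx),
                             X509_V_FLAG_ALLOW_PROXY_CERTS);
    }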
(X.509 PKI can be used for many things, including identity and
attributes tied to that identity, such as authorizing exceptions, or
digitally signing purchase orders prepared by underlings over the
amount that they can sign for individually but less than the maximum
that you can sign for.  TLS is the most obvious and apparent
implementation which uses X.509 PKI, but it can be used for many
other things.)

>> My bank actually sends me an SMS each time I want to do things
>> that change the balances of my accounts, with a code that I have
>> to enter to authorize them.
>
> that sounds great (at least as long as the phone and the network
> are reliable and working when needed).

If I have to, I'll change it to my google voice number.  *that* is
reliable. ;)

>> Uh, that's kinda what TLS does.  It signs the channel initiation,
>> and then after negotiating all the cipher specs it sends
>> ChangeCipherSpec.  The next message after that *must* be Finished,
>> it must be under the new cipher spec, and it must contain the hash
>> of all the prior handshake packets that it sent.
>>
>> I'm not sure what you mean by 'signed random number' -- a shared
>> secret?  That's not scalable.
>
> I meant something like an SSH host key.  If not known, it can be
> accepted.  But if known from any previous session, it must match
> the currently used key (i.e. the key cannot change).
> Yes, this does not scale, because web servers could never change
> their key.  Banks would have to send certificate fingerprints
> beforehand or so.

Okay, so you're describing a kind of "key continuity management"
instead of a PKI.  But an SSH host key is a public key, either RSA or
DH.  If the SSH software were extended, it could accept
certifications for those keys from a trusted source.  There are a lot
of pieces to this puzzle that haven't been put together yet... much
less even built.

> For TLS, maybe some way to generate a new client certificate for
> a new page automatically, done by the browser; send it to the site,
> and that site stores the fingerprint (not using CA/PKI) as part
> of some account registration process.  Just in case, this would be
> better than using no certificate at all.
> However, this also does not scale (e.g. what happens if the user
> uses a different browser to connect to the bank?).

A transparent process for generating a keypair and installing a
certificate?  I'd love one.

But as to what happens if a user uses a different browser to connect
to the bank... well, if the user trusts the machine, and the bank
allows the user to make this trust decision, then the bank can ask
for multiple pieces of information which have already been shared
with it, to verify the identity of the user within acceptable risk
limits.

(That's another thing that people forget: security is about managing
and mitigating risk.  Banks do this all the time, as do insurance
companies, and it's important to recognize that even they're not
pushing for that to be the only means of doing business online.)

>> This is the *absolute* fix for the renegotiation attack, for *all*
>> versions of SSL and TLS (presuming that the initial negotiation, or
>> the initial renegotiation after negotiation of a DHE cipher, is
>> authenticated by the client without any application data being
>> passed in the meantime).
>
> ahh ok.  So there is an absolute fix that would instantly work,
> but at the moment (with given applications etc) it is not
> feasible in HTTPS web practice.

Yes.  Browser keygen systems are jokes.
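(For what it's worth, the fingerprint-pinning half of your
key-continuity idea is easy to sketch.  Untested; the two storage
helpers are hypothetical placeholders for wherever the pins live:)

    #include <openssl/ssl.h>
    #include <openssl/x509.h>
    #include <openssl/evp.h>
    #include <string.h>

    /* hypothetical storage helpers -- a flat file, a database, etc. */
    extern int lookup_pin(const char *host,
                          unsigned char *md, unsigned int *len);
    extern int store_pin(const char *host,
                         const unsigned char *md, unsigned int len);

    /* Pin the peer certificate's SHA-1 fingerprint on first contact,
     * and require a match on every later connection. */
    int check_continuity(SSL *ssl, const char *host)
    {
        unsigned char md[EVP_MAX_MD_SIZE], pinned[EVP_MAX_MD_SIZE];
        unsigned int mdlen, pinlen;
        X509 *peer = SSL_get_peer_certificate(ssl);

        if (peer == NULL)
            return 0;                          /* no certificate: fail */
        X509_digest(peer, EVP_sha1(), md, &mdlen);
        X509_free(peer);

        if (!lookup_pin(host, pinned, &pinlen))
            return store_pin(host, md, mdlen); /* first contact: pin */
        return pinlen == mdlen && memcmp(md, pinned, mdlen) == 0;
    }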
>> The downside is that there's no protocol to allow a bank (for
>> example) to specify that it demands hardware security for the key
>> generation and use process.
>
> (I think a chip, a secured display and a secured keyboard are
> needed, otherwise a PC application could spoof/misuse it, right?)
> Couldn't this be easily ensured by generating certificates only
> for approved security modules?  For example, by selling only
> completely loaded devices with individual keys + certs?

At the very least, a chip and a secure display.  The secure input pad
may be optional, depending on how it's implemented.

But you just came up (off the top of your head) with Ian's proposed
solution.  The manufacturer creates a CA and, during manufacturing,
generates a keypair, signs the public key with the CA, loads the
resulting certificate onto the card, and (presumably after some
testing) packages it for the end user.

The user then takes the device, plugs it in, and logs into the bank
website; the bank sends an "I wish to authenticate this user with
strong cryptography, as long as it meets these minimum
specifications" list to the browser; the browser sends it to the
chip; the chip parses it and implements the generation to the bank's
specifications, then essentially generates a CSR that includes the
policy used, signs it with the newly-generated private key, wraps the
entire thing up in a CMS container (though in reality, it could also
have been implemented as S/MIME), signs it with its own key, includes
its certificate chain in the message, and hands the entire blob to
the browser.

The browser then sends that blob to the bank; the bank runs processes
to verify that the message is actually signed by a device from a
manufacturer it trusts, that the policy used wasn't tampered with,
and that the internal CSR can be verified with the public key
therein, and then utilizes its knowledge of the currently-logged-in
user to generate a final certificate to install on that device.

>> Now, for the mindbender: under what circumstances might it be
>> appropriate to use NULL-NULL-SHA256?
>
> mmm...
>
> If the transmission link isn't reliable and might have bit
> errors?  (since even TCP sometimes does not detect bit errors and
> files fetched via FTP can be damaged)... no idea... when might it
> be appropriate to use it?

TLS specifies that it must be run over a reliable, sequenced
connection, whether that be TCP or SPX or even a pair of modems
running MNP5.  TLS also specifies that if the MAC is incorrect,
there's no attempt at recovery; it simply sends a fatal alert and
closes the connection on the first occurrence.

The French government used to, and a lot of governments still do,
limit the strength of the encryption that their citizens may use.
So, many of them didn't use it.  However, they still wanted to ensure
that they were talking to who they thought they were supposed to be
talking with, so they applied an HMAC.  (Message authentication codes
weren't ever outlawed, but the governments wanted to be able to read
the clear text of the transmissions.)

However, the reason you cited is, I think, the basis for sftp.  (I'm
probably incorrect on that.)

-Kyle H
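P.S. For the mindbender above: a minimal, untested sketch of an
integrity-only context.  OpenSSL deliberately leaves the eNULL suites
out of its DEFAULT cipher list, so they have to be requested by name.
(NULL-SHA256 needs a library with TLS 1.2 support; NULL-SHA is the
nearest thing available in today's releases.)

    #include <openssl/ssl.h>

    SSL_CTX *make_integrity_only_ctx(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(SSLv23_method());

        if (ctx == NULL)
            return NULL;
        /* authentication and integrity, but no confidentiality */
        if (!SSL_CTX_set_cipher_list(ctx, "NULL-SHA256:NULL-SHA")) {
            SSL_CTX_free(ctx);
            return NULL;
        }
        return ctx;
    }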