* Kyle Hamilton wrote on Tue, Jan 19, 2010 at 16:00 -0800:
> On Tue, Jan 19, 2010 at 6:19 AM, Steffen wrote:
> > * Kyle Hamilton wrote on Thu, Jan 14, 2010 at 15:50 -0800:
> > (assuming that a peer's identity should not change within a
> > session - but as discussed later in this mail this could be
> > wrong?):
>
> If there's a certificate DN or Issuer change (even from null to
> something non-null), I agree that in many/most circumstances it's
> reasonable to abort the connection. It doesn't necessarily follow,
> though, that because it's the reasonable thing in most circumstances
> that it's the correct thing to do in all circumstances.
mmm... I have a hard time understanding this... the tunnel is there to
ensure you talk with an authentic identity. Like in real life, someone
may tell some things only to his girl but not to the postman. I do not
expect that the identity I'm connected to changes during the
conversation (i.e. I would consider designs where girls pass the cell
phone to the postman as bad :-)).

> (Imagine an ATM with a TLS-encrypted serial line or other link back to
> the central processing house. It has its own certificate -- the
> processing house doesn't want to accept connections that it can't
> authenticate -- so it builds that TLS channel with mutual
> authentication... but the ATM doesn't have any accounts, so its key
> and certificate can't be used alone to compromise any accounts. Then,
> imagine USB "ATM cards" that could be plugged in, with an attendant
> "please enter your PIN" message on the screen of the ATM that would
> unlock your private key, so that the ATM could then initiate a
> renegotiation as you... and the server initiates renegotiations every
> 6 seconds to ensure that you're still there. Then, when you remove
> your USB stick, the machine reauthenticates as itself.

mmm... I would guess that in such a situation the remote side would
want to know that: maybe only the USB tokens are allowed/authorized to
communicate here, or whatever. But I see that there might be cases
where it could be useful, ok. So the perfect browser might need a
configuration option to accept `identity changes' after accepting a
certificate, maybe even URL-based, to avoid that banks change identity
while performing transactions.

> > OT:
> > I personally would wish to be able to put a browser tab or
> > better even a browser instance into some `secured' mode (for
> > online banking HTTPS but not for myspace HTTPS). In this mode,
> > flash would not work, no plug-ins installed and I would be

> Note: My bank's multi-factor authentication mechanism uses Flash, so
> it would be necessary for each site to provide a manifest of what it
> makes use of and what the maximum privileges each should have on the
> user's system. However, this is a very good idea. I don't know of
> any rendering engine or browser that would or could operate in this
> manner (other than possibly Chrome), but at that point we're still
> dealing with operating systems that can be compromised by viruses.

(Flash was intended as a placeholder for any plug-in or colorful
advertising multimedia extension.) Why not keep the pages simple and
stupid? On the highly secured pages used with a browser in secured
mode, the banking frontend could use simple HTML only - no Flash, no
embedded video. Wouldn't life be so much easier? Maybe someone could
run a simple, stable browser (not needing
almost-daily-updates-with-only-2-year-support), maybe on a mobile
phone which does not support any software upgrading except JTAG
flashing. For the truly paranoid, the advantage here isn't that it
works everywhere but that it securely works in the private basement
(to make eavesdropping harder) :-)

> > warned when the DN of a certificate of a web page changes (now,
> > I'm warned only if CN does not match DNS name, but I would like
> > to be informed: "www.xyz.com now is DN=XYZ Ltd. UK but last
> > time it was DN=XYZ Inc. US" or so). Probably there are many

> This already exists and is called "Petnames", you might wish to look
> for it on addons.mozilla.org.

thanks, in case I'd update to Firefox 3 I might give it a try
(although it might not be exactly what I meant).

> > more nice security features that could be turned on. This would
> > not prevent the twitter attack but maybe could make online
> > banking attacks more expensive.
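(Such a `DN changed since last visit' warning is basically identity
continuity checking. A toy sketch of it - the store layout and all
function names here are my own invention, not any existing browser
API:)

```python
import hashlib

def cert_summary(subject: str, issuer: str, der_bytes: bytes) -> dict:
    """The identity-relevant parts of a presented certificate."""
    return {
        "subject": subject,
        "issuer": issuer,
        "fingerprint": hashlib.sha256(der_bytes).hexdigest(),
    }

def check_continuity(store: dict, host: str, summary: dict) -> str:
    """Compare the presented certificate against what was seen last time.

    Returns "first-seen", "unchanged" or "changed". A browser in
    `secured' mode would warn the user on "changed", e.g.:
    "www.xyz.com now is DN=XYZ Ltd. UK but last time it was
    DN=XYZ Inc. US".
    """
    previous = store.get(host)
    if previous is None:
        store[host] = summary
        return "first-seen"
    if (previous["subject"] == summary["subject"]
            and previous["issuer"] == summary["issuer"]):
        return "unchanged"
    store[host] = summary
    return "changed"

# Example: the bank's certificate is re-issued under a different DN.
store = {}
first = cert_summary("CN=www.xyz.com,O=XYZ Inc,C=US", "CN=Some CA", b"der-1")
second = cert_summary("CN=www.xyz.com,O=XYZ Ltd,C=UK", "CN=Some CA", b"der-2")
assert check_continuity(store, "www.xyz.com", first) == "first-seen"
assert check_continuity(store, "www.xyz.com", second) == "changed"
```

(A real browser would of course persist the store across sessions and
show both the old and the new DN to the user.)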
> In my case, my bank's multifactor authentication includes sending a
> numeric code to my phone that I then enter into their Flash applet.
>
> Online banking attacks are regrettably cheap and relatively easy at
> this point... MITB is the watchword nowadays.

yes, MITbrowser, MITplug-in, MITtodaysUpdate...

> > I think for many a `SSL secured' label on a webpage means that
> > the running application (let's say an online banking web app) is
> > secured.

> That's what the intrusion-detection/resistance-certification services
> are for. All the lock icon means is that nobody can listen in between
> you and the other end.

(I guess many assume security even if this `SSL secured' label/image
is displayed on a plain-HTTP-served page by the page itself, not by
the browser - sometimes it is difficult for users to see the borders
between the parts of the `logical' application `online banking', I
guess.)

> unauthenticated variant of the authenticated party. You cannot trust
> that two things coming from anonymous sources are coming from the same
> anonymous source, can you?)

Yes, exactly, right...

> >> If there were a standard for a USB cryptoken, someone could write a
> >> PKCS#11 wrapper around it for every platform that supports USB.
> >
> > cool, the certificate to carry with :)

> Yeah. The problem is that this interface would have to be designed
> from the ground up to avoid the pitfalls of PKCS#11 (such as the issue
> that once something with slots is accessed, that slot can never be
> deinitialized, even if the backing hardware is subsequently removed --
> at least, the NSS team couldn't figure out a way to do it).

(Something with slots, and slots cannot be deinitialized? Sorry, I do
not understand this :()

> > Maybe defining some serial command interface that also works
> > via USB would be simple.

> It shouldn't be too difficult. My problem is that I don't know how to
> design hardware. :)

Write a spec and send it to a Chinese factory?
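(Just to illustrate what I mean by a simple serial command interface:
a minimal, completely made-up framing - start byte, big-endian length,
command byte, payload, and an XOR checksum, loosely in the style of
existing terminal protocols - could look like this:)

```python
import struct

STX = 0x02  # frame start marker - every constant here is invented

def lrc(data: bytes) -> int:
    """Longitudinal redundancy check: XOR of all bytes."""
    value = 0
    for b in data:
        value ^= b
    return value

def encode_frame(command: int, payload: bytes) -> bytes:
    """STX | length (2 bytes, counts command+payload) | command | payload | LRC."""
    body = struct.pack(">BHB", STX, len(payload) + 1, command) + payload
    return body + bytes([lrc(body[1:])])  # LRC covers everything after STX

def decode_frame(frame: bytes):
    stx, length, command = struct.unpack(">BHB", frame[:4])
    if stx != STX:
        raise ValueError("missing frame start byte")
    if lrc(frame[1:-1]) != frame[-1]:
        raise ValueError("LRC mismatch - frame corrupted")
    return command, frame[4:3 + length]

# A host could send e.g. an invented 'request signature' command 0x10:
frame = encode_frame(0x10, b"SIGN?")
assert decode_frame(frame) == (0x10, b"SIGN?")
```

(The same framing works unchanged over a USB CDC serial port, which is
the point: one spec for both transports.)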
I guess no such products exist because no market exists. There are
dongles and USB-pluggable security modules - with several integrated
chipcard readers, tamper-responsive key stores, optionally deliverable
with unique/individual keys or certs loaded... I know a great company
able to deliver such things :-). In practice it seems to be too
expensive for many cases, such as home banking, which is `known' to be
safe without any special security hardware.

> > People would need to buy and manage USB crypto devices and
> > keys. My bank offered me one, but the device had neither a
> > secured keyboard nor a display and thus I felt unable to
> > trust it - a trojan on the PC can fool me.

> I tend to think that a device should have a display and information
> about what's going on ("A process is attempting to authenticate you to
> the following site, do you wish to allow this or not?") through a
> trusted path, not a path that could have keyloggers, trojans, BHOs,
> etc.

Yes, the device should display all details about the action to be
performed, authorized by the user (by pressing a button or entering a
PIN, ...). This authorization has to be handled internally in the
device, and of course by no means can a PIN leave the device, nor can
authorization be forced/faked from anywhere except the local
button/keyboard (`no means' meaning for less than, let's say,
$100,000). The device could implement hardware security like
tamper-evident and tamper-responsive techniques to make MITD (man in
the device) impossible (i.e. too expensive).

> > So I think it would be a bit harder. Maybe having a cell phone
> > or PDA with such a USB interface but not assumed high-security
> > (e.g. for online banking, I - the user - can decide to require
> > a PIN/TAN number additionally and to limit the maximum
> > transaction amount etc).

> I'm infuriated by one of my bank's unwillingness to allow
> account/password data to be cached locally -- it's my system, my fault
> if it's compromised, my liability if it's compromised.
ahh yes... difficult topic. The bank may say that they have to take
part of the risk (like if someone uses overdraft credit to send money
via Western Union, becomes unable to pay, and in the end the bank
never sees the credit money coming back). So the `logical'
responsibility could be limited by the possibility to practically
fulfill liability (most private people cannot pay for a 1,000,000
fraud). But it should be up to you, right. Maybe then with a small
overdraft credit limit or alike, but your decision, right.

But when it comes to `ordinary' people, not security-skilled, I think
this could be difficult. For example, I assume that the online banking
my bank offers is secure according to common understanding and that I
do not have to be concerned (and I don't want to be, it is not my job
:-)). The only problem is that according to the General Terms and
Conditions I usually have to pay part of the fraud value...

> I'm thinking, though, that each user should be able to have one or
> more "master" end-entity certificates, under which they sign proxy
> certificates to delegate authority to act as whichever identity to
> whatever device they're working with, and then use the device they're
> working with to sign the final transaction. This would reduce the
> risk of having the user's master private key being stolen, and would
> limit the effectiveness of any device authentication to the time that
> the proxy certificate is good for.

ahh yes - and maybe in some cases it might be desired to transfer such
an identity without revealing that to whatever webpage. I'm not saying
that I need this, but I think it should be possible; it would protect
a pseudonymous identity in case someone tries to prove that a
particular person did something, because if identities could have been
transferred, this couldn't be proved.
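(To make the delegation idea concrete, the structure could look
roughly like the toy sketch below. Note that HMAC only stands in here
for real public-key signatures so that the example stays
self-contained - a real design would use asymmetric keys, e.g. X.509
proxy certificates, and every name below is invented:)

```python
import hmac
import hashlib

# Toy sketch only: HMAC stands in for real public-key signatures.
# A real design would use asymmetric keys (e.g. X.509 proxy
# certificates); all names below are invented for illustration.

def sign(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def issue_delegation(master_key: bytes, device_key: bytes,
                     lifetime_s: int, now: int):
    """The 'master' identity delegates to a device key for a limited time."""
    statement = f"{device_key.hex()}|{now + lifetime_s}".encode()
    return statement, sign(master_key, statement)

def verify_transaction(master_key: bytes, delegation, txn: bytes,
                       txn_sig: bytes, now: int) -> bool:
    statement, master_sig = delegation
    # 1. Was the delegation really issued by the master identity?
    if not hmac.compare_digest(sign(master_key, statement), master_sig):
        return False
    device_key_hex, expiry = statement.decode().split("|")
    # 2. Is the delegation still valid? A stolen device key goes stale.
    if now > int(expiry):
        return False
    # 3. Did the delegated device key sign this very transaction?
    return hmac.compare_digest(sign(bytes.fromhex(device_key_hex), txn),
                               txn_sig)
```

(The point of the expiry is exactly what Kyle describes: even if the
device is compromised, the delegation goes stale on its own.)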
One such privacy-protection idea was to buy prepaid cell phone cards
and randomly exchange them with strangers (best to exchange them
together with the phone), maybe even regularly (every morning in the
subway or so). But this is against the law, at least in the EU.

[...]

[ok, now I understood, thank you]

> The way Netscape perceived SSL was as a mechanism to let end users
> trust that they were dealing with a legitimate business that took
> steps to prevent their credit card information from being stolen.
> This is why the interface for generating client certificates is so
> awful -- it was either a forethought that got dropped, or an
> afterthought. Businesses tend to be up-front about their identities
> and credentials to do business, but individuals don't need to be and
> shouldn't have to be.

ahh yes. But a detail here might be that the individuals do
authenticate themselves - by proving that they know their credit card
information (as a credential; something like name/expiry date
matching, like a PIN just printed on the credit card - awesome :-)).

> >> > but TLS cannot be made responsible that it's difficult to obtain
> >> > certificates (using the existing applications)...

> It's not difficult to obtain certificates. In fact, if you want to,
> you can go to http://www.startssl.com/ and get user and webserver
> certificates that are trusted by browsers from Mozilla, Apple, and
> Microsoft. For free.

I think getting a facebook or PHP forum account with username/password
is much easier. I miss the browser's `generate new random identity for
this site' button :)

> What's difficult is obtaining certificates that don't have your
> legal name in them. There aren't any public certification
> authorities who are willing to do that.
>
> This is what I'm aiming to change.

Personally, I would prefer if this never happened. In Germany,
practical `forces' (`ELENA') are currently being installed to move
people towards digital signatures. I really dislike that.
I do not want to be able to even theoretically marry by knowing a chip
card's PIN. Unfortunately, I do not have the option to choose that.

> > ohh yes, I think you are right...
> > Yes, I was assuming everyone would consider it natural that
> > within one and the same session the peer identity remains
> > the same. But this isn't true?

> I know of at least two instances where it was not appropriate to
> assume that, and where building systems that enforced
> one-identity-per-connection would increase the complexity and
> potential for exploitable bugs.
>
> Double-check any assumptions. Especially if "everyone" or "everybody"
> is involved.

well, easy to say but hard to do (not to overlook any
mis-assumption)... but what a pity. The assumption or requirement to
enforce one-identity-per-connection seems to make systems simpler.
Interesting that there are examples telling the opposite.

> Any assumption that is applied in general to an entire class of
> unknown individuals is false, and typically should not be acted
> upon.

Alternatively, someone could divide the class into two subclasses: the
one matching the assumption (the requirement), and put the other out
of scope (instead of trying to make a hyper-complex one-size-fits-all
solution). I think this could make it easier to use TLS for the 99% of
applications in the first subclass and use universalTLS++ for the
other subclass?

> (I would say "...and MUST NOT be acted upon", but I
> refuse to assume that just because I can't see a reason for it that a
> reason does not exist -- which is the attitude I wish more protocol
> standards bodies took.)

(yes, `MUST NOT' in turn could be considered an assumption applied in
general to an entire class of unknown assumptions :-))

> > (TLS might not be suited best for POS terminals, I think. As far
> > as I know, PKI is great when n:m trust is needed [but still,
> > authorization privileges have to be managed]. For POS terminals
> > I think often there is one operator per terminal. I think it
> > usually is not desired to be able to communicate securely with
> > someone else, someone not known in advance [like a foreign
> > terminal].)

> POS terminals have a few issues, and TLS is appropriate for them in
> conjunction with a few other things. They can submit a request for
> approval of an exception (like a refund) to the back office, where a
> manager can look at the details and authorize the request... but they
> also need to be able to transmit when a manager took a key, switched
> to manager mode, entered userID and passcode to authorize the pending
> transaction, and then switched the key back to 'normal operation'. I
> don't see any reason why the physical key should be needed for those
> exceptions anymore -- instead, use a USB key that can be put into
> another port and signed into, that has the authorization to approve
> those exceptions.

mmm... but it is not required that the terminal and the manager use
the same authorization means. I think this would also work if the
terminal has and keeps its ID and, when needed for manager mode, the
manager key is additionally involved. Since the manager must trust the
terminal to be allowed to enter a passcode, why shouldn't he trust the
secured link of that terminal? I could imagine that in such cases the
manager might even authorize himself against the terminal (i.e. unlock
it with a physical key locally). I just don't see why PKI should be
needed when there is one line of back-office communication partners
(i.e. one server) and terminals that have to be known in advance
anyway (the server will not accept new unknown terminals, even if they
are authentic - authentic whatever).

> > that sounds great (at least as long as the phone and the network
> > are reliable and working when needed).

> If I have to, I'll change it to my google voice number. *that*
> is reliable.
;) :) and in case you can ask google support staff to help you, just
give them your online banking PIN :)

> > I meant something like an SSH host key. If not known, it can be
> > accepted. But if known from any previous session, it must match
> > the currently used key (i.e. the key cannot change).
> > Yes, this does not scale, because web servers could never change
> > their key. Banks would have to send certificate fingerprints
> > beforehand or so.

> Okay, so you're expressing a kind of "key continuity management"
> instead of a PKI. But, an SSH host key is a public key, either RSA or
> DH. If the SSH software were extended, it could accept certifications
> for those keys from a trusted source.

Couldn't key continuity management be added on top of PKI? Then I
could decide to verify the certificate's fingerprint by calling the
tech support center of my bank whenever it changes. (in theory)

> There's a lot of pieces to this puzzle that haven't been put
> together yet... much less even built.

(yeah, but people want to introduce digital signatures and even
biometric fingerprints [easy to copy and left everywhere - how can
anyone use this?!?] - no really secure browser exists, but we are
working towards e-marrying in an e-church with an e-passport...)

> > For TLS, maybe some way to generate a new client certificate for
> > a new page, automatically done by the browser: send it to the site
> > and that site stores the fingerprint (not using CA/PKI) as part
> > of some account registration process. Just in case this would be
> > better than using no certificate at all.
> > However, this also does not scale (e.g. what happens if the user
> > uses a different browser to connect to the bank?).

> A transparent process for generating a keypair and installing a
> certificate? I'd love one.
>
> But as to what happens if a user uses a different browser to connect
> to the bank... well, if the user trusts the machine, and the bank
> allows the user to make this trust decision, then the bank can ask for
> multiple pieces of information which have already been shared with it,
> to verify the identity of the user within acceptable risk limits.

yes, he optionally `registers' a second new key pair. Maybe a PIN,
TAN, SMS, phone call and personal interview are needed to authenticate
it.

> (That's another thing that people forget: security is about managing
> and mitigating risk. Banks do this all the time, as do insurance
> companies, and it's important to recognize that even they're not
> pushing for that to be the only means of doing business online.)

yes, making the attack (much) more expensive than its possible revenue
is usually considered a safe assumption that the protection is
sufficient against this attack.

> > (I think a chip, a secured display and a secured keyboard are
> > needed, otherwise a PC application could spoof/misuse it, right?)
> > Couldn't this be easily ensured by generating certificates only
> > for approved security modules? For example, by selling only
> > completely loaded devices with individual keys + certs?

> At the very least, a chip and a secure display. The secure input pad
> may be optional, depending on how it's implemented.

but what if you see 100 sign requests flying over the display per
second, all being authenticated, before you are able to unplug the
cord?

> But, you just came up (off the top of your head) with Ian's proposed
> solution. The manufacturer creates a CA and, during manufacturing,
> generates a keypair, signs the public key with the CA, loads the
> resulting certificate onto the card, and (presumably after some
> testing) packages it for the end user.
> The user then takes the device, plugs it in, logs into the bank
> website, the bank sends a "I wish to authenticate this user with
> strong cryptography, as long as it meets these minimum
> specifications" list to the browser, the browser sends it to the
> chip, the chip parses it and implements the generation to the bank's
> specifications, then essentially generates a CSR that includes the
> policy used, signs it with the newly-generated private key, and then
> wraps the entire thing up in a CMS container (though in reality, it
> could also have been implemented as S/MIME), signs it with its own
> key, includes its certificate chain in the message, and hands the
> entire blob to the browser. The browser then sends that blob to the
> bank, the bank runs processes to verify that the message is actually
> signed by a device from a manufacturer it trusts, that the policy
> used wasn't tampered with, and that the internal CSR can be verified
> with the public key therein, and then utilizes its knowledge of the
> currently-logged-in user to generate a final certificate to install
> on that device.

I would like to add that I personally want (by default) to
authenticate each transaction - i.e. not just to log in and that's it,
but to have to confirm each transaction on the device - to protect
against MITB or MITPC (worm, trojan, ...).

> >> Now, for the mindbender: under what circumstances might it be
> >> appropriate to use NULL-NULL-SHA256?
> >
> > mmm...
> >
> > If the transmission link isn't reliable and might have bit
> > errors? (since even TCP sometimes does not detect bit errors and
> > files fetched via FTP can be damaged)... no idea... when might
> > it be appropriate to use it?

> TLS specifies that it must be run over a reliable sequenced
> connection, whether that be TCP or SPX or even a pair of modems
> running MNP5.
> TLS also specifies that if the MAC is incorrect,
> there's no attempt at recovery; it simply sends a fatal alert and
> closes the connection on the first occurrence.

ohh, so a transmission error that is not detected by TCP is fatal. I
think that would be every 65536th undetected-by-TCP error on average,
since TCP's checksum is only 16 bits?

> The French government used to, and a lot of governments still do,
> limit the strength of the encryption that their citizens may use.

(I hope that the terrorists, or whoever the official target is, do not
forget and accidentally use strong cryptography, for example by using
OpenSSL and forgetting to enable the restrictions.)

> So, many of them didn't use it. However, they still wanted to
> ensure that they were talking to who they thought they were
> supposed to be talking with, so they applied an HMAC. (Message
> authentication codes weren't ever outlawed, but the governments
> wanted to be able to read the clear text of the transmissions.)

ahh interesting... Isn't it possible in theory to implement higher
security using weak ciphers but strong MACs and many random numbers?
Like using a new 40-bit key for each 40 bits of data (like a one-time
pad) via some key exchange protocol that works as long as a strong
HMAC is available? It would be horribly slow, but possible and legal?
But when to use NULL-NULL-SHA256 now?

oki,

Steffen

--
--[end of message]------------------------------------------------->8=======

About Ingenico: Ingenico is a leading provider of payment solutions,
with over 15 million terminals deployed in more than 125 countries.
More information on http://www.ingenico.com/.
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           majord...@openssl.org