Re: [TLS] Fresh results

2015-12-04 Thread Karthikeyan Bhargavan
> Suppose keyUsage is respected.  Who will knowingly shoot themselves
> in the foot and restrict their RSA certificate to just DHE or just
> RSA key transport?  This looks like an impractical counter-measure.

I guess we have to trade off between different levels of “shooting oneself in 
the foot”.
For a company that already manages a bunch of certificates (one for each 
domain, separate ECDSA and RSA certificates, etc.), spending a few more 
dollars on distinct certs for RSA encryption does not seem like a lot.

In any case, the main issue is whether we can get recipients to enforce key 
usage, not senders.
If a cert explicitly enables key exchange, encryption, and signature, it can be 
used for all of these.
However, if the cert restricts itself to digital signature, then the client 
should not use it for RSA encryption.
That way, the server gets to choose whether it wants protection from 
cross-protocol attacks or not,
and the client will enforce its wishes correctly.
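A rough sketch of such a client-side check (Python, using the pyca/cryptography 
package; the policy mapping below is only illustrative, not taken from any 
existing implementation):

    # Sketch: map a server certificate's keyUsage bits to the key-exchange
    # uses a TLS client should permit (illustrative policy).
    from cryptography import x509

    def allowed_uses(cert: x509.Certificate) -> dict:
        try:
            ku = cert.extensions.get_extension_for_class(x509.KeyUsage).value
        except x509.ExtensionNotFound:
            # No keyUsage extension: historically treated as unrestricted.
            return {"rsa_key_transport": True, "static_ecdh": True, "signing": True}
        return {
            "rsa_key_transport": ku.key_encipherment,  # RSA key transport
            "static_ecdh": ku.key_agreement,           # static (EC)DH
            "signing": ku.digital_signature,           # (EC)DHE and other signed exchanges
        }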

Regarding ECDSA->ECDH cross-protocol attacks, the code that you pointed out in 
OpenSSL indeed allows the server to ensure it does not use a sign-only cert 
for ECDH, but other libraries do not make this distinction.
Moreover, as far as I know, OpenSSL clients do not enforce this key usage 
restriction (see crypto/x509/x509type.c for the certificate usage rules), and 
neither, for that matter, do other TLS clients such as NSS. This allows 
attacks like the ECDH downgrade in [1] and can probably make the ECDSA key 
recovery attack in [2] quite a bit worse.

From the viewpoint of provable security, I’d be suspicious of any TLS modes 
that allow the same long-term keys to be used in multiple different ways. For 
RSA/EC, there are known attacks. Even in the absence of known attacks, such 
reuse requires a strong joint security assumption that we don’t really know 
how to justify.

-K.


[1] https://www.smacktls.com/smack.pdf 
[2] http://euklid.org/pdf/ECC_Invalid_Curve.pdf 




> 
> And by the way key usage is enforced by OpenSSL for EC.
> 
> $ git grep -C3 X509v3_KU_ ssl
> ssl/ssl_lib.c-        X509_check_purpose(x, -1, 0);
> ssl/ssl_lib.c-# ifndef OPENSSL_NO_ECDH
> ssl/ssl_lib.c-        ecdh_ok = (x->ex_flags & EXFLAG_KUSAGE) ?
> ssl/ssl_lib.c:            (x->ex_kusage & X509v3_KU_KEY_AGREEMENT) : 1;
> ssl/ssl_lib.c-# endif
> ssl/ssl_lib.c-        ecdsa_ok = (x->ex_flags & EXFLAG_KUSAGE) ?
> ssl/ssl_lib.c:            (x->ex_kusage & X509v3_KU_DIGITAL_SIGNATURE) : 1;
> ssl/ssl_lib.c-        if (!(cpk->valid_flags & CERT_PKEY_SIGN))
> ssl/ssl_lib.c-            ecdsa_ok = 0;
> ssl/ssl_lib.c-        ecc_pkey = X509_get_pubkey(x);
> --
> ssl/ssl_lib.c-    }
> ssl/ssl_lib.c-    if (alg_k & SSL_kECDHe || alg_k & SSL_kECDHr) {
> ssl/ssl_lib.c-        /* key usage, if present, must allow key agreement */
> ssl/ssl_lib.c:        if (ku_reject(x, X509v3_KU_KEY_AGREEMENT)) {
> ssl/ssl_lib.c-            SSLerr(SSL_F_SSL_CHECK_SRVR_ECC_CERT_AND_ALG,
> ssl/ssl_lib.c-                   SSL_R_ECC_CERT_NOT_FOR_KEY_AGREEMENT);
> ssl/ssl_lib.c-            return 0;
> --
> ssl/ssl_lib.c-    }
> ssl/ssl_lib.c-    if (alg_a & SSL_aECDSA) {
> ssl/ssl_lib.c-        /* key usage, if present, must allow signing */
> ssl/ssl_lib.c:        if (ku_reject(x, X509v3_KU_DIGITAL_SIGNATURE)) {
> ssl/ssl_lib.c-            SSLerr(SSL_F_SSL_CHECK_SRVR_ECC_CERT_AND_ALG,
> ssl/ssl_lib.c-                   SSL_R_ECC_CERT_NOT_FOR_SIGNING);
> ssl/ssl_lib.c-            return 0;
> 
> --
>   Viktor.
> 





Re: [TLS] Fully encrypted and authenticated headers (was Re: Encrypting record headers: practical for TLS 1.3 after all?)

2015-12-04 Thread Bryan Ford
On 04 Dec 2015, at 07:56, Valery Smyslov  wrote:
> Hi Bryan,
>  
> I guess Dmitry is talking about the trick when each datagram is encrypted 
> with its own key, 
> derived from the "master" session key using some unique public parameter of 
> the datagram,
> like its sequence_number. This trick makes attacks on the encryption key 
> almost useless.
> It is not specifically bound to the GOST cipher; however, it is sometimes 
> used with this cipher 
> to deal with its short (by current standards) block size. See for example the 
> (now expired) draft 
> https://www.ietf.org/archive/id/draft-fedchenko-ipsecme-cpesp-gost-04.txt 
> 
> (it is about ESP, but the general principles are the same for DTLS).

Ah, I see - thanks for the clarification.

> As far as I understand, your proposal makes it impossible to use this trick 
> if we consider packet loss and reordering.

Actually, if I’m understanding correctly how you’re doing this per-datagram 
rekeying, I think it still should be compatible with the hash-table-based 
approach I proposed.  Assuming you’re using some key derivation function that 
takes a master key and sequence number as input and produces a per-datagram 
key, the receiver just needs to pre-compute the per-datagram keys for the 
sequence numbers within the current window, and encrypt the sequence numbers 
with those respective per-datagram keys, in order to populate its hash table.  
I don’t think anything breaks at least.
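To make that concrete, a rough receiver-side sketch (Python; HMAC-SHA256 stands 
in for whatever KDF and header stream cipher the real design would use, and 
all names are illustrative):

    # Sketch: per-datagram keys derived from (master key, sequence number),
    # and a hash table mapping encrypted sequence numbers back to plaintext
    # values.
    import hashlib
    import hmac

    def datagram_key(master_key: bytes, seq: int) -> bytes:
        return hmac.new(master_key, b"dgram-key" + seq.to_bytes(6, "big"),
                        hashlib.sha256).digest()

    def encrypt_seq(master_key: bytes, seq: int) -> bytes:
        # XOR the uint48 sequence number with header cipherstream derived
        # from that datagram's own key.
        stream = hmac.new(datagram_key(master_key, seq), b"hdr",
                          hashlib.sha256).digest()[:6]
        return bytes(a ^ b for a, b in zip(seq.to_bytes(6, "big"), stream))

    def build_window(master_key: bytes, horizon: int, w: int) -> dict:
        # Precompute the encrypted sequence numbers for H-W .. H+W.
        return {encrypt_seq(master_key, s): s
                for s in range(max(0, horizon - w), horizon + w + 1)}

    def lookup_seq(window: dict, wire_field: bytes):
        # On receipt: None means "not in the window", so drop the datagram.
        return window.get(wire_field)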

Cheers
Bryan





Re: [TLS] Encrypting record headers: practical for TLS 1.3 after all?

2015-12-04 Thread GUBALLA, JENS (JENS)
Hi Bryan,

> -Original Message-
> From: Bryan A Ford [mailto:brynosau...@gmail.com]
> Sent: Donnerstag, 3. Dezember 2015 10:51
> To: GUBALLA, JENS (JENS); Fabrice Gautier
> Cc: tls@ietf.org
> Subject: Re: [TLS] Encrypting record headers: practical for TLS 1.3
> after all?
> 
> Hi Jens,
> 
> On 12/2/15 11:47 AM, GUBALLA, JENS (JENS) wrote:
> >> Fortunately the solution is fairly simple: the receiver simply
> >> pre-computes and keeps in a small hash table the encrypted sequence
> >> numbers of all packets with sequence numbers between H-W and H+W,
> >> where H is the highest sequence number correctly received so far (the
> >> horizon) and W is the anti-replay window size as specified in 4.1.2.5
> >> of RFC 4347, which typically should be 32 or 64 according to the RFC.
> >> The receiver can precompute all these encryptions because in my
> >> proposal TLS headers are encrypted with a stream cipher (or the AEAD
> >> operating as a stream cipher), so it's just a matter of producing the
> >> correct cipherstream bytes and XORing them with the uint48 sequence
> >> number.
> >>
> >> Whenever the receiver gets a datagram, it looks up the encrypted
> >> sequence number in the hash table, drops it on the floor if it's not
> >> present, and if it's present the receiver gets the decrypted sequence
> >> number from the hash table and uses that in the AEAD decryption and
> >> integrity-check.  In the low-probability event of a hash-table
> >> collision (i.e., two uint48 sequence numbers encrypting to the same
> >> 48-bit ciphertext in a single 129-datagram window), the receiver can
> >> trial-decrypt with both (or all) sequence numbers in that colliding
> >> hash table entry.  Or the receiver can keep it even simpler and just
> >> drop all but one colliding entry, introducing a pretty low probability
> >> of introducing occasional "false packet drops."
> >>
> >> The hash table is pretty trivial to maintain efficiently as well:
> >> e.g., whenever the horizon H moves forward by delta D, remove the
> >> first D entries from the current window and precompute another D
> >> encrypted sequence numbers (where D will most often be 1).  In the
> >> simple design that doesn't bother dealing with hash table collisions
> >> (e.g., that allows each hash table entry to contain only one value),
> >> perhaps don't even bother clearing/removing old entries; just
> >> gradually overwrite them with new ones as H moves forward.
> >
> > [JG] In case there is a packet loss of at least W subsequent DTLS
> > records: How can the receiver then ever adjust its hash table?
> > Wouldn't that mean that no records at all would be accepted anymore?
> 
> Excellent question - I had intended to discuss that in my original post
> but in the end forgot to include it.
> 
> Indeed, with this approach as it stands, if every packet within a full
> window of W consecutive packets fails to reach the receiver, then the
> receiver has no way to resynchronize and the connection will simply
> fail.  In congestion-controlled protocols like TCP (or DCCP) that do
> exponential backoff when they detect many consecutive losses, the
> protocol may be more likely simply to hard-timeout than to reach the
> W-packet resynchronization limit.  But admittedly many UDP-based
> protocols aren't (or are rather weakly) congestion-controlled, so this
> may be more of a problem for them.  It's probably the case that the
> "forward-looking window" should be allowed to have a different value
> from the "backward-looking window", and perhaps the "forward-looking
> window" should depend on RTT (e.g., measured maximum packets-in-
> flight).
> 
> However, one way to eliminate this risk of permanent desynchronization,
> at the cost of a bit more complexity in the receiver implementation
> (though this needn't affect the protocol spec at all) is for the
> receiver's "forward-looking window" to consist not of W consecutive
> sequence numbers but a sparse set of sequence numbers at
> exponentially-increasing distances.  For example, if H is the current
> highest sequence number, include in the forward-looking cache the
> encrypted sequence numbers of the sequence number of the next multiple
> of 2^1 beyond H, the next multiple of 2^2, etc., for as many multiples
> of powers-of-two to get sufficiently far out in the sequence number
> space where we're convinced there's no realistic chance of a run of
> total or near-total packet loss unless it really means the connection
> is
> dead anyway. :)
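A rough sketch of that sparse forward-looking set (Python; the helper and the 
power-of-two cut-off are illustrative only):

    # Sketch: beyond the dense window around H, also cache the next multiple
    # of 2^1, 2^2, ..., 2^max_exp past the horizon, so even a long run of
    # losses leaves some sequence numbers the receiver will still recognize.
    def sparse_forward_set(horizon: int, max_exp: int = 20) -> set:
        out = set()
        for k in range(1, max_exp + 1):
            step = 1 << k
            out.add(((horizon // step) + 1) * step)  # next multiple of 2^k beyond H
        return out

    # e.g. sparse_forward_set(1000, 5) == {1002, 1004, 1008, 1024}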
[JG] That would mean legitimate records would be dropped until an entry in
the hash table matches. Thus this proposal would potentially degrade the
service, right?

Basically I fail to see how a stream cipher can be operated reliably on top
of an unreliable transport protocol.

Best regards,
Jens

> 
> Again, these are all considerations that need not affect the protocol
> but could be tuned by implementations (perhaps with some
> recommendations
> in the protocol spec).
>

Re: [TLS] Fully encrypted and authenticated headers (was Re: Encrypting record headers: practical for TLS 1.3 after all?)

2015-12-04 Thread Valery Smyslov

> > As far as I understand, your proposal makes it impossible to use this
> > trick if we consider packet loss and reordering.
>
> Actually, if I’m understanding correctly how you’re doing this
> per-datagram rekeying, I think it still should be compatible with the
> hash-table-based approach I proposed.  Assuming you’re using some key
> derivation function that takes a master key and sequence number as input
> and produces a per-datagram key, the receiver just needs to pre-compute
> the per-datagram keys for the sequence numbers within the current window,
> and encrypt the sequence numbers with those respective per-datagram keys,
> in order to populate its hash table.  I don’t think anything breaks at
> least.

I would say that the hash-table-based approach would work in theory.
In practice it either implies too much burden on the receiver or would fail
in some situations. Consider, for example, the situation where the sender
skips a large number of sequence numbers (say 2^32) for some reason.
It would be difficult for the receiver to build such a hash table.

Regards,
Valery Smyslov.


Re: [TLS] Fully encrypted and authenticated headers (was Re: Encrypting record headers: practical for TLS 1.3 after all?)

2015-12-04 Thread Blumenthal, Uri - 0553 - MITLL
I'm with Valery here. IMHO the proposed mechanism makes little sense because 
(politics notwithstanding) it offers little gain in return for a significant 
(possibly unacceptable) burden on important use cases. 

Personally I'm firmly against it. 


Sent from my BlackBerry 10 smartphone on the Verizon Wireless 4G LTE network.
From: Valery Smyslov
Sent: Friday, December 4, 2015 07:57
To: Bryan Ford
Cc: tls@ietf.org
Subject: Re: [TLS] Fully encrypted and authenticated headers (was Re: 
Encrypting record headers: practical for TLS 1.3 after all?)

 
> > > As far as I understand, your proposal makes it impossible to use this
> > > trick if we consider packet loss and reordering.
> >
> > Actually, if I’m understanding correctly how you’re doing this
> > per-datagram rekeying, I think it still should be compatible with the
> > hash-table-based approach I proposed.  Assuming you’re using some key
> > derivation function that takes a master key and sequence number as input
> > and produces a per-datagram key, the receiver just needs to pre-compute
> > the per-datagram keys for the sequence numbers within the current window,
> > and encrypt the sequence numbers with those respective per-datagram keys,
> > in order to populate its hash table.  I don’t think anything breaks at
> > least.
>
> I would say that the hash-table-based approach would work in theory.
> In practice it either implies too much burden on the receiver or would
> fail in some situations. Consider, for example, the situation where the
> sender skips a large number of sequence numbers (say 2^32) for some
> reason. It would be difficult for the receiver to build such a hash table.
>
> Regards,
> Valery Smyslov.





Re: [TLS] Fully encrypted and authenticated headers (was Re: Encrypting record headers: practical for TLS 1.3 after all?)

2015-12-04 Thread Jacob Appelbaum
On 12/2/15, Watson Ladd  wrote:
> On Wed, Dec 2, 2015 at 10:34 AM, Jacob Appelbaum  wrote:
>>
>> I think that it eliminates all static distinguisher in the protocol
>> for all data covered by the encryption. That is a fantastically
>> wonderful benefit.
>
> What's a "static distinguisher"? Padding solves this problem as well,
> but it also solves problems resulting from TCP segmentation down the
> stack, which header encryption doesn't. What does header encryption
> offer that padding does not?
>

Fixed parts of a protocol are often considered static distinguishers - most
are unavoidable unless you take the Scramblesuit design approach and have a
key exchanged out of band. Elligator is another useful design in this
direction.

In the case of TLS, we've seen a specific Oakley group used as the
distinguisher that selected all related (TCP) flows for disruption.
Changing that to a (well formed) randomly selected value allowed
traffic to flow freely again. Other static values like a site specific
plaintext name are used much more commonly.

I could imagine, for example, that all records with a given length can be
selected and dropped. Common VoIP applications that use fixed lengths are
thus even easier to censor with an exposed length field. With that value
hidden and with *random* padding, I think the ease of selecting specific
flows would be reduced and the cost would be much higher. Not everyone needs
padding, but many people will want that value hidden, and they have no useful
way to get it unless the protocol supports it by default.
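Purely as a sketch of what random padding of a fixed-size record could look 
like (Python; the padding cap is a made-up number, not from any spec):

    import secrets

    MAX_EXTRA = 96    # illustrative cap on the random padding

    def randomly_padded(frame: bytes) -> bytes:
        # frame: e.g. a fixed-size VoIP payload; append a random amount of
        # padding so the ciphertext size no longer picks out that fixed length.
        return frame + b"\x00" * secrets.randbelow(MAX_EXTRA + 1)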

All the best,
Jacob



Re: [TLS] Fully encrypted and authenticated headers (was Re: Encrypting record headers: practical for TLS 1.3 after all?)

2015-12-04 Thread Jeff Burdges
Bryan Ford  wrote: 
> 2. The 2-byte length field in each record's header no longer 
> indicates the length of the *current* record but instead indicates
> the length of the *next* record.  The length of the first record
> might be defined in a new field we add to the handshake/key-exchange
> protocol, or it might simply be set to some well-known standard first
> record size.

Using a standard size for the first message sounds like an amazingly
good idea, irrespective of whether the idea of each message containing
the next message's size works out.  Any thoughts on how big this first
message should be? 

Jeff




Re: [TLS] Fwd: Clarification on interleaving app data and handshake records

2015-12-04 Thread Hubert Kario
On Friday 16 October 2015 22:36:10 Kurt Roeckx wrote:
> On Fri, Oct 16, 2015 at 04:05:34PM +0200, Hubert Kario wrote:
> > On Friday 16 October 2015 09:16:01 Watson Ladd wrote:
> > > Unfortunately I don't know how to verify this. Can miTLS cover
> > > this
> > > case?
> > 
> > you mean, you want an implementation that can insert application
> > data in any place of the handshake?
> 
> Have you tried running any of your tests against miTLS?

Yes, I finally did.

miTLS does accept Application Data when it is sent between Client Hello 
and Client Key Exchange, and rejects it when it is sent between Change 
Cipher Spec and Finished.

Though I will need to modify tlsfuzzer a bit more before I will be able 
to publish an automated test case for that*

 * - miTLS writes HTTP responses on a line-by-line basis, making 
handling of its responses a bit more complex
-- 
Regards,
Hubert Kario
Senior Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic



Re: [TLS] Fresh results

2015-12-04 Thread Hubert Kario
On Friday 04 December 2015 00:52:08 Hanno Böck wrote:
> On Thu, 3 Dec 2015 18:45:14 -0500
> 
> Watson Ladd  wrote:
> > On Tue, Dec 1, 2015 at 3:02 PM, Hanno Böck  wrote:
> > > So as long as you make sure you implement all the proper
> > > countermeasures against that you should be fine. (Granted: This is
> > > tricky, as has been shown by previous results, even the OpenSSL
> > > implementation was lacking proper countermeasures not that long
> > > ago,
> > > but it's not impossible)
> > 
> > Can you describe the complete set of required countermeasures, and
> > prove they work comprehensively? What if the code is running on
> > shared hosting, where much better timing attacks are possible?
> > What's shocking is that this has been going on for well over a
> > decade: the right solution is to use robust key exchanges, and yet
> > despite knowing that this is possible, we've decided to throw patch
> > onto patch on top of a fundamentally broken idea. There is no fix
> > for PKCS 1.5 encryption, just dirty hacks rooted in accidents of
> > TLS.
> 
> No disagreement here.
> 
> The thing is, we have a bunch of difficult options to choose from:
> 
> * Fully deprecate RSA key exchange.
> The compatibility costs of this one are high. They are even higher
> considering the fact that chrome wants to deprecate dhe and use rsa as
> their fallback for hosts not doing ecdhe. ecdhe implementations
> weren't widespread until quite recently. A lot of patent foo has e.g.
> stopped some linux distros from shipping it.

Then maybe Chrome should reconsider.

I think we're overstating the compatibility costs.

Very few widely deployed implementations (with the exception of the long-
deprecated Windows XP) lack support for DHE_RSA *and* ECDHE_RSA at the 
same time.

-- 
Regards,
Hubert Kario
Senior Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic



Re: [TLS] Fresh results

2015-12-04 Thread David Benjamin
On Fri, Dec 4, 2015 at 1:17 PM Hubert Kario  wrote:

> On Friday 04 December 2015 00:52:08 Hanno Böck wrote:
> > * Fully deprecate RSA key exchange.
> > The compatibility costs of this one are high. They are even higher
> > considering the fact that chrome wants to deprecate dhe and use rsa as
> > their fallback for hosts not doing ecdhe. ecdhe implementations
> > weren't widespread until quite recently. A lot of patent foo has e.g.
> > stopped some linux distros from shipping it.
>
> Then maybe Chrome should reconsider.
>

Note that Apple has already removed DHE cipher suites from Safari in the
latest OS X and iOS releases, so advertising only DHE is already infeasible
for most servers.

I don't think telling servers to disable RSA ciphers and only advertise
DHE_RSA ciphers makes much sense. The set of servers which...

1. Are willing to disable plain RSA.
2. Don't have ECDHE support.
3. Are unwilling to take updates and get ECDHE support.
4. Support DHE support *with a sensible group*.
5. Are willing to deploy DHE with said sensible group despite the
compatibility and performance hit.

...is certainly all but nil.

David

> I think we're overstating the compatibility costs.
>
> very few widely deployed implementations (with the exception of the long
> deprecated Windows XP) lack support for DHE_RSA *and* ECDHE_RSA at the
> same time
>
> --
> Regards,
> Hubert Kario
> Senior Quality Engineer, QE BaseOS Security team
> Web: www.cz.redhat.com
> Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic


[TLS] Analysis of encrypting the headers - what is the length

2015-12-04 Thread Jim Schaad
I will start by re-iterating my initial position that I would prefer the DTLS
and TLS analyses to be the same in terms of masking the header information.
So I decided to do some thought experiments about what happens if the length
were to be encrypted, and in how many different situations this does not
appear to help.

DTLS  - Given that most DTLS situations are going to want to keep the block
of data sent small, there is little to no incentive to send multiple DTLS
blocks in a single UDP packet.  This means that the length of the encrypted
data is going to be easily found based on the length of the UDP packet.
One can probably make some significantly accurate guesses about what the
header fields are going to be for a DTLS packet, but I would assume that
even if the key could be determined it would not compromise the keys used in
protecting the D/TLS content itself.

TLS w/ lock step protocol - Think about using TLS with a lock-step protocol
such as exists for POP, where you always have a situation where the client
sends a request and the server sends a response.  Even with the length field
encrypted, traffic analysis is going to be able to determine the length of
many, if not all, of the messages based on the amount of data that is flowing
from each of the end-points.  Thus with POP I can make a good guess about the
length of your password just by looking at the traffic being sent in terms of
raw byte counts, even without the length being directly available.

TLS w/ time breaks between operations - Think about doing browsing on a web
site.  I read a page, which takes time, and then I click a link.  Looking
just at the traffic flow, I can make a really good guess about the length of
the link that you just sent to the server based on the number of bytes that
are sent from the client to the server before the server starts chattering.
Any protocol where there are going to be times where there is silence
followed by a request and then responses is going to allow for a relatively
easy guess about the length of the request based just on the number of bytes
in the stream.

These are all situations where, if one says "my attack model is that exposing
the length of the encrypted data allows the attacker to obtain significant
information about what I am sending", then the simple expedient of encrypting
the header information is clearly insufficient to deal with the proposed
attack.  Additional steps need to be taken as well.  For example, sending all
of the randomly updating adware back on the same TLS channel to prevent time
breaks from occurring.  (What a horrid idea, using adware as a method of
preventing attacks.)

I believe that this is the type of analysis that Peter is looking for: what
is the attack you are trying to prevent, and does the proposed remedy
actually do what you think it does?  The situations above would all appear to
be better dealt with by padding the plaintext in some manner rather than by
encrypting the length field.

Jim




Re: [TLS] Analysis of encrypting the headers - what is the length

2015-12-04 Thread Christian Huitema
On Friday, December 4, 2015 12:57 PM, Jim Schaad wrote:
> To: tls@ietf.org
> Subject: [TLS] Analysis of encrypting the headers - what is the length
> ...
> 
> DTLS  - Given that most DTLS situations are going to want to keep the block
> of data sent small, there is no to little incentive to send multiple DTLS 
> blocks
> in a single UDP packet.  This means that the length of the encrypted data is
> going to be easily found based on the length of the UDP packet.
> One can probably make some significantly accurate guesses about what the
> header fields are going to be for a DTLS packet, but I would assume that
> even if the key could be determined it would not compromise the keys used
> in protecting the D/TLS content itself.

Some DTLS applications do care about privacy. An example is DPRIVE -- DNS 
over DTLS, or over TLS. The length of the packets matters. The length of 
ciphertexts for queries and responses can enable guesses about the length of 
the name being looked up, and that's precisely what DPRIVE is trying to 
prevent. But then the answer is not to encrypt the length, because as Jim 
says it can be deduced from the size of the UDP packets, or even the patterns 
of the TCP stream. The solution instead is padding, either using the TLS 1.3 
mechanism or using the DNS padding option that DPRIVE is standardizing. 

There are debates on how much to pad exactly, e.g. to the max MTU or to some 
logarithmic scale. It will probably take some time to assess the best 
strategy, but it is going to happen.
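One possible shape of such a policy, just as a sketch (Python; the bucket 
sizes and the cap are illustrative, not anything DPRIVE has agreed on):

    # Sketch: pad each DNS message up to the next power-of-two bucket, capped
    # at a maximum size, so ciphertext lengths reveal the bucket rather than
    # the length of the name being queried.
    def padded_length(n: int, max_len: int = 1200) -> int:
        if n >= max_len:
            return n            # too big for a bucket; send as-is
        bucket = 64
        while bucket < n:
            bucket *= 2
        return min(bucket, max_len)

    def pad(message: bytes, max_len: int = 1200) -> bytes:
        return message + b"\x00" * (padded_length(len(message), max_len) - len(message))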

-- Christian Huitema

> TLS w/ lock step protocol - Think about using TLS with a lock step protocol
> such as exists for POP where you always have a situation where the client
> sends a request and the server sends a response.  Even with the length field
> encrypted a traffic analysis is going to be able to determine the length of 
> all
> of many of the messages based on the amount of data that is flowing from
> each of the end-points.  Thus with POP I can make a good guess about the
> length your password just by looking at the traffic being sent in terms of raw
> byte counts even without the length being directly available.
> 
> TLS w/ time breaks between operations - Think about doing browsing on a
> web site.  I read a page, which takes time, and then I click a link.  Looking 
> just
> at the traffic flow I can make a really good guess about the length of the 
> link
> that you just sent to the server based on the number of bytes that are sent
> from the client to the server before the server starts chattering.
> Any protocol where there are going to be times where there is silence
> followed by a request and then responses is going to all for a relatively easy
> guess about the length of the request based just on the number of bytes in
> the stream.
> 
> These are all situations where if one say - My attack model is that exposing
> the length of the encrypted data allows the attacker to obtain significant
> information about what I am sending - then the simple expedient of
> encrypting the header information is clearly insufficient to deal with the
> attack proposed.  Additional steps need to be taken as well.  For example
> sending all of the randomly updating adware back on the same TLS channel
> to prevent time breaks from occurring.  (What a horrid idea of using adware
> as a method of preventing attacks.)
> 
> I believe that this is the type of analysis that Peter is looking for in 
> terms of
> what is the attack you are trying to prevent, what does the proposed remedy
> actually do what you think it does.  The situations above would all appear to
> be better dealt with padding of the plain text in some manner rather than
> encrypting the length field.
> 
> Jim
> 
> 



Re: [TLS] Fresh results

2015-12-04 Thread Fabrice Gautier

> On Dec 4, 2015, at 10:11, Hubert Kario  wrote:
> 
>> On Friday 04 December 2015 00:52:08 Hanno Böck wrote:
>> On Thu, 3 Dec 2015 18:45:14 -0500
>> 
>> Watson Ladd  wrote:
>>> On Tue, Dec 1, 2015 at 3:02 PM, Hanno Böck  wrote:
>>>> So as long as you make sure you implement all the proper
>>>> countermeasures against that you should be fine. (Granted: This is
>>>> tricky, as has been shown by previous results, even the OpenSSL
>>>> implementation was lacking proper countermeasures not that long ago,
>>>> but it's not impossible)
>>> 
>>> Can you describe the complete set of required countermeasures, and
>>> prove they work comprehensively? What if the code is running on
>>> shared hosting, where much better timing attacks are possible?
>>> What's shocking is that this has been going on for well over a
>>> decade: the right solution is to use robust key exchanges, and yet
>>> despite knowing that this is possible, we've decided to throw patch
>>> onto patch on top of a fundamentally broken idea. There is no fix
>>> for PKCS 1.5 encryption, just dirty hacks rooted in accidents of
>>> TLS.
>> 
>> No disagreement here.
>> 
>> The thing is, we have a bunch of difficult options to choose from:
>> 
>> * Fully deprecate RSA key exchange.
>> The compatibility costs of this one are high. They are even higher
>> considering the fact that chrome wants to deprecate dhe and use rsa as
>> their fallback for hosts not doing ecdhe. ecdhe implementations
>> weren't widespread until quite recently. A lot of patent foo has e.g.
>> stopped some linux distros from shipping it.
> 
> Then maybe Chrome should reconsider.
> 
> I think we're overstating the compatibility costs.
> 
> very few widely deployed implementations (with the exception of the long 
> deprecated Windows XP) lack support for DHE_RSA *and* ECDHE_RSA at the 
> same time

The main issue with DHE_RSA is that there are still too many servers that will 
use a short DHE group (<1024 bits). When connecting to those servers, using RSA 
is (presumably) safer.

From a compatibility standpoint it's much simpler and safer to just disable DHE 
completely than to try to enforce a minimum DHE group size, which would require 
a fallback connection to RSA.
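A minimal sketch of the kind of client-side limit that enforcing a DHE floor 
would entail (Python; the threshold and the idea of checking the 
ServerKeyExchange prime directly are illustrative):

    MIN_DHE_BITS = 1024   # illustrative cut-off; many would argue for 2048

    def dhe_group_acceptable(dh_p: bytes) -> bool:
        # dh_p: the server's DH prime from ServerKeyExchange, big-endian bytes.
        return int.from_bytes(dh_p, "big").bit_length() >= MIN_DHE_BITS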


-- Fabrice. 


> -- 
> Regards,
> Hubert Kario
> Senior Quality Engineer, QE BaseOS Security team
> Web: www.cz.redhat.com
> Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
