I've noticed this before and made a note to investigate further, then
promptly forgot about it. I found the note again and thought I'd give
it an airing.

On an Alix:

$ dmesg | grep glxsb
glxsb0 at pci0 dev 1 function 2 "AMD Geode LX Crypto" rev 0x00: RNG AES

(Note that glxsb only supports 128-bit AES.)

Two runs of this:

$ openssl speed -elapsed -evp aes-128-cbc aes-256-cbc bf-cbc md5 rmd160

First one is with GENERIC.

 type           16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes

 md5             1626.02k     5740.63k    16864.52k    32936.64k    45592.21k
 rmd160           896.17k     2390.60k     4776.75k     6375.45k     7063.91k
 blowfish cbc    6322.96k     7944.50k     8531.00k     8694.92k     8739.18k
>aes-256 cbc      763.62k     1927.76k     3144.16k     3735.24k     3944.74k
 aes-128-cbc      842.88k     3209.04k    10395.46k    23636.94k    37118.16k

Second is with glxsb disabled.

 md5             1411.29k     5071.80k    15398.65k    31483.38k    45262.66k
 rmd160           900.02k     2405.29k     4797.65k     6389.36k     7077.15k
 blowfish cbc    7007.10k     8217.05k     8597.33k     8676.23k     8721.29k
>aes-256 cbc     4216.40k     5463.23k     5918.44k     6041.66k     6073.89k
 aes-128-cbc     4817.48k     6998.22k     7920.43k     8184.28k     8260.40k

The current infrastructure lets us set whether a particular algorithm
is supported, but can't distinguish 128-bit from 256-bit AES. See
glxsb_crypto_setup in arch/i386/pci/glxsb.c:

        algs[CRYPTO_AES_CBC] = CRYPTO_ALG_FLAG_SUPPORTED;

With the following (over?)simple diff... (note: ipsec not yet tested).

Index: glxsb.c
===================================================================
RCS file: /cvs/src/sys/arch/i386/pci/glxsb.c,v
retrieving revision 1.19
diff -u -p -r1.19 glxsb.c
--- glxsb.c     2 Jul 2010 02:40:15 -0000       1.19
+++ glxsb.c     23 Jul 2010 23:02:36 -0000
@@ -624,7 +624,8 @@ glxsb_crypto_encdec(struct cryptop *crp,
        int offset;
        uint32_t control;
 
-       if (crd == NULL || (crd->crd_len % SB_AES_BLOCK_SIZE) != 0) {
+       if (ses->ses_klen != 128 || crd == NULL ||
+           (crd->crd_len % SB_AES_BLOCK_SIZE) != 0) {
                err = EINVAL;
                goto out;
        }

...aes256 speed looks a lot better, and aes128 is still accelerated:

aes-256 cbc       4964.68k     5741.21k     5992.96k     6055.81k     6071.80k
aes-128-cbc        851.87k     3244.89k    10520.89k    23840.70k    37009.52k

But it looks like there's code which should already handle this case
(see the references to ses_swd_enc); I don't see why that isn't working
(though I did think it seemed more complicated than necessary). I also
don't really understand why glxsb claims CRYPTO_ALG_FLAG_SUPPORTED for
various things it doesn't support (though as the speed results above
show, that doesn't seem to hurt). So if anyone can throw light on why
it might have been done that way, I'd be interested to hear...
