On 17/12/2018 22:02, Jakob Bohm via openssl-users wrote:
> A simpler way is to realize that the formats used by SMIME/CMS (specifically
> the PKCS#7 formats) allow almost unlimited file size, and any 2GiB limit is
> probably an artifact of either the openssl command line tool or some of the
> underlying OpenSSL libraries.
Yes. I started using openssl's smime implementation, then backed out when I
realised there were indeed limits - apparently in the underlying libraries.

On decrypting I got the same kind of errors described in this bug report
thread (and elsewhere if you search, but this is the most recent discussion
I could find): "Attempting to decrypt/decode a large smime encoded file
created with openssl fails regardless of the amount of OS memory available".
https://mta.openssl.org/pipermail/openssl-dev/2016-August/008237.html

The key points are:
- streaming smime *encryption* has been implemented, but
- smime *decryption* is done in memory, so you can't decrypt anything over
  about 1.5G
- possibly this is related to the BUF_MEM structure's dependency on the size
  of an int

There's an RT ticket, but I could not log in to read it. It appears to have
been migrated to GitHub: https://github.com/openssl/openssl/issues/2515

It's closed - I infer as "won't fix" (yet?) - and this is still an issue, as
my experience suggests, at least in the versions distributed for the systems
I will be using.
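Since encryption streams but decryption buffers the whole message, one workaround is to do the hybrid encryption by hand: encrypt the bulk data with a random symmetric key via `openssl enc` (which streams), and protect only the small key file with RSA. A rough sketch follows - file names and sizes are illustrative, `-pbkdf2` needs OpenSSL 1.1.1 or later (drop it on 1.1.0, at the cost of weaker key derivation), and this provides no authentication of the ciphertext, so treat it as a starting point rather than a finished scheme:

```shell
# Sketch of a manual hybrid-encryption workaround (illustrative names).
# The bulk data is streamed by `openssl enc`; only the tiny session-key
# file goes through RSA, so file size is no longer limited by memory.
set -e

# Stand-ins: a keypair and some data (use your real files instead)
openssl req -x509 -nodes -newkey rsa:2048 -subj "/CN=test" \
    -keyout priv.pem -out cert.pem 2>/dev/null
dd if=/dev/zero of=big.dat bs=1024 count=64 2>/dev/null

# Encrypt: random session key, AES for the data, RSA for the key
openssl rand -hex 32 > session.key
openssl enc -aes-256-cbc -pbkdf2 -pass file:session.key \
    -in big.dat -out big.dat.enc
openssl rsautl -encrypt -certin -inkey cert.pem \
    -in session.key -out session.key.enc
rm session.key

# Decrypt: recover the session key, then stream-decrypt the data
openssl rsautl -decrypt -inkey priv.pem \
    -in session.key.enc -out session.key.dec
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:session.key.dec \
    -in big.dat.enc -out big.dat.restored
cmp big.dat big.dat.restored && echo "round trip OK"
```

This is essentially what CMS does internally anyway; doing it by hand just moves the bulk cipher out of the in-memory ASN.1 path.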
I was using openssl 1.0.2g-1ubuntu4.14 (Xenial) and I've verified it with
openssl 1.1.0g-2ubuntu4.3 (Bionic, the latest LTS release for Ubuntu):

$ openssl version -a
OpenSSL 1.1.0g  2 Nov 2017
built on: reproducible build, date unspecified
platform: debian-amd64
compiler: gcc -DDSO_DLFCN -DHAVE_DLFCN_H -DNDEBUG -DOPENSSL_THREADS
-DOPENSSL_NO_STATIC_ENGINE -DOPENSSL_PIC -DOPENSSL_IA32_SSE2
-DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m
-DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DRC4_ASM -DMD5_ASM -DAES_ASM
-DVPAES_ASM -DBSAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DPADLOCK_ASM
-DPOLY1305_ASM -DOPENSSLDIR="\"/usr/lib/ssl\""
-DENGINESDIR="\"/usr/lib/x86_64-linux-gnu/engines-1.1\""
OPENSSLDIR: "/usr/lib/ssl"
ENGINESDIR: "/usr/lib/x86_64-linux-gnu/engines-1.1"

$ dd if=/dev/zero of=sample.txt count=2M bs=1024
$ openssl req -x509 -nodes -newkey rsa:2048 -keyout mysqldump-secure.priv.pem -out mysqldump-secure.pub.pem
$ openssl smime -encrypt -binary -text -aes256 -in sample.txt -out sample.txt.enc -outform DER -stream mysqldump-secure.pub.pem
$ openssl smime -decrypt -binary -inkey mysqldump-secure.priv.pem -inform DER -in sample.txt.enc -out sample.txt.restored
Error reading S/MIME message
139742630175168:error:07069041:memory buffer routines:BUF_MEM_grow_clean:malloc failure:../crypto/buffer/buffer.c:138:
139742630175168:error:0D06B041:asn1 encoding routines:asn1_d2i_read_bio:malloc failure:../crypto/asn1/a_d2i_fp.c:191

> Anyway, setting up an alternative data format might be suitable if combined
> with other functionality requiring chunking, such as recovery from
> lost/corrupted data "blocks" (where each block is much much larger than
> a 1K "disk block").
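The chunking idea can at least be approximated with standard tools today: split the input into fixed-size blocks and smime-encrypt each block as its own message, so decryption memory use is bounded by the chunk size rather than the file size. A proof-of-concept sketch - chunk size and file names are made up, `split -d` is a GNU coreutils option, and there is no protection against chunk reordering or truncation, so this is illustration only:

```shell
# Proof-of-concept chunked smime encryption (illustrative names/sizes):
# each 4 KiB chunk becomes an independent DER-encoded smime message, so
# the in-memory decryption path only ever sees one chunk at a time.
set -e
openssl req -x509 -nodes -newkey rsa:2048 -subj "/CN=test" \
    -keyout priv.pem -out cert.pem 2>/dev/null
dd if=/dev/urandom of=data.bin bs=1024 count=16 2>/dev/null

# Encrypt each chunk separately
split -b 4096 -d data.bin chunk.
for f in chunk.??; do
    openssl smime -encrypt -binary -aes256 \
        -in "$f" -out "$f.enc" -outform DER -stream cert.pem
    rm "$f"
done

# Decrypt the chunks in (glob-sorted) order and reassemble
: > data.restored
for f in chunk.??.enc; do
    openssl smime -decrypt -binary -inkey priv.pem \
        -inform DER -in "$f" >> data.restored
done
cmp data.bin data.restored && echo "chunks OK"
```

A real format along Jakob's lines would also want per-chunk sequence numbers and integrity protection, which plain smime messages don't give you across chunks.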
I should add that I don't really care about the format, or even the use of
openssl - just the ability to tackle large files with the benefits of public
key encryption, in a self-contained way, without fiddly work deploying keys
(as GnuPG seems to require for its keyring, judging from my experience
deploying Backup-Ninja / Duplicity using Ansible). So other solutions, if
tried and tested, might work for me.

Cheers, Nick
--
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users