Hi Shawn,

On Sat, Mar 11, 2023 at 07:10:30PM -0700, Shawn Heisey wrote:
> On 12/14/22 07:15, Willy Tarreau wrote:
> > On Wed, Dec 14, 2022 at 07:01:59AM -0700, Shawn Heisey wrote:
> > > On 12/14/22 06:07, Willy Tarreau wrote:
> > > > By the way, are you running with OpenSSL
> > > > 3.0 ?  That one is absolutely terrible and makes extreme abuse of
> > > > mutexes and locks, to the point that certain workloads were divided
> > > > by 2-digit numbers between 1.1.1 and 3.0. It took me one day to
> > > > figure that my load generator which was capping at 400 conn/s was in
> > > > fact suffering from an accidental build using 3.0 while in 1.1.1
> > > > the perf went back to 75000/s!
> > > 
> > > Is this a current problem with the latest openssl built from source?
> > 
> > Yes and deeper than that actually, there's even a meta-issue to try to
> > reference the many reports for massive performance regressions on the
> > project:
> 
> A followup to my followup.  Time flies!
> 
> I was just reading on the openssl mailing list about what's coming in
> version 3.1.  The first release highlight is:
> 
> * Refactoring of the OSSL_LIB_CTX code to avoid excessive locking
> 
> Is anyone enough in tune with openssl happenings to know whether that fixes
> the issues that Willy was advising me about?  Or maybe improves the
> situation but doesn't fully resolve it?

According to the OpenSSL devs, 3.1 should be "4 times better than 3.0",
so it could still remain 5-40 times worse than 1.1.1. I intend to run
some tests on it soon on a large machine, but preparing tests takes a
lot of time and my progress was delayed by last week's painful bug.
I'll share my findings anyway.
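
As an aside, since the accidental 3.0 build of my load generator is how
I got bitten in the first place: "haproxy -vv" already reports both the
OpenSSL version it was built with and the one it is running on, and for
your own test tools a quick check along these lines does the same (just
a rough sketch, build it with "cc -lcrypto"; the file name is whatever
you like):

    /* Sketch: print compile-time vs run-time OpenSSL versions, to
     * catch a binary accidentally linked against 3.0.
     */
    #include <stdio.h>
    #include <openssl/opensslv.h>   /* OPENSSL_VERSION_TEXT (headers) */
    #include <openssl/crypto.h>     /* OpenSSL_version() (library)    */

    int main(void)
    {
        printf("built with : %s\n", OPENSSL_VERSION_TEXT);
        printf("running on : %s\n", OpenSSL_version(OPENSSL_VERSION));
        return 0;
    }

If the two lines disagree, you're loading a different libcrypto than
the one you think you built against.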

> I tried to figure this out for myself based on data in the CHANGES.md file,
> but didn't see anything that looked relevant to my very untrained eye.

Quite frankly I suspect it's the same for those who write that file as
well :-/

> Reading the code wouldn't help, as I am completely clueless when it comes to
> encryption code.

Same for me.

Cheers,
Willy
