On Fri, Sep 18, 2020 at 10:48 AM Martin Grigorov <mgrigo...@apache.org>
wrote:

> Hi,
>
> On Wed, Sep 16, 2020 at 10:24 AM Martin Grigorov <mgrigo...@apache.org>
> wrote:
>
>> Hi Remy,
>>
>> On Tue, Sep 15, 2020 at 6:16 PM Rémy Maucherat <r...@apache.org> wrote:
>>
>>> On Tue, Sep 15, 2020 at 5:08 PM Martin Grigorov <mgrigo...@apache.org>
>>> wrote:
>>>
>>> > Hi Mark,
>>> >
>>> > On Tue, Sep 15, 2020 at 3:34 PM Mark Thomas <ma...@apache.org> wrote:
>>> >
>>> > > On 15/09/2020 12:46, Martin Grigorov wrote:
>>> > > > On Tue, Sep 15, 2020 at 2:37 PM Martin Grigorov <
>>> mgrigo...@apache.org>
>>> > > > wrote:
>>> > > >
>>> > > >> Hi,
>>> > > >>
>>> > > >> I am running some load tests on Tomcat and I've noticed that when
>>> > > >> HTTP2 is enabled the throughput drops considerably.
>>> > > >>
>>> > > >> Here are the steps to reproduce:
>>> > > >>
>>> > > >> 1) Enable HTTP2, e.g. by commenting out this connector:
>>> > > >>
>>> > > >> https://github.com/apache/tomcat/blob/d381d87005fa89d1f19d9091c0954f317c135d9d/conf/server.xml#L103-L112
>>> > > >>
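>>> > > >> For reference, the commented-out connector at those lines looks
>>> > > >> roughly like this (the certificate paths and the exact protocol
>>> > > >> class may differ in your copy):
>>> > > >>
>>> > > >> <Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
>>> > > >>            maxThreads="150" SSLEnabled="true" >
>>> > > >>     <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
>>> > > >>     <SSLHostConfig>
>>> > > >>         <Certificate certificateKeyFile="conf/localhost-rsa-key.pem"
>>> > > >>                      certificateFile="conf/localhost-rsa-cert.pem"
>>> > > >>                      certificateChainFile="conf/localhost-rsa-chain.pem"
>>> > > >>                      type="RSA" />
>>> > > >>     </SSLHostConfig>
>>> > > >> </Connector>
>>> > > >>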
>>> > > >> 2) Download Vegeta load tool from:
>>> > > >> https://github.com/tsenart/vegeta/releases/
>>> > > >>
>>> > > >> 3) Run the load tests:
>>> > > >>
>>> > > >> 3.1) HTTP/1.1
>>> > > >> echo -e '{"method": "GET", "url": "http://localhost:8080/examples/"}' |
>>> > > >> vegeta attack -format=json -rate=0 -max-workers=1000 -duration=10s |
>>> > > >> vegeta encode > /tmp/http1.json; and vegeta report -type=json
>>> > > >> /tmp/http1.json | jq .
>>> > > >>
>>> > > >> 3.2) HTTP2
>>> > > >> echo -e '{"method": "GET", "url": "https://localhost:8443/examples/"}' |
>>> > > >> vegeta attack -format=json -http2 -rate=0 -max-workers=1000 -insecure
>>> > > >> -duration=10s | vegeta encode > /tmp/http2.json; and vegeta report
>>> > > >> -type=json /tmp/http2.json | jq .
>>> > > >>
>>> > > >> As explained at https://github.com/tsenart/vegeta#-rate, -rate=0 means
>>> > > >> that Vegeta will try to send as many requests as possible with the
>>> > > >> configured number of workers.
>>> > > >> I use '-insecure' because I use a self-signed certificate.
>>> > > >>
>>> > > >> On my machine I get around 14-15K reqs/sec for HTTP/1.1, with all
>>> > > >> responses having code=200.
>>> > > >> But with HTTP/2 Tomcat starts returning errors like these:
>>> > > >>
>>> > > >>   "errors": [
>>> > > >>     "Get \"https://localhost:8443/examples/\": http2: server sent GOAWAY
>>> > > >>      and closed the connection; LastStreamID=9259, ErrCode=PROTOCOL_ERROR,
>>> > > >>      debug=\"Stream [9,151] has been closed for some time\"",
>>> > > >>     "http2: server sent GOAWAY and closed the connection;
>>> > > >>      LastStreamID=9259, ErrCode=PROTOCOL_ERROR,
>>> > > >>      debug=\"Stream [9,151] has been closed for some time\"",
>>> > > >>     "Get \"https://localhost:8443/examples/\": http2: server sent GOAWAY
>>> > > >>      and closed the connection; LastStreamID=239, ErrCode=PROTOCOL_ERROR,
>>> > > >>      debug=\"Stream [49] has been closed for some time\""
>>> > > >>   ]
>>> > > >>
>>> > > >> when I ask for more than 2000 reqs/sec, i.e. -rate=2000/1s
>>> > >
>>> > > That indicates that the client has sent a frame associated with a stream
>>> > > that the server closed previously and that that stream has been removed
>>> > > from the Map of known streams to make room for new ones. See
>>> > > Http2UpgradeHandler.pruneClosedStreams()
>>> > >
>>> > > It looks like the client is making assumptions about server behaviour
>>> > > that go beyond the requirements of RFC 7540, section 5.3.4.
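>>> > >
>>> > > Conceptually the pruning works like this (a minimal, hypothetical Java
>>> > > sketch of the idea only, not Tomcat's actual code; the map and names
>>> > > here are invented for illustration):
>>> > >
>>> > > import java.util.Iterator;
>>> > > import java.util.Map;
>>> > > import java.util.TreeMap;
>>> > >
>>> > > class StreamPruner {
>>> > >     // Evict already-closed streams once the connection's stream map
>>> > >     // grows past 'max'. A frame that later arrives for an evicted
>>> > >     // stream ID can no longer be matched to any known stream, so the
>>> > >     // connection is torn down with GOAWAY / PROTOCOL_ERROR.
>>> > >     static void pruneClosedStreams(Map<Integer, Boolean> closedById, int max) {
>>> > >         Iterator<Map.Entry<Integer, Boolean>> it = closedById.entrySet().iterator();
>>> > >         while (closedById.size() > max && it.hasNext()) {
>>> > >             if (it.next().getValue()) { // true == stream already closed
>>> > >                 it.remove();            // the server forgets this stream
>>> > >             }
>>> > >         }
>>> > >     }
>>> > >
>>> > >     public static void main(String[] args) {
>>> > >         Map<Integer, Boolean> streams = new TreeMap<>();
>>> > >         for (int id = 1; id <= 9; id += 2) {
>>> > >             streams.put(id, id < 7); // streams 1-5 closed, 7 and 9 active
>>> > >         }
>>> > >         pruneClosedStreams(streams, 3);
>>> > >         System.out.println(streams.keySet()); // prints [5, 7, 9]
>>> > >     }
>>> > > }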
>>> > >
>>> >
>>> > This is possible!
>>> > I've just tested with two more HTTP2 impls:
>>> >
>>> > 1) Node.js
>>> >
>>> > http2-server.js
>>> > ===================================================
>>> > const http2 = require('http2');
>>> > const fs = require('fs');
>>> >
>>> > const server = http2.createSecureServer({
>>> >     key: fs.readFileSync('/path/to/server.key'),
>>> >     cert: fs.readFileSync('/path/to/server.crt')
>>> > });
>>> > server.on('error', (err) => console.error(err));
>>> >
>>> > server.on('stream', (stream, headers) => {
>>> >     // stream is a Duplex
>>> >     stream.respond({
>>> >         'content-type': 'text/plain; charset=utf-8',
>>> >         ':status': 200
>>> >     });
>>> >     stream.end('Hello world!');
>>> > });
>>> >
>>> > server.listen(18080);
>>> > ===================================================
>>> >
>>> > run with: node http2-server.js
>>> >
>>> > Runs fine with -rate=0 and gives around 8K reqs/sec
>>> >
>>> > 2) Rust
>>> >
>>> > Cargo.toml
>>> > ===================================================
>>> > [package]
>>> > name = "my-http2-server"
>>> > version = "0.0.1"
>>> > publish = false
>>> > authors = ["Martin Grigorov <mgrigo...@apache.org>"]
>>> > license = "MIT/Apache-2.0"
>>> > description = "Load test HTTP/2"
>>> > repository = "https://github.com/martin-g/http2-server-rust"
>>> > keywords = ["http2"]
>>> > edition = "2018"
>>> >
>>> > [dependencies]
>>> > actix-web = { version = "3", features = ["openssl"] }
>>> > openssl = { version = "0.10", features = ["v110"] }
>>> > ===================================================
>>> >
>>> > src/main.rs
>>> > ===================================================
>>> > use actix_web::{web, App, HttpRequest, HttpServer, Responder};
>>> > use openssl::ssl::{SslAcceptor, SslFiletype, SslMethod};
>>> >
>>> > async fn index(_req: HttpRequest) -> impl Responder {
>>> >     "Hello world!"
>>> > }
>>> >
>>> > #[actix_web::main]
>>> > async fn main() -> std::io::Result<()> {
>>> >     let mut builder =
>>> >         SslAcceptor::mozilla_intermediate(SslMethod::tls()).unwrap();
>>> >     builder
>>> >         .set_private_key_file("/path/to/server.key", SslFiletype::PEM)
>>> >         .unwrap();
>>> >     builder.set_certificate_chain_file("/path/to/server.crt").unwrap();
>>> >
>>> >     HttpServer::new(|| App::new().route("/", web::get().to(index)))
>>> >         .bind_openssl("127.0.0.1:18080", builder)?
>>> >         .run()
>>> >         .await
>>> > }
>>> > ===================================================
>>> >
>>> > run with: cargo run
>>> > Again no errors, throughput: 3K reqs/sec
>>> >
>>> > I will test with Netty tomorrow too, but so far it looks like only
>>> Tomcat
>>> > fails under load.
>>> >
>>> >
>>> >
>>> > > >> All the access log entries look like:
>>> > > >>
>>> > > >> 127.0.0.1 - - [15/Sep/2020:13:59:24 +0300] "GET /examples/ HTTP/2.0" 200 1126
>>> > > >> 127.0.0.1 - - [15/Sep/2020:13:59:24 +0300] "GET /examples/ HTTP/2.0" 200 1126
>>> > > >> 127.0.0.1 - - [15/Sep/2020:13:59:24 +0300] "GET /examples/ HTTP/2.0" 200 1126
>>> > > >> 127.0.0.1 - - [15/Sep/2020:13:59:24 +0300] "GET /examples/ HTTP/2.0" 200 1126
>>> > > >> 127.0.0.1 - - [15/Sep/2020:13:59:24 +0300] "GET /examples/ HTTP/2.0" 200 1126
>>> > > >> 127.0.0.1 - - [15/Sep/2020:13:59:24 +0300] "GET /examples/ HTTP/2.0" 200 1126
>>> > > >>
>>> > > >> i.e. there are no error codes, just 200.
>>> > > >> Vegeta reports these errors with status code = 0. I think this just
>>> > > >> means that it didn't get a proper HTTP response, just a TCP-level error.
>>> > > >> There are no errors in catalina.out.
>>> > > >>
>>> > > >> Are there any settings I can tune to get better throughput with HTTP2?
>>> > > >>
>>> > > >> Tomcat 10.0.0-M8.
>>> > >
>>> > > If you really want to maximise throughput then you need to reduce the
>>> > > number of concurrent requests to (or a little above) the number of cores
>>> > > available on the server. Go higher and you'll start to see throughput
>>> > > tail off due to context switching.
>>> > >
>>> > > If you want to demonstrate throughput with a large number of clients
>>> > > you'll probably need to experiment with maxThreads,
>>> > >
>>> >
>>> > I forgot to say that I use maxThreads=8.
>>> >
>>> >
>>> > > maxConcurrentStreams and maxConcurrentStreamExecution.
>>> > >
>>> > > If I had to guess, I'd expect maxConcurrentStreams ==
>>> > > maxConcurrentStreamExecution and low numbers for all of them to give the
>>> > > best results.
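>>> > >
>>> > > For example, something like this on the HTTP/2 connector (the attribute
>>> > > names are the documented Http2Protocol options; the values are just a
>>> > > starting point to experiment with, not a recommendation):
>>> > >
>>> > > <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
>>> > >            maxThreads="8" SSLEnabled="true">
>>> > >     <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol"
>>> > >                      maxConcurrentStreams="8"
>>> > >                      maxConcurrentStreamExecution="8" />
>>> > >     ...
>>> > > </Connector>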
>>>
>>
> These didn't help.
>
> I've created a GitHub repo with all the HTTP2 servers I've tested with so far:
> https://github.com/martin-g/http2-server-perf-tests
> Throughput with default settings:
> - Netty: 29K reqs/s
> - Golang: 19K
> - Node.js: 17K
> - Rust: 3K
> - Tomcat: 400, with many errors
>
> So far only Tomcat returns errors :-/
>
> I'll keep digging!
>

What is the difference
between org.apache.coyote.http2.StreamStateMachine.State#CLOSED_RX
and org.apache.coyote.http2.StreamStateMachine.State#CLOSED_TX?

I read some parts of https://tools.ietf.org/html/rfc7540 but I don't see
anything there about two distinct kinds of CLOSED state.

> Martin
>
>
>
>>> > >
>>> >
>>> > I will check those tomorrow!
>>> >
>>>
>>> I use h2load, and IMO I have very good performance with h2c. No errors.
>>> However, same as your tool, its concurrency flag only controls how many
>>> streams it will use *inside a single connection* [I respect that, but I
>>>
>>
>> I think this is not the case here.
>> As far as I can see Vegeta does not use a custom ClientConnPool and the
>> default pool in net/http2 shares many connections per host:port key:
>> https://github.com/golang/net/blob/62affa334b73ec65ed44a326519ac12c421905e3/http2/client_conn_pool.go
>>
>>
>>
>>> would have expected the tool to also use multiple connections in an
>>> attempt to simulate multiple users, but nope]. This can only go so far
>>> and the performance numbers caused me to add the flags to limit stream
>>> concurrency.
>>>
>>> Rémy
>>>
>>
