Hey guys!
Turns out that one of the biggest bottlenecks was the type guessing when
parsing the JSON content into a "map[string]any". As soon as I implemented
more appropriate structs to unmarshal the bytes, I immediately got faster
responses. Another big improvement was when changing from the sta
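For anyone curious, the struct change is roughly the one below; the Item
type and its fields are just an illustrative stand-in, not my real payload:

package main

import (
    "encoding/json"
    "fmt"
)

// Decoding into a concrete struct avoids the per-field type guessing (and
// the extra allocations) that decoding into map[string]any incurs.
type Item struct {
    ID    int     `json:"id"`
    Name  string  `json:"name"`
    Price float64 `json:"price"`
}

func main() {
    data := []byte(`{"id": 1, "name": "widget", "price": 9.99}`)

    // Slower path: every value ends up boxed in an interface.
    var generic map[string]any
    if err := json.Unmarshal(data, &generic); err != nil {
        panic(err)
    }

    // Faster path: fields decode straight into typed struct members.
    var item Item
    if err := json.Unmarshal(data, &item); err != nil {
        panic(err)
    }
    fmt.Println(generic["name"], item.Name)
}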
Hi all!
Thanks for the input! I'll do some profiling here and update you on my
findings.
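In case it's useful to anyone else, the profiling I have in mind is just
the standard net/http/pprof route, along the lines of this sketch (the
address and port are arbitrary):

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers the /debug/pprof/* handlers
)

func main() {
    // Expose the profiling endpoints on a side port while the benchmark
    // runs, then inspect with e.g.:
    //   go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
    log.Println(http.ListenAndServe("localhost:6060", nil))
}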
I don't want to change anything in Nginx, because I'm comparing the
different stacks on the same basis. If I try to tune Nginx or anything
like that I'll be comparing apples to oranges, so
Have you tried a tcpdump of the packets between the Go program and nginx?
Is it using HTTP/1.1 or HTTP/2? If it's HTTP/1.1, does tcpdump show that
it actually starts all 50 HTTP client requests simultaneously?
You are making all these concurrent requests to the same host. The default
of MaxCo
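To illustrate what I mean, the knobs that usually matter when hammering a
single host live on http.Transport; the numbers below are only
placeholders, and resp.Proto is an easy way to see which protocol was
actually negotiated:

package main

import (
    "fmt"
    "net/http"
    "time"
)

func main() {
    // The default shared Transport only keeps
    // http.DefaultMaxIdleConnsPerHost (2) idle connections per host, so
    // 50 concurrent requests to one host keep opening fresh connections.
    tr := &http.Transport{
        MaxIdleConns:        100,
        MaxIdleConnsPerHost: 100, // example value, tune for the benchmark
        MaxConnsPerHost:     0,   // 0 means unlimited
        IdleConnTimeout:     90 * time.Second,
    }
    client := &http.Client{Transport: tr, Timeout: 10 * time.Second}

    resp, err := client.Get("http://localhost:8080/") // placeholder URL
    if err != nil {
        fmt.Println(err)
        return
    }
    defer resp.Body.Close()
    fmt.Println("negotiated protocol:", resp.Proto) // e.g. HTTP/1.1
}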
I wonder a bit about io.ReadAll versus constructing a JSON Decoder. In
general, though, using pprof is the best way to start to break down a
question like this. Would the actual workload involve more structured JSON,
or more computation with decoded values?
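For concreteness, the two shapes I'm thinking of are roughly these (the
Payload type is just a stand-in):

package main

import (
    "encoding/json"
    "fmt"
    "io"
    "strings"
)

type Payload struct {
    Name string `json:"name"`
}

// readAllThenUnmarshal buffers the whole body first, then decodes it.
func readAllThenUnmarshal(r io.Reader) (Payload, error) {
    var p Payload
    body, err := io.ReadAll(r)
    if err != nil {
        return p, err
    }
    err = json.Unmarshal(body, &p)
    return p, err
}

// decodeStreaming decodes straight off the reader (e.g. resp.Body),
// skipping the intermediate []byte allocation.
func decodeStreaming(r io.Reader) (Payload, error) {
    var p Payload
    err := json.NewDecoder(r).Decode(&p)
    return p, err
}

func main() {
    body := `{"name": "example"}`
    a, _ := readAllThenUnmarshal(strings.NewReader(body))
    b, _ := decodeStreaming(strings.NewReader(body))
    fmt.Println(a.Name, b.Name)
}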
On Fri, Dec 2, 2022 at 8:13 PM Diogo Baeder wrote:
Hi guys,
I've been working on some experiments with different web application stacks
to check their performance under a specific scenario: one in which I have
to make several concurrent requests and then gather the results together
(in order) and throw them out as JSON in the response body. (T
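To make the scenario concrete, a stripped-down sketch of the Go version
looks something like the code below; the URLs and error handling are
simplified stand-ins, not the actual benchmark code:

package main

import (
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "sync"
)

// fetchAll fires one goroutine per URL and writes each body into its own
// slot, so the results come back in the same order as the input URLs.
func fetchAll(urls []string) ([]string, error) {
    results := make([]string, len(urls))
    errs := make([]error, len(urls))
    var wg sync.WaitGroup

    for i, u := range urls {
        wg.Add(1)
        go func(i int, u string) {
            defer wg.Done()
            resp, err := http.Get(u)
            if err != nil {
                errs[i] = err
                return
            }
            defer resp.Body.Close()
            body, err := io.ReadAll(resp.Body)
            if err != nil {
                errs[i] = err
                return
            }
            results[i] = string(body)
        }(i, u)
    }
    wg.Wait()

    for _, err := range errs {
        if err != nil {
            return nil, err
        }
    }
    return results, nil
}

func main() {
    // Placeholder URLs; the real setup sits behind nginx.
    urls := []string{"http://localhost:8080/a", "http://localhost:8080/b"}
    results, err := fetchAll(urls)
    if err != nil {
        fmt.Println(err)
        return
    }
    // Emit the gathered results, in order, as a JSON array.
    out, _ := json.Marshal(results)
    fmt.Println(string(out))
}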