Hi Jakub,

There's no full profiling path yet - this is something I hope to work on as
we start integrating ASGI code at work early next year.

400ms for a request is already very long; I'm slightly concerned by that
(unless that's normal without Channels installed as well).

If you want to do a basic profiling test, you can just write a minimal
consumer that sends something onto the reply channel and then:

1) Directly inject requests into the channel layer from a Python script,
listen for the response, and time the round trip (see the sketch after this
list). You can import `channel_layer` from your project's `asgi.py` - if you
don't have one, the docs explain how to make one. You'll want to call
channel_layer.send("http.request", {"reply_channel": "http.response!test",
"path": ....}) and channel_layer.receive_many(["http.response!test"]).

2) Make HTTP requests directly to Daphne over localhost on the server running
it, with a similar round-trip measurement. Something like `ab` works well for
this; remember to either target a very simple view or compare it with a
similar test against a WSGI server.

Andrew

On Wed, Dec 28, 2016 at 7:31 PM, <jakub.skale...@pvpc.eu> wrote:

> My question from StackOverflow:
>
>
> My technology stack is Redis as the channels backend, PostgreSQL as the
> database, Daphne as the ASGI server, and Nginx in front of the whole
> application. Everything is deployed using Docker Swarm, with only Redis and
> the database outside. I have about 20 virtual hosts, with 20 interface
> servers, 40 HTTP workers and 20 WebSocket workers. Load balancing is done
> using the ingress overlay Docker network. Oh, and I'm also using
> django-cacheops for short-timeout caching (up to 15 min).
>
>
> The problem is, sometimes very weird things happen performance-wise. Most
> requests are handled in under 400ms, but sometimes a request can take up to
> 2-3s, even under very light load. Profiling the workers with Django Debug
> Toolbar or middleware-based profilers shows nothing (timings of 50-100ms or
> so).
>
>
> My question: is there any good method of profiling the whole request path
> with django-channels? I would like to know how much time each phase takes,
> i.e. when the request was processed by Daphne, when the worker started
> processing, when it finished, and when the interface server sent the
> response to the client. Currently, I have no idea how to approach this.
>
>
> EDIT:
>
> Some more information:
>
> The website isn't that big; it tops out at 1k users online, with some
> real-time features like notifications, chat, and real-time updates of a few
> views. It sometimes looks like the website is "blocked", as if communication
> between Redis / interface servers / workers were incredibly slow.
>
>
> Maybe you have some experience with server/worker ratios, or fine-tuning
> Channels and Redis settings?
>
>
>
