The not so exciting world of WSGI
What happens when a WSGI Server gets an Async Request?
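As a refresher, a WSGI application is a plain synchronous callable; the worker is occupied until it returns (a sketch only, names illustrative):

def application(environ, start_response):
    # There is no way to await anything here; the worker thread/process
    # is blocked until this function returns.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from WSGI"]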
The exciting not so exciting world of ASGI
What happens when an ASGI Server gets a Sync Request?
ASGI model
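For contrast, an ASGI application is an async callable driven by events, so a single worker can interleave requests while it awaits (a sketch only, HTTP scope assumed):

async def application(scope, receive, send):
    # Respond to an HTTP request; between awaits the worker is free
    # to service other requests.
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({
        "type": "http.response.body",
        "body": b"hello from ASGI",
    })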
Joe Armstrong - Humble engineer / Creator of Erlang
But Iqbal?
This means
Iqbal 1 > One ASGI worker can only serve one synchronous request at a time (we have no async views in our codebase)
Iqbal 2 > Correct
Iqbal 1 > I don't believe you
Iqbal 2 > I'm full of shit sometimes
Iqbal 1 > Let's validate this
Initiate Shitty benchmarks
time.sleep()
import random
import time

# In request body
thread_id = random.random()
print("tick {}".format(thread_id))
time.sleep(0.5)
print("tock {}".format(thread_id))
time.sleep(0.5)
print("tack {}".format(thread_id))
DB.get() + time.sleep()
Atomic Transaction
Lock row
bank = Bank.objects.select_for_update().first()
<<time.sleep() only snippet>>
return Response("abc")
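One detail the slide leaves implicit: select_for_update() only works inside a transaction, so "Atomic Transaction" presumably means wrapping the body roughly like this (a sketch):

from django.db import transaction

with transaction.atomic():
    # Lock the first Bank row for the duration of the transaction
    bank = Bank.objects.select_for_update().first()
    # <<time.sleep() only snippet>> runs while the row lock is held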
Atomic Transaction
Lock random row
position = int(random.random() * 100) + 1
banks = Bank.objects.filter(id=position).select_for_update()
bank = banks[0]
Sleep in DB
import random

from django.db import connection

thread_id = random.random()
print("tick {}".format(thread_id))
cursor = connection.cursor()
# pg_sleep() makes the wait happen inside Postgres rather than in Python
cursor.execute("""SELECT pg_sleep(0.5)""")
print("tock {}".format(thread_id))
cursor.execute("""SELECT pg_sleep(0.5)""")
print("tack {}".format(thread_id))
Initiate Shitty benchmarks
Check
Check
WTF
WTF
WTF
Testing done via Apache ab with two concurrent requests
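For the record, the runs were roughly of this shape (the URL and total request count are placeholders, not the exact values used):

ab -n 100 -c 2 http://127.0.0.1:8000/bench/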
But Iqbal?
Iqbal 1 > I told you, full of shit
Iqbal 2 > I am sometimes but ...
Iqbal 1 > Resign .. Retire .. Resign. Let the younger folks ...
Iqbal 1 > Stop taking jobs that belong to young people
Iqbal 3 > The Django Community is your friend
Carlton Gibson - Humble engineer / Django Chat co-host, django-crispy-forms maintainer, maintainer of Awesome Django & former Django Fellow
The frustration. Why doesn't this perform for shit?
Multiple threads bad for ORM
Multiple threads bad for ORM
A sign
This is an ongoing discussion on the django internals forum: https://forum.djangoproject.com/t/asynchronous-orm/5925/24
Hmmmm
Yep, verified: in the old asgiref, asgiref.sync.SyncToAsync.__init__ has the thread_sensitive boolean defaulting to False, whereas in newer versions it defaults to True.
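Roughly what that boolean controls (a sketch; fetch_first_bank is an illustrative helper and Bank is the model from the benchmarks):

from asgiref.sync import sync_to_async

def fetch_first_bank():
    # plain synchronous ORM code
    return Bank.objects.first()

# asgiref < 3.3.0: thread_sensitive defaulted to False, so each wrapped call
# could run on its own thread from a pool.
# asgiref >= 3.3.0: thread_sensitive defaults to True, so wrapped calls are
# serialised onto a single thread.
fetch_first_bank_async = sync_to_async(fetch_first_bank, thread_sensitive=True)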
Concurrency 5 - Random Row Lock 5 workers:
With old asgiref
Daphne 1 worker 3.92
Gunicorn 5 workers 3.75
Uvicorn 1 worker 3.12
Uvicorn 5 workers 3.92
With the latest asgiref (all order has returned to the world; I'll explain why):-
Daphne 1 worker (with latest asgiref >= 3.3.0) 0.95
Uvicorn 1 worker (with latest asgiref >= 3.3.0) 0.96
Uvicorn 5 workers (with latest asgiref >= 3.3.0) 2.55
Moar Benchmarks
Testing done via Apache ab with five concurrent requests
Sweet
Stability > Performance
Upgrading to the latest version reduced the throughput, and in an expected manner. This is documented on djangoproject.com:-
asgiref version 3.3.0 changed the default value of the thread_sensitive parameter to True. This is a safer default, and in many cases interacting with Django the correct value, but be sure to evaluate uses of sync_to_async() if updating asgiref from a prior version.
Recommended Actions:-
We should upgrade asgiref
And either
Use uvicorn with multiple workers, configure daphne with multiple workers (it can be done), or shift from ASGI to WSGI. ASGI by default wraps all Django sync code in sync_to_async (see the command sketch after this list)
-- OR --
Use gunicorn or any other WSGI server for now, since we don't use async views
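For example (the project module name is a placeholder; worker counts would need tuning for our boxes):

gunicorn myproject.wsgi:application --workers 5
uvicorn myproject.asgi:application --workers 5

Daphne has no built-in worker flag; running several daphne processes behind a shared socket (e.g. systemd socket activation or a load balancer) is one way to get the same effect, which is what "(it can be done)" above refers to.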
Daphne is not bad. It works as expected
Gold standard: having two request routers, one for ASGI and one for WSGI
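One reading of that: keep both entry points in the project and point each class of traffic at the server that suits it (a sketch, assuming the standard Django project layout):

# myproject/wsgi.py
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()

# myproject/asgi.py
from django.core.asgi import get_asgi_application
application = get_asgi_application()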
This is an emergency
Random scary pictures to make a point