(or at least trying to get close)
Don't even think about storing stuff on your server!
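A minimal sketch of what that can look like in Django settings, assuming django-redis and django-storages are installed (both package choices and the hostname are assumptions, not from the talk): sessions live in a shared cache and uploads go to object storage, so any single app server can disappear without losing state.

# Sketch only: keep state off the app server so instances stay disposable
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',   # assumes django-redis
        'LOCATION': 'redis://redis.internal:6379/1',  # hypothetical host
    }
}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'  # assumes django-storages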
The first line of defense for optimization
# Bad
def check_for_me(self):
    users = Users.objects.filter(role='developer')
    for el in users:
        # This executes an additional db query
        # on each iteration
        prof = el.profile
        if prof.real_name == 'Julian Gindi':
            return True
    return False

# Good
def check_for_me(self):
    # This queryset is 'lazy'
    users = Users.objects.filter(role='developer')
    users = users.select_related('profile')
    for el in users:
        # This does NOT execute an additional db query;
        # the profile came back in the same JOINed query
        prof = el.profile
        if prof.real_name == 'Julian Gindi':
            return True
    return False
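As a side note (not from the original example), once the related lookup is known, the whole loop can usually collapse into a single filtered query that the database evaluates for you.

# Sketch: one query, no Python-side loop at all
def check_for_me(self):
    return Users.objects.filter(
        role='developer',
        profile__real_name='Julian Gindi',
    ).exists()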
import yaml

from django.conf import settings
from django.core.cache import cache

def get_manifest():
    cached = cache.get('asset_manifest')
    if not cached:
        manifest_file = settings.PROJECT_ROOT + '/dist/manifests/assets-manifest.json'
        # JSON is valid YAML, so yaml can parse the manifest
        with open(manifest_file) as f:
            json_manifest = yaml.safe_load(f)
        # Cache the parsed manifest for 12 hours
        cache.set('asset_manifest', json_manifest, 43200)
        return json_manifest
    else:
        return cached
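For what it's worth, on Django 1.9+ the same read-through pattern can be written with cache.get_or_set, which only invokes the callable on a cache miss (the load_manifest helper below is made up for the sketch):

def load_manifest():
    manifest_file = settings.PROJECT_ROOT + '/dist/manifests/assets-manifest.json'
    with open(manifest_file) as f:
        return yaml.safe_load(f)

def get_manifest():
    # load_manifest() runs only when 'asset_manifest' is missing or expired
    return cache.get_or_set('asset_manifest', load_manifest, 43200)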
from celery import shared_task
import requests

# If you set up a result backend, you can query for status information
# and fetch the result when the task is complete
@shared_task
def api_call(url, method):
    if method == 'GET':
        r = requests.get(url)
        return r.json()
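Calling it from view code might look like this (a sketch; the URL is made up, and polling the AsyncResult only works if a result backend is configured):

# Returns an AsyncResult immediately; the HTTP call happens in a worker
result = api_call.delay('https://api.example.com/items', 'GET')

# Later, once a result backend is configured:
if result.ready():
    data = result.get(timeout=5)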
uWSGI and Nginx: A love story
[uwsgi]
socket = {{ base }}/run/myProject.sock
# Django's wsgi file
module = configuration.wsgi:application
# Creating a pidfile to control individual vassals
pidfile = {{ base }}/run/myProject.pid
# the virtualenv (full path)
home = {{ base }}/venv
master = true
enable-threads = true
single-interpreter = true
# Turning on webscale
cheaper-algo = spare
# number of workers to spawn at startup
cheaper-initial = 2
# maximum number of workers that can be spawned
workers = 10
# how many workers can be spawned at a time
cheaper-step = 2
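With this cheaper configuration, uWSGI starts two workers and scales toward the ten-worker ceiling in steps of two as load grows, then shrinks back when traffic drops, instead of holding ten idle processes. The whole thing can be launched with uwsgi --ini myProject.ini (the filename is assumed here).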
A great pair: they speak the same language
upstream django {
    # Must match the socket path in the uWSGI config
    server unix://{{ base }}/run/myProject.sock;
}

# Send all non-media requests to the Django server.
location @uwsgi {
    ...
    uwsgi_pass django;
    include /etc/nginx/uwsgi_params;
    proxy_redirect off;
}
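The "same language" here is the binary uwsgi protocol: uwsgi_pass speaks it natively over the unix socket, so there is no extra HTTP proxy hop between Nginx and the workers.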
Modify the listen queue

# The OS limits this for you (default 128);
# modify the file below to raise it
/proc/sys/net/core/somaxconn

# uWSGI setting
listen = 2000

Increase the number of allowed open files (unix sockets are just files)

> vi /etc/sysctl.conf
fs.file-max = 70000
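The sysctl.conf change takes effect after running sysctl -p (or a reboot). The kernel cap and uWSGI's listen value go together: if listen is raised to 2000 while net.core.somaxconn is still 128, the larger backlog cannot actually take effect (uWSGI will complain), so raise both.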
A traditional deployment installs dependencies (hopefully) using a configuration management system. Servers take a while to become 'ready' to receive new code.
Pre-built images already contain all your dependencies. New code is baked into a fresh image and rolled out using a "canary" system. Since the image is "compiled" ahead of time, one or many servers become active as soon as the deployment succeeds.