Dustin Davis
Developer, Entrepreneur

Have you ever wanted to give your model a set of month choices mapped to the integers 1-12? I would guess it's pretty common – common enough to be included in Django itself. Well, it is, in django.utils.dates. Here is a quick tip on how to include it in a model:

from django.db import models
from django.utils.dates import MONTHS

class RevenueGoal(models.Model):
    month = models.PositiveSmallIntegerField(choices=MONTHS.items())
    year = models.PositiveIntegerField()
    goal = models.DecimalField('Revenue Goal', max_digits=8, decimal_places=2)
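MONTHS is simply a mapping of the integers 1-12 to (lazily translated) month names. You can see the shape of the choices it produces with plain calendar data – a sketch only, since Django's real object returns lazy translation proxies rather than plain strings:

```python
import calendar

# Roughly the mapping django.utils.dates.MONTHS provides; Django's
# version maps 1-12 to lazily-translated month names.
MONTHS = {i: calendar.month_name[i] for i in range(1, 13)}

# choices wants (value, label) pairs, which .items() supplies.
choices = sorted(MONTHS.items())
print(choices[0])   # (1, 'January')
print(choices[-1])  # (12, 'December')
```

With choices set this way, forms render a select box of month names while the database stores only the small integer.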

Disclaimer: I am not a sysadmin. I’m just a developer. I welcome and encourage comments to improve this process!

I have set up a couple of Django servers lately and taken copious notes, extracted from various sources. Below are the commands I issue to a fresh Ubuntu server install to get Django up and running. This puts everything on one server (PostgreSQL, Celery, RabbitMQ, etc.), so it's nice for a small starter project, but don't expect it to scale.

Log in as root and add a non-root user. Add the user to the sudo group. Then log out and log back in as the new user.

adduser username
adduser username sudo

Update the local package index. Upgrade all the packages that can be upgraded. Remove packages that are no longer needed and then reboot for good measure.

sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get autoremove
sudo reboot

Install libraries for Python, pip, PIL/Pillow, PostgreSQL, libevent (for gevent), memcached and its client library, RabbitMQ, git, nginx, & supervisor

sudo apt-get install build-essential python-dev python-pip libjpeg8-dev libfreetype6-dev zlib1g-dev postgresql postgresql-contrib libpq-dev libevent-dev memcached libmemcached-dev rabbitmq-server git nginx supervisor

Install virtualenv and virtualenvwrapper. To enable virtualenvwrapper, we add a line to our .bashrc file and re-source it.

sudo pip install virtualenv virtualenvwrapper
echo "" >> .bashrc
echo "source /usr/local/bin/virtualenvwrapper.sh" >> .bashrc
source .bashrc

Make a virtualenv

mkvirtualenv project_env

Install the postgres adminpack extension (run the CREATE EXTENSION statement at the psql prompt)

sudo -u postgres psql
CREATE EXTENSION adminpack;

Change postgres password, create the database, and grant the new user access to it

sudo passwd postgres
sudo su - postgres
psql -d template1 -c "ALTER USER postgres WITH PASSWORD 'changeme';"
createdb projectdb
createuser username --pwprompt
psql -d template1 -U postgres
GRANT ALL PRIVILEGES ON DATABASE projectdb TO username;

Configure RabbitMQ (installed above) with a dedicated user and vhost

sudo rabbitmqctl add_user username username_pw
sudo rabbitmqctl add_vhost username_vhost
sudo rabbitmqctl set_permissions -p username_vhost username ".*" ".*" ".*"
sudo rabbitmqctl clear_permissions -p username_vhost guest
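The vhost and user created above would then typically be wired into the Django/Celery settings as a broker URL. A sketch, where 'username', 'username_pw', and 'username_vhost' are the same placeholder values used in the rabbitmqctl commands:

```python
# Build the Celery broker URL matching the RabbitMQ user/vhost above.
# All three values are placeholders; substitute your own.
user = 'username'
password = 'username_pw'
vhost = 'username_vhost'

BROKER_URL = 'amqp://%s:%s@localhost:5672/%s' % (user, password, vhost)
print(BROKER_URL)
```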

Generate an SSH key to upload to GitHub, Bitbucket, or wherever you host your code.

ssh-keygen -t rsa -C you@sample.com
cat ~/.ssh/id_rsa.pub

Create some /var/www dirs & set permissions on these directories.

sudo mkdir -p /var/www/static
sudo mkdir /var/www/media
sudo chown -R username:www-data /var/www

Clone your repository to your home directory and install the packages in your requirements file.

git clone git@bitbucket.org:yourusername/project.git
cd project/requirements
pip install -r prod.txt

Remove the default symbolic link for Nginx. Create a new blank config, and make a symlink to it. Edit the new configuration file.

sudo rm /etc/nginx/sites-enabled/default
sudo touch /etc/nginx/sites-available/project
cd /etc/nginx/sites-enabled
sudo ln -s ../sites-available/project
sudo vim /etc/nginx/sites-available/project

Add the following content to nginx config:

# define an upstream server named gunicorn on localhost port 8000
upstream gunicorn {
    server localhost:8000;
}

# make an nginx server
server {
    # listen on port 80
    listen 80;

    # for requests to these domains
    server_name yourdomain.com www.yourdomain.com;

    # look in this directory for files to serve
    root /var/www/;

    # keep logs in these files
    access_log /var/log/nginx/project.access.log;
    error_log /var/log/nginx/project.error.log;

    # allow users to upload large files
    # See http://wiki.nginx.org/HttpCoreModule#client_max_body_size
    client_max_body_size 0;

    # this tries to serve a static file at the requested url
    # if no static file is found, it passes the url to gunicorn
    try_files $uri @gunicorn;

    # define rules for gunicorn
    location @gunicorn {
        client_max_body_size 0;

        # proxy to the gunicorn upstream defined above
        proxy_pass http://gunicorn;

        # makes sure the URLs don't actually say http://gunicorn
        proxy_redirect off;

        # If gunicorn takes > 5 minutes to respond, give up
        # Feel free to change the time on this
        proxy_read_timeout 5m;

        # make sure these HTTP headers are set properly
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# the same server again, over SSL
server {
    listen 443 ssl;

    ssl_certificate /etc/ssl/localcerts/yourdomain.com.crt;
    ssl_certificate_key /etc/ssl/localcerts/yourdomain.com.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5:!kEDH;

    server_name yourdomain.com www.yourdomain.com;

    root /var/www/;

    access_log /var/log/nginx/project.access.log;
    error_log /var/log/nginx/project.error.log;

    client_max_body_size 0;

    try_files $uri @gunicorn;

    location @gunicorn {
        client_max_body_size 0;
        proxy_pass http://gunicorn;
        proxy_redirect off;
        proxy_read_timeout 5m;

        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Restart nginx

sudo service nginx restart

Set up database

cd /home/username/project
python manage.py syncdb --settings=project.settings.prod
python manage.py migrate --settings=project.settings.prod

Run collectstatic command

python manage.py collectstatic -l --noinput --settings=project.settings.prod
sudo /etc/init.d/nginx restart

Configure supervisor

Add the following contents to /etc/supervisor/conf.d/celeryd.conf

sudo vim /etc/supervisor/conf.d/celeryd.conf


# the name of this service as far as supervisor is concerned
[program:celeryd]

# the command to start celery
command = /home/username/.virtualenvs/project_env/bin/python /home/username/project/manage.py celeryd -B -E --settings=project.settings.prod

# the directory to be in while running this
directory = /home/username/project

# the user to run this service as
user = username

# start this at boot, and restart it if it fails
autostart = true
autorestart = true

# take stdout and stderr of celery and write to these log files
stdout_logfile = /var/log/supervisor/celeryd.log
stderr_logfile = /var/log/supervisor/celeryd_err.log

Now we will create CeleryCam in /etc/supervisor/conf.d/celerycam.conf

sudo vim /etc/supervisor/conf.d/celerycam.conf


[program:celerycam]
command = /home/username/.virtualenvs/project_env/bin/python /home/username/project/manage.py celerycam --settings=project.settings.prod
directory = /home/username/project
user = username
autostart = true
autorestart = true
stdout_logfile = /var/log/supervisor/celerycam.log
stderr_logfile = /var/log/supervisor/celerycam_err.log

Create Gunicorn script in /etc/supervisor/conf.d/gunicorn.conf

sudo vim /etc/supervisor/conf.d/gunicorn.conf


[program:gunicorn]
command = /home/username/.virtualenvs/project_env/bin/python /home/username/project/manage.py run_gunicorn -w 4 -k gevent --settings=project.settings.prod
directory = /home/username/project
user = username
autostart = true
autorestart = true
stdout_logfile = /var/log/supervisor/gunicorn.log
stderr_logfile = /var/log/supervisor/gunicorn_err.log

Restart supervisor

sudo service supervisor restart

Restart/stop/start all services managed by supervisor

sudo supervisorctl restart all
sudo supervisorctl stop all
sudo supervisorctl start all

Or restart just celeryd

sudo supervisorctl restart celeryd

Or, start just gunicorn

sudo supervisorctl start gunicorn

Reboot and make sure everything starts up

sudo reboot

Bonus: set up SSL. The generated .csr is what you submit to your certificate authority, which returns the .crt file referenced in the nginx config above.

sudo mkdir /etc/ssl/localcerts
cd /etc/ssl/localcerts
sudo openssl req -new -nodes -days 365 -keyout yourdomain.com.key -out yourdomain.com.csr
sudo chmod 400 /etc/ssl/localcerts/yourdomain.com.key
sudo chmod 400 /etc/ssl/localcerts/yourdomain.com.crt

I have been tasked with updating our real-time revenue stats at Neutron. After spending about a week going through and updating our PHP scripts, I finally decided it would be worth my time and sanity to start from scratch with Python. I'm building a Django application that will store revenue stats from different sources, which I can then use to build views and an API for stat tools.

So for the past few days I’ve been writing scripts that log in to other websites and scrape data, or accessing the site’s API’s if they have one. I’ve learned a few things.

  1. requests > httplib2
  2. SOAP is the suck, but at least it's an API. Suds makes SOAP suck less. I get that SOAP is basically all .NET developers know as far as APIs go. ;)
  3. Beautiful Soup is a nice last resort.
  4. I'm actually surprised how many businesses can survive on such crappy technology.

I saved Google AdSense for last, figuring they would have the best API and it would therefore be the easiest to implement. It turned out to be more challenging than I anticipated. Apparently you can't just plug in a username/password or API key; you have to go through the whole OAuth2 handshake to gain access to the API.

Unfortunately, documentation was not as easy to find as I had hoped, and I ran into many broken links. Of all people, I thought Google would be better at this. For example, their most up-to-date developer docs point to a broken link to read more about authentication and authorization. (OK, that was weird: as soon as I posted it here, the link started working. I guess you can all thank me for that. ;))

So this blog post is an attempt to document the process of getting reports out of Adsense and into my Django application.

In order to use Google's API for accessing AdSense reports, you need to use the AdSense Management API. This API only supports OAuth2, so you have to do the authentication flow in the browser at least once to get your credentials; you can then save those credentials for future access. To be honest, while I've heard about OAuth many times, I have never actually had a need to use it until now. I'm learning as I go, so feel free to leave a comment and point out any misunderstandings I might have.

As I understand it, Google has one large API console for their various products. Before you can talk to AdSense, you have to register your application through the Google API console, so I registered mine. Since I don't have a live URL yet, I used my development URL (localhost:8000) for now, and it seemed to work just fine. Download the JSON file with the link provided.
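For reference, the downloaded client_secrets.json has roughly this shape. Every value below is a placeholder, and the redirect URI must match the callback URL your Django app serves:

```python
# Approximate structure of the client_secrets.json file the Google
# API console provides for a web application. All values here are
# placeholders, not real credentials.
client_secrets = {
    "web": {
        "client_id": "1234567890.apps.googleusercontent.com",
        "client_secret": "your-client-secret",
        "redirect_uris": ["http://localhost:8000/adsense/oauth2callback/"],
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://accounts.google.com/o/oauth2/token",
    }
}
```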

Also, while you're managing your APIs, go to the Services tab and turn on the AdSense Management API if you have not already done so. Otherwise, when you try to make a request you will just get an error message that says "Access Not Configured".

Google has created a client library for Python, which is easily installed with pip. They also have a Django sample project that uses this library to go through the OAuth2 handshake. I think it was written for Django 1.1 (Django 1.5 was just released as of this writing), so it is a bit out of date, but it helps greatly as a starting point.

My app is simple. I just want to read in the amount of revenue on a given day and store it in my local database.

I created a new app in my django project called ‘adsense’. I created a models.py file to store credentials.

from django.contrib.auth.models import User
from django.db import models
from oauth2client.django_orm import CredentialsField

class Credential(models.Model):
    id = models.ForeignKey(User, primary_key=True)
    credential = CredentialsField()

class Revenue(models.Model):
    date = models.DateField(unique=True)
    revenue = models.DecimalField(max_digits=7, decimal_places=2)

    def __unicode__(self):
        return '{0} ${1}'.format(self.date, self.revenue)

I put the JSON file I downloaded from the API console in my app folder and created the following views.py.

import os

from django.conf import settings
from django.contrib.auth.decorators import login_required
from django.contrib.sites.models import Site
from django.http import HttpResponseBadRequest, HttpResponse
from django.http import HttpResponseRedirect
from oauth2client import xsrfutil
from oauth2client.client import flow_from_clientsecrets
from oauth2client.django_orm import Storage

from .models import Credential

CLIENT_SECRETS = os.path.join(os.path.dirname(__file__), 'client_secrets.json')

FLOW = flow_from_clientsecrets(
    CLIENT_SECRETS,
    # read-only scope for the AdSense Management API
    scope='https://www.googleapis.com/auth/adsense.readonly',
    redirect_uri='http://localhost:8000/adsense/oauth2callback/')

def index(request):
    storage = Storage(Credential, 'id', request.user, 'credential')
    credential = storage.get()
    if credential is None or credential.invalid is True:
        FLOW.params['state'] = xsrfutil.generate_token(
            settings.SECRET_KEY, request.user)
        # force approval prompt in order to get refresh_token
        FLOW.params['approval_prompt'] = 'force'
        authorize_url = FLOW.step1_get_authorize_url()
        return HttpResponseRedirect(authorize_url)
    return HttpResponse('Validated.')

def auth_return(request):
    if not xsrfutil.validate_token(
            settings.SECRET_KEY, request.REQUEST['state'], request.user):
        return  HttpResponseBadRequest()
    credential = FLOW.step2_exchange(request.REQUEST)
    storage = Storage(Credential, 'id', request.user, 'credential')
    storage.put(credential)
    return HttpResponseRedirect("/adsense/")

Note that in the index view I set a parameter to force the approval prompt. I was having problems with "invalid_grant" errors because my credentials seemed to expire, and I'd have to go through the OAuth2 handshake every morning. After much research I learned that I wasn't getting a refresh_token back. I found a tip on StackOverflow explaining how to get it, and forcing the approval prompt fixed the problem.

In my main urls.py file I include a link to my app urls file:

main urls.py:

from django.conf.urls import patterns, include, url
from django.contrib import admin

admin.autodiscover()

urlpatterns = patterns(
    '',
    url(r'^adsense/', include('adsense.urls', namespace='adsense')),

    url(r'^admin/doc/', include('django.contrib.admindocs.urls')),
    url(r'^admin/', include(admin.site.urls)),
)

adsense/urls.py:

from django.conf.urls import patterns, url

urlpatterns = patterns(
    'adsense.views',
    url(r'^$', 'index', name='index'),
    url(r'^oauth2callback/$', 'auth_return', name='auth_return'),
)

Lastly, I have a class that makes the call to the API to get revenue for given dates. This is located in adsense/tasks.py as I will likely hook this up soon to run as a task with Celery/RabbitMQ.

import datetime
import httplib2

from apiclient.discovery import build
from celery.task import PeriodicTask
from django.contrib.auth.models import User
from oauth2client.django_orm import Storage

from .models import Credential, Revenue

TODAY = datetime.date.today()
YESTERDAY = TODAY - datetime.timedelta(days=1)

class GetReportTask(PeriodicTask):
    run_every = datetime.timedelta(minutes=2)

    def run(self, *args, **kwargs):
        scraper = Scraper()
        scraper.get_report()

class Scraper(object):
    def get_report(self, start_date=YESTERDAY, end_date=TODAY):
        user = User.objects.get(pk=1)
        storage = Storage(Credential, 'id', user, 'credential')
        credential = storage.get()
        if credential is not None and not credential.invalid:
            http = httplib2.Http()
            http = credential.authorize(http)
            service = build('adsense', 'v1.2', http=http)
            reports = service.reports()
            report = reports.generate(
                startDate=start_date.strftime('%Y-%m-%d'),
                endDate=end_date.strftime('%Y-%m-%d'),
                metric=['EARNINGS'],
                dimension=['DATE'])
            data = report.execute()
            for row in data['rows']:
                date = row[0]
                revenue = row[1]

                try:
                    record = Revenue.objects.get(date=date)
                except Revenue.DoesNotExist:
                    record = Revenue()
                record.date = date
                record.revenue = revenue
                record.save()
        else:
            print 'Invalid Adsense Credentials'

To make it work, I go to http://localhost:8000/adsense/. I'm then prompted to log in to my Google account and authorize my app for AdSense access. The credentials are then stored in my local database, and I can call my Scraper's get_report() method. Congratulations to me, it worked!

I’ve been putting some time into updating an old site this weekend. I noticed that the homepage was taking a long time to load – around 5 to 8 seconds. Not good.

I tried caching queries, but it didn't help at all. Then I realized the slowdown was most likely due to my decision long ago to use textile to render text to HTML.

The site is located at direct-vs-dish.com. It essentially compares DIRECTV to DISH Network. The home page lists a number of features, each of which represents a database record. Here is my original model for the features:

class Feature(models.Model):
    category = models.CharField(max_length=255)
    slug = models.SlugField()
    overview = models.TextField(blank=True, null=True)
    dish = models.TextField(blank=True, null=True)
    directv = models.TextField(blank=True, null=True)
    dish_link = models.URLField(blank=True, null=True)
    directv_link = models.URLField(blank=True, null=True)
    order = models.PositiveSmallIntegerField()

    def __unicode__(self):
        return self.category

    class Meta:
        ordering = ['order']

Three of the above fields use textile: overview, dish, & directv. I currently have 14 feature records, so that is a potential 42 textile conversions for the home page.
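The arithmetic is easy to sketch: using a counter as a stand-in for textile() shows every uncached page view paying for all 42 conversions (the numbers mirror the 14 records and 3 fields above; the rendering itself is faked):

```python
render_calls = {'count': 0}

def fake_textile(text):
    # Stand-in for the real textile() call, which is the expensive part.
    render_calls['count'] += 1
    return '<p>%s</p>' % text

# 14 feature records, each with 3 textile fields, as on the home page.
features = [
    {'overview': 'o', 'dish': 'd', 'directv': 'tv'} for _ in range(14)
]

# Without cached HTML fields, a single page view renders everything:
for feature in features:
    for field in ('overview', 'dish', 'directv'):
        fake_textile(feature[field])

print(render_calls['count'])  # 42 conversions per request
```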

In order to cache these textile conversions, I added three new fields. I then added a save method to populate the cached html fields. My model now looks like this:

from django.contrib.markup.templatetags.markup import textile

class Feature(models.Model):
    category = models.CharField(max_length=255)
    slug = models.SlugField()
    overview = models.TextField(blank=True, null=True)
    overview_html = models.TextField(blank=True)
    dish = models.TextField(blank=True, null=True)
    dish_html = models.TextField(blank=True)
    directv = models.TextField(blank=True, null=True)
    directv_html = models.TextField(blank=True)
    dish_link = models.URLField(blank=True, null=True)
    directv_link = models.URLField(blank=True, null=True)
    order = models.PositiveSmallIntegerField()

    def __unicode__(self):
        return self.category

    def save(self, **kwargs):
        self.overview_html = textile(self.overview)
        self.dish_html = textile(self.dish)
        self.directv_html = textile(self.directv)
        return super(Feature, self).save(**kwargs)

    class Meta:
        ordering = ['order']
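The underlying pattern is independent of Django: pay the rendering cost at write time, read the cached result for free. A minimal framework-free sketch (the class and names here are mine, not from the site's code):

```python
class CachedRender(object):
    """Cache an expensive render at write time instead of on every read."""

    def __init__(self, render):
        self._render = render
        self.source = None
        self.html = None

    def set_source(self, text):
        # Render once, when the text changes...
        self.source = text
        self.html = self._render(text)

# A trivial renderer standing in for textile().
field = CachedRender(lambda t: '<p>%s</p>' % t)
field.set_source('hello')
# ...then every read is just an attribute lookup:
print(field.html)  # <p>hello</p>
```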

I use the Django admin to edit features so I added some styling to hide the cached html fields with an option to show them if you want to see what has been converted and cached.

class FeatureAdmin(admin.ModelAdmin):
    list_display = ('category', 'order')
    prepopulated_fields = {"slug": ("category",)}
    fieldsets = (
        (None, {
            'fields': ('category', 'slug', 'overview', 'dish', 'dish_link',
                       'directv', 'directv_link', 'order')
        }),
        ('Auto Generated', {
            'classes': ('collapse',),
            'fields': ('overview_html', 'dish_html', 'directv_html'),
        }),
    )

admin.site.register(Feature, FeatureAdmin)

My template tags went from this:

{{ feature.overview|textile }}

To this:

{{ feature.overview_html|safe }}

This has dropped my homepage rendering time to about 750ms, without any caching of queries. Huge win!

If you are hosting a Django site, Sentry will make your life easier.

After my review of various hosting companies I decided to put EnvelopeBudget.com on Webfaction. But I was still impressed with DigitalOcean, so I kept my virtual server. Why not? It's only $5 per month for full root access! Because all their servers have SSDs, I've never seen a virtual server boot so fast. Soon will come the day when you hear someone say, "Remember when computers had moving parts?" I kept it because I figured I'd find a use for it eventually. Well, I found one.

I love Sentry. We used it at SendOutCards to help us better manage our server errors. I think we were running a pre-1.0 release back when it was just called django-sentry; it has come a long way. I set up an account on GetSentry.com and loved it, but since I'm bootstrapping a start-up, I decided to set up my own sentry server on my DigitalOcean account.

I documented the process I went through setting up the server.

Create Ubuntu 12.10 X32 Server droplet & ssh into it as root

# add non-root user
adduser sentry

# add to sudoers
adduser sentry sudo

# log out of root and log in as sentry

# update the local package index
sudo apt-get update

# actually upgrade all packages that can be upgraded
sudo apt-get dist-upgrade

# remove any packages that are no longer needed
sudo apt-get autoremove

# reboot the machine, which is only necessary for some updates
sudo reboot

# install python-dev
sudo apt-get install build-essential python-dev

# download distribute
curl -O http://python-distribute.org/distribute_setup.py

# install distribute
sudo python distribute_setup.py

# remove installation files
rm distribute*

# use distribute to install pip
sudo easy_install pip

# install virtualenv and virtualenvwrapper
sudo pip install virtualenv virtualenvwrapper

# to enable virtualenvwrapper add this line to the end of the .bashrc file
echo "" >> .bashrc
echo "source /usr/local/bin/virtualenvwrapper.sh" >> .bashrc

# exit and log back in to restart your shell

# make virtualenv
mkvirtualenv sentry_env

# install sentry
pip install sentry

# create settings file (file will be located in ~/.sentry/sentry.conf.py)
sentry init

# install postgres
sudo apt-get install postgresql postgresql-contrib libpq-dev

# install postgres adminpack (run CREATE EXTENSION at the psql prompt)
sudo -u postgres psql
CREATE EXTENSION adminpack;

# change postgres password & create database
sudo passwd postgres
sudo su - postgres
psql -d template1 -c "ALTER USER postgres WITH PASSWORD 'changeme';"
createdb your_sentry_db_name
createuser your_sentry_user --pwprompt
psql -d template1 -U postgres
GRANT ALL PRIVILEGES ON DATABASE your_sentry_db_name to your_sentry_user;

# update config file to use postgres & host (with vim or your editor of choice)
sudo apt-get install vim
vim .sentry/sentry.conf.py

The following are the relevant contents of my sentry.conf.py file:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'your_sentry_db_name',
        'USER': 'your_sentry_user',
        'PASSWORD': 'your_password',
        'HOST': 'localhost',
    }
}

You will also want to configure your SMTP mail account. I just used my gmail account.

# going to need psycopg2
workon sentry_env
pip install psycopg2

# set up database
sentry upgrade

# let's try it out!
sentry start

# install nginx
sudo apt-get install nginx

# remove the default symbolic link
sudo rm /etc/nginx/sites-enabled/default

# create a new blank config, and make a symlink to it
sudo touch /etc/nginx/sites-available/sentry
cd /etc/nginx/sites-enabled
sudo ln -s ../sites-available/sentry

# edit the nginx configuration file
sudo vim /etc/nginx/sites-available/sentry

Here are the contents of my nginx file:

server {
    # listen on port 80
    listen 80;

    # for requests to these domains
    server_name yourdomain.com www.yourdomain.com;

    # keep logs in these files
    access_log /var/log/nginx/sentry.access.log;
    error_log /var/log/nginx/sentry.error.log;

    # allow users to upload large files
    # See http://wiki.nginx.org/HttpCoreModule#client_max_body_size
    client_max_body_size 0;

    location / {
        # proxy to the sentry web process on port 9000
        proxy_pass http://localhost:9000;
        proxy_redirect off;

        proxy_read_timeout 5m;

        # make sure these HTTP headers are set properly
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

That’s about it.

# restart nginx
sudo service nginx restart

I’m not really sure of the proper way to keep sentry going. But I just run:

sentry start &

Perhaps someone more knowledgable can leave a comment and suggest the best way to start the service automatically on reboot.

Oh, I also just moved my ZNC bouncer to the same server, as it is much more reliable than connecting to my Mac Mini at home.


I set up supervisor as recommended in the comments and the docs to keep sentry running (though it has never crashed, supervisor does make restarting easier).

sudo apt-get install supervisor
sudo vim /etc/supervisor/conf.d/sentry.conf

Add the following to the sentry.conf file (the program name, sentry-web, is what the supervisorctl commands below refer to):

[program:sentry-web]
directory=/home/sentry/
command=/home/sentry/.virtualenvs/sentry_env/bin/sentry start http
autostart=true
autorestart=true
redirect_stderr=true

Restart supervisord

sudo killall supervisord
sudo supervisord

Upgrading Sentry:

I’ve upgraded twice. It was a painless process…

workon sentry_env
pip install sentry --upgrade
sentry upgrade
sudo supervisorctl restart sentry-web

Hosting Decisions

Note: Since writing this I have become more comfortable managing my own servers; DigitalOcean's price point moved me in that direction, to be honest. DO has since created a referral program: use this link to sign up at DigitalOcean and get a $10 credit (I get a referral credit in turn).

Where do you fit on this scale?

Sysadmin -> DBA -> Backend Programmer -> Frontend Programmer -> Designer

I have a range in the middle, but lack on each end of the spectrum. So when it comes to setting up a hosting server, I’d rather turn it over to someone more experienced in the sysadmin realm. But, when bootstrapping a startup, you find yourself becoming a jack of all trades (and master of none).

I’ve been in the process of re-writing Inzolo and re-launching as Envelope Budget. It recently came time to launch (ready or not). I spent way more time than I intended setting up a hosting account. I have been hosting Inzolo on Webfaction since its inception. Overall I’ve been quite pleased. I don’t really have any performance or downtime issues that I can remember, Webfaction has a nice interface to set up everything I need. I’ve actually been pleasantly surprised in how it has met my needs.

I’ve been hearing a lot of buzz about Heroku though. And so, I thought I’d try deploying there before I went live. First of all, let me explain my stack. EnvelopeBudget.com is written in Django and I’m using PostgreSQL as my database. I’m making use of johnny-cache and using Memcached to speed up the site a bit. I wrote a utility to import Inzolo accounts into Envelope Budget and found that I finally had a real need for asynchronous processing, so I implemented Celery and RabbitMQ to process the import and return status updates to the browser.

I was impressed after doing the Getting Started with Django tutorial on Heroku. What kind of magic is this? So I attempted to get my EnvelopeBudget stack up and running next. I modified my Django project structure to be more Heroku friendly. I probably spent a good 8 hours learning how Heroku makes deployment so simple, though it never really seemed simple. I got it up and running, but in the end I decided it wasn't for me (at least for this project), mainly due to the price. Minimally it would cost me $55 per month because I needed two dynos (one web and one worker) plus the SSL add-on. Seriously, why do they charge $20 per month to use SSL? SSL setup is free on the other three hosting options I'm reviewing here. That was probably the biggest deal breaker. Also, this price was for the dev PostgreSQL add-on, which wouldn't last long; soon I'd need to upgrade to the Basic ($9/mo) or Crane ($50/mo) package, so my hosting was looking more like $105 per month. On top of that, you deploy by pushing to git ('git push heroku master'). This is cool, but it seemed to take forever each time, which was annoying since I had to keep committing and pushing to troubleshoot problems. Deploying with fabric is much faster for me on the other three hosts. Time to move on.

So at this point I've decided I'll just go back to Webfaction. Then, as I'm riding the train home from work and reading through my Twitter feed, I come across a link to a Complete Single Server Django Stack Tutorial. I read through it, and suddenly setting up my own server didn't seem so scary; I've done pretty much all of this before in my own development environment. So I go to the best place I know to spin up a new server fast – Linode. It probably took me about 2 hours to get everything up and running, and I took copious notes along the way. After getting it to work on the 512 plan ($20 per month), I destroyed that Linode and set it up again on a 1 GB plan ($40/month). It took about 40 minutes the second time (setting it up twice was faster than figuring out Heroku). I was surprised at how much faster the performance was on Linode. Webfaction & Heroku felt about the same, but Linode felt significantly faster.

After getting it all set up, I got a tweet from a friend recommending I try out DigitalOcean while I was at it. Looking at the prices and specs, I could get a 1 GB server for half the price, with an SSD to make it faster – but only one core instead of four. I took the time to set it up; the process was pretty much the same as with Linode and only took about 30 minutes this time. Overall the site felt slower than on Linode, though. I'm guessing it was due to having only one core, and because I'm located in Utah: my Linode was in Texas and DigitalOcean's server is in New York. Still, installing packages seemed to take a lot longer, so I suspect their data center's internet speed was the source of the slowdown. Sorry, I don't have any benchmarks, so I can't give real numbers. One thing that really impressed me, though, was the reboot time of the server – about 5 times faster than my Linode, likely due to the SSD.

So now it was time to make a choice. I had a launch counter ticking down on my homepage and I had to decide NOW; I had already spent 3 days making a decision. I finally decided to go with Webfaction's 1 GB plan, which is $40 per month (or $30 per month if paid yearly). I like the idea of having a managed plan. The biggest downside for me is that I don't have root or sudo access. They don't use virtualenv for their application setup, and setting up projects feels a bit kludgy because of it. Also, setting up Celery & RabbitMQ doesn't feel as painless, but I managed it thanks to Mark Liu's tutorial. I know there is a way to use virtualenv and gunicorn on Webfaction, but I doubt I'll take the time to set my project up that way.

There was a snag, though. I had originally set up my account on their most basic plan, which only has 256 MB of RAM. My site was already being killed for using 2x that amount. I needed to upgrade ASAP, but I needed someone there to set up the new account and migrate my existing one. So I actually ended up launching on Linode. The site is up now and hosting performance is great, but I will likely move back to Webfaction because I soon started to realize there is always something else to set up. I have a git repo, a Trac system, email, & FTP already set up on Webfaction, and I would likely want to put a WordPress blog at /blog. All of this is so easy with Webfaction, and it’s more I would have to research to do on Linode.


So here is my tl;dr version in alphabetical order:

DigitalOcean: I love their pricing. For as little as $5 per month I can spin up a Linux server. This would be great for a ZNC IRC bouncer, for example. They still seem fairly new, so time will tell how they compete with Linode. Their internet connection seemed a bit slow, but for root access to a server, that can be overlooked.

Heroku: If I were a hipster I’d bite the bullet and host here to get in with the cool crowd. Overall it was just too expensive for a bootstrapped startup project. The biggest benefit I see with Heroku is the ability to scale fast, both forwards and backwards, when you need to. Scaling is a good problem to have. If I get to that point, money won’t be an issue and I will revisit Heroku. I would probably also use it if I built a very small site where the specs fit within their free model, or if I was in the middle of a hack-a-thon and needed to get online fast.

Linode: This seems to be the standard for spinning up your own dedicated server with root access. If I want root access, performance, and a history of good support, I’ll go here.

Webfaction: I’ve been around the block and learned that the grass is not really greener on the other side. Although I don’t have root access and it’s hosted on CentOS rather than the Debian/Ubuntu I’m more familiar with, it has so many features that make it easy to set up email, multiple domains, SSL, different types of apps (Django + PHP + Ruby on Rails, anyone?), Trac, Git, etc. The price is competitive, the support is good, and the uptime and performance are good – I haven’t found sufficient reason to leave.


After doing a number of installs at work I got more comfortable with deploying on gunicorn & nginx, so I ended up switching to DigitalOcean. This is where EnvelopeBudget.com is currently hosted, and I have a couple of other droplets hosting YouTrack & Sentry. The main reason I left Webfaction was that I needed to update my SSL certificate ASAP, and there is a slight lag with Webfaction because you have to submit a ticket to complete your SSL setup.

Reasons for leaving Webfaction:

  • Total control of SSL setup
  • Performance – I wanted SD drives
  • Price – more computing power for the price
  • Virtualenv – Upgrading is a lot easier when using virtualenv

Things to consider before leaving Webfaction:

  • Webfaction comes with email. I’m now using Zoho for free email.
  • Easier to configure – It took a while to figure out how to run WordPress on /blog with nginx. Also, I had to learn the whole process of configuring an SSL certificate.
  • I didn’t bother migrating Trac. Webfaction had a nice one click installer. I’ve moved to YouTrack instead.
  • There are a number of other one-click install solutions available on Webfaction. Be sure you know what you are leaving.
After my final football game my senior year (1995) with my dad the offensive coordinator.

During my holiday vacation while going through my social feed I happened across a post by Alex Lawrence entitled Don’t Wait Until January. I read it because it looked interesting, not because I had any desire to start exercising or lose weight. Something in the article moved me though. It moved me into activity. Alex’s story resonated with me. I too used to play a lot of sports. I too had back problems. Thankfully, despite doctors saying it would most likely require surgery, I didn’t need surgery.

You wouldn’t know it now if you met me, but I was voted by my senior class as most athletic. I started varsity 7 seasons in 3 sports. Like Alex said, it’s not cool for me to be out of shape.

I love food. My parents had a hard time keeping our pantry stocked. We never had leftovers after a meal because I would just eat whatever was left as I cleared the table. I never had to worry about weight. I was a bean pole. In high school I was 6’3″ and 170 lbs. I tried to gain weight but it seemed I never could. I was generally exercising at least two hours per day through sports.

Last day of my mission at the Johannesburg temple – a stop on the way home.

My senior year I broke my arm pitching in the state tournament. I was bedridden for a while, and once my arm healed I left to serve a two-year mission where I didn’t exercise except for once a week when we would often play sports – and I was in a car the whole time after my first 5 months. I came home weighing 220 lbs. Most people told me I looked normal, so it felt like a comfortable weight. I maintained that weight eating all I wanted and playing a lot of basketball, volleyball, and softball. That was what I weighed when I got married.

On my mission I met a couple that agreed to never gain more than 20% of their marriage weight. I thought that was a cool idea, so I told myself and my wife I would never weigh more than 242. I’ve stuck with that commitment. Once I get up into the 240s (which I have a number of times), I cut back, eat less, exercise a little, and get back down to the 230 range. I don’t think I’ve been down to 220 since being married, though.

The most recent photo I could find – with the family

After reading Alex’s article I decided to jump in and make a public commitment. I left a comment on his blog and even suggested we change the Twitter hashtag to #TmFit rather than #FitLife because there was less noise, so it was easier to follow. Alex concurred. My goal is to weigh 220 by March 1st, 2013. In the distant past I used to make goals public. Then I read that keeping goals private can actually be more beneficial. So I was hesitant to make a public commitment, but I decided to do it anyway.

So far it’s been great. I started using the gym membership that I was planning on canceling. Alex was serious about encouraging each other. I have pushed my workouts a bit farther than planned because of Twitter feedback and encouragement. I will admit, I HATE exercising for the sake of exercising. My life motto was “I don’t believe in exercise unless it is in the form of a sport.” Well, with four kids, a full-time job, and the life of an entrepreneur, I don’t really have time to play all the sports I would like to. So I have got to learn to like exercising – or at least learn to endure it.

During the holidays it was easier for me to take time to exercise. Now the real challenge starts as I try to find a workable routine to get my daily exercise in. Come join us on #TmFit and let’s help each other reach our goals!

I’ve written a post like this before, but that was in 2009 and I was using Windows 7. I have since switched to Linux and then OS X, so I figured it would be a good time to revisit the topic.

Here are the applications and tools I use:

  • PyCharm: I spend the majority of my days in this application. For a long time I wasn’t a fan of IDEs, but this one does so much for me and makes me a better programmer. I can’t imagine working without it now.
  • Chrome: My browser of choice. I guess this is really the most used app on my computer. I love the developer tools as well. It took me a while to give up Firebug, but once I did, there was no reason to open other browsers.
  • iTerm2: I prefer this terminal app to the default in OSX.
  • Tower: I jump back and forth between GUI and CLI for git, but I’ll be honest, I’m a GUI kind of guy and I love using tower – especially for reviewing code changes before I commit.
  • DiffMerge: This is the merge tool I have integrated with Tower. It makes merging conflicts so much easier. Until BeyondCompare becomes available for the Mac, this is the best I could find.
  • PgAdmin3, Base, Sequel Pro: GUI tools for working with databases.
  • LimeChat: For all my IRC communication.
  • Adium: For instant messaging (Google Talk mainly)
  • Tweetbot: Yes, I bought a twitter client. It is that good.
  • Jing: For quickly making screenshots and screencasts (under 5 minutes) to add clarity to Trac & YouTrack tickets.
  • Camtasia or ScreenFlow: For more professional screencasts. (Camtasia for Mac is not nearly as good as Camtasia for Windows)
  • Photoshop, Illustrator, InDesign, Pixelmator: Image editing tools as needed.
  • Optimal Layout: To help manage my window layout.
  • MySpeed: For speeding up online videos.
  • Dropbox: If you use more than one computer you should have a dropbox account.
  • Evernote: I use it, but not as much as everyone raves about it.
  • Picasa: For managing all my personal photos. Love that I have the same experience on Windows, Linux & Mac.

There are other apps, but nothing I use enough to write home about.

Also, there are web apps I use quite frequently that should also get a shout out:

  • Inzolo: My virtual envelope system of budgeting. I wouldn’t generally toot my own horn, but I use this almost daily. I may be moving to a new budgeting system soon though ;)
  • BitBucket: Not quite as popular as GitHub, but I love that they have free private repos! Plus they seem to be improving month after month. No regrets moving all my private repos here.
  • GitHub: Our Git repository of choice at work.
  • StackOverflow: Generally I find the answers to programming questions here first.
  • Then there are the old standbys: Gmail, Google, YouTube, Facebook, etc.

I bought a Raspberry Pi after my GuruPlug died. I figured I’d use it for a ZNC bouncer, but then I bought a Mac Mini and started using it instead. The Raspberry Pi just sat on my desk, as I couldn’t think of a good enough reason to find time to tinker with it. Then I thought of one…

I’ve dropped cable/satellite TV. I’m using SickBeard to download a couple of shows I can’t get on Hulu Plus, Amazon Prime, or Netflix. I have a Roku (with Roxbox) on one TV and an Apple TV connected to another. The problem is that SickBeard downloads my shows in .mkv format. I then have to use HandBrake to convert them to .mp4 (H.264) to get them to play on either device. It often takes longer to convert them than it does to just find a torrent offering the H.264 version. Either way, it’s not as automated as I would like it to be.

I tried once to play an mkv file through Roxbox. It messed up my Roku so it wouldn’t connect to the internet anymore. I had to do a factory reset to get it working again. It just happened again. This time though, I decided to spend some time seeing what I could do with the Raspberry Pi that has been sitting on my desk for months.

I quickly found Raspbmc. Wow! I found an 8 GB SD card, borrowed the charger from my Kindle Fire, and followed the instructions for setting it up. Everything went smoothly and I had a media center up and running in short time. Out of the box it’s pretty cool. It has a nice user interface – not as simple as Roku or Apple TV, but like most open source software, much more robust & configurable.

The Problems

Of course it can’t all be THAT easy – at least for me. I set this up on a TV upstairs. My router is on the main floor in my office. There is no wireless on the Raspberry Pi, so I have to have it wired. Luckily, I have an extra Airport Extreme that got fried in a lightning storm. The incoming port doesn’t work, but it still works as an access point, so I could use it to plug an ethernet cable into my Raspberry Pi. On my main Airport Extreme I have an external hard drive. Getting it mounted was the tricky part on my GuruPlug, and it proved to be a challenge with the Raspberry Pi as well.

I got a bee in my bonnet trying to get this to work and I finally found the solution.

I had to ssh into my Raspberry Pi and install cifs-utils because apparently Raspbmc doesn’t come with it.

sudo apt-get install cifs-utils

Then I could mount my hard drive (Elements is the name of my HDD):

sudo mount -t cifs // -o username=MYUSERNAME,password=MYPASSWORD /home/pi/Elements/

XBMC plays the mkv files perfectly, so now I just need to add a few automated tools to put my files in the right place on my network drive and this whole thing will be so much more hands-off :)
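For the “automated tools” piece, one simple approach would be a small script run from cron that sweeps finished downloads onto the mounted share. This is just a sketch – the function name and all paths here are hypothetical, not something from my actual setup:

```python
import shutil
from pathlib import Path

def sweep_downloads(src_dir, dest_dir, ext='.mkv'):
    """Move finished video files from src_dir onto the mounted network share."""
    src, dest = Path(src_dir), Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    moved = []
    for path in src.glob('*' + ext):
        target = dest / path.name
        shutil.move(str(path), str(target))  # move, don't copy, so src stays clean
        moved.append(target)
    return moved

# e.g. sweep_downloads('/home/pi/downloads', '/home/pi/Elements/TV')
```

A cron entry calling this every few minutes would keep the share current without any manual steps.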

I got a somewhat unique request on a project the other day. My client has a lead tracking system where his salesmen input leads and often upload scanned documents to include with them. I implemented this all with standard Django forms and a formset wizard to input multiple files.

My client was worried that a lot of images would be uploaded and he would have to start paying extra for storage. He asked if I could compress images on upload to save space. After searching the web I found examples of a few different ways of doing it, but after reading about upload handlers in the Django docs, this seemed like the best method, since I wouldn’t have to modify my models or forms at all. Unfortunately for me, it wasn’t as straightforward as I had hoped. I couldn’t find a good example of someone else doing this sort of thing, and it took me MUCH longer than the 30-45 minutes I had planned for.

The good news is that I figured it out, so I’m posting it here in the hope that others can benefit.

I created a file named uploadhandlers.py in my app and added the following code:

import os
from django.conf import settings
from django.core.files.uploadhandler import MemoryFileUploadHandler
from PIL import Image
try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO


class CompressImageUploadHandler(MemoryFileUploadHandler):
    def file_complete(self, file_size):
        """Return a file object if we're activated."""
        if self.content_type is not None and 'image' in self.content_type:
            newfile = StringIO()
            img = Image.open(self.file)
            if img.mode != 'RGB':
                img = img.convert('RGB')  # JPEG can't store alpha or palette modes
            width, height = img.size
            width, height = scale_dimensions(width, height, longest_side=settings.IMAGE_LONGEST_SIDE)
            img = img.resize((width, height), Image.ANTIALIAS)
            img.save(newfile, 'JPEG', quality=settings.JPEG_QUALITY)
            self.file = newfile
            file_size = newfile.tell()  # report the compressed size, not the original
            name, ext = os.path.splitext(self.file_name)
            self.file_name = '{0}.{1}'.format(name, 'jpg')
            self.content_type = 'image/jpeg'
        return super(CompressImageUploadHandler, self).file_complete(file_size)


def scale_dimensions(width, height, longest_side):
    ratio = width / float(height)
    # Landscape
    if ratio > 1:
        return longest_side, int(longest_side / ratio)
    # Portrait
    return int(longest_side * ratio), longest_side

You can see from the code that I am simply extending MemoryFileUploadHandler, which is one of Django’s default upload handlers. I’m overriding the file_complete method to change the image size and JPEG quality, which are settings in my settings file.
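To sanity-check the scaling math, here is a standalone sketch of the scale_dimensions logic with a couple of example dimensions (the longest side of 800 is just an illustration, not my actual setting):

```python
# Standalone version of the scale_dimensions helper so the math can be
# checked in isolation. It preserves the aspect ratio while capping the
# longest side at longest_side.
def scale_dimensions(width, height, longest_side):
    ratio = width / float(height)
    if ratio > 1:  # landscape: cap the width
        return longest_side, int(longest_side / ratio)
    return int(longest_side * ratio), longest_side  # portrait/square: cap the height

print(scale_dimensions(1600, 1200, 800))  # 4:3 landscape -> (800, 600)
print(scale_dimensions(1200, 1600, 800))  # 3:4 portrait  -> (600, 800)
```

Note that as written it will also upscale images whose longest side is already under the limit; a guard returning (width, height) in that case would avoid blowing up small images.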

To implement the change, I update my views. The view that contains the form has to be csrf_exempt, and the view handling the uploads switches to this upload handler on the fly with the following code:

request.upload_handlers.insert(0, CompressImageUploadHandler())