Dustin Davis

I have been tasked with updating our real-time revenue stats at Neutron. After spending about a week going through and updating our PHP scripts, I finally decided it would be worth my time and sanity to start from scratch with Python. I’m building a Django application that will store revenue stats from different sources, which I can then use to build views and an API for stat tools.

So for the past few days I’ve been writing scripts that log in to other websites and scrape data, or access a site’s API if it has one. I’ve learned a few things.

  1. requests > httplib2
  2. SOAP is the suck, but at least it’s an API. Suds makes SOAP suck less. I get it that SOAP is basically all .net developers know as far as APIs. ;)
  3. Beautiful Soup is a nice last resort.
  4. I’m actually surprised by how many businesses can survive on such crappy technology.

I saved Google AdSense for last, figuring they would have the best API and it would therefore be the easiest to implement. It turned out to be more challenging than I anticipated. Apparently you can’t just plug in a username/password or API key; you have to go through the whole OAuth2 handshake to gain access to the API.

Unfortunately, documentation was not as easy to find as I had hoped. I found many broken links. Of all people, I thought Google would be better at this. For example, their most up-to-date developer docs point to this broken link to read more about authentication and authorization. (OK, that was weird: as soon as I posted it here, the link started working. I guess you can all thank me for that ;))

So this blog post is an attempt to document the process of getting reports out of Adsense and into my Django application.

In order to use Google’s API for accessing AdSense reports, you need to use the AdSense Management API. This API only supports OAuth2, so you have to do the authentication flow in the browser at least once in order to get your credentials; you can then save those credentials so you have access going forward. To be honest, while I’ve heard about OAuth many times, I had never actually needed to use it until now. I’m learning as I go, so feel free to leave a comment and point out any misunderstandings I might have.

As I understand it, Google has one large API console for their various products. Before you can talk to AdSense, you have to register your application through the Google API console. Since I don’t have a live URL yet, I used my development URL (localhost:8000) for now, and it seemed to work just fine. Download the JSON file with the link provided.
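
The downloaded client_secrets.json looks something like this (all values below are placeholders; yours come straight from the API console):

```json
{
  "web": {
    "client_id": "1234567890.apps.googleusercontent.com",
    "client_secret": "your-client-secret",
    "redirect_uris": ["http://localhost:8000/adsense/oauth2callback/"],
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://accounts.google.com/o/oauth2/token"
  }
}
```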

Also, while you’re managing your APIs, you’ll need to go to the Services tab and turn on the AdSense Management API if you have not already done so. Otherwise, when you try to make a request you will just get an error message that says “Access Not Configured”.

Google has created a client library for Python, which is easily installed with pip. They also have a Django sample project that uses this library to go through the OAuth2 handshake. I think it was written for Django 1.1 (Django 1.5 was just released as of this writing), so it is a bit out of date, but it helps greatly as a starting point.

My app is simple. I just want to read in the amount of revenue on a given day and store it in my local database.

I created a new app in my django project called ‘adsense’. I created a models.py file to store credentials.

from django.contrib.auth.models import User
from django.db import models
from oauth2client.django_orm import CredentialsField

class Credential(models.Model):
    # one stored OAuth2 credential per user, keyed on the User record
    id = models.ForeignKey(User, primary_key=True)
    credential = CredentialsField()

class Revenue(models.Model):
    date = models.DateField(unique=True)
    revenue = models.DecimalField(max_digits=7, decimal_places=2)

    def __unicode__(self):
        return '{0} ${1}'.format(self.date, self.revenue)

I put the JSON file I downloaded from the API console in my app folder and created the following views.py.

import os

from django.conf import settings
from django.contrib.auth.decorators import login_required
from django.contrib.sites.models import Site
from django.http import HttpResponseBadRequest, HttpResponse
from django.http import HttpResponseRedirect
from oauth2client import xsrfutil
from oauth2client.client import flow_from_clientsecrets
from oauth2client.django_orm import Storage

from .models import Credential


CLIENT_SECRETS = os.path.join(os.path.dirname(__file__), 'client_secrets.json')

FLOW = flow_from_clientsecrets(
    CLIENT_SECRETS,
    scope='https://www.googleapis.com/auth/adsense.readonly',
    redirect_uri='http://{0}/adsense/oauth2callback/'.format(
        Site.objects.get_current().domain))


@login_required
def index(request):
    storage = Storage(Credential, 'id', request.user, 'credential')
    credential = storage.get()
    if credential is None or credential.invalid is True:
        FLOW.params['state'] = xsrfutil.generate_token(
            settings.SECRET_KEY, request.user)
        # force approval prompt in order to get refresh_token
        FLOW.params['approval_prompt'] = 'force'
        authorize_url = FLOW.step1_get_authorize_url()
        return HttpResponseRedirect(authorize_url)
    else:
        return HttpResponse('Validated.')


@login_required
def auth_return(request):
    if not xsrfutil.validate_token(
            settings.SECRET_KEY, request.REQUEST['state'], request.user):
        return HttpResponseBadRequest()
    credential = FLOW.step2_exchange(request.REQUEST)
    storage = Storage(Credential, 'id', request.user, 'credential')
    storage.put(credential)
    return HttpResponseRedirect("/adsense/")

Note that in the index view I set FLOW.params['approval_prompt'] = 'force' to force the approval prompt. I was getting “invalid_grant” errors because my credentials seemed to expire; I’d have to go through the OAuth2 handshake every morning. After much research I learned that I wasn’t getting a refresh_token back. I found a tip on StackOverflow explaining how to get one, and forcing the approval prompt fixed the problem.

In my main urls.py file I include my app’s urls file:

main urls.py:

from django.conf.urls import patterns, include, url
from django.contrib import admin

admin.autodiscover()

urlpatterns = patterns(
    '',
    url(r'^adsense/', include('adsense.urls', namespace='adsense')),

    url(r'^admin/doc/', include('django.contrib.admindocs.urls')),
    url(r'^admin/', include(admin.site.urls)),
)

adsense/urls.py:

from django.conf.urls import patterns, url

urlpatterns = patterns(
    'adsense.views',
    url(r'^$', 'index', name='index'),
    url(r'^oauth2callback/$', 'auth_return', name='auth_return'),
)

Lastly, I have a class that makes the call to the API to get revenue for given dates. This is located in adsense/tasks.py as I will likely hook this up soon to run as a task with Celery/RabbitMQ.

import datetime
import httplib2

from apiclient.discovery import build
from celery.task import PeriodicTask
from django.contrib.auth.models import User
from oauth2client.django_orm import Storage

from .models import Credential, Revenue


TODAY = datetime.date.today()
YESTERDAY = TODAY - datetime.timedelta(days=1)


class GetReportTask(PeriodicTask):
    run_every = datetime.timedelta(minutes=2)

    def run(self, *args, **kwargs):
        scraper = Scraper()
        scraper.get_report()


class Scraper(object):
    def get_report(self, start_date=YESTERDAY, end_date=TODAY):
        user = User.objects.get(pk=1)
        storage = Storage(Credential, 'id', user, 'credential')
        credential = storage.get()
        if credential is not None and not credential.invalid:
            http = httplib2.Http()
            http = credential.authorize(http)
            service = build('adsense', 'v1.2', http=http)
            reports = service.reports()
            report = reports.generate(
                startDate=start_date.strftime('%Y-%m-%d'),
                endDate=end_date.strftime('%Y-%m-%d'),
                dimension='DATE',
                metric='EARNINGS',
            )
            data = report.execute()
            for row in data['rows']:
                date = row[0]
                revenue = row[1]

                try:
                    record = Revenue.objects.get(date=date)
                except Revenue.DoesNotExist:
                    record = Revenue()
                record.date = date
                record.revenue = revenue
                record.save()
        else:
            print 'Invalid Adsense Credentials'

To make it work, I go to http://localhost:8000/adsense/. I’m then prompted to log in to my Google account and authorize my app for AdSense access. The credentials are then stored in my local database, and I can call my Scraper’s get_report() method. Congratulations to me, it worked!
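
One wrinkle worth noting: the API hands every value back as a string. If you want real date and Decimal objects before saving, a small helper like this does the conversion (parse_rows is my name, not Google’s; the row shape matches what the v1.2 report gave me):

```python
import datetime
from decimal import Decimal


def parse_rows(rows):
    """Convert AdSense report rows ([['2013-03-01', '12.34'], ...])
    into (datetime.date, Decimal) pairs ready for the Revenue model."""
    parsed = []
    for date_str, revenue_str in rows:
        day = datetime.datetime.strptime(date_str, '%Y-%m-%d').date()
        parsed.append((day, Decimal(revenue_str)))
    return parsed
```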

I’ve been putting some time into updating an old site this weekend. I noticed that the homepage was taking a long time to load – around 5 to 8 seconds. Not good.

I tried caching queries, but it didn’t help at all. Then I realized the slowness was most likely due to my decision long ago to use textile to render text to HTML.

The site is located at direct-vs-dish.com. It essentially compares DIRECTV to DISH Network. On the home page are a number of features. Each feature represents a database record. Here is my original model for the features:

class Feature(models.Model):
    category = models.CharField(max_length=255)
    slug = models.SlugField()
    overview = models.TextField(blank=True, null=True)
    dish = models.TextField(blank=True, null=True)
    directv = models.TextField(blank=True, null=True)
    dish_link = models.URLField(blank=True, null=True)
    directv_link = models.URLField(blank=True, null=True)
    order = models.PositiveSmallIntegerField()

    def __unicode__(self):
        return self.category

    class Meta:
        ordering = ['order']

Three of the above fields use textile: overview, dish, & directv. I currently have 14 feature records, so that is potentially 42 textile conversions per load of the home page.

In order to cache these textile conversions, I added three new fields. I then added a save method to populate the cached html fields. My model now looks like this:

from django.contrib.markup.templatetags.markup import textile


class Feature(models.Model):
    category = models.CharField(max_length=255)
    slug = models.SlugField()
    overview = models.TextField(blank=True, null=True)
    overview_html = models.TextField(blank=True)
    dish = models.TextField(blank=True, null=True)
    dish_html = models.TextField(blank=True)
    directv = models.TextField(blank=True, null=True)
    directv_html = models.TextField(blank=True)
    dish_link = models.URLField(blank=True, null=True)
    directv_link = models.URLField(blank=True, null=True)
    order = models.PositiveSmallIntegerField()
    
    def __unicode__(self):
        return self.category

    def save(self, **kwargs):
        # render the textile fields once, at save time
        self.overview_html = textile(self.overview or '')
        self.dish_html = textile(self.dish or '')
        self.directv_html = textile(self.directv or '')
        return super(Feature, self).save(**kwargs)
        
    class Meta:
        ordering = ['order']
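
One gotcha with this approach: the cached fields only get filled in when a record is saved, so records that already existed need a one-time re-save (looping over Feature.objects.all() in manage.py shell and calling save() on each does it). The render-once pattern itself, stripped of Django and textile (fake_textile is a stand-in I made up), is just this:

```python
def fake_textile(text):
    # stand-in for the real textile() filter
    return '<p>{0}</p>'.format(text or '')


class FakeFeature(object):
    def __init__(self, overview):
        self.overview = overview
        self.overview_html = ''

    def save(self):
        # the conversion happens once here, not on every page view
        self.overview_html = fake_textile(self.overview)


feature = FakeFeature('Some *textile* markup')
feature.save()
```

Templates then read the cached field directly instead of re-running the conversion on each request.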

I use the Django admin to edit features so I added some styling to hide the cached html fields with an option to show them if you want to see what has been converted and cached.

class FeatureAdmin(admin.ModelAdmin):
    list_display = ('category', 'order')
    prepopulated_fields = {"slug": ("category",)}
    fieldsets = (
        (None, {
            'fields': ('category', 'slug', 'overview', 'dish', 'dish_link',
                       'directv', 'directv_link', 'order')
        }),
        ('Auto Generated', {
            'classes': ('collapse',),
            'fields': ('overview_html', 'dish_html', 'directv_html'),
        }),
    )
admin.site.register(Feature, FeatureAdmin)

My template tags went from this:

{{ feature.overview|textile }}

To this:

{{ feature.overview_html|safe }}

This dropped my homepage rendering time to about 750ms, without any caching of queries. Huge win!

If you are hosting a Django site, Sentry will make your life easier.

After my review of various hosting companies I decided to put EnvelopeBudget.com on Webfaction. But I was still impressed with Digital Ocean, so I kept my virtual server. Why not? It’s only $5 per month for full root access! Because all their servers have SSDs, I’ve never seen a virtual server boot so fast. Soon will come the day when you hear someone say, “remember when computers had moving parts?” I kept it because I figured I’d find a use for it eventually. Well, I found a use for it.

I love Sentry. We used it at SendOutCards to help us better manage our server errors. I think we were running a pre 1.0 release when it was just called django-sentry. It has come a long way. I set up an account on GetSentry.com and loved it. Since I’m bootstrapping a start-up, I decided to set up my own sentry server on my Digital Ocean account.

I documented the process I went through setting up the server.

Create Ubuntu 12.10 X32 Server droplet & ssh into it as root

# add non-root user
adduser sentry

# add to sudoers
adduser sentry sudo

# log out of root and log in as sentry
exit

# update the local package index
sudo apt-get update

# actually upgrade all packages that can be upgraded
sudo apt-get dist-upgrade

# remove any packages that are no longer needed
sudo apt-get autoremove

# reboot the machine, which is only necessary for some updates
sudo reboot

# install python-dev
sudo apt-get install build-essential python-dev

# download distribute
curl -O http://python-distribute.org/distribute_setup.py

# install distribute
sudo python distribute_setup.py

# remove installation files
rm distribute*

# use distribute to install pip
sudo easy_install pip

# install virtualenv and virtualenvwrapper
sudo pip install virtualenv virtualenvwrapper

# to enable virtualenvwrapper add this line to the end of the .bashrc file
echo "" >> .bashrc
echo "source /usr/local/bin/virtualenvwrapper.sh" >> .bashrc

# exit and log back in to restart your shell
exit

# make virtualenv
mkvirtualenv sentry_env

# install sentry
pip install sentry

# create settings file (file will be located in ~/.sentry/sentry.conf.py)
sentry init

# install postgres
sudo apt-get install postgresql postgresql-contrib libpq-dev

# install postgres adminpack
sudo -u postgres psql
CREATE EXTENSION "adminpack";
\q

# change postgres password & create database
sudo passwd postgres
sudo su - postgres
psql -d template1 -c "ALTER USER postgres WITH PASSWORD 'changeme';"
createdb your_sentry_db_name
createuser your_sentry_user --pwprompt
psql -d template1 -U postgres
GRANT ALL PRIVILEGES ON DATABASE your_sentry_db_name to your_sentry_user;
\q
exit

# update config file to use postgres & host (with vim or your editor of choice)
sudo apt-get install vim
vim .sentry/sentry.conf.py

The following are the contents of my sentry.conf.py file

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'your_sentry_db_name',
        'USER': 'your_sentry_user',
        'PASSWORD': 'your_password',
        'HOST': 'localhost',
    }
}

You will also want to configure your SMTP mail account. I just used my gmail account.
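
For reference, sentry.conf.py is just a Django settings file, so the standard SMTP settings apply. Something like this works for a Gmail account (all values below are placeholders):

```python
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_PORT = 587
EMAIL_USE_TLS = True
EMAIL_HOST_USER = 'you@gmail.com'      # placeholder
EMAIL_HOST_PASSWORD = 'your-password'  # placeholder
SERVER_EMAIL = 'you@gmail.com'         # "from" address for Sentry's mail
```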

# going to need psycopg2
workon sentry_env
pip install psycopg2

# set up database
sentry upgrade

# let's try it out!
sentry start

# install nginx
sudo apt-get install nginx

# remove the default symbolic link
sudo rm /etc/nginx/sites-enabled/default

# create a new blank config, and make a symlink to it
sudo touch /etc/nginx/sites-available/sentry
cd /etc/nginx/sites-enabled
sudo ln -s ../sites-available/sentry

# edit the nginx configuration file
sudo vim /etc/nginx/sites-available/sentry

Here are the contents of my nginx file:

server {
    # listen on port 80
    listen 80;

    # for requests to these domains
    server_name yourdomain.com www.yourdomain.com;

    # keep logs in these files
    access_log /var/log/nginx/sentry.access.log;
    error_log /var/log/nginx/sentry.error.log;

    # You need this to allow users to upload large files
    # See http://wiki.nginx.org/HttpCoreModule#client_max_body_size
    # I'm not sure of the ideal place for it, but setting it here works.
    client_max_body_size 0;

    location / {
        proxy_pass http://localhost:9000;
        proxy_redirect off;

        proxy_read_timeout 5m;

        # make sure these HTTP headers are set properly
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

That’s about it.

# restart nginx
sudo service nginx restart

I’m not really sure of the proper way to keep Sentry going, so I just run:

sentry start &

Perhaps someone more knowledgeable can leave a comment and suggest the best way to start the service automatically on reboot.

Oh, I also just moved my ZNC bouncer to the same server as it is much more reliable than connecting to my Mac Mini at home.

Update

I set up supervisor as recommended in the comments and the docs to keep Sentry running (though it has never crashed, it does make restarting easier).

sudo apt-get install supervisor
sudo vim /etc/supervisor/conf.d/sentry.conf

Add the following to the sentry.conf file:

[program:sentry-web]
directory=/home/sentry/
command=/home/sentry/.virtualenvs/sentry_env/bin/sentry start http
autostart=true
autorestart=true
redirect_stderr=true

Restart supervisord:

sudo killall supervisord
sudo supervisord

Upgrading Sentry:

I’ve upgraded twice. It was a painless process…

workon sentry_env
pip install sentry --upgrade
sentry upgrade
sudo supervisorctl restart sentry-web
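
To point a Django project at the new server, you install the raven client in that project (pip install raven) and add something like the following to its settings.py. The DSN here is a placeholder; copy the real one from your project’s page in the Sentry web UI (and double-check the app name against the raven docs for your version):

```python
INSTALLED_APPS = (
    'django.contrib.contenttypes',
    'django.contrib.auth',
    # ... the rest of your apps ...
    'raven.contrib.django.raven_compat',
)

RAVEN_CONFIG = {
    # placeholder DSN; Sentry shows the real one per project
    'dsn': 'http://public_key:secret_key@yourdomain.com/1',
}
```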

Hosting Decisions

Where do you fit on this scale?

Sysadmin -> DBA -> Backend Programmer -> Frontend Programmer -> Designer

I have a range in the middle, but lack on each end of the spectrum. So when it comes to setting up a hosting server, I’d rather turn it over to someone more experienced in the sysadmin realm. But, when bootstrapping a startup, you find yourself becoming a jack of all trades (and master of none).

I’ve been in the process of re-writing Inzolo and re-launching it as Envelope Budget. It recently came time to launch (ready or not). I spent way more time than I intended setting up a hosting account. I have been hosting Inzolo on Webfaction since its inception, and overall I’ve been quite pleased. I can’t remember any real performance or downtime issues, and Webfaction has a nice interface for setting up everything I need. I’ve actually been pleasantly surprised at how well it has met my needs.

I’ve been hearing a lot of buzz about Heroku though. And so, I thought I’d try deploying there before I went live. First of all, let me explain my stack. EnvelopeBudget.com is written in Django and I’m using PostgreSQL as my database. I’m making use of johnny-cache and using Memcached to speed up the site a bit. I wrote a utility to import Inzolo accounts into Envelope Budget and found that I finally had a real *need* for asynchronous processing, so I implemented Celery and RabbitMQ to process the import and return status updates to the browser.

I was impressed after doing the Getting Started with Django tutorial on Heroku. What kind of magic is this? So I attempted to get my EnvelopeBudget stack up and running next. I modified my Django project structure to be more Heroku friendly. I probably spent a good 8 hours learning how Heroku makes deployment so simple, though it never really seemed simple. I got it up and running, but in the end I decided it wasn’t for me (at least for this project), mainly due to the price. Minimally it would cost me $55 per month because I needed two dynos (one web and one worker) and the SSL add-on. Seriously, why do they charge $20 per month to use SSL? SSL setup is free on the other 3 hosts I’m reviewing here. That was probably the biggest deal breaker. Also, this price was for using the dev PostgreSQL add-on, which wouldn’t last long. Soon I’d need to upgrade to the Basic ($9/mo) or Crane ($50/mo) package, so now my hosting was looking more like $105 per month. On top of that, you deploy by pushing to git (‘git push heroku master’). This is cool, but it seemed to take forever each time, which was annoying since I had to keep committing and pushing to troubleshoot problems. Deploying with fabric is much faster for me on the other three servers. Time to move on.

So at this point I had decided I’d just go back to Webfaction. Then, as I was riding the train home from work and reading through my twitter feed, I came across a link to a Complete Single Server Django Stack Tutorial. I read through it, and suddenly setting up my own server didn’t seem so scary. I’ve done pretty much all of this before in my own development environment. So I went to the best place I know to spin up a new server fast – Linode. It probably took me about 2 hours to get everything up and running, and I took copious notes along the way. After getting it to work on the 512 plan ($20 per month), I destroyed that linode and set it up again on a 1 GB plan ($40/month). It took about 40 minutes the second time (setting it up twice was faster than figuring out Heroku). I was surprised at how much faster the performance was on Linode. Webfaction & Heroku felt about the same, but Linode felt significantly faster.

After getting it all set up, I got a tweet from a friend recommending I try out DigitalOcean while I was at it. After looking at the prices and specs, I could get a 1 GB server for half the price, and it had an SSD to make it faster – but only one core instead of 4. I took the time to set it up. The process was pretty much the same as with Linode; it only took about 30 minutes this time. Overall the site felt slower than Linode though. I’m guessing it was due to having only one core, and because I’m located in Utah: my Linode was in Texas and my DigitalOcean server is in New York. Still, installing packages seemed to take a lot longer, so I’m thinking their data center’s internet speed was the source of the slower speeds. Sorry, I don’t have any benchmarks so I can’t give real numbers. One thing that really impressed me, though, was the reboot time of the server. It seemed about 5 times faster than my Linode, likely due to the SSD.

So, now it was time to make a choice. I had a launch counter ticking down on my homepage and I had to decide NOW. I had already spent 3 days making a decision. I finally decided to go with Webfaction’s 1 GB plan, which is $40 per month (or $30 per month if paid yearly). I like the idea of having a managed plan. The biggest downside for me is that I don’t have root or sudo access. They don’t use virtualenv for their application setup, and setting up projects feels a bit kludgy because of it. Also, setting up Celery & RabbitMQ isn’t as painless, but I managed it thanks to Mark Liu’s tutorial. I know there is a way to use virtualenv and gunicorn on Webfaction, but I doubt I’ll take the time to set my project up that way.

There was a snag though. I had originally set up my account on their most basic plan, which only has 256 MB of RAM. My site was killed for using 2x that amount. I needed to upgrade ASAP, but I needed someone there to set up the new account and migrate my existing one. So I actually ended up launching on Linode. The site is up now and hosting performance is great, but I will likely move back to Webfaction because I soon started to realize there is always something else to set up. I already have a git repo, a Trac system, email, & FTP set up on Webfaction, and I would likely want to put a WordPress blog at /blog. All of this is so easy with Webfaction, and it’s more research I’d have to do to get it all working on Linode.

So here is my tl;dr version in alphabetical order:

DigitalOcean: I love their pricing. For as little as $5 per month I can spin up a linux server. This would be great for a ZNC IRC bouncer for example. They seem fairly new still so time will tell how they compete with Linode. Their internet connection seemed a bit slow, but for root access to a server, it can be overlooked.

Heroku: If I were a hipster I’d bite the bullet and host here to get in with the cool crowd. Overall it was just too expensive for a bootstrapped startup project. The biggest benefit I see with Heroku is the ability to scale fast, both forwards and backwards, when you need to. Scaling is a good problem to have; if I get to that point, money won’t be an issue and I will revisit Heroku. I would probably also use it if I built a very small site whose specs fit within their free model, or if I was in the middle of a hack-a-thon and needed to get online fast.

Linode: This seems to be the standard for spinning up your own dedicated server with root access. If I need root access, performance, and a history of good support, I’ll go here.

Webfaction: I’ve been around the block and learned that the grass is not really greener on the other side. Although I don’t have root access, and it’s hosted on CentOS rather than the Debian/Ubuntu I’m more familiar with, it has so many features that make it easy to set up email, multiple domains, SSL, different types of apps (Django + PHP + Ruby on Rails, anyone?), Trac, Git, etc. The price is competitive, the support is good, and the uptime and performance are good – I haven’t found sufficient reason to leave.

After my final football game my senior year (1995) with my dad the offensive coordinator.

During my holiday vacation while going through my social feed I happened across a post by Alex Lawrence entitled Don’t Wait Until January. I read it because it looked interesting, not because I had any desire to start exercising or lose weight. Something in the article moved me though. It moved me into activity. Alex’s story resonated with me. I too used to play a lot of sports. I too had back problems. Thankfully, despite doctors saying it would most likely require surgery, I didn’t need surgery.

You wouldn’t know it now if you met me, but I was voted by my senior class as most athletic. I started varsity 7 seasons in 3 sports. Like Alex said, it’s not cool for me to be out of shape.

I love food. My parents had a hard time keeping our pantry stocked. We never had leftovers after a meal because I would just eat what was left as I cleared the table. I never had to worry about weight. I was a bean pole. In high school I was 6’3″ and 170 lbs. I tried to gain weight but it seemed I never could. I was generally exercising at least 2 hours per day through sports.

Last day of my mission at the Johannesburg temple – a stop on the way home.

My senior year I broke my arm pitching in the state tournament. I was bedridden for a while, and once my arm healed I left to serve a two-year mission where I didn’t exercise except for once a week when we would often play sports, and I was in a car the whole time after my first 5 months. I came home weighing 220 lbs. Most people told me I looked normal so it felt like a comfortable weight. I maintained that weight eating all I wanted and playing a lot of basketball, volleyball, and softball. That was what I weighed when I got married.

On my mission I met a couple that agreed to never gain more than 20% of their marriage weight. I thought that was a cool idea, so I told myself and my wife I would never weigh more than 242. I’ve stuck with that commitment. Once I get up into the 240s (which I have a number of times), I cut back, eat less, exercise a little, and get down to the 230 range. I don’t think I have been down to 220 since being married though.

The most recent photo I could find – with the family

After reading Alex’s article I decided to jump in and make a public commitment. I left a comment on his blog and even suggested we change the twitter hashtag to #TmFit rather than #FitLife because there was less noise, making it easier to follow. Alex concurred. My goal is to weigh 220 by March 1st, 2013. In the distant past I used to make goals public. Then I read that keeping goals private can actually be more beneficial. So I was hesitant to make a public commitment, but I decided to do it anyway.

So far it’s been great. I started using the gym membership that I was planning on canceling. Alex was serious about encouraging each other. I have pushed my workouts a bit farther than planned because of twitter feedback and encouragement. I will admit, I HATE exercising for the sake of exercising. My life motto was “I don’t believe in exercise unless it is in the form of a sport.” Well, with four kids, a full-time job, and the life of an entrepreneur, I don’t really have time to play all the sports I would like to. So, I have got to learn to like exercising – or at least learn to endure it.

During the holidays it was easier for me to take time to exercise. Now the real challenge starts as I try to find a workable routine to get my daily exercise in. Come join us on #TmFit and let’s help each other reach our goals!

I’ve written a post like this before, but that was in 2009 and I was using Windows 7. I have since switched to Linux and then OS X, so I figured it would be a good time to revisit the topic.

Here are the applications and tools I use:

  • PyCharm: I spend the majority of my days in this application. For a long time I wasn’t a fan of IDEs, but this one does so much for me and makes me a better programmer. I can’t imagine working without it now.
  • Chrome: My browser of choice. I guess this is really the most used app on my computer. I love the developer tools as well. It took me a while to give up Firebug, but once I did, there has been no reason to open other browsers.
  • iTerm2: I prefer this terminal app to the default in OSX.
  • Tower: I jump back and forth between GUI and CLI for git, but I’ll be honest, I’m a GUI kind of guy and I love using tower – especially for reviewing code changes before I commit.
  • DiffMerge: This is the merge tool I have integrated with Tower. It makes merging conflicts so much easier. Until BeyondCompare becomes available for the Mac, this is the best I could find.
  • PgAdmin3, Base, Sequel Pro: GUI tools for working with databases.
  • LimeChat: For all my IRC communication.
  • Adium: For instant messaging (Google Talk mainly)
  • Tweetbot: Yes, I bought a twitter client. It is that good.
  • Jing: For quickly making screenshots and screencasts under 5 minutes to add clarity to Trac & YouTrack tickets.
  • Camtasia or ScreenFlow: For more professional screencasts. (Camtasia for Mac is not nearly as good as Camtasia for Windows)
  • Photoshop, Illustrator, InDesign, Pixelmator: Image editing tools as needed.
  • Optimal Layout: To help manage my window layout.
  • MySpeed: For speeding up online videos.
  • Dropbox: If you use more than one computer you should have a dropbox account.
  • Evernote: I use it, but not as much as everyone raves about it.
  • Picasa: For managing all my personal photos. Love that I have the same experience on Windows, Linux & Mac.

There are other apps, but nothing I use enough to write home about.

Also, there are web apps I use quite frequently that should also get a shout out:

  • Inzolo: My virtual envelope system of budgeting. I wouldn’t generally toot my own horn, but I use this almost daily. I may be moving to a new budgeting system soon though ;)
  • BitBucket: Not quite as popular as GitHub, but I love that they have free private repos! Plus they seem to be improving month after month. No regrets moving all my private repos here.
  • GitHub: Our Git repository of choice at work.
  • StackOverflow: Generally I find the answers to programming questions here first.
  • Then there are the old standbys: Gmail, Google, YouTube, Facebook, etc.

I bought a Raspberry Pi after my GuruPlug died. I figured I’d use it for a ZNC bouncer. But then I bought a Mac Mini and started using it instead. The Raspberry Pi just sat on my desk as I couldn’t think of a good enough reason to find time to tinker with it. Then I thought of one…

I’ve dropped cable/satellite TV. I’m using SickBeard to download a couple of shows I can’t get on Hulu Plus, Amazon Prime, or Netflix. I have a Roku (with Roxbox) on one TV and an Apple TV connected to another. The problem is that SickBeard downloads my shows in .mkv format. I then have to use HandBrake to convert them to .mp4 (H.264) to get them to play on either device. It often takes longer to convert them than it does to just find a torrent offering the H.264 version. Either way, it’s not as automated as I would like it to be.
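To get closer to hands-off, something like the following shell sketch could sweep the download folder and convert anything it finds with HandBrakeCLI (the folder paths and the “Normal” preset name here are placeholders, not my actual setup):

```shell
#!/bin/sh
# Sketch of a cron-able conversion loop. SRC/DST are placeholder paths;
# "Normal" is HandBrake's stock H.264 preset.
SRC="${SRC:-$HOME/Downloads/complete}"
DST="${DST:-$HOME/Videos/converted}"
mkdir -p "$DST"

for f in "$SRC"/*.mkv; do
    [ -e "$f" ] || continue                       # no .mkv files present
    out="$DST/$(basename "${f%.mkv}").mp4"
    [ -e "$out" ] && continue                     # already converted, skip
    HandBrakeCLI -i "$f" -o "$out" --preset="Normal"
done
```

Dropped into cron, this would convert each new .mkv exactly once and leave the originals alone.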

I tried once to play an mkv file through Roxbox. It messed up my Roku so it wouldn’t connect to the internet anymore. I had to do a factory reset to get it working again. It just happened again. This time though, I decided to spend some time seeing what I could do with the Raspberry Pi that has been sitting on my desk for months.

I quickly found Raspbmc. Wow! I found an 8GB SD card, borrowed the charger from my Kindle Fire, and followed the instructions for setting it up. Everything went smoothly and I had a media center up and running in a short time. Out of the box, it’s pretty cool. It has a nice user interface, though not as simple as Roku or Apple TV, but like most open source software, much more robust & configurable.

The Problems

Of course it can’t all be THAT easy – at least with me. I set this up on a TV upstairs. My router is on the main floor in my office. There is no wireless on the Raspberry Pi, so I have to have it wired. Luckily, I have an extra Airport Extreme that got fried in a lightning storm. The incoming port doesn’t work, but it still works as an access point, so I could use it to plug an ethernet cable into my Raspberry Pi. On my main Airport Extreme I have an external hard drive. Getting it mounted was the tricky part on my GuruPlug, and it proved to be a challenge with the Raspberry Pi as well.

I got a bee in my bonnet trying to get this to work and I finally found the solution.

I had to ssh into my Raspberry Pi and install cifs-utils because apparently Raspbmc doesn’t come with it.

sudo apt-get install cifs-utils

Then I could mount my hard drive (Elements is the name of my HDD):

sudo mount -t cifs //10.0.1.1/Elements/ -o username=MYUSERNAME,password=MYPASSWORD /home/pi/Elements/
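To make the mount survive a reboot, the same share can go in /etc/fstab (same placeholder credentials as above):

```
# /etc/fstab -- mount the CIFS share at boot
//10.0.1.1/Elements  /home/pi/Elements  cifs  username=MYUSERNAME,password=MYPASSWORD  0  0
```

With that in place, a plain `sudo mount /home/pi/Elements` also works, and the share comes back on its own after a power cycle.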

XBMC plays the mkv files perfectly, so now I just need to add a few automated tools to put my files in the right place on my network drive and this whole thing will be so much more hands-off :)

I got a somewhat unique request on a project the other day. My client has a lead tracking system where his salesmen input leads and often upload scanned documents to include with the leads. I implemented this all with standard Django forms and a formset wizard to input multiple files.

My client was worried that a lot of images would be uploaded and he would have to start paying extra for storage. He asked if I could compress images on upload to save space. After searching the web I found examples of a few different ways of doing it. But after reading about Upload Handlers in the Django docs, this seemed like the best method, since I wouldn’t have to modify my models or forms at all. Unfortunately for me, it wasn’t as straightforward as I had hoped. I couldn’t find a good example of someone else doing this sort of thing and it took me MUCH longer than the 30-45 minutes I had planned for.

The good news is that I figured it out, so I’m posting it here in the hope that others can benefit.

I created a file named uploadhandlers.py in my app and added the following code:

import os

from django.conf import settings
from django.core.files.uploadhandler import MemoryFileUploadHandler
from PIL import Image

try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO


class CompressImageUploadHandler(MemoryFileUploadHandler):
    def file_complete(self, file_size):
        """
        Return a file object if we're activated.
        """
        self.file.seek(0)
        if self.content_type is not None and 'image' in self.content_type:
            newfile = StringIO()
            img = Image.open(self.file)
            if img.mode != 'RGB':
                # JPEG can't store alpha or palette modes; convert first
                img = img.convert('RGB')
            width, height = img.size
            width, height = scale_dimensions(width, height, longest_side=settings.IMAGE_LONGEST_SIDE)
            img = img.resize((width, height), Image.ANTIALIAS)
            img.save(newfile, 'JPEG', quality=settings.JPEG_QUALITY)
            self.file = newfile
            # The buffer position is now at the end of the compressed data,
            # so report the new (smaller) size rather than the original one.
            file_size = newfile.tell()

            name, ext = os.path.splitext(self.file_name)
            self.file_name = '{0}.{1}'.format(name, 'jpg')
            self.content_type = 'image/jpeg'

        return super(CompressImageUploadHandler, self).file_complete(file_size)


def scale_dimensions(width, height, longest_side):
    ratio = width / float(height)
    # Landscape
    if ratio > 1:
        return longest_side, int(longest_side / ratio)
    # Portrait
    else:
        return int(longest_side * ratio), longest_side

You can see from the code that I am simply extending the MemoryFileUploadHandler, which is one of Django’s default upload handlers. I’m overriding the file_complete method to resize the image and reduce the JPEG quality – both controlled by settings in my settings file.
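For reference, the two settings the handler expects could look something like this (the values are just examples – tune them to your storage budget):

```python
# settings.py (example values; names match those used by the handler)
IMAGE_LONGEST_SIDE = 1024  # longest edge, in pixels, after resizing
JPEG_QUALITY = 70          # JPEG quality (1-95); lower means smaller files
```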

To implement the change, I updated my views. The view that contains the form has to be csrf_exempt, and the view handling the uploads switches to this upload handler on the fly with the following code:

request.upload_handlers.insert(0, CompressImageUploadHandler())

I was recently asked to respond to the following questions:

If we were to change our standards of what is taught in our computer literacy classes in high school, what would be 6-10 things that you think should be taught in the class? What should every high school student know about computing, software, applications, etc.?

The following was my response off the top of my head…

Thinking about what I wish my family members knew about computers from least to greatest:

  1. The directory structure. How to get to various paths in a GUI browser and CLI. Differences & similarities of Windows, Linux, & OS X.
  2. Along those lines, when you are prompted to save a file – know where you are saving it and how to find it again.
  3. How to find files on an external hard drive, USB drive, SD card, etc. How to import pictures from a camera. How to sync a mobile device.
  4. The difference between a browser and the Internet and the difference between different browsers.
  5. The difference between a file and an application – how files relate to applications.
  6. How to set up a printer and scanner and install the necessary drivers.
  7. How to install applications on each operating system listed above.
  8. The difference between a text file and a binary or application file. How a text file can be read as an application file (CSV opens in Excel for example).
  9. HTML & CSS
  10. Basic programming in Javascript & some other scripting language (Python, Ruby, PHP)

What do you think? What would you add/remove/change from my list above?

Part of the reason I don’t blog as much as I used to is not that I don’t have much to say, but rather that it hurts too much to say (type) it. I’ve been suffering with RSI or carpal tunnel off & on for over 10 years. I’ve visited doctors and bought all kinds of ergonomic helpers (keyboards, trackpads, doctor stools, GeekDesks, wrist braces, pads, etc.). The effectiveness of each has been debatable. At first I think it helps, then its effectiveness wears off as time goes on.

It used to be that when it would flare up I’d start taking 800mg of Ibuprofen 2-3 times per day so I could continue working. A couple of months ago my whole body started swelling up as I think the Ibuprofen combined with my Gleevec (chemo pills) were taking their toll on my kidneys. I stopped taking Ibuprofen figuring that dealing with the pain & swelling in my wrists was better than knocking out my kidneys.

Two weeks ago I went to the doctor’s office on a Saturday afternoon. I was wearing wrist braces on both hands because they had become so sore and swollen I could barely do anything without pain. I was icing my wrists every night before bed. But this isn’t what I was there for. I had been sick and my asthma was acting up. After doing an albuterol nebulizer treatment and chest x-rays, I left the doctor’s office and picked up prescriptions for antibiotics (pneumonia), albuterol (asthma), and prednisone (steroid for asthma). I started taking 40 mg of prednisone each day for a week.

I was surprised that as I started taking prednisone, the swelling and pain went away in my wrists. I asked my brother who is a doctor if prednisone helps RSI. He said it certainly would, but it’s a short term solution and you definitely don’t want to keep taking it.

After my week of prednisone the pain and swelling immediately came back in my wrists. I can’t tell you how frustrating that was. On Monday when I went in to work, I was determined to schedule an appointment with someone – anyone who could possibly help me. I was past traditional medicine. I was now looking for alternative treatment – something I had never done before. I was looking at chiropractors, massage therapists, acupuncturists, etc. I spent the whole morning browsing around trying to decide which one to try first.

In the process of researching I came across this post by Aaron Iba titled How I Cured my RSI Pain. His whole story resonated with me. But when I got to the part where he started describing how he finally cured his RSI I was quite confused and a little skeptical. His cure – he read a book! What, that’s crazy and impossible, right?

Well, you get to a point where you decide anything is possible. I turned my attention to Dr. John E. Sarno to learn more about this. I was surprised that there were so many people singing his praises. There must be something to this.

I watched two videos on YouTube (inserted below) and decided I was jumping in with both feet. I bought the book Aaron mentioned in his blog, The Mindbody Prescription – well, I actually bought the audio book with my Audible account.

I’ve listened to the audio book and this morning I bought another one of his books – Healing Back Pain, which I just started listening to this morning. Those who follow my blog know about my herniated disc experience. To digress slightly, I want to say that my back has been feeling great. I did physical therapy and after about 3 months the numbness in my leg and foot went away and I haven’t really had any pain since then. When it happened I really worried I would never be normal again, especially since the specialists all told me they would be surprised if I didn’t need surgery. Well, I didn’t need surgery. It turns out herniated discs can heal. I even played in a softball tournament this summer and it didn’t bother me at all.

It is now Friday, just four days since reading Aaron’s blog post. I have removed all my ergonomic crutches. I have been typing away like crazy and ignoring the pain – which was really hard at first. The first two days I still had to wear a wrist brace on my right hand so I could work faster and ignore the pain better. I have since put my braces and everything else in a drawer – out of sight. The swelling has gone down, my range of motion has increased and the pain is about 60% gone from where it was. Considering where it was, that is HUGE for me!

So I’m writing this post for a few reasons:

  1. Because I can! It doesn’t hurt to do so.
  2. To document a starting point. It has only been four days, but I totally anticipate being 100% pain free at some point in the future. I don’t want to put a time limit on it.
  3. To share with others who might be going through RSI or carpal tunnel issues.

The confusing point with this “cure” is that there is no real defined solution. I’m sure this post is confusing to someone looking for the cure. Basically it is the knowledge that your brain is triggering this problem and you need to convince your subconscious that you’re not going to let this happen anymore. It is the knowledge that becomes the power of the cure. I know it totally sounds crazy, but it’s crazy to me that I sweat and blush when I get nervous in front of a crowd. Yes, my brain can control my body and I’m telling my subconscious brain to chillax on my wrists already.

It is working.

** UPDATE 2013-01-11 **

It’s been over 3 months and so I thought I should post an update. It took a couple of weeks, but since then I have been pain free. There has been some minor pain in my wrists at full extension on occasion, but it generally goes away. An odd phenomenon did occur. After my wrist pain went away, my lower back started hurting. In The Mindbody Prescription, Dr. Sarno warns you that the pain will likely move to a new location as your brain realizes its pain defense mechanism isn’t going to work on this particular body part anymore. I just applied the same principles – generally lying down and meditating on what could be bothering me emotionally and telling my brain to send oxygenated blood to the sore part of my body. After the pain left my lower back my wrists started to act up, but the pain went away quickly; then it moved to my upper back and made things difficult for about a week and a half. Same principles applied.

I told my boss about this book. She read it and applied it to the migraines she would experience about 2-3 times per week. At our next 1 on 1 she said she was 30 days without a migraine! She also gave me some tips that she had learned in the past from a therapist. We discussed some emotional triggers from my teenage years. After that meeting my upper back pain went away and I haven’t had any issues since.

Now I try to share this information with everyone I can. I get some strange looks sometimes, but I’m just trying to help.  I think the people who are willing to try it first are those who have tried everything else and are desperate for some pain relief.

** UPDATE 2014-04-18 **

I thought I’d share a few more updates. This update mainly addresses back pain. I haven’t had any RSI pain in the past year. I mentioned earlier that my back healed and I was playing softball and basketball. I happened to re-herniate my disc on a “pioneer trek” where I helped push heavy handcarts up steep hills, hiking 8 hours a day for 3 days and sleeping on sagebrush. By the third day I woke up and couldn’t move my back. I continued to walk all that day and by the end of that day I was in excruciating pain and I was losing feeling in my left leg.

I went through all the physical therapy and exercising as I did the first time. It has been 10 months now and while the back pain has gone away, I still don’t have feeling in the anterior of my left leg and foot. It feels like I constantly have Novocain injected in my leg. I lost strength in that leg as well. I couldn’t lift my body on my left calf for about 5 months. I am regaining the strength now, but I still don’t have feeling back.

The good news I want to emphasize is that I’m not in pain! This really surprised my back surgeon. He is about 80 years old (at least it seems so to me), so he has seen a lot of backs. He was shocked, based on my MRI, that my pain went away. He thought I would for sure need a microdiscectomy. But after 90 days of being pain free he said that the surgery may or may not help the nerve damage, and he didn’t recommend it if I wasn’t in pain.

So I know I have real physical damage in my back. But that doesn’t mean I have to live with constant pain.

One other resource I found was a book by Scott Brady titled “Pain Free For Life”. Dr. Brady found pain relief through Dr. Sarno’s methods and then developed techniques to help his patients recover from chronic pain. I wish there were an audio book for this one.

One last thing… I noticed an interesting side effect from this mind-body magic. I was not sick one day in all of 2013. My wife and four kids got sick a number of times but I never caught it. I’m used to getting sick at least once or twice a year. But while I was focusing on these mind-body techniques I surprisingly never got sick. Coincidence? Maybe.
