Category Archives: Finish Writing Me Plz

Finish Writing Me Plz Nerd

Apache Bench

For years, my tool for simple load tests of HTTP sites has been ApacheBench.

For years, my reference for how to visualize ApacheBench results has been Gnuplot.

For years, my reference for how to use Gnuplot has been

But do you really want to be writing Gnuplot syntax? It turns out that Pandas will give you great graphs pretty much for free:

import pandas as pd

df = pd.read_table('../gaussian.tsv')
# The raw data as a scatterplot
df.plot(x='seconds', y='wait', kind='scatter')
# The traditional Gnuplot plot
df.plot(x='seconds', y='wait')
# Histogram
df['wait'].hist(bins=50)



You can see the full source code at tsv_processing.ipynb

And recreate these yourself by checking out the parent repo: github/crccheck/abba

So now you might be thinking: How do you get a web server that outputs a normal distribution of lag? Well, I wrote one! I made a tiny Express.js server that just waits a random amount, packaged it in a Docker image, and you can see exactly how I ran these tests by checking out my Makefile.
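The Express.js server itself isn't shown in this post. As a rough equivalent sketch in Python (not the actual server I used; the mean and standard deviation are made-up numbers), the idea is just to sleep a normally distributed amount before responding:

```python
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical latency profile: ~200ms average with 50ms of jitter
MEAN_MS, STDDEV_MS = 200, 50

def gaussian_delay_ms(mean=MEAN_MS, stddev=STDDEV_MS):
    # Clamp at zero so we never try to sleep a negative amount
    return max(0.0, random.gauss(mean, stddev))

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(gaussian_delay_ms() / 1000.0)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'ok\n')

# To serve: HTTPServer(('', 8000), SlowHandler).serve_forever()
```

Pointing ApacheBench at it (e.g. ab -n 1000 -c 10 http://localhost:8000/) then produces the kind of normally distributed wait times plotted above.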


Prometheus Contained

After becoming smitten with Graphite last year, I’m sorry to say I’ve become entranced by the new hotness: Prometheus. For a rundown of the differences between the two, Prometheus’s docs do a good job. The docs aren’t bad, but there were a lot of gaps I had to fill. So I present my hello world guide for using Prometheus to get metrics on a host the Docker way. In addition to the official docs, I found Monitor Docker Containers with Prometheus to be useful. For the node exporter, discordianfish’s Prometheus Demo was valuable.

Start a Container Exporter container

This container runs an exporter that Prometheus can talk to for data about all the containers running on the host. It needs access to cgroups to get the data, and to the Docker socket to know which containers are running.

docker run -d --name container_explorer \
  -v /sys/fs/cgroup:/cgroup:ro \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \

Start a Node Exporter container

This container uses the --net=host option so it can get metrics about the host network interface.

docker run -d --name node_exporter --net=host \

I was afraid that this would result in distorted stats because it’s in a container instead of the host, but after testing against a Node Exporter installed on the bare metal, it looks like it’s accurate.

Start a Prometheus container

This is the container that actually collects data. I’ve mounted my local prometheus.conf so Prometheus uses my configuration, and mounted a data volume so Prometheus data can persist between containers. There’s a link to the container exporter so Prometheus can collect metrics about containers, and an --add-host entry so this container can access the node exporter’s metrics. Port 9090 is exposed because Prometheus needs to be publicly accessible from the Dashboard app. I’m not sure how to lock it down for security; I may add a referrer check, since I don’t want to do IP rules or a VPN.

docker run -d --name $@_1 \
  -v ${PWD}/prom/prometheus.conf:/prometheus.conf:ro \
  -v ${PWD}/prom/data:/prometheus \
  --link container_explorer:container_explorer \
  --add-host=dockerhost:$(ip route | awk '/docker0/ { print $NF }') \
  -p 9090:9090 \
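The prometheus.conf being mounted isn’t shown anywhere in this post. In the pre-1.0 Prometheus releases of this era, the config file was an ASCII protobuf rather than YAML; a rough sketch (the job names, ports, and targets below are my assumptions based on the container names in these commands, not the author’s actual config) might look like:

```
job: {
  name: "containers"
  target_group: {
    target: "http://container_explorer:8080/metrics"
  }
}

job: {
  name: "node"
  target_group: {
    target: "http://dockerhost:9100/metrics"
  }
}
```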

Set up a Prometheus Dashboard database

Here, I’m running the rake db:setup task to set up a SQLite database. The image is my own, pending pull request 386.

docker run \
  -e DATABASE_URL=sqlite3:/data/dashboard.sqlite3 \
  -v ${PWD}/prom/dashboard:/data:rw \
  crccheck/promdash ./bin/rake db:setup

Start a Prometheus Dashboard

Now that the dashboard has a database, you can start the Dashboard web app. You can access it on port 3000. You’ll need to create a server, then a site, then a dashboard, and finally, your first graph. Don’t be disappointed when your first graph doesn’t load. You may need to tweak the server settings until the graph preview from the Dashboard opens the right data in Prometheus.

docker run -d --name promdash \
  -e DATABASE_URL=sqlite3:/data/dashboard.sqlite3 \
  -v ${PWD}/prom/dashboard:/data:rw \
  -p 3000:3000 \

My almost unabridged setup

What’s missing in this gist is the config for my nginx layer in front of the dashboard, which is why there’s no exposed port in this gist. To get started, all you have to do is put prometheus.conf in a prom sub-directory and run make promdash.


Demo: Fuckin A

I’ve been learning Ansible on and off for the past year. I have so many complaints, but I still think it’s the least worst provisioning system out there.

It’s online at, but I’ve also embedded it below:

I’d go more into Ansible, but just thinking about it makes me so angry. So instead, I want to go over some of the new (JavaScript) things I learned working on this project.


I’ve converted some projects from Sass to LibSass before, but this is the first time I used LibSass from the start. Not having to mess with a Gemfile and Bundler is so freeing. It’s too bad there are still so many bugs in the libsass, node-sass, grunt-sass chain. They finally fixed sass2scss over the summer, but I think Sass maps are still messed up. Despite this, I would definitely go with LibSass first.


This is the first project I’ve used `grunt-contrib-connect`. Prior, I would have just used `python -m SimpleHTTPServer`. What I like about using Connect is how it integrates with Grunt, and how I have better control over LiveReload without having to remember to enable my LiveReload browser extensions.


I’ve experimented with Browserify before, on a project where I had previously experimented with RequireJS and r.js. I was very happy with my experience with it for this project:

  1. It found my node modules automagically. I didn’t know it did that. I just tried it and it worked.
  2. It made writing JavaScript tests so much easier. I did something similar with text-mix, but that used Mocha, and I did not have a pleasant experience setting that up. Seriously, why are there so many Grunt/Mocha plugins? And why do so many of them just not work? This time, I used Nodeunit, which was available as a Grunt contrib plugin and a breeze to set up.

If you don’t need a DOM, writing simple assertion tests is definitely the way to go. If it’s faster to write tests, you’re more likely to write them. And best of all, since Browserify runs in an IIFE, it doesn’t put `module` or `define` into the global scope and mess up everything.


Managing Technical Debt in Django Projects

This is fine

I’ve been thinking about this subject a lot, and I’ve been meaning to write something. Rather than procrastinate until I have a lot of material, I’m going to just continuously edit this post as I discover things. Many of these principles aren’t specific to Django, but most of this experience comes from dealing with Django.

Some of these tips don’t cost any time, but some involve investing extra time to do things differently. It’s in the name of saving time in the long run. If you’re writing a few lines of JavaScript that’s going to be thrown away in a day, then you shouldn’t waste time building up a castle of tests around it. Managing technical debt is a design tradeoff. You’re sacrificing some agility and features for developer happiness.

Don’t reuse app names and model names

You can have Django apps with the same name, and you can have models with the same name, but your life will be easier if you can reference a model without having to think about which app it came from. This makes it easier to understand the code; makes it easier to use tools (like grep) to analyze and search; and makes it easier to use shell_plus, which automatically loads every model for you in the shell.

Leave XXX comments

You should leave yourself task comments in your code, and you should have three levels (like critical, error, warning or high, medium, low) for the priority. I commonly use FIXME for problems, TODO for todos, DELETEME for things that really should be deleted, DEPRECATED for things I really ought to look at later, and XXX for documenting code smells and anti-patterns. Some examples:

  • FIXME this will break if user is anonymous
  • TODO  make this work in IE9
  • DELETEME This code block is impossible to reach anyways
  • DEPRECATED use new_thing.method instead of this
  • WISHLIST make this work in IE8
  • XXX magic constant
  • FIXME DELETEME this needs csrf

The comment should be on the same line, so when you grep for TODO, you’ll be able to quickly scan what kind of todos you have. This is what other tools, like Jenkins’ Task Scanner, expect too. Many people say you shouldn’t add TODO comments to your code; you should just do them. In practice, that is not practical, and leads to huge diffs that are hard to review.
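Because the tag and its note share a line, scanning stays trivial. Here’s a hypothetical Python sketch of the kind of scan that grep or a task-scanner tool performs (the tag list and sample source are made up for the example):

```python
import re

# The task tags from above; the note must stay on the same line
TASK_RE = re.compile(r'#\s*(FIXME|TODO|DELETEME|DEPRECATED|WISHLIST|XXX)\b\s*(.*)')

def scan_tasks(source):
    """Yield (lineno, tag, note) for every task comment in the source."""
    for lineno, line in enumerate(source.splitlines(), 1):
        match = TASK_RE.search(line)
        if match:
            yield lineno, match.group(1), match.group(2).strip()

sample = "x = 1  # FIXME this will break if user is anonymous\ny = 2\n"
print(list(scan_tasks(sample)))
```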

There is such a thing as too many comments. For example, instead of writing a comment to explain a poorly named variable, you should just rename the variable.

Naming Things

Naming is one of Phil Karlton’s famous two hard things, and finding the right names can make a huge difference. With Django code, I find that I’m happiest when I’m following the same naming conventions as the Django core. You should be familiar with the concepts of Hungarian notation and coding without comments (remember the previous paragraph?).

Single-letter variables are almost always a bad idea, except in simple code. They’re un-greppable and, except for the following, have no meaning:

  • i (prefer idx) — a counter
  • x — When iterating through a loop
  • k, v — Key/Value when looping through dict items
  • n — a count/total/summation (think traditional for-loop)
  • a, b — When looking at two elements, like in a reduce function (very rare)

An exception is math, where x and y, and m and n, etc. are commonplace.

If the name of your variable implies a type, it should be that type. You would not believe how often the name of the variable lies.

When writing code, it should read close to English. Getter functions should begin with “get_”. Booleans and functions that return booleans commonly begin with “is_” (though anything that is readable and obvious will work).
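As a hypothetical illustration of those conventions (the user dict is made up for the example):

```python
# Hypothetical example of the get_/is_ conventions; `user` is a plain dict
def get_full_name(user):
    return "{} {}".format(user['first_name'], user['last_name']).strip()

def is_active(user):
    # Readable at the call site: `if is_active(user): ...`
    return user.get('deactivated_at') is None

user = {'first_name': 'Ada', 'last_name': 'Lovelace'}
print(get_full_name(user))
print(is_active(user))
```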

Write tests

My current philosophy is “if you liked it then you should have put a test on it”. The worst part of technical debt is accidentally breaking things. To its credit, Django is the best framework I’ve used for testing. Unit tests are good for TDD, but functional tests are probably better for managing technical debt because they verify the output of your system for various inputs. Doing TDD, getting 100% coverage, and taking into account edge cases… never happens in practice. That does not mean you shouldn’t try. Adding tests to prevent regressions is the second best thing you can do. And the best thing you can do is to write those tests to begin with.
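As a sketch of the shape of such a test, using only the standard library (a real Django project would subclass django.test.TestCase instead, and slugify here is just a toy stand-in for real view or model logic):

```python
import unittest

def slugify(title):
    # Toy function under test; a stand-in for real application logic
    return title.lower().strip().replace(' ', '-')

class SlugifyTest(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify('Managing Technical Debt'), 'managing-technical-debt')

    def test_already_clean_input_is_unchanged(self):
        self.assertEqual(slugify('debt'), 'debt')
```

Run with python -m unittest to see them pass.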

Get coverage

Running coverage is commonly done at the same time as tests. I skip the HTML reports and use coverage report from the command line to get faster feedback. When you have good coverage, you can have higher confidence in your tests.

Be cognizant of when you’re creating technical debt

Of course, every line of code is technical debt, but I’ve started adding a “Technical Debt Note” to all pull requests. This is inspired by how legislation will have a fiscal note to assess how much it would cost. Bills can get shot down because they cost too much for what they promise. Features should be the same. Hopefully, you’re already catching these before you even write code, and you’re writing small pull requests. If we find that a pull request increases technical debt to an unreasonable degree, we revise the requirements and the code.

Clean as you cook

Most people dread cooking because of the mountain of mess that has to get cleaned after the meal. But if you can master cleaning as you cook, there’s a much more reasonable and manageable mess. As you experiment with code, don’t leave behind unused code; clean up inconsistencies as you go. Don’t worry about deleting something that might be useful or breaking something obscure. That’s what source control and tests are there for. Plan ahead for the full life cycle. That means if you’re experimenting with a concept, don’t stop when it works: update everything else to be consistent across the project. If it didn’t work, tear it down and kill it. Don’t get into a situation where you have to support two (or three or four or more) different paradigms.

The Boy Scout rule

“Always leave the campground cleaner than you found it.” Feel free to break the paradigm that a pull request should only do one thing. If you happen to clean something while working on a feature, there’s no shame in saying you took a little diversion.

Make it easy for others to jump in

Projects with a complicated bootstrapping process are probably also difficult to maintain. Wipe your virtualenv once in a while. Wipe your database. If it’s painful, and you have to do it often enough, you’ll make it better. Code that doesn’t get touched often has the most technical debt.

tetris game over

Educate your organization that they can’t just ask for a parade of features

This problem fixes itself, one way or another. Either you keep building features and technical debt until you’re buried and everything comes to a standstill and you yell at each other, or you find a way to balance adding features.

Prevent Dependency Spaghetti

Just like how you should try to avoid spaghetti code, pulling in a lot of third-party apps that each bring their own dependencies will come back to bite you later on.

  1. Specify requirements with ==, not >=. Not every package uses semantic versioning. And using semver does not guarantee that a minor or bugfix release won’t break something.
  2. Don’t specify requirements of requirements. This is to avoid an explosion of requirements to keep track of.
  3. There’s no easy way to know when it’s safe to delete a requirement. Even if you have good test coverage, your test environment is not the same as production. For example, you can safely delete psycopg2 and still run your tests, but have fun trying to connect to your PostgreSQL database in production.
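Point 1, in requirements.txt terms (the packages and version numbers below are only illustrative):

```
# Pinned: installs are reproducible month after month
Django==1.8.4
psycopg2==2.6.1

# Avoid: a new minor or bugfix release can silently break the build
# Django>=1.8
```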

Don’t support multiple paths

If you’re writing a library consumed by many people, supporting get_queryset and get_query_set is a good idea. For yourself, only support one thing. If you have an internal library that’s used by multiple parts of the code base and you want to add functionality while preserving backwards compatibility, you can write a compatibility layer, but then you should update it all within the same pull request. Or at least create an issue to clean it up. Supporting multiple code paths is technical debt.

Avoid Customizing the Admin

The moment you start writing customizations for the admin, you’ve pinned yourself to whatever the admin happened to be doing in that version. Unless you’re running automated browser tests to verify their functionality, you’re setting yourself up for things to break in the future. The Django admin changes in major ways with every version, and admin customizations always have weak test coverage.

Do Code Review and Pair Programming

Code review makes sure that more than one person’s input goes into a feature, and pair programming takes that even further. It helps make sure that crazy functionality and hard-to-read code doesn’t get into the main code base that others will then have to maintain. If you’re a team of one, do pull requests anyways. You’ll be amazed at all the mistakes and inconsistencies you’ll find when you view your feature all at once. Even better: sleep on your own pull requests so you can see them in a new light.

Don’t Write Unreadable Code

Code review and pair programming are supposed to keep you from writing unmaintainable code. We’ve embraced linters so that we can write code in the same style, and coverage so we know when we need to write tests. But what if we could automatically know when we were writing complicated code? We can, using radon or PyMetrics to measure McCabe’s cyclomatic complexity.

Additional Resources

  • Docker and DevOps by Gene Kim, Dockercon ’14
  • Inheriting a Sloppy Codebase by Casey Kinsey, DjangoCon US ’14

Special thanks to Noah S.


Austin Burgers

Burgers near Downtown Austin, roughly in order of best to worst. Factors include price, convenience to me, and, above all, quality.

Second Bar + Kitchen

$$$ Patty – A, Bun – A, Atmosphere – Restaurant
One of the most expensive burgers, but worth it.

Last tested: 2015-03

Frequency: about 3 times a year

Counter Cafe

$$ Patty – B, Bun – A, Atmosphere – Diner

I suggest a side other than the fries.

Last tested: 2015-07

Frequency: TBD

Jos Cafe

$$ Patty – B, Bun – B, Atmosphere – Cafe

Great flavor combinations and excellent attention to detail.

Frequency: not enough

P. Terry’s

$ Patty – B, Bun – C, Atmosphere – Fast Food
There’s something magical about these.

Last tested: 2015-06

Frequency: about 8 times a year

Svantes Stuffed Burger

$$ Patty – A, Bun – C, Atmosphere – Food Truck

Space Jam

Last tested:

Frequency: about twice a year

Peached Tortilla JapaJam Burger

$$ Patty – B, Bun – B, Atmosphere – Food Truck

Great flavor combination. If you’ve ever been to Umami Burger in LA, this is like that, but better.

Last tested:

Frequency: not enough

Texas Chili Parlor

$$ Patty – C, Bun – C, Atmosphere – Texas Chili Parlor
My raw ratings aren’t that good, but they make a damn good burger. They don’t have fries on the menu.

Last tested: 2014

Frequency: about once a year


$$ Patty – B, Bun – B, Atmosphere – Bar
Another place you normally wouldn’t go to get a burger. This wing joint has a good basic burger. Patty is better than average, the bun is sometimes an A.

Last tested: 2015

Frequency: about twice a year


$$ Patty – C, Bun – C, Atmosphere – Cafe
The bacon… is ok, but you can customize it and try different kinds of bacon. Note: the house bacon usually isn’t that crispy because it’s so thick.

Last tested: 2014


$$ Patty – B, Bun – A, Atmosphere – Diner
Quality burger. Avoid the “premium” meats as they are overpriced. Good shakes. Not worth waiting in line for.

Last tested: 2015

Frequency: about 4 times a year


$ Patty – C, Bun – C, Atmosphere – Fast Food
If you’re in the mood for Whataburger, nothing else will satisfy it.

Last tested: 2015

Frequency: about 8 times a year


$$ Patty – B, Bun – B, Atmosphere – Steakhouse
I remember this being good. Needs to be re-tested.

Annie’s Cafe & Bar

$$$ Patty – B, Bun – B, Atmosphere – Cafe
I remember this being good. Needs to be re-tested. I also remember the duck fat fries being good.

Roaring Fork

$$ Patty – B, Bun – B, Atmosphere – Steakhouse
They only have one burger, the Big Ass Burger. It is great. Strong, basic ingredients (thick bacon, melty cheese). The establishment is really more a restaurant than a steakhouse, but the atmosphere is steakhouse. Update: I think the quality has gone downhill. My last Big Ass Burger was kind of terrible.

Last tested: 2015-02

Frequency: about twice a year


$$ Patty – B, Bun – D, Atmosphere – Restaurant
Good burger, but overrated. It would have one of the best patties, but mine had too much gristle.

Last tested: 2015-03

Iron Cactus

$$ Patty – C, Bun – C, Atmosphere – Restaurant
The patty is a little unusual. It’s a nicer fast food patty: fine grind and molded. The fries are lightly battered and great.

Last tested: 2013?


Pretzel: $ Patty – C, Bun – B, Atmosphere – Fast Food
Not bad. Was lured here by the marketing for their pretzel bun.


$$ Patty – C, Bun – C, Atmosphere – Diner
Decent. Okay shakes, they definitely used to be better.

Dirty Martin’s

$$ Patty – C, Bun – C, Atmosphere – Diner
Needs to be retested. I haven’t been here in over a decade.

Camino El Casino

$$ Patty – C, Bun – D, Atmosphere – Bar
Better when drunk.


$$ Patty – C, Bun – C, Atmosphere – Diner
Overrated. Avoid. I haven’t been here in probably six years, but including them for completeness.

24 Diner

$$$ Patty – C, Bun – C, Atmosphere – Diner
Avoid the burger, just get the $20 chicken and waffles. Shake was weak too.

Last tested: 2013


Other places I like but aren’t “central”

Phil’s Ice House

$$ Patty – B, Bun – C, Atmosphere – Fast Food

Last tested: 2015-06

Frequency: about 3 times a year

Places to try

Elevation Burger


No longer open near downtown

Wholly Cow

$$ Patty – C, Bun – C, Atmosphere – Diner
Convenient. Local ingredients. Too bad they cook the hell out of ’em.


$ Patty – C, Bun – C, Atmosphere – Diner
Get ’em while you can; UT owns the land under them and has the option to kick them out with six months’ notice. You can get a comically large shake here.


GTA IV 1.0.7 bootstrap guide

There are lots of websites and YouTube videos all trying to tell you how to mod GTA IV. Unfortunately, they’re almost all out of date, or YouTube videos.

I’m using the Steam-installed Grand Theft Auto IV 1.0.7.

A decent guide:

Prepping the Game

Install This:

Xliveless (v0.999b7 md5: F7FD7512F6AC8959CDDA6A2B2E014C68)

SparkIV (v0.7.0b1)

The original tool; I’m currently trying OpenIV (below) instead of this.

 OpenIV (v1.5.0)

Seems to be much newer than SparkIV. Still testing it out. It actually installs (as opposed to SparkIV, which you just unzip and run) and is more difficult to set up than it needs to be. Some mods require OpenIV.

Installing a simple mod

Most mods will tell you exactly what to do in their readme.txt. I’ll only give installation highlights.

No Intro Screen

Who has time to look at intro screens? This mod is easy to install and well worth it. Just copy loadingscreens_pc.dat to common/data. It’s plaintext so you can see what changes it makes.

Rockstar Social Club

I couldn’t find a way to disable it. I added these to my hosts file anyways:

At least the time saved by skipping the intro screens makes up for having to click through the Social Club screens.

Installing some intermediate mods

scripthook.dll (v0.5.1 md5: 7260B388AAC8329C3CF615084AA7DB83)

You may need to dig around for this download; the download link on the homepage is dead. Sometimes it’s redistributed inside a mod. Just copy it into the root GTAIV directory.


inGame Trainer (v1.9.0)

This is the trainer you see most often in youtube videos. You can change the weather, teleport, change your wanted level, enable god mode, and enable cheats. It’s not an open sandbox game without a little god mode.


CarSpawner In-Game Menu

The thing about this mod is that, for some reason, the readme is a Word doc. Don’t worry though: the text on the homepage is exactly the same as what the readme says. This mod defaults to taking over F1 to open the GUI, just like inGame Trainer, so you’ll need to edit at least one of them.

Installing an advanced mod

Iron Man IV (v1.2)

See the readme.txt. It’s an involved process but detailed nicely. Remember to make backups!


Other noteworthy mods:

  • iCEnhancer

Other hacks



PyCon US 2013 Wrap Up

I just got back from my first PyCon!

There were a lot of familiar faces, which is quite an achievement for an introvert like myself. I finally got to meet people in the Python community as far away as Austin, TX. While finding people was easy in a crowd of 2,500, finding food was not. Luckily, we were too late to get the official hotel and were housed in a hotel a mile down the road, which forced us to walk past a great strip mall full of good food. If there’s one thing I’ve learned about conferences in general, it’s that it’s always worth it to strike out on your own to find good food. At NICAR, it was Hammerheads in Louisville. For PyCon, it was finding Bistro Siam. So onto some Python.

Talks I went to I liked (in roughly chronological order):

Videos I look forward to watching:

Posters from the Poster Session I particularly liked:

And of course… the surprise guest star of PyCon this year was the Raspberry Pi. I look forward to playing with it (once I find time!).

Hopefully, I’ll come back and expand on what I especially liked about the talks I highlighted as I re-watch the videos. You can keep an eye out with me as they release videos.


print_r() for javascript

I typically just use uneval() to figure out what’s inside an array/object, but what about when it’s large and heterogeneous? I wanted to find a version of PHP’s print_r() for JavaScript. Here is a link to the original version of dump() I based mine off of:

When I tried it, the first thing I noticed was that it put quotes around everything, and that long strings with line breaks got messy, so I did a quick adaptation to suit my immediate needs and came up with this:

function dump(arr, level) {
  level = level || 0;
  var indent = new Array(level + 1).join("\t"), dumped_text = "";
  function magicquotes(value) {
    // Quote anything non-numeric, keeping multi-line strings indented
    return isNaN(value) ? '"' + String(value).replace(/\n/g, "\n" + indent) + '"' : value;
  }

  if (typeof arr == 'object') { // Arrays/Hashes/Objects
    for (var item in arr) {
      var value = arr[item];
      if (typeof value == 'object') {
        dumped_text += indent + "'" + item + "' :\n" + dump(value, level + 1);
      } else {
        dumped_text += indent + "'" + item + "' => " + magicquotes(value) + "\n";
      }
    }
  } else { // Strings/Chars/Numbers etc.
    dumped_text = "===>" + arr + "<===(" + typeof arr + ")";
  }
  return dumped_text;
}

I also found links to many other print_r() and var_dump() equivalents, but they either depended on document.write, were overly complicated, or returned a lot of excess text I wasn’t interested in.

So why not just call it print_r? Well, the original I copied was called dump, and I’ve always been annoyed typing that underscore, so I just didn’t change it.

Update: found another alternative. It’s really long and puts out a lot of extraneous information, but it’s worth looking at: