Category Archives: Nerd

Best Practices Nerd

Flow Type for coding in JavaScript with guardrails

Getting started

Getting fast feedback in your IDE is vital to having a happy developer experience. Flow adds that value for me for JavaScript programming.

Atom

Install ide-flowtype. I just switched from the lighter-weight linter-flow and I’m liking it.

The good parts

  • You can use Flow with no transpilation
  • You can use Flow with no transpilation
  • The documentation it provides is very helpful. It’s more rigorous than JSDoc, but less human-friendly because there’s no description. You’ll need an old-fashioned comment (JSDoc?) for that.
  • Speeds dev time. The fast feedback, jump to definition, and documentation make it easier to do work (guardrails)
    • Even with weak mode and lazy annotations (just using primitive types instead of fully specifying), you get very useful feedback as you code
  • Flow updates regularly with improvements
  • Flow is a Facebook thing, so the integration with React is great, even if it changes drastically. This happened to me, and the tool they distributed to upgrade the syntax worked great
  • You support one evil mega corporation over another evil mega corporation
  • You can use Flow with no transpilation

Criticisms

  • For “gradual” typing, Flow is heavy-handed (but getting better).
    1. “weak” mode might be deprecated. “It makes incremental adoption of flow much easier for those of us who are attempting to add it to a codebase that was largely written without the guard rails that typing provides.” UPDATE: weak mode isn’t going away anytime soon
    2. It’s supposed to infer types, but the inference isn’t that great. It’s not as good as a human’s. Flow gets especially annoying with object destructuring.
  • Objects have to be fully spec’ed out or it’s an error; for example, you can’t add arbitrary properties to an Object. UPDATE: the documentation about optional properties is much better. UPDATE 2: They call these sealed vs. unsealed. UPDATE 3: They’re going to transition to “exact objects by default“.

    const a = {
      foo: 'hi'
    };
    // property `bar` Property not found in object literal
    a.bar = 'uh oh';
    // you're supposed to do this instead
    const b: {foo: string, bar?: string} = {
      foo: 'hi'
    };
    b.bar = 'well ok';
  • Flow has no mechanism for turning off or ignoring some kinds of errors besides the $FlowFixMe suppression comment, and Flow can’t infer everything, so you’re going to add either a lot of $FlowFixMes or a lot of defensive statements.
  • .flowconfig is not well documented
  • The Flow binary is extremely finicky. I’ve caught it stealing all my CPU. It’s slow. In the Atom integration, you have to run code through twice before you see an error, which means if you fix something, you won’t get feedback until your second save. UPDATE: This gets better with every release, but it’s still really fragile. The first flow check can take 30 seconds to run while it warms up, even longer with monorepos.
  • The syntax consumes a lot of lines, and results in very long lines because you have to describe large objects on one line. Especially since I only use comment syntax.
  • Flow comment syntax is poorly documented. All the literature assumes you’re using the syntax that requires a build step to remove Flow annotations.
  • If you’re not using Flow comment syntax, all your annotations get stripped from the built artifact, meaning consumers of your package won’t see annotations.
  • If you’re defining your own types in a myLibDef.js, ESLint will go nuts. You have to install the flowtype plugin (see addendum).
  • Flow changes quickly and is pre-1.0, so there are frequent, breaking syntax changes
    • For example, in Flow 0.55, libraries that used Bluebird started erroring, and this has happened before
  • Flow will often throw stupid errors, requiring frequent $FlowFixMe comments
  • The https://github.com/flowtype/flow-typed/ ecosystem for third party type definitions is a trainwreck

TypeScript

I initially chose Flow because TypeScript required a transpilation step. Since then, TypeScript has come out with a comment syntax based on JSDoc, but it seems to be extremely limited. https://github.com/Microsoft/TypeScript/wiki/JSDoc-support-in-JavaScript. I read a lot of TypeScript but don’t write it, and it still feels ham-fisted compared to Flow. I especially don’t like how, when browsing TypeScript on GitHub, it’s hard to read the business logic when over half the lines are type definitions.

Best practices with Flow

  1. Use comment syntax (see the sketch after this list)
  2. Start very general, then gradually refine your types to be more specific
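
Since the comment syntax is so poorly documented (see the criticisms above), here’s a minimal sketch of what it looks like; the type and function are invented for illustration:

// @flow
/*::
type User = {
  name: string,
  email?: string,
};
*/

function greet(user /*: User */) /*: string */ {
  return 'Hello, ' + user.name;
}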

Addendum

eslintrc.js
module.exports = {
  plugins: [
    // ...your other plugins,
    'flowtype',
  ],
  rules: {
    // Allow underlines and Flow comment syntax
    'spaced-comment': ['error', 'always', {exceptions: ['/'], markers: [':', '::']}],
    'flowtype/define-flow-type': 1,
  },
};
.flowconfig
[include]
src/
[ignore]
# Add an entry for every problematic package; don't ignore all of node_modules
.*/node_modules/<problematic package>

Further reading

Flow was at version 0.86.0 as of this writing.

Finish Writing Me Plz Nerd

Apache Bench

For years, my tool for simple load tests of HTTP sites has been ApacheBench.

For years, my reference for how to visualize ApacheBench results has been Gnuplot

For years, my reference for how to use Gnuplot has been http://www.bradlanders.com/2013/04/15/apache-bench-and-gnuplot-youre-probably-doing-it-wrong/

But do you really want to be writing Gnuplot syntax? It turns out that Pandas will give you great graphs pretty much for free:

import pandas as pd

df = pd.read_table('../gaussian.tsv')

# The raw data as a scatterplot
df.plot(x='seconds', y='wait', kind='scatter')

# The traditional Gnuplot plot
df.plot(y='wait')

# Histogram
df.wait.hist(bins=20)

[Figures: scatterplot of wait vs. seconds, line plot of wait, and histogram of the wait distribution]


You can see the full source code at tsv_processing.ipynb

And re-create these yourself by checking out the parent repo: github/crccheck/abba

So now you might be thinking: how do you get a web server that outputs a normal distribution of lag? Well, I wrote one! I made a tiny Express.js server that just waits a random amount, packaged it in a Docker image, and you can see exactly how I ran these tests by checking out my Makefile.
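
For reference, the gaussian.tsv that Pandas reads above is the gnuplot-format file ApacheBench writes with its -g flag (its columns include seconds and wait). The invocation looks something like this; the URL and request counts are illustrative:

ab -n 1000 -c 10 -g gaussian.tsv http://localhost:3000/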

Nerd

Django Nose without Django-Nose

[Image: Tycho Brahe]

I’ve grown to dislike Django-Nose. It’s been over three months since Django 1.8 was released, and they still don’t have a release that fully supports it. These are the advantages they currently tout:

  • Testing just your apps by default, not all the standard ones that happen to be in INSTALLED_APPS
    • The Django test runner has been doing this since 1.6 https://docs.djangoproject.com/en/1.8/releases/1.6/#discovery-of-tests-in-any-test-module
  • Running the tests in one or more specific modules (or apps, or classes, or folders, or just running a specific test)
    • They all can do this, even the old Django test runner
  • Obviating the need to import all your tests into tests/__init__.py. This not only saves busy-work but also eliminates the possibility of accidentally shadowing test classes.
    • The Django test runner has had this since 1.6
  • Taking advantage of all the useful nose plugins
    • There are some cool plugins
  • Fixture bundling, an optional feature which speeds up your fixture-based tests by a factor of 4
    • Ok, Django doesn’t have this, but you shouldn’t be using fixtures anyways and there are other ways to make fixtures faster
  • Reuse of previously created test DBs, cutting 10 seconds off startup time
    • Django can do this since 1.8 https://docs.djangoproject.com/en/1.8/releases/1.8/#tests
  • Hygienic TransactionTestCases, which can save you a DB flush per test
    • Django has had this since 1.6 https://docs.djangoproject.com/en/1.6/topics/testing/tools/#django.test.TransactionTestCase
  • Support for various databases. Tested with MySQL, PostgreSQL, and SQLite. Others should work as well.
    • Django has had this forever

So what if you need a certain nose plugin? Say, xunit for Jenkins or some other tooling? Well, you still have to use Nose because django-jux hasn’t been updated in 4 years.

Here’s a small script you can use that lets you use Django + Nose while skipping the problematic Django-nose:
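
The script itself was embedded in the original post. As a rough sketch of the idea — set up Django and its test database, hand control to Nose, then tear everything down — it looks something like the following. The details are my reconstruction, not the original code:

# runtests.py -- a minimal sketch, not the original script
import sys

import django
import nose
from django.conf import settings
from django.test.utils import get_runner

if __name__ == '__main__':
    django.setup()
    TestRunner = get_runner(settings)
    runner = TestRunner(interactive=False, keepdb='--keepdb' in sys.argv)
    runner.setup_test_environment()
    old_config = runner.setup_databases()
    # Pass everything except our own flag through to Nose
    success = nose.run(argv=[a for a in sys.argv if a != '--keepdb'])
    runner.teardown_databases(old_config)
    runner.teardown_test_environment()
    sys.exit(0 if success else 1)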

Run it like you would Nose:

DJANGO_SETTINGS_MODULE=settings.test python runtests.py --with-xunit --with-cov

One choice I made is that I use Django 1.8’s --keepdb flag instead of the REUSE_DB environment variable, but you can see how to adapt it if you wanted it to feel more like Nose. Adapting the command above to reuse the database would look like:

DJANGO_SETTINGS_MODULE=settings.test python runtests.py --with-xunit --with-cov --keepdb

Meh Practices Nerd Patterns

Django management commands and verbosity

[Image: Ren and Stimpy]

[update: This post has been corrected, thanks to my commenters for your feedback]

Every Django management command gets the verbosity option for free. You may recognize this:

optional arguments:
  -h, --help            show this help message and exit
  -v {0,1,2,3}, --verbosity {0,1,2,3}
                        Verbosity level; 0=minimal output, 1=normal output,
                        2=verbose output, 3=very verbose output

We rarely use it because doing so usually means lots of if statements scattered through our code to support this. If you’re writing quick n’ dirty code, this may look familiar in your management commands:


if options.get('verbosity') == 3:
    print('hi')

In a recent Django project, I came up with a few lines of boilerplate to support the verbosity option, assuming you’re also using the logging library and not relying on print:


import logging

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    def handle(self, *args, **options):
        verbosity = options.get('verbosity')
        if verbosity == 0:
            logging.getLogger('my_command').setLevel(logging.WARN)
        elif verbosity == 1:  # default
            logging.getLogger('my_command').setLevel(logging.INFO)
        elif verbosity > 1:
            logging.getLogger('my_command').setLevel(logging.DEBUG)
        if verbosity > 2:
            # At verbosity 3, let everything through to the root logger
            logging.getLogger().setLevel(logging.DEBUG)

github.com/texas/tx_mixed_beverages/blob/master/mixed_beverages/apps/receipts/management/commands/geocode.py

So what does this do?

At the default verbosity, 1, I display INFO logging statements from my command. Increasing verbosity to 2, I also display DEBUG logs from my command. And going all the way to verbosity 3, I also enable all logging statements that reach the root logger.

Go forth and log!

Neato! Nerd

Check yo queries before you wreck yo site

When building high performance Django sites, keeping the number of queries down is essential. And just like controlling technical debt and maintaining test coverage, being successful means making monitoring queries a natural part of your workflow. If your momentum has to be stopped to examine database queries, you’re not going to do it. The solution for most developers is Django-Debug Toolbar, which sits off to the side in your web browser, but I’m going to share the way I do it.

I do like Django-Debug Toolbar, but most of the time, it just gets in my way: it only works on pages that return HTML, the overhead adds to the response time, and it modifies the response (adding to the response size and render time). What I prefer is displaying SQL queries alongside HTTP requests in runserver’s output:

[Image: terminal output with colorized HTTP requests and SQL queries]

There are three parts to getting this to work:

  1. ColorizingStreamHandler — this lets me distinguish http requests (gray/info) from SQL queries (blue/debug)
  2. ReadableSqlFilter — this reformats output by stripping the SELECT arguments so you can focus on the WHERE clauses
  3. Opt-in — having SQL spat out everywhere can be distracting, so it’s opt-in with an environment variable

Getting started

ColorizingStreamHandler and ReadableSqlFilter are a logging handler and a logging filter packaged in project_runpy. Add it to your Django project with pip install project_runpy (no INSTALLED_APPS changes needed). They get thrown into your Django logging configuration like this:

import os

from project_runpy import env  # project_runpy's environment variable getter

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'root': {
        'level': os.environ.get('LOGGING_LEVEL', 'WARNING'),
        'handlers': ['console'],
    },
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse',
        },
        'require_debug_true': {
            '()': 'django.utils.log.RequireDebugTrue',
        },
        'readable_sql': {
            '()': 'project_runpy.ReadableSqlFilter',
        },
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'project_runpy.ColorizingStreamHandler',
        },
    },
    'loggers': {
        'django.db.backends': {
            'level': 'DEBUG' if env.get('SQL', False) else 'INFO',
            'handlers': ['console'],
            'filters': ['require_debug_true', 'readable_sql'],
            'propagate': False,
        },
        'factory': {
            'level': 'ERROR',
            'propagate': False,
        },
    },
}

From: https://github.com/crccheck/crccheck-django-boilerplate/blob/master/project/project_name/settings.py#L84

I found that the logging can get obtrusive. It’ll show up in your runserver, your shell, when you run scripts, and even in your iPython notebooks. So using a short environment variable makes it easy to flip on and off. I’m also using project_runpy’s env environment variable getter, but if you don’t want that, I suggest using if 'SQL' in os.environ to avoid how '0' and 'False' evaluate to True when reading environment variables.
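
To make that pitfall concrete (a throwaway sketch):

import os

os.environ['SQL'] = '0'
bool(os.environ.get('SQL'))  # True! Any non-empty string is truthy
'SQL' in os.environ          # opt in by presence; unset the variable to disable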

I was afraid that adding ReadableSqlFilter would add a performance penalty, but I don’t notice any. The extra verbosity is also nice when you have a view with several expensive queries because you can see them run before the page is rendered.

How to use this

Just code as usual! If you’re like me, you keep at least part of your runserver terminal visible. If you are me, hello! Use your peripheral vision to look for long flashes of blue text. When you do, you know you’ve hit something that’s making too many queries. The main culprits will be missing select_related and prefetch_related calls (see the sketch below). Eventually, you’ll develop a sixth sense for when your querysets could be better, and then you’ll look back at your old code like this:

[Image: that queryset ain't right]
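
Here’s the shape of the problem, sketched with hypothetical models (Book has a ForeignKey to Author):

# One query for the books, plus one more per book to fetch each .author (N+1)
for book in Book.objects.all():
    print(book.author.name)

# One query, with a JOIN
for book in Book.objects.select_related('author'):
    print(book.author.name)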

Continuing

A warning: don’t think too much about the query time reported. You can’t compare database queries run locally with what happens in production. One thing we did confirm is that ten one-second queries are much slower than one ten-second query, even in the same AWS availability zone, which isn’t as obvious when everything is local. Just pay attention to the number of queries.

Finally, if you really want to keep your database hits low as you continue to develop, document them in your tests. Django makes an assertion available called assertNumQueries that you can throw in your tests just to document how many queries an operation takes. They don’t even have to be good; for example, I wrote these scraper tests to document that my code currently makes too many database queries. It’s similar to making sure your views return 200s. Make sure you know how many queries you’re getting yourself into.
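
A sketch of what that looks like; the model, manager method, and count are all made up:

from django.test import TestCase

from myapp.models import Receipt  # hypothetical app and model


class ReceiptImportTests(TestCase):
    def test_import_query_count(self):
        # Documents current behavior; the number doesn't have to be good
        with self.assertNumQueries(42):
            Receipt.objects.import_latest()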

[Image: too many queries]

Finish Writing Me Plz Nerd

Demo: Fuckin A

I’ve been learning Ansible off and on and off for the past year. I have so many complaints, but I still think it’s the least worst provisioning system out there.

It’s online at crccheck.github.io/fuckingansible/, but I’ve also embedded it below:

I’d go more into Ansible, but just thinking about it makes me so angry. So instead, I want to go over some of the new (JavaScript) things I learned working on this project.

LibSass

I’ve converted some projects from Sass to LibSass before, but this is the first time I used LibSass from the start. Not having to mess with a Gemfile and Bundler is so freeing. It’s too bad there are still so many bugs in the libsass, node-sass, grunt-sass chain. They finally fixed sass2scss over the summer, but I think Sass maps are still messed up. Despite this, I would definitely go with LibSass first.

Grunt-Connect

This is the first project I’ve used `grunt-contrib-connect`. Prior, I would have just used `python -m SimpleHTTPServer`. What I like about using Connect is how it integrates with Grunt, and how I have better control over LiveReload without having to remember to enable my LiveReload browser extensions.
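
My setup looked roughly like this; this is a sketch from memory, and the port and paths are made up:

// Gruntfile.js
module.exports = function (grunt) {
  grunt.initConfig({
    connect: {
      server: {
        options: {
          port: 8000,
          base: 'build',
          // Injects the livereload snippet, so no browser extension is needed
          livereload: true,
        },
      },
    },
    watch: {
      site: {
        files: ['build/**/*'],
        options: {livereload: true},
      },
    },
  });

  grunt.loadNpmTasks('grunt-contrib-connect');
  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.registerTask('serve', ['connect:server', 'watch']);
};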

Browserify

I’ve experimented with Browserify before with lazycolor.com, where I had previously experimented with RequireJS and r.js. I was very happy with my experience with it for this project:

  1. It found my node modules automagically. I didn’t know it did that. I just tried it and it worked.
  2. It made writing JavaScript tests so much easier. I did something similar with text-mix, but that used Mocha, and I did not have a pleasant experience setting that up. Seriously, why are there so many Grunt/Mocha plugins? And why do so many of them just not work? This time, I used Nodeunit, which was available as a Grunt contrib plugin and a breeze to set up.

If you don’t need a DOM, writing simple assertion tests is definitely the way to go. If it’s faster to write tests, you’re more likely to write them. And best of all, since Browserify runs in an IIFE, it doesn’t put `module` or `define` into the global scope and mess up everything.
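
For illustration, a Nodeunit test is just an exported function that makes assertions; the module under test here is hypothetical:

// test/utils_test.js
var utils = require('../lib/utils');  // hypothetical module

exports.testSlugify = function (test) {
  test.equal(utils.slugify('Fuckin A'), 'fuckin-a');
  test.done();
};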

Nerd

Testing Ansible Playbooks with Vagrant

I’ve been interested in Ansible for a long time now, and thanks to my coworker’s expertise, I’ve been able to get my feet wet. My main barrier has been finding a way to run playbooks.

  • For my first attempt at learning Ansible, I tried running it locally using `ansible_connection=local`. But I found that running it on a system where changes persisted made it hard to trace what was going on. The final straw was how difficult it was to locate guides and resources about using Ansible this way. You need documentation for how to read the documentation.
  • For my second attempt at learning Ansible, I tried running Vagrant in Ubuntu 14.04, but for some reason, it would not even install.
  • For my third attempt at learning Ansible, I tried running it against a Docker container, but the lack of ssh and pid 1 made re-creating the full experience too difficult. I found that I could get 80% of the way there using tricks from http://phusion.github.io/baseimage-docker/ and https://docs.docker.com/examples/running_ssh_service/
  • For my fourth attempt at learning Ansible, I tried getting Vagrant up and running on my Windows machine, and it worked! It also started working on my Ubuntu machine too.

Then I started following this guide: docs.ansible.com/guide_vagrant.html, but modified it for my own purposes. I got this working in Windows first, but everything here works exactly the same in Linux and OSX too.

I made a few minor modifications I’ll go over now. I changed the base box to be similar to what we use in production and to one that wasn’t two years old. In my case, trusty64:

$ vagrant init ubuntu/trusty64

Then, I adjusted the Vagrantfile to use a shell provisioner. Here’s mine (with comments removed):

VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.hostname = "ansible-test"
  config.vm.box_check_update = false
  config.vm.network "public_network"
  config.vm.provision :shell, :privileged => false, :path => 'vagrant-install.sh'
end

And this is contents of vagrant-install.sh referenced above:

echo `whoami`
sudo apt-get update -qq
sudo apt-get install -y avahi-daemon

Using avahi/zeroconf gives the box a predictable, human-friendly hostname on my local network.

Let’s get started!

$ vagrant up

And now let’s run our playbook, jenkins.yml

$ ansible-playbook --private-key=~/.vagrant.d/insecure_private_key -u vagrant jenkins.yml

To get it to connect to ansible-test.local, I hacked my /etc/ansible/hosts file according to these instructions: docs.ansible.com/intro_inventory.html#hosts-and-groups
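
The relevant chunk of that hosts file looks something like this (the group name is arbitrary):

[vagrant]
ansible-test.local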

So how do I feel about Ansible? It’s meh. I still hate it, but I hate it less than Chef and Puppet. I hate them all so much.

[Image: Let the hate flow through you]


Best Practices Finish Writing Me Plz Nerd

Managing Technical Debt in Django Projects

[Image: This is fine]

I’ve been thinking about this subject a lot, and I’ve been meaning to write something. Rather than procrastinate until I have a lot of material, I’m going to just continuously edit this post as I discover things. Many of these principles aren’t specific to Django, but most of this experience comes from dealing with Django.

Some of these tips don’t cost any time, but some involve investing extra time to do things differently. It’s in the name of saving time in the long run. If you’re writing a few lines of JavaScript that’s going to be thrown away in a day, then you shouldn’t waste time building up a castle of tests around it. Managing technical debt is a design tradeoff. You’re sacrificing some agility and features for developer happiness.

Don’t reuse app names and model names

You can have Django apps with the same name. And you can have models with the same name. But your life will be easier if you can reference a model without having to think about which app it came from. This makes it easier to understand the code; easier to use tools (grep) to analyze and search; and easier to use shell_plus, which automatically loads every model for you in the shell.

Leave XXX comments

You should leave yourself task comments in your code, and you should have three levels (like critical, error, warning or high, medium, low) for the priority. I commonly use FIXME for problems, TODO for todos, DELETEME for things that really should be deleted, DEPRECATED for things I really ought to look at later, WISHLIST for nice-to-haves, and XXX for documenting code smells and anti-patterns. Some examples:

  • FIXME this will break if user is anonymous
  • TODO make this work in IE9
  • DELETEME This code block is impossible to reach anyways
  • DEPRECATED use new_thing.method instead of this
  • WISHLIST make this work in IE8
  • XXX magic constant
  • FIXME DELETEME this needs csrf

The comment should be on the same line, so when you grep TODO, you’ll be able to quickly scan what kind of todos you have. This is what other tools like Jenkins’ Task Scanner expect too. Many people say you shouldn’t add TODO comments to your code; you should just do them. In practice, that is not practical, and leads to huge diffs that are hard to review.

There is such a thing as too many comments. For example, instead of writing a comment to explain a poorly named variable, you should just rename the variable.

Naming Things

One of Phil Karlton’s famous two hard things, finding the right names can make a huge difference. With Django code, I find that I’m happiest when I’m following the same naming conventions as the Django core. You should be familiar with the concepts of Hungarian notation and coding without comments (remember the previous paragraph?).

Single-letter variables are almost always a bad idea, except in simple code. They’re un-greppable and, except for the following, have no meaning:

  • i (prefer idx) — a counter
  • x — When iterating through a loop
  • k, v — Key/Value when looping through dict items
  • n — a count/total/summation (think traditional for-loop)
  • a, b — When looking at two elements, like in a reduce function (very rare)

An exception is math, where x and y, and m and n, etc. are commonplace.

If the name of your variable implies a type, it should be that type. You would not believe how often the name of a variable lies.

When writing code, it should read close to English. Getter functions should begin with “get_”. Booleans and functions that return booleans commonly begin with “is_” (though anything that is readable and obvious will work).

Write tests

My current philosophy is “if you liked it then you should have put a test on it”. The worst part of technical debt is accidentally breaking things. To its credit, Django is the best framework I’ve used for testing. Unit tests are good for TDD, but functional tests are probably better for managing technical debt because they verify the output of your system for various inputs. Doing TDD, getting 100% coverage, and taking into account edge cases… never happens in practice. That does not mean you shouldn’t try. Adding tests to prevent regressions is the second-best thing you can do. And the best thing you can do is to write those tests to begin with.

Get coverage

Running coverage is commonly done at the same time as tests. I skip the HTML reports and use coverage report from the command line to get faster feedback. When you have good coverage, you can have higher confidence in your tests.
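
The loop looks something like this (assuming coverage.py and a stock Django manage.py):

coverage run manage.py test
coverage report -m  # -m also lists the line numbers that were missed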

Be cognizant of when you’re creating technical debt

Of course, every line of code is technical debt, but I’ve started adding a “Technical Debt Note” to all pull requests. This is inspired by how legislation will have a fiscal note to assess how much it would cost. Bills can get shot down because they cost too much for what they promise. Features should be the same. Hopefully, you’re already catching these before you even write code, and you’re writing small pull requests. If we find that a pull request increases technical debt to an unreasonable degree, we revise the requirements and the code.

Clean as you cook

Most people dread cooking because of the mountain of mess that has to get cleaned after the meal. But if you can master cleaning as you cook, there’s a much more reasonable and manageable mess. As you experiment with code, don’t leave behind unused code; clean up inconsistencies as you go. Don’t worry about deleting something that might be useful or breaking something obscure. That’s what source control and tests are there for. Plan ahead for the full life cycle. That means if you’re experimenting with a concept, don’t stop when it works: update everything else to be consistent across the project. If it didn’t work, tear it down and kill it. Don’t get into a situation where you have to support two (or three or four or more) different paradigms.

The Boy Scout rule

“Always leave the campground cleaner than you found it.” Feel free to break the paradigm that a pull request should only do one thing. If you happen to clean something while working on a feature, there’s no shame in saying you took a little diversion.

Make it easy for others to jump in

Projects with a complicated bootstrapping process are probably also difficult to maintain. Wipe your virtualenv once in a while. Wipe your database. If it’s painful and you have to do it often enough, you’ll make it better. Code that doesn’t get touched often has the most technical debt.

[Image: Tetris game over]

Educate your organization that they can’t just ask for a parade of features

This problem fixes itself, one way or another. Either you keep building features and technical debt until you’re buried and everything comes to a standstill and you yell at each other, or you find a way to balance adding features.

Prevent Dependency Spaghetti

Just like how you should try to avoid spaghetti code, having a lot of third party apps that try to pull in dependencies will come back to bite you later on.

  1. Specify requirements with ==, not >= (see the sketch after this list). Not every package uses semantic versioning. And using semver does not guarantee that a minor or bugfix release won’t break something.
  2. Don’t specify requirements of requirements. This is to avoid an explosion of requirements to keep track of.
  3. There’s no easy way to know when it’s safe to delete a requirement. Even if you have good test coverage, your test environment is not the same as production. For example, you can safely delete psycopg2 and still run your tests, but have fun trying to connect to your PostgreSQL database in production.
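
In requirements.txt terms, that first rule looks like this (the versions here are only examples):

Django==1.8.4
psycopg2==2.6.1
requests==2.7.0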

Don’t support multiple paths

If you’re writing a library consumed by many people, supporting get_queryset and get_query_set is a good idea. For yourself, only support one thing. If you have an internal library that’s used by multiple parts of the code base and you want to add functionality while preserving backwards compatibility, you can write a compatibility layer, but then you should update it all within the same pull request. Or at least create an issue to clean it up. Supporting multiple code paths is technical debt.

Avoid Customizing the Admin

The moment you start writing customizations for the admin, you’ve now pinned yourself to whatever the admin happened to be doing that version. Unless you’re running automated browser tests to verify their functionality, you’re setting yourself up for things to break in the future. The Django admin always changes in major ways every version, and admin customizations always have weak test coverage.

Do Code Review and Peer Programming

Code review makes sure that more than one person’s input goes into a feature, and peer programming takes that even further. It helps make sure that crazy functionality and hard to read code doesn’t get into the main code base that others will then have to maintain. If you’re a team of one, do pull requests anyways. You’ll be amazed at all the mistakes and inconsistencies you’ll find when you view your feature all at once. Even better: sleep on your own pull requests so you can see them in a new light.

Don’t Write Unreadable Code

Code review and peer programming are supposed to keep you from writing unmaintainable code. We’ve embraced linters so that we can write code in the same style, and coverage so we know when we need to write tests, but what if we could automatically know when we were writing complicated code? We can, using radon or PyMetrics to measure McCabe’s cyclomatic complexity.
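
radon makes this easy to check from the command line (the path is a placeholder):

pip install radon
radon cc -s -a myapp/  # -s shows each block's score, -a prints the average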

Additional Resources

  • http://youtu.be/SaHbtEeu37M Docker and DevOps by Gene Kim, Dockercon ’14
  • Inheriting a Sloppy Codebase by Casey Kinsey, DjangoCon US ’14

Special thanks to Noah S.