Wes Turner



Criteria for Success and Test-Driven-Development

There are conceptual similarities and differences among TDD, the Scientific Method, and Goal-Directed Organizational Planning.

How does Test-Driven-Development (TDD) relate to experimental Hypotheses?

When practicing the Scientific Method (and keeping with Design of Experiments), we’re supposed to start with a hypothesis, run an experiment, and then test the hypothesis before drawing a conclusion.

  • If the hypothesis is not supported (in terms of statistical constraints, or a boolean test pass/fail), then we reject the hypothesis.
  • With Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control), in the Control step we re-evaluate whether the implemented approach is achieving the intended objectives that we defined ahead of time.
    • “Validated Learning” emphasizes the need to control given changes in predefined metrics, as well.
  • Control Theory and Systems Theory are all about how changes in inputs (and feedback) affect measurable state (outputs, systemic outcomes).

History, across various domains of science and policy, is chock-full of examples of how attempts to Optimize (Minimize or Maximize) for certain metrics have resulted in Unintended Consequences: changes in unmeasured variables that result from (or are just “linked with”, in terms of Confounding, Correlation, and Causation) the attempt to optimize. Can you think of a few salient examples in your domains of knowledge?

There is math for solving multi-dimensional (“high-dimensional”) optimization problems.

  • MCDA (Multiple-Criteria Decision Analysis) intends to find – often multiple – solutions to multi-dimensional optimization problems.

Most systems are inherently multi-dimensional: there are stocks and flows (sort of like nodes with magnitude and uni-/bi-directional edges with magnitude). A systems metaphor: a balloon animal filled with water.

We’d like to think that software is more discrete; that is, software is describable in terms of how output results from changes in input variables.

Formal methods of software design and analysis are often cost-prohibitive. Practically, the problems we intend to solve with software are often ill-specified: we don’t start with a complete functional specification, we start with a vague idea of the problem we’d like to solve (more efficient, more auditable, more logs) and work from there. Agile development methods are designed to support teams of stakeholders who are seeking continuous success in satisfying changing objectives.

So, then, there are two things to optimize (maximize) for when attempting to deliver software which successfully achieves customer expectations:

  • Functional Specification Coverage: whether all of the desired behaviors are enumerated in at least one test.

    • Each Use Case and/or User Story should have at least one test. (Behavior-Driven-Development tools make it easier to write these tests in something like natural language, so that developers/engineers can do what they do best.)
    • User Stories can be written and rewritten from the 5 W’s (Who, What, When, Where, Why) to a Given-When-Then (and Why) pattern:
      • “Users can register for user accounts”
      • “Given that I am an unregistered user, when I complete the registration form and click on the confirmation link sent to my email, then I have a registered user account (obviously: in order to onboard new users)”
  • Code Coverage: whether all of the actual code in the software is covered by at least one test.

    • When some of the tests no longer pass, this is called “breaking the build”. Teams working with Continuous Integration (CI) are aware of this because all of the tests run for every change to the software.

      • When the build breaks, it is the whole team’s responsibility (and specifically that of the person whose changes broke the build) to stop what they are doing and un-break the build, often by just “backing out” the new changes (which can remain in a separate feature branch), so that the whole team is not blocked.
      • Teams devise various ways to indicate that the build is broken (CI emails, a build lamp, respectful verbal indications).
    • Function Coverage, Statement Coverage, Branch Coverage, Condition Coverage:

      Statement Coverage is a Code Coverage metric for determining whether each statement (roughly, each text line) of the software is executed by the tests. Statement Coverage can be misleading for a number of reasons:

      1. There may be multiple branches on the same line.
      def func(a, b):
          if (a > 10) or (b < 20):
              return True

      def test_func():
          assert func(271, 314)  # 100% statement coverage; (b < 20) never evaluated
          assert func(0, 0)      # still 100% statement coverage;
                                 # the if's False branch is never taken

      So, optimizing for Statement Coverage (when what we really want is Branch Coverage) can lead to false confidence.

      Branch Coverage and Statement Coverage are not independent measurements: if there’s high Branch Coverage, there’s likely also high Statement Coverage. Stated another way, change in Statement Coverage can be predicted given change in Branch Coverage. Is this confounding?


      Because Branch Coverage and Statement Coverage are not conditionally independent, a number of mathematical analyses which assume independence (e.g. naive Bayesian inference) are not appropriate.

      It can be said, then, that Branch Coverage and Statement Coverage are not independent components.
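To make the missed branch concrete, the implicit False path of func can be exercised directly (a minimal sketch; func is restated here so the snippet is self-contained):

```python
def func(a, b):
    if (a > 10) or (b < 20):
        return True
    # implicit `return None`: the branch that the two
    # statement-covering tests never execute

# Neither a > 10 nor b < 20 holds here, so the False branch is taken:
assert func(0, 20) is None
```

A branch-aware coverage tool would flag this path as untested even while statement coverage reads 100%.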

How does TDD relate to goal-driven organizational planning?

How do we know whether we’ve gotten there if we haven’t yet defined where we need to be?

Organizations define criteria for success:

  • Mission
  • Goals
  • Objectives / Targets
  • Indicators / Metrics / Key Performance Indicators

In terms of Test-Driven-Development, many organizational “tests” are not yet “passing” (“reasonability”, “realism”).

There’s an acronym for defining objectives: “SMART”. There are various interpretations of each of the letters. One such interpretation is “Specific, Measurable, Achievable, Relevant, Time-Bound”. Here, we’ll assume that a Goal is more of a high-level grouping for one or more objectives (though it could be argued that really all we have are nested sets of Goals or nested sets of Objectives):

  • Are we upholding the Organizational Mission?
  • Is this objective {S, M, A, R, and T}?
  • Is this objective Measurable?
  • Is this objective Achievable (Assignable)?
  • [...]

With tests for things in the future, we can define confidence intervals (low, medium, high) as e.g. “pessimistic”, “realistic”, and “optimistic”.

Assuming the objective is to maximize, given an interval, there are then four possible boolean tests for success (or just one, if we limit ourselves to TDD pass/fail):

  • Is the metric between the low and high thresholds?
  • Is the metric above the low threshold?
  • Is the metric within a defined threshold around the medium threshold?
  • Is the metric below the high threshold?
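The four tests above can be sketched as boolean predicates (a minimal sketch; the names low, med, high, and tolerance are hypothetical threshold values for some metric):

```python
def between_low_and_high(metric, low, high):
    """Is the metric between the low and high thresholds?"""
    return low <= metric <= high

def above_low(metric, low):
    """Is the metric above the low threshold?"""
    return metric > low

def near_medium(metric, med, tolerance):
    """Is the metric within a defined threshold around the medium threshold?"""
    return abs(metric - med) <= tolerance

def below_high(metric, high):
    """Is the metric below the high threshold?"""
    return metric < high

# e.g. a metric of 5 against thresholds low=1, med=4, high=10:
assert between_low_and_high(5, 1, 10)
assert above_low(5, 1)
assert near_medium(5, 4, tolerance=2)
assert below_high(5, 10)
```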

System Administrators sometimes define events for these types of threshold intervals:

  • If utilization is below L, reduce the number of allocated servers.
  • If utilization is above H, increase the number of allocated servers.
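A minimal sketch of such threshold events (the thresholds and the return convention here are hypothetical, not from any particular autoscaling API):

```python
def scaling_decision(utilization, low=0.25, high=0.75):
    """Return -1 to deallocate a server, +1 to allocate one, 0 to hold."""
    if utilization < low:
        return -1
    if utilization > high:
        return +1
    return 0

assert scaling_decision(0.10) == -1  # below L: reduce allocated servers
assert scaling_decision(0.50) == 0   # within the interval: no change
assert scaling_decision(0.90) == +1  # above H: increase allocated servers
```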

Organizations sometimes define events for these types of threshold intervals:

  • If we reach our sales target by date/time, everyone gets a raise. (See also: profit sharing).
  • If the planetary temperature increases by n degrees, we’ll suddenly realize we need to minimize the effects of climate change. (And, by then, it’ll be too late)

These are reactive approaches.

TDD is proactive, in that we test first, early, and often.

So, there are differences and similarities between defining criteria for success in software development and in adaptive, proactive, organizations.

A bit about the Global Goals:

  • The Global Goals for Sustainable Development are also known as the United Nations Sustainable Development Goals (SDGs).
  • There are 17 Global Goals (#Goal1 through #Goal17).
  • There are Targets for each Goal, and there are Indicators for Targets:
    • Goal
      • Target
        • Indicator(s)
  • Some Indicators are relevant to multiple Targets.
  • All Indicators should be relevant to all of us (“you people”).
  • The UN Millennium Development Goals were through 2015.
  • The UN Sustainable Development Goals are through 2030.
    • By 2030, we intend to be able to say that we’ve achieved (or exceeded) each Target, given an evaluation of the Indicators. These are our criteria for success.
      • We aren’t yet passing the tests we’ve set for ourselves.
      • We’re all working to find and implement solutions for achieving these Targets in our own states and worldwide.

Concepts / References

Teaching Test-Driven-Development First

Hello world is wrong! Every textbook is doing it wrong, and here’s why:

A traditional first hello world program:

# helloworld.py

def hello_world(name):
    return "Hello, {}!".format(name)

if __name__ == "__main__":  # w/o __name__ this'd run inadvertently
    print(hello_world("@westurner"))

And then run the program:

python ./helloworld.py
# "It should say 'Hello, @westurner!'"  # (check this manually every time)

Hello world with tests (Test-Driven-Development (TDD)):

# test_helloworld.py

import unittest

class TestHelloWorld(unittest.TestCase):
    def setUp(self):
        # print("setUp")
        self.data = {'name': 'TestName'}

    def test_hello_world(self, data=None):
        if data is None:
            data = self.data
        name = data['name']
        expected_output = "Hello, {}!".format(name)
        output = hello_world(name)
        self.assertEqual(output, expected_output)

    # def tearDown(self):
    #     print("tearDown")
    #     print(json.dumps(self.data, indent=2))

And then run the automated tests:

python -m site                         # sys.path and $PWD (`pwd`)
python -m unittest test_helloworld     # ./test_helloworld.py
python -m unittest -v test_helloworld  # ./test_helloworld.py
test $? -eq 0 || echo "Tests failed! (nonzero returncode)"

Why should this first run of the tests fail?

  • Because we didn’t remember to add from helloworld import hello_world to test_helloworld.py.
  • Because, in keeping with TDD, we run the test first to make sure it actually fails without our changes.

Advantages of this Test-Driven-Development (TDD) first approach:

  • TDD from the start: any future breaking changes will be detected (given the completeness of the functional test specification)
  • This teaches Object-Orientation (OO) (Encapsulation, Separation of Concerns)
  • This teaches separation of data and code.
  • This teaches Given-When-Then.

And then what? (Teach what next?):

  • Testable software design patterns
    • Refactoring: Square, Rectangle, Triangle obj.area()
    • Test factories / test fixtures
  • Additional testing tools (beyond unittest)
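The Square, Rectangle, Triangle obj.area() refactoring might be sketched like this (class and test names here are illustrative assumptions, not from the original exercise):

```python
import unittest

class Rectangle:
    def __init__(self, width, height):
        self.width, self.height = width, height

    def area(self):
        return self.width * self.height

class Square(Rectangle):
    def __init__(self, side):
        # a Square is a Rectangle with equal sides
        super().__init__(side, side)

class Triangle:
    def __init__(self, base, height):
        self.base, self.height = base, height

    def area(self):
        return 0.5 * self.base * self.height

class TestArea(unittest.TestCase):
    def test_area(self):
        # each shape exposes the same area() interface
        for shape, expected in [(Rectangle(2, 3), 6),
                                (Square(4), 16),
                                (Triangle(3, 4), 6.0)]:
            self.assertEqual(shape.area(), expected)
```

Run with python -m unittest test_shapes (assuming the sketch is saved as test_shapes.py).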

Concepts and References:


This weekend, I managed to get a fixed navigation bar configured with the Bootstrap affix JS and a fair amount of CSS iteration for Sphinx (with the excellent sphinxjp.themes.basicstrap) and have been merging the new styles into various sets of docs that I’ve been working on:

Update: 2015-07-04

  • [X] DOC: 2015/03/02/documentation.rst: Update inlined WRD R&D Documentation Table of Contents

  • [x] UBY: show current location in navbar toctree (#6)



    • [o] [UBY] show the currently #manually-selected link in the navbar when the fixed navbar is scrolled beyond the viewport (i.e. when showing the complete table of contents in the full width sidebar navbar).

      • [x] Assert #anchor exists as a DOM element with an id="anchor" property.

      • [o] Find and style each link to #anchor (href="#anchor"):

        • [X] mobile header navbar:

          • [X] UBY: Bold and add an arrow ⬅ next to the heading, in place of the ¶ sphinx heading selector link.
        • [X] full width sidebar navbar:

          • [X] UBY: Bold and add an arrow ⬅ next to the heading, in place of the ¶ sphinx heading selector link.

          • [X] UBY: If the full width sidebar navbar is on the screen; and there’s a link in the table of contents to the given #anchor; and that link is not displayed, scroll the sidebar navbar so that the given navbar link is displayed (with a few at the top, for context).

            // pseudo-JS (assuming jquery.isonscreen and jquery.scrollTo plugins)
            var sidebar = $('#sidebar');
            var link = sidebar.find('a[href="#anchor"]');
            if (!link.isOnScreen()) {
                sidebar.scrollTo(link);
            }
  • [ ] Learn ReadTheDocs in order to WriteTheDocs:

    • The default ReadTheDocs theme is sphinx_rtd_theme.

    • Sphinx themes are configured in a conf.py file. From http://stackoverflow.com/a/25007833 (CC-BY-SA 3.0):

      # on_rtd is whether we are on readthedocs.org
      import os
      on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
      if not on_rtd:  # only import and set the theme if we're building docs locally
          import sphinx_rtd_theme
          html_theme = 'sphinx_rtd_theme'
          html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
      # otherwise, readthedocs.org uses their theme by default, so no need to specify it
    • From casual inspection, ReadTheDocs rtd_theme takes a different approach:

      • ReadTheDocs rtd_theme does support scrolling the left navbar independently from the document;
      • ReadTheDocs rtd_theme scrolls the navbar and the document;
      • The ReadTheDocs rtd_theme navbar displays a table of contents that is expanded for the current document but otherwise collapsed.

WRD Research and Development


Working on Lately





EDIT: 2014-11-01: Updated links to wiki

EDIT: 2014-12-16: Updated date format to https://en.wikipedia.org/wiki/ISO_8601

EDIT: 2015-05-04: Updated links to resume

Literate Configuration

Problem: configuration files which specify keyboard shortcuts can be difficult to grep through. It’s not always easy to get a simple commented list of configured keyboard shortcuts.

Solution: Approach configuration documentation like literate programming: “literate configuration”.

  1. Mark up headings with two or more # signs.
  2. Mark up comment lines with a # prefix followed by two spaces.


Take an abbreviated excerpt from the i3wm .i3/config that I cleaned up this morning.

#### i3 config file (v4)
### Notes
#  #  Default location: ~/.i3/config
#  #  List the commented command shortcuts with::
#  #     cat ~/.i3/config | egrep '(^(\s+)?##+ |^(\s+)?#  )'
## Change volume
#  <XF86AudioRaiseVolume>   -- volume up
bindsym XF86AudioRaiseVolume exec --no-startup-id $volume_up
#  <XF86AudioLowerVolume>   -- volume down
bindsym XF86AudioLowerVolume exec --no-startup-id $volume_down

## Start, stop, and reset xflux
#  <alt> [         -- start xflux
bindsym $mod+bracketleft    exec --no-startup-id $xflux_start
#  <alt> ]         -- stop xflux
bindsym $mod+bracketright   exec --no-startup-id $xflux_stop
#  <alt><shift> ]  -- reset gamma to 1.0
bindsym $mod+Shift+bracketright  exec --no-startup-id $xflux_reset

## Resize Mode
#  <alt> r      -- enter resize mode
bindsym $mod+r  mode "resize"

mode "resize" {
    ## Grow and shrink windows
    # These bindings trigger as soon as you enter the resize mode
    # ...
    # back to normal: Enter or Escape
    #  <enter>  -- exit resize mode
    bindsym Return      mode "default"
    #  <esc>    -- exit resize mode
    bindsym Escape      mode "default"
}

Run it through extended grep (egrep) with a simple regular expression (an alternation of the two comment patterns):

cat ~/.i3/config | egrep '(^(\s+)?##+ |^(\s+)?#  )'

Peruse the output for that one excellent keyboard shortcut:

#### i3 config file (v4)
### Notes
#  #  Default location: ~/.i3/config
#  #  List the commented command shortcuts with::
#  #     cat ~/.i3/config | egrep '(^(\s+)?##+ |^(\s+)?#  )'
## Change volume
#  <XF86AudioRaiseVolume>   -- volume up
#  <XF86AudioLowerVolume>   -- volume down
## Start, stop, and reset xflux
#  <alt> [         -- start xflux
#  <alt> ]         -- stop xflux
#  <alt><shift> ]  -- reset gamma to 1.0
## Resize Mode
#  <alt> r      -- enter resize mode
    ## Grow and shrink windows
    #  <enter>  -- exit resize mode
    #  <esc>    -- exit resize mode

Add the -n switch to grep to display the source line numbers of the relevant configuration file documentation.
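For example, against a small sample written to a temporary file (the sample lines are a hypothetical excerpt; in practice, point egrep at ~/.i3/config):

```shell
# Write a two-comment sample config, then list the documentation
# comments with their source line numbers via `egrep -n`:
printf '%s\n' \
    '## Change volume' \
    '#  <XF86AudioRaiseVolume>   -- volume up' \
    'bindsym XF86AudioRaiseVolume exec $volume_up' \
    > /tmp/i3config_sample
egrep -n '(^(\s+)?##+ |^(\s+)?#  )' /tmp/i3config_sample
```

The matching lines print prefixed with their line numbers (1: and 2: here), so the output doubles as an index into the configuration file.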

The docco homepage lists quite a few more heavyweight approaches to generating documentation from comment strings embedded in various languages (such as Markdown in a shell script).

I’ve added documentation with this pattern to my dotfiles:

This simple pattern saves time when looking up my keyboard shortcuts.

EDIT: 2014-12-16: Updated links to https://westurner.org/dotfiles/

Tech News for America

So I started to prepare a mega-tweet for our new substitute teacher, and started to reference a few links – because what good is an internet page without links – from my trusty ol’ redditlog, here: https://westurner.org/redditlog/.

Responsive Web Design


  • Does it work on my device?
  • Why am I looking at margins?
  • Why should we prefer #anchors over page numbers?

Hello World!

from __future__ import print_function

def hello_world():
    return "Hello World!"

if __name__ == "__main__":
    print(hello_world())

Updated: 2013-12-15

New blog! I thought best to accentuate “Hello World” with an exclamation point. [1]

This Blog

This blog is created from reStructuredText sources hosted by GitHub which are processed by Tinkerer (source), which extends Sphinx (source, wikipedia).

Syntax Highlighting

Code syntax highlighting – pygments-style-jellywrd – is adapted from jellybeans.vim and a PatchColors function in my dotvim Vim (source, wikipedia) configuration with a Python (source, wikipedia) script called vim2vim, which generates the requisite Pygments (source) style.

[1] TIL that “Hello World” originated from The C Programming Language by Kernighan and Ritchie (1978) [2]