Funding minimally extractive protocols

Is there a better way to fund minimally extractive protocols?

This is the question that stuck in my mind after reading “Protocols as minimally extractive coordinators” by Chris Burniske. In this article, I propose using Augmented Bonding Curves as a solution that is worth exploring further.

But first, let’s recap Chris’ thesis.

Protocols are minimally extractive coordinators

Chris argues that protocols provide a structure in which businesses operate, but are not businesses in and of themselves. In other words, they are systems of logic (encoded in smart contracts) that coordinate exchange between suppliers (businesses), consumers and distributors of a service.

In a hypothetical “distributed Uber” protocol, the protocol sets the rules that connect drivers (the suppliers) with passengers (the consumers) through an app (the distributor). It acts as the grease for the wheels of exchange. As coordinators of exchange, protocols should reduce friction and must therefore be minimally extractive. Businesses, on the other hand, are incentivized to be maximally extractive. The more they extract, the more profit they make. The more profit they make, the higher they are valued.

From this angle, protocols can be seen as routers of economic activity. Just as the routers of the internet are as lean and efficient as possible, so too should crypto’s protocols trend.

Chris Burniske

Ultimately the less extractive a protocol is in coordinating exchange, the more that form of exchange will happen.

If we accept the premise that protocols should be minimally extractive the next question becomes:

How minimal is minimal?

While protocols should extract the minimum value possible, they still have costs associated with developing, maintaining and growing the protocol. These include, but are not limited to:

  • Ongoing protocol development and maintenance
  • Design and user experience
  • Business development
  • Marketing
  • Supplier and consumer education
  • Technical support
  • R&D
  • Legal fees

For decentralized protocols to compete for talent with the likes of Facebook and Google they need to have the funds to offer competitive packages.

Current funding options

Before proposing an alternative funding model it makes sense to review some of the current methods protocols use to fund ongoing development.

Do an ICO, preallocate funds to a foundation, sell assets

This is the most common pattern. A good example is Ethereum, where a successful ICO allowed the project to build a formidable war chest. The Ethereum Foundation uses these funds to do ongoing R&D, develop the ecosystem and give grants to worthy projects.

The problem here is that this model only works in an environment of ever-increasing asset prices, and even then it isn’t sustainable in the long run. Continuously selling off assets to fund protocol development, without a continuous funding model to replenish them, is not a good long-term plan.

In this model, as tokens go up in value, you will have to sell some on the market to deploy that capital. This means you become less and less exposed as the market moves in your favor. On the other hand if the market turns against you the most prudent thing to do might be to sit on your tokens rather than try to sell them into an illiquid and unfavorable market. This means you maintain or gain exposure as the market moves against you. In the bond world this is called being short convexity, and that’s not a good thing!

Add in the problem of misaligned incentives between early funders and founders whose tokens have vested vs. users and later investors, and the limitations of this model are clear.

Inflation funding

Dash, Decred, Horizen and Zcash follow this model. The token model introduces inflation that partly goes to funding core protocol development. While this produces an ongoing funding stream, it also disincentivizes holding the asset over the long term. The longer you hold it, the more your position is diluted. Short term speculators (arguably the least valuable participants in the network) also get to ride the price swings without having to worry as much about dilution over the long term.

Donations, grants and volunteer contributions

Another option is to rely on funding from donations and grants from organisations. We have seen a number of experiments in this space, most notably MolochDAO and TrojanDAO. The Ethereum Foundation has also funded a number of projects, such as Uniswap. While admirable, these projects are at a decided disadvantage given the well known problems associated with the tragedy of the commons.

There are incentivisation issues with this model too. As new people donate and the funds are spent, early contributors hold a continuously decreasing stake in a shrinking pool of communal funds, disincentivizing future contributions.

Government funding?

Governments are making funds available for protocol development with programs such as the EU ledger project. While this is a good way to get initial funding, the types of projects that will be funded are largely dependent on government agendas. Additionally, there is likely to be far more demand for funding than governments are willing to supply.

What do we really want?

Now that we have some sense for the problems with the current models, let’s take a step back and define what properties we want from an ongoing funding model for minimally extractive protocols.

We want a token model that:

  • Provides guaranteed liquidity.
  • Is stable in the early stages.
  • Incentivises good governance.
  • Keeps coordination costs as low as possible.
  • Is sustainable.

Guaranteeing liquidity with bonding curves

Bonding curves are a crypto-economic primitive originally proposed by Simon de la Rouviere and implemented independently by Bancor. A bonding curve is a mathematical function that defines the relationship between token price and collateral.

Bonding curves allow you to deposit a token (e.g. Dai) into a smart contract; in return you receive a newly minted token (e.g. #projectToken). The Dai is kept as collateral within the smart contract.

The price is a function of the collateral currently bonded to the curve. Buying #projectTokens bonds additional collateral, while selling un-bonds the collateral associated with those tokens, changing the price of the remaining #projectTokens. These price functions are hardcoded according to some algorithmic curve.
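To make this concrete, here is a minimal sketch of a linear bonding curve (price = slope × supply) in Python. The class, the slope value and the numbers are illustrative assumptions, not any real contract’s parameters:

```python
class BondingCurve:
    """Minimal linear bonding curve sketch: spot price = slope * supply.

    The collateral held always equals the area under the price curve up
    to the current supply, so the contract can buy back the entire
    outstanding token supply.
    """

    def __init__(self, slope=0.001):
        self.slope = slope
        self.supply = 0.0      # #projectTokens in circulation
        self.collateral = 0.0  # e.g. Dai bonded in the contract

    def spot_price(self):
        return self.slope * self.supply

    def buy(self, deposit):
        """Bond `deposit` collateral and mint tokens along the curve."""
        # Solve: deposit = slope/2 * (new_supply^2 - supply^2)
        new_supply = ((2 * deposit / self.slope) + self.supply ** 2) ** 0.5
        minted = new_supply - self.supply
        self.supply = new_supply
        self.collateral += deposit
        return minted

    def sell(self, amount):
        """Burn `amount` tokens and un-bond the matching collateral."""
        new_supply = self.supply - amount
        refund = self.slope / 2 * (self.supply ** 2 - new_supply ** 2)
        self.supply = new_supply
        self.collateral -= refund
        return refund
```

Each buy raises the spot price and each sell lowers it, and because the reserve always equals the integral of the curve, every token can be redeemed — which is how a curve guarantees liquidity.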

Bonding curves have a few interesting properties that are useful in our quest for a sustainable funding model:

  1. The shape of the curve can be changed to create different incentive structures. This allows us to control how we reward early adopters vs late adopters.  
  2. A curve can be designed to always be liquid by ensuring there is collateral bonded in the curve to buy back the remaining supply of #projectToken.
  3. Everything remains on chain so there is no need for market makers.

Bonding curves act as automated market makers and take care of liquidity, but what about the rest of our requirements?

Introducing Augmented Bonding Curves

To meet our remaining goals (early stage stability, good governance, low coordination costs) we need to introduce some additional mechanisms. Enter the Augmented Bonding Curve (ABC).

The Augmented Bonding Curve as used by the Commons Stack is based on Michael Zargham’s complex systems research at BlockScience. It augments a traditional bonding curve by adding conservation principles and mechanisms that align incentives, generate returns and manage speculation (more on this later).

ABCs were initially designed for the Commons Stack to help fund commons projects that suffer from the “free-rider problem”, also known as the tragedy of the commons. The free-rider problem is a type of market failure that occurs when those who benefit from resources, public goods, or services of a communal nature do not pay for them. Protocol funding is an example of the free-rider problem: the people who benefit from these protocols (including speculators) have no direct incentive to fund protocol development.

For brevity I will only discuss the aspects of an Augmented Bonding Curve that pertain to our goal of sustainable funding for minimally extractive protocols. If you are interested in a more detailed explanation, the article “Deep Dive: Augmented Bonding Curves” is the best place to start.

As an overview, an ABC is a typical bonding curve with the addition of:

  • A two-phase life cycle
  • A funding pool
  • A token lock-up/vesting mechanism
  • Inter-system feedback loops

Let’s look at how these additions help us.

Providing early stage stability

Blockchain networks are especially vulnerable during the early phases of their lifecycle. Pump and dump schemes and the influx of non-aligned participants could severely derail a new blockchain protocol.

To help shepherd a project through these early stages, an ABC has two phases: a Hatch Phase and an Open Phase.

During the Hatch Phase, initial contributors participate in a Hatch Raise. Typically contributors in this phase are founding members, devoted contributors and initial investors. The point of this phase is to gather contributors and pool capital.

Unlike conventional bonding curves, ABCs have both a reserve pool and a funding pool. During the Hatch Phase a percentage of the funds paid into the curve is put into the Reserve Pool and “bonded” to the curve, minting new #projectTokens. The amount of funds bonded in the Reserve Pool determines the initial spot price used during the Open Phase.

The remaining percentage is put into the Funding Pool. The capital in the Funding Pool remains un-bonded as a floating source of capital that can be distributed as real capital outside of the bonding curve.

In return for their contribution (time, resources, capital), hatchers receive newly minted #projectTokens. In a typical bonding curve system these tokens could be burned at any point, or after an arbitrary deadline. In an Augmented Bonding Curve system, #projectTokens minted during the Hatch Phase are locked in a vesting process. They cannot be burned until they are slowly unlocked, in correlation to how much capital has been allocated. This combats harmful early speculation/arbitrage that would affect the stability of the Reserve Pool.
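A plain-Python sketch of this split-and-vest logic may help. The 80/20 split, the one-token-per-unit minting rate and the names below are my own illustrative assumptions, not the Commons Stack’s actual parameters:

```python
class HatchPhase:
    """Toy Hatch Phase: split contributions between a bonded Reserve
    Pool and an un-bonded Funding Pool, and vest hatch tokens in
    proportion to how much funding has been allocated."""

    def __init__(self, funding_split=0.2):
        self.funding_split = funding_split
        self.reserve_pool = 0.0   # bonded collateral backing the curve
        self.funding_pool = 0.0   # un-bonded, spendable capital
        self.total_funding = 0.0  # everything ever routed to the funding pool
        self.locked = {}          # hatcher -> locked #projectTokens
        self.allocated = 0.0      # funding-pool capital spent on the roadmap

    def contribute(self, hatcher, amount):
        funding = amount * self.funding_split
        self.funding_pool += funding
        self.total_funding += funding
        self.reserve_pool += amount - funding
        # Illustrative 1:1 minting rate for hatch tokens.
        self.locked[hatcher] = self.locked.get(hatcher, 0.0) + amount

    def allocate_funding(self, amount):
        """Spending from the funding pool is what unlocks hatch tokens."""
        amount = min(amount, self.funding_pool)
        self.funding_pool -= amount
        self.allocated += amount

    def unlocked_tokens(self, hatcher):
        fraction = self.allocated / self.total_funding if self.total_funding else 0.0
        return self.locked[hatcher] * min(1.0, fraction)
```

Until capital is allocated to the roadmap, `unlocked_tokens` stays at zero, which is the vesting incentive described above.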

Incentivising good governance

The Open Phase starts after the goal for the Hatch Raise is met. The Hatch Phase is designed to align incentives and ensure good governance. Hatchers’ tokens are slowly unlocked in correlation to how much capital has been allocated to fund the project roadmap.

For hatchers’ tokens to vest, capital from the Funding Pool has to be allocated to fund projects that positively impact the protocol. This creates an economic incentive for hatchers to participate in the system’s governance process. If they don’t allocate funds, they cannot burn their tokens to reclaim their capital. This combats a known incentive misalignment observed in capital-allocating DAOs: capital allocations are rare because members are too selective or stingy with funds.

Incentives on their own are not enough to ensure good governance in a distributed protocol. We also need tools. There are a number of governance and accountability tools available, with more experiments and development work happening in this space all the time. Some of the tools I think are especially promising in this area are Conviction Voting for making governance decisions, quadratic funding for allocating existing capital to projects, and the Giveth DApp for accountability.

Keeping coordination costs low

Now on to the tricky problem of extracting enough capital from the network to fund core protocol development, while still keeping costs for producers and consumers as low as possible. One way to achieve this is to take additional fees from actors that are currently not contributing to protocol funding.

As stated in the review of other funding models above, speculators largely get a free ride. They profit from buying the dip and selling the pump without directly contributing to protocol development. Is there a way to involve speculators in protocol funding?

In Augmented Bonding Curves an Exit Tribute is charged on exit from the bonding curve. When tokens are burned, a small percentage of the returns is sent back into the Funding Pool. This provides funding for the protocol while contributors earn returns. The protocol always benefits, even if speculators are just looking to make returns.

Another way to think of this Exit Tribute is as a volatility tax. We are no longer dependent on steadily rising asset prices to provide continuous funds. As long as there are participants entering and exiting the curve, the protocol will receive funding. Since speculators are also taxed, the overall tax rate can be lower, reducing the costs for protocol participants.
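As a rough sketch (the 5% rate is an arbitrary example, not a recommended value), the Exit Tribute mechanism looks like this:

```python
# Illustrative Exit Tribute rate; real systems would set this via
# governance or protocol design, not a hardcoded constant.
EXIT_TRIBUTE = 0.05

def exit_with_tribute(curve_refund, funding_pool):
    """Split a bonding-curve refund between the seller and the Funding Pool."""
    tribute = curve_refund * EXIT_TRIBUTE
    return curve_refund - tribute, funding_pool + tribute

def simulate_round_trips(n_trades, trade_size):
    """Each enter/exit cycle pays tribute, so trading volume alone
    funds the protocol, whatever direction the price moves."""
    funding_pool = 0.0
    for _ in range(n_trades):
        _, funding_pool = exit_with_tribute(trade_size, funding_pool)
    return funding_pool
```

Note that the funding pool grows with every exit, regardless of whether the price went up or down, which is what makes this a volatility tax rather than a bet on asset growth.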


The last piece of the puzzle is to bake ongoing volatility into the protocol design. The protocol itself should create buy and sell pressure for the #projectToken. There are many ways to do this. As a simple example, a protocol could pay internal incentives in #projectTokens. This creates buying pressure as the protocol itself is buying tokens from the curve. On the other hand, it creates selling pressure as suppliers need to sell #projectTokens to cover costs. Since this mechanism is based on volatility and not purely on asset growth, this model is sustainable over the long run.

As long as there is economic activity on the protocol, there is a funding model. Whether this funding model is sufficient depends on a number of factors and needs to be considered on a case by case basis. The required volatility to cover costs depends on the underlying transaction costs and the costs involved in sustaining the protocol. The limits of this model, and the kinds of protocols it is suited to, are open questions; more research and modelling is required in this area.


By taking a step back and thinking of protocol funding as a common good that suffers from the free rider problem, we were able to apply solutions pioneered by the Commons Stack to solve the funding problems of minimally extractive protocols.

With this design we have a model that:

Provides guaranteed liquidity.

By using a well designed bonding curve that ensures there is always sufficient collateral bonded to the curve we get guaranteed liquidity and an automated market maker.

Is stable in the early stages.

To protect our protocol during the vulnerable early stages we use the Hatch Phase of an Augmented Bonding Curve.

Incentivises good governance.

By vesting hatcher tokens over time we ensure that founders’ and early investors’ incentives are aligned with the long term success of the protocol.

Keeps coordination costs as low as possible.

By charging an exit tax when participants leave the curve we subsidise coordination costs.

Is sustainable.

By creating buy and sell pressure for #projectTokens in the protocol design we ensure a long term funding mechanism. 


Thanks to Jeff Emmett, Griff Green, Colin Andrews and Simon de la Rouviere for their input on this article.

A Token Engineering Process

The rapid development of the blockchain ecosystem has given rise to a new engineering discipline in the form of token engineering. What makes this discipline particularly complex is that we are dealing with the design of cyber-physical systems. The software (cyber) and social (physical) components in blockchain systems are inextricably linked. They operate on different time scales and interact differently depending on context. It is often assumed that understanding the rules of a system combined with the engineering know-how required to build the software is enough. But automated systems that interact with humans often behave in ways we don’t expect.

So how do we go about engineering these kinds of systems?

Understand the whole system

Meddling with a small part of a complex system without first understanding how the whole system works is a recipe for unintended consequences. 

When dealing with these complex cyber-physical systems we have to take a systems approach. Systems thinking is a holistic approach to analysis that focuses on the way that a system’s parts interrelate, how systems work over time and how they operate within the context of larger systems. Much like natural systems, cyber-physical systems live in an ecosystem of actors and other systems that all influence each other.

There is a good introduction to systems thinking on the cadCAD community Discord. For those who want to dig a bit deeper, “Thinking in Systems: A Primer” by Donella H. Meadows and Diana Wright is a good place to start. Leyla Acaroglu also does a good job of outlining the fundamental concepts of systems thinking.

At this stage of the design process we are trying to do two things. First, build stakeholder taxonomies by identifying stakeholder groups, their possible actions, and the form their incentives or individual utilities might take. Second, lay out the system dynamics and agent goals.

We call this system mapping. During system mapping we are trying to identify what concepts, constructs and stakeholders are relevant to our model and to begin to define their relationships to one another, as well as what the goals are for the system as a whole.
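As a toy illustration of a stakeholder taxonomy, here is the “distributed Uber” example from earlier sketched as plain data. The actors, actions and incentives are hypothetical placeholders, not a complete system map:

```python
# Hypothetical stakeholder taxonomy for a "distributed Uber" protocol.
# Each entry records a stakeholder group's possible actions and the
# rough form of its incentive.
stakeholders = {
    "driver": {
        "actions": ["offer rides", "stake tokens", "vote on fees"],
        "incentive": "maximise earnings net of protocol fees",
    },
    "passenger": {
        "actions": ["request rides", "rate drivers"],
        "incentive": "cheap, reliable transport",
    },
    "speculator": {
        "actions": ["buy tokens", "sell tokens"],
        "incentive": "trading returns",
    },
}

def actors_with_action(action):
    """Query the map, e.g. to find who participates in governance."""
    return [name for name, s in stakeholders.items() if action in s["actions"]]
```

Even a flat map like this makes relationships queryable, which is a useful first step before drawing the canvases below.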

Tools for system mapping

There are a number of tools that help us take a step back and look at the dynamics and interconnections within the system we are trying to model. 

Ecosystem canvas

When filling out this canvas you put the purpose of the system at the centre and lay out the key players, contributors, and/or users in your ecosystem in concentric circles radiating outwards.

Adapted from Simone Cicero’s platform design toolkit by Ville Eloranta.

Motivation matrix

Use the Motivation Matrix to see who creates value and who shares value with whom in the system.

Adapted from Simone Cicero’s platform design toolkit by Ville Eloranta.

Other useful tools include connected circles maps and brain dump maps.

This article barely scratches the surface of the theory and practice of systems thinking. For those interested in a deeper dive, here are some additional resources:

Formalising the design

Now that we have a sense for the overall design of the system we can start formalising our insights using causal loop diagrams and stock and flow diagrams.

Causal loop diagrams

A causal loop diagram (CLD) is a causal diagram that aids in visualising how different variables in a system are interrelated. They can be thought of as sentences that are constructed by identifying the key variables in a system (the “nouns”), and indicating the causal relationships between them via links (the “verbs”). By linking together several loops, you can create a concise story about a particular problem or issue. 

Causal loop diagram of Adoption model

Stock and flow diagrams

Stock and flow diagrams provide a richer visual language than causal loop diagrams. There are six main elements: stocks, flows, converters, connectors, sources and sinks. Below is an example stock and flow diagram as used by the token engineering community. For an in-depth explanation of this diagram see Abbey Titcomb’s article “Deep Dive: Augmented Bonding Curves”.

Stock and flow diagram of Adoption model. Credit Michael Zargham

For a deeper understanding of this part of the process see the following resources:

Modularising the logic and building a model

Now that we have a more formal representation of our understanding, we can start modelling our problem in cadCAD. cadCAD is an open-source Python package that assists in the process of designing, testing and validating complex systems through simulation.

The first step in this process is producing a differential specification as in the example below. The differential specification syntax matches very closely with the code structure used in cadCAD models. Producing a specification in this format modularises the logic used to solve the problem. This allows us to swap out strategies and mechanisms easily as our understanding evolves.

Example differential specification. Credit Michael Zargham

Once this is done we can jump into coding our model. The best place to start is to set up a cadCAD development environment and work through the tutorials provided by BlockScience.
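To show the modularity without depending on the cadCAD package itself, here is a simplified plain-Python stand-in for a partial state update block. Real cadCAD policy and state update functions take additional arguments (params, substep, state history), but the modular shape is the same; the toy price/supply mechanism below is invented for illustration:

```python
def price_policy(state):
    # Policies read state and emit signals; they never mutate state.
    return {"buy_pressure": 10.0 if state["price"] < 1.0 else 0.0}

def update_supply(state, signals):
    return state["supply"] + signals["buy_pressure"]

def update_price(state, signals):
    return state["price"] * (1 + 0.01 * signals["buy_pressure"])

# Swapping a mechanism means swapping one entry in this block,
# which is what the differential specification makes easy.
partial_state_update_block = {
    "policies": [price_policy],
    "updates": {"supply": update_supply, "price": update_price},
}

def run(state, block, timesteps):
    """Apply all policies, then all state updates, once per timestep."""
    for _ in range(timesteps):
        signals = {}
        for policy in block["policies"]:
            signals.update(policy(state))
        state = {key: update(state, signals)
                 for key, update in block["updates"].items()}
    return state
```

Because strategies live in separate functions wired together by the block, evolving the model means replacing one function rather than rewriting the simulation loop.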

Refining the models

Once we have a working model it’s time to refine. Our first model will most likely not be optimal, so we need to do quantitative and qualitative backtesting to refine it. cadCAD also offers Monte Carlo simulations and parameter sweeps to help identify optimal parameter values and failure modes.
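The idea behind a parameter sweep can be sketched in a few lines of plain Python. cadCAD provides this natively; the toy model, the exit-tribute rates and the volume distribution below are invented purely for illustration:

```python
import random

def funding_after_year(exit_tribute, seed):
    """Toy model: a year of random daily exit volume paying tribute."""
    rng = random.Random(seed)
    funding = 0.0
    for _ in range(365):
        volume = rng.uniform(0, 1000)  # daily exit volume, arbitrary units
        funding += volume * exit_tribute
    return funding

def sweep(tributes, runs=20):
    """Average funding per tribute rate over several Monte Carlo runs."""
    return {t: sum(funding_after_year(t, s) for s in range(runs)) / runs
            for t in tributes}
```

Sweeping a range of tribute rates across many random trade histories shows how sensitive the funding stream is to each parameter, which is exactly the kind of question Monte Carlo runs are meant to answer.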

Evaluate and improve the running system

Once a system has been designed and implemented, cadCAD can serve as an invaluable tool in Computer Aided Governance (CAG), as proposed by Jeff Emmett and Michael Zargham. CAG is a decision support process that leverages a digital twin (modelled in cadCAD) of a running blockchain system.

The integration between the design and deployment loop. Credit Michael Zargham

Having a digital twin of a running system allows token engineers to:

  • evaluate proposed changes to the system
  • test parameter sensitivity
  • explore success criteria
  • explore failure modes
  • evaluate behaviours or policies
  • make recommendations to governing bodies


This process is just a start and is likely to evolve as the community deepens its understanding and experience. Much of the content for this article has come out of discussions on the cadCAD Discord, the Commons Stack Telegram group and personal input from Michael Zargham, Jeff Emmett and Sebnem Rusitschka, among many others.

You Must Tame Complexity to Become a Better Programmer

Have you ever worked on a system that was just impossible to maintain?

You spend hours trawling through the code until you finally think you understand what’s going on, but when you make your change things fall apart. You introduce ten new bugs in places you thought had nothing to do with the code you changed.

You wade through line after line of code only to discover the method you are trying to understand isn’t being called anymore. It’s dead weight dragging the codebase down.

It feels like you’re seeing double: there are multiple pieces of code that seem to do almost the same thing, but not quite. They are 90% the same with a few minor differences.

You are not alone.

This scenario is more the norm than an exception. Luckily there is an explanation and a way to avoid having your code end up in the same situation:

You need to tame complexity.

Complexity is the root cause of the majority of software problems today. Problems like unreliability, late delivery, lack of security and poor performance. Taming complexity is the most important task for any good programmer.

In the words of Edsger W. Dijkstra:

… we have to keep it crisp, disentangled and simple if we refuse to be crushed by the complexities of our own making.

So why is complexity so dangerous?

To avoid a problem you must first understand the problem.

The more complex a system is the harder it is to understand. The harder a system is to understand the more likely you are to introduce more unnecessary complexity.

This is the reason complexity is so dangerous. Every other problem in software development is either a side effect of complexity or only a problem because of complexity.

Complexity has the same impact on your codebase as compound interest has on your credit card balance.

… it is important to emphasise the value of simplicity and elegance, for complexity has a way of compounding difficulties.

-Fernando J. Corbató

How complexity impacts understanding

In a previous post I took an in-depth look at how programmers understand code. For the purposes of this discussion we can simplify this to two broad approaches: testing and informal reasoning. Both are useful, but both have limits.

Complexity makes testing less effective

With testing you try to understand the code from the outside. You observe how the system behaves under certain conditions and make assumptions based on that.

The problem with testing is that all it tells you is how a system acts under the conditions you tested. It doesn’t say anything about how it would act under different conditions.

The more complex a system the more potential states it might be in. The more potential states a system has the more tests you need. Unfortunately you can never have enough tests.

…testing is hopelessly inadequate… (it) can be used very effectively to show the presence of bugs but never to show their absence.

-Edsger W. Dijkstra

Complexity makes informal reasoning more difficult

When using reasoning you try to understand the system from the inside. By using the extra information available you are able to form a more accurate understanding of the program.

If you have a more accurate understanding of the program you are better able to foresee and avoid potential problems.

The more complex a system the more difficult it becomes to hold all that complexity in your mind and make well informed decisions.

While improvements in testing will lead to more errors being detected, improvements in reasoning will lead to fewer errors being created.

The three main causes of complexity

Before we can avoid complexity we need to understand what creates it.

Some complexity is just inherent in the problem you are trying to solve. Complex business rules for example. Other complexity is accidental and not inherent in the problem.

Your aim as a programmer is to keep accidental complexity to an absolute minimum. A program should be as simple as possible given the requirements.

The three main causes of accidental complexity are: state, control flow and code volume.

State


One of the first things my computer science teacher taught us in high school was to avoid global variables like the plague. They would cause endless bugs. As Forrest Gump would say:

A global variable is like a box of chocolates, you never know what you’re gonna get.

As you reduce the scope of a variable you reduce the damage it can do, but you never really make the problem go away. This is why pure functional languages like Haskell don’t allow mutable state.
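A two-line contrast makes the point. The stateful version below gives different answers to identical calls, while the pure version cannot (the names are illustrative):

```python
total = 0  # global, mutable: exactly what we were warned about

def add_to_total(x):
    # Stateful: the result depends on every call that came before.
    global total
    total += x
    return total

def add(acc, x):
    # Pure: the result depends only on the arguments.
    return acc + x
```

Calling `add_to_total(5)` twice returns 5 and then 10; calling `add(0, 5)` twice returns 5 both times. The pure version can be understood, tested and reused in isolation.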

State makes programs hard to test.

Tests tell you about the behaviour of a system in one state and nothing at all about its behaviour in another state.

The more states you have the more tests you need.

Even if you cover all the possible states (which is virtually impossible for any reasonably sized project) you are relying on the fact that the system will always act the same way given a set of inputs regardless of the hidden internal state of that system. If you have ever tried testing a system with even a tiny bit of concurrency you know how dangerous this assumption can be.

State makes programs hard to understand.

Thinking about a program involves a case by case mental simulation of the behaviour of the system.

Since the total possible state grows exponentially (the number of possible values of each variable, multiplied together across all the variables) this mental process buckles very quickly.
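The multiplication is easy to see: with v possible values per state variable and n variables, there are v to the power n combinations. A one-line helper (illustrative only) shows how quickly this grows:

```python
def state_space_size(values_per_variable, num_variables):
    """Number of distinct states when every variable is independent."""
    return values_per_variable ** num_variables
```

Ten boolean flags already give 1,024 combinations; twenty give over a million, which is far beyond what anyone can simulate case by case in their head.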

The other problem is that if a stateless procedure uses any other procedure that is stateful, it too becomes stateful. In this way state contaminates the rest of your program once it creeps in. There is an old proverb that says “If you let the camel’s nose into your tent, the rest of him is sure to follow”.

Beware of the camel’s nose.

Control Flow

Control flow is any code that determines the order in which things happen.

In any system things happen, and they must happen in a specific order, so at some point this order must be relevant to somebody.

The problem with control flow is that it forces you to care not just what a system does but also how it does it. In most languages this is a fairly tricky problem to solve as order is implicit.

Functional languages are slightly better than pure imperative languages at hiding exactly how things are being done.

Compare a map function in a functional language to an explicit foreach loop. With the former you just need to know what “map” does; with the latter you need to inspect the loop and figure out that it is creating a new set of values from an old set.
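In Python the comparison looks like this. With `map` the reader only needs to know that every element is transformed; with the loop they must reconstruct that intent from the mechanics (the data is, of course, made up):

```python
prices = [100, 250, 40]

# map: states what happens, not how
doubled = list(map(lambda p: p * 2, prices))

# explicit loop: the reader must deduce that a new list is being built
doubled_loop = []
for p in prices:
    doubled_loop.append(p * 2)

assert doubled == doubled_loop  # same result, different cognitive load
```

Both produce the same list; the difference is how much control-flow detail the reader has to hold in their head to know it.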

Since we mostly work in languages where control flow is implicit we need to learn a few tricks to limit its impact on our code. Unfortunately these tricks are outside the scope of this article. I’ll spend some time covering this topic at length in a future article.

Code Volume

This is the easiest cause of complexity to measure.

Code volume is often just a side effect of the previous two problems. It’s worth discussing on its own because it has such a compounding effect: the complexity of a system grows exponentially with the size of its code base.

This interaction quickly spirals out of control so it’s vital to keep a tight grip on code volume.

Secondary causes of complexity

Besides the three main causes discussed there are a variety of secondary causes of complexity that include:

  • Duplicated code
  • Dead code (unused code)
  • Missing abstractions
  • Unnecessary abstraction
  • Poor modularity
  • Missing documentation

This list can go on and on but it can be summarised with these three principles:

Complexity breeds complexity

Complexity is so insidious because it multiplies. There are a whole host of secondary causes of complexity that are introduced simply because the system is already complex.

Code duplication is a prime example. The more code you have, the harder it is to know every piece of functionality in the system. Often duplication is introduced simply because you forgot, or never knew, that a piece of code already exists that does what you need.

Even if you know there is a piece of code that does something similar to what you want to do you are often not sure if it does exactly what you want. When there is time pressure and the code is complex enough that it would take significant effort to understand there is a huge incentive to duplicate.

Simplicity is hard

It often takes significant effort to achieve simplicity.

The first solution is hardly ever the simplest. It takes effort to understand the problem deeply and try a number of approaches until you find the simplest possible way to solve it.

This is hard to do, especially if there is existing complexity or there is time pressure. Luckily we never have to work with legacy code under unreasonable time pressures (note the sarcasm).

Simplicity can only be achieved if it is recognised, sought and prized.

Getting non-technical stakeholders to understand this is difficult, especially since the cost of neglecting simplicity is only paid further down the line.

Power corrupts

If the language you use gives you the option to do something that introduces complexity at some point you will be tempted to do it.

In the absence of language-enforced constraints like Haskell’s immutability, mistakes and abuses can and will happen.

Garbage collection is a good example where power is traded for simplicity. In garbage collected languages you lose the power of manual memory management. You give up explicit control of how and when memory is allocated. In return, memory leaks become a less common issue.

In the end the more powerful a language is the harder it is to understand systems constructed in it.

Simplicity is not optional

Unnecessary complexity is a dangerous thing.

Once it creeps into your solution it grows like a cancer. Over time it strangles your ability to understand, change and maintain your code.

Every decision you make should prioritise simplicity. Learn to recognise complexity when you see it. Strive to find a simpler solution. Value clear code over complex solutions.

Your mission is to pursue simplicity at all costs.

Become a better programmer

Why do some programmers seem to have this magical ability to extract meaning from code in the blink of an eye?

To try and answer this question I’ve gone digging to see what science knows about how we understand code.

As it turns out we know a lot about the psychology of code comprehension and we can use this knowledge to become better programmers. It allows you to develop all aspects of the understanding process so you don’t end up with bottlenecks in your programming skills.

In this post I take a look at what we know about program understanding and discuss three ways we can use this knowledge to become better programmers.

To understand code you have to build a mental model

The first step when programming is to build a mental model of the problem so you can then complete your task. Your mental model is the driver behind understanding a problem or program.

The journey from code on a screen to a model in your head follows a fairly well understood progression. Our understanding of the process is by no means complete but what we do know can be used to identify areas to focus on for improvement.

Let’s take a look at how we understand code.

Your mental model is built up of matches between general and specific knowledge.

The knowledge you use to understand your code is either general programming knowledge or software specific knowledge.

General knowledge includes knowledge about computer science concepts, programming languages, frameworks, and programming principles. Most tutorials will focus on this type of knowledge — things like design patterns, effective web stacks, proven enterprise architectures, anything generally applicable to a variety of solutions. Specific knowledge is knowledge about the particular program or problem you are busy with.

Forming a mental model consists of making associations between the code you are reading and your existing general and specific knowledge. “This is a class“. “That is a loop“. “This function is filtering invoices by price“.

Both of these types of knowledge can be new or existing. Sometimes you will need to learn new general knowledge to solve a problem. How a round robin scheduler works, for example. Specific knowledge is more often new than existing, but sometimes you will have existing knowledge about the program you are currently working on through a history with that particular codebase.

Your mental model consists of the set of links between the general and specific knowledge you have that is relevant to this problem.

These matches are formed by making, testing and modifying hypotheses.

The way we form matches is by making hypotheses.

Let’s say you spot something that you recognise in the code. A beacon that reminds you of some higher level concept. “That loop looks like a sort“.

You then look for ways to test this hypothesis. “Let’s see if we are swapping two items in the loop“.

We then modify the hypothesis or accept it and start looking for new hypotheses to build on the one we just made.

You make a prediction about what something is, find ways to prove or disprove your prediction, modify based on results and repeat.
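To make this concrete, here is a minimal sketch (a bubble sort, invented for illustration and not from the article) where the swap of adjacent elements is exactly the evidence that confirms the “that loop looks like a sort” hypothesis:

```python
def sort_in_place(items):
    """A bubble sort. The nested loops plus the adjacent-element swap
    are the clues a reader uses to confirm the 'this is a sort' hypothesis."""
    n = len(items)
    for i in range(n):
        for j in range(n - i - 1):
            if items[j] > items[j + 1]:
                # The swap of two adjacent items: the evidence that
                # confirms (or would refute) the reader's hypothesis
                items[j], items[j + 1] = items[j + 1], items[j]
    return items
```

Spotting the swap lets you accept the hypothesis and move up a level: you now think “a sort” instead of tracing every iteration.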

So how does this help us in understanding how to be a better programmer?

There are three ways you can become a better programmer:

Once you know that your ability to understand code depends on three things:

  1. Knowledge — The building blocks to solve the problem
  2. Links — The glue between the building blocks
  3. Hypotheses — The tools by which you form the links

It becomes clear that getting better at programming requires a holistic approach.

1. You can gain more general knowledge

Since your ability to comprehend code depends on the number of matches you can make between your existing knowledge and the problem you are trying to solve, it stands to reason that the more knowledge you have to work with, the more success you will have.

As programmers we spend a large portion of our time acquiring new knowledge. It’s necessary if you want to stay current in the technology world. To get the most out of your research it’s important to focus on principles and not technologies.

With that in mind let’s look at the types of knowledge you can add to your bag of tricks:

Language specific knowledge

Language specific knowledge is the area many developers focus on.

It’s about learning the ins and outs of your language or framework of choice. Getting to know the API and language constructs, finding the strange language quirks and knowing exactly how things work under the hood.

It is usually fairly easy to find good courses and information for this category of knowledge.

This kind of knowledge is vital, and every developer needs to know their tool set inside out.

The problem with this kind of knowledge is that there’s always more. A new framework comes out. The next version of a language is released. The longer you have had this knowledge, the less valuable it becomes (knowing how to read a punch card isn’t a hot skill anymore).

Programming concepts

This type of knowledge has a longer shelf life. A sort will still be a sort in 20 years’ time.

Computer Science degrees spend a lot of time on these topics. You also learn these concepts as a side effect of learning languages and frameworks. The problem with learning these concepts from a language or framework is that it’s sometimes difficult to separate the underlying concept from how it is expressed in syntax.

Some languages are also better or worse at expressing certain concepts, so knowing a few different frameworks and languages is helpful here. The alternative is to learn the concepts first and then learn how they are applied in different languages. It is much harder to find information and courses that take this approach. These concepts include things like patterns, algorithms, data structures and many more.

Over time I will be releasing more content that focuses on learning the underlying concepts.

Domain knowledge

Understanding the industry you are working in gives you an extra set of non programming concepts to use in your mental model. Understanding how an investment instrument works helps you understand code dealing with investment instruments.

2. You can get better at matching code to general knowledge

Once you have enough general knowledge you can focus on getting better at forming matches. If you know what clues to look for in the code and practice identifying them you will quickly get better at extracting meaning from code.

Learn to recognize code beacons

Code beacons are patterns in your code that hint at an underlying concept. These patterns can span varying levels of complexity. They are snippets of code that light the way to higher level concepts.

For example, when you see code that follows this pattern:

Iterate over the elements in an array

Put elements into a new array based on a condition

You know you are dealing with a filter.

Thinking about this block of code as “a filter” instead of “a loop, with an if condition that then puts some items from the old array into a new array” allows you to hold more ideas in your head at the same time. You chunk a few smaller ideas into a bigger one.
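As an illustration (the invoice data and function names here are invented, not from the article), compare the verbose form with the chunked form:

```python
invoices = [{"id": 1, "price": 50}, {"id": 2, "price": 150}]

# The verbose form: a loop, with an if, that puts some items
# from the old array into a new array
def cheap_invoices_verbose(invoices, limit):
    result = []
    for invoice in invoices:
        if invoice["price"] < limit:
            result.append(invoice)
    return result

# The chunked form: one idea -- "filter invoices by price"
def cheap_invoices(invoices, limit):
    return [inv for inv in invoices if inv["price"] < limit]
```

Both do the same thing, but the second reads as a single chunk, freeing working memory for the rest of the program.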

Traditionally in software development a “pattern” refers to the famous Gang of Four book, Design Patterns: Elements of Reusable Object-Oriented Software. While code beacons and design patterns are related, they are not the same thing. For example, there are code beacons for design patterns.

In a future post I will list some of these beacons and explain how to identify them.

Learn the rules of discourse

Rules of discourse are the conventions and coding styles used within a framework or language. Just like the dialogue rules in a conversation, they set expectations in the mind of the programmer. The way you name methods is different in Ruby and C#. Rails makes heavy use of the MVC pattern; other frameworks (Meteor.js, for example) don’t.

Writing code that follows the expected rules of discourse makes the code significantly easier to understand. Even for experts.

This bit comes fairly naturally; you pick up these rules from reading example code and from your colleagues. When moving to a new language or framework, though, it is worth paying special attention to this. It’s a quick way to feel more comfortable in the new language.
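For example, in Python the rules of discourse include PEP 8 naming. A sketch (hypothetical class, invented for illustration) of code that fights those expectations versus code that meets them:

```python
# Fights Python's rules of discourse: C#-style PascalCase method
# and parameter names feel foreign to a Python reader
class invoiceProcessor:
    def ProcessInvoice(self, InvoiceData):
        return sum(InvoiceData)

# Meets them: PEP 8 class, method and parameter naming
class InvoiceProcessor:
    def process_invoice(self, invoice_lines):
        return sum(invoice_lines)
```

The two classes behave identically; only the second lets an experienced Python reader’s expectations do part of the comprehension work.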

3. You can get better at forming and revising hypotheses

The last piece of the puzzle is getting better at forming and revising hypotheses. The better you are at making a hypothesis that is likely to be correct the faster you will be able to build your mental model.

Use a systematic approach

A systematic approach to building a mental model involves reading every line of code and focusing on building your knowledge as you go. It generally yields the best results but quickly becomes impractical for larger code bases. This is best suited for highly critical code that is of a manageable size. I’ve found this to be quite rare in the real world. Usually you deal with large sprawling code bases that have grown over many years.

Use an opportunistic approach

With an opportunistic approach you look for interesting pieces of code, form a hypothesis about what they do, then start digging to see if you are on the right track. Being good at recognizing beacons, both at the syntax level and at higher levels of abstraction, really helps you form better hypotheses.

The results in terms of complete understanding are not as good with this approach, but you get to a reasonably good understanding much more quickly. This is also what leads to making a quick fix and then breaking some other part of the system you didn’t understand, though, so be careful.

To become a world class programmer you need to master all three

We all want to be the best programmers we can be. In today’s technology world, where things change all the time, it can be challenging to keep up with all the latest frameworks and methodologies. Luckily you can gain an advantage over the other programmers out there. If you know what to look for and can identify your weak points, you can progress further and faster with the same amount of effort.

To me the thing that distinguishes a decent programmer from a truly excellent one has always been how well they understand the core concepts in programming.

What makes a programmer exceptional to you? Let me know in the comments below.

Web application architecture

Picking the web application architecture best suited to your unique needs is vital.

In this post I am going to talk about a few of the most common web application architectures you are likely to come across. Every web application has its own unique set of requirements and constraints, so the architectures listed here are just general examples. Your app might suit one of the architectures below, a hybrid, or something completely different. This list is by no means exhaustive. I have explicitly excluded asynchronous architectures as that is a topic all on its own.

I’m also assuming your app has at least some server side logic and connects to some kind of database.

This post is part of a series.

Single web server

This is by far the simplest web application architecture and is hardly ever enough for a production application. Your HTTP server and database server live on the same machine behind your firewall. The advantage of this kind of setup is that it is extremely simple and is great for the early stages of a project, when you don’t have any real users yet. The largest disadvantage is that you have a single point of failure. Single points of failure are places in your architecture where a single machine going down will cause your entire application to stop working. If you need to do a deployment, or something goes wrong with your server, users will not be able to access your application.

Single web server with separate database server

In this web application architecture we start to separate the HTTP application server from the database server. Once again this is not yet suited for a production application. You still have a single point of failure and can’t re-deploy your application without taking it offline. It does however set you up for a web farm and other more complex architectures.

Web farm

With this architecture we start to eliminate the HTTP server as a single point of failure. We do so by introducing a load balancer, which distributes incoming requests between two or more servers. This spreads the load and reduces the risk of a catastrophic outage by allowing multiple redundant servers to handle the workload. Bear in mind that the load balancer itself might become a new single point of failure, so a highly available load balancer is a good idea.
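The core idea of round-robin load balancing can be sketched in a few lines (server names are hypothetical; a real load balancer also handles health checks, sticky sessions and failover):

```python
import itertools

class RoundRobinBalancer:
    """Hand each incoming request to the next server in the pool, in turn."""

    def __init__(self, servers):
        self._pool = itertools.cycle(servers)

    def route(self, request):
        # A real balancer would forward the request; here we just
        # return which server would handle it.
        return next(self._pool)

balancer = RoundRobinBalancer(["web-1", "web-2"])
# Requests alternate between the two redundant servers
assignments = [balancer.route(f"req-{i}") for i in range(4)]
```

If “web-1” goes down, “web-2” still serves traffic, which is exactly how the web farm removes the HTTP server as a single point of failure.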

In this architecture our database is still a single point of failure. If your database goes offline for some reason you are still left with an application that is unable to service your users.

Web farm with a database cluster

By introducing a database cluster we eliminate the final single point of failure in our architecture. For any mission critical production system this is probably the simplest configuration that allows for a robust, highly available system. I have intentionally left the details of the clustering vague, as there are many, many ways in which you could cluster a database. MySQL and MS SQL Server have their own ways of clustering, while NoSQL databases like MongoDB do it slightly differently. The important thing to understand is why you would want to cluster; the exact technical details of setting this up differ depending on the technology you are using.

Web farm with dedicated application servers

For more complicated scenarios where you have multiple subsystems and/or 3rd party systems (often the case for medium to large corporates), you might start introducing application servers. Each web front end communicates with one or more application servers behind an additional firewall. For the sake of simplicity I have included only one application server per web server, but it is quite feasible to have many application servers per web server. Often these are split by function: payroll, billing, CRM, etc. Splitting your solution up in this way allows you to keep the complexity of each subsystem lower.

Having an additional firewall allows you to create a DMZ (demilitarized zone). The purpose of the DMZ is to add an extra layer of protection between potential intrusion from the internet and your internal systems. If a hacker were to compromise your web servers, they would still need to get through an additional firewall to gain access to highly sensitive internal data and systems.

In this architecture each HTTP server is mapped to a specific set of application servers. This eliminates the need for additional load balancers, so it is slightly easier to set up. It does, however, bring a whole set of problems with it. Configuration is more complicated, and it is hard for the load balancer in front of the HTTP servers to know when a downstream app server is offline.

Web farm with load balanced application servers

This architecture is very similar to the previous one except that it introduces another load balancer. This simplifies configuration because all your HTTP servers can use the same IP address for each application server. It also makes scaling out easier. If one of your application servers is running near capacity you can add more instances to the cluster to help cope with the workload. In almost all cases it is desirable to have a load balancer between your HTTP servers and your app servers.


While these examples should give you a small taste of the forces acting on your architecture, it is just that: a small taste. The important thing to remember is that each decision is a trade-off. More redundancy at the expense of simplicity. Easy configuration at the cost of flexibility. You need to understand your environment and needs to be able to make the optimal trade-offs for your situation. There is no such thing as a universally “good” architecture. There are only architectures that are appropriate or inappropriate for a specific set of constraints. These constraints evolve as your system and user base grow, so you should periodically revisit your decisions and ensure they are still valid.

12 Attributes of a good web application architecture

So you have a working web app but is the architecture any good?

While every solution is unique there are a few attributes that any good web application architecture should display. If you have been asking yourself the questions I listed previously you should have a solution that shows most of these attributes.

Have a look at your last web app and see how it scores on this list of 12 quality attributes.

Developer productivity

Since smart people are the most precious resource you have, any framework or architecture you adopt needs to help optimize developer productivity.


  • Simplicity
  • Concise but not obtuse
  • Standardized way of doing things
  • Great supporting tools
  • Short feedback loops
  • Expressiveness
  • Quality 3rd party packages


Elegance

The elegance of the solution speaks to how well the solution fits the problem space and how coherent the solution is.


  • Consistent way of solving a problem.
  • The most common tasks are the easiest to do.
  • Clear guidance on how to make architectural choices.
  • Easily extendable in the appropriate places.
  • As simple as possible but no simpler.
  • Strong cohesion / low coupling
  • The problem space forms a large percentage of the framework’s solution space


Usability

Usability is vitally important for a number of reasons: it improves trust and customer satisfaction, and it reduces support costs. Any technology you use should allow you to build a world class user experience.


  • No vendor specific technologies
  • Support the latest standards
  • Must provide fast response times in the UI
  • Allow for use of graphic and charting capabilities
  • Allow animation where appropriate
  • Must support A/B testing
  • Must support analytics


Security

Security is the capability of a system to reduce the chance of malicious or accidental actions outside of the designed usage of the system, and prevent disclosure or loss of information.


  • Passes 3rd party penetration tests
  • Uses security standards wherever possible
  • Follows security best practices.


Reliability

Reliability is the ability of a system to continue operating in the expected way over time. It is measured as the probability that a system will not fail and that it will perform its intended function for a specified time interval.


  • It doesn’t crash
  • Autonomic — when it crashes it heals itself
  • No single point of failure


Performance

Performance is an indication of the responsiveness of a system to execute specific actions in a given time interval. It can be measured in terms of latency or throughput. Latency is the time taken to respond to any event. Throughput is the number of events that take place in a given amount of time.


  • Support an appropriate level of performance.
  • Low latency to the UI (<250 ms for 90% of requests, <2 s for all requests) or provide mechanisms to compensate (messaging, caching, etc.)


Scalability

Scalability is the ability of a system to either handle increases in load without impact on the performance of the system, or the ability to be readily enlarged.


  • We prefer scaling out to scaling up.
  • Easy to add more processing nodes.
  • Easy to load balance new nodes.
  • Each node should be low overhead.
  • Licensing should not prevent scaling.


Testability

Testability is a measure of how well a system or its components allow you to create test criteria and execute tests to determine if the criteria are met.


  • Provide mechanisms to mock data.
  • Trigger back end processes via scripting.
  • Batch processes should be fast when using small data sets.
  • Easy to create known data.
  • Ability to automate UI testing.


Interoperability

Do you play well with others? Communication protocols, interfaces, and data formats are the key considerations for interoperability. Standardization is also an important aspect to consider when designing an interoperable system.


  • Use open standards where available.
  • Publish standards where not available.
  • Provides you with many options when selecting 3rd party systems

Transparency and troubleshooting

When something goes wrong, how easy is it to track down the error and reproduce it?


  • All errors and important events are logged in a meaningful way
  • Easily comprehensible stack traces
  • All data needed to reproduce an error is included in the log
  • Debug logs can be turned on and off
  • It should be easy to trace an error all the way through the application.

Community and Product Growth

There should be a strong community behind the product you are using. Having other people who have already solved the problems you are facing is a major factor in how easy a product or framework is to live with.


  • Many plugins and open source projects related to the framework
  • Active repo on GitHub if it’s open source
  • Lots of questions and answers on Stack Overflow.
  • A Google trends graph that is going up and to the right.
  • Many books, blogs and tutorials.


Ease of deployment

Deployment and propagation through different environments is a huge cost. A product that is difficult to deploy requires longer release cycles and makes it harder to respond to change or fix bugs.


  • Automated scriptable deployments
  • Automated tests are easy to write.
  • Fast build times.
  • File based configuration or easily scriptable configuration.
  • Small physical size.
  • Licensing should not prevent multiple environments.
  • Easy rollback

How do you stack up?

Next time you evaluate a web stack or your application architecture, go through this list and try to see if you can tweak your design to tick more of the boxes outlined above. If you keep these attributes in mind while building your app you will end up with a better end product.

Why your code is so hard to understand

“What the hell was I thinking?!?”

It’s 1:30AM and I am staring at a piece of code I wrote no more than a month ago. At the time it seemed like a work of art. It all made sense. It was elegant and simple and amazing. Not anymore. I have a deadline tomorrow and discovered a bug a few hours ago. What seemed simple and logical at the time just doesn’t make sense anymore. Surely if I wrote the code I should be smart enough to understand it?

After one too many experiences like this I started thinking seriously about why my code makes perfect sense while I am writing it but looks like gibberish when I go back to it a few weeks or months later.

Problem #1, overly complex mental models.

The first step in understanding why your code is hard to read when you come back to it after a break is understanding how we mentally model problems. Almost all the code you write is trying to solve a real world problem. Before you can write any code you need to understand the problem you are trying to solve. This is often the hardest step in programming.

In order to solve any real world problem we first need to form a mental model of that problem. Think of this as the intent of your program. Next you need to form a model of a solution that will achieve your program’s intent. Let’s call this the semantic model. Never confuse the intent of your program with your solution to that intent. We tend to think primarily in terms of solutions, and often bypass the formation of a model of intent.

Your next step is to form the simplest semantic model possible. This is the second place things can go wrong. If you don’t take the time to really understand the problem you are trying to solve you tend to stumble onto a model as you code. If on the other hand you really think about what you are trying to do you can often come up with a much simpler model that is sufficient to achieve your original intent.

Eliminating as much of this accidental complexity as possible is crucial if you want easy to maintain, simple code. The problems we are trying to solve are complex enough. Don’t add to it if you don’t have to.

Problem #2, poor translation of semantic models into code.

Once you have formed the best semantic model you can it’s time to translate that into code. We’ll call this the syntactic model. You are trying to translate the meaning of your semantic model into syntax that a computer can understand.

If you have an amazing semantic model but then mess it up in the translation to code, you are going to have a hard time when you need to come back to change your code at a later stage. When you have the semantic model fresh in your mind, it’s easy to map your code onto it. It’s not hard to remember that a variable named “x” is actually the date a record was created and “y” the date it was deleted. When you come back three months later, you don’t have this semantic model in your head, so those same variable names make no sense.

Your task in translating a semantic model into syntax is to try and leave as many clues as possible that will allow you to rebuild the semantic model when you come back at a later time.

So how do you do this?

Class structure and names.

If you are using an OO language, try to keep your class structure and names as close to your semantic model as possible. Domain Driven Design is a movement that places extreme importance on this practice. Even if you don’t buy into the full DDD approach, you should think very carefully about class structure and names. Each class is a clue you leave for yourself and others that will help you re-build your mental model when you return later.

Variable, parameter and method names.

Try to avoid generic variable and method names. Don’t call a method “Process” when “PaySalesCommission” makes more sense. Don’t call a variable “x” when it should be “currentContract”. Don’t have a parameter named “input” when “outstandingInvoices” is better.
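A hypothetical before-and-after (all names invented for illustration):

```python
from datetime import date

# Before: generic names force the reader to rebuild the semantic
# model from scratch (what is "input"? what is "x"?)
def process(input, x):
    return [i for i in input if i["due"] < x]

# After: the names carry the semantic model for your future self
def find_overdue_invoices(outstanding_invoices, current_date):
    return [inv for inv in outstanding_invoices if inv["due"] < current_date]
```

The logic is identical; only the second version tells you, at a glance, that it finds invoices whose due date has passed.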

Single responsibility principle (SRP).

The SRP is one of the core object-oriented design principles and ties in with good class and variable names. It states that any class or method should do one thing and one thing only. If you want to give classes and methods meaningful names, they need to have a single well defined purpose. If a single class reads from and writes to your database, calculates sales tax, notifies clients of a sale and generates an invoice, you aren’t going to have much luck giving it a good name. I often end up refactoring a class because I struggle to give it a short enough name that describes everything it does. For a longer discussion on the SRP and other OO principles, have a look at my post on Object Oriented Design.
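A sketch of that refactoring (hypothetical classes): once each class does one thing, a short, honest name falls out naturally.

```python
# Hard to name: it would touch the database, tax rules and
# invoicing all at once ("SalesThing"? "SalesManager"?)

# After applying the SRP, each class has one purpose and names itself:
class SalesTaxCalculator:
    def __init__(self, rate):
        self.rate = rate

    def tax_for(self, amount):
        return amount * self.rate

class InvoiceGenerator:
    def generate(self, client, amount, tax):
        return {"client": client, "amount": amount, "tax": tax}
```

Each class now has one reason to change, and the name tells the reader exactly what to expect inside.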

Appropriate comments.

If you need to do something for a reason that isn’t made clear in your code, have pity on your future self and leave a note describing why you had to do it. Comments tend to go stale quickly, so I prefer making the code as self-describing as possible; comments are there to say why you did something, not how it was done.
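A short sketch of a “why” comment (the retry scenario and client are invented for illustration):

```python
import time

def fetch_report(client, retries=3):
    for attempt in range(retries):
        try:
            return client.get_report()
        except ConnectionError:
            # Why, not how: the (hypothetical) upstream reporting service
            # drops connections during its nightly restart, so we retry
            # with a short back-off instead of failing the whole batch.
            time.sleep(0.01 * 2 ** attempt)
    raise ConnectionError("report service unavailable")
```

The code already shows *how* the retry works; the comment preserves the reason it exists, which no amount of good naming can express.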

Problem #3, not enough chunking.

Chunking in psychology is defined as the grouping of information into a single entity. So how does this apply to programming? As you gain experience as a developer you start to see repeating patterns that crop up over and over again in your solutions. The highly influential Design Patterns: Elements of Reusable Object-Oriented Software was the first book to list and explain some of these patterns. Chunking doesn’t only apply to design patterns and OO, though. In functional programming (FP) there are a number of well-known standard functions that serve the same purpose. Algorithms are another form of chunking (more on this later).

When you use chunking (design patterns, algorithms and standard functions) appropriately it allows you to stop thinking about how the code you write does something and instead think about what it does. This reduces the distance between your syntactic model (your code) and the semantic model (the model in your head). The shorter this distance the easier it is to re-build your mental model when you return to your code at a later stage.
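In Python, for example, the standard chunks map, filter and reduce let you read three loops as three ideas (the invoice amounts and VAT rate are invented for illustration):

```python
from functools import reduce

amounts = [120, 80, 300, 45]

# Three named chunks instead of three hand-written loops:
large = list(filter(lambda a: a > 100, amounts))           # "a filter"
with_vat = list(map(lambda a: round(a * 1.15, 2), large))  # "a map"
total = reduce(lambda acc, a: acc + a, with_vat, 0)        # "a fold"
```

Because each function names its concept, you can think “filter, then map, then fold” rather than simulating loop bodies in your head.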

If you are interested in learning more about the functions used in FP have a look at my article on functional programming for web developers.

Problem #4, obscured usage.

Up to now we have mainly spoken about how to structure your classes, methods and variable names. Another important part of your mental model is understanding how these methods are supposed to be used. Once again this is quite clear when you initially form your mental model. When you come back later it’s often quite difficult to reconstruct all the intended uses of your classes and methods. Usually this is because different usages are scattered throughout the rest of your program. Sometimes even across many different projects.

This is where I find test cases to be very useful. Besides the obvious benefits associated with knowing if a change broke your code, tests provide a full set of example use cases for your code. Instead of having to trawl through a hundred files, looking for references you can get a full picture just by looking at your tests.
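A small sketch (hypothetical function, invented for illustration) of tests doubling as a catalogue of intended uses:

```python
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

# Reading these tests tells you how the function is meant to be used:
def test_typical_discount():
    assert apply_discount(200, 25) == 150

def test_no_discount():
    assert apply_discount(200, 0) == 200

def test_invalid_percent_rejected():
    try:
        apply_discount(200, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

A newcomer (or your future self) can skim the test names and see the valid range, the typical call, and the failure mode without hunting through call sites.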

Bear in mind that in order for this to be useful you need to have a complete set of test cases. If your tests only cover some of your intended uses you are going to be in trouble later on if you assume the tests are complete.

Problem #5, no clear path between the different models.

Often your code is technically very good, and extremely elegant, but there is a very unnatural jump from program intent to semantic model to code. It’s important to consider the transparency of the stack of models you select. The journey from the program intent to semantic model to code needs to be as smooth as possible. You should be able to see all the way through each model to the problem. It may at times be better to choose a particular class structure or algorithm not for its elegance in isolation, but for its ability to connect the various models and leave a natural path towards reconstructing intent. As you go from abstract program intent to concrete code the choices you make should be driven by the clarity with which you’re able to represent the more abstract model below it.

Problem #6, inventing algorithms.

Often we as programmers think we are inventing algorithms to solve our problems. This is hardly ever the case. In almost all cases there are existing algorithms that can be put together to solve your problem: Dijkstra’s algorithm, Levenshtein distance, Voronoi tessellation, and so on. Programming for the most part consists of choosing existing algorithms in the right combination to solve your problem. If you are inventing new algorithms you either don’t know the right algorithm or are working on your PhD thesis.
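Levenshtein distance is a good example: the well-known dynamic-programming solution fits in a dozen lines, and reaching for it beats re-inventing it.

```python
def levenshtein(a, b):
    """Edit distance between two strings via the classic row-by-row DP."""
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            current.append(min(
                previous[j] + 1,         # deletion
                current[j - 1] + 1,      # insertion
                previous[j - 1] + cost,  # substitution
            ))
        previous = current
    return previous[-1]
```

Recognising that your fuzzy-matching problem *is* Levenshtein distance lets you chunk the whole routine into one named concept.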


In the end it boils down to this: as a programmer your goal is to construct the simplest possible semantic model that solves your problem. Translate that semantic model as closely as possible into a syntactic model (code) and leave as many clues as possible so that whoever looks at your code after you can re-create the same semantic model you originally had in mind.

Imagine you are leaving breadcrumbs behind you as you walk through the brightly lit forest of your code. Trust me, when you need to find your way back later on, that forest is going to seem dark and misty and foreboding.

It sounds simple but in reality it is very difficult to do well.

A special thanks to Nic Young and Ulvi Guliyev for their input on this article.

I write

I write in bed early in the mornings with a cup of coffee next to me, while my mind is clear and writing comes easy. Which isn’t always. Sometimes I feel dark thoughts scuttle across the corners of my mind like spiders hiding from the light. At times like these I seem to have more important things to do, like tidying up last night’s dishes or checking my email. Anything really. As long as I don’t have to pull back the curtains and let the light into those lonely corners.

I write at my computer. I write about my job and what I know.

I sit at my desk, making lists. Lists of things I need to do and things I will never do, reasons to stay and reasons to go. I go back to the lists sometimes to cross off the things I have done or decided aren’t worth doing. Other times simply writing them is enough.

I write to distill years of trial and error into something that might save others from the mistakes I made.

The scratch of my pen fills the house, empty and alone around me. When my relationship of 6 years was slowly falling apart I wrote about how I wished things could be the way they were when we still made each other smile. I wrote about how I wanted her to change. I didn’t write about how I too drowned our love. I didn’t write about why I chose to stay in a relationship that was making both of us sad and lonely. I didn’t write about wanting to feel loved and safe in my own home again.

I write to convey the concepts and attitudes that make me good at what I do so that other people can learn and grow.

I write in a quiet corner of the office. We broke up and I can’t focus on anything but the sadness. I have to get these thoughts out of my head. They spill from my pen and sit on the page like a coral snake on freshly pressed linen. They’re still venomous but now I can keep my distance. Tonight they’ll slither back into my mind and heart, but weaker, slower. Less powerful now that they’re out of my head.

I write to discover what I have learnt and I write to see what I still need to figure out.

I write in a wooden chair, the morning sun reflecting off the page. I think of how Harry, my little kitten, would’ve chewed the page, casting motes adrift in the sunlight. For months after he died I would write about what I wanted from work, how I felt about success and what it meant to me. How I could learn to be more comfortable with myself. I didn’t write about how my heart ached for the fragile thing I felt safe to love because he couldn’t judge or reject me. I didn’t write about how hard it is for me to open up to people and how easy it is to love a pet. I didn’t write about the guilt I felt at waiting one more day to see if he felt better before I took him back to the vet. I didn’t write about how I wished I could have held him one last time, as his lungs filled with fluid and he took his last struggling breath.

I write to help people understand.

The plastic table wobbles as I write alone in front of a cheap cafe, looking up at snowboarders on the Swiss Alps, the smell of snow in the air. A waitress wearing flesh coloured tights and a fake smile brings me a soggy burger and curly fries. I listen to audiobooks and podcasts and take notes on art and marketing and work and success. I scribble and sketch to distract me from the fact that I am alone. I construct a bridge made out of ideas between me and the strangers I listen to. I broaden my mind while my heart shrinks. It’s always been easier for me to think than feel. It’s harder still for me to express those feelings to others. Maybe that’s what attracted me to writing in the first place. Nobody had to hear me. I started writing about logical things, safe things. Ideas, theories, research, but those feelings would surface despite myself, accidentally slipping out and staining the page like mustard on a new tie.

I write to learn why I think what I think.

I wonder at the power of words. The act of committing a thought to paper makes me feel more rigid, less adaptable. When it's all in my head I can change my mind and say to myself "I never thought that". When I write these things down they sit there on the page. They dare me to contradict them while begging me to be consistent. At the same time there is a terrible beauty in that moment between the impulse to write something and the act of writing it. Often what I mean to write and what comes out are only distant cousins. Writing for me is not just a record of my thoughts, it's a tool that shapes the way I think.

I wonder about the interplay between the sentence and the idea it represents, the war between a true thought said ineloquently and a beautiful lie. What happens when I tell a lie I want to believe in a way that feels true?

This question floats in the back of my mind whenever I write. Especially when I get close to exposing those dark thoughts. Sometimes I feel it's safer to keep the curtains closed. I'm afraid I will discover something lurking in those dusty corners. Something that doesn't fit with my carefully constructed idea of who I am. Something true.

I write.

Originally published on October 28, 2014.

12 Skills you need to build a damn good product

So you know how to build a digital product?

Agile methodologies all talk about "cross-functional teams", yet if you look at the vast majority of agile teams they probably have the following "roles":

  • Developer
  • Tester
  • Analyst

Every once in a while a graphic designer is thrown into the mix. While cross-functional skills are the right idea, we tend to leave some critical skills out of the mix.

Skills like:

Infrastructure Automation

If you build a web app you should run it.

Infrastructure automation and administration should live inside the team. This allows you to feel how easy or hard your app is to live with.

Key responsibilities

  • Automated app deployment (yes even for mobile)
  • Self healing infrastructure
  • Test driven infrastructure
  • 12 Factor apps
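To make "self healing" slightly more concrete, here is a minimal sketch of the idea; the `check` and `restart` hooks are hypothetical stand-ins for whatever your platform actually provides (an HTTP health endpoint, `systemctl restart`, a cloud API):

```python
from typing import Callable

def self_heal(check: Callable[[], bool],
              restart: Callable[[], None],
              max_attempts: int = 3) -> bool:
    """Keep restarting an unhealthy service until its health check
    passes or we run out of attempts. Returns the final health status."""
    for _ in range(max_attempts):
        if check():
            return True
        restart()  # real code would log this and back off between attempts
    return check()
```

In practice this loop usually lives in your orchestrator rather than your own code; a Kubernetes liveness probe is essentially this pattern. The point of owning it as a team is feeling how often it fires.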

Community Building

If you don’t know who your clients are, or even worse if you don’t have any yet, how do you know you are building the right app?

You should start building a dialogue with your community before you write a single line of code.

Key responsibilities

  • Knowing where your users are
  • Understanding your users
  • Understanding why you do what you do
  • Articulating why you do what you do to your users.
  • Giving users something of value first
  • Developing and executing a launch strategy for your app


Coding

This is an obvious one. I will say that developers need to focus more on principles than on technologies. You should also constantly be working on gaining perspective.

Key responsibilities

  • Build the simplest responsible solution.
  • Pick technologies that serve the users' needs.
  • Have a strategic outlook on the architecture.
  • Understand that you will probably build the wrong app initially.
  • Write great code!


Copywriting

Text is still one of the most important mediums for communication when building mobile and web apps. You should craft the sentences in your product with as much care as you craft the code.

Key responsibilities

  • Speak your customers' language
  • Tell your story
  • Why is more important than what
  • Be concise
  • Be clear

Data visualisation

The visual system is by far the highest bandwidth information channel available to humans. Good data visualisation optimises this channel. Using 3D pie charts clogs it with junk data.

Which would you rather do?

Key responsibilities

  • Understanding the data in your web application.
  • Understanding the insights your users need to have.
  • Applying data visualisation theory to turn data into insight.
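As a toy illustration of the principle that presentation should maximise signal: even a plain-text bar chart, sorted by value, makes magnitudes easier to compare than a 3D pie chart. A sketch with made-up data:

```python
def text_bar_chart(data: dict, width: int = 20) -> list:
    """Render values as proportional text bars, largest first,
    so the eye can compare magnitudes at a glance."""
    peak = max(data.values())
    lines = []
    for label, value in sorted(data.items(), key=lambda kv: -kv[1]):
        bar = "#" * round(width * value / peak)
        lines.append(f"{label:<10}{bar} {value}")
    return lines

print("\n".join(text_bar_chart({"active": 480, "signups": 120, "churned": 30})))
```

The same thinking (order by magnitude, encode value as length, drop decoration) carries straight over to real charting libraries.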

User Experience

Just because you like the way it looks doesn't mean it's the best way for a web app to work. Designs are only good if they are useful, usable and delightful. If you would like to know more, I wrote a resource guide on ways to learn user experience design.

The only way to know how something works in the real world is to watch people using it.

Key responsibilities

  • Paper prototyping.
  • Usability testing.
  • A/B test to improve existing features. (this is harder on mobile apps)
  • Use metrics to make informed decisions.
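An A/B test is only as good as its statistics. A minimal two-proportion z-test needs nothing beyond the standard library; the traffic numbers below are invented for illustration:

```python
from math import sqrt, erfc

def ab_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion
    rates, using the normal approximation (fine for large samples)."""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    return erfc(abs(z) / sqrt(2))

# Variant B converts at 25% vs A's 20%, with 1,000 visitors each:
p = ab_p_value(200, 1000, 250, 1000)
```

With p below 0.05 you would normally call the difference real; for small samples, reach for a proper stats library instead of this approximation.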


Business Modelling

In order to optimise impact and revenue you need to understand where the money comes from and where it goes. How much does it cost to acquire a new user? How long do they need to keep using your app before you earn that money back?

Where are your major costs? Are you optimising the right part of your app to limit costs and maximise revenue? Each web app is different and you need to understand how yours is unique.

Key responsibilities

  • Knowing how your business makes money
  • Prioritisation of features
  • Knowing where costs come from
  • Understand what competitors are doing
  • Finding innovative ways to maximise revenue and minimise costs.
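The customer-acquisition question above reduces to simple arithmetic. A back-of-the-envelope payback calculation, with all figures hypothetical:

```python
from math import ceil

def payback_months(cac: float, monthly_revenue: float, gross_margin: float) -> int:
    """Months a user must stay subscribed before their gross profit
    covers the cost of acquiring them (CAC)."""
    monthly_profit = monthly_revenue * gross_margin
    return ceil(cac / monthly_profit)

# $40 to acquire a user paying $10/month at a 70% gross margin:
months = payback_months(cac=40, monthly_revenue=10, gross_margin=0.7)  # 6 months
```

If users typically churn before month six, every new signup in this example loses money, which tells you exactly which part of the app to optimise.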


Innovation

Innovate or die!

That is the Silicon Valley mantra. If you want your web app to compete on the global scene you need to live by it.

Key responsibilities

  • Reduce cost of failure
  • Challenge traditional thinking
  • Broaden the scope of possible solutions


Business Analysis

Another of the more traditional roles. You need to get into the nitty-gritty of how things work. Understand the users' needs back to front and think about all those edge cases.

Key responsibilities

  • Understand and optimise business processes
  • Understand and optimise customer workflows
  • Legal requirements


Visual Design

Are you putting your best foot forward?

People make snap judgements about the quality of your web app. You need to be damn sure it makes a good first impression. Luckily there is a wealth of information available online that can help you learn web design.

Design every interaction, think about the whole experience.

Key responsibilities

  • Make the app pretty
  • Design for target devices and user needs
  • Design is how the app works not just how it looks
  • Design the whole system
  • Make the small things matter


Testing

Is your product a smooth ride or do the wheels come off on a regular basis?

You need to make sure you have a solid testing framework. This includes unit, acceptance, infrastructure, performance and load testing. All automated, all repeatable.

Key responsibilities

  • Automate!
  • Find errors as soon as possible
  • Identify root causes
  • Test the infrastructure
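To make "automate" concrete, here is the cheapest layer of that framework, a unit test, written with the standard library's unittest. The function under test is a made-up example:

```python
import unittest

def parse_price(text: str) -> float:
    """Hypothetical function under test: turn a string like
    '$1,234.50' into the number 1234.5."""
    return float(text.strip().lstrip("$").replace(",", ""))

class ParsePriceTest(unittest.TestCase):
    def test_simple_price(self):
        self.assertEqual(parse_price("$19.99"), 19.99)

    def test_thousands_separator(self):
        self.assertEqual(parse_price("$1,234.50"), 1234.5)

    def test_surrounding_whitespace(self):
        self.assertEqual(parse_price("  $5  "), 5.0)
```

Run it with `python -m unittest`; the same suite then runs unchanged in CI, which is what makes it repeatable.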

Domain Knowledge

I left this one for last since it is the one that varies the most from project to project. If you are building medical software you had best know about medicine. Make sure you know what you are talking about!

Key responsibilities

  • Challenge existing industry thinking.
  • Understand the latest research in your field.
  • Become an expert.

While having all these skills is by no means a recipe for success, it definitely gives you a massive advantage over people who don't.

Even if you think you aren't doing all these things, you are; you might just be doing them badly. None of them should be an afterthought.

Does your team have what it takes?

More information

For more information on what it takes to become a world class software developer check out some of the posts below:

How to become a web developer. Part 1 : The fundamentals

How to become a web developer. Part 2 : Larger projects

How to become a web developer. Part 3 : Object Oriented Design

How to become a web developer. Part 4 : Functional Programming

Originally published on April 2, 2014.

10 Questions Developers Should be Asking Themselves

So you want to become a web developer?

Well then it’s time to put down that “Learn Super Duper Language v8.3 in 24 hours” book. Instead, make it a habit to ask yourself these 10 questions every day.

Is there a pattern here?

Looking for patterns in what works and what doesn’t work leads to discovering the underlying principles that drive seemingly unrelated concepts and behaviours. To get a deeper understanding of the work that you do make it a habit of asking yourself “Is there a pattern here?”.

This applies to more than just your code. Is there a pattern in the types of changes requested by business? Is there a pattern in the way technologies evolve? Are you seeing the same types of bugs popping up again and again?

To understand is to perceive patterns — Isaiah Berlin

How can I make this simpler?

Often as web developers we want to produce complex and scalable solutions. Making something tremendously complex makes you feel like the master of your universe. The problem is that you will never be able to predict how your product and business are going to change in the future.

“Architecture” and coding is much more like gardening than architecture. You need to be able to adapt to an ever changing environment. The more complex your solution the more difficult this becomes.

Simplicity is the ultimate sophistication. — Leonardo da Vinci

Why does it work like that?

Knowing that something works and knowing why it works that way are two very different things. If you know why something behaves the way it does you are able to make significantly better decisions.

The difference between a great coder and somebody who merely knows a programming language is the depth of understanding that comes from asking why.

The same principle applies when fixing an issue. “Just restart the service.” “Have you tried rebooting it?” We have all said something along those lines when a problem pops up. Every time you say something like that you lose a golden opportunity to learn.

Understanding why something broke allows you to fix the root cause and eliminate this class of issues permanently. At the very least you won’t make the same mistake again.

Has somebody done this before?

Whenever you find yourself inventing a complex algorithm you are probably on the wrong track. Unless you are busy researching a PhD thesis, chances are extremely good that somebody else has already solved this problem.

Need to write an algorithm to add a label to the item closest to a user's mouse? Have a look at Voronoi tessellations. Want to find the shortest path for a delivery truck? Look at Dijkstra's algorithm. Want to find tags similar to the one the user just entered? Figure out their Levenshtein distance.

Those are just a few examples but trust me, they are everywhere.
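To show how little inventing is actually required: the delivery-truck case is Dijkstra's algorithm, and Python's heapq provides the priority queue for free. A sketch over a made-up road network:

```python
import heapq

def dijkstra(graph: dict, start: str) -> dict:
    """Shortest distance from `start` to every reachable node.
    `graph` maps each node to a list of (neighbour, weight) pairs."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale entry: we already found a shorter path
        for neighbour, weight in graph.get(node, []):
            candidate = d + weight
            if candidate < dist.get(neighbour, float("inf")):
                dist[neighbour] = candidate
                heapq.heappush(queue, (candidate, neighbour))
    return dist

roads = {"depot": [("a", 4), ("b", 1)],
         "b": [("a", 2), ("c", 5)],
         "a": [("c", 1)]}
shortest = dijkstra(roads, "depot")  # {'depot': 0, 'a': 3, 'b': 1, 'c': 4}
```

Your job is recognising that the problem is shortest-path; the algorithm itself has been solved since 1956.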

If I have seen further it is by standing on ye sholders of Giants. — Isaac Newton

Who said it first?

So you think you know REST right?

Have you read Roy Fielding's original paper describing REST, and do you understand its intended purpose? That blog post by that guy who has 5 minutes more experience than you using the REST API generation wizard in Super cool IDE v7 doesn't count.

Do yourself a favour and always try to read the original source of a concept or theory. Then by all means go read the latest developments by industry thought leaders, but if you don't know where they started how can you follow where they are going?

Do I love what I’m doing?

Let's face it: programming is hard.

Besides being hard, programming is constantly evolving. The state-of-the-art framework from 2 years ago is a clunky dinosaur by today's standards. To stay at the top of your game you will need to commit to a lifelong process of learning and research.

If you don’t love what you are doing you don’t have a hope in hell of keeping up with the guys who do. So find out what kind of coding gets you fired up. Don’t decide to become a security specialist because there is a gap in the market or because it pays well, don’t become a UX expert just because an article just came out in WIRED saying that UX is the hottest job in tech.

I’ll say it again, do what you love.

Do what you love and the necessary resources will follow. — Peter McWilliams

Where else could I use this?

One of the biggest limits I see web developers placing on themselves is a failure of imagination.

If we learn something in a specific context or see a technique used to solve a specific problem we assume that’s the only place it applies. This is almost always wrong. Every time you learn something new ask yourself: “Where else could I use this?”.

Found great new positioning methods to place nodes on a graph, how about applying that same technique to find interesting data points in a dataset that has 2 dimensions? Found a cool way to send data over websockets from the client to the server? How would this apply in making a scalable set of backend services? Sometimes you will be wrong, but sometimes you will be right.

Which brings us to our next question…

Logic will get you from A to Z; imagination will get you everywhere. — Albert Einstein

What did I fail at today?

One of the easiest ways to increase innovation is to lower the cost of failure.

The game development company Valve has embraced this like few others. The same applies to your progression along the path to becoming a web developer: if you are afraid to fail you will never make those big breakthroughs.

Be brave, try something, fail, learn and try again.

Do not fear mistakes. You will know failure. Continue to reach out. — Benjamin Franklin

How can we make this possible?

In the world we live in there really is very little that is impossible (with a few exceptions).

Start from the assumption that whatever you want to do is possible and then work your way back. You might find that what you wanted to do is impractical for the time being, but with the pace of change in today's world it might become practical sooner than you think.

It always seems impossible until it's done. — Nelson Mandela

Who can I learn from?

You should never work anywhere where you are the smartest person in the room.

Pick jobs and companies where you can work with people who inspire you and challenge you to be better. It doesn't have to be coding related; there is a world outside your text editor and the command line. Learn things from other fields and find ways to apply them in your job.

Being competent isn’t good enough anymore.