The Continuous Economy

Value

The Internet is now more than 20 years old, and so, after two decades of what we quaintly called e-commerce and now just call “business,” it’s appropriate to ask whether the concept of value has changed.

In a world of pervasive information devices, a cost of computing that has fallen by many orders of magnitude since the birth of the Internet, and more transactions occurring online than through physical exchange, can value still be the same?

Noting Marc Andreessen’s famous observation that “software is eating the world,” we all know software has become the dominant method chosen by consumers for selecting, consuming, and paying for their goods and services.

So how is one piece of information more valuable than another? Or, more specifically, how is one piece of software more valuable than another?

A Brief – But Essential – Digression on the History of Value

In Medieval times, the value of a good was thought to consist of its utility, or usefulness, and its scarcity. So, wood in deserts would be more valuable than wood in rainforests. The problem with this view was that it did not take into account what it cost to produce something. Given a marketplace full of wooden wagons, which wagon would have the most value?

In the 17th century, theorists like Petty and Cantillon tried to solve this problem by rooting value in the basic factors of production – labor and land, respectively. Some wood is harder to chop than others (apparently). Philosophers like John Locke, however, countered that utility remained a decisive factor, and John Law posed the related water-diamond paradox – diamonds are of little use to a parched man in the desert, yet command a far higher price than water.

Notwithstanding this, throughout most of the 18th and 19th centuries, economists adopted a labor-centered theory of value. This made sense politically and socially as land was claimed for industrial-scale development and skilled tradesmen from the country were drafted to work in factories.

But how much should they be paid? Karl Marx famously claimed that “all commodities are only definite masses of congealed labor time.” Capital, like machines (and, much later, software), locked up value and released it for those who owned the capital rather than those who made it, the laborers.

The problem with labor- and land-based theories of value, though, was that they could not explain why prices moved for goods whose supply stayed the same no matter how much demand increased; a cooper can only make so many barrels, yet the price of barrels still rises when demand does.

In the 19th century, Jevons and Menger revisited the utility theory of value, independently claiming that all value was based on utility. Jevons’ quote outlines the rather massive shift now being considered, “Cost of production determines supply, supply determines final degree of utility, final degree of utility determines value.” This was the so-called “marginalist” revolution.

We finally arrive in the 20th century and contemplate work done by Alfred Marshall. He attempted to reconcile the labor and utility theories of value by considering “time effects” – i.e. the time each factor takes to have an impact on the final value.

So we now find ourselves with a basic model of value that is determined by the factors of production (land, labor and capital), utility and time.

This time dimension is critical to our story in the 21st century.

Indeed, 20 years ago, customers chose products based on product features. To some extent, that’s still true today. But it’s more important to recognize that customers in 2016 are continuously looking for new products, embracing them, and then switching again. In fact, the willingness to switch is so strong that, according to Harris Interactive, 86 percent of consumers have quit doing business with a company because of a single bad experience.

If you think this is an outlier that only applies to hip Web properties, think again. Over 40 UK banks guarantee the switching of bank accounts at no cost to the customer.

So, it appears that we are now experiencing an economy with a very short memory for value.

The Frictional Value of a Product

We call the perceived value associated with being able to switch at no cost the frictional value of a product. And frictionless is better, and of higher utility, because it allows customers to embrace a product and then discard it much more easily when they find a better one.

Take this to the next logical business level and you see that some transformation of your organization is required.

Why is this an economic imperative right now? Is this just an academic idea that has no bearing on us working stiffs?

Consider a simple example. Two decades ago, you would choose a car based on your ideal product characteristics (make, size etc.) and how much you could afford to spend. Your choice would live with you while you paid it off, or until you could justify changing it. The idea of choosing products based on product features is, of course, at the core of all consumer choice. If this weren’t true, how would we decide what to buy?

But what happens when we relax the assumption that you have to live with your choice for some period of time? What would happen to your choice of car if any choice you made today could be rescinded tomorrow and a new choice made? What if your car was updated every week instead of every year? What if there was an April 2016 model of the new Ford instead of just the 2016 model?

Well, the customer now has the opportunity to make feature choices at a greater frequency than was possible in the past. This is already happening at the software level as car operating systems are updated wirelessly and provide new features – through software updates – to existing owners. In this case, the car becomes a platform for software. The physical choice of platform (car) is also beginning to change, though. For those who wish to change their car regularly, there are many services, e.g. Zipcar (http://www.zipcar.com), that allow you to rent access to a car and return it (or just leave it somewhere) when you’re done.

The contrasting cases – a car easily swapped when you don’t want to own one, and a car you own being software-updated regularly – are instructive. Both aim to reduce the friction of making changes: the former reduces the friction of changing the physical product itself; the latter increases the speed of adding new features, independent of the physical platform. In both cases, these features are accessed using some software – a car operating system or a car app.

Let’s apply this to software itself.

Imagine if, for a particular piece of software, say a task management application, you could get it for nothing (or at a very low price) – such that changing it becomes unconstrained by its cost to you. Add to this that new features are being bolted on to the application – and its competitors’ products, too – very frequently. The only feature limiting your ability to move to another application is the application’s ability to let you transport your personalization data (e.g. your tasks in the task app) to the new app. If your requirements changed very slowly, and other products changed very slowly, it is likely – as in the past – that the value of switching would be low. But if you were constantly on the lookout for new features and wanted to try them out, and new features were continuously being added, it follows that every app you chose to move to would need to enable you to move “back” again. This is because the prevailing assumption of the software product consumer is that other applications will conceivably meet their needs better in future.
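To make “transporting your personalization data” concrete, here is a minimal sketch in Python – using a hypothetical task format and function names, not any real app’s API – of exporting tasks to a portable JSON file that a competing app could import, and reading them back for the “move back again” case.

```python
import json
from dataclasses import dataclass, asdict
from typing import List, Optional

@dataclass
class Task:
    """A single task as the user sees it: the personalization data worth keeping."""
    title: str
    due_date: Optional[str] = None   # ISO 8601 date, e.g. "2016-04-30"
    done: bool = False
    tags: Optional[List[str]] = None

def export_tasks(tasks: List[Task], path: str) -> None:
    """Write tasks to a portable JSON file that another app could import."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump([asdict(t) for t in tasks], f, indent=2)

def import_tasks(path: str) -> List[Task]:
    """Read tasks back from the portable format (the 'move back again' case)."""
    with open(path, encoding="utf-8") as f:
        return [Task(**record) for record in json.load(f)]

if __name__ == "__main__":
    tasks = [Task(title="Renew car insurance", due_date="2016-04-30", tags=["home"])]
    export_tasks(tasks, "tasks_export.json")
    print(import_tasks("tasks_export.json"))
```

The point is not the format itself but the design choice: when export and re-import are this cheap, the cost of switching – in either direction – approaches zero.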

Obviously, every software product has a set of threshold features without which consumers will tend not to choose it. A car needs to be waterproof (!) with airbags and ABS brakes. A task app needs to allow you to add dates to your tasks. This set of features starts off as what is often termed a minimum viable product.

I believe that this set now includes the ability to switch away from the product and back again. In other words, software products are being designed to reduce utility friction.

Value and Time in Software

What does this mean in terms of defining value for software?

For those economists of the marginalist persuasion, it establishes that value is driven by utility. However, it also highlights that this utility increases as the time required for it to take effect in the market shrinks.

It therefore follows that we need product development processes that are lean and high velocity, enabling the translation of new product features into customer choice as easily as possible.

I call this trend the “continuous economy.”

To return to the issue of labor value, though, companies simply can’t compete today unless they’re continuously shipping innovative software products and improving the way they produce them.

The reason for this is easy to see – increased value is dependent on increased labor productivity as well as increasing utility at a specific time. Or, put another way, engineers need care and feeding if they’re to produce the goods.

Indeed, the organization, and its people and processes producing software, must be able to pivot with agility in what seems like a nanosecond to keep pace with the quicksilver velocity of the 21st century’s continuous economy.

This is all very different than what we’ve experienced, even in the recent past.

Two decades ago, transformation was a discrete event. An organization confronted a new competitor in the market, so it responded by developing and deploying new operations, organization or software to meet the challenge.

Today, we need to continuously develop and deploy new software as part of an embedded and sustained organizational culture.

If you develop and deploy discretely in the continuous economy, you won’t be in business very long. The type of skills you need in this economy are those developed continuously as you learn from your customer.

Summary Equation

Continuous Software Delivery = Continuous Product Delivery = Continuous Organizational Transformation

Every organization today is a software organization because every organization’s products are at least partly software. Therefore, continuous software delivery enables continuous product delivery, which is what business requires right now.

And, taking this a step further, once an organization truly embraces continuous delivery, it must also embrace continuous testing and continuous integration.

There are tremendous benefits for businesses here.

If you’re able to pull all this together – the continuous shipping of code, along with continuous testing and continuous integration – you’ll almost certainly get more stable and easily understood systems, systems that can be broken down into smaller increments, and systems that customers can readily interact with.

On a macro-level, this continuous economy, created by Web innovators, threatens to fundamentally re-shape traditional enterprises, which are now being called upon to continuously transform.

But, more specifically, the continuous economy means that enterprises must continuously reshape and refashion their offers to customers.

Disruption and Value

In the continuous economy, companies that learn slower than their customers are doomed, because they will be disrupted.

Obviously, this means acquiring new knowledge fast. But, more profoundly, in order to continuously improve your teams and your products, and meet customers where they are today, you need to learn how to learn.

Let’s consider why this is the case.

We use the term, “disruptive innovation” very often these days – so much so that its real meaning has been obscured. And, from my perspective, it’s a great example of companies learning in the wrong way.

In the mid-1990s, Clayton Christensen coined this term, and the basic idea was that competitors fighting against the established leaders in a market fly under the radar of those market leaders because they produce products that are initially of much lower quality.

Of course, these competitors also produce at much lower cost and typically invest a great deal of effort increasing quality over time. So, as they attack the established market from the bottom with low-price products, they’re also innovating the quality of their products until they can capture the high-end with high-quality products at a fraction of the production cost of the established companies.

If ever there was an example of disruption, it’s the personal computer; initially hard to use and configured by specialists, it had replaced the majority of word processing hardware and typewriters in industry by the 1990s.

Disruption occurs because companies learn using established methods; in other words, they listen to the customers they have, who buy the most from them. The process of responding to a customer in this way could be called single-loop learning – your customer tells you that the noise of the typewriter is deafening so you make a quieter one.

There is another way, however.

Learning

A simple example from Chris Argyris explains the concept of “double-loop learning” –

“[A] thermostat that automatically turns on the heat whenever the temperature in a room drops below 68°F is a good example of single-loop learning. A thermostat that could ask, ‘why am I set to 68°F?’ and then explore whether or not some other temperature might more economically achieve the goal of heating the room would be engaged in double-loop learning.”

So, what if typewriter manufacturers had continuously reflected on whether they were serving the right customers? They might have seen the massive shift toward the PC in academic institutions and the declining use of their hardware for traditional work in those places.

But, to be fair, this is not an easy process – learning what you still need to learn.

Organizations that transform at speed need to have an awareness of where they need to develop so that they can transform “toward” it.

So, sure, you have to learn from customers and gain an understanding of what they want and need.

Every organization does that now.

But it’s not enough to simply grasp that insight and intelligence and run with it.

No, you have to continuously improve how and what you learn from customers in this fluid and frictionless world.

Let’s see what this means in real-world organizational terms for a software company (and we’re all working for software companies).

Assume, for a moment, that you’re shipping an update of 1,000 lines of software code each week. That’s great. Maybe you’re using continuous delivery, DevOps or some other unicorn method.

But it’s not continuous learning – not unless you’re continuously improving how you’re learning from customers and continuously improving the teams inside your organization.

The Continuity Principle

If you can’t continuously improve how you’re learning from customers and continuously improve the teams inside your organization, you can’t ship continuously improved products. So, you need to improve what you ship and how you ship to continuously transform your organization.

Here’s an example of how this works.

Let’s say that your software development team thinks that the customer wants an improved red button that’s square instead of round. Okay, development of an improved red button proceeds and is completed in one week. But, at the same time, the team creates an automated test for the red button as well as clear metrics for determining if the red button is actually the problem. As we will discuss in more depth in Part 3, the idea that the button needed changing is a hypothesis until it has been tested with customers.

So, in addition to shipping the new square button to some customers, the team keeps some subset of customers using the round button. The customer test is then simply whether the click rates of the square and round buttons differ. This is classic A/B testing, used every day by Web companies.
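As a minimal sketch of that customer test – a standard two-proportion z-test in Python, with made-up click counts rather than real data – the check is simply whether the click-through rates of the two button variants differ by more than chance would explain.

```python
import math

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Return the z statistic for the difference between two click-through rates."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled click-through rate under the null hypothesis of "no difference".
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se

if __name__ == "__main__":
    # Hypothetical counts: A = round button (control), B = square button (variant).
    z = two_proportion_z_test(clicks_a=120, views_a=4000, clicks_b=165, views_b=4100)
    # |z| > 1.96 corresponds to roughly 95 percent confidence that the rates differ.
    print(f"z = {z:.2f}; significant at 95%: {abs(z) > 1.96}")
```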

If the processes are not currently in place to enable the development team to receive these metrics – many teams are kept in splendid isolation from the results of their work – then establishing this relationship, perhaps with the Web operations team, is also going to be required. Agile retrospectives and blameless post-mortems are also great examples of activities that promote double-loop learning.

This is as much a requirement of successful deployment of these features as the features themselves.

The focus on the delivery of product features, as well as developing internal capability to understand the success of the features, is a core principle of continuous learning.

Following directly from this is the insight that the capability of the team in general precedes the capability of the product being developed.

Therefore, we need to ensure that our teams are improving continuously, too. We know that we have to be releasing new features frequently. And note – we learn very little of value by deploying to production but not releasing to customers.

This results in “The Continuity Principle” –

We build to release, we release to learn, we learn to build.

To recap, we need to build software fast, always improving both it and the teams that produce it; creating an environment that supports learning is of equal importance to delivery.

Underpinning the Continuity Principle is another principle that gets to the heart of how we can enable teams to learn and build at velocity.

The Two-Product Principle and the IOTA Model

This is the “Two-Product Principle.”

In a nutshell, the Two-Product Principle says that you always ship twice – every time you release to customers, you’re releasing the next iteration of your team’s capability to learn from its customers.

How does a team accomplish this?

Let’s get practical.

To illustrate the approach, I’ll introduce a model called IOTA that we use in product development and that ties many of the concepts discussed in this series together. The model is outlined below, and the remainder of this article uses an example to show its use “in the wild.”

Think of the product we ship to the customer as “Product 1” – this is all about realizing our hypotheses of what we think the customer wants. But it isn’t practical action until we have actually delivered it and received feedback from the customer.

Meanwhile, the internal capabilities we wish to develop – “Product 2” – are our view of what we need to improve while we build. These capabilities are, therefore, not theoretical in the same way as “Product 1.” We can see their effectiveness, and our effectiveness at building them, the very minute we decide we need them. “Product 2” is at hand immediately.

How often, though, do we release software features that were “urgent,” but whose effectiveness is never actually measured?

Every feature should be measured.

And every feature is the realization of a hypothesis concerning capabilities the customer not only needs but will value.

What do we do with the results of our measures?

These inform the next cycle (or sprint), where we either build on our success in accurately providing for the customer or decide how to remediate our miscalculation.

A great example of this, which will be familiar to software practitioners in banks everywhere, is the budgeting application that many banks have built into their online offerings. The hypothesis was that customers needed help budgeting. This is almost certainly true, but the way in which it was tested was, in most cases, by providing a budgeting solution and then measuring its use – which, incidentally, is very low across the board.

Instead, consider this simplified IOTA model for the budgeting problem –

  • Hypotheses – Customers want to budget using historical data from their bank accounts rather than control future spending (which is how most people understand the term, “budget”). Customers understand how they spend money by categorizing their transactions.
  • Capabilities (Features) – Create an alert that notifies whenever spend in a category goes over a budgeted amount, e.g. $100 from Amazon. Enable the categorization of transactions.
  • Conditions – Obtain from the data warehouse team any attributes they already use to analyze transactions so that we can provide categories for users as a starter set. Determine if we can deduce categories of things bought using the Amazon transaction reference.
  • Targets – Within a month, 5 percent of Internet banking customers have categorized some transactions, and more than 75 percent of those customers have activated a budget alert.

A few things to note –

If the targets are not met, it automatically results in a questioning of the entire cycle from the start (hypotheses) again. In the next sprint, we may decide to further refine our target because we got mixed results – for instance, 20 percent signed up, but only 10 percent used alerts.
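As a minimal sketch of how such a check might be wired up – with hypothetical metric names and the thresholds and results from the example above, not any real banking system – a team could keep the cycle’s targets next to the measured results in code and let the comparison decide whether the next sprint builds on success or goes back to the hypotheses.

```python
# Hypothetical IOTA target review: compare the targets set for this cycle
# against the measured results and flag anything that sends us back to
# the hypotheses. Metric names and numbers are illustrative only.

targets = {
    "customers_categorizing_pct": 5.0,   # 5% of Internet banking customers categorize
    "alert_activation_pct": 75.0,        # 75% of those customers activate a budget alert
}

measured = {
    "customers_categorizing_pct": 20.0,  # the "mixed result" above: 20% signed up...
    "alert_activation_pct": 10.0,        # ...but only 10% used alerts
}

def review_cycle(targets, measured):
    """Return the targets that were missed; an empty list means build on success."""
    return [name for name, goal in targets.items() if measured[name] < goal]

missed = review_cycle(targets, measured)
if missed:
    print("Targets missed - revisit the hypotheses:", missed)
else:
    print("All targets met - build on this in the next sprint.")
```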

Notice also the conditions (Product 2). These actions will not necessarily improve the team if the basic hypotheses are false – i.e. if customers really do not want to budget using their online banking service – but they may still be useful. Using Product 2 to investigate deeper and probe outside the current scope of the hypotheses can be a useful means of not falling into the disruption trap.

To see why this is the case, consider that a bank wanted a budgeting application because it was afraid of disintermediation by apps like Mint.com. The response was to create some similar features as shown above. After the results above, the bank decided to test the ability to decline transactions by SMS, based on the budget categories and implemented by an MVP that enabled this only for Amazon purchases. On release, both the adoption of categorization and alerts spiked sharply.

For those in banking, there are many variables in this; but for the purposes of elaboration here, it’s clear that the ability to control the transaction seems important to the value perceived by the customer. And the basis for disintermediation – flashy graphical features (like budget pie charts) – seems less valuable than transaction control.

As a result, in the next cycle, we’re forced to look at other transaction types, or perhaps payment methods, that are more transparent. In this way, we use the customer to investigate value more openly. The basis for this will often be those conditions built up in Product 2 activities.

There are many Product 2 examples that are very mundane, however, like getting more powerful laptops for the team so they can run full local testing, or writing automated tests for dependent APIs.

Either way, ensuring that Product 2 features are given the same priority as Product 1 features means that, over time, the team improves the conditions within which it works, as well as its understanding of the broader ecosystem within which the product functions.

If you take this model to heart, then your organization will do all it can to shorten the time to deployment and action in order to shorten the time to gain new learning in the market. This limits the length of the costly business guessing game.

Change

I have yet to find a team that did not embrace the IOTA model as a method for sprint planning (accompanying their existing methods).

If we look deeply into what it means to operate at velocity, we see that it has to do with ensuring that those who are in control of the factors of production (i.e. the engineers in control of the code) are also intimate with the needs of the customers.

Indeed, we are in the midst of a maker-driven revolution. The makers, and not the managers, are at the sharp end of deciding what will win in the market.

This is difficult to absorb for the majority of large corporate teams.

The secret to driving this transformation is realizing that the principles we use for software development – small batches, frequent iterations, continual value delivery – are the same ones that drive effective organizational transformation.

The key here is picking a project, then trying to assemble as close to a full-stack team as you can, with a product owner, or at least a business representative who is open to dialogue about what to develop next.

Even in the case of a fairly traditional project manager who wants his or her 500 stories delivered in the next 18 months (no arguments), the practice of stating hypotheses and measuring success is difficult for any commercially minded manager to refute.

So choose an organizational barrier to overcome, one that the whole organization will agree is important but thinks insurmountable. In some companies this is as simple as provisioning a VM in less than a month (!), but in others it might be more complex. The problem could be the massive drag some compliance requirement puts on value delivery.

Either way, the combination of a project that delivers fast, actively mines Product 2, and also shows a solution to a recurrent organizational nightmare, is unstoppable.

And here’s another hard lesson – individuals don’t change organizations (even if they’re the CIO); events do.

Every project you work on should be as much a value event for your customers as it is for your organization.

At the same time, every project iteration is an opportunity to create new capabilities, both for your customer and for your organization. An event in this definition has a special meaning. An event is the visible solution to a problem the organization thought was unsolvable. An event triggers a questioning process within an organization, and the questions lead to conversations among teams about what is possible. We move from a language of impossibility to a language of possibility. So, for example, “What would it take for us to reduce our infrastructure provisioning process from two weeks to two minutes?”

The ELSA Model

This requires the organization to communicate in a language of possibility, which is where our ELSA (Event Language Structure Agency) Model comes in.

And when people at all levels of the company communicate in this language of possibility, when the impetus for transformation doesn’t just come from the top down, everyone wants to pitch in, take responsibility for change and help solve recurrent problems – not just anointed, or self-anointed, individuals.

Continuous transformation relies on effective communication that breaks down silos, because this is the only way to get insight from everybody on what needs to be built now, what the design interaction should be today and what organizational changes are needed immediately – not in the future, and not based on the past. Too often, we try to anticipate the future in software development, but in continuous transformation the priority is to act on the present.

This is the real science of action.

Conway’s Law and Anticipating the Outcome

Conway’s Law was formulated by Melvin Conway in the late 1960s, and it has had the same impact on how we think of technology transformation as Moore’s Law has had on how we build computers. Conway’s Law states that organizations will tend to build systems that reflect their communication structures. This simple idea has some very powerful implications.

Consider the typical separation between the functions required to deliver software in a technology organization. For simplicity, let’s say it’s Development to Testing to Quality Assurance to Deployment.

If you are solely concerned with your functional role, say development, and not the final outcome – successful software the customer loves using – then you will tend to optimize for that. Testing is somebody else’s problem. We see this with the frightening “systems integration testing” phase where our individually crafted units all come together. The feedback from this testing phase to the developers is often imprecise. This is because the testers test the outcome, as defined in their tests, and use this language to describe success, whereas the developers need to understand which part of their code is not functioning correctly.

Consider what the result is when we collapse unit and integration testing into the development itself. Developers take responsibility for the quality of the tests and the code across the full scope of the application being developed. They are responsible for creating working software, not just delivering code for testing. Another outcome is the increased velocity that arises from removing the hand-off between development and testing.
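As a minimal sketch of what this can look like in practice – a hypothetical categorization function with its test living in the same codebase, written here in Python with plain assertions rather than any particular team’s framework – the developer owns both the behavior and the check that it works, so quality feedback arrives before any hand-off.

```python
# A hypothetical function and its test kept together and owned by the developers
# who wrote it, so defects surface on every change rather than at a later
# "systems integration testing" phase.

def categorize_transaction(description: str) -> str:
    """Assign a spending category based on keywords in the transaction description."""
    text = description.lower()
    if "amazon" in text:
        return "shopping"
    if "shell" in text or "petrol" in text:
        return "fuel"
    return "uncategorized"

def test_categorize_transaction() -> None:
    # Checks the developer runs locally and in the pipeline on every commit.
    assert categorize_transaction("AMAZON MARKETPLACE 123") == "shopping"
    assert categorize_transaction("Shell Garage London") == "fuel"
    assert categorize_transaction("Corner cafe") == "uncategorized"

if __name__ == "__main__":
    test_categorize_transaction()
    print("All developer-owned tests passed.")
```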

The most pernicious effect of Conway’s Law is that the silos we create promote supervision of the hand-offs between the silos. If you were a tester and were measured on the success of your team at testing, you would want to ensure that you received code in a manner that optimized your ability to test it. Code not meeting a standard format or method would mean that you would be held responsible for the low-quality inputs of another team! Therefore somebody has to keep an eye on this and measure how often this failure occurs. Of course, with all these “managers” across the functions, there need to be additional managers who collate their findings and report back up to executives on the overall quality of delivery. In lean terms, this is all “waste” – it adds nothing to the actual quality of the outcome for the customer.

Our organizations have many smart and experienced people, so why do we do it this way? Consider a case where the project we were doing required a single line of code to be deployed to production. In this extreme example, the ability of a group of people (even in their silos) to quickly agree on what was good enough and promote it to production would be much greater than for a much larger block of code or functionality. Why do we work on passing such large functional units between the silo functions at one time; why do we not break it up into smaller units? The answer lies in another key aspect of large enterprise planning.

We want to do big things.

Doing big things entails a great deal of risk. We want to ensure that we do each piece of the big thing well, but we also want to ensure that each piece is done by those who can do it best. So we create specialized silos with experts in each. Each of these experts lets us know what could possibly go wrong with “the big thing.” We look into the future and try to anticipate every eventuality. This is called risk management. We then create processes to check for these risks as the project progresses.

Of course, we are subject to a confirmation bias here as we tend to look for data that confirms our risk hypotheses. Because of these processes, we believe that we can do big things. After all, with all this resource focused on managing risk, we should be able to do really big things! The reality, as anyone who has managed very large projects will know, is that the risks that sink a project are either completely unexpected and un-anticipatable or emerge from the internal dynamics of a project as the requirements of the project evolve. This is why the research shows very strongly that large, year-long projects seldom, if ever, deliver as predicted or required on the planned budget or timeline.

Even with great risk management processes, large blocks or units of work are very difficult to evaluate for quality. If you develop and integrate months’ worth of work, it is likely that things have changed since you specified the work to be done, and certainly once problems are found, it is very difficult to separate out what is functioning correctly from what is causing a specific error.

Establishing an organization that is capable of setting a big vision while breaking its delivery up into small pieces, using teams that are unconstrained by the traditional functional silos, is the change we are trying to make.

Many approaches have been developed to help us get to this point.

Some address specifically the issue of trying to remediate the complex interdependent code bases that traditional organizations create:

Brian Foote coined the term “Big Ball of Mud,” which describes the fact that many legacy systems in the enterprise are haphazardly created and, thus, difficult to integrate, grow elegantly or replace. Eric Evans, in Domain-Driven Design, identified the concept of the “Bounded Context,” the practice of breaking down complex systems into well-defined areas with specific walled-off boundaries.

In response to these problems, some practices have emerged and begun to attain great currency –

  • Continuous Delivery – described by Jez Humble and Dave Farley in their book of the same name.
  • DevOps – as defined by Adam Jacob at Chef Software.
  • Agile Practices – as defined in the Agile Manifesto.

We need to respect this thinking, but if we’re really interested in transformation, we can’t be imprisoned by lineage. And we can’t try to reconcile 15 different languages and 10 vague organizing principles to create a hybrid. We also need to make sure we’re not bogged down in process.

The bottom line is that we have to be like Bruce Lee (to borrow the metaphor from Adam Jacob), who stripped martial arts down to their very simplified essence in order to conquer an opponent. For our part, we have to focus on the core idea of continuous transformation – continuously changing the way we deploy our people, processes, skills and resources to satisfy customers. And, right now, that means acquiring new knowledge fast. The same old knowledge, the knowledge of the past, and thinking that applies solely to the software development processes, just doesn’t work in the world of now.

Every Act of Design Is an Act of Feedback

Technology companies frequently throw a ball up in the air and walk away – before it actually lands. This prevents them from obtaining crucial feedback. We see this in software development when an organization releases a new quarterly product or feature and returns four months later; sadly, this approach doesn’t allow for much effectiveness tracking, learning, or improvement.

Which is why every single action in the continuous transformation process needs to be an act of design. The act of design is an act of feedback. And, in continuous transformation, you’re continuously getting feedback – whether you’re shipping product externally to a customer or making changes inside your organization. You can see this in the IOTA Model, which encourages and stimulates small and very fast increments so you can test your assumptions about the world as well as your effectiveness in responding to those assumptions.

Entering the Crucible

In this approach, we get people and organizations geared up for continuous transformation by putting them through an exercise we call the Crucible. Basically, once you’re in the Crucible, we ask you and your team to think hard about what software you can build in two weeks – what your assumptions are, what the product improvement is, what the organizational learning change is, and how you’ll measure success.

The Crucible starts with a two-minute orientation during which the group discusses what’s possible in two weeks. Then, once a decision about what’s going to be built has been made, the team has five minutes to construct an IOTA Model. After that, it reflects for five minutes on how it worked and what it created. Finally, there’s a rest period, followed by non-work conversation.

We run this exercise 10 times in a row. At the end of 10 iterations, the team is incredibly clear about what can be built in two weeks, what problem can be solved in two weeks and what can be fixed within the organization in two weeks. It’s simple, stripped down and perfectly clear, so there’s no need for estimation.

Think of the Organization as a Product Venture

If you have 20 teams operating at this speed and cadence – and with this much clarity – you will have continuous transformation in your organization. And the continuous change will actually address and solve recurrent problems that exist for customers right now.

There’s one last part of continuous transformation, and that’s mindset. Organizations that want to continuously change need to think of themselves as product ventures – a collection of products and a collection of decisions to invest in those products.

And every team in a continuously transforming organization is a product team; so each team must include people who reflect the entire stack – infrastructure people, architecture people, apps people, code people, people who talk to customers and people who ask customers about prototype features, for example.

Team members also need to understand the economics and value of what they create. We must make their work visible. You write the software, you see the software in the world and you know it made a difference – for the organization and the customer. This is the link to reality, and it is essential to creating better software faster in an organization that is continuously transforming.