The Architecture of Speed
The ability to change fast defines modern software
In Righting Software by Juval Löwy, one idea stood out to me: Volatility-based decomposition.
The premise is simple: “Decompose your system along axes of change.”
Change here refers to areas of high volatility. Volatility means the potential for large changes that might have a ripple effect across the system (or systems), not small variations that can easily be handled with conditional logic.
What are some examples of volatility that need to be modeled when architecting software?
Business rules
Pricing models
Regulatory logic
Integration contracts
Customer configuration
Reporting formats
Detection / scoring / remediation algorithms
These elements evolve at different rates and for different reasons.
Volatility-based decomposition suggests that if two parts of the system change independently, they should be isolated from one another. The discussion shifts from “How should we structure the code?” to “How do we prevent change in one area from cascading into others?”.
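As a minimal sketch of that isolation idea (the `PricingPolicy` names here are hypothetical, not from the book): a volatile concern such as pricing sits behind a small interface, so a new pricing model is a new class rather than a cross-cutting edit.

```python
# Hypothetical sketch: the volatile pricing rule lives behind an interface,
# so changes to pricing cannot ripple into the stable code that calls it.
from abc import ABC, abstractmethod


class PricingPolicy(ABC):
    """Boundary around a volatile concern: how orders are priced."""

    @abstractmethod
    def price(self, base: float, quantity: int) -> float: ...


class FlatPricing(PricingPolicy):
    def price(self, base: float, quantity: int) -> float:
        return base * quantity


class TieredPricing(PricingPolicy):
    """A later pricing model; adding it touches no existing callers."""

    def price(self, base: float, quantity: int) -> float:
        discount = 0.9 if quantity >= 10 else 1.0
        return base * quantity * discount


def checkout_total(policy: PricingPolicy, base: float, quantity: int) -> float:
    # Stable code: it composes a policy without knowing which one it is.
    return round(policy.price(base, quantity), 2)
```

The stable caller and the volatile policy can now change independently, which is exactly the property the decomposition aims for.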
Change Speed Is a First-Class Requirement
The ability to change quickly is an unstated requirement for most software, especially platforms that hope to gain a dominant position in the market or support a small but dedicated customer base reliably for a long, long time.
It’s not a stretch to say that the speed with which we can make changes to our software influences:
Market competitiveness
Customer retention
Product learning cycles
Organizational morale
When volatile concerns are tightly coupled, every change expands regression risk and coordination cost. It’s natural in this situation for engineers to become cautious. Planning cycles and discussions stretch endlessly while delivery slows.¹
In these situations, engineering cultures often become protective of stability. Change requests are evaluated through the lens of risk and effort. That instinct is understandable when the architecture amplifies change.
Contain the change
When volatility is isolated, teams can evolve parts of the system independently. This has the effect of localizing risk, making change easier to reason about and accelerating our ship-learn cycles.
One way to frame architecture discussions and efforts, especially at the start of major projects, is that the architecture should limit and confine the effects of change.
When boundaries align with volatility:
A regulatory update affects a specific module.
A pricing modification does not ripple across unrelated domains.
A new integration leaves core business logic untouched.
Change continues, but its impact is bounded.
Containment reduces coordination overhead, cognitive load, and unintended consequences. Teams can operate with greater autonomy because responsibilities are clear and the blast radius is limited.
Predictable value delivery at speed depends on this containment.
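To make the regulatory example above concrete (the names and thresholds here are invented for illustration): the volatile rule data is private to one module, and callers depend only on a stable question-shaped contract.

```python
# Hypothetical sketch: a regulatory rule behind a stable contract. A rule
# change or a new jurisdiction is confined to this module; callers never
# read the rule table directly, so they are untouched by the update.
from dataclasses import dataclass


@dataclass
class Transaction:
    amount: float
    country: str


# Volatile: this table changes when regulation changes; nothing else does.
_REPORTING_THRESHOLDS = {"US": 10_000.0, "EU": 15_000.0}


def requires_reporting(txn: Transaction) -> bool:
    """Stable contract: callers ask a yes/no question, nothing more."""
    threshold = _REPORTING_THRESHOLDS.get(txn.country, 10_000.0)
    return txn.amount >= threshold
```

A regulatory update is then a one-line change to the table, with a blast radius of exactly one module.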
“Features Are Aspects of Integration, Not Implementation”
This was another takeaway from the book: the idea of “Manager” and “Engine” components (along with clients, resources, data access layers, and utilities). Most parts of the system can be expensive to build and must be built with care, but the “Manager” components that orchestrate all of those parts must be built to be “almost expendable”.
Customers do not interact with implementation details. They experience workflows, policies, and integrations.
A feature often represents:
A new workflow composed of existing capabilities
A policy applied to existing signals
An integration with another system
A configuration that alters behavior
A new aggregation or presentation of data
When systems are structured using traditional functional or domain decomposition techniques, features tend to cut across boundaries, and delivery requires synchronized changes across those boundaries.
When systems are structured by capability and volatility boundaries, features emerge from composing stable building blocks. Existing components are integrated in new ways; “Manager” components are born, change, and die with ease.
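A minimal sketch of that Manager/Engine split, using invented names for a detection workflow like the one listed earlier: the Engines hold the carefully built logic, while the Manager only sequences them, which is what makes it cheap to replace.

```python
# Hypothetical sketch: Engines encapsulate costly, carefully built logic;
# the Manager merely orchestrates them. A new feature is a new (or changed)
# Manager composing the same Engines, not an edit to the Engines themselves.
class ScoringEngine:
    """Expensive to build, built with care, rarely changes."""

    def score(self, signal: float) -> float:
        return min(signal * 2.0, 1.0)


class RemediationEngine:
    """Another stable building block."""

    def remediate(self, score: float) -> str:
        return "quarantine" if score >= 0.8 else "monitor"


class ThreatResponseManager:
    """Almost expendable: holds workflow order, no business logic of its own."""

    def __init__(self, scorer: ScoringEngine, remediator: RemediationEngine):
        self.scorer = scorer
        self.remediator = remediator

    def handle(self, signal: float) -> str:
        # The feature is the integration: score, then remediate.
        return self.remediator.remediate(self.scorer.score(signal))
```

Shipping a different workflow means writing a different Manager against the same Engines, so the feature really is an aspect of integration, not implementation.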
Key Insights
Architectural agility influences competitive position. Organizations that can evolve safely and frequently learn faster and respond more effectively to market shifts. The impact compounds over years.
Volatility-based decomposition requires domain awareness. Architects must understand which aspects of the system change independently and why. This demands close alignment with product strategy and business context.
Clear boundaries and stable contracts are essential. Leaky abstractions increase coupling and erode the benefits of decomposition. Early discipline creates long-term flexibility.
Change is the default state of software and architecture shapes how that change propagates. When change is contained, teams move with confidence and speed becomes sustainable.
I can see codebases that leverage volatility-based decomposition being easier for agents to work on, and I can see those codebases experiencing a tailwind of velocity. Agent sessions that change a single file, a single module, or a very small set of files in a volatility-decomposed codebase have a lower chance of falling apart and going off the rails than sessions that must make coordinated changes spread across multiple modules and dozens of files.
¹ The blame for this dysfunction doesn’t always lie with software architecture. Weak or toxic organizational culture, competing incentives, and various other factors can contribute just as easily.


