Picture a crisp morning in Pisa, Italy, around 1590. A crowd gathers at the base of the famously tilting tower. Above them stands Galileo Galilei, a 26-year-old mathematician with a rebellious glint in his eye. In his hands are two spheres—one heavy, one light. According to the teachings of Aristotle, which had dominated scientific thought for nearly two thousand years, the heavier ball should fall much faster than the lighter one. The accepted wisdom held that the speed of falling objects was directly proportional to their weight.
Galileo steps to the edge. The crowd falls silent. He extends his arms and releases both spheres simultaneously. The crowd watches, necks craned upward, as the balls descend through the air. And then something remarkable happens—something that challenges everything they've been taught to believe. Both spheres hit the ground at virtually the same moment.
In that simple act, Galileo shattered two millennia of established dogma. What made this moment revolutionary wasn't just the result, but the method. Instead of arguing about what should happen based on philosophical principles, Galileo decided to actually test what would happen in reality. This shift—from authority to evidence, from deduction to experimentation—formed the cornerstone of the scientific revolution.
Galileo's experiment accomplished something profoundly simple yet transformative: it showed that physical reality doesn't bend to human authority or conventional wisdom. Nature reveals its secrets not to those who argue most eloquently, but to those who design the most careful tests. This matters because it established a method for finding truth that transcends human bias and preconception—a method that would eventually unlock the mysteries of electricity, medicine, genetics, and ultimately lead to the technological world we inhabit today.[1]
This experimental mindset took centuries to fully penetrate the business world. Frederick Taylor's stopwatch transformed factory floors in the early 1900s. W. Edwards Deming's statistical approach to quality revolutionized manufacturing in the mid-twentieth century. Japanese firms embraced Kaizen, making small experimental improvements the foundation of their rise to global prominence. Each evolution brought experimentation closer to the strategic heart of business.
The progression from isolated experiments to systematic experimental infrastructure marks the difference between accidental learning and continuous adaptation. Organizations that build such infrastructures create compounding advantages that prove difficult for competitors to overcome. This advantage comes not from any single experiment, but from the cumulative knowledge generated by hundreds or thousands of experiments working in concert, each refining the organization's understanding of its customers, operations, and competitive environment.
Today, we stand at another inflection point. Experimentation has outgrown its operational origins and demands a place at the highest level of organizational governance: the boardroom. In environments characterized by rapid change and fundamental uncertainty, boards that don't experiment don't learn. And boards that don't learn put their organizations at existential risk.
Consider what happens when you search for something on Google. Behind that seemingly simple interaction lies one of the most sophisticated experimental engines ever built. Your query might be part of thousands of simultaneous experiments testing subtle variations in algorithms, interfaces, and features. The results feed back into machine learning systems that continuously improve the product.
Hal Varian, Google's chief economist, once made a claim that caught many off guard. He suggested that Google's true competitive advantage wasn't its vast data resources or computational power. Instead, it was the experimental infrastructure the company had built. This insight runs counter to the conventional wisdom that data itself constitutes the moat around digital giants.
The scale of this experimentation dwarfs anything previously seen in business. Google runs approximately 15,000-20,000 search experiments annually, resulting in 500-600 changes to its search algorithm each year. Amazon's retail platform hosts thousands of simultaneous experiments. Microsoft's transformation under Satya Nadella centered on building experimental capabilities across its product suite.
These companies don't just have data. They have systematic mechanisms to learn from that data through continuous, rigorous experimentation. They've built organizational muscles that convert questions into experiments, experiments into insights, and insights into action—all at unprecedented speed and scale.
Three Case Studies in Organizational Experimentation
While tech giants garner most attention for their experimental prowess, sophisticated experimental infrastructures exist across industries. These case studies demonstrate how scientific experimentation powers organizational learning far from Silicon Valley.
Capital One: The Bank That Thinks Like a Lab
Long before "data science" became a buzzword, Capital One built its entire business model around scientific experimentation. Founded in 1994 by Richard Fairbank and Nigel Morris, the bank pioneered the application of information-based strategy in financial services.[2]
Unlike traditional banks that offered standardized credit products, Capital One created a laboratory-like infrastructure to test thousands of variations in credit offers, interest rates, credit limits, and marketing messages. They called this approach "test and learn," and it permeated every aspect of their operations.
The heart of Capital One's competitive advantage lay in their "champion/challenger" testing methodology. For any given product or process, they maintained a "champion" version while continuously testing "challenger" variants. When a challenger consistently outperformed the champion, it became the new standard against which future innovations would compete.
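To make the mechanics concrete, here is a minimal sketch of a champion/challenger cycle. It is illustrative only: the conversion metric, the sample figures, and the two-proportion z-test are assumptions chosen for the example, not Capital One's actual system. The essential discipline is the promotion rule: a challenger replaces the champion only when it wins by a margin unlikely to be noise.

```python
import math
from dataclasses import dataclass

@dataclass
class ArmResult:
    """Observed outcomes for one variant of a credit offer (illustrative fields)."""
    name: str
    conversions: int   # e.g. accepted offers
    exposures: int     # e.g. offers mailed

    @property
    def rate(self) -> float:
        return self.conversions / self.exposures

def z_test_two_proportions(champion: ArmResult, challenger: ArmResult) -> float:
    """One-sided p-value that the challenger's conversion rate exceeds the champion's."""
    p_pool = (champion.conversions + challenger.conversions) / (
        champion.exposures + challenger.exposures)
    se = math.sqrt(p_pool * (1 - p_pool) *
                   (1 / champion.exposures + 1 / challenger.exposures))
    z = (challenger.rate - champion.rate) / se
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))  # upper tail of the normal

def promote_if_better(champion: ArmResult, challengers: list[ArmResult],
                      alpha: float = 0.05) -> ArmResult:
    """Keep the champion unless a challenger beats it at significance level alpha."""
    for c in sorted(challengers, key=lambda a: a.rate, reverse=True):
        if c.rate > champion.rate and z_test_two_proportions(champion, c) < alpha:
            return c
    return champion

# Invented numbers for illustration.
champion = ArmResult("current offer", conversions=480, exposures=20_000)
challengers = [ArmResult("lower APR", 560, 20_000), ArmResult("higher limit", 505, 20_000)]
print(promote_if_better(champion, challengers).name)  # -> "lower APR"
```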
This experimental infrastructure enabled Capital One to grow from a credit card spinoff into the 10th largest bank in America. Their board doesn't just review experimental results; they actively design strategic experiments to test fundamental assumptions about customer behavior, market evolution, and competitive dynamics. When artificial intelligence entered the banking sector, Capital One's pre-existing experimental infrastructure gave them a significant advantage in leveraging these new capabilities.
Novartis: Reimagining Pharmaceutical Research
The pharmaceutical industry has always conducted experiments, but traditionally in a linear, stage-gate process with enormous costs and timelines spanning decades. Under CEO Vas Narasimhan, Novartis transformed this approach by building an experimental infrastructure that enables faster, more iterative learning.
Their Novartis Institutes for BioMedical Research implemented a "fast-fail" experimental model.[3] Rather than investing years before determining a compound's viability, they designed experiments specifically to disqualify unsuccessful candidates early. This seemingly simple shift—designing experiments to disprove rather than prove hypotheses—dramatically accelerated their learning velocity.
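The logic of fast-fail can be captured in a few lines. The sketch below is a generic futility rule with invented numbers, not Novartis's actual protocol: at an interim look, a candidate is discontinued if even an optimistic bound on its observed effect falls short of the target.

```python
import math

def futility_check(responders: int, patients: int,
                   target_rate: float, z: float = 1.64) -> str:
    """Generic fast-fail futility rule (illustrative only): discontinue a candidate
    if even an optimistic upper confidence bound on its observed response rate
    misses the target."""
    p = responders / patients
    upper = p + z * math.sqrt(p * (1 - p) / patients)  # ~95% one-sided bound
    if upper < target_rate:
        return "fail fast: discontinue candidate"
    return "continue to next stage"

# Interim look after 40 patients, aiming for a 35% response rate (invented figures).
print(futility_check(responders=6, patients=40, target_rate=0.35))
```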
Novartis complemented this approach with investments in AI-powered drug discovery, creating digital twins of disease pathways to simulate thousands of experimental outcomes before conducting physical tests.[4] Novartis also uses AI to run virtual experiments that predict the likelihood of drug approval.[5] The results speak volumes: Novartis has decreased drug development timelines by approximately 20% while increasing its success rate in late-stage clinical trials—a tremendous achievement in an industry where failure represents billions in sunk costs.
Procter & Gamble: Mass Experimentation in Consumer Products
P&G, a 180-year-old consumer products giant, demonstrates how experimental infrastructure can reinvigorate even the most established companies. Under A.G. Lafley and his successor David Taylor, P&G shifted from traditional consumer research to dynamic experimentation at scale.
Their approach centers on converting consumer insights into thousands of micro-experiments. Rather than betting on a few major product launches annually, P&G continuously tests hundreds of small variations in product formulations, packaging, pricing, and marketing. They've built sophisticated experimental infrastructure in their manufacturing facilities, enabling rapid prototyping and small-batch production for market testing.
P&G's experimental infrastructure doesn't just inform product development; it shapes their entire corporate strategy. Their successful pivot toward premium products in developing markets stemmed directly from strategic experiments that tested fundamental assumptions about consumer aspirations across different economic segments.
Together, these cases illustrate that experimental infrastructure isn't just for tech companies. Organizations across sectors—from banking to pharmaceuticals to consumer products—create sustainable competitive advantage by building systems for continuous hypothesis testing and learning. What separates these organizations isn't their industry but their commitment to evidence over intuition, learning over certainty, and adaptation over tradition.
Strategic Experiments: The Board's Learning Infrastructure
Now imagine bringing this experimental mindset to the boardroom. Most boards spend their time monitoring performance against historical plans, ensuring regulatory compliance, and occasionally making binary decisions on major investments or leadership changes. They operate more like judges than scientists, evaluating evidence rather than creating it.
But what if boards approached their strategy work differently? What if they identified the most critical uncertainties facing their business and designed experiments to resolve them? What if they treated strategy not as a single plan to be executed but as a portfolio of strategies to be tested and evaluated? Evaluating strategies should, after all, sit at the core of board work.
Strategic experiments differ fundamentally from the operational experiments that have become standard in product development. They seek answers to existential questions about the business model, market position, competitive environment, and future trajectories. They unfold over longer time frames—quarters or years rather than days or weeks. Their results don't just optimize current operations; they potentially redirect the entire organization.
When a board embraces strategic experimentation, the conversation changes. Rather than asking, "How did we perform against the plan?" board members ask, "What critical uncertainties did we resolve this quarter?" Instead of focusing exclusively on outcomes, they examine the quality of learning. They recognize that in uncertain environments, the ability to adapt often matters more than the ability to predict.
Artificial intelligence transforms the board's capacity to design, execute, and learn from strategic experiments. It serves as both catalyst and enabler for a more scientific approach to governance.
Consider the challenge of formulating the right strategic questions. Traditional approaches rely heavily on executive intuition, often colored by cognitive biases and limited perspectives. AI can analyze vast troves of competitive intelligence, market data, and internal metrics to identify emerging patterns that human analysts might miss. It can surface non-obvious strategic challenges and quantify which uncertainties, if resolved, would create the most value.
A board wondering whether to enter an adjacent market might traditionally rely on consultant projections and executive judgment. With AI-enhanced experimentation, they can simulate multiple entry scenarios, identify the specific assumptions most critical to success, and design targeted experiments to test those assumptions before committing significant resources.
The design of these experiments benefits similarly from AI enhancement. Boards have traditionally struggled with experimental rigor, often conflating correlation with causation or failing to control for confounding variables. AI can simulate experimental designs to identify potential biases, calculate required sample sizes for statistical validity, and even identify natural experiments already occurring within the company's operations.
When testing whether a premium positioning strategy will expand margins without sacrificing growth, AI might help design a matched-market approach that controls for demographic and economic factors across test and control regions. It could continuously monitor results, flagging when sufficient data exists to draw statistically valid conclusions.
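As a concrete illustration of this design work (with invented figures and a deliberately simplified view of matched markets), the sketch below combines the textbook sample-size formula for detecting a lift in a purchase rate with a naive nearest-neighbour pairing of test and control markets on two covariates.

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p_baseline: float, p_expected: float,
                                alpha: float = 0.05, power: float = 0.80) -> int:
    """Textbook sample size per arm for detecting a change from p_baseline to
    p_expected with a two-sided test (all inputs here are invented assumptions)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_baseline + p_expected) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar)) +
                 z_beta * math.sqrt(p_baseline * (1 - p_baseline) +
                                    p_expected * (1 - p_expected))) ** 2
    return math.ceil(numerator / (p_expected - p_baseline) ** 2)

def match_markets(test_markets: dict, control_pool: dict) -> dict:
    """Pair each test market with its nearest control on two covariates
    (median income, urban share) -- a deliberately naive stand-in for matched-market
    design; a real design would standardize and use many more covariates."""
    pairs, available = {}, dict(control_pool)
    for name, features in test_markets.items():
        best = min(available, key=lambda c: math.dist(features, available[c]))
        pairs[name] = best
        available.pop(best)
    return pairs

# Example: can premium positioning lift purchase rates from 4% to 5%?
print(sample_size_two_proportions(0.04, 0.05))   # customers needed per region

test = {"Region A": (38_000, 0.52), "Region B": (24_000, 0.61)}
controls = {"Region C": (37_500, 0.55), "Region D": (23_000, 0.63), "Region E": (31_000, 0.58)}
print(match_markets(test, controls))
```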
The data gathered through strategic experiments expands exponentially with AI assistance. Beyond traditional financial and operational metrics, AI can process unstructured data from customer interactions, employee observations, competitive movements, and market signals. It can identify subtle patterns in this multi-dimensional data that would elude conventional analysis.
The experiments themselves become richer and more multi-dimensional as a result.
In an experiment testing customer response to a new service model, AI might analyze sentiment across social media, support interactions, and sales conversations. It could identify emerging patterns faster than traditional focus groups or surveys, allowing rapid refinement of the experimental approach.
Perhaps most importantly, AI helps boards extract maximum learning from experimental results. It can distinguish signal from noise in complex data sets, identify causal relationships rather than mere correlations, and quantify confidence intervals around conclusions. It can connect experimental outcomes to strategic implications and even generate new hypotheses based on unexpected findings.
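A simple example of keeping signal and noise apart, with invented numbers: a confidence interval around the measured lift tells the board whether an apparent improvement could plausibly be sampling error.

```python
import math

def lift_confidence_interval(conv_a: int, n_a: int, conv_b: int, n_b: int,
                             z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for the difference in conversion rates
    between a control group (a) and a treatment group (b). Figures below are
    invented for illustration."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

low, high = lift_confidence_interval(conv_a=210, n_a=5_000, conv_b=252, n_b=5_000)
print(f"lift between {low:+.2%} and {high:+.2%}")
# If the interval straddles zero, the 'improvement' may be noise rather than signal.
```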
The Board's Oversight of Operational Experimental Capacity
The board's relationship with experimentation extends beyond conducting strategic experiments directly. It must also ensure the organization builds robust experimental capabilities throughout its operations. This creates a dual responsibility: running strategic experiments while fostering a broader experimental culture.
Smart boards regularly examine whether formal experimental frameworks exist across all key business functions. They assess whether these frameworks produce reliable, unbiased results that genuinely inform decision-making. They look beyond product development to ensure marketing, pricing, operations, and talent management all benefit from experimental approaches.
They ask probing questions: Are experimental results systematically incorporated into operational decisions or treated as interesting academic exercises? Does the volume of experimentation match industry benchmarks and the pace of market change? Where do capability gaps exist, and what investments would close them?
CEOs reporting to experimentally minded boards find themselves discussing metrics beyond the traditional financial statements. They share the number of experiments conducted quarterly across different functions. They track the average time from hypothesis formulation to experimental conclusion. They measure the percentage of significant business decisions informed by experimental data and calculate the ROI of their experimental program by comparing costs to the value of improved decisions.
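One hedged sketch of how such a dashboard might be assembled (the record fields, figures, and ROI convention are assumptions chosen for illustration, not a standard reporting format):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Experiment:
    """Minimal record for experiment-program reporting (illustrative fields)."""
    function: str            # e.g. "marketing", "pricing", "operations"
    hypothesis_date: date
    conclusion_date: date
    informed_decision: bool  # did the result feed a significant business decision?
    cost: float
    estimated_value: float   # value attributed to the improved decision

def program_dashboard(experiments: list[Experiment]) -> dict:
    """Roll up the figures an experimentally minded board might review:
    volume, cycle time, decision coverage, and program ROI."""
    n = len(experiments)
    cycle_days = sum((e.conclusion_date - e.hypothesis_date).days for e in experiments) / n
    informed = sum(e.informed_decision for e in experiments) / n
    cost = sum(e.cost for e in experiments)
    value = sum(e.estimated_value for e in experiments)
    return {
        "experiments_run": n,
        "avg_days_hypothesis_to_conclusion": round(cycle_days, 1),
        "share_informing_decisions": round(informed, 2),
        "program_roi": round((value - cost) / cost, 2),
    }

# Invented quarterly data for illustration.
quarter = [
    Experiment("pricing", date(2024, 1, 8), date(2024, 2, 19), True, 40_000, 220_000),
    Experiment("marketing", date(2024, 1, 22), date(2024, 3, 4), False, 15_000, 0),
]
print(program_dashboard(quarter))
```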
By establishing this oversight, boards ensure that experimental learning becomes embedded in organizational culture rather than remaining a siloed capability. They create alignment between their own strategic experiments and the organization's operational experimentation, building a learning organization from the boardroom out.
Today, boards are evaluated primarily on financial outcomes and governance processes. Did shareholder value increase? Were regulatory requirements met? These metrics matter, but they tell an incomplete story in rapidly changing environments. They capture results but not capabilities. They measure outcomes but not learning.
What if we also evaluated boards on learning velocity? This would require new metrics: How many strategic hypotheses does the board test annually? How quickly does it design and execute strategic experiments? How effectively does it incorporate experimental results into strategic decisions? How systematically does it refine its experimental approach over time? How well does it balance experimental learning with strategic execution?
Board evaluations would then capture not just what the board knows, but how rapidly it learns. They would recognize that in uncertain environments, the board that experiments, learns. And the board that learns, leads.
Conclusion
As artificial intelligence transforms business operations, it simultaneously enables a more scientific approach to strategic governance. By building a strategic experiment infrastructure, boards can move beyond intuition and experience to evidence-based decision making. They can convert uncertainty from a threat to be avoided into a source of competitive advantage.
For board members looking to embrace this experimental mindset, five essential questions can guide the journey:
First, "How many operational experiments does our organization conduct annually, and what protocols exist for evaluating them?" This question reveals whether experimentation exists merely as rhetoric or as systematic practice. It uncovers whether the organization has built the muscles for continuous learning or merely gestures at the concept. A mature experimental organization can readily provide not just numbers but also taxonomies of experiments across different business functions, clear evaluation criteria, and mechanisms for translating results into action.
Second, "What strategic experiments should we, as a board, directly oversee in the coming year?" This question shifts the board from a purely evaluative role to one actively engaged in testing critical strategic assumptions. It forces clarity about which uncertainties, if resolved, would most significantly impact strategic decisions. Examples might include experiments testing customer willingness to pay for sustainable offerings, organizational capacity to integrate potential acquisitions, or market response to radical business model innovations.
Third, "How does our organization's experimental infrastructure compare with best-in-class examples both within and beyond our industry?" This question combats the tendency toward complacency that often afflicts successful organizations. It encourages benchmarking against not just direct competitors but experimental leaders across sectors. Capital One's champion/challenger methodology, Novartis's fast-fail protocols, and P&G's consumer immersion experiments all offer models that transcend industry boundaries.
Fourth, "What mechanisms exist to elevate insights from operational experiments to strategic decision-making?" This question addresses the frequent disconnect between frontline experimentation and boardroom deliberation. It examines whether pathways exist for experimental learning to flow upward through the organization, challenging strategic assumptions and informing board-level decisions. Without such mechanisms, strategic experiments become isolated from operational reality, while operational experiments fail to inform strategy.
Fifth, "How do we measure and reward learning velocity alongside financial outcomes?" This question acknowledges that what gets measured gets managed. It challenges boards to develop metrics for learning, not just performance, and to incorporate these metrics into executive compensation and capital allocation decisions. Organizations that reward only outcomes inevitably sacrifice learning for short-term results, while those that reward learning capacity build resilience for uncertain futures.
In an era of unprecedented change and complexity, the companies that thrive will be those whose boards can formulate, test, and learn from strategic hypotheses most effectively. They will build experimental infrastructures that extend from the boardroom throughout the organization, creating cultures that prize learning over certainty and adaptation over prediction.
The experimental board doesn't just monitor—it learns. It doesn't just govern—it explores. And in doing so, it transforms uncertainty from an existential threat into the very engine of adaptation and growth.
How many experiments is your organization conducting today?
[1] The story of the tower was first told by one of Galileo's disciples and has been argued to be apocryphal, but the point remains. A robust discussion of Galileo's project and his relentless curiosity can be found here: https://plato.stanford.edu/entries/galileo/#GaliScieStor
[2] As detailed in Davenport, T. and Harris, J., 2017. Competing on analytics: Updated, with a new introduction: The new science of winning. Harvard Business Press.
[3] For more on this, see Krystal, A.D., Pizzagalli, D.A., Mathew, S.J., Sanacora, G., Keefe, R., Song, A., Calabrese, J., Goddard, A., Goodman, W., Lisanby, S.H. and Smoski, M., 2019. The first implementation of the NIMH FAST-FAIL approach to psychiatric drug development. Nature Reviews Drug Discovery, 18(1), pp. 82-84.
[4] See Bordukova, M., Makarov, N., Rodriguez-Esteban, R., Schmich, F. and Menden, M., 2023. Generative artificial intelligence empowers digital twins in drug discovery and clinical trials. Expert Opinion on Drug Discovery, 19, pp. 33-42. https://doi.org/10.1080/17460441.2023.2273839
[5] See Siah, K.W., Kelley, N.W., Ballerstedt, S., Holzhauer, B., Lyu, T., Mettler, D., Sun, S., Wandel, S., Zhong, Y., Zhou, B. and Pan, S., 2021. Predicting drug approvals: The Novartis data science and artificial intelligence challenge. Patterns, 2(8).