“AI won’t replace humans. But humans who use AI will replace those who don’t.” The line is widely repeated: it has recently been attributed to Sam Altman in mainstream coverage, and a similar framing appears in a widely shared Harvard Business Review piece.
For security audits, the quote is directionally right, but incomplete. If the goal is simply to produce more output faster, AI makes that trivial. The harder question is whether we can use AI to produce better audits while preserving and compounding the human capability that makes audits valuable.
In other words, the risk is not that AI will replace auditors. The risk is that AI will change incentives inside security organizations so that expertise stops compounding. That is where the competitive edge becomes fragile.
The Knowledge Collapse Frame
A useful way to reason about this comes from Acemoglu, Kong, and Ozdaglar’s theoretical paper “AI, Human Cognition and Knowledge Collapse.” Their model distinguishes between two outputs humans produce when they do difficult work.
One output is immediate: a solution to the problem at hand. The other output is durable: knowledge that can be reused and shared. It accumulates as collective understanding, improved heuristics, stronger reviews, more effective training, and sharper instincts about where to focus first.
The paper’s warning is that when AI substitutes for human effort, it can improve near-term performance while reducing the incentives to generate that durable knowledge. In the long run, the overall knowledge base can weaken, even as short-term productivity looks strong.
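To make the mechanism concrete, here is a deliberately stylized sketch in our own notation, not the paper’s formal model. Suppose immediate output y_t depends on both human effort e_t and AI assistance a_t, while durable knowledge K_t is replenished only by human effort:

y_t = g(e_t, a_t)
K_{t+1} = (1 − δ) K_t + h(e_t)

If AI assistance substitutes for effort, then as a_t rises the privately optimal e_t falls. Each period’s output y_t can keep improving while the knowledge stock K_t depreciates faster than it is replenished. That is the collapse dynamic in miniature: strong short-term performance, weakening long-term capability.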
This is not a claim that AI is harmful by default. It is a claim about how people respond to tools that produce convincing answers at low cost. When answers become cheap, the community tends to underinvest in understanding.
Why This Matters for Security Audits
Security audits are ultimately evaluated by the quality of the decisions they support. Their value is not the volume of findings, the polish of the report, or how quickly words are produced. It is the reliability of the conclusions under uncertainty, including what was examined, what was assumed, what remains unknown, and what risk is still present.
AI is excellent at compressing time on synthesis and drafting. In a security context, that is a real advantage, because it can free time for deeper thinking. The concern is that teams may start treating AI output as a substitute for the human work that builds long-run security judgment. A similar point appears in a recent Zcash Foundation post on AI-assisted contributions: AI can accelerate output, but review standards and human accountability do not change.
That is how “humans using AI will replace those who don’t” can become a trap. The competitive pressure pushes organizations to move faster. If faster means fewer human cycles spent on root cause, bug classes, reusable patterns, and organizational learning, then the work product may look better today while the organization becomes weaker tomorrow.
The goal is not to avoid AI. The goal is to integrate AI in a way that increases throughput while still producing the durable outputs that compound security expertise.
Why Ignoring AI Is Not a Serious Option
AI changes the baseline for security work in two ways.
First, it is increasingly effective at surfacing implementation-level defects, which makes routine bug discovery cheaper and easier to do earlier in the development cycle. As a result, large volumes of findings matter less on their own, and the differentiator shifts toward how well those findings are interpreted, prioritized, and verified.
Second, offensive teams are adopting the same capabilities. AI reduces the cost of exploration and iteration, compressing attacker timelines and increasing the pace at which systems are probed.
For a security auditing firm, this is not an abstract trend. We operate on the defensive side of the ecosystem, and we have to keep pace with how quickly systems can now be tested, changed, and attacked. Our mission has always been to help teams build and maintain systems they can rely on, and doing that well requires preserving human judgment and accumulated expertise as tooling evolves.
What This Means in Practice
If the Altman quote is to hold in a way that strengthens security work, the distinction that matters is straightforward.
Cory Doctorow offers a helpful shorthand for this framing: a “centaur” is a human assisted by a machine, while a “reverse centaur” is a machine that uses a human as its assistant. In security audits, the aim is the centaur model, where AI amplifies human judgment and investigation rather than reducing people to validating machine-paced output.
Under that model, AI lowers the cost of producing first drafts and candidate hypotheses without eroding the organization’s investment in the work that compounds expertise, including understanding, generalization, and internal knowledge-sharing. When AI accelerates early-stage work, the time it saves should be reinvested in the parts of the audit that create durable advantage.
In security, that durable advantage is not just finding issues faster. It is shortening the time to justified confidence and strengthening the organization’s ability to handle unusual cases that do not resemble prior incidents.
For a practical discussion of how AI can support audits and why verification remains essential, see our earlier post, “Exploring AI-Assisted Security Audits.”
How We Can Help
Even as AI makes it easier to generate hypotheses, summaries, and analysis quickly, the work that matters most still depends on human accountability, explicit assumptions, and evidence that holds up under scrutiny. At Least Authority, we use modern tooling to accelerate the right parts of security work while keeping audit conclusions grounded, explainable, and defensible. Through our audits, we help teams identify risks that matter, communicate them clearly, and build the internal understanding needed to maintain security over time. If you are preparing for an audit, we would welcome the opportunity to support your next review.
Written by: Jessy Bissal
Contributions by: Jasper Hepp