Planning in the Polycene: Why AI Changes Everything Beneath Your Strategy
- Russell E. Willis

- Feb 26
Strategic planning practice over the past half century was built on an assumption so foundational we stopped noticing it: that the future is continuous enough with the present to chart a course forward.
We now live in what Thomas Friedman calls the Polycene — an era shaped not by one disruptive force but by many converging at once. Of all those forces, AI is the most organizationally consequential, because it doesn't just change the environment around your organization. It changes the organization itself. And when the organization itself is changing, the assumption that the future is continuous enough with the present to chart a course forward no longer holds.
Traditional strategic planning practice built that assumption into its foundations in three ways: that we can specify where we're going clearly enough to measure progress; that the future is stable enough for five-year timelines to mean something; and that cause and effect are connected closely enough that good execution produces predicted outcomes.
These assumptions weren't unreasonable. For most of the last century, they held well enough to build on, successfully and repeatedly.
AI isn't just stress-testing those assumptions; it is crushing them.
AI accelerates feedback loops you didn't design. It reshapes the relationships your organization depends on. It automates processes your strategy was built around — sometimes before the ink on the plan is dry. And it does all of this at a speed that makes the conventional planning cycle look like it was designed for a different era.
Which it was.
Whether your planning process feels functional right now or you already sense something is broken, the planning challenge is the same: how do you orient an organization when the ground beneath it keeps shifting?
The organizations navigating this well aren't the ones with better forecasting tools or more sophisticated scenario planning — though those help.
They're the organizations that know what they fundamentally are, not just what they currently do. When the ground shifts, identity holds. When processes get automated, purpose remains. When the five-year timeline becomes a two-year reality, mission is what keeps the organization from losing itself in the adaptation.
Strategic planning in the Polycene cannot be anchored by what your organization is capable of doing — leveraging technology and honing best practices. It must be anchored by why your organization exists and what it means to remain true to that purpose when what we can do threatens to displace what we should do, and who we should be.
That kind of planning starts with a harder, more important question: Is our vision worth having — and do we know ourselves well enough to pursue it faithfully when the world stops cooperating?
That question has always mattered. Now it's urgent. Now it's foundational.

**************************************************
What assumption is your current planning process making that AI is quietly dismantling?
Russell E. Willis, Ph.D., works at the intersection of technology, ethics, and organizational leadership — as an AI governance consultant, strategic planning adviser, and author. His book AI and the Crisis of Control: How Leaders Can Reclaim Responsibility in the Age of AI (Archway Publishing, 2026) introduces the ASSUME Model and Five Pillars of responsible AI stewardship. He has spent fifty years at the intersection of technology and responsibility — as an engineer, academic leader, and entrepreneur — and works with executives, boards, and policymakers through Got Vision Consulting.



