
What a 90-minute AI readiness workshop should decide

By Pascal Music, Founder at TokenShift

What should an AI readiness workshop actually decide? Not whether the pilot activity is admirable, but what must be true for production to happen and who owns each part of that path. With IDC projecting worldwide AI spending will reach $632 billion by 2028 (IDC Spending Guide, 2025), the stakes of poorly structured readiness decisions are higher than ever.

Yet most readiness workshops fail this test. They produce inventories of pilot activity, lists of perceived risks, and a general sense that “more work is needed” — without specifying what work, by whom, or by when. The executive team leaves with a shared understanding that the AI programme exists but not a shared commitment to what happens next. That is not readiness. That is awareness, and awareness does not move programmes to production.

The five questions a readiness workshop must answer

A 90-minute workshop has limited time. That constraint is an advantage — it forces prioritisation. A well-structured workshop must produce definitive answers to five questions.

1. Who owns production? Not the pilot. Production. This means identifying the executive sponsor accountable for the AI system’s performance in a live operating environment — including its outputs, its failures, and its governance. If this person is not in the room, the programme is not ready. Ownership clarity is the foundation of Decision Clarity.

2. What is the risk profile? The workshop must produce a shared understanding of which risks are material, which are manageable, and which require mitigation before production. This is a triage exercise, not a comprehensive assessment.

3. Is the workforce ready? Workforce readiness is not training completion. It is whether the people who will operate the AI-augmented workflow have the skills, authority, routines, and management support to do so effectively. McKinsey’s State of AI research consistently identifies workforce readiness as the factor most correlated with successful AI scaling.

As Thomas Davenport, professor at Babson College and author of The AI Advantage, has observed: “Most AI failures are not technology failures. They are failures of organizational readiness — the inability to make decisions, assign ownership, and change workflows at the speed the technology requires.”

4. Who reports to the board? AI programmes in production generate board-level questions about risk, ROI, regulatory compliance, and workforce impact. The workshop must identify who owns the board narrative — who translates operational reality into governance language.

5. What are the go/no-go criteria? The criteria must be measurable, time-bound, and owned by named individuals. “We need more data” is not a go/no-go criterion. “The data pipeline must deliver 95% completeness on the three specified input fields by April 30, verified by the data engineering lead” is.
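A criterion like the one above is only useful if it can be checked mechanically. The sketch below shows one way to operationalise a completeness threshold; the field names, the 95% threshold, and the "all fields present per record" interpretation are illustrative assumptions, not a prescribed method:

```python
# Minimal sketch of a measurable go/no-go check: data completeness.
# Field names and the 95% threshold are hypothetical placeholders.

REQUIRED_FIELDS = ["customer_id", "order_date", "amount"]  # illustrative only
THRESHOLD = 0.95

def completeness(records, fields):
    """Share of records in which every required field is present and non-empty."""
    if not records:
        return 0.0
    complete = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in fields)
    )
    return complete / len(records)

def go_no_go(records):
    """Return (passes, score) against the agreed threshold."""
    score = completeness(records, REQUIRED_FIELDS)
    return score >= THRESHOLD, score

# Example: one of two records is missing a required field.
rows = [
    {"customer_id": 1, "order_date": "2025-04-01", "amount": 10.0},
    {"customer_id": 2, "order_date": None, "amount": 5.0},
]
passes, score = go_no_go(rows)
print(passes, score)  # False 0.5
```

The point is not the code itself but the contract it encodes: the metric, the threshold, and the pass/fail rule are explicit, so the data engineering lead who owns the criterion can verify it the same way every time.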

Workshop vs. assessment vs. audit

One reason workshops underperform is that organisations use them when a different format is required.

A workshop is a decision-forcing event. It brings decision-makers together and requires commitments: ownership, sequencing, go/no-go criteria. A workshop is appropriate when the programme has enough information to make decisions but has not yet created the forum to make them. The AI readiness self-assessment can serve as useful pre-work.

An assessment is an analytical exercise. It evaluates the programme’s maturity across defined dimensions and produces a structured diagnostic. An assessment is appropriate when the organisation suspects it is not ready but does not know where the gaps are. Gartner’s AI maturity frameworks provide useful reference models.

An audit is a verification exercise. It examines the programme against a defined standard and produces a compliance determination. The AI investment audit is designed for this moment: when the question is not “what should we do?” but “have we done what we said we would?”

The mistake organisations make most frequently is running a workshop when they need an assessment, or an assessment when they need a decision.

Next step: Take the AI Readiness Self-Assessment — a structured pre-workshop exercise that ensures your team arrives with a shared baseline.

What “readiness” actually means

Readiness is one of the most overused and underspecified words in enterprise AI. Organisations claim readiness based on technology deployment, pilot completion, or executive enthusiasm. None constitute readiness in any operational sense.

Readiness means the organisation can sustain the AI-augmented workflow in production — not for a demo, not for a quarter, but as a permanent operating capability. This requires maturity across five dimensions:

Technology readiness: The AI system performs reliably in the target environment with production-grade data and integration requirements met.

Data readiness: The data pipelines are production-grade — automated, monitored, governed, and maintained by a named team.

Workforce readiness: The people who operate the workflow have the skills, routines, and management support to work effectively with the AI system.

Governance readiness: The oversight, documentation, and escalation structures required by regulation and internal policy are in place.

Operating model readiness: Reporting lines, decision rights, performance metrics, and budget ownership support the AI-augmented workflow.

MIT Sloan’s research on AI-augmented productivity demonstrates that organisations achieving measurable returns addressed all five dimensions — not just technology.

Common mistakes: workshops that produce reports instead of decisions

The most reliable indicator of a failed readiness workshop is the output format. If the workshop produces a report, it has almost certainly failed. Four failure patterns produce that outcome.

Wrong participants. If the people in the room cannot make binding decisions about budget, sequencing, or ownership, the workshop becomes a discussion forum. The right participants are the programme sponsor, business owner, technology lead, workforce transition lead, and governance owner.

No pre-work. Participants who arrive without a shared understanding of the programme’s state spend the first 45 minutes building a common picture — leaving no time for decisions.

Too many topics. A 90-minute workshop cannot address technology architecture, data quality, workforce transition, regulatory compliance, and ROI modelling in depth. Scope to the decisions that are blocking progress.

No named next actions. Every decision must be accompanied by a named owner, a deadline, and a defined output. “We will look into this” is not a next action. Deloitte’s AI readiness methodology similarly emphasises decisiveness over comprehensiveness.

According to Accenture Research, AI could boost labour productivity by up to 40% by 2035 — but only for organisations that convert readiness conversations into binding operating decisions.

McKinsey’s 2024 Global Survey found that 87% of organisations experience skill gaps in AI adoption (McKinsey, 2024). A well-structured readiness workshop addresses this gap head-on by converting abstract awareness into binding decisions about ownership, sequencing, and workforce capability.

What this means for your next decision

If you are planning a readiness workshop, the preparation matters as much as the session itself. Define the five questions in advance. Ensure the right decision-makers are in the room. Provide pre-work that establishes a shared baseline. Scope the agenda to decisions, not discussions.

A well-run 90-minute workshop can move a programme from ambiguity to action. A poorly run one produces another layer of documentation that no one acts on. The difference is not facilitation technique. It is the willingness to force decisions that the organisation has been deferring.
