William Morgan-Harrold

Director of Consulting

Why a corporate communications consultancy chose to align with the FRC’s AI framework

The use of generative AI in corporate reporting has moved quickly. Perhaps too quickly.

For the past few years, most organisations have been operating in what can only be described as an exploration phase. Large language models have become widely accessible, widely discussed and freely experimented with. Tools such as Claude, ChatGPT, Gemini and others have shaped how people think about productivity and content creation.

But in a regulated environment like corporate reporting, exploration carries risk.

Until recently, there’s been little formal guidance on how these tools should be used in a way that stands up to scrutiny. The result is that many teams have adopted broad, general-purpose models without fully addressing the implications of how they generate outputs.

The issue is well understood. Hallucinations, inconsistencies and misinterpretations are not theoretical risks. In most contexts, they are inconvenient. In corporate reporting, they can be material.

There are already examples of this playing out. In one instance, the use of a large language model without sufficient verification contributed to inaccurate data being reflected externally. That, in turn, influenced how a business was assessed by ratings agencies, with consequences for how it was perceived against its peers. The work to correct that is ongoing, but the impact is real.

That’s the gap the industry now has to close.

The Financial Reporting Council has now moved decisively to close it.

In June 2025, the FRC published its initial guidance on AI in audit, setting out documentation expectations and an illustrative example of how AI-enabled tools should be governed. In March 2026, it went further with its Generative and Agentic AI Guidance, a comprehensive framework covering the risks these technologies pose to quality and the mitigations required to manage them.

Together, these publications shift the conversation from experimentation to accountability. They establish a structured approach to how AI tools should be designed, certified, governed and reviewed. They identify three categories of risk: deficient output, misuse of output and non-compliant methodology. They also set out four pillars of mitigation: system design and development, certification, staff education and governance, and human-in-the-loop review and oversight.

This guidance is directed at audit firms and the technical teams responsible for developing AI tools within that context. Jones+Palmer is not an auditor. We are corporate communications consultants. But the work we produce – annual reports, governance narratives, sustainability disclosures, investor messaging – is subject to audit scrutiny. It sits within the same regulated environment and is held to the same standards of accuracy, completeness and compliance.

If the content an agency produces cannot withstand the rigour that auditors are now expected to apply to AI-enabled processes, it becomes a liability rather than an asset. The FRC’s framework may not have been written for agencies, but the principles it sets out – controlled design, defined workflows, human oversight, clear accountability – are directly relevant to any organisation whose outputs form part of a regulated deliverable.

That is why Jones+Palmer has chosen to align with it. Not because we’re required to, but because the work demands it.

The direction of travel is now explicit. The question is how each organisation responds.

At Jones+Palmer, the response began before the guidance arrived. Not because the rules were anticipated in their specific form, but because the risks they address were already visible.

The starting point was a principle that has shaped every decision since. The methodology belongs to the people, not to the technology. Every model, workflow and client-facing output is built on the same regulatory knowledge and professional standards that our consultants are trained to apply without any AI tools at all.

This matters for a specific reason: if the technology was removed tomorrow, the work would continue to the same standard. AI does not introduce a new methodology. It encodes an existing one within a structured, controlled environment. That’s the foundation on which everything else is built.

Governance came first. Enterprise-level platforms were selected to ensure that data inputs were not used for external model training and that supplier relationships aligned with existing standards such as ISO 27001 and Cyber Essentials. This doesn’t remove risk entirely, but it establishes a baseline that’s consistent with how other critical suppliers are managed.

From there, the focus shifted to design.

Rather than using off-the-shelf models in their default state, proprietary large language models were built for specific purposes. Each model is intentionally narrow in its application. It’s given a defined task, a fixed set of instructions and a controlled knowledge base built from assured sources.

This constraint is deliberate. In the FRC’s framework, it maps directly to what is described as mitigating GenAI component performance risk through appropriate system design: distributing cognitive load across steps, restricting model scope and using prewritten prompts tested for fitness. Those are design principles. They’re also practical decisions, made before the framework gave them formal language.

Capabilities are tightly managed. In some cases, external search is disabled entirely so that the model operates only within its approved knowledge base. In others, it is selectively enabled depending on the level of risk associated with the task.
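To make that pattern concrete, here is a minimal sketch of what such a narrow model definition could look like in code. It is illustrative only, not Jones+Palmer's actual implementation, and every name in it (ModelSpec, allow_web_search, the example sources) is hypothetical. The point it demonstrates is that the task, instructions, knowledge base and capabilities are fixed in configuration, not left open to the user.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the spec cannot be mutated at runtime
class ModelSpec:
    """A deliberately narrow, purpose-built model definition (hypothetical)."""
    name: str
    task: str                        # the single job this model performs
    instructions: str                # fixed, pretested prompt; not user-editable
    knowledge_base: tuple[str, ...]  # assured sources only
    allow_web_search: bool = False   # disabled unless the task's risk level permits it

# Example: a model restricted to drafting a single section type
governance_drafter = ModelSpec(
    name="governance-drafter",
    task="Draft a first-pass governance narrative section",
    instructions="Use only the supplied knowledge base; flag gaps rather than inferring.",
    knowledge_base=("governance_code_summary.md", "board_activity_notes.md"),
    allow_web_search=False,  # operates only within its approved knowledge base
)
```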

Over time, this has evolved into something more than a set of individual tools.

A structured workflow has emerged, where each model performs a specific role and passes its output to the next stage. At each point, there are defined opportunities for review, validation and challenge. Outputs aren’t generated in a single step – they’re developed progressively, with multiple checkpoints along the way.

This creates what is effectively a production system rather than a prompt.

It also addresses a risk the FRC identifies as combination risk – the possibility that individually minor errors amplify as they pass through multi-step systems. Senior review at defined checkpoints is the primary control. Each stage is assessed on its own terms before it feeds into the next, so that drift is caught early rather than compounding through the chain.
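A minimal sketch of that control, again illustrative rather than the actual system, might look like this: each stage generates a draft, and a named reviewer must accept it before the next stage runs, so an error is stopped at the checkpoint where it first appears rather than propagating downstream.

```python
from typing import Callable

# Each stage pairs a generation step with the reviewer role that owns it.
Stage = tuple[str, Callable[[str], str]]  # (reviewer role, generation step)

def run_pipeline(stages: list[Stage], source: str,
                 review: Callable[[str, str], bool]) -> str:
    """Run stages in order; a rejected checkpoint halts the chain early."""
    draft = source
    for reviewer, generate in stages:
        draft = generate(draft)
        # Checkpoint: individually minor errors are caught here, before
        # they can compound through the stages that follow.
        if not review(reviewer, draft):
            raise RuntimeError(f"Checkpoint rejected at {reviewer} review; halting.")
    return draft
```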

The principle that runs through the entire process is constant. Human judgement is not replaced. It’s structured, supported and made more visible.

There’s a tendency to frame the value of AI in terms of speed and efficiency. In corporate reporting, that’s not where the real advantage lies.

The real value comes from the introduction of purposeful friction.

By designing workflows with multiple stages and defined checkpoints, teams are given the space to review, test and refine outputs before they progress. Consultants validate at each stage. At the final stage, senior team members and relationship leads come together to stress test the work before it’s presented to the client.

Pace becomes a byproduct of the system, not the objective. The objective is to improve the quality and completeness of what is delivered.

This also speaks directly to the FRC’s concern about misuse of output – the risk that someone receiving an AI-generated output misinterprets its scope, limitations or certainty. In many organisations, that risk is real because the people using AI tools aren’t trained in the regulatory standards the outputs need to meet. At Jones+Palmer, that risk is structurally different. Every consultant who works with these tools has been trained to advise, draft and create content against the same regulatory requirements that govern the final deliverable. The AI isn’t asking them to interpret something unfamiliar. It’s presenting work that follows the same logic they would apply themselves.

The models are designed to be used in conjunction with an expert, not independently. That’s not a caveat. It’s the design.

For clients, this has a tangible impact.

Corporate reporting teams are typically lean. The demands placed on them continue to increase, while reporting timelines remain largely fixed. In many cases, there’s a gap between the ambition for the report and what can realistically be delivered within the cycle.

Recommendations are made, but only a portion are implemented. Under pressure, teams often revert to updating prior-year content rather than moving the narrative forward.

What a structured AI workflow enables is a different outcome.

Content can be developed to a near-complete state, allowing clients to focus on refining and validating rather than starting from scratch. This increases the likelihood that more recommendations are implemented and that the report evolves meaningfully year on year.

It also allows greater depth in the areas that matter most, whether that’s strategy, sustainability or investor messaging.

The response from clients reflects this. The benefit isn’t simply that things are done faster. It’s that more is achieved within the same constraints, with greater confidence in the result.

The third risk the FRC identifies – non-compliant methodology – concerns the possibility that an AI-enabled approach produces work that falls outside what auditing standards require. In the context of corporate reporting, the equivalent concern is that AI-generated content doesn’t meet the regulatory and governance frameworks against which it will be assessed.

This is where the methodology-first principle matters most. The proprietary models aren’t inventing an approach to corporate reporting. They’re executing one that already exists, built on the same regulatory knowledge that consultants apply every day. The workflow is designed around the structure of the regulatory landscape, not grafted onto it after the fact. When the output reaches the client, it has been produced by the same methodology that would have been applied without any AI involvement, reviewed by the same experts and held to the same standards.

If the FRC’s framework asks whether the methodology is compliant, the answer is that the methodology predates the technology. The technology serves it.

There’s a further advantage to this approach, which becomes more important as the technology continues to evolve.

The underlying system isn’t dependent on any single model or platform.

Because each proprietary model is defined by its instructions, knowledge base and constraints, those components exist independently of the technology used to execute them. They can be reviewed, updated and redeployed as needed.

In practical terms, that creates optionality.

Workflows can be migrated between different providers as the market develops. Improvements in capability can be adopted without rebuilding the entire system. The knowledge base, instructions and validation structure move with it.

In a rapidly changing landscape, that flexibility matters. It reduces the risk of lock-in and avoids dependency on any single provider’s roadmap, pricing or model behaviour.
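In code terms, that separation could be sketched as follows, on the assumption (not drawn from the FRC guidance or any specific platform's API) that each provider is wrapped behind a single narrow interface. The names are hypothetical; what matters is that the owned assets sit on one side of the boundary and the interchangeable engine on the other.

```python
from typing import Protocol

class Provider(Protocol):
    """Any platform capable of executing a fixed prompt (hypothetical interface)."""
    def complete(self, instructions: str, context: str) -> str: ...

def execute(instructions: str, knowledge: str, provider: Provider) -> str:
    # The instructions and knowledge base are owned assets; the provider
    # is an interchangeable execution engine behind this narrow boundary.
    # Migrating to a new platform means supplying a different Provider;
    # the instructions, knowledge and validation structure move unchanged.
    return provider.complete(instructions, knowledge)
```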

That independence also creates a pathway into something more client-facing.

The most effective of these workflows are now being developed into proprietary, Jones+Palmer-owned platform interfaces designed to enhance the client experience. Rather than remaining tools used solely by consultants, they can also be made available to clients themselves.

This creates a different kind of working relationship.

Clients have the option to engage directly with the same structured tools that underpin the work, bringing their own expertise into the process while maintaining the same levels of control and consistency. The result is a more collaborative model, where the client’s knowledge of their business and the agency’s expertise in communication are combined more directly.

Importantly, this doesn’t replace the need for expert support. Many organisations will continue to rely on experienced partners to lead and deliver their reporting, but it introduces a new level of transparency and empowerment for those who want it.

Looking ahead, the next phase of development will test these principles further.

2026 is the year of emerging agentic technologies that move beyond single outputs towards more autonomous, multi-step execution. The FRC’s March 2026 guidance addresses these directly, defining agentic AI as systems that can orchestrate and execute multiple tasks toward a goal with some degree of autonomy, and setting out a detailed framework of risks and mitigations specific to that capability.

The opportunity is significant. So is the risk. When errors are no longer confined to a single output but can propagate across an entire chain of actions, the stakes around control, oversight and accountability rise substantially.

For that reason, the approach does not change.

Agentic capabilities can be introduced within the same structured system, using the same controlled tools and workflows that consultants already operate within. The same narrow application, controlled inputs, defined checkpoints and human ownership at every stage. In that context, agents aren’t a replacement for the existing approach – they’re an extension of it, operating within an environment that has already been designed to manage the risks the regulator has now formalised.
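As a final illustrative sketch, with hypothetical names and tools throughout, the same controls can be expressed for agentic execution: a fixed registry of approved tools and a human approval gate on every step of the plan, so autonomy never extends beyond the environment that has already been designed to contain it.

```python
from typing import Callable

# A fixed registry of pre-approved tools: the agent cannot reach anything else.
APPROVED_TOOLS: dict[str, Callable[[str], str]] = {
    "summarise_disclosure": lambda args: f"summary of {args}",
    "check_against_framework": lambda args: f"compliance notes for {args}",
}

def run_agent(plan: list[tuple[str, str]],
              approve: Callable[[str, str], bool]) -> list[str]:
    """Execute a multi-step plan under the same controls as single outputs."""
    results = []
    for tool_name, args in plan:
        if tool_name not in APPROVED_TOOLS:   # narrow application: approved tools only
            raise PermissionError(f"{tool_name} is not an approved tool")
        if not approve(tool_name, args):      # human ownership at every stage
            raise RuntimeError(f"Reviewer declined step: {tool_name}")
        results.append(APPROVED_TOOLS[tool_name](args))
    return results
```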

The technology will change. The methodology will not.

The question is no longer whether generative AI will be used in corporate reporting. It’s whether it’s being used in a way that stands up to scrutiny – today and as the technology continues to evolve.
