
The Deloitte AI Misstep: Why Understanding Still Matters



When Deloitte was forced to refund part of a $440,000 government contract because its report contained AI-generated errors and fake citations, the headlines almost wrote themselves.

It is easy to frame it as another “AI gone rogue” story, but the real problem was not artificial intelligence. It was human complacency.


What actually happened

For those who missed it:

• Deloitte was paid $440,000 by the Australian government to conduct an assurance review.

• After release, researchers found fake legal references and fabricated citations in the report.

• Deloitte later admitted sections were generated using AI tools.

• The firm corrected the report and refunded part of the contract.


What the researchers uncovered are known as AI hallucinations: instances where an AI system confidently produces information that sounds credible but is not true. These errors slip through when AI outputs are not properly checked, guided, or grounded in a verified source of truth.

It was not an AI failure. It was a human oversight failure.


The real risk is not AI. It is us.

At Cornerstone, we use AI every day to analyse data, summarise documentation, and turn technical insights into clear, client-ready language. It is a game-changer for productivity.


But productivity means nothing without understanding. You still need to know the content, grasp the context, and be able to recognise when something doesn’t look right.


That is exactly what seems to have happened at Deloitte. A report meant to provide assurance ended up undermining trust, not because AI failed, but because people stopped questioning it. No one checked the sources. No one validated the claims. AI wrote confidently, and confidently wrong.


The same mindset is showing up at home

This attitude is not just creeping into business; I am seeing it in my own house. My high-school-aged kids tell me all the time:

“Dad, I don’t need to study that — I’ll just ask AI.”

And every time, I give them the same response:


“If you don’t understand the content, how will you know when it’s wrong?”


Because that is the real trap. AI does not just get things wrong. It gets them wrong confidently.

It writes with authority even when it is making things up. Without understanding, you cannot tell the difference between confidence and correctness, and that is where trust collapses.


AI should support us, not replace us

In cybersecurity, blind trust is dangerous. The same goes for AI.

At Cornerstone, we always point AI toward a source of truth such as Microsoft documentation, ACSC guidance, NIST, or CIS, and we validate everything it produces. When you build trust with clients, that trust is fragile. One fabricated fact or hallucinated citation can undo years of credibility.

We see AI as a supporting tool, not a substitute for knowledge. It helps us move faster, but only because we know enough to spot when it is wrong.


AI governance means more than policy

The Deloitte case highlights why every organisation needs more than an AI policy sitting in a document library. Governance is not just about permission; it is about education and discernment.


Everyone in a business should understand:

• What AI should be used for: research, drafting, summarising, or data analysis.

• How to interact with it responsibly: by providing accurate context, verified data, and defined boundaries.

• When to leave it on the sideline: especially for anything involving facts, legal interpretation, or professional judgement.


Policy sets the rulebook. Education builds the instinct.

When people know how and when to engage AI, they do more than comply. They make better decisions, protect the organisation’s credibility, and reduce the risk of Deloitte-style mistakes.

That is the real value of good governance: informed humans using powerful tools wisely.


The wake-up call

The Deloitte case is a wake-up call for all of us, not to fear AI but to re-establish accountability. AI should never be a shortcut to understanding. It should amplify expertise, not replace it.

Whether you are writing a report, managing a business, or raising kids, the principle is the same. Use the tool, but keep your brain switched on.


At Cornerstone Cyber

We believe assurance means accountability. Our insights, reports, and recommendations are grounded in evidence and checked by humans who actually understand the work.

AI helps us work faster, communicate more clearly, and deliver better value. But the thinking? That is still ours.

Because in the end, security and truth depend on people who care enough to check.


Secure today. Stronger tomorrow.

That is how we see it.
