If there is one mantra that has defined the last decade of tech, it is “move fast and break things.” And looking at the Australian ecosystem right now, we are certainly moving fast. But I worry we are about to break the wrong things.
There is a dangerous disconnect playing out in our sector. On one side, technical velocity is off the charts: founders are aggressively integrating complex AI models into their products. On the other, there is silence where the safety protocols should be.
This disconnect isn’t just an impression. It is the defining story of the 2025 Startup Muster Report.
The numbers paint a picture of an ecosystem with its foot firmly on the accelerator. We have reached a tipping point: 51% of Australian startups are actively building AI products or services. Even more significantly, 56% are using AI for key internal functions. The technology isn’t coming; it is already here, embedded in the codebases and workflows that will drive our digital economy.
But here is the statistic that stopped me cold.
89% of founders could not name a single voluntary AI safety standard published by the Australian Government.
Let that sink in for a moment. Nearly nine out of ten leaders driving this technological revolution are operating without a map for the safety protocols designed to protect them.
I know the counter-argument: “Malcolm, I have six months of runway. Governance is a luxury for Series B. Right now, I need to ship.”
I get it. In the early stages, survival is the only metric that matters. But this attitude assumes that governance is just administrative overhead, a tax on speed. It also assumes that the core safety work has already been done for you.
There is an unspoken assumption that because we are building on top of world-class models, the safety is “baked in.” It is the belief that “OpenAI has this covered,” so we don’t need to worry about it. But read the terms of service for most upstream model providers: their liability is typically capped at the cost of your API usage. You own the output. You own the hallucination. You own the bias.
When you build on stochastic models, systems that are probabilistic by nature, you are building on rented land without insurance. Without a governance framework, you are risking reputational collapse and data sovereignty breaches that can kill a company faster than running out of cash.
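To make “probabilistic by nature” concrete, here is a toy sketch of the sampling step at the heart of every large language model. It is pure Python with made-up scores, not any provider’s real API. The point is simple: the same prompt can legitimately produce different outputs on every run.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 0.8) -> str:
    """Sample one token from a softmax distribution, the step an
    autoregressive model repeats for every token it emits."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())
    weights = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    r = random.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # floating-point edge case: fall back to the last candidate

# Illustrative (made-up) scores for what might follow “The invoice total is”:
logits = {"$1,200": 2.1, "$1,500": 1.9, "due": 1.4, "unknown": 0.7}
print([sample_next_token(logits) for _ in range(5)])
# Different runs give different answers. That is by design, not a bug.
```

Run it twice and you will get two different lists. That nondeterminism is exactly what your governance framework has to account for.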
This is where we need to reframe the conversation from “compliance” to “commercial advantage.”
Enterprise customers are waking up to these risks. They are asking harder questions during procurement. They want to know your data lineage. They want to know your guardrails. If your answer is a blank stare, you lose the deal to the competitor who can show a framework.
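What does “show a framework” look like on day one? Here is a minimal, illustrative sketch (all names hypothetical, not any specific library) of a guardrail layer that answers both procurement questions at once: model output is checked before release, and every decision is logged with its lineage.

```python
import json
import re
from datetime import datetime, timezone

# Hypothetical policy: block raw nine-digit runs (e.g. TFN-shaped strings).
BLOCKED_PATTERNS = [r"\b\d{9}\b"]

def guarded_completion(prompt: str, model_call, audit_log: list) -> str:
    """Wrap any model call (a function str -> str, e.g. your provider's
    client) with an output check and a provenance record."""
    raw = model_call(prompt)
    violations = [p for p in BLOCKED_PATTERNS if re.search(p, raw)]
    output = raw if not violations else "[response withheld for human review]"
    # Data-lineage record: what went in, what came out, what we did about it.
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "raw_output": raw,
        "violations": violations,
        "released": not violations,
    })
    return output

# Usage with a stubbed model that leaks a nine-digit number:
log: list = []
print(guarded_completion("Summarise the client file",
                         lambda p: "Client TFN is 123456789", log))
print(json.dumps(log[-1], indent=2))
```

Twenty lines is not a governance program. But an audit trail like this is the difference between a blank stare and a credible answer in a procurement meeting.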
In this context, the 89% awareness gap is not just a safety failure; it is a commercial failure. It represents a massive missed opportunity for organisations like the National AI Centre (NAIC) to step in. We do not need heavy-handed regulation or academic PDFs that go unread.
We need practical, high-level education that positions the Australian AI Safety Standards as a tool for closing enterprise deals.
Leadership needs to understand that governance is not about slowing down. As racing legend Mario Andretti famously said, it is the brakes that allow the car to go fast. Safety standards let you push the limits because you know you can survive the corners.
If we can close this knowledge gap, Australian startups won’t just be fast. They will be robust. They will be defensible. And they will be investable in a global market that is increasingly looking for responsible AI.
The 2025 Startup Muster report is a wake-up call. The engine is running hot, but we need to make sure someone is actually watching the road.
If your AI model hallucinated a liability for your biggest client tomorrow, would you be able to show your board a standard of care, or would you just point to a Terms of Service agreement that offers you zero protection?
Appendix
If you want to move from the 89% who don’t know to the 11% who do, here is your starter pack. Do not view these as compliance hurdles; view them as sales collateral for your next enterprise RFP.
1. The “Guidance for AI Adoption” (NAIC)
The report mentions the Voluntary AI Safety Standard, but the National AI Centre evolved this into the Guidance for AI Adoption in late 2025. It condenses safety into six essential practices. If you read only one document, make it this one. It is the blueprint for “safe speed.”
2. Australia’s AI Ethics Principles
These are the “North Star” values: eight principles covering fairness, reliability and safety, transparency and explainability, accountability, and more. They sound high-level, but they are the first thing a serious procurement officer will quiz you on. “How do you ensure fairness?” is a question you need a prepared answer for (a minimal sketch of one such answer follows this list). The official principles are published on the Department of Industry, Science and Resources website.
3. ISO/IEC 42001
This is the international heavyweight. It is the management system standard for AI. You likely don’t need to be certified right now, but knowing it exists and aligning your roadmap to it puts you ahead of 90% of your competitors when talking to banks or government clients. Read more at the official ISO page.
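And since “How do you ensure fairness?” deserves more than a shrug, here is the simplest fairness check you can run today (hypothetical data and field names): compare your model’s favourable-outcome rate across groups, the raw measure behind what practitioners call demographic parity.

```python
from collections import defaultdict

def positive_rate_by_group(decisions: list[dict]) -> dict[str, float]:
    """Favourable-outcome rate per group: the raw input to a
    demographic-parity check."""
    totals: dict[str, int] = defaultdict(int)
    approvals: dict[str, int] = defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approvals[d["group"]] += int(d["approved"])
    return {g: approvals[g] / totals[g] for g in totals}

# Hypothetical decision log from an AI-assisted approval flow:
decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
rates = positive_rate_by_group(decisions)
print(rates, "parity gap:", round(max(rates.values()) - min(rates.values()), 2))
```

A gap you can measure is a gap you can explain. That single number is a better answer to a procurement officer than any slide deck.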