Less than four years after AI chatbots took the world by storm in 2022, the AI industry faces a make-or-break moment in 2026.
Companies like OpenAI, Microsoft, Anthropic, and xAI are pouring billions upon billions of dollars into massive data centres to supply the enormous computing power their systems require.
As the race towards superintelligent AI models capable of functioning without human input heats up, a question arises: in a world where AI is being integrated into every system that can work with its output, who gets the blame when one of these systems makes the wrong call?
“The real storm brews in the legal vacuum around AI agency, where machines are beginning to act, decide, and execute without human hands at every step. This isn’t science fiction anymore. It’s real… the uncomfortable truth is that our laws are still stuck in past decades,” Dingli told The Shift.
While the clunky AI chatbots most people are familiar with may seem like a far cry from agentic AI, which can execute tasks without human input, autonomous systems powered by AI are already a reality.
In Ukraine, which has become the world’s leading innovator in drone technology over the course of Russia’s brutal four-year invasion, elite army units are deploying drones that fall back on AI if the operator loses connection, switching to autonomous mode to lock onto a target and strike without further human input.
In other words, the era of the killer robot has arrived – cinematic warnings like Terminator be damned.
Outside the battlefield, AI companies are pushing for the deployment of agentic AI across virtually all fields of human expertise.
What’s visibly missing, notes AI expert and university professor Alexiei Dingli, are robust legal frameworks that actually envisage a society where AI agents become ubiquitous.

“Most jurisdictions refuse to grant AI systems legal personhood, and rightly so. But this creates a paradox. If no human directly gave the order, and no programmer wrote the fatal line of code, then where does the blame fall?” he added.
The professor further explained how courts and regulators are placing the weight on deployers, a term he uses to describe the people or organisations who put these systems into use.
“We cannot have autonomous power without human responsibility. Whether it’s a drone, an AI tutor, or a digital hiring agent, there must always be a human in the loop or, at the very least, on the hook,” the AI expert added.
Examples of how mass deployment of agentic AI could go wrong abound – a scenario especially worth considering in Malta, since the government has pledged €100 million in funding to spur AI integration.
One of the primary uses for AI is generating code. What if an AI agent writes sloppy, easily exploitable code that ends up enabling criminal activity?
AI has turned out to be surprisingly good at detecting specific medical conditions. What if an AI agent points towards the wrong diagnosis, or dismisses the correct one? What if an AI tutor messes up? The list of possibilities is endless.
“Can we punish code? Of course not. But can we hold the people who built or deployed it accountable? We must. Otherwise, we drift into a world where creators hide behind complexity, and users claim innocence through ignorance. This is not just a legal gap. It’s a moral failing waiting to happen,” Dingli maintains.
While Malta’s leading AI expert believes legislation must be updated to address these concerns, the real issue is whether human decision-making should always reign supreme.
If so, he says, every AI system must include a transparent chain of custody. Someone must own its outcomes, because a machine cannot be dragged to court.
“Otherwise, we lose control not because the machines took it, but because we gave it away,” he adds.
Beyond the significant ethical and legal nightmares posed by agentic AI, the industry’s seemingly booming company valuations may turn out to be a bubble waiting to burst.
As reported by Bloomberg News at the end of last year, investment within the AI industry is largely circular – money loops between the same handful of companies.
Take, for example, how OpenAI inked a $300 billion cloud computing deal with Oracle. Oracle, in turn, is spending tens of billions of dollars on AI processor chips from Nvidia, which agreed to invest $100 billion back into OpenAI.
If you think that’s confusing, the chart below maps the full tangle of circular deals.

“The companies shaping AI’s future are now deeply entangled in each other’s success, to the point where their revenue streams loop back like a digital circle. The big players, Microsoft, Google, and Amazon, have invested billions in AI labs and startups, which are contractually obligated to spend most of that investment on cloud infrastructure owned by their investors,” Dingli explained.
“It’s like lending someone money to buy your product, then calling it revenue. On paper, the growth looks phenomenal. But scratch the surface, and you realise we’re witnessing a financial feedback loop where the capital never truly leaves the ecosystem. It’s not inherently problematic, but it is fragile,” he added.
While the industry’s harshest critics believe the bubble-popping moment isn’t far off, the AI expert believes that it will be more like “a slow deflation”.
Companies will be forced to correct their course, adjust spending, and likely leave plenty of retail investors in the lurch. The product will survive, but reckless, small-fry investors won’t.
Despite the industry’s perils, Dingli remains optimistic.
“We have the opportunity now to embed responsibility into the foundations of these systems before they become too complex to govern. The economic bubble may be sustained by capital gymnastics for a while longer, but the liability bubble, if left untethered, could prove far more dangerous when it bursts,” he concluded.