Artificial Superintelligence is the endgame everyone talks about, yet no one can build it
Today, we’re diving into:
Hot Tea: ASI explained and why it’s still far away
Open AI: AI firms quietly becoming cybersecurity’s new gatekeepers
Open AI: UAE bets big on agentic AI government systems
Closed AI: DeepSeek V4 signals global AI power reshuffle
Artificial superintelligence, or ASI, is often described as the point where machines surpass human intelligence across every domain. Not just faster calculations or better pattern recognition, but superior reasoning, creativity, and decision-making in ways that humans cannot match.
That sounds dramatic, but the reality is far less immediate.
What we have today is still firmly in the era of narrow AI. Systems like ChatGPT or recommendation engines operate within defined boundaries. They perform exceptionally well at specific tasks, but they do not understand the world in a general sense, and they cannot transfer knowledge across domains without retraining.
ASI sits several layers above this. It assumes the existence of Artificial General Intelligence first, a system capable of learning and reasoning across domains with human-like flexibility. Only after that foundation exists can intelligence scale beyond human limits into something superintelligent.
The gap between where we are and where ASI sits is not just about compute. It is about architecture. Today’s systems rely heavily on large language models, neural networks, and massive datasets. While these are important building blocks, they are insufficient on their own. ASI would require advances in areas like multisensory learning, neuromorphic computing, and self-improving systems that can generate and refine their own code without human intervention.
Even then, there is no guarantee it works. Human intelligence itself is not fully understood, which makes replicating or surpassing it an unsolved problem. Many researchers question whether intelligence as we know it can even be cleanly translated into software.

From a business perspective, this distinction matters more than the headline promise.
There is a tendency to anchor strategy around where AI might eventually go instead of where it actually is. ASI is often framed as the ultimate competitive advantage, a system that can optimize decisions, invent products, and run operations autonomously. While that future is compelling, it is also speculative and distant.
What is real today is the gradual stacking of capabilities that resemble fragments of that future. Generative models are improving language understanding. Autonomous systems are learning to operate in dynamic environments. AI-generated programming is reducing the cost of building software. These are not signs of ASI, but they are signals of direction.
My view is that ASI is less a product and more a phase transition. It will not arrive as a single breakthrough moment. It will emerge from the convergence of multiple systems that already exist in isolation today.
For business leaders, the implication is clear. The risk is not that ASI suddenly appears and disrupts your industry overnight. The real risk is misallocating resources toward a distant abstraction while competitors build advantage using the systems that exist now.
The companies that win will not be the ones waiting for superintelligence. They will be the ones that learn how to operate in a world where intelligence is gradually becoming cheaper, more autonomous, and increasingly embedded into every decision layer of the business.
OpenAI drops GPT-5.5 - and it’s starting to think more like an operator
OpenAI just released GPT-5.5, and the headline isn’t just better performance. It is a shift in how AI systems approach work.
Coming less than two months after GPT-5.4, this release highlights the pace at which frontier models are evolving. But speed alone is not the story. What stands out is capability per instruction. According to Greg Brockman, GPT-5.5 can take vague, underspecified problems and determine what needs to happen next. That sounds simple, but it is a meaningful step toward systems that require less orchestration and more autonomy.
In practical terms, GPT-5.5 improves across coding, software operation, research, and structured output generation. It can analyze datasets, debug code, interact with tools, and produce documents or spreadsheets with minimal prompting. This is not just incremental improvement. It reduces the cognitive load on the user. Instead of breaking a task into steps, the model begins to infer the workflow itself.
That shift matters because most enterprise friction with AI today comes from prompt engineering and process fragmentation. Teams spend more time figuring out how to use the model than actually extracting value from it. GPT-5.5 moves closer to eliminating that gap.
There is also a competitive undertone here. Google and Anthropic are pushing aggressively with their own frontier systems, including Anthropic’s Mythos Preview, which has already raised concerns around its ability to detect deep software vulnerabilities. OpenAI’s response is not just capability expansion, but controlled deployment.
GPT-5.5 is classified under a “High” risk category, not “Critical.” That distinction is important. It signals that while the model is more capable, it is still within a controllable boundary. OpenAI has emphasized extensive red teaming and third-party testing, especially around cybersecurity and biological risk vectors. This reflects a broader industry trend where capability is advancing alongside tighter governance frameworks.
From a systems perspective, GPT-5.5 is starting to behave less like a model and more like an execution layer. It does not just generate outputs. It navigates tasks. That is a subtle but important transition toward agentic behavior, even if it is not fully autonomous yet.

For business leaders, the implication is immediate.
The value of AI is no longer tied only to output quality. It is tied to how much coordination overhead it removes. A model that can interpret ambiguity, decide next steps, and execute across tools compresses entire workflows. This affects software development cycles, research timelines, internal operations, and even decision-making structures.
My view is that GPT-5.5 is not a breakthrough in intelligence. It is a breakthrough in usability. And usability is what drives adoption at scale.
The companies that benefit will not be the ones experimenting at the edges. They will be the ones redesigning workflows around systems that can act with partial context. Because once AI starts figuring out the “next step” on its own, the bottleneck is no longer the model. It is how your organization is structured to use it.
The UAE is rebuilding government around AI - not just adding it
The United Arab Emirates is not experimenting with AI at the edges anymore. It is redesigning the core.
The government has announced plans to shift 50 percent of its public services to artificial intelligence within two years, powered by agentic systems that can execute tasks, analyze data, and make decisions with minimal human input. This is not about chatbots or automation layers. It is about embedding AI directly into how the state functions.
The initiative, led by Mohammed bin Rashid Al Maktoum, targets roughly half of all government sectors, services, and operations. Federal entities will be evaluated based on how quickly they adopt AI, redesign workflows, and integrate intelligent systems into daily execution. At the same time, every federal employee is expected to undergo AI-specific training, which signals that this is as much an organizational transformation as it is a technological one.
What stands out is the emphasis on agentic AI. These are not passive systems waiting for instructions. They are designed to operate continuously, coordinate across functions, and make decisions in real time. In effect, the UAE is building a government layer that behaves more like a network of autonomous operators than a traditional bureaucracy.
This builds on a long runway. The UAE has spent the last two decades digitizing its public infrastructure through initiatives like UAE Pass and Government Services 2.0. What it is doing now is layering intelligence on top of that foundation. Abu Dhabi’s earlier commitment to become a fully AI-native government by 2027 fits directly into this trajectory.
From a systems perspective, this is one of the first large-scale attempts to operationalize agentic AI at a national level. And it changes the benchmark.
Most organizations still treat AI as a tool that enhances existing processes. The UAE is treating it as a replacement for how processes are designed in the first place. That distinction matters. When AI becomes the default execution layer, you do not optimize workflows. You rebuild them.
This is where platforms like AgentsX become relevant. The real challenge is not deploying individual models. It is orchestrating multiple agents, ensuring they operate within defined rules, and integrating them seamlessly into existing systems. Governments and enterprises both face the same constraint: without structured orchestration, agentic systems create more complexity than they remove.
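The orchestration constraint described above can be made concrete with a minimal sketch. This is an illustrative example, not the AgentsX API: the `Agent`, `Orchestrator`, and `allowed_actions` names are hypothetical, and a real agent would call a model or tool where this one simply echoes. The point it demonstrates is the boundary check, refusing any action outside an agent's defined rules before execution.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    allowed_actions: set  # the agent's defined rules

    def execute(self, action: str, payload: str) -> str:
        # A real agent would invoke a model or external tool here.
        return f"{self.name} performed {action} on {payload}"

@dataclass
class Orchestrator:
    agents: dict = field(default_factory=dict)

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def dispatch(self, name: str, action: str, payload: str) -> str:
        agent = self.agents.get(name)
        if agent is None:
            raise KeyError(f"no agent registered as {name!r}")
        # Boundary check: refuse actions outside the agent's rules,
        # so autonomy stays within a controllable envelope.
        if action not in agent.allowed_actions:
            raise PermissionError(f"{name} is not permitted to {action}")
        return agent.execute(action, payload)

orchestrator = Orchestrator()
orchestrator.register(Agent("permits", {"review", "approve"}))
print(orchestrator.dispatch("permits", "review", "application-42"))
```

Without a layer like this, each agent enforces (or forgets) its own limits, which is exactly how agentic systems end up creating more complexity than they remove.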
My view is that this move is less about efficiency and more about control over scale. Governments deal with massive volumes of repetitive, rules-based interactions. If agentic AI can handle even a portion of that workload reliably, it fundamentally changes cost structures, response times, and citizen experience.
For business leaders, the signal is hard to ignore.
If a government can aim to automate half its operations within two years, the question is not whether your organization should adopt AI. It is whether your current operating model can even support that level of transformation. Because once services become faster, cheaper, and more responsive at a national level, customer expectations across industries will reset just as quickly.
DeepSeek V4 drops - and signals a global AI power reshuffle
DeepSeek has released a preview of its new V4 model, and while the headline is stronger reasoning and agentic capabilities, the deeper signal is geopolitical and structural.
The Hangzhou-based startup is positioning V4 to compete directly with frontier systems from OpenAI, Anthropic, and Google. On paper, the upgrades are significant. V4 improves multi-step reasoning, processes larger token volumes more efficiently, and introduces stronger agentic behavior, including autonomous coding and task execution.
But this is not DeepSeek’s first disruption. Its earlier R1 model shocked markets by delivering near top-tier performance at a fraction of the cost, triggering a sell-off in US AI stocks and raising uncomfortable questions about the sustainability of billion-dollar infrastructure bets.
This time, the reaction is calmer. As Ivan Su points out, V4 is not a shock. It is a continuation. And markets have already adjusted to the idea that Chinese AI can be both competitive and cheaper.
What makes V4 strategically important is not just performance, but how it is built and distributed.
Unlike most Western models, DeepSeek continues to lean into open source. That decision is not ideological. It is tactical. Open models scale faster, attract developer ecosystems, and embed themselves into real-world applications across sectors like ecommerce, robotics, and enterprise software. In a market where capital and chips are constrained, distribution becomes the advantage.
On the infrastructure side, DeepSeek is reducing reliance on US hardware. Instead of Nvidia, V4 runs on domestic chips through partnerships with Huawei and others. This is a direct response to export restrictions, but it also signals something bigger. AI capability is no longer tied to a single supply chain. It is becoming modular and regionally independent.
That shift has long-term consequences. If AI systems can be built, trained, and deployed without Western infrastructure, the competitive landscape fragments. You do not have one global AI stack. You have multiple.
There is also a growing tension around intellectual property. US officials and companies have raised concerns about “distillation,” where models learn from outputs of other frontier systems. Whether those claims hold or not, they highlight a new kind of competition. Not just building models, but extracting and replicating capabilities faster than rivals.
From a systems perspective, V4’s emphasis on agentic workflows is the most relevant development for businesses. Models are no longer just generating outputs. They are executing tasks, coordinating steps, and acting with limited supervision.
This is where platforms like AgentsX come into play. As models become more autonomous, the challenge shifts from capability to orchestration. Businesses need structured environments where multiple agents can operate reliably, interact with systems, and stay within defined boundaries. Without that layer, agentic AI creates fragmentation instead of efficiency.
My view is that DeepSeek is not trying to win on raw intelligence alone. It is competing on accessibility, cost, and speed of adoption. And in many markets, those factors matter more than marginal performance gains.
For business leaders, the takeaway is direct.
The AI race is no longer just US versus China in terms of capability. It is proprietary versus open, expensive versus efficient, centralized versus distributed. And depending on your choices, you are not just adopting a model. You are choosing which ecosystem your business will depend on as AI becomes the default execution layer.
Journey Towards AGI
Research and advisory firm guiding on the journey to Artificial General Intelligence
Know Your Inference: Maximising GenAI impact on performance and efficiency.
Model Context Protocol: Connect with us and get end-to-end guidance on AI implementation.
Your opinion matters!
We hope you enjoyed reading this newsletter as much as we enjoyed writing it.
Share your experience and feedback with us below, because we take your critique seriously.
Thank you for reading
-Shen & Towards AGI team