Is Your Enterprise Ready for an Agent-First Economy?
Commerce, consulting, and compute are being re-architected
Here is what’s new in the AI world.
AI news: Google’s New Open Standard for AI Agents in Commerce
Google: Personal Intelligence Turns Gemini Into Your Digital Memory
Hot Tea: McKinsey’s 25,000 AI Agents and the New Workforce
OpenAI: The $10B Cerebras Deal Redefines Scaling AI
UCP: Google’s Bet on an AI-Native Commerce Layer
Sundar Pichai recently signaled a quiet but consequential shift in how digital commerce may work by announcing the Universal Commerce Protocol (UCP) - an open standard designed to let AI agents transact across the entire shopping journey, end to end.
At its core, UCP is an interoperability layer.
Built in partnership with Shopify, Etsy, Wayfair, Target, and Walmart, it allows AI systems to “talk” to merchant platforms in a shared language: discovering products, checking availability, comparing prices, and initiating checkout without one-off integrations.
Google has confirmed that UCP will soon power native checkout directly inside AI Mode and the Gemini app, removing the need to hand users off to external sites.
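UCP's full wire format isn't reproduced here, but the flow it standardizes - discover, check availability, compare, check out - can be sketched in a few lines. Everything below (endpoint shapes, field names, the merchants) is an illustrative assumption, not the published spec:

```python
# Hypothetical sketch of a UCP-style agent-to-merchant exchange.
# Field names, merchants, and schemas are invented for illustration,
# not taken from the actual protocol.
from dataclasses import dataclass

@dataclass
class Offer:
    merchant: str
    sku: str
    price_usd: float
    in_stock: bool

def discover(query: str, merchants: list[str]) -> list[Offer]:
    """One standardized query fans out to every UCP-speaking merchant."""
    # In a real integration this would be a signed HTTP call per merchant;
    # canned offers keep the sketch self-contained and runnable.
    catalog = {
        "acme-shop": Offer("acme-shop", "TIRE-205-55R16", 121.50, True),
        "bolt-mart": Offer("bolt-mart", "TIRE-205-55R16", 118.00, False),
    }
    return [catalog[m] for m in merchants if m in catalog]

def checkout(offer: Offer, payment_token: str) -> dict:
    """Initiate checkout without handing the user off to the merchant site."""
    assert offer.in_stock, "agent should only check out available offers"
    return {"status": "confirmed", "merchant": offer.merchant,
            "sku": offer.sku, "charged_usd": offer.price_usd,
            "payment": payment_token[:4] + "…"}  # token redacted in receipt

offers = discover("205/55R16 all-season tire", ["acme-shop", "bolt-mart"])
best = min((o for o in offers if o.in_stock), key=lambda o: o.price_usd)
print(checkout(best, "tok_demo_1234"))
```

The point is the shared shape: one query format reaches every participating merchant, and the transaction completes in the same channel where the intent was expressed.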
Why does this matter?
For the last decade, ecommerce has been optimized for humans clicking through screens. But consumer behavior is already shifting.
Over 40% of Gen Z shoppers now begin product discovery on AI-powered or search-based assistants rather than retailer websites.
BCG even estimated that AI-assisted shopping could influence more than $1 trillion in global retail spending by 2030. UCP is infrastructure for that future.

For businesses, the implications are structural, not cosmetic.
First, distribution changes.
If AI agents become the primary interface, merchants no longer compete only on SEO or ad spend, but on how well their systems can be understood and negotiated with by machines.
UCP standardizes that interface, lowering integration costs and reducing reliance on proprietary marketplaces.
Second, conversion friction collapses.
Today, cart abandonment rates hover around 70% globally.
Native AI checkout shortens the path from intent to transaction, which early pilots suggest could improve conversion rates by 10–20%, particularly for repeat and replenishment purchases.
Third, small businesses gain leverage.
Open standards historically favor smaller players. Just as HTTP enabled millions of websites to exist without negotiating with browsers, UCP allows merchants to be “AI-readable” without building custom agent integrations.
UCP doesn’t replace storefronts overnight.
But it quietly establishes a commerce layer where AI agents act as buyers, negotiators, and executors on behalf of humans.
For businesses, the takeaway is clear: commerce is no longer just user-facing - it’s becoming agent-facing. And standards, not apps, will decide who gets seen.
Google Launches Personal Intelligence - Activates an AI Moat Others Can’t Reach
Google has introduced Personal Intelligence inside the Gemini app, a new capability that allows Gemini to connect (only with user permission) to data from Gmail, Google Photos, Search, and YouTube history.
It’s launching in beta, framed as a way to make Gemini “more helpful.” But what’s actually happening runs deeper than better recommendations.
For years, AI assistants have worked with partial context. They respond to prompts, remember a few preferences, maybe recall past conversations.
Personal Intelligence shifts that model.
Gemini can now reason across the digital traces people have already created: emails that describe plans, photos that reveal habits, searches that capture curiosity, videos that signal taste.
Instead of asking users to explain themselves from scratch, Gemini can infer.
That difference shows up immediately in how interactions feel.
Ask for tires, and Gemini doesn't just list popular brands; it understands your car. Ask for travel ideas, and it moves beyond generic lists to places aligned with how you explore.
The assistant stops being reactive and starts feeling situational.
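Under the hood, this kind of behavior is usually a retrieval-and-grounding pattern: pull relevant traces from permissioned sources, then hand the model that context up front. Google hasn't detailed Gemini's internals, so the sketch below is a generic version of the pattern with invented sources:

```python
# Generic context-grounding pattern (not Gemini's actual architecture):
# retrieve relevant traces from user-permissioned sources, then prepend
# them to the prompt so the model can infer instead of asking.
PERMISSIONED_SOURCES = {
    "mail":   ["Service invoice: 2019 Honda Civic, front tires worn"],
    "search": ["205/55R16 tires review", "best all-season tires"],
}

def retrieve_context(query: str, sources: dict[str, list[str]]) -> list[str]:
    """Naive keyword overlap; a production system would use embeddings."""
    terms = set(query.lower().split())
    return [t for traces in sources.values() for t in traces
            if terms & set(t.lower().split())]

def build_prompt(query: str) -> str:
    context = retrieve_context(query, PERMISSIONED_SOURCES)
    header = "\n".join(f"- {c}" for c in context) or "- (no relevant traces)"
    return f"Known about this user:\n{header}\n\nUser asks: {query}"

print(build_prompt("recommend tires"))
```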
What makes this moment notable isn’t the feature set, it’s the foundation.
Every AI company wants personalization. Some store conversation history. Others track behavior inside their own apps.
Google’s position is fundamentally different. It sits on a living archive of daily life: billions of inboxes, trillions of photos, decades of search intent, and the largest video consumption graph in the world.
This is data that can’t be recreated through clever prompting or short-term usage. It exists because Google has been the default layer of the internet for twenty years.
The competitive implications are hard to ignore.
When AI models increasingly resemble each other in capability, differentiation moves elsewhere.
It moves to context. To continuity. To whether an assistant understands not just what you ask, but why you’re asking it, based on patterns that span years, not sessions.
This is the same logic that underpins the shift toward multi-agent AI.
Instead of one model guessing at intent, specialized agents handle different dimensions - memory, reasoning, planning, execution - and coordinate in real time. That orchestration layer is where intelligence compounds.
AgentsX is built around this idea from the ground up, treating AI not as a monolith but as a matrix of experts dynamically collaborating across workflows.
The next set of industry leaders, then, won't just be those with strong models or large datasets, but those who can coordinate intelligence at scale, deciding which agent should think, which should act, and which should stay out of the way.
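As a rough illustration of what "deciding which agent should think" can mean in practice (this is not AgentsX's actual implementation), here is a minimal router that activates only the specialists whose skills overlap with the task:

```python
# Minimal multi-agent routing sketch (illustrative only).
# An orchestrator checks each specialist against the task and only
# activates the ones that match; the rest stay out of the way.
from typing import Callable

Agent = Callable[[str], str]

def memory_agent(task):   return "recalled: user prefers concise output"
def planner_agent(task):  return f"plan: split '{task}' into 3 subtasks"
def executor_agent(task): return f"done: executed '{task}'"

SPECIALISTS: dict[str, tuple[Agent, set[str]]] = {
    "memory":   (memory_agent,   {"recall", "prefers", "history"}),
    "planner":  (planner_agent,  {"plan", "organize", "split"}),
    "executor": (executor_agent, {"run", "execute", "send"}),
}

def orchestrate(task: str) -> dict[str, str]:
    """Route the task only to agents whose skills overlap with it."""
    words = set(task.lower().split())
    return {name: agent(task)
            for name, (agent, skills) in SPECIALISTS.items()
            if words & skills}  # non-matching agents never run

print(orchestrate("plan and execute the quarterly report"))
```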

McKinsey’s 25,000 Agents and the Quiet Redefinition of “Workforce”
If a 40,000-person consulting firm adds 25,000 AI agents in under two years, something fundamental has changed, and it’s not just productivity.
That’s the reality McKinsey & Company is now living.
According to CEO Bob Sternfels, the firm’s effective workforce has grown to roughly 60,000, made up of 40,000 humans and about 25,000 AI agents. A year and a half ago, that number was only in the low thousands.
In another year and a half, Sternfels expects every McKinsey employee to be “enabled by at least one or more agents.”
This isn’t a side experiment. It’s a redefinition of how the firm operates.
AI agents, unlike traditional tools, don’t wait for instructions step by step. They decompose problems, execute tasks, and move work forward autonomously.
Embedded across research, analysis, modeling, drafting, and execution, they function less like software and more like junior teammates, except they scale instantly and never leave the firm.
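In code terms, that autonomy is a plan-and-execute loop. The sketch below is generic - nothing here reflects McKinsey's actual stack - but it shows how an agent can carry work forward step by step without fresh human instructions:

```python
# Generic plan-and-execute agent loop (a sketch, not McKinsey's system).
# The agent decomposes a brief into steps, works through them, and
# carries intermediate results forward on its own.
def decompose(brief: str) -> list[str]:
    """Stand-in for an LLM planning call."""
    return [f"gather data for: {brief}",
            f"build model for: {brief}",
            f"draft summary of: {brief}"]

def execute(step: str, context: list[str]) -> str:
    """Stand-in for tool use or an LLM execution call."""
    return f"[done] {step} (using {len(context)} prior results)"

def run_agent(brief: str) -> list[str]:
    results: list[str] = []
    for step in decompose(brief):
        results.append(execute(step, results))  # each step sees prior work
    return results

for line in run_agent("retail margin analysis"):
    print(line)
```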
The immediate effect is leverage. A single consultant supported by multiple agents can cover more ground, iterate faster, and move from insight to implementation with far less friction.
But the deeper impact is structural. McKinsey is no longer organized purely around human capacity; it’s organized around human judgment amplified by machine execution.
That shift helps explain why QuantumBlack, McKinsey’s 1,700-person AI arm, now accounts for roughly 40% of the firm’s work.
Consulting is moving away from decks and recommendations toward systems that are built, deployed, and operated over years. Slideware doesn’t need agents. Living systems do.
The talent profile is changing too. McKinsey and its peers are no longer optimizing only for polished analysts or industry experts.
They’re looking for people who can move fluidly between strategy and engineering, who can think like consultants and work alongside machines. BCG’s “forward-deployed consultants,” who build AI tools directly with clients, point in the same direction.
Perhaps the most telling signal is the business model shift.
McKinsey is moving away from classic fee-for-service advisory toward outcome-linked engagements, where it helps underwrite and deliver measurable business results.
That’s only viable if you can stay embedded, automate execution, and scale impact beyond human hours. Agents make that possible.
What’s emerging here isn’t just “AI adoption.” It’s a preview of how knowledge work itself is reorganizing. Firms won’t compete based on headcount alone, but on how effectively they orchestrate humans and agents together.
The bottleneck is no longer access to AI capability. It’s coordination.
Once agents, models, and automated systems begin to multiply inside an organization, the real challenge becomes knowing what is connected to what, who can access which data, and how decisions flow across systems without breaking trust or compliance.
That’s the gap DataManagement.AI is designed to close.
As companies move from pilots to production, they need a way to integrate AI across fragmented data environments, oversee models and data streams in one place, and enforce governance without slowing teams down.
DataManagement.AI provides that control layer, helping enterprises turn expanding AI footprints into reliable, auditable, and measurable business outcomes rather than disconnected experiments.
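Stripped to its essentials, a control layer like that is a permissions registry with an audit trail. The sketch below captures only the shape of the problem; it is not DataManagement.AI's API:

```python
# Sketch of the coordination problem: a registry that knows which agent
# may touch which data source, enforces it, and leaves an audit trail.
# (Illustrative only; names and structure are invented.)
from datetime import datetime, timezone

GRANTS = {  # agent -> data sources it is allowed to read
    "pricing-agent":  {"sales_db"},
    "drafting-agent": {"crm", "docs"},
}
AUDIT_LOG: list[dict] = []

def access(agent: str, source: str) -> bool:
    """Allow or deny, and record the decision either way."""
    allowed = source in GRANTS.get(agent, set())
    AUDIT_LOG.append({"ts": datetime.now(timezone.utc).isoformat(),
                      "agent": agent, "source": source, "allowed": allowed})
    return allowed

assert access("pricing-agent", "sales_db")   # permitted
assert not access("pricing-agent", "crm")    # blocked, but still logged
print(f"{len(AUDIT_LOG)} decisions recorded for compliance review")
```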

OpenAI and Cerebras Are Redefining What “Scaling AI” Really Means
OpenAI’s $10 billion computing deal with Cerebras isn’t just another headline about buying more chips. It’s a signal that compute itself has become a strategic weapon, and that the shape of the AI stack is starting to bend in new directions.
Under the agreement, OpenAI will purchase up to 750 megawatts of computing capacity over three years, using Cerebras-built systems to run inference and reasoning workloads for ChatGPT and related products.
That's an enormous amount of power, roughly the draw of a mid-sized city, and it comes at a moment when the industry's bottleneck is no longer model quality but how fast, cheaply, and reliably those models can think in real time.
What makes this deal especially telling is why OpenAI is doing it.
Cerebras doesn’t compete with Nvidia by building smaller, faster GPUs. Instead, it builds wafer-scale engines, entire silicon wafers turned into a single chip, designed to keep massive models on one piece of hardware.

For inference and reasoning models, where latency matters as much as raw throughput, that architectural difference can translate into faster responses and lower costs at scale.
In other words, OpenAI isn’t hedging for novelty. It’s diversifying compute to optimize for speed of thought, not just training scale.
There’s a second layer here: leverage.
By adding Cerebras to its compute mix, OpenAI reduces dependence on any single supplier at a time when Nvidia GPUs are scarce, expensive, and politically sensitive.
This mirrors what hyperscalers did a decade ago when they began designing custom silicon, not because CPUs disappeared, but because control over infrastructure meant control over margins and roadmap.
The deal also reframes the AI arms race. Training ever-larger models grabs attention, but inference is where usage, revenue, and user experience live.
As reasoning models become more deliberate, taking time to “think” before answering, the cost of inference threatens to balloon. Whoever solves that efficiently gains a structural advantage that compounds with every query.
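The ballooning is easy to see with rough numbers: a reasoning model that emits ten times as many hidden "thinking" tokens costs roughly ten times as much per query. The figures below are illustrative assumptions, not anyone's actual pricing or traffic:

```python
# Back-of-envelope: why reasoning models inflate inference cost.
# All numbers are illustrative assumptions, not vendor figures.
price_per_1k_tokens = 0.002    # $ per 1,000 output tokens (assumed)
queries_per_day = 100_000_000  # assumed workload

def daily_cost(tokens_per_query: int) -> float:
    return queries_per_day * tokens_per_query / 1000 * price_per_1k_tokens

direct = daily_cost(300)      # a model that answers directly
reasoning = daily_cost(3000)  # one that "thinks" with ~10x hidden tokens

print(f"direct answers:   ${direct:>12,.0f}/day")
print(f"reasoning traces: ${reasoning:>12,.0f}/day  ({reasoning/direct:.0f}x)")
```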
Finally, this partnership hints at a future where the AI stack fragments by workload.
GPUs won’t vanish, but neither will they be the only game in town. Specialized silicon, purpose-built data centers, and bespoke cloud services are starting to matter as much as model architecture.
In that sense, OpenAI’s partnership with Cerebras isn’t about chips at all. It’s about acknowledging that in the next phase of AI, compute decisions will quietly determine who can scale, and who can’t.
Journey Towards AGI
Research and advisory firm guiding industry and their partners to meaningful, high-ROI change on the journey to Artificial General Intelligence.
- Know Your Inference: Maximising GenAI impact on performance and efficiency.
- Model Context Protocol: Connect AI assistants to all enterprise data sources through a single interface.
Your opinion matters!
Hope you loved reading this edition of our newsletter as much as we had fun writing it.
Share your experience and feedback with us below - we take your critique seriously.
How's your experience?
Thank you for reading
-Shen & Towards AGI team