The Quiet End of the Smartphone-Centered AI Era
How OpenAI is leading the change
Here is what’s new in the AI world.
AI news: OpenAI reimagines smartphones
Hot Tea: Space will now host data centers
Open AI: Claude aims to wipe out cloud
ClosedAI: OpenAI launches ‘ChatGPT Health’

The Next AI Device May Not Look Like a Device At All
For more than a decade, the smartphone has been the default gateway to every major technological shift.
Social media, cloud computing, payments, and now generative AI have all been squeezed into the same glass rectangle.
That era may be nearing its limits.
OpenAI’s reported work on a pen-shaped, screenless consumer device, internally referred to as “Gumdrop,” is not interesting because it’s a new gadget, but because it signals a deeper shift in design philosophy:
AI is moving away from screens, apps, and constant visual engagement toward ambient, intent-driven interaction.
The rumored device is small, screenless, and designed to be carried rather than stared at.
Its cameras and microphones absorb context. Handwritten notes are captured and converted directly into structured input for ChatGPT.
In other words, the interface fades, while capability remains.
What OpenAI appears to be exploring instead is continuous, low-friction cognition, meaning AI that listens, observes, and assists without demanding constant attention.
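To make “handwritten notes become structured input” concrete, here is a rough sketch of what such a capture pipeline could look like using OpenAI’s public chat completions API. To be clear, this is our illustration, not Gumdrop’s actual design: the model choice, the to-do schema, and the helper function are all assumptions.

```python
# Hypothetical sketch: turn a photographed handwritten note into structured
# input for ChatGPT. The device integration is imagined; the API call uses
# OpenAI's public Python client (pip install openai).
import base64
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def note_image_to_tasks(image_path: str) -> list[dict]:
    """Transcribe a handwritten note and return it as structured to-do items."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe this handwritten note and return JSON "
                         'shaped like {"tasks": [{"title": str, "due": str or null}]}.'},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)["tasks"]


# Example usage (hypothetical):
# print(note_image_to_tasks("note.jpg"))
# [{"title": "Call the dentist", "due": "Friday"}]
```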
The strategic signal becomes clearer when viewed through the lens of OpenAI’s $6.5 billion acquisition of Jony Ive’s hardware startup and the involvement of LoveFrom.
Apple’s greatest success was never raw compute power. It was interface minimization: removing keyboards, buttons, and friction until technology felt invisible.
The pursuit of AGI runs on the same philosophy: what matters is not just how capable the intelligence is, but how it is embedded into daily life.

Earth Can’t Power Intelligence Anymore: Gen AI is Forcing Compute Off the Planet
As generative AI models grow larger, more capable, and more energy-intensive, the limits of Earth-based infrastructure are becoming harder to ignore.
Power constraints, cooling costs, water usage, and land availability are no longer abstract concerns; they are active bottlenecks on the path to scaled intelligence.
That pressure is now pushing compute somewhere unexpected: orbit.
Starcloud (formerly Lumen Orbit), a member of the NVIDIA Inception program, is developing space-based data centers designed specifically to address the two biggest constraints facing GenAI infrastructure today: energy and cooling.
The company recently outlined a concept for a 5-gigawatt orbital data center, supported by a solar and thermal array roughly 4 km on each side, about six square miles in area.
What sounds extreme begins to look rational when viewed through a GenAI lens.
In sun-synchronous orbit, Starcloud estimates a 95% capacity factor, enabled by near-constant solar exposure without atmospheric loss. Cooling—a massive cost and environmental issue on Earth—is handled by the vacuum of space itself, eliminating the need for water-intensive evaporative systems.
The company projects energy costs 10–20× cheaper than terrestrial data centers, even after launch expenses, and estimates a 10× reduction in carbon emissions over the system’s lifetime.
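To put those numbers in perspective, here is a quick back-of-the-envelope comparison. Only the 5 GW nameplate and 95% capacity-factor figures come from Starcloud; the terrestrial solar capacity factor is a typical value assumed for illustration.

```python
# Back-of-the-envelope: annual energy delivered by the same nameplate capacity
# in orbit vs. on the ground. The 5 GW and 95% figures are from the article;
# the ~25% terrestrial solar capacity factor is an assumed typical value.
HOURS_PER_YEAR = 8760

nameplate_gw = 5.0
orbital_cf = 0.95       # near-constant sunlight, no atmosphere or night cycle
terrestrial_cf = 0.25   # typical utility-scale solar (assumption)

orbital_gwh = nameplate_gw * orbital_cf * HOURS_PER_YEAR          # ~41,610 GWh/yr
terrestrial_gwh = nameplate_gw * terrestrial_cf * HOURS_PER_YEAR  # ~10,950 GWh/yr

print(f"Orbital:     {orbital_gwh:,.0f} GWh/year")
print(f"Terrestrial: {terrestrial_gwh:,.0f} GWh/year")
print(f"Ratio:       {orbital_gwh / terrestrial_gwh:.1f}x")       # ~3.8x
```

That is roughly 3.8 times more energy per installed gigawatt from sunlight availability alone, before the cooling question even enters the picture.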
Space-based data centers aren’t about novelty. They’re a signal that GenAI has reached a scale where planetary constraints matter.
Claude Code and the Quiet Collapse of the Cloud-Centric Model
For the last decade, software has steadily moved in one direction: upward into the cloud.
Apps became services. Services became subscriptions. And our data, everything from work documents to personal projects, became something we accessed, not something we truly owned.
This model gave cloud providers power: the more features they offered, the more locked-in users became.
Claude Code hints at a reversal.
Balaji Srinivasan isn’t claiming Claude Code will instantly dethrone cloud software.
Instead, he highlights a subtler, potentially revolutionary shift: it dramatically lowers the barrier that has kept cloud dominance secure for so long, namely the difficulty of running complex software locally.
If an AI can take a moderately complex cloud app and generate a usable local version, one that runs directly on your own files, the balance of power starts to tip.
These local clones won’t be flawless. They don’t need to be. All they have to do is be functional enough to let you experiment, iterate, and improve.
That’s where the real significance lies.
Historically, files endured while apps disappeared. Word processors came and went, but .txt, .pdf, and .md survived.
The artifacts you created were portable; the software that created them was ephemeral.
Claude Code collapses that distinction. If AI can generate local tools on demand that operate directly on durable, user-controlled files, apps themselves become portable, malleable artifacts, not fixed services.
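To picture the kind of artifact this produces, here is a hypothetical example of the sort of small, file-native tool a coding assistant could generate on request. The task (searching your own Markdown notes) and every name in it are invented for illustration; it is not output from Claude Code.

```python
# Hypothetical example of an AI-generated local tool: a tiny full-text search
# over your own Markdown notes. No server, no account, no vendor -- just a
# script operating on files you already control.
import sys
from pathlib import Path


def search_notes(root: str, query: str) -> None:
    """Print every Markdown line under `root` that contains `query`."""
    query_lower = query.lower()
    for path in sorted(Path(root).rglob("*.md")):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if query_lower in line.lower():
                print(f"{path}:{lineno}: {line.strip()}")


if __name__ == "__main__":
    # Usage: python search_notes.py ~/notes "quarterly review"
    search_notes(sys.argv[1], sys.argv[2])
```

The point is not the script itself but its properties: it is local, readable, forkable, and it operates directly on durable files rather than on a vendor’s database.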
The implications are profound:
Reclaim control: Individuals and teams can rebuild internal tools without depending on proprietary platforms.
Fork and iterate: Workflows become flexible, forkable, and customizable like open-source code repositories.
Software as material: Programs shift from being contracts with vendors to objects you can manipulate, improve, and preserve.
The cloud will remain useful, but the era of absolute dependency on centralized platforms could be drawing to a close.
ChatGPT Health: AI Stops Answering Questions and Starts Knowing You
For decades, shopping followed a familiar path: search, compare, read reviews, then decide.
Brands, in turn, optimized for visibility within that flow: SEO, ads, influencer content, and shelf placement.
Generative AI rewires that sequence.
According to new research from BCG, more than 60% of consumers now express high trust in GenAI-generated shopping recommendations, and shopping has become the third most common use case for GenAI, behind only work and general information.
What’s more telling is where this is happening: not just for expensive electronics or travel, but for everyday purchases like groceries. This marks a deeper behavioral shift.

Consumers are no longer just using AI to find products; they are delegating judgment.
What makes this moment different from earlier recommendation engines is trust. Consumers describe GenAI as direct, objective, transparent, and personalized. In other words, it doesn’t feel like advertising.
For frequent users, GenAI assistants now rank as the most influential touchpoint in the purchase journey, surpassing search engines, marketplaces, and brand websites.
Thus, visibility is no longer about ranking on a page or winning an auction. It’s about whether your product shows up inside an AI-generated answer, and how it is framed.
Brands are no longer competing only with other brands. They are competing with the AI’s interpretation of value.
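For a brand team, that shift suggests a new kind of monitoring: instead of tracking a search ranking, you periodically ask the assistant the questions your customers ask and check whether, and how, you show up. Here is a minimal sketch using OpenAI’s public API; the question, the brand list, and the model choice are illustrative assumptions, not a product or a methodology from the BCG research.

```python
# Hypothetical "AI shelf" visibility check: ask a model the kind of question a
# shopper would ask, then see which tracked brands it surfaces in its answer.
# Uses OpenAI's public Python client; the brands and question are made up.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRANDS = ["Acme", "Globex", "Initech"]  # brands you track (illustrative)
QUESTION = "Which laundry detergent should I buy for sensitive skin?"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": QUESTION}],
)
answer = response.choices[0].message.content

for brand in BRANDS:
    status = "mentioned" if brand.lower() in answer.lower() else "absent"
    print(f"{brand}: {status}")
```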
In markets like India, Brazil, and the US, GenAI usage now spans both professional and personal decisions. This cross-context trust reinforces itself: once people rely on AI at work, relying on it to choose a product feels natural.
But, over time, this could reduce brand loyalty as traditionally understood.
For brands, the question now is whether your brand will still exist in a world where the assistant, not the shelf, decides what gets seen.

For any organization ready to move from experimentation to execution, the answer is DataManagement.AI.
Journey Towards AGI
Research and advisory firm guiding industry leaders and their partners to meaningful, high-ROI change on the journey to Artificial General Intelligence.
Know Your Inference: Maximising GenAI impact on performance and efficiency.
Model Context Protocol: Connect AI assistants to all enterprise data sources through a single interface.
Your opinion matters!
Hope you loved reading this edition of our newsletter as much as we had fun writing it.
Share your experience and feedback with us below, ‘cause we take your critique very seriously.
How was your experience?
Thank you for reading
-Shen & Towards AGI team