"500ml Per Question": Altman Reveals ChatGPT's Thirsty AI Secret
AI's Thirst Trap.
Here is what’s new in the AI world.
AI news: The Staggering Water Cost of ChatGPT Queries
What’s new: Alibaba’s $10B Bet
Open source: A New Player in China's OSS AI Boom
OpenAI: OpenAI Delays Open-Source Launch
Hot Tea: OpenAI’s $40 Bn Play
Sam Altman: A Single ChatGPT Query Drinks 500ml Water, The Shocking Truth

OpenAI CEO Sam Altman has shared an unexpected detail about ChatGPT’s environmental impact: each query uses only about one-fifteenth of a teaspoon of water. That equates to approximately 0.000085 gallons, or just a few drops per interaction.
As discussions around the energy and water demands of AI heat up, Altman’s claim brings a fresh perspective. In a blog post on Tuesday, he noted that the average ChatGPT request consumes about 0.34 watt-hours, similar to what an oven uses in a second or what a high-efficiency bulb consumes in a couple of minutes.
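Taken at face value, the two figures are easy to sanity-check. The sketch below uses standard US unit conversions plus the per-query numbers quoted above; the 10 W bulb wattage is an assumption for "high-efficiency bulb," not a number from the blog post.

```python
# Sanity-check of the per-query figures quoted above.
# Unit conversions are standard; the 10 W bulb is an assumed
# "high-efficiency" wattage, not a figure from OpenAI.

TSP_PER_GALLON = 768        # 16 cups per gallon * 48 teaspoons per cup
ML_PER_GALLON = 3785.41

water_gal = (1 / 15) / TSP_PER_GALLON   # one-fifteenth of a teaspoon, in gallons
water_ml = water_gal * ML_PER_GALLON

query_wh = 0.34                          # Altman's energy figure per request
bulb_watts = 10                          # assumed efficient LED bulb
bulb_minutes = query_wh / bulb_watts * 60

print(f"water per query: {water_gal:.6f} gal ({water_ml:.2f} ml)")
print(f"{query_wh} Wh runs a {bulb_watts} W bulb for {bulb_minutes:.1f} minutes")
```

Running this confirms the numbers are internally consistent: one-fifteenth of a teaspoon is roughly 0.000087 gallons, about a third of a milliliter, and 0.34 Wh does power a 10 W bulb for about two minutes.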
Altman also suggested that the cost of AI intelligence could eventually shrink to match the cost of electricity, hinting at a future where AI becomes more affordable and sustainable.
Sam Altman's new essay is saying that we have reached incredible levels of AI.
It also points out that our level of "I'm impressed" is growing even faster than AI is progressing.
On a fun note, have you ever wondered how much water and electricity is wasted on your requests?
— Oleg Zaremba (@olzare)
6:50 PM • Jun 12, 2025
However, the situation is more complex than it sounds.
Large AI models like GPT-4 rely on enormous data centers that require continuous cooling, often using water to keep operations stable. A previous Washington Post report revealed that generating just a short, 100-word email with GPT-4 could use more than a full bottle of water. The exact water usage varies depending on where the data center is located and local climate conditions.
Altman’s new water usage figure comes as environmental concerns over AI’s resource consumption are growing. Some experts even warn that by the end of 2025, AI could consume more energy than the notoriously power-hungry Bitcoin mining industry.
OpenAI hasn’t disclosed how Altman arrived at the water-per-query estimate, and many experts will likely demand more clarity. Still, the announcement appears aimed at reassuring the public about AI’s environmental impact.
So, the next time you interact with ChatGPT, it may only use a drop of water, but the broader question is whether AI can scale up in a way that’s truly sustainable.
Alibaba’s $10B Bet: Ant’s GenAI Platform Targets Global Fintech Domination

Ant International, a global fintech company, has introduced a new generative AI platform designed for super apps and financial services providers. The platform, Alipay+ GenAI Cockpit, leverages AI capabilities trained on Ant International’s core business units, including its merchant payment service (Antom) and cross-border business account solution (WorldFirst).
This enables users to develop AI-agentic and AI-native financial services with improved efficiency, security, and flexibility.
The launch underscores Ant International’s three-pillared AI strategy:
AI Security
Vertical Fintech Expertise
Full-Stack Platform Support
"The future of finance will be shaped by agentic AI that autonomously executes tasks in real automated workflows within sophisticated financial and compliance contexts. It must reliably interact, evolve, and learn rapidly with growing precision."
Platform Foundation & Deployment
The Cockpit was refined using Ant International’s four key business units:
Wallet gateway service (Alipay+)
Merchant payment service (Antom)
Cross-border business accounts (WorldFirst)
Embedded finance (treasury management, lending, credit tech)
Ant goes AI 🧠
Alipay GenAI Cockpit just dropped — built for fintechs to supercharge their services 🚀
Big bet from #AntInternational on generative AI in payments.
#fintech #AI #Alipay #GenAI #startups #payments #innovation
— Paykademy (@paykademy)
3:34 PM • Jun 9, 2025
Clients in Southeast Asia and South Asia are expected to begin official deployment in June 2025.
Three-Pronged Strategy in Detail
1. Security Shield for Trusted AI
Addresses surging AI scamming threats (e.g., deepfakes), which grew >10x YoY, impacting 22% of businesses via AI-generated payment fraud.
Combats external attacks and internal model risks (hallucinations/bias) through the AI SHIELD framework, covering:
System architecture design
Data processing
Model training/inferencing
Provides real-time risk assessment using 100+ recognition models and 600,000 risk lexicons to detect adversarial prompts/data leaks.
Ant’s fraud loss rate is 5% of the industry average.
2. Deep Vertical Financial Expertise
Integrates more than 20 leading LLMs, including Ant’s proprietary Falcon Time-Series Transformer FX Model.
Enhances precision with fintech-specific knowledge bases (e.g., bank transfer rules, dispute policies).
Supports tools for:
Retrieval-augmented generation
Post-training
Evaluation/benchmarking
Powers Antom Copilot, which Ant bills as the world’s first AI agent for merchants, streamlining payment integration, channel optimization, code correction, onboarding automation, and natural-language risk configuration.
3. Full-Stack FinAI Platform Support
Pre-built agents for tasks like:
Customer service
Targeted marketing content
AI-assisted coding
Customizable agents for specialized scenarios:
Travel advisory
Tax refunds
Cross-border remittance
Loyalty rewards
Model Context Protocol (MCP) Marketplace:
Supports existing MCP servers.
Enables businesses to build custom MCP servers for autonomous task completion.
Flexible deployment via public clouds (e.g., Google Cloud) or on-premise environments.
"The FinAI sector is at its big-bang moment. We aim to collaborate with the industry to expand this toolbox and ecosystem, accelerating growth for financial businesses."
The platform merges Ant’s fintech-specific tools, dynamic knowledge bases, and business-ready AI innovations to redefine financial service development.
China's Open-Source AI Push Gains RedNote With dots.llm1 Release

Chinese social media giant RedNote, also known domestically as Xiaohongshu, has released its first open-source large language model (LLM), “dots.llm1.” This move aligns RedNote with a broader trend among Chinese tech firms favoring open-source AI strategies as an alternative to the proprietary models championed by Western companies like OpenAI and Google.
Developed by RedNote’s Humane Intelligence Lab, dots.llm1 activates 14 billion parameters out of a possible 142 billion when handling tasks, a design choice aimed at maintaining strong performance while keeping computational costs down.
According to its listing on Hugging Face, the model was trained on 11.2 trillion high-quality tokens (excluding synthetic data), achieving performance on par with Alibaba’s Qwen2.5-72B model. The release makes RedNote a notable player in China’s competitive AI space, leveraging its vast user base of over 300 million monthly active users.
A Strategic, Not Just Technical, Split
Industry experts say this is about more than just tech decisions. While Western companies pursue closed models focused on monetization and platform control, Chinese firms like RedNote are open-sourcing their AI to build influence, gain developer loyalty, and encourage local adoption.
“This isn’t just a licensing issue; it’s a deeper divergence in trust frameworks,” said Gogia.
“Chinese companies are positioning open-source LLMs as tools of geopolitical strategy, while Western firms prioritize shareholder returns and control,” he added.
China has a new player in the LLM race -- dots.llm1 from Rednote. No synthetic data, real open source license 👏.
Will this be the next star model🌟
— Priccc (@CuVyxe)
9:21 AM • Jun 9, 2025
Performance vs. Strategic Value
Dots.llm1 scored 56.7 on C-SimpleQA, a benchmark for Chinese language understanding, not as high as DeepSeek-V3’s 68.9 but still impressive for a debut model. Some analysts believe RedNote could gain more by focusing on specialized models that align with its e-commerce and user behavior data.
“RedNote is sitting on a rich dataset of user preferences and buying habits. Tailoring its AI toward commercial use could be more impactful,” noted Neil Shah, VP at Counterpoint Research.
Efficiency and the Economics Behind It
The model uses a “mixture of experts” architecture, activating only relevant parts for each task to save resources. But the benefits go beyond cost savings. As Gogia points out, “Dots.llm1 isn’t about generating direct revenue, it’s about accelerating ecosystem adoption.”
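The sparse-activation idea behind that efficiency is easy to illustrate. The toy sketch below routes each input vector through only its top-scoring experts out of ten, using random weights and illustrative sizes; it is a minimal, assumed version of mixture-of-experts gating, not dots.llm1's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS, TOP_K, DIM = 10, 2, 8   # illustrative sizes, not dots.llm1's real config

# Each "expert" is a small weight matrix; a router (gate) scores them per input.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(N_EXPERTS)]
gate_w = rng.standard_normal((DIM, N_EXPERTS))

def moe_forward(x):
    """Run x through only its top-k experts; the rest stay idle."""
    scores = x @ gate_w                           # router score per expert
    top = np.argsort(scores)[-TOP_K:]             # indices of the chosen experts
    w = np.exp(scores[top] - scores[top].max())   # softmax over the chosen few
    w /= w.sum()
    # Only TOP_K of N_EXPERTS matrices are multiplied here, which is how a
    # large model can hold far more parameters than it activates per token.
    return sum(wi * (experts[i] @ x) for wi, i in zip(w, top))

out = moe_forward(rng.standard_normal(DIM))
active_fraction = TOP_K / N_EXPERTS
print(f"output shape: {out.shape}, active experts: {active_fraction:.0%}")
```

At dots.llm1's reported scale, activating 14 billion of 142 billion parameters gives an effective ratio of about 10 percent: the same principle, much larger numbers.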
He emphasized that government subsidies, procurement policies, and regulatory exemptions allow Chinese firms to pursue such aggressive open-source strategies, something Western companies would struggle to justify financially.
Through these open models, China aims to extend its soft power globally, embedding its technology and values into software ecosystems across developing regions.
RedNote has already begun applying AI on its platform, notably with Diandian, an internal search engine that handles around 600 million daily queries. With the recent surge in U.S. signups, partly driven by TikTok regulatory issues, and the opening of a Hong Kong office, RedNote is expanding beyond China.
Balancing Transparency and Control
For enterprises evaluating open-source options, cost and performance are no longer the only metrics. The real question is whether transparency can replace trust, especially when models come from geopolitically sensitive regions.
“Open models allow for self-hosting, customization, and auditability,” Gogia explained. “But that shifts governance responsibility to the enterprise, which must now ensure AI integrity internally.”
He added that in industries like finance, healthcare, or defense, transparency doesn’t guarantee trust, especially when the AI's origin country introduces geopolitical risk.
A New Paradigm for Enterprise AI
RedNote’s open-source release signals more than technical innovation; it reflects a global shift in AI power dynamics. As high-performing models become freely available, companies must weigh the advantages of open-source flexibility against new governance burdens and international trust concerns.
Dots.llm1 is not just a model; it’s a marker of how AI strategy is evolving across borders.
The Gen Matrix Advantage
In a world drowning in data but starved for clarity, the second edition of Gen Matrix cuts through the clutter. We don’t just report trends; we analyze them through the lens of actionable intelligence.
Our platform equips you with:
Strategic foresight to anticipate market shifts
Competitive benchmarks to refine your approach
Network-building tools to forge game-changing partnerships
OpenAI Delays Open-Source AI, Promises Big Summer Upgrade

OpenAI is delaying the launch of its first open-weight model in years, CEO Sam Altman announced on X. Initially scheduled for a June release, the model is now expected “later this summer” as the company fine-tunes what Altman described as a significant breakthrough in performance.
“We’re taking a bit more time with our open-weights model,” Altman wrote. “Something unexpected and remarkable came out of our research, and although it needs extra work, we believe the wait will be well worth it.”
This postponement comes as competition in the open-source AI space intensifies. Rivals are rapidly releasing high-performance models, increasing the pressure on OpenAI to deliver a model that is not only transparent but also a leader in quality.
The upcoming model is anticipated to include advanced reasoning capabilities similar to OpenAI’s proprietary o-series models and aims to outshine competitors like DeepSeek’s R1.
Just days ago, French AI firm Mistral released a new line of reasoning-centric models called Magistral, while Alibaba’s Qwen team introduced hybrid models in April that switch between fast responses and more thoughtful, step-by-step reasoning, setting a new standard in AI benchmarks.
huge win for open source!
the newly updated DeepSeek R1 is now nearly on par with the openAI o3-high model on LiveCodeBench
— Haider. (@slow_developer)
9:15 PM • May 28, 2025
OpenAI has also hinted that its open model might eventually integrate with its cloud-based, proprietary systems to tackle more complex tasks. However, it remains uncertain whether such features will be available at launch.
Beyond technical aspects, the delay also reflects broader concerns about OpenAI’s image. Altman has previously acknowledged criticism over the company's shift away from open-sourcing its cutting-edge models.
In recent months, researchers and developers have increasingly called for greater transparency, arguing it is crucial for ensuring safety, reproducibility, and innovation in AI.

Why It Matters
For Leaders: Benchmark your AI strategy against the best.
For Founders: Find investors aligned with your vision.
For Builders: Get inspired by the individuals shaping AI’s future.
For Investors: Track high-potential opportunities before they go mainstream.
$40 Billion Mega-Deal: OpenAI In Talks With Reliance For Landmark Funding

OpenAI, the creator of ChatGPT, is reportedly in discussions with Reliance Industries, Saudi Arabia’s Public Investment Fund (PIF), and current investor MGX from the UAE to raise a massive $40 billion in funding.
According to The Information, each of these potential investors might contribute several hundred million dollars. The funding is aimed at supporting OpenAI’s ambitious infrastructure initiative, Stargate, and further advancing its AI model development. OpenAI has yet to comment publicly on these reports, though updates may follow.
Back in March, OpenAI had announced plans to raise $40 billion at a $300 billion post-money valuation, signaling its intent to strengthen its AI research and expand compute capabilities.
OpenAI is exploring a partnership with India's Reliance and has discussed cutting ChatGPT subscription price in the country by 75-85% - The Information
— Manish Singh (@refsrc)
5:21 PM • Mar 22, 2025
Earlier, OpenAI was also reported to be exploring a partnership with Reliance Industries to launch new AI ventures. This included potential collaboration with Jio, Reliance’s telecom arm, to distribute or sell OpenAI’s AI tools, including ChatGPT, in India.
One proposal involved Reliance using its cloud infrastructure to offer OpenAI’s models to enterprise clients via API access. However, Microsoft, OpenAI’s major partner, currently holds exclusive rights to resell these models to businesses through its API, which could complicate the deal.
Recently, OpenAI also introduced data residency support in India, Japan, South Korea, and other Asian countries. This feature allows users to store data locally, helping Indian enterprises comply with national data sovereignty regulations while using OpenAI’s tools like the API, ChatGPT Enterprise, and ChatGPT Edu.
did we get the openai x reliance deal?
— Yash (@ekyashjha)
2:55 AM • May 16, 2025
In February, OpenAI launched its agentic AI feature “Operator” in several countries, including India, for Pro users. This AI assistant can perform interactive tasks like reading and navigating web pages by clicking, scrolling, and typing on behalf of users.
Meanwhile, just a day ago, ChatGPT experienced a global outage, affecting thousands of users in India and beyond.
Your opinion matters!
Hope you loved reading our newsletter as much as we had fun writing it.
Share your experience and feedback with us below, ‘cause we take your critique very seriously.
How did you like today's edition?
Thank you for reading
-Shen & Towards AGI team