Ready to Make Real Money on AI?

How Dell Plans to Finally Monetize the AI Revolution

Here is what’s new in the AI world.

AI News: Dell's GenAI Profit Moment Has Arrived

Hot Tea: Why Your CFO Should Care About "Free" AI Models

Open Source: MCP for Ads: Google's Open-Source Move

OpenAI Rival: Jack Ma-Backed Ant Group Releases New Model


Dell Says It Can Finally Make Significant Money from Generative AI

While market researchers make predictions, public companies like Dell face real consequences for being wrong. At its recent securities analyst meeting, Dell moved from cautious optimism to placing massive bets on the AI infrastructure boom, but Wall Street remains concerned about how profitable these bets will ultimately be.

The Staggering Growth of AI Servers

Dell's AI server business has exploded, growing from $1.6 billion in fiscal 2024 to a projected $9.8 billion in fiscal 2025, a sixfold increase.

This surge was largely fueled by the improved availability of Nvidia GPUs. Looking ahead, Dell forecasts this segment to more than double to $20 billion in fiscal 2026.

In contrast, its traditional server business is growing at a mature, modest rate of around 4.5% annually. This divergence highlights a fundamental market shift: the AI server boom is happening on top of, not instead of, the steady demand for core enterprise compute.

The Profitability Puzzle

However, this explosive growth comes with a catch. Dell's operating profit margin for its infrastructure group is projected to dip to 11.2% in fiscal 2026. This pressure is a direct result of the large, competitive deals it has struck with AI pioneers like xAI and CoreWeave.

These deals are "dilutive to profits," meaning they have lower margins, even as they add substantial revenue.

This is the classic high-performance computing (HPC) trade-off: accept lower margins on large-scale hardware deals to establish market leadership, then make up the profit elsewhere.

The Long Game: Betting on the Enterprise Mainstream

Dell's strategy isn't to remain a low-margin supplier to AI giants. The company is playing a longer game, based on two key insights:

  1. AI Complexity Is Buying Time: The initial, sky-high expectations for AI have run into the reality of its technical complexity. This gives enterprises more time to develop their AI strategies, creating a future market for smaller, more customized deployments that are inherently more profitable for OEMs like Dell.

  2. The Enterprise On-Premises Shift Is Coming: Dell's own research points to a dramatic reversal: while 71% of AI workloads currently run in the cloud, the company projects that by 2026, 69% will run in corporate datacenters. Driven by data sovereignty, security, and latency concerns, enterprises will repatriate AI workloads to their own infrastructure once it becomes feasible.

A Server Upgrade Cycle Meets AI

Dell is positioning itself at the center of a perfect storm. Over 70% of its existing PowerEdge server installed base is three generations old or older. The latest generation of servers offers 4-5x more cores and over 2x better power efficiency.

This creates a compelling opportunity for enterprises to:

  • Consolidate: Replace a large number of old servers with a much smaller, more efficient fleet (see the rough arithmetic after this list).

  • Reallocate: Free up power, space, and budget to install new AI servers right in their own datacenters.
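To make the consolidation math concrete, here is a back-of-the-envelope sketch. The core-density and power-efficiency ratios follow the figures above; the fleet size and per-core wattage are illustrative assumptions, not Dell data.

```python
# Back-of-the-envelope server consolidation math (illustrative numbers only).
# Uses the ~4-5x core density and ~2x power efficiency figures cited above.

OLD_FLEET = 1000           # hypothetical count of 3+ generation-old servers
CORES_PER_OLD = 32         # assumed core count for an older server
CORES_PER_NEW = 144        # ~4.5x more cores per new-generation server
WATTS_PER_CORE_OLD = 10.0  # assumed power draw per core, old generation
WATTS_PER_CORE_NEW = 4.5   # ~2.2x better power efficiency

total_cores = OLD_FLEET * CORES_PER_OLD
new_fleet = -(-total_cores // CORES_PER_NEW)   # ceiling division

old_power_kw = total_cores * WATTS_PER_CORE_OLD / 1000
new_power_kw = new_fleet * CORES_PER_NEW * WATTS_PER_CORE_NEW / 1000

print(f"Servers: {OLD_FLEET} -> {new_fleet}")
print(f"Power:   {old_power_kw:.0f} kW -> {new_power_kw:.0f} kW "
      f"({old_power_kw - new_power_kw:.0f} kW freed for AI racks)")
```

Under these assumptions, roughly a thousand old boxes collapse into a few hundred new ones, freeing well over half the power budget for AI servers in the same datacenter.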

This upgrade cycle is the bridge to the on-premises AI future Dell is betting on.

Dell's current deals with AI hyperscalers are a strategic loss leader. The company is gaining the experience and scale needed to build rack-scale AI systems efficiently. The real prize is the pipeline of 6,700 unique companies that represent the future, more profitable, enterprise customer base.

These smaller-scale deployments will require more hand-holding, services, and financing, all of which carry higher margins. Dell forecasts that this strategy will allow its infrastructure business to grow 11-14% annually, enabling it to increase its dividend by at least 10% each year through 2030.

In essence, Dell is sacrificing high margins on a few giant deals today to build the capability and credibility to win thousands of profitable enterprise deals tomorrow.

Don't Let Open-Source AI Bankrupt You: A Guide to Managing Compute Costs

Open-source AI models like Meta's Llama and IBM's Granite promise a compelling alternative to proprietary systems: powerful, customizable AI without licensing fees. However, the "free" download is a mirage.

The true total cost of ownership (TCO) for open-source AI often far exceeds that of paid, managed services, catching many IT leaders by surprise.

The Illusion of "Free" and Where Costs Actually Hide

Adopting open-source AI simply shifts costs from licensing to other, often higher, areas.

The financial burden falls into three main categories:

  1. Massive Compute and Infrastructure: Training, fine-tuning, and running inference on large language models (LLMs) consume expensive GPU resources. A small model might cost $300/month to run, but a high-performance model for a real product can require a server cluster costing $30,000+ per month (a rough monthly cost model follows this list).

  2. Scarce and Expensive Talent: Organizations need specialized machine learning engineers to deploy, maintain, and optimize these systems—a talent pool that is costly and in high demand.

  3. The Strategic Cost of Delay: The time spent debating "build vs. buy" and struggling with implementation can allow more agile competitors to gain a significant market lead.
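To see how these categories compound, here is a minimal sketch of a monthly TCO comparison between self-hosting and a commercial API. The $30,000/month cluster matches the figure above; the salary, storage, and API pricing numbers are illustrative assumptions.

```python
# Minimal monthly TCO sketch: self-hosted open-source model vs. commercial API.
# The $30k/month GPU cluster matches the article; everything else is assumed.

def self_hosted_monthly(gpu_cluster=30_000, ml_engineers=2,
                        loaded_salary_monthly=20_000, storage_network=3_000):
    """Open-source path: infrastructure plus the specialist talent to run it."""
    return gpu_cluster + ml_engineers * loaded_salary_monthly + storage_network

def api_monthly(tokens_millions, price_per_million=10.0):
    """Commercial API path: pay-per-use, no fixed infrastructure or headcount."""
    return tokens_millions * price_per_million

fixed = self_hosted_monthly()            # $73,000/month regardless of usage
print(f"Self-hosted fixed cost: ${fixed:,}/month")

for volume in (100, 1_000, 10_000):      # monthly token volume, in millions
    print(f"{volume:>6}M tokens via API: ${api_monthly(volume):,.0f}/month")
# At these assumed prices, break-even arrives only around 7,300M tokens/month.
```

The point of the sketch: the self-hosted bill is largely fixed, so unless usage is very high and sustained, the pay-per-use API remains cheaper.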

Why Costs Spiral: The Production Reality

Costs explode when moving from a proof-of-concept to a production system. Initial infrastructure estimates are often "drastically insufficient."

A project that starts as a weekend experiment can quickly demand an enterprise-scale GPU cluster.

Real-world examples are telling:

  • One team found that bringing an AI capability in-house with an open-source model ended up costing roughly three times as much as their original API-based approach.

  • The "concurrency tax" is a major factor. Server costs are based on single requests, but production systems handle many at once. Serving just five concurrent streams can double the infrastructure costs.

A Strategic Framework for Decision-Making

To make the right choice, IT leaders should evaluate based on three pillars:

  • Control vs. Convenience: Does your use case involve highly regulated data or proprietary processes that justify the control of open-source? Or is it a general-purpose task better suited to a commercial API?

  • Total Cost of Ownership (TCO): For moderate and fluctuating usage, commercial APIs with predictable, per-use pricing are often more economical. Open-source requires large, upfront investments in infrastructure and talent.

  • Strategic Value: Is AI a core differentiator for your business that warrants heavy customization? Or is it an auxiliary function where speed and predictability are more important?

Source: Microsoft

A hybrid approach is often best: use commercial APIs for scale and reliability, and reserve open-source for strategic, highly customized applications.
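In practice, a hybrid policy can be as simple as a routing rule. The sketch below is illustrative only; the endpoint names and routing criteria are hypothetical placeholders for your own policy.

```python
# Minimal sketch of a hybrid routing policy: commercial API by default,
# self-hosted open-source model for regulated or proprietary workloads.
# Endpoint names and rules here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Request:
    task: str
    contains_regulated_data: bool
    needs_custom_model: bool

def route(req: Request) -> str:
    if req.contains_regulated_data or req.needs_custom_model:
        return "self-hosted-llama"   # control: data never leaves our infra
    return "commercial-api"          # convenience: managed scale, reliability

print(route(Request("summarize public docs", False, False)))   # commercial-api
print(route(Request("score loan applications", True, False)))  # self-hosted-llama
```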

How to Control Your AI Spending

Successfully managing AI costs requires disciplined practices:

  1. Forecast Rigorously: Model all costs (GPU, storage, networking, talent) before starting. Map technical needs to realistic business usage (a simple forecasting sketch follows this list).

  2. Optimize Infrastructure: Use cost-saving options like spot instances for training and smaller models where performance is sufficient.

  3. Implement Governance: Require business case reviews and approval for large-scale projects to prevent experiments from ballooning into massive bills.

  4. Monitor in Real-Time: Use cloud cost dashboards and monitoring tools to track spending by project and team, ensuring accountability.

  5. Prioritize Business Value: Tie every AI project to a clear ROI. Be prepared to cut projects that don't deliver measurable value within a set timeframe.
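Putting practices 1 and 5 together, here is a minimal forecast-and-guardrail sketch. Every figure and the ROI threshold are assumptions to be replaced with your own numbers.

```python
# Simple cost-forecast and ROI-guardrail sketch for an AI project.
# All dollar figures and the 1.5x threshold are illustrative assumptions.

def forecast_monthly(gpu=30_000, storage=2_000, networking=1_000, talent=40_000):
    """Practice 1: model ALL cost lines, not just the GPU bill."""
    return {"gpu": gpu, "storage": storage, "networking": networking,
            "talent": talent, "total": gpu + storage + networking + talent}

def roi_guardrail(monthly_cost, monthly_value, min_ratio=1.5):
    """Practice 5: flag projects whose value doesn't clear a minimum ROI bar."""
    ratio = monthly_value / monthly_cost
    return f"ROI {ratio:.2f}x -> {'keep' if ratio >= min_ratio else 'review/cut'}"

budget = forecast_monthly()
print(f"Forecast total: ${budget['total']:,}/month")
print(roi_guardrail(budget["total"], monthly_value=95_000))  # 1.30x -> review/cut
```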

Open-source AI is a powerful tool, but it is not a cost-saving shortcut. It is a strategic investment that only pays off when an organization has the specific use cases, scale, and in-house expertise to justify the substantial infrastructure and operational overhead.

For most, a pragmatic mix of commercial and open-source solutions will deliver the best balance of innovation, control, and cost-effectiveness.

Navigating this complex trade-off is exactly where DataManagement.AI provides critical value.

Our platform offers the unified intelligence and governance layer you need to make informed build-versus-buy decisions, and then successfully manage the entire lifecycle of whichever path you choose.

By providing a clear view of data infrastructure costs, performance, and lineage, DataManagement.AI ensures your AI initiatives, whether built on open-source or commercial APIs, are efficient, scalable, and deliver measurable ROI.

Open Source MCP Server for Ads Data Integration

Google has taken a significant step toward integrating AI with digital advertising by open-sourcing its Google Ads API Model Context Protocol (MCP) Server.

Now available on GitHub, this release allows developers to securely connect AI applications directly to Google Ads data using plain English.

Built on the open Model Context Protocol (MCP) standard, this server acts as a bridge, enabling large language models to pull data from Google Ads accounts for analysis and diagnostics.

For the first time, marketers and developers can use natural language queries to gain insights from their campaigns, streamlining reporting and analytics.
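For developers, connecting to any MCP server follows the standard client pattern from the official MCP Python SDK. The sketch below shows that generic pattern only: the launch command, tool name, and query arguments are hypothetical placeholders, not taken from Google's repository, so consult the GitHub README for the actual setup.

```python
# Generic MCP client pattern (official MCP Python SDK). The server command
# and tool name below are hypothetical; see the repo's README for specifics.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Hypothetical launch command for the Ads MCP server process.
    params = StdioServerParameters(command="python", args=["ads_mcp_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])
            # Tool name and GAQL query are illustrative, not from the repo.
            result = await session.call_tool(
                "run_gaql",
                arguments={"query": "SELECT campaign.name FROM campaign"})
            print(result)

asyncio.run(main())
```

In an AI application, a model would pick the tool and compose the query from the user's natural-language request; this is the "plain English to Ads data" bridge the release describes.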

Key Implications

  • Secure, Read-Only Access: The initial version is focused on analytics, allowing AI tools to securely retrieve data for diagnostics without making changes to live campaigns.

  • Future Potential: This foundation paves the way for future versions that could enable AI-driven campaign optimization and management directly through conversational interfaces.

  • Accelerated Innovation: By making the server open-source, Google is inviting the developer community to contribute, which will likely lead to faster innovation and more robust functionality than a closed, proprietary system could achieve.

This move highlights the practical future of AI in marketing, making powerful data analysis more accessible and integrated directly into workflows. It allows tools like Google Gemini to connect seamlessly with advertising data, promising greater efficiency for marketing teams.

This is a powerful example of the industry-wide shift towards composable AI systems, a trend that is core to our focus at Towards MCP. The Model Context Protocol (MCP) is the foundational standard that enables this exact type of seamless, secure integration between AI models and the tools and data they need.

As more platforms, like Google Ads, adopt MCP, it creates a universal language for AI to interact with the digital world, moving us beyond isolated tools towards a truly integrated and agentic future.

Want to understand how MCP will redefine your business operations? Explore the future of AI integration.

Jack Ma's Ant Group Throws Its Hat in the AI Ring with New OpenAI Rival

Chinese fintech giant Ant Group has intensified its competition in the global AI race with the release of Ling-1T, a new trillion-parameter large language model that it has made open-source.

The company, backed by Jack Ma and Alibaba, claims the model demonstrates high performance on multiple complex reasoning benchmarks. A key feature highlighted by Ant Group is its "constrained output token limits," which suggests a design focused on efficiency and controlled responses.

According to the company's statement, Ling-1T delivers "improved results across diverse use cases," excelling in areas like:

  • Code generation and software development

  • Solving competition-level mathematics problems

  • Logical reasoning

Chinese media reports indicate that the model has outperformed rivals, including DeepSeek's V3.1 Terminus and OpenAI's GPT-5, on several major coding benchmarks.

This release marks a significant step up for Ant Group, which first entered the AI arena in 2023 with a finance-focused LLM and last month released another trillion-parameter model called "Ring-1T-preview."

The move solidifies Ant Group's position among several Chinese tech giants aggressively expanding their AI portfolios to gain a stronger foothold in the rapidly growing industry.

In a parallel strategic push, the company is also ramping up its investments in China's domestic semiconductor industry, aligning with Beijing's broader initiative to source more AI processors locally.

Your opinion matters!

We hope you enjoyed reading this issue of the newsletter as much as we enjoyed writing it.

Share your experience and feedback with us below, because we take your critique seriously.

How's your experience?

Login or Subscribe to participate in polls.

Thank you for reading

-Shen & Towards AGI team