The Marie Curie of Data? Healthark's New AI Brain for Enterprise Data
Meet Curie.
Here is what’s new in the AI world.
AI news: Healthark's New AI Brain for Enterprise Data
Hot Tea: Open-Source Models Take 30% of Global Usage
Open AI: Master Open Source and Orchestration
OpenAI: Enterprises Shift to Full AI Integration
Healthark Debuts "Curie," a Next-Generation AI Platform for Enterprise Data
Healthark has unveiled Curie, a next-generation enterprise platform that combines generative AI, automation, and governance into a single layer to streamline decision-making and automate complex workflows across global organizations.
Early results indicate that Curie can cut decision-making cycles in half and improve workforce and regulatory compliance by approximately 60%.
Additionally, its automation capabilities handle nearly 90% of repetitive workflows, allowing teams to focus on strategic initiatives.
Curie is structured around four specialized, AI-driven modules, each designed to tackle key operational challenges while working seamlessly together:
Curie Clinical: Streamlines end-to-end study operations for life sciences, accelerating documentation, site oversight, and compliance.
Curie People: Provides HR leaders with real-time workforce insights, engagement analytics, and intelligent tools to boost organizational performance.
Curie Finance: Enhances financial oversight through automated approvals, spend tracking, and audit-ready workflows.
Curie Insights: Connects internal systems with external web data, enabling instant research, pattern detection, and informed strategic decisions.
Curie represents a major shift in how organizations operate. Instead of working across disconnected systems, teams can now think, ask, and act through one unified AI layer.
Built for rapid adoption, Curie supports API-based integration, allowing organizations to connect existing tools, databases, and workflows without overhauling their IT infrastructure. It functions seamlessly over both legacy and modern cloud systems, creating a central intelligence hub without disrupting operations.
The platform prioritizes trust and compliance with built-in governance features, including automatic redaction of sensitive data (PHI/PII), continuous audit logs, role-based access control, and end-to-end compliance measures tailored for regulated industries.
Users can interact using natural language to receive accurate, actionable answers in seconds.
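Curie's internals are not public, but the pattern described above (redact sensitive data, enforce role-based access, log every query) is a common governance design. As a rough sketch only, with every name invented for illustration:

```python
import re

# Hypothetical sketch: none of these names come from Healthark's product.
# It illustrates the governed-query pattern described above: redact PII,
# enforce role-based access control, and keep a continuous audit trail.

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

ROLE_SCOPES = {
    "hr_lead": {"people"},
    "finance_analyst": {"finance"},
    "admin": {"people", "finance", "clinical", "insights"},
}

audit_log = []

def redact(text: str) -> str:
    """Replace anything matching a PII pattern with a placeholder."""
    for name, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{name.upper()} REDACTED]", text)
    return text

def ask(role: str, module: str, question: str) -> str:
    """Gate a natural-language question through audit, RBAC, and redaction."""
    audit_log.append((role, module, question))  # continuous audit trail
    if module not in ROLE_SCOPES.get(role, set()):
        return "ACCESS DENIED"
    # A real platform would route the redacted question to an LLM here.
    return f"[{module}] answering: {redact(question)}"

print(ask("hr_lead", "people", "Why did jane@corp.com 123-45-6789 churn?"))
print(ask("hr_lead", "finance", "Show Q3 spend"))
```

The key design choice is that redaction and access checks sit in front of the model, so the same guardrails apply regardless of which module or underlying system answers the question.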
With its blend of intelligence, automation, and compliance, Curie aims to redefine enterprise operations by delivering faster, clearer, and more reliable decision-making at scale.

30% of Global AI Usage Now Comes From Chinese Open-Source Models
Chinese-developed open-source AI models now account for nearly 30% of global AI model usage, according to a recent study by OpenRouter and venture capital firm Andreessen Horowitz.
This surge has been driven primarily by models from Alibaba’s Qwen family, DeepSeek’s V3, and Moonshot AI’s Kimi K2.
Despite the rapid rise of Chinese open-source models, proprietary Western models, such as OpenAI's GPT-4o and GPT-5, still dominate with a 70% global share.
The report, based on an analysis of 100 trillion tokens (units of data processed by AI), shows Chinese open-source models started from just 1.2% of global usage in late 2024 but grew rapidly to nearly 30% by late 2025.

On average this year, Chinese models accounted for 13% of weekly token volume, nearly matching the 13.7% from the rest of the world’s open-source models combined.
“China has emerged as a major force, not only through domestic consumption but also by producing globally competitive models,” the report stated, highlighting the country’s emergence as a peer to the United States in AI development despite U.S. restrictions on China’s access to advanced AI chips.
The growth is attributed to the rapid release cycles and strong performance of models like Qwen and DeepSeek, which have helped developers handle increasing workloads more efficiently and cost-effectively.
As a result of this momentum, Chinese has become the second most used language for AI prompts globally, accounting for nearly 5% of all requests, far exceeding the language's roughly 1.1% share of overall internet content.

In terms of total LLM token share by country, China ranks fourth behind the U.S., Singapore, and Germany.
The open-source landscape has also diversified: from a market led almost exclusively by DeepSeek in late 2024, it has evolved into a competitive field with multiple Chinese models, where no single open-source model now holds more than a 25% share.
This data reinforces China’s growing influence in the global AI ecosystem, driven by open-source innovation that balances quality, efficiency, and accessibility.
The 2026 AI Playbook: Open Source for Innovation, Orchestration for Control
What if you could truly see, shape, and control the AI that runs your business, not just use it? That's the foundational promise of open source, the same promise that transformed the cloud.
Now, AI stands at a similar crossroads. Will it remain a closed system controlled by a few, or will it become open, inspectable, and shaped by a community you can trust?
On the surface, the open-source playbook seems straightforward: release the models and let innovation flourish. But AI is not just code. It learns, judges, and acts, which means its behavior can’t be controlled like traditional software.
For you, this means true transparency must go beyond the model to understanding how it evolves, where its decisions come from, and how you can audit and orchestrate it at scale.
What "Open" Really Means for Your Enterprise
Policymakers and developers are grappling with this definition right now. While initiatives like the White House's AI Action Plan encourage open environments, and companies like OpenAI release "open-weights" models, this is just the beginning.
As Red Hat’s CEO Matt Hicks points out, if you can’t probe, rerun, or modify the AI systems you rely on, you aren’t truly in control.
For you, openness shouldn't stop at model weights. You need an entire open ecosystem, including the tools, platforms, and inference servers, to prevent vendor lock-in and ensure genuine transparency.
Demand more than just open weights. Look for standardized, open datasets and a complete, open-source AI stack that gives you the choice and control you had with traditional open-source software.
Why Orchestration is Your Non-Negotiable
As you adopt AI that can reason and act autonomously, the challenge shifts from seeing what it does to coordinating how it behaves. This is where orchestration becomes indispensable.
It helps you decide which model handles which task, sequences actions, manages resources, and ensures human oversight where judgment is needed. Without it, you risk a tangled mess of disconnected automation.
Nowhere is this pressure more acute than in cybersecurity. Attackers can move from initial compromise to spreading through your network in under a minute. Your defense can't rely on slow, monolithic AI models.
A Blueprint from the Frontlines: The Agentic SOC
Consider CrowdStrike’s response: an 'agentic Security Operations Center (SOC).' This isn't just automation; it's a coordinated fleet of specialized AI agents, each focused on a distinct task (like threat hunting or detection triage), working in concert under your human-defined guardrails.
At the center is Charlotte AI, a security assistant that matches human conclusions with 98% accuracy and saves about 40 hours of manual work per week.
The key is Charlotte Agentic SOAR, the orchestration layer that lets you chain these agents together into dynamic, multi-step investigations and responses.
Here's how this works for you:
A detection agent finds a threat.
It triggers a triage agent to assess the risk.
That agent can then call a remediation agent to act.
Charlotte AI supervises the entire process, provides context, and escalates to your human analysts only when necessary.
Instead of following rigid, pre-defined scripts, these agents dynamically reason through unpredictable conditions. You define the mission; the agents collaborate to determine the next best action. This gives you back your most valuable resource: time.
Openness + Orchestration
The philosophies of open-source advocates and security leaders converge on a critical point for your strategy.
The future is a hybrid model:
Open, Inspectable Systems: To see, trust, and modify the AI you use.
Governed, Agentic Orchestration: To safely coordinate and control AI at machine speed and scale.
Openness without orchestration gives you visibility into systems you still can’t manage. Orchestration without openness creates powerful but unaccountable black boxes. You need both.
What do you need to do?
Look at AI as an extension of your hybrid cloud. Demand the same choice and control.
Prioritize data quality and governance. Your agents are only as good as the data they use and the rules they follow.
Embrace a culture of data-driven, risk-managed decision-making. Guide how your teams interpret and act on machine outputs.
The real race in AI is no longer just about who has the biggest model. It's about who can build systems that are trustworthy, defensible, and under your control, open by design and orchestrated in practice. This is how you will secure and scale AI in your enterprise for 2026 and beyond.

OpenAI's New Enterprise Reality: Goodbye Pilots, Hello Full Integration
Enterprise AI is no longer a pilot project; it’s now powering daily operations with deep workflow integration.
According to OpenAI, organizations are moving beyond simple text generation to assigning complex, multi-step tasks to AI, signaling a fundamental shift in how businesses deploy generative models.

With OpenAI’s platform now serving over 800 million weekly users, consumer familiarity is fueling professional adoption. Over a million business customers use these tools, and usage is shifting toward even deeper integration.
However, a growing divide is emerging: while "frontier" adopters achieve significant gains, the median enterprise risks falling behind if usage remains superficial.
From Basic Queries to Deep Reasoning
The true measure of AI maturity isn't user count, but task complexity. While ChatGPT message volume grew eightfold year-over-year, a more telling metric is the 320x increase in API reasoning tokens per organization.
This surge indicates that companies are systematically integrating advanced models to handle logic and workflow automation, not just basic queries.
Customization is now a prerequisite for professional use. Weekly users of Custom GPTs and Projects, tools that allow models to be instructed with institutional knowledge, increased roughly 19x this year.
About 20% of all enterprise messages now flow through these tailored environments.

Measurable Productivity Gains
On average, users report saving 40–60 minutes per active day with AI, with savings even higher for data science, engineering, and communication roles (60–80 minutes daily). The impact spans departments:
87% of IT workers report faster issue resolution.
75% of HR professionals see improved employee engagement.
AI is also blurring role boundaries. Outside of traditional technical roles, coding-related queries from non-technical teams have grown by 36% in the past six months, enabling analysis that once required specialized developers.

The Widening Competence Gap
A clear split is forming between organizations that merely provide AI access and those deeply embedding it into operations. The "frontier" class, workers in the 95th percentile of adoption, generate six times more messages than the median user.
At the company level, leading firms produce twice as many messages per seat and seven times more messages to custom GPTs than the median enterprise.
Usage depth directly correlates with value. Users who engage AI across seven distinct task types save five times more time than those limiting use to three or four basic functions. A "light touch" deployment is unlikely to deliver the expected ROI.
Global Adoption Accelerates
While tech, finance, and professional services remain early leaders, other sectors are catching up rapidly:
Technology: 11x year-over-year growth
Healthcare: 8x growth
Manufacturing: 7x growth
Adoption is also globalizing. Business customer growth exceeds 140% year-over-year in markets like Australia, Brazil, the Netherlands, and France. Japan now holds the largest number of corporate API customers outside the U.S.
Real-World Impact Across Industries
Retail (Lowe’s): An AI tool deployed to 1,700+ stores boosted customer satisfaction scores by 200 basis points and more than doubled online conversion rates when customers used the AI assistant.
Pharmaceuticals (Moderna): AI automated the extraction of key facts for Target Product Profiles, reducing a process that took weeks of cross-functional effort to just hours.
Finance (BBVA): An AI solution automated over 9,000 annual legal queries, freeing the equivalent of three full-time employees for higher-value work.
Integration Over Access
The primary barrier is no longer AI capability, but organizational readiness. Roughly one in four enterprises still hasn't connected their AI to secure company data, limiting models to generic knowledge instead of organizational context.
Success requires:
Executive sponsorship with clear mandates.
Codifying institutional knowledge into reusable AI assets.
Enabling deep system integrations that give AI access to real-time, relevant data.
As OpenAI’s data shows, enterprise AI success now depends on delegating complex workflows with deep integration, not just using AI for isolated tasks. The technology must be treated as a core engine for revenue growth and operational transformation.
Journey Towards AGI
A research and advisory firm guiding the industry and its partners to meaningful, high-ROI change on the journey to Artificial General Intelligence.
Know Your Inference: Maximising GenAI impact on performance and efficiency.
Model Context Protocol: Connect AI assistants to all enterprise data sources through a single interface.
Your opinion matters!
We hope you enjoyed reading this newsletter as much as we enjoyed writing it.
Share your experience and feedback with us below, because we take your critique seriously.
How was your experience?
Thank you for reading
-Shen & Towards AGI team