How CFOs Can Lead Cultural Acceptance of AI
When Employees Fear the Bot.
Here is what’s new in the AI world.
AI news: The CFO as Change Agent
Hot Tea: Your City's New Co-Pilot!
Open AI: All Hype, No Revenue?
Open AI: Tether's QVAC Genesis II Aims to Empower Developers
Meet the 100 Leaders Defining the Future of AI

Is Your AI ROI Stuck? Start by Fixing Employee Resistance
You, as a CFO, are at a pivotal moment. Facing a shrinking talent pipeline, rising strategic demands, and expanding expectations from the CEO and board, you are under a clear mandate to transform.
Generative AI (GenAI) presents itself not just as a technological tool but as your strategic enabler for agility, productivity, and long-term growth.
However, the success of your GenAI initiatives hinges on a critical factor beyond the technology itself: your team's trust in its implementation.
As finance teams rightfully wonder how AI will change their roles and daily work, you have an opportunity to lead with transparency, collaboration, and conviction.
The Changing Talent Equation: How GenAI Becomes Your Strategic Lever
In the U.S., while accounting degree enrollments are seeing a slight rise, attracting top talent to the finance profession remains a major challenge for you. Simultaneously, expectations of your role have expanded.
Today's CEOs expect you to be a strategic partner who brings data fluency and forward-looking insights, not just accurate reporting.
Historically, your teams have lacked the bandwidth and capabilities to fully step into this strategic driver role. GenAI can be your powerful accelerant.
According to Deloitte’s CFO Signals survey, 79% of CFOs plan to use GenAI in the next two years to overcome skills gaps.
By applying AI-driven solutions, you can augment core responsibilities, automate manual tasks, and free your team to focus on high-value analysis and strategic planning, positioning yourself as an influential strategy leader across the organization.
Building Confidence from the Inside Out: Addressing the Root of Resistance
Despite this potential, you are likely encountering employee resistance, which can stall adoption.
Nearly half of surveyed CFOs (48%) cite staff resistance as a top barrier. The good news is that this resistance is often rooted in uncertainty, not unwillingness.
Your leadership is key to shifting the narrative. You can build confidence by clearly communicating how AI is designed to complement, not replace, human work.

For your finance team, GenAI should mean less time on routine data entry and more time on strategic analysis and insight generation. More advanced applications can assist with modeling and forecasting, further elevating your function's value.

Why Your CIO is Your Essential Ally
You cannot navigate this transformation alone. To maximize momentum and build trust, you must partner closely with your CIO. Together, you can ensure your teams understand:
How GenAI will work in their daily workflows.
Why it is being implemented.
What support and training are in place to help them succeed.
Many leading organizations are already taking this collaborative approach: tech functions engage directly with frontline employees to understand their needs, coordinate cross-functional messaging, and prioritize clear, business-friendly communication about the value of new tools.
Lead Through Change, Not Around It
The road to successful AI integration starts with your leadership. By meeting employee concerns with clarity, empathy, and vision, you can unlock more than new capabilities; you can build trust, loyalty, and organizational momentum.
Your call to action is clear: work in lockstep with your CIO, engage your teams early and often, and transparently communicate the "why" behind the technology. Transform apprehension into adoption and skepticism into shared success.
The future of your finance function will be shaped not just by what GenAI can do, but by how you guide your people through the change.
The Next Urban Revolution: Advancing City Resilience with Agentic and Generative AI
The latest IDC FutureScape report highlights a clear trend: cities worldwide are rapidly adopting AI, driven by budget pressures and rising public needs.
Two predictions in particular signal a fundamental shift from viewing technology as a tool to treating it as a collaborative "teammate" or "personal intern." For city leaders, this represents more than automation; it's a reimagining of how government operates.
Prediction 1: Agentic AI as a Cross-System Orchestrator
By 2027, 65% of cities will deploy AI agents to orchestrate end-to-end workflows, reducing manual workloads while managing risks like "process debt": the inefficiency built up from fragmented systems and redundant tasks.
Unlike narrow AI tools, agentic AI can understand goals, coordinate across departments, and execute entire processes, from permit approvals to budget reconciliation.
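As a rough illustration of that orchestration idea, the sketch below chains hypothetical departmental checks into one end-to-end permit workflow. Every department name, rule, and field here is invented for illustration; no real city system works this way.

```python
# Illustrative only: a toy agentic workflow orchestrator.
# All departments, rules, and fields are hypothetical.

def check_zoning(application):
    """Pretend zoning check: approve anything of three stories or fewer."""
    return application["stories"] <= 3

def check_fire_safety(application):
    """Pretend fire-safety check: require a sprinkler plan."""
    return application["has_sprinkler_plan"]

def issue_permit(application):
    """Final step: issue the permit once all checks pass."""
    return {"permit_id": f"P-{application['id']}", "status": "approved"}

def permit_agent(application):
    """Coordinate the end-to-end workflow across 'departments',
    stopping at the first failing step."""
    for step_name, step in [("zoning", check_zoning),
                            ("fire_safety", check_fire_safety)]:
        if not step(application):
            return {"status": "rejected", "failed_step": step_name}
    return issue_permit(application)

result = permit_agent({"id": 42, "stories": 2, "has_sprinkler_plan": True})
```

The point of the sketch is the control flow, not the rules: a single agent owns the whole process, so no application stalls between departmental queues.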

But success requires groundwork: cities must first map workflows, clean data, and redesign broken processes before automation can deliver real value.
This shift will transform public sector work. Entry-level clerical roles may evolve, but new positions like "AI process managers" and data specialists will emerge.
The result? Faster decisions, lower service delivery costs, and staff freed to focus on strategic and empathetic tasks that require human judgment.
Prediction 2: Fine-Tuning LLMs on Protected Municipal Data
By 2026, 50% of state and local governments will invest in fine-tuning LLMs on their own protected, siloed data, unlocking decades of untapped records from zoning, traffic, health, and housing systems.
Most city data has never been used to train the AI models they rely on. By fine-tuning LLMs on internal records (with strict governance), cities can create AI systems that "speak government," understand local regulations, and generate insights grounded in real municipal context.
Imagine an AI trained on planning documents that can summarize zoning precedents or a model using social services data to predict households at risk of homelessness.
A Smarter, More Responsive City
These two trends are interconnected. Agentic AI depends on accessible, high-quality data, while data intelligence requires intelligent systems to act on insights.
Together, they form a "digital nervous system" for the future city, enabling AI agents to move seamlessly across departments, making decisions informed by the city's own history and conditions.
A Four-Step Action Plan for City Leaders
Modernize Your Data Foundation: Build secure, interoperable data platforms. Invest in metadata management, data lineage, and ethical AI governance to prepare data for fine-tuning and automation.
Pilot Strategically: Start with high-impact, measurable workflows like licensing or procurement. Use sandboxed environments to test AI agents safely before scaling.
Center People in the Transition: Partner with HR to redefine roles and build AI literacy across teams. Transparent communication is essential for maintaining public trust and employee morale.
Design for Accountability: Incorporate audit trails, explainable AI, and citizen feedback mechanisms. The legitimacy of AI-driven decisions will determine long-term success more than technical sophistication.
Early adopters like Boston, Singapore, and Barcelona are already demonstrating the potential, using AI to integrate policy, climate data, and citizen feedback.
Their example shows that when agentic AI and responsible data governance converge, cities can become more proactive orchestrators of well-being, equity, and sustainability.

For Smart City leaders planning 2026 strategies, the time to build readiness for this AI-driven transformation is now.
Is AI Inference Worth $160M? vLLM Tests Investor Faith With Pre-Revenue Raise
The open-source AI inference project vLLM, originally developed at UC Berkeley, is in advanced discussions to raise at least $160 million in a major funding round, according to sources familiar with the matter.
The proposed financing includes an initial $60 million, followed by a second tranche of at least $100 million, potentially valuing the startup at around $1 billion.
vLLM was founded in the UC Berkeley lab of Ion Stoica, co-founder of Databricks.

Despite minimal commercial traction, having raised only about $300,000 to date (including backing from Sequoia), the project has attracted strong investor interest on the strength of its widely adopted technology.
It currently lacks a formal website or clear revenue model beyond donations.
Why Investors Are Betting on vLLM
The appeal lies in vLLM’s open-source software, hosted on GitHub, which dramatically improves the efficiency of running large language models (LLMs). The technology optimizes GPU memory usage, allowing AI inference workloads to be processed on fewer servers.
Unlike cloud-only alternatives, vLLM enables organizations to run optimized inference on their own hardware and infrastructure. It has become one of the most starred AI projects on GitHub, signaling strong developer adoption.
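Much of vLLM's efficiency is attributed to managing the KV cache in fixed-size blocks, similar to virtual-memory paging, so sequences of different lengths share GPU memory without large contiguous reservations. The sketch below illustrates only the block-allocation idea; it is a simplified model, not vLLM's actual implementation.

```python
# Toy model of block-based KV-cache allocation (the idea behind
# vLLM's paged approach), not vLLM's real code.

BLOCK_SIZE = 16  # tokens stored per cache block (illustrative value)

class BlockAllocator:
    def __init__(self, num_blocks):
        # All blocks start free; a real system would size this to GPU memory.
        self.free_blocks = list(range(num_blocks))

    def allocate_for(self, num_tokens):
        """Return block IDs covering num_tokens; blocks need not be contiguous."""
        needed = -(-num_tokens // BLOCK_SIZE)  # ceiling division
        if needed > len(self.free_blocks):
            raise MemoryError("out of cache blocks")
        blocks = self.free_blocks[:needed]
        self.free_blocks = self.free_blocks[needed:]
        return blocks

    def free(self, blocks):
        """Return a finished sequence's blocks to the free pool."""
        self.free_blocks.extend(blocks)

alloc = BlockAllocator(num_blocks=8)
seq_a = alloc.allocate_for(40)  # 40 tokens -> 3 blocks
seq_b = alloc.allocate_for(10)  # 10 tokens -> 1 block
alloc.free(seq_a)               # finished sequence releases its blocks
```

Because each sequence only holds whole blocks it actually needs, memory fragmentation and over-reservation drop sharply, which is what lets inference workloads fit on fewer servers.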
This investment trend reflects a broader shift in AI economics. As spending moves from model training to model inference (running models in production), inference costs are becoming a major financial burden for AI companies.
For example, OpenAI reportedly spends over 25% of its Sora revenue on inference costs, a situation Sora head Bill Peebles called “completely unsustainable.”
The Growing Inference Market and Open-Source Precedent
Investors are increasingly funding inference-focused startups like Fireworks, Baseten, and Fal, seeing parallels with successful open-source companies such as Red Hat, GitLab, and MongoDB.
Red Hat, acquired by IBM for $34 billion in 2019 after generating $3.4 billion in revenue, demonstrated how open-source software can be commercialized through enterprise services and support.
Dylan Patel, founder of SemiAnalysis, noted the scale of the opportunity: “There’s going to be hundreds of billions of dollars spent on inference... I think it’s possible for vLLM to do the same.”
The proposed funding round signals a significant bet on open infrastructure as a solution to rising AI operational costs, positioning vLLM to potentially become a foundational layer in the AI inference stack.
Tether Goes All-In on Open AI with Major New Training Dataset
Tether Data's AI research division, QVAC, has released QVAC Genesis II, significantly expanding the largest publicly available synthetic educational dataset for AI training. The new release adds 107 billion tokens to its predecessor, bringing the total to 148 billion tokens across 19 academic domains.
A Major Boost for Open AI Research
This release marks a substantial increase in open, structured training material available to researchers worldwide, at a time when many advanced datasets remain proprietary.
Genesis II is specifically designed for AI pre-training, with a focus on improving reasoning, explanation, and decision-making rather than surface-level language fluency.
Building on Genesis I with Enhanced Methodology
Genesis II builds upon the foundation of Genesis I, which covered core STEM subjects. The new version expands into ten additional fields, including:
Chemistry, Computer Science, Statistics, and Machine Learning
Astronomy, Geography, Econometrics, and Electrical Engineering
A key innovation is the "Option-Level Reasoning" data generation method.
Unlike approaches that treat a correct answer as the final output, this technique deconstructs every multiple-choice option, explaining why correct answers are right and analyzing the misconceptions behind incorrect ones.
This teaches AI models causal reasoning and decision logic.
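As a rough illustration, a record under such a scheme might pair every option with its own rationale rather than labeling only the correct answer. The field names and the example question below are hypothetical, not Genesis II's actual schema.

```python
# Hypothetical shape of an "Option-Level Reasoning" training record;
# field names and content are illustrative, not the dataset's real schema.

record = {
    "question": "Which gas listed below contributes most to the "
                "greenhouse effect?",
    "options": {
        "A": {"text": "Carbon dioxide",
              "correct": True,
              "rationale": "CO2 absorbs infrared radiation and is the "
                           "dominant long-lived greenhouse gas."},
        "B": {"text": "Nitrogen",
              "correct": False,
              "rationale": "N2 is transparent to infrared; confusing "
                           "atmospheric abundance with radiative effect "
                           "is the underlying misconception."},
        "C": {"text": "Oxygen",
              "correct": False,
              "rationale": "O2 likewise does not absorb infrared "
                           "radiation efficiently."},
    },
}

# Every option carries a rationale, so a model trained on such records
# sees why wrong answers are wrong, not just which answer is right.
correct = [key for key, opt in record["options"].items() if opt["correct"]]
```

The design choice is the point: by attaching an explanation to each distractor, the data encodes the misconception behind every wrong choice instead of discarding it.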

Focus on Understanding, Not Just Fluency
The dataset is structured to move beyond training models to simply predict text sequences. Instead, it aims to teach AI systems to demonstrate a genuine understanding of underlying concepts, prioritizing clarity and logical consistency.
This aligns with growing research emphasis on building reliable, explainable AI for education, science, and decision-support applications.
Open Access and Support for Decentralized Development
QVAC Genesis II is released under a Creative Commons Attribution–NonCommercial 4.0 license, available on Hugging Face alongside detailed technical documentation. This open access is intended to lower barriers for academic and independent researchers who lack access to large proprietary datasets.
The release supports decentralized AI development, enabling experimentation and local training even with limited compute resources, and reducing reliance on dominant, centralized AI platforms.
Tether's Strategic Expansion into AI Research
While Tether is best known for its role in digital assets and stablecoins, its investment in QVAC represents a strategic expansion into AI and data research.
The Genesis project positions the company within the growing intersection of fintech and advanced AI, focusing on infrastructure that supports open, transparent development.
Implications and Future Directions
The scale and quality of Genesis II may influence how researchers approach model training, supporting work on smaller, more efficient models and explainable AI. It also serves as a benchmark for future synthetic data projects that prioritize reasoning quality over sheer volume.
By providing a high-quality, openly accessible educational dataset, QVAC aims to foster a more distributed and transparent AI research ecosystem, reinforcing the belief that reliable AI must be grounded in structured reasoning and accessible foundations.
Journey Towards AGI
Research and advisory firm guiding industry and their partners to meaningful, high-ROI change on the journey to Artificial General Intelligence.
Know Your Inference: Maximising GenAI impact on performance and efficiency.
Model Context Protocol: Connect AI assistants to all enterprise data sources through a single interface.
Your opinion matters!
We hope you enjoyed reading this newsletter as much as we enjoyed writing it.
Share your experience and feedback with us below; we take your critique seriously.
How was your experience?
Thank you for reading
-Shen & Towards AGI team