- "Towards AGI"
- Posts
- Survey Finds 99% of Leaders Recognize GenAI's Importance in Achieving Success
Survey Finds 99% of Leaders Recognize GenAI's Importance in Achieving Success
This heightened focus on AI adoption reflects the urgency for businesses to embrace generative AI to stay competitive.
A thought leadership platform to help the world navigate towards Artificial General Intelligence. We are committed to navigating the path towards Artificial General Intelligence (AGI) by building a community of innovators, thinkers, and AI enthusiasts.
Whether you're passionate about machine learning, neural networks, or the ethics surrounding GenAI, our platform offers cutting-edge insights, resources, and collaborations on everything AI.
What to expect from Towards AGI:
Know Your Inference (KYI): Ensuring your AI-generated insights are accurate, ethical, and fit for purpose.
Open vs Closed AI: Expert analysis to help you navigate the open-source vs closed-source debate.
GenAI Maturity Assessment: Evaluate and audit your AI capabilities.
Expert Insights & Articles: Stay informed with deep dives into the latest AI advancements.
But that’s not all!
We are training specialised AI Analyst Agents that CxOs can interact with to seek insights and answers to their most pressing questions. No more waiting weeks for a Gartner analyst appointment; you’ll be just a prompt away from the insights you need to make critical business decisions. Watch this space!
Visit us at https://www.towardsagi.ai to be part of the future of AI. Let’s build the next wave of AI innovations together!
TheGen.AI News
Survey Finds 99% of Leaders Recognize GenAI's Importance in Achieving Success

A recent report reveals that an impressive 99% of C-suite leaders in India view generative AI as a crucial element, signaling a significant shift towards AI-driven transformation within the country’s corporate sector. The findings, based on a survey of over 300 C-suite executives by enterprise software giant Salesforce, show that 60% of leaders from large companies already have a defined generative AI strategy, while 32% are in the process of developing one.
This heightened focus on AI adoption reflects the urgency for businesses to embrace generative AI to stay competitive. According to the report, business leaders prioritize generative AI for meeting customer expectations with faster, more personalized experiences (56%), boosting productivity and efficiency (55%), and meeting the rising demand for AI tools among employees (49%). However, they also face significant challenges, such as issues with accessibility and inclusivity (38%), concerns over inaccurate outputs (34%), incomplete customer and company data (32%), and a lack of governance (30%).
“In India’s business environment, the pressure on leaders to swiftly and effectively implement generative AI has never been greater,” said Arun Parameswaran, Managing Director - Sales at Salesforce India. “Our mission is to guide businesses through the path of responsible AI innovation, helping them fully realize the potential of this technology. By driving productivity and accelerating growth, we aim to empower Indian businesses to meet and surpass the evolving expectations of their customers and workforce,” he added. Additionally, all surveyed C-suite leaders expressed confidence in delegating at least one task to AI without human oversight within the next three years, indicating a growing trust in AI’s capabilities.
Generative AI to Fuel 5% of WNS Analytics Revenue in FY25, with Growth on the Horizon

As companies rush to embrace AI, a pressing concern looms: Will the drive for AI-fueled efficiency result in job losses, even as profits increase? While businesses see AI as a catalyst for growth, employees are anxious about whether automation could replace their roles.
In an interview with AIM, Gautam Singh, head of WNS Analytics, shared that they anticipate revenue growth driven by generative AI. “We expect about 5% of our revenue in the fiscal year 2025 to be influenced by generative AI. For example, we’ve achieved efficiency gains of 30-40% through analytics, AI, and automation,” he noted.
For fiscal 2025, WNS projects revenue (excluding repair payments) between $1,293 million and $1,357 million, with adjusted net income (ANI) ranging from $206 million to $218 million.
“Currently, around $200 million of our $1.3-1.4 billion revenue is derived from analytics, and this share is steadily increasing,” Singh added.
WNS Analytics, the data, analytics, and AI division of WNS, offers industry-specific, productized services tailored to meet clients' needs. These solutions are delivered through a combination of proprietary AI assets and subject matter expertise, bolstered by the company’s AI labs and strategic alliances.
Singh highlighted WNS Analytics’ adoption of outcome-based models, where compensation is directly tied to the value or revenue generated for clients. These models are increasingly appealing to clients, as they reduce risk and ensure payment only when results are delivered.
A Focus on Upskilling
As WNS Analytics expects a surge in revenue influenced by AI, it recognizes the importance of upskilling its workforce to fully leverage these advancements. “We’ve partnered with training providers to educate our employees on generative AI. So far, over 16,000 of our nearly 63,000 employees have completed this training,” Singh mentioned.
The company also runs the Q-Riosity Project, an AI learning initiative featuring sessions led by experts from Harvard University and Nanyang Business School. This ongoing training effort, especially targeted at their analytics teams but extended across the company, aims to empower all employees to utilize generative AI tools.
For instance, every employee’s computer is equipped with Copilot, and they are being trained on how to use it to enhance productivity.
Data: The Foundation of AI
A key to successful AI-driven automation is effective data use.
“While automation improves efficiency, the true impact comes from leveraging analytics for better outcomes. Automation sets the baseline, but deeper insights from analytics are needed to achieve exceptional results,” Singh explained.
Data plays a crucial role in making this possible. AI and data are tightly connected, yet data often sits in isolated silos within the legacy systems many clients have relied on for years.
New data sources, such as unstructured, image, audio, and social media data, can complement the traditional, structured data from legacy systems.
“The strength of technologies like AI and generative AI lies in their ability to combine and process diverse data to generate deeper insights,” Singh noted.
Singh also advised that instead of starting with large-scale data consolidation efforts like data lakes, companies should first focus on specific business use cases. “My recommendation is to start with a ‘data pond,’ a smaller, targeted dataset tailored to the business need, and expand from there,” he suggested.
AI Surge to Demand Upskilling for 80% of Engineers by 2027

A recent Gartner report highlights the significant impact generative AI (GenAI) is expected to have on the software engineering field, predicting that up to 80% of the engineering workforce will need to upskill by 2027. This transformation will bring about new roles and redefine existing ones as companies increasingly integrate AI into their software development processes.
Philip Walsh, a senior principal analyst at Gartner, discussed the changing role of engineers, saying, “Bold claims about AI’s capabilities have sparked speculation that it might reduce the need for human engineers or even replace them. However, while AI will reshape the future of software engineering, human expertise and creativity will remain vital for creating complex and innovative software solutions.”
Gartner’s analysis outlines three phases of AI's influence on software engineering:
Short Term: Initially, AI will support developers by enhancing existing workflows, leading to incremental productivity improvements. These gains will be most beneficial to senior developers in companies with mature engineering practices.
Medium Term: As AI agents become more advanced, they will take on more tasks, ushering in a shift to AI-native software engineering. During this phase, AI will generate the majority of code, requiring developers to adopt an "AI-first" mindset. Walsh highlighted that skills in natural language prompt engineering and retrieval-augmented generation (RAG) will become essential.
Long Term: Looking further ahead, the need for professionals with a unique blend of software engineering, data science, and AI/machine learning expertise will grow, leading to a new role known as the AI engineer. Walsh emphasized, “Creating AI-driven software will require a new type of software professional: the AI engineer.”
A Gartner survey conducted in late 2023 found that 56% of software engineering leaders view AI/machine learning engineers as the most in-demand roles for 2024, pointing to a substantial skills gap in applying AI/ML to software applications.
To build effective AI capabilities, organizations must invest in AI developer platforms, which are crucial for scaling AI integration into enterprise solutions. Walsh explained, “Organizations will need to upskill their data engineering and platform engineering teams to adopt the tools and processes that enable continuous integration and development of AI models.” This strategic investment will be essential for companies looking to succeed in the evolving AI-driven landscape.
TheOpensource.AI News
How Uber Blends Open Source and In-House Innovation for LLM Optimization

Generative AI, powered by Large Language Models (LLMs), is utilized across various applications at Uber, such as personalized recommendations for Uber Eats, search functionality, customer support chatbots, code generation, and SQL query creation.
To power these applications, Uber employs a mix of open-source models like Meta's Llama 2 and Mistral AI's Mixtral, as well as closed-source models from OpenAI, Google, and other providers. As a leader in mobility and delivery, Uber also leverages its extensive domain-specific expertise to optimize LLM performance. One method used to incorporate this expertise is Retrieval-Augmented Generation (RAG).
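For readers unfamiliar with the pattern, the sketch below shows RAG in its simplest form: embed a query, retrieve the most relevant domain documents, and prepend them to the prompt. This is a generic illustration, not Uber's implementation; the embedding model, corpus, and helper function are all assumptions.

```python
# Generic RAG sketch: retrieve domain documents, then ground the LLM prompt
# in them. Illustrative only -- not Uber's implementation; the model name and
# corpus are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding model

corpus = [
    "Uber Eats lists 'Margherita Pizza' under Italian cuisine.",
    "Refunds for cancelled orders are processed within 3-5 days.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k corpus documents most similar to the query."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # normalized vectors -> dot product == cosine similarity
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

query = "How long do refunds take?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then go to an open- or closed-source LLM
```

In production, the corpus would be a vector index over domain data such as Uber Eats menus, and the assembled prompt would be sent to one of the models above.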
Uber is also exploring ways to customize LLMs to its knowledge base by employing continuous pre-training and instruction fine-tuning. For instance, a model fine-tuned with Uber-specific data, like information on Uber Eats items, dishes, and restaurants, has improved accuracy in item tagging, search queries, and understanding user preferences compared to general open-source models. These tailored models have achieved comparable performance to GPT-4 models while supporting greater traffic at Uber’s scale.
Uber’s efforts are bolstered by support from the AI community and open-source tools, such as the Hugging Face transformers library, Microsoft DeepSpeed, and PyTorch FSDP, which allow for rapid development and efficient LLM training. Additionally, emerging open-source solutions like Meta's llama-recipes for Llama 3, Microsoft's LoRA, QLoRA, and Hugging Face PEFT simplify the fine-tuning process and reduce engineering effort. Tools like Ray and vLLM help maximize the efficiency of large-scale pre-training, fine-tuning, offline batch prediction, and online model serving for open-source LLMs.
Uber’s unique approach to in-house LLM training ensures flexibility and speed in developing Generative AI services. By leveraging state-of-the-art open-source models, Uber is able to conduct faster, more cost-effective, secure, and scalable experimentation. This optimized in-house LLM training helps Uber stay at the forefront of technological advancements, ultimately benefiting Uber’s users.
Infrastructure Stack
Uber's LLM training relies on a well-tested infrastructure stack that supports rapid experimentation.
Layer 0: Hardware
Uber’s LLM workflows run on two types of compute instances: (1) NVIDIA A100 GPU instances in Uber’s on-premises clusters, and (2) NVIDIA H100 GPU instances on Google Cloud. Uber’s on-prem A100 machines are equipped with 4 A100 GPUs, 600 GB of memory, and 3 TB of SSD storage. The Google Cloud machines are equipped with 8 H100 GPUs, 1,872 GB of CPU memory, and 6 TB of SSD storage, all managed under the Crane infrastructure stack.
Layer 1: Orchestration
Computing resources are managed using Kubernetes for workload scheduling and hardware management, along with Ray and the KubeRay operator to distribute workloads among workers.
Layer 2: Federation
A federation layer manages multiple Kubernetes clusters, scheduling tasks based on resource availability. Training jobs are structured as a series of tasks defined in the JobSpec, which outlines resource needs, including instance types, compute and storage requirements, and setup commands.
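The article does not publish the JobSpec schema, but based on the fields it names (instance types, compute and storage requirements, setup commands), a hypothetical spec might look like the following sketch; every field name here is an assumption, not Uber's actual format.

```python
# Hypothetical JobSpec for the federation layer. All field names are
# assumptions inferred from the resource needs the article lists.
job_spec = {
    "name": "llama2-finetune",
    "tasks": [
        {
            "instance_type": "a100-onprem",  # or "h100-gcp"
            "resources": {"gpus": 4, "cpu_memory_gb": 600, "ssd_tb": 3},
            "setup_commands": [
                "pip install torch ray transformers deepspeed",
            ],
            "entrypoint": "python train.py --config finetune.yaml",
        }
    ],
}
```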
Training Stack
Uber’s LLM training stack is built with open-source tools, including PyTorch, Ray, Hugging Face, DeepSpeed, and NCCL, integrated into the Michelangelo platform.
PyTorch is the chosen deep learning framework, given its wide adoption for state-of-the-art open-source LLMs.
Ray Train provides an API for distributed training with PyTorch on Ray clusters.
Hugging Face Transformers offers APIs for downloading and training advanced transformer models.
DeepSpeed optimizes training and inference for deep learning, allowing for improved scale and speed.
Distributed Training Pipeline
Uber developed a distributed training pipeline for LLMs, handling host communication, data preparation, distributed model training, and checkpoint management (a minimal sketch follows this list):
Multi-host and multi-GPU communication: A TorchTrainer in Ray Train sets up Ray Actors, manages communication with the Ray Object Store, and initializes a distributed process group with DeepSpeed across GPUs.
Data preparation: The training framework accesses remote data sources from Uber’s HDFS, Terrablob, and public datasets on Hugging Face.
Model training: The pipeline uses tokenization to convert text into integers for model input. During distributed training, Hugging Face Transformers Trainer objects are initialized on each GPU using DeepSpeed’s ZeRO stage options.
Saving results: Training metrics are saved to Uber’s Comet server, while model weights and configurations are stored in Terrablob.
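Taken together, the pipeline above can be approximated with public APIs: Ray Train launches one worker per GPU, and each worker tokenizes data and runs a Hugging Face Trainer under DeepSpeed. The sketch below is a minimal approximation under those assumptions; the model name, dataset, batch sizes, and ZeRO stage are illustrative, not Uber's settings, and the Comet and Terrablob integrations are omitted.

```python
# Minimal pipeline sketch with public APIs: Ray Train launches one worker per
# GPU; each worker tokenizes text and runs a Hugging Face Trainer configured
# with DeepSpeed ZeRO. Model, dataset, and hyperparameters are illustrative.
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

def train_loop_per_worker(config):
    tokenizer = AutoTokenizer.from_pretrained(config["model"])
    tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship no pad token
    model = AutoModelForCausalLM.from_pretrained(config["model"])

    # Tokenization: convert raw text into integer token IDs for model input.
    dataset = load_dataset("text", data_files=config["data"])["train"]
    dataset = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True,
        remove_columns=["text"],
    )

    args = TrainingArguments(
        output_dir="/tmp/ckpts",
        per_device_train_batch_size=4,
        # DeepSpeed ZeRO stage option, applied on each GPU as in the article.
        deepspeed={"zero_optimization": {"stage": 2},
                   "train_micro_batch_size_per_gpu": 4},
    )
    Trainer(
        model=model,
        args=args,
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()

# One Ray worker per GPU; Ray wires up the distributed process group.
trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"model": "meta-llama/Llama-2-7b-hf", "data": "train.txt"},
    scaling_config=ScalingConfig(num_workers=8, use_gpu=True),
)
trainer.fit()
```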
Training Results and Optimization
Uber demonstrated that its Michelangelo platform can train large open-source LLMs at scale, including full-parameter and parameter-efficient fine-tuning using LoRA and QLoRA. Results showed that LoRA and QLoRA reduce GPU usage and speed up training, though the reduction in training loss was lower compared to full-parameter fine-tuning. Thus, optimizing throughput and Model Flops Utilization (MFU) is crucial for improving the performance of full-parameter tuning.
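For the parameter-efficient path, a minimal LoRA setup with Hugging Face PEFT looks roughly like the sketch below; the rank, scaling factor, and target modules are illustrative defaults rather than Uber's configuration (QLoRA would additionally load the base model quantized to 4-bit).

```python
# Minimal LoRA fine-tuning setup with Hugging Face PEFT. Hyperparameters are
# illustrative, not Uber's; QLoRA additionally loads the base model quantized.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
lora_cfg = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # tiny fraction of the full model -> less GPU memory
```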
To enhance training throughput, Uber explored optimizations like CPU offload and flash attention (see the sketch after this list):
DeepSpeed ZeRO-stage-3 CPU Offload: This reduced GPU memory usage by 34%, enabling larger batch sizes and doubling training throughput.
Flash Attention: By using flash attention, Uber cut GPU memory usage by 50% with the same batch size, allowing for even larger batches while maintaining comparable training speed.
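A rough rendering of those two optimizations with public APIs follows, assuming a recent transformers release and the flash-attn package; the exact values are illustrative, not Uber's production config.

```python
# Sketch of the two optimizations: DeepSpeed ZeRO-3 with CPU offload (parameters
# and optimizer state spill to host RAM) and FlashAttention at model load time.
# All settings are assumptions, not Uber's production configuration.
import torch
from transformers import AutoModelForCausalLM

ds_config = {
    "zero_optimization": {
        "stage": 3,                              # ZeRO stage 3: shard everything
        "offload_param": {"device": "cpu"},      # offload parameters to CPU RAM
        "offload_optimizer": {"device": "cpu"},  # offload optimizer state too
    },
    "train_micro_batch_size_per_gpu": 8,         # freed GPU memory -> bigger batches
}

# FlashAttention 2 kernel (requires the flash-attn package and a supported GPU).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)
# ds_config would be passed via TrainingArguments(deepspeed=ds_config) as above.
```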
Uber measured training efficiency through Model Flops Utilization (MFU), using the DeepSpeed Flops Profiler to assess throughput relative to the hardware’s peak potential. By maximizing batch sizes and minimizing overhead, Uber ensured efficient use of GPU resources throughout its training experiments.
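As a back-of-the-envelope illustration of the MFU arithmetic (all figures below are invented for the example, not Uber's measurements): training FLOPs for a dense transformer are commonly approximated as 6 FLOPs per parameter per token, so MFU falls out of measured throughput and the GPU's peak rating.

```python
# Back-of-the-envelope MFU calculation; all numbers are illustrative assumptions.
params = 7e9             # 7B-parameter model
tokens_per_sec = 3_000   # measured training throughput, per GPU
peak_flops = 312e12      # A100 peak: ~312 TFLOPS (bf16, dense)

achieved_flops = 6 * params * tokens_per_sec  # ~6*N FLOPs per trained token
mfu = achieved_flops / peak_flops
print(f"MFU: {mfu:.1%}")  # -> ~40%
```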
OSI Criticizes Meta for Misleading 'Open Source' Label on Llama AI Models

The concept of open source encourages the free sharing of software, allowing anyone to access, modify, and distribute copies of the original code. It embodies the spirit of collaboration and freedom. A key advocate of this movement is the Open Source Initiative (OSI), a California-based public benefit corporation that aims to promote open source globally.
Organizations like OSI often clash with those that mislead or mishandle their open-source releases, which can negatively impact the experience of the broader open source community.
Recently, Meta has faced criticism, much of it long anticipated by observers, for positioning its Llama AI models as open source when many find that label misleading.
OSI Chief Criticizes Meta's Approach
Meta’s Llama website presents Llama as an open source AI model. However, in an interview with the Financial Times (paywalled), OSI’s Executive Director, Stefano Maffulli, criticized Meta, arguing that the company is misleading users and diluting the meaning of “open source” by labeling its Llama models this way.
Maffulli further highlighted that this misrepresentation is especially harmful at a time when government bodies like the European Commission (EC) are actively supporting open source technologies not controlled by a single entity.
He went on to point out that many of the recent AI models touted as open source, including Llama, do not fully align with true open-source principles, as they restrict experimentation and innovation.
The core of the criticism lies in Meta’s approach: although Llama is advertised as an open-source AI model, Meta only shares the model weights, the parameters learned during training that the model uses to make predictions. Other critical components, such as the datasets, code, and training methods, remain undisclosed. This has led the AI community to adopt the term 'open weight' to better describe such models, as it more accurately reflects their limited openness. Additionally, the license under which Llama is distributed does not meet the OSI's definition of open source, as it imposes significant restrictions on how the software can be used.
TheClosedsource.AI News
Bain & Company Expands Partnership with OpenAI to Boost AI Solutions Delivery

Bain & Company announced an enhanced partnership with OpenAI, the creators of ChatGPT and advanced AI models like GPT-4o and the new OpenAI o1, aiming to accelerate AI's transformative influence across leading global companies.
Since 2022, Bain and OpenAI have worked closely, formalizing their global services alliance in early 2023 to bring OpenAI’s cutting-edge AI capabilities to Bain's clients worldwide. Bain has also integrated OpenAI tools, including ChatGPT Enterprise, into its operations, enabling employees to leverage these technologies for increased efficiency and productivity in developing tailored AI applications.
With this expanded partnership, Bain and OpenAI are set to deepen their collaboration, combining OpenAI’s technological advancements with Bain's strategic expertise and AI implementation skills. This partnership will enable them to provide powerful AI solutions tailored to meet the unique needs of their clients and guide them through their AI transformation journeys.
As part of this broader collaboration, Bain is establishing an OpenAI Center of Excellence (CoE), led by a dedicated team with deep expertise and the latest insights into OpenAI’s technologies. This CoE will focus on helping clients harness the business value of OpenAI’s innovations.
Bain plans to integrate its industry knowledge with OpenAI’s technology to deliver significant value for clients. The initial focus will be on co-designing solutions for the retail and healthcare/life sciences sectors, with plans to expand to other industries over time. The OpenAI CoE will be equipped with advanced technical resources to develop solutions that leverage OpenAI’s frontier technologies, including multi-modal, real-time, and reasoning capabilities.
Christophe De Vusser, Bain’s Worldwide Managing Partner, commented, “Our partnership with OpenAI has demonstrated its power through the transformative results we’ve achieved for clients and within our own operations. With this expanded collaboration, we aim to push the limits further, leading industry innovation and delivering even greater impact.”
Brad Lightcap, COO of OpenAI, added, “We are building on our collaboration with Bain to translate cutting-edge AI into tangible results for enterprises across various sectors. Our goal is to help businesses fully seize the opportunities to improve efficiency, enhance customer service, and drive a new wave of innovation.”
This partnership builds upon the strong results Bain has already achieved for clients such as The Coca-Cola Company and Amgen. Bain has worked with OpenAI to embed AI solutions into client operations, yielding measurable improvements in processes, operating models, technology architectures, talent, and data management. The collaboration will continue to leverage OpenAI’s rapidly evolving platforms for AI transformation consulting, which includes developing AI strategies, refining processes, building workforce capabilities, and enhancing technology infrastructure, guiding leaders through their AI journeys. Additionally, Bain and OpenAI plan to host joint industry roundtables and events to highlight the transformative impact of their partnership for clients worldwide.
Sam Altman's Worldcoin Rebrands as World Network, Expands Iris-Scanning Initiative

Worldcoin, a cryptocurrency initiative founded by Sam Altman, CEO of OpenAI, announced on Thursday that it is rebranding to "World Network." The project is ramping up its global efforts to scan individuals' irises using specialized devices known as "orbs." The primary offering is the World ID, which the company describes as a "digital passport." This ID is intended to verify that a person is real and not an AI chatbot.
At a recent event in San Francisco, World Network unveiled a new version of its iris-scanning orb, featuring enhanced connectivity, privacy, and security measures. The company also outlined plans to improve access to these orbs through dedicated retail locations and a partnership with Rappi, a delivery service in Latin America.
How to Obtain a World ID
To obtain a World ID, individuals must participate in an in-person iris scan using the orb, which resembles a silver sphere about the size of a bowling ball. Once the scan confirms that the person is authentic, they are issued a World ID. In some countries, new users receive a cryptocurrency token called WLD as an incentive for signing up.
The Purpose of World ID
World IDs are positioned as vital in an era where AI chatbots, like ChatGPT, can closely mimic human communication. These IDs aim to help differentiate between real people and AI entities online.
The company behind World Network, Tools for Humanity, operates out of offices in San Francisco and Erlangen, Germany. Since the project launched in July 2023, over 6.9 million individuals have signed up for iris scans.
However, the project has faced scrutiny from privacy advocates over its data collection, storage, and usage practices. Earlier this year, Spain and Portugal temporarily halted the project, while Argentina and the UK are currently conducting reviews.
Unlock the future of problem solving with Generative AI!

If you're a professional looking to elevate your strategic insights, enhance decision-making, and redefine problem-solving with cutting-edge technologies, the Consulting in the age of Gen AI course is your gateway. It is perfect for those ready to integrate Generative AI into their work and stay ahead of the curve.
In a world where AI is rapidly transforming industries, businesses need professionals and consultants who can navigate this evolving landscape. This learning experience arms you with the essential skills to leverage Generative AI to improve problem-solving and decision-making and to advise clients.
Join us and gain firsthand experience of how state-of-the-art GenAI can elevate your problem-solving skills to new heights. This isn’t just learning; it’s your competitive edge in an AI-driven world.
In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.
Highlights from GenAI, OpenAI, and ClosedAI: Dive into the latest projects and innovations from the leading organizations behind some of the most advanced AI models in open-source and closed-source AI.
Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.
Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realization of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!