
Honeywell Partners with Google to Deploy Gemini AI in the Industrial Sector

Through this collaboration, Google’s AI agents will help automate tasks for engineers and assist technicians in resolving maintenance issues.

A Thought Leadership platform to help the world navigate towards Artificial General Intelligence. We are committed to navigating the path towards Artificial General Intelligence (AGI) by building a community of innovators, thinkers, and AI enthusiasts.

Whether you're passionate about machine learning, neural networks, or the ethics surrounding GenAI, our platform offers cutting-edge insights, resources, and collaborations on everything AI.

What to expect from Towards AGI:

  • Know Your Inference (KYI): Ensuring your AI-generated insights are accurate, ethical, and fit for purpose
  • Open vs Closed AI: Expert analysis to help you navigate the open-source vs closed-source debate
  • GenAI Maturity Assessment: Evaluate and audit your AI capabilities
  • Expert Insights & Articles: Stay informed with deep dives into the latest AI advancements

But that’s not all!

We are training specialised AI Analyst Agents that let CxOs interact directly and get answers to their most pressing questions. No more waiting weeks for a Gartner analyst appointment: you’ll be just a prompt away from the insights you need to make critical business decisions. Watch this space!

Visit us at https://www.towardsagi.ai to be part of the future of AI. Let’s build the next wave of AI innovations together!

Honeywell Partners with Google to Deploy Gemini AI in the Industrial Sector

Google Gemini, Alphabet's flagship generative AI, is being utilized by Honeywell to analyze the company's vast data sets, aiming to cut maintenance costs, enhance productivity, and provide new opportunities for employee skill development. 

“The path to autonomy requires assets working harder, people working smarter, and processes working more efficiently,” stated Vimal Kapur, CEO of Honeywell, in the announcement of this partnership, which aims to deliver AI-driven insights to industrial clients starting in 2025.

In a recent CNBC interview, Kapur highlighted that AI’s biggest impact in industrial settings lies in addressing the generational labor shortage: declining birth rates in developed nations mean fewer workers are available for traditional roles. He noted that AI co-pilots can help an employee with five years of experience reach the effectiveness of a veteran with 15.

Through this collaboration, Google’s AI agents will help automate tasks for engineers and assist technicians in resolving maintenance issues. Kapur also mentioned that Honeywell plans to integrate connectivity into jet engines, enabling predictive maintenance and reducing shop time.

While generative AI is already making strides in the industrial sector, this partnership aims to go beyond existing solutions, integrating Google AI with the Honeywell Forge IoT platform. Honeywell Forge, which combines information from industrial designs, manuals, and real-world performance data, will use Google Cloud’s Vertex AI and large language models to create AI agents tailored to this data.

Suresh Venkatarayalu, Honeywell’s CTO and president of Honeywell Connected Enterprise, emphasized in a Google blog post that the goal is to move from automation to autonomy, equipping workers with AI agents that provide real-time support on factory floors and in the field.

These AI agents will allow workers to ask practical questions, such as, “How did this unit perform last night?” or “Why is my system making this sound?” The AI will then respond with images, videos, text, or sensor data.
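As a rough illustration of the lookup such an agent performs before an LLM composes its reply, here is a toy sketch in Python. All names (units, metrics, the log) are hypothetical stand-ins, not Honeywell Forge's actual API; a real agent would query the IoT platform and route intent with a language model rather than keyword matching.

```python
# Toy sketch (all names hypothetical): routing a worker's natural-language
# question to recent sensor readings, the kind of grounding step an
# industrial AI agent performs before handing context to an LLM.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    unit_id: str
    metric: str      # e.g. "vibration_mm_s", "throughput_units_hr"
    value: float

# Stand-in for the IoT platform's overnight sensor history.
OVERNIGHT_LOG = [
    Reading("unit-7", "throughput_units_hr", 118.0),
    Reading("unit-7", "throughput_units_hr", 122.0),
    Reading("unit-7", "vibration_mm_s", 2.1),
    Reading("unit-9", "throughput_units_hr", 95.0),
]

def answer(question: str, unit_id: str) -> str:
    """Crude intent routing: keyword match -> aggregate the matching metric."""
    metric = ("vibration_mm_s"
              if "sound" in question or "vibration" in question
              else "throughput_units_hr")
    values = [r.value for r in OVERNIGHT_LOG
              if r.unit_id == unit_id and r.metric == metric]
    if not values:
        return f"No {metric} data for {unit_id}."
    return f"{unit_id} {metric}: avg {mean(values):.1f} over {len(values)} readings"

print(answer("How did this unit perform last night?", "unit-7"))
```

The interesting design point is the grounding: the agent's reply is assembled from logged readings, so the LLM layer on top summarizes real data instead of guessing.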

Carrie Tharp, Google Cloud’s vice president of strategic industries, highlighted the crucial role of industrial companies in everyday life and the pressure they face with an aging workforce and fewer younger workers stepping in.

Honeywell is also exploring the use of Gemini Nano, a smaller version of the AI that operates directly on devices, for settings like data centers, hospitals, refineries, and warehouses, especially in rural areas with limited internet connectivity. This on-device AI can enable autonomous operations on scanners, sensors, and controllers.

For AI leaders like Google, encouraging adoption across various industries is key to turning generative AI into a profitable venture. Honeywell's data reveals that while 82% of industrial companies consider themselves AI leaders, only 17% have fully implemented AI plans.

Additionally, companies aim to make their internal data as valuable as large language models like Gemini. Clément Delangue, CEO of the AI startup Hugging Face, supported by Amazon, Nvidia, and Google, emphasized at the CNBC Evolve AI Opportunity event that data and datasets are the next major focus for AI. On Hugging Face’s platform, over 200,000 public datasets are available, with their growth rate surpassing that of new language models.

Delangue predicts that the future will see every company, industry, and use case developing its own customized models. Similar moves are being made by other industry players, such as Siemens and Microsoft, who formed an AI partnership for the industrial sector last year, featuring an AI co-pilot.

Kapur is optimistic about generative AI’s potential to drive growth in the labor-challenged industrial sector, viewing it more as a revenue-generating opportunity than just a productivity tool. He expects the adoption of AI in the industrial space to accelerate significantly by 2025-2026, calling it a pivotal period for the technology.

Generative AI Could Eliminate 100,000 Contact Center Jobs, Forrester Warns

A recent report by Forrester predicts that generative AI (GenAI) could lead to the displacement of 100,000 frontline agents by 2025. The study, which surveyed voice of the customer (VoC) and customer experience (CX) professionals, warns that the contact center outsourcing industry faces significant job cuts due to GenAI's rise.

Forrester highlights that 62% of customer-facing industries outsource their contact centers, and the two biggest providers in this sector employ nearly one million workers. The growing use of GenAI for automating basic customer inquiries is expected to contribute directly to job reductions in these outsourcing firms.

However, the report also suggests that companies can adapt to this shift. Sharyn Leaver, Forrester's Chief Research Officer, noted that many businesses eagerly adopted GenAI in 2024 to transform their marketing and customer experience efforts but underestimated the time required for effective implementation. She emphasized that, while GenAI has the potential to revolutionize these areas, the process requires patience and strategic planning.

In 2025, B2C marketing, digital, and CX leaders are encouraged to build on past experiences with GenAI, focusing on enhancing their data infrastructure for deeper customer insights. As companies refine their use of GenAI, the report advises that outsourcing firms should concentrate on performance-based models to optimize automation and redeploy human workers to higher-value roles.

Beyond the impact on contact centers, Forrester’s report provides other key predictions for the future of CX:

CX in 2025
The report emphasizes the need for CX leaders to take bold steps. While it acknowledges that many brands will continue to struggle with mediocrity, those that push forward can distinguish themselves from competitors.

Cross-Departmental Integration Will Increase
One prediction is that 20% of CX teams will integrate more closely with design, research, and delivery teams. This shift aims to enhance the effectiveness of CX efforts through deeper collaboration. Despite challenges, such as 41% of leaders citing interdepartmental cooperation as a major hurdle, Forrester believes that embedding CX teams into broader business functions will drive better customer insights and improvements.

The “Score Obsession” in CX Will Persist
Forrester predicts that only 20% of CX teams will move beyond simply gathering feedback to making it actionable. Most teams remain focused on improving CX scores, with 58% tying these metrics to incentives. Even those that try to shift away from score-based approaches face obstacles like skill shortages and outdated technology, leading to a continued emphasis on metrics rather than customer-centric outcomes.

Simplifying Technology
Contrary to the trend toward AI solutions, the report finds that one in four CX teams plan to replace underused tools with simpler enterprise software suites. Many CX teams currently use about four technologies but often rely only on basic functionalities that overlap with existing enterprise systems. To address this, IT departments may reduce redundant software costs, and Forrester advises CX leaders to proactively simplify their tech stack. This shift would allow them to focus budgets on more impactful tools that directly address challenges, like aligning CX improvements with business goals and utilizing customer insights effectively.

Overall, Forrester's report paints a picture of a changing CX landscape, where companies must adapt to the rise of GenAI, balance technology use, and integrate CX efforts across departments to stay competitive.

NVIDIA and Meta Empower Indian IT Firms to Boost GenAI Capabilities

Indian IT is making significant strides in generative AI. At Meta’s Build with AI Summit in Bengaluru on October 23, Infosys announced a collaboration with Meta to use the Llama stack—a suite of open-source large language models and tools—to develop AI solutions across various sectors.

As an early adopter of Llama 3.1 and 3.2 models, Infosys is incorporating these into its in-house AI platform, Infosys Topaz, to create tools that enhance business operations. One such tool is a document assistant that uses Llama to streamline contract reviews. An Infosys representative shared that the company is building industry-wide solutions, including a proof-of-concept for market research, and internally leveraging Llama models for its AI-focused initiatives. The company is also exploring use cases like production scenarios and document summarization.

This development follows Infosys' announcement that it is working on small language models for clients' varied applications. During a recent earnings call, Infosys CEO and MD Salil Parekh highlighted the company’s unique approach, which combines open-source components, specific industry data, and Infosys’ proprietary data.

Earlier this year, Meta AI’s chief, Yann LeCun, mentioned a meeting with an Infosys co-founder, who was funding a project using Llama 2 to support all 22 official Indian languages.

Infosys has further deepened its relationship with Meta by establishing a Meta Center of Excellence (COE). This COE aims to speed up the integration of AI solutions into enterprises and enhance contributions to open-source projects. It will focus on building expertise in the Llama stack, developing industry-specific applications, and encouraging the adoption of generative AI among customers.

In addition, Infosys has teamed up with NVIDIA, incorporating NVIDIA AI Enterprise into the Infosys Topaz platform. This collaboration helps businesses integrate generative AI into their operations more efficiently. Infosys has also launched a dedicated NVIDIA Center of Excellence to focus on employee training, solution development, and broader adoption of NVIDIA technology.

Competitors’ AI Strategies

Infosys isn't alone in this AI push. NVIDIA is supporting other major Indian IT firms as well. Tata Consultancy Services (TCS) is using NVIDIA’s NIM Agents Blueprints to develop AI solutions across sectors like telecommunications, retail, manufacturing, automotive, and finance. TCS offers domain-specific language models powered by NeMo, capable of handling inquiries and providing insights across different enterprise functions, including IT, HR, and operations.

In Q2 FY25, TCS reported a surge in generative AI engagements, with over 600 projects, up from around 270 in the previous quarter. TCS Chief K. Krithivasan highlighted this growth, noting that 86 of these projects had moved into production, a significant increase from just eight last quarter.

Wipro, meanwhile, is focusing on AI-driven consulting and extensive reskilling efforts to create an “AI-powered Wipro” aimed at boosting efficiency and driving transformation. According to Wipro CEO and MD Srini Pallia, GenAI is poised to positively impact both the company and the industry. He noted that Wipro has trained and certified over 44,000 employees in advanced AI skills, with many actively using AI development tools across client projects.

Wipro also integrates NVIDIA AI Enterprise software, including NIM Agent Blueprints and NeMo, to help businesses create customized conversational AI solutions like digital assistants for customer service.

Tech Mahindra recently launched Indus 2, a Hindi-focused AI model powered by Nemotron-4-Hindi 4B, to enhance engagement in local languages. The company has reskilled 45,000 employees, aligning with its AI strategy through an internal skills development framework.

With these partnerships and significant investments, Indian IT companies are rapidly expanding their generative AI capabilities, positioning themselves for widespread AI adoption across industries in the country.

Meta Teams Up with IndiaAI to Foster AI Innovation and Skill Development

On Friday, Meta announced a strategic collaboration with 'IndiaAI' under the Ministry of Electronics and Information Technology (MeitY) to boost open-source AI innovation, research, and skill development in India.

This collaboration includes the creation of the Center for Generative AI, Shrijan, at IIT Jodhpur, as well as the launch of the 'AI for Skilling and Capacity Building' initiative in partnership with the All India Council for Technical Education (AICTE).

The partnership aims to develop indigenous AI applications and enhance AI skill development. It will also strengthen research capabilities, supporting India’s mission to achieve technological sovereignty and create AI solutions tailored specifically for the country.

According to a press release, Meta’s collaboration with 'IndiaAI' follows a Memorandum of Understanding focused on advancing open-source AI in India.

S. Krishnan, IT Secretary, emphasized that the initiative will contribute to India's ambition of becoming a USD 5 trillion economy by preparing the next generation to lead in the global AI space, solidifying India’s status in technology and economic growth.

“These initiatives are crucial for building a strong ecosystem for innovative research, skill development, and open-source advancements in AI, while ensuring its responsible and ethical use,” Krishnan noted.

Granite 3.0 Models Highlight IBM’s Continued Commitment to Open AI

Open source and AI have a complex relationship. While AI development heavily relies on open-source contributions, many companies are reluctant to make their AI programs or large language models (LLMs) open-source. However, IBM has taken a different approach, previously open-sourcing its Granite models. Now, the company is expanding its commitment with the release of the Granite AI 3.0 models under the Apache 2.0 license.

IBM developed these models using pretraining data from publicly accessible sources, including GitHub Code Clean, Starcoder data, public code repositories, and GitHub issues, ensuring compliance with copyright laws to avoid legal risks.

The reluctance of other AI companies to follow this path is largely due to their reliance on datasets containing copyrighted or proprietary information, which could expose them to legal challenges if made public. For example, News Corp has sued Perplexity for using content from publications like The Wall Street Journal and The New York Post without permission.

In contrast, IBM’s Granite models are specifically tailored for business needs, focusing on programming and software development. The latest Granite AI 3.0 models have been trained with three times more data than their predecessors and offer enhanced flexibility, including support for external variables and rolling forecasts.

The Granite 3.0 lineup includes 8B and 2B models, which serve as "workhorse" models for enterprise AI applications. They are designed for tasks such as Retrieval Augmented Generation (RAG), classification, summarization, entity extraction, and tool integration. These models come in two variants: "Instruct," tuned to follow natural-language instructions, and "Guardian," which focuses on identifying risks in user prompts and AI outputs to counter threats like prompt injection attacks.
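To make the RAG "workhorse" role concrete, here is a minimal sketch of the pattern in plain Python. The keyword-overlap retriever and the generate() stub are hypothetical stand-ins for illustration only; a production system would use embedding-based retrieval and call an actual Granite model (e.g. via watsonx.ai or Hugging Face) in the final step.

```python
# Minimal RAG sketch: retrieve the most relevant document, then ground
# the "generation" step in it. retrieve() and generate() are toy stand-ins.
def score(query: str, doc: str) -> int:
    """Keyword-overlap relevance score (a real system would use embeddings)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def generate(query: str, context: list[str]) -> str:
    # Stand-in for the LLM call: graft the retrieved context into the answer.
    return f"Answer to '{query}' grounded in: {' | '.join(context)}"

corpus = [
    "Apache 2.0 permits commercial and research use.",
    "Prompt injection attacks hide instructions in user input.",
]
query = "What does Apache 2.0 permit?"
print(generate(query, retrieve(query, corpus)))
```

The point of the pattern is the middle step: because the model's answer is conditioned on retrieved enterprise documents, it can cite material the base model was never trained on.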

The Granite models range from 3 billion to 34 billion parameters and are trained on 116 programming languages using 3 to 4 terabytes of tokens. They are accessible through platforms like Hugging Face, GitHub, IBM's Watsonx.ai, and Red Hat Enterprise Linux (RHEL) AI. A curated selection of Granite 3.0 models is also available on Ollama and Replicate.

Additionally, IBM has updated its Watsonx Code Assistant for application development. With Granite integration, this tool offers coding support across languages like C, C++, Go, Java, and Python, alongside advanced modernization tools for Enterprise Java applications. The Granite code capabilities are now also accessible through a Visual Studio Code extension, IBM Granite.Code.

Unlike some other major LLMs that impose commercial restrictions despite being labeled open-source, the Apache 2.0 license used by IBM allows both research and commercial use. This makes IBM's models more accessible for developers and researchers to build on and enhance.

IBM's open-source approach aims to reduce barriers to AI development, providing models that, according to the company, offer performance similar to larger, more costly alternatives. While Granite might not be suited for tasks like writing homework essays or creative storytelling, it is well-suited for developing practical software and AI-based expert systems.

Yotta and Sarvam AI Collaborate to Launch India’s First Open AI Model

Yotta Data Services has teamed up with AI startup Sarvam to develop Sarvam 1, India's first open-source foundational model. Sarvam 1 is the first large language model (LLM) developed entirely by an Indian company, trained from scratch using an internal dataset of 4 trillion tokens. The model, which supports 10 Indian languages, was built using Yotta’s Shakti Cloud infrastructure, offering top-notch performance and reliability.

Leveraging Yotta’s Shakti Cloud, Sarvam has also introduced products like Sarvam Agents and developer-friendly APIs for various industries. The flagship offering, Sarvam Agents, provides advanced tools for users to interact with AI-powered bots through telephone, WhatsApp, or in-app, using natural, voice-based conversations. These agents support 10 Indian languages, including Hindi, Tamil, Telugu, Malayalam, Punjabi, Odia, Gujarati, Marathi, Kannada, and Bengali, and are designed to streamline processes such as customer support, feedback collection, and employee engagement.

Yotta’s Shakti Cloud, powered by NVIDIA's advanced computing platform, including NVIDIA Hopper GPUs and the NVIDIA AI Enterprise software, accelerates AI model training while ensuring consistent and seamless performance.

“Shakti Cloud enabled Sarvam AI to access NVIDIA’s accelerated computing and AI Enterprise software, allowing them to develop models with billions of data points efficiently and launch their services on schedule,” said Sunil Gupta, Co-Founder, CEO, and MD of Yotta Data Services. “This is just the start. We are enthusiastic about the future possibilities this partnership brings and are committed to advancing India’s AI capabilities.”

This collaboration gives Sarvam broader access to AI technologies within India, ensuring data sovereignty and security. Yotta’s competitive pricing for Shakti Cloud provides a significant advantage to Sarvam, backed by some of the country's most advanced AI infrastructure.

Darshan Hiranandani, Co-Founder and Chairman of Yotta, emphasized that partnering with Sarvam to create the Sarvam 1 foundational model is a key step in advancing India’s AI ecosystem. He noted that Shakti Cloud's state-of-the-art infrastructure accelerates AI model training, offering Indian businesses the resources needed for innovation and global competitiveness.

Vivek Raghavan, Co-Founder of Sarvam, highlighted that the partnership with Yotta enables large-scale AI innovation in India. With Yotta’s cutting-edge computing power, Sarvam can provide affordable and dependable AI solutions across various sectors, using NVIDIA AI Enterprise software, including NVIDIA NeMo, to support GPU-accelerated data processing and scalable generative AI training.

Kari Briski, Vice President of AI Software at NVIDIA, remarked, “India is emerging as a global leader in AI innovation. With local tech partners like Yotta, NVIDIA supports the development of sovereign language models, helping Indian businesses and communities leverage AI for economic and social progress.”

Yotta aims to make advanced AI technologies more accessible to Indian businesses through flexible pricing models, including pay-per-hour options and long-term reservations. This initiative aligns with Yotta’s mission to support India’s digital transformation and foster innovation across various industries.

OpenAI Strengthens Compliance Efforts with Scott Schools Appointment

OpenAI has announced the appointment of Scott Schools as its Chief Compliance Officer, reinforcing its commitment to advancing AI responsibly. In his new role, Scott will collaborate closely with various teams within OpenAI and work alongside the Board of Directors, further supporting the organization’s efforts to navigate the evolving regulatory landscape thoughtfully.

Che Chang, OpenAI’s General Counsel, highlighted Scott's expertise, stating, “Scott's deep experience will enhance our team's ability to deliver beneficial AI technology while maintaining the highest standards of integrity and adapting to fast-changing regulatory environments.”

Scott expressed his enthusiasm for joining OpenAI, noting that he admires the company’s innovative work and views contributing to the responsible development of impactful technologies as a significant opportunity.

With a career spanning decades in both public and private sectors, Scott brings a wealth of legal expertise to OpenAI. He previously served as Associate Deputy Attorney General at the U.S. Department of Justice, where he was instrumental in shaping national legal strategy and advising department leadership on ethics. Most recently, he held the role of Chief Ethics and Compliance Officer at Uber Technologies, guiding the company through complex regulatory challenges. His experience as U.S. Attorney for the Northern District of California and South Carolina further demonstrates his deep commitment to upholding legal and ethical standards.

Unlock the future of problem solving with Generative AI!

If you're a professional looking to elevate your strategic insights, enhance decision-making, and redefine problem-solving with cutting-edge technologies, the Consulting in the age of Gen AI course is your gateway. It is perfect for those ready to integrate Generative AI into their work and stay ahead of the curve.

In a world where AI is rapidly transforming industries, businesses need professionals and consultants who can navigate this evolving landscape. This learning experience arms you with the essential skills to leverage Generative AI to improve problem-solving and decision-making, and to advise clients more effectively.

Join us and gain firsthand experience of how state-of-the-art GenAI can elevate your problem-solving skills to new heights. This isn’t just learning; it’s your competitive edge in an AI-driven world.

In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.

Highlights from GenAI, OpenAI, and ClosedAI: Dive into the latest projects and innovations from the leading organizations behind some of the most advanced AI models, both open-source and closed-source.

Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.

Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realization of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!