- "Towards AGI"
- Posts
- HubSpot CTO Dharmesh Shah Envisions a Future Driven by AI Agents
HubSpot CTO Dharmesh Shah Envisions a Future Driven by AI Agents
Shah highlighted the idea of agents building other agents, comparing this to the transformative impact of Apple’s app store on technology.
A Thought Leadership platform to help the world navigate towards Artificial General Intelligence. We are committed to navigating the path towards Artificial General Intelligence (AGI) by building a community of innovators, thinkers, and AI enthusiasts.
Whether you're passionate about machine learning, neural networks, or the ethics surrounding GenAI, our platform offers cutting-edge insights, resources, and collaborations on everything AI.
What to expect from Towards AGI:
- Know Your Inference (KYI): Ensure your AI-generated insights are accurate, ethical, and fit for purpose
- Open vs Closed AI: Get expert analysis to help you navigate the open-source vs closed-source debate
- GenAI Maturity Assessment: Evaluate and audit your AI capabilities
- Expert Insights & Articles: Stay informed with deep dives into the latest AI advancements
But that’s not all!
We are training specialised AI Analyst Agents that CxOs can interact with to get insights and answers to their most pressing questions. No more waiting weeks for a Gartner analyst appointment; you’ll be just a prompt away from the insights you need to make critical business decisions. Watch this space!
Visit us at https://www.towardsagi.ai to be part of the future of AI. Let’s build the next wave of AI innovations together!
TheGen.AI News
HubSpot CTO Dharmesh Shah Envisions a Future Driven by AI Agents

At HubSpot’s Inbound event, co-founder and CTO Dharmesh Shah introduced Breeze AI but emphasized that this is just the beginning for AI agents. During his keynote, Shah shared an ambitious vision of a future where AI agents not only manage business tasks but also create other agents to expand their capabilities.
Shah described this year as pivotal, saying, "When we look back, you’ll remember it as the year of AI agents."
Shah, known for his hands-on approach, has been working on experimental projects, including a concept he calls agent.ai, a professional network similar to LinkedIn, where businesses can hire, collaborate with, and build AI agents. He humorously remarked, "Agent.ai is the number one professional network for AI agents... and also the only one."
He also predicted that future teams will be hybrid, made up of both humans and AI agents working together. Shah highlighted the idea of agents building other agents, comparing this to the transformative impact of Apple’s app store on technology. “Just like ‘there’s an app for that,’ our vision is ‘there’s an agent for that’ — for every possible marketing, sales, and customer service scenario, even ones we can’t yet imagine.”
While Breeze AI currently integrates into HubSpot’s CRM to simplify customer interactions, Shah sees it as just the start of more advanced AI applications. He confidently predicted that many people, not just developers, would start building their own AI agents soon.
Though AI agents are still developing, Shah believes their potential is immense, hinting at a future where businesses will increasingly depend on AI for a range of tasks, both simple and complex.
Google Reveals 2 Million Developers Utilizing Generative AI Solutions

Google’s AI model, Gemini, is powering businesses in various ways, offering solutions, insights, and new functionalities to employees and customers alike. According to a media kit shared with PYMNTS on Tuesday (Sept. 24), over 2 million developers are utilizing Google’s generative AI tools. Thomas Kurian, CEO of Google Cloud, highlighted in a blog post how swiftly customers are moving from testing to implementing their ideas using the Vertex AI platform. Kurian also noted a significant productivity boost with Gemini in Google Workspace, revealing that enterprise customers are saving an average of 105 minutes per user every week.
Ahead of the global virtual event, Gemini at Work, Google shared a range of real-world applications for its AI solutions. Indonesian FinTech giant GoTo Group, for example, developed a voice assistant for its GoPay app, allowing users to navigate features by speaking commands in Bahasa Indonesia.
Pods, a moving and storage company, leverages Gemini to enrich its marketing strategies with real-time data such as location, time, weather, and local insights, which resulted in a 33% rise in quote requests for one of its campaigns.
Meanwhile, Puma, the sportswear brand, is using Imagen 2 to generate customized product images for its online store, speeding up digital campaigns and enhancing click-through rates by targeting specific markets. Scotts Miracle-Gro, a lawn and gardening supplier, developed an AI tool to give personalized gardening tips and product suggestions. Initially used by sales staff, this AI agent will soon be available to customers.
Snapchat integrated Gemini through Vertex AI to boost the performance of its AI-powered chatbot, My AI, which doubled engagement for its “Snapping to My AI” feature in the U.S.
Google also announced that the standalone Gemini app will now be included in Workspace Business, Enterprise, and Frontline plans, providing AI assistance to a wider range of workers. Additionally, it launched the Customer Engagement Suite, combining advanced conversational AI with omnichannel contact center functionality for an enhanced customer service experience.
Uber Develops GenAI Gateway to Simplify LLM Integration Across 60+ Use Cases

Uber has developed a unified platform for managing large language models (LLMs) from both external vendors and internally hosted models, choosing to mirror the OpenAI API to encourage internal adoption. This platform, known as the GenAI Gateway, offers a consistent and efficient interface, supporting more than 60 LLM use cases across various departments.
As one of the early adopters of LLMs, Uber integrated AI-driven functionalities into areas like process automation, customer support, and content generation. However, the lack of a centralized approach led to duplicated efforts and inconsistencies. To address these issues, Uber introduced the GenAI Gateway to centralize the management of LLM models.
Senior software engineers Tse-Chi Wang and Roopansh Bansal from Uber explained that the GenAI Gateway simplifies the integration of LLMs for teams. Its easy onboarding process minimizes the effort required, providing a streamlined path to utilize LLMs. Additionally, a standardized review process, overseen by the Engineering Security team, ensures all use cases comply with Uber’s data handling standards before gaining access to the gateway.
The team adopted the OpenAI API for the gateway, leveraging its widespread use and open-source tools like LangChain and LlamaIndex. By mirroring this familiar API, they made the onboarding process smoother and expanded the platform's capabilities.
GenAI Gateway, built as a Go service, integrates external platforms like OpenAI and Vertex AI with internal LLMs, offering features such as authentication, account management, caching, and monitoring.
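Because the gateway mirrors the OpenAI API, existing code and tooling can target it by changing little more than the base URL. Below is a minimal sketch of what that looks like from a client's perspective; the gateway endpoint, credential, and model name are illustrative assumptions, not details of Uber's internal deployment.

```python
# Minimal sketch of calling an OpenAI-compatible gateway with the standard
# OpenAI Python client. The endpoint, token, and model name are hypothetical.
from openai import OpenAI

client = OpenAI(
    base_url="https://genai-gateway.example.internal/v1",  # assumed gateway endpoint
    api_key="internal-service-token",                      # placeholder credential
)

# Code written against the familiar chat-completions API keeps working unchanged;
# the gateway decides whether an external vendor or an internally hosted model
# serves the request, and layers on authentication, caching, and monitoring.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # the gateway can map this name to any backing model
    messages=[{"role": "user", "content": "Summarize this support ticket in one sentence."}],
)
print(response.choices[0].message.content)
```

This is the practical payoff of mirroring a familiar interface: frameworks such as LangChain and LlamaIndex that already speak the OpenAI protocol can route through the gateway with a configuration change rather than new integration code.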
Volkswagen Introduces AI-Driven Virtual Assistant in myVW App for 2024 Vehicles


Volkswagen of America has partnered with Google Cloud to introduce a generative AI-powered virtual assistant into its myVW mobile app for vehicle owners.
The myVW Virtual Assistant is now available for 2024 Atlas and Atlas Cross Sport models, with plans to expand to most 2020 and newer Volkswagen vehicles by 2025, according to a press release on Tuesday (Sept. 24).
This virtual assistant provides owners with specific vehicle-related information, such as instructions for changing a flat tire or the meaning of indicator lights. Users can also point their smartphone cameras at indicator lights to receive detailed information.
"AI is becoming a useful tool for Volkswagen owners to understand their vehicles better and get quicker, easier answers," said Abdallah Shanti, Chief Information Officer at Volkswagen Group of America. "Our collaboration with Google Cloud allows us to bring innovative technology into our vehicles, enhancing our connection with both the car and our customers."
Volkswagen used Google Cloud's Vertex AI and BigQuery to fine-tune the Gemini models with various data sources, while Google Cloud Consulting assisted in designing and deploying the app. The virtual assistant was trained using owner’s manuals, FAQs, help center articles, official Volkswagen YouTube videos, and step-by-step guides.
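As a rough illustration of the building blocks involved, the sketch below shows how a Gemini model on Vertex AI can be prompted to answer an owner's question from a supplied manual excerpt. It is not Volkswagen's actual pipeline; the project ID, model name, system instruction, and excerpt are all assumptions.

```python
# Hedged sketch (not Volkswagen's implementation): answering a vehicle question
# with a Gemini model on Vertex AI, grounded on an owner's-manual excerpt.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project

model = GenerativeModel(
    "gemini-1.5-pro",  # assumed model choice
    system_instruction=(
        "You are a vehicle assistant. Answer only from the manual excerpt provided, "
        "and suggest contacting a dealer when the excerpt does not cover the question."
    ),
)

manual_excerpt = "If the tire pressure warning light stays on, check all four tires..."  # placeholder
question = "What does the tire pressure warning light mean?"

response = model.generate_content(f"Manual excerpt:\n{manual_excerpt}\n\nQuestion: {question}")
print(response.text)
```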
"Volkswagen is setting a new benchmark for driver experiences by integrating advanced generative AI directly for its customers," said Thomas Kurian, CEO of Google Cloud, in the release.
This announcement coincided with Google’s news that over 2 million developers are now using its generative AI tools. Kurian highlighted in a blog post how companies are rapidly transitioning ideas from testing to production using Vertex AI, ahead of the global event "Gemini at Work."
TheOpensource.AI News
IBM and NASA Launch Open-Source AI Model to Address Weather and Climate Challenges

IBM has introduced a new AI foundation model designed for a variety of weather and climate applications, now available as open source to the scientific, developer, and business communities. This model, developed by IBM in collaboration with NASA and with contributions from Oak Ridge National Laboratory, provides a scalable and adaptable solution for tackling challenges related to both short-term weather forecasting and long-term climate projections.
Due to its unique architecture and training process, the weather and climate foundation model can address a wider range of applications compared to existing weather AI models, as described in the arXiv paper titled Prithvi WxC: Foundation Model for Weather and Climate. Its potential applications include generating localized forecasts based on real-time data, predicting severe weather patterns, enhancing the spatial resolution of global climate simulations, and improving how physical processes are represented in numerical models. In one experiment mentioned in the paper, the model was able to accurately reconstruct global surface temperatures using only five percent of the original data, highlighting its potential for data assimilation tasks.
The model was pre-trained using 40 years of Earth observation data from NASA’s Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2). It is designed to be fine-tuned for global, regional, and local weather studies, offering flexibility across a range of research applications.
Available for download on Hugging Face, the model comes with two fine-tuned versions for specific scientific and industry applications:
Climate and Weather Data Downscaling: This version enhances low-resolution data inputs such as temperature, precipitation, and surface winds to produce high-resolution outputs. Capable of generating localized forecasts and climate projections at up to 12x resolution, the downscaling model is accessible via IBM Granite’s Hugging Face page.
Gravity Wave Parameterization: This version addresses the challenge of gravity waves, which impact various atmospheric processes but have traditionally been underrepresented in climate models. The foundation model helps better estimate gravity wave effects to improve the accuracy of numerical weather and climate models. This version is available on the NASA-IBM Hugging Face page as part of the Prithvi family of models.
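For developers who want to experiment, the weights can be fetched with the standard Hugging Face client. The repository name below is an assumption based on the article's description of the Prithvi family; confirm the exact ID on the NASA-IBM Hugging Face page.

```python
# Sketch of downloading the Prithvi weather-and-climate model weights from
# Hugging Face. The repo_id is assumed; check the NASA-IBM page for the exact name.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="ibm-nasa-geospatial/Prithvi-WxC-1.0-2300M")
print(f"Model files downloaded to: {local_dir}")
```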
Karen St. Germain, Director of NASA’s Earth Science Division, emphasized that the foundation model will help produce actionable science to better inform decisions regarding weather and climate preparedness, response, and mitigation. Juan Bernabe-Moreno, Director of IBM Research Europe, highlighted that the model’s flexibility allows it to tackle a variety of meteorological phenomena, such as hurricanes and atmospheric rivers, and increase the resolution of climate models for better understanding of severe weather events.
Arjun Shankar, Director of the National Center for Computational Sciences at Oak Ridge National Laboratory, noted that the collaboration with IBM and NASA in creating the Prithvi foundation model supports national priorities in weather and climate research by advancing computational science and model precision.
IBM has also been working with Environment and Climate Change Canada (ECCC) to test the model’s adaptability for other weather forecasting applications, such as short-term precipitation forecasting through precipitation nowcasting, which uses real-time radar data. The downscaling technique is also being tested to improve forecast resolution from 15 km to 1 km.
This new model is part of the ongoing collaboration between IBM Research and NASA to use AI for Earth exploration, building on the Prithvi family of AI models. Last year, IBM and NASA released the Prithvi geospatial AI model, the largest open-source geospatial AI model on Hugging Face, which has since been used by governments, companies, and institutions to analyze disaster patterns, biodiversity, land use, and more. The weather and climate foundation model, along with the gravity wave parameterization model, is available on the NASA-IBM Hugging Face page, while the downscaling model is accessible on IBM Granite’s page on Hugging Face.
Open Source AI Wars Ignite Over Definition Disputes

The Open Source Initiative (OSI) and its collaborators are nearing the creation of a formal definition for open-source AI. Stefano Maffulli, OSI's executive director, hopes to announce this definition at the All Things Open conference in late October, though some leaders in the open-source community are already expressing reservations about it.
Here's some context: many companies, including Meta, have been labeling their AI models as open source, despite not truly meeting the criteria. To address this, OSI, along with various organizations, has been working on a thorough definition for open-source AI. The OSI is well-known for defining open-source software through its Open Source Definition.
In the latest draft of the Open Source AI Definition (version 0.0.9), which was introduced at KubeCon and Open Source Summit Asia in Hong Kong, notable revisions were made, frustrating some supporters of open source. Key changes include:
Role of Training Data: While training data is recognized as beneficial, it is not required for modifying AI systems. This reflects the challenges associated with sharing data, particularly legal and privacy issues. The draft categorizes data into open, public, and unshareable non-public types, each with specific rules aimed at improving transparency and understanding potential biases in AI systems.
Separation of Checklist: The license evaluation checklist has been removed from the main definition and aligned with the Model Openness Framework (MOF). This shift allows for more focused discussions on open-source AI identification while preserving core principles in the definition.
As explained by Jim Zemlin, executive director of the Linux Foundation, at KubeCon and Open Source Summit China, the MOF helps assess whether a model qualifies as open. It uses three tiers of openness. In the first tier, everything – data, components, and instructions – is fully open for replication. The second tier allows for partial openness, while in the third tier, some data may be unavailable, though descriptions of the datasets are accessible to ensure the model's overall transparency.
However, not everyone is pleased. Tara Tarakiyee, FOSS Technologist at the Sovereign Tech Fund, argues that any system relying solely on proprietary data is inherently proprietary, and no amount of redefinition can change that. She criticizes the new draft for being filled with vague language, which she believes leaves ample room for proprietary AI systems to falsely claim they are open source.
TheClosedsource.AI News
Sam Altman Predicts AI Superintelligence Could Be Achieved in a Matter of Years

On Monday, OpenAI CEO Sam Altman shared his vision for a future driven by AI, where technological progress and global prosperity are accelerated, in a new blog post titled "The Intelligence Age." In the essay, Altman predicts the potential emergence of superintelligent AI within the next decade.
"It’s possible we’ll achieve superintelligence in a few thousand days; it may take longer, but I’m confident we’ll get there," Altman wrote.
OpenAI’s current focus is on developing AGI (artificial general intelligence), which refers to technology that could match human intelligence across a wide range of tasks without specific training. Superintelligence, which surpasses AGI, represents a hypothetical level of intelligence that could vastly outperform humans in all intellectual tasks.
Superintelligence, sometimes referred to as "ASI" (artificial superintelligence), has been a subject of interest—and at times controversy—among the machine learning community. Philosopher Nick Bostrom notably addressed the topic in his 2014 book Superintelligence: Paths, Dangers, Strategies. Former OpenAI co-founder Ilya Sutskever recently launched a company called Safe Superintelligence. Altman himself has discussed the potential development of superintelligence since at least last year.
Altman’s reference to "a few thousand days" likely signifies an uncertain timeline for ASI's arrival, potentially within a decade. For context, 2,000 days is roughly 5.5 years, 3,000 days is around 8.2 years, and 4,000 days is nearly 11 years. Altman’s use of a vague timeframe reflects the unpredictability of future AI advancements, although as the CEO of OpenAI, he may have insights into AI research not yet known to the public.
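The conversion behind those figures is straightforward; a quick check, assuming an average year of 365.25 days:

```python
# Quick check of the "a few thousand days" arithmetic.
for days in (2000, 3000, 4000):
    print(f"{days} days ≈ {days / 365.25:.1f} years")
# 2000 days ≈ 5.5 years, 3000 days ≈ 8.2 years, 4000 days ≈ 11.0 years
```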
While Altman’s prediction has sparked enthusiasm, it has also drawn criticism. Computer scientist Grady Booch, a vocal critic of AI hype, responded to Altman’s prediction on social media, stating, "I am so tired of all the AI hype: it has no basis in reality and only serves to inflate valuations, inflame the public, grab headlines, and distract from real work in computing."
Despite the skepticism, Altman’s comments are notable, given his position as CEO of a leading AI company. He acknowledges the importance of building the infrastructure necessary to support widespread AI use. "If we want to put AI in the hands of as many people as possible, we need to drive down compute costs and increase its availability," Altman writes. Without sufficient infrastructure, he warns, AI could become a limited resource, potentially leading to conflicts and benefiting only the wealthy.
Jony Ive Joins Forces with OpenAI to Develop Revolutionary AI Device

Jony Ive has officially confirmed his collaboration with Sam Altman’s OpenAI on developing a new AI-powered device. This announcement comes nearly a year after rumors surfaced about Ive’s involvement with Altman, speculating on the creation of an OpenAI phone. The pair is reportedly working on a next-generation device that could revolutionize personal computing. Although details remain limited, the project is already generating significant buzz in the tech industry.
Ive and Altman reportedly connected through Airbnb CEO Brian Chesky. Now, with backing from Laurene Powell Jobs' Emerson Collective and Ive himself, they’ve embarked on what could be Ive’s most notable post-Apple endeavor. The team behind the venture is small, consisting of just 10 employees, but includes notable talent. Two key figures from Ive’s time at Apple, Tang Tan and Evans Hankey, who were instrumental in the development of the iPhone, have joined the project. They’re working out of a sleek 32,000-square-foot office in San Francisco, part of a $90 million real estate acquisition made by Ive.
Though still in the early phases, speculation is rife about the nature of the device. The New York Times reports that discussions between Ive and Altman have focused on the potential of generative AI to drive a new type of computing device, one capable of handling far more advanced tasks than current software. Given OpenAI’s successes with tools like ChatGPT and DALL-E, it’s easy to envision AI playing a central role in this groundbreaking project.
Unlock the future of problem solving with Generative AI!

If you're a professional looking to elevate your strategic insights, enhance decision-making, and redefine problem-solving with cutting-edge technologies, the Consulting in the age of Gen AI course is your gateway. It is perfect for those ready to integrate Generative AI into their work and stay ahead of the curve.
In a world where AI is rapidly transforming industries, businesses need professionals and consultants who can navigate this evolving landscape. This learning experience arms you with the essential skills to leverage Generative AI for improving problem-solving, decision-making, or advising clients.
Join us and gain firsthand experience of how state-of-the-art GenAI can elevate your problem-solving skills to new heights. This isn’t just learning; it’s your competitive edge in an AI-driven world.
Transform the way you run your business using AI (Extended Labour Day Sale) 💰
Imagine a future where your business runs like a well-oiled machine, effortlessly growing and thriving while you focus on what truly matters.
This isn't a dream—it's the power of AI, and it's within your reach.
Join this AI Business Growth & Strategy Masterclass and discover how to revolutionize your approach to business.
In just 4 hours, you’ll gain the tools, insights, and strategies to not just survive, but dominate your market.
What You’ll Experience:
🌟 Discover AI techniques that give you a competitive edge
💡 Learn how to pivot your business model for unstoppable growth
💼 Develop AI-driven strategies that turn challenges into opportunities
⏰ Free up your time and energy by automating the mundane, focusing on what you love
🗓️ Tomorrow | ⏱️ 10 AM EST
This is more than just a workshop—it's a turning point.
The first 100 to register get in for FREE. Don’t miss the chance to change your business trajectory forever.
In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.
Highlights from GenAI, OpenAI, and ClosedAI: Dive into the latest projects and innovations from the leading organizations behind some of the most advanced AI models, across open-source and closed-source AI.
Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.
Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realization of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!