- "Towards AGI"
- Posts
- Vodafone Expands Partnership with Google in Groundbreaking GenAI Deal
Vodafone Expands Partnership with Google in Groundbreaking GenAI Deal
Google will leverage Vodafone's connectivity services to boost its workforce productivity.
A Thought Leadership platform to help the world navigate towards Artificial General Intelligence. We are committed to navigating the path towards Artificial General Intelligence (AGI) by building a community of innovators, thinkers, and AI enthusiasts.
Whether you're passionate about machine learning, neural networks, or the ethics surrounding GenAI, our platform offers cutting-edge insights, resources, and collaborations on everything AI.
What to expect from Towards AGI:
Know Your Inference (KYI): Ensuring your AI-generated insights are accurate, ethical, and fit for purpose
Open vs Closed AI: Expert analysis to help you navigate the open-source vs closed-source AI debate
GenAI Maturity Assessment: Evaluate and audit your AI capabilities
Expert Insights & Articles: Stay informed with deep dives into the latest AI advancements
But that’s not all!
We are training specialised AI Analyst Agents that let CxOs interact directly with the insights and answers they need for their most pressing questions. No more waiting weeks for a Gartner analyst appointment: you’ll be just a prompt away from the insights you need to make critical business decisions. Watch this space!
Visit us at https://www.towardsagi.ai to be part of the future of AI. Let’s build the next wave of AI innovations together!
TheGen.AI News
Vodafone Expands Partnership with Google in Groundbreaking GenAI Deal

Vodafone, a strong advocate of using generative AI (GenAI) to enhance its operations and services, has signed a major new deal with Google. The billion-dollar, 10-year agreement aims to help Vodafone stand out by giving more customers across Europe and Africa access to Google’s GenAI-powered devices, supported by Google Cloud and its Gemini (formerly Bard) models.
As part of the deal, Vodafone will expand access to AI-powered Pixel devices, continue promoting the Android ecosystem, and integrate Google Cloud's GenAI technology into Vodafone TV set-top boxes. By 2025, select regions will also see the introduction of Google One AI Premium subscription plans, featuring Gemini Advanced. Vodafone plans to further strengthen its partnership with Google Cloud and develop a cloud-native security service for its business clients. The agreement will extend AI-driven storage, security, and assistance to customers in 15 countries and Vodafone’s partners in 45 additional markets. In return, Google will leverage Vodafone's connectivity services to boost its workforce productivity.
Paolo Pescatore, founder and analyst at PP Foresight, described the agreement as a significant victory for Google in the race to lead in AI, while also marking a key step in Vodafone's long-term growth strategy during a time when telecom companies are facing pressure on their margins. Vodafone's CEO, Margherita Della Valle, emphasized the deal’s potential to provide millions of consumers with access to new AI-powered content and devices, unlocking innovative ways to learn, create, communicate, and consume media.
While Vodafone has excelled in improving its enterprise services, Pescatore noted that there is still room to enhance consumer experiences. He also suggested that Vodafone should not limit itself to partnerships with one tech company and should explore collaborations with other major players.
Kester Mann, an analyst at CCS Insight, pointed out that the partnership with Google will be interesting to observe in relation to Vodafone’s existing 10-year AI-focused deal with Microsoft, which has already led to improvements in its TOBi chatbot. Mann highlighted that the extensive scope of the Google deal reflects telecom operators' increasing dependence on large tech companies to support customers, deliver new services, and differentiate from competitors. He also noted that expanding the range of Pixel devices could help revive the struggling smartphone market, which has been dominated by Apple and Samsung.
Vodafone and Google Cloud have previously collaborated to create a data repository, or data lake, housing Vodafone’s data and AI services. Under the expanded partnership, Vodafone will use Google Cloud’s Vertex AI platform to develop and scale machine learning models and AI applications. Additionally, Vodafone plans to offer enhanced cyber protection for its business customers through a new cloud-native cybersecurity solution built on Google Cloud’s Security Operations platform.
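For readers unfamiliar with the tooling referenced here, the sketch below shows one common way a Gemini model is called through Vertex AI's Python SDK. The project ID, region, model name, and prompt are illustrative placeholders only; this is a minimal assumption-based example, not a description of Vodafone's actual deployment.

```python
# Minimal sketch: calling a Gemini model via the Vertex AI Python SDK.
# The project ID, region, model name, and prompt below are placeholders
# for illustration only; they are not taken from Vodafone's deployment.
import vertexai
from vertexai.generative_models import GenerativeModel

# Point the SDK at a Google Cloud project and region.
vertexai.init(project="your-gcp-project-id", location="europe-west1")

# Load a hosted Gemini model and generate a response to a text prompt.
model = GenerativeModel("gemini-1.5-pro")
response = model.generate_content("Summarise this customer support transcript: ...")
print(response.text)
```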
GenIP Sees Strong Early Demand for GenAI Services Post-Launch

GenIP, a company specializing in generative artificial intelligence (GenAI) services, shared positive initial results on Tuesday following the rollout of its enhanced GenAI services on 1 September. The AIM-listed firm, recently spun off from Tekcapital, reported securing orders worth $121,000 since the launch, which accounts for over 40% of its total 2023 revenue from its legacy services.
This included more than 80 orders for Invention Evaluator GenAI analytical assessments and four executive search assignments through its Vortechs division.
The board highlighted swift adoption of the new GenAI services by clients, with a Fortune 500 technology company doubling its order pace after the enhancements.
GenIP also revealed it is in advanced discussions with several research organizations, signaling a strong pipeline for future orders, and expects demand for its GenAI services to keep growing from both current and new clients.
The company's Invention Evaluator service, now fully powered by GenAI algorithms, is providing clients with faster, more efficient commercialization assessments.
Meanwhile, Vortechs, which offers executive search services utilizing advanced machine learning and natural language processing, has secured new assignments in the technology transfer sector, with further opportunities in development.
Along with its commercial success, GenIP has been invited to participate in high-profile international technology transfer events, including a healthcare-focused event in Singapore and a major technology transfer event in Brazil. At the latter, GenIP will address over 500 attendees from more than 60 technology transfer offices across Brazilian universities and research centers.
GenIP expressed optimism about its future business prospects and noted that it will provide updates on significant developments, though it does not plan to issue monthly sales or order reports.
“We are very encouraged by the strong demand for GenIP’s GenAI analytic services over the past five weeks, with total orders reaching approximately $121,000,” said CEO Melissa Cruz. “We remain focused on sustaining this momentum, expanding our client base, and delivering our services to research organizations worldwide.”
Meta Unveils 'Movie Gen' AI Model: Features, Functionality, and More

Meta has introduced its latest artificial intelligence model, Movie Gen, designed to generate both video and audio from text prompts. Competing with OpenAI's Sora, the Movie Gen AI model can create videos based on user descriptions, complete with matching audio. The company also noted that it can produce personalized videos by using real photos of individuals, placing them in various scenarios. These generated videos can be further refined or edited using additional text inputs. However, unlike the Llama AI model series, Meta is unlikely to release Movie Gen for open developer use, according to Reuters.
In a research paper outlining the model, Meta explained that Movie Gen has been trained for both text-to-image and text-to-video generation. When given a prompt, the model creates multiple colored images, each acting as a frame for the video. Movie Gen can generate high-definition (1080p) videos up to 16 seconds long at 16 frames per second (FPS). The model supports variable resolutions and durations in different aspect ratios. Meta noted that the model has learned to understand real-world visuals by "watching" videos, allowing it to reason about object movement, camera angles, subject-object interactions, and more.
For audio generation, Meta stated that Movie Gen can create corresponding sound using video-to-audio and text-to-audio methods. It can produce 48kHz audio with cinematic sound effects and music that sync to the video input. While the model is limited to generating only a few seconds of video, it can create coherent, long-form audio for videos lasting several minutes.
Meta highlighted that Movie Gen is trained to use both text and images, enabling it to generate videos featuring a specific person from a real image, while the actions are based on the user's input. The model also has video editing capabilities for both generated and real videos. According to the company, Movie Gen can make “precise and imaginative edits” to a provided video based on user descriptions. In a demonstration, the model successfully edited a video by altering the background and adding new elements to the main subject.
TheOpensource.AI News
OSI’s Open Source AI Definition Sparks Legal Concerns for Enterprise Adoption

As the CTO of Lightning AI and an early core contributor to PyTorch, open source is a core principle for me. After two decades in open source and a decade in AI, I am particularly interested in the Open Source Initiative’s (OSI) recent definition of open source AI. While the definition is comprehensive in many areas, I believe it overlooks a critical issue, especially for developers and businesses seeking to confidently adopt open source AI models.
The key issue that remains unaddressed — except in a footnote at the end of the document — is that the draft does not take a position on whether model parameters require a license or other legal instruments, nor whether they can be legally controlled once shared.
In practice, this means that an OSI-approved open source license for an AI system may not automatically guarantee "free for commercial use," as is typically assumed with traditional open source software.
A model could be trained on unlicensed data (such as books or movies) and still be considered open source if details about the data sources, preparation scripts, and related materials are made available. This ensures transparency, allowing for due diligence on the data's licensing status, but it differs from the traditional understanding of what open source guarantees.
For businesses, grasping these nuances is vital for making informed decisions about incorporating open source models into their AI systems. To be valuable, especially in business contexts, an open source AI definition must give users confidence that the licensed material can legally be used or redistributed as intended.
Two fundamental questions apply to the use of licensed software, whether open source or not:
What conditions does the licensor impose on users? Is the software free to use for any purpose, or are there limitations? Can modified versions be redistributed? Can it be used to build a SaaS without restrictions, or are royalties required?
Can the licensor release the software under the stated terms? The accompanying copyright notice usually addresses this.
In essence, the licensor must either own the copyright or have a license for the material used in the software, clearly defining the allowable uses and redistribution conditions.
Let’s explore this with some examples:
Example 1: Software Systems
I write software from scratch and, since I own the copyright, I can release it under the Apache 2.0 license, which permits anyone to use, modify, and redistribute it.
Example 2: Software Systems
I write software by copying parts of other software released under restrictive licenses, then release it under the Apache 2.0 license. However, it’s questionable whether I can legally do this, and users of this software might face legal action from the original authors.
Example 3: AI Systems
I train a model on copyrighted data (like books or YouTube videos) and release the model, including code and weights, under the Apache 2.0 license. This raises the question: do I hold the copyright over those model weights? Legal opinions vary, making this issue more complex.
The OSI’s definition leaves this issue out of scope. If a model is trained on unlicensed data but the scripts and weights are openly documented, it is still considered open source by their standards. While understandable, this approach offers limited practical value for companies trying to assess the legal risks of using such models.
By not addressing the licensing of weights, the OSI leaves a significant gap, reducing the effectiveness of these licenses in determining whether OSI-licensed AI systems are legally viable for real-world use.
To achieve widespread enterprise adoption, open source AI must address this issue. If the OSI does not, more precise definitions will likely emerge from other sources to fill the gap.
Voxel51 Launches FiftyOne 1.0: A Game-Changer for Visual AI Development

Voxel51, a company dedicated to advancing visual AI, has announced the major release of version 1.0 of its open-source platform, FiftyOne. FiftyOne Open Source serves as the foundation for Voxel51’s commercial product, FiftyOne Teams, and is designed to simplify and enhance the development of reliable visual AI applications.
Visual data accounts for nearly 60% of all data traffic and is crucial for AI systems that interact with the real world. However, AI developers often struggle with tools that can't efficiently manage the large volumes and diverse types of visual data, leading to delays and failures in over 80% of AI projects.
To address these challenges, Voxel51’s experts in computer vision and machine learning developed FiftyOne to revolutionize the use of visual data in AI development. The latest open-source release delivers an integrated solution that extracts valuable insights from visual data and automates the iterative process of building robust AI models. This milestone version is highly flexible, user-friendly, and customizable, offering extensive capabilities for the successful development of visual AI applications. Key features include:
Support for various types of visual data, such as 3D meshes, point clouds, and geometries, throughout exploration, curation, and model development.
A customizable framework that allows the creation of interactive data applications, custom dashboards, and operations using Python to fine-tune models and datasets.
Native integration with Elasticsearch for vector search, expanding the range of compatible tools.
Open-source machine learning techniques from FiftyOne Brain that help uncover hidden structures and relationships in visual data by assessing factors like uniqueness and similarity.
The open-sourcing of FiftyOne Brain’s machine learning algorithms underscores Voxel51’s commitment to open-source AI. As the AI industry debates the true meaning of open source, Voxel51 stresses the importance of transparency in every aspect of AI, from models to data and systems.
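As a rough illustration of the FiftyOne Brain capabilities described above, the sketch below assumes the standard open-source fiftyone Python package and its bundled quickstart dataset; it is a minimal example rather than Voxel51's recommended workflow.

```python
# Minimal sketch: scoring uniqueness and indexing similarity with FiftyOne Brain.
# Assumes the open-source fiftyone package and its bundled quickstart dataset.
import fiftyone as fo
import fiftyone.zoo as foz
import fiftyone.brain as fob

# Load a small sample dataset from the FiftyOne zoo.
dataset = foz.load_zoo_dataset("quickstart")

# Score how unique each sample is relative to the rest of the dataset.
fob.compute_uniqueness(dataset)

# Build a similarity index so visually similar samples can be queried later.
fob.compute_similarity(dataset, brain_key="img_sim")

# Explore uniqueness scores and similarity search interactively in the FiftyOne App.
session = fo.launch_app(dataset)
session.wait()
```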
Thousands of AI developers already rely on FiftyOne to build more accurate models, with reported improvements of up to 50% in team productivity and 30% in model accuracy. FiftyOne Teams, the commercial version, is used by organizations such as LG Electronics, Berkshire Grey, and Precision Planting to bring visual AI projects to fruition.
Key Quotes:
“It’s nearly impossible to create trustworthy visual AI when faced with the challenge of managing millions of data samples. This release provides a solid foundation for innovation in visual AI, giving developers the tools to ensure quality data and analysis throughout the development process. FiftyOne’s ease of use, flexibility, and transparency are democratizing best practices for visual AI development, and we’re excited to see continued progress in the community.”
— Brian Moore, Voxel51 co-founder and CEO
“FiftyOne offers us a centralized platform to better understand our data, uncover issues, and resolve annotation and model problems. Its flexibility and numerous integrations make it easy to work with, and it’s rapidly becoming the standard in the computer vision field.”
— Chris Hall, Data Scientist, Vivint Smart Home
“Open source fosters the transparency, collaboration, and rigor required for ongoing innovation in AI. We’re thrilled to continue investing in open source so the AI ecosystem can work together to deliver reliable, trustworthy advancements.”
TheClosedsource.AI News
OpenAI Faces Another Executive Departure After Mira Murati

Tim Brooks, a prominent figure at OpenAI, is departing the company to join Google’s DeepMind. At OpenAI, Brooks co-led the development of "Sora," an AI-powered video generation system, alongside William Peebles. Although Sora has yet to be released due to technical difficulties, Brooks played a major role in its development. Announcing his departure on X (formerly Twitter), Brooks expressed his appreciation for his two years at OpenAI, particularly his contributions to the Sora project.
Brooks is now set to work at Google DeepMind, Google’s AI research division, focusing on video generation technologies and "world simulators." These simulators, which are AI-driven models capable of creating entire virtual worlds, could have applications in gaming, film production, and AI training. Demis Hassabis, CEO of DeepMind, welcomed Brooks and noted that his expertise would be instrumental in making the "dream of a world simulator" a reality. These simulators, such as DeepMind’s Genie, can generate interactive virtual environments from various sources like images, sketches, and photos.
What Was Tim Brooks' Role at OpenAI?
At OpenAI, Tim Brooks played a crucial role in the development of the Sora video generation model. He guided the project’s research and led efforts in training large models for video creation. Sora was introduced in February 2024 but encountered technical challenges, particularly its slow processing speed, which took over 10 minutes to generate a 1-minute video clip. This inefficiency made it less competitive compared to faster video generation tools from companies like Luma and Runway. OpenAI is currently working on an improved version of Sora to address these performance issues, but Brooks has opted to explore new opportunities at DeepMind, where his skills can further advance video generation and AI technologies.
What Will Tim Brooks Do at Google DeepMind?
At Google DeepMind, Brooks will focus on video generation technologies and developing "world simulators." These simulators, though still in development, are models that create interactive, virtual environments, with applications in gaming, movies, and AI training. For instance, DeepMind’s Genie can generate virtual worlds from synthesized images, real photos, or sketches, which users can then interact with.
The potential for these world simulators is vast, from revolutionizing game design to simplifying film production and visual effects creation. In AI research, these simulators could train AI systems in virtual environments before real-world deployment. Brooks’ move to DeepMind highlights his interest in these groundbreaking technologies, and his experience with video generation at OpenAI will be valuable in advancing world simulators and AI-driven video creation.
In summary, Tim Brooks is leaving OpenAI, where he co-led the Sora video generation project, to join Google DeepMind. At DeepMind, he will work on cutting-edge video generation technologies and world simulators, which have significant potential in gaming, filmmaking, and AI training.
Unify Lands $12M Series A for Personalized ‘Warm Outbound’ Sales Approach

Unify, an AI-powered startup that leverages data to connect with potential buyers, has raised $12 million in a Series A funding round led by existing investors Emergence Capital and Thrive Capital. Other participants in the round include OpenAI Startup Fund, Neo, Abstract, 20Sales, and AltCap.
Founded 20 months ago, Unify went through OpenAI's Converge I accelerator last year and subsequently raised nearly $7 million in seed funding from the same group of investors.
The company was co-founded by Austin Hughes, who previously led Ramp's outbound sales program. During his time there, Hughes noticed that cold email outreach had become less effective over the past decade, which inspired the idea that sales, marketing, and revenue teams should make better use of data to reach customers with personalized content at the optimal time: when they're ready to buy.
Hughes joined forces with Connor Heggie, a machine learning research engineer at Scale AI, to develop AI-powered messaging that helps sales teams generate leads and close deals more efficiently.
While the concept of AI-enhanced sales tools isn't new, with the fast-growing sector of AI Sales Development Representative (AISDR) companies like 11x.ai, Regie.ai, and Artisan attracting significant venture capital, Hughes emphasizes that Unify is not an AISDR. Instead, Unify offers what Hughes calls "warm outbound" messaging. He explains that Unify allows users to fine-tune every aspect of the process, from the copy to the data sources.
Unify integrates with CRMs and other data warehouses to tailor its messages to potential buyers, while also scanning online data sources to identify prospects and detect buying signals. Rather than focusing solely on messaging, Hughes sees Unify as a data company, drawing inspiration from businesses like Zoominfo and Outbound.io.
Despite its distinct approach to AI-driven sales outreach, Unify is growing rapidly. The company’s revenue has already reached millions, and it boasts a roster of clients including Justworks and Lattice.
Unlock the future of problem solving with Generative AI!

If you're a professional looking to elevate your strategic insights, enhance decision-making, and redefine problem-solving with cutting-edge technologies, the Consulting in the age of Gen AI course is your gateway. It is perfect for anyone ready to integrate Generative AI into their work and stay ahead of the curve.
In a world where AI is rapidly transforming industries, businesses need professionals and consultants who can navigate this evolving landscape. This learning experience arms you with the essential skills to leverage Generative AI for improving problem-solving, decision-making, or advising clients.
Join us and gain firsthand experience of how state-of-the-art GenAI can take your problem-solving skills to new heights. This isn’t just learning; it’s your competitive edge in an AI-driven world.
Looking for unbiased, fact-based news? Join 1440 today.
Upgrade your news intake with 1440! Dive into a daily newsletter trusted by millions for its comprehensive, 5-minute snapshot of the world's happenings. We navigate through over 100 sources to bring you fact-based news on politics, business, and culture—minus the bias and absolutely free.
In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.
Highlights from GenAI, OpenAI, and ClosedAI: Dive into the latest projects and innovations from the leading organizations behind some of the most advanced AI models, across both open-source and closed-source AI.
Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.
Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realization of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!