- "Towards AGI"
- Posts
- Why OpenAI and Google Are Investing Big in This $1.5 Billion AI Unicorn?
Why Are OpenAI and Google Investing Big in This $1.5 Billion AI Unicorn?
Welcome to Towards AGI, your premier newsletter dedicated to the world of Artificial Intelligence. Our mission is to guide you through the evolving realm of AI with a specific focus on Generative AI. Each issue is designed to enrich your understanding and spark your curiosity about the advancements and challenges shaping the future of AI.
Whether you're deeply embedded in the AI industry or just beginning to explore its vast potential, "Towards AGI" is crafted to provide you with comprehensive insights and discussions on the most pertinent topics. From groundbreaking research to ethical considerations, our newsletter is here to keep you at the forefront of AI innovation. Join our community of AI professionals, hobbyists, and academics as we pursue the ambitious path toward Artificial General Intelligence. Let’s embark on this journey together, exploring the rich landscape of AI through expert analysis, exclusive content, and engaging discussions.
TheGen.AI News
Why Are OpenAI and Google Investing Big in This $1.5 Billion AI Unicorn?

On July 4, 2022, Winston Weinberg joined a call with the entire executive team of OpenAI. Just two weeks earlier, Weinberg, then 27, and his friend Gabe Pereyra had reached out to the company about a chatbot they had developed that could answer legal questions by pulling information from a public online forum. With an impressive 86% accuracy rate, the tool caught the attention of OpenAI’s executives, leading them to schedule a meeting on a holiday.
That July Fourth call marked the beginning of Harvey, a legal AI platform that enables lawyers to upload files and input requests to automate tasks such as document analysis, consolidating extensive research, or tracking productivity. Last month, Harvey secured a $100 million Series C funding round with contributions from Google Ventures, OpenAI, Kleiner Perkins, Sequoia Capital, and tech investor Elad Gil, bringing the company’s total funding to $216 million. The new round values Harvey at $1.5 billion, making it the highest-valued startup in OpenAI’s portfolio.
“I practiced law for less than a year, and quickly realized that much of the work done by associates could have been handled even before attending law school,” says Weinberg, who was named to Forbes’ Under 30 list for 2024.
After law school, Weinberg began using ChatGPT for the monotonous tasks assigned to new associates, such as reviewing thousand-page documents. It was then that he approached Pereyra, a former Google AI research scientist and his roommate, with an idea: automate legal tasks so that associates like himself could focus on more meaningful work, like developing thoughtful arguments for clients.
Weinberg recalls working over 120 hours some weeks—juggling the AI project while being a full-time litigation lawyer—before publicly launching Harvey in late 2022.
Upon its launch, Harvey announced its first client was the billion-dollar law firm A&O Shearman, followed by PwC, one of the Big Four accounting firms. Unlike many business-facing companies that start by securing smaller clients, Harvey targeted the industry’s biggest players from the outset.
“One thing I was certain about, even though others doubted me, was going after the largest and most prestigious firms first,” Weinberg says. “Trust is the biggest factor with AI, and earning the trust of these major institutions early on is crucial since we’re helping them with very high-profile work.”
To win over such clients, Weinberg would find the most recent public legal document a potential client had filed and use Harvey’s model to generate counterarguments that could be used against them in court. This personalized approach proved successful, and it took winning the trust of only one or two major clients to build momentum. Since then, Harvey has signed law firms including O’Melveny & Myers, Vinson & Elkins, Gleiss Lutz, Reed Smith, and Macfarlanes.
While it’s an impressive start, AI is still relatively new and unregulated, which could lead to issues such as breaching confidentiality or producing inaccurate information. Recent incidents—like Donald Trump’s former lawyer Michael Cohen submitting a court motion with AI-generated fake rulings or rapper Pras seeking a retrial because his attorney misused the technology—are prompting law firms and startups to exercise greater caution when adopting AI, according to legal technology strategist Nicole Black.
To navigate the challenges of using AI in the legal field, Weinberg has brought in industry veterans, including long-time Google lawyers Andrew Hyman and John LaBarre, who joined Harvey as general counsel, and former Wachtell partner Gordon Moodie as Harvey’s chief product officer. Weinberg says the latest funding round will help the company hire more top-tier lawyers and engineers to continue refining their model, ensuring it remains highly specialized for law firms while maintaining accuracy and data privacy.
Despite the competition in the legal AI space, Harvey’s broad range of services gives it an edge over legal tech startups with narrower focuses, like Spellbook, which specializes in contracts. However, direct competitors like Casetext, acquired by Thomson Reuters last year for $650 million, pose a significant challenge. Still, among the thousands of startups developing software using OpenAI’s technology, OpenAI’s COO Brad Lightcap believes Harvey’s speed and ambitious vision make it stand out.
“You give them a bit of context and advice, and they come back a week later significantly improved,” Lightcap says. “They see themselves not just as software sellers, but as partners in the industry.”
The Fall of Generative AI: A Necessary Step Toward Long-Term Success

Two years ago, the term "generative AI" started flooding my inbox. Although it wasn’t new—it was mentioned in one of Gartner’s hype cycle reports back in 2020—by the end of summer 2022, the number of messages and pitches I received made it clear that AI tools capable of creating content, like text, images, and code, were gaining traction. When OpenAI released ChatGPT in November 2022, generative AI quickly became a mainstream phenomenon.
However, recent developments have shifted this optimistic narrative. Gartner’s latest hype cycle indicates that generative AI has moved past the "Peak of Inflated Expectations" and is now heading toward the "Trough of Disillusionment." If this is accurate, the consequences could be harsh—investment may dwindle, startups might fail, and layoffs could follow.
For those who have worked hard and taken risks in the generative AI space, this market correction may feel unfair and harsh. But, according to Kjell Carlsson, a former Forrester Research analyst now leading AI strategy at Domino Data Lab, this adjustment is crucial for the long-term health of the AI industry. Carlsson pointed out that generative AI is just one part of a broader AI toolkit that includes other technologies like predictive AI and machine learning, which were already delivering real value before generative AI gained popularity. "There's no magic button; it's about using the right technologies for the right use cases," he explained.
We shouldn’t fear the trough. Generative AI isn't going away. Tools like ChatGPT, Microsoft Copilot, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama have already integrated into our daily lives for productivity, efficiency, or entertainment. Just as we’ve come to rely on Google for instant information, we'll soon expect easy-to-read meeting summaries, automated memos, and image and presentation creation with just a few words.
However, it's important to acknowledge that the massive $1 trillion investment in generative AI has yet to fully pay off. While this might not be as extreme as the dot-com bubble, it's clear that generative AI is facing a necessary reality check.
Carlsson, who was one of the first industry analysts to support generative AI, reflected that while its success is undeniable, the expectations for how quickly it would impact large organizations were unrealistic. Gartner’s global chief of research, Chris Howard, recently explained in a video that the "Trough of Disillusionment" is an essential stage for any technology. After initial excitement, mainstream users often find that new technology doesn’t meet their high expectations, leading to a period of adjustment and refinement.
"It's not a dark or dangerous place," Howard said. "It's where we figure out how to make something work—or not."
Businesses Reap Big Rewards by Combining WhatsApp with Gen AI

The Gulf countries are emerging as leaders in advanced AI adoption, making substantial investments that position them at the forefront of global innovation. These economies alone are projected to see AI contribute up to $150 billion, accounting for 9 percent of their combined GDP.
The increasing accessibility of generative AI solutions is driving businesses to invest in the technology, unlocking immense business value. Generative AI enables personalized real-time interactions, automates routine tasks, and delivers seamless customer experiences at scale.
A key area where generative AI is making a significant impact is in conversational messaging, particularly through platforms like WhatsApp. With over 200 million monthly active users in the Middle East, WhatsApp's extensive reach combined with GenAI’s transformative potential offers businesses an unparalleled opportunity to engage customers throughout their journey, driving growth.
Unlike traditional communication methods, WhatsApp for Business facilitates personalized and contextually relevant two-way conversations. For example, its Flows feature allows businesses to streamline appointment scheduling, drive customer acquisition through tailored offers, and enhance retention by reducing churn.
With the addition of Gen AI, enterprises can achieve even more. They can provide conversational buying guidance with tailored product recommendations, enabling seamless product discovery that enhances the buyer experience and helps customers make informed decisions. E-commerce players like India’s JioMart have leveraged WhatsApp’s capabilities to boost conversions and sales: JioMart reports a 15 percent conversion rate and a 68 percent repeat purchase rate through its WhatsApp conversational assistant, which can fulfill grocery orders directly on the platform. Other brands have similarly improved their ROI through personalized marketing campaigns on WhatsApp, using AI to lift customer satisfaction.
Integrating Gen AI capabilities with WhatsApp goes well beyond traditional WhatsApp marketing. AI agents represent the next big shift, enabling businesses to handle even complex tasks end to end. As user behavior shifts from search to prompts, marketing is evolving into more of a dialogue, with AI-powered conversational interfaces engaging customers in real-time, personalized conversations at scale.
Businesses that harness the combined power of WhatsApp and generative AI will be well-positioned for sustainable growth and innovation in the digital age. This strategic integration will set new standards for customer engagement, operational efficiency, and overall business performance.
Major Finance Firm Launches Gen AI Bootcamp for All 35,000 Staff

S&P Global has announced that it will begin a “comprehensive gen AI learning program” for all 35,000 of its employees starting this month. The training, developed in partnership with Accenture, will utilize curated content from Accenture’s LearnVantage platform to build “AI fluency” among workers. This initiative is part of S&P Global’s strategy to address the increasing dependence on AI within the financial services industry and to equip its employees with the skills needed to effectively leverage generative AI.
"AI is for everyone, and at S&P Global, we aim to empower our employees and customers to adopt, build, and innovate with gen AI," said Bhavesh Dayalji, S&P Global’s chief AI officer. He emphasized that generative AI will continue to revolutionize the financial services sector by improving daily operations and customer interactions.
Accenture and S&P Global are also collaborating on AI development and benchmarking across the financial services industry. This partnership combines Accenture’s Foundation Model Services, which assist companies in managing and scaling large language models (LLMs), with S&P AI Benchmarks by Kensho, which assess LLM performance for financial and quantitative applications.
LLMs are AI models trained on vast amounts of text to understand and generate natural language. A forthcoming study from the University of Chicago Booth School of Business suggests that these models can sometimes outperform financial analysts in predicting future earnings.
In recent years, AI's expanding capabilities have become a focal point in the financial services industry. Banks and financial institutions have intensified their efforts to hire AI talent, deploy AI-powered tools and assistants, and explore hundreds of new AI use cases across their operations.
For instance, Bank of America plans to invest $4 billion this year alone in developing new technologies, including AI. JPMorgan Chase, a leader in AI adoption among major banks, has over 2,000 AI and machine learning experts and data scientists working on more than 400 AI use cases. Mary Erdoes, head of JPMorgan’s asset and wealth management division, stated at the bank’s investor day in May that all new hires will receive AI training.
S&P Global believes its collaboration with Accenture will enable banks, insurers, and capital markets firms to enhance the performance and effectiveness of their solutions while ensuring that responsible AI practices are integrated into every application.
TheOpensource.AI News
LG Launches South Korea’s First Open-Source AI, Taking on Global Leaders

LG AI Research has introduced Exaone 3.0, marking South Korea’s debut in the competitive global AI arena, which has been largely dominated by U.S. tech giants and rising contenders from China and the Middle East. It is the country’s first open-source artificial intelligence model, a significant move aimed at advancing AI research and fostering a strong AI ecosystem in Korea.
Exaone 3.0, a 7.8 billion parameter model proficient in both Korean and English, represents a strategic shift for LG, traditionally recognized for its consumer electronics, as the company positions itself at the cutting edge of AI innovation. By making Exaone 3.0 open-source, LG is not only demonstrating its technological capabilities but also potentially paving the way for new revenue opportunities in cloud computing and AI services.
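In practice, an open-weight release means developers can pull the published model and run it on their own infrastructure. The sketch below uses the Hugging Face transformers library to illustrate this; the repository id and generation settings are assumptions for illustration, not LG’s documented instructions.

```python
# Hypothetical sketch of loading an open-weight model such as Exaone 3.0 with
# Hugging Face transformers. The repo id below is an assumption -- consult LG AI
# Research's official release page for the actual name and license terms.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # load in the checkpoint's native precision
    device_map="auto",       # place layers on available GPU(s)/CPU
    trust_remote_code=True,  # the release may ship custom model code
)

prompt = "Summarize the key claims in this patent abstract: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```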
This model enters a highly competitive field of open-source AI models, including China’s Qwen from Alibaba and the UAE’s Falcon. Qwen, which saw a major update in June, has quickly gained traction, boasting over 90,000 enterprise clients and leading performance rankings on platforms like Hugging Face, outpacing Meta’s Llama 3.1 and Microsoft’s Phi-3.
Similarly, the UAE’s Technology Innovation Institute launched Falcon 2, an 11 billion parameter model, in May, claiming it surpasses Meta’s Llama 3 in several benchmarks. These advancements emphasize the growing global competition in AI, with countries beyond the U.S. making substantial progress. The rise of models from Asia and the Middle East signifies a shift in the AI landscape, challenging the dominance traditionally held by Western nations.
LG’s strategy mirrors that of Chinese companies like Alibaba, using open-source AI to expand cloud services and accelerate commercialization. This approach allows LG to rapidly enhance its AI models through community input while also building a potential customer base for its cloud offerings. By providing a powerful open-source model, LG could attract developers and enterprises to create applications on its platform, thereby boosting the adoption of its broader AI and cloud infrastructure.
Exaone 3.0 offers improved efficiency, with LG reporting a 56% reduction in inference time, a 35% decrease in memory usage, and a 72% reduction in operational costs compared to its previous model. These enhancements are vital in the competitive AI field, where efficiency directly translates into cost savings for businesses and better user experiences. The model has been trained on 60 million examples of specialized data covering patents, code, math, and chemistry, with plans to expand this to 100 million examples by the end of the year, reflecting LG’s dedication to creating a versatile and comprehensive AI system.
Exaone 3.0 represents South Korea’s significant leap into the global AI competition. LG’s entry into open-source AI has the potential to challenge the dominance of established players like OpenAI, Microsoft, and Google. It also showcases South Korea’s ability to develop cutting-edge AI models that can compete on a global scale, a noteworthy achievement for a country known for its technological innovation but relatively quiet in the open-source AI sector until now.
The success of Exaone 3.0 could have far-reaching effects. For LG, it could signify a successful diversification into AI and cloud services, opening up new revenue streams. For South Korea, it represents a bold entry onto the global AI stage, potentially attracting international talent and investment. On a broader level, the proliferation of open-source models like Exaone 3.0 could democratize access to advanced AI technologies, driving innovation across various industries and regions.
As the AI competition heats up, the true impact of Exaone 3.0 will be measured not just by its technical specs but by its ability to foster a vibrant ecosystem of developers, researchers, and businesses leveraging its capabilities. The upcoming months will be critical in determining whether LG’s ambitious move will succeed in reshaping the global AI landscape.
Meet Felafax: The Startup Slashing ML Training Costs with Open-Source AI

Spinning up AI workloads in the cloud can be a challenging process. It involves a lengthy training setup that requires installing various low-level dependencies, often leading to notorious CUDA errors. The process also includes attaching persistent storage, waiting up to 20 minutes for the system to boot, and dealing with other complexities. Machine learning (ML) support for non-NVIDIA GPUs is also limited, even though Google TPUs and other alternative chipsets can offer roughly 30% lower total cost of ownership with competitive performance. Meanwhile, the increasing size of models like LLaMa 3.1 405B requires complex multi-GPU orchestration, since such models cannot run on a single accelerator.
Enter Felafax, a promising startup. Felafax’s new cloud layer simplifies building AI training clusters, starting at 8 TPU cores and scaling up to 2,048 cores. To help users get started quickly, it offers easy-to-set-up, pre-configured templates for PyTorch XLA and JAX, along with simplified LLaMa fine-tuning via pre-built notebooks for the LLaMa 3.1 models (8B, 70B, and 405B). Felafax handles the complex multi-TPU orchestration behind the scenes.
Felafax is set to launch an open-source AI platform in the coming weeks, positioned as an alternative to NVIDIA’s CUDA ecosystem. The platform, built on JAX and OpenXLA, claims roughly 30% lower cost than comparable NVIDIA setups while supporting AI training on a range of non-NVIDIA hardware, including Google TPUs, AWS Trainium, and AMD and Intel GPUs.
Key Features
- Quickly spin up large training clusters with one click, ranging from 8 to 1024 TPUs or non-NVIDIA GPU clusters. The framework manages training orchestration effortlessly, regardless of cluster size.
- Built on a non-CUDA XLA architecture, Felafax’s training platform delivers what the company says is performance equivalent to NVIDIA’s H100 at roughly 30% lower cost.
- Customize a training run directly from a Jupyter notebook with a single click, keeping full control over the run while minimizing setup errors.
- Felafax handles all the heavy lifting, including optimizing model partitioning for LLaMa 3.1 405B, managing distributed checkpointing, and orchestrating training across multiple controllers, allowing you to focus on innovation rather than infrastructure.
- Choose from standard templates for PyTorch XLA and JAX, with pre-configured environments that include all necessary dependencies, enabling you to start immediately.
- According to the company, its JAX implementation of LLaMa 3.1 reduces training time by 25% and increases GPU utilization by 20%, helping users get more value from high-cost computing resources.
Felafax is building its open-source platform to make high-performance AI computing more accessible on hardware not manufactured by NVIDIA, with the stated goal of cutting machine learning training costs by 30%. There is still much progress to be made, but its efforts could lower costs, expand access, and foster innovation across the AI field.
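To make the orchestration concrete, here is a minimal, self-contained JAX sketch of the kind of data-parallel training step that platforms like Felafax automate. It is not Felafax’s API; the toy linear model, learning rate, and batch shapes are placeholders chosen for illustration.

```python
# A toy data-parallel training step in JAX: each device computes gradients on
# its shard of the batch, gradients are all-reduced (averaged), and the
# replicated parameters are updated in lockstep. Platforms such as Felafax
# automate this orchestration (plus sharding, checkpointing, etc.) at scale.
import functools

import jax
import jax.numpy as jnp


def loss_fn(params, x, y):
    # Mean-squared error of a toy linear model.
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)


@functools.partial(jax.pmap, axis_name="batch")
def train_step(params, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    # Average gradients across all devices (the cross-device all-reduce).
    grads = jax.lax.pmean(grads, axis_name="batch")
    return jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, params, grads)


n_dev = jax.local_device_count()  # TPU cores, GPUs, or CPU devices
key = jax.random.PRNGKey(0)

# Replicate parameters across devices and give each device its own batch shard.
params = jax.device_put_replicated({"w": jnp.zeros((16, 1)), "b": jnp.zeros((1,))},
                                   jax.local_devices())
x = jax.random.normal(key, (n_dev, 32, 16))  # per-device batch of 32 examples
y = jax.random.normal(key, (n_dev, 32, 1))

params = train_step(params, x, y)  # one synchronized update across all devices
```

On a real cluster, the same pattern extends to model partitioning, distributed checkpointing, and multi-controller coordination, which is the heavy lifting Felafax packages behind its templates.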
TheClosedsource.AI News
OpenAI's New System Card Identifies GPT-4o as a 'Medium' Risk Model

OpenAI has released the GPT-4o System Card, a comprehensive research document detailing the safety protocols and risk assessments conducted before the model's public debut in May. The document provides insights into OpenAI's efforts to address potential risks linked to its latest multimodal AI model.
Before the launch, OpenAI followed its standard procedure by involving external red teamers—security experts tasked with identifying system vulnerabilities. These experts examined possible risks related to GPT-4o, such as unauthorized voice cloning, the generation of inappropriate content, and copyright violations.
According to OpenAI's internal framework, GPT-4o was classified as having a "medium" risk level. This overall risk rating was based on the highest individual risk score across four main categories: cybersecurity, biological threats, persuasion, and model autonomy. While most categories were deemed low risk, the persuasion category stood out, as certain GPT-4o-generated text samples showed a higher persuasive ability than human-written texts.
"The system card includes preparedness evaluations created by an internal team, as well as external testers listed on OpenAI’s website as Model Evaluation and Threat Research (METR) and Apollo Research, both of which contribute to evaluations of AI systems," stated OpenAI spokesperson Lindsay McCallum Rémy.
This system card follows similar publications for earlier models like GPT-4, GPT-4 with vision, and DALL-E 3, reflecting OpenAI's commitment to transparency and collaboration with external parties in evaluating its AI systems.
The release is particularly timely as OpenAI faces ongoing criticism over its safety practices. Concerns have been voiced by internal employees and external stakeholders, including a recent open letter from Senator Elizabeth Warren and Representative Lori Trahan, calling for increased accountability and transparency in OpenAI's safety review processes.
The release of a powerful multimodal model like GPT-4o close to the US presidential election has raised worries about the potential for misinformation and harmful use. OpenAI's system card aims to address these issues by highlighting the company's proactive risk mitigation efforts through real-world scenario testing.
Despite these efforts, there are ongoing demands for more transparency and external oversight. The focus has expanded beyond just training data to include the entire safety testing process. In California, new legislation is being considered to regulate large language models, including holding companies responsible for any harm caused by their AI systems.
OpenAI Expands Free Tier to Include DALL-E, with Daily Image Limit

OpenAI is now granting free-tier users access to its image generation AI model, DALL-E 3. In a post on the social media platform X, the Microsoft-backed AI company announced that it is enabling ChatGPT free users to generate up to two images per day using the DALL-E 3 model. Free users can prompt the ChatGPT chatbot to create images based on their input.
DALL-E 3, a generative AI model launched by OpenAI in September last year, was initially available only to ChatGPT Plus subscribers. A key feature of DALL-E 3 is that users can craft precise prompts through ChatGPT’s conversational interface, making it easier to generate the desired images. The chatbot’s text-generation capability simplifies and refines the input prompt, which the model then uses to create the image.
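For developers who want programmatic access rather than the chat interface, the same model is also exposed through OpenAI’s paid Images API, which is separate from the free ChatGPT tier described above. A minimal sketch, assuming the current openai Python SDK and an API key in the environment:

```python
# Minimal sketch: generating an image with DALL-E 3 via OpenAI's Images API.
# This is the paid developer API, not the free ChatGPT tier; requires
# `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor illustration of a lighthouse at dawn",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # temporary URL pointing at the generated image
```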
Earlier this year, OpenAI introduced the ability for users to edit specific sections of images generated by DALL-E 3. The company added a new selection tool that can be accessed by clicking on a generated image. This tool allows users to select a portion of the image to edit and then describe the changes in chat through a text prompt.
In related news, OpenAI has published a "System Card" for its latest multimodal AI model, GPT-4o. This document outlines the safety measures the company has implemented and provides an evaluation report of the AI model. According to the document, OpenAI assessed risks such as unauthorized voice generation, the creation of violent speech, the generation of copyrighted music, and more. The evaluation process resulted in the GPT-4o model being rated as "Medium Risk" overall. This rating was determined based on four categories: cybersecurity, biological threats, model autonomy, and persuasion. While GPT-4o was rated as low risk in the first three categories, research indicated that the model's generated content could be particularly persuasive.
In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.
Highlights from TheGen.AI, TheOpensource.AI, and TheClosedsource.AI: Dive into the latest projects and innovations from the leading organizations behind some of the most advanced open-source and closed-source AI models.
Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.
Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realization of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!