
Elon Musk Sues OpenAI Executives for Fraud and Breach of Contract

Welcome to Towards AGI, your premier newsletter dedicated to the world of Artificial Intelligence. Our mission is to guide you through the evolving realm of AI with a specific focus on Generative AI. Each issue is designed to enrich your understanding and spark your curiosity about the advancements and challenges shaping the future of AI.

Whether you're deeply embedded in the AI industry or just beginning to explore its vast potential, "Towards AGI" is crafted to provide you with comprehensive insights and discussions on the most pertinent topics. From groundbreaking research to ethical considerations, our newsletter is here to keep you at the forefront of AI innovation. Join our community of AI professionals, hobbyists, and academics as we pursue the ambitious path toward Artificial General Intelligence. Let’s embark on this journey together, exploring the rich landscape of AI through expert analysis, exclusive content, and engaging discussions.

Elon Musk Sues OpenAI Executives for Fraud and Breach of Contract

Elon Musk is suing OpenAI CEO Sam Altman and President Greg Brockman in federal court for fraud and breach of contract, according to a report by Deadline. The California court filing describes the situation as “of Shakespearean proportions,” accusing Altman and his associates of betrayal once OpenAI’s technology neared Artificial General Intelligence (AGI). 

The complaint alleges that OpenAI abandoned its stated charitable mission of benefiting the public and protecting humanity in favor of self-enrichment for Altman and his partners. The shift is evidenced, the filing says, by OpenAI's partnership with Microsoft and its creation of numerous for-profit affiliates, recently valued at $100 billion. Musk, a co-founder of OpenAI who invested millions in its creation in 2015, initially withdrew the lawsuit on June 11 with the option to refile later; he has now done so in an effort to push OpenAI back toward its original mission of developing AGI for the benefit of humanity.

Marc Toberoff, Musk's attorney, has previously represented high-profile clients such as the creators of Superman, Jack Kirby of Marvel, and the estate of Steve Ditko, co-creator of Spider-Man and Doctor Strange. Toberoff also represented Ray Charles, once securing a preliminary injunction against Warner Bros. over a Dukes of Hazzard movie.

PwC India Launches GenAI Experience Lab in Gurgaon

PwC India has launched a GenAI Experience Lab at their Novus Tower office in Gurgaon. This facility is designed to explore the transformative potential of Generative AI, providing an immersive experience across various domains. The lab aims to demonstrate how Generative AI can revolutionize processes and enhance decision-making.

The GenAI Lab is equipped with new hardware to encourage creativity and collaboration, along with advanced AI experiments. Visitors can engage with award-winning prototypes and interact with PwC innovators to gain insights into practical GenAI applications.

Manpreet Singh Ahuja, PwC India's Chief Digital Officer and TMT Industry Leader, invited users on LinkedIn to visit the lab for immersive workshops. Multiple visitors or teams can explore over 30 assets to inspire ideas on transforming enterprise functions and gaining a competitive business advantage.

In July, PwC announced a strategic partnership with Google Cloud to leverage Generative AI for enhancing enterprise solutions. This partnership combines Google Cloud's AI expertise, including the Vertex AI platform and Gemini models, with PwC India's industry insights and consulting experience.

The collaboration aims to develop generative AI solutions to transform key business operations in sectors such as tax, healthcare, and legal services. Sanjeev Krishan, Chairperson of PwC India, highlighted the potential of Generative AI to create transformative solutions for businesses and communities. He emphasized the opportunity to democratize GenAI across industries to address real-world needs, aiming to create a positive societal impact and accelerate India's digital future by integrating Google Cloud's technology with PwC India's expertise.

Zoom Unveils Gen-AI Powered 'Docs' for Enhanced Workplace Collaboration

Zoom has introduced a new feature to enhance workplace collaboration. The video conferencing platform has added Zoom Docs, an AI-powered collaborative document editor, utilizing the company's generative AI assistant, Zoom AI Companion. Zoom Docs enables users to have AI generate documents that compile information discussed during meetings.

This tool simplifies file sharing among meeting participants and allows users to edit or annotate content on shared files using generative AI. Zoom Docs also offers customizable templates for various meeting types and project updates. Key features include multi-language translation support, various layouts with different content blocks, and integration with apps like Google Drive and Figma. Users can also request Zoom's AI Companion to track task progress, create tables and checklists, and generate meeting transcripts.

These new collaborative features are available in the Zoom Workplace app, version 6.1.6 or later. Users with Basic free accounts can create up to 10 shared docs without AI functionalities, while the full set of features is accessible to paid Zoom Workplace plan subscribers. There is no additional charge for Zoom Docs if the user already has a paid Zoom Workplace license.

Previously, many organizations had to use other workplace tools even if they used Zoom for meetings. This new feature offers more collaborative tools in one place. Google and Microsoft also provide AI features within their workplace collaboration platforms, Google Workspace and Microsoft Teams, respectively.

Coursera Launches Gen AI Skills Training and New Credentials

Learning platform Coursera is expanding its Generative AI Academy training portfolio with a new offering for teams. Building on the existing GenAI for Everyone and GenAI for Executives programs, the new GenAI for Teams program is designed to "equip teams to apply GenAI skills and best practices tailored to their specific business functions," according to a company blog post.

GenAI for Teams offers a catalog of hundreds of courses curated to help teams "unlock innovation and productivity across critical functions," the company explained. Topic areas include:

  • GenAI for Software and Product Teams: Courses include "Generative AI for Software Developers" by IBM, "Developing with GitHub Copilot and VS Code" by Microsoft, and "Responsible AI for Developers" by Google Cloud.

  • GenAI for Data Teams: Courses include "Generative AI for Data Scientists" by IBM, "GenAI in Data Analytics" by Meta, and "Practical Steps for Building Fair AI Algorithms" by Fred Hutch Cancer Center.

  • GenAI for Marketing Teams: Courses include "GenAI for Social Media Marketing" by Meta, "Content Marketing Using Generative AI" by the University of Virginia Darden, and "Generative AI for the Resilient Pricer" by Dartmouth College.

Additionally, Coursera has introduced seven new generative AI courses, specializations, and certificates to help learners develop their GenAI skills:

  • Generative AI for Software Development Skills Certificate from DeepLearning.AI.

  • Artificial Intelligence Graduate Certificate from the University of Colorado Boulder, which can count toward credit for the university's online Master of Science in Computer Science degree.

  • Generative AI in Marketing Specialization from UVA Darden.

  • Programming with Generative AI course from the Indian Institute of Technology Guwahati.

  • Responsible Generative AI Specialization from the University of Michigan.

  • Change Management for Generative AI course from Vanderbilt University.

  • Generative AI for Kids, Parents, and Teachers course from Vanderbilt.

Furthermore, eight of Coursera's entry-level Professional Certificates have been updated with generative AI-related content, including projects, readings, and videos:

  • Data Analyst from IBM.

  • Data Engineering from IBM.

  • Data Science from IBM.

  • Full Stack Software Developer from IBM.

  • Cybersecurity Analyst from Microsoft.

  • Power BI Data Analyst from Microsoft.

  • Marketing Analytics from Meta.

  • Social Media Marketing from Meta.


How Small Language Models and Open Source Are Revolutionizing AI

In the IT industry, the term "lean" describes processes that are streamlined for efficiency and cost-effectiveness, and the idea extends to Generative AI. Enterprise AI systems can be extremely costly, running into millions of dollars and consuming vast amounts of energy. As a result, many businesses are seeking more efficient, or "lean," AI solutions.

Many enterprises turn to public cloud providers to quickly integrate generative AI, given that these providers offer comprehensive ecosystems accessible through a simple dashboard. Large cloud providers have experienced revenue growth from this initial surge in AI spending. However, many enterprises find that using cloud services can result in higher operational costs compared to traditional in-house systems. Despite this, cloud usage remains a priority, which is prompting companies to explore more cost-effective cloud strategies and driving the adoption of lean AI.

Lean AI emphasizes efficiency, cost-effectiveness, and minimal resource consumption while maximizing business value. This strategic approach borrows from lean methodologies originally used in manufacturing and product development.

Lean AI focuses on optimizing the development, deployment, and operation of AI systems. It uses smaller models, iterative development practices, and resource-efficient techniques to minimize waste. By prioritizing agile, data-driven decision-making and continuous improvement, lean AI enables enterprises to leverage AI power sustainably and scalably, ensuring impactful and economically viable AI initiatives.

Enterprises are realizing that bigger isn't always better, with the evolving AI landscape now favoring small language models (SLMs) and open-source advancements. This shift is driven by the high costs and resource demands of generative AI systems using large language models (LLMs). Many businesses are reassessing the balance between costs and business value.

Challenges with LLMs: Large language models like OpenAI’s GPT-4 and Meta’s Llama exhibit remarkable capabilities in understanding and generating human language. However, their high computational demands and associated cloud costs strain budgets and limit widespread adoption. Energy consumption also poses financial and environmental challenges. Additionally, operational latency can hinder real-time responsiveness, and managing these complex models requires specialized expertise and infrastructure that many organizations lack.

Shifting to SLMs: This has led to the growing adoption of small language models for generative AI deployment in both cloud and non-cloud environments. SLMs are more efficient in terms of computational resources and energy consumption, resulting in lower operational costs and a better return on investment. Their faster training and deployment cycles make them attractive to enterprises needing agility and responsiveness in a fast-paced market.

Rather than relying on LLMs, enterprises are building tactically focused AI systems for specific use cases like equipment maintenance, transportation logistics, and manufacturing optimization, where lean AI approaches can provide immediate business value.

SLMs offer enhanced customization, being finely tuned for specific tasks and industry domains, resulting in specialized applications that deliver measurable business outcomes. Whether in customer support, financial analysis, or healthcare diagnostics, these leaner models prove their effectiveness.

The Open Source Advantage: The open source community has played a key role in advancing and adopting SLMs. Meta’s latest Llama 3.1, for example, offers various sizes with robust capabilities and lower resource demands. Models like Stanford’s Alpaca and Stability AI’s StableLM show that smaller models can rival or surpass larger ones, especially in domain-specific applications.

Cloud platforms and tools from Hugging Face, IBM’s Watsonx.ai, and others are making these models more accessible, reducing entry barriers for enterprises of all sizes. This democratization of AI capabilities allows more organizations to incorporate advanced AI without relying on expensive proprietary solutions.

Snowflake and Meta Partner to Advance Open-Source AI Development

The rapid advancement of Generative AI (GenAI) is fostering increased partnerships, co-creation initiatives, and co-selling opportunities between major software companies and other entities within the AI ecosystem. For instance, Snowflake has partnered with Meta to host and optimize Meta's latest and most advanced open-source family of large language models (LLMs), Llama 3.1.

In late July, Meta introduced the Llama 3.1 model family, describing Llama 3.1 405B as “the first openly available model that matches top AI models in terms of general knowledge, steerability, mathematics, tool use, and multilingual translation.”

Optimized by Snowflake

Coinciding with Llama 3.1’s launch, Snowflake announced it would host these LLMs in its Snowflake Cortex AI platform, aiming to facilitate the use of open-source models for building scalable AI applications.

One of the models included is Llama 3.1 405B, Meta’s largest and most capable open-source LLM. Snowflake’s AI Research Team has optimized the model and open-sourced its Massive LLM Inference and Fine-Tuning System Optimization Stack. This enables Cortex AI users to fine-tune the 405B model using just a single processing node, significantly reducing costs.

“Snowflake’s AI Research Team is pioneering how enterprises and the open-source community can efficiently leverage state-of-the-art open models like Llama 3.1 405B for inference and fine-tuning,” said Vivek Raghunathan, VP of AI Engineering at Snowflake.

Alongside the Llama 3.1 announcement, Snowflake also revealed the general availability of Snowflake Cortex Guard. This feature utilizes Meta’s Llama Guard 2 to help secure applications built in Cortex AI using Llama 3.1, as well as LLMs from AI21 Labs, Google, Mistral AI, Reka, and Snowflake.

Democratizing GenAI Development

Snowflake’s collaboration with Meta allows its customers to easily access, fine-tune, and deploy Llama 3.1 within its AI Data Cloud. “By integrating Meta’s Llama models into Snowflake Cortex AI, we provide our customers with access to the latest open-source LLMs,” said Matthew Scullion, CEO and co-founder of Matillion. “The inclusion of Llama 3.1 offers our team and users more options and flexibility to select the most suitable large language models for their use cases and remain at the forefront of AI innovation. Llama 3.1 will be available immediately on Snowflake’s launch day.”

Raghunathan highlighted that Snowflake is not only making Meta’s models directly accessible to customers via Snowflake Cortex AI but also providing new research and open-source code that supports 128K context windows, multi-node inference, pipeline parallelism, 8-bit floating point quantization, and more to advance AI for the broader ecosystem.
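To make the quantization point above concrete: 8-bit quantization stores each model weight in one byte instead of four, trading a small amount of precision for a large memory saving. Snowflake's stack uses 8-bit floating point (FP8); the toy sketch below uses the simpler integer variant purely to illustrate the memory/precision trade-off, and is not Snowflake's implementation.

```python
import random

def quantize_int8(weights):
    """Per-tensor absmax quantization: map floats onto the int8 range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights from 8-bit values and one scale factor."""
    return [q * scale for q in quantized]

# A toy "weight tensor": 4096 Gaussian-distributed floats.
rng = random.Random(0)
weights = [rng.gauss(0.0, 1.0) for _ in range(4096)]

q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# Each weight now fits in 1 byte instead of 4; the worst-case
# reconstruction error is bounded by scale / 2.
```

At the scale of a 405B-parameter model, this 4x (versus float32) or 2x (versus float16) reduction is what makes single-node fine-tuning plausible, since the dominant cost is simply holding the weights in accelerator memory.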

Earlier this year, we reported on Snowflake’s release of the Open-Source Arctic LLM. At that time, Sridhar Ramaswamy, CEO of Snowflake, stated, “By delivering industry-leading intelligence and efficiency in a truly open manner to the AI community, we are pushing the boundaries of what open-source AI can achieve. Our research with Arctic will greatly enhance our ability to deliver reliable, efficient AI to our customers.”

The availability of Llama 3.1 on Cortex AI, along with the open-sourcing of Snowflake’s advanced fine-tuning and inferencing systems, further cements Snowflake’s position at the forefront of open-source AI development. As the AI industry evolves, this competitive landscape is expected to become increasingly dynamic.

This is because more tech companies are collaborating with open-source communities to deliver what customers want: world-class AI infrastructure from trusted vendors with the flexibility to tailor AI applications to specific business needs.

OpenAI Co-Founder John Schulman Leaves to Join Anthropic

John Schulman, who joined OpenAI's founding team alongside Greg Brockman nine years ago, is leaving the company to join competitor Anthropic, which is backed by Google and Amazon.

“I’ve made the difficult decision to leave OpenAI. This choice stems from my desire to deepen my focus on AI alignment and start a new chapter in my career where I can return to hands-on technical work. I’ve chosen to pursue this goal at Anthropic, where I believe I can gain new perspectives and conduct research alongside individuals deeply engaged with the topics I’m most interested in,” John Schulman shared on X (formerly known as Twitter).

“To be clear, my departure is not due to a lack of support for alignment research at OpenAI. In fact, the company leaders have been very committed to investing in this area. My decision is personal, based on how I want to focus my efforts in the next phase of my career,” he added.

Schulman is not the only senior figure stepping back. Greg Brockman, OpenAI’s President and Co-Founder, announced on X that he would be taking a leave of absence until the end of the year.

“I’m taking a sabbatical through the end of the year. This is my first break since co-founding OpenAI nine years ago. The mission is far from complete; we still have a safe AGI to build,” said Brockman.

Another OpenAI co-founder, Ilya Sutskever, left the company in May to establish Safe Superintelligence Inc. (SSI), an AI startup, alongside Daniel Gross, a former Y Combinator partner, and Daniel Levy, a former OpenAI engineer.

Earlier this year, Jan Leike also departed from OpenAI and is currently working at Anthropic.

In February, Andrej Karpathy, another founding member of OpenAI, left to start the AI-powered education platform Eureka Labs.

“I am confident that OpenAI and the teams I was part of will continue to thrive without me. The post-training is in good hands with a deep bench of amazing talent. I’ve been encouraged to see the alignment team coming together with some promising projects. With leadership from Mia, Boaz, and others, I believe the team is in very capable hands,” Schulman added.

OpenAI Cancels Watermarking for ChatGPT Content

OpenAI has chosen not to implement text watermarking for ChatGPT-generated content, despite having the technology ready for nearly a year.

This decision, reported by The Wall Street Journal and confirmed in a recent OpenAI blog post update, arises from user concerns and technical challenges.

The Watermark That Wasn’t

OpenAI’s text watermarking system, designed to subtly alter word prediction patterns in AI-generated text, promised near-perfect accuracy. Internal documents cited by the Wall Street Journal claim it was “99.9% effective” and resistant to simple paraphrasing.

However, OpenAI revealed that more sophisticated tampering methods, like using another AI model for rewording, can easily circumvent this protection.
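OpenAI has not published the details of its system, but statistical text watermarks described in the research literature typically work the way the article summarizes: the sampler is nudged toward a pseudorandom "green list" of tokens derived from the preceding context, and a detector that knows the scheme counts how often green tokens appear. The sketch below is a hypothetical toy (invented vocabulary, a stand-in "model" that just samples randomly), not OpenAI's method, but it shows both the biasing and the detection side of the idea.

```python
import hashlib
import random

VOCAB = [f"w{i}" for i in range(1000)]  # toy vocabulary, purely illustrative

def green_list(prev_token, fraction=0.5):
    """Deterministically partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def generate(n_tokens, bias=0.9, seed=0):
    """Toy 'model': with probability `bias`, pick from the current green list."""
    rng = random.Random(seed)
    tokens = ["w0"]  # fixed start token
    for _ in range(n_tokens):
        greens = green_list(tokens[-1])
        if rng.random() < bias:
            tokens.append(rng.choice(sorted(greens)))
        else:
            tokens.append(rng.choice(VOCAB))
    return tokens[1:]

def green_fraction(tokens, first_prev="w0"):
    """Detector: fraction of tokens that fall in their context's green list."""
    prev, hits = first_prev, 0
    for t in tokens:
        if t in green_list(prev):
            hits += 1
        prev = t
    return hits / len(tokens)
```

A watermarked sequence scores a green fraction near the bias (about 0.9 here), while unbiased text hovers near 0.5. This also illustrates the weakness the article describes: rewording by another model resamples the tokens without the bias, pulling the green fraction back toward chance and defeating detection.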

User Resistance: A Key Factor

A significant factor in OpenAI’s decision was potential user backlash.

A company survey found that while global support for AI detection tools was strong, almost 30% of ChatGPT users said they would use the service less if watermarking were implemented.

This presents a significant risk for a company rapidly expanding its user base and commercial offerings.

OpenAI also expressed concerns about unintended consequences, particularly the potential stigmatization of AI tools for non-native English speakers.

The Search for Alternatives

Rather than abandoning the concept entirely, OpenAI is now exploring potentially “less controversial” methods.

Its blog post mentions early-stage research into metadata embedding, which could offer cryptographic certainty without false positives. However, the effectiveness of this approach remains to be seen.
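OpenAI has not described what its metadata approach would look like, but a minimal sketch of the general idea, assuming a provider-held secret key and a hypothetical record format, is an HMAC tag computed over the text plus provenance metadata. Verification either succeeds exactly or fails, which is what gives the "no false positives" property mentioned above.

```python
import hashlib
import hmac
import json

# Hypothetical: in practice this key would be held by the AI provider,
# and verification would go through a provider-run service.
SECRET_KEY = b"provider-held signing key"

def sign_content(text, model="example-model"):
    """Attach provenance metadata plus an HMAC-SHA256 tag over text + metadata."""
    meta = {"model": model, "generated": True}
    payload = json.dumps({"text": text, "meta": meta}, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"text": text, "meta": meta, "tag": tag}

def verify(record):
    """Recompute the tag; any edit to the text or metadata invalidates it."""
    payload = json.dumps(
        {"text": record["text"], "meta": record["meta"]}, sort_keys=True
    ).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])
```

The trade-off is visible even in this toy: a valid tag is cryptographic proof of origin, but nothing stops someone from simply stripping the metadata and passing the bare text along, which is one reason the effectiveness of the approach remains an open question.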

This news may be a relief to the many marketers and content creators who have integrated ChatGPT into their workflows.

The absence of watermarking means greater flexibility in how AI-generated content can be used and modified. However, it also means that ethical considerations around AI-assisted content creation remain largely in users’ hands.

OpenAI’s move highlights the difficulty in balancing transparency and user growth in AI.

The industry needs new ways to tackle authenticity issues as AI content booms. For now, ethical AI use is the responsibility of users and companies.

Expect more innovation here, from OpenAI or others. Finding a sweet spot between ethics and usability remains a key challenge in the AI content game.

In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.

Highlights from GenAI, OpenAI, and ClosedAI: Dive into the latest projects and innovations from the leading organizations behind some of the most advanced AI models, both open-source and closed-source.

Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.

Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realization of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!