- "Towards AGI"
- Posts
- 'I’m Just a Fool': Elon Musk’s Frustration Over OpenAI Funding
'I’m Just a Fool': Elon Musk’s Frustration Over OpenAI Funding
A Thought Leadership platform to help the world navigate towards Artificial General Intelligence (AGI). We are committed to navigating the path towards AGI by building a community of innovators, thinkers, and AI enthusiasts.
Introducing the GEN Matrix: Your Essential Guide to Generative AI Trailblazers!
Dive into the forefront of Generative AI with the GEN Matrix—your ultimate resource for discovering the innovators, startups, and organizations leading the AI revolution. Our platform features three categories spotlighting:
Organizations: Early adopters advancing GenAI in production.
Startups: Pioneers across diverse GenAI layers (chips, infrastructure, applications, etc.).
Leaders: Key figures driving GenAI innovation and adoption.
Know someone making strides in GenAI? Nominate them to be featured in the GEN Matrix! Whether you're a business seeking AI solutions or a developer looking for tools, explore GEN Matrix to stay at the forefront of AI excellence.
TheGen.AI News
'I’m Just a Fool': Elon Musk’s Frustration Over OpenAI Funding
Billionaire entrepreneur Elon Musk, co-founder of OpenAI, has filed an amended lawsuit against the AI organization and its CEO, Sam Altman, in a U.S. court. The suit alleges that OpenAI has strayed from its original non-profit mission, which Musk claims violates the founding principles of the organization.
The lawsuit cites years of internal disputes over funding and organizational strategy, supported by email exchanges between Musk and OpenAI’s leadership. Musk, a key financial backer during OpenAI’s early years, reportedly contributed over $44 million in cash between 2016 and 2020, in addition to providing office space and covering operational expenses. These contributions, the suit argues, were vital to OpenAI’s establishment and success.
"Without Musk’s involvement, backing, and substantial supportive efforts, there would have been no OpenAI," the lawsuit asserts.
Key Disputes Highlighted
In 2016, OpenAI’s leadership, including Altman, negotiated a deal with Microsoft for discounted compute services. Musk initially rejected the agreement due to a clause requiring OpenAI to promote Microsoft’s products, calling the arrangement "nauseating" in an email to Altman. Although the deal was later revised to remove the promotional requirement, it remained a source of tension.
The lawsuit also focuses on a 2017 proposal by Altman and co-founder Greg Brockman to transition OpenAI into a for-profit entity. The move was intended to attract shareholders and raise capital, but Musk strongly opposed it. In a September 2017 email, he expressed frustration, writing:
"Either go do something on your own or continue with OpenAI as a non-profit. I will no longer fund OpenAI until you have made a firm commitment to stay, or I’m just being a fool providing free funding to a start-up. Discussions are over."
Broader Concerns
Musk argues that OpenAI’s pivot to a for-profit model undermines its mission of making AI broadly accessible and beneficial. He also accuses OpenAI and Microsoft of monopolizing the generative AI market, deviating from their original commitment to ensure AI technologies are distributed widely and responsibly.
"The wise course of action is to approach the advent of AI with caution and ensure its power is not concentrated in the hands of any one company or individual. That is why we created OpenAI," Musk stated.
Legal Demands
The lawsuit seeks to void OpenAI’s licensing agreements with Microsoft and recover what Musk describes as "ill-gotten gains." It further accuses the two organizations of compromising OpenAI’s founding vision in favor of profit-driven objectives.
Generative AI Takes Center Stage: Promise Studio Secures Powerhouse Backing
A new Hollywood entertainment studio, Promise, is aiming to place generative artificial intelligence (Gen AI) at the core of its operations, backed by two influential entities: The North Road Company and Andreessen Horowitz (a16z). The studio is led by industry veterans George Strompolos, Jamie Byrne, and Dave Clark. Strompolos, the founder and former CEO of Fullscreen, will serve as CEO; Byrne, previously head of creator partnerships at YouTube, will take on the role of president and COO; and Clark, a filmmaker experienced in leveraging AI, will be the chief creative officer.
The investment round was spearheaded by Peter Chernin’s North Road and a16z partner Andrew Chen, providing not only financial backing but also access to a powerful network spanning Hollywood and Silicon Valley. Notably, Chernin’s TCG had previously invested in Fullscreen.
Promise is entering the market during a turbulent time for the entertainment industry, as Gen AI continues to generate debate. The technology was a key topic in last year’s Writers Guild of America (WGA) and SAG-AFTRA strikes, with unions advocating for protections and compensation for creatives whose work or likenesses might be used in AI-generated productions.
The studio plans to leverage its leaders' expertise in creator-focused content and proprietary technology to help storytellers enhance their productions. Promise aims to collaborate with leading Gen AI artists, Hollywood talent, and rights-holders to develop a multi-year slate of groundbreaking films and series. According to Strompolos, the studio is committed to investing in a new generation of creators who blend traditional filmmaking skills with advanced technical expertise, aiming to redefine storytelling through AI.
Peter Chernin praised Promise’s approach, emphasizing its dedication to prioritizing artists and creatives while integrating Gen AI into the production process. “This team has developed the most innovative and user-friendly model we’ve encountered,” he stated.
At the heart of Promise’s operations is MUSE, a proprietary software platform that integrates Gen AI tools throughout the production workflow. This technology will underpin all of the studio’s projects.
In an open letter on its website, Promise’s founders emphasized their vision of combining advanced technology with human creativity, highlighting the essential role of writers, actors, producers, directors, and other creative professionals in bringing stories to life. “Technology is the backbone, but the creative community is the heart and soul,” the letter stated.
North Road’s portfolio includes stakes in Kinetic Content, Left/Right, 44 Blue Productions, Words + Pictures, Questlove’s Two One Five Entertainment, and Peyton Manning’s Omaha Productions. Meanwhile, a16z, with $44 billion in committed capital, has supported companies such as Airbnb, Slack, Instacart, Substack, and Roblox.
Sageance Unveiled: Revolutionizing Gen AI with 90% Power Savings
Sageance, a Silicon Valley startup, is making strides in machine learning with analog chips designed to drastically reduce energy consumption, particularly for large generative AI models. The company claims its technology can operate Meta's Llama 2-70B large language model (LLM) at a fraction of the power, cost, and space required by Nvidia’s H100 GPU systems—one-tenth the power, one-twentieth the cost, and one-twentieth the space.
Sageance’s CEO and founder, Vishal Sarin, who has worked on flash memory technology for over 30 years, says the company’s focus on power efficiency stems from recognizing energy consumption as a major barrier to AI adoption. “The issue has grown exponentially with the rise of generative AI, which has caused model sizes to balloon,” he explains.
The startup’s energy-saving edge comes from analog AI’s ability to avoid extensive data movement and utilize fundamental physical laws for machine learning’s most computationally demanding task—multiply and accumulate (MAC). This process is achieved by applying Ohm’s Law and Kirchhoff’s Current Law in innovative ways, eliminating the need to transfer neural network parameters between memory and computing circuits, as they are already embedded within the chip.
The Role of Flash Memory in Analog AI
Sageance uses flash memory cells as conductance values, critical for analog computation. While traditional flash cells store 3 to 4 bits, Sageance’s algorithms allow their chips to hold 8 bits per cell—providing the precision required for large language models and transformer-based architectures. By embedding 8-bit numbers in single transistors instead of using the 48 transistors needed in conventional digital memory, the company achieves significant cost, space, and energy savings.
These flash cells are operated in deep subthreshold mode, consuming minimal power by producing very low current. While this would slow digital circuits, analog computation handles all operations simultaneously, maintaining speed.
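The mechanics described above are easy to sketch in ordinary code. Below is a purely conceptual, digital simulation of an analog in-memory multiply-accumulate, not Sageance's actual circuitry: weights are quantized to 8-bit codes and mapped to cell conductances, each cell contributes a current per Ohm's law, and the shared bit line sums those currents per Kirchhoff's current law. The function names, the conductance scale, and the signed-weight handling are illustrative assumptions (real arrays typically use differential cell pairs for negative weights).

```python
import numpy as np

def quantize_to_conductance(weights, bits=8, g_max=1e-6):
    """Map real-valued weights onto discrete conductance levels.

    Conceptually mirrors storing an 8-bit weight in a single flash cell:
    the programmed conductance encodes the value. Signed values are
    allowed here purely for simplicity.
    """
    levels = 2 ** (bits - 1) - 1                     # 127 signed levels
    w_max = float(np.max(np.abs(weights))) or 1.0
    codes = np.round(weights / w_max * levels)       # 8-bit weight codes
    return codes / levels * g_max, w_max

def analog_mac(input_voltages, conductances):
    """Multiply-accumulate the way the physics does it.

    Ohm's law gives each cell's current I_i = G_i * V_i, and Kirchhoff's
    current law sums those currents on a shared bit line, so the column
    current is the dot product of the inputs and the stored weights.
    """
    cell_currents = conductances * input_voltages    # Ohm's law, per cell
    return float(np.sum(cell_currents))              # Kirchhoff's current law

# Toy check: the "analog" dot product matches the digital one up to quantization.
rng = np.random.default_rng(0)
weights = rng.normal(size=64)
inputs = rng.normal(size=64)

g, w_max = quantize_to_conductance(weights)
column_current = analog_mac(inputs, g)
recovered = column_current / 1e-6 * w_max            # undo the conductance scaling

print(f"analog-style MAC: {recovered:.4f}")
print(f"digital dot prod: {np.dot(inputs, weights):.4f}")
```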
Overcoming Analog AI Challenges
Analog AI has faced hurdles in scaling for datacenters, primarily due to variability in conductance across cells, temperature-related drifts, and compounded noise through deep neural networks. Sageance addresses these issues with reference cells and proprietary algorithms for calibration and temperature tracking.
Additionally, analog AI systems often require multiple analog-to-digital and digital-to-analog conversions, which consume power and chip area. Sageance has developed low-power converters that operate efficiently with the narrow voltage range needed for deep subthreshold flash memory.
Product Roadmap and Future Vision
Sageance plans to launch its first product, aimed at vision systems, in 2025. This will be followed by chips optimized for generative AI, which will be scaled using 3D-stacked analog chiplets. These chiplets will be integrated with a CPU and high-bandwidth DRAM using the Delphi package, based on the Universal Chiplet Interconnect Express (UCIe) standard.
In simulations, a Delphi-based system could run Llama 2-70B at 666,000 tokens per second while consuming just 59 kilowatts, compared with the 624 kilowatts required by Nvidia H100-based systems, roughly a tenfold reduction in power for the same workload.
Sageance’s innovative approach positions it as a potential game-changer in AI hardware, offering a path to energy-efficient, high-performance solutions for increasingly complex AI workloads.
Siemens Launches Next-Gen AI-Powered Electronic Design Software
Siemens Digital Industries Software has unveiled its latest innovation in electronic systems design: a next-generation solution that integrates Xpedition, HyperLynx, and PADS Professional software into a unified platform. This advanced release introduces cloud connectivity and AI capabilities, aiming to revolutionize electronic systems design by addressing key challenges in the industry, including talent shortages, supply chain disruptions, and increasing design complexity.
The new solution delivers an intuitive, AI-enhanced, cloud-connected, and secure platform, empowering engineers and organizations to navigate a rapidly evolving landscape. It focuses on providing tools that minimize learning curves, incorporate predictive engineering, and enhance workflows through AI-driven support and optimization. Cloud connectivity enables seamless collaboration across the value chain, offering engineers real-time access to specialized resources, supply chain insights, and stakeholder collaboration—regardless of location.
Addressing Industry Challenges
“Engineering talent shortages, supply chain uncertainties, and design complexities have created significant hurdles for engineers and the development ecosystem,” said AJ Incorvaia, Senior Vice President of Electronic Board Systems at Siemens. “This next-generation solution represents our most thoroughly tested release, with feedback from hundreds of participants. By unifying the Xpedition, HyperLynx, and PADS Pro environments and incorporating AI, we’re equipping our customers to tackle these challenges with confidence.”
Key Features and Benefits
The solution’s multidisciplinary and integrated approach ensures a seamless flow of data and information throughout the product lifecycle, leveraging digital threads to enhance collaboration, decision-making, and design optimization. It also introduces new features such as:
Predictive engineering and AI-powered assistance to streamline workflows.
Enhanced collaboration with Siemens Teamcenter for product lifecycle management and NX software for product engineering, offering multi-BOM support and improved ECAD-MCAD integration.
Security enhancements with configurable, geo-located data access restrictions that adhere to stringent industry protocols, backed by partnerships with top cloud providers.
Model-based systems engineering (MBSE) support through integrated design and verification requirements management.
Industry Collaboration and Feedback
Tom Pitchforth, VP of Electronics Engineering at Leonardo, emphasized the importance of this toolset in meeting strategic and tactical goals in a dynamic industry. “Siemens has been a critical partner for over 20 years. The new toolset enables organizational flexibility and rapid time-to-productivity, aligning with our evolving needs in a complex environment.”
Availability
The next-generation software suite, including Xpedition NG and HyperLynx NG, is now available, while PADS Pro NG is slated for release in the second quarter of 2025. By merging advanced technology with user-centric design, Siemens aims to redefine the future of electronic systems design.
TheOpensource.AI News
MIT Unveils Boltz-1: Open-Source AI Revolutionizing Biomolecular Structure Prediction
Understanding biomolecular interactions is essential in fields like drug discovery and protein design. Traditionally, determining the 3D structures of proteins and other biomolecules required expensive, time-intensive lab experiments. The launch of AlphaFold3 in 2024 marked a turning point, demonstrating that deep learning could predict biomolecular structures with experimental-level accuracy. However, challenges remained in accurately modeling complex interactions between biomolecules, such as proteins, nucleic acids, and ligands, leaving a gap in structural biology.
Introducing Boltz-1: A New Frontier in Biomolecular Modeling
Researchers at MIT have unveiled Boltz-1, an open-source, commercially accessible model that matches AlphaFold3-level accuracy in predicting biomolecular complexes. Unlike previous models, Boltz-1 is entirely open to the public, with its model weights, training methods, and inference code released under the MIT license. This approach aims to foster global collaboration and propel advancements in biomolecular modeling.
Key Innovations
Boltz-1 builds on the framework established by AlphaFold3 but incorporates several key innovations, such as:
Advanced MSA Pairing Algorithms: Leveraging taxonomy data to enhance the quality of multiple sequence alignments, Boltz-1 captures co-evolutionary signals crucial for modeling biomolecular interactions.
Unified Cropping Approach: A new method balances spatial and contiguous cropping during training, improving the diversity of training data.
Enhanced Confidence Model: A robust mechanism for pocket-conditioning enables Boltz-1 to adapt to real-world scenarios by incorporating partial binding pocket information.
These improvements allow Boltz-1 to maintain high accuracy while significantly reducing computational requirements compared to AlphaFold3.
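To make the cropping idea above concrete, here is a minimal, hypothetical sketch of contiguous and spatial crops and a random mix of the two. It illustrates the concept only; the actual Boltz-1 training procedure is defined in its MIT-licensed code release, and the function names, crop size, and mixing probability here are assumptions.

```python
import numpy as np

def contiguous_crop(coords, crop_size, rng):
    """Take a run of consecutive residues along the chain."""
    n = len(coords)
    if n <= crop_size:
        return np.arange(n)
    start = rng.integers(0, n - crop_size + 1)
    return np.arange(start, start + crop_size)

def spatial_crop(coords, crop_size, rng):
    """Take the residues closest in 3D space to a randomly chosen center."""
    n = len(coords)
    if n <= crop_size:
        return np.arange(n)
    center = coords[rng.integers(0, n)]
    dists = np.linalg.norm(coords - center, axis=1)
    return np.sort(np.argsort(dists)[:crop_size])

def mixed_crop(coords, crop_size, rng, p_spatial=0.5):
    """Randomly alternate between the two strategies.

    Mirrors only the idea of balancing spatial and contiguous cropping
    to diversify training examples; the real Boltz-1 procedure may
    differ in detail.
    """
    if rng.random() < p_spatial:
        return spatial_crop(coords, crop_size, rng)
    return contiguous_crop(coords, crop_size, rng)

# Toy usage on random C-alpha coordinates for a 300-residue chain.
rng = np.random.default_rng(7)
ca_coords = rng.normal(scale=20.0, size=(300, 3))
idx = mixed_crop(ca_coords, crop_size=128, rng=rng)
print(f"cropped to {len(idx)} residues; first indices: {idx[:5]}")
```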
Performance Benchmarks
Boltz-1’s capabilities were demonstrated across a range of benchmarks. On targets from CASP15, a community protein structure prediction challenge, it achieved strong results, including:
Protein-Ligand Prediction: Achieved an LDDT-PLI of 65%, outperforming Chai-1’s 40%.
Protein-Protein Prediction: Delivered a DockQ success rate of 83%, surpassing Chai-1’s 76%.
These results highlight Boltz-1’s reliability, particularly in protein-ligand complex prediction, where it excelled in aligning small molecules with their binding pockets.
Impact and Accessibility
Boltz-1 democratizes access to high-accuracy biomolecular modeling, offering significant potential to accelerate progress in drug design, structural biology, and synthetic biology. By releasing Boltz-1 under an open-source license, MIT is enabling researchers and industries worldwide to leverage its capabilities, fostering innovation and collaboration.
Conclusion
Boltz-1 represents a major step forward in biomolecular modeling, combining state-of-the-art accuracy with open accessibility. Its performance rivals commercial models like AlphaFold3 while remaining open-source, making it a transformative tool for academic and industrial research. With its potential to accelerate breakthroughs in pharmaceuticals and other fields, Boltz-1 is poised to inspire future advancements and collaborative efforts, addressing some of the most complex questions in biology.
Indosat and GoTo Launch Sahabat-AI to Empower Local Language AI
Indosat Ooredoo Hutchison, in collaboration with GoTo, has introduced an open-source language model tailored to local Indonesian languages. Named Sahabat-AI, the model is designed to generate responses in Bahasa Indonesia and various other regional languages spoken across the archipelago.
The initiative aims to enable Indonesians to create AI-driven services and applications that reflect local linguistic and cultural nuances, addressing a gap often overlooked by Western-centric AI models.
"By creating an AI model that speaks our language and reflects our culture, we empower every Indonesian to harness advanced technology's potential," said Vikram Sinha, President Director and CEO of Indosat. He emphasized the initiative’s role in democratizing AI to foster growth, innovation, and empowerment across Indonesia’s diverse society.
Research by Omdia highlights a growing demand in the Asia-Pacific region for AI solutions that are culturally and linguistically aligned with local needs. Sahabat-AI aims to address this demand by not only providing a language model but also building an ecosystem for businesses, research institutions, and government agencies to leverage localized AI technology.
Nvidia supports the initiative, with the model’s development being carried out by AI Singapore and Tech Mahindra. The team utilized Nvidia's AI Enterprise software and NeMo training platform to enhance Sahabat-AI’s proficiency in local languages.
The model’s launch, celebrated on Indonesia AI Day, featured key figures including Nvidia founder and CEO Jensen Huang, Indosat CEO Vikram Sinha, GoTo CEO Patrick Walujo, and Indonesian Minister of State-Owned Enterprises Erick Thohir. "Sahabat-AI launches Indonesia's AI journey and demonstrates how large language models can address unique linguistic and cultural needs," Huang remarked, praising the country’s spirit of mutual collaboration, or "gotong royong."
Sahabat-AI will be available in two versions, featuring 8 billion and 9 billion parameters, making them smaller and more affordable to operate compared to traditional large-scale language models. Nvidia plans to continue supporting Indosat in expanding the Sahabat-AI model family.
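Given the 8-billion and 9-billion parameter sizes mentioned above, such checkpoints can typically be loaded and prompted on a single modern GPU. The sketch below shows a generic Hugging Face Transformers pattern for doing so; the repository ID, prompt, and generation settings are placeholders and assumptions, not official Sahabat-AI usage instructions.

```python
# Generic loading-and-prompting pattern for an 8B-class instruct model.
# The repository ID below is a PLACEHOLDER, not the official Sahabat-AI
# checkpoint name; substitute the real ID once it is published.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "example-org/sahabat-ai-8b-instruct"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # 8B-9B models fit on a single modern GPU in bf16
    device_map="auto",
)

# Prompt in Bahasa Indonesia, the language family the model targets.
messages = [{"role": "user",
             "content": "Jelaskan secara singkat apa itu kecerdasan buatan."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```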
Hippocratic AI, a healthcare-focused startup, is among the early adopters, integrating Sahabat-AI into its services for Indonesian residents. "Our vision for Sahabat-AI is to put the power of AI into the hands of everyone in Indonesia," said Patrick Walujo, highlighting how the model addresses critical cultural and contextual gaps left by global AI systems.
Although Sahabat-AI is being described as an open-source system, there has been no confirmation about whether the training data will be published—a key criterion under the OSI definition for true open-source AI systems.
TheClosedsource.AI News
ServiceTitan Flags Microsoft, OpenAI LLMs as Business Risks in IPO Filing
Every IPO S-1 prospectus includes a section outlining potential risks associated with investing in the company, often featuring standard warnings about financial performance, geopolitical events, or natural disasters. However, some risks are specific to the company in question.
In the case of ServiceTitan, a cloud service startup that recently filed for an IPO, a new type of warning appears to be emerging: the potential risks posed by large language models (LLMs).
ServiceTitan’s IPO filing includes a detailed, 1,150-word section discussing how its reliance on generative AI could negatively impact its business. The filing highlights several risks, including the possibility of LLM hallucinations producing inaccurate or discriminatory content, infringement on copyright or intellectual property, and increased vulnerability to data breaches due to the data exposure required to train these models. Conversely, the company also warns that insufficient data access could hinder its ability to develop or maintain AI products.
Other concerns include the potential for employees or contractors to inadvertently expose customers’ private data to third-party systems, which could lead to security breaches or the misuse of that data for AI training purposes. ServiceTitan also raises the issue of AI systems potentially running afoul of social and ethical standards or future regulations, which could incur additional costs.
The company acknowledges challenges in attracting and retaining skilled AI talent, citing the high cost of expertise as a potential hurdle. Furthermore, it flags its reliance on third-party providers such as Microsoft and OpenAI, warning that issues with these partners or the unavailability of their services could disrupt its operations.
These warnings are notable because ServiceTitan operates in industries where generative AI, particularly LLM-driven agents, is expected to have a transformative impact. The company provides software for small field-service businesses such as construction contractors, HVAC technicians, and landscapers. Its tools handle tasks like marketing, CRM, customer support, and accounting.
ServiceTitan has been incorporating AI-powered services into its offerings since 2023 under the banner of Titan Intelligence, and recently launched a suite of AI agents tailored for sales, customer service, and call centers. However, generative AI’s core mechanism of producing content by modeling patterns in words, images, and data inherently involves “making things up.”
While reliability issues with LLMs are expected to improve as more companies develop specialized and dependable AI agents, ServiceTitan’s risk disclosures underscore an important point: At this early stage, adopting generative AI can present as many challenges as it resolves. By acknowledging these risks in their legal filings, companies like ServiceTitan are candidly highlighting the complexities of integrating AI into business workflows.
Apple Delays Gemini AI Integration, Prioritizes OpenAI Partnership: Report
Google's native AI model, Gemini, is reportedly not set to integrate with iPhones for at least another year, according to recent updates. While Apple had previously suggested plans for broader collaboration with multiple AI platforms, the company appears to be focusing on its partnership with OpenAI's ChatGPT for the time being.
In a report by Bloomberg’s Mark Gurman, it’s noted that Apple’s AI suite, Apple Intelligence, will not include Gemini integration when it launches globally in December. Gurman speculates that Apple may be intentionally delaying the incorporation of Gemini into its ecosystem, potentially giving OpenAI an exclusive operational window.
These developments follow announcements made at Apple’s WWDC 2024, where Craig Federighi, Apple’s Senior VP of Software Engineering, hinted at the possibility of Apple Intelligence working with other AI models, including Gemini. However, Gurman’s latest insights indicate that Gemini's integration with iOS won’t happen until 2025.
Apple Intelligence will debut with the iOS 18.2 update this December, reaching all compatible iPhones but initially excluding Europe and China. European Union expansion is expected in April 2025, while China may see the rollout with iOS 19 in October 2025.
Gurman also raises questions about Apple’s exclusive focus on ChatGPT, suggesting it could stem from either contractual obligations with OpenAI or a strategic choice, as Apple reportedly isn’t paying OpenAI for the integration.
As for Gemini, its exact timeline remains uncertain. Gurman speculates it may arrive with iOS updates in the first half of 2025 or closer to the release of the iPhone 17 series. With the competitive AI landscape evolving rapidly, Apple enthusiasts are sure to keep a close eye on future announcements regarding the company’s AI strategy.
Don’t miss out on the insights driving the future of Artificial Intelligence! Join a community of researchers, developers, and AI enthusiasts to stay ahead of the curve in Generative AI. Each edition delivers exclusive updates, expert analysis, and thought-provoking discussions straight to your inbox. Subscribe today and be part of the journey toward AGI innovation.
Contact us for any paid collaborations and sponsorships.
Unlock the future of problem solving with Generative AI!

If you're a professional looking to elevate your strategic insights, enhance decision-making, and redefine problem-solving with cutting-edge technologies, the Consulting in the age of Gen AI course is your gateway. It is perfect for those ready to integrate Generative AI into their work and stay ahead of the curve.
In a world where AI is rapidly transforming industries, businesses need professionals and consultants who can navigate this evolving landscape. This learning experience arms you with the essential skills to leverage Generative AI to improve problem-solving and decision-making, and to advise clients more effectively.
Join us and gain firsthand experience of how state-of-the-art GenAI can take your problem-solving skills to new heights. This isn’t just learning; it’s your competitive edge in an AI-driven world.