- "Towards AGI"
- Posts
- Elon Musk & OpenAI Head to Court—And It’s Happening Sooner Than Expected
Elon Musk & OpenAI Head to Court—And It’s Happening Sooner Than Expected
This decision follows months of legal disputes between the two parties.

Next-generation integration technology for the Generative AI world.
Forget AI As You Know It; AgentsX Introduces The First Outcome Engine
AgentsX envisions AGI not as a single massive LLM but as a network of AI agents—each working together, learning, and evolving. Instead of a one-size-fits-all model, their approach is dynamic and modular, resembling an ecosystem rather than a single centralized brain.
Relying on a single AI model for everything can be inefficient, slow, and risky. In contrast, a system of specialized agents offers scalability, adaptability, and intelligence. Plus, if one goes rogue, it can be easily shut down, preventing worst-case scenarios like a runaway AI.
This concept mirrors how the human brain functions, with different regions handling specific tasks while remaining interconnected to form intelligence. A modular AGI system is not only more transparent and flexible but also more resilient than a monolithic AI model.
For AgentsX, true AGI isn’t a black box or a fragile, all-encompassing system. It’s an evolving, decentralized intelligence, designed to be robust, adaptable, and safe.
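To make the modular idea concrete, here is a minimal Python sketch of an agent registry in which specialized agents handle different tasks and any single agent can be shut down without taking the rest of the system offline. The agent names and interfaces are hypothetical illustrations, not AgentsX's actual product or API.

```python
# Minimal sketch of a modular multi-agent setup (hypothetical; not AgentsX's actual API).
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]   # task handler for this agent's specialty
    active: bool = True

class AgentNetwork:
    def __init__(self) -> None:
        self.agents: Dict[str, Agent] = {}

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def shut_down(self, name: str) -> None:
        # A misbehaving agent can be disabled without stopping the others.
        self.agents[name].active = False

    def dispatch(self, name: str, task: str) -> str:
        agent = self.agents[name]
        if not agent.active:
            return f"{name} is offline; task rerouted or rejected."
        return agent.handle(task)

# Usage: two specialized agents cooperating instead of one monolithic model.
network = AgentNetwork()
network.register(Agent("summarizer", lambda t: f"summary of: {t}"))
network.register(Agent("planner", lambda t: f"plan for: {t}"))
print(network.dispatch("summarizer", "quarterly report"))
network.shut_down("planner")
print(network.dispatch("planner", "launch roadmap"))
```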
You can use our services by clicking here: AgentsX.
OpenAI: OpenAI’s For-Profit Move Faces Legal Fire
Gen AI: Google x MediaTek: The AI Chip Power Duo?
What’s New: Roblox Drops AI-Powered 3D Generator
Open AI: Open-Source AI Takes on Medical Giants
Dope Tech: China’s Chitu Takes on NVIDIA
Closed AI: Coding Automation Is Coming Sooner!
OpenAI, Musk Agree To Speedy Trial In Legal Battle Over For-Profit Shift

OpenAI and Elon Musk have agreed to speed up their legal battle over the AI company’s shift to a for-profit model, with the trial set to take place in autumn 2025. This decision follows months of legal disputes between the two parties.
Key Developments:
Both parties jointly proposed an expedited trial but have delayed deciding whether it will be judged by a jury or solely by a judge.
This follows the court’s March 4 ruling, which denied Musk’s attempt to halt OpenAI’s transition to a for-profit structure.
Elon Musk is trying to buy Wisconsin's Supreme Court.
We won’t let him.
— Democrats (@TheDemocrats)
5:04 PM • Mar 18, 2025
According to a filing in the US District Court for the Northern District of California, both sides jointly proposed an expedited trial but have postponed the decision on whether the case will be decided by a judge or a jury. This comes after the court’s March 4 ruling, which denied Musk’s request to pause OpenAI’s transition to a for-profit structure. OpenAI welcomed the ruling, accusing Musk of attempting to hinder the company’s progress for his own personal benefit.
Pay attention to this: Elon Musk is now trying to buy a seat on the Wisconsin Supreme Court.
Why aren’t more Democrats talking about this??
— CALL TO ACTIVISM (@CalltoActivism)
1:24 AM • Mar 18, 2025
Musk, who co-founded OpenAI with CEO Sam Altman in 2015 before parting ways, has claimed that the company has strayed from its original mission of developing AI for the public good rather than corporate profits. In response, OpenAI and Altman have rejected these allegations, arguing that the restructuring is necessary to secure funding and remain competitive in the rapidly evolving AI industry.
The lawsuit arises as OpenAI continues its efforts to raise capital. The company recently secured a $6.6 billion funding round and is reportedly planning another round, targeting $40 billion, with backing from SoftBank Group. These investments depend on OpenAI’s transition to a for-profit structure.
In a separate move, Musk led an unsolicited $97.4 billion takeover bid for OpenAI, which the company ultimately rejected. Altman has made it clear that OpenAI is not for sale, emphasizing the need to preserve its independence to continue advancing its AI research and development.
Google Taps MediaTek To Supercharge AI Chip Development

Alphabet-owned Google is reportedly collaborating with Taiwan's chip designer MediaTek to design and develop its next generation of artificial intelligence (AI) chips, known as Tensor Processing Units (TPUs), according to The Information on Monday. These chips are expected to go into production next year, in 2026.
Google chose MediaTek partly due to its strong ties with Taiwan Semiconductor Manufacturing Co. (TSMC) and its lower costs compared to Broadcom, the report stated. However, neither Google nor MediaTek has officially confirmed the partnership.
Google reportedly bringing in MediaTek to work on TPU I/O in bid to diversify away from Broadcom while spinning up internal core processor design efforts makes quite a bit of sense to me.
Wen IFS silicon rumors?
— Renny (@rennyzucker)
3:39 PM • Mar 17, 2025
While Google continues to develop more AI chips in-house, it is still expected to work with external partners like Broadcom and MediaTek for manufacturing, packaging, and quality testing. The tech giant currently has an existing collaboration with Broadcom for AI chip production.
Last year, Google introduced its sixth-generation TPUs to provide an alternative to Nvidia's widely used AI chips for both its own use and cloud customers. Reports from Omdia, cited by Reuters, estimated that Google invested between $6 billion and $9 billion in TPUs in 2024. These chips play a crucial role in Google's AI strategy, supporting internal research, cloud computing, and services such as Google Search, YouTube, and Gemini AI models.
Following news of Google's potential partnership with MediaTek, Broadcom's stock dropped on Monday, initially falling to $187.50 before recovering slightly to $193.82, marking a 0.9% decline. Meanwhile, Alphabet shares were down 0.48%, trading at $166.81.
Google Pixel 10 Series leak 📱🔍
• 📌 SoC: Tensor G5 (TSMC 3nm)
• ⚡ CPU: Cortex-X4 (3.4GHz) + A725 + A520
• 🎮 GPU: PowerVR DXT-48-1536
• 📷 Video: 8K30 supported
• 📡 Modem: MediaTek/Exynos 5400
• 🔋 60W wireless charging (CPS4041)
• 🔑 Bootloader in Rust
#Pixel10— JérémKO (@JeremKOYTB)
9:54 AM • Mar 11, 2025
In a separate development, Intel’s incoming CEO, Lip-Bu Tan, is reportedly considering major changes to the company’s chip manufacturing and AI strategies ahead of his official return on Tuesday, according to Reuters. After news of these potential changes, Intel's stock surged over 7% on Nasdaq. During a recent town hall meeting following his appointment, Tan informed employees that Intel would need to make "tough decisions" moving forward.
Stream the latest AI trends, expert interviews, and podcasts in high quality on our Towards AGI YouTube channel.
Roblox Unleashes AI-Powered 3D Image Generator, Game Dev Just Got Easier!

Roblox announced on Monday an early version of a 3D object generator, marking the first in a series of AI models the company plans to roll out.
Why it matters:
The tool aims to accelerate content creation on the platform, supporting Roblox’s goal of capturing 10% of the $180 billion global gaming market.
Key details:
The new tool, called Mesh Generator API, is powered by CUBE 3D, a 1.8 billion-parameter AI model that Roblox will release as open source this week.
It allows developers and players to generate 3D objects from text prompts, such as "draw a car with racing stripes and a spoiler" (see the sketch after this list).
Future updates will enable users to create 3D objects from still images.
Roblox is also developing additional AI models, including speech-to-text, text-to-speech, and real-time language translation capabilities.
The company's long-term vision is to provide tools that can generate entire scenes on demand.
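As a rough illustration of what a text-to-mesh request could look like, here is a short Python sketch. The endpoint URL, request fields, and response handling are assumptions made for illustration only; they are not Roblox's actual Mesh Generator API.

```python
# Hypothetical sketch of a text-to-3D-mesh request; not Roblox's actual Mesh Generator API.
import requests

API_URL = "https://example.com/v1/mesh-generation"   # placeholder endpoint (assumed)
API_KEY = "YOUR_API_KEY"                             # placeholder credential

def generate_mesh(prompt: str) -> bytes:
    """Send a text prompt and return the generated mesh as raw bytes (illustrative only)."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "format": "obj"},    # assumed request schema
        timeout=60,
    )
    response.raise_for_status()
    return response.content

if __name__ == "__main__":
    mesh_bytes = generate_mesh("draw a car with racing stripes and a spoiler")
    with open("car.obj", "wb") as f:
        f.write(mesh_bytes)
```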
Roblox’s perspective:
"Beyond mesh generation, we plan to expand into scene generation and understanding," the company stated in a blog post.
For example, developers could prompt the API to change the lush green leaves in a forest scene to autumn foliage, reflecting a seasonal shift.
By simplifying 3D content creation, Roblox aims to attract a broader range of developers and content creators, extending beyond traditional game design.
While these generative AI tools will be free for creators and developers, there will be usage limits in place. "We do have limits, for obvious reasons," said Nick Tornow, Roblox’s VP of Engineering, in an interview with Axios.
Open-Source AI Solves Complex Medical Cases Like the Best

Artificial intelligence is poised to revolutionize medicine in numerous ways, including serving as a reliable diagnostic assistant for busy clinicians.
The Rise of Open-Source AI in Medicine
Over the past two years, proprietary AI models—also known as closed-source models—have demonstrated exceptional ability in solving complex medical cases requiring advanced clinical reasoning. These models have consistently outperformed open-source alternatives, whose publicly available source code allows for modifications and improvements by anyone.
However, a new NIH-funded study, led by researchers at Harvard Medical School (HMS) in collaboration with clinicians from Beth Israel Deaconess Medical Center and Brigham and Women’s Hospital, suggests that open-source AI may have caught up.
Key Findings
Published on March 14 in JAMA Health Forum, the study found that an open-source AI model, Llama 3.1 405B, performed on par with GPT-4, a leading proprietary model.
Researchers evaluated both models on 92 diagnostically complex cases published in The New England Journal of Medicine.
The results indicate that open-source AI tools are becoming increasingly competitive and could serve as viable alternatives to proprietary models.
“This is the first time an open-source AI model has matched GPT-4’s performance on such challenging cases, as assessed by physicians. The fact that Llama models have caught up so quickly is remarkable and could benefit patients, healthcare providers, and hospitals alike.”
Open-Source vs. Closed-Source AI: Pros and Cons
There are fundamental differences between open-source and closed-source AI models:
Data Privacy: Open-source models can be hosted on a hospital’s private servers, keeping patient data secure. In contrast, closed-source models require data transmission to external servers.
Adoption in Healthcare: Many hospital administrators, chief information officers (CIOs), and physicians may find open-source models more appealing due to the ability to keep sensitive data in-house.
“There’s something fundamentally different about patient data staying within the hospital versus being sent to an external entity, even if it’s a trusted one,” said lead author Thomas Buckley, a doctoral student in the AI in Medicine track at HMS’s Department of Biomedical Informatics.
Flexibility: Open-source models allow medical and IT professionals to customize them for specific clinical and research needs, whereas closed-source tools are typically more challenging to modify.
"This is crucial," said Buckley. "With open-source models, you can fine-tune them using local data—whether through simple adjustments or more advanced techniques—so they better serve the needs of your physicians, researchers, and patients."
Support and Integration: Closed-source AI providers like OpenAI and Google manage their own models and offer traditional customer support, whereas open-source models require users to handle setup and maintenance themselves. Closed-source models have also, so far, been easier to integrate with electronic health records and hospital IT systems.
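As a minimal sketch of what in-house hosting of an open model can look like, the snippet below loads an open-weight model with the Hugging Face transformers library and runs inference entirely on local hardware, so the clinical text never leaves the hospital's servers. The model name is illustrative (the 405B model evaluated in the study needs far more hardware), and gated models require accepting the provider's license.

```python
# Minimal sketch: running an open-weight LLM locally so clinical text never leaves the premises.
# Assumes the transformers, torch, and accelerate packages are installed and the weights are
# downloaded; the model name below is illustrative, not the 405B model evaluated in the study.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # smaller open model; gated, requires license acceptance
    device_map="auto",                         # place the model on available local GPUs/CPU
)

case_summary = "65-year-old with fever, a new murmur, and splinter hemorrhages."
prompt = f"List a differential diagnosis for the following case:\n{case_summary}\n"

# Inference runs on local hardware; nothing is sent to an external API.
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```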
Do you think AI can replace doctors soon?
🚀 Gen AI Leading Startups in Telecom
Ranked #1: Inorsa
Featured on the Gen Matrix by Towards AGI Powered by @HitachiDS for advancing telecom infrastructure with AI-driven solutions.
Download the full report for free: towardsagi.ai/gen-matrix
— TowardsAGI (@Towards_agi)
1:35 PM • Jan 22, 2025
Follow, like, comment, and share to be a part of our Towards AGI community.
AI to Surpass Human Coders ‘Forever’? OpenAI CPO Drops a Bombshell

OpenAI’s Chief Product Officer, Kevin Weil, has made a bold prediction: AI will permanently surpass human coders by the end of this year. Speaking on the YouTube show Overpowered with Varun Mayya and Tanmay Bhat, Weil responded to Anthropic’s claim that full coding automation would arrive by 2027, saying he believes it will happen much sooner.
Weil highlighted the rapid advancements in OpenAI’s models, noting how each new release has significantly improved at competitive programming. He explained that o1-preview initially ranked around the millionth-best competitive programmer in the world, which still placed it in the top 2-3% of all programmers globally. Later, o1 climbed even higher, ranking among the top 1,000 competitive coders worldwide.
The next iteration, o3, is set to achieve an even greater milestone. Weil revealed that, based on benchmarks, o3 currently ranks around 175th in the world for competitive coding, and its successor models are already outperforming it.
"In the same way that computers surpassed humans in arithmetic decades ago and AI became unbeatable at chess 15 years ago, this is the year that AI overtakes humans in programming—forever”.
A Future Where Everyone Can Code?
Weil emphasized how AI-driven coding could revolutionize software development, making it accessible to everyone, not just engineers. "Imagine a world where you don’t need to be a programmer to create software," he said. "AI surpassing humans in coding is far more significant than AI mastering chess, because software enables the creation of anything. The impact of this democratization could be immense."
However, he acknowledged that human expertise will still play a crucial role. While AI can handle much of the coding, deciding which problems to solve and where to focus efforts will remain a human responsibility.
Rather than replacing human workers, Weil envisions AI as an assistant that enhances productivity across all fields. "People will increasingly act as managers of AI-powered employees that handle routine tasks," he predicted. "AI will be used every day to augment human capabilities in the workplace."
DeepSeek R2: A Desperate Rush?
Despite all the drama, DeepSeek is reportedly speeding up the release of its R2 model. But if R2 carries over the accuracy and security concerns raised about its predecessor, it could face global rejection.
NEW DeepSeek R2 is INSANE! 🤯
— Julian Goldie SEO (@JulianGoldieSEO)
10:02 AM • Mar 6, 2025
AI Ethics and the Future of Regulation
The DeepSeek mess highlights big issues in AI development:
✔ Did DeepSeek steal OpenAI’s work?
✔ How should AI firms prove their data is legally obtained?
✔ Will governments demand transparency in AI training?
If OpenAI confirms that DeepSeek used its data, it could set a legal precedent. Stricter rules, potential lawsuits, and market bans might be coming. The AI world is watching closely.
China’s Chitu Gears Up To Challenge NVIDIA’s AI Chip Dominance

NVIDIA currently dominates the AI chip market, but increasing U.S. hardware restrictions on China and the emergence of non-NVIDIA GPUs have prompted some businesses to explore alternatives.
In response, a team linked to China’s Tsinghua University has introduced Chitu, a new open-source AI framework designed to reduce reliance on NVIDIA’s products. The framework, released under the Apache-2.0 license, is optimized for large language model (LLM) inference, prioritizing efficiency, flexibility, and broad hardware compatibility.
Chitu: The new high performance inference framework, 3.5x faster with 50% cost github.com/thu-pacman/chi…
— shudong (@shu_inf)
7:25 PM • Mar 15, 2025
According to the South China Morning Post, Chitu supports popular LLMs such as DeepSeek, the Llama series, and Mixtral, and is capable of running on China-made chips, challenging NVIDIA’s dominance—particularly its Hopper series GPUs. The project was developed by the startup Qingcheng.AI in collaboration with Professor Zhai Jidong from Tsinghua University’s computer science department. Qingcheng.AI, founded in 2023 by Zhai and his students, is backed by Beijing’s municipal AI industry fund.
“We not only focus on NVIDIA GPUs but also support various hardware environments, including legacy GPUs, non-NVIDIA GPUs, and CPUs. Our goal is to create a versatile framework that meets diverse deployment needs.”
The team claims significant performance improvements, reporting a 315% increase in inference speed while reducing GPU usage by 50% compared to other open-source frameworks. These results were achieved during tests using DeepSeek-R1 on NVIDIA’s A800 GPUs. According to the developers, Chitu is now production-ready and deployed in real-world applications.
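To put those reported figures in perspective, here is a small back-of-the-envelope Python sketch of the implied per-GPU efficiency gain. Only the 315% speed increase and 50% GPU reduction come from the claims above; the baseline throughput and GPU count are invented for illustration.

```python
# Back-of-the-envelope check of the reported Chitu gains (illustrative only).
# Baseline numbers below are invented; only the speed-up and GPU reduction
# come from the figures reported by the developers.

baseline_tokens_per_sec = 1000.0   # hypothetical baseline throughput of a reference framework
baseline_gpus = 16                 # hypothetical baseline GPU count

speed_increase = 3.15              # "315% increase" read here as roughly a 3.15x speed-up
gpu_reduction = 0.50               # "50% less GPU usage"

chitu_tokens_per_sec = baseline_tokens_per_sec * speed_increase
chitu_gpus = baseline_gpus * (1 - gpu_reduction)

baseline_throughput_per_gpu = baseline_tokens_per_sec / baseline_gpus
chitu_throughput_per_gpu = chitu_tokens_per_sec / chitu_gpus

print(f"Baseline: {baseline_throughput_per_gpu:.1f} tokens/s per GPU")
print(f"Chitu (claimed): {chitu_throughput_per_gpu:.1f} tokens/s per GPU")
print(f"Implied per-GPU efficiency gain: {chitu_throughput_per_gpu / baseline_throughput_per_gpu:.2f}x")
```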
Stay in touch with us by subscribing to our newsletter to receive weekly pieces and updates.
Please rate our newsletter below. Your feedback matters to us.
How's your experience?
Thank you for reading
-Shen Pandi & Team