- "Towards AGI"
- Posts
- AI vs. Creativity? Kelly Yu Proves Gen AI Is Music’s New Superpower
AI vs. Creativity? Kelly Yu Proves Gen AI Is Music’s New Superpower
The Gen AI Music Video That Broke the Internet.
Here is what’s new in the AI world.
AI news: Why Gen AI Could Kill Traditional Music Video Production
What’s new: AI Is Great at Hacking Itself
Open AI: AI Is Eating the World Like Linux
OpenAI: Gen Z Devs Are Betting Big on Open-Source AI
Hot Tea: AI Agents with Crypto Wallets Running the Show
How Kelly Yu’s AI Magic Created a Viral Visual Masterpiece

Chinese-Canadian artist Kelly Yu has partnered with AI firm CreateAI to produce the music video for her latest single, “Werewolf”, marking the first time a major recording artist has used generative AI to craft a video from concept to final cut.
The collaboration blends human creativity with cutting-edge technology, setting a new benchmark for the music industry and redefining possibilities for future artist-AI partnerships.
Innovative Fusion of Art and Technology
The video merges human-directed keyframes and AI-generated visuals through CreateAI’s platform, enhancing the song’s emotional depth with intricate details. According to the creators, this approach slashed production time by 50% and saved millions in costs.
The 3.5-minute film weaves a dark, fairy-tale narrative featuring castles, enchanted forests, and a haunting romance between a werewolf and a human, portrayed through stylized dance sequences. Notably, the AI overcame typical challenges in choreography, delivering fluid movements and consistent character expressions.
Global Release and Industry Impact
Set to debut on major streaming platforms, the video aims to captivate global audiences while showcasing AI’s potential in entertainment. CreateAI emphasized its mission to empower artists by merging generative AI with traditional creativity, fostering a collaborative ecosystem for the digital age.
Artist and CEO Perspectives
Yu expressed enthusiasm for the project: “Working with CreateAI let me push creative boundaries and finally realize a vision I’ve held for years. I hope fans connect with this innovative work.”
CreateAI CEO Cheng Lu added, “This partnership exemplifies how AI can amplify human artistry, turning imaginative concepts into captivating realities.”
By bridging artistic vision with AI efficiency, “Werewolf” not only advances music video production but also signals a transformative shift in how technology and creativity intersect.
The Gen Matrix Advantage
In a world drowning in data but starved for clarity, Gen Matrix second edition cuts through the clutter. We don’t just report trends, we analyze them through the lens of actionable intelligence.
Our platform equips you with:
Strategic foresight to anticipate market shifts
Competitive benchmarks to refine your approach
Network-building tools to forge game-changing partnerships
AI Can Spot the Bugs, But Can’t Fix Them?

Organizations are faltering in addressing vulnerabilities uncovered during penetration testing, with fewer than half (48%) of exploitable flaws remediated, a figure that plummets to 21% for generative AI (Gen AI) applications, according to Cobalt’s State of Pentesting Report.
While critical vulnerabilities see a higher fix rate (69%), technical, organizational, and cultural barriers persist, worsened by the complexity of Gen AI systems.
Remediation Gaps: Legacy systems, resource constraints, and competing priorities hinder patching. Many firms prioritize compliance over risk mitigation, delaying fixes unless breaches occur.
Gen AI Complications: Vulnerabilities in large language models (LLMs), such as prompt injection or data leakage, are harder to resolve due to unpredictable behavior and dependencies on untested frameworks. Traditional code fixes apply to app layers, but LLM flaws require retraining models without guaranteed success.
Improved Timelines: Median resolution time for critical issues dropped from 112 days in 2017 to 37 days in 2023, credited to “shift left” security integration.
There has been a backlash against Cursor over the last couple of days.
It seems that the Cursor support system is 100% based on AI, and it clearly gave very bad answers to users who could not log into Cursor because of a bug, leading to many customers cancelling their
— Julien Salinas (@JulienSalinasEN)
7:16 AM • Apr 18, 2025
Expert Recommendations
Prioritize Ruthlessly: Focus on high-risk vulnerabilities exposed to the internet, accidental exposures, and technical debt.
Contextualize Findings: Pen test results must be analyzed for real-world exploit potential, not just severity ratings.
Streamline Ownership: Assign accountability for remediation and integrate security tools early in development.
Manage Overload: Filter noise from vulnerability scans by validating exploitability and network accessibility.
Gen AI’s Unique Risks
Gen AI introduces novel attack surfaces and unpredictable model behaviors. “Fixing AI model flaws isn’t like traditional patching, it’s iterative and uncertain,” noted Inti De Ceukelaire of Intigriti.
Cobalt emphasizes that current pen testing focuses on LLM-supported systems, not full model behavior, leaving gaps in risk assessment.
Security leaders must balance speed and safety amid pressure to deploy AI rapidly. As Thomas Richards of Black Duck advises, “Context is key—prioritize based on actionable risk, not just scan outputs.”
By fostering collaboration between security and engineering teams, enterprises can navigate the dual challenges of legacy systems and AI-driven complexity.
AI Agents with Crypto Wallets

I’ve been closely following the rise of AI agents, and the numbers are staggering. The global AI agent market is expected to grow from 5.1 billion USD in 2024 to 47.1 billion USD by 2030, that’s a massive CAGR of 44.8%.
According to Deloitte, 25% of enterprises using AI will have adopted AI agents by 2025, and that figure is set to double to 50% by 2027.
What really fascinates me is how AI agents are evolving. Some now even come equipped with crypto wallets, capable of managing micropayments, token-based services, and automated financial transactions. It’s not just automation, it’s a whole new business model.
Why I Believe AI Agents Are Game-Changers?
In today’s fast-paced digital world, businesses like mine and probably yours are constantly looking for ways to streamline repetitive tasks, make quicker decisions, and create new revenue streams. That’s where AI agents come in.
These agents aren’t just glorified chatbots. They’re autonomous software systems that can analyze their environment, make decisions, and take action, all without constant human intervention.
They blend machine learning, natural language processing, and advanced planning to respond to changing conditions—and even collaborate with other agents.
Open-Source AI Is the New Linux, But on Steroids

Matt Asay, MongoDB’s Head of Developer Relations and a prominent open-source advocate, sees DeepSeek as far more than a milestone in Chinese tech; it’s a symbol of how open source is redefining ownership, collaboration, and innovation speed.
“The moment DeepSeek hit Hugging Face, it ceased being a ‘Chinese’ model,” Asay wrote in InfoWorld, where he also contributes. “No one, not even the U.S. government, can put the open-source genie back in the bottle.”
When DeepSeek launched its advanced AI model on Hugging Face, it catalyzed a new chapter for open-source AI. Developers around the world jumped in, including China’s Beijing Academy of Artificial Intelligence (BAAI), which responded with a competing initiative, OpenSeek.
That project aims to surpass DeepSeek while rallying the global open-source community to push forward advancements in algorithms, data, and infrastructure.
The U.S. government reacted swiftly, placing BAAI on a blacklist. However, for Asay, such efforts to rein in open-source AI are missing the point. “Trying to control this movement shows a deep misunderstanding of what open source is,” he argues.
Linus Towalds (creator of Linux) is very based about AI!
"LLMs can help identify bugs real time"
>but they're just autocomplete on steroids
"I think they're much more than that and humans are also autocomplete on steroids to some degree"
>but are you scared of bugs in code— Burny — Effective Omni (@burny_tech)
12:12 PM • Jan 19, 2024
According to Asay, DeepSeek is no longer just a moment, it’s a full-fledged movement. The ecosystem around it is rapidly growing, with thousands of developers, from researchers to enthusiasts, working collaboratively to enhance and expand open-source AI.
Hugging Face has become a global hub for this activity, outpacing even the most agile corporate R&D teams. While it’s a single company, the developer communities it nurtures are resilient and operate far beyond the scope of centralized control.
This democratization is already reshaping industries. Startups like Perplexity are embedding open-source AI into real-world products, proving that cutting-edge tools are no longer reserved for tech giants or government labs.
Asay compares this wave to the rise of Linux: “It started with a spark, became a movement, then infrastructure, and eventually a global standard. But this time, it’s happening in months instead of decades.”
Linux flourished not because of institutional support, but because it inspired passionate contributions. Asay sees the same dynamic at work in today’s open-source AI boom.
Meanwhile, companies clinging to closed models like OpenAI are, in Asay’s words, “trying to dam an ocean.” Despite nods toward transparency, few match the openness of projects like DeepSeek and OpenSeek.
Policymakers face tough choices. “Open source doesn’t care about borders or embargoes, it’s just a pull request away,” Asay points out. Efforts to restrict its spread may ultimately backfire, weakening domestic innovation while accelerating progress elsewhere.
The emergence of DeepSeek and its open-source peers signals a seismic shift in how technology is created and shared. Governments, companies, and developers must decide whether to embrace this movement or risk being left behind.
Asay sums it up simply: “No one owns this wave. No one can stop it. No one can contain it.”

Why It Matters?
For Leaders: Benchmark your AI strategy against the best.
For Founders: Find investors aligned with your vision.
For Builders: Get inspired by the individuals shaping AI’s future.
For Investors: Track high-potential opportunities before they go mainstream.
Open-Source AI Is the New Startup Dream for Young Coders

Many people remain skeptical of proprietary AI due to concerns around transparency and how data is managed and protected. In contrast, open-source AI is gaining momentum, especially among younger developers who are drawn to its flexibility and potential.
While creating large open-source language models (LLMs) isn’t for everyone, a growing number of developers are exploring ways to use, customize, and host these models for their own solutions or as services.
Whether they're building tools on top of open-source LLMs or deploying them locally, younger developers are clearly leaning into this movement.
A recent Stack Overflow survey of over 1,000 developers sheds light on this trend and the mindset of the next generation of AI builders.
The Growing Appeal of Open-Source AI
Although many leading AI models and chatbots remain closed-source, the industry is shifting. OpenAI’s Sam Altman recently announced plans to release the company’s first open-weight language model since GPT-2, noting, “It feels important to do now.”
Even if strategic motives exist, OpenAI isn’t alone. Meta’s development of Llama 4 reflects a broader push for openness. While licensing restrictions from these companies may not fully align with the Open Source Initiative’s standards, the move toward more open AI development is undeniable.
Globally, startups like India’s VOGIC AI are leveraging open-source models to build innovative solutions. It’s a trend driven by accessibility, flexibility, and cost-efficiency.
One of the most impressive AI demo I've seen.
This is the future of customer service.
Agents that can understand text, speech, images and even live video.
Soon to be all open-source.
— Lior⚡ (@LiorOnAI)
7:23 PM • Apr 20, 2025
What Younger Developers Are Saying
According to the Stack Overflow survey, younger developers are more likely to engage positively with AI and open-source projects. They’re active in communities, contribute feedback, and are more enthusiastic about interacting with AI tools compared to older respondents, who tend to be more cautious, especially with proprietary technologies.
Early-career developers cited learning as their top use case for AI. They also place greater trust in open-source models for education, personal projects, and creative work.
Soumyadeep Ghosh, a Google Summer of Code contributor with KDE, shared that open-source AI has become essential to his learning process. “I like understanding how AI works, how it’s trained, and how I can refine or fine-tune it,” he explained.
Ghosh recently trained the Llama 3.2 1B model using local data to help refactor large codebases, a task he says wouldn’t be possible without open access to the model.
He also raised concerns around data privacy, questioning how closed models scrape and train on publicly available content. While impressive, he sees it as a trade-off he’s not willing to make.

Explore Gen Matrix second edition today and transform uncertainty into advantage. Because in the age of AI, knowledge isn’t just power, it’s profit.
Your opinion matters!
Hope you loved reading our piece of newsletter as much as we had fun writing it.
Share your experience and feedback with us below ‘cause we take your critique very seriously.
How did you like our today's edition? |
Thank you for reading
-Shen & Towards AGI team