
Social Media’s Death Blow? AI Travel Planning Is 10x More Personal

Type, Don’t Scroll.

Here is what’s new in the AI world.

AI news: Gen AI Crafts Trips Social Algorithms Can’t Match

Open Source: Thousands of Apps, One Fix

OpenAI: Today’s AI Is an Intern

Hot Tea: ChatGPT Just Got a Voice Makeover

Social Media Dethroned: Gen AI Now #1 Travel Discovery Tool (43% Choose Bots!)

BFI: AI Training Methods a "Direct Threat" to the UK Screen Industry

The British Film Institute (BFI) warns that the current methods of training generative AI models pose a "direct threat" to the UK screen industry's economic foundations. Its report reveals that AI companies have used approximately 130,000 copyrighted film and TV scripts, plus YouTube videos and pirated books, largely without permission from creators or rights holders.

AI models learn storytelling structures, language, and styles from this data, enabling them to create new content at a fraction of the original cost.

While potentially assisting creators, these AI tools could also compete directly against the very people whose work trained them, undermining traditional business models and potentially displacing skilled workers.

Recommendations for UK Leadership

  1. Establish a World-Leading IP Licensing Market: Create a robust framework forcing AI companies to obtain permission and strike licensing deals before using copyrighted content ("opt-in" regime). Leverage the UK's strong copyright laws to become a hub for ethical AI content production.

  2. Minimize AI's Environmental Impact: Develop guidelines requiring transparency about the significant carbon footprint of large AI models and push for minimizing this impact.

  3. Develop Ethical AI Tools: Foster collaboration between AI developers and the screen sector to ensure tools meet industry needs and public values, avoiding cultural homogenization and ethical problems.

  4. Share Knowledge & Build Skills: Create an "AI observatory" and "tech demonstrator hub" to provide structured intelligence and hands-on experience, especially for freelancers and SMEs. Formalize AI training to help the workforce adapt and build complementary skills.

  5. Ensure Transparency for Audiences: Mandate clear disclosures when AI is used in screen content to maintain public trust. National institutions like the BBC and BFI should lead by example.

  6. Boost Investment in Creative Tech: Provide targeted financial support to overcome barriers (limited capital, risk aversion) hindering the growth of the UK's creative technology sector.

  7. Empower Independent Creators: Invest in accessible AI tools, training, and funding for independent creators to foster an inclusive creative economy where AI enhances, rather than replaces, human creativity.

Initiatives like the Charismatic consortium (Channel 4, Aardman), BBC pilots, and BFI National Archive experiments show AI's existing role in the sector.

However, the BFI stresses this is an "inflection point" requiring swift action to harness AI's opportunities (like democratizing creation, speeding workflows) while mitigating its significant risks to jobs, business models, and trust.

The Gen Matrix Advantage

In a world drowning in data but starved for clarity, the second edition of Gen Matrix cuts through the clutter. We don't just report trends; we analyze them through the lens of actionable intelligence.

Our platform equips you with:

  • Strategic foresight to anticipate market shifts

  • Competitive benchmarks to refine your approach

  • Network-building tools to forge game-changing partnerships

Thousands Of Apps, One Fix: AI Tool Targets Major Open Source Risk

Researchers have created an AI tool that automatically scans open-source repositories (like GitHub) to detect and patch specific code vulnerabilities. In testing, it identified 1,756 vulnerable Node.js projects affected by a long-standing path traversal flaw and successfully fixed 63, showcasing AI's potential to massively scale open-source security improvements.
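The path traversal flaw the tool targets is well understood: a server joins untrusted user input onto a base directory without checking where the resolved path actually lands, letting inputs like `../../etc/passwd` escape the intended folder. As a hedged illustration (not the researchers' tool or its generated patches), here is a minimal Python sketch of the kind of check such a patch would introduce:

```python
import os

def safe_join(base_dir: str, user_path: str) -> str:
    """Resolve user_path under base_dir, rejecting path traversal.

    Raises ValueError if the resolved path escapes base_dir
    (e.g. user_path = '../../etc/passwd').
    """
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, user_path))
    # After resolving '..' segments and symlinks, the target must
    # still sit inside the base directory.
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"path traversal blocked: {user_path!r}")
    return target
```

A naive `os.path.join(base_dir, user_path)` with no follow-up check is the vulnerable pattern; the fix is to normalize first and only then compare against the base directory.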

Major Challenges Remain

  1. Patch Risks: AI-generated fixes may unintentionally create new bugs.

  2. "Poisoned" Training Data: LLMs trained on vulnerable code can reproduce these flaws, making both codebases and models hard to cleanse.

  3. Scale Issues: Copied/forked code fragments (e.g., from Stack Overflow) are pervasive, and the tool currently targets only one vulnerability pattern.

  4. Accountability Gaps: Experts question liability for faulty patches and how to establish trust in AI-generated code, noting limited post-patch testing.

  5. Developer Inertia: Warnings about this vulnerability were repeatedly ignored or dismissed for over a decade across platforms and even teaching materials, partly because inadequate testing methods masked the flaw.


The tool will be released in August, with plans to expand its detection capabilities and patching accuracy. While acknowledging its technical promise, security experts like Robert Beggs remain skeptical about AI's current readiness for safe, large-scale code modification, viewing the research as an initial, but not yet production-ready, step forward.

OpenAI CEO Predicts Leap from AI Interns to Engineer-Level Talent by 2026

OpenAI CEO Sam Altman described current AI capabilities as akin to an "intern that can work for a couple of hours," predicting significant advancements. He stated that by next year, AI agents could begin helping humanity "discover new knowledge" or solve complex business problems in limited ways.

Job Loss Debate Intensifies


These comments come amid growing anxiety about AI displacing workers. Altman previously asserted, "You’re not going to lose your job to an AI, but you’re going to lose your job to someone who uses AI."

Contrasting CEO Views

  1. Pessimistic Outlook: Anthropic CEO Dario Amodei predicted AI could eliminate nearly half of entry-level white-collar jobs within 5 years.

  2. Optimistic Outlook: Google CEO Sundar Pichai countered that AI will primarily serve as an "accelerator," freeing humans for more creative work. He respectfully disagreed with Amodei's prediction, noting Google still plans to hire software engineers. Pichai emphasized AI would augment rather than wholly replace coders.

Both Google and OpenAI have recently launched AI agents aimed at automating software engineering tasks, highlighting the tension between their predictions and product development.

Do you agree with Altman?

Login or Subscribe to participate in polls.

Why It Matters

  • For Leaders: Benchmark your AI strategy against the best.

  • For Founders: Find investors aligned with your vision.

  • For Builders: Get inspired by the individuals shaping AI’s future.

  • For Investors: Track high-potential opportunities before they go mainstream.

ChatGPT Gets a Voice Upgrade with More Natural Speech

OpenAI has upgraded its paid "Advanced Voice" feature, making ChatGPT's spoken responses sound significantly more human-like. Key improvements include:

  1. More Natural Speech: Voices now have subtler intonation, realistic cadence (pauses, emphasis), and better expressiveness for emotions like empathy or sarcasm.

  2. Continuous Translation: Users can now ask ChatGPT to translate an entire conversation continuously until instructed to stop or switch languages.

  3. Wider Availability: The enhanced voice mode is now live for all ChatGPT Plus, Team, and Enterprise subscribers globally across platforms.

Limitations Noted

  • Occasional audio glitches (unexpected tone/pitch variations) may still occur.

  • Known issues like unintended sounds, gibberish, or background music (linked to hallucinations) remain unfixed.

Your opinion matters!

We hope you loved reading this edition of our newsletter as much as we had fun writing it.

Share your experience and feedback with us below, because we take your critique seriously.

How did you like today's edition?

Login or Subscribe to participate in polls.

Thank you for reading

-Shen & Towards AGI team