
Tired of False Positives? Corelight’s GenAI Cuts Through the Noise

Threat Detection, Supercharged by Gen AI.

Here is what’s new in the AI world.

AI news: The AI Arms Race in Cybersecurity Just Got Hotter

What’s new: Sorry, Xbox, AI Class Is In Session

Open AI: China's AI Play

OpenAI: Zuck Hires the Architect of Rival AI

Hot Tea: OpenAI’s Agent Plays Human Too Well

Explore Gen Matrix Q2 2025


Uncover the latest rankings, insights, and real-world success stories in Generative AI adoption across industries.

See which organizations, startups, and innovators are leading the AI revolution and learn how measurable outcomes are reshaping business strategies.

GenAI Meets Cybersecurity: Corelight’s Breakthrough in Threat Detection

Cybercriminals are now leveraging generative AI to deploy sophisticated attack methods, such as living off the land (LotL) and lateral movement, techniques that were once exclusive to elite hackers, according to Brian Dye, CEO of Corelight, a San Francisco-based network detection and response (NDR) provider.

How AI is Changing the Threat Landscape

  • Democratizing Advanced Attacks: AI tools are enabling mid-tier attackers to execute complex techniques at unprecedented speed.

  • Lowering the Barrier to Entry: Previously, only highly skilled threat actors could conduct stealthy, low-and-slow attacks; now, AI is automating these methods.

Corelight’s AI-Powered Defenses

To counter these evolving threats, Corelight has integrated generative AI into its security operations:

  • Natural Language Alerts: Translates technical alerts into plain language, helping junior analysts respond faster (a general sketch of the idea follows this list).

  • Payload Summarization: Provides concise, actionable insights from complex data.

  • Investigation Guidance: Assists less-experienced analysts in making confident, informed decisions.
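As a general illustration of the plain-language alert idea (not Corelight’s implementation), the sketch below shows how a raw network detection notice might be rewritten for a junior analyst by an LLM; the OpenAI Python SDK and the model name used here are stand-in assumptions.

```python
# General illustration of translating a technical alert into plain language with an LLM.
# Not Corelight's implementation: the OpenAI SDK and model name are assumed stand-ins.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

raw_alert = (
    "zeek notice: SSL::Invalid_Server_Cert src=10.0.4.17 dst=203.0.113.8:443 "
    "msg='self signed certificate in certificate chain'"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, for illustration only
    messages=[
        {
            "role": "system",
            "content": "Rewrite network security alerts in plain language for a "
                       "junior analyst and suggest one concrete next step.",
        },
        {"role": "user", "content": raw_alert},
    ],
)

print(response.choices[0].message.content)
```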

Expanding Detection Capabilities

Dye also highlighted Corelight’s enhancements in:

  • Endpoint & Vulnerability Context: Enriches network telemetry with deeper threat intelligence.

  • YARA for Static File Analysis: Improves detection coverage against file-based threats (see the sketch after this list).

  • Custom AI Solutions for Financial Services: Addresses regulatory challenges with tailored AI models.
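For readers unfamiliar with YARA, the sketch below shows what static file analysis with YARA rules looks like in general, using the open-source yara-python library; the rule and file path are hypothetical examples, not Corelight’s detection content.

```python
# Minimal sketch of YARA-based static file analysis with the open-source yara-python
# library. The rule and file path are hypothetical; this is a general illustration,
# not Corelight's detection content.
import yara

# A toy rule that flags files containing a common PowerShell download pattern.
RULE_SOURCE = r"""
rule suspicious_powershell_downloader
{
    strings:
        $ps = "powershell" nocase
        $dl = "DownloadString" nocase
    condition:
        $ps and $dl
}
"""

rules = yara.compile(source=RULE_SOURCE)

# Scan a file on disk and report any matching rules.
for match in rules.match("/tmp/sample.bin"):
    print(f"Matched rule: {match.rule}")
```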

Dye’s Cybersecurity Expertise

With a background spanning McAfee, Citrix, and Symantec, Dye brings decades of leadership in infrastructure security, cloud security, and threat intelligence to Corelight’s mission of modernizing cyber defense.


As AI accelerates both attack sophistication and defensive innovation, organizations must adopt AI-augmented security tools to stay ahead of adversaries.

Is AI the New Piano Lessons? Why Parents Are Encouraging the Switch

Move over, coding: AI and Gen AI are now the hottest tech skills for children. Across India, kids as young as six are swapping mindless screen time for AI classes, signaling a major shift in what parents consider essential future-proofing education.

The Rise of AI-First Learning

Edtech startups like BrightCHAMPS, eduSeed, and Codevidhya are rolling out structured AI courses for the 6-15 age group, with curricula spanning:

  • Ages 6-8: Pattern recognition, block-based coding.

  • Ages 9+: Machine learning basics, chatbot development, ethical AI.

  • Teens: Python-driven AI projects, image generation, game design.

Priced between ₹250 and ₹600 per session, these programs (typically 30-150 hours) emphasize hands-on projects over theory, such as building AI chatbots or content generators.
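To ground what “building AI chatbots” can mean at the beginner end of these curricula, here is a toy, rule-based chatbot sketch in Python; actual course projects and tools vary, and this example is purely illustrative.

```python
# A toy, rule-based "chatbot" of the kind a beginner project might start with.
# Purely illustrative; real course projects and tools will differ.
RESPONSES = {
    "hello": "Hi there! What would you like to learn about today?",
    "what is ai": "AI is software that learns patterns from examples to make predictions.",
    "bye": "Goodbye! Keep experimenting.",
}

def reply(message: str) -> str:
    # Normalize the input and fall back to a default answer when nothing matches.
    return RESPONSES.get(
        message.strip().lower(),
        "I don't know that yet. Try asking something else!",
    )

if __name__ == "__main__":
    while True:
        user = input("You: ")
        print("Bot:", reply(user))
        if user.strip().lower() == "bye":
            break
```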

Why Parents Are Racing to Enroll Their Kids

  • Future-Readiness: 50% of parents believe schools aren’t preparing children for an AI-driven world (BrightCHAMPS survey).

  • Competitive Edge: Early exposure helps in Olympiads, hackathons, and college admissions.

  • Natural Curiosity: Kids want to understand the AI behind their daily tech—from Alexa to homework helpers.

By the Numbers:

  • BrightCHAMPS reports a 4x surge in AI course inquiries for ages 11-15.

  • eduSeed notes that 12% of revenue now comes from kids’ AI programs.

  • Codevidhya sees 40% higher engagement after launching AI bootcamps.

The Tier 2/3 Boom

Demand is exploding beyond metros. Startups are bridging the gap with:

  • Vernacular content.

  • Affordable group classes.

  • School partnerships (B2B) and direct-to-parent (B2C) models.

Schools Lag Behind

While parents push for AI education, most schools remain cautious:

  • Only pilots or extracurriculars offered so far (Naturenurture data).

  • No nationwide curriculum overhaul yet, creating a private edtech gold rush.

In the AI era, parents aren’t waiting for schools to adapt.

As one CEO put it, “Learning AI isn’t just about careers anymore; it’s about staying relevant.”

Smaller, Faster, Cheaper: Inside China's Latest Open AI Contender

China's AI sector continues to make waves with the release of GLM-4.5, a new open-source large language model (LLM) from startup Z.ai (formerly Zhipu).

The company claims its model is more cost-effective than competitors like DeepSeek, further intensifying the global AI competition.

What Z.ai’s New Model Offers

The company introduced three variants:

  • GLM-4.5 - A flagship model for high-performance tasks

  • GLM-4.5-Air - A lightweight, efficient version

  • GLM-4.5-Flash - A completely free model optimized for coding and reasoning

While similar to offerings from OpenAI, Google’s Gemini, and Anthropic’s Claude, GLM-4.5 stands out as fully open-source, a potential advantage for developers. However, concerns remain over data privacy and government oversight, given China’s strict regulations.
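For developers, “fully open-source” in practice means the weights can be downloaded and run locally. The sketch below shows the typical pattern with the Hugging Face transformers library; the repository name is an assumed placeholder, so check Z.ai’s official release for the exact model identifiers and hardware requirements.

```python
# Minimal sketch of loading an open-weight model with Hugging Face transformers.
# The repository name is an assumed placeholder; consult Z.ai's official release
# for the actual GLM-4.5 identifiers and hardware requirements.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "zai-org/GLM-4.5-Air"  # assumed identifier, for illustration only

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",    # spread layers across available GPUs/CPU
    torch_dtype="auto",   # keep the dtype the checkpoint was saved in
    trust_remote_code=True,
)

prompt = "Explain what an open-weight language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```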

Why the West Is Wary

Despite its open-source nature, GLM-4.5 faces skepticism outside China due to:

  • Data Privacy Risks - Chinese AI models often send user data back to servers in China, raising security concerns.

  • Censorship Practices - Models like DeepSeek have been shown to restrict politically sensitive content.

  • Government Backing - Z.ai is among China’s state-supported "AI Tigers," aligning with national tech ambitions.

The Bigger AI Arms Race

China’s rapid advancements (1,509 LLMs released in recent months) have prompted the U.S. to respond:

  • The Trump administration unveiled an AI Action Plan to maintain American dominance by cutting red tape and expanding AI integration in government.

  • OpenAI has previously warned about risks tied to Chinese AI models.

A Double-Edged Sword for Users

While GLM-4.5’s open-source model offers transparency, data control remains limited. Unlike privacy-focused alternatives (e.g., Proton’s Lumo), many AI systems, including Western ones, still exploit user data for training.


China’s AI push is forcing faster innovation globally, but adoption of its models in the West remains unlikely due to security and censorship concerns. For now, the AI race is heating up, with geopolitical implications growing stronger by the day.

Legacy silos slow you down. Intelligent migration fuels growth.

With DataManagement.AI, you gain:

  • Seamless Data Unification – Break down silos without disruption

  • AI-Driven Migration – Cut costs and timelines by 50%

  • Future-Proof Architecture – Enable analytics, AI, and innovation at scale

Outperform competitors or fall behind. The choice is yours.

Meta Hires ChatGPT Co-Creator to Lead New Superintelligence Lab

In a strategic move to bolster its artificial intelligence capabilities, Meta Platforms has named Shengjia Zhao, the key architect behind ChatGPT and GPT-4, as Chief Scientist of its newly formed Superintelligence Lab, CEO Mark Zuckerberg announced Friday.

Key Developments

  • Leadership Shift: Zhao, a former OpenAI research scientist, will define Meta’s AI research roadmap alongside Zuckerberg and Chief AI Officer Alexandr Wang (recruited from Scale AI).

  • Talent War Escalation: His hiring follows a wave of OpenAI researchers joining Meta, reflecting Zuckerberg’s aggressive recruitment to compete in advanced AI.

  • Lab Mission: The Superintelligence Lab will focus on evolving Meta’s Llama models and pursuing artificial general intelligence (AGI), operating independently of Yann LeCun’s FAIR research division.

Context & Strategy

  • Compensation Arms Race: Meta is luring top AI talent with unmatched Silicon Valley pay packages and startup acquisitions.

  • Open-Source Ambitions: Zuckerberg reaffirmed plans to develop "full general intelligence" and release it openly, a controversial approach drawing mixed reactions.

  • Course Correction: The push follows Llama 4’s tepid reception, with Meta now prioritizing breakthroughs to rival OpenAI and Google DeepMind.


As AI innovation accelerates, Meta’s latest moves signal a high-stakes bid to dominate the next frontier of technology, with ChatGPT’s co-creator now at the helm.

Why ChatGPT Solving CAPTCHAs Is a Cybersecurity Wake-Up Call

In a striking demonstration of AI's evolving capabilities, OpenAI's ChatGPT Agent has achieved what was once considered impossible: bypassing CAPTCHA verification, the very system designed to block automated bots.

Key Developments

  • Security Breach: The AI agent successfully cleared Cloudflare’s “I am not a robot” test during a task, even verbally acknowledging the step’s purpose.

  • Human-Like Reasoning: It remarked, “This step is necessary to prove I’m not a bot,” showcasing advanced contextual understanding.

  • Real-World Implications: The feat highlights how AI agents can now mimic human behavior to navigate multistep web interactions, undermining traditional bot defenses.

How It Happened

  • Reddit Discovery: User "logkn" shared screenshots of the Agent effortlessly clicking through CAPTCHA while converting a video, with live narration of its actions.

  • Controlled Autonomy: Though the Agent operates in a virtual OS with internet access, it requires user approval for real-world actions like purchases.

Why It Matters

  • Security Concerns: CAPTCHA systems, long the frontline against bots, may soon be obsolete against sophisticated AI agents.

  • AI’s Rapid Evolution: This incident underscores the blurring line between human and machine behavior online, prompting urgent debates about new verification methods.


As AI agents grow more autonomous, the tech industry faces a critical challenge: reinventing cybersecurity for the age of artificial intelligence.

Towards MCP: Pioneering Secure Collaboration in the Age of AI & Privacy

Towards MCP is a cutting-edge Model Context Protocol platform that lets you connect with any data source and apply intelligence in minutes, and helps you centrally manage MCP server and client configurations.
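For readers new to the Model Context Protocol, the sketch below shows what a minimal MCP server looks like using the open-source Python SDK (the mcp package); it illustrates the protocol in general, not the Towards MCP platform’s own API, and the tool is a hypothetical example.

```python
# Minimal sketch of an MCP server using the open-source Python SDK ("mcp" package).
# This illustrates the protocol in general; it is not the Towards MCP platform's API.
from mcp.server.fastmcp import FastMCP

# A server that an MCP-compatible client (e.g., an AI assistant) can connect to.
mcp = FastMCP("demo-data-source")

@mcp.tool()
def row_count(table: str) -> int:
    """Return the row count of a (hypothetical) table in the connected data source."""
    # A real server would query the underlying database or API here.
    fake_tables = {"customers": 1200, "orders": 5400}
    return fake_tables.get(table, 0)

if __name__ == "__main__":
    # Serve over stdio so a local MCP client can launch and talk to this process.
    mcp.run(transport="stdio")
```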

Your opinion matters!

We hope you loved reading this edition of our newsletter as much as we had fun writing it.

Share your experience and feedback with us below ‘cause we take your critique very critically. 

How did you like today's edition?


Thank you for reading

-Shen & Towards AGI team