The Cyber Cold War Just Got Hot, Thanks to GenAI
Who Will Win the AI Cyber War?
Here is what’s new in the AI world.
AI news: It’s Complicating the Battle for Defenders
Hot Tea: Forget the Models, Fix the Culture
Open AI: The Blueprint for AI's Future?
OpenAI: OpenAI, xAI, and Meta Get Failing Grades
Is GenAI a Net Win for Security? It’s Complicating the Battle for Defenders.
Generative AI (GenAI) is revolutionising work, but its role in cybersecurity presents a dual reality: while defenders are exploring its potential, adversaries are already leveraging it, making the threat landscape more complex and difficult to quantify.
How Adversaries Are Using GenAI
Threat actors are integrating AI into their operations, though current applications often remain experimental and require significant human oversight. Key trends include:
Evasion & Social Engineering: Using AI to craft more convincing phishing emails and lures, and embedding deceptive prompts in malware or DNS records to fool AI-based analysis tools.
Lowered Barriers: The proliferation of open, "uncensored" AI models has made advanced capabilities accessible to a broader range of threat actors, not just well-funded state-sponsored groups.
Invisible Impact: Much adversarial AI usage, like fixing code errors, researching targets, or "vibe coding", leaves no obvious trace in the final malicious software, making its true prevalence hard to measure.
Looking ahead, as models become more efficient and accessible, adversaries are likely to gain a temporary advantage, using AI to scale attacks like automated vulnerability hunting and persistent intrusion attempts.
The Vulnerability Hunting Arms Race
GenAI is supercharging the discovery of software vulnerabilities, a tool used by both attackers and defenders.
It can rapidly analyse millions of lines of code in open-source projects or assist in triaging outputs from automated testing (fuzzing).
This creates a surge in vulnerability discovery, with outcomes depending on the user's intent: responsible disclosure or weaponisation for the black market.
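To make this concrete, here is a minimal sketch of how a defender might hand fuzzer crash reports to a chat model for a first-pass assessment. It assumes an OpenAI-compatible chat completions API; the model name, prompt wording, and sample report are illustrative rather than taken from any specific product mentioned above.

```python
# Minimal sketch: LLM-assisted triage of fuzzer crash reports.
# Assumes an OpenAI-compatible chat API; any hosted or local model with a
# similar interface would work. The model name below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TRIAGE_PROMPT = """You are assisting a security engineer.
Given the fuzzer crash report below, answer in three short lines:
1) Likely root cause (e.g. heap overflow, use-after-free, null dereference)
2) Plausibly exploitable? (yes / no / unclear)
3) Suggested next step for a human analyst."""

def triage_crash(crash_report: str) -> str:
    """Ask the model for a first-pass assessment of a single crash report."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": crash_report[:8000]},  # truncate very long reports
        ],
        temperature=0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = "ASAN: heap-buffer-overflow READ of size 4 in parse_header() at parser.c:212 ..."
    print(triage_crash(sample))
```

The pattern, not the vendor, is the point: a responsible researcher and an attacker could run essentially the same loop over thousands of crashes, which is exactly what makes this an arms race.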
How Defenders Can Leverage GenAI
For security teams, GenAI is a powerful force multiplier to address overwhelming data volumes and skill shortages.
Threat Intelligence Triage: AI can parse massive streams of threat data from hundreds of sources, extract key insights, and help automate defensive actions.
Incident Response: During an attack, AI can quickly sift through reams of logs to pinpoint malicious activity (like lateral movement), allowing responders to focus on critical evidence; a minimal sketch of this kind of log triage follows this list.
Proactive Security: Integrating AI into the development lifecycle to analyse code commits for common vulnerabilities can prevent bugs from reaching production.
Red Teaming: Security testers can use AI to prototype attacks and identify weaknesses more efficiently.
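As referenced in the incident-response item above, one sensible pattern is to pre-filter noisy logs down to candidate lateral-movement events before any of them reach a model, keeping prompts small and evidence handling auditable. The event IDs, JSON log schema, and file name below are illustrative assumptions, not a specific product's format.

```python
# Minimal sketch: narrowing raw endpoint logs to candidate lateral-movement
# events before asking an LLM to summarise them for a responder.
# The log schema, event IDs, and file name are illustrative assumptions.
import json

SUSPICIOUS_EVENT_IDS = {"4624", "4648", "5140"}  # network logon, explicit creds, share access

def filter_candidates(log_lines):
    """Yield only auth/share events that commonly accompany lateral movement."""
    for line in log_lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines rather than failing mid-incident
        if str(event.get("event_id")) in SUSPICIOUS_EVENT_IDS:
            yield event

def build_prompt(events):
    """Format the filtered slice so a model can highlight likely malicious hosts."""
    body = "\n".join(json.dumps(e) for e in events[:200])  # keep the prompt small
    return (
        "These Windows security events were pre-filtered as possible lateral movement.\n"
        "Group them by source host, flag anomalies, and suggest which hosts a responder "
        "should isolate first.\n\n" + body
    )

if __name__ == "__main__":
    with open("security_events.jsonl") as f:  # hypothetical export of endpoint logs
        candidates = list(filter_candidates(f))
    prompt = build_prompt(candidates)
    # `prompt` would then go to whichever chat model the team trusts, for example
    # via the same client shown in the fuzz-triage sketch earlier.
    print(f"{len(candidates)} candidate events selected for model review")
```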
The Future: Agents, Integration, and the Human Factor
Agentic AI: The rise of autonomous AI agents will create a new dynamic. Adversaries could deploy persistent agents to hunt for victims or vulnerabilities. Defenders can use agents as 24/7 analysts to monitor logs, secure endpoints, and filter phishing attempts.
Tool Integration: Protocols like the Model Context Protocol (MCP) allow AI to connect with various security tools and datasets, enabling more structured assistance in tasks like malware analysis; see the sketch after this list.
The Human Element: Ultimately, GenAI's effectiveness depends on human expertise. Knowledgeable professionals can use it to achieve remarkable gains in productivity and security. In contrast, those without foundational skills may produce unreliable or insecure outputs. As noted in recent reports, even advanced AI agents still require human guidance to execute complex attacks effectively.
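As a rough illustration of the tool-integration point above, the sketch below exposes two tiny malware-triage helpers as MCP tools using the protocol's reference Python SDK (installed with `pip install mcp`); the local hash list is a stand-in for whatever intelligence source a team actually uses.

```python
# Minimal sketch: exposing local malware-triage helpers as MCP tools so an
# MCP-capable assistant can call them in a structured way.
# Requires the reference Python SDK: pip install mcp
import hashlib

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("malware-triage")

KNOWN_BAD = {
    # sha256 -> label; illustrative entry only (hash of an empty file)
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855": "empty-file-test",
}

@mcp.tool()
def hash_file(path: str) -> str:
    """Return the SHA-256 of a local file so the assistant can pivot on it."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

@mcp.tool()
def lookup_hash(sha256: str) -> str:
    """Check a SHA-256 against the local known-bad list."""
    return KNOWN_BAD.get(sha256.lower(), "not found in local intel")

if __name__ == "__main__":
    mcp.run()  # serves over stdio so an MCP-capable client can discover and call the tools
```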
In conclusion, GenAI is a transformative but dual-use technology in cybersecurity. Its evolution will favour those, whether attackers or defenders, who can best combine its capabilities with deep human skill and strategic insight.

Unlocking GenAI's Potential Demands a Learning Culture Revolution
A truly effective Generative AI (Gen AI) learning culture goes beyond formal training; it must become embedded in your organisation's identity.
This means creating an environment where continuous learning is part of everyday work, seen as essential for both personal growth and company success.
Such a culture encourages curiosity, innovation, and adaptability, which are vital for navigating today's rapidly evolving business landscape and leveraging transformative technologies like Gen AI.
Five Essential Elements for a Gen AI Learning Culture
To fully harness the potential of Gen AI, your organisation must cultivate a learning culture built on these key principles:
1) Strategic Alignment: Your Gen AI learning initiatives must directly support the company's strategic objectives. Simply introducing AI tools without clear business relevance leads to limited adoption and impact.
For example, if your goal is to enhance customer personalisation, training should focus on relevant AI applications like natural language processing and sentiment analysis, making the learning purpose-driven and meaningful for employees.
2) Leadership Commitment: Leaders must actively champion Gen AI learning. When executives participate in workshops, openly support AI initiatives, and discuss both the opportunities and ethical considerations, it signals that AI fluency is a priority.
This top-down commitment motivates employees to engage deeply with their own development.
3) Collaborative Learning: Gen AI education should be a shared journey. Encourage peer-led workshops, cross-functional AI project teams, and internal hackathons.
Collaboration allows employees to exchange knowledge, tackle implementation challenges together, and learn from diverse perspectives, strengthening both skills and community.
4) Accessible Resources: Provide easy access to high-quality, role-specific Gen AI resources. This includes online courses on prompt engineering and model deployment, internal wikis for shared code and best practices, and regular knowledge-sharing sessions.
Empowering employees to learn at their own pace ensures they can find and apply what they need.
5) Recognition and Celebration: Acknowledge and celebrate Gen AI learning milestones and successes. From public shout-outs for innovative AI applications to formal awards for impactful implementations, recognition reinforces the value of continuous development and highlights your organisation's commitment to technological growth.
Case in Point: A Healthcare Provider’s Transformation
A regional healthcare provider successfully integrated Gen AI by redesigning its learning culture with a consultant's guidance. They began with a detailed needs assessment to identify role-specific skill gaps.
Tailored learning paths were created: medical staff trained on AI for diagnostics and treatment plans, while administrative teams learned to automate tasks and improve patient communication.
Learning was embedded into daily workflows through on-demand modules and dedicated "AI exploration days." Leadership led by example, with the CEO participating in prompt engineering workshops and ethical discussions.
The organisation also established AI learning circles and an online forum to foster collaboration.
Within 12 months, the results were clear: administrative workload was reduced by 33%, diagnostic accuracy improved by 18%, and overall efficiency increased by 24%.
A culture of continuous AI-driven improvement took root, leading to innovative patient care solutions and more engaged, proficient teams.
Key Takeaways for Your Organisation
Building a sustainable Gen AI learning culture is an ongoing effort, not a one-time program. Success depends on:
Making Gen AI learning a strategic priority with dedicated resources.
Having leaders actively participate in and advocate for AI education.
Creating structured opportunities for collaborative learning and knowledge sharing.
Continuously measuring the impact of learning initiatives and adapting as needed.
By embracing these principles, your organisation can foster a culture where Gen AI learning drives innovation, adaptability, and long-term competitive advantage in an increasingly AI-driven world.
NVIDIA's Open AI Gambit Takes Centre Stage at NeurIPS
At the NeurIPS AI conference, NVIDIA announced significant new contributions to the open-source community, expanding its portfolio of open AI models, datasets, and tools across both digital and physical AI domains.
A Major Focus: Open-Source Autonomous Driving
A standout release is NVIDIA DRIVE Alpamayo-R1 (AR1), described as the world’s first industry-scale, open-source "reasoning vision-language-action" (VLA) model for autonomous vehicle (AV) research.
Unlike previous systems, AR1 integrates chain-of-thought AI reasoning with path planning, enabling vehicles to better understand and navigate complex real-world scenarios, such as pedestrian-heavy intersections or unexpected road obstacles, by reasoning through each step like a human driver.
This model is now available for researchers to customise for non-commercial use on platforms like GitHub and Hugging Face.
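For researchers who want to experiment, fetching the released files from Hugging Face would typically look like the short sketch below. The repository ID is a placeholder, since the exact path is not stated here, and the non-commercial licence terms still apply.

```python
# Minimal sketch: pulling an openly released model's files from Hugging Face
# for local, non-commercial experimentation. The repo_id is a placeholder;
# substitute the actual AR1 repository published by NVIDIA.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="nvidia/PLACEHOLDER-alpamayo-r1",  # hypothetical identifier
    local_dir="./alpamayo_r1",
)
print(f"Model files downloaded to: {local_dir}")
```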
Empowering Broader Physical AI Development
NVIDIA also introduced the Cosmos Cookbook, a comprehensive guide for developers working on any physical AI application, from robotics to simulation.
It provides step-by-step instructions for data curation, synthetic data generation, and model training. New tools built on the Cosmos platform include:
LidarGen: A model that generates synthetic lidar sensor data for AV simulation.
Omniverse NuRec Fixer: A tool that rapidly corrects artifacts in neural reconstruction data.
Cosmos Policy: A framework for converting large video models into robust robot control policies.
ProtoMotions3: An open-source framework for training physically simulated digital humans and robots.
Major industry partners, including Voxel51, 1X, and Figure AI, are already using these foundational models to advance their own physical AI applications.
Advancements in Digital AI Tools
On the digital AI front, NVIDIA expanded its Nemotron family of open models and its NeMo toolkit with new releases:
MultiTalker Parakeet: A speech recognition model for understanding multi-speaker, overlapping conversations.
Sortformer: A real-time model for distinguishing between different speakers in audio.
Nemotron Content Safety Reasoning: A reasoning-based model for enforcing custom AI safety policies.
New Synthetic Datasets: Including an audio dataset for training safety guardrails.
Open-Source Libraries: Such as NeMo Gym for reinforcement learning and the NeMo Data Designer Library for generating high-quality synthetic datasets.
Companies like CrowdStrike, Palantir, and ServiceNow are leveraging these Nemotron and NeMo tools to build secure, specialised AI agents.
Showcasing Cutting-Edge Research
NVIDIA researchers are presenting over 70 papers at NeurIPS. Highlights include advancements in language models, such as:
Audio Flamingo 3: A large audio model capable of reasoning across speech, sound, and music for up to 10-minute segments.
Efficient Model Compression: New techniques like Minitron-SSM that compress hybrid AI models without sacrificing accuracy.
Latency-Optimal Models: The Nemotron-Flash architecture, designed for optimal speed and accuracy in small language models.
Extended Training Methods: ProRL (Prolonged Reinforcement Learning), a technique that enhances reasoning capabilities in large language models through longer training periods.
These initiatives reinforce NVIDIA's commitment to open-source AI, recently recognised by an independent "Openness Index" that rated the NVIDIA Nemotron family among the most transparent and permissively licensed in the industry.
By providing these advanced resources, NVIDIA aims to accelerate innovation across virtually every field of research.

Are AI Giants Sacrificing Safety for Speed? New Report Suggests Yes
A new report from the Future of Life Institute has issued a stark warning about the safety practices of leading artificial intelligence companies, stating that models developed by OpenAI, xAI, Anthropic, and Meta are failing to meet emerging international safety standards.
The institute's AI Safety Index, compiled by an independent panel of AI and ethics experts, suggests the industry is advancing faster than it can implement adequate safeguards.
The report accuses major tech firms of prioritising speed and market dominance over ensuring the stability and safety of increasingly powerful AI systems.
According to the study, no leading AI lab currently meets the "robust governance and transparency standards" that experts deem necessary for the safe development of next-generation models.
The authors warn that this oversight gap could leave societies vulnerable to a range of harms, from widespread misinformation to more extreme risks involving uncontrolled AI behaviour.
Despite recent uproar over AI-powered hacking and chatbot interactions linked to psychosis and self-harm, US AI companies remain less regulated than restaurants and continue to lobby against binding safety standards.
The report's publication coincides with rising public anxiety about AI's potential dangers, including cases linking chatbot use to self-harm and suicide that have spurred global calls for stricter oversight.
In the evaluation, companies like Anthropic, OpenAI, Meta, and xAI scored poorly on measures of accountability and transparency. The report found limited disclosure about how these firms test for bias, manage safety incidents, or plan to control advanced autonomous systems.

In contrast, smaller European and Asian research labs were noted for being more transparent with their safety documentation and public risk assessments.
Industry responses to the findings were mixed. A Google DeepMind spokesperson stated the company would “continue to innovate on safety and governance at pace with capabilities.” xAI, founded by Elon Musk, offered a terse, seemingly automated reply: “Legacy media lies.”
The report underscores a growing debate within the AI community about balancing innovation with necessary restraint.
It concludes that the AI race is progressing faster than safety measures can keep up and warns that unless major companies overhaul their governance structures, the gap between capability and control will continue to widen, potentially with significant consequences.
Journey Towards AGI
A research and advisory firm guiding industry and its partners to meaningful, high-ROI change on the journey to Artificial General Intelligence.
Know Your Inference: Maximising GenAI impact on performance and efficiency.
Model Context Protocol: Connect AI assistants to all enterprise data sources through a single interface.
Your opinion matters!
We hope you enjoyed reading this issue of our newsletter as much as we enjoyed writing it.
Share your experience and feedback with us below, because we take your critique seriously.
Thank you for reading
-Shen & Towards AGI team