- Towards AGI
- Posts
- Is This The Start Of AI Warfare?
Is This The Start Of AI Warfare?
US Military's New GenAI Tool Called 'Critical First Step'.
Here is what’s new in the AI world.
AI news: The First Shot in the AI War
Hot Tea: What If GenAI Could See All Your Data?
Open AI: Building AI Safely at Speed
OpenAI: OpenAI Suggests It Might Be
Meet the 100 Leaders Defining the Future of AI

US Military's New GenAI Tool Marks Pivotal Shift in Future Combat, Says Analyst
The recently introduced "GenAI" platform for U.S. military and Department of Defense employees marks an essential initial phase in modern combat's evolution, a defense analyst states.
This platform, GenAI.mil, utilizes Google's Gemini AI. Defense Secretary Pete Hegseth stated that it aims to provide troops with direct AI access to transform victory methods.
This week, the Pentagon added Elon Musk's xAI Grok models, permitting secure use for tasks with sensitive, unclassified data.
In a discussion, Emelia Probasco, a former Navy officer and Pentagon official now at Georgetown University, explained that the tool will train Defense Department staff on applying AI in daily tasks, readying them for deeper military AI integration.
Probasco said the tool will significantly affect daily operations. Before, personnel used inferior tools or even unauthorized home computers. Now, a more secure space lets them test these tools and learn their capabilities and limits.
While she doesn't believe platforms like GenAI fully alter warfare, Probasco considers it the vital first training step for proper use. The Defense Department has explicitly pushed for innovation and AI adoption in the past year, she noted.
The GenAI tool provides a testing environment for future, larger innovations. Responsible officials are determining its best uses through safe experiments so the U.S. is prepared and ahead of adversaries when conflict arises, she elaborated.
Probasco stated the Pentagon knows rivals like China are also developing AI. This month, President Trump partly lifted a Biden-era ban, allowing Nvidia to export advanced AI chips to China.
Congress is divided on this, with some viewing it as risky and others as tactical. Regardless, Probasco said evidence shows China is fast-testing AI across all warfare areas, not just chatbots, but for espionage targeting and sophisticated cyber-attacks.
A competitive race exists as both sides determine how to adopt this technology, she explained. However, the GenAI tool itself won't necessarily be the weapon providing a U.S. advantage.

The AI system that will offer a real military edge is in development, but it isn't something released for every service member's use, she assured. Using a chatbot for brainstorming or talking points won't win wars. More complex military systems employ generative and other advanced AI techniques.
Those systems have been underway for years, Probasco explained, and their rollout won't come with a major public announcement for widespread use.
Your Data's AI Moment of Truth is Coming
You are facing a landscape where sensitive data is everywhere and expanding rapidly. A new report reveals that unstructured data, duplicate files, and risky sharing habits are creating major headaches for your security team.
The findings indicate that generative AI tools like Microsoft Copilot are adding new layers of complexity, while persistent old issues like oversharing and poor data hygiene continue to expose your organization.
AI advances quickly, but your data security must move even faster. Generative AI is spreading across your enterprise, from customer service to marketing. It promises speed and innovation, but it also introduces new and unfamiliar security risks.
As your company rushes to adopt these tools, you may discover your data protection strategies are unprepared for the challenges AI creates.
GenAI is fueling smarter fraud for you, but broken teamwork is your real problem. Generative AI has made fraud faster, cheaper, and harder for you to detect.
You now face spoofed logins, vendor impersonation, invoice fraud, and deepfakes combined in sequences that mimic normal workflows.
Your defenses likely remain tied to single systems. Your training, manual verification, and email filtering continue to fail when attacks span multiple platforms. Nearly nine in ten of your peer organizations saw at least one safeguard break down during a major incident.
Your employees are racing to build custom AI apps despite the security risks you face. You are seeing a 50% increase in GenAI platform usage among your end-users, driven by employee demand for tools to develop custom applications.
Despite a shift toward safe enablement, the growth of shadow AI, unsanctioned apps used by your staff, continues to compound your potential security risks, with over half of all current app adoption estimated to be shadow AI.
Your employees uploaded over a gig of files to GenAI tools last quarter. In Q2 2025, an analysis found that sensitive data from organizations like yours is being exposed through GenAI tools, a fear many of your security leaders share but find hard to measure.
In this data, 22% of files and 4.37% of prompts contained sensitive information, including your source code, access credentials, proprietary algorithms, M&A documents, and internal financial records.
GenAI is everywhere in your workplace, but your security policies likely haven’t caught up. Nearly three out of four European IT professionals say staff like yours are already using generative AI at work, but just under a third of organizations have formal policies in place.
63% of leaders are extremely concerned that generative AI could be turned against them, yet only 18% are investing in deepfake-detection tools. This disconnect leaves your business exposed as AI-powered threats evolve.
You know GenAI is risky, so why aren’t you fixing its flaws? Even though GenAI threats are a top concern for your security team and leadership, your current level of testing and remediation isn’t keeping pace.
Only 66% of organizations like yours regularly test their GenAI-powered products. 48% of professionals believe a “strategic pause” is needed to recalibrate defenses, but that pause isn’t happening for you.
Your users lack control as major AI platforms share personal info with third parties. As generative AI becomes part of daily life, your users are often unaware of what personal data these tools collect, how it’s used, and where it ends up.

Researchers analyzing leading platforms found issues with transparency in privacy practices and the scope of data collection and third-party sharing that affect you.
Many of you are rushing into GenAI deployments, frequently without a security net. 70% of organizations view the rapid pace of AI development as their leading security concern, followed by lack of data integrity and trustworthiness.
A third of respondents indicate GenAI is already being integrated or transforming their operations, often without adequate safeguards.
Your CISO is likely watching the GenAI supply chain shift closely. In supply chain operations, GenAI is gaining traction, but security leaders remain uneasy. 97% are using some form of GenAI, but only a third use tools designed for supply chain tasks.
Nearly half worry about how their data is used or shared with GenAI, and 40% don’t trust the answers it gives.
94% of firms like yours say pentesting is essential, but few are doing it right. You are particularly struggling with vulnerabilities within your GenAI web apps. 95% of firms have pentested these apps in the last year, with 32% of tests finding serious vulnerabilities.
Of those findings, a mere 21% were fixed, leaving risks like prompt injection and data leakage in your systems.
GenAI is turning your employees into unintentional insider threats. The amount of data your business shares with GenAI apps has exploded, increasing 30x in one year. Your average organization now shares over 7.7GB of data monthly with AI tools, including sensitive source code, passwords, and intellectual property.
With 75% of your enterprise users accessing applications with GenAI features, you now have a bigger issue: the unintentional insider threat.

Here are 8 steps for you to secure GenAI integration in financial services. GenAI offers your institution enormous opportunities, particularly in analyzing unstructured data, but also increases security risks.
It can organize oceans of information to improve your operations, maximize your markets, and enhance customer experience. Those analyzed datasets can reveal information about fraud and threats, presenting remarkable security opportunities for you.
One in ten GenAI prompts from your users puts sensitive data at risk. Despite their potential, you may hesitate to fully adopt GenAI tools due to concerns about sensitive data being shared and used to train systems.
While most employee use is straightforward, summarizing text or editing blogs, 8.5% of prompts are a concern and put your sensitive information at risk.
Malicious actors’ GenAI use has yet to match the hype for your defenses. Generative AI has helped lower the barrier to entry for malicious actors, making them more efficient at creating deepfakes and mounting phishing campaigns against you.
For now, though, it hasn’t made attackers “smarter” nor completely transformed the cyber threats you face.
Anaconda and Docker's Play: Open-Source Images for Faster, More Secure AI Dev
Docker's decision to release over 1,000 of its Docker Hardened Images as free, open-source software represents a major step forward in speeding up AI software development and securing the software supply chain, according to David DeSanto, CEO of Anaconda.
He explained that these pre-configured, security-fortified containers, combined with Anaconda’s tools, provide developers with a trusted foundation, addressing one of their biggest challenges: ensuring they use secure, trustworthy components.
This is especially critical in AI, where DeSanto noted up to 80% of projects fail to reach production, often due to insecure components and strict governance requirements slowing down prototyping.
The open-sourcing of these images under the Apache 2.0 license means developers and organizations can freely use and build upon them.
Anaconda's partnership with Docker integrates its Python-based development and environment management capabilities with these secure containers.
This alliance helps developers create portable AI applications more quickly and gives them confidence that their work will meet production security standards, thereby accelerating the journey from prototype to deployment.
Ultimately, DeSanto said, the collaboration aims to help the global community of developers build trusted, scalable, and secure AI workloads faster.
The Unfixable Hack? OpenAI Warns AI Browsers May Always Be at Risk
OpenAI acknowledges that prompt injection attacks, a method of manipulating AI agents through hidden malicious instructions, remain an enduring threat as it strengthens the security of its Atlas AI browser.
In a recent blog post, the company stated that, similar to web scams, this vulnerability is unlikely to ever be completely eliminated. The introduction of “agent mode” in ChatGPT Atlas has notably expanded the potential avenues for such attacks.
Since its launch in October, security researchers have demonstrated that even text in a Google Docs file could alter the browser’s behavior.
This aligns with broader industry warnings, including from the U.K.’s National Cyber Security Centre, which advises focusing on risk reduction rather than expecting a total solution.
OpenAI’s defensive strategy involves a proactive, rapid-response cycle and a unique tool: an “LLM-based automated attacker.”
This AI bot, trained with reinforcement learning, acts as a simulated hacker to discover novel attack methods by repeatedly testing malicious prompts against the target AI in a controlled environment.

This internal simulation provides insights into the AI’s reasoning that external attackers lack, theoretically allowing OpenAI to identify and patch vulnerabilities faster. The company reports this method has uncovered complex, multi-step attack strategies missed by human testers.
While this automated testing is a common AI safety practice, OpenAI combines it with large-scale evaluations and faster updates to harden Atlas.
The company recommends users mitigate risk by limiting an agent’s access, requiring confirmation for sensitive actions, and providing specific instructions rather than broad autonomy.
However, security experts like Rami McCarthy of Wiz note that reinforcement learning is just one layer of defense. He highlights the inherent risk of agentic browsers, which combine moderate autonomy with high access to sensitive data.
McCarthy suggests that for many everyday uses, the current risk may outweigh the benefits, a balance that will need to evolve as the technology matures.
Journey Towards AGI
Research and advisory firm guiding industry and their partners to meaningful, high-ROI change on the journey to Artificial General Intelligence.
Know Your Inference Maximising GenAI impact on performance and Efficiency. | Model Context Protocol Connect AI assistants to all enterprise data sources through a single interface. |
Your opinion matters!
Hope you loved reading our piece of newsletter as much as we had fun writing it.
Share your experience and feedback with us below ‘cause we take your critique very critically.
How's your experience? |
Thank you for reading
-Shen & Towards AGI team
