A thought leadership platform to help the world navigate towards Artificial General Intelligence (AGI). We are committed to navigating the path towards AGI by building a community of innovators, thinkers, and AI enthusiasts.
Whether you're passionate about machine learning, neural networks, or the ethics surrounding GenAI, our platform offers cutting-edge insights, resources, and collaborations on everything AI.
What to expect from Towards AGI:
- Know Your Inference (KYI): Ensuring your AI-generated insights are accurate, ethical, and fit for purpose
- Open vs Closed AI: Expert analysis to help you navigate the open-source vs closed-source debate
- GenAI Maturity Assessment: Evaluate and audit your AI capabilities
- Expert Insights & Articles: Stay informed with deep dives into the latest AI advancements

But that’s not all!
We are also training specialised AI Analyst Agents that let CxOs interact directly and seek answers to their most pressing questions. No more waiting weeks for a Gartner analyst appointment: you’ll be just a prompt away from the insights you need to make critical business decisions. Watch this space!
Visit us at https://www.towardsagi.ai to be part of the future of AI. Let’s build the next wave of AI innovations together!
TheGen.AI News
After Developer Layoffs, Netflix Doubles Down on Gen AI for Games
A Netflix executive recently disclosed that the company's gaming division is heavily investing in generative AI to “boost development speed and introduce unique, innovative gaming experiences that will engage and inspire players.” This announcement comes only a few weeks after Netflix closed its high-profile game studio and laid off several developers.
Mike Verdu, who now serves as Netflix’s VP of Generative AI for Games, shared his enthusiasm in a LinkedIn post announcing his new role, stating, “I’m thrilled to be driving a transformative moment in game development and player experiences with generative AI. I haven’t felt this energized about the gaming industry since the 90s, an era when every new game release seemed to push boundaries. It was an amazing period of creative breakthroughs, and I believe we’re entering a similar phase now, marked by rapid innovation and surprises for players every few months.”
In recent years, Netflix has ventured into gaming, allowing users to stream games and acquiring studios to develop exclusive titles. Among its ambitious projects was Team Blue, an internal studio composed of industry veterans from titles like Call of Duty, God of War, and Halo. However, Netflix recently shut down Team Blue, resulting in 35 layoffs, as first reported by Game File.
Verdu addressed media speculation about Netflix Games' restructuring, clarifying in his LinkedIn post that recent changes were part of a "planned transition."
While the exact direction for generative AI in Netflix's game development remains unclear, the technology offers potential for creating 3D models, concept art, voice acting, or in-game dialogue. More experimental applications include generative AI engines capable of real-time environment creation, though these are still in early stages.
Generative AI's integration into game development has sparked concerns among artists, developers, and voice actors, as the technology relies on human labor for training and could potentially displace creative jobs. However, Verdu emphasized his “creator-first vision for AI,” envisioning it as a tool that empowers creators rather than replaces them. He believes AI will allow large teams to work faster and provide small teams with unprecedented capabilities.
OWASP Expands GenAI Security Measures to Combat Rising Deepfake Threats
Deepfake and generative AI attacks are becoming increasingly common, and a surge in such incidents looks likely. AI-generated text, for example, now appears frequently in email, and security firms are developing tools to detect machine-generated messages. According to one report, human-written email has dropped to 88% of the total, while content generated by large language models (LLMs) now makes up around 12%, up from approximately 7% in late 2022.
To help organizations strengthen defenses against AI-based threats, the Top 10 for LLM Applications & Generative AI group within the Open Worldwide Application Security Project (OWASP) released a set of guidelines on October 31. Alongside its previous AI cybersecurity and governance checklist, OWASP introduced a guide for managing deepfake incidents, a framework for establishing AI security centers of excellence, and a curated database of AI security tools.
While OWASP’s original Top 10 guide was aimed at companies creating AI models and services, this new guidance targets businesses that use AI. Co-project lead Scott Clinton notes that these organizations "want to adopt AI safely, guided by best practices — it’s essential for competitiveness.” He adds that with competitors embracing AI, companies need to find secure, effective ways to follow suit, without security becoming a barrier.
Illustrating the real-world impact of these threats, security vendor Exabeam recently experienced a deepfake incident. A candidate for a senior security analyst role had passed initial screenings, but during the final interview, Jodi Maas, Exabeam's GRC team lead, noticed irregularities. Although the interview began normally, it quickly became evident that the candidate’s responses seemed digitally altered: their movements were minimal, the audio didn’t align with lip movements, and there was a lack of expression. Maas completed the interview but shared her concerns with Exabeam's CISO, Kevin Kirkwood, who concluded the interviewee was likely a deepfake.
This incident prompted Exabeam to revise its security procedures, including training HR and advising employees to be cautious on video calls. As a humorous precaution, Kirkwood even asked a reporter to turn on their camera midway through an interview to confirm they were human.
The prevalence of deepfakes has IT professionals concerned. In a survey by Ironscales, nearly half of respondents (48%) expressed strong concern over deepfakes, with 74% expecting them to be a significant threat. Ironscales CEO Eyal Benishti warns that while today’s deepfakes may still be detectable, future improvements will make them harder to spot, making human training alone insufficient.
Kirkwood of Exabeam envisions an escalating battle between detection technologies and deepfake advancements, as each tries to outpace the other. He looks forward to tools that could be integrated into security systems to automatically detect deepfake elements.
Clinton from OWASP concurs. Rather than relying on employees to detect fake video chats, organizations should implement structures for verifying identity, particularly in high-stakes interactions, and establish incident response protocols. “Training people to spot deepfakes isn’t practical,” Clinton says, emphasizing that objective, technology-driven solutions are necessary. He adds that OWASP has developed practical steps combining technology and procedures to address this growing threat.
TheOpensource.AI News
How Open Source AI Can Bolster U.S. Leadership and Global Security
Meta’s open-source Llama models are now widely used by researchers, entrepreneurs, developers, and government entities. Notably, Meta has extended access to Llama to U.S. government agencies, including those focused on defense and national security, as well as private sector partners assisting these efforts. Collaborating companies include Accenture Federal Services, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI, and Snowflake.
Oracle is leveraging Llama to synthesize aircraft maintenance documents, enabling technicians to diagnose issues more quickly, thus reducing repair times and enhancing fleet readiness. Scale AI is adapting Llama to assist national security teams in operational planning and identifying potential adversary weaknesses. Lockheed Martin has integrated Llama into its AI Factory, using it for code generation, data analysis, and process improvements.
Amazon Web Services and Microsoft Azure are making Llama accessible to government agencies via secure cloud solutions for sensitive data. Meanwhile, IBM’s watsonx platform brings Llama to national security agencies in self-managed environments.
These ethical and responsible applications of open-source AI like Llama are designed to enhance U.S. security and prosperity while establishing American-led standards in the global AI landscape. Meta, as a U.S.-based company rooted in the nation’s entrepreneurial and democratic values, aims to support America’s safety, security, and economic interests, as well as those of allied nations.
Large language models like Llama can support numerous national security tasks by analyzing vast data, streamlining logistics, tracking terrorist financing, and fortifying cyber defenses. For decades, open-source systems have been essential to the technological advancements of the U.S. military and in setting global technology standards. They have expedited defense research, improved security protocols, and bridged communication gaps between systems.
As economic strength, innovation, and job growth are increasingly linked to national security, widespread adoption of American open-source AI models aligns with both economic and security goals. Other nations, including U.S. competitors like China, recognize this connection and are rapidly advancing their own open-source models to gain a competitive edge.
Meta believes that the success of American open-source models benefits both the U.S. and the global democratic community. As open-source AI gains traction, an international standard for these models will likely emerge, similar to Linux and Android. This standard will influence future AI development, becoming integral to technology, infrastructure, manufacturing, global finance, and e-commerce.
To ensure that this standard upholds openness, transparency, and accountability, American leadership in setting high standards is crucial. This is especially vital for ensuring that nations use AI in ways consistent with international law and principles such as those outlined in the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.
Meta envisions a “virtuous cycle” in which the U.S. maintains its technological lead while promoting responsible and ethical AI practices that serve the strategic interests of America and its allies. Open-source AI models can foster this by accelerating innovation, reducing costs, and creating superior products through global developer contributions.
Chinese Military AI, ChatBIT, Built on Meta’s Open-Source Llama Model Nears GPT-4 Performance
Chinese researchers affiliated with China’s People’s Liberation Army (PLA) have developed an AI model named ChatBIT, tailored for military use and built on Meta’s open-source Llama model, according to a report by Reuters. Some of these researchers are linked to the PLA’s Academy of Military Science (AMS), the organization’s primary research body.
Information from three academic papers and multiple analysts confirms that ChatBIT is based on Meta’s Llama 13B large language model (LLM), which has been adapted to support intelligence gathering and analysis, enabling military planners to leverage it for operational decisions.
A Reuters-cited paper notes that ChatBIT is “optimized for dialogue and question-answering in military contexts” and reportedly performs at approximately 90% of the level of OpenAI’s GPT-4. However, the paper does not specify how this performance comparison was conducted or whether the model has been deployed in real-world scenarios. By building on an open-source model, ChatBIT could achieve benchmark scores comparable to those of current American AI models.
“This is the first significant evidence of PLA military experts in China systematically researching and utilizing open-source LLMs, particularly Meta’s, for military aims,” said Sunny Cheung, an Associate Fellow at the Jamestown Foundation, a D.C.-based think tank focused on China’s emerging dual-use technologies, including AI. Although Meta’s license restricts Llama’s use for military purposes, its open-source status makes enforcing such limitations challenging.
In response, Meta noted that ChatBIT’s use of the Llama 13B LLM, which it described as an “outdated version” since it is already working on Llama 4, is largely inconsequential, particularly given China’s multi-trillion-dollar investment in AI advancements. Additionally, experts pointed out that ChatBIT’s training involved only 100,000 military dialogue records, a minimal amount compared to the trillions of data points used by leading models.
While some experts question whether this limited data set is sufficient for effective military AI training, ChatBIT could merely be a proof of concept, with Chinese military research institutions potentially planning more expansive models. The Chinese government’s publication of these research papers may also signal to the U.S. its willingness to leverage AI for technological advantage on the world stage.
Regardless of the project’s current scope, it has raised concerns in Washington about the use of American open-source technologies by adversaries for military purposes. In response, the U.S. is considering broader export controls on China and exploring restrictions on open-source and open-standard technologies, such as RISC-V, to limit China’s access to these resources. The U.S. government is also working to prevent American entities from investing in such initiatives.
TheClosedsource.AI News
OpenAI Enters Talks with Regulators for Nonprofit-to-For-Profit Transition
OpenAI is progressing in its efforts to transition from a nonprofit to a for-profit entity, engaging in initial discussions with regulators as part of the process. The organization is in early talks with the California attorney general's office regarding the procedural aspects of this shift, according to sources familiar with the matter. Regulators are expected to carefully examine the valuation of OpenAI’s intellectual property portfolio, including its ChatGPT application.
The attorney general’s office in Delaware has also reached out to OpenAI about this transition, as noted in a letter to the company.
Founded in 2015 with a mission to develop safe and beneficial AI, OpenAI’s potential shift to a for-profit model represents a move toward a traditional business structure, which may appeal more to investors but could raise questions about its commitment to its original mission.
OpenAI did not comment on the regulatory discussions but emphasized that the nonprofit would still exist post-restructuring. OpenAI nonprofit board chairman Bret Taylor stated that “any potential restructuring would ensure the nonprofit continues to exist and thrive” while receiving fair value for its stake in the for-profit entity, allowing it to further its mission.
In 2019, OpenAI established a capped-profit subsidiary to support the high costs of developing AI models. In 2023, CEO Sam Altman was briefly dismissed and later reinstated by the nonprofit board amid disagreements over balancing AI safety with commercial objectives.
OpenAI’s intellectual property, including its proprietary ChatGPT, is highly valuable and distinguishes it from typical nonprofits. In California, OpenAI has initiated discussions with Attorney General Rob Bonta’s office, intending to submit restructuring details after finalizing its proposal.
A spokeswoman for Bonta’s office stated that it is “committed to protecting charitable assets for their intended purpose” but did not comment specifically on OpenAI’s discussions.
OpenAI intends to restructure as a public benefit corporation, a move that Bloomberg previously reported. This structure would allow it to retain its mission-driven focus while operating as a for-profit entity. Chief Strategy Officer Jason Kwon informed employees in September that this structure would include a nonprofit arm with a significant stake in the for-profit division, according to a source.
The stake allocated to the nonprofit and the valuation of OpenAI’s assets will be central factors in obtaining regulatory approval for the conversion, legal experts note. According to Daren Shaver, a partner at Hanson Bridgett LLP, this process requires meticulous accounting of asset values.
In California, this type of conversion typically involves multiple rounds of review with the attorney general’s office and could be prolonged due to the charitable considerations tied to OpenAI’s valuable intellectual property.
Meanwhile, Delaware Attorney General Kathleen Jennings has requested that OpenAI submit its final conversion plans to her office’s fraud and consumer protection division for review. The restructuring will also require coordination with the secretaries of state in Delaware and California, as well as state and federal tax agencies.
Sam Altman Advocates for Balanced Hiring, Highlights Value of Young Talent
OpenAI CEO Sam Altman champions a hiring strategy focused on talent rather than age, emphasizing the value of a workforce that includes both seasoned professionals and emerging young talent. Speaking on "The Twenty Minute VC (20VC)" podcast, Altman underscored the benefits of combining diverse experience levels within tech firms.
“I believe it would be a mistake to adopt a hiring approach that exclusively targets either younger or older employees,” Altman said in the November 4 episode. He argued that inexperience does not equate to a lack of potential, encouraging companies to take chances on promising early-career individuals.
Altman also stressed the need for balance, particularly with complex projects. While he recognizes the contributions of young talent, he expressed caution about assigning inexperienced employees to high-stakes work, such as designing costly systems.
This balanced approach to hiring comes amid a widening divide in the tech job market. Companies are fiercely competing for senior AI experts, as seen in Google’s reported $2.7 billion deal to license Character.AI’s technology and rehire its founders, and Microsoft’s roughly $650 million arrangement with Inflection, while entry-level candidates face considerable barriers.
Unlock the future of problem solving with Generative AI!

If you're a professional looking to elevate your strategic insights, enhance decision-making, and redefine problem-solving with cutting-edge technologies, the Consulting in the age of Gen AI course is your gateway. Perfect for those ready to integrate Generative AI into their work and stay ahead of the curve.
In a world where AI is rapidly transforming industries, businesses need professionals and consultants who can navigate this evolving landscape. This learning experience arms you with the essential skills to leverage Generative AI to improve problem-solving, sharpen decision-making, and better advise clients.
Join us and gain firsthand experience of how state-of-the-art GenAI can take your problem-solving skills to new heights. This isn’t just learning; it’s your competitive edge in an AI-driven world.
In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.
Highlights from TheGen.AI, TheOpensource.AI, and TheClosedsource.AI: Dive into the latest projects and innovations from the leading organizations behind some of the most advanced open-source and closed-source AI models.
Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.
Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realization of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!