Welcome to Towards AGI, your premier newsletter dedicated to the world of Artificial Intelligence. Our mission is to guide you through the evolving realm of AI with a specific focus on Generative AI. Each issue is designed to enrich your understanding and spark your curiosity about the advancements and challenges shaping the future of AI.
Whether you're deeply embedded in the AI industry or just beginning to explore its vast potential, "Towards AGI" is crafted to provide you with comprehensive insights and discussions on the most pertinent topics. From groundbreaking research to ethical considerations, our newsletter is here to keep you at the forefront of AI innovation. Join our community of AI professionals, hobbyists, and academics as we pursue the ambitious path toward Artificial General Intelligence. Let’s embark on this journey together, exploring the rich landscape of AI through expert analysis, exclusive content, and engaging discussions.
TheGen.AI News
Amazon’s Andy Jassy: How Generative AI Saved $260M and 4,500 Developer Years

Generative AI is already making an impact in IT, particularly in software development, where it is being applied to code generation, documentation, test case creation, test automation, and code optimization. Although the technology is still in its early stages, technology leaders and development teams are seeing real benefits: early results indicate it can help create and improve applications, albeit with some caveats.
Amazon is one company that has seen significant gains through its use of generative AI. CEO Andy Jassy shared how the company’s internal AI tool has saved it hundreds of millions of dollars and thousands of developer hours. In a LinkedIn post, Jassy highlighted Amazon Q’s ability to transform software development, particularly in addressing the often-overlooked challenge of updating core software—an essential but typically tedious task for development teams.
For instance, upgrading an application to Java 17 used to take around 50 developer-days. With Amazon Q’s code transformation feature, that process has been cut to just a few hours. Jassy wrote, “We estimate this has saved us the equivalent of 4,500 developer-years of work (yes, that number is crazy but real).”
The results are impressive. In less than six months, Amazon upgraded over 50% of its production Java systems to modernized versions, achieving this milestone with a fraction of the usual time and effort. Remarkably, 79% of the auto-generated code reviews required no additional changes, demonstrating the reliability of AI-generated code.
Beyond the time savings, Jassy noted other significant benefits of these upgrades, including enhanced security and lower infrastructure costs, resulting in approximately $260 million in annual efficiency gains. This showcases the immense value of using advanced AI technologies in large enterprises.
Amazon offers a commercial version of Amazon Q, called Amazon Q Business, tailored for enterprises. It’s designed to integrate company data, information, and systems to improve operations. According to AWS, this AI assistant can be customized to solve specific business challenges, automate workflows, and generate content, streamlining tasks across departments. It seamlessly integrates with existing systems and offers easy data synchronization through automatic updates, providing a user-friendly experience even for those without technical expertise.
Jassy also mentioned that the Amazon Q team is working on additional transformations to further enhance developer productivity.
Benefits:
Increased Efficiency: Automates repetitive tasks like code generation and documentation, enabling developers to focus on complex issues.
Faster Development Cycles: Speeds up application development, reducing the time needed to bring new features and products to market.
Improved Code Quality: AI can help optimize and refactor code, leading to cleaner and more maintainable codebases.
Enhanced Testing: Automates test case generation and execution, boosting software reliability and reducing bugs.
Personalized Development Support: Provides context-specific suggestions, improving developer productivity.
Knowledge Sharing: Assists with documentation, promoting better knowledge transfer within teams.
Challenges:
Quality Control: AI-generated code may not always meet quality standards, requiring human review and intervention.
Contextual Understanding: AI may struggle to fully grasp the specific context or requirements of a project, leading to irrelevant or suboptimal results.
Dependency Risks: Over-reliance on AI tools could diminish developers’ coding skills and problem-solving abilities.
Integration Challenges: Integrating generative AI tools into existing systems and workflows may need considerable adjustments and training.
Security Concerns: AI-generated code might inadvertently introduce vulnerabilities, necessitating thorough security checks.
Ethical Considerations: Issues like intellectual property and potential biases in AI outputs require careful management.
While generative AI shows promise in software development, with automated code generation and real-time bug-fix suggestions meaningfully shortening development cycles, teams should be mindful of these challenges to fully capitalize on its benefits. Amazon’s success, meanwhile, serves as an encouraging example.
NVIDIA Unveils NIM Agent Blueprints to Jumpstart Enterprise Generative AI Development

NVIDIA introduced the NVIDIA NIM Agent Blueprints on Tuesday, offering businesses a streamlined way to kickstart the development of enterprise generative AI applications. This catalog features pretrained, customizable AI workflows tailored for various applications, such as customer service avatars, retrieval-augmented generation (RAG), and virtual screening for drug discovery.
According to Justin Boitano, NVIDIA’s vice president of enterprise AI software products, "NVIDIA NIM Agent Blueprints are ready-to-use AI workflows designed for specific use cases that any developer can easily modify." He emphasized that this growing collection of reference applications is built on lessons learned from NVIDIA’s experiences with early adopters.
The blueprint catalog is based on NVIDIA NIM, a suite of microservices made up of downloadable software containers that simplify the deployment of enterprise generative AI applications.
Boitano explained that enterprises have traditionally relied on teams of developers to build and maintain custom, purpose-specific, in-house applications to manage essential business processes. These applications typically function as a database connected to a web UI that integrates multiple teams into a business process. He noted, "It’s time to scale the impact of generative AI by making it easier for the millions of enterprise app developers to create this new form of in-house, enterprise application."
The sample applications in the catalog are developed with NVIDIA NeMo, NVIDIA NIM, and partner microservices, including reference code, customization guides, and a Helm chart for deployment. Businesses can customize these sample applications with their own data and run the resulting generative AI solutions across data centers and cloud environments. The blueprints are free to download and can be deployed in production using the NVIDIA AI Enterprise software platform.
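Once a NIM container is deployed, the microservice exposes an OpenAI-compatible HTTP interface that applications can call directly. The sketch below is a minimal, hypothetical client for such an endpoint; the host, port, and model name (`meta/llama-3.1-8b-instruct`) are assumptions for illustration, not values taken from this article.

```python
import json
import urllib.request

# Assumed address of a locally deployed Llama 3.1 NIM container.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_request(prompt, model="meta/llama-3.1-8b-instruct"):
    """Assemble the JSON payload an OpenAI-compatible chat endpoint expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }

def ask_nim(prompt):
    """POST the payload to the NIM endpoint and return the model's reply text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        NIM_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the interface mirrors the OpenAI chat-completions schema, existing client code can typically be pointed at a NIM deployment by changing only the base URL and model name.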
NVIDIA’s initial release of NIM Agent Blueprints covers three main use cases: a digital human workflow for customer service, a multimodal PDF data extraction workflow for enterprise RAG, and a generative virtual screening workflow for drug discovery.
Boitano stated, "We prioritized these initial workflows because they are essential across industries, but we plan to release more workflows monthly."
The digital human workflow for customer service allows enterprises to bring their applications to life through a 3D animated interface powered by NVIDIA Tokkio, an interactive avatar SDK for virtual customer service. Designed to create customer service solutions with lifelike interactions, the blueprint incorporates NVIDIA software like NVIDIA ACE, Omniverse RTX, Audio2Face, and Llama 3.1 NIM microservices. It also integrates seamlessly with existing enterprise gen AI applications built using RAG.
YouTube Videos Used to Train AI: What Creators Need to Know

In June, Mustafa Suleyman, CEO of Microsoft’s new AI division, made a bold statement during an interview with CNBC’s Andrew Ross Sorkin, claiming that anything posted online effectively becomes "freeware" and can be copied and used to train AI models. Recently, there has been growing attention on how generative AI companies may be extracting videos and transcripts from YouTube, using the work of independent creators to train their AI systems without permission. In July, the online publication 404 Media revealed that generative AI video company Runway had trained its models on thousands of videos without obtaining consent.
Over the past few months, the issue of YouTube content being used to train generative AI models has sparked intense debate within the creator community. This complex issue has raised significant concerns regarding consent, compensation, and creator rights. This article delves into the matter, explores the responses from major tech companies, and examines how training AI models on YouTube content is affecting creators.
Why Is This a Contentious Issue Among Creators?
As generative AI rapidly advances, companies require vast amounts of data to develop more efficient and powerful models. The main concern among content creators is that their videos are being used to train these large AI models without their explicit consent. Recent investigative reports suggest that AI companies are leveraging vast quantities of YouTube content—including audio, visuals, and transcripts—to build proprietary models. Although major tech companies have not openly admitted to this practice, it raises serious ethical, legal, and financial questions. Many creators feel uneasy, and some even feel exploited. For instance, this month, YouTuber David Millette filed a lawsuit against Nvidia, alleging the company developed a video model by scraping YouTube content without obtaining authorization from the creators.
Similarly, a July investigation by Proof News, a data-focused reporting and analysis platform, found that subtitles from 173,536 YouTube videos across 48,000 channels were used by tech giants like Nvidia, Apple, Anthropic, and Salesforce to train their AI models. The investigation revealed that these subtitles included video transcripts from online educational platforms such as Harvard, MIT, and Khan Academy. Proof News has developed a tool for content creators to check whether their work has been included in the YouTube AI training dataset. The report also noted that videos from well-known creators like Marques Brownlee, MrBeast, and PewDiePie were used to train AI models.
What Is the Core Issue?
For many YouTubers, the key concern is that their content is being used to train AI models without their direct approval. When creators upload videos to YouTube, they agree to the platform’s terms of service, which grants YouTube a broad license to use their content. The terms allow YouTube to reproduce, distribute, and create derivative works from the content. However, these terms do not specify that the content can be used to train AI models, an application that did not exist when the terms were originally drafted.
The current YouTube terms of service state: “By providing Content to the Service, you grant to YouTube a worldwide, non-exclusive, royalty-free, transferable, sublicensable license to use that Content (including to reproduce, distribute, prepare derivative works, display and perform it). YouTube may only use that Content in connection with the Service and YouTube’s (and its successors’ and Affiliates’) business, including for the purpose of promoting and redistributing part or all of the Service.” However, there is no mention of using the content to train AI models, which is the primary concern for creators today.
It’s Poll Time On Know Your Inference (KYI)
Following on from our last poll on Know Your Inference (KYI), we are keen to dig deeper into KYI:
What specific ethical concerns do you have regarding AI inferences in your industry?
How do you currently measure the cost-efficiency of your AI implementations?
What barriers do you face in implementing more environmentally friendly AI practices?
How prepared is your organization to adopt KYI practices within the next year?
Which type of GenAI models does your organization primarily use?
In which areas is your organization currently applying GenAI?
What is your organization's primary motivation for adopting GenAI?
What is the biggest challenge your organization faces in scaling GenAI implementations?
TheOpensource.AI News
Hugging Face and AI2 Set to Discuss Open-Source AI at Disrupt 2024

Some see open-source AI as a potential escape from the proprietary software landscape that the technology has inevitably fallen into. At TechCrunch Disrupt 2024 in San Francisco, happening from October 28-30, Hugging Face’s Irene Solaiman and AI2’s Ali Farhadi will explore this complex issue during a panel discussion.
While AI may be a relatively new technology, in some respects it remains stuck in the past, with a few long-established companies driving its development and funding. Unlike desktop operating systems or office software, however, the resource demands of AI models make creating open-source alternatives particularly challenging. What will it take to overcome these obstacles?
The discussion will center on the opportunities and hurdles in defining, building, and making open AI systems accessible. Representing two key advocates for openness, Hugging Face provides open access to models, datasets, and leaderboards, while AI2 (the Allen Institute for Artificial Intelligence) is committed to full transparency in its research, data, and models.
Irene Solaiman, Hugging Face’s head of global policy, champions safe, open, and responsible AI, both at Hugging Face and through collaborations with other tech organizations. Ali Farhadi, after leading AI2 spinoff XNOR (acquired by Apple), returned to head AI2, continuing his work on openness and transparency. Both acknowledge the unique challenges in realizing these principles within the AI landscape.
This panel promises to be a fascinating discussion among these AI leaders (and moderated by yours truly), so be sure to secure your Disrupt 2024 ticket and catch the conversation on the AI Stage.
Zuckerberg and Spotify CEO Unite to Push for Open-Source AI in Europe

Meta CEO Mark Zuckerberg’s latest focus in his lobbying efforts is promoting open-source AI. He has recently been advocating for his company’s open approach to AI models, positioning it as a contrast to the more closed practices of major disruptors in the sector—who, incidentally, are also looking to challenge Meta’s business. Supporting Zuckerberg’s stance is Spotify CEO Daniel Ek.
On Friday, the two CEOs published a joint blog post urging European regulators to embrace open-source AI. They wrote, “A key opportunity for European organizations lies in open-source AI—models with publicly released weights under a permissive license. This prevents power from being concentrated among a few large players and, like the internet before it, fosters a level playing field.”
Critics might point out that both social media and music streaming are industries dominated by a few powerful companies, from which both Meta and Spotify benefit. However, this doesn’t necessarily mean their support for open-source AI is entirely self-serving—there could be genuine benefits to the approach.
The real question is whether EU regulators will share this perspective. Ek and Zuckerberg argued, “Europe should streamline and harmonize regulations by capitalizing on the advantages of a unified yet diverse market. Europe needs a fresh approach with clearer policies and more consistent enforcement.”
TheClosedsource.AI News
OpenAI Taps Former Meta Executive Irina Kofman to Lead Strategic Initiatives

At OpenAI, Kofman will focus on safety and preparedness, according to Bloomberg. She will report directly to OpenAI's Chief Technology Officer, Mira Murati. Previously, Kofman spent five years at Meta as a senior director of product management for generative AI, where she led initiatives in operations, marketing, and specialized programs like the AI Residency and the Deepfake Detection Challenge. Before Meta, she was pivotal in launching TensorFlow at Google AI and played a key role in developing AI for Social Good teams and promoting ML fairness. Meta has not commented on her departure.
As AI startups like OpenAI grow, they are increasingly drawing experienced leaders from major tech companies. Recently, OpenAI welcomed former Twitter and Instagram executive Kevin Weil as Chief Product Officer and ex-Nextdoor CEO Sarah Friar as Chief Financial Officer.
However, OpenAI has also seen key exits, including researcher Jan Leike and co-founder John Schulman, both of whom moved to Anthropic. Anthropic has been drawing talent as well, including hiring former Instagram co-founder Mike Krieger as Chief Product Officer.
Former OpenAI Researcher Warns ChatGPT Maker Is Close to Reaching AGI

In recent months, numerous OpenAI staff members have left the tech startup for various reasons. Recently, OpenAI co-founder Greg Brockman announced he would be taking a sabbatical until the end of the year, while researcher John Schulman revealed he was leaving the company to join Anthropic, focusing on AI alignment.
The departure of several high-profile executives from OpenAI began following the controversial firing and reinstatement of CEO Sam Altman by the board of directors. Among those who left was former superalignment lead Jan Leike, who indicated that disagreements with company leadership over issues like safety and adversarial robustness led to his exit. Leike also noted that safety concerns had taken a backseat to product development priorities.
In a conversation with Fortune, Daniel Kokotajlo, a former OpenAI researcher, said that over half of OpenAI's superalignment team had already left the company. “It wasn’t coordinated; people just seemed to be giving up individually,” Kokotajlo said. OpenAI, originally founded with the mission of ensuring artificial general intelligence (AGI) benefits all of humanity, has shifted toward operating as a for-profit company, raising concerns about its direction.
Elon Musk has openly criticized OpenAI for deviating from its founding mission, labeling it a "stark betrayal." Earlier this year, Musk filed and later withdrew a lawsuit against the company and Sam Altman over this issue, only to launch another complaint alleging racketeering activities. According to Musk’s legal team, “The previous suit lacked strength.”
OpenAI remains focused on achieving the AGI benchmark, but concerns are growing among users regarding its potential impact on humanity. One AI researcher suggested there is a 99.9% chance AI could end humanity, and the only solution is to avoid developing it altogether.
Although OpenAI has created a new safety team led by CEO Sam Altman to ensure technological advancements align with safety and security standards, the company appears more focused on product development and commercialization.
After disbanding its safety alignment team, OpenAI reportedly rushed the launch of GPT-4o, even sending event invitations before testing began. The company admitted that the remaining safety and alignment team was under significant pressure, with limited time for testing.
It’s difficult to pinpoint the exact reasons behind the mass departure of executives and staffers in recent years, though some have gone on to establish competing firms dedicated to safe superintelligence. Kokotajlo speculates that the exodus is tied to OpenAI nearing the AGI benchmark without the necessary knowledge, regulations, or tools to manage the consequences.
Meanwhile, OpenAI recently opposed a proposed California AI bill from State Senator Scott Wiener aimed at implementing safety protocols to keep the technology on track, arguing instead for federal legislation.
OpenAI Introduces Fine-Tuning for GPT-4o to Enhance Enterprise Applications

OpenAI has introduced a fine-tuning capability for its large language model (LLM) to enhance its performance in enterprise settings. The new feature allows developers to tailor the GPT-4o model to meet their organization’s specific needs at a relatively low cost. By customizing the LLM with their own datasets, early users have reported significant improvements in performance.
“Fine-tuning enables the model to adjust the structure and tone of its responses, or to follow complex, domain-specific instructions,” stated OpenAI in their official announcement. “Developers can achieve strong results for their applications with as few as a few dozen examples in their training dataset.”
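To make the “few dozen examples” concrete, a fine-tuning dataset is a JSONL file in the chat format the fine-tuning API expects: one JSON object per line, each with a list of system/user/assistant messages. The sketch below builds a tiny, hypothetical dataset; the company name, tone, and answers are invented for illustration.

```python
import json

# A minimal, hypothetical fine-tuning dataset in chat-format JSONL.
# Each line is one training example: a full system/user/assistant exchange.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support bot for AcmeCo. Answer tersely."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Settings > Security > Reset Password."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a support bot for AcmeCo. Answer tersely."},
            {"role": "user", "content": "Where can I download invoices?"},
            {"role": "assistant", "content": "Billing > History > Download."},
        ]
    },
]

# Write one JSON object per line, as the fine-tuning API expects.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

In practice, a real dataset would repeat this pattern a few dozen times, keeping the system prompt and answer style consistent so the fine-tuned model absorbs the desired structure and tone.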
The fine-tuning option isn’t just a technological upgrade; it signals a shift toward offering AI as a Service. OpenAI has set the cost of fine-tuning at $25 per million tokens, with inference costs at $3.75 per million input tokens and $15 per million output tokens. These fees are expected to significantly contribute to OpenAI’s revenue as more companies customize the LLM for their unique needs. To encourage adoption, OpenAI is offering one million free training tokens daily to organizations until September 23, while users of GPT-4o mini can access two million free tokens per day.
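Working from the prices quoted above ($25 per million training tokens, $3.75 per million input tokens, and $15 per million output tokens), a back-of-the-envelope cost model is straightforward. The token counts in the example are made up; the assumption that training cost scales with dataset tokens times epochs is a common billing convention, not a figure from this article.

```python
# Prices quoted above, in dollars per one million tokens.
TRAIN_PER_M = 25.00
INPUT_PER_M = 3.75
OUTPUT_PER_M = 15.00

def fine_tune_cost(train_tokens, epochs=1):
    """Training cost, assuming billing for dataset tokens times epochs."""
    return train_tokens * epochs / 1_000_000 * TRAIN_PER_M

def inference_cost(input_tokens, output_tokens):
    """Serving cost at the quoted input/output token rates."""
    return (input_tokens / 1_000_000 * INPUT_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PER_M)

# Example: a 2M-token dataset trained for 3 epochs (6M tokens -> $150.00),
# then 10M input / 2M output tokens served ($37.50 + $30.00 = $67.50).
training = fine_tune_cost(2_000_000, epochs=3)
serving = inference_cost(10_000_000, 2_000_000)
```

At these rates, the one million free daily training tokens on offer until September 23 would cover $25 of training per day.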
OpenAI conducted early tests with several companies to evaluate the fine-tuning feature’s effectiveness. Cosine’s Genie, an AI assistant powered by GPT-4o, showed strong performance in tasks like writing code, identifying bugs, and developing new features. AI solutions provider Distyl took first place in text-to-SQL benchmarks using a fine-tuned GPT-4o, achieving over 70% accuracy across all metrics.
OpenAI assures that fine-tuned models will maintain the same levels of data privacy as ChatGPT, with additional security measures to protect enterprise data. “We’ve implemented layered safety mitigations for fine-tuned models to prevent misuse,” said OpenAI. “For example, we continuously run automated safety evaluations on fine-tuned models and monitor usage to ensure compliance with our policies.”
A Series of Upgrades
OpenAI has been actively enhancing its AI products, including teasing an AI-powered search engine in late July. In April, the company announced an update to make ChatGPT more conversational and less verbose in its responses. OpenAI also confirmed the development of a new AI detection tool with 99.9% accuracy after earlier attempts fell short, but the company plans a cautious approach for its commercial release to avoid common pitfalls in next-gen technology.
“We believe the careful approach we’ve taken is crucial given the complexities involved and the likely impact on the broader ecosystem beyond OpenAI,” noted an executive from the company.
Seeking impartial news? Meet 1440.
Every day, 3.5 million readers turn to 1440 for their factual news. We sift through 100+ sources to bring you a complete summary of politics, global events, business, and culture, all in a brief 5-minute email. Enjoy an impartial news experience.
In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.
Highlights from GenAI, OpenAI, and ClosedAI: Dive into the latest projects and innovations from the leading organizations behind some of the most advanced AI models, both open-source and closed-source.
Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.
Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realization of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!