• "Towards AGI"
  • Posts
  • OpenAI Faces Bankruptcy Speculation Amidst $8.5 Billion Expense Report

OpenAI Faces Bankruptcy Speculation Amidst $8.5 Billion Expense Report

Welcome to Towards AGI, your premier newsletter dedicated to the world of Artificial Intelligence. Our mission is to guide you through the evolving realm of AI with a specific focus on Generative AI. Each issue is designed to enrich your understanding and spark your curiosity about the advancements and challenges shaping the future of AI.

Whether you're deeply embedded in the AI industry or just beginning to explore its vast potential, "Towards AGI" is crafted to provide you with comprehensive insights and discussions on the most pertinent topics. From groundbreaking research to ethical considerations, our newsletter is here to keep you at the forefront of AI innovation. Join our community of AI professionals, hobbyists, and academics as we pursue the ambitious path toward Artificial General Intelligence. Let’s embark on this journey together, exploring the rich landscape of AI through expert analysis, exclusive content, and engaging discussions.

OpenAI Faces Bankruptcy Speculation Amidst $8.5 Billion Expense Report

A new report suggests that OpenAI, led by Sam Altman and backed by Microsoft, may face financial difficulties, with projected expenses reaching at least $8.5 billion this year. The Information's analysis indicates that OpenAI is expected to spend around $7 billion on its current AI models: approximately $4 billion to rent server capacity from Microsoft to run ChatGPT and related large language models, and roughly $3 billion to cover training costs, including deals with news publishers such as News Corp whose copyrighted content is used to train OpenAI's generative models.

Additionally, OpenAI's payroll is expected to reach at least $1.5 billion, with the company employing around 1,500 people. The report's findings are based on previously undisclosed data and interviews with people associated with the business. OpenAI's revenue from ChatGPT and its other fee-based large language models is estimated at between $3.5 billion and $4.5 billion for the year, implying losses of as much as $5 billion. A shortfall of that size could force OpenAI to raise additional funds within the next year.
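
For readers who want to check how the reported figures fit together, here is a quick back-of-the-envelope calculation using only the numbers cited above; it illustrates the arithmetic rather than adding any new reporting.

```python
# Rough arithmetic behind the reported figures (all amounts in billions of USD,
# taken from the article above; nothing here is additional reporting).
server_rental = 4.0   # renting Microsoft server capacity to run ChatGPT and related models
training = 3.0        # training costs, including content-licensing deals
payroll = 1.5         # roughly 1,500 employees

total_costs = server_rental + training + payroll          # 8.5

revenue_low, revenue_high = 3.5, 4.5                      # estimated annual revenue range
loss_range = (total_costs - revenue_high, total_costs - revenue_low)

print(f"Projected costs: ${total_costs:.1f}B")                             # $8.5B
print(f"Implied loss: ${loss_range[0]:.1f}B to ${loss_range[1]:.1f}B")     # $4.0B to $5.0B
```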

The projected losses far exceed those of competitors like Anthropic, which announced plans to spend more than $2.7 billion this year. The Information's report has led to widespread speculation about the future of OpenAI, with some observers suggesting the company could face bankruptcy despite Microsoft's financial backing. Experts such as Wharton professor Ethan Mollick and Abacus.AI CEO Bindu Reddy have dismissed these claims, while others question OpenAI's business model. AI researcher Gary Marcus highlighted concerns about the company's path to profitability, especially after Meta's recent release of the open-source AI model Llama 3.1, which offers comparable technology for free. That free-to-use model could pose a significant challenge to OpenAI, particularly as businesses grow wary of generative AI because of poor returns and inaccurate outputs, known as "hallucinations."

Gartner Predicts Over 30% of GenAI Projects to Fail Post Proof of Concept

At least 30% of generative AI (GenAI) projects are expected to be abandoned after proof of concept by the end of 2025, owing to issues such as poor data quality, inadequate risk controls, escalating costs, or unclear business value. Rita Sallam, a distinguished VP analyst at Gartner, noted that after the initial excitement around GenAI, executives are eager for returns on their investments, but organizations are struggling to demonstrate and realize that value. The financial burden of developing and deploying GenAI models is becoming more apparent as the scope of these projects expands.

A significant challenge for organizations is justifying the large investments in GenAI for productivity improvements, which are often difficult to translate directly into financial benefits. The report highlighted that while many organizations are using GenAI to transform their business models and create new opportunities, the costs associated with these initiatives can be substantial, ranging from $5 million to $20 million. Sallam emphasized that there is no uniform approach to GenAI, and costs can vary significantly based on what is invested, the use cases chosen, and the deployment strategies employed. Different approaches, whether aiming to disrupt the market with widespread AI integration or focusing conservatively on productivity enhancements, come with varying levels of cost, risk, variability, and strategic impact.

Gartner's research suggests that regardless of the scale of AI ambitions, investing in GenAI requires a greater tolerance for indirect and future financial returns rather than immediate ROI. Many CFOs have historically been hesitant to invest in projects with uncertain future value, often favoring tactical over strategic investments. However, early adopters of GenAI across various industries are reporting improvements in business outcomes, although these benefits can vary greatly depending on the specific use case, job type, and skill level of the workforce.

A recent Gartner survey found that respondents experienced an average of 15.8% revenue growth, 15.2% cost savings, and a 22.6% increase in productivity due to GenAI. Sallam noted that while these figures provide a useful benchmark for evaluating the business value of GenAI innovations, it is important to recognize the challenges in estimating these benefits, as they are highly specific to each company, use case, role, and workforce. Often, the full impact of GenAI may not be immediately apparent and may take time to manifest, but this does not diminish the potential advantages.

Pixel 9 Series to Feature Next-Gen Mobile AI Innovations: Report

Google is preparing to launch the Pixel 9 series on August 13, and details about the upcoming smartphones have begun to emerge online. The new series is expected to include next-generation mobile AI features, particularly focused on imaging.

One highlighted feature is "Add Me," which reportedly lets users insert themselves into group photos. According to a leaked promo video, the feature works in two steps: a group photo is taken first, then a second picture is taken that includes the person who was missing from the original shot. The UI then shows an overlay of the new photo, letting users line it up so it appears as if everyone was present in the original group photo.

Other expected features include Pixel Screenshots, which will enable users to find information from screenshots; Magic Editor, which allows users to change the background of an image and possibly accept text prompts; and an enhanced Gemini feature that can present more detailed information using a picture.

As for the specifications, the base model of the Pixel 9 is expected to have a 6.3-inch display and a glossy glass back. It will likely be powered by the Tensor G4 chip with 12GB of RAM. The Pixel 9 Pro and Pixel 9 Pro XL are also expected to feature the Tensor G4 chip, paired with 16GB of RAM.

Nvidia Launches New Software and Services to Accelerate GenAI Adoption

Nvidia, a multinational technology company based in California, is making it easier for businesses to adopt artificial intelligence (AI) with new software updates. It has introduced Nvidia Inference Microservices (NIMs), prepackaged software components that handle much of the deployment work involved in applying AI to specific tasks. This generative AI technology is used for applications like chatbots, voice recognition, and other computer interactions.
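
In practice, microservices of this kind expose an HTTP inference endpoint that is broadly compatible with the OpenAI client API, so calling a locally deployed model can look something like the sketch below. The endpoint URL, port, and model name are illustrative assumptions, not values taken from the article; check your deployment's documentation for the real ones.

```python
# Sketch of querying a locally deployed inference microservice that exposes
# an OpenAI-compatible endpoint. The base_url, port, and model identifier are
# assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local endpoint
    api_key="not-used-locally",           # local deployments typically ignore this value
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",      # hypothetical model identifier
    messages=[{"role": "user", "content": "Summarize this week's support tickets."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```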

Recognizing that many companies lack the necessary expertise, Nvidia offers these services for a fee. Nvidia’s CEO highlighted these offerings at a conference, encouraging industries to integrate this technology and AI into their operations.

Nvidia’s chips have become essential to the recent growth in systems that support AI computing. At an upcoming conference, Nvidia CEO Jensen Huang will appear alongside Meta Platforms Inc. CEO Mark Zuckerberg. Nvidia’s revenue doubled last year and is projected to double again in the current fiscal year.

The Nvidia AI Enterprise product includes software and services designed to work on Nvidia hardware, priced at USD 4,500 per graphics processor per year. Kari Briski, Nvidia's vice president of product management for AI and HPC software development kits, described the Nvidia NIM as a comprehensive solution for deploying generative AI, simplified for developers but capable of scaling for larger applications.

Nvidia has made around 100 inference microservices available in preview and is now releasing the final versions. For example, Getty Images Holdings has improved high-resolution image generation, enhancing the software’s ability to interpret text prompts. Additionally, Shutterstock Inc.’s Edify three-dimensional image generator is operational and can respond to both text and images.

Nvidia notes that most AI today is used by knowledge workers to assist with digital tasks. To expand access to generative AI, Nvidia offers software and services that enable users of Apple Inc.’s Vision Pro headset to create virtual worlds. Nvidia also emphasized that digital twins can be used to train robots to mimic human behavior, reducing the need for manual developer input.

US Endorses Open-Source AI Amid Concerns Over Deepfake Risks

In the debate between proprietary and open-source AI, advocates of open models have received significant backing from the White House. In response to President Biden's Executive Order on AI safety, the National Telecommunications and Information Administration (NTIA) within the Department of Commerce released policy recommendations on July 30, supporting AI openness. However, proponents of closed development argue that openness carries its own set of risks.

The NTIA report highlighted the benefits of AI models with widely available weights, referred to as open foundation models. These models, according to the report, offer several advantages:

"They diversify and expand the range of participants in AI research and development, including those with fewer resources. They also decentralize control of the AI market away from a few large developers and allow users to use models without sharing data with third parties, enhancing confidentiality and data protection."

These findings align with arguments from open-source AI developers who have criticized the opaque nature of closed models created by companies like Google and OpenAI.

Doug Petkanics, CEO of Livepeer, emphasized the importance of open-source and open access at this stage in AI's development for both ideological and philosophical reasons. For researchers, hobbyists, and startups, reliance on Big Tech AI can be limiting. However, supporters of closed models caution that making them publicly available could be dangerous.

There are risks associated with open-source AI. For instance, ChatGPT won't provide instructions on how to build a bomb, limiting access for those with harmful intentions. In contrast, open-source models such as Falcon, which is distributed through Hugging Face, lack those built-in safety measures and can be modified for malicious purposes. Examples include WormGPT and FraudGPT, versions of the GPT-J language model trained on malware data. Additionally, abusive image tools, such as "nudify" apps and generators of deepfake child abuse imagery, are primarily built on the open-source Stable Diffusion model.

The NTIA report acknowledges the risks posed by existing open foundation models but concludes that these risks do not warrant restricting the technology. It does, however, suggest that future, more powerful models may require regulatory oversight and recommends that the federal government establish a monitoring framework to guide its response.

Proponents of open models argue that the transparency and community-driven oversight they provide outweigh potential safety risks. For example, when Meta released Llama 2, it stated, "We believe it's safer," explaining that open access allows for more effective stress testing and issue identification by external developers and researchers. This transparency enables improvements and vulnerability fixes that might not be identified through internal testing alone.

Researchers Claim Open Source AI Accelerated China’s Global AI Competitiveness

Chinese researchers informed The New York Times that open-source software has significantly accelerated their AI development. Interviews with several technologists and researchers at Chinese tech companies revealed that "open source technologies were crucial to China's rapid AI advancements." The Times reported that these experts view open-source AI as a chance for China to take the lead in the field.

In other news from the region, Bangladesh's internet was cut off for more than a week starting July 18 amid violent anti-government protests. Students were protesting a High Court decision to reinstate what many perceive as a discriminatory quota system for coveted civil service jobs. The protests were met with deadly violence, leading to the shutdown of internet services.

Organizations, including the policy lab TechGlobal Institute, condemned the violence and called on authorities to "uphold human rights and restore access to communication and internet services, which can save lives during a conflict." NetBlocks, a group monitoring internet connectivity, noted that protesters demanded the restoration of telecoms, the release of detainees, and accountability for over 150 deaths. NetBlocks also mentioned that the internet blackout in Bangladesh began just before a faulty CrowdStrike update was released, inadvertently sparing the country from related issues.

Reports indicated that broadband internet was restored in select areas, but mobile networks remained offline. A resident described the situation as akin to "being back in the '90s," with limited business internet access and no social media.

In South Korea, the founder of the tech company Kakao, Kim Beom-su, was arrested last Tuesday for allegedly manipulating the stock price of SM Entertainment in February 2023. The manipulation was intended to prevent a competitor from acquiring the entertainment company, which manages some of the country's most popular K-pop artists and groups.

Meanwhile, Grab, an Uber-like ride-hailing service, decided not to acquire the Singapore taxi operator Trans-cab, instead purchasing the restaurant booking business Chope last week. The decision came after the Competition and Consumer Commission of Singapore (CCCS) raised concerns about the acquisition's potential impact on competition. Grab confirmed the purchase of Chope, which operates in Singapore, Indonesia, Thailand, Hong Kong, and China, with the aim of helping small and medium-sized businesses compete with larger food and beverage brands.

OpenAI Director Predicts AGI Could Arrive Within 5 Years

Adam D’Angelo, an OpenAI board member and the CEO of Quora, said artificial intelligence (AI) could reach the intelligence level of humans within five to fifteen years, as reported by Seeking Alpha on July 29. D’Angelo made this prediction during a recent event, emphasizing that the arrival of artificial general intelligence (AGI) would mark a significant global milestone.

This statement follows recent reports that OpenAI has devised a method to monitor its progress toward AGI, introducing a five-level classification system to its staff. Currently, the company considers itself at Level 1, where AI can interact conversationally with humans, and is nearing Level 2, where AI systems can solve problems comparable to those solved by someone with a doctorate-level education. The subsequent levels include AI capable of acting autonomously on a user’s behalf for several days, developing innovations, and ultimately performing the work of an entire organization at Level 5.
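As a quick reference, the five levels described in that reporting can be restated as a simple lookup table; the one-line summaries below paraphrase the descriptions above and are not OpenAI's official wording.

```python
# Paraphrased restatement of the five-level capability scale described above;
# the wording is this newsletter's summary, not OpenAI's official definitions.
OPENAI_CAPABILITY_LEVELS = {
    1: "Conversational AI that can interact with humans in natural language",
    2: "Systems that solve problems at the level of a doctorate-educated person",
    3: "Agents that can act autonomously on a user's behalf for several days",
    4: "Systems capable of developing new innovations",
    5: "AI that can perform the work of an entire organization",
}

for level, description in OPENAI_CAPABILITY_LEVELS.items():
    print(f"Level {level}: {description}")
```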

OpenAI's CEO Sam Altman and CTO Mira Murati predicted last fall that AGI could be achieved within the next decade. Altman expressed optimism, stating that AGI would be the most advanced tool humanity has ever created, capable of astonishing achievements.

These developments have generated excitement in the business world about the potential for AI-driven commerce to transform global trade, provided the technology lives up to expectations. Ghazenfer Mansoor, CEO of Technology Rivers, highlighted that OpenAI’s pursuit of human-level reasoning could revolutionize sectors like supply chain management, market forecasting, and personalized customer experiences.

Earlier this year, OpenAI demonstrated AI models capable of answering complex science and math questions, with one model achieving over 90% accuracy on a championship math dataset. The company also showcased a project with new human-like reasoning abilities at an internal meeting. Alexander De Ridder, CTO of SmythOS, explained that such AI models might work by creating multiple options, exploring a tree of possibilities, and selecting the best outcome, similar to how chess players plan their moves. He suggested that OpenAI's innovation likely includes a significant breakthrough in efficient, scalable reasoning, potentially integrating autonomous web research and tool usage.
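
OpenAI has not disclosed how its reasoning systems work, but the "explore a tree of possibilities and pick the best outcome" idea De Ridder describes can be illustrated with a generic beam-search sketch like the one below. The expand and score functions are placeholders the reader would supply; this is an illustration of the general technique, not OpenAI's method.

```python
# Generic beam-search sketch: generate several candidate next steps, explore a
# tree of possibilities, and keep only the most promising branches at each depth.
from typing import Callable, List
import heapq


def beam_search(
    root: str,
    expand: Callable[[str], List[str]],   # proposes candidate next steps
    score: Callable[[str], float],        # rates how promising a candidate is
    beam_width: int = 3,
    max_depth: int = 4,
) -> str:
    """Expand a tree of candidates, keeping the top `beam_width` per depth,
    and return the best-scoring state seen overall."""
    frontier = [root]
    best_state, best_score = root, score(root)

    for _ in range(max_depth):
        # Generate and score the children of every state on the frontier.
        candidates = [(score(child), child)
                      for state in frontier
                      for child in expand(state)]
        if not candidates:
            break
        # Keep only the most promising candidates for the next round.
        top = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
        frontier = [state for _, state in top]
        if top[0][0] > best_score:
            best_score, best_state = top[0]
    return best_state


# Toy usage: build a string of "1"s by appending bits, scoring by the count of "1"s.
# best = beam_search("", lambda s: [s + "0", s + "1"], lambda s: s.count("1"))
```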

Commerce Report Urges Government Oversight Of Open AI Model Risks

The U.S. government should monitor potential risks associated with open AI foundation models and be ready to respond if those risks become more severe, according to a new report from the Department of Commerce's National Telecommunications and Information Administration (NTIA). The report, which was shared with FedScoop, analyzes the risks and benefits of dual-use foundation models: large, complex AI systems trained on extensive datasets that can be adapted for various uses, often with publicly available model weights.

The report was prepared under the direction of the White House’s executive order on AI, which tasked the NTIA with evaluating and making recommendations for these models. While the NTIA acknowledged that open foundation models can democratize participation in AI research and development and reduce the concentration of power in the AI market, it also warned that the availability of model weights could pose risks to national security, privacy, and civil rights, particularly if not properly managed.

The report concludes that there is not enough evidence to definitively justify imposing restrictions on open-weight models, nor to rule out the possibility of future restrictions. Instead, the NTIA recommends that the government collect evidence on these models, evaluate the data, and act based on those evaluations.

Secretary of Commerce Gina Raimondo stated that the report provides a framework for responsible AI innovation and American leadership by promoting openness and outlining how the U.S. government can prepare for and address potential challenges. The evidence collection may involve encouraging standards, and if necessary, requiring audits, disclosures, and transparency for dual-use foundation models, even those without widely available model weights. The report also suggests that the evaluation process might include developing benchmarks and definitions for monitoring and potential regulatory actions, as well as enhancing federal expertise in technical, legal, social science, and policy areas to support the evaluation of these models.

Learn AI-led Business & startup strategies, tools, & hacks worth a Million Dollars (free AI Masterclass) 🚀

This incredible 3-hour Crash Course on AI & ChatGPT (worth $399) designed for founders & entrepreneurs will help you 10x your business, revenue, team management & more.

It has been taken by 1 Million+ founders & entrepreneurs across the globe, who have been able to:

  • Automate 50% of their workflow & scale their business

  • Make quick & smarter decisions for their company using AI-led data insights

  • Write emails, content & more in seconds using AI

  • Solve complex problems, research 10x faster & save 16 hours every week

In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.

Highlights from GenAI, OpenAI, and ClosedAI: Dive into the latest projects and innovations from the leading organizations behind some of the most advanced AI models in both open-source and closed-source AI.

Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.

Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realization of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!