- "Towards AGI"
- Posts
- ChatGPT Hits 200 Million Weekly Users, Doubling Growth Since Last Fall
ChatGPT Hits 200 Million Weekly Users, Doubling Growth Since Last Fall
Welcome to Towards AGI, your premier newsletter dedicated to the world of Artificial Intelligence. Our mission is to guide you through the evolving realm of AI with a specific focus on Generative AI. Each issue is designed to enrich your understanding and spark your curiosity about the advancements and challenges shaping the future of AI.
Whether you're deeply embedded in the AI industry or just beginning to explore its vast potential, "Towards AGI" is crafted to provide you with comprehensive insights and discussions on the most pertinent topics. From groundbreaking research to ethical considerations, our newsletter is here to keep you at the forefront of AI innovation. Join our community of AI professionals, hobbyists, and academics as we pursue the ambitious path toward Artificial General Intelligence. Let’s embark on this journey together, exploring the rich landscape of AI through expert analysis, exclusive content, and engaging discussions.
TheGen.AI News
ChatGPT Hits 200 Million Weekly Users, Doubling Growth Since Last Fall

On Thursday, AI startup OpenAI announced that its chatbot, ChatGPT, has surpassed 200 million weekly active users, doubling its user base since last fall. This rapid growth underscores the increasing popularity and widespread adoption of AI-driven tools.
Launched in 2022, ChatGPT quickly became popular for its ability to generate human-like responses to user prompts. In November, OpenAI CEO Sam Altman reported that the chatbot had reached 100 million weekly active users. This latest milestone highlights the growing demand for AI technologies across various industries.
OpenAI also noted that 92% of Fortune 500 companies are now using its products, and that usage of its application programming interface (API) has doubled since the introduction of GPT-4o mini in July. GPT-4o mini is a more cost-effective and energy-efficient AI model, designed to make OpenAI’s technology more accessible to a wider audience.
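For readers unfamiliar with the API, the sketch below shows what a typical GPT-4o mini call looks like with OpenAI's Python SDK (v1.x); the prompt and system message are illustrative and not part of OpenAI's announcement.

```python
# Minimal sketch of a GPT-4o mini call via the OpenAI Python SDK (v1.x).
# Assumes OPENAI_API_KEY is set in the environment; the prompt is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the cost-efficient model mentioned above
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize this week's AI news in one sentence."},
    ],
)

print(response.choices[0].message.content)
```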
ChatGPT’s success has played a significant role in increasing AI's prominence and has boosted the valuation of San Francisco-based OpenAI.
In related news, OpenAI and fellow AI startup Anthropic have signed agreements with the U.S. government for research, testing, and evaluation of their AI models, according to the U.S. Artificial Intelligence Safety Institute.
Additionally, media reports suggest that tech giants Apple and Nvidia are in talks to invest in OpenAI as part of a new fundraising round that could value the ChatGPT creator at over $100 billion. Microsoft, OpenAI’s primary backer, is also expected to participate in the investment.
Gartner Predicts One-Third of Gen AI Projects Will Be Abandoned

According to a recent Gartner report, many companies are finding it difficult to derive value from their generative artificial intelligence (Gen AI) projects, with about one-third of these initiatives expected to be abandoned by the end of 2025.
Rita Sallam, a distinguished VP analyst at Gartner, noted that after last year's excitement around Gen AI, executives are now eager to see returns on their investments. However, proving and realizing value has been challenging, especially as the scope of these projects expands and the financial strain of developing and deploying Gen AI models grows.
The report highlights the high costs associated with these projects. Using a Gen AI API for tasks like coding assistance might cost a company between $100,000 and $200,000 upfront, plus about $550 per user per year. At the higher end, fine-tuning foundation AI models or building custom models from scratch can require $5 million to $20 million upfront, plus $8,000 to $21,000 per user annually.
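To give those ranges some scale, here is a small illustrative calculation in Python; the 1,000-user count and three-year horizon are assumptions for the sake of the example, not figures from Gartner.

```python
# Illustrative three-year cost comparison using the ranges cited above.
# The 1,000-user count and three-year horizon are assumptions, not Gartner figures.
users, years = 1_000, 3

# API-based coding assistant: $100k-$200k upfront, ~$550 per user per year.
api_low  = 100_000 + 550 * users * years
api_high = 200_000 + 550 * users * years

# Fine-tuned or custom model: $5M-$20M upfront, $8k-$21k per user per year.
custom_low  = 5_000_000  + 8_000  * users * years
custom_high = 20_000_000 + 21_000 * users * years

print(f"API assistant: ${api_low:,} - ${api_high:,}")        # $1,750,000 - $1,850,000
print(f"Custom model:  ${custom_low:,} - ${custom_high:,}")  # $29,000,000 - $83,000,000
```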
Despite the significant challenges, some companies have reported benefits from Gen AI, such as increased revenue, cost savings, and improved productivity. However, Gartner warns that the benefits can be difficult to quantify. Sallam emphasized that Gen AI requires a greater tolerance for indirect, future-focused financial investments rather than immediate returns, which can make CFOs hesitant to allocate funds to strategic outcomes.
In addition to costs, Gartner identified other factors that could lead to the failure of AI projects, such as inadequate risk controls and poor data quality. These concerns stand in contrast to other surveys, like one from Bloomberg Intelligence, which indicated that the percentage of companies working on deploying Gen AI "co-pilot" programs doubled between December 2023 and July 2024.
Marketers Hesitant on Gen AI Adoption Despite Hype

Artificial intelligence (AI) has significantly impacted various industries, and marketing is no exception, particularly in how brands engage with consumers. The excitement around generative AI (Gen AI) peaked last year, but its concrete influence on creative campaigns remains mostly unproven.
The gap between the hype and the visible impact of generative AI in consumer-facing campaigns raises an important question: Is the technology underperforming, or are marketers still trying to figure out how to use it effectively?
Vineeth Viswambharan, Vice President of Marketing & Sales at Adani Wilmar, provides insight into the unseen role of AI, saying, “What you see with AI is just the surface. The real AI assistance lies behind the scenes in generating marketing communication.”
Viswambharan describes a revolutionized workflow where AI tools have streamlined traditional, time-consuming processes. He explains, “In the past, converting scripts into storyboards was a lengthy process involving illustrations. Now, AI tools like text-to-image converters and AI assistants accomplish this in a fraction of the time.” However, he also notes that the final consumer-facing content, whether an image or video, still lacks the subtlety and emotional depth that human-created work offers.
While large-scale AI-generated campaigns are uncommon, some brands are beginning to experiment. Zomato's viral campaign featuring AI-generated images of delivery personnel dancing in the rain showcased AI's potential for creating engaging content, cleverly turning delivery delays into a whimsical narrative. Yet, this also underscored the need for human oversight to ensure AI-generated content aligns with brand values and meets customer expectations.
Despite these advancements, brands and marketers remain cautious about fully adopting AI-generated campaigns, particularly those involving generative AI.
Apurva Sircar, Head of Marketing at Bandhan Bank, offers a balanced view, stating, “Gen AI still isn’t perfect, especially in understanding sarcasm. Until the technology can handle these nuances on its own, there will always be hesitation about using it fully.” Sircar highlights the current limitations of Gen AI in capturing the subtleties of human communication.
Nevertheless, this caution hasn’t stopped marketers from experimenting with generative AI in more controlled settings. "Gen AI is already widely used in marketing, particularly for crafting digital ad copy," Sircar notes. However, he also points out that while generative AI is extensively used, large-scale campaigns are still in the experimental phase. He believes it’s only a matter of time before Gen AI leads to major disruptions in the marketing industry.
Enterprises Ramp Up Generative AI Deployments, Bloomberg Survey Reveals

According to a Bloomberg Intelligence report on AI, the rate at which companies are deploying generative artificial intelligence (Gen AI) copilot programs doubled between December 2023 and July 2024. The report, based on a July survey of 50 CIOs at U.S.-based companies, found that 66% of respondents are now working on implementing Gen AI copilots, up from just 32% in December, according to Mandeep Singh, Bloomberg Intelligence's senior industry analyst and the lead author of the report.
The most common use case for Gen AI, cited by over half of the respondents, is chatbot agents, particularly for customer service applications.
The report also noted an increase in companies evaluating the training of foundation models, which are large language models underlying most Gen AI applications. The percentage of respondents "working on" training these models rose from 26% in December to 40% in July, and half of the respondents reported they are "evaluating" model training.
Singh highlighted that these deployments could lead to a significant rise in AI inference work among companies. Sixty percent of respondents indicated plans to increase spending on Microsoft's Azure for AI inference tasks, up from 41% in December.
Azure is currently the leading cloud provider for inference, while Amazon's AWS cloud service saw a drop in preference, with its usage among respondents falling from 55% to 42% between December and July. Google Cloud ranks third, with 36% of respondents planning to increase their spending on inference.
Singh pointed out that the demand for Azure's inference services is likely to keep growing, partly due to the appeal of partner OpenAI's Gen AI models, such as GPT-4, which are not available on AWS or Google Cloud. "The integration of Microsoft's Azure platform with OpenAI models gives it a competitive edge over other public cloud providers for hosting inference workloads," Singh wrote in the report.
The survey also showed a significant increase in the use of OpenAI models by companies, rising from 41% in December to 70% in July. By contrast, Google's top Gen AI offering, Google Gemini, was used by just 18% of respondents compared to OpenAI's 70%.
The report suggests that Gen AI's growing appeal is helping Microsoft narrow the gap with Amazon in cloud services. At the end of 2023, Microsoft held 16% of the cloud infrastructure services market, compared to AWS's 47%, a smaller gap than the 48% to 12% difference seen in 2018. "We expect this gap to narrow even more," Singh noted.
Beyond the top three cloud providers, Snowflake and MongoDB ranked high among preferred vendors for developing "retrieval-augmented generation" (RAG), a rising Gen AI technique in which the model retrieves relevant information from an external database and uses it to ground its responses.
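The pattern itself is simple to sketch. Below is a minimal, illustrative RAG loop in Python: word overlap stands in for the vector similarity a store such as Snowflake or MongoDB would compute, and generate() is a placeholder for the LLM call; all names and documents are made up for illustration.

```python
# Minimal retrieval-augmented generation (RAG) sketch. Word overlap stands in for
# the vector similarity an external store would compute; generate() is a stand-in
# for the LLM call. Documents and names are illustrative only.

def similarity(a: str, b: str) -> float:
    """Toy relevance score: word overlap in place of embedding cosine similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

# 1. Documents held in the external store.
documents = [
    "66% of surveyed CIOs are deploying Gen AI copilot programs.",
    "Azure is the leading cloud provider for AI inference workloads.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """2. Pull the k most relevant documents for the query."""
    return sorted(documents, key=lambda d: similarity(query, d), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for the model call that answers from the retrieved context."""
    return f"[model answer grounded in prompt]\n{prompt}"

# 3. Augment the prompt with retrieved context, then generate.
question = "Which cloud provider leads for inference?"
context = "\n".join(retrieve(question))
print(generate(f"Context:\n{context}\n\nQuestion: {question}"))
```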
TheOpensource.AI News
Microsoft Unveils Phi-3.5 AI Models for Open-Source Development

Microsoft has introduced three new open-source AI models in its Phi-3.5 series: Phi-3.5-mini-instruct, Phi-3.5-MoE-instruct, and Phi-3.5-vision-instruct. These models, released under a permissive MIT license, provide developers with versatile tools for various tasks, including reasoning, multilingual processing, and image and video analysis.
The Phi-3.5-mini-instruct model, featuring 3.82 billion parameters, is tailored for quick, basic reasoning tasks. It’s designed for environments with limited memory and compute power, making it ideal for code generation, mathematical problem-solving, and logic-based reasoning tasks. Despite its smaller size, Phi-3.5-mini-instruct outperforms larger models like Meta’s Llama-3.1-8B-instruct and Mistral-7B-instruct on benchmarks such as RepoQA, which assesses long-context code comprehension.
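As a rough illustration of how such a small model is typically run, here is a sketch using Hugging Face transformers. The "microsoft/Phi-3.5-mini-instruct" repository ID follows Microsoft's usual naming and should be verified against the official model card, as should hardware and library-version requirements.

```python
# Sketch: running Phi-3.5-mini-instruct locally with Hugging Face transformers.
# The repo ID reflects Microsoft's usual naming; verify it and the hardware
# requirements against the official model card before relying on this.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3.5-mini-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a one-line Python list comprehension that squares 1..5."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```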
The Phi-3.5-MoE-instruct model, with 41.9 billion parameters, uses a mixture-of-experts (MoE) architecture, allowing it to handle more complex reasoning tasks by selectively activating different parameters based on the input. It surpasses larger models, including Google’s Gemini 1.5 Flash, in various benchmarks, demonstrating its advanced reasoning abilities. This makes it particularly effective for applications requiring deep, context-aware understanding and decision-making.
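The selective-activation idea behind MoE can be shown with a toy layer: a router scores a set of expert networks for each token, and only the top-k experts actually run. This is a generic sketch of the technique, not Microsoft's actual Phi-3.5-MoE implementation.

```python
# Toy mixture-of-experts layer: a router scores experts per token and only the
# top-k experts run. Generic sketch of the technique, not Phi-3.5-MoE itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                                  # x: (tokens, d_model)
        weights = F.softmax(self.router(x), dim=-1)        # (tokens, n_experts)
        top_w, top_idx = weights.topk(self.top_k, dim=-1)  # keep only top-k experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e               # tokens routed to expert e
                if mask.any():
                    out[mask] += top_w[mask, slot, None] * expert(x[mask])
        return out

layer = ToyMoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64]); only 2 of 8 experts ran per token
```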
The Phi-3.5-vision-instruct model, with 4.15 billion parameters, combines text and image processing capabilities, making it suitable for tasks such as image understanding, optical character recognition, and video summarization. Its multimodal approach, supported by a 128K token context length, enables it to excel in complex, multi-frame visual tasks. Trained on a mix of synthetic and publicly available datasets, the Phi-3.5-vision-instruct model is particularly strong in tasks like TextVQA and ScienceQA, offering high-quality visual analysis.
Each model in the Phi-3.5 series underwent rigorous training. The Phi-3.5-mini-instruct was trained on 3.4 trillion tokens over 10 days using 512 H100-80G GPUs. The Phi-3.5-MoE-instruct model required a longer training period, processing 4.9 trillion tokens over 23 days with the same number of GPUs. The Phi-3.5-vision-instruct model was trained on 500 billion tokens over six days using 256 A100-80G GPUs. This extensive training has enabled the Phi-3.5 models to achieve strong performance across numerous benchmarks, often surpassing other leading AI models, including OpenAI’s GPT-4, in several scenarios.
These benchmark results showcase the Phi-3.5 models' effectiveness, particularly the Phi-3.5 mini, compared to other top AI models like Mistral, Llama, and Gemini across various tasks. The data underscores the Phi-3.5 models' capabilities in a range of scenarios, from general reasoning to specialized problem-solving.
The AI community has responded positively to the technical strengths of the Phi-3.5 series, especially in multilingual and vision tasks. On social media, users have noted the models' impressive benchmark performances and expressed interest in their potential applications, with commentators such as Turan Jafarzade, PhD, weighing in on LinkedIn.
NTIA Backs Open-Source AI: Positive Implications for Security

The practice of hiding software’s inner workings to prevent misuse, known as “security by obscurity,” runs counter to decades of experience in software development. An open model tends to inspire greater trust, which is why the National Telecommunications and Information Administration (NTIA), in a report prepared for the White House, concluded that there’s no need to limit open-source artificial intelligence; in fact, openness is beneficial for security. While generative AI can be used for both benign and harmful purposes, from trivial mischief to serious threats, the growing power and ubiquity of this technology understandably raises concerns among government officials and security experts.
Following President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, the NTIA examined the risks and benefits associated with large language models used in generative AI, particularly those with publicly available weights. These open models are more accessible to individuals and smaller companies, extending AI's reach beyond the major Silicon Valley players driving the AI boom. This wider access creates more opportunities for positive AI applications but also increases the potential for misuse.
When the NTIA released its “Dual-Use Foundation Models with Widely Available Model Weights” report in late July, many experts felt reassured. The NTIA wisely chose not to rely on a “security through obscurity” strategy, where protection depends on keeping sensitive information or capabilities hidden. Instead, the agency recommended that the U.S. government avoid restricting these open models and focus on monitoring them for potential risks.
The Importance of Open Models
Decades of real-world software development have shown that security through obscurity is ineffective. Sensitive information often gets exposed accidentally, such as when a developer mistakenly commits credentials to a GitHub repository or a misconfigured server leaks confidential files. Determined malicious actors can usually bypass such hidden defenses, similar to how they might find a way to connect to an SSH server running on a non-standard port.
Just as with traditional software, relying on obscurity to protect generative AI is not a viable defense. Malicious actors can still uncover what is being hidden, either intentionally or accidentally. The advantage of open-source software is that it allows anyone to inspect and potentially identify security issues. While this could include attackers, it generally benefits defenders more. For instance, when a backdoor was inserted into the widely-used xz compression library, it was detected and resolved before the compromised version could spread widely. This happened because a developer noticed a slight performance drop in the backdoored version while working on another project. Similarly, making an AI model more open increases the number of people who can identify and address any issues it might have.
TheClosedsource.AI News
OpenAI Enhances ChatGPT API with Advanced File Search Controls for Developers

OpenAI has introduced a major update to its ChatGPT API, giving developers enhanced control over the chatbot's File Search system. This upgrade improves the Assistants API, enabling developers to both examine the responses generated by the AI and fine-tune the system’s behavior for more precise and relevant outcomes.
The updated File Search tool within the Assistants API now allows developers to review the AI's response selection process, giving them better insight into how the AI chooses what to include and a deeper understanding of the system’s operations.
Moreover, developers can now modify the settings of the result ranker, which controls how the AI prioritizes information when creating responses. By selecting a ranking value between 0.0 and 1.0, developers can influence which information the AI emphasizes and which it overlooks, providing greater control over the relevance of the generated content.
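As a hedged sketch of what those controls look like in practice, the snippet below creates an assistant with the file_search tool and a score threshold via the OpenAI Python SDK; the exact parameter names (ranking_options, score_threshold) and their placement should be confirmed against OpenAI's current API reference.

```python
# Hedged sketch: configuring File Search ranking behavior via the Assistants API.
# Parameter names (ranking_options, score_threshold) should be checked against
# OpenAI's current API reference; the model and instructions are illustrative.
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    model="gpt-4o-mini",
    instructions="Answer questions using only the attached documents.",
    tools=[{
        "type": "file_search",
        "file_search": {
            # Only retrieved chunks scoring above this 0.0-1.0 threshold are
            # passed to the model, tightening the relevance of the answer.
            "ranking_options": {"score_threshold": 0.6},
        },
    }],
)

# The chunks retrieved for a given response can then be inspected through the
# run steps of a thread, which enables the review workflow described above.
```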
OpenAI's Focus on Developer Enablement
This update underscores OpenAI's dedication to equipping developers with the necessary tools to effectively integrate and utilize ChatGPT in various applications. By enhancing control over the File Search system, OpenAI allows developers to build more accurate, reliable, and tailored AI-powered solutions.
This announcement comes on the heels of a recent report revealing OpenAI's plans to release a new AI model, "Strawberry," which is designed to enhance ChatGPT's mathematical and logical reasoning abilities.
TheGen.AI Maturity Assessment
Assess your organization's GenAI maturity with TheGen.AI's comprehensive evaluation and take action to advance your AI capabilities—fill out the form to get started.
Unlock the future of problem solving with Generative AI!

If you're a professional looking to elevate your strategic insights, enhance decision-making, and redefine problem-solving with cutting-edge technologies, the Consulting in the age of Gen AI course is your gateway. It is ideal for anyone ready to integrate Generative AI into their work and stay ahead of the curve.
In a world where AI is rapidly transforming industries, businesses need professionals and consultants who can navigate this evolving landscape. This learning experience arms you with the essential skills to leverage Generative AI to improve problem-solving and decision-making and to advise clients more effectively.
Join us and gain firsthand experience in how state-of-the-art GenAI can take your problem-solving skills to new heights. This isn’t just learning; it’s your competitive edge in an AI-driven world.
If you're frustrated by one-sided reporting, our 5-minute newsletter is the missing piece. We sift through 100+ sources to bring you comprehensive, unbiased news—free from political agendas. Stay informed with factual coverage on the topics that matter.
In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.
Highlights from TheGen.AI, TheOpensource.AI, and TheClosedsource.AI: Dive into the latest projects and innovations from the leading organizations behind some of the most advanced open-source and closed-source AI models.
Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.
Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realization of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!