- "Towards AGI"
- Posts
- Women in AI: Overcoming Underrepresentation with Enthusiasm for Gen AI Tools
Women in AI: Overcoming Underrepresentation with Enthusiasm for Gen AI Tools
Welcome to Towards AGI, your premier newsletter dedicated to the world of Artificial Intelligence. Our mission is to guide you through the evolving realm of AI with a specific focus on Generative AI. Each issue is designed to enrich your understanding and spark your curiosity about the advancements and challenges shaping the future of AI.
Whether you're deeply embedded in the AI industry or just beginning to explore its vast potential, "Towards AGI" is crafted to provide you with comprehensive insights and discussions on the most pertinent topics. From groundbreaking research to ethical considerations, our newsletter is here to keep you at the forefront of AI innovation. Join our community of AI professionals, hobbyists, and academics as we pursue the ambitious path toward Artificial General Intelligence. Let’s embark on this journey together, exploring the rich landscape of AI through expert analysis, exclusive content, and engaging discussions.
TheGen.AI News
Women in AI: Overcoming Underrepresentation with Enthusiasm for Gen AI Tools

Women have historically been underrepresented in the tech industry, including in the field of AI. This underrepresentation, however, does not reflect a lack of interest in or enthusiasm for generative AI tools. A recent survey by Women Go Tech revealed that 68% of female respondents have used at least one AI tool, with ChatGPT being the most popular. Additionally, 61% expressed a desire to learn more about AI tools and applications. Despite this interest, many women feel hesitant due to longstanding biases, discrimination, and concerns about data privacy and ethical issues.
The survey, supported by Google.org and the OSCE, included 5,400 respondents from 13 European countries. It categorized women into four groups: those interested in tech careers (31%), those not interested in tech careers (30%), those with over two years of tech experience (27%), and those with less than two years of experience (12%). Among aspiring technologists, 77% showed interest in AI, while 64.6% of tech newcomers and 63.9% of experienced professionals wanted to learn more about the technology.
Apart from ChatGPT, respondents had tried OpenAI Codex, Bard/Gemini, and GPT-4, with translation, navigation/travel, and searching for answers being the most common uses. However, 25% of both new and experienced tech professionals reported discomfort with their technical skills. The report highlights that factors like low confidence, lack of encouragement, and limited access to education significantly impact women's self-esteem and participation in tech.
Pervasive stereotypes and societal expectations often lead women to underestimate their abilities, resulting in lower female representation and continued bias in the tech industry. Many women also experience "imposter syndrome," doubting their capabilities despite being qualified.
Ana Prică-Cruceanu, Chief SDG Strategy Officer at UNESCO's Women4Ethical AI initiative, emphasized the importance of showcasing female role models and addressing the Matilda effect, where women's contributions are attributed to men. She stressed that women can enter the tech field at any age and should be encouraged to do so.
Goldman Sachs Warns of Overhyped, Costly GenAI and Impending Investment Bubble

Goldman Sachs recently released a research paper titled “Gen AI: too much spend, too little benefit?” that raises questions about the economic feasibility of generative AI. The paper scrutinizes the significant investments made in AI technology and doubts whether these expenditures will yield the expected benefits and returns.
Despite being promoted as a groundbreaking technology by Silicon Valley and receiving strong support from the stock market, generative AI is facing intense scrutiny. Goldman Sachs’ research suggests that the financial returns on AI investments might be lacking. The report posits that while investors could continue to profit, the actual benefits of AI remain uncertain. It suggests that either AI will eventually meet its promises, or the current bubble will last longer than anticipated.
The optimism around AI has notably increased the stock prices of companies like Nvidia and other S&P 500 giants. These gains are largely based on the assumption that generative AI will enhance productivity through automation, reduced labor costs, and increased efficiency. However, Goldman Sachs warns that these stock price increases are premature, as the anticipated productivity improvements have yet to materialize. For the S&P 500 to achieve above-average returns over the next decade, a very favorable AI scenario is necessary, which may not be realistic.
A critical point raised in the paper is the gap between the hype and the actual performance of generative AI. The impact of AI on corporate profitability is crucial, and without an extremely optimistic scenario, S&P 500 returns are expected to be below their historical average since 1950.
MIT professor Daron Acemoglu, a contributor to the paper, emphasized that simply scaling AI training data may not resolve the technology’s issues. He questioned the idea that increasing the amount of training data would improve AI performance. Acemoglu noted that doubling the data, such as adding more content from Reddit, might improve informal conversational abilities but wouldn’t necessarily enhance functional tasks like customer service.
Jim Covello, Goldman Sachs’ head of global equity research, expressed skepticism about the cost and transformative potential of generative AI. He highlighted the technology’s high expenses and its current inability to solve complex problems. Covello compared the AI investment frenzy to previous tech hypes like virtual reality, the metaverse, and blockchain, which saw significant spending but delivered limited real-world applications.
Over a Third of Sensitive Business Data in Generative AI Apps is Regulated Personal Information: Netskope Report

Netskope, a leader in Secure Access Service Edge (SASE), has published new research indicating that regulated data (data that organizations are legally obligated to protect) constitutes over a third of the sensitive data shared with generative AI (genAI) applications, posing a significant risk of costly data breaches for businesses.
The latest Netskope Threat Labs research reveals that three-quarters of surveyed businesses now entirely block at least one genAI app, demonstrating a strong desire among enterprise technology leaders to mitigate the risk of sensitive data leaks. However, fewer than half of these organizations apply data-centric controls to prevent sensitive information from being shared in input inquiries, showing a lag in adopting advanced data loss prevention (DLP) solutions necessary for safely enabling genAI.
The researchers found that 96% of businesses globally are now using genAI—a figure that has tripled over the past year. On average, enterprises use nearly 10 genAI apps, up from three last year, with the top 1% of adopters now using an average of 80 apps, a significant increase from 14. This increased usage has led to a surge in proprietary source code sharing within genAI apps, accounting for 46% of all documented data policy violations. These changes complicate enterprise risk control, highlighting the need for more robust DLP efforts.
There are positive signs of proactive risk management in the nuanced security and data loss controls organizations are applying. For example, 65% of enterprises now implement real-time user coaching to guide user interactions with genAI apps. The research indicates that effective user coaching has significantly mitigated data risks, with 57% of users altering their actions after receiving coaching alerts.
"Securing genAI requires further investment and greater attention as its use spreads through enterprises with no signs of slowing down," said James Robinson, Chief Information Security Officer at Netskope. "Enterprises must recognize that genAI outputs can inadvertently expose sensitive information, propagate misinformation, or even introduce malicious content. This necessitates a robust risk management approach to safeguard data, reputation, and business continuity."
TheOpensource.AI News
Groq’s Open-Source Llama AI Model Surpasses GPT-4o and Claude in Function Calling

Groq, an AI hardware startup, has unveiled two open-source language models that surpass those of major tech companies in specialized tool use capabilities. The new Llama-3-Groq-70B-Tool-Use model has taken the top spot on the Berkeley Function Calling Leaderboard (BFCL), outperforming proprietary models from OpenAI, Google, and Anthropic.
Rick Lamers, Groq's project lead, shared the milestone on X.com, stating, “I’m proud to announce the Llama 3 Groq Tool Use 8B and 70B models. An open-source Tool Use full finetune of Llama 3 that reaches the #1 position on BFCL, beating all other models, including proprietary ones like Claude Sonnet 3.5, GPT-4 Turbo, GPT-4o, and Gemini 1.5 Pro.”
The larger 70B parameter model achieved a 90.76% overall accuracy on the BFCL, while the smaller 8B model scored 89.06%, placing third overall. These results indicate that open-source models can rival and even outperform closed-source models in specific tasks.
Groq collaborated with AI research firm Glaive to develop these models, employing full fine-tuning and Direct Preference Optimization (DPO) on Meta’s Llama-3 base model. They highlighted the use of ethically generated synthetic data for training, addressing concerns about data privacy and overfitting.
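For context, DPO trains directly on preference pairs instead of fitting a separate reward model. A standard form of its objective, following the original DPO paper rather than anything Groq has disclosed about its exact recipe, is:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
```

Here y_w and y_l are the preferred and rejected completions for prompt x, π_ref is the frozen base model, and β controls how far the fine-tuned policy π_θ may drift from it.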
This breakthrough signifies a major shift in the AI field. By achieving top performance using only synthetic data, Groq challenges the belief that vast amounts of real-world data are required to develop state-of-the-art AI models. This method could alleviate privacy issues and reduce the environmental impact of training on large datasets. It also suggests new possibilities for creating specialized AI models in areas with limited or sensitive real-world data.
The models are now available via the Groq API and Hugging Face, a widely used platform for sharing machine learning models. This accessibility could drive innovation in areas needing advanced tool use and function calls, such as automated coding, data analysis, and interactive AI assistants.
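For developers who want to try the models, tool use runs through Groq's OpenAI-compatible chat API. The sketch below is a minimal illustration, not Groq's official example: the model ID, the get_weather function, and its schema are assumptions to be checked against Groq's documentation.

```python
# Minimal tool-use sketch against the Groq API (pip install groq).
# Model ID and the weather tool are illustrative assumptions.
import json

from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="llama3-groq-70b-8192-tool-use-preview",  # confirm the current ID in Groq's docs
    messages=[{"role": "user", "content": "What's the weather in Vilnius?"}],
    tools=tools,
    tool_choice="auto",
)

# When the model elects to call a tool, arguments arrive as a JSON string.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```

In a full loop, the application would execute the chosen tool and send its result back as a tool-role message so the model can compose a final answer.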
Google Launches Project Oscar: Open-Source AI Agents for Developers

Despite its reputation for aggressively leveraging user data, Google has a strong history of contributing to open-source projects and even releasing its own.
Recognizing the growing interest in artificial intelligence, Google has been developing new AI-powered solutions to navigate the evolving digital landscape. One notable effort is Project Oscar, an open-source framework designed to create AI-powered assistants, or “agents,” to support software development and maintenance.
Oscar aims to enhance open-source software development by offering an experimental architecture for deploying agents within repositories. These agents can manage tasks such as addressing incoming issues, matching questions with existing documentation, and more. The primary objectives of the project are to reduce the workload on maintainers by handling issues, change lists, pull requests, and forum questions, ultimately leading to more efficient software maintenance.
While Project Oscar is currently under the Go project, it has the potential to become a standalone project in the future. It is designed to be flexible enough for use with other programming languages and projects as well.
Google has identified three core capabilities for Oscar:
1. Utilizing natural language to control deterministic tools (see the sketch after this list).
2. Indexing and surfacing project-related context during contributor interactions.
3. Reviewing issue reports, change lists, and pull requests to improve them post-submission.
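To make the first capability concrete, here is a minimal, hypothetical sketch of that pattern: a language model only chooses which tool to run and with what argument, while the tools themselves remain ordinary, deterministic functions. None of these names come from Project Oscar's codebase.

```python
# A hypothetical "natural language -> deterministic tool" dispatcher,
# illustrating the agent pattern Oscar describes (all names invented).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]

def label_issue(issue_id: str) -> str:
    # Deterministic: the same input always yields the same action.
    return f"Applied triage labels to issue {issue_id}"

def find_docs(query: str) -> str:
    return f"Top documentation matches for: {query!r}"

TOOLS = {
    "label_issue": Tool("label_issue", "Triage and label an incoming issue", label_issue),
    "find_docs": Tool("find_docs", "Match a question to existing docs", find_docs),
}

def dispatch(llm_choice: dict) -> str:
    """The LLM only picks a tool and its argument; execution stays deterministic."""
    tool = TOOLS[llm_choice["tool"]]
    return tool.run(llm_choice["argument"])

# In a real agent, an LLM would produce this structure from a maintainer's
# natural-language request such as "please triage issue 4217".
print(dispatch({"tool": "label_issue", "argument": "4217"}))
```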
TheClosedsource.AI News
OpenAI To Launch Budget-Friendly, Smarter GPT-4o Mini

OpenAI is launching a lighter, more affordable model for developers called GPT-4o Mini. This model is significantly cheaper than full-sized versions and more advanced than GPT-3.5.
Developing applications with OpenAI’s models can be expensive, potentially excluding developers who might then choose more affordable alternatives like Google’s Gemini 1.5 Flash or Anthropic’s Claude 3 Haiku. To address this, OpenAI is now entering the market for lightweight models.
Olivier Godement, head of the API platform product, told The Verge, "I think GPT-4o Mini really aligns with OpenAI’s mission of making AI more broadly accessible. To benefit every industry and application, we must make AI much more affordable."
Starting today, ChatGPT users on Free, Plus, and Team plans can use GPT-4o Mini instead of GPT-3.5 Turbo, with Enterprise users gaining access next week. Although GPT-3.5 will no longer be available for ChatGPT users, developers can still access it via the API until it is eventually retired.
The new lightweight model will also support text and vision through the API, with plans to handle multimodal inputs and outputs, such as video and audio. This could enable more capable virtual assistants that understand and suggest travel itineraries, though the model is intended for simpler tasks and not advanced systems like Siri.
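For developers, adopting the lighter model through OpenAI's Python SDK is a one-line model choice. A minimal sketch follows; the prompt is invented, and pricing and rate limits should be checked against OpenAI's documentation.

```python
# Minimal text completion with GPT-4o Mini via the OpenAI Python SDK (v1+).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the lightweight model discussed above
    messages=[
        {"role": "user", "content": "Suggest a two-day itinerary for Lisbon."}
    ],
)
print(response.choices[0].message.content)
```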
GPT-4o Mini achieved an 82 percent score on the Measuring Massive Multitask Language Understanding (MMLU) benchmark, which includes around 16,000 multiple-choice questions across 57 academic subjects. GPT-3.5 scored 70 percent on this benchmark, GPT-4o scored 88.7 percent, and Google claims Gemini Ultra has the highest-ever score at 90 percent. Competing models Claude 3 Haiku and Gemini 1.5 Flash scored 75.2 percent and 78.9 percent, respectively.
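To make those benchmark figures concrete, an MMLU-style score is simply accuracy over multiple-choice questions. Here is a toy grader, with invented questions and a stubbed model standing in for a real API call:

```python
# Toy MMLU-style grading: accuracy over multiple-choice questions.
# The questions and the stubbed "model" below are invented for illustration.
QUESTIONS = [
    {"q": "2 + 2 = ?", "choices": {"A": "3", "B": "4", "C": "5", "D": "22"}, "answer": "B"},
    {"q": "H2O is commonly called?", "choices": {"A": "salt", "B": "air", "C": "water", "D": "ice"}, "answer": "C"},
]

def fake_model(question: str, choices: dict) -> str:
    """Stand-in for an API call; a real harness would query the model here."""
    return "B" if "2 + 2" in question else "C"

correct = sum(
    fake_model(item["q"], item["choices"]) == item["answer"] for item in QUESTIONS
)
print(f"Accuracy: {correct / len(QUESTIONS):.0%}")  # a score like GPT-4o Mini's 82% is this ratio at scale
```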
However, researchers are cautious about benchmark tests like the MMLU, as variations in administration can make scores difficult to compare, as noted by The New York Times. There is also concern that AI models might have the answers in their datasets, essentially allowing them to cheat, and typically, no third-party evaluators are involved in the process.
Netskope Integrates with OpenAI’s ChatGPT Enterprise to Enhance Data Governance

Netskope, a leading provider of secure access service edge (SASE) solutions, has announced its integration with OpenAI’s ChatGPT Enterprise Compliance API. This integration aims to enhance security and compliance for enterprise organizations utilizing generative AI (genAI) applications. Through the Netskope One platform, organizations now have access to improved security features such as application visibility, robust policy enforcement, advanced data security, and comprehensive security posture management.
Netskope has observed a significant increase in genAI application usage, with more than triple the users compared to the previous year. This surge in AI adoption has prompted organizations to reassess their data protection strategies. As the average activity per genAI user has also doubled, ensuring compliance, preventing data policy violations, and securing the use of genAI applications like ChatGPT Enterprise has become crucial. Netskope’s CASB API protection utilizes APIs from major vendors like Box, Google Workspace, and Microsoft 365 to provide visibility into cloud service settings and data, enforcing policies to control access and protect data.
The integration with ChatGPT Enterprise is designed to keep enterprise data compliant, secure, and protected. Andy Horwitz, SVP of Global Partner Ecosystem at Netskope, stated, “By integrating the Netskope One platform with OpenAI’s advanced capabilities in ChatGPT Enterprise, we continue to lead in providing comprehensive security solutions for enterprises adopting genAI tools. This integration underscores our commitment to equipping organizations with the necessary tools to manage sensitive data and maintain compliance as AI adoption accelerates.”
Through this integration, joint customers can now more effectively:
Adhere to compliance standards: With over 50 compliance templates and 3,000+ data identifiers, organizations can enforce data loss prevention (DLP) and compliance policies for sensitive data, supporting regulations like GDPR, HIPAA, and GLBA (a toy data-identifier sketch follows this list).
Advance detection and safeguard sensitive data: The platform offers out-of-band visibility and control to protect sensitive information such as personally identifiable information (PII) and intellectual property (IP). Continuous data scanning identifies and addresses data leaks in near real-time, using advanced DLP techniques like Machine Learning (ML) and Optical Character Recognition (OCR) to find difficult-to-identify sensitive information.
Protect against threats: Advanced ML models for malware detection complement traditional methods such as signatures, heuristics, and sandboxing, helping to identify potential threats in near real-time and further mitigate risks.
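To illustrate what a "data identifier" does at its simplest, here is a toy, regex-only sketch. Production DLP such as Netskope's layers thousands of identifiers, ML models, and OCR on top of patterns like these; the two patterns below are illustrative assumptions only.

```python
# A toy, regex-based "data identifier" in the spirit of DLP pattern matching.
import re

IDENTIFIERS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(text: str) -> dict[str, list[str]]:
    """Return every match per identifier, as a DLP engine might before blocking."""
    return {name: pat.findall(text) for name, pat in IDENTIFIERS.items()}

prompt = "Summarize: Jane's SSN is 123-45-6789, reach her at jane@example.com"
findings = scan(prompt)
if any(findings.values()):
    print("Policy violation, blocking prompt:", findings)
```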
Former OpenAI Researcher Launches AI-Powered Education Company

Andrej Karpathy, a co-founder of OpenAI and former executive at Tesla, has announced the launch of his own AI education company, Eureka Labs, which will specialize in developing personalized AI teaching assistants for students on-demand.
Eureka Labs, headquartered in San Francisco, is registered as an LLC in Delaware, US, as reported by TechCrunch.
In a post on X, Karpathy explained that Eureka Labs aims to create AI teaching assistants to work alongside human teachers. “The teacher still designs the course materials, but they are supported, leveraged, and scaled with an AI Teaching Assistant optimized to guide students through them. This Teacher + AI symbiosis could facilitate an entire curriculum of courses on a unified platform,” he stated.
Karpathy also mentioned that these AI teaching assistants could be modeled after renowned personalities. “For instance, in a physics course, students could interact with high-quality materials alongside a virtual Richard Feynman, who guides them every step of the way,” he said. He emphasized that Eureka Labs will be ‘AI native,’ meaning generative AI will be an integral part of the platform rather than an added feature.
Despite his tenure at major tech companies like Tesla and OpenAI, Karpathy’s background is in teaching. He taught deep learning for computer vision at Stanford University until 2015, before becoming a founding member of OpenAI. He later left OpenAI to lead Tesla’s AI department, where he was in charge of developing computer vision for Tesla Autopilot.
Learn AI in 5 Minutes a Day
AI Tool Report is one of the fastest-growing and most respected newsletters in the world, with over 550,000 readers from companies like OpenAI, Nvidia, Meta, Microsoft, and more.
Our research team spends hundreds of hours a week summarizing the latest news and finding the best opportunities for you to save time and earn more using AI.
Exploring the Future: Generative AI – A Deep Dive into Safe and Secure Production Deployment
Towards AGI is excited to announce the second event in our series, "Exploring the Future: Generative AI." As artificial intelligence continues to progress, generative AI is at the cutting edge of innovation, offering transformative possibilities across various sectors. This event, sponsored by Derisk 360 and hosted by Freshminds, will explore the latest advancements, practical applications, and future trends of generative AI.
The central theme of the event will be "Deploying and using GenAI applications in production in a safe and secure environment." The event is scheduled for July 31, 2024, starting at 5:30 PM, and will take place at the Freshminds Offices, Kingsbourne House, 229-231 High Holborn, London WC1V 7DA.
Sign up for the event now!
In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.
Highlights from TheGen.AI, TheOpensource.AI, and TheClosedsource.AI: Dive into the latest projects and innovations from the leading organizations behind some of the most advanced AI models, both open-source and closed-source.
Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.
Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realization of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!