A thought-leadership platform to help the world navigate towards Artificial General Intelligence. We are committed to navigating the path towards Artificial General Intelligence (AGI) by building a community of innovators, thinkers, and AI enthusiasts.
Welcome to Gen Matrix: Your Guide to GenAI Innovation and Adoption across Industries
Discover the trailblazers driving AI innovation with Gen Matrix.
Our platform showcases:
Organizations: Industry early adopters integrating Generative AI
Startups: GenAI innovators in hardware, infrastructure, and applications
Leaders: Influential figures shaping the GenAI ecosystem
Why Choose Gen Matrix?
Stay ahead with our comprehensive insights into the evolving Generative AI landscape.
Coming Soon: Our inaugural Gen Matrix launches December 2024. Sign up now to access the report!
Nominate a Trailblazer: Know an AI innovator? Nominate them for recognition in our upcoming matrix.
Gen Matrix: Where AI innovation meets practicality.
TheGen.AI News
OpenAI Suspends 'Sora' Access Following Beta Tester Code Leak
OpenAI has suspended access to its unreleased Sora video generation tool after a group of beta testers leaked the platform online. The group, known as the "Sora PR Puppets," publicly shared methods to create AI-generated videos using Sora’s advanced text-to-video technology. Although the leak lasted only three hours, it sparked questions about OpenAI’s treatment of artists and its early access program. While OpenAI has not confirmed the specifics of the breach, it quickly revoked access to the tool for all users.
Artists Express Frustration Over Early Access Program
On Tuesday, the group voiced their grievances on the Hugging Face platform, accusing OpenAI of treating them as “unpaid R&D” and “PR puppets.” They criticized the company for promising them roles as co-creators and red teamers, only to subject them to restrictive content approval policies that required all generated videos to be submitted for approval before sharing. This approval process, they argued, was burdensome and failed to fairly compensate them for their efforts.
The group behind the leak also called out OpenAI’s broader approach to legitimizing and regulating art and creativity. They accused the company of exploiting Sora’s development process to improve its tool without adequately supporting the artists involved. The group’s message was clear: “ARTISTS ARE NOT YOUR UNPAID R&D.” They found it especially ironic that OpenAI, valued at $150 billion, provided minimal support to the creators contributing to Sora’s development.
OpenAI Responds
Following the leak, OpenAI suspended all access to Sora, which was expected to launch in early February. The company stated that participation in the preview program was voluntary and that testers were not obligated to provide feedback or use the tool. Niko Felix, an OpenAI spokesperson, highlighted that hundreds of artists had contributed to Sora’s development and reassured that the company would continue supporting them through grants and creative events.
Broader Implications
During the brief leak, users reportedly generated short, 10-second videos in 1080p resolution, often watermarked with OpenAI’s logo. Although the tool showed promise, the incident raised broader concerns about corporate responsibility, fair compensation for creators, and the ethics of early access programs within the tech industry.
Emerging Tech Boom: Gen AI and Quantum Computing to Create 1 Million Jobs by 2030
Emerging technologies like Generative AI (Gen AI) and Quantum Computing are projected to create over 1 million jobs by 2030, according to a report released by staffing firm Quess Corp on November 27. The report highlights a robust demand for tech skills, with Cybersecurity and DevOps seeing significant growth of 58% and 25%, respectively, during Q2FY25.
Development roles comprised 40% of overall hiring, while demand for Artificial Intelligence and Machine Learning (AI/ML) positions grew by 30% sequentially.
"While traditional programming languages remain in demand, there is a notable shift towards Cybersecurity, DevOps, and analytics, signaling an evolving IT landscape," the report stated, offering a cautiously optimistic outlook for the Indian IT sector, as reported by Moneycontrol.
Global Capability Centres (GCCs) have emerged as key drivers of tech recruitment in India during Q2FY25, actively hiring fresh graduates to address talent shortages. BFSI firms are utilizing DevOps for digital banking initiatives, while healthcare is leveraging Java for developing Electronic Health Records.
In Q2FY25, IT services led sectoral hiring with a 37% share, followed by Hi-Tech (11%), Consulting (11%), Manufacturing (9%), and BFSI (8%).
The report also noted sustained growth in the office market as domestic and international companies scale their operations. The expansion of GCCs in India has significantly increased demand for skilled professionals in engineering, IT, finance, and analytics.
Bengaluru leads tech hiring with a 44% share, followed by Hyderabad at 13%. GCCs are also increasingly recruiting talent from Tier-2 and Tier-3 cities, reflecting a broader and more inclusive recruitment strategy.
Generative AI’s Rapid Growth in Offices Sparks Privacy Warnings
Nearly half of Canadian workers (46%) are now using generative artificial intelligence (AI) in their jobs, up from 22% last year, according to the latest Generative AI Adoption Index survey by KPMG in Canada. The Index currently stands at 31.6 on a scale where 100 indicates full adoption, representing 116% growth since November 2023.
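A quick sanity check of the index figures above: if 31.6 reflects 116% growth since November 2023, the implied baseline reading would have been roughly 14.6 (this assumes the 116% figure is growth relative to the 2023 reading, which the report does not state explicitly):

```python
current_index = 31.6
growth = 1.16  # 116% growth since November 2023

# Implied November 2023 baseline: current = baseline * (1 + growth)
baseline = current_index / (1 + growth)
print(round(baseline, 1))  # roughly 14.6
```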
Rising Adoption Brings Productivity and Risks
The survey, which gathered responses from 2,183 Canadian employees, revealed that as generative AI usage grows in frequency, users are increasingly engaging in risky behaviors, such as inputting sensitive company data.
Key findings include:
24% of users have entered proprietary company data (e.g., HR, supply chain) into public generative AI platforms, up from 16% in 2023.
19% have shared private financial data in such tools, up from 12%.
“While it’s encouraging to see employees embracing generative AI for its productivity benefits, organizations must act quickly to prevent inadvertent exposure of confidential data,” said Lewis Curley, Partner, People and Change practice at KPMG in Canada. “Without clear guidelines and training, employees may unknowingly compromise sensitive information.”
Employer Policies Lagging Behind
Although 51% of employees reported their employers encourage generative AI use and integrate it into project workflows, 37% were unaware of any employer-imposed controls over AI usage.
“It’s critical for organizations to establish clear policies and communicate expectations effectively,” added Curley.
Common AI Use Cases and Strategic Implementation
The most frequent uses of generative AI in the workplace remain consistent, including idea generation (33%), research (30%), and drafting emails (26%). However, Megan Jones, Director at KPMG in Canada, emphasized that leaders need to move beyond these basic applications to unlock the full potential of AI.
“Generative AI has the power to transform businesses by enhancing decision-making, streamlining workflows, and identifying growth opportunities,” Jones said. “Organizations that limit AI use to basic tasks risk falling behind competitors.”
Few Employers Offer Comprehensive Policies
While 60% of Canadian organizations have implemented generative AI, fewer than 20% of employees report their employers have comprehensive policies in place. Many describe these policies as vague or non-existent, with some employers even discouraging AI usage.
Jones highlighted the missed opportunities for companies that fail to embrace generative AI, urging leaders to reskill employees for future roles and foster a culture of innovation.
Unlocking Productivity Benefits with Intentionality
The survey found 52% of employees save one to five hours per week using generative AI, with most (68%) reallocating this time to high-value tasks such as brainstorming. However, 22% use it for personal activities like running errands or exercising.
“Generative AI can free up valuable time, but organizations need to guide employees on how to use it effectively,” Curley said. He also cautioned against overloading workers with high-value tasks, noting that less demanding duties like data entry can provide mental breaks and reduce burnout.
Ultimately, businesses that intentionally define the role of generative AI in their operations stand to maximize its productivity potential and foster a more innovative workforce.
Gen Z Chooses GenAI Over Managers for Impartial Advice: Study
Over half of Gen Z professionals (56%) prefer consulting Generative AI (GenAI) over their managers, attributing this preference to the AI’s constant availability and perceived impartiality, according to the report The GenAI Gap: GenZ & the Modern Workplace by upGrad. Released on Monday, the study surveyed over 3,500 Gen Z professionals (born between 1997 and 2012) and 1,000 HR leaders, shedding light on how this tech-savvy generation integrates GenAI into their workflows.
Key Findings:
Widespread GenAI Usage: A notable 73% of Gen Z professionals already use GenAI in their tasks, with 72% relying on its outputs with minimal edits.
Career Opportunities: 77% view GenAI as a gateway to new career opportunities, despite 54% raising concerns about insufficient organisational guidelines.
Environmental Awareness: Gen Z respondents are three times more concerned about the ecological impact of GenAI than their organisations are.
Positive Outlook: Despite some job security concerns, 65% of Gen Z professionals feel neutral to optimistic about GenAI, reflecting their readiness to embrace AI-driven innovation.
Organisational Challenges:
Policy Gaps: Over half (54%) of respondents believe their organisation’s GenAI guidelines are inadequate, underscoring the need for clearer policies and better training.
Training Shortfalls: 52% report a lack of clarity and infrequent updates in workplace training programs for GenAI, highlighting limited upskilling opportunities.
HR Hesitation: Only 21% of HR leaders trust GenAI for compliance training, pointing to the need for AI-driven regulatory solutions.
Srikanth Iyengar, CEO of upGrad Enterprise, emphasized the importance of establishing supportive policies and targeted training to fully leverage GenAI’s potential. While Gen Z is enthusiastic about AI, organisations must act swiftly to address policy and training gaps to align with this generation’s expectations and harness the technology’s transformative capabilities.
TheOpensource.AI News
Global Leaders Converge at Inaugural Open-Source AI Summit in Abu Dhabi
The inaugural Open-Source AI Summit Abu Dhabi, organized by the Technology Innovation Institute (TII), concluded successfully, bringing together global AI leaders and industry experts to explore the future of open-source AI. Held at The St. Regis Saadiyat Island Resort, Abu Dhabi, the event featured participants from Meta, Google DeepMind, Oxford University, Tsinghua University, as well as local institutions such as the Inception Institute and TII, renowned for its globally recognized Falcon AI models.
Discussions during the summit focused on critical topics like open-source AI opportunities and challenges, accessibility and safety, ethics and governance, computing power and sustainability, and fostering international collaborations. The event came in the wake of growing global discourse on open-source AI, particularly after the Open-Source Initiative released its definition of open-source AI.
Dr. Najwa Aaraj, CEO of TII, highlighted the summit’s significance, stating, “The Open-Source AI Summit Abu Dhabi was a pivotal moment in shaping an inclusive AI future. By uniting the brightest minds globally, we encouraged cross-border collaboration to drive innovation centered on equity and safety. This effort is not just about advancing technology but ensuring its ethical and responsible use for humanity’s benefit.”
Dr. Hakim Hacid, Chief Researcher at TII’s AI Cross-Center Unit, which spearheaded the development of Falcon LLMs, added, “At TII, we are dedicated to building transparent, robust, and scalable AI systems. This summit offered an invaluable opportunity to collaborate with global AI experts, ensuring that AI development remains open, collaborative, and responsible.”
TII has been a pioneer in the open-source AI space, being among the first to release a Large Language Model (LLM) with Falcon 40B in May 2023. Falcon AI models continue to rank among the top open-source LLMs globally, as recognized by HuggingFace, a leading industry benchmark for LLM performance.
Conservative peer urges government not to limit open source AI
During a House of Lords debate on large language models and generative artificial intelligence (GenAI), Tina Stowell, chair of the Lords Communications and Digital Select Committee, emphasized the need for a UK AI strategy focused on fostering commercial opportunities, academic research, and spin-outs.
“As the government considers AI legislation, it must avoid implementing policies that hinder open-source AI development or exclude smaller, innovative players,” Stowell stated.
Focus on Scaling SMEs in AI and Creative Industries
In September, the committee initiated an inquiry into the UK’s potential for scaling up technology within the creative industries and AI. This inquiry specifically addresses barriers faced by small and medium-sized enterprises (SMEs) in these sectors.
Stowell highlighted emerging trends in AI, noting the growing consolidation of power among large tech companies and the increasing opportunities for applications built atop their platforms. She suggested this area could present significant growth potential for the UK.
The Role of Open-Source AI and Balanced Regulation
Stowell underscored the importance of open-source AI in fostering competition and economic vitality. “Open-source AI development is essential to supporting competition and safeguarding economic dynamism,” she said. However, she cautioned against overly restrictive regulations that could stifle innovation, advocating for a balanced approach that leverages the UK’s strengths.
She urged policymakers to ensure the UK carves its own path in AI regulation rather than mirroring the EU, US, or China. Stowell emphasized focusing on talent development, computing infrastructure, industry standards that encourage innovation, responsible practices, and risk mitigation while promoting diverse approaches to AI development.
Avoiding Premature Regulation
Drawing lessons from the EU, Stowell warned against rushing to regulate AI, citing complexities around liability and anti-competitive practices. She stressed the importance of a cautious and nuanced approach to regulation, ensuring it addresses the right areas without hindering innovation.
Inclusivity in Policy Development
Stowell called for greater involvement from smaller organizations in shaping AI policy. “Everyone should engage with the work of Parliamentary committees and respond to government consultations. This technology will impact us all, and broader participation will lead to better and more informed outcomes,” she said.
Notably, Stowell, along with University of Cambridge professor Neil Lawrence and Stability AI, has been shortlisted for an OpenUK award for contributions to artificial intelligence.
TheClosedsource.AI News
Orange Partners with OpenAI for Early Access to Cutting-Edge AI Models
Orange, the French telecom giant, has entered into a multi-year partnership with OpenAI in Europe, granting it access to pre-release AI models, according to Steve Jarrett, Orange’s Chief Artificial Intelligence Officer.
Why It Matters
This partnership makes Orange the first telecom operator in Europe to gain direct access to OpenAI’s models.
Key Details
"OpenAI’s models are the most popular, so it made financial sense for us to establish a direct billing relationship," Jarrett told Reuters.
The agreement gives Orange early access to pre-release versions of OpenAI’s models and the opportunity to influence their development.
These models will be hosted on secure infrastructure in Europe.
Over 50,000 Orange employees are already using OpenAI’s tools.
Broader Context
Orange also announced a collaboration with Meta and OpenAI to expand the availability of regional African languages in AI models.
Orange will provide data samples in Wolof and Pular to train Meta’s Llama language model and OpenAI’s Whisper speech recognition model, respectively.
These languages will be integrated into Orange’s customer support systems and offered to non-commercial entities such as governments, universities, and startups.
Don’t miss out on the insights driving the future of Artificial Intelligence! Join a community of researchers, developers, and AI enthusiasts to stay ahead of the curve in Generative AI. Each edition delivers exclusive updates, expert analysis, and thought-provoking discussions straight to your inbox. Subscribe today and be part of the journey toward AGI innovation.
Contact us for any paid collaborations and sponsorships.
Unlock the future of problem solving with Generative AI!

If you're a professional looking to elevate your strategic insights, enhance decision-making, and redefine problem-solving with cutting-edge technologies, the Consulting in the age of Gen AI course is your gateway. It is perfect for those ready to integrate Generative AI into their work and stay ahead of the curve.
In a world where AI is rapidly transforming industries, businesses need professionals and consultants who can navigate this evolving landscape. This learning experience arms you with the essential skills to leverage Generative AI for improving problem-solving and decision-making, and for advising clients.
Join us and gain firsthand experience of how state-of-the-art GenAI can elevate your problem-solving skills to new heights. This isn’t just learning; it’s your competitive edge in an AI-driven world.