- "Towards AGI"
- Posts
- Tether CEO Paolo Ardoino Calls for Local AI Models After OpenAI Security Breach
Tether CEO Paolo Ardoino Calls for Local AI Models After OpenAI Security Breach
Welcome to Towards AGI, your premier newsletter dedicated to the world of Artificial Intelligence. Our mission is to guide you through the evolving realm of AI with a specific focus on Generative AI. Each issue is designed to enrich your understanding and spark your curiosity about the advancements and challenges shaping the future of AI.
Whether you're deeply embedded in the AI industry or just beginning to explore its vast potential, "Towards AGI" is crafted to provide you with comprehensive insights and discussions on the most pertinent topics. From groundbreaking research to ethical considerations, our newsletter is here to keep you at the forefront of AI innovation. Join our community of AI professionals, hobbyists, and academics as we pursue the ambitious path toward Artificial General Intelligence. Let’s embark on this journey together, exploring the rich landscape of AI through expert analysis, exclusive content, and engaging discussions.
TheGen.AI News
Tether CEO Paolo Ardoino Calls for Local AI Models After OpenAI Security Breach

Paolo Ardoino, the CEO of Tether, commented on the OpenAI hack, emphasizing that protecting privacy will remain challenging until AI models are localized.
In a recent post on X, Ardoino stated, "OpenAI seems to have been hacked a while ago. Scary." He stressed the importance of locally executable AI models for safeguarding individuals' privacy and ensuring resilience and independence, noting that modern laptops and smartphones already have the computing power to fine-tune general large language models (LLMs) on a user's own data, with the resulting improvements stored locally on the device.
This approach enhances security by retaining data locally and allowing offline use, providing users with robust AI-driven experiences while maintaining full control over their information.
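To make the idea concrete, here is a minimal sketch of the pattern Ardoino describes: a small open-weights model running entirely on the user's own hardware. The library and model id below are illustrative choices on our part, not anything Tether has announced.

```python
# Minimal sketch: running a small open-weights LLM entirely on-device with
# Hugging Face transformers. The model id is an assumption; any locally
# downloadable instruction-tuned model that fits on a laptop would do.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen2-0.5B-Instruct"  # illustrative small open model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)  # weights are cached locally after the first download

prompt = "Summarize my notes from today in three bullet points."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

# All inference happens on the local CPU/GPU; neither the prompt nor the output leaves the device.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```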
Following Tether's recent announcement of its AI growth, Ardoino mentioned that the company is "actively exploring" the integration of locally executable models into its AI offerings.
Ardoino described locally executable AI models as a significant advancement for user privacy and independence: they run directly on users' devices, such as smartphones or laptops, with no need for third-party servers. His comments follow the recent hack disclosed by OpenAI, the developer of ChatGPT.
Additionally, KoinX highlighted the OpenAI hack on X, pointing out vulnerabilities inherent in centralized AI models. They added that this incident has fueled a growing demand for decentralizing AI technologies, as organizations aim to improve security and resilience against increasing cyber threats.
GenAI Innovations: India Ranks Fifth Globally, China Takes the Lead

From tech companies to startups to universities, every segment in India is eager to tap into the potential of Generative AI. However, data indicates that the country still has significant progress to make.
India has filed only 1,350 patents related to Generative AI from 2014 to 2023, placing it fifth globally after China (38,210), the US (6,276), South Korea (4,155), and Japan (3,409), according to a report by the World Intellectual Property Organization (WIPO). Despite this, India has surpassed the UK (714 patents) and Germany (708) in the number of GenAI patents published.
Chinese companies like Tencent, Ping An Insurance Group, and Baidu have published the most GenAI patent families over the past decade. Although no Indian entities are among the top 20 GenAI patent owners, notable Indian patents include a retail AI assistant solution by RN Chidakashi Technologies (Miko Robotics) and an AI tool for contract lifecycle management by Tata Consultancy Services. The report also highlights India's rapid innovation pace, with the country recording the highest annual growth rate in GenAI patent publications at an average of 56 percent. However, this is based on a small base, with India holding a 3 percent share of total GenAI patents. In contrast, China has a 50 percent growth rate on a much larger base.
Between 2014 and 2023, a total of 54,358 patent families were published in the field of GenAI, with 89 percent (48,398) still active at the end of 2023.
Advancements in AI and deep learning have led to a significant increase in GenAI patent activity. The introduction of transformer models in 2017 and popular applications like ChatGPT in 2022 have contributed to spikes in GenAI patent publications, with the share of GenAI patents among all AI patents rising from 4.2 percent in 2017 to 6.1 percent in 2023.
Over 25 percent of all GenAI patents and more than 45 percent of GenAI scientific papers were published in 2023 alone. Notably, OpenAI, despite being synonymous with GenAI, did not file any patents for its research until early 2023, likely due to its non-profit origins.
Shailendra Bhandare, Partner at Khaitan & Co, notes that the maturity of patent regimes and their thresholds for awarding GenAI patents must be considered when analyzing numbers across countries. He emphasizes the need for India to have a well-defined eligibility framework for GenAI inventions to foster AI-related innovations. Bhandare also points out that GenAI-related patenting in India has increased since 2021. WIPO data indicates that the Chinese patent regime is relatively mature compared to other countries, with over 40,000 GenAI patents filed domestically between 2014 and 2023. In comparison, the US saw over 10,700 patents filed domestically, and 98 percent of India’s 1,350 patents were filed within its own jurisdiction.
Autogon Launches Gen-AI Video Tool to Create Localized, Culturally Relevant Content

Autogon AI, a pioneer in no-code AI business solutions, has launched its innovative GenR8 Video feature. This tool enables users to create culturally tailored and relevant AI-generated videos for specific local markets and regions. With GenR8 Video, users can input a script and clone their voice and accent to capture distinct cultural nuances and tones.
The user-friendly online interface is suitable for both personal and commercial use, catering to digital creators, companies, marketing agencies, and educational institutions.
Autogon AI’s GenR8 Video is accessible on the Autogon Playground, offering both individual and enterprise pricing plans.
“We’ve developed a tool to ensure no one is excluded from the generative AI revolution,” said Obi Ebuka David, Founder and CEO of Autogon.
David also highlighted the tool's value for global users aiming to localize ads, educational videos, or personal projects.
“Autogon’s GenR8 Video allows digital creators in major economic regions, such as the US and Europe, to produce nuanced, high-quality content for local markets,” David stated. Created to reflect diverse cultural narratives in AI-generated content, the GenR8 Video feature addresses the increasing demand for inclusive and representative media. This feature is part of Autogon AI’s dedication to promoting a more inclusive digital world.
Key Features of the GenR8 Video Tool:
Personalized Audio Creation: Users can record custom audio using Autogon AI’s advanced voice cloning technology, creating unique and authentic voiceovers that resonate with various cultural contexts.
Customizable Avatars: Select an avatar or generate an image using the Autogon Text to Image service, providing a visual representation that aligns with diverse cultural backgrounds.
Flexible Video Length: The tool supports generating videos up to five minutes per image, accommodating different content needs and preferences.
Meeting the Demand for Cultural Representation
Autogon’s innovative service addresses the increasing need for AI-generated videos that reflect a wide range of cultural perspectives. The GenR8 Video feature enhances the accessibility of culturally inclusive media, ensuring authentic representation of voices from diverse backgrounds in visual content.
Currently, video generation is available in English, capturing unique accents and linguistic nuances in the audio.
Future Projections
Autogon AI anticipates that this feature will facilitate the creation of videos that embrace a wide array of cultures. The customizable voice and avatar options are specifically designed to honor and reflect diverse cultural nuances.
MIT Researchers Unveil GenSQL: A Generative AI System for Databases

A new tool simplifies the process for database users to conduct complex statistical analyses of tabular data without needing to understand the intricate details behind the scenes.
GenSQL, a generative AI system for databases, enables users to make predictions, detect anomalies, impute missing values, correct errors, or generate synthetic data with minimal effort.
For example, if GenSQL were used to analyze medical data for a patient with consistently high blood pressure, it could identify an unusually low blood pressure reading for that patient, even if it falls within the normal range for the general population.
GenSQL seamlessly integrates a tabular dataset with a generative probabilistic AI model, allowing it to account for uncertainty and adapt its decisions based on new data.
Additionally, GenSQL can generate and analyze synthetic data that resemble the real data in a database. This feature is particularly useful in situations where sharing sensitive data, like patient health records, is not permissible, or when real data are limited.
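The blood pressure example above comes down to scoring a new observation against a patient-specific distribution rather than the population one. The toy sketch below, plain Python with made-up numbers rather than GenSQL's actual syntax or models, illustrates why a reading that is "normal" for the population can still be flagged for this patient.

```python
# Illustrative sketch (not GenSQL's API): conditional anomaly detection of the
# kind described above, using a simple per-patient Gaussian model with assumed numbers.
import math

def gaussian_logpdf(x, mean, std):
    return -0.5 * math.log(2 * math.pi * std**2) - (x - mean) ** 2 / (2 * std**2)

population = {"mean": 120.0, "std": 15.0}          # population systolic blood pressure
patient_history = [152, 148, 155, 150, 149]        # this patient's consistently high readings

p_mean = sum(patient_history) / len(patient_history)
p_std = (sum((x - p_mean) ** 2 for x in patient_history) / len(patient_history)) ** 0.5

new_reading = 118  # unremarkable for the population...

pop_score = gaussian_logpdf(new_reading, population["mean"], population["std"])
patient_score = gaussian_logpdf(new_reading, p_mean, max(p_std, 1.0))

# ...but highly unlikely under this patient's own distribution, so it is flagged.
print(f"log-likelihood under population model: {pop_score:.2f}")
print(f"log-likelihood under patient model:    {patient_score:.2f}")
print("anomalous for this patient" if patient_score < pop_score - 5 else "unremarkable")
```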
This innovative tool is built on top of SQL, a programming language for database creation and manipulation that has been widely used since its introduction in the late 1970s.
“Historically, SQL demonstrated to the business world what computers could achieve. Users didn't need to write custom programs; they simply had to query a database using a high-level language. We believe that as we transition from merely querying data to interrogating models and data, we will need a similar language that instructs people on the coherent questions they can pose to a computer with a probabilistic model of the data,” says Vikash Mansinghka ’05, MEng ’09, PhD ’09, senior author of a paper introducing GenSQL and a principal research scientist leading the Probabilistic Computing Project at MIT's Department of Brain and Cognitive Sciences.
In comparisons with other popular AI-based data analysis methods, GenSQL not only proved to be faster but also yielded more accurate results. Moreover, the probabilistic models used by GenSQL are explainable, allowing users to read and modify them.
TheOpen.AI News
Alibaba’s LLM Secures Top Spot in Hugging Face’s AI Model Rankings

Open-source AI developer platform Hugging Face has released its latest rankings for large language models (LLMs), using various metrics to determine the top 10 models.
According to a report by the South China Morning Post (SCMP), the rankings focused on open-source LLMs, excluding those developed under proprietary conditions. As a result, mainstream models like OpenAI’s ChatGPT and Google’s Gemini were notably absent from the list.
A notable trend in Hugging Face’s rankings is the dominance of Chinese LLMs over their North American and European counterparts, with Alibaba’s AI offerings receiving significant praise.
Alibaba’s Qwen-72B-Instruct LLM secured the top position with a score of 43.02, following extensive testing across multiple benchmarks. Released in 2023, this model, part of the Tongyi Qianwen series, narrowed the gap with industry leaders by adopting an open-source approach and supporting enterprises in deploying their own AI products.
“Qwen 72B (Instruct) is the leader, and Chinese open models are generally dominating,” said Hugging Face CEO Clement Delangue.
The rankings also highlighted the success of smaller Tongyi Qianwen models, which placed fourth and tenth for their strong performances in math, logic, and long-range problem-solving.
Meta’s Llama came in second with a cumulative score of 36.67, nearly seven points behind Qwen 72B-Instruct, despite entering the space later. A closer examination shows that Llama 3 is barely holding onto second place, with third and fourth places scoring 35.59 and 34.35, respectively.
Yi-1.5, another Chinese LLM, ranked seventh with a score of 28.11 across six metrics, while Microsoft’s small-language model Phi-3 also made the list.
“There are signs that AI developers are overly focusing on main evaluations at the expense of other model performances,” Delangue commented on LinkedIn. “Bigger is not always better.”
The Rise of Chinese LLMs
While China lags in hardware, its AI developers are producing impressive models to compete with peers globally. They are driven by the need to tailor AI models to the nuances of the Chinese language and prevent foreign products from dominating the local market.
Alibaba has shifted its focus from quantum computing and emerging technologies to AI, a move that has proved beneficial. As OpenAI plans to limit access to its API in China and other restricted areas, several Chinese firms are racing to offer similar services to AI developers to expand their market share.
Peter Thiel Critiques AI Profit Concentration, Founders Fund Invests in Open-Source Solution

Peter Thiel, co-founder of Palantir Technologies and a well-known venture capitalist, has stirred up discussions in the tech industry with his recent remarks on artificial intelligence (AI) and his firm’s substantial investment in a new open-source AI platform.
Speaking at the Aspen Ideas Festival, Thiel compared the current surge in AI to the dot-com bubble of the late 1990s.
He highlighted a key feature of the AI sector today: the overwhelming dominance of a single company in profitability.
"At present, 80-85% of the revenue in AI is being generated by one company, Nvidia," Thiel noted, calling this concentration "very peculiar."
Thiel’s comments come amid Nvidia’s market capitalization soaring past $3 trillion, making it one of the world’s most valuable companies. The chip giant’s dominance in AI hardware has fueled its unprecedented growth.
However, Thiel’s concerns about the concentration in the AI industry extend beyond hardware.
His venture capital firm, Founders Fund, has taken a significant step to tackle the issue of AI development being controlled by a few tech giants.
The fund has co-led an $85 million seed round investment in Sentient, an open-source AI development platform.
Sentient aims to democratize AI development by encouraging community contributions to AI models. This approach contrasts sharply with the closed systems of companies like OpenAI, which restrict user access to their underlying models.
Sandeep Nailwal, co-founder of Polygon and a core contributor to Sentient, outlined the project’s mission:
"By creating an open platform for AGI development, we aim to ensure that the benefits of AI are shared fairly and that its development aligns with the interests of humanity as a whole."
The platform will be built on Polygon, representing an extension of the Ethereum scaling solution into the AI space. This integration of blockchain technology with AI development could potentially address some of the incentive issues in open-source AI, where contributors often go unrewarded for their efforts.
Sentient plans to launch "campaigns" for contributors, with specific metrics for evaluating contributions and rewards. These rewards may include co-ownership of the AI models created and future rewards based on their usage. While the project currently has no plans for a token, this could change as the community grows.
Vanderbilt University Launches Open Source Gen AI Platform for Institutions

Vanderbilt University has made its Generative AI platform, Amplify GenAI, open-source under the MIT license. “We operate what we consider the most advanced enterprise, open-source Generative AI platform, on par with or superior to leading commercial options,” said Jules White, Senior Advisor to the Chancellor on Generative AI and Professor of Computer Science. “Assistants and agents represent the future.”
Amplify GenAI is a model-independent solution that enables the creation of assistants capable of working with documents and PowerPoint files and of using retrieval-augmented generation (RAG). It operates privately within the institution’s AWS account.
The cost per user (tokens + AWS bill) for unlimited access to any model from OpenAI, Anthropic, or Mistral is approximately $3 per month.
“It’s unclear which model is the best. Our goal was to be model- and vendor-independent, allowing us to always utilize the best-performing or most cost-effective models,” Jules White explained. “Competition among model vendors has led to decreasing costs and increasing reasoning capabilities.”
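A rough sketch of what model- and vendor-independence can look like in practice appears below. It is our own simplification, not Amplify GenAI's actual code: assistants call one interface, provider-specific adapters are injected behind it, and a routing policy (here, cheapest first) picks the backend.

```python
# Illustrative sketch of model- and vendor-independence (our simplification,
# not Amplify GenAI's code): assistants call a single interface, and a router
# chooses a backend by cost or capability. Vendor calls are stubbed out.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelBackend:
    name: str
    vendor: str                      # e.g. "openai", "anthropic", "mistral"
    cost_per_1k_tokens: float        # assumed pricing, used only for routing
    complete: Callable[[str], str]   # vendor-specific call, injected at setup

def cheapest(backends: list[ModelBackend]) -> ModelBackend:
    return min(backends, key=lambda b: b.cost_per_1k_tokens)

# Placeholder model names and stub completions stand in for real vendor SDK calls.
backends = [
    ModelBackend("gpt-x", "openai", 0.50, lambda p: f"[openai] {p}"),
    ModelBackend("claude-x", "anthropic", 0.40, lambda p: f"[anthropic] {p}"),
    ModelBackend("mistral-x", "mistral", 0.25, lambda p: f"[mistral] {p}"),
]

def ask(prompt: str) -> str:
    backend = cheapest(backends)     # swap the routing policy without touching assistants
    return backend.complete(prompt)

print(ask("Summarize this document for the provost."))
```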
TheClosed.AI News
OpenAI Startup Fund Supports AI Healthcare Venture with Arianna Huffington

Huffington Post founder Arianna Huffington and OpenAI CEO Sam Altman are supporting a new initiative, Thrive AI Health, which aims to develop AI-powered assistant technology to promote healthier lifestyles.
Backed by Huffington’s mental wellness company, Thrive Global, and the OpenAI Startup Fund, an early-stage venture fund associated with OpenAI, Thrive AI Health plans to create an "AI health coach" to provide personalized advice on sleep, nutrition, fitness, stress management, and "connection," according to a press release issued on Monday.
DeCarlos Love, who previously led health and fitness projects at Google’s Fitbit subsidiary, particularly on the Pixel Watch wearable, has been appointed CEO. Thrive AI Health’s strategic investors include the Alice L. Walton Foundation, founded by Walmart heiress Alice Walton, and the Alice L. Walton School of Medicine is one of its initial health partners.
The amount of capital invested by Thrive AI Health’s backers has not been disclosed.
According to Huffington and Altman in a Time op-ed, Thrive AI Health’s goal is to develop an AI health “coach” trained on scientific research and medical data, utilizing a forthcoming health data platform and partnerships with institutions like Stanford Medicine. This AI assistant, available via a smartphone app and Thrive’s enterprise products, will learn from users’ behaviors and offer real-time, personalized health-related suggestions.
“Most health recommendations currently are generic,” Huffington and Altman write. “The AI health coach will enable very specific recommendations tailored to each individual: replace your third afternoon soda with water and lemon; take a 10-minute walk with your child after picking them up from school at 3:15 p.m.; begin your wind-down routine at 10 p.m. since you need to get up at 6 a.m. for a flight.”
OpenAI's CriticGPT Excels in Identifying ChatGPT's Code Errors

OpenAI has recently published a paper introducing CriticGPT, a fine-tuned version of GPT-4 designed to critique code generated by ChatGPT. CriticGPT has been shown to catch more bugs and produce superior critiques compared to human evaluators. OpenAI plans to leverage CriticGPT to enhance future versions of their models.
During the initial development of ChatGPT, OpenAI utilized human "AI trainers" to rate the model's outputs, creating a dataset for fine-tuning using reinforcement learning from human feedback (RLHF). As AI models advance to perform some tasks at the same level as human experts, it becomes challenging for human judges to evaluate their output. CriticGPT is part of OpenAI's scalable oversight initiative aimed at addressing this issue. The initial focus was on improving ChatGPT's code-generating capabilities. Researchers employed CriticGPT to generate code critiques and also engaged qualified human coders for the same task. Evaluations revealed that AI trainers preferred CriticGPT's critiques 80% of the time, indicating that CriticGPT could be a valuable source for RLHF training data.
OpenAI emphasized the importance of scalable oversight methods to help humans accurately evaluate model output, noting:
"The need for scalable oversight, broadly construed as methods that can help humans to correctly evaluate model output, is stronger than ever. Whether or not RLHF maintains its dominant status as the primary means by which LLMs are post-trained into useful assistants, we will still need to answer the question of whether particular model outputs are trustworthy. Here we take a very direct approach: training models that help humans to evaluate models....It is...essential to find scalable methods that ensure that we reward the right behaviors in our AI systems even as they become much smarter than us. We find LLM critics to be a promising start."
Interestingly, CriticGPT itself was fine-tuned using RLHF. For this, the training data included buggy code as input and human-generated critiques or explanations of the bugs as the desired output. The buggy code was created by having ChatGPT write code, followed by human contractors inserting bugs and writing critiques.
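To make that setup concrete, here is one plausible shape for such a training record; the field names, example code, and critique below are our own illustration, not OpenAI's actual data format.

```python
# Rough sketch of the kind of training record described above (our reading of the
# setup, not OpenAI's schema): tampered code as input, a human-written critique
# as the preferred output for RLHF fine-tuning.
from dataclasses import dataclass

@dataclass
class CritiqueExample:
    original_code: str   # code initially written by the model
    tampered_code: str   # the same code after a contractor inserts a bug
    critique: str        # the contractor's explanation of the inserted bug

example = CritiqueExample(
    original_code="def mean(xs):\n    return sum(xs) / len(xs)\n",
    tampered_code="def mean(xs):\n    return sum(xs) / (len(xs) - 1)\n",  # inserted off-by-one bug
    critique="The divisor should be len(xs); dividing by len(xs) - 1 biases the result "
             "and raises ZeroDivisionError for single-element lists.",
)

# During fine-tuning, tampered_code serves as the prompt and critique as the preferred response.
print(example.critique)
```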
To evaluate CriticGPT, OpenAI had human judges rank various critiques side-by-side, comparing outputs from CriticGPT, baseline ChatGPT, human critics, and human critics assisted by CriticGPT ("Human+CriticGPT"). Judges preferred CriticGPT's critiques over those of ChatGPT and human critics. Additionally, the output from Human+CriticGPT teams was found to be "substantially more comprehensive" than that of humans alone, though it included more "nitpicks."
In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.
Highlights from GenAI, OpenAI, and ClosedAI: Dive into the latest projects and innovations from the leading organizations behind some of the most advanced AI models in open-source, closed-sourced AI.
Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.
Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realization of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!