
A thought leadership platform to help the world navigate towards Artificial General Intelligence (AGI). We are committed to navigating the path to AGI by building a community of innovators, thinkers, and AI enthusiasts.

Introducing the GEN Matrix: Your Essential Guide to Generative AI Trailblazers!

Dive into the forefront of Generative AI with the GEN Matrix—your ultimate resource for discovering the innovators, startups, and organizations leading the AI revolution. Our platform features three categories spotlighting:

  • Organizations: Early adopters advancing GenAI in production.

  • Startups: Pioneers across diverse GenAI layers (chips, infrastructure, applications, etc.).

  • Leaders: Key figures driving GenAI innovation and adoption.

Know someone making strides in GenAI? Nominate them to be featured in the GEN Matrix! Whether you're a business seeking AI solutions or a developer looking for tools, explore GEN Matrix to stay at the forefront of AI excellence.

Near Protocol Unveils Plan for World’s Largest Open-Source 1.4 Trillion Parameter AI Model

Near Protocol has announced an ambitious initiative to develop the world’s largest open-source AI model, revealed on the opening day of its Redacted conference in Bangkok, Thailand. This model, projected to have 1.4 trillion parameters, would be 3.5 times larger than Meta’s current open-source Llama model.

The project will be built through a competitive, crowdsourced research and development process hosted on Near’s new AI Research hub. Thousands of contributors will participate, beginning with a smaller 500-million-parameter model available for training from November 10.

Near Protocol’s Vision for an AI Model

The initiative will expand through a series of seven progressively larger and more complex models, selecting only top contributors for each subsequent stage. The models will be monetized, while contributor rewards and privacy will be maintained through encrypted Trusted Execution Environments, encouraging continuous updates as the technology evolves.

Near Protocol co-founder Illia Polosukhin shared at the conference that funding for the model’s training, estimated at $160 million, will be raised through token sales. “Tokenholders will be repaid from the model’s inferences, creating a self-sustaining loop to fund future models,” Polosukhin explained.
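The funding loop Polosukhin describes is easy to state as arithmetic. In this back-of-the-envelope sketch, only the $160 million training estimate comes from the article; the revenue figure is an assumption chosen purely for illustration.

```python
# Token-sale proceeds cover training; inference revenue repays token holders.
training_cost = 160_000_000             # USD, from the article
monthly_inference_revenue = 4_000_000   # USD, assumed for illustration
months_to_repay = training_cost / monthly_inference_revenue
print(months_to_repay)  # → 40.0
```

Under these assumed numbers, repayment would take just over three years, which is why the model's monetization terms matter as much as the raise itself.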

Near’s capabilities and team, including Polosukhin, who co-authored the original transformer research that led to ChatGPT, and co-founder Alex Skidanov, previously with OpenAI, make it one of the few crypto projects equipped for such an endeavor. Skidanov, now head of Near AI, acknowledged the challenges ahead, particularly the need for “tens of thousands of GPUs” or a decentralized network for large-scale model training.

Decentralized AI to Protect Privacy

Skidanov noted that using a decentralized network for model training would require new technological solutions, as existing distributed training methods rely on extremely fast interconnects. However, recent research from DeepMind suggests such decentralization may be achievable.

Polosukhin emphasized the importance of decentralized AI, explaining that if AI were controlled by a single company, it would undermine Web3 principles of decentralization. “If AI is centralized, we’ll follow whatever that company dictates, limiting our freedom,” he said. Conference speaker Edward Snowden reinforced this, warning that centralized AI could lead to a global surveillance state. He underscored the importance of civil rights on the internet, urging the creation of independent systems to preserve digital sovereignty.

Paytronix Introduces GenAI Assistant to Boost Restaurant Loyalty and Engagement

Paytronix has introduced a generative AI assistant to enhance its guest engagement platform, tailored for restaurants and convenience stores. This new tool, Paytronix Assistant, is designed to support loyalty professionals by answering questions using data from the company’s account/portal and offering links to related articles and best practices, according to a Paytronix press release on Monday (Nov. 11).

The AI assistant can quickly generate campaign ideas, provide visit and spend data, offer reporting tips, and answer general questions about platform navigation or troubleshooting, the release says.

With natural language queries like “How many marketable members does our loyalty program have?” or “What is the top-selling online ordering menu item?” the assistant delivers relevant insights.
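Paytronix has not published how its assistant works; a toy sketch of routing such natural-language questions to stored account metrics might look like the following, where every name and figure is invented for illustration.

```python
# Hypothetical keyword routing from a question to a metric; a production
# assistant would use an LLM plus retrieval, not string matching.
METRICS = {
    "marketable members": 12_480,
    "top-selling online ordering menu item": "Buffalo Chicken Pizza",
}

def answer(question: str):
    q = question.lower()
    for key, value in METRICS.items():
        if key in q:                       # naive keyword match
            return value
    return "I couldn't find that metric."

print(answer("How many marketable members does our loyalty program have?"))
# → 12480
```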

“This generative AI empowers users to ask questions for analyzing the performance of loyalty programs, mobile campaigns, and gift card initiatives,” said Paytronix Product Manager Aubrey Giasson. “It provides brands with insights to guide their next steps and establish tailored best practices.”

During beta testing, Paytronix customers reported that the AI assistant generates data and ideas within seconds.

“It enables me to take control of campaign development,” said Petro 49’s Director of Business Development, Trevor Carbaugh. “If I need support, inspiration, or quick data retrieval, I just ask the Assistant.”

Christine Cocce, Director of Marketing at Legal Sea Foods, added: “The tool is intuitive and instantly delivers essential data summaries, insights, and loyalty program enhancement suggestions.” The AI assistant’s launch follows a recent announcement that Access Group, a business management software provider, agreed to acquire Paytronix.

Paytronix CEO Jeff Hindman noted in a Nov. 1 press release, “Joining Access will expand the software options available to our clients, enhancing our value and addressing daily business challenges.” Restaurants are increasingly experimenting with technology to improve efficiency while preserving a personal touch, as highlighted in the PYMNTS and Paytronix report, “Digital Divide: Technology, the Metaverse and the Future of Dining Out.”

Rabbitt AI Unveils Generative AI Tools to Transform Defense and Security

Indian AI startup Rabbitt AI has introduced a suite of generative AI (GenAI) tools designed to transform military operations by reducing human presence in high-risk areas.

At the core of this initiative is a focus on minimizing human exposure to danger. Rabbitt AI’s GenAI-powered drones, autonomous vehicles, and surveillance systems enable real-time threat detection and response, offering a safer, AI-driven alternative to traditional security methods.

Using a variety of sensor data, including infrared, radar, audio, and visual feeds, Rabbitt’s models identify unauthorized movements, environmental anomalies, and unusual activities—all without human intervention.
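Rabbitt has not published its models, but the core idea of flagging out-of-distribution readings across fused sensor channels can be sketched with a simple statistical baseline; in a real system a deep network would replace the z-score rule, and all data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fused feature vectors: each row concatenates summary
# statistics from infrared, radar, audio, and visual channels (synthetic).
normal = rng.normal(0.0, 1.0, size=(500, 8))   # baseline activity
intruder = rng.normal(6.0, 1.0, size=(8,))     # out-of-distribution event

mu, sigma = normal.mean(axis=0), normal.std(axis=0)

def is_anomalous(x, threshold: float = 3.0) -> bool:
    """Flag a reading whose mean z-score across channels exceeds threshold."""
    z = np.abs((x - mu) / sigma)
    return bool(z.mean() > threshold)

print(is_anomalous(normal[0]))   # typical reading: not flagged
print(is_anomalous(intruder))    # far-from-baseline reading: flagged
```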

“We are not far from a future where AI-equipped systems can dominate battlefields,” remarked Harneet Singh, Rabbitt AI’s CEO, who previously served as an AI consultant to the South Korean Navy.

“Our mission is to safeguard lives at the borders by developing autonomous, situationally aware AI systems that respond to threats by analyzing sensor data in real-time,” Singh added.

Singh emphasized the autonomy of this technology, stating, “With AI-powered systems, we can provide uninterrupted, unbiased monitoring that ensures both comprehensive coverage and operational efficiency while also lowering costs.”

Beyond reducing personnel risk, Rabbitt’s GenAI tools streamline resources by automating many surveillance tasks, decreasing dependence on human labor. Singh notes that this approach “not only cuts costs but boosts accuracy, freeing military staff to focus on strategic priorities.” The AI’s detection capabilities also reduce the need for costly corrective actions.

Rabbitt AI is also pushing forward with “human-machine teaming” by pairing GenAI with unmanned drones and ground vehicles to improve adaptability in challenging terrains. Singh explained, “This technology provides real-time situational awareness, enabling command centers to receive immediate insights without the delay of human reporting, even in complex environments like urban areas or mountainous regions.”

An IIT Delhi alumnus honored by DRDO and Indian military officials, Singh emphasized Rabbitt AI’s larger vision for defense: “Our work goes beyond AI model development. We’re creating a defense ecosystem where AI serves as a force multiplier, enhancing soldiers’ capabilities, improving situational awareness, and reducing decision-making time.”

Founded by Singh, Rabbitt.ai specializes in GenAI solutions, including custom language model development, retrieval-augmented generation fine-tuning, and MLOps integration. Recently, the company secured $2.1 million in funding from TC Group of Companies and investors connected to NVIDIA and Meta.

The company has also appointed Asem Rostom as its Global Managing Director to lead expansion efforts in the MENA and Europe regions. Rostom was previously the managing director at Simplilearn.

As part of its expansion, Rabbitt AI launched Rabbitt Learning, a division dedicated to improving educational access and workforce readiness in the MENA region. A new office in Riyadh, Saudi Arabia, will support the growing demand for GenAI training programs and digital transformation projects across Gulf countries.

Hong Kong University Moves GenAI Healthcare Models to Clinical Trials

The Hong Kong University of Science and Technology (HKUST) has unveiled four new large language models (LLMs) designed for healthcare applications.

Overview

These AI-based tools were developed at HKUST’s AI supercomputing facility, SuperPOD, and include the following models:

  • MOME: An AI model that detects breast cancer pathologies in MRI scans.

  • mSTAR: A pathology assistant that analyzes whole slide images.

  • MedMR: A multimodal chatbot designed to handle medical inquiries.

  • XAIM: An explainable AI model providing visual and textual explanations for its analyses.

The HKUST research team shared further insights with Healthcare IT News on these models’ capabilities and outcomes.

The breast cancer model, MOME, has shown 87% diagnostic accuracy in tests across multiple centers and can also predict patient responses to neoadjuvant chemotherapy.

According to the team, mSTAR addresses various pathological and clinical needs, including cancer subtyping, metastasis detection, molecular predictions, survival analysis, and report generation.

MedMR, the medical chatbot, can answer questions, generate reports, and offer initial diagnoses based on medical images, achieving 93% accuracy in identifying tumors versus non-tumors using the PCam200 dataset from Patch Camelyon.

Additionally, XAIM achieved 98.67% accuracy in diagnosing skin lesions on Portugal’s PH² dataset.

The team is now discussing testing and implementation of these models with multiple hospitals in Hong Kong. “Collaborating with hospitals for extensive, multicenter validation will be essential to confirm the models’ generalizability and reliability before clinical use,” they added.

Broader Trends

Earlier this year, Hong Kong’s Centre for Artificial Intelligence and Robotics, under the Chinese Academy of Sciences, launched CARES, a chatbot for doctors built on Meta's Llama 2 LLM and currently being trialed in seven hospitals in Beijing.

Other healthcare systems in Asia are advancing similar generative AI initiatives. In October, Singapore’s Ministry of Health announced a new funding initiative for a national AI project expected to integrate generative AI across the public health system by 2025. Additionally, Singapore-based Docquity is working with community health centers in West Java, Indonesia, to implement the TehAI virtual assistant, supporting health workers in diagnosing conditions such as tuberculosis, stunting, and hypertension.

AlphaFold3 Goes Open Source, Enabling Broader Access for Protein Prediction Research

AlphaFold3 is now publicly available, six months after Google DeepMind first published the protein-structure prediction model without releasing its code. On November 11, the London-based company announced that scientists can download and use AlphaFold3 for non-commercial purposes.

“We’re eager to see how people will apply this technology,” said John Jumper, head of the AlphaFold team at DeepMind, who, along with CEO Demis Hassabis, recently received a share of the 2024 Chemistry Nobel Prize for their contributions to AlphaFold.

Significant Upgrade for Drug Discovery

AlphaFold3 brings a new capability: modeling proteins alongside other molecules. Previously, scientists could only access AlphaFold3 through a web server, which restricted the types and volume of predictions they could run; in particular, it prevented them from examining how proteins behave in drug interactions. With the code now released, scientists can run such predictions independently.

Initially, DeepMind defended limiting AlphaFold3 to a web server, aiming to balance open research access with commercial interests. Isomorphic Labs, a DeepMind spinoff in London, has been leveraging AlphaFold3 in drug discovery. However, withholding the code and model weights sparked criticism from researchers, who argued it hindered reproducibility. DeepMind then committed to releasing an open-source version within six months.

While the AlphaFold3 code is now available for download, only academic scientists can request access to the training weights for the model.

Expanding Accessibility

DeepMind now faces competition from other entities developing AlphaFold3-inspired models based on the original paper’s pseudocode. In recent months, Chinese tech companies Baidu and ByteDance, as well as San Francisco startup Chai Discovery, have released their own versions of AlphaFold3.

A notable limitation is that none of these models, including AlphaFold3, is licensed for commercial uses like drug discovery. However, Chai Discovery’s model, Chai-1, offers drug discovery applications via a web interface. Another San Francisco company, Ligo Biosciences, has released a version of AlphaFold3 with fewer restrictions, though it currently lacks full functionality, such as drug and non-protein modeling.

Other groups are developing unrestricted versions of AlphaFold3. Columbia University’s Mohammed AlQuraishi plans to launch OpenFold3 by year-end, a model that drug companies can adapt with proprietary data to enhance its performance.

Importance of Openness

The last year has seen a surge in biological AI models with varying degrees of openness. Anthony Gitter, a computational biologist at the University of Wisconsin-Madison, supports industry involvement but expects transparency, especially when claims are published. “If DeepMind makes scientific claims about AlphaFold3, we expect full disclosure of methods and models for verification,” he said.

The emergence of several AlphaFold3 replicas demonstrates its reproducibility, even without open-source code, according to Pushmeet Kohli, DeepMind’s head of AI for science. He hopes for greater dialogue on publication standards as academic and corporate researchers increasingly collaborate in this field.

AlphaFold2’s open-source release sparked significant innovation, with recent achievements including the design of new proteins targeting cancer. Jumper is excited for similar breakthroughs with AlphaFold3, noting, “People will use it in unexpected ways; some will succeed, and some will fail.”

OpenAI and Rivals Explore New Approaches as AI Scaling Hits a Wall

AI companies, including OpenAI, are navigating unexpected delays and obstacles in scaling large language models, turning to training methods that allow algorithms to "think" in ways more akin to human cognition. According to a dozen scientists, researchers, and investors interviewed by Reuters, these techniques, employed in OpenAI’s latest "o1" model, could transform the AI landscape, affecting the types of resources—like energy and specialized chips—that AI companies rely on. OpenAI declined to comment for the story.

Since the release of ChatGPT two years ago, technology companies have largely adhered to the idea that “scaling up” with more data and compute power leads to stronger AI models. But several leading AI scientists are now questioning this “bigger is better” approach. Ilya Sutskever, co-founder of Safe Superintelligence (SSI) and a co-founder of OpenAI, recently noted that results from simply scaling pre-training—the phase where models learn from vast, unlabeled datasets—have begun to plateau. Sutskever, previously an advocate for scaling, left OpenAI this year to pursue alternative approaches to pre-training at SSI.

At major AI labs, researchers have encountered setbacks in efforts to surpass OpenAI’s GPT-4 model. Sources familiar with private operations say the high costs of training—often tens of millions of dollars—and the high likelihood of hardware issues have slowed progress. These large models require enormous amounts of data, which are becoming harder to source, and their energy needs are straining available power supplies.

In response, researchers are exploring “test-time compute,” a method that enhances AI models during the “inference” phase—when models are in use. For instance, rather than selecting a single answer instantly, a model can generate and evaluate multiple options, choosing the most effective outcome. This allows AI models to allocate more processing power to challenging tasks, like solving math or coding problems, with greater efficiency.
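The generate-and-evaluate idea can be sketched as best-of-n sampling. In this minimal illustration, `generate` and `score` are stand-ins for a language model and a learned verifier; neither is an OpenAI API, and the heuristic is purely for demonstration.

```python
# Minimal best-of-n sketch: sample several candidate answers and keep the
# highest-scoring one. Real systems use a trained verifier or reward model.

def score(answer: str) -> float:
    # Toy heuristic: prefer longer, more worked-out answers.
    return float(len(answer.split()))

def best_of_n(generate, prompt: str, n: int = 8) -> str:
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

# Toy stand-in for a model: cycles through canned completions.
answers = iter(["42", "It is 42 because 6 * 7 = 42", "maybe 41"])
result = best_of_n(lambda prompt: next(answers), "What is 6 * 7?", n=3)
print(result)  # → It is 42 because 6 * 7 = 42
```

The extra compute is spent at inference time rather than in training, which is exactly the trade the article describes.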

As Noam Brown, an OpenAI researcher involved in developing the o1 model, noted at the TED AI conference, a 20-second deliberation during a poker game gave the model the same performance boost as increasing its training scale by 100,000 times. OpenAI has adopted this approach in the o1 model, previously known as "Q*" and "Strawberry," allowing it to address problems through multi-step reasoning and incorporating feedback from experts. OpenAI plans to apply this approach to even larger models.

Meanwhile, top AI labs like Anthropic, xAI, and Google DeepMind are also developing similar methods, according to sources. Kevin Weil, OpenAI's chief product officer, commented in October that "by the time others catch up, we'll be three steps ahead."

The potential shift could reshape the demand landscape for AI hardware, currently dominated by Nvidia’s AI chips. Venture capitalists who have heavily invested in AI development, such as Sequoia Capital and Andreessen Horowitz, are taking note of this change and assessing its impact. Sonya Huang, a partner at Sequoia Capital, told Reuters that this shift may move the industry from large pre-training clusters to "inference clouds," which are distributed, cloud-based servers designed for inference tasks.

Nvidia, which recently surpassed Apple as the world’s most valuable company, may face competition in the inference chip market, although demand for its inference chips remains high. Jensen Huang, Nvidia’s CEO, recently highlighted the growing importance of inference-based scaling at a conference in India, calling it “the second scaling law” for AI.

OpenAI and AI Labs Embrace Human-Like Training Techniques Amid Scaling Challenges

AI companies, including OpenAI, are reportedly shifting away from the traditional method of scaling large language models by simply increasing data and compute power. Instead, they are now developing new training methods that aim to mimic more human-like thinking processes. This change comes as AI firms encounter delays and mounting challenges in the pursuit of larger models.

Researchers have noted that despite the surge in data and computational resources, performance improvements have begun to taper off. The high costs of training large models—often reaching tens of millions of dollars—are further complicated by technical breakdowns and power shortages. Additionally, the demand for data now exceeds what is easily accessible.

To address these issues, companies are exploring a technique called "test-time compute," which enhances AI models during the inference phase, allowing them to consider multiple options in real time before choosing the best result. OpenAI’s new o1 model leverages this technique, using multi-step reasoning and expert feedback to boost performance. Other leading AI labs, including Anthropic, xAI, and Google DeepMind, are also pursuing similar strategies.

This shift from large-scale pre-training to inference clouds, which operate through distributed cloud-based servers, could impact the AI hardware market. Nvidia, which dominates the market with its training chips, may face competition as the emphasis moves toward inference processing, potentially opening the door for new players. AI investors are monitoring these changes closely, as they could reshape the hardware requirements for the industry.

Don’t miss out on the insights driving the future of Artificial Intelligence! Join a community of researchers, developers, and AI enthusiasts to stay ahead of the curve in Generative AI. Each edition delivers exclusive updates, expert analysis, and thought-provoking discussions straight to your inbox. Subscribe today and be part of the journey toward AGI innovation.

Contact us for any paid collaborations and sponsorships.

Unlock the future of problem solving with Generative AI!

If you're a professional looking to elevate your strategic insights, enhance decision-making, and redefine problem-solving with cutting-edge technologies, the Consulting in the age of Gen AI course is your gateway. It is perfect for those ready to integrate Generative AI into their work and stay ahead of the curve.

In a world where AI is rapidly transforming industries, businesses need professionals and consultants who can navigate this evolving landscape. This learning experience arms you with the essential skills to leverage Generative AI for improving problem-solving, decision-making, or advising clients.

Join us and gain firsthand experience of how state-of-the-art GenAI can elevate your problem-solving to new heights. This isn’t just learning; it’s your competitive edge in an AI-driven world.