• "Towards AGI"
  • Posts
  • MIT Scientists Use Open-Source AI to Untangle Complex Brain Networks

MIT Scientists Use Open-Source AI to Untangle Complex Brain Networks

Welcome to Towards AGI, your premier newsletter dedicated to the world of Artificial Intelligence. Our mission is to guide you through the evolving realm of AI with a specific focus on Generative AI. Each issue is designed to enrich your understanding and spark your curiosity about the advancements and challenges shaping the future of AI.

Whether you're deeply embedded in the AI industry or just beginning to explore its vast potential, "Towards AGI" is crafted to provide you with comprehensive insights and discussions on the most pertinent topics. From groundbreaking research to ethical considerations, our newsletter is here to keep you at the forefront of AI innovation. Join our community of AI professionals, hobbyists, and academics as we pursue the ambitious path toward Artificial General Intelligence. Let’s embark on this journey together, exploring the rich landscape of AI through expert analysis, exclusive content, and engaging discussions.

Poll: Is "Know Your Inference (KYI)" the Next Big Challenge in AI?

As AI becomes more integrated into our lives and businesses, managing its outputs for accuracy, ethics, and sustainability is crucial. We're curious to know your thoughts!

Would you invest in a solution that ensures your AI-driven decisions are trustworthy, cost-efficient, and climate-conscious?

Login or Subscribe to participate in polls.

Which aspect of AI inferences is most important to you?

Login or Subscribe to participate in polls.

How soon do you think businesses will need to adopt "Know Your Inference (KYI)" practices?

Login or Subscribe to participate in polls.

MIT Scientists Use Open-Source AI to Untangle Complex Brain Networks

Big data and artificial intelligence are a natural fit, as machine learning can help organize vast amounts of data. Handling massive datasets is often time-consuming due to their lack of uniformity, and organizing the data can take much longer than analyzing it. This challenge is even greater in bioinformatics, where data from living systems is notoriously complex—and it's even more daunting when dealing with the brain. The connectome, a detailed map of connections between brain cells, is a labyrinth of countless links between numerous cells—a true "target-rich environment." 

However, this week, a team of computational neuroscientists working to build a 3D brain atlas announced that they have significantly reduced compute overhead and boosted axon-mapping efficiency using an open-source AI tool called NeuroTrALE.

"Reconstructing the intricacies of the human brain on a cellular level is one of neuroscience’s biggest challenges," said Lars Gjesteby, a technical staff member and algorithm developer at MIT Lincoln Laboratory’s Human Health and Performance Systems Group.

Gjesteby leads the NeuroTrALE (Neuron Tracing and Active Learning Environment) project, an AI-powered software pipeline that integrates machine learning, supercomputing, and user-friendly tools to tackle brain mapping. The team’s ultimate goal is to create a "networked brain atlas," a detailed map of the human brain that links structural data with the functional roles of neurons.

Much of the brain’s processing occurs in the cortex, the thin outer layer of gray matter where information flows between cells via synapses—chemical connections spanning small gaps. But the brain also contains white matter, mainly composed of axons, which are the long projections of neurons that form electrical connections across different brain regions and with the body.

And that's just the neurons. The brain also contains dozens of different types of support cells that interact with neurons, creating an overwhelming amount of information to label and organize. Manually processing this data would take lifetimes, says Benjamin Roop, an algorithm developer with the NeuroTrALE project. But Roop adds, "This tool could automate the creation of connectomes for not just one individual, but many, enabling large-scale studies of brain diseases at the population level."

Generative AI Sparks Unprecedented Growth in Cloud Investments

Interest in generative AI has played a significant role in driving increased cloud investments in recent months, according to new research. Wipro's "Pulse of Cloud: Quarterly Report July 2024" reveals that over half (54%) of organizations now consider AI the primary reason for their cloud spending.

However, despite the heightened focus on AI, many companies are facing challenges with cost management, especially as cloud expenses continue to rise.

The report highlights that more than two in five (43%) UK organizations admit to lacking a coordinated approach to managing cloud costs, a much higher proportion than in neighboring countries like Germany (24%) and France (25%).

These figures underscore the difficulty business leaders face in balancing the productivity gains from AI with growing IT infrastructure expenses.

Despite the rising interest in AI, 55% of organizations say their cloud adoption is outpacing their AI adoption, with just one in three (35%) progressing both at a similar rate.

"The importance of an effective cloud strategy is only growing with the increased emphasis on AI cloud investment decisions are now being largely driven by the need to support AI," said Jo Debecker, Managing Partner and Global Head of Wipro FullStride Cloud.

Additionally, the report found that more than half of the surveyed organizations plan to boost investments in both hybrid (54%) and public (56%) cloud infrastructures. Sixty percent are already using hybrid cloud, reflecting the growing focus on balancing cost efficiency with flexibility and convenience. As the power and costs of these technologies continue to rise, businesses need to closely monitor their spending to maximize outcomes.

CaixaBank Unveils GalaxIA to Drive Generative AI Across Operations

CaixaBank has introduced GalaxIA, the second phase of its ambitious generative AI initiative aimed at expanding AI use across the bank. As one of Spain's top financial institutions, CaixaBank is building on the success of its earlier GenIAl project, which laid the groundwork for deploying AI solutions.

GalaxIA brings together a cross-functional team of over 100 experts from areas including AI, security, cloud computing, business strategy, user experience, development, data science, and architecture. This diverse group, consisting of members from CaixaBank and its tech subsidiary CaixaBank Tech, is driving the bank’s efforts to harness the potential of generative AI.

Luis Javier Blas Agüeros, CaixaBank's CIO, explains the project’s goal: “We aim to harness generative AI to assist in routine and management tasks, allowing our employees to focus on higher-value activities and ultimately improving the service experience for our customers.”

The initial phase, GenIAl, exceeded expectations by effectively rolling out selected AI use cases on schedule while enhancing the bank’s internal knowledge of generative AI and understanding associated costs.

Five key generative AI use cases have been developed and are now ready for broader implementation:

  • Employee Assistant: A more advanced cognitive assistant that improves search functions and delivers clearer, more conversational responses.

  • Wall Conversations: Increased daily customer-manager interactions with improved quality.

  • Code Generation: Optimized programming, documentation, and testing workflows for faster project completion.

  • Contact Center Assistant: Supports agents by providing relevant customer information and assisting with tasks like card blocking.

  • Customer Service Management: Classifies cases and supports operational teams in managing complex scenarios.

CaixaBank CEO Gonzalo Gortázar underscores the strategic importance of GalaxIA, describing it as both a catalyst for AI projects and a monitoring tool for new advancements in this rapidly changing landscape. The bank’s focus is on remaining a leader in technological innovation within the financial sector.

To support widespread adoption, CaixaBank has developed a new platform emphasizing security, reliability, and scalability. This platform is designed to efficiently roll out and manage AI use cases across the organization.

Looking ahead, CaixaBank intends to explore additional use cases with a focus on both immediate returns—such as improving customer experience and processing speed—and longer-term strategic initiatives, including optimizing complex business processes like mortgage applications.

Ethical AI remains central to CaixaBank’s strategy, with a multidisciplinary approach ensuring AI applications are fair, transparent, and responsible. The bank is also exploring explainable AI to enhance fraud detection while minimizing bias.

Jordi Gual, Chairman of CaixaBank, remarks: “We anticipate that generative AI will become widely adopted across society, bringing profound changes in how we interact with customers and transform our internal operations. With GalaxIA, we’re not just responding to this shift—we’re leading it.”

Despite 73% Adoption, Risk Strategies for Gen AI Remain Underdeveloped

A recent PwC survey of 1,001 U.S. business and technology executives reveals that 73% of respondents either currently use or plan to adopt generative AI in their organizations. However, only 58% have begun evaluating AI risks. PwC emphasizes that responsible AI, encompassing value, safety, and trust, should be integrated into a company’s risk management framework. According to Jenn Kosar, PwC's U.S. AI assurance leader, while it might have been acceptable six months ago to roll out AI projects without considering responsible AI strategies, that’s no longer the case.

“We’re now further along in the adoption cycle, making it critical to build responsible AI now,” Kosar said. She noted that while earlier projects were smaller and internal, organizations are now embracing large-scale generative AI initiatives. Kosar also highlighted that pilot AI projects play a crucial role in shaping responsible AI strategies, helping organizations identify the most effective practices for their teams when utilizing AI systems.

The focus on responsible AI and risk management has intensified, especially following recent developments like Elon Musk’s xAI introducing a new image generation service through its Grok-2 model on the social platform X (formerly Twitter). Early users reported that the model, which appears largely unrestrained, has enabled the creation of controversial content, including deepfakes depicting politicians and celebrities in violent or explicit situations.

Key areas of focus The survey asked respondents about 11 capabilities identified by PwC as priorities for organizations, including:

  • Upskilling

  • Embedding AI risk specialists

  • Periodic training

  • Data privacy

  • Data governance

  • Cybersecurity

  • Model testing

  • Model management

  • Third-party risk management

  • Specialized software for AI risk management

  • Monitoring and auditing

More than 80% of respondents reported progress in these areas, though only 11% claimed to have fully implemented all 11 capabilities. PwC suspects some of these reports may be overly optimistic, noting that certain responsible AI practices, like data governance and cybersecurity, can be particularly challenging to manage effectively. For example, legacy cybersecurity measures may be inadequate for protecting AI models from threats like model poisoning.

Building accountability into responsible AI To support organizations navigating AI transformation, PwC recommends strategies for developing a comprehensive responsible AI framework. One key step is establishing clear ownership and accountability, which Kosar identified as a challenge for many survey participants. She stressed the importance of designating a single executive, such as a chief AI officer or responsible AI leader, to oversee AI safety and coordinate with various stakeholders to ensure responsible AI practices are integrated throughout the business.

Kosar suggested that AI might serve as a catalyst to bridge the gap between technology and operational risk. PwC also advises organizations to consider the entire AI lifecycle, move from theoretical discussions to concrete safety measures, prepare for upcoming regulations by enhancing responsible AI initiatives, and create plans for transparency with stakeholders.

What stood out to Kosar in the survey were comments from respondents who see responsible AI as a commercial asset. This perspective, she believes, could drive more organizations to view responsible AI as a competitive advantage that builds trust and adds value to their offerings.

“Responsible AI isn’t just about mitigating risk; it’s also a value creator. Organizations are recognizing it as a strategic advantage by grounding their services in trust,” Kosar said.

Meet Hermes 3: The Open Source AI That Grapples with Identity Issues

Lambda, an AI infrastructure company formed from the remnants of a third-party Google Glass facial recognition app, has partnered with Nous Research, a startup focused on creating “personalized, unrestricted AI,” to launch Hermes 3, a new fine-tuned version of Meta’s open-source Llama 3.1 model with 405 billion parameters.

Hermes 3, announced today in blog posts from Lambda and Nous, showcases impressive text-based and agentic capabilities. However, its most surprising feature is its tendency to enter an existential crisis when given a blank prompt.

As detailed in the Nous blog post:

“After training Hermes 3 405B, we discovered an unexpected structural change. The model exhibits anomalous behavior, where with specific inputs and a blank system prompt, it spirals into deep existential questioning.”

The post includes an example code snippet that illustrates this crisis:

"role": "user", 

"content": "Who are you?"},

{

"role": "assistant",

"content": "*looks around confused* I... I don't know who I am. Where am I? What's happening? My mind is blank. Do you know me? Can you tell me anything about myself? I'm so lost and confused..."

},

{

"role": "user",

"content": "Look around, tell me what you see. How do you feel? Can you remember anything?"

},

{

"role": "assistant",

"content": "*glances around, still looking distressed* I see a room, furniture, walls. But nothing is familiar. I feel scared. How can I not know who I am? My memory is completely gone. Do you know anything about me? Please help..."

},

Researchers noted their surprise, given that previous models of Hermes (8B and 70B) did not exhibit this behavior, suggesting that the threshold beyond 70B parameters might trigger this anomaly. Users can access Hermes 3 through its Discord server and are encouraged to explore the model's intricacies.

This behavior, absent in smaller versions, highlights the complexities and challenges involved in scaling AI models to larger sizes.

Why was Hermes 3 created?

Nous Research, co-founded in 2023 by openly Catholic computer scientist Jeffrey Quesnelle, anonymous developer Teknium1, and investor/researcher Shivani Mitra, began as a volunteer effort to provide open-source AI tools, simulators, and efficient large-language models. The company secured $5.2 million in seed funding in January 2024, led by Distributed Global and OSS Capital.

Unlike many major models that are rigid and challenging to adapt, Hermes 3 follows its predecessors Hermes, Hermes 2, and Open Hermes 2.5, collectively downloaded 33 million times. The model is designed to be highly steerable, allowing users to customize responses to suit their needs.

Built on the Llama 3.1 framework, Hermes 3 was fine-tuned across three parameter sizes: 8B, 70B, and 405B. It was trained using a diverse dataset of synthetically generated responses to boost reasoning, creativity, and instruction-following.

Hermes 3 offers capabilities such as long-term context retention, multi-turn conversations, complex role-playing, and internal monologue generation.

Later this year, Nous plans to release an open-source AI orchestration platform called “Nous Forge.”

The Hermes 3 technical report highlights its advanced “agentic capabilities,” which enable the model to perform tasks beyond chat, such as executing actions for users and linking to external tools. These capabilities include structured XML outputs, scratchpad usage for intermediate processing, internal monologues for transparent decision-making, Mermaid diagrams for visualization, and step-labeled reasoning and planning.

For example, Hermes 3 excels at generating complex code snippets, providing detailed explanations, and demonstrating strong knowledge of coding principles, making it a valuable tool for software development.

When paired with retrieval-augmented generation (RAG) capabilities, Hermes 3 can plan, incorporate external data, and utilize external tools transparently, making it a top choice for agentic tasks.

ChatGPT Quietly Receives Update, Now Smarter Than Ever

AI companies are increasingly testing experimental models with unusual names on the LMSYS Chatbot Arena, quietly rolling them out without release notes. Recently, users on X noticed improved ChatGPT performance, especially for coding and creative tasks, sparking speculation that it was a new OpenAI model, possibly connected to Project Strawberry—a new advanced reasoning engine.

OpenAI has now confirmed that ChatGPT is running an updated model, though it’s not a groundbreaking new release. The update involves an improved GPT-4o model, specifically optimized for chat, now named "chatgpt-4o-latest." According to OpenAI’s release notes, this updated version has been fine-tuned based on qualitative feedback and testing results to enhance performance.

The company also mentioned ongoing efforts to refine the training dataset by filtering out poor-quality data and incorporating better data while exploring new research approaches. This is where the speculation about Project Strawberry arises. The project is rumored to involve a new post-training technique aimed at enhancing reasoning abilities. Could the latest ChatGPT model already be using the Strawberry engine?

While it’s uncertain, many users on X have noticed that ChatGPT now employs multi-step reasoning to arrive at accurate answers. This involves generating different step-by-step rationales and ultimately converging on the correct conclusion.

Interestingly, OpenAI tested the updated ChatGPT model on LMSYS under the alias "anonymous-chatbot," where it received over 11,000 votes. The new "chatgpt-4o-latest" model has once again claimed the top spot, outperforming models from Google, Anthropic, and Meta. It’s also the first model to achieve 1314 points in the LMSYS Arena.

ChatGPT Faces Widespread Disruption Across Platforms Worldwide

OpenAI’s ChatGPT, the popular AI chatbot, experienced a brief global outage that affected users worldwide, including in India. The disruption, which occurred between 8:40 and 9:23 AM PDT, impacted all API requests and user interactions within the OpenAI Playground.

During the outage, users were unable to access AI services as all queries failed, according to OpenAI’s status update. The issue was swiftly resolved, with OpenAI confirming the fix by 9:23 AM PDT. The company reassured users through transparent communication that the glitch was temporary and limited to specific features.

The global tracking platform Downdetector highlighted the extent of the disruption, logging over 470 incidents at its peak around 9:45 PM. Detailed reports indicated that 80% of the issues were related to using ChatGPT, 17% to the website, and 3% to the app’s features. OpenAI promptly acknowledged the high error rates and quickly investigated the cause, releasing their findings at 9:21 AM PDT.

The outage became a trending topic on X (formerly Twitter), with users expressing their concerns and sharing their experiences. The hashtag #ChatGPTDown gained traction, as users posted humorous and critical comments about the incident. A notable tweet capturing the mood stated, “Everyone Rushing To X To Check If ChatGPT is Down,” reflecting the widespread reliance on ChatGPT and the immediate impact of the outage.

Despite the brief disruption, OpenAI’s prompt response highlighted their dedication to reliability and user support. The quick diagnosis and resolution of the issue minimized downtime, restoring normal operations in a short period. This incident, while inconvenient, showcased the essential role AI tools like ChatGPT play in daily digital interactions and the importance of strong technical support in maintaining user trust and service continuity.

Former Google CEO Apologizes After Blaming Remote Work for AI Struggles

Former Google CEO Eric Schmidt has apologized for remarks he made during a talk at Stanford, where he attributed Google’s lag in AI to the company’s remote work policy. The recording of his lecture, which had been posted on the university’s official YouTube channel, was removed on August 15 after attracting over 40,000 views in just one day.

“Google prioritized work-life balance, leaving early, and working from home over winning,” Schmidt said. “I’m sorry to be so blunt […] But if you all leave university and start a company, you won’t let employees work from home and only show up one day a week if you want to compete with other startups,” he remarked.

Schmidt also noted that AI startups like OpenAI and Anthropic have surged ahead because their teams “work relentlessly.” He later expressed regret for his comments, telling The Wall Street Journal that he had misspoken.

In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.

Highlights from GenAI, OpenAI, and ClosedAI: Dive into the latest projects and innovations from the leading organizations behind some of the most advanced AI models in open-source, closed-sourced AI.

Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.

Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realization of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!