
Microsoft Enhances Quantum Platform with AI and Advanced Molecular Simulation

Welcome to Towards AGI, your premier newsletter dedicated to the world of Artificial Intelligence. Our mission is to guide you through the evolving realm of AI with a specific focus on Generative AI. Each issue is designed to enrich your understanding and spark your curiosity about the advancements and challenges shaping the future of AI.

Whether you're deeply embedded in the AI industry or just beginning to explore its vast potential, "Towards AGI" is crafted to provide you with comprehensive insights and discussions on the most pertinent topics. From groundbreaking research to ethical considerations, our newsletter is here to keep you at the forefront of AI innovation. Join our community of AI professionals, hobbyists, and academics as we pursue the ambitious path towards Artificial General Intelligence. Let’s embark on this journey together, exploring the rich landscape of AI through expert analysis, exclusive content, and engaging discussions.

Microsoft Enhances Quantum Platform with AI and Advanced Molecular Simulation

Microsoft has enhanced its quantum-computing platform with generative artificial intelligence and other advanced features, aiming to make this transformative technology more accessible to the scientific community.

On Wednesday, the company introduced Generative Chemistry and Accelerated DFT, which extend the capabilities of its Azure Quantum Elements platform. These additions are designed to significantly reduce research time for scientists in the chemicals and materials science fields, according to a blog post by Jason Zander, EVP of Strategic Missions and Technologies.

"Just as generative AI has boosted creativity and productivity with tools like Copilot, we are now integrating AI and natural language processing into science," Zander stated. The goal of Generative Chemistry is to embed AI reasoning throughout the scientific method, leveraging next-generation AI models to expedite the process from hypothesis to results.

Azure Quantum Elements, launched late last year, merges AI and high-performance computing (HPC) to accelerate scientific research. In January, a collaboration with the US Department of Energy's Pacific Northwest National Laboratory (PNNL) showcased the platform's ability to winnow a pool of 32.6 million candidate materials for a potential lithium-battery alternative down to just 18 in under four days.
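The screening pattern behind that result, applying successively stricter filters to shrink a huge candidate pool, can be sketched in a few lines of Python. The data and thresholds below are invented for illustration and are not the PNNL workflow:

```python
# Illustrative multi-stage screening funnel: each stage applies a filter,
# shrinking the candidate pool before more expensive analysis runs.

def screen(candidates, stages):
    """Apply filter stages in order, keeping only candidates that pass all."""
    pool = candidates
    for label, predicate in stages:
        pool = [c for c in pool if predicate(c)]
    return pool

# Mock candidate materials with invented "predicted property" scores.
candidates = [
    {"id": i, "stability": (i * 37) % 100, "conductivity": (i * 59) % 100}
    for i in range(1000)
]

stages = [
    ("stability filter", lambda c: c["stability"] > 80),
    ("conductivity filter", lambda c: c["conductivity"] > 85),
]

survivors = screen(candidates, stages)
print(len(candidates), "->", len(survivors))
```

In a real pipeline, each stage would be a physics-based or ML-based model of increasing cost, so the expensive simulations only run on the handful of candidates that survive the cheap filters.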

Generative Chemistry enhances the platform further by enabling researchers to generate and investigate novel molecules suited for specific industry applications using AI models trained on hundreds of millions of compounds. Researchers can request molecules with desired characteristics and provide information about their application, with the system identifying relevant molecular properties. It can also suggest previously unseen molecules with useful properties tailored for specific applications, ensuring feasible synthesis in a reasonable number of steps.

Density Functional Theory (DFT) is widely used for molecular simulations, helping researchers study the electronic structure of atoms, molecules, and nanoparticles, as well as surfaces and interfaces. These simulations are complex and computationally intensive, often requiring supercomputers. Microsoft’s new Accelerated DFT service in Azure Quantum Elements runs these simulations at unprecedented speeds, an order of magnitude faster than PySCF, a popular open-source DFT code, according to the blog post.

McKinsey Boosts Gen AI Advancements with Iguazio Integration

It's been 18 months since OpenAI launched ChatGPT, sparking widespread interest and experimentation. This initial phase has revealed both the benefits and the limitations of generative AI (gen AI), along with the infrastructure and skills it requires.

However, the brief history of gen AI is also marked by numerous pilot projects that failed for lack of structure and purpose, producing disappointing outcomes. Some gen AI implementations, like chatbots and virtual agents, have even produced embarrassing or dangerous results. Ambitious leaders are now looking to move beyond the "let a thousand flowers bloom" stage of gen AI and to bring more scientific rigor to their data science efforts.

Enter Iguazio, an AI and machine learning operations company acquired by McKinsey in 2023. Since its acquisition, Iguazio has become part of QuantumBlack, McKinsey's AI division, which focuses on driving innovation and experimentation in AI. Iguazio provides a software-based AI platform grounded in data science principles to assist organizations in developing, deploying, and managing gen AI solutions.

This platform, part of QuantumBlack Horizon—McKinsey's suite of AI development tools—addresses two major challenges that enterprises face when moving from gen AI proofs of concept to live implementations:

  1. Scaling: Gen AI operations (gen AI ops) enable efficient and effective scaling of gen AI applications.

  2. Governance: Gen AI guardrails mitigate risks by ensuring essential monitoring, data privacy, and compliance activities.
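The governance point can be made concrete with a minimal sketch: a guardrail layer that inspects model output before it reaches the user. The rules below are hypothetical examples for illustration, not Iguazio's actual API:

```python
import re

# Minimal output-guardrail sketch: responses are checked against policy
# rules (PII redaction, banned terms) before reaching the user.

PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g. US SSN-like strings
BLOCKED_TERMS = {"internal-only", "confidential"}

def apply_guardrails(response: str) -> str:
    """Redact PII-like strings; withhold responses containing banned terms."""
    if any(term in response.lower() for term in BLOCKED_TERMS):
        return "[response withheld by policy]"
    return PII_PATTERN.sub("[REDACTED]", response)

print(apply_guardrails("The ID is 123-45-6789."))      # PII gets redacted
print(apply_guardrails("This is Confidential data."))  # blocked entirely
```

A production guardrail layer would also log each decision for compliance monitoring, which is the other half of the governance requirement above.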

"Gen AI simplified the creation of POCs (proofs of concept) but made transitioning them to production more challenging, widening the gap between potential and actual business value," said Asaf Somekh, co-founder and CEO of Iguazio. "The Iguazio AI platform bridges this gap, helping organizations integrate gen AI efficiently into their business processes and applications."

Whether used independently or as part of McKinsey's expanding suite of solutions for specific functions and industries, the Iguazio platform's architecture supports gen AI adoption. It optimizes infrastructure use, such as cloud services and computing power, by automating many tasks for engineers. Additionally, AI infrastructure components like Nvidia’s GPUs can be shared across projects, reducing costs and maximizing resource efficiency. Code for data extraction can be converted into managed microservices and reusable components, supporting a variety of data extraction processes across the enterprise.

Amazon and Zeta Global Team Up to Revolutionize Marketing with Gen AI

Amazon and Zeta Global have announced a significant generative AI (gen AI) advancement that will enhance application scalability by using Amazon Web Services (AWS) foundation models within the Zeta Marketing Platform. At A'Maison: The House of Amazon @ Cannes Lions 2024, executives from both companies highlighted the integration of their technologies, which will give businesses seamless access to omnichannel marketing capabilities.

“Retailers and brands can gain a competitive advantage by unlocking profound customer insights, enabling them to forecast demand more accurately, optimize their media strategies, and create compelling content that aligns with consumer intent. This comprehensive understanding allows retailers to devise strategies that drive engagement and boost sales, leading to a more effective marketing approach,” explained David A. Steinberg, co-founder, chairman, and CEO of Zeta Global.

Amazon Bedrock, a fully managed generative AI service from AWS, simplifies the development of generative AI applications in businesses, eliminating the need for extensive skills or complex infrastructure. “Amazon Bedrock, Amazon Personalize, and Amazon SageMaker help companies like Zeta Global significantly enhance customer experiences by delivering hyper-personalized marketing and advertising messages that resemble one-on-one human interactions, fostering long-lasting customer loyalty,” said Jon Williams, global head of agency business development at AWS.

Zeta Creative AI Agents generate insights and images within the Zeta Opportunity Engine (ZOE), improving marketing platforms. “In essence, we’re providing businesses with tools to create highly intelligent AI Assistants with custom workflows that manage all their marketing tasks. These assistants can learn customer preferences, send the right messages at the right time, and even predict future customer needs. It’s like having a personal shopper for each customer, ensuring they receive exactly what they want when they want it,” said Steinberg.

Introducing DeepSeek-Coder-V2: The Open-Source AI Model Surpassing GPT-4 Turbo in Coding and Math

Code intelligence focuses on developing advanced models that can understand and generate programming code. This interdisciplinary field combines natural language processing and software engineering to improve programming efficiency and accuracy. Researchers have created models to interpret code, generate new code snippets, and debug existing code, reducing the manual effort involved in coding tasks and making the development process faster and more reliable. These models have shown promise across various applications, from software development to education.

One major challenge in code intelligence is the performance gap between open-source models and advanced closed-source models. Despite significant efforts from the open-source community, these models often fall short compared to their closed-source counterparts in coding and mathematical reasoning tasks. This gap hinders the wider adoption of open-source solutions in professional and educational environments. Developing more powerful and accurate open-source models is essential for democratizing access to advanced coding tools and fostering innovation in software development.

Notable open-source models in code intelligence include StarCoder, CodeLlama, and the original DeepSeek-Coder. Although these models have shown consistent improvement thanks to the contributions of the open-source community, they still lag behind leading closed-source models like GPT-4 Turbo, Claude 3 Opus, and Gemini 1.5 Pro. These closed-source models benefit from extensive proprietary datasets and substantial computational resources, enabling them to excel in coding and mathematical reasoning tasks. The need for competitive open-source alternatives remains.

Researchers from DeepSeek AI have introduced DeepSeek-Coder-V2, a new open-source code language model. Built on the foundation of DeepSeek-V2, the model undergoes further pre-training on an additional 6 trillion tokens, enhancing its capabilities in coding and mathematical reasoning. DeepSeek-Coder-V2 aims to close the performance gap with closed-source models, offering a competitive open-source alternative with impressive benchmark results.

DeepSeek-Coder-V2 employs a Mixture-of-Experts (MoE) framework, supports 338 programming languages, and extends the context length from 16K to 128K tokens. The model's architecture includes versions with 16 billion and 236 billion parameters, designed to use computational resources efficiently while achieving superior performance on code-specific tasks. The training data for DeepSeek-Coder-V2 comprises 60% source code, 10% math corpus, and 30% natural language corpus, sourced from GitHub and CommonCrawl. This comprehensive dataset underpins the model's robustness and versatility across diverse coding scenarios.
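For readers unfamiliar with the architecture, a Mixture-of-Experts layer routes each token through only a few "expert" subnetworks chosen by a gating function, so total parameters can be large while per-token compute stays small. The toy sketch below illustrates the generic top-k routing idea; it is not DeepSeek-Coder-V2's actual implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token_repr, experts, gate_weights, k=2):
    """Score all experts for this token, then route through only the top-k."""
    logits = [sum(w * x for w, x in zip(row, token_repr)) for row in gate_weights]
    scores = softmax(logits)
    top_k = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:k]
    norm = sum(scores[i] for i in top_k)  # renormalize over selected experts
    return sum(scores[i] / norm * experts[i](token_repr) for i in top_k)

# Four toy "experts": each just scales the sum of the token representation.
experts = [lambda t, s=s: s * sum(t) for s in (1.0, 2.0, 3.0, 4.0)]
gate_weights = [[0.1, 0.2], [0.3, 0.1], [0.9, 0.4], [0.2, 0.8]]

out = moe_forward([1.0, 0.5], experts, gate_weights, k=2)
print(round(out, 3))
```

With k=2 of four experts active, only half the expert computation runs per token, which is the efficiency argument for MoE at the 236-billion-parameter scale.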

LibreChat: The Open Source AI Chat with Advanced Customization

With an interface inspired by ChatGPT, LibreChat offers an "enhanced" open-source alternative whose key feature is AI model selection, along with further customization options such as plugins for Retrieval-Augmented Generation (RAG).
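The Retrieval-Augmented Generation pattern those plugins implement can be sketched minimally: retrieve the documents most relevant to a question, then prepend them to the prompt sent to the model. The keyword-overlap retriever below is purely illustrative and is not LibreChat's plugin code:

```python
# Minimal RAG sketch: rank documents by word overlap with the question,
# then build a prompt that includes the best matches as context.

def retrieve(question, documents, k=2):
    """Return the k documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    return sorted(documents,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(question, documents):
    context = "\n".join(retrieve(question, documents))
    return f"Context:\n{context}\n\nQuestion: {question}"

docs = [
    "LibreChat supports multiple AI providers in one interface.",
    "The MIT License permits free use and modification.",
    "Bananas are rich in potassium.",
]
prompt = build_prompt("Which providers does LibreChat support?", docs)
print(prompt)
```

Real RAG plugins replace the keyword overlap with embedding similarity over a vector store, but the retrieve-then-augment flow is the same.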

In an episode of the Practical AI podcast, LibreChat creator Danny Avila highlighted the software's unique ability to search through all past conversations. "To this day, it’s not a feature on ChatGPT or many different interfaces… I know a lot of people, just that one simple feature gets them on board."

LibreChat's website promotes it as "a centralized hub for all your AI conversations," capable of "harnessing the capabilities of cutting-edge language models from multiple providers in a unified interface," including models from OpenAI and other open-source and closed-source providers.

The site also boasts "seamless integration" with AI services from OpenAI, Azure, Anthropic, and Google, specifically mentioning GPT-4, Claude, and Gemini Vision. LibreChat supports multiple languages, images, and files, with future plans to integrate video. Currently, the focus is on image handling.

LibreChat is free and "completely open-source," according to its repository, with "community-driven development" under the MIT License.

In addition to its features, LibreChat highlights the passionate community actively creating an ecosystem of open-source AI tools. In an email interview with The New Stack, Danny Avila emphasized the importance of owning your own data, which he considers a "dying human right" in the internet age, especially with the rise of LLMs.

Avila noted how major sites like Reddit have changed their web scraping policies and imposed API usage limitations, requiring users to grant access to their data. He argues that companies seek not just popularity but the immediate influx of data, which he views as a hotter commodity than oil.

With locally hosted LLMs, Avila sees an opportunity for users to withhold training data from Big Tech, offering a solution that can run exclusively on open-source technologies, database and all, completely "air-gapped." Even as remote AI services claim they won't use transient data for training, Avila believes local models will become increasingly capable over time.

LibreChat is compatible with these local models. On the podcast, Avila demonstrated its ability to ingest a CSV spreadsheet with mock sales data while switching between different LLMs. "I switched to Cohere, and it didn’t give as good of a response as GPT-4, but it was able to see that and work with it."

He also showcased how LibreChat tracks "conversation state" in a database, allowing users to switch to a different AI model mid-conversation.
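Persisting conversation state this way can be sketched with a small table keyed by conversation, where each message also records the model involved, so the full history can be replayed to whichever model the user switches to next. The schema below is a hypothetical illustration, not LibreChat's actual one:

```python
import sqlite3

# Sketch of conversation state in a database: every message records which
# model produced or received it, so the model can change mid-conversation.

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE messages (
    conversation_id TEXT, role TEXT, model TEXT, content TEXT)""")

def add_message(conversation_id, role, model, content):
    conn.execute("INSERT INTO messages VALUES (?, ?, ?, ?)",
                 (conversation_id, role, model, content))

def history(conversation_id):
    """Full history, replayable to whichever model is active next."""
    rows = conn.execute(
        "SELECT role, model, content FROM messages WHERE conversation_id = ?",
        (conversation_id,))
    return rows.fetchall()

add_message("c1", "user", "gpt-4", "Summarize this CSV.")
add_message("c1", "assistant", "gpt-4", "It contains mock sales data.")
# The user switches models; the stored history carries over unchanged.
add_message("c1", "user", "cohere", "Now chart it.")

for role, model, content in history("c1"):
    print(role, model, content)
```

Because the state lives in the database rather than in any one provider's session, switching from GPT-4 to Cohere mid-conversation only changes which API the stored history is sent to.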

OpenAI Co-Founder Launches Safe Superintelligence Inc. for AI Safety

Ilya Sutskever, one of OpenAI's founders who was involved in a failed attempt to remove CEO Sam Altman, announced the launch of a safety-focused AI company. Sutskever, a renowned AI researcher who left the ChatGPT maker last month, shared on social media that he has co-founded Safe Superintelligence Inc. with two partners. The company’s sole mission is to safely develop "superintelligence," referring to AI systems that surpass human intelligence.

In a prepared statement, Sutskever and his co-founders, Daniel Gross and Daniel Levy, emphasized that the company would avoid distractions from management overhead or product cycles. Their business model ensures that work on safety and security remains insulated from short-term commercial pressures.

Safe Superintelligence is an American company based in Palo Alto, California, and Tel Aviv, where the founders have strong connections and can attract top technical talent.

Sutskever was part of an unsuccessful effort last year to oust Altman, a move that resulted in internal turmoil at OpenAI. This conflict centered on whether the organization prioritized business opportunities over AI safety. Sutskever, who later expressed regret over the attempted ouster, had co-led a team at OpenAI focused on developing artificial general intelligence (AGI) safely.

Upon leaving OpenAI, Sutskever hinted at a personally significant project without providing details. He stated that his departure was his own decision. Shortly after, his team co-leader Jan Leike also resigned, criticizing OpenAI for prioritizing flashy products over safety. OpenAI subsequently established a safety and security committee, primarily staffed by company insiders.

OpenAI's Upcoming ChatGPT to Reach PhD-Level Intelligence, Says CTO Mira Murati

OpenAI CTO Mira Murati recently visited Dartmouth Engineering, her alma mater, where she discussed at length the future of artificial intelligence, the next generation of ChatGPT, and its anticipated PhD-level intelligence. Murati explained that while GPT-3 had the intelligence of a toddler and GPT-4 that of a high schooler, the upcoming model will exhibit PhD-level intelligence for specific tasks. She revealed that this next-generation GPT is expected to be released in about a year and a half and said that conversing with the new chatbot might make users feel it is smarter than they are.

During the interview, Dartmouth Trustee Jeffrey Blackburn asked Murati about the potential for future GPT models to autonomously connect to the internet and perform tasks independently. Murati confirmed that OpenAI is considering this possibility, acknowledging the reality of AI systems with agent capabilities that can connect to the internet, communicate, and collaborate with each other and with humans. She emphasized the importance of integrating safety and security measures alongside technological development, describing intelligence and safety as interconnected domains: smarter AI systems are easier to guide safely by providing clear directives, much as a smarter dog is easier to train than a less intelligent one.

Murati also acknowledged that it's impossible to eliminate all risks associated with AI technology. Addressing concerns about deepfake videos, she stated that while OpenAI has a responsibility due to its ownership of the technology, the responsibility is also shared with users, civil society, government, content creators, and the media to determine its appropriate use.

Citi Increases Microsoft Target to $520, Highlights OpenAI’s Positive Influence on Azure

Analysts at Citi have increased their target price for Microsoft (NASDAQ:MSFT) to $520 from $495, maintaining a Buy rating on the stock. In a recent note, the bank noted that the latest headlines about OpenAI are favorable for Microsoft's Azure, but the associated expenses might be underestimated.

Recent news reports indicate that OpenAI has more than doubled its annualized revenue to $3.4 billion and has expanded its cloud-capacity contracts with Oracle (ORCL).

"While the growth implications from OpenAI headlines are positive (suggesting strength in Azure), we believe consensus models may be underestimating the losses that Microsoft will incur from their stake in OpenAI in the non-operating expense line," wrote Citi analysts.

They further explained that, although this impact is not substantial due to the large scale of Microsoft's business, consensus EPS estimates might be overstated by $0.04-$0.05 in the upcoming quarters, assuming all other factors remain constant.

As a result, Citi has slightly reduced its near-term EPS estimate by about 1% for fiscal Q4 2024 and fiscal Q1 2025, projecting an EPS of $3.08, approximately 2% below the consensus of around $3.15. However, they have raised their estimates for fiscal year 2027 and beyond.
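The cited figures can be sanity-checked with quick arithmetic, using only the numbers reported above:

```python
citi_eps = 3.08        # Citi's reduced near-term EPS estimate
consensus_eps = 3.15   # approximate consensus EPS

# How far below consensus is Citi's estimate?
gap = (consensus_eps - citi_eps) / consensus_eps
print(f"{gap:.1%} below consensus")  # 2.2%, i.e. "approximately 2%"

# The $0.04-$0.05 per-quarter drag, expressed as a share of consensus EPS:
drag_low = 0.04 / consensus_eps
drag_high = 0.05 / consensus_eps
print(f"drag: {drag_low:.1%} to {drag_high:.1%} of consensus EPS")
```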

In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.

Highlights from GenAI, OpenAI and ClosedAI: Dive into the latest projects and innovations from the leading organisations behind some of the most advanced open-source and closed-source AI models.

Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.

Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realisation of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!
