
A thought leadership platform to help the world navigate towards Artificial General Intelligence (AGI). We are committed to navigating the path towards AGI by building a community of innovators, thinkers, and AI enthusiasts.

Whether you're passionate about machine learning, neural networks, or the ethics surrounding GenAI, our platform offers cutting-edge insights, resources, and collaborations on everything AI.

What to expect from Towards AGI:

  • Know Your Inference (KYI): Ensuring your AI-generated insights are accurate, ethical, and fit for purpose

  • Open vs Closed AI: Expert analysis to help you navigate the open-source vs closed-source AI debate

  • GenAI Maturity Assessment: Evaluate and audit your AI capabilities

  • Expert Insights & Articles: Stay informed with deep dives into the latest AI advancements

But that’s not all!

We are training specialised AI Analyst Agents that CxOs can interact with to get answers to their most pressing questions. No more waiting weeks for a Gartner analyst appointment: you’ll be just a prompt away from the insights you need to make critical business decisions. Watch this space!

Visit us at https://www.towardsagi.ai to be part of the future of AI. Let’s build the next wave of AI innovations together!

Introducing the GEN Matrix: Your Essential Guide to Generative AI Trailblazers!

Dive into the forefront of Generative AI with the GEN Matrix—your ultimate resource for discovering the innovators, startups, and organizations leading the AI revolution. Our platform features three categories spotlighting:

  • Organizations: Early adopters advancing GenAI in production.

  • Startups: Pioneers across diverse GenAI layers (chips, infrastructure, applications, etc.).

  • Leaders: Key figures driving GenAI innovation and adoption.

Know someone making strides in GenAI? Nominate them to be featured in the GEN Matrix! Whether you're a business seeking AI solutions or a developer looking for tools, explore GEN Matrix to stay at the forefront of AI excellence.

LightOn Set to Become Europe’s First Public GenAI Startup with Paris IPO

French generative AI startup LightOn initiated its initial public offering (IPO) on the Euronext Growth market in Paris on Friday, with trading set to begin later in November. LightOn, which supplies large language model (LLM) solutions to both businesses and the French government, will become the first GenAI startup to go public in Europe. Unlike European counterparts like France's Mistral and Germany's DeepL, which have stayed private to secure funding, LightOn's IPO reflects France's ambition to establish itself as Europe's primary hub for artificial intelligence and narrow the innovation gap with the U.S. and U.K.

“This IPO provides investors with a unique chance to be part of the growth of a French tech leader that is already deploying AI solutions successfully with major groups in France and internationally,” said LightOn co-CEOs Igor Carron and Laurent Daudet.

Alphabet’s Google launched an AI hub in France in February. French President Emmanuel Macron has previously expressed his vision for France to increase its tech "unicorns" to 100 by 2030, up from the current 27. On Thursday, Euronext CEO Stéphane Boujnah highlighted the readiness of numerous EU tech companies to go public.

Shares will be offered at 10.35 euros each, valuing the company at around 50 million euros. The IPO includes a capital raise of approximately 10.4 million euros ($11.2 million). LightOn is targeting revenue of 40 million euros and an EBITDA margin of 40% by 2027. The subscription period runs Nov. 8-20, with settlement delivery on Nov. 25 and the Euronext Growth listing the following day.

Veefin Acquires Singapore-Based GenAI Startup Walnut in Fourth Deal of the Year

Digital supply chain finance and lending platform Veefin has acquired Singapore-based GenAI company Walnut in an all-cash transaction, marking its first international acquisition and its fourth this year following three domestic purchases since June.

Veefin announced it has acquired a 50% stake in Walnut, which provides technology solutions to banks and financial institutions for managing complex, large-scale data sets. Veefin highlighted Walnut's GenAI capabilities in transforming vast amounts of unstructured data into precise insights, which will support credit decisioning as Veefin Group develops “tech-first solutions for working capital management.” Raja Debnath, Chairman and Co-Founder of the Veefin Group, stated, “GenAI is essential for the group, and Walnut aligns perfectly with Veefin’s ecosystem.”

Founded in 2020, Walnut will continue to operate independently post-acquisition. Walnut’s flagship product, Vegaspread, streamlines complex financial data by extracting critical insights from various multi-format, multi-layout, and lengthy reports, including Annual Financial Statements (AFS) and the Notes to Accounts, in just minutes.

Walnut’s client list includes DBS, Bank of Singapore, Amret, and RCBC, while Veefin has over 500 clients across banking, financial institutions, and corporate sectors. The acquisition comes amid a surge in demand for GenAI capabilities both in India and globally. Veefin noted that India’s GenAI market, valued at $1.1 billion in 2023, is projected to grow at a CAGR of 48% to reach $17 billion by 2030, according to industry reports.

Bala Iyer, Co-founder & CEO of Walnut, said, “Veefin Group is a significant player in the SaaS ecosystem, especially for banking technology infrastructure solutions. Our product is an ideal addition to their extensive ecosystem. With plans to expand globally and in India, our goal is to become the leading GenAI tool for fast, intelligent credit decisioning.”

Earlier this year, Veefin acquired loan origination platform EpikIndifi in September, the Indian branch of global tech firm Nityo Infotech in August, and GST compliance and accounts automation solution provider Regime Tax Solutions in June.

GenAI Surge Drives Revenue for AWS, Azure, and Google Cloud

As the latest quarter concludes, AIM reviews the performance of major cloud service providers. Amazon Web Services (AWS) reported $27.5 billion in revenue, a 19% year-over-year increase. Microsoft’s Intelligent Cloud earned $24.1 billion, up 20%, while Google Cloud's revenue rose by 35% to $11.4 billion. AWS leads the market with a 31% share, followed by Microsoft Azure at 20% and Google Cloud at 12%.

During an earnings call, Microsoft CEO Satya Nadella announced that the company's AI business is on track to surpass a $10 billion annual revenue run rate in the next quarter, which he said would make Microsoft the fastest company to reach that milestone. Nadella noted a rise in customers using Azure AI to build copilots and agents, with Azure OpenAI usage doubling over the past six months. Companies like Grammarly, Harvey, Bajaj Finance, Hitachi, KT, and LG have moved from testing to full production. He also highlighted that GE Aerospace has built a digital assistant on Azure OpenAI that has processed over 500,000 internal queries and 200,000 documents in just three months.

Microsoft was the first major cloud provider to deploy NVIDIA’s Blackwell system with GB200-powered AI servers. It has also expanded its large language model (LLM) offerings, adding support for OpenAI’s latest model family, o1, and introducing industry-specific multimodal models, including those for medical imaging. However, Microsoft expects a $1.5 billion loss from its OpenAI investment.

Meanwhile, AWS is optimistic about its generative AI business. AWS chief Andy Jassy reported that its AI business has a multibillion-dollar revenue run rate, growing at a triple-digit year-over-year rate, faster than AWS grew in its early stages. Jassy said Amazon will invest around $75 billion in 2024, focusing mainly on infrastructure and data centers, with most of it allocated to AWS due to generative AI demand. Like Microsoft’s OpenAI partnership, AWS is collaborating with Anthropic, recently adding Claude 3.5 Sonnet to Amazon Bedrock, along with Meta’s Llama 3.2 models, Mistral’s Large 2 models, and several Stability AI models.

Jassy noted strong adoption of Amazon Q, AWS’s generative AI-powered assistant for software development, which has achieved the industry’s highest acceptance rates for multiline code suggestions. Amazon recently added an inline chat feature to Amazon Q, powered by Claude 3.5 Sonnet.

Google CEO Sundar Pichai shared that Google Gemini API calls have grown 14x in the past six months. Google Cloud’s Vertex AI platform offers a complete suite of MLOps tools, including the Gemini API.

All three cloud providers aim to reduce cloud costs for their clients. Jassy highlighted AWS’s custom silicon, Trainium and Inferentia, as cost-effective solutions for inference. He revealed that Trainium2 will be available soon, promising attractive price-performance benefits for customers.

Similarly, Google is developing Trillium, its sixth-generation Tensor Processing Unit (TPU). Pichai shared that LG AI Research has cut inference processing time by over 50% and reduced costs by 72% using a combination of TPUs and GPUs. Google CFO Anat Ashkenazi reported a $13 billion capital expenditure for the quarter, with 60% directed to servers and 40% to data centers and networking.

Microsoft is also developing Maia 100, an AI accelerator for large-scale AI workloads on Azure. Nadella emphasized that Microsoft isn’t selling GPUs for others to train models; instead, AI-related revenue is primarily generated through inference, meeting established enterprise needs. CFO Amy Hood added that revenue growth from inference funds additional training investments, highlighting AI as a continuous cycle of growth and development. As Microsoft, AWS, and Google compete, the focus now shifts toward developing agents powered by LLMs, demanding even more computing power.

Open-Source Generative AI: Balancing Innovation and Risks

Open source has transformed software development from the early days of coding, and now, as generative artificial intelligence (GenAI) reshapes industries, open-source GenAI is capturing attention. But what unique advantages—and risks—does open-source GenAI bring?

Powered by large language models (LLMs), GenAI has already impacted communication, coding, and customer support with tools like ChatGPT and Copilot. Traditionally, many LLMs have been proprietary, limiting public access and modifications.

In contrast, open-source GenAI models offer developers transparency, customization, and often lower costs. These models can be fine-tuned on popular cloud platforms like AWS, Google Cloud, and Microsoft Azure, which can improve model accuracy by 5-10% for specific applications, according to GitHub. Open-source models are especially appealing to companies seeking tailored AI tools for specialized tasks. With open-source code, users can examine and adjust the underlying model, providing a level of transparency that AI ethicists support, as it allows for external audits to address security and bias.

Many vendors promote open-source GenAI’s benefits, including transparency, efficiency, modularity, and customizable code. However, open-source models also come with risks: unlike proprietary models that are vendor-supported, open-source models require users to manage their own security updates and ensure compliance with industry standards.

Who’s Leading the Charge?

Companies like Red Hat, Intel, and IBM are advocating for an open future in GenAI. IBM's latest Granite AI models, for example, were released under the Apache 2.0 license, which allows integration with proprietary code and royalty-free modifications. Jay Lyman, a senior research analyst at S&P Global Market Intelligence, notes that Apache 2.0 can offer cost and scalability benefits in GenAI.

In May, IBM and Red Hat launched InstructLab, an open-source project enabling communities to refine and merge changes to LLMs without retraining the model from scratch.

A Question of Semantics

The exact definition of "open-source GenAI" remains a topic of debate. Last week, the Open Source Initiative issued its first definition for AI-specific open source. “It’s a starting point for the discussion around what defines open-source Gen AI,” said Lyman.

While the Granite models reflect an open-source approach, IBM’s Director of Research Darío Gil acknowledged that Granite 3.0 might not fully meet the Open Source Initiative’s definition. “There is an ongoing discussion, and it’s helpful for the industry to clarify,” he said.

Even major GenAI players like Meta and X.ai have faced skepticism for calling models "open source" while restricting certain data. For example, X.ai hasn’t fully disclosed the code or training data for its Grok model.

Similarly, the Technology Innovation Institute in Abu Dhabi released Falcon 40B under Apache 2.0, but its larger model, Falcon 180B, carries restrictions, raising questions about what qualifies as truly open source.

Is the Future Open?

As open-source GenAI gains traction, industry analysts debate its potential to become the norm. Open-source models are advancing in quality and speed, said Arun Chandrasekaran, VP analyst at Gartner, though it’s unclear if they can sustain competition with proprietary models in the long term.

Chandrasekaran observed that the debate on open models has intensified, as providers weigh access and customization against competitive secrecy. “Many companies have started as open source but switched to closed source to monetize effectively, an issue that may be significant here due to the high costs of building and rapidly depreciating models,” he told Fierce Network.

A recent S&P Global Market Intelligence survey found that most companies use a mix of open-source and commercially licensed models. Lyman predicts this trend will continue, as each approach offers different benefits. “It’s likely to evolve like enterprise software, where open source is central to development, but proprietary software remains common,” he said.

Ultimately, companies must weigh the flexibility of open-source GenAI against the reliability of proprietary models. With competition among AI providers intensifying, only time will tell what role open models will play in GenAI’s future.

Chinese Military Adopts Facebook's Open Source AI for Strategic Advancements

According to a recent report, Meta’s open-source Llama model is already in use by the Chinese military. The tool, named "ChatBIT," is reportedly being developed to gather intelligence and aid in operational decision-making, as described in an academic paper obtained by Reuters.

Meta’s president of global affairs, Nick Clegg, quickly responded in a blog post, published just three days after the Reuters report, stating that Meta is working to make Llama accessible to US government agencies and national security contractors.

In the post, Clegg emphasizes that AI models like Llama can support both the prosperity and security of the US, and he argues for establishing US open-source standards in the competitive global AI landscape. The blog post, noted by Gizmodo, appears strategically timed, given China’s People’s Liberation Army was using the technology before the US government even considered similar applications.

"Please and Thank You"

Reuters points out that Meta’s blog post contradicts its own acceptable use policy, which prohibits military, warfare, nuclear, and espionage applications. As an open-source model, however, Llama is effectively beyond Meta’s reach in practice, leaving the company without control over how it is used.

Clegg argues that open-sourcing AI would help the US compete globally, especially against nations like China, which are heavily investing in developing their own open-source models. He writes that it is in the interest of the US and democratic nations to see American open-source models succeed over those from China and other countries.

Whether this reasoning will convince US officials, especially at the Pentagon, remains uncertain. The situation highlights a national security gap, as adversaries of the US now benefit from similar technological advancements.

Recently, the Biden administration announced it is finalizing measures to restrict US investments in AI advancements within China that could pose security risks. But given Meta’s open-source approach, these restrictions may come too late.

Meta, however, downplays the impact, with a spokesperson saying that an outdated version of an American open-source model has minimal influence in light of China’s trillion-dollar investments aimed at surpassing the US in AI.

Judge Dismisses News Outlets’ Copyright Lawsuit Against OpenAI Over AI Training Data

A New York federal judge dismissed a lawsuit on Thursday against AI giant OpenAI, filed by news outlets Raw Story and AlterNet, which alleged that OpenAI improperly used their articles to train its language models. U.S. District Judge Colleen McMahon ruled that the outlets could not demonstrate sufficient harm to uphold the case but allowed them the option to file a revised complaint, expressing "skepticism" about their ability to "allege a cognizable injury." Raw Story, which acquired AlterNet in 2018, was represented by attorney Matt Topic from Loevy + Loevy, who stated that the outlets are "confident we can address the court's concerns in an amended complaint."

An OpenAI spokesperson responded, asserting that their AI models are built using publicly available data in a way that aligns with fair use and legal precedent. The lawsuit, originally filed in February, claimed that thousands of Raw Story and AlterNet articles were used without permission to train ChatGPT, resulting in the reproduction of their copyrighted content upon request.

This case is part of a broader series of lawsuits against OpenAI and other tech firms by content creators, including authors, artists, and publishers, regarding the data used to train generative AI. The first lawsuit from a media outlet came in December, filed by The New York Times against OpenAI.

Unlike other similar cases, Raw Story and AlterNet’s complaint accused OpenAI of unlawfully removing copyright management information (CMI) from their articles, rather than alleging copyright infringement directly. Judge McMahon sided with OpenAI, ruling that the lawsuit should be dismissed.

McMahon clarified, "The real issue here is not the removal of CMI," but rather the "use of Plaintiffs' articles to develop ChatGPT without compensation," adding that the harm cited by the outlets does not meet the threshold needed to sustain the lawsuit.

Judge McMahon noted that it remains uncertain if other legal theories might address this type of harm but stated that the question was beyond the scope of this case.

The case, Raw Story Media v. OpenAI Inc., is filed in the U.S. District Court for the Southern District of New York under No. 1:24-cv-01514. Representing Raw Story are attorneys Matt Topic, Jon Loevy, and Michael Kanovitz of Loevy + Loevy, while OpenAI’s defense team includes Joe Gratz, Vera Ranieri, Rose Lee of Morrison & Foerster; Joseph Wetzel, Andy Gass, Sy Damle, and Alli Stillman of Latham & Watkins; and Bob Van Nest, Jamie Slaughter, Paven Malhotra, Michelle Ybarra, Nick Goldberg, Tom Gorman, and Katie Lynn Joyce of Keker Van Nest & Peters.

Mistral AI Launches Multilingual Moderation API to Rival OpenAI in Content Safety

French AI startup Mistral AI unveiled a new content moderation API on Thursday, highlighting its efforts to rival OpenAI and other AI leaders while tackling increasing concerns around AI safety and content control.

This moderation service, powered by Mistral’s fine-tuned Ministral 8B model, is designed to detect potentially harmful content across nine categories, including sexual content, hate speech, violence, dangerous activities, and personal information. The API supports both raw text and conversational content analysis. The launch is timely for the AI industry, as pressure mounts for companies to strengthen their technology safeguards. Last month, Mistral joined other AI companies in signing the UK AI Safety Summit accord, committing to responsible AI development.

Already implemented in Mistral’s Le Chat platform, the moderation API supports 11 languages: Arabic, Chinese, English, French, German, Italian, Japanese, Korean, Portuguese, Russian, and Spanish. This multilingual coverage gives Mistral an advantage over some competitors that focus mainly on English.
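The article doesn’t show what a call to such a moderation endpoint looks like, but the flow can be sketched roughly as follows. This is a minimal illustration, not official usage: the endpoint URL, model name, and request schema below are assumptions for demonstration purposes, so consult Mistral’s API documentation for the real values.

```python
import json
import os
import urllib.request

# Assumed endpoint and model name -- placeholders, not confirmed by the article.
MODERATION_URL = "https://api.mistral.ai/v1/moderations"
DEFAULT_MODEL = "mistral-moderation-latest"


def build_moderation_request(texts, model=DEFAULT_MODEL):
    """Build the JSON payload for a raw-text moderation call (assumed schema)."""
    return {"model": model, "input": texts}


def moderate(texts, api_key):
    """POST the texts to the moderation endpoint and return the parsed response,
    which would contain per-category flags (hate speech, violence, etc.)."""
    payload = json.dumps(build_moderation_request(texts)).encode("utf-8")
    req = urllib.request.Request(
        MODERATION_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # Only attempt a live call if a key is configured.
    key = os.environ.get("MISTRAL_API_KEY")
    if key:
        print(moderate(["Some user-generated text to screen."], key))
```

For conversational analysis, the payload would presumably carry a list of chat messages rather than raw strings, letting the classifier weigh each message in context, which matches Mistral’s stated emphasis on interpreting conversations rather than isolated text.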

“In recent months, we’ve noticed increasing interest in new LLM-based moderation systems, which can make content moderation more scalable and resilient,” Mistral stated. 

The release follows Mistral’s recent high-profile partnerships with companies like Microsoft Azure, Qualcomm, and SAP, marking the startup as a significant player in the enterprise AI space. Last month, SAP announced it would integrate Mistral’s models, including Mistral Large 2, into its infrastructure, offering customers secure AI solutions that meet European regulations.

Mistral’s approach is distinctive due to its dual emphasis on edge computing and extensive safety features. While companies such as OpenAI and Anthropic have largely focused on cloud-based solutions, Mistral’s strategy of enabling both on-device AI and content moderation responds to rising concerns about data privacy, latency, and compliance—factors especially relevant to European companies under strict data protection laws.

The technical sophistication of Mistral’s approach is also notable. By training its model to interpret conversational context rather than isolated text, Mistral aims to catch subtle forms of harmful content that basic filters might miss.

The moderation API is now available on Mistral’s cloud platform, with pricing based on usage. Mistral plans to enhance the system’s accuracy and expand its capabilities in response to customer feedback and evolving safety standards.

Mistral’s rapid rise underscores the dynamic nature of the AI industry. Just a year ago, the Paris-based startup didn’t exist. Now it’s helping shape enterprise perspectives on AI safety. In a landscape dominated by American tech giants, Mistral’s European focus on privacy and security could be a major competitive advantage.

Unlock the future of problem solving with Generative AI!

If you're a professional looking to elevate your strategic insights, enhance decision-making, and redefine problem-solving with cutting-edge technologies, the Consulting in the age of Gen AI course is your gateway. It is perfect for those ready to integrate Generative AI into their work and stay ahead of the curve.

In a world where AI is rapidly transforming industries, businesses need professionals and consultants who can navigate this evolving landscape. This learning experience arms you with the essential skills to leverage Generative AI to improve problem-solving and decision-making and to advise clients more effectively.

Join us and gain firsthand experience in how state-of-the-art GenAI can take your problem-solving skills to new heights. This isn’t just learning; it’s your competitive edge in an AI-driven world.

In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.

Highlights from GenAI, OpenAI, and ClosedAI: Dive into the latest projects and innovations from the leading organizations behind some of the most advanced models in open-source and closed-source AI.

Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.

Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realization of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!