Perfios Targets $500B Financial Sector With New GenAI Platform
Perfios' Bold New Offering.
Here is what’s new in the AI world.
AI news: Banking Meets AI
What’s new: Navigating the Risks of China's Generative AI Ecosystem
Open AI: The Open-Source Factor in 2024’s AI Policy Wars
OpenAI: Why Google's CEO Is Betting on OpenAI
Hot Tea: Built It, Now Regrets It?
Explore Gen Matrix Q2 2025

Uncover the latest rankings, insights, and real-world success stories in Generative AI adoption across industries.
See which organizations, startups, and innovators are leading the AI revolution and learn how measurable outcomes are reshaping business strategies.
The AI Revolution Comes to Finance: Perfios' Bold New Offering
Bengaluru-based B2B fintech leader Perfios, backed by Warburg Pincus and Kedaara Capital, has introduced a next-gen GenAI-powered Intelligence Stack designed to streamline operations in banking, financial services, and insurance (BFSI).
The company asserts that this AI-first framework can triple operational efficiency by automating complex workflows across financial institutions.
Key AI-Driven Solutions Launched:
Compass - A domain-specific internal chatbot for real-time assistance.
Data Intelligence Bridge - An advanced document processing and orchestration engine.
Prism - An API-first AI gateway for seamless integration.
Medical Insurance Claim Adjudication - AI-powered solution for faster, fraud-resistant claims processing.
Early internal trials have already demonstrated a 40% increase in productivity, underscoring the platform’s potential.
The Intelligence Stack leverages LLMs, machine learning, and vision-language models (VLMs), supported by a 30+ member data science team alongside ML engineers and domain specialists.
Perfios made a powerful impact at #DFS2025 as a Platinum Sponsor, showcasing how real-time credit decisioning and advanced data analytics are reshaping the BFSI landscape.
Backed by $384M in funding and trusted by 1000+ financial institutions, Perfios brought bold ideas, deep
— Dubai FinTech Summit (@DubaiFinTechSum)
7:37 AM • Jun 2, 2025
Unlike traditional bolt-on AI solutions, Perfios has rebuilt financial workflows from scratch, embedding intelligence at every critical decision point.
“Our GenAI Stack acts as an intelligent assistant across functions, whether it’s a risk analyst assessing loans, a relationship manager retaining clients, or an insurance claims officer detecting fraud in real time. We’re enabling smarter, faster decisions through context-aware AI.”
Expanding Global Footprint
Founded in 2008, Perfios now serves 1,000+ financial institutions across 18 countries, processing 8.2 billion data points annually through 75+ AI-driven products.
Strategic Acquisitions
April 2025: Acquired IHX, a healthcare data exchange platform, to strengthen its insurance tech stack.
Earlier in 2025: Bought Clari5 (fraud detection) and CreditNirvana (AI-powered debt recovery) to expand its AI capabilities.
The US-China AI Divide: Why Tool Choices Matter More Than Ever
A new report by Harmonic Security reveals widespread, unauthorized use of China-developed generative AI tools among employees at US and UK companies, creating significant data security and compliance risks.
The study uncovered hundreds of cases where sensitive corporate data, including source code, M&A documents, and customer records, was uploaded to Chinese AI platforms without security oversight.
Key Findings from the 30-Day Study (14,000 Employees Analyzed):
8% of employees used Chinese GenAI tools, including DeepSeek, Kimi Moonshot, Baidu Chat, Qwen (Alibaba), and Manus.
535 incidents involved more than 17 MB of sensitive data; roughly one-third was proprietary source code, with the rest including financial reports, legal contracts, and PII.
DeepSeek alone accounted for 85% of data exposure cases, followed by Kimi Moonshot and Qwen.
Scientists from the Hong Kong University of Science and Technology have developed an innovative AI model that can create 3D images of patients’ bones and organs in less than a minute, much faster than conventional approaches, significantly cutting radiation exposure by up to 99%.
— China Science (@ChinaScience)
1:00 AM • Jul 21, 2025
Why This Matters
No Transparency: Many Chinese AI platforms have unclear data policies, with some reserving rights to retain and reuse uploaded data for model training.
Shadow AI Adoption: Employees, especially in developer-heavy firms, are prioritizing productivity over compliance, bypassing IT governance.
Regulatory & IP Risks: Companies in finance, healthcare, and tech face legal and competitive threats if proprietary data is mishandled.
The Governance Gap: Awareness Isn’t Enough
Harmonic warns that traditional security policies are failing to keep pace with employee-driven AI adoption. To mitigate risks, firms need:
Real-time AI activity monitoring.
Granular controls (e.g., block tools by country, restrict sensitive uploads); a minimal sketch of such a control follows this list.
Automated user alerts to guide compliant usage.
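To make the "granular controls" idea concrete, here is a minimal sketch of an egress-policy check that blocks requests to a configurable list of GenAI domains and raises a real-time alert when an upload looks sensitive. The domain list, patterns, and policy logic are hypothetical illustrations, not Harmonic Security's actual product behavior.

```python
# Illustrative only: a simplified egress-policy check of the kind such
# controls imply. Domains and patterns below are hypothetical examples.
import re
from urllib.parse import urlparse

BLOCKED_GENAI_DOMAINS = {          # hypothetical blocklist by tool origin
    "chat.deepseek.com",
    "kimi.moonshot.cn",
    "tongyi.aliyun.com",
}

SENSITIVE_PATTERNS = [             # crude markers for risky uploads
    re.compile(r"BEGIN (RSA|EC) PRIVATE KEY"),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US SSN-like pattern
    re.compile(r"\b(def|class|import)\b"),   # source-code hints
]

def evaluate_upload(url: str, payload: str) -> str:
    """Return 'block', 'alert', or 'allow' for an outbound GenAI request."""
    host = urlparse(url).hostname or ""
    if host in BLOCKED_GENAI_DOMAINS:
        return "block"                       # tool blocked outright by origin
    if any(p.search(payload) for p in SENSITIVE_PATTERNS):
        return "alert"                       # allow, but warn the user in real time
    return "allow"

print(evaluate_upload("https://chat.deepseek.com/api", "quarterly report"))  # block
print(evaluate_upload("https://example-genai.com/api", "import secrets"))    # alert
```

In practice, checks like this would sit in a secure web gateway or browser extension, where they can also log incidents for security review rather than relying on policy documents alone.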
China's only good AI model is not DeepSeek.
There are TEN+ top tier models.
The US has only 5 labs—OpenAI, Anthropic, Google, Meta, xAI—playing at this scale.
Not to mention more Vertical Large Models serving the manufacturing industry - the US lacks manufacturing.
— ShanghaiPanda (@thinking_panda)
7:47 AM • Jan 31, 2025
Proactive Solutions Over Reactive Policies
As unsanctioned GenAI use grows, Harmonic emphasizes that enforced technical controls, not just training, are critical. Their platform helps enterprises:
Detect and block high-risk AI interactions.
Prevent data leaks without stifling innovation.
Balance AI productivity with security.
With 1 in 12 employees already using Chinese GenAI tools, companies must act now to secure data, comply with regulations, and protect intellectual property before a breach forces their hand.
Why Trump’s AI Playbook Bets Big on Open-Source Innovation
The Trump administration's newly unveiled Artificial Intelligence Action Plan positions open-source AI as both an economic catalyst and a geopolitical tool, marking a notable shift in federal tech policy.
The framework argues that U.S.-led open models could set global standards while advancing American interests abroad.
Key Elements of the Strategy
Geostrategic Vision: The plan frames open-source AI as critical for influencing international norms, stating "open-weight models could become global standards" in academia and industry.
Five-Point Support Plan:
Boost federal R&D funding for open-source AI.
Direct the NTIA to promote adoption among SMEs.
Partner with tech giants on infrastructure.
Expand the National AI Research Resource.
Reduce compute costs for startups/researchers.
Balanced Approach: While endorsing openness, the policy avoids mandates, leaving release decisions to developers.
More Buttigieg on AI. This week, the Trump administration announced a framework where Big Tech is largely given free range over the applications of this technology in public life, punishing states that try to regulate it in ways the administration disagrees with.
— chyea ok (@chyeaok)
4:39 PM • Jul 25, 2025
Industry Reactions
Daniel Castro (ITIF): Called it the "right call," arguing open-source fuels innovation and extends U.S. tech leadership.
Mark Surman (Mozilla): Urged federal procurement reforms to create demand, noting open-source struggles to find commercial footing.
Cautious Notes:
Whitney McNamara (Beacon Global): Acknowledged open-source’s potential but warned against blanket rules, citing DoD’s need for proprietary solutions in some cases.
Omissions and Opportunities
The plan stops short of pushing federal agencies to adopt open-source AI, a lever advocates say could transform the market. Critics maintain security risks require selective use of closed systems, especially in defense.
By treating open-source AI as both an innovation engine and diplomatic asset, the administration aims to counterbalance China’s tech ambitions while avoiding overregulation.
Yet its real-world impact hinges on execution, particularly whether federal investments can spur widespread adoption without compromising security.
With datamanagement.ai you gain:
Seamless Data Unification - Break down silos without disruption
AI-Driven Migration - Cut costs and timelines by 50%
Future-Proof Architecture - Enable analytics, AI, and innovation at scale
Outperform competitors or fall behind. The choice is yours.
Why Google Cloud's OpenAI Move Could Shake Up the AI Race
In a striking move, Google CEO Sundar Pichai announced a cloud computing partnership with OpenAI, its fiercest AI competitor, during Google’s Q2 2025 earnings call.
The deal sees Google supplying computational infrastructure to train and serve OpenAI’s models, even as ChatGPT threatens Google’s core search business.
Key Details of the Partnership
Strategic Balancing Act: While OpenAI’s ChatGPT disrupts Google Search, the deal makes OpenAI a major Google Cloud customer, boosting revenue ($13.6B in Q2, up 32% YoY).
Resource Play: OpenAI, constrained by Nvidia GPU shortages, adds Google Cloud to its suppliers (alongside Microsoft and Oracle) for extra capacity. Google touts its TPU chips and GPU stockpile as competitive advantages.
Pichai’s Pragmatism: “Google Cloud is an open platform… we look forward to investing more in this relationship,” he said, sidestepping tensions over AI competition.
AI runs better here 👇
— Google Cloud Tech (@GoogleCloudTech)
9:35 PM • Jul 24, 2025
Broader Context
AI Ecosystem Growth: Google Cloud also powers Anthropic, Safe Superintelligence (Ilya Sutskever), and Fei-Fei Li’s World Labs, leveraging its hardware edge.
Search vs. AI Tensions: Gemini (450M users) and AI Overviews (2B users) show traction, but their business impact and how much they cannibalize Search remain unclear.
Historical Irony: The deal echoes Google’s early Yahoo partnership, which helped it eventually dominate search. Could history repeat?
The Bigger Picture
For Google, the partnership is a double-edged sword:
Cloud revenue surges from AI demand.
Fueling a competitor that could erode its $200B+ search empire.
“It’s hard to believe Pichai is genuinely thrilled,” analysts note, but with $10B in added 2025 capex for AI, Google seems willing to risk short-term friction for long-term cloud dominance.
ChatGPT Addiction? Altman Calls Out Risks for Young Users
OpenAI CEO Sam Altman has voiced serious concerns about young people's growing emotional dependence on AI chatbots like ChatGPT, calling it a disturbing trend during his appearance at a Federal Reserve conference this week.
Key Concerns Raised by Altman
"Can't Decide Without ChatGPT" Syndrome: Altman revealed that many young users treat the AI as a life coach, confessing, "I’m gonna do whatever it says" for major decisions.
Psychological Risks: He warned this over-reliance could be harmful, noting it’s a "really common thing" among younger demographics.
Broader Trust Issues: His comments align with Geoffrey Hinton’s recent admission that he "tends to believe GPT-4" even when he knows it’s wrong, highlighting AI’s persuasive power despite flaws.
Listen carefully to what Sam Altman says here before you use ChatGPT…
“If you go talk to ChatGPT about your most sensitive stuff and then there's a lawsuit, we could be required to produce that … It makes sense to … really want the privacy clarity before you use it a lot.”
— Chief Nerd (@TheChiefNerd)
7:20 AM • Jul 24, 2025
AI’s Dual-Edged Sword: Productivity vs. Danger
Financial Sector Vulnerabilities: Altman flagged voice cloning scams and warned that deepfake videos will soon bypass facial authentication.
"Voiceprints as security? Crazy," he said, urging banks to outsmart AI-driven fraud.
Predicted an "impending fraud crisis" as scams evolve from fake calls to hyper-realistic video deception.
Why This Matters
While AI boosts efficiency, Altman’s warnings underscore:
Psychological risks of treating chatbots as infallible guides.
Security gaps as criminals weaponize AI for fraud.
As AI integrates deeper into daily life, healthy skepticism and stronger safeguards are urgently needed, before overtrust leads to real-world harm.
Towards MCP: Pioneering Secure Collaboration in the Age of AI & Privacy

Towards MCP is a cutting-edge platform at the intersection of privacy, security, and collaborative intelligence. Specializing in Multi-Party Computation (MPC), we enable secure, decentralized data processing, allowing organizations to analyze and train AI models on encrypted data without exposing sensitive information.
Multi-Party Computation (MPC) Solutions
Privacy-preserving collaborative computing for secure data analysis.
Enabling organizations to jointly process encrypted data without exposing raw inputs (see the toy sketch below).
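For readers new to MPC, here is a toy sketch of additive secret sharing, the simplest MPC building block: three parties learn only the sum of their inputs, while no single share reveals any party's raw value. This is a generic illustration under simplified assumptions, not Towards MCP's actual protocol.

```python
# Toy additive secret sharing: the sum of private inputs is computed
# without any party's raw value ever being revealed. Illustrative only.
import secrets

MODULUS = 2**61 - 1  # arithmetic is done modulo a fixed prime

def share(value: int, n_parties: int) -> list[int]:
    """Split a private value into n random shares that sum to it mod MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % MODULUS

# Each party secret-shares its input; shares are added component-wise, and
# only the combined sums are ever revealed, never the individual inputs.
inputs = [42, 17, 99]                      # private values held by three parties
all_shares = [share(x, 3) for x in inputs]
sum_shares = [sum(col) % MODULUS for col in zip(*all_shares)]
print(reconstruct(sum_shares))             # 158, computed without exposing any input
```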
AI & MPC Integration
Combining MPC with AI/ML to train models on decentralized, sensitive datasets (e.g., healthcare, finance).
Blockchain & Decentralized Privacy
Secure smart contracts and decentralized applications (dApps) with MPC-enhanced privacy.
Custom MPC Development
Tailored MPC protocols for industries requiring high-security data collaboration.
Research & Thought Leadership
Advancing MPC technology through open-source contributions and AGI-aligned innovation.
Your opinion matters!
Hope you enjoyed reading this edition of our newsletter as much as we enjoyed writing it.
Share your experience and feedback with us below; we take your critique seriously.
How did you like today's edition?
Thank you for reading
-Shen & Towards AGI team