How Will Gen AI Transform Remote Work?


Welcome to Towards AGI, your premier newsletter dedicated to the world of Artificial Intelligence. Our mission is to guide you through the evolving realm of AI with a specific focus on Generative AI. Each issue is designed to enrich your understanding and spark your curiosity about the advancements and challenges shaping the future of AI.

Whether you're deeply embedded in the AI industry or just beginning to explore its vast potential, "Towards AGI" is crafted to provide you with comprehensive insights and discussions on the most pertinent topics. From groundbreaking research to ethical considerations, our newsletter is here to keep you at the forefront of AI innovation. Join our community of AI professionals, hobbyists, and academics as we pursue the ambitious path toward Artificial General Intelligence. Let’s embark on this journey together, exploring the rich landscape of AI through expert analysis, exclusive content, and engaging discussions.

How Will Gen AI Transform Remote Work?

The story of automation's impact on workers is clear: it initially hit those in manufacturing and trade hardest. The mechanized loom forced independent weavers into factory jobs, and Ford’s assembly line transformed the roles of skilled mechanics and engineers into repetitive tasks. Eventually, office support staff experienced a similar shift; the advent of networked PCs turned their jobs into filling out software forms and ticking boxes. This often led to lower pay, fewer skills, less variety, reduced human interaction, and diminished dignity.

Conversely, most knowledge workers have benefited from automation. It has brought them more challenging, creative, and skill-enhancing work, along with better pay and higher status. The pandemic and the rise of remote work further enhanced their autonomy and work-life balance. Decades of research in labor economics, psychology, and sociology show that white-collar workers gain more from automation than other groups. A senior manager with an MBA, whose work involves complex, judgment-based tasks, benefits greatly from technology that automates routine work. This phenomenon, known as skill-biased technical change, favors those with higher skills. However, this narrative is shifting.

The focus is now on physical presence. Remote workers, or those who could work remotely, may soon see their tasks heavily automated. Meanwhile, workers who need to be physically present for their jobs are safer from automation. This shift means that some remote workers will lose their jobs, while many others will need to reskill to adapt to significant job changes.

New predictive models about generative AI (gen AI) exposure explain this shift. Research shows that jobs more exposed to gen AI have greater potential productivity boosts. For example, customer service representatives using gen AI could see 50% productivity gains in half their tasks. Blockchain engineers, writers, and mathematicians are also highly exposed. Overall, 80% of working adults have jobs with at least 10% exposure to gen AI, and 19% have jobs with at least 50% exposure. This suggests that most workers can significantly boost their productivity through gen AI.

This trend is already evident. Microsoft’s survey of over 31,000 people across 31 countries revealed that by May, 75% of employees were using gen AI daily, nearly double the figure from January. Additionally, 78% of these workers adopted these tools on their own. Employers are also catching up; Gartner’s October 2023 survey found that 55% of companies were piloting or deploying LLM projects.

The rapid pace and broad scope of gen AI adoption are striking. From a historical and macroeconomic perspective, this could lead to an unprecedented rate of job change. Unlike past general-purpose technologies like internal combustion engines, telephony, and the internet, which took decades to become widespread due to costs, distribution challenges, and the need for new infrastructure, gen AI is instantly available, free, and useful to 2.6 billion workers worldwide. This means we must quickly adapt our work practices to incorporate it, unlike the gradual adjustments seen with earlier technologies. While we still have professions like accountants, barbers, and engineers, their roles have evolved significantly, and this trend will continue with gen AI.

Amazon Prime Video Introduces Gen-AI Based Recommendations

Prime Video is launching a global update to enhance user experience and provide AI-driven recommendations. This update aims to help users distinguish between free content included with their Prime membership and additional paid content more easily, especially as Prime Video offers more add-on subscriptions.

Key features of the update include:

  • Improved Personalized Recommendations: Powered by generative AI, making it simpler for users to find relevant content.

  • Category Browsing: Users can explore content through categories like “Top 10 in India.”

  • Enhanced Animations: Smooth page transitions and zoom effects for a more seamless browsing experience.

  • Simplified Synopses: Utilizes Large Language Models (LLMs) to offer clearer descriptions of TV shows and movies.

  • New Navigation Bar: Includes sections for “Home,” “Movies,” “TV Shows,” and “Live TV,” as well as active Prime Video Channels add-on subscriptions.

  • Prime Destination: A new section in the navigation bar for browsing movies and TV shows available at no extra cost with a Prime membership.

  • Add-On Subscription Management: Users can browse, sign up for, and manage their active add-on subscriptions directly from the navigation bar.

  • Subscriptions Section: Dedicated to managing additional add-ons and finding new options recommended based on user preferences.

  • Subscription Logos: Clear indicators of Prime and add-on subscription logos, like Lionsgate Play or Crunchyroll, will appear on the hero and title cards of content.

  • Live TV Recommendations: 24/7 stations that will start playing automatically.

This update aims to streamline the user experience and make content discovery more intuitive.

Abridge Partners with Epic and Mayo Clinic to Enhance Nursing Workflows with Gen AI

For the past six years, Abridge has been developing generative AI tools to assist doctors with medical documentation. Now, they are extending this technology to hospital nurses.

In partnership with the Mayo Clinic and health IT leader Epic, Abridge has introduced a generative AI-powered ambient documentation workflow for nurses. This new tool integrates seamlessly into the existing Epic inpatient nursing workflows.

This product was developed as part of Abridge's involvement in the Epic Workshop program, which began last year. This program includes third-party vendors collaborating with Epic to co-develop technology.

Nurses at Mayo Clinic will be instrumental in designing and testing the solution, focusing on workflows where the AI tool can have the greatest impact, according to Shiv Rao, M.D., CEO and founder of Abridge.

"Nurses are the backbone of the healthcare system. Providing them with advanced technology that can relieve their burden and allow them to focus on patient care is incredibly important," said Rao, a cardiologist. "I'm looking forward to the coming weeks and months as we measure the impact of this technology and hopefully expand its use across the healthcare system."

Abridge, recognized as one of Fierce Healthcare's Fierce 15 of 2024, uses AI to enhance the speed and accuracy of medical note-taking, leveraging a proprietary dataset from over 1.5 million medical encounters. The company's AI converts patient-clinician conversations into structured clinical note drafts in real time, which are then seamlessly integrated into the EMR.
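As a rough illustration of the pattern described above (not Abridge's actual pipeline), conversation-to-note drafting boils down to prompting a model with a transcript and parsing the sections it returns. The `call_llm` stub, the prompt wording, and the SOAP section names below are all assumptions made for the sketch:

```python
# Generic sketch of an ambient-documentation flow: transcript in,
# structured note draft out. `call_llm` stands in for any LLM API.
def call_llm(prompt: str) -> str:
    # Placeholder: a real system would call a model endpoint here.
    return "Subjective: ...\nObjective: ...\nAssessment: ...\nPlan: ..."

def draft_note(transcript: str) -> dict:
    """Turn a patient-clinician transcript into a sectioned note draft."""
    prompt = (
        "Summarize this clinical conversation into SOAP sections "
        "(Subjective, Objective, Assessment, Plan):\n" + transcript
    )
    raw = call_llm(prompt)
    sections = {}
    for line in raw.splitlines():
        if ":" in line:
            header, body = line.split(":", 1)  # split on first colon only
            sections[header.strip()] = body.strip()
    return sections

note = draft_note("Patient reports mild chest pain since Tuesday.")
print(sorted(note))
```

The clinician then reviews and edits the draft before it is written back to the EMR; the parsing step is where a real product would enforce its own note schema.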

Abridge was founded on the belief that clinician-patient conversations are central to healthcare, according to Rao. His personal experiences with the healthcare system highlighted the need for better communication and the potential for technology to bridge the gap between healthcare conversations and subsequent actions. Motivated by both personal and professional experiences, Rao launched Abridge in 2018 with Florian Metze, Ph.D., and Sandeep Konam.

Meta Unveils the Largest and Most Advanced Open-Source AI Model

In April, Meta hinted at developing a groundbreaking AI model: an open-source version that rivals the top private models from companies like OpenAI. Today, Meta unveils Llama 3.1, the largest open-source AI model to date. Meta claims it outperforms GPT-4o and Anthropic’s Claude 3.5 Sonnet on several benchmarks. Additionally, the Llama-based Meta AI assistant is now available in more countries and languages, featuring a new capability to generate images based on specific likenesses. CEO Mark Zuckerberg predicts that Meta AI will become the most widely used assistant by year’s end, surpassing ChatGPT.

Llama 3.1 is much more complex than the smaller Llama 3 models released a few months ago. The largest version boasts 405 billion parameters and was trained using over 16,000 of Nvidia’s high-cost H100 GPUs. While Meta hasn't disclosed the development costs for Llama 3.1, the Nvidia chips alone likely cost hundreds of millions of dollars.

Given these expenses, why is Meta offering Llama for free under a license requiring approval only from companies with hundreds of millions of users? In a blog post, Zuckerberg argues that open-source AI models are advancing faster than proprietary ones, akin to how Linux became the dominant open-source operating system for most phones, servers, and gadgets today.

He sees this as a pivotal moment in the industry, where most developers will shift to using open-source AI. Zuckerberg compares Meta’s investment in open-source AI to its previous Open Compute Project, which saved the company billions by collaborating with external companies like HP to improve and standardize data center designs. He anticipates a similar outcome with AI, writing, “I believe the Llama 3.1 release will be an inflection point in the industry where most developers begin to primarily use open source.”

Apple Unveils Open-Source AI Model to Rival Meta

Apple is quickly emerging as a surprising leader in the open-source artificial intelligence movement, introducing a new 7B parameter model that anyone can use or adapt.

Developed by Apple's research division, this model is unlikely to feature in an Apple product, except for the insights gained during its training. However, it underscores Apple's dedication to expanding the broader AI ecosystem, including through open data initiatives.

This model is the latest addition to the DCLM family and has outperformed Mistral-7B in benchmarks, nearing the performance of similar-sized models from Meta and Google.

Vaishaal Shankar from Apple's machine learning team stated on X that these are the "best performing truly open-source models" available today. By truly open-source, he means that all the weights, training code, and datasets are publicly accessible alongside the model.

This announcement coincides with Meta's expected unveiling of its massive GPT-4 competitor, Llama 3 400B. It remains uncertain whether Apple plans to release a larger DCLM model in the future.

"The DCLM models are now available on Huggingface! According to our knowledge, these are by far the best performing truly open-source models (open data, open weight models, open training code)," the team posted on X on July 18, 2024.

Apple's DCLM (DataComp for Language Models) project involves researchers from Apple, the University of Washington, Tel Aviv University, and the Toyota Research Institute. The project's goal is to design high-quality datasets for training models.

Amid concerns about the data used in training some models and whether all content was properly licensed or approved, this initiative is significant. The team conducts various experiments using the same model architecture, training code, evaluations, and framework to identify the best data strategy for creating a model that is both high-performing and efficient.

This effort led to the creation of DCLM-Baseline, which was used to train the new models in both 7 billion and 1.4 billion parameter versions.

OpenAI’s Budget-Friendly Mini Model Challenges LLM Competitors

OpenAI's pricing for its new small language model, GPT-4o mini, which is 60% cheaper than GPT-3.5 Turbo and 40% cheaper than Google's Gemini 1.5 Flash, has intensified competition among Indian LLM (large language model) developers focusing on small, concise use cases for organizations.

Startup executives believe that OpenAI's pricing could lead to significant cost savings for scaling simple use-cases, like multilingual chatbots where each query involves 15-25 tokens. Additionally, OpenAI allows for multiple API calls, enabling users to integrate contextual data from various sources to address complex queries.
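To see why per-token pricing matters at chatbot scale, here is a back-of-the-envelope cost sketch. The per-million-token prices in the dictionary are illustrative placeholders, not figures from the article; substitute the provider's current rates:

```python
# Back-of-the-envelope cost comparison for a high-volume chatbot.
# Prices are illustrative placeholders (USD per 1M input tokens).
PRICE_PER_M_TOKENS = {
    "gpt-4o-mini": 0.15,
    "gpt-3.5-turbo": 0.50,
}

def monthly_cost(model, queries_per_day, tokens_per_query, days=30):
    """Estimated monthly input-token spend for a simple chatbot."""
    total_tokens = queries_per_day * tokens_per_query * days
    return total_tokens / 1_000_000 * PRICE_PER_M_TOKENS[model]

# 1M queries/day at ~20 tokens each (mid-point of the 15-25 range above)
for model in PRICE_PER_M_TOKENS:
    print(f"{model}: ${monthly_cost(model, 1_000_000, 20):,.2f}/month")
```

At these assumed rates, a 60-70% price cut compounds directly with query volume, which is why the savings are most visible in short, simple, high-frequency use cases like the multilingual chatbots mentioned above.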

“Beyond chatbots, companies in various sectors can benefit from lower-cost yet advanced LLMs for diverse applications. For example, legal documentation summaries and compliance in pharmacovigilance can use literature reviews,” said Sreeraman Thiagarajan, founder and CEO of Agrahyah Technologies, which specializes in language and diffusion models.

“Multilingual translation at scale is valuable in education, book publishing, and generating synthetic data for machine learning. Large enterprises can utilize AI to analyze vast amounts of unstructured data, such as social media conversations, to extract consumer insights,” he added.

OpenAI Aims to Create Its Own AI Chips, Reducing Dependence on NVIDIA

OpenAI is currently considering the development of its own AI chips to lessen its reliance on NVIDIA’s GPUs for AI models. CEO Sam Altman is in discussions with semiconductor company Broadcom about potentially manufacturing these proprietary chips.

This initiative aims to secure a steady supply of components and enhance infrastructure, including power and data centers. To support this goal, OpenAI has hired former Google employees with AI hardware expertise, indicating its larger ambition to establish a network of semiconductor factories, which could require billions in funding.

Broadcom's expertise in custom ASIC solutions aligns well with OpenAI's need for tailored accelerators. The company can deliver essential silicon solutions for data centers, including networking components, PCIe controllers, and SSD controllers.

This comprehensive product lineup could help OpenAI efficiently meet its data center needs. Additionally, Broadcom's proficiency in inter-system and system-to-system communication technologies could enhance OpenAI's overall AI infrastructure capabilities.

OpenAI has also opened an office in Japan to explore new revenue opportunities and foster collaborations with local businesses, governments, and research institutions. They have already partnered with organizations like Carnegie Mellon University and Khan Academy to develop personalized learning experiences using AI.

However, developing its own AI chip to rival NVIDIA's could take years of research and development. Altman plans to raise billions of dollars to establish a series of factories to produce AI semiconductors, with potential partners including Intel, Taiwan Semiconductor Manufacturing Company, and Samsung Electronics.

Cohere Secures $500 Million, Emerging as a Rival to OpenAI and Anthropic

Toronto-based AI firm Cohere has secured $500 million in a new funding round, raising its valuation to $5.5 billion and making it one of the most valuable AI firms globally. According to Bloomberg, the Series D round was led by Canadian pension investment manager PSP Investments, with additional backing from Nvidia, Cisco, AMD, and Fujitsu. Cohere had aimed to raise between $500 million and $1 billion and is still in discussions to raise additional funds in the same round. The company plans to use the funds to expand its teams and enhance its enterprise-grade AI offerings.

Founded in 2019, Cohere focuses on developing enterprise AI models for complex business challenges. With this funding, it is becoming a significant player in the AI market. Unlike direct competitors OpenAI or Anthropic, Cohere does not focus on consumer applications or artificial general intelligence (AGI). Instead, it specializes in large language models (LLMs) that offer enterprise-specific solutions.

Clients such as Notion Labs and Oracle use Cohere’s technology for tasks like powering chatbots, writing web copy, and summarizing documents. Cohere’s AI platform can be deployed on public clouds like Amazon Web Services (AWS) and Google Cloud, as well as on existing, virtual, or onsite private clouds. The company also operates a nonprofit research lab, Cohere for AI, which supports open science initiatives, including developing new language models and auditing translation data.

As of the end of March, Cohere reported having hundreds of client companies, generating approximately $35 million in annual revenue, up from around $13 million in Q4 2023. The company plans to double its 250-person workforce in 2024 and has offices in Toronto, San Francisco, London, and New York.

Learn AI-led Business & startup strategies, tools, & hacks worth a Million Dollars (free AI Masterclass) 🚀

This incredible 3-hour Crash Course on AI & ChatGPT (worth $399) designed for founders & entrepreneurs will help you 10x your business, revenue, team management & more.

It has been taken by 1 Million+ founders & entrepreneurs across the globe, who have been able to:

  • Automate 50% of their workflow & scale their business

  • Make quick & smarter decisions for their company using AI-led data insights

  • Write emails, content & more in seconds using AI

  • Solve complex problems, research 10x faster & save 16 hours every week

In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.

Highlights from GenAI, OpenAI, and ClosedAI: Dive into the latest projects and innovations from the leading organizations behind some of the most advanced AI models, both open-source and closed-source.

Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.

Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realization of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!
