Tokens will be the "Electricity" of the Future
Jensen Huang, the CEO of Nvidia, recently made a bold prediction: tokens, in the context of generative AI, will become the new "electricity" of the future. To understand this statement, we first need to grasp what tokens are in generative AI systems.

Tokens generated from AI Factories
What are Tokens in Generative AI anyway?
In generative AI models like GPT, Gemini, and Claude, text is first broken into units called tokens. A token can represent a word, a subword (like a prefix or suffix), or even a single character. Each token is mapped to an integer ID in the model's vocabulary, and those IDs are in turn mapped to high-dimensional vector representations (embeddings of floating-point numbers), which the model can process and learn from.
For example, the sentence "The quick brown fox jumps over the lazy dog" might be tokenised as:
["The", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"]
Each of these tokens would then be represented as a vector of numbers, typically with hundreds or thousands of dimensions.
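To make this concrete, here is a deliberately simplified Python sketch of the pipeline above. It splits on whitespace (real tokenisers use subword algorithms such as BPE or WordPiece), maps each token to an integer ID through a toy vocabulary, and looks up a tiny 4-dimensional embedding per token, standing in for the much larger dimensions used in practice.

```python
import random

sentence = "The quick brown fox jumps over the lazy dog"

# Step 1: tokenise. Whitespace splitting is a simplification;
# production tokenisers split more finely into subwords.
tokens = sentence.split()

# Step 2: map each token to an integer ID via a toy vocabulary.
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
token_ids = [vocab[tok] for tok in tokens]

# Step 3: each ID indexes a row of an embedding matrix. A tiny
# 4-dimensional random embedding stands in for the hundreds or
# thousands of dimensions a real model uses.
random.seed(0)
embedding = [[random.random() for _ in range(4)] for _ in vocab]
vectors = [embedding[i] for i in token_ids]

print(tokens)            # ['The', 'quick', 'brown', ...]
print(token_ids)         # nine integer IDs
print(len(vectors[0]))   # 4 dimensions per token
```

Note that "The" and "the" get different IDs here; real tokenisers make similar case-sensitive distinctions, which is one reason vocabularies run to tens of thousands of entries.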
How do Tokens work in Generative AI?
Generative AI models are trained on vast amounts of text data, learning the patterns and relationships between tokens. When presented with a prompt or input, the model generates a probability distribution over the next possible token based on the context and its training data. This process is repeated token by token, allowing the model to generate coherent and contextually relevant text.
The quality and coherence of the generated text depend on the model's ability to accurately predict the next token based on the preceding context. The more training data and computational power available, the better the model can capture the nuances and complexities of language, leading to more human-like and contextually appropriate outputs.
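The token-by-token generation loop described above can be sketched with a toy, hand-written probability distribution standing in for the neural network. The candidate tokens and their probabilities here are made up purely for illustration.

```python
import random

# Toy distribution over possible next tokens given the context
# "The quick brown". A real model computes these probabilities with
# a neural network over a vocabulary of tens of thousands of tokens,
# recomputing them after every token it emits.
next_token_probs = {"fox": 0.7, "dog": 0.2, "cat": 0.1}

def sample_next(probs, rng):
    # Draw one token according to the model's probability distribution.
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
context = ["The", "quick", "brown"]
for _ in range(3):  # generate three tokens, one at a time
    context.append(sample_next(next_token_probs, rng))

print(" ".join(context))
```

Sampling rather than always picking the most likely token is what lets the same prompt yield varied outputs; temperature and top-p settings in real systems reshape this distribution before sampling.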
As generative AI models continue to advance and find applications across various domains, from creative writing and coding to scientific research and product design, the demand for high-quality token generation will skyrocket. Tokens will become the currency that fuels this AI-driven economy through massive AI factories, much like electricity powers our modern world.
Tokens, like electricity, are fundamental units that power and enable modern technologies. Just as electricity revolutionised countless industries and transformed our world, tokens are the driving force behind the generative AI revolution and its remarkable capabilities.
Both tokens and electricity are invisible to the end-user yet essential for the functioning of various systems and applications. We don't see the flow of electrons, but we rely on electricity to power our devices. Similarly, while tokens are abstract concepts, they are indispensable for generative AI models to understand and generate human-like text.
Ok now why did Jensen predict Tokens will be the "Electricity" of the Future?
Jensen Huang's prediction stems from this fundamental role. Every interaction with a generative model, whether it is drafting text, writing code, or answering questions, consumes and produces tokens, so demand for token generation scales directly with the adoption of AI itself.
Furthermore, Huang envisions a future where AI models will generate synthetic data and train each other, leading to even larger and more powerful models that require immense computational resources and token generation capabilities. This self-perpetuating cycle of AI-driven token generation and model training will drive the demand for tokens to unprecedented levels, solidifying their status as the "electricity" of the generative AI era.
TheGen.AI News
Gen AI Services to become nearly half the Tech Market by 2033

NVIDIA dominates the AI chip market with a massive 95% share, while AMD trails behind at 3%. The overall market has boomed, with spending reaching $49 billion in 2023, a significant jump from $17 billion just a year prior (AIM Research). Several tech giants like Google, Microsoft, Apple, Meta, Intel, and AMD are investing heavily in developing custom AI chips, but experts believe they'll struggle to catch NVIDIA.
Cloud companies are reaping the benefits of generative AI. Microsoft Azure's cloud revenue soared 23% year-over-year to $26.7 billion this quarter. Google Cloud and AWS also reported impressive growth, with YoY increases of 28.43% and 13%, respectively.
Generative AI is proving to be a goldmine for Indian IT companies. TCS secured $900 million worth of generative AI deals in the last quarter. Tech Mahindra used generative AI to optimize ad campaigns for a client, leading to a 27% revenue increase per advertiser. Wipro launched a new generative AI platform in partnership with IBM, and Infosys generated millions of lines of code using generative AI tools. Accenture is also making a big push in generative AI, securing over $1 billion in projects in the last two quarters and investing $3 billion to solidify its position as a leading service provider.
Data and analytics companies are experiencing significant growth due to the rise of generative AI. Snowflake's revenue has skyrocketed, with a projected 34% YoY increase for the current quarter. Since its CEO shift, Snowflake has transformed into a data and AI powerhouse with a strong focus on generative AI. Databricks is another major player, boasting a revenue run rate of $1.9 billion and an 80% annualized growth rate in India over the past two years. This growth is attributed to the increasing demand for data and AI capabilities among Indian businesses.
Infosys CEO Confirms No Job Cuts Despite GenAI Adoption

Infosys CEO Salil Parekh has announced that the company does not plan to implement job cuts, such as rightsizing or downsizing, due to generative AI (GenAI). Speaking to CNBC-TV18, Parekh emphasized that Infosys is not considering workforce reductions linked to GenAI, unlike other companies in the industry.
Parekh explained that Infosys views technological advancements as opportunities for growth and expansion, not as a means to decrease staff. While some companies have reduced headcounts citing AI efficiencies, Infosys is using GenAI to improve operational efficiency and cut costs without reducing employee numbers.
In contrast to global tech firms like Twitter, Meta, Amazon, and Google, which have announced layoffs attributed to AI or GenAI, Infosys is investing in training its workforce in GenAI. Parekh highlighted that six out of every eight Infosys employees are being trained in various aspects of generative AI, preparing the company for future technological advancements while maintaining its workforce.
Regarding hiring, Parekh noted that Infosys adopts an agile hiring model, adjusting based on economic conditions and the demand for digital transformation services. He mentioned that Infosys has not set a fixed annual hiring target and will remain flexible depending on the economic environment.
This statement follows a year-on-year decline in Infosys's headcount for the financial year 2024, the first such decrease since 2001. The company has seen a sequential reduction in its workforce over the past four quarters, reflecting broader industry trends.
Cloud Adoption Surges with Rise of Generative AI

Cloud computing has long been a staple in the tech industry, but its adoption has surged recently. Initially driven by the Covid-19 pandemic, which accelerated digital transformation and pushed businesses towards the cloud, the emergence of Generative AI (GenAI) is now further propelling this shift. Analysts note that the demand for powerful, scalable, and efficient computing resources necessary to train and deploy large language models (LLMs) makes cloud services indispensable in the new AI era.
Sudarshan Seshadri, SVP of Data & AI at Coforge, remarked, “GenAI has rapidly become mainstream, necessitating a comprehensive review of data strategies and significant architectural changes. The automation of data migration and schema design through GenAI has cut costs by up to 60%.”
Naveen Kamat, VP & CTO of Data and AI Services at Kyndryl, concurred, stating, “The ecosystem of large language models and GenAI tools provided by cloud services has simplified the creation of GenAI applications on the cloud, presenting significant opportunities for cloud providers in infrastructure and data services.”
The growth in cloud computing is also fueled by digital transformation, operational efficiency demands, and competitive advantages. Vivek Kant, a partner at BCG, predicted that “the growth of the public cloud is expected to exceed 20% year-over-year, with GenAI further driving this expansion.”
Additionally, the establishment of new cloud regions is accelerating as enterprises strive to meet data residency requirements and enhance disaster recovery and resiliency. Chris Chelliah, Senior VP of Technology and Customer Strategy at Oracle Japan and Asia Pacific, noted that Oracle is opening new cloud regions every 28 days. Oracle currently operates 68 customer-facing cloud regions, with two launched this year.
With cloud services poised for continued growth, IT firms are set to benefit substantially. Companies such as Tata Consultancy Services, Infosys, HCLTech, and Wipro have reported significant revenue increases from their cloud divisions, with dollar revenue growth ranging from 13% to 27% in FY24. Himani Agrawal, Country Head – Azure, Microsoft India and South Asia, stated, “IT services companies can expect significant opportunities from their cloud businesses.”
Demand for cloud-related services, including consulting and managed services, is on the rise, with the industry experiencing 16-18% compound annual growth in this area, according to Ritesh Gupta, Senior VP & CTO of Product Engineering Services at Happiest Minds Technologies. Gupta highlighted that “Indian IT companies, with their strong partnerships with global cloud providers and comprehensive cloud services and solutions, are particularly well-positioned to benefit from this trend.”
TheOpen.AI News

Open Source Licensing Raises Legal Questions for GenAI Training Data
Many modern generative AI tools and systems are trained on extensive internet datasets, some of which are sourced from open source code repositories. This raises concerns about potential violations of open source licenses, which could become increasingly relevant in the context of AI training data and outputs.
Effectively leveraging generative AI in the corporate environment involves addressing various challenges, such as ensuring data quality during model training, appropriately fine-tuning models, and addressing security risks associated with generative AI. Another challenge, though often overlooked, is managing open source software licensing (e.g., Apache, MIT).
While it remains uncertain under what circumstances generative AI technology might breach open source software promises, there is a possibility that courts could determine such violations, posing legal and compliance risks for companies relying on specific generative AI models. It's crucial for organizations to understand the impact of open source on AI, stay informed about legal proceedings related to this issue, and adapt strategies to safeguard their interests.
Many leading large language models, including those developed by OpenAI, Meta, and Google, are trained on vast datasets collected from the internet, which includes open source software code from platforms like GitHub. This training allows models to generate new code, a key feature of prominent AI products such as GitHub Copilot, ChatGPT, and Claude. However, the extent to which this generated code is truly "new" is subject to debate. Some argue that the code produced by AI tools is not entirely novel but rather a reiteration of the code on which the model was trained.
Chronon: An Open-Source Feature Platform for Streamlining ML Pipelines

Chronon is an open-source platform crafted for ML teams, streamlining the process of building, deploying, managing, and monitoring data pipelines essential for machine learning projects.
With Chronon, you can leverage your organization's entire data ecosystem, encompassing batch tables, event streams, and services, without the usual orchestration complexities.
Key features include:
- Gathering data from diverse sources like event streams, DB table snapshots, change data streams, service endpoints, and warehouse tables.
- Providing results for both online and offline use cases, with scalable low-latency endpoints for feature serving and hive tables for training data generation.
- Offering configurable accuracy settings for real-time or batch processing, ensuring temporal or snapshot precision as needed.
- Enabling backfilling of training sets from raw data, reducing the wait time for accumulating feature logs.
- Featuring a robust Python API that abstracts data source types, freshness, and contexts into intuitive SQL primitives, including group-by, join, and select operations.
- Automating feature monitoring with auto-generated pipelines to assess training data quality, measure training-serving skew, and monitor feature drift.
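To illustrate the kind of primitive this list describes, the snippet below sketches a windowed group-by aggregation over raw events in plain Python. Note that this is not Chronon's actual API; the event schema and the helper name `purchase_sum_last_7d` are hypothetical, chosen only to show what a backfillable, point-in-time-correct feature computes.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical raw events, as might arrive from an event stream
# or sit in a warehouse table.
events = [
    {"user": "u1", "amount": 10.0, "ts": datetime(2024, 5, 1, 9, 0)},
    {"user": "u1", "amount": 25.0, "ts": datetime(2024, 5, 1, 11, 0)},
    {"user": "u2", "amount": 5.0,  "ts": datetime(2024, 4, 20, 8, 0)},
]

def purchase_sum_last_7d(events, as_of):
    """Sum of purchase amounts per user over the 7 days before `as_of`.

    Passing a historical `as_of` is what makes backfilling training
    data possible without waiting for feature logs to accumulate.
    """
    window_start = as_of - timedelta(days=7)
    totals = defaultdict(float)
    for e in events:
        if window_start <= e["ts"] < as_of:
            totals[e["user"]] += e["amount"]
    return dict(totals)

features = purchase_sum_last_7d(events, as_of=datetime(2024, 5, 2))
print(features)  # {'u1': 35.0} -- u2's event falls outside the window
```

A platform like Chronon takes a declarative definition of such an aggregation and handles the hard parts this sketch ignores: serving it at low latency online, materialising it as hive tables offline, and keeping the two consistent to avoid training-serving skew.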
TheClosed.AI News
Behind Sora’s Creation: The Journey of a College Dropout at OpenAI

OpenAI actively seeks out young exceptional talent, such as Prafulla Dhariwal, who played a key role in developing GPT-4o, and 20-year-old college dropout Will DePue, a major contributor to Sora, a text-to-video tool generating lifelike video footage.
DePue, now 21, emphasizes the importance of maintaining a healthy perspective on age, especially in the tech industry known for its youth-centric culture. Reflecting on his growth over the past year, he aims to continue challenging himself to evolve.
Having joined OpenAI 11 months ago through the residency program, DePue focused on various technical tasks, including jailbreaking and prompt injection mitigation, along with evaluating and fine-tuning model capabilities. He may soon join 1X, a robotics company backed by OpenAI, Tiger Global, Everon, and NVIDIA.
DePue's unconventional path led him to OpenAI, where he utilized his network to secure a position. Unconcerned with job titles, his focus was on learning opportunities. Notably, he spearheaded projects like WebGPT, creating a package to run GPT models in the Chrome browser, with guidance from Andrej Karpathy on Transformers, a concept he was initially unfamiliar with.
DePue's interest in technology initially leaned towards hardware, and he only began coding at the age of 17. A friend's prediction of the 2008 startup market crash prompted him to relocate to Argentina, where he spent five years.
Born in Seattle, DePue attended Geffen Academy at UCLA for his education before pursuing a BS in computer science at the University of Michigan. However, he opted to drop out to pursue his passion for building projects. Before joining OpenAI, DePue founded DeepResearch and created Thrive.fyi, a Discord bot.
Sam Altman Faces Scrutiny Over AI Safety and Legal Challenges at OpenAI

OpenAI, supported by Microsoft, is currently facing a new set of controversies, ranging from NDAs signed by employees to a recent conflict with a well-known Hollywood actress. CEO Sam Altman is also being questioned about the company's dedication to AI safety, highlighted by concerns from former executives.
Recently, Gretchen Krueger, a former policy researcher at OpenAI, announced her resignation following the departure of senior executives Jan Leike and Ilya Sutskever. Krueger expressed her ongoing concerns about the company on the social media platform X.
The company is receiving increasing public criticism from former employees, who warn that the rapid development of AI technology could pose significant risks to humanity. These concerns have even been acknowledged by Sam Altman in the past.
Stuart Russell, a prominent AI researcher, spoke to Business Insider about the potential crises stemming from the swift advancement of AI, including the rise of AI deepfakes and the possibility of an AI-driven economic collapse.
Russell, a professor at the University of California Berkeley, criticized Altman's approach to prioritizing the development of artificial general intelligence without sufficient safety measures as "absolutely unacceptable," according to Business Insider. He explained that this stance led to the departure of many of OpenAI's safety experts. Russell also voiced a stark warning, questioning the morality of risking potential human extinction by advancing such technology: "What gave them the right to play Russian roulette with everyone's children?"
Additionally, OpenAI is currently embroiled in a legal issue with actress Scarlett Johansson. Johansson highlighted on social media that the voice used in OpenAI's latest GPT-4o model closely resembles hers, which she found concerning especially since she had declined Altman's invitation to collaborate on the project.
Elon Musk’s AI Startup xAI Clinches $6 Billion in Funding to Rival OpenAI

Elon Musk's AI venture, xAI, has successfully completed a $6 billion investment round, positioning it as a major competitor to OpenAI. Despite being just a year old, xAI has quickly developed its own large language model (LLM), a technology that drives much of the recent progress in generative AI, capable of producing human-like text, images, videos, and voices. This recent funding round, one of the largest in the AI sector, places the company's pre-investment valuation at $18 billion, according to Musk's announcement on X, the social media platform he owns.
Developing generative AI is notably costly due to the significant computing power and energy required to train LLMs. xAI stated in a blog post that the new funds will be used to launch its initial products, construct advanced infrastructure, and boost research and development of new technologies.
Musk secured investment from several familiar backers who have supported his other projects, such as Tesla and his acquisition of Twitter (now called X), including venture capital firms Andreessen Horowitz, Sequoia Capital, Fidelity Management & Research Company, and Kingdom Holding, managed by Prince Alwaleed bin Talal of the Saudi royal family.
The investor enthusiasm for AI was largely spurred by OpenAI, which developed the chatbot ChatGPT using an LLM. Although Musk co-founded OpenAI, he filed a lawsuit against it in March, accusing CEO Sam Altman and other executives of deviating from the company's original mission of benefiting humanity in favor of private profit. Meanwhile, OpenAI faces increasing competition from other major tech firms like Google’s Gemini, Meta’s Llama, and startups such as Amazon-supported Anthropic and France's Mistral.
Join the movement - If you wish to contribute thought leadership, please fill out the form below. One of our team members will be in touch with you shortly.
Form link
In our quest to explore the dynamic and rapidly evolving field of Artificial Intelligence, this newsletter is your go-to source for the latest developments, breakthroughs, and discussions on Generative AI. Each edition brings you the most compelling news and insights from the forefront of Generative AI (GenAI), featuring cutting-edge research, transformative technologies, and the pioneering work of industry leaders.
Highlights from GenAI, OpenAI and ClosedAI: Dive into the latest projects and innovations from the leading organisations behind some of the most advanced AI models, both open-source and closed-source.
Stay Informed and Engaged: Whether you're a researcher, developer, entrepreneur, or enthusiast, "Towards AGI" aims to keep you informed and inspired. From technical deep-dives to ethical debates, our newsletter addresses the multifaceted aspects of AI development and its implications on society and industry.
Join us on this exciting journey as we navigate the complex landscape of artificial intelligence, moving steadily towards the realisation of AGI. Stay tuned for exclusive interviews, expert opinions, and much more!