
There Are a Lot of Generative AI Acronyms — Here’s What They All Mean


Generative artificial intelligence (AI) represents an entirely new computing paradigm.

To view large language models (LLMs) as merely chatbots is equivalent to thinking of early computers as no more than calculators — it misses the entire opportunity area.

And while we are just in the first inning of the emerging generative AI economy, the innovative technology has already pushed an at-times bewildering array of new terms and industry-specific vocabulary to the forefront of public consciousness.

To most effectively leverage the groundbreaking technology, it is necessary to demystify the terms surrounding and supporting it.

AI has been around for a while. The first digital computer was built in the 1940s, and in 1950 computer science pioneer Alan Turing proposed the Turing Test, designed to determine a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

But it is the recent and sweeping emergence of generative AI that has lifted the innovative technology from the pages of midcentury science fiction to the realities of 21st century workflows and processes.

Read also: Everything You Need to Know About Generative AI but Were Not Afraid to Ask

What to Know About the Letters Making Up AI’s Tech

In 1956, the Dartmouth Summer Research Project on Artificial Intelligence officially launched the field of AI as an area of study and brought into prominence terms such as neural networks and natural language processing.

Neural networks (NNs) are computational models inspired by the structure and function of the human brain. They consist of interconnected nodes (neurons) organized into layers and are used in various machine learning and deep learning algorithms to process and learn from data.
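
As a rough illustration of that structure of interconnected nodes organized into layers, here is a minimal sketch in Python; the layer sizes, random weights and input are arbitrary placeholders rather than anything drawn from a real system.

```python
import numpy as np

# Minimal sketch of a feedforward neural network: two layers of "neurons,"
# each computing a weighted sum of its inputs followed by a nonlinearity.
# All sizes and values below are illustrative only.

rng = np.random.default_rng(0)

x = rng.normal(size=(1, 4))      # one input example with 4 features
W1 = rng.normal(size=(4, 8))     # weights from the input layer to an 8-neuron hidden layer
W2 = rng.normal(size=(8, 2))     # weights from the hidden layer to a 2-neuron output layer

hidden = np.maximum(0, x @ W1)   # ReLU activation in the hidden layer
output = hidden @ W2             # raw scores from the output layer

print(output.shape)              # (1, 2)
```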

Natural language processing (NLP) is a field of artificial intelligence that focuses on the interaction between computers and human language. NLP systems are designed to understand, interpret, and generate human language in a way that is both meaningful and useful.

Both NNs and NLP would take a back seat to predictive and statistical inference methods over the following decades, only to make a huge resurgence in the 21st century as generative AI capabilities and advanced computing power opened up new applications.

In the 1960s, the concept of machine learning (ML) was introduced.

Machine learning is a subset of AI that involves the use of algorithms and statistical models to enable computer systems to improve their performance on a specific task through learning from data.
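
As a toy sketch of that idea of learning from data, assuming the open-source scikit-learn library and an entirely invented dataset, a model can be fit to labeled examples and then applied to a new case:

```python
from sklearn.linear_model import LogisticRegression

# Illustrative only: the model "learns" a decision rule from labeled examples,
# then applies it to data it has not seen. The dataset is made up.
X = [[25, 1], [42, 3], [31, 2], [58, 5], [23, 1], [49, 4]]  # hypothetical [age, prior purchases]
y = [0, 1, 0, 1, 0, 1]                                      # hypothetical label: repeat buyer or not

model = LogisticRegression()
model.fit(X, y)                  # improve at the task by learning from the data

print(model.predict([[35, 2]]))  # predict the label for a new, unseen example
```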

ML capabilities were crucial in supporting the emergence and development of Predictive AI.

Predictive AI refers to AI systems that use historical and real-time data to make predictions or forecasts about future events or outcomes. This is often used in applications like sales forecasting, demand prediction and personalized recommendations.
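
As a toy illustration of the underlying idea, using nothing but invented monthly sales figures, a simple trend can be fitted to historical data and extrapolated into the future:

```python
import numpy as np

# Toy sketch of predictive AI's core pattern: learn from historical data,
# then forecast forward. The sales figures below are invented for illustration.
months = np.arange(1, 9)                             # eight months of history
sales = np.array([100, 108, 115, 121, 130, 137, 146, 152])

slope, intercept = np.polyfit(months, sales, deg=1)  # fit a linear trend to the history
forecast = slope * 9 + intercept                     # extrapolate to month nine

print(round(float(forecast), 1))                     # predicted sales for the next month
```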

Predictive AI can carry out processes at scale faster than humans, as well as make inferences that a human would miss when it comes to spotting patterns and linking up seemingly disparate sources of information. The technology was the key engine behind IBM’s Deep Blue chess-playing computer program, which defeated reigning world chess champion and grandmaster Garry Kasparov in a highly publicized 1997 match.

Read more: Peeking Under the Hood of AI’s High-Octane Technical Needs

Enter the Age of Generative AI 

The transition from the 20th century to the 21st saw the emergence of “big data,” or the capacity to collect sprawling corpora of data far too large for manual processing. This sheer availability of data allowed AI algorithms to learn through simple brute force, spurring massive advancements in computing.

As a result, in the late 2000s, deep learning began to outperform traditional machine learning approaches.

Deep learning (DL) is a subfield of machine learning that focuses on training artificial neural networks (ANNs) with multiple layers (deep neural networks) to learn and make predictions from data. It has been particularly successful in tasks like image recognition and natural language processing.
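
As a minimal sketch of what the “deep” in deep learning refers to, here is a stack of several layers built with the open-source PyTorch library; the layer sizes are arbitrary placeholders rather than a real architecture.

```python
import torch
from torch import nn

# Sketch of a "deep" neural network: several stacked layers, each feeding the
# next, so the model can learn progressively more abstract features.
# Layer sizes are illustrative placeholders.
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),   # hidden layer 1
    nn.Linear(64, 64), nn.ReLU(),   # hidden layer 2
    nn.Linear(64, 64), nn.ReLU(),   # hidden layer 3
    nn.Linear(64, 10),              # output layer (e.g., scores for 10 classes)
)

x = torch.randn(1, 32)              # one example with 32 input features
print(model(x).shape)               # torch.Size([1, 10])
```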

Deep learning capabilities effectively set the stage for generative AI, and it was in 2017 that generative AI as we now know it began in earnest.

That was when a paper from Google researchers titled “Attention Is All You Need” introduced the foundational transformer neural network architecture and popularized the revolutionary concept of multi-head attention, effectively slingshotting the capabilities of AI models past the sequential processing limitations of earlier recurrent neural networks (RNNs) and into the realm of generative pretrained transformers (GPTs).

GPTs are a type of model that uses transformer architecture and pretraining techniques to generate coherent and contextually relevant text.
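
For readers who want to see the mechanism itself, the scaled dot-product attention at the heart of the transformer architecture can be sketched in a few lines of Python; the sequence length, embedding size and random inputs are illustrative only.

```python
import numpy as np

# Minimal sketch of scaled dot-product attention, the building block of
# transformer models. Each output position is a weighted blend of the value
# vectors V, with weights based on how well queries Q match keys K.

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                         # blend the values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 16                                       # 5 tokens, 16-dimensional embeddings
Q, K, V = (rng.normal(size=(seq_len, d_model)) for _ in range(3))

print(attention(Q, K, V).shape)                                # (5, 16): one context-aware vector per token
```

Multi-head attention simply runs several such operations in parallel and combines their outputs, letting the model attend to different aspects of the input at once.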

Read also: Companies Tap Their Own Data to Drive Efficiencies With AI

Large language models (LLMs) are advanced artificial intelligence models, typically based on deep learning techniques, that are trained on massive datasets to understand and generate human language. They can perform tasks like text generation, language translation, sentiment analysis, and text summarization.
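
As one illustrative way to run a small GPT-style model locally, the open-source Hugging Face transformers library exposes a text-generation pipeline; the model named below, “gpt2,” is simply a small, freely available example rather than a state-of-the-art LLM.

```python
# Illustrative only: generate text with a small, freely available GPT-style model.
# The first run downloads the model weights from the Hugging Face hub.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Generative AI is", max_new_tokens=20)

print(result[0]["generated_text"])   # the prompt plus the model's continuation
```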

LLMs are a type of foundation model (FM).

Foundation models refer to the fundamental models in various fields of AI, such as NLP, computer vision, or reinforcement learning, upon which more advanced models are built. They often serve as a basis or starting point for developing more complex AI systems.

Today’s most advanced foundation models power generative AI systems.

Generative AI refers to AI systems, or certain language models, that have the ability to create new data or content that is similar to existing data. This includes tasks like image generation, text generation, and more.

Already, the FMs underpinning gen AI systems are evolving from LLMs to LMMs (large multimodal models).

Large multimodal models are AI models designed to understand and generate content that involves multiple modalities, such as text, images, and audio. These models combine various data types for tasks like image captioning, video analysis, or text-to-speech synthesis, working across modalities to generate novel outputs.

As generative AI capabilities grow more advanced at a rapid rate, the world’s biggest tech companies and nimblest startups are turning their attention and compute power toward developing AGI, or artificial general intelligence.

Artificial general intelligence refers to AI systems that possess human-like intelligence and the ability to understand, learn, and perform a wide range of tasks that require general reasoning and problem-solving abilities. AGI is still a theoretical concept and hasn’t been fully realized yet — but is considered to be the next great advancement in the field of AI.