Glossary: Generative AI Terminology

A compilation of terminology used in AI and Generative AI with brief definitions.

General AI Terminology

Dataset
A collection of data used to train, test, or evaluate a machine learning or artificial intelligence model. Its size and quality play a crucial role in determining the model's effectiveness and accuracy.
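As a minimal sketch, assuming scikit-learn and NumPy are available, a dataset is commonly split into training and test portions so the model can be evaluated on data it has not seen; the shapes and labels below are illustrative:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 4)             # 100 samples, 4 features each
y = np.random.randint(0, 2, size=100)  # illustrative binary labels

# Hold out 20% of the data for evaluating the trained model.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
print(X_train.shape, X_test.shape)  # (80, 4) (20, 4)
```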
Deep Learning
A subset of machine learning that trains artificial neural networks with multiple layers to learn and recognize complex patterns and relationships in data. Deep learning algorithms utilize multiple layers of nonlinear processing units to extract hierarchical representations of the input data.
GAN (Generative Adversarial Network)
This neural network architecture consists of two parts: a generator network, which generates new data samples that mimic the distribution of a given dataset, and a discriminator network, which attempts to distinguish between real and generated samples.
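A minimal sketch of one GAN training step, assuming PyTorch is available; the network sizes and the random "real" batch are illustrative stand-ins:

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

real = torch.randn(64, data_dim)  # stand-in for samples from a real dataset
fake = generator(torch.randn(64, latent_dim))

# Discriminator step: label real samples 1 and generated samples 0.
d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
          + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: try to make the discriminator output 1 for fakes.
g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```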
Inference
The process of making predictions or decisions on new, unseen data using a trained machine learning or deep learning model. During inference, the model takes in input data and produces an output, which could be a numerical value, a class label, a probability distribution, or some other type of output.
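A minimal inference sketch, assuming PyTorch; the untrained linear layer stands in for a trained model:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 3)  # stand-in for a trained model
model.eval()             # switch off training-only behavior (dropout, etc.)

x = torch.randn(1, 4)    # one new, unseen example
with torch.no_grad():    # no gradients are needed at inference time
    logits = model(x)
    probs = torch.softmax(logits, dim=-1)  # a probability distribution
print(probs)
```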
Machine Learning
A subfield of artificial intelligence that develops algorithms which learn from data and make predictions or decisions without being explicitly programmed. These algorithms are used in a wide range of applications such as image and speech recognition, natural language processing, and anomaly detection.
Neural Network
A model inspired by the structure and function of the human brain. It has layers of interconnected nodes, called neurons, that perform nonlinear transformations on input data.
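A minimal sketch of such a network, assuming PyTorch; the layer sizes are illustrative:

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(10, 16),  # input layer -> hidden layer of 16 neurons
    nn.ReLU(),          # nonlinear transformation
    nn.Linear(16, 1),   # hidden layer -> output
)
out = net(torch.randn(5, 10))  # 5 samples with 10 features each
print(out.shape)               # torch.Size([5, 1])
```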
Overfitting
When a machine learning model has become too complex and starts to memorize the training data instead of generalizing to new, unseen data. This results in poor performance on data that was not used for training (see the polynomial-fit sketch after the Underfitting entry below).
Recurrent Neural Network (RNN)
A neural network that processes sequential or time-series data, such as sensor data or natural language. RNNs use feedback connections between the neurons in the network, allowing information to be passed from one time step to the next.
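A minimal sketch using PyTorch's built-in RNN module; the batch and sequence sizes are illustrative:

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=3, hidden_size=8, batch_first=True)
sequence = torch.randn(2, 5, 3)  # batch of 2 sequences, 5 time steps, 3 features

# The hidden state carries information from one time step to the next;
# outputs holds the hidden state produced at every time step.
outputs, final_hidden = rnn(sequence)
print(outputs.shape, final_hidden.shape)  # (2, 5, 8) and (1, 2, 8)
```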
Training
The process of learning a model from a dataset using a machine learning or deep learning algorithm. This enables the model to generalize to new, unseen data and make accurate predictions or decisions.
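A minimal training sketch in plain NumPy, so each learning step is explicit: gradient descent fits a line y = w*x + b to noisy synthetic data. The data and learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=100)  # ground truth: w=3, b=1

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    pred = w * x + b
    error = pred - y
    # Gradients of the mean squared error with respect to w and b.
    w -= lr * 2 * np.mean(error * x)
    b -= lr * 2 * np.mean(error)
print(round(w, 2), round(b, 2))  # approaches 3.0 and 1.0
```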
Underfitting
When a machine learning model is too simple to capture the underlying patterns in the data, resulting in poor performance on both the training and test data. It can also result from insufficient training or from noisy or irrelevant features in the data.
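A sketch contrasting underfitting with overfitting, assuming scikit-learn: polynomial models of increasing degree are fit to noisy data, and training error is compared with test error. The degrees and sample sizes are illustrative:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=80)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for degree in (1, 4, 15):  # too simple, about right, too complex
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_tr, y_tr)
    tr = mean_squared_error(y_tr, model.predict(X_tr))
    te = mean_squared_error(y_te, model.predict(X_te))
    print(f"degree {degree:2d}: train MSE {tr:.3f}, test MSE {te:.3f}")

# Degree 1 underfits (high error on both sets); degree 15 tends to
# overfit (low training error, higher test error).
```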

Generative AI / Transformer Terminology

Fine-tuning
The process of adjusting a pre-trained model's parameters and hyperparameters on task-specific data. This is used to optimize the model's performance on a target task such as translation, question answering, or text classification.
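A minimal fine-tuning sketch, assuming PyTorch and a recent torchvision: a pre-trained ResNet-18 backbone is frozen and a new head is trained for a hypothetical 5-class task. The batch here is random stand-in data:

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False  # freeze the pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 5)  # new head for 5 classes

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)  # stand-in image batch
labels = torch.randint(0, 5, (4,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```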
Generative AI
A type of AI that uses probabilistic models to generate new data, such as text, images, and videos, from patterns learned in the training data. It differs from traditional machine learning, which focuses on making predictions about existing data.
GPT (Generative Pre-trained Transformer)
This is a type of generative AI model that uses a pre-training approach to generate natural language text. The pre-training is conducted on large amounts of text data using a self-supervised learning algorithm to learn the underlying patterns and structures in language.
Hyperparameters
These are settings chosen by the user rather than learned from the data, which determine the model's architecture, optimization algorithm, learning rate, regularization, and other properties that affect the model's performance.
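A sketch of choosing hyperparameters by grid search, assuming scikit-learn; the parameter grid and model are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, random_state=0)

# Hyperparameters are set before training, not learned from the data.
grid = {
    "learning_rate": [0.01, 0.1],  # step size of the optimizer
    "n_estimators": [50, 100],     # model capacity (number of trees)
}
search = GridSearchCV(GradientBoostingClassifier(random_state=0), grid, cv=3)
search.fit(X, y)  # trains one model per combination, cross-validated
print(search.best_params_)
```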
Large Language Model (LLM)
A model trained on large amounts of text data to learn the patterns and structures of language. It uses deep learning techniques to generate natural language text and is now widely used in natural language processing tasks such as language translation, text completion, and question answering.
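A sketch of generating text with a small pre-trained language model, assuming the Hugging Face transformers library is installed (this downloads GPT-2, a small stand-in for modern LLMs):

```python
from transformers import pipeline

# GPT-2 is tiny by today's standards but follows the same principle:
# predict likely next tokens given the text so far.
generator = pipeline("text-generation", model="gpt2")
result = generator("A large language model is", max_new_tokens=20)
print(result[0]["generated_text"])
```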
Multi-modal Generative Models
As the terminology "multi-modal" suggests, these models work across multiple data modalities, combining deep-learning techniques to create images, generate video captions, transform text into images, and synthesize text. Two of these deep-learning techniques are convolutional neural networks and recurrent neural networks.
Prompt Engineering
The process of crafting the most appropriate input prompts to obtain the desired outputs from a model. The prompt conditions the model's generation, so its wording, structure, and examples strongly influence the result.
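A minimal sketch of few-shot prompt construction in plain Python; the template and examples are illustrative, not from any particular system:

```python
# Labeled examples that condition the model before it sees the new input.
examples = [
    ("The movie was wonderful.", "positive"),
    ("I want my money back.", "negative"),
]

def build_prompt(text: str) -> str:
    """Assemble a few-shot prompt ending where the model should answer."""
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return (
        "Classify the sentiment of each review.\n\n"
        f"{shots}\nReview: {text}\nSentiment:"
    )

print(build_prompt("The plot made no sense."))
```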
Self-attention
A crucial mechanism that weighs the importance of the various parts of an input sequence in order to make sensible predictions. It is used by transformer-based models to capture long-range dependencies in the input sequence.
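A NumPy sketch of scaled dot-product self-attention, the core operation inside transformer layers; the random matrices stand in for learned projections:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    q, k, v = x @ wq, x @ wk, x @ wv         # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])  # similarity between positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ v  # each output mixes information from all positions

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))  # 5 tokens with 16-dim embeddings
wq, wk, wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(self_attention(x, wq, wk, wv).shape)  # (5, 16)
```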
Transformer
A neural network architecture composed of multiple layers of self-attention and feed-forward networks, designed to capture long-range dependencies in the input sequence and generate the output sequence.
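A minimal sketch using PyTorch's built-in transformer encoder; the dimensions are illustrative:

```python
import torch
import torch.nn as nn

# One encoder layer = self-attention + feed-forward network.
layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)  # two stacked layers

tokens = torch.randn(1, 10, 32)  # 1 sequence, 10 positions, 32-dim embeddings
encoded = encoder(tokens)        # each position attends to every other
print(encoded.shape)             # torch.Size([1, 10, 32])
```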