Artificial Intelligence (AI): Glossary

Glossary of Common Terms

These definitions were generated by Microsoft Copilot in response to prompts entered by a librarian. Copilot integrates OpenAI's GPT-4 with Microsoft 365: the generative AI tool combines data drawn from a user's Microsoft Graph (calendar, emails, chats, documents, etc.) with OpenAI's LLM. Following best practices for AI-generated content, the definitions were subjected to human review for accuracy.

Artificial Intelligence (AI)

Artificial intelligence is the capability of a machine to imitate intelligent human behavior. It involves the creation of algorithms or computer programs that can learn, reason, and make decisions or predictions. AI systems are often designed to handle tasks that would normally require human intelligence, such as visual perception, speech recognition, decision-making, mathematical computation, and language translation.

Terms Relating to Generative AI*

  • Chatbot
    A chatbot is a computer program designed to simulate conversation with human users, especially over the Internet. Chatbots are commonly used in various applications such as customer service, information acquisition, personal assistants, and more. They can handle tasks like answering FAQs, booking appointments, providing recommendations, and even engaging in casual conversations. 
  • *Generative AI
    Generative AI is a type of artificial intelligence that can generate new content, such as text, images, or music, by learning from a dataset. It’s often used for tasks that involve creativity or pattern recognition, such as composing music, designing images, or writing text.
  • Machine Learning (ML)
    Machine learning is a subset of artificial intelligence that involves the development of algorithms and statistical models that enable computers to perform tasks without explicit instructions, relying on patterns and inference instead. It’s a method of data analysis that automates analytical model building and allows computer systems to improve their performance over time through experience. (A minimal example follows this list.)
  • Natural Language Processing (NLP)
    Natural Language Processing (NLP) is a field of artificial intelligence that focuses on the interaction between computers and humans through natural language. NLP aims to bridge the gap between human communication and computer understanding, allowing for more natural and intuitive interactions with technology. 
  • Large Language Model (LLM)
    A large language model is an AI system trained on extensive text data to understand and generate human language. It learns to predict the next word in a sentence by analyzing the context provided by the preceding words. Training involves iteratively adjusting the model’s parameters to improve its accuracy in generating text, a process that requires substantial computational resources and data. (A toy sketch of next-word prediction also follows this list.)
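
The two sketches below, written in Python, are minimal illustrations of the ideas above rather than real systems; the datasets, numbers, and names in them are invented.

First, machine learning in miniature, assuming the scikit-learn library is installed. The program is never given a rule relating hours to scores; the algorithm infers the pattern from examples.

    from sklearn.linear_model import LinearRegression

    # "Experience": example inputs paired with known outcomes.
    X = [[1], [2], [3], [4]]               # hours studied
    y = [50, 60, 70, 80]                   # exam scores
    model = LinearRegression().fit(X, y)   # the model finds the pattern itself

    # No explicit "add 10 points per hour" rule was ever programmed.
    print(model.predict([[5]]))            # -> approximately [90.]

Second, a toy version of next-word prediction. A real LLM learns billions of parameters from vast text corpora; this sketch merely counts, in one sample sentence, which word most often follows which.

    from collections import Counter, defaultdict

    # Toy "training data"; a real LLM trains on billions of words.
    text = "the cat sat on the mat and the cat slept on the mat"
    words = text.split()

    # Count which word follows each word (a bigram table).
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

    def predict_next(word):
        """Return the continuation seen most often after `word`."""
        return follows[word].most_common(1)[0][0]

    print(predict_next("the"))             # -> "cat" (ties with "mat"; first seen wins)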

Using Large Language Models (LLMs)

  • Hallucination
    In the context of artificial intelligence and large language models, “hallucination” refers to instances where the AI generates information or data that is not based on the input it received or the data it was trained on. Essentially, it’s when the AI ‘makes up’ information that seems plausible but is actually false or nonsensical. This can occur due to the AI’s attempt to fill gaps in its understanding or to generate coherent responses based on patterns it has learned, even when those patterns do not correspond to accurate information.
  • Prompt
    A “prompt” refers to a user-provided input, such as a question, a statement, or a set of instructions, that guides the AI in generating content or performing a task.
  • Prompt Engineering
    Prompt engineering is the process of designing and refining prompts to effectively guide generative AI in producing desired outputs. It involves understanding the AI’s capabilities and limitations, and crafting prompts that are clear, specific, and aligned with the AI’s training data. (An example of prompting appears after this list.)
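
As a concrete illustration of prompting, the sketch below sends a prompt to a chat model through OpenAI’s Python library. It is a minimal sketch, assuming the openai package is installed and an API key is stored in the OPENAI_API_KEY environment variable; the model name and the wording of the messages are examples only.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o",  # example model name; substitute any model you have access to
        messages=[
            # A "system" message is a basic prompt-engineering technique:
            # it sets the role, tone, and constraints for the reply.
            {"role": "system",
             "content": "You are a reference librarian. Answer in two sentences."},
            # The user's prompt itself.
            {"role": "user",
             "content": "What is a large language model?"},
        ],
    )
    print(response.choices[0].message.content)

Rewording the system and user messages, then comparing the answers, is prompt engineering in miniature.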

Understanding Large Language Models (LLMs)

  • Parameters
    In large language models (LLMs) like GPT-3 or GPT-4, “parameters” are the internal numerical values, often called weights, that determine how the model processes and generates language. During the training phase, the parameters are adjusted to reduce the model’s prediction errors, allowing it to learn patterns and relationships within the data.
  • Temperature
    In large language models, “temperature” is a hyperparameter that influences the randomness of the model’s output. A lower temperature results in more predictable and deterministic output, while a higher temperature encourages diversity and creativity in the responses, potentially at the cost of coherence and relevance. (A sampling sketch follows this list.)
  • Tokens
    In natural language processing, “tokens” are the smallest units of text that a model treats as meaningful, such as whole words, pieces of words (subwords), or punctuation marks. Language models use tokens to parse the input text and to generate coherent, contextually appropriate output. (A tokenizer example follows this list.)
  • Training Data
    “Training data” refers to the vast dataset used to train the model to understand and generate human-like text. It consists of a large corpus of text that the model uses to learn patterns, structures, and nuances of language. This data is crucial for the model’s ability to accurately predict and generate text based on the input it receives.
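
To see tokens concretely, here is a short sketch using tiktoken, OpenAI’s open-source tokenizer library; the choice of library, encoding name, and sample sentence are assumptions, and any tokenizer would illustrate the point. It shows the integer token IDs a model actually reads and the text fragment each ID stands for.

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models
    ids = enc.encode("Libraries love language models.")
    print(ids)                                  # a list of integers, one per token
    print([enc.decode([i]) for i in ids])       # the text fragment behind each token ID

Temperature can likewise be sketched in a few lines, assuming only NumPy: the model’s raw scores (logits) for each candidate token are divided by the temperature before being converted into probabilities, so a low temperature concentrates probability on the top-scoring token while a high temperature spreads it out. The logit values below are made up for illustration.

    import numpy as np

    def sample_token(logits, temperature=1.0, rng=None):
        """Pick one token index from raw scores, scaled by temperature."""
        if rng is None:
            rng = np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / temperature
        scaled -= scaled.max()                   # subtract the max for numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return rng.choice(len(probs), p=probs)

    logits = [2.0, 1.0, 0.1]                     # made-up scores for three candidate tokens
    print(sample_token(logits, temperature=0.2)) # almost always picks token 0
    print(sample_token(logits, temperature=2.0)) # noticeably more varied choices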