Cloud Computing Concepts Hub

The Cloud Computing Concepts Hub is the centralized place where you can browse or search for informative articles about cloud computing. You'll find easy-to-understand info about broad topics such as "What is Machine Learning?" and "What is Data Science?" These articles are intended to help you up-level your understanding of frequently asked cloud computing topics. 

Browse all cloud computing concepts

Browse all cloud computing concepts content here:

  • Artificial Intelligence

    What is a Chatbot?

    A chatbot is a program or application that users can converse with through voice or text. Chatbots were first developed in the 1960s, and the technology powering them has changed over time. Chatbots traditionally use predefined rules to converse with users and provide scripted answers. Contemporary chatbots use natural language processing (NLP) to understand users, and they can respond to complex questions with great depth and accuracy. Your organization can use chatbots to scale, personalize, and improve communication in everything from customer service workflows to DevOps management.
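
    As a minimal sketch of the traditional, rule-based approach described above, the following Python example returns scripted answers keyed to simple keywords; the keywords and replies are made-up examples, and a modern chatbot would use NLP models instead of exact matching.

    ```python
    # Minimal rule-based chatbot: scripted answers keyed to keywords.
    # Keywords and replies are illustrative only.
    RULES = {
        "hours": "We are open 9am-5pm, Monday to Friday.",
        "refund": "Refunds are processed within 5 business days.",
        "agent": "Connecting you to a human agent now.",
    }

    def reply(message: str) -> str:
        text = message.lower()
        for keyword, answer in RULES.items():
            if keyword in text:
                return answer
        return "Sorry, I didn't understand that. Could you rephrase?"

    print(reply("What are your opening hours?"))
    print(reply("I'd like a refund, please."))
    ```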

  • Artificial Intelligence

    What is Enterprise AI?

    Enterprise artificial intelligence (AI) is the adoption of advanced AI technologies within large organizations. Taking AI systems from prototype to production introduces several challenges around scale, performance, data governance, ethics, and regulatory compliance. Enterprise AI includes policies, strategies, infrastructure, and technologies for widespread AI use within a large organization. Even though it requires significant investment and effort, enterprise AI is important for large organizations as AI systems become more mainstream.

  • Artificial Intelligence

    What is Text Classification?

    Text classification is the process of assigning predetermined categories to open-ended text documents using artificial intelligence and machine learning (AI/ML) systems. Many organizations have large document archives and business workflows that continually generate documents at scale, such as legal documents, contracts, research documents, user-generated data, and email. Text classification is the first step in organizing, structuring, and categorizing this data for further analytics. It enables automatic document labeling and tagging, saving your organization thousands of hours it would otherwise spend reading, understanding, and classifying documents manually.
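
    As a rough, assumed illustration of assigning predetermined categories to text, the sketch below trains a tiny classifier with scikit-learn; the documents, labels, and library choice are examples, not part of the article.

    ```python
    # Tiny text-classification sketch: TF-IDF features + logistic regression.
    # Documents and labels below are made-up examples.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    docs = [
        "Please find the signed contract attached.",
        "The quarterly research results are summarized below.",
        "Invoice 1042 is due at the end of the month.",
        "This agreement is governed by the laws of the state.",
    ]
    labels = ["legal", "research", "finance", "legal"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(docs, labels)

    print(model.predict(["Attached is the updated license agreement."]))
    ```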

  • Artificial Intelligence

    What are AI Agents?

    An artificial intelligence (AI) agent is a software program that can interact with its environment, collect data, and use the data to perform self-determined tasks to meet predetermined goals. Humans set the goals, but an AI agent independently chooses the best actions it needs to perform to achieve those goals. For example, consider a contact center AI agent whose goal is to resolve customer queries. The agent automatically asks the customer different questions, looks up information in internal documents, and responds with a solution. Based on the customer's responses, it determines whether it can resolve the query itself or should pass it on to a human.
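
    The goal-driven loop described above can be sketched as follows; the environment, tools, and decision logic are hypothetical placeholders rather than a real contact-center agent.

    ```python
    # Toy AI-agent loop: the human sets the goal, the agent repeatedly
    # observes the state, chooses an action, and acts until the goal is met.
    # All functions below are hypothetical stand-ins.

    def resolved(state) -> bool:
        return state.get("answered", False)

    def choose_action(state):
        # In a real agent this decision would come from an ML model or planner.
        if not state.get("details"):
            return "ask_clarifying_question"
        if not state.get("answer"):
            return "search_internal_docs"
        return "reply_to_customer"

    def act(action, state):
        if action == "ask_clarifying_question":
            state["details"] = "customer cannot log in"
        elif action == "search_internal_docs":
            state["answer"] = "reset the password from the account page"
        elif action == "reply_to_customer":
            state["answered"] = True
        return state

    state = {"goal": "resolve customer query"}
    while not resolved(state):
        state = act(choose_action(state), state)
    print(state)
    ```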

  • Artificial Intelligence

    What is Intelligent Document Processing (IDP)?

    Intelligent document processing (IDP) automates the extraction of data from paper-based documents or document images so that it can flow into other digital business processes, removing the need for manual data entry. For example, consider a business process workflow that automatically issues orders to suppliers when stock levels are low. Although the workflow is automated, no order is shipped until the supplier receives payment: the supplier sends an invoice by email, and the accounts team enters the data manually before completing the payment, a manual checkpoint that creates bottlenecks and errors. Instead, an IDP system automatically extracts the invoice data and enters it in the required format in the accounting system. IDP uses machine learning (ML) and other artificial intelligence (AI) technologies to automate document management.
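
    Production IDP systems rely on ML-based optical character recognition and entity extraction; purely as a stand-in, this sketch pulls two fields out of already-digitized invoice text with regular expressions (the invoice text and field names are invented).

    ```python
    # Simplified stand-in for IDP field extraction: real systems use
    # ML-based OCR and entity recognition, not hand-written regexes.
    import re

    ocr_text = """
    INVOICE #INV-2091
    Supplier: Example Parts Ltd
    Total due: 1,250.00 USD
    """

    invoice_number = re.search(r"INVOICE\s+#(\S+)", ocr_text)
    total_due = re.search(r"Total due:\s+([\d,\.]+)\s+(\w+)", ocr_text)

    record = {
        "invoice_number": invoice_number.group(1) if invoice_number else None,
        "amount": total_due.group(1) if total_due else None,
        "currency": total_due.group(2) if total_due else None,
    }
    print(record)  # ready to enter into the accounting system in its required format
    ```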

  • Artificial Intelligence

    What are Autoregressive Models?

    Autoregressive models are a class of machine learning (ML) models that automatically predict the next component in a sequence by taking measurements from previous inputs in the sequence. Autoregression is a statistical technique used in time-series analysis that assumes that the current value of a time series is a function of its past values. Autoregressive models use similar mathematical techniques to determine the probabilistic correlation between elements in a sequence. They then use the knowledge derived to guess the next element in an unknown sequence. For example, during training, an autoregressive model processes several English language sentences and identifies that the word “is” always follows the word “there.” It then generates a new sequence that has “there is” together.
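
    The "there is" example can be made concrete with a tiny sketch: count which word follows which in some training text, then predict the most probable next word. The training sentences below are invented, and a simple bigram counter stands in for a real autoregressive model.

    ```python
    # Tiny autoregressive sketch: learn next-word statistics from previous
    # words (a bigram model), then predict the next element in a sequence.
    from collections import Counter, defaultdict

    training_text = "there is a house . there is a tree . there was a dog ."
    follows = defaultdict(Counter)
    words = training_text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

    def predict_next(word: str) -> str:
        return follows[word].most_common(1)[0][0]

    print(predict_next("there"))  # 'is' is the most frequent continuation
    ```
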
  • Artificial Intelligence

    What is NeRF (Neural Radiance Field)?

    A neural radiance field (NeRF) is a neural network that can reconstruct complex three-dimensional scenes from a partial set of two-dimensional images. Three-dimensional images are required in various simulations, gaming, media, and Internet of Things (IoT) applications to make digital interactions more realistic and accurate. A NeRF learns a particular scene's geometry, objects, and viewing angles, and then renders photorealistic 3D views from novel viewpoints, automatically generating synthetic data to fill in gaps.

  • Machine Learning

    What are Embeddings in Machine Learning?

    Embeddings are numerical representations of real-world objects that machine learning (ML) and artificial intelligence (AI) systems use to understand complex knowledge domains like humans do. As an example, computing algorithms understand that the difference between 2 and 3 is 1, indicating a close relationship between 2 and 3 as compared to 2 and 100. However, real-world data includes more complex relationships. For example, a bird-nest and a lion-den are analogous pairs, while day-night are opposite terms. Embeddings convert real-world objects into complex mathematical representations that capture inherent properties and relationships between real-world data. The entire process is automated, with AI systems self-creating embeddings during training and using them as needed to complete new tasks.
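
    A small numpy sketch of the idea that embeddings place related objects close together; the three-dimensional vectors are hand-made for illustration, whereas real embeddings are learned and have hundreds or thousands of dimensions.

    ```python
    # Toy embeddings: related concepts get nearby vectors, so cosine
    # similarity is high for "bird"/"nest" and lower for "bird"/"night".
    # Vectors are hand-made for illustration; real embeddings are learned.
    import numpy as np

    embeddings = {
        "bird":  np.array([0.9, 0.1, 0.0]),
        "nest":  np.array([0.8, 0.2, 0.1]),
        "night": np.array([0.0, 0.2, 0.9]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(embeddings["bird"], embeddings["nest"]))   # close to 1.0
    print(cosine(embeddings["bird"], embeddings["night"]))  # much smaller
    ```
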
  • Artificial Intelligence

    What is Data Augmentation?

    Data augmentation is the process of artificially generating new data from existing data, primarily to train new machine learning (ML) models. ML models require large and varied datasets for initial training, but sourcing sufficiently diverse real-world datasets can be challenging because of data silos, regulations, and other limitations. Data augmentation artificially increases the dataset by making small changes to the original data. Generative artificial intelligence (AI) solutions are now being used for high-quality and fast data augmentation in various industries.
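
    A minimal numpy sketch of "small changes to the original data": one training image becomes several variants through flips, rotation, and noise. The random array stands in for a real image, and the specific transforms are only examples.

    ```python
    # Minimal data-augmentation sketch: create new training samples by
    # applying small transformations to an existing image-like array.
    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((32, 32))          # stand-in for one training image

    augmented = [
        np.fliplr(image),                 # horizontal flip
        np.flipud(image),                 # vertical flip
        np.rot90(image),                  # 90-degree rotation
        np.clip(image + rng.normal(0, 0.05, image.shape), 0, 1),  # add noise
    ]
    print(f"1 original image -> {len(augmented)} augmented variants")
    ```
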
  • Artificial Intelligence

    What are Transformers in Artificial Intelligence?

    Transformers are a type of neural network architecture that transforms or changes an input sequence into an output sequence. They do this by learning context and tracking relationships between sequence components. For example, consider this input sequence: "What is the color of the sky?" The transformer model uses an internal mathematical representation that identifies the relevancy and relationship between the words color, sky, and blue. It uses that knowledge to generate the output: "The sky is blue." 

    Organizations use transformer models for all types of sequence conversions, from speech recognition to machine translation and protein sequence analysis.
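
    At the heart of how a transformer tracks relationships between sequence components is scaled dot-product attention. The numpy sketch below shows only that one operation on random toy matrices; a full transformer adds embeddings, multiple heads, and feed-forward layers around it.

    ```python
    # Scaled dot-product attention: the mechanism transformers use to
    # weigh how relevant each position in the sequence is to every other.
    import numpy as np

    def attention(Q, K, V):
        scores = Q @ K.T / np.sqrt(K.shape[-1])                   # pairwise relevance
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax
        return weights @ V                                        # context-mixed values

    rng = np.random.default_rng(0)
    seq_len, d_model = 6, 8                                       # e.g. 6 tokens, 8 dims
    Q = rng.normal(size=(seq_len, d_model))
    K = rng.normal(size=(seq_len, d_model))
    V = rng.normal(size=(seq_len, d_model))
    print(attention(Q, K, V).shape)                               # (6, 8)
    ```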

  • Artificial Intelligence

    What is RAG (Retrieval-Augmented Generation)?

    Retrieval-Augmented Generation (RAG) is the process of optimizing the output of a large language model so that it references an authoritative knowledge base outside of its training data sources before generating a response. Large language models (LLMs) are trained on vast volumes of data and use billions of parameters to generate original output for tasks like answering questions, translating languages, and completing sentences. RAG extends the already powerful capabilities of LLMs to specific domains or an organization's internal knowledge base, all without the need to retrain the model. It is a cost-effective approach to improving LLM output so it remains relevant, accurate, and useful in various contexts.
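
    A stripped-down sketch of the RAG flow: retrieve the most relevant passage from an external knowledge base, then place it in the prompt before the model generates an answer. The documents are invented, TF-IDF retrieval stands in for a production vector database, and call_llm is a hypothetical placeholder rather than a real API.

    ```python
    # Minimal RAG sketch: retrieve relevant text, then augment the prompt.
    # TF-IDF retrieval stands in for a vector database; call_llm is a
    # hypothetical placeholder for an actual LLM call.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    knowledge_base = [
        "Refunds are processed within 5 business days of approval.",
        "The support desk is open 9am-5pm on weekdays.",
        "Enterprise plans include 24/7 phone support.",
    ]

    def retrieve(question: str) -> str:
        vectorizer = TfidfVectorizer()
        doc_vectors = vectorizer.fit_transform(knowledge_base)
        query_vector = vectorizer.transform([question])
        best = cosine_similarity(query_vector, doc_vectors).argmax()
        return knowledge_base[best]

    def call_llm(prompt: str) -> str:        # placeholder, not a real API
        return f"(model response to: {prompt!r})"

    question = "How long do refunds take?"
    context = retrieve(question)
    answer = call_llm(f"Use this context to answer.\nContext: {context}\nQuestion: {question}")
    print(answer)
    ```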

  • Artificial Intelligence

    What is Transfer Learning?

    Transfer learning (TL) is a machine learning (ML) technique where a model pre-trained on one task is fine-tuned for a new, related task. Training a new ML model is a time-consuming and intensive process that requires a large amount of data, computing power, and several iterations before it is ready for production. Instead, organizations use TL to retrain existing models on related tasks with new data. For example, if a machine learning model can identify images of dogs, it can be trained to identify cats using a smaller image set that highlights the feature differences between dogs and cats.
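
    One common way to apply the dogs-to-cats idea in practice (assumed here, not prescribed by the article) is to reuse a pretrained image model, freeze its layers, and retrain only a new classification head, as in this PyTorch/torchvision sketch.

    ```python
    # Transfer-learning sketch with PyTorch: reuse a pretrained backbone,
    # freeze it, and train only a new head for the new, related task.
    # Requires a recent torchvision; downloads pretrained weights on first run.
    import torch
    import torchvision

    model = torchvision.models.resnet18(weights="IMAGENET1K_V1")  # pretrained model
    for param in model.parameters():
        param.requires_grad = False                   # keep learned features fixed

    model.fc = torch.nn.Linear(model.fc.in_features, 2)  # new head: dog vs. cat

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = torch.nn.CrossEntropyLoss()

    # One illustrative training step on a fake batch of 4 images.
    images = torch.randn(4, 3, 224, 224)
    labels = torch.tensor([0, 1, 0, 1])
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    ```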


Learn more about cloud comparisons

The Cloud Comparisons page features content that helps readers understand common use cases for when to use one cloud solution or another. Compare and contrast cloud solutions and learn the nuances of different use cases that work best for your situation. 

Get started

Companies of all sizes across all industries are transforming their businesses every day using AWS. Contact our experts and start your own AWS Cloud journey today.