What is artificial intelligence?

Artificial intelligence (AI) is the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, creation, and image recognition. Modern organizations collect large volumes of data from diverse sources like smart sensors, human-generated content, monitoring tools, and system logs. Artificial intelligence technologies are self-learning systems that derive meaning from data. They can apply that knowledge to solve new problems in human-like ways. For example, AI technology can respond meaningfully to human conversations, create original images and text, and make decisions based on real-time data inputs.

Benefits of artificial intelligence

Your organization can integrate artificial intelligence capabilities to optimize business processes, improve customer experiences, and accelerate innovation.

Automate intelligently

Organizations have been automating digital processes for years. However, artificial intelligence introduces a new level of depth and problem-solving ability to the process. For example, an invoice processing system powered by AI technologies can automatically scan and record invoice data from any invoice template. It can also classify invoices based on criteria such as supplier, geography, and department. It can even check for errors and process payments with minimal oversight.

Boost productivity

Knowledge workers often perform tasks related to searching and discovering critical information. For example, healthcare workers look up patient records, hospital policies, and medicine databases, and airline workers look up flight information. Time spent finding and consolidating information from various sources distracts employees from their primary role. AI technologies can provide consolidated and summarized information in context and on time. Intelligent search and discovery functions powered by artificial intelligence can boost employee satisfaction and productivity in any industry. For example, Ryanair, Europe's largest airline, built an AI system to assist employees, enhancing productivity and satisfaction.

Solve complex problems

Many industries grapple with complex problems that require analyzing millions of past transactions and discovering hidden patterns—for example, fraud detection, machinery maintenance, and product innovation. AI systems can collect and analyze data at scale from various sources to support complex human decision-making. For example, answering when a particular mechanical component should be repaired requires analyzing machine data like temperature and speed alongside usage reports and past maintenance schedules. Artificial intelligence can take all this data, discover hidden connections, and suggest optimal maintenance schedules for significant cost savings. Similarly, it can support more complex fields like genomic research and drug discovery.

Create new customer experiences

Organizations use artificial intelligence to create customized customer experiences with greater security and speed. For example, AI systems can combine customer profile data, such as preferences and digital behavior, with other product or service data to create personalized reports, recommendations, and action plans. Customers can find real-time answers to questions or discover new products and services without waiting for live customer support. For example, Lonely Planet used artificial intelligence to generate curated travel itineraries for customers while cutting itinerary generation costs by 80%.

Find more AI use cases and benefits »

Artificial intelligence examples

Artificial intelligence has a wide range of applications. While not an exhaustive list, here's a selection of examples highlighting AI's diverse use cases.

Chatbots and smart assistants

AI-powered chatbots and smart assistants engage in sophisticated, human-like conversations. They can understand context and generate coherent responses to complex natural language queries from customers. They excel in customer support, virtual assistance, and content generation, providing personalized interactions. Their continuous learning capability allows them to adapt and improve their performance over time, enhancing user experience and efficiency.

For example, Deriv, one of the world’s largest online brokers, faced challenges accessing vast amounts of data distributed across various platforms. It implemented an AI-powered assistant to retrieve and process data from multiple sources across customer support, marketing, and recruiting. With AI, Deriv reduced the time spent onboarding new hires by 45 percent and minimized recruiting task times by 50 percent.

Read about chatbots »

Intelligent document processing

Intelligent document processing (IDP) translates unstructured document formats into usable data. For example, it converts business documents like emails, images, and PDFs into structured information. IDP uses AI technologies like natural language processing (NLP), deep learning, and computer vision to extract, classify, and validate data. 
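
To make this concrete, here is a minimal Python sketch that uses Amazon Textract through the AWS SDK for Python (boto3) to pull printed text out of a scanned document. It assumes AWS credentials are configured, and the bucket and file names are placeholders.

```python
import boto3

# Minimal sketch: extract printed text from a scanned invoice stored in S3.
# Bucket and object names are placeholders.
textract = boto3.client("textract")

response = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "example-bucket", "Name": "invoices/invoice-001.png"}}
)

# Keep only LINE blocks, which hold full lines of recognized text.
lines = [block["Text"] for block in response["Blocks"] if block["BlockType"] == "LINE"]
print("\n".join(lines))
```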

For example, HM Land Registry (HMLR) handles property titles for over 87 percent of England and Wales. HMLR caseworkers compare and review complex legal documents related to property transactions. The organization deployed an AI application to automate document comparison, cutting review time by 50 percent and accelerating the approval of property transfers. For more information, read how HMLR uses Amazon Textract.

Read about IDP »

Application performance monitoring

Application performance monitoring (APM) is the process of using software tools and telemetry data to monitor the performance of business-critical applications. AI-based APM tools use historical data to predict issues before they occur. They can also resolve issues in real time by suggesting effective solutions to your developers. This strategy keeps applications running effectively and addresses bottlenecks.
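
Real APM tools are far more sophisticated, but as a simplified illustration of the underlying idea, comparing new telemetry against a historical baseline, the Python sketch below flags latency readings that deviate sharply from past behavior; the sample values are made up.

```python
import statistics

# Historical response times (ms) for an endpoint, gathered by a monitoring tool.
history = [102, 98, 110, 105, 99, 101, 97, 108, 103, 100]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(latency_ms: float, threshold: float = 3.0) -> bool:
    """Flag readings more than `threshold` standard deviations from the baseline."""
    return abs(latency_ms - mean) > threshold * stdev

for reading in [104, 99, 187]:
    status = "ANOMALY" if is_anomalous(reading) else "ok"
    print(f"{reading} ms -> {status}")
```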

For example, Atlassian makes products to streamline teamwork and organization. Atlassian uses AI-based APM tools to continuously monitor applications, detect potential issues, and prioritize severity. With these capabilities, teams can rapidly respond to ML-powered recommendations and resolve performance declines.

Read about APM »

Predictive maintenance

AI-enhanced predictive maintenance uses large volumes of data to identify issues that could lead to downtime in operations, systems, or services. Predictive maintenance allows businesses to address potential problems before they occur, reducing downtime and preventing disruptions.

For example, Baxter operates 70 manufacturing sites worldwide, running 24/7 to deliver medical technology. Baxter employs predictive maintenance to automatically detect abnormal conditions in industrial equipment. Users can implement effective solutions ahead of time to reduce downtime and improve operational efficiency. To learn more, read how Baxter uses Amazon Monitron.

Read about predictive maintenance »

Medical research

Medical research uses AI to streamline processes, automate repetitive tasks, and process vast quantities of data. You can use AI technology in medical research to facilitate end-to-end pharmaceutical discovery and development, transcribe medical records, and improve time-to-market for new products.

As a real-world example, C2i Genomics uses artificial intelligence to run high-scale, customizable genomic pipelines and clinical examinations. Because the computational work is handled for them, researchers can focus on clinical performance and method development. Engineering teams also use AI to reduce resource demands, engineering maintenance, and non-recurring engineering (NRE) costs. For more details, read how C2i Genomics uses AWS HealthOmics.

Business analytics

Business analytics uses AI to collect, process, and analyze complex datasets. You can use AI analytics to forecast future values, uncover the root causes behind trends in your data, and automate time-consuming processes.

For example, Foxconn uses AI-enhanced business analytics to improve forecasting accuracy. It achieved an 8 percent increase in forecasting accuracy, leading to $533,000 in annual savings across its factories. Foxconn also uses business analytics to reduce wasted labor and increase customer satisfaction through data-driven decision-making.

What is the difference between machine learning, deep learning, and artificial intelligence?

Artificial intelligence (AI) is an umbrella term for different strategies and techniques that make machines more human-like. It includes everything from self-driving cars to robotic vacuum cleaners and smart assistants like Alexa. While machine learning and deep learning both fall under the AI umbrella, not all AI activities are machine learning or deep learning. For example, generative AI, which demonstrates human-like creative capabilities, is a very advanced form of deep learning.

Machine learning

While you may see the terms artificial intelligence and machine learning used interchangeably, machine learning is technically one of many branches of artificial intelligence. It is the science of developing algorithms and statistical models that find patterns and correlations in data. Computer systems use machine learning algorithms to process large quantities of historical data and identify data patterns. In the current context, machine learning refers to a set of statistical techniques, called machine learning models, that you can use independently or to support other, more complex AI techniques.
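
As a minimal illustration of this idea, the following Python sketch (assuming scikit-learn is installed) fits a simple statistical model to a toy set of historical records and applies the learned pattern to new data; the data and feature meanings are hypothetical.

```python
from sklearn.linear_model import LogisticRegression

# Toy "historical" data: [invoice amount, days overdue] -> 1 if payment defaulted.
X_history = [[120, 0], [950, 40], [300, 5], [1800, 60], [75, 2], [2200, 55]]
y_history = [0, 1, 0, 1, 0, 1]

# The model learns a statistical relationship between the features and the outcome.
model = LogisticRegression().fit(X_history, y_history)

# Apply the learned pattern to new, unseen records.
print(model.predict([[200, 1], [1500, 50]]))
```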

Read about machine learning »

Read about AI vs. machine learning »

Deep learning

Deep learning takes machine learning one step further. Deep learning models use deep neural networks, which contain multiple layers of artificial neurons that work together to learn and process information. They can comprise millions of software components, each performing a small mathematical operation on a small unit of data to solve a larger problem. For example, they process individual pixels in an image to classify that image. Modern AI systems often combine multiple deep neural networks to perform complex tasks like writing poems or creating images from text prompts.
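
As a simplified sketch of what such a network looks like in code (assuming PyTorch is installed), the untrained model below flattens an image into individual pixel values and passes them through two layers of artificial neurons to produce class scores.

```python
import torch
from torch import nn

# A tiny illustrative network: flattens a 28x28 grayscale image into 784 pixel
# values and passes them through two layers of artificial neurons.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),  # each of 128 neurons weighs all 784 pixels
    nn.ReLU(),
    nn.Linear(128, 10),       # 10 outputs, one score per image class
)

fake_image = torch.rand(1, 28, 28)   # stand-in for a real image
class_scores = model(fake_image)
print(class_scores.argmax(dim=1))    # index of the highest-scoring class
```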

Read about deep learning »

[Figure: Venn diagram showing the relationship between machine learning, deep learning, and artificial intelligence]

How did artificial intelligence technology develop?

Artificial intelligence technology has become increasingly popular as generative AI tools have gained prominence in the public space. However, the technology has been around for decades and is continuously maturing. In his seminal 1950 paper, "Computing Machinery and Intelligence," Alan Turing considered whether machines could think, framing machine intelligence as a theoretical and philosophical question. The term artificial intelligence itself was coined a few years later by John McCarthy for the 1956 Dartmouth Summer Research Project on Artificial Intelligence.

The past

Between 1957 and 1974, developments in computing allowed computers to store more data and process it faster. During this period, scientists further developed machine learning (ML) algorithms. The progress in the field led agencies like the Defense Advanced Research Projects Agency (DARPA) to fund AI research. At first, the main goal of this research was to discover whether computers could transcribe and translate spoken language.

Through the 1980s, increased funding and an expanding algorithmic toolkit streamlined AI development. David Rumelhart and John Hopfield published papers on neural network techniques, showing that computers could learn from experience.

From 1990 to the early 2000s, scientists achieved many core goals of AI, such as beating the reigning world chess champion, as IBM's Deep Blue did against Garry Kasparov in 1997.

The present

Present-day artificial intelligence primarily uses foundation models and large language models to perform complex digital tasks. Foundation models are deep learning models trained on a broad spectrum of generalized and unlabeled data. Based on input prompts, they can perform a wide range of disparate tasks with a high degree of accuracy. Organizations typically take existing, pre-trained foundation models and customize them with internal data to add AI capabilities to existing applications or create new AI applications.

It is important to note that many organizations continue to use machine learning models for digital tasks, and these models can outperform foundation models for certain use cases. Artificial intelligence developers can flexibly pick and choose the best model for each specific task.

Read more about foundation models »

The future

With more data and processing power available than in previous decades, AI research is now more common and accessible, and it is expanding toward artificial general intelligence. Artificial general intelligence (AGI) is a field of theoretical AI research that attempts to create software with human-like intelligence and the ability to self-teach. The aim is for the software to perform tasks for which it was not necessarily trained or developed.

Current artificial intelligence technologies all function within a set of predetermined parameters. For example, AI models trained in image recognition and generation cannot build websites. AGI is a theoretical pursuit to develop AI systems with autonomous self-control, reasonable self-understanding, and the ability to learn new skills. Such a system could solve complex problems in settings and contexts that it was not taught at the time of its creation. AGI with human abilities remains a theoretical concept and research goal.

Read more about artificial general intelligence »

How does artificial intelligence work?

Artificial intelligence systems use a range of technologies to work. The specifics vary, but the core principle remains the same: they convert all data types, such as text, images, videos, and audio, into numerical representations and mathematically identify patterns and relationships among them. Hence, artificial intelligence technologies require training: they are exposed to large volumes of existing data to "learn," much as humans learn from existing knowledge archives.
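
A highly simplified illustration of this numerical view of data follows; the tiny vocabulary and image are invented for the example.

```python
# Everything becomes numbers: a simplified view of how AI systems "see" data.

text = "cats purr"
# Text can be mapped to numeric token IDs (real systems use learned tokenizers).
vocabulary = {"cats": 0, "purr": 1, "dogs": 2, "bark": 3}
token_ids = [vocabulary[word] for word in text.split()]
print(token_ids)  # [0, 1]

# An image is already numeric: a grid of pixel intensities between 0 and 255.
tiny_image = [
    [0, 255, 0],
    [255, 255, 255],
    [0, 255, 0],
]
# Models then learn mathematical relationships between these numbers and labels.
print(tiny_image)
```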

Some of the technologies that make artificial intelligence work are given below.

Neural networks

Artificial neural networks form the core of artificial intelligence technologies. They mirror the processing that happens in the human brain. A brain contains billions of neurons that process and analyze information. An artificial neural network uses artificial neurons that process information together. Each artificial neuron, or node, uses mathematical calculations to process information and contribute to solving complex problems.
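
As an illustrative sketch, the following Python function implements a single artificial neuron: it weights its inputs, sums them with a bias, and applies an activation function; the input values and weights are arbitrary.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weight each input, sum, then apply an activation."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))   # sigmoid activation

# Three inputs flowing into one node of a network.
print(neuron(inputs=[0.5, 0.2, 0.9], weights=[0.4, -0.6, 0.8], bias=0.1))
```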

Read about neural networks »

Natural language processing

Natural language processing (NLP) uses neural networks to interpret, understand, and derive meaning from text data. It uses various computing techniques that specialize in decoding and comprehending human language. These techniques allow machines to process words, grammar, syntax, and word combinations to understand human text and even generate new text. Natural language processing is critical in tasks like summarizing documents, powering chatbots, and conducting sentiment analysis.
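
As a rough illustration, the Python sketch below uses Amazon Comprehend through boto3 to run sentiment analysis on a short piece of text; it assumes AWS credentials are configured.

```python
import boto3

# Minimal sketch of sentiment analysis with Amazon Comprehend.
comprehend = boto3.client("comprehend")

response = comprehend.detect_sentiment(
    Text="The support team resolved my issue quickly. Great experience!",
    LanguageCode="en",
)

print(response["Sentiment"])        # e.g. POSITIVE
print(response["SentimentScore"])   # confidence scores per sentiment class
```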

Read about NLP »

Computer vision

Computer vision uses deep learning techniques to extract information and insights from videos and images. Using computer vision, a computer can understand images just like a human would. You can use it to monitor online content for inappropriate images, recognize faces, and classify image details. It is critical in self-driving cars and trucks to monitor the environment and make split-second decisions.
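
As a rough illustration, the sketch below uses Amazon Rekognition through boto3 to label the contents of an image stored in Amazon S3; the bucket and file names are placeholders.

```python
import boto3

# Minimal sketch: label the contents of an image in S3 with Amazon Rekognition.
rekognition = boto3.client("rekognition")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "photos/street.jpg"}},
    MaxLabels=5,
)

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```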

Read about computer vision »

Speech recognition

Speech recognition software uses deep learning models to interpret human speech, identify words, and detect meaning. The neural networks can transcribe speech to text and indicate vocal sentiment. You can use speech recognition in technologies like virtual assistants and call center software to identify meaning and perform related tasks.
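
As a rough illustration, the sketch below starts an asynchronous speech-to-text job with Amazon Transcribe through boto3; the job name and audio location are placeholders.

```python
import boto3

# Minimal sketch: start an asynchronous speech-to-text job with Amazon Transcribe.
transcribe = boto3.client("transcribe")

transcribe.start_transcription_job(
    TranscriptionJobName="support-call-0001",
    Media={"MediaFileUri": "s3://example-bucket/calls/call-0001.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
)

# Poll for completion, then fetch the transcript from the returned URI.
job = transcribe.get_transcription_job(TranscriptionJobName="support-call-0001")
print(job["TranscriptionJob"]["TranscriptionJobStatus"])
```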

Read about speech-to-text »

Generative AI

Generative AI refers to artificial intelligence systems that can create new content and artifacts, such as images, videos, text, and audio, from simple text prompts. Unlike past AI, which was largely limited to analyzing data, generative AI leverages deep learning and massive datasets to produce high-quality, human-like creative outputs. While it enables exciting creative applications, it also raises concerns around bias, harmful content, and intellectual property. Overall, generative AI represents a major evolution in the ability of AI systems to generate human language, content, and artifacts in a human-like manner.
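
As a rough illustration, the sketch below sends a prompt to a foundation model through the Amazon Bedrock Converse API using boto3; the model ID is only an example, and available models vary by account and Region.

```python
import boto3

# Minimal sketch: generate text from a prompt with the Amazon Bedrock Converse API.
bedrock = boto3.client("bedrock-runtime")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": "Write a two-line poem about data."}]}],
)

print(response["output"]["message"]["content"][0]["text"])
```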

Read about generative AI »

What are the key components of AI application architecture?

Artificial intelligence architecture consists of three core layers. All of them run on IT infrastructure that provides the necessary compute and memory resources.

Layer 1: Data layer

AI is built upon various technologies like machine learning, natural language processing, and image recognition. Central to these technologies is data, which forms the foundational layer of AI. This layer primarily focuses on preparing the data for AI applications. 

Layer 2: Model layer

Organizations typically select one of many existing foundation models or large language models (LLMs). They then customize it with techniques that feed the model the up-to-date, organization-specific data it needs. This layer is pivotal for the AI system's decision-making capabilities.
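
Customization techniques vary, from fine-tuning to retrieval-augmented generation. As one simplified, self-contained illustration of the retrieval-augmented pattern, the sketch below injects an organization's own documents into a prompt before it would reach a model; the documents and helper functions are hypothetical.

```python
# Hypothetical internal documents the organization wants the model to draw on.
internal_docs = {
    "refund-policy": "Refunds are issued within 14 days of purchase.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(question: str) -> str:
    """Naive retrieval: return documents that share words with the question."""
    words = set(question.lower().split())
    hits = [text for text in internal_docs.values()
            if words & set(text.lower().split())]
    return "\n".join(hits)

def build_prompt(question: str) -> str:
    """Combine retrieved company data with the user's question for the model."""
    return f"Context:\n{retrieve(question)}\n\nQuestion: {question}"

print(build_prompt("How long do refunds take?"))
```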

Layer 3: Application layer

The third layer is the application layer, the customer-facing part of the AI architecture. You can ask AI systems to complete specific tasks, generate content, provide information, or make data-driven decisions. The application layer is how end users interact with AI systems.

What are the challenges in artificial intelligence implementation?

Several challenges complicate AI implementation and usage. The following roadblocks are some of the most common challenges.

AI governance

Data governance policies must abide by regulatory restrictions and privacy laws. To implement AI, you must manage data quality, privacy, and security. You are accountable for customer data and privacy protection. To manage data security, your organization should clearly understand how AI models use and interact with customer data across each layer.

Technical difficulties

Training AI with machine learning consumes vast resources. A high threshold of processing power is essential for deep learning technologies to function. You must have robust computational infrastructure to run AI applications and train your models. Processing power can be costly and limit your AI systems' scalability.

Data limitations

You need to input vast volumes of data to train unbiased AI systems. You must have sufficient storage capacity to handle and process the training data. Equally, you must have effective management and data quality processes in place to ensure the accuracy of the data you use for training.

Responsible AI

Responsible AI is AI development that considers the social and environmental impact of the AI system at scale. As with any new technology, artificial intelligence systems have a transformative effect on users, society, and the environment. Responsible AI requires enhancing the positive impact and prioritizing fairness and transparency in how AI is developed and used. It ensures that AI innovations and data-driven decisions avoid infringing on civil liberties and human rights. Organizations can find it challenging to build responsible AI while remaining competitive in the rapidly advancing AI space.

Read about responsible AI »

How can AWS support your artificial intelligence requirements?

AWS makes AI accessible to more people—from builders and data scientists to business analysts and students. With the most comprehensive set of AI services, tools, and resources, AWS brings deep expertise to over 100,000 customers to meet their business demands and unlock the value of their data. Customers can build and scale with AWS on a foundation of privacy, end-to-end security, and AI governance to transform at an unprecedented rate.

AI on AWS includes pre-trained AI services for ready-made intelligence and AI infrastructure to maximize performance and lower costs.

Examples of pre-trained services:

  • Amazon Rekognition automates, streamlines, and scales image recognition and video analysis.
  • Amazon Textract extracts printed text, analyzes handwriting, and automatically captures data from any document.
  • Amazon Transcribe converts speech to text, extracts critical business insights from video files, and improves business outcomes.

Examples of AI infrastructure:

  • Amazon Bedrock offers a choice of high-performing FMs and a broad set of capabilities. You can experiment with various top FMs and privately customize them with your data.
  • Amazon SageMaker offers tools to pre-train FMs from scratch so they can be used internally.
  • Amazon Elastic Compute Cloud (EC2) Trn1 instances, powered by AWS Trainium chips, are purpose-built for high-performance deep learning (DL) training of generative AI models.

Get started with AI on AWS by creating a free account today!
