
LG AI Research Develops Foundation Model Using Amazon SageMaker
LG AI Research built EXAONE—a foundation model that can be used to transform business processes—using Amazon SageMaker, broadening access to AI in various industries such as fashion, manufacturing, research, education, and finance.
Key Outcomes
1 year to develop the EXAONE AI engine
Scalability that supports linear scaling
35% reduction in the cost of building the AI engine
60% increase in data preparation speed
Overview
LG AI Research, the artificial intelligence (AI) research hub of South Korean conglomerate LG Group, was founded to promote AI as part of the group's digital transformation strategy to drive future growth. The research institute developed its foundation model, the EXAONE engine, within one year using Amazon SageMaker and Amazon FSx for Lustre. Built on Amazon Web Services (AWS) and trained on large-scale data, the foundation model mimics how humans think, learn, and take action on their own. The multipurpose foundation model can be employed across industries to carry out a range of tasks.
Opportunity | Developing a Super-Giant Multimodal AI
South Korean conglomerate LG Group collects vast amounts of data from its companies, which span home appliances, telecommunications, batteries, and pharmaceuticals. A key pillar of the group's digital transformation is developing AI technology and integrating AI into its products and services. The group established LG AI Research to harness the power of AI in its digital transformation strategy, develop better customer experiences, and solve common industry challenges.

When LG AI Research decided to develop its next-generation foundation model, which takes inspiration from how the human brain works and has an advanced capacity for learning and making judgments, it searched for the most efficient machine learning (ML) platform to handle vast amounts of data and large-scale training and inference. The foundation model needed to train on dozens of terabytes of data to make human-like deductions and comprehend texts and images. The project also required a high-performance compute infrastructure and the flexibility to scale the number of parameters into the billions during training.

Workflow automation was also important, because multiple models or downstream tasks needed to run simultaneously. To meet these requirements, the institute evaluated an on-premises infrastructure, but the costs were too high, and it would have required 20 employees to configure and maintain the hardware, along with upgrading the GPUs every year and adding more GPUs to absorb workload spikes. Weighing all the challenges of an on-premises solution, LG AI Research decided that Amazon SageMaker was the best fit for the project.
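The requirements described above (distributed training across many GPUs, with dozens of terabytes of data served from a shared file system) can be sketched with the SageMaker Python SDK. This is a minimal, hypothetical illustration, not LG AI Research's actual configuration: the script name, IAM role, instance type and count, and FSx file system ID are all placeholders.

```python
# Hypothetical sketch of a distributed SageMaker training job that reads
# its dataset from Amazon FSx for Lustre. All identifiers below are
# placeholders, not values from the case study.
from sagemaker.pytorch import PyTorch
from sagemaker.inputs import FileSystemInput

estimator = PyTorch(
    entry_point="train.py",  # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_type="ml.p4d.24xlarge",  # GPU instances; actual choice unknown
    instance_count=8,  # scale out by raising the instance count
    framework_version="1.12",
    py_version="py38",
    # SageMaker's distributed data parallel library synchronizes gradients
    # across instances, so the training script can focus on the model itself.
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

# FSx for Lustre exposes the dataset as a high-throughput POSIX file
# system, avoiding a full copy of terabytes of data at job start.
train_data = FileSystemInput(
    file_system_id="fs-0123456789abcdef0",  # placeholder FSx ID
    file_system_type="FSxLustre",
    directory_path="/fsx/training-data",  # placeholder mount path
    file_system_access_mode="ro",
)

estimator.fit({"training": train_data})
```

Because the estimator's `instance_count` is the only knob that changes when scaling out, this setup supports the near-linear scaling the case study highlights: launching a larger job is a configuration change rather than a hardware procurement.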
Customer Quote
“By using Amazon SageMaker’s high-performance distributed training infrastructure, researchers can focus solely on model training instead of managing infrastructure.”
Kim Seung Hwan
Head of LG AI Research Vision Lab
Solution | Building the Foundation Model EXAONE Using Amazon SageMaker
EXAONE’s Architecture Diagram on AWS