
Qingwei Li


Qingwei Li is a Machine Learning Specialist at Amazon Web Services. He received his Ph.D. in Operations Research after he broke his advisor's research grant account and failed to deliver the Nobel Prize he promised. Currently he helps customers in the financial services and insurance industries build machine learning solutions on AWS. In his spare time, he likes reading and teaching.

Anthropic’s Claude 3.5 Sonnet ranks number 1 for business and finance in S&P AI Benchmarks by Kensho

Anthropic’s Claude 3.5 Sonnet currently ranks at the top of S&P AI Benchmarks by Kensho, which assesses large language models (LLMs) for finance and business. Kensho is the AI Innovation Hub for S&P Global. Using Amazon Bedrock, Kensho was able to quickly run Anthropic’s Claude 3.5 Sonnet through a challenging suite of business and financial […]

Deploy large language models on AWS Inferentia2 using large model inference containers

You don’t have to be an expert in machine learning (ML) to appreciate the value of large language models (LLMs). Better search results, image recognition for the visually impaired, creating novel designs from text, and intelligent chatbots are just some examples of how these models are facilitating various applications and tasks. ML practitioners keep improving […]
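Large model inference (LMI) containers are configured through a `serving.properties` file that tells DJL Serving how to load and shard the model onto the Inferentia2 NeuronCores. The fragment below is a minimal sketch only; the model ID, tensor-parallel degree, and sequence length are placeholder values, and exact option names can vary by container version:

```properties
# serving.properties — illustrative LMI configuration for Inferentia2
engine=Python
option.entryPoint=djl_python.transformers_neuronx
option.model_id=facebook/opt-1.3b
option.tensor_parallel_degree=2
option.n_positions=512
option.dtype=fp16
```

The `tensor_parallel_degree` setting controls how many NeuronCores the model is partitioned across, so it should match the core count of the chosen Inf2 instance.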

Creating Amazon SageMaker Studio domains and user profiles using AWS CloudFormation

February 2021 Update: Customers can now use native AWS CloudFormation code templates to model the infrastructure setup for Amazon SageMaker Studio and configure its access for users in their organizations at scale. For more information, please see the announcement post. Amazon SageMaker Studio is the first fully integrated development environment (IDE) for machine learning […]
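With the native resource types, a Studio domain and a user profile can be declared directly in a template. The sketch below is illustrative only; the domain name, VPC and subnet IDs, execution role ARN, and user profile name are all placeholders you would replace with your own values:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  # Studio domain attached to an existing VPC (placeholder IDs)
  StudioDomain:
    Type: AWS::SageMaker::Domain
    Properties:
      DomainName: my-studio-domain
      AuthMode: IAM
      VpcId: vpc-0123example
      SubnetIds:
        - subnet-0123example
      DefaultUserSettings:
        ExecutionRole: arn:aws:iam::111122223333:role/StudioExecutionRole
  # User profile created inside the domain above
  StudioUser:
    Type: AWS::SageMaker::UserProfile
    Properties:
      DomainId: !Ref StudioDomain
      UserProfileName: data-scientist-1
```

Because `!Ref` on an `AWS::SageMaker::Domain` resource returns the domain ID, the user profile is wired to the domain without any hard-coded identifiers, which is what makes this pattern repeatable at scale.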

Deploying your own data processing code in an Amazon SageMaker Autopilot inference pipeline

The machine learning (ML) model-building process requires data scientists to manually prepare data features, select an appropriate algorithm, and optimize its model parameters. It involves a lot of effort and expertise. Amazon SageMaker Autopilot removes the heavy lifting required by this ML process. It inspects your dataset, generates several ML pipelines, and compares their performance […]

Fine-tuning a PyTorch BERT model and deploying it with Amazon Elastic Inference on Amazon SageMaker

November 2022: The solution described here is not the latest best practice. The new HuggingFace Deep Learning Container (DLC) is available in Amazon SageMaker (see Use Hugging Face with Amazon SageMaker). For customers training BERT models, the recommended pattern is to use the HuggingFace DLC, as shown in Fine-tuning Hugging Face DistilBERT with the Amazon Reviews Polarity dataset. […]