Overview
This is a repackaged open-source software product; additional charges apply for expert support, pre-configuration, and ongoing maintenance services. The Multi-Model AI Inference AMI provides a fully pre-configured, GPU-accelerated environment for text-to-image generation. It integrates three leading diffusion-based models: Stability AI Stable Diffusion 3.5 Medium, Black Forest Labs FLUX.1-dev, and ByteDance SDXL-Lightning, each optimized for high-performance inference on AWS GPU instances. NVIDIA GPU drivers, the CUDA Toolkit, PyTorch, and Diffusers come pre-loaded, ensuring seamless GPU acceleration and stable performance. Each model runs in a dedicated Docker container and is exposed through a Gradio web interface, enabling immediate browser-based access for image generation. Users can also preload their desired model using our configuration file and intuitive web interface.
This AMI includes:
- Ubuntu Server OS
- NVIDIA GPU Driver
- CUDA Toolkit
- Docker Engine & NVIDIA Container Toolkit
- PyTorch and Hugging Face Diffusers
- Gradio Web Interface
- AWS CLI, Git
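As a rough illustration of what the pre-loaded PyTorch + Diffusers stack enables, the sketch below runs Stable Diffusion 3.5 Medium through the public Hugging Face diffusers API. It is an illustrative example under stated assumptions, not the AMI's own launch script: the model id, prompt, and output path are assumptions, and the AMI may load weights from a local cache rather than the Hub.

```python
# Minimal sketch, assuming the diffusers library and model weights are available
# (as described for this AMI) and a CUDA-capable GPU is attached.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium",   # public Hugging Face model id
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")  # relies on the pre-installed NVIDIA driver and CUDA Toolkit

image = pipe(
    prompt="a watercolor painting of a lighthouse at dawn",
    num_inference_steps=28,
    guidance_scale=4.5,
).images[0]
image.save("lighthouse.png")
```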
Highlights
- Integrated CUDA and PyTorch runtime for real-time, parallelized image generation. Each model includes a Gradio interface accessible via browser (a minimal sketch follows this list). Users can also preload their desired model using our configuration file and intuitive web interface.
- Open-source foundation allows model fine-tuning, container replacement, and scaling for production use. No embedded SSH keys or credentials; one-time password authentication and isolated containerized architecture.
- Step-by-step user guide and configuration instructions are provided for deployment, model selection, and customization. Refer to our User Manual for more details.
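The Gradio interfaces mentioned above are served by the AMI itself; the sketch below only illustrates the general shape of such an app. The `generate` function, the placeholder image, and port 7862 are assumptions for illustration, not the AMI's actual code.

```python
# Minimal sketch of a Gradio text-to-image app of the kind described above.
# Assumption: in the AMI, generate() would call one of the bundled diffusion
# pipelines; here it returns a blank image so the sketch runs standalone.
import gradio as gr
from PIL import Image

def generate(prompt: str) -> Image.Image:
    # placeholder for a call into the selected diffusion pipeline
    return Image.new("RGB", (512, 512), "white")

demo = gr.Interface(
    fn=generate,
    inputs=gr.Textbox(label="Prompt"),
    outputs=gr.Image(label="Generated image"),
    title="Multi-Model Text-to-Image Inference",
)
demo.launch(server_name="0.0.0.0", server_port=7862)  # one of the ports used by the AMI
```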
Details

Pricing
| Dimension | Cost/hour |
|---|---|
| g5.4xlarge (Recommended) | $0.05 |
| g6.8xlarge | $0.05 |
| g5.8xlarge | $0.05 |
| g6.4xlarge | $0.05 |
| p3.16xlarge | $0.05 |
| p4d.24xlarge | $0.05 |
| p5.48xlarge | $0.05 |
| p3.8xlarge | $0.05 |
| p5.4xlarge | $0.05 |
| g5.16xlarge | $0.05 |
Vendor refund policy
Refunds will only be issued for identified stack issues. Refunds will not be provided for infrastructure failures, downtime resulting from misconfiguration, or any other issues with AWS infrastructure.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
64-bit (x86) Amazon Machine Image (AMI)
Amazon Machine Image (AMI)
An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.
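If you prefer scripting over the console, launching an instance from an AMI can be done with boto3, as sketched below. The AMI id, key pair, and security group id are hypothetical placeholders; g5.4xlarge is the recommended instance type from the pricing table above.

```python
# Sketch: launching an EC2 instance from an AMI with boto3.
# All ids below are placeholders; substitute the Marketplace AMI id for your region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # hypothetical AMI id
    InstanceType="g5.4xlarge",                  # recommended GPU instance type
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                      # hypothetical key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],  # hypothetical security group
)
print(response["Instances"][0]["InstanceId"])
```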
Version release notes
We proudly introduce the Multi-Model Generative AI AMI, featuring pre-configured support for three state-of-the-art diffusion models: Stability AI Stable Diffusion 3.5 Medium, Black Forest Labs FLUX.1-dev, and ByteDance SDXL-Lightning. This release is optimized for GPU acceleration on AWS, delivering fast, high-quality image generation and seamless web-based inference through integrated Gradio applications. The AMI comes pre-loaded with NVIDIA drivers, the CUDA Toolkit, PyTorch, and Diffusers, ensuring top-tier performance for AI artists, developers, and researchers. Automation scripts handle environment setup and model initialization, enabling users to start generating images immediately after launch, with no manual configuration required. Users can also preload their desired model using our configuration file and intuitive web interface.
Additional details
Usage instructions
Follow these steps to get started:
- Once the instance is in the running state, copy its public IP address.
- Connect to the instance over SSH as the ubuntu user, using that public IP.
- The terminal displays the three available models; choose the one you want by entering its option number.
- The model takes some time to launch. Once it is running, the terminal shows the port it is listening on.
- Before opening the public IP in your browser, make sure the instance's security group allows inbound traffic on ports 7861, 7862, and 7863 (a scripted sketch for this step follows the list).
- Open http://<publicIP>:7862 in your browser; use ports 7861 and 7863 in the same way for the other models.
- The selected model's web interface then loads in your browser.
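For the security-group step above, the ports can be opened from the console or scripted; below is a boto3 sketch. The group id and CIDR range are placeholders, and you should restrict access to your own IP range rather than opening the ports to the internet.

```python
# Sketch: allowing inbound TCP 7861-7863 (the Gradio ports) on the instance's
# security group. GroupId and CidrIp are placeholders for illustration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # hypothetical: the group attached to the instance
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 7861,
        "ToPort": 7863,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "Gradio web UIs"}],
    }],
)
```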
Refer to our user manual for the Yobitel Multi-Model Text-to-Image Inference server at https://www.yobitel.com/single-post/yobitel-multi-model-text-to-image-inference-server for step-by-step instructions. Please contact Yobitel customer support if you require further assistance. Email: support@yobitel.com
Support
Vendor support
We, Yobitel, a Cloud-Native Application Stack and Cloud Consulting Services company, offer Free Training, Post Migration & Go-Live support, and Enhanced Care support with AWS Chime 24/7 support to ensure a smooth transition. Our team of experts is well-versed in AWS Managed Cloud Services and provides businesses with the guidance and support needed for a successful transition to the cloud. Learning Resources: Yobitel - Cloud Native Service Provider Resource URL:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.