Overview
This is a repackaged open source software product wherein additional charges apply for packaging, hardening, preconfiguration, and maintenance of a CPU-based LLM server deployment.
The AMI includes Ollama and Open WebUI configured to run as system services, firewall rules for the required ports, and a curated set of preloaded CPU-friendly models for faster initial use. Additional value comes from deployment-ready defaults, integration setup, and ongoing image maintenance updates, so customers can launch and operate the stack with reduced setup effort. The packaged deployment exposes the Ollama API on port 11434 and Open WebUI on port 8080, with both services configured to start at boot and persist across reboots. The open-source components remain available under their respective licenses.
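As a quick post-launch sanity check, the Ollama endpoint described above can be exercised from any machine that can reach the instance. The sketch below only builds a request for Ollama's `/api/generate` endpoint; the `<public-ip>` placeholder and the model name `llama3.2` are illustrative assumptions, not defaults verified from this AMI.

```python
import json
from urllib import request

# Hypothetical instance address; replace with your EC2 public IP.
OLLAMA_URL = "http://<public-ip>:11434"

def build_generate_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return request.Request(
        f"{base_url}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# "llama3.2" is a placeholder model name, not necessarily part of the curated set.
req = build_generate_request(OLLAMA_URL, "llama3.2", "Hello")
print(req.full_url)  # http://<public-ip>:11434/api/generate
```

Sending the request with `urllib.request.urlopen(req)` returns a JSON body containing the model's response once the service is reachable.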
Highlights
- Security Best Practices: The image is hardened with CIS-aligned baseline controls, unnecessary services reduced, and secure-by-default configuration applied for cloud deployment. This provides a stronger starting point for teams that require improved security posture from first boot.
- Regular Maintenance Updates: This AMI is updated regularly to include current package updates, security patches, and tested configuration improvements across the operating system and bundled components. The goal is to reduce exposure to stale dependencies while keeping launches reliable.
- Preconfigured LLM Stack: Ollama and Open WebUI are preinstalled and configured as system services with required firewall rules in place. The deployment includes curated CPU-friendly models so users can start inference and UI-based interaction quickly without manual installation and service wiring.
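The stack described in the highlights exposes two TCP ports (11434 for the Ollama API and 8080 for Open WebUI, per the overview). A minimal reachability probe, assuming those default ports and a placeholder public IP, might look like:

```python
import socket

# Default ports from the packaged deployment: Ollama API and Open WebUI.
SERVICES = {"ollama": 11434, "open-webui": 8080}

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace "<public-ip>" with the instance's public IP after launch.
for name, port in SERVICES.items():
    status = "reachable" if port_open("<public-ip>", port) else "unreachable"
    print(f"{name} ({port}): {status}")
```

If a port reports unreachable after launch, check the instance's security group inbound rules as well as the on-host firewall configuration.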
Details
Pricing
| Dimension | Cost/hour |
|---|---|
| t3.large (Recommended) | $0.04 |
| t2.micro | $0.04 |
| t3.micro | $0.04 |
| r8i-flex.large | $0.04 |
| r8i.large | $0.04 |
| r8id.xlarge | $0.04 |
| r5.large | $0.04 |
| r5d.xlarge | $0.04 |
| r3.xlarge | $0.04 |
| r7i.large | $0.04 |
Vendor refund policy
All sales are final. Due to the nature of digital infrastructure products, we do not offer refunds once the AMI has been launched. Please review the product details and usage instructions carefully before purchase. If you experience technical issues, our support team is available to assist at support@prezelfy.com.
Delivery details
64-bit (x86) Amazon Machine Image (AMI)
An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.
Version release notes
This release introduces a preconfigured CPU-based LLM server stack that combines Ollama and Open WebUI in a deployment-ready AMI. The image includes service wiring, startup configuration, and network defaults for faster launch and reduced manual setup.
What is included in this version:
- Ollama installed and configured as a system service for local model serving.
- Open WebUI installed and configured as a system service for browser-based interaction.
- Curated CPU-friendly model set preloaded during build, with validation to ensure required models are present.
- First-boot model preload fallback to recover automatically if models are missing at launch time.
- Persistent and explicit model storage path configuration for consistent runtime behavior.
- Firewall configuration for required ports, including Open WebUI on 8080 and the Ollama API on 11434.
- Improved service reliability updates, including corrected Ollama binary path resolution and runtime dependency handling.
- Operational hardening and baseline cloud-ready defaults for production-oriented deployments.

User-visible outcomes:
- Faster time to first prompt with reduced post-launch configuration.
- Consistent service startup across reboots.
- Improved reliability of model availability in Open WebUI and the Ollama API.
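The first-boot model preload fallback mentioned above can be pictured as a simple set difference between an expected model manifest and what is actually installed. The model names below are hypothetical placeholders, not the AMI's actual curated set:

```python
# Sketch of the first-boot model preload fallback described in the release notes.
# EXPECTED_MODELS is a hypothetical manifest, not the AMI's real curated set.
EXPECTED_MODELS = {"llama3.2", "phi3"}

def missing_models(installed: set, expected: set = EXPECTED_MODELS) -> set:
    """Return expected models that are not present, i.e. candidates to re-pull."""
    return expected - installed

# At launch, anything returned here would be fetched again (e.g. with `ollama pull`).
print(missing_models({"llama3.2"}))  # {'phi3'}
```

This is why a model missing at launch time does not require manual intervention: the fallback recomputes the gap and restores it before the services come up.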
Additional details
Usage instructions
Connect via SSH using your EC2 key pair: `ssh ec2-user@<public-ip>`. Step-by-step instructions: https://www.prezelfy.com/ai-on-cpu
Support
Vendor support
For support, inquiries, or specific requests, contact support@prezelfy.com.
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.