Overview
This is repackaged open-source software; additional charges apply for hardening, security configuration, and support.
WHAT IS OLLAMA + OPEN WEBUI
Ollama is a Go-based runtime for running large language models locally: it pulls quantized GGUF weights from the public Ollama library and exposes a streaming REST API for inference, with optional GPU acceleration via NVIDIA CUDA. Open WebUI is a self-hosted, ChatGPT-style web interface (FastAPI + Svelte) that connects to the local Ollama API and lets your team chat with models, manage conversations, upload documents for RAG, run web search, and configure prompt templates. The bundle supports CPU and GPU instance types (e.g. g4dn.xlarge, g5.xlarge), uses embedded SQLite for chat history and user accounts, and can pull Llama, Mistral, Gemma, Phi, Qwen, DeepSeek-R1, and 100+ other models on demand. Together they form a complete private AI stack with no external service dependencies.
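As a sketch of what the streaming REST API looks like in practice, a completion can be requested with a single curl call once a model has been pulled (run on the instance itself, since this AMI binds the API to localhost; the model name is illustrative):

```shell
# Request a completion from the local Ollama API.
# The response streams back as one JSON object per line by default.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Summarise what a reverse proxy does in one sentence."
}'
```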
LICENSING NOTE
Ollama is MIT-licensed. Open WebUI is distributed under the source-available Open WebUI License (not an OSI-approved open-source license). The license permits self-hosted use without restriction; deployments with more than 50 end users in any 30-day window that also modify Open WebUI branding (name, logo) require a commercial Enterprise License from Open WebUI Inc. This AMI ships Open WebUI with the original branding preserved - the 50-user clause does not apply unless the operator rebrands.
WHAT THIS AMI ADDS
Security hardening:
- Authentication enabled by default
- Nginx reverse proxy with TLS - Open WebUI proxied on port 443
- Ollama API (port 11434) bound to localhost only
- UFW firewall - ports 22, 80, 443 only
- fail2ban, AppArmor
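The hardened defaults listed above can be spot-checked from an SSH session after launch; a minimal verification sketch, assuming the configuration described:

```shell
# Ollama should answer on the loopback interface only
curl -s http://127.0.0.1:11434/api/version   # works from the instance
ss -ltn | grep 11434                          # listener should show 127.0.0.1

# UFW should expose only ports 22, 80, and 443
sudo ufw status verbose

# fail2ban and AppArmor should be active
sudo systemctl is-active fail2ban
sudo aa-status --enabled && echo "AppArmor enabled"
```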
OS hardening (CIS Level 1):
- CIS Ubuntu 24.04 LTS Level 1 benchmark applied via ansible-lockdown
- auditd, SSH hardening, Kernel hardening, IMDSv2 enforced
Compliance artifacts:
- SBOM - CycloneDX 1.6 at /etc/lynxroute/sbom.json
- CIS Conformance Report at /etc/lynxroute/cis-report.html
- CIS Tailored Profile at /usr/share/doc/lynxroute/CIS_TAILORED_PROFILE.md
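The CycloneDX SBOM is plain JSON, so it can be queried directly with jq (assumed installed; paths as listed above), for example to count components or list name/version pairs:

```shell
# Number of components recorded in the SBOM
jq '.components | length' /etc/lynxroute/sbom.json

# First few components as name@version
jq -r '.components[] | "\(.name)@\(.version)"' /etc/lynxroute/sbom.json | head
```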
Highlights
- Private ChatGPT: run Llama, Mistral, Gemma and 100+ LLMs locally - no OpenAI subscription, no data leaving your VPC, authentication enabled by default. Built by Lynxroute.
- CIS Level 1 hardened Ubuntu 24.04 LTS: auditd, fail2ban, AppArmor, SSH key-only, IMDSv2 enforced. CVE-scanned before every release. SBOM (CycloneDX) and CIS Conformance Report included.
- GPU-ready: works with g4dn.xlarge (NVIDIA T4) and g5.xlarge (A10G) for fast inference - pull any model from ollama.com/library via Web UI or CLI.
Details
Pricing
Free trial
| Dimension | Cost/hour |
|---|---|
| g4dn.xlarge (Recommended) | $0.05 |
| t3.large | $0.03 |
| g5.xlarge | $0.07 |
| t3.medium | $0.02 |
| m6i.xlarge | $0.05 |
| m6i.2xlarge | $0.07 |
Vendor refund policy
We do not offer refunds for this product. AWS infrastructure charges (EC2, EBS, data transfer) are billed separately by AWS and are not refundable by us. If you experience technical issues with the AMI, please contact us at https://lynxroute.com before requesting a refund.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
64-bit (x86) Amazon Machine Image (AMI)
An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.
Version release notes
Version 0.9.2 - Initial release (April 2026)
- Ollama 0.21.2 + Open WebUI 0.9.2 on Ubuntu 24.04 LTS
- CIS Level 1 hardening applied (ansible-lockdown/UBUNTU24-CIS)
- Authentication enabled by default - no open access
- Nginx reverse proxy with HTTPS - self-signed TLS, Ollama API localhost-only
- Startup page shown while Open WebUI initialises (~2-3 minutes on first boot)
- Models not pre-loaded - pull via Web UI or CLI after launch
- UFW firewall pre-configured (ports 22, 80, 443 only)
- fail2ban, auditd, AppArmor pre-configured
- SBOM (CycloneDX 1.6) at /etc/lynxroute/sbom.json
- CIS Conformance Report (OpenSCAP) at /etc/lynxroute/cis-report.html
- Tailored CIS profile at /usr/share/doc/lynxroute/CIS_TAILORED_PROFILE.md
- IMDSv2 enforced
Additional details
Usage instructions
- Launch instance (t3.medium for CPU, g4dn.xlarge for GPU)
- In the instance's Security Group, allow inbound TCP 443 and TCP 80 from your IP
- Wait 2-3 minutes for Open WebUI to initialise
- SSH: ssh -i key.pem ubuntu@<PUBLIC_IP>
- Read credentials: sudo cat /root/ollama-credentials.txt
- Open https://<PUBLIC_IP> - accept self-signed cert warning
- Log in with the admin account you created on first visit
- Pull a model: Settings - Models - pull llama3 (or via SSH: sudo ollama pull llama3)
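The SSH portion of the steps above can be condensed into a short session (key path, public IP, and model name are placeholders):

```shell
# 1. Connect and read the generated credentials
ssh -i key.pem ubuntu@<PUBLIC_IP>
sudo cat /root/ollama-credentials.txt

# 2. Pull a model from the CLI (or use Settings -> Models in the Web UI)
sudo ollama pull llama3

# 3. Confirm the model is available locally
ollama list
```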
Resources
Vendor resources
Support
Vendor support
Visit us online: https://lynxroute.com
For Ollama documentation: https://github.com/ollama/ollama
For Open WebUI documentation: https://docs.openwebui.com
For AWS infrastructure issues:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.