
    Ollama & Open WebUI - Hardened Private LLM Runtime

    Sold by: Lynxroute
    Deployed on AWS
    Free Trial
    This product has charges associated with it for hardening, security configuration, and support. Ollama + Open WebUI is a complete private AI stack - run Llama, Mistral, Gemma, and 100+ LLMs locally with a ChatGPT-like interface. No OpenAI subscription, no data leaving your VPC. Authentication enabled, Nginx TLS proxy, Ollama API localhost-only, and CIS Level 1 hardened Ubuntu 24.04 LTS base. Built and maintained by Lynxroute.

    Overview

    This is a repackaged software product wherein additional charges apply for hardening, security configuration, and support.

    WHAT IS OLLAMA + OPEN WEBUI

    Ollama is an open-source runtime for running large language models locally. Open WebUI provides a ChatGPT-like interface for interacting with those models. Together they form a complete private AI stack.

    Ollama manages model downloads, GPU/CPU allocation, and exposes a REST API compatible with the OpenAI API format. Open WebUI adds a full-featured chat interface with conversation history, model selection, RAG document support, and user management.
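
    Because the API speaks the OpenAI wire format, a plain HTTP request works against it. As a minimal sketch, run from a shell on the instance itself (on this AMI the API is localhost-only) and assuming a model named llama3 has already been pulled:

        curl http://localhost:11434/v1/chat/completions \
          -H "Content-Type: application/json" \
          -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello"}]}'

    Existing OpenAI SDK clients can be pointed at the same endpoint by setting their base URL to http://localhost:11434/v1.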

    Supported models: Llama 3, Mistral, Gemma, Phi, Qwen, DeepSeek, and 100+ models from ollama.com/library.

    WHY SELF-HOST YOUR LLM

    OpenAI and Anthropic API pricing compounds quickly at scale; self-hosting replaces per-token charges with a fixed infrastructure cost. For teams processing sensitive documents, legal data, or GDPR-regulated content, keeping inference inside your own VPC also removes an entire class of data-residency and third-party-processor concerns.

    ENHANCED SECURITY OUT OF THE BOX

    Default Ollama deployments expose the API on all interfaces with no authentication, and an unconfigured Open WebUI allows unrestricted sign-up. This AMI closes both gaps at first boot: authentication is enabled, all traffic is proxied through Nginx with TLS, and the Ollama API is bound to localhost only.
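
    Binding Ollama to localhost typically follows the standard systemd drop-in pattern; the sketch below illustrates that generic recipe using the stock OLLAMA_HOST variable, not necessarily this AMI's exact unit files:

        # Illustrative only - the AMI ships pre-configured; this is the generic recipe
        sudo mkdir -p /etc/systemd/system/ollama.service.d
        printf '[Service]\nEnvironment="OLLAMA_HOST=127.0.0.1:11434"\n' | \
            sudo tee /etc/systemd/system/ollama.service.d/override.conf
        sudo systemctl daemon-reload && sudo systemctl restart ollama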

    WHAT THIS AMI ADDS

    Security hardening:

    • Authentication enabled by default - no open access
    • Admin password equals EC2 Instance ID - unique per instance
    • Nginx reverse proxy with TLS - Open WebUI proxied on port 443
    • Ollama API (port 11434) bound to localhost only - not exposed publicly
    • UFW firewall - ports 22, 80, 443 only; all other ports blocked
    • fail2ban - SSH brute-force protection
    • AppArmor - mandatory access control
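
    You can spot-check this posture after launch over SSH; the commands below are standard Ubuntu tooling, shown as a sketch:

        # Ollama should be listening on 127.0.0.1 only
        sudo ss -tlnp | grep 11434
        # UFW should report only 22, 80, and 443 as allowed
        sudo ufw status verbose
        # fail2ban should have an active sshd jail
        sudo fail2ban-client status sshd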

    OS hardening (CIS Level 1):

    • CIS Ubuntu 24.04 LTS Level 1 benchmark applied via ansible-lockdown
    • auditd - system call auditing
    • SSH hardening - PasswordAuthentication disabled, key-only access
    • Kernel hardening - SYN cookies, ASLR, rp_filter, TCP BBR
    • IMDSv2 enforced - SSRF protection
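
    These settings are likewise verifiable from a shell; exact values depend on the applied CIS profile, so treat the following as a sketch:

        # Kernel hardening flags
        sysctl net.ipv4.tcp_syncookies kernel.randomize_va_space \
            net.ipv4.conf.all.rp_filter net.ipv4.tcp_congestion_control
        # IMDSv2: a token-less request should be refused (HTTP 401)...
        curl -s -o /dev/null -w '%{http_code}\n' http://169.254.169.254/latest/meta-data/
        # ...while a token-based request succeeds
        TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \
            -H 'X-aws-ec2-metadata-token-ttl-seconds: 60')
        curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
            http://169.254.169.254/latest/meta-data/instance-id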

    Compliance artifacts (inside the AMI):

    • SBOM - CycloneDX 1.6 software bill of materials at /etc/lynxroute/sbom.json
    • CIS Conformance Report - OpenSCAP HTML report at /etc/lynxroute/cis-report.html
    • Tailored CIS profile at /usr/share/doc/lynxroute/CIS_TAILORED_PROFILE.md
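
    The SBOM is plain CycloneDX JSON, so compliance teams can query it in place; for example (assuming jq is installed, and sudo since the file lives under /etc/lynxroute):

        # List every component name and version recorded in the SBOM
        sudo jq -r '.components[] | "\(.name) \(.version)"' /etc/lynxroute/sbom.json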

    Quick Start:

    1. Launch instance (t3.medium for CPU, g4dn.xlarge for GPU)
    2. Open Security Group - allow TCP 443 and TCP 80 from your IP
    3. Wait 2-3 minutes for Open WebUI to initialise
    4. SSH: ssh -i key.pem ubuntu@<PUBLIC_IP>
    5. Read credentials: sudo cat /root/ollama-credentials.txt
    6. Open https://<PUBLIC_IP> - accept self-signed cert warning
    7. Log in with the admin credentials from the file in step 5
    8. Pull a model: Settings - Models - pull llama3 (or via SSH: sudo ollama pull llama3)
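
    For scripted launches, the first two steps map onto AWS CLI calls roughly as follows; every identifier in angle brackets is a placeholder, not a value shipped with this product:

        # Launch (step 1) - AMI ID, key pair, and security group are placeholders
        aws ec2 run-instances \
            --image-id <AMI_ID> \
            --instance-type t3.medium \
            --key-name <KEY_NAME> \
            --security-group-ids <SG_ID> \
            --metadata-options HttpTokens=required \
            --count 1
        # Open HTTPS to your IP only (step 2)
        aws ec2 authorize-security-group-ingress \
            --group-id <SG_ID> --protocol tcp --port 443 --cidr <YOUR_IP>/32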

    Highlights

    • Private ChatGPT: run Llama, Mistral, Gemma and 100+ LLMs locally - no OpenAI subscription, no data leaving your VPC, authentication enabled by default. Built by Lynxroute.
    • CIS Level 1 hardened Ubuntu 24.04 LTS: auditd, fail2ban, AppArmor, SSH key-only, IMDSv2 enforced, SBOM and CIS Conformance Report included for compliance teams.
    • GPU-ready: works with g4dn.xlarge (NVIDIA T4) and g5.xlarge (A10G) for fast inference - pull any model from ollama.com/library via Web UI or CLI.
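
    On the GPU instance types above, a quick sanity check before pulling large models is to confirm the driver sees the card; this assumes the NVIDIA driver is present (implied by GPU support) and that Ollama runs as a systemd unit named ollama:

        # Should list a T4 (g4dn.xlarge) or A10G (g5.xlarge)
        nvidia-smi
        # Ollama's logs also report whether a GPU was detected
        journalctl -u ollama --no-pager | grep -i gpu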

    Details

    Delivery method

    Delivery option
    64-bit (x86) Amazon Machine Image (AMI)

    Latest version
    0.9.2

    Operating system
    Ubuntu 24.04

    Pricing

    Free trial

    Try this product free for 5 days according to the free trial terms set by the vendor. Usage-based pricing is in effect for usage beyond the free trial terms. Your free trial gets automatically converted to a paid subscription when the trial ends, but may be canceled any time before that.

    Ollama & Open WebUI - Hardened Private LLM Runtime

    Pricing is based on actual usage, with charges varying according to how much you consume. Subscriptions have no end date and may be canceled any time.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.

    Usage costs (6)

    Dimension                    Cost/hour
    g4dn.xlarge (Recommended)    $0.05
    t3.large                     $0.03
    g5.xlarge                    $0.07
    t3.medium                    $0.02
    m6i.xlarge                   $0.05
    m6i.2xlarge                  $0.07

    Vendor refund policy

    We do not offer refunds for this product. AWS infrastructure charges (EC2, EBS, data transfer) are billed separately by AWS and are not refundable by us. If you experience technical issues with the AMI, please contact us at https://lynxroute.com before requesting a refund.

    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Delivery details

    64-bit (x86) Amazon Machine Image (AMI)

    Amazon Machine Image (AMI)

    An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.

    Version release notes

    Version 0.9.2 - Initial release (April 2026)

    • Ollama 0.21.2 + Open WebUI 0.9.2 on Ubuntu 24.04 LTS
    • CIS Level 1 hardening applied (ansible-lockdown/UBUNTU24-CIS)
    • Authentication enabled by default - no open access
    • Nginx reverse proxy with HTTPS - self-signed TLS, Ollama API localhost-only
    • Startup page shown while Open WebUI initialises (~2-3 minutes on first boot)
    • Models not pre-loaded - pull via Web UI or CLI after launch
    • UFW firewall pre-configured (ports 22, 80, 443 only)
    • fail2ban, auditd, AppArmor pre-configured
    • SBOM (CycloneDX 1.6) at /etc/lynxroute/sbom.json
    • CIS Conformance Report (OpenSCAP) at /etc/lynxroute/cis-report.html
    • Tailored CIS profile at /usr/share/doc/lynxroute/CIS_TAILORED_PROFILE.md
    • IMDSv2 enforced

    Additional details

    Usage instructions

    The steps are identical to the Quick Start in the Overview above: launch an instance, open TCP 443 and 80 to your IP, wait 2-3 minutes for Open WebUI to initialise, read /root/ollama-credentials.txt over SSH, log in at https://<PUBLIC_IP>, and pull a model via the Web UI or CLI.

    Support

    Vendor support

    Visit us online: https://lynxroute.com

    For Ollama documentation: https://docs.ollama.com/
    For Open WebUI documentation: https://docs.openwebui.com

    For Ollama upstream issues: https://github.com/ollama/ollama/issues
    For Open WebUI upstream issues:

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.

    Customer reviews

    No customer reviews yet. Be the first to review this product.