
    DeepSeek & Llama powered All-in-One LLM Suite

    This product has charges associated with it for seller support. Run and manage the latest LLMs locally, privately, securely, and cost-effectively, without any vendor lock-in. This VM solution comes preloaded with the LLaMA, Mistral, Gemma, DeepSeek, and Qwen models, along with Open-WebUI as an intuitive UI for interacting with the LLMs and Ollama for installing new models as needed.

    Overview


    This is a repackaged open source software product wherein additional charges apply for support by TechLatest.net.

    Note: We provide free demo access for the "DeepSeek & Llama-powered All-in-One LLM Suite." To request a free demo, please reach out to us at marketing@techlatest.net with the subject line "Free Demo Access Request - [Your Company Name]".

    Important: For a step-by-step guide on how to set up this VM, please refer to our Getting Started guide.

    Ollama is a robust platform designed to simplify the management of large language models (LLMs). It provides a streamlined interface for downloading, running, and fine-tuning models from various vendors, making it easier for developers to build, deploy, and scale AI applications.

    In addition, the VM is preconfigured with multiple cutting-edge models and allows users to pull and install additional LLMs as needed.
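As an illustrative sketch (not vendor-provided code), additional models can be pulled programmatically through Ollama's REST API, which listens on port 11434 by default; the model name below is a placeholder:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port; use your VM's IP if remote

def build_pull_request(model: str):
    """Build the URL and JSON body for Ollama's /api/pull endpoint."""
    body = json.dumps({"name": model, "stream": False}).encode()
    return f"{OLLAMA_URL}/api/pull", body

def pull_model(model: str) -> dict:
    """Download a model onto the VM (requires the Ollama service to be running)."""
    url, body = build_pull_request(model)
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (needs network access and a running Ollama service):
# pull_model("mistral")
```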

    The LLMs can be used via API integration, or interactively through the intuitive Open-WebUI chat interface, which lets you converse with multiple LLMs directly.
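For example, a minimal API client might look like the following sketch; the endpoint and payload shape follow Ollama's /api/generate interface, while the model name and prompt are placeholders:

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str,
                           host: str = "http://localhost:11434"):
    """Build the URL and JSON body for Ollama's /api/generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return f"{host}/api/generate", body

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally hosted model and return its response text."""
    url, body = build_generate_request(model, prompt)
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the VM's Ollama service to be running):
# print(generate("llama3.3", "Summarize what an AMI is in one sentence."))
```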

    To ensure optimal performance, make sure to deploy the instance with the minimum specifications listed below, or higher.

    Minimum VM specs: 8 GB RAM / 2 vCPUs

    What is included in the VM:

    1. Preconfigured Models:

    • DeepSeek-R1 (8B, 14B, 32B, and 70B parameters)
    • LLaMA 3.3
    • Mistral
    • Gemma 2 (27B)
    • Qwen 2.5 (7B, 14B, 32B, and 72B parameters)
    • Nomic Embed Text

    2. Open-WebUI:

    • User-Friendly Interface: Open-WebUI offers an intuitive platform for managing Large Language Models (LLMs), enhancing user interaction through a chat-like interface.
    • Advanced Features: Supports Markdown, LaTeX, and code highlighting, making it versatile for various applications.
    • Centralized Access Control: Supports role-based access control (RBAC) to manage user access.
    • Accessibility: Designed to work seamlessly on both desktop and mobile devices, ensuring users can engage with LLMs anywhere.

    3. Ollama:

    • Simplified Model Management: Ollama streamlines the process of deploying and interacting with LLMs, making it easier for developers and AI enthusiasts.
    • Integration with Open-WebUI: Offers a cohesive experience by allowing users to manage models directly through the Open-WebUI interface.
    • Real-Time Capabilities: Enables dynamic content retrieval during interactions, enhancing the context and relevance of responses.

    Key Benefits:

    • Privacy First: Your data remains secure and private, with no risk of third-party data exposure.
    • No Vendor Lock-in: No need for expensive vendor subscriptions.
    • Multipurpose: Whether you are a single user or a team within an enterprise, you can use this VM for purposes such as AI app development using APIs, an AI chat alternative to commercial offerings, and LLM inference and evaluation.
    • No Need for Expensive GPU Instances: Run the LLMs on CPU-based instances, as long as the instance meets the RAM requirements of the LLM models of your choice.
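As a rough illustration of the RAM point, a common rule of thumb is that a quantized model needs about half a byte per parameter at 4-bit precision, plus runtime overhead. The sketch below encodes that assumption; the 1.2 overhead factor is an estimate for illustration, not a vendor figure:

```python
def estimated_ram_gb(params_billion: float, bits_per_param: int = 4,
                     overhead: float = 1.2) -> float:
    """Rough RAM estimate for running a quantized LLM on CPU.

    params_billion: model size in billions of parameters (e.g. 8 for an 8B model)
    bits_per_param: quantization level (4-bit is common for CPU inference)
    overhead: multiplier for KV cache and runtime overhead (assumed, not measured)
    """
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return round(bytes_total * overhead / 1e9, 1)

# An 8B model at 4-bit quantization needs roughly:
print(estimated_ram_gb(8))   # 4.8 (GB), so an 8 GB instance is a plausible floor
```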

    Why Choose Techlatest VM Offer?

    • Cost and Time Efficient: Consolidate your models into a single environment, eliminating setup overhead and the bandwidth cost of model downloads.
    • Seamless API Integration: Integrate models directly into your applications for custom workflows and automation.
    • Effortless Model Management: Simplify model installation and management with Ollama's intuitive system, enabling easy customization of your AI environment.
    • Side-by-Side Model Comparison: Evaluate different models in parallel in Open-WebUI to quickly determine which one best fits your needs.

    Disclaimer: Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and/or names or their products and are the property of their respective owners. We disclaim proprietary interest in the marks and names of others.

    In order to deploy and use this VM offer, users are required to comply with the licenses and terms of agreement of Ollama, Open-WebUI, and the preconfigured models.

    Refer to the links below for the licensing terms:

    https://github.com/ollama/ollama/blob/main/LICENSE 

    https://github.com/open-webui/open-webui/blob/main/LICENSE 

    https://www.ollama.com/library/llama3.3/blobs/53a87df39647 

    https://www.ollama.com/library/llama3.3/blobs/bc371a43ce90 

    https://www.ollama.com/library/deepseek-r1/blobs/6e4c38e1172f 

    https://www.ollama.com/library/qwen2.5/blobs/832dd9e00a68 

    https://www.ollama.com/library/mistral/blobs/43070e2d4e53 

    https://www.ollama.com/library/gemma2:27b/blobs/097a36493f71 

    https://www.ollama.com/library/nomic-embed-text/blobs/c71d239df917 

    Highlights

    • Includes models from DeepSeek, LLaMA, Gemma, Mistral, Qwen, and more

    Details

    Delivery method

    Delivery option
    64-bit (x86) Amazon Machine Image (AMI)

    Latest version

    Operating system
    Ubuntu 24.04 LTS

    Typical total price

    This estimate is based on use of the seller's recommended configuration (t2.xlarge) in the US East (N. Virginia) Region. View pricing details

    $0.336/hour

    Features and programs

    Financing for AWS Marketplace purchases

    AWS Marketplace now accepts line of credit payments through the PNC Vendor Finance program. This program is available to select AWS customers in the US, excluding NV, NC, ND, TN, & VT.

    Pricing

    DeepSeek & Llama powered All-in-One LLM Suite

    Pricing is based on actual usage, with charges varying according to how much you consume. Subscriptions have no end date and may be canceled any time.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator  to estimate your infrastructure costs.

    Usage costs (570)

    • ...
    Instance type              Product cost/hour    EC2 cost/hour    Total/hour
    t2.large                   $0.15                $0.093           $0.243
    t2.xlarge (Recommended)    $0.15                $0.186           $0.336
    t2.2xlarge                 $0.15                $0.371           $0.521
    t3.large                   $0.15                $0.083           $0.233
    t3.xlarge                  $0.15                $0.166           $0.316
    t3.2xlarge                 $0.15                $0.333           $0.483
    t3a.micro                  $0.15                $0.009           $0.159
    t3a.large                  $0.15                $0.075           $0.225
    t3a.xlarge                 $0.15                $0.15            $0.30
    t3a.2xlarge                $0.15                $0.301           $0.451
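The totals in the pricing table are simply the flat $0.15/hour software charge plus the EC2 rate for the chosen instance. A small helper (rates copied from the table above, US East pricing) makes the arithmetic explicit; the 730 hours/month figure is a common approximation, not an AWS constant:

```python
PRODUCT_COST_PER_HOUR = 0.15  # seller's flat software charge

# EC2 on-demand rates from the table above (US East, N. Virginia)
EC2_COST_PER_HOUR = {
    "t2.large": 0.093,
    "t2.xlarge": 0.186,   # recommended configuration
    "t3a.xlarge": 0.150,
}

def total_per_hour(instance_type: str) -> float:
    """Software charge plus EC2 infrastructure charge."""
    return round(PRODUCT_COST_PER_HOUR + EC2_COST_PER_HOUR[instance_type], 3)

def monthly_estimate(instance_type: str, hours: int = 730) -> float:
    """Approximate monthly cost for an always-on instance (~730 hours/month)."""
    return round(total_per_hour(instance_type) * hours, 2)

print(total_per_hour("t2.xlarge"))    # 0.336, matching the listing's estimate
print(monthly_estimate("t2.xlarge"))  # 245.28 for a full month
```

Note that EBS storage and data transfer are billed separately, so actual monthly totals will be somewhat higher.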

    Additional AWS infrastructure costs

    Type                                     Cost
    EBS General Purpose SSD (gp2) volumes    $0.10 per GB/month of provisioned storage

    Vendor refund policy

    You will be charged for usage; the subscription can be canceled at any time, and usage fees are non-refundable.

    How can we make this page better?

    We'd like to hear your feedback and ideas on how to improve this page.

    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA) .

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    64-bit (x86) Amazon Machine Image (AMI)

    Amazon Machine Image (AMI)

    An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.

    Version release notes

    First release.

    Additional details

    Usage instructions

    1. On the EC2 console page, verify that the instance is up and running. To connect to the instance through PuTTY, copy its IPv4 public IP address. (For details on how to connect using PuTTY/SSH, refer to https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-linux-inst-from-windows.html .)

    2. Open PuTTY, paste the IP address, and browse to the private key you downloaded while deploying the VM (under SSH > Auth > Credentials), then click Open.

    3. Log in as the ubuntu user.

    4. Update the password of the ubuntu user with the following command:

    sudo passwd ubuntu

    5. Once the ubuntu user's password is set, access the GUI environment using RDP on a Windows machine or Remmina on a Linux machine.

    6. Copy the public IP of the VM and paste it into the RDP client. Log in with the ubuntu user and its password.

    7. To access Open-WebUI, open your browser and enter the public IP of the VM as https://public_ip_of_vm

    8. Click the Get Started link at the bottom and create your first admin account on the registration page.

    For a step-by-step guide to provisioning and using this VM, please visit the AWS Getting Started Guide.
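Once the VM is up, one quick sanity check (a sketch; the host is a placeholder for your VM's address) is to list the preinstalled models through Ollama's /api/tags endpoint:

```python
import json
import urllib.request

def model_names(tags_response: dict) -> list[str]:
    """Extract model names from an /api/tags response payload."""
    return [m["name"] for m in tags_response.get("models", [])]

def list_models(host: str = "http://localhost:11434") -> list[str]:
    """Return the names of models installed on the VM."""
    with urllib.request.urlopen(f"{host}/api/tags") as resp:
        return model_names(json.loads(resp.read()))

# Example (run on the VM itself, or replace host with the VM's public IP):
# print(list_models())
```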

    Support

    Vendor support

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.


    Customer reviews

    Ratings and reviews

    0 ratings · 0 AWS reviews
    No customer reviews yet. Be the first to write a review for this product.