
    Dynamia AI Platform - GPU Virtualization & Scheduling for EKS

    Enterprise GPU virtualization & AI scheduling for EKS, with a full web console (global & cluster dashboards; node/GPU inventory; workload, storage & quota control) plus fractional sharing, VRAM overcommit, and live per-pod VRAM scaling.

    Overview

    Dynamia AI Platform brings enterprise-grade GPU virtualization and AI-aware scheduling to Amazon EKS, and includes an integrated web console for multi-cluster dashboards, inventory, and policy/governance.

    Core GPU & scheduling capabilities

    • Fractional sharing with hard limits: per-pod SM/compute throttling and VRAM caps (MB or %), preventing noisy neighbors (see the example pod spec after this list).
    • VRAM overcommit with guardrails: increase cluster-level utilization while honoring per-pod safety limits.
    • Live VRAM vertical scaling: adjust a running pod's GPU memory without restarts for many inference workloads.
    • AI-purpose scheduling: binpack/spread, target by GPU model/UUID, NUMA/NVLink awareness, namespace/tenant GPU quotas; optional gang scheduling & preemption via integrations (e.g., Volcano/Koordinator).
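
    To make the fractional-sharing request concrete, here is a minimal pod-spec sketch. The extended-resource names (nvidia.com/gpucores, nvidia.com/gpumem) and their units are assumptions for illustration; check the Dynamia AI documentation for the exact names and units your installed release exposes.

        # Hypothetical pod spec requesting a fraction of one GPU.
        # Resource names and units below are illustrative assumptions.
        apiVersion: v1
        kind: Pod
        metadata:
          name: inference-demo
        spec:
          containers:
          - name: server
            image: registry.example.com/llm-server:latest   # placeholder image
            resources:
              limits:
                nvidia.com/gpu: 1          # one GPU slice
                nvidia.com/gpucores: 30    # assumed: ~30% of SM/compute throughput
                nvidia.com/gpumem: 8000    # assumed: VRAM cap in MB (a percentage form may also exist)

    With limits like these in place, several such pods can be packed onto one physical GPU while the per-pod caps keep noisy neighbors in check.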

    Web console (single control plane)

    • Overview dashboard: multi-cluster posture, utilization, allocation, hot spots, and SLA risk hints.
    • Cluster dashboards: per-cluster health, GPU usage, allocation vs. requests, saturation trends.
    • Inventories: nodes list & detail, GPUs list & detail (model/UUID/topology/health), and workloads list & detail with GPU limits/actuals.
    • Governance & ops: quota management (tenant/project/namespace), storage management (volumes, classes, usage), policy enforcement, and a basic audit trail (see the quota sketch after this list).
    • Observability: built-in DCGM metrics and prebuilt Grafana dashboards; alerting hooks.
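
    Namespace- and tenant-level GPU quotas of the kind the console manages can also be expressed with a standard Kubernetes ResourceQuota on extended resources; the sketch below shows the general idea, assuming the cluster advertises nvidia.com/gpu (adjust the resource name to whatever your deployment actually exposes).

        # Minimal sketch: cap how many GPU slices the "team-a" namespace may request.
        # The quota'd resource name is an assumption; match it to the extended
        # resources your cluster actually advertises.
        apiVersion: v1
        kind: ResourceQuota
        metadata:
          name: team-a-gpu-quota
          namespace: team-a
        spec:
          hard:
            requests.nvidia.com/gpu: "8"   # at most 8 GPU requests across the namespace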

    Integrations & compatibility

    • Kubernetes-native (Helm/EKS add-on); no app changes required.
    • Works with vLLM Production Stack, SGLang, TensorRT-LLM, JupyterHub, and Volcano/Koordinator (see the gang-scheduling sketch after this list).
    • Supports NVIDIA GPUs on Amazon EKS; optional MIG awareness; RBAC and LTS release channel.
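
    For the optional gang scheduling mentioned above, Volcano's PodGroup API is one way to express all-or-nothing placement for multi-pod training jobs. The sketch below is generic Volcano configuration, not Dynamia-specific; worker pods opt in by setting schedulerName: volcano and referencing the PodGroup (see the Volcano documentation for the exact annotation).

        # Generic Volcano gang-scheduling sketch (not Dynamia-specific):
        # either all minMember pods can be scheduled, or none are started.
        apiVersion: scheduling.volcano.sh/v1beta1
        kind: PodGroup
        metadata:
          name: distributed-training
          namespace: team-a
        spec:
          minMember: 4        # gang size: place all four workers together or not at all
          queue: default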

    Outcomes

    • Higher GPU utilization with fewer VRAM-related failures, clearer multi-tenant controls, and faster, safer rollout of AI training and inference services.

    Highlights

    • Enterprise GPU virtualization for EKS -- fractional sharing with strict SM/compute & VRAM limits, VRAM overcommit, and live per-pod VRAM scaling to maximize utilization without refactoring apps.
    • AI-purpose scheduling & quotas -- binpack/spread placement, model/UUID targeting, NUMA/NVLink awareness, and tenant/namespace GPU quotas; optional gang scheduling & preemption via integrations.
    • Full web console & governance -- global & cluster dashboards, node/GPU/workload inventories, storage and quota management.

    Details

    Delivery method
    Helm chart

    Supported services
    Amazon EKS

    Delivery option
    dynamia ai v1

    Latest version

    Operating system
    Linux

    Deployed on AWS

    Pricing

    Pricing is based on the duration and terms of your contract with the vendor. This entitles you to a specified quantity of use for the contract duration. If you choose not to renew or replace your contract before it ends, access to these entitlements will expire.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.

    1-month contract (10 pricing dimensions)

    Dimension              | Description                                     | Cost/month
    NVIDIA T4              | One NVIDIA T4 GPU, for g4 and g5 instances.     | $133.00
    NVIDIA L4              | One NVIDIA L4 GPU, for g6 instances.            | $133.00
    NVIDIA A10G            | One NVIDIA A10G GPU, for g5 instances.          | $133.00
    NVIDIA A100-SXM4-40GB  | One NVIDIA A100-SXM4-40GB, for p4d instances.   | $216.00
    NVIDIA A100-SXM4-80GB  | One NVIDIA A100-SXM4-80GB, for p4de instances.  | $216.00
    NVIDIA L40S            | One NVIDIA L40S, for g6e instances.             | $133.00
    NVIDIA H100            | One NVIDIA H100, for p5 instances.              | $216.00
    NVIDIA H200            | One NVIDIA H200, for p5e instances.             | $216.00
    NVIDIA B200            | One NVIDIA B200, for p6 instances.              | $216.00
    AWS Neuron             | One AWS Neuron chip, for trn1 instances.        | $133.00

    Vendor refund policy

    Refunds are handled according to AWS Marketplace policies. Buyers may request a refund within 30 days of purchase if the subscription was not activated or deployment failed due to a product issue. No pro-rata refunds are offered after activation, except where required by law. Please contact support@dynamia.ai with your AWS account ID and order ID for assistance.

    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.

    Usage information


    Delivery details

    dynamia ai v1

    Supported services:
    • Amazon EKS
    Helm chart

    Helm charts are Kubernetes YAML manifests combined into a single package that can be installed on Kubernetes clusters. The containerized application is deployed on a cluster by running a single Helm install command to install the seller-provided Helm chart.

    Version release notes

    This release adds NVIDIA device support.

    1. Enterprise GPU virtualization for EKS -- fractional sharing with strict SM/compute & VRAM limits, VRAM overcommit, and live per-pod VRAM scaling to maximize utilization without refactoring apps.
    2. AI-purpose scheduling & quotas -- binpack/spread placement, model/UUID targeting, NUMA/NVLink awareness, and tenant/namespace GPU quotas; optional gang scheduling & preemption via integrations.
    3. Full web console & governance -- global & cluster dashboards, node/GPU/workload inventories, storage and quota management.

    Additional details

    Usage instructions

    Follow these instructions to deploy the Dynamia AI Platform in your cluster:

    https://dynamia.ai/blog/dynamia-ai-aws-installation 

    Support

    Vendor support

    Email: info@dynamia.ai
    Docs & tickets: https://dynamia.ai
    SLA: P1 24x7 (1-hour response); P2 8x5 (next business day)
    Enterprise services include onboarding, architecture reviews, and LTS updates via Helm/EKS add-on.

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
