
    ParaTools Pro for E4S™: AI/ML & HPC Tools on Heidi (Ubuntu, x86)

    Deployed on AWS
    Free Trial
    AWS Free Tier
    This product has charges associated with it for optional seller support and the pre-configured stack (ParaTools Pro for E4S™). ParaTools Pro for E4S™ is the hardened Extreme-scale Scientific Software Stack on Ubuntu (x86_64), for use with Adaptive Computing's Heidi AI Cloud Supercomputer. It includes over 100 HPC and AI/ML tools, a VNC-based remote desktop environment, and a cluster software stack built with the Spack package manager and an MVAPICH MPI implementation tuned for AWS Elastic Fabric Adapter (EFA). It is a platform for developing AI/ML applications with tools such as NVIDIA NeMo™, NVIDIA BioNeMo, PyTorch, TensorFlow, JAX, Keras, and vLLM tuned for AWS and Heidi, with AWS NeuronX support for Inferentia/Trainium instances. The HPC stack features numerical libraries (PETSc, Trilinos, SuperLU-dist), visualization tools (ParaView, VisIt), performance evaluation tools (TAU, HPCToolkit), and HPC applications (OpenFOAM, WRF, LAMMPS, Xyce, CP2K, GROMACS, Quantum Espresso).

    Overview

    ParaTools Pro for E4S™, the Extreme-scale Scientific Software Stack (E4S™) hardened for commercial clouds and supported by ParaTools, Inc., provides a platform for developing and deploying HPC and AI/ML applications. This product is the Ubuntu (x86_64) variant of the unified Heidi image; a Rocky Linux (x86_64) variant is available as a separate marketplace listing. It is designed for use with Adaptive Computing's Heidi AI Cloud Supercomputer, which provides multi-cloud HPC orchestration with automated infrastructure deployment, scaling, and cluster monitoring. A single unified image auto-detects its cluster role (head node or compute node) at boot time; the image is also usable standalone as a development environment. When run on EFA-capable instance families, cluster nodes are interconnected by a low-latency, high-bandwidth fabric based on AWS Elastic Fabric Adapter (EFA), fully leveraged by MVAPICH-Plus, a proprietary MVAPICH derivative from our partners at X-Scale Solutions. A VNC-based remote desktop is available for interactive GUI work. ParaTools Pro for E4S™ features a suite of over 100 HPC tools built with the Spack package manager and the MVAPICH-Plus MPI library tuned for EFA and CUDA. HPC applications include OpenFOAM, the Weather Research and Forecasting model (WRF), LAMMPS, Xyce, CP2K, GROMACS, and Quantum Espresso; the AI/ML Python stack includes NVIDIA NeMo™ with NVIDIA BioNeMo, PyTorch, TensorFlow, JAX, Keras, vLLM, OpenCV, Hugging Face Transformers, Matplotlib, and JupyterLab. AWS NeuronX drivers and a dedicated PyTorch NeuronX environment are included for use on Inferentia/Trainium instance families. The VSCodium IDE is also pre-installed. New packages can easily be added with Spack and pip and are accessible across cluster nodes, making the image a platform for developing the next generation of generative AI applications using a suite of Python tools and interfaces.

    E4S™ provides a unified computing environment for deploying open-source projects. It was originally developed to provide a common software environment for the exascale leadership computing systems being deployed at DOE National Laboratories across the U.S. Support for ParaTools Pro for E4S™ is available through ParaTools, Inc. This product has additional charges associated with it for optional product support and updates.

    This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research (ASCR), under SBIR Award Number DE-SC0022502 ("E4S: Extreme-Scale Scientific Software Stack for Commercial Clouds").

    Note: This product contains repackaged and tuned open source software (e.g., E4S™, Spack, and AI/ML tools like NVIDIA NeMo™, BioNeMo, TensorFlow, JAX, vLLM, etc.) which is configured and linked against an MVAPICH MPI implementation specifically developed and tuned for EFA.

    The full list of E4S applications installed via Spack is as follows:

    • adios
    • adios2
    • alquimia
    • aml
    • amrex
    • arborx
    • argobots
    • ascent
    • axom
    • boost
    • bricks
    • butterflypack
    • cabana
    • caliper
    • chai
    • chapel
    • charliecloud
    • conduit
    • cp2k
    • cusz
    • darshan-runtime
    • darshan-util
    • datatransferkit
    • dyninst
    • e4s-alc
    • e4s-cl
    • exago
    • faodel
    • fftx
    • flecsi
    • flit
    • fpm
    • gasnet
    • ginkgo
    • globalarrays
    • glvis
    • gmp
    • gotcha
    • gptune
    • gromacs
    • h5bench
    • hdf5
    • hdf5-vol-async
    • hdf5-vol-cache
    • hdf5-vol-log
    • heffte
    • hpctoolkit
    • hpx
    • hypre
    • kokkos
    • kokkos-kernels
    • laghos
    • lammps
    • lbann
    • legion
    • libcatalyst
    • libceed
    • libnrm
    • libquo
    • libunwind
    • loki
    • magma
    • metall
    • mfem
    • mgard
    • mpark-variant
    • mpifileutils
    • nccmp
    • nco
    • nek5000
    • nekbone
    • netcdf-fortran
    • netlib-lapack
    • netlib-scalapack
    • nrm
    • omega-h
    • openfoam
    • openmpi
    • openpmd-api
    • papi
    • papyrus
    • parallel-netcdf
    • pdt
    • petsc
    • phist
    • plasma
    • plumed
    • precice
    • pruners-ninja
    • pumi
    • py-cinemasci
    • py-h5py
    • py-jupyterhub
    • py-libensemble
    • py-petsc4py
    • qthreads
    • quantum-espresso
    • raja
    • rempi
    • scr
    • slate
    • slepc
    • strumpack
    • sundials
    • superlu
    • superlu-dist
    • swig
    • sz
    • sz3
    • tasmanian
    • tau
    • trilinos
    • turbine
    • umap
    • umpire
    • upcxx
    • variorum
    • veloc
    • vtk-m
    • wannier90
    • warpx
    • wps
    • wrf
    • xyce
    • zfp
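
    Beyond this list, new packages can be added with Spack at any time. A minimal sketch (the package name 'osu-micro-benchmarks' is an illustrative choice; assumes Spack is on PATH, as it is on the image, and simply records its absence elsewhere):

    ```shell
    # Extend the stack with a new Spack package (illustrative sketch).
    if command -v spack >/dev/null 2>&1; then
        spack find > spack_status.txt          # snapshot of installed packages
        spack install osu-micro-benchmarks     # build and install the new package
        spack load osu-micro-benchmarks        # put its binaries on PATH
    else
        echo "spack not found on PATH; run this on the ParaTools Pro image" > spack_status.txt
    fi
    cat spack_status.txt
    ```

    Packages installed this way land in the shared Spack tree, so they are visible across cluster nodes as described above.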

    Highlights

    • ParaTools Pro for E4S™ and Machine Learning stacks, including NVIDIA NeMo™, built and optimized for AWS EFA and Heidi
    • An MVAPICH MPI implementation that offers lower latency and higher throughput than the default OpenMPI implementation, for both pre-installed and user-installed applications
    • Over 100 HPC and AI applications managed via the Spack package manager, with VNC remote desktop for interactive computing

    Details

    Delivery method

    Delivery option
    64-bit (x86) Amazon Machine Image (AMI)

    Operating system
    Ubuntu 24.04



    Pricing

    Free trial

    Try this product free for 31 days according to the free trial terms set by the vendor. Usage-based pricing is in effect for usage beyond the free trial terms. Your free trial gets automatically converted to a paid subscription when the trial ends, but may be canceled any time before that.

    ParaTools Pro for E4S™: AI/ML & HPC Tools on Heidi (Ubuntu, x86)

    Pricing is based on actual usage, with charges varying according to how much you consume. Subscriptions have no end date and may be canceled any time.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator  to estimate your infrastructure costs.
    If you are an AWS Free Tier customer with a free plan, you are eligible to subscribe to this offer. You can use free credits to cover the cost of eligible AWS infrastructure. See AWS Free Tier  for more details. If you created an AWS account before July 15th, 2025, and qualify for the Legacy AWS Free Tier, Amazon EC2 charges for Micro instances are free for up to 750 hours per month. See Legacy AWS Free Tier  for more details.

    Usage costs (567)

    Dimension                    Cost/hour
    c5n.9xlarge (Recommended)    $0.99
    t2.micro                     $0.99
    t3.micro                     $0.99
    i3en.metal                   $0.99
    m5n.12xlarge                 $0.99
    m7i.large                    $0.99
    m2.2xlarge                   $0.99
    r6i.xlarge                   $0.99
    m6idn.16xlarge               $0.99
    i3.16xlarge                  $0.99

    Vendor refund policy

    Refund Policy: The standard AWS Marketplace refund policy applies: https://docs.aws.amazon.com/marketplace/latest/userguide/refunds.html

    Any additional refund enquiries can be sent to support@paratools.com and will be considered individually on a case-by-case basis.


    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.


    Delivery details

    64-bit (x86) Amazon Machine Image (AMI)

    Amazon Machine Image (AMI)

    An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.

    Version release notes
    • Adaptive Heidi release: v2.0
    • E4S release: 25.11
    • Default MPI: MVAPICH-4 Plus
    • OS: Ubuntu 24.04

    *** UNIFIED IMAGE NOTICE *** Single unified image serves both Heidi Server (head node) and Heidi Node (compute node) roles. Image auto-detects role at boot when deployed with Adaptive Computing's Heidi AI Cloud Supercomputer. Also usable standalone as a development environment.

    *** OS VARIANT NOTICE *** Ubuntu 24.04 x86_64 variant. For the Rocky Linux 9 x86_64 variant, see: https://aws.amazon.com/marketplace/pp/prodview-k3nscpq6ipu6c 

    IMPORTANT:

    1. Usable standalone as a development environment or shared-memory system. Multi-node TORQUE-scheduled jobs and cluster orchestration activate when deployed with Adaptive Computing's Heidi AI Cloud Supercomputer.
    2. MVAPICH-4 Plus provides optimized multi-node support without strictly requiring EFA-enabled instances; EFA is still recommended for best performance.
    3. AWS NeuronX drivers and PyTorch NeuronX environment are included for use with Inf1/Inf2/Trn1/Trn2 instance families.
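
    On a Heidi-deployed cluster, multi-node work goes through the TORQUE scheduler. A sketch of a job script (node/core counts and the executable name are illustrative; on a standalone instance the submission step simply reports that qsub is absent):

    ```shell
    # Write an illustrative TORQUE job script and submit it if qsub is available.
    cat > job.pbs <<'EOF'
    #PBS -N demo_job
    #PBS -l nodes=2:ppn=36
    #PBS -l walltime=00:10:00
    cd $PBS_O_WORKDIR
    mpirun ./my_app        # illustrative executable name
    EOF
    if command -v qsub >/dev/null 2>&1; then
        qsub job.pbs                       # submit to the TORQUE queue
    else
        echo "qsub not found (standalone instance, no cluster scheduler)"
    fi
    ```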

    Updates in this version:

    Platform:

    • E4S 25.11 scientific software stack
    • MVAPICH-4 Plus: MPI library tuned for AWS EFA and CUDA (MVAPICH 4.1 core, Hydra 4.3.1 process manager)
    • CUDA 12.9, gcc 13.3.0
    • Julia 1.12.4
    • ParaView 6.0.1, VisIt (visualization)
    • TurboVNC 3.2.1 + noVNC 1.4.0 (VNC-based web remote desktop)
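
    A quick smoke test of the MPI toolchain can look like the following sketch (assumes mpicc and mpirun from MVAPICH-4 Plus are on PATH, as they are on the image; elsewhere it just reports that the compiler is missing):

    ```shell
    # Compile and run a minimal MPI hello-world with the image's MPI toolchain.
    cat > hello_mpi.c <<'EOF'
    #include <mpi.h>
    #include <stdio.h>
    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }
    EOF
    if command -v mpicc >/dev/null 2>&1; then
        mpicc -O2 -o hello_mpi hello_mpi.c
        mpirun -np 2 ./hello_mpi || echo "mpirun failed (no MPI runtime configured?)"
    else
        echo "mpicc not found; run this on the ParaTools Pro image"
    fi
    ```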

    AI/ML stack (system Python env, /opt/python/pkgs/python-3.12.12):

    • PyTorch 2.10.0 (CUDA 12.9) + torchvision 0.25.0
    • TensorFlow 2.20.0
    • Keras 3.14.0
    • JAX 0.9.2
    • vLLM 0.19.0
    • OpenCV 4.13.0
    • Hugging Face Transformers 4.57.6, HF Hub 0.36.2
    • Triton 3.6.0, mpi4py 4.1.1
    • Gradio 6.11.0, LangChain 1.2.15, OpenAI SDK 2.31.0
    • Ollama 0.20.7 (local LLM inference)
    • JupyterLab + Notebook 7.5.5, Marimo 0.23.0
    • NumPy 2.2.6, SciPy 1.17.1, Pandas 3.0.2
    • Matplotlib 3.10.8, Seaborn 0.13.2, Plotly 6.6.0, GeoPandas 1.1.3
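
    A simple probe of the system AI/ML environment is sketched below. The interpreter path under the env prefix is an assumption based on the location given above; off the image the probe falls back to whatever python3 is on PATH and degrades gracefully:

    ```shell
    # Probe the system Python env for PyTorch and CUDA availability.
    PY=/opt/python/pkgs/python-3.12.12/bin/python3   # assumed layout of the env prefix
    [ -x "$PY" ] || PY=python3                       # fallback off the image
    { "$PY" -c 'import torch; print("PyTorch", torch.__version__, "CUDA:", torch.cuda.is_available())' \
        2>/dev/null || echo "torch not importable with $PY"; } > ml_probe.txt
    cat ml_probe.txt
    ```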

    NeMo / BioNeMo stack (segregated venv, activate via . /usr/local/py-env/nemo/bin/activate):

    • NVIDIA NeMo Toolkit 2.5.3
    • NVIDIA BioNeMo suite (core 2.4.5, fw 2.7.1, ESM-2, Evo2, AMPLIFY, Geneformer, MoCo, scDL)
    • PyTorch 2.9.1 (CUDA 12.9)
    • Megatron-Core 0.14 + Megatron-Bridge, Megatron-FSDP, Megatron-Energon
    • Flash-Attention 2.7.4
    • PyTorch Lightning 2.4.0, Accelerate 1.13.0
    • Diffusers 0.37.1, PEFT 0.18.1
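
    Activating the segregated NeMo/BioNeMo environment can be sketched as follows (activation path as given above; it exists only on a running ParaTools Pro instance, hence the guard):

    ```shell
    # Activate the NeMo venv and report the installed NeMo version.
    NEMO_ACTIVATE=/usr/local/py-env/nemo/bin/activate
    if [ -f "$NEMO_ACTIVATE" ]; then
        . "$NEMO_ACTIVATE"
        python -c "import nemo; print('NeMo', nemo.__version__)" > nemo_probe.txt
        deactivate
    else
        echo "NeMo venv not found at $NEMO_ACTIVATE" > nemo_probe.txt
    fi
    cat nemo_probe.txt
    ```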

    AWS NeuronX (dedicated venv /opt/aws_neuron_venv_pytorch):

    • Drivers: aws-neuronx-dkms 2.27.4.0, collectives 2.31.24.0, runtime-lib 2.31.24.0, tools 2.29.18.0
    • torch-neuronx 2.9.0, torch-xla 2.9.0
    • neuronx-cc 2.24.5133, neuronx-distributed 0.18.27753
    • jax-neuronx 0.7.0.1.0
    • CLI tools at /opt/aws/neuron/bin (neuron-ls, neuron-top, neuron-bench, ...)
    • Example workflows in /opt/demo/examples/neuronx/{inf1,inf2,trn1}
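
    On an Inf/Trn instance the Neuron setup can be inspected with the paths listed above; a guarded sketch (only meaningful on a Neuron-capable deployment of this image):

    ```shell
    # Activate the Neuron venv and enumerate Neuron devices, if present.
    if [ -x /opt/aws/neuron/bin/neuron-ls ]; then
        . /opt/aws_neuron_venv_pytorch/bin/activate
        /opt/aws/neuron/bin/neuron-ls > neuron_probe.txt   # list Neuron devices
    else
        echo "neuron-ls not present (not an Inf/Trn deployment)" > neuron_probe.txt
    fi
    cat neuron_probe.txt
    ```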

    Containers + orchestration:

    • Docker 29.4.0, Podman 4.9.3, Singularity-CE 4.4.1
    • k3s v1.34.6 + kubectl

    IDE:

    • VS Codium 1.106.37943

    Packages installed via Spack:

    • adios@1.13.1
    • adios2@2.10.2
    • adios2@2.11.0
    • alquimia@1.1.0
    • aml@0.2.1
    • amrex@25.10
    • arborx@1.5
    • arborx@2.0.1
    • argobots@1.2
    • ascent@0.9.5
    • axom@0.10.1
    • boost@1.88.0
    • bricks@2023.08.25
    • butterflypack@3.2.0
    • cabana@0.7.0
    • caliper@2.12.1
    • chai@2025.03.0
    • chapel@2.6.0
    • charliecloud@0.40
    • conduit@0.9.5
    • cp2k@2025.2
    • cusz@0.14.0
    • darshan-runtime@3.4.7
    • darshan-util@3.4.7
    • datatransferkit@3.1.1
    • dyninst@13.0.0
    • e4s-alc@1.0.3
    • e4s-cl@1.0.5
    • exago@1.6.0
    • faodel@1.2108.1
    • fftx@1.2.0
    • flecsi@2.4.1
    • flit@2.1.0
    • fpm@0.10.0
    • gasnet@2025.8.0
    • ginkgo@1.10.0
    • globalarrays@5.8.2
    • glvis@4.4
    • gmp@6.3.0
    • gotcha@1.0.8
    • gptune@4.0.0
    • gromacs@2025.3
    • h5bench@1.4
    • hdf5@1.14.6
    • hdf5-vol-async@1.7
    • hdf5-vol-cache@v1.1
    • hdf5-vol-log@1.4.0
    • heffte@2.4.1
    • hpctoolkit@2025.1.0
    • hpx@1.11.0
    • hypre@2.33.0
    • kokkos@4.7.01
    • kokkos-kernels@4.7.01
    • laghos@3.1
    • lammps@20250722
    • lbann@0.104
    • legion@25.03.0
    • libcatalyst@2.0.0
    • libceed@0.12.0
    • libnrm@0.1.0
    • libquo@1.4
    • libunwind@1.8.3
    • loki@0.1.7
    • magma@2.9.0
    • metall@0.30
    • mfem@4.8.0
    • mgard@compat-2023-12-09
    • mpark-variant@1.4.0
    • mpifileutils@0.12
    • nccmp@1.9.1.0
    • nco@5.3.4
    • nek5000@19.0
    • nekbone@17.0
    • netcdf-fortran@4.6.2
    • netlib-lapack@3.12.1
    • netlib-scalapack@2.2.2
    • nrm@0.1.0
    • omega-h@10.8.6-scorec 
    • openfoam@2412
    • openmpi@5.0.8
    • openpmd-api@0.16.1
    • papi@7.2.0
    • papyrus@1.0.2
    • parallel-netcdf@1.14.1
    • pdt@3.25.2
    • petsc@3.24.0
    • phist@1.12.1
    • plasma@24.8.7
    • plumed@2.9.2
    • precice@3.3.0
    • pruners-ninja@1.0.1
    • pumi@2.2.9
    • py-cinemasci@1.7.0
    • py-h5py@3.14.0
    • py-jupyterhub@1.4.1
    • py-libensemble@1.5.0
    • py-petsc4py@3.24.0
    • qthreads@1.18
    • quantum-espresso@7.5
    • raja@2025.03.0
    • rempi@1.1.0
    • scr@3.1.0
    • slate@2025.05.28
    • slepc@3.24.0
    • strumpack@8.0.0
    • sundials@7.5.0
    • superlu@7.0.0
    • superlu-dist@9.1.0
    • swig@4.0.2-fortran 
    • sz@2.1.12.5
    • sz3@3.2.0
    • tasmanian@8.1
    • tau@2.35.1
    • trilinos@16.1.0
    • turbine@1.3.0
    • umap@2.1.1
    • umpire@2025.03.0
    • upcxx@2023.9.0
    • variorum@0.8.0
    • veloc@1.7
    • vtk-m@2.3.0
    • wannier90@3.1.0
    • warpx@25.04
    • wps@4.5
    • wrf@4.6.1
    • xyce@7.10.0
    • zfp@1.0.1

    Additional details

    Usage instructions

    The 1-Click Security Group opens port 22 only, so that you can access your instance via SSH as the 'ubuntu' user; you can change this later.

    For software development and basic usage:

    1. Launch the ParaTools Pro for E4S (TM) AMI via 1-Click.
    2. On the 'EC2 Launch an Instance' page, pick the key pair you will use to log in.
    3. On the same page, optionally edit the network settings: adjust the firewall rules if needed to ensure SSH access, and enable 'Auto-assign public IP' if you plan to access the instance from a non-AWS IP address.
    4. Click 'Launch Instance'.
    5. Find your running instance in the EC2 Instances section of the EC2 dashboard, select it, and press the Connect button to connect via SSH with the key pair you selected.
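
    The shape of the resulting connection command is sketched below (both values are placeholders: substitute the key pair file you chose at launch and the public IP or DNS name shown for the instance in the EC2 console):

    ```shell
    # Build the SSH command for this image (login user is 'ubuntu').
    KEY=my-keypair.pem        # placeholder: your key pair file
    HOST=203.0.113.10         # placeholder: your instance's public IP/DNS
    chmod 400 "$KEY" 2>/dev/null || true   # keys must not be world-readable
    echo "ssh -i $KEY ubuntu@$HOST" | tee ssh_cmd.txt
    ```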

    For more advanced usage, including launching a ParaTools Pro for E4S (TM) cluster with the Heidi AI Cloud Supercomputer and submitting multi-node jobs, please see:

    https://docs.paratoolspro.com/AWS/getting-started-AWS/ 

    Support

    Vendor support

    For general support questions, please email support@paratools.com 

    Paid support contracts, custom AMIs, and computing environments are available. Please see https://paratoolspro.com/  for additional details.

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
