Overview
A fully pre-configured deep learning environment with TensorFlow 2.15 and Keras 3.0 on Ubuntu 22.04 LTS. This enterprise-ready Amazon Machine Image (AMI) provides a complete machine learning stack optimized for AWS infrastructure. Deploy in minutes and start building AI applications immediately without the complexity of environment setup and dependency management.
Key Features
CORE FRAMEWORKS
TensorFlow 2.15.0 with full GPU support
Keras 3.0.0 with multi-backend compatibility
NumPy, SciPy, and scientific computing stack
Comprehensive ML ecosystem pre-installed
DEVELOPMENT TOOLS
Jupyter Lab and Jupyter Notebook ready
Complete Python 3.10 development environment
Pre-configured development tools and libraries
Automated dependency resolution
PRODUCTION READY
Ubuntu 22.04 LTS base operating system
Optimized for AWS EC2 instances
GPU-ready configuration (CUDA support)
Security updates and maintenance included
FULL ML STACK
Computer Vision: OpenCV, Pillow
Data Science: pandas, matplotlib, seaborn
Machine Learning: scikit-learn, XGBoost ready
Utilities: requests, beautifulsoup4, and more
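As a quick confirmation that the stack listed above is present after launch, a minimal sketch like the following can be run on the instance. The import names (cv2, PIL, bs4, etc.) assume the standard distributions of the listed packages; the exact versions printed will be whatever the image ships.

```python
# Quick sanity check of the pre-installed ML stack.
import cv2          # OpenCV (computer vision)
import PIL          # Pillow (image I/O)
import pandas as pd
import matplotlib
import seaborn as sns
import sklearn      # scikit-learn
import requests
import bs4          # beautifulsoup4

for name, module in [("OpenCV", cv2), ("Pillow", PIL), ("pandas", pd),
                     ("matplotlib", matplotlib), ("seaborn", sns),
                     ("scikit-learn", sklearn), ("requests", requests),
                     ("beautifulsoup4", bs4)]:
    print(f"{name}: {getattr(module, '__version__', 'installed')}")
```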
Use Cases
AI Research and Development
Computer Vision Projects
Natural Language Processing
Time Series Analysis and Forecasting
Academic and Educational Projects
Enterprise ML Prototyping
Production Model Deployment
ML Training and Workshops
Technical Specifications
Operating System: Ubuntu 22.04 LTS
Python Version: 3.10.12
Core Framework: TensorFlow 2.15.0
High-level API: Keras 3.0.0
Package Management: pip and virtualenv
Default Shell: bash with optimized configuration
Benefits
Save Hours of Setup Time: No need to install and configure complex dependencies
Consistent Environments: Ensure reproducibility across development and production
Cost Effective: Pay only for EC2 resources, no additional licensing fees
Scalable: Works with all EC2 instance types from t3.micro to p4d.24xlarge
Secure: Regular security updates and maintained package versions
Flexible: Suitable for both CPU and GPU accelerated workloads
Getting Started
Launch this AMI from AWS Marketplace and access your ready-to-use deep learning environment. Connect via SSH to begin development, or use Jupyter Lab through your web browser. All tools are pre-configured and ready for immediate use.
Highlights
- Production-Ready Deep Learning Stack: Fully configured TensorFlow 2.15 and Keras 3.0 environment on Ubuntu 22.04 LTS. Includes a complete ML ecosystem with Jupyter, pandas, OpenCV, and scikit-learn. Enterprise-optimized setup saves hours of installation and configuration time.
- GPU-Ready & AWS-Optimized: Pre-configured for both CPU and GPU acceleration on AWS EC2 instances. Supports CUDA and all instance types from cost-effective t3 to high-performance p4d. Optimized for AWS infrastructure with security updates included.
- Zero Setup Time & Cost-Effective: Launch and start coding in minutes with pre-installed development tools. Pay only for EC2 resources with no additional licensing fees. Suitable for workloads from prototyping to production across research, education, and enterprise use cases.
Details
Pricing
Free trial
| Dimension | Cost/hour |
|---|---|
| t2.large (Recommended) | $0.10 |
| t3.micro | $0.10 |
| u7i-12tb.224xlarge | $6.40 |
| r6i.metal | $0.00 |
| inf2.8xlarge | $6.40 |
| c4.8xlarge | $6.40 |
| c4.large | $0.10 |
| c7i-flex.large | $0.10 |
| c7i-flex.8xlarge | $6.40 |
| vt1.6xlarge | $1.60 |
Vendor refund policy
For this offering, Galaxys Cloud does not offer refunds; you may cancel at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
64-bit (x86) Amazon Machine Image (AMI)
Amazon Machine Image (AMI)
An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.
Version release notes
ver2025
Additional details
Usage instructions
Launching the AMI
- Navigate to AWS Marketplace and subscribe to the TensorFlow 2.15 + Keras 3.0 Deep Learning Environment
- Click "Continue to Subscribe" then "Continue to Configuration"
- Choose your preferred region and delivery method (typically 64-bit x86)
- Click "Continue to Launch" and select "Launch through EC2"
- Choose your instance type based on workload requirements:
- Development: t3.medium or t3.large
- GPU workloads: g4dn.xlarge or p3.2xlarge
- Production: c5.2xlarge or larger
- Configure instance details, add storage (minimum 20GB recommended), and add tags if needed
- Configure security group to allow SSH (port 22) and Jupyter (port 8888)
- Review and launch the instance using your existing key pair or create a new one
Initial Access and Setup
- Once the instance is running, connect via SSH: ssh -i your-key.pem ubuntu@your-instance-ip
- The environment is pre-configured and ready for immediate use
- All packages are installed in the system Python environment
- No additional setup or configuration required
Using Jupyter Notebook
- Start Jupyter Lab with: jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --allow-root
- Note the access token displayed in the terminal output
- Open your web browser and navigate to: http://your-instance-ip:8888
- Enter the token when prompted to access Jupyter Lab
- For persistent Jupyter sessions, consider using screen or tmux
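Once Jupyter Lab is open, a first cell like the one below can confirm that the scientific stack and inline plotting work in the browser. The data here is synthetic and purely illustrative.

```python
# Minimal first notebook cell: synthetic data plus a quick plot.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
plt.plot(x, np.sin(x), label="sin(x)")
plt.plot(x, np.cos(x), label="cos(x)")
plt.legend()
plt.title("Environment check: matplotlib inside Jupyter Lab")
plt.show()
```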
Development Workflow
- Create your Python scripts or Jupyter notebooks in the home directory
- Import TensorFlow and Keras at the top of your script or notebook: import tensorflow as tf and import keras
- Verify the installation with: print(f"TensorFlow version: {tf.__version__}") and print(f"Keras version: {keras.__version__}") (see the sketch after this list)
- Use pre-installed libraries: numpy, pandas, matplotlib, scikit-learn, opencv
- Additional packages can be installed using pip
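As a minimal end-to-end sketch of this workflow, the following trains a tiny Keras model on synthetic data. The dataset and model architecture are illustrative only; they are not part of the image and exist just to show that TensorFlow 2.15 and Keras 3.0 work together out of the box.

```python
# Minimal end-to-end check: train a tiny Keras model on synthetic data.
import numpy as np
import tensorflow as tf
import keras

print(f"TensorFlow version: {tf.__version__}")
print(f"Keras version: {keras.__version__}")

# Synthetic binary classification data (illustrative only).
rng = np.random.default_rng(0)
x_train = rng.normal(size=(1000, 20)).astype("float32")
y_train = (x_train.sum(axis=1) > 0).astype("int32")

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=32, verbose=2)
```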
GPU Configuration (Optional)
- For GPU instances, ensure you select GPU-optimized instance types
- GPU drivers are pre-installed and configured
- TensorFlow will automatically detect and use available GPUs
- Verify GPU detection with: tf.config.list_physical_devices('GPU')
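A short sketch for confirming GPU visibility on a GPU instance is shown below. The memory-growth setting is an optional convenience commonly used with TensorFlow, not a claim about how the image is configured.

```python
# Check whether TensorFlow sees a GPU and where ops are placed.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs visible to TensorFlow: {gpus}")

if gpus:
    # Optional: allocate GPU memory on demand instead of reserving it all up front.
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)

# A small matmul; on a GPU instance this should be placed on the GPU automatically.
a = tf.random.normal((1024, 1024))
b = tf.random.normal((1024, 1024))
c = tf.matmul(a, b)
print(f"Result computed on: {c.device}")
```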
Best Practices
- Regularly update packages using: pip install --upgrade package-name
- Use virtual environments for project-specific dependencies
- Monitor instance performance through AWS CloudWatch
- Set up regular backups of your important work
- Use EBS snapshots for persistent storage needs
- Configure security groups to restrict access to necessary ports only
Stopping and Terminating
- Stop instance when not in use to save costs
- Backup important data before terminating instances
- Terminate instance through EC2 console when no longer needed
- Remember that instance storage is ephemeral and data will be lost on termination
Support and Resources
- Check the documentation for common issues and solutions
- Review AWS documentation for EC2 instance management
- Monitor instance metrics and logs through AWS CloudWatch
Support
Vendor support
Remote support: seller@galaxys.cloud
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.