Overview

H2O LLM Studio was created by our top Kaggle Grandmasters and provides organizations with a no-code fine-tuning framework to make their own custom state-of-the-art LLMs for enterprise applications.
With H2O LLM Studio, you can:
- easily and effectively fine-tune LLMs without needing any coding experience.
- use a graphical user interface (GUI) specifically designed for large language models.
- fine-tune any LLM using a large variety of hyperparameters.
- use recent fine-tuning techniques such as Low-Rank Adaptation (LoRA) and 8-bit model training with a low memory footprint.
- use Reinforcement Learning (RL) to fine-tune your model (experimental).
- use advanced evaluation metrics to judge the answers generated by the model.
- track and compare your model performance visually; Neptune and W&B integrations are also available.
- chat with your model and get instant feedback on your model's performance.
- easily export your model to the Hugging Face Hub and share it with the community.
Highlights
- GenAI framework
- LLM no-code fine-tuning
- Create your own LLM
Details
Pricing
Additional AWS infrastructure costs
| Type | Cost |
|---|---|
| EBS General Purpose SSD (gp3) volumes | $0.08 per GB-month of provisioned storage |
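As a rough worked example, the 1 TB (1000 GB) volume recommended in the usage instructions below would incur about 1000 GB × $0.08/GB-month = $80/month in EBS storage charges, in addition to the hourly cost of the EC2 instance itself.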
Vendor refund policy
This AMI is provided free of charge and is open source. As such, the vendor does not bill you for its use, and no refunds are necessary or applicable. You will only incur standard AWS infrastructure fees for running the AMI on AWS services, which are managed and billed directly by AWS. If you have questions about infrastructure costs, please refer to the AWS Billing & Cost Management service or contact AWS Support.
Delivery details
64-bit (x86) Amazon Machine Image (AMI)
Amazon Machine Image (AMI)
An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.
Version release notes
v1.13.0 (latest)
New features:
- Support for Llama 3.2
- Default model list can be configured with environment variables
- Expanded dataset import connectors
- app.toml updates
- Dataset import with H2O Drive
Fixes:
- Cap progress at 0.99 if the experiment is still running
- Pass settings for hf_transfer
- Fix flash attention in the Docker image
Additional details
Usage instructions
Steps (see the AWS CLI sketch after this list):
- Choose an appropriately sized EC2 instance (for example, g5.xlarge)
- Configure the EC2 instance for PEM key access
- Configure the security group to allow inbound TCP traffic on port 10101
- Configure the attached storage for 1 TB (1000 GB)
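As a rough illustration, the same setup can be scripted with the AWS CLI. The AMI ID, key pair name, and security group ID below are placeholders to replace with your own values; treat this as a sketch of the steps above, not an exact recipe.

```bash
# Hypothetical values -- replace with your own AMI ID, key pair, and security group.
AMI_ID="ami-xxxxxxxxxxxxxxxxx"   # H2O LLM Studio AMI from AWS Marketplace
KEY_NAME="my-llm-studio-key"     # existing EC2 key pair (PEM)
SG_ID="sg-xxxxxxxxxxxxxxxxx"     # security group attached to the instance

# Open TCP port 10101 so the Studio UI is reachable (restrict the CIDR as appropriate).
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" \
  --protocol tcp --port 10101 \
  --cidr 0.0.0.0/0

# Launch a g5.xlarge with a 1 TB (1000 GB) gp3 volume.
# The root device name (/dev/sda1) may differ; check the AMI's root device name.
aws ec2 run-instances \
  --image-id "$AMI_ID" \
  --instance-type g5.xlarge \
  --key-name "$KEY_NAME" \
  --security-group-ids "$SG_ID" \
  --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=1000,VolumeType=gp3}'
```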
To connect to the Studio application, open your web browser and go to http://[EC2-accessible-IP]:10101
Note: It can take several minutes for H2O LLM Studio to fully initialize.
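Because startup can take several minutes, one way to check readiness is to poll the UI port from your workstation. This is a generic shell sketch, not part of the product itself; the IP address is a placeholder for your instance's public IP.

```bash
# Poll until H2O LLM Studio responds on port 10101.
EC2_IP="203.0.113.10"   # hypothetical example address -- use your instance's public IP
until curl -s -o /dev/null "http://${EC2_IP}:10101"; do
  echo "Waiting for H2O LLM Studio to come up..."
  sleep 15
done
echo "H2O LLM Studio is reachable at http://${EC2_IP}:10101"
```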
This product is open source and doesn't require a license. For further information, please see our documentation: https://docs.h2o.ai/h2o-llmstudio/
To connect to the EC2 instance, use the PEM key file configured at launch. Example:
ssh -i [CUSTOMER-PEM-Key.pem] ubuntu@[EC2-accessible-IP]
Once you have shell access, gain root by entering:
sudo -i
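If the UI is not reachable, a couple of generic checks from the shell can help narrow things down. These are standard Linux/NVIDIA tools and a troubleshooting sketch rather than vendor-documented steps; the GPU check assumes the AMI ships with the NVIDIA drivers that GPU instance types require.

```bash
# Confirm something is listening on the Studio port (10101).
sudo ss -tlnp | grep 10101

# Confirm the GPU is visible to the instance (assumes NVIDIA drivers are installed).
nvidia-smi
```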
Support
Vendor support
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.