Overview
Holo1 is an Action Vision-Language Model (VLM) developed by HCompany for use in the Surfer-H web agent system. It is designed to interact with web interfaces like a human user.
As part of a broader agentic architecture, Holo1 acts as a policy, localizer, or validator, helping the agent understand and act in digital environments.
Trained on a mix of open-access, synthetic, and self-generated data, Holo1 achieves state-of-the-art (SOTA) performance on the WebVoyager benchmark, offering the best accuracy/cost tradeoff among current models. It also excels in UI localization benchmarks such as Screenspot, Screenspot-V2, Screenspot-Pro, GroundUI-Web, and our own newly introduced benchmark, WebClick.
Holo1 is optimized for both accuracy and cost-efficiency, making it a strong open-source alternative to existing VLMs.
Highlights
- Surfer-H operates purely through the browser, just like a real user. Combined with Holo1, it becomes a powerful, general-purpose, cost-efficient web automation system.
- UI Localization: A key skill for the real-world utility of VLMs within agents is localization: identifying the precise on-screen coordinates to interact with in order to complete a task or follow an instruction (see the sketch after this list).
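A minimal sketch of what a localization query could look like through the OpenAI-compatible chat/completions payload described under Inputs below. The prompt wording, the `holo1` model identifier, and the expected "(x, y)" answer format are illustrative assumptions, not a documented Holo1 prompt schema.

```python
import base64
import json

# Hypothetical localization request: the instruction text and the expected
# "(x, y)" pixel-coordinate answer are illustrative assumptions.
with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "holo1",  # placeholder model identifier
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                {"type": "text",
                 "text": "Return the (x, y) pixel coordinates of the 'Add to cart' button."},
            ],
        }
    ],
    "max_tokens": 64,
}

# The agent would POST this JSON body to the chat/completions endpoint.
print(json.dumps(payload)[:200])
```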
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.g5.4xlarge Inference (Batch), Recommended | Model inference on the ml.g5.4xlarge instance type, batch mode | $0.00 |
| ml.g6e.4xlarge Inference (Real-Time), Recommended | Model inference on the ml.g6e.4xlarge instance type, real-time mode | $0.00 |
| ml.g5.8xlarge Inference (Batch) | Model inference on the ml.g5.8xlarge instance type, batch mode | $0.00 |
| ml.g5.16xlarge Inference (Batch) | Model inference on the ml.g5.16xlarge instance type, batch mode | $0.00 |
| ml.g6e.8xlarge Inference (Real-Time) | Model inference on the ml.g6e.8xlarge instance type, real-time mode | $0.00 |
| ml.g6e.16xlarge Inference (Real-Time) | Model inference on the ml.g6e.16xlarge instance type, real-time mode | $0.00 |
Vendor refund policy
This product is offered free of charge. As no payment is collected, refunds do not apply.
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
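A sketch of deploying the model package for real-time inference with the SageMaker Python SDK. The model-package ARN, endpoint name, and execution-role assumption are placeholders; use the values from your subscription and account.

```python
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes this runs inside a SageMaker environment

# Placeholder ARN: substitute the model-package ARN from your subscription.
model = ModelPackage(
    role=role,
    model_package_arn="arn:aws:sagemaker:<region>:<account>:model-package/<holo1-package>",
    sagemaker_session=session,
)

# ml.g6e.4xlarge is the recommended real-time instance type from the pricing table above.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g6e.4xlarge",
    endpoint_name="holo1-endpoint",  # hypothetical endpoint name
)
```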
Version release notes
SageMaker inference product based on the OpenAI Chat Completions protocol.
Additional details
Inputs
- Summary
This model can be invoked through the OpenAI-compatible chat/completions endpoint, following the payload structure defined in the OpenAI Chat Completions API (https://platform.openai.com/docs/api-reference/chat/create). The model supports structured input during inference; an invocation sketch follows this list.
- Limitations for input type
- The served model supports a maximum context length of 16,384 tokens and can handle up to 3 images per request. To ensure accurate coordinate predictions, input images must be resized client-side. This avoids automatic resizing on the server, which can alter image dimensions.
- Input MIME type
- application/json
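A sketch of a real-time invocation that resizes the screenshot client-side before sending it, per the limitation above. The endpoint name, target resolution, and prompt are illustrative assumptions.

```python
import base64
import io
import json

import boto3
from PIL import Image

runtime = boto3.client("sagemaker-runtime")

# Resize on the client so the coordinates returned match the pixels you sent,
# avoiding server-side resizing that could alter image dimensions.
img = Image.open("screenshot.png")
img = img.resize((1280, 720))  # example target size; keep it consistent across requests
buf = io.BytesIO()
img.save(buf, format="PNG")
image_b64 = base64.b64encode(buf.getvalue()).decode()

body = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                {"type": "text", "text": "Click the search bar."},
            ],
        }
    ],
    "max_tokens": 128,  # stays well under the 16,384-token context limit
}

response = runtime.invoke_endpoint(
    EndpointName="holo1-endpoint",    # hypothetical endpoint name
    ContentType="application/json",   # the documented input MIME type
    Body=json.dumps(body),
)
print(json.loads(response["Body"].read()))
```

Requests are limited to 3 images and 16,384 total tokens, so a browsing agent would typically send one freshly captured, client-resized screenshot per call.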
Support
Vendor support
At this time, we do not provide direct support for this product. However, we are continuously improving the offering and may add support in the future.
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.