
Overview
ReaderLM v2 is a 1.5B-parameter language model that converts raw HTML into beautifully formatted markdown or JSON with superior accuracy and improved handling of longer contexts. It supports up to 512K tokens of combined input and output length and offers multilingual support across 29 languages, including English, Chinese, Japanese, Korean, French, Spanish, Portuguese, German, Italian, Russian, Vietnamese, Thai, Arabic, and more.
Thanks to its new training paradigm and higher-quality training data, ReaderLM v2 is a significant leap forward from its predecessor, particularly in handling long-form content and markdown syntax generation. While the first generation approached HTML-to-markdown conversion as a "selective-copy" task, v2 treats it as a true translation process. This shift enables the model to masterfully leverage markdown syntax, excelling at generating complex elements like code fences, nested lists, tables, and LaTeX equations.
Highlights
- **High-Accuracy HTML-to-Markdown Conversion with Improved Stability**: Transforms raw HTML into well-structured markdown, preserving complex elements like nested lists, tables, and LaTeX equations, while addressing degeneration issues such as repetition and looping in long sequences.
- **Direct HTML-to-JSON Extraction**: Allows users to directly convert HTML to JSON using customizable schemas, eliminating the need for intermediate markdown conversion (see the prompt sketch after this list).
- **Longer Context and Multilingual Support**: Handles up to 512K tokens in combined input and output length, and supports 29 languages, making it ideal for diverse and large-scale web data processing.
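As a rough illustration of the schema-driven extraction mentioned above, the sketch below assembles a prompt that pairs a JSON schema with raw HTML. The instruction wording, helper name, and schema here are illustrative assumptions rather than the model's documented prompt format; the vendor's `create_prompt` helper in the example notebook referenced under Inputs is the authoritative reference.

```python
import json

# Illustrative only: the exact instruction wording expected by ReaderLM-v2 is an
# assumption; follow the vendor's create_prompt helper for production use.
def build_json_extraction_prompt(html: str, schema: dict) -> str:
    """Compose a prompt asking the model to return JSON conforming to `schema`."""
    return (
        "Extract the specified information from the HTML below and return it as JSON "
        "that conforms to this schema:\n"
        f"```json\n{json.dumps(schema, indent=2)}\n```\n"
        f"```html\n{html}\n```"
    )

# Hypothetical schema describing fields to pull out of a product page.
schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "price": {"type": "string"},
        "features": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title"],
}

html = "<html><body><h1>ReaderLM v2</h1><p>$4.50/hour</p></body></html>"
print(build_json_extraction_prompt(html, schema))
```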
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.g4dn.xlarge Inference (Batch), Recommended | Model inference on the ml.g4dn.xlarge instance type, batch mode | $2.70 |
| ml.g5.xlarge Inference (Real-Time), Recommended | Model inference on the ml.g5.xlarge instance type, real-time mode | $4.50 |
| ml.g4dn.4xlarge Inference (Batch) | Model inference on the ml.g4dn.4xlarge instance type, batch mode | $7.20 |
| ml.g4dn.16xlarge Inference (Batch) | Model inference on the ml.g4dn.16xlarge instance type, batch mode | $26.10 |
| ml.g4dn.8xlarge Inference (Batch) | Model inference on the ml.g4dn.8xlarge instance type, batch mode | $13.68 |
| ml.g4dn.12xlarge Inference (Batch) | Model inference on the ml.g4dn.12xlarge instance type, batch mode | $20.25 |
| ml.g5.xlarge Inference (Batch) | Model inference on the ml.g5.xlarge instance type, batch mode | $4.50 |
| ml.g4dn.2xlarge Inference (Batch) | Model inference on the ml.g4dn.2xlarge instance type, batch mode | $3.96 |
| ml.g4dn.4xlarge Inference (Real-Time) | Model inference on the ml.g4dn.4xlarge instance type, real-time mode | $7.20 |
| ml.g4dn.16xlarge Inference (Real-Time) | Model inference on the ml.g4dn.16xlarge instance type, real-time mode | $26.10 |
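As a worked example of the rates above, a single real-time endpoint on the recommended ml.g5.xlarge instance running for 24 hours accrues 24 × $4.50 = $108.00 in software charges; the underlying SageMaker instance (infrastructure) cost is typically billed separately by AWS and is not included in this table.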
Vendor refund policy
Refunds are processed under the conditions specified in the EULA.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
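A minimal deployment sketch using the SageMaker Python SDK is shown below, assuming you have subscribed to this listing and obtained a model package ARN. The ARN, IAM role, and endpoint name are placeholders; the instance type is the recommended real-time option from the pricing table above.

```python
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()

# Placeholders: substitute your own execution role and the model package ARN
# from your AWS Marketplace subscription.
role = "arn:aws:iam::123456789012:role/YourSageMakerExecutionRole"
model_package_arn = "arn:aws:sagemaker:us-east-1:123456789012:model-package/readerlm-v2"

model = ModelPackage(
    role=role,
    model_package_arn=model_package_arn,
    sagemaker_session=session,
)

# Create a real-time endpoint on the recommended instance type.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",
    endpoint_name="readerlm-v2",  # placeholder endpoint name
)
```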
Version release notes
Initial release.
Additional details
Inputs
- Summary: Model input follows the format described in the table below.
- Input MIME type: text/csv
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| model | Must be the fixed value "ReaderLM-v2". | Type: FreeText | Yes |
| prompt | Prompt to the model, containing the input, instructions, and expected return type. See the `create_prompt` function in the example notebook at https://github.com/jina-ai/jina-sagemaker/blob/main/notebooks/Reader-LM.ipynb for usage details. | Type: FreeText | Yes |
| stream | Whether to stream back partial progress. | Type: FreeText; Default value: false | No |
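Once an endpoint is running, a request carrying the fields above might look like the hedged sketch below. The JSON payload shape and content type are assumptions inferred from the field table, so confirm the exact request format against the example notebook linked in the `prompt` description.

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

# Payload built from the fields in the table above; the exact serialization
# expected by the container is an assumption -- see the vendor notebook for
# the authoritative format.
payload = {
    "model": "ReaderLM-v2",
    "prompt": "Convert the following HTML to Markdown:\n```html\n<h1>Hello</h1>\n```",
    "stream": "false",
}

response = runtime.invoke_endpoint(
    EndpointName="readerlm-v2",      # placeholder endpoint name
    ContentType="application/json",  # assumption; the listing declares text/csv
    Body=json.dumps(payload),
)
print(response["Body"].read().decode("utf-8"))
```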
Resources
Vendor resources
Support
Vendor support
We provide support for this model package through our enterprise support channel.
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.