
Overview
jina-embeddings-v3 is a multilingual, multi-task text embedding model designed for a variety of NLP applications. Based on the Jina-XLM-RoBERTa architecture, it uses Rotary Position Embeddings (RoPE) to handle long input sequences of up to 8192 tokens, and it features five LoRA adapters that generate task-specific embeddings efficiently.
Highlights
- **Extended Sequence Length**: Supports up to 8192 tokens with [RoPE](https://arxiv.org/abs/2104.09864).
- **Task-Specific Embedding**: Customize embeddings through the task argument with the following options:
  - *retrieval.query*: Used for query embeddings in asymmetric retrieval tasks
  - *retrieval.passage*: Used for passage embeddings in asymmetric retrieval tasks
  - *separation*: Used for embeddings in clustering and re-ranking applications
  - *classification*: Used for embeddings in classification tasks
  - *text-matching*: Used for embeddings in tasks that quantify similarity between two texts, such as STS or symmetric retrieval tasks
- **[Matryoshka Embeddings](https://arxiv.org/abs/2205.13147)**: Supports flexible embedding sizes (32, 64, 128, 256, 512, 768, 1024), allowing you to truncate embeddings to fit your application; see the truncation sketch below this list.
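For illustration, here is a minimal sketch of client-side Matryoshka truncation, assuming a 1024-dimensional embedding has already been retrieved (the vector below is a random placeholder). In practice you can also simply request a smaller size via the `dimensions` parameter described under Inputs.

```python
import numpy as np

# Placeholder for a 1024-dim embedding returned by the model.
embedding = np.random.default_rng(0).normal(size=1024)

# Matryoshka-trained embeddings can be truncated to a leading prefix.
dim = 256  # any supported size: 32, 64, 128, 256, 512, 768, 1024
truncated = embedding[:dim]

# Re-normalize so cosine similarities stay on a comparable scale.
truncated /= np.linalg.norm(truncated)
```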
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.g5.xlarge Inference (Batch), recommended | Model inference on the ml.g5.xlarge instance type, batch mode | $2.50 |
| ml.g5.xlarge Inference (Real-Time), recommended | Model inference on the ml.g5.xlarge instance type, real-time mode | $2.50 |
| ml.p2.xlarge Inference (Batch) | Model inference on the ml.p2.xlarge instance type, batch mode | $2.30 |
| ml.g4dn.4xlarge Inference (Batch) | Model inference on the ml.g4dn.4xlarge instance type, batch mode | $4.00 |
| ml.g4dn.16xlarge Inference (Batch) | Model inference on the ml.g4dn.16xlarge instance type, batch mode | $14.50 |
| ml.p2.16xlarge Inference (Batch) | Model inference on the ml.p2.16xlarge instance type, batch mode | $35.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $48.25 |
| ml.g4dn.2xlarge Inference (Batch) | Model inference on the ml.g4dn.2xlarge instance type, batch mode | $2.20 |
| ml.p3.8xlarge Inference (Batch) | Model inference on the ml.p3.8xlarge instance type, batch mode | $25.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $7.00 |
Vendor refund policy
Refunds are processed under the conditions specified in the EULA.
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
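As a rough sketch (not vendor-provided code), deploying the model package with the SageMaker Python SDK looks like the following; the model package ARN, IAM role, and endpoint name are placeholders to replace with your own values.

```python
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # or an explicit IAM role ARN

# Placeholder ARN: copy the real one from your Marketplace subscription.
model = ModelPackage(
    role=role,
    model_package_arn="arn:aws:sagemaker:<region>:<account>:model-package/<jina-embeddings-v3-version>",
    sagemaker_session=session,
)

# Deploys a real-time endpoint on the vendor-recommended instance type.
model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",
    endpoint_name="jina-embeddings-v3",  # placeholder endpoint name
)
```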
Version release notes
Bug fixes.
Additional details
Inputs
- Summary
The model accepts JSON input. Texts must be passed in the following format, which is also used in the invocation sketch after this list:
{ "data": [ { "text": "How is the weather today?" }, { "text": "What's the color of an orange?" } ], "parameters": { "task": "text-matching", "late_chunking": false, "dimensions": 1024 } }
- Input MIME type
- application/json
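To show the payload in context, here is a hedged sketch of a real-time invocation with boto3, reusing the placeholder endpoint name from the deployment sketch above and assuming the endpoint accepts `application/json` (as the JSON format under Summary suggests). The response schema is not documented on this page, so the sketch only parses the raw JSON.

```python
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    "data": [
        {"text": "How is the weather today?"},
        {"text": "What's the color of an orange?"},
    ],
    "parameters": {"task": "text-matching", "late_chunking": False, "dimensions": 1024},
}

response = runtime.invoke_endpoint(
    EndpointName="jina-embeddings-v3",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
result = json.loads(response["Body"].read())
```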
Input data descriptions
The following table describes the supported input data fields for real-time inference and batch transform; a batch transform sketch follows the table.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| data.text | The text to embed; each object in the `data` array carries one text string. | Type: FreeText | Yes |
| parameters.task | The downstream task the embeddings will be used for; the model returns embeddings optimized for that task. None indicates that no specific task is required. | Default value: None. Type: FreeText. Allowed values: 'retrieval.query', 'retrieval.passage', 'separation', 'classification', 'text-matching', or None. | No |
| parameters.late_chunking | Apply the late chunking technique to leverage the model's long-context capabilities for generating contextual chunk embeddings. Reference: https://jina.ai/news/late-chunking-in-long-context-embedding-models | Default value: false. Type: Boolean. | No |
| parameters.dimensions | Output embedding dimensionality. Smaller dimensions are easier to store and retrieve, with minimal performance impact thanks to MRL (Matryoshka Representation Learning). | Default value: 1024. Type: Integer. Minimum: 1. Maximum: 1024. | No |
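Batch transform, the mode most of the pricing table refers to, can be sketched with the same SageMaker Python SDK `model` object from the deployment example above; the S3 paths are placeholders, and the input files are assumed to follow the JSON format shown under Summary.

```python
# Reuses the `model` object created in the deployment sketch.
transformer = model.transformer(
    instance_count=1,
    instance_type="ml.g5.xlarge",
    output_path="s3://<your-bucket>/jina-embeddings-v3/output/",  # placeholder
)

transformer.transform(
    data="s3://<your-bucket>/jina-embeddings-v3/input/",  # placeholder prefix
    content_type="application/json",
)
transformer.wait()
```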
Support
Vendor support
We provide support for this model package through our enterprise support channel.
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities to successfully use the products and features provided by Amazon Web Services.