Overview
Mellum All is JetBrains' large language model (LLM) optimized for code completion across all programming languages.
Highlights
- LLaMA-like architecture with ~4B parameters.
- Context-aware code completion with fill-in-the-middle.
- Top quality-to-size ratio.
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.g5.2xlarge Inference (Batch) Recommended | Model inference on the ml.g5.2xlarge instance type, batch mode | $0.00 |
| ml.g6e.xlarge Inference (Real-Time) Recommended | Model inference on the ml.g6e.xlarge instance type, real-time mode | $0.00 |
| ml.g5.2xlarge Inference (Real-Time) | Model inference on the ml.g5.2xlarge instance type, real-time mode | $0.00 |
| ml.g6.2xlarge Inference (Real-Time) | Model inference on the ml.g6.2xlarge instance type, real-time mode | $0.00 |
| ml.p4d.24xlarge Inference (Real-Time) | Model inference on the ml.p4d.24xlarge instance type, real-time mode | $0.00 |
| ml.p4de.24xlarge Inference (Real-Time) | Model inference on the ml.p4de.24xlarge instance type, real-time mode | $0.00 |
| ml.p5.48xlarge Inference (Real-Time) | Model inference on the ml.p5.48xlarge instance type, real-time mode | $0.00 |
| ml.p5e.48xlarge Inference (Real-Time) | Model inference on the ml.p5e.48xlarge instance type, real-time mode | $0.00 |
Vendor refund policy
This product is free.
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
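For orientation, the following is a minimal deployment sketch using the SageMaker Python SDK's ModelPackage class. The model package ARN, IAM role, endpoint name, and region shown here are placeholders; substitute the values from your own AWS Marketplace subscription and account.

```python
# Minimal sketch: deploying the Mellum All model package for real-time inference.
# The role ARN, model package ARN, and endpoint name are placeholders.
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerExecutionRole"  # placeholder IAM role
model_package_arn = (
    "arn:aws:sagemaker:us-east-1:123456789012:model-package/placeholder-mellum-all"
)

model = ModelPackage(
    role=role,
    model_package_arn=model_package_arn,
    sagemaker_session=session,
)

# Deploy to one of the supported instance types listed in the pricing table above.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g6e.xlarge",
    endpoint_name="mellum-all-endpoint",  # placeholder endpoint name
)
```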
Version release notes
First version available to enterprise customers.
Additional details
Inputs
- Summary: Model input should be application/json. See the input parameters and sample input data for more information.
- Input MIME type: application/json
Input data descriptions
The following table describes the supported input data fields for real-time inference and batch transform; a sample request follows the table.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| prefix | Code that appears before the cursor. | string | Yes |
| suffix | Code that appears after the cursor. | string | Yes |
| filepath | Relative path of the file being edited, including file name and extension (e.g. 'src/utils/helpers.ts'). | string | Yes |
| context | Additional context items (such as neighbouring files, docs, etc.). | list | No |
| max_length | Maximum number of BPE tokens that may be generated for the completion snippet. | integer | Yes |
| stop_token | String at which generation should stop (exclusive). | string | No |
| use_control | Selects the Cloud-Control mode. Use 'off' if unsure. | "on", "off", "silent" | Yes |
| generate_indents | Whether the model should emit indentation characters. | boolean | No |
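Based on the fields above, a request to a deployed endpoint might look like the sketch below. The endpoint name and the code fragments are illustrative placeholders, and the response is printed raw because the output schema is not described in this section.

```python
# Sketch of invoking a deployed Mellum All endpoint with the fields documented above.
# Endpoint name and code fragments are placeholders for illustration only.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    "prefix": "def fibonacci(n):\n    ",        # code before the cursor
    "suffix": "\n\nprint(fibonacci(10))\n",     # code after the cursor
    "filepath": "src/utils/math_helpers.py",    # relative path of the edited file
    "context": [],                              # optional neighbouring files, docs, etc.
    "max_length": 128,                          # cap on generated BPE tokens
    "use_control": "off",                       # Cloud-Control mode; 'off' if unsure
}

response = runtime.invoke_endpoint(
    EndpointName="mellum-all-endpoint",         # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)

# Print the raw response body; consult the product's sample notebook for the exact output format.
print(response["Body"].read().decode("utf-8"))
```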
Support
Vendor support
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities successfully use the products and features provided by Amazon Web Services.