Overview
TabPFN-2.5 by Prior Labs is the world's leading Tabular Foundation Model. It ranks first on the popular TabArena benchmark for classification and regression tasks, outperforming tree-based models and ensembles tuned for more than 4 hours, on datasets with up to 50,000 samples and 2,000 features. TabPFN is a pretrained transformer trained on hundreds of millions of synthetic prediction tasks, allowing it to generalize across thousands of use cases in a single forward pass. This enables fast, accurate predictions with minimal preprocessing. The model handles mixed feature types (text, numerical, categorical), missing values, uninformative features, and outliers. It is an ideal default model for teams seeking reliable performance without costly tuning or retraining cycles. In addition to classification, regression, and time-series tasks, TabPFN can be used for unsupervised workflows such as synthetic data generation, uncertainty estimation, and learning tabular embeddings. TabPFN-2.5 is the third generation of the TabPFN models previously published in Nature. This TabPFN-2.5 model package is free to use under the non-commercial conditions specified in the model license.
Highlights
- Achieve state-of-the-art classification and regression performance in seconds. TabPFN-2.5 removes the need for model selection and hyperparameter tuning, delivering fast and accurate predictions with minimal setup.
- Applies to structured data across industries, including healthcare, finance, manufacturing, and energy. Proven across thousands of real-world use cases and diverse tabular datasets.
- No retraining needed. Update the model context with new data and get updated predictions immediately.
Details
Pricing
Vendor refund policy
Using TabPFN-2.5 on Amazon SageMaker is free of charge for non-commercial use. Because the model itself does not incur any licensing or usage fees, no refunds are provided for any costs incurred while using this product, including but not limited to charges for AWS compute instances, storage, networking, or any other AWS infrastructure used to run or host the model.
If you have questions about this policy or need assistance, you can contact Prior Labs at: hello@priorlabs.ai
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
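For teams working with the SageMaker Python SDK, the sketch below shows one way to turn a subscribed model package into a real-time endpoint. The model package ARN, IAM role, endpoint name, and instance type are placeholders for illustration, not values taken from this listing.

```python
# Minimal sketch (SageMaker Python SDK): create and deploy a model from a
# subscribed model package. ARN, role, and instance type are placeholders.
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()

model = ModelPackage(
    role="arn:aws:iam::111122223333:role/MySageMakerExecutionRole",  # placeholder IAM role
    model_package_arn="arn:aws:sagemaker:eu-central-1:111122223333:model-package/example-tabpfn-2-5",  # placeholder ARN
    sagemaker_session=session,
)

# Deploy a real-time inference endpoint; pick an instance type suited to your workload.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",  # placeholder instance type
    endpoint_name="tabpfn-2-5",    # placeholder endpoint name
)
```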
Version release notes
This initial release introduces TabPFN-2.5 on Amazon SageMaker, bringing state-of-the-art tabular prediction capabilities to AWS customers. This version includes full support for classification and regression tasks on mixed tabular data (numerical, categorical, missing values) without any model selection, training, or hyperparameter tuning. It supports datasets up to 50,000 rows and 2,000 features, enabling high-performance predictions across a wide variety of real-world workloads.
Additional details
Inputs
- Summary
You can submit inference requests to TabPFN-2.5 using two supported input formats:
- application/json - a JSON-encoded request body
- multipart/form-data - with a single Parquet file
Both formats must stay within the 25 MB SageMaker payload limit. Because Parquet is compressed, multipart requests allow more rows or features to fit into the same limit.
Each request may include optional model_params (to configure how the underlying estimator runs) and predict_params (to control the output format of predictions). These parameters follow the same structure in both JSON and multipart inputs.
See the example requests and field descriptions below.
- Limitations for input type
- All request payloads, regardless of their content type, must be within 25 MB. Larger requests are automatically rejected by Amazon SageMaker.
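As an illustration of the application/json format, the sketch below sends a small classification request to a deployed endpoint with boto3. The data field names (`x_train`, `y_train`, `x_test`) and the placement of `task` in the payload are assumptions for illustration; consult the vendor examples for the exact schema. The `model_params` fields used here are described in the table that follows.

```python
# Minimal sketch (boto3): an application/json inference request.
# Data field names and the placement of "task" are illustrative assumptions.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    "task": "classification",                 # assumed top-level placement
    "x_train": [[5.1, 3.5, "red"], [6.7, 3.0, "blue"]],   # assumed field name
    "y_train": [0, 1],                                    # assumed field name
    "x_test": [[5.9, 3.2, "red"]],                        # assumed field name
    "model_params": {                         # estimator configuration (see table below)
        "n_estimators": 8,
        "softmax_temperature": 0.9,
    },
    "predict_params": {},                     # output-format options, defaults here
}

response = runtime.invoke_endpoint(
    EndpointName="tabpfn-2-5",                # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))
```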
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| task | The inference task. | Supported: "classification", "regression". | Yes |
| n_estimators | The number of estimators in the TabPFN ensemble (default 8). Predictions are aggregated across `n_estimators` forward passes of TabPFN. | - | No |
| categorical_features_indices | The indices of the columns to treat as categorical (default None). If `None`, the model infers the categorical columns. | - | No |
| softmax_temperature | The temperature for the softmax function (default 0.9), used to control the confidence of the model's predictions. Lower values make the predictions more confident. | Must be greater than 0. | No |
| average_before_softmax | Whether to average the predictions of the estimators before applying the softmax function (default False). Only used if `n_estimators > 1`. | - | No |
| ignore_pretraining_limits | Whether to ignore the pre-training limits of the model (default False). The TabPFN models were pre-trained on a specific range of input data; the model may not perform well on data outside that range. | - | No |
| inference_precision | The precision to use for inference (default "auto"). This can dramatically affect the speed and reproducibility of inference; higher precision can improve reproducibility at the cost of speed. | - | No |
| fit_mode | Determines how the TabPFN model is fitted, i.e., how the data is preprocessed and cached for inference. This is unique to an in-context learning foundation model like TabPFN, where "fitting" is technically the forward pass of the model. | Supported: "low_memory", "fit_preprocessors", "fit_with_cache", "batched". | No |
| memory_saving_mode | Enable GPU/CPU memory saving mode (default "auto"). This can both avoid out-of-memory errors and improve fit+predict speed by reducing memory pressure. | - | No |
| random_state | Controls the randomness of the model. Pass an int for reproducible results; see the scikit-learn glossary for more information. | - | No |
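Because Parquet compresses well, larger tables often fit more comfortably in a multipart/form-data request. The sketch below builds such a request with pandas and requests_toolbelt; the form field names ("file", "model_params") and the way parameters are attached are assumptions for illustration, not a documented schema.

```python
# Minimal sketch (boto3 + requests_toolbelt): a multipart/form-data request
# carrying a single Parquet file. Form field names are illustrative assumptions.
import io
import json

import boto3
import pandas as pd
from requests_toolbelt.multipart.encoder import MultipartEncoder

# Serialize the table to Parquet in memory; compression keeps the payload small.
df = pd.DataFrame({"age": [34, 51, 29], "income": [42000, 88000, 37000], "label": [0, 1, 0]})
buffer = io.BytesIO()
df.to_parquet(buffer, index=False)
buffer.seek(0)

encoder = MultipartEncoder(
    fields={
        "file": ("data.parquet", buffer, "application/octet-stream"),          # assumed field name
        "model_params": json.dumps({"task": "classification", "n_estimators": 8}),  # assumed field name
    }
)

runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="tabpfn-2-5",           # placeholder endpoint name
    ContentType=encoder.content_type,    # multipart/form-data with boundary
    Body=encoder.to_string(),
)
print(response["Body"].read())
```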
Resources
Vendor resources
Support
Vendor support
For general support or license inquiries, reach out to us at hello@priorlabs.ai
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.