
Overview
LHTM-Opt is an instruction-tuned Japanese large language model developed by alt Inc. It has strong Japanese-language knowledge and can be applied to a variety of NLP tasks. alt Inc. is a venture firm whose mission is to free humankind from non-creative, unproductive labor through the creation of P.A.I.® (Personal Artificial Intelligence) and AI clones.
Lightweight and Deployable: At 7B parameters, our LLM is designed to be lightweight, ensuring ease of deployment.
Benchmark Excellence: LHTM-Opt obtained competitive scores on JGLUE and Rakuda, two benchmarks for Japanese LLMs. These scores attest to our model's understanding, reasoning, and generation capabilities.
Ideal for RAG Applications: LHTM-Opt can enhance question answering systems, content creation tools, and more by providing contextually relevant and coherent responses.
Seamless Integration: Published on AWS Marketplace, our Japanese LLM is ready for immediate deployment.
Highlights
- LHTM-Opt is lightweight and can be deployed with ease.
- LHTM-Opt obtained competitive scores on JGLUE and Rakuda, benchmarks for Japanese LLMs, indicating strong Japanese language understanding and reasoning.
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.p3.2xlarge Inference (Batch), Recommended | Model inference on the ml.p3.2xlarge instance type, batch mode | $1.20 |
| ml.p3.2xlarge Inference (Real-Time), Recommended | Model inference on the ml.p3.2xlarge instance type, real-time mode | $1.20 |
| ml.p3.8xlarge Inference (Batch) | Model inference on the ml.p3.8xlarge instance type, batch mode | $1.20 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $1.20 |
| ml.g4dn.4xlarge Inference (Real-Time) | Model inference on the ml.g4dn.4xlarge instance type, real-time mode | $1.20 |
| ml.g4dn.16xlarge Inference (Real-Time) | Model inference on the ml.g4dn.16xlarge instance type, real-time mode | $1.20 |
| ml.p3.16xlarge Inference (Real-Time) | Model inference on the ml.p3.16xlarge instance type, real-time mode | $1.20 |
| ml.g5.xlarge Inference (Real-Time) | Model inference on the ml.g5.xlarge instance type, real-time mode | $1.20 |
| ml.g5.8xlarge Inference (Real-Time) | Model inference on the ml.g5.8xlarge instance type, real-time mode | $1.20 |
| ml.g5.12xlarge Inference (Real-Time) | Model inference on the ml.g5.12xlarge instance type, real-time mode | $1.20 |
Vendor refund policy
This product is not refundable.
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
Version release notes
Our initial official release!
Additional details
Inputs
- Summary
The model accepts JSON requests that specify the prompt and generation parameters. For chat use, the prompt can be formatted in the Llama 2 chat format.
- Input MIME type
- application/json
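As a minimal sketch, a JSON request body with a Llama 2 chat-formatted prompt could be built as follows. The exact chat template accepted by LHTM-Opt and the Japanese system message below are assumptions based on the listing's mention of "Llama2 chat format"; verify them against the vendor's documentation.

```python
import json

# Assumed Llama 2 chat template; confirm against the vendor docs before use.
def build_request(user_message: str,
                  system_message: str = "あなたは誠実で優秀な日本語アシスタントです。") -> str:
    """Return a JSON request body containing a Llama 2 chat-formatted prompt."""
    prompt = (
        f"<s>[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )
    # Only "prompt" is required; generation parameters are optional.
    return json.dumps({"prompt": prompt, "max_new_tokens": 128}, ensure_ascii=False)

body = build_request("日本の首都はどこですか？")
print(body)
```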
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| prompt | The prompt to be completed. | Type: FreeText | Yes |
| max_new_tokens | The maximum number of tokens to generate, ignoring the number of tokens in the prompt. | Type: Integer; Minimum: 0; Default: 128 | No |
| temperature | The value used to modulate the next-token probabilities. | Type: Continuous; Minimum: 0; Maximum: 2.0; Default: 0.2 | No |
| top_k | The number of highest-probability vocabulary tokens to keep for top-k filtering; -1 keeps all tokens. | Type: Integer; Default: 40 | No |
| top_p | If set to a float < 1, only the smallest set of most-probable tokens whose probabilities add up to top_p or higher is kept for generation. | Type: Continuous; Minimum: 0.0; Maximum: 1.0; Default: 0.9 | No |
| do_sample | Whether or not to use sampling; greedy decoding is used otherwise. | Type: Categorical; Allowed values: true, false; Default: true | No |
| repetition_penalty | The parameter for repetition penalty; 1.0 means no penalty. | Type: Continuous; Default: 1.1 | No |
| skip_prompt | Whether to omit the prompt from the completion. | Type: Categorical; Allowed values: true, false; Default: true | No |
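The defaults and ranges in the table above can be enforced client-side before calling the endpoint. The sketch below is illustrative: the helper name `make_payload` and the validation logic are not part of the product API, and the endpoint remains the source of truth for accepted values.

```python
import json

# Defaults copied from the documented parameter table above.
DEFAULTS = {
    "max_new_tokens": 128,
    "temperature": 0.2,
    "top_k": 40,
    "top_p": 0.9,
    "do_sample": True,
    "repetition_penalty": 1.1,
    "skip_prompt": True,
}

def make_payload(prompt: str, **params) -> str:
    """Merge caller parameters over the defaults and range-check them."""
    merged = {**DEFAULTS, **params}
    if merged["max_new_tokens"] < 0:
        raise ValueError("max_new_tokens must be >= 0")
    if not 0 <= merged["temperature"] <= 2.0:
        raise ValueError("temperature must be in [0, 2.0]")
    if not 0.0 <= merged["top_p"] <= 1.0:
        raise ValueError("top_p must be in [0.0, 1.0]")
    return json.dumps({"prompt": prompt, **merged}, ensure_ascii=False)
```

The resulting JSON string can then be sent as the `Body` of a `sagemaker-runtime` `invoke_endpoint` call with `ContentType` set to `application/json`.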
Resources
Vendor resources
Support
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.