Get started with Amazon SageMaker JumpStart

Overview

Amazon SageMaker JumpStart is a machine learning (ML) hub that can help accelerate your path to ML. Learn how to get started with built-in algorithms using pretrained models from model hubs, pretrained foundation models, and prebuilt solutions for common use cases. To get started, see the documentation or the example notebooks that you can run quickly.
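
To illustrate the path from this catalog to a running endpoint, here is a minimal sketch using the SageMaker Python SDK's JumpStartModel class. The model ID, payload fields, and EULA handling shown are assumptions to verify against the card of the model you pick; the sketch also assumes the sagemaker package is installed and your AWS role can create endpoints.

# Minimal sketch (not an official recipe): deploy a JumpStart model and
# query the endpoint with the SageMaker Python SDK.
from sagemaker.jumpstart.model import JumpStartModel

# Illustrative model ID; the exact ID is shown on each model's card.
model = JumpStartModel(model_id="meta-textgeneration-llama-3-8b-instruct")

# Gated models such as the Meta Llama family require accepting the EULA.
predictor = model.deploy(accept_eula=True)

# Common text-generation payload shape; individual models may expect
# different fields or generation parameters.
response = predictor.predict({
    "inputs": "What is Amazon SageMaker JumpStart?",
    "parameters": {"max_new_tokens": 128, "temperature": 0.2},
})
print(response)

# Delete the endpoint when finished to avoid ongoing charges.
predictor.delete_endpoint()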

Product type
Text tasks
Vision tasks
Tabular tasks
Audio tasks
Multimodal
Reinforcement learning
  • foundation model

    Featured
    Text Generation

    Meta-Llama-3-70B-Instruct

    Meta
70B instruction-tuned variant of Llama 3 models. Llama 3 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
Fine-tunable (see the fine-tuning sketch after this list)
  • foundation model

    Featured
    Text Generation

    Meta-Llama-3.1-405B-FP8

    Meta
405B variant of Llama 3.1 models. Llama 3.1 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Meta-Llama-3.1-8B-Instruct

    Meta
8B instruction-tuned variant of Llama 3.1 models. Llama 3.1 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Meta-Llama-3.1-70B-Instruct

    Meta
70B instruction-tuned variant of Llama 3.1 models. Llama 3.1 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Meta Llama 3.2 3B Instruct

    Meta
3B instruction-tuned variant of Llama 3.2 models. Llama 3.2 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Meta Llama 3.2 1B Instruct

    Meta
1B instruction-tuned variant of Llama 3.2 models. Llama 3.2 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Vision Language

    Meta Llama 3.2 11B Vision Instruct

    Meta
11B instruction-tuned variant of Llama 3.2 models that supports both text and image as input.
    Deploy only
  • foundation model

    Featured
    Text Generation

    Llama 2 13B

    Meta
    13B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Llama 3

    Meta

Llama 3 from Meta comes in two parameter sizes, 8B and 70B, each with an 8K context length, and can support a broad range of use cases with improvements in reasoning, code generation, and instruction following.
Deploy only
  • foundation model

    Featured
    Text Generation

    Llama 2 70B

    Meta
    70B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Llama 2 7B

    Meta
    7B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Llama 2 70B Chat

    Meta
    70B dialogue use case optimized variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable
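
Several of the cards above are labeled Fine-tunable. As a companion to the deployment sketch earlier, here is a minimal fine-tuning sketch using the SageMaker Python SDK's JumpStartEstimator class. The model ID, S3 URI, channel name, and EULA environment variable are illustrative assumptions; each model's card documents the dataset format and hyperparameters it expects.

# Minimal sketch (not an official recipe): fine-tune a JumpStart model and
# deploy the resulting artifacts with the SageMaker Python SDK.
from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-2-7b",  # illustrative model ID
    environment={"accept_eula": "true"},        # gated models require EULA acceptance
)

# Placeholder S3 location; JumpStart text-generation models typically read a
# "training" channel whose format (for example JSON Lines) is described on the card.
estimator.fit({"training": "s3://amzn-s3-demo-bucket/llama2-finetune/train/"})

# Deploy the fine-tuned model to a real-time endpoint for inference.
predictor = estimator.deploy()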