Posted On: May 17, 2024
We are excited to announce that Knowledge Bases for Amazon Bedrock now lets you configure inference parameters, giving you greater control over the responses generated by a foundation model (FM).
With this launch, you can optionally set inference parameters to control properties such as the randomness and length of the response generated by the foundation model. You can control how random or diverse the generated text is by adjusting settings such as temperature and top-p. The temperature setting makes the model more or less likely to choose unusual or unexpected words; a lower temperature produces more expected, common word choices. The top-p setting limits how many word options the model considers; reducing it restricts consideration to a smaller set of word choices and makes the output more conventional.
In addition to randomness and diversity, you can restrict the length of the foundation model's output through the maxTokens and stopSequences settings. Use the maxTokens setting to specify the maximum number of tokens to return in the generated response. Finally, the stopSequences setting lets you configure strings that signal the model to stop generating further tokens.
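As an illustration, here is a minimal Python (boto3) sketch of passing these parameters through the RetrieveAndGenerate API via the textInferenceConfig structure; the knowledge base ID, model ARN, query, and parameter values are placeholders to replace with your own:

```python
import boto3

# Bedrock Agent Runtime client; region is a placeholder.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "Summarize our refund policy."},  # placeholder query
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB12345678",  # placeholder knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ARN
            "generationConfiguration": {
                "inferenceConfig": {
                    "textInferenceConfig": {
                        "temperature": 0.2,   # lower = more expected, common word choices
                        "topP": 0.9,          # consider only the most probable word options
                        "maxTokens": 512,     # cap the length of the generated response
                        "stopSequences": ["\nObservation"],  # stop generating at this string
                    }
                }
            },
        },
    },
)

print(response["output"]["text"])
```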
The inference parameters capability within Knowledge Bases is now available in the Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), US East (N. Virginia), and US West (Oregon) Regions. To learn more, refer to the Knowledge Bases for Amazon Bedrock documentation. To get started, visit the Amazon Bedrock console or use the RetrieveAndGenerate API.