
Stable Diffusion - Create Stunning Images on Your Linux Cloud GPU Server

By: NI SP - High-End Remote Desktop and HPC Latest Version: Stable_Diffusion_XL_Automatic_VLAD_WEBUI_Nerf_UBU22_V33
Linux/Unix

Product Overview

Stable Diffusion lets you render stunning images from text or image input, independently on your own AWS cloud GPU server and with great performance. NeRF neural networks create 3D scenes from videos and images. Runs on the Ubuntu 22 operating system.

AUTOMATIC Stable Diffusion


Stable Diffusion creates images similar to Midjourney or OpenAI DALL-E. AUTOMATIC is a feature-rich Stable Diffusion web UI bundle for creating beautiful images yourself.

Supports txt2img as well as img2img, to create impressive images based on other images with a guidance prompt controlling the influence on the generated image (see the sketch below).
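For illustration, here is a minimal sketch of the two modes using the Hugging Face diffusers library; the model ID, prompts and parameter values are placeholders rather than AMI defaults, and on the AMI itself you would normally use these modes through the AUTOMATIC web UI.

```python
# Minimal txt2img / img2img sketch with Hugging Face diffusers (illustrative only;
# model ID, prompts and parameter values are assumptions, not AMI defaults).
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # placeholder model ID

# txt2img: generate an image from a text prompt alone.
txt2img = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
image = txt2img("a castle on a cliff at sunset, dramatic lighting").images[0]
image.save("castle.png")

# img2img: start from an existing image; strength controls how much the prompt
# overrides the input, guidance_scale how closely the prompt is followed.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
init = Image.open("castle.png").convert("RGB").resize((512, 512))
variant = img2img(
    prompt="the same castle in winter, covered in snow",
    image=init,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
variant.save("castle_winter.png")
```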

Leverages the AUTOMATIC Stable Diffusion bundle and GUI, including built-in upscaling (ESRGAN, LDSR, ...), face restoration (GFPGAN, CodeFormer, ...), inpainting, outpainting and many other features.

Supported versions: Stable Diffusion 1.4, 2.0 and 2.1.
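The AUTOMATIC web UI also exposes a REST API when it is started with the --api flag (whether the AMI enables this by default may vary; treat the host, port and payload fields below as assumptions). A minimal sketch of a txt2img request:

```python
# Minimal sketch of a txt2img call against the AUTOMATIC web UI REST API.
# Assumes the UI runs locally on port 7860 with --api enabled.
import base64
import requests

payload = {
    "prompt": "portrait photo of an astronaut, studio lighting",
    "negative_prompt": "blurry, low quality",
    "steps": 25,
    "width": 512,
    "height": 512,
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=300)
resp.raise_for_status()

# The API returns generated images as base64-encoded PNGs.
for i, img_b64 in enumerate(resp.json()["images"]):
    with open(f"output_{i}.png", "wb") as f:
        f.write(base64.b64decode(img_b64))
```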

Deforum Stable Diffusion


Create image sequences and videos automatically with Deforum Stable Diffusion. Please find more background in our guide linked to the left. Example: https://www.youtube.com/watch?v=SEXbfni0nRc. Also available as an extension in the primary AUTOMATIC version of Stable Diffusion.

NeRF - Create 3D Scenes with Neural Networks


NeRFs use neural networks to represent and render realistic 3D scenes from an input collection of 2D images. The images can be sampled automatically from a video, or come from a collection of videos. Please find more background in our guide linked to the left. Supports Instant-NGP from NVIDIA as well as Nerfstudio, which integrates different NeRF technologies. A typical workflow is sketched below.
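As a rough sketch of a typical Nerfstudio workflow (file names and the chosen method are placeholders; the standard ns-process-data and ns-train CLI commands are assumed to be available on the instance):

```python
# Sketch of a Nerfstudio video-to-NeRF workflow driven from Python.
# Paths and method choice ("nerfacto") are illustrative assumptions.
import subprocess

# 1. Sample frames from a video and estimate camera poses.
subprocess.run(
    ["ns-process-data", "video", "--data", "scene.mp4", "--output-dir", "scene_data"],
    check=True,
)

# 2. Train a NeRF on the processed frames.
subprocess.run(["ns-train", "nerfacto", "--data", "scene_data"], check=True)
```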

Supports T4 GPUs with 16 GB of VRAM (g4dn family) and powerful A10 GPUs with 24 GB (g5 family) for large image rendering.
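To confirm which GPU and how much VRAM your instance exposes, a quick check with PyTorch (purely illustrative) looks like this:

```python
# Print the GPU model and total VRAM visible to PyTorch (e.g. a 16 GB T4 on
# g4dn or a 24 GB A10-class GPU on g5 instances).
import torch

props = torch.cuda.get_device_properties(0)
print(f"GPU: {props.name}, VRAM: {props.total_memory / 1024**3:.1f} GiB")
```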

Uses DCV from AWS to provide a high-end remote desktop. You can upload and download the images you create via the DCV interface.

If you prefer Windows as your OS, please check out our other Stable Diffusion Windows Marketplace offer.

This is a collaborative project of NI SP and AI SP.

More background on Stable Diffusion and license:

Stable Diffusion is a latent text-to-image diffusion model. Thanks to a generous compute donation from Stability AI and support from LAION, its authors were able to train a Latent Diffusion Model on 512x512 images from a subset of the LAION-5B database. Similar to Google's Imagen, this model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 10 GB of VRAM. See the model card for details. Stable Diffusion was trained on AWS GPU servers.

Stable Diffusion v1 refers to a specific configuration of the model architecture that uses a downsampling-factor 8 autoencoder with an 860M UNet and CLIP ViT-L/14 text encoder for the diffusion model. The model was pretrained on 256x256 images and then finetuned on 512x512 images.
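As an illustration of these sizes, the sketch below loads the CompVis v1-4 weights with the diffusers library and counts the parameters of the UNet and text encoder, which should land close to the 860M and 123M figures quoted above (downloading the weights requires accepting the license on Hugging Face first; the model ID is an assumption):

```python
# Count parameters of the main Stable Diffusion v1 components (illustrative sketch).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)

def millions(module: torch.nn.Module) -> float:
    return sum(p.numel() for p in module.parameters()) / 1e6

print(f"UNet:         {millions(pipe.unet):.0f}M parameters")
print(f"Text encoder: {millions(pipe.text_encoder):.0f}M parameters")
print(f"VAE:          {millions(pipe.vae):.0f}M parameters")
```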

Note: Stable Diffusion v1 is a general text-to-image diffusion model and therefore mirrors biases and (mis-)conceptions that are present in its training data. Details on the training procedure and data, as well as the intended use of the model can be found in the corresponding model card.

The weights are available via the CompVis organization at Hugging Face under a license which contains specific use-based restrictions to prevent misuse and harm as informed by the model card, but otherwise remains permissive. While commercial use is permitted under the terms of the license, we do not recommend using the provided weights for services or products without additional safety mechanisms and considerations, since there are known limitations and biases of the weights, and research on safe and ethical deployment of general text-to-image models is an ongoing effort. The weights are research artifacts and should be treated as such.

The CreativeML OpenRAIL-M license is an Open RAIL-M license, adapted from the work that BigScience and the RAIL Initiative are jointly carrying out in the area of responsible AI licensing. See also the article about the BLOOM Open RAIL license on which our license is based.

Version

Stable_Diffusion_XL_Automatic_VLAD_WEBUI_Nerf_UBU22_V33

Operating System

Linux/Unix, Ubuntu 22.04

Delivery Methods

  • Amazon Machine Image
