NSFW Check
The NSFW checker is designed to help users avoid generating harmful or offensive content, such as images that are sexually suggestive, violent, or hateful.
To enable this filter, set the `safety_checker` parameter to `true` in the JSON request body.
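For example, a request body with the checker enabled might look like this (the `prompt` field is illustrative; only `safety_checker` is the parameter in question, and other fields will vary by API):

```json
{
  "prompt": "a scenic mountain landscape at sunset",
  "safety_checker": true
}
```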
The checker is trained on a dataset of known NSFW images and can identify and remove most NSFW content from generated images. However, the filter is not perfect and may occasionally miss some NSFW content.
Here are some additional tips for preventing NSFW content generation with Stable Diffusion:
- Use a lower number of diffusion steps. A higher number of diffusion steps can lead to more detailed images, but it can also increase the risk of generating NSFW content.
- Use a higher guidance scale. A higher guidance scale makes the model follow your prompt more closely, which can help reduce the risk of generating NSFW content.
- Use a safe model. Some Stable Diffusion models are specifically trained to avoid generating NSFW content. In addition, the CompVis/stable-diffusion-safety-checker model is a classifier that runs on generated images and flags NSFW content so it can be filtered out before being returned.
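Putting these tips together, here is a minimal sketch of a request body that enables the safety checker alongside a modest step count and a higher guidance scale. The endpoint URL and all field names other than `safety_checker` are assumptions and will vary by API:

```python
import json

# Hypothetical request body combining the tips above; field names
# other than safety_checker are illustrative, not a real API contract.
payload = {
    "prompt": "a watercolor painting of a lighthouse",
    "num_inference_steps": 30,   # lower diffusion step count
    "guidance_scale": 9.5,       # higher guidance scale
    "safety_checker": True,      # enable the NSFW filter
}

body = json.dumps(payload)
print(body)

# Sending the request would look something like the following
# (requires the requests package and a real endpoint URL):
#   requests.post("https://example.com/v1/generate", data=body,
#                 headers={"Content-Type": "application/json"})
```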