Let’s create an AI NSFW (Not Safe for Work) image detector using Python and FastAPI as a microservice that keeps your application safe and family-friendly. You can integrate the NSFW detector into any application you want via its RESTful API.
We will use the “Falconsai/nsfw_image_detection” pre-trained model. This model is designed specifically for NSFW image classification, allowing it to filter out explicit or inappropriate content.
What will we do?
- Creating a FastAPI web application.
- Creating an API endpoint to upload an image and classify it.
Setting up the FastAPI App
We will create a Python virtual environment for our microservice and install FastAPI. We will set up a basic web endpoint at the root path (/) that returns a simple welcome message.
python -m venv .venv
source .venv/bin/activate  # on Windows: .venv\Scripts\activate
pip install "fastapi[standard]"
Create a main.py file and add the following code to test our app.
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
def read_root():
    return {"Hello": "World", "description": "NSFW Image Detection API"}
from fastapi import FastAPI: Imports the necessary FastAPI class.
app = FastAPI(): Creates an instance of the FastAPI application, which is the main object for handling all API functions.
@app.get("/"): This is a decorator that tells FastAPI that the function immediately following it (read_root) should be run when a user accesses the root URL path (/) using an HTTP GET request (the standard way to view a webpage or fetch data).
def read_root():: Defines the function that handles the request.
return {"Hello": "World", "description": "NSFW Image Detection API"}: When a user hits the / endpoint, the API will return this JSON (JavaScript Object Notation) dictionary.
Let’s start our server to make sure everything is fine.
fastapi dev main.py
Open http://localhost:8000 in your browser; you should see something like {"Hello":"World","description":"NSFW Image Detection API"}
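If you prefer to check from code, here is a minimal sketch using the requests library (assuming the dev server is running on port 8000, and that requests is installed):

import requests

# Hit the root endpoint of the local dev server
response = requests.get("http://localhost:8000/")
print(response.json())
# Expected: {'Hello': 'World', 'description': 'NSFW Image Detection API'}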
Installing the NSFW Image Detection Model
First, let’s install the dependencies required to run the model.
pip install pillow transformers torch torchvision
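Before wiring the model into our API, we can sanity-check it on its own. A minimal sketch, assuming a local test image (“test.jpg” is a placeholder path; the first run downloads the model weights from Hugging Face):

from PIL import Image
from transformers import pipeline

# Load the pre-trained NSFW classifier (weights are downloaded on first use)
classifier = pipeline("image-classification", model="Falconsai/nsfw_image_detection")

# Classify a local image; "test.jpg" is a placeholder path
image = Image.open("test.jpg")
print(classifier(image))
# Expected shape: [{'label': 'normal', 'score': ...}, {'label': 'nsfw', 'score': ...}]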
We are now ready to create an API endpoint that accepts an uploaded image and classifies it to detect NSFW content.
import io

from fastapi import FastAPI, File, UploadFile, HTTPException
from PIL import Image
from transformers import pipeline

app = FastAPI()

# Initialize the NSFW classifier pipeline once at startup
classifier = pipeline("image-classification", model="Falconsai/nsfw_image_detection")

# ... root endpoint from the previous step goes here ...

@app.post("/classify-image")
async def classify_image(file: UploadFile = File(...)):
    """
    Classify an uploaded image to detect NSFW content
    """
    try:
        # Validate file type
        if not file.content_type.startswith("image/"):
            raise HTTPException(status_code=400, detail="File must be an image")

        # Read and process the image
        contents = await file.read()
        image = Image.open(io.BytesIO(contents))

        # Convert to RGB if necessary (some images might be in different modes)
        if image.mode != "RGB":
            image = image.convert("RGB")

        # Classify the image
        results = classifier(image)

        return {
            "filename": file.filename,
            "predictions": results,
            "is_nsfw": any(
                pred["label"].lower() in ["nsfw", "porn", "explicit"] and pred["score"] > 0.5
                for pred in results
            ),
        }
    except HTTPException:
        # Re-raise validation errors as-is instead of converting them to 500s
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Error processing image: {str(e)}")
The code performs the following key steps:
Imports: It imports the necessary libraries for the web framework (FastAPI), file handling (UploadFile, io), image processing (PIL.Image), and the machine learning model (transformers.pipeline).
Model Initialization: It initializes a pre-trained image classification model named Falconsai/nsfw_image_detection and assigns it to the classifier pipeline.
API Endpoint (/classify-image): It defines an asynchronous POST endpoint that accepts an image file as input (UploadFile).
It includes a try...except block that re-raises HTTPExceptions (such as the 400 validation error below) unchanged and converts any other error into a 500 Internal Server Error.
Validation: It first checks if the uploaded file is actually an image. If not, it returns a 400 Bad Request error.
Image Processing: It reads the file contents, opens it as a PIL Image object, and ensures the image is converted to the RGB format for model compatibility.
Classification: It passes the processed image to the pre-trained classifier model to get predictions (labels and confidence scores).
Response: It returns the filename, the raw predictions, and a simplified boolean field is_nsfw which is set to True if any prediction related to NSFW/Porn/Explicit content has a confidence score greater than 50% (0.5).
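To exercise the endpoint from another application, here is a minimal client sketch using the requests library (“photo.jpg” is a placeholder path, and the server is assumed to be running on localhost:8000):

import requests

# Send the image as multipart/form-data; the form field must be named "file"
with open("photo.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:8000/classify-image",
        files={"file": ("photo.jpg", f, "image/jpeg")},
    )

print(response.status_code)
print(response.json())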
Response body:
{
  "filename": "1fa3b6ad44e71a33685ba9126ba18224.jpg",
  "predictions": [
    {
      "label": "normal",
      "score": 0.8291031718254089
    },
    {
      "label": "nsfw",
      "score": 0.17089684307575226
    }
  ],
  "is_nsfw": false
}
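The 0.5 cutoff behind is_nsfw is a design choice, not part of the model. If your application needs a stricter policy, you can apply your own threshold to the raw predictions; a hypothetical helper (the function name and the 0.2 threshold are illustrative, not part of the API):

def is_image_nsfw(predictions: list, threshold: float = 0.2) -> bool:
    """Flag an image when the 'nsfw' score exceeds the threshold."""
    # The response above shows the model returns "normal" and "nsfw" labels
    nsfw_score = next(
        (p["score"] for p in predictions if p["label"].lower() == "nsfw"), 0.0
    )
    return nsfw_score > threshold

Lowering the threshold trades more false positives for a safer feed; tune it to your application’s tolerance.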
Restart the server and open http://localhost:8000/docs to try the /classify-image endpoint from the interactive form, or send a POST request to http://localhost:8000/classify-image with Postman or any other HTTP client.
Here is the full code:
import io

from fastapi import FastAPI, File, UploadFile, HTTPException
from PIL import Image
from transformers import pipeline

app = FastAPI()

# Initialize the NSFW classifier pipeline once at startup
classifier = pipeline("image-classification", model="Falconsai/nsfw_image_detection")


@app.get("/")
def read_root():
    return {"Hello": "World", "description": "NSFW Image Detection API"}


@app.post("/classify-image")
async def classify_image(file: UploadFile = File(...)):
    """
    Classify an uploaded image to detect NSFW content
    """
    try:
        # Validate file type
        if not file.content_type.startswith("image/"):
            raise HTTPException(status_code=400, detail="File must be an image")

        # Read and process the image
        contents = await file.read()
        image = Image.open(io.BytesIO(contents))

        # Convert to RGB if necessary (some images might be in different modes)
        if image.mode != "RGB":
            image = image.convert("RGB")

        # Classify the image
        results = classifier(image)

        return {
            "filename": file.filename,
            "predictions": results,
            "is_nsfw": any(
                pred["label"].lower() in ["nsfw", "porn", "explicit"] and pred["score"] > 0.5
                for pred in results
            ),
        }
    except HTTPException:
        # Re-raise validation errors as-is instead of converting them to 500s
        raise
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Error processing image: {str(e)}")
