Moderate Username and Images

This guide explains how to implement moderation for usernames and images in your application using Stream’s Moderation APIs. You’ll learn how to check usernames before user creation/updates and moderate image content to maintain a safe environment for your users.

Currently, only the JS SDK and .NET SDK support this endpoint; we are working on adding support for other SDKs. If you need this feature in another SDK, please let us know at support@getstream.io.

Usage of this endpoint counts towards your text and image moderation quotas.

Username Moderation

Before creating or updating a user profile, you can check if the username complies with basic moderation rules to help maintain a safe environment for your users.

Check Username

// Check username before user creation/update
const response = await client.moderation.checkUserProfile("user-id", {
  username: "username_to_check",
});

// Response will indicate if the username is acceptable
if (response.recommended_action === "keep") {
  // Username is acceptable, proceed with user creation/update
} else {
  // Username violates moderation rules, ask user to choose a different name
}

Key points about username moderation:

  • The username is checked against the following labels:
    • RACISM
    • HOMOPHOBIA
    • EXTREMISM
    • INSULT
    • MISOGYNY
    • BODY_SHAMING
    • SEXUALLY_EXPLICIT
  • Performs validation only and doesn’t create entries in the moderation dashboard
  • Helps prevent inappropriate usernames before they appear on your platform
  • Counts towards your text moderation quota
  • Custom configuration for username moderation is not currently supported
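
The signup-time check described above can be sketched as a small helper. `isUsernameAllowed` and `handleSignup` are illustrative names, not part of the SDK; only `checkUserProfile` and `recommended_action` come from the API shown earlier.

```javascript
// Illustrative helper: resolves to true when the proposed username passes
// the moderation check. Only `checkUserProfile` and `recommended_action`
// are real API surface; the helper itself is a sketch.
async function isUsernameAllowed(client, userId, username) {
  const response = await client.moderation.checkUserProfile(userId, {
    username,
  });
  return response.recommended_action === "keep";
}

// Example wiring: reject the signup before the user record is created.
async function handleSignup(client, userId, username) {
  if (!(await isUsernameAllowed(client, userId, username))) {
    throw new Error("Please choose a different username.");
  }
  // ...proceed with user creation/update...
}
```

Running the check once, up front, means a rejected name never reaches your user database.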

Profile Image Moderation

You can moderate images using Stream’s AI Image Moderation Engine. This helps detect inappropriate or harmful image content before it’s published on your platform.

Check Profile Images

// Check profile image before allowing upload
const response = await client.moderation.checkUserProfile("user-id", {
  image: "https://example.com/profile.jpg",
});

// Handle the moderation result
if (response.recommended_action === "keep") {
  // Image is safe to use as profile picture
} else {
  // Image violates moderation rules
}
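
One way to act on the result is to fall back to a safe default rather than publishing a rejected image. In this sketch, `updateProfileImage`, the fallback avatar URL, and the return shape are illustrative assumptions; only `checkUserProfile` and `recommended_action` come from the API above.

```javascript
// Sketch: gate a profile-picture update on the moderation result.
// The fallback URL and return shape are assumptions for this example.
async function updateProfileImage(client, userId, imageUrl) {
  const response = await client.moderation.checkUserProfile(userId, {
    image: imageUrl,
  });
  if (response.recommended_action === "keep") {
    return { imageUrl, rejected: false };
  }
  // Image failed moderation: keep a safe default instead.
  return { imageUrl: "https://example.com/default-avatar.png", rejected: true };
}
```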

The AI Image Moderation Engine can detect various types of inappropriate content, including:

  • Explicit content
  • Violence and weapons
  • Hate symbols
  • Other harmful imagery
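
Since `checkUserProfile` takes `username` and `image` in the same options object in the examples above, a full profile update can plausibly be validated in one call. The sketch below assumes the endpoint accepts both fields together in a single request; verify that against the API reference for your SDK version.

```javascript
// Sketch: validate a username and profile image in one request.
// Assumption: checkUserProfile accepts both fields at once, as the
// separate examples in this guide suggest.
async function isProfileAllowed(client, userId, { username, image }) {
  const response = await client.moderation.checkUserProfile(userId, {
    username,
    image,
  });
  return response.recommended_action === "keep";
}
```

Remember that each call still counts towards your text and image moderation quotas.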