This blog post will guide you through the process of implementing an AI-powered image moderation feature for your application. We will delve into how Social+’s technology, in partnership with AWS, can be utilized to scan and moderate images for inappropriate or offensive content.
Furthermore, we will cover how to enable and disable this feature, and how to set the confidence level for each moderation category in the Social+ Console. Lastly, we will explore some use cases of this technology. Join us as we explore how to create safer online spaces with AI!
Pre-requisites
Before we dive into the steps, ensure you have the following:
- A Social+ Portal account
- A Social+ Console account
- A UI or access to Social+ UI Kits
Note: If you haven’t already registered for a Social+ account, we recommend following our comprehensive step-by-step guide in the Social+ Portal to create your new network.
Understanding Image Moderation
Image moderation is a feature that scans and moderates every image uploaded in posts and messages for inappropriate, offensive, and unwanted content before it is published. This is achieved by leveraging Social+’s technology, in partnership with AWS, which detects and moderates images containing nudity, suggestive content, violence, or disturbing imagery. This allows you to create a safer online community for your users without requiring any human intervention.
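Social+ performs this scanning server-side, so you never call the detection API yourself. To make the mechanics concrete, here is a minimal sketch of how an image can be screened for moderation labels with AWS Rekognition, using the AWS SDK for JavaScript v3. This is an assumption about the kind of backend the AWS partnership implies, not documentation of Social+’s actual internals:

```typescript
import {
  RekognitionClient,
  DetectModerationLabelsCommand,
} from "@aws-sdk/client-rekognition";
import { readFile } from "node:fs/promises";

// Illustrative only: Social+ runs this kind of check for you server-side.
const client = new RekognitionClient({ region: "us-east-1" });

async function scanImage(path: string) {
  const bytes = await readFile(path);
  const response = await client.send(
    new DetectModerationLabelsCommand({
      Image: { Bytes: bytes },
      // Only return labels detected with at least this confidence (0-100).
      MinConfidence: 50,
    })
  );
  // Each label carries a Name (e.g. "Explicit Nudity"), a Confidence score,
  // and a ParentName linking it to a top-level category.
  return response.ModerationLabels ?? [];
}
```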
Enabling Image Moderation
By default, image moderation is disabled. To enable it, follow these steps:
- Log in to your Social+ Console
- Navigate to Settings > Image Moderation
- Toggle “Allow Image Moderation” to “Yes”
Once you’ve enabled image moderation, you will need to set the confidence level for each moderation category.
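From your application’s point of view, nothing else changes: uploads that pass moderation proceed as usual, and uploads that trip a category threshold are rejected by the server. The sketch below shows one hypothetical way to surface that rejection to the user; the `uploadImage` function and the error shape are illustrative placeholders, not the actual Social+ SDK API, so consult the SDK documentation for the real calls:

```typescript
// Hypothetical client-side handling of a moderated upload. `uploadImage`
// stands in for whatever upload call your Social+ SDK or UI Kit exposes.
async function uploadWithFeedback(
  uploadImage: (file: File) => Promise<{ fileId: string }>,
  file: File
): Promise<string | null> {
  try {
    const { fileId } = await uploadImage(file);
    return fileId;
  } catch (err) {
    // If moderation is enabled and the image exceeds a category threshold,
    // the upload is rejected before the image is ever published.
    console.warn("Upload rejected (possibly by image moderation):", err);
    return null;
  }
}
```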
Setting Confidence Levels
By default, the confidence level for each category is set to 0. Leaving any category at 0 will likely cause every uploaded image to be blocked, regardless of whether it actually contains inappropriate content.
Setting a higher confidence threshold yields more accurate detection results. If you specify a confidence value below 50, you are likely to see more false positives than with a higher value, so specify a value below 50 only when lower-confidence detection is acceptable for your use case.
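To see why a threshold of 0 blocks nearly everything, consider how a per-category threshold would be applied to detected labels. The decision logic below is an assumption about how such thresholds typically work, not Social+’s documented implementation:

```typescript
// Illustrative decision logic: block an image if any detected label's
// confidence meets or exceeds the threshold configured for its category.
interface DetectedLabel {
  name: string;       // e.g. "Explicit Nudity"
  confidence: number; // 0-100: how certain the detector is
}

type Thresholds = Record<string, number>;

function shouldBlock(labels: DetectedLabel[], thresholds: Thresholds): boolean {
  return labels.some((label) => {
    const threshold = thresholds[label.name];
    // A threshold of 0 blocks on ANY detection, however uncertain --
    // which is why leaving every category at 0 tends to block everything.
    return threshold !== undefined && label.confidence >= threshold;
  });
}

// Example: strict on nudity and violence, more lenient on suggestive content.
const thresholds: Thresholds = {
  "Explicit Nudity": 60,
  Suggestive: 80,
  Violence: 60,
  "Visually Disturbing": 70,
};

// A lone "Suggestive" label at confidence 55 passes (55 < 80);
// the same label at confidence 85 would be blocked.
console.log(shouldBlock([{ name: "Suggestive", confidence: 55 }], thresholds)); // false
```

Values in the 50–80 range are a reasonable starting point; raise a category’s threshold if you see too many false positives, and lower it if unwanted images are slipping through.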
Disabling Image Moderation
To disable image moderation, simply toggle “Allow Image Moderation” to “No”. Uploaded images will no longer pass through the image recognition service, so inappropriate content will no longer be detected or blocked.
Factors Considered by Social+
Social+’s image moderation technology considers four main factors (a grouping sketch follows the list):
- Nudity: Detects explicit or suggestive nudity.
- Suggestive: Identifies suggestive content or behavior.
- Violence: Recognizes violent actions or imagery.
- Disturbing: Detects disturbing or frightening content.
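These four factors appear to line up with AWS Rekognition’s top-level moderation categories (“Explicit Nudity”, “Suggestive”, “Violence”, and “Visually Disturbing”); this mapping is our assumption, since Social+ does not document its backend. Rekognition also returns finer-grained second-level labels whose ParentName points at their category, so detections can be grouped by factor like this:

```typescript
import type { ModerationLabel } from "@aws-sdk/client-rekognition";

// Group second-level labels under their top-level category ("factor").
// Top-level labels have an empty ParentName, so they fall back to their own Name.
function groupByFactor(labels: ModerationLabel[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const label of labels) {
    const factor = label.ParentName || label.Name || "Unknown";
    const names = groups.get(factor) ?? [];
    if (label.Name) names.push(label.Name);
    groups.set(factor, names);
  }
  return groups;
}
```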
Use Cases
- Social Media Platforms: Social media platforms can use image moderation to ensure that all user-uploaded content adheres to community guidelines and standards, thereby creating a safer and more inclusive environment for all users.
- Online Marketplaces: Online marketplaces can use image moderation to prevent the listing of inappropriate or offensive items, ensuring a safe and comfortable shopping experience for all users.
Final Thoughts
Harnessing the power of AI for image moderation can significantly enhance the safety and appropriateness of content in online communities. By leveraging Social+’s technology, in partnership with AWS, you can ensure that all uploaded images are scanned and moderated for inappropriate or offensive content, thereby creating a safer online space for your users. Remember, the key to effective image moderation lies in setting the right confidence levels for each moderation category!