Modly AI
Modly AI is a focused, AI-powered content moderation API that helps Web3 platforms review and flag inappropriate website content before it is published to IPFS. It analyzes both text and image links within HTML files to detect violations across key moderation categories.
Key Features:
HTML Content Moderation: Scans submitted website files for inappropriate or harmful text and image links, ensuring content quality before IPFS publishing.
Category-Based Detection: Flags content across the currently supported categories:
Adult/NSFW Content
Violence
Hate Speech
Graphic or Disturbing Imagery
Harassment
Illegal Substances
Nudity
Text & Image Analysis:
Text is analyzed for tone, profanity, harassment, and other violations.
Image links (via <img> tags) are passed to vision models for nudity and graphic content detection.
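The first step of the image pipeline, collecting <img> src links from a submitted HTML file, can be sketched with Python's standard-library parser. This is an illustrative sketch, not Modly AI's internal implementation; the class name and sample markup are assumptions.

```python
from html.parser import HTMLParser

class ImgLinkCollector(HTMLParser):
    """Collects the src attribute of every <img> tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.links.append(src)

html = '<html><body><p>Hello</p><img src="https://example.com/a.png"><img alt="no src"></body></html>'
collector = ImgLinkCollector()
collector.feed(html)
print(collector.links)  # ['https://example.com/a.png']
```

Each collected URL would then be handed to a vision model for nudity and graphic content checks; tags without a src attribute are simply skipped.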
Severity Scoring & Insights: Every flagged issue includes:
Category (e.g., harassment, profanity)
Severity level (low, medium, high)
Confidence score
A relevant excerpt or image URL
Structured JSON Output: The moderation response is cleanly formatted for easy frontend rendering or decision-making logic in your dApp.
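As an illustration of how the structured output might drive publish/block decisions, here is a minimal sketch. The response shape below is a hypothetical example built from the fields listed above (category, severity, confidence, excerpt); the actual field names in Modly AI's schema may differ.

```python
import json

# Hypothetical example response; field names are illustrative assumptions,
# not the documented Modly AI schema.
raw = """
{
  "flags": [
    {"category": "harassment", "severity": "high", "confidence": 0.94,
     "excerpt": "sample flagged text"},
    {"category": "profanity", "severity": "low", "confidence": 0.71,
     "excerpt": "sample flagged text"}
  ]
}
"""

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2}

def should_block(response: dict, threshold: str = "high") -> bool:
    """Block publishing if any flag meets or exceeds the severity threshold."""
    limit = SEVERITY_RANK[threshold]
    return any(SEVERITY_RANK[f["severity"]] >= limit for f in response["flags"])

response = json.loads(raw)
print(should_block(response))  # True: one flag is "high" severity
```

A dApp could lower the threshold to "medium" for stricter communities, or surface the excerpt and confidence score to a human reviewer instead of blocking outright.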
Who It's For
Modly AI is built for:
Web3 website builders
IPFS-based publishing platforms
DAO tools
Community publishing interfaces
Any dApp aiming to maintain ethical content standards before publishing permanently on-chain
Modly AI helps decentralized platforms uphold ethical publishing standards, reduce exposure to harmful content, and provide users with a safer, more trustworthy content experience.