Content Moderation API Tools
AI-native content moderation APIs for text, images, and video: toxicity, NSFW, and spam detection.
52 tools in this category

ActiveFence — trust & safety platform with AI content moderation, threat intelligence, and red-teaming.
Aikido Security — Developer security platform combining SAST, SCA, cloud posture, and secrets scanning in a single low-noise tool.
Amity — social & chat SDK with built-in moderation, profanity filters, and image moderation.
App-Ray — Mobile app security testing platform performing automated SAST and DAST on Android and iOS applications.
Appknox — Mobile application security testing platform with automated SAST, DAST, and API security checks for Android and iOS.
Astra Security — Web security suite with WAF, malware scanner, and automated DAST for web applications and APIs.
Azure Video Indexer — video AI with content moderation, transcription, face detection, and scene labeling.
Besedo — Content moderation platform combining AI classification with human review for marketplaces, dating, and UGC platforms.
Bodyguard.ai — real-time text moderation API for social platforms, comments, and chats.
Bright Security — Developer-centric DAST platform with fast, accurate API and web app scanning integrable into CI/CD pipelines.
Factmata (now Cailabs) — content classification API for hate, toxicity, and misinformation in text.
Camb AI — multilingual speech + text moderation API with 150+ language coverage.
Checkmarx — Enterprise SAST, SCA, IaC, and API security scanning platform covering 30+ languages with IDE and CI integration.
Cinder — trust & safety operations platform and API built by ex-Meta/Palantir for content review workflows.
CleanSpeak — Profanity filter and content moderation platform with customizable word lists and user reputation scoring.
DerScanner — Multi-language SAST and SCA platform with binary analysis support for detecting vulnerabilities in 40+ languages.
GGWP — AI trust & safety platform for gaming voice + text chat moderation and player reputation.
Guardsquare — Mobile app shielding platform with proactive code obfuscation, RASP, and threat monitoring for Android and iOS.
Imagga — image recognition API with NSFW, categorization, and custom classifier moderation endpoints.
Jit — Developer security platform orchestrating SAST, SCA, secrets detection, and IaC scanning as a product security plan.
Koko — crisis detection and intervention API for self-harm, suicide, and eating-disorder signals.
Lasso Moderation — content moderation dashboard and API for reviewing user-generated content.
Logically — disinformation and narrative intelligence platform with API for platform integrity.
Loris — customer conversation intelligence with real-time guidance and abusive-language detection.
MobSF — Open-source mobile application security framework for automated SAST and DAST of Android, iOS, and Windows apps.
Modulate ToxMod — proactive voice chat moderation API for games and voice apps.
NowSecure — Automated mobile app security testing platform for Android and iOS with OWASP MASVS coverage and CI/CD integration.
OX Security — End-to-end pipeline security platform tracking artifact integrity and detecting supply chain attacks across the SDLC.
Perspective API — toxicity and attribute scoring API from Jigsaw/Google for text moderation.
PicPurify — image and video moderation API for nudity, violence, drugs, and face detection.
Private AI — PII redaction and classification API for text, audio, images, and documents.
Reality Defender — deepfake and synthetic media detection API for images, video, audio, and text.
Respondology — brand and creator comment moderation across Instagram, YouTube, TikTok, Facebook.
Samurai Labs — AI for detecting online violence, cyberbullying, and predatory behavior.
Snyk — Developer-first security platform with SCA, SAST, container, and IaC scanning integrated into IDEs and CI/CD pipelines.
SonarQube — SAST and code quality platform detecting security vulnerabilities, bugs, and code smells across 30+ programming languages.
StackHawk — Developer-centric DAST tool running application security tests in CI/CD using OpenAPI and GraphQL specs.
Synopsys Black Duck — SCA platform detecting open-source security vulnerabilities, license risks, and operational quality risks.
Thorn Safer — CSAM detection API and hash matching for platforms protecting children online.
Tisane Labs — NLP content moderation API detecting hate speech, cyberbullying, personal attacks, and sexual content in 30+ languages.
Tremau — DSA and online safety compliance platform with moderation workflow API.
Truepic — content authenticity and C2PA provenance API for verifying real vs synthetic media.
TrustLab — trust & safety intelligence and measurement API for platform integrity.
Two Hat — AI content moderation platform with contextual analysis and appeal workflows for gaming, social, and UGC communities.
Unitary — multimodal content moderation API for video, image, and text with contextual understanding.
Utopia AI — Multilingual AI content moderation platform for detecting toxic language, spam, and policy violations in real time.
Veracode — Application security platform with SAST, DAST, SCA, and penetration testing delivered as a cloud-based scan service.
WaveSpeedAI — content moderation API tool for developers, specializing in AI-powered moderation.
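As a concrete illustration of what these moderation APIs look like in practice, the sketch below builds a request for the Perspective API listed above, following the shape of its public `comments:analyze` endpoint. This is a minimal sketch under stated assumptions: the `build_payload` and `score_comment` helper names are this example's own, and a real API key is required for the network call.

```python
# Minimal sketch of a Perspective API (Jigsaw/Google) toxicity request.
# Endpoint and payload shape follow the public v1alpha1 interface;
# the helper names below are illustrative, not part of any SDK.
import json
from urllib import request

ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    "comments:analyze?key={key}"
)

def build_payload(text: str, attributes=("TOXICITY",)) -> dict:
    """Build the JSON body for a comments:analyze call."""
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {attr: {} for attr in attributes},
    }

def score_comment(text: str, api_key: str) -> float:
    """POST the comment and return the summary toxicity score (0..1)."""
    body = json.dumps(build_payload(text)).encode("utf-8")
    req = request.Request(
        ANALYZE_URL.format(key=api_key),
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        result = json.load(resp)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```

Most text-moderation APIs in this category follow the same pattern: POST a JSON body with the content and the attributes to score, and read per-attribute scores out of the response.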