Single endpoint integration
POST text to one endpoint and receive structured JSON that plugs directly into moderation flows.
Moderate comments, chat, usernames, reviews, and forms with one simple endpoint. Send text in, get structured JSON back, and ship moderation faster without building your own profanity filter from scratch.
Integration: Single endpoint
Output: Structured JSON
Stack: Works anywhere
API response preview
{
  "profanity": true,
  "matches": [
    {
      "term": "f***ing",
      "category": "sexual",
      "severity": "medium",
      "score": 0.92
    }
  ]
}
Category: Sexual · Severity: Medium · Score: 0.92
ProfanityCheck helps developers and moderation teams screen user-generated text before it is published, stored, or routed for manual review. It is built for comments, chat, usernames, reviews, bios, and form submissions.
Features
POST text to one endpoint and receive structured JSON that plugs directly into moderation flows.
Go beyond a simple yes or no and build rules for blocking, warning, logging, or review.
Works with Laravel, Node.js, Python, mobile apps, serverless functions, and anything that can make HTTP requests.
Keep your stack lean. No unnecessary dependency layer or lock-in.
Clear, public usage limits keep the product transparent and predictable.
Use the output in your admin tools, queues, dashboards, or automated policy workflows.
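As a sketch of the single-endpoint flow in Python (one of the stacks listed above), assuming the endpoint accepts a JSON body with a `text` field as shown in the curl example further down. Any authentication headers your plan may require are omitted here:

```python
import json
import urllib.request

API_URL = "https://profanitycheck.dev/api/v1/check"


def build_check_request(text: str) -> urllib.request.Request:
    """Build the POST request for the single /api/v1/check endpoint."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def check_text(text: str) -> dict:
    """POST text to the endpoint and return the structured JSON verdict."""
    with urllib.request.urlopen(build_check_request(text)) as resp:
        return json.load(resp)
```

Only the standard library is used, in keeping with the "no unnecessary dependency layer" point above; swapping `urllib` for `requests` or `httpx` is a one-line change.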
Use cases
ProfanityCheck works best where moderation has to be fast, consistent, and simple to integrate.
Filter offensive text before it reaches public pages or community threads.
Screen live messages in apps, games, communities, or support flows.
Stop abusive display names, bios, and profile text before publication.
Reduce brand risk in reviews, product feedback, and marketplace submissions.
Validate open text fields in forms, promotions, and lead generation flows.
Route flagged content into your own queue, workflow, or moderation dashboard.
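The last use case — routing flagged content into your own queue — can be sketched like this. The in-process queues and the severity values checked here are illustrative, not part of the API:

```python
from collections import deque

# Illustrative in-process stores; in production these might be
# database tables, Redis lists, or a message broker.
review_queue: deque = deque()
blocked_log: list = []


def route(result: dict, text: str) -> str:
    """Route a /api/v1/check result into publish, review, or block."""
    if not result.get("profanity"):
        return "publish"
    # "high" is an assumed severity value for illustration;
    # the documented example uses "medium".
    severities = {m.get("severity") for m in result.get("matches", [])}
    if "high" in severities:
        blocked_log.append(text)
        return "block"
    review_queue.append(text)
    return "review"
```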
API example
POST /api/v1/check
curl -X POST https://profanitycheck.dev/api/v1/check \
-H "Content-Type: application/json" \
-d '{"text": "This is a fucking example."}'
Example JSON response
{
  "profanity": true,
  "matches": [
    {
      "term": "fucking",
      "category": "sexual",
      "severity": "medium",
      "score": 0.92
    }
  ]
}
Reject clearly profane input before it is stored or published.
Ask for cleaner wording and reduce unnecessary moderation effort.
Send borderline cases to moderators based on severity or score.
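The three policies above — reject, ask for cleaner wording, or escalate — can be sketched as one decision function over the response. The score thresholds are illustrative, not part of the API; tune them to your own content:

```python
def decide(result: dict) -> str:
    """Map a /api/v1/check result to a moderation action.

    Thresholds (0.9, 0.6) are illustrative assumptions.
    """
    if not result.get("profanity"):
        return "accept"
    top = max((m.get("score", 0.0) for m in result.get("matches", [])), default=0.0)
    if top >= 0.9:
        return "reject"  # clearly profane: reject before storing or publishing
    if top >= 0.6:
        return "review"  # borderline: send to moderators
    return "warn"        # ask the user for cleaner wording
```

With the documented example response (score 0.92), this returns "reject".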
Rate limits
Per minute: 10 requests
Per hour: 400 requests
Per day: 1000 requests
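To stay under the per-minute limit client-side, a simple sliding-window throttle works. This helper is an illustrative sketch, not an official SDK:

```python
import time
from collections import deque


class RateLimiter:
    """Sliding-window limiter, e.g. RateLimiter(10, 60.0) for 10 req/min."""

    def __init__(self, max_calls: int, window_seconds: float):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls: deque = deque()  # monotonic timestamps of recent calls

    def acquire(self) -> None:
        """Block until a request slot is free, then record the call."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call leaves the window, then retry.
            time.sleep(self.window - (now - self.calls[0]))
            return self.acquire()
        self.calls.append(time.monotonic())
```

Call `limiter.acquire()` before each request to the endpoint; for the hourly and daily caps you can stack a second and third limiter the same way.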
FAQ
What does a profanity API do?
A profanity API checks text for profane or offensive language and returns data your application can use to block, flag, or review content.
What kinds of content can I moderate?
Comments, chat messages, usernames, bios, reviews, support messages, and open text fields are all common use cases.
Do I need a specific framework or language?
No. It is a standard JSON API and works with any stack that can make HTTP requests.
Does the API return more than a true/false flag?
Yes. It can include details like detected term, category, severity, and score to support more nuanced moderation.
Dataset contribution
Submit terms or phrases that should be reviewed for inclusion in the dataset. Add optional context if regional or cultural nuance matters.
Start integrating