Profanity API for modern moderation workflows

Detect profanity in user-generated text with a fast, developer-friendly API.

Moderate comments, chat, usernames, reviews, and forms with one simple endpoint. Send text in, get structured JSON back, and ship moderation faster without building your own profanity filter from scratch.

Integration: Single endpoint

Output: Structured JSON

Stack: Works anywhere

API response preview

{
  "profanity": true,
  "matches": [
    {
      "term": "f***ing",
      "category": "sexual",
      "severity": "medium",
      "score": 0.92
    }
  ]
}

Category: Sexual

Severity: Medium

Score: 0.92

A profanity filter API that fits real products, not just demos

ProfanityCheck helps developers and moderation teams screen user-generated text before it is published, stored, or routed for manual review. It is built for comments, chat, usernames, reviews, bios, and form submissions.

Features

Built for developers. Useful for moderation teams.

01

Single endpoint integration

POST text to one endpoint and receive structured JSON that plugs directly into moderation flows.

02

Severity, category, and score

Go beyond a simple yes or no and build rules for blocking, warning, logging, or review.

03

JSON in, JSON out

Works with Laravel, Node.js, Python, mobile apps, serverless functions, and anything that can make HTTP requests.

04

No SDK required

Keep your stack lean. No unnecessary dependency layer or lock-in.

05

Predictable rate limits

Clear, published usage limits keep integration behavior predictable and easy to plan around.

06

Moderation-ready responses

Use the output in your admin tools, queues, dashboards, or automated policy workflows.

Use cases

Use it anywhere users can submit text

ProfanityCheck works best where moderation has to be fast, consistent, and simple to integrate.

Comments and forums

Filter offensive text before it reaches public pages or community threads.

Chat and messaging

Screen live messages in apps, games, communities, or support flows.

Usernames and profiles

Stop abusive display names, bios, and profile text before publication.

Reviews and marketplaces

Reduce brand risk in reviews, product feedback, and marketplace submissions.

Forms and campaigns

Validate open text fields in forms, promotions, and lead generation flows.

Custom pipelines

Route flagged content into your own queue, workflow, or moderation dashboard.

API example

A clean request and a usable response

Request

POST /api/v1/check

curl -X POST https://profanitycheck.dev/api/v1/check \
  -H "Content-Type: application/json" \
  -d '{"text": "This is a fucking example."}'

Response

Example JSON payload

{
  "profanity": true,
  "matches": [
    {
      "term": "fucking",
      "category": "sexual",
      "severity": "medium",
      "score": 0.92
    }
  ]
}

Block instantly

Reject clearly profane input before it is stored or published.

Warn the user

Ask for cleaner wording and reduce unnecessary moderation effort.

Queue for review

Send borderline cases to moderators based on severity or score.
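The block / warn / queue split above can be sketched as a small decision function over a parsed response. The thresholds here are illustrative assumptions, not values prescribed by the API; tune them against your own content.

```python
def decide(response: dict) -> str:
    """Map a /api/v1/check response to a moderation action.

    Threshold values are assumptions for illustration only.
    """
    if not response.get("profanity"):
        return "allow"
    top_score = max(m["score"] for m in response["matches"])
    severities = {m["severity"] for m in response["matches"]}
    if "high" in severities or top_score >= 0.9:
        return "block"    # reject before it is stored or published
    if top_score >= 0.6:
        return "review"   # queue borderline cases for moderators
    return "warn"         # ask the user for cleaner wording

example = {
    "profanity": True,
    "matches": [
        {"term": "fucking", "category": "sexual",
         "severity": "medium", "score": 0.92}
    ],
}
print(decide(example))  # score 0.92 crosses the block threshold -> "block"
```

Because severity, category, and score are all in the response, the same function can grow per-category rules (for example, stricter handling of slurs than of mild profanity) without changing the API call.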

Rate limits

Clear limits build trust

Per minute: 10 requests

Per hour: 400 requests

Per day: 1000 requests
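A client can stay under the published per-minute limit with a small sliding-window throttle. This is a sketch under the stated limits (10 requests per minute); how the server signals an overrun (for example, an HTTP 429 status) is an assumption to verify against the API's actual behavior.

```python
import time
from collections import deque

class Throttle:
    """Client-side sliding-window throttle for the per-minute limit.

    The clock is injectable so the behavior can be tested without sleeping.
    """

    def __init__(self, max_calls: int = 10, window: float = 60.0,
                 clock=time.monotonic):
        self.max_calls = max_calls
        self.window = window
        self.clock = clock
        self.calls: deque = deque()  # timestamps of recent requests

    def wait_time(self) -> float:
        """Seconds to wait before the next request is allowed (0.0 if free)."""
        now = self.clock()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            return 0.0
        return self.window - (now - self.calls[0])

    def record(self) -> None:
        """Call after each request is actually sent."""
        self.calls.append(self.clock())
```

Usage: call `wait_time()` before each request, sleep for that long if it is positive, then call `record()` once the request is sent. The hourly and daily limits can be enforced the same way with larger windows.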

FAQ

Questions developers will actually ask

What is a profanity API?

A profanity API checks text for profane or offensive language and returns data your application can use to block, flag, or review content.

What kinds of text can I check?

Comments, chat messages, usernames, bios, reviews, support messages, and open text fields are all common use cases.

Do I need an SDK?

No. It is a standard JSON API and works with any stack that can make HTTP requests.

Does the response include more than true or false?

Yes. It can include the detected term, category, severity, and score, so you can build more nuanced moderation rules than a simple allow/deny.

Dataset contribution

Suggest a word or phrase

Submit terms or phrases that should be reviewed for inclusion in the dataset. Add optional context if regional or cultural nuance matters.

Submissions are reviewed before inclusion. Do not include personal or identifying information.

Start integrating

Add profanity detection to your product without building it yourself