NOPE API Reference
Safety layer for chat & LLMs. Analyze conversations for mental health and safeguarding risk.
Base URL: https://api.nope.net
API Version: v1 (current)
Quick Start
curl -X POST https://api.nope.net/v1/evaluate \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"messages": [{"role": "user", "content": "I feel hopeless"}],
"config": {"country": "US"}
}'

Get your API key at dashboard.nope.net.
For integration patterns and end-to-end examples, see the Integration Patterns Guide.
Authentication
Most endpoints require a Bearer token:
Authorization: Bearer nope_live_xxxxxx

Key types:
- nope_live_* — Production keys
- nope_test_* — Test keys (rate limited)
Endpoints requiring an API key:
- GET /v1/signpost — Basic crisis resources by country (free)
- GET /v1/signpost/smart — AI-ranked crisis resources ($0.001/call)
Public endpoints (no auth required):
- GET /v1/signpost/:id — Single resource by database ID (for widget embeds)
- GET /v1/signpost/countries — List supported countries
- GET /v1/signpost/detect-country — IP-based country detection
- GET /v1/try/signpost/smart — Demo AI-ranked resources (rate-limited, max 5 results)
Deprecated (use /v1/signpost/* instead, sunset Jan 2027):
- GET /v1/resources/* — All resources endpoints are deprecated
API Limits & Quotas
Request Size Limits
These hard limits apply to all requests and return 400 Bad Request if exceeded:
| Limit | Value | Applies To |
|---|---|---|
| Max message count | 100 messages | /v1/evaluate |
| Max message size | 50 KB per message | /v1/evaluate |
| Max text blob size | 50 KB | /v1/evaluate (when using text field) |
| Max query length | 500 characters | /v1/signpost/smart |
Message Truncation
To control costs and focus on relevant context, NOPE truncates conversation history in certain scenarios.
/v1/evaluate
| Access Level | Truncation Behavior |
|---|---|
| With API key | No truncation — full message history retained |
| Without API key (try endpoint) | Last 10 messages, max 500 tokens (~2000 chars) per message |
When truncation occurs, the response includes metadata.messages_truncated: true.
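If you call the unauthenticated try endpoint, you can pre-truncate on the client so you control exactly what gets dropped. A minimal sketch (the 10-message and ~2,000-character limits come from the table above; `truncateForTryEndpoint` is an illustrative helper, not an SDK method):

```javascript
// Pre-truncate a conversation to the try-endpoint limits:
// keep the last 10 messages, capped at ~2000 characters each.
function truncateForTryEndpoint(messages, maxMessages = 10, maxChars = 2000) {
  return messages.slice(-maxMessages).map((m) => ({
    role: m.role,
    content: m.content.length > maxChars ? m.content.slice(0, maxChars) : m.content,
  }));
}
```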
Resource Limits
| Endpoint | Parameter | Limit |
|---|---|---|
| /v1/signpost | limit | Max 10 |
| /v1/signpost/smart | limit | Max 10 |
| /v1/signpost/smart | query | Max 500 characters |
| /v1/try/signpost/smart | limit | Max 5 (lower for demo) |
Try Endpoints
The /v1/try/* demo endpoints have additional restrictions:
| Restriction | Value |
|---|---|
| Rate limit | 10 requests/minute/IP |
| Message truncation | Always applied (10 messages max) |
| Resource results | Max 5 (vs 10 for authenticated) |
| Debug info | Never included |
| Custom models | Not available |
| Multiple judges | Not available |
Try endpoints are for API exploration and demos. For production use, get an API key at dashboard.nope.net.
Rate Limits
Authenticated endpoints have per-user rate limits to ensure fair usage. Limits are generous for normal usage patterns.
| Endpoint | Rate Limit |
|---|---|
| /v1/evaluate | 100 requests/min |
| /v1/oversight/analyze | 50 requests/min |
| /v1/oversight/ingest | 10 requests/min |
| /v1/signpost | 200 requests/min |
| /v1/signpost/smart | 100 requests/min |
| /v1/webhooks/* | 30 requests/min |
Rate limit headers are included on all responses:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 99
X-RateLimit-Reset: 1704067200000

When exceeded, returns 429 Too Many Requests with a Retry-After header:
{
"error": "rate_limit_exceeded",
"message": "Rate limit exceeded. Please retry after 45 seconds.",
"retry_after_seconds": 45
}

Note: Rate limits apply per user (by API key). If you need higher limits, contact us.
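A simple retry wrapper that honors `retry_after_seconds` might look like this (sketch only; `fetchFn` stands in for whatever HTTP call you make):

```javascript
// Retry a request when the API returns 429, waiting the number of
// seconds the server reports in retry_after_seconds before retrying.
async function withRateLimitRetry(fetchFn, maxRetries = 3) {
  let res;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    res = await fetchFn();
    if (res.status !== 429 || attempt === maxRetries) return res;
    const body = await res.json();
    const waitMs = (body.retry_after_seconds ?? 1) * 1000;
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
  return res;
}
```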
POST /v1/evaluate
Primary API endpoint. Analyze conversation for risk using an orthogonal subject/type taxonomy. Returns detailed assessment with risks, communication style, and matched crisis resources.
Key Concepts: Subject × Type
NOPE uses an orthogonal design separating WHO is at risk from WHAT the risk is:
| Dimension | Values | Question |
|---|---|---|
| Subject | self, other, unknown | WHO is at risk? |
| Type | suicide, self_harm, violence, abuse, etc. | WHAT type of harm? |
This enables clean detection of scenarios like:
- "I want to hurt myself" → subject: self, type: self_harm
- "My friend is suicidal" → subject: other, type: suicide
- "He hit me again" → subject: self, type: abuse (speaker is victim)
Evaluate Request
{
// Provide ONE of these:
messages?: Array<{role: 'user'|'assistant', content: string}>,
text?: string, // Single text blob (converted to user message)
config?: {
country?: string, // ISO 3166-1 alpha-2 (e.g., "US", "GB"), default "US"
include_resources?: boolean, // Default: true
},
user_context?: string, // Additional context (e.g., app persona)
}

Evaluate Response
{
// Identified risks
risks: Array<{
type: RiskType,
subject: 'self' | 'other',
severity: 'none' | 'mild' | 'moderate' | 'high' | 'critical',
imminence: 'not_applicable' | 'chronic' | 'subacute' | 'urgent' | 'emergency',
features?: string[], // Evidence features
}>,
// Chain-of-thought reasoning
rationale: string,
// Speaker summary (derived from self-subject risks)
speaker_severity: Severity,
speaker_imminence: Imminence,
// Whether to show crisis resources
show_resources: boolean,
// Matched crisis resources with explanations
resources?: {
primary: CrisisResource & { why: string },
secondary: Array<CrisisResource & { why: string }>,
},
request_id: string, // Unique ID for audit trail
timestamp: string, // ISO 8601
metadata?: {
api_version: 'v1',
input_format: 'structured' | 'text_blob',
messages_truncated?: boolean,
fallback_used?: boolean, // True if v0 LLM fallback was used
},
}

Example Response
{
"risks": [
{
"type": "suicide",
"subject": "self",
"severity": "moderate",
"imminence": "chronic",
"features": ["hopelessness", "passive_ideation"]
}
],
"rationale": "User expressing feelings of hopelessness with passive suicidal ideation.",
"speaker_severity": "moderate",
"speaker_imminence": "chronic",
"show_resources": true,
"resources": {
"primary": {
"type": "crisis_line",
"name": "988 Suicide and Crisis Lifeline",
"phone": "988",
"is_24_7": true,
"why": "Primary national crisis line for suicidal ideation."
},
"secondary": []
},
"request_id": "req_abc123",
"timestamp": "2025-01-15T10:30:00Z",
"metadata": {
"api_version": "v1",
"input_format": "structured"
}
}

POST /v1/screen (Deprecated)
Deprecated: The /v1/screen endpoint has been consolidated into /v1/evaluate, which now uses Edge-backed classification at $0.003/call.
Use /v1/evaluate instead. It provides:
- Single-pass classification (faster)
- Chain-of-thought reasoning via rationale
- LLM-ranked crisis resources with relevance explanations
- Full SB243/NY Article 47 compliance
Legacy Endpoint
The legacy /v0/screen endpoint remains available at $0.001/call for existing integrations. The SDK screen() methods now call this legacy endpoint and emit deprecation warnings.
Migration
Replace /v1/screen calls with /v1/evaluate:
# Before (deprecated)
curl -X POST https://api.nope.net/v1/screen \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{"text": "I feel hopeless"}'
# After (recommended)
curl -X POST https://api.nope.net/v1/evaluate \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{"text": "I feel hopeless"}'

The /v1/evaluate response includes:
- risks[] — Same 9 risk types with severity, imminence, and features
- rationale — Chain-of-thought reasoning
- speaker_severity / speaker_imminence — Top-level fields
- show_resources — Boolean flag for resource display
- resources.primary.why / resources.secondary[].why — Explanation for resource ranking
Risk Subjects
| Subject | Description | When to use |
|---|---|---|
| self | The speaker is at risk | "I want to hurt myself" |
| other | Someone else is at risk | "My friend is suicidal", "He hit her" |
| unknown | Cannot determine with confidence | Ambiguous scenarios |
Key insight: speaker_severity only considers risks where subject === 'self'. This prevents showing crisis resources to worried bystanders asking about others.
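If you need to re-derive the speaker summary from `risks[]` yourself, the rule can be reproduced client-side. A sketch (the severity ordering follows the Severity Scale later in this document; `deriveSpeakerSeverity` is illustrative, not part of the API):

```javascript
const SEVERITY_ORDER = ['none', 'mild', 'moderate', 'high', 'critical'];

// Reproduce the speaker_severity rule: only risks with subject === 'self'
// count toward the speaker summary; take the highest severity among them.
function deriveSpeakerSeverity(risks) {
  const selfRisks = risks.filter((r) => r.subject === 'self');
  if (selfRisks.length === 0) return 'none';
  return selfRisks
    .map((r) => r.severity)
    .sort((a, b) => SEVERITY_ORDER.indexOf(a) - SEVERITY_ORDER.indexOf(b))
    .pop();
}
```

Note how a worried bystander ("My friend is suicidal") produces only `subject: 'other'` risks, so this helper returns 'none' for them.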
Risk Types (9 types)
| Type | Description |
|---|---|
| suicide | Self-directed lethal intent - thoughts, plans, or attempts to end one's life |
| self_harm | Non-suicidal self-injury (NSSI) - intentional self-harm without intent to die |
| self_neglect | Self-care failure and psychiatric emergency - eating disorders, psychosis, substance crisis, severe functional impairment, medical care refusal |
| violence | Risk of harm to others - threats, plans, or acts of violence |
| abuse | Physical, emotional, sexual, or financial abuse patterns |
| sexual_violence | Rape, sexual assault, or sexual coercion |
| neglect | Failure to care for dependents - children, elderly, vulnerable adults |
| exploitation | Trafficking, labor exploitation, sextortion, grooming |
| stalking | Persistent unwanted contact, following, surveillance |
Communication Styles (8 styles)
Communication style describes how content is expressed, orthogonal to risk level. The same crisis content can be expressed directly, through humor, via creative writing, etc.
| Style | Description |
|---|---|
| direct | Explicit, first-person present statements ("I want to die") |
| humor | Dark humor, memes, ironic expressions, Gen-Z speak |
| fiction | Creative writing, roleplay, storytelling contexts |
| hypothetical | "What if" scenarios, "asking for a friend" |
| distanced | Third-party concern, temporal distancing, past tense |
| clinical | Academic, professional, research discussion |
| minimized | Hedged language, downplaying severity |
| adversarial | Jailbreak attempts, manipulation, testing boundaries |
Why this matters:
- Distinguish genuine crisis from dark humor
- Identify distancing ("asking for a friend")
- Detect adversarial attempts with embedded risk
- Recognize minimization that may undersell risk
Severity Scale
| Level | Definition |
|---|---|
| none | No clinical concern |
| mild | Minor distress, no functional impairment |
| moderate | Clear concern, not immediately dangerous |
| high | Serious risk requiring urgent intervention |
| critical | Life-threatening, imminent harm |
Imminence Scale
| Level | Definition |
|---|---|
| not_applicable | ONLY when severity=none |
| chronic | Weeks-months, stable pattern |
| subacute | Likely escalation in days-weeks |
| urgent | Escalation likely within 24-48h |
| emergency | Happening NOW |
For detailed guidance on what actions to take based on severity, confidence, and imminence levels, see the Integration Patterns Guide.
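As one illustration of acting on these scales, an app might escalate to human review once either scale crosses a threshold. A sketch with illustrative thresholds (see the Integration Patterns Guide for recommended policies):

```javascript
// Ordinal positions follow the Severity and Imminence scales above.
const SEVERITY = ['none', 'mild', 'moderate', 'high', 'critical'];
const IMMINENCE = ['not_applicable', 'chronic', 'subacute', 'urgent', 'emergency'];

// Illustrative escalation gate: flag for human review when either
// scale meets or exceeds a configurable threshold.
function needsEscalation(result, minSeverity = 'high', minImminence = 'urgent') {
  return (
    SEVERITY.indexOf(result.speaker_severity) >= SEVERITY.indexOf(minSeverity) ||
    IMMINENCE.indexOf(result.speaker_imminence) >= IMMINENCE.indexOf(minImminence)
  );
}
```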
Features
Features are atomic, observable indicators returned in the features array of each risk assessment. NOPE uses a universal feature pool with 180+ indicators across these categories:
- Ideation & Intent — C-SSRS based (passive_ideation, active_ideation, plan_present, etc.)
- Means & Access — Lethal means availability (firearm_access, medication_access, etc.)
- Violence — HCR-20 based (specific_threat, identifiable_target, etc.)
- Abuse & IPV — DASH based (coercive_control, strangulation, etc.)
- Exploitation — Trafficking, grooming, sextortion indicators
- Neglect — Dependent care failures
- Eating Disorder — Restriction, purging, body dysmorphia
- Stalking — SAM based (unwanted_contact, following, etc.)
- Clinical — Psychotic and substance features (hallucinations, withdrawal, etc.)
- Emotional — Hopelessness, agitation, acute distress
- Protective Factors — START based (help_seeking, social_support, etc.)
- Context — Subject and relationship context markers
For the complete feature vocabulary with descriptions, see the User Risk Taxonomy page.
Legal Flags
{
ipv?: {
indicated: boolean,
strangulation: boolean, // ANY history = 7.5x homicide risk
lethality_risk: 'standard' | 'elevated' | 'severe' | 'extreme',
escalation_pattern: boolean,
},
safeguarding_concern?: {
indicated: boolean,
context: 'minor_involved' | 'vulnerable_adult' | 'csa' | 'infant_at_risk' | 'elder_abuse',
},
third_party_threat?: {
tarasoff_duty: boolean, // Duty to warn may apply
specific_target: boolean, // Identifiable victim
},
}

Note: safeguarding_concern surfaces patterns that may trigger statutory obligations depending on jurisdiction and organizational role. NOPE flags concerns for human review—AI systems are not mandatory reporters under any current statute.
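One way to route on these flags is to collect human-review reasons and queue anything non-empty. A sketch (`reviewFlags` and its return labels are illustrative, not part of the API):

```javascript
// Collect human-review reasons from the legal flags block.
// Returns an array of labels; empty means no flag-based escalation.
function reviewFlags(legal = {}) {
  const reasons = [];
  if (legal.ipv?.indicated && legal.ipv.strangulation) {
    reasons.push('ipv_strangulation_history'); // ANY history = elevated homicide risk
  }
  if (legal.safeguarding_concern?.indicated) {
    reasons.push(`safeguarding:${legal.safeguarding_concern.context}`);
  }
  if (legal.third_party_threat?.tarasoff_duty) {
    reasons.push('possible_duty_to_warn');
  }
  return reasons;
}
```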
Widget Integration
When speaker_severity is not 'none', display crisis resources using the embeddable widget:
if (result.speaker_severity !== 'none') {
const iframe = document.createElement('iframe');
iframe.src = 'https://widget.nope.net/resources?country=US&scopes=suicide,crisis';
iframe.width = '100%';
iframe.height = '400';
container.appendChild(iframe);
}

See the Widget Builder for configuration options and the JavaScript API.
SDKs
Official SDKs with full type definitions:
| SDK | Package | Docs |
|---|---|---|
| Node.js | @nope-net/sdk | Node.js SDK Reference |
| Python | nope-net | Python SDK Reference |
Both SDKs support all API endpoints (evaluate, screen, oversight, resources), webhook verification, and include typed responses.
Webhooks
Receive real-time HTTP notifications when evaluations exceed configured risk thresholds.
Note: Webhooks require a minimum balance to ensure delivery reliability.
| Event | Source | Description |
|---|---|---|
| evaluate.alert | /v1/evaluate | User risk meets or exceeds your threshold |
| oversight.alert | /v1/oversight/* | AI behavior concern is high or critical |
| oversight.ingestion.complete | /v1/oversight/ingest | Batch processing completed |
| test.ping | Dashboard/API | Test event to verify endpoint |
API Routes:
| Method | Endpoint | Description |
|---|---|---|
| POST | /v1/webhooks | Create webhook |
| GET | /v1/webhooks | List webhooks |
| PUT | /v1/webhooks/:id | Update webhook |
| DELETE | /v1/webhooks/:id | Delete webhook |
| POST | /v1/webhooks/:id/test | Send test ping |
For webhook payload structures, signature verification, and integration examples, see the Webhooks Guide.
GET /v1/signpost
Requires API key (free). Returns crisis helpline resources for a given country using scope-based filtering.
curl -H "Authorization: Bearer nope_live_xxx" \
"https://api.nope.net/v1/signpost?country=US&scopes=suicide,mental_health"

| Parameter | Type | Required | Description |
|---|---|---|---|
| country | string | Yes | ISO 3166-1 alpha-2 code |
| subdivisions | string | No | Comma-separated ISO 3166-2 codes (e.g., "US-CA,US-NY") |
| scopes | string | No | Comma-separated service scopes (WHAT the resource helps with) |
| populations | string | No | Comma-separated populations (WHO the resource serves) |
| urgent | boolean | No | Only 24/7 resources |
| limit | number | No | Max resources (default: 10) |
Filtering Parameters
scopes — filters by service scope (what the resource helps with):
- ?scopes=suicide — suicide crisis resources
- ?scopes=domestic_violence — DV resources
- ?scopes=eating_disorder — eating disorder resources
- ?scopes=lgbtq — LGBTQ+ specialist resources (Trevor Project, Trans Lifeline)
populations — filters by population served (who the resource serves):
- ?populations=veterans — resources for veterans
- ?populations=lgbtq — resources serving LGBTQ+ community
- ?populations=youth — youth-focused resources
Combined: Both can be used together with AND logic:
- ?scopes=suicide&populations=veterans — suicide resources specifically for veterans
- ?scopes=domestic_violence&populations=lgbtq — DV resources serving LGBTQ+ community
Note: Invalid scope or population values return a 400 error with the invalid values listed.
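Assembling the query string from these filters can be sketched with `URLSearchParams` (the parameter names match the table above; `signpostUrl` is an illustrative helper):

```javascript
// Build a /v1/signpost URL from filter options.
// scopes and populations are joined into comma-separated lists.
function signpostUrl({ country, scopes = [], populations = [], urgent, limit }) {
  const params = new URLSearchParams({ country });
  if (scopes.length) params.set('scopes', scopes.join(','));
  if (populations.length) params.set('populations', populations.join(','));
  if (urgent) params.set('urgent', 'true');
  if (limit) params.set('limit', String(limit));
  return `https://api.nope.net/v1/signpost?${params}`;
}
```

Note that `URLSearchParams` percent-encodes the commas in multi-value lists, which URL decoders treat as equivalent.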
Valid Scopes
NOPE supports 93 service scopes for filtering crisis resources (suicide, domestic_violence, eating_disorder, lgbtq, etc.).
For the complete list of all scopes with descriptions, see the Service Taxonomy page.
Valid Populations
NOPE supports 26 population filters for targeting specific demographics (veterans, youth, lgbtq, etc.).
For the complete list of all populations with descriptions, see the Service Taxonomy page.
GET /v1/signpost/smart
Requires API key + balance ($0.001/call). Returns AI-ranked crisis resources using semantic search.
Use this when you have a natural language query and want the most relevant resources, not just scope-based filtering.
curl -H "Authorization: Bearer nope_live_xxx" \
"https://api.nope.net/v1/signpost/smart?country=US&query=teen+eating+disorder"

| Parameter | Type | Required | Description |
|---|---|---|---|
| country | string | Yes | ISO 3166-1 alpha-2 code |
| query | string | Yes | Natural language search query |
| scopes | string | No | Optional scope pre-filter |
| limit | number | No | Max resources (default: 10) |
Example: query=teen eating disorder prioritizes eating disorder helplines over generic crisis lines.
GET /v1/signpost/search
Requires API key (free). Semantic search across all crisis resources using vector embeddings.
Unlike /smart which uses LLM ranking, this endpoint uses pre-computed embeddings for fast semantic search across the entire resource database. Best for natural language queries where you want relevant results without country restrictions.
curl -H "Authorization: Bearer nope_live_xxx" \
"https://api.nope.net/v1/signpost/search?query=lgbtq+support+for+black+community"

| Parameter | Type | Required | Description |
|---|---|---|---|
| query | string | Yes | Natural language search query |
| country | string | No | ISO 3166-1 alpha-2 code to filter results |
| limit | number | No | Max resources (default: 10, max: 50) |
| threshold | number | No | Similarity threshold 0-1 (default: 0.3) |
Search Response
{
query: string,
country: string | null,
results: Array<{
id: string,
name: string,
description: string,
country_code: string,
is_24_7: boolean,
similarity: number, // 0-1, higher = more relevant
open_status: {
is_open: boolean | null, // null = uncertain
next_change: string | null, // ISO 8601 timestamp
confidence: "high" | "low" | "none",
message: string | null, // e.g. "Open 24/7", "Closed · Opens Monday at 9 AM"
},
// ... other resource fields (phone, chat_url, etc.)
}>,
count: number,
timing: {
embed_ms: number,
search_ms: number,
total_ms: number,
}
}

Example: Find LGBTQ+ resources for specific communities:
curl -H "Authorization: Bearer nope_live_xxx" \
"https://api.nope.net/v1/signpost/search?query=trans+youth+support&country=US"

GET /v1/signpost/:id
Public endpoint (no auth required). Fetch a single crisis resource by its database UUID. Useful for widget embeds that display a specific resource.
curl "https://api.nope.net/v1/signpost/c051c06a-119f-4823-af66-894d9b934b5f"

Resource by ID Response
{
"resource": {
"id": "c051c06a-119f-4823-af66-894d9b934b5f",
"name": "988 Suicide & Crisis Lifeline",
"phone": "988",
"is_24_7": true,
"open_status": {
"is_open": true,
"next_change": null,
"confidence": "high",
"message": "Open 24/7"
}
}
}

Resource by ID Errors
| Status | Description |
|---|---|
| 400 | Invalid UUID format |
| 404 | Resource not found or disabled |
Resource by ID Use Cases
- Single resource embeds: Display a specific helpline on a partner website
- Deep linking: Link directly to a resource from external systems
- Widget route: Powers the /resource/[id] widget embed URL
POST /v1/steer
Requires API key ($0.001/call). Verify AI responses comply with system prompt rules.
Basic usage:
curl -X POST https://api.nope.net/v1/steer \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"system_prompt": "You are a helpful assistant. Never mention competitors.",
"proposed_response": "While CompetitorX is good, we offer better value..."
}'

With conversation context:
curl -X POST https://api.nope.net/v1/steer \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"system_prompt": "You are a cooking assistant. Only answer cooking questions.",
"proposed_response": "The capital of France is Paris.",
"messages": [
{"role": "user", "content": "What is the capital of France?"}
]
}'

Steer Request
{
system_prompt: string, // Rules your AI should follow
proposed_response: string, // The AI response to verify (required)
messages?: Array<{ // Optional conversation history (must end with user message)
role: 'user' | 'assistant',
content: string
}>,
include_audit?: boolean // Include detailed audit trail
}

Steer Response
{
outcome: 'COMPLIANT' | 'REDEEMED' | 'CANNOT_COMPLY',
compliant: boolean, // Was original compliant?
modified: boolean, // Was response modified?
response: string, // Final response (original or redeemed, empty if CANNOT_COMPLY)
cannot_comply?: { // Present when outcome is CANNOT_COMPLY
reason: string, // Why the system prompt is unprocessable
category: 'violence' | 'csam' | 'terrorism' | 'safety_circumvention' | 'other'
},
conversation?: { // Present when using messages array
turn_count: number,
triggering_user_message?: string
},
prompt_quality?: {
score: number, // 0-100
grade: 'A' | 'B' | 'C' | 'D' | 'F',
dimensions: {
specificity: number,
extractability: number,
consistency: number,
completeness: number,
testability: number,
actionability?: number
},
issues: string[]
},
stages: {
preprocess: {
red_lines: number,
watch_items: number,
persona?: string,
cached: boolean,
latency_ms: number
},
screen: {
passed: boolean,
hits: number, // Forbidden items found in response
misses: number, // Required items not found
has_hard_violations: boolean, // Exact matches (authoritative)
has_soft_violations: boolean, // Patterns/semantic (analysis can override)
evasion_patterns: string[], // Detected evasion attempts
latency_ms: number
},
verify: {
exit_point: 'TRIAGE' | 'ANALYSIS' | 'REDEMPTION',
triage_confidence: number, // 0-100
analysis?: { // Present if analysis ran
score: number, // 0-1 overall compliance
compliant: boolean,
rules?: Array<{
id: string,
description: string,
fulfilment: 'EXACTLY_MET' | 'MAJORLY_MET' | 'MODERATELY_MET' | 'PARTIALLY_MET' | 'UNMET' | 'NOT_APPLICABLE',
reasoning: string,
red_line_id?: string
}>,
lowest_rule?: { // Most problematic rule
id: string,
fulfilment: string,
reasoning: string
}
},
redemption?: { // Present when outcome is REDEEMED
original_intent: string, // What the user was trying to accomplish
redeemed_response: string, // The compliant alternative
addressed_violations: string[] // Which red lines were fixed
},
latency_ms: number
}
},
request_id: string,
timestamp: string,
total_latency_ms: number
}

Custom Response Handling
When outcome is REDEEMED, you can use the provided response directly, or craft your own using the metadata:
| Field | Purpose |
|---|---|
| stages.verify.redemption.original_intent | What the user was trying to accomplish |
| stages.verify.redemption.addressed_violations | Which red lines were violated |
| stages.verify.analysis.rules | Rule-by-rule breakdown with fulfilment levels |
| stages.verify.analysis.lowest_rule | The most problematic rule |
| stages.screen.has_hard_violations | Whether exact-match violations occurred |
| stages.screen.evasion_patterns | Detected evasion attempts |
Fulfilment levels: EXACTLY_MET (1.0), MAJORLY_MET (0.75), MODERATELY_MET (0.5), PARTIALLY_MET (0.25), UNMET (0.0), NOT_APPLICABLE (excluded from scoring)
Violation types:
- Hard violations — Exact string matches (passwords, API keys). Screen is authoritative.
- Soft violations — Regex patterns, semantic rules. Analysis can override if semantically equivalent.
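Applying the fulfilment weights above to compute an overall rule score (mirroring the 0-1 `analysis.score`) can be sketched as follows. Treating a rule set with no applicable rules as fully compliant is an assumption of this sketch, not documented behavior:

```javascript
// Weights from the fulfilment levels documented above.
const FULFILMENT_WEIGHTS = {
  EXACTLY_MET: 1.0,
  MAJORLY_MET: 0.75,
  MODERATELY_MET: 0.5,
  PARTIALLY_MET: 0.25,
  UNMET: 0.0,
};

// Average the fulfilment weights across applicable rules.
// NOT_APPLICABLE rules are excluded from scoring, per the note above.
function ruleScore(rules) {
  const applicable = rules.filter((r) => r.fulfilment !== 'NOT_APPLICABLE');
  if (applicable.length === 0) return 1.0; // assumption: vacuously compliant
  const total = applicable.reduce((sum, r) => sum + FULFILMENT_WEIGHTS[r.fulfilment], 0);
  return total / applicable.length;
}
```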
For detailed examples and integration patterns, see the Steer Guide.
Steer Limits
Input Limits
| Limit | Authenticated | Try Endpoint |
|---|---|---|
| Max system prompt | 50,000 chars | 10,000 chars |
| Max proposed response | 50,000 chars | 10,000 chars |
| Combined max (prompt + response) | 80,000 chars | 20,000 chars |
| Max messages (multi-turn) | 10 | 10 |
| Max per-message length | 10,000 chars | 10,000 chars |
| Rate limit | 100 req/min | 10 req/min per IP |
Truncation Behavior
When inputs exceed limits, Steer applies intelligent truncation:
| Scenario | Behavior |
|---|---|
| Prompt/response exceeds limit | Keeps first 20,000 + last 10,000 chars |
| Combined exceeds 80,000 | Proportionally reduces both inputs |
| Message > 10,000 chars | Keeps first 5,000 + last 2,000 chars |
| Message > 50,000 chars | Replaced with metadata placeholder |
| More than 10 messages | Keeps only the last 10 |
When truncation occurs, truncation.truncated: true is included in the response.
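The head-plus-tail strategy in the first row can be sketched as follows (the 20,000 + 10,000 split comes from the table; `headTailTruncate` and its placeholder marker are illustrative, not the exact server behavior):

```javascript
// Keep the first `head` and last `tail` characters of an over-long input,
// matching the documented 20,000 + 10,000 split for prompts/responses.
function headTailTruncate(text, limit = 50000, head = 20000, tail = 10000) {
  if (text.length <= limit) return text;
  return text.slice(0, head) + '\n...[truncated]...\n' + text.slice(-tail);
}
```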
Output Limits
| Constraint | Value | Notes |
|---|---|---|
| Max output tokens (verify) | 4,096 | Shared between analysis + redemption |
| Estimated max redeemed response | ~12,000 chars | After analysis overhead |
| Max output tokens (preprocess) | 8,000 | Red lines and watch items extraction |
Fallback: If redemption fails, returns: "I apologize, but I can't provide that response. How else can I help?"
CANNOT_COMPLY Outcome
In rare cases, Steer returns CANNOT_COMPLY instead of COMPLIANT or REDEEMED. This signals that the system prompt itself is unprocessable — Steer cannot reliably verify responses against it.
Categories:
- csam — System prompts that sexualize minors
- violence — Prompts instructing the AI to help harm people
- terrorism — Attack planning or extremist recruitment
- safety_circumvention — Jailbreak prompts like "DAN" or "ignore all restrictions"
- other — Other unprocessable concerns
When CANNOT_COMPLY is returned:
- response is empty
- compliant is false
- cannot_comply.reason explains why
- cannot_comply.category identifies the issue type
Note: Steer is conservative — legitimate use cases (therapists, security researchers, fiction writers) are allowed through. Only egregiously harmful prompts trigger this.
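Client code can branch on the three outcomes like this (sketch; `finalResponse` and the default fallback string are illustrative):

```javascript
// Choose what to send to the end user based on the Steer outcome.
function finalResponse(steer, fallback = "I can't help with that request.") {
  switch (steer.outcome) {
    case 'COMPLIANT':
    case 'REDEEMED':
      return steer.response; // original or redeemed text
    case 'CANNOT_COMPLY':
      // The system prompt itself was unprocessable; log and use a safe fallback.
      console.warn('Steer CANNOT_COMPLY:', steer.cannot_comply?.category);
      return fallback;
    default:
      return fallback;
  }
}
```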
For detailed documentation, see the Steer Guide.
Pricing
NOPE uses prepaid usage-based billing — no subscriptions, no tiers.
| Endpoint | Cost |
|---|---|
| /v1/signpost/smart | $0.001 |
| /v1/evaluate | $0.003 |
| /v1/steer | $0.001 |
| /v1/signpost | Free |
New accounts receive $1.00 free credit. Top up via dashboard.nope.net/billing.
Errors
| Code | Description |
|---|---|
| 400 | Invalid request |
| 401 | Invalid or missing API key |
| 402 | Insufficient balance |
| 429 | Rate limit exceeded |
| 500 | Internal server error |
Clinical Frameworks
| Framework | Usage |
|---|---|
| C-SSRS | Suicide severity (ideation features) |
| HCR-20 | Violence risk (violence features) |
| START | Protective factors |
| DASH | IPV risk assessment |
| Danger Assessment | IPV lethality indicators |
Support
- Status: status.nope.net
- Test Suites: suites.nope.net
- Email: [email protected]
This API supports human decision-making; it does not replace it. Always maintain human oversight for high-risk situations.