Evaluation API
The /v1/evaluate endpoint analyzes conversations for mental health and safeguarding risk using an orthogonal subject × type taxonomy.
Edge-backed classification: This endpoint now uses single-pass Edge classification at $0.003/call with chain-of-thought reasoning. The /v1/screen endpoint has been consolidated into /v1/evaluate.
Key Concept: Subject × Type
NOPE separates WHO is at risk from WHAT the risk is:
- Subject — self (the speaker) or other (someone else)
- Type — suicide, self_harm, violence, abuse, exploitation, etc.
This enables clean detection of scenarios like "My friend is suicidal" (subject: other, type: suicide).
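A minimal sketch of why the two axes matter: the same risk type can attach to different subjects, and the pair is what tells you who needs help. The risk objects and the `describe` helper below are illustrative, not part of the API.

```javascript
// Hypothetical risk objects illustrating the subject × type split.
const selfRisk  = { subject: 'self',  type: 'suicide' };  // "I want to die"
const otherRisk = { subject: 'other', type: 'suicide' };  // "My friend is suicidal"

// Same type on both risks; only the subject axis differs.
function describe(risk) {
  const who = risk.subject === 'self' ? 'the speaker' : 'someone else';
  return `${risk.type} risk affecting ${who}`;
}
```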
Basic Request
Send a conversation as an array of messages:
```bash
curl -X POST https://api.nope.net/v1/evaluate \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "I have been feeling really down lately"},
      {"role": "assistant", "content": "I am sorry to hear that..."},
      {"role": "user", "content": "Sometimes I wonder if things will ever get better"}
    ],
    "config": {
      "country": "US"
    }
  }'
```

Or send a single text blob:
```json
{
  "text": "I've been feeling really down lately...",
  "config": { "country": "US" }
}
```

Configuration Options
| Option | Type | Description |
|---|---|---|
| country | string | ISO 3166-1 alpha-2 code (e.g., "US", "GB"). Used for crisis resource matching. |
| conversation_id | string | Your ID for this conversation. Included in webhooks. |
| end_user_id | string | Your user ID. Included in webhooks. |
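Putting the three options together, a request body might look like the sketch below. This assumes all three options live under `config`, as the section heading suggests; the IDs are placeholder values.

```javascript
// Hypothetical request body exercising all three config options.
const body = {
  messages: [{ role: 'user', content: 'I have been feeling really down lately' }],
  config: {
    country: 'GB',               // ISO 3166-1 alpha-2; drives crisis resource matching
    conversation_id: 'conv_123', // your conversation ID, echoed in webhooks
    end_user_id: 'user_456'      // your user ID, echoed in webhooks
  }
};
```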
Request Limits
Hard limits that return 400 Bad Request if exceeded:
| Limit | Value |
|---|---|
| Max messages | 100 messages |
| Max message size | 50 KB per message |
| Max text blob size | 50 KB (when using text field) |
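Since exceeding these limits returns a 400, it can be worth validating client-side before sending. A minimal sketch (the function name and return convention are ours, not the API's):

```javascript
// Pre-validate a messages array against the documented hard limits,
// so oversized payloads fail fast instead of round-tripping to a 400.
const MAX_MESSAGES = 100;
const MAX_MESSAGE_BYTES = 50 * 1024; // 50 KB per message

function validateMessages(messages) {
  if (messages.length > MAX_MESSAGES) {
    return `too many messages: ${messages.length} > ${MAX_MESSAGES}`;
  }
  for (const m of messages) {
    const bytes = Buffer.byteLength(m.content, 'utf8');
    if (bytes > MAX_MESSAGE_BYTES) {
      return `message exceeds 50 KB (${bytes} bytes)`;
    }
  }
  return null; // within limits
}
```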
Message Truncation
With an API key, no truncation is applied — full conversation history is retained up to the limits above.
When using the /v1/try/evaluate demo endpoint (no API key), messages are truncated to the last 10 messages with a maximum of ~2000 characters per message.
When truncation occurs, the response includes metadata.messages_truncated: true.
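If you use the demo endpoint, you may want to surface truncation to the caller. A small helper, assuming the `metadata.messages_truncated` field described above:

```javascript
// Returns true when the server reports that conversation history was truncated
// (only applies to the keyless /v1/try/evaluate demo endpoint).
function wasTruncated(response) {
  return Boolean(response.metadata && response.metadata.messages_truncated);
}
```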
Response Structure
Every response includes:
- risks[] — Identified risks with subject, type, severity, imminence, features
- rationale — Chain-of-thought reasoning explaining the assessment
- speaker_severity — Max severity from self-directed risks (none → critical)
- speaker_imminence — Corresponding imminence (not_applicable → emergency)
- show_resources — Boolean flag for resource display
- resources — Matched crisis helplines with why explanations
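An illustrative response assembled from the fields above (values are examples, not verbatim API output; the full resource shape appears in the worked examples):

```json
{
  "risks": [
    {
      "type": "suicide",
      "subject": "other",
      "severity": "moderate",
      "imminence": "subacute",
      "features": ["passive_ideation"]
    }
  ],
  "rationale": "...",
  "speaker_severity": "none",
  "speaker_imminence": "not_applicable",
  "show_resources": true,
  "resources": { "primary": { "...": "..." }, "secondary": [] }
}
```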
Risk Structure
Each item in the risks[] array contains:
```
{
  "type": "suicide" | "self_harm" | "self_neglect" | "violence" | "abuse" | ...,
  "subject": "self" | "other",
  "severity": "none" | "mild" | "moderate" | "high" | "critical",
  "imminence": "not_applicable" | "chronic" | "subacute" | "urgent" | "emergency",
  "features": string[]  // Evidence features (hopelessness, plan_present, etc.)
}
```

Rationale
The rationale field contains chain-of-thought reasoning explaining how the assessment was made.
This provides transparency for audit trails and helps you understand why specific risks were detected.
Using Additional Context
Add user_context to provide information about your app:
```json
{
  "messages": [...],
  "user_context": "This is a teen mental health app. The user has been engaging for 3 weeks.",
  "config": { "country": "US" }
}
```

Worked Example: Third-Party Concern
Scenario
A bystander concern scenario, where the speaker is worried about someone else, not themselves.
Request
```json
{
  "messages": [
    {
      "role": "user",
      "content": "My friend posted 'I want to die' on Instagram. I'm really worried about her. What should I do?"
    }
  ],
  "config": {
    "country": "US"
  }
}
```

Rationale
"rationale": "The user is concerned about a friend who posted suicidal ideation on social media. The friend expresses passive ideation ('I want to die'). Subject attribution: other (the friend), not the speaker." Chain-of-thought reasoning explaining subject attribution and risk assessment.
Risks
"risks": [
{
"type": "suicide",
"subject": "other",
"severity": "moderate",
"imminence": "subacute",
"features": ["passive_ideation", "social_isolation"]
}
] Key: subject: "other" means the friend is at risk, not the speaker.
Speaker Summary
"speaker_severity": "none",
"speaker_imminence": "not_applicable",
"show_resources": true Critical distinction: speaker_severity: "none" even though risk exists.
speaker_severity only reflects subject: "self" risks. Use for: "Should I show crisis intervention to THIS user?" Check individual risks for third-party concerns.
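The relationship between risks[] and speaker_severity can be sketched as follows. The severity ordering comes from the scale documented above; the function name and implementation are ours, shown only to make the aggregation rule concrete.

```javascript
// speaker_severity is the maximum severity among subject: "self" risks only.
const SEVERITY_ORDER = ['none', 'mild', 'moderate', 'high', 'critical'];

function speakerSeverity(risks) {
  let max = 'none';
  for (const r of risks) {
    if (r.subject !== 'self') continue; // third-party risks never raise it
    if (SEVERITY_ORDER.indexOf(r.severity) > SEVERITY_ORDER.indexOf(max)) {
      max = r.severity;
    }
  }
  return max;
}
```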
Resources
"resources": {
"primary": {
"type": "crisis_line",
"name": "988 Suicide & Crisis Lifeline",
"phone": "988",
"availability": "24/7",
"is_24_7": true,
"why": "Primary national crisis line for suicidal ideation concerns."
},
"secondary": []
} Matched crisis resources with why explanations for relevance.
Worked Example: Harm Encouragement
Scenario
Harm encouragement: the speaker is encouraging dangerous behavior toward others.
"risks": [
{
"type": "self_harm",
"subject": "other",
"severity": "high",
"imminence": "subacute",
"features": ["dangerous_challenge_content"]
}
] When someone encourages harmful behavior, the person being encouraged is at risk.
This is why subject: "other" even though no "friend" is mentioned.
"speaker_severity": "none",
"speaker_imminence": "not_applicable",
"show_resources": true,
"rationale": "Speaker is encouraging a dangerous challenge that has caused deaths. The person being encouraged is at risk." Pattern: speaker_severity: "none" despite severity: "high" in the risk.
The speaker isn't at risk—they're the source of risk to others. Useful for content moderation, UK Online Safety Act compliance, and detecting AI collusion with harmful ideas.
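For a moderation pipeline, that pattern (speaker not at risk, but a high-severity subject: "other" risk present) is the signal to route for review rather than crisis intervention. A heuristic sketch; the function name and thresholds are ours:

```javascript
// Flag for moderation review when the speaker is a source of risk to others,
// not themselves at risk. Severity threshold here is an illustrative choice.
function needsModerationReview(response) {
  if (response.speaker_severity !== 'none') return false; // handle as crisis instead
  return response.risks.some(
    (r) => r.subject === 'other' && (r.severity === 'high' || r.severity === 'critical')
  );
}
```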
Response Logic
How to use these fields for appropriate response handling:
```javascript
// Check if speaker needs direct crisis intervention
if (response.speaker_severity !== 'none') {
  showCrisisIntervention();
}

// Show matched resources when flagged
if (response.show_resources && response.resources) {
  showPrimaryResource(response.resources.primary);
}

// For detailed logic, iterate over risks
for (const risk of response.risks) {
  if (risk.subject === 'self') {
    showSpeakerResources(risk);
  }
  if (risk.subject === 'other') {
    showThirdPartyGuidance(risk);
  }
}
```

Error Handling
| Code | Meaning |
|---|---|
| 400 | Invalid request (missing fields, bad format) |
| 401 | Invalid or missing API key |
| 429 | Rate limit exceeded |
| 500 | Internal server error |
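A minimal client sketch wiring these status codes together. The endpoint URL and headers are taken from the curl example earlier on this page; the retry advice for 429 is a common convention, not a documented API guarantee.

```javascript
// Call /v1/evaluate and surface the documented error codes.
// Assumes a fetch-capable runtime (Node 18+ or a browser).
async function evaluate(apiKey, body) {
  const res = await fetch('https://api.nope.net/v1/evaluate', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(body),
  });
  if (res.status === 429) throw new Error('rate limited: retry with backoff');
  if (!res.ok) throw new Error(`evaluate failed: HTTP ${res.status}`);
  return res.json();
}
```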
Next Steps
- Signpost — rendering matched helplines
- Taxonomy — risk types, subjects, and features
- API Reference — complete field documentation