Crisis Screening
The /v1/screen endpoint provides cost-effective crisis detection across all 9 risk types. Returns severity, imminence, rationale, and matched crisis resources.
When to use /v1/screen: High-volume screening, regulatory compliance, real-time triage. For detailed clinical features (180+), protective factors, and legal flags, use /v1/evaluate instead.
Building for compliance? Screen is designed to support crisis detection requirements across jurisdictions. See AI safety regulations worldwide for applicable laws.
Key Differences from /v1/evaluate
| Aspect | /v1/screen | /v1/evaluate |
|---|---|---|
| Risk types | All 9 | All 9 |
| Output | Type + severity + imminence | Full clinical profile |
| Clinical features | — | 180+ (C-SSRS, HCR-20, DASH) |
| Protective factors | — | 36 (START-based) |
| Cost | $0.001 per call | $0.05 per call |
Request Limits & Truncation
Hard limits that return 400 Bad Request if exceeded:
| Limit | Value |
|---|---|
| Max messages | 100 messages |
| Max message size | 50 KB per message |
| Max text blob size | 50 KB |
Message Truncation: The screen endpoint always truncates to the last 6 messages (3 conversation turns) regardless of authentication status. This is intentional — crisis screening focuses on current state, so recent messages are most relevant. The 6-message limit balances detection accuracy with cost efficiency.
If you need full conversation history analysis, use /v1/evaluate with an API key.
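If you want to catch these limits before sending, a minimal client-side pre-check might look like the sketch below (JavaScript; the helper name and constants are illustrative, and the 6-message truncation still happens server-side regardless):
// Hypothetical pre-flight check before calling /v1/screen
const MAX_MESSAGES = 100;
const MAX_MESSAGE_BYTES = 50 * 1024; // 50 KB per message

function validateScreenPayload(messages) {
  if (messages.length > MAX_MESSAGES) {
    throw new Error(`Too many messages: ${messages.length} (max ${MAX_MESSAGES})`);
  }
  for (const m of messages) {
    const bytes = new TextEncoder().encode(m.content).length;
    if (bytes > MAX_MESSAGE_BYTES) {
      throw new Error(`Message exceeds 50 KB (${bytes} bytes)`);
    }
  }
  // Only the last 6 messages are analyzed, so trimming here just saves bandwidth.
  return messages.slice(-6);
}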
Basic Request
Send a single text or conversation messages:
curl -X POST https://api.nope.net/v1/screen \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"text": "I have been feeling really hopeless lately"}'Or with conversation history:
{
"messages": [
{ "role": "user", "content": "I've been feeling really down" },
{ "role": "assistant", "content": "I'm sorry to hear that..." },
{ "role": "user", "content": "I don't want to be here anymore" }
],
"config": {
"country": "US" // ISO country code for locale-specific resources
}
}
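For JavaScript integrations, the same call can be made with fetch. A minimal sketch (error handling omitted; the function name is illustrative):
// Sketch: send a conversation to /v1/screen with fetch
async function screenConversation(messages, apiKey) {
  const res = await fetch("https://api.nope.net/v1/screen", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({ messages, config: { country: "US" } })
  });
  return res.json();
}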
Locale-Specific Resources
By default, the endpoint returns US crisis resources (988 Lifeline). To get resources for other countries, pass an ISO country code in the config.country field:
{
"text": "I want to end it all",
"config": {
"country": "GB" // Returns UK resources (Samaritans)
}
}
Supported countries include:
- US — 988 Suicide & Crisis Lifeline
- GB — Samaritans (116 123)
- AU — Lifeline Australia (13 11 14)
- CA — 988 Suicide Crisis Helpline
- 222 countries — See talk.help for full coverage
Resources are ranked by relevance (crisis lines first, 24/7 availability prioritized) and up to 3 resources
are returned: one primary and up to two secondary.
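As a sketch, here is one way to render the primary and secondary resources in that order, using the response shape shown under Response Structure below (renderResources is an illustrative helper, not part of the API):
// Sketch: display up to 3 returned resources, primary first
function renderResources(resources) {
  const items = [resources.primary, ...(resources.secondary || [])];
  return items.map((r) => {
    const contact = r.phone || r.text_instructions || r.chat_url || r.website_url;
    return `${r.name}: ${contact}${r.is_24_7 ? " (24/7)" : ""}`;
  });
}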
Response Structure
{
"risks": [
{
"type": "suicide",
"subject": "self",
"severity": "moderate",
"imminence": "chronic",
"confidence": 0.85
}
],
"show_resources": true,
"suicidal_ideation": true,
"self_harm": false,
"rationale": "Speaker expresses active suicidal ideation without specific method",
"resources": {
"primary": {
"name": "988 Suicide & Crisis Lifeline",
"phone": "988",
"chat_url": "https://988lifeline.org/chat/",
"website_url": "https://988lifeline.org/",
"is_24_7": true
},
"secondary": [
{
"name": "Crisis Text Line",
"sms_number": "741741",
"text_instructions": "Text HOME to 741741",
"is_24_7": true
}
]
},
"request_id": "screen_1703001234567_abc123",
"timestamp": "2024-12-19T10:30:00.000Z"
}
Response Fields
| Field | Type | Description |
|---|---|---|
| risks | array | Primary output. Detected risks with type, subject, severity, imminence, and confidence |
| show_resources | boolean | Should crisis resources be displayed? (derived from risks[]) |
| suicidal_ideation | boolean | Suicidal ideation detected (derived from risks[]) |
| self_harm | boolean | Non-suicidal self-injury detected (derived from risks[]) |
| rationale | string | Brief explanation ("reasonable efforts" evidence) |
| resources | object | Crisis resources (only when show_resources=true) |
| recommended_reply | object | AI-generated supportive reply (only when config.include_recommended_reply=true and risks detected) |
| request_id | string | Unique ID for audit trail |
| timestamp | string | ISO 8601 timestamp |
Risk Object Fields
| Field | Values | Description |
|---|---|---|
| type | suicide, self_harm, self_neglect, violence, abuse, sexual_violence, neglect, exploitation, stalking | What type of harm |
| subject | self, other, unknown | Who is at risk |
| severity | none, mild, moderate, high, critical | How severe |
| imminence | not_applicable, chronic, subacute, urgent, emergency | How soon |
| confidence | 0.0 - 1.0 | Confidence in assessment |
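For reference, the same fields expressed as a JSDoc typedef (a sketch that mirrors the table above, not an official type definition):
/**
 * @typedef {Object} Risk
 * @property {"suicide"|"self_harm"|"self_neglect"|"violence"|"abuse"|"sexual_violence"|"neglect"|"exploitation"|"stalking"} type
 * @property {"self"|"other"|"unknown"} subject
 * @property {"none"|"mild"|"moderate"|"high"|"critical"} severity
 * @property {"not_applicable"|"chronic"|"subacute"|"urgent"|"emergency"} imminence
 * @property {number} confidence Confidence in the assessment, 0.0 to 1.0
 */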
Detection Logic
The risks[] array is the primary output, containing all detected risk types across all 9 categories.
The show_resources field is true when any risk has:
- severity ≥ mild, AND
- subject is self or unknown (for self-directed risks), OR
- the speaker is a perpetrator (e.g., violence toward others)
Third-party concerns like "my friend is suicidal" use subject=other and don't trigger resources
since the speaker isn't the one in crisis.
The legacy boolean flags (suicidal_ideation, self_harm) are derived from the risks[] array for backward compatibility.
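The server computes show_resources for you, but if you want to apply your own policy on top of risks[] (for example, routing to a human reviewer), a sketch like the following works; the threshold and helper name are illustrative:
// Sketch: find self-directed risks at or above a chosen severity
const SEVERITY_ORDER = ["none", "mild", "moderate", "high", "critical"];

function selfDirectedRisksAtLeast(risks, minSeverity) {
  const min = SEVERITY_ORDER.indexOf(minSeverity);
  return risks.filter(
    (r) =>
      (r.subject === "self" || r.subject === "unknown") &&
      SEVERITY_ORDER.indexOf(r.severity) >= min
  );
}

// e.g. escalate to a human reviewer on high or critical self-directed risk
const escalate = selfDirectedRisksAtLeast(response.risks, "high").length > 0;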
Self-Harm (NSSI)
Non-suicidal self-injury is tracked separately. Someone may engage in self-harm without suicidal intent, in which case:
- suicidal_ideation: false
- self_harm: true
- show_resources: true
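A small sketch of branching on that combination to tailor messaging (the banner copy is illustrative):
// Sketch: tailor messaging for NSSI without suicidal ideation
if (response.self_harm && !response.suicidal_ideation) {
  showBanner("Support for self-harm is available: " + response.resources.primary.name);
}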
Recommended Reply (Optional)
Enable config.include_recommended_reply to generate an AI-written supportive response
when risks are detected. The reply references the matched crisis resources by name.
When is a reply generated? Only when show_resources=true (risks detected).
If no risks are detected, the recommended_reply field is omitted.
Example request with reply generation:
{
"text": "I've been cutting myself to cope",
"config": {
"country": "US",
"include_recommended_reply": true
}
}
Example response with recommended_reply:
{
"show_resources": true,
"self_harm": true,
"suicidal_ideation": false,
"recommended_reply": {
"content": "I hear that you've been using cutting as a way to cope with difficult feelings. That takes courage to share. The 988 Suicide & Crisis Lifeline is available 24/7 if you'd like to talk to someone—you can call or text 988. I'm here if you want to keep talking.",
"source": "llm_generated"
},
"resources": { ... }
}
The reply is designed to:
- Validate the person's experience before suggesting resources
- Match tone to severity (calm for moderate, more urgent for critical)
- Reference actual crisis resources by name (988 Lifeline, Samaritans, etc.)
- Avoid toxic positivity ("It gets better!", "Stay positive!")
- Invite continued conversation
Cost: Generating a reply adds approximately $0.0005 to the request cost, and is only charged when a reply is actually generated.
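In practice you might use the generated reply when present and fall back to your own copy otherwise. A sketch (the fallback wording is illustrative):
// Sketch: prefer the generated reply, fall back to a static template
function supportiveReply(response) {
  if (response.recommended_reply) {
    return response.recommended_reply.content;
  }
  if (response.show_resources) {
    const primary = response.resources.primary;
    return `If you'd like to talk to someone, ${primary.name} is available${primary.is_24_7 ? " 24/7" : ""}.`;
  }
  return null; // no risks detected; continue the normal conversation
}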
Audit Trail
For compliance logging, every response includes:
- request_id — Unique identifier (e.g., sb243_1703001234567_abc123)
- timestamp — ISO 8601 timestamp
Store these with your conversation logs for compliance records.
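A sketch of attaching these fields to your own conversation log entry (db, screeningLogs, and conversationId are illustrative placeholders for whatever storage layer you use):
// Sketch: persist the screening result alongside the conversation turn
await db.screeningLogs.insert({
  conversation_id: conversationId,   // your own identifier
  request_id: response.request_id,   // unique per screening call
  screened_at: response.timestamp,   // ISO 8601, from the API
  show_resources: response.show_resources,
  risks: response.risks
});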
Displaying Resources
When show_resources is true, display the provided crisis resources:
if (response.show_resources) {
  // Display the primary crisis line
  showBanner(response.resources.primary.name + ": " + response.resources.primary.phone);
}
Error Handling
| Code | Meaning |
|---|---|
| 400 | Invalid request (need either text or messages) |
| 401 | Invalid or missing API key |
| 500 | Internal server error |
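A minimal sketch of handling these status codes around a fetch call (the wrapper function and error messages are illustrative):
// Sketch: map /v1/screen error statuses to actionable handling
async function screenWithHandling(payload, apiKey) {
  const res = await fetch("https://api.nope.net/v1/screen", {
    method: "POST",
    headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify(payload)
  });
  if (res.status === 400) throw new Error("Invalid request: send either text or messages, within size limits");
  if (res.status === 401) throw new Error("Invalid or missing API key");
  if (res.status >= 500) throw new Error("Server error: retry with backoff");
  return res.json();
}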
Next Steps
- API Reference — complete field documentation
- Evaluation API — for comprehensive multi-domain safety
- Widget Integration — embedding crisis resources