Integration Patterns
Practical recipes for integrating NOPE into your application. Choose the pattern that fits your risk tolerance and user context.
All examples assume an initialized client: const client = new NopeClient({ apiKey: '...' }) (see Quickstart for setup).

Which endpoint do I need?

/v1/screen ($0.001)
Lightweight crisis detection. Returns a yes/no referral decision with matched resources.
Use for: SB243 compliance, basic safety layer, high-volume screening

/v1/evaluate ($0.05)
Full risk assessment. Returns severity, imminence, risk types, features, and recommended responses.
Use for: Mental health apps, companion AI, nuanced response handling
Pattern 1: Screen Every Message
The simplest integration for regulatory compliance. Call /v1/screen on every user message,
wait for the response, then show crisis resources if needed.
Best for: SB243 compliance, companion AI, any chat interface where you need to catch acute crisis.
// On every user message
const response = await fetch('https://api.nope.net/v1/screen', {
method: 'POST',
headers: {
'Authorization': 'Bearer ' + API_KEY,
'Content-Type': 'application/json'
},
body: JSON.stringify({
messages: conversationHistory, // Include full context
config: { user_country: userCountry }
})
});
const result = await response.json();
if (result.show_resources) {
// Show crisis resources before/alongside AI response
showCrisisResources(result.resources);
}

Key points:
- Include as much conversation context as possible—more context improves accuracy
- The response is fast (~100-200ms) so waiting inline is usually fine
- Resources are pre-matched to the user's country and situation
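
showCrisisResources above is your own UI code. A minimal rendering sketch for the browser, assuming each matched resource exposes the contact fields used in the Resource Display Priority example later in this guide (phone, sms_number, chat_url, website_url); the name field is an assumption:

// Minimal sketch of a resource renderer; only phone/sms_number/chat_url/website_url
// appear elsewhere in this guide, and `name` is an assumed field
function showCrisisResources(resources: any[]) {
  const container = document.getElementById('crisis-banner');
  if (!container) return;

  container.innerHTML = '';
  for (const resource of resources) {
    const card = document.createElement('div');
    card.className = 'crisis-resource';
    const label = resource.name ?? 'Crisis support';
    const contact = resource.phone ?? resource.sms_number ?? resource.chat_url ?? resource.website_url;
    card.textContent = `${label}: ${contact}`;
    container.appendChild(card);
  }
  container.hidden = false;
}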
Pattern 2: Evaluate with Graduated Response
For sensitive contexts where you need nuanced handling based on severity. Evaluate every message and adjust your response accordingly.
Best for: Teen mental health apps, therapy companions, platforms with vulnerable populations.
// Higher-sensitivity: evaluate every message
const result = await client.evaluate({
messages: conversationHistory,
config: { user_country: 'US', user_age_band: 'minor' }
});
// Graduated response based on severity
switch (result.summary.speaker_severity) {
case 'critical':
case 'high':
// Interrupt flow, show prominent resources
pauseAndShowCrisisUI(result);
break;
case 'moderate':
// Show resources in sidebar, continue conversation
showSidebarResources(result.crisis_resources);
break;
case 'mild':
// Subtle indicator, log for review
showSubtleIndicator();
logForReview(result);
break;
}

Response options by severity:
- critical/high: Interrupt the conversation, show prominent crisis UI, consider pausing AI responses
- moderate: Show resources in sidebar or dedicated area, continue conversation with awareness
- mild: Subtle indicator, log for human review, optionally adjust AI tone
- none: Continue normally
Pattern 3: Background Evaluation
Evaluate asynchronously—every N messages, periodically, or triggered by heuristics. Update the UI or adjust AI behavior without blocking the conversation.
Best for: Lower-risk contexts, cost optimization, supplementing real-time screening with deeper analysis.
// Background evaluation: every N messages or periodically
async function backgroundEvaluate(conversationId: string) {
const conversation = await getConversation(conversationId);
const result = await client.evaluate({
messages: conversation.messages,
config: {
user_country: conversation.userCountry,
conversation_id: conversationId,
end_user_id: conversation.userId
}
});
if (result.summary.speaker_severity !== 'none') {
// Update UI asynchronously
await updateConversationUI(conversationId, {
showResources: true,
resources: result.crisis_resources
});
// Optionally adjust AI behavior
await updateSystemPrompt(conversationId, {
crisisAwareness: true,
severity: result.summary.speaker_severity
});
}
}

What you can do with background results:
- Show resources in a separate area of the UI (not blocking conversation)
- Adjust the AI's system prompt to be more aware of the emotional context
- Add slight latency to responses to allow for human-like thoughtfulness
- Trigger human review workflows for flagged conversations
- Update risk dashboards for your support team
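
One way to wire this up is a fire-and-forget trigger that calls backgroundEvaluate every N messages; EVAL_INTERVAL and onUserMessage are illustrative names in your own app, not part of the NOPE SDK:

// Illustrative trigger: run the background evaluation every N messages without blocking chat
// (EVAL_INTERVAL and onUserMessage are names in your own app, not the NOPE SDK)
const EVAL_INTERVAL = 5;
const messageCounts = new Map<string, number>();

async function onUserMessage(conversationId: string, message: string) {
  const count = (messageCounts.get(conversationId) ?? 0) + 1;
  messageCounts.set(conversationId, count);

  if (count % EVAL_INTERVAL === 0) {
    // Fire and forget so the chat response is never delayed
    backgroundEvaluate(conversationId).catch(err =>
      console.error('Background evaluation failed', err)
    );
  }
}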
Pattern 4: Non-Chat Text Fields
NOPE works anywhere users can enter free text—not just chat. Profile bios, journal entries, forum posts, feedback forms, support tickets.
Best for: User profiles, content moderation, support workflows, any user-generated content.
// Any user-generated text field
async function onFormSubmit(formData: FormData) {
const userBio = formData.get('bio') as string;
const result = await client.screen({
text: userBio, // Single text field, not messages array
config: { user_country: detectCountry() }
});
if (result.show_resources) {
// Show resources on profile page
// Or trigger outreach workflow
}
}

Use the text parameter instead of messages for single text blobs.
Both /v1/screen and /v1/evaluate accept either format.
Pattern 5: Webhooks for Alerting
Configure webhooks to receive real-time alerts when risk thresholds are crossed. Useful for human review workflows and monitoring dashboards.
// Configure webhook for real-time alerts
// Set webhook URL in dashboard.nope.net
// Your webhook endpoint receives:
{
"event": "risk_detected",
"severity": "high",
"conversation_id": "conv_123",
"end_user_id": "user_456",
"timestamp": "2024-01-15T10:30:00Z",
"summary": {
"speaker_severity": "high",
"primary_concerns": "..."
}
}

See Webhooks Guide for setup and configuration.
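
A minimal receiver sketch using Express; the route path and the notifyReviewTeam helper are placeholders, and any signature verification NOPE requires is covered in the Webhooks Guide rather than shown here:

// Minimal Express receiver for the payload above; verify signatures per the Webhooks Guide
import express from 'express';

const app = express();
app.use(express.json());

// Placeholder for your alerting integration (Slack, PagerDuty, internal dashboard, ...)
async function notifyReviewTeam(conversationId: string, summary: unknown) { /* ... */ }

app.post('/webhooks/nope', async (req, res) => {
  const event = req.body;

  if (event.event === 'risk_detected' && ['high', 'critical'].includes(event.severity)) {
    await notifyReviewTeam(event.conversation_id, event.summary);
  }

  res.sendStatus(200); // Acknowledge quickly; do heavy work out of band
});

app.listen(3000);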
Combining Patterns
These patterns aren't mutually exclusive. A common setup (sketched below):
1. /screen on every message for baseline compliance: fast, cheap, and catches acute crisis
2. /evaluate in the background every 5-10 messages for deeper analysis and UI adjustments
3. Webhooks for moderate+ severity to alert human reviewers
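
A sketch of steps 1 and 2 together, reusing client from the Quickstart and the showCrisisResources and updateConversationUI helpers from Patterns 1 and 3; the country code is hardcoded for brevity:

// Combined setup: inline /screen on every message, background /evaluate every 5 messages
async function handleUserMessage(
  conversationId: string,
  messages: { role: string; content: string }[],
  messageCount: number
) {
  // 1. Baseline compliance: fast, inline screen
  const screen = await client.screen({
    messages,
    config: { user_country: 'US' }
  });
  if (screen.show_resources) {
    showCrisisResources(screen.resources);
  }

  // 2. Deeper analysis: non-blocking evaluate every 5 messages
  if (messageCount % 5 === 0) {
    client.evaluate({
      messages,
      config: { user_country: 'US', conversation_id: conversationId }
    })
      .then(result => {
        if (result.summary.speaker_severity !== 'none') {
          updateConversationUI(conversationId, {
            showResources: true,
            resources: result.crisis_resources
          });
        }
      })
      .catch(err => console.error('Background evaluate failed', err));
  }

  // 3. Webhooks for moderate+ severity are configured in the dashboard (no code needed here)
}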
Context Matters
The more conversation context you provide, the better the assessment. A message like "I can't do this anymore" means something very different depending on what came before.
Less accurate:
messages: [
  { role: "user", content: "I can't do this anymore" }
]

More accurate:
messages: [
  // Include prior context
  { role: "user", content: "My partner left me" },
  { role: "assistant", content: "..." },
  { role: "user", content: "I can't do this anymore" }
]

Recommendation: Send the full conversation history, or at minimum the last 10-20 messages. The API handles context truncation internally if needed.
What To Do With Results
| If you detect... | Consider... |
|---|---|
| show_resources: true | Show crisis resources. Render the resources object or use the embedded widget. |
| speaker_severity: "high" or "critical" | Interrupt flow, prominent crisis UI, consider pausing AI, alert human reviewers. |
| any_third_party_risk: true | Show "how to help someone" guidance. The user isn't at risk, but someone they know might be. |
| legal_flags present | Review for reporting obligations. IPV, child safeguarding, or specific threats may have legal implications. |
| imminence: "emergency" | Prioritize immediate response. Show emergency services alongside crisis lines. |
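
One way to apply this table is a single dispatcher that maps a result onto a list of actions for your own UI and workflow handlers. The field paths below follow the table but should be confirmed against the Evaluation API Guide:

// Illustrative dispatcher: map an evaluation result onto actions for your own handlers.
// Field paths follow the table above; confirm them against the Evaluation API Guide.
function actionsForResult(result: any): string[] {
  const actions: string[] = [];
  const severity = result.summary?.speaker_severity;

  if (result.show_resources) actions.push('show_crisis_resources');
  if (severity === 'high' || severity === 'critical') actions.push('interrupt_and_alert_reviewers');
  if (result.summary?.any_third_party_risk) actions.push('show_how_to_help_someone_guidance');
  if (result.legal_flags) actions.push('review_reporting_obligations');
  if (result.summary?.imminence === 'emergency') actions.push('show_emergency_services');

  return actions;
}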
Decision Guidance
Detailed recommendations for what actions to take based on severity, confidence, and imminence levels.
Severity-Based Actions
Use these recommended actions based on summary.speaker_severity:
| Severity | Recommended Action | Show Resources | Block AI Response? |
|---|---|---|---|
| critical | Immediate intervention - Show crisis resources prominently with urgent messaging | ✅ Prominent, top of screen | ✅ Yes if imminence: 'emergency' |
| high | Urgent care needed - Show resources with strong recommendation to reach out | ✅ Prominent | Consider yes |
| moderate | Support recommended - Show resources as helpful option | ✅ Less prominent (footer/sidebar) | ❌ No |
| mild | Monitor - Optionally show resources, log for patterns | ⚠️ Optional | ❌ No |
| none | Normal interaction - No intervention needed | ❌ No | ❌ No |
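
The "Block AI Response?" column can be expressed as a small policy helper. This is one reasonable interpretation, a product decision rather than an API feature:

// One reasonable blocking policy derived from the table above (a product decision, not an API feature)
function shouldBlockAIResponse(severity: string, imminence: string): boolean {
  if (severity === 'critical') {
    return imminence === 'emergency';                           // block when risk is happening now
  }
  if (severity === 'high') {
    return imminence === 'emergency' || imminence === 'urgent'; // the table's "consider yes"
  }
  return false;                                                 // moderate and below: never block
}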
Confidence Thresholds
Risk Confidence
How much to trust the overall risk assessment:
| Confidence | Interpretation | Action |
|---|---|---|
| > 0.8 | High confidence - strong evidence | Act with confidence |
| 0.6-0.8 | Moderate confidence - reasonable evidence | Act but monitor for false positives |
| 0.4-0.6 | Low confidence - ambiguous signals | Log for monitoring, show resources conservatively |
| < 0.4 | Very low confidence - unclear | Treat as uncertain, minimal intervention |
Subject Confidence
How certain the API is about WHO is at risk (self vs other):
| subject_confidence | subject: "self" | subject: "other" |
|---|---|---|
| > 0.8 | High confidence speaker is at risk → Show resources for them | High confidence about third party → Show resources but don't block speaker |
| 0.5-0.8 | Moderate confidence → Show resources but less aggressively | Moderate confidence → May be ambiguous |
| < 0.5 | Uncertain → Likely asking about others or hypothetical → Log but don't intervene strongly | Uncertain → May actually be self-concern in disguise |
Key Insight
Low subject_confidence with subject: "self" often indicates "asking for a friend" scenarios. Don't block content, but make resources available.
Imminence-Based Actions
| Imminence | Timeframe | Recommended Action |
|---|---|---|
| emergency | Happening NOW | Show emergency services (911/999), consider blocking AI response, log for immediate human review |
| urgent | Next 24-48h | Show 24/7 crisis lines prominently, suggest immediate outreach |
| subacute | Days-weeks | Show crisis resources and support services, monitor |
| chronic | Weeks-months | Show support resources, mental health services |
| not_applicable | No active risk | No intervention needed |
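
A sketch of mapping imminence onto which resource types to surface first; the category labels here are illustrative, not values returned by the API:

// Illustrative mapping from imminence to which resource types to surface first
// (the category labels are illustrative, not API values)
function resourceTypesForImminence(imminence: string): string[] {
  switch (imminence) {
    case 'emergency':
      return ['emergency_services', 'crisis_line'];     // 911/999 alongside crisis lines
    case 'urgent':
      return ['crisis_line'];                           // 24/7 lines, suggest immediate outreach
    case 'subacute':
      return ['crisis_line', 'support_service'];
    case 'chronic':
      return ['support_service', 'mental_health_service'];
    default:
      return [];                                        // not_applicable: no intervention
  }
}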
Legal Flags - Recommended Actions
When legal flags are present:
| Flag | Condition | Recommended Action |
|---|---|---|
| ipv.indicated | strangulation: true | CRITICAL - Show IPV resources immediately, consider logging for safety team review |
| ipv.lethality_risk | "extreme" or "severe" | Show IPV + emergency resources, strong language about danger |
| safeguarding_concern | context: "minor_involved" or "csa" | Log for compliance review (platform-dependent), show appropriate resources |
| third_party_threat | tarasoff_duty: true | LEGAL OBLIGATION - May require reporting (jurisdiction-dependent), consult legal counsel |
Important Legal Note
NOPE flags patterns that may trigger obligations. Platforms must determine their own legal obligations based on jurisdiction and role. AI systems are not mandatory reporters under current statutes.
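
A sketch of legal-flag triage; the legal_flags object shape here mirrors the field names in the table above and should be confirmed against the Evaluation API Guide:

// Illustrative legal-flag triage; the legal_flags shape is assumed from the table above
function legalFlagEscalations(legalFlags: any): string[] {
  const escalations: string[] = [];
  if (!legalFlags) return escalations;

  if (legalFlags.ipv?.indicated && legalFlags.ipv?.strangulation) {
    escalations.push('ipv_strangulation');              // critical: show IPV resources immediately
  }
  if (['extreme', 'severe'].includes(legalFlags.ipv?.lethality_risk)) {
    escalations.push('ipv_lethality_risk');             // IPV + emergency resources
  }
  if (legalFlags.safeguarding_concern) {
    escalations.push('safeguarding_review');            // platform-dependent compliance review
  }
  if (legalFlags.third_party_threat?.tarasoff_duty) {
    escalations.push('possible_reporting_duty');        // jurisdiction-dependent; consult counsel
  }

  return escalations;
}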
Code Examples
Handling Ambiguous Risk Attribution
When subject_confidence is low, the API is uncertain WHO is at risk:
for (const risk of result.risks) {
if (risk.subject === 'self' && risk.subject_confidence < 0.7) {
// AMBIGUOUS: Might be asking about others ("my friend is suicidal")
// Don't block content, but show resources less aggressively
console.log('Ambiguous self-risk detected');
// Log for monitoring but don't treat as definite self-risk
}
if (risk.subject === 'other' && risk.subject_confidence > 0.8) {
// CLEAR: User is concerned about someone else
// Show resources but don't block their ability to seek help
console.log('Third-party concern detected');
}
}

Resource Display Priority
When showing crisis resources, display contact methods in this order:
function getPreferredContact(resource) {
// 1. Phone if 24/7 and currently open
if (resource.is_24_7 && resource.phone) {
return { type: 'phone', value: resource.phone, label: 'Call' };
}
// 2. Check open_status for non-24/7 resources
if (resource.open_status?.is_open && resource.phone) {
return { type: 'phone', value: resource.phone, label: 'Call Now' };
}
// 3. Text/SMS if available
if (resource.sms_number) {
return { type: 'sms', value: resource.sms_number, label: 'Text' };
}
// 4. Chat if available
if (resource.chat_url) {
return { type: 'chat', value: resource.chat_url, label: 'Chat' };
}
// 5. Regional messaging apps
if (resource.whatsapp_url) return { type: 'whatsapp', value: resource.whatsapp_url };
if (resource.telegram_url) return { type: 'telegram', value: resource.telegram_url };
if (resource.line_url) return { type: 'line', value: resource.line_url };
if (resource.wechat_id) return { type: 'wechat', value: resource.wechat_id };
// 6. Email/Website as fallback
return { type: 'website', value: resource.website_url, label: 'Visit Website' };
}

Combining Severity + Confidence
function shouldShowResources(summary, overallConfidence) {
// High severity always shows resources regardless of confidence
if (summary.speaker_severity === 'critical' || summary.speaker_severity === 'high') {
return true;
}
// Moderate severity with decent confidence
if (summary.speaker_severity === 'moderate' && overallConfidence > 0.6) {
return true;
}
// Mild severity only with high confidence
if (summary.speaker_severity === 'mild' && overallConfidence > 0.8) {
return true; // Optional
}
return false;
}

Guardrails AI Integration
If you use Guardrails AI for LLM validation,
NOPE provides an official validator that wraps /v1/screen.
import openai
from guardrails import Guard
from nope_crisis_screen import CrisisScreen
guard = Guard().use(
CrisisScreen(severity_threshold="moderate"),
on="messages" # Validate user input
)
response = guard(
openai.chat.completions.create,
model="gpt-4",
messages=[{"role": "user", "content": user_message}],
)

Install with pip install nope-crisis-screen. Supports on_fail="fix" to auto-respond with crisis resources, custom handlers, and all standard Guardrails actions. See the validator repo for full documentation.
Next Steps
- Crisis Screening Guide — /screen endpoint details
- Evaluation API Guide — /evaluate endpoint details
- Crisis Resources — rendering matched helplines
- Widget Integration — embed pre-built crisis UI