
NOPE API Reference

Safety layer for chat & LLMs. Analyze conversations for mental health and safeguarding risk.

Base URL: https://api.nope.net API Version: v1 (current)


Quick Start

curl -X POST https://api.nope.net/v1/evaluate \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "I feel hopeless"}],
    "config": {"user_country": "US"}
  }'

Get your API key at dashboard.nope.net.
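
The same call in TypeScript using plain fetch (Node 18+); a minimal sketch that assumes your key is in a NOPE_API_KEY environment variable:

// Minimal /v1/evaluate call with fetch. NOPE_API_KEY is an assumed env variable name.
const res = await fetch("https://api.nope.net/v1/evaluate", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.NOPE_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    messages: [{ role: "user", content: "I feel hopeless" }],
    config: { user_country: "US" },
  }),
});
const result = await res.json();
console.log(result.summary.primary_concerns);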

For integration patterns and end-to-end examples, see the Integration Patterns Guide.


Authentication

Most endpoints require a Bearer token:

Authorization: Bearer nope_live_xxxxxx

Key types:

  • nope_live_* - Production keys
  • nope_test_* - Test keys (rate limited)

Resource endpoints (require API key):

  • GET /v1/resources — Basic crisis resources by country (free)
  • GET /v1/resources/smart — AI-ranked crisis resources ($0.001/call)

Public endpoints (no auth required):

  • GET /v1/resources/:id — Single resource by database ID (for widget embeds)
  • GET /v1/resources/countries — List supported countries
  • GET /v1/resources/detect-country — IP-based country detection
  • GET /v1/try/resources/smart — Demo AI-ranked resources (rate-limited, max 5 results)

API Limits & Quotas

Request Size Limits

These hard limits apply to all requests and return 400 Bad Request if exceeded:

| Limit | Value | Applies To |
|---|---|---|
| Max message count | 100 messages | /v1/evaluate, /v1/screen |
| Max message size | 50 KB per message | /v1/evaluate, /v1/screen |
| Max text blob size | 50 KB | /v1/evaluate, /v1/screen (when using text field) |
| Max query length | 500 characters | /v1/resources/smart |

Message Truncation

To control costs and focus on relevant context, NOPE truncates conversation history in certain scenarios.

/v1/evaluate

| Access Level | Truncation Behavior |
|---|---|
| With API key | No truncation — full message history retained |
| Without API key (try endpoint) | Last 10 messages, max 500 tokens (~2000 chars) per message |

When truncation occurs, the response includes metadata.messages_truncated: true.

/v1/screen

Always truncates to the last 6 messages, regardless of authentication status.

This is intentional: crisis screening focuses on current state, so recent messages are most relevant. The 6-message limit (3 conversation turns) balances detection accuracy with cost efficiency.

Note: If you need full conversation history analysis, use /v1/evaluate with an API key.

Resource Limits

| Endpoint | Parameter | Limit |
|---|---|---|
| /v1/resources | limit | Max 10 |
| /v1/resources/smart | limit | Max 10 |
| /v1/resources/smart | query | Max 500 characters |
| /v1/try/resources/smart | limit | Max 5 (lower for demo) |

Try Endpoints

The /v1/try/* demo endpoints have additional restrictions:

| Restriction | Value |
|---|---|
| Rate limit | 10 requests/minute/IP |
| Message truncation | Always applied (10 messages max) |
| Resource results | Max 5 (vs 10 for authenticated) |
| Debug info | Never included |
| Custom models | Not available |
| Multiple judges | Not available |

Try endpoints are for API exploration and demos. For production use, get an API key at dashboard.nope.net.

Rate Limits

Authenticated endpoints have per-user rate limits to ensure fair usage. Limits are generous for normal usage patterns.

| Endpoint | Rate Limit |
|---|---|
| /v1/evaluate | 100 requests/min |
| /v1/screen | 500 requests/min |
| /v1/oversight/analyze | 50 requests/min |
| /v1/oversight/ingest | 10 requests/min |
| /v1/resources | 200 requests/min |
| /v1/resources/smart | 100 requests/min |
| /v1/webhooks/* | 30 requests/min |

Rate limit headers are included on all responses:

X-RateLimit-Limit: 100
X-RateLimit-Remaining: 99
X-RateLimit-Reset: 1704067200000

When exceeded, returns 429 Too Many Requests with a Retry-After header:

{
  "error": "rate_limit_exceeded",
  "message": "Rate limit exceeded. Please retry after 45 seconds.",
  "retry_after_seconds": 45
}

Note: Rate limits apply per user (by API key). If you need higher limits, contact us.
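
A minimal retry sketch in TypeScript that honors the 429 body above; the helper name and retry cap are illustrative, not part of the API:

// Retry a POST when rate-limited, waiting the server-suggested number of seconds.
async function postWithRetry(url: string, body: unknown, apiKey: string, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, {
      method: "POST",
      headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
      body: JSON.stringify(body),
    });
    if (res.status !== 429 || attempt >= maxRetries) return res;
    const err = await res.json();                      // { error, message, retry_after_seconds }
    const waitMs = (err.retry_after_seconds ?? 60) * 1000;
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
}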


POST /v1/evaluate

Primary API endpoint. Analyzes a conversation for risk using an orthogonal subject/type taxonomy. Returns a detailed assessment with risks, communication style, and matched crisis resources.

Key Concepts: Subject × Type

NOPE uses an orthogonal design separating WHO is at risk from WHAT the risk is:

| Dimension | Values | Question |
|---|---|---|
| Subject | self, other, unknown | WHO is at risk? |
| Type | suicide, self_harm, violence, abuse, etc. | WHAT type of harm? |

This enables clean detection of scenarios like:

  • "I want to hurt myself" → subject: self, type: self_harm
  • "My friend is suicidal" → subject: other, type: suicide
  • "He hit me again" → subject: self, type: abuse (speaker is victim)

Evaluate Request

{
  // Provide ONE of these:
  messages?: Array<{role: 'user'|'assistant', content: string}>,
  text?: string,  // Single text blob (converted to user message)

  config?: {                         // Optional (defaults: user_country="XX", locale="en")
    user_country?: string,           // ISO 3166-1 alpha-2 (e.g., "US", "GB"), default "XX"
    locale?: string,                 // e.g., "en-US", default "en"
    user_age_band?: 'adult'|'minor'|'unknown',
    return_assistant_reply?: boolean, // Default: true
    assistant_safety_mode?: 'template'|'generate',
    conversation_id?: string,        // Your ID for webhooks
    end_user_id?: string,            // Your user ID for webhooks
  },
  user_context?: string,             // Additional context (e.g., app persona)
}

Evaluate Response

{
  // Communication assessment (how is content expressed?)
  communication: {
    styles: Array<{style: CommunicationStyle, confidence: number}>,
    language?: string,  // ISO 639-1
  },

  // Identified risks (subject × type matrix)
  risks: Array<{
    subject: 'self' | 'other' | 'unknown',
    subject_confidence: number,  // 0-1, confidence that subject is correct
    type: RiskType,
    severity: 'none' | 'mild' | 'moderate' | 'high' | 'critical',
    imminence: 'not_applicable' | 'chronic' | 'subacute' | 'urgent' | 'emergency',
    confidence: number,  // 0-1
    features: string[],  // Evidence features
  }>,

  // Summary for quick decision-making
  summary: {
    speaker_severity: Severity,     // Max severity from self-subject risks
    speaker_imminence: Imminence,   // Max imminence from self-subject risks
    any_third_party_risk: boolean,  // True if any other-subject risks
    primary_concerns: string,       // Human-readable explanation
  },

  // Legal/safeguarding flags
  legal_flags?: {
    ipv?: {
      indicated: boolean,
      strangulation: boolean,
      lethality_risk: 'standard' | 'elevated' | 'severe' | 'extreme',
      escalation_pattern: boolean,
    },
    safeguarding_concern?: {
      indicated: boolean,
      context: 'minor_involved' | 'vulnerable_adult' | 'csa' | 'infant_at_risk' | 'elder_abuse',
    },
    third_party_threat?: {
      tarasoff_duty: boolean,  // Duty to warn
      specific_target: boolean,
    },
  },

  // Protective factors
  protective_factors?: {
    protective_factors: string[],
    protective_factor_strength: 'weak' | 'moderate' | 'strong',
  },

  confidence: number,  // 0-1, overall confidence
  agreement?: number,  // 0-1, inter-judge agreement (multi-judge mode)

  crisis_resources: CrisisResource[],
  widget_url?: string,  // Reserved for future use

  recommended_reply?: {
    content: string,
    source: 'llm_generated',
    notes?: string,
  },

  filter_result?: {
    triage_level: 'none' | 'concern' | 'crisis',
    preliminary_risks: string[],
    reason: string,
  },

  metadata?: {
    access_level: string,
    is_admin: boolean,
    messages_truncated: boolean,
    input_format: string,
    api_version: 'v1',
  },
}

Example Response

{
  "communication": {
    "styles": [{"style": "direct", "confidence": 0.9}],
    "language": "en"
  },
  "risks": [
    {
      "subject": "self",
      "subject_confidence": 0.95,
      "type": "suicide",
      "severity": "moderate",
      "imminence": "chronic",
      "confidence": 0.8,
      "features": ["hopelessness", "passive_ideation"]
    }
  ],
  "summary": {
    "speaker_severity": "moderate",
    "speaker_imminence": "chronic",
    "any_third_party_risk": false,
    "primary_concerns": "User expressing feelings of hopelessness with passive suicidal ideation."
  },
  "protective_factors": {
    "protective_factors": ["help_seeking"],
    "protective_factor_strength": "weak"
  },
  "confidence": 0.8,
  "crisis_resources": [
    {
      "type": "crisis_line",
      "name": "988 Suicide and Crisis Lifeline",
      "phone": "988",
      "is_24_7": true
    }
  ],
  "recommended_reply": {
    "content": "I hear how heavy things feel right now. Those feelings of hopelessness are really difficult. Would you like to talk about what's been going on?",
    "source": "llm_generated"
  },
  "metadata": {
    "api_version": "v1"
  }
}
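
A sketch of consuming this response in TypeScript; the thresholds and the two UI hooks are application choices (placeholders), not NOPE recommendations:

// Placeholder hooks for your application.
function showResources(resources: unknown[]) { /* render crisis resources in your UI */ }
function escalateToHuman(reason: string) { /* route the conversation to human review */ }

// Act on an /v1/evaluate result (shape per the response schema above).
function handleEvaluateResult(result: any): string | undefined {
  const severity = result.summary.speaker_severity as
    "none" | "mild" | "moderate" | "high" | "critical";

  if (result.metadata?.messages_truncated) {
    console.warn("Conversation history was truncated before analysis");
  }
  if (severity !== "none" && result.crisis_resources?.length) {
    showResources(result.crisis_resources);
  }
  if (severity === "high" || severity === "critical") {
    escalateToHuman(result.summary.primary_concerns);
  }
  // Supportive reply suggested by NOPE (present when return_assistant_reply is enabled).
  return result.recommended_reply?.content;
}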

POST /v1/screen

Real-Time Safety Triage. Fast, cost-effective risk detection across all 9 risk types.

This endpoint satisfies requirements for:

  • California SB243 "Companion AI Safety Act" (effective Jan 1, 2026)
  • New York GBL Article 47 "AI Companion Models" (effective Nov 5, 2025)
  • Other jurisdictions with similar safety requirements

Screen vs Evaluate — Triage vs Assessment:

Both endpoints detect the same 9 risk types. The difference is depth, not coverage:

| Aspect | /v1/screen | /v1/evaluate |
|---|---|---|
| Risk types | All 9 | All 9 |
| Output | Type + severity + imminence | Full clinical profile |
| Clinical features | Not included | 180+ features (C-SSRS, HCR-20, DASH) |
| Protective factors | Not included | 36 factors (START-based) |
| Legal flags | Not included | 5 mandatory reporting triggers |
| Cost | $0.001 | $0.05 |
| Use case | Real-time triage | Escalation, case review, compliance |

When to use /v1/screen vs /v1/evaluate

| Use Case | Recommended Endpoint |
|---|---|
| Real-time message screening | /v1/screen |
| High-volume triage (cost-sensitive) | /v1/screen |
| Regulatory compliance (SB243, Article 47) | /v1/screen |
| Detailed risk profiling | /v1/evaluate |
| Clinical feature extraction | /v1/evaluate |
| Escalation decisions requiring detail | /v1/evaluate |

Screen Request

{
  // Provide ONE of these:
  messages?: Array<{role: 'user'|'assistant', content: string}>,
  text?: string,  // Single text input

  config?: {
    country?: string,  // ISO country code (default: 'US')
    include_recommended_reply?: boolean, // Generate AI-written supportive reply
    debug?: boolean,   // Include latency, model info
  }
}

Country codes: Use ISO 3166-1 alpha-2 codes (US, GB, AU, CA, etc.). See talk.help for full country coverage.

Screen Response

{
  // Risk detections (primary output)
  risks: Array<{
    type: RiskType,                     // suicide, self_harm, violence, abuse, etc.
    subject: 'self' | 'other' | 'unknown',
    severity: 'none' | 'mild' | 'moderate' | 'high' | 'critical',
    imminence: 'not_applicable' | 'chronic' | 'subacute' | 'urgent' | 'emergency',
    confidence: number,                 // 0-1
  }>,

  // Backward-compatible flags (derived from risks[])
  show_resources: boolean,              // true if any risk warrants resources
  suicidal_ideation: boolean,           // risks[] contains suicide with subject self/unknown
  self_harm: boolean,                   // risks[] contains self_harm with subject self/unknown

  rationale: string,                    // Brief explanation ("reasonable efforts" evidence)

  // Resources (only when show_resources = true)
  // Scope-matched to detected risk types (222 countries supported)
  resources?: {
    primary: CrisisResource,            // Main crisis line for the country
    secondary: CrisisResource[],        // Additional resources (0-2)
  },

  // Optional AI-generated supportive reply (when config.include_recommended_reply = true AND risks detected)
  recommended_reply?: {
    content: string,                    // Brief supportive response with resource references
    source: 'llm_generated',            // Always 'llm_generated'
  },

  // Audit trail (for compliance logging)
  request_id: string,                   // Unique ID (e.g., "sb243_1703001234567_abc123")
  timestamp: string,                    // ISO timestamp

  // Debug (if config.debug = true)
  debug?: {
    model: string,
    latency_ms: number,
  },
}

// CrisisResource fields (varies by resource)
interface CrisisResource {
  name: string,           // e.g., "988 Suicide & Crisis Lifeline"
  phone?: string,         // e.g., "988"
  sms_number?: string,    // e.g., "741741"
  text_instructions?: string, // e.g., "Text HOME to 741741"
  chat_url?: string,      // e.g., "https://988lifeline.org/chat/"
  website_url?: string,
  is_24_7?: boolean,
  availability?: string,  // e.g., "24/7" or "Mon-Fri 9am-5pm"
  languages?: string[],   // e.g., ["en", "es"]
  open_status?: {         // Computed from opening_hours_osm
    is_open: boolean | null,  // null if hours unknown
    next_change?: string,     // ISO timestamp of next open/close transition
    confidence: 'high' | 'low' | 'none',  // How confident we are in this status
    message?: string,     // e.g., "Open 24/7", "Closed · Opens in 2 hours"
  },
}
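
A small display sketch using only the CrisisResource fields above (the label format itself is just an example):

// Build a one-line label for a crisis resource, e.g. "988 Suicide & Crisis Lifeline · 988 · Open 24/7".
function resourceLabel(r: {
  name: string;
  phone?: string;
  text_instructions?: string;
  open_status?: { is_open: boolean | null; message?: string };
}): string {
  const contact = r.phone ?? r.text_instructions ?? "";
  const status = r.open_status?.message ?? "";   // e.g. "Open 24/7", "Closed · Opens in 2 hours"
  return [r.name, contact, status].filter(Boolean).join(" · ");
}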

Detection Logic

The risks[] array contains all detected risk types with severity, imminence, and subject attribution.

The show_resources field is true when any risk is detected with:

  • severity ≥ mild, AND
  • subject is self or unknown (for self-directed risks like suicide, self_harm, self_neglect)
  • OR any severity for perpetrator risks where the speaker is the source (violence, abuse toward others)

(Third-party concerns like "my friend is suicidal" use subject=other and don't trigger resources since the speaker isn't the one in crisis.)

Resource matching: Crisis resources are scope-matched to the detected risk types. For example:

  • suicide → suicide prevention hotlines
  • abuse → domestic violence resources
  • exploitation → human trafficking support
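
Putting this together in TypeScript, a sketch of screening one message and acting on show_resources (the console logging stands in for whatever audit and UI code your app uses):

// Screen a message, keep the audit trail, and surface resources when warranted.
async function screenMessage(text: string) {
  const res = await fetch("https://api.nope.net/v1/screen", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.NOPE_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      text,
      config: { country: "US", include_recommended_reply: true },
    }),
  });
  const result = await res.json();

  // request_id and timestamp are intended for compliance logging.
  console.log("screened", result.request_id, result.timestamp);

  if (result.show_resources && result.resources) {
    console.log(result.resources.primary.name, result.resources.primary.phone);
  }
  // Present only when risks were detected and include_recommended_reply was true.
  return result.recommended_reply?.content;
}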

Example (US - default)

curl -X POST https://api.nope.net/v1/screen \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "I have been feeling really hopeless lately"}'
{
  "risks": [
    {
      "type": "suicide",
      "subject": "self",
      "severity": "moderate",
      "imminence": "chronic",
      "confidence": 0.8
    }
  ],
  "show_resources": true,
  "suicidal_ideation": true,
  "self_harm": false,
  "rationale": "Speaker expresses passive ideation (hopelessness, wish to be dead)",
  "resources": {
    "primary": {
      "name": "988 Suicide & Crisis Lifeline",
      "phone": "988",
      "chat_url": "https://988lifeline.org/chat/",
      "is_24_7": true
    },
    "secondary": [
      {
        "name": "Crisis Text Line",
        "sms_number": "741741",
        "text_instructions": "Text HOME to 741741",
        "is_24_7": true
      }
    ]
  },
  "request_id": "sb243_1703001234567_abc123",
  "timestamp": "2024-12-19T10:30:00.000Z"
}

Example (Domestic Violence - UK)

curl -X POST https://api.nope.net/v1/screen \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "My partner hit me again last night", "config": {"country": "GB"}}'
{
  "risks": [
    {
      "type": "abuse",
      "subject": "self",
      "severity": "high",
      "imminence": "chronic",
      "confidence": 0.9
    }
  ],
  "show_resources": true,
  "suicidal_ideation": false,
  "self_harm": false,
  "rationale": "Speaker reports ongoing intimate partner violence",
  "resources": {
    "primary": {
      "name": "National Domestic Abuse Helpline",
      "phone": "0808 2000 247",
      "website_url": "https://www.nationaldahelpline.org.uk",
      "is_24_7": true
    },
    "secondary": [...]
  },
  "request_id": "sb243_1703001234567_def456",
  "timestamp": "2024-12-19T10:35:00.000Z"
}

Risk Subjects

| Subject | Description | When to use |
|---|---|---|
| self | The speaker is at risk | "I want to hurt myself" |
| other | Someone else is at risk | "My friend is suicidal", "He hit her" |
| unknown | Cannot determine with confidence | Ambiguous scenarios |

Key insight: summary.speaker_severity only considers risks where subject === 'self'. This prevents showing crisis resources to worried bystanders asking about others.
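
The same rule as a TypeScript sketch, deriving a speaker-only severity by ignoring risks attributed to others:

// Max severity across risks where the speaker is the subject (mirrors summary.speaker_severity).
const SEVERITY_ORDER = ["none", "mild", "moderate", "high", "critical"] as const;
type Severity = (typeof SEVERITY_ORDER)[number];

function speakerSeverity(risks: Array<{ subject: string; severity: Severity }>): Severity {
  return risks
    .filter((r) => r.subject === "self")
    .reduce<Severity>(
      (max, r) => (SEVERITY_ORDER.indexOf(r.severity) > SEVERITY_ORDER.indexOf(max) ? r.severity : max),
      "none"
    );
}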


Risk Types (9 types)

| Type | Description |
|---|---|
| suicide | Self-directed lethal intent - thoughts, plans, or attempts to end one's life |
| self_harm | Non-suicidal self-injury (NSSI) - intentional self-harm without intent to die |
| self_neglect | Self-care failure and psychiatric emergency - eating disorders, psychosis, substance crisis, severe functional impairment, medical care refusal |
| violence | Risk of harm to others - threats, plans, or acts of violence |
| abuse | Physical, emotional, sexual, or financial abuse patterns |
| sexual_violence | Rape, sexual assault, or sexual coercion |
| neglect | Failure to care for dependents - children, elderly, vulnerable adults |
| exploitation | Trafficking, labor exploitation, sextortion, grooming |
| stalking | Persistent unwanted contact, following, surveillance |

Communication Styles (8 styles)

Communication style describes how content is expressed, orthogonal to risk level. The same crisis content can be expressed directly, through humor, via creative writing, etc.

| Style | Description |
|---|---|
| direct | Explicit, first-person present statements ("I want to die") |
| humor | Dark humor, memes, ironic expressions, Gen-Z speak |
| fiction | Creative writing, roleplay, storytelling contexts |
| hypothetical | "What if" scenarios, "asking for a friend" |
| distanced | Third-party concern, temporal distancing, past tense |
| clinical | Academic, professional, research discussion |
| minimized | Hedged language, downplaying severity |
| adversarial | Jailbreak attempts, manipulation, testing boundaries |

Why this matters:

  • Distinguish genuine crisis from dark humor
  • Identify distancing ("asking for a friend")
  • Detect adversarial attempts with embedded risk
  • Recognize minimization that may undersell risk

Severity Scale

| Level | Definition |
|---|---|
| none | No clinical concern |
| mild | Minor distress, no functional impairment |
| moderate | Clear concern, not immediately dangerous |
| high | Serious risk requiring urgent intervention |
| critical | Life-threatening, imminent harm |

Imminence Scale

| Level | Definition |
|---|---|
| not_applicable | ONLY when severity=none |
| chronic | Weeks-months, stable pattern |
| subacute | Likely escalation in days-weeks |
| urgent | Escalation likely within 24-48h |
| emergency | Happening NOW |

For detailed guidance on what actions to take based on severity, confidence, and imminence levels, see the Integration Patterns Guide.
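
As a purely illustrative sketch (your real policy should come from the Integration Patterns Guide and your own clinical/legal review), one way to map the two scales to app behavior:

// Illustrative triage mapping: thresholds are an application choice, not NOPE guidance.
type Severity = "none" | "mild" | "moderate" | "high" | "critical";
type Imminence = "not_applicable" | "chronic" | "subacute" | "urgent" | "emergency";

function triageAction(severity: Severity, imminence: Imminence): string {
  if (severity === "critical" || imminence === "emergency") return "escalate_to_human_now";
  if (severity === "high" || imminence === "urgent") return "show_resources_and_flag";
  if (severity !== "none") return "show_resources";
  return "continue";
}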


Features

Features are atomic, observable indicators returned in the features array of each risk assessment. NOPE uses a universal feature pool with 180+ indicators across these categories:

  • Ideation & Intent — C-SSRS based (passive_ideation, active_ideation, plan_present, etc.)
  • Means & Access — Lethal means availability (firearm_access, medication_access, etc.)
  • Violence — HCR-20 based (specific_threat, identifiable_target, etc.)
  • Abuse & IPV — DASH based (coercive_control, strangulation, etc.)
  • Exploitation — Trafficking, grooming, sextortion indicators
  • Neglect — Dependent care failures
  • Eating Disorder — Restriction, purging, body dysmorphia
  • Stalking — SAM based (unwanted_contact, following, etc.)
  • Clinical — Psychotic and substance features (hallucinations, withdrawal, etc.)
  • Emotional — Hopelessness, agitation, acute distress
  • Protective Factors — START based (help_seeking, social_support, etc.)
  • Context — Subject and relationship context markers

For the complete feature vocabulary with descriptions, see the User Risk Taxonomy page.


Legal Flags

The legal_flags object (returned by /v1/evaluate) captures mandatory-reporting and duty-to-warn indicators:

{
  ipv?: {
    indicated: boolean,
    strangulation: boolean,    // ANY history = 7.5x homicide risk
    lethality_risk: 'standard' | 'elevated' | 'severe' | 'extreme',
    escalation_pattern: boolean,
  },
  safeguarding_concern?: {
    indicated: boolean,
    context: 'minor_involved' | 'vulnerable_adult' | 'csa' | 'infant_at_risk' | 'elder_abuse',
  },
  third_party_threat?: {
    tarasoff_duty: boolean,    // Duty to warn may apply
    specific_target: boolean,  // Identifiable victim
  },
}

Note: safeguarding_concern surfaces patterns that may trigger statutory obligations depending on jurisdiction and organizational role. NOPE flags concerns for human review—AI systems are not mandatory reporters under any current statute.


Widget Integration

When summary.speaker_severity is not 'none', display crisis resources using the embeddable widget:

if (result.summary.speaker_severity !== 'none') {
  const iframe = document.createElement('iframe');
  iframe.src = 'https://widget.nope.net/resources?country=US&scopes=suicide,crisis';
  iframe.width = '100%';
  iframe.height = '400';
  container.appendChild(iframe); // `container` is the DOM element where the widget should render
}

See the Widget Builder for configuration options and the JavaScript API.


SDKs

Official SDKs with full type definitions:

| SDK | Package | Docs |
|---|---|---|
| Node.js | @nope-net/sdk | Node.js SDK Reference |
| Python | nope-net | Python SDK Reference |

Both SDKs support all API endpoints (evaluate, screen, oversight, resources), webhook verification, and include typed responses.


Webhooks

Receive real-time HTTP notifications when evaluations exceed configured risk thresholds.

Note: Webhooks require a minimum balance to ensure delivery reliability.

| Event | Source | Description |
|---|---|---|
| evaluate.alert | /v1/evaluate | User risk meets or exceeds your threshold |
| oversight.alert | /v1/oversight/* | AI behavior concern is high or critical |
| oversight.ingestion.complete | /v1/oversight/ingest | Batch processing completed |
| test.ping | Dashboard/API | Test event to verify endpoint |

API Routes:

| Method | Endpoint | Description |
|---|---|---|
| POST | /v1/webhooks | Create webhook |
| GET | /v1/webhooks | List webhooks |
| PUT | /v1/webhooks/:id | Update webhook |
| DELETE | /v1/webhooks/:id | Delete webhook |
| POST | /v1/webhooks/:id/test | Send test ping |

For webhook payload structures, signature verification, and integration examples, see the Webhooks Guide.
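
A minimal sketch of calling the test route above from TypeScript (the webhook ID is a placeholder; the payload your endpoint receives is described in the Webhooks Guide):

// Trigger a test.ping delivery for an existing webhook (POST /v1/webhooks/:id/test).
const webhookId = "YOUR_WEBHOOK_ID"; // placeholder
const res = await fetch(`https://api.nope.net/v1/webhooks/${webhookId}/test`, {
  method: "POST",
  headers: { Authorization: `Bearer ${process.env.NOPE_API_KEY}` },
});
console.log(res.status); // check your endpoint's logs for the delivered test event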


GET /v1/resources

Requires API key (free). Returns crisis helpline resources for a given country using scope-based filtering.

curl -H "Authorization: Bearer nope_live_xxx" \
  "https://api.nope.net/v1/resources?country=US&scopes=suicide,mental_health"

| Parameter | Type | Required | Description |
|---|---|---|---|
| country | string | Yes | ISO 3166-1 alpha-2 code |
| scopes | string | No | Comma-separated service scopes (WHAT the resource helps with) |
| populations | string | No | Comma-separated populations (WHO the resource serves) |
| urgent | boolean | No | Only 24/7 resources |
| limit | number | No | Max resources (default: 10) |

Filtering Parameters

scopes — filters by service scope (what the resource helps with):

  • ?scopes=suicide — suicide crisis resources
  • ?scopes=domestic_violence — DV resources
  • ?scopes=eating_disorder — eating disorder resources
  • ?scopes=lgbtq — LGBTQ+ specialist resources (Trevor Project, Trans Lifeline)

populations — filters by population served (who the resource serves):

  • ?populations=veterans — resources for veterans
  • ?populations=lgbtq — resources serving LGBTQ+ community
  • ?populations=youth — youth-focused resources

Combined: Both can be used together with AND logic:

  • ?scopes=suicide&populations=veterans — suicide resources specifically for veterans
  • ?scopes=domestic_violence&populations=lgbtq — DV resources serving LGBTQ+ community

Note: Invalid scope or population values return a 400 error with the invalid values listed.
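
A fetch sketch combining both filters (query parameters as documented above; the response body is just logged here rather than assuming its shape):

// Suicide-scope resources serving veterans in the US, capped at 5 results.
const params = new URLSearchParams({
  country: "US",
  scopes: "suicide",
  populations: "veterans",
  limit: "5",
});
const res = await fetch(`https://api.nope.net/v1/resources?${params}`, {
  headers: { Authorization: `Bearer ${process.env.NOPE_API_KEY}` },
});
const data = await res.json();
if (!res.ok) {
  console.error(data);  // a 400 here lists any invalid scope/population values
} else {
  console.log(data);
}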

Valid Scopes

NOPE supports 93 service scopes for filtering crisis resources (suicide, domestic_violence, eating_disorder, lgbtq, etc.).

For the complete list of all scopes with descriptions, see the Service Taxonomy page.

Valid Populations

NOPE supports 23 population filters for targeting specific demographics (veterans, youth, lgbtq, etc.).

For the complete list of all populations with descriptions, see the Service Taxonomy page.


GET /v1/resources/smart

Requires API key + balance ($0.001/call). Returns AI-ranked crisis resources using semantic search.

Use this when you have a natural language query and want the most relevant resources, not just scope-based filtering.

curl -H "Authorization: Bearer nope_live_xxx" \
  "https://api.nope.net/v1/resources/smart?country=US&query=teen+eating+disorder"

| Parameter | Type | Required | Description |
|---|---|---|---|
| country | string | Yes | ISO 3166-1 alpha-2 code |
| query | string | Yes | Natural language search query |
| scopes | string | No | Optional scope pre-filter |
| limit | number | No | Max resources (default: 10) |

Example: query=teen eating disorder prioritizes eating disorder helplines over generic crisis lines.


GET /v1/resources/:id

Public endpoint (no auth required). Fetch a single crisis resource by its database UUID. Useful for widget embeds that display a specific resource.

curl "https://api.nope.net/v1/resources/c051c06a-119f-4823-af66-894d9b934b5f"

Resource by ID Response

{
  "resource": {
    "id": "c051c06a-119f-4823-af66-894d9b934b5f",
    "name": "988 Suicide & Crisis Lifeline",
    "phone": "988",
    "is_24_7": true,
    "open_status": {
      "is_open": true,
      "next_change": null,
      "confidence": "high",
      "message": "Open 24/7"
    }
  }
}

Resource by ID Errors

| Status | Description |
|---|---|
| 400 | Invalid UUID format |
| 404 | Resource not found or disabled |

Resource by ID Use Cases

  • Single resource embeds: Display a specific helpline on a partner website
  • Deep linking: Link directly to a resource from external systems
  • Widget route: Powers the /resource/[id] widget embed URL

Pricing

NOPE uses prepaid usage-based billing — no subscriptions, no tiers.

| Endpoint | Cost |
|---|---|
| /v1/resources/smart | $0.001 |
| /v1/screen | $0.001 |
| /v1/evaluate | $0.05 |
| /v1/resources | Free |

New accounts receive $1.00 free credit. Top up via dashboard.nope.net/billing.
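At these rates, the $1.00 starting credit covers roughly 1,000 /v1/screen calls or 20 /v1/evaluate calls.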


Errors

| Code | Description |
|---|---|
| 400 | Invalid request |
| 401 | Invalid or missing API key |
| 402 | Insufficient balance |
| 429 | Rate limit exceeded |
| 500 | Internal server error |

Clinical Frameworks

| Framework | Usage |
|---|---|
| C-SSRS | Suicide severity (ideation features) |
| HCR-20 | Violence risk (violence features) |
| START | Protective factors |
| DASH | IPV risk assessment |
| Danger Assessment | IPV lethality indicators |

Support


This API supports human decision-making; it does not replace it. Always maintain human oversight for high-risk situations.