LM4VSP v1.0.0
DARPA HR0011SB20254-02 · Army SBIR 10716

Peer-First
Conversational AI
Crisis-Prevention Co-Pilot.

The LLM doesn't try to be the help. It routes the right signal to the right person at the right time.

$ reference architecture · production-ready · azure-portable

phi.processed · 0
hash.algo · SHA-256
peer.ack.window · 5min
vcl.fallback · 988
// architecture

8-Layer
Reference Stack.

Every component has a direct peer in Azure. Phase I lift is a deploy-config change — not a re-architecture.

01 Check-in input · free-text · session-id only · no identity binding
02 Serverless API gateway · workers.now() → azure_functions.future()
03 LLM classifier · RACE prompt · llama-3.3-70b → gpt-5 (azure-openai)
04 Risk routing · { none | elevated | crisis } · schema-validated
05 Peer ring ping [load-bearing] · durable_object · "Bro check on Charlie"
06 5-minute countdown · deterministic · only authenticated peer ack pauses
07 VCL fallback · 988 + 838255 · metadata-only · no transcript shared
08 Audit trace · { session_id, event } · no content · no PII
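Layer 04's schema validation can be sketched as a hard gate between the classifier and the router: the LLM's reply is parsed and checked against a fixed contract, and anything malformed is rejected rather than routed. This is a minimal illustrative sketch, not the project's actual code; the function and key names are assumptions mirroring the response shape shown in the check-in section.

```python
# Illustrative sketch of Layer 04: schema-validate the LLM classifier's
# output before any routing decision is made. All names are assumptions.
import json

RISK_LEVELS = {"none", "elevated", "crisis"}
REQUIRED_KEYS = {"risk_level", "framework_citation", "indicators", "next_action"}

def validate_classification(raw: str) -> dict:
    """Parse and schema-check a classifier response; reject anything malformed."""
    obj = json.loads(raw)
    if set(obj) != REQUIRED_KEYS:
        raise ValueError(f"unexpected key set: {set(obj) ^ REQUIRED_KEYS}")
    if obj["risk_level"] not in RISK_LEVELS:
        raise ValueError(f"unknown risk_level: {obj['risk_level']!r}")
    if not isinstance(obj["indicators"], list):
        raise ValueError("indicators must be a list")
    return obj

reply = ('{"risk_level": "none", '
         '"framework_citation": "Columbia C-SSRS step 1", '
         '"indicators": [], '
         '"next_action": "continue check-in cadence"}')
```

A failed check never degrades to a guess: the request is treated as unclassified and handled by the deterministic escalation path, not by re-prompting.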
# responsible

Mandatory human-in-the-loop. No LLM output bypasses the deterministic escalation rule.
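The countdown in Layer 06 is where that rule lives: escalation at the deadline is pure arithmetic on timestamps, and only an authenticated peer acknowledgement can pause it. A minimal sketch, with hypothetical names, assuming a 5-minute window per the peer.ack.window stat:

```python
# Hypothetical sketch of Layer 06: a deterministic 5-minute countdown.
# Only an authenticated peer ack pauses it; no LLM output can.
from typing import Optional

ACK_WINDOW_S = 5 * 60  # peer.ack.window = 5min

class Countdown:
    def __init__(self, started_at: float):
        self.started_at = started_at
        self.acked_by: Optional[str] = None

    def ack(self, peer_id: str, authenticated: bool) -> None:
        # Escalation pauses only on an *authenticated* peer ack.
        if authenticated:
            self.acked_by = peer_id

    def should_escalate(self, now: float) -> bool:
        # Pure function of clock and ack state: no model output in the loop.
        return self.acked_by is None and (now - self.started_at) >= ACK_WINDOW_S

c = Countdown(started_at=1000.0)
c.ack("peer-42", authenticated=True)
```

Because `should_escalate` depends only on the clock and the ack state, the same inputs always produce the same escalation decision, which is what makes the rule auditable.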

# traceable

Every classification ships with a C-SSRS / Joiner ITS framework citation and its reasoning.

# governable

Escalation rule is data, not a code path inferred from model output.
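"Rule as data" can be as small as a declarative routing table: the mapping from risk level to action is a reviewable artifact, swappable without touching a code path. The table below is illustrative; the route names are assumptions, not the project's actual configuration.

```python
# Sketch of the escalation rule expressed as data: auditable, versionable,
# and changeable without editing control flow. Route names are illustrative.
ESCALATION_RULES = {
    "none":     {"route": "peer_cadence", "countdown": False},
    "elevated": {"route": "peer_ring",    "countdown": True},
    "crisis":   {"route": "vcl_988",      "countdown": True},
}

def route(risk_level: str) -> dict:
    # Unknown levels fail closed to the crisis route rather than silently passing.
    return ESCALATION_RULES.get(risk_level, ESCALATION_RULES["crisis"])
```

Failing closed on an unrecognized level keeps a malformed model output from ever downgrading an escalation.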

// check-in

How are you, really?

No login. No identity binding. Hashed before persistence.

breathe

[ in.4s · hold · out.4s · 0.1Hz ]

POST /api/checkin
sha256(text) → text_hash · text.length → text_len
200 OK · classification
{
  "risk_level": "none",
  "framework_citation": "Columbia C-SSRS step 1",
  "indicators": [],
  "next_action": "continue check-in cadence"
}
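The hash-before-persistence step above can be sketched in a few lines: only the SHA-256 digest and the length of the check-in text are written, never the text itself. Field names mirror the API notation (`text_hash`, `text_len`); the function name is an assumption.

```python
# Sketch of hash-before-persistence: the stored record carries a digest and
# a length, no raw text and no identity binding. Names are illustrative.
import hashlib

def persistable_record(session_id: str, text: str) -> dict:
    return {
        "session_id": session_id,  # session-id only, per the check-in contract
        "text_hash": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "text_len": len(text),
        # deliberately absent: the text itself
    }

rec = persistable_record("sess-001", "doing okay today")
```

SHA-256 is one-way, so the stored record supports deduplication and audit without any path back to what the veteran wrote.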

Phase I demonstration prototype.

Not for actual crisis support. Not authorized for clinical use. Not a substitute for licensed mental health care. Processes no patient data.

If you or a veteran you know is in crisis: call 988 then press 1, or text 838255.