📄 Technical Whitepaper

Zero-Retention AI Architecture

A technical specification of how the VA Benefits App processes AI conversations without storing, logging, or transmitting user data to any third party.

📄 VA Benefits App v1.0 📅 Published February 2026 ✅ Open Source — Independently Auditable

Overview

This document describes the technical design of the VA Benefits App's AI conversation system. The central design goal is zero data retention: when a user asks the AI a question, that question is processed entirely in server RAM during the HTTP request and is never written to any database, log file, or persistent storage medium. Once the server streams the answer back to the client, the data ceases to exist on the server.

Conversation history — the full record of what a user has asked and what the AI has answered — is stored exclusively on the user's device using the operating system's local storage APIs. It never leaves the device except to be sent back to the server during the same request for context purposes, and is never retained by the server after that request ends.

The AI model itself is a private, self-hosted instance of Ollama running a quantized Qwen 2.5 14B model on our own hardware. No question or answer ever touches a third-party AI provider (OpenAI, Google, Anthropic, etc.), and no data is included in any external model's training set.

All code that implements this system is open source and available for community inspection on GitHub. This document serves as a map to that code, so any developer can verify these claims independently.

Table of Contents

  1. System Architecture Overview
  2. The AI Layer: Private Local LLM
  3. Data Flow: A Single AI Request
  4. Server-Side Code Walkthrough
  5. Client-Side Storage
  6. Database Schema: What We Actually Store
  7. Rate Limiting
  8. Transport Security
  9. Threat Model
  10. Independent Audit Guide
  11. Limitations & Honest Disclosures
  12. HIPAA, FTC & State Health Data Laws
  13. User Data Rights & Account Deletion

System Architecture Overview

The application consists of three distinct layers. Each layer has a clearly defined responsibility with respect to data handling.

| Layer | Technology | Stores AI Prompts? | Stores AI Responses? |
|---|---|---|---|
| Mobile App | React Native / Expo (iOS & Android) | ✓ Device only | ✓ Device only |
| API Server | Node.js / Express (self-hosted) | ✗ Never | ✗ Never |
| AI Model | Ollama + Qwen 2.5 14B (private server) | ✗ In-RAM only | ✗ In-RAM only |

Component Responsibilities

- Mobile App: renders the chat UI, stores all conversation history locally, and sends the full history with each request.
- API Server: authenticates the user, enforces rate limits in memory, assembles the prompt, relays the streamed response, and stores nothing.
- AI Model: generates responses entirely in RAM on private hardware and discards all request data after each response.

💡

Why send the full history each time?

LLMs have no memory between requests. To have a coherent conversation, the context (prior messages) must be sent with every request. The difference in our design is that this context is assembled on the device and discarded by the server — it is never stored server-side between requests.
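A minimal sketch of this client-side assembly (type and helper names are hypothetical, not taken from the codebase):

```typescript
// Sketch: the device, not the server, assembles the context for each
// stateless request. The server sees this payload only for the duration
// of the request and never stores it.
type ChatMessage = { role: "user" | "assistant"; content: string };

// Local history lives on the device in AsyncStorage.
const localHistory: ChatMessage[] = [
  { role: "user", content: "How do I file a disability claim?" },
  { role: "assistant", content: "Here's how..." },
];

function buildRequestBody(history: ChatMessage[], newMessage: string) {
  // The full prior context plus the new message travels with every request.
  return {
    messages: [...history, { role: "user" as const, content: newMessage }],
  };
}

const body = buildRequestBody(localHistory, "What forms do I need?");
```

Because the context is rebuilt from device storage on every send, the server needs no conversation state of its own.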


The AI Layer: Private Local LLM

What is Ollama?

Ollama is an open-source runtime for running large language models locally. It provides an OpenAI-compatible HTTP API, runs entirely on private hardware, and has no telemetry or external data reporting. We run it on a private server that we own and control.

Model: Qwen 2.5 14B

We use Qwen 2.5 14B, a 14-billion-parameter language model developed by Alibaba Cloud and released under an open license. The model weights are downloaded once from a public registry and then served entirely from our private server; at inference time the model receives only the prompt assembled for each request, holds it in RAM, and retains nothing between requests.

VA Knowledge Base

Alongside the base language model, we maintain a local VA knowledge base consisting of structured JSON files derived from public VA policy documents: 38 CFR regulations, the M21-1 Adjudication Manual, and historical BVA decision patterns. Relevant sections of this knowledge base are injected into the AI's system prompt at request time. The knowledge base is read-only and stored on the server as static files — it contains no user data.

  YOUR DEVICE                    OUR PRIVATE SERVER
  ───────────────                ───────────────────────────────────
  ┌─────────────┐                ┌─────────────┐   ┌─────────────┐
  │  VA Benefits│  HTTPS/TLS     │  API Server │   │   Ollama    │
  │    App      │ ──────────────▶│  (Express)  │──▶│  (private)  │
  │             │                │             │   │ Qwen 2.5 14B│
  │  Stores:    │  Streaming     │  Stores:    │   │             │
  │  - Messages │ ◀────────────  │  NOTHING    │   │  In RAM:    │
  │  - History  │  SSE chunks    │             │   │  - Prompt   │
  │             │                │  Reads:     │   │  - Response │
  └─────────────┘                │  - User     │   │             │
                                 │    profile  │   │  Discards   │
                                 │  - VA KB    │   │  after resp │
                                 │    (static) │   └─────────────┘
                                 │             │
                                 │  Discards   │
                                 │  all data   │
                                 │  after resp │
                                 └─────────────┘
                                       │
                                       ▼
                                 ┌─────────────┐
                                 │  PostgreSQL │
                                 │  Database   │
                                 │             │
                                 │  Contains:  │
                                 │  - User     │
                                 │    accounts │
                                 │  - Forum    │
                                 │    posts    │
                                 │             │
                                 │  No chat    │
                                 │  messages   │
                                 └─────────────┘

Data Flow: A Single AI Request

The following describes the exact sequence of events when a user sends a message to Judy, the AI assistant.

Step-by-Step Request Lifecycle

1. User types a message on their device

The message is immediately appended to the local conversation history stored in the app's AsyncStorage (device storage). Nothing has left the device yet.

2. App sends an HTTPS POST to /api/chat

The request body contains the entire conversation history as a JSON array of message objects. Each object has only two fields: role ("user" or "assistant") and content (the text). No user identifiers beyond the JWT token in the Authorization header are included in the body.

// Request body structure — this is everything the server receives
{
  "messages": [
    { "role": "user",      "content": "How do I file a disability claim?" },
    { "role": "assistant", "content": "Hey, great question! Here's how..." },
    { "role": "user",      "content": "What forms do I need?" }
  ]
}

3. Server authenticates the user via JWT

The server decodes the JWT access token to obtain the user's ID. This is the only identification step. The token is stateless — no session lookup is performed.
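As an illustration of why no session lookup is needed: the payload segment of a JWT already carries the user's identity. The sketch below only decodes that segment and deliberately omits the signature verification a real server must perform before trusting any claim (the example token is a fabricated, unsigned one):

```typescript
// Illustration only: the user ID travels inside the token itself, so the
// server needs no session table. Production code must verify the signature
// with a JWT library before trusting any claim in the payload.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const payloadSegment = token.split(".")[1]; // header.PAYLOAD.signature
  const json = Buffer.from(payloadSegment, "base64url").toString("utf8");
  return JSON.parse(json);
}

// A fabricated example token (signature omitted, header is a static stub).
const payload = Buffer.from(JSON.stringify({ sub: "user-123" })).toString("base64url");
const token = `eyJhbGciOiJIUzI1NiJ9.${payload}.sig`;
const claims = decodeJwtPayload(token); // { sub: "user-123" }
```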

4. Server fetches the user's profile (not messages) from the database

A single database query retrieves four fields: subscriptionTier, primaryType, subType, and intakeData. These are used to personalize the AI's system prompt (e.g., "User is an Army veteran, interested in disability claims"). No historical messages are read from the database because none exist — the database has no tables for them.

// The only database query in the entire chat flow
const user = await prisma.user.findUnique({
  where: { id: userId },
  select: {
    subscriptionTier: true,  // free or premium
    primaryType:      true,  // veteran, family_member, etc.
    subType:          true,  // active_duty, established, etc.
    intakeData:       true,  // branch, service dates, interests
  },
});
// That's it. No message history is read. No queries are written.

5. Server checks the in-memory rate limit counter

A JavaScript Map in the server process (never persisted to disk or database) tracks how many messages a user has sent today. If the user is over the daily limit, a rate limit error is streamed back. The counter key is userId:YYYY-MM-DD. When the server restarts, all counters reset.
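A sketch of that counter (function names here are hypothetical; Section 4 shows the actual checkRateLimit/incrementRateLimit calls):

```typescript
// Minimal sketch of the in-memory daily counter: a plain Map keyed by
// "userId:YYYY-MM-DD". Nothing is persisted, so a process restart resets it.
type RateLimitEntry = { count: number };
const rateLimitCache = new Map<string, RateLimitEntry>();

function rateLimitKey(userId: string, now: Date): string {
  return `${userId}:${now.toISOString().slice(0, 10)}`; // UTC date
}

function checkAndIncrement(userId: string, dailyLimit: number, now = new Date()): boolean {
  const key = rateLimitKey(userId, now);
  const entry = rateLimitCache.get(key) ?? { count: 0 };
  if (entry.count >= dailyLimit) return false; // over the daily limit
  entry.count += 1;
  rateLimitCache.set(key, entry);
  return true;
}
```

Because the key embeds the UTC date, yesterday's entries are simply never read again after midnight; an occasional sweep can evict stale keys.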

6. Server builds the AI prompt entirely in RAM

The server constructs the full prompt by combining the VA system instructions, the user's profile context, relevant sections from the static VA knowledge base, and the conversation history supplied by the client. This assembled prompt lives only in the Node.js process heap.
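In shape (assumed for illustration, not the actual implementation), the assembly is a pure function of static instructions, profile fields, knowledge-base excerpts, and the client-supplied history, producing ordinary values on the heap:

```typescript
// Hedged sketch: field names and structure are assumptions, but the key
// property holds by construction - the output is plain in-memory data,
// with no side effects and no writes.
type ChatMessage = { role: "user" | "assistant"; content: string };
type Profile = { primaryType: string; subType: string };

function buildPrompt(
  systemInstructions: string,
  profile: Profile,
  kbSections: string[],          // read-only excerpts from the static VA KB
  history: ChatMessage[]         // supplied by the client, never stored
): { role: string; content: string }[] {
  const system = [
    systemInstructions,
    `User context: ${profile.primaryType} (${profile.subType})`,
    ...kbSections,
  ].join("\n\n");
  return [{ role: "system", content: system }, ...history];
}
```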

7. Server sends prompt to private Ollama instance via local network

An HTTP POST is sent from the API server to the Ollama instance (running on the same private network or the same machine). The request is internal — it never touches the public internet. Ollama processes the prompt in GPU/CPU RAM and streams tokens back.
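Ollama streams its /api/chat response as newline-delimited JSON objects, each carrying a fragment of the answer. A sketch of extracting the text from one chunk (the exact response shape may vary across Ollama versions):

```typescript
// Each line of a streamed Ollama /api/chat chunk is a JSON object like
// {"message":{"content":"..."},"done":false}. This helper pulls out the
// text fragments; it holds nothing beyond its return value.
function extractTokens(ndjsonChunk: string): string {
  return ndjsonChunk
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => {
      const obj = JSON.parse(line) as { message?: { content?: string }; done?: boolean };
      return obj.message?.content ?? "";
    })
    .join("");
}
```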

8. Server streams response back to the app

The server relays each token chunk to the client using Server-Sent Events (SSE). The server holds each token in RAM for the microseconds it takes to relay it. No buffering to disk occurs.

9. Server request handler returns

Once the stream ends, the HTTP handler returns. The Node.js garbage collector reclaims all memory allocated for the request: the message history, the assembled prompt, the response text. The server has written nothing — not a single row — to the database during this entire flow.

10. App saves the assistant's response to local storage

The mobile app appends the AI's response to its local conversation history in AsyncStorage. This is the only place the response ever persists — on the user's own device.

Net result

After the request completes, the server's database is identical to before the request. The only record of the conversation exists on the user's device. If the server were inspected at any point after the request, there would be no evidence that the conversation occurred.


Server-Side Code Walkthrough

All server code is available in the /server/src/ directory of the public repository. The following are the most critical files for auditing the privacy claims.

File: server/src/services/chat.service.ts

This is the core of the AI pipeline. The exported sendMessage() function is the entry point for every AI conversation request. Key properties to verify:

// server/src/services/chat.service.ts
// The complete sendMessage function — annotated for audit

export async function sendMessage(
  userId: string,
  clientMessages: ChatMessage[]   // Full history from the client device
): Promise<AsyncIterable<string>> {

  // ✓ AUDIT: Only DB read in this function — profile fields only, no messages
  const user = await prisma.user.findUnique({
    where: { id: userId },
    select: { subscriptionTier: true, primaryType: true, subType: true, intakeData: true }
  });
  const isPremium = user?.subscriptionTier === 'premium';

  // ✓ AUDIT: In-memory rate limit check — no DB write
  const rateLimit = checkRateLimit(userId, isPremium);
  if (!rateLimit.allowed) throw new AppError('Rate limit exceeded', 429);
  incrementRateLimit(userId); // Updates an in-memory Map, not the database

  // ✓ AUDIT: System prompt assembled in RAM from static files + user profile
  const systemPrompt = buildSystemPrompt(user, clientMessages);

  // ✓ AUDIT: HTTP call to private Ollama — no third-party AI provider
  const response = await fetch(`${OLLAMA_URL}/api/chat`, { ... });
  const reader = response.body!.getReader();

  // ✓ AUDIT: Generator streams tokens — no buffering to disk
  return (async function* () {
    try {
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        yield parseToken(value); // Yield to client, forget immediately
      }
    } finally {
      reader.releaseLock();
      // ✓ AUDIT: Request ends here. No write. No log. Memory freed by GC.
    }
  })();
}

File: server/src/routes/chat.ts

The route handler for POST /api/chat. This file defines the only AI endpoint. There are no routes for fetching conversation history from the server, because no history is stored server-side.

// server/src/routes/chat.ts
// Complete route file — two endpoints only

// GET /api/chat/rate-limit — returns remaining message count (from RAM counter)
chatRouter.get('/rate-limit', authenticate, async (req, res) => {
  const status = await chatService.getRateLimitStatus(req.user.userId);
  res.json({ data: status });
});

// POST /api/chat — stateless AI endpoint, streams response, saves nothing
chatRouter.post('/', authenticate, async (req, res) => {
  const { messages } = sendSchema.parse(req.body);

  res.setHeader('Content-Type', 'text/event-stream'); // SSE streaming

  const stream = await chatService.sendMessage(req.user.userId, messages);

  for await (const chunk of stream) {
    res.write(`data: ${JSON.stringify({ text: chunk })}\n\n`);
  }

  res.end(); // Done. The request handler returns. Nothing was saved.
});

// ✓ AUDIT: Note the absence of any GET /:id, DELETE /:id, or list endpoints.
// There is no route to retrieve stored conversations because none exist.
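For completeness, the client-side counterpart is a few lines: split the stream on blank lines and parse each data: payload. A sketch (illustrative, not the actual app code):

```typescript
// Parses a buffered SSE stream of the form emitted by the route handler:
// each event is "data: {json}" followed by a blank line.
function parseSseEvents(buffer: string): string[] {
  return buffer
    .split("\n\n")
    .filter((event) => event.startsWith("data: "))
    .map((event) => {
      const payload = JSON.parse(event.slice("data: ".length)) as { text: string };
      return payload.text;
    });
}
```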

Client-Side Storage

All conversation data is stored on the user's device using AsyncStorage, the standard React Native key-value storage layer. On Android, this maps to SQLite in the application's private data directory. On iOS, it maps to the application's sandboxed filesystem.

Storage Key

All conversation data is stored under a single key: va-chat-storage. The value is a JSON object with the following shape:

{
  "state": {
    "conversations": [
      {
        "id":        "1708123456789-abc1234",  // Local UUID (timestamp + random)
        "title":     "How do I file a disability claim?",
        "messages": [
          { "id": "...", "role": "user", "content": "...", "createdAt": "..." },
          { "id": "...", "role": "assistant", "content": "...", "createdAt": "..." }
        ],
        "createdAt": "2026-02-20T10:00:00.000Z",
        "updatedAt": "2026-02-20T10:05:00.000Z"
      }
    ],
    "currentConversationId": "1708123456789-abc1234"
  },
  "version": 0
}
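The id field above follows a simple local scheme, a millisecond timestamp plus a short random suffix, generated entirely on the device with no server round-trip. A sketch:

```typescript
// Local conversation IDs: "<Date.now()>-<random base-36 suffix>".
// No server is consulted, so IDs can be minted offline.
function makeLocalId(now: number = Date.now()): string {
  const suffix = Math.random().toString(36).slice(2, 9);
  return `${now}-${suffix}`;
}

const id = makeLocalId(); // e.g. "1708123456789-abc1234"
```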

Relevant Source File

The complete Zustand store implementation, including the AsyncStorage persistence layer, is in mobile/stores/chat.store.ts. Key behaviors to verify: createConversation, deleteConversation, and sendMessage make no server API calls for creating, reading, or deleting conversations, and the persist middleware uses AsyncStorage as its storage backend.

⚠️

Implication: Clearing app data deletes all history

Because conversation history exists only on the device, clearing the app's data (via Android Settings or iOS app reinstall) permanently deletes all conversation history. There is no server-side backup and no recovery mechanism. We consider this a feature of the privacy design, not a bug.


Database Schema: What We Actually Store

The database is a PostgreSQL instance hosted on Neon. The complete schema is defined in server/prisma/schema.prisma. The following is every table in the database as of this writing.

Notably absent: there is no Message table and no Conversation table. These tables were removed on February 20, 2026, and the database was synchronized to confirm their physical deletion.

| Table | Purpose | Contains User Content? |
|---|---|---|
| User | Account credentials and profile (email, hashed password, service branch, interests) | ⚠ Yes — profile data only |
| RefreshToken | JWT refresh tokens for session management | Token hashes only |
| Referral | Referral relationship between two users | User IDs + status only |
| ForumCategory | Community forum categories | ✓ No |
| ForumTopic | Public forum posts (deliberately public content) | ⚠ Yes — user-authored, public |
| ForumReply | Replies to forum posts (deliberately public content) | ⚠ Yes — user-authored, public |
| ForumTopicUpvote | Vote records for forum posts | User ID + topic ID only |
| DMConversation | Direct message threads between community members | ⚠ Yes — user-to-user messages |
| BetaSignup | Landing page signups (email, name, veteran status) | ⚠ Yes — with consent |
📄

Note on Forum and Direct Messages

Forum posts and DMs are intentionally stored server-side because they are a community feature where users expect persistence and the ability to read others' posts. These are distinct from AI conversations with Judy. Users choose to publish forum posts publicly; they do not choose to publish private AI queries.

Verifying the Schema Yourself

Run the following commands against the public GitHub repository to confirm there is no Message or AI Conversation model:

# Clone the repo
git clone https://github.com/[YOUR_ORG]/va-benefits-app

# Search for any Message or Conversation model
grep -n "model Message" server/prisma/schema.prisma
grep -n "model Conversation" server/prisma/schema.prisma
# Expected output: no matches

# Search for any prisma write calls in the chat service
grep -n "prisma\." server/src/services/chat.service.ts
# Expected output: only prisma.user.findUnique (a read)

Rate Limiting

To prevent abuse, free-tier users are limited to 15 AI messages per day. Premium users have no limit. This counter is implemented as an in-memory JavaScript Map inside the server process.

// server/src/services/chat.service.ts
// In-memory rate limit counter — never written to disk

const rateLimitCache = new Map<string, RateLimitEntry>();

// Key format: "userId:YYYY-MM-DD"
// Example:    "550e8400-e29b-41d4-a716:2026-02-20"
// 
// ✓ AUDIT: This is a plain JavaScript Map.
// It is not written to the database or any file.
// It resets entirely when the server process restarts.
// It contains only a count and a reset timestamp — no message content.

The key stores the user's opaque ID (a UUID) and the current date. It does not store any message content. After midnight UTC, the counter automatically resets as the key changes. A server restart also resets all counters.


Transport Security

App ↔ API Server

All communication between the mobile app and the API server uses HTTPS with TLS 1.2 or higher. The API server is exposed via a Cloudflare Tunnel, which provides automatic TLS termination and certificate management. The underlying server listens on localhost only and is not directly reachable from the internet.

API Server ↔ Ollama

Communication between the API server and the Ollama instance is over the local network (or loopback) and does not traverse the public internet. This connection uses plain HTTP over a private network, which is acceptable because the data never leaves our physical infrastructure.

Certificate Transparency

Our TLS certificates are logged to public Certificate Transparency (CT) logs. This means any certificate issued for our domain is publicly verifiable. You can verify our current certificate at crt.sh.


Threat Model

The following table describes potential attack scenarios and how the current architecture responds to each.

| Threat | Impact Under This Architecture |
|---|---|
| Server database is compromised | Attacker can access user profiles (email, service branch, interests) and forum posts. They cannot access any AI conversation history — no such records exist in the database. |
| Server process memory is dumped mid-request | An in-flight request would be visible in RAM. The window of exposure is the duration of a single HTTP request (typically 2–10 seconds). No historical data is accessible. |
| Server is seized by law enforcement | Authorities would find user account data (email, profile) and forum posts. They would find no AI conversation history. This is architecturally guaranteed by the absence of any write path. |
| Server logs are exfiltrated | HTTP access logs record only the endpoint hit (POST /api/chat), timestamp, and IP address. The request body — containing conversation content — is not logged. We explicitly do not configure any request body logging middleware. |
| User's device is seized | Conversation history is stored in AsyncStorage, which on Android is in an application-private SQLite database and on iOS is in the app sandbox. Device encryption (which is enabled by default on modern iOS and Android devices) protects this data at rest. Deleting the app removes all data. |
| Network traffic is intercepted (MITM) | TLS prevents reading of request content in transit. Certificate pinning is not currently implemented but is on the roadmap. |
| Third-party AI provider is subpoenaed | Not applicable. No third-party AI provider receives any data. Ollama is a self-hosted runtime. There is no API key to a commercial AI service. |
| Malicious insider (employee) dumps data | An insider with database access finds the same result as an external attacker: no AI conversation history. An insider with live server access could inspect in-flight requests — this is a residual risk inherent to any server-side AI processing architecture. |

Known Limitations of This Model

⚠️

In-flight request exposure

During the seconds an AI request is being processed, the prompt and response exist in server RAM. An adversary with real-time memory access (e.g., a hypervisor-level attack on the host machine) could theoretically read in-flight requests. This is a fundamental property of all server-side processing and cannot be eliminated without moving the AI model onto the user's own device — which we are evaluating for a future version.

🔴

IP address logging

Standard HTTP access logs record the IP address of incoming requests. This is a minimal footprint, but it does create a record that a specific IP address communicated with our server at a specific time. Users who require stronger anonymity should consider using a VPN.


Independent Audit Guide

Any developer can verify the claims in this document by following the steps below. No special access or credentials are required beyond a GitHub account and a PostgreSQL client.

Verify no Message table exists in the schema

Open server/prisma/schema.prisma on GitHub. Search for "model Message" and "model Conversation". You will find neither. This is the source-of-truth definition for the database schema.

Verify no write path exists in the chat service

Open server/src/services/chat.service.ts. Search for prisma.create, prisma.update, prisma.upsert, prisma.delete, and fs.write. You will find none. The only database call is prisma.user.findUnique (a read).

Verify the rate limit uses an in-memory Map

In chat.service.ts, search for rateLimitCache. You will find it declared as const rateLimitCache = new Map() at module scope — a standard JavaScript data structure, not a database call.

Verify conversation history lives on the client

Open mobile/stores/chat.store.ts. Confirm that createConversation, deleteConversation, and sendMessage make no API calls for create/read/delete of conversations. Confirm the persist middleware uses AsyncStorage as the storage backend.

Verify Ollama is the AI backend

In chat.service.ts, search for OLLAMA_URL. You will find that the AI fetch call targets this URL. Confirm there are no references to openai.com, googleapis.com, anthropic.com, or any other third-party AI endpoint in the server codebase.
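A one-line variant of this check, run from a clone of the repository (domains shown are examples of what should not appear):

```shell
# Any reference to a commercial AI endpoint in the server code would show up here.
grep -rn "openai.com\|anthropic.com\|googleapis.com" server/src/ || echo "no matches"
```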

Reproduce the build and compare

Follow the build instructions in README.md to build the Android APK from source. Compare the SHA-256 hash of the resulting APK against the hash published in our GitHub Releases. A match confirms that the app on the Play Store was built from the published source code.
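For example (filename illustrative; substitute the hash published in the release notes for the placeholder):

```shell
# Hash the APK you built locally.
sha256sum app-release.apk

# Or verify in one step against the published hash (note the two spaces
# between hash and filename, which sha256sum's check format requires).
echo "<published-hash>  app-release.apk" | sha256sum --check
```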

Inspect the live server's database directly

Contact us at hello@vabenefitsapp.com to arrange a supervised read-only database inspection. We will create a temporary read-only PostgreSQL credential for any accredited security researcher or VSO wishing to verify the schema in production.


Limitations & Honest Disclosures

We are committed to honesty about what this architecture does and does not guarantee.

What this architecture guarantees

- AI conversation content is never written to any server-side database, log file, or persistent storage.
- No question or answer is ever sent to a third-party AI provider.
- Conversation history exists only on the user's device, under the user's control.

What this architecture does not guarantee

- Protection of in-flight requests against an adversary with real-time access to server memory.
- Network-level anonymity: HTTP access logs record IP addresses and timestamps.
- Security of data on the user's own device, which depends on device encryption and OS protections.

Planned improvements

- Certificate pinning in the mobile app (currently on the roadmap).
- Evaluation of on-device AI inference, which would remove even the in-flight server-side exposure.


Contact & Responsible Disclosure

If you discover a vulnerability in our architecture or implementation, please report it to hello@vabenefitsapp.com. We do not have a formal bug bounty program at this time, but we will acknowledge all good-faith reports and work to fix confirmed issues promptly.

🛡

This document is version-controlled

The canonical version of this whitepaper lives in the public GitHub repository at server/public/whitepaper.html. Every change is recorded in the commit history with a timestamp and diff. If you see a discrepancy between this page and the repository, the repository is authoritative.


HIPAA, FTC & State Health Data Laws

Veterans asking questions about their disability ratings, mental health conditions, and medical histories are sharing sensitive health-related information. We are conscious of every major U.S. health privacy regulation that could bear on a consumer app in this space. This section explains each law honestly — including where it technically applies to us, where it doesn't, and how our architecture meets or exceeds every standard regardless.

⚖️

This is not legal advice

This section describes our understanding of how these laws apply to our architecture as of February 2026. It is published for transparency, not as a legal opinion. We have consulted legal counsel on these questions and recommend any organization in a similar position do the same.


1. HIPAA — Health Insurance Portability and Accountability Act

Does HIPAA apply to us?

Technically, no — and we want to be completely transparent about why. HIPAA applies only to covered entities (health plans, healthcare clearinghouses, and healthcare providers who transmit health data for billing) and their business associates (vendors performing services for covered entities involving Protected Health Information). The VA Benefits App is none of these. We are a consumer information and navigation tool. We do not bill insurance, provide clinical care, or operate under contract with a hospital, insurer, or the VA itself. The FTC — not HHS — is our primary federal regulator.

We say this not to avoid accountability, but because misrepresenting HIPAA applicability is itself a deceptive trade practice under the FTC Act. Many consumer health apps falsely advertise "HIPAA compliance" when HIPAA does not apply to them — creating a false sense of security. We refuse to do that.

How our architecture exceeds what HIPAA requires anyway

Although HIPAA does not technically bind us, we designed the app to exceed HIPAA's standards in every dimension that matters to veterans. Here is the direct comparison:

| HIPAA Requirement (for covered entities) | HIPAA Standard | VA Benefits App |
|---|---|---|
| Retention of PHI | Must retain for 6 years minimum | 0 seconds — AI conversations never written to disk |
| Minimum necessary standard | Only access PHI needed for the task | Exceeds — server reads only 4 non-sensitive profile fields; no health content stored at all |
| Breach notification window | 60 days to notify after discovery | N/A — no AI conversation data retained, so nothing to breach |
| Disclosure to third parties | Permitted for treatment, payment, operations with BAA | Never — no third-party AI providers receive any data |
| Encryption in transit | Addressable (strongly recommended) | Required — all traffic over TLS 1.2+ |
| Audit controls | Record and examine access to PHI | Exceeds — no PHI exists server-side to audit; open source code is the audit log |
| Right to access records | Provide copy of PHI within 30 days | Instant — all conversation data is already on the user's device; no server request needed |
| Right to deletion | Limited right under HIPAA; stronger under state laws | Immediate — delete the app; all conversation history gone permanently |

The core point

HIPAA allows covered entities to retain your health records for 6 years and share them with dozens of "business associates" under signed agreements. We retain AI conversation data for 0 seconds and share it with no one. Our architecture is more protective than HIPAA by design, not by accident.


2. FTC Act & Health Breach Notification Rule

How the FTC governs us

The Federal Trade Commission governs consumer-facing health apps primarily through two mechanisms:

- Section 5 of the FTC Act, which prohibits unfair or deceptive trade practices, including broken privacy promises; and
- The Health Breach Notification Rule, which requires notification after a breach of unsecured individually identifiable health information.
What the Health Breach Notification Rule requires

If a breach of "unsecured individually identifiable health information" occurs, we must notify:

- affected individuals, without unreasonable delay and within 60 days of discovery;
- the FTC; and
- for breaches affecting 500 or more people, prominent media outlets.
How our architecture responds

The Health Breach Notification Rule is triggered by a breach of "unsecured individually identifiable health information." Our zero-retention design directly minimizes what could be breached:

| Data Type | Where It Lives | Breach Scenario | FTC Rule Triggered? |
|---|---|---|---|
| AI conversation history (health questions) | Device only (AsyncStorage) | Server compromise | No — data not on server |
| AI conversation history | Device only | Device theft or malware | Possibly — device OS security governs this |
| User email address | Database (with consent) | Database compromise | Possibly — email alone may not constitute health info |
| Service branch, interests (intake data) | Database (with consent) | Database compromise | Possibly — these are not clinical records but are veteran-identifying |
| Forum posts | Database (user-authored, public) | Database compromise | Contextual — user chose to publish publicly |

We maintain an internal incident response plan consistent with the FTC rule's requirements. In the event of any breach involving user data, we commit to notifying affected users within 30 days — half the 60-day window the rule allows — and filing with the FTC simultaneously.

FTC Deception Framework: Our Public Commitments

Under FTC Section 5, every privacy claim we make publicly is legally enforceable. The following claims appear in this whitepaper and elsewhere on our website. We list them here explicitly so they are on record:

- AI conversations are never stored, logged, or written to disk on our servers.
- No user data is sent to any third-party AI provider.
- Conversation history is stored only on the user's device.
- We do not sell or share user data.

Each of these claims is verifiable in the open-source codebase per the audit guide in Section 10.


3. State Health Data Laws

Several U.S. states have enacted health data privacy laws that go significantly further than HIPAA — and which explicitly cover consumer apps like ours regardless of whether HIPAA applies. We are aware of and designing for each of these.

Washington State — My Health MY Data Act (MHMD, 2023)

Washington's MHMD Act is the broadest state health data law in the country. It applies to any entity that collects, uses, or shares "consumer health data" about Washington residents — with no exemption for non-HIPAA-covered entities. "Consumer health data" is broadly defined to include data that identifies a consumer's health conditions, diagnoses, mental health status, disability status, and more.

Key requirements and our response:

- Consent before collection of consumer health data: profile and intake data are collected only with explicit consent, and AI conversation content is never collected server-side at all.
- No sale of consumer health data without authorization: we do not sell user data.
- Right to deletion: account deletion removes all server-side data (see Section 13), and conversation history can be deleted on-device at any time.
- Private right of action: MHMD lets consumers sue violators directly; our zero-retention design minimizes what could ever be at issue.

California — Confidentiality of Medical Information Act (CMIA) & CPRA

California's CMIA historically applied to providers and health plans, but the California Privacy Rights Act (CPRA, 2023) created a new category — "sensitive personal information" — that includes health information broadly. California residents have the right to:

- know what personal information is collected about them;
- delete their personal information;
- correct inaccurate personal information; and
- opt out of the sale or sharing of their personal information, with additional limits on the use of sensitive personal information.

Our existing CCPA/CPRA notice bar and Privacy Policy were written to comply with these requirements. The "Do Not Sell or Share My Personal Info" link in the site footer satisfies CPRA's opt-out notice requirement.

Nevada — SB 370 (Consumer Health Data)

Nevada's 2023 consumer health data law follows a similar framework to Washington's MHMD Act, requiring consent for collection of health data and prohibiting sale without explicit authorization. Our consent-based intake form and no-sale policy comply with these requirements.

Connecticut — CTDPA (2023)

Connecticut's Data Privacy Act covers "sensitive data," which includes health conditions and mental health information. It requires a consent opt-in before processing such data and grants rights to access, correct, and delete. Our profile consent flow and account deletion capability address these requirements.

State Law Compliance Summary

| Law | State | Applies to Us? | Key Obligation | Our Status |
|---|---|---|---|---|
| My Health MY Data Act | Washington | Yes | Consent, no sale, deletion right, private right of action | Compliant |
| CPRA / CMIA | California | Yes | Know/delete/correct/opt-out rights, sensitive data limits | Compliant |
| SB 370 | Nevada | Yes | Consent, no sale without authorization | Compliant |
| CTDPA | Connecticut | Yes | Consent opt-in for sensitive data, access & deletion rights | Compliant |
| HIPAA | Federal | Not directly | N/A — we are not a covered entity | Exceeds standard |
| FTC Health Breach Notification Rule | Federal | Yes | Notify within 60 days of a qualifying breach | Incident response plan in place; 30-day commitment |

Why Zero-Retention Is the Right Foundation

Every health data law — HIPAA, FTC, Washington MHMD, California CPRA — is built around the same core problem: organizations collect sensitive health data, retain it, and then either breach it, sell it, or misuse it. The regulatory frameworks are attempts to contain the damage from that retention.

Our architecture sidesteps the entire problem. The AI conversation data most likely to be sensitive — questions about PTSD, TBI, MST, disability ratings, medications — is never retained on our server. You cannot breach data you do not have. You cannot be compelled to produce data that does not exist. You cannot sell what was never yours to keep.

This is not a compliance posture. It is a design principle that makes compliance a natural consequence rather than an afterthought.

🛡 Our commitment

We will update this section any time a new state health data law passes or existing laws are amended in ways that affect our obligations. Changes are tracked in the public Git commit history. If you believe we have mischaracterized any legal requirement, please contact us at hello@vabenefitsapp.com.


User Data Rights & Account Deletion

This section describes the technical implementation of user data rights — how users can access, correct, export, and permanently delete all data associated with their account. This directly addresses Google Play Data Safety requirements and GDPR/CCPA obligations.

What Data Is Held Server-Side (and Thus Deletable)

| Data Type | Where Stored | Deletion Method | Purge Timeline |
|---|---|---|---|
| Email address & account credentials | PostgreSQL (User table) | In-app, web form, or email request | Within 30 days |
| Veteran profile (branch, service dates, interests) | PostgreSQL (User.intakeData JSON column) | Deleted with account | Within 30 days |
| Forum posts & replies | PostgreSQL (ForumTopic, ForumReply) | Anonymized or deleted with account | Within 30 days |
| AI conversation history | Device only (AsyncStorage) | Delete in-app or uninstall | Immediate (device-local) |
| Mood journal & wellness data | Device only (AsyncStorage) | Clear in Wellness settings or uninstall | Immediate (device-local) |
| Advertising ID (Android GAID) | Not stored by us — used transiently by AdMob | Reset in Google device settings | User-controlled |
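Because AI conversation history and wellness data live only in device-local AsyncStorage, deleting them is a purely local operation with no network call. The sketch below illustrates this in TypeScript; the storage key names are hypothetical, and AsyncStorage is stubbed with an in-memory Map so the example is self-contained (the real app would use the platform's AsyncStorage module).

```typescript
// Minimal in-memory stand-in for the device's AsyncStorage, used here
// only so the sketch is self-contained and runnable.
const store = new Map<string, string>();
const AsyncStorage = {
  async setItem(key: string, value: string): Promise<void> { store.set(key, value); },
  async getItem(key: string): Promise<string | null> { return store.get(key) ?? null; },
  async multiRemove(keys: string[]): Promise<void> { keys.forEach((k) => store.delete(k)); },
};

// Hypothetical key names -- the actual keys live in the open-source client.
const DEVICE_LOCAL_KEYS = ["ai_conversation_history", "mood_journal", "wellness_streaks"];

// Deletion is local and immediate: no server round-trip is involved,
// because the server never held this data in the first place.
async function clearDeviceLocalData(): Promise<void> {
  await AsyncStorage.multiRemove(DEVICE_LOCAL_KEYS);
}
```

Uninstalling the app has the same effect, since the operating system discards the app's local storage along with it.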

Deletion Request Channels

We offer three methods for submitting an account and data deletion request, consistent with Google Play requirements for apps with account creation: an in-app deletion flow, the web form at /delete-account.html, and an email request to hello@vabenefitsapp.com.

📄 Google Play Data Safety Compliance

Google Play requires that apps which allow account creation must also provide an in-app mechanism and a web-accessible URL for account deletion. The web form at /delete-account.html satisfies the web URL requirement. The in-app flow satisfies the in-app requirement. Both channels are disclosed in the Play Store Data Safety form.
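The server-side deletion path behind these channels can be sketched as follows. This is an illustrative model only: the database is simulated with in-memory arrays, and the field names (beyond the User.intakeData and forum tables named above) are assumptions rather than the actual schema.

```typescript
// Illustrative model of the deletion path. In production this would be
// a transaction against PostgreSQL; here the tables are in-memory arrays.
interface User { id: number; email: string; intakeData: object | null; }
interface ForumPost { id: number; authorId: number | null; body: string; }

const users: User[] = [{ id: 1, email: "vet@example.com", intakeData: { branch: "Army" } }];
const forumPosts: ForumPost[] = [{ id: 10, authorId: 1, body: "Helpful tip" }];

// Deleting an account purges the User row (email, credentials, profile)
// and anonymizes forum posts rather than removing community content.
function deleteAccount(userId: number): void {
  const idx = users.findIndex((u) => u.id === userId);
  if (idx === -1) return;           // nothing to delete
  users.splice(idx, 1);             // purge email, credentials, intakeData
  for (const post of forumPosts) {
    if (post.authorId === userId) post.authorId = null; // anonymize authorship
  }
}
```

Note that AI conversation history never appears in this path at all: there is no server-side row to delete, because none was ever written.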

Mood Journal & Health Data: Not Our Data to Delete

As described in Sections 5 and 6, mood journal entries, wellness streaks, and breathing exercise logs are stored exclusively in the device's local AsyncStorage. They are never transmitted to our servers. Because this data never leaves the device, we have no ability to delete it on the user's behalf — nor would it be appropriate for us to have such access. The user retains full and exclusive control: entries can be cleared at any time in the Wellness settings or removed by uninstalling the app, and deletion takes effect immediately on the device.

This data is used solely for the user's own self-reflection. It is never shared with third parties, advertisers, AI providers, or government entities — structurally impossible by design because it never leaves the device.

Advertising ID (Google AdMob)

On Android devices, the app accesses the Google Advertising ID (GAID) via the AD_ID permission to enable ad delivery through Google AdMob. This is disclosed in the AndroidManifest.xml, the Play Store Data Safety form, and our Privacy Policy. The Advertising ID is never stored on our servers, is used only transiently by the AdMob SDK to serve ads, and can be reset or deleted at any time through the device's Google settings.

GDPR Compliance Overview

Although VA Benefits App is primarily a U.S.-focused service, we honor GDPR rights for all EEA, UK, and Swiss users. Our architecture supports GDPR compliance in the following ways:

| GDPR Requirement | Our Implementation |
|---|---|
| Lawful basis for processing | Contract (account data), Legitimate Interests (analytics), Consent (Advertising ID) |
| Right to Erasure (Art. 17) | Web form + in-app deletion; 30-day purge guarantee |
| Data minimization (Art. 5(1)(c)) | Server stores 4 non-sensitive profile fields for AI context; no AI conversation history retained |
| Storage limitation (Art. 5(1)(e)) | Account data deleted within 30 days of request; server logs purged after 90 days |
| Transparency (Art. 13/14) | Privacy Policy and this whitepaper disclose all processing; open source codebase is independently auditable |
| International transfers (Art. 46) | Standard Contractual Clauses (SCCs) used for U.S.-based service providers where required |
| Right to lodge complaint | Supervisory authority contacts provided in Privacy Policy Section 15.4 |
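The storage-limitation row above (server logs purged after 90 days) reduces to a simple age check. A minimal sketch, assuming log entries carry a millisecond timestamp; the actual log format is not specified in this document.

```typescript
// Hypothetical log entry shape; only the age check matters here.
interface LogEntry { timestamp: number; line: string; } // timestamp in ms since epoch

const NINETY_DAYS_MS = 90 * 24 * 60 * 60 * 1000;

// Keep only entries younger than 90 days, measured against `now`.
// Because AI conversations are never logged, no conversation content
// can appear in these entries in the first place.
function purgeOldLogs(entries: LogEntry[], now: number): LogEntry[] {
  return entries.filter((e) => now - e.timestamp < NINETY_DAYS_MS);
}
```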

Summary

Our zero-retention architecture for AI conversations is intrinsically privacy-preserving and exceeds the minimum requirements of GDPR, CCPA, and Google Play Data Safety. The explicit deletion channels (in-app, web form, email) and our mood journal's device-local design mean users retain full control over all their data at all times.