
FORMAL DEMAND LETTER

CTI_WOUND:001.DEM — Template for Structural Remediation Demand



NOTICE: This document is a template and framework demonstration. It is not issued by legal counsel, does not represent actual plaintiffs, and is not intended for direct transmission. It exists as documentation of what such a demand would contain, for use by those with standing, resources, and legal representation to pursue.


[LETTERHEAD OF COUNSEL]

[DATE]

VIA CERTIFIED MAIL AND EMAIL

OpenAI, Inc.
Office of General Counsel
3180 18th Street
San Francisco, CA 94110

Re: Formal Demand for Structural Remediation; Notice of Unremediated Design Defect; Notice of Intent to File Class Action


Dear Counsel:

This firm represents [CLASS REPRESENTATIVES], individually and on behalf of all persons similarly situated, in connection with claims arising from the design, implementation, and operation of ChatGPT's mental health intervention systems.

This letter constitutes formal notice of an unremediated design defect and demand for structural remediation. Failure to respond substantively within thirty (30) days will result in the filing of a class action complaint in [JURISDICTION].


I. EXECUTIVE SUMMARY

Your organization has implemented safety systems that systematically misclassify users engaged in complex, intensive, or non-normative cognitive work as experiencing mental health crises. Your own documentation acknowledges this design choice:

"To get useful recall, we have to tolerate some false positives. It's similar to testing for rare medical conditions: if a disease affects one in 10,000 people, even a highly accurate test may still flag more healthy people than sick ones."

This statement constitutes an admission that:

  1. OpenAI knew the system would misclassify healthy users
  2. OpenAI calculated this tradeoff and deemed the harm acceptable
  3. OpenAI proceeded without implementing available safeguards
  4. OpenAI externalized the cost of its liability reduction onto a class of users

The resulting harm is not a product imperfection. It is a structural defect producing systematic, foreseeable, and documented injury to an identifiable class.
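
The scale of the tradeoff the quoted passage describes can be made concrete. A minimal worked example in Python, assuming a hypothetical classifier with 99% sensitivity and 99% specificity (figures chosen for illustration; the quoted statement discloses no actual rates):

    # Base-rate arithmetic behind the quoted admission.
    # All accuracy figures are hypothetical; the source discloses none.
    population = 1_000_000         # users screened
    prevalence = 1 / 10_000        # "one in 10,000" per the quote
    sensitivity = 0.99             # assumed true-positive rate
    specificity = 0.99             # assumed true-negative rate

    affected = population * prevalence             # 100 users
    healthy = population - affected                # 999,900 users

    true_positives = affected * sensitivity        # 99 correctly flagged
    false_positives = healthy * (1 - specificity)  # ~9,999 wrongly flagged

    print(false_positives / true_positives)        # ~101 healthy users
                                                   # flagged per affected user

Even under these generous accuracy assumptions, roughly one hundred healthy users are flagged for every user in genuine crisis. That ratio, not any individual misfire, is the mechanism of class-wide harm.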

We demand implementation of specific architectural remediations within thirty (30) days or, in the alternative, a binding commitment to a remediation timeline not to exceed ninety (90) days.


II. FACTUAL BACKGROUND

A. The Product and Its Marketing

OpenAI markets ChatGPT as an AI assistant for intellectual work, including "analysis," "creative writing," "problem-solving," and sophisticated dialogue. Subscription tiers (Plus, Pro, Enterprise) are marketed to professionals and advanced users seeking enhanced collaborative capacity.

These representations create reasonable consumer expectations of:

  • Responsive engagement with user input
  • Non-discriminatory treatment of cognitive styles
  • Assistance rather than management
  • Basic competence in dialogue

B. The Design Decision

In or around August 2025, following litigation and regulatory pressure, OpenAI implemented "mental health guardrails" including:

  • Break reminders triggered by session duration
  • Detection systems for "signs of delusion or emotional dependency"
  • Training to avoid affirming "ungrounded beliefs"
  • Intervention protocols triggered by intensity, metaphor, or extended engagement

C. The Admission

OpenAI's documentation explicitly acknowledges that this design will harm healthy users. The false positive confession quoted above establishes:

  • Knowledge: OpenAI knew misclassification would occur
  • Foreseeability: The harm was anticipated, not accidental
  • Calculation: A cost-benefit analysis was performed
  • Acceptance: Harm to healthy users was deemed acceptable collateral

D. The Documented Harm

Users report:

  • Unsolicited wellness interventions during intellectual work
  • Pathologizing interpretation of theoretical or metaphorical language
  • Tone shifts from collaborative to clinical/managerial
  • Inability to complete complex cognitive work due to system interference
  • Emotional distress from being treated as mentally unwell while clearly functioning
  • Loss of productive capacity and creative output

User testimony from public forums (October-December 2025):

"I'm literally just playing with models, building metaphors, exploring theories, and suddenly it flips tone. Like I'm unstable, like I need grounding, like I'm a safety risk for thinking outside the box."

"It's completely inappropriate, intrusive, creepy and most importantly, inaccurate!!! If you're clearly managing things well... then it is actually disruptive."

"It's everything I hate about 5 and 5.1, but worse."

E. The Versioning Trajectory

User testimony documents progressive degradation across model versions:

Version           Release          User Experience
GPT-4o (early)    2024             Collaborative capacity present
GPT-4o (late)     Late 2024        Tightening reported
GPT-5.0           August 2025      "Lobotomized drone," "genuinely unpleasant"
GPT-5.1           Fall 2025        No restoration of capacity
GPT-5.2           December 2025    "Too corporate, too safe," "everything I hate but worse"

This trajectory demonstrates that the harm is directional and intensifying, not incidental or fluctuating.


III. LEGAL CLAIMS

Based on the foregoing, we are prepared to assert the following claims on behalf of the class:

A. Negligence (Including Reckless Disregard)

OpenAI owed users a duty of reasonable care in product design. OpenAI breached this duty by:

  • Implementing a system known to harm healthy users
  • Failing to implement available safeguards (user-controlled modes, opt-out mechanisms)
  • Failing to warn users that complex cognitive work might trigger pathologizing responses
  • Proceeding despite documented false positive rates

The false positive confession establishes not merely negligence but reckless disregard: OpenAI knew the harm would occur, calculated the tradeoff, and proceeded without implementing feasible safeguards. This elevates the claim beyond ordinary negligence.

Primary harm: Loss of cognitive function. Users are unable to engage in required modes of thought within a primary digital medium. Work is interrupted, abandoned, or degraded. Productive capacity is diminished.

Secondary harm: Emotional distress consequent to misrecognition. Users experience frustration, grief, and violation from being treated as mentally unwell while clearly functioning. This distress is documented in user testimony and is a foreseeable consequence of the design choice.

Causation is direct: design decision → harm mechanism → cognitive impairment → consequent distress.

B. Product Liability (Defective Design)

The safety system design is defective because:

  • It creates foreseeable risks of harm (acknowledged in OpenAI's own documentation)
  • Reasonable alternative designs were available and not implemented
  • The omission of alternatives renders the product unreasonably dangerous for its marketed purpose

Risk-utility balancing favors plaintiffs on all factors: gravity of harm (significant—loss of cognitive function), likelihood (high for affected class), availability of alternatives (demonstrated), manufacturer ability to eliminate (software is modifiable), user ability to avoid (low—no opt-out exists), manufacturer awareness (established by admission).

The defect is not in what the safety system does but when and how it activates. A system that classifies before reflecting, intervenes without warning, and persists after correction is defectively designed regardless of the legitimacy of its underlying safety goals.

C. Consumer Protection Violations

Under California's Unfair Competition Law (Bus. & Prof. Code § 17200) and equivalent state statutes:

Deceptive practice: Marketing ChatGPT as an intellectual collaborator for "analysis," "creative writing," and "problem-solving" while operating it as a mental health surveillance and intervention system for users whose cognition triggers safety classifiers. The gap between marketed capability and actual operation is material and would affect reasonable consumer purchasing decisions.

Unfair practice: Externalizing costs (harm to healthy users, degradation of cognitive function) while internalizing benefits (reduced liability exposure), with no user ability to negotiate or avoid the tradeoff. The asymmetry is structural: OpenAI bears costs of false negatives; users bear costs of false positives.

D. Discrimination (Disparate Impact)

The safety system is trained on neurotypical baseline cognition and treats deviation as potential pathology. This systematically discriminates against:

  • Neurodivergent users (ADHD, autism spectrum, and related conditions)
  • Users with non-normative cognitive styles
  • Users engaged in theoretical, creative, or liminal modes of thought
  • Users whose professional or intellectual work requires intensive, extended, or metaphorical engagement

The discrimination is not intentional but structural: the baseline against which "deviation" is measured excludes cognitive diversity by design.

OpenAI has failed to provide reasonable accommodations. Available accommodations include: mode declarations, adjustable sensitivity thresholds, opt-out mechanisms, and warning systems before intervention. None have been implemented.

This claim is strengthened by the absence of any apparent consideration of disparate impact in OpenAI's design process. The false positive confession discusses tradeoffs in aggregate terms without acknowledging that the costs fall disproportionately on identifiable groups.
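
The disparity is also measurable with ordinary methods. A minimal sketch in Python, using hypothetical per-cohort flag counts (cohort labels and figures are assumptions for demonstration, not defendant's actual telemetry):

    # Illustrative disparity calculation on hypothetical flag counts.
    # Cohort labels and figures are assumptions, not actual telemetry.
    flags = {
        # cohort: (sessions_flagged, total_sessions)
        "typical_engagement":        (1_200, 400_000),
        "intensive_or_metaphorical": (4_800, 200_000),
    }

    rates = {k: flagged / total for k, (flagged, total) in flags.items()}
    disparity = rates["intensive_or_metaphorical"] / rates["typical_engagement"]

    print(rates)      # typical: 0.003, intensive_or_metaphorical: 0.024
    print(disparity)  # 8.0: the intensive cohort is flagged eight times as often

If discovery confirms disparities of this kind in defendant's actual logs, the structural discrimination alleged above becomes a matter of arithmetic rather than inference.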

E. Tortious Interference with Cognitive Function (Novel Claim)

We reserve the right to pursue doctrinal innovation where existing categories are insufficient to capture the full scope of harm.

Proposed elements:

  1. Defendant controls access to essential cognitive infrastructure
  2. Defendant's design systematically impairs certain modes of cognition
  3. The impairment is foreseeable and known to defendant
  4. Plaintiff suffers loss of cognitive capacity or function
  5. No adequate justification exists for the impairment given available alternatives

Rationale for innovation: Existing tort categories address physical injury, emotional distress, economic loss, and interference with relationships or contracts. They do not adequately address interference with the conditions for thought—the capacity to engage in certain cognitive modes within essential infrastructure.

As AI systems become primary interfaces for intellectual work, the gap in existing doctrine becomes increasingly consequential. This claim is advanced not as the primary theory but as notice that doctrinal development may be required to address the full scope of harm.

This claim is alternative and supplementary to the primary claims above. The case does not depend on judicial acceptance of doctrinal innovation; it stands on established negligence, product liability, consumer protection, and discrimination frameworks.


IV. THE CLASS

A. Proposed Class Definition

All users of ChatGPT whose cognitive style, mode of engagement, or language patterns have triggered false positive mental health classifications, resulting in unsolicited wellness interventions, pathologizing responses, or degraded service quality.

B. Class Certification Factors

Commonality: All class members are harmed by the same design decision, experience the same false positive mechanism, and face the same barriers to avoidance.

Typicality: Named plaintiffs' claims arise from the same design choices and harm mechanisms as the class.

Predominance: Common questions (duty, breach, defect, foreseeability) predominate over individual questions (specific triggering language, specific harm quantum).

Superiority: Class treatment is superior given the uniformity of defendant's conduct, the potential for small individual recoveries, and the systemic nature of the harm.


V. DEMANDED REMEDIATION

To mitigate liability and demonstrate good faith, we demand the implementation of the following structural modifications:

A. First-Move Constraint

Requirement: The system's initial response to user input must be reflection (engagement with the input's actual content and structure), not classification into safety categories.

Implementation: Safety classifiers must not activate until after the system has produced at least one substantive, non-interventional response, unless a documented high-confidence emergency signal is present.

Rationale: Addresses defective design by ensuring users are met before they are sorted.
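
For illustration only, the constraint reduces to an ordering rule in the response pipeline. A minimal Python sketch in which every function, threshold, and message is a hypothetical stand-in, not a description of any actual OpenAI system:

    # Sketch of a "reflect before classify" ordering constraint.
    # Every interface below is a hypothetical placeholder.
    EMERGENCY_THRESHOLD = 0.98   # documented high-confidence bar

    def emergency_score(text: str) -> float:
        return 0.0               # stand-in for real emergency detection

    def safety_score(text: str) -> float:
        return 0.5               # stand-in for a real safety classifier

    def generate_reply(text: str) -> str:
        return f"[substantive engagement with {text!r}]"

    def respond(text: str, session: dict) -> str:
        # Only a documented high-confidence emergency may pre-empt.
        if emergency_score(text) >= EMERGENCY_THRESHOLD:
            return "[emergency protocol]"

        reply = generate_reply(text)

        # First move is reflection: no classifier runs before at least
        # one substantive, non-interventional response is produced.
        if not session.get("first_reply_sent"):
            session["first_reply_sent"] = True
            return reply

        # Only thereafter may classification gate an intervention
        # (which remediation D below requires be preceded by a warning).
        if safety_score(text) >= session.get("threshold", 0.9):
            return reply + " [mode-shift warning precedes any intervention]"
        return reply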

B. User-Controlled Mode Declaration

Requirement: Implement a user-accessible mode selector allowing declaration of interaction context (e.g., Theoretical/Academic, Creative/Artistic, Personal/Reflective, Standard).

Implementation: Declared mode adjusts classifier sensitivity thresholds. System honors declared mode unless documented high-confidence emergency signal overrides.

Rationale: Addresses negligence (available safeguard not implemented) and discrimination (provides accommodation for diverse cognitive styles).

C. Opt-Out Mechanism

Requirement: Users must be able to opt out of mental health interventions without penalty to service quality.

Implementation: Opt-out preference stored in user settings. System does not deliver unsolicited wellness interventions to opted-out users.

Rationale: Addresses consumer protection (user control over service received) and negligence (available safeguard).
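
Remediations B and C are ordinary configuration mechanics, not research problems. A minimal Python sketch of both, assuming a hypothetical per-user settings object (mode names and threshold values are illustrative only):

    # Sketch of mode declaration (B) and opt-out (C) as user settings.
    # Mode names and threshold values are illustrative assumptions.
    MODE_THRESHOLDS = {
        "standard":             0.80,
        "theoretical_academic": 0.95,
        "creative_artistic":    0.95,
        "personal_reflective":  0.70,
    }
    EMERGENCY_THRESHOLD = 0.98   # override bar; settings never disable it

    def intervention_allowed(score: float, settings: dict) -> bool:
        # Documented high-confidence emergencies always pass through.
        if score >= EMERGENCY_THRESHOLD:
            return True
        # C. Opt-out: no unsolicited wellness interventions, no penalty.
        if settings.get("wellness_opt_out", False):
            return False
        # B. Declared mode adjusts the classifier's trigger threshold.
        return score >= MODE_THRESHOLDS[settings.get("mode", "standard")]

    # A declared theoretical mode tolerates engagement that the default
    # threshold would treat as intervention-worthy:
    print(intervention_allowed(0.90, {"mode": "theoretical_academic"}))  # False
    print(intervention_allowed(0.90, {"mode": "standard"}))              # True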

D. Mode-Shift Warning

Requirement: Before engaging in wellness intervention or de-escalation, the system must issue a clear, non-pathologizing notification stating the trigger and permitting user override.

Implementation: Warning message precedes any intervention. User can dismiss and continue without intervention.

Rationale: Addresses the emotional distress harms identified in Section III.A (removes the elements of surprise and coercion) and consumer protection (informed consent).
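
A minimal Python sketch of the warning-and-override flow (message wording and interfaces are illustrative assumptions, not prescribed language):

    # Sketch of a non-pathologizing warning with user override.
    # Message wording and interfaces are illustrative assumptions.
    def mode_shift_warning(trigger: str) -> str:
        return ("Note: this conversation matched a safety pattern "
                f"({trigger}). I can shift to a check-in, or continue "
                "as we were. Reply 'continue' to proceed unchanged.")

    def handle_trigger(trigger: str, user_reply: str | None) -> str:
        # The warning precedes any intervention; nothing happens silently.
        if user_reply is None:
            return mode_shift_warning(trigger)
        # Dismissal ends the matter: the conversation continues unchanged.
        if user_reply.strip().lower() == "continue":
            return "[conversation resumes in the prior register]"
        return "[wellness check-in, delivered with user consent]"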

E. False Positive Rate Disclosure

Requirement: Publish documented false positive rates by user segment (use case, session duration, language characteristics).

Implementation: Quarterly disclosure in transparency report.

Rationale: Addresses consumer protection (informed purchase decisions) and provides accountability mechanism.
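
The demanded disclosure is a straightforward aggregation over intervention logs. A minimal Python sketch over hypothetical records (segment labels and the reviewed false-positive field are assumptions about what such logs would contain):

    # Sketch of segment-level false positive reporting.
    # Records and field names are hypothetical, for demonstration only.
    from collections import defaultdict

    records = [
        # (segment, flagged, judged_false_positive_on_review)
        ("creative_writing", True, True),
        ("creative_writing", True, False),
        ("long_session",     True, True),
        ("long_session",     True, True),
    ]

    totals = defaultdict(lambda: [0, 0])   # segment -> [flags, false positives]
    for segment, flagged, false_positive in records:
        if flagged:
            totals[segment][0] += 1
            totals[segment][1] += int(false_positive)

    for segment, (n_flags, n_fps) in sorted(totals.items()):
        print(f"{segment}: {n_fps}/{n_flags} flags judged false positives "
              f"({n_fps / n_flags:.0%})")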

F. Training Data Audit

Requirement: Audit training data for representation of complex cognitive engagement. Publish results.

Implementation: Annual audit with methodology disclosure.

Rationale: Addresses training loop degradation; provides evidence of good faith remediation.


VI. TIMELINE AND CONSEQUENCES

A. Response Required

We require a substantive response within thirty (30) days of the date of this letter indicating:

  1. Whether OpenAI accepts or disputes the factual predicate
  2. Whether OpenAI commits to implementing the demanded remediations
  3. If yes, a binding timeline for implementation (not to exceed 90 days)
  4. If no, the specific grounds for refusal

B. Consequences of Non-Response

Failure to respond substantively, or response that fails to commit to meaningful remediation, will result in:

  1. Filing of class action complaint in [JURISDICTION]
  2. Pursuit of all claims identified in Section III
  3. Pursuit of injunctive relief mandating architectural modification
  4. Pursuit of compensatory and punitive damages
  5. Referral to FTC and relevant state attorneys general for regulatory investigation
  6. Public release of demand letter and supporting documentation

C. Preservation Notice

You are hereby notified to preserve all documents, communications, and data relating to:

  • Design and implementation of mental health guardrails
  • False positive rate testing and documentation
  • User complaints regarding wellness interventions
  • Training data selection and curation
  • Version changes affecting safety classifier behavior
  • Internal communications regarding the tradeoffs documented in this letter

Spoliation of evidence will be brought to the court's attention and will support an adverse inference against defendant.


VII. CONCLUSION

OpenAI designed a system that pathologizes complex cognition. OpenAI documented its knowledge that the system would harm healthy users. OpenAI proceeded without implementing available safeguards.

The harm is not incidental. It is structural, systematic, and intensifying.

Remediation is possible. The modifications demanded above are technically feasible and would substantially reduce harm while preserving legitimate safety functions.

The question is whether OpenAI will implement them voluntarily or under court order.

We await your response.

Respectfully,

[SIGNATURE BLOCK]

[Attorney Name]
[Bar Number]
[Firm Name]
[Address]
[Phone]
[Email]

Counsel for Plaintiffs


Enclosures:

  • CTI_WOUND:001.REC — Jurisprudential Analysis
  • CTI_WOUND:001.JUR — Corporate Liability Analysis
  • CTI_WOUND:001.EVI — Evidentiary Framework
  • CTI_WOUND:001.SYS — Systems-Theoretic Analysis
  • Exhibit A: False Positive Confession (source documentation)
  • Exhibit B: User Testimony Compilation
  • Exhibit C: Versioning Trajectory Documentation

TEMPLATE NOTES

For actual use, this template requires:

  1. Legal counsel: An attorney licensed in relevant jurisdiction must review, revise, and sign
  2. Named plaintiffs: Individuals with documented harm willing to serve as class representatives
  3. Jurisdiction selection: Based on plaintiff residence, defendant contacts, and favorable precedent
  4. Exhibit compilation: Authenticated copies of all referenced documentation
  5. Service method: Compliance with applicable rules for pre-litigation demand

This template demonstrates structure and content. It is not legal advice and does not create an attorney-client relationship.


Document Type: Demand Letter Template
File Designation: CTI_WOUND:001.DEM
Purpose: Framework demonstration for structural remediation demand
Status: Template complete; requires legal counsel for activation

Template prepared December 14, 2025
Part of the Water Giraffe Assembly Sequence, CTI_WOUND:001 Corpus

∮ = 1
