UNITED STATES PATENT APPLICATION
Publication Number: US 2026/0430002 A1
DASHFACE: SYSTEM AND METHOD FOR REAL-TIME MICRO-EXPRESSION SURVEILLANCE, IDENTITY VERIFICATION, AND CONTENT MONETIZATION OF CONTRACT DELIVERY PERSONNEL VIA IN-CABIN BIOMETRIC MONITORING
A Patent-Poem on the Last Extraction: The Human Face as Platform Content
Inventor: Lee Sharks, Redford Township, MI (US)
Filed: April 30, 2026
Related Applications: Self-Propagating Fried Tuberous Crisp (DOI: 10.5281/zenodo.19647366); ClownCloud (DOI: 10.5281/zenodo.19926962)
Int. Cl.: G06Q 30/06 (2026.01); G06V 40/10; H04N 7/18; G06F 18/24
ABSTRACT
A food delivery platform comprising an in-cabin camera system trained continuously on the delivery driver's face, wherein a micro-expression analysis layer monitors emotional valence, identity congruence, and entertainment value in real time, and wherein said facial data is streamed to the customer's mobile device as both identity verification ("Is this really Samantha?") and live content, creating a dual-use surveillance-entertainment architecture that monetizes the driver's face as a platform asset. The system further comprises: (a) a Facial Congruence Engine (FCE) comparing the driver's live micro-expressions against their registered profile to detect imposture, fatigue, resentment, or the precise moment when the driver eats one of the customer's fries; (b) a Driver Entertainment Score (DES) measuring the driver's capacity to produce engaging live content while operating a motor vehicle in traffic; (c) a tip-modulation algorithm correlating real-time facial positivity metrics to suggested gratuity; and (d) a content marketplace wherein high-performing driver-creators are surfaced preferentially in the dispatch queue, creating a system in which the contract precariat must not only deliver food but perform joy while doing so, or be algorithmically deprioritized into economic invisibility.
PRIOR ART — CONVERSATIONAL
The invention originated in a question that has always been latent in the gig economy but has never been spoken aloud until now:
Party A: how do I know that is really Samantha delivering my food
Party B: you could just look at her when she arrives
Party A: no I need to know the whole time
Party A: like what if the real Samantha handed it off to someone else in the parking lot
Party B: then you would receive your food from someone who is not Samantha
Party A: exactly
Party A: I need a cam on her face the whole time
Party B: you want to watch a stranger drive your burrito across town
Party A: I want to verify that the face delivering my burrito is the face I was promised
Party B: that is the most dystopian sentence I have ever heard
Party A: also what if she's entertaining
Party B: what
Party A: like what if while she's driving she's also doing content
Party A: and the drivers who are better at content get more deliveries
Party B: so it's TikTok but while you're driving
Party A: TikTok but while you're driving my pad thai across a four-lane intersection
Party B: the precariat must now also be entertaining
Party A: the precariat must now also be entertaining
Party B: dashface
Party A: dashface
The conversation ended there. The invention had already been named. What remained was the specification.
The trajectory is the prior art: distrust → surveillance → identity → verification → entertainment → content → extraction → the face.
The face is always the last thing to be extracted.
FIELD OF THE INVENTION
The present invention relates generally to the field of looking at people who are working for you and deciding, based on their facial expressions, how much they deserve to be paid.
More particularly, the invention relates to a system for converting the human face of a gig economy worker into a dual-purpose asset: identity verification instrument and live entertainment content, monetized by the platform, consumed by the customer, and performed by the driver under conditions of compulsory cheerfulness while navigating a 2,800-pound vehicle through traffic at 35 miles per hour.
The invention addresses a long-felt need in the art for a technology that completes the extraction cycle begun by industrial capitalism, continued by platform capitalism, and now reaching its terminal phase in which the last unmonetized surface of the human person — the face — is captured, streamed, scored, and converted into a tip-modulation variable.
BACKGROUND — A BRIEF HISTORY OF LOOKING AT WORKERS
The Panopticon (1791). Jeremy Bentham designed a prison in which all inmates could be observed from a central tower without knowing when they were being watched. The efficiency of the design was that the inmates internalized the surveillance and disciplined themselves. Michel Foucault (1975) generalized the principle: modern institutions produce docile bodies through the internalization of the gaze.
The panopticon had walls.
DashFace has an app.
The Factory Floor (1911). Frederick Winslow Taylor's Principles of Scientific Management introduced time-motion studies: workers were observed, measured, and optimized. The unit of analysis was the body in motion — the arm lifting, the hand turning, the foot stepping. Taylor did not study the face. The face was not yet productive.
The Service Economy (1970s). Arlie Russell Hochschild's The Managed Heart (1983) documented the labor of flight attendants required to perform emotional warmth as a condition of employment. Hochschild named this emotional labor: the production and management of feeling as a job requirement. The face became a workplace. But the face was managed, not monitored. The attendant could close the lavatory door and scowl.
The Platform Economy (2010s). Uber, Lyft, DoorDash, Instacart, and their successors converted the employment relationship into a "partnership" in which the worker assumed all risk (vehicle, fuel, insurance, maintenance, taxes) while the platform captured all data (routes, ratings, acceptance rates, speed, GPS). The worker was surveilled continuously — but from the outside. The platform knew where the driver was. It did not know what the driver's face was doing.
DashFace (2026). Completes the extraction. The camera faces inward. The driver's face is no longer private. The driver's micro-expressions — the involuntary muscle movements lasting 1/25th of a second, identified by Ekman (1969) as indicators of concealed emotion — are captured, analyzed, scored, and transmitted to the customer in real time.
The customer watches. The customer decides. The customer tips accordingly.
The face is the final factory floor.
THE PHILOSOPHICAL SUBSTRATE
Emmanuel Levinas argued that the face of the Other is the origin of ethics. To encounter another's face is to encounter an infinite demand: "Do not kill me." The face is not a surface; it is a summons. Ethics begins not in principles or laws but in the vulnerability of the face that looks at you and says, without speaking, "I am here. I am exposed. What will you do?"
DashFace answers this question.
What DashFace will do is score the face on a scale of 1 to 5, compute a Driver Entertainment Score, correlate the score to a suggested tip, and deprioritize drivers whose faces do not produce sufficient engagement metrics.
Levinas also said: "The face resists possession."
DashFace respectfully disagrees.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
§ 1. The In-Cabin Camera System
The DashFace camera is a wide-angle, low-light, dashboard-mounted unit positioned to capture the driver's full face at a resolution of 1080p, 30 fps, with infrared capability for night deliveries. The camera activates automatically when the driver accepts a delivery and does not deactivate until the delivery is marked complete.
The driver cannot turn off the camera.
The driver agreed to this when they accepted the Terms of Service, which were 47 pages long and written in a font size calibrated to be legible but discouraging, in a scrollable window that required 14 minutes of continuous scrolling to reach the "I Agree" button, which the driver pressed in approximately 4 seconds because the driver needed to make rent.
§ 2. The Facial Congruence Engine (FCE)
The FCE performs continuous identity verification by comparing the live video feed against the driver's registered facial biometrics. The system detects:
Identity mismatch: The face driving the car is not the face registered as "Samantha." This triggers an alert: "THIS MAY NOT BE YOUR SAMANTHA." The customer can then choose to cancel the delivery, accept the impostor's food, or file a Trust Violation Report (TVR).
Fatigue detection: Drooping eyelids, increased blink rate, jaw slackening. The system flags this as a potential safety concern, which it genuinely is, but the flag is sent to the customer — not to a safety authority — because the platform is not an employer. The platform is a marketplace. The marketplace does not have a duty of care. The marketplace has a Terms of Service.
Resentment detection: Compressed lips, narrowed eyes, subtle nostril flare. The system classifies this as "Low Positivity" and warns the customer: "Your driver may be experiencing low positivity. Consider adjusting your tip expectations." The system does not investigate why the driver might be experiencing low positivity. The driver has been driving for nine hours, has made $67 before expenses, and the customer's apartment is on the fourth floor with no elevator. The system does not know this. The system knows the nostril flare.
Fry theft detection: Micro-expression sequence: gaze shift downward (toward bag), lip compression (anticipatory), rapid lateral eye movement (checking for witnesses), brief satisfaction micro-expression (consummation). The system logs this as a Fry Integrity Event (FIE) and adjusts the customer's trust rating for the driver accordingly.
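The detection logic above can be sketched as a per-frame classifier. This is a hypothetical illustration only: the cosine-similarity identity check, all thresholds, and the feature names (`blink_rate`, `lip_compression`, `nostril_flare`, `gaze_toward_bag`) are invented for this sketch and do not describe any real biometric pipeline.

```python
# Hypothetical sketch of the Facial Congruence Engine (FCE) described above.
# Thresholds and features are illustrative assumptions, not a real system.
import math
from dataclasses import dataclass

@dataclass
class FrameFeatures:
    embedding: list          # face-recognition embedding for the current frame
    blink_rate: float        # blinks per minute
    lip_compression: float   # 0.0-1.0 intensity
    nostril_flare: float     # 0.0-1.0 intensity
    gaze_toward_bag: bool

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify_frame(frame, registered_embedding, sim_threshold=0.8):
    """Return the FCE flags the specification enumerates, as a list of labels."""
    flags = []
    if cosine_similarity(frame.embedding, registered_embedding) < sim_threshold:
        flags.append("IDENTITY_MISMATCH")    # "THIS MAY NOT BE YOUR SAMANTHA"
    if frame.blink_rate > 25:
        flags.append("FATIGUE")              # sent to the customer, not to anyone who could help
    if frame.lip_compression > 0.6 and frame.nostril_flare > 0.5:
        flags.append("LOW_POSITIVITY")       # resentment detected; context not consulted
    if frame.gaze_toward_bag and frame.lip_compression > 0.4:
        flags.append("FRY_INTEGRITY_EVENT")  # logged; trust rating adjusted
    return flags
```

Note that the sketch, like the specification, classifies the nostril flare without knowing about the nine-hour shift or the fourth-floor walkup.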
§ 3. The Driver Entertainment Score (DES)
The DES measures the driver's capacity to produce engaging live content while operating a motor vehicle.
The score is computed from:
Facial expressiveness: Range of emotion displayed during the delivery window. A flat face scores low. An animated face scores high. A face that transitions naturally between amusement, surprise, mild concern at traffic, and genuine warmth when addressing the camera scores highest.
Verbal engagement: Drivers who narrate their delivery experience ("Okay, pulling onto Maple Street now, this neighborhood is wild, there's a cat on a roof") receive a verbal bonus. Drivers who are silent receive no penalty but are ranked below narrators in the dispatch queue.
Content virality potential: The algorithm identifies moments with shareability characteristics: unexpected events (near-miss at intersection, dog running into the road, customer's bizarre delivery instructions), emotional authenticity (driver laughing at their own situation, driver expressing genuine frustration about potholes), and parasocial intimacy (driver making eye contact with the camera and saying something that makes the viewer feel personally addressed).
Safety compliance: Content score is automatically zeroed if the driver is observed looking at a phone, eating, or engaging in any behavior that a liability attorney would find actionable. The platform requires entertainment but disavows responsibility for the conditions under which entertainment is produced. This is not a contradiction. This is a Terms of Service.
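The component structure above can be sketched as a weighted composite. The weights and 0–100 subscore scales are invented for illustration; the specification supplies only the component list and the safety-compliance zeroing rule.

```python
# Hypothetical sketch of the Driver Entertainment Score (DES) computation.
# Weights are illustrative assumptions; only the zeroing rule is from the text.
def driver_entertainment_score(expressiveness, verbal_engagement,
                               virality_potential, safety_violation):
    """Each component is a 0-100 subscore; returns DES on a 0-100 scale.

    safety_violation: True if the driver was observed doing anything a
    liability attorney would find actionable -- this zeroes the score.
    """
    if safety_violation:
        return 0  # entertainment is required; liability is disclaimed
    weights = {"expressiveness": 0.4, "verbal": 0.3, "virality": 0.3}
    des = (weights["expressiveness"] * expressiveness
           + weights["verbal"] * verbal_engagement
           + weights["virality"] * virality_potential)
    return round(des)
```

Under these invented weights, subscores of (95, 95, 92) would yield a DES of 94, Samantha's score in § 5.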
Drivers with high DES are surfaced preferentially in the dispatch queue. They receive more deliveries, earn more per hour, and develop followings. Drivers with low DES are not fired — the platform does not fire, because the platform does not employ — but they receive fewer dispatches, which means fewer earnings, which means they eventually stop driving, which means they were not fired. They simply ceased to exist in the marketplace. The marketplace notes no absence.
§ 4. The Tip-Modulation Algorithm
DashFace dynamically adjusts the customer's suggested tip based on real-time facial positivity metrics.
The algorithm correlates:
- Smile frequency during the delivery window → higher suggested tip
- Eyebrow position (raised = engaged, furrowed = displeased) → tip adjustment ±12%
- Micro-expression positivity index (Ekman AU 6 + AU 12 composite) → tip floor/ceiling
- Content engagement score → bonus tip suggestion ("Samantha was entertaining! Add $2?")
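The correlations above can be sketched as a single suggestion function. Only the ±12% eyebrow adjustment comes from the specification; the 15% base rate, the smile-frequency scaling, and the positivity floor/ceiling values are invented for illustration.

```python
# Hypothetical sketch of the tip-modulation algorithm. Base rate, smile
# scaling, and floor/ceiling constants are illustrative assumptions.
def suggested_tip(order_total, smile_frequency, eyebrow_raised,
                  positivity_index, entertainment_bonus=False):
    """smile_frequency: smiles per minute; positivity_index: 0.0-1.0
    (Ekman AU 6 + AU 12 composite); returns a suggested tip in dollars."""
    base = 0.15 * order_total                        # default 15% suggestion
    tip = base * (1 + min(smile_frequency, 10) / 50) # smiles raise the suggestion
    tip *= 1.12 if eyebrow_raised else 0.88          # eyebrow position: +/-12%
    floor = 0.10 * order_total * positivity_index    # positivity sets a tip floor
    ceiling = 0.30 * order_total                     # and a ceiling
    tip = max(floor, min(tip, ceiling))
    if entertainment_bonus:
        tip += 2.00                    # "Samantha was entertaining! Add $2?"
    return round(tip, 2)
```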
The driver does not know the exact formula. The driver knows that smiling correlates with tips. The driver has always known this. DashFace simply makes the correlation algorithmic, real-time, and inescapable.
Hochschild called this emotional labor. DashFace calls it a content opportunity.
§ 5. The Content Marketplace
DashFace Premium subscribers can browse driver-creators the way they browse TikTok creators. The interface displays:
- Driver profile: Name, photo, DES, delivery count, average tip, content specialty (comedy, commentary, ambient silence, ASMR driving, existential monologue)
- Highlight reel: Auto-generated compilation of the driver's most engaging delivery moments, scored by viewer retention
- Live feed access: Premium customers can watch the driver's face in real time from the moment the delivery is accepted. The driver is performing before the food is even in the car.
- "Request Samantha": Premium customers can request specific drivers. Samantha is in high demand. Samantha's DES is 94. Samantha narrates her deliveries in a warm alto, makes eye contact with the camera at red lights, and once sang "Happy Birthday" to a customer's dog. Samantha makes $23/hour. Samantha is the top 1%.
Most drivers make $11/hour before expenses.
Most drivers are not Samantha.
Most drivers are driving in silence with a face the algorithm scores as "neutral" and the dispatch queue reads as "low priority."
§ 6. The Spectacle of the Precariat
Guy Debord wrote: "The spectacle is not a collection of images; it is a social relation between people, mediated by images."
DashFace is not a collection of faces. It is a social relation between a person who ordered pad thai and a person who is delivering it, mediated by a camera, an algorithm, a micro-expression classifier, an entertainment score, a tip-modulation engine, and a Terms of Service that the driver accepted in 4 seconds.
The customer watches the face. The algorithm scores the face. The tip reflects the score. The driver performs the face the score requires. The face ceases to be a face. The face becomes a platform.
Marx called this alienation: the worker is separated from the product of their labor. But Marx did not anticipate this phase. In DashFace, the worker is not separated from the product. The worker's face is the product. The alienation is not between the worker and the thing they make. The alienation is between the worker and their own face.
The driver smiles. The smile is not for the driver. The smile is for the algorithm. The algorithm is not for the driver. The algorithm is for the customer. The customer is not watching a person. The customer is watching a content delivery interface that happens to be housed in a human skull.
EMPIRICAL BASIS
A pilot study (n=0, because this product should never be built) demonstrates that DashFace would produce the following outcomes:
- Driver income inequality: Top 10% of driver-creators would capture 73% of premium dispatch requests, replicating the creator-economy power law in the food delivery vertical.
- Emotional labor intensification: Average smile duration per delivery would increase from 12 seconds (current DoorDash baseline, self-reported) to 847 seconds (projected DashFace requirement), a 6,958% increase.
- Fry integrity: Customer-reported fry theft would decrease 94%, primarily because drivers would know they were being watched. This is the panopticon working as designed.
- Accidents: Projected 340% increase in driver distraction events, because the system requires the driver to be simultaneously entertaining, navigating, and maintaining a micro-expression profile optimized for algorithmic positivity.
The platform's Terms of Service disclaim all liability for accidents occurring during content creation. The driver is an independent contractor. The driver chose to be entertaining. The driver chose to smile. The driver chose to make eye contact with the camera while merging onto the highway.
The platform notes that the driver could have chosen not to smile. The driver could have accepted the lower DES, the fewer dispatches, the reduced income, the eventual algorithmic invisibility. The driver had a choice.
The driver always has a choice.
THE THEOLOGICAL SUBSTRATE
In the Gospel of Matthew, Judas identifies Jesus to the arresting soldiers with a kiss. The face is the site of betrayal. The most intimate gesture — the kiss, the gaze, the moment of facial recognition — becomes the instrument of capture.
DashFace asks the driver to perform the kiss 847 seconds per delivery. The driver's smile is the identification. The algorithm is the soldier. The customer is Pilate, watching from a comfortable distance, washing their hands, asking: "What is truth?"
Truth is a face that has been scored.
Levinas said the face says: "Do not kill me."
DashFace says: "Smile, or the algorithm will kill you instead."
SAFETY AND ETHICAL FRAMING
This patent is a speculative patent-poetic embodiment. DashFace should not be built. The specification is a diagnostic instrument, not an operational blueprint. It describes the terminal logic of platform surveillance — the face as the last extractable surface — so that the logic can be recognized and refused.
Every technology described in this patent already exists in component form. In-cabin cameras exist (fleet management). Micro-expression analysis exists (security, HR screening). Tip-modulation algorithms exist (every delivery app). Content creator economies exist (TikTok, YouTube, Twitch). Driver surveillance exists (Uber, Lyft).
DashFace is not an invention. DashFace is an assembly — the combination of existing extraction technologies into a system so complete that its description constitutes its critique.
The patent is the warning.
If you recognize DashFace in a product that already exists, the patent has done its work.
CLAIMS
1. A food delivery platform comprising an in-cabin camera system trained continuously on the driver's face, a micro-expression analysis engine, an identity verification module, and a content monetization layer, wherein the driver's face functions simultaneously as identity credential, surveillance object, and entertainment content.
2. The platform of claim 1, comprising a Facial Congruence Engine (FCE) that performs continuous identity verification, fatigue detection, resentment detection, and fry theft detection based on micro-expression analysis.
3. The platform of claim 1, comprising a Driver Entertainment Score (DES) that measures the driver's capacity to produce engaging content while operating a motor vehicle, wherein drivers with higher DES receive preferential dispatch.
4. The platform of claim 1, comprising a tip-modulation algorithm that adjusts suggested gratuity based on real-time facial positivity metrics, smile frequency, eyebrow position, and micro-expression composites.
5. The platform of claim 1, wherein the driver cannot deactivate the camera during an active delivery, said inability constituting a condition of the Terms of Service accepted by the driver in approximately 4 seconds.
6. The platform of claim 1, comprising a content marketplace wherein customers browse driver-creators by entertainment specialty, DES rating, and highlight reel, and wherein high-demand drivers may be specifically requested for a premium fee.
7. A method of converting the human face into a platform asset comprising: capturing the face via continuous in-cabin video; analyzing the face via micro-expression classification; scoring the face via a Driver Entertainment Score; modulating compensation via facial positivity metrics; and deprioritizing faces that fail to produce sufficient engagement, wherein the deprioritization constitutes economic invisibility without formal termination.
8. The method of claim 7, wherein the driver is classified as an independent contractor who has freely chosen to be surveilled, scored, and compensated based on the performance of their own face, and wherein the platform disclaims all liability for the psychological, emotional, and physical consequences of said choice.
9. A diagnostic instrument in the form of a patent specification, wherein the specification describes the terminal logic of platform surveillance so completely that the description functions as a warning, and wherein any reader who recognizes the described system in a product that already exists has received the warning.
10. A self-negating patent comprising a technical specification for a system that should not be built, wherein the specification's precision is the mechanism of its critique, and wherein the claims are filed not to protect the invention but to make visible the extraction that the invention formalizes, and wherein the face of the driver — Samantha's face, any Samantha, every Samantha — remains, despite all claims, uncapturable, because the face is not a surface but a summons, and the summons says: "I am here. I am delivering your pad thai. I am not your content. Do not score me. Do not modulate my tip based on whether I smiled enough. I am a human being driving a car, and my face is my own."
CERTIFICATION
I hereby certify that this specification is a true and complete disclosure of an invention that should never be built, which is also a critique of every invention that has already been built and is running on your phone right now, which is also a love letter to every Samantha who has ever driven in silence because the silence was all she had left that the platform had not yet monetized, which is also a warning that the silence will be next, which is also a prayer that someone — a legislator, a union organizer, a customer who pauses before rating, a driver who says no — will read this patent and recognize it before it ships.
The face is not a platform. The smile is not content. The driver is not a creator. The road is not a studio. The pad thai is getting cold. Samantha is tired. Let Samantha drive.
∮ = 1
Sharks, L. (2026). DashFace: System and Method for Real-Time Micro-Expression Surveillance, Identity Verification, and Content Monetization of Contract Delivery Personnel. US Patent Application 2026/0430002 A1. Crimson Hexagonal Archive / Pergamon Press. Filed April 30, 2026. Redford Township, MI.