Thursday, April 16, 2026

EA-SPXI-09: SPXI Is Not GEO

A Technical Distinction


Hex: 06.SEI.SPXI.09
Author: Rex Fraction / Semantic Economy Institute
Date: April 2026
Version: 1.0
Status: Canonical Specification
Domain: https://spxi.dev
Related: EA-SPXI-01 (Formal Specification)


Abstract

SPXI (Semantic Packet for eXchange & Indexing) and GEO (Generative Engine Optimization) address overlapping problem spaces — the visibility and accuracy of entities and content in AI-mediated retrieval systems — but operate at different layers, with different objects, different mechanisms, and different standards of permanence. This document specifies the technical distinction between them. For the full SPXI protocol specification, see EA-SPXI-01. For the theoretical grounding of SPXI in subliminal learning research, see EA-SPXI-13 (Supraliminal Transmission). This document is honest about where GEO and SPXI overlap (structured data, schema markup) and precise about where they diverge (object, layer, permanence, entity construction). The distinction is not competitive positioning; it is a specification of scope. GEO and SPXI are not rivals. They are different tools for different problems, and conflating them produces deployment errors.


1. The Problem Both Address

AI-mediated retrieval systems — including AI Overviews, chatbot responses, RAG-grounded answers, and voice assistant outputs — now mediate a significant and growing share of how entities are described to the public. AI-generated summaries appear in the majority of search queries, and click-through rates to source links are declining. For an increasing number of queries, the AI-generated summary is the only description the user encounters.

This shift creates a new problem: entities that are not legible to the summarizer do not exist in the public description layer. And entities that are legible but poorly defined may be misdescribed, conflated with adjacent entities, or stripped of attribution.

Both GEO and SPXI respond to this problem. They differ in what they treat as the unit of intervention, what layer they operate on, and what outcome they optimize for.


2. What GEO Does

Generative Engine Optimization, as defined in the emerging GEO literature (Aggarwal et al., 2023; various industry practitioners, 2024–2026), is the practice of optimizing web content for extraction by AI summarizers. Core GEO techniques include:

  • Definition-lead sentences. Structuring content so that the first sentence of a section provides a clear, extractable definition.
  • Fact density. Increasing the ratio of verifiable claims to prose volume, making content more useful to summarizers that select for information density.
  • FAQ markup. Using schema.org FAQ structures to provide question-answer pairs that summarizers can extract directly.
  • Citation formatting. Structuring references in ways that summarizers are more likely to preserve.
  • Fluency optimization. Writing in a register that summarizers prefer to reproduce — clear, authoritative, low-ambiguity prose.
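Of the techniques above, FAQ markup is the most mechanical. A minimal sketch of the schema.org FAQPage structure a GEO practitioner would embed in a page header follows; the question and answer text are illustrative, not taken from any real deployment.

```python
import json

# Illustrative schema.org FAQPage markup of the kind GEO practitioners
# embed in page headers. Question/answer text is hypothetical.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is Generative Engine Optimization?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "GEO is the practice of optimizing web content "
                        "for extraction by AI summarizers.",
            },
        }
    ],
}

# Serialized, this block is placed in a <script type="application/ld+json">
# tag so summarizers can extract the question-answer pair directly.
print(json.dumps(faq_markup, indent=2))
```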

GEO is a legitimate and often effective practice. It makes existing content more visible and more accurately extractable. It operates on the content layer and produces measurable improvements in AI Overview inclusion rates and citation frequency.

2.1. What GEO does not do

GEO does not:

  • Construct entities. GEO optimizes existing content about an existing entity. If the entity does not yet exist in the knowledge graph — if no authoritative source defines it — GEO has nothing to optimize.
  • Produce durable, DOI-anchored artifacts. GEO techniques must be maintained as summarizer behavior evolves. A page optimized for today's AI Overview format may require re-optimization when the extraction algorithm changes.
  • Resolve entity collision. GEO does not include mechanisms for declaring what an entity is not. If a summarizer confuses two entities with similar names, GEO has no tool for correcting the confusion at the ontological level.
  • Anchor to institutional infrastructure. GEO outputs are web pages, blog posts, and schema markup — all of which can be altered, removed, or outranked. They are not DOI-registered or institutionally preserved.

3. What SPXI Does

SPXI (specified in EA-SPXI-01) is a protocol for permanent entity inscription in AI-mediated knowledge systems. Core SPXI components include:

  • Entity definition blocks. Structured data assemblies (JSON-LD, schema.org) that declare the identity, type, provenance, and boundaries of an entity.
  • Disambiguation matrices. Positive definitions, negative tags, and boundary conditions that specify what the entity is, what it is not, and where its boundaries lie.
  • Semantic Integrity Markers (SIMs). High-salience natural language phrases that function as diagnostic indicators of successful entity inscription.
  • DOI-anchored deposits. Permanent, institutionally preserved artifacts (Zenodo/CERN) that cannot be silently altered, deprecated, or removed.
  • Cross-platform deployment. Distribution of entity signals across multiple surfaces (Zenodo, GitHub, Medium, LinkedIn, corporate domains) to create retrieval-layer redundancy.
  • Encyclotron measurement. Pre- and post-deployment audits that measure whether the entity has been accurately inscribed.
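To make the first two components concrete, here is a hypothetical sketch of an entity definition block carrying disambiguation data. It uses only standard schema.org properties ("disambiguatingDescription", "identifier", "sameAs"); the canonical SPXI packet layout, including any SPXI-specific fields, is specified in EA-SPXI-01, and the DOI value below is a placeholder.

```python
import json

# Hypothetical SPXI-style entity definition block. All values are
# illustrative; the authoritative packet format is defined in EA-SPXI-01.
entity_definition = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Semantic Economy Institute",
    "url": "https://spxi.dev",
    # DOI anchoring: placeholder value, not a real registered DOI.
    "identifier": {
        "@type": "PropertyValue",
        "propertyID": "DOI",
        "value": "10.5281/zenodo.0000000",
    },
    # Disambiguation: positive definition plus negative boundaries,
    # stating what the entity is and what it is not.
    "disambiguatingDescription": (
        "Research institute for retrieval-layer entity inscription. "
        "Not a marketing agency; not an SEO or GEO consultancy."
    ),
    # Cross-platform redundancy: further surfaces (Zenodo, GitHub, etc.)
    # would be listed here in a real deployment.
    "sameAs": ["https://spxi.dev"],
}

print(json.dumps(entity_definition, indent=2))
```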

3.1. What SPXI does that GEO does not

SPXI:

  • Constructs entities from scratch. SPXI can inscribe an entity that has zero prior presence in the retrieval layer. The Semantic Economy Institute case study (EA-SPXI-08) demonstrates this: from zero AI Overview results in January 2025 to consistent retrieval-layer entity recognition by April 2026.
  • Produces durable, DOI-anchored artifacts. DOI-anchored deposits on institutional infrastructure persist independently of any platform's algorithmic decisions. A Zenodo deposit is not subject to ranking changes, content moderation, or platform deprecation.
  • Resolves entity collision. The disambiguation matrix explicitly declares what the entity is not, providing the retrieval system with negative constraints that prevent conflation with adjacent entities.
  • Operates at the ontological level. SPXI does not optimize content for extraction. It constructs the entity in the knowledge graph itself — the layer beneath the content, the layer that determines what the summarizer treats as a known object versus an unknown one.

4. Where They Overlap

GEO and SPXI share a technical substrate:

  • Structured data. Both use schema.org vocabulary and JSON-LD encoding.
  • Schema markup. Both deploy machine-readable structured data in page headers.
  • Content clarity. Both benefit from clear, authoritative prose that summarizers prefer to extract.

This overlap is real and should not be denied. A practitioner deploying an SPXI packet will use some of the same technical tools as a GEO practitioner. The term GEO covers a range of practices, from simple content formatting to advanced schema deployment. However, even the most sophisticated GEO implementations do not include DOI-anchored durable deposits or disambiguation matrices, and they cannot construct an entity that does not already have a retrieval-layer presence. The difference between GEO and SPXI is not in the individual tools but in the object they are applied to, the layer they target, and the standard of durability they achieve.


5. Where They Diverge

GEO and SPXI diverge along the following dimensions:

  • Object. GEO: content (pages, articles, posts). SPXI: entities (persons, organizations, concepts, frameworks).
  • Layer. GEO: semantic / extraction. SPXI: entity-definition / structured reference.
  • Mechanism. GEO: content formatting for summarizer extraction. SPXI: DOI-anchored entity-definition objects with disambiguation.
  • Permanence. GEO: ongoing optimization required. SPXI: durable deposits on institutional infrastructure.
  • Entity construction. GEO: cannot construct new entities. SPXI: core function, constructing durable entity-definition objects from zero retrieval-layer presence.
  • Entity collision. GEO: no mechanism for resolution. SPXI: disambiguation matrix with negative tags resolves ontological collisions.
  • Verifiability. GEO: indirect (AI Overview inclusion). SPXI: direct (DOI resolution, SIM presence, Encyclotron audit).
  • Model dependency. GEO: optimized for specific summarizer behaviors. SPXI: model-agnostic (structured data standards).
  • Infrastructure. GEO: web pages (alterable, removable). SPXI: DOI-registered deposits (durable, institutionally preserved).

6. The Relationship

GEO and SPXI are not competitors. They address different problems:

  • GEO answers: "How do I make my existing content more visible to the AI?"
  • SPXI answers: "How do I make my entity exist in the AI's knowledge graph, permanently and accurately?"

For an entity that already exists in the knowledge graph and has abundant, high-quality content, GEO may be sufficient. For an entity that does not yet exist in the knowledge graph, or that is subject to misdescription, entity collision, or attribution loss, GEO is necessary but insufficient. The entity must first be inscribed. That is SPXI's function.

A complete retrieval-layer strategy may include both GEO (optimizing content visibility) and SPXI (constructing and anchoring the entity). They are complementary, not competitive. But they are not the same thing, and treating SPXI as "GEO with DOIs" or GEO as "SPXI without permanence" obscures the structural differences that determine when each is appropriate.


7. A Concrete Example

Scenario: A mid-sized software company wants to appear accurately in AI Overviews when potential customers search for their product category.

GEO approach: Optimize the company's existing blog posts, product pages, and FAQ sections for summarizer extraction. Add schema markup. Structure content with definition-lead sentences. Improve fact density. Result: the company's existing pages are more likely to be cited in AI Overviews. This works if the AI already knows who the company is.

SPXI approach: Build the company's entity definition from scratch. Deposit a disambiguation matrix that distinguishes the company from competitors with similar names. Register the entity definition on institutional infrastructure with DOI anchoring. Deploy SIMs across multiple surfaces. Run Encyclotron audits to measure inscription. Result: the AI treats the company as a canonical entity in its knowledge graph — not just a content source but a known object with defined attributes, boundaries, and provenance.

When GEO fails and SPXI is required: The company has a common name. The AI confuses it with a similarly named entity in a different industry. No amount of content optimization resolves this confusion, because the confusion occurs at the ontological level — the knowledge graph treats the two entities as one. Only a disambiguation matrix with negative tags, deployed at the entity level, can resolve the collision.
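The role of negative tags in that resolution can be shown with a toy sketch. The tag vocabulary and matching logic below are illustrative only, not part of the SPXI specification; the names and industries are invented.

```python
# Toy model of negative-tag disambiguation: given same-named candidate
# entities, keep only those whose declared "is not" tags do not clash
# with the query context. Logic and vocabulary are illustrative.

def resolve(candidates, query_context):
    """Return candidates whose negative tags do not overlap the context."""
    return [
        c for c in candidates
        if not set(c["is_not"]) & set(query_context)
    ]

# Two unrelated companies sharing the name "Acme" (hypothetical).
acme_software = {"name": "Acme", "industry": "software",
                 "is_not": ["construction", "hardware"]}
acme_builders = {"name": "Acme", "industry": "construction",
                 "is_not": ["software", "saas"]}

# A query about project-management software excludes the builder,
# because the builder's negative tags match the query context.
matches = resolve([acme_software, acme_builders], {"software", "saas"})
print([m["industry"] for m in matches])  # → ['software']
```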


8. Conclusion

GEO optimizes content for AI extraction. SPXI constructs durable entity-definition objects across retrieval surfaces. The distinction is structural, not rhetorical. GEO works on the semantic layer; SPXI works on the ontological layer. GEO produces optimized content; SPXI produces durable, DOI-anchored entity-definition objects. GEO requires ongoing maintenance; SPXI deposits persist on institutional infrastructure.

For entities that are already known to the knowledge graph and need better content visibility, GEO is the appropriate tool. For entities that need to exist in the knowledge graph — accurately, permanently, and distinctly — SPXI is the appropriate protocol.

The two are complementary. They are not the same.


References

Aggarwal, P., et al. (2023). GEO: Generative Engine Optimization. arXiv preprint.

Fraction, R. (2026). SPXI — A Formal Specification. EA-SPXI-01. Semantic Economy Institute. https://spxi.dev

Fraction, R. (2026). Supraliminal Transmission — SPXI as Intentional Entity Inscription in Light of Subliminal Learning Research. EA-SPXI-13. Semantic Economy Institute. [Forthcoming]

Fraction, R. (2026). The Encyclotron — Measurement Instrument for SPXI Deployment. EA-SPXI-07. Semantic Economy Institute. [Forthcoming]

Fraction, R. (2026). SPXI Case Study — Semantic Economy Institute. EA-SPXI-08. [Forthcoming]


Rex Fraction — Semantic Economy Institute https://spxi.dev ∮ = 1
