The Generative AI Economy & Foundation Models

The fastest technology adoption curve in recorded economic history.

Market Context

The global generative AI market reached approximately $11 billion in 2022. By 2025, that figure crossed $200 billion, with projections for 2030 ranging from $1.3 to $2.6 trillion.

At the center of this expansion is the Foundation Model (FM). This paradigm shift has surpassed the internet, mobile, and cloud computing in both pace and scale.

Defining the "Engine," Not the "Wrapper"

A Foundation Model is not a website, a mobile app, or a simple software feature. It is a massive neural network trained on trillions of data tokens—a process costing $100 million or more per run.

Most familiar "AI" products (e.g., ChatGPT web UI, Gemini app) are application wrappers. They are consumers of the model's intelligence, not the model itself. Our study focuses on the training layer—the mathematical engine underneath those surfaces.

Analytical Subject: The Training Organization (FM Vendor)

The unit of analysis is the FM Vendor—the organization that performs original, frontier-scale model training from scratch. We study N=49 vendors globally, including OpenAI, Google DeepMind, Anthropic, Mistral, and DeepSeek. We explicitly exclude the thousands of downstream "wrappers." In this market, the real strategic decisions about openness, pricing, and infrastructure are made at the training organization level. Everything else is a consequence of those core choices.

The Strategic Question: Toward a Platform

How do you build a business around a $100M+ engine?

The Big Issue

Frontier pre-training requires $100M+ per run. Yet, this is a highly concentrated market with a global census of only N=49 vendors.

This massive sunk cost demands an aggressive commercial strategy to survive.

The Managerial Question

How do vendor executives strategically compete in this high-stakes ecosystem?

When everyone is building similar technology, how do you construct a sustainable business model that recovers these immense investments?

Our Instinct vs. Functional Reality

Classical platform economics suggests this should lead to a winner-takes-all monopoly. To understand if that instinct is correct, we cannot just look at the technology. We must analyze these vendors through the economic functions they perform across three different observations.

First Observation: AI as a Search Engine

Matching intent to information.

The Search Engine Benchmark

Google Search UI

ChatGPT launched with an interface nearly identical to Google. This visual similarity captures an economic reality: both reduce transaction costs by connecting user intent to relevant capability.

The Core Matching Function

DeepSeek UI

In platform economics, this is matching. Google matches queries to ranked web pages; FMs match prompts to generated computational responses. Both are matching architectures.

Second Observation: AI as a Bundled Platform

Bundling discrete services into a unified product.

The eCommerce Assembling Benchmark

eCommerce UI

Amazon bundles product catalogs, reviews, and logistics into one package. In platform economics, this is assembling. It eliminates the cost of using multiple separate tools.

AI's Bundling Function

GPT Store UI

FMs perform the same assembling function. Systems like ChatGPT bundle text, code, search, and images. Google bundles Gemini into Search and Workspace.

Third Observation: The Knowledge Logic of Social Media

Usage as a training signal.

Messaging & Social Media

Messenger UI

In social media, every user action is harvested to build preference models. AI platforms do the same with Reinforcement Learning (RLHF), learning from every interaction.

The AI Data Flywheel

Open Web UI

FMs are digital platforms that match intent to capability, assemble modular tools, and manage knowledge through data. We need a framework that unifies these three.

The PEMAK Framework

Brousseau & Pénard (2007): A unified framework generating six strategic conditions.

PEMAK Framework Illustration

MATCHING (M)
MONOPOLY · INTEGRATED

Reduces the transaction costs of connecting participants with heterogeneous needs. FMs match user intent to capability via semantic embeddings.

ASSEMBLING (A)
WIDE · PAID

The modular composition of discrete services into integrated outputs. It forces a strategic tradeoff between breadth (the range of bundled capabilities) and monetization (paid versus free access).

KNOWLEDGE MGMT (K)
HIERARCHICAL · OPEN

The accumulation and deployment of knowledge generated by platform participation. Evaluates structural tradeoffs in information extraction and rights management.

3 Dimensions → 6 Binary Tradeoffs → 64 Logically Possible Configurations

Six Strategic Conditions

Operationalizing the PEMAK tradeoffs for Foundation Model vendors.

MONOPOLY (Matching): 1 = Adjacent platform dominance · 0 = Competitive niche

INTEGRATED (Matching): 1 = Owns compute + user UI · 0 = API-only delivery

WIDE (Assembling): 1 = General-purpose flagship · 0 = Specialist focus

PAID (Assembling): 1 = Premium-only pricing · 0 = Free / ecosystem strategy

HIERARCHICAL (Knowledge): 1 = Internally controlled training · 0 = Community-governed

OPEN (Knowledge): 1 = Weights publicly released · 0 = Proprietary weights

2^6 = 64 Logically Possible · 13 Actually Observed
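The configuration count is simple combinatorics. A minimal sketch in Python; the vendor rows are a hypothetical excerpt mirroring the abridged matrix later in the deck, not the full census:

```python
from itertools import product

# Six binary PEMAK conditions span 2**6 = 64 logically possible configurations.
CONDITIONS = ["MONOPOLY", "INTEGRATED", "WIDE", "PAID", "HIERARCHICAL", "OPEN"]
all_configs = list(product([0, 1], repeat=len(CONDITIONS)))
print(len(all_configs))  # 64

# Illustrative rows only (values mirror the abridged data matrix).
vendors = {
    "OpenAI":    (1, 1, 1, 1, 1, 0),
    "Google":    (1, 1, 1, 0, 1, 0),
    "Anthropic": (1, 0, 1, 1, 1, 0),
    "Mistral":   (0, 0, 1, 0, 1, 1),
    "DeepSeek":  (0, 0, 1, 0, 1, 1),  # same configuration as Mistral
}
observed = set(vendors.values())
print(len(observed))  # 4 distinct configurations in this toy sample
```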

The Central Empirical Puzzle

Opposite strategies. Same commercial outcome.

Functional Equivalence & Viability

Even though Foundation Models lack traditional two-sided networks, they execute the same economic functions as digital platforms: Matching, Assembling, and Knowledge Management. We therefore evaluate their success with a criterion from the platform literature: a vendor crosses the threshold into viability when it opens a commercial API for complementors.

Big Tech

Cross-subsidization: Actively integrated into a profitable core ecosystem (Brousseau & Pénard, 2007).

Startups

Pragmatic Legitimacy: Achieving sufficient market adoption to secure continuous VC backing (Chanson & Rocchi, 2024).

Viability is not just strict accounting profit, but being "self-sustainable" in these respective forms.

Past Platforms: Convergence

In prior digital markets, we saw convergence to a single dominant form. Amazon's closed ecosystem won eCommerce; Facebook's graph won social media; Android's open license won mobile OS.

Classical platform economics predicts winner-takes-all convergence.

FM Market 2025: Divergence

Look at the market today: Google is integrated and closed. Meta is unbundled and open. Both achieve sustainable engagement. The market is not converging; it is supporting completely opposite strategies simultaneously.

Why does a winner-takes-all market produce multiple opposite winners?
Core Research Question

Which combinations of strategic conditions are sufficient for business model viability?

Why "Sufficient"?

Standard regression isolates the "average net effect" of a single independent variable. But in complex digital ecosystems, strategies are interdependent. We are not looking for one necessary master key, but rather the equifinal recipes (sufficient configurations) that lead to the same outcome (Ragin, 2008).

Why Standard Regression Fails Here

Four structural limitations when analyzing complex digital ecosystems.

Sample Too Small

N=49 < N_min(100)

N=49. Logistic regression requires N≥100 for stable estimates. With 49 cases across six parameters, regression models overfit severely, rendering standard errors unreliable.

Conditions Interact

β(OPEN) ≠ constant

OPEN=1 is a positive driver for Meta, but would destroy Anthropic's premium pricing model. Regression assumes independent, additive effects (fixed β), failing to capture these structural interdependencies.

Multiple Paths to Success

Equifinality violated

Open-weights, premium APIs, and platform bundlers achieve viability through entirely different structural mechanisms. Regression forces a single "best-fit" average path and cannot represent this equifinality.

Skewed Outcome

Y=0: 5/49 ≈ 10%

Only 5 non-viable cases out of 49. Regression-based models break down at this imbalanced ratio, where single outlier cases can swing the entire model's coefficients.

"In complex platform strategy, conditions do not act independently. They interlock into stable, inseparable configurations."

Qualitative Comparative Analysis (QCA)

Finding exact strategic recipes instead of average net effects.

1. Regression Logic (Additive)

Viability = β₀ + β₁·OPEN + β₂·INT + ... + ε

Evaluates the "average net effect" of each variable across all cases, assuming they operate independently. Masks specific strategic contexts.

2. QCA Logic (Configurational)

(~INTEGRATED) + (MONOPOLY · WIDE) → Viability

Identifies the specific combinations of choices that produce the outcome. Directly maps equifinality (multiple valid paths).
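Read operationally, a configurational solution is just a Boolean expression evaluated per case. A minimal sketch, using the deck's headline solution (~INTEGRATED + MONOPOLY · WIDE) and illustrative rows, not the study's actual code:

```python
def viable_predicted(mono, integ, wide, paid, hier, open_w):
    """Evaluate the headline csQCA solution: ~INTEGRATED + MONOPOLY · WIDE.
    '+' is Boolean OR, '·' is AND, '~' is negation; PAID, HIER, and OPEN
    are carried along but unused by this particular solution."""
    collective = (integ == 0)                 # ~INTEGRATED path
    proprietary = (mono == 1 and wide == 1)   # MONOPOLY · WIDE path
    return collective or proprietary

# Illustrative rows (MONO, INT, WIDE, PAID, HIER, OPEN):
print(viable_predicted(1, 1, 1, 1, 1, 0))  # proprietary path -> True
print(viable_predicted(0, 0, 1, 0, 1, 1))  # collective path -> True
print(viable_predicted(0, 1, 0, 0, 1, 0))  # ~MONOPOLY · INTEGRATED "death zone" -> False
```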

Methodological Precedent

Chanson & Rocchi (2024): Applied csQCA to N=30 French neobanks, showing that binary viability coding—grounded in pragmatic legitimacy and market recognition—produces valid configurational findings without requiring private accounting data.

Marx (2010): csQCA validity depends on avoiding random correlations via the Marx criterion (N ≥ 2^(k-1)). For k=6 conditions, the minimum is N=32; our census of N=49 clears this threshold.
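The Marx criterion itself is one line of arithmetic; a minimal check for this study's parameters:

```python
def marx_minimum_n(k: int) -> int:
    """Marx (2010) benchmark as cited in the text: csQCA with k binary
    conditions should have at least N = 2**(k - 1) cases."""
    return 2 ** (k - 1)

k, n = 6, 49                   # six PEMAK conditions, census of 49 vendors
print(marx_minimum_n(k))       # 32
print(n >= marx_minimum_n(k))  # True
```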

Handles N=49 Validly
Maps Interacting Conditions
Reveals Equifinal Paths
Models Causal Asymmetry

Case Selection: Who Is in the Census and Why

Unit of Analysis: The Foundation Model Vendor (N=49 vendors). Evaluated against a multi-gateway presence methodology.

Multi-Gateway Presence Methodology

Gateway Type | Gateways Evaluated (11 Total)
Cloud Providers | Alibaba Cloud, AWS Bedrock, Google Cloud Vertex AI, Microsoft Azure AI, Tencent Cloud
API Aggregators | CloudFlare AI Gateway, OpenRouter, Vercel AI Gateway
Community Hubs | Hugging Face Hub, Arena.ai, LiteLLM
Inclusion Criterion: Any vendor whose models appeared on ≥ 1 of the 11 gateways at any point during 2023–2026, regardless of current status.

Data Sourcing & Traceability

Connecting strategic codings to verifiable evidence.

Collection Channels

11 Global Gateways

Cloud (AWS Bedrock, Azure AI, Alibaba Cloud), Aggregators (OpenRouter, Vercel), and Hubs (Hugging Face, Arena.ai).

Verifiable Evidence

Model Cards, GitHub Licenses (Apache 2.0/MIT), Terms of Service, and Public ARR reports (e.g., OpenAI $11.6B).

Y=0 Validation

Every exit (Character.ai, Adept AI) documented with ≥3 independent citations (TechCrunch, SEC filings, official announcements).

Coding Traceability Sample: Zhipu AI / GLM

Condition | Value | Empirical Evidence & Source
WIDE | 1 | GLM-4 flagship covers text, code, vision, & tool use. (Source: bigmodel.cn)
HIER. | 1 | Full internal pipeline control; CAC compliance requirements. (Source: THUDM Repo)
OPEN | 1 | Weights released under Apache 2.0. (Source: huggingface.co/THUDM)
PAID | 0 | Free consumer app + free developer API tiers. (Source: BigModel.cn pricing)

Reference Linkage: Apache 2.0 · HuggingFace · Stanford FMTI

The Data Matrix:

Only 15 of 64 logically possible configurations are populated. The market exhibits high strategic coherence.

Abridged Binary Data Matrix

N=49 Vendors
Vendor       MONO  INT  WIDE  PAID  HIER  OPEN  Pathway
OpenAI        1     1    1     1     1     0    Proprietary
Google        1     1    1     0     1     0    Proprietary
Anthropic     1     0    1     1     1     0    Collective
Mistral AI    0     0    1     0     1     1    Collective
DeepSeek      0     0    1     0     1     1    Collective
Meta          1     0    1     0     1     1    Collective
Cohere        0     0    1     1     1     0    Collective
xAI           1     1    1     0     1     0    Proprietary
Apple         1     1    1     0     1     0    Y=0 Deviant
* Displaying 9 of 49 rows.

Condition Frequencies

MONOPOLY: 30.6%
INTEGRATED: 24.5%
WIDE: 65.3%
PAID: 40.8%
HIERARCHICAL: 89.8%
OPEN: 40.8%
Note: INTEGRATED is only 24.5%; the market remains overwhelmingly non-integrated and API-centric. OPEN is 40.8%.
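As a quick sanity check, the reported percentages invert back into whole-number vendor counts at N=49. This is illustrative arithmetic, not the study's code:

```python
# With N = 49, each reported frequency should correspond to an integer count.
N = 49
reported = {"MONOPOLY": 0.306, "INTEGRATED": 0.245, "WIDE": 0.653,
            "PAID": 0.408, "HIERARCHICAL": 0.898, "OPEN": 0.408}
for cond, pct in reported.items():
    count = round(pct * N)
    print(f"{cond}: {count}/{N} = {count / N:.1%}")
```

Note that OPEN inverts to 20/49, consistent with the 20 open-weight vendors discussed later in the deck.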

Coding Protocol: Binary Decision Rules

Explicit thresholds for the six PEMAK conditions to ensure replicability.

Condition | Coded = 1 (Present) | Coded = 0 (Absent)
MONOPOLY | Explicit AGI goal OR >30% API market share OR parent platform dominance (>50%) with LLM bundling | Competitive positioning, niche focus, market share <30%, open-source commoditization
INTEGRATED | ≥3 cross-product integrations with shared context windows OR ecosystem lock-in | <3 integrations OR API-only distribution OR explicit strategic neutrality
WIDE | Single flagship model with native multimodal processing (text+code+vision+audio) in a unified architecture | Text-focused flagship with separate vision/audio models OR adapters only
PAID | Premium-only pricing with no substantive free tier for production use | Freemium model with significant free access OR ad-supported
HIERARCHICAL | Documented Constitutional AI framework OR external safety audits OR published governance | Standard RLHF without constitutional grounding OR no external audit
OPEN | Downloadable model weights publicly available under a permissive license | API-only access OR proprietary weights OR restrictive licensing

Boundary Case Example: Anthropic

Anthropic lacks proprietary data centers but has a massive AWS partnership. Crisp binary rule: INTEGRATED=0 (fewer than 3 cross-product integrations). fsQCA triangulation assigns INTEGRATED=0.33; the same sufficient configurations emerge under both codings, indicating stability.

Expert Interview Design: Leveraging FMTI Standards

Standardized questionnaire mapping Stanford CRFM indicators to PEMAK strategic conditions.

Expert Validation Framework (FMTI-Derived)

Our expert panel used a structured coding instrument modeled after the 100 indicators of the Foundation Model Transparency Index. This ensures that high-level strategy codings are grounded in verifiable technical and organizational disclosure.

FMTI Domain | FMTI Indicator (Sample) | PEMAK Condition | Consensus Coding Logic
Upstream | Data Acquisition Methods | HIERARCHICAL | Verify whether data curation is strictly internal (H=1) or relies on community sourcing (H=0).
Model | Capabilities & Modalities | WIDE | Confirm native text+vision+audio processing in a single flagship architecture (W=1).
Downstream | Distribution & License | OPEN | Binary check: weights released under Apache 2.0/MIT/CC-BY (O=1) vs. proprietary (O=0).
Downstream | Usage Policies & Pricing | PAID | Analyze paywalls and data-usage opt-out terms; no substantive free tier required for P=1.
Organization | Ecosystem & Governance | MONO / INT | Assess adjacent bottleneck leverage (M=1) and cross-stack bundling depth (I=1).
* Indicators derived from FMTI December 2025 Release (Stanford CRFM).
100 Indicators · Expert Consensus (2/3)

Calibration: Three-Source Triangulation & Expert Panel

Subjective strategic choices translated into binary logic via strict independent convergence.

Three-Source Triangulation Protocol

Each binary assignment requires majority agreement (≥2/3) across independent source strata. Disagreements trigger qualitative process tracing.

Stratum 1 (Expert Panel): Structured evaluation of primary evidence. Cross-disciplinary panel: VCs (MONOPOLY, PAID), ML engineers (INTEGRATED, WIDE, OPEN), platform academics (HIERARCHICAL). Independent coding prior to group discussion.
Stratum 2 (Academic & Benchmarks): Systematic review of quantitative metrics. Stanford HAI Index, State of AI Report, LMSYS Chatbot Arena, EU AI Act filings.
Stratum 3 (Public Discourse): Analysis of declared strategic intent. Executive statements, GitHub Issues, developer forums, investor relations.

Inter-Rater Reliability

Calculated on a random 12-vendor subset. Pre-specified threshold: Cohen's κ ≥ 0.70.

Condition       Cohen's κ   Krippendorff's α
MONOPOLY          0.74          0.76
INTEGRATED        0.86          0.88
WIDE              0.79          0.82
PAID              0.93          0.95
HIERARCHICAL      0.81          0.85
OPEN              1.00          1.00

* MONOPOLY relies on interpretative market boundaries, leading to lower (but passing) κ. Process tracing resolves disagreements.
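For reference, Cohen's κ for two raters on binary codes can be computed directly. The rater vectors below are hypothetical, not the study's actual 12-vendor codings:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters on binary codes:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each rater's marginal rates."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a1, p_b1 = sum(rater_a) / n, sum(rater_b) / n
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 12-vendor subset: the raters disagree on one coding.
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1]
b = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1]
print(round(cohens_kappa(a, b), 2))  # 0.82, above the 0.70 threshold
```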

Methodological Robustness & Falsifiability

Five-stage QCA analysis protocol and pre-registered falsification criteria.

Five-Stage Analysis Protocol

1. Data Preparation: Three-source corroboration, code the six conditions, validate IRR. Output: N×6 matrix.
2. Necessity Analysis: Test individual conditions for necessity (consistency ≥ 0.90).
3. Sufficiency Analysis: Test condition combinations via Boolean minimization (consistency ≥ 0.80).
4. Profile Synthesis: Combine necessity and sufficiency into coherent archetypes.
5. Proposition Testing: Evaluate theoretical propositions P1–P6 against empirical patterns.
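The consistency measures in steps 2 and 3 can be sketched for crisp sets as follows; the ten-case data is hypothetical, purely to show the computation:

```python
def necessity_consistency(condition, outcome):
    """Crisp-set necessity consistency: the share of outcome cases (Y=1)
    that also show the condition. Protocol threshold: >= 0.90."""
    y1 = [c for c, y in zip(condition, outcome) if y == 1]
    return sum(y1) / len(y1)

def sufficiency_consistency(condition, outcome):
    """Crisp-set sufficiency consistency: the share of condition cases (X=1)
    that also show the outcome. Protocol threshold: >= 0.80."""
    x1 = [y for c, y in zip(condition, outcome) if c == 1]
    return sum(x1) / len(x1)

# Hypothetical ten-case illustration.
X = [1, 1, 1, 1, 0, 0, 1, 0, 1, 1]
Y = [1, 1, 1, 1, 1, 0, 1, 0, 1, 0]
print(round(necessity_consistency(X, Y), 3))   # condition in 6 of 7 Y=1 cases -> 0.857
print(round(sufficiency_consistency(X, Y), 3)) # outcome in 6 of 7 X=1 cases -> 0.857
```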

Falsifiability Criteria

The methodology is considered falsified if any of these pre-registered conditions occur:

  • Expert panel consistently achieves κ < 0.70.
  • Single-condition analyses produce contradictory predictions across cases.
  • Fuzzy-set triangulation (fsQCA) produces different archetype assignments than csQCA.
  • Sensitivity analysis reveals systematic instability under ±5% threshold variation.
None of these conditions were met.
Marx Criterion Passed

Six conditions require N ≥ 32. Our census of N=49 prevents random configurational correlations.

Foundation Model Transparency Index (FMTI)

External validation via Stanford CRFM's 100-indicator assessment (Dec 2025).

100 Transparency Indicators

Stanford evaluates vendors across 100 indicators in three domains. Average score in Dec 2025 dropped to 40/100, reflecting stricter disclosure standards.

Upstream (32 indicators): Data acquisition, labor, and compute resources.
Model (32 indicators): Architecture, risks, and trustworthiness evaluations.
Downstream (36 indicators): Distribution, usage policies, and societal impact.

Questionnaire Sample

  • Does the developer disclose the Top-5 data sources used?
  • Is the amount of compute (FLOPs) used for pre-training disclosed?
  • Are red teaming methodologies and findings published?

December 2025 Benchmarks

Vendor             Score    Status
IBM (Granite)      95/100   Top Performer
Writer             72/100   Leader
AI21 Labs          66/100   Leader
OpenAI (GPT-4)     48/100   Average
DeepSeek           32/100   Developing
Alibaba (Qwen)     26/100   Developing
xAI / Midjourney   14/100   Low

Citation: Wan et al. (2025). The 2025 Foundation Model Transparency Index. Stanford CRFM / arXiv:2512.10169.

Necessity Analysis — No Single Condition Is Required

Equifinality confirmed: multiple paths to viability, no universal key.

Consistency (Y=1)

Condition       Cons.   vs. 0.90 Threshold
MONOPOLY        0.303   FAIL
~MONOPOLY       0.697   FAIL
INTEGRATED      0.242   FAIL
~INTEGRATED     0.758   FAIL
WIDE            0.576   FAIL
PAID            0.242   FAIL
HIERARCHICAL    0.848   HIGHEST (still below 0.90)
OPEN            0.394   FAIL

"No single condition is necessary. The market has no master key."

Equifinality is confirmed: Different combinations of structural choices produce the identical outcome of commercial viability.

Strategic Implication

This coexistence is not a transitional phase resulting from market immaturity. It is the stable structure of this market. Multiple completely distinct strategic recipes produce sustainable commercial viability.

The Open-Weight Insurance Policy

20 out of 20 open-weight vendors achieved viability. Zero exceptions.

20/20

Open-Weight Adoption

Ecosystem Generativity

"When weights are released, the model becomes a commons. Thousands of developers carry it forward, regardless of the vendor's survival."
ZITTRAIN (2006) OPERATIONALIZED

All 20 Open-Weight Vendors (OPEN=1)

Meta/Llama, Mistral AI, DeepSeek, AllenAI, Alibaba/Qwen, MS Phi, Zhipu/GLM, 01.AI/Yi, Aion-labs, NousResearch, Neversleep, Sao10k, Thedrummer, Stability AI, Kyutai, Black Forest, plus 4 others.

Y=0 Contrast

Character.ai & Adept:
Sold to monopolies.
Both proprietary (closed); APIs permanently discontinued.

Set-Theoretic Finding

Open-weight release = structural insurance against organizational exit.

The Result — Two Paths to Commercial Viability

~INTEGRATED + MONOPOLY · WIDE → Y=1. Overall consistency: 0.917.

Collective Path

n=34/44
~INTEGRATED

"You do not own the infrastructure — and that is enough."

A. Ecosystem Generativity: Llama, Mistral, DeepSeek, z.ai (weights as public commons)

B. Premium API Differentiation: Anthropic, Cohere, ArceeAI (active orchestration, closed weights)

C. Community Gateway: OpenRouter, Perplexity, Morph (benchmark legitimacy entry)

Examples: Anthropic, Meta, Mistral, DeepSeek, Cohere, +29 more
Cons: 0.919 | Cov: 0.773

Proprietary Path

n=13/44
MONOPOLY · WIDE

"Adjacent monopoly + broad capability = captive distribution."

Captive Distribution

Existing billion-user base (Search, Social, OS)

Cross-Subsidization

Google $237B, Microsoft $211B, Meta $134B fund R&D

Switching Costs

Wide capability scope creates workflow lock-in

Google, OpenAI, Amazon, ByteDance, Microsoft
Cons: 0.929 | Cov: 0.295
Overall Consistency: 0.917 · Overall Coverage: 1.000 · Viable Vendors Explained: 44/44
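Solution consistency and coverage, as reported above, can be sketched for crisp sets; the eight-case data is hypothetical, purely to show the measures:

```python
def solution_consistency(pred, outcome):
    """Share of solution-covered cases (pred=1) that are actually viable."""
    covered = [y for p, y in zip(pred, outcome) if p == 1]
    return sum(covered) / len(covered)

def solution_coverage(pred, outcome):
    """Share of viable cases (Y=1) that the solution covers;
    1.000 means every viable vendor is explained."""
    viable = [p for p, y in zip(pred, outcome) if y == 1]
    return sum(viable) / len(viable)

# Hypothetical eight-case illustration.
pred    = [1, 1, 1, 1, 1, 0, 1, 0]  # cases the solution predicts viable
outcome = [1, 1, 1, 1, 1, 0, 0, 0]  # observed viability
print(round(solution_consistency(pred, outcome), 3))  # 5 of 6 covered cases viable -> 0.833
print(solution_coverage(pred, outcome))               # all 5 viable cases covered -> 1.0
```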

The Death Zone — What Cannot Exist

~MONOPOLY · INTEGRATED: zero vendors. Structurally empty.

Empty Set

~MONOPOLY · INTEGRATED
Zero vendors. Theoretically forbidden.

Why Empty — Economics Frameworks

1. Transaction Cost Economics

Integration only pays when you can appropriate the rents. Without monopoly leverage, open-weight competitors price the integrated advantage to zero.

2. Commoditize-Your-Complement

Meta releases Llama at zero cost specifically to destroy the pricing power of proprietary integrated vendors.

Real-World Evidence

Character.ai
Sold to Google$2.7B
Inflection AI
Sold to Microsoft$650M
Adept AI
Sold to Amazon$430M

Deviant Cases — Five Y=0 Vendors Analyzed

All are event-specific or actor-specific. None are configuration-determined.

acqui-hire

Character.ai

T02: ~MON · ~INT · ~WIDE
Why Y=0

Google acqui-hire, Aug 2024 ($2.7B). API discontinued post-acquisition.

Event-specific exit.

Same-Config Y=1
Morph
OpenRouter
Relace
Z-AI
acqui-hire

Adept AI

T04: ~MON · ~INT · ~WIDE
Why Y=0

Amazon acqui-hire, Jun 2024 ($430M). Team absorbed for AGI research.

Actor-level acquisition.

Same-Config Y=1
ArceeAI
Inception
Inflection
Liquid
strategic

Apple

T11: MON · INT · WIDE
Why Y=0

Apple Intelligence is iOS-only. No external API by strategic choice.

Platform gatekeeper choice.

Same-Config Y=1
Google
ByteDance
Baidu
Tencent
xAI
repositioning

Nvidia

T10: MON · INT · ~WIDE
Why Y=0

Repositioned to AI infra (NIM platform). GPU business already dominates.

Strategic pivot to hardware advantage.

pivot

Aleph Alpha

T04: ~MON · ~INT · ~WIDE
Why Y=0

Pivoted to sovereign AI consulting (PhariaAI) in 2024. Exited API race.

Strategic exit from frontier market.

Same-Config Y=1
ArceeAI
Inception
Inflection
Liquid

All 5: contingent exits, not configuration failures.

The configurations themselves remain inhabited by viable vendors. The Death Zone is structurally determined — deviant cases are not.

Why Multiple Winners Coexist

Pathway-heterogeneous network effects. Each path activates a structurally different complementor mechanism.

Proprietary Path

MONOPOLY · WIDE
Network Effect
Captive Distribution

Adjacent monopoly delivers FM to billions at zero cold-start cost — Search (Google), Social (Meta, ByteDance).

Cross-Subsidy

Ad/sub revenue funds losses unsustainable for standalone VC portfolios.

Barrier

Monopoly assets are structurally unavailable to new entrants.

Google, OpenAI, Microsoft, Amazon, ByteDance, Baidu, xAI
Cons: 0.909 | Cov: 0.303 | n=10

Collective Path

~INTEGRATED

"Eliminating frontier capex as a prerequisite for viability."

A. Collective (Ecosystem Generativity): Permissive weights turn the model into a commons; all 20 OPEN=1 vendors viable. (Meta, Mistral, DeepSeek)

B. Developer (API Orchestration): Premium API plus active SDK; cloud compute as variable OpEx. (Anthropic, Cohere)

C. Gateway (Community Gateway): Free wide-scope routing; adoption despite closed weights. (Perplexity, OpenRouter)

Cons: 0.926 | Cov: 0.758 | n=25
"Network effects are pathway-heterogeneous: distribution effects of incumbents differ from community externalities of open-weights. Distinct niches are mutually non-substitutable."
Stable Configurational Ecology — Not Winner-Takes-All

The FM market organizes by capital constraints and network-effect heterogeneity, sustaining different business models in durable equilibrium.

Contributions — Theory, Method, and Policy

A framework that predicts both what exists and what cannot exist.

01. Complementor Engagement: Grounds FM viability in active third-party adoption; the moment a tech project becomes a platform. (Iansiti & Levien, 2004)

02. Ecosystem Generativity: Open-weight release as structural insurance; 20/20 vendors viable, zero exceptions. (Zittrain, 2006)

03. Functional Equivalence: FM vendors ARE platforms if they match, assemble, and manage knowledge; PEMAK applies directly. (Brousseau & Pénard, 2007)

04. Equifinality as Equilibrium: Multiple paths to viability are NOT transitional; capital constraints sustain the ecology. (Chanson & Rocchi, 2024)

Policy: Configuration-Aware Regulation

Platform Aggregators

Focus on bundling risks and adjacent monopoly cross-subsidies, not just capability benchmarks.

Open-Weight Vendors

Track aggregate deployment patterns across the ecosystem, not just originating choices.

Death Zone Alert

Integrated standalone vendors without monopoly = early-warning signal for acqui-hire risk.

Conclusion

A Coherent Structure Beneath the Chaos

~INTEGRATED + MONOPOLY · WIDE → Y=1
Route A
Ecosystem Generativity

Open-weight release turns community into insurance. Zero-cost complementor distribution.

Route B
API Differentiation

Trust premium + active orchestration. Cloud compute replaces sunk capital costs.

Route C
Platform Aggregation

Adjacent monopoly cross-subsidy. Captive multi-million user distribution.

Death Zone (0/49): ~MONOPOLY · INTEGRATED is impossible without cross-subsidy.

Open-Weight Insurance (20/20): Ecosystem generativity structurally forecloses organizational exit.

"A framework that predicts only what exists is descriptive. A framework that predicts what cannot exist — and explains why — is causal."

N=49 Census · Cons: 0.917 · Cov: 1.000 · csQCA Methodology