The Circular Deal Around AI, And Where “Data-Rich” Companies Fit

Image Details: Stock Image of The Reddit Icon, from Squarespace

* NOTE: none of the info in this article or ANY article of MacroBytes is investment advice or political opinion, it is solely informational for editorial purposes

Overview

The AI rally seems far from finished, with Big Tech continuing to invest billions of dollars in semiconductors, data centers, and circular partnership deals to fuel the entire ecosystem. Nvidia just announced its partnership with the US government, while OpenAI and Microsoft are both set to benefit from the conversion of ChatGPT’s parent company from a private non-profit to a for-profit entity that may potentially IPO soon. Meanwhile, Google, Broadcom, AMD, Oracle, Salesforce, and many others continue to invest heavily in the technology and its adoption, with no end in sight. All of these stocks have rallied immensely in recent months, and even over the past week. As we head into Big Tech earnings, several of which are scheduled for October 29/30, there is immense momentum behind the continued scaling of this technology. Below is a brief synopsis of large AI names heading into Q1 Fiscal 2026 earnings (data from Yahoo Finance, visualized with AI):

AI Valuations Watch — As of 10/28/2025, 3pm Central

Current price and 1-week move heading into earnings, plus how each name fits into the AI circular-deals ecosystem. Summarized With AI

Microsoft (MSFT): $543, +5.12% 1-week, earnings 10/29/2025
Azure sits at the center of AI deals with model partners and chipmakers, bundling Copilot across the stack while investing in custom silicon to reduce GPU dependency.

Nvidia (NVDA): $202, +11.70% 1-week, earnings 11/19/2025
Supplies the core GPU/accelerated platform and software (CUDA, networking) that most hyperscalers and model labs standardize on, reinforcing circular demand across the AI stack.

Meta (META): $752, +2.56% 1-week, earnings 10/29/2025
Open-sourcing Llama and buying large volumes of accelerators, Meta feeds the loop between model ecosystems, ad optimization, and cloud hardware vendors.

Google (GOOGL): $268, +6.86% 1-week, earnings 10/29/2025
Runs Gemini and Vertex AI while advancing TPUs, partnering with chip and enterprise players to expand AI services through Google Cloud.

Broadcom (AVGO): $373, +9.06% 1-week, earnings 12/11/2025
Provides high-speed networking and custom silicon that connect AI data centers, benefiting as hyperscalers scale out clusters.

Qualcomm (QCOM): $181, +7.49% 1-week, earnings 11/05/2025
Drives on-device AI with edge NPUs in phones/PCs, linking model deployments to mobile and client ecosystems.

Apple (AAPL): $269, +2.37% 1-week, earnings 10/30/2025
Integrates on-device AI across its custom silicon and services, catalyzing developer tools and inference at the edge.

Intel (INTC): $42, +9.51% 1-week, earnings 10/22/2025
Targets AI with Xeon, Gaudi accelerators, and foundry services, aiming to supply both compute and manufacturing capacity to the ecosystem.

AMD (AMD): $260, +9.07% 1-week, earnings 11/04/2025
Competes in AI accelerators (MI series) and EPYC CPUs, partnering with cloud providers to diversify supply beyond a single vendor.

Applied Digital (APLD): $35, +6.55% 1-week, earnings 10/09/2025
Builds AI/HPC data-center capacity and hosting services that rent compute to model developers and enterprises.

Salesforce (CRM): $255, -3.33% 1-week, earnings 12/02/2025
Einstein + Data Cloud connects enterprise data to partner models and hyperscaler infrastructure, channeling AI into CRM workflows.

Oracle (ORCL): $281, +2.29% 1-week, earnings 12/08/2025
OCI provisions AI infrastructure and database services, partnering with accelerator vendors and ISVs to host training/inference at scale.

Amazon (AMZN): $229, +3.04% 1-week, earnings 10/30/2025
AWS monetizes AI via Bedrock and custom chips (Trainium/Inferentia), looping cloud demand between model providers and enterprise builders.

At the moment, the entire AI ecosystem is quite robust, but how does it actually work? Here is a simple framework for understanding the AI ecosystem and its relevant players:

The AI Ecosystem & Its Players Explained

  1. The Bedrock: Data Centers, Semiconductors, & Hardware For Compute

    AI requires massive amounts of data and compute to be viable from a commercial and consumer standpoint; the average LLM prompt is estimated to consume many times the energy of a typical browser search. Data centers enable this scalability, employing semiconductors and GPUs as the primary mechanism through which servers handle LLM prompts. Semiconductor companies serve as the foundational bedrock, without which LLMs wouldn’t have been able to scale the way they have.

    Key Players: Nvidia, Broadcom, AMD, Intel, Applied Digital, NextEra Energy, Qualcomm (Smaller Player)

  2. The Engine: LLM Providers, Agentic AI Companies

    These are the models we interact with on a day-to-day basis. LLMs are the models that parse our prompts via natural language processing (NLP), process the data (which is computationally taxing), and output responses.

    Key Players: OpenAI (ChatGPT), Anthropic (Claude), xAI (Grok), Perplexity, Microsoft (Copilot), Google (Gemini), Amazon (AWS AI Tools), Cursor

  3. The Drivers: SaaS Infrastructure Providers & Scalable Platforms

    These are the companies that immediately come to mind when you think of widely used platforms among both commercial and consumer users; in other words, the end users of AI platforms. On the commercial side, think of the software every company needs to operate: CRMs (Customer Relationship Management), ERPs (Enterprise Resource Planning), PMO (Project Management), GRC (Governance, Risk, Compliance), etc. These are the large commercial businesses that will continue to invest in their infrastructure to enable AI at scale. On the consumer front, think of the apps you and I use every single day: our phones, social media, etc. These are the companies that will scale AI from a consumer standpoint, relying on the foundations in steps 1 and 2.

    Key Players: Salesforce, Oracle, SAP, Atlassian, Apple, Meta | Potential Future Players: ServiceNow, Security SaaS Players, etc.

The Missing Layer No One Talks About

Semiconductors act as the foundational bedrock, while LLMs are the engine of the agentic AI age. However, one area people seem to be talking about far less is the layer between "Bedrock" and "Engine" - you can almost think of this "layer 1.5" as the fuel that allows the engine to function.

This layer is none other than data.

1.5. The Fuel: “Data-Rich” Companies, API Pulls, & Storage Databases

This is one aspect of AI that people seem to be examining much less at the moment compared to the aforementioned parts of the AI hype cycle. There are a few reasons for the lack of hype around the data layer. For one, because AI is still quite new, there is little regulation or transparency at the moment regarding how LLMs like ChatGPT extract data and scrape the web. While that may be the case today, the regulatory landscape around AI is changing quickly, and more importantly, many companies considered "data-rich" may stand to benefit immensely.

When ChatGPT or any LLM with web access answers a prompt, it can issue API pulls against platforms such as Google to source data, then use NLP reasoning to frame direct "answers" to our questions. For example, the API ChatGPT used from Google would pull the first 100 search results (num=100) directly from Google for the search criteria relevant to the prompt. Based on these API pull requests, GPT formulates a response.
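In code terms, the pull described above might look like the following sketch. The function and parameter names here are illustrative assumptions for exposition, not Google's actual API surface; the point is the num parameter that controls how deep each pull goes.

```python
# Sketch of how an LLM provider might parameterize a search-API pull.
# Hypothetical helper - not a real Google endpoint or client library.

def build_search_params(query: str, num: int = 10) -> dict:
    """Build the query parameters for one web-search API pull."""
    return {
        "q": query,    # the user's prompt, reformulated as a search query
        "num": num,    # how many ranked results to return in this pull
    }

# Before mid-September 2025, a provider could request 100 results at once:
deep_pull = build_search_params("best mechanical keyboards", num=100)

# After the policy change, each pull is capped at the default 10:
shallow_pull = build_search_params("best mechanical keyboards")

print(deep_pull["num"], shallow_pull["num"])  # 100 10
```

The model's retrieval layer would then fetch and parse those results as context before generating a response; the economics discussed below hinge entirely on that single num parameter.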

However, just this past month, in mid-September, Google adjusted its API policy to return only the first 10 results (num=10) per pull rather than the first 100 (https://support.google.com/websearch/thread/380315494/the-num-100). This was for a variety of reasons, but a primary one was that num=100 pulls were too expensive for the company, particularly with the rise of API pulls from LLM providers such as GPT, Claude, Perplexity, xAI, etc. Google’s shift in API policy indicates two important things:

  1. Quality, quickly parsable data is critical to the success of LLM models, without which there is no fuel for LLM engines and foundational bedrock to compute.

  2. As LLMs become more data-intensive in order to provide accurate results and become "commercially viable", "data-rich" companies with access to large datasets will be increasingly in demand.

One company that is a perfect example of "data-rich" and is set to benefit from increased demand for scalable data in LLMs is Reddit. For most of its history, the company has been seen as a social media platform driven by user engagement and anonymous threads useful for community discussion, but it is experiencing a major shift in 2025 (along with other companies that are "data-rich"). In the past few months, Reddit has sued Anthropic, the maker of Claude, for utilizing its data without adequate consent, arguing that LLM providers must pay to access and consensually pull data from its subreddits and threads. Additionally, Reddit’s threads, which are well optimized for search engines (SEO), frequently showed up within the num=11-20 range of Google’s API pulls. Now that Google has tightened its API parameters, Reddit stands to benefit from LLM providers such as OpenAI paying the company directly for its "data-rich" API pulls, extraction, and scalable usage with models.
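The visibility shift behind this argument can be sketched with hypothetical rankings. All numbers here are illustrative, assuming (as described above) that Reddit pages tend to cluster at ranks 11-20 of a results page:

```python
# Illustrative only: a mock 100-result ranking in which a Reddit page
# occupies every slot from rank 11 through 20.
results = [
    {"rank": r, "site": "reddit.com" if 11 <= r <= 20 else "other.com"}
    for r in range(1, 101)
]

def visible(results, num):
    """Results actually returned by an API pull capped at `num`."""
    return [r for r in results if r["rank"] <= num]

# Under num=100, Reddit appears 10 times; under num=10, it vanishes.
reddit_at_100 = sum(r["site"] == "reddit.com" for r in visible(results, 100))
reddit_at_10 = sum(r["site"] == "reddit.com" for r in visible(results, 10))
print(reddit_at_100, reddit_at_10)  # 10 0
```

Under this (simplified) model, content ranked just below the new cutoff loses all API-driven visibility overnight, which is exactly the gap direct licensing deals between Reddit and LLM providers would fill.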

Reddit serves as just one example of a "data-rich" company; however, its recent growth in valuation, cash-flow generation, and AI relevance is proof of similar monetization strategies to come from other "data-rich" platforms.

Below is a quick synthesis (powered by AI assistance) of how this layer “1.5” fits into the circular AI ecosystem.

Reddit (RDDT) — Layer 1.5: Data = Fuel for the AI Economy

Reddit exemplifies the “data-rich” layer that sits between the hardware bedrock and the LLM engine—fuel that powers AI results and monetization.

Current Value: $212.97
6-Month Return: +75.7%

Why Reddit Matters in the AI “Circular Economy”

  • Layer 1.5 (Fuel): Data sits between the compute bedrock and the LLM engine—without high-quality, quickly parsable data, models can’t run effectively.
  • Data-rich advantage: Reddit’s vast corpus of human conversations and SEO-optimized threads gives it valuable, structured-enough “fuel” for LLMs.
  • API economics shift: As noted previously, when platforms constrain API pulls (e.g., moving from 100 to 10 results), high-signal datasets become more valuable and more likely to be paid/licensed by LLM providers.
  • Monetization pressure: Enforcement and licensing (e.g., pushing back on model providers scraping without consent) underscore the trend toward paid, consensual access to “data-rich” platforms.

Together, this reinforces the AI ecosystem: data fuels models → models power apps → user activity creates more data - a feedback loop that strengthens the strategic position of Reddit and other "data-rich" companies within the AI stack.

Bedrock: compute, GPUs, data centers
Layer 1.5: Reddit, data-rich "fuel"
Engine: LLMs & agentic AI
Drivers: SaaS & consumer apps

Note: this summary visualization synthesizes the content above.


Sources:

Financial Data: https://finance.yahoo.com/

* All visualizations were created by MacroBytes itself (with & without AI assistance) - data is sourced as needed when collected from external sources but visualized by MacroBytes

