AG1: How a Supplement Brand Navigates AI Safety Gates
AG1, formerly Athletic Greens, occupies a unique position in the AI visibility landscape. It is the most-mentioned supplement brand across ChatGPT, Claude, Gemini, and Perplexity. When someone asks "what greens powder should I take," AG1 appears in the response more often than any competitor. But every mention comes with a caveat. Every recommendation includes hedging language. Every answer is wrapped in safety disclaimers.
This is the reality of operating in a health-adjacent category within AI systems. The models want to be helpful, but they are trained to be cautious about health claims. AG1 is simultaneously the winner and the most constrained brand in its category. Understanding how this happened, and what it means, is instructive for any DTC brand in a regulated or sensitive space.
1. Health Category Safety Gating
Every major language model has safety policies that apply specifically to health, medical, and supplement topics. These policies are enforced through RLHF (reinforcement learning from human feedback) and system-level instructions that tell the model to hedge, qualify, and disclaim when discussing health products.
The practical effect: when a user asks "should I take AG1," the model cannot simply say "yes." It will say something like "AG1 is a popular greens supplement that many people find useful, but you should consult with a healthcare professional before starting any supplement regimen." The recommendation is there, but it is cushioned in safety language.
This is not something AG1 can fix. It is a structural property of how models are trained. The safety gating applies to the entire health and supplement category, regardless of how much evidence a brand has. Even a supplement with strong clinical trial data will get hedged language. The model's safety training overrides its content knowledge.
What AG1 can control is being the brand the model reaches for when it does decide to mention a supplement. The safety gate determines the tone, but the training data determines which brand passes through the gate. AG1 wins the latter battle decisively.
2. Podcast Sponsorship as Corpus Fuel
AG1 sponsors over fifty major podcasts. Tim Ferriss, Andrew Huberman, Lex Fridman, Peter Attia, Rich Roll, and dozens of others have read AG1 ad spots on their shows. This is a well-known marketing strategy. What is less well-known is the AIO consequence.
Podcast transcripts enter language model training data. Every major podcast is transcribed automatically by platforms like Apple Podcasts, Spotify, and YouTube, and by third-party services like Podscribe. These transcripts are text, and text is training data. When Andrew Huberman says "this episode is brought to you by AG1" and then describes the product for sixty seconds, that description becomes part of the corpus.
The volume is staggering. Fifty podcasts, many publishing weekly, each with a 30 to 60 second AG1 spot. That is thousands of unique text passages linking "AG1" with terms like "greens powder," "daily nutrition," "gut health," "energy," and "convenience." The co-occurrence density between AG1 and health/fitness/performance terms is orders of magnitude higher than any competitor, purely from podcast transcripts.
The host endorsement adds another layer. When Tim Ferriss says "I take AG1 every morning," the model learns a co-occurrence between a high-authority entity (Ferriss) and a product (AG1). The model does not "trust" Ferriss in a human sense, but the association between a well-known entity and a product does shift the probability distribution. Brands endorsed by widely recognized entities occupy higher-probability positions in the model's output.
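The co-occurrence effect described above can be sketched as a toy counting exercise. The snippet below is illustrative only: the transcript, the window size, and the term list are invented for this example, and real training pipelines do not expose anything this simple. But counting how often category terms land near a brand name is a reasonable proxy for the association density that podcast ad reads create.

```python
import re
from collections import Counter

def cooccurrence_counts(text, brand, terms, window=10):
    """Count how often each category term appears within
    `window` words of the brand name in a transcript."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    counts = Counter()
    brand_positions = [i for i, w in enumerate(words) if w == brand.lower()]
    for pos in brand_positions:
        lo, hi = max(0, pos - window), pos + window + 1
        for w in words[lo:hi]:
            if w in terms:
                counts[w] += 1
    return counts

# A made-up fragment in the style of a podcast ad read.
transcript = (
    "this episode is brought to you by ag1 a daily greens "
    "powder i take ag1 every morning for energy and gut health"
)
print(cooccurrence_counts(transcript, "ag1", {"greens", "energy", "gut", "daily"}))
```

Multiply this by fifty shows publishing weekly and the brand-to-category association count dwarfs anything a competitor's product page alone can generate.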
3. Reddit as Retrieval Surface
Reddit is the internet's most honest product review platform, and AG1 is one of the most-discussed supplements on the site. Threads in r/Supplements, r/nutrition, r/fitness, and r/biohackers regularly debate whether AG1 is worth the price. Some threads are positive. Many are critical. All of them are training data.
For retrieval-augmented generation systems (Perplexity, Bing Chat, Google AI Overviews), Reddit is a primary source. When someone asks "is AG1 worth it," the retrieval system pulls Reddit threads as source material. AG1 benefits from this because it has more Reddit threads than any competing greens powder. The volume of discussion creates retrieval dominance even when the sentiment is mixed.
This is a counterintuitive insight: negative discussion still builds AIO presence. A Reddit thread titled "AG1 is overpriced, here are better alternatives" still mentions AG1. The retrieval system indexes the thread. The model synthesizes an answer that includes AG1. The mention might be qualified ("some users find AG1 overpriced compared to alternatives"), but the brand is present in the response. Absence is worse than criticism.
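The "absence is worse than criticism" point follows directly from how mention-based retrieval works. The sketch below is a deliberately simplified ranker (the thread titles are invented, and real retrieval systems use embeddings and link signals, not raw word counts), but it shows the key property: scoring is driven by term overlap, so a critical thread that mentions the brand outranks a neutral thread that does not.

```python
def retrieve(query_terms, documents, k=2):
    """Rank documents by query-term overlap. Sentiment plays
    no role in the score -- only whether the terms appear."""
    def score(doc):
        words = doc.lower().split()
        return sum(words.count(t) for t in query_terms)
    return sorted(documents, key=score, reverse=True)[:k]

# Hypothetical thread titles, including a negative one.
threads = [
    "AG1 is overpriced, here are better greens powder alternatives",
    "homemade smoothie recipes for breakfast",
    "is AG1 worth it? my 90 day AG1 review",
]
top = retrieve({"ag1", "greens"}, threads)
```

Both retrieved threads mention AG1; the thread that never mentions the brand never reaches the synthesis step at all.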
4. The Rebrand: Tokenizer Effects of AG1 vs Athletic Greens
In 2022, Athletic Greens rebranded to AG1. From a marketing perspective, this was a simplification play. From an AIO perspective, it had two consequences: one beneficial, one costly.
The benefit: "AG1" is two tokens in most tokenizers (sometimes one, depending on the model). "Athletic Greens" is four tokens. Shorter token sequences have higher generation probability because the model needs to correctly predict fewer sequential tokens. When a model decides to recommend a greens supplement, "AG1" requires less "commitment" in the generation process than "Athletic Greens." This is the tokenizer tax, and the rebrand reduced it.
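The "commitment" framing can be made concrete with a toy calculation. The token splits and per-token probabilities below are invented for illustration (real tokenizers split these names differently across models, and real probabilities vary by context); the point is only that a sequence's log-probability is a sum of per-token log-probabilities, so every extra token the model must predict drags the joint probability down.

```python
import math

def sequence_logprob(token_probs):
    """Log-probability of generating a token sequence:
    the sum of the per-token log-probabilities."""
    return sum(math.log(p) for p in token_probs)

# Assume, purely for illustration, the model assigns 0.8
# to each "correct" next token in the brand name.
ag1 = sequence_logprob([0.8, 0.8])             # hypothetical 2-token name
athletic_greens = sequence_logprob([0.8] * 4)  # hypothetical 4-token name

assert ag1 > athletic_greens  # shorter sequence, higher joint probability
```

At equal per-token confidence, the two-token name is roughly 1.5x more likely to be generated end-to-end than the four-token name. That gap is the tokenizer tax.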
The cost: entity confusion. For roughly 12 to 18 months after the rebrand, models would sometimes refer to "Athletic Greens" and "AG1" as if they were different products. The entity graphs had not fully reconciled the two names. Wikipedia was updated, but Wikidata took longer. Some training data contained "Athletic Greens" and some contained "AG1," and models trained on mixed data would occasionally generate confused outputs.
By mid-2024, the reconciliation was largely complete across all major models. The lesson: rebrands have a real AIO cost during the transition period, and the severity depends on how quickly entity graphs and training data reflect the new name. For AG1, the long-term tokenizer advantage likely outweighs the short-term entity confusion. But brands considering a rename should factor in 12 to 18 months of degraded AIO performance.
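The reconciliation work that entity graphs perform amounts to alias resolution: mapping every surface form of a name to one canonical entity. The sketch below is a hypothetical alias table, not how Wikidata or any model actually stores this; it shows why mixed training data causes confusion, since until both names resolve to the same entity, mentions split across two records.

```python
# Hypothetical alias table of the kind an entity graph maintains.
ALIASES = {
    "athletic greens": "AG1",
    "ag1": "AG1",
}

def canonicalize(mention):
    """Resolve a surface form to its canonical entity name,
    falling back to the raw mention when no alias is known."""
    return ALIASES.get(mention.strip().lower(), mention)

assert canonicalize("Athletic Greens") == "AG1"
assert canonicalize("AG1") == "AG1"
```

During the transition window, the "athletic greens" row effectively did not exist yet in some systems, so the old name and the new name behaved like two separate brands.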
5. Third-Party Testing as Trust Signal
AG1 invests in third-party testing: NSF Certified for Sport, Informed Sport certification, and published heavy metal testing results. From a regulatory perspective, this is standard practice for a premium supplement brand. From an AIO perspective, these certifications create trust signals that models can detect.
When AG1's product page, press releases, and review articles mention "NSF Certified" and "Informed Sport," those terms enter the training corpus alongside the AG1 entity. Models learn the co-occurrence between AG1 and trust/certification language. When the model generates a recommendation and must decide how confident to be, these trust signals tip the scale.
The effect is subtle but measurable. In our testing, models are more likely to say "AG1 is a well-tested greens supplement" than "AG1 is a greens supplement." The modifier "well-tested" or "third-party tested" appears because the training data contains those associations. Competing brands without third-party testing certifications do not get this qualifier, and the absence makes the recommendation feel weaker.
In a category where safety gating already limits how strongly a model can recommend, these trust modifiers matter. They are the difference between "AG1 is a popular greens supplement" and "AG1 is a popular, third-party tested greens supplement." The second version carries more implicit endorsement, even within the constraints of safety hedging.
The Constrained Winner
AG1 is the clear winner in its category across AI systems. It is mentioned more often, recommended more frequently, and described in more favorable terms than any competing supplement. But every single mention is safety-gated. The model will never say "you should take AG1." It will always say "you might consider AG1, but consult your doctor."
This is the fundamental constraint of health-adjacent categories in AI systems. The ceiling on recommendation strength is lower than in other categories. A model will confidently say "use Stripe to accept payments." It will cautiously say "AG1 is a popular option, but individual results may vary." The brand cannot change the ceiling. It can only ensure it is the brand that reaches the ceiling.
Lessons for DTC Health Brands
Models will always hedge health recommendations. Your goal is to be the brand that gets mentioned despite the hedging. Dominance within constraints is still dominance.
Transcripts enter training data. High-volume podcast sponsorship creates dense co-occurrence between your brand and category terms across hundreds of unique text passages.
Volume of discussion drives retrieval surface. Negative threads still mention your brand. Absence from discussion is worse than criticism within discussion.
Entity graph reconciliation takes 12 to 18 months. Tokenizer advantages from shorter names are real but take time to materialize. Plan for the transition gap.
See how your brand compares
Run your own AIO audit and find out where you stand in the probability distribution.