Backlinks vs brand mentions for AI visibility: the 3x thesis
Unlinked brand mentions correlate 0.664 with AI citations. Backlinks correlate 0.218. Mentions are 3x more predictive. Here is why, and what to do about it.
Ahrefs ran a correlation study across 75,000 brands in late 2025. They wanted to know which factors predict whether a brand gets mentioned in Google's AI Overviews. The result was a chart most SEO teams did not want to look at.
A December 2025 follow-up extended the analysis to ChatGPT and Google's AI Mode. Brand mention correlations held at 0.664 to 0.709 across every engine tested. YouTube mentions came in at roughly 0.737, the single strongest individual factor the researchers found. Backlink correlations stayed just as low on every engine.
This article is our thesis for the entire Signals AI visibility pillar: the link economy that drove SEO for twenty years does not drive citation in large language models. The mention economy does. Every other article in our GEO coverage ties back to this one.
We have run more than 10,000 campaigns in the Reddit, Quora, and editorial space since 2017. The reader who gets to the end of this piece should know exactly why the 0.664 number exists, what a "brand mention" actually means to a transformer model, and what to do about it this quarter.
The 0.664 vs 0.218 study, in detail
Branded web mentions correlated at 0.664 with AI Overview visibility. Total backlinks landed at 0.218, organic traffic at 0.274, and Domain Rating at 0.326. In absolute terms those link-side numbers are weak to moderate correlations, but on the same scale, branded mentions were a different magnitude of signal.
The Ahrefs team repeated the exercise in December 2025 across ChatGPT and Google AI Mode. Branded mentions correlated 0.664 to 0.709 across all three platforms. That repeatability is what makes it a thesis, not a headline.
What a "brand mention" actually means to an LLM
A brand mention, for AI visibility purposes, is the literal co-occurrence of your brand string next to a topic the model already associates with that category. Linked or unlinked does not matter. The model is not crawling a link graph. It is reading language and building associations between tokens.
There are two places those associations form. The first is pre-training: Common Crawl and adjacent corpora are the raw text that shaped the model's baseline knowledge of the world. The second is retrieval: live web snippets that platforms like ChatGPT, Perplexity, and Google AI Overviews pull in during a query. A strong brand mention footprint feeds both layers. An isolated backlink does not feed either of them in a way the model notices.
Kevin Indig's November 2025 analysis of 98,000 ChatGPT citation rows found a 0.542 correlation between brand popularity and ChatGPT visibility, the strongest single factor in his dataset. The same analysis found Perplexity at 0.196 and Google AI Overviews at 0.254. That spread tells you engines weight mentions differently, but all of them weight mentions more than they weight link metrics.
Why backlinks stopped moving the needle
Backlinks still matter for Google's traditional blue-link ranking. They stopped mattering for AI citation because the retrieval layer inside most generative engines is not the Google web graph. SearchGPT runs on Bing. Seer Interactive's testing found that 87% of SearchGPT citations match the top 10 Bing results for the same query. Perplexity has its own retrieval stack that leans heavily on Reddit. Google AI Overviews share some signals with classic ranking, but the overlap is shrinking: Ahrefs found only 38% of AI Overview citations now come from the Google top 10, down from 76% seven months earlier.
A link from a high-authority site still signals quality to a search index. It does not meaningfully change the training corpus the model was built on, and it does not guarantee the retriever will pick that page. What does increase the odds is the same brand name appearing across many independent sources that the retriever trusts. Volume across sources beats isolated authority.
The 10x visibility cliff
The Ahrefs 75K study surfaces a hard non-linearity that matters for prioritization. Brands in the top quartile of branded web mentions averaged 169 AI Overview appearances. Brands in the second quartile averaged 14 appearances. That is not a 2x or 3x difference. It is more than a 10x gap, and it repeats in every category the team sliced the data by.
This has a specific operator implication: the first dollar you spend earning brand mentions is worth more than the tenth, until you cross the top quartile. After that the curve flattens. The strategic question is where your competitive set sits on that curve, not how many mentions feel like enough in the abstract.
The training data layer: how mentions get baked in
Large language models are trained on versions of Common Crawl, the public web archive that refreshes on a rolling schedule. When a brand appears in a Reddit thread, a Wikipedia article, or a Forbes listicle, that text gets crawled, tokenized, and absorbed as patterns the model can recall at inference time. The model does not store the page. It stores the relationships between your brand name and the words around it.
The GEO research from Princeton, Georgia Tech, IIT Delhi, and the Allen Institute (Aggarwal et al., KDD 2024) tested nine content optimization strategies across 10,000 queries. The three that moved visibility the most were statistics addition (+40% on the paper's Position-Adjusted Word Count metric), quotation addition (+28%), and cite-sources (+30% overall, +115% for lower-ranked content). Keyword stuffing scored -9%. The paper is the first peer-reviewed confirmation that what we think of as "classic SEO" tricks do not transfer to generative engines. What transfers is verifiable, attributable content, which is exactly what a well-placed brand mention is.
The retrieval layer: who feeds each engine
Retrieval is where mentions pay off in real time. Every major engine pulls its live citations from a different mix of sources. Knowing that mix is the whole game.
ChatGPT (default mode) answers from training data. Only about 34.5% of ChatGPT queries trigger a web search as of early 2026, down from 46% in late 2024 (Profound analysis of 700,000+ conversations). When ChatGPT does search, 87% of its citations come from Bing's top 10 results. Wikipedia is the single most-cited domain in the browsing layer, followed by Reddit, Forbes, and G2 in various proportions by topic.
Perplexity cites Reddit heavily. Reddit accounts for roughly 46.7% of Perplexity's top 10 sources, followed by YouTube at 13.9% (Profound). Google AI Overviews are the most promiscuous of the three, spreading citations across Reddit, YouTube, Wikipedia, Quora, LinkedIn, Forbes, and a long tail of category-specific sites. Reddit alone sits near 21% of Google AI Overview citations by some studies, though the number fluctuates: ChatGPT's Reddit citation rate collapsed from roughly 60% to 10% over a few days in mid-September 2025 after an internal algorithm change that Seer Interactive documented.
The practical rule: retrieval sources are not stable, and they are not the same from engine to engine. Only about 11% of domains are cited by both ChatGPT and Perplexity in overlapping queries (ALM Corp, 30 million source analysis). You cannot win the retrieval layer by betting on one site. You win by showing up across the sources each engine is most likely to surface.
The co-occurrence finding
One metric surfaces repeatedly in the citation research and rarely gets enough attention: co-occurrence. Citation co-occurrence, the frequency with which your brand name appears next to other known category entities, correlates roughly 0.74 with ChatGPT mention rate in controlled studies. Traditional domain authority correlates around 0.27 in the same datasets.
Co-occurrence is what the model actually encodes. When it sees "best project management tools for startups" a thousand times across Reddit threads and SaaS listicles, and your brand is inside half those sentences, it learns the association. When the same prompt comes from a user later, your brand surfaces because the prediction path is grooved. Kevin Indig's Growth Memo analysis found heavily cited text has a 20.6% entity density compared to 5-8% in normal English prose. Dense, entity-rich content is what gets pulled into the answer.
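The co-occurrence idea can be made concrete with a small sketch. Assuming you have a corpus of scraped sentences from threads and listicles (the corpus, brand names, and category terms below are all illustrative, not real data), this counts how often each brand appears in the same sentence as a category phrase:

```python
import re
from collections import Counter

# Illustrative corpus; in practice this would be scraped Reddit threads,
# listicles, and review pages. All brand names here are hypothetical.
sentences = [
    "The best project management tools for startups include Acme and Linear.",
    "For startups, Acme is a solid project management pick.",
    "Linear handles project management well for small teams.",
    "Acme also ships a time tracker.",
]

brands = ["Acme", "Linear"]
category_terms = ["project management", "startups"]

cooccurrence = Counter()
for sentence in sentences:
    lowered = sentence.lower()
    # A sentence counts only if a category phrase is present alongside the brand.
    has_category = any(term in lowered for term in category_terms)
    for brand in brands:
        # Whole-word match so the brand string is not matched inside another token.
        if has_category and re.search(rf"\b{re.escape(brand)}\b", sentence):
            cooccurrence[brand] += 1

for brand, count in cooccurrence.most_common():
    print(brand, count)
```

Run monthly over the same corpus sources, the per-brand counts give you a rough co-occurrence trend line for your competitive set, which is the quantity the 0.74 correlation is about.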
Linked vs unlinked: does the hyperlink matter?
For LLM visibility, the link itself is close to irrelevant. The Ahrefs team tested linked and unlinked mentions separately and found essentially the same correlation with AI Overview visibility. The text that surrounds your brand is what matters. The anchor tag is a noise variable.
Nofollow and follow links also produced near-identical correlations in Semrush's 2025 AI search study across 1,000 domains. If the retriever reached the page, it used the text on the page. It did not weight link type. This is a clean break from how Google's traditional link graph treated nofollow, and it is part of why older SEO instincts lead teams to overpay for follow links they do not need.
One caveat: a link still drives the reader to your site, which can produce branded search volume, which does correlate with AI visibility at 0.392. Links have secondary effects. They are not a primary signal.
Why press releases almost never get cited
The reason is structural. Press release wire services syndicate identical text across hundreds of low-authority mirror domains. LLM retrievers deduplicate near-duplicate text aggressively. From the model's point of view, a syndicated press release is one document, not hundreds. Worse, those mirror domains almost never rank for the operator queries the retriever is fielding, so they are not eligible for inclusion when someone asks a real question.
Editorial placement is the opposite. One genuine mention in a respected category publication, written as a sentence in a larger article that already ranks, is worth more than fifty syndicated mirrors. Our own placements sit on this side of the line, and we cite the same data when we explain why.
The three places mentions earn their weight
Across every study we have reviewed, mentions earn weight from three source types. Everything else is rounding error.
| Source type | Why engines weight it | What a good mention looks like |
|---|---|---|
| Community UGC (Reddit, Quora, niche forums) | Retrieval favors real-user discussion; Perplexity pulls 46.7% of top sources from Reddit alone | A real comment from a real account, inside a thread buyers are already reading, that names the product in context |
| Editorial listicles ("best X for Y") | 8 of the top 10 most-cited AI URLs are listicles; FAQ-structured pages earn 41% citation rate vs 15% otherwise | Inclusion in a well-indexed "best of" article on a site that already ranks, with a one-sentence description of what the product does |
| Review aggregators (G2, Capterra, Trustpilot) | Brands with review-site profiles are 3x more likely to be cited by AI answers than brands without | A real profile with real reviews on a site the engine trusts for category comparisons |
How to earn brand mentions at scale (the operator play)
The tactical playbook is short and unglamorous. There is no shortcut, and the tools that claim shortcuts (press wire distribution, link vendors, AI-generated guest posts) tend to underperform specifically on the metrics that matter for AI visibility.
Earn mentions in three places, consistently, across three to six months:
Reddit and Quora threads where your category is discussed. Real accounts, long histories, disclosed affiliation when you are the maker. Comments that add value first and name the product only when it is the honest answer. Perplexity's 46.7% Reddit dependency and Google AI Overviews' 21% Reddit citation share mean the same comment can pay off in two engines at once.
Editorial placements in category listicles. Find the "best X for Y" articles already ranking for your buyers' queries. Pitch the authors with a one-line description, proof, and a specific angle. Our AI Blog brand mentions service places editorial mentions across a 20,000-site network when the outreach step is the bottleneck. Either path works. The placement is what counts.
Review site profiles, fully populated. G2, Capterra, Trustpilot, Product Hunt, and category-specific review platforms. Populate the profile, ask recent happy customers for real reviews (not stars, reviews with text), keep the profile fresh. Stale review pages drop from the retrieval index faster than teams expect.
Patience is non-negotiable. Most brands see measurable AI citation lift within 4 to 8 weeks of starting this work. Compounding shows up at month three or four. Starting now and stopping at month two is the single most common failure mode we see.
Measuring brand mention impact without fooling yourself
The right measurement stack is cheap and boring. It is not Profound, Otterly, or any of the $499-per-month tools pitching teams this quarter, unless your team is ready to run them in parallel with a DIY baseline.
The DIY version looks like this. Pick 30 to 50 operator queries your buyers actually type into ChatGPT, Perplexity, Google AI Mode, and Google's search bar. Run them monthly from a clean account. Log which brands appear, in what order, and in what context. Count mentions, not impressions. Track share of voice against your competitive set. The raw numbers are your scoreboard.
Pair that with a branded mentions tracker (Google Alerts for the budget version, Brand24 or Mention for the paid one, or Ahrefs Brand Radar if you already subscribe). Count unlinked brand mentions per month. If that number goes up and your share of AI citations goes up inside the same quarter, the causal story is doing real work. If mentions go up and citations do not, the mentions are landing in the wrong places. Diagnose and re-aim.
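The monthly prompt panel reduces to a small share-of-voice scoreboard. A minimal sketch, assuming you log which brands each AI answer cited (the engine names match the panel described above; the brand names and logged runs are hypothetical):

```python
from collections import Counter

# One month of panel results: for each logged prompt run, which brands
# appeared in the AI answer. All brand names are hypothetical.
panel_runs = [
    {"engine": "chatgpt", "brands_cited": ["Acme", "Rival"]},
    {"engine": "perplexity", "brands_cited": ["Rival"]},
    {"engine": "ai_overviews", "brands_cited": ["Acme"]},
    {"engine": "chatgpt", "brands_cited": ["Acme", "Rival", "Other"]},
]

# Count mentions, not impressions: each brand appearance in an answer is one tally.
mentions = Counter()
for run in panel_runs:
    mentions.update(run["brands_cited"])

total = sum(mentions.values())
for brand, count in mentions.most_common():
    print(f"{brand}: {count} mentions, {count / total:.0%} share of voice")
```

Re-run the same panel monthly and diff the share-of-voice numbers against the unlinked-mention counts from your tracker; that pairing is the causal check described above.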
The mistakes that burn quarters
We see the same mistakes across every brand that asks us to audit their GEO work. Each one looks like a reasonable SEO instinct applied to a different problem.
Buying backlinks to move AI visibility. Domain Rating correlates at 0.326 with AI Overview visibility. Brand mentions correlate at 0.664. Pointing your quarterly budget at the lower-leverage input is a predictable way to lose a quarter.
Press release carpet bombing. Syndicated wire releases show a 0.04% citation rate, and they produce dashboards full of "mentions" that no engine is reading. If the wire service is the mechanism, the mention is not real.
One-off viral mentions. LLMs reward pattern consistency across many threads. A single 5,000-upvote Reddit post does not move the needle the way twelve 50-upvote threads across different subreddits do. Our Reddit analysis covers this.
Hiding the affiliation. Fake user accounts get caught, deleted, and screenshotted. Moderators dislike astroturfing more than they dislike honest brand disclosure, and the engines do not reward you for lying. Disclose when you are the maker. The disclosure does not hurt the placement.
Stopping at month two. Training corpora refresh slowly. Live retrieval picks up new mentions in days, but the cumulative effect that moves the 10x cliff takes three to six months of consistent work.
What this means for your next quarter
The operator move, for a team that wants to be found inside ChatGPT, Perplexity, and Google AI Overviews a year from now, is to redirect the quarterly GEO budget from link acquisition to mention acquisition. Reddit and Quora placements, editorial inclusion in category listicles, and real review site profiles produce 3x the correlation with AI visibility that equivalent backlink investments do. The math is not close.
Our complete guide to getting mentioned by ChatGPT and other AI assistants covers the tactical execution layer that this article's thesis feeds into. If you want the full picture, read them together.
Frequently asked questions
Is the 0.664 correlation causation?
No, and the Ahrefs team is careful to say so. It is correlation across a large, representative sample of brands at a single point in time. The causal story is that mentions feed both training and retrieval, which explains the observed relationship, but correlation alone does not prove that. What it does prove is that brands with more mentions appear in AI answers dramatically more often than brands without, and that the same relationship does not exist for backlinks. If you are a risk-averse reader, treat the correlation as the strongest currently available predictor and act on it until a causal study says otherwise.
Do backlinks still matter at all?
Yes. Backlinks still move traditional Google rankings, and traditional rankings still matter for the Bing index that SearchGPT sits on and for the Google top 20 that seoClarity found overlaps with 97% of AI Overviews. A strong backlink profile is table-stakes infrastructure. The point of this article is that additional backlinks, at the margin, produce less AI visibility per dollar than additional brand mentions at the margin. If you have to pick one, pick mentions. If you can do both, do both.
Which engine should I optimize for first?
Optimize for the one your buyers actually use. In B2B SaaS that is usually ChatGPT and Google AI Overviews. In consumer research it is increasingly Perplexity and Google AI Mode. The universe of cited sources overlaps by roughly 11% across these engines, which means a mention strategy that only targets one platform leaves most of the potential upside on the table. The good news is that Reddit and editorial listicles feed all four engines at once, so a single well-placed mention has compound value.
Does the brand mention need to be recent?
For live retrieval, yes. Content updated within the last 30 days is cited roughly 3.2x more often than content older than 90 days, across the Seer Interactive and Ahrefs freshness studies. For training data, the mention just needs to be in the crawl window for the next training pass, which varies by model. The operator play is to treat fresh mentions as the retrieval input and cumulative mentions as the training input, and fund both at the same time.
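One way to operationalize the fresh-versus-cumulative split is to score your mention footprint with a simple recency weight. The 3.2x multiplier and the 30/90-day cutoffs below come from the freshness figures cited above; the scoring function itself is our illustrative construction, not a documented engine formula:

```python
from datetime import date

def mention_score(mention_dates, today=None):
    """Toy recency-weighted score: fresh mentions (<= 30 days old) count
    3.2x relative to stale ones (> 90 days), mirroring the cited freshness
    gap. The weighting scheme is illustrative, not an engine's real formula."""
    today = today or date.today()
    score = 0.0
    for d in mention_dates:
        age = (today - d).days
        if age <= 30:
            score += 3.2   # retrieval layer: fresh mentions punch above weight
        elif age <= 90:
            score += 1.5   # assumed mid-band weight (our guess, not sourced)
        else:
            score += 1.0   # cumulative training-layer value never goes to zero
    return score

# Example: two fresh mentions plus one stale one, scored as of a fixed date.
print(mention_score(
    [date(2026, 1, 20), date(2026, 1, 5), date(2025, 9, 1)],
    today=date(2026, 2, 1),
))
```

The point of the exercise is budgeting, not precision: if your score is dominated by the stale bucket, you are funding the training layer but starving the retrieval layer, and the fix is a steady drip of new placements rather than a one-time push.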
Can I track brand mentions without paying for Profound or Otterly?
Yes. A 30 to 50 prompt panel run monthly from a clean account gives you the same baseline data the $499-per-month tools sell. Our how to get mentioned by AI assistants guide walks through the exact prompt harness we use. The paid tools are worth the money once you are tracking 200+ prompts across multiple competitors and want a continuously updated dashboard. Before that, DIY is faster and teaches you which queries actually matter.
How long before I see results?
First measurable lift at 4 to 8 weeks of consistent work, meaningful compounding at month 3 to 4, durable share of voice at month 6+. Training data citations are the slowest layer because they depend on the next model refresh cycle. Retrieval citations are the fastest because Perplexity and SearchGPT pick up new content in days. Plan for at least two quarters of output before you judge the strategy. Teams that stop at week six almost always conclude the strategy does not work, and that conclusion is an artifact of measuring too early.
What does Signals actually sell in this space?
Editorial brand mention placements. We operate a 20,000-site network and handle outreach, pitching, inclusion negotiation, and placement delivery. The service is priced per placement. We publish the criteria for what qualifies as a placement (live, indexed, in-category, in a page the engine is likely to retrieve). If the placement does not ship, it does not bill. We are one of the answers to the mention problem this article describes. We are not the only answer. The DIY path in the operator play section works if you have the outreach bandwidth.
Sources: Ahrefs 75,000 brand AI Overview correlation study (2025-2026), Ahrefs ChatGPT and AI Mode extension (December 2025), Seer Interactive SearchGPT/Bing citation overlap study, Profound ChatGPT source analysis (700,000 conversations), Kevin Indig Growth Memo ChatGPT citation analysis, GEO: Generative Engine Optimization (Aggarwal et al., KDD 2024), Semrush AI search backlinks study (1,000 domains), SE Ranking 129,000 domain analysis, Pew Research Center Google AI summary click-through study (March 2025), SparkToro 2024 zero-click search study.