The 50 domains that drive 80% of AI citations (Q2 2026 update)
Five studies of 100M+ AI citations converge on the same short list. These are the 50 domains doing most of the citation work across ChatGPT, Perplexity, and Google AI Overviews. Updated quarterly.
Five independent studies of AI citations, covering more than 100 million total data points between them, converge on the same observation: a small number of domains do most of the work. The top 5 domains account for roughly 38% of all citations. The top 10 capture 46% to 54%, depending on which study you trust. The top 20 reach 66%. By the time you get to the top 50, you have covered roughly 80% of the sources LLM answers pull from.
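The concentration curve is easy to sanity-check against your own data. A minimal sketch, assuming you have per-domain citation tallies from your own query panel (the counts below are fabricated for illustration, not numbers from any study):

```python
from collections import Counter

# Fabricated tallies for illustration only -- not numbers from any study.
citations = Counter({
    "wikipedia.org": 1200, "youtube.com": 1000, "reddit.com": 900,
    "google.com": 700, "amazon.com": 600, "linkedin.com": 400,
    "forbes.com": 300, "quora.com": 250, "g2.com": 200, "medium.com": 150,
})

def top_n_share(counts: Counter, n: int) -> float:
    """Fraction of all logged citations captured by the n most-cited domains."""
    total = sum(counts.values())
    return sum(c for _, c in counts.most_common(n)) / total
```

If `top_n_share(citations, 5)` on your own tally is nowhere near the 38% the studies report, your query panel is probably too narrow to surface the long tail.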
This is the list. 50 domains, tiered by function, with the data sources we used and the editorial path to earn a mention at each one. The specific 50 domains change quarter to quarter, but the shape of the distribution is stable enough that this list is a reliable prioritization tool. We will refresh this article quarterly because the rankings move, sometimes dramatically, in weeks. The September 2025 Reddit collapse inside ChatGPT and the Gemini 3 reshuffling of Google AI Overviews in January 2026 are both recent enough to matter.
We have run more than 10,000 editorial placement campaigns in this space since 2017, and we watch these rankings because they decide what our own placement inventory looks like. The reader who finishes this article should know exactly where to spend the next quarter's GEO budget.
How this list was built
The ranking is a synthesis of five public datasets and one internal reconciliation. We did not re-run the citation counts ourselves. We are reading the same dashboards every serious GEO team is reading and merging them into one sortable list.
The datasets we pulled from:
Semrush 3-month AI citation study, March to June 2025, roughly 10 million citations across ChatGPT, Perplexity, Google AI Mode, and Google AI Overviews
Ahrefs 78.6 million search study, June 2025, covering the top 10 most-mentioned domains across ChatGPT, Perplexity, and Google AI Overviews, by Patrick Stox
Profound 700,000-conversation analysis of ChatGPT citations, October to December 2025, updated in their March 2026 LinkedIn analysis
ALM Corp 30 million source analysis covering ChatGPT, Google AI, Gemini, Perplexity, and AI Overviews
Surfer AI Tracker citation report, March to August 2025, 36 million AI Overviews and 46 million citations
When the studies disagreed (which they often did), we sided with the more recent dataset and noted the volatility in the domain's line item. A citation percentage reported as "3-8%" means the studies have converged on that range, not that any single study reported that exact number.
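The merge itself can be sketched as a recency-weighted rank average: each study contributes a domain-to-rank mapping, newer studies get more weight, and disagreements resolve toward the more recent dataset. This is an illustrative sketch of the approach, not the exact procedure behind the list; the study names and weights below are hypothetical:

```python
def reconcile(rankings: dict[str, dict[str, int]],
              recency_weight: dict[str, float]) -> list[str]:
    """Merge per-study domain ranks into one ordered list.

    Newer studies get a larger weight; a lower weighted-average rank
    means a higher final position. Domains missing from a study are
    averaged over only the studies that reported them.
    """
    scores: dict[str, float] = {}
    weights: dict[str, float] = {}
    for study, ranks in rankings.items():
        w = recency_weight[study]
        for domain, rank in ranks.items():
            scores[domain] = scores.get(domain, 0.0) + w * rank
            weights[domain] = weights.get(domain, 0.0) + w
    return sorted(scores, key=lambda d: scores[d] / weights[d])

# Hypothetical example: the newer study wins the wikipedia/reddit disagreement.
merged = reconcile(
    {"older_study": {"wikipedia.org": 1, "reddit.com": 2},
     "newer_study": {"reddit.com": 1, "wikipedia.org": 3}},
    {"older_study": 1.0, "newer_study": 2.0},
)
```

The weighting is the whole game here: with equal weights the two hypothetical studies would cancel out, so the recency weight is what encodes "side with the more recent dataset."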
The tier 1 infrastructure layer (top 5)
| # | Domain | Citation share | Strongest in | Editorial path |
|---|---|---|---|---|
| 1 | wikipedia.org | ~8-12% overall, up to 47.9% of ChatGPT top-10 share | ChatGPT, Google AI Mode, Gemini | Earn a Wikipedia page through independent coverage; do not write your own draft |
| 2 | youtube.com | ~9-23% depending on study and engine | Google AI Overviews, Perplexity, Gemini | Transcripts are scraped; publish high-retention videos with clear titles |
| 3 | reddit.com | 5-47% depending on engine (Perplexity 46.7%, AI Overviews 21%) | Perplexity, Google AI Overviews | Real accounts in discussion threads; our Reddit marketing guide covers the execution layer |
| 4 | google properties (blog.google, support.google, youtube.com) | ~16% of AI Mode, 43% self-citation in AI Overviews | Google AI Mode, AI Overviews | Not directly earnable; structural bias from Google's own retrieval |
| 5 | amazon.com | Top 5 in most studies | ChatGPT, Google AI Mode | Product listings with reviews; category pages for informational queries |
Wikipedia deserves a closer look because its position is mostly stable across every study. Profound's analysis of 700,000 ChatGPT conversations found Wikipedia in roughly 1 in 6 conversations with citations, making it the default knowledge layer the model falls back on when no better source is found. Google's AI Mode has Wikipedia at the top with over 1.1 million mentions in the Ahrefs 100-domain analysis.
You do not earn a Wikipedia page by writing one. You earn it by accumulating enough independent third-party coverage in reliable sources that a volunteer editor decides you clear the notability bar. Self-drafted Wikipedia pages get deleted, often within days, and the attempt can leave a negative edit history that makes future legitimate pages harder to defend.
The tier 2 editorial layer (positions 6-15)
| # | Domain | Strongest in | Editorial path |
|---|---|---|---|
| 6 | linkedin.com | Surged from #11 to #5 on ChatGPT between Nov 2025 and Feb 2026; #1 for professional queries | Long-form articles from real experts; employee newsletters; category-defining posts |
| 7 | forbes.com | ChatGPT top 10, Google AI Overviews | Contributor pitches; paid sponsor content rarely qualifies; editorial inclusion is the win |
| 8 | businessinsider.com | ChatGPT top 10 for business queries | Beat reporter pitches; data-driven stories |
| 9 | techradar.com | ChatGPT, consumer tech queries | Product review pitches; category buying guides |
| 10 | bloomberg.com | ChatGPT, financial and business queries | Source quotes in beat reporter stories |
| 11 | nytimes.com | ChatGPT, Google AI Overviews | Hardest tier-1 placement; worth the effort for authoritative category pieces |
| 12 | wsj.com | ChatGPT, financial and enterprise queries | Expert commentary; trend stories |
| 13 | theverge.com | Consumer tech, Google AI Overviews, Perplexity | Product launch coverage; hands-on reviews |
| 14 | techcrunch.com | Startup and SaaS queries | Funding stories, launch coverage, category analyses |
| 15 | wired.com | Tech culture, long-form analysis | Feature stories; expert interviews |
LinkedIn is the line item that moved most between our Q1 and Q2 updates. Profound's March 2026 analysis flagged LinkedIn as the #1 most-cited domain for professional queries across every major AI search platform, and the domain's rank on ChatGPT doubled between November 2025 and February 2026. Profound speculates that the shift tracks to LinkedIn's long-form post format suddenly reading like "credible expert blog content" to the retriever. Whatever the cause, the practical effect is that LinkedIn long-form content is now a higher-leverage placement than it was at the start of the year.
The tier 3 review and comparison layer (positions 16-25)
| # | Domain | Category | Editorial path |
|---|---|---|---|
| 16 | g2.com | B2B software reviews | Claim profile, respond to reviews, ask happy customers for long-form written reviews |
| 17 | capterra.com | B2B software reviews | Claim profile, populate category descriptions, maintain review flow |
| 18 | trustpilot.com | Cross-category reviews, DTC | Verified business account, respond to every review, especially critical ones |
| 19 | producthunt.com | Product launches, SaaS discovery | Launch well (see our Product Hunt guide), maintain maker profile, post updates |
| 20 | yelp.com | Local business queries | Claim profile, keep hours accurate, respond to reviews |
| 21 | tripadvisor.com | Travel, hospitality, local | Business listing, photo quality, response rate |
| 22 | gartner.com | Enterprise software, IT decision queries | Analyst relations work; inclusion in Magic Quadrants is the unlock |
| 23 | crunchbase.com | Startup data, funding queries | Claim profile, maintain accurate funding and team data |
| 24 | glassdoor.com | Employer queries | Employer profile, ask current employees for authentic reviews |
| 25 | softwareadvice.com | B2B software buying queries | Claim profile, populate features and pricing |
Brands with G2, Capterra, or Trustpilot profiles are cited in AI answers at roughly 3x the rate of brands without, per the Ahrefs cross-engine data. That correlation is one of the largest in the whole dataset and the single highest-leverage free action we know of. Claim the profiles. Populate them. Ask five real customers to write reviews. Most brands stop at "claim the profile" and never come back, which is why the Tier 3 layer is still underutilized.
The tier 4 vertical authority layer (positions 26-40)
| # | Domain | Vertical |
|---|---|---|
| 26 | nih.gov | Healthcare, medical research |
| 27 | mayoclinic.org | Healthcare, consumer health |
| 28 | healthline.com | Consumer health |
| 29 | clevelandclinic.org | Healthcare |
| 30 | sciencedirect.com | Research, scientific literature |
| 31 | stackoverflow.com | Programming, developer queries |
| 32 | github.com | Open source, technical documentation |
| 33 | shopify.com | Ecommerce, DTC |
| 34 | hubspot.com | Marketing, sales, CRM |
| 35 | investopedia.com | Personal finance, investing |
| 36 | cnbc.com | Business and financial news |
| 37 | harvard.edu | Academic authority |
| 38 | mit.edu | Academic, technical |
| 39 | nature.com | Scientific research |
| 40 | statista.com | Statistics, market data |
The vertical authority layer is where careful, honest editorial work pays off the most. These sites do not accept contributor content the way Forbes does. You earn inclusion by doing primary research, publishing data a journalist can cite, and being available as a source when the editorial team calls. That is slower than a press release. It is also the difference between a brand that owns a category and a brand that buys ads next to it.
The tier 5 community and niche layer (positions 41-50)
| # | Domain | Where it earns weight |
|---|---|---|
| 41 | quora.com | #4 most-cited in Google AI Mode (7.25% of responses), strong on consumer explainer queries |
| 42 | medium.com | Blog-format explainer queries, long-tail SEO |
| 43 | news.ycombinator.com | Tech, startup, and developer queries |
| 44 | indiehackers.com | Solo founder and bootstrapped SaaS queries |
| 45 | dev.to | Developer content, code-related queries |
| 46 | substack.com | Newsletter and opinion queries |
| 47 | yahoo.com | News, finance, general information queries |
| 48 | apple.com | Apple ecosystem and hardware queries |
| 49 | facebook.com | Local, community, and consumer queries |
| 50 | x.com | Grok queries (99.7% dependency), breaking news |
Quora deserves a specific callout. It was the #1 most-cited domain in Google AI Overviews at one point in 2024, then lost ground, then stabilized at #4 in Google AI Mode at 7.25% of responses per Semrush's 26,000 URL study. The platform's traffic is down, but its retrieval weight inside Google's AI surfaces is still disproportionately high for the declining user base. Our Quora strategy guide covers the current state and the execution layer for Quora placements.
The engine-by-engine breakdown
The same 50 domains do not rank in the same order across every engine. If your buyers skew toward ChatGPT, your priority list looks different than if they live in Perplexity. Here is how the top 5 positions shift per engine, based on the Ahrefs and Profound cross-engine studies.
| Engine | Top 5 domains (ordered) | Notable pattern |
|---|---|---|
| ChatGPT (browsing mode) | Wikipedia, Reddit, LinkedIn, Forbes, G2 | Wikipedia dominates (~7.8% total share); LinkedIn surging; Reddit volatile |
| Google AI Overviews | YouTube, Wikipedia, Reddit, Quora, Google properties | UGC-heavy; Reddit at ~21% in some studies, YouTube at ~18.8%; self-reference at 43% |
| Google AI Mode | Wikipedia, YouTube, blog.google, Reddit, Google.com | Wikipedia leads at 11.22% (Ahrefs); 13.7% overlap with AI Overviews |
| Perplexity | Reddit, YouTube, Wikipedia, Apple, Google | Reddit dominance at 46.7% of top-10 (Profound); 11% overlap with ChatGPT |
| Gemini | YouTube, Reddit, Wikipedia, Google properties, LinkedIn | Google ecosystem advantage; growing weight on long-form UGC |
| Grok | x.com and everything else | Grok pulls approximately 99.7% of retrieved content from X; optimize for X or deprioritize |
The cross-engine overlap is smaller than most teams assume. Only 11% of cited domains appear in both ChatGPT and Perplexity, per ALM Corp's 30-million-source analysis, and a separate Ahrefs study found that 86% of the top mentioned sources were not shared across ChatGPT, Perplexity, and Google AI Overviews. The practical implication: a placement strategy built around a single platform mostly stops at that platform, because only around 14% of cited sources carry over to the other engines.
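The overlap statistic itself is a straightforward set computation. A sketch, assuming each engine's citations have been reduced to a set of cited domains; the studies may define overlap slightly differently, and the four-domain sets here are hypothetical samples, not real engine output:

```python
def overlap_share(a: set[str], b: set[str]) -> float:
    """Share of all cited domains (from either engine) that both engines cite."""
    return len(a & b) / len(a | b)

# Hypothetical samples, not real engine output.
chatgpt_domains = {"wikipedia.org", "reddit.com", "linkedin.com", "forbes.com"}
perplexity_domains = {"reddit.com", "youtube.com", "wikipedia.org", "apple.com"}
```

Running the same computation on your own logged citations per engine tells you how much of your placement work is actually transferable.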
What is not on this list (and why)
Two categories of domain do not make the cut despite appearing in every agency "boost your AI visibility" pitch deck. Understanding why saves a quarter's worth of wasted budget.
How the rankings actually change
This list is volatile on a quarter-over-quarter basis. Since the start of 2025 we have tracked four specific shifts that every team working in GEO needs to know about.
Reddit's September 2025 ChatGPT collapse. ChatGPT cited Reddit in roughly 60% of prompt responses in early August 2025 before collapsing to around 10% over a few days in mid-September. Wikipedia in the same window dropped from 55% of citations to under 20%. Forbes, Medium, and PR Newswire captured most of the displaced share. The collapse traced to an OpenAI algorithm change that Seer Interactive documented 46 days before OpenAI announced ads in ChatGPT. Reddit's position has partially recovered in 2026 but never back to August peaks.
LinkedIn's Q1 2026 surge. LinkedIn's domain rank on ChatGPT moved from around #11 in November 2025 to around #5 in February 2026, a 2x increase in citation frequency. Profound flagged it as the largest individual domain shift they had seen in their tracking window. LinkedIn is now the most-cited domain for professional queries across every major AI search platform.
Google AI Mode's January 2026 Gemini 3 reshuffle. On January 27, 2026, Google made Gemini 3 the default model powering AI Overviews. SE Ranking found that Gemini 3 replaced approximately 42% of the domains the previous model had cited in AI Overviews, and the new model surfaces roughly 32% more source URLs per AI Overview response. If your strategy is built on Q4 2025 data, it is already stale.
YouTube's steady climb. YouTube has grown its citation share inside AI Overviews by approximately 34% over the past six months, per Ahrefs. YouTube transcripts are now one of the most consistent sources for Google's AI surfaces. For brands producing long-form video, the transcript is the ranking signal. For brands not producing video yet, the competitive pressure is increasing.
The operator playbook: where to start
Fifty domains is too many to work on at once. The operator play is to pick a sequence based on where your buyers actually live and how much outreach bandwidth your team has. Here is the sequence we recommend to our own clients.
Weeks 1-2: Claim everything free on the list. G2, Capterra, Trustpilot, Product Hunt, Crunchbase, Glassdoor, Yelp if relevant, TripAdvisor if relevant. Populate every field. Ask five customers for long-form reviews. This single sprint usually lifts brand mention share by 15-30% within the next retrieval refresh window.
Weeks 3-6: Reddit and Quora footprint. Identify the 15 to 20 threads where your category is actively discussed. Contribute value-first answers with disclosed affiliation. This is the biggest single compounding move you can make for Perplexity visibility specifically, and Reddit coverage cross-pollinates into every other engine.
Months 2-4: Editorial listicle inclusion. Find the "best X for Y" articles already ranking for your buyer queries. Pitch the authors. If outreach capacity is the bottleneck, this is exactly the work our AI Blog brand mentions service exists to handle. If your team has the bandwidth, the DIY path works identically.
Months 3-6: Vertical authority push. Original data releases, expert availability for reporters, category thought leadership on LinkedIn and Medium. This is the slowest layer and the one that compounds the hardest over time.
What to do if your category is not here
Some categories have no domain on the top 50 that fits naturally: legal services, local services, niche B2B, highly regulated verticals. In those cases, the top 50 is a reference point, not a prescription. What you actually need is the top 10 for your vertical.
The methodology to find your own top 10: pick 20 queries your buyers ask. Run them in ChatGPT, Perplexity, and Google AI Mode. Log every cited domain. Rank by frequency. That ordered list is your top 10, and it is almost always more useful than a generalist top 50 ranking when you are deep in a narrow vertical. Our ChatGPT guide walks through the exact prompt panel methodology.
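The four steps above reduce to a frequency tally over logged citations. A minimal sketch, assuming you log every cited URL from each (engine, query) run into one list; the sample URLs below are hypothetical, not real engine output:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical logged citations from a 20-query panel run -- not real output.
logged_citations = [
    "https://www.avvo.com/some-answer",
    "https://www.reddit.com/r/legaladvice/thread",
    "https://www.avvo.com/another-answer",
]

def domain_frequency(urls: list[str]) -> list[tuple[str, int]]:
    """Rank cited domains by how often they appear across all logged runs."""
    domains = Counter(urlparse(u).netloc.removeprefix("www.") for u in urls)
    return domains.most_common(10)
```

The ordered output of `domain_frequency` over your full log is your vertical's top 10, rebuilt from the same raw material the generalist studies use.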
Frequently asked questions
Is this list accurate to a specific percentage?
The ordering is approximate. The tiers are accurate. Studies disagree on exact percentages because they use different query pools, engine mixes, and observation windows. What they agree on is the shape: a handful of domains do most of the work, and the names at the top of the handful are remarkably stable across methodologies. Trust the tiering, not the exact rank numbers. If you need precision for a specific engine, pull from the engine-specific study we cite for that line item.
How often will this list change?
We refresh quarterly. The September 2025 Reddit collapse happened over roughly five days. The LinkedIn surge took about three months. The Gemini 3 domain reshuffle hit overnight. Quarterly is the right cadence for operator planning; weekly monitoring is the right cadence for active campaigns. If you want the most recent numbers between our refreshes, Profound and Ahrefs Brand Radar publish their own updates monthly.
Why is Wikipedia not at #1 on every engine?
Wikipedia is the highest-cited single domain on ChatGPT and Google AI Mode but not on Perplexity or Google AI Overviews. Perplexity's retrieval stack prefers real-user discussion (Reddit at 46.7%) over encyclopedic content. Google AI Overviews prefer video and UGC (YouTube, Reddit, Quora) over reference material. Wikipedia's ranking depends on which engine you care about, not on which engine has the smartest retrieval.
What about llms.txt? Does it work?
Adoption is under 15% across top sites, and no major engine has committed to reading it as a ranking signal. llms.txt is a proposal, not a specification the engines enforce. Publishing one is cheap, so there is no reason not to, but do not build a GEO strategy around it.
How do I earn a Wikipedia page without writing one myself?
Accumulate independent third-party coverage in tier 2 editorial publications (Forbes, Business Insider, TechCrunch, Bloomberg). When you have 3-5 pieces of coverage in reliable sources that discuss the brand as a subject rather than quoting a spokesperson, you are eligible. An uninvolved Wikipedia editor will often create the page once that coverage is visible in the right talk channels. Paying a freelancer to write your page gets the page deleted, usually within days. This is the one area where patience is not optional.
What about local queries? Most of these domains are not local.
Local queries pull from a different source set: Google Business Profile, Yelp, TripAdvisor, Nextdoor, city-specific Reddit subs. For local GEO, this list is the wrong reference. Your top 10 will be dominated by directories and local review sites. We are planning a dedicated local GEO playbook in the pillar; until then, treat G2-through-TripAdvisor in Tier 3 as your starting point and build out from there.
Does Signals place mentions on all 50 of these domains?
No, and we would not trust any service that claimed they did. The structure of the top 5 (Wikipedia, YouTube, Google properties) does not support paid placement. Tier 2 editorial sites accept contributor content by outreach only, not by purchase. Where our service lives is in Tiers 2 through 5 for the sites that do accept editorial inclusion through contributor or listing paths: Reddit, Quora, category listicles in mid-authority publications, review sites with claim-and-populate profiles. That is the 20,000-site network.
Sources: Semrush 3-month AI citation study (2025), Ahrefs 78.6M searches top 10 cited domains analysis, Profound 700,000-conversation ChatGPT analysis, ALM Corp 30 million source cross-engine analysis, Surfer AI Tracker 36M AI Overviews + 46M citations report, Profound LinkedIn March 2026 analysis, BrightEdge healthcare citation research, Seer Interactive SearchGPT and Bing overlap study, SE Ranking Gemini 3 domain shift analysis.