How LLMs use G2, Capterra, and Trustpilot for SaaS citations
Brands with profiles on G2, Capterra, and Trustpilot are cited 3x more often in ChatGPT answers. The 2026 listing checklist after G2's acquisition of Capterra, Software Advice, and GetApp from Gartner.
Originally published April 18, 2026
Every B2B SaaS operator we talk to in 2026 has the same unresolved knot in their head. They know AI overviews are eating organic click-through. They know ChatGPT, Perplexity, and AI Mode are where half of software buyers now start research. They know that on a "best X for Y" query the LLM is going to cite something - it is just rarely their blog post, their landing page, or their ungated PDF. When the models cite sources on category queries for B2B software, they reach for the review sites first. The question we get on almost every intake call is some variant of: which review sites actually move the AI needle, what does a cited profile look like, and how fast can we close the gap?
The 2026 data, including the single biggest structural event in this market in a decade - G2's acquisition of Capterra, Software Advice, and GetApp from Gartner on February 5, 2026 for approximately $110 million - reshuffles the answer. Brands with profiles on G2, Capterra, and Trustpilot are cited in AI answers at roughly 3× the rate of brands without. The G2 family now controls 84% of review-site citations in BOFU B2B queries. And review volume is the single largest lever: 100+ reviews on G2 correlates with 3.2× more AI mentions than under-20 profiles. Signals runs an aged Reddit account marketplace plus an editorial network for AI brand mentions across Reddit, Quora, Product Hunt, and Threads, and the editorial side of that graph is precisely the third-party signal layer that confirms the brand entity LLMs already saw on G2. This is the operator-level read on how the LLM source graph actually picks up review-site content, and what to ship first.
Brands with G2, Capterra, and Trustpilot profiles get cited in AI answers at ~3× the rate of brands without.
The G2 + Capterra + Software Advice + GetApp merger (Feb 5, 2026) gave the family a modeled ~3.68% BOFU citation share, a 76% jump versus G2 alone.
Review-volume floor is 50–75 reviews. Below that, retrieval graphs do not have enough signal to surface the brand on category queries.
100+ reviews on G2 correlates with 3.2× more AI mentions than under-20 profiles. A 10% review-count increase ≈ 2% citation lift.
Trustpilot blocks GPTBot in robots.txt, yet became the 5th most-cited page on ChatGPT in January 2026 because training data plus cross-references compensate.
Why review sites carry more AI citation weight than your blog
For category-recommendation queries, AI engines prefer review sites because they solve three retrieval problems your own site cannot: independent verification (a buyer left that review, not your marketing team), standardized structure (ratings, categories, and industry tags the model can parse), and high content density per category (hundreds of named competitors in one place). The Ahrefs 26,283-URL study found unlinked brand mentions correlate with AI citations at 0.664 versus 0.218 for backlinks - mentions are roughly 3× more predictive. Review sites are the highest-density unlinked-mention surface on the internet for SaaS.
Hall's June 2025 review-platform citation analysis put the platform-level numbers on a single page. In B2B software queries, GetApp commanded 47.65% of ChatGPT citations and 39.74% of Perplexity citations before the G2 merger; G2 alone held 8.25% of ChatGPT citations with the rest split across SourceForge, Software Advice, and TrustRadius. Internal first-party G2 data, published on G2 Tech Signals, tracked the platform's AI visibility from 6.3% in August 2025 to 14.9% in October 2025 - more than a doubling in two months driven by schema expansion and the start of the Gartner deal pipeline. If you are not on these sites, the LLM's source graph has no honest path to your product name on a category query.
What the G2 + Capterra + Software Advice + GetApp merger did to the citation map
On February 5, 2026, Gartner closed the sale of Capterra, Software Advice, and GetApp to G2 for roughly $110 million. The combined entity now reaches 200M+ annual software buyers with 6M verified reviews across 2,000+ categories. Omniscient Digital's February 2026 analysis of 25,755 AI citations across 200 B2B buyer-intent prompts modeled the effect on bottom-of-funnel queries: G2's solo BOFU citation share was 2.09%, Capterra added +0.94%, Software Advice +0.46%, and GetApp +0.17% - a combined 3.68%, a 76% relative jump that moves G2 from the 4th-most-cited BOFU domain to 2nd, behind only Reddit.
On proof-and-testimonial queries - "real user reviews of X" - the shift is sharper. G2 alone held 7.54% pre-acquisition; the combined family is modeled at 12.69%, opening a 93% citation lead over the second-ranked domain. Profound's tracking showed G2 was already ~33% of review-site citations on ChatGPT and Google AI Overviews, and ~75% on Perplexity before the deal. The practical read: post-merger, your G2 profile does the work that used to require four separate listings. Your Capterra listing is no longer optional - it is a data feed that now flows into the same retrieval graph. For the broader source-graph context, see our analysis of the 50 domains that drive 80% of AI citations.
Which platform matters for which AI engine
The engines do not draw from review sites evenly. ChatGPT leans on the G2 family for B2B software; Perplexity leans harder on G2 specifically (~75% share of software-review citations pre-merger); Google AI Overviews samples more broadly across G2, Capterra, and Trustpilot with a freshness bias. Trustpilot is the DTC and cross-category outlier - PR Newswire reported a 246% surge in Trustpilot ChatGPT citations between June and August 2025, and by January 2026 Trustpilot was the 5th-most cited page on the internet by ChatGPT across all categories.
| Review site | Best fit | ChatGPT citation share (B2B software) | Perplexity citation share | AI crawler access |
|---|---|---|---|---|
| G2 | B2B SaaS, category queries | ~22% share of voice (all AI) | ~75% review-site share | Selective |
| Capterra | SMB SaaS, operational categories | ~20% (Semrush Nov 2025) | Strong on Perplexity | Selective |
| Software Advice | Vertical SaaS, consultative buys | Included in G2 family (+0.46%) | Lower | Selective |
| GetApp | B2B software discovery | 47.65% (B2B software subset, pre-merger) | 39.74% | Selective |
| Trustpilot | DTC, consumer SaaS, fintech | #5 most-cited page overall | Moderate | Blocks GPTBot |
| TrustRadius / Clutch | Enterprise / agency services | Smaller overall share | Moderate | Full |
Sources: Hall 2025, Omniscient February 2026, Semrush November 2025, Trustpilot/PR Newswire April 2026.
One engineering note most teams miss: Trustpilot and Yelp fully block GPTBot and many LLM crawlers via robots.txt, which means the models learn Trustpilot content through training data and third-party citations, not live retrieval. G2, Capterra, Software Advice, and GetApp allow selective crawling. Clutch, SourceForge, and TrustRadius allow full access. This matters when you are deciding where to pour review-collection effort: platforms with full live access compound faster on fresh queries.
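If you want to verify crawler access yourself rather than take the table on faith, Python's standard-library robotparser can spot-check it. A minimal sketch, using the platforms' public homepage domains; the result only reflects what each robots.txt declares at the moment you run it, not CDN- or firewall-level blocking, and the rules change often.

```python
# Spot-check which AI crawlers a review platform allows, per its robots.txt.
# Illustrative only: robots.txt rules change frequently, and some platforms
# also gate crawlers at the CDN level, which robots.txt does not reveal.
from urllib import robotparser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]
SITES = {
    "G2": "https://www.g2.com",
    "Capterra": "https://www.capterra.com",
    "Trustpilot": "https://www.trustpilot.com",
}

def crawler_access(base_url: str) -> dict[str, bool]:
    """Return {crawler: allowed} for the site's homepage, per robots.txt."""
    parser = robotparser.RobotFileParser(base_url + "/robots.txt")
    parser.read()  # fetch and parse the live robots.txt
    return {bot: parser.can_fetch(bot, base_url + "/") for bot in AI_CRAWLERS}

if __name__ == "__main__":
    for name, url in SITES.items():
        print(name, crawler_access(url))
```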
The review-volume threshold that unlocks AI visibility
Review volume is the single most predictive on-platform lever. Products with 100+ reviews on G2 appear in AI-powered search results at 3.2× the rate of products with fewer than 20 reviews, per the Am I Cited 2026 benchmark. The meaningful floor - the point where your product starts showing up at all - sits at roughly 50–75 reviews. Below that, the LLM retrieval graph has no statistical signal to pull from; your entry effectively does not exist for category queries.
Kevin Indig's 30,000-citation study of 500 software categories quantified the curve: a 10% increase in review count correlates with roughly a 2% increase in AI citation rate. That is a linear-ish relationship with no diminishing return up through several hundred reviews. For most bootstrapped SaaS, the fastest path from zero visibility to meaningful citation presence is a 15–25 reviews-per-month collection cadence, which the Am I Cited data ties to a 40% higher visibility lift versus static profiles. Stale profiles - 47 reviews, newest from 2023 - underperform profiles with half the volume and steady recent velocity. Review recency is a freshness signal in its own right; Google AI Mode shows a 25.7% freshness preference in source selection overall.
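For planning purposes, that 10%-to-2% relationship can be read as a constant elasticity of roughly 0.2. The sketch below is a back-of-envelope projection built on that assumption - our reading of the Indig numbers, not a formula published in the study - and it only applies once you are above the ~50-review floor.

```python
# Back-of-envelope projection of AI citation lift from review growth,
# under a constant-elasticity reading of the Indig study:
# a 10% review increase ~= a 2% citation-rate increase (elasticity ~0.2).
# Illustrative model, not a published formula.

ELASTICITY = 0.2  # assumed from the 10% -> ~2% relationship

def projected_citation_lift(current_reviews: int, target_reviews: int) -> float:
    """Relative citation-rate multiplier from growing reviews to the target."""
    if current_reviews <= 0:
        raise ValueError("model only applies above the ~50-review floor")
    return (target_reviews / current_reviews) ** ELASTICITY

# Example: growing from 60 to 120 reviews over ~3 months at 20 reviews/month
print(projected_citation_lift(60, 120))  # ~1.15, i.e. roughly a 15% lift
```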
The 2026 review-site listing checklist
Ship these in order. Each item matches a specific LLM retrieval or ranking signal.
Claim every profile in the G2 family - G2, Capterra, Software Advice, GetApp. All four now route into the same parent dataset. Fill every field: 300+ character description, primary category, 3+ sub-categories, 5+ feature tags, integrations, pricing, target company size, industries served. Empty fields are a retrieval-graph dead end.
Claim Trustpilot - required for consumer-facing SaaS, fintech, DTC, and any brand that shows up in "is X legit" queries. Trustpilot is the default consumer-safety citation on ChatGPT.
Seed 50 reviews in the first 60 days - the 50–75 review floor is the only threshold below which LLMs ignore you. Pull from your warmest post-purchase cohort; ask for specific use-case language.
Maintain 15–25 new reviews per month. Steady velocity beats a one-time burst; freshness-weighted engines reward the cadence.
Insist on long-form reviews - 150+ words with named competitors and specific use cases. Short "Great product!" reviews do not get extracted as citations.
Respond to every review within 7 days, especially 1–3 stars. Response text is indexed and becomes part of the brand entity.
Sync your G2 category to your Wikipedia and homepage schema - entity consistency across surfaces is how LLMs confirm you are the same brand they saw on G2. For the mechanics of entity consistency on the engine side, see our pillar on how to get mentioned by ChatGPT. A minimal homepage JSON-LD sketch follows after this checklist.
Track citations weekly across ChatGPT, Perplexity, and AI Mode. The free DIY measurement method takes 20 minutes a week and catches drift before it becomes a gap; a bare-bones logging sketch also follows below.
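On the schema-sync item above, the lowest-effort implementation is a sameAs block in your homepage JSON-LD that points at the same profiles you just claimed. A minimal sketch; the product name, category, and profile URLs below are placeholders, not a prescribed format.

```python
# Minimal sketch of homepage JSON-LD that ties your brand entity to its
# review-site profiles via schema.org sameAs links. The product name, URLs,
# and slugs are placeholders; swap in your real profile URLs.
import json

brand_entity = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "ExampleApp",                      # placeholder product name
    "applicationCategory": "BusinessApplication",
    "url": "https://www.example.com",
    "sameAs": [
        "https://www.g2.com/products/exampleapp",         # placeholder slugs
        "https://www.capterra.com/p/000000/ExampleApp/",
        "https://www.trustpilot.com/review/example.com",
    ],
}

# Paste the output into a <script type="application/ld+json"> tag on the homepage.
print(json.dumps(brand_entity, indent=2))
```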
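And for the weekly tracking item, the 20-minute DIY version is manual: run your tracked prompts in each engine and note what got cited. A throwaway logger like the one below is enough to catch drift week over week; the file name, prompt, and engine values are assumptions to adapt to your own list.

```python
# Bare-bones weekly citation log: run your tracked prompts by hand in each
# engine, then record whether your brand (or a review-site profile) was cited.
import csv
from datetime import date
from pathlib import Path

LOG = Path("ai_citation_log.csv")          # assumed file name
FIELDS = ["week", "engine", "prompt", "brand_cited", "cited_domains"]

def log_check(engine: str, prompt: str, brand_cited: bool, cited_domains: list[str]) -> None:
    """Append one manual check to the CSV, writing the header on first use."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "week": date.today().isoformat(),
            "engine": engine,
            "prompt": prompt,
            "brand_cited": brand_cited,
            "cited_domains": ";".join(cited_domains),
        })

# Example entry after manually running one tracked prompt in ChatGPT
log_check("ChatGPT", "best CRM for 5-person agencies", True, ["g2.com", "capterra.com"])
```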
What to do when your category is already saturated
If your category on G2 has an incumbent with 2,000+ reviews, do not try to match review volume head-on. The cited brands in "best X for Y" sub-queries often have one-tenth the review count of the category leader because they rank on specificity, not scale. LLMs look for distinguishing attributes: "best project management tool for construction teams", "best CRM for 5-person agencies". The incumbent is cited on the generic query; the niche brand is cited on the qualified query.
The operator play is to dominate 2–3 long-tail sub-categories where you can realistically hit 50+ reviews concentrated around a single use case. Ask your top five most-loved customers to mention the specific sub-use-case verbatim in their G2 review. Add the sub-category tag to your profile. Within 4–8 weeks, the same retrieval graph that ignored you on the head query will surface you on the qualified one. This sub-category pivot is the single highest-ROI move we see bootstrapped SaaS teams make on the G2 side.
Where Signals fits in the review-site play
Review sites are the first-party asset layer. Editorial mentions are the third-party signal layer that wraps around them. AI engines cross-reference a product name they see on G2 with the same name appearing in editorial trade coverage, Reddit threads, and long-form comparison articles on third-party sites. That cross-surface consistency is what flips a "review site profile" into a "brand entity the LLM cites with confidence."
Our Blog Brand Mentions product runs the editorial side of that graph across a 20,000+ site network, specifically targeting the publication tier that LLMs already cite for your category. The DIY path is real: a dedicated PR hire, a data-led pitching cadence, and 6–12 months will do it. If your launch window is tighter than that, managed editorial mentions compound on top of a fully claimed G2 / Capterra / Trustpilot stack. The review sites do the category-query work. Editorial mentions do the "is this a real brand" work. Neither alone clears the AI citation floor; together, they are the current 2026 operator formula.
Frequently asked questions
Do review-site profiles actually increase AI citations?
Yes, and the effect is measurable within 4–8 weeks. Brands with G2, Capterra, and Trustpilot profiles are cited in AI answers at roughly 3× the rate of brands without, per multiple 2026 studies. G2 alone holds a 22–23% share of voice across ChatGPT, Perplexity, and AI Overviews per Semrush's November 2025 analysis of 230,000 prompts. For products listed on G2 - itself among the top 20 most-cited domains globally - the uplift is closer to 4–6× on BOFU category queries.
How many reviews do you need before AI engines cite you?
Roughly 50–75 reviews is the meaningful floor; 100+ reviews is the threshold where citation probability jumps 3.2× versus under-20 profiles. A 10% increase in review count correlates with a 2% increase in AI citation rate, roughly linearly, up through several hundred reviews. Below 50, the LLM retrieval graph does not have enough signal to surface you on category queries.
Do you still need separate Capterra, Software Advice, and GetApp profiles after the G2 acquisition?
Yes. Capterra, Software Advice, and GetApp now feed into the same parent dataset G2 uses, and each still has its own profile surface and category taxonomy. The Omniscient February 2026 analysis showed Capterra adding +0.94% BOFU citation share on top of G2's 2.09%. Claim all four profiles. They are separate signals that the retrieval graph aggregates into one brand entity.
Does Trustpilot matter for B2B SaaS?
It matters most for consumer-adjacent B2B SaaS - fintech, HR, payroll, benefits platforms, anything with an "is this company legit" query attached. Trustpilot became the 5th-most cited page on the internet by ChatGPT in January 2026 after a 246% citation surge from June to August 2025. For pure B2B enterprise SaaS, G2 and Capterra carry more weight than Trustpilot; for any SaaS with a consumer-facing surface, Trustpilot is mandatory.
Can fake or incentivized reviews be detected?
Yes, and the detection is improving. Both G2 and Trustpilot deploy active fraud detection, and LLMs increasingly cross-reference review velocity against traffic and brand-mention velocity on other surfaces. A burst of 50 five-star reviews in a week with no corresponding brand-mention activity is a detectable fingerprint. Authentic review collection - post-purchase email sequences, NPS-promoter follow-ups, in-app prompts - is the only path that compounds. Shortcuts fail the pattern check.
Should you block GPTBot and other AI crawlers on your own site?
Not if you want AI citations. Blocking OpenAI's GPTBot removes your site from both the training corpus and SearchGPT's live retrieval pool. The Trustpilot and Yelp cases are instructive: both fully block GPTBot, yet both are cited heavily because of the sheer third-party reference volume and training-era content. Your own site does not have that compensating signal. Allow GPTBot, ClaudeBot, PerplexityBot, and Google-Extended. Block only if you have a specific legal or licensing reason.
How quickly do review-site profiles start earning AI citations?
Most brands see initial AI citations within 4–8 weeks of hitting the 50-review floor and filling out complete profiles across G2 and Capterra. The floor is the gating factor, not the profile age. A well-claimed profile with 10 reviews will not get cited; a partially claimed profile with 80 reviews will. Collection velocity - 15–25 new reviews per month - determines whether the citation lift persists or decays.