How we measure AI visibility
Our methodology is fully transparent. Every point in your score traces back to a specific prompt, a specific AI response, and a specific result. No black box.
See the full version history: Methodology changelog →
The AI Visibility Score
The AI Visibility Score is a weighted number from 0 to 1000, modeled after a credit score. Higher is better. A score of 0 means your brand did not appear in any AI-generated answer across the prompts we tested. A score of 1000 would mean your brand appeared first, prominently, and without competitors in every answer.
Mention detection
We check whether your brand name appears in each AI response. This is the baseline — you either showed up or you didn't.
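A baseline check like this can be sketched in a few lines. The function below is an illustrative sketch only (the actual detection pipeline is not published here); it assumes a case-insensitive, whole-word match so that "Acme" is found but a substring like "Acmeville" does not count. The brand names and the sample answer are hypothetical.

```python
import re

def brand_mentioned(response: str, brand: str) -> bool:
    """Case-insensitive whole-word check for a brand name in an AI answer."""
    pattern = r"\b" + re.escape(brand) + r"\b"
    return re.search(pattern, response, flags=re.IGNORECASE) is not None

# Hypothetical AI response for illustration
answer = "For teams on a budget, Acme and Globex are both solid picks."
```

Whole-word matching is the key design choice here: a naive substring search would produce false positives for brands whose names appear inside other words.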
Rank position
AI models often list multiple brands. We detect where in the answer your brand appears — first mention scores higher than fifth.
Sentiment
We analyze the language used when your brand is mentioned. Positive framing scores higher than neutral; neutral scores higher than negative.
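A production system would use a proper NLP sentiment model for this step; the toy classifier below only illustrates the idea of mapping the framing language around a mention to a sentiment class. The keyword lists are invented for illustration and are far smaller than anything usable in practice.

```python
# Toy lexicons -- illustrative only, not the product's actual word lists
POSITIVE = {"best", "excellent", "recommended", "leading"}
NEGATIVE = {"avoid", "poor", "outdated", "limited"}

def classify_sentiment(sentence: str) -> str:
    """Classify the framing of a brand mention as positive, neutral, or negative."""
    words = set(sentence.lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"
```

The three resulting classes map directly onto the scoring rule above: positive framing scores highest, neutral next, negative lowest.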
Competitor presence
If your brand is mentioned alongside many competitors, that dilutes visibility. Being the only or primary recommendation scores higher.
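The dilution effect can be modeled as a factor that shrinks as more competitors share the answer. The curve below (1 / (1 + n)) is an assumption for illustration, not the product's published formula; the point is only that a solo recommendation keeps full weight while a crowded answer is discounted.

```python
def dilution_factor(n_competitors_mentioned: int) -> float:
    """Scale a mention's value down as more competitors appear alongside it.

    Illustrative curve: 1.0 when the brand is the only recommendation,
    approaching 0 as the answer fills with competitors.
    """
    if n_competitors_mentioned < 0:
        raise ValueError("competitor count cannot be negative")
    return 1.0 / (1 + n_competitors_mentioned)
```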
How prompts work
We run prompts that simulate real questions your potential customers ask AI models. Examples include "What are the best options for X?" or "Which brands do you recommend for Y?" We run these prompts across multiple large language models and collect the responses.
Each prompt is run independently across all monitored models. Results are aggregated into a score per prompt, then rolled up to a category score and an overall score.
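The two-level rollup described above can be sketched as a simple averaging pipeline. The category names, scores, and unweighted mean are all illustrative assumptions; the actual aggregation may weight prompts, categories, or models differently.

```python
from statistics import mean

def rollup(prompt_scores: dict[str, list[float]]) -> dict:
    """Aggregate per-prompt scores (grouped by category) into category
    scores and an overall score. Unweighted means, for illustration."""
    category_scores = {cat: mean(scores) for cat, scores in prompt_scores.items()}
    overall = mean(category_scores.values())
    return {"categories": category_scores, "overall": round(overall)}
```

For example, hypothetical per-prompt scores of [800, 600] in one category and [400, 200] in another roll up to category scores of 700 and 300, and an overall score of 500.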
Which models we monitor
We currently monitor:
- ChatGPT — GPT-4o Mini and GPT-5.2
- Claude — Opus 4.6 and Sonnet 4.5
- Gemini — 2.5 Flash

We report scores per model and in aggregate. Visibility varies significantly across models — a brand can be consistently recommended by one model and absent from another.
Score ranges
| Score | Label | What it means |
|---|---|---|
| 800–1000 | Dominant | Consistently first or primary recommendation |
| 600–799 | Strong | Frequently mentioned, competitive position |
| 400–599 | Moderate | Appears in some answers, inconsistent coverage |
| 200–399 | Weak | Rarely appears, significant gaps |
| 0–199 | Not visible | Little to no presence in AI-generated answers |
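The band boundaries in the table map to labels as follows. This sketch implements exactly the published ranges; only the function and variable names are mine.

```python
# Score bands from the published table, highest floor first
BANDS = [
    (800, "Dominant"),
    (600, "Strong"),
    (400, "Moderate"),
    (200, "Weak"),
    (0, "Not visible"),
]

def score_label(score: int) -> str:
    """Map a 0-1000 AI Visibility Score to its band label."""
    if not 0 <= score <= 1000:
        raise ValueError("score must be between 0 and 1000")
    for floor, label in BANDS:
        if score >= floor:
            return label
```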
Transparency commitment
We believe AI visibility data should be as transparent as possible. That means:
- Every score component is labeled and traceable to its source data
- Methodology changes are versioned and published in the changelog
- Raw AI responses are stored and available for review
- Scores are recalculated when the methodology is updated
See your score
Run a free snapshot to see exactly how your brand is measured — prompt by prompt, model by model.
Run free snapshot