Demo data · Pro monitoring features · Interactive actions disabled
This is sample data for a fictional brand. Run your own prompts to see your brand's real AI Visibility Score.
Prompt Detail
Coverage details
Mentioned in 0 of 0 responses
Detailed Results
AI responses for the selected prompt in this run.
No completed answers yet for this prompt.
AI models currently default to other brands for this prompt. Most brands start here. Visibility typically improves only after models encounter your brand in authoritative comparisons, references, or third-party sources.
Scores are versioned and explainable. See the methodology changelog for exact calculation details.
You're rarely recommended for this category. This is normal for newer or niche brands.
Enable daily monitoring to track when visibility starts improving.
Your brand is not yet appearing in AI responses for this prompt.
This reflects how AI models currently respond to these prompts—not a failing grade. Most brands start here. The value of monitoring is knowing when this changes.
Continued monitoring will track any shifts in AI model behavior over time.
Run Health
- ⚠ Some model responses missing
- ⏱ Runtime: 268.4s
- 📦 Prompts tested: 1
Prompt Coverage by Model
Mention rate by model for this run and selected prompt.
Low scores on a specific model matter most if your customers use that AI tool; consider focusing your online-presence improvements there.
Run History
Compare with other runs for this selected prompt.
| Date | Mentions | Status | Runtime |
|---|---|---|---|
| — | 5/5 | Success | 10.3m |
| — | 5/5 | Success | 11.6m |
| — | 5/5 | Success | 12.7m |
| — | 5/5 | Success | 10.7m |
| — | 5/5 | Success | 10.9m |
| — | 5/5 | Success | 11.3m |
| — | 5/5 | Success | 11.1m |
| — | 5/5 | Success | 11.8m |
| — | 5/5 | Success | 11.3m |
| — | 4/4 | Success | 8.4m |
Understanding Your Metrics
AI Visibility Score
How often AI models mention your brand across the prompts in this run, weighted by mention quality and prompt reliability as described in the methodology.
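As a rough illustration, a quality- and reliability-weighted score could be computed along these lines. The tier weights, field names, and aggregation rule below are assumptions for the sketch, not the product's actual formula:

```python
# Illustrative sketch of a quality- and reliability-weighted visibility score.
# Tier weights and the aggregation rule are assumptions, not the documented method.
QUALITY_WEIGHT = {"strong": 1.0, "moderate": 0.6, "weak": 0.3, "none": 0.0}

def visibility_score(responses):
    """responses: list of dicts with a 'quality' tier and a 'reliability' weight in 0..1."""
    if not responses:
        return 0.0
    weighted = sum(QUALITY_WEIGHT[r["quality"]] * r["reliability"] for r in responses)
    total = sum(r["reliability"] for r in responses)
    return round(100 * weighted / total, 1) if total else 0.0

demo = [
    {"quality": "strong", "reliability": 0.9},   # reliably mentioned and recommended
    {"quality": "none",   "reliability": 0.9},   # reliably absent
    {"quality": "weak",   "reliability": 0.5},   # flaky prompt, brief mention
]
```

With this toy data, `visibility_score(demo)` lands between 0 and 100, and low-reliability prompts pull on the score less than stable ones.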
Confidence Range (95% CI)
Statistical uncertainty in your score. With limited data points, your "true" visibility could fall anywhere in this range. More runs = narrower range = more confidence.
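One standard construction for such an interval on a mention rate is the Wilson score interval; this sketch assumes that construction (the product's exact method isn't stated here):

```python
import math

def wilson_ci(mentions, runs, z=1.96):
    """95% Wilson score interval for a mention rate; narrows as runs accumulate."""
    if runs == 0:
        return (0.0, 1.0)  # no data yet: the true rate could be anywhere
    p = mentions / runs
    denom = 1 + z**2 / runs
    centre = (p + z**2 / (2 * runs)) / denom
    half = z * math.sqrt(p * (1 - p) / runs + z**2 / (4 * runs**2)) / denom
    return (max(0.0, centre - half), min(1.0, centre + half))
```

For example, `wilson_ci(5, 10)` is much wider than `wilson_ci(50, 100)` even though both rates are 50%, which is exactly the "more runs = narrower range" behaviour described above.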
Model Consensus
Do different AI tools (ChatGPT, Claude, Gemini, etc.) agree on mentioning your brand? Strong consensus means most models mention you. Weak consensus means only some do—your visibility depends on which AI your customers use.
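A minimal sketch of a consensus measure, assuming it is simply the share of models that mention the brand (the tier thresholds here are illustrative, not the product's):

```python
def model_consensus(per_model_mentions):
    """per_model_mentions: {model name: True if the brand appeared for that model}.
    Thresholds are assumptions for the sketch."""
    rate = sum(per_model_mentions.values()) / len(per_model_mentions)
    if rate >= 0.75:
        return "strong"
    if rate >= 0.5:
        return "moderate"
    return "weak"
```
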
Mention Quality
When AI mentions you, how prominently?
- Strong — AI specifically recommends your brand
- Moderate — AI includes you prominently among options
- Weak — AI briefly lists you among many alternatives
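These tiers are presumably assigned by a more sophisticated judge; as a crude keyword-based sketch of the idea (the brand names, keyword, and competitor threshold are all assumptions):

```python
def mention_quality(response, brand, competitors):
    """Crude heuristic sketch: a lone, recommended mention is strong; a mention
    buried among many alternatives is weak. Real systems likely use an LLM judge."""
    text = response.lower()
    if brand.lower() not in text:
        return "none"
    rivals = sum(c.lower() in text for c in competitors)
    if "recommend" in text and rivals == 0:
        return "strong"
    return "moderate" if rivals <= 2 else "weak"
```
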
Average Mention Position
A prominence proxy based on where your first mention appears in each response. "Top 30%" means your brand usually appears in the first third of the response. List rank is shown separately when explicit ranked lists are detected.
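A minimal sketch of how such a position could be measured, assuming it is the character offset of the first brand mention relative to response length (the real metric may tokenize differently):

```python
def first_mention_position(response_text, brand):
    """Fraction of the way through a response where the brand first appears, or None."""
    idx = response_text.lower().find(brand.lower())
    if idx == -1:
        return None  # brand not mentioned in this response
    return idx / max(1, len(response_text))

def average_position(responses, brand):
    """Average first-mention position over responses that mention the brand at all."""
    positions = [p for p in (first_mention_position(r, brand) for r in responses)
                 if p is not None]
    return sum(positions) / len(positions) if positions else None
```

Under this reading, "Top 30%" corresponds to an average position below 0.3.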
Benchmark Comparison
How you compare to other brands we've run for similar prompt themes. "Above average" means you're outperforming most similar brands. This is based on aggregate data across our platform.
Reliability-Adjusted Score
Some prompts give random results—sometimes mentioning your brand, sometimes not. This score weights reliable prompts more heavily, giving you a more stable measure of your true AI Visibility Score.
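One way to derive reliability weights from run-to-run consistency and apply them is sketched below; the variance-based weight and the neutral default for sparse data are assumptions:

```python
def prompt_reliability(outcomes):
    """outcomes: 0/1 mention flags for one prompt across repeated runs.
    1.0 = perfectly consistent; 0.0 = a coin flip. Scaling is an assumption."""
    if len(outcomes) < 2:
        return 0.5  # too little data to judge; neutral weight
    p = sum(outcomes) / len(outcomes)
    return 1.0 - 4 * p * (1 - p)  # variance of a 0/1 flag is p(1-p), max 0.25

def reliability_adjusted(prompt_rates, reliabilities):
    """Weighted average of per-prompt mention rates; noisy prompts count for less."""
    total = sum(reliabilities)
    if total == 0:
        return 0.0
    return sum(r * w for r, w in zip(prompt_rates, reliabilities)) / total
```

A prompt that always (or never) mentions the brand gets full weight, while one that flips between runs contributes little to the adjusted score.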
Tone (Rec / Inc / Caut)
How AI frames your brand when it mentions you:
- Recommended — AI actively suggests your brand to users
- Included — AI lists you among options without strong preference
- Cautioned — AI mentions concerns, limitations, or caveats
Questions? Contact support