News from With

Getting started with GEO? Here’s our glossary of AI terminology

For many businesses, GEO arrives wrapped in unfamiliar terminology. This lack of shared language creates friction: teams struggle to brief agencies, measurement is misunderstood, and expectations quickly drift from what is realistically achievable.

1. Core GEO and AI concepts

Generative engines
AI-powered tools that answer questions directly, rather than showing a list of search results.

Generative engine optimisation (GEO)
The process of improving how a business appears in answers generated by AI tools such as ChatGPT, Gemini and Perplexity.

Large language models (LLMs)
The AI systems behind generative engines. They are trained on very large amounts of text, which enables them to produce human-like answers.

AI-generated answers
Responses written by a generative engine using information it has gathered and interpreted from many sources.

LLM visibility
How often, and how clearly, a brand or organisation appears in AI-generated answers.

Zero-click search
When users get the information they need from an AI answer without clicking through to a website.

2. Prompts and how AI is tested

Prompt
The question or instruction entered into an AI tool to generate an answer.

Prompt set
A defined group of prompts used consistently to measure how AI answers change over time.

Prompt testing
Running the same prompts repeatedly to understand how AI responses vary across tools or time periods.
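
As a minimal sketch of what prompt testing looks like in practice, the same prompt set is run against each model and the answers are logged for later comparison. The prompts, model names and `query_model` stub below are all illustrative; in a real test the stub would call each AI tool's API.

```python
# A fixed prompt set, run consistently so results are comparable over time.
PROMPT_SET = [
    "Which companies lead the UK payroll software market?",
    "Compare Acme Payroll and Globex Payroll.",
    "Is Acme Payroll a trustworthy provider?",
]

def query_model(model, prompt):
    # Stubbed response; in practice this would call the model's API.
    return f"[{model}] answer to: {prompt}"

def run_prompt_set(models, prompts):
    """Return {model: [answer, ...]} so runs can be compared across tools."""
    return {m: [query_model(m, p) for p in prompts] for m in models}

results = run_prompt_set(["model-a", "model-b"], PROMPT_SET)
for model, answers in results.items():
    print(model, "returned", len(answers), "answers")
```

Storing each run's answers with a timestamp is what makes it possible to see how responses drift between tools and over time.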

Category prompt
A broad question used to see which companies or brands an AI associates with a particular market or service area.

Comparison prompt
A question that asks an AI to compare two or more companies, products or services.

Brand reputation prompt
A question that asks an AI directly about how good, credible or trustworthy a particular brand is.

3. Measurement and benchmarking

GEO benchmarking
A structured way of measuring how a brand appears in AI answers at a specific point in time.

Baseline (GEO baseline)
The starting measurement used to track progress and change in AI visibility over time.

Share of model
A metric showing how often a brand appears in AI answers compared with its competitors for a set of prompts.
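
One simple way to compute such a metric is to count, for each brand, the fraction of collected answers that mention it. This is a sketch under stated assumptions (plain substring matching over stored answer text; the brands and answers are placeholders), not a definitive methodology.

```python
from collections import Counter

def share_of_model(answers, brands):
    """Fraction of AI answers mentioning each brand (illustrative metric)."""
    counts = Counter()
    for answer in answers:
        text = answer.lower()
        for brand in brands:
            if brand.lower() in text:
                counts[brand] += 1
    total = len(answers) or 1  # avoid division by zero on an empty run
    return {brand: counts[brand] / total for brand in brands}

answers = [
    "Acme and Globex are the leading providers.",
    "Many firms recommend Acme for this service.",
    "Initech is a smaller player in the space.",
]
print(share_of_model(answers, ["Acme", "Globex", "Initech"]))
# Acme appears in 2 of 3 answers, the others in 1 of 3 each.
```

Real tooling would also need to handle brand-name variants and avoid false matches, but the core idea is a mention count normalised by the size of the prompt set.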

Decision-grade data
Data that is reliable and consistent enough to confidently support business decisions. Some AI data does not yet meet this standard.

Multi-model divergence
Differences in how various AI tools respond to the same question.

4. Accuracy, sentiment and interpretation

Response accuracy scoring
An assessment of whether AI answers describe a business correctly, partly correctly, incorrectly, or not at all.
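
A scoring rubric like this can be turned into a simple aggregate number. The scale and labels below are hypothetical, intended only to show the shape of the calculation: answers that do not mention the business are excluded, and the rest are averaged against the maximum possible score.

```python
# Hypothetical rubric: points per accuracy label; None = brand not mentioned.
ACCURACY_SCALE = {
    "correct": 2,
    "partially_correct": 1,
    "incorrect": 0,
    "not_mentioned": None,
}

def accuracy_score(labels):
    """Average accuracy (0.0-1.0) over answers that mention the business."""
    scored = [ACCURACY_SCALE[label] for label in labels
              if ACCURACY_SCALE[label] is not None]
    if not scored:
        return None  # the business never appeared, so nothing to score
    return sum(scored) / (2 * len(scored))

print(accuracy_score(["correct", "partially_correct",
                      "incorrect", "not_mentioned"]))  # → 0.5
```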

Sentiment analysis
A review of whether AI responses describe a brand in a positive, neutral or negative way.

Framing analysis
An assessment of how a brand is positioned in AI answers, for example as a leader, a credible option, or a weaker choice.

5. Sources and citations

Citation
A source referenced by an AI tool to support an answer.

Citation analysis
The process of reviewing which sources AI tools rely on when talking about a brand or topic.
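
At its simplest, citation analysis means tallying which domains the cited sources come from. The URLs below are placeholders; a real analysis would use the citations collected from AI answers in a prompt set.

```python
from collections import Counter
from urllib.parse import urlparse

def citation_domains(citations):
    """Count how often each domain appears among AI-cited sources."""
    return Counter(urlparse(url).netloc for url in citations)

cited = [
    "https://www.example.com/report",
    "https://news.example.org/story",
    "https://www.example.com/about",
]
print(citation_domains(cited))
# www.example.com is cited twice, news.example.org once.
```

Seeing which domains dominate the citations shows where AI tools are getting their information about a brand, and therefore where improving content may have the most effect.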

6. Content structure and technical terms

Structured content
Content organised using clear headings, bullet points and lists so AI systems can understand it more easily.

Semantic relevance
How clearly content reflects the meaning and context of a topic, rather than just repeating keywords.

Schema markup
Extra information added to a website that helps AI systems understand what a page contains.
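
One common form of schema markup is a JSON-LD block embedded in a page's HTML. As a minimal sketch, the snippet below builds such a block with Python's standard library; the organisation details are placeholders, and the `Organization` type comes from the schema.org vocabulary.

```python
import json

# Placeholder organisation details using the schema.org "Organization" type.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Agency",
    "url": "https://www.example.com",
    "description": "An example services business.",
}

# The resulting JSON-LD would sit inside a
# <script type="application/ld+json"> tag in the page's <head>.
snippet = json.dumps(org, indent=2)
print(snippet)
```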

llms.txt
A proposed file, similar in spirit to robots.txt, that helps guide AI systems towards preferred content on a website.

AI crawlers
Automated tools used by AI providers to find and read online content.
