Session 2 · May 6

The Engine

Why your content isn't getting cited. Here's how the brand kit changes everything.

Your competitor was cited 3 times.

You weren't cited once.

The Uncomfortable Truth

If your competitor is showing up and you aren't, it's not because their product is better. It's because their content is shaped in a way AI engines can extract and cite. Yours isn't. Yet.

How Engines Think

Some engines answer from memory.

Others go search first. Here's what that looks like.

Perplexity and Google AI Overviews always retrieve live web pages before responding. ChatGPT does too when web browsing is on. When they do, this is what happens in the next few seconds.

Step 1 · The Prompt
1 question

A single question typed by the buyer.

What the buyer types
Step 2 · The Sub-Queries
3 searches run

The engine rewrites it into 2 more searches with different angles. 88.6% of prompts trigger exactly 2 sub-queries.

88.6% generate 2 sub-queries
Step 3 · Pages Retrieved
~39 pages pulled

Each sub-query pulls ~13 pages from the web. One question, three searches, ~39 pages.

~13 pages per sub-query
Step 4 · The Answer
85% never cited

Most of those 39 pages never make the final answer. Only 15% get cited.

15% citation rate
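The four steps above reduce to simple arithmetic. Here is a minimal sketch of that funnel, using the average figures quoted in this session (the constants are those averages, not exact per-query values):

```python
# Retrieval funnel for a single buyer prompt, using the session's averages.
PROMPTS = 1
EXTRA_SUBQUERIES = 2        # the engine rewrites the prompt into 2 more searches
PAGES_PER_SEARCH = 13       # ~13 pages retrieved per search
CITATION_RATE = 0.15        # only 15% of retrieved pages get cited

searches = PROMPTS + EXTRA_SUBQUERIES            # 3 searches run
pages_retrieved = searches * PAGES_PER_SEARCH    # ~39 pages pulled
pages_cited = round(pages_retrieved * CITATION_RATE)

print(searches, pages_retrieved, pages_cited)    # 3 39 6
```

In other words, of the ~39 pages pulled for one question, only about 6 survive into the answer. That is the competition your best sentence is in.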
The Citation Problem

Getting found is only the first hurdle.

Getting cited is harder.

Think of the engine as a teacher grading essays with a highlighter. It's not reading your whole page. It's scanning for one sentence that directly answers the question.

The Rule

The enemy of citation is a buried answer. If your best sentence is in paragraph 6, the engine skips right past it.

The Notion Experiment

We took one brand.

Three engines. Three questions.

Engine 1
Perplexity

Always retrieves live web pages. Shows exact URLs. Pulls predominantly from Notion's own docs.

Engine 2
Google AI Overview

Retrieves live pages. Pulls heavily from Reddit, Capterra, and competitor-owned sites.

Engine 3
ChatGPT

Training data only. No live sources cited. Most widely used engine and most at risk for stale positioning.

Q1
What is Notion?
Q2
How does Notion compare to Asana?
Q3
Where does each engine pull its information from?
What feeds each engine
Perplexity

Your own docs and published content. You have direct influence over what this engine pulls.

High control
Google AI Overview

Reddit, review sites, competitor content. Shaped heavily by what others say about you.

Limited control
ChatGPT

Training data from before its cutoff. The only lever is what got indexed upstream.

Upstream only
The Consistent Signal

All three engines said the same things.

Whether Notion wants them to or not.

All-in-one workspace

Every engine led with this framing. No variance across Perplexity, Google, or ChatGPT.

Flexible vs. Asana's structure

All three agreed: Notion is the blank canvas, Asana is the opinionated tool. Perfectly consistent framing.

Wins on wikis and knowledge management

Every engine agreed Notion wins on documentation, wikis, and knowledge bases against Asana.

Steep learning curve

Every engine surfaced “setup required” or “steep learning curve” as a weakness. It's embedded in the narrative whether Notion put it there or not.

The Fragmentation

This is Notion.

50M+ users. A decade of content. Massive brand awareness.

And three engines still can't describe it consistently.

AI features: present or invisible

Google led with “AI-powered.” Perplexity never mentioned Notion AI. ChatGPT also ignored it entirely. No consistent message on their key differentiator.

Who is it actually for?

Google said startups and creative teams. ChatGPT said students, entrepreneurs, and creators. Perplexity didn't specify. Three engines, three different customers.

Tool, system, or mashup?

Google: “second brain, replace all your tools.” Perplexity: neutral workspace. ChatGPT: “like Trello plus Airtable plus Confluence.” Three mental models, same product.

Different weaknesses by source

Google: weak mobile app and DB performance lags. Perplexity: setup time. ChatGPT: blank-canvas ambiguity. Each engine inherited a different set of failure modes.

Your Turn

What's one thing you're worried the engines are getting wrong about you?

The Fix

Shared context.

Your brand kit is where it lives. One source of truth your whole team and every playbook pulls from.

Live demo
The Anatomy

A playbook is a single workflow.

All of yours pull from the same upstream source.

Shared context
Brand Kit

Voice, positioning, competitors, differentiators. Who you are and how you talk about it.

Knowledge Base

Docs, case studies, product details, FAQs. The specific material each playbook works from.

Example playbook options
Content Creation

Draft new content in your brand voice, structured for citation.

Content Refresh

Update existing content to reflect new positioning and add extractable structure.

Comparison Pages

Generate competitor comparisons grounded in your differentiators.

The Architecture

Update the brand kit or knowledge base and every playbook that pulls from it reflects the change automatically.
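The update-once, propagate-everywhere idea can be sketched in a few lines. This is an illustrative model only; the names (`brand_kit`, `comparison_page`, and so on) are hypothetical and are not AirOps APIs:

```python
# Minimal sketch of the shared-context architecture: one upstream source,
# multiple playbooks reading from it. All names here are illustrative.

brand_kit = {
    "voice": "plainspoken, confident",
    "positioning": "the connected workspace",
    "competitors": ["Asana"],
}

knowledge_base = {
    "faq": "Notion combines docs, wikis, and project tracking.",
}

def content_creation(kit, kb):
    # A playbook drafts content in the shared brand voice.
    return f"[{kit['voice']}] {kb['faq']}"

def comparison_page(kit, kb):
    # Another playbook grounds a comparison in the shared positioning.
    rival = kit["competitors"][0]
    return f"{kit['positioning']} vs. {rival}"

# One update to the brand kit...
brand_kit["positioning"] = "the all-in-one workspace"

# ...and every playbook that pulls from it reflects the change.
print(comparison_page(brand_kit, knowledge_base))
# → the all-in-one workspace vs. Asana
```

Because every playbook reads from the same upstream objects rather than holding its own copy, there is no drift between outputs: the change lands once and flows everywhere.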

The Challenge

You understand this in theory.

What does it look like in practice?

  • You understand why a brand kit is the foundation for AI visibility.
  • You understand how shared context transforms every output you produce.
  • You understand this is how your team produces content at scale without losing brand consistency.
The remaining problem

Everyone else in your company is still working from their own version of the truth. Different stakeholders, different context, different language. Before this scales, you need internal alignment.

The Internal Pitch

How to get your team on board.

A framework, not a script.

Beat 1
The Problem

Name a specific gap. A search you ran. A competitor that showed up when you didn't. Make it concrete.

Beat 2
The Strategy

Name the strategic move, not the tool. You're building one source of truth that every output starts from.

Beat 3
The System

Describe what you're building. A brand kit that feeds every workflow. One update, everywhere.

Beat 4
The Ask

Keep it small. Ask for 30 minutes to validate your positioning. Not budget. Not a project.

30 minutes, not budget
Tonight's Homework

Two things. Both matter for tomorrow.

One
Complete the context source map

It's in the Resource Hub. A quick inventory: where does your best brand context live today? Which team has it? What format is it in? This maps directly to what goes in your brand kit vs. your knowledge base.

Two
Sharpen your brand kit

Add prompts to AirOps and keep expanding your brand kit. Voice rules, competitors, differentiators. The more complete your context, the better every output gets.

Tomorrow: The AirOps MCP

Your brand kit won't just live in AirOps. It will follow you everywhere. Into Claude. Into ChatGPT. Into whatever your team already uses. Come with your brand kit sharpened and a workflow in mind.