How exactly does Google's AI ranking work? RankBrain, BERT, MUM - I'm confused
I keep reading conflicting information about BERT.
Back in 2019, BERT was THE thing to understand for SEO. Natural language processing, understanding context, etc.
Now everyone’s talking about GPT-4, Claude, Gemini, and I’m confused.
My questions: is BERT still relevant? What has actually changed? And what should I be optimizing for?
Trying to cut through the noise and understand what actually matters for content optimization now.
Let me clarify the technical landscape.
The model family tree:

```
Transformer (2017)
├── BERT-style (encoders - understand text)
│   ├── BERT (Google, 2018)
│   ├── RoBERTa (Meta)
│   └── many others
├── Encoder-decoders (understand AND generate)
│   └── MUM (Google, 2021 - built on T5, so not a pure encoder)
└── GPT-style (decoders - generate text)
    ├── GPT series (OpenAI)
    ├── Claude (Anthropic)
    ├── Gemini (Google)
    └── many others
```
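If you want to feel the encoder/decoder difference directly, both styles are easy to run locally. A minimal sketch using the Hugging Face transformers library - the checkpoints here are small public stand-ins, not the models Google actually runs:

```python
# pip install transformers torch
from transformers import pipeline

# Encoder (BERT-style): fills in a masked word using context from BOTH sides.
# This is "understanding" - scoring how well a word fits its surroundings.
encoder = pipeline("fill-mask", model="bert-base-uncased")
for pred in encoder("Search engines use [MASK] to understand queries.")[:3]:
    print(f"encoder: {pred['token_str']!r} (score {pred['score']:.3f})")

# Decoder (GPT-style): predicts the NEXT word, left to right, repeatedly.
# This is "generation" - producing new text, the way chatbots write answers.
decoder = pipeline("text-generation", model="gpt2")
result = decoder("Search engines use", max_new_tokens=12)
print("decoder:", result[0]["generated_text"])
```

Same Transformer foundation, opposite jobs: the encoder judges text it's given, the decoder writes new text. Traditional ranking leans on the first behavior; AI answers lean on the second.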
BERT is still relevant, but as one layer in a larger stack - Google has evolved it rather than replaced it, and newer systems sit on top of it.
What actually matters:
| Search Type | Primary Model Style | Your Focus |
|---|---|---|
| Traditional Google | BERT/MUM (encoders) | Query-content matching, intent |
| AI Overviews | Hybrid | Extractable answers |
| ChatGPT/Perplexity | GPT-style (decoders) | Comprehensive, citable content |
The practical takeaway:
“Optimizing for BERT” was always about writing natural, context-rich content. That hasn’t changed. The specific model names don’t matter for your optimization strategy.
Exactly. “Optimize for BERT” was shorthand for writing natural, context-rich content that answers the questions people actually ask.
All of this still applies. You’re optimizing for how modern language models understand text, not for a specific model.
The principles that work across all models: clear language, explicit context, direct answers, and demonstrated expertise.
These help BERT understand your content for ranking AND help GPT-style models extract it for citations.
SEO perspective on the BERT evolution.
The BERT era (2019-2021): Google got dramatically better at matching queries to content by understanding context and intent, not just keywords.
The MUM/AI era (2021-present): multimodal, multi-task models, and eventually AI Overviews that generate answers instead of only ranking pages.
What changed in practice:
Honestly? Not much for content strategy.
The advice was always: write naturally, answer the question directly, and demonstrate real expertise.
This worked for BERT. It works for MUM. It works for GPT.
What IS new:
The citation/extraction layer. GPT-style models need to extract and cite your content, not just match it to queries.
This requires: direct, quotable answers; clear structure; and sections that stand on their own when lifted out of context (see the sketch below).
But the natural language foundation is the same.
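To make “extractable” concrete: here's a toy audit (my own illustration, not any tool's real logic) that checks whether each question-style heading in a markdown draft is followed by a short, self-contained answer:

```python
import re

def audit_extractability(markdown: str, max_answer_words: int = 40):
    """For each '## ...?' heading, check that the first paragraph under it
    is a short, direct answer - a rough proxy for 'quotable by an AI'."""
    results = []
    for section in re.split(r"^##\s+", markdown, flags=re.M)[1:]:
        lines = section.splitlines()
        heading = lines[0].strip()
        if not heading.endswith("?"):
            continue  # only audit question-style headings
        # First paragraph = consecutive non-empty lines after the heading.
        para = []
        for line in lines[1:]:
            if line.strip():
                para.append(line.strip())
            elif para:
                break
        answer = " ".join(para)
        # Heuristics: short enough to quote, and not leaning on prior context.
        ok = (
            0 < len(answer.split()) <= max_answer_words
            and not answer.lower().startswith(("as mentioned", "see above"))
        )
        results.append((heading, ok))
    return results

draft = """## Is BERT still relevant for SEO?
Yes. BERT-style understanding still powers query matching; you add
extraction-friendly structure on top of it.

## What changed with AI search?
As mentioned above, many interrelated things changed in complicated ways
that are hard to summarize without the full context of this page.
"""
for heading, ok in audit_extractability(draft):
    print("PASS" if ok else "FIX ", heading)
```

The thresholds and banned openers are arbitrary; the point is that “extractable” can be checked mechanically, which is exactly what AI systems do when they lift a passage.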
Content strategy perspective.
How I explain this to clients:
“BERT was about Google understanding what you mean. GPT is about AI using what you wrote.”
The practical difference:
For traditional search (BERT/MUM understanding): focus on query-content matching and covering the intent behind the search.
For AI answers (GPT extraction): focus on extractable, citable answers that survive being quoted out of context.
The overlap:
Both reward natural writing, clear structure, and genuine expertise.
My recommendation:
Don’t think in terms of “optimizing for BERT vs GPT.” Think: “How do I create content that language models can understand (BERT) AND extract/cite (GPT)?”
The answer is the same: clear, natural, well-structured, expert content.
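One way to sanity-check the “understand” half is with off-the-shelf embeddings. A minimal sketch with the sentence-transformers library - a small public BERT descendant, standing in for whatever Google actually runs - scoring how well passages match a query by meaning rather than keywords:

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small public encoder

query = "is BERT still relevant for SEO"
passages = [
    "Yes - BERT-style understanding is still the foundation of query matching.",
    "Our agency delivers synergistic next-gen SEO solutions for enterprises.",
    "Write naturally, answer the question directly, and structure for extraction.",
]

# Encode query and passages into vectors, then compare by cosine similarity.
q_emb = model.encode(query, convert_to_tensor=True)
p_emb = model.encode(passages, convert_to_tensor=True)
scores = util.cos_sim(q_emb, p_emb)[0]

for passage, score in sorted(zip(passages, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.3f}  {passage}")
```

Passages that actually address the question should outrank keyword-stuffed ones - the keyword-to-meaning shift this thread keeps describing, reduced to one number.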
Research perspective on the evolution.
Where BERT fits now:
BERT was foundational - it taught the industry that bidirectional context understanding works. Google hasn’t “replaced” BERT; they’ve evolved it.
The evolution: BERT (2018) → T5 (2019) → MUM (2021) → the Gemini-era models behind AI Overviews today.
For Google Search specifically:
Google uses multiple models in their ranking stack: RankBrain (query interpretation), neural matching (concept matching), BERT (context), and MUM (multimodal understanding).
What this means for you:
The specific model doesn’t matter for your strategy. What matters is that all these models parse natural language, weigh context, and match meaning rather than exact keywords.
Optimize for these principles, not for specific model names.
Technical writing perspective.
What changed in my writing from BERT to AI era:
BERT era focus: natural phrasing, full context, and matching the intent behind queries.
Added for AI era: a direct answer up front, self-contained sections, and summaries an AI can lift cleanly.
What stayed the same: clear, expert writing aimed at real questions.
My practical workflow: write naturally first, then restructure for extraction.
The BERT principles are the foundation. AI optimization is the enhancement layer.
Practical consultant perspective.
What I tell clients about BERT:
“Don’t worry about BERT specifically. Focus on these principles that all modern search systems share…”
The timeless principles: natural language, clear structure, direct answers, demonstrated expertise.
What’s changed for AI:
Added emphasis on extractable summaries, citable statements, and sections that stand alone.
The bottom line:
“BERT optimization” was marketing speak for “write naturally and answer questions.” That still applies. You’re just adding AI extraction optimization on top now.
Data perspective on BERT-related changes.
Tracking content performance across eras:
We tracked 1,000 pieces of content from 2019-2025:
Comparing the BERT era (2019-2021) against the MUM/AI era (2021-2025), the pattern was consistent:
Natural language writing (the BERT principle) remains foundational. But structure for AI extraction provides additional lift.
Practical implication:
Don’t abandon BERT principles. Build on them with AI-friendly structure.
What we use:
Am I Cited, to track which content formats get cited most by AI. It helps identify which structures work beyond natural language alone.
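You don’t need a dedicated tool to prototype the core measurement. A toy sketch (the collected answers and format labels below are purely illustrative, not real data) that tallies which of your content formats earn citations:

```python
from collections import Counter

# Pairs of (AI answer text you've collected, format of your page it could cite).
# Both the answer snippets and the format labels are made-up illustrations.
observations = [
    ("According to example.com, the direct answer is ...", "qa-structured"),
    ("example.com's comparison table shows ...", "table"),
    ("One long narrative source argues ...", "essay"),
    ("example.com lists the steps as ...", "qa-structured"),
]

domain = "example.com"

# Count a citation whenever an answer mentions your domain, grouped by format.
cited = Counter(fmt for text, fmt in observations if domain in text.lower())
total = Counter(fmt for _, fmt in observations)

for fmt in total:
    print(f"{fmt}: {cited[fmt]}/{total[fmt]} answers cited")
```

Crude, but it surfaces the same signal: which structures get lifted into answers and which get ignored.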
This cleared up my confusion. Summary:
Is BERT still relevant?
Yes, but as a foundation, not a specific optimization target. The principles BERT represented (natural language, context, intent) are still crucial.
What’s changed: an extraction/citation layer now sits on top of the understanding layer.
What I’m doing: keeping the natural-language foundation and adding AI-friendly structure - direct answers, self-contained sections, clear headings.
The mental model:
BERT = foundation (understanding)
GPT = layer on top (extraction and citation)
Both reward the same core qualities. AI just adds structure requirements.
Thanks everyone - much clearer now.