Our support content gets zero AI citations - what are we doing wrong?
Community discussion on optimizing support content for AI visibility. Support and content teams share strategies for making help documentation citable by AI search engines.
Something’s changed in our support queue. Over the last 6 months, I’ve noticed:
The shifts we’re seeing:
Examples:
My questions:
Jennifer, this is happening across the industry. We’ve been studying it.
The new customer journey:
Old: Problem → Google → Company Help Center → Contact Support
New: Problem → ChatGPT → (Maybe) Company Help Center → Contact Support
What this changes:
The data from our support:
| Metric | 2024 | 2025 | Change |
|---|---|---|---|
| Total tickets | 10,000 | 8,500 | -15% |
| Complex tickets | 3,000 | 4,500 | +50% |
| Avg handle time | 8 min | 12 min | +50% |
| First contact resolution | 75% | 65% | -10 pts |
Fewer tickets, but each one takes longer because the easy ones are gone.
That data matches our experience. The +50% on complex tickets is real.
How are you handling the misinformation cases? When customers say “ChatGPT told me…” and it’s wrong?
Handling AI misinformation:
Our process:
We created an “AI misconception log” that agents add to when they see patterns. Common ones get escalated to marketing/content to address. (A minimal sketch of such a log is below.)
Examples we’ve addressed:
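Here’s a minimal sketch of what such a log could look like in code, assuming a simple in-memory store; the field names and the escalation threshold of 3 are illustrative, not how any particular team implements it:

```python
from collections import Counter
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Misconception:
    """One agent-reported case of an AI telling a customer something wrong."""
    claim: str        # what the customer says the AI told them
    correction: str   # the accurate answer the agent gave
    source: str       # customer-reported, e.g. "ChatGPT" or "Perplexity"
    reported: date = field(default_factory=date.today)


class MisconceptionLog:
    ESCALATION_THRESHOLD = 3  # illustrative: escalate after 3 reports of the same claim

    def __init__(self) -> None:
        self.entries: list[Misconception] = []

    def add(self, entry: Misconception) -> None:
        self.entries.append(entry)

    def recurring(self) -> list[tuple[str, int]]:
        """Claims seen often enough to hand to the content/marketing team."""
        counts = Counter(e.claim for e in self.entries)
        return [(claim, n) for claim, n in counts.most_common()
                if n >= self.ESCALATION_THRESHOLD]
```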
Knowledge management perspective on the AI customer service shift:
Your help content is now training AI.
What’s in your help center, documentation, and FAQs is what AI learns about your product. If your content is:
The solution:
Treat your help content as AI training data. It needs to be:
What we changed:
We added sections like:
This helps AI provide accurate information BEFORE customers contact support.
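One common tactic for making help content machine-readable (an assumption on my part, not necessarily what this team did) is publishing FAQs with schema.org FAQPage markup, so crawlers and AI systems can parse question/answer pairs unambiguously. A short sketch that emits the JSON-LD; the Q&A content is a placeholder:

```python
import json


def faq_jsonld(faqs: list[tuple[str, str]]) -> str:
    """Render question/answer pairs as schema.org FAQPage JSON-LD for a help page."""
    doc = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in faqs
        ],
    }
    return json.dumps(doc, indent=2)


# Placeholder Q&A; real pages would cover pricing, limits, compatibility, etc.
print(faq_jsonld([
    ("Does the Pro plan include API access?",
     "Yes, API access is included on all Pro plans."),
]))
```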
Operations perspective on the shift:
Staffing implications:
If simple tickets decrease and complex tickets increase, you need:
How we adapted:
The cost reality:
Lower volume at higher complexity nets out to roughly the same total cost, BUT customer satisfaction increased: fewer simple queries means shorter queue waits for the complex ones.
Content strategy to reduce AI misinformation:
The problem: AI is a black box - you can’t directly correct it. But you CAN influence what it learns.
What we do:
Monitoring:
We use Am I Cited to track what AI tells users about us. When we spot misinformation:
It’s not instant, but you can systematically correct AI’s understanding of your product.
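Am I Cited does this tracking as a product. If you wanted a rough manual spot check instead, something like the sketch below could work; it assumes the openai Python client with an API key in the environment, and the brand name “Acme”, the questions, and the expected facts are all placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Questions customers plausibly ask an assistant before contacting support,
# each paired with a fact a correct answer should contain (placeholders).
CHECKS = [
    ("Does Acme support SSO?", "SAML"),
    ("What is Acme's refund window?", "30 days"),
]


def spot_check(model: str = "gpt-4o-mini") -> list[str]:
    """Return the questions whose answers omit the expected fact."""
    flagged = []
    for question, expected in CHECKS:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        answer = resp.choices[0].message.content or ""
        if expected.lower() not in answer.lower():
            flagged.append(question)
    return flagged


if __name__ == "__main__":
    for q in spot_check():
        print("Possible misinformation on:", q)
```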
We actually built AI into our support workflow. Here’s the impact:
AI-assisted support model:
Results:
| Metric | Before AI Bot | After AI Bot |
|---|---|---|
| Human ticket volume | 100% | 40% |
| Customer satisfaction | 78% | 82% |
| First response time | 4 hours | Instant |
| Human handle time | 8 min | 15 min |
The key insight:
By the time a customer reaches a human, they’ve already:
Human agents start with full context. The tickets are more complex, but the conversations are more efficient.
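A rough sketch of that handoff pattern, assuming a confidence score from whatever answering model you use; the `route` function, the 0.8 threshold, and the field names are illustrative:

```python
from dataclasses import dataclass, field


@dataclass
class BotAttempt:
    """What the AI bot did before (possibly) escalating."""
    question: str
    suggested_answer: str
    confidence: float                     # 0..1 score from the answer model
    steps_tried: list[str] = field(default_factory=list)


def route(attempt: BotAttempt, threshold: float = 0.8) -> dict:
    """Let the bot resolve confident cases; escalate the rest to a human
    along with the full bot context so the agent doesn't start from zero."""
    if attempt.confidence >= threshold:
        return {"handled_by": "bot", "answer": attempt.suggested_answer}
    return {
        "handled_by": "human",
        "context": {
            "question": attempt.question,
            "bot_answer": attempt.suggested_answer,
            "already_tried": attempt.steps_tried,  # avoids re-walking basics
        },
    }
```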
Customer research perspective:
We surveyed 500 customers about their AI usage before contacting support:
| Behavior | Percentage |
|---|---|
| Used AI first | 62% |
| Tried AI-suggested solutions | 48% |
| AI answered their question | 35% |
| AI gave wrong information | 18% |
| Mentioned AI to support agent | 41% |
The “AI-first” customer segment:
They’re typically:
Implication:
When they reach you, they’re often further along in frustration but also better at describing the problem.
Training perspective on handling AI-influenced customers:
New skills our agents need:
Training modules we added:
The cultural shift:
Agents now see themselves as part of a feedback loop: their observations about AI misinformation flow to the content team, which updates the docs, which in turn improves AI accuracy.
This thread validated what I suspected and gave me actionable strategies. Key takeaways:
The reality:
Strategies to implement:
Short-term:
Medium-term:
Long-term:
The survey data showing 62% use AI first is significant. This isn’t a trend - it’s the new normal.
Thanks everyone for the operational and strategic insights.