Let me help you talk to legal:
Legal’s concerns (understandable, but misplaced here):
- “They’re using our content without permission”
- “We lose control of how content is used”
- “We might have liability if AI misrepresents us”
The responses:
1. Content usage:
Our content is publicly accessible. Robots.txt is a voluntary request, not a legal barrier (see the sketch after this list). Content already in training sets predates any blocking we could do now, so blocking today doesn’t remove existing data.
2. Control:
We never had control over how people use publicly available content. AI citation is functionally similar to being quoted in an article. We want citations - it’s visibility.
3. Liability:
AI providers bear responsibility for their own outputs. There’s no established case law creating liability for cited sources, and not being cited doesn’t protect us - it just makes us invisible.
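If legal wants the “allow” position to be explicit rather than implicit, you can put it in robots.txt so it’s auditable. A minimal sketch is below; the user-agent names (GPTBot, ClaudeBot, PerplexityBot) are illustrative assumptions - verify each provider’s currently published crawler strings before shipping this.

```
# Illustrative robots.txt sketch: explicitly allow known AI crawlers.
# User-agent names are examples; confirm them against each provider's
# crawler documentation before relying on this file.

User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# All other crawlers keep the existing default policy
User-agent: *
Allow: /
```

Naming the crawlers individually (rather than relying only on `User-agent: *`) also gives legal a concrete artifact to review and revise if the policy changes later.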
The business case:
- Blocking: Lose visibility, protect nothing
- Allowing: Gain visibility, risk nothing new
Proposed policy language:
“We allow AI crawler access to maximize visibility for our publicly available content. We reserve the right to revise this policy if content licensing frameworks evolve.”
This gives legal a policy on paper while keeping you visible.