
From blocking bots to optimizing for LLMs: How the web flipped its script

Blog post from WorkOS

Post Details
Company: WorkOS
Date Published: -
Author: Maria Paktiti
Word Count: 1,245
Language: English
Hacker News Points: -
Summary

The web is undergoing a significant shift from keeping bots out to actively welcoming them, as companies recognize that Large Language Models (LLMs) such as ChatGPT and Claude now help users discover and trust products. Historically, bots were treated as threats: CAPTCHAs, rate limits, and robots.txt were deployed to defend against scraping and fraud.

Because LLMs now serve as important gateways to the web, a new era of optimization is emerging in which companies make their content machine-readable through structured markup, semantic consistency, and AI-friendly APIs. This transition is a strategic move from gatekeeping to selective openness: the web stays human-readable while becoming machine-welcoming, inviting useful AI agents in while keeping defenses up against malicious bots. The future of web development lies in designing experiences for both humans and machines, so that LLMs can accurately understand and convey a company's value proposition.
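The "selective openness" the post describes often begins with robots.txt itself: instead of blocking all crawlers, a site can admit known AI agents while still walling off sensitive paths. A minimal sketch of that idea is below; GPTBot and ClaudeBot are the crawler user-agent tokens OpenAI and Anthropic publicly document, while the paths shown are purely illustrative:

```txt
# Admit documented AI crawlers to public content
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

# All other bots: keep private areas off limits
User-agent: *
Disallow: /admin/
Disallow: /account/
```

The same spirit extends to the "structured markup" the post mentions, e.g. Schema.org annotations that let a crawler parse a product page unambiguously rather than scraping free-form HTML.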