robots-txt
advertools - online marketing productivity and analysis tools
A simple and flexible web crawler that respects robots.txt policies and crawl delays.
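Respecting robots.txt rules and crawl delays, as the crawler above does, can be sketched with Python's standard `urllib.robotparser` (a minimal illustration, not that project's actual code):

```python
# Minimal sketch: honor robots.txt rules and Crawl-delay using the
# standard library's urllib.robotparser.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
# parse() accepts the robots.txt body as a list of lines; in a real
# crawler you would fetch https://example.com/robots.txt first.
rp.parse([
    "User-agent: *",
    "Crawl-delay: 5",
    "Disallow: /private/",
])

print(rp.can_fetch("mybot", "https://example.com/private/page"))  # False
print(rp.can_fetch("mybot", "https://example.com/public/page"))   # True
print(rp.crawl_delay("mybot"))  # 5 -> sleep this long between requests
```

A polite crawler would call `time.sleep(rp.crawl_delay(agent))` between requests to the same host.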
🤖 The largest directory for AI-ready documentation and tools implementing the proposed llms.txt standard
Tame the robots crawling and indexing your Nuxt site.
The robots.txt exclusion protocol implementation for the Go language
A simple but powerful web crawler library for .NET
Determine if a page may be crawled from robots.txt, robots meta tags and robot headers
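Beyond robots.txt, crawlability can also be restricted per page via `<meta name="robots">` tags and `X-Robots-Tag` response headers. A hypothetical sketch of checking those two signals (function names are illustrative, not that library's API):

```python
import re

def header_allows_indexing(headers: dict) -> bool:
    """Return False if an X-Robots-Tag response header forbids indexing."""
    tag = headers.get("X-Robots-Tag", "").lower()
    return not any(d in tag for d in ("noindex", "none"))

def meta_allows_indexing(html: str) -> bool:
    """Crude check for a <meta name="robots" content="..."> directive.

    A real implementation would use an HTML parser and handle
    per-bot meta names like "googlebot"; this regex is a sketch.
    """
    m = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        html, re.IGNORECASE)
    if not m:
        return True  # no robots meta tag means no restriction
    directives = m.group(1).lower()
    return "noindex" not in directives and "none" not in directives

print(header_allows_indexing({"X-Robots-Tag": "noindex, nofollow"}))  # False
print(meta_allows_indexing('<meta name="robots" content="index,follow">'))  # True
```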
A set of reusable Java components that implement functionality common to any web crawler
Ultimate Website Sitemap Parser
Opt-out tool to check copyright reservations in a way that even machines can understand.
Open-source Python-based SEO web crawler
Node.js robots.txt parser with support for wildcard (*) matching.
Known tags and settings suggested to opt out of having your content used for AI training.
Makes it easy to add robots.txt, sitemap and web app manifest during build to your Astro app.
Parse any sitemap in Node.js
grobotstxt is a native Go port of Google's robots.txt parser and matcher library.
Gatsby plugin that automatically creates robots.txt for your site
🤖 A curated list of websites that restrict access to AI Agents, AI crawlers and GPTs
Simple robots.txt template that keeps unwanted robots out (Disallow) and allow-lists legitimate user-agents (Allow). Useful for all websites.
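A minimal template along those lines might look like this (the bot names are illustrative examples, not a recommendation):

```
# Explicitly allow a known-good crawler
User-agent: Googlebot
Allow: /

# Block one unwanted bot by name (example name)
User-agent: BadBot
Disallow: /

# Default policy for all other user-agents
User-agent: *
Disallow: /admin/
Allow: /
```

Per the robots.txt convention, a crawler uses the most specific `User-agent` group that matches it and ignores the rest.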