robots-txt
- advertools: online marketing productivity and analysis tools
- A simple and flexible web crawler that follows robots.txt policies and crawl delays
- Tame the robots crawling and indexing your Nuxt site
- An implementation of the robots.txt exclusion protocol for the Go language
- A simple but powerful web crawler library for .NET
- A set of reusable Java components that implement functionality common to any web crawler
- Determine whether a page may be crawled, based on robots.txt, robots meta tags, and robots headers (a parsing sketch follows this list)
- Ultimate Website Sitemap Parser
- Opt-out tool for checking copyright reservations in a machine-readable way
- 🤖 The largest directory of AI-ready documentation and tools implementing the proposed llms.txt standard
- Open-source, Python-based SEO web crawler
- Node.js robots.txt parser with support for wildcard (*) matching
- Known tags and settings for opting out of having your content used for AI training
- Makes it easy to add a robots.txt, sitemap, and web app manifest to your Astro app at build time
- grobotstxt: a native Go port of Google's robots.txt parser and matcher library
- Gatsby plugin that automatically creates a robots.txt for your site
- 🤖 A curated list of websites that restrict access to AI agents, AI crawlers, and GPTs
- Simple robots.txt template that keeps unwanted robots out (disallow) and whitelists (allows) legitimate user agents; useful for any website (see the template sketch after this list)
- ScrapeGPT: a Retrieval-Augmented Generation (RAG) based Telegram bot that scrapes and analyzes websites, then answers natural-language questions about the scraped content
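To make the whitelist-style template above concrete, here is a minimal sketch of that pattern. It is an illustration, not any linked project's actual file; the allowed user-agent name is a placeholder for whichever crawlers you want to let in.

```
# Block all crawlers by default...
User-agent: *
Disallow: /

# ...then whitelist a legitimate crawler site-wide,
# asking it to pause between requests.
User-agent: Googlebot
Allow: /
Crawl-delay: 5
```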
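And as a rough illustration of what the robots.txt parsers and checkers above do, this sketch evaluates that template with Python's standard-library urllib.robotparser; the bot names and URLs are made up:

```python
from urllib import robotparser

# The whitelist template from above, inlined so the example is self-contained.
ROBOTS_TXT = """\
User-agent: *
Disallow: /

User-agent: Googlebot
Allow: /
Crawl-delay: 5
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Unknown agents fall through to the catch-all (*) group and are blocked.
print(rp.can_fetch("SomeBot", "https://example.com/page"))    # False

# The whitelisted agent matches its own group and may crawl...
print(rp.can_fetch("Googlebot", "https://example.com/page"))  # True

# ...and should honor the declared delay between requests.
print(rp.crawl_delay("Googlebot"))                            # 5
```

Against a live site you would call rp.set_url("https://example.com/robots.txt") and rp.read() instead of parse(). Note that urllib.robotparser only does literal path-prefix matching; wildcard (*) rules are exactly what dedicated parsers such as the Node.js and Go libraries above add.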