Search engines reference your robots.txt file to determine which parts of your website they should crawl. This can be useful for keeping certain content, such as an offer hidden behind a form, from appearing in search engine results.
This article covers how robots.txt files work, and how to update the robots.txt file in HubSpot.
Please note: Google and other search engines can't retroactively remove pages from results after you implement the robots.txt file method. While this tells bots not to crawl a page, search engines can still index your content if, for example, there are inbound links to your page from other websites. If your page has already been indexed and you'd like it removed from search engine results, use the "No Index" meta tag method instead.
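For reference, the "No Index" meta tag is a single line placed in the page's head. A minimal example (the tag below is standard HTML, not specific to HubSpot):

```
<!-- Asks search engines to drop this page from their index -->
<meta name="robots" content="noindex">
```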
How robots.txt files work
Your robots.txt file tells search engines how to crawl pages hosted on your website.
The two main components of your robots.txt file are:
- User-agent – Defines the search engine or web bot that a rule applies to. An asterisk (*) can be used as a wildcard with User-agent to include all search engines.
- Disallow – Tells a search engine not to crawl a file, page, or directory.
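Put together, a minimal robots.txt file combining these two components might look like the following. The paths shown are examples only, not paths that exist on your site:

```
# Apply the rules below to all search engine bots
User-agent: *
# Block a single page and an entire directory from being crawled
Disallow: /hidden-offer.html
Disallow: /private/
```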
To learn more about how to set up your robots.txt files, check out the robots.txt documentation. This resource includes more details about how robots.txt files work and examples. You can also use a robots.txt generator tool to create your file.
Update your robots.txt file in HubSpot
- In your HubSpot account, click the settings icon in the main navigation bar.
- In the left sidebar menu, click Marketing, then click Web pages.
- Use the Modifying dropdown menu to select a domain to update.
- Scroll down to the Robots.txt section and make your changes to your robots.txt file in the text field.
- Click Save.