Customize your robots.txt file

Last updated: February 8, 2019

Applies to:

Marketing Hub Professional, Enterprise
Legacy Marketing Hub Basic
HubSpot CMS

Search engines reference your robots.txt file to determine which parts of your website they should crawl. This can be useful for keeping certain content, such as a content offer hidden behind a form, from being returned in search engine results.

Please note: Google and other search engines can't retroactively remove pages from search results after you implement the robots.txt file method. While robots.txt tells bots not to crawl a page, search engines can still index your content if, for example, there are inbound links to your page from other websites. If your page has already been indexed and you'd like it removed from search results, use the "No Index" meta tag method instead.
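For reference, the "No Index" meta tag is a single line placed inside the page's `<head>` element; unlike robots.txt, it explicitly asks search engines to drop the page from their index:

```
<meta name="robots" content="noindex">
```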

How robots.txt files work

Your robots.txt file tells search engines how to crawl pages hosted on your website. The two main components of your robots.txt file are:

  • User-agent: Defines the search engine or web bot that a rule applies to. An asterisk (*) can be used as a wildcard with User-agent to include all search engines. 
  • Disallow: Advises a search engine not to crawl and index a file, page, or directory.
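To see how a crawler interprets these two directives, here is a short sketch using Python's standard-library `urllib.robotparser` module. The rules and URLs are illustrative examples, not part of any real site:

```python
from urllib.robotparser import RobotFileParser

# A minimal robots.txt: the wildcard user-agent applies the rule to all
# bots, and Disallow blocks the /private/ directory from being crawled.
rules = """
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# can_fetch() is the question a well-behaved crawler asks before
# requesting a URL: is this user-agent allowed to fetch this path?
print(parser.can_fetch("Googlebot", "https://example.com/private/offer.pdf"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/blog/post"))          # True
```

Anything not matched by a Disallow rule is allowed by default, which is why the blog URL remains crawlable.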

To learn more about how to set up your robots.txt file for Google search results, check out Google's developer documentation. You can also use a robots.txt generator tool to create your file.

Please note: to block a file in your file manager, customize the file so it's hosted on one of your domains. Then add the file's URL to your robots.txt file.
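For example, once the file is hosted on your domain, the robots.txt entry would look like the following (the `/hubfs/content-offer.pdf` path is a hypothetical placeholder; use the actual path of your hosted file):

```
User-agent: *
Disallow: /hubfs/content-offer.pdf
```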

Update your robots.txt file in HubSpot

  • In your HubSpot account, click the settings icon in the main navigation bar.

  • In the left sidebar menu, navigate to Website > Pages.

  • Use the Modifying dropdown menu to select a domain to update.

  • Click the SEO & Crawlers tab.
  • Scroll down to the Robots.txt section and make your changes to your robots.txt file in the text field.

  • Click Save.