
Block pages or blog posts from being indexed by search engines

Last updated: September 11, 2018

Applies to:

Marketing Hub Basic, Professional, Enterprise
There are a few options to prevent search engines from indexing specific pages on your website. We recommend carefully researching each of these options before implementing any changes to ensure that only the desired pages are blocked from search engines.

Please note: the "No Index" meta tag method should not be combined with the robots.txt file method. Search engines need to crawl the page in order to see the "No Index" meta tag, and a robots.txt file prevents crawling altogether.

Robots.txt file

Your robots.txt file is a file on your website that search engine crawlers read to see which pages they should and should not crawl. Learn how to set up your robots.txt file in HubSpot.

Google and other search engines can't retroactively remove pages from results after you implement the robots.txt file method. While this tells bots not to crawl a page, search engines can still index your content if, for example, there are inbound links to your page from other websites. If your page has already been indexed and you'd like it to be removed from search engines retroactively, you'll likely want to use the "No Index" meta tag method below. 
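For example, a robots.txt file that blocks all crawlers from a hypothetical /private-page/ URL and everything under a /landing-pages/ directory might look like the following (the paths are placeholders for your own URLs):

 User-agent: *
 Disallow: /private-page/
 Disallow: /landing-pages/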

"No index" meta tag

A "no index" meta tag is a string of code entered into the head section of a page's HTML that tells search engines not to index the page.

 <meta name="robots" content="noindex">
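For context, here is a minimal sketch of where the tag sits within a page's head HTML (the page title is a placeholder):

 <head>
   <title>Example page</title>
   <meta name="robots" content="noindex">
 </head>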

(Screenshot: editing the page's head HTML in HubSpot.)

Google Webmaster Tools

If you have a Google Webmaster Tools account, you may submit a URL to be removed from Google search results.

Please note: this will only apply to Google's search results.

If you wish to block files in your HubSpot file manager, such as a PDF document, from being indexed by search engines, you will need to select a connected subdomain for the file(s) and block the file's URL from crawlers.
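For example, a robots.txt entry that blocks crawlers from a hypothetical PDF in the file manager might look like the following (replace the path with your file's actual URL path):

 User-agent: *
 Disallow: /hubfs/example-document.pdf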

How HubSpot handles requests from a user agent

If you set a user agent string (such as googlebot) to test crawl your website and see an access denied message, this is expected behavior. Google is still able to crawl and index your site.

You see this message because HubSpot only allows requests from the googlebot user agent when they come from IP addresses owned by Google. To protect HubSpot-hosted sites from attackers or spoofers, requests from other IP addresses are denied. HubSpot does this for other search engine crawlers as well, such as BingBot, MSNBot, and Baiduspider.
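As a rough sketch of this kind of check, you can verify whether a request claiming to be googlebot actually comes from Google by running a reverse DNS lookup on the requesting IP address and confirming that the resulting hostname resolves back to the same IP (the IP below is only an illustration):

 host 66.249.66.1
 # points to a hostname such as crawl-66-249-66-1.googlebot.com
 host crawl-66-249-66-1.googlebot.com
 # resolves back to 66.249.66.1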
