100% Fixed: How to stop Google indexing subdomain names?

Preventing Content from Showing in Search Results Using Robots.txt

To keep new content from appearing in search results, add its URL slug to your robots.txt file. Search engines read this file to determine which parts of a website they should crawl and index.

If search engines have already indexed the content, add a “noindex” meta tag to the page’s HTML head section instead. This tag instructs search engines not to display the page in search results.
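
As a minimal sketch, the tag sits inside the page’s HTML head (the surrounding markup here is only illustrative):

  <head>
    <!-- Tells search engine bots not to show this page in search results -->
    <meta name="robots" content="noindex">
  </head>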

Note that the robots.txt file can only block content hosted on a domain connected to HubSpot. For more information on customizing file URLs, see the files tool.

Here’s how you can utilize a robots.txt file:

  1. Use a Robots.txt File
    • To prevent content that hasn’t been indexed by search engines from appearing in search results, you can add it to a robots.txt file.
  2. Edit Your Robots.txt File in HubSpot
    • Log in to your HubSpot account and click on the settings icon in the main navigation bar.
    • In the left sidebar menu, go to Website > Pages.
    • Choose the domain for which you want to edit the robots.txt file:
      • To edit the robots.txt file for all connected domains, click the “Choose a domain to edit its settings” dropdown menu and select “Default settings for all domains.”
      • To edit the robots.txt file for a specific domain, click the “Choose a domain to edit its settings” dropdown menu and select the domain. If necessary, click “Override default settings” to customize the file for this domain.
    • Click the “SEO & Crawlers” tab.
    • In the “Robots.txt” section, modify the content of the file. A typical robots.txt file consists of two parts:
      • User-agent: This defines the search engine or web bot to which a rule applies. By default, it includes all search engines, denoted with an asterisk (*), but you can specify particular search engines here.
      • Disallow: This instructs a search engine not to crawl or index any files or pages under a specific URL slug. For each page you want to block, add a line in the form “Disallow: /url-slug” (e.g., www.cloudtimon.com/welcome would be entered as “Disallow: /welcome”), as in the example after this list.
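
Putting the two parts together, a robots.txt file for the example above could look like this sketch (the /welcome slug on www.cloudtimon.com comes from the example; any additional slugs would follow the same pattern):

  # Rule applies to all search engine bots
  User-agent: *
  # Do not crawl www.cloudtimon.com/welcome
  Disallow: /welcome

Each additional page you want to block gets its own Disallow line under the relevant User-agent.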

