How do I get my site on Google?


Inclusion in Google’s search results is free and easy; you don’t even need to submit your site to Google. Google is a fully automated search engine that uses web crawlers to explore the web constantly, looking for sites to add to our index. In fact, the vast majority of sites listed in our results aren’t manually submitted for inclusion, but found and added automatically when we crawl the web. Learn how Google discovers, crawls, and serves web pages.

We offer webmaster guidelines for building a Google-friendly website. While there’s no guarantee that our crawlers will find a particular site, following these guidelines should help make your site appear in our search results.

Google Search Console provides tools to help you submit your content to Google and monitor how you’re doing in Google Search. If you want, Search Console can even send you alerts on critical issues that Google encounters with your site. Sign up for Search Console.

Here are a few basic questions to ask yourself about your website when you get started.

  • Is my website showing up on Google?
  • Do I serve high-quality content to users?
  • Is my local business showing up on Google?
  • Is my content fast and easy to access on all devices?
  • Is my website secure?

You can find additional getting-started information at http://g.co/webmasters.

The rest of this document provides guidance on how to improve your site for search engines, organized by topic. You can download a short, printable checklist of tips from http://g.co/WebmasterChecklist.

Do you need an SEO expert?

An SEO (“search engine optimization”) expert is someone trained to improve your visibility on search engines. By following this guide, you should learn enough to be well on your way to an optimized site. In addition, you may want to consider hiring an SEO professional who can help you audit your pages.

Deciding to hire an SEO is a big decision that can potentially improve your site and save time. Make sure to research the potential advantages of hiring an SEO, as well as the damage that an irresponsible SEO can do to your site. Many SEOs and other agencies and consultants provide useful services for website owners, including:

  • Review of your site content or structure
  • Technical advice on website development: for example, hosting, redirects, error pages, use of JavaScript
  • Content development
  • Management of online business development campaigns
  • Keyword research
  • SEO training
  • Expertise in specific markets and geographies

Before beginning your search for an SEO, it’s a great idea to become an educated consumer and get familiar with how search engines work. We recommend going through this guide in its entirety.

If you’re thinking about hiring an SEO, the earlier the better. A great time to hire is when you’re considering a site redesign, or planning to launch a new site. That way, you and your SEO can ensure that your site is designed to be search engine-friendly from the bottom up. However, a good SEO can also help improve an existing site.

For a detailed rundown on the need for hiring an SEO and what to look out for, you can read our Help Center article “Do you need an SEO”.

Help Google find your content

The first step to getting your site on Google is to be sure that Google can find it. The best way to do that is to submit a sitemap. A sitemap is a file on your site that tells search engines about new or changed pages on your site. Learn more about how to build and submit a sitemap.

Google also finds pages through links from other pages. See Promote your site later in this document to learn how to encourage people to discover your site.

Tell Google which pages shouldn’t be crawled

Best Practices

For non-sensitive information, block unwanted crawling by using robots.txt

A “robots.txt” file tells search engines whether they can access, and therefore crawl, parts of your site. This file, which must be named “robots.txt”, is placed in the root directory of your site. Note that pages blocked by robots.txt may still be indexed if they are linked from elsewhere, so for sensitive pages you should use a more secure method.
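As a sketch, a hypothetical robots.txt at the root of a site could block crawling of two directories (the paths here are made up for illustration) and point crawlers at the sitemap:

```text
# Hypothetical file served at https://example.com/robots.txt
User-agent: *
Disallow: /internal-search/
Disallow: /tmp/

Sitemap: https://example.com/sitemap.xml
```

Rules apply per user agent; `User-agent: *` addresses all well-behaved crawlers at once.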

[Image: the proper location of a robots.txt file.]

You may not want certain pages of your site crawled because they might not be useful to users if found in a search engine’s search results. If you do want to prevent search engines from crawling your pages, Google Search Console has a friendly robots.txt generator to help you create this file. Note that if your site uses subdomains and you wish to have certain pages not crawled on a particular subdomain, you’ll have to create a separate robots.txt file for that subdomain. For more information on robots.txt, we suggest this Webmaster Help Center guide on using robots.txt files.

Read about several other ways to prevent content from appearing in search results.

Avoid:

  • Letting your internal search result pages be crawled by Google. Users dislike clicking a search engine result only to land on another search result page on your site.
  • Allowing URLs created as a result of proxy services to be crawled.

For sensitive information, use more secure methods

Robots.txt is not an appropriate or effective way of blocking sensitive or confidential material. It only instructs well-behaved crawlers that the pages are not for them, but it does not prevent your server from delivering those pages to a browser that requests them. One reason is that search engines could still reference the URLs you block (showing just the URL, no title or snippet) if there happen to be links to those URLs somewhere on the Internet (like referrer logs). Also, non-compliant or rogue search engines that don’t acknowledge the Robots Exclusion Standard could disobey the instructions of your robots.txt. Finally, a curious user could examine the directories or subdirectories in your robots.txt file and guess the URL of the content that you don’t want seen.

In these cases, use the noindex tag if you just want the page not to appear in Google, but don’t mind if any user with a link can reach the page. For real security, though, you should use proper authorization methods, like requiring a user password, or taking the page off your site entirely.
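For example, a page you want kept out of Google’s index, while still reachable by anyone with the link, could carry a robots noindex meta tag in its <head> (the page content here is a placeholder):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Tells compliant search engines not to show this page in results -->
    <meta name="robots" content="noindex">
    <title>Internal draft page</title>
  </head>
  <body>
    <p>Content visible to anyone with the link, but kept out of search results.</p>
  </body>
</html>
```

Note that for noindex to be seen, the page must not be blocked in robots.txt: a crawler has to fetch the page to read the tag.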

Help Google (and users) understand your content

Let Google see your page the same way a user does

When Googlebot crawls a page, it should see the page the same way an average user does. For optimal rendering and indexing, always allow Googlebot access to the JavaScript, CSS, and image files used by your website. If your site’s robots.txt file disallows crawling of these assets, it directly harms how well our algorithms render and index your content. This can result in suboptimal rankings.

Recommended action:

  • Use the URL Inspection tool. It will allow you to see exactly how Googlebot sees and renders your content, and it will help you identify and fix a number of indexing issues on your site.

Create unique, accurate page titles

A <title> tag tells both users and search engines what the topic of a particular page is. The <title> tag should be placed within the <head> element of the HTML document. You should create a unique title for each page on your site.

[Image: HTML snippet showing the title tag.]
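As an illustration (the site name is hypothetical, echoing the baseball-cards example used later in this guide), a title tag sits inside the <head> element like this:

```html
<html>
  <head>
    <title>Brandon's Baseball Cards - Buy Cards, Baseball News, Card Prices</title>
  </head>
  <body>
    <!-- page content -->
  </body>
</html>
```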

Create good titles and snippets in search results

If your document appears in a search results page, the contents of the title tag may appear in the first line of the results (if you’re unfamiliar with the different parts of a Google search result, you might want to check out the anatomy of a search result video).

The title for your homepage can list the name of your website/business and could include other bits of important information like the physical location of the business or maybe a few of its main focuses or offerings.

Best Practices

Accurately describe the page’s content

Choose a title that reads naturally and effectively communicates the topic of the page’s content.

Avoid:

  • Choosing a title that has no relation to the content on the page.
  • Using default or vague titles like “Untitled” or “New Page 1”.

Create unique titles for each page

Each page on your site should ideally have a unique title, which helps Google know how the page is distinct from the others on your site. If your site uses separate mobile pages, remember to use good titles on the mobile versions too.

Avoid:

  • Using a single title across all of your site’s pages or a large group of pages.

Use brief, but descriptive titles

Titles can be both short and informative. If the title is too long or otherwise deemed less relevant, Google may show only a portion of it or one that’s automatically generated in the search result. Google may also show different titles depending on the user’s query or device used for searching.

Avoid:

  • Using extremely lengthy titles that are unhelpful to users.
  • Stuffing unneeded keywords in your title tags.

Use the “description” meta tag

A page’s description meta tag gives Google and other search engines a summary of what the page is about. A page’s title may be a few words or a phrase, whereas a page’s description meta tag might be a sentence or two or even a short paragraph. Like the <title> tag, the description meta tag is placed within the <head> element of your HTML document.

[Image: HTML snippet showing the <meta> description tag.]
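A hypothetical description meta tag, placed in the <head> alongside the title, might look like this (site name and wording are illustrative):

```html
<head>
  <title>Brandon's Baseball Cards - Buy Cards, Baseball News, Card Prices</title>
  <meta name="description" content="Brandon's Baseball Cards provides a large selection of vintage and modern baseball cards for sale.">
</head>
```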

What are the merits of description meta tags?

Description meta tags are important because Google might use them as snippets for your pages. Note that we say “might” because Google may choose to use a relevant section of your page’s visible text if it does a good job of matching up with a user’s query. Adding description meta tags to each of your pages is always a good practice in case Google cannot find a good selection of text to use in the snippet. The Webmaster Central Blog has informative posts on improving snippets with better description meta tags and better snippets for your users. We also have a handy Help Center article on how to create good titles and snippets.

[Image: sample plain blue link search result for “baseball cards”.]

Best Practices

Accurately summarize the page content

Write a description that would both inform and interest users if they saw your description meta tag as a snippet in a search result. While there’s no minimum or maximum length for the text in a description meta tag, we recommend making sure that it’s long enough to be fully shown in Search (note that users may see different sized snippets depending on how and where they search), and that it contains all the relevant information users would need to determine whether the page will be useful and relevant to them.

Avoid:

  • Writing a description meta tag that has no relation to the content on the page.
  • Using generic descriptions like “This is a web page” or “Page about baseball cards”.
  • Filling the description with only keywords.
  • Copying and pasting the entire content of the document into the description meta tag.

Use unique descriptions for each page

Having a different description meta tag for each page helps both users and Google, especially in searches where users may bring up multiple pages on your domain (for example, searches using the site: operator). If your site has thousands or even millions of pages, hand-crafting description meta tags probably isn’t feasible. In this case, you could automatically generate description meta tags based on each page’s content.
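As a rough sketch of that automated approach (the function name and the 155-character limit are assumptions for illustration, not a Google recommendation), you could derive each page’s description from its leading text, cutting at a word boundary:

```python
import re

def make_description(page_text, max_len=155):
    """Sketch: derive a description meta tag value from a page's text."""
    # Collapse runs of whitespace into single spaces
    text = re.sub(r"\s+", " ", page_text).strip()
    if len(text) <= max_len:
        return text
    # Truncate at a word boundary and mark the cut
    return text[:max_len].rsplit(" ", 1)[0] + "…"

print(make_description("Brandon's Baseball Cards provides a large selection "
                       "of vintage and modern baseball cards for sale."))
```

A real generator should pick out the page’s most relevant summary rather than blindly truncating, but even this simple scheme produces unique, page-specific descriptions.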

Avoid:

  • Using a single description meta tag across all of your site’s pages or a large group of pages.

Use heading tags to emphasize important text

Use meaningful headings to indicate important topics, and help create a hierarchical structure for your content, making it easier for users to navigate through your document.

Best Practices

Imagine you’re writing an outline

Similar to writing an outline for a large paper, put some thought into what the main points and sub-points of the content on the page will be and decide where to use heading tags appropriately.
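For example, an outline expressed with heading tags (the topics here are hypothetical) could look like this, with <h1> for the main point and <h2>/<h3> for sub-points:

```html
<body>
  <h1>Vintage Baseball Cards</h1>
  <h2>Cards from the 1950s</h2>
  <h3>Rookie cards</h3>
  <h3>Team sets</h3>
  <h2>Cards from the 1960s</h2>
</body>
```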

Avoid:

  • Placing text in heading tags that wouldn’t be helpful in defining the structure of the page.
  • Using heading tags where other tags like <em> and <strong> may be more appropriate.
  • Erratically moving from one heading tag size to another.

Use headings sparingly across the page

Use heading tags where it makes sense. Too many heading tags on a page can make it hard for users to scan the content and determine where one topic ends and another begins.

Avoid:

  • Excessive use of heading tags on a page.
  • Very long headings.
  • Using heading tags only for styling text and not presenting structure.

Add structured data markup

Structured data is code that you can add to your site’s pages to describe your content to search engines, so they can better understand what’s on your pages. Search engines can use this understanding to display your content in useful (and eye-catching!) ways in search results. That, in turn, can help you attract just the right kind of customers for your business.

[Image: a search result enhanced by review stars using structured data.]

For example, if you’ve got an online store and mark up an individual product page, this helps us understand that the page features a bike, its price, and customer reviews. We may display that information in the snippet for search results for relevant queries. We call these “rich results.”

In addition to using structured data markup for rich results, we may use it to serve relevant results in other formats. For instance, if you’ve got a brick-and-mortar store, marking up the opening hours allows your potential customers to find you exactly when they need you, and inform them if your store is open/closed at the time of searching.

[Image: Google Search result for ice cream stores, showing rich results enabled by structured data.]

You can mark up many business-relevant entities:

  • Products you’re selling
  • Business location
  • Videos about your products or business
  • Opening hours
  • Events listings
  • Recipes
  • Your company logo, and many more!

See a full list of supported content types on our developer site.

We recommend that you use structured data, in any of the supported notations, to describe your content. You can add the markup to the HTML code of your pages, or use tools like Data Highlighter and Markup Helper (see the Best Practices section for more information about them).
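A minimal sketch of such markup in the JSON-LD notation, continuing the bike example above (the product name and price are hypothetical), could be placed in the page’s HTML like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Road Bike",
  "description": "A hypothetical product used for illustration.",
  "offers": {
    "@type": "Offer",
    "price": "499.99",
    "priceCurrency": "USD"
  }
}
</script>
```

The properties required for a given rich result vary by content type, so check the structured data documentation for the type you’re marking up.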

Best Practices

Check your markup using the Rich Results test

Once you’ve marked up your content, you can use the Google Rich Results test to make sure that there are no mistakes in the implementation. You can either enter the URL where the content is, or copy the actual HTML which includes the markup.

Avoid:

  • Using invalid markup.

Use Data Highlighter

If you want to give structured markup a try without changing the source code of your site, you can use Data Highlighter, a free tool integrated into Search Console that supports a subset of content types.

If you’d like to get the markup code ready to copy and paste to your page, try the Markup Helper tool.

Avoid:

  • Changing the source code of your site when you are unsure about implementing markup.

Keep track of how your marked up pages are doing

The various rich result reports in Search Console show you how many pages on your site we’ve detected with a specific type of markup, how many times they appeared in search results, and how many times people clicked on them over the past 90 days. They also show any errors we’ve detected.

Avoid:

  • Adding markup data which is not visible to users.
  • Creating fake reviews or adding irrelevant markups.

Manage your appearance in Google Search results

Correct structured data on your pages also makes your page eligible for many special features in Search results, including review stars, fancy decorated results, and more. See the gallery of search result types that your page can be eligible for.

Organize your site hierarchy

Understand how search engines use URLs

Search engines need a unique URL per piece of content to be able to crawl and index that content, and to refer users to it. Different content – for example, different products in a shop – as well as modified content – for example, translations or regional variations – need to use separate URLs in order to be shown in search appropriately.

URLs are generally split into multiple distinct sections:

protocol://hostname/path/filename?querystring#fragment

For example:

https://www.example.com/RunningShoes/Womens.htm?size=8#info

Google recommends that all websites use https:// when possible. The hostname is where your website is hosted, commonly using the same domain name that you’d use for email. Google differentiates between the “www” and “non-www” version (for example, “www.example.com” or just “example.com”). When adding your website to Search Console, we recommend adding both http:// and https:// versions, as well as the “www” and “non-www” versions.

Path, filename, and query string determine which content from your server is accessed. These three parts are case-sensitive, so “FILE” would result in a different URL than “file”. The hostname and protocol are case-insensitive; upper or lower case wouldn’t play a role there.

A fragment (in this case, “#info”) generally identifies which part of the page the browser scrolls to. Because the content itself is usually the same regardless of the fragment, search engines commonly ignore any fragment used.

When referring to the homepage, a trailing slash after the hostname is optional since it leads to the same content (“https://example.com/” is the same as “https://example.com”). For the path and filename, a trailing slash would be seen as a different URL (signaling either a file or a directory), for example, “https://example.com/fish” is not the same as “https://example.com/fish/”.
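If you want to see these parts programmatically, Python’s standard urllib.parse module splits the example URL above into the same components:

```python
from urllib.parse import urlsplit

url = "https://www.example.com/RunningShoes/Womens.htm?size=8#info"
parts = urlsplit(url)

print(parts.scheme)    # protocol
print(parts.netloc)    # hostname
print(parts.path)      # path + filename
print(parts.query)     # query string
print(parts.fragment)  # fragment
```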

Navigation is important for search engines

The navigation of a website is important in helping visitors quickly find the content they want. It can also help search engines understand what content the webmaster thinks is important. Although Google’s search results are provided at a page level, Google also likes to have a sense of what role a page plays in the bigger picture of the site.

[Image: example of a useful page hierarchy for a website.]

Plan your navigation based on your homepage

All sites have a home or “root” page, which is usually the most frequented page on the site and the starting place of navigation for many visitors. Unless your site has only a handful of pages, you should think about how visitors will go from a general page (your root page) to a page containing more specific content. Do you have enough pages around a specific topic area that it would make sense to create a page describing these related pages (for example, root page -> related topic listing -> specific topic)? Do you have hundreds of different products that need to be classified under multiple category and subcategory pages?

Using ‘breadcrumb lists’

A breadcrumb is a row of internal links at the top or bottom of the page that allows visitors to quickly navigate back to a previous section or the root page. Many breadcrumbs have the most general page (usually the root page) as the first, leftmost link and list the more specific sections out to the right. We recommend using breadcrumb structured data markup when showing breadcrumbs.
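A minimal BreadcrumbList sketch in JSON-LD, reusing the running-shoes URL from earlier (the URLs are illustrative), might look like:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://www.example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Running Shoes", "item": "https://www.example.com/RunningShoes/" },
    { "@type": "ListItem", "position": 3, "name": "Womens" }
  ]
}
</script>
```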

[Image: website with a breadcrumb list showing the current page hierarchy.]

Create a simple navigational page for users

A navigational page is a simple page on your site that displays the structure of your website, and usually consists of a hierarchical listing of the pages on your site. Visitors may visit this page if they are having problems finding pages on your site. Search engines will also visit this page to get good crawl coverage of your site’s pages, but it’s mainly aimed at human visitors.

Best Practices

Create a naturally flowing hierarchy

Make it as easy as possible for users to go from general content to the more specific content they want on your site. Add navigation pages when it makes sense and effectively work these into your internal link structure. Make sure all of the pages on your site are reachable through links, and that they don’t require an internal “search” functionality to be found. Link to related pages, where appropriate, to allow users to discover similar content.

Avoid:

  • Creating complex webs of navigation links, for example, linking every page on your site to every other page.
  • Going overboard with slicing and dicing your content (so that it takes twenty clicks to reach content from the homepage).

Use text for navigation

Controlling most of the navigation from page to page on your site through text links makes it easier for search engines to crawl and understand your site. When using JavaScript to create a page, use “a” elements with URLs as “href” attribute values, and generate all menu items on page load, instead of waiting for a user interaction.
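For example, a simple text-based navigation menu (the paths are hypothetical) could be marked up with plain links that crawlers can follow without executing any script:

```html
<nav>
  <ul>
    <li><a href="/products/">Products</a></li>
    <li><a href="/about/">About</a></li>
    <li><a href="/contact/">Contact</a></li>
  </ul>
</nav>
```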

Avoid:

  • Creating navigation based entirely on images or animations.
  • Requiring script-based or plugin-based event handling for navigation.

Create a navigational page for users, a sitemap for search engines

Include a simple navigational page for your entire site (or the most important pages, if you have hundreds or thousands) for users. Create an XML sitemap file to ensure that search engines discover the new and updated pages on your site, listing all relevant URLs together with their primary content’s last modified dates.

Avoid:

  • Letting your navigational page become out of date with broken links.
  • Creating a navigational page that simply lists pages without organizing them, for example by subject.

Show useful 404 pages

Users will occasionally come to a page that doesn’t exist on your site, either by following a broken link or typing in the wrong URL. Having a custom 404 page that kindly guides users back to a working page on your site can greatly improve a user’s experience. Your 404 page should probably have a link back to your root page and could also provide links to popular or related content on your site. You can use Google Search Console to find the sources of URLs causing “not found” errors.

Avoid:

  • Allowing your 404 pages to be indexed in search engines (make sure that your web server is configured to give a 404 HTTP status code or – in the case of JavaScript-based sites – include a noindex robots meta-tag when non-existent pages are requested).
  • Blocking 404 pages from being crawled through the robots.txt file.
  • Providing only a vague message like “Not found”, “404”, or no 404 page at all.
  • Using a design for your 404 pages that isn’t consistent with the rest of your site.
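As one example of the server-side half of this, on an Apache web server a single directive serves a custom page while still returning the 404 HTTP status code (the file path is an assumption; other servers have equivalent settings):

```apache
# Serve this page for missing URLs; because a local path is used,
# Apache keeps the 404 status code rather than redirecting
ErrorDocument 404 /404.html
```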

Simple URLs convey content information

Creating descriptive categories and filenames for the documents on your website not only helps you keep your site better organized, it can create easier, “friendlier” URLs for those that want to link to your content. Visitors may be intimidated by extremely long and cryptic URLs that contain few recognizable words.

URLs like the one shown in the following image can be confusing and unfriendly.

[Image: URL for a page with an unhelpful numeric page name.]

If your URL is meaningful, it can be more useful and easily understandable in different contexts.

[Image: URL showing a helpful, human-readable page name.]

URLs are displayed in search results

Lastly, remember that the URL to a document is usually displayed in a search result in Google below the document title.

Google is good at crawling all types of URL structures, even if they’re quite complex, but spending the time to make your URLs as simple as possible is a good practice.

Best Practices

Use words in URLs

URLs with words that are relevant to your site’s content and structure are friendlier for visitors navigating your site.

Avoid:

  • Using lengthy URLs with unnecessary parameters and session IDs.
  • Choosing generic page names like “page1.html”.
  • Using excessive keywords like “baseball-cards-baseball-cards-baseballcards.htm”.

Create a simple directory structure

Use a directory structure that organizes your content well and makes it easy for visitors to know where they’re at on your site. Try using your directory structure to indicate the type of content found at that URL.

Avoid:

  • Having deep nesting of subdirectories like “…/dir1/dir2/dir3/dir4/dir5/dir6/page.html”.
  • Using directory names that have no relation to the content in them.

Provide one version of a URL to reach a document

To prevent users from linking to one version of a URL and others linking to a different version (this could split the reputation of that content between the URLs), focus on using and referring to one URL in the structure and internal linking of your pages. If you do find that people are accessing the same content through multiple URLs, setting up a 301 redirect from non-preferred URLs to the dominant URL is a good solution. If you cannot redirect, you can also use the rel="canonical" link element.
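For example, placed in the <head> of a duplicate page, a rel="canonical" element pointing at the preferred URL (hypothetical here) looks like this:

```html
<head>
  <!-- Signals that the preferred version of this content lives at the URL below -->
  <link rel="canonical" href="https://www.example.com/RunningShoes/Womens.htm">
</head>
```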

Avoid:

  • Having pages from subdomains and the root directory access the same content, for example, “domain.com/page.html” and “sub.domain.com/page.html”.
