
Site Navigation: Why It Matters for SEO & How to Optimize
Site navigation seems like something you shouldn't have to think about, which is exactly why its importance to search engine optimization (SEO) is so often underestimated. Your main goal is to present certain information to your customers, so it's easy to assume it doesn't matter how you structure that information. In reality, your site navigation provides data to search engine crawlers and shapes usability, making it a crucial element of your SEO strategy.

The term "site navigation" can refer to several components of a site. It usually means the main navigation bar, often found running across the top of the screen. It can also refer to the overall sitemap of the domain, including links not found in that header bar. Confusing things further, "site navigation" can even describe how easily users can travel through your site and find the information they are seeking. To keep things simple, in this article we'll use "site navigation" to mean the overall structure and navigability of your website.

The Importance of Sitemaps and Crawlers

To understand why site navigation links matter, we first have to understand crawlers. If you've been working in SEO for more than a month, you're probably at least fleetingly aware of them: web crawlers, also called spiders, are bots that systematically scour the internet for content and download copies of web pages to add to a search engine's index. Google Search Console (GSC) can provide crawl stats for your site, letting you know which areas could use improvement.

How do search engine crawlers know where to find web pages? They start with a list of seed URLs, which can come from a variety of sources, including sitemaps. As each page is crawled, the bot identifies all the hyperlinks on that page and adds them to its queue. Google, in particular, has several crawlers constantly discovering new information on the Internet.

Since crawlers consume resources on the systems they visit, some site owners block or limit their access. For example, people who don't want parts of their site crawled add directives to their robots.txt file to tell bots what they can and can't access; sometimes they block entire directories. However, blocking crawlers can prevent portions of your site from being indexed by Google.

To generate the most relevant search results, search engines need a vast store of accurate, up-to-date information about the pages on the web. Crawlers are what keep that data current, so if you stop them from doing their job, you run the risk of having your pages left out of this massive store of information. On the other hand, if you help crawlers do their job, you'll maximize the number of pages they're able to see on your site, and thus maximize your presence in search engine indexes.

To maximize crawler efficiency, first make sure your content doesn't depend on Flash or on heavy client-side JavaScript. Flash is obsolete and no longer indexed at all, and content that only appears after JavaScript rendering is slower and less reliable for crawlers to process; plain, semantic HTML and CSS remain the most crawlable way to deliver content. Building an XML sitemap is also a must if you want crawlers to read your site in full. There are many free tools that can help you build one, such as XML-Sitemaps.com, but it's better to put the job in the hands of an experienced web developer. Once complete, place the sitemap file on your site directly off the root and submit it in Google Search Console (GSC). A properly formatted XML sitemap, combined with a crawlable site, will help ensure your pages are fully indexed.
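To make that crawl-and-discover loop concrete, here is a minimal sketch of the process described above: start from a seed URL, fetch each page, pull out its hyperlinks, and queue any that haven't been seen yet. It uses only the Python standard library; the seed https://example.com/ and the 50-page cap are placeholders, and a production crawler would add politeness delays, robots.txt checks, and better error handling.

    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        """Collects the href of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed_url, max_pages=50):
        """Breadth-first crawl: fetch a page, queue its links, repeat."""
        site = urlparse(seed_url).netloc
        queue = deque([seed_url])
        seen = {seed_url}
        while queue and len(seen) < max_pages:  # stop after roughly max_pages URLs
            url = queue.popleft()
            try:
                response = urlopen(url, timeout=10)
                if "html" not in response.headers.get_content_type():
                    continue  # only parse HTML pages
                html = response.read().decode("utf-8", errors="ignore")
            except OSError:
                continue  # skip pages that fail to load
            parser = LinkExtractor()
            parser.feed(html)
            for href in parser.links:
                absolute = urljoin(url, href)
                # Stay on the same domain and avoid revisiting pages
                if urlparse(absolute).netloc == site and absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)
        return seen  # every URL discovered so far

    if __name__ == "__main__":
        print(crawl("https://example.com/"))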
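The robots.txt directives mentioned above are simple User-agent/Disallow rules that compliant bots consult before fetching a URL. The short sketch below uses Python's built-in urllib.robotparser to show how a blocked directory keeps a well-behaved crawler out; the /private/ path and the example URLs are made up for illustration.

    from urllib.robotparser import RobotFileParser

    # A made-up robots.txt: allow everything except the /private/ directory.
    ROBOTS_TXT = """\
    User-agent: *
    Disallow: /private/
    """

    parser = RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())

    # A compliant crawler asks before fetching each URL.
    print(parser.can_fetch("*", "https://example.com/products/tables"))   # True
    print(parser.can_fetch("*", "https://example.com/private/pricing"))   # False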
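And if you're curious what a sitemap generator actually produces, the sketch below writes a minimal sitemap.xml following the sitemaps.org protocol, again using only the Python standard library. The page list is a placeholder; a real generator would build it by crawling the site or reading it from your CMS.

    import datetime
    import xml.etree.ElementTree as ET

    # Placeholder URLs; a real generator would discover these from the live site.
    pages = [
        "https://example.com/",
        "https://example.com/products/",
        "https://example.com/products/wooden-tables/",
    ]

    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    today = datetime.date.today().isoformat()

    for page in pages:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = page
        ET.SubElement(url, "lastmod").text = today  # ideally each page's real last-modified date

    # Write the file so it can be served from the site root, e.g. https://example.com/sitemap.xml
    ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)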
Site Depth

The depth of your site is also an important factor for navigation. If you're engaging in a content marketing strategy, "depth" might sound like a good thing: after all, the deeper your content goes, the more likely you are to be seen as an expert and to attract a wider audience as a result. As it applies to website structure, however, depth is a liability. A site's depth refers to how many links a visitor, or a crawler, has to follow to reach a given page.

For example, say your website has 50 pages. The home page and nine other pages are immediately available from the top header, but to access the remaining 40 pages, you need to click into one of those initial 10. Some of those pages even require a specific clicking order (such as Home > Products > Tables > Wooden Tables) to be reached. This is considered a "deep" site. Shallow sites, on the other hand, offer multiple pathways to each page: instead of mandating a directional flow like the example above, a shallow site has many pages pointing to each page in the hierarchy (Products, Wooden Tables, and so on).

The name of the game is K.I.S.S. (keep it simple, stupid). Stick to "meat and potatoes" content and a simple structure; that's why we only maintain two sitemaps, giving crawlers fewer steps to reach the entirety of our site's content. Shallow sites and a simple sitemap make it easier for users to find what they're looking for, and as a result, shallow sites tend to get a small boost in domain authority. Restructuring your site to avoid unnecessary depth can therefore give you more authority and more ranking power.

You might be wondering whether you should simply link more pages in your main menu. Although this would certainly make your content easily accessible to crawlers, it creates a frustrating user experience. Your main menu should give visitors only the most important links; too many choices and multiple sub-menus can make people bounce.

Create a text sitemap for your visitors