As a first-order matter, your website must be coded so that Google can read it. Otherwise, the content on your website may as well be invisible. Indeed, a page that you can read perfectly well may still be invisible to Google because of the way it is coded. Technical optimization involves changes to code and server settings that affect Google's ability to crawl, read and index your website.
The larger a website, the more important technical optimization becomes. Since Google allocates a certain amount of crawl "budget" to each website, large websites may not get crawled in their entirety. One of the main goals of technical SEO is therefore to make the most of Google's crawl budget, so that the information most important to your business's success is crawled as quickly and efficiently as possible and ultimately appears in search results.
Some of the steps in technical optimization include:
Identifying problems on your website by conducting extensive and repeated diagnostics
- Identifying and preventing roadblocks on your site by crawling it as Google would (a minimal crawl sketch follows this list)
- Optimizing website and page load speed
- Analyzing server log files to see which pages Google requests and what responses it receives (a log-parsing sketch follows this list)
- Assessing and resolving Google Search Console errors: Google Search Console is a powerful, free diagnostic tool which, among other things, flags crawling and indexation problems on websites.
- Building an internal tool and framework for SEO testing so that you can run variants of pages against one another and see which performs better (a variant-assignment sketch follows this list)
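To make the crawl step concrete, here is a minimal sketch, in Python, of crawling a site the way Google would: starting from one page, following internal links and recording the response for each URL. The domain is a placeholder, and a real diagnostic crawler would also respect robots.txt and rate limits.

```python
# A minimal sketch of crawling a site the way a search-engine bot might, using
# only the Python standard library. "https://www.example.com" is a placeholder;
# a production diagnostic crawler would also respect robots.txt and rate limits.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import Request, urlopen

class LinkExtractor(HTMLParser):
    """Collects the href values of all <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=50):
    """Breadth-first crawl of one domain, recording each URL and its response."""
    domain = urlparse(start_url).netloc
    seen, queue, results = {start_url}, deque([start_url]), {}
    while queue and len(results) < max_pages:
        url = queue.popleft()
        parser = LinkExtractor()
        try:
            req = Request(url, headers={"User-Agent": "diagnostic-crawler"})
            with urlopen(req, timeout=10) as resp:
                results[url] = resp.status
                parser.feed(resp.read().decode("utf-8", errors="ignore"))
        except Exception as exc:               # record the failure and move on
            results[url] = str(exc)
            continue
        for href in parser.links:              # stay on the same domain
            absolute = urljoin(url, href)
            if urlparse(absolute).netloc == domain and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return results

if __name__ == "__main__":
    for page, outcome in crawl("https://www.example.com").items():
        print(outcome, page)
```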
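A log-parsing sketch along the same lines, assuming the common Apache/Nginx "combined" log format; the file name and the focus on the Googlebot user agent are illustrative assumptions.

```python
# A small sketch for summarizing Googlebot activity from a server log in the
# common Apache/Nginx "combined" format. The file name "access.log" and the
# focus on the Googlebot user agent are assumptions; adjust to your own setup.
import re
from collections import Counter

LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def summarize_googlebot(log_path):
    """Count the status codes served to Googlebot and its most-requested URLs."""
    status_counts, path_counts = Counter(), Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as handle:
        for raw in handle:
            match = LINE.match(raw)
            if not match or "Googlebot" not in match["agent"]:
                continue
            status_counts[match["status"]] += 1
            path_counts[match["path"]] += 1
    return status_counts, path_counts

if __name__ == "__main__":
    statuses, paths = summarize_googlebot("access.log")
    print("Responses served to Googlebot:", dict(statuses))
    print("Most-crawled URLs:", paths.most_common(10))
```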
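And a variant-assignment sketch for the SEO testing idea; the hashing scheme is just one plausible way to split pages into stable control and variant groups, not a prescribed method.

```python
# A sketch of one way an internal SEO testing framework might assign pages to a
# control or variant group deterministically, so the two groups can be compared
# over time. The hashing scheme is simply one plausible choice for illustration.
import hashlib

def assign_group(url):
    """Hash the URL so the same page always lands in the same group."""
    digest = hashlib.sha256(url.encode("utf-8")).hexdigest()
    return "variant" if int(digest, 16) % 2 else "control"

if __name__ == "__main__":
    for url in ["https://www.example.com/boots", "https://www.example.com/sandals"]:
        print(assign_group(url), url)
```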
Code changes which signpost or map information so Google can find and index it faster
- Optimizing schema.org markup for on-page elements: We use code to tell Google more about a page by explicitly tagging the text elements on it so Google recognizes their relevance to a specific industry or category. In the case of a piece of clothing, for example, you might add tags for material, color or size. In the case of a car, you might tag the portions of text corresponding to attributes such as engine type, new or used, hybrid or gas, etc. (a sample of such markup follows this list).
- Providing recommended robots.txt and XML sitemap updates to increase crawl depth and frequency and ensure timely surfacing of new pages and updated content: An XML sitemap lists the URLs on a website that you want Google to crawl and index, whereas a robots.txt file lists the URLs and subfolders that Google should not crawl. By submitting the sitemap directly to Google and keeping the robots.txt file current, we can explicitly tell Google which URLs to crawl rather than relying on the website's own page links to lead Google from one page to the next, significantly improving crawl efficiency and conserving precious crawl budget (sample files follow this list).
- Assessing and recommending improvements to JavaScript handling: JavaScript is an important programming language that makes websites interactive and dynamic. However, it can cause problems for Google's crawler, which may not render JavaScript-generated content reliably or promptly. We identify those problems and recommend fixes (a quick check is sketched after this list).
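As a sample of the schema.org markup described above, here is a sketch for the clothing example, built in Python and emitted as JSON-LD; the product details are invented for illustration.

```python
# A sketch of schema.org Product markup for the clothing example, built as a
# Python dictionary and emitted as JSON-LD. The product details are invented.
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Leather Ankle Boot",
    "material": "Leather",
    "color": "Black",
    "size": "8",
    "offers": {
        "@type": "Offer",
        "priceCurrency": "USD",
        "price": "129.00",
        "availability": "https://schema.org/InStock",
    },
}

# The resulting block is embedded in the page's HTML so that Google can read
# the tagged attributes (material, color, size, price) directly.
print('<script type="application/ld+json">')
print(json.dumps(product, indent=2))
print("</script>")
```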
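Sample sitemap and robots.txt files, generated here with the Python standard library; the URLs and disallowed paths are placeholders.

```python
# A minimal sketch of how an XML sitemap and a robots.txt file fit together,
# generated with the Python standard library. The URLs and disallowed paths
# are placeholders for illustration only.
import xml.etree.ElementTree as ET

pages = [
    "https://www.example.com/",
    "https://www.example.com/boots",
    "https://www.example.com/sandals",
]

# The sitemap lists the URLs we explicitly want Google to discover and crawl.
urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for page in pages:
    ET.SubElement(ET.SubElement(urlset, "url"), "loc").text = page
ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)

# robots.txt lists the paths crawlers are asked to skip and points to the sitemap.
with open("robots.txt", "w", encoding="utf-8") as handle:
    handle.write(
        "User-agent: *\n"
        "Disallow: /cart/\n"
        "Disallow: /search\n"
        "Sitemap: https://www.example.com/sitemap.xml\n"
    )
```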
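And a quick check for the JavaScript issue: does a key piece of content already appear in the raw HTML the server returns, before any scripts run? The URL and phrase below are placeholders.

```python
# A rough first diagnostic for JavaScript handling: check whether a key piece of
# content is present in the raw HTML the server returns, before any scripts run.
# If it only appears after JavaScript executes, crawlers that render scripts
# slowly or incompletely may never see it. The URL and phrase are placeholders.
from urllib.request import Request, urlopen

def content_in_raw_html(url, phrase):
    req = Request(url, headers={"User-Agent": "diagnostic-check"})
    with urlopen(req, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
    return phrase in html

if __name__ == "__main__":
    found = content_in_raw_html("https://www.example.com/boots", "Leather Ankle Boot")
    print("Visible without JavaScript rendering:", found)
```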
Reducing the overall number of pages on the website while conveying the same amount of information to Google
- Adopting optimized URL conventions and correcting any errors due to incorrect handling of parameters: Some websites needlessly have huge numbers of pages. Take the example of an ecommerce website, which frequently includes sort and filter options on a product category page: sometimes the site is coded so that every sort and filter combination results in a new URL. In the case of an online shoe store, that could mean a new and separate page for every single combination of size x color x style x price range. The result is millions of URLs on some websites, all eating up crawl budget without the sort x filter permutations adding any new information (a URL-normalization sketch follows this list).
- Assessing canonical tag usage: Proper canonicalization of web pages prevents the problem described above. The word "canon" means a body of works; for example, "Plato's Symposium belongs to the canon of classical Western philosophy." Similarly, web pages that belong under the same general classification, heading or rubric can be grouped under the same canon and collapsed under a single parent page, with only the latter tagged for crawling and indexation by Google. In the example of the online shoe store above, all the pages for "black" and "brown" "boots" in sizes "5", "6", "7", "8", etc., could be collapsed under a single official page, "Boots", instead of Google having to crawl a separate page for each permutation of size x color x style x price range. Canonicalization allows Google to ignore near-duplicate subpages, saving crawl budget for more important information on the website (a canonical-tag sketch follows this list).
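A URL-normalization sketch for the faceted-navigation problem above; the parameter names are assumptions, and a real site would substitute its own.

```python
# A sketch of normalizing faceted-navigation URLs by stripping sort and filter
# parameters, so that every permutation maps back to one base URL. The parameter
# names below are assumptions; a real site would use its own list.
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

FACET_PARAMS = {"size", "color", "style", "price", "sort"}

def normalize(url):
    """Drop facet parameters and rebuild the URL."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in FACET_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(normalize("https://www.example.com/boots?color=black&size=8&sort=price"))
# -> https://www.example.com/boots
```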
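And a canonical-tag sketch showing how each sort and filter variant points back to the single official "Boots" page; the URLs are placeholders.

```python
# A sketch of the canonical tag itself: each sort/filter variant of the "Boots"
# page declares the same preferred URL in its <head>. The URLs are placeholders.
canonical_url = "https://www.example.com/boots"

variants = [
    "https://www.example.com/boots?color=black&size=7",
    "https://www.example.com/boots?color=brown&size=8&sort=price",
]

for variant in variants:
    # This tag tells Google to consolidate the variant under the "Boots" page.
    print(f'{variant}\n  <link rel="canonical" href="{canonical_url}">')
```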
Ensuring that Google doesn't end up at a non-existent URL by implementing correct redirects (from defunct pages to relevant, current pages) and reviewing server response code handling. Every time Google's crawler visits a site, it leaves a trail that includes the response it received for each page, all of which can be seen in the server log.
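A sketch of that kind of review: for a list of old or suspect URLs, report where each one ends up after redirects, or the error it returns. The URLs are placeholders, and in practice the list often comes from the server log trail just described.

```python
# A sketch of reviewing redirects and response codes: for a list of old or
# suspect URLs, report the final status and destination, or the error returned.
# The URLs are placeholders; in practice the list often comes from the server
# log trail described above.
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

def check(url):
    req = Request(url, headers={"User-Agent": "diagnostic-check"})
    try:
        with urlopen(req, timeout=10) as resp:
            # urlopen follows redirects, so resp.url is the final destination.
            return resp.status, resp.url
    except HTTPError as exc:       # e.g. 404 for a defunct page
        return exc.code, url
    except URLError as exc:        # e.g. DNS or connection failure
        return str(exc.reason), url

if __name__ == "__main__":
    for url in ["https://www.example.com/old-page", "https://www.example.com/missing"]:
        status, final = check(url)
        print(status, url, "->", final)
```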
Identifying potential weaknesses inherent in your current web architecture or technology decisions that can or should be remedied in the future
Other Issues
- Assessing website mobile-friendliness (a quick check is sketched below)
- Evaluating multilingual SEO considerations, such as hreflang annotations (sketched below)
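A quick mobile-friendliness check, as a sketch: confirm that a page declares a responsive viewport. This is only one signal among many, and the URL is a placeholder.

```python
# A rough first check for mobile-friendliness: confirm that the page declares a
# responsive viewport in its HTML. This is only one signal among many, and the
# URL is a placeholder.
from urllib.request import Request, urlopen

def has_viewport_meta(url):
    req = Request(url, headers={"User-Agent": "diagnostic-check"})
    with urlopen(req, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="ignore").lower()
    return 'name="viewport"' in html

print(has_viewport_meta("https://www.example.com"))
```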
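And a sketch of the hreflang annotations involved in multilingual SEO; the locales and URLs are placeholders.

```python
# A sketch of the hreflang annotations used for multilingual SEO: each language
# version of a page lists every alternate, including itself, so Google can match
# users to the right version. The locales and URLs are placeholders.
alternates = {
    "en": "https://www.example.com/boots",
    "fr": "https://www.example.com/fr/bottes",
    "de": "https://www.example.com/de/stiefel",
}

for lang, url in alternates.items():
    print(f'<link rel="alternate" hreflang="{lang}" href="{url}">')
```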