Screaming Frog

If you’re not familiar with the world of internet marketing, and someone told you that you should use Screaming Frog, what would you think?

I’d probably call them crazy. As it turns out, though, Screaming Frog is an excellent tool far too few people actually use to its full extent. Now, when people talk about Screaming Frog, they aren’t talking about the company and their web marketing services. You’re certainly free to contract them for any and all SEO work you want done, but that’s not what I’m here for. I’m here to teach you how to use the free tool they provide, the Screaming Frog SEO Spider Tool.

A spider is a piece of software that crawls around on a website, harvesting data and presenting it to the owner of the spider. Google has a fleet of these things it uses to index the Internet as completely as possible, for use in the search results.

Other search engines – everything from Yahoo and Bing to oddballs like Million Short – either use search spiders of their own or pull data indexes from other entities that use spiders.

This particular spider is a desktop application you can download and run from your local PC, regardless of platform.


It fetches SEO data, including URLs, metadata, Schema categories, and more. The primary benefit of Screaming Frog’s Spider is the ability to search for and filter various SEO issues. You don’t have to have a deep knowledge of SEO to figure out what is and isn’t done properly; the tool will help filter it for you. It can find bad redirects, meta refreshes, duplicate pages, missing metadata, and a whole lot more. The tool is extremely robust. The data it collects includes server and link errors, redirects, URLs blocked by robots.txt, external and internal links and their status, the security status of links, URL issues, issues with page titles, metadata, page response time, page word count, canonicalization, link anchor text, images with their URLs, sizes, and alt text, and a heck of a lot more.

Essentially, when I talk about doing a site audit or a content audit, everything I recommend you harvest can be harvested with Screaming Frog, and a whole lot more. Plus, since the tool is made to be SEO-friendly, it follows Google’s AJAX crawling scheme. The basic version of the tool is the Lite version, which you can download and use for free.

However, it limits you in several notable ways.

Primarily, you can only crawl 500 URLs with it, and you lack access to some custom options, Google Analytics integration, and a handful of other features.

I highly recommend, if you have a medium or large-sized site with over 500 URLs you would want to crawl, that you buy the full license. It’s an annual fee of 99 British Pounds, which works out as of this writing to be about $140.

Given that it works out to be under $12 per month, most businesses can easily afford it, and it’s well worth the price. By default, Screaming Frog obeys the same directives as the Googlebot, including the disallow rules in your robots.txt and nofollow and noindex meta directives on your pages.

However, if you want, you can give it unique directives using its own user agent, “Screaming Frog SEO Spider”.

This allows you to control it more directly, and potentially give it more access than Google gets. You can read more about how to do that on their download page, at the bottom. Regardless of the size of your site, unless you’re 100% certain you’ve done everything right and you haven’t made a mistake – you’re wrong if you believe that, by the way – the first thing you want to do is complete a total site crawl.
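To illustrate the idea of giving the spider its own directives, here is a robots.txt sketch; the paths are hypothetical, so adapt them to your own site. Googlebot is kept out of one section while Screaming Frog’s user agent is allowed everywhere:

```
# Hypothetical example: Googlebot is blocked from /staging/,
# but the Screaming Frog SEO Spider user agent may crawl everything.
User-agent: Googlebot
Disallow: /staging/

User-agent: Screaming Frog SEO Spider
Disallow:
```

An empty Disallow line means "nothing is disallowed," which is how you grant the spider more access than Google gets.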


I’m going to be assuming you’re using the full version of Screaming Frog to make sure you haven’t missed anything. Again, it’s super cheap; just buy the license. Open the Configuration menu and click Spider.

In the menu that appears, click “crawl all subdomains” so that it’s checked. You can crawl CSS, JavaScript, Images, SWF, and External links as well to get a complete view of your site. You can leave those unchecked if you want a faster crawl of just page and text elements, but no media or scripts.


Initiate the crawl and wait for it to complete. It will be faster the less you have checked in the configuration menu. It is also limited by what processing power and memory you have allocated to the program.

The more powerful your computer, the faster it will crawl.

Click the Internal tab and filter your results by HTML. Click to export. You will be given a CSV file of all the data crawled, sorted by individual HTML page.

You can then use this to identify issues on a given page and fix them quickly and easily.

If you’ve found that Screaming Frog crashes when crawling a large site, you might be running into memory limits.

The spider will use all the memory available to it, and sometimes it will try to use more than your computer can handle.

To throttle it and keep it from crashing, go back to that spider configuration menu.

Under Advanced, check “pause on high memory usage.” This pauses the spider when it starts consuming more resources than your machine can handle.

If you find that your crawl is timing out, it might be due to the server not handling as many requests as you want to send in. To rate limit your crawling, go to the speed submenu in the configuration menu and pick a limit for the number of requests it can make per second.

If you want to use proxies with your crawling – for competitive research or to avoid bot-capture blocking – you will need to click configuration and click proxy.


From within this menu, you can set a proxy setup of your choice. Screaming Frog supports pretty much any kind of proxy you want to use, though you will want to make sure it’s fast and responsive, otherwise your crawl will probably take forever.

Links are difficult to audit because they can be difficult to harvest. How many links do you have on a typical page? Couple that with all of your parameters and you have a lot of information you need to gather.

Here’s how to do it with the spider. In the spider configuration menu, check all subdomains but uncheck CSS, images, JavaScript, Flash, and any other options you don’t need.

Decide if you want to crawl nofollowed links and check the boxes accordingly.

Initiate the crawl and let it run until it’s finished. Click the Advanced Report menu and click “All Links” to generate and export a CSV of all of the links it crawls, including their locations, their destinations, their anchor text, their directives, and other data.


From here you can export the data, or you can sort it as much as you like. Here are some sorts and actions you can perform. Click the internal tab and sort by outlinks. This will show you the pages with the most links on your site. Pages with over 100 links are generally suspect according to Google, so you may want to audit those pages to determine why they have so many links, and how you can minimize them.
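As a sketch of what you can do with that export outside the tool, a few lines of Python can flag the pages whose outlink counts exceed the 100-link threshold. The column names and sample rows below are assumptions for illustration; check the header row of your own CSV.

```python
import csv
from io import StringIO

# Hypothetical excerpt of an exported internal-HTML CSV;
# real exports contain many more columns.
SAMPLE_CSV = """Address,Status Code,Outlinks
https://example.com/,200,42
https://example.com/sitemap-page,200,187
https://example.com/blog/,200,96
"""

def pages_over_link_limit(csv_text, limit=100):
    """Return addresses whose outlink count exceeds `limit`."""
    reader = csv.DictReader(StringIO(csv_text))
    return [row["Address"] for row in reader
            if int(row["Outlinks"]) > limit]

print(pages_over_link_limit(SAMPLE_CSV))
# ['https://example.com/sitemap-page']
```

The same pattern works for any column the export contains: swap the field name and the comparison.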


Click the internal tab and click status code. Any links that show the 404 status code are links that are broken; you will want to fix those links.

Links that report a 301 or other redirects may be redirecting to homepages or to harmful pages; check them and determine if they should be removed.
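Triage like this is easy to automate once the data is exported. A hedged sketch, assuming you have reduced the export to (address, status code) pairs; the URLs below are hypothetical:

```python
from collections import defaultdict

# Hypothetical (address, status code) pairs from a crawl export.
CRAWLED = [
    ("https://example.com/", 200),
    ("https://example.com/old-page", 301),
    ("https://example.com/missing", 404),
    ("https://example.com/broken-api", 500),
]

def triage_by_status_class(rows):
    """Group URLs into 2xx/3xx/4xx/5xx buckets for review."""
    buckets = defaultdict(list)
    for address, status in rows:
        buckets[f"{status // 100}xx"].append(address)
    return dict(buckets)

groups = triage_by_status_class(CRAWLED)
print(groups["4xx"])  # ['https://example.com/missing']
```

The 4xx bucket is your broken-link fix list; the 3xx bucket is the set of redirects worth auditing by hand.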

You can also generate specific reports for different types of status codes – 3XX, 4XX, or 5XX for redirections, client errors, or server errors respectively – under the Advanced Report drop-down.

Content audits are hugely important because a ton of the most important search ranking factors today are all content-based.

Site speed, HTTPS integration, mobile integration: these are all important, but they aren’t as important as having high-quality content, good images, and a lack of duplication.

Perform a full site crawl, including CSS, Images, scripts, and all the rest. You want as much data as possible.

In the internal tab, filter by HTML, then scroll over to the word count column and sort it low to high.

Pages with anything under 500-1000 words are likely to be thin content; determine whether they should be improved, noindexed, or removed entirely. Note: this will require some interpretation for e-commerce sites, particularly pages with minimal but valuable product information.
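The word-count sort is another filter that is trivial to reproduce on the exported data. A minimal sketch, assuming hypothetical (url, word_count) pairs pulled from the export:

```python
# Hypothetical (url, word_count) pairs from a crawl export.
PAGES = [
    ("https://example.com/guide", 2400),
    ("https://example.com/tag/widgets", 180),
    ("https://example.com/product/sku-1", 320),
]

def thin_pages(pages, min_words=500):
    """Flag pages below the word-count threshold for manual review.
    E-commerce product pages may be legitimately short, so treat this
    as a triage list, not an automatic delete list."""
    return [url for url, words in pages if words < min_words]

print(thin_pages(PAGES))
# ['https://example.com/tag/widgets', 'https://example.com/product/sku-1']
```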

In the images tab, filter by “missing alt text” to find images that are holding back your site by not having alt text associated with the images.

You can also filter by “alt text over 100 characters” to find images with excessive alt text, which is generally detrimental to the user experience and to your search ranking.

In the page titles tab, filter for titles over 70 characters.

Google doesn’t display much more than that, so extra-lengthy titles aren’t doing you any favors.

Truncate or edit the titles to remove excessive characters that aren’t doing you any good.

In the same page titles tab, filter by duplicate to find pages that have duplicate meta titles. Duplicate titles indicate duplicate content, which can trigger a Panda penalty and hurt your search ranking significantly. If the pages are unique, change their titles to reflect their content. If the pages are duplicates, remove one and redirect its URL to the other, or canonicalize the content if necessary.
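The duplicate-title check is simple enough to reproduce yourself from the export. A sketch, assuming hypothetical (url, title) pairs:

```python
from collections import Counter

# Hypothetical (url, title) pairs from the page titles export.
TITLES = [
    ("https://example.com/red-widget", "Widgets | Example"),
    ("https://example.com/blue-widget", "Widgets | Example"),
    ("https://example.com/about", "About Us | Example"),
]

def duplicate_titles(pairs):
    """Return {title: [urls]} for every title used by more than one page."""
    counts = Counter(title for _, title in pairs)
    dupes = {t for t, n in counts.items() if n > 1}
    return {t: [u for u, title in pairs if title == t] for t in dupes}

print(duplicate_titles(TITLES))
# {'Widgets | Example': ['https://example.com/red-widget', 'https://example.com/blue-widget']}
```

Each group of URLs sharing a title is a candidate for retitling, redirecting, or canonicalization.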

In the URL tab, filter by duplicate to find similar duplication issues that need canonicalization to fix. In the meta description tab, filter by duplicate to find lazy duplicated meta descriptions on unique pages, or duplicate pages that have had their titles changed to make them appear more unique.

Fix these issues ASAP; they are hurting your site.

In the URL tab, filter by various options to determine pages that have non-standard or non-human-readable URLs that could be changed. This is particularly important for pages with non-ASCII characters or excessive underscores in the URL. In the directives tab, filter by any directive you want to identify pages or links that have directives attached to them.

Directives include index/noindex, follow/nofollow, and several other directives in much less common use. This can also be used to determine where canonicalization is already implemented.


Sitemaps are incredibly helpful for Google, as they let the search engine know where all of your pages are and when they were last updated.

You can generate one in a number of different ways, but Screaming Frog has its own method if you want to use it. All you need to do is crawl your site completely, including all subdomains. Then click on the “Advanced Export” menu and click the bottom option, the XML Sitemap option.

This will save your sitemap as an Excel table, which you can then edit. Open it read-only and open it as an XML table. Ignore any warnings that pop up. In table form, you can edit your sitemap easily, and you can save it as an XML file.

When that’s done, you can upload it to Google. If you are finding that certain sections of your site are not being indexed, your robots.txt may be blocking those subfolders from being crawled.

Additionally, if a page has no internal links pointing to it, it won’t be crawlable.

Make sure any page you know exists has at least one internal link pointing to it. And, there you have it!

The beginner’s guide to Screaming Frog.

Screaming Frog is a very powerful SEO Spider able to perform in-depth on-site SEO analysis.

In this guide, we will look at some of the main features that are most useful during SEO analysis. The free version of Screaming Frog allows you to analyze up to 500 URLs.



Screaming Frog allows you to crawl a specific website, subdomain, or directory. In the paid version, the SEO Spider lets you select the “Crawl All Subdomains” option if you have more than one subdomain. If you only need to crawl one subdomain, simply add the URL in the appropriate box. One of the most commonly used features is monitoring status codes on a website (4xx, 5xx, 200, and 3xx).

Screaming Frog by default crawls a directory by simply adding the address in the bar as presented in the image below.

If you need to perform more advanced crawling, you can use a wildcard that tells the SEO Spider to crawl only the pages matching a pattern. The path to this feature is Configuration > Include; add the desired syntax in the box that appears. For example, with a pattern ending in the wildcard, the spider only crawls the sections of the website inside the “About Us” branch of the site, i.e. all the resources that come after the wildcard character. Starting the crawl will then extract all the “child” URLs of the “About Us” section.
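Since the example URLs were lost from the original text, here is a hypothetical illustration of how such an include rule behaves: it is a regular expression, and only URLs matching the whole pattern are crawled. The example.com paths below are assumptions:

```python
import re

# A hypothetical include rule: everything under /about-us/ matches.
INCLUDE_PATTERN = r"https://www\.example\.com/about-us/.*"

urls = [
    "https://www.example.com/about-us/team",
    "https://www.example.com/about-us/history",
    "https://www.example.com/products/widget",
]

# Only the "child" URLs of the /about-us/ branch survive the filter.
matched = [u for u in urls if re.fullmatch(INCLUDE_PATTERN, u)]
print(matched)
# ['https://www.example.com/about-us/team', 'https://www.example.com/about-us/history']
```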


This option is particularly useful with large websites, where we may not have the resources to work on very large datasets. Keep in mind that the crawl data will, in most cases, be processed in Excel, so the starting point needs to be data you can easily work with using VLOOKUP, filters, and charts.

From the “Mode” tab you can select the crawling mode, in case you want to crawl a set of URLs the mode to set is “List” because you can import an Excel file with a column containing the list of URLs.

The other option to scan a list of URLs is “copy and paste”, then copy from an external source (Excel, CSV, TXT or HTML page) the list of URLs and click “Paste”.

Each URL must also include the http or https protocol (and the www, where used). When you need to analyze a large website and it’s not enough to just crawl HTML and images (from an SEO perspective it is often good to also check the status codes of CSS and JS files, to make sure search engine spiders can correctly render pages), you can work on the settings:


1. Configuration > System > Memory: allocate more memory, for example 4 GB.
2. Configuration > System > Storage: set the storage to database instead of RAM.

If even with these two configurations it is not possible to analyze a large website, the remaining option is to start crawling by website branches, one or more branches at a time, using:

• the wildcard character;
• the Include/Exclude options;
• a custom robots.txt;
• navigation depth (crawl depth);
• query string parameters.

You can also exclude images, CSS, JS, and other non-HTML resources from the crawl.

From an SEO perspective, it is best to perform a single complete crawl because it gives you a full view, for example the From and To URL pairs for 301s and 404s, or the distribution of internal links.


It may happen that Screaming Frog times out or, in general, can’t analyze resources (or is very slow) even on small websites; in this case the problem could be related to other factors, such as hosting performance, or the fact that the IP address we started Screaming Frog from has been blocked by the website owner (or by the dedicated IT resource).


Our IP address can be banned by a provider because Screaming Frog’s activity looks very similar to an attack (e.g. a DoS attack) aimed at exhausting server resources and causing 50x errors.


After finishing a crawl of the website, there are multiple export options:

• Save the Screaming Frog project source. Having the source lets you review the crawl data without having to start the crawl again, which is especially useful for large websites or for collaborating with colleagues and sharing the source.
• Save only the tab you need.
• Export all pages to a single Excel file.
• Bulk export, very useful for getting, for example, the full internal link distribution: all inlinks (for internal linking analysis), all outlinks, all anchor text, all images, response codes, structured data, and more.

The image below shows how to export structured data. Screaming Frog also allows you to export a configuration file that can be reused for future projects/customers.

It is particularly useful if you perform SEO analysis for similar clients (similar website structure) and have configured advanced filters or special extraction options (filters, exclude/include or wildcard). The configuration file is also useful if custom scripts have been programmed, for example in Python or from the command line to automate purely mechanical operations.

For example, if we need to perform a series of purely technical SEO Audits and the output requires the same data, it would make no sense, for each website, to re-configure Screaming Frog.


Screaming Frog is “Robots.txt Compliant”, so it follows the guidelines in robots.txt exactly as Googlebot does.


Through the configuration options it is possible to:

• ignore robots.txt;
• see the URLs blocked by robots.txt;
• use a custom robots.txt.

The last option can come in handy before the go-live of a website, to test the robots.txt file and check that its directives are correct.


By default, Screaming Frog does not accept cookies, just as search engine spiders do not. This option is often underestimated or ignored, but for some websites it is of fundamental importance: by accepting cookies you can unlock features and code that provide extremely useful SEO and performance information.

For example, accepting cookies may unlock a small JavaScript snippet that adds code to the HTML of the page… and if this code creates SEO problems, how can you verify it?


Screaming Frog helps us in this case, as shown in the image below. One of the best methods to create a sitemap is to use an SEO tool like Screaming Frog. WordPress plugins like Yoast SEO are fine too, but they can have update and compatibility problems; for example, the URLs in the sitemap may return a 404 status code.


It is recommended to generate a sitemap that contains only canonical URLs with status code 200. For large websites, it is recommended to create a sitemap for each type of content (PDF, images and HTML pages) and a sitemap for each branch of the information architecture.



Having specific sitemaps allows the search engine to better analyze URLs and file types, and allows you to keep full control and easily compare the URLs in Google’s index (using the site: operator) against individual sitemaps.


Please note that the limit is 50,000 URLs per sitemap file. For details, see the sitemap protocol documentation.
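Because of that per-file cap, large sites have to shard their URLs across several sitemap files. A minimal sketch of how that sharding works; the URL list and the 50,000 limit constant are the only inputs, and the example URLs are hypothetical:

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
LIMIT = 50000  # maximum URLs per sitemap file

def build_sitemaps(urls, limit=LIMIT):
    """Split `urls` into chunks of at most `limit` entries and render
    each chunk as an XML sitemap string."""
    sitemaps = []
    for start in range(0, len(urls), limit):
        urlset = ET.Element("urlset", xmlns=NS)
        for url in urls[start:start + limit]:
            loc = ET.SubElement(ET.SubElement(urlset, "url"), "loc")
            loc.text = url
        sitemaps.append(ET.tostring(urlset, encoding="unicode"))
    return sitemaps

# Hypothetical: 120,001 URLs -> three files (50k + 50k + 1).
urls = [f"https://example.com/page-{i}" for i in range(120001)]
print(len(build_sitemaps(urls)))  # 3
```

In practice you would also generate a sitemap index file pointing at the shards, which is what lets you keep one sitemap per content type or site branch.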


To generate a sitemap with Screaming Frog, follow the steps below: Sitemaps (top bar) > XML Sitemap or Images Sitemap. Among the options, you can decide which pages to include based on:

• paginated URLs;
• change frequency;
• noindexed images;
• relevant images, based on the number of links they receive;
• images hosted on a CDN.

For large websites, e.g. e-commerce, product photos are often uploaded to a subdomain or external hosting for a variety of reasons, such as:

• avoiding absorbing resources allocated to the CMS;
• ease of management, since image-specific scripts can be created to improve their performance;
• cron jobs for synchronization between the physical warehouse and the e-commerce platform.

With regard to the structure of the website, with a particular focus on the information architecture, the “Visualisations” section is useful, as it gives a graphic view of the website structure in diagrams or graphs.

During an internal linking analysis this section is fundamental, but it is recommended to integrate it with mind-map programs, such as XMind, and with standard tools.

The configuration options of the SEO Spider are collected and organized in tabs; in this paragraph we will examine the macro tabs without going into detail on all the individual options. The Basic tab covers, among other things:

• external links;
• links outside of the start folder;
• following internal or external nofollow links;
• crawling all subdomains;
• crawling outside of the start folder;
• crawling canonicals;
• extraction of hreflang;
• crawling links inside the sitemap;
• extraction and crawling of AMP links.

The Limits tab is particularly useful for analyzing very large websites, but not only those.


From this section you can set:

• the total crawl limit, expressed in number of URLs;
• the crawl depth, expressed in number of directories;
• the limit on the number of query strings;
• the number of 301 redirects to follow (to avoid redirect chains, which are harmful in terms of resource use and therefore crawl budget);
• the length of URLs to follow (the default is 2,000 characters);
• the maximum weight of pages to analyze.

The Advanced tab collects options such as:

• pause on high memory usage;
• always follow redirects;
• always follow canonicals;
• respect noindex;
• respect canonical;
• respect next/prev;
• extract images from the img srcset attribute;
• respect HSTS policy;
• respect self-referencing meta refresh;
• response timeout;
• 5xx response retries;
• store rendered HTML;
• extract microdata;
• structured data validation, including Google validation.

In the main menu at the top of the tool there is a series of buttons (tabs) that open sections; let’s look at them in detail.

The internal tab combines all the data extracted during crawling and added in the other tabs (excluding external, hreflang and custom tabs).

The usefulness of this tab lies in having an overview, plus the ability to export the data and work with it externally, for example in Excel, Data Studio, or mind-map tools.

The External tab shows information related to URLs outside the domain. The Protocol section shows information on the HTTP and HTTPS protocols of both external and internal URLs; it is useful to verify, for example, a correct migration to HTTPS. The Response Codes tab provides information on response codes, both internal and external.

This tab provides information related to page titles, in particular:

• duplicate titles;
• titles shorter than 35 characters;
• titles longer than 65 characters;
• titles equal to the H1;
• multiple titles.

The next tab provides meta description information: length (minimum and maximum from an SEO perspective), and whether descriptions are duplicated or missing.



It provides information about the H1 heading tag, for example whether it is identical to the title, because very often (especially in e-commerce) products have an H1 equal to the title.

This issue can be solved, for example, by concatenating the product variant onto the current H1 to create an original tag. The tool also reports on the length and originality of H2 tags. The data provided in the images tab covers the image’s file size, the number of internal links it receives, and its indexability status.

Keep in mind that, from an SEO perspective, an image should be treated like an HTML page: if well optimized, it can drive organic traffic, for example through image search. The next tab shows the list of canonical resources. There is also information about pagination and paginated resources, in particular the use of the rel="next" and rel="prev" tags.


This tab provides information on the use of the hreflang tag for correctly setting up a multi-language (and multi-country) website.

SEO audits for multi-language websites require extra effort, given the complexity of analyzing multiple markets.

The Custom tab lets you inspect URLs matched through custom filters and extractions.

Analytics and Search Console

Through this tab, you can integrate your Google Analytics and Google Search Console accounts.

How to check my existing XML Sitemap

This is a basic guide to using the SEO Spider, meant to convey its potential and areas of use.

To date, Screaming Frog is one of the best tools for conducting technical SEO analysis.

It is certainly worth supplementing this guide with real case studies from client SEO audits to make it more engaging to follow.

You can configure your crawl settings to discover and compare the URLs within your XML sitemaps to the URLs within your site crawl.

Go to ‘Configuration’ -> ‘Spider’ in the main navigation; at the bottom, there are a few options for XML sitemaps – auto-discover XML sitemaps through your robots.txt file, or manually enter the XML sitemap link into the box. *Important note – if your robots.txt file does not contain proper destination links to all of the XML sitemaps you want crawled, you should enter them manually.

Once you’ve updated your XML Sitemap crawl settings, go to ‘Crawl Analysis’ in the navigation then click ‘Configure’ and ensure the Sitemaps button is ticked. You’ll want to run your full site crawl first, then navigate back to ‘Crawl Analysis’ and hit Start.

Once the Crawl Analysis is complete, you’ll be able to see any crawl discrepancies, such as URLs that were detected within the full site crawl that are missing from the XML sitemap.
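Once you have both URL lists exported, the same discrepancy check can be scripted as a simple set comparison. A minimal sketch (the function name and sample URLs are illustrative, not part of the tool):

```python
# Compare URLs found in a site crawl against URLs listed in an XML sitemap.
# Both inputs are plain lists of URLs, e.g. exported from Screaming Frog.

def sitemap_discrepancies(crawled_urls, sitemap_urls):
    crawled = set(crawled_urls)
    sitemap = set(sitemap_urls)
    return {
        "missing_from_sitemap": sorted(crawled - sitemap),
        "in_sitemap_not_crawled": sorted(sitemap - crawled),
    }

crawled = ["https://example.com/", "https://example.com/about", "https://example.com/blog"]
sitemap = ["https://example.com/", "https://example.com/about", "https://example.com/old-page"]

report = sitemap_discrepancies(crawled, sitemap)
print(report["missing_from_sitemap"])    # URLs crawled but absent from the sitemap
print(report["in_sitemap_not_crawled"])  # sitemap URLs the crawl never found
```

Either direction of the difference is actionable: crawl-only URLs may need adding to the sitemap, while sitemap-only URLs may be orphaned or stale.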

General Troubleshooting

How to identify why certain sections of my site aren’t being indexed or aren’t ranking

Wondering why certain pages aren’t being indexed? First, make sure that they weren’t accidentally put into the robots.txt or tagged as noindex. Next, you’ll want to make sure that spiders can reach the pages by checking your internal links. A page that is not internally linked somewhere on your site is often referred to as an Orphaned Page.

In order to identify any orphaned pages, complete the following steps:

  • Go to ‘Configuration’ -> ‘Spider’ in the main navigation; at the bottom, there are a few options for XML sitemaps – auto-discover XML sitemaps through your robots.txt file, or manually enter the XML sitemap link into the box. *Important note – if your robots.txt file does not contain proper destination links to all of the XML sitemaps you want crawled, you should enter them manually.
  • Go to ‘Configuration → API Access’ → ‘Google Analytics’ – using the API you can pull in analytics data for a specific account and view. To find orphan pages from organic search, make sure to segment by ‘Organic Traffic’
  • You can also go to General → ‘Crawl New URLs Discovered In Google Analytics’ if you would like the URLs discovered in GA to be included within your full site crawl. If this is not enabled, you will only be able to view any new URLs pulled in from GA within the Orphaned Pages report.
  • Go to ‘Configuration → API Access’ → ‘Google Search Console’ – using the API you can pull in GSC data for a specific account and view. To find orphan pages you can look for URLs receiving clicks and impressions that are not included in your crawl.
    • You can also go to General → ‘Crawl New URLs Discovered In Google Search Console’ if you would like the URLs discovered in GSC to be included within your full site crawl. If this is not enabled, you will only be able to view any new URLs pulled in from GSC within the Orphaned Pages report.
  • Crawl the entire website. Once the crawl is completed, go to ‘Crawl Analysis –> Start’ and wait for it to finish.
  • View orphaned URLs within each of the tabs or bulk export all orphaned URLs by going to Reports → Orphan Pages

If you do not have access to Google Analytics or GSC you can export the list of internal URLs as a .CSV file, using the ‘HTML’ filter in the ‘Internal’ tab.

Open up the CSV file, and in a second sheet, paste the list of URLs that aren’t being indexed or aren’t ranking well. Use a VLOOKUP to see if the URLs in your list on the second sheet were found in the crawl.
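If you'd rather script the lookup than build a VLOOKUP, the same membership check takes a few lines of Python. A sketch (the column name mirrors the Screaming Frog export; file loading is optional):

```python
import csv

def load_urls(path, column="Address"):
    """Load the crawled URLs exported from the 'Internal' tab (HTML filter)."""
    with open(path, newline="") as f:
        return {row[column] for row in csv.DictReader(f)}

def find_uncrawled(candidate_urls, crawled_urls):
    """Return URLs from your problem list that never appeared in the crawl."""
    return [u for u in candidate_urls if u not in crawled_urls]

# Illustrative data standing in for the two sheets:
crawled = {"https://example.com/", "https://example.com/about"}
problem_pages = ["https://example.com/about", "https://example.com/orphan"]
print(find_uncrawled(problem_pages, crawled))  # pages missing from the crawl
```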

How to check if my site migration/redesign was successful

@ipullrank has an excellent Whiteboard Friday on this topic, but the general idea is that you can use Screaming Frog to check whether or not old URLs are being redirected by using the ‘List’ mode to check status codes. If the old URLs are throwing 404’s, then you’ll know which URLs still need to be redirected.
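Conceptually, the check boils down to classifying each old URL by the status code your List-mode crawl returns. A sketch of that logic (the function and sample data are illustrative):

```python
def migration_report(url_statuses):
    """Split old URLs by whether they redirect, 404, or something else.

    url_statuses: dict mapping old URL -> HTTP status code, e.g. taken
    from the 'Response Codes' tab of a List-mode crawl.
    """
    report = {"redirected": [], "needs_redirect": [], "other": []}
    for url, status in url_statuses.items():
        if status in (301, 302, 307, 308):
            report["redirected"].append(url)
        elif status == 404:
            report["needs_redirect"].append(url)
        else:
            report["other"].append(url)
    return report

statuses = {"https://old.example.com/a": 301,
            "https://old.example.com/b": 404,
            "https://old.example.com/c": 200}
print(migration_report(statuses)["needs_redirect"])  # URLs still throwing 404s
```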

How to find slow-loading pages on my site

After the spider has finished crawling, go to the ‘Response Codes’ tab and sort by the ‘Response Time’ column from high to low to find pages that may be suffering from a slow loading speed.

How to find malware or spam on my site

First, you’ll need to identify the footprint of the malware or the spam. Next, in the Configuration menu, click on ‘Custom’ → ‘Search’ and enter the footprint that you are looking for.

You can enter up to 10 different footprints per crawl. Finally, press OK and proceed with crawling the site or list of pages.

When the spider has finished crawling, select the ‘Custom’ tab in the top window to view all of the pages that contain your footprint. If you entered more than one custom filter, you can view each one by changing the filter on the results.

PPC & Analytics

How to verify that my Google Analytics code is on every page, or on a specific set of pages on my site

SEER alum @RachaelGerson wrote a killer post on this subject: Use Screaming Frog to Verify Google Analytics Code. Check it out!

How to validate a list of PPC URLs in bulk

Save your list in .txt or .csv format, then change your ‘Mode’ settings to ‘List’.

Next, select your file to upload, and press ‘Start’ or paste your list manually into Screaming Frog. See the status code of each page by looking at the ‘Internal’ tab.

To check if your pages contain your GA code, check out this post on using custom filters to verify Google Analytics code by @RachaelGerson.


How to scrape the meta data for a list of pages

So, you’ve harvested a bunch of URLs, but you need more information about them? Set your mode to ‘List’, then upload your list of URLs in .txt or .csv format. After the spider is done, you’ll be able to see status codes, outbound links, word counts, and of course, meta data for each page in your list.

How to scrape a site for all of the pages that contain a specific footprint

First, you’ll need to identify the footprint. Next, in the Configuration menu, click on ‘Custom’ → ‘Search’ or ‘Extraction’ and enter the footprint that you are looking for.

You can enter up to 10 different footprints per crawl. Finally, press OK and proceed with crawling the site or list of pages. In the example below, I wanted to find all of the pages that say ‘Please Call’ in the pricing section, so I found and copied the HTML code from the page source.

When the spider has finished crawling, select the ‘Custom’ tab in the top window to view all of the pages that contain your footprint. If you entered more than one custom filter, you can view each one by changing the filter on the results.
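The footprint search itself is just a substring or regex scan over each page's HTML, which you can reproduce outside the tool. A sketch (the pages dict is illustrative):

```python
import re

def pages_with_footprint(pages, footprint):
    """Return URLs whose HTML contains the footprint (a regex or literal string).

    pages: dict mapping URL -> raw HTML string.
    """
    pattern = re.compile(footprint)
    return [url for url, html in pages.items() if pattern.search(html)]

pages = {
    "https://example.com/p1": '<span class="price">Please Call</span>',
    "https://example.com/p2": '<span class="price">$19.99</span>',
}
print(pages_with_footprint(pages, r"Please Call"))
```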

Below are some additional common footprints you can scrape from websites that may be useful for your SEO audits:

  • http://schema\.org – Find pages containing Schema.org markup

Pro Tip:

If you are pulling product data from a client site, you could save yourself some time by asking the client to pull the data directly from their database. The method above is meant for sites that you don’t have direct access to.

URL Rewriting

How to find and remove session id or other parameters from my crawled URLs

To identify URLs with session ids or other parameters, simply crawl your site with the default settings. When the spider is finished, click on the ‘URI’ tab and filter to ‘Parameters’ to view all of the URLs that include parameters.

To remove parameters from being shown for the URLs that you crawl, select ‘URL Rewriting’ in the configuration menu, then in the ‘Remove Parameters’ tab, click ‘Add’ to add any parameters that you want removed from the URLs, and press ‘OK.’ You’ll have to run the spider again with these settings in order for the rewriting to occur.
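Under the hood, stripping a parameter is a query-string rewrite. A sketch of the equivalent operation with the standard library (the parameter name is an example):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_params(url, params_to_remove):
    """Remove the given query parameters from a URL, keeping the rest intact."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in params_to_remove]
    return urlunsplit(parts._replace(query=urlencode(kept)))

url = "https://example.com/page?sessionid=abc123&color=red"
print(strip_params(url, {"sessionid"}))  # https://example.com/page?color=red
```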

How to rewrite the crawled URLs (e.g. replace one domain extension with another, or write all URLs in lowercase)

To rewrite any URL that you crawl, select ‘URL Rewriting’ in the Configuration menu, then in the ‘Regex Replace’ tab, click ‘Add’ to add the RegEx for what you want to replace.

Once you’ve added all of the desired rules, you can test your rules in the ‘Test’ tab by entering a test URL in the space labeled ‘URL before rewriting’. The ‘URL after rewriting’ will be updated automatically according to your rules.

If you wish to set a rule that all URLs are returned in lowercase, simply select ‘Lowercase discovered URLs’ in the ‘Options’ tab. This will remove any duplication by capitalized URLs in the crawl.

Remember that you’ll have to actually run the spider with these settings in order for the URL rewriting to occur.
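The ‘Regex Replace’ rules behave like ordinary regex substitutions applied in order, with optional lowercasing. You can mirror them in a few lines to sanity-check a rule before running the crawl (the rule shown is an example):

```python
import re

def rewrite_url(url, rules, lowercase=False):
    """Apply (pattern, replacement) regex rules in order, optionally lowercasing."""
    for pattern, replacement in rules:
        url = re.sub(pattern, replacement, url)
    return url.lower() if lowercase else url

rules = [(r"\.com/", ".co.uk/")]
print(rewrite_url("https://example.com/Page-One", rules, lowercase=True))
# https://example.co.uk/page-one
```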

Keyword Research

How to know which pages my competitors value most

Generally speaking, competitors will try to spread link popularity and drive traffic to their most valuable pages by linking to them internally. Any SEO-minded competitor will probably also link to important pages from their company blog. Find your competitor’s prized pages by crawling their site, then sorting the ‘Internal’ tab by the ‘Inlinks’ column from highest to lowest, to see which pages have the most internal links.
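The ‘most internal links’ idea is simply a count over the site's link graph. Given an exported edge list of (source, target) pairs, the ranking looks like this (data is illustrative):

```python
from collections import Counter

def rank_by_inlinks(internal_links):
    """Rank target URLs by inbound internal links.

    internal_links: iterable of (source_url, target_url) pairs.
    """
    counts = Counter(target for _, target in internal_links)
    return counts.most_common()

links = [("/a", "/pricing"), ("/b", "/pricing"), ("/a", "/blog"), ("/c", "/pricing")]
print(rank_by_inlinks(links))  # [('/pricing', 3), ('/blog', 1)]
```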

To view pages linked from your competitor’s blog, deselect ‘Check links outside folder’ in the Spider Configuration menu and crawl the blog folder/subdomain. Then, in the ‘External’ tab, filter your results using a search for the URL of the main domain. Scroll to the far right and sort the list by the ‘Inlinks’ column to see which pages are linked most often.

Pro Tip:

Drag and drop columns to the left or right to improve your view of the data.

How to know what anchor text my competitors are using for internal linking

In the ‘Bulk Export’ menu, select ‘All Anchor Text’ to export a CSV containing all of the anchor text on the site, where it is used and what it’s linked to.

How to know which meta keywords (if any) my competitors have added to their pages

After the spider has finished running, look at the ‘Meta Keywords’ tab to see any meta keywords found for each page. Sort by the ‘Meta Keyword 1’ column to alphabetize the list and visually separate the blank entries, or simply export the whole list.

Link Building

How to analyze a list of prospective link locations

If you’ve scraped or otherwise come up with a list of URLs that needs to be vetted, you can upload and crawl them in ‘List’ mode to gather more information about the pages. When the spider is finished crawling, check for status codes in the ‘Response Codes’ tab, and review outbound links, link types, anchor text and nofollow directives in the ‘Outlinks’ tab in the bottom window. This will give you an idea of what kinds of sites those pages link to and how. To review the ‘Outlinks’ tab, be sure that your URL of interest is selected in the top window.

Of course you’ll want to use a custom filter to determine whether or not those pages are linking to you already.

You can also export the full list of out links by clicking on ‘All Outlinks’ in the ‘Bulk Export Menu’. This will not only provide you with the links going to external sites, but it will also show all internal links on the individual pages in your list.

For more great ideas for link building, check out these two awesome posts on link reclamation and using Link Prospector with Screaming Frog by SEER’s own @EthanLyon and @JHTScherck.

How to find broken links for outreach opportunities

So, you found a site that you would like a link from? Use Screaming Frog to find broken links on the desired page or on the site as a whole, then contact the site owner, suggesting your site as a replacement for the broken link where applicable, or just offer the broken link as a token of good will.

How to verify my backlinks and view the anchor text

Upload your list of backlinks and run the spider in ‘List’ mode. Then, export the full list of outbound links by clicking on ‘All Out Links’ in the ‘Advanced Export Menu’. This will provide you with the URLs and anchor text/alt text for all links on those pages. You can then use a filter on the ‘Destination’ column of the CSV to determine if your site is linked and what anchor text/alt text is included.
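Filtering the exported outlinks for your domain amounts to a simple row filter. A sketch, with dict keys mirroring the export's column names (sample rows are illustrative):

```python
def links_to_my_site(outlink_rows, my_domain):
    """Keep outlink rows whose 'Destination' points at my_domain.

    outlink_rows: list of dicts with 'Source', 'Destination' and 'Anchor'
    keys, mirroring Screaming Frog's outlinks export.
    """
    return [row for row in outlink_rows if my_domain in row["Destination"]]

rows = [
    {"Source": "https://blog.example.net/post",
     "Destination": "https://mysite.com/tool", "Anchor": "great tool"},
    {"Source": "https://blog.example.net/post",
     "Destination": "https://other.com/", "Anchor": "other"},
]
print(links_to_my_site(rows, "mysite.com"))  # only the row linking to mysite.com
```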

@JustinRBriggs has a nice tidbit on checking infographic backlinks with Screaming Frog. Check out the other 17 link building tools that he mentioned, too.

How to make sure that I’m not part of a link network

Want to figure out if a group of sites are linking to each other? Check out this tutorial on visualizing link networks using Screaming Frog and Fusion Tables by @EthanLyon.

I am in the process of cleaning up my backlinks and need to verify that links are being removed as requested

Set a custom filter that contains your root domain URL, then upload your list of backlinks and run the spider in ‘List’ mode. When the spider has finished crawling, select the ‘Custom’ tab to view all of the pages that are still linking to you.

Bonus Round

Did you know that by right-clicking on any URL in the top window of your results, you could do any of the following?

  • Copy or open the URL
  • Re-crawl the URL or remove it from your crawl
  • Export URL Info, In Links, Out Links, or Image Info for that page
  • Check indexation of the page in Google, Bing and Yahoo
  • Check backlinks of the page in Majestic, OSE, Ahrefs and Blekko
  • Look at the cached version/cache date of the page
  • See older versions of the page
  • Validate the HTML of the page
  • Open robots.txt for the domain where the page is located
  • Search for other domains on the same IP

Likewise, in the bottom window, with a right-click, you can:

  • Copy or open the URL in the ‘To’ or ‘From’ column for the selected row

How to Edit Meta Data

SERP Mode allows you to preview SERP snippets by device to visually show how your meta data will appear in search results.

  • Upload URLs, titles and meta descriptions into Screaming Frog using a .CSV or Excel document
    • If you already ran a crawl for your site you can export URLs by going to ‘Reports → SERP Summary’. This will easily format the URLs and meta you want to reupload and edit.
  • Mode → SERP → Upload File
  • Edit the meta data within Screaming Frog
  • Bulk export updated meta data to send directly to developers to update

How to Crawl a JavaScript Site

It’s becoming more common for websites to be built using JavaScript frameworks like Angular, React, etc. Google strongly recommends using a rendering solution, as Googlebot still struggles to crawl JavaScript content. If you’ve identified a website built with JavaScript, follow the instructions below to crawl it.

  • Go to ‘Configuration → Spider → Rendering → JavaScript’
  • Change the rendering preferences depending on what you’re looking for: you can adjust the timeout and the window size (mobile, tablet, desktop, etc.)
  • Hit OK and crawl the website

Within the bottom navigation, click on the Rendered Page tab to view how the page is being rendered. If your page is not rendering properly, check for blocked resources or extend the timeout limit within the configuration settings. If neither option solves how your page is rendering, there may be a larger issue to uncover.

You can view and bulk export any blocked resources that may be impacting crawling and rendering of your website by going to ‘Bulk Export’ → ‘Response Codes’

View Original HTML and Rendered HTML

If you’d like to compare the raw HTML and rendered HTML to identify any discrepancies or ensure important content is located within the DOM, go to ‘Configuration’ → ‘Spider’ → ‘Advanced’ and tick ‘Store HTML’ and ‘Store Rendered HTML’.

Within the bottom window, you will be able to see the raw and rendered HTML. This can help identify issues with how your content is being rendered and viewed by crawlers.
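To spot content that exists only after rendering, a plain diff of the two HTML sources is often enough. A sketch with the standard library (the HTML snippets are illustrative):

```python
import difflib

def html_diff(raw_html, rendered_html):
    """Return unified-diff lines between raw and rendered HTML."""
    return list(difflib.unified_diff(
        raw_html.splitlines(), rendered_html.splitlines(),
        fromfile="raw", tofile="rendered", lineterm=""))

raw = "<div id='app'></div>"
rendered = "<div id='app'><h1>Hello</h1></div>"
for line in html_diff(raw, rendered):
    print(line)
```

Lines prefixed with `+` exist only in the rendered source, i.e. content injected by JavaScript that crawlers without rendering would never see.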

Tell us what else you’ve discovered!

Final Remarks

In closing, I hope that this guide gives you a better idea of what Screaming Frog can do for you. It has saved me countless hours, so I hope that it helps you, too!

By the way, I am not affiliated with Screaming Frog; I just think that it’s an awesome tool.

Still nerding out on technical SEO?

Check out our open positions.

More about me:

Aichlee Bushnell is a Seer alum. Follow her on Twitter!

For more SEO tutorials and the latest digital marketing updates, subscribe to the Seer newsletter!
