Website Crawler Download


Top 20 Web Crawling Tools to Scrape Websites Quickly


What’s Web Crawling
Web crawling (also known as web data extraction, web scraping, or screen scraping) is broadly applied in many fields today. Before web crawler tools became available to the public, crawling was a kind of magic reserved for programmers; its high technical threshold kept ordinary people locked outside the door of big data. A web scraping tool automates the crawling process, and it bridges the gap between mysterious big data and everyone else.
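To make the idea concrete, here is a minimal sketch of what a crawler does under the hood. It is illustrative only: the SITE dictionary stands in for live HTTP fetching and link parsing, which a real crawler would perform over the network.

```python
from collections import deque

# A tiny in-memory "website": page URL -> links found on that page.
# A real crawler would fetch each page over HTTP and parse links from HTML.
SITE = {
    "/": ["/about", "/products"],
    "/about": ["/"],
    "/products": ["/products/1", "/products/2"],
    "/products/1": ["/products"],
    "/products/2": ["/products", "/missing"],
}

def crawl(start):
    """Breadth-first crawl: visit every reachable page exactly once."""
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        url = queue.popleft()
        order.append(url)
        for link in SITE.get(url, []):  # unknown pages behave like 404s
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

print(crawl("/"))
```

The queue plus the `seen` set is the core of every crawler; everything a commercial tool adds (politeness delays, proxies, JavaScript rendering) wraps around this loop.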
Web Crawling Tool Helps!
No more repetitive work of copying and pasting.
Get well-structured data in a variety of formats, including Excel, HTML, and CSV.
Time-saving and cost-efficient.
It is the cure for marketers, online sellers, journalists, YouTubers, researchers, and many others who lack technical skills.
Here is the deal.
I have listed the 20 best web crawlers for you as a reference. Feel free to take full advantage of them!
Top 20 Web Crawling Tools
Web Scraping Tools
Visual Scraper
Content Grabber (by Sequentum)
Helium Scraper
Website Downloader
Cyotek Webcopy
Extension Tools
OutWit Hub
Web Scraping Services
Zyte (previously Scrapinghub)
RPA tool
Library for coders
1. Octoparse: “web scraping tool for non-coders”
Octoparse is a client-based web crawling tool to get web data into spreadsheets. With a user-friendly point-and-click interface, the software is basically built for non-coders.
How to get web data
Pre-built scrapers: to scrape data from popular websites such as Amazon, eBay, Twitter, etc. (check sample data)
Auto-detection: Enter the target URL into Octoparse and it will automatically detect the structured data and scrape it for download.
Advanced Mode: Advanced mode enables tech users to customize a data scraper that extracts target data from complex sites.
Data format: EXCEL, XML, HTML, CSV, or to your databases via API.
Octoparse gets product data, prices, blog content, contacts for sales leads, social posts, etc.
Three ways to get data using Octoparse
Important features
Scheduled cloud extraction: Extract dynamic data in real-time
Data cleaning: Built-in Regex and XPath configuration to get data cleaned automatically
Bypass blocking: Cloud services and IP Proxy Servers to bypass ReCaptcha and blocking
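As an illustration of the kind of Regex-based cleaning a tool like Octoparse automates, here is a small Python sketch; the sample inputs are hypothetical examples of messy scraped text.

```python
import re

def clean_price(raw):
    """Pull a numeric price out of messy scraped text, e.g. '  $1,299.00 USD '."""
    match = re.search(r"[\d,]+(?:\.\d+)?", raw)
    if not match:
        return None
    return float(match.group().replace(",", ""))

def clean_whitespace(raw):
    """Collapse the runs of whitespace that HTML extraction often leaves behind."""
    return re.sub(r"\s+", " ", raw).strip()

print(clean_price("  $1,299.00 USD "))       # 1299.0
print(clean_whitespace("Hello\n\t  world"))  # 'Hello world'
```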
2. 80legs
80legs is a powerful web crawling tool that can be configured based on customized requirements. It supports fetching huge amounts of data along with the option to download the extracted data instantly.
Important features
API: 80legs offers API for users to create crawlers, manage data, and more.
Scraper customization: 80legs’ JS-based app framework enables users to configure web crawls with customized behaviors.
IP servers: A collection of IP addresses is used in web scraping requests.
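The IP rotation idea behind such services can be sketched in a few lines of Python. The proxy addresses below are placeholders; a real crawler would pass the selected proxy to its HTTP client for each request.

```python
from itertools import cycle

# Hypothetical proxy pool -- real services maintain far larger lists.
PROXIES = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

proxy_pool = cycle(PROXIES)

def next_proxy():
    """Return the next proxy in round-robin order for the next request."""
    return next(proxy_pool)

# Each outgoing request gets a different exit IP, wrapping around the pool:
used = [next_proxy() for _ in range(4)]
print(used)
```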
3. ParseHub
ParseHub is a web crawler that collects data from websites that use AJAX, JavaScript, cookies, etc. Its machine learning technology can read, analyze, and then transform web documents into relevant data.
Integration: Google sheets, Tableau
Data format: JSON, CSV
Device: Mac, Windows, Linux
4. Visual Scraper
Besides the SaaS, VisualScraper offers web scraping services such as data delivery services and creating software extractors for clients. Visual Scraper enables users to schedule projects to run at a specific time or repeat the sequence every minute, day, week, month, or year. Users can use it to extract frequently updated content such as news, updates, and forum posts.
Various data formats: Excel, CSV, MS Access, MySQL, MSSQL, XML or JSON
The official website no longer appears to be updated, so this information may not be up to date.
5. WebHarvy
WebHarvy is a point-and-click web scraping software. It’s designed for non-programmers.
Scrape Text, Images, URLs & Emails from websites
Proxy support enables anonymous crawling and prevents being blocked by web servers
Data format: XML, CSV, JSON, or TSV file. Users can also export the scraped data to an SQL database
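Exporting scraped rows to CSV or JSON, as WebHarvy does, can be sketched with the Python standard library; the product records below are made up for illustration.

```python
import csv
import io
import json

# Hypothetical scraped records -- the kind of row-per-item data a scraper emits.
rows = [
    {"name": "Widget", "price": 9.99, "url": "https://example.com/widget"},
    {"name": "Gadget", "price": 19.99, "url": "https://example.com/gadget"},
]

def to_csv(records):
    """Serialize a list of dicts to CSV text, header row first."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

def to_json(records):
    """Serialize the same records as pretty-printed JSON."""
    return json.dumps(records, indent=2)

print(to_csv(rows))
print(to_json(rows))
```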
6. Content Grabber (Sequentum)
Content Grabber is web crawling software targeted at enterprises. It allows you to create stand-alone web crawling agents. Users can write and debug scripts in C# to control the crawling process programmatically.
It can extract content from almost any website and save it as structured data in a format of your choice.
Integration with third-party data analytics or reporting applications
Powerful script editing and debugging interfaces
Data formats: Excel reports, XML, CSV, and to most databases
7. Helium Scraper
Helium Scraper is visual web data crawling software. There is a 10-day trial available for new users to get started, and once you are satisfied with how it works, a one-time purchase lets you use the software for life. Basically, it can satisfy users' crawling needs at an elementary level.
Data format: Export data to CSV, Excel, XML, JSON, or SQLite
Fast extraction: Options to block images or unwanted web requests
Proxy rotation
8. Cyotek WebCopy
WebCopy is illustrative like its name. It’s a free website crawler that allows you to copy partial or full websites locally into your hard disk for offline reference.
You can change its setting to tell the bot how you want to crawl. Besides that, you can also configure domain aliases, user agent strings, default documents and more.
However, WebCopy does not include a virtual DOM or any form of JavaScript parsing. If a website makes heavy use of JavaScript, WebCopy is unlikely to make a true copy of it, and it will probably not handle dynamic, JavaScript-driven layouts correctly.
9. HTTrack
As website crawler freeware, HTTrack provides functions well suited to downloading an entire website to your PC. It has versions available for Windows, Linux, Sun Solaris, and other Unix systems, which covers most users. Interestingly, HTTrack can mirror one site, or more than one site together (with shared links). You can decide the number of connections to open concurrently while downloading web pages under "set options". You can get the photos, files, and HTML code from the mirrored website and resume interrupted downloads.
In addition, Proxy support is available within HTTrack for maximizing the speed.
HTTrack works as a command-line program, or through a shell, for both private (capture) and professional (online web mirror) use. That said, HTTrack is better suited to people with advanced programming skills.
10. Getleft
Getleft is a free and easy-to-use website grabber. It allows you to download an entire website or any single web page. After you launch Getleft, you can enter a URL and choose the files you want to download before it gets started. While it runs, it changes all the links for local browsing. Additionally, it offers multilingual support; Getleft now supports 14 languages. However, it provides only limited FTP support: it will download files, but not recursively.
On the whole, Getleft should satisfy users’ basic crawling needs without more complex tactical skills.
11. Scraper
Scraper is a Chrome extension with limited data extraction features, but it's helpful for online research. It also allows exporting the data to Google Spreadsheets. This tool is intended for both beginners and experts. You can easily copy the data to the clipboard or store it in spreadsheets using OAuth. Scraper can auto-generate XPaths for defining URLs to crawl. It doesn't offer all-inclusive crawling services, but most people don't need to tackle messy configurations anyway.
12. OutWit Hub
OutWit Hub is a Firefox add-on with dozens of data extraction features to simplify your web searches. This web crawler tool can browse through pages and store the extracted information in a proper format.
OutWit Hub offers a single interface for scraping tiny or huge amounts of data, per your needs. OutWit Hub allows you to scrape any web page from the browser itself. It can even create automatic agents to extract data.
It is one of the simplest web scraping tools, which is free to use and offers you the convenience to extract web data without writing a single line of code.
13. Scrapinghub (Now Zyte)
Scrapinghub is a cloud-based data extraction tool that helps thousands of developers to fetch valuable data. Its open-source visual scraping tool allows users to scrape websites without any programming knowledge.
Scrapinghub uses Crawlera, a smart proxy rotator that supports bypassing bot counter-measures to crawl huge or bot-protected sites easily. It enables users to crawl from multiple IPs and locations without the pain of proxy management through a simple HTTP API.
Scrapinghub converts the entire web page into organized content. Its team of experts is available to help in case its crawl builder can't meet your requirements.
14. Dexi.io
As a browser-based web crawler, Dexi.io allows you to scrape data from any website in your browser, and it provides three types of robots for creating a scraping task: Extractor, Crawler, and Pipes. The freeware provides anonymous web proxy servers for your scraping, and your extracted data is hosted on Dexi.io's servers for two weeks before being archived, or you can directly export the extracted data to JSON or CSV files. It offers paid services to meet your needs for real-time data.
15. Webhose.io
Webhose.io enables users to get real-time data by crawling online sources from all over the world into various clean formats. This web crawler lets you crawl data and extract keywords in many different languages, using multiple filters that cover a wide array of sources.
You can save the scraped data in XML, JSON, and RSS formats, and you can access historical data from its Archive. Webhose.io supports up to 80 languages in its crawling results, and users can easily index and search the structured data it crawls.
On the whole, Webhose.io can satisfy users' elementary crawling requirements.
16. Import.io
Users are able to form their own datasets by simply importing the data from a particular web page and exporting the data to CSV.
You can easily scrape thousands of web pages in minutes without writing a single line of code and build 1000+ APIs based on your requirements. Its public APIs provide powerful, flexible capabilities for programmatic control and automated access to the data, and Import.io has made crawling easier by integrating web data into your own app or website with just a few clicks.
To better serve users’ crawling requirements, it also offers a free app for Windows, Mac OS X and Linux to build data extractors and crawlers, download data and sync with the online account. Plus, users are able to schedule crawling tasks weekly, daily, or hourly.
17. Spinn3r
Spinn3r allows you to fetch entire data sets from blogs, news & social media sites, and RSS & ATOM feeds. Spinn3r is distributed with a firehose API that manages 95% of the indexing work. It offers advanced spam protection, which removes spam and inappropriate language use, thus improving data safety.
Spinn3r indexes content similarly to Google and saves the extracted data in JSON files. The web scraper constantly scans the web and finds updates from multiple sources to get you real-time publications. Its admin console lets you control crawls, and full-text search allows complex queries on raw data.
RPA Tool
18. UiPath
UiPath is robotic process automation software for free web scraping. It automates web and desktop data crawling for most third-party apps. You can install it if you run Windows. UiPath can extract tabular and pattern-based data across multiple web pages.
UiPath provides built-in tools for further crawling. This method is very effective when dealing with complex UIs. The Screen Scraping Tool can handle individual text elements, groups of text, and blocks of text, such as data extraction in table format.
Plus, no programming is needed to create intelligent web agents, but the hacker inside you will have complete control over the data.
Library for programmers
19. Scrapy
Scrapy is an open-sourced framework that runs on Python. The library offers a ready-to-use structure for programmers to customize a web crawler and extract data from the web at a large scale. With Scrapy, you will enjoy flexibility in configuring a scraper that meets your needs, for example, to define exactly what data you are extracting, how it is cleaned, and in what format it will be exported.
On the other hand, you will face multiple challenges along the web scraping process and will need to put in effort to maintain the crawler. With that said, you may want to start with some real data scraping practice in Python.
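Scrapy itself has to be installed separately, so as a self-contained illustration of the extraction step a Scrapy spider performs in its parse() callback, here is a standard-library-only sketch that pulls the title and links from a page (Scrapy provides selectors, scheduling, and export on top of this kind of parsing):

```python
from html.parser import HTMLParser

class LinkAndTitleParser(HTMLParser):
    """Collect <a href> links and the <title> text from one HTML page --
    the kind of extraction a spider's parse() callback performs."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.title = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def parse_page(html):
    """Return an 'item' dict plus the links a crawler would follow next."""
    p = LinkAndTitleParser()
    p.feed(html)
    return {"title": p.title, "links": p.links}

page = "<html><head><title>Demo</title></head><body><a href='/a'>A</a><a href='/b'>B</a></body></html>"
print(parse_page(page))
```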
20. Puppeteer
Puppeteer is a Node library developed by Google. It provides an API for programmers to control Chrome or Chromium over the DevTools Protocol, enabling them to build web scraping tools with Puppeteer. If you are new to programming, you may want to spend some time on tutorials introducing how to scrape the web with Puppeteer.
Besides web scraping, Puppeteer is also used to:
get screenshots or PDFs of web pages
automate form submission/data input
create a tool for automatic testing
15 BEST Website Crawler Tools in 2021 [Free & Paid] - Guru99


A web crawler is an internet bot that browses the WWW (World Wide Web). It is sometimes called a spiderbot or spider. Its main purpose is to index web pages.
Web crawlers enable you to boost your SEO ranking, visibility, and conversions. They can find broken links, duplicate content, and missing page titles, and recognize major SEO problems. There is a vast range of web crawler tools designed to effectively crawl data from any website URL. These apps help you to improve website structure so that search engines understand it, improving rankings.
Following is a handpicked list of top web crawlers, with their popular features and website links to download them. The list contains both open-source (free) and commercial (paid) software.
Best Web Crawler Tools & Software
1) Visualping
Visualping is a website monitoring tool that crawls the web for changes. Use Visualping in your SEO strategy to monitor changes on SERPs, competitor landing pages and Google algorithm updates.
You can automatically monitor parts of a webpage or entire pages in bulk.
Track your competitors' and clients' keyword edits on title, meta, H1 and other tags.
Receive notifications via email, Slack, Teams or Discord.
Monitor visual, text and code changes.
Provide complete SEO reports and change audits to your clients.
Use other SEO tools to collect data and Visualping to alert you of the changes.
2) Semrush
Semrush is a website crawler tool that analyzes the pages and structure of your website to identify technical SEO issues. Fixing these issues helps to improve your search performance. Apart from this service, it also offers tools for SEO, market research, SMM, and advertising.
It will test for Metadata, HTTP/HTTPS, Directives, Status codes, Duplicate content, Page response time, Internal linking, Image sizes, Structured data, Site structure, etc
Provides easy to use interface
It helps you to analyze log files.
This application has a dashboard that enables you to view website issues with ease.
Enables you to audit your website without any hassle.
3) Sitechecker
Sitechecker is a website SEO checker that helps you to improve SEO ratings. It provides an on-page SEO audit report that can be sent to clients.
This web crawler tool can scan internal and external links on your website.
It helps you to test the speed of your site.
You can visualize the structure of a web page with ease.
It also allows you to check indexing issues on landing pages.
It helps you to protect your site from hacker attacks.
4) ContentKing
ContentKing is an app that enables you to perform real-time SEO monitoring and auditing. This application can be used without installing any software.
It helps you to structure your site with segments.
You can monitor your website changes.
It integrates with APIs such as Google Search Console and Google Analytics.
It provides a user-friendly dashboard.
It helps you to collaborate with your clients or colleagues.
5) Link-Assistant
Link-Assistant is a website crawler tool that provides website analysis and optimization facilities. It helps you to make your site work seamlessly. This application enables you to find the most visited pages of your website.
Provides site optimization reports that help you to boost your business productivity.
You can customize this tool according to your desire.
Easy to configure your site settings.
Helps you to make your website search engine friendly.
It can optimize a site in any language.
6) Hexometer
Hexometer is a web crawling tool that can monitor your website performance. It enables you to share tasks and issues with your team members.
It can check the security problems of your website.
Offers intuitive dashboard.
This application can perform white label SEO.
Hexometer can optimize for SERP (Search Engine Results Page).
This software can be integrated with Telegram, Slack, Chrome, Gmail, etc.
It helps you to keep track of your website changes.
7) Screaming Frog
Screaming Frog is a website crawler that enables you to crawl URLs. It is one of the best web crawlers for analyzing and auditing technical and onsite SEO. You can use this tool to crawl up to 500 URLs for free.
It instantly finds broken links and server errors.
This free web crawler tool helps you to analyze page titles and metadata.
You can update and collect data from a web page using XPath (XML Path Language).
Screaming Frog helps you to find duplicate content.
You can generate XML Sitemaps (a list of your website’s URLs).
This website crawler allows you to integrate with Google Analytics, GSC (Google Search Console) & PSI (PageSpeed Insights).
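XML sitemap generation, one of the features listed above, boils down to emitting the sitemaps.org schema. A minimal sketch in Python, with made-up example URLs:

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Build a minimal XML sitemap following the sitemaps.org 0.9 schema."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
    return ET.tostring(urlset, encoding="unicode")

sitemap = build_sitemap(["https://example.com/", "https://example.com/about"])
print(sitemap)
```

A full generator would also emit optional tags such as lastmod and changefreq per URL; the structure stays the same.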
8) Deepcrawl
DeepCrawl is a cloud-based tool that helps you to read and crawl your website content. It enables you to understand and monitor the technical issues of the website to improve SEO performance.
It supports multi-domain monitoring.
This online web crawler provides customized dashboards.
This website crawler tool helps you to index and discover your web pages.
Deepcrawl enables you to increase the loading speed of your website.
This app provides a ranking, traffic, and summary data to view the performance of the website.
9) WildShark SEO Spider Tool
WildShark SEO Spider Tool is a URL crawling app that helps you to identify pages with duplicate description tags. You can also use it to find missing and duplicate titles.
Highlight missing H3 tags, title tags, and ALT tags.
It helps you to improve on-page SEO performance.
You can optimize your web page titles and descriptions.
WildShark SEO Spider tool enables you to boost website conversion rates.
This tool also looks for missing alt tags.
10) Scraper
Scraper is a Chrome extension that helps you to perform online research and get data into a CSV file quickly. This tool enables you to copy data to the clipboard as tab-separated values.
It can fix the issue with spreadsheet titles ending.
This website crawler tool can capture rows containing TD elements (HTML table cells).
Scraper is an easy-to-use tool for people who are comfortable with the XPath query language.
11) Visual SEO Studio
Visual SEO Studio is a web crawling tool that crawls exactly like a search spider. It provides a suite to inspect your website quickly.
It helps you to audit a backlink profile.
This web crawler freeware tool can also crawl the website having AJAX (Asynchronous JavaScript and XML).
Visual SEO Studio can audit XML Sitemaps by web content.
This tool helps you to capture data from search engines and e-commerce websites. It provides flexible web data collection features.
Allows you to customize according to your business needs.
This web crawler software can effectively handle all captchas.
This tool can fetch data from complex sites.
It is easy to scale without managing IPs (proxy IP addresses).
13) 80legs
80legs is a crawling web service that enables you to create and run web crawls through SaaS. It is one of the best free online web crawler tools, and it consists of numerous servers that allow you to access sites from different IP addresses.
It helps you to design and run custom web crawls.
This tool enables you to monitor trends online.
You can build your own templates.
Automatically control the crawling speed according to website traffic.
80legs enables you to download results to the local environment or computer.
You can crawl the website just by entering a URL.
14) Dyno Mapper
DYNO Mapper is a web-based crawling software. It helps you to create an interactive visual site map that displays the hierarchy.
This online Website Crawler tool can track the website from tablets, mobile devices, and desktop.
This web crawler software helps you to understand the weakness of your website or application.
Dyno Mapper enables you to crawl private pages of password-protected websites.
You can track keyword results for local and international keyword rankings.
It enables developers to develop search engine friendly websites.
15) Oncrawl
Oncrawl is a simple app that analyzes your website and finds all the factors that block the indexation of your web pages. It helps you to find SEO issues in less time.
You can import HTML, content, and architecture to crawl pages of your website.
This online web crawler can detect duplicate content on any website.
Oncrawl can crawl the website with JavaScript code.
This tool can handle robots.txt, a file that tells search engines which pages on your site to crawl.
You can choose two crawls to compare and measure the effect of new policies on your website.
It can monitor website performance.
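Handling robots.txt, as described above, is a standard crawler task, and Python's urllib.robotparser covers it. In this sketch the file is parsed from a string rather than fetched over HTTP; the rules and URLs are invented for illustration.

```python
from urllib.robotparser import RobotFileParser

# A sample robots.txt, supplied inline instead of fetched from a site.
robots_txt = """\
User-agent: *
Disallow: /private/
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A polite crawler checks can_fetch() before requesting each URL.
print(rp.can_fetch("MyCrawler", "https://example.com/page"))       # True
print(rp.can_fetch("MyCrawler", "https://example.com/private/x"))  # False
```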
16) Cocoscan
Cocoscan is a software product that analyzes your website and finds the factors that block the indexation of your web pages. This crawler tool can find the primary SEO-related issues in less time.
It can identify important keyword density.
Cocoscan can check for duplicate written content on any website.
This web crawler app can analyze your website and make it searchable by search engines.
This crawler app provides you a list of pages with issues that could affect your website.
You can increase Google ranking effortlessly.
This online web crawler offers a real-time visual image of a responsive website.
17) HTTrack
HTTrack is an open-source web crawler that allows users to download websites from the internet to a local system. It is one of the best web spidering tools that helps you to build a structure of your website.
This site crawler tool uses web crawlers to download websites.
This program provides two versions: command line and GUI.
HTTrack follows the links which are generated with JavaScript.
18) WebHarvy
Webharvy is a website crawling tool that helps you to extract HTML, images, text, and URLs from the site. It automatically finds patterns of data occurring in a web page.
This free website crawler can handle form submission, login, etc.
You can extract data from more than one page, keywords, and categories.
Webharvy has built-in VPN (Virtual Private Network) support.
It can detect the pattern of data in web pages.
You can save extracted data in numerous formats.
Crawling multiple pages is possible.
It helps you to run JavaScript code in the browser.
FAQs
❓ What is a Web Crawler?
A Web Crawler is an Internet bot that browses the WWW (World Wide Web), downloading and indexing content. It is widely used to visit each webpage on the web and retrieve information. It is sometimes called a spider bot or spider. Its main purpose is to index web pages.
❗ What is a Web Crawler used for?
A Web crawler is used to boost SEO ranking, visibility, and conversions. It is also used to find broken links, duplicate content, and missing page titles, and to recognize major SEO problems. Web crawler tools are designed to effectively crawl data from any website URL. These apps help you to improve website structure so that it is understandable by search engines, improving rankings.
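Broken-link detection, one of the uses just mentioned, ultimately reduces to classifying HTTP status codes. This sketch uses pre-recorded codes for made-up URLs in place of live requests, which a real checker would issue with an HTTP client.

```python
# Status codes for a set of crawled URLs. A real checker would issue HTTP
# HEAD/GET requests and record each response code.
crawled = {
    "https://example.com/": 200,
    "https://example.com/old-page": 404,
    "https://example.com/moved": 301,
    "https://example.com/api": 500,
}

def classify(results):
    """Split crawl results into OK pages, redirects, and broken links."""
    report = {"ok": [], "redirect": [], "broken": []}
    for url, code in sorted(results.items()):
        if code >= 400:
            report["broken"].append(url)   # client and server errors
        elif code >= 300:
            report["redirect"].append(url) # permanent/temporary redirects
        else:
            report["ok"].append(url)
    return report

print(classify(crawled))
```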
Which are the best Website Crawler tools?
Following are some of the best website crawler tools:
Screaming Frog
How to choose the best Website Crawler?
You should consider the following factors while choosing the best website crawler:
Easy to use User Interface
Features offered
A web crawler must detect the robots.txt file and sitemap easily
It should find broken pages and links with ease
It must identify redirect issues, and HTTP/ HTTPS issues
A web crawler should be able to connect with Google Analytics with ease
It must detect mobile elements
It should support multiple file formats
A web crawler must support multiple devices
Screaming Frog SEO Spider Website Crawler


The industry leading website crawler for Windows, macOS and Ubuntu, trusted by thousands of SEOs and agencies worldwide for technical SEO site audits.
Buy & Renew
User Guide
SEO Spider Tool
The Screaming Frog SEO Spider is a website crawler that helps you improve onsite SEO, by extracting data & auditing for common SEO issues. Download & crawl 500 URLs for free, or buy a licence to remove the limit & access advanced features.
Free Vs Paid
What can you do with the SEO Spider Tool?
The SEO Spider is a powerful and flexible site crawler, able to crawl both small and very large websites efficiently, while allowing you to analyse the results in real-time. It gathers key onsite data to allow SEOs to make informed decisions.
Find Broken Links, Errors & Redirects
Analyse Page Titles & Meta Data
Review Meta Robots & Directives
Audit hreflang Attributes
Discover Exact Duplicate Pages
Generate XML Sitemaps
Site Visualisations
Crawl Limit
Crawl Configuration
Save Crawls & Re-Upload
JavaScript Rendering
Crawl Comparison
Near Duplicate Content
AMP Crawling & Validation
Structured Data & Validation
Spelling & Grammar Checks
Custom Source Code Search
Custom Extraction
Google Analytics Integration
Search Console Integration
PageSpeed Insights Integration
Link Metrics Integration
Forms Based Authentication
Store & View Raw & Rendered HTML
Free Technical Support
Price per licence
Licences last 1 year. After that you will be required to renew your licence.
Free Version
Crawl Limit – 500 URLs
Paid Version
Crawl Limit – Unlimited*
* The maximum number of URLs you can crawl is dependent on allocated memory and storage. Please see our FAQ.
” Out of the myriad of tools we use at iPullRank I can definitively say that I only use the Screaming Frog SEO Spider every single day. It’s incredibly feature-rich, rapidly improving and I regularly find a new use case. I can’t endorse it strongly enough. ”
Mike King
Founder, iPullRank
” The Screaming Frog SEO Spider is my “go to” tool for initial SEO audits and quick validations: powerful, flexible and low-cost. I couldn’t recommend it more. ”
Aleyda Solis
Owner, Orainti
The SEO Spider Tool Crawls & Reports On…
The Screaming Frog SEO Spider is an SEO auditing tool, built by real SEOs with thousands of users worldwide. A quick summary of some of the data collected in a crawl includes:
Errors – Client errors such as broken links & server errors (No responses, 4XX client & 5XX server errors).
Redirects – Permanent, temporary, JavaScript redirects & meta refreshes.
Blocked URLs – View & audit URLs disallowed by robots.txt.
Blocked Resources – View & audit blocked resources in rendering mode.
External Links – View all external links, their status codes and source pages.
Security – Discover insecure pages, mixed content, insecure forms, missing security headers and more.
URI Issues – Non-ASCII characters, underscores, uppercase characters, parameters, or long URLs.
Duplicate Pages – Discover exact and near duplicate pages using advanced algorithmic checks.
Page Titles – Missing, duplicate, long, short or multiple title elements.
Meta Description – Missing, duplicate, long, short or multiple descriptions.
Meta Keywords – Mainly for reference or regional search engines, as they are not used by Google, Bing or Yahoo.
File Size – Size of URLs & Images.
Response Time – View how long pages take to respond to requests.
Last-Modified Header – View the last modified date in the HTTP header.
Crawl Depth – View how deep a URL is within a website’s architecture.
Word Count – Analyse the number of words on every page.
H1 – Missing, duplicate, long, short or multiple headings.
H2 – Missing, duplicate, long, short or multiple headings
Meta Robots – Index, noindex, follow, nofollow, noarchive, nosnippet etc.
Meta Refresh – Including target page and time delay.
Canonicals – Link elements & canonical HTTP headers.
X-Robots-Tag – See directives issued via the HTTP header.
Pagination – View rel=“next” and rel=“prev” attributes.
Follow & Nofollow – View meta nofollow, and nofollow link attributes.
Redirect Chains – Discover redirect chains and loops.
hreflang Attributes – Audit missing confirmation links, inconsistent & incorrect language codes, non-canonical hreflang and more.
Inlinks – View all pages linking to a URL, the anchor text and whether the link is follow or nofollow.
Outlinks – View all pages a URL links out to, as well as resources.
Anchor Text – All link text. Alt text from images with links.
Rendering – Crawl JavaScript frameworks like AngularJS and React, by crawling the rendered HTML after JavaScript has executed.
AJAX – Select to obey Google’s now deprecated AJAX Crawling Scheme.
Images – All URLs with the image link & all images from a given page. Images over 100kb, missing alt text, alt text over 100 characters.
User-Agent Switcher – Crawl as Googlebot, Bingbot, Yahoo! Slurp, mobile user-agents or your own custom UA.
Custom HTTP Headers – Supply any header value in a request, from Accept-Language to cookie.
Custom Source Code Search – Find anything you want in the source code of a website! Whether that’s Google Analytics code, specific text, or code etc.
Custom Extraction – Scrape any data from the HTML of a URL using XPath, CSS Path selectors or regex.
Google Analytics Integration – Connect to the Google Analytics API and pull in user and conversion data directly during a crawl.
Google Search Console Integration – Connect to the Google Search Analytics API and collect impression, click and average position data against URLs.
PageSpeed Insights Integration – Connect to the PSI API for Lighthouse metrics, speed opportunities, diagnostics and Chrome User Experience Report (CrUX) data at scale.
External Link Metrics – Pull external link metrics from Majestic, Ahrefs and Moz APIs into a crawl to perform content audits or profile links.
XML Sitemap Generation – Create an XML sitemap and an image sitemap using the SEO spider.
Custom robots.txt – Download, edit and test a site’s robots.txt.
Rendered Screen Shots – Fetch, view and analyse the rendered pages crawled.
Store & View HTML & Rendered HTML – Essential for analysing the DOM.
AMP Crawling & Validation – Crawl AMP URLs and validate them, using the official integrated AMP Validator.
XML Sitemap Analysis – Crawl an XML Sitemap independently or part of a crawl, to find missing, non-indexable and orphan pages.
Visualisations – Analyse the internal linking and URL structure of the website, using the crawl and directory tree force-directed diagrams and tree graphs.
Structured Data & Validation – Extract & validate structured data against specifications and Google search features.
Spelling & Grammar – Spell & grammar check your website in over 25 different languages.
Crawl Comparison – Compare crawl data to see changes in issues and opportunities to track technical SEO progress. Compare site structure, detect changes in key elements and metrics and use URL mapping to compare staging against production sites.
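The Custom Extraction feature above pulls data out of a page’s HTML using XPath, CSS Path selectors or regex. The regex mode can be illustrated in a few lines of Python; the page fragment, tracking ID and price markup below are hypothetical examples, not output from the SEO Spider itself.

```python
import re

# A hypothetical page fragment; in practice this would be the fetched HTML of a URL.
html = """
<html><head>
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-12345-6"></script>
<title>Example Product</title>
</head>
<body><span class="price">£149.00</span></body></html>
"""

# Regex-based custom extraction, in the spirit of the SEO Spider's regex mode:
# pull the analytics tracking ID and the displayed price from the source code.
ga_id = re.search(r"UA-\d+-\d+", html)
price = re.search(r'<span class="price">([^<]+)</span>', html)

print(ga_id.group(0))   # UA-12345-6
print(price.group(1))   # £149.00
```

XPath and CSS Path modes work the same way conceptually, but match against the parsed DOM rather than the raw source text.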
” I’ve tested nearly every SEO tool that has hit the market, but I can’t think of any I use more often than Screaming Frog. To me, it’s the Swiss Army Knife of SEO Tools. From uncovering serious technical SEO problems to crawling top landing pages after a migration to uncovering JavaScript rendering problems to troubleshooting international SEO issues, Screaming Frog has become an invaluable resource in my SEO arsenal. I highly recommend Screaming Frog for any person involved in SEO. ”
” Screaming Frog Web Crawler is one of the essential tools I turn to when performing a site audit. It saves time when I want to analyze the structure of a site, or put together a content inventory for a site, where I can capture how effective a site might be towards meeting the informational or situation needs of the audience of that site. I usually buy a new edition of Screaming Frog on my birthday every year, and it is one of the best birthday presents I could get myself. ”
Bill Slawski
Director, Go Fish Digital
About The Tool
The Screaming Frog SEO Spider is a fast and advanced SEO site audit tool. It can be used to crawl both small and very large websites, where manually checking every page would be extremely labour intensive, and where you can easily miss a redirect, meta refresh or duplicate page issue. You can view, analyse and filter the crawl data as it’s gathered and updated continuously in the program’s user interface.
The SEO Spider allows you to export key onsite SEO elements (URL, page title, meta description, headings etc) to a spreadsheet, so it can easily be used as a base for SEO recommendations. Check out our demo video above.
Crawl 500 URLs For Free
The ‘lite’ version of the tool is free to download and use. However, this version is restricted to crawling up to 500 URLs in a single crawl and it does not give you full access to the configuration, saving of crawls, or advanced features such as JavaScript rendering, custom extraction, Google Analytics integration and much more. You can crawl 500 URLs from the same website, or as many websites as you like, as many times as you like, though!
For just £149 per year you can purchase a licence, which removes the 500 URL crawl limit, allows you to save crawls, and opens up the spider’s configuration options and advanced features.
Alternatively, hit the ‘buy a licence’ button in the SEO Spider to buy a licence after downloading and trialling the software.
FAQ & User Guide
The SEO Spider crawls sites like Googlebot discovering hyperlinks in the HTML using a breadth-first algorithm. It uses a configurable hybrid storage engine, able to save data in RAM and disk to crawl large websites. By default it will only crawl the raw HTML of a website, but it can also render web pages using headless Chromium to discover content and links.
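The breadth-first discovery described above can be sketched in a few lines. This is a minimal illustration over an in-memory link graph, not the SEO Spider’s implementation: a real crawler would fetch each URL over HTTP and parse the `<a href>` links out of the HTML instead of reading a dictionary.

```python
from collections import deque

# Hypothetical site link graph: page -> pages it links to.
links = {
    "/": ["/about", "/blog"],
    "/about": ["/"],
    "/blog": ["/blog/post-1", "/blog/post-2"],
    "/blog/post-1": ["/blog"],
    "/blog/post-2": ["/blog", "/about"],
}

def crawl(start):
    """Breadth-first crawl: shallower pages are discovered before deeper ones."""
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        page = queue.popleft()
        order.append(page)
        for url in links.get(page, []):
            if url not in seen:   # visit each URL only once
                seen.add(url)
                queue.append(url)
    return order

print(crawl("/"))
# ['/', '/about', '/blog', '/blog/post-1', '/blog/post-2']
```

Note how both top-level pages are crawled before any of the deeper blog posts; that depth-by-depth ordering is what breadth-first means in practice.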
For more guidance and tips on how to use the Screaming Frog SEO crawler –
Please read our quick-fire getting started guide.
Please see our recommended hardware, user guide, tutorials and FAQ. Please also watch the demo video embedded above!
Check out our tutorials, including how to use the SEO Spider as a broken link checker, duplicate content checker, website spelling & grammar checker, generating XML Sitemaps, crawling JavaScript, testing, web scraping, crawl comparison and crawl visualisations.
Keep updated with future releases by subscribing to our RSS feed, joining our mailing list below and following us on Twitter @screamingfrog.
Support & Feedback
If you have any technical problems, feedback or feature requests for the SEO Spider, then please just contact us via our support. We regularly update the SEO Spider and currently have lots of new features in development!

Frequently Asked Questions about website crawler download

How do I crawl an entire website?

The six steps to crawling a website are:
1. Configuring the URL sources.
2. Understanding the domain structure.
3. Running a test crawl.
4. Adding crawl restrictions.
5. Testing your changes.
6. Running your crawl.
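The crawl-restriction step above (and testing it before the full crawl) can be sketched as a URL filter; the domain and exclude patterns here are hypothetical examples you would replace with your own.

```python
import re
from urllib.parse import urlparse

# Hypothetical crawl restrictions: stay on one domain and skip URL
# patterns that only produce duplicate or low-value pages.
ALLOWED_DOMAIN = "example.com"
EXCLUDE_PATTERNS = [re.compile(r"/tag/"), re.compile(r"\?replytocom=")]

def should_crawl(url):
    """Return True if the crawler is allowed to fetch this URL."""
    parsed = urlparse(url)
    if parsed.hostname != ALLOWED_DOMAIN:
        return False
    path_and_query = parsed.path + ("?" + parsed.query if parsed.query else "")
    return not any(p.search(path_and_query) for p in EXCLUDE_PATTERNS)

print(should_crawl("https://example.com/blog/post"))  # True
print(should_crawl("https://example.com/tag/seo"))    # False
print(should_crawl("https://other.com/blog"))         # False
```

Running a handful of known-good and known-bad URLs through the filter like this is exactly the kind of test worth doing before launching the full crawl.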

How can I crawl data from a website online?

Use the following guide to start:
1. Enter a valid domain name and press the “start” button.
2. Use robots.txt and sitemap. …
3. Watch how the site crawler collects data and arranges SEO errors in reports in real time.
4. Analyze the generated SEO reports and the issues found.
5. Fix the errors and re-crawl to validate the changes.

What is the best web crawler?

Top 20 web crawler tools to scrape websites:
1. Cyotek WebCopy – a free website crawler that allows you to copy partial or full websites locally onto your hard disk for offline reading.
2. HTTrack
3. Octoparse
4. Getleft
5. Scraper
6. OutWit Hub
7. ParseHub
8. Visual Scraper
…

About the author


If you’re an SEO / IM geek like us, then you’ll love our updates and our website. Follow us for the latest news in the world of web automation tools & proxy servers!

By proxyreview
