DubBot inventories website content by visiting a URL supplied to the application and following every link on that page. If a link points to the same domain specified in the Site settings, DubBot downloads a version of the page for analysis.

DubBot then follows every link within each downloaded page, continuing to traverse and download pages throughout your website. This process repeats until no unvisited links remain within the provided domain.
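The traversal described above can be sketched as a breadth-first walk over a link graph. This is an illustrative model only, not DubBot's actual implementation: the `PAGES` dictionary stands in for fetched webpages, and the domain check mirrors the Site settings rule that only same-domain pages are downloaded.

```python
from collections import deque
from urllib.parse import urlparse

# Hypothetical in-memory site: each page maps to the links it contains.
PAGES = {
    "https://example.com/": ["https://example.com/about", "https://other.org/"],
    "https://example.com/about": ["https://example.com/", "https://example.com/team"],
    "https://example.com/team": [],
}

def crawl(start_url):
    """Breadth-first traversal that only inventories pages on the start domain."""
    domain = urlparse(start_url).netloc
    seen = {start_url}
    queue = deque([start_url])
    inventory = []
    while queue:
        url = queue.popleft()
        if urlparse(url).netloc != domain:
            continue  # off-domain links are discovered but never downloaded
        inventory.append(url)  # stands in for "download a version of the page"
        for link in PAGES.get(url, []):
            if link not in seen:  # never visit the same page twice
                seen.add(link)
                queue.append(link)
    return inventory

print(crawl("https://example.com/"))
# → ['https://example.com/', 'https://example.com/about', 'https://example.com/team']
```

Note that the off-domain link (`https://other.org/`) is seen by the crawler but excluded from the inventory, just as the article describes.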

Crawling websites in this way is common practice for search engines, which analyze webpages to determine search rankings.

A common analogy is that crawling a website is like a spider crawling a web.

Please note: DubBot can also be configured to crawl a specified list of URLs via manual upload, CSV upload, or a supplied sitemap. In these cases, the crawler will only inventory the URLs explicitly provided in those sources.
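For the CSV case, the restriction works roughly like the sketch below: only the listed URLs are inventoried, and no further links are followed. The `url` column name and the file contents here are hypothetical, used purely for illustration.

```python
import csv
import io

# Hypothetical CSV upload: a header row followed by one URL per line.
csv_data = "url\nhttps://example.com/\nhttps://example.com/contact\n"

def urls_from_csv(text):
    """Return only the explicitly listed URLs; nothing else is crawled."""
    reader = csv.DictReader(io.StringIO(text))
    return [row["url"] for row in reader]

print(urls_from_csv(csv_data))
# → ['https://example.com/', 'https://example.com/contact']
```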
