Crawling a Development Site

How to use DubBot's authentication settings to crawl a dev site.


With DubBot, you can configure automated accessibility testing for your web development site, in most cases even if it's behind a login.


A robots.txt file is a tool websites use to tell automated crawlers, such as search engines, which parts of the site should not be explored or indexed.

For DubBot to access your development site, you might need to configure it to ignore robots.txt. To do this, navigate to Settings and then to the Advanced tab in the DubBot app, and uncheck Obey robots.txt.
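For context, robots.txt is a plain-text file served at the root of a site. A minimal illustrative example (not specific to DubBot) that blocks all crawlers from the entire site looks like this:

```
User-agent: *
Disallow: /
```

Rules like this are common on development sites, which is why DubBot may need to ignore robots.txt in order to crawl them.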

Screenshot of the Advanced tab setup choices in the DubBot app. A red box highlights the location of the Advanced link, and a red arrow points to the checkbox item Obey robots.txt.

For more information, check out our Help Article all about robots.txt.

DubBot's Static IP

If the DubBot crawler still cannot access your development site, our crawler may be blocked. To address this, your server administrator can allow-list the following static IPs:

The static IP for the crawler is:

The static IP for the proxy is:
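How you allow-list these IPs depends on your server. As one hedged sketch (not a DubBot-provided configuration), an administrator running nginx might restrict a dev site to the crawler and proxy addresses like this; the IPs below are documentation-range placeholders, so substitute DubBot's actual static IPs:

```nginx
# Sketch only: allow DubBot's crawler and proxy, deny all other traffic.
# 203.0.113.10 and 203.0.113.20 are placeholders, not DubBot's real IPs.
location / {
    allow 203.0.113.10;  # crawler (placeholder)
    allow 203.0.113.20;  # proxy (placeholder)
    deny  all;
}
```

Other servers (Apache, a firewall, a WAF) have equivalent allow-list directives; the key point is that both IPs must be permitted.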

Crawling Authentication

If your development site requires a username and password to log in, DubBot may still be able to crawl it by enabling crawler authentication. If this setting is not available on the Advanced tab in your Site settings, you will need to reach out to DubBot Support to get that feature set up in your organization's account.

The option to enable crawler authentication is visible under the heading Crawler Authentication. The radio button is checked, and additional settings for the authenticator are visible.

You can find more details on how to use crawler authentication with DubBot in our Crawling Behind Log in help article.
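What crawler authentication sends depends on how your site's login works; one common scheme for dev sites is HTTP Basic authentication. As a rough illustration (the credentials below are hypothetical, and this is not DubBot's internal code), this is the Authorization header a crawler would attach to each request under Basic auth:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Return the HTTP Basic Authorization header value for the
    given credentials (what a crawler sends on each request)."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Hypothetical dev-site credentials, for illustration only.
print(basic_auth_header("devuser", "s3cret"))
```

If your site uses a login form or single sign-on instead, the mechanics differ, which is why the help article linked above covers the supported authentication options in detail.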

Still Need Help?

If you have questions, please reach out to our DubBot Support team via email or via the blue chat bubble in the lower right corner of your screen. We are here to help!
