With DubBot, you can configure automated accessibility testing for your web development site, and in most cases this works even if the site is behind a login.
Robots.txt
A robots.txt file is a tool used by websites to tell search engines which parts of their site should not be explored or indexed.
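For example, a site that wants to keep all crawlers out entirely might publish a robots.txt like the following (a hypothetical example; your site's file will vary):

```
# Tell all crawlers to stay out of the entire site
User-agent: *
Disallow: /
```

A development site with rules like these would block DubBot by default, which is why the setting below exists.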
For DubBot to access your development site, you may need to configure it to ignore robots.txt. To do this, navigate to the site's Settings and then to the Advanced tab in the DubBot app, and uncheck Obey robots.txt.
For more information, check out our Help Article all about robots.txt.
DubBot's Static IPs
If the DubBot crawler still cannot access your development site, our crawler may be blocked by your server. To address this, your server administrator can allow-list our crawler using the following static IPs:
The static IP for the crawler is: 34.213.65.175
The static IP for the proxy is: 52.36.131.191
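How you allow-list these IPs depends on your server. As one illustration, on a server running nginx with IP-based access control, the rules might look like this (a sketch only; adapt it to your own firewall or web server configuration):

```
# Allow DubBot's crawler and proxy through IP-based access control
allow 34.213.65.175;  # DubBot crawler
allow 52.36.131.191;  # DubBot proxy
```

If your development site is protected at the firewall level instead, your administrator would add equivalent allow rules there.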
Crawling Authentication
If your development site requires a username and password to log in, DubBot may still be able to crawl it if you enable crawler authentication. If this setting is not available on the Advanced tab in the Site settings, reach out to DubBot Support to have that feature set up for your organization's account.
You can find more details on how to use crawler authentication with DubBot in our Crawling Behind Log in help article.
Still Need Help?
If you have questions, please reach out to our DubBot Support team via email at help@dubbot.com or via the blue chat bubble in the lower right corner of your screen. We are here to help!