A wild invite appears…
Nice. The scope is big, with multiple domains and a wildcard (*) - meaning all subdomains are in scope except those explicitly listed as out of scope. Where do we begin? First things first: set our scanners off. Here is a list of scanners I use and what they do.
- https://github.com/aboul3la/Sublist3r - Almost everyone knows what this tool is/does. Sublist3r is an easy-to-use subdomain scanner, although be warned it doesn't find as many results as other scanners.
- https://github.com/guelfoweb/knock - Another simple subdomain scanner. You can either use the internal wordlist or supply your own.
- https://github.com/michenriksen/aquatone - This is the last tool I run. AquaTone can not only discover subdomains via multiple methods, but can also take screenshots of each domain found.
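Once the scanners above finish, their results need combining. A minimal sketch of merging several scanners' output into one deduplicated list, assuming each tool produced (or was exported to) plain text with one hostname per line - the variable names are hypothetical stand-ins for each tool's output:

```python
# Merge the output of several subdomain scanners into one deduplicated list.
# Assumes each scanner produced one hostname per line (hypothetical inputs).

def merge_subdomains(*outputs):
    """Normalise hostnames (lowercase, strip trailing dots) and deduplicate."""
    seen = set()
    for text in outputs:
        for line in text.splitlines():
            host = line.strip().lower().rstrip(".")
            if host:
                seen.add(host)
    return sorted(seen)

if __name__ == "__main__":
    sublist3r_out = "dev.example.com\nWWW.example.com\n"   # hypothetical
    knock_out = "www.example.com\nqa.example.com\n"        # hypothetical
    for host in merge_subdomains(sublist3r_out, knock_out):
        print(host)
```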
By now we should have a long list of subdomains, along with open ports and a screenshot of any that responded. Now for the fun part: dive in and see what we've found!
The next step
First of all, look for interesting subdomains that contain words such as
qa, dev, admin, upload, test, xml, docs, wiki, portal, management and check what is on there. Do as much research as possible into these subdomains: where were they found? Why were they found? Were they once exposed but now aren't? Does the Wayback Machine have anything, maybe? By understanding what each subdomain does, you can start to get a feel for how a web app works. This is extremely useful when it comes to participating in a program! Not only this, but set off more scanners against each subdomain that responds and start scanning *.sub.example.com. At this stage we also introduce some more tools.
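Filtering a long subdomain list for the keywords above is easy to script. A rough sketch - the keyword list is just a starting point, extend it as you learn the target:

```python
# Flag "interesting" subdomains by keyword, per the list in the text above.

INTERESTING = ("qa", "dev", "admin", "upload", "test", "xml",
               "docs", "wiki", "portal", "management")

def interesting_subdomains(hosts):
    """Return the hosts whose name contains any of the keywords."""
    return [h for h in hosts if any(k in h.lower() for k in INTERESTING)]
```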
- https://github.com/xmendez/wfuzz - wfuzz is a quick scanner which replaces the keyword 'FUZZ' anywhere in the URL with entries from a wordlist. Useful for dir/file scanning!
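The substitution idea behind wfuzz can be sketched in a few lines - this is only a toy illustration of the FUZZ-keyword mechanic, not a replacement for the tool itself:

```python
# Toy version of wfuzz's core mechanic: substitute each wordlist entry for
# the keyword FUZZ in a URL template. (wfuzz itself does far more.)

def fuzz_urls(template, wordlist):
    """Yield one candidate URL per word, replacing FUZZ in the template."""
    for word in wordlist:
        yield template.replace("FUZZ", word)

# e.g. fuzz_urls("https://example.com/FUZZ", ["admin", "robots.txt"])
```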
- https://portswigger.net/ - Burp Suite. The king of tools. Get yourself a wordlist from https://github.com/danielmiessler/SecLists and take advantage of the "Intruder" feature to start scanning for files/directories. I like using Intruder for scanning because you can easily grep for results in the UI. There's a cool technique with Burp: host your own redirect script (use XAMPP) and then point Intruder at that URL, telling it to follow all redirects. For example:
http://127.0.0.1/redirect.php?goto=https://www.urlhere.com/ - by telling Burp to follow all redirects, it will hit your URL and then redirect to the destination. Useful for testing multiple subdomains for the same directory/file - such as /crossdomain.xml and /robots.txt :)
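The redirect script above is a PHP file served via XAMPP; if you'd rather skip XAMPP, an equivalent sketch using only Python's standard library (same hypothetical `goto` parameter) might look like this:

```python
# Minimal stand-in for the redirect.php trick described above: answer every
# GET with a 302 pointing at whatever URL is passed in ?goto=.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def goto_target(path):
    """Extract the ?goto= destination from a request path, or None."""
    qs = parse_qs(urlparse(path).query)
    return qs.get("goto", [None])[0]

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = goto_target(self.path)
        if target:
            self.send_response(302)
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_error(400, "missing goto parameter")

# To run locally, point Intruder at http://127.0.0.1:8000/?goto=... and start:
# HTTPServer(("127.0.0.1", 8000), RedirectHandler).serve_forever()
```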
- https://gist.github.com/mhmdiaa/2742c5e147d49a804b408bfed3d32d07 - Wayback Machine robots.txt scraper. Sometimes sites like to make use of robots.txt a bit too much and place lots of endpoints in there. The Wayback Machine is great for going back in time and potentially finding old files that may still be on the server.
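Behind scrapers like the gist above sits the Wayback Machine's CDX API. A sketch of building such a query for historical robots.txt snapshots - the parameter choices here are one reasonable set, and actually fetching the URL is left to the reader:

```python
# Build a Wayback Machine CDX API query listing archived robots.txt copies,
# similar in spirit to the scraper gist linked above.
from urllib.parse import urlencode

CDX = "https://web.archive.org/cdx/search/cdx"

def robots_history_url(domain):
    """Return a CDX API URL for a domain's archived robots.txt snapshots."""
    params = {
        "url": f"{domain}/robots.txt",
        "output": "json",            # JSON rows instead of plain text
        "filter": "statuscode:200",  # only snapshots that actually resolved
        "collapse": "digest",        # drop consecutive identical copies
    }
    return CDX + "?" + urlencode(params)
```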
- https://github.com/zseano/InputScanner - ZSeano's InputScanner. Quickly feed in a list of subdomains and endpoints found and start scanning for low-hanging fruit by locating input fields and testing them for basic XSS. You can make use of Burp's "Spider" tool to gather endpoints.
- https://github.com/maK-/parameth - A great tool by @mak to brute-force GET & POST parameters.
By now we should be getting a good feel for how this application works and what they've got exposed on the internet - and we may even have found a bug already! But we're not done discovering stuff, so let's carry on.
The next step is to get down to some manual work, and that can include Google dorking. I've found so much low-hanging fruit on well-established programs just from dorking the right things. I always start off with
site:example.com inurl:& to discover any indexed endpoints. This not only helps you discover what their site is coded in (by file extensions), but you can also start testing for things like XSS/SQLi. From here I will start enumerating file extensions (
site:example.com ext:jsp etc.) and certain keywords,
site:example.com inurl:upload. Get creative with your searching. :)
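The dorks above lend themselves to being generated up front so you can work through them methodically. A small sketch - the default extension and keyword lists are just illustrative starting points:

```python
# Generate the Google dork queries described above for a given domain.

def dorks(domain, exts=("jsp", "php", "aspx"), keywords=("upload", "admin")):
    """Return the dork strings to try: inurl:&, then ext: and inurl: variants."""
    queries = [f"site:{domain} inurl:&"]
    queries += [f"site:{domain} ext:{e}" for e in exts]
    queries += [f"site:{domain} inurl:{k}" for k in keywords]
    return queries
```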
So at this point we haven't really messed with their main site, even though some bugs may be there. From testing the "outside" you should already have a feel for how 'secure' they are and what they may be filtering against (have all XSS attempts failed? Perhaps they've been tested before, or are using a framework or WAF?). When looking at a website, these are the thoughts & questions I ask when looking for bugs:
Do they allow logging in via any OAuth provider? What about rate limiting? How are they handling password resets? What about signing up with things like a null byte - can I somehow "login" to another user's account? (example%[email protected].com). What about a mobile app? Sometimes they allow logging in via OAuth on mobile only.
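To see why the null-byte signup trick above can work, consider a backend that URL-decodes the submitted address and then handles it like a C-style string, stopping at the first null byte. The address used here is a hypothetical illustration:

```python
# Illustrates the %00 signup trick: after URL-decoding, a backend that treats
# the value as a C-style string truncates it at the null byte, so the
# attacker-controlled tail is silently dropped.
from urllib.parse import unquote

def c_string_truncate(s):
    """Mimic a backend that stops reading at the first null byte."""
    return s.split("\x00", 1)[0]

decoded = unquote("victim%00@attacker.com")  # hypothetical signup value
assert "\x00" in decoded                     # the null byte survives decoding
print(c_string_truncate(decoded))            # prints "victim"
```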
What information do they allow me to submit? Can I enter any HTML anywhere (do they allow Markdown, for example)? What about updating my account settings - what sort of CSRF protection is there, if any? Look for upload features (photo upload?) and start testing.
Most mobile apps I test usually have at least one IDOR bug when querying/updating information. Most mobile apps use a simple API, and no checks are done on whether you are actually that user.
Can I delete other people's posts? Update their information (remember, mobile apps!)? The idea is to see if account A can do something to account B. Two browsers, one afternoon - it's fun!
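The account-A-versus-account-B comparison can be sketched as a pure check: fetch the same object once as its owner and once as the other account, then compare the responses. The fetches themselves (sessions, endpoints) are program-specific and omitted here; this function is only one plausible heuristic:

```python
# Two-account IDOR heuristic: if account B receives the same successful
# response as owner A for A's object, something is probably wrong.

def looks_like_idor(owner_status, other_status, owner_body, other_body):
    """True if the non-owner got the same 200 response as the owner."""
    return (owner_status == 200 and other_status == 200
            and owner_body == other_body)
```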
Application Logic bugs
A bug that doesn't necessarily have a security impact, but a business impact. An example would be a loan company that wants to limit applications to £10,000, but you find a way to submit £1,000,000 - breaking how their website is intended to be used in order to cause impact. (Note: if testing for stuff like that, be careful and check the scope!)
Write notes & spot patterns!
Lots of hours have been put into this program by now. We've got some bugs and things are going well. Now it's time to look for patterns. Every time you find a vulnerable parameter, keep a note of it and start testing for this parameter across other areas of their site (a lot of companies re-use parameter names across the site). As time goes on you'll see a new feature come out and can instantly start testing previously found vulnerable parameters.
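Those parameter notes can be checked automatically against every new URL you spider. A small sketch, assuming you keep your notes as a simple set of known-vulnerable parameter names:

```python
# Flag any new URL that reuses a parameter name from your notes of
# previously-proven-vulnerable parameters.
from urllib.parse import urlparse, parse_qs

def reused_vuln_params(url, known_vuln):
    """Return the parameter names in `url` that appear in your notes."""
    params = parse_qs(urlparse(url).query)
    return sorted(set(params) & set(known_vuln))

# e.g. reused_vuln_params(new_feature_url, {"returnUrl", "callback"})
```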