It’s a question as old as information security – how should organizations combine manual efforts with automation to identify risk most effectively? Tool-based automated security defect discovery excels at finding the low-hanging fruit, but relying too heavily on automation leaves gaps – skilled threat actors combine OSINT, active reconnaissance, bespoke attack tools, and creativity to find and exploit vulnerabilities that automation misses.
This post:
- Explores ways to combine automation with manual efforts to optimize defect discovery
- Examines which security defects are best-suited to automated or manual discovery
Better Together
Automation can provide basic defect discovery coverage across all your applications. High-touch manual efforts are best applied to your most critical applications first, such as profitable line-of-business applications or applications that handle your most sensitive data.
Automate defect discovery for the easier-to-find issues, freeing your security team to focus on higher-impact activities like:
- Scaling via security automation
- Refining existing security automation (expanding coverage across the application portfolio, reducing false positives, improving tool efficiency, reporting useful metrics, …)
- Developing new security automation (writing new tools, customizing existing tools, integrating with build/deployment procedures…)
- Threat modeling to guide other security activities
- Building/delivering training for developers and DevOps staff
- Objective-oriented red teaming exercises (to demonstrate risk to organizational stakeholders, and to test advanced defensive capabilities)
- Analyzing your applications/systems for design & architecture flaws
Automate the Low-Hanging Fruit
Automation is best at finding well-defined, relatively static security defects such as:
- Weak SSL/TLS configurations via tools like sslyze, Nessus SSL/TLS-related plugins, or Qualys SSL Labs
- Sensitive data exposure via tools like gitrob to scan GitHub repos for sensitive info, s3-inspector to find public AWS S3 buckets, or git-secrets to prevent accidentally committing secrets to version control
- Use of known-vulnerable software via tools like Retire.js to flag known-vulnerable libraries in JavaScript-based projects or OWASP Dependency-Check for weak libraries in Java and .NET projects. At a network/host level, vulnerability managers/scanners like Nessus, Nexpose, or nmap’s version detection capabilities (plus scripting to store & analyze scan results) can help.
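To make the weak-configuration check concrete, here is a minimal sketch using Python’s standard `ssl` module. The set of protocols treated as “weak” is an assumption (TLS 1.2+ as the baseline) that you should align with your own policy, and for real scanning a purpose-built tool like sslyze is a better choice:

```python
import socket
import ssl

# Protocol versions generally considered weak today (assumption: TLS 1.2+
# is the acceptable baseline; adjust to match your organization's policy).
WEAK_PROTOCOLS = {"SSLv2", "SSLv3", "TLSv1", "TLSv1.1"}

def weak_protocols(supported):
    """Return the sorted subset of supported protocol names that are weak."""
    return sorted(set(supported) & WEAK_PROTOCOLS)

def handshake_at_or_above(host, port=443,
                          min_version=ssl.TLSVersion.TLSv1_2, timeout=5):
    """Best-effort probe: can we complete a handshake at or above min_version?"""
    ctx = ssl.create_default_context()
    ctx.minimum_version = min_version
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False
```

A scheduled job that runs checks like this across your application inventory and reports regressions is exactly the kind of “refining existing security automation” work described above.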
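The secret-scanning idea reduces to pattern matching over text. The rules below are illustrative only; real tools like git-secrets and gitrob ship far broader, better-tuned rule sets:

```python
import re

# A few illustrative detection rules (assumption: real scanners maintain
# much larger curated pattern sets with entropy checks and allowlists).
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Hardcoded password": re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
}

def scan_text(text):
    """Return (line_number, rule_name) pairs for each suspected secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits
```

Wired into a pre-commit hook (git-secrets’ approach), checks like this stop secrets before they ever reach version control, which is far cheaper than rotating credentials after exposure.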
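Dependency checking is, at its core, a version comparison against advisory data. In this sketch the `FIXED_IN` floor versions are illustrative stand-ins for the real advisory databases that tools like OWASP Dependency-Check and Retire.js consume:

```python
# Illustrative "fixed in" floors (assumption: a real tool pulls these from
# a maintained advisory database, not a hardcoded dict).
FIXED_IN = {
    "jquery": "3.5.0",
    "lodash": "4.17.21",
}

def parse_version(version):
    """Convert a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def vulnerable(dependencies):
    """Return (name, version) pairs older than their known-fixed floor."""
    return [
        (name, version)
        for name, version in dependencies
        if name in FIXED_IN
        and parse_version(version) < parse_version(FIXED_IN[name])
    ]
```

Running a check like this in the build pipeline turns “are we shipping known-vulnerable libraries?” from a periodic audit question into a continuously enforced gate.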
Creativity & Manual Efforts
Manual defect discovery mechanisms (such as penetration testing, red teaming, threat modeling, or source code review) are best for finding defects that require a detailed, semantic understanding of a system and creative thinking about attack paths. Tools cannot match skilled manual efforts for finding application security defects like:
- Authentication bypass, since automated tools often miss the nuances of application-specific authentication flows, or misunderstand which application functionality should require authentication
- Privilege escalation, since automated tools lack the context of application-specific privilege levels and which user classes should have access to particular application functionality (e.g. normal users shouldn’t be able to access administrative functionality)
- Insecure direct object references, since automated tools often can’t tell which parameters or fields in an application may reference protected objects, let alone detect when access control is missing (e.g. sensitive documents accessed via application-wide sequential numeric IDs, even across users)
- Insecure use of cryptography, which can mean insecure implementations (e.g. accidentally exposing encryption keys in public locations like S3 buckets or web-accessible directories) or architectural weaknesses (e.g. encrypting where hashing would be better suited, poor password storage hygiene, lack of cryptographic agility, i.e. the ability to swap out cryptographic algorithms in response to discovered issues, …)
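On the password-storage point, the safe pattern is a salted, deliberately slow key-derivation function rather than reversible encryption. A minimal sketch using only Python’s standard library (the iteration count is an illustrative value; follow current guidance for your chosen KDF):

```python
import hashlib
import hmac
import os

def hash_password(password, *, iterations=600_000):
    """Derive a salted PBKDF2-HMAC-SHA256 digest suitable for storage."""
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, expected):
    """Recompute the digest and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)
```

Because hashing is one-way, a database compromise doesn’t directly yield plaintext passwords; storing iteration count and salt alongside each digest also gives you the cryptographic agility to raise the work factor later.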