🎯 Penetration Testing November 15, 2025 · 9 min read

Secure Code Review: Manual Analysis vs SAST Tools

Static analysis tools find the easy bugs fast, but manual code review catches the logic flaws that SAST misses. Learn how to combine both approaches for thorough secure code review.


Static Application Security Testing (SAST) tools have become a standard part of modern DevSecOps pipelines. They're automated, they run continuously, and they catch a meaningful set of common vulnerabilities. They've also created a false sense of confidence in many organisations, which believe that because a SAST tool is running, their code is being reviewed for security.

It isn’t. Not fully. This guide explains what SAST tools do well, where they consistently fail, and why manual source code security review remains irreplaceable for complex applications and high-value assessments.


What SAST Tools Do Well

SAST tools (Semgrep, CodeQL, SonarQube, Checkmarx, Fortify, Veracode) analyse your source code without executing it, looking for patterns that match known vulnerability types.

They excel at:

Injection sinks: Finding calls to dangerous functions with user-controlled input — cursor.execute(query + user_input), subprocess.run(cmd, shell=True), eval(user_data). These are pattern-matchable and SAST finds them reliably.
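These sink patterns are easy to show in a self-contained sketch (in-memory sqlite3, with an invented table and data for illustration). The concatenated query is exactly what a SAST rule pattern-matches; the parameterised version is the fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "nobody' OR '1'='1"

# Vulnerable: user input concatenated into the SQL string.
# This is the kind of sink SAST finds reliably.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
# The injected OR clause matches every row despite the bogus name

# Safe: parameterised query — the driver treats input as data, not SQL
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
# No user is literally named "nobody' OR '1'='1", so nothing matches
```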

Known-bad API usage: Deprecated cryptographic functions (MD5, DES, SHA1 for passwords), hardcoded credentials in source, dangerous deserialization calls, Math.random() used for security purposes.

Missing controls: Absence of CSRF tokens, missing rate limiting annotations, unvalidated redirects to user-supplied URLs.

Compliance-driven issues: PCI DSS and HIPAA-relevant patterns — logging of credit card numbers, storing passwords in plaintext, insufficient TLS version requirements.

Volume: A SAST tool can scan a million lines of code in minutes. A human can’t.


Where SAST Consistently Fails

1. Business Logic Vulnerabilities

SAST tools have no concept of your application’s intended behaviour. They can’t detect:

  • A shopping cart that allows applying the same discount code multiple times
  • An API endpoint that returns admin data to non-admin users because the developer forgot an authorisation check on one specific route
  • A multi-step workflow where users can skip step 2 and proceed directly to step 3
  • Race conditions in booking or redemption flows
  • IDOR vulnerabilities (the pattern item = get_item(item_id) looks fine to SAST — the missing if item.owner != current_user: abort(403) is what matters)

Business logic flaws require understanding the application’s intent. SAST has no way to compare what the code does against what it should do.
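The discount-code bullet above can be sketched in a few lines (class and field names are hypothetical). Nothing here matches a dangerous-sink signature, yet the flaw is immediate to a reviewer who knows a code should apply once:

```python
class Cart:
    def __init__(self, total: float):
        self.total = total
        self.applied_codes = []          # tracked, but never checked

    def apply_discount(self, code: str, percent: int):
        # Missing guard: `if code in self.applied_codes: reject`.
        # SAST has no rule for "this should only happen once".
        self.applied_codes.append(code)
        self.total = round(self.total * (1 - percent / 100), 2)

cart = Cart(100.00)
cart.apply_discount("SAVE10", 10)
cart.apply_discount("SAVE10", 10)        # accepted again — logic flaw
```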

2. Authentication and Authorisation Flows

SAST can detect obvious auth issues — hardcoded credentials, missing authentication annotations. But complex auth flows defeat it:

  • A JWT validation function that correctly validates signatures but is never called on certain endpoints
  • An OAuth implementation that checks scope but doesn’t validate state parameter (CSRF in OAuth)
  • A multi-tenant SaaS app where the tenant ID is derived from the session but one endpoint derives it from a URL parameter instead
  • Custom RBAC implementations where the logic is spread across multiple files and services
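The first bullet, sketched with a hypothetical route layer: the validation helper itself is correct, so SAST has nothing to flag, but one handler simply never calls it:

```python
import base64
import hashlib
import hmac

SECRET = b"demo-secret"  # hypothetical shared signing key

def sign(payload: bytes) -> str:
    """HMAC-SHA256 tag, URL-safe base64 encoded."""
    tag = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(tag).decode()

def validate_token(payload: bytes, sig: str) -> bool:
    """Correct constant-time validation — nothing for SAST to flag."""
    return hmac.compare_digest(sign(payload), sig)

def get_profile(payload: bytes, sig: str) -> str:
    if not validate_token(payload, sig):   # protected endpoint
        return "401 Unauthorized"
    return "200 profile data"

def get_export(payload: bytes, sig: str) -> str:
    # The developer forgot the check here. There is no dangerous sink
    # in this function, so pattern matching finds nothing wrong.
    return "200 full data export"
```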

3. Second-Order Vulnerabilities

SAST typically traces data flow from input to sink within a limited scope. Second-order vulnerabilities exploit data that was stored cleanly (first request) and becomes dangerous when retrieved (second request):

# Request 1 — SAST sees this as fine:
# user input goes through html.escape() before storage
safe_name = html.escape(request.form['name'])
db.execute("INSERT INTO users (name) VALUES (?)", (safe_name,))

# Request 2 — SAST may not trace this back to the original input:
user = db.execute("SELECT name FROM users WHERE id=?", (user_id,)).fetchone()
# name is rendered in admin panel WITHOUT additional escaping
# → Stored XSS, despite the initial input sanitisation
admin_response = f"<tr><td>{user['name']}</td></tr>"

Second-order SQL injection follows the same pattern — safe parameterised insertion followed by unsafe use of retrieved data in a dynamic query.
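A minimal second-order SQL injection sketch (in-memory sqlite3, invented schema): the insert is parameterised and passes any SAST check, and the injection fires only when the stored value is later interpolated into a new query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("CREATE TABLE orders (customer TEXT, item TEXT)")
conn.execute("INSERT INTO orders VALUES ('bob', 'confidential-order')")

# Request 1: parameterised insert — SAST sees no issue here
payload = "mallory' OR '1'='1"
conn.execute("INSERT INTO users (name) VALUES (?)", (payload,))

# Request 2: the stored value is trusted and interpolated directly
name = conn.execute("SELECT name FROM users").fetchone()[0]
unsafe_query = f"SELECT * FROM orders WHERE customer = '{name}'"
rows = conn.execute(unsafe_query).fetchall()
# The injected OR clause returns bob's order to mallory
```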

4. Configuration and Architecture Issues

SAST analyses source code. It doesn’t see:

  • Misconfigured CORS headers that allow credential-bearing cross-origin requests
  • Missing security headers in web server or reverse proxy config
  • Over-permissive IAM roles granted to the application’s service account
  • API gateways that strip security checks before forwarding to internal services
  • Infrastructure decisions that expose internal services

5. Cryptographic Implementation Context

SAST can flag AES-ECB mode or MD5 usage. It cannot evaluate:

  • Whether your key generation is truly random or seeded with predictable values
  • Whether your JWT signing keys are appropriately long and complex for the algorithm
  • Whether your custom PRNG implementation is cryptographically sound
  • Whether your implementation of a standard protocol contains subtle timing side-channels
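The first bullet in runnable form: a key derived from a seeded PRNG is fully reproducible by anyone who can guess the seed, while `secrets` draws from the OS CSPRNG. Both calls look superficially similar, and only context tells a reviewer which one is a finding:

```python
import random
import secrets

# Predictable: seeded with something guessable (e.g. a timestamp).
# An attacker who recovers the seed regenerates the exact "key".
seed = 1_700_000_000                     # stand-in for int(time.time())
key_a = random.Random(seed).getrandbits(128)
key_b = random.Random(seed).getrandbits(128)
# key_a == key_b — the keyspace collapses to the seed space

# Appropriate: operating-system CSPRNG via the secrets module
real_key = secrets.token_bytes(16)
```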

6. Language and Framework-Specific Nuances

SAST tools support major languages but vary significantly in depth. They often miss:

  • ORM-level injection that isn’t obvious from the ORM call syntax
  • Template engine-specific injection patterns
  • Framework-specific authentication bypass techniques (e.g., Spring Security misconfigurations)
  • Language-specific type coercion issues ("0" == false in PHP, prototype pollution in JavaScript)
  • Deserialization gadget chains in Java or .NET

What Manual Code Review Provides

A skilled reviewer reading code brings capabilities that no automated tool replicates:

Understanding Intent

The reviewer asks: “What is this code supposed to do?” and compares it against “What does this code actually do?” This is how business logic flaws and access control issues are found.

@app.route('/api/orders/<order_id>', methods=['GET'])
@login_required
def get_order(order_id):
    # SAST: looks fine — authenticated, uses parameterised query
    # Human reviewer: where's the ownership check?
    order = db.execute("SELECT * FROM orders WHERE id=?", (order_id,)).fetchone()
    return jsonify(order)

A SAST tool sees an authenticated endpoint with a parameterised query. A reviewer sees a BOLA (Broken Object Level Authorisation) vulnerability — any authenticated user can retrieve any order by guessing the ID.
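The fix a reviewer would expect, sketched outside the web framework for clarity (function shape and error handling are illustrative): bind the lookup to the caller, either in the query itself or immediately after it:

```python
import sqlite3

def get_order(db: sqlite3.Connection, order_id: int, current_user_id: int):
    # Ownership enforced in the query: a user can only ever retrieve
    # rows they own, which closes the BOLA hole.
    order = db.execute(
        "SELECT * FROM orders WHERE id = ? AND owner_id = ?",
        (order_id, current_user_id),
    ).fetchone()
    if order is None:
        # A 404-style response avoids confirming the order exists
        raise LookupError("order not found")
    return order
```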

Cross-File and Cross-Service Data Flow

Manual review can trace data flows across multiple files, services, and data stores in ways that automated tools struggle with. Complex multi-service architectures with internal APIs, message queues, and asynchronous processing often have trust boundary violations that are only visible when the architecture is understood holistically.

Semantic Understanding of Custom Frameworks

Enterprise codebases often use internal frameworks with custom authentication helpers, ORM abstractions, and middleware. SAST tools have no knowledge of these — a security-relevant helper function that’s incorrectly used won’t be flagged because the tool doesn’t know what it’s supposed to do.

Finding Missing Controls

It’s harder to find what’s not there than what is. SAST can sometimes detect missing annotations (@csrf_protect, @login_required). But complex missing controls — like the absence of input validation in one function out of fifty, or a missing integrity check in one message handler out of twenty — require reading the code with security intent.


How Manual Code Review Is Conducted

A professional source code security review follows a structured methodology:

Phase 1: Scoping and Understanding

  • Review architecture documentation, data flow diagrams, and threat models
  • Understand the technology stack (frameworks, ORMs, authentication libraries)
  • Identify the most security-sensitive components (authentication, payment, data access, admin functions)
  • Map external inputs (APIs, file uploads, webhooks, SSO callbacks)

Phase 2: Automated Tooling (as an aid, not a replacement)

Run SAST tools as the first pass:

  • Triage and validate all findings (eliminate false positives)
  • Use findings to guide manual review — where are the dangerous patterns?
  • Check tool coverage — what languages and frameworks are partially supported?

Phase 3: Manual Review by Component

Prioritise review effort on the highest-risk components:

Authentication and session management:

- How are credentials validated?
- Is password hashing using an appropriate algorithm (Argon2/bcrypt)?
- How are session tokens generated and validated?
- What happens on logout — are server-side sessions invalidated?
- Are there any session fixation risks?
- How is MFA implemented? Can it be bypassed?
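For the password-hashing question, Argon2 and bcrypt live in third-party packages, so this sketch uses the standard library's memory-hard `hashlib.scrypt` as a stand-in; the shape a reviewer checks for (per-user random salt, constant-time comparison) carries over unchanged:

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)       # unique per user
    digest = hashlib.scrypt(
        password.encode(), salt=salt, n=2**14, r=8, p=1
    )
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(
        password.encode(), salt=salt, n=2**14, r=8, p=1
    )
    return hmac.compare_digest(candidate, digest)
```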

Authorisation:

- How is the current user identified?
- Is authorisation checked at every data access point?
- Is there any reliance on client-supplied user ID or role?
- Are there any admin functions accessible to non-admins?

Input handling:

- Where does user input enter the system?
- How is it validated and sanitised?
- Where is it eventually used? (output encoding, query parameters, file operations)
- Are there any second-order flows?

Cryptography:

- What algorithms are in use?
- Where are keys stored? How are they generated?
- Is randomness from a CSPRNG?
- Are there any custom crypto implementations?

Third-party integrations:

- Are webhook payloads validated (HMAC signature verification)?
- Is response data from external APIs trusted implicitly?
- Are API keys scoped to minimum permissions?
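The first question in code form, assuming a hex-encoded HMAC-SHA256 signature header and an out-of-band shared secret (both the secret and the header format are hypothetical; real providers document their own scheme):

```python
import hashlib
import hmac

WEBHOOK_SECRET = b"shared-secret-from-provider"   # provisioned out of band

def verify_webhook(raw_body: bytes, signature_header: str) -> bool:
    # Recompute the HMAC over the *raw* body (before JSON parsing)
    # and compare in constant time to resist timing attacks.
    expected = hmac.new(WEBHOOK_SECRET, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```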

Phase 4: Exploitation Confirmation

For ambiguous findings, the reviewer may attempt to exploit the potential vulnerability (in a test environment or against agreed scope) to confirm it’s real. This eliminates false positives from the report and provides proof-of-concept evidence.


When to Use Each Approach

  • SAST only: continuous pipeline scanning — catch common issues in every PR
  • SAST + manual critical paths: high-risk components (auth, payment) get human review; the rest is SAST
  • Full manual review: pre-launch assessment of high-value targets, regulated industries, and complex codebases — see our source code security review service
  • White-box pentest: combines manual code review with dynamic testing — validates exploitability

For most SaaS products:

  • SAST in CI/CD pipeline for every commit (continuous)
  • Manual review of authentication, authorisation, and payment code before major releases
  • Full code security assessment annually or after significant architectural changes

What to Expect from a Code Review Engagement

Scope: Define which codebase is in scope and the depth of review. A 100% line-by-line review of 500,000 lines of code is not achievable in a 5-day engagement. Target the highest-risk components.

Languages supported: Ensure your reviewer has expertise in your specific stack. A JavaScript expert reviewing Go code will miss language-specific issues.

Output: A professional code review report includes:

  • Each finding with file name, line number, code snippet, description, proof-of-concept (where applicable), and remediation guidance
  • Severity rating with justification
  • Overall architecture observations (systemic issues, not just individual vulnerabilities)
  • Positive findings (what the team is doing well — helps calibrate the report)

Timeline: Expect 1–3 days for scoping and tooling, 3–10 days for review (depending on codebase size and depth), 1–2 days for reporting.


Integrating Manual Review into Your SDLC

The most effective programmes don’t treat code review as a separate engagement — they integrate it:

  • Security champions: Developers embedded in each team who are trained in secure code review principles — they review security-sensitive changes before they merge
  • PR-level security review: A security team member reviews PRs that touch auth, payments, or core data access — not all PRs, just the high-risk ones
  • Pre-launch reviews: Formal review engagement before any significant new feature or product launches
  • Annual full assessments: Comprehensive review of the full codebase annually, feeding findings back into the backlog

CyberneticsPlus conducts source code security reviews for web applications, APIs, and mobile apps. Pair source code review with web application penetration testing for complete coverage — our DevSecOps consulting service helps you build these reviews into your SDLC. Contact us to scope a code security review.

#source code review #SAST #secure code review #code audit #security review #AppSec
