Case Study · Financial Services · United States · March 20, 2025

How We Stopped a DDoS Attack Targeting a US Asset Management Firm

A US-based asset management firm came under a sustained multi-vector DDoS attack during peak trading hours. CyberneticsPlus deployed emergency mitigation within 90 minutes, achieving zero downtime for the remainder of the trading day.

Time to Full Mitigation: 90 min
Minutes of Downtime: 0
Peak Attack Volume: 3.2 Gbps

DDoS Protection · Incident Response · Cloudflare Deployment

Background

A mid-size asset management firm managing over $2 billion in client assets contacted CyberneticsPlus during a live incident. Their public-facing client portal and trading API were under active attack — the first DDoS the firm had experienced in its 12-year history. The attack began at 9:45 AM ET, 15 minutes after the US market open.

The firm had no existing DDoS mitigation capability beyond basic upstream rate limiting at their hosting provider — which was overwhelmed within minutes of the attack starting.

The Attack

Attack profile:

  • Multi-vector: UDP flood (volumetric) combined with HTTP/S application-layer flood
  • Peak volume: 3.2 Gbps / 280,000 packets per second at network layer
  • Application layer: 45,000 requests per second targeting the trading API’s authentication endpoint
  • Attack duration: 6 hours continuous, with two intensity peaks
  • Source: Distributed botnet across 47 countries; no single ASN contributing more than 8%
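As a sanity check on the volumetric figures above, the reported peak volume and packet rate imply an average packet size just under the 1,500-byte Ethernet MTU — consistent with a large-packet UDP flood:

```python
# Back-of-envelope check: average packet size implied by the reported
# peak volume (3.2 Gbps) and packet rate (280,000 pps).
peak_bits_per_sec = 3.2e9
packets_per_sec = 280_000

avg_packet_bytes = peak_bits_per_sec / packets_per_sec / 8
print(round(avg_packet_bytes))  # ~1429 bytes, close to the 1500-byte Ethernet MTU
```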

The volumetric component overwhelmed the hosting provider’s upstream infrastructure. The application-layer component specifically targeted the /api/v2/auth/login endpoint — indicating the attacker had prior knowledge of the API structure.

Response

Hour 1: Triage and Emergency Mitigation

CyberneticsPlus received the incident call at 10:12 AM ET.

0–30 minutes:

  • Confirmed attack type via traffic analysis (provider NetFlow data)
  • Identified the attack was bifurcated — volumetric at network layer, targeted HTTP flood at application layer
  • Initiated Cloudflare emergency onboarding for the client’s domains

30–60 minutes:

  • DNS cutover to Cloudflare proxy completed (TTL pre-reduced to 60 seconds)
  • Enabled Cloudflare “I’m Under Attack Mode” — JS challenge on all traffic
  • Network-layer attack absorbed by Cloudflare’s global anycast network

60–90 minutes:

  • Application-layer attack pattern analysed: all attacking requests shared unusual HTTP header patterns (missing Accept-Language, static User-Agent string)
  • Custom WAF rule deployed to block the attack pattern:
    (not any(lower(http.request.headers.names[*])[*] == "accept-language")) and
    (http.request.uri.path eq "/api/v2/auth/login") and
    (http.request.method eq "POST")
    → Block
  • Attack traffic dropped to zero within the application layer
  • Attack continued at network layer, fully absorbed by Cloudflare
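The blocking logic above can be mirrored in a few lines of Python for offline testing against captured request logs — a sketch only, not Cloudflare's rule engine; the `is_attack_request` helper and lowercased header handling are our own illustration:

```python
def is_attack_request(method: str, path: str, headers: dict) -> bool:
    """Mirror of the WAF rule: flag POSTs to the login endpoint
    that lack an Accept-Language header (the botnet fingerprint)."""
    names = {name.lower() for name in headers}
    return (
        method == "POST"
        and path == "/api/v2/auth/login"
        and "accept-language" not in names
    )

# Botnet-style request: static User-Agent, no Accept-Language -> blocked
print(is_attack_request("POST", "/api/v2/auth/login",
                        {"User-Agent": "Mozilla/5.0"}))          # True
# Real browsers send Accept-Language -> allowed
print(is_attack_request("POST", "/api/v2/auth/login",
                        {"User-Agent": "Mozilla/5.0",
                         "Accept-Language": "en-US,en;q=0.9"}))  # False
```

Keying on the *absence* of a header, rather than a specific User-Agent value, is what kept the rule effective when the attacker later rotated User-Agent strings.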

By 11:42 AM ET (90 minutes after initial contact), the client portal and trading API were fully operational with no degradation.

Parallel: Origin Protection

While the attack was being mitigated at the CDN layer, we identified that the client’s origin server IP was publicly discoverable via historical DNS records (Shodan indexed a pre-CDN A record).

Actions taken:

  • Advised client to restrict origin inbound to Cloudflare IP ranges only
  • Assisted with security group update to block all non-Cloudflare traffic on ports 80/443
  • Confirmed origin IP was not being directly targeted (attack was exclusively at the CDN layer)
  • Documented origin IP rotation as a follow-up action for the week following the incident
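The "restrict origin to Cloudflare" control can be sketched as a simple membership check using Python's standard `ipaddress` module. The ranges shown are a small sample of Cloudflare's published IPv4 list; a real deployment should fetch the full, current list from https://www.cloudflare.com/ips-v4 and encode it in firewall or security-group rules:

```python
import ipaddress

# Sample of Cloudflare's published IPv4 ranges (fetch the full,
# current list from https://www.cloudflare.com/ips-v4 in production).
CLOUDFLARE_RANGES = [ipaddress.ip_network(cidr) for cidr in (
    "173.245.48.0/20", "103.21.244.0/22", "104.16.0.0/13",
)]

def from_cloudflare(src_ip: str) -> bool:
    """True if the source address falls inside a known Cloudflare range."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CLOUDFLARE_RANGES)

print(from_cloudflare("104.16.1.1"))   # True  -> allow on ports 80/443
print(from_cloudflare("203.0.113.9"))  # False -> drop at the firewall
```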

Monitoring Phase (Hours 2–6)

The attack continued for 6 hours but had zero impact on availability. CyberneticsPlus maintained a live monitoring session throughout, adjusting WAF rules as the attacker changed tactics:

  • Hour 2: Attacker rotated User-Agent strings → updated rule to target header absence pattern instead
  • Hour 3: Attack added HTTP/2 requests → terminated at Cloudflare's edge, where the same WAF rules applied; no rule change required
  • Hours 4–6: Attack continued at reduced intensity (~40% of peak), fully mitigated

Findings and Recommendations

Post-incident, we conducted a security review of the client’s internet-facing infrastructure:

  1. No WAF pre-attack: The trading API had no WAF protection. Application-layer attacks would have been effective even without the volumetric component.
  2. Origin IP exposure: Historical DNS records exposed the origin server. The attacker could have bypassed the CDN entirely.
  3. No incident response plan: The firm had no documented process for security incidents — the 27-minute gap between attack start and vendor call indicates internal confusion about who to contact.
  4. No rate limiting on the authentication endpoint: The auth endpoint accepted unlimited unauthenticated requests, making it an easy target for application-layer flooding.
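The missing control in finding 4 can be prototyped as an in-memory sliding-window limiter matching the post-incident policy of 10 requests per minute per IP. This is a sketch only — the production control was a Cloudflare rate-limiting rule, and a multi-server deployment would need shared state rather than a process-local dictionary:

```python
import time
from collections import defaultdict, deque

class PerIpRateLimiter:
    """Sliding-window limiter: at most `limit` requests per `window`
    seconds per source IP (defaults match the 10/min post-incident policy)."""

    def __init__(self, limit=10, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        while q and now - q[0] >= self.window:  # evict hits outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False                        # over budget -> reject
        q.append(now)
        return True

rl = PerIpRateLimiter()
results = [rl.allow("198.51.100.7", now=float(i)) for i in range(12)]
print(results.count(True))  # 10 -> the 11th and 12th attempts are rejected
```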

Outcomes

Immediate:

  • Zero downtime for trading operations
  • Client portal served normally throughout the 6-hour attack
  • Regulatory requirements met (no operational disruption reportable under firm’s compliance obligations)

Post-incident (30 days):

  • Cloudflare Business plan deployed permanently with full WAF configuration
  • Origin server restricted to Cloudflare IP ranges
  • API authentication endpoint rate-limited to 10 requests per minute per IP
  • Incident response plan documented with designated security contacts and vendor escalation procedures
  • Quarterly DDoS tabletop exercise added to operational calendar

The client has since experienced two additional, smaller DDoS attempts (identified in Cloudflare logs) — both mitigated automatically with zero operational impact.

Anonymous

Financial Services · United States

Want similar results for your business?

Book a free consultation and we'll assess your current security posture.

Book a Free Consultation