Executive Summary
A cyberattack is not a question of if — it is a question of when and how ready you are to respond. Organisations with a tested incident response plan contain incidents in hours; those without a plan contain them in days or weeks, with correspondingly greater damage.
This playbook is a practical operational guide for security teams, IT managers, and incident commanders. It follows the NIST SP 800-61 incident response lifecycle and provides specific playbooks for the most common incident types: ransomware, data breach, account compromise, and cloud security incidents.
This document should be treated as a living document — reviewed and updated after every incident and at least annually.
Chapter 1: The Incident Response Lifecycle
NIST SP 800-61 defines four phases:
```
Preparation → Detection & Analysis → Containment, Eradication & Recovery → Post-Incident Activity
     ↑                                                                              │
     └──────────────────────── (continuous feedback loop) ──────────────────────────┘
```
- Phase 1 — Preparation: Building the capability before an incident occurs
- Phase 2 — Detection and Analysis: Identifying that an incident is occurring and understanding its scope
- Phase 3 — Containment, Eradication, and Recovery: Stopping the attack, removing the attacker, and restoring operations
- Phase 4 — Post-Incident Activity: Learning from the incident to improve resilience
Chapter 2: Preparation
Incident Response Team Structure
| Role | Responsibility | Who Fills It |
|---|---|---|
| Incident Commander (IC) | Overall coordination; escalation decisions; stakeholder communication | CISO or Security Manager |
| Technical Lead | Technical investigation and response actions | Senior Security Engineer or SOC Lead |
| Communications Lead | Internal and external communications, regulatory notification | Legal or PR/Comms representative |
| IT Operations | System isolation, credential resets, infrastructure changes | IT Manager/SysAdmin |
| Legal Counsel | Regulatory obligations, legal hold, external counsel coordination | General Counsel or external lawyer |
| Executive Sponsor | Authority for major decisions (pay ransom, notify customers, engage press) | CEO or COO |
RACI Matrix
Document who is Responsible, Accountable, Consulted, and Informed for each decision during an incident:
| Decision | Responsible | Accountable | Consulted | Informed |
|---|---|---|---|---|
| Isolate infected system | IT Ops | IC | Technical Lead | — |
| Reset all credentials | IT Ops + Tech Lead | IC | Legal | HR |
| Engage external IR firm | IC | CISO | Legal | CEO |
| Notify regulators | Legal | IC | CISO | CEO |
| Pay ransom | IC | CEO | Legal, CISO | Board |
Communication Templates
Pre-draft these before an incident:
Internal all-hands template:
We are currently investigating a security incident. IT and security teams are working to contain and resolve the issue. Please [do not use VPN / avoid printing / avoid accessing the following systems]. More information will be provided in [X hours]. Do not discuss this externally.
Customer notification template (placeholder):
We are writing to inform you of a security incident that may have affected [X]. We detected [activity description] on [date]. We immediately [containment action]. [Data affected / not affected]. We have [regulatory notifications made]. We are [ongoing actions]. Contact: [security@company.com] for questions.
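Pre-drafted templates are only useful if they are filled completely under pressure. A minimal sketch of programmatic placeholder filling, using Python's `string.Template` with hypothetical field names mirroring the brackets above; substitution fails loudly if any field is left blank, which prevents sending a notification with an unfilled placeholder:

```python
from string import Template

# Hypothetical template mirroring the placeholder structure above;
# the field names are illustrative, not a prescribed schema.
CUSTOMER_TEMPLATE = Template(
    "We are writing to inform you of a security incident that may have "
    "affected $affected_service. We detected $activity on $detected_date. "
    "We immediately $containment_action. "
    "Contact: $contact for questions."
)

def render_notification(fields: dict) -> str:
    # Template.substitute raises KeyError on any missing field,
    # so an incomplete notification cannot be generated silently.
    return CUSTOMER_TEMPLATE.substitute(fields)
```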
Regulator notification template: Prepare a template compliant with the specific regulations applicable to your organisation. Key elements: date of detection, date of occurrence (if known), data subject category and count, data categories affected, likely consequences, measures taken.
Tools and Resources
IR toolkit — pre-deployed or on-standby:
- EDR platform with ability to isolate endpoints remotely (CrowdStrike Falcon, Microsoft Defender for Endpoint, SentinelOne)
- SIEM with real-time alerting (Splunk, Microsoft Sentinel, Elastic SIEM)
- Network packet capture capability (Zeek/Bro, Security Onion, or cloud flow logs)
- Forensic imaging tools (FTK Imager, dd, AWS EBS snapshots)
- Offline/out-of-band communication channel (separate email, Signal, or an IR communication platform) — if your primary email is compromised, you need an alternative
- Preserved forensic copies of all critical logs (CloudTrail, AD logs, endpoint EDR data)
External contacts to have on retainer or speed-dial:
- External IR firm (CyberneticsPlus or specialist IR provider)
- Cyber insurance incident response hotline
- Law enforcement liaison (CERT-In for India, NCSC for UK, CISA for US)
- Legal counsel with data breach experience
- PR firm with crisis communications experience (for significant incidents)
Chapter 3: Detection and Analysis
Incident Triggers
| Trigger | Source | Priority |
|---|---|---|
| Ransomware note / encrypted files | User report, EDR | Critical — immediate response |
| Mass data exfiltration detected | DLP, SIEM | Critical |
| Credential compromise confirmed | Threat intel, dark web alert | High |
| Anomalous admin activity | SIEM, EDR alert | High |
| Billing spike (cloud) | Billing alert | High (cryptomining likely) |
| Malware detected on endpoint | EDR alert | Medium–High (depends on malware type) |
| Failed authentication spike | SIEM | Medium |
| Vulnerability disclosed on a key system | Threat intel | Medium |
Initial Triage Checklist
When a potential incident is reported:
- Record the time of notification — important for regulatory timeline compliance
- Assign an Incident Commander — one person leads from this point
- Open an incident channel — dedicated Slack/Teams channel or bridge call
- Preserve evidence immediately — do not reboot or shut down systems before capturing volatile data
- Initial classification: Is this a confirmed incident or a potential incident (false positive)?
- Scope assessment: How many systems are affected? What data could be impacted?
- Notify Legal — early legal involvement ensures attorney-client privilege on communications
Incident Severity Classification
| Severity | Definition | Response Time | Escalation |
|---|---|---|---|
| P1 — Critical | Active ransomware, confirmed data breach, full domain compromise | Immediate | CEO, Board |
| P2 — High | Confirmed account compromise, malware active on multiple systems | 1 hour | CISO, Legal |
| P3 — Medium | Isolated malware, minor data exposure, suspicious activity | 4 hours | Security Manager |
| P4 — Low | False positive, minor policy violation | 24 hours | Security Team |
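The severity matrix above can be encoded as a lookup so alerting tooling attaches response-time and escalation metadata automatically. A sketch under assumed field names, with unknown severities deliberately treated as P1 (failing toward over-escalation rather than under-escalation):

```python
# Severity matrix from the table above; field names are illustrative.
SEVERITY_MATRIX = {
    "P1": {"label": "Critical", "response_time_hours": 0,  "escalate_to": ["CEO", "Board"]},
    "P2": {"label": "High",     "response_time_hours": 1,  "escalate_to": ["CISO", "Legal"]},
    "P3": {"label": "Medium",   "response_time_hours": 4,  "escalate_to": ["Security Manager"]},
    "P4": {"label": "Low",      "response_time_hours": 24, "escalate_to": ["Security Team"]},
}

def escalation_for(severity: str) -> dict:
    # Unknown or malformed severities fall back to P1: during an incident,
    # over-escalating is cheaper than missing an escalation.
    return SEVERITY_MATRIX.get(severity, SEVERITY_MATRIX["P1"])
```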
Evidence Collection — Volatile Data First
Before isolating a compromised system, capture volatile evidence (it disappears on reboot):
```bash
# Linux: run on the live system before isolation or reboot
netstat -antp > connections.txt       # active network connections (root needed for -p)
ss -tulpn >> connections.txt          # listening sockets
ps auxf > processes.txt               # running process tree
who > users.txt                       # currently logged-in users
last -n 50 >> users.txt               # recent logins
history > history.txt                 # shell history (interactive shells only)
cat ~/.bash_history >> history.txt
# Memory dump: load the LiME kernel module, e.g.
#   insmod lime.ko "path=/root/mem.lime format=lime"
```

```bat
:: Windows: run on the live system before isolation or reboot
netstat -ano > connections.txt
tasklist /v > processes.txt
:: Memory dump: winpmem.exe memory.dmp
```
For cloud workloads: take a snapshot of the EC2/VM instance before terminating it.
Chapter 4: Containment
Containment Strategy Decision
Before containing, determine the strategy:
Short-term containment (immediate, while investigation continues):
- Isolate the affected system (network isolation, not shutdown)
- Block identified attacker IPs/domains at perimeter
- Disable compromised credentials
Long-term containment (sustained while eradication is prepared):
- Maintain a clean, known-good version of affected systems on standby
- Implement additional monitoring to detect attacker persistence
- Restrict access to sensitive systems during investigation
Containment vs. monitoring: In some investigations, it is valuable to monitor attacker activity before containing — to understand their full scope and objectives. This is a decision for the Incident Commander in consultation with Legal. Do not monitor longer than necessary.
Network Isolation Techniques
| Technique | How | Use When |
|---|---|---|
| EDR isolation | CrowdStrike/MDE policy push | Fast; single endpoint; maintains logging |
| VLAN reassignment | Network switch config change | Multiple endpoints in same area |
| Firewall ACL | Block specific IP/subnet | Attacker IP identified |
| Physical disconnect | Unplug network cable | No remote capability; air-gap needed |
| Cloud security group | AWS SG, Azure NSG rule | Cloud workload isolation |
Credential Response
If credential compromise is suspected:
- Disable the compromised account(s) — not just password reset (an active session remains valid)
- Revoke all active sessions — force re-authentication for all sessions
- Audit authentication logs — identify all access using the compromised credential
- Identify privilege escalation — did the attacker use the credential to create new accounts or modify permissions?
- Reset all service accounts — if a domain controller is involved, assume all credentials are compromised
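Step three of the checklist (audit authentication logs) is usually a filter over exported SIEM records. A minimal sketch assuming a simplified tab-separated `timestamp, user, source_ip, action` export; adapt the parsing to your SIEM's actual format:

```python
from datetime import datetime

def audit_credential_use(log_lines, user, since):
    """Return every authentication event for `user` at or after `since`.

    Assumes each line is 'ISO-timestamp<TAB>user<TAB>source_ip<TAB>action';
    this format is illustrative, not a standard log schema.
    """
    hits = []
    for line in log_lines:
        ts, u, ip, action = line.rstrip("\n").split("\t")
        if u == user and datetime.fromisoformat(ts) >= since:
            hits.append({"time": ts, "ip": ip, "action": action})
    return hits
```

The output gives the full set of source IPs and actions tied to the compromised credential, which directly feeds the privilege-escalation check that follows it in the list.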
Chapter 5: Specific Incident Playbooks
Playbook A: Ransomware
Detection indicators:
- Files with new/unknown extensions (.locked, .encrypted, .WNCRY)
- Ransom note (README.txt, DECRYPT_INSTRUCTIONS.html)
- Rapid file modification events in EDR
- Shadow copies deleted (Event ID 524, vssadmin delete shadows)
- CPU spike on multiple systems
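The file-based indicators above can be swept for quickly with a filename scan. A triage-only sketch: the extension and note-name lists come from the indicators above and should be extended with variant-specific IOCs; a generic name like README.txt will false-positive, so confirm hits against EDR telemetry before declaring an incident:

```python
import os

# From the detection indicators above; extend as variants are identified.
SUSPECT_EXTENSIONS = {".locked", ".encrypted", ".wncry"}
RANSOM_NOTE_NAMES = {"readme.txt", "decrypt_instructions.html"}

def scan_for_ransomware_artifacts(paths):
    """Flag paths whose extension or filename matches known ransomware artifacts.
    Triage aid only; a hit is a lead, not a confirmation."""
    hits = []
    for p in paths:
        name = os.path.basename(p).lower()
        _, ext = os.path.splitext(name)
        if ext in SUSPECT_EXTENSIONS or name in RANSOM_NOTE_NAMES:
            hits.append(p)
    return hits
```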
Immediate actions (first 30 minutes):
- Do not pay the ransom — not yet; engage incident response team and legal first
- Do not reboot affected systems — may destroy forensic evidence; some ransomware has different behaviour on reboot
- Identify Patient Zero — first infected system; usually an email attachment or RDP compromise
- Identify propagation scope — which systems show encryption? Check EDR, file server logs
- Isolate all affected systems — at the network level
- Preserve backups — immediately verify backup integrity and isolate them from the network; ransomware frequently targets backups
Investigation:
- Identify the ransomware variant (upload a sample file and ransom note to ID Ransomware)
- Check whether a decryption key is available without payment (No More Ransom project)
- Determine the initial access vector — this must be closed before recovery begins
- Engage external IR firm if internal capability is insufficient
Ransom payment decision:
- Check whether payment is legally permissible (OFAC sanctions list — paying some ransomware groups is illegal)
- Engage legal counsel before any payment
- Consider: backup availability, RTO requirements, data sensitivity, attacker reputation for providing decryptors
- Payment does not guarantee decryption — track record varies by group
Recovery:
- Rebuild affected systems from clean OS images (not from snapshots of compromised systems)
- Restore data from last known-good backup
- Close the initial access vector before bringing systems back online
- Verify all systems are clean before reconnecting to network
- Monitor intensively for 30 days post-recovery
Playbook B: Data Breach
Detection indicators:
- DLP alert: large volume of data transferred externally
- SIEM alert: database query returning unusually large result sets
- Cloud storage: public access enabled on previously private bucket
- External notification: breach discovered by third party, researcher, or dark web monitoring
- User report: customer reports receiving their data unexpectedly
Immediate actions:
- Identify the breach vector — which system, which account, which vulnerability
- Stop the bleeding — close the exposure (disable the endpoint, fix the misconfiguration, revoke the credential)
- Scope the breach — what data was accessed/exfiltrated? What is the affected population?
- Engage Legal immediately — regulatory notification clocks start when the organisation becomes aware of the breach, not when the investigation concludes
- Preserve evidence — access logs, data transfer records, authentication logs
Data scope assessment:
| Data Type | Regulatory Trigger (India) | Regulatory Trigger (EU) | Trigger (UK) |
|---|---|---|---|
| PII (names, emails, phone) | DPDPA 2023 | GDPR — significant risk threshold | UK GDPR |
| Financial data | DPDPA + RBI | GDPR | UK GDPR + FCA |
| Health data | DPDPA + sensitive category | GDPR — high risk, likely mandatory | UK GDPR |
| Government IDs (Aadhaar) | DPDPA + sensitive category | — | — |
Regulatory notification timelines:
- GDPR (EU): notify the supervisory authority within 72 hours of becoming aware, unless the breach is unlikely to result in risk to individuals; notify data subjects without undue delay where the risk is high
- UK GDPR: 72 hours to ICO
- DPDPA India: notification timeline set by DPB (Data Protection Board) rules — pending final regulations
- PCI DSS: immediate notification to card brands and acquiring bank
- HIPAA: affected individuals within 60 days of discovery; HHS OCR without unreasonable delay (and within 60 days) for breaches affecting 500 or more individuals, otherwise via annual report
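Because the clock runs from the moment of awareness, the deadline should be computed and logged the instant the incident is classified. A minimal sketch; the regime names and hour values are taken from the list above, and DPDPA is omitted because its timeline is pending final Data Protection Board rules:

```python
from datetime import datetime, timedelta

# Fixed notification windows from the list above, in hours.
NOTIFICATION_WINDOWS_HOURS = {
    "GDPR": 72,
    "UK_GDPR": 72,
    "HIPAA_individuals": 60 * 24,
}

def notification_deadline(awareness_time: datetime, regime: str) -> datetime:
    """Deadline runs from awareness, not from the end of the investigation.
    Raises KeyError for regimes without a fixed statutory window."""
    return awareness_time + timedelta(hours=NOTIFICATION_WINDOWS_HOURS[regime])
```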
Playbook C: Account Compromise
Detection indicators:
- Login from unusual country/IP
- MFA challenge failures on a previously MFA-enrolled account
- Unusual activity post-login (mass data download, new inbox rules, forwarding rules)
- Password change request from unexpected source
- User reports not recognising activity in account
Immediate actions:
- Disable the account — not just password reset; an active session token remains valid
- Revoke all active sessions — Azure AD: “Revoke sign-in sessions”; AWS IAM: attach deny-all policy immediately, rotate access keys
- Audit account activity — what did the attacker do with the account? (Email forwarded? Files accessed? New accounts created?)
- Check for persistence — new inbox rules, OAuth apps granted access, MFA methods added, new admin accounts
- Notify the account owner — out-of-band (phone call if email is compromised)
Email account compromise specifics:
- Check inbox rules for forwarding to external addresses (attackers set up silent forwarding)
- Check for sent emails from the compromised account (BEC attempt, phishing sent to contacts)
- Check OAuth app permissions — apps granted “access to email” or “send on behalf” may maintain access even after password reset
- Check for data downloaded from OneDrive/SharePoint/Google Drive
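The inbox-rule check above is mechanical and worth scripting. A sketch assuming a mailbox-rules export shaped like a list of dicts (`name`, `forward_to`, `enabled`); that shape is an assumption for illustration, so adapt it to what your mail platform actually returns:

```python
def suspicious_forwarding_rules(rules, internal_domains):
    """Flag enabled inbox rules that forward mail outside the organisation,
    a common persistence mechanism after email account compromise.

    `rules` is assumed to look like:
        {"name": ..., "forward_to": ["addr", ...], "enabled": True}
    """
    flagged = []
    for rule in rules:
        if not rule.get("enabled", False):
            continue
        for addr in rule.get("forward_to", []):
            domain = addr.rsplit("@", 1)[-1].lower()
            if domain not in internal_domains:
                flagged.append(rule["name"])
                break  # one external target is enough to flag the rule
    return flagged
```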
Business Email Compromise (BEC) response: If the compromised account was used for financial fraud (wire transfer request, invoice fraud):
- Immediately contact the financial institution to attempt to recall the transfer
- File a report with law enforcement (CERT-In, FBI IC3, or relevant authority)
- Contact the target organisation if another company’s employees were deceived
- Engage legal counsel
Playbook D: Cloud Security Incident
Detection indicators:
- AWS GuardDuty/Azure Defender/GCP SCC alert
- Billing spike in unfamiliar regions
- New resources created in unexpected regions
- CloudTrail/Activity Log shows API calls from unexpected IPs or for unusual services
- Access key used after being rotated (old key still active)
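The unexpected-region indicators above can be triaged by grouping management-plane events by identity. A sketch over CloudTrail-style records; the record shape (`awsRegion`, `eventName`, `userIdentity.arn`) matches CloudTrail's JSON fields, but treat the exact parsing as an assumption and adapt it to your log export:

```python
from collections import defaultdict

def unexpected_region_activity(events, expected_regions):
    """Group events occurring in regions the organisation does not use,
    keyed by the acting identity. Resource creation in an unused region
    is a common cryptomining indicator."""
    by_identity = defaultdict(list)
    for e in events:
        if e["awsRegion"] not in expected_regions:
            arn = e.get("userIdentity", {}).get("arn", "unknown")
            by_identity[arn].append((e["awsRegion"], e["eventName"]))
    return dict(by_identity)
```

The keyed output points straight at the credential to disable in the first step of the immediate actions below.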
Immediate actions:
- Identify the compromised credential — review management plane logs for the identity performing unexpected actions
- Disable the credential — IAM user, service account, or API key
- Terminate unauthorized resources — especially in regions you don’t use (cryptomining indicator)
- Assess scope — which resources were accessed? Was any data exfiltrated from S3/Blob/GCS? RDS/database accessed?
- Rotate all credentials — if one credential is compromised, assume others may be; rotate all access keys and service account keys
- Engage cloud provider support — AWS, Azure, and GCP all have security incident escalation paths and may provide credits for abuse
AWS-specific:
- GuardDuty finding: UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration (EC2 instance credentials used outside the instance)
- Use CloudTrail Lake for fast log analysis
- Use IAM Access Analyzer to identify external sharing
Chapter 6: Eradication
Once the attacker is contained, eradicate their presence completely before recovery.
Eradication Checklist
- Identify and remove all malware (full AV scan, EDR sweep of all endpoints)
- Remove attacker-created accounts (local and domain)
- Remove attacker-created persistence mechanisms (scheduled tasks, services, registry run keys, cron jobs, SSH authorized_keys)
- Close the initial access vector (patch the vulnerability, close the port, remove the phished credential)
- Audit and reset all privileged credentials (assume all are compromised for major incidents)
- Remove attacker’s backdoors and tooling
- Verify integrity of critical systems (compare file hashes against known-good baselines)
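The final checklist item (comparing file hashes against known-good baselines) is straightforward to script with the standard library. A minimal sketch; the baseline format (`{path: expected_sha256}`) is an assumption for illustration:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large binaries are not loaded into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def integrity_violations(baseline: dict) -> list:
    """Return paths whose current hash differs from the known-good baseline
    ({path: expected_sha256}); any mismatch needs investigation before the
    system returns to service."""
    return [p for p, expected in baseline.items() if sha256_of(p) != expected]
```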
Chapter 7: Recovery
Recovery returns affected systems to normal operation.
Recovery Sequence
- Restore from clean backups — verify backup integrity before restoration
- Rebuild compromised systems from known-good OS images if backup integrity is in doubt
- Patch the vulnerability exploited for initial access before bringing systems back online
- Implement additional controls identified during the investigation
- Staged return — bring systems back in stages, monitoring intensively at each stage
- Validate normal operation before declaring incident closed
Recovery Verification
- User access restored and functional
- Business processes operating normally
- No re-detection of malware or attacker activity in EDR/SIEM
- Backup systems verified operational for the next incident
Chapter 8: Post-Incident Review
Within 48 Hours — Hot Review
Capture immediate observations while memory is fresh:
- What happened, when, and in what sequence?
- What worked well in the response?
- What slowed the response?
- Any decisions that would be made differently?
Within 2 Weeks — Formal Post-Incident Review
The post-incident review (PIR) should produce:
Root cause analysis:
- Initial access vector
- Why was this vector undetected or unprotected?
- How did the attacker escalate/move laterally?
- Why was detection delayed?
Remediation actions: Each finding should become an action item with an owner and deadline:
| Finding | Action | Owner | Deadline |
|---|---|---|---|
| Phishing email bypassed email filter | Update email security rules; deploy DMARC enforcement | IT Security | 2 weeks |
| MFA not enabled on VPN | Enable MFA on VPN for all users | IT Ops | 1 week |
| No network segmentation — rapid lateral movement | Implement VLAN segmentation for critical systems | IT Arch | 30 days |
Report: Post-incident report should be circulated to CISO, Legal, and relevant business stakeholders. For significant incidents, a board-level summary is appropriate.
Conclusion
An incident response playbook is not a guarantee — it is a structured approach that dramatically improves response effectiveness. Organisations that practice and test their playbooks respond faster, contain incidents more effectively, and emerge with lower overall impact.
Test your playbook: Conduct a tabletop exercise at least annually. Walk through a realistic scenario (ransomware attack, data breach, account compromise) and identify gaps in your plans, tooling, and communication chains.
CyberneticsPlus provides incident response retainer services, tabletop exercises, and post-incident investigation support. Contact us to build or test your incident response capability.