Frequently Asked Questions
Topics include basic concepts, crisis team & roles, the BOB method, the Logger, preparation & practice, laws and regulations, cyber crises vs. other crises, crisis plans & runbooks, and CCRC itself
Fundamentals
Cybersecurity crisis management is the framework of processes, decision-making structures, and communication lines through which an organization manages, limits, and recovers from a serious cyber incident. It goes beyond technical response: it also encompasses administrative coordination, legal obligations, internal and external communication, and safeguarding business continuity.
Not every incident is a crisis. An incident becomes a crisis the moment its impact exceeds the organization's operational resilience: think of prolonged downtime of critical systems, reputational damage, significant financial consequences, or a situation that requires executive attention. Recognizing that threshold early is crucial.
Incident response is operational and technical: detecting, containing, and remediating an attack. Crisis management is strategic: coordinating the organization as a whole, making decisions under time pressure, and managing internal and external stakeholders. Both are necessary but require different roles and skills.
Crisis Team & Roles
An effective crisis team has at least three fixed roles: a lead, a logger, and a communication expert.
The lead is the central figure of the crisis team. This is not the CISO or the person responsible for security: during a cyber crisis, they have their own critical role as subject matter expert. The lead needs to be someone with enough authority from the executive board to act quickly and decisively. Ideally, the executive board itself is not part of the crisis team: this slows down decision-making and pulls the top of the organization away from their own responsibilities.
Instead, the board is closely involved via a fixed line of communication with the lead. Crucial decisions are prepared in terms of content by the crisis team and then taken in coordination with the board. This ensures decision-making remains both fast and administratively supported.
The logger ensures information management: an accurate timeline of actions, decisions, and findings is indispensable, both during the crisis and afterwards for evaluation, accountability, and any legal or regulatory obligations.
The communication expert manages all internal and external messaging. During a crisis, the risk of contradictory messages is high; a single central point of contact prevents confusion and reputational damage.
Depending on the nature of the incident, additional expertise is brought in, such as legal and compliance advisors, IT/security operations, HR, or operations.
Practice consistently points to the same pitfalls: escalating too late, unclear decision-making lines, poor internal communication, delaying external notifications (including legally required reports to regulators), and the lack of a predetermined communication protocol. Many of these mistakes are avoidable with proper preparation.
BOB Method
BOB stands for Fact-finding (Beeldvorming), Judgment (Oordeelsvorming), and Decision-making (Besluitvorming). It is a structured meeting method that helps crisis teams reach decisions in an orderly manner under time pressure. By consciously separating the three phases, a team prevents jumping to solutions too quickly before a shared understanding of the situation exists.
In the Fact-finding phase, one question is central: what do we know? The team collects facts, signals, and uncertainties without judgment. The goal is shared situational awareness: everyone at the table must have the same picture before further steps are taken.
In the Judgment phase, the picture is interpreted: what does this mean? What options are available? What risks are associated with each choice? This is the phase of analysis and weighing, not of deciding.
In the Decision-making phase, a decision is made based on the shared picture and the weighed options. The lead explicitly closes the phase with a clear decision, including who does what and within what timeframe.
During a cyber crisis, a team is under extreme pressure: information is incomplete, emotions run high, and everyone has an opinion. Without structure, tunnel vision occurs, dominant speakers dictate the discussion, and decisions are made before the team agrees on the facts. BOB breaks this pattern. By explicitly naming the phases — 'we are now in fact-finding, we'll save judgments for later' — the lead safeguards the quality of the decision-making process, even when pressure is at its highest.
BOB is not intended for every operational decision; that would slow down the process. It is most valuable at moments when the crisis team gathers for a joint meeting: at the first crisis meeting to interpret the situation, at every subsequent crisis meeting to update decisions, and at crucial choice moments such as whether or not to pay ransom, inform customers, or take systems offline. These specific decisions deserve a structured process.
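The phase discipline described above can be sketched as a tiny state machine that the lead advances explicitly. This is a hypothetical illustration only; the class and all names are invented for the sketch:

```python
from enum import Enum

class BobPhase(Enum):
    FACT_FINDING = 1     # what do we know?
    JUDGMENT = 2         # what does it mean, which options, which risks?
    DECISION_MAKING = 3  # who does what, within what timeframe?

class BobMeeting:
    """Illustrative sketch: contributions are filed per phase, and only
    the lead moves the meeting forward by explicitly closing a phase."""

    def __init__(self):
        self.phase = BobPhase.FACT_FINDING
        self.notes = {phase: [] for phase in BobPhase}

    def record(self, text):
        # A contribution lands in whichever phase the meeting is in.
        self.notes[self.phase].append(text)

    def close_phase(self):
        # The lead explicitly closes a phase before the next one begins.
        if self.phase is BobPhase.DECISION_MAKING:
            raise RuntimeError("decision-making is the final phase")
        self.phase = BobPhase(self.phase.value + 1)

meeting = BobMeeting()
meeting.record("Backup server unreachable since 02:14")   # fact-finding
meeting.close_phase()                                     # -> judgment
meeting.record("Option: restore from off-site backup")
meeting.close_phase()                                     # -> decision-making
meeting.record("Restore tonight; owner: IT ops; deadline 08:00")
print(meeting.phase.name)  # DECISION_MAKING
```

The point of the sketch is the enforced ordering: nothing here prevents discussion, but every note is traceably tied to the phase in which it was made, mirroring how the lead keeps judgments out of fact-finding.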
The lead is the guardian of the BOB process. It is their task to actively monitor the phases, bring participants back if they jump to solutions too early, and explicitly close each phase before the next begins. An effective chair interferes as little as possible with the content of the discussion; that role is process-oriented, not content-oriented. That is exactly why the CISO or security manager should not be the lead: they have too much involvement in the content to monitor the process objectively.
The Logger
The logger is one of the three minimum required roles within a crisis team, but in practice, it is sometimes underestimated. The core of the role is twofold: recording everything that happens, and relieving the lead of operational tasks so they can remain fully focused on strategic direction. A lead who takes their own notes, schedules meetings, and monitors actions is no longer leading; they are doing administration. The logger prevents this.
During every crisis meeting, the logger maintains an accurate and structured logbook. This includes: decisions made (including who made the decision and based on what information), assigned actions (including owner and deadline), the timeline of events, established facts, and explicit assumptions. The distinction between facts and assumptions is crucial; in a crisis, a team often works on incomplete information, and it is important that this remains traceable.
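A logbook with these elements can be modeled as a simple data structure. The sketch below is illustrative only (all field names and entries are invented); it shows how facts and assumptions stay traceable as separate categories:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LogEntry:
    timestamp: datetime
    kind: str           # "decision", "action", "fact", or "assumption"
    text: str
    owner: str = ""     # for actions: who executes it; for decisions: who decided
    deadline: str = ""  # for actions: when it must be done

# A fragment of a hypothetical crisis logbook:
logbook = [
    LogEntry(datetime(2024, 5, 1, 9, 30), "fact", "File server encrypted"),
    LogEntry(datetime(2024, 5, 1, 9, 32), "assumption",
             "Initial access via VPN account (not yet confirmed)"),
    LogEntry(datetime(2024, 5, 1, 9, 40), "decision",
             "Isolate VPN gateway", owner="Crisis lead"),
    LogEntry(datetime(2024, 5, 1, 9, 41), "action",
             "Disable all external VPN access", owner="IT ops", deadline="10:00"),
]

# Because every entry carries its kind, assumptions can later be
# revisited and either confirmed as facts or discarded:
assumptions = [entry for entry in logbook if entry.kind == "assumption"]
```

Whether the logbook lives in dedicated software or a shared document, the same principle applies: each entry needs a timestamp, a category, and (for decisions and actions) an owner.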
The logger does not stop when the meeting ends. After each meeting, the logger prepares a concise report summarizing the decisions, actions, and the current state of affairs: this report forms the basis for the next meeting. Additionally, the logger organizes the follow-up meeting: time, participants, and agenda. In between, the logger actively monitors the follow-up of assigned actions: are owners working on them? Are deadlines being met? Where is adjustment needed? This is reported back to the lead.
The lead has one primary task during a crisis: strategic direction. This is only possible if they are not distracted by operational concerns. By taking all logistical and administrative work (recording, reporting, action monitoring, meeting organization) off the lead's hands, the logger keeps the lead free for what matters: situation assessment, decision-making, and coordination with the board. In that sense, the logger is the silent engine behind a well-functioning crisis team.
After an incident, the logger's crisis logbook is more than an internal document. In an investigation by a regulator, a legal procedure, or a NIS2 accountability report, a well-maintained logbook provides evidence of demonstrable action: which decisions were taken when, based on what information, and who was responsible. Organizations that lack structured recording during a crisis are left empty-handed afterwards, even if they made the right choices in terms of content.
Professional loggers prefer working with dedicated crisis management software instead of separate Word documents or notebooks. Such tools provide a structured environment for recording decisions, actions, timelines, and facts, with a real-time overview that all crisis team members can view simultaneously.
Well-known examples:
- CrisisSuite (Dutch, by Merlin Software): an all-in-one platform for the Dutch market, supports BOB and offers modules for alerting and action management.
- D4H: international platform, strong in real-time collaboration and customizable dashboards per crisis type.
- Noggin: combines crisis management with business continuity planning and threat intelligence, scalable for large organizations.
- Everbridge: for large organizations, with centralized crisis response and interactive dashboards.
For smaller organizations or training situations, a structured logbook in Microsoft OneNote or SharePoint can also be used, provided the structure is well-agreed upon beforehand. The tool is a means, not an end: what counts is that the recording is complete, structured, and traceable.
Preparation & Practice
Preparation consists of four pillars: (1) an up-to-date and tested crisis plan, (2) clear roles and decision-making lines, (3) regular exercises — preferably realistic scenario exercises — and (4) a culture in which incidents are reported promptly. An organization that has never practiced discovers its vulnerabilities at the worst possible moment.
A cyber crisis exercise simulates a realistic attack scenario to test how people, processes, and systems react under pressure. The value lies not only in exposing technical gaps, but specifically in testing decision-making, communication, and collaboration. A good exercise yields more actionable insights than a year's worth of policy documents.
For organizations falling under the NIS2 directive, exercising is not an optional best practice but a demonstrable obligation. In the Dutch Cybersecurity Decree (the AMvB implementing NIS2), Article 9, paragraph 3 explicitly states that essential and important entities must have an established business continuity plan, apply that plan in the event of an incident, and test that plan periodically.
The law therefore does not just ask for a plan; it asks for proof that it works. Organizations that have a crisis plan on the shelf but have never practiced do not formally meet the NIS2 requirements.
'Demonstrable' is the keyword. Regulators can ask to see that exercises have actually taken place and that the outcomes have been used to improve the organization. In practice, this means:
- Documentation of exercises: date, participants, scenario, findings, and follow-up.
- Periodic repetition: a one-off exercise two years ago is not sufficient; the law assumes a recurring rhythm.
- Involvement of the right levels: a purely technical exercise is insufficient. NIS2 focuses on the organization as a whole, including the board level.
- Demonstrable improvements: findings from exercises must demonstrably have led to adjustments in the plan or the organization.
A well-documented cyber crisis exercise is therefore not just a learning tool; it is also an accountability document for regulators.
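The documentation elements listed above can be kept as a simple structured record per exercise. The sketch below is a hypothetical example; all names, findings, and dates are invented:

```python
from dataclasses import dataclass, field

@dataclass
class ExerciseRecord:
    date: str
    scenario: str
    participants: list
    findings: list = field(default_factory=list)
    follow_up: list = field(default_factory=list)  # demonstrable improvements

record = ExerciseRecord(
    date="2024-03-12",
    scenario="Ransomware with data exfiltration",
    participants=["Board member", "CISO", "Crisis lead", "Logger"],
    findings=["Escalation to the board took 45 minutes"],
    follow_up=["Direct board contact line added to the crisis plan"],
)
```

The `follow_up` field is the one regulators care most about: it links each finding to a concrete adjustment, which is exactly the "demonstrable improvement" the decree asks for.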
Laws and Regulations
Whether and where an incident must be reported depends on the sector and the nature of the incident. Under the GDPR, a data breach that poses risks for data subjects must be reported to the Data Protection Authority (Autoriteit Persoonsgegevens) within 72 hours. Organizations falling under the NIS2 directive have additional reporting obligations for significant incidents. Legal advice during a crisis is therefore not a luxury, but a necessity.
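As a practical aid, the 72-hour GDPR window can be computed from the moment the organization becomes aware of the breach. A minimal sketch (the function name is invented):

```python
from datetime import datetime, timedelta

def gdpr_notification_deadline(aware_since: datetime) -> datetime:
    """GDPR requires notifying the supervisory authority within 72 hours
    of becoming aware of a reportable data breach."""
    return aware_since + timedelta(hours=72)

# Breach discovered on 1 May at 14:00 -> notify by 4 May at 14:00.
deadline = gdpr_notification_deadline(datetime(2024, 5, 1, 14, 0))
print(deadline)  # 2024-05-04 14:00:00
```

Note that the clock runs in calendar hours, not business hours: weekends and holidays do not pause it, which is one reason the logger's timestamped record of when the breach was discovered matters.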
Cyber Crisis vs. Other Crises
In a fire, a power outage, or an industrial accident, the cause is usually quickly visible and the scope can be determined fairly fast. In a cyber crisis, this is fundamentally different. The attack began invisibly, sometimes weeks or months earlier, and the true extent is rarely known at the moment of discovery. As a result, a crisis team operates in deep uncertainty: what exactly has been hit, how far does the impact reach, and is the attacker still active in the systems? That uncertainty makes cyber crises particularly complex in terms of decision-making.
The basic structure (lead, logger, and communication expert) is the same for all crises. What differs is the content-related composition of the team. In a fire or industrial accident, safety experts, health and safety specialists, and facility management form the core. In a cyber crisis, security expertise is indispensable: the CISO or an equivalent role plays a central content-related role. Additionally, in a cyber crisis, legal expertise and external forensic specialists are often needed at an early stage; in other crisis types, they usually come in later.
A fire is either extinguished or it isn't. A power outage is resolved as soon as the power returns. A cyber crisis rarely has such a clear endpoint. Restoring systems is technically complex and time-consuming. But even more importantly: as long as the team has not fully mapped out how the attacker got in, which systems were hit, and whether backdoors are still open, recovery is risky. The crisis period, and therefore the pressure on the crisis team, can last for days or weeks. This requires a different kind of endurance than acute physical crises do.
In a fire or major industrial accident, external communication is relatively straightforward: there is a visible event, and the environment understands what happened. In a cyber crisis, communication is strategically much more complicated. Communicating too early or in too much detail can give the attacker information about what has or hasn't been discovered. At the same time, there are legal reporting obligations, under GDPR and NIS2, that require communication within strict timeframes. And unlike physical crises, cybercrime also raises the question: do we communicate about an attack where we don't yet know the perpetrator, and which might still be ongoing?
A cyber crisis can indeed spread beyond the organization itself, and that is an underestimated characteristic. A fire primarily affects one's own building. A cyber crisis, especially with ransomware or supply chain attacks, can quickly spread to customers, suppliers, and chain partners. Think of a software supplier being hacked, affecting hundreds of customer organizations, or an attack on a port operator that paralyzes the logistics chain. This chain impact requires crisis coordination that transcends one's own organizational boundaries, something that is less likely to occur with most other crisis types.
An existing crisis plan is only partially reusable. The governance structure, escalation logic, and communication principles are generically applicable. However, a crisis plan designed for physical incidents falls short in three areas for a cyber crisis: it lacks technical decision trees for digital scenarios, it doesn't account for specific reporting obligations under NIS2 and GDPR, and it often assumes the communication infrastructure itself is intact. In a cyber crisis especially, email, telephony, or the internal network can be part of the problem, and the crisis team must fall back on pre-agreed alternative communication channels.
Crisis Plans & Runbooks
A crisis plan describes the governance structure and organization surrounding a crisis: who is in the crisis team, how escalation occurs, who communicates externally, which reporting obligations apply, and how decisions are made. It is the strategic framework within which a crisis is managed.
A runbook is operational and technical: it describes step-by-step which actions are performed for a specific type of incident, such as isolating an infected system, resetting accounts, or activating a backup environment. While the crisis plan governs the 'who and how' of the organization, the runbook governs the 'what and when' of the technical response.
As short as possible, as complete as necessary. A crisis plan that isn't read has no value, and thick documents are rarely read in practice, let alone during the stress of a crisis. A workable crisis plan preferably fits on a limited number of pages and contains only what the crisis team actually needs at the moment of the crisis: roles, escalation lines, decision mandates, and communication protocols. The rest is an appendix.
A crisis card is a compact summary of the crisis plan for a specific scenario, ideally one A4 or one full-screen page. It contains the most critical information: what type of incident is this, who do you activate first, what are the first three actions, who has which mandate, and which reporting obligations apply?
A crisis team member under pressure doesn't have time to sift through a thirty-page document. A crisis card provides guidance at a glance. Experience shows that a good crisis card is worth more than an extensive plan that disappears into a drawer.
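A crisis card's contents can be captured in a single compact structure. The sketch below is a hypothetical example for a ransomware scenario; all names, mandates, and deadlines are illustrative, not prescriptive:

```python
# Hypothetical ransomware crisis card, condensed into one structure.
crisis_card = {
    "scenario": "Ransomware",
    "activate_first": ["Crisis lead", "Logger", "Communication expert"],
    "first_actions": [
        "Isolate affected systems from the network",
        "Convene the crisis team via the agreed out-of-band channel",
        "Preserve logs and forensic evidence",
    ],
    "mandates": {
        "Crisis lead": "May take systems offline without prior board approval",
    },
    "reporting_obligations": [
        "GDPR: 72 hours, if personal data is involved",
        "NIS2: early warning for significant incidents",
    ],
}
```

Everything a team member needs at a glance fits in five fields; anything that doesn't fit belongs in the full crisis plan, not on the card.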
Most cyber crises fall into five recognizable scenarios. For each of these, a separate crisis card is useful:
- DDoS – systems or services are made unreachable by an overwhelming amount of traffic.
- Data breach – unauthorized access to or theft of personal data or confidential information, with immediate GDPR reporting obligation.
- Ransomware – systems are encrypted and held hostage, often combined with data theft and extortion.
- Supply chain attack – the attack occurs via a supplier, partner, or software vendor and affects the organization indirectly.
- APT (Advanced Persistent Threat) – an advanced, long-term attack where an actor remains hidden in the systems, often aimed at espionage or sabotage.
These five scenarios cover the vast majority of cyber crises that affect organizations in practice. The prioritization of actions, the composition of the crisis team, and the communication strategy differ per scenario.
Crisis team members should always have their crisis plan or crisis card within reach, even outside office hours and even if systems are down. The most practical solution is a secured digital document on each member's own phone, or one shared with all crisis team members via a pre-arranged secure channel such as Signal. This way, the information remains accessible even when email or the internal network is unavailable.
A paper printout in a bag seems practical but has serious drawbacks: the document becomes outdated, gets lost, or is stolen. Crisis plans usually contain highly sensitive information — escalation lines, mandate holders, technical contacts, and private data of crisis team members. A lost crisis plan is therefore a security incident in itself.
A crisis plan dates quickly: people change roles, systems change, and threats evolve. Therefore, schedule fixed moments to review the plan — at least once a year, and always after an exercise or a real incident. Link the review to a specific owner. A crisis plan without an owner is a document in decay. Scenario-specific crisis cards can be updated faster than the full plan — making them more practical to maintain as well.
About CCRC
CCRC's mission is to make organizations and their supply chains more resilient against the growing threat of cybercrime. We believe that true cyber resilience doesn't stop at the boundaries of a single organization — it's about the chain as a whole. CCRC helps organizations take, understand, and structurally implement that chain responsibility.
CCRC envisions a world where organizations deal with cyber threats proactively rather than reactively — not because it's legally required, but because they understand how vulnerable mutual dependencies are. Our vision is that cyber resilience is a shared responsibility: between departments, between organizations, and between sectors. Only those who know the chain can protect the chain.
Chain resilience means that not only is your own organization resistant to a cyberattack, but so are the suppliers, customers, and partners you work with. An organization can be excellently secured internally and still be heavily affected via a vulnerable supplier or partner. CCRC helps organizations map those mutual dependencies and stand stronger together as a chain.
Many organizations do not know exactly which chain partners play a critical role in their business operations — and therefore do not know where the greatest vulnerabilities lie. CCRC maps this systematically: which parties are indispensable? Which digital links exist? Where are the weakest links? This analysis forms the basis for targeted measures and a joint approach to chain risks.
Mature crisis management is not a final destination, but a growth path. CCRC guides organizations step-by-step: from mapping the current situation, through drafting or refining crisis plans and setting up the right roles and structures, to designing and facilitating realistic exercises. We measure progress, ensure anchoring, and help organizations learn from every exercise and every incident.
Cyber crises come in many forms: ransomware, data breaches, DDoS attacks, sabotage via a supplier, or an attack on critical infrastructure. Being resilient to every form means that an organization doesn't need a separate plan for every scenario, but possesses a robust crisis structure flexible enough to handle any type of incident. CCRC helps organizations build that structural resilience — regardless of the specific scenario.
CCRC works for organizations that play a critical role in their sector or chain — from financial institutions and healthcare organizations to industrial companies, government agencies, and logistics service providers. What our clients have in common is that a cyber crisis for them doesn't just affect their own organization, but has direct consequences for customers, partners, or society.
CCRC combines three areas of expertise that rarely come together in one party: in-depth knowledge of offensive security and red teaming, broad experience in crisis management and exercise design, and an explicit focus on the chain rather than the individual organization. We do not advise from theory, but from the practice of real attacks, real crises, and real exercises.