If you were to describe a typical attack scenario on a company in a few steps – from the initial entry point to full infrastructure takeover – what would it look like?
First, let me clarify how the attack itself should be perceived, because it rests on fundamentals we must understand first. In cybersecurity, there is still a convenient myth that organizations lose because attackers are becoming increasingly advanced. In practice, the opposite is true. Most incidents do not result from breakthrough techniques but from predictable flaws in environment design, identity management, and a lack of control over privilege flow. From the perspective of someone who regularly simulates attacks on enterprise environments, it is clear that an attack is not a chaotic event. It is a process of navigating a system of dependencies that the organization itself created. That is precisely why it can be predicted. An attack is often a consequence of architecture, not just an incident in the classical sense. The biggest strategic mistake lies in treating an attack as “entering the system.” In reality, entry is only the initial moment. The key is what the environment allows you to do next.
In practice, this means that a single entry point is rarely a problem in itself; what is decisive is how identities are managed and used within the infrastructure. An attack develops according to the logic of the environment, not just the creativity of the attacker. If the infrastructure allows one identity to be abused to obtain others, then the attack doesn’t need to be sophisticated. It only needs to be consistent.
Credential theft, Pass-the-Hash, Kerberoasting – why are these techniques so effective today, and why is MFA alone not a sufficient answer?
The main task of security teams is to do everything possible to lower the risk of a breach. MFA has been bypassable for many years now, and with a well-executed phishing campaign it is relatively simple to do. The problem, therefore, lies not in whether we have a specific solution, but in how we approach a specific security technique. Pass-the-Hash, Kerberoasting, and many other attacks all have known countermeasures or ways to mitigate the risk. From a strategic perspective, the causes are repetitive (a sketch for measuring the Kerberoasting surface follows this list):
1. Lack of control over identity – privileges are granted ad hoc, without full visibility of their consequences.
2. Excessive trust in the architecture and a lack of regular, well-executed (not ad hoc) audits – here, the question always arises of whose skills we trust. The team performing the tests must be experienced not only in testing itself but also in infrastructure security in general – in other words, a complete team.
3. Lack of logical segmentation, not just network segmentation – boundaries are not defined by risk levels, but by organizational structure.
4. Focus on tools instead of the security model – technology often masks problems instead of solving them.
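To make the Kerberoasting point concrete: the attack surface is simply the set of user accounts carrying a service principal name, and it can be measured. Below is a minimal Python sketch, assuming the ldap3 library and read access to the directory; the server address, account, and base DN are hypothetical placeholders, not values from this interview.

```python
# A minimal sketch of inventorying the Kerberoasting surface, assuming the
# ldap3 library (pip install ldap3); host, credentials, and base DN are
# hypothetical placeholders.
from ldap3 import Server, Connection, NTLM, SUBTREE

def kerberoastable_accounts(server_uri, user, password, base_dn):
    """Return user accounts that carry an SPN -- the classic Kerberoasting surface."""
    conn = Connection(Server(server_uri), user=user, password=password,
                      authentication=NTLM, auto_bind=True)
    # Person objects (not computer accounts) with at least one servicePrincipalName:
    # any authenticated user can request a service ticket for these and try to
    # crack it offline.
    conn.search(base_dn,
                "(&(objectCategory=person)(objectClass=user)(servicePrincipalName=*))",
                search_scope=SUBTREE,
                attributes=["sAMAccountName", "servicePrincipalName", "memberOf"])
    return [(str(e.sAMAccountName), [str(s) for s in e.servicePrincipalName])
            for e in conn.entries]

if __name__ == "__main__":
    for name, spns in kerberoastable_accounts("ldap://dc01.corp.example",
                                              "CORP\\auditor", "placeholder",
                                              "DC=corp,DC=example"):
        print(name, spns)
```

Every account such a query returns can be attacked offline by any authenticated user, so each one needs a long, regularly rotated password (or migration to a managed service account) regardless of whether MFA is in place.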

What are the most surprising security vulnerabilities your team encounters during audits? Are there errors that still surprise you?
In planning a cybersecurity strategy, the most important thing is to achieve a state of complete transparency – that is, identifying the data sources that will tell us what is happening in the infrastructure, and then correlating that data in a good SIEM system. For me, this is the “hello world” of cybersecurity, and I consider its absence one of the major security gaps. Even a theoretically secure cloud requires a thorough look at what information we log in it so that, for example, during an incident, we can understand what happened. In practice, we almost never see a cloud environment aligned with current security standards, and the same goes for Active Directory. In both areas, objects are often granted permissions that are unnecessary and too broad – and this frequently leads to escalation. In Active Directory, for example, the GenericWrite permission might be granted to an external application’s service account; in the bigger picture, this turns out to be lethal for the infrastructure.
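As an illustration of how such GenericWrite grants can be hunted down, here is a rough sketch assuming the ldap3 and impacket Python libraries; the connection details are hypothetical, and real tooling such as BloodHound resolves inheritance and property-specific rights far more thoroughly – this only flags the blunt GENERIC_WRITE bit.

```python
# A rough sketch of hunting for GenericWrite ACEs over AD user objects,
# assuming the ldap3 and impacket libraries; host, credentials, and base DN
# are hypothetical placeholders.
from ldap3 import Server, Connection, NTLM, SUBTREE
from ldap3.protocol.microsoft import security_descriptor_control
from impacket.ldap import ldaptypes

def generic_write_grants(server_uri, user, password, base_dn):
    conn = Connection(Server(server_uri), user=user, password=password,
                      authentication=NTLM, auto_bind=True)
    # Ask the DC to return only the DACL portion of the security descriptor.
    controls = security_descriptor_control(sdflags=0x04)
    conn.search(base_dn, "(objectClass=user)", search_scope=SUBTREE,
                attributes=["sAMAccountName", "nTSecurityDescriptor"],
                controls=controls)
    for entry in conn.response:
        if entry.get("type") != "searchResEntry":
            continue
        raw_sd = entry["raw_attributes"]["nTSecurityDescriptor"][0]
        sd = ldaptypes.SR_SECURITY_DESCRIPTOR(data=raw_sd)
        for ace in sd["Dacl"].aces:
            if ace["AceType"] != ldaptypes.ACCESS_ALLOWED_ACE.ACE_TYPE:
                continue
            if ace["Ace"]["Mask"].hasPriv(ldaptypes.ACCESS_MASK.GENERIC_WRITE):
                # Who holds GenericWrite over this object? Resolve the SID later.
                print(entry["attributes"]["sAMAccountName"],
                      "<- GenericWrite by", ace["Ace"]["Sid"].formatCanonical())
```

The point of such a sweep is not the tooling itself but the question it forces: does anyone actually know who can write to whom in the directory?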
Many organizations implemented Active Directory years ago and have changed little in the configuration since then. How big a risk does this pose, and where should they start “cleaning up”?
In environments based on Active Directory (which is practically all of them), an organization’s actual security level is directly proportional to the quality of its configuration; the same applies to the cloud or other components. Of course, new attack techniques also emerge, and one must react to them. A good example is PetitPotam (an attack from years ago that often still works today), which showed how authentication can be coerced and used in relay scenarios, particularly in the context of Certificate Services (Active Directory Certificate Services).
In practice, however, the key is not reacting to individual techniques but eliminating the classes of problems that make them possible. In the case of AD CS, this means, among other things, removing the legacy web enrollment page through which a new certificate can be requested, e.g., for authentication. It is also worth noting that the NTLM authentication protocol, including NTLMv2, is now a de facto obsolete mechanism being systematically phased out, yet it is still widespread in enterprise environments. This is mainly due to backward compatibility and dependence on older applications that still require the protocol. From my point of view, it enables the vast majority of “zero-to-hero” attacks – meaning that with little more than network access, an attacker can take over the most privileged account, the Domain Admin. I hope every company is already considering how to remove it or has already done so.
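Phasing NTLM out starts with knowing who still depends on it. The sketch below is a stdlib-only illustration that assumes NTLM audit events (for example, Event ID 8004 from the Microsoft-Windows-NTLM/Operational log on domain controllers) have been exported to a CSV; the column names are invented for the example.

```python
# A stdlib-only sketch of inventorying remaining NTLM usage before disabling it.
# Assumes NTLM audit events have been exported to a CSV with the hypothetical
# columns: timestamp, workstation, user, target_server.
import csv
from collections import Counter

def ntlm_dependency_report(csv_path):
    clients, targets = Counter(), Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            clients[row["workstation"]] += 1
            targets[row["target_server"]] += 1
    # The systems listed here are the ones that would break if NTLM were
    # switched off tomorrow -- fix or retire them first, then disable NTLM.
    print("Top NTLM clients:", clients.most_common(10))
    print("Top NTLM targets:", targets.most_common(10))

ntlm_dependency_report("ntlm_8004_export.csv")
```

Running NTLM in audit-only mode first, then disabling it once the report is empty, turns a risky flag-flip into a controlled migration.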
From a cybersecurity strategist’s perspective, maintaining control over the state of the environment over time matters far more than individual vulnerabilities. Active Directory very rarely “breaks” all at once. Much more often, it degrades gradually through successive operational changes: extended permissions, added exceptions, shortened access paths. Regular reviews such as Health Checks should therefore be treated not as audits but as an element of continuous risk management. Their goal is not to find individual errors but to identify excessive permissions, uncontrolled trust relationships, escalation paths resulting from configuration, and deviations from the adopted security model.
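One concrete way to operationalize this continuous view is to treat privileged-group membership like code: keep an approved baseline and alert on drift. A minimal sketch, with a hypothetical baseline file and invented sample data; fetching real membership would use LDAP as in the earlier sketch.

```python
# A minimal sketch of "health check as continuous risk management": diff the
# current membership of sensitive groups against an approved baseline instead
# of auditing once a year. Baseline file and group names are hypothetical.
import json

def drift(baseline_path, current_membership):
    with open(baseline_path) as f:
        baseline = json.load(f)  # e.g., {"Domain Admins": ["CORP\\da-admin"]}
    for group, members in current_membership.items():
        added = set(members) - set(baseline.get(group, []))
        removed = set(baseline.get(group, [])) - set(members)
        if added or removed:
            # Every unreviewed change to a Tier 0 group is a finding, not noise.
            print(f"{group}: +{sorted(added)} -{sorted(removed)}")

drift("ad_baseline.json",
      {"Domain Admins": ["CORP\\da-admin", "CORP\\svc-backup"]})
```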

A particularly important area is the tiered model, or so-called tiering. In a correctly designed environment, access to the most sensitive resources – Tier 0 – is strictly limited to systems and accounts responsible for identity management and domain control, such as domain controllers, AD management systems, or privileged administrative accounts. The key is not only defining these levels but, above all, enforcing them – the lack of separation between tiers leads to a situation where the compromise of a single workstation can be used to take over the entire domain. Therefore, a domain compromise is not the “next stage” of an attack. It is the logical conclusion of a process that was possible from the very beginning.
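The enforcement rule itself is simple enough to express in a few lines. The toy sketch below flags sessions in which a more privileged account touches a less trusted host; the tier maps and sessions are invented sample data, and in practice they would come from AD group membership and logon telemetry (e.g., 4624 events).

```python
# A toy sketch of enforcing the tiering rule "credentials must not flow down":
# flag logons where an account from a higher (more privileged) tier touches a
# host from a lower tier. All data below is invented for illustration.
ACCOUNT_TIER = {"CORP\\da-admin": 0, "CORP\\srv-ops": 1, "CORP\\jkowalski": 2}
HOST_TIER = {"DC01": 0, "APP07": 1, "WKS-113": 2}

sessions = [
    ("CORP\\jkowalski", "WKS-113"),  # fine: Tier 2 account on a Tier 2 host
    ("CORP\\da-admin", "WKS-113"),   # violation: Tier 0 creds on a workstation
]

for account, host in sessions:
    # Lower tier number = more privileged. A privileged account logging on to a
    # less-trusted host exposes its credentials to whatever runs there.
    if ACCOUNT_TIER[account] < HOST_TIER[host]:
        print(f"TIER VIOLATION: {account} (Tier {ACCOUNT_TIER[account]}) "
              f"logged on to {host} (Tier {HOST_TIER[host]})")
```

The second session is exactly the pattern that turns a single compromised workstation into a domain takeover: the Domain Admin’s credentials are now sitting in the memory of a Tier 2 machine.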
When an attack has already occurred – what is the most common mistake organizations make in the first hours after detecting a breach?
The worst mistake is acting before establishing exactly what happened – and there are many mistakes to be made. Most organizations don’t lose to attackers because they are weak – they lose because they are blind. From this point of view alone, infected components should be isolated as quickly as possible and internet access cut off to stop potential further attacker activity.
It is also necessary to properly secure evidence (certainly not by restoring machines in the same location) while also focusing on restoring the business service, and to find persistence – the places where attackers may remain hidden until the infrastructure is partially restored and internet access is reinstated. You have to act fast, not only because of the 72-hour window for reporting the incident and the general responsibility of boards toward suppliers, but also because you need to determine what happened in order to react properly.
I have been professionally responding to global incidents for many years, and if we are unable to answer three questions:
1. What happened?
2. Where did it spread?
3. What was accessed, did data leak, and if so – what data?
…then there is no incident response readiness. A narrative about the incident is created instead of the incident being managed.
The question is: would we be able to provide an answer today if we simulated an attack? In the European Union, answers to these questions are, to a greater or lesser extent, required by law.
Remember that during incidents, one thing becomes painfully obvious: a lack of transparency kills the response. Typical problems include a lack of visibility in logs, the absence of a clear timeline, and no shared understanding of the situation between teams. Suddenly, instead of analysis, guessing begins. Attackers don’t have to be invisible; it’s enough that the organization doesn’t see. Real-world cases show that the biggest challenges are always the same (a minimal timeline sketch appears below):
1. Distributed telemetry
2. Inconsistent logging
3. Missing or hidden context
4. Delayed access to critical data
That is why transparency is not about reporting; it is about control. Transparency turns chaos into evidence, evidence into decisions, and decisions into actions that mitigate the impact. And that is exactly where the real fight is won.
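The mechanical core of that transparency is unglamorous: pulling events from every source into one normalized, time-ordered stream so that analysts argue about evidence rather than memories. A minimal sketch, with invented sources, field formats, and events:

```python
# A minimal sketch of the "shared timeline" idea: normalize events from several
# sources into one time-ordered stream. Sources and events are invented.
from datetime import datetime, timezone

def normalize(source, ts_iso, message):
    # Force every timestamp to UTC so ordering across sources is meaningful.
    ts = datetime.fromisoformat(ts_iso).astimezone(timezone.utc)
    return (ts, source, message)

events = [
    normalize("edr", "2024-05-01T09:12:03+02:00",
              "suspicious LSASS access on WKS-113"),
    normalize("vpn", "2024-05-01T06:58:41+00:00",
              "login for jkowalski from new ASN"),
    normalize("dc",  "2024-05-01T09:14:22+02:00",
              "4624 logon da-admin on WKS-113"),
]

for ts, source, message in sorted(events):
    print(ts.isoformat(), f"[{source}]", message)
```

Trivial as it looks, this is what the four problems above destroy: if telemetry is distributed, logging inconsistent, context hidden, or access delayed, no such ordered stream can be built when it is needed most.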
This article was originally published in Polish on the Poradnik biznesu website, available at this link: https://www.poradnikbiznesu.info/cyberbezpieczenstwo/bezpieczenstwo-to-nie-narzedzia-to-przemyslane-decyzje/