- March 6, 2025
- Beazley Security Labs
Disabling EDR With WDAC
Beazley Security has seen attackers disabling EDR solutions in the wild by leveraging Windows Defender Application Control (WDAC) policies. 🫨
Executive Summary
Recently, Beazley Security's Incident Response Team identified an attack in which a threat actor was able to arbitrarily disable Endpoint Detection and Response (EDR) on Windows 10 and Windows 11 using Windows Defender Application Control (WDAC) policies. This attack requires administrative access on the machine, and when leveraged it can prevent many endpoint security tools from operating. We have identified several tactics, techniques, and procedures related to this threat actor, and this blog details their exploits and the specific mechanisms used to disable EDR. Beazley Security is aware of prior work on abusing WDAC and has coordinated with the authors who originally wrote about this technique. Since we've now seen it abused by threat actors in the wild, Beazley Security Labs has worked directly with several EDR vendors to ensure they can prioritize detection capabilities and plan mitigations. As of the writing of this post, mitigations against this specific attack are limited, since attackers can subvert Windows Defender Application Control configurations with more specific block definitions.
During the investigation of this attack, Beazley Security Labs discovered that Microsoft's default recommended driver block list for WDAC includes a rule that prevents most EDR products we've tested from initializing.
Windows Defender Application Control (WDAC) is a well-known security feature of various Microsoft Windows products. It works by leveraging integrity policies to restrict and/or enable the code and applications that can run in user and kernel mode. While there are many resources on leveraging this tool to ensure that only whitelisted organizational tools can run, there was little writing on how attackers can use WDAC to limit security tools. The earliest we observed WDAC being used adversarially was in early August 2024, and the most recent prior identification of the specific payloads the Beazley Security Labs team found was a blog post and proof-of-concept tooling from Jonathan Beierle and Logan Goins in late December 2024. When contacting various EDR vendors, we found that WDAC was recognized as a potential attack vector, but not sufficiently defended against the crafted policy explained below.
We would like to thank Jonathan Beierle, Logan Goins, and the Beazley Security IR Team (Jacob Wellnitz, James Navarro, Logan Tumminello, and Ralph Bailey) for their work.
Every EDR solution we have attempted to disable, apart from Windows Defender, is affected by this attack. We have contacted the EDR vendors to report this vulnerability, and their responses are listed in the conclusion of this post. From what we have observed and reported, all EDR solutions are vulnerable, and their vendors should be aware of this technique. In this writeup we will document the specific payload we observed, but as noted, this is not a vulnerability created by the technical implementation of any individual EDR solution.
Repeating the Attack
In order to disable a currently running EDR or security product, the attacker simply needs to place a crafted policy in C:\Windows\System32\CodeIntegrity and reboot.

After a reboot, we can see the EDR kernel hooks fail to initialize when accessing specific tools, in this case OneDrive.exe.

Technical Details
Windows Defender Application Control (WDAC) is automatically enabled on Windows Server 2016, 2019, 2022, and 2025, as well as Windows 10 and Windows 11. Beazley Security Labs has validated this abuse on Windows 10 and Windows 11. While WDAC is not new, using it to apply policies that subvert security tooling is. To accomplish this, policies are presented to the computer as P7B files, trivially converted from XML definitions that can be created and edited using Windows-provided utilities. We will use Microsoft's App Control Policy Wizard for this purpose, as it is commonly used and illustrates the policy creation process well.
Note: While the policy wizard claims that P7B files are deployable to Windows 10, Server 2016, and Server 2019, we have confirmed their use on Windows 11 as well.
The specific policy presented in Beierle's blog is based on the Default Windows Mode policy, which authorizes "Windows OS Components", "MS Store Applications", Office 365, OneDrive, Teams, and "WHQL Signed Kernel Drivers". We have observed that without these specific allow rules, applying a custom policy results in instability on reboot, even with only a benign usermode or kernel block rule. This aligns with the attack we observed, which used these defaults in its policy.
One policy requirement is that the custom policy must disable runtime filepath rules. Beierle's post describes the process of adding a custom rule that allows arbitrary code to be executed from a specific path, but it does not call out that custom rules can also be used to deny security features in Windows. Beierle has confirmed with Beazley Security that disabling security tooling was known at the time of his reporting, but it was not explicitly called out as a disabling mechanism. We observed this technique both in the whitelisting of the attacker's toolchain and in the blocking of EDR solutions.
Custom policies that whitelist specific paths on the machine can be used to bypass EDR: the EDR can still collect telemetry but cannot prevent the attack. Blocking an EDR solution outright prevents it from initializing and, as a result, from collecting any data on how the machine was compromised. For many of the EDR solutions we've tested, blocking a specific file hash or path from executing can prevent the machine from booting, because the security tooling is often implemented at the kernel level. We confirmed this crashing behavior by creating custom policies that deny specific EDR executables and paths, which caused instability at startup. However, a policy can instead indirectly reference an executable's signing certificate, which allows the machine to boot into a vulnerable state while preventing telemetry gathering and exploit mitigations.
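As an illustration of what such a certificate-based deny rule looks like, the following is a hypothetical sketch of a WDAC policy fragment. The IDs, friendly names, and the TBS hash value are placeholders; the element names follow Microsoft's code integrity policy schema.

```xml
<!-- Hypothetical fragment: deny anything signed by a given certificate.
     The TBS hash value below is a placeholder, not a real certificate hash. -->
<Signers>
  <Signer ID="ID_SIGNER_DENY_EDR" Name="Example Intermediate CA">
    <CertRoot Type="TBS" Value="PLACEHOLDER_TBS_HASH"/>
  </Signer>
</Signers>
<SigningScenarios>
  <SigningScenario Value="131" ID="ID_SIGNINGSCENARIO_DRIVERS" FriendlyName="Kernel Mode Signing Scenario">
    <ProductSigners>
      <DeniedSigners>
        <DeniedSigner SignerId="ID_SIGNER_DENY_EDR"/>
      </DeniedSigners>
    </ProductSigners>
  </SigningScenario>
</SigningScenarios>
```

Because the deny rule keys on the signing certificate rather than a file hash or path, every driver chaining to that certificate is refused at load time, which is what allows the machine to boot while the security product silently fails to start.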
This behavior is accessible as a selectable field when creating a policy with the App Control Policy Wizard.

We can then find a certificate for a specific EDR solution and export it relatively easily.

SentinelOne's Intermediate Signing Certificate

Elastic's Intermediate Signing Certificate

CrowdStrike's Intermediate Signing Certificate
As you can observe, the intermediate signing certificate for many security tools is the same. We have concluded that this is unintentional, but given that the certificate chain for many of the EDR solutions we have tested contains this DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1 certificate, including it in a deny rule will prevent many security tools from running. Apart from how confounding the ubiquity of this signing certificate may be, it is doubly troubling because it is included within Microsoft's recommended driver block rules as a blocked signer.


It's worth noting this policy executes in `Audit Mode`; according to Microsoft's WDAC documentation, Audit Mode will not prevent code from running and will only log the execution of the process in the Event Log for review. However, simply following the Policy Wizard's workflow can result in the creation of a policy that uses these default block rules while disabling audit rules. We believe that the inclusion of this specific signature is what led to the discovery of this attack.
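For reference, whether a policy audits or enforces comes down to a single rule option in the policy XML. A sketch, using the option string documented by Microsoft:

```xml
<Rules>
  <!-- While this option is present, violations are only logged to the Event Log -->
  <Rule><Option>Enabled:Audit Mode</Option></Rule>
  <!-- Removing the option above switches the policy to enforcement,
       so block and deny rules actively prevent execution -->
</Rules>
```

This is why the wizard workflow is dangerous: dropping one option from an otherwise default policy turns the recommended block list into an enforced deny of the signers it contains.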
Knowing these mechanics, we can also craft a policy that takes effect (albeit partially) before the machine reboots, using the following rule option. However, given that most EDR solutions running on the machine will have already initialized, attackers would likely err on the side of caution and force a reboot regardless.
```xml
...
<Rule>
  <Option>Enabled:Update Policy No Reboot</Option>
</Rule>
...
```
The output P7B and XML files then link the signing certificate to a TBS hash. The attacker can obfuscate the Signer name, because the TBS hash contains the fields required to map the entry to a specific certificate in the machine's certificate store.
```xml
...
<Signer Name="DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1" ID="ID_SIGNER_S_0">
  <CertRoot Type="TBS" Value="65B1D4076A89AE273F57E6EEEDECB3EAE129B4168F76FA7671914CDF461D542255C59D9B85B916AE0CA6FC0FCF7A8E64"/>
</Signer>
...
```
This means that we can hide several certificates within the policy under any names we desire, so long as we verify that the machine still boots as expected without invoking them.
Validating these TBS hashes is performed by hashing the to-be-signed fields of the X.509 certificate, i.e., the TBSCertificate structure defined in RFC 5280:
```
Certificate  ::=  SEQUENCE  {
    tbsCertificate       TBSCertificate,
    signatureAlgorithm   AlgorithmIdentifier,
    signatureValue       BIT STRING  }

TBSCertificate  ::=  SEQUENCE  {
    version         [0]  EXPLICIT Version DEFAULT v1,
    serialNumber         CertificateSerialNumber,
    signature            AlgorithmIdentifier,
    issuer               Name,
    validity             Validity,
    subject              Name,
    subjectPublicKeyInfo SubjectPublicKeyInfo,
    issuerUniqueID  [1]  IMPLICIT UniqueIdentifier OPTIONAL,
                         -- If present, version MUST be v2 or v3
    subjectUniqueID [2]  IMPLICIT UniqueIdentifier OPTIONAL,
                         -- If present, version MUST be v2 or v3
    extensions      [3]  EXPLICIT Extensions OPTIONAL
                         -- If present, version MUST be v3  }
```
The hashes that appear in our custom policy rule and in the Windows Event Log are 48 bytes long, and are therefore SHA-384 hashes. Fetching certificates from a machine can be trivially performed with PowerShell, and their hashes can be computed with any library that parses X.509 certificates plus a hashing library.
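As a sketch of that computation: since the tbsCertificate is the first element inside the outer Certificate SEQUENCE, its DER bytes can be carved out with a minimal TLV parser and hashed with SHA-384 to match the 48-byte values above. This assumes a certificate whose signature algorithm uses SHA-384; the function names are ours, not from any tool mentioned in this post.

```python
import hashlib


def _read_tlv(data: bytes, offset: int):
    """Return (start, end) byte offsets of the DER TLV element at `offset`."""
    start = offset
    offset += 1                       # skip the tag byte
    length = data[offset]
    offset += 1
    if length & 0x80:                 # long-form length: next N bytes hold it
        n = length & 0x7F
        length = int.from_bytes(data[offset:offset + n], "big")
        offset += n
    return start, offset + length


def _header_len(data: bytes) -> int:
    """Length of the DER tag+length header of the element starting at offset 0."""
    if data[1] & 0x80:
        return 2 + (data[1] & 0x7F)
    return 2


def wdac_tbs_hash(cert_der: bytes) -> str:
    """Carve the tbsCertificate (first element inside the outer Certificate
    SEQUENCE) out of a DER-encoded certificate and SHA-384 it, mirroring the
    48-byte TBS hashes seen in the policy XML."""
    tbs_start, tbs_end = _read_tlv(cert_der, _header_len(cert_der))
    return hashlib.sha384(cert_der[tbs_start:tbs_end]).hexdigest().upper()
```

In practice the carving step can be delegated to a full X.509 parser; for example, the third-party `cryptography` package exposes the same byte span as `Certificate.tbs_certificate_bytes`.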
The last component of our policy is whitelisting. After many attempts at creating policies with allow rules, we have concluded that allow rules must be applied for every executable on the device. This behavior is counter-intuitive: we expected WDAC to fail open for executions that have no matching rule. Microsoft documents the order in which these rules are processed. It appears that without Audit Mode enabled, many utilities cannot be executed while a policy is active.

Knowing this behavior, an attacker would be led to simply whitelist file paths for their toolkit. This is what we observed in the malicious payload we collected.
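A path-based allow rule of that kind is a one-line file rule in the policy XML. A sketch, with a hypothetical toolkit path; note that user-writable path rules require the "Disabled:Runtime FilePath Rule Protection" policy option, which matches the runtime filepath requirement discussed earlier:

```xml
<!-- Hypothetical fragment: allow anything under an attacker-chosen directory.
     The path and IDs are placeholders. -->
<FileRules>
  <Allow ID="ID_ALLOW_TOOLKIT" FriendlyName="Attacker toolkit" FilePath="C:\Tools\*"/>
</FileRules>
```

Combined with the default Windows allow rules and a certificate-based deny of the EDR's signer, this leaves the machine bootable and usable for the attacker while the security tooling stays down.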
With the payload in hand, an attacker can move it into C:\Windows\System32\CodeIntegrity using administrative credentials and trigger the rest of their attack.
Through the Windows Event Viewer, we can observe the EDR solution failing to initialize, recorded under Event IDs 8036 and 3077.


Compounding Factors
In an attempt to test WDAC exhaustively, we created other policy types and found little success. COM object declarations appear to affect registration rather than execution. Path- and file-attribute-related rules are relatively well known and already observed in existing WDAC policies, so they need no further elaboration. However, given the time it takes to inspect a policy, and WDAC's restrictive behavior once applied, it behooves security tools to treat all WDAC policies as suspicious, or as an attempt to bypass or disable their behavior.
During this Incident Response investigation, we were able to retrieve the policy and begin work on understanding the attack, but it should be noted that a policy can be pre-crafted and deployed in multiple ways. Beierle and Goins's Krueger tool will run in memory using `inlineExecute-Assembly`, for example, so the time needed to execute this attack is limited only by the time to transfer the file to disk. Worse, a policy crafted to block the tools an incident responder relies on could keep the policy itself from being observed and reversed.
During our testing, we replicated this attack on a single host. However, as noted in Beierle's blog and as seen during our Incident Response investigations, attackers can abuse Group Policy Objects to distribute these policies to every machine in a domain, rendering EDR ineffective across an entire organization.
EDR Vendor Response
As part of our investigation and response to this being abused in the wild, Beazley Security Labs contacted our partner, SentinelOne, on February 18th to inform them of this issue. We collaborated to confirm this attack's general effectiveness against several EDR solutions. SentinelOne had recently discovered the issue in parallel with our research and has coordinated with Beazley Security to ensure that protection is available for its customers. We'd like to thank SentinelOne for their collaboration in responding to this abuse.
After communicating with Beierle, we have also confirmed that CrowdStrike and Elastic were notified of this attack upon his initial posting. Beazley Security began collaborating with CrowdStrike on February 24th, and they confirmed they are developing detections related to this attack.
We received communications from Elastic on February 28th confirming their knowledge of the attack from a GitHub issue. Elastic stated they have an existing detection that they will validate and promote to production, but they do not have plans to produce a prevention for this attack due to the legitimate use of WDAC in enterprise environments.