Resolving security issues sometimes involves its own degree of managing people’s egos

Raymond Chen

Lots of reports come in to the Microsoft Security Response Center. Resolving them is not just a technical issue, but also a social one.

For example, somebody might report a potential vulnerability, but their proof-of-concept requires administrator privileges. Naturally, this fails the other side of the airtight hatchway test. But we fixed the problem anyway, not because there was any proven vulnerability, but because there wasn’t proof of lack of a vulnerability. We studied the code and couldn’t find any way to carry out the attack without administrator privileges, but were not confident in our ability to rule it out completely, so we made the fix out of an abundance of caution.

Some time later, the finder reported another potential vulnerability that also required administrator privileges. They explained that they understood that requiring administrator privileges is normally a disqualifying factor in a vulnerability, but they noted, “You accepted my earlier vulnerability report despite it requiring administrator privilege, so I assume that you investigated the issue more closely and found a vector that didn’t require administrator privilege. So here’s another vulnerability report that requires administrator privilege. Maybe you can turn this into a true vulnerability, too.”

This is a case of a finder creating work for us. “Here, let me report a bunch of things that are clearly not vulnerabilities as written, but I’m going to make you spend a week proving that there isn’t some real vulnerability lurking beneath them that I didn’t find, but for which I’m going to take credit nevertheless.” Careful ego-management is required to thank the finder for their efforts, but to also politely request that they wait until they actually find something before reporting it.

Another category of managing people’s egos is the case of a vulnerability report that duplicates an issue that we had already identified internally as a reliability issue, but not a security issue. A fix for the reliability issue was scheduled to go out in a week, but there was concern for the repercussions of rejecting a vulnerability report while simultaneously issuing a fix for it. The finder would observe that their rejected report was nevertheless fixed and conclude that we were silently fixing vulnerabilities without disclosing them.

There were some people who proposed reverting the reliability fix to make a clear statement that the issue was not a security issue. Presumably their idea was to hold back the reliability fix for a few months to wait for the issue to blow over, and then reintroduce it.

The security release management team decided to ship the reliability fix as scheduled, but document it as a security fix even though it isn’t one. Everybody wins: Customers get a more reliable system, and the finder gets a CVE number to put on their résumé. The only loser is Microsoft: When people play “security scorecard” games and tally up the number of CVEs issued, the number in the Microsoft column is artificially inflated.

That’s okay. We’re used to it.



  • Bertrand AUGEREAU

    I love this blog because it thoroughly demonstrates, through an unbiased sampling of bug reports, that there’s never ever a privilege escalation issue in the Windows kernel. Only people submitting “other side of the airtight hatchway”! All the time! I’m so glad I’m a paying Windows customer (I really am) because there’s no CVE up there.

    • Ji Luo

      Newspapers only report extraordinary events.

    • Michael Getz

      I think that’s a misunderstanding of security in the kernel. Once a user has the ability to run code in kernel mode, there isn’t much the system can do per se. The security issue is them being able to run code in the kernel at all. So it’s not an ‘escalation of privilege’ in the kernel; it’s a remote code execution at that point. Escalation of privilege inherently affects user-mode code only (even if the actual defect is in kernel-mode code). This is why the `Nt` vs `Zw` function prefixes exist, so that kernel entry points can validate their callers (and should).

    • Sacha Roscoe

      While discussion of actual OS security bugs can often be interesting, I can think of several reasons why Mr. Chen might not include it in his blog (he might not be allowed to; there might not be a suitable lesson applicable to much of his target audience; he might find it not sufficiently rewarding to be worth the extra effort). I expect the first is the most likely, since any in-depth discussion of a specific bug would likely deal with commercial-in-confidence information.
      Fortunately, it’s a big internet, and there are other sites you can go to if you want to read analysis of OS bugs.

  • Guillaume Knispel

    Does the need to be admin strictly rule out security vulns? It is my understanding that regular Windows installs with regular configuration do not allow unsigned kernel code. If I find a way to inject some arbitrary kernel code but need admin rights to do it, is it a vuln or not?
