When the mighty fall, it makes all of us look over our shoulders. So it was with the Microsoft security incident that made headlines over the past three months.
It started in late January 2024 when Microsoft announced that Russian state-sponsored actor Midnight Blizzard (APT29) had breached their corporate environment, accessed corporate email accounts and exploited secrets found in those emails.
Despite Microsoft’s best cybersecurity efforts, the tech giant was still trying to get Midnight Blizzard out of its systems as of early March.
How did this Microsoft security breach happen - and what steps can you take to ensure it doesn’t happen to you?
OAuth Apps and Their Critical Role in This Microsoft Security Breach
Some of the key pieces used by Midnight Blizzard in their Microsoft security breach were OAuth apps with extensive permissions in the Microsoft corporate environment. How did they get access to such privileged OAuth apps? Let’s find out:
(Note: the following is an overview, but if you want an excellent, detailed technical analysis, see Wiz’s blog post on the subject here.)
Midnight Blizzard used a password-spray attack to gain access to a legacy, non-production test tenant account that did not have MFA (Multi-Factor Authentication) enabled.
Once within that account, they found a legacy test OAuth application that had been granted high privileges for use in the Microsoft corporate environment. These permissions had been legitimately granted by the original admin who created the app. While the app was no longer being used, those permissions were unfortunately still in place, waiting to be abused.
Through this over-permissioned legacy test OAuth app, Midnight Blizzard was able to move to the Microsoft corporate production tenant and create new admin users there. They then utilized those users to create more OAuth apps, with permissions to access all corporate mailboxes within Microsoft's own Exchange Online tenant.
Microsoft’s private corporate emails were no longer private.
How to Protect Your Organization from Similar Cyberattacks
We don’t know what information you have in your company emails and systems. And we’re sure you don’t want anyone else to know either.
To ensure your data doesn’t get exposed the way it did in this Microsoft security incident, these are the steps to take:
- Require MFA
- Identify and remove stale OAuth apps (even from a test environment!)
- Detect over-permissioned OAuth apps
- Detect unusual/suspicious activity from OAuth apps
Let’s go through them in more detail.
Require MFA
It’s amazing how many security problems could be avoided by enforcing Multi-Factor Authentication - and how many organizations don’t enforce it.
If Microsoft had MFA enabled on their legacy test tenant accounts, the password-spray attack would not have succeeded in obtaining access to those accounts.
To be fair, Microsoft noted that this was a legacy account created a long time ago, and “if the same team were to deploy the legacy tenant today, mandatory Microsoft policy and workflows would ensure MFA and our active protections are enabled to comply with current policies and guidance.”
But as we see from this Microsoft security breach, it’s not enough to enforce higher security measures going forward if you leave an available back door with a low level of security. If you’ve decided that MFA is a necessary security measure (which you should), then it should be required on any account that currently has access to your company systems.
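If you use Microsoft Entra ID, one practical starting point is to find the accounts that have no MFA method registered at all. Below is a minimal sketch using the Microsoft Graph authentication methods report; it assumes you already hold an access token with an appropriate reporting permission (such as Reports.Read.All - check the current Graph documentation for the exact requirement), and the TOKEN placeholder is purely illustrative.

```python
# Sketch: flag accounts that have not registered any MFA method, using the
# Microsoft Graph authentication methods report. Token acquisition (e.g. via
# MSAL) is out of scope; TOKEN below is a placeholder.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # placeholder - obtain via your usual auth flow


def users_without_mfa(token: str) -> list[str]:
    """Return userPrincipalNames of users with no MFA method registered."""
    url = f"{GRAPH}/reports/authenticationMethods/userRegistrationDetails"
    headers = {"Authorization": f"Bearer {token}"}
    flagged = []
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        for user in data.get("value", []):
            if not user.get("isMfaRegistered", False):
                flagged.append(user.get("userPrincipalName", "<unknown>"))
        url = data.get("@odata.nextLink")  # follow paging until exhausted
    return flagged


if __name__ == "__main__":
    for upn in users_without_mfa(TOKEN):
        print(f"MFA not registered: {upn}")
```

Feeding a list like this into your regular access reviews is a quick way to surface forgotten legacy accounts - exactly the kind of account Midnight Blizzard exploited.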
Identify and remove stale OAuth apps (even from a test environment!)
How long had that legacy test OAuth application been sitting around, unused? Months? Years?
When you forget about an OAuth application that has access to your systems, it’s like forgetting about a landmine you buried. Maybe you’ll never walk over it. But maybe you will. And if you do, you will deeply regret that you didn’t defuse it when you had the opportunity.
Reviewing the OAuth apps within your SaaS environment should be a regular routine. If an app is no longer being used, get rid of it. At the very least, remove its permissions and require it to be re-permissioned should someone want to use it in the future.
Keeping track of all your third-party SaaS apps manually may be unsustainable, in which case it’s important to have an automated solution that will review and remove stale apps at regular intervals.
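As an illustration, here is a minimal sketch of that kind of automated inventory for a Microsoft Entra ID tenant: it lists every OAuth app (service principal) together with the delegated scopes it has been granted, so unrecognized or unused apps can be flagged for review and removal. It assumes an access token with read permissions such as Application.Read.All; error handling and token acquisition are left out.

```python
# Sketch: inventory the OAuth apps (service principals) in an Entra ID tenant
# together with the delegated scopes granted to them, so stale or unrecognized
# apps can be reviewed and removed. TOKEN is a placeholder.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def get_all(url: str) -> list[dict]:
    """Follow @odata.nextLink paging and return the full collection."""
    items = []
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        items.extend(data.get("value", []))
        url = data.get("@odata.nextLink")
    return items


def oauth_app_inventory() -> None:
    for sp in get_all(f"{GRAPH}/servicePrincipals"):
        sp_id = sp["id"]
        # Delegated permissions granted to this app (on behalf of users/admins).
        grants = get_all(f"{GRAPH}/servicePrincipals/{sp_id}/oauth2PermissionGrants")
        scopes = sorted({s for g in grants for s in g.get("scope", "").split()})
        print(f"{sp.get('displayName', sp_id)}: {', '.join(scopes) or 'no delegated scopes'}")


if __name__ == "__main__":
    oauth_app_inventory()
```

An inventory like this is only useful if someone (or something) acts on it - apps that no one on the team recognizes or claims should be removed, not just noted.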
And the Microsoft security incident was initiated in a test environment, showing how critical app management is even outside of a production environment! (More on this below.)
Detect over-permissioned OAuth apps
OAuth apps that are actively used but have more permissions than they need for their function are just as dangerous as forgotten apps lurking in a corner.
Does an app really need to be able to read and write your entire directory? Does it really need to be able to manage user roles?
If it doesn’t need to, it shouldn’t be able to.
If Microsoft’s legacy OAuth app had not had Directory.ReadWrite.All and/or User.ReadWrite.All privileges, this Microsoft security incident would have been stopped in its tracks.
Just like keeping tabs on OAuth app usage often requires an automated solution, so does keeping tabs on OAuth app permissions. It’s worth it.
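For Microsoft Graph application permissions specifically, one approach is to resolve each app’s role assignments against Microsoft Graph’s own appRoles list and flag anything that appears on a high-risk list. The sketch below assumes an access token with Application.Read.All, and the HIGH_RISK set is an illustrative starting point, not an exhaustive or authoritative list.

```python
# Sketch: flag apps holding high-risk Microsoft Graph application permissions
# such as Directory.ReadWrite.All or User.ReadWrite.All. TOKEN is a placeholder,
# and HIGH_RISK is an example list to adapt to your own environment.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
MS_GRAPH_APP_ID = "00000003-0000-0000-c000-000000000000"  # well-known Microsoft Graph appId
HIGH_RISK = {
    "Directory.ReadWrite.All",
    "User.ReadWrite.All",
    "RoleManagement.ReadWrite.Directory",
    "Mail.ReadWrite",
}


def get_all(url: str) -> list[dict]:
    """Follow @odata.nextLink paging and return the full collection."""
    items = []
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        items.extend(data.get("value", []))
        url = data.get("@odata.nextLink")
    return items


def graph_role_names() -> dict[str, str]:
    """Map Microsoft Graph appRole ids to permission names (e.g. Directory.ReadWrite.All)."""
    resp = requests.get(
        f"{GRAPH}/servicePrincipals",
        headers=HEADERS,
        params={"$filter": f"appId eq '{MS_GRAPH_APP_ID}'"},
        timeout=30,
    )
    resp.raise_for_status()
    graph_sp = resp.json()["value"][0]
    return {role["id"]: role["value"] for role in graph_sp.get("appRoles", [])}


def flag_over_permissioned() -> None:
    roles = graph_role_names()
    for sp in get_all(f"{GRAPH}/servicePrincipals"):
        assignments = get_all(f"{GRAPH}/servicePrincipals/{sp['id']}/appRoleAssignments")
        # Roles granted by resources other than Microsoft Graph simply won't match.
        risky = {roles.get(a["appRoleId"]) for a in assignments} & HIGH_RISK
        if risky:
            print(f"{sp.get('displayName', sp['id'])} holds: {', '.join(sorted(risky))}")


if __name__ == "__main__":
    flag_over_permissioned()
```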
Detect unusual/suspicious activity from OAuth apps
Is an OAuth app making too many API calls? Accessing systems it doesn’t usually access? Requesting more access tokens than expected?
If an OAuth application is acting funny, it should be looked at right away. But in order to identify an unusual activity pattern, you need to be looking, and you need to know what a usual activity pattern is.
That’s where an automated SaaS security platform with AI can come to the rescue: it can build models of OAuth app activity patterns automatically and at scale - and then it can track all the OAuth apps in your environment and identify any deviations from the pattern. Once identified, a deviant app can either be referred to your information security team for review, or remediated directly with an automated workflow.
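To make the idea concrete, here is a deliberately simplified sketch of baseline-and-deviation detection: it learns each app’s typical hourly API call volume from historical counts and flags anything more than three standard deviations away. The input format and app names are invented for illustration; a real platform would build its baselines from actual audit logs and far richer features than raw call counts.

```python
# Sketch: a simple baseline for spotting unusual OAuth app activity. It learns
# a per-app mean and standard deviation of hourly API call counts from history,
# then flags hours that deviate by more than `threshold` standard deviations.
from statistics import mean, stdev


def build_baseline(history: dict[str, list[int]]) -> dict[str, tuple[float, float]]:
    """Return {app: (mean, stdev)} computed from historical hourly call counts."""
    return {
        app: (mean(counts), stdev(counts))
        for app, counts in history.items()
        if len(counts) >= 2  # stdev needs at least two observations
    }


def flag_anomalies(baseline, current_counts, threshold=3.0):
    """Yield (app, count, note) for apps whose current hour looks abnormal."""
    for app, count in current_counts.items():
        if app not in baseline:
            yield app, count, "no baseline - new or rarely seen app"
            continue
        mu, sigma = baseline[app]
        if sigma == 0:
            sigma = 1.0  # avoid division by zero for perfectly flat baselines
        if abs(count - mu) / sigma > threshold:
            yield app, count, f"expected ~{mu:.0f}/hour"


if __name__ == "__main__":
    # Invented example data: hourly API call counts per app.
    history = {"crm-sync-app": [40, 42, 38, 45, 41], "mail-archiver": [5, 6, 4, 5, 7]}
    now = {"crm-sync-app": 43, "mail-archiver": 250, "unknown-app": 90}
    baseline = build_baseline(history)
    for app, count, note in flag_anomalies(baseline, now):
        print(f"Suspicious: {app} made {count} calls this hour ({note})")
```

A production system would obviously go far beyond three-sigma thresholds, but the principle is the same: no baseline, no anomaly detection.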
This is Not Only a Test
Organizations are understandably more lax about their test environments than they are about their production environments. As Wiz pointed out in their analysis regarding the new OAuth apps created by Midnight Blizzard, “we'll assume that they would not choose to create apps in the prod tenant, since Microsoft almost certainly rigidly monitors their production environments.”
But, if anything, this Microsoft security breach highlights that you must be on top of your test environment as well.
Removing stale OAuth apps, keeping app permissions to only what is absolutely necessary, detecting unusual activity… all these OAuth app SaaS security measures should absolutely be happening in every environment that has any connection to your corporate accounts.
Don’t lower your standards or your guard just because “it’s only a test.” If cyberattackers treat corporate test environments as production environments for their attacks - which they certainly did in this Microsoft security incident - then you should treat them with production-level security, too.