{
	"id": "2318b772-5b57-4d4f-a70f-068eaeb36adc",
	"created_at": "2026-04-06T00:15:56.772308Z",
	"updated_at": "2026-04-10T03:20:34.576279Z",
	"deleted_at": null,
	"sha1_hash": "0623341ee56185f0f7192271b5aa25e5ca04da05",
	"title": "Legitimate Apps as Traitorware for Persistent Microsoft 365 Compromise",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 775917,
	"plain_text": "Legitimate Apps as Traitorware for Persistent Microsoft 365\r\nCompromise\r\nBy Sharon Martin\r\nPublished: 2023-08-03 · Archived: 2026-04-05 23:16:36 UTC\r\nThe idea of “persistence” in a cloud environment is not a well-studied topic. At most, you hear instances of the\r\nattacker creating backup logins to maintain their long-term presence in a cloud environment.\r\nTo continue our series exposing the tradecraft around business email compromise (BEC), this blog will dive into\r\nhow Huntress identified a threat actor using a novel form of persistence (M365 applications) in order to try to stay\r\nunder the radar and avoid detection. We discovered a compromised user account with the ability to add apps\r\nduring the beta phase of our newest product, Huntress Managed Identity Threat Detection and Response.\r\nWhat Happened\r\nThis is another unfortunate case of compromised credentials without additional security controls. \r\nThere was a failed login from a US IP, and then shortly thereafter, a successful login via a US IP. However, it was\r\nclear quite quickly that this wasn’t a normal IP—it was a proxy/VPN IP. Here’s an overall screenshot of the\r\ntimeline of events that will be explained in more detail below:\r\nClick to enlarge\r\nDetailed Event Breakdown\r\nhttps://www.huntress.com/blog/legitimate-apps-as-traitorware-for-persistent-microsoft-365-compromise\r\nPage 1 of 4\n\nThe events you saw above are where it started to get more interesting. We saw an application added with several\r\nevents in Azure around it:\r\n“Add service principal.” \r\nWhen you register a new application in Azure AD, a service principal is automatically created for the app\r\nregistration. For more details, see this Microsoft write-up.\r\nThe important detail to note here is that the “Target.Name” has the name of the application which was\r\nadded, “eM Client”. eM Client is an app that can integrate with email, calendar, etc. 
The “InterSystemsId” can also be used to correlate this eM Client target with other event logs; it’s the GUID used to track actions across components in the Office 365 service.\r\n“Add delegated permission grant.”\r\nWhen an added application requires access, a delegated permission grant is created for the permissions the application needs on behalf of the user. For more details, this article provides some great background.\r\nThe “InterSystemsId” was the same as in the previous event, showing that it relates to the eM Client application being added.\r\nThe actual permissions granted are shown in “ModifiedProperties.DelegatedPermissionGrant_Scope.NewValue”. In this case, the application was granted the following permissions:\r\n“EWS.AccessAsUser.All” - This configures the app for delegated authentication. The app would have the same access to mailboxes as the signed-in user via Exchange Web Services.\r\n“offline_access” - This gives the app access to resources on behalf of the user for an extended period of time. On the permission page that pops up, it’s described as “Maintain access to data you have given it access to.” When a user approves this scope, the app can receive long-lived refresh tokens from the Microsoft identity platform token endpoint and then request an updated access token when the older one expires, all without user intervention.\r\n“email” - This can be used along with the “openid” scope and gives the app access to the user’s primary email address.\r\n“openid” - This indicates the app signed in by using OpenID Connect. 
It allows the app to get a unique identifier for the user (the “sub” claim), which can then be used to acquire identity tokens for authentication.\r\n“Add app role assignment grant to user.”\r\nWe saw the same “InterSystemsId” as in the above events, correlating it to the same email app.\r\nThis means the app has been assigned to a user via Azure AD so that the user can access the app. “ModifiedProperties.User_UPN.NewValue” indicates which user it’s been assigned to. In this case, it’s the same user the threat actor was logged in as.\r\n“Consent to application.”\r\nIn a well-configured Azure environment, admin approval and consent should be required to add any new apps. These configurations are called risk-based or step-up consent.\r\nAlas, the “Actor.UPN” of our hacked user account, along with the success of the log entry for this “InterSystemsId”, shows that this user was able to grant consent. So either they were an admin, or one of the consent models was not configured, allowing any user to add any application.\r\nWait, There’s More?\r\nAdding just one app was apparently not enough for this threat actor; or perhaps the app didn’t allow them to do everything they wanted to do, which seems to be sending and receiving emails on behalf of the user. But before adding another app, the threat actor again showed some more sophistication in their attack.\r\nWhen there’s a risk that something you’re doing as a threat actor can generate emails to the user, the obvious solution is to prevent the user from seeing said emails. How? Well, of course, with our favorite Microsoft 365 threat actor tradecraft: email inbox rules. 🦹\r\nThe rules added were pretty much as expected. They set up a rule that matched “@”. Yes, it would have matched any email. Then, messages were marked as read and moved to Deleted Items. 🪣\r\nOnce that was in place, the threat actor went through the step of adding another app to manage email. 
This time it was Newsletter Software SuperMailer, another legitimate app that’s great for sending mass amounts of email in a short period of time. This app had some slightly different permissions in addition to “offline_access”:\r\n“Mail.Read” - Allows the app to read email in user mailboxes.\r\n“Mail.Send” - Allows the app to send mail as users in the organization.\r\n“Contacts.Read” - Allows the app to read user contacts.\r\nThe permissions, paired with the app name, seem to indicate that the intent is to send emails to all of the user’s contacts that look like they are coming from the user. Perhaps follow-on phishing emails so the threat actor can gain access to more valuable user accounts?\r\nSetting aside the probability of the app sending a welcome email, another reason the threat actor would not want the user to see any emails arriving in their inbox is simple: the legitimate user would be alerted faster to the
That’s essentially what the threat actor was doing.\r\nClosing Thoughts\r\nSo what’s the best way to prevent this kind of attack?\r\nMFA, MFA, MFA. If this account had been protected via two-factor or multi-factor authentication, this\r\nwould have made our threat actor’s job much more difficult. \r\nNormal users should not be allowed to add new apps. This is like allowing any user to install any\r\napplication on their PC—you never know what they will install. In Microsoft 365, this is as simple as\r\nturning the user consent to apps feature off.\r\nAs always, we hope this helps those of you hunting sneaky threat actors in the Microsoft Cloud. If ever you decide\r\nyou need someone to provide some Managed Identity Threat Detection and Response, so you don’t have to make\r\nyour eyes bleed reviewing arcane logging events, you know who to call. 😉\r\nCatch up on the other BEC tradecraft we exposed in part one, part two, part three, and part four.\r\nSource: https://www.huntress.com/blog/legitimate-apps-as-traitorware-for-persistent-microsoft-365-compromise\r\nhttps://www.huntress.com/blog/legitimate-apps-as-traitorware-for-persistent-microsoft-365-compromise\r\nPage 4 of 4",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MITRE"
	],
	"references": [
		"https://www.huntress.com/blog/legitimate-apps-as-traitorware-for-persistent-microsoft-365-compromise"
	],
	"report_names": [
		"legitimate-apps-as-traitorware-for-persistent-microsoft-365-compromise"
	],
	"threat_actors": [],
	"ts_created_at": 1775434556,
	"ts_updated_at": 1775791234,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/0623341ee56185f0f7192271b5aa25e5ca04da05.pdf",
		"text": "https://archive.orkl.eu/0623341ee56185f0f7192271b5aa25e5ca04da05.txt",
		"img": "https://archive.orkl.eu/0623341ee56185f0f7192271b5aa25e5ca04da05.jpg"
	}
}