{
	"id": "7809b510-285c-4a2f-a8c0-9739116ad25f",
	"created_at": "2026-04-06T00:14:54.635594Z",
	"updated_at": "2026-04-10T03:21:06.794696Z",
	"deleted_at": null,
	"sha1_hash": "f8eefcb177c45055c5db5b6fec07010872d5aafb",
	"title": "Navigating the Vast Ocean of Sandbox Evasions",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 4128764,
	"plain_text": "Navigating the Vast Ocean of Sandbox Evasions\r\nBy Esmid Idrizovic, Bob Jung, Daniel Raygoza, Sean Hughes\r\nPublished: 2022-12-27 · Archived: 2026-04-05 20:08:36 UTC\r\nExecutive Summary\r\nWhen malware authors go to great lengths to avoid behaving maliciously if they detect they’re running in a sandbox,\r\nsometimes the best answer for security defenders is to write their own sandbox that can’t easily be detected. There are a\r\nlot of sandboxing approaches out there with pros and cons to each. We’ll talk about why we chose to go the bespoke\r\nroute, and we’ll discuss many of the evasion types we had to cover in that effort as well as strategies that can be used to\r\ncounter them.\r\nThere are many variations on how malware authors specifically detect sandboxes, but the general theme is that they will\r\ncheck the characteristics of the environment to see whether it looks like a targeted host rather than an automated system.\r\nPalo Alto Networks customers receive improved detection for the evasions discussed in this blog through Advanced\r\nWildFire.\r\nHow We Became Evasion Connoisseurs\r\nYou could say that our day to day in the world of malware analysis has made the WildFire malware team sandbox\r\nevasion connoisseurs. Our team’s Slack channel has had its share of “look at this one!” over the years, sharing the joy of\r\nfinding new evasion techniques. Getting to the bottom of these has been a big part of our team mission to help improve\r\ndetection.\r\nThere’s a vast number of techniques malware authors use to check if they are running on a “real” targeted host, such as\r\ncounting the number of cookies in browser caches, or checking whether video memory appears too small. 
Given that\r\nsandbox evasions are legion and there are far too many to cover in a single article, we will first examine some of the\r\nmajor categories we typically encounter and then cover what we can do about them.\r\nChecks for Instrumentation or “Hooks”\r\nThe first broad category of evasions involves the detection of any sandbox instrumentation. This is definitely one of the\r\nmost popular techniques. The most common example is checking for API hooks, as this is a common method for\r\nsandboxes and antivirus vendors alike to instrument and log all of the API calls made by an executable under analysis.\r\nThis can be as simple as checking the function prologues of common functions to see if they are hooked.\r\nIn Figure 1, we see what the disassembly looks like for the prologue of CreateFileA in Windows 10 as well as what it\r\nmight look like if it’s been instrumented in a sandbox.\r\nhttps://unit42.paloaltonetworks.com/sandbox-evasion-memory-detection/\r\nPage 1 of 12\n\nFigure 1. A typical sandbox hook on a function in the system API.\r\nAs you can see, this is pretty easy for attackers to detect, which is why it’s one of the most prevalent evasions we’ve\r\nseen out there.\r\nA fun variation on this technique is when malware detects and unhooks existing hooks in order to stealthily execute\r\nwithout having its activity logged. This happens when malware authors want to slide past endpoint protections without\r\nbeing detected on a targeted host.\r\nFigure 2 shows an example of how GuLoader unpatches the bytes of the ZwProtectVirtualMemory function prologue to\r\nrestore the original functionality.\r\nFigure 2. GuLoader unhooking instrumentation in a system API function.\r\nMitigating Instrumentation Evasions\r\nThe gold standard for preventing malware authors from detecting instrumentation is simply not to have anything out of\r\nthe ordinary that’s visible to the program you’re analyzing. 
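The prologue comparison shown in Figures 1 and 2 boils down to a byte compare. Below is a minimal sketch in Python; the byte values are illustrative stand-ins for a typical unhooked x64 system-call stub, and a real sample would read the bytes at the function’s address and compare them against a clean copy (for example, one read from the DLL on disk).

```python
# Illustrative clean x64 stub prologue: mov r10, rcx; mov eax, imm32.
CLEAN_PROLOGUE = bytes([0x4C, 0x8B, 0xD1, 0xB8])

def is_hooked(observed: bytes) -> bool:
    """Report a hook if the observed leading bytes diverge from the
    expected clean prologue, e.g. a 0xE9 (jmp rel32) to a trampoline
    planted by an in-guest hooking engine, as in Figure 1."""
    return observed[: len(CLEAN_PROLOGUE)] != CLEAN_PROLOGUE
```

GuLoader’s unhooking (Figure 2) is the inverse operation: writing the clean prologue bytes back over the planted jump.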
A growing number of sandboxes are making this idea the\r\nfocus of their detection strategy. It’s easier to be evasion-resistant when you don’t change a single byte anywhere in the\r\nOS.\r\nInstead of instrumenting APIs by changing code, it’s a better strategy to use virtualization to invisibly instrument\r\nprograms under analysis. There are a lot of advantages to instrumenting malware from outside of the guest VM, as\r\nshown in Figure 3.\r\nFigure 3. In-guest versus a hypervisor-based hooking engine. Left: Program analysis components exist in\r\nthe guest VM along with the malware sample it executes. Right: Analysis components exist entirely\r\noutside of the guest VM and are thus invisible to the program under analysis.\r\nDetecting Virtual Environments\r\nAnother common evasion category involves detecting that a file is executing in a virtual machine (VM). This can\r\ninvolve fingerprinting resources like low CPU core count, system or video memory, or screen resolution. It can also\r\ninvolve fingerprinting artifacts of the specific VM.\r\nWhen building a sandbox, vendors have a large number of VM solutions to choose from, such as KVM, VirtualBox and\r\nXen. Each one has various artifacts and idiosyncrasies that are detectable by software running in VMs underneath them.\r\nSome of these idiosyncrasies are particular to a specific system, like checking for the backdoor interface of VMware, or\r\nchecking whether the hardware presented to the OS matches the virtual hardware provided by QEMU. Other approaches\r\ncan simply detect hypervisors in general. 
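One such general check keys on the “hypervisor present” flag, bit 31 of the ECX register returned by CPUID leaf 1. Issuing CPUID requires native code, but the decoding step is trivial; the following Python sketch only decodes a register value that native code would supply:

```python
HYPERVISOR_PRESENT = 1 << 31  # ECX bit 31 of CPUID leaf 1

def hypervisor_present(ecx_leaf1: int) -> bool:
    """Decode the "hypervisor present" flag from the ECX value a native
    CPUID(EAX=1) would return. Python cannot issue CPUID itself; the
    register value must come from native code."""
    return bool(ecx_leaf1 & HYPERVISOR_PRESENT)
```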
For example, Mark Lim discussed a general hypervisor evasion in an\r\narticle; it capitalizes on the fact that many hypervisors incorrectly emulate the behavior of the trap flag.\r\nOne of the earliest and most widely used mechanisms for malware to determine whether it’s running inside a VMware\r\nvirtual machine is to use the backdoor interface of VMware to see whether there is any valid response from the VMware\r\nhypervisor. An example of such a check is shown in Figure 4.\r\nFigure 4. Malware checking if it’s running inside a VMware virtual machine.\r\nMalware families can also query the computer manufacturer or model information using Windows Management\r\nInstrumentation (WMI) queries. This allows them to get information about the system and compare it with known\r\nsandbox and/or hypervisor strings.\r\nFigure 5 shows how this is used to query against VMware, Xen, VirtualBox and QEMU. The same technique can also\r\nbe found in Al-Khaser, which is an open-source tool that contains many anti-sandbox techniques.\r\nFigure 5. WMI queries used for querying computer information.\r\nFigure 6 shows the software components that malware can potentially interact with to reveal whether it’s executing in a\r\nvirtual environment.\r\nFigure 6. Additional surfaces that processes can interact with to assess whether they’re inside a VM.\r\nAdditionally, there is often a great deal of information sprinkled around the guest VM that can easily provide clues\r\nas to what VM platform the guest OS is running underneath. 
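Under the hood, checks like the WMI queries in Figure 5 usually reduce to a case-insensitive substring match of the returned strings against known hypervisor vendor names. A minimal sketch (the marker list is illustrative, not exhaustive; real samples and Al-Khaser carry much longer lists):

```python
VM_MARKERS = ("vmware", "virtualbox", "innotek", "xen", "qemu", "kvm")

def smells_like_vm(manufacturer: str, model: str = "") -> bool:
    """Case-insensitively match WMI-style Manufacturer/Model strings
    against well-known hypervisor vendor substrings."""
    haystack = ("%s %s" % (manufacturer, model)).lower()
    return any(marker in haystack for marker in VM_MARKERS)
```

A sandbox defeats this class of check by presenting manufacturer and model strings that match commodity hardware.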
In all cases, the specifics are dependent on the VM\r\ninfrastructure used (e.g., VMware, KVM or QEMU).\r\nThe following are just a few examples of what malware authors can check for:\r\nRegistry key paths showing VM-specific hardware, drivers or services.\r\nFilesystem paths for VM-specific drivers or other services.\r\nMAC addresses specific to some VM infrastructures.\r\nVirtual hardware (e.g., if a query reports that your network card is an Intel e1000, which hasn’t been made in\r\nmany years, the malware can infer that you’re probably running with the QEMU hardware model).\r\nRunning processes showing VM platform-specific services to support paravirtualization, or systems for user\r\nconvenience like VMware Tools.\r\nThe CPUID instruction, which will, in many cases, helpfully inform guest software of the VM platform.\r\nMitigating VM Evasions\r\nThe main difficulty in mitigating these evasions is that the mainstream virtualization platform alternatives are well known\r\nto malware authors. For ease of implementation, most sandboxes are based on systems like KVM, Xen or QEMU,\r\nwhich makes this class of evasions particularly tricky to defeat.\r\nEvery mainstream VM platform out there has been targeted by sandbox evasions. The problem is that nothing short of\r\nwriting your own custom hypervisor to support malware analysis would effectively address this class of evasions.\r\n… So that’s what we did!\r\nOur detection team made the decision years ago to implement our own custom hypervisor tailored specifically for\r\nmalware analysis. The dev teams went to great lengths to build (from scratch!) our own virtualization platform for\r\ndynamic analysis.\r\nThis decision has two advantages. The first is that we are not susceptible to the same fingerprinting techniques that are\r\nused against other VM infrastructure. 
We do not have any backdoor interfaces, but we do have different virtual\r\nhardware and a completely different codebase.\r\nThe second advantage is that, because we have built our own system, it is easier for us to adapt and address issues\r\nwherever we see malware using trickery on us for evasion purposes. For example, the linked article mentioned earlier\r\ndiscussed how many hypervisors incorrectly emulated the trap flag for guest VMs. Our malware analysts were able to\r\nclose the loop with our dev teams to ensure that we emulated correctly and were not susceptible.\r\nLack of Human Interaction\r\nThis category includes evasions requiring specific human interaction. For example, a malware author would expect to\r\nsee mouse clicks or some other event that would happen on a system with a “real” user driving it, but this would be\r\nabsent in a typical automated analysis platform. Malware families often check for human interaction and cease\r\nexecution if it looks like there is no user driving the system, or if user activity appears to be simulated.\r\nThe following are the general themes we’ve observed for human interaction checks:\r\nPrompting users for interaction. For example, dialog boxes or fake EULAs that a sandbox might not know\r\nshould be clicked to ensure detonation.\r\nChecking for mouse clicks, mouse movement and key presses. Even the locations of mouse events or timing of\r\nkeystrokes can be analyzed to determine whether they look “natural” versus programmatically generated.\r\nPlacing macros in documents to check for evidence of human interaction like scrolling, clicking a cell in a\r\nspreadsheet or checking a different worksheet tab.\r\nLet’s take a look at a specific example (shown in Figure 7) of how malware might get the time elapsed since last user\r\ninput (GetLastUserInput) and the time elapsed since the system has started (GetTickCount). 
It can then compare how\r\nmuch time has passed since the last key was pressed, to detect if there is any activity on the system.\r\nFigure 7. User interaction required to detonate.\r\nMitigating Human Interaction Evasions\r\nWhen implementing a sandbox, we have control of the virtual keyboard, mouse and monitor. If for some reason the\r\nanalyzed executable requires any input keys, we can send key presses to the analysis, or make sure to click the correct\r\nbutton to continue the execution of the executable. It’s really just a question of knowing how to automatically do what a\r\nhuman would do to put on a convincing show for the malware we’re executing.\r\nAs with all the other areas of the VM detection problem, we need to remain vigilant about what malware families are\r\nlooking for, and continually improve our evasion strategy. A recent example involved malware that required individual\r\nmouse clicks on multiple cells within an Excel spreadsheet. We had to go the extra mile to ensure that we had a solid\r\nrecipe for detection in place, in case this was used against us in the future.\r\nTiming and Computing Resource Evasions\r\nEarly on, one of the most common sandbox evasions was to just call sleep for about an hour before doing anything evil.\r\nThis guaranteed that the malware would sleep well beyond the short analysis time window used by\r\nalmost all sandboxes, as it’s not feasible to run every sample for more than a few minutes.\r\nThe reaction to this by sandbox authors was to instrument sleep to shorten any long sleeps to small ones. 
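Malware can counter naive sleep-shortening by verifying that a requested sleep actually consumed wall-clock time. The logic, sketched in Python with a high-level monotonic clock (a real sample would use GetTickCount or the time stamp counter rather than a Python API, and the tolerance value here is invented):

```python
import time

def sleep_was_honored(seconds: float, tolerance: float = 0.5) -> bool:
    """Request a sleep, then confirm that at least tolerance * seconds of
    real time actually elapsed. A sandbox that fast-forwards the sleep
    call without also warping the clock source fails this check."""
    start = time.monotonic()
    time.sleep(seconds)
    return (time.monotonic() - start) >= seconds * tolerance
```

This is why time sources must be warped consistently; shortening only the sleep call is detectable.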
After many\r\nmore iterations of this cat-and-mouse game, we now have a staggeringly diverse set of ever-evolving ways for malware\r\nto waste time in sandboxes and thus prevent any meaningful analysis results.\r\nFigure 8 shows one evasion technique using Windows timers and Windows messages. The idea is to install a timer that\r\nwill be fired each second, which then increments an internal variable when executing the timer’s callback.\r\nOnce the variable hits a specific threshold, it will send another Windows message to notify the sample to start the\r\nexecution of the malicious payload. The problem with this evasion is that sandboxes can’t simply reduce the timer’s interval to a\r\nlow number, because that might break execution for other software, yet the callback still must be executed somehow.\r\nFigure 8. Example of sleep using timers and Windows messages.\r\nAnother example is shown in Figure 9, below, where the malicious executable simply executes the time stamp counter\r\ninstruction in a loop.\r\nFigure 9. Sleep loop using time stamp counter instruction.\r\nTiming Evasion Mitigations\r\nTiming evasions can be very difficult to counter, depending on the situation. As previously mentioned, we can always\r\nadjust sleep arguments and timers, but this does not completely solve the problem.\r\nAnother strategy that we’ve found useful is that, because we control the hypervisor, we can use techniques to control all\r\nhardware and software to make time move faster inside the guest VM. It’s even possible to do this without having to\r\nchange arguments or install any hooks. We can run executables for an hour within minutes in real time, which allows us\r\nto reach the malicious code faster.\r\nJunk instruction loops or VM exit loops are probably the hardest scenario to counter. 
If a malware author executes a few\r\nmillion CPUID instructions, which take far longer to execute underneath a hypervisor, it’s a dead giveaway\r\nthat the code is running in a VM. This is another situation where having a custom hypervisor tailored for malware\r\nanalysis is useful, because we can detect and log this kind of activity.\r\nPocket Litter Checks\r\nThe term “pocket litter” has been co-opted from the field of espionage, where the items in one’s pocket can be used for\r\n“confirming or refuting suspects’ accounts of themselves.” This is a term we’ve internally adopted for all evasions in\r\nwhich malware authors are checking to see if the environment shows evidence of being a real, targeted host.\r\nIn terms of sandbox environments, checking for “pocket litter” commonly includes looking for things like a reasonable\r\namount of system uptime, a sufficient number of files in the My Documents folder or a good number of pages in the\r\nsystem’s browser cache. These are all things that would help corroborate that the system is “real” and not a sandboxed\r\nenvironment. As with the other categories, the number of variations is seemingly infinite.\r\nFigure 10 shows another example where the malware checks if there are more than two processors available and\r\nwhether there is enough memory available. Usually sandbox environments don’t have as much memory available as\r\nregular PCs, and this check is testing whether the target system is likely to be a desktop PC or running inside a sandbox\r\nenvironment.\r\nFigure 10. Check for the minimum required number of processors and required memory to run.\r\nIn Figure 11, there is another example where an AutoIt executable exits if the volume disk serial numbers match those\r\nof emulators used by known antivirus vendors.\r\nFigure 11. 
Check for volume serial numbers.\r\nPocket Litter Check Mitigations\r\nIn terms of mitigations, there is no single broad stroke that can be used to cover all available techniques. Rather, we\r\nattempt to address these on a case-by-case basis, where we can.\r\nFor example, when we see evasions checking for specific types of files in particular places, and adding those files appears to\r\nbe an innocuous change to the VM image, we add them for any related samples to find. This pocket litter approach can\r\nfeel like a cat-and-mouse game because there really is no panacea that addresses all threat actor mice. Persistence is key.\r\nConclusion\r\nWe emphasize that we are in no way claiming to have “solved” all sandbox evasions. In fact, the situation is quite the\r\nopposite when you consider that evasion classes are more or less infinite.\r\nIf there is a single takeaway from our discussion, it is that there are too many sandbox evasions out there to effectively\r\naddress every single one. Anyone who tells you their sandbox is 100% evasion-proof is overdue for a run-in with reality.\r\nWe have discussed some of the high-level category evasions in broad strokes, as well as some of the strategies we’re\r\nusing to address them. Because we can’t come close to addressing them all comprehensively, we recommend a defense-in-depth strategy. This allows us to architect detection systems in such a way that we can still detect malware when an\r\nevasion is successful and there is no execution of the payload.\r\nFor us on the WildFire team, this means a departure from simply relying on system API calls and other observable\r\nactivity as the sole basis for detection. 
Because we have our own hypervisor, we have been able to use our control of\r\nall hardware and software to instrument software in the VM in a way that is invisible to the malware that is running.\r\nAs we discussed in our previous post on hypervisor memory analysis in Advanced WildFire, our goal is to make a new\r\nkind of analysis engine that targets malware in memory. For any evasion technique discussed here, the code implementing it\r\nmust, at some point, exist in memory before it can run and detect that it is being analyzed.\r\nFigure 12. Detecting malware in memory.\r\nIn our system, we’ve shifted detection focus to the deltas in memory during execution. As shown in Figure 12, if the\r\npayload or any code is decoded, decompressed or decrypted in memory, our system will have visibility and a solid\r\nchance at catching it.\r\nIn closing, our advice to anyone building out an automated malware analysis system is to be as flexible as possible in\r\nterms of reacting to the broad categories of sandbox evasions out there. It is guaranteed that you will run into many\r\nvariations of the themes we discussed here.\r\nWe also recommend that you architect your system in a way that is tolerant to sandbox evasions. In other words, when\r\nall else fails and there are no execution events to scrutinize for malicious behaviors, there should be additional methods\r\nto fall back on. An example of what this looks like in practice is how we can analyze memory in Advanced WildFire to\r\ndetect evasive malware even when payloads choose to remain dormant and not execute.\r\nThanks for joining us on this odyssey as we toured all of the ways malware authors go out of their way to avoid\r\ndetection. 
Happy hunting!\r\nPalo Alto Networks customers receive protections from threats such as those discussed in this post with Advanced\r\nWildFire.\r\nIndicators of Compromise\r\nSHA256 | Description | Malware family\r\n3bf0f489250eaaa99100af4fd9cce3a23acf2b633c25f4571fb8078d4cb7c64d | WMI queries | Trickbot\r\ne9f6edb73eb7cf8dcc40458f59d13ca2e236efc043d4bc913e113bd3a6af19a2 | Timing attack using SetTimer | Sundown payload\r\n3450abaf86f0a535caeffb25f2a05576d60f871e9226b1bd425c425528c65670 | Sleep using time stamp counter instruction | VBCrypt\r\n091ffdfef9722804f33a2b1d0fe765d2c2b0c52ada6d8834fdf72d8cb67acc4b | Volume disk serial number check | Zebrocy\r\nSHA256 | Description | Potentially unwanted applications\r\n96a88531d207bd33b579c8631000421b2063536764ebaf069d0e2ca3b97d4f84 | VMware check | PUA/KingSoft\r\nde85a021c6a01a8601dbc8d78b81993072b7b9835f2109fe1cc1bad971bd1d89 | GetLastUserInput check | PUA/InstallCore\r\nSource: https://unit42.paloaltonetworks.com/sandbox-evasion-memory-detection/",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"Malpedia"
	],
	"references": [
		"https://unit42.paloaltonetworks.com/sandbox-evasion-memory-detection/"
	],
	"report_names": [
		"sandbox-evasion-memory-detection"
	],
	"threat_actors": [],
	"ts_created_at": 1775434494,
	"ts_updated_at": 1775791266,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/f8eefcb177c45055c5db5b6fec07010872d5aafb.pdf",
		"text": "https://archive.orkl.eu/f8eefcb177c45055c5db5b6fec07010872d5aafb.txt",
		"img": "https://archive.orkl.eu/f8eefcb177c45055c5db5b6fec07010872d5aafb.jpg"
	}
}