{
	"id": "0e0f86c9-2ecd-41e4-b605-f157e6508bb2",
	"created_at": "2026-04-06T00:16:55.398433Z",
	"updated_at": "2026-04-10T13:12:24.441438Z",
	"deleted_at": null,
	"sha1_hash": "24fc3d9354a611998d9b4498f1cd1651de5d8339",
	"title": "Mitigating Browser Fingerprinting in Web Specifications",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 193274,
	"plain_text": "Mitigating Browser Fingerprinting in Web Specifications\r\nBy Nick Doty\r\nPublished: 2025-09-25 · Archived: 2026-04-05 16:58:25 UTC\r\nJump to Table of Contents Collapse Sidebar\r\nAbstract\r\nExposure of settings and characteristics of browsers can harm user privacy by allowing for browser fingerprinting.\r\nThis document defines different types of fingerprinting, considers distinct levels of mitigation for the related\r\nprivacy risks and provides guidance for Web specification authors on how to balance these concerns when\r\ndesigning new Web features.\r\nStatus of This Document\r\nThis section describes the status of this document at the time of its publication. A list of current W3C publications\r\nand the latest revision of this technical report can be found in the W3C standards and drafts index.\r\nThis document provide guidance to Web specification authors on mitigating the privacy impacts of browser\r\nfingerprinting.\r\nThe Privacy Working Group is collaborating with the Technical Architecture Group (TAG) on this guidance.\r\nThis document was published by the Privacy Working Group as a Group Note using the Note track.\r\nThis Group Note is endorsed by the Privacy Working Group, but is not endorsed by W3C itself nor its Members.\r\nThe W3C Patent Policy does not carry any licensing requirements or commitments on this document.\r\nThis document is governed by the 18 August 2025 W3C Process Document.\r\nTable of Contents\r\n1. Abstract\r\n2. Status of This Document\r\n3. 1. Browser fingerprinting\r\n1. 1.1 What is fingerprinting?\r\n2. 1.2 Privacy impacts and threat models\r\n1. 1.2.1 Identify a user\r\n2. 1.2.2 Correlation of browsing activity\r\n3. 1.2.3 Tracking without transparency or user control\r\n3. 1.3 What can we do about it?\r\n4. 2. Best Practices Summary\r\nhttps://www.w3.org/TR/fingerprinting-guidance/\r\nPage 1 of 16\n\n5. 3. Types of fingerprinting\r\n1. 3.1 Passive\r\n2. 3.2 Active\r\n3. 
3.3 Transient Event Correlation\r\n4. 3.4 Cookie-like\r\n6. 4. Feasibility\r\n1. 4.1 Fingerprinting mitigation levels of success\r\n2. 4.2 Feasible goals for specification authors\r\n7. 5. Identifying fingerprinting surface and evaluating severity\r\n8. 6. Mitigations\r\n1. 6.1 Weighing increased fingerprinting surface\r\n2. 6.2 Standardization\r\n3. 6.3 Detectability\r\n4. 6.4 Clearing all local state\r\n5. 6.5 Do Not Track\r\n9. A. Research\r\n1. A.1 Browser vendor documentation\r\n2. A.2 Academic research\r\n3. A.3 Testing\r\n10. B. Acknowledgements\r\n11. C. References\r\n1. C.1 Informative references\r\nIn short, browser fingerprinting is the capability of a site to identify or re-identify a visiting user, user agent, or\r\ndevice via configuration settings or other observable characteristics.\r\nA similar definition is provided by [RFC6973]. A more detailed list of types of fingerprinting is included below.\r\nThis document does not attempt to catalog all features currently used or usable for browser fingerprinting;\r\nhowever, A. Research provides links to browser vendor pages and academic findings.\r\nBrowser fingerprinting can be used as a security measure (e.g. as means of authenticating the user). However,\r\nfingerprinting is also a threat to users' privacy on the Web. This document does not attempt to provide a single\r\nunifying definition of \"privacy\" or \"personal data\", but we highlight how browser fingerprinting might impact\r\nusers' privacy. For example, browser fingerprinting can be used to:\r\nidentify a user\r\ncorrelate a user’s browsing activity within and across sessions\r\ntrack users without transparency or control\r\nThe privacy implications associated with each use case are discussed below. 
Following from the practice of\r\nsecurity threat model analysis, we note that there are distinct models of privacy threats for fingerprinting.\r\nDefenses against these threats differ, depending on the particular privacy implication and the threat model of the\r\nuser.\r\nThere are many reasons why users might wish to remain anonymous or unidentified online, including: concerns\r\nabout surveillance, personal physical safety, and concerns about discrimination against them based on what they\r\nread or write when using the Web. When a browser fingerprint is correlated with identifying information (like an\r\nemail address, a recognized given and surname, or a government-issued identifier), an application or service\r\nprovider may be able to identify an otherwise pseudonymous user. The adversary and consequences of this threat\r\nwill vary by the particular user and use case, but can include nation-state intelligence agencies and threats of\r\nviolence or imprisonment.\r\nBrowser fingerprinting raises privacy concerns even when offline identities are not implicated. Some users may be\r\nsurprised or concerned that an online party can correlate multiple visits (on the same or different sites) to develop\r\na profile or history of the user. This concern may be heightened because (see below) it may occur without the\r\nuser's knowledge or consent and tools such as clearing cookies or using a VPN do not prevent further correlation.\r\nBrowser fingerprinting also allows for tracking across origins [RFC6454]: different sites may be able to combine\r\ninformation about a single user even where a cookie policy would block accessing of cookies between origins,\r\nbecause the fingerprint is relatively unique and the same for all origins.\r\nIn contrast to other mechanisms defined by Web standards for maintaining state (e.g.
cookies), browser\r\nfingerprinting allows for collection of data about user activity without clear indications that such collection is\r\nhappening. Transparency can be important for end users, to understand how ongoing collection is happening, but\r\nit also enables researchers, policymakers and others to document or regulate privacy-sensitive activity. Browser\r\nfingerprinting also allows for tracking of activity without clear or effective user controls: a browser fingerprint\r\ntypically cannot be cleared or re-set. (See the finding on unsanctioned tracking [TAG-UNSANCTIONED].)\r\nAdvances in techniques for browser fingerprinting (see A. Research, below), particularly in active fingerprinting,\r\nsuggest that complete elimination of the capability of browser fingerprinting by a determined adversary through\r\nsolely technical means that are widely deployed is implausible. However, mitigations in our technical\r\nspecifications are possible, as described below (6. Mitigations), and may achieve different levels of success (4.\r\nFeasibility).\r\nMitigations recommended here are simply mitigations, not solutions. Users of the Web cannot confidently rely on\r\nsites being completely unable to correlate traffic, especially when executing client-side code. A fingerprinting\r\nsurface extends across all implemented Web features for a particular user agent, and even to other layers of the\r\nstack; for example, differences in TCP connections. For example, a user might employ an onion routing system\r\nsuch as Tor to limit network-level linkability, but still face the risk of correlating Web-based activity through\r\nbrowser fingerprinting. 
In order to mitigate these privacy risks as a whole, fingerprinting must be considered\r\nduring the design and development of all specifications.\r\nThe TAG finding on Unsanctioned Web Tracking, including browser fingerprinting, includes a description of the\r\nlimitations of technical measures and encourages minimizing and documenting new fingerprinting surface [TAG-UNSANCTIONED]. The best practices below detail common actions that authors of specifications for Web\r\nfeatures can take to mitigate the privacy impacts of browser fingerprinting. The Self-Review Questionnaire\r\ndocuments mitigations of privacy impacts in Web features more generally that may complement these practices\r\n[security-privacy-questionnaire].\r\nBest Practice 1: Avoid unnecessary or severe increases to fingerprinting surface, especially for passive\r\nfingerprinting.\r\nBest Practice 2: Narrow the scope and availability of a feature with fingerprinting surface to what is\r\nfunctionally necessary.\r\nBest Practice 3: Mark features that contribute to fingerprintability.\r\nBest Practice 4: Specify orderings and non-functional differences.\r\nBest Practice 5: Limit the fingerprinting surface to only the entropy necessary.\r\nBest Practice 6: Enable graceful degradation for privacy-conscious users or implementers.\r\nBest Practice 7: Design APIs to access only the entropy necessary.\r\nBest Practice 8: Require servers to advertise or opt in to access data.\r\nBest Practice 9: Avoid unnecessary new local state mechanisms.\r\nBest Practice 10: Highlight any local state mechanisms to enable simultaneous clearing.\r\nBest Practice 11: Avoid permanent or persistent state.\r\nPassive fingerprinting is browser fingerprinting based on characteristics observable in the contents of Web\r\nrequests, without the use of any code executed on the client.\r\nPassive fingerprinting would trivially include cookies (often unique identifiers sent
in HTTP requests), the set of\r\nHTTP request headers and the IP address and other network-level information. The User-Agent string [RFC9110],\r\nfor example, is an HTTP request header that typically identifies the browser, renderer, version and operating\r\nsystem. For some populations, the User-Agent and IP address will often uniquely identify a particular user's\r\nbrowser [NDSS-FINGERPRINTING].\r\nFor active fingerprinting, we also consider techniques where a site runs JavaScript or other code on the local client\r\nto observe additional characteristics about the browser, user, device or other context. In recent years, numerous\r\ntechniques have (ab)used CSS features to perform fingerprinting on par with JavaScript.\r\nTechniques for active fingerprinting might include accessing the window size, enumerating fonts or connected\r\ndevices, evaluating performance characteristics, reading from device sensors, and rendering graphical patterns.\r\nKey to this distinction is that active fingerprinting takes place in a way that is potentially detectable on the client.\r\nNote that in some types of active fingerprinting, characteristics are combined on the client to produce a\r\nfingerprint. In most cases, however, the characteristics are sent en masse to a server, which can combine them in\r\nunobservable ways. The latter mechanism may be detectable, but the efficacy of fingerprinting mitigation\r\ntechniques is much harder to measure in this scenario.\r\nTransient Event Correlation is a technique to associate separate simultaneous sessions on a device with one\r\nanother using observations of events that occur near simultaneously on multiple origins [EPHEMERAL-FINGERPRINTING].
These events are typically fired as a result of a change in hardware or environment, such as\r\nwhen a device's posture changes or when the set of available media devices changes.\r\nTransient event correlation is not typically a concern except in certain threat models: it is only useful when an\r\nattacker is unable to link two sessions via passive or active fingerprinting techniques, which would typically\r\ninclude considering the sessions' IP address. In uncommon situations, those techniques can fail, but event\r\ncorrelation can still be used to link sessions between e.g. two entirely different browser applications or two tabs\r\nthat are sent over different network connections.\r\nTransient event correlation may be possible with complex CSS, but typically requires JavaScript. It can be\r\ndone in a reactive manner, where JavaScript merely observes events, or in a proactive manner, by\r\nheavily utilizing resources such as the CPU or GPU that another origin can observe. This type of attack between\r\ncooperating origins is typically referred to as a \"covert channel\" and there have been many papers about them\r\nusing different techniques, for example [RENDERING-CONTENTION].\r\nUsers, user agents and devices may also be re-identified by a site that first sets and later retrieves state stored by a\r\nuser agent or device. This cookie-like fingerprinting allows re-identification of a user or inferences about a user in\r\nthe same way that HTTP cookies allow state management for the stateless HTTP protocol [RFC6265].\r\nCookie-like fingerprinting can also circumvent user attempts to limit or clear cookies stored by the user agent, as\r\ndemonstrated by the \"evercookie\" implementation [EVERCOOKIE].
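As an illustration of how such respawning works, consider the following sketch. It is illustrative only: plain Maps stand in for distinct storage mechanisms (cookies, local storage, a cached resource), and no real API or identifier name is implied.

```javascript
// Illustrative sketch (not any real API): an identifier planted redundantly in
// several storage mechanisms survives unless the user clears every one of them.
// Plain Maps stand in for cookies, localStorage, cached resources, etc.
function respawnIdentifier(stores, freshId) {
  // Recover the identifier from any store the user failed to clear...
  let id = null;
  for (const store of stores) {
    if (store.has("uid")) {
      id = store.get("uid");
      break;
    }
  }
  if (id === null) id = freshId; // ...or mint a new one on first contact.
  // ...then re-plant it in every mechanism, undoing any partial clearing.
  for (const store of stores) store.set("uid", id);
  return id;
}
```

Clearing only some of the mechanisms is undone on the next visit, which is why Best Practice 10 calls for local state mechanisms to be clearable simultaneously.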
Where state is maintained across user agents\r\n(as in the case of common plugins with local storage), across devices (as in the case of certain browser syncing\r\nmechanisms) or across software upgrades, cookie-like fingerprinting can allow re-identification of users, user\r\nagents or devices where active and passive fingerprinting might not. The Security and Privacy Self-Review\r\nQuestionnaire also considers this threat in origin state that persists across browsing sessions [security-privacy-questionnaire].\r\nThere are different levels of success in mitigating browser fingerprinting:\r\nDecreased fingerprinting surface\r\nRemoving the source of entropy or available attributes that can be used for fingerprinting, or limiting the\r\ndata to not be accessible without some form of privilege being granted explicitly or implicitly.\r\nIncreased anonymity set\r\nBy standardization, convention or common implementation, increasing the commonality of particular\r\nconfigurations to decrease the likelihood of unique fingerprintability.\r\nDetectable fingerprinting\r\nMaking fingerprinting observable to others, so that the user agent might block it or researchers can\r\ndetermine that it's happening.\r\nClearable local state\r\nHelping users respond to fingerprinting by making state mechanisms clearable.\r\nResearch has shown feasible improvement in privacy protection in all of these areas. Collected data on Web users\r\nhas shown mobile devices to have substantially larger anonymity sets than desktop browsers [HIDING-CROWD].\r\nResearch on forms of active fingerprinting has documented its use and demonstrated changes in use of those\r\ntechniques as an apparent result of increased awareness [WPM-MILLION]. 
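The notion of an anonymity set used above can be made concrete: a user's anonymity set is the count of users whose observable characteristics are indistinguishable from their own. A minimal illustrative sketch (the attribute names and values are invented for the example):

```javascript
// Illustrative sketch: group users by their full tuple of observable
// characteristics; the size of each group is that user's anonymity set.
function anonymitySetSizes(users) {
  const counts = new Map();
  for (const u of users) {
    const fp = JSON.stringify(u); // treat the whole tuple as the fingerprint
    counts.set(fp, (counts.get(fp) || 0) + 1);
  }
  return users.map((u) => counts.get(JSON.stringify(u)));
}
```

Standardizing a characteristic (say, exposing one fixed font list for every installation) merges previously distinct tuples and so grows each user's anonymity set.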
Respawning of cookies has continued,\r\nwith an increasing variety of techniques, but awareness and technical responses to the issue have made the practice\r\nless widespread [FLASHCOOKIES-2].\r\nThis document works under the expectation that mitigations with different levels of success are feasible under\r\ndifferent circumstances, for different threat models and against different types of fingerprinting. In general, active\r\nfingerprinting may be made detectable; we can minimize increases to the surface of passive fingerprinting; and\r\ncookie-like mechanisms can be made clearable.\r\nSome implementers and some users may be willing to accept reduced functionality or decreased performance in\r\norder to minimize browser fingerprinting. Documenting which features have fingerprinting risk eases the work of\r\nimplementers building modes for these users; minimizing fingerprinting even in cases where common\r\nimplementations will have easy active fingerprintability allows such users to reduce the functionality trade-offs\r\nnecessary. Making browser fingerprinting more detectable also contributes to mitigations outside the\r\nstandardization process; for example, through regulatory or policy means [TAG-UNSANCTIONED].\r\nTo mitigate browser fingerprinting in your specification:\r\n1. identify features that can be used for browser fingerprinting;\r\n2. evaluate the severity of the fingerprinting surface based on these five factors; and,\r\n3. apply mitigations described in the best practices below (6.
Mitigations), focused on limiting the severity of\r\nthat fingerprinting surface.\r\nThe fingerprinting surface of a user agent is the set of observable characteristics that can be used in concert to\r\nidentify a user, user agent or device or correlate its activity.\r\nData sources that may be used for browser fingerprinting include:\r\nuser configuration (of the browser or operating system)\r\ndevice characteristics\r\nenvironmental characteristics (e.g. sensor readings)\r\noperating system characteristics\r\nuser behavior\r\nbrowser characteristics\r\nThese data sources may be accessed directly for some features, but in many other cases they are inferred through\r\nsome other observation. Timing channels, in particular, are commonly used to infer details of hardware (exactly\r\nhow quickly different operations are completed may provide information on GPU capability, say), network\r\ninformation (via the latency or speed in loading a particular resource) or even user configuration (what items have\r\nbeen previously cached or what resources are not loaded). Consider the side effects of a feature and how those side\r\neffects would allow inferences of any of these characteristics.\r\nThe Tor Browser design document [TOR-DESIGN] has more details on these sources and their relative priorities;\r\nthis document adds environmental characteristics in that sensor readings or data access may distinguish a user,\r\nuser agent or device by information about the environment (location, for example).\r\nFor each identified feature, consider the severity for the privacy impacts described above (1.2 Privacy impacts and\r\nthreat models) based on the following factors:\r\nentropy\r\nHow distinguishing is this new surface? Consider both the possible variations and the likely distribution of\r\nvalues.
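One way to reason about this factor: the identifying power of a single observed value can be treated as its self-information. A minimal sketch (illustrative only; the probabilities are assumptions, not measurements):

```javascript
// Illustrative sketch: the identifying power of one observed value can be
// quantified as its self-information, -log2(p), where p is the fraction of
// users sharing that value; bits from independently distributed attributes add.
function surprisalBits(fractionSharing) {
  return -Math.log2(fractionSharing);
}

surprisalBits(0.25); // a value shared by 1 in 4 users contributes 2 bits
surprisalBits(1 / 8e9); // ~33 bits would single out one person among 8 billion
```

Note that the same feature can contribute very different numbers of bits for different users: a rare value (small fraction sharing it) carries many more bits than a common one.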
Adding 1 bit of entropy is typically of less concern; 30-some bits of entropy would be enough to\r\nuniquely identify every individual person. Different data sources may provide different distributions of\r\nvariation; for example, even 1 bit of entropy can uniquely identify a user if they are the only one for whom it\r\nis true.\r\ndetectability\r\nWill use of this feature for browser fingerprinting be observable to the user agent or likely to be\r\ndiscoverable by researchers? Because detectability is an important — and perhaps the most feasible —\r\nmitigation, increases to the surface for passive fingerprinting are of particular concern and should be\r\navoided.\r\npersistence\r\nHow long will the characteristics of this fingerprinting surface stay unchanged? Can users control or re-set\r\nthese values to prevent long-lived identification? While short-lived characteristics may still enable\r\nunexpected correlation of activity (for example, between two browser profiles on the same device),\r\npersistent or permanent identifiers are particularly concerning for the lack of user control.\r\navailability\r\nWill this surface be available to the \"drive-by Web\" or only in certain contexts such as when a document is\r\nvisible or where a user has interacted with the document or granted a particular permission? While browser\r\nfingerprinting is still something to mitigate in the permissioned context, the concern that a feature will end\r\nup used primarily for fingerprinting is reduced.\r\nscope\r\nIs this surface consistent across origins or only within a single origin?
In general, characteristics or\r\nidentifiers that are tied to a particular origin are of less concern and can be handled with the same tools as\r\nHTTP cookies.\r\nWhile we do not recommend specific trade-offs, these factors can be used to weigh increases to that surface (6.1\r\nWeighing increased fingerprinting surface) and suggest appropriate mitigations. Although each factor may suggest\r\nspecific mitigations, in weighing whether to add fingerprinting surface they should be considered in concert. For\r\nexample, access to a new set of characteristics about the user may be high entropy, but be of less concern because\r\nit has limited availability and is easily detectable. A cross-origin, drive-by-available, permanent, passive unique\r\nidentifier is incompatible with our expectations for privacy on the Web.\r\nIn conducting this analysis, it may be tempting to dismiss certain fingerprinting surface in a specification because\r\nof a comparison to fingerprinting surface exposed by other parts of the Web platform or other layers of the stack.\r\nBe cautious about making such claims. First, while similar information may be available through other means,\r\nsimilar is not identical: information disclosures may not be exactly the same and fingerprintability is made even\r\nmore effective by combining these distinct sources. Second, where identical entropy is present, other factors of\r\nseverity or availability may differ and those factors are important for feasible mitigation. Third, the platform is\r\nneither monolithic nor static; not all other features are implemented in all cases and may change (or be removed)\r\nin the future. Fourth, circular dependencies are a danger when so many new features are under development; two\r\nspecifications sometimes refer to one another in arguing that fingerprinting surface already exists. 
It is more useful\r\nto reviewers and implementers to consider the fingerprinting surface provided by the particular Web feature itself,\r\nwith specific references where surface may be available through other features as well.\r\nWeb specification authors regularly attempt to strike a balance between new functionality and fingerprinting\r\nsurface. For example, feature detection functionality allows for progressive enhancement with a small addition to\r\nfingerprinting surface; detailed enumerations of fonts or connected devices may provide a large fingerprinting\r\nsurface with minimal functional support.\r\nAuthors and Working Groups determine the appropriate balance between these properties on a case-by-case basis,\r\ngiven their understanding of the functionality, its implementations and the severity of increased fingerprinting\r\nsurface. However, given the distinct privacy impacts described above and in order to improve consistency across\r\nspecifications, these practices provide some guidance:\r\nBest Practice 1: Avoid unnecessary or severe increases to fingerprinting surface, especially for passive fingerprinting.\r\nConsider each of the severity factors described above and whether that functionality is necessary and whether\r\ncomparable functionality is feasible with less severe increases to the fingerprinting surface.\r\nIn particular, unless a feature cannot reasonably be designed in any other way, increased passive fingerprintability\r\nshould be avoided. Passive fingerprinting allows for easier and widely-available identification, without\r\nopportunities for external detection or control by users or third parties.\r\nBest Practice 2: Narrow the scope and availability of a feature with fingerprinting surface to what is functionally necessary.\r\nWhat browsing contexts, resources and requests need access to a particular feature?
Identifiers can often be\r\nscoped to have a different value in different origins. Some configuration may only be necessary in top-level\r\nbrowsing contexts.\r\nIf an event is to be fired in response to an environmental or hardware change, only fire that event when the\r\nWindow's associated document's visibility state is \"visible\", or in Workers whose owner set includes such a\r\nDocument. If background pages need to learn of the event when they're focused, also fire the event while updating\r\nthe visibility state. Consider whether it should be restricted by an iframe sandbox.\r\nShould access to this functionality be limited to where users have granted a particular permission? While\r\nexcessive permissions can create confusion and fatigue, limiting highly granular data to situations where a user\r\nhas already granted permission to access sensitive data widely mitigates the risk of that feature being used\r\nprimarily for browser fingerprinting in \"drive-by\" contexts. For example, Media Capture and Streams\r\n[mediacapture-streams] limits access to attached microphone and camera device labels to when the user has\r\ngranted permission to access a camera or microphone (while still allowing access to the number and configuration\r\nof attached cameras and microphones in all contexts, a noted increase in drive-by fingerprinting surface).\r\nSome implementations may also limit the entropy of fingerprinting surface by not exposing different capabilities\r\nfor different devices or installations of a user agent.
Font lists, for example, can be limited to a list commonly\r\navailable on all devices that run a particular browser or operating system (as implemented in Tor Browser, Firefox\r\nand Safari).\r\nBest Practice 3: Mark features that contribute to fingerprintability.\r\nWhere a feature does contribute to the fingerprinting surface, indicate that impact, by explaining the effect (and\r\nany known implementer mitigations) and marking the relevant section with a fingerprinting icon, as this paragraph\r\nis.\r\nThe following code can be used to mark a paragraph with the fingerprint icon.\r\n\u003cimg src=\"https://www.w3.org/Icons/fingerprint.png\"\r\n class=\"fingerprint\"\r\n alt=\"This feature may contribute to browser fingerprintability.\"\u003e\r\nSpecifications can mitigate against fingerprintability through standardization; by defining a consistent behavior,\r\nconformant implementations won't have variations that can be used for browser fingerprinting.\r\nRandomization of certain browser characteristics has been proposed as a way to combat browser fingerprinting.\r\nWhile this strategy may be pursued by some implementations, we expect in general it will be more effective for us\r\nto standardize or null values rather than setting a range over which they can vary. The Tor Browser design [TOR-DESIGN] provides more detailed information, but in short: it's difficult to measure how well randomization will\r\nwork as a mitigation and it can be costly to implement in terms of usability (varying functionality or design in\r\nunwanted ways), processing (generating random numbers) and development (including the cost of introducing\r\nnew security vulnerabilities).
Standardization provides the benefit of an increased anonymity set for conformant\r\nbrowsers with the same configuration: that is, an individual can look the same as a larger group of people rather\r\nthan trying to look like a number of different individuals.\r\nBest Practice 4: Specify orderings and non-functional differences.\r\nTo reduce unnecessary entropy, specify aspects of API return values and behavior that don't contribute to\r\nfunctional differences. For example, if the ordering of return values in a list has no semantic value, specify a\r\nparticular ordering (alphabetical order by a defined algorithm, for example) so that incidental differences don't\r\nexpose fingerprinting surface.\r\nEven within a single implementation, variation can occur unexpectedly due to differences in processor\r\narchitecture or operating system configuration. Access to a list of system fonts via Flash or Java plugins notably\r\nreturned the list sorted not in a standard alphabetical order, but in an unspecified order specific to the system. This\r\nordering added to the entropy available from that plugin in a way that provided no functional advantage.\r\nBest Practice 5: Limit the fingerprinting surface to only the entropy necessary.\r\nFollowing the basic principle of data minimization [RFC6973], design your APIs such that a site can access only\r\nthe entropy necessary for particular functionality. This can take the form of:\r\nClamping decimal points: Instead of returning a high-precision floating-point value such as 2.735928,\r\nconsider clamping to fewer decimal places or returning a coarser value like 2.7 if that satisfies the\r\nfunctional requirements. For example, a battery API might return 0.8 (80%) instead of 0.81234567.\r\nUsing an enumerated list of options: Replace fine-grained numeric outputs with a constrained set of\r\nmeaningful categories.
For instance, instead of exposing a percentage of the battery remaining, return an\r\nenum like \"low\", \"medium\", \"high\", or \"charging\", which can guide performance decisions without\r\nleaking precise details.\r\nUsing a boolean: In some cases, a yes/no answer may be sufficient and much less fingerprintable. For\r\nexample, instead of returning detailed information about a user's battery, encode it as simply Low Battery\r\nor not.\r\nIf your specification exposes some fingerprinting surface (whether it's active or passive), some implementers (e.g.\r\nTor Browser) are going to be compelled to disable those features for certain privacy-conscious users.\r\nBest Practice 6: Enable graceful degradation for privacy-conscious users or implementers.\r\nFollowing the principle of progressive enhancement, and to avoid further divergence (which might itself expose\r\nvariation in users), consider whether some functionality in your specification is still possible if fingerprinting\r\nsurface features are disabled.\r\nExplicit hooks or API flags may be used so that browser extensions or certain user agents can easily disable\r\nspecific features or aspects of a feature. For example, an event defined in a feature might specify that certain\r\nproperties that describe the hardware device that triggered it may be blank.\r\nStandardization does not need to attempt to hide all differences between different browsers (e.g. Edge and\r\nChrome); implemented functionality and behavior differences will always exist between different\r\nimplementations. For that reason, removing User-Agent headers altogether is not a goal.
However, variation in\r\nthe User-Agent string that reveals additional information about the user or device has been shown to provide\r\nsubstantial fingerprinting surface [BEAUTY-BEAST].\r\nWhere a client-side API provides some fingerprinting surface, authors can still assist User Agents via detectability.\r\nIf client-side fingerprinting activity is to some extent distinguishable from functional use of APIs, user agent\r\nimplementations may have an opportunity to prevent ongoing fingerprinting or make it observable to users and\r\nexternal researchers (including academics or relevant regulators) who may be able to detect and investigate the use\r\nof fingerprinting.\r\nBest Practice 7: Design APIs to access only the entropy necessary.\r\nFollowing the basic principle of data minimization [RFC6973], design your APIs such that by default, they expose\r\nonly the entropy necessary for particular functionality, and require more explicit parameters to receive more\r\nexpansive data.\r\nMaking an API asynchronous and returning a promise gives User Agents the option, but not the requirement, to\r\ninterpose a permission dialog in whatever circumstances they may deem desirable.\r\nAuthors might design an API to allow for querying of a particular value, rather than returning an enumeration of
User agents and researchers can then more easily distinguish between sites that query for one or two\r\nparticular values (gaining minimal entropy) and those that query for all values (more likely attempting to\r\nfingerprint the browser); or implementations can cap the number of different values.\r\nFor more information, see:\r\nDevice API Privacy Requirements [dap-privacy-reqs], DAP Working Group Note, June 2010.\r\nData Minimization in Web APIs [TAG-MINIMIZATION], W3C TAG, September 2011.\r\nGeneric Sensor API: Security and privacy considerations [generic-sensor], March 2018.\r\nThe leaking battery: A privacy analysis of the HTML5 Battery Status API [LEAKING-BATTERY], 2015.\r\nRelatedly, detectability is improved even for data sent in HTTP headers (what we would typically consider passive\r\nfingerprinting) if sites are required to request access (or \"opt in\") to information before it's sent.\r\nBest Practice 8: Require servers to advertise or opt in to access data.\r\nEven for data sent in HTTP request headers, requiring servers to advertise use of particular data, publicly\r\ndocument a policy, or \"opt in\" before clients send configuration data provides the possibility of detection by user\r\nagents or researchers.\r\nFor example, Client Hints [client-hints-infrastructure] proposes an Accept-CH response header for services to\r\nindicate that specific hints can be used for content negotiation, rather than all supporting clients sending all hints\r\nin all requests.\r\nNote: This is a relatively new approach; we're still evaluating whether it provides meaningful and useful detectability.\r\nImplementers can facilitate detectability by providing or enabling instrumentation so that users or third parties are\r\nable to calculate when fingerprinting surface is being accessed. 
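As a rough illustration of such instrumentation, a research harness might wrap a surface object in a JavaScript Proxy and count property reads; the object and all names below are stand-ins assumed for the sketch, and a real user agent would instrument inside the engine rather than at the page level, where a Proxy is itself observable:

```javascript
// Count reads of each property on a fingerprinting surface. A page-level
// Proxy like this is only a research sketch; scripts can detect it, which
// is exactly the exposure problem the surrounding text mentions.
function instrument(surface) {
  const log = new Map(); // property name -> read count
  const proxy = new Proxy(surface, {
    get(target, prop, receiver) {
      const key = String(prop);
      log.set(key, (log.get(key) || 0) + 1); // record each read
      return Reflect.get(target, prop, receiver);
    },
  });
  return { proxy, log };
}

// Stand-in for a navigator-like surface; the values are made up.
const fakeNavigator = {
  platform: "Linux x86_64",
  hardwareConcurrency: 8,
  language: "en",
};
const { proxy: nav, log } = instrument(fakeNavigator);

// A script that touches many distinct properties in one burst looks more
// like enumeration-style fingerprinting than functional use.
void nav.platform;
void nav.platform;
void nav.language;
```

With a log like this, a crawler can compare the set of properties a page reads against the handful a functional use would need.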
Of particular importance for instrumentation are:\r\naccess to all the different sources of fingerprinting surface; identification of the originating script; and avoiding\r\nexposure that instrumentation is taking place. Beyond the minimization practice described above, these are largely\r\nimplementation-specific (rather than Web specification) features.\r\nFeatures that enable storage of data on the client and functionality for client- or server-side querying of that data\r\ncan increase the ease of cookie-like fingerprinting. Storage can range from large amounts of data (for example,\r\nthe Web Storage API) to just a binary flag (has or has not provided a certain permission; has or has not cached a\r\nsingle resource).\r\nBest Practice 9: Avoid unnecessary new local state mechanisms.\r\nIf functionality does not require maintaining client-side state in a way that is subsequently queryable (or otherwise\r\nobservable), avoid creating a new cookie-like feature. 
Can the functionality be accomplished with existing HTTP\r\ncookies or an existing JavaScript local storage API?\r\nFor example, the Flash plugin's Local Shared Objects (LSOs) were abused to duplicate and re-spawn HTTP\r\ncookies cleared by the user [FLASHCOOKIES], and the single bit that indicates whether the Strict-Transport-Security\r\nheader has been set for a domain has been abused in the same way [HSTS-SUPERCOOKIE].\r\nWhere features do require setting and retrieving local state, there are ways to mitigate the privacy impacts related\r\nto unexpected cookie-like behavior; in particular, you can help implementers prevent \"permanent\", \"zombie\",\r\n\"super\", or \"evercookies\".\r\nBest Practice 10: Highlight any local state mechanisms to enable simultaneous clearing.\r\nClearly note where state is being maintained and could be queried, and provide guidance to implementers on\r\nenabling simultaneous deletion of local state for users. Such functionality can mitigate the threat of \"evercookies\"\r\nbecause the presence of state in one such storage mechanism can't be used to persist and re-create an identifier.\r\nPermanent or persistent data (including any identifiers) pose a particular risk because they undermine the ability\r\nof a user to clear or re-set the state of their device or to maintain different identities.\r\nBest Practice 11: Avoid permanent or persistent state.\r\nPermanent identifiers or other state (for example, identifiers or keys set in hardware) should typically not be used.\r\nWhere necessary, access to such identifiers would require user permission and limitation to a particular origin.\r\nHowever, even heavy-weight mitigations are imperfect: explaining the implications of such permission to users\r\nmay be difficult, and server-side collusion between origins is typically impossible to detect. 
As a result, your\r\ndesign should not rely on saving and later querying data on the client and expecting it to persist beyond a user\r\nclearing cookies or other local state. That is, you should not expect any local state information to be permanent or\r\nto persist longer than other local state.\r\nThough not strictly browser fingerprinting, there are other privacy concerns regarding user tracking for features\r\nthat provide local storage of data. Mitigations suggested in the Web Storage API specification include: safe-listing,\r\nblock-listing, expiration, and secure deletion [HTML#user-tracking].\r\nExpressions of, and compliance with, a Do Not Track signal do not inhibit the capability of browser\r\nfingerprinting, but may mitigate some user concerns about fingerprinting, specifically around tracking as defined\r\nin those specifications [TRACKING-DNT] [TRACKING-COMPLIANCE] and as implemented by services that\r\ncomply with those user preferences. That is, DNT can mitigate concerns with cooperative sites.\r\nThe use of DNT in this way typically does not require changes to other functional specifications. If your\r\nspecification expects a particular behavior upon receiving a particular DNT signal, indicate that with a reference\r\nto [TRACKING-DNT]. If your specification introduces a new communication channel that could be used for\r\ntracking, you might wish to define how a DNT signal should be communicated.\r\nSome browser developers maintain pages on browser fingerprinting, including: potential mitigations or\r\nmodifications necessary to decrease the surface of that browser engine; different vectors that can be used for\r\nfingerprinting; potential future work. 
These are not cheery, optimistic documents.\r\nThe Chromium Projects: Technical analysis of client identification mechanisms\r\nWebKit Wiki: Fingerprinting\r\nMozilla Wiki: Fingerprinting\r\nThe Design and Implementation of the Tor Browser: Cross-Origin Fingerprinting Unlinkability\r\nWhat are the key papers to read here, historically or for the latest on fingerprinting techniques? What are some\r\nareas of open research that might be relevant?\r\nEckersley, Peter. \"How unique is your web browser?\" Privacy Enhancing Technologies. Springer Berlin\r\nHeidelberg, 2010.\r\nMowery, Keaton, Dillon Bogenreif, Scott Yilek, and Hovav Shacham. \"Fingerprinting Information in\r\nJavaScript Implementations.\" In Web 2.0 Security and Privacy, 2011.\r\nYen, Ting-Fang, et al. \"Host fingerprinting and tracking on the web: Privacy and security implications.\"\r\nProceedings of NDSS. 2012. [NDSS-FINGERPRINTING]\r\nMowery, Keaton, and Hovav Shacham. \"Pixel perfect: Fingerprinting canvas in HTML5.\" Web 2.0 Security\r\nand Privacy, 2012.\r\nMattioli, Dana. \"On Orbitz, Mac Users Steered to Pricier Hotels.\" Wall Street Journal, August 23, 2012.\r\nGunes Acar et al. \"FPDetective: dusting the web for fingerprinters.\" In CCS '13.\r\nNikiforakis, Nick, et al. \"Cookieless monster: Exploring the ecosystem of web-based device\r\nfingerprinting.\" IEEE Symposium on Security and Privacy (S\u0026P 2013), 2013.\r\nG. Acar, C. Eubank, S. Englehardt, M. Juarez, A. Narayanan, C. Diaz. \"The Web never forgets: Persistent\r\ntracking mechanisms in the wild.\" In Proceedings of CCS 2014, Nov. 2014.\r\nSteven Englehardt, Arvind Narayanan. \"Online tracking: A 1-million-site measurement and analysis.\" May\r\n2016. [WPM-MILLION]\r\nPierre Laperdrix, Walter Rudametkin, Benoit Baudry. 
\"Beauty and the Beast: Diverting modern web\r\nbrowsers to build unique browser fingerprints.\" IEEE Symposium on Security and Privacy (S\u0026P 2016),\r\nMay 2016.\r\n\"Hiding in the Crowd: an Analysis of the Effectiveness of Browser Fingerprinting at Large Scale.\"\r\nWWW2018 - TheWebConf 2018: 27th International World Wide Web Conference, April 2018. [HIDING-CROWD]\r\nhttps://www.w3.org/TR/fingerprinting-guidance/\r\nPage 13 of 16\n\nA non-exhaustive list of sites that allow the visitor to test their configuration for fingerprintability.\r\namiunique.org (INRIA)\r\npanopticlick.eff.org (EFF)\r\nBrowserSPY.dk\r\npet-portal cross-browser fingerprinting test\r\np0f v3 (purely passive fingerprinting)\r\nMany thanks to Robin Berjon for ReSpec and to Tobie Langel for Github advice; to the Privacy Interest Group\r\nand the Technical Architecture Group for review; to the Tor Browser designers for references and\r\nrecommendations; and to Christine Runnegar for contributions.\r\n[BEAUTY-BEAST]\r\nBeauty and the Beast: Diverting modern web browsers to build unique browser fingerprints. Pierre\r\nLaperdrix; Walter Rudametkin; Benoit Baudry. IEEE Symposium on Security and Privacy (S\u0026P 2016).\r\nMay 2016. URL: https://inria.hal.science/hal-01285470v2/\r\n[client-hints-infrastructure]\r\nClient Hints Infrastructure. W3C. Draft Community Group Report. URL: https://wicg.github.io/client-hints-infrastructure/\r\n[dap-privacy-reqs]\r\nDevice API Privacy Requirements. Alissa Cooper; Frederick Hirsch; John Morris. W3C. 29 June 2010.\r\nW3C Working Group Note. URL: https://www.w3.org/TR/dap-privacy-reqs/\r\n[device-posture]\r\nDevice Posture API. Kenneth Christiansen; Alexis Menard; Diego Gonzalez-Zuniga. W3C. 26 November\r\n2024. W3C Candidate Recommendation. URL: https://www.w3.org/TR/device-posture/\r\n[dom]\r\nDOM Standard. Anne van Kesteren. WHATWG. Living Standard. URL: https://dom.spec.whatwg.org/\r\n[EPHEMERAL-FINGERPRINTING]\r\nEphemeral Fingerprinting On The Web. 
Asanka Herath. 1 September 2020. URL:\r\nhttps://github.com/asankah/ephemeral-fingerprinting\r\n[EVERCOOKIE]\r\nevercookie - virtually irrevocable persistent cookies. Samy Kamkar. September 2010. URL:\r\nhttps://samy.pl/evercookie/\r\n[FLASHCOOKIES]\r\nFlash Cookies and Privacy. Ashkan Soltani; Shannon Canty; Quentin Mayo; Lauren Thomas; Chris Jay\r\nHoofnagle. 10 August 2009. URL: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1446862\r\n[FLASHCOOKIES-2]\r\nFlash cookies and privacy II: Now with HTML5 and ETag respawning. Mika Ayenson; Dietrich Wambach;\r\nAshkan Soltani; Nathan Good; Chris Hoofnagle. URL:\r\nhttps://ptolemy.berkeley.edu/projects/truststc/education/reu/11/Posters/AyensonMWambachDpaper.pdf\r\n[generic-sensor]\r\nGeneric Sensor API. Rick Waldron. W3C. 22 February 2024. CRD. URL: https://www.w3.org/TR/generic-sensor/\r\n[HIDING-CROWD]\r\nHiding in the Crowd: an Analysis of the Effectiveness of Browser Fingerprinting at Large Scale. Alejandro\r\nGómez-Boix; Pierre Laperdrix; Benoit Baudry. WWW2018 - TheWebConf2018: 27th International World\r\nWide Web Conference. April 2018. URL: https://inria.hal.science/hal-01718234v2\r\n[HSTS-SUPERCOOKIE]\r\nHSTS Super Cookie. Ben Friedland. Jan 2016. URL: https://github.com/ben174/hsts-cookie\r\n[HTML]\r\nHTML Standard. Anne van Kesteren; Domenic Denicola; Dominic Farolino; Ian Hickson; Philip\r\nJägenstedt; Simon Pieters. WHATWG. Living Standard. URL: https://html.spec.whatwg.org/multipage/\r\n[LEAKING-BATTERY]\r\nThe leaking battery: A privacy analysis of the HTML5 Battery Status API. Łukasz Olejnik; Gunes Acar;\r\nClaude Castelluccia; Claudia Diaz. 2015. URL: https://eprint.iacr.org/2015/616.pdf\r\n[mediacapture-streams]\r\nMedia Capture and Streams. Cullen Jennings; Jan-Ivar Bruaroey; Henrik Boström; youenn fablet. W3C. 25\r\nSeptember 2025. CRD. 
URL: https://www.w3.org/TR/mediacapture-streams/\r\n[NDSS-FINGERPRINTING]\r\nHost Fingerprinting and Tracking on the Web: Privacy and Security Implications. Ting-Fang Yen; Yinglian\r\nXie; Fang Yu; Roger Peng Yu; Martin Abadi. In Proceedings of the Network and Distributed System\r\nSecurity Symposium (NDSS). February 2012. URL: https://www.microsoft.com/en-us/research/publication/host-fingerprinting-and-tracking-on-the-web-privacy-and-security-implications/\r\n[RENDERING-CONTENTION]\r\nRendering Contention Channel Made Practical in Web Browsers. Shujiang Wu; Jianjia Yu; Min Yang;\r\nYinzhi Cao. August 2022. URL: https://www.usenix.org/system/files/sec22summer_wu.pdf\r\n[RFC6265]\r\nHTTP State Management Mechanism. A. Barth. IETF. April 2011. Proposed Standard. URL:\r\nhttps://httpwg.org/specs/rfc6265.html\r\n[RFC6454]\r\nThe Web Origin Concept. A. Barth. IETF. December 2011. Proposed Standard. URL: https://www.rfc-editor.org/rfc/rfc6454\r\n[RFC6973]\r\nPrivacy Considerations for Internet Protocols. A. Cooper; H. Tschofenig; B. Aboba; J. Peterson; J. Morris;\r\nM. Hansen; R. Smith. IETF. July 2013. Informational. URL: https://www.rfc-editor.org/rfc/rfc6973\r\n[RFC9110]\r\nHTTP Semantics. R. Fielding, Ed.; M. Nottingham, Ed.; J. Reschke, Ed. IETF. June 2022. Internet\r\nStandard. URL: https://httpwg.org/specs/rfc9110.html\r\n[security-privacy-questionnaire]\r\nSelf-Review Questionnaire: Security and Privacy. Theresa O'Connor; Peter Snyder; Simone Onofri. W3C.\r\n18 April 2025. W3C Working Group Note. URL: https://www.w3.org/TR/security-privacy-questionnaire/\r\n[TAG-MINIMIZATION]\r\nData Minimization in Web APIs. Daniel Appelquist. W3C Technical Architecture Group. 12 September\r\n2011. URL: https://www.w3.org/2001/tag/doc/APIMinimization\r\n[TAG-UNSANCTIONED]\r\nUnsanctioned Web Tracking. Mark Nottingham. W3C Technical Architecture Group. 17 July 2015. 
URL:\r\nhttps://w3ctag.github.io/unsanctioned-tracking/\r\n[TOR-DESIGN]\r\nThe Design and Implementation of the Tor Browser. Mike Perry; Erinn Clark; Steven Murdoch; Georg\r\nKoppen. 15 June 2018. URL: https://spec.torproject.org/torbrowser-design\r\n[TRACKING-COMPLIANCE]\r\nTracking Compliance and Scope. Nick Doty; Heather West; Justin Brookman; Sean Harvey; Erica\r\nNewland. W3C. 22 January 2019. W3C Working Group Note. URL: https://www.w3.org/TR/tracking-compliance/\r\n[TRACKING-DNT]\r\nTracking Preference Expression (DNT). Roy Fielding; David Singer. W3C. 17 January 2019. W3C\r\nWorking Group Note. URL: https://www.w3.org/TR/tracking-dnt/\r\n[WPM-MILLION]\r\nOnline tracking: A 1-million-site measurement and analysis. Steven Englehardt; Arvind Narayanan. May\r\n2016. URL: https://webtransparency.cs.princeton.edu/webcensus/\r\nSource: https://www.w3.org/TR/fingerprinting-guidance/",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MITRE"
	],
	"origins": [
		"web"
	],
	"references": [
		"https://www.w3.org/TR/fingerprinting-guidance/"
	],
	"report_names": [
		"fingerprinting-guidance"
	],
	"threat_actors": [
		{
			"id": "9f101d9c-05ea-48b9-b6f1-168cd6d06d12",
			"created_at": "2023-01-06T13:46:39.396409Z",
			"updated_at": "2026-04-10T02:00:03.312816Z",
			"deleted_at": null,
			"main_name": "Earth Lusca",
			"aliases": [
				"CHROMIUM",
				"ControlX",
				"TAG-22",
				"BRONZE UNIVERSITY",
				"AQUATIC PANDA",
				"RedHotel",
				"Charcoal Typhoon",
				"Red Scylla",
				"Red Dev 10",
				"BountyGlad"
			],
			"source_name": "MISPGALAXY:Earth Lusca",
			"tools": [
				"RouterGod",
				"SprySOCKS",
				"ShadowPad",
				"POISONPLUG",
				"Barlaiy",
				"Spyder",
				"FunnySwitch"
			],
			"source_id": "MISPGALAXY",
			"reports": null
		},
		{
			"id": "18a7b52d-a1cd-43a3-8982-7324e3e676b7",
			"created_at": "2025-08-07T02:03:24.688416Z",
			"updated_at": "2026-04-10T02:00:03.734754Z",
			"deleted_at": null,
			"main_name": "BRONZE UNIVERSITY",
			"aliases": [
				"Aquatic Panda",
				"Aquatic Panda ",
				"CHROMIUM",
				"CHROMIUM ",
				"Charcoal Typhoon",
				"Charcoal Typhoon ",
				"Earth Lusca",
				"Earth Lusca ",
				"FISHMONGER ",
				"Red Dev 10",
				"Red Dev 10 ",
				"Red Scylla",
				"Red Scylla ",
				"RedHotel",
				"RedHotel ",
				"Tag-22",
				"Tag-22 "
			],
			"source_name": "Secureworks:BRONZE UNIVERSITY",
			"tools": [
				"Cobalt Strike",
				"Fishmaster",
				"FunnySwitch",
				"Spyder",
				"njRAT"
			],
			"source_id": "Secureworks",
			"reports": null
		},
		{
			"id": "6abcc917-035c-4e9b-a53f-eaee636749c3",
			"created_at": "2022-10-25T16:07:23.565337Z",
			"updated_at": "2026-04-10T02:00:04.668393Z",
			"deleted_at": null,
			"main_name": "Earth Lusca",
			"aliases": [
				"Bronze University",
				"Charcoal Typhoon",
				"Chromium",
				"G1006",
				"Red Dev 10",
				"Red Scylla"
			],
			"source_name": "ETDA:Earth Lusca",
			"tools": [
				"Agentemis",
				"AntSword",
				"BIOPASS",
				"BIOPASS RAT",
				"BadPotato",
				"Behinder",
				"BleDoor",
				"Cobalt Strike",
				"CobaltStrike",
				"Doraemon",
				"FRP",
				"Fast Reverse Proxy",
				"FunnySwitch",
				"HUC Port Banner Scanner",
				"KTLVdoor",
				"Mimikatz",
				"NBTscan",
				"POISONPLUG.SHADOW",
				"PipeMon",
				"RbDoor",
				"RibDoor",
				"RouterGod",
				"SAMRID",
				"ShadowPad Winnti",
				"SprySOCKS",
				"WinRAR",
				"Winnti",
				"XShellGhost",
				"cobeacon",
				"fscan",
				"lcx",
				"nbtscan"
			],
			"source_id": "ETDA",
			"reports": null
		},
		{
			"id": "d53593c3-2819-4af3-bf16-0c39edc64920",
			"created_at": "2022-10-27T08:27:13.212301Z",
			"updated_at": "2026-04-10T02:00:05.272802Z",
			"deleted_at": null,
			"main_name": "Earth Lusca",
			"aliases": [
				"Earth Lusca",
				"TAG-22",
				"Charcoal Typhoon",
				"CHROMIUM",
				"ControlX"
			],
			"source_name": "MITRE:Earth Lusca",
			"tools": [
				"Mimikatz",
				"PowerSploit",
				"Tasklist",
				"certutil",
				"Cobalt Strike",
				"Winnti for Linux",
				"Nltest",
				"NBTscan",
				"ShadowPad"
			],
			"source_id": "MITRE",
			"reports": null
		}
	],
	"ts_created_at": 1775434615,
	"ts_updated_at": 1775826744,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/24fc3d9354a611998d9b4498f1cd1651de5d8339.pdf",
		"text": "https://archive.orkl.eu/24fc3d9354a611998d9b4498f1cd1651de5d8339.txt",
		"img": "https://archive.orkl.eu/24fc3d9354a611998d9b4498f1cd1651de5d8339.jpg"
	}
}