Welcome to part 2 of our blog series discussing MITRE ATT&CK techniques for industrial control systems (ICS). It’s our hope that this series will help security teams leverage MITRE’s matrix to complement their defensive strategy and controls. If you missed part 1, you can find it here.
We’ll continue our journey by exploring one of the MITRE ATT&CK techniques known as Change Program State. You can find the technique listed under the matrix tactic of Execution, as shown below.
So, what is this technique called T875: Change Program State? MITRE describes this technique as follows:
“Adversaries may attempt to change the state of the current program on a control device. Program state changes may be used to allow for another program to take over control or be loaded onto the device.”
This technique can be likened to malware infections that often affect normal enterprise computers. A malicious program is installed, or an existing program is modified, to introduce new behavior on that system. To infect a device, there needs to be a method of transferring the malicious program and a way to execute it. Additionally, to achieve a specific goal, the malicious functionality must be compatible with the host application or operating system. MITRE describes this process, in the realm of Industrial Control Systems (ICS), as Change Program State.
The point of this technique is to modify the ICS device’s behavior to achieve an adversary’s goal. The goals can include things such as data manipulation, device functionality manipulation, masking adversarial activities, data theft, or denial of service, to name a few.
MITRE provides three examples of this type of attack. You are likely to have heard of at least one of the following:
What are some ways, then, that we can secure ICS devices from this ATT&CK technique and/or detect the attack after it has occurred? Below are three suggestions.
In part 1 of our blog series, we spoke about the importance of keeping devices shielded from direct internet access. This is a great starting point, but here are a few other ways you can limit access to your ICS environment and prevent an adversary from executing the Change Program State attack technique.
First, limit physical access to your ICS devices. Consider adding additional physical controls within your operating and controller environment such as biometrically protected rooms, cabinets, closets, and/or cages. Monitor all physical access with video surveillance.
Second, limit access to each device within each network segment. Remove unnecessary open ports, services, and other unused features. Make every effort to lock down access to specific users, and make sure to change all default passwords. Consider using network access control to block network access by unauthorized devices. Enable audit logging on supported devices.
Finally, now that you've locked the doors, don't forget to watch those doors. Passively monitor the network with tools that will tell you when an unexpected connection occurs or, even worse, an actual attack. Behavioral anomaly detection engines can be used to alert on unexpected connections, and an intrusion prevention system (IPS) can be leveraged to match against known attack signatures. Audit user information to be sure that the proper user(s) are accessing these devices.
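The idea of alerting on unexpected connections can be sketched simply: maintain a baseline of known-good (source, destination, port) triples and flag anything outside it. The device names and the Modbus/TCP port below are illustrative assumptions, and a real detection engine would learn its baseline from historical traffic rather than hard-code it.

```python
# Minimal sketch of a behavioral allowlist for ICS network connections.
# The baseline entries here are hypothetical; in practice they would be
# learned by passively observing traffic over a training period.

BASELINE = {
    ("hmi-01", "plc-03", 502),      # HMI polling a PLC over Modbus/TCP
    ("historian", "plc-03", 502),   # historian collecting process data
}

def check_connection(source: str, destination: str, port: int) -> bool:
    """Return True if the connection matches the learned baseline."""
    return (source, destination, port) in BASELINE

def alert_on_unexpected(connections):
    """Yield an alert for every observed connection outside the baseline."""
    for src, dst, port in connections:
        if not check_connection(src, dst, port):
            yield f"ALERT: unexpected connection {src} -> {dst}:{port}"
```

Because ICS traffic patterns are far more stable than general enterprise traffic, even a simple allowlist like this produces few false positives once the baseline is established.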
As an example, Armis contains a threat detection engine that alerts on unexpected conditions. Such an alert looks like this:
The first element of many attacks, one that typically occurs before any of the MITRE ATT&CK techniques listed in the matrix, is reconnaissance. Simply stated, the attacker needs to know what devices you have and which they can reach. Once the adversary establishes a foothold, they will often probe further to map out your network. As a defender, you need to be sure that you are listening for both types of reconnaissance.
Remember that there may be unintended bridges leading into your ICS environment as network configurations can change (intentionally or not). Be sure that you are monitoring your entire network for reconnaissance efforts, not just your ICS devices.
To detect reconnaissance, you can use an IPS deployed on the ICS network. Other kinds of passive, traffic-monitoring systems can be used to detect signs that a device has already been compromised. This could include such things as identifying new users associated with a device, scans emanating from that device, or behavior that is abnormal for that device.
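One common signature of reconnaissance is a single host touching many distinct ports in a short window. A minimal sketch of that detection logic, assuming time-ordered connection events, might look like this (the threshold and window values are illustrative, not recommendations):

```python
from collections import defaultdict

def detect_port_scans(events, port_threshold=20, window_seconds=60):
    """Flag hosts that touch many distinct destination ports within a
    sliding time window -- a common signature of scanning.

    events: iterable of (timestamp, source_host, dest_port) tuples,
    assumed to be sorted by timestamp.
    """
    seen = defaultdict(list)   # host -> [(timestamp, port), ...]
    flagged = set()
    for ts, host, port in events:
        seen[host].append((ts, port))
        # Keep only events inside the sliding window.
        seen[host] = [(t, p) for t, p in seen[host] if ts - t <= window_seconds]
        distinct_ports = {p for _, p in seen[host]}
        if len(distinct_ports) >= port_threshold:
            flagged.add(host)
    return flagged
```

Normal ICS devices talk to a handful of well-known ports, so a burst of distinct-port activity from a PLC or HMI is a strong signal that the device is being used to map the network.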
Remember too that this reconnaissance can be performed programmatically by malware, such as that employed by the PLC Blaster worm, or LogicLocker proof of concept. Such malware can sometimes run on IoT devices that can’t accommodate security agents, so their behavior can go unnoticed unless you are monitoring network traffic.
As it stands today, monitoring the integrity of PLC programs and detecting whether the program state has changed is difficult to do. In a traditional enterprise environment, you can deploy security agents to monitor changes to processes, memory, and files. You can’t do this with PLCs due to resource limitations and limited or specialized functionality. Most built-in “security protections” on PLCs are also largely ineffective and easily bypassed.
Attackers employing MITRE ATT&CK techniques such as T875: Change Program State may try to alter data sent from a PLC to a monitoring device. For example, manipulated data could cause the PLC to report incorrect coolant levels to operators. For instances where this manipulation is attempted using a Man-in-the-Middle (MitM) attack, protect these communication channels with encrypted connections. Further, limit communication exclusively between authorized devices using ACLs or other segregation methods.
Attackers may also try to alter a PLC program, replace legitimate programs, or add new programs to change its state. How do you detect that? One way is to keep source-of-truth hashes of all running programs for comparison, but this requires near constant vigilance and can be cumbersome to manage. A better method is to monitor commands sent to your PLCs using a network-based monitoring tool that will indicate whenever a modification has occurred on that device.
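The source-of-truth hashing approach mentioned above can be sketched in a few lines: record a known-good digest of each program image at commissioning time, then compare any freshly read image against it. The program name and bytes below are placeholders, not a real PLC interface.

```python
import hashlib

def sha256(data: bytes) -> str:
    """Hex digest of a program image."""
    return hashlib.sha256(data).hexdigest()

# Known-good digests would be captured when each program is commissioned.
# The image bytes and name here are purely illustrative.
GOLDEN_IMAGE = b"...original ladder-logic program bytes..."
SOURCE_OF_TRUTH = {"plc-03/main-routine": sha256(GOLDEN_IMAGE)}

def verify_program(name: str, program_bytes: bytes) -> bool:
    """Compare a freshly read program image against its recorded hash."""
    expected = SOURCE_OF_TRUTH.get(name)
    return expected is not None and sha256(program_bytes) == expected
```

The hard part is not the comparison but the operational discipline: every legitimate program change must update the source-of-truth store, which is exactly the vigilance burden the paragraph above describes.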
As an example, here is how Armis does this. Some of the relevant commands Armis monitors include:
A policy to detect unplanned program changes could simply look for unexpected PLC Stop and PLC Start commands. (In order to upload a program to a PLC, you need to send both a PLC Stop and a PLC Start command.) Likewise, you can monitor for other unplanned PLC Firmware Change commands, PLC Configuration Change commands, PLC Errors, and mode changes that may be indicative of unauthorized activity.
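The policy just described, flag a PLC Stop followed by a PLC Start outside a planned maintenance window, can be expressed as a short rule over a time-ordered command log. The command names and record shape below are illustrative assumptions, not a real product API.

```python
# Hypothetical maintenance windows as (start_ts, end_ts) epoch pairs.
# Empty means no changes are currently planned.
MAINTENANCE_WINDOWS = []

def in_maintenance(ts) -> bool:
    return any(start <= ts <= end for start, end in MAINTENANCE_WINDOWS)

def find_unplanned_changes(commands):
    """Scan time-ordered (timestamp, device, command) records and return
    (device, stop_ts, start_ts) for every unplanned Stop -> Start pair,
    which is the sequence required to upload a new program."""
    pending_stop = {}   # device -> timestamp of last unplanned PLC Stop
    flagged = []
    for ts, device, command in commands:
        if command == "PLC Stop" and not in_maintenance(ts):
            pending_stop[device] = ts
        elif command == "PLC Start" and device in pending_stop:
            flagged.append((device, pending_stop.pop(device), ts))
    return flagged
```

The same pattern extends naturally to the other commands mentioned above: firmware changes, configuration changes, and mode changes are each just another rule over the same command stream.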
Below you will see how Armis displays a start/stop event. This can be further refined into an alert that fires during off-hours, for example. Likewise, you may choose to produce a report that can then be reviewed for these types of activities.
Lastly, you will want to monitor ICS device behavior, looking for anomalies. That way, if adversaries somehow do penetrate your defense and modify a PLC program, you can detect behavior that is out of the norm for that device.
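At its simplest, anomaly detection means comparing a device's current readings against its historical baseline. The z-score check below is a deliberately minimal sketch of that idea; a real behavioral engine would model many signals and their relationships, not a single sensor value.

```python
import statistics

def is_anomalous(value: float, baseline: list, z_threshold: float = 3.0) -> bool:
    """Flag a reading that deviates sharply from a device's historical
    baseline, using a simple z-score test. The threshold of 3 standard
    deviations is an illustrative default, not a recommendation."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    if stdev == 0:
        # A perfectly flat baseline: any change at all is anomalous.
        return value != mean
    return abs(value - mean) / stdev > z_threshold
```

Because ICS devices perform narrow, repetitive tasks, their baselines are tight, which is exactly why deviations stand out so quickly when you are watching for them.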
Below is an example of what an abnormal behavior policy may look like within Armis:
Due to the specific nature of ICS devices, deviations in their behavior can be quickly detected, provided you are watching carefully and comparing their present behavior to a known baseline. At that point, however, time will be of the essence, and defensive teams will have to scramble to avoid expensive downtime or, worse, dangerous effects. Give yourself the best possible defense by locking down your devices, locking down their communication channels, and looking for clues proactively ahead of any adversarial success.
If you want to learn more about how Armis helps to discover MITRE ATT&CK techniques in ICS environments, check out our white paper.
Our next blog in this series will cover MITRE ATT&CK technique T839 Module Firmware.