Red teaming is used in airport security, intelligence agencies, cybersecurity, and the military. It is the practice of rigorously challenging policies, systems, plans, and assumptions by adopting an adversarial approach; a red team, then, is a group that puts this practice into action.
To encourage an outsider perspective, a company may contract an external party or assemble an internal group as a red team to test its defenses. The main aim of red teaming is to overcome cognitive errors such as confirmation bias and groupthink. Among many others, these two can undermine the critical thinking and decision-making abilities of organizations and individuals.
According to Psychology Today, groupthink is a phenomenon that occurs when a group of well-intentioned people makes irrational or non-optimal decisions spurred by the urge to conform or the belief that dissent is impossible.
The second cognitive error, confirmation bias, is the inclination to interpret new evidence as confirmation of one’s existing beliefs or theories.
Back to our main topic. Although a company can hire an external party as well, a red team usually consists of internal IT employees.
They devise tasks that simulate adversarial or malicious actions. In cybersecurity terms, the purpose of red teaming is to compromise or breach a company’s digital security. This is a valuable practice that enables firms and employees to discover their weaknesses without any real danger.
In order to understand your own weaknesses and how an adversary could exploit them, you first need to be able to understand your adversary’s way of thinking.
Whether it involves getting into the shoes of a terrorist, a cyber-criminal or a disgruntled employee, red teaming provides highly valuable insights into the attack process of someone with malicious intent.
Maxwell de Jong, Chief Operations Officer, SCS
Red Team vs. Blue Team
The opposite of a red team is a blue team. The latter consists of internal IT employees tasked with defending the targeted company from within. Often the company’s security team, the blue group must respond adequately to fend off any attacks. Hypothetically, the blue team’s purpose is to prevent a data breach.
In professional circles, this type of activity is known as red team-blue team simulation.
One trick that some companies use is contracting an external red team without assembling a blue team within the firm. This exposes all the weaknesses within the company and gives internal employees a few headaches.
Historically, red teams first emerged in the military. Their purpose there was to realistically assess the quality and strength of strategies by utilizing an outside perspective.
After they participate in the attack on and defense of a company, the red and blue teams respectively provide a list of conclusions to showcase the value of their perspectives.
The culmination of this practice is a register of actionable items for improving the company’s detection and response systems. These may include items such as server reconfigurations and firewall upgrades.
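In practice, such a register is often kept as structured data so findings can be tracked and triaged by severity. A minimal sketch in Python follows; the schema, field names, and sample findings are illustrative assumptions, not part of any standard reporting format:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One actionable item from a red team exercise (illustrative schema)."""
    asset: str        # affected system or process
    issue: str        # what the red team exploited
    severity: str     # "critical", "high", "medium", or "low"
    remediation: str  # concrete follow-up action

# Hypothetical entries mirroring the examples in the text above.
register = [
    Finding("web-server-01", "outdated TLS configuration", "high",
            "harden server configuration and disable legacy protocols"),
    Finding("perimeter-firewall", "permissive inbound rules", "critical",
            "upgrade firewall firmware and tighten the rule set"),
]

# Triage: handle the most severe findings first.
order = {"critical": 0, "high": 1, "medium": 2, "low": 3}
for f in sorted(register, key=lambda f: order[f.severity]):
    print(f"[{f.severity.upper()}] {f.asset}: {f.remediation}")
```

Keeping findings in a structure like this, rather than free-form prose, makes it easier for decision-makers to verify that every item was actually remediated.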
Alongside ethical hacking and penetration testing, red teaming has proven to be a valuable resource for companies and employees worldwide.
The Nitty-Gritty of a Red Team
Apart from testing the existing protocols and systems, the role of a red team is also to assess the people who manage them. With a specific objective in mind, red teaming is a goal-oriented security testing method that works wonders.
Say the goal of a red team is to evaluate a business-critical application or a sensitive server. The red team’s achievement will be judged by how well it can fulfill this objective. If the red team manages to attain its goal, then the organization in question is not adequately able to prevent such a strike.
One of the main differences between penetration testing and red teaming is that the latter is conducted without prior notice.
The work of a red team is anything but random. It follows a deliberate process to extract all the desired information. To keep the procedure controlled and measurable, a red team must complete an assessment before the simulation can start.
An evaluation of this kind aims to employ the mindset and techniques of real cybercriminals to identify entry points and vulnerabilities. If and when a red team discovers a weakness in physical or digital assets, or in operational or technical processes, the red teaming procedure will prioritize its exploitation.
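The entry-point discovery step can be sketched with a basic TCP port probe. This is a deliberately minimal illustration rather than a real reconnaissance tool; the host and port list are placeholders, and probes like this should only ever be run against systems you are explicitly authorized to test:

```python
import socket

def probe_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection attempt succeeds.
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Probe a handful of common service ports on the local machine only.
    print(probe_ports("127.0.0.1", [22, 80, 443, 3306, 8080]))
```

A real assessment would go far beyond open ports, covering service versions, misconfigurations, and human factors, but the principle is the same: enumerate possible entry points before choosing which one to exploit.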
Red Teaming and Airport Security
With applications in many industries, red teams are also helpful when evaluating airport security. In terminals and other areas of airports, all of which are security-sensitive, red teaming is remarkably important.
For instance, airport security screeners sometimes fail security tests. The reason? Airports spend too little time on screening because they want to move passengers through as quickly as possible. In one example, agents from the Department of Homeland Security posed as travelers attempting to bring explosives, drugs, and weapons through airport security.
Although these tests take place regularly, public authorities don’t disclose their results, since criminals could use them to target specific airports.
Do You Need a Red Team?
The short answer is: yes. If you are running a firm or working for one with valuable assets, you need red teaming exercises.
Any company striving to become resistant to outside attacks requires red teaming from time to time. There are no strict timeframes, but the more frequent, the better; the rule of thumb is to conduct such exercises at least once per year.
A red team exercise is beneficial for all the reasons we mentioned above, while the advantages also include:
- Removing internal bias,
- Focusing on critical assets,
- Using bespoke toolsets,
- Conducting cost-effective testing, and
- Educating the staff on threat actor activities.
Red teaming is a unique operational art that is often underutilized or not utilized at all. Micah Zenko wrote in his book, Red Team: How to Succeed By Thinking Like The Enemy, that “an astonishing number of senior leaders are systemically incapable of identifying their organization’s most glaring and dangerous shortcomings.” Zenko suggests that it is our own cognitive bias that holds us back from thinking three-dimensionally.
Red teaming is an excellent way to get past those cognitive constraints. Its most basic form encompasses vulnerability assessment through penetration testing. However, it should also include alternative risk scenarios driven by an intelligence function. At its fullest extent, red teaming aids security operations and provides valuable insights to other lines of business within an organization, such as marketing, IT, or business strategy. Getting outside ourselves by playing devil’s advocate yields profound benefits and insights, and proper red teaming supports this.
As soon as the results from a red team’s activity are known, they should immediately land on the desk of a firm’s management. Whether the outcome is positive or negative, it necessitates the attention of all decision-makers. These include the CEO, CTO, and financial department, among others.
If relevant stakeholders don’t have access to or don’t show any interest in the report resulting from a red teaming exercise, the company may be in trouble. A forward-looking, dedicated management will always look to improve the organization’s structure and procedures. And the best way to do so is to adopt the mindset of occasional red teaming.
Like any other test, it is worthless unless those who take it act to address all the shortcomings. That’s the difference between running a company into the ground and making it bulletproof in the face of malicious outside attacks.